Spring-back factor applied for V-bending die design
To fabricate various parts with complex shapes in the sheet-metal bending industry, the V-die bending process is commonly applied. To achieve precisely bent sheet-metal parts, a properly designed V-bending die is needed, one that compensates for the spring-back characteristic to obtain the required bend angle while controlling and maintaining the required bend radius. The spring-back and the bend radius can basically be predicted and calculated using the spring-back factor. In the present research, therefore, the accuracy of the spring-back factor for bend angle prediction and bend radius calculation in the V-die bending process was examined based on the finite element method (FEM) and laboratory experiments. The two existing spring-back factors, namely the conventional and adjusted spring-back factors, were investigated. The results showed that, compared with experimental results, the accuracy of the bend angle prediction and the bend radius calculation obtained using the adjusted spring-back factor was better than that obtained using the conventional spring-back factor. These results were clarified based on the stress distribution analysis of the sheet-metal bent parts. Therefore, the adjusted spring-back factor was recommended over the conventional spring-back factor for V-bending die design applications, to achieve better accuracy in bend angle prediction and bend radius calculation.
Introduction
A common and vital bending process used to deform sheet metal into curved shapes, employed in many industrial fields such as the automotive, aerospace, electronics, and housing-utensil industries, is the V-die bending process. With this process, a wide range of parts with complex shapes can be fabricated with an economical setup. Most past research on the V-die bending process has therefore focused on different ways of developing the process design as well as on achieving precision bent parts via the finite element method (FEM) and experimental analysis (Xie et al., 2015), (Thipprakmas, 2011), (Phanitwong and Thipprakmas, 2014), (Thipprakmas, 2013), (Xiong et al., 2015), (Ahmadi et al., 2017), (Thipprakmas, 2010), (Wang et al., 2013), (Leu, 2013), (Abea et al., 2017), (Hakan and Mustafa, 2013). For example, (Xie et al., 2015) applied the direct-current pulse technique to the V-bending of AZ31B magnesium alloy sheet and investigated the influence of direct-current pulses on the spring-back characteristics during the bending phase. The mechanism of coined-bead application used in the V-die bending process was also investigated: using the FEM simulation technique, the mechanism of coined-bead application was elucidated (Thipprakmas, 2011), and the effects of coined-bead size on the spring-back characteristics were revealed as well (Phanitwong and Thipprakmas, 2014). The sided coined-bead technique was proposed to obtain a precision bend radius in the V-die bending process (Thipprakmas, 2013). The effects of twinning and detwinning on the spring-back characteristics and the shift of the neutral layer in AZ31 magnesium alloy sheets in the V-die bending process have also been studied. (Xiong et al., 2015) studied geometric issues in the V-bending electromagnetic forming process of 2024-T3 aluminum alloy. (Ahmadi et al., 2017) investigated the deformation-induced martensitic transformation in V-bent anisotropic stainless steel 304L sheets using both experimental and numerical techniques.
However, in terms of bend angle, the spring-back characteristic is still the principal forming problem in this process, causing a pronounced decrease in the quality of the bent parts. In addition to the spring-back characteristic, the bend radius is also strictly controlled, especially in precision bent parts. To compensate for the spring-back characteristic and achieve the required bend radius through a proper V-bending die design for the fabrication of precision bent parts, the over-bending technique is commonly applied. This technique compensates for the spring-back characteristic by setting a smaller bend angle than that required. The amount of spring-back should therefore be accurately predicted for a proper V-bending die design, and the tool radius should be properly designed so that the required bend radius is achieved after the tool is removed and the spring-back characteristic is compensated. Basically, by using the spring-back factor, the spring-back can be predicted and the bend radius can be calculated. However, as the key factor for spring-back prediction and bend radius calculation, the spring-back factor has not been thoroughly investigated, especially for the V-die bending process; only a few studies on this issue have been conducted (Ling et al., 2005), (Phanitwong and Thipprakmas, 2016). (Ling et al., 2005) reported results that corresponded well with the conventional spring-back factor (Lang, 1985), in that the spring-back factor decreased as the ratios of the die radius to the part thickness and the part radius to the part thickness increased. (Phanitwong and Thipprakmas, 2016) developed a new spring-back factor, named the adjusted spring-back factor, for the wiping-die or L-die bending process. Its use offered greater accuracy in spring-back prediction than that achieved using the conventional spring-back factor.
In terms of the V-die bending process, the accuracy of the conventional spring-back factor in V-bending die design has not yet been examined, nor has that of the adjusted spring-back factor. In the present research, the conventional and adjusted spring-back factors were therefore investigated to examine the accuracy of spring-back prediction and bend radius calculation in the V-die bending process using the FEM and laboratory experiments. The results showed better accuracy of spring-back prediction and bend radius calculation using the adjusted spring-back factor as compared to the conventional spring-back factor. On the basis of stress distribution analysis, the causes of these results were elucidated by the FEM simulation results. It was found that not only was bending stress generated on the bending allowance zone, but reversed bending stress was also generated there. In addition, bending and reversed bending stresses were also generated on the legs of the workpiece. Therefore, to design proper V-bending dies for precision bent parts, the spring-back factor should be strictly considered in spring-back prediction and bend radius calculation. It was thus concluded in the present research that the adjusted spring-back factor is more suitable than the conventional spring-back factor for V-bending die design applications, achieving good accuracy in spring-back prediction and bend radius calculation.
Spring-back and spring-back factor
On the basis of plastic deformation theory, in the bending process the material is generally divided into elastic and plastic zones during the bending phase. After the bending load is removed, the elastic portion tries to return the material to its initial shape, whereas the plastic portion tries to retain the material in the deformed shape. This results in a partial recovery toward the initial shape, called "spring-back". Therefore, after the bending operation, the bent part opens out slightly, so the final bend angle is greater than the initially formed bend angle. The bend radius recovers in the same way as the angle; that is, the final radius is also greater than the initially formed radius. These effects are the main barrier to controlling the dimensions of bent parts to meet their requirements. The spring-back factor has long been applied for spring-back prediction and bend radius calculation (Lang, 1985), (Sculer, 1998), (Ling et al., 2005), (Phanitwong and Thipprakmas, 2016). In the V-die bending process, the spring-back factor is commonly defined as the ratio of the bend angle of the tool to the bend angle of the workpiece after unloading, as shown in Equation (1). During the V-die bending phase, the workpiece is bent and an arc is formed on the bending allowance zone (Lang, 1985), (Sculer, 1998). The length of the arc depends directly on the radius of the arc and the central angle of the arc, as shown in Equation (2); in the V-die bending process, it therefore depends on the bend angle and bend radius. On the basis of these relationships between the spring-back factor and the arc length, the spring-back factor can be derived for bend radius calculation as shown in Equation (3).
Spring-back factor = θ_t / θ_w (1)

where θ_t is the bend angle of the tool and θ_w is the bend angle of the workpiece after unloading.

Arc length = 2πR(C/360) (2)

where C is the central angle of the arc in degrees and R is the radius of the arc.

Spring-back factor = (R_t + 0.5t) / (R_w + 0.5t) (3)

where R_t is the tool radius, R_w is the bend radius of the workpiece after unloading, and t is the material thickness.
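For concreteness, the three equations above can be exercised numerically. The following Python sketch is ours, not part of the original paper; the function names are our own, and the sample numbers are the 120° bend with an 8 mm tool radius and 1 mm sheet thickness discussed later in the Results:

```python
import math

def predicted_bend_angle(theta_tool_deg, k):
    # Equation (1): K = theta_t / theta_w, so the unloaded
    # workpiece angle is the tool angle divided by K.
    return theta_tool_deg / k

def arc_length(radius_mm, central_angle_deg):
    # Equation (2): arc length = 2*pi*R*(C/360).
    return 2.0 * math.pi * radius_mm * (central_angle_deg / 360.0)

def predicted_bend_radius(r_tool_mm, t_mm, k):
    # Equation (3) solved for the workpiece radius R_w:
    # K = (R_t + 0.5t) / (R_w + 0.5t)  =>  R_w = (R_t + 0.5t)/K - 0.5t
    return (r_tool_mm + 0.5 * t_mm) / k - 0.5 * t_mm

# 120° tool angle, 8 mm tool radius, 1 mm sheet, and the conventional
# spring-back factor K = 0.942 reported later in the text:
print(predicted_bend_angle(120.0, 0.942))      # ~127.39 degrees
print(predicted_bend_radius(8.0, 1.0, 0.942))  # ~8.523 mm
```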
In recent years, however, the spring-back factor has been investigated and developed to achieve greater accuracy in spring-back prediction and bend radius calculation (Ling et al., 2005), (Phanitwong and Thipprakmas, 2016). The new finding revealed in (Phanitwong and Thipprakmas, 2016) is that, in the L-die bending process, not only are pure bending characteristics generated in the bending allowance zone, but reversed bending characteristics are also generated in the unclamped leg. These generated stress characteristics are considered in the adjusted spring-back factor. In addition, it was also revealed that the spring-back factor depends on the bend angle; specifically, the spring-back factor increased as the bend angle increased. (Phanitwong and Thipprakmas, 2016) confirmed that, to achieve the required precise bend angle, the use of the adjusted spring-back factor, which considers the effects of the bend angle on the bending characteristics in the bending allowance zone and the reversed bending characteristics in the unclamped leg of the workpiece, is vital and strongly recommended.
The FEM simulation and experimental procedures
The FEM simulation procedures
In the present research, to compare the stress distributions in the L- and V-die bending processes, models of both processes were investigated, as shown in Figures 1(a) and (b), respectively. Bend angles of 90° and 120° were investigated. An L-bending die model with die radii (R_d) of 2 mm and 8 mm and a V-bending die model with punch radii (R_p) of 2 mm, 6 mm, and 8 mm were used. The two-dimensional, implicit, quasi-static finite element method of a commercial analytical code, DEFORM-2D, was used for the FEM simulation. The solution algorithm applied in these FEM models was based on Newton-Raphson iteration. In addition, an adaptive remeshing technique, applied every three steps, was used to prevent divergence of the calculation due to excessive deformation of the elements during the bending phase. Two-dimensional plane strain, with a workpiece length of 80 mm, was assumed. As listed in Table 1, the punch and die were set as rigid. The workpiece material was set as elasto-plastic, and its plastic properties were assumed to be isotropic and described by the von Mises yield function. Approximately 3500 rectangular elements were generated. Aluminum A1050-H14 (JIS) was used as the workpiece material and was described by an elasto-plastic, power-exponent, isotropic hardening model. As in past research (Thipprakmas and Phanitwong, 2012), (Thipprakmas and Boochakul, 2015), (Phanitwong and Thipprakmas, 2016), its mechanical properties were taken from tensile testing data and its constitutive equation was determined from the stress-strain curve. The strength coefficient and the strain hardening exponent were 114.18 MPa and 0.095, respectively. The other material properties are given in Table 1, where E, ν, and σ_u denote the Young's modulus, the Poisson's ratio, and the ultimate tensile stress, respectively. The other process parameter conditions are also given in Table 1. On the basis of a contact surface model defined by a Coulomb friction law, as in past research (Thipprakmas and Boochakul, 2015), (Phanitwong and Thipprakmas, 2016), a friction coefficient (μ) of 0.10 was applied. Deformation in V-bending is usually divided into three stages: 1) air bending, 2) bottoming, and 3) coining. The spring-back phenomenon in V-bending depends on the stage reached just before unloading starts. In the present research, by observing the bending load and the compressive stress distribution generated on the bent parts, unloading was started immediately after a steep increase formed in both the bending load and the generated compressive stress distribution on the bent parts.
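As a side note, the power-exponent hardening model quoted above can be written out explicitly. The sketch below assumes the common Hollomon form σ = Kε^n (the paper reports the strength coefficient and hardening exponent but not the explicit expression), so it is illustrative rather than the exact constitutive equation used in DEFORM-2D:

```python
def flow_stress_mpa(plastic_strain, k=114.18, n=0.095):
    # Assumed Hollomon-type hardening: sigma = K * strain**n, with the
    # reported K = 114.18 MPa and n = 0.095 for A1050-H14 aluminum.
    return k * plastic_strain ** n

for strain in (0.01, 0.05, 0.10, 0.20):
    print(f"effective strain {strain:.2f} -> {flow_stress_mpa(strain):5.1f} MPa")
```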
The experimental procedures
Laboratory experiments were performed to validate the FEM simulation results. As in the experiments of past research studies (Thipprakmas, 2015), (Phanitwong and Thipprakmas, 2014), (Phanitwong and Thipprakmas, 2016), the press machine, comprising a 5-ton universal testing machine (Lloyd Instruments Ltd), and the L- and V-bending dies are shown in Figure 2. A workpiece width of 30 mm was applied, giving a ratio of workpiece width to thickness of 30 and thereby ensuring that the bending deformation in the experiments was primarily under plane strain conditions. Again, to clearly determine the unloading stage, the bending load was carefully observed during the bending phase, and unloading was started immediately after a steep increase in the bending load formed. To calculate the amount of spring-back and the bend radius, the bend angle and bend radius after unloading were measured. Five samples from each bending condition were inspected, and the bend angles and bend radii were measured using a profile projector (Mitutoyo Model PJ-A3000). From these measurements, the average bend angle and the average bend radius with their standard deviations (SD) were reported. These experimentally determined average values were compared with those determined by the FEM simulations.
Results and discussion
The use of FEM simulation and its validation
In the present research, the fundamentals of the L- and V-die bending processes, the characterization of the stress distributions, and the prediction of the obtained bend angle and bend radius were investigated and clarified by FEM simulation. A validation of the FEM simulation results was therefore performed. As shown in Figures 3 and 4 for the L- and V-die bending processes, respectively, the bent parts obtained in the laboratory experiments were compared with the FEM simulation results, and good agreement was found. Based on the five bent parts, the average measured bend angle and bend radius with their standard deviations were reported. The FEM simulation results showed that the predicted bend angle corresponded well with the experiments. Specifically, in the case of the L-die bending process, the FEM simulations predicted bend angles of approximately 92.31° and 123.56° for the 90° and 120° bend angles, respectively. The experimental results behaved in the same manner, with average bend angles of approximately 91.89° and 123.20° for the 90° and 120° bend angles, respectively. In terms of spring-back characteristics, these FEM simulation and experimental results generally agreed with those reported in the literature (Phanitwong and Thipprakmas, 2016), in that the spring-back increased as the bend angle increased. The errors in the analyzed bend angle, compared with the experimental results, were less than approximately 1%. Next, in terms of bend radius, the FEM simulation results showed that the predicted bend radii were larger than the tool radius, which agrees with bending theory (Lang, 1985), (Sculer, 1998). They were approximately 2.103 mm and 8.304 mm in the cases of the 90° and 120° bend angles, respectively. These FEM simulation results again showed good agreement with the experimental results, which gave average bend radii of approximately 2.098 mm and 8.290 mm in the cases of the 90° and 120° bend angles, respectively. The errors in the predicted bend radius were less than approximately 1% compared with the experimental results.
Next, in the case of the V-die bending process, the FEM simulation results also showed good agreement with those obtained in the experiments, as shown in Figure 4. The FEM simulation results showed that a spring-back characteristic was generated in the case of the 90° bend angle and a spring-go (negative spring-back) characteristic was generated in the case of the 120° bend angle. This spring-go characteristic also occurred in the experimental results. Specifically, for the 90° bend angle, a bend angle of approximately 90.09° was predicted by the FEM simulation and an average bend angle of approximately 90.21° was obtained in the experiments. For the 120° bend angle, a bend angle of approximately 119.46° was predicted by the FEM simulation and an average bend angle of approximately 118.98° was obtained in the experiments. The errors in the analyzed bend angles were less than approximately 1% compared with the experimental values. In terms of the bend radius, the FEM simulation results again agreed well with bending theory (Lang, 1985), (Sculer, 1998): the predicted bend radius was larger than the tool radius in the case of the 90° bend angle, where the spring-back characteristic was generated, but smaller than the tool radius in the case of the 120° bend angle, where the spring-go characteristic was generated. The predicted radii were approximately 2.035 mm and 7.895 mm in the cases of the 90° and 120° bend angles, respectively. These FEM simulation results again showed good agreement with the experiments, which gave average bend radii of approximately 2.063 mm and 7.845 mm for the 90° and 120° bend angles, respectively. The errors in the predicted bend radius were less than approximately 1% compared with the experimental results.
Bend angle and bend radius predictions using the spring-back factor
In the present research, the spring-back factor was applied to predict the bend angle. Based on bending theory (Sculer, 1998), the spring-back factor is the ratio of the bend angle of the tool to the bend angle of the workpiece after unloading, as shown in Equation (1). Based on the conventional and adjusted spring-back factors, the predicted bend angles could be calculated; they are listed in Table 2. To examine the extendability of the current results when the material thickness and bend radius are changed, a material thickness of 3 mm was also investigated, and the results are also reported in Table 2. In the case of a 90° bend angle, 6-mm bend radius, and 3-mm material thickness, the conventional spring-back factor (Lang, 1985) was 0.975 and the calculated bend angle was 92.30°. With the adjusted spring-back factor (Phanitwong and Thipprakmas, 2016), the factor was 0.976 for the 90° bend angle, giving a calculated bend angle of 92.21°. Compared with the bend angles obtained from the experiments, as listed in Table 2, the errors of the conventional and adjusted spring-back factors for the 90° bend angle were 4.38% and 4.28%, respectively. Next, in the case of a 120° bend angle, 8-mm bend radius, and 1-mm material thickness, the conventional spring-back factor (Lang, 1985) was 0.942 and the calculated bend angle was 127.39°. With the adjusted spring-back factor (Phanitwong and Thipprakmas, 2016), the factor was 0.971 for the 120° bend angle, giving a calculated bend angle of 123.57°. Compared with the bend angles obtained from the experiments, as listed in Table 2, the errors of the conventional and adjusted spring-back factors for the 120° bend angle were 6.60% and 3.71%, respectively. These results again confirmed that the spring-back prediction is more accurate when the adjusted spring-back factor is used, and they also confirmed the extendability of the current results to workpieces of various material thicknesses. As these results clearly illustrate, the adjusted spring-back factor offered greater accuracy in bend angle prediction than the conventional spring-back factor. It was also observed that the error decreased as the bend angle decreased. These results confirmed that the adjusted spring-back factor is more suitable than the conventional spring-back factor for bend angle prediction in the V-die bending process.
Next, in terms of the bend radius, the spring-back factor could also be applied to calculate the bend radius. Based on bending theory (Sculer, 1998), the equation for the bend radius calculation is Equation (3). Again, based on the conventional and adjusted spring-back factors, the bend radii were calculated as listed in Table 3. To examine the extendability of the current results when the material thickness and bend radius are changed, a material thickness of 3 mm was also investigated, and the results are also reported in Table 3. In the case of a 90° bend angle, 6-mm bend radius, and 3-mm material thickness, the conventional spring-back factor (Lang, 1985) was 0.975 and the calculated bend radius was 6.192 mm. With the adjusted spring-back factor (Phanitwong and Thipprakmas, 2016), the factor was 0.976 for the 90° bend angle, giving a calculated bend radius of 6.184 mm. Compared with the bend radii obtained from the experiments, as listed in Table 3, the errors of the conventional and adjusted spring-back factors for the 90° bend angle were 7.87% and 7.73%, respectively. Next, in the case of a 120° bend angle, 8-mm bend radius, and 1-mm material thickness, the conventional spring-back factor (Lang, 1985) was 0.942 and the calculated bend radius was 8.523 mm. With the adjusted spring-back factor (Phanitwong and Thipprakmas, 2016), the factor was 0.971 for the 120° bend angle, giving a calculated bend radius of 8.254 mm. Compared with the bend radii obtained from the experiments, as listed in Table 3, the errors of the conventional and adjusted spring-back factors for the 120° bend angle were 7.96% and 5.00%, respectively. These results again confirmed that the bend radius calculation is more accurate when the adjusted spring-back factor is used, and they also confirmed the extendability of the current results to workpieces of various material thicknesses. In addition to the bend angle results, they again illustrated that the adjusted spring-back factor offered greater accuracy in the bend radius calculation than the conventional spring-back factor, confirming that the adjusted spring-back factor is more suitable for bend radius calculation in the V-die bending process.
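To make the two worked cases above reproducible, they can be run through Equations (1) and (3) directly. This self-contained Python sketch is ours; the experimental values needed for the error percentages come from Tables 2 and 3, which are not reproduced here, and the printed predictions match the values quoted above to within rounding of the reported factors:

```python
def predicted_bend_angle(theta_tool_deg, k):
    return theta_tool_deg / k                          # Equation (1)

def predicted_bend_radius(r_tool_mm, t_mm, k):
    return (r_tool_mm + 0.5 * t_mm) / k - 0.5 * t_mm   # Equation (3)

def percent_error(predicted, measured):
    # Error relative to an experimental value (taken from Tables 2 and 3).
    return abs(predicted - measured) / measured * 100.0

# (tool angle deg, bend radius mm, thickness mm) -> (K conventional, K adjusted)
cases = {
    (90.0, 6.0, 3.0): (0.975, 0.976),
    (120.0, 8.0, 1.0): (0.942, 0.971),
}

for (angle, r_tool, t), factors in cases.items():
    for label, k in zip(("conventional", "adjusted"), factors):
        print(f"{label:>12s} K={k:.3f}: "
              f"angle {predicted_bend_angle(angle, k):6.2f} deg, "
              f"radius {predicted_bend_radius(r_tool, t, k):5.3f} mm")
```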
Figure 5 shows the stress distribution analysis of the bent parts before unloading, to clarify the differences in the spring-back characteristics of the L- and V-die bending processes. The L- and V-die bending processes with a 90° bend angle are shown in Figures 5(a) and 5(b), respectively, and those with a 120° bend angle are shown in Figures 5(c) and 5(d), respectively. The FEM simulation results illustrated that the analyzed stress distribution corresponded well with bending theory and the literature for both the L- and V-die bending processes. Namely, the bending stress distribution was commonly generated on the bending allowance zone in the case of the L-die bending process (Phanitwong and Thipprakmas, 2016).

In contrast, because the bottom of the workpiece was bent again in the reversed direction by the die, as shown in Figure 6(a), the workpiece then moved upward into greater contact with the punch tip, as shown in Figure 6(b). This resulted in increased formation of a reversed bending stress distribution, as shown in Figure 6(b). Therefore, as the workpiece became completely clamped by the punch and die, a reversed bending stress distribution was generated on the bending allowance zone instead of a bending stress distribution, decreasing the bending stress distribution in this zone, as shown in Figures 5(b) and 5(d). Next, a reversed bending stress distribution was generated in the unclamped leg in the L-die bending process (Phanitwong and Thipprakmas, 2016). According to past research studies (Thipprakmas, 2011), (Phanitwong and Thipprakmas, 2014), (Thipprakmas, 2013), the reversed bending stress distribution is usually generated on both leg sides in the V-die bending process. However, with a small bend angle and bend radius, the legs were reversely bent twice, so both bending and reversed bending stress distributions were generated, as shown in Figure 5(d). As per past research studies (Thipprakmas, 2011), (Phanitwong and Thipprakmas, 2014), (Thipprakmas, 2013), (Phanitwong and Thipprakmas, 2016), the bending stress causes the spring-back characteristic, slightly opening the bent part; in contrast, the reversed bending stress causes the spring-go characteristic, slightly closing the bent part. The spring-back and spring-go characteristics could be predicted by compensating these bending and reversed bending stress distributions, from which the calculated bend angle could be obtained. In the case of the 90° bend angle, in addition to the bending stress distribution generated on the bending allowance zone, the FEM simulation results showed that a small reversed bending stress distribution was generated on the unclamped leg in the L-die bending process, as shown in Figure 5(a). After compensating for these stress distributions, the predicted spring-back was 2.31° and the obtained bend angle was 92.31° in the case of the L-die bending process. In contrast, bending and reversed bending stress distributions were generated on the bending allowance zone and both leg sides in the V-die bending process, as shown in Figure 5(b). This stress distribution behavior generally agrees with that reported in the literature (Phanitwong and Thipprakmas, 2014). The predicted spring-back was 0.09° and the obtained bend angle was 90.09° in the case of the V-die bending process. In the case of the 120° bend angle, the L-die bending process (Figure 5(c)) generated a reversed bending stress distribution on the unclamped leg that was larger than that in the case of the 90° bend angle, generally in agreement with the literature (Phanitwong and Thipprakmas, 2016). However, because the reversed bending stress was generated on only the unclamped leg in the L-die bending process, whereas it was generated on both leg sides in the V-die bending process, as shown in Figure 5(d), the predicted spring-back value in the L-die bending process was larger than that in the V-die bending process. After compensating for these stress distributions, the predicted spring-back was 3.56° in the case of the L-die bending process.

Fig. 6 Illustration of reversed bending stresses generated on the bending allowance zone: (a) contact with die, (b) contact with punch tip.
In contrast, the predicted spring-go was 0.54° in the case of the V-die bending process. Therefore, the calculated bend angles were 123.56° and 119.46° in the L- and V-die bending processes, respectively. These stress distribution analyses revealed different stress distributions and spring-back characteristics between the L- and V-die bending processes. In past research (Phanitwong and Thipprakmas, 2016), the conventional spring-back factor, which considers only the stress distribution generated on the bending allowance zone, was modified for the L-die bending process by including the reversed stress distribution generated on the leg of the workpiece, and was named the adjusted spring-back factor. That research (Phanitwong and Thipprakmas, 2016) also illustrated that application of the adjusted spring-back factor could provide greater accuracy in spring-back prediction than the conventional spring-back factor. In the present research, as mentioned above, the reversed stress distribution was also generated on the bending allowance zone and the legs of the workpiece in the V-die bending process, which corresponds well with the literature (Thipprakmas, 2011), (Phanitwong and Thipprakmas, 2014), (Thipprakmas, 2013). Therefore, compared with the conventional spring-back factor, the adjusted spring-back factor is more suitable for V-die bending process applications, achieving better accuracy in spring-back prediction, bend angle prediction, and bend radius calculation.
Conclusions
To design a proper V-die, taking into account the bend angle and bend radius requirements, the determination of the predicted spring-back characteristic and the calculated bend radius should be adequately accurate. In general, the spring-back characteristic and bend radius can be predicted and calculated using the spring-back factor. In the present research, therefore, the accuracy of the spring-back factor for spring-back prediction, bend angle prediction, and bend radius calculation in the V-die bending process was investigated based on FEM simulation and experiments. The two existing spring-back factors, namely the conventional and adjusted spring-back factors, were examined. First, to use the FEM simulation as a tool for the clarification of spring-back characteristics, the FEM simulation results were validated by the experiments; the bend angle prediction and bend radius calculation corresponded well with the experiments. Next, the results illustrated that the adjusted spring-back factor offered greater accuracy than the conventional spring-back factor in spring-back prediction, bend angle prediction, and bend radius calculation. These results were verified based on the stress distribution analysis, which indicated that not only was the bending stress distribution generated on the bending allowance zone, but a reversed bending stress distribution was also generated on the leg of the workpiece in the case of the L-die bending process. These characteristics were also found in the case of the V-die bending process; moreover, the reversed bending stress distribution was also generated on the bending allowance zone, while both bending and reversed bending stress distributions were generated on the legs. Therefore, the application of the adjusted spring-back factor, which is based on the bending stress distribution generated on the bending allowance zone and the reversed bending stress distribution generated on the leg of the workpiece, could provide higher accuracy in bend angle prediction and bend radius calculation than the conventional spring-back factor, which is based only on the bending stress distribution generated on the bending allowance zone. However, although the adjusted spring-back factor gave higher accuracy in spring-back prediction, bend angle prediction, and bend radius calculation than the conventional spring-back factor, as future work the spring-back factor for the V-die bending process still needs to be developed, because the stress distribution generated in the bent parts is quite different from that generated in bent parts obtained through the L-die bending process.
Monocyte subpopulations display disease-specific miRNA signatures depending on the subform of Spondyloarthropathy
Spondyloarthropathies (SpA) are a family of rheumatic disorders that can be divided into axial (axSpA) and peripheral (perSpA) sub-forms depending on the clinical presentation of the disease. The chronic inflammation is believed to be driven by innate immune cells such as monocytes, rather than by self-reactive cells of the adaptive immune system. The aim of the study was to investigate the micro-RNA (miRNA) profiles in monocyte subpopulations (classical, intermediate and non-classical) acquired from SpA patients or healthy individuals in search of prospective disease-specific and/or disease-subtype-differentiating miRNA markers. Several SpA-specific and axSpA/perSpA-differentiating miRNAs were identified that appear to be characteristic of specific monocyte subpopulations. For classical monocytes, upregulation of miR-567 and miR-943 was found to be SpA-specific, whereas downregulation of miR-1262 could serve as an axSpA-differentiating marker, and the expression patterns of miR-23a, miR-34c, miR-591 and miR-630 as perSpA-differentiating markers. For intermediate monocytes, the expression levels of miR-103, miR-125b, miR-140, miR-374, miR-376c and miR-1249 could be used to distinguish SpA patients from healthy donors, whereas the expression pattern of miR-155 was identified as characteristic of perSpA. For non-classical monocytes, differential expression of miR-195 was recognized as a general SpA indicator, while upregulation of miR-454 and miR-487b could serve as axSpA-differentiating, and miR-1291 as perSpA-differentiating, markers. Our data indicate for the first time that in different SpA subtypes, monocyte subpopulations bear disease-specific miRNA signatures that could be relevant for the SpA diagnosis/differentiation process and may help in understanding SpA etiopathology in the context of the already known functions of monocyte subpopulations.
Introduction
Spondyloarthropathies (SpA) are a family of rheumatic disorders characterized by chronic inflammation within the spine, peripheral joints and entheses, with resultant unfavourable remodelling of the skeleton. Phenotypically, SpA can be divided into an axial sub-form (axSpA), involving mainly the joints of the spine, and peripheral SpA (perSpA), affecting the peripheral skeleton with common clinical manifestations including arthritis, enthesitis and dactylitis. Emerging data from immunopathology studies and clinical trials indicate that axial and peripheral SpA might be driven by different mechanisms and respond differently to treatment (1). In line with that, genetic, histopathological, and clinical evidence indicates that despite common downstream pathways, mediated e.g. by macrophage-derived TNF, inflammation in SpA is driven and maintained by different cellular and molecular mediators (2,3). Moreover, it has recently been proposed that SpA is an autoinflammatory disease driven by innate immune cells, i.a. monocytes, rather than a genuine autoimmune disease triggered by self-reactive T and/or B lymphocytes (4), although certain phenomena of autoimmunity in the pathogenesis of ankylosing spondylitis (AS), which is the model entity of axSpA, are also considered (5). Thus, the different pathophysiology and clinical course of axial and peripheral SpA might be affected by changes in the count and/or percentage of circulating mononuclear cell populations, their products and/or proinflammatory activity. Nevertheless, in the course of SpA, the pathophysiological role of specific monocyte subpopulations (i.e. classical CD14++CD16-, intermediate CD14++CD16+ and non-classical CD14+CD16++ monocytes) as a source of pro- and anti-inflammatory mediators, as well as their impact on disease severity, has not been fully elucidated. On the other hand, in other rheumatic diseases, e.g.
rheumatoid arthritis (RA), increased monocyte count (especially of the CD14+CD16++ subpopulation) correlates with clinical manifestations and elevated parameters of inflammation localized in peri-articular tissue (6)(7)(8). Moreover, dendritic cells originating from migrating monocytes seem to play a significant role in the pathogenesis of rheumatic inflammatory processes and participate in osteogenesis and inflammation-mediated destruction of bone tissue (9). In line with this, monocytes seem to favour maintenance of inflammation in peri-articular tissues in patients with AS (10), and the classical CD14++CD16- monocyte subpopulation is believed to be the source of osteoclasts in patients with RA (11,12).
MicroRNAs (miRNAs) are small endogenous, non-coding RNAs that regulate gene expression at the post-transcriptional level. They are involved in a range of physiological and pathological processes associated with immune regulation and the development of autoimmunity. Dysregulated expression of miRNAs has been described in numerous rheumatic disorders including Spondyloarthropathies (13,14). Nevertheless, here we show, for the first time, differential expression of miRNAs in monocyte subpopulations from SpA patients suffering from either the peripheral or the axial form of the disease. Considering the critical role of miRNAs in the regulation of the innate immune system, together with their apparent contribution to the pathological processes observed in the two subforms of SpA, the obtained results could help complete the picture of SpA pathogenesis in the context of already known monocyte functions.
Materials and methods
Patients
Forty-six patients with SpA (27 with axial SpA according to the Assessment of SpondyloArthritis International Society (ASAS) 2009 classification criteria for axial SpA and 19 with peripheral SpA according to the ASAS 2011 classification criteria for peripheral SpA (15,16)) and 20 healthy age- and sex-matched subjects (HC, healthy controls) were enrolled in the study. Patients were under 45 years of age, naive to synthetic, synthetic-targeted, or biologic Disease Modifying Anti-Rheumatic Drugs (DMARDs), and had not received systemic glucocorticosteroids. Patients provided signed informed consent, and the study protocol was approved by the local Bioethics Committee (KBET/252/B/2012). Table 1 presents the characteristics of the patients. Briefly, the age (IQR) of axSpA patients was 29.7-39.7 years and that of perSpA patients was 31-38.5 years. The disease duration (IQR) was 5-10.7 years for axSpA and 2-9.5 years for perSpA patients. 81% of axSpA and 37% of perSpA patients were HLA-B27 positive. Twenty-two (81%) axSpA patients fulfilled the modified New York (mNY) criteria for AS.
Isolation of monocytes and their subsets
Monocyte subpopulations were isolated from peripheral blood mononuclear cells (PBMC) obtained from SpA patients or healthy donors. PBMC were isolated from EDTA-treated whole peripheral blood by standard Pancoll human (Panbiotech, Aidenbach, Germany; P04-60500) density gradient centrifugation. PBMC were washed in PBS (Sigma-Aldrich, Saint Louis, USA), and the monocyte subsets (classical CD14++CD16-, intermediate CD14++CD16+ and non-classical CD14+CD16++) were then isolated by flow cytometry cell sorting. The following monoclonal antibodies (mAbs), at 1:25 dilution (v/v), were used to stain monocytes: anti-CD14-FITC (clone MjP9, BD Bioscience), anti-CD16-PE (clone 3G8, BD Bioscience) and anti-HLA-DR-PerCP (clone L243, BD Bioscience); cells were gated as previously described (17,18). The monocytes were incubated for 30 min at 4°C and then sorted using a FACSAria II cell sorter (BD Biosciences, San Jose, CA, USA). The sorter was equipped with 488 nm and 561 nm lasers for excitation of FITC, PE and PerCP. The following band-pass filters were used for the measurement of fluorescence: 530/30 for FITC and 695/40 for PerCP (488 nm laser), and 582/42 for PE (561 nm laser). After isolation, the cells were washed in PBS, centrifuged for 10 min at 350 x g and kept frozen at -80°C until RNA isolation. The absolute numbers of FACS-sorted monocyte subpopulations were provided previously (19). The two miRNA panels (A and B) used together contain 754 human miRNA sequences from the Sanger miRBase v14, including a target negative control (ath-miR159a) and target controls (U6 rRNA, RNU48, RNU44). The full list of miRNAs examined in this study is provided in Supplementary File 3.
QuantStudio OpenArray MicroRNA Expression
RNA was isolated using the mirVana microRNA isolation kit (Thermo Fisher Scientific; AM1560) and reverse-transcribed into cDNA using Megaplex RT Primers and the TaqMan MicroRNA Reverse Transcription Kit (Thermo Fisher Scientific; 4366597). Next, the cDNA was pre-amplified to increase its quantity before performing the OpenArray qPCR. Pre-amplification products were prepared in two separate reactions for Megaplex PreAmp Primers Pool A and Pool B, corresponding to the Megaplex RT Primers Pool used previously for reverse transcription. For each pre-amplification we used 2.5 µl cDNA and 22.5 µl of PreAmp Reaction Mix, containing Megaplex PreAmp Primers Pool A or Pool B (2.5 µl), 2x TaqMan PreAmp Master Mix (12.5 µl) and nuclease-free water (all reagents by ThermoFisher Scientific, Waltham, MA, USA). Pre-amplification reaction tubes with reagents were incubated on ice for 5 minutes before the run on a SimpliAmp Thermal Cycler (Applied Biosystems by ThermoFisher Scientific, Waltham, MA, USA). The following thermal conditions were used for pre-amplification: 95°C for 10 minutes, 55°C for 2 minutes, 72°C for 2 minutes, 12 cycles of 95°C for 15 seconds and 60°C for 4 minutes, a hold step at 99.9°C for 10 minutes and a final hold step at 4°C. A negative control was used for each pre-amplification. Every pre-amplification product was diluted 1:20 with nuclease-free water and used for the next step within 12 hours. For each sample, the qPCR reaction was prepared by mixing 22.5 µl 2x TaqMan OpenArray Real-Time PCR Master Mix and 22.5 µl pre-amplification product on a 96-well plate. Next, the qPCR reaction mixes with samples (5 µl) were transferred by pipetting to a 384-well plate according to the protocol created in the OpenArray Sample Tracker Software (ThermoFisher Scientific, Waltham, MA, USA) and loaded onto OpenArray plates by the OpenArray AccuFill System (ThermoFisher Scientific, Waltham, MA, USA). OpenArray cases were sealed according to the guidelines within 90 seconds. The ready OpenArray plates were put into the QuantStudio 12K Flex Real-Time PCR System with an OpenArray block and processed.
After the OpenArray miRNA expression run, data analysis was performed. Quality control images (QC Images) were exported and inspected to find potential loading errors. The result files created by QuantStudio Software v1.2.3 were uploaded to the ThermoFisher Cloud and analyzed with the Relative Quantification qPCR Application. miRNA Panels A and B were analyzed in separate analysis groups.
Statistical analysis
The expression of each miRNA was calculated using the 2^-ΔΔCT method, with snRNA U6 used as the endogenous control for all miRNA analyses. The results were visualized using volcano plots (p-value vs. fold change) with the following settings: fold change boundary 2.0, p-value boundary 0.05. Median values for each group were compared using the Kruskal-Wallis test. Data analysis was performed in Mathematica 12 software (Wolfram Research, Inc., Mathematica, Version 12.0.0, Champaign, IL, USA) and GraphPad Prism version 9 (GraphPad Software Inc., San Diego, CA). Principal Component Analysis was performed in SPSS Statistics software (version 29.0.0.0 (241)).
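A minimal sketch of the quantification and testing steps, assuming per-donor ΔCt values (Ct of the miRNA minus Ct of the U6 control) as input, is given below; the numbers are hypothetical placeholders, not study data:

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical delta-Ct values (Ct_miRNA - Ct_U6) per donor, for one miRNA
# in one monocyte subset; real inputs come from the OpenArray run.
d_ct = {
    "axSpA":  np.array([5.1, 4.8, 5.5, 5.0]),
    "perSpA": np.array([6.0, 6.4, 5.9, 6.2]),
    "HC":     np.array([7.2, 7.0, 7.5, 7.1]),
}

# 2^-ddCT fold change of each patient group relative to healthy controls:
ref = np.median(d_ct["HC"])
fold_change = {g: 2.0 ** (-(np.median(v) - ref)) for g, v in d_ct.items()}

# Kruskal-Wallis test comparing the three groups, as used in the study:
stat, p = kruskal(d_ct["axSpA"], d_ct["perSpA"], d_ct["HC"])
print(fold_change, stat, p)
```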
Results
SpA patients fulfilling the study inclusion criteria were enrolled. Patients were under 45 years of age, naive to synthetic, synthetic-targeted or biologic Disease Modifying Anti-Rheumatic Drugs (DMARDs), and had not received systemic glucocorticosteroids. Detailed patient characteristics are presented in Table 1 (Methods section). Monocytes were then isolated from patients' peripheral blood and divided into separate subpopulations using FACS, followed by RNA isolation and gene expression analysis with a quantitative PCR miRNA array.
In the analyses we focused on miRNAs whose expression differed statistically between the SpA subtypes and healthy subjects (3 groups: axSpA, perSpA, HC) within monocyte subpopulations and that were upregulated by at least 2-fold or downregulated to at most 0.5-fold. We also selected certain miRNAs that complied with the above criteria in 2 out of 3 studied comparisons (e.g. upregulation of miR-567 in axSpA vs. HC and perSpA vs. HC, but not in axSpA vs. perSpA). This approach allowed identification of differentially expressed miRNAs whose expression (down- or upregulation) in monocyte subpopulations would be typical for SpA (independent of the disease variant) or characteristic of a specific disease subtype (axSpA vs. perSpA and HC, or perSpA vs. axSpA and HC); a sketch of the selection rule is given below. The analysis results are shown as volcano plots in Figure 1. The highlighted miRNAs fulfilled the analysis criteria.
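The selection rule can be captured in a few lines; the sketch below is our reading of the criteria (the function names and example numbers are hypothetical, not study data):

```python
COMPARISONS = ("axSpA_vs_HC", "perSpA_vs_HC", "axSpA_vs_perSpA")

def is_differential(fold_change, p_value, fc_cut=2.0, p_cut=0.05):
    # Volcano-plot criteria: at least 2-fold up, or down to at most
    # 0.5-fold, with p < 0.05.
    return p_value < p_cut and (fold_change >= fc_cut or fold_change <= 1.0 / fc_cut)

def marker_profile(results):
    # results: {comparison: (fold_change, p_value)} for a single miRNA.
    # A marker passing both ...vs_HC comparisons but not axSpA_vs_perSpA
    # would be read as "SpA-specific" (cf. the miR-567 example above).
    return {c: is_differential(*results[c]) for c in COMPARISONS}

print(marker_profile({"axSpA_vs_HC": (2.8, 0.01),
                      "perSpA_vs_HC": (2.3, 0.03),
                      "axSpA_vs_perSpA": (1.2, 0.40)}))
```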
Several SpA-specific and axSpA/perSpA-differentiating miRNAs were identified that appeared to be characteristic of particular monocyte subpopulations. For classical monocytes, upregulation of miR-567 and miR-943 was found to be SpA-specific, whereas downregulation of miR-1262 could serve as an axSpA-differentiating marker, and the expression patterns of miR-23a, miR-34c, miR-591 and miR-630 as perSpA-differentiating markers. For intermediate monocytes, the expression levels of miR-103, miR-125b, miR-140, miR-374, miR-376c and miR-1249 could be used to distinguish SpA patients from healthy donors, whereas the expression pattern of miR-155 was identified as characteristic of perSpA. For non-classical monocytes, differential expression of miR-195 was recognized as a general SpA indicator, while upregulation of miR-454 and miR-487b could serve as axSpA-differentiating, and miR-1291 as perSpA-differentiating, markers. Furthermore, the associations among the significantly differentially expressed miRNAs themselves were investigated (Supplementary File 4), and Principal Component Analysis (PCA) was performed in search of combined miRNA signatures that could potentially differentiate the SpA categories from healthy controls. PCA revealed that the combined expression patterns of miR-23a and miR-630 in the classical monocyte subset could serve as a good discriminator between perSpA vs. axSpA and HC. The expression pattern of miR-1249 in the intermediate monocyte subset could discriminate well between HC vs. axSpA and perSpA. The combined expression patterns of miR-195 and miR-1291 in the non-classical monocyte subset could serve as a good discriminator between axSpA vs. perSpA and HC. On the other hand, differential expression of miR-487b could be used to discriminate between axSpA vs. HC. The results from the PCA correspond well with the data presented in Figure 1. Interestingly, when all differentially expressed miRNAs were considered for PCA, a group of miRNAs (namely miR-148b, miR-324-5p, miR-130b, miR-17, miR-30b, miR-1255, miR-302c and miR-26a-3p) was identified whose combined expression pattern in classical monocytes could discriminate between perSpA vs. axSpA and HC (PCA details available in Supplementary File 5).
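For readers wanting to reproduce the PCA step, a minimal sketch follows; the expression matrix is a hypothetical stand-in (rows are donors, columns are the two discriminating miRNAs named above), not the study data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical relative-expression matrix: rows = donors, columns =
# miR-23a and miR-630 in classical monocytes.
X = np.array([
    [2.1, 0.4],  # perSpA donors
    [2.4, 0.3],
    [0.9, 1.1],  # axSpA donors
    [1.0, 1.2],
    [1.1, 1.0],  # healthy controls
    [0.8, 1.3],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)           # donor coordinates in PC space
print(pca.explained_variance_ratio_)    # variance captured by PC1 and PC2
print(scores[:, 0])                     # PC1 separating perSpA from the rest
```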
Additionally, we examined whether the expression of the above-identified miRNAs correlates with patients' data and characteristic SpA clinical features. The identified significant correlations are summarized in Table 2.
Discussion
Dysregulation of miRNA expression is a common phenomenon that accompanies numerous human diseases including, among others, immune deficiencies, neurodegenerative disorders and cancer (20)(21)(22). Similarly, in the course of chronic rheumatic inflammatory disorders such as SpA, miRNA profiles may vary, indicating their influence on the underlying pathological processes (14). Here, we not only confirmed that SpA is a heterogeneous disease, but also unveiled that the immune processes associated with the body's response to ongoing SpA pathology might be related to the alteration of specific miRNA landscapes. We demonstrated that, in SpA, different monocyte subpopulations harbor distinct miRNA expression profiles characteristic of the specific subform of the disease. Several of the identified differentially expressed miRNAs have already been linked to rheumatic disorders. Interestingly, miR-34c, which was strongly repressed in perSpA classical monocytes, has been found to be upregulated in Rheumatoid Arthritis (RA) PBMCs (23). miR-34c has long been recognized as a strong osteogenic inhibitor (24,25), and hence its role in perSpA is likely to be related to inflammation-driven bone structure remodeling. Similarly, the involvement of the miR-23a cluster (upregulated in perSpA classical monocytes) in processes such as the regulation of bone formation/destruction has been described (26,27). Additionally, miR-23a has been implicated in IL-17-driven inflammation, demonstrating inhibitory effects on the expression of IL-17-mediated proinflammatory mediators (28). In the immune system, the miR-23a cluster plays a decisive role in promoting myelopoiesis over lymphopoiesis (29) and has been shown to have a profound impact on the function of myeloid cells, including M-CSF-induced differentiation of monocytes to macrophages (30) as well as myeloid cell activation (31). Intriguingly, a member of the miR-23a paralog cluster, miR-27b, was identified in our study as strongly upregulated in axSpA classical monocytes compared with their healthy counterparts. This miRNA has been linked, inter alia, with such rheumatic disease-relevant processes as enhancing osteogenesis of human mesenchymal stem cells (32), inhibiting IL-17-induced monocyte chemoattractant protein-1 (MCP-1) expression (33), and suppressing the NF-kB signaling pathway in osteoarthritic chondrocytes (34). Notably, miR-27b expression levels correlated with a number of disease parameters (Table 2), indicating an important role of this miRNA in the disease activity, enthesitis and bone remodeling processes occurring in axSpA. Scarce SpA-related data are available on the other miRNAs (i.e. miR-302c, miR-567, miR-591, miR-630, miR-943 and miR-1262) found to be significantly differentially expressed in classical monocytes among the study groups, indicating the as yet unexplored role of these miRNAs in rheumatology research, possibly as novel SpA biomarkers. In addition, as 80-90% of classical monocytes migrate from peripheral blood to the target tissues ca. 1 day after being released from the bone marrow (35), the miRNA signatures they harbour may exert a significant impact on the cytokine milieu of locally inflamed tissues, i.e. in synovitis, osteitis and enthesitis.
The vast majority of differentially expressed miRNAs in intermediate monocytes appeared to be specific for SpA in general. In fact, upregulation of only a single miRNA, namely miR-155, was found to be characteristic of perSpA. Furthermore, the expression levels of miR-155 correlated with disease duration (negatively) and DAS28 (positively) in perSpA. Dysregulation of miR-155 has already been associated with rheumatic disorders including RA (36) and spondyloarthritis (37). For instance, miR-155 is highly expressed in synovial fluid-derived monocytes/macrophages compared with their peripheral blood counterparts from patients with RA. Incubation of peripheral blood CD14+ cells with RA synovial fluid stimulated the expression of miR-155 and the release of TNF-alpha, while the cytokine production was downregulated by transfection of a miR-155 inhibitor (38). Moreover, miR-155 is commonly regarded as a highly proinflammatory miRNA contributing to impaired Treg function (39), augmentation of the Th17 response (38) and support of M1 macrophage polarization (40). Thus, our finding that miR-155 expression levels are at their highest at early stages of perSpA (the inflammatory disease phase) and then gradually decrease over the course of the disease seems consistent with the role of miR-155 as a critical driver of inflammatory mechanisms. The other differentially expressed intermediate monocyte miRNAs, miR-103, miR-125b and miR-140 (all upregulated in SpA patients in comparison to HC), have been implicated in diverse functions in the development of several rheumatic disorders including osteoarthritis [miR-103 (41), miR-125b (42), miR-140 (43)], RA [miR-103 (44), miR-125b (45), miR-140 (46)] and juvenile idiopathic arthritis [miR-125b (47)]. Certain common features of the rheumatic diseases suggest that the abovementioned miRNAs may also be important in the pathogenesis of SpA. Interestingly, in a mouse model of axSpA, miR-103 levels were found to be elevated in animals subjected to reduced mechanical loads. Moreover, the increase in miR-103 expression led to upregulation of the potent osteogenesis inhibitor Dkk-1, and hence reduced new bone formation/bone density in SpA mice (48). Whether a similar miR-103-driven mechanism also operates in human axSpA remains to be verified.
In non-classical monocytes, only four differentially expressed miRNAs meeting the inclusion criteria were identified, namely miR-195, miR-454, miR-487b and miR-1291. Little is known about miR-454 and miR-1291 in the context of rheumatic disorders. On the other hand, miR-195 (upregulated in SpA vs. control monocytes) and miR-487b (upregulated in axSpA vs. perSpA and control monocytes) could be linked to SpA pathophysiology primarily through their influence on osteogenesis (miR-195) and inflammation (miR-487b). Several studies have shown a pro-osteogenic function of miR-195 (49,50), which corresponds well with the positive correlation between miR-195 levels and bone density observed in the examined axSpA patients. Furthermore, the inverse relationship between miR-195 expression and disease duration might indicate its importance in the mechanisms leading to the loss of bone density observed at later disease stages. miR-487b, by contrast, is considered a potent inhibitor of inflammatory processes (51,52). Consistently, a recent study revealed that miR-487b-laden extracellular vesicles possess a potent anti-inflammatory activity, likely through suppression of the MAPK signaling pathway (53). Such a function might well explain the negative correlation between miR-487b levels and inflammation-related disease parameters identified in the studied perSpA cohort. Furthermore, miR-487b has been found to play an important role in bone metabolism; its impairment of osteoblastogenesis through interference with Notch-1 signaling might indicate its engagement in the dysregulation of bone turnover processes observed in SpA (54).
In conclusion, the results of the study clearly demonstrate the dysregulation of miRNA signatures in SpA monocytes compared to their healthy counterparts. Moreover, we showed for the first time that the miRNA profiles of monocyte subpopulations differ significantly depending on the predominant disease pathology. These results may therefore be of significant diagnostic value, especially at initial disease stages, when the definite distinction of the SpA subvariant is problematic due to the absence of extensive bone tissue remodeling. Notably, the pathomechanisms governing the peripheral and the axial forms of SpA seem to be very different; hence, proper, early recognition of the disease subvariant could be critical for disease prognosis and for choosing an appropriate treatment strategy, with the expected response to the applied therapy eventually determining the success or failure of the treatment.
Data availability statement
The original contributions presented in the study are publicly available. These data can be found here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE223717.
Ethics statement
The studies involving human participants were reviewed and approved by the Bioethics Committee of the Jagiellonian University. The patients/participants provided their written informed consent to participate in this study.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fimmu.2023.1124894/full#supplementary-material

OpenArray MicroRNA Expression Workflow presenting methodological aspects of miRNA expression analysis in human monocyte subpopulations.
SUPPLEMENTARY FILE 3
Full list of miRNAs examined in this study.

SUPPLEMENTARY FILE 4

Spreadsheets presenting associations between differentially expressed miRNAs themselves in classical, intermediate and non-classical monocytes.
SUPPLEMENTARY FILE 5
Detailed description of principal component analysis of differentially expressed miRNAs.
Effect of replacing fish meal by full fat soybean meal on growth performance, feed utilization and gastrointestinal enzymes in diets for African catfish Clarias gariepinus
This study aimed to evaluate the effect of replacing fish meal with different levels of full fat soybean meal (FFSBM) on growth and digestive enzyme activities in the stomach, liver and intestine of Clarias gariepinus. Four diets (D1, D2, D3 and D4) were formulated by replacing fish meal with FFSBM at 0, 15, 20 and 20 g 100 g⁻¹ protein, the last supplemented with DL-methionine. The growth of C. gariepinus was found to decrease significantly as FFSBM replacement increased. Final body weight was 89.69, 79.70, 70.82 and 68.29 g for fish fed D1, D2, D3 and D4, respectively, with significant differences between treatments. Specific growth rate (SGR) ranged between 3.11 and 2.78%. Intestinal proteolytic activity was high only at alkaline pHs, whereas only very low activity was observed at acidic pHs. The liver showed approximately similar activities at acidic and alkaline pHs. In contrast, higher proteolytic activity in the stomach was observed at the acidic pHs of 3.0 and 4.0, whereas lower values (in μg tyrosine minute⁻¹ mg⁻¹ protein) were observed at the neutral pH of 7.0 for catfish fed the experimental diets. Moreover, trypsin activity was highest in the stomach, followed by the intestine and liver. However, a higher amount of amylase was observed in the liver than in the intestine and stomach.
Introduction
Clarias gariepinus is an important species in the fish culture sector owing to its high fecundity, high growth rate, tolerance of high stocking densities, resistance to common diseases and ability to accept a wide variety of feeds (Huisman and Richter, 1987). Many investigations have been conducted to evaluate the suitability of plant and animal feedstuffs as substitutes for fishmeal in the diets of African catfish. Conventionally, fish meal has been considered the major source of protein because of its high content of essential nutrients, balanced amino acid profile, support of good growth performance and acceptability to most aquatic animals (Sun et al., 2015). The problem, however, is that the protein source represents the highest cost element in feed for aquaculture, and there is therefore an incentive to seek cheaper alternatives. Furthermore, the development of alternative protein sources, including plant proteins, means that fish meal can at least be partially substituted with more economical products (Francis et al., 2001), such as poultry by-product meal (Abdel-Warith et al., 2001; Fagbenro and Davies, 2001; Goda et al., 2007), shrimp head waste (Nwanna et al., 2004), soybean meal (Imorou Toko et al., 2008), rice husk meal (Zaid and Ganiyat, 2009) and grasshopper meal (Alegbeleye et al., 2012; Olaleye, 2015). Several investigations have reported that partial substitution of fish meal with soybean meal affects the performance of a number of fish species, such as Japanese seabass Lateolabrax japonicus, juvenile tench Tinca tinca L. (Garcia et al., 2015), gilthead sea bream Sparus aurata L. (Kokou et al., 2012), rainbow trout Oncorhynchus mykiss (Harlioğlu, 2011) and Nile tilapia Oreochromis niloticus (Abdel-Warith et al., 2013). Some anti-nutritional factors (ANFs) suppress the activities of specific enzymes, such as proteinases and amylases; in addition, many protein components, such as haemagglutinins and lectins, can react in specific ways with certain carbohydrates (Hendricks, 2002). Protease enzymes such as trypsin and chymotrypsin are very important for digestion in the digestive system of fish. These digestive enzymes, like others, are active in the proximal intestine. The hydrolysis of dietary protein by these enzymes breaks the protein down into simple molecules that can then be absorbed throughout the intestine and used in metabolic processes (Lovatto et al., 2017). While some studies have focused their attention on the influence of different substitute plant proteins on fish growth and feed utilization, few have looked at how diet changes the activities of digestive enzymes (Xu et al., 2012; Zhao et al., 2016).
This study aimed to investigate the potential of partial substitution of fishmeal by full fat soybean meal in diets for C. gariepinus by examining the subsequent effects on growth performance, feed utilization and the digestive enzyme activities in the stomach, hepato-pancreas and intestine, given that these are still poorly understood, especially in respect to C. gariepinus.
Experimental fish
A total of 160 C. gariepinus (average weight 7.82±2.02 g) were distributed into eight fiberglass tanks containing 100 L of water each, suspended over a 1000 litre bio-filter. Water entered each tank via a spray bar after filtration, and an aerator was placed in the center of each tank. Partial water changes amounted to approximately 20% of the system volume per week. The system filters were cleaned daily to avoid the buildup of nitrate levels in the water. Twenty fish were stocked in each tank, with duplicate tanks per treatment. Water temperature was maintained at 28±1°C by thermostatically controlled heaters (Atman, AT-300W); pH was maintained at 7.1-8.0, and ammonia (NH3) (0.07-0.20 mg L⁻¹), nitrite (NO2) (0.15-0.35 mg L⁻¹), nitrate (NO3) (4.35-5.77 mg L⁻¹) and dissolved oxygen (5.3-6.7 mg L⁻¹) were all monitored twice a week to ensure that they remained at acceptable levels. A further twenty fish were euthanized using buffered MS222 (50 mg/l) and kept frozen at -20°C for the initial chemical analysis of carcass composition. At the end of the experiment, five fish from each treatment were dissected, and samples of the stomach, liver and intestine were removed and kept frozen at -80°C for enzyme assays; another five fish from each group were used for the final carcass composition.
Diet formulation
Experimental diets were prepared with various levels of full fat soybean meal (FFSBM) substituted for fishmeal. Table 1 shows the formulation and chemical analysis of the experimental diets, and Table 2 shows the essential amino acid contents of the diets expressed as a percentage of protein. The experimental diets were prepared to replace the fishmeal protein with full-fat soybean at 0, 15, 20 and 20 g 100 g⁻¹ protein for D1, D2, D3 and D4, respectively, with D4 additionally supplemented with 1% DL-methionine.
Experimental procedure
Fish were weighed every two weeks and fed by hand twice a day, six days a week, at 3.5% of their body weight for the first half of the period (6 weeks), reduced to 3% for the second half (6 weeks), giving an average of 3.25%. The experiment lasted 12 weeks, and the feed ration was adjusted according to the increase in biomass. At the end of the experimental period, five fish from each group were collected and dissected, and samples of the stomach, liver and intestine were removed and kept frozen at -80 °C for enzyme assays; another five fish from each treatment were euthanized and frozen at -20 °C to determine the final chemical composition.
Proximate composition
Chemical analyses of fish carcass and diets for moisture, lipid, protein and ash were carried out using AOAC (1995) methods, and gross energy was calculated according to Hepher et al. (1983) using the equivalent factors of 5.65, 9.45 and 4.2 kcal g⁻¹ for CP, EE and NFE, respectively.
Estimation of growth and nutrients efficiency
Growth performance and nutrient utilization were evaluated using the following indices:

SGR (% day⁻¹) = [Ln FBW (g) - Ln IBW (g)] / feeding days × 100 (Equation 1)
FCR = feed intake (g) / body weight gain (g) (Equation 2)
PER = body weight gain (g) / protein intake (g) (Equation 3)
ANPU (%) = [(FBP × FBW) - (IBP × IBW)] / TPI × 100 (Equation 4)

where FBP is the final body protein, FBW the final body weight, IBP the initial body protein, IBW the initial body weight and TPI the total protein intake (g).
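As a quick illustration of these four formulas, the following minimal Python sketch computes the indices from raw inputs; the sample values passed in the example are hypothetical and are not taken from the study.

import math

def growth_indices(ibw, fbw, feed_intake, protein_intake, ibp, fbp, days):
    # ibw/fbw: initial/final body weight (g)
    # feed_intake, protein_intake: totals over the trial (g)
    # ibp/fbp: initial/final body protein fractions (e.g. 0.15)
    # days: number of feeding days
    gain = fbw - ibw
    sgr = (math.log(fbw) - math.log(ibw)) / days * 100      # Equation 1, % per day
    fcr = feed_intake / gain                                # Equation 2
    per = gain / protein_intake                             # Equation 3
    anpu = (fbp * fbw - ibp * ibw) / protein_intake * 100   # Equation 4, %
    return sgr, fcr, per, anpu

# Hypothetical example (illustrative values only):
print(growth_indices(ibw=7.82, fbw=89.69, feed_intake=67.0,
                     protein_intake=24.0, ibp=0.14, fbp=0.16, days=72))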
Determination of amino acids
The amino acid contents of the diets were determined following the acid hydrolysis method of McCullagh et al. (2006). Amino acids were assayed using a Kontron Chromakon 500 automatic amino acid analyzer [column 250 × 4.6 mm, cation ion-exchange resin material (AS70)], and the procedures were carried out as described in our previous study (Abdel-Warith et al., 2014). Table 2 shows the amino acid composition of the diets expressed as % of protein.
Determination of enzymes
Proteolytic activity was expressed as the amount of tyrosine (μg) digested by 100 μl of enzyme solution per minute per mg protein at acidic, neutral and alkaline pHs at 37 °C, and was determined using casein hydrolysis according to the method of Kunitz (1947) as modified by Walter (1984). Trypsin activity was expressed as the amount of tyrosine (μg) liberated by 0.5 ml of enzyme extract per minute per mg protein at 37 °C and measured in test tubes using benzoyl-Arg-p-nitroanilide (BAPNA) as a substrate according to Erlanger et al. (1961). Amylase activity was expressed as the amount of maltose liberated by 50 μl of enzyme extract per minute per ml at 37 °C and assayed by the starch hydrolysis method according to Tietz (1970). The lipase activity was expressed as the amount of fatty acids, neutralized by 0.05 N NaOH, liberated by 1 ml of enzyme solution per minute at 37 °C, and was determined with the aid of a Sigma diagnostic test-kit.
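To make the unit conventions concrete, the short sketch below converts raw assay readings into the specific activities reported here; the readings used are hypothetical.

def specific_activity(product_ug, minutes, protein_mg):
    # Product liberated (e.g. μg tyrosine for proteases, μg maltose for
    # amylase) per minute per mg protein in the enzyme extract.
    return product_ug / (minutes * protein_mg)

# Hypothetical assay: 42 μg tyrosine liberated over 10 min by an aliquot
# containing 1.5 mg protein.
print(specific_activity(42.0, 10.0, 1.5))  # μg tyrosine per minute per mg protein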
Statistical analysis
Data were analyzed using a one-way analysis of variance (ANOVA). The means were separated by Fisher's LSD test and compared using Duncan's Multiple Range Test, as described by Snedecor and Cochran (1989). The significance level was set at P < 0.05.
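A minimal sketch of this analysis in Python is shown below, using hypothetical replicate values. SciPy provides the one-way ANOVA directly; Duncan's Multiple Range Test has no standard SciPy implementation, so Tukey's HSD (available in SciPy 1.11 or later) is shown only as a readily available stand-in for the pairwise comparisons.

import numpy as np
from scipy import stats

# Hypothetical final-weight replicates for the four diets (g); illustrative only.
d1 = np.array([90.1, 89.3])
d2 = np.array([80.0, 79.4])
d3 = np.array([71.2, 70.4])
d4 = np.array([68.8, 67.8])

# One-way ANOVA across the diet groups.
f_stat, p_value = stats.f_oneway(d1, d2, d3, d4)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # significant if P < 0.05

# Pairwise comparisons (Tukey's HSD as a stand-in for Duncan's test).
print(stats.tukey_hsd(d1, d2, d3, d4))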
Growth performance
The results of growth performance and feed utilization for C. gariepinus fed the experimental diets are displayed in Table 3. Growth performance (mean final weight, weight gain and specific growth rate (SGR; Equation 1)) decreased significantly as increasing proportions of FFSBM were included in the diets (from D2 to D4). Amino acid supplementation had no effect on these parameters when compared with the unsupplemented diet (D3). The final weights were 89.69, 79.70, 70.82 and 68.29 g for catfish fed the experimental diets, with significant differences between treatments (Table 3). The SGR values were 3.11, 2.98, 2.82 and 2.78 for fish fed D1, D2, D3 and D4, respectively.
Feed consumption and feed utilization
Clarias gariepinus accepted the control diet well, whereas the palatability of the test diets containing partial replacement with FFSBM declined. Average feed intake ranged between 0.85 and 0.72 g fish⁻¹ day⁻¹. The presence of FFSBM in the diet had a noticeable effect on feed intake (Table 3). Feed intake showed a significant difference (P<0.05) between fish fed D1 and D2 when compared with D3 and D4, which contained high levels of FFSBM, even with the addition of methionine to D4. The feed conversion ratio (FCR; Equation 2) showed significant differences (P<0.05) between fish fed the control diet (0.82) and the other groups fed diets containing different levels of FFSBM, especially D4; however, D1, D2 and D3 presented similar FCRs, as did D2, D3 and D4, with FCRs of 0.87 for D2, 0.91 for D3 and 0.93 for D4 (the last including amino acid supplementation). The protein efficiency ratio (PER; Equation 3) also showed significant differences (P<0.05) between D1 and D4; however, D2, D3 and D4 presented no significant differences between treatments. The highest PER (3.36) was obtained for fish fed the control diet, while fish fed the different inclusion levels of FFSBM showed PERs of 3.24, 3.12 and 2.97. Apparent net protein utilization (ANPU %; Equation 4) also supported this trend, with significant differences (P<0.05) between the control and D4, but no significant differences among D2, D3 and D4. The ANPU for catfish fed the control diet was 54.48%, whereas the lowest value, 48.6%, was observed for fish fed 20 g 100 g⁻¹ protein of FFSBM with amino acid supplementation (D4) (Table 3). Table 4 shows the carcass chemical composition of fish fed the experimental diets. Final carcass composition showed only slight differences as a result of the experimental diets: there were few differences in moisture content, and the protein, lipid and ash contents showed only slight, non-significant differences (P>0.05) between treatments (Table 4).
Proteolytic activity
The total proteolytic enzyme activity (the sum over pHs 1.5, 3, 4, 7, 8.5, 9 and 10) was higher in the intestine than in the stomach and liver, ranging between 5.68 and 2.98 μg tyrosine minute⁻¹ mg⁻¹ protein. Mean proteolytic activity in the intestine did not differ significantly (P>0.05) among C. gariepinus fed the different diets. However, proteolytic activity in the stomach showed significant differences (P<0.05) between fish fed the control diet and fish fed D2, D3 and D4, which contained high amounts of FFSBM (Table 5). Proteolytic activity in the liver was lower than that in the stomach, and the mean total proteolytic activity (sum over pHs, in μg tyrosine minute⁻¹ mg⁻¹ protein) decreased in all organs as FFSBM replacement increased for C. gariepinus fed the experimental diets. Proteolytic activities in the intestine were high at alkaline pHs, whereas only very low activity was observed at acidic pHs (Figure 1). For the liver, the proteolytic activity recorded was similar at acidic and alkaline pHs. In contrast, the proteolytic activity in the stomach was higher at the acidic pHs of 3.0 and 4.0, whereas lower values were recorded at the neutral pH of 7.0 (Figure 1) for catfish fed the experimental diets.
Trypsin activity
Trypsin activities were also highest in the stomach, followed by the intestine and liver. While there were no significant differences (P>0.05) between the treatments with respect to the liver and stomach, intestinal trypsin activity was significantly higher (2.75) in fish fed the control diet (Table 5) than in C. gariepinus fed D2, D3 and D4, which showed values of 2.31, 2.07 and 1.71, respectively.
Amylase activity
Amylase activity showed the highest values in the liver compared with the intestine and stomach (Table 5). The highest amount of amylase was observed in the hepatic tissue of catfish fed the control diet (4.49 μg maltose minute⁻¹ ml⁻¹), followed by that of C. gariepinus fed D2 (2.94). C. gariepinus fed diets D3 and D4 showed lower values of amylase activity. Only low amylase activities were detected in the stomach and intestine (Table 5).
Lipase activities
In addition, lipase activities supported this trend, with a significant difference among the groups in the intestine but not in the stomach and liver (Table 5). Lipase activities in the intestine were 1.87, 1.37, 1.14 and 1.07 for fish fed D1, D2, D3 and D4, respectively, with significant differences (P<0.05) between D1 and D2 on the one hand and D3 and D4 on the other; however, the data showed no significant differences (P>0.05) among the groups fed diets containing FFSBM (Table 5).
Discussion
The results obtained in this study show that plant ingredients used as protein sources, for example full fat soybean, can be effective when they replace less than 50% of the fishmeal protein (LT94) in diets for C. gariepinus. Catfish growth was slightly inferior with the diet containing about 15 g/100 g protein (D2), but was significantly inferior in catfish fed 20 g/100 g protein and 20 g/100 g of total protein of FFSBM with additional DL-methionine, compared with fishmeal as the only protein source. Moreover, catfish fed 20 g/100 g of total protein of FFSBM with DL-methionine supplementation did not improve their growth performance when compared with the unsupplemented diet. These results are in agreement with the data obtained by Fagbenro and Davies (2001), who found that soybean flour possesses high nutritional value as a protein source in African catfish diets but that, particularly at partial substitution above 50% of the fish meal protein in the diet, there was a decrease in growth and inferior protein utilization efficiency.
The results of this study showed that FFSBM was able to replace 15 g 100 g⁻¹ of the total protein supplied by high-quality fishmeal in C. gariepinus diets; however, growth and feed utilization were reduced with diets D3 and D4. Santigosa et al. (2008) reported that substitution of fish meal by plant ingredients as a protein source caused a decline in growth in two other fish species, rainbow trout Oncorhynchus mykiss and sea bream Sparus aurata. We argue that supplemental amino acids may be inefficiently utilized by C. gariepinus; this might be related to the presence of many anti-nutritional factors in plant ingredients that limit the utilization of dietary amino acids.
In addition, the palatability of FFSBM for catfish appeared to be lower than has been found for tilapia (Abdel-Warith et al., 2013), as observed in the daily feed intake records (Table 3), and certainly lower than for fishmeal-based diets for C. gariepinus. The results of this study were generally in accordance with the data obtained for yellowtail (Seriola quinqueradiata) by Shimeno et al. (1993), who also found reduced palatability in fish fed plant protein sources replacing fish meal. Another reason for the decline in feed utilization with the FFSBM diets is the inferior digestibility of plant protein (Davies et al., 2011), which might be related to the high content of anti-nutritional factors (ANFs), which inhibit the digestive enzymes.
The data in this study are in agreement with the results obtained by Luo et al. (2006), who demonstrated that diets containing more than 50% of solvent-extracted cottonseed meal fed to Oncorhynchus mykiss resulted in poorer growth performance, since the diet contained less lysine than this fish needs. In the present study, amino acid supplementation in D4, replacing 20 g/100 g of total protein in the diet, did not improve the SGR (Equation 1), FCR (Equation 2), PER (Equation 3) or ANPU (Equation 4) of C. gariepinus when compared with the other diets. This might be because C. gariepinus, as a carnivorous species, cannot fully utilize the plant ingredients; this leads us to suggest that C. gariepinus can utilize diets containing up to 15 g/100 g of total protein (about 41% FFSBM), with performance less negatively affected at the lower level of substitution than at higher levels. Shiau et al. (1990) reported that WG, FCR, PER and protein digestibility in hybrid tilapia (O. niloticus × O. aureus) can be improved by the substitution of up to 30% of the fishmeal in diets with defatted soybean and full-fat soybean. The decrease in performance observed in this study may be related to the high level of replacement with plant protein leading to an imbalance of nutrients, particularly in the composition of the protein. This may be due to an insufficient amino acid profile when FFSBM is included in D2, D3 and D4: full fat soybean contains only limited amounts of both lysine and methionine, which affects the dietary contents of these amino acids, except for D4 (Table 2). ANPU values in the current study were not greatly affected except with methionine supplementation.
Protease inhibitors, especially those in legumes, are known to reduce growth performance in the freshwater prawn Macrobrachium rosenbergii (Sriket et al., 2011). In the present study, only a low level of proteolytic activity at acidic pHs was found in the hepato-pancreas and intestine, whilst high activity was detected in the stomach (Figure 1); this is possibly related to the fact that some intra-cellular enzymes perform optimally at acidic pH. The results of the current research were also in accordance with those of El-Beltagy et al. (2004), who showed that the highest activity of a partially purified acidic protease was recorded at pH 2.5 and then declined with rising pH. In contrast, at alkaline pHs, higher proteolytic activity was observed in both the intestine and the hepato-pancreas. This agrees with the data obtained by Melo et al. (2012), who reported that the digestive tract of juvenile silver catfish showed higher alkaline protease activities in the anterior section of the intestine. Similar results have been reported in other species such as Glyptosternum maculatum (Xiong et al., 2011). Hidalgo et al. (1999) demonstrated that eels (Anguilla anguilla) have high proteolytic activity associated with a low gastrointestinal tract pH, together with significant proteolytic enzyme activities at higher gastrointestinal pHs. Das and Tripathi (1991) reported that the optimum protease activity in grass carp Ctenopharyngodon idella fingerlings and adults was found at pHs between 7.6 and 8.4.
In this experiment, however, the optimum proteolytic activity was recorded at different pHs in different organs: in the catfish intestine the optimum pH ranged between 8.5 and 10, while the catfish stomach showed optimum proteolytic activity at pH 3.0 and 4.0. By comparison, Abdel-Warith et al. (2013) reported that the optimum proteolytic activity in the tilapia intestine was at pH 7.0-8.5, whereas in the stomach it was at pH 1.5-3.0; this gives an indication that in thick-walled muscular stomachs, such as that of C. gariepinus, the pH is quite high, at around 4. Lovatto et al. (2017) argued that the higher trypsin activity in silver catfish (Rhamdia quelen) fed diets containing pumpkin seed meal represents the body's attempt to increase protein digestibility, reflected in increased proteolytic activities. Alarcon et al. (2001) found a connection between intestinal trypsin activities and protein digestibility in L. argentiventris and L. novemfasciatus; enzyme inhibitor activity appears to be offset by increased secretion of proteolytic enzymes and increased protein absorption in the distal parts of the intestine. By contrast, trypsin activities were higher in Salmo salar fed pea protein concentrate (Penn et al., 2011). Song et al. (2014) also found higher trypsin activity in Platichthys stellatus fed diets in which 15-70% of the fish meal was replaced by soybean protein hydrolysate.
Amylase activity in the different organs (stomach, liver and intestine) also varied for C. gariepinus in the present study. The highest values were measured in the liver compared with the stomach and intestine. High inclusion levels of FFSBM affected the amylase activities in the liver, with fish fed D4 (containing a high level of FFSBM with amino acid supplementation) showing higher values than those on FFSBM diets without amino acid supplementation; this indicates that the amino acid supplementation improved enzyme activities. The lower amylase value in the stomach, meanwhile, indicates that only a small amount of starch is digested before the food reaches the foregut. Al-Owafeir (1999) found that α-amylase activity was particularly prominent in Nile tilapia; this might indicate that tilapia has a greater ability to digest and utilize carbohydrates than African catfish. Relatively few investigations have been undertaken of lipase activity in African catfish.
Lipase activity in this study was shown to be slightly higher in the intestine than in the liver and stomach. Tengjaroenkul et al. (2000) reported that lipolytic enzyme activity is clearly present in Nile tilapia O. niloticus and occurs mainly in the cranial half of the intestine in the digestive tract, while Melo et al. (2012) reported that lipase activity was stimulated by the lipid content of the diets.
Based on the results of this study, we conclude that African catfish responded to diets with varying levels of FFSBM incorporation and grew favorably up to an inclusion level of about 41% of the original fishmeal component; performance was less negatively affected at the lower level of substitution, whereas the higher levels affected growth adversely, even with amino acid supplementation, and also resulted in changes in the digestive enzyme activities. Anti-nutritional factors (ANFs) associated with FFSBM possibly caused a further depression in growth rate and feed utilization efficiency, as well as negative changes to several key enzyme activity levels associated with the gastrointestinal tract. Future studies should consider how to improve the utilization of plant ingredients by adding certain materials to the diets, such as phosphorus, different ratios of amino acids, minerals and other new additives such as prebiotics and probiotics, to enhance digestive enzymes and immune responses.
The Geometry of Hida Families and \Lambda-adic Hodge Theory
We construct \Lambda-adic de Rham and crystalline analogues of Hida's ordinary \Lambda-adic etale cohomology, and by exploiting the geometry of integral models of modular curves over the cyclotomic extension of \Q_p, we prove appropriate finiteness and control theorems in each case. We then employ integral p-adic Hodge theory to prove \Lambda-adic comparison isomorphisms between our cohomologies and Hida's etale cohomology. As applications of our work, we provide a "cohomological" construction of the family of (\phi,\Gamma)-modules attached to Hida's ordinary \Lambda-adic etale cohomology by Dee, and we give a new and purely geometric proof of Hida's finiteness and control theorems. We are also able to prove refinements of theorems of Mazur-Wiles and of Ohta; in particular, we prove that there is a canonical isomorphism between the module of ordinary \Lambda-adic cuspforms and the part of the crystalline cohomology of the Igusa tower on which Frobenius acts invertibly.
1. Introduction

1.1. Motivation. In his landmark papers [Hid86a] and [Hid86b], Hida proved that the p-adic Galois representations attached to ordinary cuspidal Hecke eigenforms by Deligne ([Del71a], [Car86]) interpolate p-adic analytically in the weight variable to a family of p-adic representations whose specializations to integer weights k ≥ 2 recover the "classical" Galois representations attached to weight k cuspidal eigenforms. Hida's work paved the way for a revolution, from the pioneering work of Mazur on Galois deformations to Coleman's construction of p-adic families of finite slope overconvergent modular forms, and began a trajectory of thought whose fruits include some of the most spectacular achievements in modern number theory.
Hida's proof is constructive and has at its heart the étale cohomology of the tower of modular curves $\{X_1(Np^r)\}_r$ over $\mathbb{Q}$. More precisely, Hida considers the projective limit $H^1_{\text{ét}} := \varprojlim_r H^1_{\text{ét}}(X_1(Np^r)_{\overline{\mathbb{Q}}}, \mathbb{Z}_p)$ (taken with respect to the trace mappings), which is naturally a module for the "big" p-adic Hecke algebra $H^* := \varprojlim_r H^*_r$, which is itself an algebra over the completed group ring $\Lambda := \mathbb{Z}_p[[1+p\mathbb{Z}_p]]$. By analyzing the geometry of the tower of modular curves, Mazur and Wiles [MW86] were able to relate the inertial invariants of the local (at $p$) representation $\rho_p$ to the étale cohomology of the Igusa tower studied in [MW83], and in so doing proved that the ordinary filtration of the Galois representations attached to ordinary cuspidal eigenforms interpolates: both the inertial invariants and covariants are free of the same finite rank over $\Lambda$ and specialize to the corresponding subquotients in integral weights $k \ge 2$. As an application, they provided examples of cuspforms $f$ and primes $p$ for which the specialization of the associated Hida family of Galois representations to weight $k = 1$ is not Hodge-Tate, and so does not arise from a weight one cuspform via the construction of Deligne-Serre [DS74]. Shortly thereafter, Tilouine [Til87] clarified the geometric underpinnings of [Hid86a] and [MW86], and removed most of the restrictions on the p-component of the nebentypus of $f$. Central to both [MW86] and [Til87] is a careful study of the tower of p-divisible groups attached to the "good quotient" modular abelian varieties introduced in [MW84].
With the advent of integral p-adic Hodge theory, and in view of the prominent role it has played in furthering the trajectory initiated by Hida's work, it is natural to ask if one can construct Hodge-Tate, de Rham and crystalline analogues of $e^*H^1_{\text{ét}}$, and if so, to what extent the integral comparison isomorphisms of p-adic Hodge theory can be made to work in $\Lambda$-adic families. In [Oht95], Ohta has addressed this question in the case of Hodge cohomology. Using the invariant differentials on the tower of p-divisible groups studied in [MW86] and [Til87], Ohta constructs a $\Lambda \otimes_{\mathbb{Z}_p} \mathbb{Z}_p[\mu_{p^\infty}]$-module from which, via an integral version of the Hodge-Tate comparison isomorphism [Tat67] for ordinary p-divisible groups, he is able to recover the semisimplification of the "semilinear representation" $\rho_p \otimes \mathcal{O}_{\mathbb{C}_p}$, where $\mathbb{C}_p$ is, as usual, the p-adic completion of an algebraic closure of $\mathbb{Q}_p$. Using Hida's results, Ohta proves that his Hodge cohomology analogue of $e^*H^1_{\text{ét}}$ is free of finite rank over $\Lambda \otimes_{\mathbb{Z}_p} \mathbb{Z}_p[\mu_{p^\infty}]$ and specializes to finite level exactly as one expects. As applications of his theory, Ohta provides a construction of two-variable p-adic L-functions attached to families of ordinary cuspforms differing from that of Kitagawa [Kit94], and, in a subsequent paper [Oht00], provides a new and streamlined proof of the theorem of Mazur-Wiles [MW84] (Iwasawa's Main Conjecture for $\mathbb{Q}$; see also [Wil90]). We remark that Ohta's $\Lambda$-adic Hodge-Tate isomorphism is a crucial ingredient in the forthcoming proof of Sharifi's conjectures [Sha11], [Sha07] due to Fukaya and Kato [FK12].
1.2. Results. In this paper, we construct the de Rham and crystalline counterparts to Hida's ordinary $\Lambda$-adic étale cohomology and Ohta's $\Lambda$-adic Hodge cohomology, and we prove appropriate control and finiteness theorems in each case via a careful study of the geometry of modular curves and abelian varieties. We then prove a suitable $\Lambda$-adic version of every integral comparison isomorphism one could hope for. In particular, we are able to recover the entire family of p-adic Galois representations $\rho_p$ (and not just its semisimplification) from our $\Lambda$-adic crystalline cohomology. As a byproduct of our work, we provide geometric constructions of several of the "cohomologically elusive" semi-linear algebra objects in p-adic Hodge theory, including the family of étale $(\varphi,\Gamma)$-modules attached to $e^*H^1_{\text{ét}}$ by Dee [Dee01]. As an application of our theory, we give a new and purely geometric proof of Hida's freeness and control theorems for $e^*H^1_{\text{ét}}$.

In order to survey our main results more precisely, we introduce some notation. Fix an algebraic closure $\overline{\mathbb{Q}}_p$ of $\mathbb{Q}_p$ as well as a p-power compatible sequence $\{\varepsilon^{(r)}\}_{r \ge 0}$ of primitive $p^r$-th roots of unity in $\overline{\mathbb{Q}}_p$. We set $K_r := \mathbb{Q}_p(\mu_{p^r})$ and $K_r' := K_r(\mu_N)$, and we write $R_r$ and $R_r'$ for the rings of integers in $K_r$ and $K_r'$, respectively. Denote by $G_{\mathbb{Q}_p} := \mathrm{Gal}(\overline{\mathbb{Q}}_p/\mathbb{Q}_p)$ the absolute Galois group and by $\mathcal{H}$ the kernel of the p-adic cyclotomic character $\chi: G_{\mathbb{Q}_p} \to \mathbb{Z}_p^\times$. We write $\Gamma := G_{\mathbb{Q}_p}/\mathcal{H} \simeq \mathrm{Gal}(K_\infty/K_0)$ for the quotient and, using that $K_0'/\mathbb{Q}_p$ is unramified, we canonically identify $\Gamma$ with $\mathrm{Gal}(K_\infty'/K_0')$. We will denote by $\langle u \rangle$ (respectively $\langle v \rangle_N$) the diamond operator in $H^*$ attached to $u^{-1} \in \mathbb{Z}_p^\times$ (respectively $v^{-1} \in (\mathbb{Z}/N\mathbb{Z})^\times$); note that $\langle u^{-1} \rangle = \langle u \rangle^*$ and $\langle v^{-1} \rangle_N = \langle v \rangle_N^*$, where $\langle \cdot \rangle^*$ and $\langle \cdot \rangle_N^*$ are the adjoint diamond operators (see §2.3). We write $\Delta_r$ for the image of the restriction of $\langle \cdot \rangle : \mathbb{Z}_p^\times \to H^*$ to $1+p^r\mathbb{Z}_p \subseteq \mathbb{Z}_p^\times$. For convenience, we put $\Delta := \Delta_1$, and for any ring $A$ we write $\Lambda_A := \varprojlim_r A[\Delta/\Delta_r]$ for the completed group ring on $\Delta$ over $A$; if $\varphi$ is an endomorphism of $A$, we again write $\varphi$ for the induced endomorphism of $\Lambda_A$ that acts as the identity on $\Delta$. Finally, we denote by $X_r := X_1(Np^r)$ the usual modular curve over $\mathbb{Q}$ classifying (generalized) elliptic curves with a $[\mu_{Np^r}]$-structure, and by $J_r := J_1(Np^r)$ its Jacobian.
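For ease of reference, the basic objects just introduced may be collected in display form. The following is only a restatement of the definitions above; here $e^*$ denotes the ordinary idempotent attached to the adjoint operator $U_p^*$, an identification supplied for orientation, since the idempotent is used without further comment in what follows:

\[
\Lambda_A := \varprojlim_r A[\Delta/\Delta_r], \qquad
\Lambda := \Lambda_{\mathbb{Z}_p}, \qquad
H^* := \varprojlim_r H^*_r,
\]
\[
e^*H^1_{\text{ét}} := e^*\varprojlim_r H^1_{\text{ét}}\bigl(X_1(Np^r)_{\overline{\mathbb{Q}}},\mathbb{Z}_p\bigr),
\]

with the projective limits taken along the trace mappings.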
Our first task is to construct a de Rham analogue of Hida's $e^*H^1_{\text{ét}}$. A naïve idea would be to mimic Hida's construction, using the (relative) de Rham cohomology of $\mathbb{Z}_p$-integral models of the modular curves $X_r$ in place of p-adic étale cohomology. However, this approach fails due to the fact that $X_r$ has bad reduction at $p$, so the relative de Rham cohomology of integral models does not provide good $\mathbb{Z}_p$-lattices in the de Rham cohomology of $X_r$ over $\mathbb{Q}_p$. To address this problem, we use the canonical integral structures in de Rham cohomology studied in [Cai09] and the canonical integral model $\mathcal{X}_r$ of $X_r$ over $R_r$ associated to the moduli problem $([\mathrm{bal.}\ \Gamma_1(p^r)]^{\varepsilon^{(r)}\text{-can}}; [\mu_N])$ [KM85] to construct well-behaved integral "de Rham cohomology" for the tower of modular curves. For each $r$, we obtain a short exact sequence (1.2.1) of free $R_r$-modules with semilinear $\Gamma$-action and commuting $H^*_r$-action which is co(ntra)variantly functorial in finite $K_r$-morphisms of the generic fiber $X_r$, and whose scalar extension to $K_r$ recovers the Hodge filtration of $H^1_{\mathrm{dR}}(X_r/K_r)$. Extending scalars to $R_\infty$ and taking projective limits, we obtain a short exact sequence (1.2.2) of $\Lambda_{R_\infty}$-modules with semilinear $\Gamma$-action and commuting linear $H^*$-action. Our first main result (see Theorem 5.2.3) is that the ordinary part of (1.2.2) is the correct de Rham analogue of Hida's ordinary $\Lambda$-adic étale cohomology:

Theorem 1.2.1. There is a canonical short exact sequence of finite free $\Lambda_{R_\infty}$-modules with semilinear $\Gamma$-action and commuting linear $H^*$-action
$$0 \to e^*H^0(\omega) \to e^*H^1_{\mathrm{dR}} \to e^*H^1(\mathcal{O}) \to 0. \tag{1.2.3}$$
As a $\Lambda_{R_\infty}$-module, $e^*H^1_{\mathrm{dR}}$ is free of rank $2d$, while each of the flanking terms in (1.2.3) is free of rank $d$, for $d = \sum_{k=3}^{p+1}\dim_{\mathbb{F}_p}S_k(\Gamma_1(N);\mathbb{F}_p)^{\mathrm{ord}}$. Applying $\otimes_{\Lambda_{R_\infty}}R_\infty[\Delta/\Delta_r]$ to (1.2.3) recovers the ordinary part of the scalar extension of (1.2.1) to $R_\infty$.
We then show that the $\Lambda_{R_\infty}$-adic Hodge filtration (1.2.3) is very nearly "auto-dual". To state our duality result more succinctly, for any ring homomorphism $A \to B$ we will write $(\cdot)_B := (\cdot)\otimes_A B$ and $(\cdot)^\vee_B := \mathrm{Hom}_B((\cdot)\otimes_A B, B)$ for these functors from $A$-modules to $B$-modules. If $G$ is any group of automorphisms of $A$ and $M$ is an $A$-module with a semilinear action of $G$, for any "crossed" homomorphism $\psi: G \to A^\times$ (that is, one satisfying $\psi(\sigma\tau) = \psi(\sigma)\cdot\sigma\psi(\tau)$ for all $\sigma,\tau$) we will write $M(\psi)$ for the $A$-module $M$ with "twisted" semilinear $G$-action given by $g \cdot m := \psi(g)gm$. Our duality theorem is (see Proposition 5.2.4):

Theorem 1.2.2. The natural cup-product auto-duality of (1.2.1) over $R_r' := R_r[\mu_N]$ induces a canonical $\Lambda_{R_\infty'}$-linear and $H^*$-equivariant isomorphism of exact sequences that is compatible with the natural action of $\Gamma\times\mathrm{Gal}(K_0'/K_0) \simeq \mathrm{Gal}(K_\infty'/K_0)$ on the bottom row and the twist of the natural action on the top row by the $H^*$-valued character $\chi\cdot\langle a\rangle_N$, where $a(\gamma)\in(\mathbb{Z}/N\mathbb{Z})^\times$ is determined for $\gamma\in\mathrm{Gal}(K_0'/K_0)$ by $\zeta^{a(\gamma)} = \gamma\zeta$ for every $N$-th root of unity $\zeta$.
We moreover prove that, as one would expect, the $\Lambda_{R_\infty}$-module $e^*H^0(\omega)$ is canonically isomorphic to the module $eS(N,\Lambda_{R_\infty})$ of ordinary $\Lambda_{R_\infty}$-adic cusp forms of tame level $N$; see Corollary 5.3.5.
To go further, we study the tower of p-divisible groups attached to the "good quotient" modular abelian varieties introduced by Mazur-Wiles [MW84]. To avoid technical complications with logarithmic p-divisible groups, following [MW86] and [Oht95], we will henceforth remove the trivial tame character by working with the sub-idempotent $e^{*\prime}$ of $e^*$ corresponding to projection to the part where $\mu_{p-1}\subseteq\mathbb{Z}_p^\times$ acts non-trivially (through the diamond operators). As is well-known (e.g. [Hid86a, §9] and [MW84, Chapter 3, §2]), the p-divisible group $G_r := e^{*\prime}J_r[p^\infty]$ over $\mathbb{Q}$ extends to a p-divisible group $\mathcal{G}_r$ over $R_r$, and we write $\overline{\mathcal{G}}_r := \mathcal{G}_r\times_{R_r}\overline{\mathbb{F}}_p$ for its special fiber. Denoting by $\mathbf{D}(\cdot)$ the contravariant Dieudonné module functor on p-divisible groups over $\overline{\mathbb{F}}_p$, we form the projective limits
$$\mathbf{D}^\star_\infty := \varprojlim_r \mathbf{D}(\overline{\mathcal{G}}^\star_r), \qquad \star\in\{\mathrm{m},\text{ét},\emptyset\}, \tag{1.2.4}$$
taken along the mappings induced by $\mathcal{G}_r\to\mathcal{G}_{r+1}$. Each of these is naturally a $\Lambda$-module equipped with linear (!) Frobenius $F$ and Verschiebung $V$ morphisms satisfying $FV = VF = p$, as well as a linear action of $H^*$ and a "geometric inertia" action of $\Gamma$, which reflects the fact that the generic fiber of $\mathcal{G}_r$ descends to $\mathbb{Q}_p$. The $\Lambda$-modules (1.2.4) have the expected structure (see Theorem 5.5.2):

Theorem 1.2.3. There is a canonical split short exact sequence
$$0\to\mathbf{D}^{\text{ét}}_\infty\to\mathbf{D}_\infty\to\mathbf{D}^{\mathrm{m}}_\infty\to 0 \tag{1.2.5}$$
of finite and free $\Lambda$-modules with linear $H^*$- and $\Gamma$-actions. As a $\Lambda$-module, $\mathbf{D}_\infty$ is free of rank $2d'$, while $\mathbf{D}^{\text{ét}}_\infty$ and $\mathbf{D}^{\mathrm{m}}_\infty$ are free of rank $d'$, where $d' := \sum_{k=3}^{p}\dim_{\mathbb{F}_p}S_k(\Gamma_1(N);\mathbb{F}_p)^{\mathrm{ord}}$. For $\star\in\{\mathrm{m},\text{ét},\emptyset\}$, there are canonical isomorphisms, compatible with the extra structures, recovering the finite-level Dieudonné modules. Via the canonical splitting of (1.2.5), $\mathbf{D}^\star_\infty$ for $\star=\text{ét}$ (respectively $\star=\mathrm{m}$) is identified with the maximal subspace of $\mathbf{D}_\infty$ on which $F$ (respectively $V$) acts invertibly. The Hecke operator $U_p^*\in H^*$ acts as $F$ on $\mathbf{D}^{\text{ét}}_\infty$ and as $\langle p\rangle_N V$ on $\mathbf{D}^{\mathrm{m}}_\infty$, while $\Gamma$ acts trivially on $\mathbf{D}^{\text{ét}}_\infty$ and via $\chi(\cdot)^{-1}$ on $\mathbf{D}^{\mathrm{m}}_\infty$.
We likewise have the appropriate "Dieudonné" analogue of Theorem 1.2.2 (see Proposition 5.5.3):

Theorem 1.2.4. There is a canonical $H^*$-equivariant isomorphism of exact sequences of $\Lambda_{R_0'}$-modules that is $\Gamma\times\mathrm{Gal}(K_0'/K_0)$-equivariant, and intertwines $F$ (respectively $V$) on the top row with $V^\vee$ (respectively $F^\vee$) on the bottom.

Just as Mazur-Wiles are able to relate the ordinary filtration of $e^*H^1_{\text{ét}}$ to the étale cohomology of the Igusa tower, we can interpret the slope filtration (1.2.5) in terms of the crystalline cohomology of the Igusa tower as follows. For each $r$, let $I^\infty_r$ and $I^0_r$ be the two "good" irreducible components of $\mathcal{X}_r\times_{R_r}\mathbb{F}_r$ (see Remark 2.3.12), each of which is isomorphic to the Igusa curve $\mathrm{Ig}(p^r)$ of tame level $N$ and $p$-level $p^r$. For $\star\in\{0,\infty\}$ we form the projective limit $H^1_{\mathrm{cris}}(I^\star) := \varprojlim_r H^1_{\mathrm{cris}}(I^\star_r)$ with respect to the trace mappings on crystalline cohomology induced by the canonical degeneracy maps on Igusa curves. Then $H^1_{\mathrm{cris}}(I^\star)$ is naturally a $\Lambda$-module with linear Frobenius $F$ and Verschiebung $V$ endomorphisms. Letting $f$ be the idempotent of $\Lambda$ corresponding to projection to the part where $\mu_{p-1}\subseteq\Delta\to\Lambda$ acts nontrivially, we prove (see Theorem 5.5.4):

Theorem 1.2.5. There is a canonical isomorphism of $\Lambda$-modules, compatible with $F$ and $V$,
$$\mathbf{D}_\infty \simeq fH^1_{\mathrm{cris}}(I^0)^{V\text{-}\mathrm{ord}} \oplus fH^1_{\mathrm{cris}}(I^\infty)^{F\text{-}\mathrm{ord}},$$
which preserves the direct sum decompositions of source and target. This isomorphism is Hecke- and $\Gamma$-equivariant, with $U_p^*$ and $\Gamma$ acting as $\langle p\rangle_N V\oplus F$ and $\chi(\cdot)^{-1}\oplus\mathrm{id}$, respectively, on each direct sum.
We note that our "Dieudonné module" analogue (1.2.6) is a significant sharpening of itsétale counterpart [MW86,§4], which is formulated only up to isogeny (i.e. after inverting p). From D ∞ , we can recover the Λ-adic Hodge filtration (1.2.3), so the latter is canonically split (see Theorem 5.5.7): Theorem 1.2.6. There is a canonical Γ and H * -equivariant isomorphism of exact sequences where the mappings on bottom row are the canonical inclusion and projection morphisms corresponding to the direct sum decomposition D ∞ = D m ∞ ⊕ Dé t ∞ . In particular, the Hodge filtration exact sequence (1.2.3) is canonically split, and admits a canonical descent to Λ.
We remark that under the identification (1.2.7), the Hodge filtration (1.2.3) and the slope filtration (1.2.5) correspond, but in opposite directions. As a consequence of Theorem 1.2.6, we deduce (see Corollary 5.5.8 and Remark 5.5.9):

Corollary 1.2.7. There is a canonical isomorphism of finite free $\Lambda$- (respectively $\Lambda_{R_0'}$-) modules that intertwines $T\in H := \varprojlim H_r$ with $T^*\in H^*$, where we let $U_p^*$ act as $\langle p\rangle_N V$ on $\mathbf{D}^{\mathrm{m}}_\infty$ and as $F$ on $\mathbf{D}^{\text{ét}}_\infty$. The second of these isomorphisms is in addition $\mathrm{Gal}(K_0'/K_0)$-equivariant.

We are also able to recover the semisimplification of $e^*H^1_{\text{ét}}$ from $\mathbf{D}_\infty$. Writing $I\subseteq G_{\mathbb{Q}_p}$ for the inertia subgroup at $p$, for any $\mathbb{Z}_p[G_{\mathbb{Q}_p}]$-module $M$ we denote by $M^I$ (respectively $M_I := M/M^I$) the submodule (respectively quotient module) of invariants (respectively covariants) under $I$. Writing $\sigma$ for the Frobenius automorphism of $W(\overline{\mathbb{F}}_p)$, the isomorphism (1.2.8a) intertwines $F\otimes\sigma$ with $\mathrm{id}\otimes\sigma$ and $\mathrm{id}\otimes g$ with $g\otimes g$ for $g\in G_{\mathbb{Q}_p}$, whereas (1.2.8b) intertwines $V\otimes\sigma^{-1}$ with $\mathrm{id}\otimes\sigma^{-1}$ and $g\otimes g$ with $g\otimes g$, where $g\in G_{\mathbb{Q}_p}$ acts on the Tate twist $\mathbf{D}^{\mathrm{m}}_\infty(-1) := \mathbf{D}^{\mathrm{m}}_\infty\otimes_{\mathbb{Z}_p}\mathbb{Z}_p(-1)$ as $\chi(g)^{-1}\otimes\chi(g)^{-1}$. Theorem 1.2.8 gives the following "explicit" description of the semisimplification of $e^*H^1_{\text{ét}}$:

Corollary 1.2.9. For any $T\in(H^*_{\mathrm{ord}})^\times$, let $\lambda(T): G_{\mathbb{Q}_p}\to(H^*_{\mathrm{ord}})^\times$ be the unique continuous (for the $p$-adic topology on $H^*_{\mathrm{ord}}$) unramified character whose value on (any lift of) $\mathrm{Frob}_p$ is $T$. Then $G_{\mathbb{Q}_p}$ acts on $(e^*H^1_{\text{ét}})^I$ through the character $\lambda((U_p^*)^{-1})$ and on $(e^*H^1_{\text{ét}})_I$ through $\chi^{-1}\cdot\langle\chi\rangle^{-1}\lambda(\langle p\rangle_N^{-1}U_p^*)$.

We remark that Corollary 1.2.7 and Theorem 1.2.8 combined give a refinement of the main result of [Oht95]. We are furthermore able to recover the main theorem of [MW86].

To recover the full $\Lambda$-adic local Galois representation $e^*H^1_{\text{ét}}$, rather than just its semisimplification, it is necessary to work with the full Dieudonné crystal of $\mathcal{G}_r$ over $R_r$. Following Faltings [Fal99] and Breuil (e.g. [Bre00]), this is equivalent to studying the evaluation of the Dieudonné crystal of $\mathcal{G}_r\times_{R_r}R_r/pR_r$ on the "universal" divided power thickening $S_r\twoheadrightarrow R_r/pR_r$, where $S_r$ is the $p$-adically completed PD-hull of the surjection $\mathbb{Z}_p[[u_r]]\twoheadrightarrow R_r$ sending $u_r$ to $\varepsilon^{(r)}-1$. As the rings $S_r$ are too unwieldy to directly construct a good crystalline analogue of Hida's ordinary étale cohomology, we must functorially descend the "filtered $S_r$-module" attached to $\mathcal{G}_r$ to the much simpler ring $\mathfrak{S}_r := \mathbb{Z}_p[[u_r]]$. While such a descent is provided (in rather different ways) by the work of Breuil-Kisin and Berger-Wach, neither of these frameworks is suitable for our application: it is essential for us that the formation of this descent to $\mathfrak{S}_r$ commute with base change as one moves up the cyclotomic tower, and it is not at all clear that this holds for Breuil-Kisin modules or for the Wach modules of Berger. Instead, we use the theory of [CL12], which works with frames and windows à la Lau and Zink to provide the desired functorial descent to a "$(\varphi,\Gamma)$-module" $\mathfrak{M}_r(\mathcal{G}_r)$ over $\mathfrak{S}_r$. We view $\mathfrak{S}_r$ as a $\mathbb{Z}_p$-subalgebra of $\mathfrak{S}_{r+1}$ via the map sending $u_r$ to $\varphi(u_{r+1}) := (1+u_{r+1})^p - 1$, and we write $\mathfrak{S}_\infty := \varinjlim\mathfrak{S}_r$ for the rising union of the $\mathfrak{S}_r$, equipped with its Frobenius automorphism $\varphi$ and commuting action of $\Gamma$ determined by $\gamma u_r := (1+u_r)^{\chi(\gamma)} - 1$. We then form the projective limits
$$\mathbf{M}^\star_\infty := \varprojlim_r \mathfrak{M}_r(\mathcal{G}^\star_r), \qquad \star\in\{\text{ét},\mathrm{m},\emptyset\},$$
taken along the mappings induced by $\mathcal{G}_r\times_{R_r}R_{r+1}\to\mathcal{G}_{r+1}$ via the functoriality of $\mathfrak{M}_r(\cdot)$ and its compatibility with base change.
These are $\Lambda_{\mathfrak{S}_\infty}$-modules equipped with a semilinear action of $\Gamma$, a linear and commuting action of $H^*$, and a commuting $\varphi$- (respectively $\varphi^{-1}$-) semilinear endomorphism $F$ (respectively $V$) satisfying $FV = VF = \omega$, for $\omega := \varphi(u_1)/u_1 = u_0/\varphi^{-1}(u_0)\in\mathfrak{S}_\infty$, and they provide our crystalline analogue of Hida's ordinary étale cohomology (see Theorem 5.6.2):

Theorem 1.2.11. There is a canonical short exact sequence of finite free $\Lambda_{\mathfrak{S}_\infty}$-modules with linear $H^*$-action, semilinear $\Gamma$-action, and commuting semilinear endomorphisms $F$ and $V$.

Again, in the spirit of Theorems 1.2.2 and 1.2.4, there is a corresponding "autoduality" result for $\mathbf{M}_\infty$ (see Theorem 5.6.4). To state it, we must work over the analogue $\mathfrak{S}_\infty'$ of $\mathfrak{S}_\infty$ over $R_0'$, with the inductive limit taken along the $\mathbb{Z}_p$-algebra maps sending $u_r$ to $\varphi(u_{r+1})$.
Theorem 1.2.12. Let $\mu: \Gamma\to\Lambda^\times_{\mathfrak{S}_\infty}$ be the crossed homomorphism given by $\mu(\gamma) := \frac{u_1}{\gamma u_1}\,\chi(\gamma)\langle\chi(\gamma)\rangle$. There is a canonical $H^*$- and $\mathrm{Gal}(K_\infty'/K_0)$-compatible isomorphism of exact sequences.

From $\mathbf{M}_\infty$ we can recover $\mathbf{D}_\infty$ and $e^*H^1_{\mathrm{dR}}$, with their additional structures (see Theorem 5.6.6):

Theorem 1.2.14. Viewing $\Lambda$ as a $\Lambda_{\mathfrak{S}_\infty}$-algebra via the map induced by $u_r\mapsto 0$, there is a canonical isomorphism of short exact sequences of finite free $\Lambda$-modules which is $\Gamma$- and $H^*$-equivariant and carries $F\otimes 1$ to $F$ and $V\otimes 1$ to $V$. Viewing $\Lambda_{R_\infty}$ as a $\Lambda_{\mathfrak{S}_\infty}$-algebra via the map $u_r\mapsto(\varepsilon^{(r)})^p - 1$, there is a canonical isomorphism of short exact sequences of $\Lambda_{R_\infty}$-modules, with $i$ and $j$ the canonical sections given by the splitting in Theorem 1.2.6.
To recover Hida's ordinary étale cohomology from $\mathbf{M}_\infty$, we introduce the "period" ring of Fontaine $\widetilde{\mathbf{E}}^+ := \varprojlim\mathcal{O}_{\mathbb{C}_p}/(p)$, with the projective limit taken along the $p$-power mapping; this is a perfect valuation ring of characteristic $p$ equipped with a canonical action of $G_{\mathbb{Q}_p}$ via "coordinates". We write $\widetilde{\mathbf{E}}$ for the fraction field of $\widetilde{\mathbf{E}}^+$ and $\widetilde{\mathbf{A}} := W(\widetilde{\mathbf{E}})$ for its ring of Witt vectors, equipped with its canonical Frobenius automorphism $\varphi$ and $G_{\mathbb{Q}_p}$-action induced by Witt functoriality. Our fixed choice of $p$-power compatible sequence $\{\varepsilon^{(r)}\}$ determines an element $\underline{\varepsilon} := (\varepsilon^{(r)}\bmod p)_{r\ge 0}$ of $\widetilde{\mathbf{E}}^+$, and we $\mathbb{Z}_p$-linearly embed $\mathfrak{S}_\infty$ into $\widetilde{\mathbf{A}}$ by sending $u_r$ to $[\underline{\varepsilon}^{1/p^r}]-1$, where $[\cdot]$ is the Teichmüller section. This embedding is $\varphi$- and $G_{\mathbb{Q}_p}$-compatible, with $G_{\mathbb{Q}_p}$ acting on $\mathfrak{S}_\infty$ through the quotient $G_{\mathbb{Q}_p}\twoheadrightarrow\Gamma$.
The resulting comparison isomorphism is $G_{\mathbb{Q}_p}$-equivariant for the "diagonal" action of $G_{\mathbb{Q}_p}$ (with $G_{\mathbb{Q}_p}$ acting on $\mathbf{M}_\infty$ through $\Gamma$) and intertwines $F\otimes\varphi$ with $\mathrm{id}\otimes\varphi$ and $V\otimes\varphi^{-1}$ with $\mathrm{id}\otimes\varphi^{-1}$. In particular, there is a canonical isomorphism of $\Lambda$-modules, compatible with the actions of $H^*$ and $G_{\mathbb{Q}_p}$. The $\Lambda$-adic splitting of the ordinary filtration of $e^*H^1_{\text{ét}}$ was considered by Ghate and Vatsal [GV04], who prove (under certain technical hypotheses of a "deformation-theoretic nature") that if the $\Lambda$-adic family $\mathcal{F}$ associated to a cuspidal eigenform $f$ is primitive and $p$-distinguished, then the associated $\Lambda$-adic local Galois representation $\rho_{\mathcal{F},p}$ is split if and only if some arithmetic specialization of $\mathcal{F}$ has CM [GV04, Theorem 13]. We interpret the $\Lambda$-adic splitting of the ordinary filtration as follows:

Theorem 1.2.18. The short exact sequence (1.2.10) admits a $\Lambda_{\mathfrak{S}_\infty}$-linear splitting which is compatible with $F$, $V$, and $\Gamma$ if and only if the ordinary filtration of $e^*H^1_{\text{ét}}$ admits a $\Lambda$-linear splitting which is compatible with the action of $G_{\mathbb{Q}_p}$.
1.3. Overview of the article. Section 2 is preliminary: we review the integral $p$-adic cohomology theories of [Cai09] and [Cai10], and summarize the relevant facts concerning integral models of modular curves from [KM85] that we will need. Of particular importance is a description of the $U_p$-correspondence in characteristic $p$, due to Ulmer [Ulm90], and recorded in Proposition 2.3.20.
In §3, we study the de Rham and crystalline cohomology of the Igusa tower, and prove the key "freeness and control" theorems that form the technical characteristic-$p$ backbone of this paper. Via an almost combinatorial argument using the description of $U_p$ in characteristic $p$, we then relate the cohomology of the Igusa tower to the mod-$p$ reduction of the ordinary part of the (integral $p$-adic) cohomology of the modular tower.
Section 4 is a summary of the theory developed in [CL12], which uses Dieudonné crystals of $p$-divisible groups to provide a "cohomological" construction of the $(\varphi,\Gamma)$-modules attached to potentially Barsotti-Tate representations. It is precisely this theory that allows us to construct our crystalline analogue of Hida's ordinary $\Lambda$-adic étale cohomology.
Section 5 constitutes the main body of this paper, and the reader who is content to refer back to §2-4 as needed should skip directly there. In §5.1, we develop a commutative algebra formalism for working with projective limits of "towers" of cohomology that we use frequently in the sequel. Using the canonical lattices in de Rham cohomology studied in [Cai09] (and reviewed in §2.1), we construct our $\Lambda$-adic de Rham analogue of Hida's ordinary $\Lambda$-adic étale cohomology in §5.2, and we show that the expected freeness and control results follow by reduction to characteristic $p$ from the structure theorems for the de Rham cohomology of the Igusa tower established in §3. Using work of Ohta [Oht95], in §5.3 we relate the Hodge filtration of our $\Lambda$-adic de Rham cohomology to the module of $\Lambda$-adic cuspforms. In §5.4, we study the tower of $p$-divisible groups whose cohomology allows us to construct our $\Lambda$-adic Dieudonné and crystalline analogues of Hida's étale cohomology in §5.5 and §5.6, respectively. We establish $\Lambda$-adic comparison isomorphisms between each of these cohomologies using the integral comparison isomorphisms of [Cai10] and [CL12], recalled in §2.2 and §4.1, respectively. This enables us to give a new proof of Hida's freeness and control theorems and of Ohta's duality theorem in §5.6.
As remarked in §1.2, and following [Oht95] and [MW86], our construction of the $\Lambda$-adic Dieudonné and crystalline counterparts to Hida's étale cohomology excludes the trivial eigenspace for the action of $\mu_{p-1}\subseteq\mathbb{Z}_p^\times$ so as to avoid technical complications with logarithmic $p$-divisible groups. In [Oht00], Ohta uses the "fixed part" (in the sense of Grothendieck [Gro72, 2.2.3]) of Néron models with semiabelian reduction to extend his results on $\Lambda$-adic Hodge cohomology to allow trivial tame nebentypus character. We are confident that by using Kato's logarithmic Dieudonné theory [Kat89] one can appropriately generalize our results in §5.5 and §5.6 to include the missing eigenspace for the action of $\mu_{p-1}$.
1.4. Notation. If $\varphi: A\to B$ is any map of rings, we will often write $M_B := M\otimes_A B$ for the $B$-module induced from an $A$-module $M$ by extension of scalars. When we wish to specify $\varphi$, we will write $M\otimes_{A,\varphi}B$. Likewise, if $\varphi: T'\to T$ is any morphism of schemes, for any $T$-scheme $X$ we denote by $X_{T'}$ the base change of $X$ along $\varphi$. If $f: X\to Y$ is any morphism of $T$-schemes, we will write $f_{T'}: X_{T'}\to Y_{T'}$ for the morphism of $T'$-schemes obtained from $f$ by base change along $\varphi$. When $T = \operatorname{Spec}(R)$ and $T' = \operatorname{Spec}(R')$ are affine, we abuse notation and write $X_{R'}$ or $X\times_R R'$ for $X_{T'}$.
We will frequently work with schemes over a discrete valuation ring $R$. We will often write $\mathcal{X}, \mathcal{Y},\ldots$ for schemes over $\operatorname{Spec}(R)$, and will generally use $X, Y,\ldots$ (respectively $\overline{X}, \overline{Y},\ldots$) for their generic (respectively special) fibers.
2. Preliminaries
This somewhat long section is devoted to recalling the geometric background we will need in our constructions. Much (though not all) of this material is contained in [Cai09], [Cai10] and [KM85].
2.1. Dualizing sheaves and de Rham cohomology. We begin by describing a certain modification of the usual de Rham complex for non-smooth curves. The hypercohomology of this (two-term) complex is in general much better behaved than algebraic de Rham cohomology and will enable us to construct our Λ-adic de Rham cohomology. We largely refer to [Cai09], but remark that our treatment here is different in some places and better suited to our purposes.
Definition 2.1.1. A curve over a scheme $S$ is a morphism $f: X\to S$ of finite presentation which is a flat local complete intersection of pure relative dimension 1 with geometrically reduced fibers. We will often say that $X$ is a curve over $S$ or that $X$ is a relative $S$-curve when $f$ is clear from context.

Lemma 2.1.2. Let $f: X\to S$ be a flat morphism of finite presentation. The following are equivalent:
(1) The morphism $f: X\to S$ is a curve.
(2) For every $s\in S$, the fiber $f_s: X_s\to\operatorname{Spec} k(s)$ is a curve.
(3) For every $x\in X$ with $s = f(x)$, the local ring $\mathcal{O}_{X_s,x}$ is a complete intersection and $f$ has geometrically reduced fibers of pure dimension 1.
Moreover, any base change of a curve is again a curve.
Proof. Since $f$ is flat and of finite presentation, the definition of local complete intersection that we are using (i.e. [...]) [...]

Corollary 2.1.3. Let $f: X\to S$ be a finite type morphism of pure relative dimension 1.
(1) If $f$ is smooth, then it is a curve.
(2) If $X$ and $S$ are regular and $f$ has geometrically reduced fibers, then $f$ is a curve.
(3) If $f$ is a curve, then it is Gorenstein and hence also Cohen-Macaulay.
Proof. The assertion (1) Fix a relative curve f : X → S. We wish to apply Grothendieck duality theory to f , so we henceforth assume that S is a noetherian scheme of finite Krull dimension 9 that is Gorenstein and excellent, so that that O S is a dualizing complex for S [Har66, V, §10]. Since f is CM by Corollary 2.1.3 (3), by [Con00, Theorem 3.5.1]) the relative dualizing complex f ! O S has a unique nonzero cohomology sheaf, which is in degree −1, and we define the relative dualizing sheaf for X over S (or for f ) to be: Since the fibers of f are Gorenstein, ω X/S is an invertible O X -module by [Har66, V, Proposition 9.3, Theorem 9.1]. The formation of ω X/S is compatible with arbitrary base change on S andétale localization on X [Con00, Theorem 3.6.1].
Remark 2.1.4. Since S is Gorenstein and of finite Krull dimension and f ! carries dualizing complexes for S to dualizing complexes for X (see [Har66,V,§8]), the sheaf ω X/S (thought of as a complex concentrated in some degree) is a dualizing complex for the abstract scheme X.
Proposition 2.1.5. Let X → S be a relative curve. There is a canonical map of O X -modules whose formation commutes with any base change S → S, where S is noetherian of finite Krull dimension, Gorenstein, and excellent. Moreover, the restriction of c X/S to any S-smooth subscheme of X is an isomorphism. Definition 2.1.6. We define the two-term O S -linear complex (of O S -flat coherent O X -modules) concentrated in degrees 0 and 1 where d S is the composite of the map (2.1.1) and the universal O S -derivation O X → Ω 1 X/S . We view ω • X/S as a filtered complex via "la filtration bête" [Del71b], which provides an exact triangle in the derived category that we call the Hodge Filtration of ω • X/S . Since c X/S is an isomorphism over the S-smooth locus X sm of f in X, the complex ω • X/S coincides with the usual de Rham complex over X sm . Moreover, it follows immediately from Proposition 2.1.5 that the formation of ω • X/S is compatible with any base change S → S to a noetherian scheme S of finite Krull dimension that is Gorenstein and excellent.
Definition 2.1.7. Let f : X → S relative curve over S. For each nonnegative integer i, we define The complex ω • X/S and its filtration (2.1.3) behave extremely well with respect to duality: Proposition 2.1.8. Let f : X → S be a proper curve over S. There is a canonical quasi-isomorphism ) which is compatible with the filtrations on both sides induced by (2.1.3). In particular: (1) There is a natural quasi-isomorphism which is compatible with the filtrations induced by (2.1.3).
(2) If ρ : Y → X is any finite morphism of proper curves over S, then there is a canonical quasi-isomorphism . that is compatible with filtrations.
Proof. For the first claim, see the proofs of Lemmas 4.3 and 5.4 in [Cai09], noting that although S is assumed to be the spectrum of a discrete valuation ring and the definition of curve in that paper differs somewhat from the definition here, the arguments themselves apply verbatim in our context. The assertion (1) (respectvely (2)) follows from this by applying Rf * (respectively Rρ * ) to both sides of (2.1.4) and appealing to Grothendieck duality [Con00, Theorem 3.4.4] for the proper map f (respectively ρ); see the proofs of Lemma 5.4 and Proposition 5.8 in [Cai09] for details.
In our applications, we need to understand the cohomology H i (X/S) for a proper curve X → S when S is either the spectrum of a discrete valuation ring R of mixed characteristic (0, p) or the spectrum of a perfect field. We now examine each of these situations in more detail.
First suppose that S := Spec(R) is the spectrum of a discrete valuation ring R having field of fractions K of characteristic zero and perfect residue field k of characteristic p > 0, and fix a normal curve f : X → S that is proper over S with smooth and geometrically connected generic fiber X K . This situation is studied extensively in [Cai09], and we content ourselves with a summary of the results we will need. To begin, we recall the following "concrete" description of the relative dualizing sheaf: Lemma 2.1.9. Let i : U → X be any Zariski open subscheme of X whose complement consists of finitely many points of codimension 2 (necessarily in the closed fiber of X). Then the canonical map is an isomorphism. In particular, ω X/S i * Ω 1 U/S for any Zariski open subscheme i : U → X sm whose complement consists of finitely many points of codimension two.
Proof. The first assertion is [Cai10, Lemma 3.2]. The second follows from this, since X sm contains the generic fiber and the generic points of the closed fiber by our definition of curve.
Proposition 2.1.10. Let ρ : Y → X be a finite morphism of normal and proper S-curves.
(1) Attached to ρ are natural pullback and trace morphisms of complexes which are of formation compatible withétale localization on X and flat base change on S and are dual via the duality of Proposition 2.1.8 (2).
(2) For any S-smooth point y ∈ Y sm with image x := ρ(y) that lies in X sm , the induced mappings Proof. The assertions of (1) follow from the proofs of Propositions 4.5 and 5.5 of [Cai09], while (2) is a straightforward consequence of the very construction of ρ * and ρ * as given in [Cai09,§4].
Since the generic fiber of X is a smooth and proper curve over K, the Hodge to de Rham spectral sequence degenerates [DI87], and there is a functorial short exact sequence of K-vector spaces which we call the Hodge filtration of H 1 dR (X K /K). Proposition 2.1.11. Let f : X → S be a normal curve that is proper over S = Spec(R).
(1) There are natural isomorphisms of free R-modules of rank 1 which are canonically R-linearly dual to each other.
(2) There is a canonical short exact sequence of finite free R-modules, which we denote H(X/R), that recovers the Hodge filtration (2.1.5) of H 1 dR (X K /K) after extending scalars to K.
(3) Via the canonical cup-product auto-duality of (2.1.5), the exact sequence H(X/R) is naturally isomorphic to its R-linear dual.
(4) The exact sequence H(X/R) is contravariantly (respectively covariantly) functorial in finite morphisms ρ : Y → X of normal and proper S-curves via pullback ρ * (respectively trace ρ * ); these morphisms recover the usual pullback and trace mappings on Hodge filtrations after extending scalars to K and are adjoint with respect to the canonical cup-product autoduality of H(X/R) in (3).
We now turn to the case that S = Spec(k) for a perfect field k and f : X → S is a proper and geometrically connected curve over k. Recall that X is required to be geometrically reduced, so that the k-smooth locus U := X sm is the complement of finitely many closed points in X.
Proposition 2.1.12. Let X be a proper and geometrically connected curve over k.
(1) There are natural isomorphisms of 1-dimensional k-vector spaces which are canonically k-linearly dual to each other.
(2) There is a natural short exact sequence, which we denote H(X/k) which is canonically isomorphic to its own k-linear dual.
Proof. Consider the long exact cohomology sequence arising from the exact triangle (2.1.3). Since X is proper over k, geometrically connected and reduced, the canonical map k → H 0 (X, O X ) is an isomorphism, and it follows that the map d : H 0 (X, O X ) → H 0 (X, ω X/k ) is zero, whence the map H 0 (X/k) → H 0 (X, O X ) is an isomorphism. Thanks to Proposition 2.1.8 (1), we have a canonical quasi-isomorphism ] that is compatible with the filtrations induced by (2.1.3). Using the spectral sequence E m,n 2 := Ext k (H −n (X, ω • X/k )) =⇒ H m+n (R Hom • k (RΓ(X, ω • X/k ), k)) and the vanishing of Ext m k (·, k) for m > 0, we deduce that H 2 (X/k) H 0 (X/k) ∨ is 1-dimensional over k. Since Grothendieck's trace map H 1 (X, ω X/k ) → k is an isomorphism, we conclude that the surjective map of 1-dimensional k-vector spaces H 1 (X, ω X/k ) → H 2 (X/k) must be an isomorphism. It follows that the map d : H 1 (X, O X ) → H 1 (X, ω X/k ) is zero as well, as desired. The fact that that the resulting short exact sequence in (2) is canonically isomorphic to its k-linear dual, and the fact that the isomorphisms in (1) are k-linearly dual are now easy consequences of the isomorphism (2.1.6).
We now suppose that k is algebraically closed, and following [Con00, §5.2], we recall Rosenlicht's explicit description [Ros58] of the relative dualizing sheaf ω X/k and of Grothendieck duality.
Denote by k(X) the "function field" of X, i.e. k(X) := i k(ξ i ) is the product of the residue fields at the finitely many generic points of X, and write j : Spec(k(X)) → X for the canonical map. By definition, the sheaf of meromorphic differentials on X is the pushforward Ω 1 k(X)/k := j * Ω 1 k(X)/k . Our hypothesis that X is reduced implies that it is smooth at its generic points, so j factors through the open immersion i : U := X sm → X. By [Con00, Lemma 5.2.1], the canonical map of O X -modules is injective, and it follows that ω X/k is a subsheaf of Ω 1 k(X)/k . Rosenlicht's theory gives a concrete description of this subsheaf, as we now explain.
Let π : X n → X be the normalization of X. We have a natural identification of "function fields" k(X n ) = k(X) and hence a canonical isomorphism π * Ω 1 k(X n )/k Ω 1 k(X)/k of sheaves on X.
Definition 2.1.13. Let ω reg X/k be the sheaf of O X -modules whose sections over any open V ⊆ X are those meromorphic differentials η on π −1 (V ) ⊆ X n which satisfy for all x ∈ V (k) and all s ∈ O X,x , where res y is the classical residue map on meromorphic differentials on the smooth (possibly disconnected) curve X n over the algebraically closed field k.
Remark 2.1.14. Let Irr(X) be the set of irreducible components of X. Since π is an isomorphism over U and X is smooth at its generic points, X n is the disjoint union of the smooth, proper, and irreducible k-curves I n for I ∈ Irr(X). Therefore, a meromorphic differential η on X n may be viewed as a tuple η = (η I n ) I∈Irr(X) , with η I n a meromorphic differential on the smooth and irreducible curve I n . The condition for a meromorphic differential η on π −1 (V ) to be a section of ω reg X/k over V is then res y (s y η I n y ) = 0 for all x ∈ V (k) and all s ∈ O X,x , where I n y is the unique connected component of X n on which y lies and s y is the image of s under the canonical map O X,x → O I n y ,y . As any holomorphic differential on X n has zero residue at every closed point, the pushforward π * Ω 1 X n /k is naturally a subsheaf of ω reg X/k , and this inclusion is an equality at every x ∈ U (k) since π is an isomorphism over U . It likewise follows from the definition that any section of ω reg X/k must be holomorphic at every smooth point of X, so there is a natural inclusion which is an isomorphism over U . Moreover, by [Con00, Lemma 5.2.2], any section of ω reg X/k has poles at the finitely many non-smooth points of X with order bounded by a constant depending only on X, and it follows that ω reg X/k is a coherent sheaf on X. Since (2.1.9) is an isomorphism at the generic points of X, we have a quasi-coherent flasque resolution where X 0 is the set of closed points of X and i x : Spec(O X,x ) → X is the canonical map. The associated long exact cohomology sequence yields an exact sequence of k-vector spaces For x ∈ X 0 , the k-linear "residue" map kills ω reg X/k,x , and the induced composite map is zero by the residue theorem on the (smooth) connected components of X n . Thus, from (2.1.10) we obtain a k-linear "trace map" (2.1.11) res X : which coincides with the usual residue map when X is smooth. Rosenlicht's explicit description of the relative dualizing sheaf and of Grothendieck duality for X/k is: Proposition 2.1.15 (Rosenlicht). Let X be a proper and geometrically connected curve over k with k-smooth locus U . Viewing ω X/k and ω reg X/k as subsheaves of i * Ω 1 U/k via (2.1.7) and (2.1.9), respectively, we have an equality ω X/k = ω reg X/k inside i * Ω 1 U/k . Under this identification, Grothendieck's trace map H 1 (X, ω X ) → k coincides with − res X .
We now return to the situation that S = Spec(R) for a discrete valuation ring R with fraction field K of characteristic zero and perfect residue field k of characteristic p > 0.
Lemma 2.1.16. Let X be a normal and proper curve over S = Spec(R) with smooth and geometrically connected generic fiber, and denote by X := X k the special fiber of X; it is a proper and geometrically connected curve over k by Proposition 2.1.2 (2).
(1) The canonical base change map is an isomorphism.
(2) Let ρ : Y → X be a finite morphism of normal and proper curves over S with smooth and geometrically connected generic fibers. The canonical diagrams (one for ρ * and one for ρ * ) commute, where ρ n * and ρ n * are the usual pullback and trace morphisms on meromorphic differential forms associated to the finite flat map ρ n : Y n → X n of smooth curves over k.
Proof. Since X is of relative dimension 1 over S, the cohomologies H 1 (X, O X ) and H 1 (X, ω X/S ) both commute with base change, and they are both free over R by Proposition 2.1.11. We conclude that H i (X, O X ) and H i (X, ω X/S ) commute with base change for all i and hence that the left and right vertical maps in the base change diagram (1) (whose rows are exact by Propositions 2.1.11 and 2.1.12) are isomorphisms. It follows that the middle vertical map in (1) is an isomorphism as well. The compatibility of pullback and trace under base change to the special fibers, as asserted by the diagram in (2), is a straightforward consequence of Proposition 2.1.10 (2), using the facts that X and Y are smooth at generic points of closed fibers and that ρ : Y → X takes generic points to generic points as noted in the proof of Lemma 2.1.9.
Universal vectorial extensions and Dieudonné crystals.
There is an alternate description of the short exact sequence H(X/R) of Proposition 2.1.11 (2) in terms of Lie algebras and Néron models of Jacobians that will allow us to relate this cohomology to Dieudonné modules. To explain this description and its connection with crystals, we first recall some facts from [MM74] and [Cai10]. Fix a base scheme T , and let G be an fppf sheaf of abelian groups over T . A vectorial extension of G is a short exact sequence (of fppf sheaves of abelian groups) with V a vector group (i.e. an fppf abelian sheaf which is locally represented by a product of G a 's).
Assuming that Hom(G, V ) = 0 for all vector groups V , we say that a vectorial extension (2.2.1) is universal if, for any vector group V over T , the pushout map Hom T (V, V ) → Ext 1 T (G, V ) is an isomorphism. When a universal vectorial extension of G exists, it is unique up to canonical isomorphism and covariantly functorial in morphisms G → G with G admitting a universal extension.
Theorem 2.2.1. Let T be an arbitrary base scheme.
(1) If A is an abelian scheme over T , then a universal vectorial extension E (A) of A exists, with V = ω A t , and is compatible with arbitrary base change on T .
(2) If p is locally nilpotent on T and G is a p-divisible group over T , then a universal vectorial extension E (G) of G extsis, with V = ω G t , and is compatible with arbitrary base change on T .
(3) If p is locally nilpotent on T and A is an abelian scheme over T with associated p-divisible group G := A[p ∞ ], then the canonical map of fppf sheaves G → A extends to a natural map which induces an isomorphism of the corresponding short exact sequences of Lie algebras.
Proof. For the proofs of (1) and (2), see [MM74, I, §1.8 and §1.9]. To prove (3), note that pulling back the universal vectorial extension of A along G → A gives a vectorial extension E of G by ω A t . By universality, there then exists a unique map ψ : ω G t → ω A t with the property that the pushout of E (G) along ψ is E , and this gives the map on universal extensions. That the induced map on Lie algebras is an isomorphism follows from [MM74, II, §13].
For our applications, we will need a generalization of the universal extension of an abelian scheme to the setting of Néron models; in order to describe this generalization, we first recall the explicit description of the universal extension of an abelian scheme in terms of rigidified extensions.
For any commutative T -group scheme F , a rigidified extension of F by G m over T is a pair (E, σ) consisting of an extension (of fppf abelian sheaves) and a splitting σ : Inf 1 (F ) → E of the pullback of (2.2.2) along the canonical closed immersion Inf 1 (F ) → F . Two rigidified extensions (E, σ) and (E , σ ) are equivalent if there is a group homomorphism E → E carrying σ to σ and inducing the identity on G m and on F . The set Extrig T (F, G m ) of equivalence classes of rigidified extensions over T is naturally a group via Baer sum of rigidified extensions[MM74, I, §2.1], so the functor on T -schemes T Extrig T (F T , G m ) is naturally a group functor that is contravariant in F via pullback (fibered product). We write E xtrig T (F, G m ) for the fppf sheaf of abelian groups associated to this functor.
Proposition 2.2.2 (Mazur-Messing). Let A be an abelian scheme over an arbitrary base scheme T . The fppf sheaf E xtrig T (A, G m ) is represented by a smooth and separated T -group scheme, and there is a canonical short exact sequence of smooth group schemes over T Furthermore, (2.2.3) is naturally isomorphic to the universal extension of A t by a vector group.
In the case that T = Spec R for R a discrete valuation ring of mixed characteristic (0, p) with fraction field K, we have the following genaralization of Proposition 2.2.2: Proposition 2.2.3. Let A be an abelian variety over K, with dual abelian variety A t , and write A and A t for the Néron models of A and A t over T = Spec(R). Then the fppf abelian sheaf E xtrig T (A, G m ) on the category of smooth T -schemes is represented by a smooth and separated T -group scheme. Moreover, there is a canonical short exact sequence of smooth group schemes over T which is contravariantly functorial in A via homomorphisms of abelian varieties over K. The formation of (2.2.4) is compatible with smooth base change on T ; in particular, the generic fiber of (2.2.4) is the universal extension of A t by a vector group.
Proof. Since R is of mixed characteristic (0, p) with perfect residue field, this follows from Proposition 2.6 and the discussion following Remark 2.9 in [Cai10].
In the particular case that A is the Jacobian of a smooth, proper and geometrically connected curve X over K which is the generic fiber of a normal proper curve X over R, we can relate the exact sequence of Lie algebras attached to (2.2.4) to the exact sequence H(X/R) or Proposition 2.1.11 (2): Proposition 2.2.4. Let X be a proper relative curve over T = Spec(R) with smooth generic fiber X over K. Write J := Pic 0 X/K for the Jacobian of X and J t for its dual, and let J, J t be the corresponding Néron models over R. There is a canonical homomorphism of exact sequences of finite free R-modules that is an isomorphism when X has rational singularities. 11 For any finite morphism ρ : Y → X of S-curves satisfying the above hypotheses, the map (2.2.5) intertwines ρ * (respectively ρ * ) on the bottom row with Pic(ρ) * (respectively Alb(ρ) * ) on the top.
Remark 2.2.5. Let X be a smooth and geometrically connected curve over K admitting a normal proper model X over R that is a curve having rational singularities. It follows from Proposition 2.2.4 and the Néron mapping property that H(X/R) is a canonical integral structure on the Hodge filtration (2.1.5): it is independent of the choice of proper model X that is normal with rational singularities, and is functorial in finite morphisms ρ : Y → X of proper smooth curves over K which admit models over R satisfying these hypotheses. These facts can be proved in greater generality by appealing to resolution of singularities for excellent surfaces and the flattening techniques of Raynaud-Gruson [RG71]; see [Cai09,Theorem 5.11] for details.
We will need to relate universal extensions of p-divisible to their Dieudonné crystals. In order to explain how this goes, we begin by recalling some basic facts from crystalline Dieudonné theory, as discussed in [BBM82].
Fix a perfect field k and set Σ := Spec(W (k)), considered as a PD-scheme via the canonical divided powers on the ideal pW (k). Let T be a Σ-scheme on which p is locally nilpotent (so T is naturally a PD-scheme over Σ), and denote by Cris(T /Σ) the big crystalline site of T over Σ, endowed with the fppf topology (see [BM79,§2.2]). If F is a sheaf on Cris(T /Σ) and T is any PD-thickening of T , we write F T for the associated fppf sheaf on T . As usual, we denote by i T /Σ : T f ppf → (T /Σ) Cris the canonical morphism of topoi, and we abbreviate G := i T /Σ * G for any fppf sheaf G on T .
11 Recall that X is said to have rational singularities if it admits a resolution of singularities ρ : X → X with the natural map R 1 ρ * O X = 0. Trivially, any regular X has rational singularities.
Let G be a p-divisible group over T , considered as an fppf abelian sheaf on T . As in [BBM82], we define the (contravariant) Dieudonné crystal of G over T to be It is a locally free crystal in O T /Σ -modules, which is contravariantly functorial in G and of formation compatible with base change along PD-morphisms T → T of Σ-schemes thanks to 2.3.6.2 and Proposition 2.4.5 (ii) of [BBM82]. If T = Spec(A) is affine, we will simply write D(G) A for the finite locally free A-module associated to D(G) T .
The structure sheaf O T /Σ is canonically an extension of G a by the PD-ideal J T /Σ ⊆ O T /Σ , and by applying H om T /Σ (G, ·) to this extension one obtains (see Propositions 3.3.2 and 3.3.4 as well as Corollaire 3.3.5 of [BBM82]) a short exact sequence (the Hodge filtration) that is contravariantly functorial in G and of formation compatible with base change along PDmorphisms T → T of Σ-schemes. The following "geometric" description of the value of (2.2.7) on a PD-thickening of the base will be essential for our purposes: Proposition 2.2.6. Let G be a fixed p-divisible group over T and let T be any Σ-PD thickening of T . If G is any lifting of G to a p-divisible group on T , then there is a natural isomorphism that is moreover compatible with base change in the evident manner.
Remark 2.2.7. In his thesis [Mes72], Messing showed that the Lie algebra of the universal extension of G t is "crystalline in nature" and used this as the definition 12 of D(G). (See chapter IV , §2.5 of [Mes72] and especially 2.5.2). Although we prefer the more intrinsic description (2.2.6) of [MM74] and [BBM82], it is ultimately Messing's original definition that will be important for us.
2.3. Integral models of modular curves. We record some basic facts about integral models of modular curves that will be needed in what follows. We assume that the reader is familiar with [KM85], and will freely use the notation and terminology therein. Throughout, we fix a prime p and a positive integer N not divisible by p.
Definition 2.3.1. Let r be a nonnegative integer and R a ring containing a fixed choice ζ of primitive p r -th root of unity in which N is invertible. The moduli problem P ζ (2) P ∈ ker φ(S) and Q ∈ ker φ t (S) are generators of ker φ and ker φ t , respectively, which pair to ζ under the canonical pairing ·, Proposition 2.3.2. If N ≥ 4, then the moduli problem P ζ r is represented by a regular scheme M(P ε r ) that is flat of pure relative dimension 1 over Spec(R). The moduli scheme M(P ζ r ) admits a canonical compactification M(P ζ r ), which is regular and proper flat of pure relative dimension 1 over Spec(R).
Proof. Using that N is a unit in R, one first shows that for N ≥ 4, the moduli problem [µ N ] on (Ell /R) is representable over Spec(R) and finiteétale; this follows from 2.7.4, 3.6.0, 4.7.1 and 5.1.1 of [KM85], as [µ N ] is isomorphic to [Γ 1 (N )] over any R-scheme containing a fixed choice of primitive N -th root of unity (see also [KM85,8.4.11]). By [KM85,4.3.4], to prove the first assertion it is then enough to show that [bal. Γ 1 (p r )] ζ-can on (Ell /R) is relatively representable and regular, which (via [KM85, 9.1.7]) is a consequence of [KM85, 7.6.1 (2)]. For the second assertion, see [KM85,§8].
Recall that we have fixed a compatible sequence {ε (r) } r≥1 of primitive p r -th roots of unity in Q p .
Definition 2.3.3. We set X r := M(P ε (r) r ), viewed as a scheme over T r := Spec(R r ). There is a canonical action of Z × p × (Z/N Z) × by R r -automorphisms of X r , defined at the level of the underlying moduli problem by as one checks by means of the computation uP, Here, we again write v : µ N → µ N for the automorphism of µ N functorially defined by ζ → ζ v for any N -th root of unity ζ. We refer to this action of Z × p × (Z/N Z) × as the diamond operator action, and will denote by u (respectively v N ) the automorphism induced by u ∈ Z × p (respectively v ∈ (Z/N Z) × ). There is also an R r -semilinear "geometric inertia" action of Γ := Gal(K ∞ /K 0 ) on X r , which allows us to descend the generic fiber of X r to K 0 . To explain this action, for γ ∈ Γ and any T r -scheme T , let us write T γ for the base change of T along the morphism T r → T r induced by γ ∈ Aut(R r ). There is a canonical functor (Ell /(T r ) γ ) → (Ell /T r ) obtained by viewing an elliptic curve over a (T r ) γ -scheme T as the same elliptic curve over the same base T , viewed as a T r -scheme via the projection (T r ) γ → T r . For a moduli problem P on (Ell /T r ), we denote by γ * P the moduli problem on (Ell /(T r ) γ ) obtained by composing P with this functor; see [KM85, 4.1.3]. Each γ ∈ Γ gives rise to a morphism of moduli problems γ : where the subscript of γ means "base change along γ" (see §1.4). Since this really is a morphism of moduli problems on (Ell /T r ). We thus obtain a morphism of T r -schemes for each γ ∈ Γ, compatibly with change in γ. The induced semilinear action of Γ on the generic fiber of X r provides a descent datum with respect to the canonical map Spec(K r ) → Spec(K 0 ), which is necessarily effective as this map isétale. Thus, there is a unique scheme X r over K 0 = Q p with (X r ) Kr (X r ) Kr ; as the diamond operators visibly commute with the action of Γ, they act on X r by Q p -automorphisms in a manner that is compatible with this identification.
Remark 2.3.4. We may identify X r with the base change to Q p of the modular curve X 1 (N p r ) over Q classifying pairs (E, α) of a generalized elliptic curve E/S together with an embedding of S-group schemes α : µ N p r → E sm whose image meets each irreducible component in every geometric fiber. If instead we were to use the geometric inertia action on X r induced by then the resulting descent X r of the generic fiber of X r to Q p would be canonically isomorphic to the base change to Q p of the modular curve X 1 (N p r ) over Q classifying generalized elliptic curves E/S with an embedding of S-group schemes Z/N p r Z → E sm [N p r ] whose image meets each irreducible component in every geometric fiber. Of course, X 1 (N p r ) (respectively X 1 (N p r ) ) is the canonical model of the upper half-plane quotient Γ 1 (N p r )\H * with Q-rational cusp cusp i∞ (respectively 0).
Recall ([KM85, §6.7]) that over any base scheme S, a cyclic p r+1 -isogeny of elliptic curves φ : E → E admits a "standard factorization" (in the sense of [KM85, 6.7.7]) For each pair of nonnegative integers a < b ≤ r+1 we will write φ a,b for the composite φ a,a+1 •· · ·•φ b−1,b and φ b,a := φ t a,b for the dual isogeny. Using this notion, we define "degeneracy maps" ρ, σ : X r+1 ⇒ X r (over the map T r+1 → T r ) at the level of underlying moduli problems as follows (cf.: [KM85,11.3.3]): By the universal property of fiber products, we obtain morphisms T r+1 -schemes that are compatible with the diamond operators and the geometric inertia action of Γ.
Remark 2.3.5. On generic fibers, the morphisms (2.3.6) uniquely descend to degeneracy mappings ρ, σ : X r+1 ⇒ X r of smooth curves over Q p . Under the identification X r X 1 (N p r ) Qp of Remark 2.3.4, the map ρ corresponds to the "standard" projection, induced by "τ → τ " on the complex upper half-plane, whereas σ corresponds to the morphism induced by "τ → pτ ." Recall that we have fixed a choice of primitive N -th root of unity ζ N in Q p . The Atkin Lehner "involution" w ζ N on X r × Rr R r is defined as in [Col94,§8]. Following [KM85,11.3.2], we define the Atkin Lehner automorphism w ε (r) of X r over R r on the underlying moduli problem P ε (r) Proof. This is an easy consequence of definitions.
In order to describe the special fiber of X r , we must first introduce Igusa curves: Definition 2.3.7. Let r be a nonnegative integer. The moduli problem I r := ([Ig(p r )]; [µ N ]) on (Ell /F p ) assigns to (E/S) the set of triples (E, P ; α) where E/S is an elliptic curve and Proposition 2.3.8. If N ≥ 4, then the moduli problem I r on (Ell /F p ) is represented by a smooth affine curve M(I r ) over F p which admits a canonical smooth compactification M(I r ).
Proof. One argues as in the proof of Proposition 2.3.2, using [KM85,12.6.1] to know that [Ig(p r )] is relatively representable on (Ell /F p ), regular 1-dimensional and finite flat over (Ell /F p ).
Definition 2.3.9. Set Ig r := M(I r ); it is a smooth, proper, and geometrically connected F p -curve.
There is a canonical action of the diamond operators Z × p × (Z/N Z) × on the moduli problem I r via (u, v) · (E, P ; α) := (E, uP ; v • α); this induces a corresponding action on Ig r by F p -automorphisms. We again write u (respectively v N ) for the action of u ∈ Z × p (respectively v ∈ (Z/N Z) × ). Thanks to the "backing up theorem" [KM85, 6.7.11], one also has natural degeneracy maps on underlying moduli problems. This map is visibly equivariant for the diamond operator action on source and target. Let ss r be the (reduced) closed subscheme of Ig r that is the support of the coherent ideal sheaf of relative differentials Ω 1 Ig r / Ig 0 ; over the unique degree 2 extension of F p , this scheme breaks up as a disjoint union of rational points-the supersingular points. The map (2.3.7) is finite of degree p, genericallyétale and totally (wildly) ramified over each supersingular point.
We can now describe the special fiber of X r : Proposition 2.3.10. The scheme X r := X r × Tr Spec(F p ) is the disjoint union, with crossings at the supersingular points, of the following proper, smooth F p -curves: for each pair a, b of nonnegative integers with a + b = r, and for each u ∈ (Z/p min(a,b) Z) × , one copy of Ig max(a,b) .
We refer to [KM85, 13.1.5] for the definition of "disjoint union with crossings at the supersingular points". Note that the special fiber of X r is (geometrically) reduced; this will be crucial in our later work. We often write I (a,b,u) for the irreducible component of X r indexed by the triple (a, b, u) and will refer to it as the ( For the proof of Proposition 2.3.10, we refer the reader to [KM85, 13.11.2-13.11.4], and content ourselves with recalling the correspondence between (non-cuspidal) points of the (a, b, u)-component and [bal. Γ 1 (p r )] 1-can -structures on elliptic curves. 13 Let S be any F p scheme, fix an ordinary elliptic curve E 0 over S, and let (φ : E 0 → E r , P, Q; α) be an element of P 1 r (E 0 /S). By [KM85, 13.11.2], there exist unique nonnegative integers a, b with the property that the cyclic p r -isogeny φ factors as a purely inseparable cyclic p a -isogeny followed by an 13 Note that under the canonical ring homomorphism Rr Fp, our fixed choice ε (r) of primitive p r -th root of unity maps to 1 ∈ Fp, which is a primitive p r -th root of unity by definition [ KM85,9.1.1], as it is a root of the p r -th cyclotomic polynomial over Fp! etale p b -isogeny (this is the standard factorization of φ). Furthermore, there exists a unique elliptic curve E over S and S-isomorphisms E 0 E (p b ) and E r E (p a ) such that the cyclic p r isogeny φ is: . Conversely, suppose given (a, b, u) and an S-valued point of Ig max(a,b) which is neither a cusp nor a supersingular point (in the sense that it corresponds to an ordinary elliptic curve with extra structure). If a ≥ b and (E, Q; α) is the given S-point of Ig a then we set P : [KM85,13.11.3], the data gives an S-point of M(P 1 r ). These constructions are visibly inverse to each other. Remark 2.3.11. When r is even and a = b = r/2, there is a choice to be made as to how one identifies the (r/2, r/2, u)-component of are S-points of Ig p r/2 . Since uP = Q, the corresponding closed immersions Ig r/2 → X r are twists of each other by the automorphism u of the source. We will consistently choose (E, Q; p −r/2 V r/2 • α) to identify the (r/2, r/2, u)-component of X r with Ig r/2 . Remark 2.3.12. As in [MW86, pg. 236], we will refer to I ∞ r := I (r,0,1) and I 0 r := I (0,r,1) as the two "good" components of X r . The Q p -rational cusp ∞ of X r induces a section of X r → T r which meets I ∞ r , while the section induced by the K r -rational cusp 0 meets I 0 r . It is precisely these irreducible components of X r which contribute to the "ordinary" part of cohomology. We note that I ∞ r corresponds to the image of Ig r under the map i 1 of [MW86,pg. 236], and corresponds to the component of X r denoted by C ∞ in [Til87, pg. 343], by C ∞ r in [Sab96, pg. 231] and, for r = 1, by I in [Gro90,§7].
By base change, the degeneracy mappings (2.3.6) induces morphisms ρ, σ : X r+1 ⇒ X r of curves over F p which admit the following descriptions on irreducible components: Ig r+1 = I (0,r+1,1) p N ρ G G I (0,r,1) = Ig r : (a, b, u) = (0, r + 1, 1) and the restriction of the map ρ : X r+1 → X r to the (a, b, u)-component of X r+1 is: Here, for any F p -scheme I, the map F : I → I is the absolute Frobenius morphism.
Proof. Using the moduli-theoretic definitions (2.3.5) of the degeneracy maps, the proof is a pleasant exercise in tracing through the functorial correspondence between the points of X r and points of Ig (a,b,u) . We leave the details to the reader.
We likewise have a description of the automorphism of X r induced via base change by the geometric inertia action 14 (2.3.2) of Γ: Following [Ulm90, §7-8], we now define a correspondence π 1 , π 2 : Y r ⇒ X r on X r over R r which naturally extends the correspondence on X r giving the Hecke operator U p (see below for a brief discussion of correspondences).
Definition 2.3.15. Let r be a nonnegative integer and R a ring containing a fixed choice ζ of primitive p r -th root of unity in which N is invertible. The moduli problem Q ζ r := ([Γ 0 (p r+1 ); r, r] ζ-can ; [µ N ]) on (Ell /R) assigns to E/S the set of quadruples (φ : E → E , P, Q; α) where: (1) φ is a cyclic p r+1 -isogeny with standard factorization (2) P ∈ E 1 (S) and Q ∈ E r (S) are generators of ker φ 1,r+1 and ker φ r,0 , respectively, and satisfy Proposition 2.3.16. If N ≥ 4, then the moduli problem Q ζ r is represented by a regular scheme M(Q ζ r ) that is flat of pure relative dimension 1 over Spec(R). This scheme admits a canonical compactification M(P ζ r ), which is regular and proper flat of pure relative dimension 1 over Spec(R).
14 Since Γ acts trivially on Fp, for each γ ∈ Γ the base change of the Rr-morphism γ : Xr → (Xr)γ along the map induced by the canonical surjection Rr Fp is an Fp-morphism γ : Xr → (Xr)γ Xr.
Proof. As in the proof of Proposition 2.3.2, it suffices to prove that [Γ 0 (p r+1 ); r, r] ζ-can is relatively representable and regular, which follows from [KM85, 7.6.1]; see also §7.9 of op. cit.
Definition 2.3.17. We set Y r := M(Q ε (r) r ), viewed as a scheme over T r = Spec(R r ). The scheme Y r is equipped with an action of the diamond operators Z × p × (Z/N Z) × , as well as a "geometric inertia" action of Γ given moduli-theoretically exactly as in (2.3.1) and (2.3.2). The "semilinear" action of Γ is equivalent to a descent datum-necessarily effective-on the generic fiber of Y r , and we denote by Y r the resulting unique Q p -descent of (Y r ) Kr .
Remark 2.3.18. We may identify Y r with the base change to Q p of the modular curve X 1 (N p r ; N p r−1 ) over Q classifying triples (E 1 , α, C) where E 1 is a generalized elliptic curve, α : µ N p r → E sm 1 [N p r ] is an embedding of group schemes whose image meets each irreducible component in every geometric fiber, and C is a locally free subgroup scheme of rank p in E sm One checks that π is equivariant with respect to the action of the diamond operators and of Γ, and so descends to a map π : Y r → X r of smooth curves over Q p . It is likewise straightforward to check that the two projection maps σ, ρ : X r+1 ⇒ X r of (2.3.5) factor through π via unique maps of T r -schemes π 1 , π 2 : Y r ⇒ X r , given as morphisms of underlying moduli problems on (Ell /R r ) That these morphisms are well defined and that one has ρ = π • π 2 and σ = π • π 1 is easily verified (see [Ulm90,§7] and compare to [KM85, §11.3.3]). They are moreover finite of generic degree p, equivariant for the diamond operators, and Γ-compatible; in particular, π 1 , π 2 descend to finite maps π 1 , π 2 : Y r ⇒ X r over Q p . Via our identifications in Remarks 2.3.4 and 2.3.18, the map π 1 corresponds to the usual "forget C" map, while π 2 corresponds to "quotient by C". We stress that the "standard" degeneracy map ρ : X r+1 → X r factors through π 2 (and not π 1 ).
is the disjoint union, with crossings at the supersingular points, of the following proper, smooth F p -curves: for each pair of nonnegative integers a, b with a + b = r + 1 and for each u ∈ (Z/p min(a,b)Z ) × , one copy of We will write J (a,b,u) for the irreducible component of Y r indexed by (a, b, u), and refer to it as the (a, b, u)-component; again, J (a,b,u) is independent of u. The proof of Proposition 2.3.19 is a straightforward adaptation of the arguments of [KM85, 13.11.2-13.11.4] (see also [Ulm90, Proposition 8.2]). We recall the correspondence between non-cuspidal points of the (a, b, u)-component and [Γ 0 (p r+1 ); r, r] 1-can -structures on elliptic curves.
Fix an ordinary elliptic curve E 0 over an F p -scheme S, and let (φ : E 0 → E r+1 , P, Q; α) be an element of Q 1 r (E 0 /S). As before, there exist unique nonnegative integers a, b with a + b = r + 1 and a unique elliptic curve E/S with the property that the cyclic p r+1 -isogeny φ factors as Conversely, suppose given (a, b, u) and an S-point of Ig max(a,b) which is neither cuspidal nor supersingular. If r+1 > a ≥ b and (E, Q; α) is the given point of Ig a , then we set P : is an S-point of Ig r , then we let P ∈ E (p) (S) be the identity section and we obtain an S-point . If a = 0 and (E, P, α) is an S-point of Ig r , then we let Q ∈ E (p) (S) be the identity section and we obtain an S-point . Using the descriptions of X r and Y r furnished by Propositions 2.3.10 and 2.3.19, we can calculate the restrictions of the degenercy maps π 1 , π 2 : Y r ⇒ X r to each irreducible component of the special fiber of Y r . The following is due to Ulmer 15 [Ulm90, Proposition 8.3]: Proposition 2.3.20. Let a, b be nonnegative integers with a + b = r + 1 and u ∈ (Z/p min(a,b) Z) × . The restriction of the map π 1 : Y r → X r to the (a, b, u)-component of Y r is: Ig r = J (0,r+1,1) p N G G I (0,r,1) = Ig r : (a, b, u) = (0, r + 1, 1) 15 We warn the reader, however, that Ulmer omits the effect of the degeneracy maps on [µN ]-structures, so his formulae are slightly different from ours. and the restriction of the map π 2 : Y r → X r to the (a, b, u)-component of Y r is: Proof. The proof is similar to the proof of Proposition 2.3.13, using the correspondence between irreducible components of Y r , X r and Igusa curves that we have explained, together with the modulitheoretic definitions (2.3.9) of the degeneracy mappings. We leave the details to the reader.
We end this section with a brief discussion of correspondences on curves and their induced action on cohomology and Jacobians, which we then apply to the specific case of modular curves. Fix a ring R and a proper normal curve X over S = Spec R. Throughtout this discussion, we assume either that R is a discrete valuation ring of mixed characteristic (0, p) with perfect residue field, or that R is a perfect field (and hence the normal X is smooth).
Thanks to Proposition 2.1.11 (4), any correspondence T = (π 1 , π 2 ) on X induces an R-linear endomorphism of the short exact sequence H(X/R) via π 1 * π * 2 . By a slight abuse of notation, we denote this endomorphism by T ; as endomorphisms of H(X/R) we then have (2.3.10) T = π 1 * π * 2 and T * = π 2 * π * 1 . Given a finite map π : X → X, we will consistently view π as a correspondence on X via the association π (id, π). In this way, we may think of correspondences on X as "generalized endomorphisms." This perspective can be made more compelling as follows.
First suppose that R is a field, and fix a correspondence T given by an ordered pair π 1 , π 2 : Y ⇒ X of finite morphisms of smooth and proper curves. Then T and its transpose T * induce endomorphisms of the Jacobian J X := Pic 0 X/R of X, which we again denote by the same symbols, via (2.3.11) T := Alb(π 2 ) • Pic 0 (π 1 ) and Note that when T = (id, π) for a morphism π : X → X, the induced endomorphisms (2.3.11) of J X are given by T = Alb(π) and T * := Pic 0 (π). 16 Abusing notation, we will simply write π for the endomorphism Alb(π) of J X induced by the correspondence (1, π), and π * for the endomorphism Pic 0 (π) induced by (π, 1) = (1, π) * . When π : X → X is an automorphism, an easy argument shows that π * = π −1 as automorphisms of J X .
16 Because of this fact, for a general correspondence T = (π1, π2) the literature often refers to the induced endomorphism T (respectively T * ) of JX as the Albanese (respectively Picard) or covariant (respectively contravariant) action of the correspondence (π1, π2). Since the definitions (2.3.11) of T and T * both literally involve Albanese and Picard functoriality, we find this old terminology confusing, and eschew it in favor of the consistent notation we have introduced.
With these definitions, the canonical filtration compatible isomorphism H 1 dR (X/R) H 1 dR (J X /R) is T (respectively T * )-equivariant with respect to the action (2.3.10) on H 1 dR (X/R) and the action on H 1 dR (J X /R) induced by pullback along the endomorphisms (2.3.11); see [Cai10,Proposition 5.4]. Now suppose that R is a discrete valuation ring with fraction field K and fix a correspondence T on X given by a pair of finite morphisms of normal curves π 1 , π 2 : Y ⇒ X. Let us write T K for the induced correspondence on the (smooth) generic fiber X K of X. Via (2.3.11) and the Néron mapping property, T K and T * K induces endomorphisms of the Néron model J X of the Jacobian of X K , which we simply denote by T and T * , respectively. Thanks to Proposition 2.2.4, the filtration compatible morphism (2.2.5) is T -and T * -equivariant for the given action (2.3.10) on H 1 (X/R) and the action on Lie E xtrig R (J X , G m ) induced by (2.3.11) and the (contravariant) functoriality of E xtrig R (·, G m ).
Remark 2.3.22. As in Remark 2.2.5, if X is a normal proper curve over R with rational singularities, then any correspondence on X K induces a filtration compatible endomorphism of H 1 (X/R) via its action on J X K , the Néron mapping property, and the isomorphism (2.2.5) of Proposition 2.2.4.
We now specialize this discussion to the case of the modular curve X 1 (N p r ) over Q. For any prime , one defines the Hecke correspondences T for N p and U for |N p on X 1 (N p r ) as in [Col94,§8] (cf. also [Gro90,§3] and [MW84, Chapter 2, §5.1-5.8], though be aware that the latter works instead with the modular curves X 1 (N p r ) of Remark 2.3.4). If = p, we have similarly defined correspondences T and U on Ig r over F p (see [MW84, Chapter 2, §5.4-5.5]). For = p, the Hecke correspondences extend to correspondences on X r over R r , essentially by the same definition, while for = p the correspondence U p := (π 1 , π 2 ) on X r is defined using the maps (2.3.9). We use the same symbols to denote the induced endomorphisms (2.3.11) of the Jacobian J 1 (N p r ).
Definition 2.3.23. We write H r (Z) (respectively H * r (Z)) for the Z-subalgebra of End Q (J 1 (N p r )) generated by the Hecke operators T (respectively T * ) for N p and U (respectively U * ) for |N p, and the diamond operators u (respectively u * ) for u ∈ Z × p and v N (respectively v * N ) for v ∈ (Z/N Z) × . For any commutative ring A, we set H r (A) := H r (Z) ⊗ Z A and H * r (A) := H * r (Z) ⊗ Z A, and for ease of notation we set H r := H r (Z p ) and H * r := H * r (Z p ). The relation between the Hecke algebras H r and H * r is explained by the following: Proposition 2.3.24. Denote by w r the automorphism of (J r ) K r induced by the correspondence (1, w r ) on (X r ) K r over K r . Viewing H r and H * r as Z p -subalgebras of End K r ((J r ) K r ) ⊗ Z Z p , conjugation by w r carries H r isomorphically onto H * r : that is, w r T = T * w r for all Hecke operators T . Proof. This is standard; see, e.g., [Til87,pg. 336], [Oht95, 2.1.8], or [MW84, Chapter 2, §5.6 (c)].
Differentials on modular curves in characteristic p
We now analyze the "modified de Rham cohomology" ( §2.1) of the special fibers of the modular curves X r /R r , and we relate its ordinary part to the de Rham cohomology of the "Igusa Tower." 3.1. The Cartier operator. Fix a perfect field k of characteristic p > 0 and write ϕ : k → k for the p-power Frobenius map. In this section, we recall the basic theory of the Cartier operator for a smooth and proper curve over k. As we will only need the theory in this limited setting, we will content ourselves with a somewhat ad hoc formulation of it. Our exposition follows [Ser58,§10], but the reader may consult [Oda69, §5.5] or [Car57] for a more general treatment.
Let X be a smooth and proper curve over k and write F : X → X for the absolute Frobenius map; it is finite and flat and is a morphism over the endomorphism of Spec(k) induced by ϕ. Let D be an effective Cartier (=Weil) divisor on X over k, and write O X (−D) for the coherent (invertible) ideal sheaf determined by D. The pullback map F * : , so we obtain a canonical ϕ-semilinear pullback map on cohomology By Grothendieck-Serre duality, (3.1.1) gives a ϕ −1 -semilinear "trace" map 17 of k-vector spaces Proposition 3.1.1. Let X/k be a smooth and proper curve, D an effective Cartier divisor on X, and n a nonnegative integer.
(2) The map V "improves poles" in the sense that it factors through the canonical inclusion (3) If ρ : Y → X is any finite morphism of smooth proper curves over k, and ρ * D is the pullback of D to Y , then the induced pullback and trace maps (4) Assume that k is algebraically closed. Then for any meromorphic differential η on X and any closed point x of X, the formula holds, where res x is the canonical "residue at x map" on meromorphic differentials.
Proof. Both (1) and (2) follow from our discussion, while (3) follows (via duality) from the fact that the p-power map commutes with any ring homomorphism. Finally, (4) follows from the fact that the canonical isomorphism H 1 (X, Ω 1 X/k ) → k induced by the residue map coincides with the negative of Grothendieck's trace isomorphism (cf. Proposition 2.1.15), together with the fact that Grothendieck's trace morphism is compatible with compositions; see Appendix B and Corollary 3.6.6 of [Con00].
Remark 3.1.2. Quite generally, if ρ : Y → X is any finite morphism of smooth curves over k and y is any k-point of Y with x = ρ(y) ∈ X(k), then for any meromorphic differential η on Y we have where e is the ramification index of the extension of discrete valuation rings O X,x → O Y,y . Indeed, if I x and I y denote the ideal sheaves of the reduced closed subschemes x and y, then the pullback map Passing to the map on H 1 's and using Grothendieck duality, we We recall the following (generalization of a) well-known lemma of Fitting: Lemma 3.1.3. Let A be a commutative ring, ϕ an automorphism of A, and M be an A-module equipped with a ϕ-semilinear endomorphism F : M → M. Assume that one of the following holds: (1) M is a finite length A-module.
(2) A is a complete noetherian adic ring, with ideal of definition I A, and M is a finite A-module. Then there is a unique direct sum decomposition Proof. For the proof in case (1), we refer to [Laz75, VI, 5.7], and just note that one has: where one uses that ϕ is an automorphism to know that the image and kernel of F n are A-submodules of M . It follows immediately from this that the association M M F is a functor from the category of left A[F ]-modules of finite A-length to itself. It is an exact functor because the canonical inclusion M F → M is an A[F ]-direct summand. In case (2), our hypotheses ensure that M/I n M is a noetherian and Artinian A-module, and hence of finite length, for all n. Our assertions in this situation then follow immediately from (1), via the uniqueness of (3.1.5), together with fact that M is finite as an A-module, and hence I-adically complete (as A is).
We apply 3.1.3 to the k-vector space M := H 0 (X, Ω 1 X/k ) equipped with the ϕ −1 semilinear map V : In particular (taking D = 0) we also have γ The following "control lemma" is a manifestation of the fact that the Cartier operator improves poles (Proposition 3.1.1, (2)): Lemma 3.1.6. Let X be a smooth and proper curve over k and D an effective Cartier divisor on X. Considering D as a closed subscheme of X, we write D red for associated reduced closed subscheme.
(1) For all positive integers n, the canonical morphism ) induces a natural isomorphism on V -ordinary subspaces.
(2) For each positive integer n, the canonical map induces a natural isomorphism on F -ordinary subspaces.
Proof. This follows immediately from Proposition 3.1.1, (2) and Remark 3.1.5. Now let π : Y → X be a finite branched covering of smooth, proper and geometrically connected curves over k with group G that is a p-group. Let D X be any effective Cartier divisor on X over k with support containing the ramification locus of π, and put D Y = π * D X . As in Lemma 3.1.6, denote by D X,red and D Y,red the underlying reduced closed subschemes; as D Y,red is G-stable, the k-vector -modules for any n ≥ 1. The following theorem of Nakajima is the key to the proofs of our structure theorems for Λ-modules: Proposition 3.1.7 (Nakajima). Assume that π is ramified, let γ X be the Hasse-Witt invariant of X and set d := γ X − 1 + deg(D X,red ). Then for each positive integer n: -free of rank d and independent of n. Proof. The independence of n is simply Lemma 3.1.6; using this, the first assertion is then equivalent to Theorem 1 of [Nak85]. The second assertion is immediate from Remark 3.1.5, once one notes that for g ∈ G one has the identity g * = (g −1 ) * on cohomology (since g * g * = deg g = id), so g * and (g −1 ) * are adjoint under the duality pairing (3.1.6).
We end this section with a brief explanation of the relation between the de Rham cohomology of X over k and the Dieudonné module of the p-divisible group of the Jacobian of X. This will allow us to give an alternate description of the V -ordinary (respectively F -ordinary) subspace of H 0 (X, Ω 1 X/k ) (respectively H 1 (X, O X )) which will be instrumental in our applications.
Pullback by the absolute Frobenius gives a semilinear endomorphism of the Hodge filtration H(X/k) of H 1 dR (X/k) which we again denote by F = F * . Under the canonical autoduality of H(X/k) provided by Proposition 2.1.12 (2) , we obtain ϕ −1 -semilinear endomorphism whose restriction to H 0 (X, Ω 1 X/k ) coincides with (3.1.2). Let A be the "Dieudonné ring", i.e. the (noncommutative if k = F p ) ring A := W (k)[F, V ], where F , V satisfy F V = V F = p, F α = ϕ(α)F , and V α = ϕ −1 (α)V for all α ∈ W (k). We view H 1 dR (X/k) as a left A-module in the obvious way. (1) There are canonical isomorphisms of left A-modules (2) For any finite morphism ρ : Y → X of smooth and proper curves over k, the identification of (1) intertwines ρ * with D(Pic 0 (ρ)) and ρ * with D(Alb(ρ)).
(3) Let G = Gé t × G m × G ll be the canonical direct product decomposition of G into its maximaĺ etale, multiplicative, and local-local subgroups. Via the identification of (1), the canonical mappings in the exact sequence H(X/k) induce natural isomorphisms of left A-modules We recall that one has a canonical isomorphism (3.1.8) H 1 dR (X/k) H 1 dR (J/k) which is compatible with Hodge filtrations and duality (using the canonical principal polarization to identify J with its dual) and which, for any finite morphism of smooth curves ρ : Y → X over k, intertwines ρ * with Pic 0 (ρ) * and ρ * with Alb(ρ) * ; see [Cai10,Proposition 5.4], noting that the proof given there works over any field k, and cf. (2).
induces an isomorphism on V -ordinary (respectively F -ordinary) subspaces. On the other hand, by Dieudonné theory one knows that for any p-divisible group H, the semilinear endomorphism V 18 Alternately, one could appeal to [MM74], specifically to Chapter I, 4.
and it follows that the natural maps 3.2. The Igusa tower. We apply Proposition 3.1.7 to the Igusa tower (Definition 2.3.9). The canonical degeneracy map ρ : I r → I 1 defined by (2.3.7) is finiteétale outside 19 ss := ss r and totally (wildly) ramified over ss 1 , and so makes I r in to a branched cover of I 1 with group ∆/∆ r . The cohomology groups H 0 (I r , Ω 1 Ir/Fp (ss)) and H 1 (I r , O Ir (−ss)) are therefore naturally F p [∆/∆ r ]-modules.
Proposition 3.2.1. Let r be a positive integer, write γ for the p-rank of J 1 (N ) Fp , and set δ := deg ss.
(2) For any positive integer s ≤ r, the canonical trace mapping associated to ρ : Remark 3.2.2. Using the moduli interpretation of I r and calculations on formal groups of universal elliptic curves, one can show [KM85, Lemma 12.9.3] that pullback induces a canonical identification ρ * Ω 1 Is/k = Ω 1 Ir/k (−p r−1 (p r − p s ) · ss). If n is any positive integer, it follows easily from this that ρ * identifies H 0 (I s , Ω 1 Is/k (n · ss)) with the ∆ s /∆ r -invariant subspace of H 0 (I r , Ω 1 Ir/k (−N r,s (n) · ss)), for N r,s (n) = p r−1 (p r − p s ) − p r−s n. In particular, via pullback, H 0 (I 1 , Ω 1 I 1 /k (p r − p)) is canonically identified with the ∆/∆ r -invariant subspace of H 0 (I r , Ω 1 Ir/k ), so the k-dimension of this subspace grows exponentially with r. In this light, it is remarkable that the V -ordinary subspace has controlled growth. We will not use these facts in what follows, though see Remark 3.2.4.
In order to prove Proposition 3.2.1, we require the following Lemma (cf. [MW83, p. 511]): Lemma 3.2.3. Let π : Y → X be a finite flat and genericallyétale morphism of smooth and geometrically irreducible curves over a field k. If there is a geometric point of X over which π is totally ramified then the induced map of k-group schemes Pic(π) : Pic X/k → Pic Y /k has trivial scheme-theoretic kernel.
Proof. The hypotheses and the conclusion are preserved under extension of k, so we may assume that k is algebraically closed. We fix a k-point x ∈ X(k) over which π is totally ramified, and let y ∈ Y (k) be the unique k-point of Y over x. To prove that Pic X/k → Pic Y /k has trivial kernel, it suffices to prove that the map of groups π * R : Pic(X R ) → Pic(Y R ) is injective for every Artin local k-algebra R. We fix such a k-algebra, and denote by x R ∈ X R (R) and y R ∈ Y R (R) the points obtained from x and y by base change. Let L be a line bundle on X R whose pullback to Y R is trivial; our claim is that we may choose a trivialization π * L − → O Y R of π * L over Y R which descends to X R . In other words, by descent theory, we assert that we may choose a trivialization of π * L with the property that the two pullback trivializations under the canonical projection maps We first claim that the k-scheme Z := Y × X Y is connected and generically reduced. Since π is totally ramified over x, there is a unique geometric point (y, y) of Z mapping to x under the canonical map Z → X. Since this map is moreover finite flat (because π : Y → X is finite flat due to smoothness of X and Y ), every connected component of Z is finite flat onto X and so passes through (y, y). Thus, Z is connected. On the other hand, π : Y → X is genericallyétale by hypothesis, so there exists a dense open subscheme U ⊆ X over which π isétale. Then Z × X U isétale-and hence smooth-over U and the open immersion Z × X U → Z is schematically dense as U → X is schematically dense and π is finite and flat. As Z thus contains a k-smooth and dense subscheme, it is generically reduced.
Fix a choice e of R-basis of the fiber L (x R ) of L at x R . As any two trivializations of π * L over Y R differ by an element of R × , we may uniquely choose a trivialization which on x R -fibers carries e to 1. The obstruction to the two pullback trivializations under (3.2.2) being equal is a global unit on where the last equality rests on the fact that Y × X Y is connected, generically reduced, and proper over k. Thus, the obstruction to the two pullback trivializations being equal is an element of R × , whose value may be calculated at any point of Y R × X R Y R . By our choice (3.2.3) of trivialization of π * L , the value of this obstruction at the point (y R , y R ) is 1, and hence the two pullback trivializations coincide as desired.
Proof of Proposition 3.2.1. Since ρ : I_r → I_s is a finite branched cover with group ∆_s/∆_r and totally wildly ramified over ss_s, we may apply Proposition 3.1.7, which gives (1).
To prove (2), we work over k := F_p and argue as follows. Since ρ : I_r → I_s is of degree p^{r−s} and totally ramified over ss_s, we have ρ^* ss_s = p^{r−s} · ss; it follows that pullback induces a map on spaces of differentials with poles along the supersingular loci, which we claim is injective. To see this, we observe that the long exact cohomology sequence attached to the short exact sequence of sheaves on I_r (with O_ss a skyscraper sheaf supported on ss) yields a commutative diagram (3.2.5) with exact rows. The left-most vertical arrow is an isomorphism because I_r is geometrically connected for all r.
Since ss is reduced, we have H^0(I_r, O_ss) = k^{deg ss} for all r, so since ρ : I_r → I_s totally ramifies over every supersingular point, the second left-most vertical arrow in (3.2.5) is also an isomorphism.
It follows that the induced trace mappings H^0(I_r, Ω^1_{I_r}(ss)) → H^0(I_s, Ω^1_{I_s}(ss_s)) are surjective for all r ≥ s ≥ 1. Passing to V- (respectively F-) ordinary parts and using Lemma 3.1.6 (1), we conclude that the canonical trace mappings attached to I_r → I_s induce surjective maps as in Proposition 3.2.1 (2). By (1), these mappings are then surjective mappings of free F_p[∆/∆_s]-modules of the same rank, and are hence isomorphisms.
Remark 3.2.4. If G is any cyclic group of p-power order, then the representation theory of G is rather easy, even over a field k of characteristic p. Denoting by γ any fixed generator of G, for each integer d with 1 ≤ d ≤ #G, there is a unique indecomposable representation of G of dimension d, given explicitly by the k[G]-module V_d := k[G]/(γ − 1)^d. By using Artin-Schreier theory for a G-cover of proper smooth curves Y → X over k, for any G-stable Cartier divisor D on Y it is possible to determine the multiplicity of V_d in the k[G]-module H^0(Y, Ω^1_{Y/k}(D)) purely in terms of the ramification data of Y → X. This is carried out for D = ∅ in [VM81]. For the G := ∆/∆_r-cover I_r → I_1, one finds an explicit such decomposition of H^0(I_r, Ω^1_{I_r/k}) as a k[G]-module, in terms of g(I_1), the genus of I_1.
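To make Remark 3.2.4 concrete, here is the smallest case (a standard verification, included only for the reader's convenience): take G = Z/pZ with generator γ. Since (γ − 1)^p = γ^p − 1 = 0 in characteristic p, one has
\[
k[G] \;\cong\; k[x]/(x^p), \qquad \gamma \mapsto 1 + x,
\]
and under this identification V_d = k[x]/(x^d), on which γ acts as a single unipotent Jordan block of size d. Thus V_1 is the trivial representation, V_p = k[G] is the regular representation, and the indecomposables of Remark 3.2.4 are exactly the unipotent Jordan blocks of sizes 1 ≤ d ≤ #G.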
The space of meromorphic differentials H^0(I_1, Ω^1_{I_1/F_p}(ss)) has a natural action of F^×_p via the diamond operators ⟨·⟩, and the eigenspaces for this action are intimately connected with mod p cusp forms: Proposition 3.2.5. Let S_k(N; F_p) be the space of weight k cuspforms for Γ_1(N) over F_p, and denote by H^0(I_1, Ω^1_{I_1/F_p}(ss))(k − 2) the subspace of H^0(I_1, Ω^1_{I_1/F_p}(ss)) on which F^×_p acts through the character u → u^{k−2}. For each k with 2 < k < p + 1, there are canonical isomorphisms of F_p-vector spaces which are equivariant for the Hecke operators, with U_p acting as usual on modular forms and as the Cartier operator V on differential forms. For k = 2, p + 1, we have a corresponding commutative diagram, where A is the Hasse invariant.
Proof. This follows from Propositions 5.7-5.10 of [Gro90], using Lemma 3.3.5; we note that our forward reference to Lemma 3.3.5 does not result in circular reasoning.
Remark 3.2.6. For each k with 2 ≤ k ≤ p + 1, let us write d_k := dim_{F_p} S_k(N; F_p)^{ord} for the F_p-dimension of the subspace of weight k level N cuspforms over F_p on which U_p acts invertibly. As in Proposition 3.2.1 (1), let γ be the p-rank of the Jacobian of X_1(N)_{F_p} and δ := deg ss. It follows immediately from Proposition 3.2.5 that we have a corresponding equality relating the d_k to γ and δ.
3.3. Structure of the ordinary part of H^0(X_r, ω_{X_r/F_p}). Keep the notation of §3.2 and let X_r/R_r be as in Definition 2.3.3. As before, we denote by X_r := X_r ×_{R_r} F_p the special fiber of X_r; it is a curve over F_p in the sense of Definition 2.1.1. In this section, using Rosenlicht's theory of the dualizing sheaf as explained in §2.1 and the explicit description of X_r given by Proposition 2.3.10, we will compute the ordinary part of the cohomology H(X_r/F_p) in terms of the de Rham cohomology of the Igusa tower. For notational ease, as in Remark 2.3.12 we write I^∞_r := I_{(r,0,1)} and I^0_r := I_{(0,r,1)} for the two "good" components of X_r. Each of these components is abstractly isomorphic to the Igusa curve Ig(p^r) of level p^r over X_1(N)_{F_p}, and we will henceforth make this identification; for s ≤ r, we will write simply ρ : I_r → I_s for the canonical degeneracy map induced by (2.3.7). Using Proposition 2.3.20, one checks that the H_r-correspondences on X_r restrict to the H_r-correspondences on I^∞_r (the point is that the degeneracy maps defining U_p on X_r restrict to a correspondence on I^∞_r), while the H^*_r-correspondences on X_r restrict to the H^*_r-correspondences on I^0_r. In particular, U_p = (F, ⟨p⟩_N) on I^∞_r and U^*_p = (F, id) on I^0_r. For ⋆ = 0, ∞, we denote by i^⋆_r : I^⋆_r → X_r the canonical closed immersion.
Proposition 3.3.1. For ⋆ = ∞, 0, pullback along i^⋆_r induces a natural isomorphism (3.3.1) identifying e_r H^0(X_r, ω_{X_r}) (respectively e^*_r H^0(X_r, ω_{X_r})) with H^0(I^∞_r, Ω^1_{I^∞_r}(ss))^{V-ord} (respectively H^0(I^0_r, Ω^1_{I^0_r}(ss))^{V-ord}), which is Γ-equivariant for the "geometric inertia action" (2.3.3) on X_r and the action γ → χ(γ)^{−1} on I^0_r (respectively the trivial action on I^∞_r). The isomorphisms (3.3.1) induce identifications (3.3.2) that are compatible with change in r: the four diagrams formed by taking the interior or the exterior arrows are all commutative for s ≤ r. Via the automorphism w_r of X_r and the identification I^0_r ≅ Ig(p^r) ≅ I^∞_r, the first diagram of (3.3.2) is carried isomorphically and compatibly onto the second. The same assertions hold true if we replace X_r with X^n_r and Ω^1_{I_r}(ss) with Ω^1_{I_r} throughout.
Proof. We may and do work over k := F_p, and we abuse notation slightly by writing X_r for the geometric special fiber of X_r. If X is an F_p-scheme, we likewise again write X for its base change to k, and we write F : X → X for the base change of the absolute Frobenius of X over F_p to k. Let X^n_r → X_r be the normalization map; by Proposition 2.3.10, we know that X^n_r is the disjoint union of proper smooth and irreducible Igusa curves I_{(a,b,u)} indexed by triples (a, b, u) with a, b nonnegative integers satisfying a + b = r and u ∈ (Z/p^{min(a,b)}Z)^×. Via Proposition 2.1.15, we identify ω_{X_r/k} with Rosenlicht's sheaf ω^{reg}_{X_r/k} of regular differentials, and we simply write ω_{X_r} for this sheaf. By Definition 2.1.13 and Remark 2.1.14, we have a functorial injection (3.3.3) of k-vector spaces from H^0(X_r, ω_{X_r}) into the product of the spaces H^0(I_{(a,b,u)}, Ω^1_{k(I_{(a,b,u)})}), with image precisely those elements (η_{(a,b,u)}) of the product that satisfy Σ res_{x_{(a,b,u)}}(s η_{(a,b,u)}) = 0 for each supersingular point x ∈ X_r(k) and all s ∈ O_{X_r,x}, where x_{(a,b,u)} is the unique point of I_{(a,b,u)} lying over x and the sum is over all triples (a, b, u) as above. We henceforth identify η ∈ H^0(X_r, ω_{X_r}) with its image under (3.3.3), and we denote by η_{(a,b,u)} the (a, b, u)-component of η.
Recall from (2.3.10) that the correspondence U_p := (π_1, π_2) on X_r given by the degeneracy maps π_1, π_2 : Y_r ⇒ X_r of (2.3.9) yields endomorphisms U_p := (π_1)_* ∘ π^*_2 and U^*_p := (π_2)_* ∘ π^*_1 of H^0(X_r, ω_{X_r/R_r}); we will again denote by U_p and U^*_p the induced endomorphisms U_p ⊗ 1 and U^*_p ⊗ 1 of H^0(X_r, ω_{X_r}) ≅ H^0(X_r, ω_{X_r/R_r}) ⊗_{R_r} k, where the isomorphism is the canonical one of Lemma 2.1.16 (1). By the functoriality of normalization, we have an induced correspondence U_p := (π^n_1, π^n_2) on X^n_r, and we write U_p and U^*_p for the resulting endomorphisms (2.3.10) of H^0(X^n_r, Ω^1_{k(X^n_r)}). By Lemma 2.1.16 (2), the map (3.3.3) is then U_p- and U^*_p-equivariant. The Hecke correspondences away from p and the diamond operators act on the source of (3.3.3) via "reduction modulo p" and on the target via the induced correspondences in the usual way (2.3.10), and the map (3.3.3) is compatible with these actions thanks to Lemma 2.1.16 (2). Similarly, the semilinear "geometric inertia" action of Γ := Gal(K_∞/K_0) on X_r induces a linear action on X^n_r as in Proposition 2.3.14 (2.3.14), and the map (3.3.3) is equivariant with respect to these actions.
We claim that for any meromorphic differential η = (η_{(a,b,u)}) on X^n_r, the components of U_p η and U^*_p η are given by the explicit formulas (3.3.4a)-(3.3.4b). The proof of this claim is an easy exercise using the definition of U_p, the explicit description of the maps π^n_1 and π^n_2 given in Proposition 2.3.20, and the fact that F^* kills any global meromorphic differential form on a scheme of characteristic p. The crucial observation for our purposes is that for 0 < b ≤ r, the (a, b, u)-component of U_p η depends only on the (a + 1, b − 1, u')-components of η for varying u', and similarly for 0 < a ≤ r the (a, b, u)-component of U^*_p η depends only on the (a − 1, b + 1, u')-components of η. By induction, we deduce the explicit descriptions (3.3.5a)-(3.3.5b) of the iterates for any n ≥ r ≥ 1. For any r > 0 and for ⋆ = ∞, 0 we define maps γ^⋆_r as in (3.3.6a)-(3.3.6b). These maps are well-defined because F^* = V is invertible on the V-ordinary subspace, and they are immediately seen to be injective by looking at (r, 0, 1)-components. Note moreover that the (a, b, u)-component of γ^⋆_r(η) is independent of u.
We claim that the maps γ^⋆_r have image in H^0(X_r, ω_{X_r}) (i.e. that they factor through (3.3.3)). To see this, we proceed as follows. Suppose that x is any supersingular point on X_r and s ∈ O_{X_r,x} is arbitrary. By Proposition 2.1.15 and Definition 2.1.13, we must check that the sum of the residues of s γ^∞_r(η) at all k-points of X^n_r lying over x is zero. Using (3.3.6a), we calculate that this sum is equal to the expression (3.3.7), where x_{(a,b,u)} denotes the unique point of the (a, b, u)-component of X^n_r over x, and the outer sums range over all nonnegative integers a, b with a + b = r. We claim that for any meromorphic differential ω on I_{(a,b,u)} and any supersingular point y of I_{(a,b,u)} over x, we have (3.3.8a) res_y(⟨u⟩^* ω) = res_y(ω) for all u ∈ Z^×_p, and, if in addition ω is V-ordinary, (3.3.8b) res_y(sω) = s(x) res_y(ω). Indeed, (3.3.8a) is a consequence of (3.1.2), using the fact that the automorphism ⟨u⟩ of I_{(a,b,u)} fixes every supersingular point, while (3.3.8b) is deduced by thinking about formal expansions of differentials at y and using the fact that a V-ordinary meromorphic differential has at worst simple poles thanks to Lemma 3.1.6. Via (3.3.8a)-(3.3.8b), we reduce the sum (3.3.7) to the expression (3.3.9), where the first equality follows from the fact that for fixed a, b, the points x_{(a,b,u)} for varying u ∈ (Z/p^{min(a,b)}Z)^× are all identified with the same point on Ig(p^{max(a,b)}), and the second equality is a consequence of (3.1.2), since ρ(x_{(r,0,1)}) = x_{(r−1,1,1)}. As η is V-ordinary, there exists a V-ordinary meromorphic differential ξ on I^0_r with η = F^*ξ; substituting this expression for η into (3.3.9) and applying (3.1.2) once more, we conclude that (3.3.9) is zero, as desired. That γ^0_r has image in H^0(X_r, ω_{X_r/k}) follows from a nearly identical calculation, and we omit the details.
It follows immediately from our calculations (3.3.4a)-(3.3.4b) and the definitions (3.3.6a)-(3.3.6b) that the relations U_p ∘ γ^∞_r = γ^∞_r ∘ F^* and U^*_p ∘ γ^0_r = γ^0_r ∘ ⟨p⟩^{−1}_N F^* hold. Since F^* is invertible on the source of γ^⋆_r, it follows immediately that γ^0_r has image contained in e^*_r H^0(X_r, ω_{X_r}) and that γ^∞_r has image contained in e_r H^0(X_r, ω_{X_r}).
To see that these containments are equalities, we proceed as follows. Suppose that ξ ∈ e_r H^0(X_r, ω_{X_r}) is arbitrary. We claim that the meromorphic differential ξ_{(r,0,1)} on I^∞_r has at worst simple poles along ss (and is holomorphic outside ss). Indeed, for each n > 0 we may find ξ^{(n)} ∈ e_r H^0(X_r, ω_{X_r}) with ξ = U^n_p ξ^{(n)}. As discussed in §2.1, when viewed as a meromorphic differential on X^n_r any section of ω_{X_r} has poles of order bounded by a constant depending only on r (see [Con00, Lemma 5.2.2]). Since F : I^∞_r → I^∞_r is inseparable of degree p (so totally ramified over every supersingular point), it follows from Remark 3.1.2 that there exists n > r such that the meromorphic differential F^{n*} ξ^{(n)}_{(r,0,1)} has at worst simple poles along ss; by the formula (3.3.5a) for U^n_p, we conclude that the same is true of ξ_{(r,0,1)} = (U^n_p ξ^{(n)})_{(r,0,1)} = F^{n*} ξ^{(n)}_{(r,0,1)}.
Applying this with ξ^{(r)} in the role of ξ, and using (3.3.5a) and (3.3.6a), we calculate that ξ lies in the image of γ^∞_r, so γ^∞_r surjects onto e_r H^0(X_r, ω_{X_r}) and is hence an isomorphism onto this image. A nearly identical argument shows that γ^0_r is an isomorphism onto e^*_r H^0(X_r, ω_{X_r}). Since pullback of meromorphic differentials along i^∞_r : I^∞_r → X^n_r is identified under (3.3.3) with the projection proj_{(r,0,1)} of the product of the H^0(I_{(a,b,u)}, Ω^1_{k(I_{(a,b,u)})}) onto the (r, 0, 1)-component, the composition of γ^∞_r and (the restriction of) (i^∞_r)^* in either order is the identity map. Since i^∞_r is compatible with the H_r-correspondences, the resulting isomorphism (3.3.1) is H_r-equivariant (with U_p acting on the target via F^*). Similarly, since the "geometric inertia" action (2.3.3) of Γ on X_r is compatible via i^∞_r with the trivial action on I^∞_r by Proposition 2.3.14, the isomorphism (3.3.1) is equivariant for these actions of Γ. A nearly identical analysis shows that (i^0_r)^* is H^*_r-compatible (with U^*_p acting on the target as ⟨p⟩^{−1}_N F^*) and Γ-equivariant for the action of Γ on I^0_r via χ(·)^{−1}. The commutativity of the four diagrams in (3.3.2) is an immediate consequence of the descriptions of the degeneracy mappings ρ, σ on X^n_r furnished by Proposition 2.3.13 and the explication (3.3.11) of pullback by i^⋆_r in terms of projection. That w_r interchanges the two diagrams in (3.3.2) is an immediate consequence of Proposition 2.3.6.
Finally, that the assertions of Proposition 3.3.1 all hold if X_r and Ω^1_{I_r}(ss) are replaced by X^n_r and Ω^1_{I_r}, respectively, follows from a similar, but much simpler, argument. The point is that the maps γ^⋆_r of (3.3.6a)-(3.3.6b) visibly carry V-ordinary holomorphic differentials into global differentials on X^n_r, from which it follows via our argument that they induce the claimed isomorphisms. Since X_r is a proper and geometrically connected curve over F_p, Proposition 2.1.12 (2) provides short exact sequences (3.3.12a) and (3.3.12b) of F_p[∆/∆_r]-modules with linear Γ and H^*_r (respectively H_r)-action which are canonically F_p-linearly dual to each other. We likewise have such exact sequences in the case of X^n_r; note that since X^n_r is smooth, the short exact sequence H(X^n_r/F_p) is simply the Hodge filtration of H^1_{dR}(X^n_r/F_p). Corollary 3.3.2. The absolute Frobenius morphism of X_r over F_p induces a natural F_p[∆/∆_r]-linear, Γ-compatible, and H^*_r (respectively H_r) equivariant splitting of (3.3.12a) (respectively (3.3.12b)). Furthermore, for each r we have natural isomorphisms of split short exact sequences (3.3.13a) and (3.3.13b) which are compatible with the extra structures. The identification (3.3.13a) (respectively (3.3.13b)) is moreover compatible with change in r using the trace mappings attached to ρ : I_r → I_{r−1} and to ρ : X_r → X_{r−1} (respectively σ : X_r → X_{r−1}). The same statements hold true if we replace X_r, Ω^1_{I_r}(ss), and O_{I_r}(−ss) with X^n_r, Ω^1_{I_r}, and O_{I_r}, respectively.
Proof. Pullback by the absolute Frobenius endomorphism of X_r induces an endomorphism of (3.3.12a) which kills H^0(X_r, ω_{X_r/F_p}) and so yields a morphism of F_p[∆/∆_r]-modules (3.3.14) e^*_r H^1(X_r, O_{X_r}) → e^*_r H^1(X_r/F_p) that is Γ- and H^*_r-compatible and projects to the endomorphism F^* of e^*_r H^1(X_r, O_{X_r}). On the other hand, Proposition 3.3.1 gives a natural Γ- and H^*_r-equivariant isomorphism of F_p[∆/∆_r]-modules. As this isomorphism intertwines F^* on source and target, we deduce that F^* acts invertibly on e^*_r H^1(X_r, O_{X_r}). We may therefore pre-compose (3.3.14) with (F^*)^{−1} to obtain a canonical splitting of (3.3.12a), which by construction is F_p[∆/∆_r]-linear and compatible with Γ and H^*_r. The existence of (3.3.13a) as well as its compatibility with Γ, H^*_r and with change in r now follows immediately from Proposition 3.3.1 and duality (see Remark 3.1.5). The corresponding assertions for the exact sequence (3.3.12b) and the diagram (3.3.13b) are proved similarly, and we leave the details to the reader. A nearly identical argument shows that the same assertions hold true when X_r, Ω^1_{I_r}(ss), and O_{I_r}(−ss) are replaced by X^n_r, Ω^1_{I_r}, and O_{I_r}, respectively.
Corollary 3.3.3. For each r, the three terms of the ordinary part of (3.3.12a) (respectively (3.3.12b)) are free F_p[∆/∆_r]-modules of ranks d, 2d, and d for a suitable integer d independent of r, and the trace mappings attached to change in r induce isomorphisms between them that are Γ and H^*_r (respectively H_r) equivariant.
Proof. This follows immediately from Proposition 3.2.1 and Corollary 3.3.2.
Remark 3.3.4. We warn the reader that the naïve analogue of Corollary 3.3.3 in the case of X^n_r is false: while H^0(I_r, Ω^1_{I_r}(ss))^{V-ord} is a free F_p[∆/∆_r]-module, the submodule of holomorphic differentials need not be. Over k = F_p, the residue map gives a short exact sequence of k[∆/∆_r]-modules with middle term that is free over k[∆/∆_r]; see Theorem 2 of [Nak85]. The splitting of this exact sequence is then equivalent to the projectivity, hence freeness, of the submodule of holomorphic differentials, which can fail.
In order to formulate the correct analogue of Corollary 3.3.3 in the case of X^n_r, we proceed as follows. Denote by τ : F^×_p → Z^×_p the Teichmüller character, and for any Z_p-module M with a linear action of F^×_p and any j ∈ Z/(p − 1)Z, let M(j) be the subspace of M on which F^×_p acts via τ^j. As #F^×_p = p − 1 is a unit in Z^×_p, the submodule M(j) is a direct summand of M. Explicitly, the identity of Z_p[F^×_p] admits the decomposition 1 = Σ_{j mod p−1} f_j into mutually orthogonal idempotents f_j := (p − 1)^{−1} Σ_{u ∈ F^×_p} τ(u)^{−j}[u], and we have M(j) = f_j M. In applications, we will consistently need to remove the trivial eigenspace M(0) from M, as this eigenspace in the p-adic Galois representations we consider is not potentially crystalline at p. We will write (3.3.17) f := 1 − f_0 = Σ_{j ≢ 0} f_j, corresponding to projection away from the 0-eigenspace for F^×_p. Applying these considerations to the identifications of split exact sequences in Corollary 3.3.2, which are compatible with the canonical diamond operator action of Z^×_p ≅ F^×_p × ∆ on both rows, we obtain a corresponding identification of split exact sequences of τ^j-eigenspaces, for each j mod p − 1. The following is a generalization of [Gro90, Proposition 8.10 (2)]: Lemma 3.3.5. Let j be an integer with j ≢ 0 mod p − 1. For each r, there are canonical isomorphisms (3.3.18) H^0(I_r, Ω^1_{I_r})(j) → H^0(I_r, Ω^1_{I_r}(ss))(j) and the corresponding dual isomorphism. The normalization map ν : X^n_r → X_r induces a natural isomorphism of split exact sequences (3.3.19), where the central vertical arrow is deduced from the outer two vertical arrows via the splitting of both rows by the Frobenius endomorphism. The same assertions hold if we replace e^*_r with e_r.
Proof. The first map in (3.3.18) is injective, as it is simply the canonical inclusion. To see that it is an isomorphism, we may work over k := F_p. If η is any meromorphic differential on I_r on which F^×_p acts via the character τ^j, then since the diamond operators fix every supersingular point on I_r we have res_x(η) = res_x(⟨u⟩^* η) = τ(u)^j res_x(η) for any x ∈ ss(k) and all u ∈ F^×_p. As j ≢ 0 mod p − 1, so that τ^j is nontrivial, we must therefore have res_x(η) = 0 for all supersingular points x. If in addition η is holomorphic outside ss with at worst simple poles along ss, then η must be holomorphic everywhere, so the first map in (3.3.18) is surjective, as desired. The second mapping in (3.3.18) is dual to the first, and hence an isomorphism as well. Now for each j ≢ 0 mod p − 1, we have a commutative diagram (3.3.20) of F_p[∆/∆_r]-modules with Γ and H^*_r-action in which the two vertical arrows are isomorphisms by Proposition 3.3.1 and the bottom horizontal mapping is an isomorphism as we have just seen. We conclude that the top horizontal arrow of (3.3.20) is an isomorphism as well. Thus, the left vertical map in (3.3.19) is an isomorphism, so the same is true of the right vertical map by duality. The diagram (3.3.19) then follows at once from the fact that both rows are canonically split by the Frobenius endomorphism, thanks to Corollary 3.3.2. A nearly identical argument shows that the same assertions hold if we replace e^*_r with e_r throughout.
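Returning to the idempotents f_j recalled above: as a routine check (our verification, with f_j = (p − 1)^{−1} Σ_u τ(u)^{−j}[u] as in Remark 3.3.4 and the discussion following it), for i, j mod p − 1 one computes, substituting w = uv,
\[
f_i f_j \;=\; \frac{1}{(p-1)^2}\sum_{w \in \mathbb{F}_p^\times}\Big(\sum_{v \in \mathbb{F}_p^\times}\tau(v)^{\,i-j}\Big)\,\tau(w)^{-i}\,[w] \;=\; \delta_{ij}\, f_i,
\]
since Σ_v τ(v)^{i−j} equals p − 1 when i ≡ j (mod p − 1) and 0 otherwise; summing over j recovers Σ_j f_j = 1.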
If A is any Z_p[F^×_p]-algebra and a ∈ A, we will write a' := f a for the product of a with the idempotent f of (3.3.17), or equivalently the projection of a to the complement of the trivial eigenspace for F^×_p. We will apply this to A = H_r, H^*_r, viewed as Z_p[F^×_p]-algebras in the usual manner, via the diamond operators and the Teichmüller section τ : F^×_p → Z^×_p. Proposition 3.3.6. For each r there are natural isomorphisms of split short exact sequences (3.3.21a) and (3.3.21b). Setting d' := Σ_{k=3}^{p} d_k, where d_k := dim_{F_p} S_k(N; F_p)^{ord} as in Remark 3.2.6, the terms in the top rows of (3.3.21a) and (3.3.21b) are free F_p[∆/∆_r]-modules of ranks d', 2d', and d'. The identification (3.3.21a) (respectively (3.3.21b)) is Γ and H^*_r (respectively H_r)-equivariant, and compatible with change in r using the trace mappings attached to ρ : I_r → I_s and to ρ : X_r → X_s (respectively σ : X_r → X_s).
Proof. This follows immediately from Corollaries 3.3.2-3.3.3 and Lemma 3.3.5, using the fact that the group ring F p [∆/∆ r ] is local, so any projective F p [∆/∆ r ]-module is free.
As usual, we write Pic^0_{X^n_r/F_p}[p^∞] for the p-divisible group of the Jacobian of X^n_r over F_p; it is equipped with canonical actions of H_r and H^*_r, as well as a "geometric inertia" action of Γ over F_p.
Proposition 3.3.8. There is a canonical isomorphism (3.3.22) of Dieudonné modules which is compatible with H^*_r, Γ, and change in r and which carries D(Σ^m_r)_{F_p} (respectively D(Σ^ét_r)_{F_p}) isomorphically onto f H^0(I^0_r, Ω^1)^{V-ord} (respectively f H^1(I^∞_r, O)^{F-ord}). In particular, Σ_r is ordinary. Proof. First note that since the identifications (3.3.21a) and (3.3.21b) are induced by the canonical closed immersions i^⋆_r : I^⋆_r → X^n_r, they are compatible with the natural actions of Frobenius and the Cartier operator. The isomorphism (3.3.22) is therefore an immediate consequence of Propositions 3.1.8 and 3.3.6. Since this isomorphism is compatible with F and V, we conclude that the canonical inclusion D(Σ^m_r)_{Z_p} ⊕ D(Σ^ét_r)_{Z_p} → D(Σ_r)_{Z_p} is surjective, whence Σ_r is ordinary by Dieudonné theory.
We now analyze the ordinary p-divisible group Σ_r in more detail. Since X^n_r is the disjoint union of proper smooth and irreducible Igusa curves I_{(a,b,u)} (see Proposition 2.3.10) with I^0_r := I_{(0,r,1)} and I^∞_r = I_{(r,0,1)}, we have a canonical identification (3.3.24) of Pic^0_{X^n_r/F_p} with the product of the Jacobians of the components I_{(a,b,u)}. Via the identification (3.3.24), we know that j^⋆_r is a direct factor of Pic^0_{X^n_r/F_p}; in these terms Alb(i^⋆_r) is the unique mapping which projects to the identity on j^⋆_r and to the zero map on all other factors, while Pic^0(i^⋆_r) is simply projection onto the factor j^⋆_r. As Σ_r is a direct factor of f Pic^0_{X^n_r/F_p}[p^∞], these mappings induce maps (3.3.26a) and (3.3.26b), which we (somewhat abusively) again denote by Alb(i^0_r) and Pic^0(i^∞_r), respectively. The following is a sharpening of [MW84, Chapter 3, §3, Proposition 3] (see also [Til87, Proposition 3.2]):
Proposition 3.3.9. The mappings (3.3.26a) and (3.3.26b) are isomorphisms. They induce a canonical split short exact sequence (3.3.27) of p-divisible groups over F_p which is: (1) Γ-equivariant for the geometric inertia action on Σ_r, the trivial action on f j^∞_r[p^∞]^ét, and the action via χ(·)^{−1} on the remaining term; (2) Equivariant for the action of H^*_r; (3) Compatible with change in r via the mappings Pic^0(ρ) on j^⋆_r and Σ_r.
Proof. It is clearly enough to prove that the sequence (3.3.27) induced by (3.3.26a) and (3.3.26b) is exact. Since the contravariant Dieudonné module functor from the category of p-divisible groups over F_p to the category of A-modules which are Z_p-finite and free is an exact anti-equivalence, it suffices to prove such exactness after applying D(·)_{Z_p}. As the resulting sequence consists of finite free Z_p-modules, exactness may be checked modulo p, where it follows immediately from Propositions 3.3.6 and 3.3.8. The claimed compatibility with Γ, H^*_r, and change in r is deduced from Propositions 2.3.14, 2.3.20, and 2.3.13, respectively.
Remark 3.3.10. It is possible to give a short proof of Proposition 3.3.9 along the lines of [MW84] or [Til87] by using Proposition 2.3.20 directly. We stress, however, that our approach via Dieudonné modules gives more refined information, most notably that the Dieudonné module of Σ r [p] is free as an F p [∆/∆ r ]-module. This fact will be crucial in our later arguments.
4. Dieudonné crystals and (ϕ, Γ)-modules
In this section, we summarize the main results of [CL12], which provide a classification of p-divisible groups over R_r by certain semi-linear algebra structures. These structures, which arise naturally via the Dieudonné crystal functor, are cyclotomic analogues of Breuil and Kisin modules, and are closely related to Wach modules.
4.1. (ϕ, Γ)-modules attached to p-divisible groups. Fix a perfect field k of characteristic p. Write W := W(k) for the Witt vectors of k and K for its fraction field, and denote by ϕ the unique automorphism of W(k) lifting the p-power map on k. Fix an algebraic closure K̄ of K, as well as a compatible sequence {ε^{(r)}}_{r≥1} of primitive p-power roots of unity in K̄, and set G_K := Gal(K̄/K). For r ≥ 0, we put K_r := K(µ_{p^r}) and R_r := W[µ_{p^r}], and we set Γ_r := Gal(K_∞/K_r), and Γ := Γ_0.
Let S_r := W[[u_r]] be the power series ring in one variable u_r over W, viewed as a topological ring via the (p, u_r)-adic topology. We equip S_r with the unique continuous action of Γ and extension of ϕ determined by
(4.1.1) γ u_r := (1 + u_r)^{χ(γ)} − 1 for γ ∈ Γ, and ϕ(u_r) := (1 + u_r)^p − 1.
We denote by O_{E_r} the p-adic completion of S_r[1/u_r], which is a complete discrete valuation ring with uniformizer p and residue field k((u_r)). One checks that the actions of ϕ and Γ on S_r uniquely extend to O_{E_r}.
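As a quick consistency check (ours), the ϕ- and Γ-actions defined by (4.1.1) commute on S_r, since for γ ∈ Γ
\[
\varphi(\gamma u_r) = (1 + \varphi(u_r))^{\chi(\gamma)} - 1 = (1 + u_r)^{p\chi(\gamma)} - 1 = (1 + \gamma u_r)^{p} - 1 = \gamma(\varphi(u_r)),
\]
where (1 + u_r)^{χ(γ)} is computed by the binomial series, which converges (p, u_r)-adically.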
For r > 0, we write θ : S_r ↠ R_r for the continuous and Γ-equivariant W-algebra surjection sending u_r to ε^{(r)} − 1, whose kernel is the principal ideal generated by the Eisenstein polynomial E_r := ϕ^r(u_r)/ϕ^{r−1}(u_r), and we denote by τ : S_r ↠ W the continuous and ϕ-equivariant surjection of W-algebras determined by τ(u_r) = 0. We lift the canonical inclusion R_r → R_{r+1} to a Γ- and ϕ-equivariant W-algebra injection S_r → S_{r+1} determined by u_r → ϕ(u_{r+1}); this map uniquely extends to a continuous injection O_{E_r} → O_{E_{r+1}}, compatibly with ϕ and Γ. We will frequently identify S_r (respectively O_{E_r}) with its image in S_{r+1} (respectively O_{E_{r+1}}), which coincides with the image of ϕ on S_{r+1} (respectively O_{E_{r+1}}). Under this convention, we have E_r(u_r) = E_1(u_1) = u_0/u_1 for all r > 0, so we will simply write ω := E_r(u_r) for this common element of S_r for r > 0.
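For concreteness (a standard computation, recorded here for convenience): under the identification u_0 = ϕ^r(u_r) = (1 + u_r)^{p^r} − 1, the polynomial above is
\[
E_r(u_r) \;=\; \frac{(1+u_r)^{p^r} - 1}{(1+u_r)^{p^{r-1}} - 1} \;=\; \sum_{i=0}^{p-1} (1+u_r)^{\,i p^{r-1}},
\]
a monic polynomial of degree p^{r−1}(p − 1) with E_r(0) = p and E_r ≡ u_r^{p^{r−1}(p−1)} mod p, so E_r is indeed Eisenstein; moreover θ(E_r) = 0 because (ε^{(r)})^{p^r} = 1 while (ε^{(r)})^{p^{r−1}} = ε^{(1)} ≠ 1.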
Definition 4.1.1. We write BT^ϕ_{S_r} for the category of Barsotti-Tate modules over S_r, i.e. the category whose objects are pairs (M, ϕ_M) where
• M is a free S_r-module of finite rank.
• ϕ_M : M → M is a ϕ-semilinear map whose linearization has cokernel killed by ω,
and whose morphisms are ϕ-equivariant S_r-module homomorphisms. We write BT^{ϕ,Γ}_{S_r} for the subcategory of BT^ϕ_{S_r} consisting of objects (M, ϕ_M) which admit a semilinear Γ-action (in the category BT^ϕ_{S_r}) with the property that Γ_r acts trivially on M/u_r M. Morphisms in BT^{ϕ,Γ}_{S_r} are ϕ- and Γ-equivariant morphisms of S_r-modules. We often abuse notation by writing M for the pair (M, ϕ_M) and ϕ for ϕ_M.
and the commuting action of Γ given for γ ∈ Γ by (γ · f)(m) := γ(f(γ^{−1} m)).
There is a natural notion of base change for Barsotti-Tate modules. Let k'/k be an algebraic extension (so k' is automatically perfect), and write W' := W(k'), S'_r := W'[[u_r]], and so on. The canonical inclusion W → W' extends to a ϕ- and Γ-compatible W-algebra injection ι_r : S_r → S'_r, and extension of scalars along ι_r yields a canonical base change functor ι_{r*} : BT^{ϕ,Γ}_{S_r} → BT^{ϕ,Γ}_{S'_r}, which one checks is compatible with duality. Let us write pdiv^Γ_{R_r} for the subcategory of p-divisible groups over R_r consisting of those objects and morphisms which descend (necessarily uniquely) to K = K_0 on generic fibers. By Tate's Theorem, this is of course equivalent to the full subcategory of p-divisible groups over K_0 which have good reduction over K_r. Note that for any algebraic extension k'/k, base change along the inclusion ι_r : R_r → R'_r gives a covariant functor ι_{r*} : pdiv^Γ_{R_r} → pdiv^Γ_{R'_r}. The main result of [CL12] is the following: Theorem 4.1.3. For each r > 0, there is a contravariant functor M_r : pdiv^Γ_{R_r} → BT^{ϕ,Γ}_{S_r} such that: (1) The functor M_r is an exact equivalence of categories, compatible with duality.
(2) The functor M_r is of formation compatible with base change: for any algebraic extension k'/k, there is a natural isomorphism of composite functors ι_{r*} ∘ M_r ≅ M'_r ∘ ι_{r*} on pdiv^Γ_{R_r}.
(3) For G ∈ pdiv^Γ_{R_r}, put Ḡ := G ×_{R_r} k and G_0 := G ×_{R_r} R_r/pR_r. (a) There is a functorial and Γ-equivariant isomorphism of W-modules M_r(G)/u_r M_r(G) ≅ D(Ḡ)_W. (b) There is a functorial and Γ-equivariant isomorphism of R_r-modules M_r(G) ⊗_{S_r,ϕ} R_r ≅ D(G_0)_{R_r}. We wish to explain how to functorially recover the G_K-representation afforded by the p-adic Tate module T_p G_K from M_r(G). In order to do so, we must first recall the necessary period rings; for a more detailed synopsis of these rings and their properties, we refer the reader to [Col08, §6-§8].
As usual, we put Ẽ^+ := lim←_{x→x^p} O_{C_K}/(p), equipped with its canonical G_K-action via "coordinates" and p-power Frobenius map ϕ. This is a perfect (i.e. ϕ is an automorphism) valuation ring of characteristic p with residue field k̄ and fraction field Ẽ := Frac(Ẽ^+) that is algebraically closed. We view Ẽ as a topological field via its valuation topology, with respect to which it is complete. Our fixed choice of p-power compatible sequence {ε^{(r)}}_{r≥0} induces an element ε := (ε^{(r)} mod p)_{r≥0} of Ẽ^+ and we set E_K := k((ε − 1)), viewed as a topological subring of Ẽ; note that this is a ϕ- and G_K-stable subfield of Ẽ that is independent of our choice of ε. We write E := E^{sep}_K for the separable closure of E_K in the algebraically closed field Ẽ. The natural G_K-action on Ẽ induces a canonical identification Gal(E/E_K) = H := ker(χ) ⊆ G_K, so E^H = E_K. If E is any subring of Ẽ, we write E^+ := E ∩ Ẽ^+ for the intersection (taken inside Ẽ).
We now construct Cohen rings for each of the above subrings of Ẽ. To begin with, we put Ã := W(Ẽ) and Ã^+ := W(Ẽ^+); each of these rings is equipped with a canonical Frobenius automorphism ϕ and action of G_K via Witt functoriality. Set-theoretically identifying W(Ẽ) with the product of countably many copies of Ẽ in the usual way, we endow each factor with its valuation topology and give Ã the product topology. The G_K-action on Ã is then continuous, and the canonical G_K-equivariant W-algebra surjection θ : Ã^+ → O_{C_K} is continuous when O_{C_K} is given its usual p-adic topology. For each r ≥ 0, there is a unique continuous W-algebra map j_r : O_{E_r} → Ã determined by j_r(u_r) := ϕ^{−r}([ε] − 1). These maps are moreover ϕ- and G_K-equivariant, with G_K acting on O_{E_r} through the quotient G_K ↠ Γ, and compatible with change in r. We define A_{K,r} := im(j_r : O_{E_r} → Ã), which is naturally a ϕ- and G_K-stable subring of Ã that is independent of our choice of ε. We again omit the subscript r when r = 0. Note that A_{K,r} = ϕ^{−r}(A_K) inside Ã, and that A_{K,r} is a discrete valuation ring with uniformizer p and residue field ϕ^{−r}(E_K) that is purely inseparable over E_K. We define A_{K,∞} := ∪_{r≥0} A_{K,r} and write Ã_K (respectively Â_K) for the closure of A_{K,∞} in Ã with respect to the weak (respectively strong) topology.
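As a sanity check (ours) that the maps j_r are compatible with change in r: the transition S_r → S_{r+1} sends u_r to ϕ(u_{r+1}), and
\[
j_{r+1}(\varphi(u_{r+1})) = \varphi\big(j_{r+1}(u_{r+1})\big) = \varphi\big(\varphi^{-(r+1)}([\varepsilon]-1)\big) = \varphi^{-r}([\varepsilon]-1) = j_r(u_r),
\]
so j_{r+1} restricts to j_r on O_{E_r}, and in particular A_{K,r} ⊆ A_{K,r+1} inside Ã.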
Let A^{sh}_{K,r} be the strict Henselization of A_{K,r} with respect to the separable closure of its residue field inside Ẽ. Since Ã is strictly Henselian, there is a unique local morphism A^{sh}_{K,r} → Ã recovering the given inclusion on residue fields, and we henceforth view A^{sh}_{K,r} as a subring of Ã. We denote by A_r the topological closure of A^{sh}_{K,r} inside Ã with respect to the strong topology, which is a ϕ- and G_K-stable subring of Ã, and we note that A_r = ϕ^{−r}(A) and A^H_r = A_{K,r} inside Ã. We note also that the canonical map Z_p → Ã^{ϕ=1} is an isomorphism, from which it immediately follows that the same is true if we replace Ã by any of its subrings constructed above. If A is any subring of Ã, we define A^+ := A ∩ Ã^+, with the intersection taken inside Ã.
Remark 4.1.4. We will identify S_r and O_{E_r} with their respective images A^+_{K,r} and A_{K,r} in Ã under j_r. Writing S_∞ := lim→ S_r and O_{E_∞} := lim→ O_{E_r}, we likewise identify S_∞ with A^+_{K,∞} and O_{E_∞} with A_{K,∞}. Denoting the p-adic (respectively (p, u_0)-adic) completions of S_∞ accordingly, one has corresponding identifications, where E^{rad}_K := ∪_{r≥0} ϕ^{−r}(E_K) is the radiciel (= perfect) closure of E_K in Ẽ and Ê^{rad}_K its topological completion. Via these identifications, ω := u_0/u_1 ∈ A^+_{K,1} is a principal generator of ker(θ : Ã^+ → O_{C_K}). We can now explain the functorial relation between M_r(G) and T_p G_K:
22 The valuation v_Ẽ on Ẽ induces the usual discrete valuation on E_{K,r}, with the unusual normalization 1/(p^{r−1}(p − 1)).
23 This is what is called the weak topology on Ã. If each factor of Ẽ is instead given the discrete topology, then the product topology on Ã = W(Ẽ) is the familiar p-adic topology, called the strong topology.
Theorem 4.1.5. Let G ∈ pdiv^Γ_{R_r}, and write H^1_{et}(G_K) := (T_p G_K)^∨ for the Z_p-linear dual of T_p G_K. There is a canonical mapping M_r(G) ⊗_{S_r} A^+_r → H^1_{et}(G_K) ⊗_{Z_p} A^+_r of finite free A^+_r-modules with semilinear Frobenius and G_K-actions that is injective with cokernel killed by u_1. Here, ϕ acts as ϕ_{M_r(G)} ⊗ ϕ on source and as 1 ⊗ ϕ on target, while G_K acts diagonally on source and target, through the quotient G_K ↠ Γ on M_r(G). In particular, there is a natural ϕ- and G_K-equivariant isomorphism M_r(G) ⊗_{S_r} A_r ≅ H^1_{et}(G_K) ⊗_{Z_p} A_r. These mappings are compatible with duality and with change in r in the obvious manner.
Corollary 4.1.6. For G ∈ pdiv^Γ_{R_r}, there are functorial isomorphisms of Z_p[G_K]-modules which are compatible with duality and change in r. In the first isomorphism, we view A^+_r as an S_r-algebra via the composite of the usual structure map with ϕ.
For the remainder of this section, we recall the construction of the functor M_r, both because we shall need to reference it in what follows, and because we feel it is enlightening. For details, including the proofs of Theorems 4.1.3-4.1.5 and Corollary 4.1.6, we refer the reader to [CL12].
Fix G ∈ pdiv^Γ_{R_r} and set G_0 := G ×_{R_r} R_r/pR_r. The S_r-module M_r(G) is a functorial descent of the evaluation of the Dieudonné crystal D(G_0) on a certain "universal" PD-thickening of R_r/pR_r, which we now describe. Let S_r be the p-adic completion of the PD-envelope of S_r with respect to the ideal ker θ, viewed as a (separated and complete) topological ring via the p-adic topology. We give S_r its PD-filtration: for q ∈ Z the ideal Fil^q S_r is the topological closure of the ideal generated by {α^{[n]} : α ∈ ker θ, n ≥ q}. By construction, the map θ : S_r ↠ R_r uniquely extends to a continuous surjection of S_r-algebras S_r ↠ R_r (which we again denote by θ) whose kernel Fil^1 S_r is equipped with topologically PD-nilpotent divided powers. One shows that there is a unique continuous endomorphism ϕ of S_r extending ϕ on S_r, and that ϕ(Fil^1 S_r) ⊆ pS_r; in particular, we may define ϕ_1 : Fil^1 S_r → S_r by ϕ_1 := ϕ/p, which is a ϕ-semilinear homomorphism of S_r-modules. Note that ϕ_1(E_r) is a unit of S_r, so the image of ϕ_1 generates S_r as an S_r-module.
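The containment ϕ(Fil^1 S_r) ⊆ pS_r can be seen directly on the generating ideal (a standard argument, recalled here for convenience): since ϕ reduces to the p-power map modulo p, any α ∈ ker θ satisfies
\[
\varphi(\alpha) = \alpha^p + p\beta = p!\,\alpha^{[p]}\cdot\frac{\alpha^p}{p!\,\alpha^{[p]}} + p\beta \in pS_r \qquad \text{for some } \beta \in S_r,
\]
using the divided power identity α^p = p!·α^{[p]}, so that α^p ∈ pS_r; the higher divided powers α^{[n]} for n ≥ 1 are handled by similar p-adic estimates.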
Since the action of Γ on S_r preserves ker θ, it follows from the universal mapping property of divided power envelopes and p-adic continuity considerations that this action uniquely extends to a continuous and ϕ-equivariant action of Γ on S_r which is compatible with the PD-structure and the filtration. Similarly, the transition map S_r → S_{r+1} uniquely extends to a continuous S_r-algebra homomorphism S_r → S_{r+1} which is moreover compatible with filtrations (because E_r(u_r) = E_{r+1}(u_{r+1}) under our identifications), and for nonnegative integers s < r we view S_r as an S_s-algebra via these maps. We write BT^ϕ_{S_r} for the category of triples (M, Fil^1 M, ϕ_{M,1}) in which M is a finite free S_r-module and:
• Fil^1 M contains (Fil^1 S_r)M and the quotient M/Fil^1 M is a free S_r/Fil^1 S_r = R_r-module.
• ϕ_{M,1} : Fil^1 M → M is a ϕ-semilinear map whose image generates M as an S_r-module.
Morphisms in BT^ϕ_{S_r} are S_r-module homomorphisms which are compatible with the extra structures. As per our convention, we will often write M for a triple (M, Fil^1 M, ϕ_{M,1}), and ϕ_1 for ϕ_{M,1} when it can cause no confusion. We denote by BT^{ϕ,Γ}_{S_r} the subcategory of BT^ϕ_{S_r} consisting of objects M that are equipped with a semilinear action of Γ which preserves Fil^1 M, commutes with ϕ_{M,1}, and whose restriction to Γ_r is trivial on M/u_r M; morphisms in BT^{ϕ,Γ}_{S_r} are Γ-equivariant morphisms in BT^ϕ_{S_r}.
The kernel of the surjection S_r/p^n S_r ↠ R_r/pR_r is the image of the ideal Fil^1 S_r + pS_r, which by construction is equipped with topologically PD-nilpotent divided powers. We may therefore define M_r(G) := D(G_0)_{S_r} := lim←_n D(G_0)_{S_r/p^n S_r}, which is a finite free S_r-module that depends contravariantly and functorially on G_0. We promote M_r(G) to an object of BT^{ϕ,Γ}_{S_r} as follows. As the quotient map S_r ↠ R_r induces a PD-morphism of PD-thickenings of R_r/pR_r, there is a natural isomorphism of free R_r-modules (4.1.6) M_r(G) ⊗_{S_r} R_r ≅ D(G_0)_{R_r}. By Proposition 2.2.6, there is a canonical "Hodge" filtration ω_G ⊆ D(G_0)_{R_r}, which reflects the fact that G is a p-divisible group over R_r lifting G_0, and we define Fil^1 M_r(G) to be the preimage of ω_G under the composite of the isomorphism (4.1.6) with the natural surjection M_r(G) ↠ M_r(G) ⊗_{S_r} R_r; note that this depends on G and not just on G_0. The Dieudonné crystal is compatible with arbitrary base change, so the relative Frobenius F_{G_0} : G_0 → G_0^{(p)} induces a canonical morphism of S_r-modules which we may view as a ϕ-semilinear map ϕ_{M_r(G)} : M_r(G) → M_r(G). As the relative Frobenius map ω_{G_0^{(p)}} → ω_{G_0} is zero, it follows that the restriction of ϕ_{M_r(G)} to Fil^1 M_r(G) has image contained in pM_r(G), so we may define ϕ_{M_r(G),1} := ϕ_{M_r(G)}/p, and one proves as in [Kis06, Lemma A.2] that the image of ϕ_{M_r(G),1} generates M_r(G) as an S_r-module.
It remains to equip M_r(G) with a canonical semilinear action of Γ. Let us write G_{K_r} for the generic fiber of G and G_K for its unique descent to K = K_0. The existence of this descent is reflected by the existence of a commutative diagram with cartesian square (4.1.7) for each γ ∈ Γ, compatibly with change in γ; here, the subscript γ denotes base change along the map of schemes induced by γ. Since G has generic fiber G_{K_r} = G_K ×_K K_r, Tate's Theorem ensures that the dotted arrow above uniquely extends to an isomorphism (4.1.8) of p-divisible groups over R_r, compatibly with change in γ. By assumption, the action of Γ on S_r commutes with the divided powers on Fil^1 S_r and induces the given action on the quotient S_r ↠ R_r; in other words, Γ acts by automorphisms on the object (Spec(R_r/pR_r) → Spec(S_r/p^n S_r)) of Cris((R_r/pR_r)/W). Since D(G_0) is a crystal, each γ ∈ Γ therefore gives an S_r-linear map, and hence an S_r-semilinear (over γ) endomorphism of M_r(G). One easily checks that the resulting action of Γ on M_r(G) commutes with ϕ_{M,1} and preserves Fil^1 M_r(G). By the compatibility of D(G_0) with base change and the obvious fact that the W-algebra surjection S_r ↠ W sending u_r to 0 is a PD-morphism over the canonical surjection R_r/pR_r ↠ k, there is a natural isomorphism of the fiber of M_r(G) at u_r = 0 with D(Ḡ)_W. It follows easily from this and the diagram (4.1.7) that the action of Γ_r on M_r(G)/u_r M_r(G) is trivial.
To define M_r(G), we functorially descend the S_r-module M_r(G) along the structure morphism α_r : S_r → S_r. More precisely, for M ∈ BT^{ϕ,Γ}_{S_r}, we define α_{r*}(M) := (M, Fil^1 M, Φ_1) ∈ BT^{ϕ,Γ}_{S_r} by extension of scalars along α_r. The following is the key technical point of [CL12], and is proved using the theory of windows: Theorem 4.1.9. For each r, the functor α_{r*} : BT^{ϕ,Γ}_{S_r} → BT^{ϕ,Γ}_{S_r} is an equivalence of categories, compatible with change in r.
Definition 4.1.10. For G ∈ pdiv^Γ_{R_r}, we write M_r(G) for the functorial descent of M_r(G) to an object of BT^{ϕ,Γ}_{S_r} as guaranteed by Theorem 4.1.9. By construction, we have a natural isomorphism of functors α_{r*} ∘ M_r ≅ M_r on pdiv^Γ_{R_r}. Example 4.1.11. Using Messing's description of the Dieudonné crystal of a p-divisible group in terms of the Lie algebra of its universal extension (cf. Remark 2.2.7), one calculates explicit rank-one descriptions of M_r(Q_p/Z_p) and M_r(G_m[p^∞]) for r ≥ 1, with γ ∈ Γ acting as indicated. Note that both M_r(Q_p/Z_p) and M_r(G_m[p^∞]) arise by base change from their incarnations when r = 1, as follows from the fact that ω = ϕ(u_1)/u_1 and ϕ^{r−1}(γu_r/u_r) = γu_1/u_1 via our identifications.
4.2. The case of ordinary p-divisible groups. When G ∈ pdiv^Γ_{R_r} is ordinary, one can say significantly more about the structure of the S_r-module M_r(G). To begin with, we observe that for arbitrary G ∈ pdiv^Γ_{R_r}, the formation of the maximal étale quotient of G and of the maximal connected and multiplicative-type sub-p-divisible groups of G are functorial in G, so each of G^ét, G^0, and G^m is naturally an object of pdiv^Γ_{R_r} as well. We thus (functorially) obtain objects M_r(G^⋆) of BT^{ϕ,Γ}_{S_r} which admit particularly simple descriptions when ⋆ = ét or m, as we now explain.
As usual, we write Ḡ for the special fiber of G and D(Ḡ^⋆)_W for its Dieudonné module. Twisting the W-algebra structure on S_r by the automorphism ϕ^{r−1} of W, we define objects M^ét_r(G) and M^m_r(G) of BT^{ϕ,Γ}_{S_r}, with γ ∈ Γ acting as indicated. Note that these formulae make sense and do indeed give objects of BT^{ϕ,Γ}_{S_r}, as V is invertible on D(Ḡ^m)_W and γu_r/u_r ∈ S^×_r. It follows easily from these definitions that ϕ_{M^⋆_r} linearizes to an isomorphism when ⋆ = ét and has image contained in ω · M^m_r(G) when ⋆ = m. Of course, M^⋆_r(G) is contravariantly functorial in, and depends only on, the closed fiber Ḡ^⋆ of G^⋆.
Proposition 4.2.1. Let G be an object of pdiv^Γ_{R_r} and let M^ét_r(G) and M^m_r(G) be as above. There are canonical isomorphisms (4.2.2) M_r(G^ét) ≅ M^ét_r(G) and M_r(G^m) ≅ M^m_r(G) in BT^{ϕ,Γ}_{S_r}. These identifications are compatible with change in r, in the sense that for ⋆ = ét (respectively ⋆ = m) there is a canonical commutative diagram (4.2.3) in BT^Γ_{S_{r+1}}, where the left vertical isomorphism is deduced from Theorem 4.1.3 (2).
Proof. For ease of notation, we will write M^⋆_r and D^⋆ for M^⋆_r(G) and D(Ḡ^⋆)_W, respectively. Using (4.1.10), one finds that M^ét_r := α_{r*}(M^ét_r) ∈ BT^{ϕ,Γ}_{S_r} is given by the triple
(4.2.4a) M^ét_r := (D^ét ⊗_{W,ϕ^r} S_r, D^ét ⊗_{W,ϕ^r} Fil^1 S_r, F ⊗ ϕ_1)
with Γ acting diagonally on the tensor product. Similarly, α_{r*}(M^m_r) is given by the triple (4.2.4b), where v_r = ϕ(E_r)/p and γ ∈ Γ acts on D^m ⊗_{W,ϕ^r} S_r as γ ⊗ χ(γ)^{−1} ϕ^r(γu_r/u_r) · γ. Put λ := log(1 + u_0)/u_0, where log(1 + X) : Fil^1 S_r → S_r is the usual (convergent for the p-adic topology) power series, and λ satisfies the identity (4.2.5).
25 A ϕ^{−1}-semilinear map of W-modules V : D → D is invertible if there exists a ϕ-semilinear endomorphism V^{−1} whose composition with V in either order is the identity. This is easily seen to be equivalent to the invertibility of the linear map V ⊗ 1 : D → ϕ^*D, with V^{−1} the composite of (V ⊗ 1)^{−1} and the ϕ-semilinear map id ⊗ 1 : D → ϕ^*D.
It follows from (4.2.5) that the S_r-module automorphism of D^m ⊗_{W,ϕ^r} S_r given by multiplication by λ carries (4.2.4b) isomorphically onto the object of BT^{ϕ,Γ}_{S_r} given by the triple
(4.2.6) M^m_r := (D^m ⊗_{W,ϕ^r} S_r, D^m ⊗_{W,ϕ^r} S_r, V^{−1} ⊗ ϕ)
with Γ acting diagonally on the tensor product.
On the other hand, since G^ét_0 (respectively G^m_0) is étale (respectively of multiplicative type) over R_r/pR_r, the relative Frobenius (respectively Verschiebung) morphism of G_0 induces isomorphisms (4.2.7a)-(4.2.7b) of p-divisible groups over R_r/pR_r, where we have used the fact that the map x → x^{p^r} of R_r/pR_r factors as R_r/pR_r ↠ k → R_r/pR_r in the final isomorphisms of both lines above. Since the Dieudonné crystal is compatible with base change and the canonical map W → S_r extends to a PD-morphism (W, p) → (S_r, pS_r + Fil^1 S_r) over k → R_r/pR_r, applying D(·)_{S_r} to (4.2.7a)-(4.2.7b) yields natural isomorphisms D(G^⋆_0)_{S_r} ≅ D^⋆ ⊗_{W,ϕ^r} S_r for ⋆ = ét, m which carry F to F ⊗ ϕ. It is a straightforward exercise using the construction of M_r(G^⋆) given in §4.1 to check that these isomorphisms extend to give isomorphisms M_r(G^ét) ≅ M^ét_r and M_r(G^m) ≅ M^m_r in BT^{ϕ,Γ}_{S_r}. By Theorem 4.1.9, we conclude that we have natural isomorphisms in BT^{ϕ,Γ}_{S_r} as in (4.2.2). The commutativity of (4.2.3) is straightforward, using the definitions of the base change isomorphisms.
Now suppose that G is ordinary. As M_r is exact by Theorem 4.1.3 (1), applying M_r to the connected-étale sequence of G gives a short exact sequence in BT^{ϕ,Γ}_{S_r}
(4.2.8) 0 → M_r(G^ét) → M_r(G) → M_r(G^m) → 0,
which is contravariantly functorial and exact in G. Since ϕ_{M_r} linearizes to an isomorphism on M_r(G^ét) and is topologically nilpotent on M_r(G^m), we think of (4.2.8) as the "slope filtration" for Frobenius acting on M_r(G). On the other hand, Proposition 2.2.6 and Theorem 4.1.3 (3b) provide a canonical "Hodge filtration" of M_r(G) ⊗_{S_r,ϕ} R_r ≅ D(G_0)_{R_r}:
(4.2.9) 0 → ω_G → D(G_0)_{R_r} → Lie(G^t) → 0,
which is contravariant and exact in G. Our assumption that G is ordinary yields (cf. [Kat81]): Lemma 4.2.2. With notation as above, there are natural and Γ-equivariant isomorphisms (4.2.10). Composing these isomorphisms with the canonical maps obtained by applying D(·)_{R_r} to the connected-étale sequence of G_0 yields functorial R_r-linear splittings of the Hodge filtration (4.2.9). Furthermore, there is a canonical and Γ-equivariant isomorphism (4.2.11) of split exact sequences of R_r-modules, with i, j the inclusion and projection mappings obtained from the canonical direct sum decomposition.
Proof. Applying D(·)_{R_r} to the connected-étale sequence of G_0 and using Proposition 2.2.6 yields a commutative diagram (4.2.12) with exact columns and rows, where we have used the fact that the invariant differentials and Lie algebra of an étale p-divisible group (such as G^ét and (G^m)^t ≅ (G^t)^ét) are both zero. The isomorphisms (4.2.10) follow at once. We likewise immediately see that the short exact sequence in the center column of (4.2.12) is functorially and R_r-linearly split. Thus, to prove the claimed identification in (4.2.11), it suffices to exhibit natural isomorphisms of free R_r-modules with Γ-action, both of which follow easily by applying D(·)_{R_r} to (4.2.7a) and (4.2.7b) and using the compatibility of the Dieudonné crystal with base change as in the proof of Proposition 4.2.1.
From the slope filtration (4.2.8) of M_r(G) we can recover both the (split) slope filtration of D(Ḡ)_W and the (split) Hodge filtration (4.2.9) of D(G_0)_{R_r}: Proposition 4.2.3. There are canonical and Γ-equivariant isomorphisms of short exact sequences (4.2.14a) and (4.2.14b). Here, i : Lie(G^t) → D(G_0)_{R_r} and j : D(G_0)_{R_r} ↠ ω_G are the canonical splittings of Lemma 4.2.2, the top row of (4.2.14b) is obtained from (4.2.8) by extension of scalars, and the isomorphism (4.2.14a) intertwines ϕ_{M_r(·)} ⊗ ϕ with F ⊗ ϕ and ψ ⊗ 1 with V ⊗ 1.
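To see the shape of these identifications in the simplest case, consider our toy example (over Z_p rather than R_r, with one standard normalization of Dieudonné theory; the roles of F and V are interchanged in the opposite variance): the split ordinary group G = µ_{p^∞} × Q_p/Z_p. Its slope data are one-dimensional on each side,
\[
D(\overline{G})_{\mathbb{Z}_p} \;\cong\; \mathbb{Z}_p\, e_m \oplus \mathbb{Z}_p\, e_{\text{\'et}}, \qquad F e_m = p\,e_m,\quad F e_{\text{\'et}} = e_{\text{\'et}},\quad V e_m = e_m,\quad V e_{\text{\'et}} = p\,e_{\text{\'et}},
\]
while ω_G = ω_{µ_{p^∞}} and Lie(G^t) = Lie(µ_{p^∞}) are each free of rank one, so the Hodge filtration (4.2.9) is visibly split by the connected-étale decomposition, in accordance with Lemma 4.2.2.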
5. Results and Main Theorems
In this section, we will state and prove our main results as described in §1.2. Throughout, we will keep the notation of §1.2 and of §4.1 with k := F p . 5.1. The formalism of towers. In this preliminary section, we set up a general commutative algebra framework for dealing with the various projective limits of cohomology modules that we will encounter.
Definition 5.1.1. A tower of rings is an inductive system A := {A r } r≥1 of local rings with local transition maps. A morphism of towers A → A is a collection of local ring homomorphisms A r → A r which are compatible with change in r. A tower of A -modules M consists of the following data: (1) For each integer r ≥ 1, an A r -module M r .
(2) A collection of A_r-module homomorphisms ϕ_{r,s} : M_r → M_s ⊗_{A_s} A_r for each pair of integers r ≥ s ≥ 1, which are compatible in the obvious way under composition. A morphism of towers of A-modules M → M' is a collection of A_r-module homomorphisms M_r → M'_r which are compatible with change in r in the evident manner. For a tower of rings A = {A_r}, we will write A_∞ for the inductive limit, and for a tower of A-modules M = {M_r}, we set M_B := lim←_r (M_r ⊗_{A_r} B) for any A_∞-algebra B, with the projective limit taken with respect to the induced transition maps.
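The basic example to keep in mind (ours; we assume here that ∆ ≅ Z_p topologically with ∆_r the subgroups of finite p-power index, as holds for ∆ = 1 + pZ_p): if δ is a topological generator of ∆ and B is, say, a complete Noetherian local Z_p-algebra, then the completed group ring appearing in Lemma 5.1.2 below is
\[
\Lambda_B := \varprojlim_r B[\Delta/\Delta_r] \;\cong\; B[[T]], \qquad \delta \mapsto 1 + T,
\]
the usual Iwasawa algebra, and a tower M as above gives rise to the B[[T]]-module M_B.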
Lemma 5.1.2. Let A = {A_r}_{r≥0} be a tower of rings and suppose that I_r ⊆ A_r is a sequence of proper principal ideals such that A_r is I_r-separated and the image of I_r in A_{r+1} is contained in I_{r+1} for all r. Write I_∞ := lim→ I_r for the inductive limit, and set Ā_r := A_r/I_r for all r. Let M = {M_r, ρ_{r,s}} be a tower of A-modules equipped with an action of ∆ by A-automorphisms (that is, a homomorphism of groups ∆ → Aut_A(M), or equivalently, an A_r-linear action of ∆ on M_r for each r that is compatible with change in r). Suppose that M_r is free of finite rank over A_r for all r, and that ∆_r acts trivially on M_r. Let B be an A_∞-algebra, and observe that M_B is canonically a module over the completed group ring Λ_B. Assume that B is either a flat A_∞-algebra or a flat Ā_∞-algebra, and that the following two conditions hold for all r > 0:
(5.1.1a) M̄_r := M_r/I_r M_r is a free Ā_r[∆/∆_r]-module of rank d that is independent of r.
(5.1.1b) For all s ≤ r the induced maps ρ̄_{r,s} : M̄_r → M̄_s ⊗_{Ā_s} Ā_r are surjective.
Then: (1) M_r is a free A_r[∆/∆_r]-module of rank d for all r.
(2) The induced maps of A_r[∆/∆_s]-modules M_r ⊗_{A_r[∆/∆_r]} A_r[∆/∆_s] → M_s ⊗_{A_s} A_r are isomorphisms for all r ≥ s. (3) M_B is a free Λ_B-module of rank d. (4) For each r, the canonical map M_B ⊗_{Λ_B} B[∆/∆_r] → M_r ⊗_{A_r} B is an isomorphism of finite free B[∆/∆_r]-modules. (5) For B̄ any flat Ā_∞-algebra as above, the canonical map M_B ⊗_{Λ_B} Λ_{B̄} → M_{B̄} is an isomorphism.
Proof. For notational ease, let us put Λ_{A_r,s} := A_r[∆/∆_s] for all pairs of nonnegative integers r, s. Note that Λ_{A_r,s} is a local A_r-algebra, so the principal ideal I_r Λ_{A_r,s} is contained in the radical of Λ_{A_r,s}. Let us fix r and choose a principal generator f_r ∈ A_r of I_r. The module M_r is obviously finite over Λ_{A_r,r} (as it is even finite over A_r), so by hypothesis (5.1.1a) we may choose m_1, ..., m_d ∈ M_r with the property that the images of the m_i in M̄_r = M_r/I_r M_r freely generate M̄_r as an Ā_r[∆/∆_r] = Λ_{A_r,r}/I_r Λ_{A_r,r}-module. By Nakayama's Lemma [Mat89, Corollary to Theorem 2.2], we conclude that m_1, ..., m_d generate M_r as a Λ_{A_r,r}-module. If
(5.1.2) Σ_i x_i m_i = 0
is any relation on the m_i with x_i ∈ Λ_{A_r,r}, then necessarily x_i ∈ I_r Λ_{A_r,r}, and we claim that x_i ∈ I^j_r Λ_{A_r,r} for all j ≥ 0. To see this, we proceed by induction and suppose that our claim holds for j ≤ N. Since I_r is principal, for each i there exists x'_i ∈ Λ_{A_r,r} with x_i = f^N_r x'_i, and the relation (5.1.2) reads f^N_r m = 0 with m ∈ M_r given by m := Σ_i x'_i m_i. Since M_r is free as an A_r-module, it is in particular torsion free, so we conclude that m = 0. Since the images of the m_i freely generate M_r/I_r M_r, it follows that x'_i ∈ I_r Λ_{A_r,r} and hence that x_i ∈ I^{N+1}_r Λ_{A_r,r}, which completes the induction. By our assumption that A_r is I_r-adically separated, we must have x_i = 0 for all i, and the relation (5.1.2) is trivial. We conclude that m_1, ..., m_d freely generate M_r over Λ_{A_r,r}, giving (1).
To prove (2), note that our assumption (5.1.1b) that the maps ρ̄_{r,s} are surjective for all r ≥ s implies that the same is true of the maps ρ_{r,s} (again by Nakayama's Lemma), and hence that the induced map of Λ_{A_r,s}-modules in (2) is surjective. As this map is then a surjective map of free Λ_{A_r,s}-modules of the same rank d, it must be an isomorphism.
Since the kernel of the canonical surjection Λ_{A_r,r} ↠ Λ_{A_r,s} lies in the radical of Λ_{A_r,r}, we deduce by Nakayama's Lemma that any lift to M_r of a Λ_{A_r,s}-basis of M_s ⊗_{A_s} A_r is a Λ_{A_r,r}-basis of M_r. It follows easily from this that the projective limit M_B is a free Λ_B-module of rank d for any flat A_∞-algebra B. The corresponding assertions for any flat Ā_∞-algebra B̄ follow similarly, using the hypotheses (5.1.1a) and (5.1.1b) directly, and this gives (3).
Observe that the mapping of (4) is obtained from the canonical surjection M_B ↠ M_r ⊗_{A_r} B by extension of scalars, keeping in mind the natural identification (M_r ⊗_{A_r} B) ⊗_{Λ_B} B[∆/∆_r] = M_r ⊗_{A_r} B. It follows at once that this mapping is surjective. By (1) and (3), we conclude that the mapping in (4) is a surjection of free B[∆/∆_r]-modules of the same rank and is hence an isomorphism as claimed.
It remains to prove (5). Extending scalars, the canonical maps M_B ↠ M_r ⊗_{A_r} B induce surjections that are compatible in the evident manner with change in r. Passing to inverse limits gives the mapping M_B ⊗_{Λ_B} Λ_{B̄} → M_{B̄} of (5). Due to (3), this is then a map of finite free Λ_{B̄}-modules of the same rank, so to check that it is an isomorphism it suffices by Nakayama's Lemma to do so after applying ⊗_{Λ_{B̄}} B̄[∆/∆_r], which is an immediate consequence of (4).
We record the following elementary commutative algebra fact, which will be extremely useful to us: if M and M' are towers of A-modules as in Lemma 5.1.2 equipped with compatible perfect A_r-bilinear duality pairings ⟨·,·⟩_r for which each δ ∈ ∆ is self-adjoint, then the associated pairings (·,·)_r are perfect and Λ_{A_r,r}-bilinear. In particular, M_B and M'_B are canonically Λ_B-linearly dual to each other.
Proof. An easy reindexing argument shows that (·,·)_r is Λ_{A_r,r}-linear in the right factor, from which it follows that it is also Λ_{A_r,r}-linear in the left, due to our assumption that δ ∈ ∆ is self-adjoint with respect to ⟨·,·⟩_r. To prove that (·,·)_r is a perfect duality pairing, we analyze the Λ_{A_r,r}-linear map
(5.1.5) M_r → Hom_{Λ_{A_r,r}}(M'_r, Λ_{A_r,r}), m → (m, ·)_r.
Due to Lemma 5.1.2, both M_r and M'_r are free Λ_{A_r,r}-modules, necessarily of the same rank by the existence of the perfect A_r-duality pairing (5.1.3). It follows that (5.1.5) is a homomorphism of free Λ_{A_r,r}-modules of the same rank. To show that it is an isomorphism it therefore suffices to prove it is surjective, which may be checked after extension of scalars along the augmentation map Λ_{A_r,r} ↠ A_r by Nakayama's Lemma. Consider the diagram (5.1.6), where I_∆ = ker(Λ_{A_r,r} ↠ A_r) is the augmentation ideal. We conclude that (5.1.5) is an isomorphism, as desired. The argument that the corresponding map with the roles of M_r and M'_r interchanged is an isomorphism proceeds mutatis mutandis.
From (·,·)_{Λ_B} we obtain in the usual way duality morphisms (5.1.8), which we wish to show are isomorphisms. Due to Lemma 5.1.2 (3), each of the maps (5.1.8) is a map of finite free Λ_B-modules of the same rank, so we need only show that these mappings are surjective. As the kernel of Λ_B ↠ Λ_{B,r} is contained in the radical of Λ_B, we may by Nakayama's Lemma check such surjectivity after extension of scalars along Λ_B ↠ Λ_{B,r} for any r; there it follows from (5.1.7) and the fact that M_r and M'_r are free Λ_{A_r,r}-modules, so that the extension of scalars of the perfect duality pairing (·,·)_r along the canonical map Λ_{A_r,r} → Λ_{B,r} is again perfect.
5.2. Ordinary families of de Rham cohomology. Let {X_r/T_r}_{r≥0} be the tower of modular curves introduced in §2.3. As X_r is regular and proper flat over T_r = Spec(R_r) with geometrically reduced fibers, it is a curve in the sense of Definition 2.1.1 (thanks to Corollary 2.1.3) which moreover satisfies the hypotheses of Proposition 2.1.11. Abbreviating H^1_{dR,r} := H^1_{dR}(X_r/R_r), Proposition 2.1.11 (2) provides a canonical short exact sequence H(X_r/R_r) of finite free R_r-modules which recovers the Hodge filtration of H^1_{dR}(X_r/K_r) after inverting p. The Hecke correspondences on X_r induce, via Proposition 2.1.11 (4) (or by Proposition 2.2.4 and Remark 2.2.5), canonical actions of H_r and H^*_r on H(X_r/R_r) via R_r-linear endomorphisms. In particular, H(X_r/R_r) is canonically a short exact sequence of Z_p[(Z/Np^rZ)^×]-modules via the diamond operators. Similarly, pullback along (2.3.3) yields R_r-linear morphisms H((X_r)_γ/R_r) → H(X_r/R_r) for each γ ∈ Γ; using the fact that hypercohomology commutes with flat base change (by Čech theory), we obtain an action of Γ on H(X_r/R_r) which is R_r-semilinear over the canonical action of Γ on R_r and which commutes with the actions of H_r and H^*_r, as the Hecke operators are defined over K_0 = Q_p. For r ≥ s, we will need to work with the base change X_s ×_{T_s} T_r, which is a curve over T_r thanks to Proposition 2.1.2. Although X_s ×_{T_s} T_r need no longer be regular, as T_r → T_s is not smooth when r > s, we claim that it is necessarily normal. Indeed, this follows from the more general assertion: Lemma 5.2.1. Let V be a discrete valuation ring and A a finite type Cohen-Macaulay V-algebra with smooth generic fiber and geometrically reduced special fiber. Then A is normal.
Proof. We claim that A satisfies Serre's "R_1 + S_2" criterion for normality [Mat89, Theorem 23.8]. As A is assumed to be CM, by definition of Cohen-Macaulay A verifies S_i for all i ≥ 0, so we need only show that each localization of A at a prime ideal of codimension 1 is regular. Since A has geometrically reduced special fiber, this special fiber is in particular smooth at its generic points. As A is flat over V (again by definition of CM), we deduce that the (open) V-smooth locus in Spec A contains the generic points of the special fiber and hence contains all codimension-1 points (as the generic fiber of Spec A is assumed to be smooth). Thus A is R_1, as desired.
Applying the idempotent e^* to H(X_r/R_r) yields a short exact sequence (5.2.7) of R_r[∆/∆_r]-modules with linear H^*_r-action and R_r-semilinear Γ-action in which each term is free as an R_r-module (indeed, e^*M is a direct summand of M for any H^*_r-module M, and hence R_r-projective, hence R_r-free, if M is). Similarly, for each pair of nonnegative integers r ≥ s, the trace mappings (5.2.4) induce a commutative diagram (5.2.8) with exact rows. We will apply Lemma 5.1.2 with A_r = R_r, I_r = (π_r), B = R_∞, and with M_r each one of the terms in (5.2.7). In order to do this, we must check that the hypotheses (5.1.1a) and (5.1.1b) are satisfied.
Applying ⊗_{R_r} F_p to the short exact sequence (5.2.7) and using the fact that the idempotent e^* commutes with tensor products, we obtain, thanks to Lemma 2.1.16 (1), the short exact sequence of F_p-vector spaces (3.3.12a). By Corollary 3.3.3, the three terms of (3.3.12a) are free F_p[∆/∆_r]-modules of ranks d, 2d, and d respectively, so (5.1.1a) is satisfied for each of these terms. Similarly, applying ⊗_{R_r} F_p to the diagram (5.2.8) yields a diagram which by Corollary 3.3.2 is naturally isomorphic to a diagram of F_p[∆/∆_r]-modules with split-exact rows. Each of the vertical maps in this diagram is surjective due to Proposition 3.2.1 (2), and we conclude that the hypothesis (5.1.1b) is satisfied as well. Furthermore, the vertical maps in (5.2.8) are then surjective by Nakayama's Lemma, so applying ⊗_{R_r} R_∞ yields an inverse system of short exact sequences in which the first term satisfies the Mittag-Leffler condition. Passing to inverse limits is therefore (right) exact, and we obtain the short exact sequence (5.2.5).
Due to Proposition 2.1.11 (3), the short exact sequence (5.2.2) is auto-dual with respect to the canonical cup-product pairing $(\cdot,\cdot)_r$ on $H^1_{\mathrm{dR},r}$. We extend scalars along $R_r \to R'_r := R_r[\mu_N]$, so that the Atkin-Lehner "involution" $w_r$ is defined, and consider the "twisted" pairing on ordinary parts
\[
(5.2.9) \qquad \langle\cdot,\cdot\rangle_r \colon (e^*H^1_{\mathrm{dR},r})_{R'_r} \times (e^*H^1_{\mathrm{dR},r})_{R'_r} \longrightarrow R'_r
\]
given by $\langle x, y\rangle_r := (x,\, w_r U_p^{*r}\, y)$.
It is again perfect and satisfies $\langle T^*x, y\rangle_r = \langle x, T^*y\rangle_r$ for all $x, y \in (e^*H^1_{\mathrm{dR},r})_{R'_r}$ and $T^* \in H^*_r$.
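To see where this self-adjointness comes from, one can chase the definitions. The following computation is our own sketch: it assumes the standard cup-product adjointness $(Tx,y)_r = (x,T^*y)_r$ (which, the cup product being skew-symmetric, also gives $(T^*x,z)_r = (x,Tz)_r$), the relation $T^*w_r = w_rT$ of Proposition 2.3.24 read in the equivalent form $Tw_r = w_rT^*$ (valid since $w_r^2$ is central), and the fact that $T^*$ and $U_p^*$ commute in $H^*_r$:
\[
\langle T^*x, y\rangle_r = (T^*x,\, w_r U_p^{*r} y) = (x,\, T w_r U_p^{*r} y) = (x,\, w_r T^* U_p^{*r} y) = (x,\, w_r U_p^{*r} T^* y) = \langle x, T^*y\rangle_r.
\]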
Proposition 5.2.4. The pairings (5.2.9) compile to give a perfect $\Lambda_{R_\infty}$-linear duality pairing
\[
\langle x, y\rangle_{\Lambda_{R_\infty}} := \varprojlim_r \sum_{\delta \in \Delta/\Delta_r} \langle x_r,\, \delta^{-1} y_r\rangle_r \cdot \delta
\]
for $x = \{x_r\}_r$ and $y = \{y_r\}_r$ in $(e^*H^1_{\mathrm{dR}})_{\Lambda_{R_\infty}}$. The pairing $\langle\cdot,\cdot\rangle_{\Lambda_{R_\infty}}$ induces a canonical isomorphism in which we have used Proposition 2.3.6 and the fact that the cup product is Galois-equivariant. It now follows easily from the definitions that the claimed $\Gamma \times \operatorname{Gal}(\overline{K}_0/K_0)$-equivariance of (5.2.4) is equivalent to this. We expect that for $W$ any specialization of $e^*H^1_{\text{ét}}$ along a continuous homomorphism $\Lambda \to K_\infty$, there is a canonical isomorphism between $D := D_{\mathrm{Sen}}(W \otimes \mathbf{C}_K)$ and the corresponding specialization of $e^*H^1_{\mathrm{dR}}$, with the Sen operator $\Theta_D$ induced by the Gauss-Manin connections on $H^1_{\mathrm{dR},r}$. In this way, we might think of $e^*H^1_{\mathrm{dR}}$ as a $\Lambda$-adic avatar of "$D_{\mathrm{Sen}}(e^*H^1_{\text{ét}} \otimes_\Lambda \Lambda_{\mathcal{O}_{\mathbf{C}_K}})$." We hope to pursue these connections in future work.

${}^{29}$ Indeed, $e^*M$ is a direct summand of $M$ for any $H^*_r$-module $M$, and hence $R_r$-projective (= $R_r$-free) if $M$ is.
which is an isomorphism after inverting $p$. In particular, the map (5.3.3) is injective, $\Gamma$-equivariant, and compatible with the natural action of $H_r$ (respectively $H^*_r$) on source and target for $\star = \infty$ (respectively $\star = 0$), and in the case of $\star = 0$ intertwines the action of $\operatorname{Gal}(\overline{K}_0/K_0)$ via the character $\langle a\rangle_N^{-1}$ on the source with the natural action on the target.

Remark 5.3.2. The image of (5.3.3) for $\star = \infty$ is naturally identified with the space of weight 2 cuspforms for $\Gamma_r$ whose formal expansion at every cusp has $R_r$-coefficients.
Applying the idempotent $e$ (respectively $e^*$) to (5.3.3) with $\star = \infty$ (respectively $\star = 0$) gives an injective homomorphism which is compatible with the canonical actions of $\Gamma$ and of $H_r$ (respectively $H^*_r$) on source and target, and in the case of (5.3.4a) is $\operatorname{Gal}(\overline{K}_0/K_0)$-equivariant.

Theorem 5.3.4 (Ohta). There is a canonical isomorphism of $\Lambda_{R_\infty}$-modules that intertwines the action of $T \in H$ on the source with that of $T^* \in H^*$ on the target, for all $T \in H$. This isomorphism is $\operatorname{Gal}(K_\infty/K_0)$-equivariant for the natural action of $\operatorname{Gal}(K_\infty/K_0)$ on $e^*S^*_2(N, R_\infty)$ and the twisted action $\gamma \cdot F := \chi(\gamma)^{-1}\langle a(\gamma)\rangle_N^{-1}\,\gamma F$ on $eS(N; \Lambda_{R_\infty})$.

Proof. For the definition of the canonical map (5.3.6), as well as the proof that it is an isomorphism, see Theorem 2.3.6 and its proof in [Oht95]. With the conventions of [Oht95], the claimed compatibility of (5.3.6) with Hecke operators is a consequence of [Oht95, 2.5.1], while the $\operatorname{Gal}(K_\infty/K_0)$-equivariance of (5.3.6) follows from [Oht95, Proposition 3.5.6].
Corollary 5.3.5. There is a canonical isomorphism of $\Lambda_{R_\infty}$-modules that intertwines the action of $T \in H$ on the source with $T^* \in H^*$ on the target and is $\Gamma$-equivariant for the canonical action of $\Gamma$ on $e^*H^0(\omega)$ and the twisted action $\gamma \cdot F := \chi(\gamma)^{-1}\gamma F$ on $eS(N; \Lambda_{R_\infty})$.
Proof. This follows immediately from Proposition 5.3.3 and Theorem 5.3.4.
5.4. $\Lambda$-adic Barsotti-Tate groups. In order to define a crystalline analogue of Hida's ordinary $\Lambda$-adic étale cohomology, we will apply the theory of §4 to a certain "tower" $\{G_r/R_r\}_{r\ge 1}$ of $p$-divisible groups (a $\Lambda$-adic Barsotti-Tate group in the sense of Hida [Hid05a], [Hid05b]) whose construction involves artfully cutting out certain $p$-divisible subgroups of $J_r[p^\infty]$ over $\mathbb{Q}$ and the "good reduction" theorems of Langlands-Carayol-Saito. The construction of $\{G_r/R_r\}_{r\ge 1}$ is certainly well-known (e.g. [MW86, §1], [MW84, Chapter 3, §1], [Til87, Definition 1.2] and [Oht95, §3.2]), but as we shall need substantially finer information about the $G_r$ than is available in the literature, we devote this section to recalling their construction and properties. For nonnegative integers $i \le r$, write $\Gamma^i_r := \Gamma_1(Np^i) \cap \Gamma_0(p^r)$ for the intersection (taken inside $SL_2(\mathbb{Z})$), so $\Gamma_r = \Gamma^r_r$. We will need the following fact (cf. [Til87, pg. 339], [Oht95, 2.3.3]) concerning the trace mapping (5.3.1) attached to the canonical inclusion $\Gamma_r \subseteq \Gamma_i$ for $r \ge i$; for notational clarity, we will write $\operatorname{tr}_{r,i} \colon S_k(\Gamma_r) \to S_k(\Gamma_i)$ for this map. Writing $U \colon S_k(\Gamma^i_r) \to S_k(\Gamma^i_{r-d})$ for the "Hecke operator" given by the double coset $\Gamma^i_r \alpha^d \Gamma^i_{r-d}$ (e.g. [Oht99, §3.4]), an easy computation using (5.4.4) shows that the composite coincides with $U_p^d$ on $q$-expansions. By the $q$-expansion principle, we deduce that $U_p^d$ on $S_k(\Gamma^i_r)$ indeed factors through the subspace $S_k(\Gamma^i_{r-d})$, as desired.
For each integer $i$ and any character $\varepsilon \colon (\mathbb{Z}/Np^i\mathbb{Z})^\times \to \overline{\mathbb{Q}}^\times$, we denote by $S_2(\Gamma_i, \varepsilon)$ the $H_i$-stable subspace of weight 2 cusp forms for $\Gamma_i$ over $\overline{\mathbb{Q}}$ on which the diamond operators act through $\varepsilon(\cdot)$. Define $V'_r$ by the direct sum (5.4.5), where the inner sum is over all Dirichlet characters defined modulo $Np^i$ whose $p$-parts are primitive (i.e. whose conductor has $p$-part exactly $p^i$). We view $V'_r$ as a subspace of $S_2(\Gamma_r)$ in the usual way (i.e. via the embeddings $\iota_{\mathrm{id}}$). We define $V'^*_r$ as the direct sum (5.4.5), but viewed as a subspace of $S_2(\Gamma_r)$ via the "nonstandard" embeddings $\iota_{\alpha^{r-i}} \colon S_2(\Gamma_i) \to S_2(\Gamma_r)$.
As in (3.3.17), we write $f'$ for the idempotent of $\mathbb{Z}_p[\mu_{p-1}]$ corresponding to "projection away from the trivial $\mu_{p-1}$-eigenspace." From the formulae (3.3.16) we see that $h' := (p-1)f'$ lies in the subring $\mathbb{Z}[\mu_{p-1}]$ of $\mathbb{Z}_p[\mu_{p-1}]$ and satisfies $h'^2 = (p-1)h'$. We define endomorphisms (5.4.6) of $S_2(\Gamma_r)$.

Corollary 5.4.3. As subspaces of $S_2(\Gamma_r)$ we have $w_r(V'^*_r) = V'_r$. The space $V'_r$ (respectively $V'^*_r$) is naturally an $H_r$ (resp. $H^*_r$)-stable subspace of $S_2(\Gamma_r)$, and admits a canonical descent to $\mathbb{Q}$. Furthermore, the endomorphisms $U_r$ and $U^*_r$ of $S_2(\Gamma_r)$ factor through $V'_r$ and $V'^*_r$, respectively.

Proof. The first assertion follows from the relation $w_r \circ \iota_{\alpha^{r-i}} = \iota_{\mathrm{id}} \circ w_i$ as maps $S_2(\Gamma_i) \to S_2(\Gamma_r)$, together with the fact that $w_i$ on $S_2(\Gamma_i)$ carries $S_2(\Gamma_i, \varepsilon)$ isomorphically onto $S_2(\Gamma_i, \varepsilon^{-1})$. The $H_r$-stability of $V'_r$ is clear as each of $S_2(\Gamma_i, \varepsilon)$ is an $H_r$-stable subspace of $S_2(\Gamma_r)$; that $V'^*_r$ is $H^*_r$-stable follows from this and the commutation relation $T^*w_r = w_rT$ of Proposition 2.3.24. That $V'_r$ and $V'^*_r$ admit canonical descents to $\mathbb{Q}$ is clear, as $G_{\mathbb{Q}}$-conjugate Dirichlet characters have equal conductors. The final assertion concerning the endomorphisms $U_r$ and $U^*_r$ follows easily from Lemma 5.4.2, using the fact that $h' \colon S_2(\Gamma_r) \to S_2(\Gamma_r)$ has image contained in $\sum_{i=1}^{r} S_k(\Gamma^i_r)$.
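As a quick sanity check on the displayed relations (our own one-line verification, with $f'$ and $h'$ as above): since $f'$ is idempotent,
\[
h'^2 = \bigl((p-1)f'\bigr)^2 = (p-1)^2 f'^2 = (p-1)\cdot(p-1)f' = (p-1)h'.
\]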
Definition 5.4.4. We denote by $V_r$ and $V^*_r$ the canonical descents to $\mathbb{Q}$ of $V'_r$ and $V'^*_r$, respectively. Following [MW84, Chapter III, §1] and [Til87, §2], we recall the construction of certain "good" quotient abelian varieties of $J_r$ whose cotangent spaces are naturally identified with $V_r$ and $V^*_r$. In what follows, we will make frequent use of the following elementary result:

Lemma 5.4.5. Let $f \colon A \to B$ be a homomorphism of commutative group varieties over a field $K$ of characteristic 0. Then: (1) The formation of Lie and Cot commutes with the formation of kernels and images: the kernel (respectively image) of $\operatorname{Lie}(f)$ is canonically isomorphic to the Lie algebra of the kernel (respectively image) of $f$, and similarly for cotangent spaces at the identity. In particular, if $A$ is connected and $\operatorname{Lie}(f) = 0$ (respectively $\operatorname{Cot}(f) = 0$) then $f = 0$.
(2) Let $i \colon B' \to B$ be a closed immersion of commutative group varieties over $K$ with $B'$ connected.
If $\operatorname{Lie}(f)$ factors through $\operatorname{Lie}(i)$ then $f$ factors (necessarily uniquely) through $i$.
(3) Let $j \colon A \twoheadrightarrow A'$ be a surjection of commutative group varieties over $K$ with connected kernel. If $\operatorname{Cot}(f)$ factors through $\operatorname{Cot}(j)$ then $f$ factors (necessarily uniquely) through $j$.
Proof. The key point is that because $K$ has characteristic zero, the functors $\operatorname{Lie}(\cdot)$ and $\operatorname{Cot}(\cdot)$ on the category of commutative group schemes are exact. Indeed, since $\operatorname{Lie}(\cdot)$ is always left exact, the exactness of $\operatorname{Lie}(\cdot)$ follows easily from the fact that any quotient mapping $G \twoheadrightarrow H$ of group varieties in characteristic zero is smooth (as the kernel is a group variety over a field of characteristic zero and hence automatically smooth), so the induced map on Lie algebras is a surjection. By similar reasoning one shows that the right exact $\operatorname{Cot}(\cdot)$ is likewise exact, and the first part of (1) follows easily. In particular, if $\operatorname{Lie}(f)$ is the zero map then $\operatorname{Lie}(\operatorname{im}(f)) = 0$, so $\operatorname{im}(f)$ is zero-dimensional. Since it is also smooth, it must be étale. Thus, if $A$ is connected, then $\operatorname{im}(f)$ is both connected and étale, whence it is a single point; by evaluation of $f$ at the identity of $A$ we conclude that $f = 0$. The assertions (2) and (3) now follow immediately by using universal mapping properties.
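The characteristic-zero hypothesis is genuinely needed here. A standard counterexample in characteristic $p$ (supplied by us for contrast, not taken from the source) is the relative Frobenius of the additive group:
\[
F \colon \mathbb{G}_a \to \mathbb{G}_a, \qquad x \mapsto x^p \qquad (\text{over a field of characteristic } p),
\]
a surjective homomorphism with $\operatorname{Lie}(F) = 0$ (as $d(x^p) = p\,x^{p-1}\,dx = 0$) even though $F \neq 0$; thus $\operatorname{Lie}(\cdot)$ is neither faithful nor exact in characteristic $p$.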
To proceed with the construction of good quotients of $J_r$, we now consider the diagrams of "degeneracy mappings" of curves over $\mathbb{Q}$ for $i = 1, 2$, where $\pi$ and $\pi^i$ are the maps induced by (2.3.8) and (2.3.9), respectively. These mappings covariantly (respectively contravariantly) induce mappings on the associated Jacobians via Albanese (respectively Picard) functoriality. Writing $JY_r := \operatorname{Pic}^0_{Y_r/\mathbb{Q}}$ and setting $K^i_1 := JY_1$ for $i = 1, 2$, we inductively define abelian subvarieties $\iota^i_r \colon K^i_r \to JY_r$ and abelian variety quotients $\alpha^i_r \colon J_r \twoheadrightarrow B^i_r$ as follows:
\[
(5.4.8_i) \qquad B^i_{r-1} := J_{r-1}/\operatorname{Pic}^0(\pi)(K^i_{r-1}) \quad\text{and}\quad K^i_r := \ker\Bigl(JY_r \xrightarrow{\;\alpha^i_{r-1}\circ\operatorname{Alb}(\pi^i)\;} B^i_{r-1}\Bigr)^0
\]
for $r \ge 2$, $i = 1, 2$, with $\alpha^i_{r-1}$ and $\iota^i_r$ the obvious mappings; here, $(\cdot)^0$ denotes the connected component of the identity of $(\cdot)$. As in [Oht95, §3.2], we have modified Tilouine's construction [Til87, §2] so that the kernel of $\alpha_r$ is connected, i.e. is an abelian subvariety of $J_r$ (cf. Remark 5.4.8). Note that we have a commutative diagram of abelian varieties over $\mathbb{Q}$ for $i = 1, 2$ ($5.4.9_i$) whose bottom two horizontal rows are complexes.
Warning 5.4.6. While the bottom row of ($5.4.9_i$) is exact in the middle by definition of $\alpha^i_r$, the central row is not exact in the middle: it follows from the fact that $\operatorname{Alb}(\pi^i) \circ \operatorname{Pic}^0(\pi^i)$ is multiplication by $p$ on $J_{r-1}$ that the component group of the kernel of $\alpha^i_{r-1} \circ \operatorname{Alb}(\pi^i) \colon JY_r \to B^i_{r-1}$ is nontrivial with order divisible by $p$. Moreover, there is no mapping $B^i_{r-1} \to B^i_r$ which makes the diagram ($5.4.9_i$) commute.
In order to be consistent with the literature, we adopt the following convention:

Definition 5.4.7. We set $B_r := B^2_r$ and $B^*_r := B^1_r$, with $B^i_r$ defined inductively by ($5.4.8_i$). We likewise set $\alpha_r := \alpha^2_r$ and $\alpha^*_r := \alpha^1_r$.
Remark 5.4.8. We briefly comment on the relation between our quotient $B_r$ and the "good" quotients of $J_r$ considered by Ohta [Oht95], by Mazur-Wiles [MW84], and by Tilouine [Til87]. Recall [Til87, §2] that Tilouine constructs${}^{31}$ an abelian variety quotient $\alpha'_r \colon J_r \twoheadrightarrow B'_r$ via an inductive procedure nearly identical to the one used to define $B_r = B^2_r$: one sets $K'_1 := JY_1$, and for $r \ge 2$ defines $B'_{r-1} := J_{r-1}/\operatorname{Pic}^0(\pi)(K'_{r-1})$ and $K'_r := \ker\bigl(JY_r \to B'_{r-1}\bigr)$, without passing to the identity component. Using the fact that the formation of images and identity components commutes, one shows via a straightforward induction argument that $\alpha_r \colon J_r \twoheadrightarrow B_r$ identifies $B_r$ with $J_r/(\ker \alpha'_r)^0$; in particular, our $B_r$ is the same as Ohta's [Oht95, §3.2], and Tilouine's quotient $\alpha'_r \colon J_r \to B'_r$ uniquely factors through $\alpha_r$ via an isogeny $B_r \twoheadrightarrow B'_r$ which has degree divisible by $p$ by Warning 5.4.6. Due to this fact, it is essential for our purposes to work with $B_r$ rather than $B'_r$. Of course, following [Oht95, 3.2.1], we could have simply defined $B_r$ as $J_r/(\ker \alpha'_r)^0$, but we feel that the construction we have given is more natural.
On the other hand, we remark that $B_r$ is naturally a quotient of the "good" quotient $J_r \twoheadrightarrow A_r$ constructed by Mazur-Wiles in [MW84, Chapter III, §1], and the kernel of the corresponding surjective homomorphism $A_r \twoheadrightarrow B_r$ is isogenous to $J_0 \times J_0$.
Proposition 5.4.9. Over $F := \mathbb{Q}(\mu_{Np^r})$, the automorphism $w_r$ of $J_{r,F}$ induces an isomorphism of quotients $B_{r,F} \simeq B^*_{r,F}$. The abelian variety $B_r$ (respectively $B^*_r$) is the unique quotient of $J_r$ by a $\mathbb{Q}$-rational abelian subvariety with the property that the induced map on cotangent spaces has image precisely $V_r$ (respectively $V^*_r$). In particular, there are canonical actions of the Hecke algebras${}^{32}$ $H_r(\mathbb{Z})$ on $B_r$ and $H^*_r(\mathbb{Z})$ on $B^*_r$ for which $\alpha_r$ and $\alpha^*_r$ are equivariant.
Proof. By the construction of $B^i_r$ and Proposition 2.3.6, the automorphism $w_r$ of $J_{r,F}$ carries $\ker(\alpha_r)$ to $\ker(\alpha^*_r)$ and induces an isomorphism $B_{r,F} \simeq B^*_{r,F}$ over $F$ that intertwines the action of $H_r$ on $B_r$ with $H^*_r$ on $B^*_r$. The isogeny $B_r \twoheadrightarrow B'_r$ of Remark 5.4.8 induces an isomorphism on cotangent spaces, compatibly with the inclusions into $\operatorname{Cot}(J_r)$. Thus, the claimed identification of the image of $\operatorname{Cot}(B_r)$ with $V_r$ follows from [Til87, Proposition 2.1] (using [Til87, Definition 2.1]). The claimed uniqueness of $J_r \twoheadrightarrow B_r$ follows easily from Lemma 5.4.5 (3). Similarly, since the subspace $V_r$ of $S_2(\Gamma_r)$ is stable under $H_r$, we conclude from Lemma 5.4.5 (3) that for any $T \in H_r(\mathbb{Z})$, the induced morphism $J_r \xrightarrow{T} J_r \twoheadrightarrow B_r$ factors through $\alpha_r$, and hence that $H_r(\mathbb{Z})$ acts on $B_r$ compatibly (via $\alpha_r$) with its action on $J_r$.

${}^{31}$ The notation Tilouine uses for his quotient is the same as the notation we have used for our (slightly modified) quotient. To avoid conflict, we have therefore chosen to alter his notation. ${}^{32}$ We must warn the reader that Tilouine [Til87] writes $H_r(\mathbb{Z})$ for the $\mathbb{Z}$-subalgebra of $\operatorname{End}(J_r)$ generated by the Hecke operators acting via the $(\cdot)^*$-action (i.e. by "Picard" functoriality) whereas our $H_r(\mathbb{Z})$ is defined using the $(\cdot)_*$-action. This discrepancy is due primarily to the fact that Tilouine identifies tangent spaces of modular abelian varieties with spaces of modular forms, rather than cotangent spaces as is our convention.
commute; these maps are moreover $H^*_r(\mathbb{Z})$-equivariant. By a slight abuse of notation, we will simply write $\operatorname{Alb}(\sigma)$ and $\operatorname{Pic}^0(\rho)$ for the induced maps on $B^*_r$ and $B^*_{r-1}$, respectively.

Proof. Under the canonical identification of $\operatorname{Cot}(J_r) \otimes_{\mathbb{Q}} \overline{\mathbb{Q}}$ with $S_2(\Gamma_r)$, the mapping on cotangent spaces induced by $\operatorname{Alb}(\sigma)$ (respectively $\operatorname{Pic}^0(\rho)$) coincides with $\iota_\alpha \colon S_2(\Gamma_{r-1}) \to S_2(\Gamma_r)$ (respectively $\operatorname{tr}_{r,r-1} \colon S_2(\Gamma_r) \to S_2(\Gamma_{r-1})$). As the kernel of $\alpha^*_r \colon J_r \twoheadrightarrow B^*_r$ is connected by definition, thanks to Lemma 5.4.5 (3) it suffices to prove that $\iota_\alpha$ (respectively $\operatorname{tr}_{r,r-1}$) carries $V'^*_{r-1}$ to $V'^*_r$ (respectively $V'^*_r$ to $V'^*_{r-1}$). On one hand, the composite $\iota_\alpha \circ \iota_{\alpha^{r-1-i}} \colon S_2(\Gamma_i, \varepsilon) \to S_2(\Gamma_r)$ coincides with the embedding $\iota_{\alpha^{r-i}}$, and it follows immediately from the definition of $V'^*_r$ that $\iota_\alpha$ carries $V'^*_{r-1}$ into $V'^*_r$. On the other hand, an easy calculation using (5.4.1) shows that one has equalities of maps $S_2(\Gamma_i, \varepsilon) \to S_2(\Gamma_r)$. Thus, the image of $\iota_\alpha \circ \operatorname{tr}_{r,r-1} \colon V'^*_r \to S_2(\Gamma_r)$ is contained in the image of $\iota_\alpha \colon V'^*_{r-1} \to S_2(\Gamma_r)$; since $\iota_\alpha$ is injective, we conclude that the image of $\operatorname{tr}_{r,r-1} \colon V'^*_r \to S_2(\Gamma_{r-1})$ is contained in $V'^*_{r-1}$, as desired.

Proposition 5.4.11. The abelian varieties $B_r$ and $B^*_r$ acquire good reduction over $\mathbb{Q}_p(\mu_{p^r})$. As in §3.3, we denote by $e'^* := f'e^* \in H^*$ and $e' := f'e \in H$ the sub-idempotents of $e^*$ and $e$, respectively, corresponding to projection away from the trivial eigenspace of $\mu_{p-1}$.
We view the maps (5.4.6) as endomorphisms of $J_r$ in the obvious way, and again write $U^*_r$ and $U_r$ for the induced endomorphisms of $B^*_r$ and $B_r$, respectively. To prove Proposition 5.4.12, we need the following geometric incarnation of Corollary 5.4.3:

Lemma 5.4.13. There exists a unique $H^*_r(\mathbb{Z})$ (respectively $H_r(\mathbb{Z})$)-equivariant map $W^*_r \colon B^*_r \to J_r$ (respectively $W_r \colon B_r \to J_r$) of abelian varieties over $\mathbb{Q}$ such that the diagram (5.4.11) commutes.

Proof. Consider the endomorphism of $J_r$ given by $U_r$. Due to Corollary 5.4.3, the induced mapping on cotangent spaces factors through the inclusion $\operatorname{Cot}(B_r) \to \operatorname{Cot}(J_r)$. Since the kernel of the quotient mapping $\alpha_r \colon J_r \twoheadrightarrow B_r$ giving rise to this inclusion is connected, we conclude from Lemma 5.4.5 (3) that $U_r$ factors uniquely through $\alpha_r$ via an $H_r$-equivariant morphism $W_r \colon B_r \to J_r$. The corresponding statements for $B^*_r$ are proved similarly.

Proof of Proposition 5.4.12. From (5.4.11) we get commutative diagrams of $p$-divisible groups over $\mathbb{Q}$ (5.4.12) in which all vertical arrows are isomorphisms due to the very definition of the idempotents $e'^*$ and $e'$.
An easy diagram chase then shows that all arrows must be isomorphisms.
We will write $B_r$, $B^*_r$, and $J_r$, respectively, for the Néron models of the base changes $(B_r)_{K_r}$, $(B^*_r)_{K_r}$ and $(J_r)_{K_r}$ over $T_r := \operatorname{Spec}(R_r)$; due to Proposition 5.4.12, both $B_r$ and $B^*_r$ are abelian schemes over $T_r$. By the Néron mapping property, there are canonical actions of $H_r(\mathbb{Z})$ on $B_r$, $J_r$ and of $H^*_r(\mathbb{Z})$ on $B^*_r$, $J_r$ over $R_r$ extending the actions on generic fibers, as well as "semilinear" actions of $\Gamma$ over the $\Gamma$-action on $R_r$ (cf. (4.1.7)). For each $r$, the Néron mapping property further provides diagrams (5.4.13) of smooth commutative group schemes over $T_{r+1}$ in which the inner and outer rectangles commute, and all maps are $H^*_{r+1}(\mathbb{Z})$ (respectively $H_{r+1}(\mathbb{Z})$) and $\Gamma$ equivariant.

Definition 5.4.14. We define $G_r := e'^*(B^*_r[p^\infty])$ and we write $G'_r := G_r^\vee$ for its Cartier dual, each of which is canonically an object of $\mathbf{pdiv}^\Gamma_{R_r}$. For each $r \ge s$, noting that $U^*_p$ is an automorphism of $G_r$, we obtain from (5.4.13) canonical morphisms (5.4.14) $\rho_{r,s} \colon G_s \to G_r$.

that is compatible with change in $r$ using the trace mappings attached to $\rho \colon I_r \to I_s$ and the maps on Dieudonné modules induced by $\rho_{r,s} \colon G_s \to G_r$. The hypotheses (5.1.1a) and (5.1.1b) of Lemma 5.1.2 are thus satisfied with $d$ as in the statement of the theorem, thanks to Proposition 3.2.1 (1)-(2) and Lemma 3.3.5. We conclude from Lemma 5.1.2 that (1) and (2) hold. As $F$ (respectively $V$) acts invertibly on $D(G^{\text{ét}}_r)$ (respectively $D(G^m_r)$) for all $r$, assertion (3) is clear, while (4) and (5) follow immediately from Proposition 5.4.16.
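To make the slope mechanism used here explicit (a standard consequence of ordinarity, phrased by us in the notation above): on Dieudonné modules one has $FV = VF = p$, so
\[
F \ \text{bijective on } D(G^{\text{ét}}_r) \ \Longrightarrow\ V = pF^{-1} \text{ there}, \qquad V \ \text{bijective on } D(G^m_r) \ \Longrightarrow\ F = pV^{-1} \text{ there},
\]
which is the source of the invertibility assertions in the proof above.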
As in Proposition 5.2.4, the short exact sequence (5.5.2) is very nearly "auto-dual":

Proposition 5.5.3. There is a canonical isomorphism of short exact sequences of $\Lambda_{R'_0}$-modules that is $H^*$- and $\Gamma \times \operatorname{Gal}(\overline{K}_0/K_0)$-equivariant, and intertwines $F$ (respectively $V$) on the top row with $V^\vee$ (respectively $F^\vee$) on the bottom.
Proof. We apply the duality formalism of Lemma 5.1.4. Let us write $\rho'_{r,s} \colon G'_r \to G'_s$ for the maps on special fibers induced by (5.4.14). Thanks to Proposition 5.4.15, the definition $G'_r := G_r^\vee$ of Definition 5.4.14, the natural isomorphism $G'_r \times_{R_r} R'_r \simeq G_r(\chi\langle a\rangle_N) \times_{R_r} R'_r$, and the compatibility of the Dieudonné module functor with duality, there are natural isomorphisms of $R'_0$-modules that are $H^*_r$-equivariant, $\operatorname{Gal}(K_r/K_0)$-compatible for the standard action $\sigma \cdot f(m) := \sigma f(\sigma^{-1}m)$ on the $R'_0$-linear dual of $D(G_r) \otimes_{\mathbb{Z}_p} R'_0$, and compatible with change in $r$ using $\rho_{r,s}$ on $D(G_r)$ and $\rho'_{r,s}$ on $D(G'_r)$. We claim that the resulting perfect "evaluation" pairings (5.5.5) enjoy the analogous compatibilities; as in the proof of Proposition 5.2.4, this follows from Lemma 5.4.1 via Lemma 5.4.5. Again, by the $H^*_r$-compatibility of (5.5.4), the action of $H^*_r$ is self-adjoint with respect to (5.5.5), so Lemma 5.1.4 gives a perfect $\operatorname{Gal}(K_\infty/K_0)$-compatible duality pairing $\langle\cdot,\cdot\rangle \colon D_\infty(\chi\langle a\rangle_N) \otimes_\Lambda \Lambda_{R'_0} \times D_\infty \otimes_\Lambda \Lambda_{R'_0} \to \Lambda_{R'_0}$ with respect to which $T^*$ is self-adjoint for all $T^* \in H^*$. That the resulting isomorphism (5.5.3) intertwines $F$ with $V^\vee$ and $V$ with $F^\vee$ is an immediate consequence of the compatibility of the Dieudonné module functor with duality.
We can interpret $D_\infty$ in terms of the crystalline cohomology of the Igusa tower as follows. Let $I^0_r$ and $I^\infty_r$ be the two "good" components of $\overline{X}_r$ as in Remark 2.3.12, and form the projective limits $H^1_{\mathrm{cris}}(I^\star) := \varprojlim_r H^1_{\mathrm{cris}}(I^\star_r)$ for $\star \in \{\infty, 0\}$, taken with respect to the trace maps on crystalline cohomology (see [Ber74, VII, §2.2]) induced by the canonical degeneracy mappings $\rho \colon I^\star_r \to I^\star_s$. Then $H^1_{\mathrm{cris}}(I^\star)$ is naturally a $\Lambda$-module (via the diamond operators), equipped with a commuting action of $F$ (Frobenius) and $V$ (Verschiebung) satisfying $FV = VF = p$. Letting $U^*_p$ act as $F$ (respectively $\langle p\rangle_N V$) on $H^1_{\mathrm{cris}}(I^\star)$ for $\star = \infty$ (respectively $\star = 0$) and the Hecke operators outside $p$ (viewed as correspondences on the Igusa curves) act via pullback and trace at each level $r$, we obtain an action of $H^*$ on $H^1_{\mathrm{cris}}(I^\star)$. Finally, we let $\Gamma$ act trivially on $H^1_{\mathrm{cris}}(I^\star)$ for $\star = \infty$ and via $\chi^{-1}$ for $\star = 0$.

Theorem 5.5.4. There is a canonical $H^*$- and $\Gamma$-equivariant isomorphism of $\Lambda$-modules which respects the given direct sum decompositions and is compatible with $F$ and $V$.
Proof. From the exact sequence (5.4.19), we obtain for each $r$ isomorphisms that are $H^*$- and $\Gamma$-equivariant (with respect to the actions specified in Proposition 5.4.16), and compatible with change in $r$ via the mappings $D(\rho_{r,s})$ on $D(G_r)$ and $D(\rho)$ on $D(j_r[p^\infty])$. On the other hand, for any smooth and proper curve $X$ over a perfect field $k$ of characteristic $p$, thanks to [MM74] and [Ill79, II, §3 C Remarque 3.11.2] there are natural isomorphisms of $W(k)[F,V]$-modules
\[
(5.5.8) \qquad D(J_X[p^\infty]) \simeq H^1_{\mathrm{cris}}(J_X/W(k)) \simeq H^1_{\mathrm{cris}}(X/W(k))
\]
that for any finite map of smooth proper curves $f \colon Y \to X$ over $k$ intertwine $D(\operatorname{Pic}(f))$ and $D(\operatorname{Alb}(f))$ with trace and pullback by $f$ on crystalline cohomology, respectively. Applying this to $X = I^\star_r$ for $\star = 0, \infty$, appealing to the identifications (5.5.7), and passing to inverse limits completes the proof.
Applying the idempotent $f'$ of (3.3.17) to the Hodge filtration (5.2.5) yields a short exact sequence of free $\Lambda_{R_\infty}$-modules with semilinear $\Gamma$-action and linear commuting action of $H^*$:
\[
(5.5.9) \qquad 0 \to e'^*H^0(\omega) \to e'^*H^1_{\mathrm{dR}} \to e'^*H^1(\mathcal{O}) \to 0.
\]
The key to relating (5.5.9) to the slope filtration (5.5.2) is the following comparison isomorphism:

Proposition 5.5.5. For each positive integer $r$, there is a natural isomorphism of short exact sequences (5.5.10) that is compatible with $H^*_r$, $\Gamma$, and change in $r$ using the mappings (5.4.14) on the top row and the maps $\rho^*$ on the bottom. Here, the bottom row is obtained from (5.2.2) by applying $e'^*$ and the top row is the Hodge filtration of $D(G_{r,0})_{R_r}$ given by Proposition 2.2.6.
Proof. Let $\alpha^*_r \colon J_r \twoheadrightarrow B^*_r$ be the map of Definition 5.4.7. We claim that $\alpha^*_r$ induces a canonical isomorphism of short exact sequences of free $R_r$-modules that is $H^*_r$- and $\Gamma$-equivariant and compatible with change in $r$ using the map on Néron models induced by $\operatorname{Pic}^0(\rho)$ and the maps (5.4.14) on $G_r$. Granting this claim, the proposition then follows immediately from Proposition 2.2.4.
To prove our claim, we introduce the following notation: set $V := \operatorname{Spec}(R_r)$, and for $n \ge 1$ put $V_n := \operatorname{Spec}(R_r/p^nR_r)$. For any scheme (or $p$-divisible group) $X$ over $V$, we put $X_n := X \times_V V_n$. If $A$ is a Néron model over $V$, we will write $H(A)$ for the short exact sequence of free $R_r$-modules obtained by applying Lie to the canonical extension (2.2.4) of $A^t_0$. If $G$ is a $p$-divisible group over $V$, we similarly write $H(G_n)$ for the short exact sequence of Lie algebras associated to the universal extension of $G^t_n$ by a vector group over $V_n$ (see Theorem 2.2.1, (2)). If $A$ is an abelian scheme over $V$ then we have natural and compatible (with change in $n$) isomorphisms. Since these isomorphisms are induced via the Néron mapping property and the functoriality of $H(\cdot)$ by the $H^*_r(\mathbb{Z})$-equivariant map $\alpha^*_r \colon J_r \twoheadrightarrow B^*_r$, they are themselves $H^*_r$-equivariant. Similarly, since $\alpha^*_r$ is defined over $\mathbb{Q}$ and compatible with change in $r$ as in Lemma 5.4.10, the isomorphism (5.5.14) is compatible with the given actions of $\Gamma$ (arising via the Néron mapping property from the semilinear action of $\Gamma$ over $K_r$ giving the descent data of $(J_r)_{K_r}$ and $(B^*_r)_{K_r}$ to $\mathbb{Q}_p$) and change in $r$. Reducing (5.5.14) modulo $p^n$ and using the canonical isomorphism (5.5.12) yields the identifications (5.5.15), which are clearly compatible with change in $n$, and which are easily checked (using the naturality of (5.5.12) and our remarks above) to be $H^*_r$- and $\Gamma$-equivariant, and compatible with change in $r$. Since the surjection $R_r \twoheadrightarrow R_r/pR_r$ is a PD-thickening, passing to inverse limits (with respect to $n$) on (5.5.15) and using Proposition 2.2.6 now completes the proof.
Corollary 5.5.6. Let $r$ be a positive integer. Then the short exact sequence of free $R_r$-modules
\[
(5.5.16) \qquad 0 \to e'^*H^0(\omega_r) \to e'^*H^1_{\mathrm{dR},r} \to e'^*H^1(\mathcal{O}_r) \to 0
\]
is functorially split; in particular, it is split compatibly with the actions of $\Gamma$ and $H^*_r$. Moreover, (5.5.16) admits a functorial descent to $\mathbb{Z}_p$: there is a natural isomorphism of split short exact sequences (5.5.17) that is $H^*$- and $\Gamma$-equivariant, with $\Gamma$ acting trivially on $G^{\text{ét}}_r$ and through $\chi^{-1}$ on $G^m_r$. The identification (5.5.17) is compatible with change in $r$ using the maps $\rho^*$ on the top row and the maps induced by (5.4.14) on the bottom row.
Proof. Consider the isomorphism (5.5.10) of Proposition 5.5.5. As $G_r$ is an ordinary $p$-divisible group by Proposition 5.4.16, the top row of (5.5.10) is functorially split by Lemma 4.2.2, and this gives our first assertion. Composing the inverse of (5.5.10) with the isomorphism (4.2.11) of Lemma 4.2.2 gives the claimed identification (5.5.17). That this isomorphism is compatible with change in $r$ via the specified maps follows easily from the construction of (4.2.11) via (4.2.13).
We can now prove Theorem 1.2.6. Let us recall the statement:

Theorem 5.5.7. There is a canonical isomorphism of short exact sequences of finite free $\Lambda_{R_\infty}$-modules that is $\Gamma$- and $H^*$-equivariant. Here, the mappings on the bottom row are the canonical inclusion and projection morphisms corresponding to the direct sum decomposition $D_\infty = D^m_\infty \oplus D^{\text{ét}}_\infty$. In particular, the Hodge filtration exact sequence (5.5.9) is canonically split, and admits a canonical descent to $\Lambda$.
Proof. Applying $\otimes_{R_r}R_\infty$ to (5.5.17) and passing to projective limits yields an isomorphism of split exact sequences. On the other hand, the isomorphisms
\[
G_r = G^m_r \times G^{\text{ét}}_r \xrightarrow{\;V^{-r} \times F^r\;} G^m_r \times G^{\text{ét}}_r = G_r
\]
induce an isomorphism of projective limits which is visibly compatible with the canonical splittings of source and target. The result now follows from Lemma 5.1.2 (5) and the proof of Theorem 5.5.2, which guarantee that the canonical mapping $D_\infty \otimes_\Lambda \Lambda_{R_\infty} \to \varprojlim_\rho (D(G_r) \otimes_{\mathbb{Z}_p} R_\infty)$ is an isomorphism respecting the natural splittings.
As in §5.3, for any subfield $K$ of $\mathbb{C}_p$ with ring of integers $R$, we denote by $eS(N; \Lambda_R)$ the module of ordinary $\Lambda_R$-adic cuspforms of level $N$ in the sense of [Oht95, 2.5.5]. Following our convention of §3.3, we write $e'S(N; \Lambda_R)$ for the direct summand of $eS(N; \Lambda_R)$ on which $\mu_{p-1} \hookrightarrow \mathbb{Z}_p^\times \subseteq H$ acts nontrivially.

Corollary 5.5.8. There is a canonical isomorphism of finite free $\Lambda$-modules (5.5.19). There is a chain of natural isomorphisms
\[
(5.5.20) \qquad D^m_\infty \otimes_\Lambda \Lambda_{R_\infty} \simeq e'^*H^0(\omega) \simeq e'S(N, \Lambda_{R_\infty}) \simeq e'S(N, \Lambda) \otimes_\Lambda \Lambda_{R_\infty},
\]
and the resulting composite isomorphism intertwines $T^* \in H^*$ on $D^m_\infty$ with $T \in H$ on $e'S(N, \Lambda)$ and is $\Gamma$-equivariant, with $\gamma \in \Gamma$ acting as $\chi(\gamma)^{-1} \otimes \gamma$ on each tensor product. Indeed, the first and second isomorphisms are due to Theorem 5.5.7 and Corollary 5.3.5, respectively, while the final isomorphism is a consequence of the definition of $e'S(N; \Lambda_R)$ and the facts that this $\Lambda_R$-module is free of finite rank [Oht95, Corollary 2.5.4] and specializes as in [Oht95, 2.6.1]. Twisting the $\Gamma$-action on the source and target of the composite (5.5.20) by $\chi$ therefore gives a $\Gamma$-equivariant isomorphism
\[
(5.5.21) \qquad D^m_\infty \otimes_\Lambda \Lambda_{R_\infty} \simeq e'S(N, \Lambda) \otimes_\Lambda \Lambda_{R_\infty}
\]
with $\gamma \in \Gamma$ acting as $1 \otimes \gamma$ on source and target. Passing to $\Gamma$-invariants on (5.5.21) yields (5.5.19).
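The final passage to $\Gamma$-invariants is the usual untwisting. Explicitly (our own sketch, under the assumption, consistent with the notation here, that $\Gamma$ acts on $\Lambda_{R_\infty}$ through $R_\infty$ with $R_\infty^\Gamma = R_0 = \mathbb{Z}_p$): for $M$ a finite free $\Lambda$-module with $\gamma \in \Gamma$ acting as $1 \otimes \gamma$,
\[
\bigl(M \otimes_\Lambda \Lambda_{R_\infty}\bigr)^\Gamma = M \otimes_\Lambda \bigl(\Lambda_{R_\infty}\bigr)^\Gamma = M \otimes_\Lambda \Lambda = M,
\]
so applying this to both sides of (5.5.21) identifies $D^m_\infty$ with $e'S(N, \Lambda)$, which is the content of (5.5.19).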
Remark 5.5.9. Via Proposition 5.5.3 and the natural $\Lambda$-adic duality between $eH$ and $eS(N; \Lambda)$ [Oht95, Theorem 2.5.3], we obtain a canonical $\operatorname{Gal}(\overline{K}_0/K_0)$-equivariant isomorphism of $\Lambda_{R'_0}$-modules that intertwines $T \otimes 1$ for $T \in H$ acting on $e'H$ by multiplication with $T^* \otimes 1$, with $U^*_p$ acting on $D^{\text{ét}}_\infty(\langle a\rangle_N)$ as $F$. From Theorem 5.5.4 and Corollary 5.5.8 we then obtain canonical isomorphisms between $e'S(N; \Lambda)$ and $f'H^1_{\mathrm{cris}}$, intertwining $T$ (respectively $T \otimes 1$) with $T^*$ (respectively $T^* \otimes 1$), where $U^*_p$ acts on crystalline cohomology as $\langle p\rangle_N V$ (respectively $F \otimes 1$). The second of these isomorphisms is moreover $\operatorname{Gal}(\overline{K}_0/K_0)$-equivariant.
In order to relate the slope filtration (5.5.2) of $D_\infty$ to the ordinary filtration of $e^*H^1_{\text{ét}}$, we require:

Lemma 5.5.10. Let $r$ be a positive integer and let $G_r = e'^*J_r[p^\infty]$ be the unique $\mathbb{Q}_p$-descent of the generic fiber of $G_r$, as in Definition 5.4.14. There are canonical isomorphisms of free $W(\overline{\mathbb{F}}_p)$-modules (5.5.22a)-(5.5.22b) that are $H^*_r$-equivariant and $G_{\mathbb{Q}_p}$-compatible for the diagonal action on source and target, with $G_{\mathbb{Q}_p}$ acting trivially on $D(G^{\text{ét}}_r)$ and via $\chi^{-1}\cdot\chi^{-1}$ on $D(G^m_r)(-1) := D(G^m_r) \otimes_{\mathbb{Z}_p} \mathbb{Z}_p(-1)$. The isomorphism (5.5.22a) intertwines $F \otimes \sigma$ with $1 \otimes \sigma$ while (5.5.22b) intertwines $V \otimes \sigma^{-1}$ with $1 \otimes \sigma^{-1}$.
are surjective for all $r \ge r'$. By Lemma 5.1.2, we conclude that $H^1$

Applying the functor $M_r(\cdot)$ studied in §4.1 to the map on connected-étale sequences induced by $\rho_{r,s}$, and using the exactness of $M_r$ and its compatibility with base change (Theorem 4.1.3), we obtain maps of exact sequences in $\mathbf{BT}^\Gamma_{S_r}$, with the projective limits (5.6.2) taken with respect to the mappings induced by (5.6.1).
Each of (5.6.2) is naturally a module over the completed group ring $\Lambda_{S_\infty}$ and is equipped with a semilinear action of $\Gamma$ and a $\varphi$-semilinear Frobenius morphism defined by $F := \varprojlim (\varphi_{M_r} \otimes \varphi)$. Since $\varphi$ is bijective on $S_\infty$, we also have a $\varphi^{-1}$-semilinear Verschiebung morphism defined as follows. For notational ease, we provisionally set $M'_r := M_r(G_r) \otimes_{S_r} S_\infty$ and we define $V_r$ as the composite
\[
M'_r \xrightarrow{\;\psi_{M_r} \otimes 1\;} \varphi^*M'_r \xrightarrow{\;s \otimes m \,\mapsto\, \varphi^{-1}(s)m\;} M'_r,
\]
with $\psi_{M_r}$ as above Definition 4.1.2. It is easy to see that the $V_r$ are compatible with $r$, and we put $V := \varprojlim V_r$ on $M_\infty$. We define Verschiebung morphisms on $M^\star_\infty$ for $\star =$ ét, m similarly. As the composite of $\psi_{M_r}$ and $1 \otimes \varphi_{M_r}$ in either order is multiplication by $E_r(u_r) = u_0/u_1 =: \omega$, we have (5.6.3). Due to the functoriality of $M_r$, we moreover have a $\Lambda_{S_\infty}$-linear action of $H^*$ on each of (5.6.2) which commutes with $F$, $V$, and $\Gamma$.
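To orient the reader, here is how the displayed relation propagates to $F$ and $V$. This is our own sketch, under our reading of the definitions above (namely that $F(m) = \varphi_{M_r}(1 \otimes m)$ and $V = V_r$ as just displayed); the exact Frobenius twist in the second identity depends on these conventions:
\[
F(V(m)) = \varphi_{M_r}\bigl(\psi_{M_r}(m)\bigr) = \omega\, m, \qquad V(F(m)) = \varphi^{-1}(\omega)\, m,
\]
so that both composites are multiplication by (a Frobenius twist of) $\omega$.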
Theorem 5.6.2. As in Proposition 3.3.6, set $d' := \sum_{k=3}^{p} \dim_{\mathbb{F}_p} S_k(N; \mathbb{F}_p)^{\mathrm{ord}}$. Then $M_\infty$ (respectively $M^\star_\infty$ for $\star =$ ét, m) is a free $\Lambda_{S_\infty}$-module of rank $2d'$ (respectively $d'$) and there is a canonical short exact sequence (5.6.4) of $\Lambda_{S_\infty}$-modules with linear $H^*$-action and semilinear actions of $\Gamma$, $F$ and $V$. Extension of scalars of (5.6.4) along the quotient $\Lambda_{S_\infty} \twoheadrightarrow S_\infty[\Delta/\Delta_r]$ recovers the exact sequence (5.6.5) for each integer $r > 0$, compatibly with $H^*$, $\Gamma$, $F$, and $V$.
Proof. Since $\varphi$ is an automorphism of $S_\infty$, pullback by $\varphi$ commutes with projective limits of $S_\infty$-modules. As the canonical $S_\infty$-linear map $\varphi^*\Lambda_{S_\infty} \to \Lambda_{S_\infty}$ is an isomorphism of rings (even of $S_\infty$-algebras), it therefore suffices to prove the assertions of Theorem 5.6.2 after pullback by $\varphi$, which will be more convenient due to the relation between $\varphi^*M_r(G_r)$ and the Dieudonné crystal of $G_r$.
Pulling back (5.6.1) by $\varphi$ gives a commutative diagram with exact rows (5.6.6), and we apply Lemma 5.1.2 with $A_r := S_r$, $I_r := (u_r)$, $B = S_\infty$, and with $M_r$ each one of the terms in the top row of (5.6.6). The isomorphism (4.2.14a) of Proposition 4.2.3 ensures, via Theorem 5.5.2 (1), that the hypothesis (5.1.1a) is satisfied. Due to the functoriality of (4.2.14a), for any $r \ge s$, the mapping obtained from (5.6.6) by reducing modulo $I_r$ is identified with the mapping on (5.5.1) induced (via functoriality of $D(\cdot)$) by $\rho_{r,s}$. As was shown in the proof of Theorem 5.5.2, these mappings are surjective for all $r \ge s$, and we conclude that hypothesis (5.1.1b) holds as well. Moreover, the vertical mappings of (5.6.6) are then surjective by Nakayama's Lemma, so as in the proofs of Theorems 5.2.3 and 5.5.2 (and keeping in mind that pullback by $\varphi$ commutes with projective limits of $S_\infty$-modules), we obtain, by applying $\otimes_{S_r}S_\infty$ to (5.6.6), passing to projective limits, and pulling back by $(\varphi^{-1})^*$, the short exact sequence (5.6.4).
Remark 5.6.3. In the proof of Theorem 5.6.2, we could alternatively have applied Lemma 5.1.2 with $A_r = S_r$ and $I_r := (E_r)$, appealing to the identifications (4.2.14b) of Proposition 4.2.3 and (5.5.10) of Proposition 5.5.5, and to Theorem 5.2.3.
The short exact sequence (5.6.4) is closely related to its $\Lambda_{S_\infty}$-linear dual. In what follows, we write $\varprojlim_r S_r$ for the projective limit taken along the mappings $u_r \mapsto \varphi(u_{r+1})$; it is naturally an $S_\infty$-algebra.
Proof. We first claim that there is a natural isomorphism of $S_\infty[\Delta/\Delta_r]$-modules (5.6.8) that is $H^*$-equivariant and $\operatorname{Gal}(K_\infty/K_0)$-compatible for the standard action $\gamma \cdot f(m) := \gamma f(\gamma^{-1}m)$ on the right side, and that intertwines $F$ and $V$ with $V^\vee$ and $F^\vee$, respectively. Indeed, this follows immediately from the identifications (5.6.9) and the compatibility of duality in $\mathbf{BT}^{\varphi,\Gamma}_{S_r}$ (Theorem 4.1.3 (2)); here, the first isomorphism in (5.6.9) results from Proposition 5.4.15 and Theorem 4.1.3 (2), while the final identification is due to Theorem 4.1.3 (1). The identification (5.6.8) carries $F$ (respectively $V$) on its source to $V^\vee$ (respectively $F^\vee$) on its target due to the compatibility of the functor $M_r(\cdot)$ with duality (Theorem 4.1.3 (1)).
Proof. To prove the first assertion, we apply Lemma 5.1.2 with $A_r = S_r$, $I_r = (u_r)$, $B = S_\infty$, $B' = \mathbb{Z}_p$ (viewed as a $B$-algebra via $\tau$) and $M_r = M^\star_r$ for $\star \in \{$ét, m, null$\}$. Thanks to (4.2.14a) in the case $G = G_r$, we have a canonical identification $\overline{M}_r := M_r/I_rM_r \simeq D(G_r)_{\mathbb{Z}_p}$ that is compatible with change in $r$ in the sense that the induced projective system $\{\overline{M}_r\}_r$ is identified with that of Definition 5.5.1. It follows from this and Theorem 5.5.2 (1)-(2) that the hypotheses (5.1.1a)-(5.1.1b) are satisfied, and (5.6.12) is an isomorphism by Lemma 5.1.2 (5).
In exactly the same manner, the second assertion follows by appealing to Lemma 5.1.2 with $A_r = S_r$, $I_r = (E_r)$, $B = S_\infty$, $B' = R_\infty$ (viewed as a $B$-algebra via $\theta \circ \varphi$) and $M_r = M_r$, using (4.2.14b) and Theorem 5.2.3 to verify the hypotheses (5.1.1a)-(5.1.1b).
Proof of Theorem 1.2.15 and Corollary 1.2.16. Applying Theorem 4.1.5 to (the connected-étale sequence of) $G_r$ gives a natural isomorphism of short exact sequences (5.6.14). Due to Theorem 5.6.2, the terms in the top row of (5.6.14) are free of ranks $d'$, $2d'$, and $d'$ over $A_r[\Delta/\Delta_r]$, respectively, so we conclude from Lemma 5.1.2 that the corresponding term is free of rank $2d'$ over $\mathbb{Z}_p[\Delta/\Delta_r]$. Using the fact that $\mathbb{Z}_p \to A_r$ is faithfully flat, it then follows from the surjectivity of the vertical maps in (5.6.6) (which was noted in the proof of Theorem 5.6.2) that the canonical trace mappings are surjective, compatibly with change in $r$ by construction. By $\Gamma$-descent and Tate's theorem, there is a natural isomorphism
\[
\operatorname{Hom}_{\mathbf{pdiv}^\Gamma_{R_r}}(G^{\text{ét}}_r, G_r) \simeq \operatorname{Hom}_{\mathbb{Z}_p[G_{\mathbb{Q}_p}]}(T_pG^{\text{ét}}_r, T_pG_r)
\]
and we conclude that the connected-étale sequence of $G_r$ is split (in the category $\mathbf{pdiv}^\Gamma_{R_r}$), compatibly with change in $r$. Due to the functoriality of $M_r(\cdot)$, this in turn implies that the top row of (5.6.1) is split in $\mathbf{BT}^\Gamma_{S_r}$, compatibly with change in $r$, which is easily seen to imply the splitting of (5.6.4).
Varieties of English in current English language teaching
Introduction
The systematic description of varieties of English, native and non-native, is steadily gaining momentum in contemporary sociolinguistics (cf., e.g., Kortmann and Schneider 2004; cf. also Schneider 1997a, b; Labov, Ash and Boberg 2005; Burridge and Kortmann 2008; Kortmann and Upton 2008; Mesthrie 2008; Schneider 2008). English has long been identified to be a pluricentric language (Clyne 1991), and more recently linguists have been paying increasing attention to the use of English as a Lingua Franca (ELF), as there is widespread agreement about the fact that "the vast majority of verbal exchanges in English do not involve any native speakers at all" (Seidlhofer 2005a: 339, b; cf. also Jenkins 2005). Cook (2003) points out that it is communicative language teaching, an approach based on the introduction of the concept of communicative competence by Hymes (1972), that still remains "the dominant orthodoxy in progressive language teaching" today (Cook 2003: 36). This also means that 21st century speakers and learners of English need to be linguistically, sociolinguistically and pragmatically equipped to be able to communicate with native and non-native speakers of English from various regional, social and cultural backgrounds (Bieswanger 2007: 405). ELT, broadly defined by the Oxford University Press ELT Journal as "the field of teaching English as a second or foreign language", is thus currently facing new challenges in a changing and increasingly globalized world. The issue of which varieties should be learned by non-native learners of English and the question of acceptability of linguistic variation in ELT have become widely discussed topics in academic writing about ELT and are everyday issues for contemporary language teachers (cf. Görlach 1999; McArthur 2001; Gnutzmann 2005). The increasing importance of different Englishes - particularly native varieties other than British and American English, the New Englishes and ELF - however, appears to be still only marginally reflected in ELT curricula and teaching material. This paper presents an analysis of a current German secondary school ELT curriculum, accompanying teaching material and selected German university programs for prospective teachers of English. The present study addresses a number of variation-related questions, such as "How important should varieties be in ELT?" and "What is the role of ELF?" These questions (further delineated in section 2 below) are equally relevant for ELT in countries like Germany, where English is taught as a foreign language, and multilingual countries such as South Africa and India, where English has a whole range of different functions.
Why more varieties of English in ELT?
The original motivation for this paper stems from many years of teaching English to adults in the context of evening classes, intended to refresh the knowledge of English they had acquired in secondary school. The intermediate and advanced learners of English in my classes, who had all received between five and nine years of English foreign language education in German secondary schools, frequently reported frustrating experiences they had had in English-speaking environments at home and abroad. They often complained about situations in which their native or non-native interlocutors had been speaking "so strangely" [i.e., employing a variety of English with which they were not familiar] that their "school English" [i.e., the English they had learned in secondary school] did not enable them to take part in certain English-language conversations. More precisely, these learners of English could not cope in situations in which they either had to speak English in a native-speaking context or use ELF with other non-native speakers of English. The conversation failed because their interlocutors did not speak the type of standardized English they had themselves learned in secondary school, but used a variety they considered "strange". Additionally, the encounter with more or less intelligible varieties of English obviously added to the so-called "culture shock" generally caused by foreign environments. The above reports indicate that many years of English foreign language education in secondary school had not prepared these speakers for the sociolinguistic reality in an increasingly globalized world and had failed to create any kind of awareness of the considerable regional variation in the use of English. This violates the basic principle of English language and foreign language education as formulated in Klippel and Doff's (2007: 36) handbook of English foreign language education: "school is supposed to prepare children and teenagers for successfully coping with their lives" (my translation).
In the course of the twentieth century, English has become a world language and the undisputed global lingua franca, and as a result the ability to effectively communicate in English is currently considered a skill that enables individuals to deal with the demands and challenges of everyday life in a globalized world (Centre for Applied Linguistics 2006; Klippel and Doff 2007: 36; McArthur 2001: 1). Similar demands for a sound command of English are found in the curriculum for the so-called Realschule in Bavaria, the intermediate of the three types of secondary schools within the state of Bavaria, Germany. The following translated passage from the introductory section of the curriculum for the subject English at the Bavarian Realschule defines the aims of English foreign language education at the Realschule:
Internationally, English is the most important second language and lingua franca. In many areas of interaction - in the economy and technology, in science and the arts, in tourism and sports - the English language is a natural means of communication.

The curriculum focuses on the ability to communicate and has to be placed in the tradition of communicative language-learning, which is still widely agreed to be the dominant approach in ELT, despite recent movements such as focus on form (cf. Doughty and Williams 1998; Cook 2003: 36). In light of the above-mentioned reports by the learners in my evening classes, the demands for communicative competence, in both ELT literature as well as the curriculum referred to above, lead to two main research questions that will be addressed in this paper: (i) How can current ELT (better) help to produce graduates who are able to communicate in as many situations as possible?
(ii) What role can sociolinguistic aspects play in this context?
These questions are themselves based on two main developments. As I have mentioned above, communicative competence has been the main aim of ELT since the 1970s, and during the same period the situation of English in the world has changed. Today, the vast majority of English speakers/users worldwide are non-native speakers/users of English, and even most native speakers do not speak the widely taught but highly generalized varieties usually labeled "British English" and "American English". This fact is essentially undisputed, although it is extremely difficult to give reliable numbers for speakers/users of English in the world. Gnutzmann and Intemann (2005) state in the introduction to their seminal volume on The Globalization of English and the English Language Classroom: Facing these uncertainties, it becomes obvious that counting speakers is nearly impossible and that all data available is largely, if not exclusively, based on estimates.
[…] If the numbers of speakers of English as a Native Language (ENL) is uncertain, then the estimates for speakers of English as a Second Language (ESL) are even more so.
[…] Estimating the number of speakers of English as a Foreign Language (EFL) cannot be anything more than a good guess. (Gnutzmann and Intemann 2005: 13) The most difficult task in this context is the definition of what constitutes a speaker or user of English. Quantitative estimates of speakers/users of English vary considerably because "the terms 'speaker' and 'user' are not clearly defined", and because there is no agreement among linguists as to "the minimum level of proficiency that can be counted as English-speaking, -using or -knowing" (Bieswanger and Becker 2008: 34). Estimates put the number of native speakers of English between 300 and 400 million (Viereck, Viereck and Ramisch 2002: 242; Crystal 2003: 65) and the number of non-native speakers at a range from several hundred million to more than two billion (Bieswanger and Becker 2008: 34). Whatever the exact numbers, it is undisputed that the English language is a truly global language that is used in a vast number of countries and territories for a myriad of different functions. The linguistic variation that accompanies the geographical distribution and functional range of English is at least partly due to the fact that "no other language has ever been put to so many uses so massively by so many people in so many places" (McArthur 1998: 54).
It was stated above that the majority of native speakers do not actually speak the varieties labeled as "British English" or "American English" that are usually taught to foreigners. The Received Pronunciation (RP) accent, often associated with - or by non-linguists even understood to be identical to - "British English", is one example of a widely taught variety which is only spoken by a very small percentage of native speakers of English. In an interview, Crystal (2002: 20) summed up the current status of RP as follows: So RP has only ever been spoken at most by about five per cent of the population of England. And don't forget that it is England we are talking about. We're not talking about Wales, where RP was never an important element, nor Scotland, nor Northern Ireland, let alone Ireland as a whole. But even in England, 50 years ago, only about five per cent of the people would have spoken RP, and that figure is now down, I imagine, to less than two per cent …" (Crystal 2002: 20) Speakers of RP are thus only a small minority of English speakers in Britain and a tiny fraction of all native speakers of English worldwide. We find a similar situation when we look at standard national varieties of English. We no longer only distinguish between British and American English.
It is, however, not only the proportion of native speakers to non-native speakers or RP speakers to non-RP speakers that should be considered; it is a whole set of new or changing attitudes and parameters that should lead to a rethinking of educational policies in ELT and a strengthened role of varieties in the classroom. Three developments are particularly relevant, of which only one is directly linked to globalization: Firstly, there is an ongoing process of strengthening of regional and social varieties in areas where English is traditionally the native language of the majority of the population. Particularly in the United Kingdom there seems to be an increasing acceptance of regional and social varieties in society, going hand-in-hand with devolution and a strengthening of the regions. This is reflected, for example, in changing language policies in the media, such as the hiring of non-RP-speaking announcers by the BBC, and an increasing interest of linguists in these varieties (cf. Hughes, Trudgill and Watt 2005). Secondly, there is a growing self-confidence of previously often stigmatized native and non-native national and regional varieties outside the UK and the USA, such as Australian English, New Zealand English, South African English and Indian English to name but a few.
The traditional prestige and standard varieties are no longer considered target norms by many speakers of these national and regional varieties, resulting in a decreasing influence of the traditional norms. The increasing self-confidence and independence of these varieties is reflected in, and probably to a certain extent fostered by, the systematic linguistic description of these varieties (cf. Hickey 2004; Kortmann and Schneider 2004). Thirdly, due to globalization and the availability of affordable means of long-distance communication and transportation, there is growing contact between learners of English and native speakers on the one hand, and between learners of English from different backgrounds on the other. This leads to the increased use of ELF and has led some scholars to argue in favor of abandoning the native-speaker standard altogether (Jenkins 2005; Seidlhofer 2005b).
These changes, however, do not only warrant questions with respect to norms but have led ELT researchers to discuss a whole range of consequences for ELT. Gnutzmann and Intemann (2005) demand an increased consideration of varieties of English in ELT, particularly as far as receptive abilities and intercultural skills are concerned: As a result of globalisation the function of English as an international tool for communication needs rethinking in the English language classroom. This does not only include linguistic skills to understand various kinds of accents and to be understood by others, but it also includes knowledge of other cultures which provides the learners with the ability to respond adequately to problems arising from cultural differences between the participants in international communication. (Gnutzmann and Intemann 2005: 20) Görlach (1999: 18) also demands that "students' receptive competence (in reading and listening comprehension) should be trained at an early age…", and adds that the confrontation with varieties "could and should include texts from the periphery". Seidlhofer (2005b: 170), on the other hand, raises the criticism that "the control over the norms and how it [i.e. English] 'should be used' is still assumed to rest with the minority of its speakers, namely English native speakers", and argues for a "questioning of established authority" in favor of ELF. However, I would not consider ELF a suitable norm for ELT at the present stage, nor for at least the foreseeable future, as ELF has not been sufficiently studied and described, and because ELF remains so diverse that it is questionable whether it can even be called a variety of English. Gnutzmann and Intemann (2005: 17) express similar resentments when they say that "the use of ELF is as diverse as the competences of its speakers are" and conclude that "[l]inguistic evidence for ELF as a variety seems to be scarce so far". With ELF not or not yet being an option, the demand for a heightened role of varieties in the ELT classroom currently seems to be the most promising way to provide learners of English with an education which gives them the maximum possible communicative competence.
This general demand for a more elaborate attention to varieties in ELT should be specified separately for the various levels of linguistic analysis, i.e. for grammar, pronunciation and vocabulary. There have recently been attempts to address the issue of pragmatic variation with respect to ELT as well (cf. Barron 2005; Barron and Schneider 2008), but this would have to be the topic of a separate investigation. While formulating these demands, we should keep in mind, however, that there are usually only a few weekly lessons devoted to ELT in institutionalized settings such as in secondary schools.
From the point of view of communicative competence, varieties of English are generally thought to differ least at the level of grammar (Trudgill and Hannah 2002: 18; Jenkins 2003: 74), implying that the grammatical differences between varieties of English do not have to be addressed extensively in the English language classroom, particularly not at the beginning and intermediate stages.
The situation is completely different with respect to pronunciation. Starting at an early age, learners should get broad receptive training and should be confronted with as many accents as possible (cf. Görlach 1999: 18), progressing from the most common to the least frequently used pronunciation varieties. From a communicative point of view, it is important to enable 21st century learners of English to understand a variety of accents so that they can effectively communicate with most speakers of English. At the same time, such receptive training will create awareness among learners that English, most likely just like their mother tongue, is not monolithic. As far as production is concerned it seems to be sufficient, again from a communicative point of view, to train learners to be able to produce any widely understood accent of English. Being able to produce or imitate different accents of English does not benefit learners' communicative competence.
At the lexical level, learners should also receive broad receptive training and should be confronted with a number of different varieties from an early age, to enable them to understand speakers from different geographical and social backgrounds and to increase awareness that there is a considerable amount of variation in language use in English.
Receptive training at the lexical level should certainly start with, but not be limited to, the most common standard varieties. However, not everybody will agree with Görlach (1999), who takes the issue a step further when he demands that the confrontation with varieties should include material from "the periphery", arguing as follows: If the teachability of Shakespeare's plays has never been questioned even though the language employed is pretty useless for modern communication, then there is no doubt that short texts from Scots, AmBlE [American Black English] and IndE etc. can be taught, too. (Görlach 1999: 18) As far as vocabulary is concerned there should also be a certain amount of variety-related productive training, particularly at the intermediate and advanced level, to enable learners to communicate with speakers from different backgrounds. Such training also helps to increase learners' awareness that a certain amount of knowledge about varieties is necessary to be able to communicate effectively in English in speech and writing, and that communication has to be geared to their recipients' backgrounds to avoid misunderstandings and communicative breakdown.
In this section it has been shown that there is a need to include more exposure to varieties and variety-related training in ELT to create awareness that English is not monolithic and to provide learners with communicative competence that enables them to effectively communicate in a variety of situations in a changing and increasingly globalized world.
The role of varieties of English in curricula, teaching material and university programs for prospective teachers
We have seen in the previous section that varieties are, or at least should be, an important part of current ELT.
Realschule: Curriculum and textbooks
As already mentioned in section 2, the introductory section of the curriculum for the subject English at the Realschule emphasizes that "[i]nternationally, English is the most important second language and lingua franca" (BMUK 2000-2003: 52, my translation), which implies that the actual curriculum should reflect this sentiment by the inclusion of varieties of English at all stages, as demanded in section 2 of this paper. A year-by-year analysis of the curriculum and textbook series yielded some interesting results. For year five, no inclusion of varieties could be detected in either the curriculum or the accompanying textbook Red Line New 1. For year six, the curriculum contains only one general remark at the very beginning, stating that students should be informed about the existence of varieties. The curriculum does not mention knowledge of features of varieties or learning about potential structural differences between varieties. In the accompanying textbook Red Line New 2, varieties do not occur at all, with the exception of a few words marked "AE" for American English in the vocabulary section at the end of the book, such as movie (Red Line New 2: 154) and cellphone (Red Line New 2: 155).
For year seven, there are no variety-related demands in the curriculum at all, whereas the textbook Red Line New 3 at least includes one listening comprehension exercise concerning accents from four major regions of the United Kingdom, namely England, Scotland, Wales and Northern Ireland (Red Line New 3: 9).
The curriculum of year eight demands that learners be shown "the peculiarities of American English" (BMUK 2000-2003: 310, my translation). However, the curriculum then limits this aim to receptive abilities at the level of listening and to the understanding of texts that contain features of American English, including American English vocabulary (BMUK 2000-2003: 310-311). The inclusion of varieties in the curriculum of year eight at first glance sounds like a step in the right direction, but we have to take into account that there are only three weekly English lessons of 45 minutes each at the Realschule in year eight, which means that we can expect that there is most likely very little time for the coverage of American English. This expectation is confirmed by the analysis of the accompanying textbook Red Line New 4, which contains only very few explicit references to American English. There are only two basic exercises comparing British English and American English in the whole textbook (Red Line New 4: 17, 46), and two boxes with comparisons of features of British English and American English (for the most part lexical and spelling differences) in the vocabulary section at the end of the book (Red Line New 4: 112, 139).
The curriculum of year nine demands that learners should be trained to understand texts that "contain features of different varieties of the global language English (e.g., Asia, Africa)" (BMUK 2000-2003: 408, my translation). Again this sounds like a step in the right direction, but as in year eight there are only three weekly English lessons in year nine and a multitude of other issues also have to be covered, which means that there is most likely not much time for varieties. The textbook Red Line New 5 reflects this situation by devoting only rather limited room to varieties. There is a minor listening exercise requiring learners to identify speakers of Indian English and British English (Red Line New 5: 24), a recording of a story about life in South Africa (Red Line New 5: 42-43), and a short recording of a conversation between a taxi driver from Trinidad and a passenger from Britain (Red Line New 5: 57). Only the exercise on page 24, however, focuses on the language use of regional varieties of English.
The last year at the Realschule, year ten, focuses on the preparation for the final school-leaving exam, as explicitly stated in the introductory paragraph of the curriculum of this year (BMUK 2000-2003: 508). This means that year ten is dominated by teaching to the test, an exam in which varieties do not play an important role. The curriculum also demands that learners "encounter and understand additional varieties of English as a lingua franca" but does not specify this requirement any further (BMUK 2000-2003: 509, my translation). The textbook Red Line New 6 reacts to this demand by brief coverage of Canadian English, Australian English and New Zealand English. There are a few variety-related recordings on the CD accompanying the textbook, namely one example of a conversation between a British girl and two Canadians (Red Line New 6: 23) and a few short passages for listening that are read by speakers of Australian English and New Zealand English (Red Line New 6: 60-61).
The only exercise specifically concentrating on variation in language use, however, is a task aimed at the identification of English speakers from Australia, Britain, India and the United States (Red Line New 6: 67). In the vocabulary section at the end of the book, there is a box with fifteen so-called "Australian and New Zealand words" (Red Line New 6: 145). However, the phonetic transcription of G'day, which is often considered the signature feature of Australian English, is given in Received Pronunciation (Red Line New 6: 140).
Summing up the analysis of the curriculum for the Realschule and the accompanying textbook series Red Line New with respect to questions (i) and (ii) posed at the beginning of this section, we can conclude that there are a number of steps in the right direction of raising awareness and introducing students to distinct varieties of English. This holds true particularly as far as variety-related demands in the curriculum are concerned, where varieties are given some attention in years eight, nine and ten. Despite these demands in the curriculum, rather little room is devoted specifically to varieties in the textbook series Red Line New. We cannot blame the textbooks alone, however, but should for the most part blame the fact that there are not sufficient weekly lessons of English at the Realschule, particularly when considering the status of English as a global means of communication (cf. Ammon 2008: 12-18 on the current importance of English in the world). The lack of a sufficient number of lessons (i.e., a sufficient amount of time) is certainly a key factor contributing to a weak representation of varieties in the textbooks and, presumably, a fairly limited consideration of varieties in day-to-day teaching, despite some ambitious demands in the curriculum.
Gymnasium: Curriculum and textbooks
The curriculum for the Gymnasium is characterized by a strong focus on the United Kingdom and the United States, and their associated standard varieties of English, which is emphasized in the general remarks about the profile of the subject English at the beginning of the curriculum. Differentiated socio-cultural knowledge about the United Kingdom and the United States is said to be "at the center" of the subject English at the Gymnasium (BMUK 2004-2007: subject profile, my translation). As far as language is concerned, British English and American English are the only accepted norms; other varieties only play an explicit role with respect to listening comprehension: The subject [English] is based on the standard language; British English and American English are equally accepted as norms. As far as listening comprehension is concerned, the students should also encounter important regional and social varieties of English.
(BMUK 2004-2007: subject profile, my translation) The year-by-year analysis of the curriculum for the Gymnasium and the accompanying textbook series Green Line New also yielded some interesting results. In the curriculum of year five, varieties are not represented at all. The textbook Green Line New 1 does contain two pages on "English - a world language" (Green Line New 1: 98-99), providing mostly geographical information about the distribution of English as a world language. There are also a few words labeled "American English" in the vocabulary section at the end of the book, such as American English fries versus British English chips (Green Line New 1: 192).
For year six, the curriculum mentions in the section on pronunciation and intonation that students should encounter some easy-to-understand regional varieties (BMUK 2004-2007: year six). The analysis of the textbook Green Line New 2 revealed only one box with a comparison of vocabulary differences between British English and American English, many of which differ only in spelling (Green Line New 2: 173).
Similar to the curriculum of year six, the curriculum of year seven states in the section on pronunciation and intonation that students should encounter "more regional varieties" without being more specific (BMUK 2004-2007: year seven, my translation). The textbook does address varieties on two occasions, but instead of including other regional varieties, as demanded in the curriculum, it contains one exercise that asks students to identify speakers as either using British English or American English (Green Line New 3: 57), and a box with a list of British English and American English words, some spelling differences and three pronunciation differences (Green Line New 3: 154). As with year six, the textbook does not appear to contain material concerning "regional varieties".
For year eight, the curriculum demands that students should be trained to be able to identify "typical features of British and American pronunciation" (BMUK 2004-2007: year eight, my translation). Unfortunately, other varieties of English are not represented in the section on language in the curriculum of year eight, although some of the topics in the section on intercultural learning would provide the ideal framework for introducing new varieties to learners. The section on intercultural learning specifically demands that learners get to know the "situation and way of life of young people in another English-speaking country (e.g., Australia, Canada)" (BMUK 2004-2007: year eight, my translation). The textbook Green Line New 4 for year eight reflects the demand in the curriculum for a comparison of British and American pronunciation by a single exercise on differences between British English and American English, which does not even focus on pronunciation and includes all "four areas of language" (Green Line New 4: 83). The textbook does briefly make the connection between the cultural studies topic Australia and Australian language use by presenting some vocabulary items which are considered to be typically Australian, such as G'day and barbie ("barbeque"), but does not mark them as Australian English in the vocabulary section (Green Line New 4: 8, 124, 126). On the whole, varieties of English play hardly any role in year eight.
The focus of the curriculum of year nine is still on British English and American English.
According to the curriculum, learners should be trained to be able to understand texts that are primarily spoken in Standard British English and Standard American English (BMUK 2004-2007: year nine, my translation). The curriculum also demands that students should encounter "additional regional and social varieties" without being more specific (BMUK 2004-2007: year nine, my translation). The textbook Green Line New 5 for year nine reflects the lack of specific instructions concerning varieties by not including explicit references to variation in language use at all, except for a few words labeled "American English" in the vocabulary section (e.g., Green Line New 5: 128).
The curriculum of year ten demands the inclusion of more regional and social varieties by means of authentic audio and video material. Unfortunately, the curriculum is so new that the accompanying textbook has not yet been published. However, if Green Line New 6 is in line with the other textbooks of the series, it may again not adequately reflect the variety-related demands of the curriculum. The curricula of years eleven and twelve so far exist only as rough drafts that contain just a few references to varieties, exclusively in the area of listening comprehension.
As noted above with regard to the Realschule, there are some steps in the right direction as far as varieties of English are concerned in the curriculum for the Gymnasium, but the variety-related demands are rather vague throughout the curriculum and varieties play a considerably smaller role than they do in the case of the Realschule. The curriculum contains specific instructions concerning varieties only for years eight and ten, with vocabulary differences only being included in year eight and only for British English and American English. The curriculum and textbooks for the Gymnasium, which are brand-new and will thus most likely last well into the second decade of the 21st century, are somewhat anachronistic in almost exclusively focusing on British English and American English. In both the curriculum and the textbooks, there is generally very little room devoted to varieties, again not reflecting the status of the English language today (cf. Ammon 2008: 12-18).
Teacher training programs
The role of varieties in day-to-day teaching in the classroom does, of course, also depend to some extent on the teacher. This is why two university programs for prospective teachers at the Realschule and Gymnasium in two different states were reviewed for this paper, namely (i) the program at the Catholic University of Eichstätt, and (ii) the program at the Johann-Wolfgang-Goethe University Frankfurt. Both programs include, among other things, classes concerning practical language skills, usually taught by native speakers with either a British or an American background, and basic linguistics classes. The relevant departments of English linguistics at both universities offer seminars on varieties of English on a more or less regular basis, but these classes are by no means mandatory and only a small number of students attend such a class in the course of their studies. We can only conclude that varieties unfortunately do not receive enough attention in university programs for prospective teachers and thus teachers are frequently not adequately prepared for addressing or coping with variety-related issues in the classroom. A lack of appropriate training and the resulting linguistic insecurity of teachers with respect to varieties may at least to a certain extent be responsible for the way in which many teachers treat varieties other than Standard British English, namely as "deviations" from what they consider "correct English". There is also a certain amount of hesitation among teachers to include varieties of English in ELT, as this implies acceptance of a more complex pattern of language use, which makes straightforward right vs. wrong answers in the grading of exams rather problematic. Last but not least, varieties are frequently not considered important by teachers, as they usually do not play a major role in important exams.
In this section, the example of Germany has made it clear that varieties are still not adequately represented in current English curricula, accompanying teaching material and university programs for prospective teachers, especially in light of the status of English in the world and the amount of variation learners of English will likely encounter outside their secondary school environment. The following section will be concerned with the question of how the use of more varieties in ELT can be implemented.
The implementation of more varieties in ELT
The previous sections have made it clear from a linguistic as well as a pedagogical point of view why we need to include more varieties of English in ELT in a more effective manner.
The question remains as to how this can be done, especially without making naive demands and in consideration of the existing framework of the education system. The following suggestions could be among the necessary steps in the right direction.
Firstly, curriculum designers need to recognize the position of English in our changing world and the fact that the global spread of English has made the language and issues around its use more complex. The role of native and non-native varieties has to be strengthened in curricula and more time has to be devoted to English lessons to give teachers the opportunity to include teaching on varieties in the classroom; three times 45 minutes a week, as in years eight and nine of the Realschule, is simply not enough and does not do justice to the importance of English (cf. Ammon 2008: 12-18). It is, however, not enough to simply include ambitious demands concerning varieties in the curriculum and to devote more time to English learning; it also has to be ensured that the accompanying textbooks adequately reflect these demands.
A second important and necessary step would be to begin curriculum-based ELT at an earlier age. In the case of the German state of Bavaria, and in fact in all states of Germany, primary school learners do receive foreign language instruction, usually English, some from year one and some from year three. However, the content of such lessons depends largely on the selection of the teacher, since curricula are either vague or have not yet been established. This means that the intake of secondary schools is extremely heterogeneous with regard to previous knowledge of English, and as a result English instruction at the secondary level usually has to start from scratch in year five. In order to make English learning in primary school worthwhile, curricula for English learning at primary school level would need to be introduced in all states as soon as possible (cf. Klippel and Doff 2007: 25). This would automatically provide two to four more years of meaningful English learning and at the same time free up instruction time for varieties of English. Since the system of English language education in primary schools is already in place, sticking to the current curriculum-free approach would mean a lost opportunity.
A third essential step for the implementation of more varieties in ELT would be the introduction of adequate and mandatory variety-related training of prospective and active teachers. Variety-related training must be provided to keep already active teachers up to date, because teaching "the same old same old" for many decades after having left university cannot be an option in a world constantly changing at an increasing pace. For prospective teachers, it would be necessary to make adequate variety-related training a mandatory part of English programs at universities.
Conclusion
Summing up the findings of this paper, it seems safe to conclude that varieties of English are still not adequately represented in current ELT, as has been demonstrated with the help of an example from Germany in section 3. Analyses of ELT curricula, teaching materials and teacher training programs elsewhere would probably differ concerning certain details, but the basic findings of this exploratory study will most likely hold true for ELT in many other countries and territories. Referring back to the two main research questions underlying this paper (cf. section 2), it has been shown that there is no alternative to an increased representation of varieties of English in ELT if we are serious about the main aim of communicative language learning, which is to enable learners to effectively communicate in a maximum of situations. Students should encounter as many varieties as possible, develop an awareness of linguistic diversity and learn systematically about how varieties can differ from each other in order to lay the foundations for life-long learning. It has to be pointed out to learners of English that the English language is not monolithic but a constantly evolving dynamic system with a pluricentric structure. Teachers, textbook authors, curriculum designers, foreign language education researchers, applied linguists, sociolinguists and other ELT-related experts should make good use of the growing body of systematic linguistic descriptions of varieties of English and work together to produce material that helps learners of English with learning about varieties and variation. Fortunately, we can draw on a growing number of current and systematic linguistic descriptions of varieties of English (e.g., Kortmann and Schneider 2004; Labov, Ash and Boberg 2005; Schneider 1997a, b), textbooks addressing varieties of English (e.g., Trudgill and Hannah 2002; Bauer 2003; Jenkins 2003; Burridge and Kortmann 2008; Kortmann and Upton 2008; Mesthrie 2008; Schneider 2008), and an increasing number of English-language corpora of authentic language data, such as the International Corpus of English (ICE) and the British National Corpus (BNC). These linguistic resources, however, have not yet received adequate attention by materials and curriculum developers. Finally, the suggestions in section 4 need to be implemented to make maximal use of the available resources, to give learners a meaningful English language education that prepares them for the sociolinguistic reality and thus the challenges of everyday life in a globalized world, and to provide teachers with the tools to help their students to achieve this goal. Despite some small steps in the right direction that have already been made, the current representation of varieties of English in ELT, and its distribution over the school years, can be summarized as follows: too little, too late. It is, however, certainly never too late to start developing, testing and implementing new ideas, some of which have been laid out in this paper.
Omenn syndrome in a 10-month-old male with athymia and VACTERL association
We describe the case of a 10-month-old boy with vertebral defects, anal atresia, cardiac defects, tracheoesophageal fistula, renal anomalies, and limb abnormalities (VACTERL) association and athymia who developed Omenn syndrome.
microduplication and features of VACTERL association. The patient discussed in this article was reported to have vertebral deformity, a vascular ring, tracheoesophageal fistula with esophageal atresia, cardiac malformations (ventricular septal defect and double aortic arch), and limb abnormalities.4 This was the second such known case of VACTERL association and a 22q11 microduplication.4 In this report, we present a patient with VACTERL association and athymia/DGS who developed Omenn syndrome (OS).
CASE REPORT
The patient was a full-term male born via cesarean section with features of VACTERL association identified during prenatal care. These features included multiple vertebral abnormalities, tetralogy of Fallot with hypoplastic pulmonary valve, and unilateral renal agenesis. The patient was macrocephalic without hydrocephalus. At birth, he required mechanical ventilation for respiratory distress due to significant chest wall deformity (Fig 1, A). He also had hypocalcemia and hypoparathyroidism. The patient received parenteral and enteral calcium supplementation and required pulmonary valvuloplasty and patent ductus arteriosus stenting by the cardiology department.
The immunology department was consulted following an abnormal result of newborn screening for SCID. The cycle threshold of the T-cell receptor rearrangement excision circle was 45 (abnormal ≥ 40) when performed at age 1 day and again when repeated at age 7 days. A flow cytometry study at age 2 weeks revealed absent CD3+ T cells (13 cells/mm3) and CD4+CD45RA+ naive T cells (13 cells/mm3) but normal CD19+ B-cell levels (588 cells/mm3) and CD56+ natural killer cell levels (175 cells/mm3) (Table I). The patient's serum IgG level was normal. Chest radiography revealed marked chest wall deformities (Fig 1, A) and absent thymic shadow (Fig 1, B). Of note, there was gestational diabetes in the mother, which was treated with insulin starting at 21 weeks' gestation; however, no other conditions or medication use that have been associated with DGS were reported. There was no known parental consanguinity. Repeat studies when the patient was 2.5 months old revealed a similar finding of absent CD3+ T cells (0 cells/mm3) (Table I). The result of whole genome chromosome single-nucleotide polymorphism microarray analysis was normal, and the result of testing with a primary immunodeficiency 407 gene panel that included TBX1, CHD7, and FOXN1 (Invitae) was normal. The genetics department did not recommend that whole exome sequencing be performed initially owing to the lack of known genes associated with VACTERL association. Also, a hematopoietic stem cell-autonomous defect was not ruled out, as the patient's CD34+ cells were not cultured in an artificial thymic organoid system. A diagnosis of complete DGS superimposed on VACTERL association was made, and the patient was treated conservatively with subcutaneous immunoglobulin and prophylactic sulfamethoxazole and trimethoprim. Because of the chest wall deformity requiring respiratory ventilation, the patient was deemed unsuitable for transfer to Duke Medical Center to receive a cultured thymic epithelium transplant. Additionally, the patient could not be transferred for chest wall repair until cultured thymic epithelium transplantation had been performed. He continued to receive immune globulin replacement and prophylactic antibiotics until age 8 months. He experienced interval infections up until that point; these infections included recurrent pneumonias that were successfully treated with antibiotics. The patient had no history of viral or fungal infections. Subsequently, the immunology department was consulted again when the patient was 10 months old to evaluate the development of generalized lymphadenopathy, splenomegaly, generalized eczematous skin rash, and eosinophilia that began at age 8 months. The eczema was characterized as a scaly erythrodermic rash of the entire body and generalized alopecia (Fig 1, C). As seen in Fig 1, D, the patient's blood eosinophil count increased dramatically at age 8 months, peaking at 6116 eosinophils/mm3 at age 10 months. His serum IgE level was also dramatically increased at age 10 months (5096 IU/mL). Further immunologic evaluation revealed levels of CD3+ T cells (705 cells/mm3) and CD4+CD45RO+ memory T cells (172 cells/mm3) that were increased from the levels found in previous studies (Table I). Examination of TCR-Vb families by flow cytometry (Cincinnati Immunology Laboratory, Cincinnati, Ohio) demonstrated oligoclonal T cells. The result of examination for maternal engraftment was negative, with 100% XY as determined by fluorescence in situ hybridization (Mayo Clinic Laboratory, Rochester, Minn).
A diagnosis of OS in our patient with VACTERL association and now atypical complete DGS was established. His OS was initially treated with methylprednisolone, 2 mg/kg per day. The patient improved markedly, with resolution of the rash, splenomegaly, and lymphadenopathy, and methylprednisolone was gradually tapered. However, there was a recurrence of the patient's erythroderma after tapering of the steroids. He subsequently continued receiving prednisone (2 mg/kg per day) throughout most of his hospital stay. Whole exome sequencing (GeneDx, Gaithersburg, Md) was performed; it showed biparental inheritance of 2 WNT10A variants, a paternally inherited pathogenic mutation (c.682T>A; p.F228I), and a maternally inherited variant of uncertain significance (VUS) (c.649G>A; p.D217N). WNT10A mutations, even in a heterozygous form, can be associated with ectodermal dysplasia-associated conditions, including dry skin, dystrophic nails, oligodontia, and sparse hair. The parents did not have any clinical evidence of ectodermal dysplasia other than early-onset alopecia in the father and eczema in a sibling.
DISCUSSION
This case highlights a rare presentation of VACTERL association with initial lymphopenia as a result of a clinical diagnosis of complete DGS and subsequent development of OS. Congenital athymia leads to profound T-cell immunodeficiency with absent T cells but normal B-cell and natural killer cell levels. The immunodeficiency of VACTERL association with athymia needs to be differentiated from other congenital immunodeficiencies, such as SCID and/or combined immunodeficiency. Patients with DGS and VACTERL association have increased susceptibility to infections, development of graft-versus-host disease, and now OS. For athymia, cultured thymus epithelial tissue implantation is the preferred treatment in addition to antimicrobial prophylaxis and immunoglobulin replacement.5 Without thymus transplantation, athymia is fatal, with almost all children dying from infections by age 2 years. Comorbidities associated with underlying disorders can also interfere with the ability to receive a thymic transplant, as was the case for this patient.6 Hematopoietic stem cell transplantation has been performed in patients with congenital athymia. However, survival after hematopoietic stem cell transplantation in patients with congenital athymia is low compared with that in patients with SCID (41% versus ≤90%, respectively).2 Our patient developed OS at age 8 months. OS is a distinct inflammatory process that can be associated with genetically diverse SCID disorders. As opposed to typical patients with SCID, who have a paucity of lymphoid tissue, patients with OS have enlarged lymph nodes and splenomegaly. They also develop generalized erythroderma, as well as alopecia and loss of eyebrows and eyelashes. Other presenting symptoms include chronic diarrhea, pneumonitis, and failure to thrive during the first year of life.5 Eosinophilia and an elevated IgE level are frequently present. OS is caused by oligoclonal expansion of autoreactive T cells as a result of abnormal thymic negative selection. In addition, there is an absence of proper regulation by other immune system components, such as IL-10 and regulatory T cells. OS may be associated with various syndromic disorders, including SCID, cartilage hair hypoplasia, DGS, and coloboma, heart defects, atresia choanae, retardation of growth and development, genitourinary abnormalities, and ear abnormalities (CHARGE) syndrome. These associated disorders must be considered when encountering a patient with OS.5 Previous case reports have described an overlap between VACTERL association and 22q11 deletions and/or duplications as well as OS with atypical complete DGS.4,7 Stone et al7 reported a case of an infant with atypical complete DGS and OS. The patient presented with erythroderma, unilateral kidney agenesis, tetralogy of Fallot, pulmonary atresia, ventricular septal defect, and absent thymus. Laboratory findings included eosinophilia, elevated IgE level, undetectable T-cell receptor rearrangement excision circles, an oligoclonal T-cell population, and abnormal TCR-Vb spectratyping. The patient's genetic testing results were negative, with no 22q11 deletion. It is notable that 22q11 deletions are not the only pathway by which atypical DGS can occur.7
Oligoclonal T-cell expansion in the setting of complete DGA is considered an atypical phenotype. The increased numbers of T cells develop at some point after birth, and maternal engraftment must be ruled out before making the diagnosis. Markert et al8 estimated that approximately 30% of patients with DGA have an atypical DGA or OS-like phenotype. Although there are previous reports of overlap between features of VACTERL and DGS with 22q11 deletions, as well as OS and DGS, to our knowledge, a patient with overlapping features of VACTERL association with athymia and OS without a 22q11 deletion has not been reported.
Many of the aforementioned conditions that overlap with OS and athymia also have clinical overlap and multiple features in common with VACTERL association and thus should be considered in the differential diagnosis. VACTERL association is estimated to be present in between 1 in 10,000 and 1 in 40,000 live births, such that on the basis of the current literature, this combination of VACTERL association with atypical complete DGS remains very rare. The etiology of VACTERL association remains unknown. The condition is typically sporadic and often has many comorbid conditions. There is likely causal and clinical heterogeneity.2 Adam et al9 recently reported an overlap between VACTERL association and a number of other multiple embryonic malformations, and they have termed this group of conditions recurrent constellations of embryonic malformations, the etiologies of which are currently unknown. Further research into the potential association and molecular etiologies of VACTERL association and athymia with OS is needed.
DISCLOSURE STATEMENT
Disclosure of potential conflict of interest: The authors declare that they have no relevant conflicts of interest.
FIG 1. Chest wall deformity (A) and absent thymus (B) in a newborn boy with VACTERL association. At age 10 months, the patient developed a diffuse scaly erythrodermic skin rash and alopecia, as shown on the scalp (written parental consent for use of the photo has been obtained) (C), and he developed eosinophilia (D) associated with generalized lymphadenopathy and splenomegaly consistent with OS. A TCR-Vb family study by flow cytometry (E) (Cincinnati Immunology Laboratory) demonstrated oligoclonal T cells with increased TCR-Vb13.2, Vb14, and Vb23 and decreased TCR-Vb2, Vb4, Vb5.1, Vb5.2, Vb9, Vb16, and Vb22.
TABLE I. Immunologic studies in a 10-month-old infant boy with VACTERL association and athymia who developed OS. AEC, absolute eosinophil count; ALC, absolute lymphocyte count.
Pulmonary Mycobacterium avium infection demonstrating unusual lobar caseous pneumonia
Mycobacterium avium complex (MAC) infection is a major medical concern in Japan because of its increased prevalence and associated mortality. A common radiological feature in pulmonary MAC infection is a mixture of two basic patterns: fibrocavitary and nodular bronchiectatic; however, lobar consolidation is rare. We report an 83‐year‐old man with lobar caseous pneumonia caused by pulmonary MAC infection. Radiological findings were predominantly composed of dense lobar consolidation and ground‐glass opacity. A diagnosis was made in accordance with the clinical and microbiological criteria set by the American Thoracic Society. A histological examination of lung specimens obtained by using a bronchoscope revealed a caseous granulomatous inflammation with an appearance of Langhans cells. The patient was treated using combined mycobacterium chemotherapy with an initial positive response for 6 months; however, the disease progressed later. We suggest that an awareness of lobar pneumonic consolidation as a rare radiological finding in pulmonary MAC infection is important.
Introduction
Mycobacterium avium complex (MAC) accounts for 80-85% of pulmonary non-tuberculous mycobacteria (NTM) infections and is a major medical concern in Japan because of its increased prevalence and associated mortality. Pulmonary MAC infection in immunocompetent patients is radiologically characterized by two basic patterns: fibrocavitary (FC) and nodular bronchiectatic (NB); however, lobar consolidation is rare. We report a unique case of pulmonary M. avium infection with lobar caseous pneumonia in an immunocompetent patient.
Case Report
An 83-year-old man was admitted to our hospital because of productive cough and intermittent blood-stained sputum. He had no history of illness and no exposure to cigarette smoking. Chest radiography revealed dense infiltration in the right upper lung field accompanied by an elevated right diaphragm position (Fig. 1A). A retrospective evaluation of a chest radiograph, obtained 10 months before admission, revealed a patchy infiltration in the right upper lung field (Fig. 1B). A chest computed tomography (CT) scan, obtained on admission, revealed a dense consolidation occupying the whole right upper lobe and a mixed dense and ground-glass opacity in a portion of the right lower lobe (Fig. 1C-E). The upper lobe consolidation was accompanied by an air bronchogram with mild bronchiectasis and bubble-like small cavities. Unlike a typical case of pulmonary MAC infection, no remarkable finding was evident in the middle lobe or lingular segment. The radiological findings were similar to those observed for tuberculous caseous pneumonia. The laboratory findings were as follows: white blood cells, 8600/mm3 with normal differentiation; aspartate transaminase, 28 IU/L; alanine transaminase, 18 IU/L; lactate dehydrogenase, 221 IU/L; creatinine, 1.41 mg/dL; and C-reactive protein, 6.7 mg/dL. The serum immunoglobulin levels were normal. The patient tested negative for diabetes and for human immunodeficiency virus infection. The result of an interferon-γ releasing assay for M. tuberculosis was negative. The results of acid-fast bacterium (AFB) smears and cultures of sputum samples were repeatedly positive, and a nucleic acid amplification test confirmed the presence of M. avium. We finally established a diagnosis of pulmonary M. avium infection manifesting as lobar pneumonic consolidation based on the criteria set by the American Thoracic Society [1]. Informed consent was obtained, and appropriate mycobacterium chemotherapy comprising rifampicin (450 mg daily), clarithromycin (600 mg daily), and ethambutol (500 mg daily) [1,2] was administered. We subsequently discovered that the M. avium strain isolated at the initial evaluation was resistant to clarithromycin (minimal inhibitory concentration > 32 μg/mL).
After initiating the chemotherapy, the patient's condition stabilized for about 6 months, with reduced respiratory symptoms and a decreased AFB burden in the sputum. The lobar consolidation was ameliorated in the first 3 months and remained stable until 6 months, as observed on the chest CT scan (Fig. 2). However, the clinical, radiological, and microbiological test results worsened thereafter, and a reevaluation was needed. The screening for fungal infection was negative. In addition, the result of a DNA-DNA hybridization test of the cultured colonies was positive only for M. avium among the 20 comprehensive types of Mycobacterium species. Finally, bronchoscopy was performed, and lung biopsy specimens showed granulomatous inflammation and caseous necrosis along with Langhans cells. While the result of Ziehl-Neelsen staining was histologically negative, the AFB smear and culture of the lavage fluid were positive for M. avium. None of the histological features of organizing pneumonia, vasculitis, or neoplasm was evident. Based on these findings, we conclusively reconfirmed the diagnosis of pulmonary MAC infection.
Discussion
The common radiological feature of pulmonary MAC infection in immunocompetent patients is a mixture of basic patterns: FC and NB. FC-MAC shows CT-based findings similar to those of pulmonary tuberculosis, with nodules and cavities predominantly presenting in the upper lobes; NB-MAC involves centriacinar nodules and bronchiectasis in the middle lobe/lingular segment. While a consolidation was detectable on a CT scan in 11% of the cases [3], it is usually small in size and accompanies a basic pattern. The present case is unique because the MAC infection manifested as a large lobar consolidation without a basic pattern. The slow-growth tendency and distinct host immune response may explain the unlikelihood of developing a large caseous pneumonia in pulmonary MAC infection. MAC is known to cause various diseases in subjects with different immune status, including chronic pulmonary infection, disseminated systemic infection, immune reconstitution syndrome, and hypersensitivity pneumonitis, implying that host immunity plays a pivotal role in determining disease phenotypes. A recent report indicated that non-human immunodeficiency virus immunocompromised patients with pulmonary MAC infection were likely to have larger consolidations than immunocompetent patients [4]. Although the patient was immunocompetent with regard to general examination, we could not deny the possibility that a hidden immunocompromised status might have contributed to the infection.
Owing to the unusual radiological findings that indicated a pulmonary MAC infection, we carefully excluded the conditions related to co-infection with other pathogens and accompanying non-infectious diseases. Various pathogens are known to chronically co-infect with MAC, including M. tuberculosis, non-MAC NTM, Aspergillus species, and Nocardia species. Patients with organizing pneumonia occurring secondary to NTM infections were reported to demonstrate similar radiological findings [5]. Thus, we performed a histological assessment of lung specimens and confirmed MAC infection without the coexistence of organizing pneumonia or other non-infectious diseases that potentially manifest a similar radiological finding (vasculitis, lymphoma, or invasive mucinous adenocarcinoma).
In summary, we report a unique case of pulmonary MAC infection with lobar caseous pneumonia. This case emphasizes the importance of awareness of lobar pneumonic consolidation as a rare radiological finding in pulmonary MAC infection.
Disclosure Statements
No conflict of interest declared. Appropriate written informed consent was obtained for publication of this case report and accompanying images.
Dictionary learning compressed sensing reconstruction: pilot validation of accelerated echo planar J-resolved spectroscopic imaging in prostate cancer
Objectives This study aimed at developing dictionary learning (DL) based compressed sensing (CS) reconstruction for randomly undersampled five-dimensional (5D) MR Spectroscopic Imaging (3D spatial + 2D spectral) data acquired in prostate cancer patients and healthy controls, and to test its feasibility at 8x and 12x undersampling factors. Materials and methods Prospectively undersampled 5D echo-planar J-resolved spectroscopic imaging (EP-JRESI) data were acquired in nine prostate cancer (PCa) patients and three healthy males. The 5D EP-JRESI data were reconstructed using DL and compared with gradient sparsity-based Total Variation (TV) and Perona-Malik (PM) methods. A hybrid reconstruction technique, Dictionary Learning-Total Variation (DLTV), was also designed to further improve the quality of reconstructed spectra. Results The CS reconstruction of prospectively undersampled (8x and 12x) 5D EP-JRESI data acquired in prostate cancer and healthy subjects was performed using DL, DLTV, TV and PM. It is evident that the hybrid DLTV method can unambiguously resolve 2D J-resolved peaks including myo-inositol, citrate, creatine, spermine and choline. Conclusion Improved reconstruction of the accelerated 5D EP-JRESI data was observed using the hybrid DLTV. Accelerated acquisition of in vivo 5D data with as few as 8.33% of the samples (12x) corresponds to a total scan time of 14 min, as opposed to a fully sampled scan that needs a total duration of 2.4 h (TR = 1.2 s, 32 kx × 16 ky × 8 kz, 512 t2 and 64 t1). Supplementary Information The online version contains supplementary material available at 10.1007/s10334-022-01029-z.
Introduction
Prostate cancer (PCa) is the second leading cause of cancer mortality and the most common cancer in men, with an estimated 248,530 new cases of prostate cancer diagnosed in 2021 [1,2]. PCa is typified by an unpredictable clinical course; therefore, early detection and accurate staging are paramount not only for identifying patient-specific therapies but also for an early, accurate assessment of the aggressiveness of localized disease. Although serum prostate-specific antigen (PSA) is a very sensitive test during early diagnosis, its specificity for cancer diagnosis is low [3,4]. Compared with transrectal ultrasound imaging, multiparametric (mp) MRI has been used to identify suspected masses in the prostate; mp-MRI includes T2-weighted imaging, diffusion-weighted images (DWI), apparent diffusion coefficient (ADC) maps derived from DWI and dynamic contrast-enhanced (DCE) MRI [5-8].
Three decades ago, 1H and 31P magnetic resonance spectroscopy (MRS) of the human prostate was first performed by Thomas et al. using a trans-rectal probe, demonstrating the ability of trans-rectal MRS to characterize proton and phosphorylated metabolites of normal, hyperplastic, and malignant prostates [9,10]. Biochemical and histochemical studies have confirmed that the normal human prostate has a high level of citrate (Cit), which is greatly reduced, while choline (Ch) increases, in the malignant prostate [9,11].
Magnetic Resonance Spectroscopic Imaging (MRSI), also known as Chemical Shift Imaging (CSI), facilitates the acquisition of spectral data from multiple regions of the prostate from either a selected volume of interest (VOI) or multiple slices [12,13]. The total duration is very long since conventional MRSI uses phase-encoding steps to encode the spatial dimensions; however, elliptically weighted or average-weighted schemes have been used to shorten the total duration [14]. MRSI is a valuable technique for assessing the extent and aggressiveness of primary and recurrent PCa, and the thresholded (choline + creatine)/citrate images, when overlaid in color on T2W images, can estimate the spatial extent of PCa and benign prostatic hyperplasia (BPH). Kurhanewicz and co-workers assessed the efficacy of combined MRI and three-dimensional 1H MRSI in the detection and localization of PCa [15-17]. The aggressiveness of prostate cancer was evaluated by Scheenen and co-workers using MRSI [18]. Carroll and co-workers summarized findings from TRUS-guided biopsy, MRI and MRSI: in contrast to TRUS, which detected lesions in only 35% of 114 patients, 79 out of 114 patients showed an anatomic lesion characteristic of cancer on MRI and MRSI [19]. Using MRSI, DWI and MRSI + DWI, Hricak and co-workers recently developed statistically based rules for identifying cancer in the peripheral zone (PZ) [20]. Correlation of MRSI and MRI with molecular markers was demonstrated by Shukla-Dave et al. [21]. A multi-institutional prostate cancer study evaluated the incremental benefit of combined endorectal MRI and MRSI, as compared with endorectal MRI alone, for sextant localization of peripheral zone (PZ) prostate cancer [22]. Other multi-center studies conducted further validation of prostate cancer localization and aggressiveness [23,24].
Even though k-space-weighted and average-weighted schemes have been used to shorten the total duration of MRSI, echo-planar spectroscopic imaging (EPSI) can further accelerate the total acquisition duration [25,26]. Chen et al. showed high-speed 3 T spectroscopic imaging of the prostate using flyback echo-planar encoding [27]. Adding a 2nd spectral dimension to MR spectroscopy helps to disperse the spectrum better. However, acquisition of MRSI after adding the 2nd spectral encoding can increase the total acquisition time significantly. Furuyama et al. applied compressed sensing (CS) reconstruction to an accelerated J-resolved spectroscopic imaging acquisition in healthy human prostates [28]. Nagarajan et al. demonstrated the detection of Cit, Ch/Cr and Spm in prostate cancer using accelerated echo-planar J-resolved spectroscopic imaging (EP-JRESI), where a single slice was localized and the efficiency of non-linear reconstruction using total variation (TV) and maximum entropy (MaxEnt) was compared [29].
Dictionary learning (DL) is another approach for adaptive sparse representation of signals in a CS framework [30]. In medical imaging, DL has been widely used for reconstruction in different areas like MRI, PET and CT [31-37]. DL in a CS reconstruction involves a process of learning dictionaries from training data and then generating a sparse representation using the learned dictionaries in an iterative manner [34]. The overcomplete set of basis functions learned by DL captures the underlying features of a signal devoid of noise, such that the learned set of basis functions can achieve a higher sparsity level for that particular signal [36,37]. Ravishankar et al. proposed a DL-MRI scheme based on the K-SVD algorithm for learning the sparsifying transform for MRI reconstruction [34,35]. It has since been one of the most popular methods to train dictionaries for MRI reconstruction among other methods like Method of Optimized Directions (MOD), Online Dictionary Learning (ODL) and Recursive Least Squares (RLS) [38-40].
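As a concrete illustration of the K-SVD training loop just described, the following minimal NumPy sketch sparse-codes training signals with orthogonal matching pursuit (OMP) and then updates each dictionary atom from a rank-1 SVD of the residual it should explain. This is a pedagogical sketch, not the implementation of [34,35]: the function names, the fixed sparsity level k, and the single training pass are assumptions made for brevity.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x using at most k atoms of D."""
    residual, idx = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))    # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)  # refit on chosen atoms
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def ksvd_pass(D, X, k):
    """One K-SVD pass: sparse-code every column of X over D, then update each
    atom (and its coefficients) from a rank-1 SVD of the residual it explains."""
    codes = np.stack([omp(D, X[:, j], k) for j in range(X.shape[1])], axis=1)
    for a in range(D.shape[1]):
        used = np.nonzero(codes[a])[0]              # signals that use atom a
        if used.size == 0:
            continue
        E = X[:, used] - D @ codes[:, used] + np.outer(D[:, a], codes[a, used])
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, a], codes[a, used] = U[:, 0], S[0] * Vt[0]
    return D, codes

# Toy usage: 20 random unit-norm atoms trained on 100 signals of length 16
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 20))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((16, 100))
D, codes = ksvd_pass(D, X, k=3)
```

In a DL-MRI setting, the columns of X would be vectorized patches drawn from the current image estimate, and the training pass would alternate with a k-space data-consistency step.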
In this work, we have used a five-dimensional (5D) MR Spectroscopic Imaging (MRSI) (3D spatial + 2D spectral) technique, which generates 2D spectra from multiple spatial locations covering a larger volume of prostate tissue in a single scan. One of the major challenges of this technique is the increased scan time, which can lead to more motion-related artifacts. A straightforward solution to this problem is an accelerated acquisition. However, with higher undersampling rates, the reconstruction performance also becomes a major contributing factor to the overall spectral quality. Therefore, the main aim of this study was to implement a more sophisticated reconstruction technique, such as dictionary learning, for MRSI and to assess its performance at higher undersampling factors compared with more conventional CS reconstruction techniques such as total variation, in order to test the feasibility of higher undersampling factors for prostate MRSI.
Hence, we have implemented and evaluated the performance of a hybrid DLTV reconstruction, where DL iteratively trains dictionaries on TV-filtered data using K-SVD, on a set of non-uniformly sampled (NUS) 5D EP-JRESI data acquired in prostate cancer patients and healthy controls. Furthermore, we have compared the performance to using DL alone, TV and Perona-Malik (PM) reconstruction techniques [41,42]. Undersampling rates of 8x and 12x were imposed prospectively along two spatial (ky, kz) and one spectral (t1) dimensions, while the remaining spatial (kx) and spectral (t2) dimensions were fully encoded using an echo-planar readout. Multiple undersampling rates were retrospectively imposed on fully encoded 5D EP-JRESI phantom data, and reconstructions using DL, PM, TV and DLTV were investigated. Relative metabolite levels were quantified using the prior-knowledge based fitting (ProFit) algorithm [43].
Human subjects
Nine PCa patients (mean age of 63 years) and three healthy males (mean age of 42.7 years) were investigated between May 2013 and June 2015. Gleason scores in the patients varied between 6 and 7. Prostate-specific antigen (PSA) levels among the PCa patients varied from 3 to 6.9 ng/mL. The PCa patients and healthy subjects were scanned on a 3 T Siemens (Siemens Medical Solution, Erlangen, Germany) MRI scanner, with an endorectal "receive" coil for the patients and an external phased-array "receive" coil for the healthy subjects. The protocol combining MRI and MRS was performed at least 8 weeks after the transrectal ultrasound-guided sextant biopsy in the PCa patients. The entire protocol was approved by the Institutional Review Board (IRB), and informed consent was obtained from each subject.
MRI and MRSI
All patients and healthy subjects were imaged in the supine (feet-first) position. Axial images were oriented to be perpendicular to the long axis of the prostate, which was guided by the sagittal images. Axial, coronal, and sagittal T2-weighted (T2W) turbo spin-echo images were recorded using the following parameters: repetition time/echo time (TR/TE), 3850-4200/96-101 ms; slice thickness, 3 mm; field of view, 20 × 20 cm2; echo train length (ETL), 13; and data matrix, 320 × 256.
A maximum echo-based 5D EP-JRESI sequence [44], as shown in Fig. 1a, was used, and the volume-of-interest (VOI) was localized using a semi-LASER PRESS module with five slice-selective radio-frequency (RF) pulses, the first 90° RF pulse followed by two pairs of adiabatic full passage RF pulses [45] (Fig. 1a). The acquisition parameters for the 5D EP-JRESI were: TR/TE/Avg = 1200 ms/41 ms/1, 16 ky × 8 kz phase encoding steps, field of view (FOV) = 16 × 16 × 12 cm3, 512 complex points (t2) with an F2 spectral bandwidth of 1190 Hz along the detected spectral dimension. For the indirect (2nd) dimension (F1), 64 t1 increments with a spectral bandwidth of ±250 Hz were used. The spatial resolution in terms of cubic voxels with nominal dimensions was calculated from the FOV and the matrix size as 1 × 1 × 1.5 cm3 [46]. Since the EPSI readout simultaneously acquires one spatially encoded dimension (kx) and one temporal dimension (t2), we imposed non-uniform undersampling (NUS) along the remaining (ky-kz-t1) dimensions. NUS rates of 8x and 12x were imposed as shown in Fig. 1b, c (example non-uniform sampling patterns based on an exponentially decaying probability density; white and black represent acquired and unacquired k-space locations). Combination of the spatial and spectral dimensions in the reconstruction is expected to introduce more sparsity than the undersampled spatial dimensions alone. Two sets of data were collected: one water suppressed (WS) scan with a total scan time of 21 min (8x) or 14 min (12x), and a second non-water-suppressed (NWS) scan using one average and one t1 increment (approx. 2.6 min). The NWS scan was used for eddy current phase correction and coil combinations. WET suppression was used for the global suppression of water [47]. The average full width at half maximum (FWHM) of the water peak was 30.13 ± 9.29 Hz over the localized VOI including the cancer and non-cancer locations.
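For illustration, the sketch below generates a random (ky, kz, t1) sampling mask whose acceptance probability decays exponentially along t1, keeping roughly one point in `accel` on average. The decay constant and the exact density shape are illustrative assumptions, not the exact design used for the scans reported here.

```python
import numpy as np

def variable_density_mask(ny, nz, nt1, accel, decay=4.0, seed=0):
    """Random (ky, kz, t1) sampling mask with an exponentially decaying
    acceptance probability along t1; keeps ~1/accel of the points on average.
    The decay constant and density shape are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    t1 = np.arange(nt1) / nt1
    pdf = np.exp(-decay * t1)                 # sample early t1 increments densely
    pdf = pdf / pdf.sum() * (nt1 / accel)     # normalize mean probability to 1/accel
    pdf = np.clip(pdf, 0.0, 1.0)
    return rng.random((ny, nz, nt1)) < pdf[None, None, :]

mask = variable_density_mask(16, 8, 64, accel=12)
print(mask.mean())    # fraction acquired, ~1/12 (i.e., ~8.33% of the samples)
```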
Data analysis and working principles of CS techniques in MRSI
Non-linear reconstructions were performed on the non-uniformly undersampled data using DL, DLTV, TV and PM. Random sampling causes the undersampling artifacts to appear noise-like. When the signal to be reconstructed has a sparse representation, CS theory holds that the artifact-free signal can be recovered by approximating it using a few of its sparse coefficients in a non-linear fashion [48-50]. The presence of aliasing artifacts decreases the sparsity of the data and, therefore, the sparse approximation enables recovery of the underlying signal devoid of artifacts. While the TV and PM reconstruction techniques assume data sparsity in the finite difference representation, DL learns an overcomplete set of basis functions, or dictionary, that can represent the data in sparse form. DLTV, on the other hand, assumes sparsity with respect to both the finite difference representation and the learned basis.
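The recovery principle can be demonstrated on a toy one-dimensional problem: a signal that is sparse in the image domain is randomly undersampled in k-space and recovered by alternating a sparsity-enforcing soft threshold with a data-consistency step. This is only a didactic sketch of the CS mechanism, not the reconstruction pipeline of this study; the sampling fraction and the threshold schedule are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # 8-sparse signal
mask = rng.random(n) < 0.35                                   # keep ~35% of k-space
y = np.fft.fft(x) * mask                                      # undersampled data

m = np.real(np.fft.ifft(y))                                   # zero-filled estimate
for it in range(200):
    thr = 0.1 * 0.98 ** it                                    # decaying threshold
    m = np.sign(m) * np.maximum(np.abs(m) - thr, 0.0)         # enforce sparsity
    k = np.fft.fft(m)
    k[mask] = y[mask]                                         # data consistency
    m = np.real(np.fft.ifft(k))

print(np.linalg.norm(m - x) / np.linalg.norm(x))   # small relative error expected
```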
Perona-Malik and total variation
The feasibility of PM- and TV-based CS reconstruction is due to the fact that the MRSI data has sparse gradients. Please note that the term 'gradient' from here on refers to the directional change in the intensity of the MRSI data in the image/spectral domain and is not to be confused with the variation in the magnetic field. Gradients with larger magnitudes are generally representative of the signal of interest, whereas lower magnitudes usually represent noise. Hence, both PM and TV attempt to separate the signal from undersampling artifacts based on its gradient magnitude. While TV denoises the signal by minimizing the l1 norm of the gradients, PM achieves the same by minimizing the Lorentzian error norm [51,52]. The conventional PM denoises the signal m by diffusing it over a small time step as

m_{t+1} = m_t + τ div(g(|∇m_t|) ∇m_t),    [1]

where τ is a regularization parameter (step-size) controlling the strength of denoising, div is the divergence operator, and g(|∇m|) is the diffusivity function. The diffusivity function defines a spatially varying weight that controls the extent of smoothing, such that stronger smoothing is performed in the regions where the gradient magnitudes are smaller. The Lorentzian error norm is minimized when the choice of diffusivity function is

g(|∇m|) = 1 / (1 + |∇m|^2/σ^2),    [2]

where σ is a gradient threshold parameter that separates the gradient magnitudes of signal from noise. The formulation becomes equivalent to TV when g(|∇m|) = 1/|∇m|.
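A minimal 2D sketch of one such diffusion step, assuming a simple forward-difference gradient and the Lorentzian diffusivity above (the step size and threshold values are placeholders):

```python
import numpy as np

def pm_step(m, tau=0.1, sigma=1.0):
    """One Perona-Malik diffusion step on a 2D array m (illustrative)."""
    # Forward differences (gradient), with edge replication
    gx = np.diff(m, axis=0, append=m[-1:, :])
    gy = np.diff(m, axis=1, append=m[:, -1:])
    grad_mag2 = gx**2 + gy**2
    g = 1.0 / (1.0 + grad_mag2 / sigma**2)   # Lorentzian diffusivity, Eqn. [2]
    # Divergence of g * gradient (backward differences)
    fx, fy = g * gx, g * gy
    div = np.diff(fx, axis=0, prepend=fx[:1, :]) + np.diff(fy, axis=1, prepend=fy[:1, :])
    return m + tau * div                      # Eqn. [1]

noisy = np.random.default_rng(0).normal(size=(64, 64))
denoised = pm_step(noisy)
```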
In a compressed sensing framework, PM operates by performing denoising followed by a data consistency step in each iteration, so as to minimize a cost function of the form

min_m Σ_i log(1 + |∇m|_i^2/σ^2)  subject to  ||F_u m − s||_2^2 < ε,    [3]

where the sum runs over i = 1, …, N and N is the number of elements in m [41,51]. The term s is the acquired undersampled k-space data, with zeros at the unacquired locations. F_u computes the forward and inverse Fourier transforms of m in the image and temporal domains, respectively, and then sets the values at the unacquired locations of k-space to zero. The constraint in Eqn. [3] ensures that the deviation of the reconstructed data from the acquired data at the sampled locations is restricted.
A good value of σ can be estimated from the data as the mean/median absolute deviation of the gradients, or by using the noise estimator described by Canny [53]. We used the mean absolute deviation (MAD) of the gradients, as reported in [41], to estimate σ in this work. While TV is sensitive to the choice of its regularization parameter, the presence of σ helps PM to remain stable over a wider range of values of the step size τ, with a typical choice being τ = 0.1 for PM [41]. This gives a good compromise between reconstruction quality and reconstruction time. Smaller values of τ generally do not give a significant improvement in reconstruction quality but increase the number of iterations required for convergence [41].
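A sketch of this estimate, assuming the MAD is taken over the gradient magnitudes of the current image estimate:

```python
import numpy as np

def estimate_sigma(m):
    """Gradient threshold via mean absolute deviation of the gradients (illustrative)."""
    gx = np.diff(m, axis=0, append=m[-1:, :])
    gy = np.diff(m, axis=1, append=m[:, -1:])
    grad = np.sqrt(gx**2 + gy**2)
    return np.mean(np.abs(grad - np.mean(grad)))  # mean absolute deviation
```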
The TV-based reconstruction is formulated as a cost function minimization of the form

min_m ||∇m||_1  subject to  ||F_u m − s||_2^2 < ε.    [4]

Because solving this in a modified Split-Bregman framework reduces the sensitivity to the regularization parameter, as reported in [28,29], we used the Split-Bregman algorithm for the CS-TV reconstruction in this work, as described in [44]. CS-PM was implemented as described in [41]. In both cases, the choice of ε controls the fidelity to the values at the sampled locations of k-space. Larger values of ε allow the reconstruction to deviate more from the known k-space samples at the acquired locations and help to minimize the noise by smoothing. A strict data fidelity constraint of F_u m = s can lead to noisy reconstructions when the noise level in the acquired data is high. Ideally, ε should be set at the level of noise in the acquired data. In the case of PM, the adaptation of σ helps to minimize noise in the reconstructed data. Therefore, a strict data fidelity is used and σ is adaptively chosen in each iteration using the MAD of the gradients.
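A minimal sketch of the shared denoise-then-enforce-consistency structure of CS-PM (and, with a TV denoiser swapped in, CS-TV), assuming a single 2D slice, a Cartesian FFT sampling model, and the pm_step and estimate_sigma helpers sketched above:

```python
import numpy as np

def cs_pm_reconstruct(s, mask, n_iter=50, tau=0.1):
    """CS reconstruction alternating PM denoising with strict data consistency.

    s    : undersampled k-space with zeros at unacquired locations
    mask : boolean array, True at acquired k-space locations
    """
    m = np.fft.ifft2(s)                        # zero-filled starting estimate
    for _ in range(n_iter):
        sigma = estimate_sigma(np.abs(m))      # adaptive gradient threshold (MAD)
        m = pm_step(m.real, tau, sigma) + 1j * pm_step(m.imag, tau, sigma)
        k = np.fft.fft2(m)
        k[mask] = s[mask]                      # strict data consistency at samples
        m = np.fft.ifft2(k)
    return m
```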
Dictionary learning and hybrid dictionary learning-total variation
The DL-based CS reconstruction learns an overcomplete set of basis functions, or dictionary, that captures the underlying features of a signal devoid of noise, such that the learned dictionary can achieve a higher sparsity level for the particular signal of interest [35,36]. When the aliasing due to non-uniform undersampling has noise-like properties, trained dictionaries become capable of removing the aliasing artifacts and thereby reconstructing the underlying signal.
One of the drawbacks of using a fixed basis (as in the case of finite difference representation) is that such a basis might not be universally optimal for all datasets [35]. Since DL works by learning a basis which is specific to the data under reconstruction, it has the potential to find a better sparse representation for it, which can in turn improve the quality of the reconstructed data in a CS framework.
Basic elements in a sparsifying dictionary are called atoms, whose linear combinations can represent a given signal in sparse form. We use one of the most popular approaches to train such dictionaries, the K-SVD algorithm [34]. In this method, the dictionary is updated iteratively, atom by atom. In each iteration, the sparse coefficients of the signal are updated based on the current estimate of the dictionary, and then the dictionary atoms are updated to best fit the current sparse representation of the signal. The algorithm was implemented as described in [54], based on the MATLAB codes for the same, publicly available at [55]. DL is known to have a slow reconstruction speed compared to TV and PM, due to the training of dictionaries in each iteration. Therefore, we have used the fast iterative shrinkage-thresholding algorithm (FISTA) to accelerate the reconstruction [56].
Further acceleration was achieved by operating the 3D-DL reconstruction in a customized 3D space formed by stacking the direct spectral dimension (F_2), as shown in the workflow in Fig. 2. This approach accelerates the reconstruction by training a single dictionary for F_2, instead of having to learn separate dictionaries for each point along F_2. The variables y, z and F_1 represent the Fourier transforms of the two phase-encoded spatial dimensions k_y and k_z and of the indirect spectral dimension t_1; x represents the Fourier transform of the fully sampled readout dimension k_x.
The acquired data were first zero-filled and Fourier transformed before being rearranged into n_x groups, as shown in Fig. 2. Overlapping 3D blocks were then extracted from a regular grid on the real and imaginary parts of the readout points, x(i) ∀ i = 1, 2, …, n_x, which were used by the K-SVD algorithm to train the dictionary [54]. Once the dictionary was learned, an orthogonal matching pursuit (OMP) algorithm was used to sparsely code the real and imaginary parts of x(i) independently [57]. Then the data consistency step was enforced by correcting the values at the locations of the acquired k-space samples in the reconstructed data. This process was repeated to find the set of updated dictionaries and the subsequent set of sparse coefficients for the data, followed by a data consistency step in each iteration, until convergence.
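The sketch below mirrors one iteration of this loop on a single 2D complex image standing in for one point along x. For brevity, scikit-learn's MiniBatchDictionaryLearning (with OMP coding) substitutes for the K-SVD implementation of [54]; the patch size, dictionary size and sparsity level are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dl_cs_iteration(m, s, mask, patch=6, n_atoms=64, sparsity=4):
    """One DL reconstruction iteration: train a dictionary, sparse-code patches,
    rebuild the image, then enforce consistency with acquired k-space samples."""
    out = []
    for comp in (m.real, m.imag):              # real/imaginary parts coded independently
        patches = extract_patches_2d(comp, (patch, patch))
        flat = patches.reshape(len(patches), -1)
        dico = MiniBatchDictionaryLearning(
            n_components=n_atoms, transform_algorithm="omp",
            transform_n_nonzero_coefs=sparsity, batch_size=256, max_iter=10,
        ).fit(flat)                            # dictionary update (K-SVD stand-in)
        code = dico.transform(flat)            # OMP sparse coding
        approx = (code @ dico.components_).reshape(patches.shape)
        out.append(reconstruct_from_patches_2d(approx, comp.shape))
    m = out[0] + 1j * out[1]
    k = np.fft.fft2(m)
    k[mask] = s[mask]                          # data consistency step
    return np.fft.ifft2(k)
```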
The associated cost function minimization at a point x(i), ∀ i = 1, 2, …, n_x, is of the form

min over m_x(i), D, α of  Σ_j ( ||ℝ_j R(m_x(i)) − D α_j^R||_2^2 + ||ℝ_j I(m_x(i)) − D α_j^I||_2^2 ),  with each α_j constrained to be sparse, subject to  ||F_u m_x(i) − s_x(i)||_2^2 < ε,    [5]

where s_x(i) and m_x(i) are the acquired and the reconstructed custom-3D k-space data, respectively, at every point in x, and ε controls the consistency of the reconstructed data with the acquired k-space samples. D is a real-valued dictionary that can sparsely represent both the real and imaginary components of s_x(i), and is adaptively learned. ℝ is an operator that extracts 3D blocks from the customized 3D space, and j is the patch number. α is the sparse representation of the extracted blocks. R and I denote the real and imaginary components of the complex data, respectively.
The DLTV is a combination of DL and TV that sparsely approximates the data using both learned dictionaries and finite difference representations, thereby further increasing the sparsity of the data [58].
This modifies the cost function in Eqn. [5] by adding a gradient-sparsity penalty to the objective, of the form

μ ( ||∇R(m_x(i))||_1 + ||∇I(m_x(i))||_1 ),    [6]

where μ controls the gradient sparsity. A comparison of the DL and DLTV workflows is shown by the yellow and blue arrows in Fig. 2. The main difference in the workflow is that DLTV trains dictionaries from TV-filtered data, instead of directly learning the dictionaries from the zero-filled and Fourier-transformed data, as in DL. The reconstruction parameters were chosen empirically for an efficient overall reconstruction of the undersampled phantom, as follows: a fully sampled phantom dataset was retrospectively undersampled at different sampling rates and reconstructed using DLTV; the DLTV parameters were then chosen for each undersampling level to minimize the reconstruction error, and subsequently used for the in-vivo reconstructions. With the acceleration scheme used in this paper for DL, it is observed that DL converges in around half the number of iterations as TV, despite the fact that each iteration of DL is much slower than that of TV. Hence, the DLTV reconstruction framework was designed with this factor in mind, such that the overall convergence rate, in terms of the number of iterations, is approximately the same for both the TV and DL components of DLTV.
Error metric evaluation
The following normalized root mean squared error (nRMSE) measure is used to evaluate the performance of the different reconstruction techniques compared in this work:

nRMSE = sqrt( (1/N_s) Σ |data_GT − data_R|^2 ) / max(|data_GT|),    [7]

where data_GT is the fully sampled ground truth, data_R is the reconstructed data and N_s is the number of elements in data_R.
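A direct transcription of this metric, under the assumption (not stated explicitly in the text) that the RMSE is normalized by the peak magnitude of the ground truth:

```python
import numpy as np

def nrmse(data_gt, data_r):
    """Normalized RMSE between ground truth and reconstruction (Eqn. [7])."""
    rmse = np.sqrt(np.mean(np.abs(data_gt - data_r) ** 2))
    return rmse / np.max(np.abs(data_gt))
```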
Furthermore, the 2D J-resolved spectra were individually assessed both qualitatively (by inspecting the difference in recovered signal intensity) and quantitatively (by prior-knowledge fitting of the spectra using the various metabolites reported in prostate tissues) to determine the quality of each reconstruction, particularly the ability of each method to recover the diagonal peaks and cross-peaks of the main prostate metabolites.
The fitting of metabolites is based on the ProFit algorithm, which fits the set of simulated spectra and measures the quality of the fit by comparing the creatine 3.9 (Cr3.9) to creatine 3.0 (Cr3.0) ratio. This ratio should ideally be 1, since the number of protons is already considered in the basis-set creation for Cr3.9 and Cr3.0. Results with higher Cr3.9/Cr3.0 ratios can be excluded due to the lack of an acceptable fit.
Results
The reconstruction performances of PM, TV, DL and DLTV were studied using retrospectively undersampled phantom data and prospectively undersampled 5D EP-JRESI data, as described in the following sub-sections. DLTV reconstructions of 4D EP-JRESI datasets are also included in the Supplementary section. Figure 3c shows a typical 2D J-resolved spectrum, generated from a voxel of phantom data with metabolites at physiological concentrations, acquired using the fully sampled 5D EP-JRESI sequence. The corresponding volume localization images (Fig. 3b) and metabolite maps (Fig. 3a) indicate that 4 out of 8 slices are within the volume of interest (VOI) represented by the white box, either fully or partially. The metabolite maps were obtained by integrating the corresponding 2D metabolite peaks. Figure 3d shows the fit and residual of the spectrum using ProFit, and the respective Cramér-Rao lower bounds (CRLB) of the fit for each metabolite. The fit error is minimized within the spectral range indicated by the white box in the spectrum. The CRLB is used as an indicator of the minimum error for the estimated parameters, where lower values indicate a more reliable fit. Metabolites with CRLB < 20% are shown in parentheses, including Cr, Spm, Cit, Glx (Glu + Gln), tCh (Ch + pCh), mI and sI. The CRLB for Tau was higher due to significant overlap with other metabolites. mI/Ch, Ch/Spm and sI/Tau in this figure, as well as in all the remaining figures, indicate the overlapping resonances and not their ratios.
Phantom
This fully sampled prostate phantom data was retrospectively undersampled at 2x, 4x, 8x, 12x and 16x accelerations, corresponding to 50%, 25%, 12.5%, 8.33% and 6.25% acquired k-space samples, respectively. Reconstruction was then performed using DL, TV, DLTV and PM at each undersampling factor. The reconstruction error relative to the fully sampled data (ground truth) was measured using the nRMSE defined in Eqn. [7], and the values are listed in Table 1. Section (a) in the table shows how well each method reconstructs the data within the entire F_2-F_1 plane. Section (b) shows the error in reconstruction within the VOI in the range of 1 to 4.5 ppm and -50 to +50 Hz in the F_2 and F_1 dimensions, respectively. The nRMSE values in section (a) show similar performance for DL, TV, PM and DLTV at all acceleration factors. In section (b), TV, PM and DLTV show similar performance from 2x to 8x accelerations. However, on closer observation, we can see that DLTV has the lowest nRMSE values (bolded numbers). Both TV and DLTV show lower nRMSE values compared to PM and DL at 12x and 16x accelerations. The values for DL are higher than those for PM and TV at lower undersampling factors, and become comparable at higher undersampling levels. Sections (c) and (d) in the table report the nRMSE values in the range of 2.2 to 2.9 ppm, containing the citrate peak, and from 2.9 to 3.3 ppm, containing peaks due to creatine, choline, spermine and taurine. While the nRMSE measures across the different acceleration factors follow a similar trend as previously mentioned, the error measures from 2.9 to 3.3 ppm appear to be higher than those from 2.2 to 2.9 ppm.
The fact that the SNR in the former region is lower than in the latter suggests an overall better reconstruction performance in the regions with high SNR.

In vivo

The ProFit fitting of the in-vivo spectra is shown in Fig. 5 (fitted spectra of the voxel shown in Fig. 4 for the DLTV, TV, PM and DL reconstructions; CRLB < 25% is shown in the insert of each fit). While the metabolite ratios were not significantly different between these reconstruction methods, better reliability of fitting, based on CRLB, for a larger number of metabolites was observed with DLTV. ProFit fitted spectra from another malignant location in a 74-year-old patient (Gleason score of 3 + 3), reconstructed using the different approaches, are shown in Fig. 6. A better fit of the metabolites in terms of CRLB was achieved with the DLTV reconstruction. In addition to Cit (5%), Ch (4%), Spm (11%), Cr3.0 (20%) and Cr3.9 (11%), mI and Tau were fitted with higher CRLB (> 25%). The results of quantitation comparing metabolite ratios in healthy and malignant locations are shown in Fig. 7. Healthy voxels were selected from the healthy volunteers, and the malignant voxels were selected based on the anatomical images combined with the biopsy results in each patient. Even though the trends of increased Ch/Cr and mI/Cr, and decreased Glx/Cr, Cit/Cr and sI/Cr ratios agreed with previously reported ex-vivo HR-MAS studies [59,60], an overestimation of the Spm/Cr ratio was observed in malignant lesions [43]. Figure 8 shows the reconstruction performance at 12x acceleration. Only 8.33% of the k-space samples were collected from a 48-year-old PCa patient (Gleason score of 4 + 3) using an endorectal probe, and the remaining samples were estimated using PM, TV, DL and DLTV. The J-resolved 2D spectra (F_2: 2 to 3.6 ppm, F_1: -25 to +25 Hz) from voxels within the blue box are shown below the localization images, followed by the ProFit fitted spectra of voxel 1. The depleted citrate seen in voxel 1 indicates a cancerous location. The CRLB shows a better fit of the metabolites using ProFit when reconstructed using DLTV, with Cit (16%), Ch (3%), Spm (2%), Cr3.0 (6%) and sI (3%).

Extension of the proposed reconstruction using DLTV to the prospectively undersampled 4D EP-JRESI data is straightforward by using 2D dictionaries instead of 3D [35,54]. Additional figures showing the results of the DLTV reconstruction of undersampled 4D EP-JRESI datasets are included in the supplementary section, comparing the reconstruction performance of TV and DLTV, as opposed to TV and maximum entropy reported in [29].
Discussion
The application of the DL and DLTV reconstruction techniques to non-uniformly undersampled 5D EP-JRESI data was studied and evaluated in comparison with the gradient-sparsity-based techniques of PM and TV, using both phantom and in-vivo datasets at various acceleration factors. The theory of CS requires that the data have an associated sparse representation in order to reconstruct it from the non-uniform samples collected at a sub-Nyquist rate. Higher sparsity helps to better recover the data in a CS framework. Therefore, we have compared a finite difference based sparse representation, a DL based sparse representation, and a combination of both (DLTV), for CS reconstruction at undersampling factors ranging from 2x to 16x.
While the chemical shift misregistration is expected to be reduced with the semi-LASER localization, a less effective outer volume suppression or the effect of Gibbs ringing due to low spatial resolution can cause contamination of voxels. The effect of Gibbs ringing can be reduced by increasing the spatial resolution, but this reduces the sensitivity of the measurement and hence leads to a trade-off with SNR. Similarly, one could apply k-space filters such as Fermi or Hamming filter functions [13] to reduce the ringing effect during post-processing, at the cost of blurring (increased effective voxel size). The figures shown do not include the application of such filters.
The ability to resolve 2D J-resolved peaks, including mI, sI, Tau, Cit, Cr, Spm, Ch and Glx, in multiple voxels, as well as the ability to quantify individual metabolite ratios with respect to Cr3.0 using ProFit, is evident in Figs. 3-8. This is in contrast to a conventional 1D MRS analysis, which uses the (Cho + Cr)/Cit or (Cho + Spm + Cr)/Cit ratio [61].
nRMSE is a useful metric to measure the difference in performance between different reconstruction techniques in the retrospectively undersampled phantom. Table 1 shows the better performance of DLTV in terms of nRMSE from 2x to 16x undersampling levels. In the absence of a ground truth, as in the case of in-vivo data, a comparison based on metabolite ratios or on the reliability of the ProFit-based fitting in terms of CRLB can be made. Since the average differences in metabolite ratios were in a similar range across all four reconstruction techniques in vivo, Figs. 4-6 and 8 also show the CRLB, which indicated an improved reconstruction performance of DLTV. In Fig. 6, for example, metabolites were fitted with CRLB < 30% for Cr3.9 (11%), Cr3.0 (20%), Ch (4%), Spm (11%), Cit (6%) and Tau (28%) in DLTV, as opposed to TV, which had only Cr3.9 (14%), Ch (6%), Spm (11%) and Cit (5%), PM, which had Cr3.0 (20%), Ch (3%) and Spm (8%), and DL, which had only Cr3.0 (22%), Ch (10%) and Spm (24%). As expected, the results of DLTV showed a reliable fit for metabolites which were individually picked up by either TV or DL.
DL builds a basis that can sparsely represent the data at hand, as opposed to the fixed basis used by TV and PM, leading to a better sparse representation. However, the effectiveness of DL also depends on the quality of the training data. The TV-filtered training data in DLTV helps it to find a better sparse representation, leading to an overall improved performance. A similar improvement may be achieved by combining PM and DL as well [42,62].
Potential drawbacks and challenges of DLTV
One of the main drawbacks of DLTV is the increased reconstruction time due to the learning of dictionaries in each iteration. However, it has been reported that a graphics processing unit (GPU) can significantly improve this reconstruction time [63], which we have not yet exploited in this work. Another challenge with the implementation of DLTV is the additional reconstruction parameters that need to be tuned compared to using TV or DL independently. The effect of the choice of block size in DL, for example, applies to DLTV as well: blocks that are too large lead to over-smoothing, while blocks that are too small retain noise. A similar effect can be expected from the choice of μ in Eqn. [6]: while higher values of μ can introduce over-smoothing, lower values can lead to noise retention, in both DL and DLTV. In addition to other DL parameters, such as the choice of dictionary size and the number of training samples, the strength of the TV denoising can also affect the quality of the reconstructed data in DLTV, as it has a direct effect on the quality of the training data.
It is observed that a set of parameters tuned for a particular undersampling factor using a phantom scan can be used for reconstructing in-vivo datasets at the same undersampling level. Choosing the parameters for the in-vivo reconstruction based on phantom scans is, however, only approximate. A more accurate choice of parameters requires an estimation of the sparsity level and the number of atoms in the dictionary, among other parameters such as the size of the blocks used to train the dictionaries, since these optimal values are data-dependent. The same is true for the optimal regularization parameters of the other reconstruction techniques; however, the increased number of parameters puts DLTV at a disadvantage. In this work, we have shown the feasibility of applying dictionary learning to 5D EP-JRESI reconstructions, with parameters estimated from phantom scans, giving performance on par with or better than the less sophisticated reconstruction approaches. This approach is promising, since further improvements can be achieved by using methods such as adaptive sparsity-level and dictionary-size estimation for dictionary learning, as reported in [64], which is the future direction of this work. Further optimization may be achieved by adjusting the strength of the TV filter in the DLTV reconstruction based on the validity of the piecewise-constant assumption of the first-order gradient in TV and on a priori information about the sparsity of the data in the finite difference representation. Another limitation of this pilot study is the limited number of human PCa and healthy subjects.
Conclusion
This work investigated the feasibility and performance of reconstructing undersampled 5D EP-JRESI data (8x and 12x) using multiple sparse representations within a CS framework. It is observed that higher undersampling rates for MRSI can be made feasible with a more sophisticated reconstruction technique such as the hybrid DLTV, which considers the data to be sparse with respect to both a learned basis and the finite difference-based representation. J-resolved MRSI is shown to be capable of reconstructing and clearly distinguishing the Cit, Ch, Spm, mI, Glx, sI, Tau and Cr peaks, as reported in ex-vivo HR-MAS studies [59,60]. 2D J-resolved spectroscopy combined with ProFit gives the individual metabolite ratios listed above, in contrast with 1D spectroscopy, where (Ch + Spm + Cr)/Cit is commonly used [61]. While further optimization is needed to make the reconstruction more computationally efficient and more sensitive to metabolites at lower physiological concentrations, this approach can facilitate bringing down the total scan time of the 5D EP-JRESI scan from 21 to 14 min by using a 12x undersampling factor instead of 8x, assuming a TR of 1.2 s and 64 t_1 increments to encode the second spectral dimension.
Deciphering Density Fluctuations in the Hydration Water of Brownian Nanoparticles via Upconversion Thermometry
We investigate the intricate relationship among temperature, pH, and Brownian velocity in a range of differently sized upconversion nanoparticles (UCNPs) dispersed in water. These UCNPs, acting as nanorulers, offer insights into assessing the relative proportion of high-density and low-density liquid in the surrounding hydration water. The study reveals a size-dependent reduction in the onset temperature of liquid-water fluctuations, indicating an augmented presence of high-density liquid domains at the nanoparticle surfaces. The observed upper-temperature threshold is consistent with a hypothetical phase diagram of water, validating the two-state model. Moreover, an increase in pH disrupts the organization of water molecules, similar to external pressure effects, allowing simulation of the effects of temperature and pressure on hydrogen bonding networks. The findings underscore the significance of the surface of suspended nanoparticles for understanding high- to low-density liquid fluctuations and water behavior at charged interfaces.
The two-state model and a hypothetical phase diagram of liquid water
The strongest evidence supporting the two-state model of water comes from the liquid-liquid phase transition (LLPT) hypothesis proposed by Poole, Sciortino, Essmann, and Stanley based on molecular dynamics simulations. 1 According to this hypothesis, a second critical point for water in the supercooled regime separates the LDL from the HDL in a discontinuous phase transition. 2-3 At the molecular level, the spacious LDL is formed when water molecules in the first hydration shell assemble into a more organized tetrahedral hydrogen-bonding network, while the more tightly connected HDL forms when an additional water molecule from the second hydration shell enters the first hydration shell, disrupting the LDL organization and creating a smaller and distorted hydrogen-bonding motif. 4 Direct observations of the LLPT are difficult due to the quick crystallization of supercooled liquid water, which only exists in one state below 215 K. 5-6 Nevertheless, recent studies of isothermal volume changes in diluted polyol and trehalose aqueous solutions under varying pressures have confirmed the existence of two states of water undergoing an LLPT between the metastable LDL and HDL, 7-8 corroborating the hypothesis that pure water undergoes an LLPT as well. Furthermore, the conversion between HDL and LDL has been experimentally observed by Kim et al. 9 The LLPT hypothesis offers a fresh perspective for understanding the singular behavior of liquid water in terms of LDL and HDL motifs at varying pressures and temperatures, as demonstrated by the hypothetical phase diagram depicted in Figure S1. 10 The phase diagram portrays the liquid-liquid coexistence line between LDL and HDL in terms of simple liquid regions. Additionally, the diagram includes the liquid-liquid critical point (LLCP), which may be either real or virtual; the Widom line (W), marking the crossover between the metastable and stable regions in the one-phase region; and fluctuations on various length scales emerging from the LLCP, resulting in local, spatially separated regions in the anomalous region. The amorphous solid states of LDL and HDL can exist at extremely low temperatures as low-density (LDA, low pressure) and high-density (HDA, high pressure) amorphous ice, respectively. In fact, recent findings have also demonstrated the possibility of obtaining medium-density amorphous ice under specific conditions of pressure and temperature. 11 This discovery indicates that the proposed phase diagram still has room for optimization and potential for further improvements.
To remove the methanol, the reaction temperature was increased to 378 K. The solution was then heated to 553 K under an argon flow for 1.5 h, followed by cooling to room temperature. The resulting nanoparticles were precipitated by adding ethanol, collected by centrifugation, washed with ethanol, and finally redispersed in 4 mL of cyclohexane.
For the optically inert shell of the 15 nm UCNPs, an aqueous solution (2 mL) containing Gd(CH3CO2)3 (0.121 g, 0.400 mmol) was combined with oleic acid (3 mL) and 1-octadecene (7 mL) in a 50 mL flask. The resulting mixture was heated at 423 K for 1 h under stirring and then cooled to 323 K. The as-synthesized NaGdF4:Yb/Er(18/2%) 15 nm core nanoparticles, dispersed in 4 mL of cyclohexane, were then added to the flask, followed by the addition of a 6 mL methanol solution of ammonium fluoride (1.6 mmol) and sodium hydroxide (1.0 mmol). The mixture was stirred at 323 K for 30 min, and then the reaction temperature was increased to 373 K. After removing the methanol, the solution was heated at 563 K under an argon atmosphere for 1.5 h and then cooled to room temperature. The resulting core-shell nanoparticles were precipitated by the addition of ethanol, collected via centrifugation, washed with ethanol, and redispersed in cyclohexane.
Synthesis of NaYF4:Lu/Yb/Er core-only nanoparticles (52, 64, 78, and 106 nm). In a typical experimental procedure, a 2 mL aqueous solution of Ln(CH3CO2)3 (0.2 mol·L−1; Ln = Lu, Y, Yb, and Er) was added to a 50 mL flask containing oleic acid (3 mL) and 1-octadecene (7 mL). The mixture was heated to 423 K for 1 h. After cooling to 323 K, a methanol solution (6 mL) containing ammonium fluoride (1.6 mmol) and sodium hydroxide (1.0 mmol) was added under stirring for 30 min. After removing the methanol through evaporation, the solution was heated at 563 K under argon for 3 h and then cooled down to room temperature. The resulting nanoparticles were washed several times with ethanol and redispersed in 4 mL of cyclohexane. The same procedure was used to obtain UCNPs of varying sizes. The heating duration, temperature, and concentration composition were tuned to achieve different doping ratios of Lu/Yb/Er: 40/18/2% (52 nm), 47/18/2% (64 and 78 nm), and 50/18/2% (106 nm).
Preparation of ligand-free nanoparticles. The as-synthesized oleate-capped UCNPs were dispersed in a solution containing 1 mL of ethanol and 1 mL of hydrochloric acid (2 mol·L−1), followed by ultrasonication for 10 min to remove the oleate capping. The resulting ligand-free UCNPs were collected by centrifugation at 16,500 rpm for 20 min, washed with ethanol and deionized water several times, and then redispersed in deionized water. The same procedure was used for all the different-sized UCNPs, resulting in aqueous dispersions of ~100 mg·mL−1.
Preparation of the nanofluids. The pH of the solutions and suspensions was measured with a compact pH meter (SC S210-K, Mettler Toledo) at 293 K. The pH meter was calibrated using a two-point calibration method with technical buffer calibration solutions at pH values of 4.01 and 10.01 (Mettler Toledo). Stock solutions with pH values of 2.70 ± 0.01, 5.10 ± 0.01, 6.30 ± 0.01, and 8.50 ± 0.01 were obtained by adding aqueous solutions of hydrochloric acid (0.1 mol·L−1) or sodium hydroxide (0.1 mol·L−1) to deionized water. Then, aqueous nanofluids were obtained at the different pH values by dispersing the different-sized ligand-free UCNPs in the corresponding pH stock solutions under sonication. All nanofluids were prepared with a volume fraction (ϕ) of UCNPs of 0.085%. This choice ensures a sufficient signal-to-noise ratio in the photoluminescence studies while keeping the concentration of the UCNPs as low as possible, thus decreasing particle-particle interactions, which are known to increase the Brownian velocity. 14
Electron microscopy. Transmission electron microscopy (TEM) images of the UCNPs (Figure S5) were obtained using a field-emission transmission electron microscope (JEM-2100F, JEOL) operated at an acceleration voltage of 200 kV. The values of the diameters (d) and their corresponding uncertainties were retrieved from the means and standard deviations of the lognormal function adjusted to the size distribution of the UCNPs (Figure S6 and Table S2).
Experimental setup for temperature-dependent photoluminescence measurements
The emission spectra of the UCNPs were recorded in the experimental setup presented in Figure S7 (adapted from Reference 15). The nanofluids were excited with a continuous-wave (CW) near-infrared laser diode (DL980-3W0-T, CrystaLaser) at 980 nm. The laser beam was collimated by a plano-convex lens (LA1145-AB, Thorlabs), resulting in a power density of 62 W·cm−2. The laser beam irradiates a semi-micro rectangular quartz cuvette (9F-Q-10, Starna Cells) filled with 0.50 mL of the nanofluids. The scanning position of the cuvette along the x-axis was controlled by a moving stage with a minimum step of 0.1 mm. Detection of the upconverting emission was performed by a USB portable spectrometer (Maya 2000 Pro, Ocean Insight) coupled to an optical fiber (P600-1-UV-VIS, Ocean Insight), using a short-pass filter (FESH0750, Thorlabs) to cut off the laser peak during spectral acquisition. Spectral acquisition was performed with a constant boxcar width (one pixel, 0.5 nm) and an integration time of 250 ms. The temperature of the nanofluids was increased at one side of the cuvette through thermal contact, by attaching it to a Kapton thermofoil heater (HK6906, Minco) mounted on a copper plate (4.6 × 2.5 cm^2) and coupled to a temperature controller (E5CN, Omron). The temperature controller is equipped with a K-type thermocouple (KA01-3, TME Thermometers) with a thermal resolution of 0.1 K.
Temperature mapping through upconversion nanothermometry
In the experimental setup depicted in Figure S7, the nanofluids were irradiated with the CW 980 nm laser diode until stabilization of the temperature. After reaching different initial equilibrium temperatures (ranging from 303 to 343 K), one side of the cuvette (containing the nanofluids) was heated (temperature increment of 15 K for 300 s), and time-dependent upconversion emission spectra were recorded at different fixed positions along the xx-direction perpendicular to the laser beam (x_i = 0.0-6.0 mm, i = 1-4). For each time instant, the luminescence intensity ratio between the emission bands corresponding to the 2H11/2 → 4I15/2 (I_H, 510-534 nm) and 4S3/2 → 4I15/2 (I_S, 534-554 nm) transitions was used to define the thermometric parameter (Δ = I_H/I_S) and to calculate the absolute temperature (T) as: 16

1/T = 1/T_0 − (k_B/ΔE) ln(Δ/Δ_0),    (S1)

where ΔE is the energy separation between the Er3+ 2H11/2 and 4S3/2 thermally coupled levels, k_B is the Boltzmann constant, and Δ_0 is the thermometric parameter at room temperature (T_0). The value of ΔE was calculated as the difference between the barycenters of the 2H11/2 (525 nm) and 4S3/2 (545 nm) emitting levels (Figure S8a). Since Δ_0 corresponds to the value of Δ without laser-induced heating, its value can be obtained from the intercept of the curve of Δ measured as a function of the laser power density (Figure S8b). The thermometric parameter increases as the temperature rises because the relative population of the 2H11/2 and 4S3/2 levels is in thermal equilibrium, following Boltzmann's distribution. 16 Therefore, this approach provides a reliable parameter to record time-dependent temperature profiles based on the emission spectra of the nanofluids containing UCNPs while heating them (Figure S8c).
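A small numerical sketch of Equation S1 (the ΔE, Δ_0 and Δ values below are placeholders, not the calibrated values of this work):

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def temperature_from_lir(delta, delta0, delta_e_ev, t0=293.0):
    """Absolute temperature from the luminescence intensity ratio (Eqn. S1)."""
    return 1.0 / (1.0 / t0 - (K_B / delta_e_ev) * np.log(delta / delta0))

# Illustrative values: ratio 10% above its room-temperature value, dE ~ 0.087 eV (~700 cm-1)
print(temperature_from_lir(delta=0.55, delta0=0.50, delta_e_ev=0.087))  # ~301 K
```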
The thermal sensing ability of the nanofluids can be assessed by the relative thermal sensitivity (S_r) and the uncertainty in temperature (δT), the two figures of merit commonly used to compare the thermometric performance of luminescent thermometers. The value of S_r represents the relative change in Δ per degree of temperature (in %·K−1):

S_r = (1/Δ) (∂Δ/∂T) = ΔE/(k_B T^2),

where ∂Δ/∂T is the rate of change of Δ in response to the variation of temperature, k_B is Boltzmann's constant, and T is the absolute temperature. The value of δT corresponds to the smallest temperature change resolvable by the thermometer (in K):

δT = (1/S_r) (δΔ/Δ),

where δΔ/Δ is the relative uncertainty of Δ. The maximum relative thermal sensitivity and the minimum uncertainty, together with the temperatures at which they were obtained for each nanofluid, are summarized in Table S3.
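These figures of merit follow directly from Equation S1; a sketch with the same placeholder ΔE as above:

```python
K_B = 8.617333e-5  # Boltzmann constant, eV/K

def relative_sensitivity(delta_e_ev, t):
    """Relative thermal sensitivity S_r = dE/(k_B*T^2) of a Boltzmann thermometer, in %/K."""
    return 100.0 * delta_e_ev / (K_B * t**2)

def temperature_uncertainty(delta_e_ev, t, rel_delta_uncert):
    """Smallest resolvable temperature change dT = (1/S_r)*(dDelta/Delta), in K."""
    return rel_delta_uncert / (relative_sensitivity(delta_e_ev, t) / 100.0)

print(relative_sensitivity(0.087, 300.0))            # ~1.1 %/K
print(temperature_uncertainty(0.087, 300.0, 0.003))  # ~0.27 K for 0.3% ratio noise
```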
Data denoising and determination of the instantaneous ballistic Brownian velocity
The absolute temperature values measured by upconversion nanothermometry were converted into the reduced temperature θ(t):

θ(t) = (T(t) − T_i)/(T_f − T_i),    (S4)

where T(t), T_i, and T_f are the instantaneous, initial, and final values of the temperature, respectively.
The values of T(t) were obtained by applying Equation S1 to the time-dependent upconverting emission spectra of the samples. Once the temperature profiles were recorded at distinct T_i, this procedure was performed to obtain thermal transients spanning the same reduced temperature range for comparison purposes.
A nonlinear noise reduction method based on the discrete wavelet transform (DWT) was used to reduce the noise in the θ(t) curves. 17 The denoising procedure was implemented through a custom MATLAB R2022a script in five steps, following a previously reported procedure. 15 The script i) imports the thermometric parameter Δ = I_H/I_S from the as-measured temperature-dependent emission spectra; ii) computes the temperature by applying Equation S1 to the resulting time-dependent Δ; iii) converts the temperature into the reduced temperature through Equation S4; iv) applies the DWT denoising method to obtain a denoised reduced temperature (threshold parameter: 15, 5 stages); and v) calculates the noise from the difference between the denoised and measured reduced temperatures. The denoised θ(t) curves were used to compute the critical onset time (t_i), which corresponds to the time instant at which the initial temperature starts to increase. After analyzing the data, the script marks t_i as the time instant at which the change in the denoised signal exceeds the standard deviation of the noise (extracted from the corresponding histogram). For the same initial temperature, the θ(t) curves were recorded by irradiating the laser at distinct positions along the xx-direction (x_i = 0.0-6.0 mm, i = 1-4). The instantaneous ballistic Brownian velocity was then estimated as the slope of the x_i versus t_i plot, as demonstrated in Figure S9.
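The original script is in MATLAB; a rough Python equivalent of steps iv)-v) and the onset detection, using PyWavelets (the wavelet choice and thresholding rule are illustrative assumptions, not the reported settings):

```python
import numpy as np
import pywt

def denoise_and_onset(theta, wavelet="db4", level=5):
    """DWT soft-threshold denoising of a reduced-temperature transient,
    then onset detection against the noise standard deviation."""
    coeffs = pywt.wavedec(theta, wavelet, level=level)
    thr = np.std(coeffs[-1])                         # threshold from finest-scale detail
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(theta)]
    noise_sd = np.std(theta - denoised)
    onset_idx = np.argmax(denoised - denoised[0] > noise_sd)  # first significant rise
    return denoised, onset_idx
```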
Calculation of crossover temperature
The value of the crossover temperature (T_c) was calculated from the intersection of the two straight lines adjusted to the bilinear trend of the temperature-dependent Brownian velocity of the UCNPs in the aqueous nanofluids. All possible combinations of two straight lines giving the best fit to the bilinear trend observed in the 300-350 K temperature range were computed using a custom script written in MATLAB R2022a. The T_c values were then determined from the intersection of the two linear fits that give the maximum product of the coefficients of determination (r^2) of the two lines. The uncertainty in T_c (δT_c) was estimated as:

δT_c = (T_cmax − T_cmin)/2,

where T_cmax and T_cmin are the maximum and minimum predicted values of T_c, respectively, obtained from the linear dependencies below and above T_c, with slopes m_1 and m_2, shifted by their standard errors of the estimate, σ_E1 and σ_E2. The standard error of the estimate (σ_E) is defined as:

σ_E = sqrt( Σ (ν − ν′)^2 / (n − 2) ),

with ν and ν′ corresponding to the measured and fitted values of the Brownian velocity, respectively, and n the number of data points of each fitted line. The estimation of δT_c is illustrated in the schematic representation in Figure S10. The same procedure was employed regardless of the UCNPs' size or the pH of the aqueous media.
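A compact sketch of this bilinear-fit search, scanning every candidate breakpoint and keeping the split that maximizes the product of the two r^2 values (purely illustrative; the original analysis is a MATLAB script):

```python
import numpy as np

def crossover_temperature(T, v):
    """Find T_c from a bilinear trend in the Brownian velocity v(T)."""
    best = (-np.inf, None)
    for k in range(3, len(T) - 3):               # at least 3 points per segment
        fits, r2s = [], []
        for Ts, vs in ((T[:k], v[:k]), (T[k:], v[k:])):
            m, b = np.polyfit(Ts, vs, 1)
            resid = vs - (m * Ts + b)
            r2 = 1 - resid.var() / vs.var()
            fits.append((m, b))
            r2s.append(r2)
        score = r2s[0] * r2s[1]                  # product of the two r^2 values
        if score > best[0]:
            best = (score, fits)
    (m1, b1), (m2, b2) = best[1]
    return (b2 - b1) / (m1 - m2)                 # intersection of the two lines
```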
Equipartition theorem
Equipartition theorem. The equipartition theorem describes the Brownian velocity as v = (k_B T/m*)^(1/2), where k_B is the Boltzmann constant, T is the temperature, and m* is the effective mass of the nanoparticles, representing the combined mass of the UCNPs and half of the liquid mass moving cooperatively with them. 20 In this sense, dispersing UCNPs within denser solvents results in a lower v, as shown in Figure 2a of the main text.
Brownian velocity and number of UCNPs per mL in the nanofluids
The number of UCNPs per unit volume (V) was calculated considering the total mass of the UCNPs in suspension, m_total = ρ_c V, where ρ_c is the mass concentration. The mass of each particle (assumed to be a sphere with diameter d, the TEM diameter, Table S2) is m_p = ρ_n (π d^3/6), where ρ_n is the density of the UCNPs. The values of ρ_c, ρ_n, and d are presented in Table S1. The resulting number of UCNPs per unit volume was then estimated as N/V = ρ_c/m_p = 6ρ_c/(π ρ_n d^3), and is presented in Table S4. Plotting the Brownian velocity at 300 K as a function of the diameter and of the number of UCNPs per mL results in the plots presented in Figure S11.
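As a quick numeric sketch (the concentration and density values below are placeholders consistent with ϕ = ρ_c/ρ_n = 0.085%; the actual values are in Table S1):

```python
import numpy as np

def particles_per_ml(rho_c_g_per_ml, rho_n_g_per_cm3, d_nm):
    """Number of spherical UCNPs per mL of nanofluid: N/V = 6*rho_c/(pi*rho_n*d^3)."""
    d_cm = d_nm * 1e-7
    particle_mass = rho_n_g_per_cm3 * np.pi * d_cm**3 / 6.0   # grams per particle
    return rho_c_g_per_ml / particle_mass

# e.g. ~0.0036 g/mL of 15 nm particles with an assumed density of ~4.2 g/cm3
print(f"{particles_per_ml(0.0036, 4.2, 15):.2e}")  # ~5e14 particles per mL
```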
We note the increase in the Brownian velocity with the increase in the number of UCNPs, in line with what we reported previously, stressing the pivotal role of particle-particle interactions in the measured Brownian velocity. 14
Supplementary Figures
For the nanofluids obtained in heavy water (D2O) and ethanol (EtOH), aqueous dispersions of 15 nm ligand-free UCNPs were freeze-dried to remove water by sublimation. The mass of dried UCNPs required to obtain ϕ = 0.085% was then dispersed in the proper volume of D2O or EtOH under sonication. Table S1 summarizes the mass concentration (ρ_c) and density (ρ_n), according to the chemical composition of the UCNPs, used to obtain the different nanofluids at ϕ = ρ_c/ρ_n = 0.085%.

Colloidal characterization. The hydrodynamic diameter (d_H) of the UCNPs was measured by dynamic light scattering (DLS) in a Malvern Zetasizer Nano ZS instrument (red badge ZEN3600, Malvern Instruments) operating with a 632.8 nm laser. To analyze the colloidal stability of the UCNPs, zeta potential (ζ) measurements were carried out with the same instrument, applying the Smoluchowski model. The d_H and ζ measurements were performed at 298 K in a folded capillary cell (DTS1070, Malvern Instruments). Three measurements with ten scans each were performed, and the average values were used for data analysis. The resulting distributions of sizes and zeta potentials were adjusted to lognormal and Gaussian functions, respectively, using the means and standard deviations to determine the average d_H and ζ and their uncertainties (Figures S2-S4 and Table S2).
Figure S1. Hypothetical phase diagram of liquid water showing the coexistence of LDL (blue) and HDL (red) domains near the W line. Below W, LDL dominates with fluctuations into HDL domains, whereas above W, HDL dominates with LDL fluctuations. The white star represents the liquid-liquid critical point. As one moves away from the critical point, the fluctuations decrease in size, as evidenced by the diminishing size of the blobs. The black line delimits the so-called "funnel of life", at which water exhibits the unusual properties that are crucial for sustaining life. Reproduced from Pettersson, L. G. M., "A two-state picture of water and the funnel of life", in Modern Problems of the Physics of Liquid Systems; Bulavin, L. A., Xu, L., Eds.; Springer Proceedings in Physics, 2019; pp 3-39. Copyright 2019 Springer Nature.
Figure S2. Hydrodynamic diameter (d_H, top) and ζ (bottom) of the 15 nm nanofluids prepared in (a, d) water (pH = 5.10), (b, e) heavy water, and (c, f) ethanol. The lines are the best fits to the data using lognormal (d_H) and Gaussian (ζ) functions (r^2 > 0.97). The values of d_H obtained for the other nanofluids prepared in water are summarized in Table S2.
Figure S4. pH dependence of ζ for the (a) 15 nm, (b) 24 nm, (c) 52 nm, (d) 64 nm, (e) 78 nm, and (f) 106 nm UCNPs dispersed in water, obtained from the data in Figure S3. The lines are guides for the eyes. The decrease in ζ with the increase in pH is in good accordance with previous reports, 21 highlighting how the presence of ions in the medium can affect the surface charge of the UCNPs. The stability of a colloid is the result of van der Waals attraction and repulsion (steric and electrostatic). The zeta potential provides information about the repulsive forces due to the electric double layer and, thus, the absolute values, |ζ|, measure the magnitude of the electrostatic repulsion. The nanofluids are considered stable if |ζ| ≥ 20 mV (green-shaded regions). 22
Figure S7. Schematic of the experimental setup used to record the thermal transients of the nanofluids. The cuvette is placed on a controlled moving stage that allows the nanofluid to be irradiated at different positions along the xx-direction. The 980 nm laser beam is collimated by a plano-convex lens, and the light emission from the nanofluid is collected by an optical fiber coupled to a portable spectrometer.
Figure S9. Determination of the instantaneous Brownian velocity of 15 nm UCNPs dispersed in water (pH = 5.10 ± 0.01). (a) Illustrative denoising procedure applied to four reduced temperatures (symbols) recorded at positions x_1 = 0.0 mm, x_2 = 2.0 mm, x_3 = 4.0 mm, and x_4 = 6.0 mm. The dashed lines are the denoised signals obtained through the DWT procedure, with the onset t_i values indicated. (b) Histograms of the noise curves in panel (a). The dashed lines are the best fits to the experimental data using Gaussian functions. The noise is nearly centered at zero, presenting a high coefficient of determination (r^2 > 0.98) for all samples, indicating that it corresponds to additive Gaussian noise. (c) Determination of the instantaneous Brownian velocity ν, corresponding to the slope (Δx_i/Δt_i) of the straight line adjusted to the experimental data from panel (a). The error bars in x_i and t_i are the uncertainty in the position of the moving stage from Figure S7 (0.1 mm) and the integration time used for spectral acquisition (250 ms), respectively.
Figure S10. Schematic representation of the calculation of the crossover temperature and its uncertainty (T_c ± δT_c). The solid blue and red lines represent linear fits to the experimental data before and after T_c, respectively. The corresponding dashed lines depict the average deviation in the prediction of the Brownian velocity.
Figure S11. Brownian velocity of the UCNPs in the aqueous nanofluids, represented as a function of (a) the particle size and (b) the number of UCNPs per mL at 300 K and pH = 5.10 (illustrative temperature and pH values).
Figure S12. Effect of pH on the Brownian velocity of 24 nm, 52 nm, and 106 nm UCNPs. The lines are the best linear fits at each pH for T < T_c, and the same linear fit for all pH values for T > T_c (r^2 > 0.98 for all samples). The pH-dependent variation in the Brownian velocity of the 100 nm UCNPs with a silica shell was not assessed, due to their lower colloidal stability at pH values other than 5.10.
Figure S13. Crossover temperature as a function of (a) the pH and (b) |ζ| for the 24, 52, and 106 nm diameter UCNPs. The lines are guides for the eyes.
Table S2. TEM (d) and hydrodynamic (d_H) diameters of the obtained UCNPs. The hydrodynamic diameters from the DLS measurements correspond to the water-based nanofluids at pH = 5.10 ± 0.01. An excellent agreement was observed between the mean diameters reported by TEM and DLS.
Table S3. Energy separation (ΔE), maximum S_r (S_m), minimum temperature uncertainty (δT), and the corresponding temperatures at which they occur for each nanofluid.
Table S4. Number of UCNPs dispersed in 1 mL of the investigated aqueous nanofluids (N/V) for a volume fraction of 0.085%.
Citrus Pomace as a Source of Plant Complexes to Be Used in the Nutraceutical Field of Intestinal Inflammation
This study aims to recover the main by-product of Citrus fruit processing, the raw pomace, also known as pastazzo, to produce plant complexes to be used in the treatment of inflammatory bowel disease (IBD). Food-grade extracts from orange (OE) and lemon (LE) pomace were obtained by ultrasound-assisted maceration. After a preliminary phytochemical and biological screening by in vitro assays, primary and secondary metabolites were characterized by proton nuclear magnetic resonance (1H-NMR) and liquid chromatography coupled to diode array detection and electrospray ionization mass spectrometry (LC-DAD-ESI-MS) analyses. The intestinal bioaccessibility and the antioxidant and anti-inflammatory properties were investigated by in vitro simulated gastro-intestinal digestion, followed by treatments on a lipopolysaccharide (LPS)-stimulated human colorectal adenocarcinoma cell line (Caco-2). The tight junction-associated structural proteins (ZO-1, Claudin-1, and Occludin), transepithelial electrical resistance (TEER), reactive oxygen species (ROS) levels, the expression of some key antioxidant (CAT, NRF2 and SOD2) and inflammatory (IL-1β, IL-6, TNF-α, IL-8) genes, and pNF-κB p65 nuclear translocation were evaluated. The OE and LE digesta, which did not show any significant difference in terms of phytochemical profile, showed significant effects in protecting against LPS-induced intestinal barrier damage, oxidative stress and inflammatory response. In conclusion, both OE and LE emerged as potential candidates for further preclinical studies on in vivo IBD models.
Introduction
Over the past decades, it has been widely demonstrated that the consumption of functional foods such as fresh fruit and vegetables and their processed products is essential to ensure our body's vitality and health. However, the increased consumption, especially of processed products such as juices, extracts, centrifuged drinks, fourth-range products, etc., implies a significant increase in process wastes, which are often very expensive to dispose of [1,2]. Despite this, these waste products can still be considered precious raw materials for producing plant complexes or isolating pure molecules to be recovered and used both in the agri-food chain and in the nutraceutical field.
Citrus fruits represent some of the most important and valued fruit crops in the world [3]. Consumed by humans since ancient times, these fruits are well known for their health effects, thanks to the wide range of hydrophilic and lipophilic bioactive compounds they contain [4]. World production of Citrus fruits has grown steadily over the past three decades [5]. Data provided by the Food and Agriculture Organization Corporate Statistical Database (FAOSTAT) reveal that about 150 million tons of Citrus fruits are produced globally every year [1]. Italy, after Spain and Egypt, holds third place in the ranking of the main Citrus fruit-producing countries in Europe [3]. Among the Italian regions, Sicily holds the record in Citrus fruit production, comprising more than 60% of the entire national market [6,7]. Oranges and lemons are the most cultivated and marketed Citrus fruits and, consequently, also the most processed, producing annually about 230,000 tons of raw pomace (also known as pastazzo), a by-product consisting of flavedo, albedo, seeds and pulp fruit residues [3,8].
Despite being a waste product, Citrus raw pomace is a rich source of value-added compounds such as polyphenols, polysaccharides, organic acids, terpenes, amino acids, minerals, vitamins and carotenoids [8]. In this context, over the last decades, research has focused on the study of alternative applications that would allow its recovery from a circular economy perspective, such as its use as a fertilizer, in animal feed, or to produce biofuels [3,4,9]. However, evidence for the application of this type of waste in the health field is currently lacking, although many bioactive compounds typical of Citrus fruits, in particular flavanones, have proven to be very promising for their strong antioxidant and anti-inflammatory properties [3,10], particularly when used in combination, given their proven synergistic activity [11]. As suggested by experimental studies, the properties of these natural compounds could have valuable preventive and therapeutic effects on several noncommunicable diseases (NCDs), such as metabolic dysfunction-associated steatotic liver disease and type 2 diabetes [12,13].
Among the NCDs that contribute most to the reduction of quality of life and life expectancy are the inflammatory bowel diseases (IBD), a group of pathologies including Crohn's disease (CD) and ulcerative colitis (UC). IBD affects mainly adolescents and middle-aged people, and in 2017 a Global Burden of Disease study estimated approximately 6.8 million cases of IBD globally [14]. IBDs are characterized by recurrent non-infectious gastro-intestinal tract inflammation [15], whose symptoms may include abdominal pain, diarrhea, weight loss and rectal bleeding [16,17]. Although the etiology remains unknown, it is possible to speculate that, in genetically predisposed individuals, the onset of IBD may be due to a disruption of the balance between the host immune response and the intestinal commensal bacteria. Furthermore, environmental, behavioral and dietary factors play a key role in the onset of IBD, so much so that these are referred to as multifactorial diseases [18,19]. The current pharmacological approach consists of symptomatic treatments and complication-managing drugs such as antibiotics, corticosteroids, immunosuppressants and tumor necrosis factor α (TNF-α) inhibitors, which, however, often fail to achieve and sustain remission and can even cause serious side effects [16,17]. Given their chronic and progressive nature and the associated healthcare costs, the rapidly increasing incidence of IBD has become a major socio-economic concern [20]; the search for alternative therapies thus represents a challenge for research on IBD [21]. Indeed, in recent years there has been a significant increase in studies on IBD and natural substances aimed at finding alternatives to conventional therapy, but to date research has mainly focused on natural products and extracts obtained from edible plant parts, while plant complexes obtained from agri-food waste have rarely been considered.
In this context, the aim of the present study was to investigate the phytochemical profile and intestinal bioaccessibility of standardized and titrated food-grade extracts of conventional blond orange and organic lemon raw pomace, and to test their antioxidant and anti-inflammatory properties by in vitro cell-free and cell-based assays to select plant complexes potentially useful for nutraceutical purposes in the context of IBD.
Sample Preparation
Raw pomace samples of conventional blond orange (Citrus sinensis (L.) Osbeck cultivar "Valencia"), coming from Carlentini, Lentini and Messina, and organic lemon (Citrus limon L. Burm. cultivar "Femminello"), coming from Syracuse, were kindly provided by Simone Gatto S.r.l., a leading Sicilian company in the production of Citrus essential oils and juices of absolute purity, which currently distributes its processing products in 27 countries. This company was also chosen for the quality of the starting material, which is guaranteed by the selection of Citrus groves based on a sustainable supply chain, fair pricing, low environmental impact, varietal compliance and pesticide control.
To standardize the extraction process, three different batches of each Citrus raw pomace type were supplied and processed independently. Samples were cryo-powdered in liquid nitrogen with a blade analytical mill (A11, IKA®-Werke GmbH & Co. KG, Staufen, Germany) to inhibit enzymatic activity, thus preserving the native phytochemical profile. Food-grade hydroalcoholic extracts were obtained by ultrasound-assisted extraction (matrix/solvent 1:10, w/v) at room temperature (RT), according to Smeriglio et al. [22], using four different ethanol/water ratios: 50:50, 60:40, 70:30 and 80:20 (v/v). The extraction procedure was repeated three times, and the obtained supernatants were collected and dried, at RT and in the dark, by rotary evaporator (Büchi R-205, Cornaredo, Italy). The dry orange and lemon raw pomace extracts (OE and LE, respectively) were stored overnight in a vacuum glass desiccator with anhydrous sodium sulfate. After calculating the extraction yield, both extracts were stored in burnished sealed vials with a nitrogen headspace. At the time of the analyses, fresh DMSO stock solutions were prepared and then diluted in Milli-Q water to carry out all the cell-free and cell-based in vitro assays.
1H NMR spectra were recorded at 25 °C on a Varian Inova instrument (equipped with a reverse triple-resonance probe) operating at a frequency of 600.13 MHz, using MeOH-d4 as the internal lock. Each 1H NMR spectrum consisted of 256 scans (corresponding to 16 min) with a relaxation delay (RD) of 2 s, an acquisition time of 0.707 s and a spectral width of 9595.8 Hz (corresponding to δ 16.0). A presaturation sequence (PRESAT) was used to suppress the residual water signal at δ 4.83 (power = −6 dB, presaturation delay 2 s).
Five different extracts were measured to test reproducibility. Semi-quantitative analysis was performed by integrating the diagnostic signals of the compounds of interest in comparison with the TMSP internal standard. Compound identification was based on the literature and an in-house database [23,24].
Secondary Metabolites Screening by Colorimetric Assays
Total Phenolic Compounds (TPC)
Total phenolics were quantified according to Ingegneri et al. [25]. Briefly, 10 µL of OE and LE (0.625-5.0 mg/mL) were added to 90 µL of Milli-Q water and mixed 1:1 (v/v) with Folin-Ciocalteu reagent. After 3 min, 100 µL of 10% sodium carbonate were added and samples were incubated in the dark at RT for 60 min, shaking every 10 min. Absorbance was read at 785 nm using a Multiskan™ GO Microplate Spectrophotometer (Thermo Scientific, Waltham, MA, USA) against Milli-Q water as blank. Gallic acid was used as a reference compound (0.075-0.6 mg/mL), and results were expressed as g of gallic acid equivalents (GAE)/100 g dry extract (DE).
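A hedged sketch of how the gallic acid calibration line can be turned into GAE values follows; the absorbance readings are invented for illustration, and only the concentration ranges are taken from the protocol above.

```python
# Illustrative gallic acid calibration for the Folin-Ciocalteu assay.
import numpy as np

# Calibration standards (mg/mL gallic acid) and invented A785 readings
std_conc = np.array([0.075, 0.15, 0.3, 0.45, 0.6])
std_abs  = np.array([0.11, 0.21, 0.42, 0.63, 0.82])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear fit

def gae_per_100g(sample_abs, sample_conc_mg_ml):
    """g gallic acid equivalents per 100 g dry extract."""
    gae_mg_ml = (sample_abs - intercept) / slope
    # mg GAE per mg DE equals g per g; x100 gives g/100 g
    return gae_mg_ml / sample_conc_mg_ml * 100.0

print(f"TPC ≈ {gae_per_100g(0.35, 5.0):.2f} g GAE/100 g DE")
```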
Total Flavonoid Compounds (TFC)
Total flavonoids were quantified according to Lenucci et al. [26]. Briefly, 50 µL of OE and LE (1.25-10 mg/mL) were added to 450 µL of Milli-Q water and 30 µL of 5% NaNO2. After 5 min, 60 µL of 10% AlCl3 were added, and samples were incubated for 6 min at RT. Two hundred microliters of 1 M NaOH and 210 µL of Milli-Q water were then added, and samples were vortex-mixed. The absorbance was recorded at 510 nm with a UV-1601 spectrophotometer (Shimadzu, Kyoto, Japan). Rutin was used as a reference standard (0.125-1.0 mg/mL), and results were expressed as g of rutin equivalents (RE)/100 g DE.
Vanillin Index
The vanillin index is a specific assay useful to detect flavan-3-ols and dihydrochalcones that have a single bond at the 2,3-position and free meta-oriented hydroxy groups on the B ring. Briefly, 0.5 mL of OE and LE (20 mg/mL) were added to 1.5 mL of 0.5 M sulfuric acid and loaded onto a conditioned Sep-Pak C18 cartridge (Waters, Milan, Italy), which was then washed with 2.0 mL of 5.0 mM sulfuric acid. Samples were eluted with 5.0 mL of methanol, and 1 mL of each eluate was added to 6.0 mL of 4% vanillin methanol solution and incubated at 20 °C for 10 min. HCl (3 mL) was added, and after 15 min at RT, the absorbance was recorded at 500 nm [27] using the same instrument and blank reported in the Total Flavonoid Compounds (TFC) Section. Catechin was used as a reference standard (0.0625-0.50 mg/mL). Results were expressed as g of catechin equivalents (CE)/100 g DE.
Proanthocyanidins
Proanthocyanidins were quantified by hot acid hydrolysis [27], diluting OE and LE (40 mg/mL) in 0.05 M sulfuric acid (2 mL). Solutions were loaded onto conditioned Sep-Pak C18 cartridges (Waters, Milan, Italy). The proanthocyanidin-rich fractions obtained were eluted with methanol (3 mL) and collected in 100 mL round-bottom flasks shielded from light and containing 9.5 mL of absolute ethanol. After this, 12.5 mL of a 300 mg/L FeSO4·7H2O hydrochloric acid solution was added and the samples were left to reflux for 50 min. After cooling, the absorbance was recorded at 550 nm using the same instrument and blank reported in the Total Flavonoid Compounds (TFC) Section. To subtract the native anthocyanin content of the samples, the absorbance of samples prepared under the same conditions, but cooled on ice instead of heated, was subtracted from that of the heated samples to obtain the net absorbance. The proanthocyanidin concentration was expressed as g of cyanidin chloride equivalents (ε = 34,700) (CyE)/100 g DE.
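The net-absorbance step of this assay lends itself to a short worked example. The sketch below applies the Beer-Lambert law with the stated ε = 34,700; the dilution factor, path length, extract concentration, and the approximate molecular weight of cyanidin chloride are assumptions of this illustration.

```python
# Illustrative Beer-Lambert calculation for the proanthocyanidin assay:
# the net A550 (heated minus ice-cooled control) is converted to cyanidin
# chloride equivalents using eps = 34,700 L mol^-1 cm^-1 from the text.

EPS = 34_700             # L mol^-1 cm^-1, cyanidin chloride
MW_CYANIDIN_CL = 322.7   # g/mol, approximate (assumed)

def proanthocyanidins_g_per_100g(a_heated, a_cold, dilution_factor,
                                 extract_g_per_l, path_cm=1.0):
    net_a = a_heated - a_cold                  # subtract native anthocyanins
    conc_mol_l = net_a / (EPS * path_cm)       # Beer-Lambert law
    cye_g_l = conc_mol_l * MW_CYANIDIN_CL * dilution_factor
    return cye_g_l / extract_g_per_l * 100.0   # g CyE / 100 g DE

print(f"{proanthocyanidins_g_per_100g(0.42, 0.05, 12.5, 3.2):.2f} g CyE/100 g DE")
```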
In Vitro Simulated Gastrointestinal Digestion
The in vitro simulated gastrointestinal digestion of OE and LE was carried out according to the INFOGEST protocol [28].
OE and LE solutions were added (1:1, v/v) to a simulated gastric fluid (SGF) consisting of 1.25X electrolyte stock solution, 0.3 M calcium chloride dihydrate, porcine pepsin (2000 U/mL), gastric lipase (60 U/mL), Milli-Q water and 5 M HCl for pH adjustment. Samples were then incubated under agitation at pH 3.0 for 2 h. The gastric chyme was then diluted (1:1, v/v) with simulated intestinal fluid (SIF), consisting of 1.25X electrolyte stock solution, 0.3 M calcium chloride dihydrate, porcine trypsin (100 U/mL), bovine chymotrypsin (25 U/mL), porcine pancreatic α-amylase (200 U/mL), porcine pancreatic lipase (2000 U/mL), porcine pancreatic colipase (4000 U/mL), 10 mM bile salts, Milli-Q water and 5 M NaOH for pH adjustment. Samples were then incubated under agitation at pH 7 for a further 2 h. At the end of the procedure, according to the INFOGEST protocol for the bioaccessibility of phytochemicals [28], the OE and LE digesta were centrifuged, filtered using a 0.20 µm nylon syringe filter, and immediately stored at −80 °C until subsequent analyses. Extraction of the digesta samples for phytochemical analyses was carried out according to Denaro et al. [10].
In Vitro Antioxidant and Anti-Inflammatory Assays
The antioxidant and anti-inflammatory activity of OE and LE was evaluated by several in vitro spectrophotometric and spectrofluorimetric assays based on different mechanisms and reaction environments. Results were expressed as inhibition (%) of the oxidative/inflammatory activity by calculating the half-maximal inhibitory concentration (IC50) and the respective confidence limits (C.L.) at 95% by Litchfield and Wilcoxon's test (PHARM/PCS 4, MCS Consulting, Wynnewood, PA, USA). The concentration ranges reported below refer to final concentrations in the reaction mixture.
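The study derives IC50 values with the Litchfield-Wilcoxon procedure; as a hedged alternative illustration, the sketch below fits a three-parameter logistic curve to invented inhibition data with scipy and reads off the IC50.

```python
# Sketch of IC50 estimation from percent-inhibition data. This is NOT
# the Litchfield-Wilcoxon method used in the paper; it is a logistic
# curve fit shown for illustration, with invented data points.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(x, top, ic50, hill):
    # Increasing dose-response with the lower plateau fixed at 0%
    return top / (1.0 + (ic50 / x) ** hill)

conc = np.array([31.25, 62.5, 125.0, 250.0])   # µg/mL (final)
inhib = np.array([22.0, 41.0, 63.0, 84.0])     # % inhibition (invented)

popt, _ = curve_fit(logistic3, conc, inhib,
                    p0=[100.0, 100.0, 1.0], maxfev=10_000)
print(f"IC50 ≈ {popt[1]:.1f} µg/mL (Hill slope {popt[2]:.2f})")
```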
Trolox Equivalent Antioxidant Capacity (TEAC) Assay
The blue-green cationic radical solution, obtained by incubating the 1.7 mM diammonium salt of 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS) with 4.3 mM K2S2O8 at RT for 12 h, was diluted with Milli-Q water to an absorbance of 0.7 ± 0.02 at 734 nm and used within 4 h. Ten microliters of OE and LE (31.25-250.0 µg/mL) were added to the radical solution (200 µL) and incubated at RT for 6 min [25]. The absorbance decrease was recorded at 734 nm using the same instrument and blank reported in the Total Phenolic Compounds (TPC) Section. Trolox was used as a reference standard (1.25-10.0 µg/mL).
β-Carotene Bleaching (BCB) Assay
The BCB assay was carried out according to Smeriglio et al. [2] with some modifications. Briefly, 80 µL of OE and LE (62.5-500.0 µg/mL) were added to 2 mL of a β-carotene emulsion consisting of β-carotene chloroform solution (2.5 mg/mL), 4 µL of linoleic acid, and 100 µL of Tween-40. A β-carotene-free emulsion was used as a negative control, whereas a β-carotene emulsion with Milli-Q water was used as a blank. Samples were incubated for 120 min at 50 °C in a shaking water bath, monitoring the absorbance decay every 20 min at 470 nm using the same instrument reported in the Total Phenolic Compounds (TPC) Section. Butylhydroxytoluene (BHT) was used as a reference standard (0.06-0.5 µg/mL).
Iron-Chelating Activity (ICA) Assay
The iron-chelating activity was evaluated according to Smeriglio et al. [2] with some modifications. Briefly, 25 µL of 2.0 mM iron(II) chloride tetrahydrate were added to 50 µL of OE and LE (75.0-600.0 µg/mL) and incubated at RT for 5 min. Then, 50 µL of 5 mM ferrozine were added, and the reaction mixture was diluted to 1.5 mL with Milli-Q water, vortex-mixed, and incubated for 10 min at RT. The absorbance was read at 562 nm using the same instrument and blank reported in the Total Flavonoid Compounds (TFC) Section. EDTA was used as a reference standard (1.5-12.0 µg/mL).
Heat-Induced Bovine Serum Albumin Denaturation (ADA)
OE and LE (0.25-2.0 mg/mL and 0.125-1.0 mg/mL, respectively) were added to a 0.4% fatty-acid-free bovine serum albumin (BSA) solution and PBS pH 5.3 (4:5:1 v/v/v) [10]. Once the starting absorbance had been recorded at 595 nm, samples were incubated for 30 min at 70 °C in a shaking water bath, recording the final absorbance at the same wavelength and using the same instrument and blank reported in the Total Phenolic Compounds (TPC) Section. Diclofenac sodium was used as a reference standard (3.0-24.0 µg/mL).
Protease-Inhibitory Activity (PIA)
Twenty microliters of OE and LE (31.25-250.0 µg/mL) were added to 12 µL of trypsin (10 µg/mL), 188 µL of Tris-HCl buffer pH 7.5 (25 mM) and 400 µL of casein (0.8%) and incubated for 20 min at 37 °C in a shaking water bath [10]. Perchloric acid (400 µL) was added to stop the reaction. After centrifugation (3500× g for 10 min), the absorbance of the supernatants was recorded at 280 nm using the same instrument and blank reported in the Total Flavonoid Compounds (TFC) Section. Diclofenac sodium was used as a reference standard (2.0-16.0 µg/mL).
Cell Culture and Treatments
The human colorectal adenocarcinoma Caco-2 cell line, purchased from and certified by the American Type Culture Collection (ATCC, Manassas, VA, USA), was cultured in Eagle's Minimum Essential Medium (EMEM) (ATCC) supplemented with 10% fetal bovine serum (Gibco-Thermo Fisher Scientific, Waltham, MA, USA) and 1% penicillin/streptomycin (Euroclone, Milan, Italy) and incubated at 37 °C with 5% CO2 in a humidified atmosphere. The medium was changed three times a week, and possible mycoplasma contamination was checked using the Venor GeM Advance Mycoplasma Detection Kit (Minerva Biolabs, Berlin, Germany); all experiments were thus performed only in mycoplasma-free cells. For the subsequent experiments, all treatments were added to the culture medium as detailed in the specific methods sections.
Cell Viability
Caco-2 cells were seeded in a 96-multiwell plate at 8000 cells per well in quintuplicate and then treated with different concentrations of OE and LE (25, 50, 100, 200, and 250 µg/mL), or lipopolysaccharide (LPS) purified from Gram-negative E. coli 0111:B4 (InvivoGen Europe, Toulouse, France) at different concentrations (1, 10, and 25 µg/mL), alone or in combination. Cell viability was then assessed at two different timepoints (24 and 48 h) using the Cell Proliferation Kit II-XTT (Roche, Basel, Switzerland) according to the manufacturer's protocol. Briefly, the kit evaluates the viability of treated cells by measuring the absorbance of the water-soluble formazan at 492 and 620 nm using a Tecan spectrophotometer (Tecan, Maennedorf, Switzerland).
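As a small illustration of the XTT readout described above, the sketch below blank-corrects the 492/620 nm absorbance pair and expresses treated wells as a percentage of the untreated control; all absorbance values are invented.

```python
# Minimal sketch of the XTT viability calculation: formazan absorbance
# at 492 nm with 620 nm as reference, blank-corrected, relative to the
# untreated control. Numbers are illustrative placeholders.
import numpy as np

def viability_percent(a492, a620, blank492, blank620, ctrl_signal):
    signal = (np.asarray(a492) - np.asarray(a620)) - (blank492 - blank620)
    return signal / ctrl_signal * 100.0

ctrl = (0.95 - 0.12) - (0.08 - 0.05)            # untreated wells
treated = viability_percent([1.02, 1.05, 0.99], [0.13, 0.12, 0.12],
                            0.08, 0.05, ctrl)
print(np.round(treated, 1), "% of control")
```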
Cell Proliferation by IncuCyte
Cell proliferation was monitored in real time after treatment with 25, 50, 100, 200, and 250 µg/mL OE and LE, or 1, 10, and 25 µg/mL LPS, alone or in combination. Approximately 8000 cells per well were seeded in quintuplicate in 96-multiwell plates. The cell proliferation rate, assessed as confluency percentage, was evaluated using an IncuCyte live-cell analysis system (Sartorius, Göttingen, Germany), acquiring four images per well every 2 h with a 10× objective lens over a time course of 48 h. The IncuCyte basic software version 2021A (Sartorius, Göttingen, Germany) was then used to perform classic confluence analysis.
Transepithelial Electric Resistance (TEER) Measurement
Caco-2 cells were seeded on PET membrane inserts with 0.4 µm pores (Greiner Bio One, Kremsmünster, Austria) placed in a 24-multiwell plate at a density of 3 × 10⁵ cells/cm² and maintained in complete medium until full differentiation, changing the medium three times a week. TEER was then measured to assess the barrier integrity of the monolayer before and after the treatments using the Millicell ERS-2 volt-ohm meter (Merck Millipore, Burlington, MA, USA). The data are presented as a percentage of the initial unit-area resistance, obtained by multiplying the blank-corrected resistance values by the effective membrane area. Membrane inserts without cells were used as blank.
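A minimal sketch of the TEER bookkeeping follows. The blank subtraction and percent-of-initial normalization mirror the description above; the 0.336 cm² insert area is an assumption (a typical 24-well PET insert), not a value stated in the text.

```python
# Hedged sketch of the TEER calculation: blank (cell-free insert)
# resistance is subtracted, unit-area resistance (ohm*cm^2) is obtained
# from the effective membrane area, and values are reported as a
# percentage of the pre-treatment baseline.

MEMBRANE_AREA_CM2 = 0.336  # typical 24-well PET insert area (assumed)

def teer_percent_of_initial(r_sample_ohm, r_blank_ohm, r_initial_ohm):
    teer = (r_sample_ohm - r_blank_ohm) * MEMBRANE_AREA_CM2
    teer0 = (r_initial_ohm - r_blank_ohm) * MEMBRANE_AREA_CM2
    return teer / teer0 * 100.0

print(f"{teer_percent_of_initial(820.0, 110.0, 950.0):.1f} % of initial TEER")
```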
Immunofluorescence
Caco-2 cells with differentiated monolayers were fixed after treatments with 4% paraformaldehyde in H2O for 10 min. Cells were then washed twice with PBS, blocked with 3% BSA in PBS at RT for 30 min, and then incubated overnight at 4 °C with the primary antibodies diluted 1:100 in PBS/1% BSA (see Table S1 for the list of antibodies used). After two washes with PBS, cells were incubated with the secondary antibodies Alexa Fluor 488 and/or Alexa Fluor 555 (Table S1) in PBS/1% BSA for 1 h at RT. Finally, cells were incubated with 1:10,000 Hoechst in PBS for 10 min at RT for nuclear staining. Images were acquired in their original digital format with an Olympus Fluoview FV3000 Confocal Laser Scanning Microscope (Olympus, Tokyo, Japan). Regions of interest (ROI) were drawn to perform quantitative fluorescence imaging analysis (QFIA), and the average fluorescence intensity was calculated using ImageJ software, version 1.8.0 (National Institutes of Health, Bethesda, MD, USA).
Intracellular Reactive Oxygen Species (ROS) Levels
Intracellular ROS levels of Caco-2 cells were evaluated using the chloromethyl derivative of H2DCFDA (CM-H2DCFDA), often used as a general oxidative stress indicator (Invitrogen-Thermo Fisher Scientific). Briefly, 8000 cells per well were seeded into a 96-multiwell black plate and, after the treatments, were incubated for 30 min at 37 °C with 10 µM CM-H2DCFDA fluorescent probe and with 1:3000 Hoechst, used to normalize cell numbers by nuclear staining. The fluorescence intensity was then measured at 495 nm excitation and 530 nm emission using a BioTek Synergy H1 microplate reader (Agilent, Santa Clara, CA, USA). Unstained cells were used as control. Representative images of stained cells were acquired using a Leica DMi8 microscope (Leica Camera AG, Wetzlar, Germany).
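Since the Hoechst co-stain is used to normalize the DCF signal to cell number, a short sketch of that normalization is given below, with invented plate-reader values.

```python
# Sketch of the ROS readout: CM-H2DCFDA fluorescence is divided by the
# Hoechst nuclear signal to correct for well-to-well differences in cell
# number, then expressed relative to untreated cells. Invented data.
import numpy as np

def relative_ros(dcf_signal, hoechst_signal, dcf_ctrl, hoechst_ctrl):
    per_cell = np.asarray(dcf_signal) / np.asarray(hoechst_signal)
    per_cell_ctrl = dcf_ctrl / hoechst_ctrl
    return per_cell / per_cell_ctrl  # fold change vs. control

lps_wells = relative_ros([5400, 5600, 5200], [910, 950, 890], 3100, 900)
print(np.round(lps_wells, 2), "fold vs. control")
```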
Statistical Analyses
Data were expressed as IC50 with respective 95% C.L. (see Section 2.3), and as mean ± standard deviation (SD) of three independent experiments in triplicate for the in vitro cell-free assays and of three independent experiments in quintuplicate for the in vitro cell-based assays. Statistical significance was evaluated using one-way analysis of variance (ANOVA) followed by Tukey's test for the phytochemical and in vitro cell-free assays, and a two-tailed Student's t test for the cell-based assays. Values of p < 0.05 were considered statistically significant. Data analysis was performed with GraphPad Prism 9.0 (GraphPad Software, San Diego, CA, USA).
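A hedged sketch of this statistical workflow (one-way ANOVA with Tukey's post hoc test, and a two-tailed t test) is shown below using scipy and statsmodels on invented replicate data; the paper itself used GraphPad Prism.

```python
# Sketch of the statistics described above, with invented data:
# one-way ANOVA + Tukey's post hoc for cell-free assays, and a
# two-tailed Student's t test for cell-based comparisons.
import numpy as np
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multicomp import pairwise_tukeyhsd

oe  = np.array([2.41, 2.38, 2.44])   # e.g., TPC replicates (invented)
le  = np.array([2.46, 2.49, 2.43])
std = np.array([2.10, 2.15, 2.08])

f_stat, p = f_oneway(oe, le, std)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate([oe, le, std])
groups = ["OE"] * 3 + ["LE"] * 3 + ["STD"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Cell-based comparison (e.g., Ctrl vs. LPS), two-tailed t test
t_stat, p_t = ttest_ind([100, 98, 102, 101, 99], [85, 88, 84, 86, 87])
print(f"t test: t = {t_stat:.2f}, p = {p_t:.4f}")
```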
Standardization and Titration of OE and LE
With the aim of obtaining a constant phytochemical profile with the maximum concentration of bioactive compounds, thus guaranteeing the reproducibility of the biological effects observed, a standardized extraction procedure was developed. To this end, three different batches of orange and lemon raw pomace were supplied and independently extracted with four different solvent ratios (see Section 2.1 for details). The extraction yield, total phenol and flavonoid content, and the concentration of the two chosen phytochemical markers (namely hesperidin and narirutin for OE, and eriocitrin and hesperidin for LE) were used as critical parameters.
The 80:20 v/v hydroalcoholic mixture proved to be the best, not only in terms of extraction yield (11.35 ± 0.36% and 7.30 ± 0.08% for OE and LE, respectively), but also in terms of the greatest concentration of total phenolic compounds (2.41 ± 0.16 g/100 g and 2.46 ± 0.14 g/100 g for OE and LE, respectively), total flavonoids (1.36 ± 0.09 g/100 g and 1.53 ± 0.08 g/100 g for OE and LE, respectively) and the chosen phytochemical markers (hesperidin 2.36 ± 0.05 g/100 g and narirutin 0.37 ± 0.01 g/100 g for OE; hesperidin 1.20 ± 0.03 g/100 g and eriocitrin 1.14 ± 0.02 g/100 g for LE). Finally, using the chosen extraction process, no statistically significant difference between the different batches of orange and lemon raw pomace was observed for any of the considered critical parameters.
1H NMR Profiling
In this work, we measured the 1H NMR profiles of orange and lemon raw pomace. This technique is apt to provide an overview of the most abundant compounds present within an extract, and it is increasingly employed to investigate complex matrices, especially in metabolomic studies [29].
1H NMR profiling is a robust analytical technique relying on easily standardized sample preparation procedures, producing raw data suitable for recycling and reuse. In this context, the storage of the raw 1H NMR profiles in a data repository makes them easily available to the scientific community, which, for instance, might use them to build databases or data-analysis models capable of making predictions based on the 1H NMR profile.
In this work, the 1H NMR profiling of the extracts was important to obtain a picture of the primary metabolites, complementing the LC-DAD-ESI-MS analysis, which was focused on the secondary metabolites, whose concentration was too low to be detected by 1H NMR profiling. The raw spectral data have been shared in a data repository [30]. Figure 1 shows the elucidated profiles, while Table 1 reports the results of the semi-quantitative analysis. According to this analysis, the LE and OE extracts were both rich in sugars, which is not surprising considering that they are by-products of fruit processing. In fact, sugars comprised more than half of the extract mass (around 69% in LE and 88% in OE).
The most abundant sugar was fructose, yielding 318.1 ± 4.0 mg/g in LE and 347.6 ± 3.2 mg/g in OE, followed by glucose, whose overall concentration (including both α- and β-forms) was approximately 293 mg/g in LE and 305 mg/g in OE. Finally, sucrose was more abundant in OE (224.8 ± 2.3 mg/g) than in LE (76.5 ± 1.0 mg/g). Conversely, as expected, citric acid was more abundant in LE (210.7 ± 4.6 mg/g) than in OE (47.2 ± 1.0 mg/g). Both extracts contained GABA, while succinic acid and malic acid were found only in OE, and aspartic acid only in LE. The profiles also revealed the presence of amino acids. Alanine and proline were detected in both extracts, while asparagine and tyrosine were detected only in LE and OE, respectively.
Secondary Metabolites: Phytochemical Screening and LC-DAD-ESI-MS Analysis
The OE and LE secondary metabolites were first investigated by colorimetric assays aimed at quantifying the total phenolic compounds, flavonoids, flavan-3-ols and dihydrochalcones (vanillin index), as well as the proanthocyanidin content (Table 2).
The quantification of these last two classes of compounds also allows calculation of the so-called polymerization index (vanillin index/proanthocyanidins), useful for determining whether an extract contains mainly monomeric or polymeric molecules. Indeed, proanthocyanidins are flavan-3-ol and/or flavan-3,4-diol oligomers, so a polymerization index greater than 1 indicates an abundance of monomeric molecules. As shown in Table 2, OE and LE have comparable total phenolic and flavonoid contents, while statistically significant differences (p < 0.01) were detected in terms of vanillin index and, therefore, in terms of the concentration of monomeric molecules, which appear to be more abundant in LE than in OE. In any case, flavonoids appear to be the most abundant polyphenolic compounds in both extracts under examination, as confirmed by the subsequent phytochemical analyses carried out by LC-DAD-ESI-MS (Table 3). Compounds were detected and tentatively identified by comparison of mass and UV-Vis spectra with literature data, freely accessible online spectral databases, and commercially available reference standards (Table 3).
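As a tiny worked example of the polymerization index just described, the sketch below computes the ratio and applies the >1 rule of thumb; the input values are placeholders, not the measured results in Table 2.

```python
# Polymerization index = vanillin index / proanthocyanidin content,
# both expressed in the same units (e.g., g/100 g DE). Placeholder values.

def polymerization_index(vanillin_index, proanthocyanidins):
    return vanillin_index / proanthocyanidins

pi = polymerization_index(0.85, 0.40)
print(f"PI = {pi:.2f} ->",
      "mainly monomeric flavan-3-ols" if pi > 1
      else "mainly polymeric proanthocyanidins")
```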
Eighty secondary metabolites were identified (54 and 58 in OE and LE, respectively), belonging mainly to six classes: flavones (43%), flavanones (23%), phenolic acids (9%), limonoids (9%), flavonols (8%) and anthocyanins (5%). Of these, only 21 were common to OE and LE, revealing phytochemical profiles that differ markedly already from a qualitative point of view, as expected for two Citrus fruits belonging to different species. Indeed, as shown in Figure 2, although flavones were the most representative polyphenol class in both extracts under examination, they were more expressed in LE than in OE (50% vs. 41%), whereas OE was characterized by a greater expression of flavanones (26% vs. 17% in LE). In addition, LE was also characterized, numerically, by the greatest content of phenolic acids, limonoids and flavonols (Figure 2). On the contrary, anthocyanins were detected only in OE, because it was obtained from the raw pomace of blond oranges characterized by pulp with light red streaks.
Numerically speaking, apigenin, kaempferol and diosmetin derivatives were the most abundant flavones, whereas among flavanones, the most representative compounds were eriodictyol, naringenin and sakuranin derivatives. However, the qualitative phytochemical profile, which sees flavones as the predominant compounds, does not correspond to the quantitative phytochemical profile, which shows a clear predominance of flavanones, in particular hesperidin and narirutin in OE (2.36 ± 0.05 g/100 g and 0.37 ± 0.01 g/100 g, respectively), and hesperidin and eriocitrin in LE (1.20 ± 0.03 g/100 g and 1.14 ± 0.02 g/100 g, respectively).
Intestinal Bioaccessibility
To evaluate the bioaccessibility of the identified phytochemicals, OE and LE were subjected to a simulated in vitro gastro-duodenal digestion. The buccal digestion step was specifically skipped because the present study aimed to evaluate the bioaccessibility of the bioactive compounds within extracts that would potentially be commercialized as nutraceuticals, and therefore potentially formulated as tablets or capsules. The aim was also to evaluate whether these extracts require a gastro-resistant formulation to remain unchanged and thus exert their antioxidant and anti-inflammatory activity at the intestinal epithelium level. Quali-quantitative pre- and post-digestion analyses were carried out according to the validated LC-DAD-ESI-MS method described in Section 2.2.3. Results are shown in Figure 3.
No statistically significant difference was observed in the phytochemical profile of OE and LE between pre- and post-digestion analyses (Figure 3). These results were also corroborated by the quantification of the four most abundant compounds chosen as phytochemical markers (hesperidin and narirutin for OE, and hesperidin and eriocitrin for LE). Indeed, they showed comparable results between the starting plant complexes (2.36 ± 0.05 g/100 g and 0.37 ± 0.01 g/100 g for hesperidin and narirutin, respectively; and 1.20 ± 0.03 g/100 g and 1.14 ± 0.02 g/100 g for hesperidin and eriocitrin, respectively) and the corresponding digested samples (2.18 ± 0.07 g/100 g and 0.33 ± 0.02 g/100 g for hesperidin and narirutin, respectively; and 1.14 ± 0.04 g/100 g and 1.08 ± 0.03 g/100 g for hesperidin and eriocitrin, respectively), taking into account also the extraction process, which returned, during method validation, a recovery value ≥ 90%.
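The marker comparison above reduces to a simple ratio; the sketch below computes an apparent bioaccessibility (post-/pre-digestion content) from the values quoted in this paragraph. Expressing it this way is a choice of this illustration, and the validated recovery ≥ 90% mentioned above is noted but not applied.

```python
# Apparent bioaccessibility of the phytochemical markers, computed as
# post-digestion content / pre-digestion content. Values are taken from
# the paragraph above; no recovery correction is applied here.

markers = {  # (pre, post) in g/100 g DE
    "hesperidin (OE)": (2.36, 2.18),
    "narirutin (OE)":  (0.37, 0.33),
    "hesperidin (LE)": (1.20, 1.14),
    "eriocitrin (LE)": (1.14, 1.08),
}

for name, (pre, post) in markers.items():
    bioaccessibility = post / pre * 100.0
    print(f"{name}: {bioaccessibility:.1f} % of the starting content")
```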
No interferences, such as degradation products, metabolites, or co-eluting compounds, were recorded. Moreover, the chromatographic separation of the OE and LE constituents did not show any overlap or interference from matrix constituents in the digested samples at the retention times of the identified phytochemicals, which appeared well separated and easily identifiable.
In Vitro Cell-Free Assays
The antioxidant and anti-inflammatory activity of OE and LE was first investigated by in vitro spectrophotometric and spectrofluorimetric tests based on different environments and reaction mechanisms. This allowed the evaluation of specific activities such as the direct free-radical scavenging activity against several charged radicals, the iron-chelating capacity, the anti-peroxidative activity, and the anti-inflammatory activity using enzymatic and non-enzymatic tests. Furthermore, this allowed us to make a first comparison between the two plant complexes and to establish the most appropriate range of concentrations to be tested in the Caco-2 cell model.
After a preliminary screening over a wide concentration range, four concentrations were selected for each extract with the aim of calculating the IC50 with the respective C.L. (Table 4).
Both extracts showed a similar trend, with concentration-dependent antioxidant and anti-inflammatory behavior (R2 > 0.990) and the same order of potency: ORAC > BCB > TEAC > FRAP > DPPH for the antioxidant assays, and PIA > ADA for the anti-inflammatory assays. Despite the similar behavior of the two extracts, analysis of the IC50 values (Table 4) makes clear that, in accordance with the phytochemical data, LE, which is the richest in secondary metabolites, is also the strongest from both the antioxidant and anti-inflammatory points of view, with statistically significant results in the DPPH (p < 0.001), BCB (p < 0.05) and ADA (p < 0.001) assays. Furthermore, in line with the phytochemical data, it showed a significantly greater iron-chelating capacity than OE, probably due to the conspicuous presence of monomeric molecules with free hydroxyl groups, mainly located in the ortho position, demonstrating, once again, the linear correlation between secondary metabolite content and biological activity.
Table 4. Antioxidant and anti-inflammatory activity of orange and lemon raw pomace extracts (OE and LE) in comparison with the reference standards. Results, which represent the mean of three independent experiments in triplicate (n = 3), are expressed as g of reference standard equivalents (RSE)/100 g dry extract (DE), and as the concentration inhibiting 50% of the oxidant/inflammatory activity (IC50) with 95% confidence limits (between brackets).
Effects of OE and LE on Cell Viability and Proliferation
To investigate the effects of OE and LE on an in vitro model of intestinal cells, we first evaluated the viability of Caco-2 cells after administration of OE and LE in the culture media at 25, 50, 100, 200, and 250 µg/mL for 24 h and 48 h. The highest DMSO concentration used was 0.1%. The results demonstrated that OE and LE had no significant cytotoxic effects; indeed, both extracts increased cell viability with respect to the untreated control, as determined by the XTT assay (Figure 4). The effects of OE and LE on proliferation were also analyzed by real-time monitoring of cell confluence with the IncuCyte platform over a time course of 48 h. In line with the increased cell viability, both OE and LE induced a more pronounced cell turnover with respect to the untreated control (Figure S1). No significant differences in cell viability and cell proliferation emerged between the different treatment concentrations; we therefore chose to continue the subsequent experiments using the concentration closest to the average of the most promising IC50 values obtained by testing the extracts under examination (200 µg/mL for both OE and LE).
Since previous studies reported that LPS stimulation was effective in inducing the typical damage occurring in IBD, including the disruption of the intestinal barrier and inflammatory and oxidant reactions [31,32], this model was established to assess the potential effects of OE and LE. Therefore, we treated Caco-2 cells with different concentrations of LPS (1, 10, and 25 µg/mL) to mimic the pathological condition. As shown in Figure 5A,B, LPS treatment (25 µg/mL) for 24 h and 48 h induced a maximum decrease of 15% in cell viability; this amount thus seemed the most suitable to induce the model without excessive cytotoxic effects. The ability of OE and LE to restore cell viability and proliferation rate in LPS-treated Caco-2 cells was then investigated. As reported in Figure 5C-F, after 24 h and 48 h, cell viability and confluency were significantly increased in LPS + OE and LPS + LE cells with respect to LPS-treated (LPS) or untreated cells (Ctrl).
Effects of OE and LE on Intestinal Barrier Permeability
We next sought to analyze the integrity of the Caco-2 cell monolayers after 24 h and 48 h of treatment with LPS, LPS + 200 µg/mL OE and LPS + 200 µg/mL LE. Our data revealed that after 24 h of treatment, LPS induced a significant decrease in the mean TEER values, an effect that was intensified after 48 h (Figure 6A). On the contrary, as shown in Figure 6A, both OE and LE were able to counteract the effect of LPS by maintaining the mean TEER values close to those of the untreated control cells (Ctrl). To confirm this functional effect, the expression of the tight junction (TJ) proteins ZO-1, Claudin-1, and Occludin was also evaluated by immunofluorescence staining. As shown in Figure 6B, 48 h of LPS treatment caused a decreased expression of TJ proteins, but this reduction was less evident in LPS + 200 µg/mL OE and LPS + 200 µg/mL LE cells, especially under OE treatment.
Effects of OE and LE on Oxidative Stress and Inflammatory Response
To evaluate the potential antioxidant effect of OE and LE in the Caco-2 cell model resembling the impairment of intestinal permeability (i.e., LPS treatment), the intracellular ROS levels, as well as the gene expression levels of the antioxidant enzymes CAT, SOD2 and NFE2L2 (the gene encoding Nrf2), were assessed. The CM-H2DCFDA staining revealed that, after 4 h, both OE and LE significantly reduced the rate of increase of LPS-dependent ROS levels in Caco-2 cells (Figure 7A,B). Moreover, as reported in Figure 7C-E, even if after 24 h LPS treatment was ineffective on the expression of CAT, SOD2 and NFE2L2 genes with respect to the control cells, the addition of 200 µg/mL OE or LE caused the up-regulation of all the antioxidant genes. Finally, the effects on the inflammatory response after treatments were assessed. As shown in Figure 8A-D, LPS induced a significant increase in gene expression of the pro-inflammatory cytokines IL-1β, IL-6, IL-8, and TNF-α, while 200 µg/mL OE and 200 µg/mL LE prevented this effect. In line with the increased pro-inflammatory gene expression, LPS treatment also enhanced nuclear translocation of the phosphorylated/active form of nuclear factor kappa-light-chain-enhancer of activated B cells p65 (pNF-κB p65), but this effect was not observed when 200 µg/mL OE and 200 µg/mL LE were added to LPS-treated cells (Figure 8E,F).
Discussion
Every year, it is estimated that about 15 million tons of Citrus by-products are produced worldwide [33]. However, the chemical composition of Citrus by-products, as can be expected for any vegetable raw material, changes depending on the pedo-climatic conditions to which the native plant is exposed, on the fruit processing (e.g., to obtain juice or essential oil), and on the extraction method applied to recover the phytochemicals of interest [34]. Considering this, the critical steps to be addressed in preparing plant complexes for use in the nutraceutical and pharmaceutical fields are the selection of the most appropriate green extraction technique, its optimization and standardization, an in-depth characterization of the obtained extracts, their titration, bioaccessibility studies, and the evaluation of their health properties by pre-clinical studies [35]. Once the starting material has been selected, the extraction technique and conditions must be optimized, not only in terms of the amount of extracted compounds, but also in terms of the phytochemical profile. Indeed, Citrus raw pomace contains different phytochemicals with powerful bioactivities that can potentially find application in the nutraceutical and pharmaceutical fields, especially in the context of chronic inflammatory diseases.
Despite being a waste product, Citrus pomace represents one of the major sources of polyphenols, as the latter are mainly distributed in the flavedo and albedo rather than in the edible part of the fruit. Considering this, one of the main limiting factors for Citrus agro-industrial residue utilization is the lack of a cost-effective extraction method for high-quality compounds. Green extractions have the potential to overcome such limitations and provide higher yields and energy savings [36].
The main goal of a green method is to avoid the use of toxic solvents. Over the years, several supercritical fluids and ionic liquids have been investigated. The former, however, are too expensive for industrial scale-up and too selective for lipophilic compounds, while the use of ionic liquids is rather controversial because they seem to be potentially harmful to the ecosystem. Considering this, the cheapest and most renewable solvents remain ethanol and water, and the limiting factor becomes the extraction technique used. Putnik and co-workers [35] reviewed the latest studies concerning novel and greener methods for the valorization of Citrus by-products. Microwave, ultrasound, pulsed electric field and high-pressure methods were compared with one another and with conventional techniques to highlight the pros, cons and potential scalability of these technologies. Ultrasound-assisted extraction, which disrupts cells by cavitation and promotes the diffusion of bioactive compounds from the plant matrix into the solvent, has proved to be the cheapest, most reproducible and simplest alternative to conventional extraction methods for the recovery of bioactive compounds from Citrus raw pomace. Furthermore, it gives higher extraction yields at lower temperatures and shorter extraction times, ideal parameters for photosensitive and thermolabile compounds [35]. Several authors have recently evaluated and optimized the ultrasound exposure, solvent type, and solvent concentration for the extraction of polyphenols from Citrus pomace, and the best yields were achieved, in line with our results, with a hydroalcoholic mixture containing 80% organic solvent and a matrix/solvent ratio of 1:10, w/v [37,38].
In addition, several primary metabolites, including simple sugars, amino acids and organic acids, were detected in Citrus raw pomace. In the early stage of fruit development, sucrose is the major accumulated sugar, with a sucrose-glucose-fructose ratio of 2:1:1 [47]. However, during fruit maturation, it is hydrolyzed either to fructose and UDP-glucose by sucrose synthase, or to glucose and fructose by invertase. Accordingly, we observed that the ratios between sugars changed in favor of glucose and fructose.
Although citrate is the major organic acid accumulated in Citrus fruit, the synthesis and accumulation of a discrete amount of malic acid in orange and lemon fruit have already been reported [48], whereas other organic acids such as oxalic, tartaric, benzoic, succinic, and malonic acids were detected only in trace amounts [49].
Aspartic acid, asparagine, proline and GABA have previously been detected among the most abundant amino acids in Citrus fruit. Generally, they increase during fruit maturation; however, a conspicuous difference in total amino acid content between different Citrus species has been observed, with lemon showing, in line with our results, a higher content than sweet orange [50].
Many studies have shown that Citrus extracts decrease the onset and progression of several chronic diseases by preventing oxidative stress, tissue damage, and inflammatory processes [51][52][53].
The comparison of the radical scavenging activity measured by in vitro assays based on different mechanisms and reaction environments allows the establishment of key structure-activity relationships (SAR). Recently, it has been demonstrated that hesperidin, hesperetin and neohesperidin are more active in hydrogen atom transfer assays such as ORAC and TEAC, whereas eriocitrin and neoeriocitrin are more active in electron transfer assays such as FRAP and DPPH. Furthermore, it has also been demonstrated that, in combination, they show an interesting synergistic antioxidant activity [10]. These results were also corroborated by the evaluation of the anti-inflammatory activity, investigated by the same assays carried out in the present study (ADA and PIA), where the flavanone mix showed the strongest anti-inflammatory activity [10]. These data appear even more interesting considering the number of bioactive compounds present in a plant complex and the ability of flavanones to remain unchanged after in vitro simulated gastro-duodenal digestion [10], and after 12 and 24 h in the small intestine and colon of rats following oral administration of a Citrus extract [54].
These properties are directly correlated with the flavonoid content of Citrus fruit, able to inhibit different enzymes involved in various cellular processes [55], but also with minor compounds such as phenolic acids and limonoids, which have well-known, strong free-radical quenching activity [46].
Here, we investigated the biological effects of OE and LE in an in vitro model of IBD consisting of Caco-2 cell monolayers stimulated with LPS to induce the typical damage observed in this disease. In our model, 25 µg/mL LPS treatment, even when continued for 48 h, did not cause strong toxicity but reduced cell viability and proliferation by approximately 15%. Accordingly, other studies demonstrated that LPS may act on Caco-2 cells as an antiproliferative and inflammatory stimulus able to impair gut barrier integrity [56,57]. On the other hand, other authors reported that LPS may upregulate cell proliferation rates [58,59]. This discrepancy may be attributable to the source of the LPS used. Indeed, LPS structure varies between bacterial strains, and this can influence its effects [60]. In any case, there is abundant evidence that LPS, regardless of the bacterial strain, can destroy the integrity of the intestinal barrier through disruption of TJs, as well as evidence that various compounds can play a protective role in this process [61]. The present in vitro results seem to confirm the beneficial effects of orange and lemon raw pomace. Indeed, our findings highlighted that OE and LE were able to counteract the detrimental effect of LPS on cell proliferation and intestinal barrier integrity. Accordingly, it has been demonstrated that hesperidin enhances intestinal barrier integrity in Caco-2 cell monolayers, increasing the TEER as well as the mRNA expression and protein levels of occludin, MarvelD3, JAM-1, claudin-1, and claudin-4 [62]. A similar effect was reported also in a Caco-2 and RAW264.7 cell co-culture model treated with naringenin, nobiletin and hesperetin [63]. The protective effect of OE and LE against the LPS-dependent impairment of intestinal barrier integrity and TJ destruction, as for other natural compounds, could be mechanistically associated with the activity of these plant complexes on oxidative stress and inflammation [64,65]. Indeed, our data demonstrated that OE and LE may suppress ROS production, thus hampering the vicious cycle between NF-κB p65 nuclear translocation and the consequent transcriptional activation of pro-inflammatory genes and repression of antioxidant genes.
These effects confirm the evidence that flavanones and polymethoxylated flavones are inhibitors of important proteins involved in the activation of the inflammatory cascade [22,66]. It has been demonstrated that hesperidin is able to inhibit the mitogen-activated protein kinases (MAPKs) and phosphodiesterases [67], whereas several flavanones were found to down-regulate NF-κB [68,69], in turn involved in the modulation of iNOS, COX-2, IL-6, and TNF-α gene expression [70].
We have also previously demonstrated that hesperidin, neohesperidin, hesperetin, eriocitrin, and neoeriocitrin exhibit strong antioxidant activity by reducing ROS release, the formation of carbonylated proteins and lipid peroxides, as well as the oxidation of GSH to GSSG in Caco-2 cell monolayers. They were also able to exert strong anti-inflammatory activity by inhibiting COX enzymes, with a selectivity towards COX-2, as also demonstrated by molecular modelling studies [22]. Indeed, all these factors may contribute to the evident beneficial effects that we found under OE and LE treatments. Several studies have tested other natural compounds against IBD [63], but among them, few exhibit as broad a range of effects as observed in the present study.
In summary, in the present study we found that both OE and LE preserved the integrity of the intestinal barrier against the LPS-induced damage attributable to the colonization of pathogenic bacteria [71]. However, another important aspect to consider is that, in IBD, intestinal barrier dysregulation alone is insufficient to cause disease, but enhanced gut permeability can accelerate disease onset and increase severity through the activation of pro-inflammatory, ROS-sensitive pathways in immune cells [72]. Therefore, although this hypothesis requires further experimental studies to be confirmed, it is conceivable that OE and LE may also counteract intestinal dysbiosis, thus representing a promising therapeutic approach to reverse IBD exacerbation.
Conclusions
In conclusion, this study demonstrated that orange and lemon raw pomace may be considered for the development of drugs and nutraceutical products for the treatment and prevention of IBD. The combination of a wide range of substances such as flavones, flavanones, phenolic acids and limonoids confers on them a potentially high therapeutic effectiveness on the gut barrier, acting via different mechanisms that include the preservation of TJ proteins and the activation of antioxidant and anti-inflammatory pathways. The possibility of overcoming the high cost of processing waste is also a strong advantage.
However, as this is a preliminary study based on in vitro cell-free and cell-based models, the translatability of the results to the complex in vivo scenario must be considered very carefully. Therefore, further in vivo and clinical studies to investigate in depth the antioxidant and anti-inflammatory properties of these plant complexes, as well as the molecular mechanisms and cellular targets involved, are needed to justify their potential role in IBD management.
Figure 2. Distribution percentage of phytochemical classes identified in orange and lemon raw pomace extracts (OE and LE, respectively).
Figure 3. Representative LC-DAD chromatograms of orange raw pomace extract (OE, panel A) and lemon raw pomace extract (LE, panel B) pre- (black) and post-gastro-duodenal digestion (orange and green chromatogram, respectively) acquired at 292 nm.
Figure 4. OE and LE effects on Caco-2 cell viability. Cell viability evaluated by XTT assay and expressed as percentage of cell viability in Caco-2 cells untreated or treated with different concentrations of OE for 24 h (A) and 48 h (B); and in Caco-2 cells untreated or treated with different concentrations of LE for 24 h (C) and 48 h (D). Values are the mean ± SD of three independent experiments repeated at least in quintuplicate. Data were analyzed by two-tailed Student's t test. * p < 0.05; ** p < 0.01; ns: non-significant.
Figure 5. Cell viability and proliferation in Caco-2 cells under different treatments. Cell viability evaluated by XTT assay and expressed as percentage of cell viability in Caco-2 cells untreated or treated with different concentrations of LPS for 24 h (A) and 48 h (B); and in Caco-2 cells untreated (Ctrl) or treated with LPS, LPS + 200 µg/mL OE and LPS + 200 µg/mL LE for 24 h (C) and 48 h (D). Cell proliferation monitored using the IncuCyte live-cell imaging system was expressed as fold change of mean cell confluence in Caco-2 cells Ctrl, LPS, LPS + 200 µg/mL OE, and LPS + 200 µg/mL LE for 24 h (E) and 48 h (F). Values are the mean ± SD of three independent experiments repeated at least in quintuplicate. Data were analyzed by two-tailed Student's t test. * p < 0.05; ** p < 0.01; *** p < 0.001; ns: non-significant.
Table 1. Semi-quantitative analysis done by 1H NMR of the compounds identified in orange and lemon raw pomace extracts (OE and LE). Results are expressed in mg metabolite/g of dried extract (DE) and each value is the mean ± standard deviation of five independent measurements. * d = doublet, dd = double doublet, m = multiplet, t = triplet, s = singlet; § n.d. = not detected.
Table 3. Secondary metabolites of orange and lemon raw pomace extracts (OE and LE, respectively) tentatively identified by LC-DAD-ESI-MS using both the positive and negative ionization modes.
Kinetics Study of Hydrothermal Degradation of PET Waste into Useful Products
Kinetics of the hydrothermal degradation of colorless polyethylene terephthalate (PET) waste was studied at two temperatures (300 °C and 350 °C) and reaction times from 1 to 240 min. PET waste was decomposed in subcritical water (SubCW) by hydrolysis to terephthalic acid (TPA) and ethylene glycol (EG) as the main products. This was followed by further degradation of TPA to benzoic acid by decarboxylation and degradation of EG to acetaldehyde by a dehydration reaction. Furthermore, by-products such as isophthalic acid (IPA) and 1,4-dioxane were also detected in the reaction mixture. Taking into account these most represented products, a simplified kinetic model describing the degradation of PET has been developed, considering irreversible consecutive reactions that take place in parallel in the reaction mixture. The reaction rate constants (k1-k6) for the individual reactions were calculated, and it was observed that all reactions follow first-order kinetics.
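As a hedged illustration of the first-order consecutive/parallel scheme summarized in the abstract, the sketch below integrates a reduced version of the model (PET hydrolysis plus the two follow-up degradations) with scipy; the assignment of rate constants to steps and their numeric values are assumptions for illustration, not the fitted k1-k6 of the study.

```python
# Minimal sketch of a first-order consecutive/parallel kinetic scheme:
# PET -> TPA + EG (hydrolysis), TPA -> benzoic acid (decarboxylation),
# EG -> acetaldehyde (dehydration). Rate constants are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.05, 0.01, 0.02   # 1/min (illustrative, not fitted)

def rhs(t, y):
    pet, tpa, eg, ba, ach = y
    return [-k1 * pet,
            k1 * pet - k2 * tpa,   # TPA formed from PET, lost to benzoic acid
            k1 * pet - k3 * eg,    # EG formed from PET, lost to acetaldehyde
            k2 * tpa,
            k3 * eg]

sol = solve_ivp(rhs, (0.0, 240.0), [1.0, 0, 0, 0, 0],
                t_eval=np.linspace(0, 240, 9))
for t, pet, tpa in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.1f} min  PET = {pet:.3f}  TPA = {tpa:.3f}")
```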
Introduction
The global production of plastic materials is still growing and exceeded 367 million metric tons in 2020 [1], making the recycling of plastic waste an urgent global commitment. Polyethylene terephthalate (PET) is the third most widely used polymer [2] in the production of packaging, after polyethylene (PE) and polypropylene (PP). The primary use of PET polymer is in the textile industry, although vast amounts of this material are used to make packaging in the food, medical and cosmetic industries. PET bottles are among the most widely consumed plastic products [3]. PET bottles are durable, clear, light, inexpensive and heat-stable, and they are usually used for beverage and food applications [4]. The global production of PET bottles reached 13.1 million tons in 2020 [5]. PET is one of the main condensation polymers, which are known to be composed of different monomers. Specifically, PET can be synthesized by esterification of the monomers terephthalic acid (TPA) and ethylene glycol (EG) or by transesterification of dimethyl terephthalate (DMT) and EG [6]. Condensation polymers have ether, ester, or amide bonds, which can be depolymerized back to monomers by solvolysis (hydrolysis, alcoholysis) under mild conditions [7]. Even though PET can be mechanically recycled several times, a large amount of waste is still generated [3,8]. Above all, recycling problems occur when PET bottles of different colors are mixed (colorants and additives present a significant problem [9]) and when they are mixed with other plastic materials (the caps of PET bottles are usually made from polyolefins and have to be recycled separately) [3]. In addition, PET can also be recycled by pyrolysis. Pyrolysis is one of the most studied methods for the chemical recycling of different polymers, including PET. Due to the possibility of obtaining valuable chemicals, liquid fuel and gas, this method has received increasing attention in recent years [10][11][12][13]. In general, a high temperature (400-800 °C) is needed for complete degradation of PET waste. Pyrolysis of PET waste mainly produces a gas phase, where more than 45% of the PET material is degraded to CO/CO2 and light hydrocarbons (4 wt%), of which 2.8% corresponds to methane [10]. Furthermore, during pyrolysis, the char residue decreases while the yield of gaseous products increases with increasing temperature and reaction time [10,14,15].
Traditionally used plastic conversion methods, such as incineration and mechanical recycling, raise some environmental concerns and are not economically attractive. In recent years, the chemical recycling of waste using supercritical fluids (SCF) has become interesting due to several advantages. Processes using SCFs represent a green technology that is easy to use and has other important advantages (reducing energy consumption and emissions into the environment). One promising medium is water in the subcritical (SubCW) and supercritical (SCW) state, which provides a strong oxidation environment and can degrade plastic and bio-waste to produce various high value-added products [16]. These processes, if carefully optimized, can produce fuels and even monomers that can be re-used to synthesize polymers, reducing the direct demand for fossil raw materials [7]. SubCW is often used for the processing and decomposition of biomass [16-18], and currently it is also increasingly used for the chemical recycling of polymers into monomers [7,19]. Although SubCW has many beneficial properties (compared to conventional hydrolysis media), much work remains to be done to understand the reaction pathways of polymers in this medium. Understanding the hydrolysis process and the degradation mechanism is crucial for scaling up the recycling process to an industrial level. Due to the complex composition and different reactivity of the individual components contained in waste materials, the reaction mechanisms and the influence of process parameters are still relatively unexplored. Recycled monomers and other low-molecular-weight compounds are an important category of secondary raw materials used in several promising industries, such as the chemical, paint, textile, and pharmaceutical industries.
PET in SubCW decomposes into the major monomers TPA and EG and other secondary products [19-21]. TPA is an organic compound usually produced by homogeneous oxidation of p-xylene in the liquid phase [22]. TPA is of great industrial importance as a raw material for various widely used plastics, such as PET, polybutylene terephthalate (PBT), and bioplastics, which have been in great demand recently [23]. TPA is an important chemical that is also widely used in the pharmaceutical industry (as a raw material for certain drugs) [24], in the paints and coatings industry (as a carrier), in the arms industry (TPA smoke for grenade fillers) [25], and in the manufacture of clothing and plastic bottles [26]. Production of purified TPA reached around 84 million metric tons in 2018, and forecasts indicate that it is expected to reach 107 million metric tons by 2023 [27]. Another important carboxylic acid formed during the degradation of PET in SubCW is benzoic acid [19,23]. Benzoic acid is currently produced by partial oxidation of toluene with oxygen in the presence of manganese or cobalt catalysts [28]. Benzoic acid is widely used in the food industry as an antimicrobial preservative in food and beverages, especially in carbonated drinks [28], as it has strong antibacterial action against the microorganisms that are a significant cause of food spoilage. It is also used in the cosmetics and personal care industries [29]. Benzoic acid is further used as a precursor of other products such as plasticizers and antifungal ointments for medical use, and as a calibration substance for bomb calorimeters [30,31]. In 2020, the global production of benzoic acid amounted to 538,770 tons, and it is expected to grow to 628,350 tons by 2026 [32]. Therefore, due to the increasing use of TPA and benzoic acid, the recovery of these organic compounds from waste is highly important.
In our previous work, it was found that SubCW can be used as an efficient medium for the hydrolysis of colored and colorless PET waste. A high yield of the desired TPA product (approximately 90%) can be obtained under relatively mild hydrothermal conditions (300 °C, 95 bar, and 30 min). Nevertheless, besides TPA and EG as primary products, isophthalic acid (IPA) and 1,4-dioxane, as well as secondary products such as acetaldehyde and benzoic acid, were also identified in the reaction mixture after degradation, which indicates further degradation of the primary products [19].
Although many authors have studied the hydrothermal degradation of PET waste [20,21,33], no data could be found in the literature dealing with its decomposition kinetics. The aim of this work is to study the kinetics of hydrothermal degradation of colorless PET waste, to estimate kinetic data, and to evaluate its degradation pathway.
Sample Preparation
PET colorless beverage bottles were collected, and material was prepared for experiments. First, the caps and the labels from the bottles were removed and then the material was cleaned and cut into small equal parts (1 × 1 cm).
Elemental Analysis
The colorless PET bottle waste sample was also characterized with the Perkin Elmer 2400 Series II system analyzer. The content of the main elements (carbon, hydrogen, nitrogen, and sulfur) was determined (Table 1), while oxygen was calculated by the mass balance.
FTIR Analysis
Fourier transform infrared (FTIR) spectroscopy of the colorless PET waste, recycled TPA, and a TPA standard was performed on an IRAffinity-1 spectrophotometer equipped with an attenuated total reflectance (ATR) cell. The spectra of individual samples were recorded at wavenumbers from 4000 to 400 cm−1 against air as the background, at a resolution of 4 cm−1 and with a total of 20 co-added scans. Finally, the data were analyzed with the high-performance IRsolution software.
Subcritical Water Treatment of PET
The decomposition of colorless PET waste in SubCW (material/water ratio 1/10) was performed in a 75 mL high-temperature and high-pressure batch reactor (Parr Instruments, Moline, IL, USA). A detailed description of the experimental system (Figure 1) and procedure can be found in the literature [19]. The degradation reactions of PET were carried out at temperatures of 300 and 350 °C and reaction times between 1 and 240 min. The mixture of PET and water in a weight ratio of 1/10 was loaded into the batch reactor at room temperature. Before hydrolysis of the PET waste, the reactor was flushed several times with nitrogen, and this gas was then used to set the initial pressure in the reactor to 20 bar. The reactor was heated by a high-power heating collar controlled by a temperature controller. At temperatures of 300 and 350 °C, the pressure in the reactor was 89 ± 2 bar and 168 ± 2 bar, respectively. During the degradation of the colorless PET waste, the reaction mixture was stirred constantly at 1000 min−1. After a specific time, the reactor was immediately quenched with cold water, the gas was released from the system, and the products were collected from the reactor. The post-reaction mixture contained liquid and solid products. First, 1 mL of the aqueous post-reaction mixture was separated and filtered through a 45 µm filter, and the filtrate was stored for further HPLC analysis. The remaining reaction mixture was treated with 4 M NaOH to form the sodium salt of TPA, which is soluble in water, and then filtered to remove any undegraded PET residues. Finally, TPA was precipitated with 1.5 M HCl. After filtration and drying, the concentrations of the formed TPA and of the undegraded PET waste residues in the post-reaction mixture were calculated by the following Equations (1) and (2):

c_TPA = m_TPA / V_H2O (1)

c_PET = m_PET / V_H2O (2)

where c_TPA is the concentration of TPA, c_PET is the concentration of PET, m_TPA is the mass of precipitated TPA, m_PET is the mass of unreacted colorless PET waste, and V_H2O is the initial volume of distilled water before hydrolysis.
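As a trivial numeric illustration of Equations (1) and (2), the sketch below computes both concentrations; all masses and the water volume are hypothetical placeholders, not measured values from this work.

```python
# Workup quantification per Equations (1) and (2); values are hypothetical.
m_tpa = 0.55   # g, precipitated TPA
m_pet = 0.02   # g, unreacted PET residue
v_h2o = 30.0   # mL, initial volume of distilled water

c_tpa = m_tpa / v_h2o  # g/mL, Eq. (1)
c_pet = m_pet / v_h2o  # g/mL, Eq. (2)
print(f"c_TPA = {c_tpa:.4f} g/mL, c_PET = {c_pet:.4f} g/mL")
```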
HPLC Method
The concentrations of products in the aqueous phase after hydrolysis of colorless PET waste were determined using an HPLC system (model 1100, Agilent Technology, Waldbronn, Germany) with a UV-Vis detector [19]. The column was an Agilent ZORBAX Eclipse XDB C18 column (4.6 × 150 mm, 3.5 µm particle diameter), and the column temperature was 30 °C. Mobile phase A was Milli-Q water with 0.1% TFA/ACN in a ratio of 60/40, while mobile phase B was Milli-Q water with 0.1% TFA/ACN in a ratio of 90/10. The gradient was: 0 min 90% B, 3 min 70% B, 8 min 60% B, with detection at 242 nm. The injected sample volume was 20 µL, while the flow rate was 1 mL/min. The identification and quantification of the resulting products were performed using calibration curves of individual standards [19].
Degradation of Colorless PET Waste in SubCW
The results of our previous work [19] and data from the literature [20,21] show that the depolymerization of PET starts at 250 °C at short reaction times, where the main monomers, TPA and EG, are formed by the hydrolysis reaction. At the same time, small amounts of unreacted PET, oligomers, IPA, and 1,4-dioxane were also present in the post-reaction mixture [19,21]. With a further increase in temperature to 300 °C, all PET waste degraded in the SubCW and gave the highest yield of TPA and EG [19,21]. At higher temperatures, the concentration of EG in the reaction mixture decreased with increasing temperature and time [21]; at the same time, acetaldehyde, diethylene glycol, and triethylene glycol were formed from EG via dehydration and dimerization reactions, respectively [19,21]. Similarly, the concentration of TPA started to decrease at 300 °C with the extension of reaction time beyond 30 min, and benzoic acid was formed by a decarboxylation reaction in which CO2 [19] was eliminated from the TPA molecules (Figure 2).
To study the kinetics of PET degradation into valuable products, in the present work, colorless PET waste plastic was degraded in SubCW at temperatures of 300 °C and 350 °C and reaction times of up to 240 min. The focus was on the most represented components in the aqueous phase. The concentrations (g_product/g_PET) of the main products, by-products, and secondary products are presented in Table 2. It was found that all PET waste had already decomposed during the heating period to the desired temperature (12 min), and the main products were formed. The data in Table 2 show that the highest concentrations of the main products were achieved at 300 °C, namely for EG (0.22 g_EG/g_PET) at 10 min and for TPA (0.76 g_TPA/g_PET) at 30 min. The maximum concentrations of the by-products (IPA and 1,4-dioxane) were also detected at a reaction temperature of 300 °C when the reaction time increased to 60 min for 1,4-dioxane and to 120 min for IPA; the corresponding concentrations were 0.05 g_1,4-dioxane/g_PET and 0.018 g_IPA/g_PET. The secondary products (acetaldehyde and benzoic acid) were formed by hydrothermal degradation of the main products. It was found that the concentrations of these secondary products increase with increasing reaction temperature to 350 °C and with increasing reaction time. The highest concentrations of benzoic acid (0.07 g_benzoic acid/g_PET) and acetaldehyde (0.27 g_acetaldehyde/g_PET) were detected at 350 °C and reaction times of 240 min and 180 min, respectively. At these conditions, new unidentified products in the aqueous phase were also observed (e.g., possible residues of additives). Moreover, it was assumed that while benzoic acid is formed, part of it is simultaneously further degraded by a decarboxylation reaction, causing the formation of a new product, benzene [34].

In the following, based on our research and data from the literature [19,21,34], the possible degradation mechanism of PET waste in SubCW is presented in Figure 2.

FTIR Analysis of Colorless PET Waste and Recycled TPA

The FTIR technique was used to identify the main functional groups present in the colorless PET waste before hydrolysis and in the resulting recycled TPA after SubCW treatment. The spectrum of the colorless PET waste sample (Figure 3) contained absorption bands at 3432 cm−1 (OH group), at 3100-2800 cm−1 (aromatic and aliphatic C-H stretch), at 1720 cm−1 (ester carbonyl C=O stretch), and at 1453-1342 cm−1 (bending and wagging vibrational modes of the EG segment). The absorption bands at 1240 cm−1 and 1124 cm−1 confirmed the terephthalate group (OOCC6H4-COO). The absorption bands were consistent with previous works [35-37]. FTIR analysis of the solid product obtained after hydrothermal degradation of colorless PET waste confirmed that the product contains the main monomer TPA of high purity; its spectrum included, among others, an O-H bend band. Similar results for the functional groups of TPA have been reported previously [19,20].
Kinetics of Hydrothermal Degradation of Colorless PET Waste
Because the PET waste already completely decomposes during the heating period to the desired temperature (12 min), the reaction kinetics during the heating period were also studied, and the degradation products were analyzed and considered for the kinetic modeling. The results show that the hydrothermal decomposition of colorless PET waste can be represented by a kinetic scheme of irreversible reactions, as presented in Figure 5. The model used in this study follows the concentration profiles of PET waste (A), the major PET hydrolysis products TPA (B) and EG (E), the PET degradation by-products 1,4-dioxane (D) and IPA (G), and the secondary products formed from the major hydrolysis products, i.e., benzoic acid (C) and acetaldehyde (F). With prolonged reaction time, CO2 is cleaved from the TPA molecule [19] by decarboxylation to form the secondary product benzoic acid (C), while acetaldehyde (F) is most likely formed by dehydration of EG (Figure 6). As mentioned above, the proposed kinetic model representing the decomposition of colorless PET waste has been simplified; specifically, it does not consider unknown components that appear in the aqueous phase in the initial stage of PET degradation and most likely represent oligomers of the PET polymer [21], as well as decomposition products of additives present in the waste PET plastic [7,38].
Figure 6. Colorless PET waste and its degradation products: TPA, EG, benzoic acid, 1,4-dioxane, acetaldehyde, and IPA at temperatures of 300 and 350 °C during the heating period (first 12 min) and reaction at constant conditions. The symbols represent experimental measurements, while the curves represent the kinetic model.
For the kinetic modeling of the decomposition of colorless PET waste, it was assumed that the hydrolysis of PET waste follows first-order kinetics, because the difference between the experimental and model data was the lowest compared to zero- and second-order kinetics. The corresponding reaction rates for all six reactions were described by the following differential Equations (3)-(9):

−dc_PET waste/dt = (k_1 + k_3 + k_4 + k_6) c_PET waste (3)

dc_TPA/dt = k_1 c_PET waste − k_2 c_TPA (4)

dc_benzoic acid/dt = k_2 c_TPA (5)

dc_1,4-dioxane/dt = k_3 c_PET waste (6)

dc_EG/dt = k_4 c_PET waste − k_5 c_EG (7)

dc_acetaldehyde/dt = k_5 c_EG (8)

dc_IPA/dt = k_6 c_PET waste (9)

where c_PET waste is the concentration of PET waste, c_TPA the concentration of TPA, c_benzoic acid the concentration of benzoic acid, c_1,4-dioxane the concentration of 1,4-dioxane, c_EG the concentration of EG, c_acetaldehyde the concentration of acetaldehyde, and c_IPA the concentration of IPA; k_1, k_2, k_3, k_4, k_5, and k_6 are the reaction rate constants of the individual reactions, and t is the reaction time. Differential Equations (3)-(9) were numerically integrated using Scientist (MicroMath) software, minimizing an objective function represented by the sum of the squared differences between the experimental and calculated concentrations. The calculated reaction rate constants are shown in Table 3; the standard deviations of the constants k_2, k_3, and k_6 are significant due to the scattering of the experimental data (for example, k_6 was 0.00261 ± 0.00376 min−1 at 300 °C and 0.00169 ± 0.00392 min−1 at 350 °C).

Figure 6 shows the distribution of the PET waste and its decomposition products in dependence on the reaction time and temperature, where the first 12 min represent the heating period, followed by reaction at constant conditions, together with a comparison between the experimental and model data. The results show that the concentration of PET waste decreases significantly with increasing temperature and reaction time, while the TPA and EG concentrations first increase and reach their maxima at 300 °C, at 42 min for TPA (76.12 mg/mL) and at 22 min for EG (22.4 mg/mL). After that, the concentrations of TPA and EG start to decline. During the hydrolytic degradation reaction at 300 °C and shorter reaction times (13 min), two new by-products, 1,4-dioxane and IPA, start to form, with concentrations of 0.2 mg/mL and 0.75 mg/mL, respectively. IPA is an isomer of TPA and is often added in the production of PET materials [39], while 1,4-dioxane is a cyclic ether formed as a by-product in the PET production process [40]. The corresponding reaction rate constants k_3 and k_6 at 300 °C are similar and are higher than at 350 °C, which indicates faster degradation of both compounds at the higher temperature. On further prolonging the reaction time at 300 °C, the formation of benzoic acid is observed in the post-reaction mixture at a reaction time of 42 min, and its concentration increases with time due to the decarboxylation of CO2 from the TPA molecule, which was also confirmed by gas analysis in our previous study [19]. The highest concentration of benzoic acid, 6.91 mg/mL, is achieved at a temperature of 350 °C and a reaction time of 240 min. Furthermore, Figure 6 also shows that at the higher temperature (350 °C) the concentrations of TPA and EG decrease faster with prolongation of the reaction time than at 300 °C; therefore, the reaction rate constants k_2 and k_5, indicating the rate of formation of their degradation products benzoic acid and acetaldehyde, are higher at 350 °C. The reaction rate constant k_1 for TPA formation is similar at both temperatures.
It can also be seen from Figure 6 that at both temperatures, the concentrations of products IPA, 1,4-dioxane, and acetaldehyde first increase and then begin to decline after a certain time.
The significant differences between the experimental and model results for the decomposition of primary and secondary products were assigned to the formation of unidentified products. It would be expected that, with further degradation of TPA, the concentration of benzoic acid would increase even more with the prolongation of the reaction time at 350 °C. However, HPLC analysis of the aqueous solution containing the degradation products showed that unknown (newly formed, unidentified) degradation products began to appear, which could be attributed to the degradation of benzoic acid. Lindquist and Yang [34] found that benzoic acid degraded in SubCW at 350 °C as the reaction time increased (more than 25% of the benzoic acid was degraded by a reaction time of 240 min), as CO2 was cleaved from the benzoic acid molecule, thereby causing the formation of a new benzene product [34], which was not analyzed in this study.
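As an illustration of the fitting procedure described above, the following sketch (not the authors' Scientist/MicroMath code) integrates Equations (3)-(9) with scipy and fits k1-k6 by least squares; the time grid, initial concentration, and measured-concentration array are hypothetical placeholders to be replaced by the experimental data.

```python
# First-order kinetic fit for Eqs. (3)-(9): integrate the ODE system for trial
# rate constants and minimize the sum of squared model-data differences.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rates(t, c, k1, k2, k3, k4, k5, k6):
    A, B, C, D, E, F, G = c  # PET, TPA, benzoic acid, 1,4-dioxane, EG, acetaldehyde, IPA
    return [
        -(k1 + k3 + k4 + k6) * A,  # Eq. (3): PET consumed by four parallel paths
        k1 * A - k2 * B,           # Eq. (4): TPA formed from PET, lost to benzoic acid
        k2 * B,                    # Eq. (5): benzoic acid
        k3 * A,                    # Eq. (6): 1,4-dioxane
        k4 * A - k5 * E,           # Eq. (7): EG formed from PET, lost to acetaldehyde
        k5 * E,                    # Eq. (8): acetaldehyde
        k6 * A,                    # Eq. (9): IPA
    ]

# Hypothetical sampling times (min) and measured concentrations (mg/mL).
t_data = np.array([1.0, 5.0, 12.0, 30.0, 60.0, 120.0, 240.0])
c_data = np.zeros((7, t_data.size))
c0 = [100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # only PET present at t = 0

def residuals(k):
    sol = solve_ivp(rates, (0.0, t_data[-1]), c0, t_eval=t_data, args=tuple(k))
    return (sol.y - c_data).ravel()

fit = least_squares(residuals, x0=np.full(6, 0.01), bounds=(0.0, np.inf))
print(dict(zip(["k1", "k2", "k3", "k4", "k5", "k6"], fit.x)))
```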
Conclusions
A kinetic study of the hydrothermal degradation of PET waste to the major monomers TPA and EG and to secondary products (IPA, 1,4-dioxane, acetaldehyde, and benzoic acid) shows that all individual reactions follow first-order kinetics. The reaction rate constant of formation of TPA (k1) was not significantly different at the two temperatures, while the reaction rate constants k3 and k6, indicating the formation of 1,4-dioxane and IPA, were higher at 300 °C than at 350 °C, which indicates that at higher temperatures these compounds undergo faster further degradation. Conversely, the reaction rate constants k2 and k5, indicating the formation of benzoic acid and acetaldehyde, were higher at 350 °C than at 300 °C, which indicates a higher degradation rate of TPA and EG at the higher temperature. The maximum concentration of TPA in the aqueous phase was obtained at 300 °C and a reaction time (heating period not included) of 30 min (0.76 g_TPA/g_PET), while the highest concentration of EG, 0.22 g_EG/g_PET, was achieved at 300 °C and 10 min. The concentration of benzoic acid, a degradation product of TPA, increased with increasing temperature and time; the highest concentration, 0.07 g_benzoic acid/g_PET, was detected at 350 °C and 240 min. A similar trend was observed for acetaldehyde, a degradation product of EG, where the highest concentration, 0.27 g_acetaldehyde/g_PET, was achieved at 350 °C and 180 min. The concentrations of 1,4-dioxane (0.05 g_1,4-dioxane/g_PET) and IPA (0.018 g_IPA/g_PET) were highest at a temperature of 300 °C and reaction times of 60 min and 120 min, respectively.
|
2021-12-26T16:09:04.072Z
|
2021-12-23T00:00:00.000
|
{
"year": 2021,
"sha1": "616e007a5170fdc94e6954822dcffdad97c06c2c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9717/10/1/24/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "47baee46f10793e6ba82bbf98eb436a5aea05a5d",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
}
|
257782423
|
pes2o/s2orc
|
v3-fos-license
|
Research on time-varying economic dispatch of smart grid based on Lagrangian pairing
To address the time-varying load problems that arise in practical smart grid applications, this paper designs a class of time-varying economic dispatch algorithms for smart grids based on the Lagrangian duality idea and demonstrates the convergence of the algorithms.
Introduction
Currently, the smart grid is a hot issue in power system research [1,2]. The smart grid, called "Grid 2.0", is an intelligent grid that combines the traditional grid with advanced information and communication facilities to achieve two-way information transfer between the grid and end users. It isolates faulty components of the grid and automatically restores the system to its normal state to ensure system stability; it facilitates user participation in managing the operation of the power system; it provides a fast solution for restoring power when the grid is attacked and fails; and it allows all types of power generation systems to be easily connected to the system [3]. All countries attach great importance to the construction and development of smart grids. For example, the U.S. is building a smart grid in three directions: 1) upgrading the grid infrastructure to improve the reliability of power supply; 2) applying advanced information, communication, and computer technologies to the power system; and 3) enhancing the information transfer between customers and power companies through advanced meter improvements. The main focus of the European smart grid is to improve the utilization of new energy sources so as to address environmental, climate, and energy issues. In addition, the main focus of China's smart grid is to meet the energy supply needed for the country's economic development. The development of the smart grid is accompanied by a series of new technologies, new equipment updates, and breakthroughs, and will encounter many new problems [4]. Among them, the issue of economic dispatch is widely studied; its main focus is on the distribution of power among the generators, which aims to reduce the cost of power generation and ensure the power balance of the grid. The existing economic dispatch studies of smart grids mainly focus on the case of constant load, which is too strict an assumption [5]. In this paper, a new class of economic dispatch algorithms is designed considering the time-varying load case encountered in practical applications, and the convergence of the algorithms is rigorously demonstrated.
Symbols
∇_x f(x, t) and ∇_xx f(x, t) denote the first- and second-order partial derivatives of the function f(x, t) with respect to the vector x. ∇_xt f(x, t) denotes the first-order partial derivative of the function ∇_x f(x, t) with respect to t.
Problem Description
Suppose there is a grid with N buses, where each bus line contains a generator and a local load. Denote by p_i^g ∈ ℝ and p_i^d ∈ ℝ the active power generated by the generator on bus i and the active power required by the load on bus i, respectively, and assume that each bus i is assigned a local cost function

f_i(p_i^g) = a_i (p_i^g)^2 + b_i p_i^g + c_i, (1)

where a_i > 0 and b_i, c_i ∈ ℝ. Next, consider the following economic dispatch problem:

min over p_1^g, ..., p_N^g of Σ_{i=1}^N f_i(p_i^g), subject to Σ_{i=1}^N p_i^g = Σ_{i=1}^N p_i^d(t), (2)

where p_i^d(t) is known only to bus i. The goal of this paper is to design an algorithm for each bus such that the resulting system realizes p_i^g(t) − p_i^{g*}(t) → 0 as t → ∞, where p_i^{g*}(t) is the optimal solution of the above optimization problem.

Assumption 1. The first-order and second-order derivatives of the load p_i^d(t) exist and are bounded.

Note 1. From a_i > 0 it can be inferred that the overall cost function Σ_i f_i(p_i^g) is convex, so the optimal solution of problem (2) exists. Assumption 1 can be satisfied in many cases because the actual load demand always varies slowly.
Algorithm derivation
Consider the following economic dispatch algorithm; the algorithm can realize the time-varying economic dispatch of the smart grid. Proof: The economic dispatch problem is essentially a constrained optimization problem, and according to the design ideas of typical optimization algorithms, the algorithm can be designed by constructing a Lagrangian function and following the duality idea. Specifically, the following Lagrangian function is defined:

L(p^g, λ, t) = Σ_{i=1}^N f_i(p_i^g) + λ (Σ_{i=1}^N p_i^d(t) − Σ_{i=1}^N p_i^g). (8)

Considering the original optimization variables and the dual variable in combination, i.e., letting z = (p_1^g, ..., p_N^g, λ), the above economic dispatch algorithm can be written in a more concise form by means of the Lagrangian function, whose first-order gradient ∇_z L(z, t) and second-order gradient ∇_zz L(z, t) follow from expression (8).
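To make the duality idea concrete, the following minimal discrete-time sketch applies dual ascent to problem (2) with the quadratic costs above. It is an illustration only, not the paper's continuous-time algorithm (whose exact update did not survive extraction); the cost coefficients and load profile are hypothetical.

```python
# Dual ascent on the Lagrangian of problem (2) with quadratic costs
# f_i(p) = a_i p^2 + b_i p + c_i. For fixed lambda, the primal minimizer is
# p_i = (lambda - b_i) / (2 a_i); lambda then ascends on the power-balance
# constraint sum(p_i) = total demand.
import numpy as np

a = np.array([0.10, 0.08, 0.12])  # hypothetical quadratic coefficients, a_i > 0
b = np.array([2.0, 2.5, 1.8])     # hypothetical linear coefficients

def total_load(t):
    """Slowly time-varying total demand (cf. Assumption 1)."""
    return 30.0 + 5.0 * np.sin(0.01 * t)

lam, alpha = 0.0, 0.05            # dual variable and step size
for t in range(2000):
    p = (lam - b) / (2.0 * a)                 # primal step for fixed lambda
    lam += alpha * (total_load(t) - p.sum())  # dual step on the balance constraint

print("dispatch:", np.round(p, 3), "| balance mismatch:", round(total_load(t) - p.sum(), 4))
```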
To further investigate the convergence properties of the algorithm, a simplifying calculation shows that the first-order gradient of the Lagrangian function and its partial derivative with respect to time have a special symmetric form, and the inverse of the Hessian matrix can be computed accordingly. Based on the algorithm in Theorem 1, numerical simulation yields the specific trajectories under the given bus load profiles, as shown in Figures 1 and 2. It can be seen that the algorithm can achieve the economic dispatch goal, and the convergence rate has exponential characteristics.
Conclusion
The time-varying economic dispatch problem of the smart grid is studied, the corresponding algorithm is designed based on the Lagrangian duality idea, and the effectiveness of the algorithm is demonstrated. The designed dispatch algorithm is simple in form, low in complexity, and easy to compute, which suits the operating requirements of actual grid systems; in addition, the algorithm has an exponential convergence speed and is robust to external disturbance signals, so it is applicable to smart grids of different scales. Future research can turn to the economic dispatch problem of communication-constrained smart grids, economic dispatch with information transmission delays, and economic dispatch with finite-time convergence.
|
2023-03-29T15:24:11.867Z
|
2023-01-01T00:00:00.000
|
{
"year": 2023,
"sha1": "f5e64bc7e9339e8ddc3acc76f2abbc52a6fd6608",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/12/e3sconf_esat2023_03010.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f29e274be8e3680dda1421971ca2b960e851ce4a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
}
|
236341626
|
pes2o/s2orc
|
v3-fos-license
|
SEASONAL CHANGE OF AEROBIC PERFORMANCE OF YOUNG SOCCER PLAYERS
Longitudinal studies evaluating the seasonal change of aerobic capacity in young soccer players of different age categories are limited. The aim of this study was to investigate the seasonal changes in the aerobic level of the youth academy players of a professional soccer team. This research study was carried out with a total of 51 soccer players in the U14, U15, U16, U17, and U18 categories of an elite soccer team. Aerobic capacities of the athletes were measured by the Yo-YoIRT 1 Test. In the analyses, the normal distribution evaluations of the data were made with the Shapiro-Wilk test and the variance homogeneities were tested with Levene's Test. The One Way ANOVA test was used to analyze all the parametric data. All statistical evaluations were performed with the help of the SPSS 21 package program. According to the Yo-YoIRT 1 test, increases or decreases were determined in the pre-season, mid-season and end-of-season evaluations. As a result, in the present study conducted to examine the seasonal changes of the young elite soccer players in the U14, U15, U16, U17, and U18 categories, Yo-YoIRT 1 values in all categories increased significantly at the end of the season when compared to the pre-season and the mid-season. The U14 and U16 age groups gained increasing acceleration starting from the pre-season. However, in the U15, U17, U18 age groups, the case is that acceleration decreases in the middle of the season and increases at the end of the season. It is thought that differences can be observed in the responses to the training loads during maturation. It can be suggested that the increase at the end of the season
compared to the pre-season in all groups may be due to the fact that the adaptation of the athletes to the training programs is affected by the developmental characteristics of the age categories over time.
The aerobic capacities of the athletes were measured using the Yo-Yo IRT 1 test. In the analyses, the normal distribution of the data was evaluated with the Shapiro-Wilk test and the homogeneity of variances was tested with Levene's test. One-way ANOVA was used to analyze all parametric data; all statistical evaluations were performed with the help of the SPSS 21 program. According to the Yo-Yo IRT 1 test, increases and/or decreases were determined in the pre-season, mid-season, and end-of-season evaluations. As a result, in the present study conducted to examine the seasonal changes of young elite soccer players in the U14, U15, U16, U17, and U18 categories, the Yo-Yo IRT 1 values in all categories increased significantly at the end of the season compared to the pre-season and mid-season. The U14 and U16 age groups achieved increasing acceleration starting from the pre-season. However, in the U15, U17, and U18 age groups, acceleration decreased in the middle of the season and increased at the end of the season. It is believed that differences can be observed in the responses to training loads during growth and maturation. It can be suggested that the increase at the end of the season compared to the pre-season in all groups may be due to the fact that the athletes' adaptation to the training programs is affected by the developmental characteristics of the age categories over time.
INTRODUCTION
Considering the duration of the soccer game, aerobic metabolism is dominant. This reveals that the aerobic endurance feature is an important condition for success in soccer. However, it is not possible to talk about aerobic metabolism as the key to success in soccer.
Due to the multifactorial nature of soccer, success requires not only the development of aerobic capacity but also many other features, such as anaerobic qualities, technical and tactical dimensions, and psychological state, to be at a high level. The association of recovery between high-intensity activities in the game with aerobic metabolism makes the aerobic endurance feature even more important (Vanttinen, Blomqvist, Nyman, & Hakkinen, 2011; Hammami et al., 2013; Sagelv et al., 2019).
Physical characteristics of soccer players are known to improve with biological age and training age. It has been emphasized that soccer requires players' fitness levels to be kept constantly at a high level, but that the foundations of physical performance should be laid during their youth (Dragijsky, Maly, Zahalka, Kunzmann, & Hank, 2017). In terms of talent development, soccer players between 15 and 18 years old are in a critical development period (Saether & Aspvik, 2014). It is known that increases in aerobic performance through training bring positive improvements in coping with the stress faced by soccer players during a match (Helgerud, Engen, Wisløff, & Hoff, 2001; Impellizzeri et al., 2006). The capacity of soccer players to attain and maintain good physiological performance throughout the season is very important (Reilly & Williams, 2003). It is important to determine what effects this may have on pre-season and seasonal change levels (Caldwell & Peters, 2009).
Yo-YoIRT 1 is a valid field test widely used in soccer. Its widespread use is due to its simplicity, low cost, and the ability to test several players at the same time (Fanchini et al., 2014). Yo-YoIRT 1 gives results similar to laboratory tests in the evaluation of VO2max (Sylejmani et al., 2019). Studies have shown that it is associated with the high-intensity activities performed during a match and that it is informative about competition levels throughout the season (Bangsbo, Iaia, & Krustrup, 2008; Mohr & Krustrup, 2014). Yo-YoIRT 1 has significant correlations with the total distance (r = 0.62) and the high-intensity distance (r = 0.73) covered by athletes in youth teams (Roe & Malone, 2016).
Although there are studies investigating the seasonal changes of soccer players, there is no study investigating the aerobic levels of the players of different age groups of a professional soccer team with the same soccer culture. Knowing the changes in aerobic capacity of youth academy players in pre-season, mid-season and end-of-season will provide important information for optimal performance and enable critical periods to be determined.
This study predicts that aerobic capacity increases as the soccer players' ages increase. Within the frame of this hypothesis, we assumed that there would be differences in seasonal variation depending on the age of the teams in the different categories. From this perspective, it is anticipated that this research will contribute significantly to the identification of critical stages in the development of soccer players trained in youth academies. In addition, being aware of the weaknesses of the players during their youth and training to address them are considered important for reaching the optimal performance level by eliminating these weaknesses before adulthood. Based on the above information, the purpose of this study is to investigate the seasonal changes in the aerobic levels of the youth academy players in the U14, U15, U16, U17, and U18 categories of a professional team.
Anthropometric Measurements
The heights of the subjects were measured in an anatomical posture, with bare feet and heels together, while the subject held a breath. The measurement was taken with the head in the frontal plane after the overhead table was positioned at the vertex point, and the values were recorded in centimeters. Body weights were measured in kg with the subjects in bare feet and an anatomical posture, wearing only shorts.
Aerobic Fitness
Maximal aerobic capacities of the soccer players were measured in meters with the Yo-Yo IRT1. This is a test in which the speed increases at regular intervals. The test consists of 20-meter shuttle runs; there is a 5+5-meter recovery section where the athletes actively rest at the end of each round. If the athlete fails to reach the finish line in time twice, the test is considered completed and the distance covered by the athlete is recorded as the Yo-Yo IRT1 performance. At the first level of this test, there are a total of 4 shuttle runs at speeds of 10-13 km/h; at the second level, there are 7 shuttle runs at speeds of 13.5-14 km/h; the subsequent levels consist of 8 shuttle runs each with a 0.5 km/h speed increase, continuing until the athlete is exhausted or makes two successive mistakes. In addition, the following formula was used to calculate VO2max: Yo-Yo IRT1 VO2max (ml·kg−1·min−1) = running distance (m) × 0.0084 + 36.4 (Bangsbo et al., 2008).
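As a worked example of this estimate, here is a small illustrative helper (not from the paper) applying the cited Bangsbo et al. (2008) formula; the test distance used is hypothetical.

```python
# Yo-Yo IRT1 VO2max estimate: distance (m) x 0.0084 + 36.4 (Bangsbo et al., 2008).
def yoyo_irt1_vo2max(distance_m: float) -> float:
    """Estimate VO2max (ml/kg/min) from the Yo-Yo IRT1 distance in meters."""
    return distance_m * 0.0084 + 36.4

# Hypothetical example: a 1600 m test distance gives about 49.8 ml/kg/min.
print(yoyo_irt1_vo2max(1600.0))
```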
Design and Procedures
All tests were conducted at the end of the first preparation period (pre), at the end of the first competition period (mid), and at the end of the second competition period (end) at the same time of the day. Warm-up was performed for 10 minutes before all tests. During the tests, verbal suggestions were made so that the athletes could reach their maximum effort levels.
Statistical Analysis
After the descriptive statistics of the volunteers were calculated, the data were tested using the Shapiro-Wilk test to determine whether they showed a normal distribution, and the variance homogeneities were evaluated with the Levene test separately for each category. As the data were all parametric, the analysis was carried out using the One Way ANOVA test, followed by a post hoc test (Bonferroni). Cohen's d effect sizes were also calculated and the outputs were described as follows: <0.20 (trivial), 0.20-0.59 (small), 0.6-1.19 (moderate), 1.2-1.99 (large), ≥2.0 (very large) (Hopkins, Marshall, Batterham, & Hanin, 2009). Furthermore, the development ratio (%) was calculated. Statistical data were tested using the SPSS 21 package program. The significance level was set at p<0.005.
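For illustration, the following is a minimal sketch (not the authors' SPSS workflow) of the Cohen's d calculation with the Hopkins et al. (2009) descriptors used above; the group means, standard deviations, and sample sizes are hypothetical.

```python
# Cohen's d using the pooled standard deviation of two groups, plus the
# descriptor thresholds of Hopkins et al. (2009) cited in the paper.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def describe(d):
    d = abs(d)
    if d < 0.20: return "trivial"
    if d < 0.60: return "small"
    if d < 1.20: return "moderate"
    if d < 2.00: return "large"
    return "very large"

# Hypothetical end-of-season vs pre-season VO2max values for 10 players.
d = cohens_d(51.5, 2.1, 10, 48.8, 2.4, 10)
print(f"d = {d:.2f} ({describe(d)})")
```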
Estimated VO2max Values of the Participants Pre-season, Mid-season and End-of-season
Table 3 reports, for each category, the pre-season, mid-season, and end-of-season values as x̄ ± sd in ml·kg−1·min−1. The VO2max results of all groups at the different time points are given in Table 3; a change in VO2max between the beginning of the season, the mid-season, and the end of the season was observed in all age categories.
In the U14 age category, VO2max increased from 48.77 to 49.71 ml·kg−1·min−1 at mid-season compared to the beginning of the season, from 49.71 to 51.52 ml·kg−1·min−1 at the end of the season compared to mid-season, and from 48.77 to 51.52 ml·kg−1·min−1 at the end of the season compared to the beginning of the season. In the U15 age category, it decreased from 49.15 to 47.43 ml·kg−1·min−1 at mid-season compared to the beginning of the season, increased from 47.43 to 52.93 ml·kg−1·min−1 at the end of the season compared to mid-season, and increased from 49.15 to 52.93 ml·kg−1·min−1 at the end of the season compared to the beginning of the season. In the U16 age category, it increased from 50.07 to 52.13 ml·kg−1·min−1 at mid-season, from 52.13 to 56.21 ml·kg−1·min−1 at the end of the season compared to mid-season, and from 50.07 to 56.21 ml·kg−1·min−1 overall. In the U17 age category, it decreased from 55.34 to 54.07 ml·kg−1·min−1 at mid-season, increased from 54.07 to 58.74 ml·kg−1·min−1 at the end of the season compared to mid-season, and increased from 55.34 to 58.74 ml·kg−1·min−1 overall. In the U18 age category, it decreased from 53.43 to 52.39 ml·kg−1·min−1 at mid-season, increased from 52.39 to 55.49 ml·kg−1·min−1 at the end of the season compared to mid-season, and increased from 53.43 to 55.49 ml·kg−1·min−1 overall.
DISCUSSION
The purpose of this study was to investigate the seasonal changes in the aerobic levels of the youth academy players of a professional soccer team in the U14, U15, U16, U17, and U18 categories.
The aerobic system and the anaerobic system are both considered very important in the soccer game. However, during a 90-minute soccer match, the main source of adenosine triphosphate (ATP) production is the aerobic system. The maximum aerobic power, determined by the maximum oxygen uptake, can vary depending on the league and the positions in which the soccer players play (Vanttinen et al., 2011; Teplan et al., 2012).
In the present study, aerobic development increased statistically significantly at the end of the season compared to the mid-season and, similarly, at the end of the season compared to the beginning of the season in all age categories (p<0.005). At the end of the season, the highest development was in the U16 category (47.54%), followed by the U15 (29.53%), U14 (22.17%), U17 (17.96%), and U18 (12.25%) categories. In the U15, U17, and U18 categories, a decreasing percentage was observed in mid-season compared to the beginning of the season, but these rates were not statistically significant (p>0.005). All teams reached their highest running distance at the end of the season. While the percentage of development at the end of the season was higher in the U16 and younger age groups, the running distance was higher in the U16 and older age groups (Table 2). Besides, VO2max increased at the end of the season compared to the beginning of the season in all age categories (U14 from 48.77 to 51.52 ml·kg−1·min−1, U15 from 49.15 to 52.93 ml·kg−1·min−1, U16 from 50.07 to 56.21 ml·kg−1·min−1, U17 from 55.34 to 58.74 ml·kg−1·min−1, U18 from 53.43 to 55.49 ml·kg−1·min−1) (Table 3). The standard deviations of the athletes' Yo-YoIRT 1 and VO2max parameters differ between the data obtained at the beginning, in the middle, and at the end of the season. This variation shows that the athletes were affected differently by the training.
The increase in aerobic capacity is expected to be the usual effect of pre-season training (Jastrzębski, Dargiewicz, et al., 2011; Thomas, Dawson, & Goodman, 2006). Caldwell & Peters also found, using the Multi-stage Fitness Test, that the VO2max value at the beginning of the season (56 ml·kg−1·min−1) was lower than in other periods of the season (Caldwell & Peters, 2009). Similar to the U14 and U16 age group results in the present study, Silva et al. (2015), in their study with elite young soccer players aged 17, reported that the running distance increased (35%) according to the Yo-YoIRT 1 test results obtained at the end of week 5 of a 7-week preparation period. In a study following the development of male soccer players in the U16 category over an 8-week period, Hammami et al. (2013) reported an increase in the Yo-YoIRT 1 performances of the players (VO2max from 47.9 to 55.7 ml·kg−1·min−1). Vanttinen et al. (2011), in their one-year follow-up of the cardiovascular endurance (Yo-YoIRT 1) of 14- (d=0.29, small), 15- (d=0.21, small), and 16-year-old (d=0.03, trivial) Finnish soccer players, reported findings similar to the current study.
In soccer-related studies, it was emphasized that the short off-season (8 weeks) was associated with a decrease in aerobic fitness level (Reilly & Williams, 2003) and an increase in body fat percentage (Hoshikawa et al., 2005). In the studies which showed that the level of aerobic fitness in soccer players increased in the mid-season compared to the pre-season and decreased at the end of the season, the reason for the increase in VO2max in the mid-season compared to the beginning of the season stemmed from the training and competitions, and this situation was associated with the decrease in body fat percentage. They emphasized that the decrease in VO2max in the second half of the season might be caused by overload/overtraining and fatigue at the end of the season (Haritonidis, Koutlianos, Koudi, Haritonidou, & Deligiannis, 2004; Caldwell & Peters, 2009).
Aerobic performance development in young adolescents is related to performance in the matches. The development of aerobic performance is affected by the process of biological maturation and weekly training volume (Teixeira et al., 2014). The most important factor that can comprehensively affect the sporting condition of a player and the whole team is the training load (Jastrzębski, Dargiewicz, et al., 2011). Studies show that exposure to training loads among elite young soccer players increased between the ages of 11 and 19 (Baxter-Jones & Helms, 1996; Brito et al., 2012; Malina et al., 2000). "Practice makes perfect" is a well-known expression in most sports, including soccer. However, severe training and match loads increase the risk of injury, weariness, and burnout (Saether & Aspvik, 2014). The pressure to be successful is felt by talented players at earlier ages (Hill, 2013).
Researchers have suggested that such players, who had a low level of condition at the start of the season, had to work extremely hard to reach the high level they required. They also argue that to start the season in such a short time may be associated with a potentially high catabolic condition, causing mental and physical fatigue later in the season (Kraemer et al., 2004). Pressure to be successful can lead to a potential lack of motivation and burnout (Saether & Aspvik, 2014). Hill (2013) stated that one out of four 13-16 years-old players selected for a British professional club experienced burnout at least once in their career (Hill, 2013).
According to research findings on young soccer players, the period in which growth/development and individual differences are most evident is between 11 and 16 years of age. On the other hand, there is limited information about late-adolescent soccer players aged 17-19 (Sylejmani et al., 2019). Growth and maturation proceed in all players, but maturity status is affected by these processes in different ways (Morris et al., 2018).
In addition to the positive adaptations from training, changes in performance in youth athletes will also reflect the effects of normal growth and maturation, therefore any conclusions regarding the magnitude of performance improvement to specific training must account for changes in maturation. As such this highlights the importance of considering physical testing results in the context of maturation status when evaluating the performance of youth soccer players (Emmonds, Sawczuk, Scantlebury, Till, & Jones, 2020).
CONCLUSIONS
The present study aimed to examine the seasonal changes of the young elite soccer players in the U14, U15, U16, U17, and U18 categories. The findings show that the aerobic performance of U14 and U16 age groups increased from the beginning of the season to the end of the season.
When we look at the mean values of the aerobic performance of the U15, U17, and U18 age groups, we can state that there is a decrease in the middle of the season and an increase at the end of the season. While the increase is statistically significant, the decrease is not statistically significant. However, this decrease is worthy of notice. It is thought that some differences in the responses to the training loads during maturation can be observed (Roe & Malone, 2016).
The significant increase at the end of the season compared to the beginning of the season in all groups may be due to the fact that the adaptation of the athletes to the training programs was positively affected by the developmental characteristics of the age categories. For this reason, carefully selected training programs should be applied in all age categories according to the developmental characteristics of the athletes. Furthermore, the development of the athletes should be tested at short intervals.
Metformin regulates bone marrow stromal cells to accelerate bone healing in diabetic mice
Diabetes mellitus is a group of chronic diseases characterized by high blood glucose levels. Diabetic patients have a higher risk of sustaining osteoporotic fractures than non-diabetic people. Fracture healing is usually impaired in diabetics, and our understanding of the detrimental effects of hyperglycemia on fracture healing is still inadequate. Metformin is the first-line medicine for type 2 diabetes (T2D). However, its effects on bone in T2D patients remain to be studied. To assess the impact of metformin on fracture healing, we compared the healing process of closed-wound fixed fracture, non-fixed radial fracture, and femoral drill-hole injury models in T2D mice with and without metformin treatment. Our results demonstrated that metformin rescued the delayed bone healing and remodeling in the T2D mice in all injury models. In vitro analysis indicated that the compromised proliferation, osteogenesis, and chondrogenesis of bone marrow stromal cells (BMSCs) derived from the T2D mice were rescued by metformin treatment when compared to WT controls. Furthermore, metformin could effectively rescue the impaired lineage commitment of BMSCs isolated from the T2D mice in vivo, as assessed by subcutaneous ossicle formation of BMSC implants in recipient T2D mice. Moreover, Safranin O staining of cartilage formation during endochondral ossification under hyperglycemic conditions significantly increased at day 14 post-fracture in the T2D mice receiving metformin treatment. The chondrocyte transcription factors SOX9 and PGC1α, important for maintaining chondrocyte homeostasis, were both significantly upregulated in callus tissue isolated at the fracture site of metformin-treated MKR mice on day 12 post-fracture. Metformin also rescued the chondrocyte disc formation of BMSCs isolated from the T2D mice. Taken together, our study demonstrated that metformin facilitated bone healing, more specifically bone formation and chondrogenesis, in T2D mouse models.
Introduction
Diabetes mellitus is a group of chronic diseases characterized by high blood glucose levels. It is estimated that more than 347 million people worldwide currently have diabetes, and many more are expected to become diabetic soon [1]. Among diabetic patients, more than 90% suffer from type 2 diabetes (T2D), which is caused by insulin resistance in peripheral tissues. Many tissues, including the skeleton, will be adversely affected by hyperglycemia if it is not controlled [2]. Both type 1 diabetes and T2D are associated with an increased risk of osteoporosis and fragility fractures [2].
It is recognized that oral antidiabetic medicines affect bone metabolism and turnover [2].
Metformin, an insulin sensitizer, is frequently prescribed to patients with T2D. In T2D patients, metformin treatment was associated with a decreased risk of bone fracture [3]. The osteogenic effects of metformin have been documented in both cellular and rodent models. Metformin promoted osteoblast differentiation and inhibited adipocyte differentiation in rat bone marrow mesenchymal stem cell culture [4]. In rat primary osteoblast culture, metformin increased trabecular bone nodule formation [5]. In ovariectomized rats, metformin was also shown to improve compromised bone mass and quality [6]. Furthermore, in a streptozotocin-induced type 1 diabetes model, metformin stimulated bone lesion regeneration in rats [7]. However, to date, there is no study on the skeletal effects of metformin in a T2D model. A recent study found that the incidence of total knee replacement over four years was 19% lower among patients with type 2 diabetes who were regular metformin users compared with non-users [8]. In addition to reducing glucose levels, metformin may modulate inflammatory and metabolic factors, leading to reduced inflammation and plasma lipid levels [8]. Better knowledge of how metformin treatment influences skeletal tissues under T2D conditions is of great clinical relevance in view of the fast-growing population of patients with T2D.
The MKR mouse model, generated and characterized by LeRoith and colleagues, expresses a dominant-negative mutant of the human IGF-I receptor specifically in skeletal muscle [9]. Expression of this dominant-negative mutant receptor decreases glucose uptake and causes insulin resistance, so the MKR transgenic mouse rapidly develops severe diabetes [9]. This mouse model has been widely used in T2D research. In this study, the effects of metformin on bone cell lineage determination and regeneration were characterized using the MKR mouse model.
Animals
Bone injury models
Femoral closed fracture model
The gravity-induced Bonnarens and Einhorn fracture model was adapted here, as previously described, to establish a standard closed fracture [10,11]. Briefly, 12-week-old WT and MKR male mice were anesthetized with a Ketamine/Xylazine cocktail, and a 1 cm sagittal incision was made at the right knee beneath the patella. A 3/8-inch-long 27-gauge needle was inserted into the bone canal right between the medial and lateral condyles. The needle end was cut, and the blunt end was pushed forward and buried between the medial and lateral condyles to avoid tissue damage afterwards. With the fixative needle inside the femoral canal, the animal was moved to the fracture apparatus. The right femur was placed over the two supports, and the blunt guillotine blade was dropped from a pre-tested height onto the femur to create sufficient force to cause the fracture. The wound was then closed with sutures, and mice were randomly assigned into vehicle (PBS) or metformin (Met, 200 mg/kg BW) daily treatment groups.
Radius non-union fracture model
Twelve-week-old male WT and MKR mice were anesthetized using a Ketamine/Xylazine cocktail. A 0.5 cm coronal incision was made over the right radius. The brachioradialis and pronator teres were carefully separated with blunt surgical instruments to reveal the radius. A super sharp Stevens Tenotomy Scissor was used to cut at the middle of the radius and create the non-union radial fracture. The wound was then closed, and mice were randomly assigned into PBS or metformin daily treatment groups.
Femoral drill hole model
Twelve-week-old male WT and MKR mice were anesthetized using a Ketamine/Xylazine cocktail. A 1 cm coronal incision was made over the right lateral femur. The quadriceps were carefully separated with blunt surgical instruments to reveal the femur. A drill bit (#66) was used to create a 0.8 mm diameter hole in the femur. After closing the wound, mice were randomly assigned into PBS or metformin daily treatment groups.
Tissue Collection and Processing
Micro computed tomography (μCT) and bone histomorphometry were utilized to assess static and dynamic indices of bone structure and formation. Briefly, bone injuries were introduced in animals as described above, and PBS or metformin treatment was administered daily for the indicated time. After sacrificing the animals, injured bone samples were fixed in 10% buffered formalin for 48 hours, then rinsed with PBS before being analyzed by μCT. Bones were evaluated using a SkyScan 1172 high-resolution scanner (Bruker, Billerica, MA, USA) with 60 kV voltage and 167 μA current at a 9.7 μm resolution and reconstructed using NRecon (V.1.6.10.2.). In the femoral fracture model, the whole scanned region was included as the volume of interest (VOI) [12] to generate a general 3D view of the femoral fracture site. In the radius fracture model, a total of 3 mm (311 transverse anatomic slices) of the radius, including the entire injury site, was selected as the VOI. In order to analyze only the fracture callus bone parameters, two sets of regions of interest (ROIs) were manually drawn within each VOI slice: the first set traced the external fracture callus (inclusion ROI), and the second set traced the cortical bones within the callus (exclusion ROI). In the femoral drill-hole model, a round ROI 0.611 mm in diameter (63 pixels) was drawn at the center of the injury hole throughout the depth where callus was present, generating a VOI consisting of the callus within the drill site. The VOI from each animal was analyzed with CTan (V.1.13.2.1.) to calculate the following morphometric parameters: bone mineral density (BMD), relative bone volume (BV/TV), trabecular thickness (Tb.Th), trabecular separation (Tb.Sp), porosity, and total pore space. CTVox (V.3.3.0.0.) was used to generate 3D spatial images of the VOI.
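The static indices listed above follow directly from voxel counts in the binarized VOI. The sketch below is a minimal illustration of those definitions, assuming a NumPy boolean volume; the study itself computed these parameters with CTan, not with this script.

```python
# Illustrative sketch (not the authors' CTan pipeline): deriving BV/TV,
# porosity, and total pore space from a binarized uCT volume of interest.
# Assumes `voi` is a 3D boolean array where True marks bone voxels.
import numpy as np

def morphometry(voi: np.ndarray, voxel_size_mm: float = 0.0097):
    """Compute simple static indices from a binary VOI stack."""
    total_voxels = voi.size             # tissue volume (TV) in voxels
    bone_voxels = int(voi.sum())        # bone volume (BV) in voxels
    bv_tv = bone_voxels / total_voxels  # relative bone volume, BV/TV
    porosity = 1.0 - bv_tv              # fraction of non-bone space
    pore_space_mm3 = (total_voxels - bone_voxels) * voxel_size_mm ** 3
    return {"BV/TV": bv_tv, "porosity": porosity,
            "total_pore_space_mm3": pore_space_mm3}

# Example with a synthetic 100^3 VOI at the 9.7 um resolution used above.
rng = np.random.default_rng(0)
voi = rng.random((100, 100, 100)) > 0.7
print(morphometry(voi))
```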
For the double-labeling experiment, ten-week-old male WT and MKR mice were randomly assigned to receive PBS or metformin daily injections for 14 days. Mice also received intraperitoneal injections of calcein (10 mg/kg) on the 5th day and alizarin red (15 mg/kg) on the 12th day of the 14-day period. For histomorphometry and peripheral quantitative computed tomography, femurs were preserved in 70% ethanol until they were processed for plastic embedding in methyl methacrylate resin or decalcified for paraffin embedding.
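The quantity typically derived from such double labeling is the mineral apposition rate (MAR). Here is a minimal sketch assuming the standard definition (mean inter-label distance divided by the labeling interval, which is 7 days here since the labels were given on days 5 and 12); it is not the measurement software used in the study, and the distances are hypothetical.

```python
# Minimal sketch of dynamic histomorphometry: mineral apposition rate
# (MAR) from distances measured between the calcein and alizarin labels.
def mineral_apposition_rate(interlabel_distances_um, interval_days=7.0):
    """MAR = mean inter-label distance / labeling interval (um/day)."""
    mean_distance = sum(interlabel_distances_um) / len(interlabel_distances_um)
    return mean_distance / interval_days

# Hypothetical distances (um) measured between the green and red labels:
print(mineral_apposition_rate([7.2, 6.8, 7.5, 7.1]))  # ~1.02 um/day
```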
Blood was collected by cardiac puncture after euthanasia, left at room temperature for 30 min before centrifuging at 200 g for 10 min to separate sera.
Cell culture and analysis:
As previously described, after 14 days of in vivo treatment with PBS or metformin, WT and MKR mice were sacrificed and bone marrow primary cell culture was performed.
Excess tissue was removed from the femur and tibia of each mouse, and the bones were quickly rinsed with 70% ethanol followed by three cold PBS washes to ensure sterility. The bone marrow was then flushed out, and the cells were plated in α-MEM complete medium and maintained in an incubator. After non-adherent cells were removed, the adherent cells were cultured as bone marrow stromal cells (BMSCs) for 7 days and sub-cultured for the following assays:
Colony-forming unit fibroblast assay (CFU-F)
Isolated BMSCs were seeded at 100, 500, and 1000 cells/well in 6-well plates, followed by 10 days of culture with α-MEM complete medium. Cells were fixed with 10% buffered formalin and washed 3 times with PBS, followed by staining with 0.25% wt/vol crystal violet solution. Plate images were captured using a ChemiDoc XRS System (Bio-Rad Laboratories, Inc., Hercules, CA, USA) and analyzed with ImageJ.
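Colony counting of this kind is typically automated. The sketch below is a hypothetical threshold-and-label approach in Python; the study used ImageJ, so the function choices here are illustrative rather than the authors' macro.

```python
# Illustrative CFU-F colony counting on a grayscale image of one
# crystal-violet-stained well (colonies appear darker than background).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_objects

def count_colonies(well_img: np.ndarray, min_colony_px: int = 50) -> int:
    # Invert so that colonies become the bright foreground, then threshold.
    inverted = well_img.max() - well_img
    mask = inverted > threshold_otsu(inverted)
    # Drop specks smaller than a plausible colony footprint.
    mask = remove_small_objects(mask, min_size=min_colony_px)
    # Each connected component is counted as one colony.
    return int(label(mask).max())
```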
Osteoblast differentiation
Isolated BMSCs were seeded at 6.4 × 10^4 cells/well into 6-well plates and cultured with osteogenic medium (α-MEM complete medium supplemented with 50 µg/mL L-ascorbic acid-2-phosphate and 10 mM β-glycerophosphate) for 2-3 weeks, followed by ALP and von Kossa staining. Briefly, after 2 weeks of osteogenic differentiation, cells were fixed and checked for alkaline phosphatase activity using an ALP kit (86R-1KT, Sigma) following the manufacturer's protocol. After 3 weeks under osteogenic culture, cells were fixed with 95% ethanol and rehydrated through gradient ethanol to water. Cells were incubated with 5% silver nitrate solution at 37 °C for one hour, exposed to UV light for 10 minutes, and then carefully washed with water. Images of all plates were captured using the ChemiDoc XRS System, and ALP-positive areas and mineralized regions were measured with ImageJ software.
Chondrocyte differentiation and disc formation
Isolated BMSCs were further cultured and passaged twice in standard growth medium (DMEM + 10% FBS) to enrich the cell number. A cell suspension of 1.6 × 10^7 cells/mL was prepared using the StemPro™ Chondrogenesis Differentiation Kit (ThermoFisher), and 5 µL droplets were applied to the center of 48-well plate wells for micro-mass culture. Three hours later, warmed chondrogenic medium was overlaid on the micro-masses, and the formation of chondrogenic pellets was observed after 3 days of culture.
In vivo Ossicle formation assay
As described above, BMSCs from the different treatment groups were obtained and cultured in standard growth medium (DMEM + 10% FBS) with a customized glucose level. We measured and calculated the mean blood glucose level of the recipient MKR mice (N = 4, average glucose level = 436 mg/dL) and prepared the ex vivo culture medium accordingly to avoid a change in glucose level from the ex vivo culture to the body fluid of the recipient MKR mice. Briefly, BMSCs from PBS- or metformin-treated mice were grown to 90% confluence; a 1.0 × 10^8 cells/mL cell suspension was then prepared, and 20 µL of this suspension was soaked into a 4 mm × 4 mm gel foam and grafted subcutaneously at the flank of recipient MKR mice. Four weeks after implantation, the gel foams were dissected and processed for histological assays.
Histology
Femurs from the fracture models and the bone ossicles were decalcified using 10% EDTA for 2 weeks. The EDTA solution was refreshed every other day for the best decalcification efficacy.
Tissues were then processed through an automatic tissue processor, followed by paraffin embedding. H&E, Safranin O, and Masson's Trichrome staining were performed, respectively.
Statistics
We used ANOVA when the study included more than two groups, followed by the Bonferroni t-test. We used the two-tailed Student's t-test to compare the difference between two experimental groups. A value of P < 0.05 was considered statistically significant. Bars in figures represent the mean ± SEM unless stated otherwise.
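As a concrete illustration of this workflow, the sketch below runs a one-way ANOVA followed by Bonferroni-corrected pairwise t-tests with SciPy. The paper does not name its statistical software, so this is an assumed implementation, and the data values are hypothetical.

```python
# One-way ANOVA across >2 groups, then pairwise two-tailed t-tests
# with a Bonferroni-corrected alpha, mirroring the workflow described above.
from itertools import combinations
from scipy import stats

def compare_groups(groups: dict, alpha: float = 0.05):
    f_stat, p_anova = stats.f_oneway(*groups.values())
    results = {"ANOVA": (f_stat, p_anova)}
    pairs = list(combinations(groups, 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni correction
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        results[(a, b)] = (t, p, p < corrected_alpha)
    return results

# Hypothetical BV/TV values for the four treatment groups:
data = {"WT-PBS": [0.31, 0.29, 0.33], "WT-Met": [0.30, 0.32, 0.31],
        "MKR-PBS": [0.18, 0.20, 0.17], "MKR-Met": [0.27, 0.29, 0.28]}
print(compare_groups(data))
```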
Metformin promotes healing in fracture models under hyperglycemic conditions.
In order to evaluate fracture healing, which involves endochondral ossification in mice, we adapted the well-accepted Bonnarens and Einhorn fracture mouse model (Fig. 1A).
Animals were sacrificed on day 14, 23, or 31 post-fracture, representing the inflammatory stage, the endochondral stage, and the remodeling stage of the femoral fracture repair process [13]. As shown in Fig. 1B, metformin treatment improved healing at the femoral fracture site in MKR mice. A similar effect of metformin was also observed in a non-fixed radial fracture model (Fig. 2). After the non-fixed radial fracture was introduced, animals were treated with either PBS or metformin for 14 or 23 days (Fig. 2A). In the WT groups at 14 days post-fracture, both PBS- and metformin-treated animals started to exhibit a sufficient amount of callus with signs of bridging of the fracture ends (Fig. 2B). In the 14-day post-fracture MKR mice, those treated with metformin exhibited more healing callus than the PBS-treated ones, with a significantly greater percentage of callus bridging at the fracture site (Fig. 2C). Bone mineral density was also higher in metformin-treated MKR mice when compared to the PBS-treated MKR mice (Fig. 2D). The bone volume/tissue volume ratio showed a similar trend. In the femoral drill-hole model, tracing of the injury sites presented a detailed view of the callus tissues formed within the drill hole (Fig. 3D). In the WT animals, quantitative analysis of the µCT images showed no difference between the PBS and metformin treatments. Significantly lower BMD (Fig. 3E) and BV/TV ratio (Fig. 3F) were observed in PBS-treated MKR mice when compared to the metformin-treated ones. The PBS-treated MKR mice also demonstrated prominently higher bone porosity (Fig. 3G) and total pore space (Fig. 3H) within the callus tissue, suggesting that the delayed bone healing and remodeling in MKR mice was rescued by metformin.
Metformin accelerates bone formation under hyperglycemic conditions.
In order to examine metformin's effects on bone formation in vivo, we performed alizarin red and calcein double-labeling injections in WT and MKR mice. Fig. 4A shows that the fluorescent labels in WT mice remained the same between PBS- and metformin-treated animals. In contrast, the distance between the two labels was greater in metformin-treated than in PBS-treated MKR mice. This observation was supported by serum levels of amino-terminal propeptide of type 1 procollagen (P1NP). P1NP is considered a sensitive marker of bone formation [14], and the ELISA assay was performed using serum samples collected at 14, 23, and 31 days post femoral fracture.
In WT animals, there was no difference in P1NP levels between metformin-treated and PBS-treated mice at any of the three time points (Fig. 4B). On the contrary, we observed significantly higher P1NP levels at all three time points post-fracture in the MKR mice treated with metformin compared with those treated with PBS (Fig. 4B). Collectively, the data indicate that metformin can promote bone formation only under hyperglycemic conditions.
Metformin regulates proliferation and lineage commitment of bone marrow stromal cells (BMSCs) in MKR mice.
Considering that the multipotent mesenchymal stem/progenitor cells (BMSCs) in bone are critical in maintaining bone quality, function, and regeneration, we then tested whether metformin stimulated BMSC proliferation in MKR mice. As expected, the hyperglycemic condition in MKR mice impaired the proliferation of their BMSCs, as indicated by CFU-F staining when compared to WT controls (Fig. 5A). Administration of metformin in MKR mice successfully salvaged BMSC proliferation capability and brought it back to a level equivalent to that observed in the WT group. At all three seeding densities, the compromised CFU-F colony formation observed in the MKR group was rescued by metformin treatment, as shown in Fig. 5B. However, metformin did not affect the proliferation of BMSCs in the WT group when compared to the PBS vehicle. It is noteworthy that metformin was not administered to the culture; the daily metformin treatment in MKR mice prior to cell isolation appeared to be sufficient to protect the proliferation potential of BMSCs from the detrimental effects of hyperglycemia. BMSC lineage differentiation potential was also tested by ALP staining and von Kossa staining.
After 14 days of osteogenic differentiation, in contrast to the weak ALP activity observed in the MKR-PBS group, the MKR-Met group showed ALP activity comparable to the WT groups (Fig. 5C and 5D). ALP plays a critical role in calcium crystallization and mineralization during bone formation; therefore, we speculated that the aberrant ALP activity of the MKR-PBS group would further lead to impaired bone mineralization. As expected, bone mineralization was barely detected in the MKR-PBS group after 21 days of osteoblast differentiation, as visualized by von Kossa staining, and metformin significantly enhanced bone mineralization in MKR mice when compared to the PBS-treated MKR animals (Fig. 5E and 5F). Notably, the BMSCs from metformin-treated T2D mice maintained the improved bone formation feature when re-exposed to the same hyperglycemic levels as in the T2D mice. BMSCs isolated from metformin-treated MKR mice and PBS controls were implanted into recipient MKR mice (Fig. 5G). Masson's Trichrome staining of the ossicles showed that BMSCs from metformin-treated MKR mice formed more bone than those from PBS-treated MKR mice (Fig. 5H-I).
These results implied that in vivo treatment with metformin could effectively rescue the impaired differentiation potential of BMSCs in MKR mice.
Improved chondrogenesis of BMSCs from metformin-treated MKR mice
Chondrogenesis and endochondral ossification are critical steps during the healing process after a bone injury. In order to examine metformin's effect on chondrogenesis during bone healing, we compared the cartilage deposition within fracture healing sites throughout the healing process (14 days, 23 days, and 31 days post-fracture). Cartilage deposition was significantly lower in PBS-treated MKR mice than in PBS-treated WT mice at 14 days post-fracture (Fig. 6A-B). As expected, no difference between the PBS- and metformin-treated animals was observed in the WT groups. On the other hand, metformin significantly promoted cartilage formation in MKR mice on day 14 post-fracture, and the trend continued until day 23 (Fig. 6A-B). By day 31 post-fracture, except in the PBS-treated MKR mice, no discernible callus remained in any group (Fig. 6A). To further investigate whether metformin modulates the chondrogenesis of BMSCs, BMSCs were isolated for chondrogenic culture in vitro. Only the BMSCs isolated from PBS-treated MKR mice failed to form a chondrocyte disc after 3 days of chondrogenic culture, as shown in Fig. 6C. BMSCs isolated from metformin-treated MKR mice formed chondrocyte discs as well as the WT controls did (Fig. 6C). We also harvested callus tissue at the fracture sites on day 12 and day 21 post-fracture to examine the expression of genes that contribute to chondrogenesis. At day 12 post-fracture, the chondrocyte transcription factor SOX9 was significantly upregulated in metformin-treated MKR mice (Fig. S2C). SOX9 is a master transcription factor that plays a key role in chondrogenesis [15]. At day 21 post-fracture, PGC1α was significantly upregulated in metformin-treated MKR mice (Fig. S2G). PGC1α is required for chondrocyte metabolism and cartilage homeostasis [16,17].
Discussion
Although metformin is the most commonly prescribed diabetes medication in the world, its effects on bone healing in T2D patients remain unclear. To assess the impact of metformin on fracture healing under hyperglycemic conditions, we applied several classic bone fracture models in T2D mice. Our results demonstrated that in all injury models tested, metformin successfully rescued the delayed bone healing and remodeling in T2D mice by facilitating bone formation. Further cell culture studies demonstrated the mechanism of metformin's action at the cellular level via the promotion of proliferation, differentiation, and lineage commitment of primary BMSCs. Taken together, metformin showed its potential as an effective drug for increasing the rate and success of bone healing in diabetic patients who are not taking metformin on a regular basis.
In all the bone fracture models examined in this study, metformin significantly enhanced bone-healing parameters in MKR mice. However, in the WT animals, quantitative analysis of images showed no difference between the PBS and metformin treatments in terms of bone healing. These data suggest that metformin is only beneficial for bone healing under hyperglycemic conditions and does not enhance bone healing in WT animals without diabetes. In all BMSC-based assays, metformin was not administered to the culture media in vitro but was administered to the animals in vivo before the BMSCs were isolated.
Administration of metformin in MKR mice successfully salvaged BMSC proliferation capability and lineage commitment and brought them back to levels equivalent to those observed in the WT group. Interestingly, metformin did not affect the proliferation and lineage commitment of BMSCs in the WT group when compared to the PBS vehicle. These data are consistent with those obtained from the bone fracture models, suggesting that metformin may exert its effects through normalizing hyperglycemia (Fig. S1A), glucose tolerance (Fig. S1B), and other metabolic disturbances under diabetic conditions, and does not enhance bone healing in WT animals. In ovariectomized rats, impaired bone density and quality were significantly improved by metformin treatment [6]. Taken together, it seems that metformin does not promote further bone growth under physiological conditions but helps to maintain bone homeostasis under pathological conditions such as hyperglycemia and estrogen deficiency. To our knowledge, there is very little research on the direct effects of metformin on BMSCs. In an in vitro study using bone marrow-derived multipotent mesenchymal stromal cells, metformin inhibited proliferation and caused abnormalities of morphology and ultrastructure [18]. This might be due to the level of metformin used in the culture system, or metformin may not work directly on BMSCs but require other tissues, cells, and the in vivo context to exert its effect in mice. In another paper, metformin added to culture was shown to promote the osteogenesis of BMSCs isolated from T2D patients and to promote osseointegration when administered in rats [19]. Whether metformin has direct beneficial effects on BMSCs remains to be investigated.
As reviewed by Roszer [20], diabetes is accompanied by increased levels of proinflammatory factors, reactive oxygen species (ROS) generation, and accumulation of advanced glycation end products (AGEs). The increased inflammatory state can result in apoptosis of osteoblasts and prolonged survival of osteoclasts, which lead to early destruction of callus tissue and impair bone fracture healing in diabetic patients. Thus, antagonizing inflammatory signaling pathways and inhibiting inflammation may deserve greater attention in the management of diabetic fracture healing. There is substantial evidence supporting that metformin not only improves chronic inflammation by attenuating hyperglycemia but also has a direct anti-inflammatory effect. Targeting inflammatory pathways seems to be an important part of the comprehensive mechanisms of action of this drug [21]. In addition to AMPK activation and inhibition of mTOR pathways, metformin acts on mitochondrial function and cellular homeostasis processes such as autophagy [21]. Both dysregulated mitochondria and failure of the autophagy pathways affect cellular health drastically and can trigger the onset of metabolic and age-associated inflammation and diseases. For example, T-helper type 17 (Th17) cells, an important proinflammatory CD4+ T cell subset secreting interleukin 17 (IL-17), have been suggested to play an essential role in the development of diabetes mellitus [22]. Metformin can ameliorate the pro-inflammatory profile of Th17 cells by increasing autophagy and improving mitochondrial bioenergetics [23]. In addition, at day 21 post-fracture, PGC1α in callus tissues isolated from the fracture site was significantly upregulated in metformin-treated MKR mice (Fig. S2H). As reviewed by Halling and Pilegaard [24], PGC-1α regulates not only mitochondrial biogenesis but also mitochondrial function. PGC-1α-mediated regulation of mitochondrial quality may contribute to many age-related dysfunctions, including impaired insulin sensitivity. Anti-inflammation and enhancement of mitochondrial function could be very important means by which metformin facilitates bone formation and healing under hyperglycemic conditions. Diabetic hyperglycemia has also been suggested to play a role in osteoarthritis. Metabolic alterations in body fluid, such as hyperglycemia, could negatively affect cartilage through direct effects on chondrocytes by stimulating the accumulation of advanced glycosylation end products (AGEs) in the synovium [25]. PPARγ is highly expressed in adipocytes, and the downregulation of PPARγ expression in the callus of metformin-treated MKR mice reflected the shift in mesenchymal cell fate. In a T2DM mouse model, differentiation of growth plate chondrocytes is delayed, and this delay may result from premature apoptosis of the growth plate chondrocytes [26]. Besides its effects on bone formation, there is also interest in studying the effects of metformin on chondrocytes, especially in the context of osteoarthritis development. Limited reports have shown that metformin is protective against the development of osteoarthritis by reducing chondrocyte apoptosis and alleviating chondrocyte degeneration [27-29]. Consistent with the above reports, our data suggest that metformin promoted cartilage formation during endochondral ossification at day 14 post-fracture in T2D mice. Moreover, metformin rescued the chondrocyte disc formation of BMSCs isolated from T2D mice when compared to the PBS-treated control.
Metformin also upregulated the chondrocyte transcription factor SOX9 in callus tissue isolated at the fracture site of metformin-treated MKR mice. SOX9 plays an essential role in the regulation of cartilage matrix production and cartilage repair [15]. In addition, PGC1α was significantly upregulated in metformin-treated MKR mice when compared to the PBS-treated MKR animals.
PGC1α is important for maintaining chondrocyte metabolic flexibility and tissue homeostasis. The loss of PGC1α in chondrocytes during OA pathogenesis results in the activation of mitophagy and stimulates cartilage degradation and apoptotic death of chondrocytes [30]. The activation of PGC1α is a potential strategy to delay or prevent the development of OA. The cellular signaling pathways through which metformin exerts its protective effects in chondrocytes warrant further research.
In conclusion, our study demonstrated that metformin can facilitate bone healing, bone formation, and chondrogenesis in T2D mice. The molecular mechanism of metformin's action demands further research in the hope of identifying specific therapeutic targets to facilitate bone healing and repair in diabetic patients.
Conflict of interest disclosure
The authors declare no conflict of interest.
ACKNOWLEDGMENTS
This study was funded by New York University Start-up and Career Enhancement Award to XL. The authors would like to thank the support from the microCT core facility at New York University College of Dentistry.
HIF-1α Is a Rational Target for Future Ovarian Cancer Therapies
Ovarian cancer is the eighth most commonly diagnosed cancer among women worldwide. Even with the development of novel drugs, nearly one-half of patients with ovarian cancer die within five years of diagnosis. This situation indicates the need for novel therapeutic agents for ovarian cancer. Increasing evidence has shown that hypoxia-inducible factor-1α (HIF-1α) plays an important role in promoting malignant cell chemoresistance, tumour metastasis, angiogenesis, immunosuppression and intercellular interactions. The unique microenvironment, crosstalk and/or interaction between cells, and other characteristics of ovarian cancer can influence therapeutic efficiency or promote disease progression. Inhibition of the expression or activity of HIF-1α can directly or indirectly enhance the therapeutic responsiveness of tumour cells. Therefore, it is reasonable to consider HIF-1α as a potential therapeutic target for ovarian cancer. In this paper, we summarize the latest research on the role of HIF-1α in ovarian cancer, on molecules that can inhibit HIF-1α expression directly or indirectly, and on clinical trials of HIF-1α inhibitors in ovarian cancer and other solid malignant tumours.
INTRODUCTION
Ovarian cancer is the eighth most commonly diagnosed cancer among women worldwide (1). Epithelial ovarian cancer (EOC) represents one of the deadliest cancers among women, with 47% of patients dying within 5 years of EOC diagnosis (2). The standard treatment for ovarian cancer is debulking surgery combined with chemotherapy (3). Unfortunately, even when patients receive standard treatment, recurrence occurs within 2 years in approximately 75% of patients with advanced-stage EOC (4).
The complex and rich multicellular environment in which a tumour develops is defined as the tumour microenvironment (TME) (5). In recent years, numerous studies have indicated that the TME plays a vital role in the malignant biological properties of tumours (6,7), including ovarian cancer (8). With the uncontrolled growth of tumour cells and abnormalities in tumour microcirculation (9), hypoxia is an obvious feature of the TME and is positively associated with tumour growth, angiogenesis, resistance to apoptosis and chemotherapy, and tumour metastasis (10). Hypoxia-inducible factors (HIFs) constitute a family of transcription factors that are involved in the regulation of the cellular response to hypoxic stress (11) and include three members: HIF-1 (12), HIF-2 (13), and HIF-3 (14).
HIFs, which form dimers, are composed of an oxygen-sensitive α-subunit and a constitutively expressed β-subunit (15,16). There are three types of α-subunits (HIF-1α, HIF-2α and HIF-3α). The structures of HIF-1α and HIF-2α are similar but not identical, and they heterodimerize with the aryl hydrocarbon receptor nuclear translocator (also known as HIF-1β) to form HIF-1 and HIF-2, respectively (17). HIFs belong to the basic helix-loop-helix Per-Arnt-Sim (bHLH-PAS) protein family and contain a bHLH domain (which mediates the DNA-binding activity of HIF-α through specific amino acids located in this domain), followed by a PAS domain. There are two different PAS domains, named PAS-A and PAS-B. The PAS domain of HIF-1α is required for the binding of hypoxia response elements (HREs) and the formation of active heterodimers. HIFs also contain a highly conserved oxygen-dependent degradation domain (ODD) as well as N-terminal and C-terminal transactivation domains (18-23) (Figure 1).
Numerous studies have found that HIF-1 participates in metastasis, resistance to chemotherapy or radiotherapy, and cancer stem-like cell maintenance in various types of cancers (24,25) and is associated with the prognosis of gynaecological cancers (26). Thus, considering the constitutive expression of the β-subunit, targeting HIF-1α may be a novel approach to treat ovarian cancer. This review summarizes recent studies on HIF-1α in ovarian cancer.
HIF-1α IS CONSIDERED A POOR PROGNOSTIC FACTOR FOR OVARIAN CANCER
The significance of HIF-1α in solid malignant cancers varies. It is a favourable prognostic factor in renal cell cancer and early-stage squamous cell carcinomas of the oral floor (34,35) but unfavourable in breast or oesophageal squamous cell carcinoma (35,36). Many studies have indicated that a shorter overall survival (OS) is related to positive HIF-1α expression (30-32, 37, 38). In late-stage and poorly differentiated ovarian cancer, positive HIF-1α expression is related to a shorter OS time but not a shorter progression-free interval (PFI), while patients who underwent suboptimal cytoreduction and had positive HIF-1α expression exhibited a shorter PFI than HIF-1α-negative patients (29). Only one report found no association between HIF-1α and the OS of ovarian cancer (27). In summary, the majority of studies have indicated that HIF-1α is a good predictor of a poor prognosis in ovarian cancer (Table 1).
HIF-1α expression may be associated with the response to chemotherapy. Alabiad et al. reported a good response to chemotherapy in patients with low HIF-1α expression (33). In contrast, other researchers found that HIF-1α-expressing patients were more sensitive to paclitaxel/carboplatin combination chemotherapy (28), and Birner noted that HIF-1α does not influence the response to platinum-based chemotherapy (27). Considering the large number of cell experiments proving that HIF-1α contributes to the chemoresistance of ovarian cancer (discussed later) and the small number of samples in the studies mentioned above, the relationship between HIF-1α expression and chemotherapy sensitivity needs further investigation (Table 1).
HIF-1α Impairs the Function of p53
Under hypoxic stress, the p53 protein, which is supposed to be induced by doxorubicin or cisplatin, was downregulated, so that the apoptosis of lung and colon cancer cells mediated through the p53 protein was diminished (42). Cisplatin can kill ovarian cancer cells through the p53-dependent apoptotic pathway (43,44). Basmina et al. found that the HIF-1α protein binds to the p53 protein, decreasing the transcriptional function of p53 and thus downregulating the expression of BAX, thereby affecting the apoptosis process mediated by p53 (45). Scientists have already discovered that the ODD region of the HIF-1α protein can directly bind to the DNA-binding region of the p53 protein and may abolish the function of p53, thus hampering gene transactivation in nonmalignant cells (46). However, the exact binding mechanism between the p53 protein and the HIF-1α protein in ovarian cancer is still not clear (Figure 2).
HIF-1α Promotes the Expression of IL-6
Interleukin-6 (IL-6) is a multifunctional cytokine that participates in the progression of many kinds of malignant tumours (47). IL-6 is highly expressed in the serum and ascites of patients with ovarian cancer, and its upregulation is significantly associated with a poor prognosis (48-50). In colon tumour cells, HIF-1α can regulate IL-6 expression via miR-338-5p (51). However, in EOC under hypoxic stress, the HIF-1 complex can promote the transcription and expression of neuronal pentraxin II (NPTX2). IL-6 expression is upregulated by NPTX2 overexpression, and the JAK2/STAT3 axis is activated via overexpression of IL-6 to promote the proliferation, invasion and migration of EOC cells (52). In addition, IL-6 can induce nuclear translocation and elevate the transcriptional activity of HIF-1α via STAT3 signalling to enhance the cisplatin chemoresistance of ovarian cancer cells (53). It seems that there is a positive feedback loop between HIF-1 and IL-6 that is mediated by JAK/STAT3 signalling (Figure 2).
LncRNAs Promote the Progression of Ovarian Cancer via HIF-1α
Long noncoding RNAs play variable roles in malignant tumours. HIF-1α can regulate the expression of these noncoding RNAs, and noncoding RNAs can interact with HIF-1α mRNA to regulate the expression of the HIF-1α protein, thereby driving the progression of many types of tumours, including breast cancer (54) and ovarian cancer (55,56).
The lncRNA CDKN2B-AS1 is overexpressed in ovarian cancer and can silence miR-411-3p, releasing HIF-1α mRNA, whose translation product plays a critical role in the transcription of VEGF and p38, thereby promoting the migration and invasion of cancer cells (55). The lncRNA DSCR8 is upregulated in ovarian cancer tissue and promotes tumour growth. HIF-1α promotes the expression of DSCR8, which sponges miR-98-5p, preventing miR-98-5p from targeting the 3'-UTR of STAT3 and thereby promoting ovarian cancer progression through the STAT3/HIF-1α pathway, which in turn upregulates DSCR8, creating a positive feedback loop that promotes the progression of ovarian cancer (56) (Figure 2).
HIF-1α Can Stimulate the AKT/mTOR Pathway
The AKT/mTOR pathway plays a vital role in the progression of ovarian cancer (57). Knockdown of HIF-1α expression via siRNA in A2780 and SKOV3 cells significantly downregulated the phosphorylation of AKT/mTOR (58). In addition, the AKT pathway regulates the expression of HIF-1α (59,60), and herpesvirus entry mediator (HVEM) is overexpressed in ovarian cancer (61). A hypoxic environment upregulates HVEM expression and enhances the phosphorylation of AKT/mTOR, thus inducing the expression of HIF-1α, which can promote cell proliferation (62). It is speculated that the HVEM/AKT/mTOR/HIF-1α axis and the HIF-1α/AKT/mTOR axis may constitute a feedback loop that promotes ovarian cancer progression, which needs further investigation (Figure 2).
HIF-1α Promotes the Glycolysis Pathway in Ovarian Cancer
The glycolysis pathway is abnormally activated in malignant tumours even under normoxia (the Warburg effect) and promotes the progression of cancers (63), including gallbladder cancer (64), pancreatic cancer (65), cervical cancer (66) and ovarian cancer (67). HIF-1α, as a transcription factor, can regulate metabolism-associated genes, which contribute to the Warburg effect (68,69). SIK2 is associated with poor outcomes in ovarian cancer, and previous studies have demonstrated that SIK2 induces ovarian cancer progression by activating the PI3K/AKT pathway (70,71). SIK2 upregulates the expression level of HIF-1α, which enhances the transcription of glycolysis-associated genes (HK2 and PFKL), inducing the metastasis and invasion of ovarian cancer (72). As the major rate-limiting enzymes in the glycolysis pathway, overexpressed HK2 and PFKL promote the Warburg effect, which can assist the uncontrolled proliferation of cancer cells (73-75). The expression level of the long noncoding RNA (lncRNA) GEHT1 is enhanced in ovarian cancer tissue compared with normal tissue and is associated with a poor prognosis. LncRNA GEHT1 can interact with von Hippel-Lindau (VHL) protein to block the degradation of HIF-1α, thus modulating lactate production and influencing the growth of ovarian cancer (76) (Figure 2).
Mesothelial Cells
Mesothelial cells are among the main cellular components comprising the peritoneal cavity and omentum, which are the most common metastatic sites of advanced ovarian cancer. Mesothelial cells have been proven to play a critical role in contributing to ovarian cancer metastasis (77). A collagen-remodelling gene signature containing COL1A1 and LOX is associated with the progression of ovarian cancer and unfavourable patient survival (78). Lysyl oxidase (LOX) has been proven to act as a tumour promoter (79) and to be regulated by HIF-1α in ovarian cancer (80). Under hypoxic stress, HIF-1 can promote the expression of COL1A1 in mesothelial cells and the expression of LOX in both mesothelial and cancer cells, which remodels collagen to accelerate the invasion of ovarian cancer (81) (Figure 3).
Immune Cells
The tumour immune microenvironment contains immune cells that play considerable roles in the processes of tumour promotion and suppression (82). Studies have demonstrated that different types of tumour-infiltrating immune cells can indicate different prognoses, and M2 macrophages have been significantly associated with worse outcomes for patients with ovarian cancer (83,84). In the hypoxic microenvironment, ovarian cancer cells can recruit macrophages and induce their M2 transformation. Transformed macrophages likely promote the expression of miR-233 via an HIF-1α-dependent pathway, and miR-233 is then secreted in exosomes, which can be internalized by ovarian cancer cells. Drug resistance is promoted via exosome-derived miR-233, which activates the PI3K/AKT pathway by suppressing the expression of PTEN (85). Cancer stem-like cells (CSCs) constitute a group of special cells that have self-renewal ability and are associated with chemoresistance (86). Cytokine-induced killer cells (CIKs) were recognized in the 1990s, and investigations have demonstrated that CIKs may serve as a novel treatment for cancers, including ovarian cancer (87,88). Lymphocyte function-associated antigen-1 (LFA-1) is located on the membrane of CIKs and can specifically recognize intercellular adhesion molecule-1 (ICAM-1), which is highly expressed in tumour cells, thereby mediating tumour cell death (89-91); this means that the downregulation of ICAM-1 may protect cancer cells against this killing effect. In spheroid cells, which are mainly constructed by CSCs, HIF-1α downregulates ICAM-1, shielding CSCs from the cellular lysis mediated by CIK cells (92) and contributing to the progression of ovarian cancer (Figure 3).
Adipocytes
Obesity has been proven to be associated with a poor prognosis in ovarian cancer (93,94). Studies have demonstrated that adipocytes promote ovarian cancer progression (95,96). If metastasis were a random event in ovarian cancer, then the organs in the peritoneal cavity would be equally affected by focal metastasis. However, the most common distant metastasis site is the omentum, which is primarily composed of adipocytes (97). Adipocytes secrete monocyte chemotactic protein-1 (MCP-1), which binds C-C motif chemokine receptor 2 (CCR-2) on ovarian cancer cells to activate the PI3K/AKT/mTOR pathway, thereby increasing the expression of HIF-1α, which contributes to ovarian cancer metastasis (98). During the process of adipocyte differentiation, autotaxin (ATX) is released from adipocytes and promotes the synthesis of lysophosphatidic acid (LPA) (99), which is present at a high concentration in the ascites of patients with ovarian cancer (100). Early in 2006, research showed that the PI3K/Akt/mTOR pathway may be required for LPA-induced activation of HIF-1α (101). Activation of the PI3K/AKT/mTOR/HIF-1α axis promoted the expression of Twist, a transcription factor that increases discoidin domain receptor 2 (DDR2), which is activated by … (106).
Compounds Extracted From Plants
Ginsenoside 20(S)-Rg3 can upregulate the expression of miR-519a-5p, which binds to the 3'-UTR of HIF-1α mRNA and thus directly downregulates the expression of HIF-1α (107). Considering that the Warburg effect plays a large role in promoting cancer progression (63), the inhibition of HIF-1α mediated by miR-519a-5p suppressed the expression of HK2, which plays an important role in the Warburg effect; this pathway may explain, at least in part, why ginsenoside 20(S)-Rg3 shows antitumoural activity in ovarian cancer (107) (Figure 4).
Topotecan (TPT) is a derivative of camptothecin, which originates from Camptotheca acuminata (108), and is used in the second-line treatment of ovarian cancer. A clinical trial demonstrated that TPT can downregulate HIF-1α in advanced solid tumours (109). In human glioma cells, TPT can downregulate HIF-1α in a Topo I-dependent manner (110). When U251-HRE xenografts were treated with a low dose of daily TPT combined with bevacizumab, tumour growth was suppressed significantly, and the DNA-damage level of the two-agent treatment group was similar to that of the TPT-treatment group, which indicates that the suppression of the HIF-1α protein may contribute to the growth suppression (111). In ovarian cancer, TPT promotes HIF-1α mRNA:Topo I complex formation and thereby hinders the translation of the HIF-1α protein (45). Because the p53 transcriptional function is eliminated when p53 binds to HIF-1α, the depletion of HIF-1α mediated by TPT can restore the function of p53, downregulate the expression of ABCB5 and ABCB1, modulate the cisplatin and paclitaxel resistance of ovarian cancer and promote apoptosis (45) (Figure 4).
For many years, phenolic compounds extracted from plants have been shown to play a critical role in the fight against cancer (112). In 2020, research showed that polyphenol extracts of Carya cathayensis can inhibit the proliferation of ovarian cancer cells and suppress VEGF expression via the inhibition of HIF-1α (113). Earlier, in 2016, gallic acid, a main polyphenolic compound of C. cathayensis, was shown to upregulate PTEN expression and suppress the phosphorylation of AKT, which led to the downregulation of HIF-1α and VEGF, hampering angiogenesis in ovarian cancer (114) (Figure 4).
The total extract of Scutellaria baicalensis inhibits the expression and enhances the degradation of HIF-1α via the inactivation of the PI3K/AKT and MEK/ERK pathways and the promotion of the proteasome and lysosome pathways, respectively. The downregulation of HIF-1α reverses the chemoresistance of ovarian cancer cells to cisplatin (115). Wogonin is a main component of S. baicalensis Georgi, and it has been demonstrated that FV-429, a derivative of wogonin, has antitumoural activity (116). In hypoxic ovarian cancer cells, FV-429 can interfere with the expression and phosphorylation of c-Src, inhibit the translocation and DNA-binding activity of STAT3, and inhibit HIF-1α expression, causing the downregulation of HK2 and VEGF and enhancing the G2/M arrest induced by paclitaxel (117) (Figure 4).
The total triterpenoid saponins extracted from the seeds of Camellia sinensis contribute to an antiangiogenic effect on ovarian cancer by reducing VEGF expression in a HIF-1α-dependent manner (118). Theasaponin E1, the main component of the C. sinensis extract (119), can reduce the expression of Dll4 and Jagged1 to inhibit the Notch1 pathway; the Notch1 pathway is known from other studies to inactivate ATM. The resulting activation of ATM upregulates the expression of PTEN and reduces the phosphorylation of AKT and of downstream proteins of the AKT pathway, such as HIF-1α, thereby inhibiting the expression of VEGF (120,121) (Figure 4).
Compounds Extracted From Animals
Not only compounds extracted from plants but also compounds extracted from animals can inhibit HIF-1α expression and suppress ovarian cancer progression. Bufalin, which is obtained from the skin and parotid venom glands of toads, is a common traditional Chinese medicine. Bufalin has been proven to protect against various kinds of cancers, including ovarian cancer (122,123). Bufalin did not affect the viability of normal ovarian epithelial cells even at doses as high as 40 mM but significantly restrained the growth of the OAW28 cell line (an ovarian epithelial carcinoma cell line). In ovarian cancer cells, bufalin downregulated HIF-1α by inhibiting the phosphorylation of mTOR, thereby suppressing cell growth and migration (124) (Figure 4).
Synthetic Drugs
Currently, cisplatin is the first-line chemotherapy drug for a variety of malignant tumours, and HIF-1α is associated with cisplatin resistance (124). However, in cisplatin-sensitive ovarian cancer cells, cisplatin promotes HIF-1α degradation via the proteasome pathway, induces the downregulation of LDH-A expression, and then increases the level of reactive oxygen species (ROS) by inducing the cells to produce ATP through oxidative phosphorylation, which modulates cisplatin resistance and promotes the death of ovarian cancer cells (125-127) (Figure 4).
Although metformin is a common agent for diabetes treatment, a study has shown that metformin can inhibit the expression of HIF-1α and the growth of ovarian cancer cells (128). As previously noted, mesothelial cells in the tumour microenvironment of ovarian cancer play crucial roles in tumour progression (81). In addition to its influence on cancer cells alone, in mesothelial cells metformin induces the expression of the tricarboxylic acid (TCA) cycle enzyme succinyl-CoA ligase (SUCLG2), leading to metabolic reprogramming and reducing the production of succinic acid. As an inhibitor of PHD, succinic acid stabilizes HIF-1α, so its reduction promotes HIF-1α degradation. In addition, metformin induces the downregulation of TGF-β1 in ovarian cancer cells, and the reduction in secreted TGF-β1 restores PHD activity, leading to increased HIF-1α degradation. In summary, the reduced expression of HIF-1α results in the downregulation of IL-8 and hinders the invasion of ovarian cancer cells (129). Considering that IL-8 can promote ovarian cancer progression through several pathways (130-132), further investigation into the pathways by which metformin mediates its effects on ovarian cancer is recommended (129) (Figure 4).
SC-144, a novel synthetic agent, can target gp130 and kill ovarian cancer cells (133). A genome-wide bromouridine sequencing (Bru-seq) analysis showed that longer exposure to SC-144 led to lower HIF-1α expression but a higher hypoxia-inducible factor antisense (HIF-1α-AS) level (134). Considering that HIF-1α-AS downregulates the expression of HIF-1α (135) and that HIF-1α plays a role in the progression of cancer, we speculate that SC-144 inhibits the proliferation of ovarian cancer, at least to some extent, via the HIF-1α-AS/HIF-1α axis. However, the function of HIF-1α-AS in malignant tumours is complicated (136,137), and data on the role of HIF-1α-AS in ovarian cancer have not been reported. Hence, future investigation into the function of the HIF-1α-AS/HIF-1α axis in ovarian cancer is recommended (Figure 4).
Noncoding RNAs
MiRNAs belong to the family of noncoding RNAs, and some miRNAs bind target mRNAs to regulate the expression of genes and influence the development of cancer. Transfection of miR-195-5p can inhibit PSAT1 directly because this miRNA interacts with the 3'-UTR of PSAT1 mRNA, thus suppressing the phosphorylation of β-catenin and GSK3β, downregulating the expression of HIF-1α and VEGF, inducing apoptosis and reducing cisplatin chemoresistance (138). MiR-138 is downregulated in ovarian cancer, especially in invasive cell sublines, and acts as a cancer suppressor. Overexpression of miR-138 downregulates HIF-1α expression and induces the inhibition of Slug (139), which is associated with ovarian cancer metastasis (140) (Figure 4).
2-Methoxyestradiol (2ME2)
2ME2 is a derivative of estradiol and has been proven to downregulate HIF-1α at the posttranscriptional level (153). In 2009, a phase II study of 2ME2 administered at a dose of 1000 mg four times per day in recurrent, platinum-resistant ovarian cancer patients reported that no objective response was observed, but 7 out of 18 patients had stable disease, and 2 of them had stable disease for more than 12 months (142). In taxane-refractory, metastatic castration-resistant prostate cancer patients, 2ME2 did not benefit patients, with a poor 6-month PFS rate (only 5.35%) (141). In another phase II clinical trial, patients were divided into two arms (arm A, 2ME2 alone, n=10; arm B, 2ME2 combined with sunitinib malate, n=7). However, owing to intolerable toxicities that may have been caused by the high dose of 2ME2 (1500 mg three times per day), 6 patients were required to quit the study, and no objective responses were observed in either arm (143) (Table 2).
Tanespimycin
Heat-shock protein 90 can stabilize the HIF-1α protein by inhibiting the ubiquitination and proteasomal degradation of HIF-1α (154). Tanespimycin is a heat-shock protein 90 inhibitor (155). In 2006, 20 renal cell cancer (RCC) patients were enrolled in a phase II study that focused on the efficacy and toxicities of tanespimycin. Five of eight papillary renal cell cancer patients and 9 of 12 patients had stable disease, but none of them achieved a complete or partial response. Thirty percent of patients required a reduced dose because of toxicities (144). In hormone-refractory metastatic prostate cancer patients, none achieved a PSA response, and the 6-month OS rate was 71% (145) (Table 2).
Vorinostat
Vorinostat inhibits HIF-1α protein expression at the translational level (156). In 2014, a total of 32 melanoma patients were given vorinostat, 18 of whom had stable disease, with a median PFS of 5 months, or a partial response. Of the patients with a partial response, one remained on treatment for 7 cycles and the other for 5 cycles; each cycle lasted 28 days. In addition, two patients who had stable disease had dramatic responses (33-50% shrinkage), which lasted only approximately …
EZN-2968
EZN-2968 is an RNA antagonist that can specifically bind to HIF-1α mRNA and inhibit its expression, downregulating HIF-1α protein expression in cancer cells (157). In a pilot trial of patients with refractory advanced solid tumours, EZN-2968 downregulated HIF-1α expression at the mRNA (5/6) and protein (3/5) levels in some patients (148). In a phase Ib trial, 2 of 9 patients with advanced hepatocellular cancer had a partial response or stable disease, and HIF-1α mRNA was downregulated in the cancer tissue (149) (Table 2).
EZN-2208
EZN-2208 is a soluble derivative of SN-38, which is an active metabolite of irinotecan (158). EZN-2208 can inhibit the expression of HIF-1α mRNA and protein, in which it is superior to irinotecan, thus controlling the angiogenic response (159). A total of 211 advanced colorectal cancer patients were enrolled in a phase II study and divided into 3 arms (arm A: EZN-2208, for KRAS-mutant patients; arm B: EZN-2208 + cetuximab, for KRAS-wild-type patients; arm C: irinotecan + cetuximab, for KRAS-wild-type patients). When comparing the ORR, OS, PFS and 6-month PFS rate between arm B and arm C, arm B showed slightly superior efficacy. However, there was no statistically significant difference between these two arms (150) (Table 2).
CRLX101
Antiangiogenic therapy induces increased HIF-1α expression, and CRLX101 reduced HIF-1α expression when combined with bevacizumab in animal models (160). In a phase II study of 63 recurrent ovarian cancer patients, the 29 patients who received single-agent CRLX101 had an overall response rate (ORR) of 11%. When 34 patients were treated with CRLX101 combined with bevacizumab, the ORR increased to 18% (152). However, in another phase II study of 111 advanced renal cell cancer patients, CRLX101 combined with bevacizumab did not show any added benefit to patients compared with standard treatment (151) (Table 2).
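As a back-of-the-envelope check (not reported in the cited trial), the ORR difference between the two CRLX101 arms can be examined with Fisher's exact test on the implied 2×2 response table; with samples this small, the difference is far from statistically significant.

```python
# Illustrative significance check of the ORR difference described above:
# ~11% of 29 patients on CRLX101 alone vs ~18% of 34 on CRLX101 + bevacizumab.
from scipy.stats import fisher_exact

responders_mono, n_mono = round(0.11 * 29), 29    # ~3 of 29 responders
responders_combo, n_combo = round(0.18 * 34), 34  # ~6 of 34 responders
table = [[responders_mono, n_mono - responders_mono],
         [responders_combo, n_combo - responders_combo]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")  # p >> 0.05 here
```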
CONCLUSION AND FUTURE PROSPECTS
HIF-1α has been proven to be overexpressed in more than 70% of human cancers, including ovarian cancer (Figures 5A-C) (161,162), and occupies a central position in multiple pathways of ovarian cancer. HIF-1α acts as a transcription factor to regulate a variety of proteins, thereby promoting the development of ovarian tumours. In the ovarian cancer microenvironment, various factors can also regulate HIF-1α expression in nontumour cells and thereby affect the malignant biological properties of tumour cells.
On the basis of the proposed concept of precision medicine, targeted drugs developed on the basis of tumour characteristics have emerged in an endless stream. Among these drugs, antiangiogenic agents mainly target VEGF, thereby inhibiting a series of pathophysiological processes regulated by VEGF and benefiting patients with tumours such as ovarian or breast cancer (163,164). Because VEGF is a downstream gene of HIF-1a, VEGF expression decreases when the expression or function of HIF-1a is inhibited (55,113,114,118), and HIF-1a also regulates the expression of other genes that promote tumour progression. Therefore, we conclude that targeting HIF-1a may effectively inhibit tumour development. Studies have shown that therapies based either on monomeric components extracted from plants or on classic drugs that have been used clinically in cancer treatment for many years can inhibit ovarian cancer progression by directly or indirectly inhibiting HIF-1a. Recently, clinical trials have been conducted to evaluate drugs that modulate HIF-1a expression in many kinds of solid tumours. However, the efficacy in these trials has been limited and variable, and only EZN-2968 binds HIF-1a mRNA to regulate HIF-1a protein expression directly; the remaining drugs all regulate HIF-1a indirectly. It is therefore worth exploring new drugs that interact with the HIF-1a protein directly. In addition, almost all of the drugs were taken orally. Hypoxia occurs in tumours and is associated with newly formed abnormal microvessels (165), and chemotherapy drugs cannot reach the tumour site because of the high interstitial fluid pressure caused by these abnormal microvessels (166). Consequently, HIF-1a inhibitors may not reach the cells that produce HIF-1a. It is recommended not only to develop new agents that target HIF-1a directly but also to attach importance to drug delivery methods so that ideal drug concentrations can be achieved. In a clinical trial of ovarian cancer, the ORR and PFS were superior in the bevacizumab + HIF-1a inhibitor group compared with the HIF-1a inhibitor group (152). We may therefore infer that, when applying a HIF-1a inhibitor to treat ovarian cancer, it is better to combine it with other agents.
In view of the tremendous heterogeneity between different types of tumours, the unsatisfactory results observed to date do not necessarily indicate that these kinds of agents will fail in the future. Clinical trials have shown that combining HIF-1a inhibitors with bevacizumab may benefit ovarian cancer patients (152). Further exploration of the efficacy of HIF-1a inhibitors in ovarian cancer is necessary. In addition, since HIF-1a is a transcription factor that facilitates the adaptation of both malignant and normal cells to hypoxic stress in the internal environment, it is particularly important to design drugs targeting only the HIF-1a expressed in tumours so as to reduce adverse effects.
AUTHOR CONTRIBUTIONS
HZ and XW contributed to the conception, design and drafting of the manuscript. Z-wD, T-mX, and X-jW contributed to data collection and drafting the manuscript. WL and J-lG prepared the figures. JL prepared the tables. All authors approved the final version for submission. HZ oversaw the study.
Clinical study of late-onset hemorrhagic cystitis after allo-HSCT without in vitro T-cell depletion
This study aims to investigate hemorrhagic cystitis (HC) after allogeneic hematopoietic stem cell transplantation (allo-HSCT) without in vitro T-cell depletion. Patients receiving allo-HSCT in 2019 were enrolled. The occurrence and clinical characteristics of HC after HLA-identical HSCT and haploidentical HSCT were retrospectively analyzed. BK, JC, cytomegalovirus, and other viruses were monitored when HC occurred. Conventional HC treatment was performed. Additionally, 5 cases of severe refractory HC were treated with adipose-derived mesenchymal stem cells (ADSCs) in addition to conventional HC treatment. In total, 54 patients with allo-HSCT were enrolled, including 12 cases with HLA-identical HSCT and 42 cases with haploidentical HSCT. Among them, 17 developed late-onset HC (LOHC); there was no early-onset HC. The median onset time was 33.5 (9–189) days, with a median duration of 19 (5–143) days. There were 8 cases of grade III HC and 2 cases of grade IV HC. The cumulative incidence of LOHC in the 54 patients was 29.6%, and the cumulative incidence of LOHC in the 42 patients with haploidentical HSCT was 40.5%. The 1-year expected progression-free survival (PFS) of the 26 patients without HC was 86.6%, and that of the 16 HC patients was 74.5%; however, the difference was not statistically significant (P = .326). The urine BK virus of 14 patients was positive, with a minimum of 1.98 × 10⁵ copies/mL and a maximum of 8.96 × 10⁵ copies/mL. For the 5 patients with severe refractory HC, the lowest ADSC infusion dose was 0.9 × 10⁶/kg and the highest was 1.4 × 10⁶/kg. All 5 patients were cured. The incidence of LOHC is higher after haploidentical HSCT. LOHC is positively correlated with urine BK virus. LOHC has no obvious effect on the overall PFS of patients. ADSC infusion has a good therapeutic effect on severe and prolonged LOHC.
Introduction
Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is currently recognized as one of the effective treatments for hematological malignancies. Hemorrhagic cystitis (HC), one of the common and serious complications of allo-HSCT, is clinically characterized by hematuria and other bladder irritation symptoms, including frequent urination, urinary urgency, and painful urination. HC is defined as the continuous presence of microscopic or gross hematuria for 7 days or more, starting from pretreatment, in the absence of bleeding caused by menstruation and/or other gynecological diseases, disseminated intravascular coagulation, multiple organ dysfunction syndrome, or sepsis.
According to the time of occurrence, HC can be divided into early-onset HC (occurring within 28 to 72 hours of the conditioning regimen) and late-onset HC (LOHC; occurring more than 72 hours after the conditioning regimen). It has been reported that the incidence of HC is 7% to 68%, while the incidence of severe HC is 29% to 44%. [1][2][3] Severe HC seriously affects patients' quality of life after transplantation and also increases their mental and economic burden. In this cross-sectional study, patients receiving allo-HSCT in 2019 in our Transplant Center were included, and the occurrence and treatment of HC were analyzed. The cumulative incidence of HC, its relationship with BK virus, its impact on progression-free survival (PFS) after transplantation, and the treatment of severe HC with ADSCs were investigated.
Study participants
In total, 54 patients receiving allo-HSCT in 2019 in our Transplant Center were included in this study. There were 31 males and 23 females, with a median age of 37.5 years (range, 4 to 61 years). Among these patients, there were 30 cases of acute myeloid leukemia, 13 cases of acute lymphoblastic leukemia, 6 cases of severe aplastic anemia (SAA), 2 cases of myelodysplastic syndrome, 2 cases of chronic myelogenous leukemia, and 1 case of primary mucopolysaccharidosis. Moreover, there were 12 cases of HLA-identical sibling donor (ISD) HSCT (9 cases of myeloablative conditioning (MAC) and 3 cases of reduced-intensity conditioning (RIC)) and 42 cases of haploidentical related donor (Haplo-RD) HSCT (33 cases of MAC and 9 cases of RIC) (Table 1). The follow-up ended on May 31, 2020. All patients had engraftment and complete hematopoietic reconstruction. Informed consent was obtained from each patient. This study was approved by the Ethics Committee of the First Affiliated Hospital of Xinjiang Medical University (approval no.: 20180622-10).
Stem cell source
The source of stem cells in SAA patients was bone marrow mobilized by G-CSF plus peripheral blood, and the source for the rest of patients was the peripheral blood mobilized by G-CSF.
HC classification and prevention
For the HC classification, cases with microscopic hematuria were classified as grade I, gross hematuria as grade II, gross hematuria with blood clots as grade III, and gross hematuria and blood clots complicated with urethral obstruction as grade IV. Grades I-II were mild, while grades III-IV were severe. [4][5][6] For the prevention of HC, during cyclophosphamide (CTX) application, a large amount of uniform fluid replacement was given over 24 hours, and mesna (sodium 2-mercaptoethanesulfonate) was used to prevent hemorrhagic cystitis. For high-dose rehydration, the total daily fluid volume was calculated at 100 to 120 mL/kg/day, given as a uniform intravenous drip continuing for 24 hours. To alkalinize the urine, the sodium bicarbonate dosage was 0.5% of the total rehydration volume. To achieve diuresis, furosemide was injected, 20 mg each time, once every 6 hours, while potassium was supplemented. The mesna dosage was 1.2 times that of CTX, with an initial dose of 20% of the CTX dose applied simultaneously with CTX; the rest of the mesna was maintained for 24 hours by intravenous drip.
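The arithmetic behind this prophylaxis protocol can be summarised in a few lines. The following is a minimal sketch of the dose relationships stated above; the function name and the example weight/CTX dose are illustrative assumptions, not part of the protocol, and the bicarbonate volume is computed here against the upper fluid bound as one possible reading of "0.5% of the total rehydration volume".

```python
# Sketch of the HC-prophylaxis arithmetic described above. Ratios come from
# the text; the example patient values are hypothetical.

def hc_prophylaxis_plan(weight_kg: float, ctx_dose_mg: float) -> dict:
    """Compute daily fluid, bicarbonate, and mesna amounts per the protocol."""
    fluid_ml_low = 100 * weight_kg           # lower bound, 100 mL/kg/day
    fluid_ml_high = 120 * weight_kg          # upper bound, 120 mL/kg/day
    bicarbonate_ml = 0.005 * fluid_ml_high   # 0.5% of total rehydration volume (assumed upper bound)
    mesna_total_mg = 1.2 * ctx_dose_mg       # total mesna = 1.2 x CTX dose
    mesna_initial_mg = 0.2 * ctx_dose_mg     # 20% of CTX dose given with CTX
    mesna_infusion_mg = mesna_total_mg - mesna_initial_mg  # remainder over 24 h
    return {
        "fluid_ml_per_day": (fluid_ml_low, fluid_ml_high),
        "bicarbonate_ml": bicarbonate_ml,
        "mesna_total_mg": mesna_total_mg,
        "mesna_initial_bolus_mg": mesna_initial_mg,
        "mesna_24h_infusion_mg": mesna_infusion_mg,
    }

# Example: a hypothetical 60 kg patient receiving 3000 mg cyclophosphamide.
print(hc_prophylaxis_plan(60, 3000))
```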
Examinations and virus detection for HC
The patients developing HC were subjected to urinary tract ultrasound, gynecological ultrasound (for females), multiple routine urine tests, urinary bacteria and fungi cultures, and cystoscopy if necessary. Moreover, the patients developing HC also underwent detection of blood cytomegalovirus (CMV) antibody and DNA, blood BK and JC viruses, and PCR detection of urine CMV, BK, and JC viruses. Meanwhile, blood and urine BK and JC viruses were tested in patients without HC during the same period.
General treatment of HC
Once diagnosed, the patients immediately received high-volume fluid replacement (with the daily fluid volume calculated at 100 to 120 mL/kg) as a continuous intravenous infusion over 24 hours. Meanwhile, sodium bicarbonate was given, combined with furosemide for diuresis, together with empirical antiviral therapy with ribavirin or acyclovir.
Adipose-derived mesenchymal stem cell (ADSC) treatment for severe HC
For the patients with severe HC, if no improvement was achieved after more than 1 month of comprehensive treatments (such as antiviral therapy, rehydration, and diuresis), ADSCs were used as adjuvant therapy. ADSCs were obtained from healthy third parties. Hailong Yuan, the project leader, provided the source of ADSCs and was confirmed as a healthy donor, having undergone screening for hepatitis immunity, CMV, Epstein-Barr virus, and AIDS at the First Affiliated Hospital of Xinjiang Medical University (photos, videos, and testimonials were available for these medical examinations). ADSC preparation was performed by Beijing Health & Biotech Co., Ltd. (Beijing, China). A framework agreement with our hospital had been signed, confirming the company's qualification according to the Office of Translational Medicine and the Medical Affairs Department.
The ADSC dose was 1 × 10⁶/kg for each infusion (once a week), given after intravenous injection of 5 mg dexamethasone. During the ADSC infusion, the patient's blood pressure, heart rate, respiration, body temperature, and any chills and/or dyspnea were closely monitored, and the patient's subjective symptoms were recorded. Urine samples were obtained at 2, 4, 8, 12, and 24 hours after infusion. The patient received a routine urine test every day, and the patient's symptoms and signs were recorded in detail. Hepatitis B, hepatitis C, humoral immunity, CMV-DNA, Epstein-Barr virus, and blood and urine BK and JC viruses were checked every 2 weeks until 1 month after ADSC treatment. After 3 infusions of ADSCs, if there was still no significant improvement in symptoms, the infusions were stopped.
Therapeutic efficiency evaluation
The therapeutic efficacy for HC was evaluated as follows: cured: symptoms of frequent urination, urgency, and dysuria disappeared, and routine urine tests showed no abnormalities for 7 consecutive days; markedly effective: severe HC changed into mild HC (grades I-II); effective: frequent urination, urgency, pain, and other symptoms were relieved, the urine red blood cell count decreased by more than 50%, and grade IV HC changed into grade III; ineffective: the patient's symptoms and laboratory tests showed no improvement.
Statistical analysis
SPSS 25.0 statistical software was used for statistical analysis. The t-test was used for the comparison of ages between the 2 groups, and the χ² test was used for the comparison of gender and disease types. The cumulative incidence of HC was analyzed by Kaplan-Meier survival analysis and the log-rank test.
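The authors performed these analyses in SPSS; for readers who prefer a scriptable workflow, the following is a minimal sketch of the same Kaplan-Meier/log-rank approach using the Python lifelines package. The toy arrays of onset times and event flags are illustrative, not study data.

```python
# Hedged sketch of cumulative-incidence estimation and a log-rank comparison
# with lifelines; the data below are synthetic placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# days from transplant to HC onset (or last follow-up); 1 = developed HC
days = np.array([9, 33, 45, 60, 90, 120, 150, 180, 200, 220])
hc_event = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(days, event_observed=hc_event)
# cumulative incidence of HC = 1 - Kaplan-Meier "HC-free survival" estimate
cum_incidence = 1 - kmf.survival_function_["KM_estimate"]
print(cum_incidence.tail(1))

# log-rank comparison between two conditioning groups (e.g., MAC vs RIC)
mac = slice(0, 6)
ric = slice(6, 10)
result = logrank_test(days[mac], days[ric],
                      event_observed_A=hc_event[mac],
                      event_observed_B=hc_event[ric])
print(result.p_value)
```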
Analysis of HC incidences
Of these 54 patients with allo-HSCT, 17 (11 females and 6 males) developed late-onset HC, while no early-onset HC case was reported. The baseline information for the 17 patients with HC is presented in Table 2. All of these cases occurred after haploidentical HSCT, while no HC was reported in the patients with HLA-identical HSCT. The median onset time was 33.5 days (range, 9 to 189 days) after transplantation, with a median duration of 19 days (range, 5 to 143 days). There were 1 case of grade I, 6 cases of grade II, 8 cases of grade III, and 2 cases of grade IV. For these 54 patients, the cumulative LOHC incidence was 29.6% (95% CI, 17.5%-41.7%), and the cumulative incidence of LOHC in the 42 patients with haploidentical HSCT was 40.5% (95% CI, 25.6%-55.4%) (Figs. 1 and 2).
Among these 42 patients with haploidentical HSCT, 33 were pretreated with MAC; 15 of them developed HC, and 6 developed severe HC, with a cumulative LOHC incidence of 45.5% (95% CI, 28.45%-62.55%). On the other hand, 9 were pretreated with RIC, and 2 developed HC (both severe), with a cumulative LOHC incidence of 23% (95% CI, 7.72%-38.28%). The cumulative incidence of LOHC in MAC patients was higher than that in RIC patients, although the difference was not statistically significant (P > .05) (Fig. 3).
Analysis of HC and survival
By the end of the follow-up period in May 2020, the 1-year expected PFS of the 26 patients without HC was 86.6% (95% CI, 76.8%-96.4%), and the 1-year expected PFS of the 16 HC patients was 74.5% (95% CI, 56.9%-92.1%). PFS for patients without HC was higher than that of the HC patients, although the difference was not statistically significant (P = .326) (Fig. 4).
Analysis of HC and virus infection
In total, the 17 HC patients were negative for CMV in blood and urine and negative for Epstein-Barr virus. One patient was positive for BK-DNA in blood, and the remaining 15 patients were negative for BK-DNA. Moreover, 14 patients were positive for BK virus in the urine, with a minimum of 1.98 × 10⁵ copies/mL and a maximum of 8.96 × 10⁵ copies/mL. Furthermore, 2 patients were negative for BK and JC viruses in both blood and urine, and 2 patients were positive for BK and JC viruses in the urine. The occurrence of HC was positively correlated with urine BK virus.
Corticosteroid treatment for severe HC
In total, 4 patients received corticosteroid treatment. One patient was negative for all related viruses; after the failure of conventional treatment, and in the absence of obvious infection, methylprednisolone was administered intravenously, and the symptoms improved significantly after 3 days. Moreover, 3 patients had acute graft-versus-host disease (aGVHD) before or at the time of HC occurrence. After the BK virus became negative, if the patients still had symptoms, they were given intravenous methylprednisolone for 3 to 5 days. These patients were subsequently cured.
ADSC treatment for severe HC
Among the 17 HC patients, 5 had symptoms persisting beyond 1 month, and ADSC treatment was then performed (Table 3). The planned dose of each infusion was 1 × 10⁶/kg, and the actual infusion doses ranged from 0.9 × 10⁶/kg to 1.4 × 10⁶/kg. All 5 patients were cured, showing therapeutic effectiveness after 3 infusions. In the second patient, HC lasted for 120 days; this patient received hemostasis treatment under cystoscopy and embolization of the bilateral internal iliac arteries, which did not achieve sustained clinical efficacy. After 3 ADSC infusions, the therapeutic efficacy was evaluated as effective; at 1 week after the fifth infusion, it was evaluated as markedly effective, and the patient was cured after 2 weeks. In the third patient, HC lasted for 143 days, the longest duration. This patient received continuous bladder irrigation and hemostasis treatment under cystoscopy, which did not achieve sustained clinical efficacy. After 3 ADSC infusions, the efficacy was evaluated as effective; after the fifth infusion, it was evaluated as markedly effective.
Discussion
HC is a common complication after allo-HSCT. Lu et al [7] have shown that the occurrence of HC after Haplo-RD HSCT is significantly higher than that after ISD HSCT. ATG has been considered to be related to the occurrence of HC. Kerbauy et al [8] have shown that the use of ATG in the conditioning regimen significantly increases the incidence of HC associated with BK virus infection. In this study, the HC patients were all Haplo-RD HSCT patients, also suggesting that ATG represents an independent pathogenic factor for HC. Salem et al [9] have shown that in haploidentical HSCT with fludarabine in the conditioning regimen, the incidence of BK virus-related HC in the ATG 7.5 mg/kg group was higher than that in the ATG 6 mg/kg group (15% and 3%, respectively). In this study, 10 mg/kg ATG was applied; however, whether different doses of ATG affect the occurrence of HC remains to be studied. Viral infection has been generally considered the main cause of HC. It has been shown that adenovirus (ADV) infection is a major cause of LOHC after allo-HSCT. [10,11] Moreover, the occurrence of LOHC is related to CMV and influenza virus. [12,13] Arthur et al [14] first described the relationship between HC and BK virus, a member of the polyomavirus family, after bone marrow transplantation, and most serum samples from adults were positive for BK virus. In allo-HSCT, the chemotherapy in the conditioning regimen damages the urothelium, prolongs the duration of immunosuppression, causes virus replication and shedding, induces inflammation and damage to the bladder mucosa, and ultimately leads to hematuria, pain, and other uncomfortable symptoms. BK virus has been generally considered one of the main causes of HC, and about 4% to 50% of HC cases are related to BK virus. [8,[14][15][16][17][18][19] In this study, our results showed that BK virus is the main cause of HC. Almost all HC patients had positive BK virus in the urine with high virus copy numbers, but no significant correlation between the copy number and HC severity was found. There seemed to be no obvious correlation between JC virus and the occurrence of HC. Almost no BK or JC virus was detected in the blood samples from the HC patients, suggesting that blood BK and JC detection may have no clinical significance for the diagnosis of HC. In recent years, studies have shown that BK virus increases the risk of treatment-related mortality but has only a limited impact on overall survival after transplantation. [19][20][21] In this study, our results showed that HC did not increase treatment-related mortality and exerted no significant effect on PFS. Retrospective and prospective investigations with larger sample sizes are still needed to address these issues in the future.
Mesenchymal stem cells (MSCs) represent a type of adult pluripotent stem cell with multiple differentiation potentials. [22] MSCs promote tissue repair through the following 2 pathways [23]: directly participating in tissue damage repair within the local microenvironment; and indirectly participating in tissue damage repair by secreting a variety of cytokines and cell growth factors that improve the microenvironment at the site of tissue damage. MSCs have low immunogenicity and immunomodulatory effects. [24][25][26] Therefore, in recent years, there have been more and more studies using MSCs to treat HC. Ringden et al [27] reported 12 cases of HC after HSCT and showed that hematuria completely disappeared after intravenous infusion of MSCs. In another report, [29] 7 of 33 HC patients were treated with at least 1 MSC infusion: 6 patients received the infusion within 3 days of the onset of hematuria, and 1 received it at 40 days after onset. Among these patients, only 3 reported significant efficacy; therefore, the therapeutic efficacy is difficult to evaluate. In this study, all 5 HC cases in our center were refractory and protracted; they had previously been treated with various methods, and none achieved relief. After infusion of ADSCs, satisfactory clinical effects were achieved, suggesting that ADSCs indeed have a good clinical treatment effect on severe HC. At present, in our center, ADSC treatment is mainly used for refractory patients whose disease course lasts for more than 1 month. In the future, long-term close follow-up of patients receiving ADSC infusions is necessary. For HC patients, combining previous treatment experience, the stratified treatment approach of our center is summarized as follows: (1) Although cidofovir is the first-line treatment for BK virus, one study showed that cidofovir alone did not relieve severe HC symptoms. [30] Mert et al [31] reported 3 cases of BK-related HC after allo-HSCT and concluded that treatment with cidofovir combined with immunosuppressive drugs would be more effective. Therefore, for BK virus and other virus-related HC, on the basis of conventional treatment, the approach should include reducing or stopping glucocorticoids and other immunosuppressants and continuously infusing human immunoglobulin for 3 to 4 days to enhance nonspecific immunotherapy. (2) Bladder epithelial cells are not classic target organs of immune damage in aGVHD. However, Seber et al [5] retrospectively analyzed 1908 patients after HSCT and concluded that GVHD was an independent risk factor for LOHC. Lee et al [6] performed univariate and multivariate analyses and showed that grade III-IV aGVHD was a risk factor for LOHC. This may be related to the bladder epithelium being attacked as a target organ of GVHD, as well as the GVHD-related immunosuppressive state increasing viral reactivation. Therefore, some patients have GVHD before the occurrence of HC, especially when the virus test shows positive findings at diagnosis that become negative after treatment. If the HC is still severe, it may be related to GVHD, and intravenous methylprednisolone is recommended for patients with severe HC; the treatment course is generally 3 to 5 days and should be discontinued if ineffective.
(3) For patients with a long and protracted disease course, after further excluding other factors, our center recommends active infusion of MSCs with a sufficient treatment course; the overall curative effect should be evaluated after 3 infusions.
Conclusion
For allo-HSCT, especially haploidentical transplantation, the incidence of HC is relatively high. BK virus is the main cause of HC, while GVHD is also related to its occurrence. HC remains a common and potentially life-threatening complication after transplantation. In particular, severe HC can increase transplant-related mortality and seriously affect patients' quality of life. Identifying the risk factors for severe HC is a necessary prerequisite for improving prevention and early intensive treatment, and avoiding these susceptible factors could reduce the occurrence and risk of severe HC. Of course, further in-depth studies are still needed to explore the etiology, pathogenesis, and treatment of HC.
Towards a Parsimonious Pathway Model of Modifiable and Mediating Risk Factors Leading to Diabetes Risk
Modifiable risk factors are of interest for chronic disease prevention. Few studies have assessed the system of modifiable and mediating pathways leading to diabetes mellitus. We aimed to develop a pathway model for Diabetes Risk with modifiable Lifestyle Risk factors as the start point and Physiological Load as the mediator. As there are no standardised risk thresholds for lifestyle behaviour, we derived a weighted composite for Lifestyle Risk. Physiological Load was based on an index using clinical thresholds. Sociodemographics are non-modifiable risk factors and were specified as covariates. We used structural equation modeling to test the model, first using 2014/2015 data from the Indonesian Family Life Survey. Next, we fitted a smaller model with longitudinal data (2007/2008 to 2014/2015), given limited earlier data. Both models showed the indirect effects of Lifestyle Risk on Diabetes Risk via the mediator of Physiological Load, whereas the direct effect was only supported in the cross-sectional analysis. Specifying Lifestyle Risk as an observable, composite variable incorporates the cumulative effect of risk behaviour and differentiates this study from previous studies assessing it as a latent construct. The parsimonious model groups the multifarious risk factors and illustrates modifiable pathways that could be applied in chronic disease prevention efforts.
Introduction
Diabetes mellitus alone contributed to 1.6 million deaths worldwide in 2017, a figure that is estimated to double by 2040 [1]. In terms of global burden, China (89.5 million), India (67.8 million), the USA (30.7 million), Indonesia (21.0 million), and Mexico (13.1 million) have been identified as five countries with the most diabetes cases, together with the most deaths and the highest disability-adjusted life years (DALYs) due to diabetes [2]. Among low-and middle-income countries, the disease is growing rapidly with Type 2 diabetes accounting for the majority of the cases. Three-quarters of diabetics are living in these countries [3,4].
Public health efforts in prevention and early control are needed to stem the rise in prevalence, given the disease burden from the higher risks of developing health complications, disability, and premature death [2]. Consequently, the healthcare and economic costs are also set to grow [1][2][3]. Identifying the risk factors and understanding their pathways towards disease development is thus crucial in aiding preventative and early intervention efforts [3][4][5].
The development of Type 2 diabetes has been closely associated with multiple risk factors, from demographic factors, e.g., age, sex, ethnicity, socioeconomic status, and educational level, to physiological factors and behaviour, e.g., hypertension, hyperlipidaemia, high body mass index (BMI), little sleep, pulse rate, c-reactive protein, unhealthy diets, physical inactivity, and tobacco use [4,[6][7][8][9][10][11][12][13][14][15]. The risk factors may be classified according to whether they are modifiable, a good example being behaviour, or non-modifiable, such as demographic and genetic factors [11]. These risk factors can also be further differentiated in terms of their temporality: from the start point of a normal baseline, indicators of unfavourable metabolic and physiological changes, such as weight gain and increased blood pressure [11], can be regarded as downstream factors caused by preceding unhealthy lifestyle-related behaviour (e.g., a lack of physical activity, poor diet, smoking, and lack of sleep) [16]. Obesity is known as the leading risk factor for Type 2 diabetes [17], but in order to prevent obesity itself, we need to go upstream to the risk factors for high BMI. Indeed, physiological risk factors are often associated with lifestyle behavioural factors [18][19][20]. As a risk factor for diabetes, high BMI is associated with an unhealthy diet that includes high consumption of sodium and saturated fats [3,21,22].
As predictors of diabetes, these risk factors have often been investigated in terms of their direct relationship with diabetes outcomes [23] without distinguishing between the types of risk factors and potential dependence effects in a pathway system [16]; for example, how the risk factors may influence one another (both as independent and dependent variables) to impact diabetes risk. Furthermore, it is common to use categorical outcomes of diabetes status based on diagnostic thresholds, which neglects the continuum of diabetes risk and may also result in unbalanced samples of groups with and without diabetes [24,25]. This poses a problem for statistical analyses and requires compensating techniques (e.g., oversampling) [24].
To address the challenges of multiple risk factors of different types and temporal effects, we used the approach to group risk factors in terms of their hypothesised effects, thus aiming to develop a parsimonious pathway model for diabetes risk, i.e., a simple model with minimal variables yet flexible enough to incorporate various classes of risk factors to explain how diabetes risk may be modulated by health behaviour changes. It is with this in mind that the modifiable lifestyle risk factors were set as the start point of the pathways (Figure 1). Lifestyle Risk variables form a natural grouping of independent variables concerned with health-related behaviour, such as the level of physical inactivity, smoking, the consumption frequency of unhealthy food, and insufficient sleep [4,6,[8][9][10]. The second group of variables is termed Physiological Load, comprising clinical monitoring indicators associated with diabetes, such as body mass index (BMI), resting pulse rate (RPR), c-reactive protein (CRP), systolic (SBP), and diastolic (DBP) blood pressure [11,16,[18][19][20]. Risk factors that cannot be modified, such as sociodemographics, will be specified as covariates of each main factor, rather than as main factors themselves. This strategy allows for the necessary accounting of the effects of the non-modifiable risk factors while focusing on the impact of the modifiable factors on diabetes risk. Lastly, we used the continuous variable of HbA1c as a marker of diabetes risk, which has the advantages of avoiding unbalanced diabetes outcome categories, gaining statistical power, and allowing a more informative interpretation of the relationships.
We aimed to examine the direct and indirect effects of two groups of modifiable risk factors on Diabetes Risk, namely "Lifestyle Risk" and "Physiological Load". We hypothesised that Lifestyle Risk would have a positive but indirect effect on Diabetes Risk through the mediator of Physiological Load (Figure 1), as Lifestyle Risk is postulated to be upstream of Physiological Load. Using data from the Indonesian Family Life Survey (IFLS), which had five survey waves from 1993 to 2015, we performed structural equation modeling (SEM) on two main models (Figures 2 and 3). As the hypothesis concerns temporal effects, longitudinal data would be important. However, as only the latest survey wave (IFLS5) had all the indicators of interest, we first tested a comprehensive model (Model A) with cross-sectional data from IFLS5. This model includes all four Lifestyle Risk factors of interest. We then specified a second model (Model B) to allow the testing of longitudinal data (IFLS4 and IFLS5), but with only two of the Lifestyle Risk factors, as sleep and food intake were not collected in the earlier wave. We account for sociodemographic risk factors as covariates in these analyses.
Study Design and Respondents
In this retrospective cohort study, we used publicly available data from the Indonesian Family Life Survey (IFLS), which has been organised by the RAND Corporation in collaboration with Lembaga Demografi, the University of Indonesia, the Center for Population and Policy Studies, the University of Gadjah Mada, the University of California, Los Angeles, and SurveyMETER. A stratified random sampling scheme on the provinces was adopted, and the resulting sample included 13 of 27 provinces in Indonesia, representative of 83% of the population. Households that were randomly selected and participated in the first survey in 1993 were followed up in subsequent waves. Sampling and survey methods have been discussed in detail elsewhere [26].
We analysed data from Wave 4 ("IFLS4", conducted from 2007-2008) and Wave 5 ("IFLS5", conducted from 2014-2015), given that the indicators of interest were found in these two waves. Both waves had individual-level data on anthropometric measurements, morbidity indicators, healthcare utilization, health behaviours, employment, and household expenditure [26], though specific data on food frequency, sleep duration, and glycosylated haemoglobin (HbA1c) were not available in IFLS4. Figure 4 illustrates the data sampling for the two main models. To reduce the possible confounding effect of medication in this cross-sectional study, respondents who self-reported to be on anti-diabetic or anti-hypertensive medication were excluded. The final study sample for Model A included 4000 respondents aged 18 and above (12.9% of total 31,102 adult respondents in IFLS5). For Model B, it included 2027 respondents aged 18 and above (11.7% of the total 17,396 adult respondents present in IFLS4 and 5).
Sociodemographic Factors
Sociodemographic variables are important determinants in chronic disease and should be accounted for [27,28]. Age, sex, ethnicity, and highest educational level were specified as covariates separately for each of the main factors (Lifestyle Risk, Physiological Load, and Diabetes Risk) to account for confounding effects at any level.
Ethnicity was grouped into 3 categories: Javanese; Sundanese; and Others. Besides the Javanese and Sundanese, the other ethnicities each made up less than 6% of the sample population and thus were combined, similar to other studies [29,30].
The highest level of education attained was grouped into four categories following previous studies [9,29,31]: no education (unschooled); elementary (grade school, kindergarten, Islamic elementary school, adult education A); high school (general junior high, vocational junior high, Islamic Junior high, adult education B, general senior high, vocational senior high, Islamic senior high school, adult education C, Pesantren boarding school); and college/university (open university, Diploma, University).
Estimation of Lifestyle Risk
Four unhealthy lifestyle behaviours (physical inactivity, smoking, consumption frequency of unhealthy food, and insufficient sleep) were used as indicators of Lifestyle Risk (though in Model B, only physical inactivity and smoking were available for testing).
The level of physical activity in the last 7 days was assessed through the International Physical Activity Questionnaire (IPAQ). If respondents reported engaging in walking or any moderate or vigorous physical activities, the total duration of activities was converted to Metabolic Equivalent of Task (MET)-hours per week using the IPAQ recommended formula [32]. The amount of physical activity was reverse-coded and used as a measure of "physical inactivity", which would contribute to Lifestyle Risk. The negatively phrased physical inactivity was used to maintain a consistent interpretation of Lifestyle Risk indicators.
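As an illustration of this conversion, the sketch below applies the standard IPAQ scoring-protocol MET multipliers (walking 3.3, moderate 4.0, vigorous 8.0); these multipliers and the example activity hours are assumptions for illustration, since the text does not reproduce the formula itself.

```python
# Hedged sketch of the IPAQ MET-hours/week conversion described above.
# MET multipliers are the standard IPAQ scoring-protocol values.

def met_hours_per_week(walk_h: float, moderate_h: float, vigorous_h: float) -> float:
    """Total weekly physical activity in MET-hours from weekly hours by type."""
    return 3.3 * walk_h + 4.0 * moderate_h + 8.0 * vigorous_h

activity = met_hours_per_week(walk_h=3.5, moderate_h=2.0, vigorous_h=1.0)
# reverse-code so that a higher score means MORE physical inactivity
physical_inactivity = -activity
print(activity, physical_inactivity)
```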
The number of cigarettes smoked was used as an indicator of smoking intensity. This indicator was collected by questionnaires in IFLS4 and 5, which first asked about smoking status (whether the respondent had ever smoked self-rolled cigarettes, manufactured cigarettes, or cigars), followed by questions on smoking intensity (how many cigarettes/cigars were smoked currently or before quitting). Respondents who reported "No" to the smoking status questions were assigned a smoking intensity of zero.
Unhealthy foods are highly processed food items that are higher in saturated fat, sugar, and sodium, such as instant noodles, sweet snacks, fried snacks, fast food, or soft drinks [33] (Food Frequency Section of IFLS5). High consumption of such foods has been shown to exacerbate the risk of chronic diseases [34]. Therefore, for the purpose of this study, the consumption frequency of unhealthy foods in the past 7 days was used as an indicator of Lifestyle Risk. The total score was calculated by summing the consumption frequencies of the individual unhealthy food items in the past week, with saturation at a score of seven, as the focus was on the consumption frequency of any type of unhealthy food within a week; the range of scores was therefore 0-7. A minimal sketch of this scoring rule is given below.
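The saturation rule amounts to a capped sum; the item frequencies in the example are hypothetical.

```python
def unhealthy_food_score(freqs) -> int:
    """Sum past-week consumption frequencies of unhealthy items, capped at 7."""
    return min(sum(freqs), 7)

# e.g., instant noodles x2, fried snacks x3, soft drinks x4 in the past week
print(unhealthy_food_score([2, 3, 4]))  # -> 7 (saturates at the weekly cap)
```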
Sleep duration was determined from two self-reported questions in IFLS5: the time the respondent went to sleep the night before and the time the respondent woke up on the day before the survey. Sleep duration was computed by subtracting the bedtime from the wake time. It was then reverse-coded and used as a measure of insufficient sleep, i.e., a shorter sleep duration corresponded to more insufficient sleep. The negatively phrased insufficient sleep was used to maintain a consistent interpretation of the Lifestyle Risk indicators.
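One practical detail this computation must handle is the midnight crossover between bedtime and wake time. A minimal sketch, with illustrative clock times:

```python
from datetime import datetime, timedelta

def sleep_duration_hours(bed_time: str, wake_time: str) -> float:
    """Hours between bedtime and wake time, handling the midnight crossover."""
    fmt = "%H:%M"
    bed = datetime.strptime(bed_time, fmt)
    wake = datetime.strptime(wake_time, fmt)
    if wake <= bed:               # respondent woke up "the next day"
        wake += timedelta(days=1)
    return (wake - bed).total_seconds() / 3600

duration = sleep_duration_hours("23:30", "05:45")   # -> 6.25 hours
insufficient_sleep = -duration                      # reverse-coded indicator
print(duration, insufficient_sleep)
```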
To specify an appropriate structural equation model, we first assessed the suitability of Lifestyle Risk to be a latent construct within the model. A statistical check for an underlying latent construct requires its effect indicators to be positively correlated with one another [35]. However, we found weak correlation results for the indicators of Lifestyle Risk (Spearman's rank r < |0.4|), showing that it would not be appropriate to create a Lifestyle Risk latent variable. See Table S1. The Lifestyle Risk indicators would be better identified as a composite variable because it does not violate this conceptual check [35]. Furthermore, the composite or cumulative effect of these high-risk lifestyle behaviours is associated with higher health risks [36]. As a composite score of high-risk lifestyle behaviours, this means that Lifestyle Risk can be estimated, rather than it being an underlying, unobservable construct.
As there are no standardized thresholds for risk levels in lifestyle behaviours, we used principal component analysis (PCA) to derive the weighted composites of the Lifestyle Risk indicators. The Lifestyle Risk indicators were included as continuous variables in the PCA models, and the principal components (i.e., weighted composites) were orthogonally transformed using Varimax rotation and underwent Kaiser normalization to achieve a structure with independent components for greater interpretability. Principal components with eigenvalue > 1, scree test, and parallel analysis were considered in determining the number of components to retain [37][38][39].
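The authors derived these weighted composites with R's psych and paran packages; the following is only a rough Python analogue of the same steps (standardize, PCA, Kaiser-Guttman retention, varimax rotation, composite scores), using scikit-learn and factor_analyzer on synthetic placeholder data.

```python
# Hedged sketch of PCA-based Lifestyle Risk composites; toy data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from factor_analyzer.rotator import Rotator

rng = np.random.default_rng(0)
# columns: physical inactivity, smoking, unhealthy food frequency, insufficient sleep
X = rng.normal(size=(500, 4))

Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)

# Kaiser-Guttman rule: retain components with eigenvalue > 1
# (kept at >= 2 here so varimax rotation is meaningful)
n_keep = max(2, int(np.sum(pca.explained_variance_ > 1)))
loadings = pca.components_[:n_keep].T * np.sqrt(pca.explained_variance_[:n_keep])

rotated = Rotator(method="varimax").fit_transform(loadings)
scores = Z @ rotated          # weighted composite LR scores per respondent
print(n_keep, rotated.round(2))
```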
Estimation of Physiological Load
In this study, Physiological Load is conceptualised as an estimate of the cumulative physiological burden or stress on the body system, similar to the concept of allostatic load or "cumulative biological risk" [40]. Allostatic load describes the physiological consequences of the body's attempts to adapt to chronic stressors, which may result in dysregulation spreading among multiple body systems, potentially combining to increase disease risk [40,41]. Summary measures are typically used to characterise allostatic load across the cardiovascular, metabolic, immune, nervous, and hormonal systems [40,42]. As this study does not focus on the nervous or hormonal systems, we have used the term, "Physiological Load", based on five routine clinical monitoring indicators (body mass index, resting pulse rate, c-reactive protein, systolic and diastolic blood pressure) from IFLS4 and 5. Each of these Physiological Load indicators has been shown to be associated with diabetes [12,[43][44][45][46][47][48].
The body mass index (BMI) was computed by taking the weight (kilograms) divided by the height (metres) squared. Resting pulse rate (RPR) and blood pressure were averaged from three measurements on alternate arms while respondents were seated upright. CRP concentrations were derived from finger prick dried blood samples and measured by a high-sensitivity CRP enzyme-linked immunosorbent assay (ELISA) [49]. CRP plasma equivalent values were used.
Prior to creating a summary measure of Physiological Load, we performed a conceptual check, similar to the one done for Lifestyle Risk, in order to ascertain suitability for specification as a latent construct. Except for SBP and DBP, which are closely related, low correlations (Spearman's rank r < |0.4|) were found between the rest of the Physiological Load indicators, indicating that a latent variable would not be suitable [35] (see Table S1). We then proceeded to create a summary index of Physiological Load using a method similar to allostatic load summary measurements: for each biomarker, a score of one is given for values beyond a clinical threshold reflecting high risk, and a score of zero otherwise [50]. These scores were then added up to form a non-weighted summary index (range 0 to 5) of the Physiological Load for each respondent. High risk was defined as: BMI ≥ 25 kg/m² [51], RPR ≥ 90 bpm [52], SBP ≥ 140 mmHg, DBP ≥ 90 mmHg [53], and CRP ≥ 30 mg/L (3 mg/dL) [54].
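The resulting index is a simple count of threshold exceedances; a minimal sketch using the thresholds from the text (the example biomarker values are hypothetical):

```python
# Non-weighted Physiological Load index: one point per biomarker at or
# beyond its high-risk clinical threshold (thresholds from the text).
THRESHOLDS = {
    "bmi": 25.0,   # kg/m^2
    "rpr": 90.0,   # beats per minute
    "sbp": 140.0,  # mmHg
    "dbp": 90.0,   # mmHg
    "crp": 30.0,   # mg/L
}

def physiological_load(bmi, rpr, sbp, dbp, crp) -> int:
    """Count of indicators at or above their high-risk thresholds (0-5)."""
    values = {"bmi": bmi, "rpr": rpr, "sbp": sbp, "dbp": dbp, "crp": crp}
    return sum(int(values[k] >= t) for k, t in THRESHOLDS.items())

print(physiological_load(bmi=27.1, rpr=88, sbp=145, dbp=92, crp=4.0))  # -> 3
```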
Estimation of Diabetes Risk
As the primary outcome variable, the risk of diabetes was estimated with the level of glycosylated haemoglobin (HbA1c), a surrogate biomarker of average glycemic control over the previous three months. HbA1c was assayed using dried blood samples from the IFLS respondents using a validated protocol [49,55] and was available only in IFLS5.
Structural Equation Modelling
Categorical variables (e.g., sex, ethnicity) were summarised through counts and percentages, while all the other variables were continuous and were summarised through minimum, maximum, median, and interquartile ranges (IQR).
Mediation analysis via SEM [56] was used to test the hypothesis that Lifestyle Risk has a positive but indirect effect on Diabetes Risk through Physiological Load mediators. In the models, weighted composites representing Lifestyle Risk were considered independent (exogenous) variables, Physiological Load was a mediator and HbA1c was the marker of the outcome variable of Diabetes Risk. Sociodemographic variables were controlled for as covariates of Lifestyle Risk, Physiological Load, and HbA1c [35].
Two main models were tested. Model A was a comprehensive model encompassing all indicators of interest (Figure 2). We first tested the comprehensive model with cross-sectional data from IFLS5, as only IFLS5 had all the indicators of interest. We then created a smaller model (Model B, Figure 3) that allowed us to do a longitudinal analysis with IFLS4 data. This model was similar to Model A, except for the Lifestyle Risk indicators of the frequency of unhealthy food consumption and sleep duration, both of which were unavailable in IFLS4. Physiological Load was modelled as the mean of the Physiological Load from IFLS4 and 5, in order to obtain a value temporally midway between the two surveys. The continuous outcome variable of Diabetes Risk was represented by HbA1c values.
We used the maximum likelihood procedure that provides corrections to estimates and standard errors, and a mean- and variance-adjusted chi-square test statistic robust to non-normality [57]. Standardized estimates with 95% confidence intervals (CI) and p-values were reported. Model fit was assessed with the root mean squared error of approximation (RMSEA), comparative fit index (CFI), Tucker-Lewis index (TLI), and standardized root mean squared residual (SRMR). The following values indicate acceptable model fit: RMSEA < 0.08, CFI and TLI > 0.8, and SRMR < 0.08 [58].
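The mediation models themselves were fitted in Mplus (see below); purely as an illustration of the model structure and fit-index checks, the following sketch specifies the same two-equation mediation layout in the Python semopy package. All variable names and the synthetic data are placeholders, and semopy's default estimator is plain maximum likelihood rather than the robust variant described above.

```python
# Hedged sketch of the Lifestyle Risk -> Physiological Load -> HbA1c
# mediation structure with semopy; synthetic data, hypothetical columns.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "LR1": rng.normal(size=n), "LR2": rng.normal(size=n),
    "age": rng.integers(18, 80, n), "sex": rng.integers(0, 2, n),
})
df["phys_load"] = 0.05 * df["LR2"] + 0.02 * df["age"] + rng.normal(size=n)
df["hba1c"] = 5.4 + 0.3 * df["phys_load"] + rng.normal(scale=0.5, size=n)

desc = """
phys_load ~ LR1 + LR2 + age + sex
hba1c ~ phys_load + LR1 + LR2 + age + sex
"""
model = semopy.Model(desc)
model.fit(df)                         # maximum-likelihood estimation

print(model.inspect())                # path coefficients and p-values
stats = semopy.calc_stats(model)      # fit indices for the whole model
print(stats[["RMSEA", "CFI", "TLI"]]) # compare against the cutoffs above
```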
Data preparation and descriptive analyses were performed using STATA (version 14.0) software [59]. PCA and parallel analysis were performed in RStudio (R version 4.0.2), using the psych (version 1.9.12) and paran (version 1.5.2) packages [60][61][62]. All mediation analyses were performed using Mplus 8 (version 1.5) software [63]. In all statistical analyses, p < 0.05 and 95% CIs that do not include zero were considered statistically significant.
Post-Hoc Analyses
To investigate the possibility that the involvement of physical labour in respondents' primary jobs confounded physical inactivity levels of Lifestyle Risk composite variables, physical labour was assessed by responses to two self-reported questions, whether respondents' primary jobs involved physical effort or heavy lifting all the time, almost all the time, or most of the time. In this model (Model A2), involvement of physical labour in primary jobs was classified as "yes" or "no" based on their responses and adjusted for as a covariate of Lifestyle Risk.
To mitigate the weakness that the analysis for Model A used cross-sectional data, we compared the model fit results against an alternative model with reversed directional relationships (Model A3). The Bayesian information criterion (BIC) values were assessed, with the smaller BIC value indicating a better model fit.

Results

Table 1 presents a descriptive summary of the demographic characteristics of the samples used in the analysis of Model A and Model B. It also includes summary statistics of the Lifestyle Risk indicators, the Physiological Load indicators, and HbA1c levels.
Sample Characteristics and Comparisons with the National Population
The median age in the IFLS5 sample was 40 years (range 18-102 years) with 53.0% males. 59.5% of the respondents attained highest education level of high school and above (High school = 46.1%; College/university = 13.4%). This is similar to the Indonesian population, where 50.3% of the national population were males, the two largest ethnic groups were also Javanese and Sundanese, and 54.9% of the population attained an education level of at least high school and above [30,[64][65][66].
The IFLS4 sample had some different characteristics. While the Javanese and Sundanese remained the two largest ethnic groups, the median age was 55 years (range 23-88 years) with 43.4% males, and the highest level of education for the majority was elementary school (56.2%). Differences in these factors between the IFLS4 sample and the national statistics can be attributed to the filtering of respondents that did not meet the inclusion criteria as outlined in the Methods (see Figure 4). Differences between the IFLS4 and IFLS5 samples could also be attributed to change in demographic progression across the 7-8 years between Waves 4 and 5, such as the increase in the proportion of those with higher education. Figure 5 illustrates the distribution of the HbA1c level among the respondents. The median HbA1c level was 5.45% (range 3.50-14.0%). Applying the World Health Organisation (WHO) cut-off for indication of diabetes, i.e., HbA1c ≥ 6.50% [67], 6.83% of our respondents were diabetic (see Table 1), which is very similar to the 6.90% national diabetes prevalence reported in 2013 Indonesia Basic Health Research survey (RISKESDAS) [68], indicating representativeness of our sample.
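The prevalence figure follows directly from applying the WHO HbA1c cutoff to the continuous outcome; a minimal illustration with toy HbA1c values (not the study data):

```python
import numpy as np

# toy HbA1c values (%); the WHO diabetes cutoff from the text is >= 6.5%
hba1c = np.array([5.1, 5.45, 6.2, 6.5, 7.8, 5.9, 6.7, 5.3, 5.6, 6.4])
prevalence = np.mean(hba1c >= 6.5) * 100
print(f"{prevalence:.1f}% classified as diabetic")  # -> 30.0% in this toy sample
```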
Model A
Model A was tested with IFLS5 data. Following PCA to determine the composite weights of Lifestyle Risk, we selected the first two components for Lifestyle Risk data upon inspection of the scree plot, applying the Kaiser-Guttman rule, and parallel analysis. The two components were termed LR1 and LR2. The variance explained by LR1 and LR2 was 28.27% and 26.63%, respectively. The total proportion of variance explained by the selected components was 54.9% for IFLS5 data. Table 2 presents the retained components and their loadings. The strong loadings in LR1 were physical inactivity (−0.75) and smoking (0.74), while the strong loadings in LR2 were consumption frequency of unhealthy food (0.72) and insufficient sleep (0.71). All component loadings were positive, except for physical inactivity (−0.75) and the consumption frequency of unhealthy food (−0.10) in LR1.
Model B
Model B was tested with longitudinal data from IFLS4-5. Following PCA to determine Lifestyle Risk, we also selected the first two components (LR1 and LR2), as they passed the Kaiser-Guttman rule, though parallel analysis recommended retaining only the first component. The positive loadings of the second component were also in line with our hypothesis. LR1 and LR2 explained 55.97% and 44.03% of variance, respectively. The loadings of LR1 were similar to that of Model A (Table 2), where the loadings for physical inactivity and smoking were −0.75 and 0.75, respectively. The loadings of LR2 were 0.66 for both physical inactivity and smoking.
The model had acceptable fits (RMSEA < 0.08; CFI > 0.95; SRMR < 0.08) (See Table 3). The first Lifestyle Risk component, LR1, did not have a significant effect on Physiological Load or Diabetes Risk. However, the second Lifestyle Risk component, LR2, had a positive indirect effect on Diabetes Risk through positive effects on Physiological Load. For detailed results, including statistics for the sociodemographic covariates, refer to Table S3.
Comparisons with Alternative Models
With reference to physical activity guidelines [69,70], at least 8.30 MET hours/week of physical activity is recommended. Compared to this, the medians for the IFLS4 and IFLS5 samples were higher by approximately 6 times (49.0 MET hours/week) and 4 times (31.5 MET hours/week), respectively (Table 1). Due to the surprisingly robust physical activity levels, we postulated that these levels could be confounded by jobs that involved physical labour. Indeed, individuals with physical labour in their primary jobs had significantly higher MET levels than individuals who did not, t(3998) = 13.1, p < 0.001, indicating that jobs with manual labour contributed to this "lifestyle behaviour". Therefore, the involvement of physical labour in respondents' primary jobs was adjusted for as a covariate of the Lifestyle Risk composite variables in a post-hoc analysis (Model A2). There was a positive and significant association between the involvement of physical labour and the Lifestyle Risk composite variable LR1 (0.123, p < 0.001), but not LR2. Otherwise, the standardized estimates and significance of the direct and indirect relationships in the model were similar to Model A, with model fit indices indicating good fit (RMSEA < 0.05; CFI > 0.95; TLI > 0.80; SRMR < 0.08) (see Table S4).

Notes to Table 3: Abbreviations: Lifestyle Risk component 1 (LR1); Lifestyle Risk component 2 (LR2); root mean squared error of approximation (RMSEA); comparative fit index (CFI); Tucker-Lewis index (TLI); standardized root mean squared residual (SRMR). Significant estimates at p < 0.05 are shown in bold. All values were rounded to 3 decimal places. Model A (using IFLS5 data) adjusted for sociodemographic covariates: age, sex, ethnicity, and highest education level attained in IFLS5 (full model results in Table S2). Model B (using IFLS4 and 5 data) adjusted for sociodemographic covariates: age, sex, ethnicity, and highest education level attained in IFLS4 (full model results in Table S3). Statistically significant estimates (in bold) showed that Physiological Load mediated the indirect effects of Lifestyle Risk (LR2) on Diabetes Risk. Model fit indices were within acceptable thresholds.
Given that Model A was based on cross-sectional data, we compared it to a model in which the pathways were reversed (Model A3). BIC values were lower for Model A (42,703.629) than Model A3 (42,709.770), indicating a better fit for Model A.
Sociodemographic Covariates
Results of the sociodemographic covariates can be found in Figures 2 and 3, and Tables S2 and S3. The significant covariates were mostly consistent between the models. In summary, increasing age was associated with increasing Physiological Load. Males were associated with poorer lifestyles, especially in terms of diet and sleep (LR2). Higher education was associated with higher Lifestyle Risk and Physiological Load.
Discussion
We have developed a general pathway model starting from modifiable lifestyle behaviour and have demonstrated how the behavioural components in Lifestyle Risk can affect Diabetes Risk via the mediating factor of Physiological Load. The overall results support our hypothesis that the effect of Lifestyle Risk on Diabetes Risk is likely to be indirect, and thus offer a stepwise perspective whereby upstream and downstream modifiable factors can be modeled pathwise. Specifying Lifestyle Risk as an observable composite variable incorporates the cumulative effect of risk behaviour and differentiates this study from previous studies treating it as a latent construct [16,71,72]. There was also the advantage of being able to assess causality using seven-year follow-up data, albeit only for Model B. From a disease prevention perspective, this helps narrow our focus to an initial set of lifestyle risk factors, from which the progression of health risk towards more downstream physiological factors and on to disease can be monitored.
The use of mediation analysis via SEM allowed us to simultaneously assess multiple pathways within a single model, in addition to accommodating a variable that is both independent and dependent (i.e., a mediator). This confers advantages over traditional regression, where multiple pathways in a single model need to be tested separately, resulting in potential problems with multiple comparisons [73]. To date, only a few studies have simultaneously analysed risk factors in diabetes as a multiple-pathway system. Bardenheier et al., 2013 [16] performed the first such study, using 10 variables with 27 hypothesised pathways in an SEM, and found through best-fit iterations of the model that physical activity and poor diet were significant lifestyle factors (other lifestyle factors were not studied) that contributed to diabetes risk via large waist circumference, high blood pressure, triglycerides, and high-density lipoprotein (HDL). Subsequently, several other studies applied similar models to their own population data [71,72] with varying results, but generally found that physical activity and poor diet affect diabetes risk through separate mediators such as BMI, blood pressure, HDL, and triglycerides. One difference between the earlier studies and this study is their use of latent constructs to model lifestyle behaviour. As latent variables are meant to be unobservable constructs [35], a behaviour such as physical activity may not be suitable for modelling as a latent construct, since it is actually observable. It would also be necessary to evaluate the latent-variable assumption by testing whether the effect indicators are correlated; we did not find such correlations in our analysis (Table S1). We thus modelled Lifestyle Risk as a composite variable comprising linear, weighted combinations of risk from the uncorrelated lifestyle behaviours [35].
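To make the mediation structure concrete, the sketch below expresses a model of this shape in lavaan-style syntax via the Python package semopy. This is a minimal illustration under stated assumptions, not the study's actual code: the variable names (LR1, LR2, PhysLoad, HbA1c), the input file, and the omission of covariates are all placeholders.

# Minimal mediation SEM sketch using semopy (hypothetical variable names;
# sociodemographic covariates omitted for brevity).
import pandas as pd
import semopy

MODEL_DESC = """
PhysLoad ~ LR1 + LR2
HbA1c ~ PhysLoad + LR1 + LR2
"""

data = pd.read_csv("ifls_subset.csv")  # hypothetical file with these columns
model = semopy.Model(MODEL_DESC)       # mediator and outcome regressions
model.fit(data)
print(model.inspect())                 # path estimates, standard errors, p-values

Under this specification, the indirect effect of LR2 on HbA1c is the product of the LR2-to-PhysLoad and PhysLoad-to-HbA1c path estimates, which is the quantity tested in the mediation analyses reported here.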
Components of Lifestyle Risk
In both models, two Lifestyle Risk components (LR1 and LR2) were derived from the data as complementary composites of unhealthy behaviour. PCA was used as a data-driven approach because there are currently no standardized thresholds for risk levels in lifestyle behaviours. The LR2 component encompassed a straightforward set of unhealthy lifestyle behaviours (all behaviour indicators had positive coefficients). As hypothesised, an increase in Lifestyle Risk goes on to increase Diabetes Risk through the mediator of Physiological Load. Physical inactivity and poor diet are both well-established risk factors for chronic disease, in particular diabetes [74][75][76]. Smoking is another risk factor, showing a dose-response relationship with the risk of diabetes [77], while short sleep duration is gaining attention as a factor in developing diabetes risk through associations with BMI and blood pressure [78][79][80]. The path coefficients from LR2 to Physiological Load were relatively small (0.04-0.05), though the effect sizes were within the range of a study that used regressions to assess lifestyle variables and BMI with the same IFLS dataset [81]. While the indirect effect of LR2 on Diabetes Risk was clear in both models, the direct effect was uncertain, given that this effect was found in Model A but not in Model B. However, the use of longitudinal data to test Model B lends weight to its results and aligns with studies showing indirect effects of physical activity on the risk of developing diabetes through intermediate variables such as BMI, but without detected direct effects [16,73,82].
The other lifestyle risk component, LR1, is an intriguing mix of behaviour found in both models: the composite weightings for physical inactivity and smoking point in opposite directions, i.e., being physically active while smoking contributes to LR1. This apparent contradiction in lifestyle may be explained by our post hoc analysis showing that a significant contributor to the high physical activity scores in the sample is involvement in physical labour as part of work, rather than recreational exercise. The effects of LR1 on Diabetes Risk were equivocal, as there was a significant indirect effect in Model A, but not in Model B. We lean towards the results of Model B, given the longitudinal testing. The factor of time may explain the different results; for example, it has been shown that the number of cigarettes smoked appears to have a negative correlation with the risk of high blood pressure, but when adjusted for life-course, the correlation turns positive [36]. In the longitudinal Model B, the negative effects of smoking may over time have counterbalanced the protective effects of physical activity, thus nullifying any overall effect of LR1 on Diabetes Risk.
Physiological Load as a Mediator of Diabetes Risk
Physiological Load was found to be a significant mediator in both main models, with every unit increase in Physiological Load corresponding to an increase in HbA1c of approximately 0.2 percentage points (based on the unstandardised estimates). (A unit increase in Physiological Load corresponds to crossing the clinical threshold for any one of the five indicators, all of which are used in routine clinical monitoring.) The pathway coefficients are within the ranges found in studies relating individual physiological markers to diabetes risk [16,71,72]. Furthermore, each of these indicators has been associated with the effects of lifestyle behaviour [7,19,21,83-86], and they may thus be considered intermediate markers of diabetes risk. The measure of Physiological Load is a subset of the allostatic load summary measure [40], as we focused on typical routine clinical monitoring indicators in the metabolic and cardiovascular domains and did not include indicators of nervous and hormonal responses to chronic stress [40][41][42]. There is growing research pointing to the utility of grouped measures of physiological indicators for predicting clinical risk; for example, the presence of adverse risk factors across multiple physiological systems strongly predicts morbidity and mortality [87]. A higher allostatic load has been found in patients with Type 2 diabetes [88,89] and correlates with higher glycated haemoglobin [89]. Physiological dysfunction can also spread across multiple physiological systems and combine to elevate disease risk [42,50]. Importantly, grouped measures of physiological dysregulation appear to predict morbidity and mortality risks better than individual risk indicators do [50,90,91].
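As an illustration of how such a count-based load could be computed, the snippet below tallies clinical threshold crossings across five indicators. The indicator names and cut-offs are illustrative assumptions only, not the study's published thresholds.

import pandas as pd

# Hypothetical indicators and clinical cut-offs; the study's actual set differs.
THRESHOLDS = {
    "systolic_bp": 140,   # mmHg
    "diastolic_bp": 90,   # mmHg
    "bmi": 30,            # kg/m^2
    "waist_cm": 102,      # cm
    "total_chol": 240,    # mg/dL
}

def physiological_load(row: pd.Series) -> int:
    """Count how many indicators cross their clinical threshold (0-5)."""
    return sum(int(row[name] >= cut) for name, cut in THRESHOLDS.items())

df = pd.DataFrame({
    "systolic_bp": [150, 118], "diastolic_bp": [95, 76],
    "bmi": [31.2, 24.0], "waist_cm": [105, 80], "total_chol": [250, 180],
})
df["phys_load"] = df.apply(physiological_load, axis=1)  # -> [5, 0]

Each one-point increase in such a count corresponds, in the fitted models, to roughly a 0.2 percentage-point rise in HbA1c.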
Diabetes Risk
In this large sample, 6.83% of respondents had HbA1c values of 6.50% and above and can be classified as diabetic according to WHO criteria [67]. This statistic is very similar to the 6.90% found in a national health survey in Indonesia conducted around the same time [68], indicating the representativeness of our sample. We observed that 90% of survey respondents with HbA1c values of ≥6.50% did not report having received a diagnosis of diabetes (see Table S5). The incongruence of high HbA1c values with a low rate of awareness/reported diagnosis is a concern for public health efforts, over and above the disease burden from known cases, given that Indonesia is among the top five countries by number of diabetes cases [2].
The availability of continuous HbA1c values in the IFLS data was well suited to performing SEM and conferred an advantage over studies using categorical or binary diabetes outcomes, as unbalanced samples may require correction techniques such as oversampling [24]. Besides yielding greater statistical power and precision, a single continuous outcome reduces the number of parameters in the model, contributing to its parsimony [92].
Applications and Limitations
As the purpose of the general model is to provide a simple pathway framework that groups distal (lifestyle) and proximal (physiological) factors, it can be applied to any analysis concerned with evaluating the relationship of lifestyle risks with chronic disease via the mediator of physiological risk. Specific composites of the distal or proximal factors can be determined by knowledge of the disease etiology, or driven by the dataset at hand, as was done with the Lifestyle Risk indicators in this study.
Food frequency and sleep duration were lifestyle indicators important to our model, but as they were only found in IFLS Wave 5, the comprehensive model (Model A) was constrained to using the available cross-sectional data, which limited causal attribution. We assumed, similar to other studies (e.g., Bardenheier et al., 2013 [16]), that the reported lifestyle behaviour was habitual, built up over time, and thus preceded Physiological Load and Diabetes Risk. We mitigated the issue of cross-sectional data analysis in three ways. First, we excluded all the respondents who reported that they were taking medication for diabetes and hypertension, in order to avoid the confounding effect of medication on the Physiological Load indicators. Second, we generated a model with all pathways reversed to compare with our hypothesised model, in order to check that the hypothesised model was the better fit, which was shown to be the case. Third, we did a longitudinal analysis using a smaller model (Model B), without the Lifestyle Risk indicators of food and sleep. This longitudinal analysis supported the mediation effect found in the comprehensive model.
A potential confounding factor for the longitudinal analysis is the changing exposure to health policies and health promotion programs during the seven to eight years in between survey waves. The decentralized, district-level approach in Indonesian healthcare [93] precludes straightforward adjustments of effects to program exposure, since the IFLS respondents would not have had uniform exposure to programs, being from districts across Indonesia. The interpretation of the cross-sectional analysis for Model A remains unaffected.
The self-reports for lifestyle behaviour in the study pose a potential weakness, as the Lifestyle Risk indicators are subject to measurement errors and self-recall biases. As wearables and fitness trackers become more commonplace in the future, health behaviour data from such devices would form more objective sources for feeding into the model.
Conclusions
We have presented a general model illustrating modifiable pathways from Lifestyle Risk to Diabetes Risk via the mediating factor of Physiological Load and have tested it using large datasets from Wave 4 and Wave 5 of the Indonesian Family Life Survey. Non-modifiable sociodemographic covariates were accounted for in the model, while the focus remained on factors amenable to change for better health outcomes. The model illustrates parsimonious and modifiable pathways that could be applied in public health efforts for diabetes or chronic disease prevention.
Informed Consent Statement:
The study did not collect data directly from patients and was confined to secondary analysis of de-identified, publicly available data.
Data Availability Statement:
Publicly available datasets were analyzed in this study. The data can be found here: https://www.rand.org/well-being/social-and-behavioral-policy/data/FLS/IFLS/access.html (accessed on 25 August 2021).
Multi-Dimensional Validation of the Integration of Syntactic and Semantic Distance Measures for Clustering Fibromyalgia Patients in the Rheumatic Monitor Big Data Study
This study primarily aimed at developing a novel multi-dimensional methodology to discover and validate the optimal number of clusters. The secondary objective was to deploy it for the task of clustering fibromyalgia patients. We present a comprehensive methodology that includes the use of several different clustering algorithms; quality assessment using several syntactic distance measures (the silhouette index (SI), Calinski–Harabasz index (CHI), and Davies–Bouldin index (DBI)); stability assessment using the adjusted Rand index (ARI); and validation of the internal semantic consistency of each clustering option by performing multiple clustering iterations after repeated bagging of the data to select multiple partial data sets. We then perform a statistical analysis of the (clinical) semantics of the most stable clustering options using the full data set. Finally, the results are validated through a supervised machine learning (ML) model that classifies the patients back into the discovered clusters and is interpreted by calculating the Shapley additive explanations (SHAP) values of the model. Thus, we refer to our methodology as the clustering, distance measures and iterative statistical and semantic validation (CDI-SSV) methodology. We applied our method to the analysis of a comprehensive data set acquired from 1370 fibromyalgia patients. The results demonstrate that K-means was highly robust in the syntactic and internal semantic consistency analysis phases and was therefore followed by a semantic assessment to determine the optimal number of clusters (k), which suggested k = 3 as the more clinically meaningful solution, representing three distinct severity levels. A random forest model validated the results by classifying patients back into the discovered clusters with high accuracy (AUC: 0.994; accuracy: 0.946). SHAP analysis emphasized the clinical relevance of "functional problems" in distinguishing the most severe condition. In conclusion, the CDI-SSV methodology offers significant potential for improving the classification of complex patients. Our findings suggest a classification system for different profiles of fibromyalgia patients, which has the potential to improve clinical care by providing clinical markers for the evidence-based personalized diagnosis, management, and prognosis of fibromyalgia patients.
Introduction
Fibromyalgia represents the most prevalent source of chronic widespread musculoskeletal pain, accompanied by fatigue and sleep disturbances, which are present for at least three months and not explained by any other medical condition [1]. Fibromyalgia patients may exhibit a variety of other somatic symptoms, including functional impairment and psychiatric symptoms [2]. It is most common in women, and the prevalence rises with age [3][4][5]. The estimated prevalence is 6.4% (7.7% in women and 4.9% in men) in the United States [4], and 3.3 to 8.3% in Europe and South America [5]. The etiology and pathophysiology of fibromyalgia are currently not known, and there is no evidence of inflammation in the soft tissues [2]. It is considered a pain regulation disorder, often classified as central sensitization [6], due to alterations in central nervous system pain and sensory processing [7].
Identifying patient subgroups can assist in understanding the modifiable risk factors associated with each cluster and in optimizing personalized therapeutic strategies [8]. This is important in fibromyalgia, as physicians may hesitate to accept these patients due to the difficulty in controlling symptoms and a lack of information about treatments and causes [9]. Prior research identified subgroups of women with fibromyalgia based on various characteristics, such as pain, tender points, disability, and sensory, cognitive, psychological, or physical features [10][11][12][13][14]. Previous clustering research on fibromyalgia excluded patients with a history of trauma and comorbid systemic and rheumatological diseases [10]. However, it is important to include comorbidities and trauma, since fibromyalgia is more frequent in rheumatic diseases [15]. Moreover, up to one-fourth of patients had precipitating physical trauma [16], and psychological trauma, especially childhood trauma, is a risk factor for fibromyalgia onset [17].
In recent years, machine learning (ML) has emerged as a pivotal tool in various fields, including medicine [18,19], due to its ability to uncover patterns and insights from complex datasets. For instance, graph-based deep learning has been utilized for medical diagnosis [20], and inverse reinforcement learning (IRL) algorithms have optimized performance in complex systems [21]. These advancements in ML, particularly in clustering techniques, have shown promise in various medical applications [22][23][24]. The potential of ML in enhancing the understanding and treatment of complex medical conditions like fibromyalgia is significant, especially given the challenges in subgroup identification and the need for personalized treatment strategies.
Recent advances in clustering methods lack consensus on optimal methods and validation approaches. Therefore, the primary aim of our study is to address this unmet need by developing and evaluating a novel, comprehensive, multi-dimensional clustering methodology. This methodology is designed to be broadly applicable in various contexts, with a specific emphasis on determining the optimal number of clusters in a given dataset. The secondary objective is the application of this methodology to the specific case of clustering fibromyalgia patients. This application is intended to demonstrate the utility of the methodology in a practical healthcare context, providing insights into the heterogeneity of fibromyalgia. By implementing the suggested novel clustering methodology, we aim to identify the optimal clustering approach for fibromyalgia patients and to provide a generalizable method for other clinical datasets. This study presents a significant contribution to clustering methods and to clinical knowledge discovery, offering a robust and comprehensive novel clustering framework. Furthermore, unlike prior research in the fibromyalgia domain, which included dozens [14] or several hundred patients [10,11,13], our study includes 1370 patients with comprehensive documentation of their socio-demographics, comorbidities, symptoms, trauma, sleep, pain, functional problems, and treatment modalities. This enabled us to address the full heterogeneity of the population of fibromyalgia patients.
Data Source, Study Participants and Questionnaire
This research is part of the Rheumatic Monitor study, which focuses on advancing personalized medicine by identifying patterns that predict the severity of rheumatic diseases and treatment response [25]. In the Rheumatic Monitor study, we developed a mobile application for iPhone and Android operating systems that collects baseline and dynamic questionnaires and includes an option to report pain attacks and visualize pain reports. More about the Rheumatic Monitor study and the application can be found on the research website: https://www.rheumaticmonitor.org/, accessed on 1 January 2024 [25].
We recruited 1370 fibromyalgia patients voluntarily from an Israeli fibromyalgia association who responded to a comprehensive questionnaire. In total, 163 features, 151 categorical and 12 numerical, were obtained via the 28-question online survey, based on the Rheumatic Monitor application questionnaire, including variables for painful areas, co-morbidities, sleep problems, and other domains. The parameters used in the analyses are depicted in Figures 3-7.
Eligibility Criteria
Inclusion criteria: Patients aged 18-99 years, with a fibromyalgia diagnosis given by their rheumatologist.
Exclusion criteria: Pregnant or breastfeeding women, and patients under 18 years of age, the latter due to the need for additional ethical approvals required for minors and their distinct epidemiological and medical characteristics.
Ethical Approval
The study received approval from the Institutional Review Board (IRB) of Hadassah Medical Organization (HMO), approval number 0205-19-HMO. As the study only entailed anonymous survey analysis, an exemption from informed consent was granted by the IRB.
The Clustering, Distance Measures and Iterative Statistical and Semantic Validation (CDI-SSV) Methodology
We propose a comprehensive multi-dimensional validation methodology for clustering fibromyalgia patients, integrating both syntactic (based on the data's quantitative attributes) and semantic (based on meaning) distance measures. Figure 1 illustrates this methodology. We refer to the first phase of our methodology as the CDI phase, a syntactic analysis that employs several clustering algorithms and distance measures. These are followed by multiple iterations to evaluate the influence of varying initial seeds, clustering consistency with partial data, and within- and between-algorithm clustering consistency. Subsequently, the SSV phase utilizes statistical analysis to validate the clinical semantics of the potential clustering options that survived our rigorous pipeline. Finally, validation of the clusters is conducted using a supervised machine learning (ML) model to classify the patients back into the discovered clusters, and the interpretation is further enhanced through Shapley additive explanations (SHAP) analysis. The CDI phase serves as the initial step, involving the evaluation of cluster quality, the impact of different starting seeds, and the consistency of clusters across various algorithms and pre-defined values of k. Within- and between-consistency checks, along with evaluations of internal semantic consistency, are performed to assess the optimal algorithm and values of k. In the subsequent SSV phase, an external semantic analysis of the results is conducted, with a particular focus on the clinical context, thus enhancing the validation process. Finally, machine learning techniques are employed to validate the results, and their interpretation is facilitated by SHAP (Shapley additive explanations) values.
Data Scaling
Prior to clustering the dataset, we applied feature-wise scaling to the data using StandardScaler from sklearn.preprocessing, so that each feature contributed equally to the analysis. This standardized each feature to a mean of zero and a standard deviation of one. Such standardization ensures that features with larger ranges do not disproportionately influence the clustering, thereby maintaining comparability across our dataset's diverse features, such as clinical and demographic variables. We then applied various clustering algorithms available in the scikit-learn (sklearn) library in Python [26]. We evaluated and compared three widely used clustering algorithms: K-means [27,28], Gaussian mixture [29,30], and agglomerative clustering [31], utilizing different linkage methods (complete, Ward, average, and single) [32]. These algorithms were selected for their proven effectiveness in handling diverse data types and their widespread use in similar studies. For each of these algorithms, we employed the default parameters as implemented in the scikit-learn (sklearn) library in Python. This decision was made to ensure consistency with standard practices in the field and to facilitate reproducibility by other researchers. We also used Gower's distance metric [33] as the distance function between data points, which is suitable for mixed data types like ours.
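A minimal sketch of this scaling-and-clustering pipeline is shown below, using the scikit-learn estimators named above; the input file and feature names are placeholders, and the Gower step relies on the third-party gower package rather than scikit-learn itself.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

X = pd.read_csv("fibromyalgia_features.csv")  # hypothetical feature matrix
X_scaled = StandardScaler().fit_transform(X)  # mean 0, std 1 per feature

k = 3  # one of the candidate cluster counts examined
labels = {
    "kmeans": KMeans(n_clusters=k, random_state=0).fit_predict(X_scaled),
    "gaussian": GaussianMixture(n_components=k, random_state=0).fit_predict(X_scaled),
    "agglo_ward": AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X_scaled),
}

# Optional: a Gower distance matrix for mixed-type data (third-party package):
# import gower
# D = gower.gower_matrix(X)  # pairwise distances for precomputed-metric methods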
Distance Measures: Syntactic Clustering-Quality Evaluation Metrics
To assess clustering quality, we used internal metrics: the silhouette index (SI) [34,35], Davies-Bouldin index (DBI) [36], and Calinski-Harabasz index (CHI) [37]. These metrics were chosen for their ability to provide a comprehensive assessment of clustering quality. The SI score provides insights into the matching of data points to their assigned clusters and neighboring clusters, with higher scores indicating better matching. The DBI score measures the separation between clusters, with lower scores indicating better separation. The CHI score indicates the degree of cluster definition, with higher scores representing better-defined clusters. For each algorithm and number k of clusters, we calculated the SI, DBI, and CHI. In addition to these metrics, we employed the adjusted Rand index (ARI) to quantify the similarity between two clustering solutions. The ARI score ranges from −1 to 1 (0: random correlation; 1: perfect correlation). These metrics collectively offer a balanced evaluation of cluster cohesion and separation, essential for our study's objectives.
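The sketch below shows how these four indices could be computed with scikit-learn, assuming a scaled feature matrix X and a label vector from one clustering run.

from sklearn.metrics import (adjusted_rand_score, calinski_harabasz_score,
                             davies_bouldin_score, silhouette_score)

def cluster_quality(X, labels):
    """Internal quality indices for a single clustering solution."""
    return {
        "SI": silhouette_score(X, labels),          # higher is better
        "DBI": davies_bouldin_score(X, labels),     # lower is better
        "CHI": calinski_harabasz_score(X, labels),  # higher is better
    }

# ARI compares two label vectors (e.g., from two runs or two algorithms);
# 1 means identical partitions, 0 means chance-level agreement.
# ari = adjusted_rand_score(labels_run_a, labels_run_b)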
Assessment of the Clustering's Quality via Multiple Syntactic Distance Evaluation Metrics
To assess the robustness and stability of the clustering algorithms under various conditions, we employed several approaches [38][39][40]. We computed the three evaluation metrics (SI, CHI, and DBI) for each algorithm (K-means, Gaussian mixture, and agglomerative clustering using all four linkage methods), and for each value of k, with and without the use of Gower's distance metric. This allowed us to compare the performance of the different algorithms and examine the impact of the number of clusters (k) on the quality metrics.
Iterative Phase
We tested the stability of our algorithms under different conditions, such as varying starting seeds and using subsets of the data. This helped us ensure the reliability of our clustering results.
Assessing the Clustering's Sensitivity to Starting Seeds
We conducted a thorough evaluation to examine the impact of initial seeds on the performance of the K-means and Gaussian algorithms. To assess their sensitivity, we performed 30 iterations of each algorithm, both with and without the utilization of Gower's metric. This evaluation used various k values, employing the SI, CHI, and DBI as the evaluation metrics. The results were presented in a box plot showcasing the mean score index across all runs. This analysis allowed us to assess the stability of the algorithms under diverse starting conditions.
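A sketch of this seed-sensitivity loop, shown for K-means and the SI score only, follows; the 30-run count mirrors the text, while the other metrics and algorithms would be handled identically.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def seed_sensitivity(X, k, n_runs=30):
    """Mean and spread of the silhouette score across random seeds."""
    scores = [
        silhouette_score(X, KMeans(n_clusters=k, random_state=seed).fit_predict(X))
        for seed in range(n_runs)
    ]
    return np.mean(scores), np.std(scores)  # a small std indicates seed stability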
Evaluation of Cluster Consistency Using Random Subsets (70%) of the Data
We performed 30 iterations using a randomly selected subset amounting to 70% of the data to assess cluster consistency. We counted the number of "bad clusters", defined as clusters containing <5% of the data, and calculated the SI, CHI, and DBI scores for each algorithm and k value. The mean, standard deviation, and distribution of these scores were analyzed using box plots.
Within and between Clustering Consistency Using the Adjusted Rand Index (ARI)
To evaluate the overall clustering consistency, we applied each algorithm to the dataset for 10 iterations using random seeds. We saved the resulting labels after each iteration. Intra-algorithm consistency was assessed by calculating ARI scores for all possible pairs of labels (45 pairs in total for 10 iterations), assessing the consistency of patient assignment to the same cluster across different iterations, seeds, or metrics for each algorithm. Additionally, inter-algorithm similarity was examined by comparing the results of two different algorithms, aiming to verify the consistency of patient assignment with different algorithms.
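The intra-algorithm part of this analysis could look like the sketch below, which generates the 45 pairwise comparisons for 10 seeded K-means runs; the inter-algorithm case simply pairs label vectors from two different algorithms instead.

from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def intra_algorithm_ari(X, k, n_runs=10):
    """Mean ARI over all pairs of label vectors from seeded runs (45 pairs for 10 runs)."""
    runs = [KMeans(n_clusters=k, random_state=s).fit_predict(X) for s in range(n_runs)]
    return np.mean([adjusted_rand_score(a, b) for a, b in combinations(runs, 2)])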
Internal Semantic Assessment through Multiple Bagging Iterations Using Partial Subsets (70%) of the Data
In addition to the internal evaluation metrics and ARI scores, we conducted a semantic evaluation of the clustering results. For each k value, we clustered the dataset 10 times using a random 70% subset selected through bagging. To assign semantic labels to clusters across iterations, we manually identified semantically similar clusters based on key clinical features, to ensure that clusters with similar semantics received the same label. For example, we consistently labeled cluster C_{i,0} from iterations i = 1 to 10 as "Cluster 0", which represented the cluster that appeared to be the "sickest" in each iteration. We identified cluster semantics using aggregative features, such as the sum of pain locations, and compared the proportions of categorical demographic and clinical features (e.g., percentage of females) among clusters with the same semantics generated in different iterations. Specifically, we compared cluster C_{i,m} (e.g., the semantically identified sickest cluster m, generated for a given k, in iteration i) to cluster C_{j,m} for 1 ≤ i, j ≤ 10, i ≠ j (e.g., the semantically identified sickest cluster generated in each of the 10 iterations). This comparison was performed for all m = 1..k clusters, resulting in 45 × k pairs of clustering instances being compared. We employed a Z proportion test to calculate the difference in proportion of each of the 151 categorical features for each cluster. This analysis helped us to assess the consistency of cluster semantics across iterations and to identify potential sources of variability in the clustering results.
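For the Z proportion test on a single binary-coded categorical feature, a sketch along the following lines could be used, assuming the two data frames hold the patients assigned to semantically matched clusters from two different bagging iterations.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def compare_feature_proportion(cluster_a, cluster_b, feature):
    """Two-sample Z test for the proportion of a binary feature in two cluster instances."""
    counts = np.array([cluster_a[feature].sum(), cluster_b[feature].sum()])
    nobs = np.array([len(cluster_a), len(cluster_b)])
    stat, p_value = proportions_ztest(counts, nobs)
    return p_value  # compared against alpha = 0.001 in this analysis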
The SSV (Statistical and Semantic Validation) Phase
Once the method and the optimal number k of clusters were determined, we moved to the SSV phase. Here, we statistically validated the clusters' clinical relevance by analyzing associations with various patient features.
External (Clinical) Semantic Assessment Using Statistical Analysis
To statistically evaluate the selected clusters, we analyzed the associations of the clusters with continuous and categorical features. For continuous features, we calculated the mean and standard deviation and employed Student's t-test (k = 2) or ANOVA corrected with Bonferroni (k > 2) to examine the differences in cluster distributions. For categorical features, we computed frequencies and percentages and utilized either Pearson's chi-square test (k = 2) or the likelihood ratio test (k > 2). The significance level was set at 0.05 to determine the statistical significance of the observed results.
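A sketch of these tests using scipy is given below; the likelihood-ratio (G) variant is obtained by passing lambda_="log-likelihood" to chi2_contingency, and the Bonferroni correction is assumed to be applied to the resulting p-values downstream.

import pandas as pd
from scipy import stats

def test_continuous(groups):
    """t-test for two clusters, one-way ANOVA for more than two."""
    if len(groups) == 2:
        return stats.ttest_ind(groups[0], groups[1]).pvalue
    return stats.f_oneway(*groups).pvalue

def test_categorical(df, feature, cluster_col="cluster", likelihood_ratio=False):
    """Chi-square (or likelihood-ratio/G) test of feature-by-cluster independence."""
    table = pd.crosstab(df[feature], df[cluster_col])
    lam = "log-likelihood" if likelihood_ratio else None
    return stats.chi2_contingency(table, lambda_=lam)[1]  # the p-value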
Cluster Validation and Interpretation Using Machine Learning and SHAP
To validate the identified clusters, we used a random forest model to predict the cluster assignment of each patient. For this model, we utilized the default parameters as implemented in the scikit-learn (sklearn) library in Python. This machine learning approach was chosen for its robustness and ability to handle complex, multi-dimensional data. Further, to understand which features most influenced these predictions, we utilized SHAP values. SHAP values provide insights into the contribution of each feature to the prediction made by the model, thereby clarifying which features are most influential in defining each cluster and enhancing the interpretation of the clustering results. To facilitate this computation, we utilized the TreeExplainer method, designed for tree-based models such as random forest [37,38]; this method allows for an efficient and accurate interpretation of the model's output. Moreover, to enhance interpretability, we grouped features into aggregative sums, enabling us to analyze the collective impact of related features on the clustering and providing a more holistic view of the factors that differentiated the patient clusters.
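A sketch of this validation step is shown below, assuming X_scaled holds the (aggregated) features and cluster_labels the assignments from the chosen K-means solution; both names are placeholders carried over from the earlier sketches.

import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

clf = RandomForestClassifier(random_state=0)  # default parameters, as in the text
auc = cross_val_score(clf, X_scaled, cluster_labels, cv=10,
                      scoring="roc_auc_ovr").mean()

clf.fit(X_scaled, cluster_labels)
explainer = shap.TreeExplainer(clf)            # efficient for tree ensembles
shap_values = explainer.shap_values(X_scaled)  # per-cluster attributions
shap.summary_plot(shap_values, X_scaled)       # global feature-importance view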
Results of the Clustering Phase
In total, 1370 subjects were included in the analysis. Initially, we employed principal component analysis (PCA) [41,42] to visualize the outcomes of the various clustering algorithms across different k values. The PCA analysis incorporated 88 components, which accounted for over 80% of the data's variance. For a visual representation of each algorithm across different k values, refer to Figure A1, Appendix A. The visualizations indicate that the K-means and Gaussian clustering methods exhibit greater similarity in their cluster assignments compared to those under the agglomerative clustering method using the Ward linkage criterion. Interestingly, applying different linkage criteria to the agglomerative method often resulted in most data points being assigned to a single cluster, suggesting that linkage criteria other than Ward's may yield less meaningful cluster assignments.
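The PCA step could be reproduced along the lines of the sketch below; passing a float to n_components makes scikit-learn retain just enough components to explain that fraction of the variance (88 components in this data set), and X_scaled/cluster_labels remain the placeholder names used earlier.

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

pca = PCA(n_components=0.80)          # keep components explaining >=80% of variance
X_pca = pca.fit_transform(X_scaled)
print(pca.n_components_, "components retained")

# 2-D scatter of the first two components, coloured by cluster assignment.
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=cluster_labels, s=8, cmap="viridis")
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()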
Additionally, we examined the impact of various linkage criteria on the agglomerative clustering results, as illustrated by a dendrogram in Figure A2, Appendix A. The dendrogram reinforces our observation that employing linkage criteria other than Ward's tends to result in less meaningful cluster assignments. Consequently, the careful selection of an appropriate linkage criterion is crucial for achieving meaningful results in agglomerative clustering.
Results of the Distance Measure Phase
The evaluation metrics (SI, CHI, and DBI) were employed to assess the quality of the clusters generated by the K-means, Gaussian mixture, and agglomerative Ward algorithms for various k values. These results are depicted in Figure 2. For clarity, we omitted results from agglomerative algorithms with linkages that clustered almost all points into a single cluster. However, their results can be found in Figures A3-A5, Appendix A.
Silhouette Index (SI)
The SI measure displayed in Figure 2A shows that using Gower's distance metric improved the results. Specifically, K-means with Gower's distance metric achieved the highest SI score for k = 2, 3, and 5, followed by Gaussian mixture with Gower's distance metric, which exhibited a slightly better score for k = 4. The agglomerative Ward algorithm performed relatively worse across most k values. Additionally, K-means outperformed Gaussian mixture for k = 1 and 5 but not for k = 2 and 3. Notably, the SI score tends to decline with an increasing k value in almost all algorithms, except for K-means, where it remains relatively consistent for k = 3, 4, and 5.
Calinski-Harabasz Index (CHI)
Figure 2B illustrates the CHI measure. K-means consistently achieved the highest score for all k values, followed by Gaussian mixture and the agglomerative algorithm with Ward linkage. Interestingly, the use of Gower's metric led to inferior results. The CHI score also decreased as k increased.
Davies-Bouldin Index (DBI)
The DBI measure is depicted in Figure 2C. The use of Gower's metric significantly worsened the results, leading to their exclusion from Figure 2C. K-means consistently attained the best (lowest) DBI score across all k values. Gaussian mixture outperformed the agglomerative algorithm with Ward linkage for k = 2 and k = 3 but not for k = 4 and k = 5. Unlike the SI and CHI scores, no improvement in the DBI score was observed as k increased.
In summary, Figure 2 shows that K-means outperformed the other algorithms in two of the three evaluation metrics. Specifically, in terms of the CHI score, K-means demonstrated superior performance across all k values, surpassing all other algorithms. Additionally, K-means achieved the lowest (best) DBI score across all k values after excluding algorithms that clustered most points into a single cluster. These results suggest that K-means exhibits greater robustness and stability compared to the other algorithms examined in our study.
Results of the Iterative Phase
Assessment of Clustering Algorithms' Sensitivity to Initial Seeds
We conducted 30 iterations of the K-means and Gaussian algorithms, with varying starting seeds, for different k values. As expected, agglomerative clustering was not influenced by the starting seed. The results are depicted in Figures A6-A8, Appendix A, which present the boxplots of the SI, CHI, and DBI scores.
Although clustering algorithms are acknowledged to be sensitive to initial seeds, we found minimal variation in performance across different seeds in our dataset. K-means with k = 2, 3, and 4 exhibited a standard deviation of performance of less than 0.05 across seeds. However, using Gower's metric led to increased variance in certain cases, yielding inferior results in terms of the DBI score. Hence, our findings suggest that while Gower's metric can improve performance and reduce variance in some scenarios, it might increase variance in others.
Evaluation of Cluster Consistency Using Random Subsets of 70% of the Data
In this assessment, involving counting the number of "bad clusters", K-means, both with and without the Gower metric, did not generate any bad clusters for k = 2, 3, and 4. Conversely, agglomerative algorithms using average, single, and complete linkages consistently generated a high number of bad clusters, as detailed in Table A1, Appendix A. These findings are supported by the visualization in Figure A1, where these algorithms clustered most points into a single cluster, resulting in underrepresented clusters. Using the Gower metric in K-means, Gaussian mixture, and agglomerative clustering with complete linkage reduced the number of bad clusters and improved the clustering iterations. Notably, an increase in the value of k corresponded to a proportional rise in the number of bad clusters across all algorithms.
Comparison of SI, CHI, and DBI Scores Using 100% and 70% of the Data
To explore clustering performance, we calculated the SI, CHI, and DBI scores for each iteration and k value, using a random subset of 70% of the data, and compared them to the scores obtained when using the complete dataset. The comparative analysis, presented in Figure A9, Appendix A, reveals consistent performance, with mean scores showing little variation between using 100% and 70% of the data.
Assessing Consistency within and between Clustering Methods Using the Adjusted Rand Index (ARI)
To evaluate consistency within and between clustering algorithms, we conducted an analysis of intra-algorithm and inter-algorithm similarity. For each algorithm and k value, we performed clustering on the entire dataset using 10 random seeds, saved the resulting labels, and calculated the ARI score for all possible pairs of labels, resulting in 45 pairwise comparisons. The results of the intra-algorithm and inter-algorithm analyses are presented in Table 1. In the intra-algorithm similarity analysis, K-means demonstrated remarkable robustness, with minimal differences observed for k = 2 and k = 3. The utilization of Gower's metric improved the algorithm's robustness across all k values. Interestingly, both K-means and Gaussian mixture produced highly similar clustering results, regardless of whether they used Gower's distance metric or not, with ARI scores of 0.944 and 0.978, respectively. As expected, the agglomerative algorithm was unaffected by different seeds and consistently yielded identical results, resulting in an intra-algorithm score of 1. The ARI scores of K-means with different metrics were quite similar, particularly for k = 2 and k = 3 (0.944 and 0.819, respectively). Similarly, Gaussian mixture with different metrics also achieved a very high score for k = 2 (0.978).
The inter-algorithm similarity analysis revealed a high ARI between K-means and Gaussian mixture for both metrics. Interestingly, when both algorithms employed the Gower metric, the ARI increased for k = 4 and k = 5 (0.931 and 0.900, respectively). The agglomerative algorithm with Ward linkage also exhibited a high ARI score, while the agglomerative algorithm with other linkages demonstrated lower similarity.
The Semantic Phase
Semantic Assessment of Clustering Methods Using 70% of the Data
Following the internal evaluation metrics and ARI scores, we conducted a semantic assessment of the clustering algorithms using subsets comprising 70% of the data. K-means was chosen due to its superior performance in the previous assessments, evidenced by its CHI and DBI scores, its robustness in the intra-algorithm analysis, and its similarity to the Gaussian mixture and agglomerative (Ward) algorithms in the inter-algorithm analysis. K-means generated no bad clusters for k = 2, 3, and 4, but had a few bad clusters for k = 5. Despite the known influence of the starting seed, we noted minimal variability in the scores across different seed runs.
To conduct this analysis, we clustered the dataset 10 times using random subsets of 70% of the data for each k value. In each iteration, we manually relabeled the clusters. We then conducted Z proportion tests to compare demographic and clinical categorical features between semantically matched clusters across iterations.
For k = 2 and k = 3, no statistically significant differences were observed between any pair of clusters at an alpha level of 0.001, indicating semantic consistency even with 70% of the data. For k = 4 and k = 5, we found statistically significant differences in 55 pairs and 2822 pairs, respectively, at alpha = 0.001.
External Semantic Assessment Using Statistical and Clinical Evaluation of Selected Clusters
Although both k = 2 and k = 3 were viable syntactic solutions for K-means, our semantic statistical analysis indicated that k = 3 held more clinical significance. Therefore, we will detail the k = 3 clusters generated by K-means in the following paragraphs. The results for k = 2 are included in Appendix A and discussed below.
Demographics and Smoking Habits across the Clusters
The age range was 8-85 years, the mean age was 44.5 ± 12.4 years, and 1243 (90.7%) of the participants were women while 127 (9.3%) were men. The demographics and smoking habits across the clusters are presented in Figure 3. The distribution of the clusters within the study population was as follows: Cluster 0 (293 subjects, 21.4%), Cluster 1 (632, 46.1%), and Cluster 2 (445, 32.5%) (Figure 3A).
No statistically significant associations were found between any specific cluster and the following demographic characteristics: age (p = 0.384, Figure 3B), sex (p = 0.228, Figure 3E), being a native Israeli (p = 0.793, Figure 3C), being born in any other immigrant country (Figure 3D), and marital status (Figure 3E). However, significant differences were observed among the clusters in relation to other factors. As depicted in Figure 3E, Cluster 1 reflected the least severe condition, Cluster 0 the worst, and Cluster 2 fell in between. The following comparisons showed statistically significant differences among the clusters: having a steady job (p < 0.001), reporting a worsening of fibromyalgia in the last year (p < 0.001), and current smoking status (p < 0.001). Cluster 0 had the highest prevalence among those with a high school education (p = 0.001) and diploma education (a postgraduate qualification after high school, but not an academic degree) (p < 0.001).
Comorbidities and History of Trauma across the Clusters
The distribution of comorbidities and trauma history across the clusters is presented in Figure 4. Cluster 0 had a significantly higher prevalence of all analyzed systemic diseases (Figure 4A), as well as of rheumatological conditions, except for systemic lupus erythematosus (SLE), for which it showed a significantly lower prevalence (Figure 4B). Additionally, Cluster 0 exhibited a higher number of emotional and physical traumatic life events both before and after the onset of fibromyalgia (Figure 4C). There were no statistically significant differences observed between the three clusters regarding the presence of certain systemic comorbidities, including malignancy (p = 0.619), hyperthyroidism (p = 0.194), liver disease (p = 0.086), and kidney disease (p = 0.921). Similarly, no significant differences were found among the clusters for various comorbid rheumatological conditions, including rheumatoid arthritis (p = 0.209), Sjögren syndrome (p = 0.977), ankylosing spondylitis (p = 0.155), psoriatic arthritis (p = 0.073), familial Mediterranean fever (p = 0.587), scleroderma (p = 0.307), gout (p = 0.074), and pseudogout (p = 0.214); these non-significant findings are not shown in the figures.
Symptoms, Sleep and Functional Problems and Treatment Modalities across the Clusters
The distribution of symptoms, sleep problems, functional mobility problems, and treatment modalities across the clusters is presented in Figure 5. Cluster 0 exhibited a significantly higher number of symptoms (Figure 5A), along with a greater prevalence of sleep problems (Figure 5B) and functional mobility issues (Figure 5C). Regarding treatment modalities, Cluster 0 underwent more treatments overall, except for exercising (p < 0.001). Notably, no significant differences were observed in the use of certain treatments, such as Tai Chi (p = 0.256) and the Feldenkrais method (p = 0.539) (Figure 5D). The ANOVA analyses and post hoc Bonferroni tests examining the years with fibromyalgia, pain levels, sleep, quality of life, and treatment effectiveness across the clusters are presented in Figure 6. The number of years patients had fibromyalgia did not show any statistically significant differences between the clusters (p = 0.161). As illustrated in Figure 6A, Cluster 0 represents the most severe condition, Cluster 1 represents the least severe condition, and Cluster 2 falls in between. Significant differences were observed among the clusters in terms of pain levels, sleeping hours, sleep quality, and quality of life. Cluster 0 reported the lowest scores in treatment effectiveness, which were statistically significantly lower than those of Cluster 1 (p < 0.001), but not statistically significantly different from those of Cluster 2 (p = 0.319).
The distribution of specific pain locations across the clusters is depicted in Figure 6B. Statistically significant differences were observed between the clusters for all body locations. Contrary to previous observations, the highest proportions of patients reporting pain were found in Cluster 2, followed by Cluster 0, which exhibited similar proportions in all painful areas. Cluster 1 had the lowest proportions of patients reporting pain in the various body areas. Notably, none of the patients in Cluster 1 reported pain in all body areas.
In summary, our statistical and clinical evaluation of the k = 3 clusters indicates that Cluster 0 represents the most severe condition, Cluster 1 represents the least severe condition, and Cluster 2 falls in between. Significant differences were observed among the clusters in terms of comorbid medical conditions, symptoms, sleep patterns, functionality, and treatment outcomes. However, no significant differences were observed in terms of pain locations.
The Validation Phase: A Cluster Classification Model and Computation of Its SHAP Values to Assess the Relative Importance of Different Features When Forming Clusters
We validated the clustering results using a random forest model to predict cluster assignments, incorporating aggregated features like medical comorbidities and treatments. We obtained a mean ROC (receiver operating characteristic) AUC (area under the curve) score of 0.9943 and an overall accuracy of 0.9459 with 10-fold cross-validation. To assess the relative importance of these aggregated features in predicting and interpreting the clusters, we calculated SHAP values.
Figure 7A displays the top 20 features in the cluster prediction. Dot plots for Clusters 0, 1, and 2 are presented in Figure 7B-D, respectively. Figure 7B shows that Cluster 0 (sickest) was uniquely positively associated with mobility functional problems, the most significant feature for this cluster. In contrast, Cluster 1 (healthiest) and Cluster 2 ranked the sum of painful areas as the most significant parameter and exhibited a negative association with mobility problems, as depicted in Figure 7C,D, respectively. While Clusters 0 and 2 were positively associated with the sum of painful areas, Cluster 1 demonstrated a negative association. Cluster 0 also had positive associations with several symptoms, painful areas, comorbidities, sleep problems, mental health problems, and work absence, but showed negative associations with quality of life, steady employment, and sleep quality and duration. Age did not significantly contribute to cluster differences.
The k = 2 Solution
The k = 2 solution represented a valid syntactic clustering, as determined by the three distance metrics used. However, both k = 2 and k = 3 were legitimate syntactic solutions according to the ARI stability metric. Therefore, we conducted a semantic statistical analysis to assess the clinical relevance of the clusters for both k = 2 and k = 3. The k = 3 solution emerged as the more meaningful form of clustering, identifying three sub-classes of fibromyalgia severity: Cluster 0 (most severe condition), Cluster 1 (least severe condition), and Cluster 2 (intermediate). Significant differences were observed in various comparisons related to comorbid medical conditions, symptoms, sleep patterns, functionality, and treatment outcomes, although not in terms of pain locations.
To evaluate the k = 2 solution, we employed the same statistical tests, ML model (random forest), and SHAP explanations. Detailed results for the k = 2 solution are available in Appendix A. The k = 2 clustering resulted in two clusters: Cluster 0 with 731 subjects and Cluster 1 with 639 subjects. Cluster 0 consisted of patients with more severe conditions, while Cluster 1 comprised patients with less severe conditions. Further analysis showed that Cluster 0 in the k = 2 solution combined elements of both Cluster 0 (most severe) and Cluster 2 (intermediate severity) from the k = 3 solution. Patients in Cluster 1 of the k = 3 solution predominantly remained in Cluster 1 (the healthier cluster) in the k = 2 solution.
To assess differences within each feature between the two clusters, we used Pearson's chi-square test for categorical parameters and an independent t-test for continuous variables. The results of these tests are detailed in Table A2 in Appendix A. Although there were differences between the two clusters in the k = 2 solution, the k = 3 solution exhibited a greater number of statistically and clinically significant features. The absence of significant differences in certain features could be attributed to the merging of the most severe and intermediate clusters.
In the prediction models for k = 2 using the random forest model with 10-fold cross-validation, we achieved an ROC AUC and accuracy of 0.99. The SHAP algorithm results for K-means clustering with k = 2 are presented in Appendix A (Figure A10). Pain locations and mobility functional problems were highly ranked in both Cluster 0 and Cluster 1, but with opposite associations.
Both the k = 2 and k = 3 partitioning options using K-means are valid clustering solutions. However, the k = 3 solution holds greater clinical significance and may contribute to a better understanding of the underlying mechanisms of fibromyalgia, potentially leading to more effective therapeutic interventions. Therefore, both solutions are presented in the results of our study.
To better understand the nuanced differences and key characteristics that distinguish the k = 2 and k = 3 clustering solutions, we included Figure 8. This figure displays the cluster visualizations as defined by the K-means algorithm for both the k = 2 and k = 3 scenarios, using the first two PCA components. This approach provides a more intuitive comprehension of the clusters' structure and the critical factors differentiating them. Additionally, the figure includes bar plots that highlight the top five influential features for each cluster, as identified through our SHAP analysis. These bar plots provide insights into the defining characteristics of each patient group, thereby enhancing our understanding of each cluster in the context of fibromyalgia.
Discussion
The present study introduces the CDI-SSV methodology, a novel multi-dimensional approach to discover and validate the optimal number of clusters. Unlike traditional clustering approaches, which often rely on a single algorithm or metric, our method uniquely integrates several clustering algorithms, distance measures, and bagging and clustering iterations (the CDI phase), followed by the SSV phase, which computes statistical differences among clusters for several meaningful additional clinical semantic features. Finally, to validate our results, we generated a machine learning model that classified the patients into the clusters and assessed the importance of the demographic and clinical features using SHAP values.
A key innovation of our study is the application of this multi-dimensional approach to a large cohort of 1370 fibromyalgia patients, a scale significantly larger than that of most previous studies in this domain. This extensive sample size allows for the capturing of a broader spectrum of patient variability, thereby enhancing the reliability and applicability of our findings.
To the best of our knowledge, this is the first published study that employs such a holistic and multi-dimensional methodology in a medical context, demonstrated here with fibromyalgia patients. The integration of multiple clustering algorithms alongside both syntactic and semantic validation techniques sets our approach apart from existing methods. Furthermore, the incorporation of SHAP values in the validation process not only provides a deeper understanding of the influence of demographic and clinical features on cluster formation but also highlights the potential of our methodology in the realm of personalized medicine.
We suggest that the CDI-SSV methodology can be effectively applied across various medical domains for clustering analysis to identify patient sub-groups. Its capability to handle large datasets and integrate multiple data dimensions makes it a versatile tool for uncovering meaningful patterns in complex medical data. This approach has the potential to significantly contribute to the advancement of patient stratification and personalized treatment strategies, extending well beyond the scope of fibromyalgia to other medical conditions.
The present study found K-means to be a more robust and stable clustering method than the other algorithms tested. This conclusion rests on several findings. First, our results, presented in Figure 2, indicated that K-means outperformed the other algorithms in two out of three evaluation metrics (the CHI and DBI scores). K-means with Gower's distance metric also had the best SI score for k = 2, 3, and 5. Moreover, we conducted 30 iterations of the K-means and Gaussian mixture clustering algorithms with different seeds to assess their average performance. Figures A6-A8 in Appendix A show the results. Our findings demonstrate that the variation in performance across different seeds was minimal in our dataset, especially for K-means with k = 2 and 3. Furthermore, we conducted 30 iterations using randomly bagged subsets (selected with replacement) comprising 70% of the data, and the results presented in Table A1 demonstrate that K-means did not create "bad" clusters, defined as clusters with less than 5% of the data, for k = 2, 3, or 4, with or without using the Gower distance metric. The mean SI, CHI, and DBI scores did not vary significantly when using 100% compared to 70% of the data (Figure A9). The assessment of consistency within and between clustering methods using the adjusted Rand index (ARI) again revealed K-means to be a very robust algorithm, which clustered individuals similarly to the Gaussian mixture and agglomerative (Ward) algorithms, with almost no difference in the ARI for k = 2 and k = 3. Based on the best overall performance of K-means according to all these assessments, we chose K-means as the preferred method and performed a semantic assessment of clustering methods using 70% of the data. No statistically significant differences were found, across all 151 categorical features, between any pair of equivalent clusters for k = 2 and k = 3 at a significance level of alpha = 0.001. Considering that both k = 2 and k = 3 were legitimate syntactic solutions, we further performed statistical analysis and evaluated the clinical relevance of the created clusters. While k = 2 yielded better syntactic performance in the SI, CHI, and DBI scores, the ARI scores of k = 2 were similar to those of k = 3, suggesting that even with a larger number of clusters, stability is maintained with respect to the same pairs of patients appearing in the same cluster. Even more importantly, the k = 3 partitioning seemed to represent a more clinically meaningful partition, since the three-cluster solution better explained the clinical picture presented by fibromyalgia patients, which appears to comprise low-, intermediate-, and high-grade severity patients. Compared to the k = 2 solution, the k = 3 solution manifested more statistically significant differences in all comparisons among clusters in terms of comorbid medical conditions, symptoms, sleep patterns, functionality, and treatment outcomes, but not in terms of pain locations.
A recent study by Fernández-de-las-Peñas et al. [10] also found differences between subgroups of fibromyalgia patients in terms of psychological, cognitive, health-related, and physical features, but similar widespread pressure pain sensitivity. However, their study, which identified only two subgroups, had a smaller population size (113) compared to that of our study (1370) and included only women. Additionally, their methodology differed from ours, as we employed the detailed CDI method to assess the clustering. Finally, in our study, the sickest cluster was the smallest, representing 21.4% of the population, which may be challenging to capture in smaller cohorts.
Widespread pain is the hallmark of fibromyalgia, and therefore may not discriminate well between fibromyalgia patients. Fibromyalgia is now thought to be a pain regulation disorder, often classified as central sensitization [6], due to alterations in central nervous system pain and sensory processing [7]. We found differences between clusters not only in subjective parameters, but also in objective parameters, such as the presence of systemic and rheumatological comorbidities, symptoms, and functional problems such as using a walking stick or a wheelchair, which indicate a more serious clinical condition. These comorbid conditions may also contribute to pain. Therefore, the clinical implications of identifying these subgroups could imply different underlying mechanisms in each of these subgroups, a hypothesis that should be studied in future research.
Finally, to validate our clustering results using a supervised classification methodology, our random forest model accurately classified patients into the three clusters with an AUC of 0.994 and an accuracy of 0.946. Then, by computing the model's SHAP values, we identified the distinct profile that enabled the model to classify the patients into each cluster. In particular, Cluster 0, the sickest cluster, is characterized by mobility-related functional problems, accompanying symptoms, painful areas, comorbidities, sleep and mental health problems, absenteeism, a lower quality of life, and treatment effectiveness self-assessment. These features serve as markers for evidence-based personalized diagnosis and might suggest that this subgroup requires different management strategies, providing clinical application points for patient-centered treatment.
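As a rough illustration of this supervised validation step, the following sketch trains a random forest to recover cluster labels and explains it with SHAP values. The data and labels are placeholders; the reported AUC of 0.994 and accuracy of 0.946 come from the study's real features, not from this toy setup.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1370, 151))                 # placeholder features
cluster_labels = rng.integers(0, 3, size=1370)   # stand-in for the k = 3 assignment

X_tr, X_te, y_tr, y_te = train_test_split(X, cluster_labels,
                                          test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

proba = rf.predict_proba(X_te)
print("AUC:", roc_auc_score(y_te, proba, multi_class="ovr"))
print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))

# SHAP values identify which features drive assignment to each cluster;
# shap_values is a list of per-class arrays in older shap, a 3D array in newer
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_te)
cluster0 = shap_values[0] if isinstance(shap_values, list) else shap_values[..., 0]
importance = np.abs(cluster0).mean(axis=0)       # mean |SHAP| per feature
print("top features for Cluster 0:", np.argsort(importance)[::-1][:5])
```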
The identification of three distinct fibromyalgia patient profiles in our study, as shown in Figure 8, has important implications for clinical management. These profiles enable more personalized treatment strategies, allowing clinicians to tailor interventions to each subgroup's severity and characteristics. For example, the most severely affected cluster may require aggressive, multidisciplinary treatment, while others could benefit from less intensive therapies focused on lifestyle and symptom management. These findings also inform future research into fibromyalgia's pathophysiology, particularly in understanding different patterns of central sensitization across subgroups. This knowledge is crucial for developing targeted therapies. Applying our CDI-SSV methodology in clinical practice can facilitate the early identification of patient subgroups, leading to earlier, more effective interventions and potentially better long-term outcomes. Ultimately, our study's insights could significantly refine fibromyalgia diagnosis, management, and treatment, aligning with personalized medicine principles and improving patient care.
Strengths and limitations
The main contributions of the study include (1) a novel, highly general, multidimensional clustering methodology, CDI-SSV, for identifying patient subgroups; and (2) the application of the CDI-SSV methodology to a dataset of fibromyalgia patients, which demonstrated its effectiveness in uncovering three distinct patient profiles, enabling a more nuanced understanding of fibromyalgia based on demographic and clinical features, and providing a potential to improve clinical care. The provision of clinical markers for evidence-based personalized diagnosis, management, and prognosis enables a more personalized tailoring of treatments and interventions.
Regarding limitations, although this study analyzed important features, it would also be useful to obtain genetic and laboratory results, thus enabling us to better understand the clinical significance of the different clusters.
Conclusions
In conclusion, our study highlights the value of the CDI-SSV methodology in clustering and classifying fibromyalgia patients, demonstrating its potential applicability beyond fibromyalgia to other medical domains. This methodology facilitates enhanced patient stratification, paving the way for improved clinical outcomes across various conditions. The identification of distinct profiles within fibromyalgia patients allows for a more targeted and personalized approach in diagnosis, management, and prognosis. The practical implications of these findings, including the potential for more effective and patient-centric treatment strategies, underscore the significance of our work in advancing the understanding and care of fibromyalgia. Ultimately, this work contributes to the evolving field of personalized medicine, offering data-driven insights and evidence-based practices that can transform patient care.
To further investigate the impact of the choice of linkage criterion on the results of agglomerative clustering, we present in Figure A2 a dendrogram of the hierarchical clustering algorithm using different linkage criteria. As can be observed from the dendrogram, the choice of linkage criterion has a significant impact on the results of agglomerative clustering. Different linkage criteria can produce different cluster assignments for the same dataset. This reinforces our previous conclusion that using linkage criteria other than Ward's may result in less meaningful cluster assignments and highlights the importance of carefully selecting an appropriate linkage criterion when using agglomerative clustering. For the SI score, Figure A6 shows that using the Gower distance metric in K-means clustering improved performance for all values of k (as was shown in the semantic assessment of clustering methods using 70% of the data) and reduced the variance in performance across seeds. For Gaussian mixture, using Gower's distance metric reduced variance only for k = 2. For the DBI score, Figure A8 shows that using the Gower distance metric increased variance and produced worse results.
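A minimal sketch of the linkage comparison behind Figure A2, assuming SciPy is used for the hierarchy; placeholder data stands in for the real feature matrix.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # placeholder data

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, method in zip(axes, ("ward", "complete", "average", "single")):
    Z = linkage(X, method=method)
    # SciPy's default color threshold is 70% of the maximum merge distance,
    # matching the convention noted in the Figure A2 caption
    dendrogram(Z, ax=ax, no_labels=True)
    ax.set_title(f"{method} linkage")
plt.tight_layout()
plt.show()
```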
Figure 1. The CDI-SSV Methodology: An Integrated Approach for Clustering Validation. The figure provides an overview of the Clustering, Distance measures, and Iterative Statistical and Semantic Validation (CDI-SSV) methodology. The CDI phase serves as the initial step, involving the evaluation of cluster quality, the impact of different starting seeds, and the consistency of clusters across various algorithms and pre-defined values of k. Within- and between-consistency checks, along with evaluations of internal semantic consistency, are performed to assess the optimal algorithm and values of k. In the subsequent SSV phase, an external semantic analysis of the results is conducted, with a particular focus on the clinical context, thus enhancing the validation process. Finally, machine learning techniques are employed to validate the results, and their interpretation is facilitated by SHAP (Shapley additive explanations) values.
Figure 2. Evaluation of clustering algorithms using evaluation metrics. (A) Silhouette index (SI): the SI scores for different values of k indicate that K-means with Gower's distance metric achieved the highest score for k = 2, 3, and 5. (B) Calinski-Harabasz index (CHI): K-means consistently outperformed other algorithms, achieving the best score across all values of k. (C) Davies-Bouldin index (DBI): K-means demonstrated superior results for all values of k.
Figure 4. Comorbidities and history of trauma across the clusters (likelihood ratio).
Figure 5. Symptoms, sleep and functional problems, and treatment modalities across the clusters (likelihood ratio).
Figure 6. Years with fibromyalgia, sleep, quality of life, treatment effectiveness, and pain level and locations (analysis of variance (ANOVA) corrected with Bonferroni test for multiple comparisons).
Figure 8. Comparative visualization of k = 2 and k = 3 clustering solutions in fibromyalgia patient analysis. The top-left panel displays k = 2 clustering using PCA components, clearly delineating Clusters 0 and 1. The top-right panel presents k = 3 clustering, offering a detailed view of Clusters 0, 1, and 2. The bottom panel includes bar plots that highlight the five most significant attributes for each cluster, with the left side pertaining to the k = 2 solution and the right side pertaining to the k = 3 solution.
Figure A2. Dendrograms of hierarchical clustering using different linkage criteria. Each vertical line represents a merge between clusters. The height of the vertical lines represents the distance (or dissimilarity) at which clusters are merged. The colors signify the different clusters formed, based on the standard threshold (70% of the maximum linkage distance).
Figures A3-A5 present the scores of the different algorithms, for the different values of k, for each of the SI, CHI, and DBI scores, respectively.
Figure A3. The SI scores of each algorithm for the different values of k. The agglomerative algorithm with complete, average, and single linkage had the best scores for every k. The agglomerative algorithm with Ward linkage was the worst for almost every k (except k = 5). K-means and Gaussian mixture have very similar scores, both when using Gower's distance metric and when not using it.
Figure A4. The CHI score of K-means was superior for every k, followed by Gaussian mixture and the agglomerative algorithm with Ward linkage.
Figure A5. On the left, the DBI scores of all algorithms; on the right, after the removal of the three worst (highest) ones. The agglomerative algorithm with complete, average, and single linkage had the lowest (best) results for all values of k. K-means performed better than Gaussian mixture did for all values of k. Gaussian mixture performed better than did the agglomerative algorithm with Ward linkage for k = 2 and 3 but not for k = 4 and 5.
Figures A6-A8. Distribution of SI, CHI, and DBI scores for K-means and Gaussian mixture with and without Gower's distance metric for the different values of k, sampling 100% of the data, when using different starting seeds.
Figure A6. Distribution of SI scores for K-means and Gaussian mixture with and without Gower's distance metric for the different values of k, sampling 100% of the data, when using different starting seeds.
Figure A7. Distribution of CHI scores for K-means and Gaussian mixture with and without Gower's distance metric for the different values of k, sampling 100% of the data, when using different starting seeds.
Figure A8. Distribution of DBI scores for K-means and Gaussian mixture with and without Gower's distance metric for the different values of k, sampling 100% of the data, when using different starting seeds.
Figure A9. Comparison of silhouette index (SI), Calinski-Harabasz index (CHI), and Davies-Bouldin index (DBI) scores using 100% (left) and 70% (right) of the data. The figure shows consistent performance between 70% and 100% of the data, as mean scores did not vary significantly between them.
Table 1. Intra-algorithm and inter-algorithm adjusted Rand index (ARI) scores using 10 random seeds.
Table A1. Number of bad clusterings, defined as clusters containing less than 5% of the data, by each algorithm for each k, when using random subsets of 70% of the data.
Table A2. Pearson's chi-square (χ²) test for k = 2 was used to determine whether or not the distributions of the two clusters differed significantly within each categorical feature.
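As a hedged illustration of the test named in this caption, the snippet below runs Pearson's chi-square on one hypothetical cluster-by-category contingency table; the real analysis repeats this for each of the 151 categorical features and compares p against alpha = 0.001.

```python
from scipy.stats import chi2_contingency

# rows: cluster 0 / cluster 1; columns: feature categories (hypothetical counts)
table = [[120, 380],
         [310, 560]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4g}")  # compare p against alpha = 0.001
```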
EEG and Sonic Platforms to Enhance Mindfulness Meditation
Extensive research into the wide-reaching benefits of mindfulness is currently taking place in the fields of psychology and neuroscience. Parallel to this, cutting-edge technologies are becoming more accessible. This article documents a new movement in human-computer interaction design in which artists and designers are employing new technologies to research and explore the practice of mindfulness. This paper explores three interactive designs that encourage mindfulness through sensors and novel input technology. We begin by giving an overview of the historical background of the electroencephalogram (EEG), before going on to discuss the physiological processes of meditation and the history behind the clinical practice of mindfulness. We show how two artists are employing EEG sensors that measure the electrical activity of the brain to visualise mindfulness meditation practices. A design that uses respiration sensors to record the breath patterns of the user to trigger distinct soundscapes that directly relate to the correlation between breath and meditative state is then discussed. Lastly, we conclude by debating the future of the three artworks.
Introduction
In 2014 Time Magazine declared that the far-reaching cultural and academic interest in mindfulness is such that we have entered the age of the Mindful Revolution. Buddhists have cultivated the practice of mindfulness for centuries; since the 1970s, there has been an increase in interest in the psychological and physical benefits of mindfulness practice in fields as diverse as design, education and modern warfare training. In the fields of psychology and neuroscience there is a deluge of new research into how mindfulness can benefit our daily lives. We notice a correlation between the growing cultural interest in mindfulness practices and the increasing pace of technological progress. Contemporary western societies are saturated with ever-growing and evolving technologies; our bodies have become embedded with, and connected to, technology in such a way that the boundaries between the biological and the digital have become intertwined. Although technology can fracture our attention, it can also provide design tools, methods and practices that help focus our attention in highly engaging ways. We demonstrate a new movement amongst artists and designers working at the intersection of mindfulness and emerging technologies. In order to explore how technology is being used to promote mindfulness and bring our attention back to the present moment, we examine three sensor-driven installation art applications: Narcissus Brainwave (2016), Visualise your Mind (2015), and Sonic Cradle (2012).
Methodology
This paper acts as a resource for scholars at the emerging intersections of human-computer interaction design, sonic studies and mindfulness studies. It also contributes to the fields of healthcare and psychology and argues that more research into the potential for EEG headsets and sonic sensors to promote mindfulness practices needs to be conducted. Through analysis of the following research questions, we show how the visualisation and sonification of mindfulness have the potential to enhance meditative experiences. In what ways can artists use sensors to encourage the practice of mindfulness? Do wireless EEG devices have the potential to help mindfulness? Interactive design has the potential to engage with and inspire the user: how can this potential be employed to promote the practice of mindfulness?
In order to contextualise this research, we first trace the fleeting and fascinating history of the EEG and show how EEG headsets are employed by researchers to measure the specific electrical activity of the brain. We then take the reader on a brief tour of the history of mindfulness practice before elucidating how designers and artists are using brain-computer interface devices that use EEGs. Narcissus Brainwave and Visualise Your Mind are two interactive designs that use EEG headsets. Sonic Cradle is an interactive sensor-driven design that has the potential to use EEG sensors in the future. The three projects in this paper used qualitative thematic analysis to analyse users' needs. The data was then grouped into themes that described the success of each project. We conclude by discussing future directions for the three projects. A more detailed description of the methodology used for each project is given in the sections below.
Background on EEGs
We begin our journey with a history of how technology has been used to record brainwave data. Research findings generated using EEG sensor-based technology have revolutionised discoveries in neuroplasticity. Moreover, the changes in the human brain resulting from mindfulness meditation practices can now be visualised and understood in new, tangible ways thanks to EEG headsets.
In 1924, Hans Berger made the first recording of human brain activity by EEG. EEG sensors measure the electrical activity of the brain: brain cells communicate by producing electrical signals, and an EEG measures this activity. Berger was trained in medicine and neuro-psychiatry and wanted to demonstrate that the electromagnetic fields of the human brain could be used for telepathy. Although the signals he detected could not be used for this purpose, the EEG was widely adopted by clinicians and scientists.
Brainwaves observed with an EEG allow researchers to record brainwave patterns. Humans have five types of brainwaves: Gamma, Beta, Alpha, Theta, and Delta. The frequency of a brainwave is associated with its speed and is measured in cycles per second. Different frequencies indicate different types of activity. Delta waves have a very low frequency (below 4 hertz) and occur during sleep. Theta waves (4 to 8 hertz) occur during drowsiness and light sleep. Alpha waves, 8 to 13 hertz, occur during relaxed times. Beta waves, 15 to 40 hertz, are the next fastest, and they occur when actively thinking. Gamma waves (greater than 40 hertz) have the highest frequency and are involved in higher mental activity.
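As a rough illustration of how these bands can be extracted in practice, the sketch below estimates per-band power from a synthetic signal using Welch's method. The sampling rate and the theta band edges are assumptions of this example; the other band edges follow the values quoted above.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # a common EEG sampling rate (assumption)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz alpha + noise

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (15, 40), "gamma": (40, 100)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band
    print(f"{name:>5}: {power:.3f}")
```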
Gamma waves and meditation
Each brainwave serves a unique purpose and helps us to cope with various situations, whether it is to facilitate the process of learning new information or to help us calm down after a long stressful day. Our ability to move through the different brainwave frequencies plays a large role in how successful we are at managing stress, focusing on tasks, and getting a good night's sleep. If one of the five types of brainwaves is either overproduced or underproduced in our brain, it causes problems.
Gamma waves are the fastest brainwaves measured on an EEG. Gamma waves relate to the speed at which we mentally process experiences, our ability to focus, and the richness and depth of sensory experiences. People with high gamma activity have exceptionally vivid and rapid memory recall. In a high gamma state the brain can process information quickly, remember it, and retrieve it later. High gamma activity corresponds to a state of peak performance. Elite athletes, musicians and high achievers in all fields produce more gamma waves than average.
Gamma waves have been documented to help with attention and focus (Collura, 1993). Individuals with high-level mental processing and functioning exhibit greater gamma activity (Fries, 2001; Jia X, 2011; Yi-Yuan Tang, 2007). Gamma waves are important for learning, memory and information processing.
Neuroscientists believe that gamma waves link information from all parts of the brain. Thus, the entire brain is influenced by gamma waves.
In a study with Buddhist monks, it was found that gamma activity increased in a meditative state of compassion (Jia X, 2011). Some research suggests that mindfulness meditation enhances compassionate behaviour (Davidson, 2004). A study that performed high-density EEG recordings during sleep in long-term practitioners of Buddhist meditation and meditation-naive individuals indicates that meditation practice produces measurable changes in brain activity. Therefore, meditation is the optimum way to increase gamma activity.
Mindfulness
Research demonstrates that mindfulness can greatly enhance our health and wellbeing (Hofmann, 2010; Jha, 2007; Jon Kabat-Zinn, 1998; Lazar, 2011; Mitchell, 2013). The clinical practice known as 'mindfulness' has been around since 1979, when Jon Kabat-Zinn introduced his Mindfulness-Based Stress Reduction (MBSR) program as part of the University of Massachusetts Medical School. Kabat-Zinn adapted the Buddhist tradition of mindfulness for the clinical setting in order to help with psychological conditions such as stress and chronic pain management.
Mindfulness dates back around 2600 years to the beginning of Buddhism. The Buddha's teachings, or Dharma, were not a doctrinal belief system, but rather a collection of principles and practices that offer support and encouragement in the universal quest for happiness and spiritual freedom; a "system of training that leads to insight and the overcoming of suffering" (Williams & Kabat-Zinn, 2011).
Buddhist meditation encompasses a variety of meditation techniques that aim to develop mindfulness, concentration, tranquillity, and insight. Increasingly, non-Buddhists are adopting Buddhist meditation techniques. Psychologists and psychiatrists are increasingly using these techniques to help alleviate a variety of health conditions such as anxiety and depression.
The etymology of the word 'mindfulness' can be traced to the concept of Sati (Pali) or Smrti (Sanskrit). There has been much debate surrounding the exact translation of smrti and sati, and while mindfulness is generally accepted as a broad translation, the grand semantic breadth of each word must be taken into consideration when looking for a direct translation. The cultivation of Sati, or non-judgmental mindful-awareness, sits at the core of Buddhist practice. To practice mindfulness is possible without Buddhism, but the practice of Buddhism is not possible without mindfulness. According to Jon Kabat-Zinn, "Mindfulness practice means that we commit fully in each moment to be present; inviting ourselves to interface with this moment in full awareness, with the intention to embody as best we can an orientation of calmness, mindfulness, and equanimity right here and right now." (Williams & Kabat-Zinn, 2011) Jon Kabat-Zinn describes mindfulness as a way to understand the events of our lives with equanimity. He proposes that by maintaining an attitude of non-judgmental direct observation and keeping attention on the present moment, we are able to see more clearly the characteristics of our mind and body processes (Jon Kabat-Zinn, 2009). To practice mindfulness means being continuously aware of the present moment; the following interactive designs are built to enable users to do this.
Brain-computer interface devices that use EEGs and Sonic Platforms
We believe that interactive design and art applications have the potential to engage with and inspire a user to practice mindfulness. The first artworks we discuss are Narcissus Brainwave and Visualise your Mind. They are both brain-computer interfaces that encourage mindfulness meditation through an EEG device. The participant's EEG data is recorded using a headset called the NeuroSky MindWave. The MindWave is an EEG headset that uses dry sensors. The NeuroSky was used to give participants feedback on their brainwave activity while undertaking mindful-based activities. The visualisation of the participants' brainwaves provided a tool to help develop mindfulness practice.
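For readers curious how such feedback is obtained programmatically, the sketch below reads the parsed eSense meditation and attention values from NeuroSky's ThinkGear Connector service, which streams carriage-return-delimited JSON on local TCP port 13854. The host, port, and field names follow NeuroSky's published socket protocol but should be verified against the SDK version in use; this is not the artists' actual code.

```python
import json
import socket

HOST, PORT = "127.0.0.1", 13854  # ThinkGear Connector default (assumption: verify)

with socket.create_connection((HOST, PORT)) as sock:
    # ask the connector for parsed JSON packets instead of raw samples
    sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:          # connector closed the stream
            break
        buf += data
        while b"\r" in buf:   # packets are carriage-return delimited
            line, buf = buf.split(b"\r", 1)
            try:
                packet = json.loads(line)
            except json.JSONDecodeError:
                continue
            esense = packet.get("eSense")
            if esense:
                print("meditation:", esense.get("meditation"),
                      "attention:", esense.get("attention"))
```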
Sonic Cradle, the third project, does not currently use an EEG headset but uses two respiration sensors that track diaphragm and chest breathing. However, in future iterations of the project, the team is planning to incorporate another EEG headset similar to the MindWave called Muse. Therefore, we still believe it is an important project to discuss when analysing how interactive installation art can encourage mindfulness.
In the following three examples we sought to discover whether EEG devices and other biofeedback devices have the ability to encourage the practice of mindfulness in users. We discuss how interactive applications support and promote the practice of mindfulness through direct visual brainwave data feedback and sound-based interaction. We examine how wireless EEG devices can engage the user and promote the practice of mindfulness. We outline three novel installation artworks that sought to incorporate human-computer interaction to encourage the practice of mindfulness.
Narcissus brainwave
Narcissus Brainwave (Figures 2 and 3) is a compelling example that promotes mindfulness through the use of a novel sensor-driven design that visualises the users' brainwaves while they meditate. The installation uses the EEG headset MindWave. Users are invited to wear the MindWave headset and sit in meditation. EEG data is sent from the MindWave device to the Processing program. After they have meditated, Narcissus Brainwave displays their brainwaves during the different states of meditation they experienced. The participant's brainwave data creates digital paintings based on Tibetan Buddhist Mandala patterns. The aesthetic visualisation patterns of Mandalas were created to enable users to discern different brainwave states of meditation. In Carl Jung's understanding of the Mandala, it is a psychological expression of the totality of the self (Jung, 1972).
The first user study was conducted to develop the visualisation rules that would enable an evaluation of brainwave changes between meditative and non-meditative states. Raw data from the MindWave was collected to distinguish four states of mind, which were then represented as visualisations (a, b, c, and d, respectively). By correlating these factors, the size and pattern of the Mandala were determined. User study two was designed to analyse the discernibility of the visualisation patterns; 7 out of the 11 participants were able to discern differences between the visualisation patterns.
Participants viewed the pre-recorded visualisation patterns of meditators and were asked to evaluate them. The majority of participants (7 out of 11) could differentiate between the patterns of meditators and non-meditators. Through user study two, it was also reported that the colour theme, the scale of the pattern and the rate of change are important factors in differentiating between the visualised patterns representing the different levels of meditative state. A colour theme was set to distinguish the brainwave with the highest amplitude, to enable users to recognise their brainwave status; the scale of the pattern (expansion or contraction) was also set to monitor status changes in meditation.
The audience can view the user's individual brainwave visualisation as the user meditates in situ; it can also be viewed after the meditation experience is over. By using EEG sensors to visualise brainwave data, the user is able to see how their meditation status changed during the session. The 'rate of change' is the time required to reach the different levels of meditation. Meditators, who are more experienced practitioners, take less time to attain a meditative state. Narcissus Brainwave allows users to distinguish between meditative and non-meditative states through a logical, aesthetic approach.
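A minimal sketch of the kind of visualisation rules just described: the dominant band chooses the colour theme, the meditation level drives the pattern's scale, and successive readings give the rate of change. The thresholds, scaling, and colour assignments here are illustrative assumptions, not the values used in Narcissus Brainwave.

```python
# illustrative colour theme per band (assumption, not the artwork's palette)
BAND_COLOURS = {"delta": "blue", "theta": "purple",
                "alpha": "green", "beta": "orange", "gamma": "red"}

def mandala_params(band_power: dict, meditation: int, prev_meditation: int):
    dominant = max(band_power, key=band_power.get)   # band with highest amplitude
    scale = 0.5 + meditation / 200.0                 # pattern expands as meditation deepens
    rate_of_change = meditation - prev_meditation    # how quickly the state shifts
    return {"colour": BAND_COLOURS[dominant], "scale": scale,
            "rate_of_change": rate_of_change}

print(mandala_params({"alpha": 0.9, "beta": 0.3, "theta": 0.5,
                      "delta": 0.2, "gamma": 0.1},
                     meditation=74, prev_meditation=40))
```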
Narcissus Brainwave uses patterns to integrate multiple dimensions of information instead of presenting graphs of brainwave data. The final visualisation rules were based on a conceptual model of the meditation stages. User testing and an iterative development process validated the discernibility of the visualisation patterns of meditators and non-meditators. Narcissus Brainwave has the potential to give practitioners a greater understanding of the effects of meditation on their brainwave activity. By viewing how their mind state changes dynamically during meditation, users can explore the effect of various meditation techniques and discover the ones that are most beneficial for their practice. For non-meditators, the tool enables them to perceive brainwave changes as soon as they start meditating. This immediate feedback can encourage and inspire people who hesitate to practice meditation because of the perceived prejudice that meditation is hard to attain. Non-meditators may be curious whether the custom-made software program is showing their meditation states properly or not; with this curiosity, they will continue to use the software and experience the benefits of mindfulness. Users who have a MindWave headset can access and download the published software easily.
Visualise your mind
Visualise your Mind (Figure 4) is a new platform to visualise brainwave data in meditation using EEG sensor technology. This project incorporated two different ways of interacting with EEG headsets. In the first interaction, users put on an EEG headset while meditating and are then able to view their brainwaves in a visualisation chart (see Figure 5, left). It is intended that the visualisation chart be used to evaluate and promote the user's meditation practice. In the second interaction, users can create abstract paintings using their brainwave data during meditation (see Figure 6). The data visualisation patterns reveal users' live brainwave data: alpha, beta, gamma, theta and delta waves, and meditation and attention levels. As a result, users could monitor their progress in meditation and create abstract paintings using data from their own brainwaves. The artworks in Visualise your Mind were controlled directly by users' brainwave data. In order to distinguish the meditative state from the users' brainwave data, the chakra colour system was adopted; chakra colours use seven different colour schemes (Figure 6). Violet was used to depict the highest meditation state because it has the strongest wavelength (Wills, 2014). Magenta, the combination of red and violet, was used to depict meditators' deepest relaxation stage (Figure 6). In order to develop Visualise your Mind, a users' needs analysis study was conducted. First, we analysed the design requirements for an application and website. Second, we collated experts' and users' suggestions to find appropriate mindfulness tutorials. The user study contained an online qualitative questionnaire and interviews with 42 participants. We used thematic analysis to group the data into themes and develop the design concept. The results from the user study found that most participants knew little about EEG brainwave data but were familiar with websites and apps. The website and mobile applications were designed to be used in conjunction with the MindWave EEG headset. Users were encouraged to use an EEG headset while they practiced the mindful-based tutorials.
From the users' needs analysis study, the website and mobile application were developed to include mindful-based tutorials, such as guided meditations and relaxation exercises, created by Patty Kikos and Tripura Yoga. Of the mindful-based tutorials, the user study revealed that participants preferred the breathing exercises and body scans.
Visualise your Mind promotes mindfulness using novel EEG sensors, input and output technology to generate brainwave data. It also promotes mindfulness through the use of its website and phone application.
Sonic cradle: A sound-based interaction for mindfulness
Sonic Cradle (Figure 7) is an interactive design invention that encourages the user to enter a state of mindfulness meditation using spatial sound. It aims to "intentionally promote the specific pattern of awareness and attention characteristic of mindfulness meditation" (Kornfield, New edition, December 1989) in an easily accessible way. The user is suspended in a hammock within a darkened chamber fitted with respiratory biofeedback sensors that initiate and guide the surrounding sound. By focusing on a real-time correlation between respiration and the sonic experience, the user is offered a tangible creative mode through which an understanding of the relationships between breath and meditation is made possible. Within this contemplative space, usually filled by external distraction, the user has the opportunity to focus on their inner sensations and how they relate to their cognitive and emotional wellbeing.
Much like the previous works we discussed, Narcissus Brainwave and Visualise your Mind, this interactive design transforms the unseen effects of meditation practice into a sensory commodity. Unlike the innovative designs using EEG devices, Sonic Cradle was initially created as an interactive medium that uses no visual stimulation. However, it is important to discuss this piece, as it assists us in answering our research questions on how we can design mindful technologies in human-computer interaction. The absence of visual stimuli and the suspension of the user are both design concepts that aim to free the user of external distraction and exteroceptive senses in order to facilitate full concentration on internal sensations.
The creator of Sonic Cradle, Jay Vidyarthi, designed a Max 6 patch. The respiration data is sent through the patch to create an individualised soundscape that is played through four Mackie MR5MK2 speakers surrounding the chair and a large subwoofer situated below it. Two respiration sensors are attached to the participant's abdomen and thorax to measure chest expansion (Thought Technology's SA9311M and ProComp2 encoder: 32 Hz) (Vidyarthi, 2012). From the speakers an evolving spatial soundscape emerges. The system enables the user to create different sounds by breathing in different ways, using their breath to shape and form the pre-recorded clips into their own personal meditation soundtrack. The clips randomly play through different speakers to keep the soundscape individualised and interesting for the user. The goal of the experience is to use the participants' prior knowledge and their psychological attributes to keep them engaged and generate an individual immersive sound experience (Vidyarthi, 2012).
Sonic Cradle's sounds are designed to guide the user back to their breath and an ongoing enquiry into the influence the breath has on the heard environment. Originally, the sounds were created by the artist; however, they were too similar, and therefore thirty sounds were crowd-sourced from different musicians and sound artists (Vidyarthi, 2012). When mental distractions arise, the user's breath will continue to influence the surrounding sound, retriggering focus on the sound-breath loop.
When users hold their breath for a full 4 seconds, they feel vibrations from the subwoofer and hear a chime; then a new sound is added to the experience. As the soundscape gets more complex, the user has to hold their breath for longer periods of time, making it harder to call upon new sounds. This creative process enables the user to explore the immersive experience with their breath (Vidyarthi, 2012).
Timing and ratio also influence the soundscape. The lengths of the inhale and the exhale manipulate the reverberation of the sound. For example, if the participant takes a slow breath, they will feel like they are in a larger room. If the participant breathes from their abdominal area more than their chest, they will experience a louder sound.
However, if the user gets overwhelmed with the number of sounds, they are able to breathe rapidly to get the sounds to disappear. This gives the user a constant sense of control over their own environment (Vidyarthi, 2012). Figure 7 describes the tangible similarities between a typical mindfulness meditation practice and the paradigm created by Sonic Cradle.
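To make the interaction rules above concrete, the following is a simplified, hypothetical sketch of the breath-to-sound mapping. The real system is a Max 6 patch, and the specific thresholds here are illustrative assumptions, not Vidyarthi's values.

```python
class SonicCradle:
    def __init__(self):
        self.layers = 0  # number of sounds currently in the soundscape

    def on_breath_hold(self, seconds: float):
        # each added layer raises the hold time needed for the next one
        required = 4.0 + self.layers * 1.0
        if seconds >= required:
            self.layers += 1
            print(f"chime + subwoofer pulse: layer {self.layers} added")

    def on_breath_rate(self, breaths_per_min: float):
        if breaths_per_min > 25:   # rapid breathing clears the soundscape
            self.layers = 0
            print("soundscape cleared")

    def reverb_size(self, inhale_s: float, exhale_s: float) -> float:
        # slower breaths feel like a larger room: scale reverb with cycle length
        return min(1.0, (inhale_s + exhale_s) / 12.0)

cradle = SonicCradle()
cradle.on_breath_hold(4.5)
print("reverb:", cradle.reverb_size(inhale_s=4.0, exhale_s=6.0))
cradle.on_breath_rate(30)
```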
Sonic Cradle was the subject of a qualitative investigation with 39 selected participants at the TEDActive conference in Palm Springs, California, in 2012. Participants were grouped into those who had no meditation experience and those who had some experience. A short interview was conducted after the experience. A subsequent systematic analysis using three independent data coders yielded 14 primary themes that emerged from the participants' use of Sonic Cradle.
Common themes for both beginners and those who had some experience included: relaxation, floating sensation, exploration of the soundscape, yearning to continue the experience, visual imagery in their mind, bodily sensations, loss of time, meditative state, association with other meditative practices, semi-conscious state, and revelations and discoveries. Interestingly, only those with some meditation experience noted an intense engagement with sound. Similarly, those in this group also noted a direct correlation between the Sonic Cradle experience and that of meditative practice. Only those in the group with no meditation experience reported personal epiphanies (Vidyarthi, 2012).
Much as Narcissus Brainwave has the potential to provide non-meditators with non-judgmental and tangible feedback regarding their meditative practice, so Sonic Cradle has the opportunity to encourage and inspire users to take up the practice of meditation.
As the creator of Sonic Cradle, Jay Vidyarthi, notes, a feeling of failure can be common in those starting out on their meditation journey. The lack of formal instruction regarding meditation while using Sonic Cradle is intentional: the continued sonification of the breath process provides a situation whereby the user is perpetually reminded of their breathing state. "The Sonic Cradle interaction paradigm aims to enable the calm refocusing of attention to proceed unencumbered as a natural response to the interaction paradigm, potentially providing an experience which parallels more advanced mindfulness practitioners" (Vidyarthi & Riecke, 2014). As we have already seen, Narcissus Brainwave and Visualise your Mind ingeniously transform the actual physical imprint of meditative data into visual data. One of the exciting questions that Sonic Cradle unveils sits at the intersection of design, psychology and the sonic experience: can meditation really help us to listen better?
The surprising result of an "intense engagement with sound" reported by users with some meditation experience suggests that more enquiry is needed. According to H. Ellie Falter, mindful practice techniques can be employed to deepen sonic understanding and enhance our connection to music (Falter, 2016). The intimacy of sound has the potential to change the way we experience time and blur the edges of our physical and mental perception. Similarly, the practice of mindfulness has been shown to have the potential to expand our notions of consciousness, enhance somatic awareness, and thus enable a greater mind-body harmony. Sonic Cradle uses interactive technology to highlight the aesthetic experience of listening in order to remind us of the timelessness of presence. Within this perpetual continuation of 'now' a narrative is revealed that enables us to consider time as a type of infinite object (Zinovieff, 2013). The sound that is experienced in Sonic Cradle can therefore be viewed as a tool that enables a direct relationship to the infinite nature of both time and consciousness. This human-computer paradigm truly is at the cutting edge of interaction design.
Future work and Conclusion
As consumer EEG technology continues to increase in reliability, two of the projects discussed in this paper will continue to evolve in an iterative design process. Visualise your Mind is currently being developed further, supported by the Emotiv headset, for a research study with 100 Junior Medical Officers (doctors) to discover whether using the visual feedback from the EEG device will help develop their meditation practice. This research project is aimed at increasing the resilience of junior medical officers. It also seeks to generate results from an eight-week compassion-focused mindfulness meditation course. If successful, this will support a widespread roll-out across New South Wales Health's junior medical workforce. Secondary benefits may accrue to patients in the form of (a) more empathic doctors and (b) physicians who have become personally familiar with mindfulness meditation and are thus better able to identify patients who may also benefit from the practice. This project will be innovative in several areas, building on the growing interest in mindfulness meditation-based techniques to enhance resilience and reduce stress. The research study will use EEGs to improve practice of, and engagement in, mindfulness in an occupational setting.
Visualise your Mind currently uses the NeuroSky MindWave headset; however, due to the limitations of the device, it is not suitable for clinical use. The headset has a sampling rate of 1 Hz and only has one electrode, positioned on the forehead. The wide error margin for the data is a major issue when testing for reliability. In a future study of Visualise your Mind, we will use the Emotiv Insight headset. The EMOTIV Insight is a 5-channel wireless EEG headset that records brainwave data. The Insight has a wider distribution of sensors surrounding the head and therefore offers greater data accuracy.
Sonic Cradle provided an immersive soundscape that assisted users to focus on their breathing rather than their thoughts and experiences. One of the most common negative comments that participants had while interacting with the soundscape was their distaste for specific sounds. Currently, there is no recorded pattern of which sounds users were irritated by (Vidyarthi & Riecke, 2014). Therefore, in future iterations, the group of artists working on the project want to incorporate EEG data from the Interaxon Muse headset. The Muse headset enables the system to know when the user is in a state of focused attention. With the new data, the artists are looking at removing distracting sounds from the soundscape and adjusting the volume depending on what type of meditative state the participant is currently experiencing.
This article has outlined three innovative interaction design and art applications that use breathing sensors and EEGs to visualise and sonify brainwaves in order to encourage mindfulness. Through the use of EEG and respiration data, we have shown how applications have the potential to promote the practice of mindfulness, especially for novice practitioners. The three examples illustrated novel ways in which interactive art can promote the practice of mindfulness through sensory installations using visual and sonic stimuli. The success of Visualise your Mind and Narcissus Brainwave has shown that wireless EEG devices do have the potential to improve and inspire mindfulness practice. We hope the three pieces inspire new ideas to develop interactive visual and soundscapes that can keep a user on a path to regular meditation sessions. We intend that this research inspires and informs the work of other artists, designers, and researchers to develop applications that can assist both novice and experienced meditators in their practice.
Cross-reactive antibodies in convalescent SARS patients' sera against the emerging novel human coronavirus EMC (2012) by both immunofluorescent and neutralizing antibody tests
Summary Objectives A severe acute respiratory syndrome (SARS)-like disease due to a novel betacoronavirus, human coronavirus EMC (HCoV-EMC), has emerged recently. HCoV-EMC is phylogenetically closely related to Tylonycteris-bat-coronavirus-HKU4 and Pipistrellus-bat-coronavirus-HKU5 in Hong Kong. We conducted a seroprevalence study on archived sera from 94 game-food animal handlers at a wildlife market, 28 SARS patients, and 152 healthy blood donors in Southern China to assess the zoonotic potential and evidence for intrusion of HCoV-EMC and related viruses into humans. Methods Anti-HCoV-EMC and anti-SARS-CoV antibodies were detected using a screening indirect immunofluorescence (IF) test and a confirmatory neutralizing antibody test. Results Two (2.1%) animal handlers had an IF antibody titer of ≥1:20 against both HCoV-EMC and SARS-CoV, with neutralizing antibody titers of <1:10. No blood donor had antibody against either virus. Surprisingly, 17/28 (60.7%) of SARS patients had significant IF antibody titers, with 7/28 (25%) having anti-HCoV-EMC neutralizing antibodies at low titers, which significantly correlated with those of HCoV-OC43. Bioinformatics analysis demonstrated a significant B-cell epitope overlapping the heptad repeat-2 region of the Spike protein. Virulence of SARS-CoV over other betacoronaviruses may boost cross-reactive neutralizing antibodies against other betacoronaviruses. Conclusions Convalescent SARS sera may contain cross-reactive antibodies against other betacoronaviruses and confound seroprevalence studies for HCoV-EMC.
Introduction
The emergence of the novel human coronavirus EMC (HCoV-EMC) in the Middle East since April 2012 has so far led to 17 cases of human infection with 11 being fatal as of 26 March 2013. 1-3 The first 2 laboratory-confirmed cases were reported by the World Health Organization (WHO) on 23 September 2012. 1 The index case was a 60-year-old man from Jeddah, the Kingdom of Saudi Arabia, who presented with severe acute community-acquired pneumonia and acute renal failure on 6 June 2012 and later succumbed on 24 June 2012 despite maximal supportive treatment. 1,4 A sputum sample obtained on admission showed cytopathic changes suggestive of virus replication in LLC-MK2 and Vero cells, and was positive for coronavirus by pan-coronavirus RT-PCR. Subsequent phylogenetic analysis of the viral genome sequences showed that the virus was a novel coronavirus with close genetic relatedness to Tylonycteris-bat-coronavirus-HKU4 (Ty-BatCoV-HKU4) and Pipistrellus-bat-coronavirus-HKU5 (Pi-BatCoV-HKU5) discovered in the lesser bamboo bat (Tylonycteris pachypus) and Japanese Pipistrelle bat (Pipistrellus abramus) of Hong Kong, China, respectively. 4-7 Closely related coronaviruses have also been found in other bat species in Europe and Ghana. 8,9 The second case was a 49-year-old man from Qatar who kept camels and sheep in his farm and had a travel history to the Kingdom of Saudi Arabia before symptom onset. 1,10 He developed severe acute community-acquired pneumonia and acute renal failure requiring extracorporeal membrane oxygenation in an intensive care unit in London. The lower respiratory tract samples were positive for coronavirus using pan-coronavirus RT-PCR. The 250 bp PCR fragments of the viral isolates in the first 2 cases showed 99.5% sequence homology with only 1 nucleotide mismatch over the regions compared. 10 Subsequently, 15 more laboratory-confirmed cases of HCoV-EMC infection were reported in the Middle East and the United Kingdom, with a total of 9 in the Kingdom of Saudi Arabia, 2 in Qatar, 2 in Jordan, 1 in the United Arab Emirates and 3 in the United Kingdom. 2,3 Most of the cases developed severe pneumonia, at least 6 cases had concomitant acute renal failure, and 11 cases died. This unusually high crude fatality rate of over 50% and the severe clinical manifestations of acute respiratory and renal failure are unique among human coronavirus infections. 11-18 The source, transmissibility and seroprevalence of HCoV-EMC are not well established at present. As with other highly pathogenic viruses which are capable of causing epidemics, such as SARS coronavirus (SARS-CoV) and avian H5N1 influenza A virus, an animal source of the virus leading to interspecies jumping to humans is possible. 7,11,19-22 This hypothesis is supported by the epidemiological link to animal exposure in some of these patients with laboratory-confirmed HCoV-EMC infection, 1,3 the close phylogenetic relatedness between HCoV-EMC and Ty-BatCoV HKU4 and Pi-BatCoV HKU5, 5,6 and the broad species tropism of HCoV-EMC in different animal cells including bat, primate, swine, civet, and rabbit. 23,24 Human-to-human transmission appears to be limited at this stage, with only 4 epidemiologically linked clusters being identified so far. The Jordanian cluster was retrospectively traced back to April 2012 with no further evidence of spread. Moreover, none of 2400 residents in the Kingdom of Saudi Arabia had serum antibody against HCoV-EMC. 4
Thus, HCoV-EMC is likely different from other human coronaviruses associated with mild respiratory tract infections, namely HCoV-OC43, HCoV-229E, HCoV-NL63 and HCoV-HKU1, which account for 5-30% of all respiratory infections, with up to 21.6% of the general population having serum antibodies. 25,26 Rather, it may be similar to SARS-CoV, which crossed species barriers from its natural bat reservoir to intermediate amplification animal hosts and humans and caused severe infection or subclinical non-pneumonic infection in about 0.5% of the general population. 12 In order to further substantiate the hypothesis of HCoV-EMC being a zoonotic agent and elicit evidence for intrusion of HCoV-EMC and its related viruses into humans, we studied the antibody titers, using immunofluorescence (IF) as the screening and neutralization as the confirmatory test, in at-risk groups working in a wildlife market in Guangzhou of Southern China who were constantly exposed to a wide range of game food animals, SARS patients who might have acquired their infection directly from wild animals, and healthy blood donors.
Materials and methods
The study was approved by the Institutional Review Board of the Hospital Authority in Hong Kong.
Subjects and sera
Archived sera obtained from 94 subjects belonging to at-risk groups working in a wildlife market in Guangzhou, 28 patients with laboratory-confirmed SARS by RT-PCR, and 152 healthy blood donors in the Hong Kong Special Administrative Region, Southern China, were retrieved from a −70 °C freezer. The at-risk groups consisted of game food animal market retailers, animal slaughterers and animal transporting personnel. All subjects were aged 18 years or above. The 94 animal handlers had a mean age of 35.4 years (range, 19-76 years), and the male-to-female ratio was 60:34. All of them had exposure to live and/or dead chickens, ducks, geese, pigeons, sparrows, seagulls, turtledoves, cranes, foxes, wild boars, sika deers, rabbits, and/or cats. Their average exposure time was 3.91 years (range, 1 month to 16 years).
Viral isolate
A clinical isolate of HCoV-EMC was kindly provided by Fouchier and Zaki et al. 4 The isolate was amplified by one additional passage in Vero cell lines to make working stocks of the virus. All experimental protocols involving the live HCoV-EMC isolate followed the standard operating procedures of the approved biosafety level-3 facility, as we previously described. 27
Preparation of antigens of human betacoronaviruses as infected cell smears
HCoV-EMC- and SARS-CoV-infected Vero, HCoV-OC43-infected BSC-1, HCoV-229E-infected MRC-5 and HCoV-NL63-infected LLC-MK2 cell smears were used for the study. Smears were prepared as we previously described. 28 Briefly, when 60%-70% of cells had early evidence of cytopathic effect (CPE), as shown by rounding up of cells under inverted microscopy, the cells were harvested by trypsinization, air dried on Teflon slides (Immuno-cell Int, Mechelen, Belgium), fixed with chilled acetone for 10 min at −20 °C, and stored at −80 °C until use.
Indirect immunofluorescent antibody test (Fig. 1)
Anti-HCoV-EMC and anti-SARS-CoV IF antibody detection was performed using indirect IF as we previously described, with slight modifications. 28 Sera were screened at a dilution of 1 in 20 on infected and non-infected control cells at 37 °C for 45 min. The cells were washed twice in PBS for 5 min each time. Anti-human IgG (INOVA Diagnostic, San Diego) was then added and the cell smears were further incubated for 45 min at 37 °C. Sera positive at a screening dilution of 1 in 20 were further titrated with serial 2-fold dilutions. A positive result was scored when fluorescent intensity equaled or was higher than that of a positive control used in our previous studies. 28-32 For HCoV-EMC antibody testing, Vero cells were infected at an MOI of 0.01 for 36-40 h before harvesting. The infected cells were then coated on 8-well Teflon slides, air dried, fixed with chilled acetone at −20 °C for 10 min, and kept at −80 °C until use. Guinea pig anti-N hyper-immune sera were prepared as positive controls for testing with each new batch of infected and non-infected cells, together with non-immune guinea pig sera as a negative control. 23 Positive and negative guinea pig control sera were included in each run of antibody testing. The IF antibody titer was taken to be the highest serum dilution giving a positive result. Anti-HCoV-OC43 IF antibody titers were further determined for sera with positive anti-HCoV-EMC IF antibody titers.
Neutralizing antibody test
All sera were inactivated at 56 °C for 30 min before the neutralizing antibody test. Starting with a serum dilution of 1 in 10, serial 2-fold dilutions of sera were prepared in 96-well microtiter plates as we have previously described. 28 Each serum dilution of 0.05 ml was mixed with 0.05 ml of 200 50% tissue culture infectious doses (TCID50) of HCoV-EMC or SARS-CoV (HK39849), and incubated at 37 °C for 1.5 h in a CO2 incubator. Then 0.1 ml of the virus-serum mixture was inoculated in duplicate wells of 96-well microtiter plates with preformed monolayers of Vero cells and further incubated at 37 °C for 3-4 days. A virus back-titration was performed to assess the actual virus titer used in each experiment. CPE was observed using an inverted microscope on days 3 and 4 post-inoculation. The neutralizing antibody titer was determined as the highest dilution of serum which completely suppressed the CPE in at least half of the infected wells. The experiment was read when the virus back-titration showed the virus dose to be 100 TCID50 as expected. Mouse anti-whole-HCoV-EMC hyper-immune sera were used as positive controls. All sera with positive neutralizing antibody titers were retested for confirmation. Anti-HCoV-OC43 neutralizing antibody titers were further determined for sera with positive HCoV-EMC IF antibody titers.
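As a small worked example of the read-out rule just described, the snippet below derives a titer from hypothetical duplicate-well CPE readings, taking the highest dilution at which CPE is suppressed in at least half of the wells.

```python
dilutions = [10, 20, 40, 80, 160]        # reciprocal serum dilutions (1:10, 1:20, ...)
cpe_suppressed = {10: [True, True], 20: [True, True],
                  40: [True, False], 80: [False, False], 160: [False, False]}

titer = None
for d in dilutions:
    wells = cpe_suppressed[d]
    if sum(wells) >= len(wells) / 2:     # CPE suppressed in at least half the wells
        titer = d                        # keep the highest qualifying dilution
print(f"neutralizing antibody titer: 1:{titer}")  # -> 1:40 for these readings
```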
Bioinformatic analysis of spike proteins
Amino acid sequences of the S proteins of HCoV-EMC, SARS-CoV, HCoV-OC43 and HCoV-HKU1 were downloaded from NCBI GenBank. Structure-based sequence alignment of the S1 and S2 domains of HCoV-EMC, SARS-CoV, HCoV-OC43 and HCoV-HKU1 was performed with the PROMALS3D server. 33 Immunogenic regions containing potential human B-cell epitopes were predicted using Epitopia. 34 The transmembrane domain preceding the cytoplasmic tail was predicted using TMHMM version 2.0. 35 Heptad repeat regions within the S2 domains were predicted using MARCOIL. 36
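As a hedged illustration of the percent-identity comparison reported in the Results, the following sketch computes identity from a global alignment with Biopython. The short fragments are toy stand-ins for the S2 domains, whose real sequences would be downloaded from GenBank as described; the row-indexing API assumes Biopython 1.80 or later.

```python
from Bio.Align import PairwiseAligner

aligner = PairwiseAligner()
aligner.mode = "global"

def percent_identity(seq_a: str, seq_b: str) -> float:
    alignment = aligner.align(seq_a, seq_b)[0]   # best-scoring global alignment
    a, b = alignment[0], alignment[1]            # aligned strings with gap characters
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# toy fragments standing in for portions of the HCoV-EMC and SARS-CoV S2 domains
print(f"{percent_identity('NKQFSNPTCLILATVPHNLTTIT', 'NTQFNNPTCLILATVPQNLTTIT'):.1f}%")
```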
Statistical analysis
Fisher's exact test was used to determine the differences in the proportions with positive antibody titers by IF and NT between animal handlers and healthy blood donors, between SARS patients and healthy blood donors, and between animal handlers and SARS patients. Computation was performed using the Predictive Analytics SoftWare (PASW) Version 18.0 for Windows. Correlation between the IF and neutralizing antibody titers against HCoV-EMC, SARS-CoV and HCoV-OC43 was assessed using IBM SPSS Statistics 19, with titers of <1:20 and <1:10 regarded as 1:10 and 1:5, respectively. A p-value of <0.05 was considered statistically significant.
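A minimal sketch of the two tests named above, using open-source equivalents of the packages listed: Fisher's exact test on seropositivity counts (here the 17/28 SARS patients versus 0/152 donors reported in the Results), and Pearson correlation on titers after the stated substitutions. Taking log2 of the titers is this sketch's own choice, reflecting the 2-fold dilution series; the titer vectors are hypothetical.

```python
import numpy as np
from scipy.stats import fisher_exact, pearsonr

# seropositive / seronegative counts: SARS patients vs. healthy blood donors
odds_ratio, p = fisher_exact([[17, 11], [0, 152]])
print(f"Fisher exact p = {p:.3g}")

emc  = np.array([10, 20, 40, 80, 160, 10, 320])   # anti-HCoV-EMC IF titers (toy)
sars = np.array([40, 80, 80, 160, 320, 20, 320])  # anti-SARS-CoV IF titers (toy)
r, p = pearsonr(np.log2(emc), np.log2(sars))      # log scale for 2-fold dilutions
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```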
Indirect IF and neutralizing antibody titers
Two of 94 (2.1%) animal handlers working at a wild game food animal market in South China had positive anti-HCoV-EMC IgG detected by indirect IF, with titers of 1:20 and 1:40 (Table 1). Case 1 was a 38-year-old man with exposure to pigeons for more than 2 years. Case 2 was a 39-year-old man with exposure to chickens, ducks, and geese for more than 3 years. Both of them also had positive anti-SARS-CoV IgG by indirect IF with a titer of 1:40, and anti-HCoV-OC43 IgG with titers ≥1:320 (Table 2). Case 2, who had adequate archived serum for testing of anti-HCoV-OC43 neutralizing antibody, had a titer of 1:80. Another 11 animal handlers had positive anti-SARS-CoV IgG by indirect IF, and 4 of them had anti-SARS-CoV neutralizing antibodies (Table 1). None of the animal handlers had anti-HCoV-EMC neutralizing antibody.
Among the 28 SARS patients, 17 (60.7%) had positive anti-HCoV-EMC IgG detected by indirect IF, with titers ranging from 1:20 to 1:320 (Table 1). Most had a titer between 1:80 and 1:160 (6/28 or 21.4% each). All 17 patients had anti-HCoV-OC43 IgG detected by indirect IF (Table 2). Surprisingly, 7 (25%) of the SARS patients also had low titers of anti-HCoV-EMC neutralizing antibody of 1:20 or less, and all 17 of them had anti-HCoV-OC43 neutralizing antibodies. Anti-SARS-CoV IF and neutralizing antibodies were found in the majority (96.4%) of the SARS patients, as expected. Most of them had high titers of 1:80 or above. Four of the 28 SARS patients had paired acute and convalescent sera available for comparison (Table 3). The anti-HCoV-EMC IF IgG titer rose from <1:20 in the acute sera to 1:40 and 1:320 in the convalescent sera in 2 of these patients, while there was no significant rise in the other two. These patients also had a 4-fold rise in IF antibody titer against another human betacoronavirus, HCoV-OC43.
None of 152 (0%) healthy blood donors had anti-HCoV-EMC or anti-SARS-CoV antibodies by indirect IF and neutralization (Table 1). There was an overall significant correlation between the indirect IF IgG titers against HCoV-EMC and SARS-CoV (Pearson correlation 0.587, p < 0.01), and between the neutralizing antibody titers against HCoV-EMC and SARS-CoV (Pearson correlation 0.422, p < 0.01). For subgroup analysis of SARS patients with positive anti-HCoV-EMC IF and/or neutralizing antibodies, the correlation was strongest between antibodies against SARS-CoV and HCoV-OC43 (Pearson correlation 0.593 and 0.605 for IF and neutralizing antibodies respectively; p < 0.01 in both cases).
Bioinformatic analysis of spike proteins
While there was little amino acid sequence identity (16.6%) between the receptor-binding domain in the S1 proteins of HCoV-EMC and SARS-CoV, their S2 proteins showed an amino acid sequence identity of 40.3%. Epitopia was used to predict immunogenic regions that might be B-cell epitopes in the S1 and S2 domains. 34 While epitopes were predicted in aligned regions of S1 from HCoV-EMC and SARS-CoV, it is unlikely that cross-neutralization by antibodies would occur in these regions as the sequence identity of the predicted epitopes between the two viruses is low (Fig. 2). Three and two immunogenic regions were predicted in the S2 domains of HCoV-EMC and SARS-CoV respectively (Fig. 3). The immunogenic regions identified in S2 of HCoV-EMC overlapped the predicted regions in S2 of SARS-CoV. Notably, the identified immunogenic regions sars-I and emc-II overlapped the heptad repeat 2 region of the S2 domain of both HCoV-EMC and SARS-CoV, which is known to harbor an epitope for broadly neutralizing antibody in the case of SARS-CoV. 37
Discussion
While looking for evidence of intrusion by the novel betacoronavirus HCoV-EMC into at-risk groups and the general population, convalescent SARS patients' sera were found to contain significant titers of antibodies against other betacoronaviruses. There was a positive correlation between the antibody titers against SARS-CoV and HCoV-EMC using both the indirect IF and neutralization antibody tests. The finding of cross-reactive IF antibodies was not that unexpected because these could be induced by cross-reactive epitopes on structural proteins such as the nucleoprotein, which is the most abundant structural protein in the coronaviruses, as we had previously reported. 38 Indeed, cross-reactive antibodies among human betacoronaviruses by IF are well known, and have made large-scale surveillance studies and epidemiologic surveys of human coronavirus infections difficult. 39 On the other hand, cross-reactive neutralizing antibodies among betacoronaviruses have rarely been reported, except between the closely related human and palm civet SARS-CoVs. 40 The significant neutralizing antibody titers against HCoV-EMC in SARS patients' sera in this study were surprising because neutralization is generally considered the most specific serological test. Our previous surveillance study showed that anti-SARS-CoV neutralizing antibody in our population was extremely low despite a high seroprevalence of anti-HCoV-OC43 and anti-HCoV-HKU1 antibodies. 12 Zaki and colleagues also failed to detect cross-reactive anti-HCoV-EMC antibodies among 2400 patients in the Kingdom of Saudi Arabia who likely also had serum anti-HCoV-OC43 and/or anti-HCoV-HKU1 antibodies. Furthermore, none of the 152 healthy blood donors in the present study had serum anti-HCoV-EMC antibodies detected by indirect IF and neutralization. Therefore we assessed the structural homologies between these betacoronaviruses for possible explanations of the observed cross-reactive neutralizing antibodies.
Of all the surface proteins, only the ectodomains of S (spike) and Orf3a can induce significant neutralizing antibody, with some augmentation from the M (matrix) and E (envelope) proteins. 41,42 Though Orf3a is absent in HCoV-EMC, we cannot completely exclude the possibility that a similar Orf3a-like protein is encoded by the accessory protein genes, but a homology search does not reveal the presence of such a protein. All betacoronaviruses use the S protein for attachment and fusion of the virion with the host cell membrane. Trimers of the S protein form the peplomers that radiate from the lipid envelope and give the virus a characteristic corona solis-like appearance under the electron microscope. The spike protein ectodomain consists of the S1 and S2 domains. The S1 domain contains the receptor binding domain and is responsible for recognition and binding to the host cell receptor. The S1 fragment between amino acids 318 and 510 is the receptor binding domain for ACE2 in the case of SARS-CoV. However, the homology of S1 between SARS-CoV and HCoV-EMC is low, with only 16.6% amino acid identity. Indeed, this region is generally more divergent than the S2 region for coronaviruses. Hence, while the S1 region induces the majority of the neutralizing antibody in convalescent sera of SARS patients, 43,44 it would be unlikely to result in antibodies with significant cross-neutralizing activity.
The S2 domain, responsible for fusion, contains the putative fusion peptide and the heptad repeats HR1 and HR2. The binding of S1 to the cellular receptor triggers conformational changes which collocate the fusion peptide, upstream of the two heptad repeats of S2, with the transmembrane domain and, finally, lead to fusion of the viral and cellular lipid envelopes. An epitope situated between amino acids 1055 and 1192, around heptad repeat 2 of the S2 subunit, is likely to have induced the cross-reactivity of neutralizing antibody against HCoV-EMC and SARS-CoV. 63 Our phylogenetic and antigenic epitope analysis suggested that this area is highly conserved among these 4 betacoronaviruses and therefore could not completely explain the presence of cross-reactive anti-HCoV-EMC neutralizing antibodies among SARS patients but not the general population.

Table 3 Titers of anti-human-coronavirus antibodies by immunofluorescence and/or neutralization in SARS patients with available paired acute and convalescent serum samples (columns: HCoV-EMC IF, HCoV-EMC NT, SARS-CoV IF, SARS-CoV NT). a Case C (convalescent) in Table 3 and Case 3 in Table 2 were the same specimens. b Case D (convalescent) in Table 3 and Case 17 in Table 2 were the same specimens. c Test was not performed due to insufficient quantity of archived sera.

Figure 2 Structure-based protein sequence alignment of the S1 region of HCoV-EMC, SARS-CoV, HCoV-OC43 and HCoV-HKU1, constructed using PROMALS3D (http://prodata.swmed.edu/promals3d/). The receptor binding domain is highlighted. Identical and similar residues are shaded in black and grey respectively. Immunogenic regions predicted by Epitopia of at least 10 residues in length are highlighted by a black line. Only 1 representative sequence from each virus is used to improve clarity of presentation.

We postulate that in addition to the structural homologies between HCoV-EMC, SARS-CoV, HCoV-OC43 and HCoV-HKU1, the different clinical manifestations and subsequent host immunological responses to these infections may account for this pattern of neutralizing antibody cross-reactivity. While SARS-CoV causes severe infection with viremia, 45 HCoV-OC43 and HCoV-HKU1 predominantly cause superficial, self-limiting mucosal infections of the upper respiratory tract. Therefore, unlike the highly virulent SARS-CoV or HCoV-EMC, which can induce a solid humoral immune response, an insufficient B-cell maturation process with failure to induce high-avidity antibodies is more likely to occur with
other betacoronavirus infections in the general population, but their neutralizing antibody titer against these less virulent betacoronaviruses such as HCoV-OC43 can be boosted by superimposed SARS-CoV or HCoV-EMC infections (Table 2). These viral, clinical and immunological differences may explain the absence of cross-reactive neutralizing antibody against both SARS-CoV and HCoV-EMC in normal blood donors, despite the fact that most of them should have been exposed to HCoV-OC43 and HCoV-HKU1 in the past. Our finding has important implications for serodiagnostic testing, treatment and the development of vaccines for the prevention of human infection caused by betacoronaviruses. The possibility of cross-reactive antibodies giving rise to false-positive results concurs with the suggestion of a recent report to use the anti-HCoV-EMC IF antibody test only in patients with a very clear epidemiological linkage. 46 Besides the possibility of wrong serodiagnosis due to cross-reactivity, this observation supports the use of antiviral peptides in the treatment of this emerging HCoV-EMC infection, as antiviral peptides targeting heptad repeat 2 have been successfully used in neutralizing SARS-CoV in cell culture. 47 Furthermore, this antigenic epitope could be an important vaccine target, though the danger of immunopathology must also be considered. The possibility of low-level neutralizing antibody leading to immune enhancement should also be considered if SARS convalescent plasma or normal intravenous immunoglobulin is used for the treatment of HCoV-EMC infection. 48

No definitive evidence of intrusion of HCoV-EMC into at-risk groups was found in the present study. Two out of 94 sera from animal handlers had indirect IF antibody against both HCoV-EMC and SARS-CoV but no specific neutralizing activity toward these 2 viruses. Though this can be due to cross-reactivity with other betacoronaviruses such as HCoV-OC43, cross-reactivity to Ty-BatCoV HKU4 and Pi-BatCoV HKU5 remains a distinct possibility, which may represent sporadic interspecies jumping in this high-risk group. Indeed, coronaviruses are found in many mammalian and avian species, 49-53 and have repeatedly crossed species barriers to cause interspecies transmission throughout history, occasionally causing major zoonotic outbreaks with disastrous consequences. 11,54-56 Phylogenetic analysis showed that the lineage A betacoronavirus HCoV-OC43 might have jumped from a bovine source into humans in the 1890s. 57 A more recent example of interspecies transmission was the jumping of the lineage B betacoronavirus SARS-CoV from bats to civets and then to humans, which caused the SARS epidemic in 2003. 11,19,58-62 Though a seroprevalence study of anti-HCoV-EMC antibody found no positivity among residents in the Kingdom of Saudi Arabia, their demographic details, particularly their history of animal exposure, were not described. 4

Figure 3 Structure-based protein sequence alignment of the S2 region of HCoV-EMC, SARS-CoV, HCoV-OC43 and HCoV-HKU1, constructed using PROMALS3D (http://prodata.swmed.edu/promals3d/). Identical and similar residues are shaded in black and grey respectively. Immunogenic regions predicted by Epitopia of at least 20 residues in length are highlighted by a black line. The heptad repeat regions are highlighted. Only 1 representative sequence from each virus is used to improve clarity of presentation.
Further studies, including seroprevalence studies with more refined serological tests, should be conducted among at-risk groups in the Middle East to confirm the zoonotic nature of this emerging human coronavirus.
There were a number of limitations in this study. First, only a relatively small number of SARS patients were tested because of the lack of archived sera. However, most of the positive anti-HCoV-EMC IgG titers in this group were high, between 1:80 and 1:160, which made the results less ambiguous. It would be interesting to test a larger group of laboratory-confirmed SARS patients with different viral strains to substantiate our observation. Second, the low seroprevalence of anti-SARS-CoV in the general population makes the possibility of incorrect serodiagnosis due to cross-reactivity less important for routine diagnostics. However, the finding is essential for the confirmation of serological surveillance studies, especially in some Southeast Asian countries and in China, where the seroprevalence of anti-SARS-CoV may not be well established, as HCoV-EMC may continue to spread and cause an epidemic in this densely populated area in the future.
Evaluation of the Wet Bulb Globe Temperature (WBGT) Index for Digital Fashion Application in Outdoor Environments
Objective: This paper presents a study to evaluate the WBGT index for assessing the effects of a wide range of outdoor weather conditions on human responses. Background: The Wet Bulb Globe Temperature (WBGT) index was first developed for the assessment of hot outdoor conditions. It is a recognised index that is used world-wide. It may be useful over a range of outdoor conditions and not just for hot climates. Method: Four group experiments, involving people performing a light stepping activity, were conducted to determine human responses to outside conditions in the U.K. They were conducted in September 2007 (autumn), December 2007 (winter), March 2008 (spring) and June 2008 (summer). Environmental measurements included WBGT, air temperature, radiant temperature (including solar load), humidity and wind speed, all measured at 1.2 m above the ground, as well as weather data measured by a standard weather station at 3 m to 4 m above the ground. Participants' physiological and subjective responses were measured. When the overall results of the four seasons are considered, WBGT provided a strong prediction of physiological responses as well as subjective responses when aural temperature, heart rate and sweat production were measured. Results: WBGT is appropriate for predicting thermal strain on a large group of ordinary people in moderate conditions. Consideration should be given to including the WBGT index in warning systems for a wide range of weather conditions. However, the WBGT overestimated the physiological responses of subjects. In addition, tenfold Borg's RPE was significantly different from the heart rate measured for the four conditions except autumn (p<0.05). Physiological and subjective responses over 60 minutes consistently showed a similar tendency in their relationships with WBGT_head and WBGT_abdomen. Conclusion: It was found that either WBGT_head or WBGT_abdomen could be measured if a measurement has to be conducted at only one height. The relationship between the WBGT values and weather station data was also investigated. There was a significant relationship between WBGT values at the position of a person and weather station data. For U.K. daytime weather conditions ranging from an average air temperature of 6 °C to 21 °C with mean radiant temperatures of up to 57 °C, the WBGT index could be used as a simple thermal index to indicate the effects of weather on people. Application: The results of the evaluation of WBGT might help to develop smart clothing for workers at industrial sites and improve the work environment in terms of workers' wellness.
Introduction
Exposure to extreme environments causes many risks for human health. Hot and humid outdoor conditions with solar radiation threaten the health of people who perform outdoor activities, especially those involving high work intensity. A thermal index provides a single value that is representative of how a number of factors in combination (such as air temperature, radiant temperature, humidity, and wind) affect a person. For a valid and sensitive index, as the combined effects of the environment on a person vary, the index value will vary. Many thermal indices have been developed over the years, but most are restricted to specific conditions (heat stress, thermal comfort, cold stress). There is a need to identify a thermal index that can be used to quantify the effects of outdoor conditions on people. If such an index can be identified, it could be used to interpret weather data in terms of likely effects on people's health and safety, comfort and productivity. It could also allow early warning of the effects of extreme weather such as heat waves and prolonged cold weather. The Wet Bulb Globe Temperature (WBGT) index was first developed for the assessment of hot outdoor conditions. It is a recognised index that is used world-wide (Kwon et al., 2015). In principle it may be useful over a range of outdoor conditions and not just for hot climates. This paper presents a study to evaluate the use of the WBGT index in outdoor conditions found in the United Kingdom.
In hot environments, a simple method based upon the wet bulb globe temperature (WBGT) index provides for monitoring and regulating heat stress (ISO 7243, 1989). This index was originally developed by Yaglou and Minard (1957) to reduce heat casualties during the outdoor training of military recruits in the United States (USA). WBGT limit values were used to indicate when military recruits could train. The use of the WBGT index (instead of only air temperature) to determine safe conditions led to a decrease in the number of heat casualties and in the time lost through moratoriums on training in the heat. The WBGT index was adopted by the American Conference of Governmental Industrial Hygienists (ACGIH) and is now accepted and established as international standard ISO 7243. This index is also convenient to use in industrial environments as it provides a trade-off between uncomplicated measurement of the thermal environment and the accuracy of the index (ISO 7243, 1989). WBGT is influenced by radiation from hot and cold surfaces and the sun, air temperature, humidity and wind speed. It is expressed using the measurements of natural wet-bulb temperature (t_nwb), globe temperature (t_g) and dry bulb temperature (t_a). The WBGT is calculated from the combination of natural wet bulb temperature, globe temperature and air temperature, either in sunlight or with no sunlight.
If people are exposed to a heterogeneous thermal environment, the WBGT index should be determined at three heights above the ground: at the head, the abdomen and the ankles (i.e. 0.1 m, 1.1 m and 1.7 m for standing persons and 0.1 m, 0.6 m and 1.1 m for seated persons). If the thermal environment is homogeneous, the WBGT index can be measured only at abdomen level. The mean value of the WBGT index for heterogeneous conditions is calculated through the following equation:

WBGT_mean = (WBGT_head + 2 × WBGT_abdomen + WBGT_ankles) / 4 (1)

The formula applies to the evaluation of the mean effect of heat on a person during a period representative of his activity. The method provided in ISO 7243 (1989) involves the measurement of the WBGT, and ISO 7243 provides WBGT reference values depending on activity levels. The value is then compared with the WBGT reference values, which indicate limits above which the environment may cause a risk to health.
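A minimal Python sketch of this weighted mean (the weighting follows ISO 7243; the function name and example values are ours):

# Weighted mean WBGT for a heterogeneous environment:
# the abdomen measurement carries double weight.

def wbgt_mean(wbgt_head, wbgt_abdomen, wbgt_ankles):
    return (wbgt_head + 2.0 * wbgt_abdomen + wbgt_ankles) / 4.0

print(wbgt_mean(24.0, 23.0, 22.0))  # -> 23.0 (degrees C)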
The WBGT index can evaluate heat stress on the human body during a certain period of activity. If the WBGT reference values are exceeded, more elaborate assessment methods may be required. The reference values apply for people wearing light clothing. Several studies have pointed out the limitations of the WBGT index. To find the single value of the WBGT index, natural wet-bulb temperature, dry bulb temperature, and globe temperature are required: different composites of these varying environmental parameters may result in the same WBGT value. This is the nature of a thermal index, however, and the index value will indicate where different combinations would have an equivalent effect. Azer and Hsu (1977) pointed out that WBGT is less sensitive at a wind speed above 1.5 m/s and that WBGT became less sensitive when air temperature and relative humidity went up. Nash (2004) noted that WBGT should consider the time period for time-weighting the average of the metabolic demands and the mean of the WBGT. Budd (2008) stated that the index underestimated heat stress in environments with high humidity or low wind speed, where sweat evaporated less effectively. Rastogi et al. (1992) suggested that the WBGT index could not predict physiological strain in a severe thermal stress environment. Parsons (2006) pointed out that interpretation of the WBGT index might be influenced by estimates of metabolic rate, which varies with activity, circumstance of measurement and individual differences such as gender, race and behavioural differences arising from culture.
The measurements of natural wet-bulb temperature (t_nwb), globe temperature (t_g), and air temperature (t_a) are needed for the evaluation of heat stress with the WBGT index. Whether the heat stress should be reduced or not can be decided by comparing the collected data with the reference values (ISO 7243, 1989). This comparison is relatively easy and simple, and the WBGT index may be used in all four seasons, including cold as well as hot thermal environments. However, the usability of WBGT often becomes a controversial issue. The measurement of WBGT relies on just the natural wet-bulb temperature and the globe temperature, so that any error of instrument or measurement may greatly influence the interpretation of WBGT. In outdoor weather, the diversity of conditions, such as cloud cover and sudden changes of weather, and the way the WBGT is handled would affect the result and the interpretation (Holmér, 2010; Kwon, 2009; Parsons, 2006).
The aim of this paper was to evaluate the WBGT index for assessing the effects of outdoor weather conditions on physiological and subjective responses of people in the U.K. A secondary aim was to consider the relationship between WBGT values measured at the level of people on the ground and weather station data gathered by a standard weather station.
Method
Four group experiments were conducted, one in each season, over a year between September 2007 and June 2008 in Loughborough, U.K. (Figure 1). The nearest Met Office station is not in Loughborough but in Sutton Bonington (latitude 52.84 N, longitude 01.25 W; approximately 8 km from Loughborough). Mean maximum air temperatures for September, December, March and June in 2008 were 17.9 °C, 6.5 °C, 10.5 °C and 19.1 °C respectively. Mean relative humidity for the four months was 87%, 90%, 75%, and 80% respectively, and mean wind speeds were 2.8 m/s, 3.3 m/s, 6.1 m/s, and 3.2 m/s respectively. The experiment for autumn (September) was conducted on the 6th of September, 2007 between 1 pm and 2 pm, and the experiment for winter (December) was conducted on the 12th of December, 2007 between 11.30 am and 2 pm. The experiment for spring (March) was conducted on the 11th of March, 2008, and the experiment for summer (June) was conducted on the 10th of June, 2008 between 1 pm and 3 pm. For each experiment, people carried out a light stepping activity positioned below a weather station. As well as weather station data gathered at 3 m to 4 m, WBGT values were measured at the level of the participants. Subjective and physiological responses were also recorded. The study was confirmed as acceptable by the Loughborough University Ethical Advisory Committee. All participants signed the informed consent form after being instructed in the experimental procedures as well as the aim of the experiment.
Participants
This study involved a total of 38 men aged 18 to 49 years; only eight participants took part in the December experiment (Table 1). All subjects were healthy and included students and professional workers.
Measurement of weather
All parameters were recorded every 15 minutes over a year. Data were collected from October 2007 until September 2008, and the data were selected on four separate days. The site of the weather station is away from buildings and roads on the Loughborough University campus, surrounded by a secure wire fence. The size of the meteorological station is 25 m × 20 m. The air temperature and relative humidity were measured using a HMP45C TEMP&RH probe (Campbell Scientific, Inc.) located at a height of three meters above the ground. The wind speed was measured using a CSAT3 three-dimensional sonic anemometer (Campbell Scientific, Inc.) located four meters above the ground. Solar radiation was measured using a CNR1 net radiometer (Campbell Scientific, Inc.) located three meters above the ground.
Environmental measurement
Environmental conditions were recorded near the subjects every minute. Air temperature and relative humidity were measured using a whirling hygrometer. Dry and wet bulb temperatures (Grant CT-U-V3-1 probe, U.K.) were measured using shielded thermistors, and black globe temperature was measured using a 0.15 m diameter globe with a thermistor (Grant type EU, U.K.) at its centre, at the same place as the dry and wet bulb temperatures, in the sunlight at the three heights of 0.2 m, 1.2 m and 1.7 m. Radiation levels were measured using a pyranometer (Kipp and Zonen CM11, Holland). Wind speed was measured using an anemometer (Brüel & Kjaer MM 0038, Denmark) and a weather station (Oregon WMR 928 NX, USA) at 1.2 m above the ground. Clothing worn varied with weather conditions: three types of ensembles were selected depending on basic air temperature as well as wind, cloud cover and sun (Table 2). The clothing insulation of each ensemble was measured before the study in a climate chamber using a thermal manikin (Victoria, Espergerde, Denmark). The thermal conditions of the climatic chamber when measuring the clothing insulation were an air temperature of 21.2 (±0.23) °C, relative humidity of 45 (±5)%, and wind speed of 0.15 (±0.05) m/s.
The wet bulb globe temperature (WBGT) in the sunlight is calculated from the following equation:

WBGT = 0.7 t_nwb + 0.2 t_g + 0.1 t_a (2)

where t_nwb = temperature of a natural wet bulb thermometer (°C), t_g = temperature of a 150 mm diameter black globe thermometer (°C), and t_a = air temperature (°C).
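A minimal Python sketch of the WBGT formulas (equation (2) above for sunlit conditions, plus the standard no-sun form from ISO 7243 for reference; the function names and example values are ours):

def wbgt_outdoor(t_nwb, t_g, t_a):
    # With solar load: 0.7 natural wet bulb + 0.2 globe + 0.1 air temperature
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a

def wbgt_indoor(t_nwb, t_g):
    # Without solar load: 0.7 natural wet bulb + 0.3 globe temperature
    return 0.7 * t_nwb + 0.3 * t_g

print(round(wbgt_outdoor(t_nwb=18.0, t_g=35.0, t_a=21.0), 1))  # -> 21.7 (degrees C)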
The weather station's instrumentation was mounted after the first group experiment during September was conducted.
Physiological measurement
Subjects wore an aural thermistor in one ear, insulated from the influence of outside conditions. Cotton wool and plastic ear plugs were used to provide good insulation. Aural temperature was measured every minute during each 60-minute exposure using thermistors connected to a data logger (Grant SQ1000, Cambridge, UK). Subjects' body weights were measured just before and after each experimental session. Each subject was weighed minimally clothed before and after the exposure using a Multi-range Digital Dynamic Scale (Mettler 1D1, Mettler Toledo, USA). The amount of sweat was found from the difference between subjects' semi-nude weights before and after the experiments. Sweat evaporated was determined by subtracting sweat trapped in clothing from the total mass loss from the body (Parsons, 2003). A heart rate monitor (Polar Electro, Kempele, Finland) was used to measure heart rate every minute. Metabolic rate was estimated and adjusted based on individuals' body weight and body surface area.
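The mass-balance arithmetic behind the sweat figures can be sketched as follows; the numbers are hypothetical, and the approximation of total body mass loss by the semi-nude weight difference is our own simplification:

# Sweat produced = semi-nude weight loss over the exposure; sweat evaporated =
# total body mass loss minus sweat trapped in clothing (after Parsons, 2003).

def sweat_rates(weight_before_kg, weight_after_kg, clothing_gain_kg, hours=1.0):
    produced_g = (weight_before_kg - weight_after_kg) * 1000.0
    evaporated_g = produced_g - clothing_gain_kg * 1000.0
    return produced_g / hours, evaporated_g / hours

produced, evaporated = sweat_rates(72.450, 72.200, 0.045)
print(round(produced, 1), round(evaporated, 1))  # -> 250.0 205.0 (g/h)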
Procedures
When participants arrived at the experimental site (the Loughborough University weather station compound), the details of the experiments were explained. Aural thermistors and Polar heart rate monitors were fitted. The weights of subjects' semi-nude and fully clothed bodies were measured. The participants exercised for an hour, performing a step test in time to a metronome set at a rate of 20 steps per minute on a 100 mm high step (ACSM, 2009; ISO 8996, 2004). Each minute, the subjects' physiological responses and the environmental parameters were measured. Subjective responses were measured every ten minutes (every five minutes for March and June). At the end of the experiment, subjects' semi-nude and clothed weights were recorded. Participants in September all started at the same time, but participants in December, March and June started at different times with approximately five-minute intervals between participants. This avoided queuing and allowed measurements of body weight just before and after each 60-minute exposure.
Data analysis
One-way ANOVA was used on physiological parameters to investigate differences between participants for September and December. Environmental conditions for December and March were slightly different among participants, so the analysis was not used for those groups. The difference between Borg's RPE and heart rate was examined using the Mann-Whitney U test. Pearson correlation coefficients were calculated to find correlations between weather parameters, the environment, physiological responses and WBGT among all 38 participants of the four groups. Spearman rho correlations were used to investigate the relationships between subjective responses and the WBGT among all 38 participants of the four groups. Sweat evaporated was correlated with average WBGT over 60 minutes. A single value of clothing insulation was applied to each 60-minute experiment. The correlation between the WBGTs and environmental conditions determined from weather station data was therefore analysed using the results for December, March and June. Regression models were produced through linear regression analyses using SPSS 12.0.
Environmental measurements
The environmental conditions measured at a height of 1.2 m for each group experiment are shown in Table 3. The standard deviation for September was the largest and that for December was the smallest. However, the averaged value of solar radiation for December was the lowest, at 123 W m⁻², and the value for June was the largest, at 876 W m⁻²; therefore, the standard deviation for December was relatively large. The range of environmental conditions was wide, and three types of ensembles were used (Table 2). Holmér (2010) also commented on the accuracy of the relationship between weather data and WBGT.
Physiological measurements
The physiological responses varied among groups. In particular, sweat production was widely diverse among groups. The mean value (SD) of sweat evaporated was 245 (31.5) g m⁻² h⁻¹ for September, 43 (5.8) g m⁻² h⁻¹ for December, 86 (15.6) g m⁻² h⁻¹ for March, and 132 (40.4) g m⁻² h⁻¹ for June (Figure 2). Therefore, evaporative sweat losses were 4.1 g m⁻² min⁻¹ for September, 0.7 g m⁻² min⁻¹ for December, 1.4 g m⁻² min⁻¹ for March, and 2.3 g m⁻² min⁻¹ for June. U.K. weather is relatively moderate over a year, and the averaged maximum air temperature at Sutton Bonington in 2008 differed by just 1.2 °C between June and September. Furthermore, there are hot days in autumn, a so-called Indian summer. The current study showed that the air temperature in September was higher than that in summer (June), which made sweat rates and aural temperatures in September higher.
Aural temperature increased as time passed, and the averaged end value (SD) of aural temperature was 37. (*clo is a unit which gives an estimate of clothing insulation on the human body; for example, 0 clo is for a nude person.)
Subjective measurements
The averaged values of subjective responses over 60 minutes are shown in Table 4. The thermal sensation for September and June was higher than 'slightly warm', but the values for December and March were lower than 'slightly warm'. Thermal comfort was 'slightly uncomfortable' for all four groups. Participants wanted to improve the thermal environment; participants in September, March and June would have liked to be slightly cooler, but participants in December wanted to be slightly warmer (Table 4). Participants' pleasantness ratings for December, March, and June were more positive than 'neither pleasant nor unpleasant'. Kwon and Choi (2012) found that sedentary Korean women could feel comfortable with 1 clo when the air temperature was 27 °C. Subjective responses in the current study, for example in September, could have been more positive than the present results. In addition, Kwon et al. (2015) showed that elderly farmers put on 0.66 clo (including shoes, hats, and accessories) on their own initiative while they worked at metabolic rates ranging from 165 W m⁻² to 185 W m⁻² at a WBGT of 26 °C, and they tended to wear more clothing, which can affect subjective responses.
The averaged value of Borg's RPE for September, December, March, and June was 10.3, 7.6, 8.8, and 8.3 respectively; participants rated the stepping test from 'extremely light' to less than 'light'. If heart rate is estimated from Borg's RPE multiplied by 10, there was a significant difference between the measured heart rate and this estimate for all conditions except September (p<0.05). The estimates for December, March, and June were lower than the measured heart rates, while tenfold Borg's RPE for September was considered the same as the actual heart rate.
Environmental conditions from weather station vs WBGTs
The WBGTs were measured at three levels above the ground: WBGT_head, WBGT_abdomen, and WBGT_ankles. Environmental conditions from the weather station and the WBGTs are shown in Table 3.
All environmental conditions from the weather station showed significant correlations with the WBGTs (Figure 3, p<0.05). Air temperature showed the strongest correlation with the WBGTs, while solar radiation had the second highest correlation (Table 5, Figure 3). Wind speed showed the lowest correlation with the WBGTs. WBGT_abdomen had the highest correlation with solar radiation and wind speed (Table 5, p<0.05). However, WBGT showed a reasonably strong correlation with all four environmental parameters.
A significant relationship was found between WBGT and weather conditions. A regression function using air temperature (T_a) could be derived, explaining 97% of the variance, for the estimation of WBGT (°C): WBGT = 0.349 + 1.003 T_a (p<0.05). WBGT could therefore be predicted by the combination of air temperature and solar radiation with 98% of the variance explained (Table 6; p < .000, r² = .98, adjusted r² = .98).
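Using the reported coefficients, the single-predictor model can be applied directly; this sketch simply evaluates the published regression and is not a refit of the data:

# Predicting WBGT from air temperature alone, using the coefficients above
# (97% of variance explained in this data set).

def predict_wbgt(t_air_c):
    return 0.349 + 1.003 * t_air_c

print(round(predict_wbgt(15.0), 1))  # -> 15.4 degrees C, as noted with Table 5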
Physiological and subjective responses vs WBGTs
The WBGTs for December showed a strong correlation with all physiological and subjective responses (Table 7). June also had a significant relationship with human responses. The WBGTs for March did not show any correlation with physiological and subjective responses except heart rate (p<0.05). The WBGTs for September had a significant relationship only with aural temperature when the groups were combined. Azer and Hsu (1977) suggested that WBGT was less sensitive at wind speeds of more than 1.5 m/s, and it seemed that WBGT became less sensitive when air temperature and relative humidity went up. This is possibly why March did not show a relationship between WBGT and human responses.
Table 2. Three types of ensembles. +clo is a unit which gives an estimate of clothing insulation on the human body. Although actual values are provided to two decimal places, the repeatability of measurements would suggest an accuracy of one decimal place.
Table 3. Environmental conditions at 1.2 m and data from the weather station at 3 m during exposure times [Mean (SD)]
Table 5. Pearson correlation coefficients between WBGTs and mean environmental conditions from the weather station at 3 m height for Groups B, C and D

If only air temperature is known, a WBGT of 15.4 °C can be anticipated from the equation when the air temperature is 15 °C. If environmental conditions from a weather forecast are known and a measurement cannot be carried out, Table 6 can be used to estimate the WBGT. The value of the WBGT can be applied to the reference values (ISO 7243, 1989) of the WBGT and the result can be interpreted.
Table 6. Regression coefficients for air temperature and solar radiation from a weather station
Table 7. Correlation coefficients between WBGTs and physiological and subjective responses
Medication Adherence among Patients with Diabetes Mellitus and Its Related Factors—A Real-World Pilot Study in Bulgaria
Background and Objectives: The objective is to evaluate the medication adherence (MA) level and the relevant determinants of MA among patients with type 2 diabetes mellitus (T2DM) monitored in ambulatory settings by general practitioners. Materials and Methods: A cross-sectional study was conducted among patients with T2DM monitored in a GP practice in Sofia, Bulgaria (September–December 2022). All patients were interviewed according to a predesigned questionnaire after granting informed consent. MA level was evaluated through the Morisky–Green four-item questionnaire, and health-related quality of life was evaluated by EQ-5D-5L and VAS (visual analogue scale). Data were aggregated and statistically evaluated. Results: The total number of observed patients was 93. Around 48.4% of the patients were female, and 90.3% were between 50 and 80 years of age. Multimorbidity was identified among 70% (n = 65) of the respondents. High and medium levels of MA were revealed in 64.51% and 33.3% of respondents, respectively. Patients treated with insulin secretagogues were the most adherent to therapy (83.3%) in comparison with the other treatment groups. The onset of the disease, professional status, age, gender, number of therapies, and quality of life did not affect the level of MA (p > 0.05). VAS scores among nonsmokers (VAS = 63.16 ± 20.45 vs. 72.77 ± 14.3) and non-consumers of alcohol (VAS = 63.91 ± 19.34 vs. VAS = 72.54 ± 15.98) were statistically significantly lower (p < 0.05). A significant related factor for MA was years lived with diabetes (OR = 3.039, 95% CI 1.1436–8.0759, p = 0.0258): the longer the disease duration, the more the odds for a high MA level increased. Conclusions: The number of nonadherent diabetic patients in Bulgaria is low, which might be evidence of patients' concern about their own health and understanding of the importance of prescribed therapy. A further comprehensive study with additional patients is required to confirm the results and investigate the predictive factors for a high level of MA.
Introduction
Diabetes mellitus is a metabolic disorder characterized by hyperglycemia due to impaired insulin secretion, impaired insulin action, or both. Early detection of diabetes and control of risk factors, including control of blood sugar, blood pressure, duration of the disease, and blood lipids, are of utmost importance for the development and severity of the disease [1]. Diabetes is a chronic disease that requires long-term care and patient education for the prevention of complications [2]. The number of Bulgarian citizens with diabetes mellitus on the outpatient lists for 2018 was 503,753. The number of newly diagnosed patients with diabetes mellitus for the analyzed period was 55,172. The number of deceased patients with diabetes mellitus for the analyzed period was 26,813; 18,641 of them had a physician visit in 2018 [3]. The economic burden of the disease is forecast to grow from USD 1.3 trillion in 2015 to USD 2.1 trillion by 2030 [4].
According to data provided in the literature, 4.4 million people worldwide are affected by the disease [5].
Adherence has been defined as "the extent to which a person's behavior (in terms of taking medications, following diets, or executing other lifestyle changes) coincides with the agreed recommendations from a health care provider" [6]. It depends upon a variety of factors such as patients' characteristics, the behavior of the patient and their family members, interactions with healthcare providers, and the healthcare system itself [7]. Medication nonadherence is defined as "taking less than 80% of prescribed doses". However, nonadherence could also involve taking too many doses. Nonadherence is associated with a high risk of poor health status, mortality, and complications [8].
The key to achieving the desired therapeutic outcomes is a high level of adherence. Diabetes has serious long-term consequences for health and requires early and timely medical care and preventive measures. Enhanced glycemic control, which can be achieved through a high level of medication adherence, can significantly reduce microangiopathic and macroangiopathic complications [9]. Studies have shown that patients treated with fixed-dose combinations have lower variations in their number of prescriptions. Therefore, these patients show greater improvement in their medication adherence and increased persistence in comparison with patients treated with loose-dose combinations. Results show that a fixed-dose combination for the treatment of type 2 diabetes is related to improvement in medication adherence levels in comparison with a loose-dose combination [10,11].
A study has investigated factors influencing the level of adherence among patients with diabetes [11]. Some of the factors that decrease medication adherence are beliefs and lifestyle modifications. These have a larger impact on diabetes control than medications. Reasons for medication nonadherence could be medication costs and poor communication with providers. The belief that diabetes medication is important to maintain a good health status, and the presence of family support and providers are some of the main factors for medication adherence improvement [11]. Measures to improve patient satisfaction and medication adherence levels, such as simplifying the prescribing regimen, educational programs, improved communication between patients and healthcare professionals, reminders, and lower treatment costs, must be multifactorial [8].
The main goal of the current survey is to evaluate the medication adherence (MA) level and the relevant determinants of MA among individuals with type 2 diabetes mellitus (T2DM) followed up by general practitioners. The other crucial aim was to measure the patients' quality of life and its correlation with MA level.
Design, Setting, Patient Recruitment, and Sample Size
We conducted a cross-sectional, 4-month study among 93 (7.13% of the total number of patients) of 107 patients with diabetes mellitus followed up by a general practitioner with 1304 total patients in Sofia city. All adult patients with type 2 diabetes over 18 years of age who were on pharmacotherapy (ICD codes E10, E11) and granted informed consent were included. The study period was between September and December 2022. Patients who were able to speak and understand Bulgarian language, had type 2 diabetes diagnosed at least one year before the study, visited GP office on site, and were caregiver-independent were eligible for the current study. Exclusion criteria included refusal to grant informed consent and persistence of any condition that could interfere with the patients' ability to complete the questionnaire.
Characteristics and Cost of Therapy
Clinical data for each patient were collected on the basis of a specifically designed questionnaire. Demographic characteristics (age, gender), medical history of diseases, disease-specific data (type of diabetes mellitus, duration of the disease), pharmacotherapy, concomitant diseases, annual hospitalization rate, consumed medicines, risk factors, level of medication adherence, assessment of patients' lifestyle, and days off work due to diabetes were collected. Healthcare costs were calculated for the 4-month period (September-December 2022) using a micro-costing approach: direct costs (for diabetes treatment, hospitalization, and medications per year) and indirect costs (time off from work in days). Reimbursement levels were recorded for medicinal products for patients with diabetes that are part of the Positive Drug List (PDL). Each product was recorded using the unit price covered by the NHIF (National Health Insurance Fund), a single-payer healthcare system administering the compulsory health insurance in the country (the rate of coverage is 8% for 2023, including an employee contribution of 3.2%) [12]. The PDL consists only of prescription medicines paid for by the NHIF, the Ministry of Health, and the hospitals' budgets [12].
The indirect costs were calculated using the number of days spent in hospital by every patient (extracted from the questionnaire completed by each patient) and gross domestic product per capita for 2022 (source of data: National Statistical Institute, Bulgaria [13]). This type of indirect costs was calculated on the basis of days out of work (absenteeism) data, applying the human capital approach formula.
The number of patients was multiplied by the number of days out of work for the observed period and by the average salary per capita per day. Indirect costs were represented as lost productivity using the following formula for the human capital approach:

Indirect costs (lost productivity) = number of patients × days out of work during the observed period × average salary per capita per day

The collected data, such as demographic and clinical data, quality of life and economic outcomes, were considered in order to identify independent variables of nonadherence among the observed patients with diabetes.
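A minimal sketch of this calculation (all input values are illustrative, not the study's actual figures):

# Human capital approach: lost productivity =
# patients x days out of work x average daily income.

def lost_productivity(n_patients, days_out_of_work, daily_income_bgn):
    return n_patients * days_out_of_work * daily_income_bgn

# Illustrative numbers only:
print(lost_productivity(n_patients=12, days_out_of_work=5, daily_income_bgn=76.60))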
Evaluation of Medication Adherence and Quality of Life
We used the definition for MA provided by ABC project (Ascertaining Barriers to Compliance: policies for safe, effective and cost-effective use of medicines in Europe). "Adherence to medications" is defined as the process by which patients take the medicines as they were prescribed and recommended by their healthcare professional. It consists of three crucial phases: "Initiation", "Implementation", and "Discontinuation".
The Morisky-Green 4-item questionnaire (medication adherence (MA) questionnaire) was applied in order to define the MA level. It consists of 4 questions, each with two possible options ("yes" or "no"), so the total score ranges between 0 and 4. The levels of MA are as follows: high (0 points), medium (1-2 points), and low (3-4 points) MA [14]. The questionnaire was administered to those participants with diabetes willing to be involved in the study.
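The scoring rule can be stated compactly in code; this is a minimal sketch of the classification described above, with our own function name and input convention:

# Morisky-Green scoring: four yes/no items, one point per "yes";
# 0 = high, 1-2 = medium, 3-4 = low adherence.

def morisky_green_level(answers_yes):
    score = sum(1 for a in answers_yes if a)   # answers_yes: four booleans
    if score == 0:
        return "high"
    return "medium" if score <= 2 else "low"

print(morisky_green_level([False, True, False, False]))  # -> "medium"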
Quality of life (QoL) was assessed using the EQ-5D-5L (5-level EuroQol 5D version). This questionnaire is a generic, preference-based QoL measure providing a utility value of up to 1 (equal to perfect health), with 0 equal to death. The questionnaire consists of 5 questions, each related to a different aspect of quality of life (mobility, self-care, depression/anxiety, usual activities, pain/discomfort). Each dimension has 5 options for answering: no problems, slight, moderate, severe, and extreme problems. The answers provided are used to calculate a single index "utility" score through a specific algorithm. The UK value set and scoring algorithm were used to calculate the scores for each patient because a Bulgarian scoring algorithm is not available. The EQ-5D-5L includes a visual analogue scale which provides a single rating of self-perceived health between 0 and 100, representing "the worst health you can imagine" and "the best health you can imagine", respectively [15].
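For illustration only, the general shape of an EQ-5D-5L index calculation is sketched below. The decrement values are made-up placeholders, not the UK value set used in the study, and real value sets are not simply linear in the level:

# Deriving a single EQ-5D-5L utility from the five dimension levels by
# subtracting per-level decrements from 1.0. PLACEHOLDER values only.

PLACEHOLDER_DECREMENTS = {  # dimension -> decrement per level above 1 (made up)
    "mobility": 0.05, "self_care": 0.05, "usual_activities": 0.04,
    "pain_discomfort": 0.06, "anxiety_depression": 0.06,
}

def utility(levels):
    """levels: dict of dimension -> level 1..5 (1 = no problems)."""
    u = 1.0
    for dim, level in levels.items():
        u -= PLACEHOLDER_DECREMENTS[dim] * (level - 1)
    return u

print(round(utility({"mobility": 2, "self_care": 1, "usual_activities": 3,
                     "pain_discomfort": 2, "anxiety_depression": 1}), 3))  # -> 0.81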
The patients were asked to answer these questionnaires only once during the observation period: at the moment when they were invited for monitoring in the GP's office.
Statistics
Appropriate statistical methods (descriptive statistics and comparison of proportions) were used for the description and assessment of correlations among the collected data. Through descriptive statistics, we systematized the patients' demographic characteristics, including sex, age, place of birth, time of diagnosis, pharmacotherapy, quality of life, level of MA, etc. Comparisons between the collected patient data were performed with MedCalc statistical software version 16.4.1. The factors influencing medication adherence levels, such as clinical and demographic data, quality of life, and cost variables, were evaluated. Logistic regression considering the influencing factors was estimated (p < 0.05 denotes statistical significance). The odds ratio (OR) of possessing a high versus a low/medium level of MA was calculated using logistic regression for several patient characteristics (gender, number of medicines prescribed (polypharmacy was defined as taking 5 or more medicines), multimorbidity (3 or more diseases), years lived with diabetes, and age).
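The univariate odds ratios reported below can be reproduced from 2×2 counts; a logistic regression with a single binary predictor yields the same estimate. A minimal sketch using the Woolf (log-OR) confidence interval, with hypothetical counts:

# Odds ratio with 95% CI from a 2x2 table (Woolf method). Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed with/without outcome; c,d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(30, 10, 28, 25))  # -> (OR, lower 95% CI, upper 95% CI)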
Ethics
The study included all patients registered with the GP, with diabetes type 2, who visited the GP office for periodic health examinations and agreed to be involved in the study. These patients provided signed written informed consent, authorizing the investigators to use their anonymized (pseudonymized) data only for the purposes of the current study. The study was carried out in accordance with the requirements of the Declaration of Helsinki.
Demographic Data and Risk Factors
The patients with diabetes admitted for periodic examination at the GP office (n = 107) during September-December 2022 were invited to answer the questions. Of the total, 93 patients agreed to participate in the study. The number of men and women participants was almost equal (45 men vs. 48 women). Most participants (90.33%) were in the age group over 50. Most participants were of retirement age (58.06%) and suffered from type 2 diabetes (98.92%). In 38.71% of the patients, the disease had been diagnosed within the last 5 years (Table 1). Of the 93 participants, 30 reported hospitalizations in the past year. These hospitalized patients were exposed to risk factors such as smoking (15 of the participants), use of alcoholic beverages (11 of the participants), and inability to maintain blood pressure within normal limits (10 of the participants).
Most of the observed patients were diagnosed with non-insulin-dependent diabetes mellitus without complications (n = 73), followed by non-insulin-dependent diabetes mellitus with neurological complications (n = 16), and non-insulin-dependent diabetes mellitus with peripheral vascular complications (n = 4).

Pharmacotherapy for Concomitant Diseases

The definition of multimorbidity involves two or more medical diseases/conditions, each lasting over one year. Of the total number of participants, 72% were diagnosed with two or more additional chronic conditions and 28% were diagnosed with one additional chronic condition. Such patients were more likely to die at an earlier age and to be hospitalized more often. Polypharmacy was defined as the routine use of five or more medications. Of the total, 53% of patients were using at least five medications, 40% of patients were using monotherapy, and 60% were using combined therapy.
The most common concomitant disease among the studied patient group was arterial hypertension (67.89%). Agents affecting the sympathoadrenal system were the most often prescribed antihypertensive therapy (n = 66), followed by combination products (n = 44) ( Table 2).
Some of the patients were diagnosed with angina pectoris (n = 30), and all of these respondents used statins. The most common diabetes complication among the observed patients was diabetic polyneuropathy (n = 14), for which the preferred prescribed therapy included tioctic acid and uridine monophosphate. The total out-of-pocket payment was lower for patients on antidiabetic therapy in comparison with the copayment for concomitant therapy (BGN 1513.65 vs. BGN 3371.85) (Figure 2).
The median costs per patient per month paid by the fund were higher for antidiabetic therapy (BGN 12.92) in comparison with therapy for concomitant diseases (BGN 4.61) (Figure 3). Out-of-pocket payment per patient was lower for antidiabetic therapy in contrast to concomitant therapy (BGN 7.51 vs. BGN 28.89).
We estimate that 12 patients will experience loss of productivity (BGN 4595.80 per patient).
Total Cost
The total cost for a month (public funds and patients) for all observed patients amounted to BGN 13,747.48. The costs are shown in Figure 4.
Quality of Life

The same number of patients completed both questionnaires (for QoL and for MA). On the VAS visual analogue scale, patients provided a score ranging between 25 and 96 out of 100 units. When evaluated by EQ-5D, the values ranged between −0.157 and 1.0, with the highest possible score being 1. EQ-5D scores among nonsmokers (EQ-5D = 0.51 ± 0.32 vs. EQ-5D = 0.62 ± 0.31) (p > 0.05) and non-consumers of alcohol (EQ-5D = 0.49 ± 0.32 vs. EQ-5D = 0.648 ± 0.31) (p < 0.05) were lower than those for smokers and consumers of alcohol, respectively. Similar results were observed for VAS scores (63.16 ± 20.45 vs. 72.77 ± 14.3 for nonsmokers vs. smokers, and 63.91 ± 19.34 vs. 72.54 ± 15.98 for non-consumers vs. consumers of alcohol).
Level of Medication Adherence and Factors Influencing Medication Adherence
The response rate to the MA questionnaire was 86.9% (n = 93). High and medium levels of MA were revealed in 64.51% and 33.3% of respondents, respectively. Duration of the disease, professional status, age, gender, number of therapies, and quality of life did not affect the level of MA (p > 0.05). Patients treated with monotherapy were more likely to comply with their treatment and demonstrated a higher level of adherence compared to those on dual therapy (36 vs. 21, p = 0.89) and those on triple therapy (36 vs. 3, p = 0.22).
When examining the factor "number of comorbidities" versus medication adherence, it was observed that most of the patients were in the "high adherent" group regardless of the number of comorbidities. Among the group of highly adherent patients, the ratio of high adherents with three comorbidities was the highest. Comparison of the male and female groups did not show a significant difference in adherence levels (28 vs. 32, p = 0.78). In the female group, there was a higher propensity for high adherence compared to medium and low adherence. Comparing the working and retired groups, higher adherence was observed in the retired group (39 vs. 21, p = 0.069).
No statistically significant difference between the costs paid by patients with a medium and a high level of adherence was observed (BGN 32.89 ± 23.85 vs. BGN 37.31 ± 25.56, p = 0.43). Patients treated with insulin secretagogues were the most adherent to therapy (83.3%) in comparison with the other treatment groups. In total, 17 out of 30 (56.67%) participants treated with metformin, and half of those on a combination therapy (metformin + insulin secretagogues), were high-adherent (Table 4). A high level of MA was found among more women than men (OR = 1.7582, p = 0.2) as well as among those using polypharmacy (OR = 1.32, p = 0.5267); however, these differences were not statistically significant. Patients with more than 3 diseases and those over 60 years of age were more adherent to the prescribed therapy (OR = 1.4416, p = 0.4414 and OR = 1.2647, p = 0.6153, respectively) (Table 5). For this reason, there is no statistical evidence to claim that the MA level differs by gender, age group, number of prescribed medicines, or number of diagnosed diseases. A significant related factor for MA was years lived with diabetes (OR = 3.039, 95% CI 1.1436-8.0759, p = 0.0258). The longer the disease duration, the more the odds for a high MA level increased (Table 5).
Discussion
Diabetes is a chronic disorder which should be controlled by a diet, exercise, and pharmacological therapies to achieve glycemic control and prevent complications. Medication adherence is of exceptional importance in influencing diabetes and a positive outcome for the patient. Noncompliance with the therapeutic regime is responsible for complications and increased mortality. Higher adherence to therapy may be observed relative to disease duration. As the duration of illness increases, the proportion of higher adherents increases. Various studies have shown that nonadherence leads to increased direct and indirect costs [16].
Many factors could lead to lack of adherence: misperception of treatment benefits, a complex treatment scheme, and adverse events [17]. Focusing on adherence levels, it is critical to avoid future complications and achieve desired outcomes at the lowest possible cost. Patient adherence could be improved by various approaches such as patient involvement, active collaboration between healthcare professionals and patients, etc. [18]. Effective patient care could be achieved through education, supervision, and simplified therapeutic regimens. A reduction in self-control on the part of patients is necessary. Clinicians need to improve their prescribing and patient counseling to increase adherence [19]. Patients with lower adherence had more frequent hospitalizations and longer hospital stays than those with higher adherence [20].
Research on the level of medication adherence and related determinants among patients with diabetes is crucial. The results from the current study showed that the disease affects men and women almost equally (48 vs. 45), with 98.92% of them suffering from type 2 diabetes. The majority of patients were over 50 years old (90.33%). A greater proportion of patients were on biguanide therapy for diabetes (58 vs. 35) and were using more than 1 diabetes medication. Regarding accompanying diseases, the most common was hypertension (67.89%). No significant associations between adherence and sociodemographic characteristics such as sex, age, and living status were identified; data from other studies also showed a lack of association between these indicators and medication adherence. However, the current study found that a longer disease duration is related to a higher level of MA (OR = 3.039, p = 0.0258).
There are controversial findings about the duration of diabetes and the level of adherence. Gelaw et al. [21] revealed a statistically significant correlation between these two factors, with 82.07% of observed patients with a duration of diabetes ≤ 5 years being more likely to adhere. Other studies [22,23] revealed that patients treated for diabetes for more than 5 years were high-adherent, which is similar to our findings. This could be explained by the fact that these patients are better educated about their disease and possible complications, are more aware of their own condition, have longer and probably stronger relationships with their healthcare providers, may have a better understanding of their medicines, and are more motivated to use the medications as prescribed by their physician.
Other studies found that male patients tended to consume alcohol and smoke more than female patients. This may be because social interactions between men are more likely to involve tobacco and alcohol, and because men are more likely to perceive smoking and alcohol consumption as desirable male behaviors [24]. Alcohol use is a barrier to the management of medication adherence, and excessive alcohol consumption negatively impacts diabetes self-care and the course of diabetes [25]. However, our data do not provide any evidence of a correlation between the level of adherence and alcohol consumption, probably due to the small sample size. Only a correlation between QoL scores and alcohol consumption was revealed, showing that smokers and patients drinking alcohol had better QoL scores than nonsmokers and those who did not consume alcohol. Our results contrast with other studies finding that the probability of higher quality of life is lower among smokers in comparison with nonsmokers [26]. Other studies concluded that people with low QoL and depression have higher odds of starting smoking and lower odds of stopping [27]. The conflicting results indicate the necessity of conducting a further, wider study among Bulgarian patients with diabetes.
Limitations of our study include the lack of analysis of the influence of factors such as levels of glycosylated hemoglobin (HbA1c) and body mass index; however, various studies provide evidence of the influence of such factors [28]. Another important limitation is the limited number of patients observed. In the study, patients already diagnosed with diabetes from only one GP practice were examined. Given that there may be other undiagnosed diabetics, it would be beneficial to examine this group as well and conduct a comparison between the two groups, since many studies show a high number of patients with undiagnosed diabetes. We also observed that comorbidities play a role in patient adherence rates. Among the observed cohort of diabetic patients, the proportion of high-adherent patients diagnosed with several concomitant diseases exceeded the proportion of low- and middle-adherent patients. Other studies also found that medication adherence was higher among patients with concomitant diseases than among those without, probably due to increased concern about worsening of their condition [29,30]. The odds of medication adherence among patients with chronic conditions taking four medicines are higher than among those taking one medicine [30]. Indicators influencing adherence include the high number of drugs that patients take, complex therapeutic regimens, and a large number of accompanying diseases [31].
Few patients in the studied GP practice had weak adherence: only 2% of the observed cohort. Patients diagnosed with more comorbidities have better medication adherence, which could be explained by stricter follow-up of their condition and patients' awareness of their own health. Factors related to physicians' and patients' behavior and attitudes towards medication adherence that influence the high level of medication adherence should be investigated in further studies. The study raises questions about medication adherence in patients with diabetes and their treatment, and strongly emphasizes the need for better documentation for patients with diabetes. There are not many similar studies in Bulgaria, so this study may be a stepping stone for larger studies analyzing the influence and specifics of GP practice on monitoring and improving MA among diabetic patients in ambulatory care.
Conclusions
The number of nonadherent diabetic ambulatory patients observed in GP practices in Bulgaria is low, which might reflect patients' concern about their own health and their understanding of the importance of the prescribed therapy. This hypothesis should be further investigated; thus, we are planning a subsequent study. A comprehensive study including more patients is needed to confirm the results and to investigate the predictors of a high level of MA.
Big Data Technology for Resilient Failure Management in Production Systems
Due to growing complexity within value chains, the susceptibility to failures in production processes increases. The research project BigPro explores the applicability of Big Data to realize pro-active failure management in production systems. The BigPro platform complements structured production data with unstructured human data to improve failure management. In a novel approach, the aggregated data is analyzed for reoccurring patterns that indicate possible failures of the production system, known from historic failure events. These patterns are linked to failures and respective countermeasures and documented in a catalog. The project results are validated in three industrial use cases.
Introduction
The amount of data generated in production companies is continually growing. One reason for this development is the advancing integration of system control and measurement utilities within production, due to new cost-efficient, high-performance information technologies. These allow for an intelligent connection of different production system units and, in general, an increased interconnectedness of production systems as a whole. The idea of interconnected machines and overall production integration is labelled "Industry 4.0" in Germany. Industry 4.0 aims at the systematic network integration of machines to make efficient use of a company's available information resources [1]. As part of this development, production data and the information generated from it have obtained increasing value for a company. The following approach illustrates a new strategy for how big amounts of data can be systematically used for a failure management system in production.
In a world with complex production procedures and globally operating corporate groups, an efficient failure management system can be a significant competitive advantage. With downtime costs averaging as high as $22,000 per minute, failures should be avoided or at least detected as soon as possible [2].
The research project BigPro addresses this issue by creating a Big Data-driven, pro-active failure management system capable of processing various data from the production environment. Within the platform, the generated production data will be analyzed for data patterns that indicate possible failures in the production system.
Literature Review
Big Data
In 2001, the Meta Group (Gartner) published a report about future data management, proposing three dimensions: Variety, Velocity, and Volume [3]. The term Big Data had not yet been coined, but the classification of data into the three V's prevailed and was supplemented in 2013 by the dimension Veracity in an IBM study [4].
The dimension Volume is still the most common perception of Big Data and describes the amount of data that is generated and processed, at times comprising petabytes of data. Velocity describes the speed at which the data is generated and processed, with special emphasis on the increasing significance of real-time data transmission. The fact that the majority of data is unstructured or semi-structured is captured by the dimension Variety. The newly introduced dimension Veracity covers the uncertain quality of data and of the outcomes of data analyses, taking into account that data is partially imprecise, nuanced, and may be redundant or incomplete [5].
Big Data introduces new capabilities for data storage, processing, and analysis. With an increasing number of data sources, the data available in companies exceeds their processing capabilities. This holds true not only for the data volume, but also for its variety. With roughly 80% of the data being unstructured or semi-structured, the ability to consider all kinds of data for analytical tasks, enabled by Big Data technology, is of great importance for a company's success. Processing data in real time is another important aspect that makes Big Data technology suitable for failure management systems, since a short reaction time to identify failures is of crucial economic importance for production companies [6].
Complex online optimization and Complex Event Processing
The term complex online optimization summarizes hard-to-solve optimization problems with high response time requirements that involve different decision makers and project phases [7]. These challenges arise especially when a failure occurs or is suspected and the production system needs to be stabilized. In most classic failure management approaches, production managers try to cushion failures by including buffers within the production plan. However, there are new approaches which introduce a dynamic component to adapt production plans to occurring failures, e.g., simulation-based rescheduling. Most of these approaches concentrate on a particular machine, ignoring the succeeding production steps and the changes that the adjustment of the production plan entails for the following machines. The BigPro approach includes different kinds of data from several decision-making levels to create a comprehensive failure management in production. This approach includes not only the already mentioned rescheduling concept, but also approaches to event-based failure identification and prevention activities, which are part of an automated data analysis. Complex event processing (CEP) describes the direct tracking, processing, and analyzing of data streams in near real time. The aim of complex event processing is to gain insight into data patterns and identify meaningful business events within a complex data context [8,9]. The advantage of complex event processing is that event streams can be processed directly on the data stream. This technology shows great potential for use in an intelligent and agile production, where great amounts of data from different sources such as sensor-data streams, service data, and external data need to be analyzed on the fly. In BigPro, this technology will be used to analyze failure patterns and initialize preventive actions; a minimal illustration of the idea follows below. Here, not only current but also past event patterns are considered to create a larger information basis and make the forecasting system more reliable and resilient.
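The following sketch illustrates, in greatly simplified form, how a CEP-style detector might match a known failure pattern against a sliding window of mixed machine and human events. All names (event sources, signals, window length) are hypothetical and do not describe the actual BigPro implementation.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # e.g. "sensor" or "worker_note" (hypothetical)
    signal: str       # e.g. "temp_high", "noise_reported" (hypothetical)
    timestamp: float  # seconds

class PatternDetector:
    """Raises a match when all signals of a known failure pattern
    occur within a sliding time window over the event stream."""

    def __init__(self, pattern, window_s):
        self.pattern = set(pattern)
        self.window_s = window_s
        self.buffer = deque()

    def feed(self, event):
        self.buffer.append(event)
        # Evict events that have fallen out of the time window.
        while self.buffer and event.timestamp - self.buffer[0].timestamp > self.window_s:
            self.buffer.popleft()
        seen = {e.signal for e in self.buffer}
        return self.pattern <= seen  # True -> known pattern matched

detector = PatternDetector(pattern={"temp_high", "noise_reported"}, window_s=60.0)
print(detector.feed(Event("sensor", "temp_high", 0.0)))             # False
print(detector.feed(Event("worker_note", "noise_reported", 20.0)))  # True
```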
FMEA incident management
The failure mode and effects analysis (FMEA) is an established systematic technique used to identify and analyze failures and failure types. The FMEA enables the detection of failure possibilities and weak points within a process and identifies proactive measures to prevent these failures [10]. Furthermore, FMEA optimizes existing processes and can even be used to bundle all information regarding past detected failures and their connections for further use. The FMEA method is therefore a suitable tool to define failure groups as part of the reactive failure management in BigPro; a small example of the classic FMEA prioritization is given below.
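Classic FMEA prioritizes failure modes via the Risk Priority Number (RPN), the product of severity, occurrence, and detection ratings. A minimal sketch follows; the ratings in the example are made up.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: each factor is typically rated 1..10;
    a higher RPN means the failure mode deserves higher priority."""
    return severity * occurrence * detection

# Hypothetical ratings for two failure modes:
print(rpn(severity=8, occurrence=3, detection=4))  # 96
print(rpn(severity=5, occurrence=6, detection=2))  # 60
```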
Mood tracking and Sentiment Analysis
Monitoring human-related data such as emotions and physical activities has gained increasing attention in many different research areas [11]. However, stress management in a production context is a rather new research area. Newly developed biosensors make it possible to measure parameters such as heart rate variability, heartbeat, or skin conductance, which are reliable indicators of stress. This information can be merged in a production environment to identify stressful situations and prevent failures or production downtime by taking measures accordingly.
Sentiment analysis refers to the analysis of written human interaction to identify the emotional state of the author at the time the message was written. A message can carry not only informative but also emotional content [12]. The analysis of human data will be included in BigPro as another potential failure indicator to gain better insight into the production system and improve failure management; a toy illustration follows.
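As a toy illustration of lexicon-based sentiment scoring on worker notes, the sketch below counts positive and negative indicator words. The word lists are invented for the example; a real system would use a trained model or a curated domain lexicon.

```python
POSITIVE = {"smooth", "stable", "fine"}            # hypothetical lexicon
NEGATIVE = {"noise", "leak", "vibration", "worn"}  # hypothetical lexicon

def polarity(text: str) -> int:
    """Crude polarity score of a free-text note: positive hits minus
    negative hits; a negative score flags the note for the platform."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(polarity("Strange noise and slight vibration at station 3"))  # -2
```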
Identified research gap
The integration of Big Data technology into a failure management system has not yet been put to the test. Such integration would enable the merging of structured and unstructured data in a production context to create a more precise virtual image of the production system. It also requires more sophisticated CEP algorithms to better process and merge structured and unstructured data in a failure management context. To ensure portability of the solution, another challenge is to cover three different use cases with very distinct information systems and business cases.
After the data is processed and a potential failure is recognized, a user-oriented visualization is necessary to suggest or initiate countermeasures. Depending on a failure's seriousness and impact on production, countermeasures need to be taken by persons from different hierarchies with different authorization levels in the company. Hence, a user-oriented visualization of failures needs to be developed (management decides on aggregated information, while production workers require up-to-date information on machine status). Furthermore, the integration of human data as an indicator for failures requires new data privacy concepts.
Big Data for Production Failure Management
Failure recognition with Big Data to increase production resilience
To detect possible failures, all production data (e.g., sensor data, order data from the ERP system and other information systems, production environment data, etc.) will be gathered and analyzed. As part of the BigPro approach, the influence of the persons within the production (the heart of a production) will be considered as well. In fact, workers' input and working experience are of great importance for gaining better insights into the production system's condition. Unusual observations such as growing noise emission or oil leakage mostly stay unnoticed, but can be detected by experienced workers. As part of this project, different sources of human input are tested regarding their suitability for failure management: text analyses of intranet department news, maintenance comments, as well as voice recognition within the production itself are potential data sources.
Human data, as well as data from production assets, will be automatically analyzed and handled by complex event processing methods. In addition, not only current but also past information from failure situations is processed to detect reoccurring patterns and improve the platform's failure forecasting capabilities. After patterns are detected, the probability of an occurring failure is determined to define the data's quality and to decide whether corrective actions will be taken.
Applying Big Data technology to the production data allows for the consideration of all data (structured or unstructured) relevant for the production process, making the digital representation of the production more comprehensive. Thus, the more data and information is available in real time, the better the planning, controlling, and managing of production systems can be performed, while responsiveness to unforeseeable events increases. All these aspects pave the way to a more resilient production system suffering fewer unplanned production downtimes.
Big Data for failure prevention and reaction management
After patterns have been detected, adequate countermeasures need to be defined for the pro-active character of the failure management system in BigPro. As a supporting tool for the creation and evaluation of specific reactive actions, the FMEA analysis will be used. For known patterns, a reactive action will be defined in the failure management platform and documented in a countermeasure catalog. For an identified pattern with a high probability rate, the previously defined countermeasure might be initiated automatically by the system; patterns with a lower probability of occurrence can be forwarded to the person in charge as a failure warning with a reaction proposal (the sketch below illustrates this decision logic). Thus, the risk of production downtime is reduced. The catalog will be extended in an ongoing validation process. To eventually use this technology in different production branches, cross-sector solutions need to be generated.
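A minimal sketch of the catalog lookup and the automatic-versus-warning decision described above; the patterns, countermeasures, and the confidence threshold are all invented for illustration and are not taken from the BigPro platform itself.

```python
# Hypothetical catalog: detected pattern -> (countermeasure, match confidence)
CATALOG = {
    frozenset({"temp_high", "vibration"}): ("check bearing lubrication", 0.90),
    frozenset({"noise_reported"}): ("schedule visual inspection", 0.55),
}

AUTO_THRESHOLD = 0.8  # assumed confidence above which action runs automatically

def react(pattern: frozenset) -> str:
    """Initiate the catalog countermeasure automatically for high-confidence
    matches; otherwise forward a warning with a reaction proposal."""
    measure, confidence = CATALOG[pattern]
    if confidence >= AUTO_THRESHOLD:
        return f"AUTO-INITIATE: {measure}"
    return f"WARN person in charge: proposed reaction '{measure}'"

print(react(frozenset({"temp_high", "vibration"})))
print(react(frozenset({"noise_reported"})))
```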
Failure visualization
As a subordinate theme, this research aims to visualize information about possible failures, their urgency, and possible causes, together with proposed countermeasures.
Information should be visualized differently for different groups of employees. While the production manager needs notifications about urgent failures, the machine operator needs all types of information about the machines in his area of responsibility. He also needs a different degree of detail and is used to more technically detailed information; this may include notifications about resource shortages, signs of increasing wear, or a drop in oil pressure. This personalized failure visualization creates a more transparent and user-oriented workflow while increasing the efficiency of the failure management system.
BigPro for a resilient failure management in production
The project BigPro unites new data processing approaches with an emphasis on failure management strategies. The aim of the project is the creation of new usable concepts and tools for failure detection, failure handling, and failure visualization. The project takes place in close collaboration with three project partners to test the created solutions in action within their production systems.
The project partners are of varying size with a range of different production systems to study and ensure the manifold application possibilities of the BigPro platform.
The overall approach
Information plays an important role in this project. To realize an effective and efficient failure management system, it is important to consider the right pieces of information in the right context. The project's Big Data approach allows for the consideration of all kinds of data and information, without the need to specify relevant information beforehand. Thus, all available data can be gathered, analyzed, and used in the BigPro platform for a data-based failure management.
BigPro will extend the data processed for analysis from the production environment (production machine data, environmental data, and order data) with unstructured, human data. Thus, a more complete digital image of the production is gained. Impressions such as unusual machine noises or flawed machine operations are difficult to track with ordinary sensors. BigPro will be capable of capturing and understanding human input, and will use this additional information for the failure management.
The overall goal of BigPro is to enable a pro-active failure management for producing companies. This goal is pursued by developing algorithms for data pattern analysis. These algorithms examine existing data pools for patterns occurring during production failures. Detected data patterns will be correlated with the related failure and included in the catalog of countermeasures. The BigPro platform will use this database to compare the current data stream from the production environment with the known patterns. In case of a match, the system will warn that a specific failure might occur. If a known and established countermeasure is recognized, it will be suggested to the responsible user.
Next, to initiate and conduct pro-active or re-active countermeasures, it is important to identify the appropriate management/decision level to address the failure. Here, it is important to provide the required information in a user-oriented visualization and at the right aggregation level.
The project comprises the following tasks to implement a Big Data platform for failure management in production systems:
• Creating an information landscape for each use case, and developing a concept to determine data and information reliability for the failure management system,
• Evolving algorithms for CEP data pattern management as the basis for a pro-active failure management,
• Creating an expandable catalog of countermeasures, correlated with identified data patterns, and
• Developing new, user-oriented visualization concepts for different decision levels.
Use case descriptions
The first use case is part of a research environment to test the interaction between practice and research. Based on a real production environment, electrically powered pedal carts are assembled in a small-batch production. The factory is equipped with modern machinery and assisted by voice-based systems such as Pick-by-Voice commissioning. Due to research activities, the data environment is extended on a regular basis. This leads to a dynamic data generation environment and a high variety and veracity of data. As part of research, it is possible to study employees as indicators of disturbance in more detail than in actual companies.
The second partner has started to digitalize its hand moulding shop by installing RFID technology linked to the ERP system to increase process transparency. These data are extended by data pulled from the involved production machines. This use case represents the data availability of a typical SME. The company does not yet have a complete failure management system, but with up to six weeks of throughput time for each product, it is of the utmost importance that failures and resulting production disturbances can be avoided.
The production process of the third partner requires the interaction of a high number of production machines, each creating a significant volume of data points that need to be merged to extend the already existing failure management system. The integration of human-created content promises further insights into the production process and its stability.
Challenges in BigPro
The three use cases and their diverse production and business backgrounds pose a significant challenge to BigPro. Each partner demands a specific problem solution in a specific context. To ensure transferability of the solution, three measures need to be taken: First, the partner-specific problems need to be generalized to examine transferability options. Second, a set of standard BigPro elements to address the generic problems will be defined; these sets comprise the involved information objects, as well as the required information sources (e.g., sensors). Third, the catalog's logic for gathering countermeasures needs to be receptive to all three partners' requirements.
Further challenges arise from the integration of structured and unstructured data. Especially the aspired inclusion of human-generated content poses a challenge for the BigPro platform. On the one hand, it is necessary to generate data without interfering with the workers' routines; thus, analyses were run to identify already existing human interfaces within the treated use cases. On the other hand, there is still the complexity of digitalizing input and processing the retrieved data into context-related content. Therefore, the system will be taught domain terms and context by reading in documents and manuals of the respective process.
Conclusion and Outlook
An efficient failure management plays an important role for production companies. Scrap and downtime are cost drivers that need to be avoided. Since data and information play an increasingly important role in companies and for decision makers, it seems natural to use data for a failure management system. BigPro introduces a comprehensive approach by using Big Data methods for more precise failure detection. A Big Data platform capable of processing structured and unstructured data generated in the production environment will be developed.
Unlike other approaches, BigPro not only uses data from production machines and environment sensors, but also stresses the workers' capability to indicate disturbances and failures. By digitalizing the human input and merging it with machine data on the BigPro platform, the digital image of the production becomes more complete and serves as a better decision basis. On this basis, data pattern analyses are run to detect looming failures in production. This goal entails another challenge: the combination of historic and real-time data, as well as the correlation with data patterns and related failures.
Finally, a concept for a user-oriented visualization to better support decision makers is required. This concept ensures that only information relevant to a given person is shown (management decides on aggregated information, while production workers require up-to-date information on machine status).
In the first project phase, the technical and business-driven use case requirements have been gathered, discussed, and documented. Next, the BigPro platform will be initiated based on the documented requirements. In parallel, the information landscape is drawn to identify relevant information objects. Based on the information objects, the data pattern analysis will start.
Sedimentological Study and Heavy Mineral Analysis of Sediment Samples from Well-S, Niger Delta, Nigeria
Sedimentological study and petrographic analyses were carried out on thirty ditch cutting samples from well-S, Niger Delta, Nigeria, with the aim of determining the provenance and depositional environment of the sediments. The samples were subjected to soxhlet extraction for the removal of soluble organic matter and to particle size analyses using pipette and Emery sedimentation techniques in order to determine the grain size distribution of the sediments. Separation of heavy minerals from the samples was done with the aid of bromoform to enable petrographic analyses of the heavy mineral suite under the polarising microscope. The data obtained from the grain size analysis were used in preparing histograms, from which some simple statistical parameters were derived. Graphic mean values obtained range between 0.74 and 2.64 Ø, which implies that the sediments are predominantly fine to medium grained. The inclusive standard deviation values range from 0.53 to 1.24 Ø, indicating that the sediments are moderately well sorted to moderately sorted. Inclusive graphic skewness values of 0.29 to 0.70 indicate that the sediments range from finely skewed to strongly finely skewed, and the graphic kurtosis values of 0.61 to 1.54 show that the sediments are predominantly very platykurtic, which implies a low energy environment of deposition. The polymodal nature displayed by the histograms indicates that the sediments have been derived from various sources. The study concluded that the sediments were deposited in a fluvial environment. It also established that the sediments originate from metamorphic and acid igneous rocks of the Nigeria Basement Complex and are mineralogically mature to sub-mature.
Introduction
Sediments are derived from pre-existing rocks that have been weathered, transported, and deposited in basins.
The Niger Delta is one of the depositional basins in Nigeria and it has been accommodating sediments since the Paleocene.
The understanding of the sedimentary processes, most importantly the environmental factors that influence the weathering, transportation, deposition, and subsequent modification of the sediments, is crucial to knowing their source and reconstructing the environment in which they were deposited. Sedimentologists are saddled with the responsibility of studying the properties of the sediments, such as texture, structure, and chemical and mineralogical composition, to uncover the natural history of the sediments.
Although the Niger Delta has been extensively studied by various workers as regards the provenance of the sediments, the heavy mineral suite, and the depositional environment (NERDECO (Netherlands Engineering Consultants), 1959 [1]; Evamy et al., 1978 [2]; Adedokun, 1981 [3]; Mateawo, 1995 [4]), there is a paucity of information on sedimentological and heavy mineral studies of sediments from deepwater wells in the Niger Delta.
The aim of the present work is to study the sedimentological properties of sediments from the studied deepwater Niger Delta well (Well-S) and subsequently establish the provenance and environment of deposition of the sediments.
Geology of the Study Area
The Niger Delta, the most important sedimentary basin in Nigeria, is one of the world's major hydrocarbon-prolific basins and largest delta complexes. It is located between latitudes 4° and 6° North and longitudes 4° and 9° East. It is bounded in the west by the Benin Flank and in the east by the Calabar Flank, which is a subsurface continuation of the Oban Massif. The Delta covers an area in excess of 75,000 km² (Avbovbo, 1978 [5]). The Niger Delta occupies an area restricted by the Benin Flank, the Calabar Flank, the Anambra Basin, and the Senonian Abakaliki Uplift. It is situated in the Gulf of Guinea on the west coast of Africa. The modern Niger Delta is generally agreed to be built on an oceanic crust. The supporting arguments come from the pre-continental drift reconstruction, which indicates an important overlap of NE Brazil onto the present Niger Delta, and from a series of geological and geographical observations. The Delta has been very prolific in terms of hydrocarbons due to the association of source rock, structure, thermal history, and lithology type, which are favourable conditions for the production, accumulation, and retention of hydrocarbons.
Materials and Methods
Thirty (30) ditch cutting samples from 10700-11570 m, collected at 30 m intervals in Well-S, were obtained for sedimentological studies from a Nigerian deepwater operator (oil company). The name and location of the well were not made available for proprietary reasons; however, the approximate location of the study area is shown in Figure 1.
A soxhlet extraction was carried out to remove the soluble organic matter contained in the samples, using a mixture of n-hexane and toluene in a ratio of 2:3 as solvent. Thereafter, the samples were wet sieved using a 63 µm mesh to separate the fine grained size fraction (silt/clay) from the coarse grained size fraction (sand particles), following the standard procedures of Carver (1971) [6] and Folk (1974) [7]. The clay/silt fraction of the samples was analysed for grain size distribution using the pipette method, while Emery's rapid sedimentation tube method was used for the grain size studies of the sand size fraction. Twenty (20) sand sized fraction samples were selected for heavy mineral separation and dry sieved with a 200 micron mesh in order to remove the coarse grains. Each sample was introduced into a separating funnel containing bromoform in a fume cupboard and was properly stirred. The heavy minerals settled while the lighter fraction floated on the bromoform due to the difference in their densities (Carver, 1971). The heavy minerals obtained were sprinkled on a glass slide and covered with a cover slip, using Canada balsam as a mounting medium. These slides were labelled properly and examined under the petrographic microscope.
Results and Interpretation
The data obtained from the pipette and Emery's sedimentation tube analyses were used in plotting cumulative curves, histograms, and probability curves. Quantitative graphical values for the various percentiles Ø5, Ø16, Ø25, Ø50, Ø75, Ø84, and Ø95 were obtained from these curves on probability log paper, and the results are presented in Table 1. The statistical parameters derived from the results include the graphic mean, inclusive graphic standard deviation, inclusive graphic skewness, and graphic kurtosis for each sample. These parameters are defined by equations that make use of the percentiles stated above, as proposed by Folk and Ward (1957) [8].
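For reference, the Folk and Ward (1957) definitions of these parameters, written in the Ø-percentile notation used above, are:

$$M_z = \frac{\phi_{16} + \phi_{50} + \phi_{84}}{3}$$

$$\sigma_I = \frac{\phi_{84} - \phi_{16}}{4} + \frac{\phi_{95} - \phi_{5}}{6.6}$$

$$Sk_I = \frac{\phi_{16} + \phi_{84} - 2\phi_{50}}{2(\phi_{84} - \phi_{16})} + \frac{\phi_{5} + \phi_{95} - 2\phi_{50}}{2(\phi_{95} - \phi_{5})}$$

$$K_G = \frac{\phi_{95} - \phi_{5}}{2.44(\phi_{75} - \phi_{25})}$$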
The textural characteristics, mechanisms of deposition, and depositional environment of the studied samples were inferred from the statistical parameters. The values obtained for the graphic mean range between 0.74 and 2.64 Ø, showing that the sediments are fine to coarse grained. On average, about 63.3% of the sediments are fine grained, 33.3% are medium grained, and 3.4% are coarse grained. This implies that the velocity of the transporting and depositing medium fluctuated, with deposition occurring in a predominantly low energy environment.
The inclusive graphic standard deviation values range from 0.53 to 1.24 Ø, which indicates that the sediments are moderately sorted to moderately well sorted. About 60% of the samples are moderately sorted, while the remaining 40% are moderately well sorted. The range of standard deviation values predominantly in excess of 0.80 (0.80-1.4) implies a moderately agitated medium with a relatively constant energy of deposition, while the range of 0.50 to 0.80 Ø is more typical of fluvial environments than inland dunes, indicating that the sediments were deposited by a river supplying the sea with sediments (Friedman, 1961) [9]. According to Folk (1974), sediments that are deposited by a constant current will show better sorting than sediments deposited by a current that fluctuates rapidly. Therefore, the sediments analysed in the studied well section must have been deposited by a current of constant to intermediate strength.
The values of skewness and kurtosis show how closely the grain size distribution approaches the normal Gaussian probability curve; the more extreme the values, the more the size curves deviate from normal (Folk, 1974). Sediments from one source have fairly normal curves, while sediments from more than one source deviate appreciably from the normal and show high values of skewness and kurtosis. The values of the graphic skewness range from 0.29 to 0.70 (i.e., from fine skewed to strongly fine skewed). This shows that the energy of the environment is low, as indicated by the prevailing positive skewness of the sediments. More than 96% of the sediments are strongly fine skewed, while the remaining 4% are fine skewed.
The graphic kurtosis values obtained for the samples range from 0.61 to 1.54 (from very platykurtic to very leptokurtic). On average, more than 33% of the sediments are leptokurtic, while very platykurtic, platykurtic, and mesokurtic sediments account for 20% each (60% in total), and the remaining 6.7% are very leptokurtic. This implies that the tail portions are better sorted than the central part, although some values show better sorting at the central part.
The three principal modes of transport, which are traction (rolling/sliding), saltation, and suspension (Visher, 1969 [10]; Visher and Howard, 1974 [11]; Sagoe and Visher, 1977 [12]), are generally represented by different line segments on a probability plot. Friedman (1967) showed that river sands commonly display all three modes of transportation, while beach sands in contrast display only the saltation component. Detailed examination of most of the probability curves derived from the studied samples indicates three line segments, which implies that the three modes of transportation were present. It means that the deposits are likely to be of fluvial origin. The most effective transport mechanisms of the sediments studied are saltation and suspension, with saltation predominant and traction a subordinate mechanism of transportation (Figure 3).
The histograms plotted for most of the samples analysed exhibit polymodal to bimodal distributions (Figure 2) and show similar grain size distributions. The polymodal nature of the samples indicates that the sediments are from different sources. The uniformity of the sands in this area is believed to be a result of uniformity in the forces transporting and depositing them. The histograms display both primary and subsidiary modes; the primary mode is between 0.92 and 2.39 Ø, while the subsidiary mode is between 3.00 and 3.84 Ø.
Depositional Environment
Sahu (1964) [13] proposed some equations that are useful for confirming the depositional environment. One of these equations is used to differentiate between shallow marine and fluvial sediments. It is mathematically expressed as:

$$Y_{\text{sh.mar.:fluv.}} = 0.2852\,M_z - 8.7604\,\sigma_I^2 - 4.8432\,Sk_I + 0.0482\,K_G$$

where $M_z$ is the graphic mean, $\sigma_I^2$ is the square of the average graphic standard deviation, $Sk_I$ is the average graphic skewness, and $K_G$ is the average graphic kurtosis.
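A small sketch of applying this discriminant to the tabulated parameters follows. Note that the decision threshold used below (Y < −7.419 indicating a fluvial environment) is taken from the wider literature on Sahu (1964) and is an assumption here, since the paper does not restate it; the input values are illustrative, not the measured data.

```python
def sahu_y(mz, sigma_i, ski, kg):
    """Sahu (1964) shallow marine vs. fluvial discriminant,
    using the coefficients quoted in the text above."""
    return 0.2852 * mz - 8.7604 * sigma_i**2 - 4.8432 * ski + 0.0482 * kg

y = sahu_y(mz=1.8, sigma_i=0.9, ski=0.5, kg=0.9)  # illustrative values
# Threshold assumed from the literature on Sahu (1964):
print("fluvial" if y < -7.419 else "shallow marine", round(y, 3))
```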
Results of the Heavy Mineral Analysis
The heavy minerals present in the sediments analysed are listed in Table 2. According to Carver (1971), heavy minerals are accessory minerals present in concentrations of less than 1%. They are chiefly silicates and oxides, many of which are very resistant to mechanical abrasion and chemical weathering.
Petrography
The heavy mineral assemblages in the samples analysed were dominated by opaque minerals. It has been noted by Friedman and Sanders (1978) [14] that opaque minerals typically predominate in a heavy mineral suite. Emphasis is placed on non-opaque minerals in this study because the opaque minerals are of little importance in provenance determination; they are anhedral in shape with very irregular outlines. The non-opaque minerals include Tourmaline, Staurolite, Rutile, Zircon, Apatite, Kyanite, Amphibole, Monazite, and Olivine. A brief description of the textural and optical characteristics of each heavy mineral type is given in Table 3, and photomicrographs of some of the heavy minerals are shown in Figure 4.
Conclusions
The data obtained from the statistical parameters show that the sands from well-S are predominantly fine grained, moderately sorted to moderately well sorted, strongly fine skewed and very platykurtic. This implies that the sediments were deposited in a relatively low energy fluvial environment.
The statistical parameters revealed that the sediments were transported mainly by saltation and suspension, with the greater population of the sediments transported by saltation. The sediments in the sequence studied were deposited in a predominantly low energy medium under fluviatile conditions, as deduced from the application of Sahu's (1964) equation to the statistical parameters.
The heavy mineral suite indicates that the sediments of the study area are likely to have been derived from acid igneous and metamorphic rocks which form part of the Basement Complex rocks of Nigeria. The co-occurrence of stable heavy minerals like Tourmaline, Zircon, Rutile, and Garnet indicates that the sediments are mineralogically mature.
Robotic Arms with Anthropomorphic Grippers for Robotic Technological Processes
The robotic arms of the human arm type, so-called collaborative robots, have been improved, optimized, and greatly diversified in recent years. However, most of them are still equipped with mechanical grippers with plier-like jaws. Equipping these robotic arms with anthropomorphic grippers is currently hampered by the fact that the variants of these grippers on the market are far too complex and priced out of reach for large-scale use. As an alternative to the familiar anthropomorphic grippers, a five-fingered anthropomorphic gripper made under my coordination is presented, constructively and functionally, including briefly the solution for coupling it with a robotic arm.
Introduction
Since the advent of industrial robots in technological processes, industrial robots have long been, and still are, equipped for parts transfer operations with mechanical grippers with jaws. These grippers can only be used for one type of part or a set of similar parts. There are also multi-grippers that can be used for several types of parts. The main disadvantage of these grippers is the limited range of use and the need to change the gripper when another type of part, in shape or size, is handled [1]. In parallel with the mechanical grippers with jaws, anthropomorphic grippers, similar to the human hand, with three to five fingers, were continuously developed and perfected. Robotic arms similar to the human arm were also developed and perfected, becoming compact, precise, and reliable [2,3]. Thus, in robotic technological processes, the classic industrial robots are being replaced, as an increasingly obvious trend, with robotic arms/collaborative robots (cobots) equipped with anthropomorphic grippers, fixed or mounted on mobile platforms, up to the more complex form of humanoid robots. In this way, it is possible to replace human operators with these variants of humanoid arms or robots. This paper presents a robotic arm / anthropomorphic gripper unit that can be widely used in the robotization of technological processes for manufacturing industrial products.
Types of Collaborative Robotic Arms
In the industrial robot, the robotic arm had the human arm as a model from the beginning, with the mention that the initial variants of industrial robots had large dimensions and some disproportions compared to the human arm. Anatomically and kinematically, the human arm represented in Figure 1 is characterized by several elements and seven independent movements (in the xyz coordinate system): three rotations in the shoulder (ω1; ω2; ω3), one rotation in the elbow (ω4), one rotation around the forearm (ω5), and two rotations at the wrist level (ω6; ω7).

As we have already mentioned, the first variants of industrial robots only tried to copy the structure and kinematics of the human arm, a direction in which they were partially successful. However, in the last 10-15 years, robotic arm structures have appeared that are much more similar to the human arm and have comparable performance. Some of these variants will be presented (for each variant the independent movements are highlighted: ω1, ..., ω7, corresponding to the number of degrees of freedom, an original contribution of this paper, useful for an easier understanding of the operation of these robots).

The Barrett Arm (Figure 2a) has been made since the early 2000s and is particularly accurate. The main features of this robotic arm are a height of 42 cm, a length of 72 cm, a width of 34 cm, a weight of 27 kg, high speed, and very good accuracy [4]. Figure 2b shows the Universal Robot UR 10 robotic arm. Its main features are: it safely works alongside employees or separately; it automates tasks up to 22 lbs (10 kg); its reach radius is up to 51.2 in (1300 mm); it has 360-degree rotation on each wrist joint, 6-axis capability, and 0.1 mm repeatability; and it is lightweight and mountable at only 24.3 lbs and easily programmed to switch tasks [5]. The robotic arm in Figure 3a is described in [6]. Figure 3b shows the robotic arm type Elfin, which has the following features: control mode: continuous path control; drive mode: electric; application: loading, pick and place; condition: new; CE certification; trademark: Han's Robot [7].

Another variant of robotic arm of this type is ROZUM Robotics (Figure 4a), characterized by being ultra-lightweight and mobile (8 kg weight), strong and dexterous (3 kg payload, 700 mm reach), precise (±0.1 mm repeatability), and fast (30 rpm / 2 m/s) [8]. The KUKA robotic arm (Figure 4b), made after a long period of improvement and optimization of the classic KUKA robots, in which we can also remark upon the great difference between traditional industrial robots and articulated robotic arms of the latest generation, is characterized by a 7-DOF structure and adaptation algorithms; the robot is equipped with torque sensors, allowing torque control and by extension impedance control, which enables compliant interaction and motion adaptation [9].

The Rebel Arm 1-2 robotic arm (Figure 5a) is characterized by 6 DOF, with an integrated control system and motor; an outer chassis that consists entirely of polymers and is therefore cost-effective and light; an articulated arm that enables applications involving human-machine collaboration; lightweight internal cables; joints that are suitable for service robotics applications; and brushless DC motors instead of stepper motors [10]. The Panda robotic arm (Figure 5b) is an easy-to-program robotic arm designed for small businesses, able to move in seven axes and designed with a smart sense of "touch"; the Panda can help conduct science experiments, build circuit boards, or pretest equipment (two Panda arms can even work together to build a third) [11].

All types of robotic arms presented are part of the so-called class of collaborative robots, designed to interact in a friendly manner and very efficiently with the human operator. These robotic arms typically have six or more degrees of mobility and can be used individually or in pairs, like human arms. Two examples of structures equipped with two robot arms each are given for illustration (Figure 6a,b).
In most cases, these robotic arms, used individually or in pairs, have been and still are equipped on a large scale with grippers with jaws or pliers, and only sporadically with articulated finger grippers (3, 4, or 5 fingers). This situation is explained by the still low performance and the lack of affordable variants of anthropomorphic finger grippers. It is time for this situation to be overcome and for collaborative robotic arms to be broadly equipped with anthropomorphic grippers with five articulated fingers [2,3].
Solving the Direct Kinematic Problem with the Method of Homogeneous Operators
A problem of particular importance for robotic arms of the human arm type is the solution of direct kinematics. The following is a brief example of solving direct kinematics for a robot of this type, for which the method of homogeneous operators is applied [14,15]. This method involves the use of elementary homogeneous operators of rotation and translation, as well as compound rotation-translation (respectively translation-rotation) operators. Figure 7a shows the form of the elementary homogeneous translation operator from the reference system $O_m x_m y_m z_m$ to the reference system $O_n x_n y_n z_n$ along the common axis $x_m = x_n$:

$$T_{mn}^{x} = \begin{bmatrix} 1 & 0 & 0 & a_{nm} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

In the same form, the matrices of the elementary homogeneous rotation operators about the x-axis, y-axis, and z-axis, according to Figure 7b-d, are:

$$R_{mn}^{x} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & C_{nm} & -S_{nm} & 0 \\ 0 & S_{nm} & C_{nm} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad R_{mn}^{y} = \begin{bmatrix} C_{nm} & 0 & S_{nm} & 0 \\ 0 & 1 & 0 & 0 \\ -S_{nm} & 0 & C_{nm} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad R_{mn}^{z} = \begin{bmatrix} C_{nm} & -S_{nm} & 0 & 0 \\ S_{nm} & C_{nm} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

In these matrices, $S_{nm} = \sin\varphi_{nm}$ and $C_{nm} = \cos\varphi_{nm}$ are the sines and cosines of the rotation angles, the rotation being around the respective axis from the reference system m to the reference system n, and $a_{nm}$ is the translation distance. If we use two elementary homogeneous operators, a translation and a rotation one (respectively a rotation and a translation one), we obtain compound homogeneous operators whose matrices result from multiplying the matrices of the corresponding elementary operators. Compound operators ease, to some extent, the kinematic calculation, by reducing the number of multiplications of the matrices corresponding to rotations around the axes of the kinematic couplings and to translations between the axes of two successive couplings. Below, we exemplify the solving of the direct kinematic problem for the kinematic structure with 6 axes (0, 1, 2, 3, 4, 5) analyzed and represented in Figure 8.
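For illustration only, the elementary and compound operators above can be written as a short numerical sketch; Python with NumPy is assumed here, and the sketch is not part of the original formulation.

```python
import numpy as np

def trans_x(a):
    """Elementary homogeneous translation operator along the common
    x-axis (frame n displaced by a along x_m = x_n)."""
    T = np.eye(4)
    T[0, 3] = a
    return T

def rot_x(phi):
    """Elementary homogeneous rotation operator about the x-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,  -s, 0.0],
                     [0.0,   s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def rot_y(phi):
    """Elementary homogeneous rotation operator about the y-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def rot_z(phi):
    """Elementary homogeneous rotation operator about the z-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[  c,  -s, 0.0, 0.0],
                     [  s,   c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# A compound translation-rotation operator is the product of the two
# elementary operators, e.g. a translation along x followed by a
# rotation about z:
def trans_x_rot_z(a, phi):
    return trans_x(a) @ rot_z(phi)
```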
To obtain the coordinates of the reference system $O_5 x_5 y_5 z_5$ relative to the reference system $O_0 x_0 y_0 z_0$ (the direct kinematics problem), we write the matrix forms of the rotation or translation operators of successive passage from the reference system m to the reference system n, with m = 0, 1, 2, 3, 4, 5 and n = 0, 1, ..., 5. The matrix giving the coordinates of the reference system $O_5 x_5 y_5 z_5$ with respect to the reference system $O_0 x_0 y_0 z_0$ is then the product of the successive transfer matrices:

$$T_{05} = T_{01}\, T_{12}\, T_{23}\, T_{34}\, T_{45}$$

The kinematic analysis presented may be extrapolated to any other robotic arm structure of the human arm type.
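Continuing the sketch above, the direct kinematics product can be evaluated in a loop. The joint axes, link offsets, and joint angles below are hypothetical placeholders, since they depend on the particular geometry of the structure in Figure 8.

```python
# Direct kinematics for a hypothetical 6-frame chain (frames 0..5):
# each transfer matrix T_{n,n+1} is a compound operator built from the
# elementary operators defined above. Axis choices, link lengths, and
# joint angles are placeholders, not the geometry of Figure 8.
links = [0.10, 0.30, 0.25, 0.10, 0.08]                 # link offsets in metres (assumed)
angles = np.deg2rad([10.0, 25.0, -30.0, 45.0, 15.0])   # joint angles (assumed)
axes = [rot_z, rot_y, rot_y, rot_x, rot_z]             # assumed joint axes

T_05 = np.eye(4)
for a, q, rot in zip(links, angles, axes):
    T_05 = T_05 @ trans_x(a) @ rot(q)                  # T_05 = T_01 T_12 T_23 T_34 T_45

origin_5 = T_05 @ np.array([0.0, 0.0, 0.0, 1.0])       # O_5 expressed in frame O_0
print(origin_5[:3])
```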
Five-Finger Anthropomorphic Gripper for Robotic Arms
Furthermore, I describe an anthropomorphic five-finger gripper with a high degree of resemblance to the human hand, designed and made under my coordination; this type of gripper is recommended for equipping the robotic arms described above. Figure 9 shows such a robotic arm and the gripper that will be mounted on it, in the real variant and as a CAD model [16].
Figure 9. Recommendation for coupling a human arm robot with an anthropomorphic five-finger gripper.
This gripper, according to Figure 10a, has five degrees of mobility and is driven by five stepper motors (Figure 10b). The implementation of the gripper is being carried out, the first part of the problem having been solved by ensuring compatibility between the robot and the gripper. An example of use is the simulation of a mounting and transfer operation of a metal shaft. According to Figure 11a,b, a bearing is mounted on a shaft. After mounting, the shaft is taken and stored in a box on a suitable support (Figure 12). The robotic workstation can be optimized by using a Kinect sensor, which takes over the movements of a human arm and transmits them to the robotic arm, the anthropomorphic gripper being configured to grip various parts using a Leap Motion sensor (Figure 13). The presentation of this solution seeks to encourage the widespread use of robotic arms equipped with five-finger anthropomorphic grippers of average complexity, achievable at low cost, which would really contribute to the quasi-total robotization of the technological processes of manufacture and assembly.
Conclusions
Based on what is presented in this paper, the following conclusions can be drawn:
•
Human arm type robotic arms, also called collaborative robots or cobots if their shapes are more complex, have greatly improved and diversified in recent times; even the big companies that bring industrial robots to market now also offer such structures.
•
These robotic arms, more efficient than those of traditional industrial robots, are still equipped mainly with plier-type jaw grippers, which do not exploit all their constructive and operational possibilities.
•
The maximum efficiency of collaborative robotic arms can be achieved by equipping them with anthropomorphic grippers, which are still difficult to access because of high costs and sometimes unnecessary complexity;
•
As an alternative to the familiar anthropomorphic grippers, I briefly present an anthropomorphic gripper with five fingers, sufficiently advanced and feasible at a lower cost, made under my supervision, including a solution for coupling it to a robotic arm and an example of use in assembly and transfer operations; this solution has advantages in terms of cost and operation for current applications compared with other very expensive and unjustifiably complex anthropomorphic grippers.
|
2021-05-10T00:04:27.535Z
|
2021-01-28T00:00:00.000
|
{
"year": 2021,
"sha1": "ac9613263172e1312e1e7fdc9c1f3685343a0d9d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-3900/63/1/77/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "faa1b9b06edae3e5b577947a2d3f7b17006064bc",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
209325993
|
pes2o/s2orc
|
v3-fos-license
|
Digital clinical trials: creating a vision for the future
Digital technologies have transformed almost every aspect of our lives over the past decade including the way we communicate, shop, and read. Digital health technologies, despite their reputation for over-promising and under-delivering, can potentially offer the solutions needed to transform clinical trials, if backed by sufficient investment and regulatory support. However, this cannot be accomplished by replicating the current research processes and just transforming them from paper to digital form. Rather, a complete re-thinking and re-engineering of the clinical trial experience around the participant rather than the research site is needed. While some trials could be entirely digital in a virtual environment, many will need a hybrid of virtual and clinical site-based activities.
Clinical trials are the central mechanism for unbiased assessment of proposed advances in health, healthcare, and evaluation of comparative options for approaches to prevention, diagnosis, and treatment. For trials to inform clinical decision making and to be of greatest value, the setting should be consistent with clinical practice and participants should be representative of the individuals who will use the new therapies or delivery approaches. Unfortunately, the existing clinical research infrastructure is changing only gradually, and that often leads to clinical trials being both logistically challenging and excessively expensive.
There is ample evidence of the need for a better solution to the current clinical trial system. For example, while the public broadly supports and values clinical research, <10% of eligible individuals are asked to participate.1,2 The additional time and travel commitments too often required for clinical trial participation can lead to many interested people declining the opportunity. Clinicians are likewise dispirited by the numerous repetitive practices required in the current clinical trials enterprise, many unnecessary if available digital data sources were to be fully used, leading them to avoid asking patients to participate. This limited enrollment can lead to health decisions that are too often informed by results obtained in an artificially homogeneous population,3 or made in a vacuum of high quality evidence.4 Related to the lack of diverse participants is the agonizingly slow enrollment that most clinical trials experience, typically requiring nearly double the time of what was planned, with half of clinical sites enrolling none or very few participants.5 This, plus other factors, contributes to what is often the greatest barrier to clinical trials, the costs, which can routinely range into hundreds of millions of US dollars.6,7

Data from electronic health records and claims data, which are already collected as part of routine care, combined with real-world signals from mobile phones, wearables, and implanted and in-home sensor technology, will enable remote, continuous monitoring of participants as the data quality and types improve, thereby eliminating most travel requirements to a clinical site. These changes will also allow for more frequent and real-time follow-up of participants, overcoming the limitation of in-clinic exams as the primary means of objective follow-up. In addition, innovative sensors can provide novel data to more precisely refine phenotypes, such as continuous glucose monitoring.8

Another unique capability digital technology brings to trials is near ubiquitous, 24/7 connectivity fostered by smart phones and especially cell phones (owned by 96 and 81% of adults living in the United States, respectively), which enables two-way, real-time communications from most physical locations.9 Besides also helping to minimize geographic obstacles to participation, this level of connectivity allows for the possibility of individual findings and overall results being returned to participants throughout the duration of the study, fostering a true partnership in research. A transformational aspect of digital trials is how they can truly involve participants as partners and even enable patients to design and lead clinical trials.10

To manage and make sense of the vast amounts of data capable of collection through these novel techniques will require the broad use of another digital technology: artificial intelligence (AI). Combined with traditional biostatistical methods, AI will be useful to tackle daunting problems of missing data and artifact that lead to questions about interpretation of digital data. The user interface and communication around technology implementation will likely require a specific, different role for research sites in assuring fidelity of "real world" measurements and increasing use of in-home visits (virtual and in-person) to ensure that high quality data are being collected.
In order to chart a path forward for greater implementation of digital clinical trials, a workshop co-sponsored by the National Institutes of Health (NIH) and the National Science Foundation (NSF) was recently convened to discuss the challenges and necessary next steps to move the field forward (see https://www.nhlbi.nih.gov/events/2019/digital-clinical-trials-workshop-creating-vision-future).
There are substantial, although undoubtedly addressable, challenges. The characteristics of digital technologies that make them attractive for use in health research (their pervasiveness and the depth of easily captured information) also make the data they generate valuable to commercial entities as well as to malign actors. Frequent news of major data breaches, or stories of hidden app trackers sharing personal data without the user's knowledge, reinforce security and privacy concerns of researchers and participants alike.
While trust is often inherent between patients and clinicians and supported by legal guidance such as the Health Insurance Portability and Accountability Act, the lines are blurry or nonexistent for commercial entities with health-related technologies or those on the fringes of health like social media. This lack of oversight enabled the majority of app developers to broadly share data without the user's explicit knowledge or consent.11 In Europe the General Data Protection Regulation was recently enacted to protect broadly defined data, including health data, and to mandate control and choice in accepting or declining terms of service (http://www.privacy-regulation.eu/en/index.htm). Because the preservation of confidentiality is mandatory for maintaining the trusting relationship needed in medical research, at a minimum a digital research infrastructure must achieve the confidence level of other sectors dealing with private and highly sensitive information digitally, such as finance and banking. Increased future use of novel data structures that provide a verifiable and tamper-proof history of all transactions can offer greater assurance of data security.12

The ethical basis of human experimentation has been driven by awareness of exploitation of vulnerable individuals and populations. This exploitation has led to a complex framework built on Institutional Review Board oversight at the level of the research site, which is defined by attested accountability of a site principal investigator and a sponsor for appropriate conduct within a reviewed protocol. The U.S. Food and Drug Administration regulations reinforce this approach with significant penalties for failing to follow the federal regulations. Continued partnership with regulators will be needed as increasing real-world experience surfaces unanticipated complexities in order to maintain the balance in favor of benefit and reduce the risk of exploitation in the digital sphere.
Beyond concerns with security, privacy, and data quality, a significant barrier to implementation of digital clinical trials is the process of participant recruitment, enrollment, and follow-up that have become revenue generators for research centers. To foster change, many existing research organizations, both commercial and academic, will face the same difficulties that a number of corporations have been forced to address over the last decade dealing with their own digital disruption (Table 1). However, unlike personal photography, travel, retail sales, and many more industries driven by consumer preference, ongoing funding of clinical research depends almost solely on the decision of trial funders, whether grant reviewers or medical industry leaders, who historically tend to support the status quo rather than drive innovation.13

So, what is needed to overcome existing challenges and drive innovation using digital technologies? As noted earlier, establishing standards and protocols supporting transparency and privacy of participants' data is critical. Next, incentives for clinical trials that necessitate the development of innovative solutions to existing problems in health research are needed. For example, providing funding opportunities for nationwide trials targeted to historically hard-to-reach populations, such as those living in a rural setting, or supporting programs designed to refine existing knowledge of inexact phenotypes such as hypertension, diabetes, or anxiety disorders based on the use of validated, personal sensors used continuously over prolonged periods of time in a real-world setting. Importantly, ongoing learning and near real-time adaptability, a major advantage of clinical trials using digital technology, will need to be an anticipated and supported component of any funded program as their novelty will come with many unanticipated lessons.
Improving health for the public is central to the mission of both the NIH and NSF, with multiple efforts highlighting a digital clinical trial focus, including NIH's commitment to the use of FHIR standards for federally-funded clinical trials (https://grants.nih.gov/grants/guide/notice-files/NOT-OD-19-122.html). Other examples include the NIH-funded Eureka Platform to support the development of valid and reliable mobile technologies and the Intensive Longitudinal Health Behavior Network to develop models of human behavior that can then be used for health monitoring. Further, the joint Smart and Connected Health program (NSF-18-541) supports the advances in technology needed to drive digital clinical trial development. Despite these efforts, more needs to be done. Affordable, rapid, pragmatic, and participant-centric clinical trials are needed to accelerate that advancement.14 Digital technologies, although in the early days of implementation in health research, offer unique tools critical to this transformation.
Both patients and clinicians are insistent that we need better and faster evidence to inform decisions. While market forces drove digital disruption in many other industries, the clinical research community, funding agencies, and regulators will need to work together to encourage methodological innovation and develop a digital clinical trial enterprise.
Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the NHLBI or the NIH.

Table 1 (caption): The traditional clinical research enterprise should take action to benefit from digital disruption, rather than attempting to maintain the status quo, which has made numerous traditional entities obsolete.
|
2019-12-13T14:27:14.574Z
|
2019-12-01T00:00:00.000
|
{
"year": 2019,
"sha1": "1866eb5a90c0dc0cfde4a0d8776afe60d30455aa",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41746-019-0203-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5537a5113425ee1c8d161984b0d9ac1af4e047f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225483366
|
pes2o/s2orc
|
v3-fos-license
|
Brightly Colored to Stay in the Dark. Revealing of the Polychromy of the Lot Sarcophagus in the Catacomb of San Sebastiano in Rome
The Lot Sarcophagus is one of the most relevant funerary sculptures of late antiquity (mid-4th century AC). Some of its remarkable aspects are the following: (i) it is still preserved in situ; (ii) most of the carved scenes are rarities or unica; (iii) not all the sculpture work has been completed, which allows us to analyse the executive process; (iv) many traces of polychromy have remained. This paper is focused on the characterization of the residual polychromy by using in-situ non-invasive techniques. Furthermore, a few micro samples were taken, to be analysed in the laboratory to study the composition of some deposits and to define whether a preparatory layer was present under the coloured layer. The data showed that the very rich polychromy of the Lot Sarcophagus was made of Egyptian blue, yellow ochre, and three different types of red: two inorganic (red ochre and cinnabar) and one organic-based (madder lake). Furthermore, some decorations, completely vanished and no longer visible to the naked eye, have been rediscovered, also providing details on the construction phases. During the project, the 3D model of the sarcophagus was acquired, which was afterwards used to map the results of the diagnostic campaign.
Introduction
Although many ancient civilizations are known to have made use of polychromy on stone sculptures and architectural elements [1], most of these colours were lost.
After a period in which sculpted objects were mistakenly interpreted as pure white, a long discussion among scholars about whether ancient objects were painted or not began in the 1800s and continued throughout the past century. In recent years, the interest in the study of original polychromy on ancient stone artefacts has grown. This work is included in a wider research project aimed at elucidating the use of colours on sculptures in Roman times.
Gathering as much information as possible about the original polychromy of an archaeological find is extremely important, as it offers a new key for presenting the sculpture's original appearance to scholars and the public.
A correct understanding of the original polychromy is complicated by the fact that the few traces of residual colour are often small due to the vicissitudes the sculptures underwent, such as burial for hundreds or thousands of years, excavation and exposure to the environment, bad storage, and severe cleaning treatments.

The sculpted artwork dates to the mid-4th century AC, as its stylistic manner seems to demonstrate that it was produced by a thriving Roman marble workshop [33].
The Lot Sarcophagus was found in June 1950 during the excavation, conducted by Father Ferrua, of a sumptuous mausoleum adjacent to the Basilica of Saint Sebastian. The sarcophagus was positioned about 2.60 m below the soil and was protected all around by strong masonry. Over the lid there was also another type of less resistant masonry, which Ferrua thought had been added when the sarcophagus was reopened to insert a second deceased. Between the masonry and the sculpted surface there was a layer of lime mortar that was removed by the discoverers with hard chisel work, leaving some traces on the surface. After the excavation, many colour traces disappeared [34].
During recent conservation work, the entire marble surface was cleaned in a way respectful of and noninvasive towards both the colors and the marble surface 1. This allowed the recovery of many colour traces under the residual lime mortar [35].
The frieze on the lid shows a group of bearers returning to the city after the hunt. In the centre of the frieze two cupids held a tabula, unfortunately without any inscription.
On the chest, in the upper register and on the left side, Christ raising Lazarus in the presence of Mary (Lazarus' sister) and Martha, Peter (with the rooster at his feet) denying Jesus, and Moses receiving the Law are represented. On the right side: Abraham who is about to sacrifice his son Isaac, a servant with a donkey, and Cain and Abel offering the gift of their work to God the Father. In the centre of the upper register, there is a shell where the couple buried in the sarcophagus, whose names are not known, is probably portrayed. The lower register is very interesting because it shows unusual scenes and the relief is not completely carved. On the left, Lot and his family fleeing Sodom and the scene of God's angel ordering Adam and Eve to leave the Garden of Eden are represented. Below the central shell, the dionysiac scene of the pressing of the grapes is displayed. In the last sector, which is barely outlined, we can recognise only the massacre of the innocents on the far right, while the previous scene remains uncertain.
Materials and Methods
The analytical protocol comprised two diagnostic phases. The first one was based on the in-situ application of multi-band photographic techniques (UVL and VIL) and two non-invasive spectroscopic techniques: X-ray fluorescence (XRF) and fibre optic reflectance spectroscopy (FORS). At the same time, the images for creating the 3D model were acquired. The digital model was later used to map the diagnostic data; this application is described in an article currently being drafted. In the second phase, after the collection and evaluation of the data, micro samples were taken for the still unsolved questions and afterwards analysed in a laboratory.
In-Situ Instrumentation
For the photographic UVL, a Canon EOS 7D digital camera (18 Mpixel, CMOS sensor) was used. The camera was equipped with a Canon EFS 28 mm f/3.5 lens with a B+W 486 UV/IR blocking filter to cut reflected ultraviolet. As sources, two Quantum T5D flashes with B+W UV black 403 filters were used. The same set-up was used for acquiring visible images, removing the filters from the flashes.
For VIL acquisitions, the surfaces were irradiated with visible light by using two Quantum T5D flashes mounted with B+W 486 UV/IR blocking filters; the infrared luminescence was collected with a modified (built-in IR filter removed) Canon EOS 400D (10.1 Mpixel, CMOS sensor) with a Canon EFS 28 mm lens and a B+W 093 infrared filter to cut all stray radiation from the visible spectrum, thus collecting only infrared luminescence. A white Spectralon® plate (WS-1S-L Labsphere certified standard) was used as reference.
Digital images were acquired in situ by a portable microscope Scalar DG-2A equipped with optical zoom with magnifications from 25× to 200×. This was used for the documentation of both measured areas and details. Images were acquired at 25× magnification (investigated area of 13 × 8 mm).
Fibre optic reflectance spectroscopy (FORS) measurements were carried out in the spectral range 350-900 nm by using a tungsten lamp (20 W) as the source and an Ocean Optics grating spectrometer (model HR2000) as the detector, connected by Y-shaped optical fibre bundles. The measuring head geometry was chosen to be as small as possible due to the fact that the sculpted surface presented very few flat areas. Therefore, the head configuration for the measurement was 0°/0°. The probe head in contact with the surface was supplied with a homemade black cylinder, which at the same time guarantees a soft contact, permits fixing the best distance from the surface in order to maximise the signal, and keeps the measured area shielded from undesired external light. The analysed area was 2 mm in diameter, and each acquired spectrum was the average of 30 scans. As reference, a Spectralon® plate was used. Spectra were compared with reference ones available in the ISPC-CNR reference database to identify pure pigments or admixtures.
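As a minimal sketch of how such a comparison against reference spectra can be automated (the reference dictionary below stands in for the ISPC-CNR database; it is an illustrative assumption, not the procedure actually used), consider a simple correlation ranking in Python:

```python
import numpy as np

def best_match(measured, references):
    """Rank reference spectra by Pearson correlation with a measured FORS
    spectrum; all spectra are assumed resampled on the same wavelength
    grid. `references` maps a pigment name to its reflectance array."""
    scores = {name: float(np.corrcoef(measured, ref)[0, 1])
              for name, ref in references.items()}
    return max(scores, key=scores.get), scores

# Hypothetical usage: best_match(spectrum, {"cinnabar": ref1, "red ochre": ref2})
```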
The X-ray fluorescence (XRF) spectra were collected by means of a handheld Tracer III SD Bruker spectrometer, equipped with a rhodium anode and a solid-state silicon detector energy-dispersive system. The set-up used was 40 keV and 12 µA for 120 s. The measuring area was an elliptical spot of 4 × 7 mm. For data processing, the ARTAX software was used. As far as possible, efforts were made to acquire the measurements with the two spot techniques (XRF and FORS) on the same area, also taking into account that the dimensions of the investigated areas are different for the two techniques.
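A rough sketch of how a calibrated XRF spectrum can be screened for the characteristic lines of the elements discussed later is given below; the line energies come from standard X-ray tables, while the input arrays and the detection criterion are illustrative assumptions rather than the processing actually performed in ARTAX.

```python
import numpy as np

# Principal emission lines (keV): K-alpha for the lighter elements,
# L-alpha for mercury and lead (values from standard X-ray tables).
LINES_KEV = {"Ca": 3.69, "Mn": 5.90, "Fe": 6.40,
             "Cu": 8.05, "Hg": 9.99, "Pb": 10.55}

def screen_elements(energy_kev, counts, window=0.15, k=3.0):
    """Flag elements whose line region rises above the median background
    of a calibrated spectrum by at least k Poisson-style sigmas."""
    background = np.median(counts)
    detected = {}
    for element, e0 in LINES_KEV.items():
        region = counts[np.abs(energy_kev - e0) < window]
        peak = region.max() if region.size else 0.0
        detected[element] = bool(peak > background + k * np.sqrt(background + 1.0))
    return detected
```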
One of the criteria for choosing these techniques is their flexibility to be used in unconventional conditions, such as those of the environment of the sarcophagus inside the catacomb, where the conditions were rather harsh (temperature range 12-15 °C, RH 90-95%). The number of in-situ analyses is reported in Table 1. The digital model of the sarcophagus was created by using Agisoft PhotoScan (http://www.agisoft.com/), a software that performs photogrammetric processing of digital images [36], generating 3D spatial data. For the sarcophagus and the surrounding environment, about 200 photos were acquired using a Nikon D3300 (24.2 Mpixel, CMOS sensor) equipped with a Nikkor AF-S 18-55 mm lens.
Laboratory Instrumentation
An optical microscope Nikon Eclipse E600 was used for the acquisition of images of the micro samples collected in both visible reflected light and UV (filters λ ex 330-380 nm; λ em > 410 nm).
FT-IR spectra were recorded using an Alpha Bruker Optics Spectrophotometer by using an Attenuated Total Reflectance (ATR) module equipped with a diamond crystal. For each spectrum, 64 scans were acquired in the spectral range 4000-375 cm −1 , with 4 cm −1 resolution. All the spectra were processed by using the ATR correction tool for ATR diamond crystal with the Opus 7.0.122 software by Bruker Optics.
Scanning Electron Microscope (SEM) measurements were performed with a FEI-ESEM-Quanta 200 instrument in low vacuum (1 Torr). It was equipped with both secondary and back-scattered electron detectors and with EDS detector for micro-analysis. The electrons in the primary beam were accelerated with a potential difference of 25 keV.
Samples for studying the biological growth were taken by using sterile cotton swabs. The samples were then cultured in the laboratory by passing the swab directly over nutrient media. Potato Dextrose Agar (PDA, Difco™) was used to detect a possible fungal presence, whereas a liquid BG11M nutrient medium was used to detect a possible phototrophic presence. The latter nutrient medium was prepared according to Rippka [37] and then adjusted with 5 mL/L of NaNO3. The morphological characterization was made by using a Nikon Eclipse E200 microscope, according to Komarek et al. [38,39].
Results
As already described, at the time of the discovery (June 1950), the sarcophagus was buried in a niche under the floor of the mausoleum. Between the walls of the sarcophagus and the walls of the niche there was a tenacious and very adherent masonry, as reported by Father Ferrua [34].
Microsamples of both white and brownish materials, deposited either on marble or on painting traces, were analysed by means of FT-IR in ATR mode. They were all constituted by calcite (CaCO3) together with some silicates. Therefore, these samples (Table 2) were identified as residues of the masonry mentioned at the time of the excavation and not completely removed.

The UVL images acquired covered most of the surface of the sarcophagus. In many cases, the same area was investigated not only by positioning the set-up in front of the surface but also with different orientations, since traces were also found in the most hidden areas of the sculptures.
In the UVL images, in particular of the first and second register of the surface, several circular spots of different dimensions, characterized by a brilliant red luminescence emission, were evidenced. As an example, both the Vis and UVL images acquired on the tabula, the hand, and the arm of the cupid2 are reported in Figure 2a-d. These red spots in UVL were supposed to be due to a biological growth, appearing blue-green in the VIS images (Figure 2a,c and Figure 3).

The identification of the spots of supposed biological origin was made through laboratory cultivation techniques (Table 2). No fungal growth was observed on the PDA medium after two weeks of incubation time. Filamentous cyanobacteria such as Phormidium sp. and round-shaped coccoid colonial cyanobacteria were detected in the BG11M nutrient medium. This kind of bacteria is characterized by a very efficient photosynthetic apparatus able to grow in dim conditions at very low light levels (10-15 lux) [40,41]. The photosynthesis efficacy and the colonization velocity are influenced by the light wavelength. Therefore, this feature could be used to control the possible regrowth after the restoration process by using blue sources for lighting [42,43]. These colonizers of harsh environments contain photosynthetic pigments, mainly chlorophylls and phycobiliproteins, the latter giving the blue-green color in visible light. The characteristic red fluorescence under UV light is instead due to the presence of chlorophylls.
Therefore, when looking at UVL images, attention must be paid to the presence of this biological growth to distinguish it by those materials that have a similar color in UVL images but were, on the contrary, intentionally applied.
Indeed, in UVL images, a red fluorescence was also present in several of the areas colored in hues from pink to red in visible light, but also in some areas in which the colour was no longer visible to the naked eye. In Figure 4, images of the cupid1 and the cloth of Abel are reported. The cupid1 shows red fluorescence on the wing, the lips, and the eye, while in the cloth of Abel the fluorescence is evident both on the folds and the belt. Through the spot analyses (FORS and XRF) it was possible to identify the presence of three different kinds of red pigments. The FORS spectra acquired in those areas characterized by a pink/red UV luminescence indicate the use of a red lake, likely madder lake [44,45]. As an example, in Figure 5c the FORS spectrum acquired on the line decorating the milestone (red 1, Figure 5a, red line) is reported. Red lake was widely used for the decoration of the figures carved on the lid, for example for the dress of the cursor, which was completely painted in red, and for the edges and folds of the clothes of the bearers on the right. In addition, red lake was used to underline most of the details of the faces (lips and eyebrows), the wings of the cupids, and the linear decorations of the edge of the lid. Fewer traces of red lake were found in the second register: a linear decoration on the upper edge of the sarcophagus, the belt and cloth of Abel, and the underlining of the edges of the stones of the wall behind the legs of Cain, Abel, and God the Father. No traces of red lake were found in the third register.
Other red traces (i.e., the Roman numeral on the milestone, red 2, Figure 5a) appeared black in the UVL images (Figure 5b). The acquired data are reported in Figure 5d,e. The FORS spectra (Figure 5d) acquired in two areas on the Roman numeral show the characteristic "S shape" of cinnabar (HgS) [28,29], with the inflection point at 595 nm (inflection point of reference cinnabar at 598 nm). Moreover, in the XRF spectrum of the same areas (Figure 5e), signals of mercury (Hg) are clearly visible. Apart from the Roman numeral on the milestone and the lips of the cupid2 (Figure 5f,j), both belonging to the lid, cinnabar was found mainly in the second register: for the edges of the clothes of the characters (Figure 5g,i,k,l), for a line decorating the steps of the tomb of Lazarus on the left, and for the outline and decorations of the shoes of the characters on the left of the central shell (Figure 5i). No traces of cinnabar were found in the third register.
In the XRF spectra of Figure 5e, together with the signals of calcium (Ca) and mercury (Hg), iron (Fe, in a low amount) and lead (Pb, in traces) were detected. The iron (Fe) was present in almost all the areas where cinnabar was also identified, but its amount was variable. The presence of this element could have been due to the prolonged contact of the surfaces with the earth and the masonry previously described. Yet, in the XRF spectra acquired in areas of the marble without traces of paint, the iron counts were absent (Figure 5e, grey spectrum), and this could suggest an intentional use, even if in different amounts, of red ochre blended with cinnabar.
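As a side note on data treatment, the inflection point quoted above is typically located numerically as the maximum of the first derivative of the reflectance curve; the sketch below uses a synthetic placeholder spectrum, not the measured data.

```python
import numpy as np

def inflection_point(wavelength_nm, reflectance):
    """Estimate the inflection point of a sigmoid-shaped ('S shape')
    reflectance spectrum as the wavelength of maximum first derivative."""
    derivative = np.gradient(reflectance, wavelength_nm)
    return wavelength_nm[np.argmax(derivative)]

# Synthetic example mimicking a cinnabar-like transition edge near 595 nm
wl = np.linspace(350.0, 900.0, 551)
refl = 1.0 / (1.0 + np.exp(-(wl - 595.0) / 12.0))  # placeholder sigmoid
print(inflection_point(wl, refl))                   # -> approximately 595 nm
```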
Red ochre (red3) was identified in the traces of the decorations of the frames, in the central part of the tabula (Figure 2a), and in Isaac's hair, as the only red pigment used. In Figure 6a,b the FORS and XRF spectra acquired in these two areas are reported. FORS spectra are characterized by the typical shape of red ochres, with absorption bands at about 680 and 870 nm due to hematite (Fe2O3) [28,46].

In the yellow areas, the FORS spectra (Figure 6c) showed the presence of iron hydroxides (FeO(OH)·xH2O), confirmed by XRF spectra (not reported) in which the only signal (apart from that of calcium (Ca)) was that of iron (Fe). The FT-IR ATR analysis of a micro-sample (Table 2) taken from the yellow robe of Maria provided the definitive identification of the yellow pigment thanks to the characteristic bands at 905 and 795 cm−1 of the goethite mineral (FeO(OH)) [47,48]. Yellow ochre was used on the lid to paint the hair and for both the bracelets and anklets of the cupids. In other Roman sarcophagi analyzed by the authors [23,24], yellow ochre was found as a priming layer for the gold leaf of the cupids' bracelets. Other scholars also describe the use of iron oxides and hydroxides, either mixed or not with other materials, as a preparation layer for gilding [49,50]. In the case of the Lot Sarcophagus, no traces of gold were found. In the second register, yellow ochre was used for the hair and the cloth of the bride (left character in the shell), the cloth of Maria, and for several decorations such as a line on the wooden stick of Peter, the plumage of the rooster, the handle of Abraham's sword, the yellow line at the base of the altar, and the stars painted on the stones of the wall behind the characters. Yellow ochre was also one of the two pigments found in the third register, but only in the central scene, where it was used for the left cupid's hair in the tub with grapes in the Dionysian scene.
Clothes were also characterized by dark brown brushstrokes outlining the folds and the details. The XRF spectra acquired in correspondence with the brown brushstrokes on the yellow robes of the sister of Lazarus and of the bride show iron signals (probably also due to the yellow layer below) and manganese (Mn) signals (Figure 6d). This brown color was widely used to highlight details, such as the nails of the hands or the studs of the reins of the donkey. The FORS spectrum (not shown here) acquired in correspondence with the brown line delimiting the nail on the groom's hand, an area without any colored layer below, showed the presence of a brown iron-manganese-based pigment, most likely umber.
A large number of traces of blue color was still visible, and VIL confirmed that the pigment used was Egyptian blue. The identification was further confirmed by spot analyses (Figure 7g,h). Egyptian blue was used widely on all surfaces of the sarcophagus, starting from the lid, where it was used for the decoration of the edges and of the internal frame of the tabula, and for the wings of the cupids. In the second register, it was used for the clothes (Figure 7d), the decorations of the tomb of Lazarus, the studs of the reins of the donkey (Figure 7d), and the objects that the characters hold in their hands, such as the groom's book (Figure 7e). It was also used to decorate the rock on which God the Father sits and the wall behind the characters (Figure 7f).
In addition, in some areas where Egyptian blue was no longer visible to the naked eye, VIL was able to reveal its presence. As an example, Egyptian blue was found in the third register, in the Dionysian scene, in the eyes of the cupid on the left, outside the tub, and in an eye of the cupid in the center of the tub. Another important finding was the discovery of a wave-shaped decoration with trefoil leaves (Figure 8c,d) along the entire edge of the lid. Interestingly, the decoration continued onto the broken part (Figure 8d), thus proving that the pictorial decoration was done after the break took place. Egyptian blue was also found, in a very small amount, in some purple decorations such as the left border of the sleeve and the small circles on the vest of Isaac, and some shadings on the plumage of the rooster. The XRF spectra (not reported here) showed signals of iron (Fe), mercury (Hg), and manganese (Mn), but not of copper (Cu), most likely due to the low quantity of Egyptian blue present. Therefore, it was decided to take a micro-sample from the purple left border of the vest of Isaac to clarify the composition of this color (Table 2). SEM-EDS analyses of the cross section of the purple micro-sample confirmed the data obtained with portable XRF and, in addition, the rare blue crystals in the layer showed signals of copper (Cu) and silicon (Si). This purple color was thus obtained with a complex blend of umber, cinnabar, and Egyptian blue.
Furthermore, in the purple micro-sample there was no preparatory layer, nor was one present in the sample of yellow color taken from the second register from the robe of Maria (Table 2). This latter micro-sample was already mentioned above when the yellow pigments were described.
In some areas of the lid and on the edge of the sarcophagus, a bright yellow UV luminescence was observable (Figure 2b). In Figure 9a,b, images of a detail of the edge of the sarcophagus in visible light and UVL are reported, respectively.
The XRF spectra acquired on those areas showed much more intense lead (Pb) signals than in the measured painted areas belonging to the second register. It has been reported that lead white pigments have photoluminescence properties [51], and it could therefore be inferred that the UVL was due to the presence of lead white. A micro-sample was therefore taken from the upper edge of the sarcophagus (Figure 9a, red square) to clarify these findings (Table 2).
Unlike the other micro-samples taken, in this latter one a white, relatively homogeneous preparatory layer was present (Figure 9c). The EDS analyses showed almost exclusively signals of lead (Pb), while iron (Fe) was present in the analyses of some sporadic colored grains. The FT-IR spectrum of the preparatory layer highlighted the presence of lead white, basic lead carbonate (PbCO3·Pb(OH)2), through its characteristic absorption bands at 1407 and 684 cm−1. The red layer was made of an admixture of cinnabar, red ochre, some lead white, and red lake. The latter, which in the VIS image is brilliant red (Figure 9c), appears pinkish in the UVL image (Figure 9d).
The use of the preparatory layer only on some parts, perhaps to impart a certain relief to the decorative motifs, is a rather rare finding and indicates precise choices by the painters aimed at specific results. Recently, the use of gypsum (CaSO4·2H2O) as a priming layer has been reported for a 3rd century AD repainting on the Ulpia Domnina sarcophagus of the National Roman Museum in Rome, while the "original" painting (160-180 AD) was applied directly on the marble [50]. The high hiding power [52,53] of lead white renders it particularly suitable for tempera techniques on stone or marble. Notwithstanding this, the use of lead white for preparatory layers was not common in Roman sarcophagi. Lead white was found on painted Macedonian funerary monuments, both as a preparatory layer and blended with pigments to create different hues and enrich the painters' palette [54].
In the paper of 1951 [34], written shortly after the excavation of the sarcophagus, a rich polychromy was described, although the author underlines how it largely vanished in a short time 2. However, in that detailed description of the polychromy, in addition to the colors and hues identified in this work, the presence of two green robes was mentioned (those of the groom and of Abraham). In this case, we were not successful, and no traces of any pigment (either a green pigment or a mixture of blue and yellow) were found.
One of the most common problems emerging when studying traces of polychromy is the identification of the binder, which is, by its nature, the material that most readily degrades with the passage of time and with the conditions in which the work is found. This susceptibility to degradation is also, in large part, the cause of the loss of polychromy.
In this case, the non-invasive techniques used in situ did not give any indication about the binder, and analyses of the micro-samples provided only a few hints. In the FT-IR spectra of the micro-samples, no bands attributable to the binder (i.e., organic materials) were present; again, this could be due to the quantity present being below the sensitivity of the technique. In the SEM-EDS analyses of the three cross sections obtained from the three micro-samples, beyond the elements correlated with the pigments, signals of phosphorus (P) were quite evident. Phosphorus is an element present in bone black [55]. Although no black crystals were present in the layers, the distribution of the phosphorus was uniform in the pictorial layers and not localized, as it would be if due to bone black. These considerations lead to the hypothesis that the phosphorus can be related to a binder such as casein, in which phosphoproteins are present [56].
For the 3D model, the three registers of the sarcophagus were reconstructed individually and subsequently aligned into a single model. About 40 photos were used for each register. By way of example, we describe here the reconstruction of the second register. The 3D model reconstruction started with image alignment; in this first step, a sparse point cloud with 35,000 points was produced as an initial 3D representation of the scene. The second step increased the point cloud up to 6,000,000 points, and as the cloud derives directly from the photographic images, it was colored.
Based on this last step, the software computes the reconstruction of the 3D polygonal mesh. After the geometry (mesh) is generated, it can be textured using the initial photos. Texture mapping is a way to add surface details, for example color information, by projecting one or more images onto the surface of the 3D model. As a result of the process, a model of 50,000,000 faces with a 3720 × 3720 pixel texture was created (Figure 10).
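For readers who wish to reproduce such a workflow, a minimal sketch is given below. It assumes the Agisoft Metashape 1.x Python API (the text does not name the software actually used, and the call names differ between versions); the file path and quality settings are illustrative, not the authors' actual parameters.

```python
# Minimal photogrammetry pipeline sketch, assuming the Agisoft Metashape 1.x
# Python API (the software used for the sarcophagus model is not named in the
# text; call names differ in other versions). Paths and settings are illustrative.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# About 40 photos per register, as described above.
chunk.addPhotos(sorted(glob.glob("second_register/*.jpg")))

# Step 1: image alignment -> sparse point cloud (~35,000 points in the paper).
chunk.matchPhotos(downscale=1)
chunk.alignCameras()

# Step 2: densification -> colored dense cloud (~6,000,000 points in the paper);
# the color comes directly from the source photographs.
chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud()

# Step 3: polygonal mesh reconstructed from the dense cloud.
chunk.buildModel(source_data=Metashape.DenseCloudData)

# Step 4: texture mapping, projecting the photos onto the mesh
# (here a 3720 x 3720 pixel texture, matching the size reported above).
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=3720)

doc.save("second_register.psx")
```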
Conclusions
In this work, the surviving polychromy of the Lot Sarcophagus was studied in situ, mainly by applying a non-invasive approach using complementary portable techniques. Furthermore, some aspects not resolved by the non-invasive measurements were clarified through a few targeted micro-samples.
The study allowed us to unambiguously identify the rich palette used by the craftsmen who created this remarkable work of art, also revealing details that are no longer visible but were important for the construction of the work. Moreover, through the identification of extraneous incrustations and biological growth, the study provided useful information both for the choice of the cleaning method and for future conservation. Indeed, the specific environmental conditions favored the development of cyanobacteria; after the restoration, recolonization could be avoided by impeding photosynthesis through control of the illumination quality, cold light sources such as blue monochromatic lamps (λ ~450 nm) being recommended.
During the study of the sarcophagus, it also became clear that the polychromy completes the sculptural work. Some details are rendered only through color and not in relief. In addition, the different decoration of the clothes helps to identify the main characters of the scenes. Therefore, the polychromy of the sculpture was not a subsequent addition but was integrated into the project of the work.
The Lot Sarcophagus is one of a kind; it is well known to scholars (archaeologists and art historians) but very little known to the general public, because its location is not included in the museum path of the catacomb. With the disclosure of the data obtained from this study, using the acquired 3D model, this lack of knowledge can be overcome through special tools, currently in preparation, that will be available in the museum.
Color was, for the Romans, a question of civil status, and the use of precious pigments served to emphasize the social level of the customer. In the case of wall paintings or polychrome statues in the domus, the richness of the colors had to convey a message of luxury and high social status.
However, colors could also be hidden rather than shown. This is the case for many objects, sarcophagi, and funeral slabs present in the catacombs of Rome and thus visible to only a few people. In the case of the Lot Sarcophagus, covered by layers of both mortar and masonry, the rich decoration was present for the deceased alone.
Brn-1 and Brn-2 share crucial roles in the production and positioning of mouse neocortical neurons
Formation of highly organized neocortical structure depends on the production and correct placement of the appropriate number and types of neurons. POU homeodomain proteins Brn-1 and Brn-2 are coexpressed in the developing neocortex, both in the late precursor cells and in the migrating neurons. Here we show that double disruption of both Brn-1 and Brn-2 genes in mice leads to abnormal formation of the neocortex, with dramatically reduced production of layer IV-II neurons and defective migration of neurons unable to express mDab1. These data indicate that Brn-1 and Brn-2 share roles in the production and positioning of neurons during neocortical development.
The mature neocortex is organized into six cell layers, each of which contains neurons with similar morphologies, molecular properties, and projection patterns. The development of this neocortical structure depends on a highly ordered pattern of neuronal production and migration. Cortical neurons that comprise each layer are sequentially produced in the ventricular zone of the dorsal telencephalon (Angevine and Sidman 1961; Takahashi et al. 1999). Although the regulatory factors that function in this sequential production of a variety of layer-specific neurons have not been identified in mammals, in Drosophila the successive production of different types of cells from neuroblasts has been found to require a temporally stereotyped pattern of expression of a set of transcription factors, including the Drosophila POU transcription factors Pdm1 and Pdm2 (Isshiki et al. 2001). In mammals, newly produced neurons leave their birthplace, migrate toward the cortical surface, and form cortical layers in an inside-out pattern with respect to their time of birth (Angevine and Sidman 1961; Rakic 1972). Recent genetic studies have identified large numbers of functional molecules involved in the migration/positioning of neocortical neurons (for review, see Rice and Curran 1999).
Brn-1 and Brn-2, members of the mammalian class III POU transcription factor family, are prominently expressed in the embryonic brain, including the neocortex (He et al. 1989). Each single mutant, however, shows abnormalities only in limited brain regions. In Brn-2 mutant neonates, neuronal loss was observed only in the hypothalamic supraoptic and paraventricular nuclei, where Brn-1 is not expressed (Nakai et al. 1995; Schonemann et al. 1995). In Brn-1 mutants, remarkable changes in brain morphology were observed only in the hippocampus, where Brn-2 expression is barely detectable (data not shown). In the neocortex, where both Brn-1 and Brn-2 are expressed, no overt developmental defects were seen in either single mutant. These observations suggest functional complementation between Brn-1 and Brn-2 in neocortical development.
Results and Discussion
To explore their possible overlapping functions in neocortical development, we generated Brn-1/Brn-2 double homozygous mutants by intercrossing double heterozygotes that were healthy and fertile, with no apparent phenotype. Double homozygous mutants were born at the expected Mendelian ratio (76 double homozygous mutants among 1192 pups), but all of them died within 1 h after birth. In contrast to the limited abnormalities in Brn-1 −/− or Brn-2 −/− single mutants, Brn-1/Brn-2 double mutants suffered severe, broad brain defects. The olfactory bulb showed hypoplasia (Fig. 1A,B), and the cerebellum was less foliated, with loosely packed Purkinje cells (Fig. 1C,D). The neocortex was severely affected; its thickness was markedly reduced, and the stratification of the cortical neurons appeared to be disorganized (Fig. 1E,F).
The hypoplastic neocortex could be caused by reduced cell proliferation or accelerated cell death during embryonic corticogenesis. Because there was no evidence of increased apoptosis in Brn-1/Brn-2 double mutant cortex from embryonic day 14.5 (E14.5) to postnatal day 0 (P0; data not shown), we examined the proliferation of cortical progenitor cells by bromodeoxyuridine (BrdU) labeling. In mice, most cortical plate neurons are produced in the ventricular zone (VZ) or in the subventricular zone (SVZ) from E12.5 to E16.5 (The Boulder Committee 1970; Takahashi et al. 1999). Up to E13.5, there was no significant difference in the number of BrdU-labeled cells in the VZ of the double mutant embryos, compared with wild-type (E12.5: 100.0% ± 1.8% of wild-type; E13.5: 100.8% ± 2.2% of wild-type; Fig. 2A,A′). Reduced cell proliferation in the VZ was observed at E14.5 and thereafter in Brn-1/Brn-2 mutant neocortex (E14.5: 63.4% ± 2.6% of wild-type; E16.5: 60.2% ± 3.4% of wild-type; Fig. 2B,B′,C,C′). The reduction in the number of BrdU-labeled cells was particularly severe in the cortical SVZ in the double mutant (E16.5: 15.1% ± 2.5% of wild-type; Fig. 2C,C′). Despite the hypoplasticity of the Brn-1/Brn-2-deficient cortex, expression of GAD67 and calbindin appeared to be unaffected in the E19.0 neocortex (Fig. 3I,J; data not shown), suggesting intact generation and migration of the cortical interneurons, most of which are derived from the ganglionic eminence (Anderson et al. 1997). These results indicate that Brn-1 and Brn-2 share an essential role in the proliferation of cortical precursor cells within the VZ/SVZ from E14.5 onward, and that the reduction in subsequent cortical cell production could result in the hypoplastic neocortex seen in the double mutant neonate. Analysis of the temporal expression pattern of Brn-1 and Brn-2 proteins in the developing wild-type neocortex revealed that their expression in the VZ is initiated at ∼E14.5 and is prominent thereafter in the VZ/SVZ (Fig. 2D-I), a pattern that corresponds with the period of reduced cell proliferation in the neocortex of double mutant embryos. These results suggest that Brn-1 and Brn-2 may function in the proliferation of late cortical progenitor cells in a cell-autonomous manner.
Lineage analyses and birthdating studies suggest that common cortical precursor cells first produce neurons of layer VI and then layer V (at E11.5-E15.5) and, even later, generate neurons destined for layers IV-II (at E14.5-E17.0) by successive cell division (Luskin et al. 1988; Takahashi et al. 1999). From the late embryonic neurogenesis stage, glial progenitor cells also proliferate and increase their numbers (Berman et al. 1997), differentiating into astrocytes or oligodendrocytes during a postnatal stage. The finding that Brn-1 and Brn-2 function in cell proliferation specifically at the late neurogenesis stage prompted us to examine whether Brn-1 and Brn-2 function in the production of upper-layer neurons and/or in the generation/expansion of glial progenitor cells. We assessed the formation of each cortical layer and the status of gliogenesis in the double mutant cortex at E19.0 or E18.5, using the following markers for the different layers and for glial progenitors: Tbr-1 for layer VI, subplate, and SVZ (Fig. 3A); Wnt7b for layer VI (data not shown; Rubenstein et al. 1999); ER81 for layer V (Fig. 3C); RORβ for layer IV (Fig. 3E; Weimann et al. 1999); mSorLA or Svet1 for layers II/III and SVZ cells (Feng et al. 1994; Kurtz et al. 1994); and CR-50 for Cajal-Retzius neurons in the marginal zone (MZ; Fig. 3O; Ogawa et al. 1995; D'Arcangelo et al. 1997). The marker studies indicated that the initial step of gliogenesis seemed to be unaffected in Brn-1/Brn-2 mutant neocortex (Fig. 3L,N), whereas the numbers of RORβ-positive, mSorLA-positive, or Svet1-positive neurons were dramatically reduced in Brn-1/Brn-2 mutant neocortex, with mSorLA-expressing or Svet1-expressing SVZ cells lining the entire surface of the enlarged lateral ventricles of the mutant brains (Fig. 3F,H; data not shown). These results suggest that Brn-1 and Brn-2 are essential for proper production of neocortical neurons destined for layers IV-II.
Molecular marker analysis also revealed abnormal layering of the remaining cortical neurons in Brn-1/Brn-2-deficient neocortex, in which the majority of ER81-positive layer V neurons, normally laminated above the Tbr-1-positive or Wnt7b-positive layer VI (Fig. 3A,C; data not shown), were found beneath the Tbr-1-positive or Wnt7b-positive layer (Fig. 3B,D; data not shown). It has been well documented that the laminar structure of the neocortex is built by migration of successively produced neurons in an inside-to-outside fashion, such that neurons born earlier reside in deeper layers, and those born later occupy more superficial layers within the cortical plate (CP) between the MZ and the subplate (SP). Thus, the largely inverted packing pattern of layer V and VI neurons in Brn-1/Brn-2 mutant cortex could be caused either by abnormal cell migration or by cell fate defects such that the timing of layer VI and layer V neuronogenesis is inverted. To distinguish between the two possibilities, we labeled E12.5, E13.5, and E14.5 embryos, stages during which layer VI-V neuronogenesis is at a peak, with BrdU and examined the localization of BrdU-positive cortical neurons in E19.0 embryos. If the abnormal lamination is caused by cell fate defects, BrdU-labeled neurons should appear in comparable positions in the wild-type and Brn-1/Brn-2 mutant cortices. Conversely, if neuronal migration is affected, neurons labeled at the same time should occupy different positions in wild-type and mutant mice. In E19.0 wild-type cortex, cells born on E12.5 occupied the SP and the deepest part of layer VI (Fig. 4A), and most of the cells born on E13.5 predominantly occupied layer VI above the E12.5-born cohort (Fig. 4B). The relative positions of E13.5-born to E12.5-born neurons in the Brn-1/Brn-2-deficient cortex at E19.0 (Fig. 4D,E) were comparable with those in their wild-type littermates (Fig. 4A,B). The positioning of E14.5-born neurons, however, was significantly altered. E14.5-born cells in wild-type cortex occupied layers V and IV in a superficial region of the CP (Fig. 4C), whereas those in Brn-1/Brn-2-deficient cortex remained in the intermediate zone (IZ), beneath the cohort of E12.5-born cells (Fig. 4F). Together with the abnormal localization of the layer V neurons in the IZ of Brn-1/Brn-2 mutant cortex (Fig. 3D), these BrdU neural birthdating experiments suggest abnormal migration of the layer V neurons born after E13.5 in Brn-1/Brn-2 mutant cortex (Fig. 4K′,L,L′).
Correct neuronal migration requires both radial glial fibers, as guiding scaffolds for migrating neurons (Rakic 1972), and Cajal-Retzius neurons, which play a key role in neuronal lamination by producing the secreted Reelin protein (Ogawa et al. 1995; Rice and Curran 1999). The alignment and density of radial glial fibers, labeled with antibodies against B-FABP or Nestin, were not altered (Fig. 3N,R). Furthermore, neither the number of Cajal-Retzius neurons nor their immunolabeling intensity for Reelin was changed in the Brn-1/Brn-2-deficient cortex (Fig. 3P). In fact, Cajal-Retzius neurons in the wild-type cortex expressed neither Brn-1 nor Brn-2 in the E16.5 and E18.5 cortex (Fig. 3O; data not shown). Thus, the migration defects in the Brn-1/Brn-2-deficient cortex do not seem to be a consequence of a disrupted radial glial fiber system or a loss of Reelin-expressing Cajal-Retzius neurons. Given Brn-1/Brn-2 coexpression in migrating neurons both in the IZ and CP (Fig. 2E-I), the altered migration of Brn-1/Brn-2-deficient cortical neurons may result from cell-autonomous defects.
To investigate the molecular mechanisms underlying the neuronal migration defects in Brn-1/Brn-2 mutant cortex, an RT-PCR analysis was performed on various genes involved in neuronal migration (Rice and Curran 1999). mDab1, VLDLR/ApoER2, and α3-integrin have been shown to function in positioning cortical neurons by mediating Reelin signal transduction. CDK5, p35 (one of the CDK5 activator subunits), Lis1 (Pafah1b1), and Doublecortin are also thought to affect neuronal migration in the developing cortex. Among all these tested genes, only mdab1 expression was clearly affected in the Brn-1/Brn-2 double mutant cortex at E16.5 (Fig. 5A,B; data not shown). Therefore, we examined the spatial distribution of the mdab1 mRNA in the cortex of Brn-1/Brn-2 mutant embryos and wild-type littermates by RNA in situ hybridization. In the wild-type cortex at E16.5, mdab1 mRNA was expressed throughout the cortical wall, except for the MZ and SP. High levels of mdab1 mRNA were detected in the upper regions of the IZ and in the CP (Fig. 5E; Rice and Curran 1999). In the Brn-1/Brn-2-deficient cortex at E16.5, mdab1 mRNA expression was significantly reduced throughout the cortical wall and, in particular, was undetectable in the upper region of the IZ (Fig. 5F) just beneath the chondroitin sulfate proteoglycan (CSPG)-positive SP (Sheppard et al. 1991), in which late-born neurons highly expressing p35 were abnormally congested (Fig. 5H,J,L). Therefore, the slight reduction in p35 mRNA levels in the E16.5 mutant cortex detected by RT-PCR analysis (Fig. 5A,B) might be caused by decreased numbers of p35-expressing neurons produced from E14.5 onward. Furthermore, quantitative RT-PCR analysis showed that mdab1 expression was reduced also in Brn-1/Brn-2 double heterozygotes (Fig. 5A,B), which show no histological defects in their neocortex. RNA in situ hybridization also showed that the precipitously graded reduction of mdab1 mRNA levels correlated well with Brn-1/Brn-2 gene dosage (data not shown). These results imply that Brn-1 and Brn-2 act genetically upstream to activate mDab1-dependent positioning processes in cortical neurons. The early-born neurons lacking Brn-1 and Brn-2, however, migrate and split the preplate into the MZ and SP properly (Fig. 5J), which is not seen in the mdab1 mutant cortex; in yotari and scrambler, mutant mice carrying loss-of-function mutations in the mdab1 gene, cortical neurons fail to split the preplate to form the CP between the MZ and SP (Rice and Curran 1999). The maintenance of the integrity of preplate splitting in Brn-1/Brn-2 mutant E16.5 cortex could be due to the redundant function of another class III POU factor, Brn-4, which also shares high homology in its primary structure with Brn-1 and Brn-2 (Mathis et al. 1992). In wild-type as well as double-mutant cortex, Brn-4 expression was also detected in the migrating neurons at ∼E15.5, but was reduced thereafter (Fig. 5M-P). In Brn-1/Brn-2 mutant cortex, mDab1 expression was detected until E15.5 (Fig. 5D) but was hardly detectable at E16.5 (Fig. 5F). Therefore, Brn-4, like Brn-1 and Brn-2, might also be able to activate mDab1-dependent processes in the positioning of early-born neurons.
Here we showed that there are two distinct types of Brn-1/Brn-2 protein expression pattern in the developing neocortex. Brn-1/Brn-2 expression in precursor cells is restricted to a late pool of neural precursors, and Brn-1/Brn-2 are also expressed in a wide range of postmitotic neurons, including Tbr-1-positive cortical plate neurons (data not shown). Double disruption of both Brn-1 and Brn-2 genes in mice led to two types of abnormalities during neocortical development: selective loss of the neurons positive for layer IV-II markers (RORβ, mSorLA, and Svet1), and significantly reduced mDab1 expression in all remaining neurons at a late phase, independently of Brn-1/Brn-2 expression in their precursors.
Several lines of evidence suggest that mDab1 functions downstream of Reelin in a signaling pathway that controls cell positioning in the developing cortex (Rice and Curran 1999). However, it is not yet clear how these molecules dictate the spatial position of cortical neurons, including subplate neurons. Interestingly, in the Brn-1/Brn-2-deficient cortex, mDab1 expression was severely reduced only at a late stage, when most of the E14.5-born neurons migrate through the IZ but do not reach the MZ, remaining congested just beneath the SP. These results therefore imply that mDab1 may be necessary for CP neurons to migrate through the SP. Alternatively, Brn-1 and Brn-2 could also regulate the expression of other molecules that may be essential in this process. On the other hand, the hypoplasticity of the Brn-1/Brn-2-deficient cortex cannot be explained by an inability to express mdab1, because reduced cell proliferation has not been reported in mdab1 mutant cortex, and loss of RORβ-expressing or mSorLA-expressing neurons was not observed in yotari (data not shown). We examined tailless (Monaghan et al. 1997) and pax6 (Tarabykin et al. 2001) expression, which are known to be essential for proper generation of cortical neurons. However, we found no changes in their expression in Brn-1/Brn-2 mutant cortex (data not shown).
Previous reports have indicated that the earliest events of cell class specification within each cortical layer occur in coordination with neuronogenesis within the proliferating zone (McConnell and Kaznowski 1991). At later stages, when superficial layers are being generated, the progenitors become restricted to an upper-layer fate (Frantz and McConnell 1996). A recent report suggests that the subpopulation of SVZ cells derived from the VZ represents neuronal progenitors committed to upper-layer neurons (Tarabykin et al. 2001). Because Brn-1 and Brn-2 are specifically expressed in late precursor cells within the cortical VZ/SVZ and function in the proliferation of these cells both in the VZ and especially in the SVZ, these factors might share an intrinsic role in the production of fate-committed neuronal precursors and/or cortical neurons destined for the upper layers. Further analysis of these overlapping mutants should provide insight into the developmental mechanisms of the mammalian neocortex, with its great diversity of cortical neurons.
Materials and methods
Histology and immunohistochemistry for calbindin, BrdU, and Nestin

Fixed samples in Bouin's fixative were dehydrated and embedded in paraffin blocks, from which 5-8-µm serial sections were cut. Hematoxylin and eosin (HE) staining was performed following standard protocols. For immunohistochemistry, the following antibodies were used: anti-Calbindin (a gift of M. Watanabe, Hokkaido University, Japan), anti-BrdU (Becton Dickinson), anti-Nestin (a gift of Y. Tomooka, Science University of Tokyo, Japan), and anti-Pax6 (a gift of N. Osumi, Tohoku University, Japan). The Vectastain ABC kit (Vector Laboratories) was used for detection. The sections were counterstained with hematoxylin.
BrdU-labeling analysis
For the cell proliferation assay, we injected pregnant mice intraperitoneally with BrdU (50 mg/kg) 1.0 h before death. BrdU-positive cells were visualized as described above. Three embryos for each genotype were analyzed at the indicated stages, and 10 sagittal sections at the level of the olfactory bulb for each embryo were used. The fraction of BrdU-positive cells in the VZ was determined by dividing the number of BrdU-positive nuclei by the total number of the nuclei identified in units of the 200-µm-wide VZ. For the assay in the SVZ, because of the difficulty in distinguishing SVZ cells from postmitotic cells, BrdU-positive SVZ cells were counted in the same units as the assay in the VZ. For birthdating analysis to determine the distributions of the cells labeled with BrdU (30 mg/kg) in the E19.0 neocortical wall, parasagittal sections at the level of the accessory olfactory bulb were used. At this level, 500-µm-wide radial stripes in the medial portions were divided into ∼40-µm-deep bins (20 horizontal bins in wild-type cortex and 14 bins in mutant cortex, respectively), and the position of each heavily and lightly labeled cell was assigned to a bin to generate histograms of the number of labeled cells against depth. Data from five sections from each of two to three littermates were averaged to give the histograms.

RT-PCR analysis

The PCR products obtained were subjected to electrophoresis, and the intensities of each amplified band were analyzed by densitometry. The PCR products for mDab1, p35, CDK5, and β-actin were transferred to nylon-based membranes and hybridized with the following 32P-labeled oligonucleotides specific for each cDNA: mDab1, 5′-AAGGTCAGGATCGCAGCGAAGCCAC-3′; p35, 5′-TCCCCACTGTCCCATGATCGGAGCTG-3′; CDK5, 5′-CCCCATAGGCTCTCTGAACCCCAGT-3′; and β-actin, 5′-CAAGTCATCACTATTGGCAACGA-3′. For relative quantitation of mDab1, p35, and CDK5 mRNA, the radioactivity of the amplified bands was quantitated relative to standard curves obtained by PCR amplification of serially diluted wild-type RT products.
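As an illustration of this quantitation scheme, the sketch below reproduces the two counting steps in Python. It is a minimal sketch under the assumption that per-cell depths and BrdU labels have already been extracted from the sections; the function names and the synthetic example data are ours, not part of the original analysis.

```python
# Minimal sketch of the BrdU quantitation described above; cell counting was
# done on microscope sections, so the per-cell depths and labels here are
# assumed inputs. Bin depth and bin counts follow the text.
import numpy as np

def vz_labeling_fraction(n_brdu_positive: int, n_total: int) -> float:
    """Fraction of BrdU-positive nuclei per 200-um-wide unit of the VZ."""
    return n_brdu_positive / n_total

def birthdating_histogram(cell_depths_um, n_bins: int, bin_depth_um: float = 40.0):
    """Counts of labeled cells within a 500-um-wide radial stripe, assigned to
    ~40-um-deep bins (20 bins for wild-type cortex, 14 for mutant, per the text)."""
    edges = np.arange(0.0, (n_bins + 1) * bin_depth_um, bin_depth_um)
    counts, _ = np.histogram(cell_depths_um, bins=edges)
    return counts

# Average over 5 sections from each of 2-3 littermates (illustrative data only).
rng_sections = [np.random.default_rng(i).uniform(0, 800, size=50) for i in range(5)]
mean_counts = np.mean(
    [birthdating_histogram(s, n_bins=20) for s in rng_sections], axis=0
)
print(vz_labeling_fraction(63, 100))  # e.g., ~63% of wild-type at E14.5
print(mean_counts)
```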
The Image of an Architect and Masonic Symbols in Works by Milorad Pavić
The paper analyzes works by the Serbian postmodernist writer Milorad Pavić. It attempts to prove that he possessed knowledge of the royal art and used masonic symbols related to geometry and architecture in his writing, including the radiant delta, the compasses, masonic gloves, and the clepsydra. It is assumed that under the influence of these ideas the writer created the leading image of an architect and the motif of construction, as freemasons believe in the Great Architect of the Universe. In the short novel Damascene, in line with the beliefs of speculative masonry, the building of a church mirrors the building of a temple in the human soul. Pavić, as an architect, creates the structure of every novel, which he identifies with the golden section. This paper finds special symbols of the divine proportion in his prose, including snail shells, pyramids, and violins. A dynamic structure, as an embodiment of the open-work concept, and a broad spectrum of themes provide artistic communication with a creative recipient. The reader has the opportunity to choose their own style of reading and of solving textual puzzles, because Pavić's prose presents a wide variety of themes, symbols, images, and allusions that embody the secrets of Freemasonry, allowing for various interpretations.
Introduction
Prose by Milorad Pavić (1929-2009), an outstanding Serbian postmodernist, is characterized by a modified prism of literary conventions, a powerful intellectual charge, multiple meanings, and never-ending search. It seems significant that the author introduces a builder's image and multiple mentions of the golden section, as a criterion of perfect form, into his works. The constant search for a new form that would embody the concept of an "open" work manifested itself in the original structures of his novels (a lexicon novel, a crossword-puzzle novel, a clepsydra novel, a tarot novel, a horoscope novel, and a delta novel). An accomplice reader can choose their own reading path and solve textual mysteries, for Pavić's prose abounds in themes, symbols, images, and allusions hiding the secrets of the freemasons and encouraging various interpretations.
Although Pavić's prose has been researched extensively, the topic of masonry has not generated much interest. Focusing on the novels Dictionary of the Khazars, Inner Side of the Wind, Last Love in Constantinople, and Second Body, and the tale Damascene, the Serbian literary scholar Nemanja Radulović was the first to write about the masonic symbols used by the writer. The researcher goes beyond the "royal art" and presents a variety of esoteric and occult elements in Pavić's prose (Radulović, 2012). However, other experimental works of the author, such as The Glass Snail, Unique Item, Multicolored Bread / Invisible Mirror, and A Choir of Birds from Paris, have failed to attract the researchers' attention. This paper aims to study the symbols of masonry in the prose of the Serbian postmodernist, for they entail associative links, help decode the text better, and offer a special game for the creative recipient. To achieve this aim, we use close reading and typological, receptive-interpretive, and intertextual methods of research.
The image of an architect and motif of construction
Architects, the Knights Templar, masonic symbols, and mysticism take various forms in Pavić's works. This paper attempts to distinguish the elements containing the echo of freemasons' secrets and to prove that the writer had in-depth knowledge of masonry and aimed to show it by inserting "identification marks" and "identification hints," which are often the keys to reading his works. It is assumed that these ideas foregrounded an architect's image, the motif of construction, and the image of a temple itself in his prose.
The masonic worldview found its reflection in architecture, painting, theatrical performances, and literature. The mythological prehistory of masonry connects its appearance to Adam, the forefather of humanity and the first builder on earth, to the construction of the Egyptian pyramids, and to the sacraments of the Egyptian priests. A legend holds that masons are the heirs and followers of the constructors who built Solomon's Temple. According to it, the fraternity of freemasons is connected to the knightly orders, especially the Templars.
The masons recognize the "Great Architect of the Universe" as a common name for the deity. In Dictionary of the Khazars, especially its Christian part, we encounter Avram Branković, who is looking for a way to create Adam, the primordial man (Pavić, 1988, p. 19). In the Muslim part of the novel, there is a parable about Adam Ruhani, a primordial angel-like ancestor of humans, or the Spiritual Adam (Pavić, 1988, p. 64-65). In the Jewish part, we find Adam Kadmon, a man and a woman simultaneously (Pavić, 1988, p. 85-86). Presumably, Pavić used the idea of Adam Kadmon as a Jewish variant of the gnostic mythologem of Anthropos, whose nature is revealed in the context of the interpretation of the biblical story about the creation of the human, as some interpreters distinguished between the Adam created out of "the dust of the ground" and the Adam created in the likeness of God (Averyntsev, 2007, p. 32). On this basis, it was believed that Adam Kadmon combines the male and female aspects, continuing the ancient mythological motif of the androgyny of the original man.
It is worth noting that the novel contains a dictionary entry entitled Music mason, where craftsmen sculpt salt and listen to the music of their marble (Pavić, 1988, p. 76-77). Pavić has a violin-shaped self-portrait: he studied at the faculty of philosophy (majoring in "Literature of Yugoslavia") and, at the same time, in the violin class at the conservatoire. The motif of the violin (whose perfect shape could be a symbol of the golden section) is often present in his writings.
Notably, the image of a creator, a constructor, is recurrent in the Serbian writer's works and is always surrounded by an aura of mystery. An architect's macro-image is present in Pavić's crossword-puzzle novel Landscape Painted with Tea, which resonates with Byzantine culture. Atanas Svilar, the main character, designed buildings that were never to be built. Here we also find a story about the construction of the Serbian Hilandar Monastery on Mount Athos. Pavić's Blue Mosque, included in the novel, deserves special attention: for ten years, the mosque's constructor visited a Byzantine temple every day and built the mosque on its model.
Literary scholars call the clepsydra novel Inner Side of the Wind an androgynous novel because it is divided into male and female parts (Mihajlović, 1992). Notably, the sandglass is widely used in masonry as an esoteric symbol. The leitmotif of the novel is construction, as the work contains descriptions of the construction of buildings, especially temples. The main character, Radacha Chihorić, was both an architect and a monk, who at first could not take vows because he belonged to the Patarenes. The Patarenes and Cathars in Serbia and Bosnia were followers of Bogomilism, a heretical doctrine of a dualist nature. Having become a monk, he constructs five temples dedicated to Our Lady, shaped like the Greek letter Θ. Later, the main characters Radacha Chihorić (as a representative of the Byzantine school) and Sandal Krasimirić (as a representative of the Swiss school) simultaneously construct two towers over the Danube. The author creates a metaphor of competition between the representatives of two schools of architecture, eastern and western, drawing parallels between the two artists and two civilizations.
An Egyptian story underlies the short story (interactive drama and short novel) The Glass Snail. Pavić introduces into a modern plot an architect's image whose prototype is Senemut, a prominent ancient Egyptian architect and the most favoured official of Queen Hatshepsut. An ideal reader should become a virtual "archaeologist" to figure out the secrets from the history of ancient Egypt. Notably, the Egyptians made an enormous contribution to the study of the geometry of the world and its "perfect" proportions. For masons, a pyramid is an obvious sign paying tribute to the constructors of the past and an example of a perfect form, for the pyramids were constructed using the golden section. Certainly, a snail as an emblem is not a random choice either: it illustrates the natural spiral structure as an example of "divine proportion." Pharaoh Thutmose III is one of the characters of The Glass Snail. It is worth mentioning that the Mystical Order Rosae Crucis originates from the ancient Egyptian mystery schools. Thus, it seems essential that Pharaoh Thutmose III united everyone initiated into a single fraternity. Owing to the rules imposed by the ruler, the fraternity became a mystical order with a single code (Shapravskyi, 2010, p. 327). Masonry inherited some of their principles, all their functions, the language of symbols, and the rites of passage from the Rosicrucian movement (Tsehelskyi, 2015a, p. 74).
Decoding masonic symbols
Pavić gave his tale Damascene the subtitle Interactive Tale for Computer and Compasses, which serves as a beacon for the reader. While the computer refers the reader to computer hypertext, experiments with non-linear reading, and elements of the game as composites of postmodernist artistic practices, the compasses reaffirm our suppositions concerning the masonic symbols in the writer's works. The set-square and compasses, as tools, belong to the central symbols of masonry. Together, they represent the union of heaven and earth, where the set-square is the symbol of the earth and the compasses of the dome of heaven (Karg, 2019, p. 172).
In Damascene (as well as in the short novel of the same name), the motif of construction is central, while masonic symbols are scattered throughout the text. In the first part of the work, called Builders, the author mentions numerous sacred places built by architects, the construction of palaces with elements of ancient Greek architecture, and eight hundred Serbian architects named John who returned to their homeland carrying their guild flag with compasses on it. No doubt the writer wanted to evoke the guild of masons and John the Apostle, a patron saint of masonry, evangelist, Christ's disciple, and author of the Apocalypse, which is full of gnostic and hermetic symbolism. Masonic lodges often bear his name, and stonemason fraternities often celebrate the day of John the Apostle, which is connected with the admission of new members.
In Damascene, we read about two architects, both named John who are building a church dedicated to the Presentation of the Blessed Virgin Mary for the wedding of Attilia Nikolić, the daughter of their client, and, simultaneously, a palace for her to live in after she gets married. According to Radulović, the two Johns are a direct allusion to masonry, for both John the Apostle and John the Baptist are patron saints of freemasons (Radulović, 2012, p. 2).
Pavić opens a network of intertextual references and sends the reader to study the sources, for the architect who is supposed to build the church is called John the Damascene. John Damascene was a Byzantine theologian, philosopher, and poet who had a great impact on the development of theology in the Byzantine Empire and on medieval philosophy in Western Europe, Kyiv Rus, and the Balkans. There is another architect character in this work, John of the Ladder, who is commissioned to build the palace. The author once again counts on interpretative cooperation with the recipient, prompting them to learn more about the figure of John of the Ladder (also known as John Climacus, John Scholasticus, and John Sinaites), an Orthodox monk. His main work is the Ladder of Divine Ascent, which the church called the best book for spiritual growth. The ladder leading to Heaven symbolized the difficult spiritual ascent to God.
Interestingly, masonic degrees resemble stairs: with each step, a person reaches a higher level of education and enlightenment. According to another explanation, the stairs represent faith, hope, and charity, the three theological virtues, called the divine ladder (Karg, 2019, p. 75). Recall that after the ceremony, each mason starts building their symbolic temple following the principles of the fraternity, an allusion to the construction of Solomon's Temple in Jerusalem (Karg, 2019, p. 71). In the text under study, both architects presented their drawings of the future buildings to the client on Andrew the Apostle's day. Presumably, this is Pavić's way of hinting at the Scottish Rite, one of the two greatest branches of freemasonry, whose lodges often bear the name of the patron saint of Scotland.
Instead of one, the architect showed projects of three temples, made of boxwood, of stone, and of a mysterious third material. The suspension of the construction is explained as follows: "You must have sinned, my Lord. You must have owed something to somebody, or shortchanged someone. When you remember what you did wrong and who you were unjust to, show repentance and put matters right. Return the debt, then John can complete your church."
-For God's sake, Damascene, where is John building the third church? -In Heaven. John always builds the third church in Heaven (Pavić, 1998).
Thus, the problems of spiritual growth and of nurturing one's inner virtues, and the themes of sin and repentance, are central to this work. This resonates with the masonic idea of a spiritual temple of wisdom in the human heart, a temple that has to be built and dedicated to God (Karg, 2019, p. 92).
Masonic buildings are often called temples as a tribute of respect to the construction of Solomon's Temple. In this case, the word temple is deprived of religious sense but pays respect to the masons' craft (Karg, 2019, p. 81). In the dining hall of the unfinished palace, one can see the symbols of the masonic temple as a place where masons gather to perform rituals. A blue sky, the moon, stars, and an unusual clock-like sun are depicted on the ceiling. Attilia Nikolić realized it could be used as a compass showing the way to the architect Damascene. Interestingly, the word compass in English denotes both the magnetic compass used by navigators and pilots and a pair of compasses, the technical drawing instrument used to measure dimensions in construction. Later, the character uses the architect's large wooden pair of compasses to measure the distance to the Church of the Presentation of the Blessed Virgin Mary, the Temple.
Pavić's experimental novel Unique Item (a delta novel) also abounds in masonic allusions. Analyzing its poetic features, we focus on its structure. Delta (graphically depicted as Δ) is the name of the fourth letter of the Greek alphabet. It is indicated below the novel's title and appears several times in the text at the intersection of textual fragments. In geography, the delta denotes the mouth of a river, which breaks up into numerous streams; Δ thus serves as a paratextual marker to indicate the splitting of the story. Note that the masons have, as their emblem, the all-seeing eye inside an equilateral triangle, called the "radiant delta" ("sacred delta," "sacred pyramid"). This symbol has a long history: it was often used in Egyptian and Jewish cultures, representing deity and global surveillance. Its interpretation is close to Christianity, where it appears as a symbol of the Holy Trinity. Historically, many masonic symbols have religious connotations. The radiant delta symbolizes the omnipresence of the Great Architect of the Universe (Karg, 2019, p. 184). Pavić's "Egypt mania" is illustrative if we consider the impact ancient Egypt made on masonry.
The text of the delta novel contains narrative parts that abound in various signs, symbols, mysticism, and fantasy. An attentive reader will notice references to masonry at all levels of the text. Mozart's The Magic Flute is mentioned in the novel: the author indicates that a certain fourteen minutes of this opera constitute an audio password, a key opening the lock to a vault (Pavić, 2004, p. 63). The Magic Flute is called the apotheosis of masonry, or its musical code. Its libretto is mysterious, and the action takes place in ancient Egypt. However, one can see masonic symbols behind the exotic Egyptian secrets: trials by fire, water, earth, and air; the use of numbers; and masonic musical chords (Karg, 2019, p. 228).
It is to Egypt that Madam Lempicka, a character of the novel, decides to go. When studying works on the royal art, we found the intriguing information that one of the biggest collections of works on freemasonry is preserved not far from Poznań. Among Polish scholars, the books by Małachowski-Łempicki are considered to be of special value (Tsehelskyi, 2015b, p. 146). We assume that Pavić knew about these sources and borrowed the character's last name from them.
Later in the text, we see a Polish-Ukrainian-Lithuanian trace. The author activates the prototype of Satan through the appearance of a female demon called Marina Mniszech, who speaks Lithuanian. In Unique Item, she sends masonic gloves as a present, knocks on the door with a masonic knock, and intends to question a man about women's masonic lodges (Pavić, 2004, p. 129-130). The author hints at the disputes around women's involvement in masonic lodges, which are recurrent in freemason circles (Gize, 2015, p. 13). It is worth noting that Marina Mniszech was the daughter of Jerzy Mniszech, a Polish nobleman and diplomat connected to the Ukrainian lands of the Polish-Lithuanian Commonwealth. She was the first foreign Tsaritsa of Muscovy, wife, and widow of two tsars, False Dmitry I and False Dmitry II. Pavić was drawn to the personality of this Mary Stuart of Halychyna and to one of the most remarkable political intrigues of the 17th century. As for the "demonic" characteristics of the image, we assume that the writer was aware of certain historically documented facts. Having moved to Moscow, the new tsar and tsaritsa brought with them the customs observed by the Polish gentry (Szlachta) in the Polish-Lithuanian Commonwealth: at the wedding, they opted for European cuisine, ate from individual plates, and used forks. The use of medicinal herbs also looked like witchcraft, not to mention their intention to hold a masked ball and prepare masks (Pahutiak, 2012). The above facts served as confirmation of the demonic origin of the couple. The fourth chapter of the novel contains a fragment, "The dream of Pushkin's death." We see that the author does not just include excerpts of Pushkin's poems in his novel but makes the poet himself one of the characters, who resorts to African magic and summons the demon Marina Mniszech to learn about the circumstances of his death. Legend has it that Pushkin was connected to masons, and a woman is typically associated with something infernal, and thus with death.
In Pavić's novel Invisible Mirror - Multicolored Bread (a novel for children and others), we can trace references to the novels about the Holy Grail and the Knights of the Round Table. Masons inherited a lot from the Templar knights who participated in the crusades. Much has been written about the connection of masonry with such a legendary artefact as the Holy Grail. Encoded literary references are not intended for the casual reader; they reveal themselves to the "others" familiar with the author's writings on this topic. The main characters are looking for twelve silver knights in combat armour. The protagonist of the "female" part is the Travelling Rose. In this context, the first thing that comes to mind is the symbol of the secret Rosicrucian society - the Rosy Cross. According to one interpretation, the rose and the cross are the symbols of Christ's Resurrection and Atonement, the divine light of the universe and the carnal world of suffering, the symbols of the Virgin Mary and Jesus, the male and female elements, the material and the spiritual.
Pavić's novel anthology Paper Theater includes A Choir of Birds from Paris, where the events are given a geographical location - the character lives on Rue Vieille du Temple, which runs to the Seine River alongside the Templars' street, Rue du Temple. The introduction of specific godonyms (street names) and anthroponyms (the main characters are called Marie-Madeleine d'Aubray and Godin de Sainte-Croix) in the literary work foregrounds the references to the Templar knights. Every day, listening to the birds singing, the narrator hears the word "Saintecroix". Presumably, Pavić was familiar with Zbigniew Herbert's research on the secrets of the French gothic temples (Stone from the Cathedral), where the author compares freemasons with birds of passage who travelled in search of better working conditions (Herbert, 2009, p. 29).
Conclusion
Proceeding from the analysis of the material, we have grounds to claim that masonic ideas and the foundations of masonic philosophy resonated with the author. The image of the creator, the architect, the motif of construction, and the image of the temple itself are central to Pavić's prose. Presumably, this was the author's way of referring to the masons, who believe in the Great Architect of the Universe and in masonic temple symbols. While each part of the Dictionary of the Khazars features a story about Adam, whom masons regard as the first builder on Earth, the image of the architect dominates in Landscape Painted with Tea, Inner Side of the Wind, The Glass Snail, and Damascene. The principles of geometry and architectural and building tools were important for the masons and their craft. The subtitle of Damascene mentions compasses, one of the most important masonic symbols. The text itself contains many direct allusions to Masonry (it illustrates the symbols of a masonic temple, and the main architect characters are named after John the Apostle and John the Baptist, the patron saints of the freemasons). The original structure of the novel Inner Side of the Wind manifests through the clepsydra principle, embodying time, itself an important masonic symbol. In Pavić's novel Unique Item, the subtitle given by the author illustrates, in our opinion, the sacred delta as an ideal triangle. On the pages of the novel, we also find mentions of masonic gloves as an element of masonic clothing and of Mozart's opera The Magic Flute, whose subtext involves many masonic artefacts. A reference to the novels about the Holy Grail and the Knights of the Round Table is found in Invisible Mirror - Multicolored Bread. It also illustrates the symbols of Rosicrucianism, which left a significant legacy to the masons. According to legend, freemasons were connected to the medieval order of the Knights Templar, who are referred to in Pavić's A Choir of Birds from Paris.
The non-linear texts analyzed in this paper attest to a constant striving to construct a perfect creative form, which the author connects to the golden section, mentioning it many times and illustrating the emblems of divine proportion in his prose. Text architectonics, combined with a rich thematic array, ensures communication with the reader by provoking interpretation. For the sake of the interpretive adventure and the extension of readers' horizons, Pavić never tired of modelling artistic experiments. Notably, Pavić even compares a book to a building, where the reader may reside for some time, or to a temple they enter to pray.
Some aspects remain sealed to us and are yet to be analyzed in subsequent papers, for we are dealing with esoteric prose (in a certain sense), whose secret symbols may be unlocked and interpreted by knowledgeable readers. The masonic ideas and symbols we have noted carry a deeper context and open broader horizons. They constitute a secret code encountered in many of Pavić's writings and invite the reader to embark on new interpretive adventures.
Mortality among HIV-Positive and HIV-Negative People Who Inject Drugs in Mizoram, Northeast India: A Repeated Cross-Sectional Study (2007–2021)
Background: HIV and drug overdose continue to be the leading causes of death among people who inject drugs (PWID). Mizoram, a small state in the northeast of India, has the highest prevalence of HIV in India and a high HIV prevalence among PWID. Objective: To estimate the mortality among HIV-positive and HIV-negative PWID and to describe its associated factors. Methods: Cross-sectional datasets from the 2007–2021 Mizoram State AIDS Control Society (MSACS) data comprising 14626 PWID were analyzed. Logistic regression analysis was conducted to examine the factors associated with mortality among HIV-negative and HIV-positive PWID after adjusting for potential confounding factors. Results: Mortality among HIV-negative PWID declined by 59% between 2007 and 2021. The mortality rate among HIV-positive PWID also declined by 41% between 2007 and 2021. The multiple logistic regression analysis revealed that being divorced/separated/widowed (AOR = 1.41, 95% CI 1.03–1.94) remained positively associated with mortality among HIV-positive PWID. Mortality among HIV-negative PWID remained positively associated with ages of 24–34 years (AOR = 1.54, 95% CI 1.29–1.84) and above 35 years (AOR = 2.08, 95% CI 1.52–2.86), being divorced/separated/widowed (AOR = 1.28, 95% CI 1.02–1.61), and the sharing of needles/syringes (AOR = 1.28, 95% CI 1.34–2.00). Mortality among HIV-negative PWID was negatively associated with being married (AOR = 0.72, 95% CI 0.57–0.90), being employed (AOR = 0.77, 95% CI 0.64–0.94), and having a monthly income. Conclusions: The mortality rate among HIV-negative and HIV-positive PWID declined significantly between 2007 and 2021 in Mizoram. To further reduce mortality among PWID, interventions should target those sharing needles/syringes, those above 24 years of age, and unmarried participants.
Introduction
PWID are at increased risk of premature death [1,2]. Globally, the leading causes of death among PWID are accidental drug overdose and human immunodeficiency virus (HIV) infection. A systematic review and meta-analysis conducted on 67 cohort studies in 2013 estimated that PWID had a crude mortality rate of 2.35 deaths per 100 person-years, a rate that is 14.7 times higher than the general population [1,3]. This study also found that the mortality was three times higher among HIV-positive PWID compared to HIV-negative PWID.
ALIVE (AIDS Linked to the Intravenous Experience), a long-standing community-based prospective study in Baltimore which followed PWID for 30 years (1988-2018), found that more than 40 percent of their participants died during follow-up (median = 13 years), primarily from HIV/AIDS, drug overdose, and chronic diseases [4]. They also found that HIV/AIDS-related deaths declined after 1997 following the widespread availability of combination antiretroviral therapy (cART). However, the same study reported that drug-related deaths among the participants increased exponentially, more than 80 times the national average, and were probably driven by the non-medical use of prescriptions and easy availability of fentanyl [4]. A similar prospective cohort study conducted in Vancouver, Canada, also found an increased risk of death among HIV-positive PWID compared to HIV-negative PWID. However, in contrast to the ALIVE study, they found declining overdose deaths among the participants and suggested that this may be related to improved harm reduction strategies in Vancouver [5].
In 2021, the Joint United Nations Programme on HIV and AIDS (UNAIDS) reported that the risk of acquiring HIV is 35 times higher among people who inject drugs [6]. However, with the increased availability of cART, people infected with HIV have enjoyed substantial reductions in HIV/AIDS-associated morbidity and mortality [7,8]. Highly active antiretroviral treatment (HAART) has been shown to improve the course of HIV disease and subsequently decrease the mortality rate among all HIV-infected populations, including PWID [5,9]. However, HIV-positive PWID are often less likely to benefit from treatment due to less-than-optimal adherence to treatment, leading to a high mortality rate among this population [9].
The highest prevalence of HIV in India is found among PWID [10]. However, few reports of mortality among PWID are available. A longitudinal cohort study conducted in Chennai, South India, reported that the mortality among PWID was 4.3 per 100 person-years [11]. The study also found that HIV-positive PWID who were not immunosuppressed at baseline had mortality rates comparable to HIV-negative PWID, suggesting that good adherence to highly active antiretroviral treatment (HAART) by this population would substantially impact mortality [11].
In India, the AIDS-related deaths per 100,000 declined from 15.04 in 2010 to 3.08 in 2021 [12]. The Indian government rolled out free antiretroviral treatment (ART) under the national program in April 2004; this has led to a decline in AIDS-related deaths in the last several years [10]. Mizoram, a state in the northeast of India, has also seen a decline in AIDS-related deaths from 58.71 deaths per 100,000 in 2010 to 15.80 in 2021. The decline in AIDS-related deaths in Mizoram could be attributed to the introduction of ART in 2007. To further combat AIDS-related deaths in Mizoram, there has been a scale-up of ART centers in different districts across the state, with increased efforts to link people living with HIV to ART services [13]. However, despite these efforts, the number of AIDS-related deaths in Mizoram is still five times higher than the national average.
Despite the high number of AIDS-related deaths reported in Mizoram, no research has been conducted on the mortality rate among PWID in this region. Hence, the current study aimed to estimate the mortality rate among PWID over a period of 15 years and to determine the associated factors of mortality among HIV-positive and HIV-negative PWID in Mizoram, India. The findings from this study would enable health administrators, public health researchers, and government policymakers to reassess and improve the current intervention strategies aimed at reducing HIV-related deaths among people who inject drugs in Mizoram.
Study Sample and Design
This was a cross-sectional study and used secondary data on PWID who were registered in targeted intervention (TI) services under the Mizoram State AIDS Control Society (MSACS). The secondary data were from MSACS, and these datasets were accessed on 1 April 2021. The datasets used for this study can be accessed upon request. MSACS is an organization created by the Government of Mizoram on behalf of the state to respond to the HIV/AIDS epidemic and to deliver effective and efficient implementation of the AIDS control program. Datasets were extracted from TI-registered PWID. Targeted intervention (TI) is one of the many core strategies for HIV prevention among PWID [14]. Harm reduction strategies under TI in Mizoram focus on major components including behavior change communication, treatment of sexually transmitted infections, distribution of condoms and other risk reduction materials, needle exchange programs, and opioid substitution therapy [13]. Datasets from January 2007 to January 2021 were used to estimate the mortality rate among HIV-positive and HIV-negative PWID. A total of 14681 PWID were registered in the TI services between January 2007 and January 2021. ART was first introduced in 2007 in Mizoram, free of charge to people who tested positive for HIV. Hence, the year 2007 was chosen as the baseline for this study.
The registration of PWID into TI services was conducted by MSACS through nongovernmental organization-supported targeted intervention (TI-NGO). Data were collected by trained peer educators (PEs) and outreach workers (OWs) from 34 TI NGOs in eight (8) districts across Mizoram [13]. Individuals who reported injecting drugs at least once 3 months before the date of data collection were eligible participants for enrolment in TI services. Yearly follow-up of registered PWID was performed to monitor the adherence to services, death rates, and those lost to follow-up. Participants who had died at any time during follow-up between 2007 and 2021 were included in the study. The comprehensive data collection procedures used in this study have been described elsewhere [15].
Outcome Measures
Mortality rates were ascertained through the records from MSACS. The causes of death of PWID were not recorded or classified. In this study, the outcome of interest was mortality among HIV-positive PWID and was coded binary 1 for 'Yes' and 0 for 'No'. The potential confounders that were considered were influenced by a previous similar study [5] and were classified into three main factors, namely, sociodemographic factors, injecting behavior, and sexual behavior. The sociodemographic characteristics included age category ('18-24', '25-34', and 35+), gender (male/female), marital status (never married, married, or separated/divorced/widowed), educational status (primary, middle, higher, or graduate and above), employment status (unemployed, employed, or self-employed), and average monthly income in Indian rupees (INR) (None, <3000, 3001-6000, 6001-10,000, or >10,000). Injecting behavior factors included sharing of needles/syringes (Yes/No). Factors related to sexual behavior included whether the person used a condom with a regular partner (Yes/No).
Statistical Analysis
STATA (Stata Corp., College Station, TX, USA, version 17.0) was used for all analyses. For categorical data, the preliminary analysis has been summarized as total deaths and the total population for each confounding factor. The mortality rates for HIV-negative PWID and HIV-positive PWID were calculated by dividing the total deaths by the total population and multiplying by 1000; mortality rates and 95% confidence intervals (CI) were computed for all potential confounding factors. Using logistic regression models, the univariate analysis examined the independent association between the outcome and the confounding factors, and the multivariable analysis examined the independent risk factors for each study outcome variable after controlling for all potential confounding factors.
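As a minimal illustration of these calculations, the sketch below computes a mortality rate per 1000 and fits a univariate logistic model using Python's statsmodels; the data frame and column names are hypothetical stand-ins, not the MSACS datasets.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical PWID records: died = 1 if the participant died during
# follow-up; shared = 1 if needles/syringes were shared.
df = pd.DataFrame({
    "died":   [0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
    "shared": [0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
})

# Mortality rate per 1000 = (total deaths / total population) * 1000.
rate_per_1000 = df["died"].sum() / len(df) * 1000
print(f"Mortality rate: {rate_per_1000:.1f} per 1000")

# Univariate logistic regression of mortality on needle/syringe sharing.
fit = smf.logit("died ~ shared", data=df).fit(disp=False)
print(fit.params)  # log-odds; exponentiate for the odds ratio
```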
In the univariate analysis, all confounding factors with a p-value < 0.20 were retained and were used to build a multivariable logistic regression model [16]. A manual backward elimination procedure was applied to the multivariable logistic regression to remove non-significant variables (p > 0.05). Only those statistically significantly related to the study outcomes at a 5% significance level in the final model are reported in the study.
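The backward elimination step can be sketched as a simple loop: fit the model on the retained candidates, drop the least significant term while its p-value exceeds 0.05, and refit. This is a generic sketch, assuming binary or continuous predictors named directly in the formula.

```python
import statsmodels.formula.api as smf

def backward_eliminate(df, outcome, candidates, alpha=0.05):
    """Backward elimination for a logistic model (simple predictors assumed)."""
    kept = list(candidates)
    while kept:
        fit = smf.logit(f"{outcome} ~ " + " + ".join(kept), data=df).fit(disp=False)
        pvals = fit.pvalues.drop("Intercept")
        worst = pvals.idxmax()          # least significant remaining term
        if pvals[worst] <= alpha:
            return fit                  # all retained terms are significant
        kept.remove(worst)              # drop it and refit
    return None                         # nothing survived elimination
```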
Mortality Rates among HIV-Negative PWID and HIV-Positive PWID
Table 1 shows the mortality rates among HIV-negative and HIV-positive PWID by sociodemographic, injecting, and sexual factors.
Multivariable Analysis of Factors Associated with Mortality among HIV-Positive PWID
Table 2 shows the unadjusted and adjusted odds ratios of factors associated with mortality among HIV-positive PWID. Only factors identified as significant were included in the multivariable analysis. Our analysis showed that mortality among HIV-positive PWID was significantly lower between 2017 and 2021 (AOR = 0.57, 95% CI 0.38-0.82). After adjusting for potential confounders, mortality among HIV-positive PWID remained positively associated with being divorced/separated/widowed (AOR = 1.41, 95% CI 1.03-1.94). PWID who used condoms with regular partners had a lower mortality rate (AOR = 0.72, 95% CI 0.55-0.96).
Discussion
This study is the first to estimate the mortality rates among PWID and describe the associated factors affecting mortality among HIV-positive and HIV-negative PWID in Mizoram, India. This study showed that there was an overall decline in the mortality rate among HIV-negative PWID from 2007 to 2021. The mortality among HIV-positive PWID remained stable between 2007 and 2016; however, from 2017 to 2021, the mortality among HIV-positive PWID declined by almost fifty percent, while mortality among HIV-negative PWID declined significantly, by 76%, over the past 15 years. Multivariable regression analyses showed that being divorced/separated/widowed had a positive association with mortality among HIV-positive PWID, and those who used condoms with regular partners had a lower mortality rate. The study also found that being between the ages of 24 and 34 years, being above 35 years of age, being separated/divorced/widowed, and sharing of needles/syringes contributed to higher odds of mortality among HIV-negative PWID. We also found that HIV-negative PWID who were employed and had a monthly income had lower odds of mortality.
This study found a noteworthy reduction in mortality rates among PWID over the last 15 years. The results differ from a cohort study conducted in Hai Phong, Vietnam, which examined mortality rates among PWID and found a high death rate among HIV-positive and HIV-negative PWID, with 67 HIV-positive and 36 HIV-negative deaths among 1658 participants over a median follow-up of 2 years [17]. Past studies have shown that drug overdose and liver-related disease [11,17,18] are the underlying causes of death among HIV-negative PWID. Drug overdose is preventable and treatable through the use of naloxone [19,20]. To prevent overdose and its associated harms, including death, the United Nations Office on Drugs and Crime (UNODC) in 2013, in collaboration with the Government of Mizoram, organized training in government hospitals and private institutions on 'Overdose Management and Prevention', including assisting in the procurement and distribution of naloxone in district hospitals in Mizoram [19]. The findings in this study suggest that the distribution of naloxone in district hospitals and training in overdose prevention might have prevented deaths from drug overdose among this population. However, decreased mortality among this population cannot be achieved by one treatment modality alone. Comprehensive interventions that include needle/syringe programs, opioid substitution therapy, condom distribution programs, education, and communication are also important strategies that focus on addressing harms associated with drug use [21,22].
The finding that mortality has declined among HIV-positive PWID reported in this study may be attributed to the increased access to antiretroviral treatment (ART), adherence to treatment, and availability of support services. The decline in mortality among this population could be attributed to the launch of free ART in India in 2004 in its fight against HIV/AIDS [23]. Although the initial rollout of ART was slow and limited, the third phase of the National AIDS Control Program (NACP-III), launched in 2009, provided a great impetus to scale up and increase access to services, including ART provision centers [23].
Various studies [24][25][26] have shown that early initiation and adherence to ART have been able to improve the survival of HIV-affected individuals. However, the eligibility for initiation of ART was based on CD4+ count and WHO clinical staging [24,27]. This meant that not all HIV-affected individuals were eligible for free ART, but in 2017, MSACS launched the 'test and treat strategy' to improve the treatment of HIV-affected individuals in Mizoram. Under this strategy, people living with HIV were given free ART, irrespective of the CD4+ count [28]. This strategy promoted early initiation of ART [26] and may have led to a decrease in the mortality rate since 2017. In addition to this, the availability of experienced clinicians, ART medication administration, and adequate support services [29] may have enhanced ART adherence among PWID, which in turn may have led to significant reductions in mortality among PWID in Mizoram. The exact effect of ART on mortality among HIV-positive PWID could not be explored in this current analysis. Further research is needed to examine the potential benefits of retention and adherence to ART among HIV-positive PWID on mortality.
Our research showed that the mortality rates for both HIV-negative and HIV-positive PWID were higher among those who were separated/divorced/widowed. Studies have shown that separated/divorced individuals have a wider sexual network, leading to more sexual partners and a greater risk of contracting HIV/AIDS, which can ultimately lead to death [30]. This supports the findings of the National Longitudinal Mortality Study conducted in the United States, which found that individuals who were divorced or separated had a 4.3 times higher risk of dying from HIV/AIDS compared to those who were married [30].
Our research has shown that HIV-negative PWID who are between the ages of 24 and 34 years are at greater risk of mortality. Injected drug use is the most common risk factor for drug overdose in young people [31]. A study in San Francisco found high mortality among young PWID (median age: 26 years), and drug overdose was the leading cause of death (57.9%) in this cohort [32]. In addition, our study also found that PWID aged 35 years and older who reported being HIV-negative were more likely to die than those aged 18-24 years, as they may have been exposed to injected drug use for a longer period of time. These findings are consistent with a longitudinal HIV prevention study conducted in Denver, which found that individuals aged over 35 years had a higher risk of mortality compared to those aged 24-35 years [33]. However, our findings contradict that study's conclusion that those in the age group of 25-34 are at a lower risk of mortality among HIV-negative PWID [33].
Our study also reported that sharing needles is a significant predictor of mortality among HIV-negative PWID. A community-based cohort study of PWID conducted in Tijuana [34] found that violence exposure, including interactions with law enforcement, was a significant predictor of mortality among this key population. Fear of police violence and injuries sustained during beatings led to increased substance use to deal with pain, and this contributed to riskier injection behaviors, including the sharing of needles/syringes [34]. Police encounters can also drive mortality among PWID by creating barriers to accessing a range of necessary health and harm reduction services [34,35].
Employment and earning a monthly income were found to be linked to a decreased risk of mortality among HIV-negative PWID in this study. These results align with a previous study by DeBeck et al. [36] on the income-generating activities of people who inject drugs, which showed that earning money could help PWID avoid violence, criminal behavior, and death. This finding shows that having a stable job can reduce involvement in the sex trade industry and in high-risk behaviors, such as daily drug use. It also emphasizes the significance of better socioeconomic conditions as a crucial health determinant [17,37].
Despite the reduction in mortality observed in this study, HIV infection continues to play a significant role in the mortality of PWID in Mizoram. The findings of our study highlight the importance of continued promotion and implementation of HIV harm reduction services, including encouraging safe injection practices and engaging in chronic disease management and health promotion activities [5]. Comprehensive training for laypersons and family members of drug users should include how to respond to overdoses, and the administration of naloxone is required to prevent opioid-related drug overdose deaths [38]. There is also a need for the provision of training among police agencies/law enforcement on service delivery and health promotion strategies for PWID, as this can reduce behaviors that interfere with the achievement of public health goals [35]. Structural interventions on the integration of police agencies/law enforcement and public health could serve as an opportunity to connect PWID with support services, which can result in reduced mortality among PWID [34,39]. Perhaps the involvement of PWID in the design and delivery of such services would be an effective community-based health promotion strategy and may address the factors related to employment and skill development.
Strengths and Limitations
This study has certain strengths. Firstly, this is the first study on the impact of HIV infection and its associated factors on mortality among PWID in Mizoram. The second strength is that this is a population-based study with a large sample size, and we were able to report detailed data for a period of 15 years on this otherwise hard-to-reach population. Our study may also be limited by several factors. Firstly, the study could not determine cause-specific mortality because details on drug-related time of death and chronic disease-related time of death were missing from the MSACS datasets. Secondly, we could not determine how many HIV-positive PWID were enrolled in the ART program or how many PWID were retained due to the limitation of our access to this data. Thirdly, the mortality rates may have been underestimated, as participants who were registered in the TI program but discontinued the service may have died but were not followed up.
Conclusions
In conclusion, our results demonstrate that mortality rates among HIV-negative and HIV-positive PWID have declined significantly since 2016. This coincided with the launch of the 'test and treat strategy' and 'overdose management and treatment' in Mizoram at the time. However, this finding does not negate the need to continue a comprehensive response, including a scale-up of ART, education of laymen on overdose management and safe injection practices, and the involvement of law enforcement in public health goals for PWID to further reduce mortality among this vulnerable group. Our study findings also suggest that employment and income generation among PWID could help to avoid violence and death. Public health responses need to include skill development for PWID as an intervention strategy to reduce mortality among this population.
MR = Mortality rate per 1000 people. This table presents the mortality rates among PWID by sociodemographic, injecting, and sexual factors.
Table 2. Unadjusted and adjusted odds ratios of factors associated with mortality among HIV-positive PWID.
Table 3. Unadjusted and adjusted odds ratios of factors associated with mortality among HIV-negative PWID.
This table presents the sociodemographic, injecting, and sexual risk factors associated with mortality among HIV-positive PWID before and after adjusting for potential confounders.
Effect of indigenously developed nano-hydroxyapatite crystals from chicken egg shell on the surface hardness of bleached human enamel: An In Vitro study
Objective: The objective was to evaluate the effect of nano-hydroxyapatite (nHA) derived from chicken eggshell on bleached human enamel in comparison with commercial casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) paste using the Vickers microhardness test. Materials and Methods: nHA powder was prepared from chicken eggshell using the combustion method. nHA slurry was prepared by mixing 1.8 g of nHA powder with 0.3 ml of distilled water. Forty intact maxillary anterior teeth were collected and decoronated, and the crowns were embedded in acrylic molds with the labial enamel surfaces exposed. Baseline microhardness evaluation was done (T0). The specimens were randomly divided into the following four groups (n = 10) based on the surface treatment of enamel: Group 1: no bleaching treatment; Group 2: bleaching with 30% hydrogen peroxide (HP) solution; Group 3: bleaching followed by the application of CPP-ACP; and Group 4: bleaching followed by the application of nHA. The specimens were stored in artificial saliva at 37°C for 2 weeks, after which they were subjected to the Vickers microhardness test (T14). One-way ANOVA and Tukey's post hoc multiple comparison tests were used for statistical analysis (P < 0.05). Results: Bleaching with HP significantly decreased the enamel microhardness. CPP-ACP and nHA derived from chicken eggshell increased the enamel microhardness significantly. There was no significant difference in microhardness values between the CPP-ACP and nHA groups. Conclusion: Nano-hydroxyapatite sourced from chicken eggshell was as effective as CPP-ACP in remineralizing and restoring the lost microhardness of bleached enamel.
Introduction
There is an increasing awareness among patients regarding dental esthetics. Discoloration of teeth resulting from various reasons poses a major obstacle in achieving desirable esthetics. When compared to veneers and full-coverage crowns, bleaching is considered a conservative option for the management of discolored teeth. It can be done as an at-home or in-office procedure. [1] The development of bleaching gels that use hydrogen peroxide (HP) or carbamide peroxide in high concentrations (35%-38%) has made in-office bleaching procedures easier, with immediate favorable results achieved, without a need for further patient cooperation. [2] Longer application time and multiple visits are required in order to obtain optimum tooth-whitening results. This has a negative influence on the integrity of dental hard tissues. [3] These changes range from microscopical alterations of the enamel surface in the form of surface defects and subsurface pores to significant reduction in enamel microhardness. [4,5]
Direct topical application of remineralizing agents or their incorporation into bleaching gels has been shown to decrease the unfavorable effects of bleaching agents on enamel. [6,7] Remineralizing agents such as fluoride, calcium, amorphous calcium phosphate (ACP), casein phosphopeptide-ACP (CPP-ACP), hydroxyapatite (HA), and nano-HA (nHA) have shown promising results in various studies. [7][8][9] CPP-ACP has been developed based on calcium phosphate remineralization technology, which can inhibit the demineralization process and enhance the remineralization of enamel and dentin. [8,10,11] Studies have proven that the remineralization potential of CPP-ACP is capable of repairing initial enamel caries lesions. [11] It has been reported that the application of a CPP-ACP paste either before or after in-office bleaching protocols can prevent HP-induced negative changes of roughness and hardness on enamel. [12] Incorporation of CPP-ACP in 10% and 16% carbamide peroxide gels has been shown to increase postbleaching enamel hardness. [13] With the advent of biomimetic materials in the field of dentistry, materials with properties similar to those of natural tooth structure, and which can completely replace lost tooth structure, have evolved. HA is one such material, considered the most biocompatible and bioactive. [14,15] Compared to micron-sized particles, nano-sized particles have been shown to possess morphology and crystallinity comparable to dental hydroxyapatite. [16] Studies have proven its potential to remineralize early enamel caries. [9,17,18] It has also been shown to preserve the enamel morphology and prevent the loss of enamel microhardness. [19] HA can be produced using natural or synthetic sources. So far, the beneficial effects of HA have been proven using the synthetically derived form. There is a paucity in the dental literature regarding the role of HA derived from natural sources.
Corals, cuttlefish shell, bovine bone, and eggshell are some of the natural sources of HA, which are otherwise disposed of as biowastes. Chicken eggshell could serve as a raw material for synthesizing HA in a natural and economical way. [20] Kunam et al. reported the efficacy of nHA derived from chicken eggshell in combination with 2% sodium fluoride in dentinal tubule occlusion and demonstrated the effective depth of penetration of this combination into dentinal tubules. [21] Khoroushi et al. [22] concluded that incorporation of HA as a remineralizing agent into bleaching gel is effective in decreasing enamel microhardness changes subsequent to in-office bleaching. Moosavi and Hakimi [23] conducted a study in which bleaching followed by the application of MI paste, fractional CO2 laser, or nHA led to an increase in the elastic modulus and hardness of bleached enamel. Gomes et al. [19] concluded that treatment of enamel with nHA paste prior to bleaching restores the hardness of enamel by minimizing the loss of Ca and P ions and increasing the uptake of F ions. To date, there are no studies testing the efficacy of nHA derived from chicken eggshell on the microhardness of enamel following a bleaching procedure. Hence, the aim of this in vitro study was to evaluate the effect of nHA derived from chicken eggshell on the microhardness of bleached human enamel in comparison with commercial CPP-ACP paste using the Vickers microhardness test.
Synthesis of nano-hydroxyapatite powder from chicken eggshell
A simple combustion technique proposed by Sasikumar and Vijayaraghavan for synthesizing nanocrystalline HA powder using chicken eggshell, diammonium hydrogen phosphate [(NH4)2HPO4], and citric acid was adopted in the current study. Eggshells were collected from a local hatchery and boiled in water for 30 min. The shells were dried for 60 min in a hot air oven and blended into a fine powder. The obtained powder was dissolved in concentrated nitric acid (conc. HNO3), which resulted in the formation of a yellow eggshell solution. Standardized eggshell solution was added to 1 M citric acid. The pH of the solution was adjusted to 9.5 by adding 1:1 NH4OH. Adding 1 M (NH4)2HPO4 solution dropwise at 1 ml/min resulted in a white precipitate, which was dissolved by the addition of conc. HNO3. The solution was stirred at 70°C for 2 h until the formation of a transparent gel. The gel was subjected to combustion in a preheated muffle furnace at 250°C. This resulted in a black-colored precursor, which when sintered at 900°C for 2 h yielded a white-colored pure nanocrystalline HA powder. [24] The nHA slurry was prepared by mixing 1.8 g of nHA powder with 0.3 ml of distilled water.
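For clarity, the underlying chemistry can be summarized as follows; these equations are a standard textbook description of calcium carbonate dissolution and hydroxyapatite stoichiometry, not equations given in the source study.

```latex
% Eggshell is predominantly CaCO3; dissolution in nitric acid supplies the
% calcium for the subsequent gel/combustion step:
\[ \mathrm{CaCO_3} + 2\,\mathrm{HNO_3} \longrightarrow \mathrm{Ca(NO_3)_2} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow \]
% Stoichiometric hydroxyapatite has a Ca/P molar ratio of 10/6 = 1.67,
% matching the XRD-derived ratio reported later in the text:
\[ 10\,\mathrm{Ca^{2+}} + 6\,\mathrm{PO_4^{3-}} + 2\,\mathrm{OH^-} \longrightarrow \mathrm{Ca_{10}(PO_4)_6(OH)_2} \]
```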
Specimen preparation
The study protocol was approved by the Institutional Ethical Committee of SRM Dental College (SRMU/ MandHS/SRMDC/2011/M.D.S-PG Student/010). Forty extracted noncarious maxillary incisors were collected and stored in distilled water containing 0.2% thymol until use. The teeth were decoronated at the cemento-enamel junction, and the root portions were discarded. The root canal openings were sealed with utility wax. The teeth were positioned in a plastic mold and embedded using a self-curing acrylic resin with the labial enamel surfaces exposed. The enamel surfaces of the teeth were ground into a flat surface using 80-grit silicon carbide papers and polished using 600-, 1200-, and 2400-grit aluminum oxide abrasive papers.
Baseline microhardness evaluation (T0)
Microhardness of the samples was determined using a Vickers microhardness tester (Wilson Wolpert Instruments, Aachen, Germany) fitted with a 300-g load. The indenter was allowed to sink and rest on the enamel surface for 10 s, and the Vickers hardness number (VHN) was determined. Three indentations were performed on each specimen, with a distance of 100 µm between them, and the measurements were averaged. This was taken as the baseline (T0) microhardness value (MHV) of the sample. The teeth were randomly divided into four groups based on the remineralizing agent used. No bleaching was done for samples in Group A, whereas Groups B, C, and D were bleached with 30% HP solution (Thermo Fischer Scientific, Chennai, Tamil Nadu, India), followed by no further treatment in Group B, application of CPP-ACP (GC Tooth Mousse; GC America Inc., USA) in Group C, and nHA slurry in Group D. In Groups B, C, and D, bleaching was done by allowing the HP solution to remain in contact with the labial enamel surface of the samples for 30 min, following which it was rinsed off with distilled water. In Groups C and D, after bleaching, CPP-ACP paste or nHA slurry was applied using a micro brush and allowed to remain on the enamel surface for 2 min. After remineralizing treatment, all the specimens including those of Group A were stored in artificial saliva (Aqwet, Cipla, Mumbai, Maharashtra, India) at 37°C. This procedure was repeated every day for 14 days.
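Although the text does not state it, the Vickers hardness number is conventionally obtained from the applied load and the mean indentation diagonal via the standard relation below (F is the load in kgf, here 0.3 kgf for the 300-g load; d is the mean diagonal length in mm):

```latex
\[ \mathrm{HV} = \frac{2F\sin(136^{\circ}/2)}{d^{2}} \approx 1.8544\,\frac{F}{d^{2}} \]
```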
Final microhardness assessment (T14)
At the end of 2 weeks, the microhardness of the samples was measured again. This value was taken as the posttreatment (T14) MHV of the sample.
Statistical analysis
The T0 and T14 MHV were statistically analyzed by one-way ANOVA and Tukey's post hoc multiple comparison tests (P < 0.05). Descriptive statistics were analyzed using SPSS Statistics V21.0 (IBM, USA).
Results
The baseline and posttreatment mean MHV and standard deviation of all the groups are summarized in Table 1, and the graphical representation of the same is given in Figure 1. The baseline MHV ranged from 301 ± 7.56 VHN to 311 ± 8.38 VHN, with no significant difference among the groups (P > 0.05), indicating uniform distribution of samples between the groups. Post hoc comparisons showed that, among the groups at T14, Group B had a significantly lower MHV (P < 0.05). Independent sample t-test showed that no significant difference was seen between MHV at T0 and T14 in Groups A, C, and D (P > 0.05). A significant decrease in MHV was seen at T14 compared to T0 in Group B (P < 0.05).
Discussion
Microhardness of enamel varies depending on its degree of mineralization, local variations in its structure resulting from the presence of enamel rods, and tufts or porosities near the dentino-enamel junction. [25] The morphology of bleached enamel has been extensively studied. It was observed that bleaching agents demineralize enamel to a depth of up to 50 µm. This loss of mineral is evidenced as hardness changes. Hence, microhardness tests are considered appropriate to evaluate the adverse effects of bleaching agents on enamel. [26] In the current study, there was a significant reduction in the microhardness (T14) of samples that were subjected to bleaching alone (Group B - 291.84 ± 8.09 VHN). These results coincide with those of various other studies [12,13,19,22,23] which have shown that bleached enamel is less hard than normal enamel.
A remineralization system should supply stabilized bioavailable calcium, phosphate, and fluoride ions because these minerals are lost after bleaching. [27] In the current study, the application of CPP-ACP after bleaching (Group C) increased enamel microhardness (317.44 ± 9.66 VHN) significantly when compared to samples that received no additional treatment after bleaching. In comparison to the experimental group, where nHA derived from chicken eggshell (308.46 ± 6.67 VHN) was applied post bleaching (Group D), CPP-ACP increased the enamel microhardness further, though the difference was not statistically significant.
CPP in CPP-ACP stabilizes calcium and phosphate ions at the tooth surface in a bioavailable state and prevents them from transforming into a crystalline phase. This reservoir of calcium, phosphate, and fluoride ions released from the nanocomplexes of CPP-ACP diffuse down concentration gradients across demineralized zones and deposit themselves into voids in apatite crystals. This promotes crystal growth in the form of fluoride-containing apatite, thereby achieving remineralization. [27,28] Yengopal and Mickenautsch conducted a systematic meta-analysis and concluded that there is sufficient clinical evidence demonstrating enamel remineralization and caries prevention by regular use of products containing CPP-ACP. [29] These results are in accordance with other studies, [12,13,23] which also found that enamel mineral loss was significantly reduced when CPP-ACP was applied.
In the present study, the nHA-treated group (Group D) showed a significant increase in enamel microhardness post bleaching (308.46 ± 6.67) compared to the bleaching-only group (Group B - 291.84 ± 8.09). There was no significant difference between the T14 MHV of samples in the CPP-ACP group and the nHA group. Chicken eggshell has a high percentage of bioavailable calcium (39%); relevant amounts of Mg, P, and Sr; and low levels of toxic metals such as Pb, Al, Cd, and Hg. This composition makes eggshell an attractive source of calcium. [30] Eggshell waste helps in reducing the cost of a high-quality calcium source while at the same time promoting the recycling of the material. [20,24] Synthesis of nHA from chicken eggshell by the combustion method is an economical way of obtaining pure, crystalline nanoscale powder. [24] In our previous study, nHA powder synthesized by this technique was characterized using scanning electron microscopy (SEM) and X-ray diffraction (XRD). [21] SEM observations confirmed the nanometric size of the particles, which were in the range of 19 to 30 nm. The particles were rod shaped and were present in the form of agglomerated clusters. XRD analysis of the synthesized powder showed a sharp and well-defined peak at a 2θ value of 34.12°, and the Ca/P ratio was 1.67, confirming the presence of pure HA.
Crystals of nHA sediment onto the enamel surface and act as a template in the mineral precipitation process. The template continuously attracts large amounts of Ca²⁺ and PO₄³⁻ ions from the remineralizing solution to fill up the defects and micropores on the demineralized enamel surface. This facilitates crystal growth and integrity, thereby promoting remineralization. Studies have shown that the effect of remineralization is increased when the size of the HA particle is reduced to the nanometric range. [18] An in situ study concluded that application of nHA paste prior to bleaching with 35% HP was able to restore the hardness of enamel and does not interfere with bleaching effectiveness. [19] Studies have also shown that the use of nHA paste was effective at lowering the incidence and intensity of tooth sensitivity after bleaching. [31,32] The current study results contradict the in vitro study conducted by Comar et al., where different concentrations of nHA with and without fluoride were used in comparison with a commercial CPP-ACP paste by evaluating cross-sectional hardness on bovine enamel and dentin; there was no significant improvement in microhardness in samples treated with nHA. [33] Santos et al. [34] showed that the application of nHA paste after the bleaching procedure did not significantly reduce the enamel loss compared to the unmodified HP group. This may be attributed to the high pH of the paste (8.6). It was shown that an increase of pH from 4.0 to 7.0 resulted in a decrease of calcium and phosphate released from nHA. [18] However, the eggshell-derived nHA used in the present study is acidic in nature; hence, an enhanced remineralization effect is seen. Studies have also noted that the low solubility associated with pure HA might not provide enough available Ca and PO₄³⁻ ions to increase the stability of the HA in the enamel. [35] Further studies using nHA derived from chicken eggshell are needed in this regard. Although this study could not simulate the complex oral environment completely, these results could pave the way for further research on the use of this naturally sourced nHA as a potential remineralizing agent.
Conclusion
From the results of this in vitro study, it can be concluded that:
1. Bleaching with 30% HP significantly reduces the microhardness of enamel.
2. The use of CPP-ACP or nHA restores the lost microhardness of bleached enamel.
3. nHA was as effective as CPP-ACP in remineralizing bleached enamel.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Efficient Usage of Resources through RFID Cards
Background/Objectives: Nowadays, petrol consumption has increased greatly. Limiting resources by using them effectively is our main objective. Methods/Statistical Analysis: Trending identification techniques such as RFID Technology are used here to implement the proposed approach. Each vehicle owner is provided with a card containing a unique ID, which is recognized by the RFID Reader. A usage value is predefined according to the vehicle's requirements. Each time the person fills petrol, the value is automatically decremented in the database. An LCD displays the customer's current usage value as well as the remaining usage value. Findings: Consumers face restrictions once the usage limit is exceeded, so they should manage their petrol usage carefully. Quicker responses are needed from the database. Application/Improvements: This type of technique can be applied to several non-renewable energy sources. By using high-end servers, speed and processing efficiency can be improved.
Introduction
Utilizing resources is a human priority, but conserving them is equally a human responsibility. In this paper, we suggest an RFID Technology based approach to limit the consumption of non-renewable resources such as petrol. Petroleum geologists believe that the point at which one-half of Earth's petroleum has been exploited may occur between 2020 and 2050, although a variety of experts believe it has already occurred. For the sake of future generations, the exploitation of resources should be done properly; if not, we will have to face dangerous situations. Efficient usage of these kinds of resources is the only solution to this problem. RFID Technology has become an emerging technology owing to its capacity to carry large amounts of data, its reusability, its processing efficiency, and its high levels of security, and it has already been applied to items such as medicines and food products [1]. Usage can be limited by using RFID cards.
RFID Technology
In recent years, RFID Technology has become convenient to use and cost-effective. It uses radio waves as a medium to transfer data from the RFID card to the RFID Reader [2]. The RFID Reader takes responsibility for transmitting the received data to the desired destination. The card and reader operate at the same frequency for receiving and transmitting operations [3]. This technology is a wireless mode of communication and has a range of several meters. RFID systems are used in several areas, such as restricting unauthorized access to provide security and tracking the location of a resource. RFID Technology has more advantages than barcode systems, so its implementation is in high demand in the present generation.
Keywords: Databases, PC, RFID Card, RFID Reader, RS-232, UL Kits
The electronic tag holds data in the form of a number used to recognize the person automatically with the help of a database. Tag technology performs automatic identification by means of RF signals. Such systems operate at frequencies ranging from 125 kHz to 2.4 GHz [5]. The RFID Reader helps to read the data present in the tag. When a person places the tag near the reader, the reader antenna sends a command to the tag to retrieve the information. When the tag receives the command, it executes it and transmits the data to the reader. The built-in antenna inside the tag helps to transfer the data from tag to reader [6].
The reader is connected to the UL kit. The kit displays the person's unique number and name on a character LCD, while the person's photo is displayed on a graphic LCD. Communication between the reader and the UL kit is done via the RS-232 protocol, a mode of serial communication that transmits data between devices bit by bit. RS-232 here operates at a baud rate of 9600 bits per second and is widely used because of its ease of use. The data from the UL kit is transferred to the computer for further processing.
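As a minimal sketch of this serial link, the snippet below reads a card ID over RS-232 at 9600 baud using the pyserial library; the port name and the 12-byte frame length are illustrative assumptions, as the paper does not specify them.

```python
import serial  # pyserial

# Open the RS-232 link at 9600 baud, matching the UL kit's settings.
# Port name and 12-byte frame length are assumed for illustration only.
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=5)

def read_card_id() -> str:
    """Block until one tag frame arrives, then return the card ID as text."""
    frame = port.read(12)
    return frame.decode("ascii").strip()

if __name__ == "__main__":
    print("Card detected:", read_card_id())
```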
Design of Database
An efficient and robust database is maintained for processing RFID data. Initially, all the requirements for this system, as shown in Figure 3, are gathered: RFID card numbers, the name of the person using the card, residential address, average monthly consumption in liters, vehicle type, and usage value to date in liters. A table holding all of the above details is created for processing. The data, i.e., the card number from the reader, is queried and sent to the backend database [7]. The purpose of the database is to retrieve the data based on the card number and supply it to the PC.
Composition of System Architecture
As shown in Figure 1, the system architecture mainly comprises the RFID System, UL kit, PC, and database. The RFID System includes the RFID card, RS-232 link, RFID Reader, and antenna. The RFID System is attached to the UL kit, which in turn is attached to the PC via RS-232. The PC interacts with the database, which plays a key role by maintaining all the required information.
Working of System Architecture
This architecture is designed to achieve efficient usage of resources. The RFID System reads the data from tags: the data is transmitted to the reader, which is in read mode, and all data received by the reader is transferred serially to the PC via RS-232 [4]. The PC receives the data and requests from the database all the information associated with that number. The information retrieved from the database is displayed on the LCD of the UL kit, so the person at the filling station knows the usage and can decide whether to dispense petrol, which achieves efficient usage of resources.
Design of RFID System
The key components of the RFID System include a tag, a reader, an antenna, an RS-232 link, and the UL kit, as shown in Figure 2. The main purpose of this system is to read the number present in the RFID card and pass it on for further processing.
The PC then displays the details of the customer on the UL kit. The data modified at the filling station is updated in the database for future reference. The employee at the filling station should be careful while updating the value in order to avoid errors.
System Operation
At the time of purchasing a vehicle, the owner is provided with an RFID card. These cards contain unique identification numbers, which identify the owner and the type of vehicle being used. Based on the vehicle type, its monthly petrol consumption is calculated and stored in our database as a default value. The vehicle, of course, needs to be filled with petrol for transportation, and at the filling station the owner must present the RFID card in order to get petrol. Every filling station has an RFID Reader, which reads the unique number on the RFID card; the data is transmitted serially to the UL kit using the RS-232 protocol and then on to the PC [8]. The PC queries the database to find the vehicle's usage value. The value retrieved from the database, i.e., the usage value, and the user name are displayed on the UL kit, as shown in Figures 4 and 5. If the usage value is within range, the vehicle is allowed to be filled with petrol, and the value is decremented and updated in our database. If the remaining usage value in the database reaches zero, the owner is not allowed to fill petrol and must wait until the next month for renewal. The remaining usage value is automatically renewed in the database on the first day of every month. With this approach, an owner will not simply use the vehicle for shorter distances but may instead go on foot. Petrol consumption is thus limited so that it can be used over a longer time period.
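A minimal sketch of the quota lookup and decrement described above is given below using SQLite; the table name, columns, and return messages are illustrative assumptions, since the paper does not specify a database schema.

```python
import sqlite3

conn = sqlite3.connect("fuel_quota.db")
conn.execute("""CREATE TABLE IF NOT EXISTS owners (
    card_id          TEXT PRIMARY KEY,
    name             TEXT,
    remaining_litres REAL)""")

def dispense(card_id: str, litres: float) -> str:
    """Check the remaining quota for a card and decrement it if sufficient."""
    row = conn.execute(
        "SELECT name, remaining_litres FROM owners WHERE card_id = ?",
        (card_id,)).fetchone()
    if row is None:
        return "Unknown card"
    name, remaining = row
    if remaining < litres:
        return f"{name}: limit exceeded, wait for monthly renewal"
    conn.execute(
        "UPDATE owners SET remaining_litres = remaining_litres - ? "
        "WHERE card_id = ?", (litres, card_id))
    conn.commit()
    return f"{name}: dispensed {litres} L, {remaining - litres} L left"
```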
Conclusion
In this paper, we propose an RFID Technology based integrated approach for limiting resources such as petrol. RFID Technology, having become flexible and cheap, helps in developing an efficient design. Considering real-time constraints, an embedded device, the UL kit, is included in this design. With the RFID tag and the help of the database, we can track a person's details. After verifying the user's identity and the remaining usage value, the person is authorized to fill petrol. A web server can centralize the entire system for global access. The use of such systems helps greatly in conserving petroleum resources and preventing their depletion.
Hybrid computational modeling demonstrates the utility of simulating complex cellular networks in type 1 diabetes
Persistent destruction of pancreatic β-cells in type 1 diabetes (T1D) results from multifaceted pancreatic cellular interactions in various phase progressions. Owing to the inherent heterogeneity of coupled nonlinear systems, computational modeling based on T1D etiology helps achieve a systematic understanding of biological processes and T1D health outcomes. The main challenge is to design a reliable framework to analyze the highly orchestrated biology of T1D based on the knowledge of cellular networks and biological parameters. We constructed a novel hybrid in-silico computational model to unravel T1D onset, progression, and prevention in a non-obese-diabetic mouse model. The computational approach, which integrates mathematical modeling, agent-based modeling, and advanced statistical methods, allows for modeling key biological parameters and time-dependent spatial networks of cell behaviors. By integrating interactions between multiple cell types, the model captured the individual-specific dynamics of T1D progression and was validated against experimental data for the number of infiltrating CD8+ T-cells. Our simulation results uncovered the correlation between five auto-destructive mechanisms identifying a combination of potential therapeutic strategies: the average lifespan of cytotoxic CD8+ T-cells in islets; the initial number of apoptotic β-cells; the recruitment rate of dendritic cells (DCs); the number of binding sites on DCs for naïve CD8+ T-cells; and the time required for DC movement. Results from therapy-directed simulations further suggest that the efficacy of the proposed therapeutic strategies depends upon the type and time of administering therapy interventions and the administered amount of therapeutic dose. Our findings show that modeling the immunogenicity underlying autoimmune T1D and identifying autoantigens that serve as potential biomarkers are two pressing tasks for predicting disease onset and progression.
Introduction
Various autoimmune disorders influence human health; type 1 diabetes (T1D), a form of diabetes mellitus in humans and animal research, is a group of metabolic disorders in which insulin-secreting β-cells are targeted by biased decisions of the immune system. T1D progression after initiation follows multiple phase transitions in complex pancreatic cellular networks. As the primary infiltration of the immune response against self-antigens proceeds, numerous interactions including cell differentiation and competition between different types of cells accelerate the induction of autoimmune responses, which eventually leads to T1D onset and progression.
The barrier to sampling experimental data from pancreatic tissues or lymph nodes in high-risk T1D subjects makes the use of mathematical/computational modeling of pancreatic β-cell destruction an intriguing opportunity to analyze this disease. Questions and problems about the disease arise from researchers on a daily basis that would require long and tedious experimental work to answer. Since the model has been trained on multiple datasets, such a practical and flexible platform can widen the horizon with reasonable accuracy and provide preliminary insight into such questions, as the ABM can be quickly adjusted to address these inquiries and issues before conducting any experiments. In addition, designing an experiment requires lengthy paperwork and carries associated costs, even assuming all the tools, materials, and patients are available for lab work. Moreover, experiments alone often cannot explain the behavior of the very rich and complex developmental dynamics of pancreatic islets and β-cells, with feedback across different levels of biological organization, as pointed out in Anmar and Santiago's review [1]. Such a quantitative approach emerging from modeling the detailed biology of immune responses could provide novel insights into the mechanisms underlying the regulation of T1D when the safety, reproducibility, and efficiency of present experimental techniques are challenging. The challenge now is to design and construct such a quantitative framework for analyzing the highly orchestrated biology, immunology, and pathogenesis of T1D based on the current knowledge and modular organization of cellular networks and biological parameters. To unravel complex system behaviors resulting from the inter- and intra-cellular and signaling networks linking the immune system and metabolism during T1D progression, it is crucial to first integrate essential information from the inherent biological processes into a systematic framework.
The agent-based model is built on modeling interactions between a class of agents including antigen, pancreatic β cell, dendritic cell, naïve CD8+ T cell, activated CD8+ T cell, and cytotoxic CD8+ T cell. The rules that govern interactions between these agent classes were summarized based on the existing literature (described in detail in the Agent behaviors section). The selection of agent classes was based on expert knowledge and the available experimental data on infiltrating CD8+ T cells. Under different biological conditions, the model can predict the individual-specific trajectory of infiltrating CD8+ T cells. The model can also predict the time interval between antigen detection and the possible development of overt diabetes to mimic the variation in T1D progression. Moreover, the model provides a computational platform for testing the efficacy of different therapeutic strategies.
Simulation software packages
The data-driven agent-based model was implemented in the NetLogo environment, which is a multi-agent programmable language and integrated modeling platform [14,28-30]. The primary user interface of NetLogo comprises two-dimensional (2D) grids, in which two types of agents are used to construct an agent-based model (as shown in Fig 1). A patch is an immobile agent type that composes the background grid of the simulation space. Turtles, usually referred to as mobile agents, can interact with other turtles and maneuver over patches in the simulation space. Turtles can be classified into various types of agents for modeling different behaviors such as cell motility, interaction, and migration, and their associated attributes can be defined as state variables in the simulation. These state variables help differentiate the behaviors of individual agents and allow computational modelers to simulate the functions and/or actions of these agents as regulated by the system. The user-friendly interface of NetLogo also allows modelers to define model parameters and observe simulation results.
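To make the patch/turtle distinction concrete, the following minimal Python analogue (an illustrative assumption of ours, not the authors' NetLogo code) mirrors the two agent types and their per-agent state variables:

```python
# Minimal analogue of the two NetLogo agent types described above:
# immobile patches forming the grid, and mobile turtles carrying
# state variables and moving over patches.
from dataclasses import dataclass, field

@dataclass
class Patch:                       # immobile background grid cell
    x: int
    y: int
    kind: str = "circulation"      # e.g. "islet", "circulation", "PLN", "beta_cell"

@dataclass
class Turtle:                      # mobile agent (e.g. a naive CD8+ T cell)
    x: float
    y: float
    state: dict = field(default_factory=dict)  # per-agent state variables

    def move(self, dx: float, dy: float) -> None:
        self.x += dx
        self.y += dy
```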
Simulation initial setting
We generated a 201 × 201 2D grid in NetLogo as the simulation interface, designed to reflect a 2D projection of the pancreas in NOD mice. The 2D projection of the pancreas comprises three areas, including pancreatic islets, circulation, and PLNs, representing the three main locations of immune responses in T1D progression (as shown in Fig 1). A pancreatic islet is a complex zone where pancreatic β cells, pancreatic α cells, and pancreatic δ cells are located [31]. In pancreatic islets, β cells are targets of infiltrating CD8+ T cells, and a persistent assault on β cells results in T1D [32,33]. Approximately 900,000 pancreatic β cells reside in the pancreas of the NOD mouse [26,27]. To reduce the computational complexity, we simulated approximately one percent of the total number of β cells in the agent-based model. The number of other cell types was also proportionally reduced to accurately imitate the number of interacting cells. We used multiple patches (i.e. grids in the 2D NetLogo simulation interface) to represent pancreatic β cells because β cells are immobile during T1D progression.

Fig 1. Schematic illustrating pancreatic islets, circulation, and PLNs in the NOD mouse model. Patches are shown by the background grids in NetLogo; turtles are mobile agents moving over the background grids (e.g. a naïve CD8+ T cell). DCs and CD8+ T cells migrate from circulation and/or PLNs to pancreatic islets, which contributes to T1D progression. One pancreatic islet is shown (the area restricted by the dashed curve in the pancreas). Pancreatic islets may overlap and form clusters of islets in the pancreas. https://doi.org/10.1371/journal.pcbi.1009413.g001
To initialize the pancreatic β cells, 25 patches were selected as the centers of islets, and pancreatic β cells were then placed by an automated random process to fill a circular area with a radius of 38 grid units around the selected center patches. The average diameter of a pancreatic islet is approximately 100 μm [27], and thus we set the length of each grid unit to 1.32 μm in pancreatic islets. For the simulation size presented in this paper, the number of pancreatic β cells was fixed at 8080 counts. PLNs are distributed in the area surrounding the pancreas, where APCs interact with naïve CD8+ T cells [34-36]. We divided the entire interface of NetLogo into three regions to mimic pancreatic islets, nearby circulation, and PLNs, as illustrated in Fig 1. For the in-silico modeling environment, the random process by which different agents (cells) interact is more important than the actual physical morphology, which in vivo determines how these agents interact. The choice of agents is directly comparable to the cell types and tissue organization formed in the pancreas [14]. Therefore, the NetLogo setup is appropriate for this model.
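A hedged sketch of this initialization follows: 25 islet centers on a 201 × 201 grid, with β-cell patches placed randomly within a radius of 38 grid units (1 grid unit ≈ 1.32 μm) around each center. The sampling routine is an illustrative assumption, not the authors' NetLogo code.

```python
# Place 8080 immobile beta-cell patches inside 25 circular islets
# on the 201 x 201 simulation grid.
import random

GRID, N_ISLETS, RADIUS = 201, 25, 38

def init_beta_cells(n_beta=8080, seed=0):
    rng = random.Random(seed)
    centers = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(N_ISLETS)]
    beta_cells = set()
    while len(beta_cells) < n_beta:
        cx, cy = rng.choice(centers)
        x = cx + rng.randint(-RADIUS, RADIUS)
        y = cy + rng.randint(-RADIUS, RADIUS)
        inside_circle = (x - cx) ** 2 + (y - cy) ** 2 <= RADIUS ** 2
        if inside_circle and 0 <= x < GRID and 0 <= y < GRID:
            beta_cells.add((x, y))  # one patch = one immobile beta cell
    return centers, beta_cells
```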
Agent behaviors
Autoantigens, a trigger initiating the onset of T1D, are the first agents released by apoptotic β cells [33]. Apoptotic β cells in this manuscript refer to those β cells that are in the process of programmed cell death. In accordance with the autoantigen stimuli, the number of apoptotic β cells is initialized in the agent-based platform, and infiltrating CD8+ T cells in the simulated experiment then engage in autoimmunity. If immune tolerance is unable to restore homeostasis for infiltrating CD8+ T cells, β cells eventually succumb to damage and apoptosis. Resident DCs in the neighborhood engulf the released autoantigens and become APCs [33]. These islet-resident APCs migrate to PLNs at 15-18 days of T1D onset [33], where they present autoantigens and activate naïve CD8+ T cells [35]. The activation process has three stages. During the first stage, APCs have random short-span interactions with naïve CD8+ T cells; this process lasts 6-8 hours after the PLNs host the APCs [37]. Following the first stage, APCs establish prolonged and stable interactions (2-24 hours) with naïve CD8+ T cells, leading to activation of naïve CD8+ T cells [38]. Activated CD8+ T cells experience multiple rounds of differentiation with a time interval of 4-8 hours before they eventually egress the PLNs after 3-5 days and approach pancreatic islets [38,39]. As naïve CD8+ T cells become activated, APCs unbind from the activated CD8+ T cells and then randomly advance to stimulate the next available naïve CD8+ T cell. The lifespan of APCs ranges from 48 to 72 hours in the PLN [40,41]. For this purpose, each APC was assigned a state variable to indicate the time it has already lived, following a stochastic process within the lifespan range, a uniform distribution unif(48, 72). The value of the state variable was updated by 1 unit per simulation step, where each simulation step represents 1 hour in T1D progression. If the value of the assigned state variable for an APC agent in the PLNs exceeds 72 hours, the agent is forced to die. "Die" in NetLogo leads to the disappearance and removal of an agent from the simulation. During their lifespan, APCs can activate multiple naïve CD8+ T cells, depending upon the time required for triggering naïve CD8+ T cells. Each bound, activated CD8+ T cell is then able to proliferate into 4 to 7 new CD8+ T cells every 5 days, based upon the experimental data presented in [42].
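The APC lifespan bookkeeping described above can be sketched as follows: each APC samples a lifespan from unif(48, 72) hours, its age advances by 1 per simulation step (1 step = 1 hour), and the agent "dies" (is removed) once the age exceeds the sampled lifespan. Class and attribute names are illustrative assumptions.

```python
# Sketch of the per-APC lifespan state variable and removal rule.
import random

class APC:
    def __init__(self, rng: random.Random):
        self.lifespan = rng.uniform(48.0, 72.0)  # hours in the PLN, ~ unif(48, 72)
        self.age = 0.0                           # state variable: hours lived

def step_apcs(apcs: list) -> list:
    """Advance every APC by one simulation step and drop expired agents."""
    survivors = []
    for apc in apcs:
        apc.age += 1.0                           # 1 simulation step = 1 hour
        if apc.age <= apc.lifespan:
            survivors.append(apc)                # expired APCs are removed ("die")
    return survivors
```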
Activated CD8+ T cells enter efferent lymph vessels [43] and then migrate to islets through circulation [44]. In pancreatic islets, activated CD8+ T cells can be further stimulated by antigens released by pancreatic β cells and become cytotoxic CD8+ T cells [35]. Cytotoxic CD8+ T cells move within/between islets following a random pattern (Brownian motion), and their velocities are in the range of 10-15 μm/min [44]. The speed of cytotoxic CD8+ T cells was adjusted accordingly in the simulated experiment because the space between islets was designed for a 2D grid simulation interface. Cytotoxic CD8+ T cells destroy pancreatic β cells via direct interaction [32], and T1D progression proceeds as a result of the random movement of cytotoxic CD8+ T cells. Pancreatic β cells undergo an apoptotic process after they come into contact with cytotoxic CD8+ T cells [32,45]. As more pancreatic β cells endure the process of programmed cell death, the recruitment of DCs from circulation to islets is enhanced [46,47]. To this end, the number of recruited DCs was correspondingly calibrated using mass-action kinetics, which is discussed in detail in the Mathematical model section. Recruited DCs move within islets and engulf autoantigens through phagocytosis at sites of insulitis, and then migrate from islets to PLNs. Within PLNs, following these cycles of migration, the recruited APCs further activate naïve CD8+ T cells and accelerate the progression of pancreatic T1D.
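Because the paper states only that the speed "was adjusted" to the grid without giving the rescaling, the following sketch is one plausible scheme, assumed by us: a 1-hour step is split into 1-minute substeps with a freshly randomized direction each substep, approximating Brownian motion at 10-15 μm/min on a 1.32 μm grid.

```python
# One possible Brownian-like movement rule for cytotoxic CD8+ T cells.
import math
import random

UM_PER_GRID = 1.32   # grid spacing inside islets (see Simulation initial setting)

def brownian_hour(x, y, rng: random.Random, minutes=60):
    """Advance one cell for a 1-hour step via 1-minute random substeps."""
    for _ in range(minutes):
        speed = rng.uniform(10.0, 15.0) / UM_PER_GRID  # grid units per minute
        theta = rng.uniform(0.0, 2.0 * math.pi)        # isotropic random direction
        x += speed * math.cos(theta)
        y += speed * math.sin(theta)
    return x, y
```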
Pancreatic β cells may proliferate their offspring during T1D progression [48-50]. A recent study also found that regenerating and glucose-stimulated β cells can re-enter cell division cycles in a shorter period, compared to β cells recovering through cell division cycles under normal conditions [51]. To mimic these characteristics of β cells in NetLogo, each β cell is assigned a state variable that identifies its quiescence period. During the quiescence period, β cells are not able to divide and proliferate. Based upon the study [51], pancreatic β cells can remain in a quiescence period for approximately 7 days (168 hours). At the beginning of a simulation, we assume the quiescence period of β cells follows a uniform distribution (unif(0, 168)). If the state variable (associated with the quiescence period) of a specific β cell is equal to 72, it means that this β cell has stayed in a quiescence period for 72 simulation steps (note that 1 simulation step represents 1 hour in T1D progression). During the simulation, the quiescence period of each β cell was checked every simulation step. Once the quiescence period of a β cell exceeds the maximum quiescence period (e.g. 168 hours), the β cell can re-enter cell division and start to proliferate new β cells at the next simulation step. The quiescence period of β cells was shortened as the simulation proceeded, since regenerating and glucose-stimulated β cells re-enter the cell division cycles in a shorter period during T1D progression. The change in the quiescence period of β cells was formulated using a power function, as demonstrated in the Mathematical model section.
Mathematical model
To calibrate quantitative changes in agent numbers during T1D progression, we used mathematical expressions and models to describe the recruitment of circulating DCs and pancreatic β cell replication. For example, we used mass-action kinetics to calculate the number of DCs recruited from circulation, which depends upon the number of apoptotic β cells, expressed as follows:

DC_r|_{t=j+1} = DC_r|_{t=j} + k_r \, \beta_{AP}|_{t=j} \, DC_c

where DC_r|_{t=j+1} represents the number of recruiting DCs at simulation step j+1, and DC_r|_{t=j} represents the number of recruiting DCs at simulation step j. \beta_{AP}|_{t=j} represents the count of apoptotic β cells at simulation step j, k_r \sim N(\mu_r, \sigma_r) represents the recruiting rate of DCs in circulation for a given (\mu_r, \sigma_r), and DC_c is defined as the count of DCs in circulation. The number of circulating DCs during T1D progression was observed to remain unchanged [52]; DC_c was therefore assumed to stay constant in the simulation.
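A minimal sketch of this update rule follows, assuming the additive mass-action form reconstructed above (function and argument names are ours):

```python
# One-step mass-action update of the number of recruiting DCs.
import random

def recruit_dcs(dc_r, beta_ap, dc_c, mu_r, sigma_r, rng: random.Random):
    """DC_r(j+1) = DC_r(j) + k_r * beta_AP(j) * DC_c, with k_r ~ N(mu_r, sigma_r)."""
    k_r = rng.gauss(mu_r, sigma_r)      # recruiting rate drawn per step
    return dc_r + k_r * beta_ap * dc_c  # DC_c held constant per [52]
```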
Experimental studies reported that β cells had an increased rate of proliferation, functional recovery, and resistance to autoimmune destruction during T1D progression in a NOD mouse model [48,50,53,54]. A recent study also suggested that replicated β cells were able to re-enter cell division after a certain quiescence period [51]. This study also found that the quiescence period was shortened by an increased rate of glucose metabolism [51]. In addition, recent studies suggested that the glucose level was enhanced as the percentage of apoptotic β cells increased [55]. Based upon this experimental evidence, we assume that the quiescence period of viable β cells (apoptotic and dead β cells lose the ability to replicate) is associated with the percentage of apoptotic β cells in a time-dependent manner. To model β cell replication, functional recovery, and resistance to autoimmune destruction, we propose the following mathematical expression:

T_{b_i}|_{t=j} = \left(1 - \frac{\beta_{AP}|_{t=j}}{\beta_{init}}\right)^{\gamma} T_{b_i}|_{t=0}

where T_{b_i}|_{t=j} represents the quiescence period (the quiescence period represents a combined effect of replication of β cells, functional recovery of β cells, and β cells' resistance to autoimmune destruction) of the i-th β cell at simulation step j, and the power γ describes the degree of glucose metabolism in the system. The degree of glucose metabolism was estimated by the count of apoptotic β cells, \beta_{AP}: the higher the number of apoptotic β cells in the system, the shorter the quiescence period of surviving β cells. \beta_{init} represents the count of healthy β cells at the onset of T1D autoimmunity (simulation step = 0). The value of γ was estimated and set to 1 because when there are 70% apoptotic β cells in the islets, the quiescence period of β cells becomes about 2 days (0.3^γ × 7 ≈ 2) [51]. T_{b_i}|_{t=0} represents the quiescence period of β cells under normal conditions (simulation step = 0). A state variable recording how long each β cell had survived in islets was assigned to each β cell at the beginning of the simulation. A β cell started to undergo the replication process if the value of this state variable exceeded the value of T_{b_i}|_{t=j}.
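A sketch of this quiescence-period rule follows. With γ = 1 and a 7-day (168 h) baseline, 70% apoptotic β cells shorten the quiescence period to (1 − 0.7)¹ × 168 h ≈ 50 h (about 2 days), matching the calibration in the text; function and argument names are illustrative assumptions.

```python
# Time-dependent quiescence period of a viable beta cell (hours).
def quiescence_period(beta_ap, beta_init, t_b0=168.0, gamma=1.0):
    return (1.0 - beta_ap / beta_init) ** gamma * t_b0

def can_divide(hours_since_division, beta_ap, beta_init):
    # A beta cell re-enters the division cycle once it has stayed quiescent
    # longer than the current (glucose-shortened) quiescence period.
    return hours_since_division > quiescence_period(beta_ap, beta_init)
```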
Experimental data collection
In addition to mathematical models, we collected experimental data such as the percentage of cytotoxic CD8 + T cells proliferating within islets, the time required for DCs migrating from islets to PLNs, and the lifespan of naïve CD8 + T cells in PLNs from existing experimental studies. These data were incorporated into data-driven agent-based modeling as system parameters. Most of the data we collected were time-related experimental data, each of which helped to simulate kinetic analysis of the model. We collected experimental data from studies that were most similar to our simulation setting (e.g. in a NOD mouse model). We also investigated other parameters used in simulations if data were not available from experimental studies, which is explicitly described in the next section. A summary of the collected experimental data is provided in Table 1.
Parameter estimation
The Latin hypercube sampling (LHS) method is applied to estimate the default values of unknown parameters. The ranges of the unknown parameters were summarized from the literature and obtained from field experts' suggestions, and are included in S4 Table (i.e. ranges used for LHS). Based on the LHS method, the range of each parameter value is divided into n = 100 intervals and each interval of a parameter is sampled once. A total of 100 combinations of parameters generated from the LHS package in R was run on our local server to obtain the dynamics of CD8+ T cells and compare them with the experimental data. About 25 simulations were run simultaneously each time on the server (server-specific configuration: 4x Intel Xeon CPU E5-4650 2.1 GHz 48 cores with 192 GB of RAM). In addition, each set of parameters was run 20 times to obtain the average counts of infiltrating CD8+ T cells at week 4, week 6, week 8, week 10, week 12, and week 14. A set of parameter values was selected as the default values satisfying the condition that the difference between the simulated results and the experimental data at weeks 4, 6, 8, 10, 12, and 14 is optimally minimized. The results then showed that the kinetics of CD8+ T cells closely follow oscillatory dynamics, as observed and suggested by studies and experts [56-58]. Parameter ranges utilized for local sensitivity analysis are included in S4 Table. The ranges determined by the field experts are marked using asterisks. For other ranges, appropriate references are provided in S4 Table.

Table 1. Data from experimental studies in the NOD mouse model.
Percentage of CTLs proliferating in islets: 15.4-23.8% [42]
Number of islets in the pancreas: 2500 counts [61]
Lifespan of naïve CD8+ T cell in PLN: 3-5 days [38,39]
Time required for DCs migrating from islets to PLNs: 15-18 days [33]
Lifespan of DCs in PLN: 48-72 hrs [40,41]
Time required for naïve CD8+ T cell activation in PLN: 2-24 hrs [37]
Time required for β cells remaining in a quiescence period under a non-glucose-stimulated environment: 168 hrs [51]
Time required for β cells remaining in a quiescence period under a glucose-stimulated environment: 48 hrs [51]
Time required for DCs interacting with naïve CD8+ T cells: 6-8 hrs [37]
Time required for naïve CD8+ T cell differentiation in one cycle: 4-8 hrs [38,39]
Time required for β cell replication: 24 hrs [51]
Time required for CTL differentiation: 120 hrs [42]
Number of CTLs proliferating: 4-7 counts [42]
Islet diameter: 100-110 μm [27]
Time required for activated CD8+ T cell migrating from PLN to islets: 120 hrs [62]
https://doi.org/10.1371/journal.pcbi.1009413.t001

Local sensitivity analysis

An imperative characteristic of modeling studies of physiological systems, such as T1D, is the potential to discover how sensitive (or uncertain) different mechanisms of the model are to variabilities and thus pinpoint promising targets for therapeutic purposes. One aspect of sensitivity analysis is to determine how large a change in a given output is generated by perturbations in model input values [59,60]. The sensitivity analysis provides modelers insights into the inherent nature of the model by investigating the relationship between input and output variables. Within sensitivity analysis, local sensitivity analysis quantifies the effect of small variations of input factors on output variables when only one factor is changed at a time [59]. A global sensitivity analysis was conducted later to investigate the effect of multiple interactions between the inputs and the model output. Compared to the global sensitivity analysis (as described in the next section), local sensitivity analysis requires less computational cost [60]. For our simulation experiment, we deal with a total of N_P = 21 unknown parameters in the model (as tabulated in S4 Table). Local sensitivity analysis provides initial screening for sensitive parameters, and then helps target the most sensitive parameters for global sensitivity analysis. To conduct a local sensitivity analysis, one factor was altered at a time while the other factors remained unchanged. For the varying parameter, nine values, M = 9, were selected around the default (baseline) value, P_d. Default values were estimated to fit the experimental data (details are presented in the Results section) through numerous simulation replications. Four values were selected on the left side of the default value, and four values on the right side of the default value. The interval between the selected values was set equal to 5% of the default value (in total the values range within [P_d ± 20%×P_d]). Each selected value, P_s ∈ [P_d ± 20%×P_d], was assumed to follow a normal distribution.
The mean of the normal distribution was consecutively set to each of the nine selected values, and the standard deviation was calculated as 6% of the default value of the selected parameter (the maximum value of each parameter can be found in S4 Table), to ensure the parameter values extend over the larger range [P_d ± 30%×P_d]. N = 20 simulation runs were performed for each selected value to capture the stochastic nature of the agent-based model. For these simulation runs, the time required for overt T1D was recorded as the output variable of interest, X_s = X_s(P_s).
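The one-at-a-time sampling just described can be sketched as follows: nine values at 5% spacing within [P_d ± 20% P_d], each treated as the mean of a normal distribution with σ = 6% of P_d, and N = 20 replicate runs per value. Here run_model is a placeholder for one agent-based simulation run.

```python
# One-at-a-time local sensitivity sampling around the default value.
import random

def local_sensitivity(p_default, run_model, n_runs=20, seed=0):
    rng = random.Random(seed)
    selected = [p_default * (1.0 + 0.05 * k) for k in range(-4, 5)]  # M = 9 values
    results = {}
    for p_s in selected:
        sigma = 0.06 * p_default
        # N = 20 stochastic replicates; each draws the parameter around p_s.
        results[p_s] = [run_model(rng.gauss(p_s, sigma)) for _ in range(n_runs)]
    return results  # output of interest per value (e.g. time to overt T1D)
```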
For each selected value of the tested parameters, a boxplot was generated to measure the 25th percentile, median, and 75th percentile of the output variable, as formulated below:

\tilde{X}_s = \left( X_s^{(1)}, X_s^{(2)}, \ldots, X_s^{(N)} \right), \quad X_s^{(1)} \le X_s^{(2)} \le \cdots \le X_s^{(N)},

representing a series of data, where X_s^{(i)} is the i-th data point in the sorted vector. Q_1, Q_2, and Q_3 denote the 25th percentile, the median, and the 75th percentile of the sorted vector, respectively. By calculating the quartiles of a sorted vector, the distribution of the data can be easily evaluated.
Furthermore, a one-way ANOVA test [63] was performed to identify whether different levels of the unknown parameters have different effects on the model output (e.g. the time required for T1D development). The ANOVA test helped identify the sensitivity of the unknown parameters to the model output, using the following variance decomposition:

SST = SSR + SSE, \quad SSR = \sum_{k} n_k \left( \bar{y}_{\cdot k} - \bar{y}_{\cdot\cdot} \right)^2, \quad SSE = \sum_{k} \sum_{i} \left( y_{ik} - \bar{y}_{\cdot k} \right)^2,

where y_{ik} represents the i-th observation in the k-th group. In ANOVA tests, the total variation in the data is partitioned into two components, SSR and SSE, representing the sum of squares due to the between-groups effect and the sum of squared errors, respectively. (y_{ik} − ȳ_{·k}) is the variation of observations in each group from their group mean estimate, ȳ_{·k} (i.e. variation within the group), (ȳ_{·k} − ȳ_{··}) denotes the variation of the group means from the overall mean, ȳ_{··} (i.e. variation between groups), and n_k is the sample size of the k-th group. One should reject the null hypothesis H_0 if the p-value of the F test is smaller than the predetermined significance level (e.g. α = 0.05), which indicates that at least one of the mean values of the output variable (i.e. the time required for T1D development) is significantly different from the others.
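A hedged sketch of this test, using scipy.stats.f_oneway on the replicate outputs from the local sensitivity sampling (the significance level α = 0.05 follows the text):

```python
# One-way ANOVA across parameter levels: is a parameter sensitive?
from scipy.stats import f_oneway

def parameter_is_sensitive(results, alpha=0.05):
    """results: dict mapping each selected parameter value to its N outputs."""
    f_stat, p_value = f_oneway(*results.values())  # groups = parameter levels
    # Reject H0 (all group means equal) if p < alpha: the parameter level
    # significantly affects the time required for T1D development.
    return p_value < alpha, f_stat, p_value
```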
Global sensitivity analysis
Global sensitivity analysis is a technique applied to simulations to quantify the effect of input variables on output variables when the input variables are varied over their entire possible ranges [60]. Both the extended Fourier amplitude sensitivity test (eFAST) and Sobol's method deliver measures for the decomposition of the total variance of the model output into the main and interaction effects of each parameter. Compared to Sobol's method, eFAST is more computationally efficient because it requires fewer samples/simulations [60]. As such, we implemented eFAST, which is an advanced version of the Fourier amplitude sensitivity test (FAST). Introduced in the 1970s, FAST was proposed to implement sensitivity analysis for both monotonic and nonmonotonic models. Compared to other sensitivity analyses such as partial correlation coefficients, FAST can more efficiently explore the multidimensional space of input factors using a suitably defined search curve [64]. The search curve for the input factor P_i is defined as follows:

q_i(s) = \frac{1}{2} + \frac{1}{\pi} \arcsin\left( \sin(\omega_i s) \right), \quad (1)

where q_i, representing a search curve in the unit interval (q_i ∈ [0,1] for i = 1,2,...,N_l), was proposed by Saltelli et al. [64], and N_l = 5 represents the number of sensitive parameters based on the local sensitivity analysis. The search curve generates normalized sample points more uniformly distributed in the unit interval [64]. In Eq (1), ω_i for i = 1,2,...,N_l ≤ N_P = 21 denotes the corresponding frequency for q_i corresponding to the normalized value of the i-th input factor, and s varies within the interval (0, 2π). The selection of ω_i starts from the determination of ω_max for a series of input factors such as Q = {q_i, i = 1,...,N_l}. A maximum frequency ω_max for the series of input factors can be obtained from the equation ω_max = (n − 1)/(2m), where n is the predetermined sample size (e.g. n = 200 for our simulation study) and m denotes the interference factor (the default value is 4) [65]. For the remaining frequencies, the maximum allowable frequency is given by

\omega'_{\max} = \frac{\omega_{\max}}{2m},

where ω'_max represents the second maximum frequency for the factor set Q. This strategy for selecting the other frequencies is applied to ensure the search curves have distinct frequencies for ω_max and ω'_max. Thus, the search curve can fill the sampling space as much as possible. For a detailed description, one can refer to the automated algorithm proposed by Saltelli et al. [64]. Moreover, since the range of the input factor q_i is from 0 to 1, we implemented a quantile function to transform the unit factor q_i to a realistic value of the input factor X_i. eFAST is developed based upon the original FAST, and the advantage of eFAST over the original FAST is the ability to evaluate not only the first-order sensitivity index (i.e. the effect of one factor) but also the total-order sensitivity index (i.e. the interaction effect between input factors), and how these interaction effects may affect the model outputs [60,66]. The first-order sensitivity index and the total-order sensitivity index for i = 1,...,N_l can be obtained using the following equations:

A_j = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(s) \cos(js) \, ds, \quad (2)
B_j = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(s) \sin(js) \, ds, \quad (3)
\Lambda_j = A_j^2 + B_j^2, \quad (4)
\hat{D}_i = 2 \sum_{p=1}^{m} \Lambda_{p\omega_i}, \quad (5)
\hat{D} = 2 \sum_{j \ge 1} \Lambda_j, \quad (6)
S_i = \hat{D}_i / \hat{D}, \quad S_{T_i} = \hat{D}_T / \hat{D}, \quad (7)
\hat{D}_T = \hat{D} - 2 \sum_{j=1}^{\omega_i/2} \Lambda_j. \quad (8)

Eqs (2)-(8) define the Fourier coefficients A_j and B_j for i = 1,...,N_l in a Fourier series, where the real-valued function f(s) represents the simulation results expanded in a Fourier series [64].
The reason why the Fourier coefficients were introduced is that the partitioning of variance in eFAST works by varying different parameters at different frequencies [66]. \hat{D}_i for i = 1,...,N_l represents the fraction of the total variance induced by the uncertainty of the i-th input factor. \hat{D}_T captures the variance induced by the input factor and the interactions between the input factor and other input factors (first + higher order). \hat{D}_T can be calculated as the difference between the total variance \hat{D} and the complement term 2 \sum_{j=1}^{\omega_i/2} \Lambda_j, as given in Eq (8). The complement term helps calculate the variance produced by frequencies other than ω_i. The ratio \hat{D}_i / \hat{D} was applied to estimate the main effect of the i-th input factor on the output, and the ratio \hat{D}_T / \hat{D} was employed to calculate the main and interaction effects of the i-th input factor on the output, respectively.
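A hedged sketch of such an eFAST analysis using the SALib package follows; the paper does not state which implementation was used, and the parameter names, bounds, and toy model below are placeholders standing in for the five sensitive parameters.

```python
# eFAST global sensitivity analysis with SALib (n = 200, interference m = 4).
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 5,
    "names": ["ctl_lifespan", "init_apoptotic_beta", "dc_move_interval",
              "dc_binding_sites", "dc_recruit_rate"],
    "bounds": [[100.0, 140.0], [100.0, 600.0], [1.0, 10.0],
               [1.0, 8.0], [0.01, 0.10]],   # placeholder physiological ranges
}

def run_model(x):
    # Toy stand-in for one ABM run returning time (weeks) to overt T1D;
    # replace with the actual agent-based simulation.
    return 0.1 * x[0] - 0.5 * x[2] + 0.02 * x[0] * x[4]

X = fast_sampler.sample(problem, 200, M=4)
Y = np.array([run_model(x) for x in X])
Si = fast.analyze(problem, Y, M=4)   # Si["S1"]: first-order, Si["ST"]: total-order
```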
Therapy-directed simulations
Therapy-directed simulations were conducted to verify the effects of hypothetical therapies on the output variable, the time required for T1D development. The goal of implementing therapy-directed simulations is to identify possible therapeutic strategies that could reduce the chance of T1D development. For the simulation purpose, T1D was considered developed if the survival percentage of healthy β cells fell within the range of 10%-30% of the initial healthy β cells (the total number of β cells at simulation step 0) [45,48]. The hypothetical therapies are proposed based upon the results from both the local and global sensitivity analyses. For the therapy-directed simulations, 50 simulation runs were conducted for each strategy, and the probability of developing T1D was calculated using the following formulation:
P_{T1D} = \frac{\text{Number of T1D cases observed in simulations}}{\text{Total number of simulations}},
where P T1D represents the probability of leading to T1D under the proposed strategy.
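As a concrete illustration, a minimal sketch of this estimate follows; run_simulation is a placeholder of ours for one stochastic ABM run under a given therapy.

```python
# Estimate P_T1D from 50 replicate therapy-directed simulation runs.
def estimate_p_t1d(run_simulation, therapy, n_runs=50):
    """run_simulation returns True if overt T1D developed under the therapy."""
    cases = sum(1 for _ in range(n_runs) if run_simulation(therapy))
    return cases / n_runs   # P_T1D = T1D cases / total simulations
```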
Validation of model using experimental study
To cross-validate the agent-based model, we compared the number of infiltrating CD8+ T cells from multiple simulation runs to the number of infiltrating CD8+ T cells collected in the experimental study [62]. Magnuson et al. [62] studied the population dynamics of infiltrating CD8+ T cells in a NOD mouse model. They measured the number of infiltrating CD8+ T cells at weeks 4, 6, 8, 10, 12, and 14, respectively, and provided us the raw data for model validation (please refer to the Acknowledgments section). Our simulated results demonstrated that the number of infiltrating CD8+ T cells agrees with the experimental data in a time-dependent manner (R² = 0.98, calculated by comparing the mean values of the model fit and the experimental data), as shown in Fig 4. As illustrated in Fig 4, a significant increase in infiltrating CD8+ T cells occurs around week 8 in the simulation, which is consistent with the findings of previous studies [67,68]. During the time course of T1D progression, activated CD8+ T cells continue to migrate from pancreatic lymph nodes (PLNs) to the site of the islets, which causes persistent damage to β cells and eventually leads to overt T1D after week 12 (see S1 Movie). The oscillatory dynamics and kinetics of CD8+ T cells have been postulated to follow from a series of immunological responses, including the death of CD8+ T cells in islets, the proliferation of CD8+ T cells within islets, and the recruitment of CD8+ T cells from PLNs to islets due to β-cell proliferation (see the agent rules presented in S2 Table), as also observed and proposed by Trudeau et al., Vendrame et al., and von Herrath et al. [56-58]. These immunological responses were incorporated in the ABM framework, and average counts of infiltrating CD8+ T cells (denoted by the red line and illustrating cyclic behavior) were calculated based upon 100 simulation runs (mean ± SEM). Furthermore, regulated patterns with individual variations in the model results concur with experimental data reflecting cell fate and population heterogeneity, demonstrating that T1D progression is an inherently noise-driven process yet a highly orchestrated and robust physiological mechanism. Besides, considering the 21 unknown parameters in the system, the number of possible combinations is extremely large; therefore, Latin hypercube sampling (LHS) [69] was applied to efficiently scan parameter spaces and help estimate accurate values and distributions of the unknown parameters. By minimizing the difference between the simulated results and the experimental data (R²), the default values of the unknown parameters were estimated and listed in S4 Table. To understand the effect of infiltrating CD8+ T cells on pancreatic β cells, time-dependent counts of pancreatic β cells were tracked. As depicted in Fig 5A, counts of healthy β cells start to decrease significantly at around week 8 (when the simulation in the ABM interface shows 1344 steps, equivalent to 1344 hours; note that 1 simulation step represents 1 hour in T1D progression). The survival percentage of β cells in pancreatic islets persistently decreases, accompanying a continuous increase of infiltrating CD8+ T cells in islets (as shown in Fig 4). Meanwhile, discernible increases in β cell counts occurring after week 12 were also detected following repeated decreases.
This phenomenon is referred to as the "Honeymoon Period," which may be induced by β cell regeneration, resistance to autoimmune destruction, and functional recovery during T1D progression [9,10,12,49,50,53,54], in congruence with the experimental observations reported in [56,70-72].
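For reference, the goodness-of-fit measure quoted above (R² = 0.98) can be computed as sketched below, comparing mean simulated counts of infiltrating CD8+ T cells against the experimental means at weeks 4-14; the inputs here are placeholders, not the paper's data.

```python
# Coefficient of determination between mean simulated and experimental counts.
import numpy as np

def r_squared(sim_means: np.ndarray, exp_means: np.ndarray) -> float:
    ss_res = np.sum((exp_means - sim_means) ** 2)
    ss_tot = np.sum((exp_means - exp_means.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```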
Although there were no experimental data for the dynamics of β cells in the NOD mouse model from the same study of T-cell dynamics to directly verify the results in Fig 5, the model captures the onset of overt diabetes in good agreement with the distribution of the time at which NOD mice become diabetic extracted from the existing literature [45,48,57,73-75]. The time at which NOD mice become diabetic is heterogeneous, and most diabetes incidences in NOD mouse models occur between week 12 and week 20 [45,73,74]. Simulated results corroborating the highly regulated heterogeneity in T1D incidence demonstrate, as shown in Fig 5B, that T1D development takes place most frequently between week 12 and week 20 after the onset of autoimmunity. It is also worthwhile to point out that, based upon 100 simulation runs, 50% (50 out of 100) of T1D cases occur between week 16 and week 18. Analogous to the NOD mouse model, the onset of disease in the ABM model is not an age-dependent event; T1D is assumed to become overt in the simulations when the survival percentage of healthy β cells falls randomly within the range of 10%-30% of the initial healthy β cells (the total number of β cells at simulation step 0).
Sensitivity analysis
One of the major challenges of modeling a complex biological system is the lack of sufficient experimental data. Constructing a standardized and reliable computational procedure from statistical learning to analyze the current biological knowledge and available experimental data is therefore essential. For this purpose, we propose to investigate such a lack of information using ABM to quantitatively simulate the performance of phase progressions in complex pancreatic cellular networks during T1D autoimmunity. In addition, numerous essential factors and parameters are unknown (as demonstrated in S4 Table) during T1D progression. By implementing sensitivity analysis, the effects of variations and heterogeneities of the unknown parameters on the output of interest (e.g. the time required for overt T1D occurrence) can be thoroughly studied.

Fig 5 (caption fragment): [45,73,74]. The horizontal axis represents the time required for developing overt T1D in silico, and the vertical axis denotes the relative frequency of T1D incidence. It is important to note that T1D onset occurs in our simulations based on the assumption that the survival percentage of healthy β cells falls randomly within the range of 10%-30% of the initial healthy β cells (the total number of β cells at simulation step 0). https://doi.org/10.1371/journal.pcbi.1009413.g005

We initially investigated the impacts of the unknown parameters on the outcome of T1D progression by implementing local sensitivity analysis to identify the most sensitive parameters that significantly perturb the dynamics of the complex system. As a primary screening method, local sensitivity analysis provides information on how heterogeneity in each factor changes the profile and behavior of T1D autoimmunity. Note that the local sensitivity analysis implemented here is employed by fluctuating the magnitude of one parameter while keeping the values of the other parameters fixed [76]. The underlying assumption of the local sensitivity analysis is that the relationship between input parameters and output profiles of the model is undeviating and almost linear when the change in input parameters is relatively small. In ABM, local sensitivity analysis is often achieved by varying selected inputs within their confidence intervals based upon a specific percentage surrounding their default values [60]. To accurately illustrate the effects of the unknown parameters on the output profiles of the model, we conducted 20 simulation replications for each selected parameter value for the local sensitivity analysis (details on value selection were described in the Local sensitivity analysis section). A summary of results from the local sensitivity analysis is illustrated in Fig 6. Among all the unknown parameters, five sensitive parameters are presented in Fig 6. Using the local sensitivity analysis, boxplot-and-raw-data distributions of T1D occurrence were captured and plotted in Fig 6. By means of a one-way analysis of variance (ANOVA) test, we identified the following five sensitive parameters during T1D progression: 1) the average lifespan of cytotoxic CD8+ T cells (CTLs) within the pancreatic islets, shown in Fig 6A; 2) the initial number of damaged β cells at simulation step 0, shown in Fig 6B; 3) the time interval of DCs movement in islets, shown in Fig 6C; 4) the maximum number of naïve CD8+ T cells binding to DCs, shown in Fig 6D; and 5) the recruitment rate of DCs within the pancreas, shown in Fig 6E.
We only focused on T1D incidence within 20 weeks after the onset of autoimmunity, as suggested by experimental studies [45,73,74]; if no overt T1D was observed within this time interval, we reported the highest possible value for the simulation run, which was set to 32 weeks. As depicted in Fig 6, we discovered that a delay in T1D occurrence, revitalizing the survival of healthy β cells, is virtually inevitable if the following conditions are met: 1) the recruitment rate of DCs is reduced; 2) the maximum number of naïve CD8+ T cells binding to DCs is reduced; 3) the time interval of DCs movement increases. Moreover, we observed that T1D progression may be inhibited if the average lifespan of cytotoxic CD8+ T cells in islets is reduced to five days or less, or if the number of apoptotic β cells is less than a certain threshold, estimated as at most 510 damaged β cells when the simulation begins (the range for the initial number of damaged cells has been scaled up to reflect the actual size of the β cell population). It is also worthwhile to point out that the acute and progressive onsets of diabetes chronicled by different studies through differentiation of the T1D phenotype in the NOD mouse model [10,77,78] were also captured in our sensitivity analysis, as shown in Figs 6 and S4.
The local sensitivity analysis provides interesting modeling insights by changing one parameter at a time. However, the output profiles of the simulation can be significantly changed by systematic variations in key model parameters. Thus, it is worthwhile to investigate how variations in the input parameters change the model output through their interactions with other parameters. To further decipher the interaction effects between parameters, we applied the variance-based extended Fourier amplitude sensitivity test (eFAST) method for global sensitivity analysis. To this end, five sensitive parameters were selected based upon the results from the local sensitivity analysis to perform eFAST. After transforming the search curves into physiological ranges (as shown in S2 Fig), the first-order and total-order indexes were calculated using Eqs (2)-(8), and the main effects and interaction effects of the five selected parameters on the model output are shown in Fig 7. The model output in these simulations, as aforementioned, constitutes the time required for developing overt T1D in NOD mice. If T1D was not detected within 20 weeks of autoimmunity (i.e. the number of remaining healthy β cells exceeds a threshold within the range of 10%-30% of initial β cell counts in each simulation), we assume no T1D occurrence was observed for this experiment, concurring with experimental studies [45,73,74].
In this case, as shown in Fig 7A, the full data set comprised all possible outcomes of the model output in each simulation run, whether T1D occurred within 20 weeks of autoimmunity or not. As a result, the lifespan of cytotoxic CD8+ T cells plays a key role in the global sensitivity analysis of the model output, since the first-order index of this parameter approaches 70% (sensitivity of parameter P_1 shown in Fig 7A). Corroboration for this conclusion stems from the fact that an elongated lifespan of cytotoxic CD8+ T cells in islets is directly correlated with an increase in the number of cytotoxic CD8+ T cells during T1D progression. Fewer cytotoxic CD8+ T cells survive during the disease if the average lifespan of cytotoxic CD8+ T cells is reduced over time. T1D progression is significantly delayed or even inhibited if a limited number of cytotoxic CD8+ T cells circulate in the islets, which may point to a promising therapeutic target (also observed in Fig 6A). In contrast to the average lifespan of cytotoxic CD8+ T cells in islets, other input factors such as the maximum number of naïve CD8+ T cells binding to DCs and the recruitment rate of DCs circulating in the pancreas exhibit minimal first-order effects on the sensitivity of the model output, while they contribute approximately 25% to the model output when interaction effects are included.
Fig 6 (caption fragment): Boxes show Q_1, Q_2, Q_3, the first quartile, median, and third quartile values for selected points within the spectrum of [P_d ± 30% × P_d]. For these five sensitive parameters (shown by the labels of the horizontal axes), the p-values of the F tests were smaller than the predetermined significance level (e.g. α = 0.05). https://doi.org/10.1371/journal.pcbi.1009413.g006

Fig 7. (A) eFAST sensitivity based upon the full data set; (B) eFAST sensitivity based upon the reduced data set that includes only cases with T1D occurrence. The horizontal axes represent the five sensitive parameters. The parameters P_1, P_2, P_3, P_4, P_5 represent the average lifespan of cytotoxic CD8+ T cells in islets, the initial number of damaged β cells (β_init at simulation step 0), the time interval of DCs movement in islets, the maximum number of naïve CD8+ T cells binding to DCs, and the recruitment rate of DCs circulating in the pancreas, respectively. The vertical axes denote the eFAST sensitivity analysis for the first-order and total-order indexes. The first-order indexes are denoted by pink portions starting from 0, and total-order indexes are illustrated by both grey and pink portions. The first-order index reflects the variance induced by the uncertainty of a single input factor, and the total-order index represents the output variance abiding by the interaction between this input factor and other input factors. https://doi.org/10.1371/journal.pcbi.1009413.g007

In Fig 7B, we observe that the first-order and total-order effects of the five parameters changed drastically when the reduced data set (indicating that only T1D incidences were considered for the eFAST analysis) was included in the global sensitivity analysis. The purpose of implementing the reduced data set for global sensitivity analysis is to investigate how strongly these input factors perturb T1D progression and the onset of hyperglycemia or the honeymoon phase once overt T1D has developed. Interestingly, we observed that the selected input factors have comparable main effects on variations of the model output (as shown in Fig 7B). Compared to the full data set, our findings show that more interaction effects on the global sensitivity analysis of the model output were observed in the reduced data set. This indicates that interaction effects between the five parameters contribute more to T1D progression once T1D occurrence is apparently inevitable.
Incorporation of therapy-directed approaches
In the previous section, we applied sensitivity analysis methods to investigate the main and interaction effects of unknown parameters on T1D development. Five essential parameters were found to be highly correlated with overt T1D progression. Moreover, interaction effects were observed to be more related to the onset of T1D progression once T1D developed. Considering that the parameters (i) initial number of damaged β cells and (ii) average time interval of DCs movement in islets may be difficult to regulate in experimental studies, we focused on the three parameters including the average lifespan of CTLs in islets, the maximum number of naïve CD8+ T cells binding to DCs, and the recruitment rate of DCs for targeted therapeutic strategies. The objective of the therapy-directed simulations is to test the following hypothesis: the occurrence of T1D is significantly delayed or inhibited by therapeutic interventions if 1) the lifespan of CTLs in islets is reduced; 2) the capacity of naïve CD8+ T cells binding to DCs is reduced; 3) the recruitment rate of DCs is reduced; or 4) two of the above strategies (1)-(3) are applied simultaneously by a combination of these therapeutic interventions.
As depicted in Fig 8A, the heatmap shows that the probability of T1D occurrence significantly declined when the average lifespan of CTLs in islets was reduced. Furthermore, the occurrence of T1D depends on the time of administering the therapeutic intervention (x-axis) and the applied amount of the therapeutic dose range (y-axis). When the proposed therapy based upon our simulation findings was administered as a single dosage regimen at week 4, a significant decline in T1D occurrence was observed. However, when the same amount of therapeutic dose range was administered at a later stage of T1D progression (e.g. from week 14 to week 16), T1D occurrence was not significantly delayed or prevented compared to administering the therapeutic intervention at an early stage. Also, the percentage of T1D occurrence, ranging from 60% to 70%, was reduced to the range of 30% to 40% when the lifespan of CTLs was reduced from 120 hours to 114 hours at week 8 (black circles filled white in Fig 8A). However, higher single-dose regimens or multiple-dose regimens are required to reduce T1D occurrence to 30% when therapeutic interventions are administered at week 12. Compared to the targeted therapeutic strategy that weakens the longevity of CTLs in islets, both the therapy that reduces the capacity of naïve CD8+ T cells binding to DCs and the therapy that reduces the recruitment rate of DCs in the pancreas are less effective, as illustrated in Fig 8B and 8C.

Fig 8. Regions with red color represent a higher incidence of diabetes, and regions with blue color represent a lower incidence of diabetes. (A) Therapy 1 denotes the strategy that reduces the longevity of CTLs in islets. (B) Therapy 2 is described as an intervention that inhibits DC infiltration into islets. (C) Therapy 3 denotes a strategy that mitigates the binding process of naïve CD8+ T cells on DCs. Therapeutic interventions were implemented based upon single dosage regimens starting at week 4 to week 16. Two black circles filled white in Panel (A) show the change in T1D occurrence under Therapy 1 when the lifespan of CTLs was reduced from 120 hours to 114 hours at week 8. https://doi.org/10.1371/journal.pcbi.1009413.g008
The effects and efficacies of combinations of therapeutic strategies on T1D development were then investigated, as shown in Fig 9. Compared to a single therapeutic intervention, combinations of therapeutic strategies significantly reduce the likelihood of overt T1D instances at a late stage of T1D progression (week 12 as shown in Fig 9 and week 14 as shown in S5 Fig).
Specifically, the incidence of overt T1D ranges approximately from 50% to 70% when a single therapy is implemented at week 12 (Therapy 2 or Therapy 3, as shown in Fig 8B and 8C). However, the occurrence of overt T1D diminished, shifting from the range of 50%-70% to 20%-40%, when a combination strategy was applied (as depicted in Fig 9C1-9C3).
These findings from therapy-directed simulations indicate that the effectiveness of the therapeutic strategies on T1D progression depends on three main factors: the time of administering therapeutic interventions, the administered amount of the therapeutic dose range, and the type of therapeutic intervention based upon the key parameters.

Fig 9. Therapeutic interventions were implemented based upon single dosage regimens starting at week 4 (Panels A1-A3) to week 16 (Panels D1-D3). Therapy 1 represents a strategy that reduces the residence of CTLs in islets. Therapy 2 is described as an intervention that can inhibit DC infiltration into islets. Therapy 3 denotes a strategy that prohibits binding sites on DCs for naïve CD8+ T cells. https://doi.org/10.1371/journal.pcbi.1009413.g009
Discussion
Human diseases frequently involve networks of complex inter-and intra-cellular and signaling interactions linking the immune system and metabolism during disease progression. Mathematical and computational approaches emerging from modeling the detailed biology of immune responses have been successfully used to explain non-intuitive behaviors and characterize variations in disease [79,80]. The development of T1D involves a complex network between pancreatic β cells and cells of both innate and adaptive immune systems [81], which requires a systematic level of understanding of onset, progression, and prevention of the disease. Herein, we sought to reconcile how cellular-level insights about the underlying interplay of immune responses and diabetes affect the ultimate behaviors of type 1 diabetes.
To accomplish this goal, we developed a hybrid modeling structure. Previous studies, including our earlier research papers, constructed mechanistic mathematical models to investigate various components in T1D and sepsis [4,9-12]. We have also developed data-driven ABM to carry out in silico therapy-directed experiments in a mouse model and to investigate immune responses in human cell lines [14,82]. With previous experience and established knowledge in both mathematical modeling and ABM of immune responses, we identified the most accurate platform, as a promising framework for the physiological and etiological process, to simulate complex cellular interactions in T1D: one that integrates data-driven ABM and mathematical modeling with statistical components.
One major advantage of applying an agent-based framework to modeling a biological system is that it can capture spatial and noise-driven effects in the highly orchestrated movements of different agents during a biological process that remains incompletely understood and is normally underestimated by mathematical models [15,42]. However, ABM would in some cases require extensive computational resources because of its detailed representation of the system. In realistic T1D progression, interactions among 10^6-10^7 cells were observed [23,43-45]. In this case, when an agent-based simulation was implemented at such a high level of computational complexity, the computational efficiency was relatively low (one simulation run took more than 10 hours to conduct a 20-week T1D progression using our validated model). Such high computational complexity would further hinder the sensitivity analyses because they require massive repetitions of simulation runs when varying parameter values [16]. To improve the computational efficiency, we proportionally reduced the number of agents (each agent in silico represents one cell in vitro; details are presented in the Materials and Methods section) in the agent-based simulation. The agent-based simulation with the reduced size can simulate a 20-week T1D progression within one hour using a high-performance computer workstation, which allowed for further high-throughput computing analysis.
By implementing advanced statistical methods, we designed the hybrid computational framework to model the complex network, primarily focusing on the simulation of interactions between DCs, cytotoxic CD8+ T cells, and pancreatic β cells during T1D progression. Simulated results recaptured the individual trajectories of cytotoxic CD8+ T cells in experiments (R² = 0.98, based upon the comparison between mean values of model results and mean values of experimental results), as illustrated in Fig 4, suggesting that the model can successfully calibrate and predict the progression of the disease. Our simulations confirmed the occurrence of the individual-specific "honeymoon phase" by incorporating the combined effect of β-cell replication, functional recovery, and resistance to autoimmune destruction into the model. Owing to the hybrid framework, we could explain the complex, highly orchestrated, and robust physiology of cell fate and population heterogeneity even though the underlying mechanisms constitute a noise-driven process. Despite the stochastic behavior of regulatory circuits within agents, our results, validated with experimental data, show that cellular networks are precisely regulated, leading to autoimmunity, and β cells eventually succumb to damage (Figs 4 and 5).
The manifestation of overt T1D was shown to be associated with the loss of β-cell mass [33,48,50]. Thus, we calibrated the kinetics of pancreatic β cells in our simulation runs and used the number of residual healthy β cells as a means of quantifying the magnitude of the initial stimulus for T1D progression. It was also shown that β-cell transplantation or regeneration is not sufficient for treating the T1D disorder [83]. One major concern is that regenerated β cells may also become the new target of CTLs during T1D progression, and β cells eventually succumb to damage and apoptosis [32] if the factors driving T1D progression fail to control homeostasis and immunological tolerance in alienating lymphocytes. To identify the driving factors leading T1D progression to overt T1D, we implemented local and global sensitivity analyses using in silico data generated in our simulations and identified the average lifespan of cytotoxic CD8+ T cells, the initial number of apoptotic β cells, the number of binding sites on DCs for naïve CD8+ T cells, the time interval of DCs movement in islets, and the recruitment rate of DCs as the five key drivers of T1D progression. The time required for developing T1D was the primary output of interest in the sensitivity analyses, which was also the target of the in silico therapy-directed experiments. Based upon these therapy-directed simulations, we discovered that the probability of T1D development could be reduced if appropriate strategies were applied at various time windows during T1D progression.
Cytotoxic CD8+ T cells have long been recognized as a major driving force in the development of T1D [84]. Several experimental studies have found that diabetes can be prevented in animal models by down-regulating cytotoxic CD8+ T cells [85–88]. Pinkse et al. [84] focused on administering immunodominant peptides derived from major antigens to down-regulate the cytotoxic CD8+ T cells associated with β-cell destruction in type 1 diabetes. However, given a limitation of peptide therapy (the short half-life of peptides in the circulation), our model results suggest that further studies on other therapeutic options for reducing the lifespan of cytotoxic CD8+ T cells are needed to improve the survival rates of β cells. Several studies have provided a thorough discussion of multiple ways of regulating the longevity of immune cells, including the use of pharmacological agents, telomerase activation, inhibition of apoptosis, and reversal of anergy [89–93]. These studies have demonstrated that multiple methods can be employed to regulate the lifespan of immune cells, which provides additional justification for implementing parameter modifications such as the "lifespan of cytotoxic CD8+ T cells" in our modeling. Moreover, the model results demonstrated that the likelihood of T1D occurrence would decline significantly if the administered amounts of therapy were increased at a late stage of T1D progression (as shown in Fig 8), which could also be a potential direction for future experimental studies.
Our simulations also suggested that T1D progression would be markedly delayed if the recruitment rate of DCs were reduced (as depicted in Fig 6E and 6B). Previous studies demonstrated that depletion of DCs helps prevent or reverse T1D in NOD mice [32,81,94]. Interestingly, our simulation study showed that this can be explained by the fact that, since the primary role of DCs is to function as APCs both in the periphery and within islets [32], the activation and islet invasion of CTLs decreased when the number of DCs in islets was reduced. As depicted in Fig 8, reducing the recruitment rate of DCs is less effective at improving survival at a late stage of T1D progression. It should also be noted, however, that the computational model focused on CD8+ T cells only; it is therefore important to point out that a certain level of mature dendritic cells may also be important for the generation of regulatory CD4+ T cells that inhibit T1D progression [95]. The reasons that CD4+ T cells were not included in the model are discussed in depth below.
Our simulations further suggested that the number of binding sites on DCs for naïve CD8+ T cells is another key parameter that could impact T1D progression. In early treatment of NOD mice, the likelihood of autoimmune diabetes was reduced when the value of this parameter declined (as illustrated in Fig 6D). The binding of naïve CD8+ T cells to DCs is a crucial step in the activation of naïve CD8+ T cells in PLNs [32,33,35]. The activated CD8+ T cells in PLNs then migrate to islets and become cytotoxic CD8+ T cells. When the number of binding sites on DCs for naïve CD8+ T cells decreased, the window of opportunity for naïve CD8+ T cells to become activated narrowed, and therefore the number of cytotoxic CD8+ T cells in islets declined. Thus, the model suggests that reducing the capacity of naïve CD8+ T cells to bind to DCs would provide another opportunity and experimental direction for curbing T1D development. Fischer et al. [96] reported that naïve CD4+ T cells deprived of MHC class II molecules showed a decreased ability to interact with a limited number of cognate antigen-bearing dendritic cells. In their study, Act-mOVA mice were bred to I-Aβ-deficient (MHC class II-deficient) mice to produce Act-mOVA MHC class II-deficient mice. This evidence suggests that a similar in vivo knockout approach could be applied to the NOD mouse model to deprive dendritic cells of receptor molecules, thereby limiting the binding sites on dendritic cells available to naïve CD8+ T cells. This study provided empirical support for the intuition underlying our simulation.
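The binding-capacity mechanism discussed here (specified in NetLogo in S1 Table and S1 Fig) can be re-expressed compactly. The Python sketch below is a deliberate simplification: it collapses binding and activation into a single step and ignores the spatial patch logic of the actual model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NaiveCD8:
    activated: bool = False

@dataclass
class DendriticCell:
    max_binding_sites: int  # "number of binding sites on DCs for naive CD8+ T cells"
    bound: List[NaiveCD8] = field(default_factory=list)

    def try_bind(self, t_cell: NaiveCD8) -> bool:
        """Bind a neighboring naive CD8+ T cell if a site is still free."""
        if len(self.bound) < self.max_binding_sites and not t_cell.activated:
            self.bound.append(t_cell)
            t_cell.activated = True  # simplification: binding implies activation
            return True
        return False

dc = DendriticCell(max_binding_sites=2)
neighbors = [NaiveCD8() for _ in range(4)]
activated = sum(dc.try_bind(t) for t in neighbors)
print(activated)  # 2 -> fewer binding sites means fewer activated CD8+ T cells
```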
Moreover, results from the therapy-directed simulations suggest that a combined therapeutic strategy reducing both the recruitment rate of DCs and the maximum number of naïve CD8+ T cells binding to DCs would noticeably reverse the destruction of β cells, with the potential to cure overt T1D, from week 10 to week 12 (as plotted in the heatmaps of Fig 9 and S5 Fig). Thus, unlike single-drug interventions, the effect of combined therapeutic strategies (e.g., a combination of treatments reducing both the recruitment rate of DCs and the number of binding sites on DCs for naïve CD8+ T cells) on T1D treatment may provide valuable insight for prospective cross-sectional as well as longitudinal studies of onset, remission, and recurrence. The therapy-directed simulations suggest that such combined therapeutic strategies would contribute to a decreased risk of a range of short-term and long-term diabetes-related complications. It is noteworthy that a recent study [97] also demonstrated that combination therapy can reverse hyperglycemia in a NOD mouse model with established type 1 diabetes.
The present study should be evaluated in the context of several possible limitations. First and foremost, like all models, the hybrid computational model presented herein is an abstraction of reality; to some extent it resembles a laboratory experiment performed in a controlled environment, which might give a completely different output in reality and in the human body. Although the model was designed to describe the major components based on current knowledge and available data, the mechanisms associated with T1D progression that were excluded should be mentioned. For example, we did not model CD4+ T cells in this study because the role of CD4+ T cells in autoimmune T1D remains incompletely understood [35], and experimental data on CD4+ T cells are therefore insufficient. It is worth mentioning that the progression of T1D is associated with a finely tuned immune balance between effector CD4+ T cells and regulatory CD4+ T cells, and that the quantitative relationship between them is regulated by other cell types and cytokines [32,35,98–100]. In this case, additional interactions between dendritic cells, CD4+ T cells, and CD8+ T cells would require extensive effort to estimate unknown parameters and their distributions. When the number of unknown parameters is very large, it is often impossible to obtain reliable model results [101]. A second limitation concerns the discrepancy between the NOD mouse model and humans. Although the NOD mouse model has been successful in multiple respects for studying human T1D [23], its limitations should be noted. Compared to the NOD mouse, the human pancreas appears to have a lower potential for β-cell regeneration, based on recent access to the pancreases of organ donors with T1D [102]. In addition, the severity of insulitis in humans has been found to be less pronounced than in NOD mice [103], which may affect the conclusion that parameters associated with CD8+ T cells predominate in human T1D progression. For these reasons, it is worth emphasizing that strategies effective in the NOD mouse model are not necessarily effective in clinical trials. We also considered modeling other treatment parameters such as anti-CD3, because they have proven effective in NOD mice and clinical trials [104]. However, modeling these treatment parameters, again, would require the incorporation of numerous unknown parameters at the current stage.
Despite these limitations, our study demonstrates the feasibility of computational modeling for simulating disease progression through complex cellular networks. Biological research has long focused on various aspects of biological systems, advancing mechanistic knowledge of complex systems. Nevertheless, many of the emergent, integrative behaviors of biological systems result not only from complex interactions within a specific level but also from feedback interactions that span complex cellular networks [80]. This hybrid computational approach can simulate complex T1D progression, which is associated with higher rates of diabetes-related complications, by capturing essential components and their interactions across multiple pathways of the immune response. It can also probe the effects of parameters on T1D development and suggest future directions for experimental studies aimed at reducing these complications. With the aid of advanced statistics, our findings uncovered non-intuitive biological parameters that could potentially be targeted as therapeutic options. Accordingly, we suggest that this hybrid computational framework can help improve the systematic understanding of complex diseases and support the design of in silico therapeutic strategies for other complex diseases such as cancer.
Supporting information

S1 Table. Agent rules during T1D progression. Rules are described using pseudocode in NetLogo, and state variables associated with agents are identified by square brackets. (DOCX)

S4 Table. Unknown parameters in the ABM simulations. Parameter ranges used for local sensitivity analysis and Latin hypercube sampling (LHS) are tabulated. The ranges were drawn from the literature in consultation with field experts (the literature values are not necessarily specific to the NOD mouse model, since these parameters are defined as unknown) and were then employed for the unknown parameters (i.e., the ranges used for LHS) in the initial simulations of the model. The ranges determined by the field experts are marked with asterisks. (DOCX)

S5 Table. Computational costs for sensitivity analysis and therapy-directed simulations. (DOCX)
S1 Fig. A binding process between naïve CD8+ T cells and dendritic cells in the ABM simulations. During the binding process, each APC (i.e., a dendritic cell that has engulfed antigens) checks the surrounding eight patches. If naïve CD8+ T cells appear on these eight patches, the APC checks the number of dendritic cells bound to each naïve CD8+ T cell. A dendritic cell can bind to a naïve CD8+ T cell if it has an available binding site for that cell (e.g., the naïve CD8+ T cell at the bottom of the nine grids in S1 Fig), and it can bind to multiple naïve CD8+ T cells until no binding sites remain available on the dendritic cell (e.g., the naïve CD8+ T cell at the top of the nine grids in S1 Fig). The number of binding sites on dendritic cells for naïve CD8+ T cells is determined by a state variable named the maximum number of naïve CD8+ T cells binding to DCs. (TIFF)

S5 Fig. Regions in red represent a higher incidence of diabetes, and regions in blue represent a lower incidence of diabetes. Therapy 1 represents a strategy that reduces the residence of CTLs in islets. Therapy 2 is described as an intervention that can inhibit DC infiltration into islets. Therapy 3 denotes a strategy that blocks binding sites on DCs for naïve CD8+ T cells. (TIFF)

S1 Movie. NetLogo interface during T1D progression. The cross-section captures a two-dimensional projection along the largest axis of the pancreas. This window simulates part of the whole picture; for illustrative purposes, we show a few islets and agents during T1D progression. The window is partitioned into two panels: the top panel shows PLNs with circulation in a large red disk, and the bottom panel illustrates the islets, including β cells in green. Brown agents represent cytotoxic CD8+ T cells, yellow agents autoantigens, and pink agents activated CD8+ T cells; green agents turning black in shaded areas indicate damaged and apoptotic β cells. Insulitis in NOD mice proceeds through a stage of peri-insulitis in which T cells accumulate around the islets. To improve computational efficiency, the video shows insulitis progressing only from inside the islets. (MOV)
Woody Species Diversity in Traditional Agroforestry Practices of Dellomenna District, Southeastern Ethiopia: Implication for Maintaining Native Woody Species
The major impacts of humans on forest ecosystems, including loss of forest area, habitat fragmentation, and soil degradation, lead to losses of biodiversity. These problems can be addressed by integrating agriculture with forests and maintaining the existing forests. This study was initiated to assess the woody species diversity of traditional agroforestry practices. Three study sites (Burkitu, Chire, and Erba) were selected based on the presence of agroforestry practice. Forty-eight (48) sample quadrats of 20 m × 20 m, 16 in each study site, were systematically laid out along four transect lines at different distances. The diversity of woody species was analyzed using different diversity indices. A total of 55 woody species belonging to 31 families were identified and documented. Woody species diversity differed significantly (P < 0.05) among the study Kebeles (peasant associations). Mangifera indica, Entada abyssinica, and Croton macrostachyus were found to have the highest Important Value Index (IVI) values. The results confirm that traditional agroforestry plays a major role in the conservation of native woody species. However, threats to woody species were observed. Therefore, there is a need to undertake conservation practices before species are lost.
Introduction
Agriculture is not only the main backbone of the economy but also the major occupation of the Ethiopian population [1]. Rapid population growth and a long history of sedentary agriculture have changed land use/land cover systems and caused environmental degradation in many developing countries, including Ethiopia [2]. Bishaw and Asfaw [3] indicated that population growth and environmental degradation of forest ecosystems lead to loss of forest area, habitat fragmentation, soil degradation, and biodiversity losses. The international concern is to find alternative farming systems that are ecologically and economically sustainable as well as culturally acceptable to local communities.
Agroforestry is a dynamic, ecologically based natural resource management system that, through the integration of trees on farms, diversifies agricultural landscapes and sustains production for increased social, economic, and environmental benefits [4]. Agroforestry systems are known to bring about changes in edaphic, microclimatic, floral, faunal, and other components of the ecosystem through biorecycling of mineral elements, environmental modifications, and changes in floral and faunal composition [5–7]. According to Schroth et al. [8], agroforestry also contributes to biodiversity conservation on a landscape scale in three ways: (i) the provision of supplementary secondary habitat for species that tolerate a certain level of disturbance, (ii) the reduction of rates of conversion of natural habitat in certain cases, and (iii) the creation of a more benign and permeable "matrix" between habitat remnants compared with less tree-dominated land uses, which may support the integrity of these remnants and the conservation of their populations.
There are several types of traditional agroforestry practices in different parts of Ethiopia. These include coffee shade tree systems, scattered trees on farm land, home gardens, woodlots, farm boundary practices, and trees on grazing lands [9,10]. Adjoining habitats that are more similar to the remnants in terms of structure and floristic composition are the most beneficial to the long-term preservation of biodiversity [8]. In addition to supporting native species of plants and animals, agroforestry areas may contribute to the conservation of biodiversity by increasing the connectivity of populations, communities, and ecological processes in fragmented landscapes [11].
Agroforestry systems may maintain considerable intraspecific genetic variation at the landscape level, and this variation is essential for adaptation to changes in environmental conditions [12]. Agroforestry systems serve as in situ conservation areas for many species that farmers value and therefore wish to conserve [13]. The mechanisms by which traditional agroforestry systems contribute to biodiversity have been examined by various authors [8,14–16]. The same authors indicated that agroforestry plays five major roles in conserving biodiversity: (1) it provides habitat for species that can tolerate a certain level of disturbance; (2) it helps to preserve the germplasm of sensitive species; (3) it helps to reduce the rates of conversion of natural habitat by providing a more productive, sustainable alternative to traditional agricultural systems that may involve clearing natural habitats; (4) it provides connectivity by creating corridors between habitat remnants, which may support the integrity of these remnants and the conservation of area-sensitive floral and faunal species; and (5) it helps to conserve biological diversity by providing other ecosystem services such as erosion control and water recharge, thereby preventing the degradation and loss of surrounding habitat. Agroforestry practices are the main option for reducing these problems. In the study area (Dellomenna District), farmers have been practicing different traditional agroforestry systems by integrating woody perennials, crops, and livestock components on their lands. These traditional agroforestry practices comprise perennial and herbaceous plants that may promote biodiversity conservation and socioeconomic alternatives for local communities. However, the contribution of these traditional agroforestry practices to biodiversity conservation has not been studied so far in Dellomenna District. Therefore, this study was initiated to investigate the status of woody species diversity in traditional agroforestry practices of Dellomenna District, with particular emphasis on maintaining native woody species.
The Study Area
Location. Dellomenna District is one of the districts in Bale Zone, Southeast Ethiopia. Geographically, it lies between 6°40′–7°10′N and 39°30′–40°E (Figure 1). The district comprises 14 Kebeles with a total area of 461,665 hectares. It is bordered in the west by Harenna-Buluk District, in the east by Berbere and Guradamole Districts, in the north by Goba District, and in the south by Madda Walabu District [17].
Topography and Climate. The area is characterized by flat lands and moderately steep rolling hills with valley bottoms. The altitude of the district ranges from 1000 to 2500 meters above sea level. It has two agroclimatic zones: 86.7% of the district is "Kolla" (dry, hot tropical climate) while the remaining 13.3% is "Woina Dega" (moist to humid, warm subtropical climate). The rainfall pattern in the area is bimodal, with a short rainy season from the middle of March to the end of May and the main rainy season from September to October. Annual rainfall ranges from 700 to 1200 mm. The average temperature for Dellomenna is 18°C [17].
Population and Means of Livelihood. The total population of Dellomenna District is 96,161, with a population density of 21 persons/km² [18]. There are various sources of livelihood and income for the local communities living in the district. These include Coffea arabica, honey, Catha edulis, crops, livestock production, timber, and other non-timber forest products. These products serve for household consumption, for cash income, or both. For example, honey, Catha edulis, and coffee are exclusively for income, while field crops and livestock are mainly for household consumption.
Land Use. The land use categories of this district are forest, agriculture, grazing land, and settlement [17]. According to Tadesse and Feyera [18], natural forest and woodlands still account for the largest share of the land use types in the district. Despite its large coverage, the natural forest in Dellomenna District is under human pressure. Agricultural expansion, settlement, overgrazing, forest fire, and intensive management of coffee in the forest are the major threats to the natural forest. Tef (Eragrostis tef (Zucc.) Trotter), maize (Zea mays), sorghum (Sorghum bicolor L.), and haricot bean are the major field crops grown in the district. Fruits like mango (Mangifera indica), banana (Musa species), papaya (Carica papaya), avocado, Annona muricata, and Psidium guajava are common in the area. Vegetables including cabbage, carrot, pepper, onion, Irish potato, and sweet potato (Ipomoea batatas) are also grown in the area [18]. Various types of traditional agroforestry practices are also observed in the area, including home gardens, multipurpose trees on farm land and farm boundaries, and agrosilvopastoral and silvopastoral systems [19].
Sampling Techniques. Systematic sampling methods were employed during the course of this study. The sampling procedure focused on the identification of areas having traditional agroforestry practices. Accordingly, three study sites were selected: Burkitu, Chire, and Erba Kebeles. Based on the topography or the gradient of land use systems, four transect lines were aligned at an interval of 500 m in each selected Kebele. On each transect, four quadrats were laid at an interval of 200 m. The first transect line and the first plot were systematically selected. A total of 48 quadrats, 16 in each selected Kebele, were used for vegetation assessment.
Samples of all tree and shrub species encountered during this assessment were collected and recorded by their local names, which were later converted into scientific names by the researchers themselves and through the use of the agroforestry database Tree Species Reference and Selection [20], Useful Trees and Shrubs of Ethiopia [21,22], and the Flora of Ethiopia and Eritrea (Edwards et al. [23], Hedberg et al. [24], and Hedberg et al. [25]). Trees and shrubs that could not be identified by the researchers or through these reference materials were identified in the field by experts.
Sampling Design. For the assessment of the diversity of woody species in traditional agroforestry practices, all woody species were recorded, and the diameters at breast height (DBH, 1.3 m) of all woody species ≥5 cm were measured using a caliper or diameter tape, except for coffee [26]. The diameter of coffee shrubs was measured at 15 cm above ground [27]. A quadrat size of 20 × 20 m (400 m²) was used for the assessment of woody species with diameter ≥5 cm [28]. Within this plot, five subplots of 5 × 5 m, at the four corners and in the center, were laid out for the assessment of saplings with diameters of 1–5 cm. Within each subplot, five small plots of 2 × 2 m were again laid out, at each corner and in the center, for the assessment of seedlings with diameter <1 cm [28]. The dimensions of the quadrats and the sample size coincide with recommended practice in the ecological literature and represent a compromise between recommended practice, accuracy, and practical considerations of time, resources, and effort [28].
Woody Species Diversity Indices. Woody species diversity was analyzed using different diversity indices. The Shannon diversity index (H′), Shannon equitability/evenness index (E), species richness (S), and Simpson diversity index (D) were calculated and analyzed. These diversity indices provide important information about the rarity and commonness of species in a community. Species richness is the total number of species in the community [29]. It is the oldest and the simplest concept of species diversity.
Shannon-Wiener Diversity Index (H′). Shannon's index accounts for both the abundance and the evenness of the species present. Two components of diversity are combined in the Shannon diversity index: (1) the number of species and (2) the equitability or evenness of the distribution of individuals among the species [29,30]. The Shannon diversity index (H′) is high when the relative abundances of the different species in the sample are even, and low when a few species are more abundant. It is based on the theory that when there is a large number of species with even proportions, the uncertainty that a randomly selected individual belongs to a certain species increases, and thus diversity increases. It relates the proportional weight of the number of individuals per species to the total number of individuals for all species [31]. The Shannon diversity index is calculated as

$H' = -\sum_{i=1}^{S} p_i \ln p_i$,

where $H'$ is the Shannon diversity index and $p_i$ is the proportion of individuals found in the $i$th species.

The value of the index ($H'$) usually lies between 1.5 and 3.5, although, in exceptional cases, the value can exceed 4.5 [31]. The larger the value, the higher the diversity.

The evenness (Shannon equitability) index ($E$) was calculated as described by Kent and Coker [31] to estimate the homogeneity of the distribution of tree species on farms:

$E = H' / \ln S$,

where $S$ is the number of species and $p_i$ (entering through $H'$) is the proportion of individuals of the $i$th species. $E$ takes values between 0 and 1, with 1 being complete evenness [31]. Because the Shannon diversity index places most weight on the rare species in the sample [29], Simpson's diversity ($D$) was also used to account for the most abundant species.

Simpson's Diversity Index (D). Simpson's diversity index is derived from probability theory; it is the probability of picking two different species at random [29,30,32]. Simpson's diversity ($D$) is calculated as

$D = 1 - \sum_{i=1}^{S} p_i^2$,

where $D$ is Simpson's diversity index and $p_i$ is the proportion of individuals found in the $i$th species.

Simpson's diversity index gives relatively little weight to the rare species and more weight to the most abundant species. It ranges in value from 0 (low diversity) to a maximum of $(1 - 1/S)$, where $S$ is the number of species [29,30]. The above indices, generally referred to as alpha diversity, indicate the richness and evenness of species within a locality, but they do not indicate the identity of the species present. Hence, variation in the composition of woody species among the different land use types (patch forests and agroforestry) was determined by computing beta diversity. Beta diversity is usually expressed in terms of a similarity index between different habitats in the same geographical area [32].
Similarity Indices (S_s). Similarity indices measure the degree to which the species compositions of different systems are alike. Many measures exist for the assessment of similarity or dissimilarity between vegetation samples or quadrats. The Sorensen similarity coefficient is applied to qualitative data and is widely used because it gives more weight to the species that are common to the samples rather than to those that occur in only one sample [31]. The Sorensen coefficient of similarity ($S_s$) is given by

$S_s = \dfrac{2a}{2a + b + c}$,

where $S_s$ is the Sorensen similarity coefficient, $a$ is the number of species common to both samples, $b$ is the number of species found only in sample 1, and $c$ is the number of species found only in sample 2.
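The indices above can be computed directly from plot-level species abundance counts; the following sketch uses made-up abundances purely to illustrate the formulas.

```python
import math

def shannon(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i)."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def evenness(counts):
    """Shannon equitability E = H' / ln(S)."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(s)

def simpson(counts):
    """Simpson diversity D = 1 - sum(p_i^2)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def sorensen(sample1, sample2):
    """Sorensen similarity Ss = 2a / (2a + b + c) on species sets."""
    a = len(sample1 & sample2)
    b = len(sample1 - sample2)
    c = len(sample2 - sample1)
    return 2 * a / (2 * a + b + c)

abundances = [25, 10, 8, 5, 2]  # invented counts for five species in one Kebele
print(round(shannon(abundances), 3), round(evenness(abundances), 3),
      round(simpson(abundances), 3))
print(round(sorensen({"Mangifera indica", "Croton macrostachyus", "Cordia africana"},
                     {"Mangifera indica", "Croton macrostachyus", "Catha edulis"}), 3))
```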
Statistical Analysis. Variation in woody species diversity was tested using one-way ANOVA. Significant differences in mean values of woody species diversity were tested by least significant difference at P < 0.05. All statistical computations were made using SAS statistical software version 9.0 [33].

Characterizing the Study Area. The types of traditional agroforestry practices found in the study area included scattered trees, parkland agroforestry, home garden agroforestry, and live fences. In Chire Kebele, home gardens and parkland agroforestry were more common than in Erba and Burkitu. Live-fence agroforestry was more common in Erba than in the other two Kebeles. In Burkitu Kebele, mango-based home gardens and scattered-tree agroforestry were common. At each study site, fruit trees like Mangifera indica were dominant.
Woody Species Diversity
Woody Species Richness, Abundance, and Frequency. A total of 55 woody species belonging to 31 families were collected, identified, and recorded in the traditional agroforestry practices of the study sites (see Appendix). Forty-seven (85%) of these species were indigenous, while the remaining 8 species (15%) were exotic. The Anacardiaceae, Bignoniaceae, and Myrtaceae families had the highest numbers of woody species (7 each), while the Apocynaceae, Cupressaceae, Flacourtiaceae, Meliaceae, Papilionoideae, Proteaceae, Rhamnaceae, Santalaceae, and Sapotaceae families had the lowest numbers of woody species (2 each). The highest number of woody species was recorded at Chire, while the lowest number was recorded at Erba (Table 1).
However, there was no significant difference in woody species abundance per plot (P = 0.7586) among the three Kebeles. Of the 55 woody species found in the area, the most frequently observed species were Croton macrostachyus (68.75%), followed by Mangifera indica (60.42%) and Persea americana (35.42%), while 20 species had the lowest frequency (2.08%) (Figure 2).
Diversity Indices.
The Shannon-Wiener diversity index indicated that Chire Kebele was more diverse than the other two Kebeles (Table 3). A similar trend was noticed for Simpson's diversity index. Shannon evenness (99%) indicated that the highest homogeneity of woody species was found in Chire Kebele compared with the other two Kebeles. The lowest Shannon diversity index, Simpson diversity index, and evenness were recorded in Burkitu Kebele.
Discussion
Woody Species Composition and Diversity. The highest woody species richness in the Chire traditional agroforestry could be due to its relatively well organized irrigation activities compared with the other study Kebeles. The woody species richness of the study area was comparable to that of another study in Ethiopia ([34]: 64 woody species from Beseku) and lower than that of a study in Nicaragua ([35]: 83 tree species). In addition, the woody species richness in this study was lower than in several other studies: for example, 120 trees and shrubs from Sidama in southern Ethiopia [10], 459 tree and shrub species around Mt. Kenya in central and eastern Kenya [36], 289 woody plants from suburban areas in Sri Lanka [37], and 122 trees and shrubs from Northeast India [38].
The number of woody species per plot recorded in the present study is lower than in the earlier report of Kindt [39] from Meru, Kenya, in which the average number of species per farm was 54, ranging from 28 to 97. The total and average numbers of individual woody species per plot recorded in the present study are, however, higher than in similar studies reported from other locations. For example, Kindt et al. [40] reported 16.6 tree species per farm, ranging from 15.7 to 17.5, for western Kenya. The higher woody species abundance per plot in the present study could arise because woody species abundance largely depends on the planting pattern of the woody species, as reported in home gardens of Sidama [10].
The variation in woody species richness could be due to site characteristics, management strategy, socioeconomic factors [10], and farmers' preferences for tree species and functions in different localities [41]. For example, farmers maintained many tree and shrub species for environmental services like soil and water conservation in the drier regions of the West African Sahel [41]. The frequency of distribution of tree species on farms in the present study was variable. As one would expect, tree species with a greater economic or ecological value, or both, were frequently distributed across the farms. Mangifera indica was the most frequent species, occurring on 97% of the sampled farms. It was followed by Croton macrostachyus, Entada abyssinica, and Annona reticulata. The low abundance of some species could indicate that their population sizes might be too low to sustain these species within the agroecosystem unless their abundance is increased, as reported by O'Neill et al. [42]. Since tree species diversity is required for the long-term survival of species, tree integration on farms could be one avenue for conservation.
Shannon's diversity index of the woody species in the traditional agroforestry systems of this study was comparable to that of Kerala home gardens in India, which ranges from 1.12 to 3 [43], and to that of Tolera [44], who recorded a Shannon diversity index, Simpson index, and evenness of 2.22, 0.83, and 0.64, respectively. It is higher than the finding for Sidama home gardens by Abebe [10] and comparable with the findings for the home gardens of Thailand, where the Shannon index ranges from 1.9 to 2.7 [45]. The IVI is an aggregate index that summarizes the density, abundance, and distribution of a species. It measures the overall importance of a species and gives an indication of the ecological success of a species in a particular area. The tree species with the highest IVIs recorded in the traditional agroforestry were M. indica, Entada abyssinica, and C. macrostachyus. IVI values can also be used to prioritize species for conservation: species with high IVI values need less conservation effort, whereas those with low IVI values need high conservation effort.
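The text does not spell out the IVI formula used, but the standard definition sums relative density, relative frequency, and relative dominance (each expressed as a percentage, giving a 0–300 scale). The sketch below illustrates that standard computation with invented values for the three species named above.

```python
def ivi(density, frequency, basal_area):
    """Important Value Index per species: relative density + relative
    frequency + relative dominance (basal area), each in percent."""
    total_d = sum(density.values())
    total_f = sum(frequency.values())
    total_b = sum(basal_area.values())
    return {sp: 100 * density[sp] / total_d
              + 100 * frequency[sp] / total_f
              + 100 * basal_area[sp] / total_b
            for sp in density}

# Invented values: stems/ha, proportion of plots occupied, basal area (m^2/ha)
density = {"M. indica": 40, "E. abyssinica": 25, "C. macrostachyus": 35}
frequency = {"M. indica": 0.97, "E. abyssinica": 0.50, "C. macrostachyus": 0.69}
basal_area = {"M. indica": 3.2, "E. abyssinica": 1.1, "C. macrostachyus": 2.0}
print({sp: round(v, 1) for sp, v in ivi(density, frequency, basal_area).items()})
```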
Conclusion and Recommendations
The results of the present study confirm that traditional agroforestry practices play a major role in the conservation of native woody species like Syzygium guineense and Juniperus procera, which are endemic to Ethiopia, and critically endangered species like Cordia africana and C. macrostachyus. Based on the results obtained from the study, the following recommendations are offered. This study focused mainly on the assessment of woody species diversity in traditional agroforestry practices; hence, an in-depth assessment of all natural habitats is important to quantify the status of native woody species in the area. Creating awareness at the grass-roots level about the wise utilization of the woody species in the area is crucial in order to prevent the loss of valuable tree species. Governmental and nongovernmental organizations should promote different agroforestry practices to conserve indigenous woody species through circa situm conservation.
Figure 1: Map of the study sites in Dellomenna District, Southeastern Ethiopia.

Figure 2: Frequency occurrences of woody species across traditional agroforestry practices in Dellomenna District, Southeastern Ethiopia (for more details see Table 10).
Table 1: Woody species richness in traditional agroforestry practice in Dellomenna District, Southeastern Ethiopia.

Table 2: Mean woody species richness and abundance per plot of traditional agroforestry practices in Dellomenna District, Southeastern Ethiopia. Note: different letter(s) ordered vertically on mean values show a significant difference at P < 0.05 among the three Kebeles.

Table 3: Woody species diversity indices in traditional agroforestry practice in Dellomenna District, Southeastern Ethiopia.

Table 5: The five woody species with the highest IVIs in traditional agroforestry practices in Dellomenna District, Southeastern Ethiopia, given in descending order for each study Kebele. The species with the highest IVIs were Croton macrostachyus and Annona reticulata in Burkitu, Mangifera indica and Catha edulis in Chire, and M. indica and C. macrostachyus in Erba. The full list of woody species for each Kebele is given in the Appendix.

Table 6: List of woody species in the overall traditional agroforestry practices in Dellomenna District, Southeastern Ethiopia.

Table 7: List of woody species and their Important Value Index in traditional agroforestry of Burkitu Kebele in Dellomenna District, Southeastern Ethiopia.

Table 8: List of woody species and their Important Value Index in traditional agroforestry practices in Chire Kebele in Dellomenna District, Southeastern Ethiopia.

Table 9: List of woody species and their Important Value Index in traditional agroforestry practices in Erba Kebele in Dellomenna District, Southeastern Ethiopia.
No Dopamine Agonist Modulation of Brain [18F]FEOBV Binding in Parkinson’s Disease
The [18F]fluoroethoxybenzovesamicol ([18F]FEOBV) positron emission tomography (PET) ligand targets the vesicular acetylcholine transporter. Recent [18F]FEOBV PET rodent studies suggest that regional brain [18F]FEOBV binding may be modulated by dopamine D2-like receptor agents. We examined associations of regional brain [18F]FEOBV PET binding in Parkinson’s disease (PD) subjects without versus with dopamine D2-like receptor agonist drug treatment. PD subjects (n = 108; 84 males, 24 females; mean age 68.0 ± 7.6 [SD] years), mean disease duration of 6.0 ± 4.0 years, and mean Movement Disorder Society-revised Unified PD Rating Scale III 35.5 ± 14.2 completed [18F]FEOBV brain PET imaging. Thirty-eight subjects were taking dopamine D2-like agonists. Vesicular monoamine transporter type 2 [11C]dihydrotetrabenazine (DTBZ) PET was available in a subset of 54 patients. Subjects on dopamine D2-like agonists were younger, had a longer duration of disease, and were taking a higher levodopa equivalent dose (LED) compared to subjects not taking dopamine agonists. A group comparison between subjects with versus without dopamine D2-like agonist use did not yield significant differences in cortical, striatal, thalamic, or cerebellar gray matter [18F]FEOBV binding. Confounder analysis using age, duration of disease, LED, and striatal [11C]DTBZ binding also failed to show significant regional [18F]FEOBV binding differences between these two groups. Chronic D2-like dopamine agonist use in PD subjects is not associated with significant alterations of regional brain [18F]FEOBV binding.
■ INTRODUCTION
Cholinergic systems are important actors in the central and peripheral nervous systems. Alterations of brain cholinergic systems are implicated in the pathophysiology of several neurodegenerative disorders, including Alzheimer's disease, Parkinson's disease (PD), Huntington's disease, and progressive supranuclear palsy. Pioneering post-mortem investigations suggested that cholinergic system deficits contribute to important clinical features of these disorders, but these studies were largely limited to end-stage disease, restricting correlative analyses with clinical features.
Molecular imaging methods [positron emission tomography (PET) and single photon emission computed tomography (SPECT)] were developed subsequently to study cholinergic system changes in vivo. The initial ligands measured regional acetylcholinesterase (AChase) activity.1,2 These valuable tracers had limitations: AChase is not uniquely expressed by cholinergic neurons; high AChase activity within some regions, notably the striatum, limited quantification; and AChase activity might be subject to disease-related alterations. The vesicular acetylcholine transporter (VAChT) protein, responsible for sequestering acetylcholine (ACh) in synaptic vesicles and uniquely expressed by cholinergic neurons, was an alternative target.3 The benzovesamicol [18F]FEOBV was developed as a VAChT PET ligand.4,5 There is strong preclinical evidence that the hydroxytetralin [18F]VAT ((−)-(1-(8-(2-fluoroethoxy)-3-hydroxy-1,2,3,4-tetrahydronaphthalen-2-yl)piperidin-4-yl)(4-fluorophenyl)methanone) will also emerge as a useful VAChT PET ligand.6,7 Regional brain VAChT ligand binding is usually interpreted as a measure of cholinergic terminal density. Some data suggest that VAChT tracer binding is modulated by manipulation of other neurotransmitter systems, potentially confounding interpretation of VAChT tracer binding as a simple measure of cholinergic terminal density. Striatal VAChT ligand binding may be modulated by the activation or blockade of dopamine D2-like receptors.8−11 This phenomenon may be particularly relevant to studies of cholinergic system changes in PD, where there are primary changes in dopaminergic systems and dopaminergic agents are the primary treatments. To explore the potential impact of chronic dopamine D2-like receptor activation on VAChT ligand binding in PD, we compared brain [18F]FEOBV binding in PD subjects without versus with chronic dopamine D2-like receptor agonist treatment.

■ METHODS

PD subjects (n = 108) met clinical diagnostic criteria for PD.12 Subjects with evidence of large vessel stroke or other intracranial lesions on anatomic imaging were excluded. The Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) motor examination (MDS-UPDRS III) was performed in the dopaminergic medication "off" state after overnight abstinence from PD medications and before the first medication dose of the day, at least 12 h without PD medications.13 The mean MDS-UPDRS III score was 35.5 ± 14.2 (range 2−74). Subjects completed the Montreal Cognitive Assessment (MoCA), with a mean score of 26.2 ± 3.3.14 The mean duration of disease was 6.0 ± 4.0 years. Thirty subjects were taking a combination of a dopamine D2-like agonist and carbidopa−levodopa preparations, 62 were using carbidopa−levodopa preparations alone, 8 were taking a dopamine D2-like agonist alone, and 8 were not receiving dopaminergic drugs. Dopamine D2-like agonists used by participants included pramipexole and ropinirole. No subjects were treated with anti-cholinergic or cholinesterase inhibitor drugs. Most subjects had moderate disease severity: 6 subjects were Hoehn & Yahr (HY) stage 1, 3 HY stage 1.5, 22 HY stage 2, 43 HY stage 2.5, 28 HY stage 3, and 6 HY stage 4. The mean HY stage was 2.5 ± 0.6. Primary [18F]FEOBV binding results from these subjects have been reported previously.15 [11C]Dihydrotetrabenazine (DTBZ) PET, to evaluate nigrostriatal dopaminergic terminal density, was performed in a subset of 54 subjects.
PET imaging was performed in 3D imaging mode with a Siemens ECAT Exact HR+ tomograph or a Biograph 6 TruPoint PET/CT scanner (Siemens Molecular Imaging, Inc., Knoxville, TN), which acquires 63 transaxial slices (slice thickness: 2.4 mm) over a 15.2 cm axial field-of-view. Harmonization for inter-camera differences was performed. Images were corrected for scatter and motion. Subjects were scanned in the dopaminergic medication "on" state.
[18F]FEOBV was prepared as described previously.16 Delayed dynamic [18F]FEOBV imaging was performed over 30 min (in six 5 min frames) starting 3 h after an intravenous bolus injection of 8 mCi of [18F]FEOBV.15 [11C]DTBZ was prepared as previously reported.17 A 60 min bolus/infusion protocol was used for [11C]DTBZ PET imaging.18 [11C]DTBZ PET imaging was performed in the dopaminergic medication "off" state in the morning.
PET imaging frames were spatially coregistered within subjects with a rigid-body transformation to reduce the effects of subject motion during the imaging session.19 Statistical parametric mapping (SPM) software (SPM12; Wellcome Trust Centre for Neuroimaging, University College London, England [https://www.fil.ion.ucl.ac.uk/spm/software/spm12/]) was used for PET−MRI co-registration using the cropped T1-weighted MR volumetric scan. All brain PET images were partial-volume-corrected using the Müller-Gärtner method, spatially normalized to Montreal Neurological Institute template space using the DARTEL normalization protocol, and smoothed with a Gaussian kernel of 8 mm full width at half-maximum to adjust for anatomical variability between individual brains and to enhance the signal-to-noise ratio.20 A supratentorial white matter reference tissue approach was used to estimate VAChT binding, as previously reported.21,22 Distribution volume ratios (DVRs) were calculated as the ratio of the six summed delayed imaging frames (3 h after injection) for gray matter target and white matter reference tissues.15,21,22 The [11C]DTBZ DVR of the bilaterally averaged striatum was determined with the Logan plot graphical analysis method using the supratentorial cortex as the reference region.23 FreeSurfer software (http://surfer.nmr.mgh.harvard.edu) was used to define cortical and subcortical MR gray matter volumes-of-interest (VOIs) based on labels from the Mindboggle-101 data set.24 Frontal, temporal, parietal, and occipital cortical VOIs were computed as the average of neocortical regions. Neostriatal regions included the nucleus accumbens, caudate nucleus, and putamen. We used edge erosion to mitigate possible partial volume effects due to spillover from adjacent regions. We calculated each VOI DVR by averaging the DVR values of the voxels in the volume.
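As a sketch of the final ratio step only (the actual pipeline additionally involved motion correction, partial volume correction, and spatial normalization, as described above), a delayed-equilibrium DVR could be computed as follows; the array shapes and masks are toy placeholders.

```python
import numpy as np

def delayed_dvr(frames: np.ndarray, target_mask: np.ndarray,
                reference_mask: np.ndarray) -> float:
    """DVR as mean summed late-frame activity in a gray matter target VOI
    divided by that in the supratentorial white matter reference VOI."""
    summed = frames.sum(axis=0)  # sum the six delayed frames
    return summed[target_mask].mean() / summed[reference_mask].mean()

# Toy example: 6 frames of a 4x4x4 "image" with hypothetical VOI masks
rng = np.random.default_rng(0)
frames = rng.uniform(0.5, 1.5, size=(6, 4, 4, 4))
target_mask = np.zeros((4, 4, 4), dtype=bool); target_mask[0] = True
reference_mask = np.zeros((4, 4, 4), dtype=bool); reference_mask[3] = True
print(round(delayed_dvr(frames, target_mask, reference_mask), 3))
```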
Statistical Analysis. Analysis of covariance was performed with VAChT [18F]FEOBV binding in the VOIs as the dependent variables and dopaminergic medication group as the independent variable; age, duration of disease, and levodopa equivalent dose (LED)25 were included as covariates.

Controlling for Confounder Variables. Given significant differences in age, duration of disease, and LED, analysis of covariance (ANCOVA) was performed to compare the two groups while adjusting for these confounder variables. The analysis was also adjusted for striatal dopaminergic [11C]DTBZ VMAT2 PET binding, limiting the data set to the 54 patients who had VMAT2 PET. The confounder-adjusted analysis did not demonstrate significant dopamine D2-like agonist effects on cortical, striatal, thalamic, or cerebellar gray matter VAChT binding (Table 3).
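A minimal sketch of such an ANCOVA in Python with statsmodels is shown below; the column names, group coding, and all values are illustrative, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data frame standing in for the 54 subjects with VMAT2 PET.
rng = np.random.default_rng(1)
n = 54
df = pd.DataFrame({
    "feobv_dvr": rng.normal(1.2, 0.1, n),  # regional [18F]FEOBV DVR
    "agonist": rng.integers(0, 2, n),      # 0 = no D2-like agonist, 1 = agonist
    "age": rng.normal(68, 7.6, n),
    "duration": rng.normal(6, 4, n),
    "led": rng.normal(600, 250, n),
    "dtbz_dvr": rng.normal(1.8, 0.3, n),   # striatal [11C]DTBZ DVR
})

model = smf.ols("feobv_dvr ~ C(agonist) + age + duration + led + dtbz_dvr",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))     # Type II ANCOVA table
```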
■ DISCUSSION
We did not find any evidence that chronic dopamine D2-like agonist treatment modulated brain regional [18F]FEOBV binding in this population of mild to moderately advanced PD subjects. This is the first study to address dopamine receptor−VAChT binding site expression interactions in humans. Prior studies of the interactions between dopamine receptor manipulation and VAChT binding site expression used non-human primates (NHPs) and rodents, mainly using agents active at dopamine D2-like receptors (likely both molecularly defined D2 and D3 receptors).8−11 In an NHP experiment studying brain [18F]VAT uptake, Liu et al. reported that pretreatment with the dopamine D2-like receptor agonist quinpirole decreased striatal [18F]VAT uptake.9 The clinically used dopamine agonists pramipexole, ropinirole, and rotigotine are primarily D2-like agents and were employed by our participants at clinically effective doses consistent with pharmacologically relevant effects on striatal dopamine D2-like receptors. Prior studies largely focused on dopamine D2-like receptor manipulations with dopamine D2-like receptor antagonists. Ingvar et al., however, reported that acute pretreatment with the dopamine D1-like receptor antagonist SCH23390 did not alter the regional brain uptake of the VAChT ligand [18F]NEFA in NHPs.11 Several studies in rodents and NHPs described increased striatal VAChT ligand uptake following acute D2-like receptor antagonist treatments.8,26−30 In addition, FEOBV has a low affinity for the human σ1 receptor.31 The considerable majority of our PD subjects received chronic carbidopa−levodopa treatment. It is possible that there are complex interactions between the effects of striatal dopaminergic denervation and dopamine D2-like receptor manipulations. Efange et al. described the effects of acute pretreatment with the dopamine D2-like antagonist spiperone on striatal uptake of the VAChT ligand [125I]iodobenzyltrozamicol ([125I]MIBT) in rats with unilateral 6-hydroxydopamine lesions of the nigrostriatal dopaminergic projection.10 Spiperone pretreatment substantially increased [125I]MIBT uptake (>80%) in the normal striatum, but this effect was markedly blunted in the denervated striatum. Our subjects experienced chronic striatal dopaminergic denervation and chronic dopamine replacement treatments, whereas almost all prior experiments studied acute pretreatments. Terry et al. reported data on chronic treatment of rodents with risperidone or haloperidol, measuring brain regional VAChT protein levels, though striatal VAChT protein levels were not assessed.32,33 When measured at 15 days after treatment initiation, there ...
The increase in striatal VAChT ligand uptake after acute D2-like receptor blockade is hypothesized to reflect increased striatal cholinergic interneuron activity, suggesting that regional VAChT expression is modulated by the magnitude of cholinergic neurotransmission.34,35 VAChT is described as a slow transmembrane transporter, and VAChT expression could be a limiting factor in refilling synaptic vesicles with ACh during periods of increased cholinergic neurotransmission.3,36 Consistent with this concept, Nagy and Aubert demonstrated that transgenic B6eGFPChAT mice expressing multiple copies of the VAChT gene have increased brain VAChT protein expression and enhanced potassium-stimulated ACh release from ex vivo hippocampal slices.3,37 Similarly, VAChT knockdown mice exhibit diminished ACh release.3 This raises the interesting prospect that [18F]FEOBV PET, and similar methods, could be used to measure functional correlates of changing cholinergic neurotransmission. Cisneros-Franco et al. used [18F]FEOBV PET to evaluate cholinergic system changes accompanying auditory perceptual learning in rats, identifying increased auditory cortical [18F]FEOBV binding as a correlate of training. Similar results were obtained with ex vivo measurements of choline acetyltransferase expression, suggesting that increased regional [18F]FEOBV binding accompanied increased cholinergic neurotransmission.38 Increased regional [18F]FEOBV binding has been described in human subjects with isolated/idiopathic REM sleep behavior disorder and early PD.39,40 These results may reflect increased cholinergic neurotransmission in some brain regions during the PD prodrome and early stages of PD, perhaps as part of compensatory efforts consequent to failing nigrostriatal signaling.41 Another possibility is that dopaminergic agents do not alter VAChT expression but rather modulate tracer binding. Further preclinical and human experiments will be needed to evaluate the relationship of VAChT expression and [18F]FEOBV binding changes to changes in cholinergic neurotransmission.
In comparison with the rodent PET study of Schildt et al., we used a 3 h delayed equilibrium model to derive a binding ratio between the gray matter target regions and the supratentorial white matter reference region. We did not obtain immediate post-injection images to estimate Ki, which would allow a more direct comparison of our results to this prior rodent study. Our study might be underpowered to detect effects of dopamine D2-like receptor manipulations, but our study population was significantly larger than that of any rodent or NHP study addressing this question. Half of our study population did not undergo [11C]DTBZ imaging to confirm the presence of nigrostriatal dopaminergic deficits, raising the possibility of including subjects without evidence of dopaminergic denervation (SWEDDs).42 We do not have pathologic confirmation of α-synucleinopathy for our subjects and might have included subjects with neurodegenerative PD mimics. Our subjects, however, had mainly mild to moderate PD with typical clinical courses and responses to treatment, militating against the presence of SWEDDs or PD mimics. Even if present, SWEDDs and/or PD mimics would constitute only a tiny fraction of the subjects studied and would be unlikely to bias our results. Other limitations of our work include the cross-sectional nature of our analysis in subjects with treated PD. Prospective studies comparing untreated PD subjects with appropriate controls and studying the acute and chronic effects of levodopa and dopamine agonist treatments would be useful for clarifying the relationships between dopamine D2-like receptor stimulation and regional VAChT ligand binding.
■ CONCLUSIONS
We found no evidence that chronic dopamine agonist treatment affects brain regional [ 18 F]FEOBV binding in PD. Further work examining the relationships between VAChT ligand binding and changes in cholinergic neurotransmission is warranted.
■ ACKNOWLEDGMENTS
The study was supported by P50NS123067, the Parkinson's Foundation, the Farmer Family Foundation, and the Weston Brain Institute. We thank our research subjects for their participation.
Reproductive Behaviors of Wild Chinese Pangolin (Manis pentadactyla): A Case Study Based on Long-Term Monitoring
Observations of the Chinese pangolin (Manis pentadactyla) in the wild are extremely rare and challenging because of the species' nocturnal and cryptic activity patterns and low population density. The present article reports the first field observations, made in eastern Taiwan from October 4, 2012 to June 16, 2016, of the reproductive behavior of the Chinese pangolin, based on the monitoring of a female (LF28) using radiotelemetry and camera traps. During this period, LF28 aged from 1 to 4.5 years old and gave two single births, both in early December, at 3 and 4 years old, respectively. We recorded the entire 157 days of the first nursing period, from parturition to maternal separation. For the second infant, the gestation period was estimated to be around 150 days, based on the evidence that the pregnancy started in early July 2015 and the offspring was born on December 9, 2015. During the entire nursing period, LF28 frequently moved the offspring from one nursing burrow to another, staying for durations ranging from 1 day to more than 35 days, and almost all (15 of 16) of these burrows were located in the core (MCP75) of LF28's home range. Starting from the month of parturition and lasting throughout the whole nursing period, different adult males were repeatedly recorded visiting the nursing burrows. Mating behavior was recorded once outside the burrow in March, which provides evidence of the occurrence of post-partum estrus in this species. Delayed implantation was proposed based on the observation of a lag of several months between copulation and the estimated initiation date of pregnancy. The present study demonstrates the advantage of using remote technologies to learn the life history of rare and cryptic fossorial species.
Introduction
Field observations of the Chinese pangolin (Manis pentadactyla) are extremely rare and difficult due to their nocturnal and elusive behavioral patterns1, as well as a very low population density due to their critically endangered status2. Pangolins are fossorial and frequently use their powerful forelimbs to excavate ground burrows, not only to search for ants or termites (i.e., the foraging burrows) but also to create shelters used for resting, giving birth, and nursing offspring (i.e., the resting or nursing burrows)3,4.
It has been estimated that, in eastern Taiwan, pangolin burrow density can be as high as 110/ha in a habitat with a density of 12.8 pangolins per 100 ha5. However, very few (less than 2%) of these burrows were resting burrows3. Resident pangolins use the resting burrows within their home range in turn, but infrequently return to the same burrow several days in a row. In the wet (summer) season, they use the same resting burrow continuously for an average of 1.5 days, while in the dry (winter) season the average is 3.9 days3. Also, unlike foraging burrows, which are created mainly in the dry season, are rarely revisited, and eventually collapse or fill up with earth, the resting burrows are permanent, being repeatedly used and shared (though not simultaneously) by different individuals, and they are routinely maintained by the users3.
Due to the difficulty of locating pangolins in the wild, knowledge concerning their reproductive biology, in comparison with that of phylogenetically closely-related carnivorous species 6 , is extremely limited [7][8] . Thus, almost all the present knowledge has come from captive observations. Records from captivity have shown that estrus and mating principally occur in the spring and summer (Feb.-Jul.) and the gestation period typically lasts six to seven months 4,9−10 . However, longer gestation lengths, based on observations of the duration between mating behavior and parturition in captivity, have also been reported, ranging from 10 to more than 12 months in some cases 11 . Parturition in Chinese pangolins occurs seasonally between September and March, with a clear peak birth season from Dec. to Jan. 10 (K. J.-C. Pei unpub. data).
We recently reported the first case of the growth and behavioral development of an infant pangolin raised by a radio-tagged young female (LF28) in their natural habitat 8 . It was found that pangolins exhibit maternal parental care. As infant pangolins require intensive maternal care during the nursing period 12 , knowledge of the reproductive behaviors of females during the nursing period not only has biological significance but is also crucial for the conservation of this critically endangered species. According to Sun et al. 8 , the infant pangolin was kept in the same nursing burrow after birth for approximately 4 weeks, before LF28 moved it to another burrow to continue nursing. During the whole nursing period, this mother-offspring pair moved to several resting burrows before the infant eventually left the mother at approximately 5 months old 8 .
It has been proposed that while mammalian females spend more energy on parental care than males, males often invest more energy into seeking and displaying for mates 13 . For solitary and fossorial mammals, such as the pangolin, a male's mobility and mate-finding tactics can be critical for mating success, especially due to the low population density [14][15] .
In this article, we present further reports of the resting burrow use patterns of the female pangolin LF28 and multiple male visitations during the nursing period, based on intensive monitoring by radio-tracking and camera trapping from late 2012 to mid-2016. Two births, in 2014 and 2015, were also observed during the monitoring period. The body mass of this female pangolin was also recorded periodically. To our knowledge, this is the first study to report such reproductive behavioral findings for any pangolin species.
Ethics statement
Ethics approval was granted by the Laboratory Animal Center, National Pingtung University of Science and Technology (NPUST). Pangolins were live captured for radio-tracking with permission granted by the Taiwan Forestry Bureau (permit numbers 1011701139, 1031700176, and 1050143346) as required by the Wildlife Conservation Act, 2013. All clinical examinations were carried out by experienced veterinarians following procedures described in Khatri-Chhetri et al. 16 and all methods were carried out in accordance with the relevant guidelines and regulations. This research is part of the "Pangolin biology and ecology project", a long-term study conducted by the Institute of Wildlife Conservation, NPUST since 2009.
Study area
This study was conducted in Taitung County, eastern Taiwan (22°90'N, 121°18'E), located at the southern end of the Coastal Mountain Range 8 . It is one of the areas in Taiwan where a stable pangolin population can be found 17 . Due to the long history of human encroachment, there is no primary forest in this area.
Radio tracking
Researchers encountered the female pangolin LF28 in the field on Oct. 4, 2012. She was brought to the Pingtung Rescue Center for Endangered Wild Animals (PTRC), NPUST, for clinical inspection and sample collection, including body weight and length measurements, blood sample collection for biochemical analyses, abdominal ultrasonography to check for pregnancy, microchip implantation, and VHF radio transmitter attachment. The transmitter was attached to a scale of the pangolin's tail near the hip following the protocol suggested by Sun et al. 18 . Since LF28 was still growing when we started tracking in late 2012, two models of radio transmitter (Model R2020 12 g and R2030 24 g, Advanced Telemetry Systems, Inc, Isanti, MN, USA) with an active mode of 16 h on/8 h off had to be used in the present study.
On Oct. 4, 2012, LF28 had a body weight of 1.85 kg and a total body length of 60.2 cm, and was estimated to be 9-10 months old. Her birth month was estimated to be either Dec. 2011 or Jan. 2012 (i.e., the 2011/2012 birth season). After LF28 was released where she was encountered, her subsequent locations were determined using a TR4 telemetry receiver (Telonics, Inc., 932 E. Impala Avenue, Mesa, AZ, 85204-6699 USA) with a directional antenna (RA-2AK or RA-23 K; Telonics, Inc.). Triangulation was normally undertaken once a day for 7 consecutive days, and for 2 separate weeks per month. In addition to tracking her at night when she was active, we also tracked the radio signals as frequently as possible in the daytime to locate her resting burrows.
The radio signal became undetectable on Jan. 22, 2013, and LF28 could not be tracked until 1 year later, when she was approximately 2 years old. On Jan. 10, 2014, LF28 was sighted again in her home range. She carried a non-functioning transmitter with a detached antenna, and her identity was confirmed by microchip scanning. LF28's transmitter was replaced on the same day, and she was tracked without issue until June 16, 2016. The lighter transmitter was replaced by the heavier one when LF28 reached a body weight of 3,500 g. The home range and core activity area of LF28 were calculated using Minimum Convex Polygons (MCP) 19 .
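MCP home-range estimation is conceptually simple: the 100% MCP is the convex hull of all location fixes, and a percentage MCP (e.g., MCP75) is the hull of the subset of fixes closest to the centroid. The sketch below is a minimal illustration in Python, assuming fixes already projected to metric coordinates (e.g., UTM); it is not the software used in this study, and the simulated fixes are purely hypothetical.

```python
# Minimal sketch of percentage Minimum Convex Polygon (MCP) home-range
# estimation, assuming location fixes in metric (projected) coordinates.
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area_ha(fixes: np.ndarray, percent: float = 100.0) -> float:
    """Area (ha) of the convex hull of the `percent`% of fixes
    closest to the arithmetic centroid of all fixes."""
    centroid = fixes.mean(axis=0)
    dists = np.linalg.norm(fixes - centroid, axis=1)
    k = max(3, int(round(len(fixes) * percent / 100.0)))  # a hull needs >= 3 points
    subset = fixes[np.argsort(dists)[:k]]
    hull = ConvexHull(subset)
    return hull.volume / 10_000.0  # for 2-D input, .volume is the area in m^2

# Hypothetical usage with simulated fixes (x, y in metres):
rng = np.random.default_rng(0)
fixes = rng.normal(loc=[0, 0], scale=[350, 250], size=(147, 2))
print(f"MCP100: {mcp_area_ha(fixes, 100):.1f} ha, MCP75: {mcp_area_ha(fixes, 75):.1f} ha")
```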
During the whole monitoring period, LF28 was recaptured from time to time, except during the nursing periods, and brought to the field station to check the transmitter condition 18 and milking status, and to measure her body weight. LF28 was sent to PTRC for a detailed clinical examination and sample collection three more times, in Jan. 2014, Aug. 2015, and May 2016, respectively, as described above.
Intensive monitoring of LF28 during the nursing period
Once parturition was detected, intensive monitoring was initiated to follow LF28 even more closely. The radio-tracking frequency was further increased to almost every day. We also increased our monitoring effort on the resting burrow used by LF28 by installing camera traps (Bushnell Trophy Cam, Reconyx HC500, or Reconyx UltraFire) 1 to 1.5 m in front of the burrow entrance. Once LF28 had moved to another burrow, which was detected by radio-tracking, we relocated the camera trap immediately to the new site. Other pangolins that visited the burrows were also recorded. As the camera traps were set in video mode, we were able to identify the gender of almost every pangolin that approached the burrow based on the appearance of the genitals (Fig. 1). The nursing period ended on May 6, 2015, when LF28 left the nursing burrow alone; after this date, LF28 was not recorded with the offspring. The total length of this nursing period was 157 days.
Results
On August 15, 2015, approximately 3 months after the maternal separation of the first offspring, LF28 was found to be pregnant again during abdominal ultrasonography; the crown-rump length of the embryo was 25 mm, with a heartbeat and vertebrae observed (Fig. 2). Our intensive monitoring resumed on Nov. 25, 2015 in anticipation of the delivery of the second offspring.
Photos taken at midnight on Dec. 6, 2015 indicated that LF28 was still pregnant (Fig. 3a); however, photos taken 16 hours later revealed that she had delivered her offspring (Fig. 3b). Therefore, the parturition of this second offspring took place on Dec. 6-7, 2015, almost exactly 1 year after the previous birth.
The second infant was captured for the first time on camera on Dec. 29, 2015, when it was 3 weeks old and was carried by LF28 to relocate to another nursing burrow (Fig. 3e). However, despite LF28 being recorded 22 times during the next 2.5 months, this infant was never sighted again. We therefore believed the second birth was unsuccessful, and intensive monitoring concluded on Mar. 17, 2016. Furthermore, the radio signal of LF28 was permanently lost for unknown reasons on June 16, 2016 (Fig. 1).
Resting burrow usages during the nursing period
A total of 147 locations, including 122 nighttime locations and 25 resting burrows, were obtained by radio-tracking during the entire period of this study. The home range (MCP100) and core area (MCP75) sizes were 34.0 ha and 14.9 ha, respectively (Fig. 4). The core area of LF28 was located toward the southern edge of the home range, close to human settlements (Fig. 4). Among the 25 resting burrows, at least 16 (64%) were used as nursing burrows, including for parturition, by LF28 during the two nursing periods. Fifteen of these 16 nursing burrows were located in the core area (Fig. 4). The majority of the nursing burrows were located in secondary forest (9), and the other habitats included bamboo forest (3), grassland (2), and farmland (2). Interestingly, one of the farmland burrows was found under a seriously damaged concrete floor in a small, abandoned aviary.
The duration that LF28 used the same nursing burrow ranged from 1 day to over 35 days. The longest durations occurred when the parturitions took place (> 32 days and > 35 days, respectively), and these two parturition burrows were located in bamboo forest and farmland habitat, respectively (Fig. 3c and 3d). LF28 was frequently observed to pull hay into or out of the burrows during the parturition and nursing period. The hay-pulling behavior lasted as long as 46 min. LF28 left the nursing burrow every day or every other day for foraging during the nursing period.
LF28 used at least 14 resting burrows a total of 17 times during the first nursing period, with burrows F, H, and K each used twice (Fig. 5). On April 11, 2015, LF28 left burrow K with her offspring at 00:57, but her radio signal could not be tracked despite the efforts of the researchers during the following days. Intensive monitoring was therefore suspended until LF28 was observed back in burrow K on April 22 (Fig. 5).
Presence of other pangolins and small carnivores
During our intensive monitoring, we were also able to record other pangolins and carnivores that approached or even entered the nursing burrow when it was occupied by LF28 and the infant. There were 7 cases in which other pangolins approached and entered the burrow, which took place between mid-Dec. 2014 and early May 2015 (Fig. 5). With the exception of one subadult whose gender could not be discriminated from the video footage, adult males were responsible for all other visits (Fig. 6a and 6b). The adult male that visited on Dec. 13, 2014, not only entered the burrow but also showed soil excavation behavior. Moreover, the adult male observed on Feb. 26, 2015 was identified as an individual (LM15) that had been radio tagged by a research team in Oct. 2011 (Fig. 6c). Therefore, at least two different adult males were recorded visiting the burrows during this nursing period. Of these 7 visitations, 5 lasted less than 10 min, whereas the other two lasted 30 min and 1 h, respectively. Mating behavior was recorded outside of the nursing burrow between an unidentified adult male and LF28 on March 20, 2015 (Fig. 5; Fig. 6d), when the infant pangolin was close to 4 months old.
During the second nursing period of LF28, four visitations by unmarked adult males were observed. Two of these visits were recorded at the parturition burrow: one 2-min long event at 01:28 on Dec. 6, right before the parturition, and one event 7 days later at 18:38 on Dec. 13. During the latter event, the male exhibited soil excavating behavior. We were not able to confirm the duration of the other two visitations by unmarked adult males, on Jan. 16 and Jan. 19, 2016, respectively, at the second burrow that LF28 used.
In addition to pangolins, small carnivores approached the nursing burrow on five separate occasions during the nursing period of LF28, comprising three visitations by the crab-eating mongoose Herpestes urva and two by the ferret-badger Melogale moschata. Among the five visitations, two resulted in a carnivore entering the burrow and leaving after 2 min, one by a crab-eating mongoose and one by a ferret-badger.
Discussion
Despite only focusing on one female Chinese pangolin, LF28, our study, to our knowledge, is the first to provide highly detailed records of the nursing behavior of this poorly studied but critically endangered species. During the entire tracking period, the body weight of LF28 increased from 2 kg at 1 year old to 3 kg at 2 years old, and LF28 reached her maximum body weight of 4 kg at age 3. Based on the uninterrupted monitoring between Dec. 2014 and June 2016, LF28 gave birth to her first offspring when she was 3 years old and another at 4 years old (Fig. 1). Both infants were born in early December, in accordance with the peak birth season of the species 10 . Our observations confirmed that the Chinese pangolin is a seasonal breeder in the wild that gives birth once a year. They can also give birth in consecutive years, with a litter size of one 20 .
Other studies (n = 4) have found that the lightest weight, or youngest age, at which a female Chinese pangolin can give birth is 3 kg, or 2 years old 11,[20][21] , which indicates that they can conceive at an age of 1 to 2 years. Therefore, the first birth of LF28, which took place when she was 3 years old, might suggest a delay in pregnancy or sexual maturation. However, information concerning the average primiparous age for this species is not available to date; more research, especially in the wild, is necessary.
Our results indicate that female Chinese pangolins frequently carry their offspring from one nursing burrow to another during the entire nursing period. In the case of LF28, nursing burrows were only some of the resting burrows utilized and were predominantly located within the core area (MCP75) of her home range (Fig. 4), despite the close proximity to human settlements. This suggests that familiarity with the environment or food resource availability are important considerations in nursing burrow selection.
Nursing burrows were normally used only once during the same nursing period, with durations varying from 1 day to more than 1 month (Fig. 5). This frequent relocation behavior is likely important for avoiding predation of the newborn. Our monitoring showed that small carnivores, such as ferret-badgers or crab-eating mongooses, will enter the nursing burrow, which may suggest they are searching for prey. Therefore, this could reflect a potential threat to the infant pangolin, especially when the mother is absent for foraging 8 .
Burrows where LF28 gave birth were not only used for the longest durations after birth, they were also used before parturition. Similar to our findings, a previous study reported that both males and females will collect and pull hay into the resting burrow in the wintertime 3 . Therefore, in addition to providing insulation, the hay could also serve as necessary bedding for the delivery and nursing of offspring. Other proposed functions of hay include acting as a false barrier that can serve as a predator deterrent structure 22 .
Our records revealed at least two different adult male pangolins approaching and entering the nursing burrows multiple times throughout the nursing period. Most of these visits lasted only minutes, whereas a few lasted longer. During one long visit, in March, mating behavior was observed; therefore, the occurrence of post-partum estrus, or even ovulation, is likely in this species. In captivity, mating behavior has also been observed between February and July 10,23 . Although there is no direct evidence yet, these adult male visits suggest that at least some of them were for mate-searching. Male pangolins most likely depend on olfactory cues to locate females in heat. In mammals, female chemical signals have important roles in sexual attraction and facilitating sexual receptivity [24][25][26] . Female Chinese pangolins tend to defecate close to the burrow during the nursing period (N.C.M. Sun unpubl. data); therefore, despite the frequent relocation behavior of the mother, sufficient olfactory information was likely generated for male pangolins.
It is also possible that female pangolins will mate more than once, even with different males, during the same nursing period. Sun et al. 20 reported that certain female Chinese pangolins exhibited a lack of mate fidelity based on microsatellite marker assessments. Our observation provides additional support for this phenomenon. Multiple mating with the same or different males has been observed in several solitary carnivores 28-31 . For males, frequent pre-copulatory encounters with females may offer advantages that increase opportunities for mating compared to males that are less familiar with females [32][33] . Hypotheses concerning the advantages of females exhibiting promiscuity have also been widely proposed, including direct benefits (e.g., stimulation of reproduction, fertilization assurance, mate retention, etc.) and genetic benefits (e.g., choice of paternity, sperm competition, inbreeding avoidance, etc.) [34][35] .
Interestingly, during two separate visitations, adult males exhibited excavation behavior, and both events took place shortly after parturition. This excavation behavior at a parturition burrow has never before been reported for male pangolins; therefore, further research is needed to better understand the role male pangolins play in parental care.
The fetus of LF28's second offspring, detected in the ultrasonographic image on Aug. 15, provided additional information on the gestation length of the species. Following the fetal and extra-fetal structural development of small-sized (3-8 kg) dogs described in Luvoni and Grioni 36 and Kim and Son 37 , this fetus had likely reached a maturity of 30-40 days or less. The implantation of this fetus most likely occurred in early July. This infant pangolin was born on Dec. 8 later that year, and the gestation length was estimated to be around 150 days, which is shorter than previous reports 4,9−10 . This is the first estimation of the gestation length of the Chinese pangolin based on physiological evidence under natural conditions.
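The ~150-day figure follows from simple date arithmetic between the inferred implantation window and the birth date. A minimal check, with "early July" represented here by an assumed illustrative date:

```python
# Back-of-envelope check of the estimated gestation length.
# "Early July" is an assumed illustrative implantation date, not a measured one.
from datetime import date

implantation = date(2015, 7, 10)    # assumed "early July" implantation
birth = date(2015, 12, 8)           # parturition date reported in the text
print((birth - implantation).days)  # -> 151 days, i.e. roughly 150
```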
Our findings that the gestation period took place later in the year (July-December), coupled with the occurrence of post-partum estrus and mating earlier in the year (December-May), suggest that delayed implantation likely takes place in this species, as proposed by Chin et al. 11 . This also explains the extensive variation in reported gestation lengths, from 180 to more than 372 days, determined from observations of mating behavior and parturition in captivity [10][11]21 . More studies on the reproductive physiology of this species are necessary.
Lastly, the present study also demonstrated that the difficulties associated with researching the life history and behaviors of the elusive pangolin can be alleviated with the use of technologies (e.g., camera trapping, radio tracking, etc.). This is especially true for non-migratory fossorial species, provided one has appropriate knowledge of their home range or residential environment. More and more new technologies and devices are being developed and applied to wildlife research in the field, which should greatly improve our understanding and promote conservation efforts for endangered species such as the pangolin.
|
2021-09-28T01:08:57.441Z
|
2021-07-14T00:00:00.000
|
{
"year": 2021,
"sha1": "0d72fb769449432d26a9c83de3968b8b9c5d89f2",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-702541/v1.pdf?c=1637263227000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0bd476c2f424452bb15d393ec0a52d142ac69471",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
}
|
45344873
|
pes2o/s2orc
|
v3-fos-license
|
Omalizumab in patients with severe uncontrolled asthma: well-defined eligibility criteria to promote asthma control
After more than a decade of omalizumab being widely used in the treatment of asthma, the Brazilian National Commission for the Incorporation of Technologies stated its opposition to the incorporation of omalizumab use within the scope of the Unified Health Care System of Brazil.(1) That ruling runs contrary to expert opinion that the drug should be made available to a specific group of patients with severe uncontrolled asthma, selected according to eligibility criteria that are well defined in clinical protocols.
TO THE EDITOR:
After more than a decade of omalizumab being widely used in the treatment of asthma, the Brazilian National Commission for the Incorporation of Technologies stated its opposition to the incorporation of omalizumab use within the scope of the Unified Health Care System of Brazil. (1) That ruling runs contrary to expert opinion that the drug should be made available to a specific group of patients with severe uncontrolled asthma, selected according to eligibility criteria that are well defined in clinical protocols.
Here, we report the results of omalizumab administration in 12 patients with severe asthma, selected according to the strict eligibility criteria presented in Chart 1. Nine of those patients met the criterion of lack of asthma control with appropriate treatment, and 3 met the criterion of the need for continuous doses of oral corticosteroids to maintain asthma control. Of the 12 patients evaluated, 8 (67%) were female. The mean age at the initiation of treatment was 45.36 ± 15.19 years. The baseline FEV1 was 1.72 ± 0.53 L (56.5 ± 12.6% of predicted), with no change after omalizumab administration. At enrollment in the study, the patients were using a mean inhaled corticosteroid dose of 2,318.18 ± 844.7 µg/day, and 7 patients (63%) were using oral corticosteroids chronically, at a dose of 2.5-40 mg/day. The mean monthly dose of omalizumab was 504.54 ± 316.58 mg.
During the study period, omalizumab was discontinued in 1 patient because of fainting and a rash, which were probably associated with the use of the medication. The asthma control scores of the 11 patients who completed the recommended 16 weeks of treatment are shown in Figure 1. Six of those patients had an excellent response, with evident improvements in their scores. One of those patients had ventricular tachycardia as a side effect of β2 agonist use and was dependent on the use of corticosteroids; that patient maintained control at the end of the 16-week follow-up period without the use of the β2 agonist. Another 3 patients (of the 6 who clearly benefited from the treatment with omalizumab) were chronic corticosteroid users. Among those 3 patients, the corticosteroid was discontinued in 1, whereas the dose of corticosteroid was reduced in 1 and maintained in 1.
Two patients showed no improvement after 16 weeks of treatment, at which point the omalizumab was discontinued. In 3 patients, the response was considered partial. In 2 of those patients, the omalizumab was discontinued after 32 weeks because of the occurrence of exacerbations; in 1, the decision was made to continue the treatment. In summary, we maintained the administration of omalizumab in 58% of the patients selected. When we used the asthma control questionnaire, defining a half-point variation in the score as clinically significant, the response was classified as good in 64% of the patients, compared with 73% when we used the asthma control test, defining a 3-point variation as clinically significant. Of the 12 patients in our sample, 8 presented a good response to omalizumab, regardless of the method employed to evaluate that response.
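A hedged sketch of the responder classification described above: a decrease of at least 0.5 points on the ACQ (where lower scores indicate better control) or an increase of at least 3 points on the ACT (where higher scores indicate better control). The thresholds are as stated in the text; the function names and example values are ours, not from the study protocol.

```python
# Sketch of the responder classification described in the text.
# ACQ: improvement = decrease of >= 0.5 points (lower score = better control).
# ACT: improvement = increase of >= 3 points (higher score = better control).
def acq_responder(baseline: float, week16: float) -> bool:
    return (baseline - week16) >= 0.5

def act_responder(baseline: float, week16: float) -> bool:
    return (week16 - baseline) >= 3.0

# Hypothetical patient: ACQ 3.2 -> 2.4, ACT 12 -> 16
print(acq_responder(3.2, 2.4), act_responder(12, 16))  # True True
```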
The decision to carry out this pilot analysis was made by the Pharmacy Board of the University of São Paulo School of Medicine Hospital das Clínicas, in 2010. At that time, there was only one nationally published study demonstrating that IgE blockade was safe in patients with asthma or allergic rhinitis caused by helminth infection. (2) However, in a multicenter study conducted in Brazil and published in 2012, Rubin et al. (3) evaluated the use of omalizumab as an add-on therapy in patients with moderate allergic asthma that was not controlled despite treatment with the combination of long-acting bronchodilators and inhaled corticosteroids (fluticasone ≥ 500 µg/day or equivalent). The authors reported improvement in asthma control and in the overall perception of efficacy among those patients. We started selecting patients in 2012, when the funds were made available in the annual budget of the institution. We estimated that it would take 1 year to conclude the analysis of 12 patients. However, because we strictly adhered to the pre-established criteria, it took nearly 3 years. Our experience was similar to that recently reported by the Australian Department of Health Subcommittee on Pharmaceutical Use. (4) Contrary to the initial estimate that approximately 1,000 patients per year would be included in the first 5 years of the program, only 148 and 156 patients were treated in the first and second year, respectively. (4) Although oral corticosteroid use was not a mandatory inclusion criterion, virtually all of our patients were using oral corticosteroids (regularly or continuously), as well as high doses of inhaled corticosteroids. Our criteria correspond to those approved by the Australian Department of Health (4) and by the National Institute for Health and Care Excellence (NICE) in the United Kingdom, (5) bodies that rely on pharmacoeconomic evaluations to guide their decisions regarding the allocation of resources for new medications.
The substantial improvement observed in some of our patients demonstrates that omalizumab is capable of radically altering the quality of life and the work capacity of a select portion of severe asthmatics. Similar results were taken into account in the United Kingdom in 2007, (3) at the time of provisional approval of omalizumab, despite the unacceptable cost-effectiveness ratio, which was > £ 30,000/quality-adjusted life year (QALY) gained. Not long ago, the NICE concluded that the use of the drug had become economically viable (cost-effectiveness ratio, £ 23,200/QALY gained) and limited its indication to patients with severe uncontrolled asthma who are users of oral corticosteroids and in whom the asthma remains uncontrolled even when the patients are medicated and followed according to the NICE guidelines (5) ; a similar policy was adopted by the Australian Department of Health. (4) The efficiency of those guidelines was recently confirmed in real-life studies. (6,7) In our study, the decision to maintain the treatment not only in the responders but also in the partial responders was based on the overall evaluation of efficacy by the medical staff. That practice is supported by the literature, which demonstrates, as the best parameter of a response to omalizumab, the impression of the medical staff at week 16 (according to the package insert used in Europe, this must be considered in the decision as to whether or not to continue treatment).
Our study has certain limitations. We did not determine exactly how many patients were evaluated and how many were offered the treatment. However, that does not interfere with the main conclusion: among an estimated total of approximately 2,500 (new and follow-up) patients with difficult-to-control asthma seen over a 3-year period, the use of omalizumab resulted in substantial clinical improvement in only a small portion. Another limitation was the lack of a control group. In a study with an n-of-1 design (in which efficacy and safety are evaluated in an individual patient using a double-blind, placebo-controlled, randomized study with multiple treatment periods), Gibson et al. (8) analyzed omalizumab administration in 12 patients with characteristics similar to those of our patients. In that study, the drug was discontinued after 12 weeks of treatment, allowing comparison between the periods with and without omalizumab treatment. The results were similar to those obtained in the present study: 50% of the patients evaluated showed a total or partial response.
INCLUSION CRITERIA
• Severe uncontrolled asthma (ACQ score > 1.5) + treatment with a high dose of inhaled corticosteroid (> 1,500 µg of beclomethasone or equivalent) + treatment with long-acting β2 agonists, or the need for continuous or intercalary maintenance with an oral corticosteroid (≥ 3 months in the last year)
• Severe controlled asthma (ACQ score ≤ 1.5) treated with an oral corticosteroid, accompanied by adverse events
• Adults > 18 years of age that are adherent to treatment and have been followed for ≥ 6 months
• Body weight of 30-150 kg
• Total serum IgE of 30-1,500 IU/mL
• Allergic asthma, confirmed clinically and by skin test or by in vitro testing for specific IgE
• ≥ 2 emergency room visits or ≥ 1 hospitalization for asthma in the last year
• Nonsmoker or former smoker

EXCLUSION CRITERIA
• Pregnancy
• Infectious exacerbation in the last 30 days

ACQ: asthma control questionnaire.
(Figure 1: ACQ and ACT scores over the 16 weeks of treatment.)

In conclusion, patients with severe asthma that remains uncontrolled despite appropriate treatment according to the available guidelines constitute only a small proportion of asthma patients and, because they are in poorer health, consume the largest share of the resources allocated. (9) For asthma patients treated at referral centers, efforts are made to identify the factors that, duly scaled and treated, have a positive effect on the evolution of their asthma. (10,11) The results obtained in our study, taken together with those reported in studies conducted at other centers, demonstrate that IgE blockade is effective for some patients. The application of a rigid protocol at asthma treatment centers would allow the identification of patients who might benefit from treatment with omalizumab, as opposed to prescription by litigation. (12) Individualized and accurate medical practice, allowing equity within the system without impeding scientific progress, is the way of the future. (13)
|
2018-04-03T06:19:39.269Z
|
2017-11-01T00:00:00.000
|
{
"year": 2017,
"sha1": "6011e0de7e1a2b7977828fbf4fcf811e00ad7a8a",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/jbpneu/v43n6/1806-3713-jbpneu-43-06-00487.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2b21ce73c37f31767dc495ebfe3ca7d5d0aa99a",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235222914
|
pes2o/s2orc
|
v3-fos-license
|
Normal tension glaucoma: Prevalence, etiology and treatment
Normal tension glaucoma is the most common type of glaucoma among people of east Asian countries. While a significant minority of cases of normal tension glaucoma respond to drugs or surgical procedures that lower intra-ocular pressure, most cases continue to progress, resulting in a continuing loss of visual field and blindness. We here review the current state of knowledge of this debilitating disease, and evaluate a promising pilot study showing a potential route to evaluate normal tension glaucoma and to effectively treat it with a vitamin and mineral supplement.
Introduction
Glaucoma is a visual disorder that is increasing in prevalence. Recent evidence has identified that, while IOP is normal in NTG, measurement of retinal venous pressure (RVP) often reveals elevated levels. Current treatment for NTG is limited to reduction of IOP, which is marginally effective. RVP increases are often associated with disturbed microcirculation, due to limited autoregulation and altered endothelial cells related, e.g., to Flammer syndrome. A reduction of glaucomatous optic neuropathy (GON) from NTG in response to vitamin supplementation to restore normal RVP, as detailed in this review, would provide a powerful tool to curtail progression of NTG. Additionally, using RVP measurement as a biomarker for NTG provides an early diagnostic for this debilitating disorder.
A three-month pilot study with glaucoma patients that directly tested this hypothesis proved effective in identifying NTG and presumed NTG by the presence of elevated RVP. Treatment of patients with Ocufolin forte was effective in reducing RVP as well as elevated homocysteine levels, a biomarker for deficiencies. Ocufolin contains micronutrients and is well tolerated, with no reported adverse implications. We suggest RVP screening of those at risk for NTG, and treatment of elevated RVP with Ocufolin forte, as a viable diagnosis and treatment pathway for this common type of glaucoma.
Epidemiology
Globally, glaucoma impacts over 70 million people, with one in ten bilaterally blind, making it the leading cause of irreversible blindness. Glaucoma is often asymptomatic prior to vision reduction, so the number of those afflicted is likely much greater than reported [1]. Worldwide, all types of glaucoma account for 6.5% of blindness, and the prevalence of glaucoma of all types is 3.5% for people over 40 years [2]. Tham, et al. [3] projected an increase of 50% in all types of glaucoma in the next two decades.
The global prevalence of all types of glaucoma is detailed by Chen, et al. [4], Tham, et al. [3] and Kim, et al. [5]. Within Asian populations, normal tension glaucoma (NTG) comprises 70% of primary open-angle glaucoma (POAG) cases. High tension glaucoma predominates in people with origins in Africa and Europe, while NTG predominates in people with origins in east Asia. In the Wang, et al. [6] consensus report on NTG in China, which draws from the major medical centers in China, they identified NTG as afflicting 1% of the Chinese population, with NTG comprising 70% of POAG cases. They report that in healthy populations, the average IOP is 17 mmHg for white people and 15 mmHg for Chinese people, while POAG averaged 22 mmHg [6]. They found that patients meeting the characteristics of Flammer Syndrome (FS) have a lower intracranial pressure, leading to an increased gradient at the lamina cribrosa and a resultant decrease in perfusion of the optic nerve. Wang, et al. [6] provide guidance for treatment of NTG based on patient specifics.
Flammer syndrome, which is often associated with NTG, describes a phenotype of people having a predisposition for an altered vascular reaction to stimuli such as cold, emotional stress, or high altitude. Common symptoms are: cold extremities, low blood pressure, prolonged sleep onset time, reduced feeling of thirst, and increased sensitivity to odor, pain, vibration, and certain drugs. FS subjects are often ambitious, successful, perfectionists, and sometimes brooding. Frequent signs are: altered gene expression, prolonged blood flow cessation in nailfold capillaroscopy after cold provocation, reduced autoregulation of ocular blood flow, and reduced retinal vasodilation after stimulation with flickering light. Retinal venous pressure is on average higher, and retinal astrocytes are more often activated. FS occurs more often in females than in males, in thin than in obese subjects, in young than in old people, in graduates than in blue collar workers, and in subjects with indoor rather than outdoor jobs [7]. Associated diseases are: normal tension glaucoma, occlusion of ocular vessels, retinitis pigmentosa, multiple sclerosis, and tinnitus or even sudden hearing loss.
Etiology
NTG, alternately termed low tension glaucoma or normal pressure glaucoma, manifests with optic disc flame hemorrhages and cupping, while IOP remains under 21 mmHg [8,9]. While NTG is generally considered to be similar to POAG in its outcomes, its etiology is different, and the mechanism of damage differs from that of POAG with high IOP [10].
Current thinking on NTG is that damage to the retina may be due to a lack of perfusion, with reports of disrupted ocular blood flow (OBF) [11], possibly due to an increase in retinal venous pressure (RVP) causing damage to the axons of the retinal ganglion cells that comprise the optic nerve [1].
Trivli, et al. [9] reviewed NTG pathogenesis and developed a model (Figure 1) showing that an increase in RVP causes a decrease in OBF, which impacts the retinal ganglion cells (RGC), resulting in a change in the optic nerve head (ONH). Wareham and Calkins [12] provided an excellent review detailing the impact of glaucoma on retinal vasculature, including diagrammatic clarity and the role of hormones on the endothelial cells making up the retinal arteries. Additionally, Wang, et al. [13] explained that a pressure mismatch in POAG is created by either high IOP or low cerebrospinal fluid (CSF) pressure, resulting in a pressure gradient at the lamina cribrosa that leads to Glaucomatous Optic Neuropathy (GON).
The work of Fan, et al. [14,15] showed that NTG presents a disturbed OBF as measured with imaging techniques, and that NTG is comorbid with systemic disorders, including migraine, hypotension, Alzheimer's disease, and Flammer Syndrome [7,16]. Fan, et al. [15] suggest that NTG may not be glaucoma, but a group of disorders with GON. This has implications for what may be best practice in treatment.
Ocular blood flow (OBF) is on average lower in glaucoma patients than in healthy controls. This reduction is more pronounced in normal-tension than in high-tension glaucoma, and it is more distinct in cases with progressing damage as compared to those with stable disease. OBF reduction has two effects on GON. The primary effect is a fluctuating supply of oxygen and micronutrients to the organs, leading to tissue damage. The fact that hypoxia-related factors are upregulated in the eyes of glaucoma patients indicates oxygen depletion. It is, however, not constant hypoxia, but rather the fluctuation of the oxygen and micronutrient supply, that leads to tissue damage, likely due to oxidative and nitrosative stress. Low perfusion pressure and disturbed autoregulation are major causes of the reduced blood supply causing oxygen and micronutrient variation, and both systemic hypotension and disturbed autoregulation are often consequences of the primary vascular dysregulation syndrome (PVD) [17,18]. The observed splinter hemorrhages in these patients are a consequence of a local breakdown of the blood-brain or blood-retinal barrier. The often-associated vein occlusions can be a consequence of local vein dysregulation [19]. Gugleta [20] described the significance of endothelin-1 (ET-1) in glaucoma. Endothelin-1 is vasoconstrictive and is a ubiquitous molecule that occurs in nearly all tissues. Its primary physiological function is the regulation of blood vessel diameter and thus the regulation of blood supply in tissues. It is secreted locally and exerts its effects locally. Endothelin-1 is involved in the regulation of blood flow in the retina and the optic nerve [12]. Flammer and Konieczka [16] evaluated the role of endothelin on retinal venous pressure (RVP). In healthy subjects, RVP is usually equal to or slightly above intraocular pressure (IOP), while RVP is often significantly increased in patients with eye or systemic disease.
This indicates endothelin-1 is a useful biomarker for RVP, with another important biomarker being homocysteine.
Homocysteine levels and the frequency of the heterozygous methylenetetrahydrofolate reductase (MTHFR) C677T mutation are increased in open-angle glaucoma. Since homocysteine can induce vascular injury, alterations in extracellular matrix remodeling, and neuronal cell death, these findings may have important implications for understanding glaucomatous optic neuropathy [21].
In the comprehensive review of homocysteine as a biomarker by Smith and Refsum [22], they find 100 diseases or conditions that are associated with raised concentrations of plasma homocysteine. The commonest associations are with cardiovascular diseases and diseases of the central nervous system, but a large number of developmental and age-related conditions are also associated. Few disease biomarkers have so many associations. The clinical importance of homocysteine as a biomarker becomes apparent if lowering plasma homocysteine by B vitamin treatment can reduce disease. Smith and Refsum [22] reported five diseases that are diminished by lowering total homocysteine: neural tube defects, impaired childhood cognition, macular degeneration, primary stroke, and cognitive impairment in the elderly. They concluded that plasma homocysteine levels in adults of 10 μmol/L or less are probably safe, but that values of 11 μmol/L or above may justify intervention. Homocysteine is more than a disease biomarker: it may be a useful guide for the prevention of disease [22,23].
The data in Schmidl [24] showed that a three-month intake of a dietary supplement containing L-methylfolate can significantly reduce blood homocysteine levels in patients with diabetes. This is of importance because higher plasma levels of homocysteine are linked with an increased risk of vascular-associated systemic diseases and eye diseases. The review of such nutritional therapies for the treatment of diabetic retinopathy by Shi, et al. [25] gives further support to this approach. Homocysteine (Hcy) and endothelin-1 are useful biomarkers for elevated retinal venous pressure. As seen in Devogelaere [26], Hcy is elevated in NTG and other types of glaucoma.
Efficient reduction of Hcy with vitamin supplementation was already shown by Schmidl [24].
Retinal venous pressure (RVP) may be measured noninvasively by ophthalmodynamometry [27,28]. While RVP is equal to or slightly above IOP in healthy people, it is often elevated in disease. Stodtmeister [29] recently documented a means to measure RVP with a contact lens dynamometer (Imedos, Jena, Germany). This device entails monitoring the retinal vein for pulsation while a pressure is applied to the sclera.
Until recently, the pressure in the intraocular veins was assumed to be equal to the IOP. According to Stodtmeister [30], the pressure in the central retinal vein may be considerably higher than the intraocular pressure. Therefore, the pressure in the retinal veins in the prelaminar layer of the optic nerve head is likely also higher than the IOP. In this case, the perfusion pressure (arterial pressure minus central retinal venous pressure) is reduced (schematized in Figure 1). Since RVP is higher in glaucoma patients than in healthy subjects, and since, in patients with unequal excavations, RVP is higher in the eyes with larger excavation, RVP is a considerable risk factor for the progression of glaucomatous damage. Such elevated RVP may be the reason IOP-lowering therapy is ineffective in eyes in which the pressure of the central retinal vein is higher than the intraocular pressure, a condition that may apply to about 40-50% of glaucoma patients [29,30]. Optical coherence tomography angiography (OCTA), a dye-free, non-invasive imaging assessment of ocular blood flow, has recently been deployed to assess glaucomatous damage [31].
That study showed that glaucomatous eyes had reduced blood flow and vessel density in the optic nerve head compared to control eyes. Additionally, Wang, et al. [32] showed that OCTA measurements correlate with visual field measurements, indicating that OCTA provides a direct route to assessing retinal perfusion. Thus, measurement of RVP and the use of OCTA look to be valuable tools in assessing retinal health in glaucoma.
Of course, retinal venous pressure can also be increased in a clinically healthy eye. But nevertheless, it can be a strong sign of a systemic disorder, such as an autoimmune disease [33].
The ocular cause of an increase of RVP may either be a mechanical compression or a functional constriction of the vein at the exit of the eye. The consequences are decreased perfusion pressure, which increases the risk for hypoxia.
Increased RVP also increases transmural pressure, and thereby the risk for retinal edema.
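The perfusion-pressure relation running through the preceding paragraphs can be made concrete with a few illustrative numbers: venous outflow from the eye is governed by whichever of IOP and RVP is higher, so elevated RVP lowers the effective perfusion pressure even when IOP is well controlled. A minimal sketch, with assumed pressures (the arterial value is an arbitrary illustration, not a clinical reference):

```python
# Minimal sketch of the perfusion-pressure argument, with assumed pressures
# in mmHg. Venous outflow is set by whichever of IOP and RVP is higher,
# so elevated RVP reduces effective perfusion pressure even at controlled IOP.
def ocular_perfusion_pressure(arterial_mmHg: float, iop_mmHg: float, rvp_mmHg: float) -> float:
    return arterial_mmHg - max(iop_mmHg, rvp_mmHg)

arterial = 55.0  # assumed mean ophthalmic arterial pressure (illustrative)
print(ocular_perfusion_pressure(arterial, iop_mmHg=14, rvp_mmHg=14))  # 41.0: healthy case
print(ocular_perfusion_pressure(arterial, iop_mmHg=14, rvp_mmHg=28))  # 27.0: elevated RVP
```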
Elevated RVP and elevated CSF pressure may have a single cause, or one may be influenced by the other. Morgan [34] estimated the RVP effect by measuring the retinal vein ophthalmodynamometric force, and found that it, and not IOP, correlated with optic disc excavation. Morgan, et al. [35] noted that high myopia patients have a thin lamina cribrosa, that this is exacerbated in myopic patients with glaucoma, and that a thinner lamina cribrosa magnifies the pressure gradient effect by two to four times. This magnified effect may explain the more rapid progression of glaucoma in myopic patients, including those having lower IOP.
According to Fang, et al. [36], one frequent sign of NTG is an increase of the retinal venous pressure (RVP). The effect of FS on RVP was examined by measuring RVP in the eyes of POAG patients and healthy subjects with and without FS. The results showed that RVP was higher in subjects with FS, particularly in FS subjects with glaucoma.
Pillunat, et al. [37] evaluated patients with IOP-controlled open angle glaucoma, examining those with early, moderate, and advanced disease stages and comparing these to a healthy control group. In more advanced cases of glaucoma, RVP was higher than expected.
Sung [38] found that progression of visual field loss in NTG was related to an unstable ocular perfusion pressure (which is proportional to RVP if IOP is static), as determined by measuring IOP over a 24-hour period. They found that visual field defects in NTG are more central than in patients with high IOP.
Treatment
Currently, the main goal of glaucoma treatment is to slow disease progression and thereby preserve quality of life. The only tool to accomplish this has been the reduction of IOP, with several multicenter trials providing evidence that reducing IOP slows disease progression [1].
Thus, the recommended treatment for POAG, including NTG, is to decrease IOP. Glaucoma treatment uses two treatment strategies to reduce IOP: medication (topical or systemic) and surgical shunts [39]. Additionally, minimally invasive glaucoma surgery (MIGS) as well as cataract surgery in NTG patients [40] have shown promise in slowing disease progression. While IOP reduction is less effective for NTG, it is the only treatment available to date, so the current best practice is to lower IOP in NTG, with the American Academy of Ophthalmology advising to lower IOP to 8-12 mmHg, and to exercise caution in using beta blockers due to the comorbidity of systemic nocturnal hypotension among NTG patients [41].
Decreasing IOP is minimally effective at preserving vision, but given that it is the only option, it is the current standard of care.
The evidence reported above that NTG presents with vascular deficiencies gives hope that improving ocular perfusion may be a potential treatment for NTG, as Fan, et al. [15] have suggested.
Conclusion
Given the link of RVP with NTG and other eye conditions, and the potential to treat elevated RVP with a mixture of micronutrients having no known adverse effects, we suggest the use of RVP measurement or OCTA screening to assess for NTG in glaucomatous patients and to screen for early NTG in those with risk factors for local or systemic micronutrient deficiencies of various causes. While RVP measurement may feel less comfortable for elderly patients than IOP measurement, it will permit the skilled ophthalmologist to identify and track NTG and to monitor treatment with the vitamin cocktail.
We believe screening for FS characteristics will restrict RVP measurements to those at risk for NTG, and will allow the medical community to directly address this significant cause of blindness in a manner that reduces its impact. We suggest that glaucoma patients who are progressing despite adequate IOP control may benefit from RVP and homocysteine evaluation, in addition to the potential benefit of MIGS or early cataract surgery.
While the measurement of RVP is non-trivial, we believe it will be a useful tool for early diagnosis and for monitoring treatment of NTG, and will provide a significant advance in the treatment of this major cause of blindness. Elevated homocysteine appears to be a useful biomarker for increased RVP, which may be important in treating glaucoma that is not responsive to IOP reduction.
|
2021-05-28T17:26:54.633Z
|
2021-04-30T00:00:00.000
|
{
"year": 2021,
"sha1": "722b668c849ff9568264c882fe6646022c580f39",
"oa_license": "CCBY",
"oa_url": "https://www.peertechzpublications.com/articles/JCRO-8-188.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "722b668c849ff9568264c882fe6646022c580f39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
227236915
|
pes2o/s2orc
|
v3-fos-license
|
Meta-analysis of postoperative antithrombotic therapy after left atrial appendage occlusion
Objective This meta-analysis explored the safety and effectiveness of different anticoagulant regimens after left atrial appendage occlusion (LAAO). Methods Databases, such as PubMed, MEDLINE, EMBASE, Web of Science, and Cochrane Library, were searched to identify eligible studies according to the inclusion criteria. The incidences of events, including device-related thrombus (DRT) formation, stroke, systemic thromboembolism, bleeding, cardiovascular mortality, and all-cause mortality, were analyzed using R version 3.2.3. Results The screening retrieved 32 studies, including 36 study groups and 4,474 patients. The incidence of outcomes after LAAO was calculated via meta-analysis. In the subgroup analysis, the rates of DRT formation, cardiovascular mortality, and all-cause mortality were significantly different among different antithrombotic methods. Single antiplatelet therapy was associated with the highest rate of adverse events, followed by dual antiplatelet therapy (DAPT). Vitamin K antagonists (VKAs) and new oral anticoagulants (NOACs) carried lower rates of adverse events. Conclusions Anticoagulant therapy had better safety and efficacy than antiplatelet therapy. Thus, for patients with nonabsolute anticoagulant contraindications, anticoagulant therapy rather than DAPT should be actively selected. NOACs displayed potential for further development, and these treatments might represent alternatives to VKAs in the future.
Introduction
Atrial fibrillation (AF) is the most common arrhythmia encountered in clinical practice. The most severe complication of AF is thromboembolism, especially ischemic stroke. Approximately 20% to 30% of strokes are directly caused by AF. 1 Oral anticoagulants (OACs) significantly reduce the incidence of stroke in patients with AF, but long-term anticoagulant therapy might be impeded by bleeding complications in patients with a high risk of bleeding, restricting the use of OACs. 2,3 Because 90% of blood clots in patients with nonvalvular AF originate from the left atrial appendage, 4,5 left atrial appendage occlusion (LAAO) was developed with the fundamental goal of completely sealing the left atrial appendage, thereby eliminating the primary source of emboli and avoiding the need for anticoagulant therapy through mechanical occlusion. 6,7 The 2016 European Society of Cardiology guidelines on AF explicitly stated that LAAO may be considered for patients with AF who have contraindications for OACs, as a class IIB indication. 1 With the development of LAAO, device-related thrombus (DRT) formation and embolic stroke after implantation of the occluder have attracted extensive attention. 8 After the implantation of the left atrial appendage occluder, which is perceived as a foreign object by the human immune system, thrombosis might occur on the occluder surface before complete endothelialization of the occluder surface (typically 45 days), leading to thromboembolic events. 9 Therefore, antithrombotic therapy is necessary in the early stage after occluder implantation, and anticoagulants and/or antiplatelet drugs should be administered in the early stages after LAAO. 10 The current guidelines for postimplantation antithrombotic therapy are unclear because of the lack of supporting data and the significant heterogeneity encountered in clinical practice. 7,11 LAAO is recommended as an alternative treatment strategy in patients with AF who are at high risk of stroke. Ideally, long-term anticoagulant therapy should not be required in patients undergoing LAAO. However, if complicated antithrombotic management is needed after the operation, the original intent of improving the quality of life of patients via LAAO is lost. Although the mainstream recommendation is treatment with vitamin K antagonists (VKAs) for 45 days postsurgery, some individuals prefer dual antiplatelet therapy (DAPT). 11 Therefore, the type of antithrombotic drugs to be used and whether long-term antithrombotic therapy is required are issues that require further investigation.
In the present study, the status of postoperative antithrombotic therapy for patients with nonvalvular AF was reviewed. Meta-analysis was used to explore the safety and efficacy of different anticoagulant regimens after LAAO to provide guidance for the development of appropriate anticoagulant regimens.
Search strategy
We conducted a literature search using PubMed, MEDLINE, EMBASE, Web of Science, and Cochrane Library (February 1, 2020) to identify eligible studies using the keywords "left atrial appendage closure," "anticoagulant," and "thrombus." We also manually searched the reference lists of relevant studies to identify additional publications. The retrieved citations were reviewed independently by two investigators (SL and JW), and any disagreements were solved via discussion. This metaanalysis has been registered on PROSPERO under the registration number CRD42020151460.
Selection criteria
The following inclusion criteria were applied for the eligible studies: 1) prospective or retrospective studies; 2) patients with nonvalvular AF who had undergone LAAO; 3) patients received distinct antithrombotic regimens after LAAO; and 4) the study described the safety and efficacy outcomes of patients undergoing prolonged antithrombotic therapy. The following studies were excluded: 1) studies with <10 subjects; 2) studies with missing data; and 3) case reports, review articles, guidelines, and cell and animal studies. To avoid publication bias, studies that were followups of other included studies or substudies of the same cohorts were also excluded from this meta-analysis. When multiple publications from the same study population were found, data from the most inclusive report were used.
Data extraction
A standardized, prepiloted form was used to extract data from the included studies. The following study characteristics were extracted: year of publication, study design, number of patients, and clinical characteristics. The primary efficacy outcomes of this study were as follows: the incidence of DRT formation, systemic thromboembolism, and stroke (hemorrhagic and ischemic). The primary safety outcomes were bleeding (minor bleeding and major bleeding), cardiovascular mortality, and all-cause mortality. DRT was defined as an echo density on the device visible on transesophageal echocardiography. Bleeding events were classified as major (intracranial, retroperitoneal, intraspinal, intraocular, or pericardial hemorrhage; decrease of hemoglobin levels >2 g/dL; and transfusion of ≥2 units of packed red blood cells) and minor (other bleeding events). Cardiovascular death was defined as death caused by a disturbance of the cardiovascular system, and all-cause death was defined as death from any cause.
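The major/minor bleeding dichotomy above is a simple decision rule; a hedged sketch follows, with thresholds taken from the definitions in the text and illustrative field names of our own choosing:

```python
# Sketch of the bleeding-event classification used in the meta-analysis.
# Thresholds follow the definitions in the text; field names are illustrative.
MAJOR_SITES = {"intracranial", "retroperitoneal", "intraspinal", "intraocular", "pericardial"}

def classify_bleeding(site: str, hb_drop_g_dl: float, rbc_units: int) -> str:
    """Return 'major' or 'minor' per the study's bleeding definitions."""
    if site in MAJOR_SITES or hb_drop_g_dl > 2.0 or rbc_units >= 2:
        return "major"
    return "minor"

print(classify_bleeding("gastrointestinal", 1.2, 0))  # minor
print(classify_bleeding("intracranial", 0.0, 0))      # major
```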
Quality assessment
The methodological quality and risk of bias of the included studies were evaluated using the Methodological Index for Non-Randomized Studies. 12 This index consists of 12 items, each scored on a scale of 0 to 2, including the evaluation purpose, design, data collection, and follow-up of the study. The maximum ideal score is 16 for noncomparative studies and 24 for comparative studies.
Statistical analysis
The effect estimates were extracted from each study in the form of events for dichotomous data and means or medians for continuous data. The pooled proportion was then calculated using inverse-variance weights. Because of the existence of extreme values, we transformed the proportion of each study using the Freeman-Tukey double arcsine method. The heterogeneity between the studies was analyzed using the I² statistic, and a random-effects model was applied. Funnel plots were generated to observe potential biases, and asymmetry was tested using Egger's linear regression approach. Forest plots were generated to illustrate the relative effect size of the individual studies on each clinical outcome. The analysis was conducted using R version 3.2.3. P < 0.05 denoted statistical significance.
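As a sketch of the pooling procedure described above (the published analysis used R 3.2.3; this Python version with made-up counts is illustrative only), the Freeman-Tukey double arcsine transformation, DerSimonian-Laird random-effects pooling, and the I² statistic can be computed as follows:

```python
import numpy as np

def freeman_tukey(events, n):
    """Freeman-Tukey double arcsine transform and its approximate variance."""
    events, n = np.asarray(events, float), np.asarray(n, float)
    t = 0.5 * (np.arcsin(np.sqrt(events / (n + 1))) +
               np.arcsin(np.sqrt((events + 1) / (n + 1))))
    v = 1.0 / (4.0 * n + 2.0)  # approximate sampling variance of t
    return t, v

def dersimonian_laird(t, v):
    """Random-effects pooling with DerSimonian-Laird tau^2 and the I^2 statistic."""
    w = 1.0 / v                              # inverse-variance weights
    t_fe = np.sum(w * t) / np.sum(w)         # fixed-effect estimate
    q = np.sum(w * (t - t_fe) ** 2)          # Cochran's Q
    df = len(t) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    t_re = np.sum(w_re * t) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return t_re, se, i2

# Hypothetical per-study event counts and sample sizes
t, v = freeman_tukey(events=[2, 0, 5, 1], n=[120, 45, 300, 80])
t_re, se, i2 = dersimonian_laird(t, v)
# Simple sin^2 back-transformation to a proportion (published analyses
# typically use Miller's correction with the harmonic mean of n instead)
pooled = np.sin(t_re) ** 2
low = np.sin(max(0.0, t_re - 1.96 * se)) ** 2
high = np.sin(t_re + 1.96 * se) ** 2
print(f"pooled rate = {pooled:.4f} ({low:.4f} to {high:.4f}), I^2 = {i2:.0f}%")
```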
Study selection
A total of 1065 studies were retrieved. After removing duplicates, 663 studies were screened further. After reading the abstract and partial text of each study, unrelated studies, reviews, case reports, studies with unclear outcomes or incomplete data, and studies in which LAAO was combined with other surgeries were excluded. Finally, 32 studies13-44 that satisfied the selection criteria were included in this meta-analysis (Figure 1).
Characteristics of studies
Four studies13,14,25,35 used two different antithrombotic regimens after LAAO. Therefore, each of these four studies was considered to include two research groups, and finally, 36 research groups were included in the analysis. The studies were published between 2011 and 2019, and the number of enrolled patients ranged from 12 to 1019. These studies encompassed 4474 patients with nonvalvular AF (mean age, 74.38 ± 6.80 years). The mean CHA2DS2-VASc and HAS-BLED scores were 4.3 ± 1.5 and 3.1 ± 1.0, respectively. Table 1 summarizes the baseline characteristics of the patients in the included studies. The quality of all 32 nonrandomized controlled studies was evaluated, and the quality scores ranged from 8 to 18, indicating that the studies were of moderate quality. The postoperative antithrombotic regimen and follow-up duration of each study are summarized in Table 2. The definitions of the outcome events in each study were roughly similar, and the reported data were considered to be under the same definition.
Results of the meta-analysis
Via meta-analysis, we obtained the overall incidence of various endpoint events after LAAO. The heterogeneity among the 36 study groups for DRT was I² = 49%. The results illustrated that the pooled rate of DRT formation was 1.69% (1.02% to 2.48%). The meta-analysis of systemic thromboembolism consisted of 28 study groups, with heterogeneity of I² = 38%; the pooled rate of systemic thromboembolism was 0.03% (0.00% to 0.39%). In total, 35 study groups were included in the analysis of stroke/transient ischemic attacks (TIA), with heterogeneity of I² = 37%; the pooled rate of stroke/TIA was 1.18% (0.66% to 1.82%). There were 31 study groups for major bleeding, with heterogeneity of I² = 72%; the pooled rate of major bleeding was 2.38% (1.21% to 3.83%). Twenty-six study groups were included in the analysis of minor bleeding, with heterogeneity of I² = 76%; the pooled rate of minor bleeding was 2.32% (1.03% to 3.98%). There were 32 study groups in the analysis of cardiovascular mortality, with heterogeneity of I² = 54%; the pooled rate of cardiovascular mortality was 0.27% (0.00% to 0.83%). The analysis of all-cause mortality included 32 study groups, with heterogeneity of I² = 85%; the pooled rate of all-cause mortality was 4.27% (2.50% to 6.41%). Forest plots for each outcome are presented in the corresponding figures.
Subgroup analysis
We conducted subgroup analyses according to the different postoperative antithrombotic methods (Table 3). The antithrombotic regimens of the 36 study groups were divided into four categories according to the initial antithrombotic drugs: DAPT, new oral anticoagulants (NOACs), VKAs, and single antiplatelet therapy (SAPT). The results for the four antithrombotic schemes were then calculated.
The rates of DRT formation, cardiovascular mortality, and all-cause mortality significantly differed among the treatments (P < 0.05). Specifically, SAPT was associated with the highest incidence of DRT events (5.38%), followed by DAPT (2.31%), VKAs (0.89%), and NOACs (0.07%). The incidence of all-cause death decreased in the order of SAPT > DAPT > NOACs > VKAs, whereas that of cardiovascular death decreased in the order of SAPT > DAPT > NOACs = VKAs. Although statistical differences were not detected, DAPT and SAPT were linked to higher rates of thromboembolism, stroke/TIA, and minor bleeding than NOACs and VKAs. Moreover, NOACs were associated with the highest incidence of major bleeding (5.05%).
Heterogeneity analysis
Meta-regression analysis was performed on the outcome indicators of major bleeding, minor bleeding, cardiovascular mortality, and all-cause mortality to explore their high heterogeneity. We analyzed the influence of publication year, sample size, and literature quality on heterogeneity. The difference in sample size was identified as a source of heterogeneity for major bleeding and cardiovascular mortality (P < 0.05), and the quality of the literature was identified as a source of heterogeneity for minor bleeding (P < 0.05), whereas none of the three factors was a source of heterogeneity for all-cause mortality.
Publication bias
Publication bias was assessed using funnel plots and Egger's test. No significant bias was detected for the outcomes of DRT formation, systemic thromboembolism, stroke/TIA, minor/major bleeding, and all-cause mortality, as indicated by the statistically symmetrical funnel plots. However, Egger's test revealed bias for the outcome of cardiovascular mortality (P < 0.001; Figure 4).
Discussion
This study used meta-analysis to examine the incidence of postoperative complications after LAAO and the effects of the antithrombotic regimen. According to our meta-analysis, the incidence of postoperative adverse events for LAAO was as follows: DRT, 1.69%; systemic thromboembolism, 0.03%; stroke/TIA, 1.18%; major bleeding, 2.38%; minor bleeding, 2.32%; cardiovascular mortality, 0.27%; and all-cause mortality, 4.27%. The pooled incidence of adverse events after LAAO surgery was similar to that reported in previous large studies.45 Presently, the antithrombotic regimens after LAAO surgery are as follows: (a) OAC therapy for 45 days after surgery, DAPT after confirmed successful occlusion, and lifelong aspirin monotherapy; (b) DAPT for 3 to 6 months after surgery and then long-term aspirin use; (c) NOACs; and (d) aspirin alone. The PROTECT-AF and PREVAIL trials, two large multicenter randomized controlled studies, primarily compared the efficacy and safety of LAAO with those of oral warfarin in preventing stroke in patients with AF.46-48 In these two trials, warfarin (± aspirin) was administered for 45 days after surgery, followed by DAPT for 6 months and then aspirin. The results illustrated that although a risk of bleeding was plausible in the early stage, regimen (a) was feasible in the high-risk population without anticoagulant contraindications. On the basis of these two studies, regimen (a) was widely adopted.49 However, the PROTECT-AF and PREVAIL trials did not include patients with contraindications to anticoagulation, whereas LAAO is indicated for patients with a high risk of bleeding from OACs.50 Thus, it was unclear whether antiplatelet therapy could prevent DRT formation.
The subsequent ASAP registry study was the first prospective, multicenter, nonrandomized study of patients with nonvalvular AF and warfarin contraindications.43 The study enrolled 150 patients who received 6 months of postoperative DAPT followed by long-term aspirin use. During follow-up, six (4%) patients developed DRTs; among them, only one patient experienced ischemic stroke, on day 341. Based on the results of ASAP, the antiplatelet drug regimen was applied in several other single-arm registry studies, in which the incidences of ischemic stroke and thrombus formation on the device surface were both low during the follow-up period. In the current analysis, with the exception of major bleeding events, antiplatelet regimens (SAPT and DAPT) were linked to higher rates of adverse events than anticoagulant regimens. In particular, significantly lower rates of DRT formation, cardiovascular death, and all-cause death were noted for NOACs and VKAs. We concluded that the efficacy of anticoagulant therapy is significantly better than that of antiplatelet therapy, and that the safety of anticoagulant therapy is not inferior to that of antiplatelet therapy. Therefore, we speculated that antiplatelet therapy cannot replace anticoagulant therapy. Another paper matched and compared patients treated with anticoagulant or antiplatelet therapy in various large studies, reaching the same conclusion.51 In some studies, patients were divided into nonanticoagulant contraindication and anticoagulant contraindication groups, and regimens (a) and (b), respectively, were recommended for these groups.52 In fact, some patients have relative rather than absolute contraindications to anticoagulants, such as previous bleeding and poor international normalized ratios. If these patients were classified as having contraindications to anticoagulants and preferences for antiplatelet therapy, the effectiveness of antithrombotic therapy might be reduced. Some studies suggested that the risk of DRT formation after LAAO could be assessed using the platelet count, ejection fraction, CHA2DS2-VASc score, echocardiographic features, and occlusion conditions.53,54 It has also been reported that older age and a previous history of ischemic stroke are predictors of DRT formation.55 Thus, we speculated that patients' ability to receive anticoagulant therapy should be carefully evaluated, rather than defaulting patients to DAPT. For relatively contraindicated patients with an acceptable risk of bleeding, at least 45 days of anticoagulant therapy is recommended.
Finally, according to the current data, although NOACs were linked to the highest incidence of major bleeding, these drugs were associated with lower rates of DRT formation, cardiovascular death, and all-cause death than the other regimens, and they carried similar risks of embolism and stroke. These observations were similar to those at the 3-month follow-up of the EWOLUTION study.56 In addition, Bösche et al.35 and Enomoto et al.25 compared the safety and efficacy of NOACs with those of DAPT and warfarin, respectively, after LAAO, finding that NOAC treatment was safe and effective. Thus, it can be deduced that NOACs have a critical role as antithrombotic therapies after LAAO, but this finding must be substantiated in larger clinical trials.57
Study limitations
The majority of the articles included in this meta-analysis were single-arm studies; few randomized controlled trials were identified, and the level of evidence was therefore not high. The number of studies assessing each antithrombotic scheme varied greatly, and only two studies assessed SAPT. Furthermore, the present study analyzed the efficacy and safety of various antithrombotic regimens, but it did not distinguish the plausible effects of different types of occluders on postoperative adverse events. In addition, heterogeneity was analyzed in this study. The sample size and study quality explained some of the heterogeneity, but no source of heterogeneity was identified for all-cause mortality. We detected publication bias for cardiovascular mortality outcomes, which may have been attributable to incomplete and inaccurate outcome reporting in some of the lower-quality studies.
Conclusion
Although most patients globally receive DAPT after LAAO surgery, the results of this meta-analysis indicated that anticoagulant therapy is associated with better safety and efficacy than antiplatelet therapy. For patients without absolute contraindications to anticoagulants, anticoagulant therapy should be selected. In addition, NOACs have satisfactory development potential, and they may serve as alternatives to VKAs in the future.
Declaration of conflicting interest
The authors declare that there is no conflict of interest.
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This study was funded by the Nanjing Medical Science and Technology Development Fund (QRX17060) and Jiangsu Pharmaceutical Association Shire Biopharmaceutical Fund (S201606).
Effect of Age and Sex on Histomorphometrical Characteristics of Two Muscles of Laticauda Lambs
The aim of the present experiment was to determine the effect of sex and age on histochemical and morphometric characteristics of muscle fibres (myocytes) in lambs born by single, twin, triplet and quadruplet birth. Thirty lambs were slaughtered at 60 days of age; thirty were weaned at 60 days, fed until 120 days with flakes (60%) and food supplements, and then slaughtered. Muscle tissues were obtained from two muscles, namely m. semitendinosus and m. longissimus dorsi, of all lambs. For each fibre type, area, perimeter and diameter (maximum and minimum) were measured, and slow-twitch oxidative, fast-twitch glycolytic and fast-twitch oxidative-glycolytic fibres were histochemically differentiated. The muscles were stained for myosin ATPase and succinic dehydrogenase. At 60 days, females had larger fibres than males, whereas the opposite was observed at 120 days. Likewise, at 60 days, the lambs born by single birth had larger fibres than those born by multiple births, whereas the opposite was observed at 120 days. Single lambs were heavier than twin and multiple lambs. Fast-twitch glycolytic fibres had the largest size, followed by slow-twitch oxidative and fast-twitch oxidative-glycolytic fibres. The dimensions of fibre types in m. longissimus dorsi were larger than in m. semitendinosus (P < 0.001). These muscle fibre characteristics are thought to be important factors influencing meat quality, which is often related to metabolic and contractile properties as determined by the muscle fibre type distribution.

Birth type, postnatal development, lamb, histochemistry, nutrition, muscle fibres
The Laticauda probably originated from the Northern African sheep, Berbera or Barbaresca, and it acquired its present characteristics through subsequent crossbreeding with the sheep from the Apennines, the typical breed of Southern Italy. Laticauda is bred particularly in the provinces of Benevento and Avellino, and in recent years the number of head has increased. This breed is traditionally reared on hilly farm pasture, predominantly in sedentary breeding groups, and the most common farming system is the "family farm type". Laticauda is a dual-purpose breed with the ability to produce a good amount of milk and meat. Meat quality is affected by numerous factors, including the growth stage and differentiation of skeletal muscle fibre types. The physiological differentiation of muscle fibres is a dynamic equilibrium which can vary during growth or as a response to the muscle work rate. Guth and Yellin (1971) noticed that muscle fibres continuously change during the animal's life as an adaptation to functional demand and that the fibre type only reflects the fibre constitution at a certain moment. Henkel (1991) suggested muscle histochemistry as a tool for quantifying the effect of different treatments on the size of muscle fibres. In this study we defined the skeletal muscle fibre populations of lambs by using different m-ATPase methods, and we analyzed the postnatal development of these fibre populations between 60 and 120 days of age. Muscle fibre type may be classified based on enzymatic activity; in this study the following nomenclature is used for myofibre types: FG (fast-contracting with glycolytic metabolism) or IIB fibres; SO (slow-contracting with oxidative metabolism) or I fibres; and FOG (fast-contracting with glycolytic-oxidative metabolism) or IIA fibres. The information obtained on the distribution of the different fibre types, their number and diameter and/or area, represents an element of paramount importance for the determination of some basic characteristics (morphological and functional) of the muscle that have great influence on some aspects of meat quality. Our study also intends to assess a probable correlation between birth type and weight: lambs born by single birth have higher weights than lambs born by multiple births, and the degree of weight reduction is influenced by postnatal nutrition (Greenwood et al. 2000).
Materials and Methods
We used sixty lambs (30 males and 30 females) from a farm located in Caserta (Italy) that were slaughtered at 60 and 120 days. The slaughterhouse had the EEC mark with reference to rules 852/853/854/2004, 2076/2005 and 1069/2009, and the animals were treated according to the guidelines of the European Community on the treatment of experimental animals (Reg. CE 1/2005; directive 74/577/EEC; Act 439, 2 August 1978). They were born by single, twin, triplet and quadruplet births and were fed mother's milk until 60 days of age. All of the ovines had passed the obligatory health tests and had the characteristic pattern of the species.
Lambs were identified and weighed at birth, and the weight was checked weekly until the end of the study. The time points for slaughter of the lambs during postnatal development to obtain the muscle samples were 60 and 120 days after birth. We obtained the muscle samples surgically, and we used m. semitendinosus (St) and m. longissimus dorsi (Ld). To avoid possible morphological and morphometric alterations of the fibres, the samples were frozen during the first hour after the animal's slaughter in 2-methylbutane cooled in liquid nitrogen. Transverse serial sections (8 μm) were cut in a cryostat at -20 °C. The sections were stained histochemically for myosin ATPase (m-ATPase, which shows the muscular contraction type) and succinic dehydrogenase (SDH, which shows the fibre metabolism) simultaneously on the same muscle fibres (Padykula and Herman 1955; Nachlas et al. 1957; Barany 1967; Edstrom and Kugelberg 1968; Guth and Samaha 1970; Solomon and Dunn 1988; Velotto et al. 2005). The method used for the combined histochemical staining (acid m-ATPase + SDH) consisted of different phases. Acid pre-incubation was performed at room temperature for 20 min and was always followed by two 1-min rinses in CaCl2 in tris hydroxymethyl aminomethane buffer rinse solution. Nitro blue tetrazolium (NBT) incubation was performed for the detection of SDH activity at 37 °C for 20 min, followed by two rinses in distilled water. For the myofibrillar (acid) ATPase portions, the procedure was performed at 37 °C at pH 9.4 for 60 min, along with two 5-min rinses in CaCl2 solution and incubation for 5 min in CoCl2 solution. Finally, ammonium sulphide staining of the acid ATPase procedure was performed. Cover slips were placed over the stained tissue sections and fixed in place using glycerol jelly. Additional serial sections were also histochemically stained for detection of basic m-ATPase and SDH activities. The basic m-ATPase method consisted of different phases. Sodium-cacodylate and sucrose solutions were used for incubation for 5 min, followed by two 1-min rinses in CaCl2 in tris hydroxymethyl aminomethane buffer rinse solution. Sigma 221 and CaCl2 solutions were used for 10 min in the pH range 10.3-10.5, followed by two 1-min rinses in CaCl2 and tris hydroxymethyl aminomethane buffer (Merck & Co, USA) rinse solution. For the myofibrillar (acid) ATPase portions, the procedure was performed at 37 °C at a pH of 9.4 for 50 min, along with one 30-s rinse in CaCl2 solution and incubation for 3 min in CoCl2 solution. Finally, ammonium sulphide staining of the acid ATPase procedure was performed. Cover slips were placed over the stained tissue sections and fixed in place using glycerol jelly. The SDH method was used for the second control procedure and consisted of different phases. Incubation in NBT at 37 °C was performed for 40 min, followed by two rinses in distilled water. Finally, formaldehyde solution was used for 10 min. Cover slips were placed over the stained tissue sections and fixed in place using glycerol jelly. Morphometric analyses were carried out using an interactive image analysis system (Leica CM 1100). The minimum diameter was selected as the measure of fibre diameter to avoid possible errors due to tilted sections. In each muscle section, at least 150 fibres of each type were analyzed using random fields. The average fibre size (area, perimeter, maximum and minimum diameter) was calculated. Data were processed by analysis of variance, and means were estimated following the general linear model
(Proc GLM; SAS, 1992) in which the factors considered are fixed, and the effect of the other factors is expressed as deviation from the general average (μ).
The model used was: y_ijklm = μ + Sex_i + Bt_j + Mu_k + Ft_l + (Sex×Ft)_il + (Bt×Ft)_jl + (Mu×Ft)_kl + ε_ijklm, where y_ijklm is the observation for the l-th fibre type of the k-th muscle in the m-th subject of the i-th sex, born of the j-th birth type; Sex_i is the fixed effect of the i-th sex (i = 1, 2); Bt_j is the fixed effect of the j-th birth type (j = 1, 2, 3, 4); Mu_k is the fixed effect of the k-th muscle (k = 1, 2); Ft_l is the fixed effect of the l-th fibre type (l = 1, 2, 3); (Sex×Ft)_il, (Bt×Ft)_jl and (Mu×Ft)_kl are the interactions of sex, birth type and muscle, respectively, with fibre type; and ε_ijklm is the residual error. Significance between mean values was evaluated using Student's t-test.
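The paper fitted this model with SAS Proc GLM; a minimal equivalent sketch in Python's statsmodels, assuming a hypothetical long-format data file with one row per measured fibre (file and column names are invented for illustration), might look as follows:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format dataset: columns area (fibre area, um^2),
# sex, bt (birth type), mu (muscle), ft (fibre type).
df = pd.read_csv("fibre_morphometry.csv")

# Fixed-effects linear model mirroring the paper's GLM:
# y = mu + Sex + Bt + Mu + Ft + Sex:Ft + Bt:Ft + Mu:Ft + error
fit = smf.ols(
    "area ~ C(sex) + C(bt) + C(mu) + C(ft)"
    " + C(sex):C(ft) + C(bt):C(ft) + C(mu):C(ft)",
    data=df,
).fit()

# A type II ANOVA table gives an F-test for each fixed effect and interaction
print(sm.stats.anova_lm(fit, typ=2))
```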
Results
The birth weight was higher in lambs born by single birth than in lambs born by multiple births. Lambs born by single birth and slaughtered at 60 days after birth weighed more than those born by twin birth (5036.4 g vs. 4243.5 g; P < 0.05), triplet birth (5036.4 g vs. 2855.7 g; P < 0.01) and quadruplet birth (5036.4 g vs. 1986.3 g; P < 0.01). A similar trend was noticed at 120 days after birth. Lambs born by single and multiple births and slaughtered at 60 days showed a similar average daily weight gain, even if lambs born by triplet birth showed a higher increment than those born by other multiple births or single birth. At 120 days, the daily weight gain was higher in lambs born by triplet birth than in those born by twin birth (112 g/d vs. 68.4 g/d; P < 0.05). Lambs slaughtered at 60 days and fed mother's milk only showed a similar live weight for all birth types considered, whereas lambs slaughtered at 120 days and born by triplet birth showed higher weights than those born by twin birth (22.12 kg vs. 18.58 kg; P < 0.05) and single birth (22.12 kg vs. 19.08 kg; P < 0.05). The histochemical identification of muscle fibre types in the lambs was based on combined m-ATPase reactions after acid (pH range 4.35-4.4) and alkaline (pH range 10.3-10.5) pre-incubations in the St and Ld muscles. The muscle fibre types identified in the lambs could be classified into three types (Plate I, Figs 1-6 and Plate II, Fig. 7): type I or SO (slow-twitch oxidative) and type IIB or FG (fast-twitch glycolytic) fibres were moderately alkaline and acid-negative, and type IIA or FOG (fast-twitch oxidative-glycolytic) fibres were highly alkaline and acid-negative.
In the lambs, the alkaline fibres of type IIA were highly oxidative and highly glycolytic, thus corresponding to type FOG fibres, although some of these fibres had weak glycolytic activity. The alkaline fibres of type IIB were mainly oxidative-negative and highly glycolytic, corresponding to type FG fibres, although some of them had high oxidative activity. The percentages of FOG fibres in the Ld and St muscles were 45% and 38%, respectively, at 60 days and 52% and 44%, respectively, at 120 days. The FOG fibres were characterized by a more versatile oxidative-glycolytic metabolism, according to the necessity of obtaining energy for contraction. At 120 days, an increased number of FOG fibres was noticed in the St and Ld muscles (+6% and +7%, respectively), accompanied by a reduction of SO fibres (−5% and −5%, respectively). We studied the fibre types in males and females in order to detect differences related to birth type. In the first group of lambs (60 days), the females had larger fibres than the males, whereas the opposite was observed in the other group (120 days). The analysis of variance showed significant interactions between muscle × fibre type, birth type × fibre type, and sex × fibre type (Table 1). The difference between females and males in fibre area at 60 days was 8% for FG (1095.02 vs. 1008.27, P < 0.001), 5% for FOG (744.30 vs. 705.28, P < 0.01) and 18% for SO (987.33 vs. 807.18, P < 0.001). At 120 days, the males had larger fibres than the females; the differences between males and females were 20% for FG (1294.45 vs. 1029.84, P < 0.001), 18% for FOG (956.12 vs. 777.07, P < 0.001) and 11% for SO (1173.13 vs. 1036.16, P < 0.001) (Table 2). With respect to birth type, lambs born by single birth and slaughtered at 60 days had larger fibres than those born by multiple births. In particular, the difference in fibre area between single and quadruplet birth was 28% for FG (1236.91 vs. 887.77, P < 0.001), 24% for FOG (812.83 vs. 616.68, P < 0.001) and 12% for SO (943.56 vs. 830.01, P < 0.001) (Table 3). Lambs born by twin and triplet birth and slaughtered at 120 days had larger fibres than those born by single birth. The differences in fibre area between twin and single births were non-significant for FG, but were 26% for FOG (901.10 vs. 662.24, P < 0.001) and 15% for SO (1052.2 vs. 877.74, P < 0.001). The differences in fibre area between triplet and single birth were likewise non-significant for FG, in contrast to FOG and SO.
Discussion
The present study demonstrates that the distribution and size of the different muscle fibre types are affected by species, breed, muscle type, sex and birth type, and that they in turn seem to affect the quality of meat. The results of this study show that lambs born by single birth and fed mother's milk have higher weights and larger fibres than those born by multiple births. Ruttle (1971) demonstrated that type of birth had the greatest influence on the birth and weaning weight of early weaned lambs: lambs born as singles weighed more at birth than lambs born as twins and triplets. Besides, lambs weaned at 3 months of age were heavier than lambs weaned at 2 months, and this difference was highly significant. Cimen (2006) indicated that poor nutrition (especially for dams with twin lambs) during early postnatal life may impose a permanent limitation on the fibre production ability and fibre diameter of the lamb. After weaning, muscle fibre development varied in relation to birth type: FOG and SO fibres developed more in triplet births than in single births. The effect was due to the limited amount of milk available to these lambs. This result suggests that the effect of birth type appears after 60 days of development, when weaning took place. Muscle function is known to influence the final percentage of each muscle fibre type.
Our study highlights that the Ld muscle shows a high percentage of FOG fibres. Ld is the intermediate and largest of the continuations of the sacrospinalis. According to Briand et al. (1981), this muscle exhibits both high oxidative and high glycolytic activity. In the St muscle at 120 days it was possible to notice an increase of FOG fibres and a reduction of FG and SO fibres. Some studies (Ashmore et al. 1972; Suzuki et al. 1988) suggest that in sheep the percentage and distribution of the different types of fibres, classified by means of the m-ATPase technique, are genetically determined in the semitendinosus, triceps brachii and abdominal cutaneous muscles. However, the percentage of the m-ATPase fibre types changed after birth in the quadriceps muscle (White et al. 1978) and in the longissimus muscle (Moody et al. 1980). Velotto et al. (2005) studied the distribution of the three fibre types in two muscles of the Gentile di Puglia (caput longum m. tricipitis brachii and m. psoas major) and confirmed the high percentage of FOG fibres in both muscles considered. Our results show that the percentages of fibres I, IIA and IIB in the longissimus vary between day 60 and day 120 of development. Suzuki and Cassens (1983) analyzed the development of fibre percentages in the growing serratus ventralis thoracis muscle of sheep. According to these authors, the number of type II fibres (IIA plus IIB) decreases from birth (80%) until 4 weeks after birth (65%) and remains constant until the end of development.
However, type I fibres increase in number from birth (10%) until 4 weeks of age (35%), after which their percentage is also constant. Our results in the longissimus dorsi muscle show that the percentage of IIA fibres increased between 60 and 120 days after birth, whereas the percentage of type IIB fibres decreased over the same period. Moreover, in our study both type I and type IIB fibres decreased during development.
The difference between our work and that of Suzuki and Cassens (1983) might be due to the fact that we studied different muscles. Some publications (Ashmore et al. 1972; Hawkins et al. 1985) suggest a transformation of the IIA fibres (of small size) into IIB fibres (of larger size) as an explanation for the increase in muscle size during development. We think that the developmental course of oxidative and/or glycolytic fibres should be specifically tested in future work.
Some studies (Solomon et al. 1981) described different increases in the percentages of certain aerobic fibres in the ovine longissimus muscle, and at 8 weeks after birth, Whipple and Koohmaraie (1992) found 48.6% oxidative fibres and 41.6% glycolytic fibres in this muscle. With respect to the size of the fibre types, our results show that in the lambs slaughtered at 60 days the fibre dimensions in the females were larger than in the males, whereas the opposite was observed for the lambs slaughtered at 120 days. It has been suggested (Hawkins et al. 1985) that the increase in carcass fat content is related to an increase in red fibre size; that study also related the size of the muscle fibres to the slaughter weight, breed and sex of the animals. Suzuki et al. (1988) proposed that in sheep the size of the type I fibres is similar to the size of the type IIB fibres in the hip and thigh muscles, as well as in other muscles (Suzuki 1971a,b; Suzuki and Cassens 1983). They also indicated that in sheep the glycolytic (white) fibres are not always bigger than the oxidative (red) fibres.
Our results confirm the large size of the IIB and I fibres and the smaller size of the IIA fibres. It is also noticeable that all three fibre types identified with the m-ATPase technique significantly increased in size between 60 and 120 days after birth. In summary, with the use of an appropriate m-ATPase technique, the two fast fibre types IIA and IIB can be separated histochemically in the skeletal muscle of lambs, even at early stages of postnatal development. McCoard et al. (2001) observed that twin neonate lambs sacrificed at 20 days had lower body weights and muscle weights compared with single-born lambs. The lower muscle weights in twins were associated with smaller myofibre cross-sectional areas and lower total nuclei numbers and myogenic precursor cell numbers per muscle in selected hind-limb muscles. These results indicate that myofibre hypertrophy in late gestation and early postnatal life is related to myogenic precursor cell number, which may have important implications for the growth potential of the growth-restricted fetus. In agreement with that study, our results highlight that at 60 days the lambs born by twin and multiple births have smaller fibres than the lambs born by single birth. Fibre area in lambs increases (P < 0.01) with age in both oxidative and glycolytic fibres (Whipple and Koohmaraie 1992). Our study demonstrates that the Ld muscle showed a significantly greater development of the FOG fibre type than the St muscle at 60 and 120 days, while the opposite was observed for the FG and SO fibre types. The present research shows the evolution of muscle fibre characteristics between 60 and 120 days of life for lambs born by single and multiple births. The significant interactions among the considered factors highlighted by the analysis of variance demonstrate that age, sex and birth type influence the dimensions of the fibres and consequently the meat quality. Meat quality is a term used to describe a range of attributes of meat. However, it is now becoming clear that variation in other factors, such as the muscle fibre type composition and the buffer capacity of the muscle, together with the breed and nutritional status of the animals, may also contribute to the observed variations in meat tenderness (Maltin et al. 2003).
Table 2. Mean value (avg) and variation coefficient (v.c., %) of morphometric characteristics of fibre types in males and females as related to age.

The fibre dimensions in Ld at 60 and 120 days were larger than in St (Table 4). At 60 and 120 days, no significant differences were noticed between the Ld and St muscles in fibre area for SO. FG fibres were larger in the Ld muscle at 60 days; however, the opposite was observed at 120 days.
Table 3. Mean value (avg) and variation coefficient (v.c., %) of morphometric characteristics of fibre types for birth type as related to age. FG = fast glycolytic fibres; FOG = fast glycolytic-oxidative fibres; SO = slow oxidative fibres.
Table 4. Mean value (avg) and variation coefficient (v.c., %) of morphometric characteristics of fibre types in the muscles under study.
A Gap Between Asthma Guidelines and Management for Adolescents and Young Adults
INTRODUCTION
Most patients with asthma have mild or moderate disease and can be managed in primary care; with currently available medications, most can be treated effectively.1,2 The long-term goal of asthma treatment is to achieve control of symptoms and maintain normal activity levels.3 It is therefore important to regularly monitor symptom control, risk factors, and response to treatment through follow-up visits performed by a suitable health care provider at an appropriate level of care.3 Swedish recommendations on asthma management are similar to the Global Initiative for Asthma (GINA) guidelines.4,5 At around 18 years old, management involves a transition from pediatric to adult health care.6 Our recent qualitative study showed that young adults with severe asthma felt left out of the system during the transition from pediatric to adult health care.7 Moreover, they experienced fewer or no follow-up visits in adult health care. Thus, it is relevant to investigate whether the transition to adult health care influences asthma-related health care consumption in a wider group. It is also important to explore how the process of transition affects pharmacological dispensation, because nonadherence to therapy is particularly common during this period.8,9 Medication adherence is the cornerstone for improving the patient's health-related quality of life, and improved adherence can lead to decreased asthma morbidity and mortality.10 The aim of this study was therefore to investigate asthma-related health care consumption and pharmacological dispensation during the transition from pediatric to adult health care. A longitudinal approach was used to follow different asthma phenotypes during the entire transition process.
Study design
During the period 1994 to 1996, the parents of all newborns living in predefined areas of Stockholm, the capital of Sweden, including inner-city, urban, and suburban districts, were asked to participate in the longitudinal population-based birth cohort BAMSE (Barn/Child, Allergy, Milieu, Stockholm, Epidemiology).11,12 The ongoing birth cohort includes 4089 participants, who have been followed since birth with repeated follow-ups. At age 2 months, a baseline questionnaire was answered. When the participants were approximately aged 1, 2, 4, 8, 12, and 16 years, parents completed follow-up questionnaires to collect information about symptoms related to asthma and other allergic diseases, lifestyle factors, and treatment of asthma. At 12, 16, and 24 years, participants were also asked to complete a questionnaire themselves. In addition to the questionnaires, the participants were invited to undergo clinical examinations, including blood sampling and lung function measurement, at approximately ages 4, 8, 16, and 24 years.
The study population consisted of 1808 participants who responded to the questionnaires and lived in the Stockholm region at both the 16- and 24-year follow-ups (Figure 1). The mean age was 16.5 years at the 16-year follow-up and 22.4 years at the 24-year follow-up. For analyses related to the clinical examinations, participants with a valid spirometry measure were included.
Lung function exposure assessment
At the clinical examinations carried out at approximately ages 16 and 24 years, lung function was measured through spirometry using a Jaeger MasterScreen-IOS system (Carefusion Technologies, San Diego, Calif). All subjects performed repeated maximal expiratory flow volume measurements.13 The highest values of forced vital capacity (FVC) and FEV1 were extracted and used for analysis in accordance with the European Respiratory Society and American Thoracic Society criteria.14 FEV1/FVC data were converted to z scores on the basis of the Global Lung Function Initiative (GLI) reference values.
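GLI z scores follow the LMS (lambda-mu-sigma) form; a brief sketch of the conversion and of a cohort-internal lower limit of normal is given below (the L/M/S values and z scores shown are made up for illustration; real values come from the GLI lookup tables for a given sex, age, and height):

```python
import numpy as np

def lms_zscore(measured, L, M, S):
    """LMS (lambda-mu-sigma) z-score, the form used by the GLI reference equations."""
    return ((measured / M) ** L - 1.0) / (L * S)

# Cohort-internal lower limit of normal: the 5th percentile of FEV1/FVC
# z scores among never-asthmatic participants (values here are invented).
z_never_asthma = np.array([0.3, -0.5, 1.1, -1.2, 0.0, -0.8, 0.6])
lln = np.percentile(z_never_asthma, 5)

z_subject = lms_zscore(measured=0.72, L=1.15, M=0.84, S=0.065)
airflow_obstruction = z_subject < lln
print(z_subject, lln, airflow_obstruction)
```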
Asthma phenotypes-Exposure definitions
Exposure definitions were based on questionnaire and clinical data from the 16- and 24-year follow-ups. Current asthma was defined as fulfilling at least 2 of the following 3 criteria16: symptoms of wheeze and/or breathing difficulties in the last 12 months, ever doctor's diagnosis of asthma, and/or asthma medication occasionally or regularly in the last 12 months. Persistent asthma was defined as fulfilling the definition of current asthma at both the 16- and 24-year follow-ups.
Allergic asthma was defined as a combination of asthma and IgE sensitization to inhalant allergens (cat, dog, horse, and/or house-dust mite, timothy grass, birch, mugwort, and/or mold).17 Severe asthma was based on European Respiratory Society and American Thoracic Society and GINA guidelines,1,3,16 defined as asthma with high daily doses of inhaled corticosteroids (ICSs). Information on pharmacy-dispensed asthma medication within the 18 months before each follow-up was obtained by linkage to the Swedish Prescribed Drug Register.18 High doses were identified when a participant was dispensed at least 800 μg budesonide or equivalent, at least 500 μg fluticasone, or fixed combinations of ICSs and long-acting β2-agonists (LABAs), and was dispensed LABAs and/or leukotriene receptor antagonists at least once to prevent the asthma from becoming or remaining uncontrolled despite therapy. Uncontrolled asthma was defined as at least 1 of the following 4 alternatives on the basis of data from the 16- or 24-year follow-ups: (1) uncontrolled asthma based on the modified GINA definition3,16; (2) having taken cortisone tablets dissolved in water for asthma or respiratory symptoms 3 days or more in a row; (3) having sought acute medical care because of respiratory symptoms; and (4) FEV1 below 80% of predicted.
Airflow obstruction was defined as an FEV1/FVC ratio z score below the lower limit of normal, defined as the lower fifth percentile in the never-asthmatic population.
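These definitions are rule-based and can be summarized as simple predicates; the following sketch (all variable names are hypothetical) mirrors the 2-of-3 rule for current asthma and two of the derived phenotypes:

```python
def current_asthma(wheeze_12m: bool, doctor_dx: bool, meds_12m: bool) -> bool:
    """Study definition of current asthma: at least 2 of the 3 criteria."""
    return wheeze_12m + doctor_dx + meds_12m >= 2

def persistent_asthma(current_16y: bool, current_24y: bool) -> bool:
    """Current asthma at both the 16- and 24-year follow-ups."""
    return current_16y and current_24y

def severe_asthma(high_dose_ics: bool, laba_or_ltra: bool,
                  uncontrolled: bool) -> bool:
    """High daily ICS dose plus LABA and/or LTRA, despite which the
    asthma is (or risks becoming) uncontrolled."""
    return high_dose_ics and laba_or_ltra and uncontrolled

print(current_asthma(wheeze_12m=True, doctor_dx=False, meds_12m=True))  # True
```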
Data sources of outcome assessment
A flowchart is shown in Figure 1 and presents how questionnaire and clinical data from the 16- and 24-year follow-ups were linked to mandatory Swedish health registries between 2008 and 2018: in total, an 8-year period (based on individual age; participants were born between 1994 and 1996) covering 4 years before and 4 years after age 18 years, respectively. This period is hereafter denoted "before and after age 18 years." A timeline with study periods and data collection is presented in Figure E1 in this article's Online Repository at www.jaci-inpractice.org. Data on asthma-related health care consumption were obtained from the Stockholm Regional Healthcare Data Warehouse, Vårdanalysdatabasen (VAL).19 The VAL database includes complete data on all health care consultations in primary and specialist care, all hospitalizations and medical procedures, and diagnoses based on the International Classification of Diseases, Tenth Revision (ICD-10).20 For each health care consultation, a maximum of 15 diagnoses based on the ICD-10 can be registered. One diagnosis is assigned as the main condition, whereas the others are secondary.21 We identified participants with physician-diagnosed asthma, ICD-10 codes J45 and/or J46, as main or secondary diagnosis. With data linked to the personal identity number, it is possible to follow each individual over time.22 Information on dispensed asthma medications was obtained by linkage to the national Swedish Prescribed Drug Register using personal identity numbers.18,22,23 The medications included were the following, classified in accordance with the Anatomical Therapeutic Chemical Classification System24: short-acting β2-agonists (SABAs), ICSs (R03BA), fixed combinations of ICSs and LABAs (R03AK), and leukotriene receptor antagonists (R03DC).
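As an illustration of the record-linkage and filtering logic described above (a sketch only: the file layout and column names are invented, and the SABA ATC code is omitted because it is elided in the source text):

```python
import pandas as pd

# Hypothetical extracts of the two registries
visits = pd.read_csv("val_consultations.csv")   # VAL: up to 15 ICD-10 codes/visit
drugs = pd.read_csv("dispensed_drugs.csv")      # Swedish Prescribed Drug Register

# Asthma-related consultations: ICD-10 J45 or J46 as main or secondary diagnosis
icd_cols = [f"icd10_{i}" for i in range(1, 16)]
is_asthma = visits[icd_cols].apply(
    lambda row: row.astype(str).str[:3].isin(["J45", "J46"]).any(), axis=1
)
asthma_visits = visits[is_asthma]

# Flag dispensations by ATC prefix
for label, prefix in {"ICS": "R03BA", "ICS_LABA": "R03AK", "LTRA": "R03DC"}.items():
    drugs[label] = drugs["atc_code"].str.startswith(prefix)
```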
Covariates
Information on covariates was obtained from the baseline questionnaire (sex, mother's age at birth, parent born outside Sweden, parental allergic disease, parental education, and parental smoking).
Statistical analysis
Tests of proportions (categorical variables) and the Wilcoxon signed-rank test (continuous variables) were used to study differences between groups (background characteristics, consultations, levels of care, medical procedures, and dispensed asthma medications). The two-sample Wilcoxon rank-sum test was used to study sex differences.
The McNemar test was used to evaluate differences in asthma control over time. P values of less than .05 were considered statistically significant.
The association between having had a consultation after age 18 years and selected asthma phenotypes (allergic asthma, asthma and airflow obstruction, asthma with high daily doses of ICSs, and severe asthma) was analyzed using a logistic regression model. Potential confounders (sex and socioeconomic status) were selected a priori from the previous literature, and associations were expressed as odds ratios (ORs) with 95% CIs.
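A minimal sketch of the adjusted-OR computation (the study used Stata; this Python version with hypothetical variable names is illustrative only):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per participant with persistent asthma.
# visit_after18 = 1 if >=1 asthma-related consultation after age 18.
df = pd.read_csv("persistent_asthma.csv")

fit = smf.logit("visit_after18 ~ high_dose_ics + C(sex) + C(ses)", data=df).fit()

ci = fit.conf_int()  # columns 0 and 1: CI bounds on the log-odds scale
or_table = np.exp(pd.DataFrame({"OR": fit.params,
                                "CI_low": ci[0], "CI_high": ci[1]}))
print(or_table)      # exponentiated coefficients = adjusted ORs with 95% CIs
```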
For the main analyses, we used ICD-10 codes J45 and/or J46. A sensitivity analysis including all ICD-10 J codes ("Diseases of the respiratory system") was performed to assess potential underreporting.
All analyses were performed with the STATA statistical software (release 14.2; College Station, Texas).
The study was approved by the Regional Ethical Review Board in Stockholm, Sweden.All participants and parents provided informed consent to participate in the study.
Health care consumption
Asthma at the 16-year follow-up. At the 16-year follow-up, 14% (n = 253) of the adolescents fulfilled the study definition of current asthma (Table I). Of these, 62% (n = 157) had allergic asthma, 7% (n = 18) had airflow obstruction (FEV1/FVC z score < lower limit of normal), 24% (n = 60) were dispensed high daily doses of ICSs or fixed combinations of ICSs and LABAs, and 5% (n = 12) fulfilled the definition of severe asthma (Table II).
In the 4-year period before their 18th birthday, 32% (82 of 253) of the adolescents had at least 1 asthma-related consultation, compared with 27% (68 of 253) in the following 4-year period (Table I). The mean number of consultations decreased from 1.2 before age 18 years to 0.6 after age 18 years (P < .01). This relationship was seen for all asthma phenotypes (Table II). In the sensitivity analyses including consultations for all diagnoses of diseases of the respiratory system, the mean number of consultations before and after age 18 years was 3.0 versus 2.0 (P < .01) (see Table E2 in this article's Online Repository at www.jaci-inpractice.org).
Asthma at the 24-year follow-up. At the 24-year follow-up, 14% (n = 248) of the young adults fulfilled the study definition of current asthma.
Persistent asthma. Eight percent fulfilled the criteria for persistent asthma (Table I). Before age 18 years, 39% (58 of 147) of these young adults had 1 or more asthma-related consultations, similar to 37% (55 of 147) after age 18 years. Only 2% (3 of 147) had yearly consultations during the entire study period of 8 years. The mean number of consultations decreased from 1.6 before age 18 years to 1.0 after age 18 years (P = .02).
Figure 2 shows the number of consultations before and after age 18 years, respectively, divided by level of care. The most common combination was having no consultation either before or after age 18 years, and the second most common was attending specialist care before age 18 years but having no consultation after age 18 years. After age 18 years, the mean number of consultations in primary care increased significantly and the number in specialist care decreased (Table I). The sensitivity analyses showed similar results (see Table E2).
Uncontrolled asthma was found among 57% (80 of 147) at the 16-year follow-up, and among 72% (103 of 147) at the 24-year follow-up (P < .01). There were 2 registered emergency room visits before age 18 years, and 4 after age 18 years. During the study period of 8 years, there was 1 registered hospitalization with asthma as the main diagnosis, whereas 9 of the young adults had hospitalizations where asthma was a secondary diagnosis (8 on 1 occasion and 1 on 2 occasions).
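The change in the proportion with uncontrolled asthma among the same 147 individuals is a paired comparison of proportions; the paper reports tests of proportions, and McNemar's test is a natural choice for paired binary outcomes. A minimal sketch with illustrative data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired indicators of uncontrolled asthma for the same
# participants at the 16-y and 24-y follow-ups (1 = uncontrolled).
at_16y = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])
at_24y = np.array([1, 1, 1, 1, 0, 1, 1, 1, 0, 1])

# 2x2 table of (16-y status) x (24-y status) for McNemar's paired test.
table = np.array([
    [np.sum((at_16y == 0) & (at_24y == 0)), np.sum((at_16y == 0) & (at_24y == 1))],
    [np.sum((at_16y == 1) & (at_24y == 0)), np.sum((at_16y == 1) & (at_24y == 1))],
])
result = mcnemar(table, exact=True)
print(f"McNemar statistic = {result.statistic}, P = {result.pvalue:.3f}")
```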
Of all registered consultations during the study period, indirect contacts, for instance, by telephone and mail, accounted for a total of 4% (17 of 382).
Analyses of factors associated with having a consultation after age 18 years among those with persistent asthma, by asthma phenotype (allergic asthma, asthma and airflow obstruction, asthma with high daily doses of ICSs, and severe asthma), showed that asthma with high daily doses of ICSs was associated with increased odds of having a consultation (ORadj = 2.6; 95% CI, 1.3-5.6) (Table III). Increased odds were also seen for severe asthma (ORcrude = 3.9; 95% CI, 1.0-16.0), but no significant association was seen in the adjusted model (ORadj = 3.9; 95% CI, 0.9-16.1). No association was seen for asthma and airflow obstruction.
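Table III reports crude and adjusted odds ratios, adjusted for sex and socioeconomic status per the table footnote. A minimal sketch of such a logistic model, with hypothetical column names and illustrative data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per young adult with persistent asthma.
# Column names are illustrative, not from the study's actual dataset.
df = pd.DataFrame({
    "consult_after_18": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],  # outcome
    "high_dose_ics": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],     # phenotype exposure
    "male": [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "low_ses": [0, 1, 1, 0, 1, 0, 1, 0, 0, 0],
})

# Crude model (exposure only), then adjusted for sex and socioeconomic status.
crude = smf.logit("consult_after_18 ~ high_dose_ics", data=df).fit(disp=0)
adjusted = smf.logit("consult_after_18 ~ high_dose_ics + male + low_ses",
                     data=df).fit(disp=0)

# Odds ratios are the exponentiated coefficients.
print("OR_crude =", np.exp(crude.params["high_dose_ics"]))
print("OR_adj   =", np.exp(adjusted.params["high_dose_ics"]))
```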
The mean number of registered spirometry tests in VAL decreased significantly, from 0.27 before age 18 years to 0.16 after age 18 years (P < .01). One or more spirometry tests were registered among 27% (40 of 147) of the young adults before age 18 years, and among 16% (24 of 147) after age 18 years.
Figure 3 shows the prevalence of current asthma, and the mean number of yearly consultations, in relation to sex. Males had a higher mean number of consultations before age 18 years (males: 2.1, females: 1.2, P = .25), but after age 18 years, no difference was seen (males: 0.9, females: 1.1, P = .90).
The number of dispensed asthma medicines decreased after age 18 years for all asthma phenotypes, except severe asthma (Table IV). Among those with persistent asthma, at least 1 dispensation of SABA before age 18 years was found for 70% (103 of 147), compared with 50% (73 of 147) after age 18 years. The average number was 2.8 before age 18 years and 2.1 after age 18 years (P < .01) (Table V). Only 3% (4 of 147) had a regular dispensation of SABA once a year during the entire study period of 8 years. At least 1 dispensation of any ICS before age 18 years was found for 73% (107 of 147), compared with 50% (74 of 147) after age 18 years. The mean number of dispensed ICSs was 3.1 before age 18 years and 2.1 after age 18 years (P < .01) (Table V). Only 3% (5 of 147) had a regular dispensation of any ICS once a year during the entire 8-year period.
DISCUSSION
This longitudinal population-based birth cohort, investigating different asthma phenotypes in relation to asthma-related health care consumption and pharmacological dispensation during the transition from pediatric to adult health care, showed that there is a gap between asthma guidelines and actual management. Almost two-thirds of the young adults with persistent asthma had not had any follow-up visit after age 18 years. For all asthma phenotypes, health care consultations were fewer than recommended in guidelines, and their frequency decreased after the transition. The dispensations of asthma medications decreased after the transition, even for the participants with severe asthma. For all asthma phenotypes, almost no one had dispensed regular asthma medications during the 8-year period.
Few previous studies have addressed the transition from pediatric to adult health care for patients with asthma. However, a recent French observational study characterized changes in asthma care in adult patients with persistent asthma, and found that the number of visits per year to specialist care increased with time during a 10-year period, whereas the number of visits to primary care decreased.25 In the present study, the opposite was seen regarding level of care. The result was expected, given that Swedish primary care is responsible for providing basic medical treatment.26 However, a large proportion of the participants attending specialist care before age 18 years had no consultation either in primary or in specialist care after age 18 years. Patients who are managed well in primary care can remain with their primary care physician, ensuring continuity, but most adolescents with asthma requiring a tertiary level of care would be expected to need specialist care as adults.8 One of few published studies examining the impact of randomized referral to either primary or specialist care, and risk factors for deterioration during the transition from pediatric to adult health care, showed that mild/moderate asthma was managed equally effectively regardless of level of care.27 A recent US study assessed clinician-reported adherence to asthma guideline recommendations, and found that agreement with and adherence to guidelines was higher for specialty physicians than for primary care physicians, and overall low adherence with, for example, use of written asthma action plans and medical procedures, such as spirometry.28 In the present study, the number of registered spirometry tests was very low. A single measure of lung function may not provide a true estimate of an individual's risk of obstructive disease later.29 However, repeated measurements may reveal a persistent reduction or rapid decline in lung function over time, either of which is more likely to be associated with an increased risk of chronic obstructive pulmonary disease in early adulthood. It is therefore important that adolescents and young adults with asthma have regular follow-up visits, even if their asthma is mild.8

A recent Swedish observational cohort study found that 1 of 5 adult patients with severe asthma had visited specialist care because of asthma during the course of a year.30 Furthermore, they showed that one-third of patients with asthma, irrespective of severity, had visited primary care. The authors discussed that many patients probably did not have regular visits but were managed by prolonged prescription of asthma medication through indirect consultations. Our results showed, for all asthma phenotypes, low numbers of dispensed asthma medications, that very few had dispensed regular asthma medications, and that more than two-thirds among those with persistent asthma had uncontrolled asthma at the 24-year follow-up. This is unsatisfactory, because the long-term goal of asthma management is to achieve control of symptoms through, for example, treatment. Previous studies have shown that adherence to ICS is poor, leaving patients exposed to the risks of SABA-only treatment.31,32 According to the GINA guidelines' new recommendations, treatment of asthma with SABA alone is no longer suggested for adults and adolescents.33 This supports the importance of consultation and increased understanding of asthma and asthma management.34
In the present study, among those with persistent asthma, high daily doses of ICSs indicated increased odds of having had a consultation after age 18 years. However, almost two-thirds of these young adults had no consultation after age 18 years. Severe asthma also indicated increased odds of having had a consultation, but the adjusted model did not show a significant association. Results from a recent cohort showed that the proportion of children with severe asthma decreased steadily with increasing age, and that approximately half of those with severe asthma in childhood resolved during adolescence.35 However, a recent cross-sectional study discussed that severe asthma symptoms, poor lung function, and higher airway hypersensitiveness in childhood are predictors of persistence of childhood asthma to adulthood.36 These groups should therefore, as suggested in guidelines, have more frequent monitoring during and after transition to adult health care.
Strengths and Limitations
Important strengths of the present study include the prospective and population-based design of the BAMSE cohort, and the large and well-characterized study sample. Another strength is the use of solid and unique data through linkage to mandatory Swedish health registries, with high quality and coverage. The national Swedish Prescribed Drug Register provides complete data on the number of individuals exposed to dispensed prescribed medications in the Swedish population.23 The regional VAL database has approximately 85% coverage of all diagnoses in primary care, more than 90% coverage of utilization in specialist care, and more than 99% coverage of hospital care.37

Information gathered via health care registers may theoretically also have limitations. There is a potential bias in overreporting or underreporting diagnoses and a risk of variability in data quality.38 However, the sensitivity analyses with the wider ICD-10 codes showed comparable results, with a slightly higher mean number of consultations both before and after age 18 years.
The study definition of persistent asthma could be discussed, because symptoms vary over time and in intensity. However, the same results were seen for all asthma phenotypes.
Our results are based on the Swedish population and the generalizability may be questioned because health care systems vary between countries; reimbursement systems, modes of payment, and the uses of specialist services also vary.21 However, the transition from pediatric to adult health care is an international concern, and we believe that our results can be transferred to other countries and populations.
CONCLUSIONS
Almost two-thirds of the young adults with persistent asthma had not had a follow-up visit after age 18 years. For all asthma phenotypes, health care consultations were fewer than recommended in guidelines, and decreased after the transition. The dispensations of asthma medications decreased after the transition, even for the participants with severe asthma. For all asthma phenotypes, almost no one had dispensed regular asthma medications during the 8-year period. This study shows that there is a gap between asthma guidelines and actual management. Increased adherence to current guidelines is required when planning for optimal care of adolescents and young adults, including their transition to adult health care.
FIGURE 2. Number of consultations 4 years before and after age 18 years, divided by level of care, among young adults with persistent asthma (n = 147). *Consultations in both primary and specialist care were merged into specialist care (n = 4).
FIGURE 3. The mean number of yearly consultations among young females (n = 79) and males (n = 68) with persistent asthma, and the prevalence of asthma during this time period. **The prevalence of current asthma is plotted on the basis of prevalence at the 12-, 16-, and 24-year follow-ups.15
FIGURE 1. Study flowchart of the study population (n = 1808) linked to data sources for asthma-related health care consumption and pharmacological dispensation, 4 years before and after 18 years of age. BAMSE, Barn/Child, Allergy, Milieu, Stockholm, Epidemiology.
TABLE I. Number of consultations and level of care, 4 y before and after age 18 y, in relation to asthma at the 16- and 24-y follow-ups. *Fulfilling at least 2 of the following 3 criteria: symptoms of wheeze and/or breathing difficulties in the last 12 mo, ever doctor's diagnosis of asthma, and/or asthma medicine occasionally or regularly in the last 12 mo. †Fulfilling the definition of current asthma at both the 16- and 24-y follow-ups. ‡P values obtained using tests of proportions and the Wilcoxon signed-rank test indicated differences between the 4 y before and after age 18 y for visits and level of care.
TABLE II. Number of consultations and level of care, 4 y before and after age 18 y, in relation to asthma phenotypes at the 16-y follow-up.
TABLE III. Asthma phenotypes at the 16-y follow-up in relation to 1 consultation after age 18 y among young adults with persistent asthma (n = 147). *Adjusted for sex and socioeconomic status.
TABLE IV. Number of dispensed asthma medicines, 4 y before and after age 18 y, in relation to asthma phenotypes at the 16-y follow-up. *A combination of asthma and IgE sensitization to inhalant allergens (cat, dog, horse, and/or house-dust mite, timothy grass, birch, mugwort, and/or mold).
TABLE V. Number of dispensed asthma medicines, 4 y before and after age 18 y, in relation to asthma at the 16- and 24-y follow-ups.
Expanding the armamentarium for the spondyloarthropathies
Ankylosing spondylitis (AS) is a member of the family of spondyloarthropathies, which are inflammatory arthritides largely involving the axial skeleton and commonly accompanied by peripheral arthritis. Genetic factors, particularly the presence of HLA-B27, are major contributors to the susceptibility for AS. Despite some therapeutic advances, the treatment options for patients with AS and related disorders have been limited. Several lines of evidence have led to the hypothesis that patients with AS might benefit from treatment with antagonists of tumor necrosis factor (TNF). Specifically, TNF concentrations are known to be significantly elevated in the synovium of patients with rheumatoid arthritis (RA), in the inflamed gut of patients with inflammatory bowel disease, and in the inflamed sacroiliac joints of patients with AS. The anti-TNF agents have been shown to be of benefit in, and currently have indications for, RA (etanercept, infliximab, adalimumab), Crohn's disease (infliximab), and psoriatic arthritis (etanercept). Because the spondyloarthropathies share pathogenetic mechanisms with the above-specified disease states, studies have been conducted to evaluate the effectiveness of anti-TNF agents in several disorders, including AS. Data from clinical trials so far with infliximab and etanercept show that patients with AS and related disorders achieve significant improvement in clinical signs and symptoms based on validated outcomes measures. Computed tomography and magnetic resonance imaging (MRI) can facilitate the early diagnosis of AS. Studies with infliximab using MRI together with updated scoring methods demonstrated significant decreases in associated spinal inflammation. TNF antagonist therapy is well tolerated in patients with AS, with a side effect profile consistent with the prior experience of patients with RA.
Introduction
The spondyloarthropathies are chronic, autoimmune, inflammatory joint diseases that are second in prevalence to rheumatoid arthritis (RA) among the rheumatic diseases [1,2]. The seronegative (that is, negative for autoantibodies) spondyloarthropathies include ankylosing spondylitis (AS), psoriatic arthritis, Reiter's syndrome (reactive arthritis), arthritis associated with inflammatory bowel disease, and undifferentiated spondyloarthropathies [3]. A special study group/committee of pathologists of the European League Against Rheumatism has recommended the term spondylo-arthritides on the basis of the inflammatory nature of these rheumatic conditions [4]. AS, the prototypical spondyloarthropathy, is an often painful disorder that impairs physical functioning and can lead to lost productivity, loss of employment, and impaired quality of life [5][6][7]. Patients with AS have a 1.5-4-fold increased risk of death from a variety of disorders including circulatory diseases, amyloidosis, fractures of the spine, gastrointestinal diseases, and renal disorders [8][9][10]. Prevalence varies with ethnic origin and is highest among Native Americans living along the Pacific Coast and among Eskimos [11].
Review
Genetic factors are the major contributor to AS, and although the disease seems to be polygenic, the antigen HLA-B27 is present in at least 75% of patients with AS [3,12,13]. AS is believed to result from the generation of cytokines by antigen-stimulated T cells. Pathologic changes consist of an enthesopathy with edema and mononuclear cell infiltration at the contact sites between bones and ligaments or tendons [14]. Synovial tissue of the involved joints demonstrates the proliferation of synovial lining cells, a mononuclear cell infiltrate that can include large numbers of plasma cells, and superficial fibrin deposition [15]. Immunohistochemical techniques show dense cellular infiltrates consisting predominantly of T cells and macrophages in the sacroiliac joints of patients with AS [16]. Large amounts of mRNA specific to tumor necrosis factor (TNF), a proinflammatory cytokine, are found in sites of bony remodeling in these patients, as demonstrated by in situ hybridization analysis (Fig. 1) [16].
Elevated concentrations of TNF mRNA are found in the synovial tissue of patients with RA [17,18], in the inflamed gut of patients with Crohn's disease (CD) [19], and in the inflamed sacroiliac joints of patients with AS [16,20]. TNF antagonists have proved successful in the management of RA and CD. Given that TNF mRNA concentrations are elevated in the sacroiliac joint in patients with AS, it seemed logical to test the hypothesis that TNF antagonists could improve outcomes in patients with AS. This paper briefly reviews the conventional therapies for managing AS and focuses on the evolving role of the TNF antagonists infliximab and etanercept in the management of AS in patients.
Presentation and management of the spondyloarthropathies
The clinical manifestations of AS include inflammation, back pain -which usually presents at the thoraco-lumbar junction -and stiffness. The stiffness is worst first thing in the morning, or with prolonged inactivity, and patients characteristically report improvement with activity. Although AS is traditionally considered a disease of the axial skeleton, arthritis of the appendicular skeleton is also common. Examination demonstrates reduced extension and right and left lateral bending, and flexion is reduced with later disease [21,22]. Patients may also have decreased chest expansion and pain and/or tenderness over the sacroiliac region or the buttocks. Involvement of the hip, knees, and shoulders may also be detected in about 60-80% of patients [23,24]. Although patients with mild AS are often able to maintain full physical function, the presence of moderate-to-severe disease can significantly limit employment, daily activities, and quality of life [6,25].
Until recently, treatment options for AS were limited. This was related to two factors: chronicity of the disease before diagnosis, and lack of therapeutic agents that could provide anything more than symptomatic relief. Traditional treatments have consisted predominantly of physical therapy plus the administration of nonsteroidal anti-inflammatory drugs (NSAIDs) to decrease joint pain and morning stiffness [26,27]. Physical therapy and exercise can significantly improve range of motion and relieve symptoms; regular physical conditioning and back exercises may also help [28]. However, most patients with active disease require regular analgesia, often with NSAIDs [27,29]. Selective cyclo-oxygenase-2 (COX-2) inhibitors, such as celecoxib, are effective analgesics and may decrease the potential risk of NSAID-induced gastrointestinal enteropathy [30]. Chronic administration of systemic corticosteroids is not recommended, because they are rarely effective and serve to promote decreased bone mineral density, although injection of a long-acting corticosteroid into painful sacroiliac joints may relieve pain in patients whose symptoms are refractory to NSAIDs [31]. The disease-controlling antirheumatic therapy or disease-modifying antirheumatic drugs sulfasalazine and possibly methotrexate may have some efficacy, particularly in patients with appendicular involvement [32][33][34].
Because TNF mRNA has been detected in the inflamed gut and sacroiliac joints of patients with chronic inflammatory bowel diseases and spondyloarthropathies, including AS, this suggested an opportunity to investigate the efficacy of TNF antagonists in these diseases [35]. Because adalimumab has only recently become available in the USA, most of the clinical investigations have been conducted with etanercept and infliximab. Currently, in addition to RA, etanercept is approved in the USA for use in psoriatic arthritis, whereas infliximab is indicated for CD [3,36,37].

Figure 1
In situ hybridization analysis demonstrates large amounts of tumor necrosis factor (TNF) mRNA in sites of bony remodeling in a patient with ankylosing spondylitis. TNF is a proinflammatory cytokine. Reproduced with permission from John Wiley & Sons, Inc. [16]. © 1995 American College of Rheumatology.
Etanercept and infliximab: similar yet different
Although etanercept and infliximab are both TNF inhibitors, they differ in several ways. First, etanercept binding is restricted to the trimeric form of soluble TNF, whereas infliximab binds to both monomer and trimer forms. Second, etanercept forms relatively unstable complexes with soluble TNF, resulting in the release of dissociated TNF, whereas infliximab forms stable complexes with soluble TNF. Third, more infliximab molecules bind to transmembrane TNF, and with a higher avidity, than etanercept [38]. Knowledge of these differences in binding characteristics may explain some of the differences seen with infliximab and etanercept in Crohn's disease, a member of the spondyloarthropathy family. For example, infliximab has been shown to be effective in both short-term [39] and long-term [40] clinical trials in CD, and to continue these benefits with maintenance infliximab treatment, but the efficacy of etanercept in CD, in a well-controlled randomized trial, has not been demonstrated [35,41].
In a study of infliximab, 573 patients with active CD who continued infliximab after an initial response to a single infusion were more likely to be in remission at weeks 30 and 54; in contrast, results from the study by Sandborn and colleagues [41] showed no statistically significant difference between the percentage of etanercept-treated patients and that of control-treated patients who showed improvement at 4 weeks on the CD Activity Index (39% versus 45%, respectively; P = 0.763). Factors postulated to be responsible for these differences include the following: the maintenance of stable neutralizing complexes with TNF [38]; the accessibility of involved tissues to the agent depending on the administered dose; the potential role of other cytokines, such as lymphotoxin-α, in certain disease processes; the concentration of TNF in the involved tissues; and the relative sensitivity of affected tissues to the effects of TNF [38]. It is important to state that whereas these differences exist in CD, there are no direct comparisons in CD or in AS. Whether the differences in CD translate to differences in AS is not yet known. We now turn our attention to patients with AS and a review of some of the early and pivotal trial data with these agents.
Etanercept
Gorman and colleagues [42] conducted a double-blind, placebo-controlled trial to evaluate the effectiveness of etanercept in 40 patients with active, inflammatory AS. Patients were randomized to receive etanercept 25 mg subcutaneously twice weekly or placebo for 4 months, with the opportunity to participate in a 6-month open-label extension in which all patients would receive etanercept [43]. The primary outcome measure was a composite treatment response of 20% or more in at least three of five measures of disease activity, which is similar to the criteria defined by the Assessments in Ankylosing Spondylitis (ASAS) Working Group [44] -duration of morning stiffness, degree of nocturnal spinal pain, the Bath Ankylosing Spondylitis Functional Index (BASFI) [45], the patient's global assessment of disease activity, and the joint swelling score -with no worsening in any measure.
At the end of 4 months, treatment response was originally reported in 80% of etanercept-treated patients versus 30% of controls (P = 0.004) [42]. A subsequent reanalysis corrected this to 75% instead of 80%, with a corrected P value of 0.01 by Fisher's exact test (two-tailed) [46]. Other results at 4 weeks included changes in physicians' global assessment of disease activity (-31. This may have been due to spinal ossifications related to prolonged disease. The measure of enthesitis used in this study, the modified Newcastle Enthesis Index [47,48], showed significant improvement after 4 months (-4.5 with etanercept versus -1.5 with placebo, P = 0.001), although the authors suggest that the measure requires further study [42]. In terms of overall efficacy, the authors concluded that etanercept produced a rapid, significant, and sustained response in the initial phase of the study [42].
The 6-month open-label phase that followed showed that etanercept sustained clinical benefit in terms of the percentage of patients achieving an ASAS 20% response up to the end of the study (Fig. 2) [42,43]. Etanercept was well tolerated, with five injection-site reactions (versus one with placebo) and 10 cases of upper respiratory infections (versus 12 with placebo) being the most commonly reported adverse events [42]. Other adverse events included tinnitus and muscle fasciculations of eye and thigh muscles reported in one etanercept-treated patient, and the development of a positive antinuclear antibody titer of 1:80 in one etanercept-treated and one placebo-treated patient.
In another placebo-controlled, double-blind, 24-week trial of etanercept, 30 patients with proven, active AS were randomized to initially receive etanercept 25 mg subcutaneously twice a week or placebo for 6 weeks [49]. After 6 weeks, the placebo-treated patients were crossed over to receive etanercept for 12 weeks, and the etanercept-treated patients were continued on medication for an additional 6 weeks. Primary outcome measures were indices of disease activity including the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) [50], the BASFI, pain, quality of life (measured by the Short Form-36 [SF-36]) [51], and concentrations of CRP [49]. By definition, responders showed a 50% or more improvement in the BASDAI score. Results at 6 weeks showed a 50% BASDAI response in 57% of etanercept-treated patients versus 6% of controls (P = 0.004) [49]. Similar results were reported in the patients who switched to etanercept after 6 weeks of placebo. Significant improvements were also seen in physical function (P < 0.05) and mean concentrations of CRP (P < 0.001). No severe adverse events or major infections were reported.
Infliximab
To evaluate the effectiveness of infliximab in patients with AS, an open-label pilot study was conducted in which 11 patients with AS of short duration (median 5 years) and active disease for at least 3 months were administered three infusions of infliximab 5 mg/kg intravenously at weeks 0, 2, and 6 [52]. Follow-up assessments were made at weeks 1, 2, 4, 6, 8, and 12. Outcome measures included BASDAI, BASFI, pain as measured on a 10-cm visual analog scale (VAS), and the Bath AS Metrology Index (BASMI), which is used to assess spinal mobility [52,53]. Laboratory markers (such as CRP, ESR, and interleukin-6) were monitored, and dynamic magnetic resonance imaging (MRI) was performed in five patients. Of the 11 patients at study entry, 10 [52]. Median interleukin-6 concentrations also declined from 12.4 mg/L at baseline to less than 5 mg/L at week 12. Subjective improvement was noted as early as 1 day after the initial infusion, with the positive effect persisting until week 12. Three of five patients had MRI-revealed spinal inflammation at baseline, which improved in two of the three patients 2-6 weeks after the third infusion. This study gave a very positive early indication that anti-TNF therapy is effective in the management of AS over the short term.
The successful results of the 12-week open-label phase led to an extension phase in which patients were to be given up to three additional infusions of infliximab 5 mg/kg in cases of relapse [54]. The median total observation period for the open-label phase was 39 weeks (range 35-41 weeks); by the end of the study, six patients had received all three additional infliximab infusions. The data from this study, which was the first to report on the long-term (approximately 1 year) management of active AS with infliximab, showed that the improvement elicited by a loading regimen of three infliximab infusions could be maintained for up to about 7 weeks before the first symptoms reappeared [54]. Dosing data suggested that infliximab infusions might be needed every 6 weeks to achieve sustained improvement in patients with active AS.
A placebo-controlled 12-week study involving 70 patients with active AS was conducted to assess the effectiveness of infliximab in AS [55]. The patients were randomized to receive either infliximab 5 mg/kg at weeks 0 (baseline), 2, and 6, or placebo. Efficacy parameters included validated clinical criteria from the ASAS working group [44] (namely BASDAI, BASFI, BASMI, and SF-36 for quality of life). The primary end point was a 50% improvement in disease activity, as determined by BASDAI, at the study's end (week 12). Intention-to-treat analysis showed more patients meeting the primary end point (BASDAI 50%) with infliximab than with placebo (53% versus 9%, P < 0.0001) [55]. Throughout the 12-week treatment period, infliximab was shown to be effective in terms of the percentage of patients achieving response for all the response criteria used: BASDAI 20% (P = 0.001), BASDAI 70% (P = 0.045), ASAS 20% (P = 0.0007), ASAS 50% (P < 0.0001), ASAS partial remission (P = 0.005), BASFI (P < 0.0001), and BASMI (P = 0.0023). Infliximab demonstrated effects on peripheral arthritis and enthesitis. At baseline, 44% of patients in the infliximab group had both conditions active; by week 12, only 17% had peripheral arthritis and 27% had enthesitis. No change was noted in the placebo-treated patients for arthritis (P = 0.249) or enthesitis (P = 0.81). As for the SF-36 data, infliximab significantly improved the physical component score, whereas placebo did not (P < 0.0001).

Figure 2
Percentage of patients in each study group who had a treatment response. A treatment response was defined as 20% or greater improvement in at least three of five outcome measures (duration of morning stiffness, degree of nocturnal spinal pain, the Bath Ankylosing Spondylitis Functional Index, the patient's global assessment of disease activity, and the score for joint swelling). The patients in the etanercept group received etanercept throughout the 10-month study period; those in the placebo group received placebo for 4 months, followed by etanercept for 6 months. The differences between the groups were statistically significant at month 1 (P < 0.001), month 3 (P = 0.03), and month 4 (P = 0.004). During the open-label portion of the trial, there were no statistically significant differences between the two groups. Reproduced with permission from [42]. Copyright © 2002 Massachusetts Medical Society. All rights reserved. Note that subsequent reanalysis of the data at the end of 4 months corrected the P value to 0.001 [46]. (Panel labels: placebo-controlled study; open-label extension.)
The 67 patients in both treatment groups who completed the study were enrolled in a 1-year open-label extension phase in which the controls were treated with infliximab 5 mg/kg intravenously and then received infusions every 6 weeks until week 54 [56]. The magnitude of the response at 12 weeks was sustained until week 54, with improvement reported in every outcome measure that was evaluated: BASDAI, BASFI, pain by VAS, quality of life by SF-36, and CRP [56]. Figure 3 illustrates the percentage of infliximab- and placebo-treated patients who attained 50% BASDAI improvement from baseline through to week 54. Six weeks after the patients initially treated with placebo were switched to infliximab, the response rates were 56% for the original infliximab group, in comparison with 49% for the placebo group (P value not significant), demonstrating rapid response once the placebo group received the active drug [56]. By the end point of the study, 54 patients (more than 75%) had completed the course; 16 patients dropped out. An analysis of completers confirmed a continuous decline in disease activity based on BASDAI (means ± SD are shown): 6.5 ± 1.3 at week 0, 3.2 ± 1.8 at week 12, and 2.5 ± 1.8 at week 54. This study demonstrated that infliximab remains efficacious in active AS over a 1-year treatment period.
Other aspects of infliximab therapy in patients with AS have been investigated. An open-label study by Stone and colleagues (21 patients enrolled; 18 patients evaluated at week 14) was conducted to determine whether infliximab (5 mg/kg infused intravenously over 2 hours at weeks 0, 2, and 6) is an effective treatment for patients with AS who have not responded satisfactorily to conventional therapy, and to identify whether there are any baseline clinical and imaging correlates of response to treatment with infliximab in AS [57]. Measures taken at baseline and with each subsequent visit included nine functional variables (for example BASDAI, BASFI, and Health Assessment Questionnaire), six clinimetrics (for example chest expansion and finger-to-floor distance), and laboratory inflammatory markers (for example ESR, CRP, and haptoglobin); MRI scans before and after infusions were performed in a subset of nine selected consecutive patients. Results showed that by week 14 there was more than 60% improvement in functional variables, selective improvement in clinimetric scores (for example chest expansion, P < 0.021), and significant improvement in inflammatory markers, which were maintained from week 6. MRI findings showed improvement in the patients examined in the imaging cohort and demonstrated a reversal of inflammatory changes with infliximab as early as weeks 2-4. Other key findings were the identification of two groups of responders to infliximab (marked responders and not so marked responders), an absence of correlation between treatment response and baseline inflammatory markers, and a positive benefit with infliximab even in patients with advanced AS (as all patients in this study had high baseline BASDAI scores).
Maksymowych and colleagues, in a prospective observational and inception cohort analysis, evaluated the efficacy and tolerability of infliximab in 21 patients with NSAID-refractory AS seen in both university-based and community-based practice [58]. The patients in this study, who were seen from April 2000 to October 2001, were given infliximab at a dose of 3 mg/kg intravenously at weeks 0, 2, and 6, and at 2-month intervals thereafter. Data collected (at baseline, at week 14, and at the earlier of year 1 or withdrawal) included patient demographics, Bath AS indices (for example BASDAI, BASFI, and BASMI), laboratory markers (for example ESR and CRP), and adverse events or reasons for withdrawal; dynamic MRI with gadolinium enhancement was also done for the first six consecutive patients. Efficacy data from 17 patients assessed at week 14 included the following significant results: mean BASDAI improvement from baseline (P < 0.001), 43% reduction in mean BASFI (P < 0.001), 55% reduction in mean ESR (P < 0.001), 63% reduction in mean CRP (P = 0.01), and reduction in maximal rate of MRI-defined gadolinium augmentation (P = 0.04). The study demonstrated the effectiveness and tolerability of infliximab in this setting.

In addition to its demonstrated efficacy in AS, infliximab has been shown to be effective in patients with other types of spondyloarthritides. Van den Bosch and colleagues evaluated the efficacy and safety of infliximab in a 12-week placebo-controlled clinical trial involving 40 patients with active spondyloarthropathy (AS with axial disease, AS with peripheral arthritis, psoriatic arthritis, undifferentiated spondyloarthropathy) [59]. Patients were randomized to receive either placebo or infliximab 5 mg/kg on weeks 0, 2, and 6. Primary outcome measures were improvements in patient and physician global assessments of disease activity on a 100 mm VAS. Study results showed a statistically significant difference in favor of infliximab for the primary outcome (global disease assessments) and significant reductions in ESR and CRP as early as week 2, sustained up to week 12. For the peripheral disease assessments, infliximab led to significant improvements at week 12 for all outcomes (morning stiffness, P = 0.038; peripheral joint pain, P = 0.002; tender joint count, P = 0.015) except swollen joint count (P value not significant). For axial disease assessments, statistically significant improvements in favor of infliximab were seen for morning stiffness (P = 0.006), spinal pain (P = 0.002), BASDAI (P = 0.002), and BASFI (P = 0.041). Given the limited numbers seen with psoriasis and differences in baseline Psoriasis Area and Severity Index, no conclusions could be drawn for any psoriasis-specific effect. Overall, this study demonstrated that infliximab is efficacious in various spondyloarthropathies, including and other than AS.
Use of MRI to visualize spinal inflammation
Imaging techniques such as computed tomography (CT) and MRI might extend the sensitivity of conventional radiologic procedures in the diagnosis of early AS. CT can detect early bony erosions, whereas MRI is particularly sensitive to inflammatory changes in the soft tissue and bone marrow (Fig. 4) [60]. The sensitivity of MRI techniques can be enhanced with the use of contrast materials such as gadolinium. Fat-saturation techniques such as the Short-Tau Inversion Recovery method can be used to characterize the acute spinal inflammation of active AS [61,62].
On the basis of the MRI results, the acute spinal lesions of AS can be evaluated by the MRI Scoring System for Spinal Inflammation in AS (ASspiMRI), an updated system in which each of 24 vertebrae (C2 to S1) is graded on a six-point scale that measures degree of edema, extent of bone erosions, inflammation, and chronicity [63]. Braun and colleagues employed this system in a study examining the effects of infliximab 5 mg/kg in 20 patients with active AS [62]. The infliximab-treated patients showed a 60% improvement in Short-Tau Inversion Recovery scores over 3 months (P = 0.01), whereas the controls showed a 21% deterioration from baseline (P = 0.5). Improvement in MRI scores with infliximab was significantly correlated with clinical improvement in BASDAI scores (P < 0.03).
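To illustrate how a per-vertebra grading system of this kind aggregates into a single spinal score, here is a minimal sketch; the rubric is reduced to an integer grade of 0-5 per level, so this is illustrative only, not the validated ASspiMRI instrument:

```python
VERTEBRAE = [  # C2 to S1, the 24 levels scored in ASspiMRI-like systems
    "C2", "C3", "C4", "C5", "C6", "C7",
    "T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8", "T9", "T10", "T11", "T12",
    "L1", "L2", "L3", "L4", "L5", "S1",
]

def total_spinal_score(grades: dict[str, int]) -> int:
    """Sum per-vertebra grades (0-5 on a six-point scale); unscored levels count 0."""
    for level, g in grades.items():
        if level not in VERTEBRAE or not 0 <= g <= 5:
            raise ValueError(f"invalid grade {g} for level {level}")
    return sum(grades.get(level, 0) for level in VERTEBRAE)

# Hypothetical baseline vs post-treatment readings for one patient.
baseline = {"T10": 3, "T11": 4, "L1": 2, "L4": 3}
followup = {"T10": 1, "T11": 2, "L1": 1, "L4": 1}
change = total_spinal_score(followup) - total_spinal_score(baseline)
print(f"baseline = {total_spinal_score(baseline)}, "
      f"follow-up = {total_spinal_score(followup)}, change = {change}")
```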
Conclusion
In patients with active AS, anti-TNF therapy with infliximab and etanercept has been shown to result in clinically important benefits, as assessed with validated outcomes measures. MRI studies demonstrate that improvements in clinical parameters are accompanied by decreased structural manifestations of disease. Future trials need to delineate predictors of optimal patient response to anti-TNF therapy, optimum dosing schedules, and the role of combination therapy to enhance clinical benefit. Whether TNF antagonist therapy alters the natural history of AS remains an important research priority. Early therapy could then prevent the associated disability and economic consequences of AS.
Competing interests
PMP is or has been a consultant for Amgen, Wyeth, Centocor, Ortho-McNeil, Pfizer and Merck, and has received grant support from Amgen, Wyeth, Abbott, Centocor, Genentech, Ortho-McNeil, Pfizer and Merck. JB has received honoraria for consultation from Centocor, Schering Plough, Amgen, Wyeth and Abbott.
Acknowledgement
The transcript of the World Class Debate for ACR 2002 has been published electronically in Joint and Bone. This article, and others published in this supplement, serve as a summary of the proceedings as well as a summary of other supportive, poignant research findings (not included in the World Class Debate ACR 2002).
Figure 4
Magnetic resonance images of a normal spine (left) and spinal inflammation in a patient with ankylosing spondylitis (right). Reproduced with permission from Elsevier [60].
Colored knot polynomials for Pretzel knots and links of arbitrary genus
A very simple expression is conjectured for arbitrary colored Jones and HOMFLY polynomials of a rich $(g+1)$-parametric family of Pretzel knots and links. The answer for the Jones and HOMFLY polynomials is fully and explicitly expressed through the Racah matrix of U_q(SU_N), and looks related to a modular transformation of toric conformal block.
It depends on $g+1$ integers $n_0, \ldots, n_g$, the algebraic lengths of the constituent 2-strand braids. Orientation of lines does not matter when one considers the Jones polynomials (not HOMFLY!). Also, these polynomials are defined only for the symmetric representations $[r]$ and, hence, do not change under arbitrary permutations of the parameters $n_i$ (though the knot/link itself has at best the cyclic symmetry $n_i \to n_{i+1}$, and even this is true only for particular orientations). A particular manifestation of this enhanced symmetry has been recently noted in [8].
The answer for the colored Jones polynomials for this entire family can be written in full generality, is wonderfully simple, and is made out of the Racah matrix [9] of $SU_q(2)$ in representation $[r]$ (spin $r/2$). Orthogonality of $S$ implies that … Quantum numbers in these formulas are defined as $[n] = \frac{q^n - q^{-n}}{q - q^{-1}}$.
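A quick symbolic check of these quantum numbers is shown below: a minimal sympy sketch (not from the paper) that defines $[n]$ and verifies the standard identity $[2][n] = [n+1] + [n-1]$:

```python
import sympy as sp

q = sp.symbols("q")

def qnum(k):
    """Quantum number [k] = (q^k - q^(-k)) / (q - q^(-1))."""
    return (q**k - q**(-k)) / (q - q**(-1))

# Check the standard identity [2][n] = [n+1] + [n-1] for a few integer n.
for k in range(2, 7):
    lhs = sp.simplify(qnum(2) * qnum(k))
    rhs = sp.simplify(qnum(k + 1) + qnum(k - 1))
    assert sp.simplify(lhs - rhs) == 0

# [k] reduces to k in the classical limit q -> 1.
print([sp.limit(qnum(k), q, 1) for k in range(1, 6)])  # [1, 2, 3, 4, 5]
```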
The first of the $r+1$ polynomials is just the Jones polynomial for the 2-strand torus knot/link $T[2,n]$. The origin of its orthonormal "satellites" $P^{(n)}_k(q|r)$ with $k = 1, \ldots, r$, and of the entire rotation from the monomial basis $\{\lambda^n_m\}$, is yet unknown.
Naturally, a straightforward generalization exists to the HOMFLY polynomials, which looks like a modular transformation of a toric conformal block, summed over intermediate states $X$ in the loop: …
Here $\dim X$ and $\mathcal{A}$ are the universal $A$-dependent dimension of the representation $X$ (which at $A = q^N$ equals the quantum dimension of the representation $X$ of $SU_q(N)$) and the rescaled Racah matrix, respectively, interpolating between those for $SU_q(N)$ at $A = q^N$. This formula is indeed true [10], at least when all the representations are symmetric or their conjugates: $R_i = [r], \overline{[r]}$. The answer for antisymmetric representations then follows from the general transposition rule [11,12]. It should possess a further continuation à la [13] to superpolynomials, thus providing a β-deformation [14] of the universal Racah matrix.
Eqs. (1) and (7) result from a tedious calculation in [10,15] with the help of evolution [11,16], modernized Reshetikhin-Turaev [3,5] and modular matrix [2,4,6] methods. Of course, a simple conceptual derivation should exist for such a simple and general formula, but it still remains to be found. The value of (1) is independent of the derivation details, and it is high, because this formula alone (together with its lifting to the HOMFLY polynomials in [10]) contains almost all that is currently known about explicit colored knot polynomials beyond torus links: in particular, all the twist and 2-bridge knots are small subsets of the Pretzel family (however, among torus knots with more than two strands, only $T[3,4]$ and $T[3,5]$ belong to it).
Examples: We give them here mostly for the Jones case; for an exhaustive description of the symmetric HOMFLY polynomials for all Pretzel links, see [10]. First of all, we list the first few $S$ and $\mathcal{A}$ matrices for the lowest representations of $SU_q(2)$: … In general, from (3) and [9] one has: … Given these matrices, eq. (1) provides absolutely explicit expressions for all genus-$g$ knots/links in the corresponding representations [15]: $J^{(n_0,\ldots,n_g)}$ …
Similarly, the fundamental HOMFLY from [10] is … where $\chi_m$ and $\Delta_k$ are the quantum dimensions of the representations appearing in the products $[r]\otimes[r]$ and $[r]\otimes\overline{[r]}$, respectively (i.e. restrictions of the Schur functions to the topological locus, described by the hook formulas). One more new parameter is the number $2g_{\parallel}$ of parallel braids ($g_{\parallel} = g_{\uparrow\uparrow} = g_{\downarrow\downarrow}$); the remaining $g_{\uparrow\downarrow} = g + 1 - 2g_{\parallel}$ are antiparallel.
As to practical applications, formula (10) … (upper/lower signs are for odd/even $n$) can look a little too long as compared to the usual expression for the two-strand knots/links $[2, n+1] = (n+1)_1$ (when $a = 1$, $b = 1$) or for the twist knots $(n+2)_2$ or $(|n|+1)_1$ (when $a = 2$, $b = 1$, $n$ odd): a lot of cancellations happen in these special cases. However, in a generic situation such cancellations do not take place, and just the same expression describes the colored Jones polynomials for, say, the Pretzel $(1,1,1,1,1,1,1,1,19,19,19,19,19,19,19)$ with $a = 8$, $b = 7$, $n = 19$, which is about one page long. To avoid possible confusion, we note that the Jones polynomials are unreduced and appear in this formalism in the vertical framing, the one consistent with the orientation independence. Conversion to the topological framing inserts the factor $q^{-8}$ for each vertex in the parallel braids; this means a total $q^{-8(n+1)}$ for $(n+1)_1$, while for the twist knots the total factor is either $q^{16}$ or just 1, depending on whether $tw_k$ is represented as $(-1,-1,2k)$ or as $(1,1,2k-1)$.
Highly efficient one-pot tandem Friedlander annulation and chemo-selective Csp3–H functionalization under calcium catalysis
A highly efficient and regioselective Friedlander synthesis of 2-methyl-3-acyl quinolines is described, which occurs under solvent-free conditions and employs calcium triflate as a sustainable catalyst. For the first time in the literature, these 2-methyl-3-acyl quinolines undergo an in situ chemoselective Csp3–H functionalization to furnish structurally enriched quinoline heterocycles in high yields and with atom and step economy.
Introduction
Rational drug design based on privileged scaffolds is one of the most powerful concepts in modern drug discovery. 1 A quinoline moiety is one of these privileged scaffolds, and it has been extensively explored owing to its broad biological spectrum. 2 For example, quinoline derivatives serve as antimalarial drugs (quinine, chloroquine), 2 anti-inflammatories, 3 antibacterials, have antituberculosis properties, 4 are multifunctional agents for Alzheimer's disease, 5 and find uses as other therapeutic agents. 6 In addition, these moieties are an integral part of many biologically active natural products (Fig. 1). 7 Hence, the development of new synthetic methods for quinoline derivatives is an active area of organic synthesis. Among several synthetic methods available, functionalization of 2-methylquinolines has emerged as a new synthetic technique to address the synthesis of quinoline-based new chemical entities (NCEs). 8 However, most of these methods utilize 2-methylazaarenes as a starting point for functionalization to generate the new libraries. In general, 2-methylquinolines can be synthesized starting from easily available o-acyl anilines and a suitable carbonyl compound through a Friedlander annulation under acidic or basic conditions. 9 To the best of our knowledge, there are only a couple of reports available in which in situ-generated 2-methylquinolines were functionalized. 10,11 Nevertheless, both of these reports were limited to the direct synthesis of styryl quinolines and no chemoselectivity was attained; moreover, the reaction could not even proceed with preformed 2-methyl, 3-acyl quinolines with In(OTf)3. 11 Hence, it is highly desirable to explore the chemoselectivity of these 2-methyl, 3-acyl quinolines through functionalization of their methyl groups to generate useful quinoline derivatives.
On the other hand, one-pot multicomponent reactions (MCRs) have been proven to be efficient alternative green synthetic reactions for some existing classical stepwise reactions. 12 MCRs are well known for the synthesis of complex molecules starting from simple starting materials. Looking at its importance, we have been working on a one-pot, solvent-free/in-water multicomponent approach using calcium triflate as an environmentally benign catalyst. 8i-k,13 As a continuation of our interest in the facile, selective, and sustainable synthesis of biologically important heterocycles, we disclose here a highly efficient one-pot tandem calcium-catalysed Friedlander annulation followed by chemoselective C-H functionalization to generate quinoline-based new chemical entities.

Fig. 1 Representative examples of 2-alkyl quinoline derivatives present in natural products and drug molecules.
Results and discussion
As depicted in Fig. 2, we designed a Friedlander synthesis of quinoline (3) which contains two activated methyl groups, the chemoselectivity of which could be differentiated by a suitable combination of reagents and conditions. Based on our expertise in Csp3-H functionalization, we decided to functionalize the methyl group (Csp3-H) at the 2nd position in a tandem one-pot multicomponent approach. 8i-k Thus, compound (I) could be achieved by selecting a suitable electrophile (this can be an aldehyde or an isatin) which could accommodate two moles of the methyl azaarene. Compound (II) was envisaged from the chemoselective conjugate addition of compound 3 to activated alkenes in one pot.
In order to implement our concept, initially we decided to look for better conditions for 2-methylquinoline synthesis and its conjugate (Michael) addition to a chalcone derivative in one pot. 2-Aminobenzophenone (1a) and acetylacetone (2a) were chosen as starting materials for the Friedlander synthesis of 2-methyl, 3-acetyl, 4-phenyl quinoline (3a). As shown in Table 1, stoichiometric amounts of 1a and 2a were refluxed in water with 10 mol% of Ca(OTf)2/Bu4NPF6 and the reaction gave a positive result with 61% of 3a after 6 h (entry 1, Table 1). Toluene gave a better result compared to water (Table 1, entry 2) and DCE gave a lower yield of 3a. However, the reaction yielded excellent results under neat conditions (entry 4, Table 1). After a careful observation of the optimization studies, we found that the Friedlander synthesis was effective at 120 °C under neat conditions with 10 mol% of Ca(OTf)2/Bu4NPF6 to furnish a nearly quantitative yield of 3a. 14 Encouraged by this observation, we proceeded further and added simple chalcone 4a to the above reaction to check the possibility of a conjugate addition reaction. Gratifyingly, the reaction gave Michael adduct 5a in 72% yield after 18 h. Encouraged by this observation, the applicability of the reaction conditions for the one-pot synthesis of 4-(3-acetyl-4-phenylquinolin-2-yl)-1,3-diphenylbutan-1-one (5a) was generalized with different enones bearing different electron-withdrawing and -donating groups. As shown in Table 2, ortho-amino benzophenone (1a) and acetylacetone (2a) were reacted with various chalcones in the presence of 10 mol% Ca(OTf)2/Bu4NPF6 under neat conditions 15 through a tandem Friedlander annulation followed by Michael addition to give the quinoline derivatives 5a-5d in good yields. The 5-chloro-2-aminobenzophenone derivative (1b) also reacted with acetylacetone and various chalcone derivatives under the same conditions and produced the quinoline derivatives 5e-5i in good yields. Interestingly, ortho-aminobenzophenone bearing an electron-withdrawing group (-NO2) also showed a similar reactivity with acetylacetone and chalcones, furnishing the quinoline derivatives 5j-5l in moderate to good yields, as shown in Table 2.
After a successful demonstration of a three-component calcium-catalyzed tandem Friedlander synthesis of 2-methyl, 3-acyl quinolines and their chemo-selective functionalization through a Michael addition to the chalcone compounds (Table 2), we decided to make dimeric quinoline derivatives. For this, aldehydes were taken as electrophilic partners instead of chalcones, as it is known that 2-methylquinoline adds to aldehydes to yield alcoholic compounds which may further undergo another nucleophilic substitution with a second mole of 2-methylquinoline in the presence of Ca(II). 13f To implement this idea, we performed the Friedlander annulation and then added 1 equiv. of benzaldehyde and 2-methylquinoline (1 equiv.) in one pot; the reaction was continued for another 8 h to isolate the desired product 8a in 80% yield. Refreshed by this result, we extended this procedure to the synthesis of other dimeric quinolines 8b-8e in excellent yields, as shown in Table 3. When 4-nitrobenzaldehyde was added alone after the Friedlander annulation, a homodimeric quinoline derivative 8f was isolated in 71% yield after 13 h. Having these fruitful results in hand, we investigated another four-component reaction for the synthesis of quinoline derivatives through a tandem Friedlander annulation and C-H functionalization, as described in Table 4. Benzaldehyde (6) and malononitrile (9) were added to the substituted 2-methylquinoline (formed through Friedlander annulation) in one pot and the reaction was further refluxed in water for an additional 5 h to obtain the four-component adduct 2-(2-(3-acetyl-4-phenylquinolin-2-yl)-1-phenylethyl)malononitrile (10a) in 92% yield through a simple filtration (Table 4). This compound was so pure that no further recrystallization was required. The substrate scope of this one-pot four-component synthesis was demonstrated by the participation of a large number of aryl aldehydes bearing electron-donating/withdrawing groups and substituted ortho-amino benzophenones to furnish the respective compounds 10a-10i in excellent yields, as depicted in Table 4. This idea was further extended to the synthesis of biologically important quaternary-centered oxindolyl-quinoline derivatives by simply switching the electrophile from aldehyde to isatin (Table 5). Thus, quaternary-centered oxindolyl derivative 12a was isolated through simple filtration in 92% yield in 5.5 h under similar conditions. Similarly, 1-methylisatin with 3a yielded 12b and 12d in 93% and 91% yields, respectively, whereas 5-methylisatin furnished the product 12c in 91% yield.

Fig. 2 Schematic representation of our synthetic plan for quinoline derivatives through a one-pot Friedlander annulation followed by chemoselective functionalization (E = electrophile; EWG = electron-withdrawing group).

Table 1 Optimization of reaction conditions for the calcium-catalysed Friedlander synthesis of 2-methyl, 3-acyl, 4-phenyl quinoline (3a).
The synthetic utility of this protocol was demonstrated via a gram-scale synthesis of 5a (2.65 g) through a tandem Friedlander annulation/chemoselective C-H functionalization, and a 71% yield of the desired product was obtained (Scheme 1).

Table 2 Substrate scope for the Ca(OTf)2-catalyzed tandem Friedlander annulation and Michael addition for the synthesis of substituted quinoline derivatives. a Stoichiometry of reactants: 1 (0.50 mmol), 2 (0.50 mmol) & 3 (0.50 mmol); reaction was performed in a sealed vessel; isolated yields were reported.

Table 3 Substrate scope in the one-pot four-component Ca(II)-catalyzed Friedlander annulation and chemoselective Csp3-H functionalization for the synthesis of dimeric quinoline derivatives. a Stoichiometry of reactants: 1 (0.50 mmol), 2 (0.50 mmol), 6 (0.50 mmol) & 7 (0.55 mmol); reaction was performed in a sealed vessel; isolated yields were reported.
Conclusions
In summary, we described the first report of tandem Friedlander annulation and chemoselective C sp3-H functionalization of in situ-generated 2-methyl, 3-acyl quinolines under calcium catalysis. The wide substrate scope, high yields, and flexibility to extend to further varieties of quinoline derivatives under calcium catalysis, together with the atom and step economy of the method, should attract attention from medicinal chemists wishing to further explore the biological utilities of quinoline derivatives.
General remarks
All chemicals were purchased from commercial sources and were used as received without further purification. 1H and 13C NMR spectra were recorded on an Avance Bruker 500 MHz spectrometer in CDCl3. Chemical shifts (δ) are given in ppm relative to tetramethylsilane (TMS) and calibrated to the residual chloroform peaks. Coupling constants (J) are reported in Hz and coupling patterns are described as: s = singlet, d = doublet, t = triplet, q = quartet, quint = quintet, hept = heptet, m = multiplet. Melting points were measured with a Büchi Melting Point B-540 apparatus. Reactions were monitored by thin layer chromatography (TLC) on aluminium sheets coated with silica gel 60 F254 (Merck), with detection by UV light and charring with β-naphthol and ninhydrin stains.
|
2019-04-08T13:12:08.553Z
|
2017-03-28T00:00:00.000
|
{
"year": 2017,
"sha1": "6d550023b04fa659e1c105670d6b78117b844293",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c6ra28642a",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4940c6c2cef2159b72f1b908ec262f0558070756",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
258434265
|
pes2o/s2orc
|
v3-fos-license
|
The treatment and rationale for the correction of a cervical kyphosis spinal deformity in a cervical asymptomatic young female: a Chiropractic BioPhysics® case report with follow-up
[Purpose] To present a case demonstrating dramatic restoration of the cervical lordosis and reduction of forward head posture by use of the Chiropractic BioPhysics® (CBP®) technique. [Participant and Methods] A 24-year-old cervical asymptomatic female presented with poor craniocervical posture. Radiography revealed forward head posture and an exaggerated cervical kyphosis. [Results] The patient received CBP care including mirror image® cervical extension exercises, cervical extension traction and spinal manipulative therapy. After 36 treatments over 17 weeks, repeat radiography demonstrated a dramatic improvement: the cervical kyphosis had been converted to a lordosis, and the forward head posture was reduced. Subsequent treatment increased the lordosis further. Long-term follow-up at 3.5 years showed some loss of the original correction; however, the global lordosis was maintained. [Conclusion] This case demonstrates that non-surgical reversal of a cervical kyphosis to a lordosis is possible in a short time using CBP cervical extension protocols. It is logical that, had the kyphosis not been corrected, osteoarthritis and various craniovertebral symptoms would have evolved over time, as the literature indicates. The diagnosis of gross spinal deformity, we argue, requires its correction prior to the onset of symptoms and permanent degenerative changes.
INTRODUCTION
Cervical kyphosis is a subluxation pattern of the cervical spine only diagnosable via imaging 1) . It is also recognized as an etiological factor for a variety of craniocervical symptoms including headache, neck pains and radiculopathies [2][3][4][5][6][7][8] . Altered sagittal cervical spine alignment, including kyphosis, is also implicated in poor long-term health consequences after neck injury 8,9) . Although there has been a wide range of variability reported for 'normal' neck curve alignment, 10) the normal cervical curve is lordotic 10,11) .
In the manual therapies literature, there has been debate as to the usefulness of radiographic screening of the sagittal spinal curves [12][13][14][15][16][17]. For instance, Christensen and Hartvigsen concluded that "Evidence from epidemiological studies does not support an association between sagittal spinal curves and health including spinal pain"; even though the authors found several health conditions, including death, having statistically significant odds ratios in the presence of altered spine alignment, they claimed the relationships were 'unlikely to be causal' 12). Despite being cited hundreds of times, the manuscript was found to have fundamental flaws, including misinterpreted findings and missing relevant references; were it not for these flaws, it would have had to conclude the opposite 13). Others have dismissed the importance of improving the sagittal lordotic spinal curves in the treatment of spine ailments (e.g. Jenkins et al. 14); Corso et al. 15)), but fail to accurately interpret the very studies they referenced, which actually provide high-level, supportive evidence and rationale for utilizing spine re-alignment treatments (and therefore X-rays) 16,17).
As it turns out, the few who argue against routine imaging in spine care 12,14,15) use recurring themes of flawed debate, including radiophobic fear-mongering, misappropriation of medical references and dismissal of relevant data [16][17][18]. The current spine literature, however, including the manual therapies and spine surgery domains, points to routine spine imaging being an ethical, evidence-based approach to contemporary spine care [19][20][21][22][23][24][25]. Indeed, it is the only method to clinically and efficiently screen for spine disorders, including cervical kyphosis.
Cervical kyphosis may be present in an asymptomatic individual, but its biomechanical consequences are increased loading of the intervertebral discs and anterior vertebral bodies and, ultimately, accelerated degenerative changes 26). Thus, the diagnosis of cervical kyphosis and its correction are important even in an asymptomatic individual. This report details the successful reversal of a kyphotic cervical curve to a normal lordotic curve in a young female asymptomatic for cervical complaints.
PARTICIPANT AND METHODS
On July 9, 2014, a 24-year-old female presented with discomfort in her right shoulder; the patient stated she had injured it about 4 years prior. On a numerical pain rating scale (NPRS) the patient rated her pain at an average of 1.5/10 (0 = no pain; 10 = severe pain with the patient bedridden). No complaints regarding the cervical spine were reported. All orthopedic tests, range of motion and reflexes were normal/unremarkable.
Radiographic assessment was performed and analyzed using the PostureRay EMR software (PostureRay Inc., Trinity, FL, USA). This software incorporates the Harrison posterior tangent (HPT) method to quantify the sagittal cervical lordosis by generating lines contiguous with the posterior vertebral body margins; both relative rotation angles (RRAs) between all pairs of adjacent cervical vertebrae as well as a global absolute rotation angle (ARA) between C2 and C7 were calculated. This line drawing method is reliable and the standard error of measurement is small (<2°) 27,28) .
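As an illustration of how the posterior tangent angles could be computed from digitized landmarks, the following is a minimal Python sketch. The coordinate convention (image x to the right, y downward), the function names, and the sign of the returned angles are illustrative assumptions, not the PostureRay implementation.

import numpy as np

def posterior_tangent_angle(p_sup, p_inf):
    """Angle (degrees) of the posterior vertebral body tangent line,
    measured from vertical, given the posterior-superior and
    posterior-inferior body corners as (x, y) image coordinates."""
    dx, dy = p_inf[0] - p_sup[0], p_inf[1] - p_sup[1]
    return np.degrees(np.arctan2(dx, dy))

def harrison_angles(corners):
    """corners: dict mapping level ('C2'...'C7') to a
    (posterior_superior, posterior_inferior) pair of points.
    Returns the segmental RRAs and the global C2-C7 ARA."""
    levels = ["C2", "C3", "C4", "C5", "C6", "C7"]
    tangents = [posterior_tangent_angle(*corners[l]) for l in levels]
    # relative rotation angle between each pair of adjacent vertebrae
    rras = [tangents[k + 1] - tangents[k] for k in range(len(levels) - 1)]
    ara_c2_c7 = tangents[-1] - tangents[0]  # global absolute rotation angle
    return rras, ara_c2_c7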
Initial radiographs of the cervical spine showed a cervical kyphosis of +14.8° (C2-C7 ARA), as compared to an ideal configuration of −34° to −42° 3,4) (Fig. 1). The atlas plane line (APL) was −10.6° (vs. a normal of −29° 3)), and the anterior head translation (AHT) was 22 mm (the horizontal distance from the posterior inferior corner of C7 to the vertical line from the posterior superior vertebral body corner of C2). The images were free of degenerative processes.
Treatment was aimed at restoring a normal cervical lordosis via CBP technique methods [29][30][31][32]. The CBP technique incorporates mirror image® exercises, postural adjustments and spinal traction; in this case, cervical hyperextension positions were incorporated in the exercises and spinal traction in order to restore lordosis. The treatment protocol involved a frequent treatment schedule (e.g. 3 times a week for 12 weeks) prior to a re-assessment, including an X-ray, to assess treatment efficacy 30). The patient underwent three periods of corrective care, during which she was treated approximately twice a week, totaling 107 treatments over 59 weeks. Following the corrective care, the patient attended maintenance treatments when available (Table 1).
Treatment involved full-spine spinal manipulative therapy (SMT), paraspinal stimulation with a hand-held adjusting instrument (Arthrostim®, Impac, Inc., Salem, OR, USA) as well as corrective neck exercises. A Pro-lordotic neck exerciser (Circular Traction LLC., Huntington Beach, CA, USA) was used for neck extension exercises, in which the patient placed the exercise band at the mid portion of the neck and extended the handles anteriorly to create resistance, while simultaneously extending the head posteriorly to force a hyperextension of the cervical spine (Fig. 2). This exercise was held for 3 seconds and repeated 50 times at each appointment. Cervical traction involved the 'compression-extension 2-way' three-point bending approach, featuring a posterior-to-anterior pull with a simultaneous hyperextension of the cervical spine (Fig. 3) 33). The intent of the chin-forehead strap is to pull the head into extension, while a second strap, placed at the C4-C6 vertebral level, creates a posterior-to-anterior pull to force a hyperextension position of the neck. This traction was performed for 10 minutes. The patient began with 5 lbs-2 lbs (weights on front and back, respectively) but was able to tolerate an increased weight of 20 lbs-10 lbs by the 34th treatment. By the 105th treatment the weight had increased to 25 lbs-13 lbs. Regarding home treatment, the patient was prescribed a Denneroll orthotic unit to perform cervical extension traction daily for 10-20 minutes. The patient was compliant with the home traction initially, but only performed it intermittently later in the program. The patient gave verbal and written consent for the publication of these results.
RESULTS
The first re-assessment was conducted after 36 treatments over 17 weeks. Following the third re-assessment, the patient visited sporadically over a period of 3.5 years as she decided to do some traveling; she was only able to attend 14 treatments over the 3.5-year period. At final follow-up, radiographic imaging showed an 18.6° reduction of her cervical lordosis relative to the best correction (−6.6° vs. −25.2°), a 9.8° reduction of her APL (−21.7° vs. −31.5°), and a 6.3 mm increase in AHT (13.1 mm vs. 6.8 mm).
DISCUSSION
This case documents a dramatic 40° improvement in cervical lordosis: the correction of a severe cervical kyphosis to a normal lordotic configuration. Follow-up at 3.5 years showed a loss of the initial correction, but maintenance of the lordosis with minimal treatment. This case further illustrates successful preventative care, in which a patient with an undiagnosed cervical kyphosis received corrective treatment to restore the deformity towards a normal alignment, all while asymptomatic. The successful reversal of the kyphotic curve and reduction of forward head posture present promising results for the future of preventative care.
This case shows that gross cervical kyphosis spinal deformity, despite the absence of concurrent symptoms, is amenable to structural correction. In fact, the diagnosis of a cervical kyphosis, regardless of symptoms, leaves a patient in a vulnerable position for future injury after a traumatic event (e.g. motor vehicle collision 34)) and for the development of degenerative changes over time 26) (Fig. 4). Also, the development of symptomatic expression resulting from the spinal deformity, including neck pain, headaches, etc., is significantly more likely over time [2][3][4][5][6][7][8]. We argue that being 'asymptomatic' in the presence of a gross spinal deformity is more accurately described as 'pre-symptomatic'. It is clear and evident that spine misalignment, particularly gross deviation, sets the precedent for biomechanical deterioration over time. Further, it is easier to correct smaller spine deformities than larger ones. Thus, for these reasons, it is important to diagnose spine misalignments early (e.g. perform posture and spine screening) and to treat spine deformities at the earliest recognition (while a patient is asymptomatic/pre-symptomatic); as the saying goes, 'An ounce of prevention is worth a pound of cure'.
This case demonstrated a dramatic 40° correction in lordosis after the corrective care protocol. A recent systematic review documented an average lordosis improvement of 12-18° after an average of 30-36 treatments over 10-12 weeks 24). Another systematic review of case reports utilizing extension traction methods found an average of 14° of lordosis correction after 40 treatments over 14 weeks 25). This case documents a much greater increase in lordosis, likely for several reasons. First, more treatment was given in this case; no randomized or non-randomized trial incorporating CBP extension traction methods has implemented multiple 'rounds of care' after re-assessment. Second, the patient was asymptomatic, which enabled a more aggressive and earlier treatment approach; the published trials incorporating extension traction typically involve various craniocervical pathological conditions. Last, the deformity was so great (14° of kyphosis) that there was more improvement to be had versus the typical patient groups featured in previous trials, who presented with symptomatic hypolordosis rather than gross kyphosis.
It should be mentioned that a loss of the initial correction was noted at the 3.5-year follow-up. Although a large loss of lordosis correction occurred, the result still represented a 21° improvement from the initial presentation. This supports the need for maintenance treatment to preserve a re-established lordosis. As found in the several published trials utilizing CBP extension traction 24), a loss of the initial lordosis correction has been documented to occur, warranting a supportive or maintenance treatment schedule beyond the initial comprehensive structural corrective treatment protocol; this is suggested to be about twice per month 24).
As mentioned, although some have argued against routine X-ray screening in the manual therapies, it is irrefutable that only imaging (radiographic, CT, MRI) can elucidate the spine's vertebral coupling patterns (i.e. subluxation patterns), which are essential in the application of spine-correcting methods as featured in this case. For instance, Fig. 5 illustrates a patient with forward head posture for whom several 'unknown' spine coupling patterns are possible 1). It is only with imaging that the patient's precise spinal pattern can be unveiled, underpinning biomechanically informed treatment choices. Thus, contemporary spine treatment approaches such as those incorporated by CBP technique methods will always require screening (i.e. baseline) X-rays in order to provide evidence-based, ethical, patient-specific and biomechanically advanced treatment options. A limitation of this report is that it describes only an individual case. This case adds to a growing base of cases in the literature that document the non-surgical restoration of the cervical lordosis 24,25). Based on the design of the multiple randomized clinical trials on cervical extension traction methods 24), it can be presumed that the main treatment modality contributing to the increased lordosis was the unique traction methods, as exercise 35) and spinal manipulation 36,37) alone have not proven to be effective means of restoring cervical lordosis.
|
2023-05-02T15:02:47.568Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "e97693fb0f418c21e877b65856fd6edd45991627",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "aa552e2b9a1838501ee117846a991148fc90c328",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9715634
|
pes2o/s2orc
|
v3-fos-license
|
Prevalence of cardio-respiratory factors in the occurrence of the decrease in oxygen uptake during supra-maximal, constant-power exercise
Purpose To investigate the physiological mechanisms that explain the end-exercise decrease in oxygen uptake during strenuous constant-power exercise, we recruited eleven trained track cyclists. Methods On two separate days they performed 1) resting spirometric measures, followed by an incremental test on a cycle ergometer to determine the power output at $\dot{V}O_{2max}$, and 2) an exhaustive isokinetic supramaximal cycling exercise (Tlimsupra) at 185 ± 24% of $P$-$\dot{V}O_{2max}$ (i.e., 640.5 ± 50.8 W). During the cycling exercise tests, $\dot{V}O_2$, ventilation parameters, stroke volume (SV) and heart rate were continuously recorded. Furthermore, arterialised capillary blood samples were collected to measure blood pH, arterial oxygen saturation, lactate and bicarbonate concentration before and 5 min after Tlimsupra. Results A >5% decrease in $\dot{V}O_2$ and/or SV was observed in 6 subjects, with 5 out of the 6 subjects presenting both phenomena. The magnitude of the $\dot{V}O_2$ decrease was correlated with the magnitude of the SV decrease (R = 0.75, P < 0.01), the peak-exercise end-tidal O2 partial pressure (R = 0.80, P < 0.005) and the resting forced expiratory volume in 1 s (R = 0.72, P < 0.05), but not with any blood variables. The significant post-Tlimsupra decrease in forced vital capacity and forced inspiratory volume is consistent with possible respiratory muscle fatigue. Conclusion Based on these findings, we demonstrate that the occurrence of a $\dot{V}O_2$ decrease in more than half of our subjects, during a strenuous constant-power exercise leading to mild acidosis (pH = 7.21 ± 0.04), results mainly from cardio-respiratory factors and not from blood metabolic responses.
Background A significant decrease in whole-body pulmonary oxygen uptake ($\dot{V}O_2$) at the end of supra-maximal running exercise in the field has been reported (Billat et al. 2009; Hanon et al. 2010; Thomas et al. 2005). Of note, the $\dot{V}O_2$ decrease was concomitant with a decrease in running velocity that could logically be considered as one of the explanations for this phenomenon. However, it is important to note that (i) the $\dot{V}O_2$ decrease was proportionally larger than the drop in running velocity (Hanon and Thomas 2011) and (ii) the final velocity was always greater than the velocity associated with the maximal $\dot{V}O_2$ ($\dot{V}O_{2max}$) of each subject (Hanon and Thomas 2011). Additionally, researchers have also observed a $\dot{V}O_2$ decrease during exhaustive treadmill exercise performed at a constant intensity (Nummela and Rusko 1995; Perrey et al. 2002). An important unresolved physiological question, therefore, is what mechanisms contribute to this phenomenon?
Low blood pH values reduce the affinity of O2 for haemoglobin and contribute to exercise-induced arterial hypoxemia (EIAH). Harms et al. (2000) stated that $\dot{V}O_2$ appears to decrease by 2% for each 1% decrease of arterial O2 saturation (SaO2), at least when SaO2 is less than 95%. Furthermore, acid-base disturbances have been shown to change the partial pressure at which carbon dioxide begins to stimulate breathing (Duffin 2005). The model simulations presented by this author (Duffin 2005) demonstrated the importance of the central strong ion difference (SID) in the regulation of breathing. Therefore, an altered acid-base balance in response to supra-maximal exercise may also contribute to disturbances in exercise ventilation, O2 transport, and utilisation. Independent of changes in blood pH, increases in blood lactate levels have also been associated with decreases in oxygen supply (Rozier et al. 2007; Mortensen et al. 2007) and O2 extraction (Poole et al. 1994). Therefore, large ionic and metabolic perturbations at the end of exhaustive supra-maximal exercise may affect O2 transport and utilisation and contribute to the end-exercise $\dot{V}O_2$ decrease. Concomitant with the decrease in $\dot{V}O_2$ observed during exercise performed at ~95% of $\dot{V}O_{2max}$, Perrey et al. (2002) observed a decrease in minute ventilation ($\dot{V}_E$) and tidal volume (V T). Hanon and Thomas (2011) also reported a strong correlation between the V T and $\dot{V}O_2$ responses observed in the last 100 m of 400-, 800-, and 1500-m races (r = 0.85, P < 0.0001), suggesting that respiratory response patterns may play a role in the $\dot{V}O_2$ decrease during the latter part of supra-maximal exercise. With little increase in alveolar O2 pressure (PAO2) during exercise, the pulmonary diffusion capacity becomes critical for the maintenance of arterial O2 pressure (PaO2) (Dempsey 2006), and large lungs appear to be an advantage when performing whole-body exercise (Nielsen 2003). On the other hand, large swings in thorax movement could have negative consequences such as excessive fluctuations in intra-thoracic pressures (Amann 2011) or extreme respiratory muscle work and fatigue (Aaron et al. 1992). Indeed, since many studies have shown that the respiratory system might affect the quality of O2 transport during strenuous exercise (Nielsen 2003), the contribution of the end-exercise respiratory response to the $\dot{V}O_2$ decrease remains to be tested. The significant $\dot{V}O_2$ decline observed in the last two minutes of a 5-10 min exhaustive test (Gonzalez-Alonso and Calbet 2003) has also been directly associated with the inability of the heart to maintain the rate of O2 delivery to locomotive skeletal muscles. These authors emphasised that the mechanisms of fatigue which could explain the declining systemic O2 delivery and $\dot{V}O_2$ during heavy exercise are complex, possibly involving inhibitory signals originating in different bodily tissues and organs. However, these authors did not concurrently measure changes in respiratory variables, and, further, it is not known whether changes in cardiac parameters also contribute to the exercise-induced decrease in $\dot{V}O_2$ during supramaximal exercise lasting less than 2 min. The link between resting lung volumes and exercise-induced cardio-respiratory responses on one hand, and the decrease in $\dot{V}O_2$ on the other, needs to be investigated.
Therefore, the main aims of this study were to identify the primary factors associated with the inability to maintain a high steady-state $\dot{V}O_2$ in healthy, trained subjects. To rule out the potential confounding influence of a decrease in velocity or power output, we chose a constant-work-load cycling exercise. The subjects were tested on a cycle ergometer in order to control the pedalling pattern as participants fatigued (i.e., to avoid a decrease in frequency and then in power). We hypothesized that the impairment of both cardiac and respiratory function, associated with metabolic perturbations, would result in a $\dot{V}O_2$ decrease.
Results
The results are expressed as the group average, with corresponding statistical results, and for the main physiological variables, as the individual responses referenced as a letter (A to K).
Tlim supra test
The mean performance for the Tlim supra test was 51.4 ± 6.9 s (range 43 to 65 s). During this test, a mean power (P supra Δ30%) of 641 ± 51 W was sustained at a mean pedalling rate of 109 ± 6 rpm; this mean power output corresponded to 185 ± 24% of $P$-$\dot{V}O_{2max}$ and 49 ± 3.8% of P max.
Respiratory responses
The $\dot{V}O_{2peak}$ reached during the Tlim supra test was equal to 55.0 ± 7.3 mL.min-1.kg-1 (95.0 ± 7.6% of $\dot{V}O_{2max}$). Figure 1 displays the time course of $\dot{V}O_2$ for the eleven subjects. During Tlim supra, the $\dot{V}O_{2peak}$ value was detected at 43.3 ± 5.3 s after the onset of the test (~80% of the total test duration). From 80% of the total duration until the end of the test, the group mean $\dot{V}O_2$ decreased significantly, by 5.4 ± 4.7% of $\dot{V}O_{2peak}$ (P < 0.05). The peak VRMO2 value observed at the end of the exercise was 423.9 ± 96.7 mL.min-1. This corresponded to 11.9 ± 2.1% (range 8.6 to 15.0%) of the whole pulmonary oxygen uptake.
In 6 of our 11 subjects (subjects A, B, C, D, E and F in Figure 2), the decrease in $\dot{V}O_2$ was greater than 5%, corresponding to 9.1 ± 2.4% of peak values. In the 5 other subjects (G, H, I, J, K), the decrease was between zero and 3.5% (0.9 ± 2.0%). During the Tlim supra, considering the peak (2.6 ± 0.5 L) and final values (2.4 ± 0.4 L), a global decrease in V T corresponding to 5.9 ± 5.6% was found (P < 0.05), with no concomitant global decrease in RF or $\dot{V}_E$. This V T decrease was observed in 7 subjects (Figure 2), whereas a decrease in RF and in $\dot{V}_E$ (not presented in Figure 2) was observed in only one subject (subject E) and in 3 subjects (A, E and J), respectively. The decrease in V T was 7.9 ± 6.4% in the subjects who presented a $\dot{V}O_2$ decrease (A to F) and 3.5 ± 3.8% in subjects G to K, who presented a <5% $\dot{V}O_2$ decrease. The difference between these two groups was significant (P < 0.05), with an effect size of 0.80.

Figure 1 Mean time course of $\dot{V}O_2$ during the Tlim supra test at each 5% interval of test duration in the subjects presenting a ≤5% (white labels) and >5% (black labels) $\dot{V}O_2$ decrease. Values are mean ± SD; *: significant decrease relative to $\dot{V}O_{2peak}$, P < 0.05.
The functional pulmonary data are presented in Table 1. The ICC for the FVC pre-tests was 0.95 (confidence interval: 0.88-0.98). The mean peak P ET O 2 value was 122.2 ± 4.8 mmHg and VR was 89.8 ± 10.5% of the estimated MVV. The difference between the subjects who exhibited a <5% and those who exhibited a >5% $\dot{V}O_2$ decrease was significant (P < 0.05) for the pre-exercise values of FEV 1, with an effect size of 2.24.
The comparison between pre- and post-Tlim supra data (Figure 3) revealed a significant decrease in FEV 25, FIVC and FVC (P < 0.05).
Cardiac responses
HR values attained a steady-state value of 185 ± 11 beats.min -1 (98.4 ± 5.0% of IT maximal HR). The highest CO (25.0 ± 5.6 L.min -1 ) and SV (140.3 ± 33.0 mL) mean values measured during Tlim supra were not different from maximal values recorded during the IT. In 6 out of 11 subjects (A, B, C, D, E and K), a SV decrease of more than 5% was observed ( Figure 2).
Comparing the subjects who did and did not present a $\dot{V}O_2$ decrease, the SV decrease was 17.7 ± 12.3% in subjects A to F and 3.8 ± 8.4% in subjects G to K. The difference between the two groups was significant (P < 0.05), with an effect size of 1.29.
Blood metabolic responses
The blood results measured before and after Tlim supra are presented in Table 2. The peak values of [La], pH and [HCO 3 -] were obtained 5 min after the end of the exercise. The SaO 2 value measured immediately after stopping exercise was 92.5 ± 2.7%.
Relationships between the $\dot{V}O_2$ decrease and metabolic, respiratory and cardio-dynamic data

The magnitude of the $\dot{V}O_2$ decrease was correlated with the peak P ET O 2 values (R = 0.80, P < 0.005), and the correlation with the decrease in V T approached significance (R = 0.57, P = 0.06). The magnitude of the $\dot{V}O_2$ decrease was also correlated with FEV 1 (R = 0.72, P < 0.005) and FEV 25 (R = 0.73, P < 0.01), measured at rest and post-exercise, respectively. The partial correlations between $\dot{V}O_2$ on the one hand and V T, SV, P ET O 2 and FEV 1 on the other were 0.52 (P > 0.05), 0.70, 0.78 and 0.71 (P < 0.05), respectively. As observed in Figure 2, 5 of the 6 subjects exhibiting a $\dot{V}O_2$ decrease also presented a SV decrease (expressed as a percentage of the peak value) (8.6 ± 9.9 mL.beat-1), but the inverse was not verified, with one subject (K) presenting a drop in SV without a $\dot{V}O_2$ decrease. Nevertheless, the relationship between the SV and $\dot{V}O_2$ decreases was significant (R = 0.75, P < 0.01). Significant correlations were also observed between the SV decrease and both the peak value of P ET O 2 (R = −0.65, P < 0.05) and the resting FEV 1 (R = 0.73, P < 0.01), as shown in Figure 4. No significant relationships (P > 0.05) were observed between the $\dot{V}O_2$ decrease and the blood data ([La] (R = −0.45), pH (R = 0.10), SaO2 (R = 0.14) and [HCO3-] (R = 0.24)).
Discussion
A significant mean decrease in $\dot{V}O_2$ was observed in the last 20% of the total exercise duration. This decrease was greater than 5% of the peak value in 6 out of 11 subjects, with 5 of these 6 subjects also presenting a decrease in stroke volume. The correlations indicated that the magnitude of the $\dot{V}O_2$ decrease was linked with that of SV, and that both were negatively linked with respiratory parameters such as the peak-exercise end-tidal O2 partial pressure and the resting forced expiratory volume in 1 s. The strong interrelations between cardiac and respiratory responses suggest that both contribute to the $\dot{V}O_2$ decrease during intense, supramaximal cycling exercise. A significant post-exercise decrease in resting expiratory and inspiratory flow volumes was also observed, suggesting respiratory muscle fatigue.
$\dot{V}O_{2peak}$
The present study indicates that during a cycling test performed at 185% of MAP, well-trained cyclists are able to reach 95% of their $\dot{V}O_{2max}$ in less than 50 s. This is similar to the value of 94% obtained during a 400-m track run (Hanon et al. 2010). As reviewed by Gastin (2001), $\dot{V}O_2$ can be as high as 90% of the athlete's maximum after 30-60 s. However, these previous studies all utilized intensive cycling exercise of short duration initiated with a maximal starting power (Wingate test or all-out exercise). In the present study, the power was constant, but sufficiently elevated (185% of $P$-$\dot{V}O_{2max}$) to induce exhaustion in less than 60 s. Therefore, our protocol was successful at soliciting a large percentage of $\dot{V}O_{2max}$ during an intense constant-power exercise in well-trained sprint cyclists.
$\dot{V}O_2$ decrease
A moderate decrease in the mean $\dot{V}O_2$ was observed during the final 20% of the supramaximal cycle test. The magnitude of this $\dot{V}O_2$ decrease (0 to 12%) differed from our recent results obtained during a 400-m running field test of similar duration (50 s), in which a systematic and greater $\dot{V}O_2$ drop (15%) was observed in the final 100 m (Hanon et al. 2010). Of note, and contrary to the present study, this last exercise segment was performed with a large velocity decrease. Nevertheless, a $\dot{V}O_2$ decrease can occur in exercise performed at a constant pace in a subset of subjects, suggesting that at least some of this decrease is independent of a velocity or power decrease (Nummela and Rusko 1995; Perrey et al. 2002). It should be noted that, as in the above-mentioned studies (Hanon et al. 2010), the $\dot{V}O_2$ decrease occurred even though $\dot{V}O_{2max}$ was not reached.
Each step in the O2 supply chain, from breathing air to transport to the muscle cells, could influence O2 availability, especially during whole-body, maximal-intensity exercise. Although hyperventilation produces an increase in alveolar O2 tension to overcome the diffusion limitation of the lungs (Dempsey 2006), it could also have negative consequences such as an extreme energetic cost, respiratory muscle fatigue, or attainment of the respiratory reserve. Each of these factors could have influenced $\dot{V}O_2$ during the latter stages of our exercise protocol.
Metabolic data and the $\dot{V}O_2$ decrease
The lack of a relationship between the magnitude of the $\dot{V}O_2$ decrease and the post-test blood changes is not in accordance with our previous all-out running data. In those previous experiments, a 23 and 12% drop in velocity was observed in the last 100 m of 400-m (Hanon et al. 2010) and 800-m (Hanon and Thomas 2011) races, respectively. The [lactate], [HCO3-] and pH were respectively 22.0 mmol.L-1, <5 mmol.L-1 and 7.00 after the 400-m race, whereas these values were 15.9 mmol.L-1, >12 mmol.L-1 and 7.21 in the present constant-power exercise, indicating a more moderate alteration of the acid-base balance. Therefore, in this context, we can hypothesize that the blood buffers were not completely depleted, with the result that, contrary to the running (Hanon et al. 2010), rowing (Nielsen et al. 1999) or cycling (Bishop et al. 2007) all-out exercises, the organism was able to prevent additional acidosis. In the present study, the post-exercise arterial saturation values (92.5 ± 2.7%) are at the limit of the definition of EIAH (≤92%). The magnitude of the $\dot{V}O_2$ decrease (5.4%) appears to be in line with the statement that $\dot{V}O_2$ decreases by 2% for each 1% decrease of SaO2 below 95% (Harms et al. 2000). Nevertheless, no significant correlation was observed between the magnitude of the $\dot{V}O_2$ decrease and the present blood PaO2, SaO2 and pH values. The brief duration of this supramaximal exercise, the type of exercise (constant-power vs. all-out), and the chosen sport (cycling vs. running) could explain the lower EIAH values compared to those usually observed in well-trained runners (Millet et al. 2009). These global metabolic results suggest that if the bicarbonate reserve is sufficient to eliminate excess H+, the O2 saturation may not be maximally affected by an eventual decrease in PaO2 (Nielsen 2003) and may not represent a major cause of the decrease in $\dot{V}O_2$.
Respiratory cost and respiratory muscle fatigue
During a 10-min exercise at ~95% of $\dot{V}O_{2max}$, Perrey et al. (2002) observed a significant decrease in $\dot{V}_E$ (due to a decrease in V T) in subjects who demonstrated a $\dot{V}O_2$ decrease. In the present supra-maximal exercise, $\dot{V}_E$ and RF increased until the end of the exercise, except in two subjects who exhibited a concomitant $\dot{V}_E$ and $\dot{V}O_2$ decrease. However, the overall significant V T decrease (5%), observed in eight subjects at the end of the test, tended to be correlated with the decrease in $\dot{V}O_2$ (R = 0.57, P = 0.06, n = 11). The maximal VRMO2 values (9-15% of the whole pulmonary $\dot{V}O_2$), similar to the maximal values previously published (Aaron et al. 1992), and the VR values (90 ± 10% of MVV) could also raise questions about the ability to sustain this ventilatory load. Furthermore, the functional capacity tests demonstrated a decrease in the forced inspiratory capacity after the Tlim supra. This result is in line with those recorded in well-trained rowers (Volianitis et al. 2001), cyclists (Romer et al. 2006) and swimmers (Lomax and McConnell 2003); the present decrease (10%) was less than that reported after 300- and 400-m swimming (15%), but this latter measurement was performed 20 s after the end of the test. Based on the observation that voluntary activation recovers almost fully by 3 min (Bigland-Ritchie et al. 1986), we chose to collect the post-test spirometric data 3 min after the exercise in order to exclude the hypothesis of a central activation failure. Our data demonstrating a FIVC decrease are in line with the observation of diaphragm fatigue by Johnson et al. (Johnson and Sieck 1993), who stated that near-maximal VR values cannot be sustained for more than 15 to 30 s. Therefore, our data confirm that the respiratory muscle response is likely to be affected during constant-power supramaximal exercise.
Respiratory reserve
Maintaining the alveolar O2 pressure (PAO2) through the stimulation of the respiratory muscles could cause athletes to reach and even surpass the respiratory reserve during maximal exercise, and a small portion of the maximal exercise flow-volume and pressure-volume envelope on expiration could approach the maximal expiratory flow limits near end-expiratory lung volume (Johnson et al. 1996). In the present study, only one subject reached the resting VR values, and this subject did not exhibit a $\dot{V}O_2$ decrease. However, Babb (2013) stated that expiratory flow limitation is not an all-or-none phenomenon and that approaching maximal expiratory flow can affect breathing mechanics. The onset of dynamic airway compression and the subsequent airway resistance start long before expiratory flow becomes limited. Therefore, in the last part of the exercise, when near-$\dot{V}O_{2max}$ values are attained, a number of mechanisms for inadequate hyperventilation are possible. Furthermore, based on the demonstration of a modified $\dot{V}_E$ response in an inclined versus an upright position (Grappe et al. 1998), we cannot exclude an influence of the inclined cycling position on the ratio between the $\dot{V}_E$ recorded in the cycling position and the MVV recorded in an upright position.
Cardio-respiratory responses and $\dot{V}O_2$ decrease
All subjects who exhibited a decrease in $\dot{V}O_2$ also presented a decrease in SV during the exercise, and a correlation was observed between the final SV data and the decrease in $\dot{V}O_2$. The observation that CO declined significantly before maximal heart rate was reached confirms the results presented by Gonzalez-Alonso and Calbet (2003) and indicates that maximal cardiovascular function was attained below maximal heart rate. The decline in stroke volume clearly caused the drop in CO, although the underlying mechanisms remain obscure. The positive correlation between the decrease in $\dot{V}O_2$ and FEV 1 could indicate that expiratory intrathoracic pressure has a negative effect on the $\dot{V}O_2$ response. Because the heart and lungs share a common surface area, progressive lung inflation and hyperpnea with exercise may increase competition for intrathoracic space and inhibit cardiac filling via a change in cardiac compliance (Peters et al. 1989). An expiratory load leads to a reduction in CO related to an increase in expiratory abdominal and intrathoracic pressure (Stark-Leyva et al. 2004). Hortop et al. (1988) previously demonstrated, in patients with cystic fibrosis, a strong relationship between the changes in SV with exercise and FEV 1. In our trained subjects, the decrease in SV was significantly correlated with P ET O 2 and FEV 1, which corroborates the relationship reported between SV and changes in intrathoracic pressure following voluntary lung inflation (Stark-Leyva et al. 2004) and the findings of a recent overview emphasizing the respiratory mechanisms that impair O2 transport (Amann 2011). In those subjects with high levels of expiratory flow, we suggest that, in the inclined cycling position, positive expiratory intrathoracic pressure is greater, increasing the ventricular afterload and reducing the rate of ventricular filling during diastole (Miller et al. 2007; Stark-Leyva et al. 2004), which could be deleterious for the maintenance of SV (Amann 2012) and therefore of $\dot{V}O_2$.
Conclusions
We demonstrated that a $\dot{V}O_2$ decrease occurs at the end of a constant-power supra-maximal exercise in 6 of 11 subjects, with the main result being that this phenomenon was related to respiratory characteristics and to the cardiac response. The relationship between the stroke volume and $\dot{V}O_2$ decreases confirms, for supramaximal exercise, previous observations made for longer and less intensive cycling exercise (Gonzalez-Alonso and Calbet 2003; Mortensen et al. 2008). Furthermore, the findings on the influence of the respiratory system on the $\dot{V}O_2$ response in the participants who presented both a high resting forced expiratory volume and a high exercise peak P ET O 2 are novel, and confirm that the pulmonary system is a key determinant of the physiological responses before the termination of a supramaximal cycling exercise. The present data suggest that the respiratory response during acute maximal exercise could be the origin of the decrease in SV and $\dot{V}O_2$ in the cycling position. The relation between respiratory and cardiac parameters and the $\dot{V}O_2$ decrease in the case of acute acidosis remains to be tested, and we can hypothesize that different mechanisms may be involved in the $\dot{V}O_2$ decrease depending on the level of acidosis and the body position.
Methods
Fourteen specifically trained subjects were recruited for this study. They had at least 5 years of competitive cycling experience and trained 8 hours per week in sprint track cycling and/or BMX. All were successful at national-level events and none had any history of pathology of the lower-limb muscles or joints.

Three subjects were not retained in the data processing because of signal loss in the collection of ventilatory data or non-observance of the prescribed pedalling rate. Thus, eleven trained men (age 24.9 ± 6.5 y, height 1.79 ± 0.05 m and body mass 75.3 ± 8.2 kg) were included in the analysis. They were informed of the nature of the study, and of the possible risks and discomforts associated with the experimental procedures, before giving their written consent to participate. The experimental design of the study was approved by the local Ethics Committee of Saint-Germain-en-Laye (France; acceptance no. 2009-A01004-53), and was carried out in accordance with the Declaration of Helsinki.
Experimental protocol
The protocol, carried out during the pre-competition period, included two sessions separated by two days: (1) a first session consisting of anthropometric measurements, resting spirometric monitoring (volume and flow), a torque-velocity cycling test, and an incremental test performed until exhaustion on a calibrated cycle ergometer; (2) a second session consisting of a constant-load, supra-maximal cycle test performed until exhaustion. In a pilot study, we observed that body temperature did not increase by more than 1°C during this test.
During the first visit, anthropometric data were recorded, subjects were familiarized with the spirometric tests to be performed in this study, and three resting spirometric tests were recorded in order to test the reliability of the measures (Figure 3). Subjects began with a warm-up of 15 min of cycling at 100-150 W, 1 min of recovery and a 5-s sprint. After a 5-min recovery, participants were asked to perform three maximal cycling sprints (5 s separated by 3 min of recovery) according to a previous protocol (Dorel et al. 2010). Three different resistive torques of 0, 0.4-0.7, 1-1.5 Nm/kg body mass were applied to obtain maximal force and power values over a large range of pedaling rates among the three bouts. After computation, the data from the three sprints were used to draw force-and power-velocity relationships and hence to determine maximum power (P max ) and the corresponding specific optimal pedaling rate (f opt ) at which P max occurred (for details, see (Dorel et al. 2010)).
After 20 min of rest, they performed an incremental cycle test (IT) to determine their $\dot{V}O_{2max}$ and the power output at $\dot{V}O_{2max}$ ($P$-$\dot{V}O_{2max}$, i.e. the power that elicited $\dot{V}O_{2max}$). The progressive protocol consisted of 6 min of pedaling at 100 W followed by a stepped ramp increase in power output of 20 W.min-1 until volitional exhaustion. Participants were instructed to maintain their chosen preferred cadence for as long as possible, and the test was completed when the cadence fell more than 10 rpm below this value for more than 5 s despite strong verbal encouragement. All respiratory and cardiac variables were recorded continuously.
During the second session, subjects were asked to perform a standard warm-up: 8 min at 150 W, 2 min at 260 W, a recovery period (i.e., 2 min), a 10-s sprint of progressively increasing intensity with the last 3 s performed at a maximal all-out intensity, 90 s of recovery and finally two brief all-out sprints (5 s in duration) interspersed with 90 s of recovery. After a further 10 minutes of passive recovery, subjects performed the cycling exercise (Tlim supra) at a constant power output (P supra Δ30%) for as long as possible until exhaustion. P supra Δ30% was defined as the supra-maximal intensity above MAP corresponding to an increment of 30% of the difference between P max (estimated from the torque-velocity test) and $P$-$\dot{V}O_{2max}$. Subjects were required to keep a constant pedalling rate (i.e., corresponding to f opt minus 10%). No information relative to test duration was given to the subjects. The test continued until complete exhaustion: either until the cyclists voluntarily chose to stop the exercise or until they were no longer able to maintain their initial test cadence (± 3 rpm), which was considered a failure to maintain the required task (i.e., the target power output at a constant cadence). Respiratory and cardiac responses were recorded continuously during the entire experimental session. Arterialised capillary blood samples (85 μL) were taken from a hyperemized ear lobe just before the start of Tlim supra (7 min after the end of the warm-up), at exhaustion, and at 5 and 8 min of the passive recovery.
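As a minimal numeric illustration of the P supra Δ30% definition just given, the following Python sketch computes the target power; the input values are back-calculated from the group means reported in the Results (641 W ≈ 185% of $P$-$\dot{V}O_{2max}$ and ≈49% of P max) and are assumptions for illustration only.

def p_supra_delta30(p_vo2max, p_max):
    # MAP plus 30% of the difference between P_max and P-VO2max
    return p_vo2max + 0.30 * (p_max - p_vo2max)

# Illustrative inputs back-computed from the reported percentages:
print(p_supra_delta30(p_vo2max=346.0, p_max=1308.0))  # ~634.6 W, close to the 641 W group mean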
Material and data collection/processing
All testing sessions took place in a well-ventilated laboratory at a temperature of 20-22°C and were conducted using an electronically-braked cycle ergometer (Excalibur Sport, Lode, Groningen, The Netherlands). Vertical and horizontal positions of the saddle, handlebar height, crank and stem lengths were set to match the most comfortable and usual position of the participants.
Respiratory responses
Spirometric variables [i.e. forced vital capacity (FVC), forced expiratory volume in 1 s (FEV 1), Tiffeneau index (FEV 1/FVC), forced inspiratory vital capacity (FIVC), forced inspiratory volume in 1 s (FIV 1), and forced expiratory flow at the point that is 25, 50 or 75% of FVC (FEV 25, 50 or 75)] (Figure 2A) were measured with an ergospirometric device (Spirobank II, MIR, Roma, Italy) before and 3 min after the end of Tlim supra. The precision and reproducibility of the data (FEV 1 and FVC) have been reported (Liistro et al. 2006). Before Tlim supra, a minimum of three satisfactory inspiratory and expiratory efforts were conducted, with the highest measurement being defined as maximal. At the end of the Tlim supra, and due to time constraints (recovery influence), only one satisfactory measurement was requested from the subjects in order to measure the exercise-induced changes in respiratory function.
During both the IT and Tlim supra, $\dot{V}O_2$, $\dot{V}_E$, CO2 production ($\dot{V}CO_2$), respiratory frequency (RF), V T and end-tidal oxygen tension (P ET O 2) were recorded breath by breath with a fixed gas-exchange system (Quark CPET, Cosmed, Roma, Italy). Calibration of the gas analyser was performed according to the manufacturer's instructions before each test for each subject. To avoid artefacts in the recorded signals, the finger was warmed with a vasodilator ointment 10-15 min before starting the measurement. The apparatus was automatically calibrated before each test. During the IT, breath-by-breath gas-exchange values were smoothed (i.e., 3-s central moving average). In order to characterize the subjects, the highest $\dot{V}O_2$ value over a 30-s period was considered the $\dot{V}O_{2max}$. The criteria used for the determination of $\dot{V}O_{2max}$ were threefold: a plateau in $\dot{V}O_2$ despite an increase in power output, a respiratory exchange ratio (RER) above 1.1, and a heart rate (HR) above 90% of the predicted maximal HR. For the purpose of comparing, over the same period of sampling, with the peak value of $\dot{V}O_2$ ($\dot{V}O_{2peak}$) measured during Tlim supra, the highest 5-s average was also determined. To determine $\dot{V}O_{2peak}$ during Tlim supra (and as previously reported (Hanon et al. 2010)), values were smoothed (i.e. 3-s central moving average) and then a 5-s average was applied in order to compare $\dot{V}O_2$ and the other ventilatory responses (V T, RF, $\dot{V}_E$) with those of cardiac output (CO), stroke volume (SV) and changes in SaO2 at the same time points.
For Tlim supra, the end $\dot{V}O_2$ value ($\dot{V}O_{2end}$) was defined as the average over the last 5-s period, and the $\dot{V}O_2$ decrease was calculated as $\dot{V}O_{2peak} - \dot{V}O_{2end}$. The $\dot{V}O_2$ decline was considered a $\dot{V}O_2$ decrease when the magnitude of the phenomenon was larger than 5% of the peak value while the exercise power remained above $P$-$\dot{V}O_{2max}$ (Billat et al. 2009). The same criterion was applied to the other cardio-respiratory variables.
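To make the smoothing and decrease criterion concrete, here is a minimal Python sketch, assuming the breath-by-breath signal has already been resampled to a uniform rate (an assumption; breath-by-breath data are irregularly spaced). The function names and the resampling step are illustrative, not the authors' processing code.

import numpy as np

def moving_average(x, window):
    """Central moving average with a given window length (samples)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def vo2_decrease(vo2, fs=1.0, threshold=0.05):
    """vo2: breath-by-breath VO2 resampled at fs Hz.
    Applies a 3-s central moving average, then 5-s averages, and
    returns (peak, end, relative drop, decrease flag) following the
    >5% criterion above. The paper additionally requires the exercise
    power to remain above P-VO2max, which must be checked separately."""
    smoothed = moving_average(np.asarray(vo2, float), int(round(3 * fs)))
    w = int(round(5 * fs))
    five_s = np.array([smoothed[k:k + w].mean()
                       for k in range(0, len(smoothed) - w + 1)])
    vo2_peak = five_s.max()                 # highest 5-s average
    vo2_end = smoothed[-w:].mean()          # average of the last 5 s
    rel_drop = (vo2_peak - vo2_end) / vo2_peak
    return vo2_peak, vo2_end, rel_drop, rel_drop > threshold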
The $\dot{V}O_2$ of the respiratory muscles (VRMO2, expressed in mL.min-1) was calculated from the work of breathing (W B, kg.min-1) using the equation proposed by Coast et al. (1993): $VRMO_2 = 34.9 + 7.45\,W_B$. The ventilatory reserve (VR) was defined as $\dot{V}_E$ expressed as a percentage of the estimated resting MVV (maximal voluntary ventilation): $VR = \dot{V}_E / MVV$, where $MVV = \text{resting } FEV_1 \times 40$ (Johnson et al. 1996).
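The two formulas just quoted translate directly into code; this sketch simply evaluates them (units as printed above) and is an illustration rather than the authors' implementation.

def vrmo2(w_b):
    """Respiratory-muscle VO2 (mL/min) from the work of breathing W_B
    (units as printed in the text), per the Coast et al. (1993) regression."""
    return 34.9 + 7.45 * w_b

def ventilatory_reserve(ve, fev1_rest):
    """VR as VE expressed as a percentage of the estimated MVV,
    with MVV = resting FEV1 x 40 (Johnson et al. 1996)."""
    mvv = fev1_rest * 40.0
    return 100.0 * ve / mvv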
Cardiac responses
A bio-impedance method was used to determine SV, HR and CO (Physioflow, Manatec Type PF05L1, Strasbourg, France). The basis for this technique and its application, validity and reliability for exhaustive exercise testing have been described (Lepretre et al. 2004), and it has been demonstrated that thoracic hyperinflation does not alter CO (Charloux et al. 2000). For this experiment, SV, HR and CO values were averaged every five seconds.
Blood metabolic responses
Prior to, and at 0 and 3 min after, the IT, blood samples were collected and analysed for lactate concentration using a Lactate Pro analyser (Arkray, Japan). Prior to and after the Tlim supra session, arterialised capillary blood samples (85 μL) were analysed to measure blood pH, [La], SaO2, PaO2, PaCO2 and bicarbonate concentration ([HCO3-]) with an i-STAT dry chemistry analyser (Abbott, Les Ulis, France).
Statistical analysis
Data are reported as mean ± SD. Because the subjects did not all perform exactly the same exercise duration, data were expressed relative to the % of total duration (every 5% of Tlim supra duration) for Figure 1 and for the ANOVA. Changes in gas-exchange variables during Tlim supra were evaluated by a one-way analysis of variance (ANOVA) with repeated measures across each 5% interval, followed by multiple comparisons (Student-Newman-Keuls) to test the effect of time on the variables. The intra-class correlation (ICC) was calculated for the pre-test spirometric data. Relationships between variables (ventilatory, cardio-dynamic, arterial oxygen saturation and metabolic parameters, and $\dot{V}O_2$) at different times of the test and the final Tlim supra performance were analyzed with Pearson's correlation coefficient. In order to measure the strength of the relationship between the $\dot{V}O_2$ decrease and a given variable while controlling for the effect of the other variables, Pearson partial correlations were also calculated. Finally, to compare the main variables between the subjects who exhibited a >5% decrease in $\dot{V}O_2$ and the others, effect sizes (ES) were calculated using Cohen's d. Effect sizes of 0.8 or greater, around 0.5, and 0.2 or less were considered large, moderate, and small, respectively. The level of significance was set at P < 0.05.
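For completeness, minimal Python sketches of the two less common statistics used here, Cohen's d with a pooled SD and a residual-based Pearson partial correlation, are given below; these are standard textbook forms, offered as assumptions since the authors' exact software is not specified.

import numpy as np

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation, as used to compare
    the >5% VO2-decrease subjects against the others."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) +
                         (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def partial_corr(x, y, controls):
    """Pearson partial correlation between x and y, controlling for the
    variables in `controls` (a list of arrays), via regression residuals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Z = np.column_stack([np.ones(len(x))] + [np.asarray(c, float) for c in controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])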
|
2016-05-04T20:20:58.661Z
|
2013-12-05T00:00:00.000
|
{
"year": 2013,
"sha1": "06caa227e281eade97d92b43410dda619120b282",
"oa_license": "CCBY",
"oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/2193-1801-2-651",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "251f81a8bb13565d488046eb73d28c2b1726ad21",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
11228601
|
pes2o/s2orc
|
v3-fos-license
|
An Efficient Algorithm For Simulating Fracture Using Large Fuse Networks
The high computational cost involved in modeling of the progressive fracture simulations using large discrete lattice networks stems from the requirement to solve {\it a new large set of linear equations} every time a new lattice bond is broken. To address this problem, we propose an algorithm that combines the multiple-rank sparse Cholesky downdating algorithm with the rank-p inverse updating algorithm based on the Sherman-Morrison-Woodbury formula for the simulation of progressive fracture in disordered quasi-brittle materials using discrete lattice networks. Using the present algorithm, the computational complexity of solving the new set of linear equations after breaking a bond reduces to the same order as that of a simple {\it backsolve} (forward elimination and backward substitution) {\it using the already LU factored matrix}. That is, the computational cost is $O(nnz({\bf L}))$, where $nnz({\bf L})$ denotes the number of non-zeros of the Cholesky factorization ${\bf L}$ of the stiffness matrix ${\bf A}$. This algorithm using the direct sparse solver is faster than the Fourier accelerated preconditioned conjugate gradient (PCG) iterative solvers, and eliminates the {\it critical slowing down} associated with the iterative solvers that is especially severe close to the critical points. Numerical results using random resistor networks substantiate the efficiency of the present algorithm.
Introduction
Progressive damage evolution leading to failure of disordered quasi-brittle materials has been studied extensively using various types of discrete lattice models [1,2,3,4,5,6,7,8]. Large-scale numerical simulation of these lattice networks, in which the damage is accumulated progressively by breaking one bond at a time until the lattice system falls apart, has often been hampered by the fact that a new large set of linear equations has to be solved every time a lattice bond is broken. Since the number of broken bonds at failure, $n_f$, increases with increasing lattice system size $L$, i.e., $n_f \sim O(L^{1.7})$, numerical simulation of large lattice systems becomes prohibitively expensive. Furthermore, in fracture simulations using discrete lattice networks, ensemble averaging of numerical results is necessary to obtain a realistic representation of the lattice system response. This further increases the computational cost associated with modeling fracture in disordered quasi-brittle materials using large discrete lattice networks.
Fourier accelerated PCG iterative solvers [9,10,11] have been used in the past for simulating the material breakdown using large lattices. However, these methods do not completely eliminate the critical slowing down associated with the iterative solvers close to the critical point. As the lattice system gets closer to macroscopic fracture, the condition number of the system of linear equations increases, thereby increasing the number of iterations required to attain a fixed accuracy. This becomes particularly significant for large lattices. Furthermore, the Fourier acceleration technique is not effective when fracture simulation is performed using central-force and bond-bending lattice models [10].
This study presents an algorithm that combines the multiple-rank sparse Cholesky downdating scheme with the rank-p inverse updating scheme for the stiffness matrix, which effectively removes the computational bottleneck of re-solving the new set of equations every time a bond is broken. In this paper, we consider a random threshold model problem, where the lattice consists of fuses having the same conductance, but the bond-breaking thresholds, $i_c$, are based on a broad (uniform) probability distribution, which is constant between 0 and 1. This relatively simple model has been used extensively in the literature [1,2,3,4,5,6,7] for simulating fracture and progressive damage evolution in brittle materials, and provides a meaningful benchmark for comparing different algorithms. A broad threshold distribution represents large disorder and exhibits diffusive damage leading to progressive localization, whereas a very narrow threshold distribution exhibits brittle failure in which the propagation of a single crack causes material failure. Periodic boundary conditions are imposed in the horizontal direction to simulate an infinite system, and a constant voltage difference (displacement) is applied between the top and the bottom of the lattice system. The simulation is initiated with a triangular lattice of intact fuses of size $L \times L$, in which disorder is introduced through the random breaking thresholds. The voltage $V$ across the lattice system is increased until a fuse burns out (bond breaking). A fuse burns whenever the electrical current (stress) in the fuse (bond) exceeds its breaking threshold current (stress). The current is redistributed instantaneously after a fuse is burnt. The voltage is then gradually increased until a second fuse is burnt, and the process is repeated. Each time a fuse is removed, the electrical current is redistributed, and hence it is necessary to re-solve the Kirchhoff equations to determine the current flowing in the remaining bonds of the lattice. This step is essential for determining which fuse is going to burn under the redistributed currents. Therefore, numerical simulations leading to the final breakdown of the lattice network are very time consuming, especially with increasing lattice system size.
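The quasi-static loop just described can be sketched as follows; this is a schematic Python illustration, with the Kirchhoff solve abstracted behind a user-supplied solve routine and with dense matrix updates for brevity (the paper's point, developed below, is precisely how to avoid re-solving from scratch). All names here are illustrative.

import numpy as np

def simulate_fuse_network(A, fuses, thresholds, solve):
    """Quasi-static random fuse model loop (schematic).
    A: nodal conductance matrix assembled for a unit applied voltage;
    fuses: list of intact fuses as (i, j, k_ij) tuples;
    thresholds: per-fuse breaking currents i_c (numpy array), uniform on (0, 1);
    solve: routine returning nodal voltages for the current A."""
    broken = []
    while fuses:
        volts = solve(A)  # re-solve the Kirchhoff equations (unit voltage)
        currents = np.array([k * abs(volts[i] - volts[j])
                             for (i, j, k) in fuses])
        if currents.max() <= 0.0:          # network no longer conducts
            break
        # under quasi-static loading, the next fuse to burn is the one
        # whose current reaches its threshold at the lowest applied voltage
        m = int(np.argmax(currents / thresholds))
        i, j, k = fuses.pop(m)
        thresholds = np.delete(thresholds, m)
        # burning fuse ij is a rank-one change to the conductance matrix
        A[i, i] -= k; A[j, j] -= k
        A[i, j] += k; A[j, i] += k
        broken.append((i, j))
    return broken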
Summary of the Proposed Algorithm
The algorithm presented in this paper reduces the computational complexity of obtaining the solution x_n, after the n-th bond is broken, to a backsolve using the already existing factorization of the stiffness matrix A_m, and p = (n - m) vector updates. The algorithm is based on the well-known Sherman-Morrison-Woodbury formula [12] for obtaining the inverse of the new stiffness matrix, A^-1_{n+1} (after the (n+1)-th fuse is burnt), from the old stiffness matrix inverse A^-1_n through a rank-one update. In fact, the algorithm is such that if the inverse of the lattice stiffness matrix at any stage (m = 0, 1, 2, ...) of the analysis, A^-1_m, is available, then all subsequent analysis involving (n = m+1, m+2, ...) burnt fuses can be carried out using p = (n - m) vector updates. However, since the inverse of the stiffness matrix is rarely ever explicitly calculated, the algorithm additionally requires a backsolve using the already factored matrix A_m. The backsolve operation is further simplified by the fact that it is performed on a trivial load vector, and hence the solution can be obtained easily.
Based on the above description of the algorithm presented in this paper, given the factorization of the matrix A_m, the computational cost involved in all the subsequent steps (n = m+1, m+2, ...) is a backsolve using the already factored matrix, and p = (n - m) vector updates. The computational complexity of the backsolve is O(nnz(L_m)), where nnz(L_m) denotes the number of non-zeros of the Cholesky factorization L_m of A_m. The computational complexity of p vector updates is O(p n_dof), where n_dof denotes the number of degrees of freedom in the system. As p increases, it is possible that the computational cost associated with the p vector updates exceeds the cost involved in the factorization of the matrix A_n. Under these circumstances, it is advantageous to obtain the factorization L_n of the new stiffness matrix A_n, and use this L_n for all the subsequent backsolve analysis steps, until the computational cost associated with the vector updates once again exceeds the stiffness factorization cost. Using the algorithm presented in the paper, it is not necessary to re-factorize the new stiffness matrix A_n. Instead, we adopt the multiple-rank update of the sparse Cholesky factorization algorithm [13,14] for updating L_m -> L_n. This multiple-rank update of L_m to obtain the new factorization L_n is computationally cheaper than a direct factorization of the new stiffness matrix A_n [13,14].
Proposed Algorithm
In the following, we describe the updating scheme for the inverse of the stiffness matrix in the case of the scalar random fuse model after a fuse has been burnt. A similar procedure can be applied for central-force and beam models [15].
Let A_n represent the stiffness matrix of the random fuse network system in which n fuses are either missing (random dilution) or have been burnt during the analysis. Let us also assume that a fuse ij (the (n+1)-th fuse) is burnt when the externally applied voltage is increased gradually. In the above description, i and j refer to the global degrees of freedom connected by the fuse before it is broken. For the scalar random fuse model, the degrees of freedom i and j are also equivalent to the nodes i and j connected by the fuse before it is broken. The new stiffness matrix A_{n+1} of the lattice system after the fuse ij is burnt is given by

A_{n+1} = A_n - k_{ij} v v^t    (1)

where

v^t = (0 ... 0 1 0 ... 0 -1 0 ... 0)    (2)

has its non-zero entries 1 and -1 at the i-th and j-th locations, respectively, and k_{ij} is the conductance of the fuse ij before it is broken. After breaking the fuse ij, the electrical current in the network is redistributed instantaneously. The redistributed current values in the network are calculated by re-solving the Kirchhoff equations, i.e., by solving the new set of equations formed by the matrix A_{n+1}. This procedure is very time consuming, since a new set of equations (the inverse of A_{n+1} for n = 0, 1, 2, ...) needs to be solved every time after breaking the (n+1)-th fuse. However, significant computational advantages can be gained if the inverse of A_{n+1} is obtained simply by updating the inverse of A_n. This is achieved by using the well-known Sherman-Morrison-Woodbury formula for inverting the rank-p update of a matrix. Thus, the inverse A^-1_{n+1} of Eq. (1) can be expressed as

A^-1_{n+1} = A^-1_n + (u u^t) / (1/k_{ij} - v^t u)    (3)

with

u = A^-1_n v    (4)

Hence, the inverse of the stiffness matrix of the lattice system after breaking the (n+1)-th fuse ij is obtained simply by a rank-one update of the inverse of the stiffness matrix before the fuse is broken. Further, if the inverse of the matrix A_n is available explicitly, then the vector u can be obtained trivially from the i-th and j-th columns of A^-1_n. In particular, this implies that if the inverse of the matrix A_n is available explicitly at any stage n = 0, 1, 2, ... of the analysis, then the redistributed currents in all subsequent stages of the analysis involving m = n+1, n+2, ... burnt fuses can be obtained in a trivial fashion from the column vectors of A^-1_n and the vectors u_p, where p = 1, 2, ..., (m - n). However, since the inverse of the stiffness matrix A_n is not usually calculated explicitly, the vector u is obtained using the already factorized A_n matrix through a backsolve operation (forward reduction and backward substitution) on the vector v (Eq. (4)).
REMARK 1: Without loss of generality, when the fuse that is broken is attached to a constrained/prescribed degree of freedom j, the vector v reduces to

v^t = (0 ... 0 1 0 ... 0)    (5)

with its non-zero entry at the i-th location, and the corresponding update vector is again obtained from the backsolve

u = A^-1_n v    (6)

In the case of periodic boundary conditions, consider the case of a broken fuse jk that is attached to a slave degree of freedom k whose master degree of freedom is i. Under these circumstances, the methodology presented earlier is applicable in a straightforward manner if it is understood that breaking the fuse jk is equivalent to breaking the fuse ij.
REMARK 2: The load vector b_{n+1} will differ from the load vector b_n only if the (n+1)-th broken fuse ij is attached to a prescribed degree of freedom, where a constant voltage difference is imposed. Once again, for presentation purposes, let us assume that j is such a prescribed degree of freedom. Then the load vector b_{n+1} is obtained from b_n through a correction involving the vector w (Eq. (7)), where

w^t = k_{ij} (0 ... 0 1 0 ... 0)

has its non-zero entry at the i-th location. If neither i nor j is a prescribed degree of freedom, then w = 0.
Before breaking the (n+1)-th fuse, the solution vector x_n is obtained by solving the Kirchhoff equations

A_n x_n = b_n    (9)

After breaking the (n+1)-th fuse that connects the i-th and j-th degrees of freedom, the updated solution vector x_{n+1} is obtained by solving the new set of Kirchhoff equations

A_{n+1} x_{n+1} = b_{n+1}    (10)

Substituting Eqs. (3), (7) and (9) into the solution of Eq. (10) and simplifying the result yields an update formula (Eq. (11)) that expresses x_{n+1} in terms of x_n, the scalar v^t x_n, and the column vector u. The only unknown in Eq. (11) is the column vector u, which can be obtained through a backsolve operation using either Eq. (4) or Eq. (6). Furthermore, it is not necessary to explicitly assemble the matrix A_n and perform a factorization to do the backsolve operation. Instead, we can use the already factorized matrix A_m to obtain the vector u. In the above description, m < n denotes the latest broken bond at which the factorization L_m of A_m is available. To see this clearly, let us first decompose the matrix A^-1_n into A^-1_m and a matrix C such that

A^-1_n = A^-1_m + C    (12)

where C collects the rank-one corrections a_l u_l u_l^t contributed by the fuses broken between stages m and n (Eqs. (13)-(15)). Due to the storage requirement (O(n_dof^2)) and the computational cost (O(n_dof^2)) associated with evaluating Eq. (15), the matrix C is never explicitly calculated or stored. Instead, the vectors u_l for l = 1, 2, ..., (n - m) are stored, and the difference of the j-th and i-th columns of C is evaluated from the stored vectors as a sum weighted by the component differences (u_li - u_lj) (Eq. (16)), where u_li and u_lj refer to the i-th and j-th components of the vector u_l. Equation (16) reduces the storage and the computational cost to O(p n_dof) operations each. Even with this modification, the storage and computational requirements can become prohibitively expensive as the number of updates, p, increases, and hence it is necessary to limit the maximum number of vector updates between two successive factorizations to a certain maxupd. That is, it is necessary to perform or update the factorization of the stiffness matrix A at regular intervals.
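The bookkeeping described above can be sketched as follows. This is a minimal illustration of the rank-p solution update under stated assumptions: cho_solve_m stands for a backsolve with the cached factorization L_m, and terms holds the pairs (a_l, u_l) accumulated since stage m; both names are illustrative, not from the paper.

    import numpy as np

    def apply_inverse(cho_solve_m, terms, b):
        # A_n^-1 b = A_m^-1 b + sum_l a_l (u_l^T b) u_l   (cf. Eqs. (12)-(16))
        x = cho_solve_m(b)
        for a, u in terms:
            x = x + a * (u @ b) * u   # one vector update per fuse broken since stage m
        return x

    def break_fuse(cho_solve_m, terms, i, j, k_ij, n_dof):
        # Record the rank-one correction produced by burning fuse ij (Eqs. (1)-(4))
        v = np.zeros(n_dof)
        v[i], v[j] = 1.0, -1.0
        u = apply_inverse(cho_solve_m, terms, v)   # u = A_n^-1 v via backsolve + updates
        a = 1.0 / (1.0 / k_ij - v @ u)             # Sherman-Morrison denominator
        terms.append((a, u))

Once len(terms) reaches maxupd, the stored corrections are folded into a fresh factorization via the multiple-rank update discussed next.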
Instead of re-factorizing the stiffness matrix A after every maxupd steps, it is more effective to update the factorization L_m using the multiple-rank sparse Cholesky factorization update algorithm [13,14]. This multiple-rank update of L_m to obtain the new factorization L_{n+1}, after breaking the (n+1)-th fuse, is computationally cheaper than the direct factorization of the new stiffness matrix A_{n+1} [13,14]. We use the multiple-rank downdate algorithm presented in [13,14] to obtain the new Cholesky factorization L_{n+1} from the existing Cholesky factor L_m. The multiple-rank downdate algorithm [13,14] is based on the analysis and manipulation of the underlying graph structure of the stiffness matrix A, and on the methodology presented in Gill et al. [16,17] for modifying a dense Cholesky factorization. This algorithm incorporates the change in the sparsity pattern of L and is optimal in the sense that the computational time required is proportional to the number of changing non-zero entries in L. In particular, since the breaking of fuses is equivalent to removing edges in the underlying graph structure of the stiffness matrix A, the new sparsity pattern of the modified L must be a subset of the sparsity pattern of the original L. Denoting the sparsity pattern of L by P(L), we have

P(L_{n+1}) is a subset of P(L_m)    (17)

Therefore, we can even use the modified dense Cholesky factorization update (algorithm 5 in Davis et al. [13]) and work only on the non-zero entries in L. Furthermore, since the changing non-zero entries in L depend on the i-th and j-th degrees of freedom of the fuse ij that is broken, it is only necessary to modify the non-zero elements of a submatrix of L.
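For concreteness, the following is a minimal dense rank-one downdate in the spirit of Gill et al. [16,17]; it is a sketch only, and the sparse algorithm of [13,14] performs the same recurrence while touching only the changing non-zeros of L. For a burnt fuse ij, the downdate vector is x = sqrt(k_ij) v, since A_{n+1} = A_n - (sqrt(k_ij) v)(sqrt(k_ij) v)^t.

    import numpy as np

    def chol_downdate(L, x):
        # Given lower-triangular L with A = L L^T, return L' with
        # L' L'^T = A - x x^T (the downdated matrix must remain SPD).
        L, x = L.copy(), x.astype(float).copy()
        n = L.shape[0]
        for k in range(n):
            r2 = L[k, k] ** 2 - x[k] ** 2
            if r2 <= 0.0:
                raise ValueError("downdated matrix is not positive definite")
            r = np.sqrt(r2)
            c, s = r / L[k, k], x[k] / L[k, k]
            L[k, k] = r
            L[k + 1:, k] = (L[k + 1:, k] - s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
        return L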
The multiple-rank update of the sparse Cholesky factorization is computationally superior to an equivalent series of rank-one updates, since the multiple-rank update makes one pass through L in computing the new entries, while a series of rank-one updates requires multiple passes through L [14]. The multiple-rank update algorithm updates the Cholesky factorization L_m of the matrix A_m to L_{n+1} of the new matrix A_{n+1}, where A_{n+1} = A_m + Y Y^t, and Y represents an n_dof x p rank-p matrix. The computational cost involved in breaking the (n+1)-th fuse ij is simply a backsolve operation (O(nnz(L_m))) on a load vector given by Eq. (2), using the already factored matrix A_m, (n + 2 - m) vector updates, and one vector inner product.
The optimum number of steps between successive factorizations of the matrix A is determined by minimizing the computational cpu time required for the entire analysis. Let t_fac and t_upd denote the average cpu time required for performing/updating the factorization of A_m and the average cpu time required for a single rank-one update of the solution, respectively. Note that evaluating the solution after the (n+1)-th broken fuse from the factorization at stage m requires (n + 1 - m) vector updates. Let the estimated number of steps for the lattice system failure be n_steps. Then, the total cpu time tau required for solving the linear system of equations until the lattice system failure is given by

tau = n_fac t_fac + sum_{k=1}^{(n_steps - n_fac)/n_fac} k n_fac t_upd
    = n_fac t_fac + (1/2) ((n_steps - n_fac)/n_fac) (n_steps - n_fac) t_upd    (18)

where n_fac denotes the number of factorizations until lattice system failure. The optimum number of factorizations, n_fac^opt, for the entire analysis is obtained by minimizing the function tau. The maximum number of vector updates, maxupd, between successive factorizations is estimated as

maxupd = (n_steps - n_fac^opt) / n_fac^opt    (19)
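A small sketch of this schedule computation, under the assumption that t_fac and t_upd have been measured and n_steps estimated: setting d(tau)/d(n_fac) = 0 in Eq. (18) gives n_fac^opt = n_steps sqrt(t_upd / (2 t_fac + t_upd)), from which maxupd follows by Eq. (19).

    import math

    def optimal_schedule(t_fac, t_upd, n_steps):
        # Minimize Eq. (18): tau = n_fac*t_fac + (n_steps - n_fac)^2/(2*n_fac)*t_upd
        n_fac_opt = max(1, round(n_steps * math.sqrt(t_upd / (2.0 * t_fac + t_upd))))
        maxupd = (n_steps - n_fac_opt) / n_fac_opt   # Eq. (19)
        return n_fac_opt, maxupd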
Numerical Simulation Results
In the following, we consider two alternate forms of the algorithm presented in this paper.

Solver Type A: Given the factorization L_m of A_m, we use a rank-one sparse Cholesky update/downdate [13] to obtain the factorization L_{n+1} (O(nnz(L_n))) for all subsequent values of n = m, m+1, .... Once the factorization L_{n+1} of A_{n+1} is obtained, the solution vector x_{n+1} is obtained by a backsolve operation (O(nnz(L_{n+1}))).

Solver Type B: Given the factorization L_m of A_m, the algorithm evaluates the new solution vector x_{n+1}, after the (n+1)-th fuse is burnt, using Eq. (11) (O(nnz(L_m)) plus (n + 2 - m) vector updates). Instead of re-factorizing the matrix after maxupd steps, we use a rank-p sparse Cholesky update/downdate [14] to obtain the factorization L_{m+maxupd} of the matrix A_{m+maxupd} (O(nnz(L_m))).
The above two algorithms are benchmarked against PCG iterative solvers in which optimal [18,19,20,21] circulant matrices are used as preconditioners to the Laplacian operator (Kirchhoff equations). The Fourier accelerated PCG presented in [9,10,11] is not optimal in the sense described in [18,19,20,21], and hence it is expected to take a larger number of CG iterations than the optimal circulant preconditioners.
In the numerical simulations using solver types A and B, the supernodal Cholesky factorization is performed using the TAUCS solver library (http://www.tau.ac.il/~stoledo/taucs). In these simulations, the maximum number of vector updates, maxupd, is chosen to be a constant for a given lattice size. We choose maxupd = 25 for L = {4, 8, 16, 24, 32}, maxupd = 50 for L = 64, and maxupd = 100 for L = {128, 256, 512}. For L = 512, maxupd is limited to 100 due to memory constraints. By keeping the maxupd value constant, it is possible to realistically compare the computational cost associated with the different solver types. Moreover, the relative cpu times taken by these algorithms remain the same even when the simulations are performed on different platforms.
Tables 1 and 2 present the cpu and wall-clock times taken for one configuration (simulation) using solver types A and B, respectively. These tables also indicate the number of configurations, N_config, over which ensemble averaging of the numerical results is performed. The cpu and wall-clock times taken by the optimal circulant matrix preconditioned iterative solver are presented in Table 3. For the iterative solvers, the number of iterations presented in Table 3 denotes the average number of total iterations taken to break one intact lattice configuration until it falls apart.
Based on the results presented in Tables 1-3, it is clear that for modeling the breakdown of disordered media, that is, starting with an intact lattice and successively breaking bonds until the lattice system falls apart, solver types A and B based on direct solvers are superior to the Fourier accelerated iterative PCG solver techniques. It should be noted that for larger lattice systems, limitations on the available memory of the processor may decrease the allowable maxupd value, as in the case of L = 512 using solver type B. However, this is not a concern for simulations performed using solver type A.
Using solver type A, we have performed numerical simulations on two-dimensional triangular and diamond (square lattice inclined at 45 degrees between the bus bars) lattice networks. Table 4 presents the number of broken bonds at peak load, n_p, and at fracture, n_f, for each of the lattice sizes considered. In addition, Table 4 also presents the number of configurations, N_config, over which statistical averaging is performed for the different lattice sizes. The numerical results presented in Tables 1-3 were obtained on a single processor of Cheetah (27 Regatta nodes with thirty-two 1.3 GHz Power4 processors each), the eighth fastest supercomputer in the world (http://www.ccs.ornl.gov). However, the numerical simulation results presented in Table 4 were obtained on the Eagle supercomputer (184 nodes with four 375 MHz Power3-II processors) at the Oak Ridge National Laboratory, in order to run simulations simultaneously on a larger number of processors. Figure 1 presents snapshots of the progressive damage evolution for the case of a broadly distributed random thresholds model problem in a triangular lattice system of size L = 512.
Conclusions
The paper presents an algorithm based on a rank-one update of the inverse of the stiffness matrix and the multiple-rank downdating of the sparse Cholesky factorization for simulating fracture and damage evolution in disordered quasi-brittle materials using discrete lattice networks. Using the proposed algorithm, the average computational cost associated with breaking a bond reduces to the same order as that of a simple backsolve (forward elimination and backward substitution) operation using the already LU factored matrix. This algorithm, based on direct solver techniques, eliminates the critical slowing down observed in fracture simulations using conventional iterative schemes. Numerical simulations on random resistor networks demonstrate that the present algorithm is computationally superior to the commonly used Fourier accelerated preconditioned conjugate gradient iterative solver.
For fracture simulations using discrete lattice networks, ensemble averaging of the numerical results is necessary to obtain a realistic representation of the lattice system response. In this regard, for very large lattice systems with a large number of equations, this methodology is especially advantageous, as the LU factorization of the system of equations can be performed using a parallel implementation on multiple processors. Subsequently, this factored LU decomposition can be distributed to each of the processors to continue with independent fracture simulations that require only the less intensive backsolve operations.
Position-based Selective Neighbors
In this paper, we propose a routing protocol, named Position-based Selective Neighbors (PSN), for controlling Route Request (RREQ) propagation in Mobile Ad-hoc Networks (MANETs). PSN relies on the Residual Energy (RE) and the Link Lifetime (LLT) factors to select better end-to-end paths between mobile nodes. The key concept is to consider the RE and the LLT to find the best neighboring nodes to forward the received RREQs. Simulations have been performed to compare PSN with other pioneering routing protocols. Experimental results show that PSN performs better than its competitors. Indeed, our protocol increases the network lifetime and reduces the network overhead. Furthermore, it reduces the overhead generated by redundant RREQs, while maintaining good reachability among the mobile nodes. Keywords—Mobile Ad-hoc network (MANET); routing protocol; energy aware; link lifetime; AODV
I. INTRODUCTION
Academics and industry have become increasingly interested in wireless research over the last decade. Wireless access is attractive because it allows free movement. The Mobile Ad-hoc Network (MANET) has proved a very interesting setting for finding ways to improve network operation and performance. A MANET typically consists of mobile nodes interconnected by wireless links, with no access points or permanent infrastructure. Moreover, a lot of work has been performed across the layers of the Open Systems Interconnection (OSI) model, particularly at the Medium Access Control (MAC) layer. In particular, many routing algorithms have been proposed to provide end-to-end routes that are reliable and robust against node mobility.
Neighboring nodes in wireless networks share the wireless medium, and the nodes must compete with each other to gain access to it (the channel). The MAC layer controls such operation: the MAC protocol governs the access of wireless devices to the shared wireless medium and imposes many time constraints in order to properly regulate the shared resource and to avoid collisions. Collisions can happen, as illustrated in Figure 1, when node A does not know that node B is simultaneously receiving data from node C. As a result, A can start its own transmission, which will cause a collision at node B. Neighboring-node collisions and interference, the presence of hidden nodes, and the distances between senders and receivers have a significant effect on wireless network performance. MANETs face these problems particularly acutely, given the large volume of data and control packet traffic and the mobile topology. Because a MANET's topology is highly mobile, and the nodes are both data-generating and forwarding entities within the network, designing efficient and robust routing protocols requires a lot of effort. Several routing protocols have recently been put forward for MANETs, whose goal is to establish end-to-end paths in multi-hop scenarios between sink destination nodes and data-generating sources [1-6]. In conventional on-demand routing protocols [4,5], nodes discover routes to a specific destination by broadcasting a Route Request (RREQ) packet. On reception of a RREQ, a node checks whether that packet was previously received. If so, the node drops it. Otherwise, a Route Reply (RREP) is sent back to the source node if a route to the destination is available; if not, the node rebroadcasts the RREQ to its immediate neighbors until the destination is found. This route-discovery method is called blind flooding. The rebroadcasting of a copy of the received RREQ by each mobile node results in at most N - 2 rebroadcasts within the global network, where N is the number of nodes. This leads to excessive redundant retransmission, and hence high channel contention, which may cause excessive packet collisions within dense networks. This is known as the broadcast storm problem [7]: it greatly raises the end-to-end delay and network communication overhead, while wasting bandwidth [7,9].
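A sketch of the blind-flooding behaviour just described follows; the names (Node, seen, routing_table, send_rrep, rebroadcast) are illustrative, not from any specific AODV implementation.

    class Node:
        """Minimal illustrative node state for conventional RREQ handling."""
        def __init__(self, address, routing_table, send_rrep, rebroadcast):
            self.address = address
            self.routing_table = routing_table    # destinations with known routes
            self.send_rrep = send_rrep            # callback: reply toward the source
            self.rebroadcast = rebroadcast        # callback: forward to neighbors
            self.seen = set()                     # (source, request_id) pairs

    def on_rreq(node, rreq):
        key = (rreq.source, rreq.request_id)
        if key in node.seen:
            return                                # duplicate RREQ: drop it
        node.seen.add(key)
        if rreq.destination == node.address or rreq.destination in node.routing_table:
            node.send_rrep(rreq)                  # route available: send an RREP back
        else:
            node.rebroadcast(rreq)                # blind flooding: up to N - 2 rebroadcasts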
Many existing approaches have attempted to resolve the flooding problem by reducing the number of redundant messages. On the other hand, this results in a low coverage and connectivity degree. In fact, the interdependence between the two phenomena makes it problematic to balance message overhead (in other words, the redundancy level) against coverage [8]. Decreasing collisions within the network can therefore improve network performance, especially in MANETs, where nodes may collaborate to reach nodes that are not within their transmission range. In addition, broadcasting RREQ messages generates duplicate messages across the full network while searching for an end-to-end path, and hence a high probability of collisions. Eliminating unnecessary RREQ packets can decrease the number of packet collisions, which improves network performance.
This paper puts forward a novel algorithm that minimizes RREQ propagation within the global network while simultaneously preserving network connectivity. In the suggested algorithm, the (x, y) coordinates of all nodes and their neighbors are known. According to these positions, each node selects its best neighbors to further rebroadcast RREQs. We divide the source node's transmission range into four equal zones (Zone1, Zone2, Zone3 and Zone4), forming a set M = {M1, M2, M3, M4}. Furthermore, we select four neighbor nodes from these zones based on the quality of their links to the source node as well as on their residual energy levels.
Our work outlines an effective routing protocol that tackles the flooding problem and minimizes RREQ propagation while maintaining comparable reachability among the nodes of the global network.
II. RELATED WORK
Several approaches have recently proposed ways of decreasing the broadcast-storm effect caused by simple flooding [7-10]. These approaches can be classified into five categories [8]: neighbor knowledge methods, probability-based approaches, position-based methods, simple flooding, and various other approaches using different techniques. Simple flooding was discussed earlier in the introduction.
A. Neighbor Knowledge Methods
The main concept of this method is expanding the data concerning a node's neighbors. Each node sends one or two neighbor node addresses to its neighbors, using the existing "hello" messages to send this information periodically. As a result, every node can implicitly know what it has in common with others. In the same vein, the authors of [11] suggested the two-hop backward-neighbor information concept, which was used to minimize the number of forwarding nodes and to reduce collisions in the network. Generally, the suggested mechanism required exchanging one-hop hello messages. A novel joint one-hop neighbor information-based flooding scheme was put forward in [12], consisting of two sub-algorithms: a receiver phase and a sender phase. The sender-phase algorithm lets a node select a subset of its one-hop neighbors to forward flooding messages, choosing forwarding nodes that can contribute most to flooding message dissemination. The authors of [13] put forward an efficient flooding scheme based on one-hop information within MANETs. Basically, every node uses its one-hop neighbor data. When looking for a new route, a node determines a subset of its neighbors as candidates for rebroadcasting the message when they receive it. Accordingly, the addresses of these nodes are attached to the RREQ message. Once a RREQ is received, a node searches for its own address. If the address is found, the receiving node selects a new subset of its own neighbors as candidates and rebroadcasts the RREQ. Otherwise, the node drops the RREQ.
Neighbor-knowledge methods succeed in reducing extra RREQs in the network. On the other hand, the addresses of all the neighboring nodes are carried by periodic hello messages, consuming the available bandwidth, which might increase the overhead. In addition, because of node mobility, the gathered two-hop or one-hop data are not always accurate.
B. Position-based Methods
It is worth mentioning that area-based methods comprise location-based and distance-based schemes. These methods consider the additional area that can be covered by a node when rebroadcasting a received message. Within the transmission range of a node, a greater distance from the previous broadcasting node results in more additional coverage, and hence a better chance of reaching more nodes. The authors of [14] put forward an approach known as Flooding based on One-hop Neighbor Information and Adaptive Holding (FONIAH).
The authors assumed that nodes know their geographical location. Sharing positions among the nodes requires every node to continuously send hello messages carrying location information. One main idea of FONIAH is the node's ability to select the furthest nodes within its transmission range. It then calculates the distance (Maximum Distance, Dmax) between itself and these furthest nodes; this distance is used for calculating the waiting time at the receiver node. Abolhasan and Wysocki suggested in [15] Position-based Selective Flooding (PSF), where a novel scheme is applied to select forwarding nodes. A received RREQ is rebroadcast by the receiving nodes only if they lie within the Forwarding Region (FR), as illustrated in Fig. 2. That is a good position from which neighbors can rebroadcast RREQs, since the signal will probably be strong and the coverage area large. However, this technique might not find the requested destination when the destination node lies opposite the forwarder. The authors of [16,27] proposed a new algorithm for reducing the overhead generated by redundant RREQ messages. In their work, Candidate Neighbors rebroadcasting a RREQ (CNRR) divides the transmission range of a node sending or rebroadcasting a RREQ into four equal zones (Zone1, Zone2, Zone3 and Zone4). One node per zone is then selected to rebroadcast the RREQ, the selection being based on the distance between the node and its neighbors.
Fig. 2. Forwarder nodes falling in the Forwarding Region.
C. Probability-based Approaches
Probability-based approaches rely on assigning different node-participation probabilities within the network. These probabilities signal to nodes whether to discard or rebroadcast a received RREQ, and their values can differ across algorithms and node conditions. Yassein et al. suggested in [17] a new probabilistic flooding algorithm that builds up the threshold value for a node having many neighbors: such a node does not rebroadcast the received RREQ, whereas a node with a low number of neighbors may rebroadcast it. Nourazar et al. proposed in [18] a Dynamic Adjusted Probabilistic Flooding (DAPF) algorithm, whose main goal is to adjust the message rebroadcast probability dynamically based on local observations and elapsed time, for instance the number of received duplicate messages and the network density. Kim et al. suggested in [19] a dynamic probabilistic broadcasting approach composed of two (probabilistic and position-based) methods. The probability is assigned to nodes on the basis of their distance from the RREQ sender: if the receiver node is near the sender node, it is less likely to rebroadcast the RREQ; otherwise, it is more likely to rebroadcast the RREQ and achieve a wider coverage area.
D. Other Approaches
Various other approaches have also been considered by the research community to tackle the broadcast storm problem. For example, the studies of [20,21] considered node speed when deciding whether to rebroadcast RREQs. Khamayseh et al. suggested in [20] two approaches for enhancing the route discovery phase and increasing the overall routing performance, taking node speed into account for participation in the route discovery phase. The two approaches are Aggregate-AODV (Agg-AODV) and Per-Hop Mobility Aware (PH-MA-AODV), in which each node keeps track of its speed. In the first approach, when a RREQ is received, the node decides whether to forward it based on its speed: if the speed is high, the received RREQ is discarded; if it is low, the node participates in the route and forwards the received RREQ. Such nodes are illustrated in Fig. 3: their speed is greater than 80 m/s, so the received RREQs are discarded. In the second approach, the node attaches its speed and then forwards the received RREQ; the destination node selects the best route to the source node on the basis of the nodes' low aggregate speed.
III. PROPOSED PSN PROTOCOL
We now discuss the PSN routing protocol. AODV, as an on-demand routing protocol, uses blind flooding to disseminate route discovery packets across the global network. Blind flooding works well when reachability is very significant. Nevertheless, since these protocols carry out end-to-end route selection using hop counts, unstable paths can be returned because of the extremely mobile MANET environment. To deal with this problem, two solutions were proposed in [27]: the authors outlined a mechanism for placing sending/forwarding-node neighbors into different zones. The authors of [28] considered link stability and looked explicitly at neighboring nodes' residual battery energy and link quality. These proposed protocols minimize the network-wide RREQ dissemination while preserving the desired connectivity. On the other hand, the previously proposed mechanisms have problems when used in isolation. Take the CNRR protocol as an example: it considers only the locations of the neighboring nodes, so RREQ forwarding decisions are based solely upon distance. Although RREQ dissemination is considerably reduced by this method, the residual energies of the nodes and the link quality are ignored; consequently, the returned routes might not be stable for long. Conversely, the Link Stability and Energy Aware (LSEA) [28] protocol takes into account nodes' residual energies and link quality during the route discovery phase. Such a method returns stable paths, leading to a high throughput and fewer delays. Yet, because it does not give careful consideration to the positions of nodes when disseminating RREQs, connectivity can be compromised.
A. PSN Route Discovery Mechanism
The route discovery process of the proposed PSN protocol comprises three phases. First of all, the neighbors of the "S" node are divided into four zones before sending an RREQ, following precisely the same mechanism as in [27]. Every neighbor's (x, y) coordinates are made known to the nodes by using specialized positioning devices such as GPS [22].
Secondly, the "S" node compares, in each specific zone, every neighbor's Link Lifetime (LLT) with the averaged LLT (LLTavg), which is computed from the LLTs of all the nodes that share links with the current "S" node in that zone. In a similar way, the "S" node compares, in the specific zone, all its neighbors' residual energies with the average residual energy (REavg).
Thirdly, a Candidate Node (CN) is selected by the "S" node among its neighbors in each specific zone, based on specific conditions, in order to forward the current RREQ. This selection is based on two conditions. First, if a neighboring node's LLT and RE are higher than LLTavg and REavg, then this node is selected as a Potential Candidate Neighbor (PCN) and added to the Potential Candidate List (PCL) of the "S" node. Second, the "S" node selects the CN from the existing PCL on the basis of LLT and RE: the node with the highest LLT and RE compared with the other PCL nodes is selected. On the other hand, if no neighboring node of the "S" node in the specific zone meets the LLTavg and REavg conditions for the PCL, then the node having the highest LLT and RE in the specific zone is selected as the CN. The same applies for all the zones.
As an example, consider the MANET topology depicted in Fig. 4, where node X intends to send a RREQ to its neighbors. First, node X divides its transmission range into four zones. Assume node X compares all its neighbors' LLTs and REs with LLTavg and REavg in Zone1. On the basis of these checks, nodes A, B and C are selected as Potential Candidate Neighbors and put into the PCL list; consequently, node X's PCL in Zone1 = {A, B, C}. Next, node X selects the best node to forward the RREQ by comparing LLTs and REs. Suppose (LLTC and REC) > (LLTB and REB) > (LLTA and REA). In that case, node X selects node C as its CN in Zone1. In the same way, CNs are selected within the other three zones, following the previously discussed procedure. Moreover, node X attaches all selected CN addresses, with their respective zones, to the RREQ packet. Upon receiving the RREQ packet, all of node X's neighbors check whether their addresses are part of the address list. If so, they forward the RREQ to their own neighbors according to the PSN procedure; otherwise, they simply drop it.
To understand the PSN route discovery mechanism, consider Fig. 4 and Fig. 5, which, for simplicity, present just Zone1 and show the interactions of the "S" node with all its neighbors within that zone. The links between the "S" node and the neighboring nodes (A, B, C, D, E and G) are shown, with each node's LLT displayed above the link and its RE below the node. As a result, every node knows its neighbors' LLTs and REs.
The authors of [28,29] suggested that every node learns the LLTs and REs of all its neighbor nodes by exchanging "hello" messages. In a similar way, the "hello" message in the proposed PSN protocol is modified to convey the current node's (x, y) coordinates and RE to all its neighbors. This frequent exchange of "hello" messages helps every node obtain fresh data concerning its neighbors' residual energy and link quality. Fig. 5 shows the "S" node intending to send a RREQ packet to its neighbors. After computing the neighboring nodes' LLTavg and REavg, the "S" node compares these values with the LLT and RE of every neighbor in order to discover which nodes have LLTs and REs higher than LLTavg and REavg. In this example, only the A, E and F nodes are included in the PCL, while the B, C, D and G nodes in Zone1 are left off the PCL because LLTavg and REavg are higher than their LLTs and/or REs. Fig. 5 also indicates that node E, among the PCNs within the PCL, is the best candidate to be selected as the CN, based on its good LLT and RE. The "S" node repeats this for all the zones so that one node is selected in every zone as a CN.
Finally, as the last phase, the "S" node includes all CN addresses and broadcasts the RREQ. The same RREQ is received by the nodes of all the zones. Whenever a node finds its address within the address list, it rebroadcasts the current RREQ according to the aforementioned method; the other neighboring nodes simply drop the RREQ.
Table 1 shows Algorithm 1, which selects four CNs for forwarding the RREQ as follows. First, the full area around the "S" node is split into four separate zones, represented by the set M = {M1, M2, M3, M4}, where each member of M represents the set of nodes inside the corresponding zone. Next, the algorithm iterates through every node of each specific zone and selects the PCL set, and from it the CN, in that zone. In the end, the "S" node sends the RREQ packet to the chosen candidate nodes.
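A minimal sketch of this selection logic is shown below; Neighbor is a hypothetical record carrying a neighbor's coordinates, LLT and RE, and the quadrant split around the "S" node stands in for the zone division of [27]. This is an illustration, not Algorithm 1 reproduced verbatim.

    from dataclasses import dataclass

    @dataclass
    class Neighbor:          # hypothetical record; fields are illustrative
        address: str
        x: float
        y: float
        llt: float           # link lifetime to the "S" node
        re: float            # residual energy

    def select_candidate_neighbors(sx, sy, neighbors):
        zones = {1: [], 2: [], 3: [], 4: []}
        for nb in neighbors:                      # quadrant split around (sx, sy)
            dx, dy = nb.x - sx, nb.y - sy
            zone = 1 if dx >= 0 and dy >= 0 else \
                   2 if dx < 0 and dy >= 0 else \
                   3 if dx < 0 else 4
            zones[zone].append(nb)
        cns = []
        for members in zones.values():
            if not members:
                continue                          # empty zone: no CN selected there
            llt_avg = sum(n.llt for n in members) / len(members)
            re_avg = sum(n.re for n in members) / len(members)
            pcl = [n for n in members if n.llt > llt_avg and n.re > re_avg]
            pool = pcl if pcl else members        # fall back to the best node in the zone
            cns.append(max(pool, key=lambda n: (n.llt, n.re)))
        return cns                                # their addresses are attached to the RREQ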
B. Percentage of RREQ Reception by Neighbour Nodes
As discussed in [7], rebroadcasting a RREQ can provide at most an additional 61% coverage area across the full network [7]. PSN offers a further enhancement with an algorithm that also makes CNs check for optimized RREQ dissemination. For example, when Algorithm 1 (provided in Table 1) is run by any sender/forwarder "S" node, four CNs are selected among its neighbors. The "S" node attaches the addresses of the selected CNs to the RREQ packet and then broadcasts it. Only the attached CNs are permitted to further process the received RREQ, which they do if they find their addresses in the received RREQ.
By verifying the distances between each of its own neighbors and the "S" sender, a CN can check how many of its neighbors received the same RREQ: if the distance is smaller than the transmission range of "S", the CN assumes that the neighbor received the same RREQ as itself. Thus, any CN can compute the percentage of its neighbors that received the same RREQ. Through extensive simulation, it was observed that the percentage that best improves network performance is 75%. Hence, when more than 75% of a CN's neighbors received the same RREQ, the CN should not rebroadcast it, as most of its neighbors already have it and rebroadcasting is unnecessary. When fewer than 75% of the CN's neighbors received the same RREQ, the CN rebroadcasts it. Fig. 6 illustrates the overhead per network link when CNs apply a predefined percentage threshold for rebroadcasting received RREQs.
The results presented in Fig. 6 demonstrate the following: if the percentage threshold is low, the overhead is also low, and vice versa. In other words, if fewer CN neighbors have received the same RREQ, the CN node rebroadcasts the received RREQ, thus adding more overhead to the network. Conversely, if the percentage threshold is set low, most CNs keep the RREQ; finding the intended destination then becomes improbable, since few nodes receive the RREQ. For that reason, the balance between reachability and the overhead added to the network is struck by setting the percentage at 75%.
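The rebroadcast test a CN applies can be sketched as follows, reusing the hypothetical Neighbor record from the earlier sketch; the 75% value is the empirically determined threshold discussed above.

    def should_rebroadcast(cn_neighbors, sender_x, sender_y, tx_range, threshold=0.75):
        # Fraction of the CN's neighbors lying inside the sender's transmission
        # range, i.e., neighbors assumed to have already received the same RREQ.
        if not cn_neighbors:
            return False
        covered = sum(1 for n in cn_neighbors
                      if (n.x - sender_x) ** 2 + (n.y - sender_y) ** 2 <= tx_range ** 2)
        return covered / len(cn_neighbors) < threshold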
IV. PSN PERFORMANCE EVALUATION AND RESULTS ANALYSIS
The PSN protocol was implemented in the NS2 simulator [23], version 2.34. NS2 is a discrete event network simulator widely used for simulating real network scenarios. It is freely available and was initially designed to simulate wired networks, but it has been extended to simulate wireless networks including MANETs, wireless LANs and wireless sensor networks. Moreover, it is organized according to the OSI reference model [24]. It was shown in [25] that 57% of all published simulation-based papers used NS2 as their simulation tool, which confirms that NS2 is a powerful and trusted network simulator.
A. Simulation Environment and Parameters
The suggested PSN protocol is exhaustively analyzed through comparison with our previously proposed A-LSEA and C-CNRR schemes. The following section discusses in detail the results obtained by comparing AODV, C-CNRR, A-LSEA and PSN using the parameters given in Table 2. The random waypoint mobility model is used for the mobile nodes, where every node moves at a constant speed randomly chosen in the range [5-30 m/s]. Whenever a node reaches a given random destination, it pauses for two seconds and then starts moving again toward a new random destination.
B. Results and Discussion for the First Simulation
This sub-section analyzes the obtained results in detail and presents a comparative discussion.
1) Total overhead: Fig. 7 compares the overhead of the proposed scheme with the AODV, C-CNRR and A-LSEA overheads. Fig. 7 shows that the overhead rises significantly with growing mobility for AODV; by contrast, the rise is moderate for the proposed PSN, A-LSEA and C-CNRR schemes. This is because the AODV protocol floods any received RREQ with no constraints, that is, without considering energy level or link quality. Comparing the other three schemes, it is clear that PSN outperforms A-LSEA and C-CNRR, since PSN considers LLTavg and REavg and selects a specific set of nodes (CNs) for rebroadcasting a RREQ. In addition, the PSN routing protocol minimizes the overhead by making the CNs verify how many of their own neighbors received the same RREQ before sending it. On the other hand, C-CNRR considers only the distance, while A-LSEA considers both constraints but without a zoning concept or the extra verification of the number of neighbors receiving the same RREQ.
2) Sent and received RREQs: Fig. 8 illustrates the number of sent and received RREQs in the entire network. Generally, a broadcast RREQ is sent by one node and then received by all its neighbors, so there is a correlation between the number of sent RREQs and the number of received ones (high-high or low-low). PSN outperforms all other protocols because the suggested algorithm selects CNs on the basis of link quality and energy levels, as well as on the basis of how many node neighbors have received a given RREQ. When a specific fraction of an "S" node's neighbors has received a RREQ, the latter is not flooded into the network. Consequently, there is more control over RREQ dissemination across the entire network. Likewise, A-LSEA performs better than C-CNRR, because A-LSEA path selection is more stable (RE and LLT are considered) compared with C-CNRR (which considers only the distances between nodes). 3) Average throughput: Fig. 9 shows the average throughput of the PSN routing protocol compared with the other routing protocols (A-LSEA, C-CNRR and AODV).
In general, the throughput decreases as node mobility increases for all the analyzed protocols. The PSN performs well compared with the other protocols, as the PSN-selected paths hold out longer than those selected by the other protocols. As a result, the PSN is better than the other protocols (A-LSEA, C-CNRR and AODV), since it can send more data thanks to the very good path lifetimes.
4) Data received:
Fig. 10 shows the data received for PSN and the other routing protocols (A-LSEA, C-CNRR and AODV). It demonstrates that the amount of received data decreases as mobility increases, which affects the established routes and links; the latter need to be re-established whenever breakages occur. By contrast, the amount of received data in the PSN routing protocol decreases as the speed rises from 5 m/s to 15 m/s, but stays approximately constant above 15 m/s, because the PSN algorithm judges links by residual energy and link lifetime. This makes it easier to cope with high speeds by involving only the nodes selected by the developed algorithm (the selection of one best node in every zone) along the end-to-end path. 5) Data sent: Fig. 11 depicts the amount of information sent successfully during the simulation. The node power supply in MANETs is not permanent, since the nodes are inherently mobile. Therefore, any information sent or received by a node reduces its energy level. In Fig. 8, it can be noticed that AODV is the worst protocol as regards sent and received RREQs: a large number of unnecessarily sent or received RREQs greatly decreases the battery life of nodes. The PSN is better than the other routing protocols since it sends a lower number of RREQs, even though it successfully sends more data.
6) Network lifetime: Fig. 12 illustrates the network lifetime results for the proposed PSN, A-LSEA, C-CNRR and AODV. The PSN outperforms all the other routing protocols, giving better network lifetime results, because the PSN routing protocol selects just four nodes for rebroadcasting received RREQs.
In addition, the selected CN nodes run an advanced algorithm to eliminate RREQ redundancy by verifying how many of their neighbors have received the same RREQ; accordingly, the CN nodes discard or rebroadcast the received RREQs. Saving energy in this way prolongs node lifetime, and hence increases the network lifetime.
7) Data drop:
Fig. 13 depicts the amount of dropped data (in packets) during the simulation of the proposed PSN, A-LSEA, C-CNRR and AODV protocols. The PSN routing protocol was better than the other routing protocols for the previous performance metrics.
On the other hand, Fig. 13 shows that C-CNRR performs considerably better than all the other routing protocols with respect to dropped data. Although the PSN selects better paths than the other routing protocols, this does not translate into an advantage in terms of dropped data packets within the network. This may be because the C-CNRR end-to-end path selection is based on the distance between route nodes, which is advantageous since the signal strength for sending and receiving data is higher over distances smaller than those of the other routing protocols.
C. Results and Discussion for Second Simulation
For further verification and validation, the Mobility-Aware AODV of [20], implemented in NS2, is also compared with the proposed PSN approach. PSN is used in this section because it achieved the best performance relative to our previously introduced routing protocols (A-LSEA and C-CNRR) as well as standard AODV; for this reason, we consider PSN the best proposed routing protocol in this paper. PSN is therefore compared to AODV and to the work suggested in [20], using the same simulation parameters, as given in Table 3. In what follows, Fig. 14 to Fig. 20 illustrate different metrics [30-32] for the comparisons between PSN, MA-AODV and AODV. Fig. 14 clearly shows that the PSN routing protocol sends and receives fewer RREQ packets in the network. This is because end-to-end routes are selected by PSN on the basis of the LLT and RE factors, whereas MA-AODV selects routes only on the basis of node speeds. These two factors give PSN an edge over MA-AODV, since a route selected by PSN endures longer than one selected by MA-AODV.
In addition, the routes selected by MA-AODV generally last for a shorter time, after which the nodes must establish a new path by initiating a new RREQ discovery process. As a consequence, many more RREQs are sent and received, adding to the entire overhead, as depicted in Fig. 15. The average delivery ratio is illustrated in Fig. 16 for the AODV, MA-AODV and PSN routing protocols. The delivery ratios of all routing protocols go up as the pause time increases, because the mobile nodes remain still for longer. What is more, the PSN routing protocol performs well compared with the MA-AODV and AODV routing protocols, because PSN selects paths that last longer than those selected by MA-AODV: the selected PSN end-to-end paths are based on the Link Lifetimes (LLT) and Residual Energy (RE) of the nodes involved in the route. By contrast, MA-AODV forwards RREQs based on node speed. As a result, the MA-AODV algorithm leads to end-to-end routes composed entirely of low-speed nodes along the path. However, this approach does not ensure good paths in MANETs because, first of all, there may be two slow nodes moving in opposite directions.
As a matter of fact, the link lifetime of these two nodes can terminate as they move apart. Conversely, imagine two neighboring nodes moving quite quickly in the same direction: the link lifetime of these two nodes remains valid for a long time compared with two low-speed nodes moving in opposite directions. MA-AODV considers only the node speed when forwarding a received RREQ, while the PSN considers both direction and speed and actually calculates the link lifetime of any two neighboring nodes. In the second place, the PSN provides a very strong packet delivery ratio because the nodes' Residual Energies are considered by the PSN during the route selection decision. Moreover, it is noticeable in Fig. 17 that the suggested PSN keeps the network running for longer than the MA-AODV routing protocol. In the first place, the PSN considers the Link Lifetimes and the Residual Energies of the nodes involved in the end-to-end routes, which returns stable paths.
Secondly, energy is conserved by no longer sending/receiving unnecessary RREQ packets, which consume a large amount of node energy. In the same vein, MA-AODV considers only the node speed, which cannot be an accurate parameter for selecting stable paths. Running the network much longer enables nodes to send and receive more data, as depicted in Fig. 18. As a result, the PSN outperforms MA-AODV, as it sends and receives more data packets. A throughput comparison of the AODV, MA-AODV and PSN routing protocols is shown in Fig. 19, where the PSN mostly outperforms MA-AODV thanks to the improved algorithm, which better stabilizes the end-to-end paths. We can also notice that the shape of the general throughput rate curve is incremental as the pause time increases, despite some fluctuations caused by the randomness of node mobility. Theoretically, the throughput rate of the six-second pause-time scenario should be greater than that of a scenario with a four-second pause time if the nodes travelled identical trajectories during the simulation. However, under the Random Waypoint mobility model, the positions towards which nodes move are chosen randomly and vary from one scenario to another.
Finally, we note that the PSN outperforms the MA-AODV routing protocol as regards dropped data packets, as depicted in Fig. 20. This can be attributed to the aforementioned reasons.
V. CONCLUSION
This paper presents the PSN protocol, a routing protocol for controlling RREQ propagation within networks, which selects end-to-end paths based on the Residual Energy (RE) and the Link Lifetime (LLT). PSN benefits from the combination of these two important factors. Moreover, by merging the CNRR [27] and LSEA [28,29] concepts, the RREQ dissemination into the network is reduced without causing reachability loss between the nodes. In addition, we introduced a threshold-percentage-based method, in which a node verifies how many of its neighbors have already received a RREQ before rebroadcasting it. By preventing nodes from sending duplicate RREQs, this mechanism controls network-wide flooding more intelligently, based on a defined threshold on the percentage of neighbors that have received the RREQ. We performed a simulation-based comparison between the proposed PSN and other routing protocols for different metrics and discussed the results. PSN increases network lifetime, improves throughput, and enables more data to be sent and received. The proposed scheme combines both the Residual Energy (RE) and Link Lifetime (LLT) factors in the routing management process, rather than using only a single factor, as in the case studies of [20,26].
Fig. 4. Instance of Dividing Transmission Range into four Zones and Selecting CNs in Each Zone.
Fig. 5. Instance of Selecting best CN from PCL List in Specific Zone.
TABLE III. SIMULATION PARAMETERS FOR COMPARING AODV, MA-AODV AND PSN
Reversal of a full-length mutant huntingtin neuronal cell phenotype by chemical inhibitors of polyglutamine-mediated aggregation
Background: Huntington's disease (HD) is an inherited neurodegenerative disorder triggered by an expanded polyglutamine tract in huntingtin that is thought to confer a new conformational property on this large protein. The propensity of small amino-terminal fragments with mutant, but not wild-type, glutamine tracts to self-aggregate is consistent with an altered conformation, but such fragments occur relatively late in the disease process in human patients and mouse models expressing full-length mutant protein. This suggests that the altered conformational property may act within the full-length mutant huntingtin to initially trigger pathogenesis. Indeed, genotype-phenotype studies in HD have defined genetic criteria for the disease initiating mechanism, and these are all fulfilled by phenotypes associated with expression of full-length mutant huntingtin, but not amino-terminal fragment, in mouse models. As the in vitro aggregation of amino-terminal mutant huntingtin fragment offers a ready assay to identify small compounds that interfere with the conformation of the polyglutamine tract, we have identified a number of aggregation inhibitors, and tested whether these are also capable of reversing a phenotype caused by endogenous expression of mutant huntingtin in a striatal cell line from the HdhQ111/Q111 knock-in mouse.

Results: We screened the NINDS Custom Collection of 1,040 FDA approved drugs and bioactive compounds for their ability to prevent in vitro aggregation of the Q58-htn 1-171 amino-terminal fragment. Ten compounds were identified that inhibited aggregation with IC50 < 15 μM, including gossypol, gambogic acid, juglone, celastrol, sanguinarine and anthralin. Of these, both juglone and celastrol were effective in reversing the abnormal cellular localization of full-length mutant huntingtin observed in mutant HdhQ111/Q111 striatal cells.

Conclusions: At least some compounds identified as aggregation inhibitors also prevent a neuronal cellular phenotype caused by full-length mutant huntingtin, suggesting that in vitro fragment aggregation can act as a proxy for monitoring the disease-producing conformational property in HD. Thus, identification and testing of compounds that alter in vitro aggregation is a viable approach for defining potential therapeutic compounds that may act on the deleterious conformational property of full-length mutant huntingtin.
Background
Huntington's disease (HD) is a severe, dominantly inherited neurodegenerative disorder that typically has its onset in mid-life, though it may occur in the juvenile years or in the elderly, and that produces an inexorable decline to death 10-20 years later [1]. Its cardinal clinical feature is a characteristic motor disturbance involving progressive choreoathetosis, but the disorder also involves psychological changes and cognitive decline. The neuropathological hallmark of HD is the loss of medium spiny striatal projection neurons in a dorso-ventral/medio-lateral gradient that eventually decimates the caudate nucleus, but considerable neuronal loss also occurs in other parts of the basal ganglia and in the cortex [2]. The pathogenic process of HD is initially triggered by an expanded polyglutamine segment near the amino terminus of huntingtin, an ~350 kDa protein whose precise physiological function is uncertain [3]. Huntingtin is required for normal embryonic development and neurogenesis, based on the lethal consequences of mutational inactivation in the mouse [4][5][6]. By contrast, the HD mutation itself does not impair this developmental activity but rather produces a "gain-of-function" that acts to cause the disorder [7]. Genotype-phenotype studies of HD patients, in comparison with other polyglutamine neurodegenerative disorders, have delineated a number of genetic criteria for the mechanism that triggers HD pathogenesis: 1) a threshold polyglutamine length (within a normal human lifespan); 2) progressive severity with increasing polyglutamine length above the threshold; 3) complete dominance over the wild-type protein; 4) greater dependence on polyglutamine length than on huntingtin concentration (within a physiological range) and 5) striatal selectivity, due to the huntingtin protein context in which the polyglutamine tract is presented [8,9].
The "gain-of-function" due to the HD mutation is thought to lie in a novel conformational property conferred on mutant huntingtin by the expanded polyglutamine tract [10]. This has been supported by in vitro studies of a small amino-terminal huntingtin fragment, where an expanded polyglutamine tract promotes self-aggregation in a manner that conforms to the first four genetic criteria [10][11][12]. The in vitro aggregation involves a conformational change of the polyglutamine segment from a random coil to an amyloid structure and is paralleled in cell culture in some ways by the formation of cytoplasmic and nuclear inclusions that also incorporate other proteins [13]. Neuronal inclusions containing amino-terminal fragment have also been detected in HD brain, though their role in pathogenesis remains a matter of debate, as they may occur late in the pathogenic process as a consequence of huntingtin degradation [14].
Precise genetic modeling of HD in the mouse supports the view that in vivo, the "gain-of-function" property conferred by the expanded polyglutamine acts within full-length huntingtin to cause abnormalities that do not initially involve formation of an insoluble aggregate [15,16]. Knock-in mice in which the HD mutation has been introduced into Hdh, the mouse orthologue, display early biochemical and histological phenotypes that are associated with expression of full-length mutant huntingtin at normal physiological levels and in a normal developmental pattern [7,[15][16][17][18][19][20]. Indeed, the phenotypes associated with expression of full-length mutant huntingtin in these mice, and in neuronal progenitor cells derived from them, also fulfill the genetic criteria for the mechanism triggering HD pathogenesis [15,[20][21][22]. One of the earliest phenotypes is the nuclear localization of full-length mutant huntingtin in striatal neurons [16]. Together, the knock-in mouse data suggest that the process of pathogenesis is triggered by the presence of expanded polyglutamine in full-length huntingtin and leads only after many months to the formation of amino-terminal huntingtin fragment and inclusion formation [15].
We have postulated that the same conformational property that promotes aggregation in the context of a small fragment may also act within the context of full-length huntingtin to trigger pathogenesis, possibly by altering huntingtin's interaction with another cellular element. Consequently, we have identified small molecules from the NINDS Custom Collection of bioactive compounds that inhibit in vitro aggregation of amino-terminal mutant huntingtin [23]. These have been tested for their ability to reverse the huntingtin localization phenotype associated with full-length mutant huntingtin in cultured striatal progenitor cells from Hdh knock-in mice. Our findings indicate that some of these compounds reverse the effects of the expanded polyglutamine in both assays and support the view that some inhibitors of polyglutamine aggregation may lead to viable therapeutics targeted at full-length mutant huntingtin, early in the disease process.
Screening for inhibitors of aggregation
We have previously demonstrated that, when released from the protection of a GST fusion protein, the amino-terminal fragment 1-171 of mutant huntingtin forms aggregates in a manner consistent with the genetic criteria for the mechanism of HD pathogenesis [10]. We used a modified version of this assay, implemented using a 96-well format ELIFA dot blot apparatus, to screen the NINDS Custom Collection (NCC), which consists of 1040 small bioactive compounds, both FDA-approved drugs and natural products (Figure 1A). The screening was carried out in a blinded fashion as part of the NINDS Neurodegeneration Drug Screening Consortium, with the identities of compounds in the NCC only being made available after completion of the screens [23].

Figure 1. Schematic diagram of the aggregation screening assay (A) and typical results (B). A: Scheme of an in vitro mutant huntingtin aggregation assay modified for drug screening. In the primary screening, the mixture of fusion protein GST-Q58-Htn (20 µg/ml) and thrombin (0.5 unit/µg protein) was immediately distributed into 96-well plates containing diluted compounds at 40 µl/well; the final concentration of the small compounds was 100 µM. After 24 hours of incubation at room temperature, 10 µl of 10% SDS/50 mM 2-mercaptoethanol was added to each well to stop the reaction, followed by SDS boiling. The aggregates were separated by filtering through a cellulose acetate membrane (0.2 µm). Immunoblotting was done with a specific anti-huntingtin antibody, HP1, followed by incubation with peroxidase-conjugated anti-rabbit antibody. The signals of the retained aggregates were scanned and quantified. In the secondary screening, compounds that tested positive in the primary screen were tested at 10 µM, and a 45-minute incubation at room temperature followed the mixing of the protein and enzyme.
In our primary screen (Figure 1A), GST-Q58-Htn (20 µg/ml) was mixed with thrombin (0.5 unit/µg GST-Q58-Htn) and immediately dispensed into a 96-well PCR plate containing compounds diluted to a final concentration of 100 µM. Incubation was continued for 24 hours at room temperature to allow aggregate formation. The aggregation was stopped by 2% SDS/10 mM 2-mercaptoethanol followed by boiling for 5 minutes. The mixture was filtered through a cellulose acetate membrane by using a 96-well ELIFA dot blot apparatus. The aggregates retained on the membrane were detected and quantified by immunoblotting and subsequent image analysis. A typical immunoblot result is shown in Figure 1B. Congo Red, a known huntingtin aggregation inhibitor, was used as the positive control [24]; 10 µM Congo Red completely inhibits Q58-Htn aggregation. DMSO, used as the negative control, had no impact on Q58-Htn aggregation.
Potential inhibitors were distributed evenly across the whole NCC library (Figure 2). Sixty compounds that showed more than 50% inhibitory effect were selected to be retested in a second screen at a lower concentration of 10 µM. The 8 compounds in column 5 of plate 9 were missed in the primary screening at 100 µM, and were therefore also tested in the second screening at 10 µM. In the primary screening, a "hit" could have resulted either from direct inhibition of polyglutamine-induced aggregation or indirectly, from inhibition of the thrombin and consequent failure to cleave GST-Q58-Htn, which does not by itself aggregate. Consequently, the second screening at 10 µM was carried out after thrombin digestion, to eliminate thrombin inhibitors. Western blotting showed that more than 95% of GST-Q58-Htn is cleaved by thrombin (at a ratio of 0.5 unit/1 µg protein) within 30 minutes (data not shown). Consequently, the mixture of GST-Q58-Htn and thrombin was preincubated for 45 minutes, followed by centrifugation to remove any aggregates already formed, before adding the test compounds. Nineteen of the compounds tested at 10 µM showed significant direct inhibitory effects on aggregation. The 10 most potent compounds, corresponding to a 'hit' rate of 1%, are shown in Figure 3.
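As a worked illustration of how hits like these are typically called from dot-blot intensities, the snippet below computes percent inhibition of each well relative to the DMSO (negative) control and flags compounds above the 50% cutoff used here; the array names and the example numbers are hypothetical, not the study's data.

```python
import numpy as np

def percent_inhibition(signal, dmso_signal):
    """Percent inhibition of aggregation relative to the DMSO control."""
    return 100.0 * (1.0 - signal / dmso_signal)

# Hypothetical normalized dot-blot intensities for four wells.
dmso = 1.00
wells = np.array([0.92, 0.40, 0.08, 0.55])

inhibition = percent_inhibition(wells, dmso)
hits = inhibition > 50.0  # the >50% cutoff used to select the 60 retests
print(inhibition)         # [ 8. 60. 92. 45.]
print(hits)               # [False  True  True False]
```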
Characteristics of aggregation inhibitors
To determine the potency of each inhibitor, we performed dose-response assays at concentrations ranging from 0.01 µM to 500 µM. Representative curves for the 6 most potent compounds are shown in Figure 4. Gambogic acid and celastrol showed strong but incomplete inhibition even at the maximum concentration, permitting approximately 20% residual aggregate to form. The average IC 50 (half-maximal inhibition) values for the 10 most potent compounds, which range from 0.7 to 15 µM, were obtained from at least two independent experiments each (Figure 3). The most effective aggregation inhibitor was gossypol-acetic acid complex, followed closely by gambogic acid, and then juglone, celastrol, and sanguinarine nitrate, which all had IC 50 values less than 6 µM.
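For readers unfamiliar with how such IC 50 values are extracted, the sketch below fits a standard four-parameter (Hill) dose-response curve to hypothetical aggregation data. The study itself used Prism software for this step, so the scipy-based fit, the example concentrations, and the starting guesses here are only illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical data: drug concentration (µM) vs. % aggregation remaining.
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 500.0])
aggregation = np.array([98.0, 95.0, 70.0, 25.0, 21.0, 20.0])

# Initial guesses: floor ~20% residual, ceiling ~100%, IC50 ~1 µM.
p0 = [20.0, 100.0, 1.0, 1.0]
params, _ = curve_fit(hill, conc, aggregation, p0=p0)
bottom, top, ic50, slope = params
print(f"IC50 ≈ {ic50:.2f} µM (residual aggregation ≈ {bottom:.0f}%)")
```

The nonzero floor in this toy fit mirrors the incomplete inhibition reported above for gambogic acid and celastrol, which left roughly 20% residual aggregate even at the highest concentrations.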
Effect on striatal cells expressing endogenous full-length mutant huntingtin
To test the hypothesis that compounds which inhibit the aggregation-promoting property of amino-terminal mutant huntingtin will also rescue effects of full-length mutant huntingtin, we tested the top six inhibitors in a striatal cell-based assay. Mutant Hdh Q111/Q111 and wild-type Hdh Q7/Q7 striatal cell lines, ST111/111 and ST7/7, respectively, which have been prepared by transformation with a tsSV40 vector, can be propagated in culture and used for cytological and biochemical comparisons [25]. These cells express full-length mutant or wild-type huntingtin, respectively, with no evidence of truncated amino-terminal fragments, no formation of polyglutamine aggregates and no cell death-producing toxicity. However, like the striatal neurons of Hdh Q111/Q111 knock-in mice, the ST111/111 cells show nuclear staining of huntingtin when tested with an amino-terminal huntingtin antibody that is sensitive to the conformation of the full-length protein (Figure 5A). By contrast, ST7/7 cells expressing wild-type huntingtin show both nuclear and cytoplasmic immunostaining with the same huntingtin antibody (Figure 5A). This differential localization phenotype occurs early in the cascade of events detected during the lifespan of Hdh knock-in mice, months before the appearance of huntingtin amino-terminal fragment, and fulfills the genetic criteria from genotype-phenotype studies in HD patients, including polyglutamine length progressiveness and striatal specificity, suggesting that it follows from the same property that triggers HD pathogenesis.
This huntingtin localization phenotype was used to monitor the effect of inhibitors in the mutant cells ( Figure 5B), and the results are shown in Table 1. About 89% of ST7/7 striatal cells showed both nuclear and cytoplasmic immunostaining signal, while ST111/111 striatal cells showed only nuclear signal (99%). Of the six compounds tested, celastrol and juglone both reversed the mutant phenotype in a dose-dependent manner. Juglone showed no evident cell toxicity up to 10 µM, where 68% of mutant cells had reverted to wild-type phenotype. Celastrol reverted up to 81% of the cells but showed toxicity, killing ~4% of cells at 10 µM. Gossypol acetic acid complex was less effective, but showed no toxicity. At 50 µM, only 15% of the mutant cells displayed the wild-type phenotype. Gambogic acid showed a comparable small effect at 10 µM but was very toxic at high concentration, as all cells were killed at 50 µM. Sanguinarine nitrate and anthralin showed little effect on the huntingtin localization phenotype in this assay.
Discussion
A dramatic marker of pathology in many neurodegenerative disorders is the appearance of intracellular inclusions in some surviving neurons [13]. In HD, these inclusions stain positively for huntingtin, ubiquitin and a number of other proteins, but are thought to be initiated by the aggregation of an amino-terminal fragment of mutant huntingtin, due to its expanded polyglutamine tract [14,[26][27][28]. A number of model systems have been developed to investigate the polyglutamine-driven aggregation process and its consequences both in vitro and in vivo, but it remains unclear in HD whether the formation of aggregates plays an essential role in the pathway of pathogenesis or is a downstream by-product of neuronal dysfunction induced by full-length mutant huntingtin [29]. In either event, the search for drugs that alter in vitro aggregation of amino-terminal huntingtin fragment is attractive, since the aggregation-promoting physical property exhibits characteristics comparable to the disease-producing property of corresponding human alleles, as defined from genotype-phenotype studies of HD patients. For example, if accumulation of huntingtin inclusions is the proximate cause of neuronal death, compounds that inhibit aggregation would have therapeutic potential. Conversely, if the inclusions are only a downstream marker of the pathogenic process, the same drugs may still have therapeutic potential if they act on the property of full-length mutant huntingtin that triggers pathogenesis. It was with a view to testing whether the in vitro aggregation assay could act as a proxy for monitoring the disease-producing property of mutant huntingtin that we undertook this study.
A comparable screening assay to the one used here has been employed to screen a large chemical library and has demonstrated the feasibility of identifying small molecule inhibitors of polyglutamine aggregation, including a family of benzothiazole-related compounds [30]. However, the long, arduous and expensive process of developing compounds for use as drugs in humans prompted us to screen a smaller chemical library biased toward drugs already approved by the U.S. Food and Drug Administration. The aggregation-inhibiting compounds that we identified came from a collection of mostly FDA-approved compounds and bioactive natural products that were specifically assembled for a neurodegenerative disease drug screening consortium supported by the National Institute for Neurological Disorders and Stroke and several disease foundations, including the Huntington's Disease Society of America [23]. The drugs were distributed to 27 different labs for blinded testing in assays of potential relevance to neurodegenerative disease.
Unfortunately, despite the preponderance of FDA-approved drugs in the collection, none of our top hits (IC 50 < 10 µM) is a compound approved for internal use in humans. The most effective inhibitor of aggregation was gossypol-acetic acid complex. Gossypol, a polyphenol found in cottonseed, has been studied extensively as a male contraceptive in China, but the World Health Organization has argued against its use because of induced hypokalemia, high toxicity and the risk of permanent sterility [31]. Almost as potent, gambogic acid is a complex ring-structured natural product that is the main active ingredient of gamboge resin from the Garcinia hanburyi tree. It has long been used as a pigment for painting and in traditional medicine as a potent purgative. Recently, it was identified in a high-throughput screen as an apoptosis inducer with potential for development as an anti-cancer agent [32]. Juglone is a naphthoquinone found in the bark of the black walnut (Juglans nigra), which has been used as an herbal medicine for its antihaemorrhagic and antifungal properties [33]. Celastrol is a triterpenoid from the vine Tripterygium wilfordii used as an alternative medicine for rheumatoid arthritis that has anti-oxidant, anti-inflammatory, immunosuppressive and anti-angiogenic activities. It has been proposed as worthy of exploration as a therapeutic in Alzheimer disease [34]. Sanguinarine, a benzophenanthridine alkaloid from bloodroot (Sanguinaria canadensis), has broad antimicrobial and anti-inflammatory, as well as potential anti-angiogenic, activity [35]. It has been used as an oral rinse and as a potential antigingivitis/antiplaque agent in toothpaste [36]. Anthralin is a synthetic derivative of chrysarobin, a traditional remedy for various skin ailments from Andira araroba, that has been widely used as a topical treatment for psoriasis and alopecia areata [37,38].

Figure 3. Ten most potent aggregation inhibitors from the NCC library.
Figure 4. Dose-response curves of mutant huntingtin aggregation inhibitors.
Although none of these bioactive compounds is a candidate for immediate human trials, they provided a means to test whether compounds that inhibit polyglutamine aggregation might also block the neuronal phenotype caused by an elongated polyglutamine tract in full-length huntingtin. While we do not have a direct physical measure of huntingtin conformation and it remains possible that compounds could reverse the cellular phenotype by different pathways, our finding that juglone and celastrol, two drugs of different structure selected as hits in our primary aggregation assay, are both effective at restoring cytoplasmic huntingtin staining in the striatal cell assay suggests that they both act via a conformational property of mutant huntingtin. That gossypol, gambogic acid, sanguinarine and anthralin were not effective could be due to any of a number of reasons, including cellular uptake, toxicity, interaction with other cellular components, etc. However, it may also indicate that these compounds do not directly modify the conformational property of mutant huntingtin, but instead block a different step in the in vitro aggregation process or that they do so by a different physical effect than juglone and celastrol. A detailed analysis of structure-activity relationships using structurally-related compounds and testing in vivo in knock-in mice for their ability to reverse the cascade of mutant huntingtin-associated phenotypes will be needed to adequately assess the potential of any of these different types of compounds for testing in HD clinical trials.
The remaining compounds identified as weaker aggregation blockers in our primary screen include selamectin, a veterinary anti-parasitic [39], pararosaniline pamoate, a treatment for schistosomiasis [40], tyrothricin, a cyclic peptide antibiotic, and meclocycline, a tetracycline-related antibiotic. The latter is of particular interest since it was the most potent of several tetracycline-related antibiotics present among the NCC compounds, including tetracycline, chlortetracycline, demeclocycline, doxycycline, methacycline, oxytetracycline and, notably, minocycline, which has been proposed as a therapeutic in HD and other neurodegenerative disorders. Minocycline is an FDA-approved antibiotic used for a variety of infections that has variably been reported to improve symptoms in the R6/2 exon 1 overexpression HD model [41][42][43][44]. It has anti-inflammatory and anti-apoptotic activity that has been proposed to involve several potential mechanisms of action. In our hands, minocycline is a weak inhibitor of polyglutamine aggregation with an IC 50 of 43 µM (unpublished data). This is consistent with the inhibitory effect on huntingtin exon 1 aggregation reported previously at 30 µM in long-term hippocampal slice cultures from the R6/2 mouse [44]. Two safety and tolerability studies of minocycline in human HD have been completed [45,46] and can be expected to lead to efficacy trials earlier than any trials for strong aggregation inhibitors. However, it is conceivable that long-term, low-level inhibition of mutant huntingtin's aggregation-promoting conformational property, independent of minocycline's anti-apoptotic activity, may be sufficient to detectably alter the timing of disease onset or early progression. If the hoped-for positive results are obtained in minocycline HD trials, this alternative mechanism should be considered, since it would have implications for testing of meclocycline and for assessing the potential trade-off between potency and toxicity in choosing other aggregation inhibitors as potential long-term therapeutics.
Interestingly, the same set of 1040 NCC compounds was screened for the ability to block toxicity in a PC12 cellular assay where induced expression of huntingtin exon 1 encoding 103 glutamines leads to the accumulation of aggregates and rapid cell death [47]. Although eighteen compounds were found to be completely protective, none was among our hits, suggesting that the mechanism of polyglutamine toxicity in the PC12 cells is fundamentally different from the mechanism(s) involved in the in vitro aggregation assay. Among a secondary class of partially protective compounds in the PC12 assay, only celastrol overlapped with our hits. The NCC compounds were also screened in a cellular assay in HEK 293T cells expressing androgen receptor with 112 glutamines [48]. In this model of spinal and bulbar muscular atrophy, accumulation of intracellular inclusions, accompanied by caspase 3 activation, is followed by cell death within 72 hours. Twenty compounds that blocked caspase 3 activation included celastrol, gambogic acid, sanguinarine and tyrothricin, though all but sanguinarine showed toxicity. The major finding from this assay was that several cardiac glycosides were protective, presumably by a different mechanism than our hits. Indeed, although most of the assays in the NINDS consortium involved disorders associated with protein aggregation, including various polyglutamine disorders, amyotrophic lateral sclerosis and Parkinson disease, there was a remarkable lack of overlap in hits, suggesting that the individual assays targeted fundamentally different mechanisms. A possible exception was celastrol, which was found as a hit in our aggregation assay, the two assays noted above, and other assays which will be discussed in a summary article describing the consortium.
Conclusions
The identification and further characterization of chemical inhibitors of in vitro aggregation of an amino-terminal fragment of mutant huntingtin offer promise for the development of potentially therapeutic compounds that also target the deleterious conformational property of full-length mutant huntingtin.
Chemical library, enzymes and antibodies
A library containing 1040 small chemical compounds consisting of FDA-approved drugs and bioactive natural products, the National Institute of Neurological Disorders and Stroke Custom Collection (NCC), was provided in thirteen 96-well plates by MicroSource Discovery Systems, Inc (Gaylordsville, CT). The complete list of NCC compounds is available [49]. All compounds were dissolved in 100% DMSO at a concentration of 10 mM. Thrombin was purchased from Amersham Pharmacia Biotech (Piscataway, NJ). The anti-huntingtin antibody HP1, used in the screening assay, was described by Persichetti et al. [50]. AP229, used in the cell-based assay, was previously described and was a gift of Dr. A.H. Sharp [25].
GST-huntingtin construct and expression
A recombinant pGEX-2TK expression vector with a cDNA fragment encoding the amino-terminal 171 amino acids of human huntingtin with a polyglutamine tract of 58 was used to prepare the GST-Q58-htn fusion protein. GST-Q58-htn was overexpressed in BL21 cells and purified by affinity chromatography over glutathione-sepharose 4B beads (Amersham Pharmacia Biotech). The purified proteins were stored at a concentration of 2.0 mg/ml at -80°C.
Aggregation assay
For primary screening of the chemical library, 1 µl of 4.0 mM small compound stock, diluted from the original plates, was placed in wells of 96-well plates. The fusion protein GST-Q58-Htn was mixed with thrombin (0.5 unit/1 µg protein) at a concentration of 20 µg/ml in a buffer of 50 mM Tris-HCl, pH 8.0, 100 mM NaCl, 2.5 mM CaCl 2 , 1.0 mM EDTA. The mixture was immediately distributed into the 96-well plates containing diluted compounds at 40 µl/well and mixed well. The final concentration of the small compounds was 100 µM. After 24 hours of incubation at room temperature, the reaction was stopped by adding 10 µl of 10% SDS/50 mM 2-mercaptoethanol to each well, followed by boiling in a PCR machine for 5 minutes. The mixture from each well was filtered through a cellulose acetate membrane (0.2 µm, GE Osmonics Labstore, Minnetonka, MN) by using a 96-well ELIFA (Pierce Biotech). The aggregates retained on the membrane were detected by a specific anti-huntingtin antibody, HP1 (diluted 1:1000), followed by incubation with peroxidase-conjugated anti-rabbit antibody (diluted 1:10,000, Sigma). Signals from SDS-insoluble aggregates were scanned and quantified by using ImageMaster Totalab image analysis software (Amersham Pharmacia Biotech). In the secondary screening, all steps were the same except that the final concentration of compound in each well was reduced to 10 µM and the GST-Q58-Htn/thrombin mixture was preincubated for 45 minutes at room temperature and clarified by centrifugation at 28,000 × g for 30 minutes before being added to the test wells. For IC 50 determinations, the in vitro aggregation assay and signal quantification were performed as in the second screening but varying the final concentration of input drug. The data for each inhibitor were obtained from at least two independent experiments in which every sample was analyzed in triplicate, using Prism 3.0 software (GraphPad Software, Inc., San Diego, CA).
In the immunofluorescence and confocal analysis experiment, wild-type ST7/7 and homozygous mutant ST111/111 cells, grown to confluence on glass coverslips, were treated with 6 different drugs at four different concentrations (0.5, 5, 10 or 50 µM) for 30 min at 33°C. After the treatment, the drugs were removed and the cells washed twice in PBS. The cells were then fixed by incubation in 4% formaldehyde for 15 min, permeabilized with 0.1% Triton X-100 for 5 min and incubated for 30 min with blocking solution (1% bovine albumin in PBS). Coverslips were then incubated in primary antibody (AP229, 1:500 dilution) for 2 h at room temperature and washed three times in PBS before a further 1 h in blocking solution containing the secondary antibody (goat anti-rabbit Cy3, Jackson ImmunoResearch, West Grove, PA, USA). After three washes in PBS, coverslips were mounted onto glass slides with Vectashield (Burlingame, CA, USA) and the images were analyzed with a laser confocal microscope (Bio-Rad, Hercules, CA, USA) using a 20× objective. Cell death was quantified by scoring the percentage of cells with apoptotic nuclear morphology, i.e. condensed or fragmented nuclei, under the confocal microscope. In each case five to ten randomly selected fields were counted, comprising at least 200 cells, and each experiment was repeated 3 times.
The association of childhood abuse and neglect with tattoos and piercings in the population: evidence from a representative community survey
Background Tattoos and piercings are becoming increasingly popular in many countries around the world. Individuals seeking such body modifications have reported diverse psychological motives. Besides purely superficial considerations, tattoos and piercings can also have a deep, personal meaning. For example, they can mark and support the emotional processing of significant life events, including formative experiences from early childhood. However, there is a lack of studies that examine the links of tattoos and piercings with experiences of childhood abuse and neglect in large, population-based samples. Methods We investigated the association of reports of childhood abuse and neglect with the acquisition of body modifications (tattoos and piercings) within a representative German community sample. Survey participants (N = 1060; ages 14–44 years) were questioned whether they had tattoos and piercings and filled out the 28-item Childhood Trauma Questionnaire Short Form (CTQ-SF). Results Tattoos and piercings were more common among individuals who reported childhood abuse and neglect. The proportion of participants with tattoos and piercings increased as a function of the severity of all assessed types of abuse and neglect (emotional, physical, and sexual abuse; emotional and physical neglect). In logistic regression analyses which included the covariates age, gender, education, and income, the sum of significant kinds of childhood abuse and neglect was positively related to having tattoos and/or piercings (OR = 1.37 [95% CI 1.19–1.58]). Conclusions The results corroborate previous research indicating that body modifications could have special significance for individuals who have survived adversity, in particular interpersonal trauma at the hands of caregivers. These findings could inform screening procedures and low-threshold access to psychotherapeutic care.
Background
Deliberate body modifications such as tattoos and piercings have a long cultural-historical tradition and are based on techniques that are similar worldwide. Since time immemorial, they have been used as a form of expression, for instance of cultural values, sexual maturity, or of the social status and wealth of the wearer [1]. Long-established tattoo techniques with cultural significance are still present, such as those used by the indigenous peoples of Polynesia or the Inuit, which are still applied by hand with simple tools that have hardly changed over hundreds of years. However, modern technological and medical advances have contributed to the proliferation of both tattoos and piercings in today's society. In many Western countries, they are becoming increasingly popular [2,3]: whereas tattoos and piercings used to serve as identifying characteristics of marginalized groups and/or different subcultures [4], they are now a mass phenomenon and reflect a changed attitude towards one's body. In times of more individualistic lifestyles, the body becomes an aesthetic object which can be actively changed, in accordance with contemporary ideals of self-expression and beauty [5][6][7][8][9]. Tattoos and piercings warrant particular attention as they are usually permanent alterations. Besides health concerns such as allergies and infections [10,11], they might still imply social sanctioning in some contexts (e.g., at the workplace [12]).
In 2016, 37% of individuals above 14 years who were included in a representative German community study reported having a tattoo. Although tattoos were reported by people of all levels of education and vocational success, they were slightly more common among those with fewer years of school and those currently out of work [13]. Similar proportions of men and women reported having tattoos. By contrast, more women than men reported having piercings (excluding those of the earlobes) [6]. An earlier US-American study had yielded similar results [14].
The underlying psychological motivations for tattoos and piercings have been the focus of comparatively smaller studies, many of which used qualitative methods. Sweetman [7] highlighted that the persistent nature of a tattoo, as well as the involved pain and care, add to its particular significance compared to other fashionable accessories. It is important to note that tattoos and piercings serve as means of communication [15] as they are an outward expression of something felt inwardly. In their review, Wohlrab, Stahl [16] summarized major motivations for acquiring body modifications. These fell into ten categories, comprising superficial motives (such as beauty and fashion) as well as expressions of profound personal meaning (personal narrative, group affiliations and commitment, resistance).
Tattooed and pierced individuals also reported a higher need for uniqueness [17] and lower self-esteem [18] than those without any body modifications. Body modifications have been related to comparatively pronounced risk-taking behavior [19,20] and sensation seeking [21]. They were more common among individuals with personality disorders [22] and pathological behaviors such as non-suicidal self-injury (NSSI), e.g., in the form of cutting [23,24].
Along these lines, a recurring theme in the literature has been emotional regulation and coping with stressful life events [25]. In a previous German investigation, participants described the marking of a stage of life, overcoming adversity, and striving to reclaim control over one's life [26] as motives for the acquisition of piercings and tattoos.
Numerous studies have referred to the importance of previous experiences of bodily harm inflicted by others: in particular, survivors of sexual abuse reported the wish to overcome past experiences by means of body modification [27]. An older community study from New Zealand had also found comparatively high rates of childhood sexual abuse among women with tattoos [28]. In a similar way, researchers suggested that a piercing could be an expression of the wish to heal "past wounds" [29]. Piercing may also enable the reconciliation with formerly refused or dissociated body parts [4]. Fittingly, the healing periods that follow a piercing promote engagement with one's body as well as its care [4]. A recent study also found higher rates of childhood neglect and abuse among intimately pierced individuals [30].
However, there is a lack of comprehensive, systematic investigations of the associations of childhood abuse and neglect with tattoos and piercings at the population level. This presents a research gap as adverse childhood experiences are a widespread phenomenon [31], with sustained consequences for health and well-being, identity, and behavior across the life span.
In addition, research has shown that psychological trauma disrupts narrative processing, meaning that memories of adverse events might be represented differently than memories of experiences that were not accompanied by intense distress (see e.g., [32]). This could make it difficult to access and communicate them in verbal form, e.g., in conversation with others. Instead, body modifications suggest themselves as a more physical, behavioral mode of expression.
Furthermore, survivors of childhood abuse and neglect are especially likely to show the characteristics of tattooed and pierced individuals reported above, e.g., low self-esteem, risk-taking and other impulsive behaviors, which are often observed in the context of personality pathology [33,34]. These factors could facilitate tattoos and piercings in the sense of mediating or moderating variables: As developmental risk factors, abuse and neglect implicate a negative self-image and emotion regulation difficulties (e.g., [35,36]). Against this background, tattoos and piercings could be used specifically to create more pleasant subjective experiences. This includes feelings of being in control, which contrast the distressing early experience of having been victimized and/or neglected [37]. At the same time, impulsive traits make it more likely that individuals will get (multiple) tattoos or piercings without much concern about potential risks or undesirable long-term consequences, which might otherwise deter them.
The present study: We used a validated questionnaire assessing childhood abuse and neglect, the 28-item short form of the Childhood Trauma Questionnaire (CTQ-SF) [38], in a representative population sample. We presumed that childhood abuse and neglect are consequential early life experiences that are positively associated with body modifications later in life, e.g., based on previous evidence from survivors of sexual abuse [27,28] and individuals with intimate piercings [30]. We thus expected higher rates of tattoos and piercing among individuals reporting abuse and neglect compared to those reporting no abuse or neglect. We also expected reports of more severe abuse and neglect to be associated with higher proportions of tattoos and piercings among the persons affected.
Tattoos and piercings are in some respects comparable (e.g., both are permanent and the experience of getting them is painful to some degree), however, piercing the skin versus applying an image or lettering to it are different kinds of body modifications. Therefore, given the lack of studies that have systematically investigated associations of (childhood) adversity with tattoos and piercings within the same sample, more exploratory research questions concerned potentially differential associations of childhood abuse and neglect with tattoos versus with piercings.
Further, as women are more likely to experience childhood abuse and neglect [39], it is an open question whether the association of childhood abuse and neglect and piercings in particular remains robust if gender differences are statistically controlled.
Survey strategy.
A representative sample of the German population was surveyed by the independent demographic consulting company USUMA (based in Berlin, Germany) from 09/2016 to 11/2016. Participants were chosen via random-route procedure. All participants were at least 14 years of age and had sufficient understanding of the German language. They were informed of the study procedures, data collection, and anonymization of personal data before providing informed consent. In the case of minors, participants gave informed assent with informed consent being provided by their parents/legal guardians. The sample was representative of the German population with respect to age, gender, and level of education. Out of 4902 designated addresses, 2510 households participated. Individuals in multi-person households were randomly selected using a Kish-Selection-Grid. Responses were anonymous. Socio-demographic information was obtained in a face-to-face interview conducted by trained interviewers. All other information was gathered in written form (pen and paper) as part of a questionnaire that was handed out together with a sealable envelope. It included questions about tattoos and piercings and the 28-item Childhood Trauma Questionnaire Short Form. The study was conducted in accordance with the Declaration of Helsinki and fulfilled the ethical guidelines of the International Code of Marketing and Social Research Practice of the International Chamber of Commerce and of the European Society of Opinion and Marketing Research. The study materials and procedure were approved by the Ethics Committee of the Medical Department of the University of Leipzig (number 297/16ek).
In order to establish comparability with previous studies investigating tattoos and piercings in the German population [40] and to focus on a younger age group in which body modification is of higher relevance, we only included participants aged 14-44 years (reducing the sample to N = 1060).
Sociodemographic information
Participants reported their age, gender, and educational attainment. We calculated equivalised income according to the OECD guideline [41] by dividing the household income by the square root of the number of people in the household. The result was then recoded into the following categories: 1 = ≤ 1250€, 2 = 1250-2500€, 3 = ≥ 2500€.
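A minimal sketch of this equivalisation and recoding step is given below, assuming monthly euro amounts as inputs; the function names are our own, and the handling of exact boundary values follows the bracket labels above but is an assumption.

```python
import math

def equivalised_income(household_income, household_size):
    """OECD square-root scale: income divided by the square root of household size."""
    return household_income / math.sqrt(household_size)

def income_category(eq_income):
    """Recode equivalised income into the three brackets used in the study."""
    if eq_income <= 1250:
        return 1
    elif eq_income <= 2500:
        return 2
    else:
        return 3

# Example: a 4-person household with a 3600€ monthly income.
eq = equivalised_income(3600, 4)   # 1800.0
print(eq, income_category(eq))     # 1800.0 2
```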
Tattoos and piercings
The presence of tattoos and piercings was assessed via self-report. The questions were "Do you have tattoos?" and "Do you have piercings (not including those of the earlobes)?". Response options were "No", "Yes, one", and "Yes, multiple".
Childhood abuse and neglect
Experiences of abuse and neglect were assessed using the 28-item short form of the Childhood Trauma Questionnaire (CTQ-SF) [38]. It comprises five subscales: emotional abuse, physical abuse, sexual abuse, emotional neglect, and physical neglect. Each of the 28 items (e.g., "I had to wear dirty clothes", assessing physical neglect) is scored on a five-point Likert scale (ranging from 1 = never to 5 = very often). Responses to the single items are then summarized. For each subscale, the sum score ranges from 5 to 25 points. The total score of the questionnaire is the sum of the five subscales. The CTQ-SF has been widely used in community samples as well as in clinical practice and research. Klinitzke, Romppel [39] confirmed its five-factor structure and attested to the scales' acceptable to good internal consistencies (Cronbach's α = 0.62-0.96). We also confirmed acceptable to good internal consistencies based on the present sample (emotional abuse: ω = 0.83, physical abuse: ω = 0.78, sexual abuse: ω = 0.86, emotional neglect: ω = 0.87, and physical neglect: ω = 0.65).
Statistical procedure
In this study, the coding of the severity (none to minimal, low to moderate, moderate to severe, severe to extreme) of the five different kinds of childhood abuse and neglect assessed by the CTQ-SF followed established, widely used norms. These were based on previous representative surveys of the German population [42]. For example, for the subscale emotional abuse, none to minimal ranges from 5 to 8 points, low to moderate from 9 to 12 points, moderate to severe from 13 to 15 points, and severe to extreme from 16 to 25 points.
In line with this previous investigation, the categories were also combined into "non-significant" (including only none to minimal abuse/neglect) and "significant" reports (combining the three categories low to moderate, moderate to severe, and severe to extreme).
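To illustrate the scoring pipeline just described, the sketch below sums the five items of one subscale and applies the emotional-abuse cutoffs quoted above; the cutoffs for the other four subscales differ and are not reproduced here, and the item responses are invented for the example.

```python
def subscale_score(item_responses):
    """Sum of the five 1-5 Likert items of one CTQ-SF subscale (range 5-25)."""
    assert len(item_responses) == 5
    return sum(item_responses)

def emotional_abuse_severity(score):
    """Severity bands for the emotional abuse subscale (cutoffs from the text)."""
    if score <= 8:
        return "none to minimal"
    elif score <= 12:
        return "low to moderate"
    elif score <= 15:
        return "moderate to severe"
    else:
        return "severe to extreme"

def is_significant(severity):
    """Dichotomization used in the analyses: everything above 'none to minimal'."""
    return severity != "none to minimal"

score = subscale_score([2, 3, 1, 2, 2])       # hypothetical responses -> 10
severity = emotional_abuse_severity(score)     # 'low to moderate'
print(score, severity, is_significant(severity))
```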
In order to control for potential confounders of the associations of interest, we calculated multivariate logistic regression models of the presence of body modifications (including separate analyses of the presence of tattoos and piercings). These models included participants' age (as a continuous variable), gender (coded 1 = men, 2 = women), equivalised household income, level of education (1 = lower than the German Abitur, 2 = (comparable to the) German Abitur or higher), and the sum of "significant" kinds of abuse and neglect (referring to the five subscales of the CTQ-SF, using the cutoffs detailed above) as a continuous variable.
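The analyses were run in R, but as a hedged illustration of the model structure, here is an equivalent sketch in Python using statsmodels. The data frame contents are randomly generated placeholders, and only the odds-ratio extraction mirrors what is reported in the results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data frame with the variables named in the text:
# has_mod (0/1), age (years), gender (1=men, 2=women),
# income (1-3), education (1-2), n_significant (0-5 CTQ-SF subscales).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "has_mod": rng.integers(0, 2, n),
    "age": rng.uniform(14, 44, n),
    "gender": rng.integers(1, 3, n),
    "income": rng.integers(1, 4, n),
    "education": rng.integers(1, 3, n),
    "n_significant": rng.integers(0, 6, n),
})

model = smf.logit(
    "has_mod ~ age + C(gender) + C(income) + C(education) + n_significant",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals, as reported in Table 1.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```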
P-values correspond to two-tailed tests. Confidence intervals (CIs) are reported for Odds Ratios (OR). Analyses were carried out using R Version 4.0.3. We calculated the phi coefficient (φ) for associations of dichotomous variables, i.e., comparisons of proportions via χ 2 -tests, and Cohen's d as an effect size measure for standardized differences of mean values, i.e., comparisons conducted via t-tests. Effect sizes and regression coefficients are interpreted following Cohen [43]. Due to the small amounts of missing data (< 2% per variable), we used listwise deletion.
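As a small worked example of the χ²/φ computations used for the group comparisons below, the snippet applies scipy's chi-square test to a hypothetical 2×2 table and derives the phi coefficient from the test statistic; the counts are invented and do not reproduce the study's tables.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = abuse/neglect reported (no/yes),
# columns = has a tattoo or piercing (no/yes).
table = np.array([[519, 280],
                  [135, 126]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())   # phi coefficient for a 2x2 table
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}, phi = {phi:.2f}")
```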
Participants
We analyzed data of 1060 participants. This sample comprised 560 women (52.8%). Participants' mean age was 30.47 years (SD = 8.41). Roughly a quarter of participants had the German Abitur (general university admission, usually obtained after 12-13 years of school) (N = 282, 26.6%), and most participants' incomes fell into the lowest income bracket (N = 628, 59.2%).
Prevalence of tattoos and piercings
In total, 38.1% (N = 404) of the sample reported to have at least one tattoo or piercing. Tattoos were more common (N = 339, 32.0%) than piercings (N = 212, 20.0%). Comparable proportions of men and women reported having tattoos, while piercings were more common among women [χ 2 (1, N = 1058) = 42.52, p < 0.001, φ = 0.20]. Having multiple tattoos was also similarly common among men and women, but more women than men reported having multiple piercings [χ 2 (1, N = 1060) = 16.80, p < 0.001, φ = 0.13]. There were 143 participants (13.5%) who reported to have both (at least one tattoo and at least one piercing).
Prevalence of childhood abuse and neglect
On the basis of the cut-offs established by Häuser, Schmutzer [42], at least one kind of "significant" childhood abuse or neglect was reported by 24.6% of participants (N = 261). Physical forms of abuse or neglect were reported by more participants (N = 223, 21.0%) than emotional forms (N = 155, 14.6%). Those reporting childhood abuse or neglect were more likely to be women.
Association of childhood abuse and neglect with tattoos and piercings
Overall, 48.3% of those reporting at least one kind of abuse or neglect also reported to have at least one tattoo or piercing, compared to 35% among those who reported no childhood abuse or neglect. This difference was statistically significant (χ 2 (1, N = 1058) = 14.45, p < 0.001, φ = 0.12). Likewise, 40.6% of participants who reported at least one kind of abuse or neglect had at least one tattoo, compared to 29.4% of those who did not report any "significant" abuse or neglect (χ 2 (1, N = 1058) = 11.35, p = 0.001, φ = 0.11). Similar ratios were observed regarding piercings: 27.3% of individuals who reported abuse or neglect also reported at least one piercing, compared to 17.8% of those who did not report abuse or neglect (χ 2 (1, N = 1058) = 10.86, p = 0.001, φ = 0.10). Group differences were similar for participants reporting multiple tattoos or piercings (see Fig. 1).
We also investigated tattoos and piercings separately (Fig. 3).
Regression analyses
As both the exposure to childhood abuse and neglect and the presence of tattoos and piercings varied depending on participants' socio-demographic characteristics, we investigated associations of body modifications and childhood abuse and neglect in multivariate analyses which included these potential confounders as covariates (Table 1). There was still a positive association of the number of "significant" kinds of abuse and neglect and the likelihood of reporting any tattoos or piercings (OR = 1.37 (95% CI 1.19-1.58)). The effect applied to tattoos (OR = 1.34 (95% CI 1.16-2.54)) and to piercings (OR = 1.30 (95% CI 1.12-1.51)).

Fig. 1. Percentage of individuals with tattoos and piercings, stratified by reports of childhood abuse and neglect. Proportions of those with tattoos or piercings (or several tattoos or piercings, respectively) were greater among those who reported adverse childhood experiences. All presented differences between those without childhood adversity and those with reports of childhood adversity were statistically significant.
Discussion
This study used a validated assessment of childhood adversity in a representative sample of the German population. We found consistent associations of abuse and neglect and the presence of body modifications. Not only were tattoos and piercings more common among those who reported any kind of childhood adversity, their prevalence rates also increased with greater severity of all kinds of abuse and neglect.
Thus, the results complement previous studies which focused on specific (risk) groups [27,30,44], as individuals with body modifications in our study were part of a random sample. They had not been recruited because of these characteristics and/or the special relevance their tattoos and piercings had for them, personally. The sociodemographic differences among participants with and without tattoos and piercings corresponded to prior representative investigations in the German context [6,13]. However, we also observed a positive association of the sum of significant kinds of childhood adversity and tattoos and piercings in multiple logistic regression analyses that statistically controlled the effects of variables such as age and level of education. Given the growing popularity of tattoos and piercings among younger individuals due to their aesthetic appeal [3], it is especially remarkable that we still found the present associations that indicate other, more personal motivations for body modifications.
These findings corroborate previous research which highlighted the connection between the experience of sexual abuse and intimate piercings [30]. In the present study, we did not differentiate between pierced body parts, but the participant group who reported severe to extreme sexual abuse included the largest proportion of pierced (as well as of tattooed) individuals. Along these lines, the similar patterns observed for tattoos and piercings mirror previous reports of comparable motives [16]. In our study, we did not only observe the anticipated associations of physical and sexual abuse with body modifications, but we also found effects of emotional abuse and neglect. Therefore, piercings and tattoos might not only play a role in coping with negative experiences during which bodily autonomy was restricted or violated. Emotional abuse and neglect have also been highlighted as consequential early experiences which implicate mental distress later in life [45,46]. Hence, survivors of these forms of childhood adversity might perceive the acquisition of body modifications as empowering, too. This hypothesis is supported by a previous study which found that especially individuals who also reported NSSI (an indicator of severe emotional pain which is common among survivors of abuse and neglect; e.g., [47]) cited emotional regulation as a reason for getting tattoos and piercings [24]. Likewise, female study participants with symptoms of unstable personality disorder (a specific pattern of personality pathology that has been linked with traumatic, early interpersonal experiences; e.g., [48]) differed from mentally healthy study participants regarding their motives for body modifications: they attached greater relevance to personal topics such as processing negative life events and coping [49].

Table 1. Logistic regression analyses of having any body modification, at least one tattoo, or at least one piercing on sociodemographic characteristics and childhood abuse and neglect. ¹ Nagelkerke R² = 0.053; ² Nagelkerke R² = 0.051; ³ Nagelkerke R² = 0.098.
These findings have several implications. On the one hand, it would be an unwarranted, overgeneralizing assumption to expect that people's choices of tattoos and piercings are necessarily connected to stressful, early life events. In this study, neither presence nor severity of childhood abuse and neglect were perfectly correlated with body modifications. Previous research has also shown associations with more recent life events [25] and, beyond those, listed various other motivations (including more superficial ones) [16,50,51]. Especially as tattoos and piercings become part of Germany's and other countries' mainstream culture, the embellishment of the body can be assumed to be the primary goal of most people who get tattooed and/or pierced.
On the other hand, the present results indicate new, unconventional opportunities for creating access to psychosocial support, better screening, and potential starting points for interventions in psychotherapy.
First, it would be worth considering whether tattoo and piercing studios should be involved in population-based campaigns aimed at the mitigation of negative consequences of early life adversity. If clients disclose experiences of childhood abuse and neglect, the staff could pass on respective information material including contact details, providing low-threshold access to nearby clinics or counselling services. Clients could still decide for themselves whether to follow up on this offer.
Second, (mental) health care professionals should be aware of the potential significance of patients' body modifications. If it is not already part of routine assessments, patients should be screened for a history of childhood abuse and neglect. In the following, a cautious exploration of their past experiences could contribute to emotional relief. It could also support the prevention of both mental and physical later-life sequelae of childhood adversity (e.g., through psychotherapy, psychoeducation, and adaptive health behaviors).
Third, the results suggested that patients' tattoos and piercings could indicate topics of great significance to them (such as self-determination, or taking control). In the context of psychotherapy, clinicians could explore whether these are also currently important struggles in patients' lives. The acknowledgement of tattoos and piercings as ways of self-expression could also facilitate conversations about individual ways of dealing with the past [52].
Strengths and limitations
The large, representative population sample is a great strength of the present work, also because it precludes issues such as self-selection of participants (e.g., due to their special affinity for tattoos or piercings). We took a number of potentially confounding variables into account (gender, age, equivalised household income, and level of education). However, the study's results need to be interpreted in the context of its limitations. First, the cross-sectional study design cautions against causal interpretations. Childhood abuse and neglect were assessed at a later stage in life and via self-report; however, self-reports of childhood abuse and neglect have been deemed trustworthy [53]. Information regarding body modifications was also assessed in the form of a self-report and could have been more detailed: it was limited to tattoos and piercings and did not include other forms of body modification (such as scarification or transdermal and microdermal implants); participants were not asked where their tattoos or piercings were located, although there is evidence that the placement of a body modification is important with respect to its meaning for the wearer and with regard to its visibility and others' reactions [4,16]; and we did not collect detailed data about the total area or number of modifications, although both seem to be clinically relevant to distinguish fashion-motivated modifications from those used as emotional regulation or coping [8]. However, this last aspect would be difficult to assess in quantitative surveys. Within the present context, assessments of current distress (e.g., symptoms of posttraumatic stress disorder) could have provided further clinically relevant insights. Regarding gender identity, the survey from which we drew our data forced a choice between the options woman or man; there was no option for nonbinary individuals, and it did not differentiate between trans- and cisgender women and men either. Lastly, the current results are based on a German community sample. Therefore, they are only transferable to other cultures to a limited extent. This includes contexts in which body modifications are less common and viewed less favorably by the majority society because they are judged against a particular historical background, for example Japan, where tattoos still carry the stigma of criminal associations [54]. In strong contrast, some body modifications are very popular in other cultures and have great cultural significance for the individual and their community, for instance the piercing of the nose done by Indian women [54]. This limitation applies to most of the published research, which heavily focuses on European or US-American surveys. Furthermore, as respective cultural factors are likely still relevant for migrants, it is a limitation of the present work that it did not differentiate between individuals of different origins and/or nationalities living in Germany.
Conclusions
The present study adds to previous research by confirming positive and similar associations of tattoos and piercings with childhood abuse and neglect within a representative population sample. These relations did not just pertain to physical and sexual abuse, but also to early experiences of neglect and emotional forms of trauma. They were still observed in statistical models that controlled effects of potential socio-demographic confounders such as gender and age. Hence, for a substantial number of individuals who acquire body modifications, they could present a means of coping with previous adversity and be an expression of autonomy. These findings open up new avenues for support offers (involving tattoo artists and piercers) and screening (e.g., in primary care). Tattoos and piercings could also provide an impetus for therapeutic conversations about the significance of past experiences and about currently important themes.
Constraining accuracy of pairwise velocities using scale-free models
We present a continuation of an analysis that aims to quantify the resolution of N-body simulations by exploiting large (up to N = 4096³) simulations of scale-free cosmologies run using Abacus. Here we focus on pairwise velocities of the matter field and of halo centres selected with both the Rockstar and CompaSO algorithms. For the matter field, we find that convergence at the 1% level of the mean relative pairwise velocity can be demonstrated over a range of scales, evolving from a few times the grid spacing at early times to slightly below this scale at late times. Down to scales of order the force smoothing, convergence is obtained at ∼ 5% precision, and shows a behaviour indicating asymptotic stable clustering. We also infer for LCDM simulations conservative estimates on the evolution of the lower cut-off to resolution (at 1% and 5% precision) as a function of redshift. For the halos, we establish convergence, for both Rockstar and CompaSO, of mass functions at the 1% precision level and of the mean pairwise velocities (and also the 2PCF) at the 2% level. We find that of the two halo finders, Rockstar exhibits greater self-similarity, especially at small scales and small masses. We also give resolution limits expressed as a minimum particle number per halo in a form that can be directly extrapolated to LCDM.
INTRODUCTION
Observational tests such as Type Ia supernovae (Perlmutter et al. 1997;Riess et al. 1998), large-scale structure analysis from Baryon Acoustic Oscillations (BAO, Eisenstein et al. 2005;Cole et al. 2005) and the temperature anisotropies of the cosmic microwave background (CMB, Jaffe et al. 2001;Pryke et al. 2002;Planck Collaboration et al. 2014) provide compelling evidence that the Universe is undergoing accelerated expansion. To explain this within the framework of General Relativity requires a new type of "dark" energy that accounts for about 70% of the total energy density, and whose nature is still unknown. In the current standard model of cosmology (LCDM), this energy component is in the form of a cosmological constant. Alternative theoretical approaches either add extra degrees of freedom to characterize the energy content of the Universe or modify the Einstein-Hilbert action (for a review on these models see Clifton et al. 2012).
Ongoing and future surveys such as the Dark Energy Spectroscopic Instrument (DESI) (DESI Collaboration et al. 2016) or the space-based mission Euclid (Laureijs et al. 2011) will provide large scale structure maps of the Universe of unprecedented statistical precision, allowing astronomers to measure the expansion history of the Universe and the growth rate of cosmic structures in sufficient detail to potentially distinguish between the different possible aforementioned scenarios.
Indeed, one of the most valuable tests to discriminate between these multiple models observationally, and ultimately determine which can explain current data, consists in the study of the rate at which cosmic structures grow (see e.g. Perenon et al. 2019;Brando et al. 2021), as different theories can predict quite different growth histories even for the same background evolution. A popular way of constraining this growth rate is by analysing the corrections to galaxy redshifts due to their peculiar velocities, which produces a modification of galaxy clustering, an effect called redshift-space distortions (RSD, Jackson 1972;Kaiser 1987). Since peculiar velocities are caused by gravitational pull, we can trace a relation between the velocity field and the mass density field and thus estimate the rate at which structures grow.
In order to exploit this information, it is essential to calculate accurate theoretical predictions for the large-scale structure of the Universe. Below scales where the perturbative approaches break down, such calculations rely entirely on cosmological simulations performed using the N-body method. This approach approximates the continuous phase-space distribution of dark matter by that of a sparse finite sample of particles, and evolves them in a finite box with periodic boundary conditions. In this context, an important question is the accuracy and scale-range limitations of this method in attaining the physical limit.
The assessment of the accuracy to which results converge to values independent of the numerical parameters (time stepping, force accuracy parameters) introduced in the resolution of the N-body system is straightforward. In this respect, extensive code comparisons (Heitmann et al. 2008;Schneider et al. 2016;Garrison et al. 2019;Grove et al. 2022) give considerable added confidence in the precision of results for different statistics. Such comparisons do not address, however, the question of the accuracy with which these simulations represent the physical limit. While dependence on box size can be assessed by direct extrapolation studies (see e.g. Euclid Collaboration et al. 2019), assessing the accuracy limitations imposed at small scales due to the discretization of the matter field is much more complex. The reason is that there are, at least, two relevant unphysical parameters, the mean interparticle spacing (denoted Λ here) and the gravitational force smoothing (denoted ε), and numerical extrapolation to the continuum physical limit, corresponding to ε/Λ → 0, is in practice unattainable. Precise quantitative conclusions regarding it have remained elusive and sometimes controversial (see Joyce et al. 2021, for a discussion and some references).
Previous studies using N-body simulations have already used the information contained in the dark matter and halo pairwise velocity field to study plausible deviations from the standard model (Hellwing et al. 2014;Gronke et al. 2015;Bibiano & Croton 2017;Valogiannis et al. 2020). Such conclusions ultimately rely on the ability of the N-body method to accurately predict and compute the desired statistic, and on the chosen halo finder retrieving halo properties accurately. But halos are not uniquely defined entities, and their properties depend strongly on the algorithm adopted for their extraction. In addition to the aforementioned efforts in examining precision in N-body simulations, several studies have been carried out to assess the accuracy and resolution of different codes for halo recovery (see Knebe et al. 2011, for a review).
In this article, we use the techniques introduced in Joyce et al. (2021) and developed and applied also in Leroy et al. (2021); Garrison et al. (2021a); Garrison et al. (2021c) and Maleubre et al. (2022) to derive resolution limits arising from particle discretization for different statistics by analysing deviations from self-similarity in scale-free cosmological models. Here, we employ these methods to assess and quantify the limits arising from discretization on the precision at which the radial component of the pairwise velocity of the full dark matter field, and of halos, can be retrieved from N-body simulations. In addition, we revisit and develop further the analysis in Leroy et al. (2021) of the mass functions and two-point correlation function of halos, extending it to include both larger simulations and scale-free models with different exponents, as well as the new halo finder CompaSO (Bose et al. 2022).
This article is structured as follows. The first part of the next section describes what scale-free cosmologies are and how their self-similar evolution can be used to determine the accuracy at which different statistics can be measured in N-body simulations. Next, we recall the expressions for the radial component of the pairwise velocity and the pair conservation equation, as well as give the equation for the latter in the context of scale-free cosmologies. We end the section with a description of the halo statistics that will be analysed. Section 3 contains a summary of the simulations used, as well as a brief description of Abacus, the N-body code used for their computation. It also contains a description of the method used to estimate convergence of the different statistics in both the dark matter field and halos, and ends with a summary of the halo finders we compare (Rockstar and CompaSO). In section 4 we present and analyse our results for both dark matter and halos, as well as infer resolution limits for non-scale-free cosmologies. Finally, we summarize our results in Section 5.
Scale-free simulations and Self Similarity
Scale-free cosmologies have an Einstein-de Sitter, EdS, (Ω = 1) background and a power-law power spectrum (P(k) ∝ k^n) of initial perturbations, which are thus characterized by just one length scale, the scale of non-linearity. This can be defined by σ²_lin(R_NL, a) = 1 (1), where σ²_lin is the variance of normalized linear mass fluctuations in a sphere. Its temporal evolution can be calculated from linear perturbation theory as R_NL ∝ a^(2/(3+n)) (2). One can infer that, if the evolution of gravitational clustering is independent of any other length scale (notably ultraviolet or infrared cut-offs to the assumed power-law fluctuations), it must be self-similar, i.e., the temporal evolution of the statistics describing clustering is given by a spatial rescaling following Eq. 2. More specifically, any dimensionless function F(x_1, x_2, ...; a) describing clustering (where the x_i are the parameters on which the statistic depends) will obey a relation of the form F(x_1, x_2, ...; a) = F_0(x_1/x_NL,1, x_2/x_NL,2, ...) (3), where x_NL,i encodes the temporal dependence of the characteristic scale with the same dimensions as x_i (as inferred from R_NL). Our interest in self-similarity is driven by the fact that it greatly simplifies the description of clustering: its time dependence is effectively trivial, and any statistic describing clustering is specified by the single time-independent function on the right-hand side of Eq. 3. As discussed in our previous papers, we can use this property to determine the range of scales that a simulation can reliably reproduce: any deviation from self-similarity arises necessarily from dependence on the unphysical scales proper to the N-body simulations.
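As an illustration of how this property is used in practice, the sketch below (Python, with hypothetical input arrays and function names of our own choosing, not part of the Abacus pipeline) rescales a measured clustering statistic onto a common x/x_NL grid so that outputs from different times can be superposed; curves that fail to superpose on this grid directly flag the scales affected by the unphysical simulation parameters.

```python
import numpy as np

# Minimal sketch (hypothetical inputs): checking self-similarity by
# rescaling a measured statistic onto a common x/x_NL grid.

def x_nl(a, a0, x_nl0, n):
    """Non-linearity scale at scale factor a, given its value x_nl0 at a0
    (Eq. 2: R_NL grows as a^(2/(3+n)))."""
    return x_nl0 * (a / a0) ** (2.0 / (3.0 + n))

def rescale_statistic(x, stat, a, a0, x_nl0, n, common_grid):
    """Interpolate a statistic measured on separations x at scale factor a
    onto a common grid of x/x_NL; self-similar outputs then superpose."""
    y = x / x_nl(a, a0, x_nl0, n)
    return np.interp(common_grid, y, stat)
```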
Pairwise Velocity and pair-conservation equation
In this study we focus on the radial component of the mean pairwise velocity, defined by v_12(r) = ⟨(v_1 − v_2) · r̂⟩ (4), where the velocity difference (v_1 − v_2) of a pair of objects is projected on to their separation vector r, and ⟨···⟩ denotes the ensemble average. It can be estimated in a finite simulation by directly averaging the pair velocity over all pairs. To do so, here we have coded an appropriate modification of the analysis tool Corrfunc (Sinha & Garrison 2019). To facilitate our analysis based on self-similarity, we will always consider below the dimensionless ratio of v_12 to the Hubble flow (Hr), so that self-similarity has the simple expression in the form of Eq. 3.
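A minimal brute-force version of this direct estimator is sketched below; the actual analysis uses a modified Corrfunc, and this O(N²) loop (which also ignores periodic boundaries) is only meant to make the definition of Eq. 4 concrete.

```python
import numpy as np

# Brute-force sketch of the direct estimator of v12/(Hr) (Eq. 4).

def radial_pairwise_velocity(pos, vel, bin_edges, H):
    """pos, vel: (N, 3) arrays of comoving positions and peculiar velocities;
    bin_edges: separation bin edges; H: Hubble rate. Returns v12/(H r)."""
    nb = len(bin_edges) - 1
    vsum, count = np.zeros(nb), np.zeros(nb)
    for i in range(len(pos) - 1):
        dr = pos[i + 1:] - pos[i]
        r = np.linalg.norm(dr, axis=1)
        # Radial component of the relative velocity, (v_j - v_i) . rhat:
        vr = np.einsum('ij,ij->i', vel[i + 1:] - vel[i], dr) / r
        idx = np.digitize(r, bin_edges) - 1
        ok = (idx >= 0) & (idx < nb)
        np.add.at(vsum, idx[ok], vr[ok])
        np.add.at(count, idx[ok], 1.0)
    r_mid = 0.5 * (bin_edges[1:] + bin_edges[:-1])
    return vsum / np.maximum(count, 1.0) / (H * r_mid)
```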
The first half of this paper focuses on the matter field, and the choice to study v_12 is motivated by the fact that, in this case, it can also be related to the two-point correlations of mass density via the so-called pair conservation equation. This relation was first derived by Davis & Peebles (1977) as a consequence of the BBGKY equations. In their statistical description, matter was approximated by a set of identical particles of mass m, making their theoretical results directly applicable to those of N-body simulations. Starting from the continuity equation for the density contrast (zeroth moment of the Vlasov equation) one obtains the pair conservation equation: ∂ξ/∂τ + (1/x²) ∂[x²(1 + ξ)v_12]/∂x = 0 (5), where τ is equal to the conformal time and ξ_12 is the standard reduced two-point density-density correlation function (2PCF) defined as the ensemble average at two different locations (1 + ξ_12 = ⟨(1 + δ(x_1))(1 + δ(x_2))⟩). This can be conveniently rewritten as (Nityananda & Padmanabhan 1994): v_12/(Hr) = −(1/3)(1 + ξ)⁻¹ ∂ξ̄/∂ln a (6), where ξ̄ = 3x⁻³ ∫₀ˣ ξ(y) y² dy, the cumulative two-point correlation function (cumulative 2PCF), is the average 2PCF interior to x, and where we have normalized the velocity to the Hubble flow (Hr). For economy, we have dropped the indices 12 in the two-point quantities. As Eq. 6 is exact, it implies that we can estimate v_12 in a finite sample indirectly, using instead of the velocities themselves the direct estimators of the 2PCF, the cumulative 2PCF and its derivative, combined in the appropriate way. This has been previously exploited in an early study of the pair velocity in scale-free models by Jain (1997), focused on the question of whether clustering becomes stable at small scales (Peebles 1974), i.e. whether it tends to become stationary in physical coordinates, corresponding to v_12/(Hr) = −1. In the context of scale-free models and their expected self-similarity, it is convenient to rewrite Eq. 6 with the time derivative taken at a fixed value of the rescaled comoving separation (i.e. at fixed x/x_NL rather than fixed x): v_12/(Hr) = (2/(3+n)) (ξ − ξ̄)/(1 + ξ) − (1/3)(1 + ξ)⁻¹ ∂ξ̄/∂ln a |_(x/x_NL) (7). When the two-point density correlations (as described by ξ and ξ̄) are self-similar, the last term vanishes and we can infer that v_12/(Hr) is also self-similar. On the other hand, self-similarity of ξ and ξ̄ is not a requirement for that of v_12/(Hr). We will pay careful attention to this point in our analysis below, and we will show that there is in fact a regime in our simulations in which v_12/(Hr) approximates well self-similarity while the 2PCF does not.
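The following sketch illustrates the indirect, pair-counting route in the self-similar regime: it builds ξ̄ from a measured ξ by numerical integration and applies Eq. 7 with the last term set to zero. The trapezoidal integration and the neglect of the contribution below the first grid point are our own simplifications.

```python
import numpy as np

# Sketch of the pair-counting estimator in the self-similar regime:
# Eq. 7 with the time-derivative term set to zero.

def cumulative_2pcf(x, xi):
    """xibar(x) = 3 x^-3 * int_0^x xi(y) y^2 dy on a grid of separations
    (trapezoidal; contribution below x[0] is neglected)."""
    integrand = xi * x ** 2
    parts = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)
    integral = np.concatenate(([0.0], np.cumsum(parts)))
    return 3.0 * integral / x ** 3

def v12_over_hr_self_similar(x, xi, n):
    """v12/(Hr) = (2/(3+n)) (xi - xibar)/(1 + xi) when xibar is self-similar."""
    xibar = cumulative_2pcf(x, xi)
    return (2.0 / (3.0 + n)) * (xi - xibar) / (1.0 + xi)
```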
Halo quantities
In Sec. 4.2 we will use self-similarity to test different halo selection algorithms. As for the matter field, our focus is on the convergence and accuracy of the measurements obtained from these algorithms of the mean pairwise velocity of halo centres. As the latter are measured as a function of halo mass, we will start by analysing the convergence of the halo mass function (HMF) as a function of mass, extending a similar analysis already given in Leroy et al. (2021) to incorporate the CompaSO algorithm (and also the results of larger simulations and scale-free models).
We recall that the halo mass function is just the number density of halos of a given mass at a given redshift. Following the treatment of Press & Schechter (1974), it is convenient to express it in terms of the "multiplicity" function (Jenkins et al. 2001;Tinker et al. 2008): f(σ) = (M/ρ̄) dn/d ln σ⁻¹ (8), where ρ̄ is the mean matter density, and σ is defined by Eq. 1 but now as a function of mass using M_NL ∝ R³_NL.
For scale-free cosmologies, we can conveniently express f(σ) in terms of the rescaled mass M/M_NL as f(M/M_NL) = (6/(3+n)) (M²/ρ̄) dn/dM (9), where we used d ln σ⁻¹/d ln M = (3+n)/6, which follows from Eq. 1. The halo-halo 2PCF, ξ_hh(x, M, a), is a dimensionless function of the separation x and of the halo mass M, calculated at a given snapshot. Similarly, the radial component of the pairwise velocity can be computed as the correlation between two centres weighted by their projected velocity. In both cases, if self-similarity applies, it is conveniently rewritten in terms of the dimensionless rescaled functions ξ_hh = ξ_hh,0(x/x_NL, M/M_NL) and v_12,hh/(Hr) = v_0(x/x_NL, M/M_NL).
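As an illustration, the rescaled multiplicity function of Eq. 9 can be estimated from a halo catalogue along the following lines; this is a sketch with assumed inputs (halo masses, box volume, mean density, M_NL), and the binning choices here are arbitrary.

```python
import numpy as np

# Sketch: estimating f(M/M_NL) of Eq. 9 from a halo catalogue.

def multiplicity_function(m, V, rho_bar, M_NL, n, nbins=30):
    """m: halo masses; V: box volume; rho_bar: mean matter density;
    M_NL: non-linear mass at this output; n: spectral index."""
    edges = np.logspace(np.log10(m.min()), np.log10(m.max()), nbins + 1)
    counts, _ = np.histogram(m, bins=edges)
    dM = np.diff(edges)
    Mc = np.sqrt(edges[1:] * edges[:-1])      # geometric bin centres
    dn_dM = counts / (V * dM)                  # halo mass function
    f = (6.0 / (3.0 + n)) * Mc ** 2 / rho_bar * dn_dM
    return Mc / M_NL, f                        # rescaled mass, multiplicity
```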
Abacus code and simulation parameters
We report results based on the simulations listed in Table 1, performed using the Abacus N-body code (Garrison et al. 2021b). Abacus offers high performance and accuracy, based on a high-order multipole method to solve far-field forces and an accelerated GPU calculation of near-field forces by pairwise evaluation. While the N = 1024³ simulations were run using local facilities at the Harvard-Smithsonian Center for Astrophysics (CfA), the larger N = 4096³ simulations are part of the AbacusSummit project (Maksimova et al. 2021), which used the Summit supercomputer of the Oak Ridge Leadership Computing Facility.
The simulation data we exploit in this article are summarized in Table 1. As in Maleubre et al. (2022), we have simulated three different exponents (n = −1.5, n = −2.0 and n = −2.25), chosen to probe the range relevant to standard (i.e. LCDM-like) models. For the first two exponents, we have two simulations with different N but otherwise identical parameters, allowing us to study finite box size effects. For the larger (N = 4096³) simulations, the statistics have been calculated on (random) sub-samples of different sizes (25%, 3%) to facilitate the assessment of finite sampling effects. For the other two spectral indices, n = −2.0 and n = −2.25, we have four N = 1024³ simulations, each with identical N-body parameters but different realizations of the initial conditions (IC). These will be analysed below, both individually and as an average.
We work in units of the mean inter-particle (i.e. initial grid) spacing, Λ = L/N^(1/3). The essential time-stepping parameter in Abacus has been chosen as η = 0.15 for all simulations, and the additional numerical parameters have been set as detailed in Maleubre et al. (2022). These choices are based on the extensive convergence tests of these parameters reported in our previous studies (see Joyce et al. 2021;Garrison et al. 2021a).
The remaining parameter corresponds to the softening length ε. As previously introduced in Garrison et al. (2016), Abacus performs a spline softening derived as a Taylor expansion in ε of the Plummer softening expression, requiring a smooth transition at the softening scale up to the second derivative. All softening lengths in this study have been fixed in proper coordinates for the interesting redshifts, decreasing as ε(a) ∝ 1/a in comoving coordinates, those used by the simulation. To avoid a too large softening at earlier times, we fixed it in comoving coordinates down to a_0, the first output of our simulation, and change to proper from then on. For all the simulations studied here, we use ε(a_0)/Λ = 0.3. This value has been chosen following the results in Garrison et al. (2021a) and Maleubre et al. (2022), being both accurate and efficient for the spectral indices analysed.
The start of the simulation (a = a_i) is chosen by fixing the amplitude of top-hat density fluctuations at the particle spacing, σ(Λ, a_i), to a small chosen value, while the first output epoch (a = a_0) corresponds approximately to the formation of the first non-linear structures, fixed at the time at which fluctuations of peak-height ν ≈ 3 are expected to virialize in the spherical collapse model (σ ∼ δ_c/ν, with δ_c = 1.68): σ(Λ, a_0) = δ_c/3 ≈ 0.56 (13). Subsequent output values are spaced by a factor √2 in the non-linear mass scale. Given that M_NL ∝ R³_NL and substituting in Eq. 13, we get: a_(S+1)/a_S = 2^((3+n)/12) (14). We use log₂(a/a_0) as the time variable of our analysis, which indicates how many epochs have passed since the first output. It is also convenient to define the variable S = (12/(3+n)) log₂(a/a_0) (15), with S = 0, 1, 2, ... corresponding to the different outputs of the simulation. Initial conditions have been set up using a modification to the standard Zel'dovich approximation (ZA), detailed in Garrison et al. (2016). This includes a second order Lagrangian perturbation theory (2LPT) correction as well as particle linear theory (PLT) corrections as described in Joyce & Marcos (2007) and Garrison et al. (2016). The latter corrects the initial conditions for discreteness effects at early times, so that the result of fluid evolution is reproduced at a target time a = a_PLT. For all our simulations here we have a_PLT = a_0, with a_0 defined by Eq. 13.
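For reference, the output-time conventions of Eqs. 14 and 15 amount to the following simple conversions (a sketch; the function and variable names are ours):

```python
import numpy as np

# Output-time conventions of Eqs. 14-15: snapshots spaced by sqrt(2) in
# M_NL, and S = (12/(3+n)) log2(a/a0) labels the outputs.

def scale_factor_of_output(S, a0, n):
    """Scale factor of output S (S = 0, 1, 2, ...)."""
    return a0 * 2.0 ** (S * (3.0 + n) / 12.0)

def output_of_scale_factor(a, a0, n):
    """Inverse relation, Eq. 15."""
    return 12.0 / (3.0 + n) * np.log2(a / a0)
```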
Estimation of converged values
As in our previous papers, we will assess the convergence to the physical limit by studying the temporal evolution of statistics, which become time-independent in the case of self-similarity. To make this study quantitative, i.e. to identify estimated converged values and converged regions at some precision, we need to adopt appropriate criteria. While the conclusions drawn should not of course depend significantly on the chosen criteria, these criteria are intrinsically somewhat arbitrary in detail. In practice, their choice is made based on visual examination of data. We follow here the simple procedure described in Maleubre et al. (2022). It allows us to estimate a converged value and converged region at a chosen precision, per rescaled bin for each of the statistics analysed in this paper. The method is equivalent for all our dimensionless statistics, whether they are matter-field (ξ, ξ̄, v_12/Hr) or halo (f(M/M_NL), ξ_hh, v_12,hh/Hr). We denote our chosen statistic by Q in the following.
We first calculate an estimated converged value (denoted as Q_est) in each rescaled bin as the average of the statistic in a specific temporal window. The width of this window is conveniently specified by a number of snapshots N_s, corresponding to an increase in the non-linearity scale by a factor of 2^(N_s/6) (below we use N_s = 5). To identify the location of the candidate converged window, we "slide" a window of width N_s across the data to find that which minimizes Δ = (Q_max − Q_min)/|Q̄| (16), where Q_max, Q_min, and Q̄ are respectively the maximum, minimum, and average values in the window. Specifying now a parameter p characterizing the precision of convergence, any bin is considered to be converged only if the minimal value of Δ is less than p.
To identify the region of convergence to this estimated value (at precision p), for each rescaled bin with a converged Q_est, we find the largest (containing at least three consecutive snapshots, though again this number is not essential) connected temporal window verifying |Q − Q_est| < p |Q_est| (17). We denote Q_conv the average calculated over this new window, and take this as the estimated converged value of the statistic for the given rescaled bin. We note that, in the following (as in Maleubre et al. 2022), when we say that we have precision at X% we mean that p = X/100.
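A compact implementation of this two-step criterion (Eqs. 16 and 17) for a single rescaled bin could look as follows; this is a sketch of the procedure described above, not the code actually used.

```python
import numpy as np

def estimate_converged_value(q, Ns=5, p=0.01, min_window=3):
    """Converged value Q_conv of a time series q (one rescaled bin),
    or None if the bin fails the precision-p criterion (Eqs. 16-17)."""
    q = np.asarray(q, dtype=float)
    if len(q) < Ns:
        return None
    # Step 1 (Eq. 16): slide a window of width Ns, minimizing
    # Delta = (max - min)/|mean| to locate the candidate converged window.
    best, q_est = np.inf, None
    for i in range(len(q) - Ns + 1):
        w = q[i:i + Ns]
        delta = (w.max() - w.min()) / abs(w.mean())
        if delta < best:
            best, q_est = delta, w.mean()
    if best >= p:
        return None  # bin not converged at precision p
    # Step 2 (Eq. 17): largest connected run with |q - q_est| < p |q_est|.
    ok = np.abs(q - q_est) < p * abs(q_est)
    runs, start = [], None
    for i, flag in enumerate(np.append(ok, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i))
            start = None
    runs = [(s, e) for s, e in runs if e - s >= min_window]
    if not runs:
        return None
    s, e = max(runs, key=lambda r: r[1] - r[0])
    return q[s:e].mean()  # Q_conv
```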
In the results presented below, all two-point quantities have been calculated over the same x/x_NL grid, whether they are matter-field or halo. We use bins of constant logarithmic spacing 1 + (Δx/x) ≈ 2^(1/12) (following Maleubre et al. 2022), ensuring that bins of different snapshots match when rescaled by x_NL to facilitate comparison between them. In order to reduce statistical noise sufficiently, we have rebinned by grouping four such bins, corresponding to Δx/x ≈ 0.26. In the case of the HMF, mass-bins were chosen and rebinned so that ΔM/M ≈ 0.5. In our presentation below we label our bins, for simplicity, just by the value of the rescaled variable at the geometrical centre of the bin.
Halo Finders: Rockstar and CompaSO
In this paper we analyse results from two different group-finding algorithms, comparing their level of resolution in a set of halo statistics, as well as the accuracy of convergence. Competitive Assignment to Spherical Overdensities (CompaSO) is a newly developed halo finder specifically created to meet the demanding requirements of the AbacusSummit cosmological N-body simulations. It runs on-the-fly, as part of the simulation code itself, with two of its primary requirements being keeping up with the high speed of Abacus (Maksimova et al. 2021), and supporting the creation of merger trees to be used in the Dark Energy Spectroscopic Instrument (DESI) project. On the other hand, Robust Overdensity Calculation using K-Space Topologically Adaptive Refinement (Rockstar) (Behroozi et al. 2013) is a well established, widely used halo-finding algorithm. It has been subjected previously to self-similarity tests (for the HMF and 2PCF) in scale-free cosmologies in Leroy et al. (2021). In Sec. 4.2 we will compare results from both group-finder algorithms.
[Table 1. Summary of the N-body simulation data used for the analysis of this paper. The first column shows the spectral index n of the initial PS, the second the number of particles N of each simulation, and the third column shows the number of simulations with identical parameters but different realizations of the IC. The fourth column shows the available statistic and sampling of the matter field, while the last one indicates the run halo finder.]

The CompaSO algorithm is a configuration-space, FoF and SO algorithm to compute halos from N-body simulations. It first obtains a measurement of the local density using a kernel of the form W = 1 − r²/r²_kernel, where typically r_kernel = 0.4Λ. Particles with a density higher than a chosen threshold Δ are then grouped together into FoF groups (L0 halos). The main halos (L1 halos) are then formed inside these groups. Within each group, the algorithm finds the particle with the highest kernel density (the first halo nucleus) and makes a preliminary assignment to it of all particles within a radius R_L1 (the innermost radius enclosing an overdensity Δ < Δ_L1 = 200 in EdS). Particles outside 80% of R_L1 are eligible to become their own halo centre as long as they are the densest within their kernel radius. The algorithm then finds the next highest density among eligible particles, which becomes the next halo nucleus. Particles are assigned to this nucleus as to the first, but if a particle belongs to both halos one and two, the algorithm performs a competitive assignment. This reassigns a particle to a new halo if its enclosed density with respect to the new halo is twice that of the old one. The search for new halo centres within the L0 group continues until no particles remain that are likely to nucleate halos of sufficient density. CompaSO can sometimes fragment elongated halos into multiple objects, due to its spherical nature, or identify substructure as a distinct halo at one epoch that was already identified as a monolithic halo at a previous epoch. For this reason, a cleaning procedure is performed in post-processing, relying on merger-tree information (Bose et al. 2022). This procedure checks what fraction of the particles of a halo at time t_i come from a much larger halo located at a similar position at times t_(i−1) and t_(i−2). If a sufficiently large fraction did, then the newer halo is deemed a "potential split" and merged into the larger halo. In addition, if at an earlier redshift a halo's peak mass exceeds more than twice its present day mass, it is also merged into a more massive neighbour, from whom it had presumably split off. The described cleaning method affects, in general, low-mass halos around more massive ones, appending their particle list to the latter, and resulting in cleaned halo catalogues with a lower number of smaller halos vs. a larger number of bigger halos. As we will show in Sec. 4.2.1, this shifts the value of the HMF in each mass-bin exactly in the correct direction to preserve self-similarity, which is evidence for the good performance of the procedure.
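To make the density-ranking step concrete, a brute-force version of the quadratic kernel density described above might look as follows; this is an unnormalized sketch, without the neighbour-search acceleration a production code would use.

```python
import numpy as np

def kernel_density(pos, r_kernel):
    """Unnormalized CompaSO-style kernel densities, W = 1 - r^2/r_kernel^2,
    summed over neighbours within r_kernel (self term excluded).
    Brute force; a production code would use a neighbour tree."""
    n = len(pos)
    dens = np.zeros(n)
    for i in range(n):
        r2 = np.sum((pos - pos[i]) ** 2, axis=1)
        w = 1.0 - r2 / r_kernel ** 2
        dens[i] = w[r2 < r_kernel ** 2].sum() - 1.0  # subtract self (w = 1)
    return dens
```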
Rockstar is a halo finder operating in the six phase-space dimensions plus time, aiming at maximizing consistency of halo properties across snapshots. The code starts by creating FoF groups with a linking length larger than standard (b = 0.28 by default), which assures that virial spherical overdensities can be determined within them. For each of these FoF groups, a phase-space metric is defined by normalizing the positions and velocities of the particles by the position and velocity dispersions of the group, such that for two particles p_1 and p_2 the distance metric is defined by: d(p_1, p_2) = (|x_1 − x_2|²/σ_x² + |v_1 − v_2|²/σ_v²)^(1/2) (18) (a short sketch of this metric is given at the end of this subsection). The algorithm now performs a modified FoF in phase-space within each group, where it links particles with an adaptive phase-space linking length such that a constant fraction of particles (default 70%) is always linked together with at least another particle into subgroups.
The process repeats for each subgroup, creating a hierarchical set of structures until a minimum size substructure is found at the deepest level. Seed halos are placed at this final structure, and particles at higher levels are assigned to the closest seed halo in phase-space, where now the metric (Eq. 18) is calculated with respect to the seed halo. More than one seed can be found within each of the first level FoF groups, corresponding to either a halo or a subhalo. This categorization is performed by including temporal information from previous steps, following particle-halo associations across timesteps. During its final step, Rockstar calculates the gravitational potential of all particles using a modified Barnes-Hut method in order to unbind particles. Rockstar defines halo masses by using various (user-specified) SO criteria. Following the results in Leroy et al. (2021), we restrict ourselves in this study to using the SO mass corresponding to the virial radius, including all halo structures and considering only gravitationally bound mass (STRICT_SO_MASSES=0). Finally, halo centres and velocities are calculated in the code using a subset of the innermost particles (∼ 10% of the halo radius), minimizing the Poisson error (∝ 1/√n for n particles). Rockstar has been run with default parameters except for MIN_HALO_OUTPUT_SIZE=25 and TEMPORAL_HALO_FINDING=0, to be consistent with Leroy et al. (2021).
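The phase-space metric of Eq. 18 is straightforward to write down explicitly; the sketch below assumes the group dispersions σ_x and σ_v have already been computed, and the function name is our own.

```python
import numpy as np

def phase_space_distance(x1, v1, x2, v2, sigma_x, sigma_v):
    """Dimensionless Rockstar-style distance (Eq. 18) between two particles,
    with positions/velocities normalized by the group dispersions."""
    dx2 = np.sum((np.asarray(x1) - np.asarray(x2)) ** 2) / sigma_x ** 2
    dv2 = np.sum((np.asarray(v1) - np.asarray(v2)) ** 2) / sigma_v ** 2
    return np.sqrt(dx2 + dv2)
```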
Radial pairwise velocity of matter field
As discussed above, in a scale-free cosmology, self-similarity implies an independence of the results of an N-body simulation from their discretization parameters. By carefully examining the departures from self-similarity that are actually measured, we can infer how the resolved scales depend on the unphysical scales in the N-body simulation. We report in this section this analysis for the mean pairwise velocity in the matter field.
Direct estimation
As discussed in subsection 2.2, v_12/(Hr) can be estimated directly from the measured particle velocities, or indirectly from measurements of the 2PCF. We consider first the former estimate. Fig. 1 shows the estimated v_12/(Hr) as a function of time (parameterized by the variable log₂(a/a_0)) at different rescaled distances, for spectral indices n = −1.5, n = −2.0 and n = −2.25. Each plot corresponds to the simulations with the highest number of particles (N = 4096³ for n = −1.5, and the average of the four N = 1024³ simulations for the other two). The left panel gives v_12/(Hr) as a function of x/Λ (with Λ the grid spacing), while the right panel gives it as a function of the rescaled variable x/x_NL. Self-similarity corresponds to the superposition of the data at different times in the latter plot.
These plots show qualitatively the general behaviour of the statistic, which is similar to that seen for the 2PCF and the PS (Maleubre et al. 2022). Self-similarity can be seen to propagate from larger comoving scales, significantly above Λ at early times, to smaller scales as time evolves. In particular, the scales around the "turnaround" point, corresponding to the maximal radial infall velocity, are only resolved at later times. As for the 2PCF and PS in our previous studies, the redder the index, the more reduced is the range of approximate self-similarity. This is a reflection primarily of the smaller range of scale factor which is accessible in simulations of a fixed size as n decreases, and also, as we will see further below, of larger finite box size effects. Finally, we note that all three models appear to show the same behaviour at asymptotically small scales, tending to a value close to −1, the value predicted by the stable clustering hypothesis. We will assess these behaviours quantitatively below in subsubsection 4.1.3.
Estimation using pair conservation
We next consider the estimation of v_12 from the estimated 2PCF, using the exact relation Eq. 7 for v_12 in terms of ξ, ξ̄ and ∂ξ̄/∂ln a. As noted, we can also test the validity of the relation when the last term vanishes, which corresponds to self-similarity of ξ̄. Fig. 2 shows the normalized pairwise velocity at each rescaled coordinate for a set of selected redshifts, in the same way as in the right panel of Fig. 1. In addition, we have added a dotted line which gives the new estimation obtained using pair conservation: the left panels exclude the non-self-similar term (i.e. use Eq. 7 with the last term set to zero), while the right panels correspond to the full (exact) expression Eq. 7. To estimate the time derivative, we have simply used a finite difference estimate on the closest two "neighbouring" snapshots.
In the right panels we see that, as required by pair conservation, we recover v_12/(Hr) to a very good approximation from the alternative estimator. The very small differences can be attributed to finite particle number noise and a possible systematic offset due to the estimation of the time derivative. Given the close spacing (Eq. 14) of our snapshots, it is unsurprising that any such effect appears to be small. At small scales, on the other hand, close examination shows that the pair conservation estimator is slightly less noisy than the direct one. This is as might be anticipated: because of the intrinsic dispersion in the pairwise velocities, we can expect their average to have a greater variance than the direct pair count (as noted previously by Jain 1997). Thus, in assessing what is required to obtain an accurate estimation of the pairwise velocity, one needs to weigh the need for closely spaced outputs to accurately estimate the time derivative, if pair counting is used, against the need for a larger volume for accurate direct estimation.
The left panels, on the other hand, show very large discrepancies between the two estimators, which we can infer as being due to a significant deviation in the corresponding range of the (integrated) 2PCF from self-similarity. Indeed, we can see that this is the case from the corresponding direct analysis of ξ̄ displayed in Fig. 3: the scales at which the agreement of the estimators breaks correspond to the break from self-similarity of ξ̄. We note that, at late times, the associated break appears to occur at a scale where v_12/(Hr) approaches −1, the value corresponding to stable clustering. Thus, there is indeed a range where approximate self-similarity of v_12/(Hr) appears to persist despite the fact that the 2PCF differs much more from its physical values, and this range appears to correspond, at later times, to that where stable clustering is well approximated.
Quantitative determination of resolved scales
To better understand, and then also quantify, the limitations on the range of self-similarity arising from the different unphysical simulation parameters (specifically Λ, ε and L) we now study more closely the evolution as a function of time of v_12/(Hr) (estimated directly and indirectly via pair conservation), and of ξ and ξ̄, for fixed values of x/x_NL. This corresponds to taking the values on vertical lines in the right panels of Fig. 1 (and the equivalent plots for ξ and ξ̄). As discussed, self-similarity of the statistic then corresponds to time independence, i.e. to convergence (in some range) of the time series to a fixed value. Figs. 4 and 5, for spectral indices n = −1.5 and n = −2.0 respectively, show such plots for three chosen values of x/x_NL. (We exclude n = −2.25 for economy, but will discuss it further below.) To help understand the scales involved in each plot, we also display the values of x/Λ on the upper x-axis. As x_NL is a monotonically growing function of time, x/Λ increases from left to right, translating the fact that the spatial resolution relative to the grid increases with time in these plots. We note that in almost all the plots we can identify easily by eye what appears to be a converged value in a finite range of scale (the only exceptions are those of ξ̄ in the first panels). In all these cases, a lower cut-off to this converged range is clearly identifiable. As we discussed in the analysis of similar plots in our previous analyses (Maleubre et al. 2022), and will see again in detail now, this lower cut-off clearly corresponds to the resolution limit fixed by the ultraviolet cut-offs (Λ and ε).
The different estimations of the statistics shown are indicated in the legend and described in the figure caption. Recall that, as detailed in Table 1, the properties of the simulations analysed differ for the two different exponents. While data for n = −1.5 correspond to a single realization of each box size, n = −2.0 presents data from four different realizations of N = 1024³ boxes and their statistical average.
In the cases in which the rescaled bin is converged following the criterion specified above, in subsection 3.2, at a precision of 1% (i.e. p = 0.01), the estimated converged value is indicated as a dashed line and the red shaded region indicates that within 1% of this value. In addition, we add a sub-plot with the dispersion between this value and individual data from direct estimation from all our simulations (including the individual N = 1024³ boxes with n = −2.0). This value of 1% is chosen because it is approximately the smallest value of p for which we obtain a significant range of contiguous bins satisfying our convergence criteria. It corresponds to the highest precision (i.e. smallest p) at which we can in practice establish convergence using our data.
The first panel of each figure corresponds to a highly non-linear (small) scale. Although v_12/(Hr) is not converged at the 1% precision level, the different estimators nevertheless give highly consistent values and appear to show robust convergence, albeit at lower precision (of order a few percent), starting from a scale well below Λ. As anticipated in the previous section, the converged value is close to −1. Further, we see more clearly that this convergence is indeed not associated with that of ξ̄, i.e. at this scale the measured cumulative 2PCF ξ̄ approximates very poorly its physical value.
The next (second) panel (of both Figs. 4 and 5) corresponds to the bin around the smallest rescaled separation for which v_12/(Hr) (in the statistically largest available simulation, using direct estimation) converges (according to our convergence criterion, at the chosen 1% precision level). The lower cut-off to the convergence of v_12/(Hr) is just slightly below the grid spacing (at about Λ/2). We see also that ξ shows convergence starting from the same scale, so the pair counting estimator obtained by setting the last term of Eq. 7 to zero, i.e. assuming self-similarity of ξ̄, is accurate in a similar range. Looking at the lower sub-panels in the plots of v_12/(Hr), we see that the convergence of the direct estimators in the individual N = 1024³ simulations is degraded at slightly larger scales, just above Λ for n = −1.5 and slightly below for n = −2.0. These are simply finite-volume (at a fixed Λ) noise in the estimators, as the associated fluctuations disappear in the larger (N = 4096³) simulation for n = −1.5 but also when the four N = 1024³ simulations are combined for n = −2.
The third panel of both figures shows a considerably larger scale, in the weakly non-linear regime, which has a lower cut-off to convergence (at the 1% level) a few times larger than the grid spacing. In this case, for n = −1.5, there is no visible evidence for the finite-volume effects seen in the previous bin. On the contrary, for n = −2.0, we observe much poorer convergence of v_12/(Hr) both in the direct estimations (lower sub-panel) and in the pair counting estimator (solid lines in main panel). Further, we see now an offset from the estimated converged value that is a systematic shift rather than a random noise, and, even in the average over the four simulations, a break from convergence is detected within the range of scale probed. The cancellation (or at least partial cancellation) of these systematic offsets when the realizations are averaged indicates that this is due to significant differences in the initial power at larger scales due to the finite sampling of modes. On the other hand, the observed break from convergence at large scales (in the average) can be attributed to finite box size effects arising from the missing power in modes below the fundamental of the simulation box, i.e. finite L, and no longer due to finite-volume noise at fixed Λ as before. These same tendencies are present, but even much more pronounced, for n = −2.25 (data not shown). Indeed, in this case the lower and upper cut-offs to convergence below the few percent level are no longer clearly separable from one another in almost all bins. For this reason, we do not use the n = −2.25 data below in our quantitative assessment of resolution limits.
Resolution as a function of time
Applying the analysis detailed above to all bins, we can deduce the comoving scales which are resolved (i.e. self-similar) at each given time, for each of the statistics and estimators we have calculated. Fig. 6 shows the comoving separation, in units of the grid spacing, of the resolved bins at the 1% (upper two panels) and 5% (lower two panels) precision levels, i.e. of the bins found to be converged according to the criteria described in subsection 3.2 for p = 0.01 and p = 0.05. The points in the left panels are for the mean pairwise velocity direct estimate using the N = 4096³ simulation for n = −1.5 and the average over the four N = 1024³ simulations for n = −2. The right panels show the cumulative 2PCF, using the same simulations.
The resolution ranges for ξ̄ (in the right panels) can be taken essentially to be those for the mean pairwise velocity estimated from pair conservation with the additional constraint that ξ̄ is resolved, i.e. with ∂ξ̄/∂ln a = 0, because ξ(x) is always resolved starting from a significantly smaller scale than ξ̄, as can be seen in the right panels of Figs. 4 and 5. This is just a simple consequence of the fact that ξ̄, by definition, is sensitive (at any given precision level) to ξ(y) over a range of scale below x. It will only therefore be resolved starting from a lower cut-off, below which ξ(x) is resolved over some significant range.

[Figs. 4 and 5: v_12/(Hr) (left), and the 2PCF and cumulative 2PCF (right), as a function of logarithmic scale factor log₂(a/a_0), lower x-axis, and of x/Λ, upper x-axis. Each row corresponds to a different bin of rescaled separation x/x_NL as labelled. Blue triangles represent the smaller N = 1024³ simulation, while red circles represent the N = 4096³ simulation. Results obtained using the pair counting estimator are drawn as a continuous line in the appropriate colours. Horizontal red dashed lines indicate the converged value of each of the three statistics, calculated from the largest simulation as described in the text, and the red shaded region indicates that within ±1% of this value. The sub-panels in the plots of v_12/(Hr) give the dispersion of the results obtained using the direct estimation with respect to the converged value.]
Comparing the upper panels, we see that, as anticipated, the relaxation of the self-similarity constraint on ξ̄ extends only very modestly the resolved regions, for the case of convergence at the 1% level. There are some additional bins that meet the convergence criterion, but most of them are not contiguous with the main converged region and thus do not actually extend the lower limit to resolution (i.e. the scale below which convergence is affected by the unphysical UV scales). In contrast, at 5% precision, there is a very marked difference between the two plots: again as anticipated in our more qualitative analysis above, we see that the resolution of the pairwise velocity now extends down to scales of order the softening length (indicated by the dashed line in each plot). As we will discuss further below, the apparent explanation for this is that the behaviour of the pairwise velocity at these small scales, corresponding to stable clustering, remains the same whether the spatial clustering is resolved or not.
Resolution limits extrapolated to LCDM
LCDM models are not scale-free: the linear PS is not a power-law, and there are deviations from the EdS power-law growth of the scale factor. Nevertheless, the latter deviations arise only at very low redshift, and the PS, in the range of scales relevant to large scale structure formation in cosmology, can be well approximated as a slowly varying power-law: its logarithmic slope varies roughly between n = −2.5 and n = −1.5 over two decades in scale. From Fig. 6 we see that the behaviour of the lower cut-off to resolution is quite weakly dependent on n when plotted as a function of a/a_0. Thus, we can confidently bracket the lower resolution limits (due to the cut-offs Λ and ε) using the scale-free results. As discussed in our previous analyses, given the physical grid spacing of a LCDM simulation, one can infer a_0 and then obtain a conversion between redshift and the variable log₂(a/a_0) which allows an approximate "mapping" of the scale-free results to the LCDM simulation. Taking the tighter bounds obtained for n = −1.5, Fig. 7 shows an example of conservative resolution limits for a simulation with Λ = 0.5 h⁻¹ Mpc. Results are given for a 1% (orange) and 5% (blue) precision in the direct estimation of the pairwise velocity, as plotted in the left panels of Fig. 6. Note that the larger missing scales at 5% simply show that v_12/(Hr) is converged at much earlier redshifts.
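This "mapping" can be scripted along the following lines. The sketch assumes an externally supplied function sigma_linear(R, z) for the linear top-hat mass fluctuation (e.g. built from a Boltzmann-code power spectrum); that function, and the use of σ(Λ, a_0) = 1.68/3 to define a_0, are our own illustrative choices in the spirit of Eq. 13, not the authors' pipeline.

```python
import numpy as np

# Hedged sketch: infer a0 for a LCDM simulation of grid spacing Lambda,
# then convert the scale-free epoch variable log2(a/a0) to redshift.
# sigma_linear(R, z) is an assumed, externally supplied function.

def a0_from_grid(Lambda, sigma_linear, z_grid):
    """Scale factor a_0 at which sigma(Lambda, z) reaches delta_c/3 = 0.56.
    z_grid must be ascending."""
    target = 1.68 / 3.0
    sig = np.array([sigma_linear(Lambda, z) for z in z_grid])
    # sigma decreases with increasing z, so reverse for interpolation:
    z0 = np.interp(target, sig[::-1], z_grid[::-1])
    return 1.0 / (1.0 + z0)

def redshift_of_epoch(log2_a_over_a0, a0):
    """Redshift corresponding to the scale-free time variable log2(a/a0)."""
    a = a0 * 2.0 ** np.asarray(log2_a_over_a0)
    return 1.0 / a - 1.0
```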
Converged mean pairwise velocities and stable clustering
Having focused so far on identifying the resolved scales, we now consider what can be inferred about the behaviour of the studied statistics, and in particular about their behaviour at asymptotically small scales, where the convergence to or deviation from stable clustering is of particular interest.
We show in Fig. 8 the converged values of the normalized pairwise velocity for the three simulated spectral indices. These values correspond to the same analysis used to obtain the left panels in Fig. 6 but, while those show the resolved regions, we now plot the corresponding converged values in each rescaled bin determined by this analysis (i.e. the mean values Q_conv in the discussion in subsection 3.2). The points plotted are a combination of the values for the bins converged at the 1% level and at the 5% level: we plot Q_conv for all bins which converge with p = 0.01, and then also for the bins which do not converge at p = 0.01 but do at p = 0.05. We add an indicative estimate of the error on Q_conv which takes into account the expectation that it will decrease as the size of the converged window increases: ΔQ_conv = p |Q_conv| √(w_min/w) (19), where w is the size (in consecutive snapshots) of the converged window (used to calculate Q_conv) and w_min the smallest window for which Eq. 17 is satisfied, and we have taken here w_min = 3. Error bars for the 1% level are smaller than the points, thus where the error bars are visible, the corresponding bins converge only at the 5% level. As could be anticipated, we see that both the accuracy and range of scale measured increase as n does.

[Fig. 8: Estimated converged v_12/(Hr) as a function of rescaled separation, for the three different indicated exponents. In the main plot the same data has been used as for the left panels in Fig. 6, i.e. using direct estimation. The converged values are obtained using the 1% and 5% precision criteria, with the error bars estimated as described in the text. The sub-plot shows these same errors as shaded regions; it also shows (star symbols) the relative difference with the converged values obtained using pair counting estimation.]
We see in this plot that, while there is a clear n-dependence in the shape of the function at larger scales, the behaviour at asymptotically small scales shows a remarkable consistency towards a "universal" stable clustering (bearing in mind that the error bars are only quite rough estimates of the systematic uncertainties due to finite resolution). Positing this to be the correct physical limit also explains why it can be measured quite well even at scales where the physical behaviour of the clustering is not itself resolved: stable clustering is a robust behaviour that is not spoiled by the discretization of the density field in an N-body simulation.

Leroy et al. (2021) applied the methods we are using here to explore the convergence of the HMF and the 2PCF of halo centres obtained with two different halo finders: a simple FoF algorithm, and Rockstar. That analysis of a single scale-free simulation with N = 1024³ and n = −2.0 showed that the test of self-similarity clearly demonstrates the much superior convergence of the latter algorithm, revealing very evidently a marked resolution dependence of the measured FoF statistics. We first extend here the comparative analysis to include CompaSO and also make use of our larger simulations and range of exponents to refine the results of Leroy et al. (2021). We then return to the focus of this paper, exploring the convergence of the radial pairwise velocities of halos, comparing it also with that of the 2PCFs.
Halo mass functions
We first show in Fig. 9 the multiplicity function f, as defined in subsection 2.3, as a function of the rescaled mass M/M_NL. We refer hereafter to f as the halo mass function (HMF). The two left panels correspond to the CompaSO catalogues obtained from the N = 4096³ simulations of n = −1.5 (upper) and n = −2.0 (lower), while the right panels are for Rockstar catalogues of a single N = 1024³ simulation of the same two exponents. All plots show that the self-similar rescaling appears to apply to a good approximation, especially at late times. The smaller Rockstar boxes show greater deviations at larger rescaled masses, which are simply due to the reduced number of halos in the smaller volume. Also comparing the two indices for each halo finder, we observe that the self-similarity at large rescaled masses shows visible deviations at later times for n = −2.0, while n = −1.5 presents almost no such deviations. This difference mirrors what we observed in the analysis above of the dark matter statistics, and reflects the increasing importance of finite box size as the index of the power spectrum reddens.
Following the same steps as in our analysis of the dark matter statistics, we next study these qualitatively apparent features quantitatively by considering vertical slices in Fig. 9, assessing the self-similarity of the HMF as a function of time in bins of fixed rescaled mass. Fig. 10 shows the HMF, for n = −1.5 (left panels) and n = −2.0 (right panels), for the different halo catalogues indicated and three chosen representative rescaled mass bins. We also indicate, on the upper x-axis, the number of particles in a halo (M/m_part), where m_part is the particle mass of our simulations. As M_NL grows as a function of time, the halos populating a given M/M_NL bin contain more and more particles as time progresses. We note that the different halo finders have different mass definitions, so in these figures we do not expect agreement in the value of f(M/M_NL), but we are interested instead in comparing the time/particle number range in which a convergence to a constant behaviour (i.e. self-similarity) is attained. The horizontal lines in the panels indicate the estimated converged value when such convergence is attained at 1% precision, using exactly the same criteria as detailed in subsection 3.2 (with p = 0.01). The uppermost two panels correspond to the smallest rescaled mass at which such convergence is obtained (for at least one of the finders), and the bottom panels to the largest such rescaled mass. This value of 1% is again chosen because it is approximately the smallest value of p for which we obtain a significant range of contiguous bins satisfying our convergence criteria.
Examining these plots, we see several clear trends. Rockstar catalogues show good convergence: the 1% precision level is attained starting from of order 10² particles, with degrading convergence at larger mass/later time due to the smaller box size. CompaSO catalogues show equally good convergence beyond 10³ particles when the cleaning is performed, while the raw CompaSO catalogues never meet the convergence criteria and show instead a clear monotonic dependence on the resolution. On one hand, the larger number of particles needed for convergence in CompaSO is expected, as the kernel density scale is fixed and does not scale self-similarly (i.e., a new scale is inserted in the problem, which can be assumed to affect self-similarity of small objects). On the other hand, the behaviour displayed by the raw CompaSO is very similar to that observed in Leroy et al. (2021) for FoF-selected halos. Thus, the merger-tree based cleaning (discussed in subsection 3.3 above) appears to correct very appropriately the mass of halos, by increasing by the right amount the number of larger halos at each given time to restore the self-similarity.

[Fig. 10: HMF as a function of logarithmic scale factor log₂(a/a_0), lower x-axis, and halo particle number (M/m_part), upper x-axis, for a set of given mass-rescaled bins M/M_NL. Blue triangles correspond to Rockstar for a single N = 1024³ simulation, while circles correspond to CompaSO for the N = 4096³ simulation (orange corresponds to results before merger-tree cleaning and red corresponds to results after). Horizontal dashed lines represent the converged value of the HMF, and the shaded regions indicate that within ±1% of this value.]
In the panels in the bottom row, which probe the most massive halos resolved, a clear upper cut-off exists in the convergence in the cleaned CompaSO catalogues at a few times 10⁴ particles. Comparing with the behaviour seen in the same bin for the smaller Rockstar boxes, which appear to show a down-turn of the data away from the converged value at a slightly earlier time, it appears that these deviations can be attributed to finite box size effects. Further tests against larger Rockstar boxes would be desirable to confirm this and exclude any evidence for residual resolution dependence in the cleaned CompaSO, as well as to test against self-similarity for the different cleaning parameter values.
Indeed, we note that one of the more general conclusions we can draw is that the self-similarity tests on scale-free models are an excellent tool for testing the resolution of halo finders. Furthermore, while we do not claim these tests to be proof of correctness, self-similarity is necessary evidence for it, and the results can be used to place minimal convergence limits on halo finder algorithms.
Pairwise velocities
We now turn to our analysis of the mean pairwise velocities, and the 2PCF, of halo centres. As for the HMF above, the latter extends and also allows comparison with the more limited analysis already reported in Leroy et al. (2021). We have first considered the HMF as we would expect that any other halo statistic, which is generically expected to depend on M/M_NL, will be self-similar to a good approximation at a given rescaled M/M_NL only if the HMF is also. Amongst other considerations, we will examine below the extent to which this is the case quantitatively for the 2PCF and mean radial pairwise velocity. Figs. 11 and 12 show, respectively, for the same three rescaled-mass bins as in Fig. 10, the 2PCF and mean radial pairwise velocity of halo centres. The latter is calculated directly only, as the pair conservation relation we exploited for the dark matter analysis is not valid for halos. We display results for the cleaned CompaSO catalogues in the N = 4096³ simulations and the indices n = −1.5 and n = −2.0. We plot the values of the statistic as a function of the rescaled distance x/x_NL, and for all redshifts with data in the given bin of rescaled mass. In each plot we have marked by a black vertical line the scale 2r_vir/x_NL corresponding to twice the virial radius, r_vir, of the corresponding rescaled mass. In addition, the shaded area marks the corresponding scale for the minimum and maximum mass limits of the finite bin. Although CompaSO halos may be separated by less than 2r_vir (as they are neither spherical nor have a spatial extent directly determined by r_vir), we expect a scale of this order to be an effective lower cut-off to the range in which a physical halo correlation function can be measured. Figs. 11 and 12 display qualitative behaviour similar to that in the statistics we have analysed previously: both statistics show clear self-similarity propagating in time from larger to smaller scales. As anticipated, the scale 2r_vir/x_NL does seem to give a good indication of the lower cut-off scale. Perhaps surprisingly, at the latest times self-similarity seems even to extend to separations as small as r_vir. Further, the plots appear to show, again perhaps surprisingly, that the convergence of v_12/(Hr) is slightly better than that of the 2PCF.
Following again the analysis in the previous sections, to assess more fully and quantify these behaviours, we take vertical slices in Figs. 11 and 12. As v_12/(Hr) and ξ_hh are each functions of the two rescaled variables x/x_NL and M/M_NL, each such plot thus corresponds now to a specific bin of each of these two variables (and self-similarity again to a time-independent behaviour of the dimensionless statistics). Limitations of space here impose the choice of a few illustrative values of x/x_NL and M/M_NL.
In Fig. 13 we show three plots for each of the two statistics, for Rockstar and cleaned CompaSO halo catalogues obtained in the N = 1024³ and N = 4096³ simulations of n = −1.5, respectively: the bins correspond to three values of M/M_NL over the range in which we obtain satisfaction of our convergence criteria at the 2% level, and for which, as in the previous figures, the converged values are indicated by horizontal lines and the precision by the shaded regions. The value of x/x_NL in each bin has been chosen to correspond approximately to 2r_vir/x_NL, which is approximately the smallest scale from which we observe convergence of both statistics (using the same criteria). Just as in the plots for the HMF in Fig. 10, we also plot on the upper x-axis the number of particles in the analysed halos as a function of time. We do not display the results for the raw CompaSO catalogue because this data is almost exactly superimposed on that for the cleaned catalogue (more remarkably for v_12/(Hr)): differently to what we observed for the HMF, the accuracy of these halo statistics, and indeed their convergence (see below), is insensitive to the associated re-assignment of particles. The value of 2% precision has (like in the corresponding HMF plots above) been chosen because it is approximately the smallest value of p for which we obtain a significant range of contiguous bins satisfying our convergence criteria, corresponding to the highest precision at which we can in practice establish convergence using our data.
As anticipated from Figs 11 and 12, the convergence of the pairwise velocity (left panels) is indeed significantly better than that of the 2PCF (right panels). Convergence is attained (at a given precision, here 2%) starting from a smaller particle number (i.e. earlier in time). This difference becomes more pronounced in the largest mass bin, as clearly illustrated here in the chosen bin (bottom plots in the figure) in which the 2PCF alone fails to meet the convergence criteria. We believe the explanation for this comes from the very different dependencies of the two statistics on the rescaled mass. Comparing the converged values of the two statistics in the different mass bins in Fig. 13, we see that the pairwise velocity is only very weakly dependent on the mass compared to the 2PCF: the former varies by only 20%, while the latter changes by a factor of 5 (as the mass itself varies by a factor of 50). Errors in the mass assignment of the halos selected in a given mass bin will thus feed through to give a much larger error in the 2PCF.
Examining further the lower bounds to convergence, we observe that the Rockstar data converges on small scales at fewer particles per halo than the CompaSO data, while both perform equivalently at larger scales. This is clearest in the lowest mass bin, and extends to the larger mass bins, albeit somewhat obscured by the relative noisiness at larger scales of the Rockstar data (due to the smaller box size).
Resolution limits for halo statistics in scale-free and LCDM-type simulations
As we have discussed, the lower cut-offs to convergence for the halo statistics we have analysed can be stated as cut-offs on the number of particles per halo, and in the case of the correlation functions (which depend also on separation) also in terms of a cut-off on separation in units of the virial radius. Further, in the data shown above we have seen that in practice the requirement on particle number, for a given halo finder, seems not to depend significantly on the mass bin for the HMF or the pairwise velocity at a given scale, at least for the approximately fixed separations (in units of the virial radius) which we examined. Fig. 14 presents a more complete view of the data to test whether these behaviours are really valid in general: for n = −1.5 (upper panels) and n = −2 (lower panels), the leftmost panel shows in each case the lower cut-off to convergence expressed in particles per halo for the HMF as a function of the rescaled mass, while the other two panels show, for the pairwise velocity and 2PCF respectively, the analogous quantity as a function of separation in units of r_vir, and for different bins of rescaled mass. In each plot, the two sets of curves shown correspond to the two indicated halo finders (full lines/circles to cleaned CompaSO and dotted lines/stars to Rockstar), and each of the curves (or points) to different mass bins M/M_NL. The dashed thick lines correspond to best (least-squares) fits of a linear dependence on r/r_vir (on M/M_NL for the HMF case) to the data, for each of the halo finders separately. [Caption residue from Fig. 13: all results correspond to a single N = 1024³ simulation, with red circles corresponding to cleaned CompaSO in the N = 4096³ simulation; horizontal red dashed lines represent the converged values of v_12 and the 2PCF, with the shaded regions indicating ±2% of these values; the lower panel of each plot indicates the dispersion of the direct measurement of the statistic with respect to its converged value, with the shaded region covering the imposed ±2% precision.] All results correspond to our best reported precision: 1% for the HMF and 2% for the 2PCF and pairwise velocity.
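The least-squares fits quoted for Fig. 14 amount to a one-line linear regression of the particle-number bound against r/r_vir (or against M/M_NL for the HMF). A minimal sketch of ours, with made-up example points:

```python
# Fit N_min(r) = a*(r/r_vir) + b, pooling the points of all mass bins
# for one halo finder. The numbers below are illustrative only.
import numpy as np

def fit_bound(r_over_rvir, n_particles_bound):
    a, b = np.polyfit(r_over_rvir, n_particles_bound, deg=1)
    return a, b

a, b = fit_bound(np.array([2.0, 5.0, 10.0, 20.0]),
                 np.array([400.0, 250.0, 120.0, 60.0]))
print(a, b)
```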
The plots for v_12 show that the anticipated behaviours indeed hold: the bounds for the different mass bins collapse approximately onto a single line and can thus be well approximated as a bound on the number of particles per halo as a function of r/r_vir exclusively. On the other hand, the column plotting the 2PCF data shows that, although the dependence on M/M_NL is weak for the converged values, the quality of convergence for the large mass bins (and large scales) is significantly reduced with respect to the former statistic.
The behaviours also confirm and further quantify the trends we observed in subsubsection 4.2.2. In particular, we see that the number of particles per halo required for a self-similar behaviour is, for each of the two statistics, indeed higher for CompaSO than for Rockstar at small scales, but this difference disappears progressively as we go to larger scales, where both halo finders perform similarly. We also see further quantified the better convergence of v_12 compared to the 2PCF. Finally, we note that the actual numbers of particles per halo required to meet the convergence criteria for these two statistics are in fact considerably smaller than those required for the HMF. Although convergence for the latter is established at the 1% level and the 2-pt statistics are only converged at the 2% level, relaxing the precision limits for the HMF only changes very slightly the required particle numbers. This is explained in the same way as we explained the relative quality of the convergence of v_12 and the 2PCF: the HMF is itself a much stronger function of rescaled mass than the 2PCF (and a fortiori than v_12). For example, comparing the second to third rows of Fig. 10, we see that the HMF changes by a factor of 10, while in Fig. 13 (as discussed above) we see that the 2PCF varies by a factor of 3 and the mean pairwise velocity only by 20%. Thus, to obtain a 2% error in the 2PCF and v_12 we can tolerate a much larger error in the mass function. It is for the same reason that v_12 (and, in some range, the 2PCF) show no significant sensitivity to the cleaning of the CompaSO catalogues, as these correspond (as seen above) to relatively small changes to the mass assigned to halos.
Finally, we see that the two sets of plots, for n = −1.5 and n = −2.0, differ only very marginally. Further, they are formulated in terms of mass and length units (M_NL and r_vir) that are also clearly defined not just in scale-free cosmologies but in any cosmology. Given this, it is very reasonable to take these resolution bounds to be appropriate for any cosmology, like LCDM, in the range in which structures are seeded by a linear power spectrum with a close to power-law behaviour and exponents comparable to these. One caveat is that the scale-free models are EdS cosmologies, so more caution should be used when adapting the bounds at z ≈ 0 where deviations from EdS become significant. Nevertheless, it seems unlikely that these effects, arising essentially from the resolution limits on identification of halos, would have significant sensitivity to the background cosmology.
CONCLUSIONS
The analysis we have reported here is an extension of that in a set of papers (Leroy et al. 2021; Garrison et al. 2021a; Garrison et al. 2021c; Maleubre et al. 2022), which have shown the usefulness of self-similarity and scale-free cosmologies in quantifying resolution of cosmological N-body simulations. Our focus here has been on the radial component of the pairwise velocity, both in the full matter field and for halos selected using the Rockstar and CompaSO catalogues. We have also extended, as a complement and for comparison, the analysis of the 2PCF of the matter field (previously studied in Joyce et al. 2021; Garrison et al. 2021a) and of the HMF and 2PCF for halos (previously studied in Leroy et al. 2021). Compared to these previous studies, which used a single power law (n = −2.0) and simulations of a single size (N = 1024³), as in Maleubre et al. (2022) we have considered a set of both different power laws and different box sizes. Unsurprisingly, we have found that the same methods indeed allow us to quantify the evolution of resolution at small scales of the mean pairwise velocity, and further confirm the high levels of accuracy attained by the Abacus code also in its determination of correlations in the velocity field. Further, in line with Leroy et al. (2021), we find that self-similarity tests are indeed an excellent tool to assess the performance of different halo finders, as shown by their capacity to detect the subtle differences resulting from the cleaning of the CompaSO catalogues. Our exploitation here of simulations of different sizes, of several realizations, and of scale-free models with different exponents has allowed us not only to improve some of the results in this previous work (notably concerning halos) but has also been essential to allow us to extend the method to a velocity statistic. This is the case because it is crucial, for an accurate determination of the precision of convergence, to be able to separate very clearly the effects of discretization at small scales from both the noise and the systematic effects at large scales due to the finite box size. [Caption residue from Fig. 14: the right panel shows the last two effects computed in addition in different r/r_vir bins; all results correspond to our best reported precision, 1% for the HMF and 2% for the pairwise velocity and 2PCF; solid lines/circles correspond to cleaned CompaSO, while dashed lines/stars correspond to Rockstar; the blue and orange dashed lines are the least-squares best fits of all mass-bin data; the axes are the same in all plots to facilitate comparison.] For the pairwise velocity statistics, which are more sensitive than the 2PCF to these effects, the comparison of different (and larger) box sizes and different exponents turns out to be essential to disentangle clearly the different effects. We have also exploited the two different estimators of the statistic (directly from the velocities, or indirectly by pair-counting) to identify noise due to finite-size effects. The comparison of different exponents has allowed us also to see how the range of converged scales markedly degrades due to finite-size effects as n decreases, and in practice our n = −2.25 simulations are not useful for placing precision limits at the 1% level. Further, we argue that our results for the evolution of small-scale resolution can be extrapolated to LCDM-type models, as they are, when suitably expressed, very weakly dependent on the scale-free index (the values of which have been chosen to probe the relevant range).
The same is not true of box-size effects, which are strongly dependent on the scale-free index, and indeed we do not attempt to make an extrapolation for these.
For the pairwise velocity of the dark matter, we have found that we can determine the evolution of the lower cut-off to resolution at the 1% level. It is approximately equal to the corresponding cut-off for the cumulative 2PCF, which converges at the same precision level, varying from a few times the grid spacing at early times to slightly below this scale at late times. This is a few times larger than the scale at which the 2PCF itself attains the same precision (Garrison et al. 2021a). This reflects the coupling of the velocity correlation at a given scale to the clustering at smaller scales (as expressed through the integral ξ̄ in the self-similar limit). On the other hand, at 5% precision we have obtained resolution extending down to scales of order the softening length ε, where even the 2PCF is far from its converged value (Garrison et al. 2021a). In the corresponding range of scales v_12/(Hr) ≈ −1, i.e. the result is consistent with the so-called stable clustering hypothesis in which non-linear structures become stationary in physical coordinates (Peebles 1974). The conclusion that clustering may indeed tend to this behaviour at asymptotically small scales is consistent with an early analysis (with much smaller simulations, N ∼ 10⁶) of the question using pairwise velocities by Jain (1997) (estimated by pair-counting), and also with results for the shape of the power spectrum at large k reported in Maleubre et al. (2022). In this hypothesis, the fact that resolution extends to such small scales for v_12/(Hr) is simply due to the fact that the stable behaviour is not spoiled by the discretization of the matter field, and persists even if the clustering is very different to that in the continuum model.
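The stable-clustering statement above is a simple quantitative test: for structures that are stationary in physical coordinates, the mean pairwise velocity exactly cancels the Hubble flow. A minimal sketch of the diagnostic (variable names and units are our assumptions):

```python
# Stable clustering corresponds to v12 = -H*r, i.e. v12/(H r) = -1.
import numpy as np

def stable_clustering_ratio(r, v12, H):
    """Return v12/(H r); values near -1 indicate stable clustering."""
    return np.asarray(v12) / (H * np.asarray(r))

# e.g. ratio = stable_clustering_ratio(r_bins, v12_bins, H=hubble_rate)
```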
For the halo statistics, extending the analysis of Leroy et al. (2021), we have been able to use our data to establish resolution limits at the 1% precision level for the HMF, and at the 2% level for the 2PCF and pairwise velocity, in both the Rockstar catalogues and the CompaSO catalogues, provided the cleaned version described in Bose et al. (2022) is used for the HMF. As in Leroy et al. (2021), we express the lower limits to resolution for the HMF as a lower limit on the number of particles, which turns out to be roughly independent of the mass. For the pairwise velocity and 2PCF, which are also functions of separation, we find that the lower bounds on the number of particles are, to a good approximation, independent of mass when plotted as a function of separation in units of the virial radius (corresponding to the given mass).
Plotting the inferred lower bounds on particle numbers for each of the three statistics, for the n = −1.5 and n = −2 simulations, shows that the results have no significant dependence on n and can thus be confidently adapted to LCDM-like simulations. At the 1% level, Rockstar is not able to resolve the HMF below ∼100-200 particles, the cleaned version of CompaSO breaks self-similarity below ∼1000 particles, and its raw version never achieves this convergence at the same precision. For the 2PCF and pairwise velocities, the 2% precision level is attained with significantly smaller particle numbers than for the HMF, with the pairwise velocity requiring the fewest. For these, the effects of cleaning CompaSO are less significant, as the dependence on the mass bin is suppressed. At small scales, Rockstar exhibits self-similarity starting at a smaller particle number than CompaSO, plausibly explained by the introduction of a fixed kernel density scale in the latter, which can be expected to affect the self-similarity of small objects. This difference decreases as the separation increases and disappears at (10-20) r_vir.
With respect to the preparation of theoretical predictions for forthcoming surveys, and specifically for redshift-space distortions, our analysis of the pairwise velocity gives only an indication of the resolution limits at small scales in N-body simulations. It would be straightforward to extend our analysis to additional statistics used in this context, e.g. PDFs of the pairwise velocity and their moments (see references in the introduction). Further, to attain a quantification of bounds at the typically cited target 1% level would require somewhat larger data sets than those we have used here: either slightly larger simulations, or a couple of realizations of the same size as our largest simulations here.
We conclude with some comments on other possible further developments of this work. Our analysis has confirmed the finding of Leroy et al. (2021) that self-similarity is a powerful tool to put halo algorithms to the test and compare their resolution. It would be interesting to explore in particular whether the CompaSO algorithm can be further modified in order to improve its resolution at low halo mass, while maintaining its computational speed. Our analysis of the mean pairwise velocities in the dark matter field (cf. Fig. 8) shows an apparently universal shape below the scale of maximal infall, going asymptotically to stable clustering. It would be interesting to compare these results with those in LCDM, making use of the resolution limits we have determined here, to assess whether we indeed find the same behaviour. To establish the evidence for stable clustering at asymptotically small scales, a fuller comparative joint analysis of the 2PCF, PS, and the pairwise velocity itself should be performed. Finally, we note that the results we have derived here can be exploited to study in a very controlled way the bias of both the 2PCF and the pairwise velocities of the halos relative to the dark matter field, as a function of mass.
ACKNOWLEDGEMENTS
S.M. thanks Sownak Bose for guidance and technical support with the halo merger tree code for AbacusSummit, used to clean the CompaSO catalogue of the scale-free simulations used in this paper. She also thanks the Institute for Theory and Computation (ITC) and the Flatiron Institute for hosting her in early 2022, and acknowledges the Fondation CFM pour la Recherche and the German Academic Exchange Service (DAAD) for financial support. S.M. and M.J. thank Pauline Zarrouk for useful discussions.
D.J.E. is supported by U.S. Department of Energy grant, now DE-SC0007881, NASA ROSES grant 12-EUCLID12-0004, and as a Simons Foundation Investigator.
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. The AbacusSummit simulations have been supported by OLCF projects AST135 and AST145, the latter through the U.S. Department of Energy ALCC program.
DATA AVAILABILITY
Data access for the simulations that are part of AbacusSummit is available through OLCF's Constellation portal. The persistent DOI describing the data release is 10.13139/OLCF/1811689. Instructions for accessing the data are given at https://abacussummit.readthedocs.io/en/latest/dataaccess.html.
Data corresponding to the smaller simulations as well as the derived data generated in this research will be shared on reasonable request to the corresponding author.
A heterozygous moth genome provides insights into herbivory and detoxification
Minsheng You and colleagues report the whole-genome sequence of the diamondback moth, Plutella xylostella. Their transcriptome analysis from different life stages, together with comparative genomic and phylogenetic analysis, provides insights into herbivore evolution and insect adaptation to plant feeding and detoxification. How an insect evolves to become a successful herbivore is of profound biological and practical importance. Herbivores are often adapted to feed on a specific group of evolutionarily and biochemically related host plants 1 , but the genetic and molecular bases for adaptation to plant defense compounds remain poorly understood 2 . We report the first whole-genome sequence of a basal lepidopteran species, Plutella xylostella, which contains 18,071 protein-coding and 1,412 unique genes with an expansion of gene families associated with perception and the detoxification of plant defense compounds. A recent expansion of retrotransposons near detoxification-related genes and a wider system used in the metabolism of plant defense compounds are shown to also be involved in the development of insecticide resistance. This work shows the genetic and molecular bases for the evolutionary success of this worldwide herbivore and offers wider insights into insect adaptation to plant feeding, as well as opening avenues for more sustainable pest management.
The global pest P. xylostella (Lepidoptera: Yponomeutidae) is thought to have coevolved with the crucifer plant family 3 (Supplementary Fig. 1) and has become the most destructive pest of economically important food crops, including rapeseed, cauliflower and cabbage 4 . Recently, the total cost of damage and management worldwide was estimated at $4-5 billion per annum 5,6 . This insect is the first species to have evolved resistance to dichlorodiphenyltrichloroethane (DDT) in the 1950s 7 and to Bacillus thuringiensis (Bt) toxins in the 1990s 8 and has developed resistance to all classes of insecticide, making it increasingly difficult to control 9,10 . P. xylostella provides an exceptional system for understanding the genetic and molecular bases of how insect herbivores cope with the broad range of plant defenses and chemicals encountered in the environment (Supplementary Fig. 2).
We used a P. xylostella strain (Fuzhou-S) collected from a field in Fuzhou in southeastern China (26.08 °N, 119.28 °E) for sequencing (Supplementary Fig. 1). Whole-genome shotgun-based Illumina sequencing of single individuals (Supplementary Table 1), even after ten generations of laboratory inbreeding, resulted in a poor initial assembly (N50 = 2.4 kb, representing the size above which 50% of the total length of the sequences is included), owing to high levels of heterozygosity (Supplementary Figs. 3 and 4 and Supplementary Table 2). Subsequently, we sequenced 100,800 fosmid clones (comprising ~10× the genome length) to a depth of 200× (Supplementary Fig. 5 and Supplementary Tables 3-5), assembling the resulting sequence data into 1,819 scaffolds, with an N50 of 737 kb, spanning ~394 Mb of the genome sequence (version 1; Supplementary Fig. 6 and Supplementary Table 6). The assembly covered 85.5% of a set of protein-coding ESTs (Supplementary Tables 7 and 8) generated by transcriptome sequencing 11 . Alignment of a subject scaffold against a 126-kb BAC (GenBank GU058050) from an alternative strain (Geneva 88) showed extensive structural variations between haplotypes. However, the coding sequence of the nicotinic acetylcholine receptor α6 gene (spanning >75 kb) 12 on the BAC and the genome scaffold was relatively conserved (Supplementary Fig. 7). Whole-genome shotgun reads from three libraries (500 bp, 5 kb and 10 kb) were mapped to the BAC and corresponding scaffold, covering 86.7% and 98.1% of sites, respectively (Supplementary Fig. 7), indicating high polymorphism levels between the alleles. Genome-wide exploration of variation identified abundant SNPs, insertions and/or deletions (indels), structural variations and complex segmental duplication patterns within the sequenced population of the Fuzhou-S strain (Fig. 1, Supplementary Figs. 8 and 9, Supplementary Tables 9-13 and Supplementary Note). Thus, we generated a genome of ~343 Mb (version 2) for annotation and analysis by masking ~50 Mb of possible allelic redundancy in the version 1 assembly (Supplementary Fig. 10 and Supplementary Table 21), suggesting the expansion of certain gene families. In addition to 1,683 Lepidoptera-specific genes (Supplementary Table 22 and Supplementary Note), we found 1,412 P. xylostella-specific genes (Supplementary Fig. 13), exceeding in number the 463 Bombyx mori-specific genes 13 and the 1,184 Danaus plexippus-specific genes 14 (Fig. 2). The P. xylostella-specific genes were largely involved in biological pathways essential for environmental information processing, chromosomal replication and/or repair, transcriptional regulation and carbohydrate and protein metabolism (Supplementary Fig. 14 and Supplementary Table 23). These findings suggest that P. xylostella has an intrinsic capacity to swiftly respond to environmental stress and genetic damage.
Phylogenetic analysis indicated that the estimated divergence time of insect orders was approximately 265-332 million years ago (Fig. 2). This is around the time of the divergence of mono- and dicotyledonous plants (~304 million years ago) 15 , consistent with the coevolution and concurrent diversification of insect herbivores and their host plants. It can be predicted that P. xylostella became a cruciferous specialist when Cruciferae diverged from Caricaceae (~54-90 million years ago) 16 , which provides additional evidence to support our estimation of the divergence time (~124 million years ago) of P. xylostella from two other Lepidoptera, B. mori and D. plexippus (Fig. 2). The genome-based phylogeny showed that P. xylostella is a basal lepidopteran species (Fig. 2), and this idea is well supported by its modal karyotype of n = 31 (refs. 17,18) and the molecular phylogeny of Lepidoptera 19,20 , indicating the importance of P. xylostella in the history of lepidopteran evolution.

Figure 1 Genomic variations within the sequenced P. xylostella strain. The outermost circle shows the reference genome assembly with a 100-kb unit scale. Scaffolds that could be assigned to linkage groups are joined in arbitrary order to generate the partial sequences of 28 chromosomes (detailed in the Supplementary Note). The green segment represents the scaffolds that were unable to be assigned (Un). The innermost circle denotes segmental duplications (of ≥8 kb), with connections shown between segment origins and duplication locations. Segmental duplication pairs with 100% similarity are shown in red, and those with ≥90% similarity are shown in blue. Histograms indicate the number of SNPs (red, outer circle) and indels (light green, inner circle) in 30-kb and 50-kb windows, respectively.

[Figure 2 caption fragment: ... Apis mellifera, which are based on fossil evidence. The arachnid Tetranychus urticae was used as an outgroup, and the bootstrap value was set as 1,000. 1:1:1 orthologs include the common orthologs with the same number of copies in different species, N:N:N orthologs include the common orthologs with different copy numbers in the different species, patchy orthologs include the orthologs existing in at least one species of vertebrates and insects, other orthologs include the unclassified orthologs, and unclustered genes include the genes that cannot be clustered into known gene families.]
On the basis of P. xylostella transcriptome data 11 , we identified 354 preferentially expressed genes in larvae (Supplementary Fig. 15), and a set of these genes is involved in sulfate metabolism, some of which were validated using quantitative RT-PCR for gene expression analysis (Supplementary Figs. 16-18, Supplementary Table 24 and Supplementary Note). Glucosinolate sulfatases (GSSs) enable P. xylostella to feed on a broad range of cruciferous plants by catalyzing the conversion of glucosinolate defense compounds into desulfoglucosinolates, thus preventing the formation of toxic hydrolysis products 3 (Supplementary Fig. 2). In order to function, all sulfatases require posttranslational modification by sulfatase-modifying factor 1 (encoded by SUMF1) 21 , which regulates sulfatase activity; higher activities depend on greater amounts of sulfatase and SUMF1 transcripts 22 . We found that high expression of P. xylostella SUMF1 in third-instar larvae was coupled with significantly higher expression of the GSS1 and GSS2 genes relative to other members of the P. xylostella sulfatase gene family (Fig. 3). We propose that the coevolution of SUMF1 and GSS genes was key in P. xylostella becoming such a successful herbivore of cruciferous plants (Supplementary Fig. 2). Furthermore, a new gene, predicted to encode a sodium-independent sulfate anion transporter, was highly expressed in all larval stages and in the midgut (Fig. 4) and is likely associated with the excretion of toxic sulfates 23 .
In comparisons with the larval midgut proteome of the polyphagous lepidopteran Helicoverpa armigera 24 , we found similar digestive enzymes encoded by P. xylostella larval preferentially expressed genes that were expressed predominantly in the midgut (Supplementary Fig. 19 and Supplementary Table 25). The abundant larval midgut-specific serine proteinase genes in the P. xylostella genome may circumvent the action of insecticidal plant protease inhibitors through differential expression in response to different plant hosts 25 (Supplementary Fig. 20). Among the P. xylostella larval preferentially expressed genes, we identified a set of genes, including GOX (encoding glucose oxidase), related to the host range of herbivores 26 and involved in the perception of chemical signals from host plants and defense against secondary plant compounds (Fig. 4, Supplementary Table 25 and Supplementary Note), suggesting the presence of a complex chemoreception network and multiple detoxification mechanisms. We identified five chemoreception gene families related to larval feeding preferences and adult searching for host plants: odorant receptors (ORs), odorant-binding proteins (OBPs), gustatory receptors (GRs), ionotropic receptors (IRs) and chemosensory proteins (CSPs) (Supplementary Fig. 21, Supplementary Table 26 and Supplementary Note). Notable among these genes is an expansion of ORs but not GRs, as reported in the B. mori genome 27 . Species-specific expansion of CSPs in moths is less than that observed in butterflies 18 . Lifecycle- and tissue-specific expression of ORs identified 30 variable, 23 constitutive and 9 adult-specific expression patterns (Supplementary Fig. 22), indicating that P. xylostella possesses a high potential for adaptation to chemical cues from host plants (Supplementary Fig. 2).
Detoxification pathways used by insect herbivores against plant defense compounds may be co-opted for insecticide tolerance 28 or resistance (Supplementary Fig. 2). We found that P. xylostella possessed an overall larger set of insecticide resistance-related genes than B. mori, which is monophagous and has had little exposure to insecticide over 5,000 years of domestication 13 (Supplementary Table 27). We identified in the P. xylostella genome apparent gene duplications of most ATP-binding cassette (ABC) transporter families and three classes of major metabolic enzymes, the cytochrome P450 monooxygenases (P450s), glutathione S-transferases (GSTs) and carboxylesterases (COEs) (Supplementary Fig. 23 and Supplementary Table 26); these families are expanded compared to the corresponding families in B. mori (Fig. 5a). Larval transcriptomes were sequenced from the Fuzhou-S strain that was genotyped and from two substrains selected for resistance to chlorpyrifos or fipronil 11 . ABC transporter genes were upregulated more frequently than GSTs, COEs or P450s in insecticide-resistant larvae (Supplementary Fig. 24), highlighting the potential role of ABC transporters in detoxification.
We then investigated the genomic variations and transposable elements in genes and their 2-kb upstream regions in these four families, some of which were validated using Sanger sequencing (Supplementary Tables 28-31 and Supplementary Note). Near these gene families, transposable elements (~20 per gene) were most abundant, followed in frequency by structural variations (~16), SNPs (~6) and indels (<1) (Supplementary Fig. 25). The coding sequences of COEs were rich in SNPs (Supplementary Fig. 25a), which can be critical in determining COE substrate specificity and catalytic activity under xenobiotic stresses 31 . Principal-component analysis indicated that intronic regions consistently harbored all types of polymorphic variations, whereas coding sequences were frequently polymorphic for structural variations and transposable elements, which may have a pronounced effect on gene function (Fig. 5b). Transposable elements were abundant within or near the P450s involved in induced xenobiotic detoxification in insects, whereas those related to constitutive developmental metabolism were free of transposable element insertions 32 . Our findings show that numerous transposable elements accompany the gene families involved in metabolic detoxification sensitive to external stresses (Supplementary Table 32). These associations seem to be a consistent trend in Lepidoptera (Supplementary Fig. 25b). The transposable element orders of long terminal repeat (LTR) and long interspersed nuclear element (LINE) were predominant in P. xylostella and B. mori, respectively, and the proportional composition of the various transposable element orders tended to be similar in different gene families for each of the species (Fig. 5c). A recent expansion of the LTR retrotransposons (>90%) in the P. xylostella genome has occurred over the past 2 million years, much later than the expansion of B. mori LTRs (Fig. 5d), possibly reflecting the timing of extensive adaptive evolutionary events in P. xylostella 33 . The polymorphism within the P. xylostella genome might support adaptation to host plant defenses and insecticides by providing a repertoire of alternative alleles or cis-regulatory elements 29 and genetic variations 34 for gene expression.
In this project, we developed a new approach for non-model insect genome sequencing using next-generation sequencing technology and de novo assembly of the highly polymorphic genome. Analyses identify complex patterns of heterozygosity, the expansion of gene families associated with perception and the detoxification of plant defense compounds and the recent expansion of retrotransposons near detoxification genes. These adaptations reflect the diversity and ubiquity of toxins in its host plants and underlie the capacity of P. xylostella to rapidly develop insecticide resistance. This study provides insights into the genetic plasticity of P. xylostella that underlies its success as a worldwide herbivore. The genomic resources described here will facilitate future studies on the adaptation and evolution of other arthropods and support the incorporation of molecular information into the development of strategies for more sustainable agriculture.
METHODS
Methods and any associated references are available in the online version of the paper.
Accession codes. The genome described herein is the first reference genome of P. xylostella, AHIO01000000. Genome assemblies and annotations described here have been deposited at the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory (EMBL) and GenBank under accession AHIO00000000. Raw sequencing data from the transcriptome have been deposited at the NCBI Short Read Archive (SRA) under accession SRA034927.

ONLINE METHODS

Strain for sequencing. A strain of the diamondback moth (DBM) (Fuzhou-S), P. xylostella, was reared on radish seedlings without exposure to insecticides for 5 years, spanning at least 100 generations. An inbred line was developed by successive single-pair sibling matings. Male pupae were used for genome sequencing.
Whole-genome shotgun sequencing and assembly. Individual DNA from the inbred F1, F4 and F10 insects was used for construction of paired-end libraries (Supplementary Table 1). Sequencing was performed using the Illumina Genome Analyzer IIx or HiSeq 2000 platform. Short reads were assembled using SOAPdenovo 35 .
Fosmid-to-fosmid sequencing and assembly. DNA was extracted from a pool of ~1,000 male pupae using a cetyltrimethylammonium bromide (CTAB)-based method. Fosmid libraries with insert sizes ranging from 35 to 40 kb were constructed. We sequenced 100,800 single colonies to achieve 10× coverage of the genome. For each colony, two paired-end libraries with 250-bp and 500-bp fragments were constructed and sequenced. On average, each library was sequenced >200× with a total of 114 lanes and an output of 855 Gb. Vector or contaminated DNA and poor reads with >10% unknown nucleotides or >40 bases with quality values of ≤5 were filtered out 36 .
Genome assembly. We developed custom software (Rabbit) for assembling sequences with large overlaps (>2 kb). Rabbit contains three modules: Relation Finder, Overlapper and Redundancy Remover.
We used the Poisson-based K-mer model to determine repeat sequences, segmental duplications or divergent haplotypes. Each K-mer was defined as either a 'repeat' or 'unique' K-mer, depending on whether its occurrence frequency was greater or less than twice the average frequency, respectively (Supplementary Fig. 10), using the Poisson model

P(y) = λ^y e^(−λ) / y!,

where λ is the expected frequency for K-mers, y is the given frequency of a particular K-mer and P is the occurrence probability of a given K-mer frequency. Therefore, the probability of a unique K-mer occurring with frequency greater than twice the expected frequency is given by the following equation:

P(y > 2λ) = 1 − Σ_{y=0}^{2λ} λ^y e^(−λ) / y!

Few unique K-mers can occur with a frequency larger than twice the expected value, especially when the expected frequency is ≥20 (Supplementary Table 14). Rabbit is capable of connecting these unique regions and removing redundancy. We chose K = 17 bp 36,37 and trimmed repeat sequence ends (Supplementary Fig. 4).
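As a concrete illustration of this test (our sketch based on the description above, not the Rabbit source code), the tail probability P(y > 2λ) can be computed directly with scipy:

```python
# Poisson K-mer classification: 'repeat' if observed count exceeds twice
# the expected frequency; the tail probability quantifies how rarely a
# truly unique K-mer would be mis-called.
from scipy.stats import poisson

def is_repeat_kmer(y, lam):
    return y > 2 * lam

def false_repeat_probability(lam):
    """P(Y > 2*lam) for a unique K-mer with expected frequency lam."""
    return poisson.sf(2 * lam, lam)   # survival function: P(Y > 2*lam)

print(false_repeat_probability(20))   # tiny for lam >= 20, as stated above
```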
We used SSPACE 38 to build scaffolds and SOAP-GapCloser 35 to fill the gap with 131.2× whole-genome shotgun short reads (Supplementary Table 1). This resulted in a genome with 394 Mb (version 1), slightly larger than the estimated haploid genome size (339.4 Mb) 17 . We extracted all similar sequences with LAST 39 and retained one copy of the sequences containing >40% unique K-mers and masked the others with 'n' to generate a revised genome of ~343 Mb (version 2).
Digital gene expression (DGE). Quantitative RNA-seq was conducted for newly laid eggs, fourth-instar larvae, the midguts of fourth-instar larvae, pupae (>2 d after pupation), virgin male and female adults, and the heads of fourth-instar larvae and male or female adults. Paired-end libraries (insert size of 200 bp) were sequenced with a read length of 49 bp. The RPKM 40 values were calculated for DGE profiling.
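For reference, RPKM is the standard reads-per-kilobase-per-million normalization; a minimal sketch (not the authors' code) follows:

```python
# RPKM: read count normalized by transcript length (kb) and library size (millions).
def rpkm(read_count, gene_length_bp, total_mapped_reads):
    return read_count * 1e9 / (gene_length_bp * total_mapped_reads)

print(rpkm(500, 2000, 20_000_000))   # -> 12.5
```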
Larval preferentially expressed gene analysis. On the basis of the DBM genome and the transcriptomes for newly laid eggs, third-instar larvae, pupae and virgin adults, we analyzed differential gene expression across the four developmental stages using the same statistical approach 11 . The larval preferentially expressed genes were defined as genes that were highly expressed in the larval stage compared to the other three developmental stages, with an RPKM ratio ≥8-fold (upregulated) and a false discovery rate (FDR) ≤ 0.001 (a minimal sketch of this filter is given after the following paragraph).

Gene prediction. We used Augustus (v 2.5.5) 41 , Genscan 42 and SNAP 43 for de novo gene prediction, compared the candidate genes to the transposable element protein database using BLASTP (1 × 10 −5 ) and removed genes that showed over 50% similarity to the transposable elements. The predicted proteomes of D. melanogaster, B. mori, Anopheles gambiae and Tribolium castaneum were aligned with the DBM genome using TBLASTN (E value ≤ 1 × 10 −5 ). High-scoring segment pairs (HSPs) were grouped using Solar (v. 0.9.6) 36 . We extracted target gene fragments and extended them by 500 bp at both ends. GeneWise (v. 2.2.0) 44 was used for the alignment of fragments to a protein set. We clustered the predicted genes with an overlap cutoff of >50 bp. The results of de novo and homolog-based predictions were incorporated into a gene set using GLEAN 45 .
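A minimal sketch of the larval preferentially-expressed-gene filter defined above (RPKM ratio ≥8-fold against each other stage and FDR ≤ 0.001); the inputs are illustrative arrays, not the authors' data:

```python
# Boolean mask of genes upregulated in larvae relative to all other stages.
import numpy as np

def larval_preferential(rpkm_larva, rpkm_other_stages, fdr,
                        ratio=8.0, fdr_cut=1e-3):
    rpkm_other = np.asarray(rpkm_other_stages)       # shape (3, ngenes)
    ratio_ok = np.all(rpkm_larva / np.maximum(rpkm_other, 1e-9) >= ratio,
                      axis=0)                        # >= 8-fold vs every stage
    return ratio_ok & (np.asarray(fdr) <= fdr_cut)
```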
Integration of transcriptome data with the GLEAN set. Transcriptome reads 11 were mapped onto the genome using TopHat 46 . We then used Cufflinks 47 (with default parameters) to assemble transcripts and integrated the transcripts with the GLEAN set by filtering out redundancy and the genes with ≥10% uncertain bases and coding region lengths of ≤150 bp.
Functional annotation. The integrated gene set was translated into aminoacid sequences, which were used to search the InterPro database 48 by Iprscan (v 4.7) 49 . We used BLAST to search the metabolic pathway database 50 (release58) in KEGG and homologs in the SwissProt and TrEMBL databases in UniProt 51 (release 2011-01).
Comparison of genomic synteny. We used a set of lax parameters 36 to perform LASTZ (v. 1.01.50) and MCSCAN 78 (v. 0.8) to search for syntenic blocks in P. xylostella and B. mori or D. melanogaster.
Genomic variation. We fragmented the fosmid sequences in silico into 100-bp single-end reads or paired-end reads (insert size of 500 bp). We used SOAPaligner/soap2 35 to map the reads onto reference sequences and SOAPsnp 79 and SOAPIndel 35 to annotate SNPs and indels, respectively (with acceptable depths ranging from 3 to 30). On the basis of the sequencing of a single Fuzhou-S individual (Supplementary Table 1, SI), SOAPsv 80 was employed for annotating structural variations. We performed whole-genome alignment comparison using LASTZ. The regions that were ≥1 kb with identity of ≥90% were regarded as segmental duplications.
Annotation of genes concerned. On the basis of available protein sets (Supplementary Table 26) and the predicted proteomes of P. xylostella, B. mori and D. melanogaster, BLASTP was used to search for the homologs in each of the three genomes. We applied cutoffs at 1 × 10 −20 , bit-score of 100 and coverage of 100 continuous amino acids for gapped alignment. We filtered out the results with total coverage of alignment of <70% for the same species and <40% for different species. We also used InterProScan 81 to search for candidate genes on the basis of conserved motifs from InterPro 48 . The candidates were manually checked against the Conserved Domain Database 82 in NCBI to validate the gene searching results and confirm that the method used in our DBM genome was as effective and reliable as the methods used in other insect genomes.
PCR validation. We randomly selected 20 each of annotated SNPs, structural variations (≥50 bp and ≤200 bp) and transposable elements (≥300 bp and ≤600 bp) within or around the metabolic detoxification genes. PCR primer sets were designed for each of them to amplify an 800-bp region (Supplementary Table 31). Direct Sanger sequencing was performed for PCR products from both ends. Alignments between sequencing results and the reference genome were performed using BLAST or BLAT 83 .
Quantitative RT-PCR validation. We used 20 genes for validation of host plant responsiveness, and another 20 genes to examine differential expression over the life cycle (Supplementary Table 24). We also used a B. thuringiensis strain containing CryIIAd (GenBank DQ358053) to infect the DBM strain and determine gene expression for sulfate metabolism. Third-instar larvae were treated with CryIIAd (7.589 µg/ml) by the leaf-soaking method 84 , with double-distilled water as control or no food supply for starvation. RT-PCR was performed for quantitative gene expression based on the 2^(−ΔΔCT) method 85 , with the ribosomal protein L32 (RPL32) gene (GenBank AB180441) serving as an internal reference. Each experiment was repeated three times.
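The 2^(−ΔΔCT) calculation referenced above is a short formula; a minimal sketch with illustrative Ct values, using RPL32 as the internal reference:

```python
# Relative expression by the 2^(-ddCt) method: ddCt is the treated-vs-control
# difference of target Ct normalized to the reference gene Ct.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

print(fold_change(22.1, 18.0, 24.6, 18.1))   # ~5.3-fold upregulation
```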
The Schwarzian Theory - Origins
In this paper we further study the 1d Schwarzian theory, the universal low-energy limit of Sachdev-Ye-Kitaev models, using the link with 2d Liouville theory. We provide a path-integral derivation of the structural link between both theories, and study the relation between 3d gravity, 2d Jackiw-Teitelboim gravity, 2d Liouville and the 1d Schwarzian. We then generalize the Schwarzian double-scaling limit to rational models, relevant for SYK-type models with internal symmetries. We identify the holographic gauge theory as a 2d BF theory and compute correlators of the holographically dual 1d particle-on-a-group action, decomposing these into diagrammatic building blocks, in a manner very similar to the Schwarzian theory.
1 Introduction and summary

Sachdev-Ye-Kitaev (SYK) models of N Majorana fermions with random all-to-all interactions have received a host of attention in the past few years [1,2,3,4,5,6,7,8,9,10,11,12], mainly due to the appearance of maximally chaotic behavior [13,14,15,16,17], suggesting a 2d holographic dual exists. It was realized immediately that the infrared behavior of these models and their relatives is given by the so-called Schwarzian theory, a 1d effective theory with action given by the Schwarzian derivative of a time reparametrization f(t):

S[f] = −C ∫ dt {f, t},   {f, t} ≡ f'''/f' − (3/2)(f''/f')²,   (1.1)

the Schwarzian derivative of f. Miraculously, the same action and interpretation appears when studying 2d Jackiw-Teitelboim (JT) dilaton gravity [18,19,20,21,22,23,24,25,26], with action

S_JT = −(1/16πG) [ ∫ d²x √g Φ (R + 2) + 2 ∫ dτ √h Φ K ].   (1.2)

This leads to the holographic duality between the Schwarzian theory and Jackiw-Teitelboim gravity. UV decorations can be added to both theories if wanted, but this is the minimal theory on both sides of the duality that contains the universal gravity regime. In [27] we solved the Schwarzian theory by embedding it in 2d Liouville CFT, fitting nicely with the well-known piece of lore that Liouville theory encodes the universal 3d gravitational features of any 2d holographic CFT.
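The defining formula and the SL(2, R) invariance that underlies the whole construction can be checked symbolically; the following sympy snippet (ours, not from the paper) verifies that {(aF + b)/(cF + d), t} = {F, t}:

```python
# Symbolic check of the Schwarzian derivative and its Mobius invariance
# (assumes ad - bc != 0, so the map is invertible).
import sympy as sp

t, a, b, c, d = sp.symbols("t a b c d")
f = sp.Function("f")(t)

def schwarzian(g, t):
    return (sp.diff(g, t, 3) / sp.diff(g, t)
            - sp.Rational(3, 2) * (sp.diff(g, t, 2) / sp.diff(g, t))**2)

mobius = (a*f + b) / (c*f + d)
print(sp.simplify(schwarzian(mobius, t) - schwarzian(f, t)))  # -> 0
```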
A direct generalization of the SYK model is to consider instead complex fermions. These models have a U(1) internal symmetry, and the resulting infrared two-point correlator has, schematically, the symmetry [28]

G(τ₁, τ₂) → [f'(τ₁) f'(τ₂)]^Δ g(τ₁) g(τ₂)⁻¹ G(f(τ₁), f(τ₂)),   (1.3)

for a function f, corresponding to arbitrary conformal transformations, and g, corresponding to arbitrary gauge transformations on the charged fermions. The former is known to be represented by a Schwarzian action, whereas the latter is represented by a free 1d particle action. At large N and low energies, the theory is dominated by quantum fluctuations of just these two fields. In general, the low-energy theory is then schematically of the form

S_eff[f, g] = −C ∫ dτ {f, τ} + (a/2) ∫ dτ Tr(g⁻¹ ∂_τ g)² + S_int[f, g].   (1.4)

The interaction term S_int will depend on the specific theory at hand. Stanford and Witten [29] obtained this same action by considering the coadjoint orbit action for Virasoro-Kac-Moody systems. Generalizations to non-abelian global (flavor) symmetries of the fermions were studied in e.g. [30,31,32]. Finally, when considering supersymmetric SYK models with N = 2 supersymmetry, the above action (with a specific value of a) arises as the bosonic piece of the N = 2 super-Schwarzian action [33].
Our goal here is to understand the structure behind these theories better, and their correct bulk descriptions. As a summary, we will find the diagram of theories in Figure 1, linking four theories through dimensional reduction and holography. The same quadrangle of theories exists for the compact group models as well. Correlation functions of the Schwarzian theory were obtained first in [34,35] and generalized and put in a Liouville context in [27]. We analogously compute correlation functions for the compact group models and find a Feynman diagram decomposition in perfect analogy with that of the Schwarzian theory in [27]. For a compact group G, an arbitrary diagram is decomposed into propagators and vertices: schematically, a line of length τ in irreducible representation λ contributes a propagator weight

𝒫_λ(τ) ∼ e^(−τ C_λ / 2C),   (1.5)

up to normalization, where C_λ is the Casimir of the irreducible representation λ and m ∈ Ω_λ is a weight in the representation λ labeling its endpoints. The vertex function is given essentially by the 3j-symbol of the compact group G:

(λ₁ λ₂ λ₃; m₁ m₂ m₃).   (1.6)

The representation labels of each exterior line are summed over. In the Schwarzian theory, operator insertions are associated to discrete representations of SL(2, R) and external lines to continuous representations, originating from the perfect dichotomy of (normalizable) states and (local) vertex operators in Liouville theory. In the rational case here, all representation labels are discrete, related to the state-operator correspondence in rational 2d CFT.
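As a toy illustration of these building blocks for G = SU(2) (our conventions; the schematic forms above suppress prefactors, and the 1/2C normalization in the exponent is an assumption), the Casimir-driven propagator weight and the 3j-symbol vertex can be evaluated numerically:

```python
# Toy evaluation of the rational building blocks for SU(2):
# Casimir C_j = j(j+1) drives the propagator; vertices use Wigner 3j-symbols.
import numpy as np
from sympy import S
from sympy.physics.wigner import wigner_3j

def propagator(j, tau, C=1.0):
    """Schematic weight e^{-tau * C_j / 2C} for a line in representation j."""
    return np.exp(-tau * j * (j + 1) / (2 * C))

def vertex(j1, j2, j3, m1, m2, m3):
    """Vertex factor from the 3j-symbol (exact sympy value, cast to float)."""
    return float(wigner_3j(S(j1), S(j2), S(j3), S(m1), S(m2), S(m3)))

print(propagator(1, 0.5), vertex(1, 1, 2, 0, 0, 0))
```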
Our main objective is to demonstrate that the embedding of the Schwarzian theory within Liouville theory is not just convenient: it is the most natural way to think about the Schwarzian theory. This will be illustrated by both a field redefinition of Liouville theory and by immediate generalizations to compact group constructions. To expand our set of models, we also discuss N = 1 and N = 2 supersymmetric Liouville and Schwarzian theories wherever appropriate.
The paper is organized as follows. Section 2 contains a path-integral derivation of the link between Liouville theory and the Schwarzian theory. This was hinted at in [27], but is proven more explicitly here. We use this description of Liouville theory to exhibit more explicitly the structural links between these theories in a holographic context in section 3. In section 4 we look at the bulk story for the compact internal symmetries of SYK-type models. Section 5 discusses the 1d particle-on-a-group actions and the diagrammatic rules for computing correlation functions. We end with some concluding remarks in section 6. The appendices contain some additional technical material.
Recently, the papers [36,37] appeared that also investigate extensions of the Schwarzian theory with additional symmetries.
2 Path integral derivation of Schwarzian correlators
In [27] we provided a prescription for computing Schwarzian correlators through 2d Liouville theory on a cylindrical surface between two ZZ-branes. This was based on results in [38,39] on (the moduli space of) classical solutions of boundary Liouville theory. Here we will provide a direct Liouville path integral derivation that substantiates our previous prescription.
2.1 Classical limit of thermodynamics
The Schwarzian limit we will take corresponds to the classical (ℏ → 0) limit of a thermodynamical system. 2 Let us therefore briefly review how this works. For a general theory with fields φ and momenta π_φ, the phase space path integral of the thermal partition function is given as

Z = ∫ 𝒟φ 𝒟π_φ exp[ (1/ℏ) ∫₀^{βℏ} dt ( i π_φ φ̇ − H(φ, π_φ) ) ].   (2.1)

Rescaling t = βℏτ and taking the classical limit, the pq-term localizes the integral onto static configurations, for which φ̇ = 0, π̇_φ = 0. Hence one finds

Z → ∫ dφ dπ_φ e^(−βH(φ, π_φ)),   (2.2)

which is just the classical partition function for a field configuration. We will take precisely this classical limit in the Liouville phase space path integral in the next subsection.
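A simple numerical illustration of this classical limit (ours, not from the paper) is the harmonic oscillator, whose quantum partition function approaches the classical phase-space result as ℏ → 0:

```python
# Z_quantum = 1/(2 sinh(beta*hbar*omega/2)) approaches Z_classical = 1/(beta*hbar*omega).
import numpy as np

beta, omega = 2.0, 3.0
for hbar in [1.0, 0.1, 0.01]:
    Zq = 1.0 / (2.0 * np.sinh(beta * hbar * omega / 2.0))
    Zcl = 1.0 / (beta * hbar * omega)
    print(hbar, Zq / Zcl)   # ratio -> 1 as hbar -> 0
```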
2.2 Gervais-Neveu field transformation
Liouville theory with a boundary is defined by the Hamiltonian density (2.3), with parameters c = 1 + 6Q² and Q = b + b⁻¹. The last term integrates to a boundary term. Operator insertions in Liouville are the exponentials V_ℓ = e^(ℓφ). Gervais and Neveu [41,42,43,44] considered a (non-canonical) field redefinition applied to Liouville theory as (φ, π_φ) → (A(σ, τ), B(σ, τ)), given in (2.4) and (2.5), where A_σ = ∂_σA etc. The new functions A and B need to be monotonic (as can be seen from (2.4)): A_σ ≥ 0 and B_σ ≤ 0. This transformation is invertible, up to simultaneous SL(2, R) transformations on A and B, as in (2.6). We are interested in the large-c regime (small b), where, using this field redefinition, the Hamiltonian density (2.3) takes the form (2.7). The Liouville phase-space path integral, with possible insertions of the type e^(ℓφ), is then transformed into (2.8). The Jacobian factor in the measure is the Pfaffian of the symplectic 2-form ω. Performing the Gervais-Neveu transformation (2.4), (2.5) on the standard symplectic measure, one finds (2.9). Next we define this theory on a cylindrical surface between two ZZ-branes [45] at σ = 0 and σ = π (Figure 2). The classical solution of this configuration is well-known [38,39]: it is given by (2.10) in terms of a single function f that satisfies f(x + 2π) = f(x) + 2π. To implement the boundary conditions at the quantum level, it is convenient to perform a thermal reparametrization of the A and B fields into new fields a and b, as in (2.11), in terms of which (2.4) is rewritten as (2.12). The redefinition (2.11) preserves the monotonicity properties a_σ ≥ 0 and b_σ ≤ 0. The ZZ-boundary state is characterized by φ → ∞ at the location of the branes, requiring by (2.12) that a = b |_{σ=0} and, by the monotonicity requirements, a = b + 2π |_{σ=π}. More general boundary conditions and branes are discussed in appendix A. See Figure 3, left. The Schwarzian limit is defined by taking the small radius limit (T → 0), thereby reducing the theory to just the zero-mode along the τ-direction. To obtain a theory with non-zero action, we need to take c → +∞ simultaneously such that cT/24π ≡ C, a fixed constant. 3 This double scaling limit is identical to the classical limit of thermodynamics discussed earlier in section 2.1. In this limit we obtain the expression (2.13) for the Liouville correlator. 4 This can be simplified by defining a doubled field f to implement the boundary conditions on the ZZ-branes, as in (2.14), for f continuous, f_σ ≥ 0 everywhere, and f a 1:1 mapping from (−π, π) to (−π, π), so f ∈ Diff S¹/SL(2, R) (Figure 3, right). The symplectic form (2.9) in these new variables, using that f_σ = a_σ and f_σ = −b_σ on the two halves and that both terms add up, is written as (2.15), 5 which is identified with the Alekseev-Shatashvili symplectic measure on the coadjoint Virasoro orbit [47,48]. The boundary term drops out by our choice of boundary conditions, and the expression is SL(2, R) invariant by construction. The link between Liouville theory between branes and the geometric Alekseev-Shatashvili action is made in appendix A.
Stanford and Witten showed that, for a suitable choice of gauge, this becomes the standard measure ∏_t df(t)/ḟ(t), mod SL(2, R) [29]. Regardless, the final expression for the path integral becomes

⟨...⟩ = ∫_{Diff S¹/SL(2,R)} ∏_t [df(t)/ḟ(t)] ... e^(C ∫_{−π}^{π} dt {F, t}).   (2.16)

Footnote 4: To avoid cluttering the equations, the "mod SL(2, R)" is left implicit here.
Footnote 5: β should be set to 2π here. To reintroduce β in all expressions, one places the branes at a distance β/2 and sets A = tan(πa/β), etc. Alternatively, one can redefine C → C·2π/β and then rescale t → t·2π/β and f → f·2π/β. This gives the field f its physical dimension and demonstrates that the coupling constant C ∼ cT has the dimensions of length.
The theory is reduced to a Schwarzian system on the circle, with F = tan(f/2). The Lagrangian {F, t} is the analogue of (1.1) at finite temperature. In the process, Liouville operator insertions become bilocal insertions in the Schwarzian theory. Liouville stress tensor insertions are written in (2.7) as a sum of two Schwarzian derivatives, resp. the holomorphic and antiholomorphic stress tensor. This exhausts the non-trivial Liouville operators. We end up with a Euclidean theory on the circle. As stressed in [27], one can then extend this expression to arbitrary times for the bilocal operators to obtain the most generic Euclidean time configuration. Expressions for correlators are then obtained by taking the double scaling limit directly in the known equations in Liouville theory. Afterwards, one can directly Wick-rotate these to Lorentzian signature. Both of these steps are non-trivial, and the correctness of this procedure is verified by several explicit checks in [27].
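The finite-temperature form can be checked symbolically: with F = tan(f/2), the Lagrangian {F, t} equals {f, t} + (1/2)ḟ², the usual finite-temperature Schwarzian. A quick sympy verification (ours):

```python
# Check the identity {tan(f/2), t} = {f, t} + (1/2) f'(t)^2.
import sympy as sp

t = sp.symbols("t")
f = sp.Function("f")(t)

def schwarzian(g, t):
    return (sp.diff(g, t, 3) / sp.diff(g, t)
            - sp.Rational(3, 2) * (sp.diff(g, t, 2) / sp.diff(g, t))**2)

F = sp.tan(f / 2)
print(sp.simplify(schwarzian(F, t) - (schwarzian(f, t) + sp.diff(f, t)**2 / 2)))  # -> 0
```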
To summarize, the 1d Lagrangian is the dimensional reduction of the 2d Hamiltonian, and the 2d local vertex operators become bilocal operators in the 1d theory. This is the rule we used in [27], and we will use this short mnemonic later on in section 5 when we generalize this construction beyond SL(2, R) to arbitrary (compact) Lie groups.
2.3 Bäcklund transformation
Instead of using the Gervais-Neveu parametrization (2.4), (2.5), we can make one more field redefinition to get a free field theory (a Bäcklund transformation), by defining new variables (φ_F, π_F) as in (2.17)-(2.18), transforming the symplectic measure again into the canonical one (2.19), and proving that the transformation (φ, π_φ) → (φ_F, π_F) is canonical in field space (see e.g. [49] and references therein). The Hamiltonian gets transformed into the free-field one:

ℋ = (1/2)π_F² + (1/2)(∂_σφ_F)².   (2.20)

Boundary conditions still need to be specified however, and, when written this way, the system is not suited for the doubling trick.
There is a slight variant of this transformation that is better equipped for this purpose, defining (ψ, χ) as

A_σ ≡ e^(ψ),  B_σ ≡ −e^(χ),   (2.21)

or, in terms of the Bäcklund variables: φ_F = ψ + χ, π_F = ψ_σ − χ_σ. It will turn out that these field variables correspond to the Alekseev-Shatashvili fields [47,48]. Upon taking the Schwarzian limit, they correspond also with the field variables utilized in [34,35]. In these variables the theory takes the form (2.22). The field transformation (φ, π_φ) → (ψ, χ) has a harmless symplectic form (2.23): the measure is now innocuous, as it is field-independent, and can be readily evaluated in terms of an auxiliary fermion η as in (2.24). To implement the ZZ-boundary conditions for ψ and χ, we need to return to the A and B fields using (2.11). The boundary conditions in terms of these are illustrated in Figure 4.
The doubled field F is defined for the doubled interval (−π, π), with F(π) = F(−π) + ∞, in the sense of the above figure. Defining a doubled ψ-field for the interval (−π, π), with ḟ = e^(ψ), the winding constraint is written as (2.26), which can be regularized and implemented in the theory using a Lagrange multiplier [34,35]. The path integral becomes (2.27). 6 Again taking the double scaling limit reduces this system to the expression (2.28), which can be computed explicitly as shown in [34,35]. We remark that this theory exhibits chaotic behavior, even though it looks like a free theory. Within this language, this is explicitly found in [34,35], and ultimately arises due to the above constraint (introducing a 1d Liouville potential) and the non-local nature of the operator insertions.
2.4 N = 1 super-Liouville
The preceding discussion can be generalized to N = 1 super-Liouville theory and the N = 1 super-Schwarzian. We will be more sketchy in this paragraph; some details are left to the reader. The analogous treatment of Gervais and Neveu for N = 1 Liouville theory appeared in [50,51,52] and we heavily use their results.

Footnote 6: The gauge symmetry implementation is more subtle now. The original invariance is reduced to γ = 0 (to fix the divergences to σ = ±π by choice) and β = 0 (the transformation (2.21) undoes this redundancy). Only rescalings F → α²F are left, which indeed correspond to shifts in ψ which leave the action (2.27) and operator insertions invariant. This leftover gauge symmetry is explicitly distilled in correlators in [34,35]. Also, quantum renormalization effects should be taken into account when considering the 2d system, as discussed in [47,48].

Figure 5: Liouville theory in 2d in its different incarnations, and the resulting 1d theory one finds upon taking the double scaling (classical) limit. The redefinition ḟ = e^(ψ) utilized by Altland, Bagrets and Kamenev (ABK) [34,35] is the dimensional reduction of the transition from Gervais-Neveu variables to Bäcklund variables.

N = 1 super-Liouville theory is defined by the Hamiltonian density (2.29)
for a scalar φ and two Majorana-Weyl fermions ψ₁ and ψ₂. The auxiliary field F has been eliminated by its equations of motion. In superspace (σ, τ, θ₁, θ₂), the general classical super-Liouville solution for the superfield Φ(σ, τ, θ₁, θ₂) is written as in (2.30). As before, this can be generalized to an off-shell field redefinition (2.31) in the phase space path integral, utilizing the off-shell generalization of (2.30) and the conjugate momentum as the definition of the non-canonical field redefinition (see [50] for details). These fields are not completely independent, but satisfy the constraint (2.32), making the transformation a super-reparametrization and reducing the number of real components from eight to four, matching the l.h.s. of (2.31). In these variables, super-Liouville theory is naturally interpreted as the theory of all super-reparametrizations, generalizing this statement from previous sections.
To rewrite the theory in terms of these variables, consider first the differential equation for a fermionic function V_i(σ, τ, θ_i). For e.g. i = 1, one checks that this equation is solved by x = (Dα)⁻¹, A(Dα)⁻¹, α(Dα)⁻¹, with V_i equal to (minus) the super-Schwarzian derivative, and A and α linked by (2.32). Indeed, evaluating the above for e.g. x = (Dα)⁻¹ explicitly produces the super-Schwarzian derivative. Analogous formulas hold for V₂ in terms of β and B.
It was then demonstrated in [50] that the Hamiltonian density can be written in terms of the V_i. The bosonic pieces of V_i thus become the Hamiltonian density in real space (after integrating over θ). The fermionic parts (the Λ's) in (2.36) are interpreted as the supercharge densities.
ZZ-brane boundary conditions at σ = 0, π require that Φ → ∞ at those locations, which means by (2.30), next to the bosonic conditions on A and B, that α = ±β|_{σ=0,π}. This again allows us to recombine A and B into a single reparametrization F, and α and β into η, the superpartner of F. For the latter, one needs to choose NS (opposite) boundary conditions on the branes, such that α = β on one end and α = −β on the other. This leads to an antiperiodic fermionic field η on the doubled circle, which indeed corresponds to a thermal system. It is possible to choose other fermionic boundary conditions at the ZZ-branes, but this only leads to the N = 0 Schwarzian as discussed in [27]. Super-Liouville vertex operators e^{αΦ} become bilocal super-Schwarzian operators of the form (2.30), given by arbitrary super-reparametrizations of the classical Liouville solution.
Classical dynamics of Liouville and 3d gravity
Here we analyze some aspects of the classical dynamics of 2d Liouville and 3d AdS gravity with the dimensional reduction to the 1d Schwarzian and 2d Jackiw-Teitelboim gravity in mind. The larger goal is to demonstrate the structural links between 2d Liouville theory, 3d gravity, the Schwarzian theory, and JT gravity. The next section generalizes this further to other theories.
Liouville with energy injections
In [24], we analyzed the Schwarzian theory at the classical level in 2d Jackiw-Teitelboim (JT) gravity by allowing energy injections from the boundary. We demonstrated there that the matter energy determines a preferred coordinate frame close to the boundary. Here we show how that analysis directly generalizes to the higher-dimensional Liouville theory. For this purpose, the Gervais-Neveu variables (A, B) are most useful. Liouville theory at large c is expected to describe the universal gravitational features of holographic CFTs, and it is this regime we discuss here. As in (2.4), the Liouville exponential is related to the (A, B) fields as in (3.1). On-shell, A and B are holomorphic resp. antiholomorphic functions, and the Liouville metric ds² = e^φ dx⁺dx⁻ is transformed from the Poincaré patch into an arbitrary frame. The lightcone stress tensor components are given by equation (2.7), repeated as (3.2). Energy conservation would ordinarily result in holomorphicity of T₊₊ and T₋₋. However, this is violated if the system is not closed, as happens when additional energy is injected into the system. We allow for this possibility here. The Schwarzian theory has its time coordinate identified with the Liouville spatial coordinate σ, so we relabel the Liouville coordinates to reflect this: we set τ → x and σ → t. This corresponds to swapping the roles of time and space in Liouville theory. The total energy on a constant-t slice is obtained by integrating the stress tensor over x. Within a holographic theory with bulk coordinates (t, r, x), the total change in boundary energy equals the net bulk inwards flux from the boundary. This equation is not that powerful in general. However, when reducing to the spatial (= x) zero-mode, it becomes the classical Schwarzian equation of motion [22,23,24]; the Schwarzian equation is just energy conservation. When evaluating (3.2) on a region where energy is conserved, all functions become holomorphic and this just reduces to the uniformizing coordinate identification, where x± = τ ± σ.
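As a small consistency check of this statement, one can verify symbolically that the thermal reparametrization f(t) = tan(πt/β) has a constant Schwarzian derivative, so that the zero-mode energy E = −C{f, t} is conserved; the snippet below is our own sketch.

```python
import sympy as sp

t, beta = sp.symbols('t beta', positive=True)

def schwarzian(f, t):
    # {f, t} = f'''/f' - (3/2) (f''/f')^2
    return sp.simplify(sp.diff(f, t, 3) / sp.diff(f, t)
                       - sp.Rational(3, 2) * (sp.diff(f, t, 2) / sp.diff(f, t))**2)

f = sp.tan(sp.pi * t / beta)
print(schwarzian(f, t))  # 2*pi**2/beta**2: t-independent, i.e. a conserved energy
```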
Bulk interpretation
The above can be interpreted as a diffeomorphism from vacuum Poincaré AdS₃ (A, B) into a new preferred frame (x⁺, x⁻). It is clearest to demonstrate this in a region where no additional matter falls in (or is extracted) (Figure 6, left).

Figure 6: Left: classical injection of bulk energy between t₁ < t < t₂. We consider the region after the injection takes place, t > t₂, where a non-zero boundary T±± was generated. Right: classical injection of a translationally symmetric pulse into the bulk.

It has been shown in [53] that the general bulk diffeomorphism that brings the Poincaré AdS₃ solution (X⁺, X⁻, u) to the Banados metric (x⁺, x⁻, z) is found by extending the transformation into the bulk, with the chiral functions X±(x±) and L±(x±) determined by solving (3.11). The full bulk diffeomorphism is given by (3.9)-(3.10). Hence the functions A and B indeed correspond to the boundary reparametrization that, upon extending into the bulk using (3.10), is precisely the required frame. Setting z = ε in (3.9) leads to a radial trajectory u(X⁺, X⁻) representing a fluctuating holographic boundary caused by matter injections. Note that solving (3.11) directly leaves an SL(2, R) × SL(2, R) ambiguity, which is fixed by boundary (gluing) conditions, just as in the 2d case [24].
As an explicit example, consider a translationally invariant injection of matter through a pulse (Figure 6, right). This requires T₊₊ = T₋₋ to set T_{tx} = 0 for t > 0, each equal to (half) the energy injected. One can then immediately solve (3.11) for A and B after the pulse. The resulting Banados metric at t > 0 is of course the BTZ black hole frame.
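The statement can again be checked symbolically: a frame of tanh type has constant (negative) Schwarzian, matching a constant post-pulse stress tensor. The snippet is a sketch of ours; the identification of −2a² with the injected energy depends on conventions not reproduced here.

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

def schwarzian(F, x):
    return sp.simplify(sp.diff(F, x, 3) / sp.diff(F, x)
                       - sp.Rational(3, 2) * (sp.diff(F, x, 2) / sp.diff(F, x))**2)

# A tanh frame X(x) = tanh(a x) has constant Schwarzian, the hallmark of the
# BTZ black hole frame behind a translationally invariant pulse.
print(schwarzian(sp.tanh(a * x), x))  # -2*a**2, constant along the boundary
```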
Jackiw-Teitelboim from 3d
It has been known for a long time that a spherical dimensional reduction of 3d gravity yields 2d Jackiw-Teitelboim gravity [54]. This is done by considering a 3d ansatz with λ a mass scale, which directly yields the JT gravity action. The Schwarzian coupling constant is C ∼ 1/G₂, but G₃/L → 0 to match 3d gravity with 2d Liouville theory at large central charge, with Brown-Henneaux central charge c = 3L/(2G₃). So we choose λL → +∞ to obtain a finite limit with G₂ ∼ λG₃. This is the Schwarzian double scaling limit from the bulk perspective. This 3d perspective on the bulk is very useful, and we here mention some aspects that become easier to understand when embedding the theory in 3d.
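For orientation, the scalings just quoted can be chained together; the lines below are bookkeeping only, using nothing beyond the relations stated in the text.

```latex
% Combining c = 3L/(2 G_3),  G_2 \sim \lambda G_3  and  C \sim 1/G_2:
\[
  G_3 = \frac{3L}{2c}, \qquad
  G_2 \sim \lambda G_3 = \frac{3\lambda L}{2c}, \qquad
  C \sim \frac{1}{G_2} \sim \frac{2c}{3\lambda L}.
\]
% Keeping C finite while c \to \infty therefore forces \lambda L \sim c \to +\infty:
% the Schwarzian double scaling limit, seen from the bulk.
```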
Black hole solutions from 3d
At the level of classical solutions, the general vacuum solution of 3d Λ < 0 gravity is the Banados metric, for arbitrary chiral functions L±(x±). Performing a spherical dimensional reduction requires L₊ = L₋ = L, a constant, as it should be independent of ϕ. The resulting 3d space is a non-rotating BTZ black hole, dimensionally reducing to a 2d JT black hole. By (3.11), only constant Schwarzian solutions survive the reduction, as this is the generic 3d metric outside matter. And any 2d vacuum metric in JT theory is a black hole of a given mass. Indeed, directly solving the vacuum JT equations (as in [21,22,23,24,25]) leads to black hole spacetimes as the only solutions, perfectly analogous to the 2d CGHS models [55].
Fefferman-Graham from 3d
In [21,22,23,24], JT gravity is defined by enforcing an asymptotic value Φ₂ ∼ a/ε of the dilaton Φ₂ at z = ε, combined with an asymptotically Poincaré metric. Here we demonstrate that, upon embedding in 3d, both of these conditions follow from just imposing asymptotically Poincaré boundary conditions directly in 3d. Starting from the 3d BTZ metric, performing the purely radial transformation ρ = √µ a coth(√µ (x⁺ − x⁻)/a) [21,24] and setting t = (x⁺ + x⁻)/2, the metric takes the form of a spherical dimensional reduction, giving the 2d JT black hole metric h_ij and associated dilaton field Φ₂. Asymptotically, the above 3d metric behaves, upon absorbing a into ϕ, just as the standard Fefferman-Graham asymptotic expansion. Hence imposing Fefferman-Graham gauge in 2d together with Φ₂ ∼ a/ε is equivalent to imposing Fefferman-Graham gauge in 3d.
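The radial transformation can be verified symbolically; below is our own check, setting a = 1 and assuming a static 2d black hole metric of the form (ρ² − µ)dt² + dρ²/(ρ² − µ), which is an assumption about conventions.

```python
import sympy as sp

z, mu = sp.symbols('z mu', positive=True)
rho = sp.sqrt(mu) * sp.coth(sp.sqrt(mu) * z)

# Pull back (rho^2 - mu) dt^2 + drho^2/(rho^2 - mu) along rho(z): both metric
# coefficients should collapse to mu/sinh^2(sqrt(mu) z), the conformally flat
# form whose small-z behaviour is the Fefferman-Graham expansion.
g_tt = rho**2 - mu
g_zz = sp.diff(rho, z)**2 / (rho**2 - mu)
target = mu / sp.sinh(sp.sqrt(mu) * z)**2
print(sp.simplify((g_tt - target).rewrite(sp.exp)),
      sp.simplify((g_zz - target).rewrite(sp.exp)))  # -> 0 0
```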
3d embedding
Armed with the above embedding of the Schwarzian theory within Liouville and JT gravity within 3d gravity, we can now relate four different theories through dimensional reduction and the Schwarzian limit.
One starts with 3d gravity in the bulk, with periodically identified Euclidean time τ. Its boundary contains 2d Liouville theory. Reducing instead to the angular ϕ-zero-mode, one obtains 2d JT gravity in the bulk. These two 2d theories live in distinct regions and are only linked through this higher-dimensional story. Finally, dimensionally reducing Liouville theory leads to the Schwarzian theory as the angular zero-mode of the boundary theory (Figure 7).
Figure 7 (diagram relating Liouville / WZW, 3d gravity / CS, Schwarzian / particle on group, and JT gravity / BF theory): Link between four theories through dimensional reduction, both for the gravity sector and for the group theory sector. The interior of the torus is the 3d bulk. The torus itself is the holographic boundary. Reducing to the angular zero-mode gives a 2d bulk and a 1d boundary line.
We can omit the ZZ-branes if we realize that their entire goal in life is to combine left- and right-moving sectors into one periodic field, thereby transforming the cylindrical surface into a (chiral) torus. This equivalence is also demonstrated in Figure 8.
As we will demonstrate starting from the next section, an analogous story holds for group theory: Chern-Simons (CS) theory in 3d reduces to 2d WZW on the boundary. Restricting instead to the angular zero-mode leads to 2d BF theory in a different region. Further dimensionally reducing the boundary theory leads to the 1d particle on a group manifold. The resulting scheme of models was already shown in Figure 1 and is repeated in Figure 9 for convenience.
Bulk derivation
It was suggested in [22,23,24] that the Schwarzian theory is holographically dual to Jackiw-Teitelboim gravity. Within JT gravity, the Schwarzian appears as follows. The dilaton field blows up near the AdS boundary, with a coefficient depending on the matter sector. Keeping its asymptotics fixed requires performing a coordinate transformation at each instant, depending on the injected / extracted energy of the system. This results in a fluctuating boundary curve (Figure 10, left). One can directly deduce the Schwarzian action from the bulk 2d JT dilaton gravity theory from the Gibbons-Hawking boundary term [23]. This argument has been generalized to N = 1 and N = 2 JT supergravity in [56] and [57], respectively. In appendix B we extend the argument (in the bosonic case) to include an arbitrary matter sector.
The gauge theory variant of this story is readily formulated: we need a preferred gauge transformation on the boundary curve at each instant, determined by the charge injected into the system (see Figure 10, right). The correct bulk theory that describes this situation is 2d BF theory. The argument we present is a dimensional reduction of the 3d Chern-Simons story and the direct analog of the Schwarzian argument of [23]. Consider the 2d BF theory obtained as a dimensional reduction from 3d CS theory, with A_ϕ ∼ χ and ∂_ϕ = 0; one obtains the action (4.2). This action is not gauge-invariant, but changes by a boundary term, just like 3d CS theory. Restricting the gauge transformations to satisfy δ_g A₀ = 0|_∂M solves this problem, but creates dynamical degrees of freedom at the boundary. Sending in charge through a matter field requires the additional term (4.4).
Footnote 11: Reintroducing the correct prefactor k/(4π) in the Chern-Simons action, by analogy with section 3.2, one needs to set A_ϕ ∼ χ/k to find a finite limit. The resulting 2d action is proportional to some C again, which is not quantized even though the original k is.
This term is the charge analogue of the energy-momentum matter source for the gravitational field given in appendix B. Varying w.r.t. A_µ and χ gives the equations of motion (4.5), and the boundary terms at r = +∞ can be cancelled by constraining the boundary values as in (4.7), for a parameter v that defines the specific theory. We choose v = 1. Path integrating (4.2) over χ sets F = 0 in the bulk, so we parametrize the flat connection in terms of a gauge function σ. Using the boundary condition (4.7), the full action (4.2) then reduces to a boundary action (4.9). The total boundary charge is defined as Q = δS_on-shell/δA₀ = σ̇ (4.10), and the total boundary energy is defined analogously in (4.11). For the matter action S_matter, after integrating by parts, one finds a boundary term representing the net inward flux of charge. As charge is sent in, one requires A₀ to change asymptotically as well, to keep the boundary condition (4.7) fixed. Either by using χ = σ̇ and (4.5), or by directly varying the boundary action in terms of σ, one obtains an equation determining how the gauge transformation σ evolves due to matter charge; σ was pure gauge in the bulk but becomes physical on the boundary.
Some Comments:
• This procedure is independent of the gravity (Schwarzian) part. N = 2 JT supergravity would fix the relative coefficient (see section 4.2 below).
• Non-abelian generalization is straightforward. The non-abelian BF theory is gauge-invariant (χ transforms in the adjoint representation), up to the boundary term again. The equations of motion require A_µ = g⁻¹∂_µg, with F = 0. The boundary condition is again chosen as χ = A₀|_∂M. So the full theory reduces to the boundary action of a particle on a group manifold, to be studied more extensively in section 5 below.
• One can write Jackiw-Teitelboim itself as an SL(2, R) BF theory [20], see also [58] for recent developments. In fact, dimensionally reducing SL(2, R) CS theory just gives us the SL(2, R) BF theory, which is the first-order formalism equivalent of dimensionally reducing the Ricci scalar directly. And indeed, the SL(2, R) particle-on-a-group action is equivalent to the Schwarzian action [27]. Operator insertions on the other hand are not so simple.
• 3d bulk gravity coupled to 3d CS theory leads to decoupled equations of motion, because the CS stress tensor vanishes identically: T^{CS}_{µν} ≡ 0. The only influence of the CS theory on the gravity part is in the definition of the total Hamiltonian: H = H_grav + H_CS, with the contribution (4.11), which provides just a shift in the energy. This will indeed be observed below in section 5.2.
Supersymmetric JT gravity theories
The identification of the non-interacting gauge sector as a 2d BF theory can also be understood from supersymmetry, as will be illustrated here. Pure 3d gravity can be written as an sl(2) ⊕ sl(2) Chern-Simons theory. Similarly, Achucarro, Townsend and Witten demonstrated a long time ago that (p, q) 3d supergravity can be written as an osp(p|2) ⊕ osp(q|2) Chern-Simons theory [59,60]. Dimensionally reducing these (super)gravity theories for the case p = q leads to an osp(p|2) 2d BF theory. And indeed, as known for a long time [20], JT gravity itself can be written as an sl(2) BF theory: S_JT = Tr(ηF), (4.16) with A = e^a P_a + ωJ, field strength F = dA + A ∧ A, and η = η^a P_a + η³J, in terms of zweibein e^a (a = 1, 2) and spin connection ω.
Supersymmetric generalization is now straightforward, as one just generalizes the gauge group from sl(2) to either osp(1|2) (N = 1) or osp(2|2) (N = 2). In particular, the N = 2 JT supergravity action may be written as (4.17) [61,62], in terms of the field strength F = dA + A ∧ A, with the dilaton superfield E and superconnection A, expanded into the osp(2|2) generators: three sl(2) generators P_a, J, four fermionic generators Q±, Q̄±, and one additional u(1) generator B. These eight generators satisfy an osp(2|2) algebra whose explicit form can be found in the literature. For simplicity, we set the cosmological constant to zero here, as this does not influence the structure of the theory. In components, the piece of the action coming from just the bosons is bosonic JT gravity (4.16) supplemented with a u(1) BF theory χF. Studying the N = 2 theory on its own would be interesting, as this couples the gravitational and gauge sectors in the bulk. This is left for future work.
Correlation functions in group models
We focus now on the boundary theories of the 3d Chern-Simons and 2d BF models. We will provide a prescription for computing correlation functions of the 1d particle-on-a-group theory, following the logic used in the Schwarzian theory in [27] and in section 2. We start by providing a general formalism starting from 2d Wess-Zumino-Witten (WZW) rational CFT and performing a double-scaling limit. Our main interest is again in computing the cylinder amplitude between vacuum branes. After that, we consider U(1) and SU(2) as two examples that will allow us to write down the generic correlation function using diagrammatic rules.
From 2d WZW to 1d particle-on-a-group
Consider the 2d WZW system, with a path integral over g ∈ G at integer level k, and with Γ the Wess-Zumino term, whose explicit form will not be needed. An operator F(g) is inserted, with F a scalar-valued function on the group. As is well known, this theory enjoys invariance under local group transformations g → g₁(z) g g₂(z̄).
Just as in Liouville theory, we focus on the moduli space of classical solutions of this theory to deduce the link between the 2d and 1d operators. This system has the classical solution g(z, z̄) = f(z)f̄(z̄), with f and f̄ local group elements as well. Inserting a brane at z = z̄ (or u = v in Lorentzian signature) imposes reflecting boundary conditions (5.2), which, when translated into a condition on f, requires f̄ = f⁻¹. This boundary condition projects the symmetry onto its diagonal subgroup; the condition (5.2) is preserved under the group transformation provided g₁ = g₂⁻¹. In terms of f, the symmetry transformation is now f → g₁f. At the second boundary brane at σ = π, where u = τ + π, v = τ − π, one has g = f(τ + π)f⁻¹(τ − π), which satisfies the boundary condition if f is 2π-periodic: f(x + 2π) = f(x). Hence, after implementing the boundary conditions, the system is characterized by a single 2π-periodic function f. Just as with the Schwarzian theory, we imagine performing a change of field variables from g to f. The transformation g(z, z̄) = f(z)f⁻¹(z̄) has, in analogy with (2.4), a redundancy in description: f ∼ fγ for γ ∈ G any global group element. One can then identify a local WZW operator F(g(z, z̄)) with a bilocal 1d operator as z → t₁ and z̄ → t₂. Dimensionally reducing as in the Liouville/Schwarzian case, the WZW action itself immediately reduces to the particle-on-a-group action; the Wess-Zumino term Γ vanishes upon dimensional reduction.
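The redundancy and the residual symmetry can be illustrated numerically; the following SU(2) check is our own sketch.

```python
import numpy as np
from scipy.linalg import expm

# The bilocal g(t1, t2) = f(t1) f(t2)^{-1} is invariant under f -> f.gamma for a
# constant gamma (the redundancy), while a global f -> g1.f acts on it only by
# conjugation, so traces of bilocals are fully invariant.
rng = np.random.default_rng(1)
paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]], complex),
          np.array([[1, 0], [0, -1]], complex)]

def random_su2():
    v = rng.normal(size=3)
    return expm(1j * sum(a * s for a, s in zip(v, paulis)) / 2)

f1, f2, gamma, g1 = (random_su2() for _ in range(4))
g = f1 @ np.linalg.inv(f2)
g_gauge = (f1 @ gamma) @ np.linalg.inv(f2 @ gamma)    # f -> f.gamma
g_global = (g1 @ f1) @ np.linalg.inv(g1 @ f2)         # f -> g1.f

print(np.allclose(g, g_gauge))                        # True: gamma drops out
print(np.allclose(np.trace(g), np.trace(g_global)))   # True: conjugation-invariant
```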
Hence the rational generalization of the Schwarzian story requires us to compute the 1d path integral over the group, (5.3). The periodicity of 2π can be changed into β by rescaling the time coordinate as t → (2π/β)t, which can alternatively be achieved by placing the branes β/2 apart. Both the action and the operator insertions are left invariant under the global group f → fγ, but are not invariant under local transformations. This immediately generalizes the Schwarzian coset Diff S¹/SL(2, R) to the generic rational case as the right coset G_local/G_global. Taking into account the periodicity of f, this integration space is also written as the right coset of the loop group, LG/G, which is known to be a symplectic manifold. The resulting partition function could then be computed using the Duistermaat-Heckman (DH) theorem, just as in the Schwarzian case [63]. Note that the transformation f → g₁f, g₁ ∈ G, is a symmetry of the action: it is the remnant of the WZW symmetry in 1d, as remarked above. But it is not necessarily a symmetry of operator insertions, and it is not a gauge redundancy. We did not work out the measure [Df] explicitly as in section 2, but by general arguments this has to be the standard √(det G) measure of the group metric ds² = G_µν dx^µ dx^ν = Tr[g⁻¹dg ⊗ g⁻¹dg]. The double scaling limit we take is T → 0 and k → ∞ with the product kT ∼ C held fixed, proportional to a coupling constant C. We will be more specific about this below in section 5.3. The coupling constant C allows us to explore the semi-classical regime of (5.3) at C → +∞.
Structurally, the particle-on-a-group action is very similar to the Schwarzian action: the Lagrangian L and Hamiltonian H can be written as those of a particle moving on the group manifold. The quantization of a particle on a group manifold is in principle well-known (see e.g. [64]). Consider for instance the partition function (without operator insertions), and ignore first the modding f ∼ fγ we wrote in (5.3). Then this is manifestly the path integral rewriting of the Lorentzian partition function Tr e^{−βH}. As mentioned above, the theory is invariant under G × G as f(t) → g₁f(t)g₂. Using operator methods, this can be used to prove that each energy eigenvalue, with irrep label j, has a degeneracy of (dim j)². As an example, the SU(2) group manifold is just the three-sphere S³, which has SO(4) ≃ SU(2) × SU(2) isometry, meaning an organization of the energy spectrum into (2j + 1)² degenerate states. This can indeed also be seen explicitly for SU(2) in [65], and in the general case in [64,63], both with operator methods and path integral methods. Thus each irrep j contributes (dim j)² e^{−βC_j} to the partition function. Reintroducing the gauge-invariance f ∼ fγ in (5.3) merely requires gauge fixing the thermal path integral, which yields an overall factor of the (finite) group volume (vol G)⁻¹; this is included in the zero-temperature entropy S₀ and dismissed. As mentioned above, this does however allow one to prove one-loop exactness of the path integral through the DH formula. The above expression is indeed what we will obtain in section 5.3 below for SU(2), and is readily generalized beyond that. We provide some more explicit formulas in appendix C.
Cylinder amplitude
Just as to get to the Schwarzian from Liouville in section 2, we place two vacuum branes and consider the WZW amplitude on a cylinder between these vacuum branes (as earlier in Figure 2): ⟨brane 0| e^{−T̃ H_cl} |brane 0⟩, (5.7) with T̃ = 2π²/T the length of the cylinder in the closed channel when the circumference is fixed to 2π. As is well understood, a boundary state |a⟩ can be expanded into Ishibashi states, where the sum ranges over all integrable representations of the Kac-Moody algebra ĝ; in the k → +∞ limit these become just all irreducible representations of the Lie algebra g. In the limit of interest, where the length of the cylinder becomes much longer than its circumference, the Ishibashi states are themselves dominated by their zero-mode (n = 0) states, and the Kac-Moody algebra reduces to the zero-mode Lie algebra. One can thus write (5.7) in terms of the modular S-matrix and the Casimirs C_i of the irreps. Including operator insertions in the middle requires splitting the evolution into separate pieces and inserting complete sets of primaries around each such insertion. For instance, the two-point function of this system can be written in terms of the matrix element (5.12), which can e.g. be computed in configuration space — the method we utilized for the Schwarzian theory in [27].
In the next two subsections we will consider the two simplest examples. The generalization to arbitrary compact groups will be obvious at the end. We will end up with a Feynman rule decomposition of the general correlator, analogous to the Schwarzian case [27]. Just as in that case, we remark that the resulting expression is non-perturbative in the coupling constant C: the Feynman diagrams just represent a convenient packaging of the building blocks of the general expressions.
Example: U(1)
As a first example, let's take U(1). We start with a direct evaluation of its correlators following the preceding discussion. Afterwards we will embed the theory into N = 2 Liouville and find the same answer. The latter serves as a further consistency check on the Schwarzian limit from supersymmetric versions of Liouville theory.
Direct evaluation
Consider a free boson field φ in 2d with action S = ∫ du dv ∂_u φ ∂_v φ. The classical solution is given by φ(u, v) = σ(u) + σ̄(v). (5.13) Perfect reflection at u = v and u − v = 2π requires σ̄ = −σ and σ(u + 2π) = σ(u). Natural vertex operators are the exponentials (5.14). The classical moduli space is parametrized by a real periodic function σ, so the 1d Schwarzian-type limit proceeds as before. In this particular case, the bilocal operator is just a product of two local operators.
Of course, the resulting theory is free and immediately solvable. Consider e.g. a two-point correlator. The classical equation of motion for σ, including the operator insertions, is solved analogously to the semi-classical regime of Liouville theory (and written here in Lorentzian signature): σ̈ = Qδ(t − t₁) − Qδ(t − t₂), (5.17) hence σ̇ increases by Q at t₁ and decreases again to its original value at t₂. Thus the operators inject and extract charge, and σ̇ represents the total charge in the system, as found earlier from the bulk perspective in section 4. The Gaussian path integral is readily computed as (5.18). If the integral on the r.h.s. is truly an integral ranging from −∞ to +∞, one obtains (5.19), which at β → +∞ asymptotes to e^{−Q²τ/4}. This, as we show below in (5.65), is the general result for any non-abelian group as well, with Casimir Q²/4. This two-point function has the shape shown in Figure 11.
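The charge integral can be evaluated numerically as a check; we scale the coupling out of the exponents, which is our normalization assumption for (5.18)-(5.19).

```python
import numpy as np
from scipy.integrate import quad

# Charge-sector integral behind (5.18)-(5.19): the bilocal shifts the charge q
# by Q for a Euclidean time tau, so
#   G(tau) = int dq e^{-q^2(beta-tau)/4} e^{-(q+Q)^2 tau/4} / int dq e^{-q^2 beta/4}.
def G(tau, beta, Q):
    num, _ = quad(lambda q: np.exp(-q**2 * (beta - tau) / 4 - (q + Q)**2 * tau / 4),
                  -np.inf, np.inf)
    den, _ = quad(lambda q: np.exp(-q**2 * beta / 4), -np.inf, np.inf)
    return num / den

beta, Q, tau = 200.0, 1.0, 3.0
print(G(tau, beta, Q))                                    # numerical value
print(np.exp(-(Q**2 / 4) * tau * (beta - tau) / beta))    # closed form: matches
print(np.exp(-Q**2 * tau / 4))                            # beta -> infinity limit
```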
Interpretation in terms of N = 2 super-Schwarzian
The U(1) sector is relevant for e.g. the N = 2 super-Schwarzian. This is because the latter contains, in addition to the fermionic superpartners, also an additional bosonic field σ that is identified with the above U(1) sector. Here we demonstrate this directly. In the next paragraphs we will identify it from its N = 2 Liouville ancestor.
The bosonic piece of the super-Schwarzian action is the Schwarzian plus a free boson field σ [33], equation (5.20); the relative coefficient is fixed by N = 2 supersymmetry. An N = 2 super-reparametrization of the invariant super-distance is given by (5.21). For a purely bosonic reparametrization, the bosonic piece of (5.21) can be viewed as a simultaneous reparametrization f(τ) and gauge transformation g(τ) ≡ e^{iσ(τ)} acting on the charged 1d operator, O → e^{iσ}O, as given in (1.3).
Charged Schwarzian from N = 2 Liouville
It is possible to obtain this theory directly from N = 2 Liouville theory. The N = 2 supersymmetric generalization of Liouville theory consists of the Liouville field φ, the superpartners ψ± and ψ̄±, and a compact boson Y, forming the full supersymmetric multiplet. The central charge is c = 3 + 3Q² = 3 + 3/b². Details can be found in the literature, but will not be needed here. Take this theory on the cylinder bounded by two ZZ-branes, and consider imposing antiperiodic boundary conditions in N = 2 Liouville along the small circle (NS sector) (Figure 12). This leads to the removal of all fermionic degrees of freedom in the 1d theory, and retains only the Liouville field itself (leading to the Schwarzian) and the compact boson Y (leading to the U(1) theory). The analysis of section 2 can be repeated when adding the free boson Y. This leads to an additional 1d action in the Schwarzian limit, and to the identification Y = 2σ to match the super-Schwarzian field σ in (5.20).
The required building blocks of our story are readily available in the literature. N = 2 Liouville primary vertex operators in the NS sector are of the form (5.25), whereas Liouville states |P, Q⟩ with charge Q and Liouville momentum P have weight (5.26). Footnote 15: Two convention schemes exist; we follow that of [66]. To go from the conventions of [67] to those of [66], one needs to set b² → 2b² and 2P² → P².
The NS character for a primary with Liouville momentum P = 2bk and U(1) charge Q is given by (5.27) in the large-τ₂ limit. The ZZ-brane wavefunction is determined by the modular S-matrix. The total vacuum character then has a characteristic small-T behavior, from which the density of states is identified. The lack of a ∼ 1/√E divergence as E → 0 is an indication of the lack of supersymmetry [68].
Inserting one vertex operator (5.25) in the ZZ-cylinder amplitude, we can compute the amplitude explicitly using the explicit ZZ-brane wavefunction. The minisuperspace limit of bulk N = 2 Liouville theory removes all fermions, and the result is a Schrödinger equation with energy E. In the basic integral we need to compute, the Y-integral just gives δ(Q − q₁ + q₂), and the φ-integral is the same as in bosonic Liouville [27]. Shifting the energy variables by the charge then leads to an expression in which the energy variables E₁ and E₂ are only the energies of the Schwarzian subsystem, not the total energy. Factorization is now manifest, and the q-integral indeed agrees with (5.18). One can write a Feynman rule decomposition of a general correlator, as done in [27]. In the two-point correlator, for instance, each line also carries a conserved charge, next to the Schwarzian SL(2, R) labels.
Partition function
The vacuum character for SU(2)_k on a cylinder of circumference T and length π transforms under an S-transformation as in (5.39), which can be evaluated in the T → 0 limit using the conformal weights h_j = j(j+1)/(k+2). The second equality expresses the character in terms of the closed channel with length T̃ = 2π²/T. Keeping T(k + 2) = 4π²/β fixed, each term becomes (2j + 1)e^{−βC_j} with the Casimir C_j = j(j + 1). The analogue of the Schwarzian double scaling limit is here that the level k → +∞ as T → 0. The vacuum character (5.39) finally becomes (5.41), which, up to normalization constants, is a discrete quantum system with Hamiltonian equal to the Casimir, and with the dimension of the irreps as density of states: ρ(j, m) = dim j = 2j + 1. Note that the sum ranges over both integers and half-integers. As in the Schwarzian theory, the prefactor can be written in terms of a ground state entropy as e^{S₀}, and requires regularization by taking finite k. In this case, the prefactor is just S₀₀, which goes to zero as k → ∞. This prefactor will cancel in correlation functions and is hence irrelevant for our computations; we drop it from here on. At low temperatures, only the vacuum contributes and Z → 1. At high temperatures, the sum can be replaced by an integral and Z → 2√π/β^{3/2}. Alternatively, the expression (5.41) is readily Poisson-resummed.
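Both limits can be checked directly from a truncated version of the sum (5.41); this numerical sketch sums over integer and half-integer j, with density of states (2j + 1)² per spin (the 2j + 1 per irrep times the 2j + 1 magnetic labels m).

```python
import numpy as np

# Z(beta) = sum over j = 0, 1/2, 1, ... of (2j+1)^2 exp(-beta j(j+1)).
def Z(beta, jmax=2000.0):
    j = np.arange(0.0, jmax + 0.5, 0.5)      # integers and half-integers
    return np.sum((2 * j + 1)**2 * np.exp(-beta * j * (j + 1)))

print(Z(50.0))                                   # low temperature: -> 1
beta = 1e-3
print(Z(beta), 2 * np.sqrt(np.pi) / beta**1.5)   # high temperature: -> 2 sqrt(pi)/beta^(3/2)
```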
For a general Kac-Moody algebra ĝ, it is well known that the S₀ⱼ elements of the modular S-matrix carry information about the quantum dimension d_j of the integrable representation j, and this reduces to the ordinary dimension in the classical (k → ∞) limit. It is instructive to recompute Z(β) from the closed channel as well.
Correlation Functions
Next we proceed by computing correlators of the SU(2) theory. Instead of evaluating configuration space integrals, we will compute the matrix element (5.12) directly using group theory, as follows. General operator insertions F(g) are all built from the field g(z), so we can organize them into tensor operators O_{J,MM̄} transforming in an irreducible representation of G. In the double scaling limit, one finds the corresponding bilocal operators. For a general operator O_{J,MM̄} transforming both in the holomorphic and antiholomorphic sector as a tensor operator, a doubled Wigner-Eckart theorem (5.46) holds, in terms of two Clebsch-Gordan (CG) coefficients and a reduced matrix element A^{j₁j₂}_J. Note that a reordering of the arguments of the CG coefficients has been performed, resulting in some j-dependent factors that are absorbed into the reduced matrix element; see appendix D for details. The appearance of two Clebsch-Gordan coefficients will be crucial in what follows.
To determine the reduced matrix element A^{j₁j₂}_J, one can evaluate this expression for any choice of the m's.
We will determine it below for SU(2), and conjecture that for a general group G with irreducible representations λ₁, λ₂ and λ it equals the expression (5.47) for fusing j₁ and j₂ into J; in the large-k limit, this is given explicitly in closed form.
On the other hand, the SU(2) Clebsch-Gordan coefficient for combining j₁ and j₂ into J is known in closed form; some details of these computations are given in appendix D. Taking the ratio identifies the reduced matrix element in (5.46), which indeed suggests the general form (5.47).
The matrix element in the double scaling limit (and with the normalization (5.44)) is then written by the Wigner-Eckart theorem as (5.53), in terms of the Clebsch-Gordan coefficients and the reduced matrix element. Only operators that are left-right symmetric can connect the two Ishibashi states, yielding the Kronecker delta. The sum over CG coefficients squared is just the fusion coefficient: it equals 1, by unitarity of the CG-matrix, and can connect only states satisfying the triangle inequality. The formula (5.53) is a classical limit of a formula recently derived by Cardy in [70] (derived there for diagonal minimal models), where the Ishibashi matrix element is written in closed form for a (diagonal) primary operator O_{J,MM̄}. The Euclidean propagators e^{−τH} and the first factors on the r.h.s. can be viewed as regularization artifacts of the Ishibashi states to render them normalizable. We conjecture this formula and its classical limit hold for any rational CFT. In any case, we have illustrated it explicitly for SU(2)_k, which is the relevant symmetry group for e.g. N = 4 super-Schwarzian systems (see e.g. [71]).
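The unitarity statement can be verified with exact Clebsch-Gordan coefficients; the small check below (our own) reproduces the SU(2) fusion coefficient N^J_{j₁j₂} as a sum of squared CG coefficients.

```python
from sympy.physics.quantum.cg import CG

# Sum |<j1 m1; J M | j2 m2>|^2 over the fused magnetic labels: 1 whenever
# (j1, J, j2) satisfies the triangle inequality, 0 otherwise.
def fusion(j1, J, j2, m2=0):
    total = 0
    m1 = -j1
    while m1 <= j1:
        M = m2 - m1                  # magnetic labels must add up to m2
        if abs(M) <= J:
            total += CG(j1, m1, J, M, j2, m2).doit()**2
        m1 += 1
    return total

print(fusion(1, 1, 1))   # 1: allowed by the triangle inequality
print(fusion(1, 1, 2))   # 1: allowed (1 + 1 >= 2)
print(fusion(1, 1, 3))   # 0: forbidden (3 > 1 + 1)
```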
The normalization of the intermediate operator O_{J,MM̄} has been fixed above by the 2d CFT state-operator correspondence in (5.48). There is, however, a more convenient normalization for the 1d theory, obtained by taking the operator and the SL(2)-field Φ_{J,MM̄} to be related as in (5.56) instead, which we now adopt.
Higher-point functions can now be deduced analogously, and we arrive at a Feynman rule decomposition of a general correlation function, where one sums over all intermediate representation labels using (5.57), and where the momentum amplitude A(j_i, m_i; τ_i) is computed using the Feynman rules. The vertex is essentially the Clebsch-Gordan coefficient, but it can be written more symmetrically in terms of the 3j-symbol. Combining everything, we arrive at an expression which, for the particular case of the two-point function, can be written fully in terms of the integer fusion coefficients N^J_{j₁j₂} (5.64). This immediate simplification only occurs for the two-point function. Just as for U(1), this correlator is finite as τ → 0. The qualitative shape of the correlator is similar to the U(1) case; some examples are drawn in Figure 13. Our choice of normalization (5.56) ensures that O_{J,M}(τ = 0) = 1. As a check, some simplifying limits can be taken. At zero temperature, C_{j₂} = 0, so j₂ = 0 and J = j₁, and the two-point function reduces to e^{−τC_J}. When J = 0 (insertion of the identity operator), j₁ = j₂ and one finds O_{0,0} = 1, confirming the overall normalization of (5.64).
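The resulting two-point function is easy to evaluate numerically. The expression below is our reading of the diagrammatic rules: lines of spin j carry (2j + 1)e^{−τC_j}, each allowed fusion contributes N^J_{j₁j₂} = 1, and an overall 1/(2J + 1) enforces O_J(τ = 0) = 1; that normalization factor is an assumption, chosen to be consistent with the limits quoted in the text.

```python
import numpy as np

def casimir(j):
    return j * (j + 1)

def N(j1, J, j2):
    # SU(2) fusion: triangle inequality plus integer total spin sum
    return 1.0 if (abs(j1 - J) <= j2 <= j1 + J
                   and float(j1 + J + j2).is_integer()) else 0.0

def two_point(tau, beta, J, jmax=60.0):
    js = np.arange(0.0, jmax + 0.5, 0.5)
    Z = sum((2 * j + 1)**2 * np.exp(-beta * casimir(j)) for j in js)
    G = sum((2 * j1 + 1) * (2 * j2 + 1) * N(j1, J, j2)
            * np.exp(-(beta - tau) * casimir(j1) - tau * casimir(j2))
            for j1 in js for j2 in js)
    return G / ((2 * J + 1) * Z)

beta, J = 40.0, 1.0
print(two_point(0.0, beta, J))                              # -> 1 by normalization
print(two_point(2.0, beta, J), np.exp(-2.0 * casimir(J)))   # near zero-T: e^{-tau C_J}
```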
The partition function Z(β) itself, (5.41), is also directly computed using the Feynman diagram decomposition. The time-ordered four-point function is drawn and evaluated analogously. Note that as β → ∞, this four-point function factorizes into two zero-temperature two-point functions, coming from the clustering principle, and depends on only two independent time differences, just as happens in the Schwarzian case [27].
This construction is immediately generalized to arbitrary compact groups G, and leads to the rules as given in section 1.
The braiding and fusion matrices, which are given by q-deformed 6j-symbols of the group G [72], become the classical 6j-symbols of the group G. As emphasized for the Schwarzian case in [27], this quantity is used to swap the operator ordering and reach specific out-of-time-ordered (OTO) correlators of interest, dual to shockwave interactions in the gravitational case [73]. For the Schwarzian theory, we find the precise semi-classical (large C) shockwave expressions of [14,23] starting from the exact OTO correlators in [74]. We leave a more detailed discussion to future work.
Concluding remarks
In this work, we presented more evidence and extensions to the link between 2d Liouville theory and the 1d Schwarzian theory. We believe this is the most natural way to look at the Schwarzian theory. The first half of this paper focussed on the Liouville path integral directly, where we emphasized the relevance of the parametrization of Gervais and Neveu in this context. We further extended the AdS₂ argument for preferred coordinate frames of [21,23,24] to the case of gauge theories and preferred gauge transformations. In the second half of this work, we demonstrated that the Schwarzian limit is only a special (irrational) case of the simpler case of rational compact models. All of these geometric theories have the property that the Hamiltonian, Lagrangian and Casimir coincide, and that local operators in 2d CFT become bilocal operators in 1d QM in a double-scaling limit. We produced correlation functions from the 2d WZW perspective, although our analysis was not entirely rigorous, as we used the generalization of the prescription of [27]. It would be an improvement to complement this with a path-integral analysis as in section 2 for the rational theories as well, including the measure in the path integral. This is left to future work. Nonetheless, we deduced expressions for time-ordered correlators and provided Feynman rules. Out-of-time-ordered correlators can also be studied and require introducing 6j-symbols to swap internal lines in diagrams. It would be particularly interesting to link this to results on OTO-correlators in rational 2d CFT, as in e.g. [75]. These theories also seem to be related to group field theories, utilized in the spinfoam formulation of LQG, which in turn seem to be related to the tensor models of e.g. [76,77].

A very interesting extension to study deeper would be to understand N = 2 Liouville theory in the NS sector, which allows one to connect to the 1d N = 2 super-Schwarzian theory. The latter contains non-trivial interactions between the Y-boson and the Liouville field φ itself. However, technical obstructions appear to be present when analyzing the minisuperspace regime and performing the φ-integrals directly in coordinate space. We hope to come back to this problem in the future. The structure present in the rational theories suggests the Schwarzian three-point vertex γ(k₁, k₂) should also be interpretable as a 3j-symbol of SL(2, R) with 1 discrete and 2 continuous representations. If this can be made more explicit, then the generalizations to the supersymmetric Schwarzian correlators can be conjectured to hold in terms of 3j- and 6j-symbols of OSp(1|2) and OSp(2|2) for N = 1 and N = 2 super-Schwarzian theories respectively, without resorting to the coordinate space evaluation of the Liouville integrals as mentioned above. A further question is whether anything can be learned for 4d gauge theories, as 2d boundary Liouville/Toda CFT was demonstrated in an AGT context in [78] to be linked to (a certain subclass of) these. Taking the double scaling limit should have an analogue in 4d gauge theories. One of the holographic successes of the Schwarzian theory is a correct prediction of the Bekenstein-Hawking entropy of the JT black holes [23,24]. Within the Liouville framework, it arises fully from the modular S-matrix as S_BH = log S_{p0}. On the other hand, it was found in [79] that the topological entanglement entropy in 2d irrational Virasoro CFT matches the Bekenstein-Hawking entropy for 3d BTZ black holes: S_BH = log S_{p₊0}S_{p₋0}.
It would be interesting to utilize the 2d/1d perspective to shed more light on some of the puzzles that appear in 3d gravity and its relation to 2d Liouville dynamics.

The classical solution corresponding to these situations involves the parameter θ, where one sets θ = n ∈ ℕ to find the ZZ₁,ₙ brane again. Either of these alternative boundary conditions can be absorbed back into the action by rescaling f → θf. The only effect is a change 2π/β → 2πθ/β in (2.15) and in the Hamiltonian in terms of f. After doing this, the field f is again a circle diffeomorphism as before.
In all of these cases, we can make the link between the Liouville action in (2.8) and the geometric action of Alekseev and Shatashvili [47,48] more explicit, as follows. The π_φφ̇ term in (2.8) is precisely the canonical 1-form α integrated over time, with α = ∫₀^π dσ π δφ and ω = dα. Given the symplectic 2-form ω, and ignoring global issues, α is determined only up to an exact form df, which integrates to zero as we take periodic boundary conditions in time. Explicitly, and after doubling, the geometric action takes the Alekseev-Shatashvili form, with ω = dα given by equation (2.15), as can be explicitly checked, and with the coadjoint orbit labeled by the FZZT brane parameter θ. This result demonstrates the equivalence of Liouville between branes and the coadjoint orbit action for all different orbits. This also followed directly since both of these evaluate to the same Virasoro character, but it is reassuring to see it directly from the path integral.
Finally taking the Schwarzian limit, we need b → 0 with an appropriate double scaling. As discussed in the main text, the above geometric action (the pq-term in the Lagrangian) disappears in this limit, and only the Hamiltonian density (the Schwarzian derivative) remains. As this is the generator of a U(1) symmetry, Stanford and Witten applied the Duistermaat-Heckman theorem to prove the one-loop exactness of the resulting 1d partition function [29]. This one-loop exactness fails for correlation functions, however, and one has to resort to other methods, as given in this work. When changing the branes, the resulting 1d theories are all pathological as thermal systems, except the ZZ-ZZ system studied here. The two sectors only interact through the dynamical time variable f(t). As a sanity check, one can verify the matter equations of motion directly.
Understanding Retention in Pre-Exposure Prophylaxis Care in the South: Insights from an Academic HIV Prevention Clinic
HIV pre-exposure prophylaxis (PrEP) is poorly utilized in the southern United States. We examined PrEP retention in care and sexually transmitted infections (STIs) through a retrospective review of the Duke University PrEP Clinic from January 1, 2015 to October 15, 2019. We evaluated short-term (3 months), long-term (additional 8–12 months), and longitudinal retention in care in our clinic. Adjusted odds ratios (aOR) were generated to explore demographics associated with retention. Kaplan–Meier curves were generated to view retention longitudinally. STIs were examined at baseline (1 year before the initial PrEP visit) and while retained in care. Of a total of 255 patients, 88% were men, 37% were black, and 73% were men who have sex with men (MSM). Short- and long-term retention in care were met by 130/237 (55%) and 80/217 (37%) patients, respectively. MSM were more likely to be retained in the short term (aOR = 5.22, 95% confidence interval [CI] = 1.57–17.32). Self-referred patients were more likely to be retained in the long term (aOR = 2.18, 95% CI = 1.12–4.23). Uninsured patients were less likely to be retained in the long term (aOR = 0.32, 95% CI = 0.11–0.91). STI diagnoses included 42 infections at baseline and 69 infections during follow-up. An STI diagnosed while in PrEP care was associated with longer retention in care over time. Patients discontinue PrEP care over time, and STIs were frequently encountered. Additional studies are needed to determine the best way to retain patients in HIV preventative care.
Introduction
HIV pre-exposure prophylaxis (PrEP), given as once-daily emtricitabine/tenofovir disoproxil fumarate or emtricitabine/tenofovir alafenamide, is one of the most effective tools in the prevention of HIV acquisition. [1][2][3][4] The Centers for Disease Control (CDC) recommends that all patients receiving PrEP be seen every 3 months in follow-up to ensure they are HIV negative, to assess medication adherence and side effects, and to conduct sexually transmitted infection (STI) testing for sexually active persons with symptoms and for asymptomatic men who have sex with men (MSM) at high risk. 5 However, a limitation of PrEP is that it must be taken daily with frequent outpatient follow-up.
Since the rollout of PrEP programs nationally, there has been growing interest in persistence in care at follow-up visits. [6][7][8] Previous large studies in the United States have shown a wide range of retention rates in PrEP care. [9][10][11][12] Few studies have focused on PrEP retention in the southern United States, which accounted for more than half of new HIV diagnoses in 2016 but was also the region with the lowest PrEP use nationally. 13,14 Early reports on the rate and timing of disengagement from PrEP care in the south indicate that retention is worse than in other regions, ranging between 32% and 63% over time. 8,[15][16][17] More focus is needed on persistence in care and predictors of disengagement to better design interventions.
In addition to HIV preventative services, PrEP clinics also provide STI care. STIs have been increasing in the United States, with >2.5 million cases of gonorrhea, chlamydia, and syphilis reported by the CDC in 2019. This represents an increase in the rates of chlamydia, gonorrhea, and syphilis of 19%, 56%, and 74%, respectively, since 2015. 18 Southern states are frequently reported to have the highest incidence of STIs, with North Carolina showing increased rates of STIs from 2018 to 2019. 19,20 STI rates are also elevated among adolescents, an age group at high risk for acquiring HIV. 21 Prior studies have shown an association between increased STI incidence and PrEP use. 22,23 Yet it is unclear how prevalent this association is in the southern United States.
In this study, we report on retention in care and STIs encountered in a large southern academic PrEP clinic in Durham, North Carolina, over a 4-year time period. Our aim was to describe short- and long-term retention in care and patient characteristics associated with retention in care. A secondary aim was to evaluate incident STI diagnoses, which serve as markers of HIV exposure, while in care.
Patient population
Data were reviewed from the Duke University PrEP Clinic, which was established in 2015. This academic hospital-associated clinic provides PrEP services for adult patients who are at high risk for HIV from a wide area in central North Carolina, which includes both urban and rural counties. The clinic is staffed by several providers with pharmacy and social work support who assist in acquiring access to the medication and provide psychosocial counseling. All patients who were seen within the Duke PrEP Clinic since inception in 2015 were eligible for inclusion in the study. Patients who had a diagnosis of HIV at time of first encounter in the Duke PrEP Clinic were excluded.
Data collection
We conducted a retrospective chart review of eligible patients from January 1, 2015 to October 15, 2019. Clinical data, including age, race, ethnicity, gender, sexual practice, insurance status, and referral source, were obtained from the Duke institutional data warehouse and through manual chart review.
Retention in care was determined by manual review of completed in-person follow-up encounters after the initial visit. Short-term retention in care was defined as completion of a 3-month follow-up as per CDC guidelines. 5 Long-term retention was defined as completion of a 3-month visit and an additional visit between 8 and 12 months after the initial encounter. Discontinuation of care was defined as a lack of a follow-up visit for 6 months since the last encounter. Patients were excluded from further analysis after their first discontinuation of clinic care. These definitions were chosen to reflect a real-world experience in our clinic.
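For concreteness, these visit-based definitions can be expressed as a small function; this is an illustrative sketch, not the study's actual code, and the ±45-day window around the 3-month visit is our assumption.

```python
from datetime import date, timedelta

# short-term = a completed visit near 3 months; long-term = short-term plus a
# visit 8-12 months after the initial encounter; discontinuation = a gap of
# more than 6 months between consecutive completed visits.
def retention_flags(initial_visit, follow_ups, window_days=45):
    follow_ups = sorted(follow_ups)
    def visit_between(lo_days, hi_days):
        return any(initial_visit + timedelta(days=lo_days)
                   <= v <= initial_visit + timedelta(days=hi_days)
                   for v in follow_ups)
    short_term = visit_between(90 - window_days, 90 + window_days)
    long_term = short_term and visit_between(8 * 30, 12 * 30)
    visits = [initial_visit] + follow_ups
    gaps = [(b - a).days for a, b in zip(visits, visits[1:])]
    discontinued = any(g > 182 for g in gaps)   # first gap of > 6 months
    return short_term, long_term, discontinued

print(retention_flags(date(2018, 1, 10),
                      [date(2018, 4, 5), date(2018, 9, 20)]))  # (True, True, False)
```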
STI diagnoses were extracted from the medical records and included syphilis serologies, genital and extragenital chlamydia, and gonorrhea nucleic acid amplification testing, hepatitis B serologies, and diagnosis of giardiasis. Baseline STI was defined as a diagnosis at or within 1 year before the initial PrEP visit. STI diagnoses while on PrEP were any subsequent diagnosis while retained in care regardless of diagnosis location. We considered empiric treatment for an STI the same as a new diagnosis even if laboratory testing was not performed. If a patient had two infecting organisms diagnosed at the same time, they were considered to have two new incident STIs.
Data analysis
Outcomes of interest were short- and long-term retention in PrEP care and STI acquisition while on PrEP. Multivariable logistic regression was conducted to explore associations between patient-level determinants and outcomes of interest (SAS 9.4, Cary, NC). Kaplan-Meier curves were generated, and the log-rank test was used to compare longitudinal retention in PrEP among different patient groups (R 3.6.0). The Kaplan-Meier curves used discontinuation from clinic care (no PrEP Clinic visit for 6 months) as the event of interest.
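A Python sketch of an equivalent analysis pipeline is shown below (the study itself used SAS 9.4 and R 3.6.0); the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
import statsmodels.api as sm

# Hypothetical extract of the chart-review data; column names are illustrative.
df = pd.read_csv("prep_clinic.csv")

# Multivariable logistic regression: exponentiated coefficients give the aORs.
X = sm.add_constant(df[["msm", "self_referred", "uninsured", "black", "age"]])
print(np.exp(sm.Logit(df["retained_3mo"], X).fit().params))

# Kaplan-Meier curves for time to first discontinuation, stratified by
# insurance status, plus a log-rank comparison of the two groups.
kmf = KaplanMeierFitter()
ins = df["uninsured"] == 0
ax = kmf.fit(df.loc[ins, "months_to_discontinuation"],
             event_observed=df.loc[ins, "discontinued"],
             label="insured").plot_survival_function()
kmf.fit(df.loc[~ins, "months_to_discontinuation"],
        event_observed=df.loc[~ins, "discontinued"],
        label="uninsured").plot_survival_function(ax=ax)

res = logrank_test(df.loc[ins, "months_to_discontinuation"],
                   df.loc[~ins, "months_to_discontinuation"],
                   event_observed_A=df.loc[ins, "discontinued"],
                   event_observed_B=df.loc[~ins, "discontinued"])
print(res.p_value)  # compare with the reported p = .046 for insurance status
```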
Institutional approval
This study was approved by the Duke University Institutional Review Board (IRB Protocol 00103503).
Results

Overall, our clinic patients frequently discontinued care over time, with nearly all having their initial discontinuation of care within 1.5 years of starting PrEP (Fig. 1). Short- and long-term retention in care were met by 130/237 (55%) and 80/217 (37%) patients, respectively. MSM were more likely to be retained in the short term (adjusted odds ratio [aOR] = 5.22, 95% confidence interval [CI] = 1.57-17.32). Self-referred patients were more likely to be retained in the long term (aOR = 2.18, 95% CI = 1.12-4.23), whereas patients without insurance were less likely to be retained in the long term (aOR = 0.32, 95% CI = 0.11-0.91) (Table 2).
We also examined retention until first discontinuation of care longitudinally in our clinic using Kaplan-Meier curves and the log-rank test (Figs. 2 and 3). Male patients remained in care longer than female patients (p = .049), but there was no statistical difference by age (greater or less than 35 years) (p = .15). Nonblack patients remained in care longer than black patients (p = .025), but there was no difference by ethnicity (Hispanic/Latino vs. non-Hispanic/non-Latino) (p = .84). Patients with insurance remained in care longer than uninsured patients (p = .046). Retention was similar across different referral sources, including self-referral and primary care physicians, except for community organizations, for which retention was lower (p = .0037).
STI diagnoses were made in 30 (12%) patients at baseline, for a total of 42 unique infections. Of these baseline infections, 36% were syphilis, 32% were gonorrhea, 25% were chlamydia, 5% were Giardia and hepatitis B, and 2% were incident HIV infection. After the initial PrEP visit, 44 (17%) patients had incident STIs detected, for a total of 69 unique infections consisting of 6% syphilis, 38% gonorrhea, 55% chlamydia, and 1% Giardia and hepatitis B combined. Two new HIV diagnoses were made at the initial PrEP encounter before starting PrEP medication.
No new HIV diagnoses were made during follow-up visits. Patients with an STI at baseline had no significant difference in persistence in care over time compared with patients without baseline STIs (p = .069). However, patients diagnosed with STIs during PrEP follow-up remained in care longer than those without a new STI diagnosis (p = .0006). There was no difference in retention between persons with or without an HIV-positive sexual partner (p = .89). Longitudinally, there was no significant difference in persistence in care between MSM and non-MSM when followed to first discontinuation of care (p = .086).
Discussion
We present 4 years of data from a large academic PrEP clinic in the south. We focused on short-term (3 months) and long-term (an additional visit between 8 and 12 months) retention to reflect the CDC guidelines, previously published timeframes, and our clinical experience. 5,23-28 Over time, patient retention in care declined, with approximately half completing a 3-month follow-up visit and just more than one-third completing a longer-term visit 8-12 months from the initial visit. This is consistent with what has been previously reported in other southern PrEP clinics. 8
PrEP care is generally worse in the south when compared with the Midwest and western United States. 6,27,[29][30][31][32] In our analysis, MSM status and self-referral were predictive of persistence in care. Both groups may be more motivated to stay in care, given the increased perceived risk for HIV and/or greater awareness of and willingness to take PrEP among MSM, and the fact that self-referred patients had sought out care on their own rather than being referred by another provider or agency. 33,34 When followed over time, nonblack patients and male patients were retained longer than black and female patients, respectively. Uninsured patients were less likely to remain in PrEP care in the long term. In our PrEP clinic, uninsured patients are eligible for financial assistance to alleviate the clinical care and laboratory costs associated with PrEP, and medication is obtained through pharmaceutical-sponsored drug assistance programs.
Therefore, uninsured patients likely face other socioeconomic barriers to remaining in PrEP care, such as transportation or stable housing. [35][36][37] It is notable that patients referred from community-based organizations appeared to be less apt to be retained in care than those referred from medical providers, dating apps, insurance providers, peers, self-referral, health departments, and unknown referral sources. Community-based organizations are key partners to our clinic and often serve populations who are difficult to reach but would greatly benefit from HIV prevention efforts. Further collaboration with these organizations is needed to help increase PrEP use and retention.
Interestingly, we found no difference in retention between patients with and without an HIV-positive sexual partner. Having an HIV-positive sexual partner who is not virally suppressed is one of the key indications for PrEP use. The lack of a difference in retention among this group may reflect awareness of the recent U = U campaign, which promotes that an undetectable viral load means HIV is untransmittable. 38 Perhaps with this recent breakthrough, persons on PrEP may reassess their risk for HIV acquisition and decide to stop the medication.
Finally, women were shown to fall out of care sooner than men. Although this may be due in part to the lower number of women in our clinic, it may also be an indication that we are not meeting the needs of female PrEP users. Women are a key group who need access to PrEP, as they accounted for 6,700 new HIV infections nationally, yet only 7% of eligible women were receiving PrEP in 2018. 39,40 Similarly, we had only six transgender women in clinic during the time of our study and were unable to comment on trends in PrEP retention in this patient group. However, transgender women are an important group at risk for HIV acquisition and in need of PrEP services. [41][42][43] Further efforts are needed to engage this population in HIV preventative care.
Patients with baseline STI diagnoses were less likely to remain in care. Although there are many possible reasons for PrEP disengagement, having an STI has been identified as a reason for PrEP discontinuation in other cohorts. 44 However, it is interesting that in our clinic an STI diagnosis made while on PrEP was associated with retention in care. A similar finding was reported in an earlier retrospective study that included a portion of our clinic. 16 Retention in PrEP care after an STI diagnosis is possibly due to patients having comfort with a longitudinal sexual health provider and recognizing their high-risk sexual behavior for HIV acquisition. It remains unclear if the prevalence of STIs truly reflects increased rates of infection among PrEP patients or simply reflects frequent testing of a high-risk population. Our findings support the need for frequent follow-up visits with STI testing.
Limitations of our study include having data only from a single clinic within a large academic medical center. Therefore, we cannot account for experiences at health departments, private clinics, or community-based clinics in our region that provide PrEP services and may have differing retention rates. Similarly, we cannot account for STI diagnoses occurring outside of our medical system, which may result in underreporting of STI rates in our clinic. Another limitation is that some PrEP users return to care after stopping therapy; our study did not include persons who returned to care after their first discontinuation of PrEP. In addition, some PrEP patients are referred back to their primary care providers after PrEP initiation, and others may have transferred care to providers outside of our clinic, resulting in a lower retention rate in our clinic.
Conclusions
Overall, our clinic patients discontinue PrEP care frequently in both the short and long term. We found that MSM were more likely to remain in care in the short term compared with non-MSM patients, whereas self-referred patients were more likely to remain in care in the long term compared with those referred from other sources. When followed over time, male patients, nonblack patients, insured patients, and patients with an STI diagnosed while on PrEP more often remained in care. Future studies will be needed to fully understand why patients discontinue PrEP care and to determine the best way to recruit, engage, and better retain patients in care. This is especially important in the southern United States, where improved PrEP use and retention are critically needed to combat the national HIV epidemic.
Disclaimer
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Asymptotic Expansion of Laplace-Fourier-Type Integrals
We study the asymptotic behaviour of integrals of the Laplace-Fourier type $P(k) = \int_\Omega\mathrm{e}^{-|k|^sf(x)}\mathrm{e}^{\mathrm{i} kx}\mathrm{d} x\;, $ with $k\in\mathbb{R}^d$ in $d\ge1$ dimensions, with $\Omega\subset\mathbb{R}^d$ and sufficiently well-behaved functions $f:\Omega\to\mathbb{R}$. Our main result is $ P(k) \sim \frac{\mathrm{e}^{-|k|^sf(0)}}{|k|^{sd/2}}\sqrt{\frac{(2\pi)^d}{\det A}} \exp\left(-\frac{k^\top A^{-1}k}{2|k|^s}\right) $ for $|k|\to\infty$, where $A$ is the Hessian matrix of the function $f$ at its critical point, assumed to be at $x_0 = 0$. In one dimension, the Hessian is replaced by the second derivative, $A = f''(0)$. We also show that the integration domain $\Omega$ can be extended to $\mathbb{R}^d$ without changing the asymptotic behaviour.
Introduction
Integrals over rapidly oscillating integrands occur frequently in physics and are often difficult to handle numerically. Their asymptotic behaviour for large parameter values (e.g. large wave numbers) is interesting in view of estimates, studies of limiting behaviour, and testing numerical solutions. In this paper, we study the asymptotic behaviour of integrals of the form

$P(k) = \int_\Omega \mathrm{e}^{-|k|^s f(x)}\,\mathrm{e}^{\mathrm{i}kx}\,\mathrm{d}x \qquad (1)$

under fairly general assumptions for k → ∞, first in one, then in d dimensions. Specifically, we shall assume that the integral in (1) converges absolutely and that s ≥ α, with α to be defined below. Our main results, derived in Sect. 2 for d = 1 and in Sect. 3 for d > 1, will be:

1. In one dimension,

$P(k) \sim \frac{\mathrm{e}^{-k^s f(0)}}{k^{s/2}}\sqrt{\frac{2\pi}{f''(0)}}\,\exp\left(-\frac{k^{2-s}}{2 f''(0)}\right)\;, \qquad (2)$

where x_0 = 0 is a critical point of f.

2. In d > 1 dimensions,

$P(k) \sim \frac{\mathrm{e}^{-|k|^s f(0)}}{|k|^{sd/2}}\sqrt{\frac{(2\pi)^d}{\det A}}\,\exp\left(-\frac{k^\top A^{-1}k}{2|k|^s}\right)\;, \qquad (3)$

where A is the Hessian matrix of f at its critical point, assumed to be at x_0 = 0.
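A quick consistency check of (2) (our illustration, using the exactly solvable Gaussian case): for f(x) = x²/2 on Ω = ℝ with s = 2,

$P(k) = \int_{-\infty}^{\infty}\mathrm{e}^{-k^2x^2/2}\,\mathrm{e}^{\mathrm{i}kx}\,\mathrm{d}x = \frac{\sqrt{2\pi}}{k}\,\mathrm{e}^{-1/2}\;,$

which is exactly what (2) gives for f(0) = 0 and f''(0) = 1; for Gaussian f, the asymptotic formula is exact.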
With this result, we extend the existing literature in two ways: we derive the asymptotic behaviour of Laplace-Fourier-type integrals in one dimension and extend the result to an arbitrary number d of dimensions. In one dimension, detailed studies with a different focus exist [1][2][3]; to our knowledge, the generalization to d dimensions is new.
2 One-dimensional case
Erdélyi's theorem for Laplace integrals
In one dimension, we begin with Erdélyi's theorem [4][5][6] for Laplace integrals, which states:

Theorem 1 (Erdélyi). Let I(λ) be an integral of the form

$I(\lambda) = \int_a^b g(x)\,\mathrm{e}^{-\lambda f(x)}\,\mathrm{d}x\;, \qquad (4)$

where f(x) is a real function of the real variable x, while g(x) may be real or complex. Then, if

1. f(x) > f(a) for x ∈ (a, b), and for every δ > 0 the infimum of f(x) − f(a) on [a + δ, b) is positive;
2. f′(x) and g(x) are continuous in a neighbourhood of a, except possibly at a;
3. f and g admit asymptotic expansions

$f(x) \sim f(a) + \sum_{n=0}^{\infty} a_n (x-a)^{n+\alpha} \quad (5)\;, \qquad g(x) \sim \sum_{n=0}^{\infty} b_n (x-a)^{n+\beta-1} \quad (6)$

for x → a⁺, where α > 0 and Re β > 0, and the expansion of f can be term-wise differentiated; and
4. I(λ) converges absolutely for sufficiently large λ;

then the integral I(λ) has the asymptotic expansion

$I(\lambda) \sim \mathrm{e}^{-\lambda f(a)} \sum_{n=0}^{\infty} \Gamma(\nu)\,\frac{c_n}{\lambda^{\nu}} \qquad (8)$

for λ → ∞, where ν := (n + β)/α. The coefficients c_n can be expressed by the a_n and b_n as

$c_n = \frac{1}{\alpha\,a_0^{\nu}}\sum_{m=0}^{n}\frac{b_{n-m}}{m!}\,d_{m,n}\;, \qquad d_{m,n} := \lim_{x\to a^+}\frac{\mathrm{d}^m}{\mathrm{d}x^m}\left(\frac{f(x)-f(a)}{a_0\,(x-a)^{\alpha}}\right)^{-\nu}\;. \qquad (9)$

Beginning with this result, we insert the derivatives d_{m,n} into the coefficients c_n from (9) and write the asymptotic expansion of I(λ) from (8), for some positive integer M that is to be constrained later, as

$I(\lambda) \sim \mathrm{e}^{-\lambda f(a)}\sum_{m=0}^{M}\frac{1}{m!}\sum_{n=m}^{\infty}\frac{\Gamma(\nu)}{\alpha\,a_0^{\nu}}\,b_{n-m}\,d_{m,n}\,\lambda^{-\nu}\;, \qquad (10)$

where we have assumed that the inner sum over n exists for all 0 ≤ m ≤ M. To abbreviate notation, we introduce the symbols

$I_m(\lambda) := \frac{1}{m!}\sum_{n=m}^{\infty}\frac{\Gamma(\nu)}{\alpha\,a_0^{\nu}}\,b_{n-m}\,d_{m,n}\,\lambda^{-\nu} \qquad (12)$

for the sums over n remaining in (10). If the outer sum over m in (10) exists for M → ∞, we can write the asymptotic expansion of I(λ) as

$I(\lambda) \sim \mathrm{e}^{-\lambda f(a)}\sum_{m=0}^{\infty} I_m(\lambda)\;. \qquad (13)$

Note that this is indeed an asymptotic expansion, since the symbols I_m(λ) are an asymptotic sequence as λ → ∞ in ℝ⁺, denoted by {I_m}, because I_{m+1}(λ) = o(I_m(λ)) for all m ≥ 0; see [7, p. 10, definition 2]. The sum over n in the functions I_m(λ) defined in (12) can be carried out for some choices of α and β, for which we shall give examples below.
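A minimal illustration of Theorem 1 (our own check, not part of the original text): for f(x) = x and g(x) = x^{β−1} on (0, ∞), so that α = 1, a_0 = 1, b_0 = 1 and all higher a_n and b_n vanish, the integral can be evaluated exactly,

$I(\lambda) = \int_0^{\infty} x^{\beta-1}\,\mathrm{e}^{-\lambda x}\,\mathrm{d}x = \frac{\Gamma(\beta)}{\lambda^{\beta}}\;,$

which agrees with the expansion (8): here ν = n + β, c_0 = 1, and all higher coefficients c_n vanish, so the asymptotic series terminates after its leading term.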
Laplace-Fourier integrals in one dimension
We now wish to study the asymptotic behaviour of the Fourier integral

$I(\lambda, k) = \int_\Omega x^{\beta-1}\,\mathrm{e}^{-\lambda f(x)}\,\mathrm{e}^{\mathrm{i}kx}\,\mathrm{d}x \qquad (15)$

for λ → ∞, with Re β > 0 and the fixed parameter k > 0. For doing so, we apply Erdélyi's theorem, setting

$g(x) = x^{\beta-1}\,\mathrm{e}^{\mathrm{i}kx}\;. \qquad (16)$

Moreover, without loss of generality, we shift the origin such that a = 0, take the integral I(λ, k) over the open domain Ω = (0, b), and assume that f(x) satisfies the conditions of Erdélyi's theorem with a = 0. The coefficients b_n defined in (6) are given by

$b_n = \frac{(\mathrm{i}k)^n}{n!}\;, \qquad (17)$

and the symbols I_m defined in (12) become

$I_m(\lambda, k) = \frac{1}{m!}\sum_{n=m}^{\infty}\frac{\Gamma(\nu)}{\alpha\,a_0^{\nu}}\,\frac{(\mathrm{i}k)^{n-m}}{(n-m)!}\,d_{m,n}\,\lambda^{-\nu}\;. \qquad (18)$

We now show that, for fixed λ and k, the symbols I_m as in (18) converge for each finite m when α > 1.
Proof. For m > 0, the derivatives d_{m,n} as defined in (9) can be written as

$d_{m,n} = \sum_{p=1}^{m}\frac{\Gamma(\nu+p)}{\Gamma(\nu)}\,D_{m,p}\;, \qquad (19)$

where the D_{m,p} are finite coefficients that are independent of n. To see this, note that each of the m derivatives applied in (9) acts either on the parenthesis raised to the power of −ν, or on the sum in the parenthesis. The former derivatives create the Gamma functions in front, the latter the coefficients D_{m,p}. As the limit x → 0 is taken, the expression in the parenthesis becomes unity, so raising it to any power becomes independent of the exponent. Thus, for each m there exists a positive constant K_m > 0 such that the estimate

$|d_{m,n}| \le K_m\,\frac{\Gamma(\nu+m)}{\Gamma(\nu)} \qquad (20)$

holds. Using (17), (18) and (20), the symbols I_m are then bounded by (21). For α > 1, the sum in eq. (21) is absolutely convergent, which can be seen by d'Alembert's ratio test.
We consider the limit of the ratio of two consecutive terms and find that it vanishes, where we made use of Gautschi's inequality, $x^{1-s} < \Gamma(x+1)/\Gamma(x+s) < (x+1)^{1-s}$ for 0 < s < 1 and x > 0 [9]. Thus, the sums (21) are finite for m > 0 and α > 1. Since d_{0,n} = 1 for all n, an analogous proof shows |I_0(λ, k)| < ∞. For α = 1, the sum in (21) can be evaluated in closed form in terms of the generalized hypergeometric function pFq; the last step is valid for k/(a_0λ) < 1. The case m = 0 is easier because d_{0,n} = 1 for all n, but otherwise proceeds analogously.

Now that we have shown that (13), together with the symbols I_m(λ, k), is a valid asymptotic expansion for Laplace-Fourier integrals I(λ, k) as λ → ∞, we compare our expansion (13) to the expansion (8) from Erdélyi's theorem. Since the symbols I_m originate from a resummation of Erdélyi's series, they contain the coefficients b_n to infinite order, and thus the total information about the phase function e^{ikx}. Considering now Laplace-Fourier integrals with related exponential factors, λ = k^s, implies P(k) = I(k^s, k), and the symbols I_m are specialized to I_m(k^s, k) (24). For s ≥ α we find that I_{m+1}(k^s, k) = o(I_m(k^s, k)) as k → ∞ (25), showing that the symbols I_m(k^s, k) form an asymptotic sequence as k → ∞ in ℝ⁺. Comparing this sequence to the terms in the expansion (8) from Erdélyi's theorem shows that the terms in our asymptotic expansion (13) fall off much faster when s > α. For s = α, the sequence from Erdélyi's theorem even fails to be asymptotic.
Examples
For convenience, we introduce

$x_\alpha := \frac{k^2}{\alpha^2\,(a_0\lambda)^{2/\alpha}}\;. \qquad (26)$

Note that for λ = k^s and s = α, x_α is a positive constant, while for s > α, x_α > 0 approaches 0 in the limit k → ∞.
One-sided domain Ω = (0, b)
On the one-sided domain Ω, we can then directly apply Erdélyi's theorem by inserting the b_n from (17) into (12). Since d_{0,n} = 1 for all n, we then have for m = 0

$I_0(\lambda,k) = \frac{1}{\alpha\,(a_0\lambda)^{\beta/\alpha}}\;{}_1\Psi_0\!\left[\left(\tfrac{\beta}{\alpha},\tfrac{1}{\alpha}\right);\, \frac{\mathrm{i}k}{(a_0\lambda)^{1/\alpha}}\right]\;, \qquad (27)$

where pΨq denotes the Fox-Wright function. For this sum to converge, we must require α ≥ 1 and thus ν ≤ n + β. For α = 1 and arbitrary β > 0, we arrive at

$I_0(\lambda,k) = \frac{\Gamma(\beta)}{(a_0\lambda - \mathrm{i}k)^{\beta}}\;. \qquad (28)$
Two-sided domain
On the two-sided domain Ω, we assume that Erdélyi's theorem applies to I(λ, k) separately on the intervals (0, b_1) with x → −x and (0, b_2). Furthermore, we restrict α to being an even integer because otherwise f(x) could not have an extremum at the origin. In this case, it is advantageous to note that b_n → (−1)^{β−1+n} b_n and d_{m,n} → (−1)^m d_{m,n} for x → −x. Then, after adding the results for the intervals (−b_1, 0) and (0, b_2), only those summands in I_m(λ, k) from (12) remain for integer β which have even n if β = 2l − 1 is odd, or which have odd n if β = 2l is even, where l ∈ {1, 2, . . .}. We use this to write I_m(λ, k) as twice the sum in (12) restricted to these values of n (29), with the factor of 2 accounting for the results on both intervals. Some special cases for the leading term I_0(λ) may be worth noting: (i) For α = 2 and integer β, the asymptotic expansion of f(x) begins with a term quadratic in x, and the envelope of the Fourier factor in (16) has an integer power of x; in this case I_0 can be expressed through a generalised hypergeometric function (30). (ii) For α = 4 and β = 1, the asymptotic expansion of f(x) begins with a term quartic in x, and the Fourier factor has a constant amplitude in x; the result is again given in terms of generalised hypergeometric functions (31). In the expressions (30) and (31), pFq is the generalised hypergeometric function.

In view of applications in physics, we investigate in more detail situations in which f(x) has an asymptotic expansion beginning with a term quadratic in x, and the amplitude of the Fourier factor is constant. Then, α = 2 and β = 1. In this case, we return directly to the definition (12) for the summands I_m(λ, k) in the asymptotic expansion (13) of the integral I(λ, k). Since β is odd now, we can restrict the sum over n to even n, and since the integration domain is two-sided, we need to double the result (32). Carrying out the sum over even n = 2j with the help of the identity

$\Gamma\!\left(j+\tfrac{1}{2}\right) = \sqrt{\pi}\,\frac{(2j)!}{4^{j}\,j!}\;, \qquad (33)$

we find for the leading summand

$I_0(\lambda,k) = \sqrt{\frac{\pi}{a_0\lambda}}\,\mathrm{e}^{-x_2}\;, \qquad x_2 = \frac{k^2}{4a_0\lambda}\;.$

We finally specialise this result to integrals of the form

$P(k) = \int_\Omega \mathrm{e}^{-k^s f(x)}\,\mathrm{e}^{\mathrm{i}kx}\,\mathrm{d}x$

with s ≥ 2 and f satisfying the previously stated conditions. Substituting λ → k^s in (32) then immediately leads to the asymptotic series (34) for k → ∞, with x_2 specialized to x_2 = k^{2−s}/(4a_0). Note that the terms A and B in (34) are both of order O(k^{s−2}) here. The result (2) follows from (34) by noting that a_0 = f''(0)/2.
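Beyond the Gaussian case, the specialised result can be checked numerically. The following script (our own sketch, not part of the paper; the function names and the sample f are our choices) compares direct quadrature of P(k) against the one-dimensional asymptotic formula for f(x) = x²/2 + x⁴ with s = 3:

```python
import numpy as np
from scipy.integrate import quad

def P_numeric(k, s, f, half_width=1.0):
    # Real part of P(k); the imaginary part vanishes for even f on a
    # symmetric domain. Contributions away from the minimum are
    # exponentially suppressed, so a finite window around x = 0 suffices.
    integrand = lambda x: np.exp(-k**s * f(x)) * np.cos(k * x)
    val, _ = quad(integrand, -half_width, half_width,
                  points=[0.0], limit=400)
    return val

def P_asymptotic(k, s, f0, fpp0):
    # One-dimensional main result (2), with A replaced by f''(0).
    return (np.exp(-k**s * f0) / k**(s / 2)
            * np.sqrt(2.0 * np.pi / fpp0)
            * np.exp(-k**(2.0 - s) / (2.0 * fpp0)))

f = lambda x: 0.5 * x**2 + x**4   # minimum at 0, f(0) = 0, f''(0) = 1
for k in (4.0, 8.0, 16.0):
    num = P_numeric(k, s=3, f=f)
    asy = P_asymptotic(k, s=3, f0=0.0, fpp0=1.0)
    print(f"k = {k:5.1f}   quadrature = {num:.6e}   asymptotic = {asy:.6e}")
```

The relative deviation between the two columns should shrink as k grows, reflecting the asymptotic nature of the expansion.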
d-dimensional case
Before we turn to the case of an arbitrary number d of dimensions, we need to prepare some concepts and notation. Here and below, D denotes the derivative of a function with respect to its arguments.
Preliminary remarks
Let Ω ⊂ ℝ^d be an open subset of ℝ^d and x_0 ∈ Ω a non-degenerate critical point of a smooth function f : Ω → ℝ with f ∈ C^∞. Then, by Morse's lemma, neighbourhoods U, V of the points y = 0 and x = x_0 and a diffeomorphism h : U → V exist such that

$(f \circ h)(y) = f(x_0) + \frac{1}{2}\sum_{j=1}^{d} \mu_j\,y_j^2\;,$

with suitable constants µ_j. With the same chart h : U → V, the Jacobian H = Dh and a smooth function g : Ω → ℝ with g ∈ C^∞, we define the function G : U → ℝ by G(y) = (g ∘ h)(y) det H(y). We further introduce multi-indices α = (α_1, α_2, . . . , α_d) and agree on the notation

$y^\alpha := y_1^{\alpha_1}\cdots y_d^{\alpha_d}\;, \qquad \mu^\alpha := \mu_1^{\alpha_1}\cdots \mu_d^{\alpha_d}\;, \qquad |\alpha| := \sum_{j=1}^{d}\alpha_j\;,$

where µ = (µ_1, µ_2, . . . , µ_d) is a d-dimensional real-valued vector. We finally define the symbol δ(α), which equals unity if all α_j are even and vanishes otherwise, and the derivative operator

$D^\alpha := \frac{\partial^{|\alpha|}}{\partial y_1^{\alpha_1}\cdots \partial y_d^{\alpha_d}}\;.$

With these preparations, we can now continue with the following theorem for the asymptotic behaviour of a class of integrals on Ω [7].

Since the factor δ(α) ensures that only even α_j can occur in the sum, it is convenient to define a multi-index n with elements n_j = α_j/2. Inserting (33) into (52) and noting that ∏_j µ_j = det A, we arrive at a closed expression that allows us to write (47) compactly. For integrals of the type

$P(k) = \int_\Omega \mathrm{e}^{-|k|^s f(x)}\,\mathrm{e}^{\mathrm{i}k\cdot x}\,\mathrm{d}x$

with s ≥ 2, we substitute again λ → |k|^s in (50). The main result then follows immediately from (55), which proves (3).
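The same consistency check as in one dimension carries over (our illustration, assuming the Gaussian case): for f(x) = ½ x^⊤Ax with positive-definite A and s = 2, the standard Gaussian integral gives

$P(k) = \int_{\mathbb{R}^d}\mathrm{e}^{-\frac{|k|^2}{2}x^\top A x}\,\mathrm{e}^{\mathrm{i}k\cdot x}\,\mathrm{d}x = \frac{1}{|k|^{d}}\sqrt{\frac{(2\pi)^d}{\det A}}\,\exp\!\left(-\frac{k^\top A^{-1}k}{2|k|^2}\right)\;,$

which coincides with (3) for f(0) = 0 and s = 2 (so that sd/2 = d); the asymptotic result is again exact in this case.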
Extension to ℝ^d
We finally wish to extend the integration domain Ω to ℝ^d. We begin with the conditions imposed in Theorem 2, but relax conditions (1) and (2) and add as a further condition that

$\sigma(\varepsilon) := \inf_{|x| \ge \varepsilon}\,\big(f(x) - f(0)\big) > 0$

for all ε > 0.

To prove this statement, we consider the integral P̄(k) over ℝ^d for k > 0; in the estimates below, f̂ denotes the Fourier transform of f. We further split the integration domain into Ω and its complement Ω̄, so that P̄(k) = P_Ω(k) + P_Ω̄(k). For the asymptotic expansion of P_Ω(k), we can estimate the contribution away from the critical point in terms of the volume V_Ω of the domain Ω. Since the second term on the right-hand side of (62) is exponentially suppressed compared to the asymptotic expansion (57) of the first term (recall that f(0) < 0), we can conclude that P_Ω(k) follows the expansion (57). For P_Ω̄(k), we estimate its modulus and find that this contribution is again exponentially suppressed by the factor exp(−|k|^s σ(ε)) compared to the asymptotic expansion (57). Since f̂ ∈ L²(ℝ^d), we can conclude the claim.
Summary
We have shown here that Laplace-Fourier-type integrals of the form

$P(k) = \int_\Omega \mathrm{e}^{-|k|^s f(x)}\,\mathrm{e}^{\mathrm{i}k\cdot x}\,\mathrm{d}x \qquad (66)$

with s ≥ 2 behave asymptotically like

$P(k) \sim \frac{\mathrm{e}^{-|k|^s f(0)}}{|k|^{sd/2}}\sqrt{\frac{(2\pi)^d}{\det A}}\,\exp\left(-\frac{k^\top A^{-1}k}{2|k|^s}\right) \qquad (67)$

for |k| → ∞ under the following conditions:

1. the function f : Ω → ℝ has a negative minimum at x_0 = 0;
2. the Hessian A of f at x_0 = 0 is positive definite.

This result is valid for Ω ⊆ ℝ^d with d ≥ 1. We began in Sect. 2 by applying Erdélyi's theorem 1 to the more general class (15) of integrals in one dimension, discussed special cases, and restricted the general result to the case (66) with s ≥ 2. We extended the discussion to d > 1 dimensions in Sect. 3 and finally showed that the integration domain can be extended to ℝ^d.
Integrals of this type with rapidly oscillating integrands frequently occur in physics, most notably in statistical physics. Asymptotic expressions like (67) can help to understand the behaviour of such integrals in the limit of small scales or large momenta, |k| → ∞, and as a test case for numerical integrations. In a forthcoming paper, we shall apply our result to the formation of small-scale cosmic structures.
Acceptability of a cross-sectoral hospital pharmacist intervention for patients in transition between hospital and general practice: a mixed methods study
Background and objective: Drug-related problems (DRPs) are often seen when a patient is transitioning from one healthcare sector to another, for example, when a patient moves from the hospital to a General Practice (GP) setting. This transition creates an opportunity for information on medication changes and follow-up plans to be lost. A cross-sectoral hospital pharmacist intervention was developed and pilot-tested in a large GP clinic. The intervention included medication history, medication reconciliation, medication review, follow-up telephone calls, identification of possible DRPs and communication with the GP. It is unknown whether the intervention is transferable to other GP clinics. The aim of the study was to explore similarities and differences between GP clinics in descriptive data and intervention acceptability. Methods: A convergent mixed methods study design was used. The intervention was tested in four GP clinics with differing characteristics. Quantitative data on the GP clinics, patients and pharmacist activities were collected. Qualitative data on the acceptability were collected through focus group interviews with general practitioners, nurses and pharmacists. The Theoretical Framework of Acceptability was used. Results: Overall, the intervention was found acceptable and relevant by all. There were differences between the GP clinics in terms of size, daily physician work form and their use of pharmacists for ad hoc tasks. There were similarities in patient characteristics across GP clinics. Therefore, the intervention was found equally relevant for all of the clinics. Shared employment with unique access to health records in both sectors was important in the identification and resolution of DRPs. Economy was a barrier for further implementation. Conclusions: The intervention was found acceptable and relevant by all; therefore, it was considered transferable to other GP clinics. Hospital pharmacists were perceived to be relevant healthcare professionals to be utilized in GP, in hospitals and in the cross-sectoral transition of patients.
Acceptability of a pharmacist activity for patients transitioning between hospital and general practice
Why was the study done?
Drug-related problems are often seen in patients transitioning across healthcare sectors.
• A pharmacist activity was developed and pilot-tested in a large General Practice (GP) clinic. It was unknown whether the activity was transferable to other GP clinics.
• The pharmacist activity included talking to the patients about their usual medication and adjusting prescriptions accordingly. It also included a review of their medications, a follow-up telephone call to the patients and communication with the GP in case of drug-related problems.
• The aim of the study was to test the activity in different GP clinics and to explore similarities and differences in descriptive data and acceptability.
What did the researchers do?
• The activity was tested in four GP clinics within the same geographical area for three months.
• Descriptive data about the GP clinics, the patients and the pharmacist's activities performed were collected.
• Data about acceptability of the activity was collected through focus group interviews with general practitioners, nurses and hospital pharmacists.
• This qualitative data was combined with descriptive data to explore similarities and differences between GP clinics.
What did the researchers find?
• Overall, the activity was found to be acceptable and relevant by all.
• There were differences between the GP clinics in terms of size, daily physician work form and their use of the pharmacist for ad hoc tasks.
• There were similarities in patients across GP clinics, for example, in terms of the number of medications or drug-related problems. The activity was found equally relevant for every clinic.
• Shared employment with access to health records in both sectors was important in the identification and resolution of drug-related problems. The pharmacist had the possibility to bring issues back and forth between the hospital and the GP clinic.
• Economy was a barrier for further implementation.
What do the findings mean?
• The activity was found acceptable and relevant by all; therefore, it was considered transferable to other GP clinics.
• Hospital pharmacists were perceived to be relevant healthcare professionals to be utilised in GP, in hospitals and in the cross-sectoral transition of patients.
Introduction
Drug-related problems (DRPs) are often seen when patients are transitioning across healthcare sectors.1,2 One study found that 81% of discharged patients had at least one prescription error, clerical error or error due to inadequate communication of medicines stopped during admission.1 Another study found errors in 20 of 22 discharged patients related to treatment efficacy, safety, adverse effects, dispensing or missing electronic prescriptions.2 DRPs arising in the transition from General Practice (GP) to hospital were shown to be caused by inadequate focus on updated medication information.3 At discharge, DRPs were commonly caused by, for example, inadequate information regarding medication changes made during admission.3,6,7 In Denmark, information on patients' current medication is shared through the Shared Medication Record (SMR), which both the hospital and the GP can access and update.8,9 Information quality depends on whether the SMR is properly updated. Other information about the patient is not shared via the SMR, and the hospitals and GPs keep separate electronic patient records for this information. There is no easy access between the sectors to information about the reasons for treatment decisions.
Hospital pharmacists play a crucial role in collaborating with physicians and nurses within the hospital setting. They possess extensive knowledge regarding medication-related matters. Pharmacists working in both the Hospital Pharmacy and GP settings have access to patient records in both places. As a result, they can facilitate seamless cross-sectoral transitions for patients.

Previously, the cross-sectoral hospital pharmacist intervention was developed and pilot-tested in a large GP clinic.3 This intervention included medication history, medication reconciliation, medication review, follow-up telephone calls, identification of possible DRPs and communication with the GP. The pharmacist identified and solved several DRPs, medication errors were avoided and patient safety was improved. The intervention was considered relevant by staff in both healthcare sectors and by patients,3 but it is unknown whether the intervention is transferable to other GP clinics.

The purpose of this convergent mixed methods study was to explore similarities and differences between GP clinics by integrating both quantitative and qualitative data. We conducted qualitative focus group interviews to explore acceptability of the intervention. We used descriptive data on the clinics, patients and pharmacist activities to compare the results from the qualitative and quantitative data collection and analysis.
Design
A convergent mixed methods design was used in which quantitative and qualitative data were collected and analysed independently, then integrated and interpreted together 10 (Figure 1).
The study is reported in accordance with the guidelines for Good Reporting of A Mixed Methods Study (GRAMMS). 11
Setting
The study was performed in four GP clinics within the same cluster (Northern Djurs Municipality) and their local hospital, Randers Regional Hospital (RRH), located in Denmark.
Characteristics of the Danish healthcare system and RRH are described in Table 1.
Participants
Quantitative study groups - descriptive data. Four GP clinics with differing characteristics participated in the study. Two of the clinics were solo practices (with 1570 and 2270 patients, respectively), one was of medium size (3800 patients) and one was a large clinic (8740 patients).

Patients in cross-sectoral transition were also participants. Patients were consecutively recruited Monday to Friday at RRH by pharmacists, or at the GP clinics by the GPs, a nurse or a pharmacist. Written informed consent was obtained. Patients ⩾18 years old were eligible for inclusion but were excluded if admitted to the maternity ward, hospitalized due to a psychiatric diagnosis, considered too ill (suicidal, cognitive impairment, life-threatening illness), or unable to speak Danish.
Additionally, three pharmacists from the Hospital Pharmacy (Central Denmark Region) who performed the intervention were part of the quantitative study. Their experience ranged between 15 and 27 years, and they frequently dealt with medication reviews on a daily basis (see Table 2 for a description of a medication review).

The Danish healthcare system:
• Everybody has free, tax-funded access.12
• Everybody has a real-time SMR in which both the primary and secondary healthcare sectors can see and update the patient's current medication.8
• GPs must update the SMR at elective referral to the hospital and strive to update the SMR in emergency admissions.13

Randers Regional Hospital (2021)14:
• Beds, n: 190
• Acute hospitalizations annually, n: 18,000
• Outpatient visits annually, n: 95,000
• Inhabitants in the catchment area (including the Northern Djurs Municipality), n: 226,000

Standard procedures:
• At admission, physicians: medication reconciliation and updating of the eMAR.
• During hospitalization, hospital pharmacists: medication review on patients with a high risk of medication errors.
• At discharge, physicians: assessment of the patient's medication, medication reconciliation, SMR update and generation of electronic prescriptions. An electronic discharge letter is written containing a description of the admission, treatment and follow-up plans.
• At discharge, nurses: for patients in nursing homes or with municipal nursing care, an electronic discharge report containing information on the admission, treatment, care needs and follow-up plans is written.15

eMAR, electronic Medication Administration Record; SMR, Shared Medication Record.
Qualitative study groups - acceptability. Hospital pharmacists and healthcare professionals (HCPs) from the GP clinics participated in the qualitative study.
Intervention
The cross-sectoral intervention was developed and piloted prior to this study and included medication history, medication reconciliation, medication review and a follow-up telephone call to patients after discharge3 (Table 2), as well as ad hoc tasks.

During the study, the pharmacists had a shared employment between the Hospital Pharmacy and the GP clinic. The pharmacists were physically present in the clinic 3-6 days in January 2022, where they were introduced to the GP-staff and the intervention. In the study period (February-April 2022), they were in the GP clinics 1-2 days a week.
Outcomes
Quantitative outcomes - descriptive data. Descriptive data on the GP clinic characteristics, the patients and the pharmacist activities were included.

Qualitative outcomes - acceptability. The Theoretical Framework of Acceptability (TFA)16 was used to assess acceptability of the intervention. The TFA describes affective attitude, burden, perceived effectiveness, ethicality, intervention coherence, opportunity costs and self-efficacy.16 Additionally, perspectives on future intervention implementation were explored.

Integrated mixed methods outcomes. Quantitative and qualitative data were integrated and interpreted to highlight similarities and differences between clinics.
Data collection and analysis
Quantitative data - descriptive data. Descriptive data were presented as numbers, means, medians, interquartile ranges or proportions when relevant.

DRPs were defined as events or circumstances involving drug therapy that actually, or potentially, interfered with desired health outcomes.17 Continuous outcomes were compared using the Kruskal-Wallis rank test or Bartlett's equal-variances test, depending on whether or not data were considered normally distributed (more than two groups). Binary outcomes were compared in a Chi-squared test.
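As an illustration of these comparisons (a minimal sketch with fabricated example data; this is not the study's dataset or analysis code), the three tests can be run with scipy.stats:

```python
from scipy import stats

# Hypothetical medication counts per patient in four GP clinics
clinic1 = [9, 12, 7, 10, 14, 8]
clinic2 = [11, 6, 9, 13, 10]
clinic3 = [8, 10, 12, 9, 7, 11]
clinic4 = [10, 9, 13, 8, 12]

# Continuous outcome, not normally distributed: Kruskal-Wallis rank test
h_stat, p_kw = stats.kruskal(clinic1, clinic2, clinic3, clinic4)

# Continuous outcome, normally distributed: Bartlett's equal-variances test
b_stat, p_bartlett = stats.bartlett(clinic1, clinic2, clinic3, clinic4)

# Binary outcome (e.g., patients with/without a DRP) per clinic:
# rows = clinics, columns = [DRP, no DRP]
table = [[5, 25], [4, 20], [6, 30], [3, 22]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"Kruskal-Wallis p = {p_kw:.3f}, Bartlett p = {p_bartlett:.3f}, "
      f"Chi-squared p = {p_chi2:.3f}")
```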
Pharmacists' ad hoc activities were registered by the hospital pharmacists during the intervention.
Qualitative data - acceptability. Semi-structured focus group interviews were conducted in May-June 2022. For each GP clinic, two interviews were held: first with the hospital pharmacist involved and second with HCPs in each clinic.

The interviews were facilitated by the last author (CO) and supplemented by the first author (CAS) in the clinics and at the Hospital Pharmacy. Interview guides were prepared with inspiration from the TFA, reflecting the seven component constructs16 (Supplemental Appendices 1 and 2). They were pilot-tested on one pharmacist (data included in the data analysis).

The interviews were audio-recorded, transcribed verbatim by CAS and anonymized. The transcripts were read by CO and CAS. A preliminary analysis was performed by CAS and discussed with the authors in an interdisciplinary workshop. NVivo (1.5.2) was used for deductive coding into the seven TFA component constructs16 and a category for 'Thoughts of future implementation'. Quotations were chosen by CAS, CO and the second author (LJ).

Integrated mixed methods data. Similarities and differences between the GP clinics were explored and integrated in a joint display.18
Results
Quantitative results - descriptive data. GP clinic characteristics and study recruitment data are presented in Table 3. The clinics varied in size, types of HCPs, record system and daily work form. In clinic 1, six different types of HCPs were employed, and the senior physicians used supervision of junior physicians as their primary daily work form. Most of the smaller clinics were more traditional in their daily work form, with consultations, and only employed physicians, nurses and a secretary.
A total of 511 patients were referred to the medical, surgical or orthopaedic surgery outpatient clinics at RRH; 13 patients were recruited.A total of 321 patients were hospitalized; 110 of them were included.Eleven patients were withdrawn due to death or unrecorded reasons.
Patient characteristics are described in Table 4.
A total of 66% of patients were older than 70 years; around half of them were men.
The patients received from 0 to 21 medications prior to hospital contact (mean: 9.2 medications). In total, 86% of patients had hospital medication changes; 43% of these patients did not have their medication changes properly described in the discharge letters to the GP. In 81% of patients, the discharge reports to municipal nursing care lacked information about medication changes. At follow-up, up to 40% of the patients had medication-related questions. Most patients liked the initiative with a follow-up telephone call. DRPs related to transition were discovered by the pharmacist in up to 20% of the patients. For example, a patient was discharged on amlodipine, and there was a discrepancy in the dosing between the SMR and the discharge letter. A similar incident occurred with another patient who was discharged on acetylsalicylic acid: in the SMR, it was documented that the patient was taking acetylsalicylic acid 75 mg twice a day, whereas in the discharge letter it was documented that acetylsalicylic acid had been reduced to once daily, revealing a discrepancy between the SMR and the discharge letter.
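The discrepancies described here boil down to comparing the same drug's dosing in two records. A minimal sketch of that comparison (our own illustration with hypothetical records; in the study, DRPs were identified manually by the pharmacists):

```python
# Hypothetical records: drug name -> (dose, doses per day)
smr = {
    "amlodipine": ("5 mg", 1),
    "acetylsalicylic acid": ("75 mg", 2),
}
discharge_letter = {
    "amlodipine": ("10 mg", 1),
    "acetylsalicylic acid": ("75 mg", 1),
}

# Flag every drug whose dosing differs between the two sources
for drug in sorted(set(smr) | set(discharge_letter)):
    in_smr = smr.get(drug)
    in_letter = discharge_letter.get(drug)
    if in_smr != in_letter:
        print(f"Possible DRP for {drug}: SMR says {in_smr}, "
              f"discharge letter says {in_letter}")
```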
Hospital pharmacist's ad hoc activities in the GP clinics are described in Table 5.
In clinic 3, the GP-staff had many questions for the hospital pharmacist about medicines, for example, drug choice, dosing intervals and drug formulations. Also, in clinic 4, three cases of drug information were registered, for example, a question about long-term side effects.

Qualitative results - acceptability. In clinics 1 and 2, a GP, a nurse and the hospital pharmacists participated in the interviews [one interview with each of the two hospital pharmacists; one interview with each of the clinics (two participants each)]. In clinic 3, two GPs and the hospital pharmacist participated [one interview with the hospital pharmacist; one interview with the clinic (two participants)]; and in clinic 4, the GP, two nurses, a secretary and the hospital pharmacist participated [one interview with the hospital pharmacist; one interview with the clinic (four participants)].
Acceptability is presented for each of the seven TFA component constructs 16 below.
Affective attitudes (how an individual feels about the intervention).
Affective attitudes were expressed in positive terms by both GP-staff and hospital pharmacists. The intervention gave a feeling of better collaboration between healthcare sectors and was considered relevant by the GP-staff.

Because you come from the hospital and reach out to general practice - it gives a feeling of better collaboration. (GPclin1)

She just went into it and was a part of it [the clinic], so it was really sad when she stopped. (GP1clin3)

It gave a feeling that it [the intervention] made sense. (GPclin4)

The pharmacists felt welcome. It was exciting to work in the GP clinics and to see a different everyday life. The pharmacists also felt a humility about not wanting to disturb the physicians unnecessarily.

Burden (perceived amount of effort that is required to participate in the intervention). The staff in all GP clinics expressed that it had not been a burden to participate in the intervention. It made them reflect on the way they normally work, but their everyday life in the clinic did not change.

It has only been positive that we have gained a different perspective on it [the medicine]. (GPclin1)

We have so much to consider and medicine is just a small part of it. It was great to have someone who just focused on it [the medicine]. (Nurseclin2)

I don't think it interfered in any way. You didn't think - Oh no, the pharmacist is coming and getting in the way. Not at all. It wasn't troubleshooting. No, it wasn't that experience at all - troubleshooting and pointing fingers. You can't use that for anything, simply. (GPclin2)

Carrying out the intervention was not a professional burden to the hospital pharmacists. On the contrary, it was very rewarding to have direct contact with the patients.

I wasn't challenged professionally, but I don't think you should put in the youngest colleague. You have to have a certain ballast to be able to have a discussion with the GP. (Pharmacist2)

The pharmacists in clinics 1 and 3 found it difficult to find the right moment to talk to the GPs about patients and to offer ad hoc tasks.

They each have their own calendar, where patients are booked in. So it was rare that there was time to catch them between patients. (Pharmacist3)

It had a lot to do with whether you were visibly present. (Pharmacist3)

In contrast to this, the pharmacists in the solo practices found that the doctors were easy to get hold of.

There has been easy access to the staff. There was easy access to the GP, and if you came and asked about something, they knew exactly what, and who, you were talking about. (Pharmacist2)

The pharmacists in the solo practices got their own office from day 1. In clinic 3, the pharmacist was allocated a place to sit from time to time (in an examination room or in a hallway). In clinic 1, a seat at a cafe table in the conference room was allocated. This challenged the working environment.

The thing about not having a permanent workplace. And it's not because, when you only come twice a week, you have to have a specific workplace. But I was sitting at such a round cafe table. It faces the opposite of all computer desks, so I actually sat really badly. (Pharmacist1)

Ethicality (the extent to which the intervention has good fit with an individual's value system). In all clinics, the intervention agreed well with their ethical values. The hospital pharmacists had not interfered in a negative way. Access to record systems in both healthcare sectors was perceived as particularly advantageous, but a contract or an employment in the clinic is necessary.

I have not experienced it as untimely interference, but more like positive sparring. (GP1clin3)

You have a duty of confidentiality in both places. So I simply can't see how that would be a blocking factor. (GPclin1)

When you do something like this, then there must be official agreements. (GPclin4)

The intervention was in line with the hospital pharmacists' ethical values. The hospital pharmacist in clinic 1 expressed a concern about whether it could be confusing for the patients that another HCP looks at their medicine. Others could perhaps do it, but it requires allocating time to maintain a cross-sectoral focus.
What I did, they would be able to do in the clinic. The doctor would be able to do it. Or the nurse could do it. But will it be done? They focus on delivering the treatment that the patient demands that day at that time, and within the framework they now have. (Pharmacist1)

Intervention coherence (the extent to which the participant understands the intervention and how it works). Overall, there was good coherence between GP-staff and hospital pharmacist perspectives and the purpose of the intervention. The hospital pharmacist intervention was a link between healthcare sectors, and having access to both record systems was perceived as a really good idea. The extent to which the GP-staff understood the elements of the intervention and how these worked is unknown.

It will be better for the patient because it catches some things that would otherwise fall through the safety net. (GPclin2)

That someone in a different and more systematic way reviews patients' medication and makes some suggestions for adjustments or discontinuations. It is always useful. Definitely. (GPclin1)

We have access to e-journals [hospital records], but it is wildly outside the scope of what GP can offer. (GPclin4)

They are busy in the hospital... we are busy here. We don't have time. We constantly think that it is their responsibility, and this is mine. So it is good that there is someone who builds this bridge [between sectors]. (GPclin4)
Opportunity costs (the extent to which benefits, profits and values must be given up to engage in the intervention).
The staff in all of the GP clinics thought it was important to focus on the sector transition. Additionally, the staff believed they would benefit from having a hospital pharmacist to perform other delegated tasks in the clinic, for example, medication reviews, education and drug information.
Someone who has knowledge, and sort of knows what is up to date and what has just been found out. I think it would be fruitful. (GPclin4)
In GP clinic 1, the staff saw the hospital pharmacist's input as a benefit for the patients' annual check-up. The record note would be taken into account there, and not months before, unless something life-threatening was discovered.

I would say if you have a medication review note, or something that you know is there, and then you say: now I need it; now we have to look at medicine. Okay, then we will find it and read it. Then it is a saving. Although it is an extra thing to have. (GPClin1)

Economy was a barrier to implementation in three of four clinics.

Economy is a barrier, and so is practicality. Where do you get hold of someone like that - we can't hire a full-time pharmacist. (GPclin1)

A fee would be motivating; if you could take a fee within the payment system we have now. I actually think that you could use that argument. (GPClin1)

Basically, if it is us who has to pay for it, you can say that it also has to pay off in another way, and I don't know how. (GP2clin3)

In the solo practices, the GPs preferred not to have an employer-employee relationship, as this would affect the professional relationship.

So, she's independent, you could say, impartial. She becomes more neutral in her being here when she does not have an employment relationship. So she's coming and if we're just looking politically at this thing with local hospitals, and we need to have more coherence, right? Then I could easily see a - not a pharmaconomist - it must be a pharmacist, I'm sorry to say. (GPclin2)

I think that the effective, and the good and the fruitful - it is that there is someone who is neutral in relation to both the hospital, and us, and sees things from a higher perspective, and can see where the hell things are going wrong and has access to both records. (GPclin4)

The hospital pharmacists saw the cross-sectoral task as particularly important; however, every patient cannot be followed up by a hospital pharmacist. Shared employment and access to both record systems was a benefit.

We are not very good at discharging patients. It is of course a superficial solution to put in another professional group to deal with what the first professional group may not have done well enough, but there is no one else who has that focus. (Pharmacist2)

The thing about a patient always being followed home by a pharmacist. I think it is a resource-intensive task, but perhaps you could find another communication model. (Pharmacist1)

Perceived effectiveness (the extent to which the intervention is perceived as likely to achieve its purpose). The GP-staff found it valuable to have a person with an overview of the patient's medical treatment in both healthcare sectors, as the cross-sectoral transition is often problematic. The staff experienced benefits from having a collaboration with a hospital pharmacist with shared employment between the Hospital Pharmacy and GP: both the great insight into what goes on at the hospital, and the advantages of having access to both record systems. For clinic 2, it was crucial that the hospital pharmacist came from their own local hospital, so the pharmacist would know about local circumstances.
The intervention has changed the GP's way of working with updating SMR, but apart from that it has not changed their daily workflow.
The medication reviews were useful; however, the GPs did not act on suggestions right away, but maybe the next time the patient was in the clinic.
According to the secretary in clinic 4, patients often required prescriptions after a discharge, so it made good sense to have someone who catches it before it becomes a problem.
The shared employment and access to both record systems has made decidedly good sense, because it would have taken us a very long time to solve the problems ourselves. It was actually quite effective. (GPclin1)

The thing about having someone who covers both sectors, is actually quite nice. (GPclin4)

Before, updating SMR was something that we didn't really know what was used for elsewhere. Now it has actually become something we use, because we hand over information to someone who can use it at the hospital. I think that is a big advantage. (GPclin1)

It gave me quite a lot, especially those 'Attention' patients, since they are not patients I otherwise pay attention to. I think that this small project has probably changed my way of thinking more than those big surveys. (GPclin2)

It [the medication reviews] was useful. And also something that could result in making some changes. (GP1clin3)

Yes, you take it [a medication review note] to heart. But it's not certain that you act on it now. It's not like we act urgently on a comment. It may well be six months before you see the patient again. But the note from the pharmacist will still be there. (GPclin2)

My experience was that when I asked about something, she explored it thoroughly and spent a long time on it. She took things seriously and was thorough. (GP2clin3)

The hospital pharmacists perceived that the intervention was effective in optimizing patients' medication in the sector transition - particularly as no one else has that focus. It has made very good sense to be employed in both sectors and to have access to both health records. It gave an understanding of both worlds and an opportunity to take topics back and forth between hospital and GP. It was difficult for the hospital pharmacists to find the optimal time for the follow-up telephone calls to the patients when the pharmacist was in the clinic only once or twice a week; however, most patients were happy about the call.

I haven't called anyone who thought I should not. There were some who perhaps didn't think they were in the target group. It helped to call them, because that's where you get the truth. Sometimes they don't take the medicine that the doctor believes. (Pharmacist2)

The patient could have been discharged on Thursday, and when I came the next Tuesday a lot could have happened during the weekend. (Pharmacist1)

When you are at the hospital, you should really have been in the GP clinic. And when you are in the GP clinic you should really be in the hospital. It is a challenge that I do not know how to fix. Besides having several pharmacists working in the same way. (Pharmacist2)

The hospital pharmacists reported that they lacked feedback on the notes in the medical record.

I made some notes in the system, but I don't think that the physicians necessarily took a position on the notes that I made. But I also have to respect that the doctor knows how far he can go with the patient. (Pharmacist2)

It would have been nice to know what they thought about the relevance of my comments. (Pharmacist1)
Self-efficacy (the participant's confidence that they can perform the behaviour(s) required to participate in the intervention).
In GP clinic 1 and 4, the GP-staff felt confident with the intervention and the hospital pharmacist's role; however, the hospital pharmacists' tasks were not fully incorporated in their daily routines in clinic 1.
If we had actively hired the pharmacist, then we would also have had it much more under our skin. Not as such a project that comes like 'icing on the cake'. (GPClin1)

I was pleasantly surprised. That's because you don't know what they [the pharmacists] represent. Knowledge, that is - great knowledge, I think. (NurseClin4)

In GP clinic 2, it took some time for the GP-staff to build confidence, and in clinic 3, the GP-staff did not feel confident with the intervention or how best they could use a hospital pharmacist. The hospital pharmacists felt confident performing the intervention but were challenged by the fact that the GP-staff did not know how a hospital pharmacist could best be utilized in the clinic. They thought that the study period had been too short.

They found it difficult to define what kind of tasks they wanted to use the pharmacist for ad hoc. It was also only perhaps in the last 14 days, when you really started to get to know each other, that you could have the dialogue there. So perhaps the study period was too short. (Pharmacist3)

Thoughts of future implementation. The staff in all of the GP clinics saw advantages of having a collaboration with a hospital pharmacist in the future. Besides cross-sectoral tasks, medication reviews and teaching, the GPs highlighted delegated tasks that could be carried out in the clinic, for example, follow-up of patients in treatment of hypertension or a medication review as part of the annual check-up.

It is important that it is someone with a connection to both sectors. Otherwise it will only be tasks in the clinic, and then we will not utilise the advantages of having focus on the sector transition and access to both systems. (GPclin1)

I have considered if we could create a 'hybrid clinic', where a pharmacist could follow up on our patients with hypertension and also have an extra eye on the patients that are discharged from the hospital. A 'sector transition clinic'. Politically, maybe that is a better name. (GPclin2)

Economy was a barrier to most of the GP clinics; however, the possibility to share a hospital pharmacist between the clinics was a considered option. The small clinics did not want an employer-employee relationship; they preferred a neutral relation. However, it was still important that they were familiar with the pharmacist, so they would feel it as a collaboration and not as a correction.

It is important that it is a person that you can relate to, so that it is not a system. Because as soon as it is a system, you can get the feeling that you are being corrected. So it's much nicer when it's a collaboration. (GPclin4)

The hospital pharmacists saw advantages of working in a GP clinic alongside working at the Hospital Pharmacy. In addition to cross-sectoral tasks, medication reviews and teaching, other delegated tasks were mentioned, such as the possibility of a pharmacist being available to the municipal nursing care after discharge. However, the challenge of identifying which patients are in need of being followed up by a hospital pharmacist was highlighted.

The hospital pharmacists also contemplated advantages for the hospital discharge process, such as enhanced communication regarding medication changes and follow-up plans. Currently, discharge letters - written by various practitioners - are received by the GP and municipal nursing care, leading to inconsistent messages.
Integrated mixed methods results. Aggregated quantitative and qualitative data are presented in Table 6. Similarities and differences are described, as well as interpretations of the integrated results.
Discussion
In this study, we tested a cross-sectoral hospital pharmacist intervention in four GP clinics to explore similarities and differences between the clinics using quantitative and qualitative data. Overall, the intervention was well accepted by the GP-staff and the hospital pharmacists.

The GP clinics

There were differences in clinic sizes and the way their daily work was organized.

The smaller clinics prepared an office for the pharmacist. The pharmacist experienced easy access to the GP and felt integrated in the team. The larger clinics were used to interdisciplinary collaboration, allowing the pharmacist more freedom to work independently, although pharmacists had less access to the GPs due to the GPs' busy schedules. There were differences in the way the GP clinics chose to use the pharmacist resource for ad hoc tasks. In the largest clinic, they overlooked the opportunity, whereas others requested drug information, medication reviews or teaching. The differences may be due to busyness, visibility or an expression of different needs. It may also be because most of the GP clinics did not know what tasks a pharmacist could perform.

When looking at patient characteristics, we found patients with similar baseline characteristics (e.g. age and number of medications), SMR updates, medication changes, and DRPs identified in the medication reviews and at follow-up. Thus, patient characteristics were similar across clinics, and therefore the cross-sectoral pharmacist intervention was found equally relevant to all of the GP clinics.

The intervention

The intervention made sense to GP-staff as they experience many DRPs in cross-sectoral patient transitions. This is in line with previous research where DRPs were seen in 81-91% of discharged patients.1,2 In our study, the cross-sectoral intervention identified and solved several DRPs; in this way medication errors were avoided.

DRPs could be avoided if the hospital had more focus on the discharge process; however, this will not solve every problem. Some DRPs happen when transitioning patients between the hospital and the GP. No one else, besides the pharmacists in this study, has particular focus on medicine in the cross-sectoral field. Additionally, if the patient doesn't understand what medication changes have been made, there can still be DRPs after discharge. Some patients have low health literacy,19,20 leading to increased health inequality.21 To reduce inequality in healthcare, we need to differentiate the treatment and healthcare offered to the individual to mitigate unequal access to and use of healthcare services (equity).22 A cross-sectoral hospital pharmacist intervention with special focus on frail patients may be an option. In addition to improved treatment, communication and fewer readmissions,23 coherence of the treatment is expected to improve. The population in the Northern Djurs Municipality may be different from other municipalities, as 87% of the population has low socio-economic status19; therefore, the intervention may be more applicable to clinics serving this population. Not every country has an electronic SMR as in Denmark, and communication about the patients' medication may be even more challenged in these countries.
Table 6. Joint display of integrated quantitative and qualitative results (condensed from the recoverable fragments of the original table): In the smallest clinics, an office was prepared for the pharmacist, who had easy access to dialogue with the GP. The larger clinics were used to interdisciplinary collaboration, allowing the pharmacists to work more independently; in clinic 1 the pharmacist was placed in the conference room, in clinic 3 the pharmacist was assigned a place to sit from time to time, and in both of these clinics dialogue with the GPs was difficult due to busyness. Participation was perceived as no burden by the GP-staff and as no professional burden by the pharmacists in all four clinics. In the largest clinic the ad hoc opportunity was overlooked, whereas the other clinics requested medicines information, medication review or education. Overall, the intervention was accepted by the GP-staff regardless of clinic size or organization, and by the pharmacists, although their experiences with the clinics differed; to the pharmacists the cross-sectoral task made sense, as no one else has this particular focus on medicine in sector transitions, but it was considered resource-intensive, and several clinics would also benefit from other pharmacist tasks.
Staff in all of the GP clinics saw possible benefits of delegating tasks to a pharmacist.25,26 Shared employment and access to health records in both healthcare sectors was considered valuable by all. This was also the case when the intervention was pilot-tested.3 The GP clinics already have access to the hospital record (e-journal); however, it requires detective work to find the information they are looking for, and for some it is 'wildly outside the scope of what GP can offer'.
The shared employment also gave the pharmacist the opportunity to bring issues back to the hospital.
The pharmacists lacked feedback from the GPs on their medication review notes. As patients are often hospitalized for a very short time, the pharmacists are accustomed to working quickly and getting a response from the hospital physicians shortly after their medication reviews. GP clinics primarily treat patients for current issues on the specific day. The GPs stated that they had taken the notes into consideration; however, the notes would be used at a later date when the patient had a consultation, for example, at an annual check-up. Thus, what the pharmacists considered a lack of commitment or an inefficiency of the intervention was actually due to differing cultures and expectations among the different HCPs. A study period of 3 months may have been too short to start a feedback loop and see the effect of the study; however, this was not the aim of the study.

It is unethical to have a cross-sectoral pharmacist intervention and a medication review note that is not taken into consideration before the patient revisits the clinic. A lot can happen in 6 months. If a cross-sectoral intervention is implemented in the future, the GPs must reconsider the workflow to get the most out of the intervention.

The pharmacists reported that the timing of the telephone call to the patients after discharge was difficult, as they were only in the GP clinics 1-2 days a week. Perhaps the use of remote access to the GP record system or having a team of several pharmacists working the same way would solve this problem.

Implementation in the future? For most, economy was a barrier for further implementation. The possibility to share a pharmacist within the cluster was considered an option. The possibility to delegate tasks from the GP to the pharmacist was also considered, for example, medication review prior to an annual check-up or follow-up on hypertension patients in a 'hybrid clinic'. This would give a chance to receive a fee covering the costs of having a pharmacist. This would, however, not cover cross-sectoral tasks as in the intervention. The Danish healthcare system works in silos, both economically and professionally. In the spaces between the silos, no one is responsible for things being connected.27 There is an ambition to have a future healthcare system which supports preventive and coherent care with more equality.28 With the current lack of physicians and nurses, we perceive hospital pharmacists to be a relevant HCP to be used in GP, in hospitals and in the cross-sectoral transition of patients. The question is, who should pay for the pharmacist?
The largest clinic perceived that an employer-employee relationship would commit them more; however, both solo practices preferred not to have this relationship, fearing it would not be neutral.
Future research
As perceived by one of the pharmacists, it would be very resource-intensive if every discharged patient were followed up by a pharmacist. Therefore, future research focusing on which patients may benefit the most from this intervention is needed.

Possibilities for a pharmacist to be more integrated in the discharge process should also be considered. In some hospitals in England, a pharmacist prescriber is embedded within the medical team and is in charge of writing about medication in the discharge letter and reviewing prescriptions.29 In the OPTIMIST study,23 the extended pharmacist intervention included medication review, three motivational interviews, communication with the primary care physician, pharmacy, and nursing home, and follow-up after 6 months. This intervention was shown to reduce the short- and long-term rates of readmissions.

As economy was a barrier to most of the clinics, further investigation of financial models is needed: who pays, and how should the work of a pharmacist in a shared employment between the Hospital Pharmacy and GP clinics be organized?
Strengths and limitations. This study has strengths and limitations that merit further discussion.
Strengths
The intervention was initially tested in one GP clinic 3 and afterwards in four GP clinics with differing characteristics; therefore, acceptability was thoroughly explored.
The study was a mixed methods study including both quantitative and qualitative methods. The two data types were integrated and expanded the understanding of the topic.
Acceptability was assessed using the TFA 16 as it represents a deliberate way to assess acceptability in the feasibility phase of a complex intervention. The TFA captures key dimensions of acceptability, making it a strong tool for assessing acceptability.
Limitations
The collaboration between pharmacists and GP clinics is new in Denmark; therefore, not all of the GP-staff knew how a pharmacist could be utilized prior to the study. The study period was 3 months, and in the solo practices the pharmacist worked there once a week, giving around 10 days when the pharmacist and the GP-staff met. Additionally, some of the pharmacists had a personal mind-set of not wanting to disturb the GPs unnecessarily. Therefore, a study period of 3 months may have been too short for the GP-staff and the pharmacist to become familiar with each other.
The GP clinics volunteered to participate when the Hospital Pharmacy was recruiting clinics for the study, possibly introducing selection bias.
Recruitment of patients referred to an outpatient clinic did not work well. The GP-staff had to collect written informed consent from the patients, a task that was often neglected in the busy daily routine. If the intervention were implemented as part of daily work in the clinic, this process would not have been necessary.
Conclusion
The cross-sectoral hospital pharmacist intervention was found acceptable and relevant by all and was therefore considered transferable to other GP clinics. The pharmacists in the smaller clinics had easier access to clinicians and felt integrated in the team. The larger clinics were more used to interdisciplinary collaboration, allowing the pharmacist more freedom to work independently.
The intervention was found equally relevant for all GP clinics; however, further investigation on how to choose patients for the intervention is needed. To increase equity in healthcare, differentiated solutions are needed. In a time with a shortage of physicians and nurses, hospital pharmacists are perceived to be a relevant HCP to be used in GP, in hospitals and in the cross-sectoral patient transition.
Shared employment with unique access to health records in both sectors was an important tool in the identification and resolution of DRPs. Financial models need further investigation.
Selected participant quotes:
'...were in the target group. It helped to call them, because that's where you get the truth. Sometimes they don't take the medicine that the doctor believes.' (Pharmacist 2)
'The patient could have been discharged on Thursday, and when I came the next Tuesday a lot could have happened during the weekend.' (Pharmacist 1)
'When you are at the hospital, you should really have been in the GP clinic. And when you are in the GP clinic you should really be in the hospital. It is a challenge that I do not know how to fix. Besides having several pharmacists working in the same way.' (Pharmacist 2)
'It would have been nice to know what they thought about the relevance of my comments.' (Pharmacist 1)
Table 1 .
Characteristics of the setting.
Table 2 .
Description of the cross-sectoral hospital pharmacist intervention.
Table 3 .
GP clinic characteristics and study recruitment.
Columns: GP clinic 1, GP clinic 2, GP clinic 3, GP clinic 4; rows cover GP clinic characteristics and study recruitment.
Notes: Minor roles in brackets. Supervision: practice assistants (medical students), physicians under education and nurses have consultations; senior physicians supervise when needed. (Consultation): senior physicians may have consultations in smaller numbers. Consultation: senior physicians have patient consultations. (Supervision): nurses and/or physicians under education have consultations; senior physicians supervise when needed. *GP clinic 3 is located in a rural area and consults patients from the surrounding rural district. **Incl. practice assistants (medical students). ***Accountant of the clinic. ****Physicians' work form. *****Only the hospital pharmacists participated in the duties related to the transitions of care between the hospital and the GP clinic. GP, General Practice; RRH, Randers Regional Hospital.
Table 4.
Notes: $ Shared Medication Record: patients without a GP-updated SMR at referral to an outpatient clinic (N; n; %). Medication changes during hospitalization at RRH. b Medication change: any addition/withdrawal of a drug or an increase/decrease in the dose. c Patients included without their consent, according to study permissions, were not followed up by telephone. d Considered too ill: suicidal, cognitively impaired or with life-threatening illness (based on patient record notes). e Other: no medication; no voice. Statistics: * Kruskal-Wallis rank test; $ chi-squared test; ‡ Bartlett's equal-variances test (one-way). DRP, drug-related problem; GP, General Practice; IQR, interquartile range; RRH, Randers Regional Hospital; SMR, Shared Medication Record.

DRPs were identified during medication review in up to 81% of the patients, often related to dose, optimization of treatment, indication, medication reconciliation and a clean-up of the SMR. The severity of the DRPs was not assessed.
Table 5 .
Pharmacist ad hoc activities performed in the GP clinics. The GP clinics were given the opportunity to ask the hospital pharmacist to perform ad hoc activities when needed. This table shows the types and numbers of these activities. a Triple whammy (concurrent use of a diuretic, a renin-angiotensin system inhibitor and an NSAID). b Best practice in SMR. c Introduction of pharmacist students. d 360° evaluation of a physician under education. GP, General Practice; NSAID, non-steroidal anti-inflammatory drug; SMR, Shared Medication Record.

Selected participant quotes:
'The GP-staff were really great at welcoming new people. You felt welcome and comfortable, being there. And everyone had a reason to be there.'
'Quietly we got confident, but I could probably have been introduced a little better.' (NurseClin2)
'So, exactly that, what can they be used for? Not that I don't know what they [pharmacists] stand for... No, but from there and then to what you can use them [pharmacists] for in your own daily practice.'
Table 6 .
Joint display of mixed methods integration and interpretation.
Segmental Acupuncture for Prevention of Recurrent Urinary Tract Infections. A Randomised Clinical Trial
Introduction and Hypothesis Urinary tract infections (UTIs) are a common medical problem and prophylaxis of recurrent UTIs is an ongoing clinical challenge. In the present study we examined whether acupuncture is able to prevent recurrent UTIs in women. Methods This multicentre randomised controlled trial, based at a University clinic and private acupuncture clinics, recruited women suffering from recurrent uncomplicated UTIs. Participants were randomised to the acupuncture group or control group. Acupuncture therapy consisted of 12 treatments over a period of 18 weeks, using a set of predefined body and ear acupuncture points. Cranberry products were recommended to all participants as standard of care. Results A total of 137 women were randomised (68 acupuncture, 69 control group) and occurrence of UTIs at 6 and 12 months could be assessed in 123 and 120 women respectively. Acupuncture combined with cranberry slightly increased the proportion of UTI-free women compared with cranberry alone at 6 months (59% vs 46%, p = 0.2). Between 6 and 12 months the proportion of UTI-free women was significantly higher in the acupuncture group (66 vs 45%, p = 0.03). The number of UTIs decreased from baseline to 12 months in both study groups. The number of UTIs at 12 months was significantly lower in the acupuncture group (median difference 1, p = 0.01). Conclusions Segmental acupuncture may be an effective treatment option for women with recurrent UTIs over a longer follow-up period and may limit antibiotics use. Further studies are needed. Supplementary Information The online version contains supplementary material available at 10.1007/s00192-024-05872-7
Introduction
Urinary tract infections (UTIs) are considered to be the most common bacterial infections, with approximately 80% of all UTIs occurring in women [1]. Nearly 1 in 3 women is expected to suffer from at least one episode of UTI by the age of 24 years, and almost half of all women will experience one UTI during their lifetime [2]. Recurrences affect approximately 20-30% of women with an initial UTI, although recurrence rates vary widely [3].
The study was presented as a poster abstract at the annual conference of the Austrian Society for Urogynecology in Linz, Austria (17-18 November 2023).
Recurrent UTIs, defined as two UTIs within 6 months or three UTIs within 12 months [4], are a common problem seen in clinical practice with important medical, social and financial implications. Prophylaxis of recurrent UTIs is an ongoing challenge, with several different management strategies being used [5,6]. Low-dose antibiotic prophylaxis for several months is reported to be effective but should not be administered first line, because it fosters the development of antibiotic resistance in the causative microorganisms, as well as the commensal flora [5,7-9]. In postmenopausal women, vaginal oestrogen therapy reduces symptomatic UTI episodes [1,10]. Further prevention strategies include the oral immunostimulant OM-89, the vaginal vaccine Urovac, lactobacilli prophylaxis, cranberry products, and acupuncture [10-13]. American cranberries have been used in the prevention of UTIs for many years. The latest Cochrane update supports the use of cranberry products to reduce the risk of symptomatic UTIs in women with recurrent UTIs [14].
The use of acupuncture for UTI treatment has been studied for over 20 years [15-17]. So far, two randomised controlled trials have assessed its effect on the prevention of recurrent UTIs, with positive results [15,17]. Acupuncture points in those studies were chosen according to patients' individual diagnoses following Traditional Chinese Medicine (TCM) patterns [15,16].
Segmental acupuncture, based on segmental anatomy, is in part an alternative model to the traditional Chinese meridian system. The locations of the needles are determined by the corresponding segments of the affected organs. This allows standardised treatment regimens without in-depth knowledge of TCM. Combination with auriculotherapy has been proven to be useful [18].
The aim of the present study was to assess the effect of segmental acupuncture combined with auriculotherapy in the treatment of recurrent UTIs in women. We hypothesised that women receiving acupuncture treatment in addition to standard treatment were more likely to have no UTIs at 6 months than women with standard treatment only.
Materials and Methods
This was a multicentre randomised controlled clinical trial with a 12-month follow-up conducted at a University Clinic of Obstetrics and Gynaecology, a University Clinic of Urology, and 7 private acupuncture clinics within Austria. Patients with recurrent uncomplicated UTIs were invited to participate. Inclusion criteria were a history of at least two symptomatic UTIs within the last 6 months or at least three symptomatic UTIs within the last 12 months. UTIs had to be diagnosed by a health care provider with a dipstick test, and at least one positive urine culture within the last year was required. Exclusion criteria were pregnancy, diabetes, an indwelling urine catheter, renal insufficiency, transplantation, or immunodeficiency. Women with pelvic organ prolapse stage ≥ 2 and post-void residuals > 100 ml were excluded. The study was designed according to the Consolidated Standards of Reporting Trials (CONSORT) and STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA) guidelines [19].
Eligible and consenting patients were randomised to acupuncture treatment plus cranberry treatment or cranberry treatment only via the central computerised system "randomiser" at a ratio of 1:1. Blinding of clinical assessors was not feasible, because follow-up assessments were usually done by the same investigator, who randomised patients and informed them about study allocation.
Patients randomised to the acupuncture group received acupuncture treatments at 1 of 7 private acupuncture clinics participating in the study. Women randomised to the control group were asked not to undergo acupuncture treatment for any medical reason within 6 months after being included in the study. All study participants were counselled about urinary symptoms and protective behaviour. Daily use of cranberry products was recommended as standard of care in both study groups. Cranberry products were provided free of charge for 6 months. All participants received a study diary and were advised how to document urinary symptoms and intake of cranberry products.
In the case of UTI symptoms, i.e. frequency or urgency, dysuria, suprapubic tenderness, haematuria, or fever (> 38 °C), study participants were advised to contact their health care provider. For the diagnosis of a UTI, a dipstick test of clean-catch midstream urine and/or a urine culture was recommended. The antibiotic treatment regimen was individually determined by health care providers, taking into account information about uropathogens, resistance patterns and adverse effect profiles.
All study participants were scheduled for follow-up visits at 6 and 12 months. During the visits the study diaries were reviewed, the numbers of UTIs were assessed, and questionnaires were administered.
Acupuncture Treatment
All acupuncture treatments were performed by medical doctors with certified acupuncture diplomas of the medical board of Austria and at least 5 years of practice. Physicians were advised to limit conversations with study participants to a minimum. A total of 12 treatments was performed over a period of 18 weeks. Treatment sessions took about 30 min and were timed according to a defined schedule with increasing time intervals. Patients were placed in a lateral position, which was changed each time to ensure balanced stimulation.
Acupuncture was performed according to the physiological concept of segmental acupuncture [20,21], which has been tested and practiced successfully in the clinic of the first author for many years. A predefined set of acupuncture body and ear points (Figs. 1, 2) was used without seeking an individual TCM-based diagnosis.
The organs of the urogenital system are innervated by nerves of the segments Th10 to S4, and local needle stimuli were applied in the area from Th11 to L1. Ventrally, three needles were placed at the lower abdomen in proximity to the acupuncture points Kidney 13 to 15. Dorsally, five needles were placed at the lower back in proximity to Bladder 24 to 28. Needles were placed unilaterally per segment, about 1 cm from the midline, except for Bladder 24, which was pricked bilaterally.
These loci were combined with distant points located within the segments L4/L5, S1 and S2 corresponding to the urogenital organs. On the lower limbs the needles were placed approximately at acupuncture points Stomach 36, Spleen 6, Kidney 6, Kidney 7 and Bladder 60. Owing to the patients' lateral position, all points except Stomach 36 were pricked unilaterally. On the head the acupuncture point Du 20, widely used as a calming point, was added.
For auriculotherapy, smaller acupuncture needles (TEWA® 0.2 × 15 mm; asia-med) were inserted up to 1 mm at the following ear points: Kidney, Bladder, Sympathetic, Lower Pelvis, and Knee in the middle of the triangular fossa, as described in French acupuncture (Fig. 2). Investigators were instructed to use an electrical potentiometer for the exact localisation of the respective points.
Cranberry Treatment
The following products were supplied to participants independent of study group allocation: Cranberry Granulate Alpinamed® (Gebro Pharma GmbH, Fieberbrunn, Austria), containing 225 mg proanthocyanidin (PAC); Cranberry capsules Preisel-Caps® (Caesaro Med GmbH, Leonding, Austria), containing 36 mg PAC; and Cranberry capsules Urgenin® (Madaus GmbH, Cologne, Germany), also containing 36 mg PAC. The recommended intake was once daily for the Cranberry Granulate and twice to thrice daily for the Cranberry capsules.
Outcome Measures
The primary study outcome was the proportion of women without UTIs at 6 months. Secondary outcomes were the number of UTIs, antibiotic medication, use of cranberry products, health-related quality of life (HrQoL) and treatment satisfaction at 6 months. To assess the long-term effect of acupuncture, we analysed the proportion of UTI-free women and the number of UTIs at 12 months as further study outcomes.
The number of UTIs, antibiotics and cranberry use were assessed during review of the study diaries at the follow-up visits.
The HrQoL was assessed using the King's Health Questionnaire (KHQ) at baseline and at 6 months. The 32-item, condition-specific instrument was validated to assess HrQoL in women with lower urinary tract conditions and has been widely used in research. The German-language version of the KHQ was validated in women with stress urinary incontinence [22], but may also be used in patients with other bladder problems.
Treatment satisfaction was assessed at 6 months using an adapted German version of the Client Satisfaction Questionnaire (CSQ8), a validated tool for measuring global patient satisfaction at the end of treatment [23].
In the case of missing follow-up visits patients were contacted via telephone and information on cranberry intake, urinary symptoms and other missing data was collected.
Statistical Analysis
Sample-size estimation was performed for the primary outcome, the proportion of UTI-free women at 6 months. Based on the existing literature [15,16], the difference in the proportion of women free of UTIs between the acupuncture and the control group was expected to be between 25 and 50%. The sample size calculated to achieve a power of 80% was increased by an anticipated drop-out of 10 women per group (17%), resulting in a total of 136 patients planned for study inclusion. Summary statistics for continuous variables are presented as median and quartiles, and for discrete variables as count and proportion. The analysis of the main and secondary outcomes was calculated using Fisher's exact test for categorical data and the Mann-Whitney U test for continuous and count data. Two-sided p values < 0.05 were considered statistically significant. Differences between the groups were calculated using the odds ratio (OR) for binary outcomes and the Hodges-Lehmann median of pairwise differences for continuous outcomes. As a sensitivity analysis, we analysed the binary and count data outcomes adjusting for patient age using logistic regression models or quasi-Poisson regression models, summarising the effects as adjusted odds ratios and adjusted incidence rate ratios respectively. Data were analysed using the intention-to-treat principle. The main findings did not change using per-protocol analysis. No imputation for missing data was performed. All data analyses were done using R [24].
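As an illustration only, the minimal sketch below (in Python, using scipy) recomputes the unadjusted odds ratio and Fisher's exact p value for the primary outcome from counts back-calculated from the reported 6-month percentages; the counts are approximations and not the trial dataset (the trial analyses were done in R).

```python
# Minimal sketch of the primary-outcome analysis (Fisher's exact test, odds ratio).
# Counts are back-calculated from the reported percentages and are approximate.
from scipy.stats import fisher_exact

uti_free_acu, n_acu = 38, 64   # ~59% UTI-free in the acupuncture group
uti_free_ctl, n_ctl = 27, 59   # ~46% UTI-free in the control group

table = [[uti_free_acu, n_acu - uti_free_acu],
         [uti_free_ctl, n_ctl - uti_free_ctl]]

odds_ratio, p_value = fisher_exact(table)          # two-sided by default
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")  # roughly OR 1.72, p = 0.2
```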
The study was approved by the local Ethics Committees and all participants provided written informed consent.
Results
Between March 2015 and April 2021, a total of 137 patients were enrolled. Sixty-eight women were allocated to acupuncture treatment and 69 women to the control group (Fig. 3). Patient characteristics of the two study groups were comparable (Table 1).
The primary endpoint, occurrence of UTIs at 6 months, could be assessed in 123 patients (64 in the acupuncture group, 59 in the control group). Drop-outs were more common in the control group and among younger patients, with a median age of the drop-outs of 34.0 years in both groups. In the acupuncture group, the proportion of UTI-free patients at 6 months was 59% compared with 46% in the control group (OR 1.72; 95% CI 0.80-3.77; p = 0.2). The difference between study groups increased at 12 months' follow-up, with significantly more patients being UTI-free between 6 and 12 months in the acupuncture group (66% vs 45%; OR 2.38; 95% CI 1.08-5.37; p = 0.03), as shown in Table 2. In comparison with the 12 months preceding study inclusion, the rate of UTIs decreased markedly in both study groups at both 6 months' and 12 months' follow-up (Fig. 4). The number of UTIs at 12 months was significantly lower in the acupuncture group than in the control group (median difference 1, p = 0.01). Owing to the unequal drop-out rates and the resulting unforeseen age imbalance between the groups, we additionally conducted a sensitivity analysis by adjusting for patient age using logistic regression models or quasi-Poisson regression models. Adjusting for this important covariate did not qualitatively change the main findings. The percentage of women requiring antibiotic treatment decreased noticeably from pretreatment (97% vs 99%) to 6 months (22% vs 37%) and to the period between 6 and 12 months (14% vs 27%) in the acupuncture group and the control group respectively. The study groups did not differ in terms of the types of antibiotic treatment regimens received, which consisted of fosfomycin (19%), nitrofurantoin (13%), trimethoprim (10%), cefuroxime (8%) or other antibiotics (6%), or were unknown (44%).
Fifty-six patients (93%) randomised to acupuncture received all 12 treatments, 4 patients (6%) received between 9 and 11 treatments, and in 8 patients the actual number of treatments was not recorded. In the control group, 1 woman received acupuncture treatments during the study period. The majority of patients in both study groups used cranberry products on a regular daily basis during the 6-month study period (median frequency 1.0 (0.9-1.0) vs 1.0 (0.75-1.0), p = 0.6). According to the study diaries and follow-up assessments, 67% used Alpinamed® Cranberry Granulate, 13% Preisel-Caps® tablets and 20% Urgenin® tablets, up to once (59%), twice (37%) or three times daily (4%). During days with urinary symptoms, patients changed cranberry intake to once (20%), twice (59%) and three times or more daily (20%).
Treatment satisfaction at 6 months could be assessed in 122 women (89%). Overall treatment satisfaction was significantly higher in the acupuncture group (median CSQ8 sum score 32 vs 29, p < 0.001), with 88% being very satisfied and 77% judging their treatment as being very helpful for their urinary symptoms, compared with 55% and 57% in the control group respectively. The KHQ was completed by 127 women (93%) at baseline and 75 (55%) at 6 months' follow-up, with no significant differences in subscales between study groups.
No adverse events were observed or reported during the entire study period.
Discussion
This was the first study to analyse the efficacy of segmental acupuncture in combination with auriculotherapy in women with recurrent UTIs in the course of a randomised controlled trial with 12 months' follow-up. We found that acupuncture combined with regular cranberry intake in women with recurrent UTIs reduced the risk of subsequent UTIs, compared with cranberry intake only. Although the primary outcome, i.e. the proportion of UTI-free women at 6 months, did not show a statistically significant difference between the groups, the secondary study outcomes provided important clinical findings. At 12 months the proportion of UTI-free women was significantly higher in the acupuncture group than in the control group, suggesting a long-term benefit of acupuncture. Acupuncture was well accepted, with 93% of participants attending all treatment sessions and no adverse events being recorded.
The study was designed according to the CONSORT and STRICTA guidelines and was performed with a high standard of conduct from randomisation to acupuncture treatment, data management and statistical analysis. Objective outcomes, i.e. number of UTIs and antibiotic use, were combined with subjective outcomes, i.e. HrQoL and treatment satisfaction. The use of cranberry products, the most commonly used self-medication, was monitored with the use of study diaries. Although three different cranberry products were provided, the daily total PAC doses were comparable. The equivalent and regular intake in both study groups provided additional information regarding the effectiveness of regular PAC (Fig. 4), although this was not a predefined study outcome.
Our results are in line with the findings of the two previous RCTs studying the effect of acupuncture on the prevention of recurrent UTIs. In the study by Alraek et al., 73% of women in the acupuncture group were free of UTIs at 6 months compared with 52% of women in the control group with no treatment [16]. Positive results were also reported by Aune et al., with 85% of women being free of UTIs at 6 months in the acupuncture group, compared with 58% in the sham group and 36% in the control group [15].
In both Norwegian studies [15,16], acupuncture points were chosen according to the patients' individual TCM diagnosis with exact localisation of points and attention to "deqi" needling. In our study we used segmental acupuncture, which can be applied according to standardised treatment protocols and can be learned and used easily without acquiring deeper knowledge of TCM. Segmental acupuncture is based on the segmental anatomy presented by Henry Head in 1894. As vertebrates, humans are arranged metamerically, i.e. the spinal cord at each vertebra corresponds to a specific segment. All segments show an identical structure: a spinal nerve (neurotome) runs from the spinal cord peripherally to the skin (dermatome) and has branches to muscles (myotome), bones (sclerotome), organs (viscerotome) and the vegetative nervous system (sympathetic nerve). It is assumed that the acupuncture stimuli proceed segmentally and follow the traditional TCM meridians only to a certain degree [21]. This neurophysiological explanation of the effects of acupuncture has become increasingly recognised in Chinese and Western literature [25,26]. The long-term effect of acupuncture may be explained in terms of an immune-modulating effect [17,20]. We did not use sham acupuncture in our control group, because sham needling on overlapping dermatomes has been shown to produce clinically relevant effects [27,28].
In our study cranberry products were provided free of charge for the first 6 months and monitored with study diaries in all study participants in order to increase compliance and minimise uncontrolled self-medication. According to the patients' diaries, cranberry use was sustained and very regular, and possibly higher than in a general population after medical recommendation. This may explain the good treatment result in the control group (cranberry use only). Cranberries contain two compounds with anti-adherence properties, the so-called Vaccinium macrocarpon and proanthocyanidin components. These components are thought to prevent fimbriated Escherichia coli from adhering to uroepithelial cells by inhibiting the synthesis of P fimbriae and by deforming the cell body of the bacterium [29]. Our results are in line with the latest Cochrane review update, which found that the use of cranberry products reduces the risk of symptomatic, culture-verified UTIs in women with recurrent UTIs [14].
Several study limitations need to be considered. It is possible that the increased interaction that the acupuncture group received may have influenced the study outcome. Although we tried to minimise conversations during acupuncture sessions, we cannot exclude that some additional counselling occurred. Although drop-outs were anticipated, the percentage of participants lost to follow-up was higher in the control group (16%) than in the acupuncture group (6%). Drop-outs were younger than the overall study sample, which must be considered in the interpretation of this study as a possible source of attrition bias. However, unplanned sensitivity analyses indicate that the results do not change even after adjustment for patient age. The lack of blinding may have influenced the higher rate of reported urinary symptoms and the higher drop-out rate in the control group than in the acupuncture group. Some patients did not attend their follow-up visits but were contacted via telephone at a later stage, and limited recall may have occurred. KHQ data were partly incomplete and interpretation is therefore limited. Furthermore, the positive finding favouring segmental acupuncture in the secondary outcomes should be interpreted as an explorative result.
In conclusion, acupuncture may be an effective treatment option for women with recurrent UTIs over a longer follow-up period. Further studies will need to investigate the use of segmental acupuncture, with and without auriculotherapy, versus cranberry intake, and clarify the therapeutic role of acupuncture and the minimum number of acupuncture sessions needed. Future investigation may also include a sham-controlled trial. The current results are promising for further integration of acupuncture into conventional medicine.
None of the funders had a role in conducting the research or writing the paper.
Fig. 3. Consolidated Standards of Reporting Trials flow chart of study participants.
Table 1
Baseline characteristics of the study population
Table 2
Study outcomes at 6 and 12 months. UTIs were diagnosed by health care providers using a dipstick test. UTI, urinary tract infection. * p values derived from Fisher's exact test and Mann-Whitney U test respectively. a Numbers represent groups at 6 months and at 12 months.
Long distance adiabatic wireless energy transfer via multiple coils coupling
Recently, it has been shown that the wireless energy transfer model can be described by the Schrödinger equation [Annals of Physics, 2011, 326(3): 626-633; Annals of Physics, 2012, 327(9): 2245-2250]. Wireless energy transfer can therefore be designed with coherent quantum control techniques, which can achieve efficient and robust energy transfer from the transmitter to the receiver device. In this paper, we propose a novel design of wireless energy transfer that achieves a longer-distance, efficient and robust scheme of power transfer via a multiple-state triangle crossing pattern. Our calculations demonstrate that this design can provide a much longer transfer distance with a relatively small decrease in the transfer efficiency.
I. INTRODUCTION
Research on wireless energy transfer dates back to the age of Tesla. Wireless energy transfer provides power transfer from one device (transmitter) to another device (receiver) via an electromagnetic field, and the major advantage of this technique is power transfer without wires or cables [1]. Owing to this great advantage, wireless energy transfer has significant applications in the charging of electric vehicles [2,3], mobile phones [4], lighting [5], implantable medical devices [6,7] and other power supply devices [8,9].
The normal setup of wireless energy transfer is based on two resonant coils with the constraint of exact resonance between the transmitter coil and the receiver coil [10-13]. Thanks to the coupled mode theory (CMT) of wireless energy transfer [13], the coupling equations of the coils can be approximated by the Schrödinger equation. Therefore, coherent quantum control techniques can be employed to control power transfer between the coils. Recently, adiabatic techniques based on two-coil [19] and three-coil [20] coupling have been proposed to enhance the transfer efficiency compared with the exact resonance method. Adiabatic following is a well-known coherent quantum control technique, which is widely used in quantum and classical systems, such as quantum state control [14], quantum many-body processing [15], optical waveguide couplers [16], and graphene surface plasmon polariton (SPP) and terahertz SPP couplers [17,18].
The coupling strength between two coils decreases exponentially with increasing distance, based on the CMT of the coupling between coils [13]. Therefore, a limitation of previous research is that the transmission distance is relatively short, owing to the two- or three-coil coupling. In this paper, we propose a novel design of adiabatic wireless energy transfer via a multi-coil system. In our design, we set up multiple identical mediator coils between the transmitter and receiver coils. Adjacent coils are equally spaced, which provides equal coupling strength between any two adjacent coils. The scheme of our designed configuration is illustrated in Fig. 1. This configuration offers two significant advances over previous research: (i) it extends the power transfer distance compared with adiabatic following based on two- or three-coil coupling, thanks to the multiple mediator coils; (ii) it improves the transfer efficiency compared with the exact resonance of a multi-coil system.
In this paper, we design a novel long-distance, efficient and robust wireless energy transfer scheme via a multi-state triangle crossing pattern, which provides complete power transfer from the transmitter coil to the receiver coil via multiple mediator coils. We demonstrate that complete power transfer can be achieved independent of the number of coils when losses in the coils are neglected (see Fig. 2). Subsequently, we consider the lossy case (absorption and radiation in all coils and the extraction rate from the receiver coil) with different numbers of coils and numerically calculate the transfer efficiency η (as shown in Fig. 3). Finally, we plot the transfer efficiency η against the number of coils for different loss parameters (loss in the mediator coils and extraction rate), as shown in Fig. 4.
II. MODEL
Based on the CMT of the coupling between coils, the coupled equations of the multi-coil system can be expressed as a multi-state Schrödinger equation, written as

$$i \frac{d}{dt}\,\mathbf{a}(t) = H(t)\,\mathbf{a}(t), \qquad (1)$$

where $\mathbf{a} = (a_1, a_2, \ldots, a_N)^T$.

[Fig. 1. The schematic configuration of our designed multiple-coil system. The coupling strength between two adjacent coils is k_0, and the intrinsic frequencies of the transmitter, mediators and receiver are ω_t, ω_m and ω_r, respectively. Γ_t, Γ_m and Γ_r are the corresponding intrinsic loss rates, due to absorption and radiation of the coils, while Γ_w is the rate of extraction of work from the receiver.]
Here $a_n$ is the amplitude on the $n$-th coil, with power $P_n = |a_n|^2$, and $H$ is the Hamiltonian of the multi-coil coupling system,

$$H(t) = \begin{pmatrix} \omega_t - i\Gamma_t & k_0 & & \\ k_0 & \omega_m - i\Gamma_m & \ddots & \\ & \ddots & \ddots & k_0 \\ & & k_0 & \omega_r - i(\Gamma_r + \Gamma_w) \end{pmatrix}, \qquad (2)$$

where $\omega_t$, $\omega_m$ and $\omega_r$ are the intrinsic frequencies of the transmitter, mediators and receiver, with $\omega_i = 1/\sqrt{L_i(t)\,C_i(t)}$. $L_i(t)$ and $C_i(t)$ are the inductance and the capacitance of the $i$-th coil. $\Gamma_t$, $\Gamma_m$ and $\Gamma_r$ are the corresponding intrinsic loss rates, due to absorption and radiation of the coils, while $\Gamma_w$ is the rate of extraction of work from the receiver. In addition, the coupling strength between the $(i-1)$-th and $i$-th coils is given by CMT as

$$k = \frac{M\sqrt{\omega_{i-1}\,\omega_i}}{2\sqrt{L_{i-1} L_i}},$$

where $M$ is the mutual inductance of the two coils. In our design (the scheme shown in Fig. 1), we choose variable inductances $L$ and capacitances $C$ to vary the intrinsic frequencies of the transmitter $\omega_t$ and receiver $\omega_r$ via external control, while the intrinsic frequency of all mediator coils is constant, $\omega_m$. We set up the transmitter coil, mediator coils and receiver coil with fixed spacing. Therefore, the coupling strength between two adjacent coils is a constant (time-independent) $k_0$.
Subsequently, it is well known that the transfer efficiency $\eta$ is defined as the work extracted from the receiver coil divided by the total energy lost by the system [19,20], given by

$$\eta = \frac{\Gamma_w \int |a_r|^2\,dt}{\Gamma_t \int |a_t|^2\,dt + \Gamma_m \sum_{\text{mediators}} \int |a_m|^2\,dt + (\Gamma_r + \Gamma_w) \int |a_r|^2\,dt}.$$

Previous research has already shown complete quantum transitions between three quantum levels via the triangle crossing pattern [21,22], which forces the quantum evolution to follow one adiabatic state of the system. That Hamiltonian is a special (three-level) case of our designed Hamiltonian $H$ (see Eq. 2); our Hamiltonian $H$ describes a multi-state chain of $N$ quantum states in which each state couples only to its two neighbours, such that $1 \leftrightarrow 2 \leftrightarrow \ldots \leftrightarrow N-1 \leftrightarrow N$. In this configuration, we can map our multi-level quantum system onto an effective three-level system with a dressed middle state [23]. Therefore, we can produce complete population transfer in our designed multiple-quantum-state system $H$ (as shown in Eq. 2) with the triangle crossing pattern, in which the frequencies of the transmitter and receiver coils change in time in opposite directions while the frequencies of the mediators are constant, given by [20]

$$\omega_t = \omega_m + \delta - \alpha^2 t, \qquad \omega_m = \text{constant}, \qquad \omega_r = \omega_m - \delta + \alpha^2 t. \qquad (4)$$

Therefore, we can employ this frequency-shift configuration to produce complete power transfer from the transmitter to the receiver in the adiabatic-following setup. There are three transfer patterns within the triangle crossing pattern: sequential transfer ($\delta > 0$), bow-tie transfer ($\delta = 0$) and direct transfer ($\delta < 0$). Previous research has already shown that the direct transfer pattern has the best transfer efficiency $\eta$ [20]. Thus, we use the direct transfer pattern of the triangle crossing with $\delta < 0$ in this paper.
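For readers who wish to reproduce the qualitative behaviour of the scheme, the following minimal sketch integrates Eq. (1) with a fixed-step fourth-order Runge-Kutta method, using the tridiagonal Hamiltonian of Eq. (2) in a rotating frame where ω_m = 0, the frequency sweep of Eq. (4) and the efficiency definition above. The parameter values (δ = −7α, k_0 = 3.5α and the loss rates quoted in Sec. III) follow the text; everything else (step count, integrator) is an illustrative assumption rather than the code behind the paper's figures.

```python
import numpy as np

def hamiltonian(t, n, alpha, delta, k0, Gt, Gm, Gr, Gw):
    """Tridiagonal Hamiltonian of Eq. (2), in a frame where w_m = 0."""
    diag = np.full(n, -1j * Gm, dtype=complex)              # mediator coils
    diag[0] = (delta - alpha**2 * t) - 1j * Gt              # transmitter, Eq. (4)
    diag[-1] = (-delta + alpha**2 * t) - 1j * (Gr + Gw)     # receiver, Eq. (4)
    return np.diag(diag) + k0 * (np.eye(n, k=1) + np.eye(n, k=-1))

def simulate(n=4, alpha=1.0, delta=-7.0, k0=3.5,
             Gt=0.001, Gr=0.001, Gm=0.01, Gw=0.05,
             t0=-50.0, t1=50.0, steps=20000):
    """Integrate i da/dt = H(t) a with RK4; return times, powers, efficiency."""
    f = lambda t, y: -1j * hamiltonian(t, n, alpha, delta, k0, Gt, Gm, Gr, Gw) @ y
    a = np.zeros(n, dtype=complex)
    a[0] = 1.0                                  # all power starts on the transmitter
    h = (t1 - t0) / steps
    t, ts, powers = t0, [t0], [np.abs(a)**2]
    for _ in range(steps):
        k1 = f(t, a); k2 = f(t + h/2, a + h/2*k1)
        k3 = f(t + h/2, a + h/2*k2); k4 = f(t + h, a + h*k3)
        a = a + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        t += h
        ts.append(t); powers.append(np.abs(a)**2)
    ts, P = np.array(ts), np.array(powers)
    # Transfer efficiency: extracted work over total dissipated energy.
    mediators = P[:, 1:-1].sum(axis=1)
    extracted = Gw * np.trapz(P[:, -1], ts)
    lost = (Gt * np.trapz(P[:, 0], ts) + Gm * np.trapz(mediators, ts)
            + (Gr + Gw) * np.trapz(P[:, -1], ts))
    return ts, P, extracted / lost

ts, P, eta = simulate()
print(f"transfer efficiency eta = {eta:.2f}")
```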
III. LONG DISTANCE POWER TRANSFER
Subsequently, we numerically integrate the coupled equations (Eq. 1) to calculate the power evolution of the multiple-coil coupling system, to demonstrate the power transfer from transmitter to receiver and to illustrate the transfer efficiency η of our design. First, we demonstrate the power evolution of the transmitter and receiver without any loss (Γ_t = Γ_r = Γ_m = Γ_w = 0), in order to illustrate complete power transfer with our design in the multiple-coil system. We demonstrate this feature with the parameters δ = −7α, k_0 = 3.5α and three, four and five coils respectively, as shown in Fig. 2. From the results, we observe complete power transfer from the transmitter coil to the receiver coil via the mediator coils, regardless of the number of mediator coils. This feature means that the transfer efficiency η of our design is larger than in the exact resonance case, because in the exact resonance case the energy in the receiver coil is transferred back to the transmitter coil (so-called Rabi oscillations).
Fig. 2 demonstrates that our design works in principle. Next, we numerically calculate the lossy case with the parameters: intrinsic loss (mainly absorption) of the transmitter and receiver coils Γ_t = Γ_r = 0.001α, intrinsic loss (mainly radiation) of the mediator coils Γ_m = 0.01α, and extraction rate Γ_w = 0.05α, which is representative of a realistic scenario. We obtain the numerical results of the power evolution |a_t|^2 and |a_r|^2 with loss and plot the transfer efficiency η over the time interval from −50α^{−1} to 50α^{−1}, as shown in Fig. 3 for the (a) three-coil, (b) four-coil and (c) five-coil systems. As can be seen, the power on the transmitter |a_t|^2 drops steadily in time, as power flows continuously from the transmitter to the receiver. The power on the receiver coil first increases and then declines exponentially because work is extracted from the receiver. In addition, as the number of coils increases, the transfer efficiency η (orange line) decreases, owing to the larger loss in the mediator coils. Furthermore, we plot the transfer efficiency η against the number of coils for different mediator losses Γ_m and extraction rates Γ_w. The transfer efficiency depends strongly on the number of coils, Γ_m and Γ_w; increasing the number of coils decreases the transfer efficiency η. When there are multiple mediator coils, the power flows through the mediator coils, where energy is dissipated. Therefore, improving the quality of the mediator coils (decreasing Γ_m with fixed Γ_w = 0.05α) and increasing the extraction rate Γ_w (with fixed Γ_m = 0.01α) enhances the transfer efficiency, as shown in Fig. 4.
IV. CONCLUSION
In this paper, we propose a novel wireless energy transfer scheme that provides long-distance, efficient and robust power transfer via multiple mediator coils, based on multi-state triangle-crossing-pattern quantum coherent control. Our numerical calculations illustrate that the design can provide a much longer transfer distance (for example, the transfer distance can increase up to 2 times) with a relatively small decrease in the transfer efficiency η (the transfer efficiency η drops from 87% to 53%).
Pickup position and plucking point estimation on an electric guitar
This paper describes a technique to estimate the plucking point and magnetic pickup location along the strings of an electric guitar from a recording of an isolated guitar tone. The estimated values are calculated by minimising the difference between the magnitude spectrum of the recorded tone and that of an electric guitar model based on an ideal string. The recorded tones that are used for the experiment consist of a direct input electric guitar played on all six open strings and played moderately loud (mezzo-forte). The technique is able to estimate pickup locations with 7.75–9.44 mm average absolute error and plucking points with 10.45–10.97 mm average absolute error for single and mixed pickups.
I. INTRODUCTION
Several papers in the literature have dealt with analysing and synthesising plucked string instruments, particularly acoustic 1,2 and electric guitars. 3,4 In this paper, we focus on the analysis of electric guitar sounds. The motivation for this work is to understand the factors that influence the sound of popular guitarists, in order to be able to replicate their sound by extracting the relevant parameters from their recordings.
A number of parameters determine the timbre of the electric guitar. For instance, an electric guitar sound can be altered immensely by selecting different combinations of amplifier, loudspeaker cabinet, and effects. Case et al. 5 describe how the combination of the electric guitar, amplifier, and recording techniques enables musicians and recording engineers to define and refine their tone, and to explore new sounds as desired. The tone can be further varied by adjusting the parameters of the various elements in the chain. Moreover, the way the musician plays, for example, the strength and the location of the pluck, also influences the sound.
It is well known that the plucking point and pickup position produce a comb-filtering effect on the spectrum of the electric guitar. 4,6,7 To synthesise a realistic electric guitar sound requires careful choice of these parameters. For modelling realistic playing in acoustic guitar synthesis, Laurson et al. 2 incorporate the comb-filtering effect caused by the plucking point into the excitation signal, in order to provide better control over the timbre. Recent papers introduce techniques to model the physical interactions of the player with the guitar to produce a more realistic guitar sound, such as modelling the interactions of the guitar pick 8,9 or fingers 9 with the string, and the fingers with the fretboard. 9,10 When the pickup selector of an electric guitar is switched, the difference in the sound is recognisable. Furthermore, the positioning of pickups on particular electric guitar models contributes to their unique sound. Thus, estimating the precise location of the magnetic pickup of an electric guitar could possibly help distinguish which pickup configuration is selected for a known guitar, or which electric guitar model is played for an unknown guitar (e.g., Fender Stratocaster or Gibson Les Paul, etc.). Popular electric guitars have different pickup locations, thus, estimating the locations could help musicologists in determining which guitar is used in a recording where there is little information about the original instrument and/or its pickup selection.
To date, there are few papers on extracting information from electric guitar recordings, such as classifying the types of effects used 11 and estimating the decay time of electric guitar tones. 12 Other research involved extracting information from related string instruments, such as extracting plucking styles and dynamics for classical guitar 13 and electric bass guitar. 14 Papers that dealt with estimating the plucking point of a classical guitar have used both frequency-domain 15,16 and time-domain 17 approaches. This paper extends recent research on estimating the pickup position and plucking point of electric guitar tones. 18 The parameters are estimated using a frequency-domain approach, where the parameters of the electric guitar model that best fit the observed data are chosen. In this paper, we propose an improved method to estimate the locations of the pickup and plucking events based on the autocorrelation of the spectral peaks.
The paper is organised as follows: Sec. II explains the datasets that are used in this paper. The derivation of an ideal string model that includes a pickup model is explained in Sec. III and we extend the existing models in Sec. IV. In Sec. V, we introduce a method to estimate the plucking point and pickup position given a direct input audio recording of individual tones played on the electric guitar. We evaluate our method on two datasets: (1) we evaluate the accuracy of the estimates for tones played mezzo-forte on open strings using either single or mixed pickups in Sec. VI and (2) we evaluate the effects on the accuracy when different plucking dynamics and frets are played in Sec. VII. Finally, the conclusions are presented in Sec. VIII.
II. DATASETS
In this paper, we use two datasets, which are designed to (1) test the accuracy of our algorithms on single and mixed pickups; and (2) test the effects of different plucking dynamics and fret positions.
For the first dataset, we record (one instance for each combination) moderately loud isolated tones played at eight plucking points, on each of the six open strings, using five different pickup selections (three single and two mixed) on a Stratocaster model guitar manufactured by Squier. The Squier Stratocaster is modified so that the electric guitar can be recorded from three single pickups simultaneously. 19 Note that the mixed pickup selections are recorded on a separate occasion. The plucking points range from 30 to 170 mm from the bridge with 20 mm intervals and the strings are plucked using a 0.88 mm thick plastic plectrum. Figure 1 shows where the plucking events occur. The pickup selector allows us to select single pickups or mixed pickups. The single pickups consist of neck pickup, middle pickup, and bridge pickup. The two mixed pickups are a mix between neck and middle pickup and a mix between middle and bridge pickup, where all pickups are in-phase.
The second dataset is taken from Mohamad et al., 19 which consists of isolated tones played at three plucking points (above each pickup) with three single pickup configurations and three plucking dynamics, played on open and fretted strings (fifth fret and twelfth fret), with three repetitions of each condition.
All samples (first and second dataset) were recorded at 44 100 Hz sampling rate with the same electric guitar, string gauges, plectrum and recording equipment. The lengths of each string differ slightly due to the different positions of each bridge saddle. The measurements of the length of string and pickup locations are shown in Table I. The pickup locations are measured from each bridge saddle to the middle of the pickup, where the string is most strongly sensed.
III. ELECTRIC GUITAR MODEL BASED ON IDEAL STRING EQUATION
In this section, we discuss the theoretical background of an electric guitar model based on an ideal plucked string equation.
A. Ideal string model

From the point a guitar string is plucked, waves travel in two opposite directions along the string, propagating away from the plucking point. The waves are then reflected from the end supports of the string, producing a standing wave in the string.
The amplitude spectrum of the ideal string model can be derived by integrating the initial geometrical form of the plucked string (the initial form of the string is assumed to have a triangular shape).

TABLE I. String lengths and pickup positions, in mm, measured from the bridge saddle to the middle of each pickup.
String | Length | Bridge pickup | Middle pickup | Neck pickup
First, E4 | 649 | 38 | 99 | 157
Second, B3 | 650 | 41 | 100 | 158
Third, G3 | 652 | 45 | 102 | 160
Fourth, D3 | 651 | 46 | 101 | 159
Fifth, A2 | 652 | 49 | 102 | 160
Sixth, E2 | 650 | 49 | 100 | 158

The Fourier series coefficients $\hat{C}_k$ of a string of length L plucked at a point q from the bridge with a vertical displacement a are given by 6

$$\hat{C}_k = \frac{2a}{k^2 \pi^2 R_q (1 - R_q)} \sin(k\pi R_q), \qquad (1)$$

where a is the amplitude of the pluck, k is the harmonic number and $R_q = q/L$. For example, plucking one-third of the distance along the string results in every third harmonic having zero amplitude. Note that in the ideal string model, the end supports are assumed to be rigid and no energy is lost.
B. Velocity of ideal string
A typical electric guitar uses magnetic pickups to sense the vibration of its strings and convert it into electrical signals in order to produce sound. The magnetic pickup senses the velocity of the string, 20,21 therefore, modelling the electric guitar string requires a time derivative of the ideal string model. The velocity of an ideal string sensed at a single point d from the bridge is given by 12

$$v(d, t) = -\sum_{k} \hat{C}_k\, \omega_k \sin\!\Big(\frac{k\pi d}{L}\Big) \sin(\omega_k t), \qquad \omega_k = \frac{k\pi c}{L}, \qquad (2)$$

where $c = \sqrt{r/M}$ is the transverse wave speed (r and M are the string's tension and mass per unit length, respectively). The effect on the timbre due to the pickup placement and plucking point can be understood via its spectrum. The Fourier series coefficients of the velocity of the ideal string sensed at a single point, $\hat{V}_k$, can be computed as

$$\hat{V}_k = \hat{C}_k\, \omega_k \sin(k\pi R_d) = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)\sin(k\pi R_d)}{k R_q (1 - R_q)}, \qquad R_d = \frac{d}{L}. \qquad (3)$$

For example, Fig. 2 shows the spectrum of the electric guitar model plucked at one-third of the string length with the pickup placed at one-fifth of the string length. Notice that in Fig. 2, harmonics at multiples of $k_1 = L/d$ and $k_2 = L/q$ are suppressed. This effect is what makes a neck pickup sound warmer than a bridge pickup, as more of the harmonics are not sensed or are only weakly sensed.
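As a quick numerical check of Eqs. (1) and (3), the following sketch evaluates the harmonic amplitudes of the single-point velocity model up to an overall scale factor and verifies that harmonics at multiples of k_1 = L/d and k_2 = L/q vanish; the plucking point and pickup position used are the illustrative values of Fig. 2, not measured data.

```python
import numpy as np

def velocity_spectrum(K, L, q, d):
    """Harmonic amplitudes |V_k| of Eq. (3), up to a constant scale factor."""
    k = np.arange(1, K + 1)
    Rq, Rd = q / L, d / L
    C = np.sin(k * np.pi * Rq) / (k**2 * Rq * (1 - Rq))  # displacement, Eq. (1)
    return np.abs(k * C * np.sin(k * np.pi * Rd))        # velocity adds k*sin(k*pi*R_d)

L = 648.0                                    # string length in mm (illustrative)
V = velocity_spectrum(K=25, L=L, q=L / 3, d=L / 5)
print(np.where(V < 1e-12)[0] + 1)            # suppressed harmonics: multiples of 3 and 5
```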
C. Pickup mixing effect
An electric guitar commonly has an option to mix two pickups together. Tillman 22 and Paiva et al. 7 studied the effect of mixed pickups. The electric guitar model in Eq. (3) can be extended to include mixing two pickups at distances $d_1$ and $d_2$ along a string of length L, assuming that both pickups sense at a single point: 18

$$\hat{V}_k = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\, S_l^{+}, \qquad S_l^{+} = S_{d_1} + S_{d_2} = \sin(k\pi R_{d_1}) + \sin(k\pi R_{d_2}), \qquad (4)$$

which is the sum of two sine functions and can be further simplified using a trigonometric identity:

$$S_l^{+} = 2 \sin(k\pi R_i)\cos(k\pi R_j), \qquad R_i = \tfrac{1}{2}(R_{d_1} + R_{d_2}), \qquad R_j = \tfrac{1}{2}(R_{d_1} - R_{d_2}). \qquad (5)$$

Note that a mixed pickup signal produces a sine function that relates to the average of the two pickup locations, $i = (d_1 + d_2)/2$, and a cosine function that relates to half of the distance between the two pickup locations, $j = (d_1 - d_2)/2$. If the mixed pickups have opposite phases, this can be modelled as

$$\hat{V}_k = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\, S_l^{-}, \qquad (6)$$

where

$$S_l^{-} = S_{d_1} - S_{d_2} = 2 \cos(k\pi R_i)\sin(k\pi R_j) \qquad (7)$$

represents two mixed out-of-phase pickups. The in-phase connection of the two pickups is more typically used than the out-of-phase connection. 7

D. Plucking mechanism width effect on single pickup

An electric guitar string is usually plucked with a finger or plectrum of finite width δ. Previously, the electric guitar model in Eq. (3) assumed that the string is plucked with a plectrum of infinitesimally small width. The effect of the width of the plucking mechanism δ on the velocity of an ideal string sensed at a single point is given by 6

$$\hat{V}_k = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\sin(k\pi R_d)\, D_k, \qquad D_k = \frac{\sin\!\big(k\pi\delta/(2L)\big)}{k\pi\delta/(2L)}, \qquad (8)$$

where the plucking width lowers the level of the high harmonics, causing a low-pass filtering effect by introducing a 6 dB/octave rolloff above the mode number $k = 2L/(\pi\delta)$; harmonics above mode number $k_\delta = 2L/\delta$ are not excited. 23 Hence, this limits the spectrum to $k < k_\delta$ harmonics.
IV. EXTENDING THE EXISTING ELECTRIC GUITAR MODEL
In this section, we extend the ideal electric guitar string model to include the pickup width effect for single and mixed pickups.
A. Pickup width effect for a single pickup
The pickup senses the velocity of a string over an area (with a finite width w) rather than at a single point. Hence, the electric guitar model in Eq. (8) can be further extended as

$$\hat{V}_k = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\, D_k\, \frac{1}{w}\int_{d-w/2}^{d+w/2} \sin\!\Big(\frac{k\pi x}{L}\Big)\, dx. \qquad (9)$$

Evaluating the integral gives

$$\frac{1}{w}\int_{d-w/2}^{d+w/2} \sin\!\Big(\frac{k\pi x}{L}\Big)\, dx = \sin(k\pi R_d)\, W_k, \qquad (10)$$

and substituting into Eq. (9) yields

$$\hat{V}_k = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\, D_k\, \sin(k\pi R_d)\, W_k, \qquad (11)$$

where

$$W_k = \frac{\sin\!\big(k\pi w/(2L)\big)}{k\pi w/(2L)}.$$

This effect adds a 6 dB/octave rolloff above the mode number $k = 2L/(\pi w)$, where harmonics above mode number $k_w = 2L/w$ exhibit very little excitation. The area sensed is assumed to have a rectangular shape, whereas in practice, the string is more strongly sensed around the middle of the pickup than at the ends. Paiva et al. model the pickup width effect with a Hamming window. 7 Note that the pickup width effect is similar to the plucking width effect, in that a wider pickup sensitivity lowers the level of high harmonics. Combining both width effects, the limit of the spectrum is reduced to $k < \min(k_\delta, k_w)$.
B. Final electric guitar model
The final electric guitar model can be computed by introducing the pickup width and plucking width effects into the mixed pickup model, by substituting Eq. (10) into Eq. (4) and adding the plucking width factor from Eq. (8), where $w_1$ and $w_2$ are the widths of the two pickups:

$$\hat{V}_k = \frac{2ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\, D_k \Big[\sin(k\pi R_{d_1})\, W_k^{(1)} + \sin(k\pi R_{d_2})\, W_k^{(2)}\Big]. \qquad (12)$$

Typically, a mixed pickup such as a humbucker has two pickups with the same width. If both widths are equal such that $w_1 = w_2 = w$, the model can be simplified to

$$\hat{V}_k = \frac{4ac}{\pi L} \frac{\sin(k\pi R_q)}{k R_q (1 - R_q)}\, D_k\, W_k\, \sin(k\pi R_i)\cos(k\pi R_j). \qquad (13)$$

Figure 3 shows two spectra of the final electric guitar model with different pickup widths, illustrating how a greater pickup width lowers the amplitude of higher harmonics.
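The sketch below implements the final model of Eqs. (12) and (13) up to an overall scale factor. The string geometry is taken from Table I (fifth string, neck and middle pickups) with a plucking point of 110 mm, one of the dataset's pluck positions; the pickup width of 10 mm is an assumed illustrative value, and the 0.88 mm plucking width simply reuses the plectrum thickness from Sec. II as a stand-in.

```python
import numpy as np

def sinc_factor(k, width, L):
    """Rolloff factor sin(x)/x with x = k*pi*width/(2L), used for D_k and W_k."""
    x = k * np.pi * width / (2 * L)
    return np.sinc(x / np.pi)              # np.sinc(y) = sin(pi*y)/(pi*y)

def final_model(K, L, q, d1, d2, w, delta, in_phase=True):
    """Harmonic magnitudes of the equal-width mixed-pickup model (Eq. 13), up to scale."""
    k = np.arange(1, K + 1)
    Rq = q / L
    Ri, Rj = (d1 + d2) / (2 * L), (d1 - d2) / (2 * L)
    pluck = np.sin(k * np.pi * Rq) / (k * Rq * (1 - Rq))
    Dk = sinc_factor(k, delta, L)          # plucking width factor, Eq. (8)
    Wk = sinc_factor(k, w, L)              # pickup width factor, Eq. (10)
    mix = (np.sin(k * np.pi * Ri) * np.cos(k * np.pi * Rj) if in_phase
           else np.cos(k * np.pi * Ri) * np.sin(k * np.pi * Rj))
    return np.abs(pluck * Dk * Wk * mix)

# Fifth string (A2): L = 652 mm, neck pickup 160 mm, middle pickup 102 mm (Table I).
V = final_model(K=25, L=652.0, q=110.0, d1=160.0, d2=102.0, w=10.0, delta=0.88)
```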
V. ESTIMATING PLUCKING POINT AND PICKUP POSITION
This section explains the methods to estimate the locations along the string of the selected guitar pickup(s) and where it is plucked. An overview of the whole system is shown in Fig. 4.
A. Onset time estimation
The onset time of the recorded tone is estimated using spectral flux. The spectral flux is the sum of positive changes in the magnitude of each frequency bin across all frequency bins for a frame. 24 The peaks in spectral flux are interpreted as possible onset times. Since we are dealing with single tones, we select the highest peak as the estimated onset time. We use a frame size of 11.6 ms with overlapping windows of 50%. A window of 46 ms starting from the onset time is then taken to determine the fundamental frequency $f_0$ of the recorded tone using autocorrelation. 25 The initial estimate of the onset time is typically just before the plucking noise, thus we refine the onset estimate to be closer to the end of the plucking event. Starting from the initial onset estimate, we take a time-domain window of size 4T samples, where $T = 1/f_0$, and perform peak detection. We discard peaks which are less than 20% of the maximum value in the window in order to avoid unwanted small peaks at the beginning of the tone due to the plucking noise.
To determine the start of the plucking event, we find the last zero crossing of the signal before the first peak by working backwards from the peak to the initial onset estimated earlier. Figure 5 shows an example of onset estimation.
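A minimal sketch of the onset-detection step follows: spectral flux over 11.6 ms frames with 50% overlap, the highest flux peak taken as the initial onset, then refinement to the last zero crossing before the first significant peak within a 4T window. The Hann analysis window and the exact peak-picking details are assumptions; the 20% threshold and the frame settings follow the text.

```python
import numpy as np

def spectral_flux_onset(x, sr, frame_ms=11.6, overlap=0.5):
    """Initial onset estimate: start (in samples) of the frame with largest flux."""
    n = int(round(sr * frame_ms / 1000.0))
    hop = int(n * (1 - overlap))
    win = np.hanning(n)                              # window type is an assumption
    mags = np.array([np.abs(np.fft.rfft(win * x[i:i + n]))
                     for i in range(0, len(x) - n, hop)])
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)  # positive changes only
    return (np.argmax(flux) + 1) * hop

def refine_onset(x, onset, T, rel_thresh=0.2):
    """Move the onset to the last zero crossing before the first significant peak."""
    seg = x[onset:onset + 4 * T]
    peaks = [i for i in range(1, len(seg) - 1)
             if seg[i] > seg[i - 1] and seg[i] > seg[i + 1]
             and seg[i] >= rel_thresh * seg.max()]   # drop small pluck-noise peaks
    if not peaks:
        return onset
    i = peaks[0]
    while i > 0 and seg[i] * seg[i - 1] > 0:         # walk back to a zero crossing
        i -= 1
    return onset + i
```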
B. Computing the amplitudes of spectral peaks
Once the time of the plucking event is found, we perform short-time Fourier transform (STFT) analysis on the signal using a Hamming window with support size of 3T samples and zero padding factor of 4. The window size is chosen to be as small as possible, in order to capture the initial conditions of the pluck before information is lost due to the uneven decay of harmonics. Figure 5 shows an example of such a window of three cycles for an electric guitar tone at pitch A2.
We then search for spectral peaks in windows of ±30 cents around the expected partial frequencies f_k = k f_0 √(1 + Bk²), where B is the inharmonicity coefficient for each string26,27 (using empirical measurements of B provided by Barbancho et al.28). The magnitudes of the spectral peaks are further refined using quadratic interpolation.29 Figure 6 shows the spectrum of the electric guitar tone from Fig. 5 with the detected spectral peaks represented by crosses.
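The partial search and quadratic refinement can be sketched as follows; the bin-offset formula is the standard parabolic interpolation over three dB values, and `mag_db` is assumed to be the dB magnitude of the zero-padded STFT frame described above.

```python
import numpy as np

def partial_peaks_db(mag_db, fs, nfft, f0, B, K, tol_cents=30.0):
    """Locate partials near f_k = k*f0*sqrt(1 + B*k^2) within +/- tol_cents,
    refining the peak amplitude by quadratic interpolation (sketch)."""
    freqs = np.arange(nfft // 2 + 1) * fs / nfft
    out = []
    for k in range(1, K + 1):
        fk = k * f0 * np.sqrt(1.0 + B * k * k)          # inharmonic partial
        lo, hi = fk * 2 ** (-tol_cents / 1200.0), fk * 2 ** (tol_cents / 1200.0)
        idx = np.where((freqs >= lo) & (freqs <= hi))[0]
        if idx.size == 0:
            out.append(np.nan)
            continue
        i = idx[np.argmax(mag_db[idx])]
        if 0 < i < len(mag_db) - 1:                     # parabolic refinement
            a, b, c = mag_db[i - 1], mag_db[i], mag_db[i + 1]
            p = 0.5 * (a - c) / (a - 2 * b + c)         # fractional bin offset
            out.append(b - 0.25 * (a - c) * p)          # interpolated peak (dB)
        else:
            out.append(mag_db[i])
    return np.array(out)
```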
The total number of harmonics K that we consider depends on a number of factors. If the number of harmonics is too low, we cannot properly estimate pluck or pickup positions that are close to the bridge. For instance, if we set K = 20 harmonics and the string length L is 648 mm, we cannot estimate any pluck or pickup positions below L/K = 32.4 mm. Also, the highest partial must remain below the Nyquist frequency; for example, if T is 66 samples, we cannot set K to be more than 33 harmonics. The number of harmonics also depends on the fret at which the string is stopped: the number of harmonics available on an open string is double the number for the same string played at the twelfth fret. Also, when the string is fretted, the string length is shortened but the pickup width remains constant, hence the number of harmonics available decreases [see Eqs. (11) and (13)]. Thus, we set the total number of harmonics for the open string, fifth fret, and twelfth fret to be 25, 20, and 15, respectively.
C. Estimating spectral slope
In order to compensate for the low-pass filtering effect due to pickup width, plectrum width, and plucking dynamics, and for the energy losses due to nonrigid end supports (e.g., bridge and fingers), the spectrum of the analysed signal needs to be flattened. The slope of the spectrum of the observed data X is estimated by fitting a line in the log-frequency domain to the spectral peaks X_k, each normalised to a maximum of 0 dB, with φ denoting the slope of the spectrum. Hence, the variable power of the harmonics determines the slope of the spectrum, where k^(−φ) corresponds to a −6φ dB/octave slope [see Eq. (3)]. Once the parameter φ is determined, we can adjust it accordingly to obtain a flatter spectrum.
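The slope fit can be sketched with an ordinary least-squares line over log2(k), so that the fitted slope is directly in dB/octave (i.e., −6φ); this is a plausible realisation, not the authors' exact code.

```python
import numpy as np

def spectral_slope_db_per_octave(peaks_db):
    """Least-squares line through normalised peak levels vs log2(k);
    the returned slope equals -6*phi in the notation above (sketch)."""
    k = np.arange(1, len(peaks_db) + 1)
    y = peaks_db - np.max(peaks_db)           # normalise maximum to 0 dB
    slope, _ = np.polyfit(np.log2(k), y, 1)
    return slope

def flatten_spectrum(peaks_db, target_db_per_octave=-3.0):
    """Tilt the spectrum so that its best-fit slope equals the target."""
    k = np.arange(1, len(peaks_db) + 1)
    delta = target_db_per_octave - spectral_slope_db_per_octave(peaks_db)
    return peaks_db + delta * np.log2(k)
```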
Once the slope of the spectrum is estimated, we use this value to obtain a better fit to the model. Ideally we want to flatten the spectrum to 0 dB/octave but this would produce unwanted troughs in the autocorrelation. We will further discuss the use of this technique and the problems of overflattening the spectrum in Sec. V D.
D. Estimating the pickup and pluck locations
The magnitudes of the first K harmonics are used to calculate the autocorrelation,16 where X is the flattened spectrum. The autocorrelation of an electric guitar signal should produce two dominant troughs: the lag s_q of one trough indicates the location of the pluck and the lag s_d of the other indicates the location of the pickup. Note that the plucking and pickup positions have similar effects and produce similar troughs, only at different locations; distinguishing between the two troughs could be done using post-processing techniques, as discussed later in Sec. VIII. Once the time lag estimates ŝ are found, the plucking point and pickup position are obtained from Eqs. (16) and (17) by scaling each lag by L/T. Figure 7 shows the autocorrelation of the electric guitar tone from Fig. 5, and the two dominant troughs can be seen, where ŝ_q is at 69 samples and ŝ_d is at 100 samples. The autocorrelation is calculated from the spectrum that is flattened to −3 dB/octave. Also, note that we are only interested in the troughs that are located in the first half of the autocorrelation period. With L = 652 mm and T = 408 samples, this yields an estimated plucking point at 110.26 mm and a pickup position at 159.80 mm from the bridge, giving less than ±0.3 mm error for both estimates.
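A plausible realisation of this step is sketched below: the autocorrelation is formed as a cosine sum over the flattened harmonic magnitudes (the paper's exact formula is not reproduced in this extraction), and trough lags are converted to distances by the L/T scaling of Eqs. (16) and (17), which reproduces the worked numbers above.

```python
import numpy as np

def harmonic_autocorr(mags, T):
    """Autocorrelation over one period T (samples) built from the first K
    flattened harmonic magnitudes (a plausible realisation only)."""
    K = len(mags)
    k = np.arange(1, K + 1)[:, None]
    lags = np.arange(int(T))[None, :]
    return (np.asarray(mags)[:, None] * np.cos(2 * np.pi * k * lags / T)).sum(axis=0)

def lag_to_mm(lag, L_mm, T):
    """Eqs. (16)/(17): distance from the bridge = lag * L / T."""
    return lag * L_mm / T

print(lag_to_mm(69, 652.0, 408))   # ~110.26 mm, the plucking point above
print(lag_to_mm(100, 652.0, 408))  # ~159.80 mm, the pickup position above
```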
If the plucking position is at or near the pickup, the troughs merge into one, making it impossible to estimate the two locations independently from the time lags of the troughs. Finding the plucking point of an acoustic guitar is therefore easier, because the autocorrelation of an acoustic guitar signal only produces one trough. 16 Troughs that are closer to zero lag represent pluck or pickup locations nearer to the bridge. Flattening the spectrum emphasises the higher harmonics, enhancing detection of troughs that correspond to positions near the bridge. Over-flattening the spectrum would create unwanted troughs near the zero lag. Figure 8 shows three autocorrelations of the same electric guitar tone where the slope of its spectrum is adjusted differently each time. We can observe that there is an unwanted trough near the zero lag if the spectrum is over-flattened. Moreover, we can also see that by not flattening the spectrum, the two troughs are merged into a single trough.
To solve the problem of merged troughs, where the pluck and pickup locations are close to each other, we employ a grid search to estimate the values. We calculate the mean square error between the autocorrelations of the observed data and our model for plucking points and pickup positions ranging from 25 mm to 180 mm with a spatial resolution of 1 mm. The electric guitar model is calculated using Eq. (3) to avoid using more parameters such as the plectrum and pickup width. Both the spectra of the observed data and the electric guitar model are flattened to −3 dB/octave beforehand. The minimum mean square error gives the estimated pluck and pickup locations. We refer to this method below as ASP1.
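The ASP1 grid search can be sketched as below, where `model_autocorr(q, d)` is an assumed callback returning the model autocorrelation (built from Eq. (3)) for a candidate pluck q and pickup d in mm.

```python
import numpy as np

def grid_search_positions(r_obs, model_autocorr, lo=25, hi=180, step=1):
    """ASP1-style grid search (sketch): pick the pluck/pickup pair (mm)
    whose model autocorrelation is closest (MSE) to the observed one."""
    best, best_err = None, np.inf
    for q in range(lo, hi + 1, step):
        for d in range(lo, hi + 1, step):
            err = np.mean((np.asarray(r_obs) - model_autocorr(q, d)) ** 2)
            if err < best_err:
                best, best_err = (q, d), err
    return best
```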
Estimates that are located near the bridge can be further improved. While flattening the spectrum to −3 dB/octave might suppress unwanted troughs near zero lag, any correct estimates near the bridge will have a less sharp trough near zero lag in the autocorrelation. To compensate for this problem, we flatten the spectrum to 0 dB/octave for any pluck or pickup estimates that are less than 60 mm from the bridge. Then we repeat the grid search procedure described above, where the range of the search is from 25 mm to the estimated value. This method will be referred to as ASP2.
E. Parameter estimation for mixed pickups
The electric guitar model with an in-phase mixed pickup signal, given in Eq. (4), predicts two troughs in the autocorrelation, with time lags corresponding to the location of the pluck s_q and the average of the two pickup locations s_i, plus one peak at lag s_j corresponding to one half of the distance between the two pickups, j. To estimate mixed pickup signals, first we estimate the locations of the pluck q̂ and the average of the two pickups î using the method described in Secs. V A–V D. Although a humbucker pickup could be considered as a mixed pickup, for our purposes it will be useful to treat it as a wide single pickup, and the lag s_i will then correspond to the middle of the humbucker. In the case of a known guitar, if the estimates î are located in between two single pickups, we can assume that a mixed pickup configuration is selected (further details of how pickup configurations are identified using the estimates are discussed in Sec. VI D). Then, we search for s_j to estimate the two locations of the mixed pickups (d̂_1 and d̂_2) and the plucking point q̂. The steps of estimating d̂_1, d̂_2, and q̂ are shown in Fig. 9. The lag ŝ_j is estimated using peak and trough detections instead of grid search. We search for peaks and troughs from zero lag until the lag that corresponds to 65 mm (s = 65T/L). We chose this limit by finding the largest distance j amongst popular electric guitars: a Fender Telecaster has the largest distance between its two pickups, which is around 120 mm (i.e., j = 60 mm). We flatten the spectral slope to 0 dB/octave and calculate the log-correlation of the signal as described by Traube and Depalle.16 Since the lag s_j is near zero lag, we chose to flatten the spectrum to 0 dB/octave instead of −3 dB/octave to further emphasise the peaks and troughs in the search range. Furthermore, we take the log magnitude of the spectral peaks to calculate the autocorrelation, which emphasises the low-amplitude harmonics so that the peaks and troughs become more apparent.
There are two cases to consider for finding the lag s_j: one is when the plucking point distance from the bridge is near the distance j, and the other is when the plucking point distance is not close to j. Figure 10 illustrates two log-correlations with the same mixed pickup configuration where the string is plucked at 30 and 110 mm from the bridge (lags s_q of 14.15 and 51.87 samples, respectively). Note that the distance j for this example is 29 mm (s_j = 13.66 samples) and the time lag limit for finding the peaks and troughs is 31 samples. Figure 10(a) shows the log-correlation of the electric guitar plucked at a distance from the bridge q ≈ j. To find the estimated lag ŝ_j, we select the trough or peak that is closest to zero lag. In this example, the trough that corresponds to the plucking point s_q is more dominant than the expected peak at s_j, even though theoretically the peak and trough should cancel each other out. Here, we can assume that the plucking point q is at distance j. Thus, both the estimated lags ŝ_j and ŝ_q are at the first trough, which is at 13.01 samples (ĵ and q̂ are 27.41 mm). Note that quadratic interpolation is used to refine the location of the trough.29 Figure 10(b) shows the log-correlation of the electric guitar plucked at a distance q ≠ j from the bridge. The peak that corresponds to the distance j is apparent. Similarly to the previous case, we are only interested in the trough or peak that is nearest to zero lag. However, the log-correlation always starts with a trough; this trough is removed if its absolute amplitude is less than the amplitude of the peak (this rule is also applied to the previous example). Hence, the peak is selected because it is now located closest to zero lag. The lag of the peak is at 14.71 samples, which gives the estimated distance ĵ = 31.08 mm. The peak location is also refined using quadratic interpolation.
Once the distance j is estimated, the estimated locations of the two pickups can be calculated as d̂_1 = î + ĵ and d̂_2 = î − ĵ.

VI. RESULTS: SINGLE AND MIXED PICKUPS

A. Single pickup data

We first present the results for estimating the pickup and plucking position of the electric guitar from tones recorded from each single pickup. We used the single pickup subset of the dataset described in Sec. II, comprising data from three single pickup configurations: bridge, middle, and neck pickup. The electric guitar is played at eight plucking points on each open string and recorded from all three pickups simultaneously, giving a total of 144 audio samples for this experiment.
Using the procedure described in Sec. V, we estimate the plucking point and pickup position for each audio sample independently. Our approach cannot distinguish between estimates belonging to the plucking point and the pickup position; to disambiguate, more information would be required, such as the expected pickup position (i.e., the known physical locations of the pickups on the electric guitar under test). We therefore take the estimated value that is closest to the known pickup position as the pickup estimate, and the other as the plucking point estimate. To assess the accuracy of the estimates we calculate the error e between the estimated and ground truth values. Table II shows the average absolute errors of the plucking and pickup location estimates, comparing results with and without the second stage process described in Sec. V D. The errors for ASP1 range from 2 to 13 mm for plucking point estimation e_q and 2–7 mm for pickup position estimation e_d. The errors for ASP2 range from 2 to 9 mm for plucking point estimation e_q and 2–5 mm for pickup position estimation e_d.
For estimates close to the bridge, the average absolute errors of the plucking point and pickup position estimates were reduced by 41% and 20%, respectively, when we include the second stage process of ASP2. Overall, by applying the second stage process, the average absolute error of the pickup position estimates is reduced from 3.97 to 3.53 mm and the average absolute error of the plucking point estimates is reduced from 5.90 to 5.11 mm. Figure 11 provides an illustration of the pickup location estimations on the electric guitar using method ASP2, where the real pickup locations are drawn as thick vertical lines and the estimates of the bridge, middle, and neck pickup locations are shown by triangles, circles, and crosses, respectively. Pickups further from the bridge are estimated more accurately, with almost all neck pickup estimates being confined within a ±1 cm error.
B. Mixed pickup data
The electric guitar has two in-phase mixed pickup configurations: a mix of middle pickup and neck pickup (m + n) and a mix of bridge pickup and middle pickup (b + m). The method for estimating the locations of the pluck and the two pickups is described in Sec. V E, where the distances i and q are estimated using the ASP2 method.
The distributions of absolute errors of the estimated pickup positions e_d and plucking points e_q are shown in Fig. 12. The thick line inside each box is the median, the bounds of the box represent the interquartile range, and the outliers are represented by the cross symbols (+). For mixed pickup (b + m), the median absolute errors of the pickup position and plucking point estimates are less than 7 mm. For mixed pickup (m + n), the median absolute errors of the pickup position and plucking point estimates are less than 11 mm. The main source of error for mixed pickup (m + n) is that the initial estimates of the average pickup position î and plucking point q̂ have large errors in some cases. This is caused by some unexpected troughs in the autocorrelation which are more dominant than the troughs corresponding to the ground truth locations, possibly due to nonlinear interactions between the two mixed pickups that are then enhanced by the spectral flattening.
C. Comparison with previous method
In this section, we compare the absolute errors for the current method (ASP2) with our previous method (MFS).18 Our previous method also uses a frequency-domain approach, in which a period of the tone is selected and its Fourier series is calculated. Then, we calculate the electric guitar models for a single pickup (Eq. (3)) and a mixed pickup (Eq. (4)) for plucking points and pickup positions from 27 to 180 mm. Last, we search for the model that is closest to the observed data by minimising the difference between the magnitude spectra of the model and the observed data. Table III shows the comparisons between the current method and the previous method. For single pickups, the average absolute errors of the estimated pickup position e_d and plucking point e_q are improved by 55% and 53%, respectively. For mixed pickups, the average absolute errors of the estimated pickup position e_d and plucking point e_q are improved by 10% and 5%, respectively.
D. Identification of pickup selection
The pickup position estimates can be used to identify which pickup configuration is selected. The electric guitar in this experiment has five pickup configurations, so five regions can be allocated to distinguish between them. Note that mixed pickup signals yield estimates î in between their two pickups, hence their regions are defined between the single pickup regions. For simplicity, we define the five regions to each have a width of 30 mm. The region for the bridge pickup ranges from 25 to 54 mm, the middle pickup from 85 to 114 mm, and the neck pickup from 145 to 174 mm. The region for mixed pickup b + m ranges from 55 to 84 mm and that for mixed pickup m + n from 115 to 144 mm.
The method can accurately identify which pickup configuration is selected. The neck and middle pickups are identified correctly in 97.92% of cases, the bridge pickup and mixed pickup b + m are each identified correctly in 91.67% of cases, while the mixed pickup m + n is correctly identified for 89.58% of the examples.
VII. RESULTS: VARYING DYNAMICS AND FRET POSITION
In this section, we examine the effects of plucking dynamics and fret positions on the estimates. Because the first dataset does not include multiple plucking dynamics or fret positions, we use the second dataset. We use the ASP2 method to estimate the pickup and plucking locations.
A. The effects of plucking dynamics
The strength with which a string is plucked not only determines the dynamic level of the produced tone, but also has an effect on its timbre: the relative level of high harmonics is reduced when the string is plucked softly. Figure 13 shows three magnitude spectra of electric guitar tones played forte (loud), mezzo-forte (moderately loud), and piano (soft) on the open second string. We can see that the levels of the eighth harmonic resulting from mezzo-forte and piano plucks are 3 and 8 dB lower, respectively, than for a forte pluck.
In this section, we examine the effects of different plucking dynamics on the estimates when the electric guitar is played on the open strings, which gives a total of 486 audio samples (6 strings × 3 pickups × 3 plucking points × 3 plucking dynamics × 3 instances). Figure 14 shows the absolute errors of the plucking point e_q and pickup position e_d estimates for each plucking dynamic. For each plucking dynamic, the median absolute errors of the pickup estimates are less than 4 mm. The median plucking point estimation error is up to 9 mm and is largest when the electric guitar is played loudly. Also, the number of outliers for both pickup position and plucking point estimation errors increased for louder tones, and to a lesser extent for softer tones, compared with the very robust results for mezzo-forte tones. This might be due to the nonlinear behaviour of the string when plucked at a higher force. For softer tones, the outliers are due to the grid search failing to find the troughs of the autocorrelation even though the troughs are around the expected time lag. Nevertheless, 94% and 98% of forte and piano results, respectively, have less than 30 mm absolute error.

B. The effects of fret position

In this section, we examine the estimated pickup positions and plucking points when different fret positions are played. We test using the electric guitar played moderately loud, which totals 486 audio samples (6 strings × 3 pickups × 3 plucking points × 3 fret positions × 3 instances). If the electric guitar is fretted, the length of the string is shortened by a factor of 2^(F/12), where F is the fret number. The length of the string when fretted, L_F, can be computed from the scale length L as L_F = L/2^(F/12). Therefore, a pickup at a fixed location suppresses different harmonics when the string is fretted than when it is open. Figure 15 compares the absolute error of the estimates when the electric guitar is played on open strings, at the fifth fret and at the twelfth fret. The median errors for all cases are less than 4 mm. The twelfth fret has the highest number of outliers compared to the others; nonetheless, 95% of the results are less than 30 mm. The outliers for the fifth fret are due to unwanted troughs near zero lag. For the twelfth fret, the length of the string is halved (L_12 = L/2), which causes problems for the detection of pickup and pluck positions. Due to symmetry, it is not possible to distinguish a distance x from a distance L_F − x from the bridge. For open strings and low fret positions, the pickup and pluck can safely be assumed to be located in the half of the vibrating string nearest the bridge, but for higher fret positions, it is possible that the pickup or pluck is nearer to the stopped end of the string than to the bridge. Thus any pickup or pluck more than L_F/2 from the bridge will not be estimated correctly, which explains most of the outliers observed for the twelfth fret data.
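The fretted-length relation L_F = L/2^(F/12) is simple to verify numerically; the snippet below is a trivial illustration.

```python
def fretted_length(L, F):
    """Vibrating length at fret F for scale length L: L_F = L / 2**(F/12)."""
    return L / 2 ** (F / 12)

print(fretted_length(648.0, 12))  # 324.0 -- half the open-string length
```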
VIII. CONCLUSIONS
We describe a technique to estimate the plucking point and pickup position of an electric guitar based on the autocorrelation of the spectral peaks. Furthermore, we introduce a method to flatten the spectrum that reveals the troughs in the autocorrelation in order to estimate the pickup and plucking locations more accurately. The system is tested on single and mixed pickup configurations. For single pickups, the system is able to accurately estimate the locations of the pickup and the pluck, giving average absolute errors of 3.53 and 5.11 mm, respectively. For mixed pickups, the average absolute errors of the estimated pickup position and plucking point are 8.47 and 9.95 mm, respectively. The pickup position estimates are sufficiently accurate to distinguish which pickup configuration is selected. Also, this method could be used to distinguish between typical guitar models based on the pickup positions. Moreover, we compare our technique with a previous method and show that our current method improves on the accuracy of the estimates.
Last, we examine the effect on the estimates when the electric guitar is played at various fret positions or with various dynamic levels, in order to move closer to real-world situations where musicians have control over these parameters. Our model works well across a range of dynamics, showing median absolute errors of less than 9 mm in all cases, although the number of outliers increases at both extremes of the dynamic range. The notches in the comb filter produced by the plucking point effect are less sharp due to the nonlinear coupling between vibrating modes,30 an effect that can be more prominent when the string is plucked very hard.31 This depresses the expected troughs in the autocorrelation, which makes the grid search fail to recognise them. The outliers caused by softer tones are due to the grid search not finding the expected troughs in the autocorrelation.
Likewise, the median error for different fret positions is less than 4 mm in each case, with an increasing number of outliers appearing as the fret number increases. For the fifth fret, the outliers are caused by an unwanted trough near zero lag which is falsely detected by the grid search. The outliers for the twelfth fret are due to the limitation of the procedure for finding the trough in the autocorrelation. Any pickup or pluck outside of this limit cannot be estimated correctly.
Further work can be done to test other techniques to flatten the spectrum, which could help avoid unwanted troughs near zero lag. These experiments use direct input recordings, so another direction of future work is to look into real-world signals (i.e., electric guitar tones recorded through a full production chain, including effects, amplification, mixing, and mastering). The method only finds the pickup positions of in-phase mixed pickups, so further investigation will be done on out-of-phase pickups. For out-of-phase pickups, the trough at lag s_i and the peak at lag s_j are swapped; thus, identifying in- or out-of-phase mixed pickups might be possible by searching for peaks in a certain range. Finally, our current model is not able to distinguish pluck from pickup estimates; mathematically their effects are identical, but the pluck position varies continuously while the pickup selection is discrete and rarely changes, so combining estimates over sequences of tones could facilitate the separation of these two effects.
Our plucking point and pickup position estimation could lead to several possible applications. The pickup positions and angles of popular guitars are distinct. Thus, accurate pickup position estimates could help musicologists and guitar enthusiasts to determine which guitar model and pickup selection are used in historical recordings where there is limited information about the original instrument. Conversely, the knowledge of musicologists can be used to distinguish pluck from pickup position estimates, e.g., it is known that a player has a tendency of playing near the bridge, thus, the other estimate could be the pickup position. Moreover, the pluck and pickup position estimates could be used as parameters for electric guitar sound synthesis (to use in MIDI guitars or guitar synthesisers with hexaphonic pickups), which opens the possibility of replicating the sound of popular guitarists by extracting relevant parameters from their recordings.
Preparation, characterization, and evaluation of antioxidant activity and bioavailability of a self-nanoemulsifying drug delivery system (SNEDDS) for buckwheat flavonoids
Abstract The self-nanoemulsifying drug delivery system has shown many advantages in drug delivery. In this study, a self-nanoemulsifying drug delivery system of buckwheat flavonoids was prepared to enhance their antioxidant activity and oral bioavailability. A nanoemulsion of buckwheat flavonoids was developed and characterized, and its antioxidant activity, in vitro release, and in vivo bioavailability were determined. The nanoemulsion was optimized by a central composite design response surface experiment, and its particle size, polydispersity index (PDI), zeta potential, morphology, encapsulation efficiency, and stability were evaluated. The antioxidant activity was tested by measuring its 2,2-diphenyl-1-picrylhydrazyl scavenging activity, hydroxyl radical scavenging activity, and superoxide anion scavenging ability. In vitro release of the buckwheat flavonoids nanoemulsion showed a higher cumulative release than the suspension, and the release fitted the Ritger–Peppas and Weibull models. The nanoemulsion was evaluated in vivo using a Wistar rat model, and the area under the plasma concentration-time curve of the buckwheat flavonoids nanoemulsion was 2.2-fold higher than that of the buckwheat flavonoid suspension. The Cmax of the nanoemulsion was 2.6-fold greater than that of the suspension. These results indicate that the nanoemulsion is a promising oral drug delivery system that can improve the oral bioavailability to satisfy clinical requirements.
Introduction
Buckwheat (genus: Fagopyrum; family: Polygonaceae), an important medicinal and edible herb, contains many bioactive compounds, including flavonoids, phenolic compounds, triterpenoids, amino acids, volatile compounds, etc. [1]. Buckwheat flavonoids contain a common benzo-γ-pyrone structure and a polyphenolic structure consisting of a 15-carbon basic skeleton (C6–C3–C6). The herb has various pharmacological activities, including antioxidant, anticancer, anti-inflammatory, and other activities [2–4]. Though buckwheat flavonoids possess diverse activities, their medicinal and edible values are limited due to low solubility, low oral bioavailability, and poor systemic absorption. So, the clinical administration of buckwheat flavonoids requires a reasonably effective drug delivery system. The diffusion of the surfactant and cosurfactant from the organic phase into the aqueous phase may also be an important mechanism for the formation of the nanoemulsion [11,12]. The small particle sizes in such a drug delivery system offer many promising advantages, including smaller surface tension and greater stability. The smaller particle size offers a greater interfacial surface area for drug absorption, and the solubility and bioavailability are enhanced. For example, a SNEDDS of resveratrol with a droplet size of 50 nm exhibited better antioxidant capacity and less toxicity than free resveratrol [13]. A SNEDDS of nintedanib (a poorly soluble molecule) significantly increased its area under the plasma concentration-time curve (AUC) [14]. Therefore, the self-nanoemulsifying drug delivery system can improve the bioactivity of encapsulated components, which can also potentially improve the medicinal and edible values of buckwheat flavonoids.
To enhance the antioxidant activity and oral bioavailability of buckwheat flavonoids, a SNEDDS of buckwheat flavonoids was prepared in this study. The preparation and optimization were based on the central composite design response surface experiment, and particle size, zeta potential, and encapsulation efficiency were determined. The study specifically focused on evaluating the antioxidant activity and comparing the oral bioavailability of the nanoemulsion with that of suspension in vivo.
Materials
Buckwheat flavonoids (purity >95%) were procured from Xi'an Tianxiang Bioengineering Co., Ltd (Xi'an, China). Rutin was purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). 2,2-Diphenyl-1-picrylhydrazyl (DPPH) was purchased from Phygene Life Sciences Co., Ltd (Fuzhou, China). The reagents used in high-performance liquid chromatography (HPLC) were chromatographically pure, and the other chemicals, reagents, and solvents were of analytical grade.
Preparation and experimental design
Buckwheat flavonoids nanoemulsion was prepared through self-nanoemulsification. Briefly, PEG-40 hydrogenated castor oil (surfactant) and propylene glycol (cosurfactant) were thoroughly mixed with a magnetic stirrer at room temperature, and castor oil was added to the mixture. Subsequently, the buckwheat flavonoids were added into the admixture and stirred until they were dissolved completely. Then, distilled water was added dropwise into the system until the nanoemulsion was formed.
The experimental design was carried out by a central composite design (CCD) response surface experiment [15,16]. Based on the solubility study and the pseudo-ternary phase diagram, the three factors of the oil phase (A), the surfactant/cosurfactant mixture (Smix, B), and the water phase (C) were used as the independent variables, and the particle size (Y1) and encapsulation efficiency (Y2) were considered as the dependent variables. Design-Expert 8.0.6 software was used to analyze the experimental data, and three-dimensional response surface graphs were plotted. The goodness of fit of the model was expressed by the coefficient R, and its statistical significance was determined by the F-test.
Characterization of nanoemulsion
The particle size and zeta potential of the nanoemulsion were analyzed using the dynamic light scattering method with a Malvern Zetasizer (Malvern Instruments Ltd, Malvern, UK). The morphology of the nanoemulsion was observed under a transmission electron microscope (JEM-1011; JEOL, Tokyo, Japan) at 200 kV accelerating voltage. Encapsulation efficiency was calculated using the formula: encapsulation (%) = Ws/Wt × 100%, where Ws and Wt are the amounts of drug in the supernatant and the total drug, respectively. The nanoemulsion was dispersed in methanol and then ultrasonicated at 37°C for 30 min. The mixture was centrifuged at 5000 g for 10 min, and the supernatant was analyzed by HPLC (Agilent 1200 LC; Agilent Tech Instrument Co, Santa Clara, USA) equipped with a C18 column (Agilent Zorbax, 250 mm × 4.6 mm, 5 µm). A mixture of methanol and acetic acid (50:50) aqueous solution (0.5%) was used as the mobile phase at a flow rate of 1 ml/min. The stability of the nanoemulsion was tested at different temperatures (−40°C, 4°C, 25°C, and 37°C) for 30 days. The nanoemulsion was also examined for phase separation by centrifuging at 3000 g and 10,000 g. The nanoemulsion was diluted 10- and 100-fold with distilled water to investigate the particle size, zeta potential, and stratification of the system.
DPPH scavenging activity
The DPPH (0.2 mM, 1 ml) reagent was used for determining the antioxidant activity of the buckwheat flavonoid nanoemulsion (0.2, 0.4, 0.6, 0.8, and 1.0 mg/ml, 1 ml) [17,18]. DPPH ethanol solution and nanoemulsion were mixed and kept in the dark for 30 min at room temperature, and the absorbance was measured at 517 nm. Mixtures of DPPH and distilled water were used as the controls. Meanwhile, the suspension and the ascorbic acid were also examined using the above method. The DPPH scavenging activity was calculated by the following equation: DPPH scavenging activity (%) = (Ac − As)/Ac × 100%, where As and Ac are the absorbance of the sample and control, respectively.
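The (Ac − As)/Ac formula used here (and again for the hydroxyl radical assay below) is a one-liner; the absorbance values in the example are illustrative only.

```python
def scavenging_activity(a_sample, a_control):
    """(Ac - As)/Ac * 100, as used for the DPPH and hydroxyl radical assays."""
    return (a_control - a_sample) / a_control * 100.0

print(scavenging_activity(0.21, 0.95))  # illustrative absorbances -> ~77.9 %
```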
Hydroxyl radical scavenging activity

The hydroxyl radical scavenging activities of the buckwheat flavonoid nanoemulsion and suspension and the ascorbic acid solution samples were measured as previously reported [19,20]. In brief, FeSO4 solution (2.5 mM, 1 ml) was added to the samples, and then H2O2 (2.5 mM, 1 ml) and salicylic acid (2.5 mM, 1 ml) were added successively. The temperature of the mixture was maintained at 37°C for 60 min. After completion of the reaction, the hydroxyl radical was measured by monitoring the absorbance at 510 nm. Meanwhile, distilled water instead of the sample was used as the control. The scavenging activity for hydroxyl radicals was calculated with the following equation: Hydroxyl radical scavenging activity (%) = (Ac − As)/Ac × 100%, where As and Ac are the absorbance of the sample and control, respectively.
Superoxide anion scavenging activity

Pyrogallic acid (25 mM, 10 µl) was added to 3 ml of Tris-HCl buffer (pH 8.2). The absorbance of the mixture was determined every 30 s for 4 min at 325 nm. From the absorbance, the auto-oxidation rate of pyrogallic acid was evaluated as the slope of the absorbance-time curve. Then the samples (0.2, 0.4, 0.6, 0.8, and 1.0 mg/ml) were measured following the above method, and the scavenging rate of the superoxide anion radical was calculated by the following formula [21]: Superoxide radical scavenging activity (%) = (Vc − Vs)/Vc × 100%, where Vs and Vc are the auto-oxidation rates of pyrogallic acid in the presence of the sample and in the control, respectively.
In vitro release experiment
The in vitro release experiment was carried out by dynamic dialysis of the nanoemulsion [22]. The cumulative release Qn (%) at the nth sampling point was calculated as Qn = (V·Cn + Vi·ΣCi)/m × 100%, with the sum running over i = 1 to n − 1, where Ci and Cn are the concentrations at the different sampling time points, V and Vi are the volumes of the PBS (50 ml) and of each withdrawn sample (2 ml), respectively, and m is the initial amount of drug (10 mg).
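A sketch of this calculation follows, assuming the standard sampling-replacement correction reconstructed above; the concentrations are illustrative.

```python
import numpy as np

def cumulative_release(conc, V=50.0, Vi=2.0, m=10.0):
    """Sampling-corrected cumulative release (%):
    Q_n = (V*C_n + Vi*sum(C_1..C_{n-1})) / m * 100."""
    c = np.asarray(conc, dtype=float)               # mg/ml at each time point
    prior = np.concatenate(([0.0], np.cumsum(c)[:-1]))
    return (V * c + Vi * prior) / m * 100.0

print(cumulative_release([0.010, 0.017, 0.024]))    # illustrative values (%)
```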
In vivo experiment
The Wistar rats (200±20 g) were obtained from the Experimental Animal Center of Shanxi Medical University (Taiyuan, China), and all experimental procedures were approved by the Institutional Animal Care and Use Committee of Shanxi Medical University. The animals were maintained under a 12/12 h light/dark cycle at 25±2°C and a relative humidity of 60%±10%. The animals were provided free access to water and food (food was provided until 12 h before the experiment).
The buckwheat flavonoid nanoemulsion and suspension were administered intragastrically at a dose of 60 mg/kg to the test and control groups [23,24]. The blood samples (0.5 ml) were withdrawn from the fundus venous plexus into heparinized tubes at 0.083, 0.25, 0.5, 1, 2, 4, 8, 12, 24, and 48 h after oral administration of the drugs. The plasma was separated by centrifuging the blood sample at 10,000 g for 8 min. Then, the morin hydrate (internal standard) was added to 100 µl of plasma, vortexed for 3 min, and 300 µl of methanol was added subsequently and mixed for 5 min. The mixtures were centrifuged at 10,000 g for 8 min to precipitate the protein aggregates. The supernatant was dried, re-dissolved in 100 µl of methanol, and then centrifuged at 10,000 g for 5 min. The supernatants were analyzed by HPLC using a mobile phase containing methanol:acetic acid aqueous solution (0.5%; 50:50) at a flow rate of 1 ml/min. The analysis of samples and internal standard was not influenced by endogenous impurities. The standard curve, y=15,677 x+8.328 (r=0.9990), was plotted with the ratio of peak areas of the standard and internal standard as ordinate against the standard concentration as abscissa. The recovery rate was 95%-105%, and the relative standard deviation (RSD) was less than 3%; the RSD of precision was less than 3%, and the RSD of reproducibility was less than 2%.
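Back-calculating a plasma concentration from the reported standard curve is a simple inversion; the helper below is illustrative only.

```python
def plasma_conc(peak_area_ratio, slope=15677.0, intercept=8.328):
    """Invert the reported standard curve y = 15,677x + 8.328, where y is the
    analyte/internal-standard peak-area ratio and x is the concentration."""
    return (peak_area_ratio - intercept) / slope
```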
Statistical analysis
The data were expressed as the mean±SD. The statistical analyses were conducted using Student's paired t-test, and differences between means were considered statistically significant at P<0.05. The pharmacokinetic parameters were calculated with the statistics software program DAS 2.0 (Mathematical Pharmacology Professional Committee of Shanghai, Shanghai, China). The statistical analysis of the standard curve was carried out with OriginPro 8.0 software (OriginLab Corporation, Northampton, USA).
Optimization of preparation of nanoemulsion using central composite design response surface experiment
The three factors of the oil phase, surfactant to cosurfactant, and water phase were used as independent variables, and the particle size and encapsulation efficiency were considered as dependent variables; the nanoemulsion optimization was performed with the Design-Expert software. The F-test and P value were used to analyze the statistical significance of the regression models; the analysis of variance for the response surface models is shown in Table 1, and multiple regression equations were derived with the Design-Expert software. The regression model for the particle size was significant (P<0.05), the P value of the lack of fit was 0.1383, and the coefficient R was satisfactory. The predicted R² (0.8236) reasonably agreed with the adjusted R² (0.9420). Adequate precision was measured by the signal-to-noise ratio, for which a ratio greater than 4 is desirable; the ratio of 11.044 indicated an adequate signal. This model could be used to navigate the design space.
The regression model for the encapsulation efficiency was highly significant (P<0.0001), the P value of the lack of fit was 0.2197, and the coefficient R was satisfactory. The predicted R² of 0.8622 reasonably agreed with the adjusted R² of 0.9443. The adequate precision ratio was 13.856, so this model was also adequate.
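A generic sketch of fitting such a second-order response-surface model by least squares is shown below; the study's Design-Expert coefficients are not reproduced here, and X and y would hold the CCD factor settings and measured responses.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a generic second-order response-surface model
    y = b0 + sum bi*xi + sum bij*xi*xj + sum bii*xi^2 for three factors.
    X is an (n, 3) array of factor settings (oil, Smix, water)."""
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    D = np.column_stack([np.ones(len(y)), A, B, C,
                         A * B, A * C, B * C, A ** 2, B ** 2, C ** 2])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef  # the 10 coefficients of the quadratic model
```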
According to the results of the regression analysis, three-dimensional response surface diagrams of the relationship between the independent and dependent variables were drawn (Fig. 1). Among the main factors affecting the particle size, the water phase (C) showed the greatest effect, followed by the surfactant and cosurfactant (B) and the interaction term (B and C). The particle size of the nanoemulsion increased with the Smix and the water phase, and the effect of the oil phase (A) on the particle size was not very significant (Fig. 1A−C). As shown in Fig. 1B, the particle size increased when the Smix and water phase increased simultaneously, which implies that the interaction effect of B and C was positively significant. For the encapsulation efficiency, the quadratic term of the oil phase showed a significant effect, and the other terms were not significant. Among the main factors affecting the encapsulation efficiency, the surfactant and cosurfactant showed the greatest effect, and the encapsulation efficiency of the nanoemulsion increased with increasing surfactant and cosurfactant (Fig. 1D,E). The oil phase and water phase also affected the encapsulation efficiency significantly (Fig. 1D−F). The encapsulation efficiency first increased and then decreased with the increase in the oil phase, and the encapsulation efficiency was enhanced when the water phase was increased. From the design response surface experiment, we obtained the predicted optimum formulation composition of oil (16.5%), Smix (38.7%), and water (44.8%). Three batches of samples were prepared in parallel according to the optimized conditions. The predicted and experimental values of the particle size were 23.72 nm and 23.16±0.25 nm, respectively, and the statistical prediction error was small (approximately 2.4%).
Physicochemical characterization of nanoemulsion
Particle size and zeta potential are two important properties of a nanoemulsion. The size of the droplets of the buckwheat flavonoids nanoemulsion was 23.22±0.13 nm (Fig. 2A), the PDI was 0.22±0.06, and the zeta potential was −20.92±0.27 mV (Fig. 2B), as determined using the dynamic light scattering method with a Malvern Zetasizer. Fig. 2C shows that the nanoemulsion appeared as a pale yellow clear liquid; the morphology of the droplets was nearly spherical or spherical, as observed under the transmission electron microscope (Fig. 2D), and the particle size distribution was uniform. The encapsulation efficiency of the nanoemulsion was 98.35%±0.04%. After storage at different temperatures for 30 days, the particle size and the zeta potential exhibited no significant change, and no noticeable creaming was observed after centrifuging at 3000 g and 10,000 g for 10 min.
Antioxidant activity study of nanoemulsion
The antioxidant capacity of buckwheat flavonoids nanoemulsion was assessed by DPPH scavenging activity, hydroxyl radical scavenging activity, and superoxide anion scavenging activity.
DPPH exhibits maximum absorption at 517 nm owing to its stable nitrogen-containing free radical, and its purple color turns yellow when the free radicals are scavenged by antioxidants. The antioxidant capacity of the nanoemulsion was evaluated and compared with that of the suspension using ascorbic acid as a positive control. As shown in Fig. 3A, the DPPH scavenging activity of the nanoemulsion increased with the concentration from 0.2 to 1.0 mg/ml, and the IC50 value of the nanoemulsion was 0.52 mg/ml. The DPPH radical scavenging activity of the suspension also increased gradually, but at the concentration of 1.0 mg/ml its scavenging activity was 37.1%, significantly lower than that of the nanoemulsion (81.92%). The results suggest that the nanoemulsion possesses potent DPPH free radical scavenging capacity.
Hydroxyl radicals are the most active free radicals, capable of lipid peroxidation and destruction of the biomacromolecules in cells. The buckwheat flavonoid nanoemulsion exhibited potent hydroxyl radical scavenging activity, and its activity was increased in a concentration-dependent manner (Fig. 3B). As a positive control, ascorbic acid displayed a strong free radical scavenging ability, and at the concentration of 1.0 mg/ml, the hydroxyl radical-scavenging activities of ascorbic acid, nanoemulsion, and suspension were 91.21%, 79.55%, and 40.45%, respectively. The scavenging ability of the nanoemulsion to hydroxyl radicals was significantly stronger than that of the suspension.
Pyrogallic acid is spontaneously oxidized under weakly alkaline conditions to form superoxide free radicals and intermediates, and its color varies with the superoxide radical content. As shown in Fig. 3C, the superoxide anion scavenging abilities of the nanoemulsion and suspension increased in a concentration-dependent manner; at the concentration of 1.0 mg/ml, the maximum scavenging activity of the nanoemulsion was 56.80%, which was obviously better than that of the suspension (29.15%). Compared with the DPPH and hydroxyl radical scavenging abilities, the superoxide anion scavenging abilities of the nanoemulsion and suspension were significantly lower, indicating a moderate superoxide anion scavenging activity.
Cumulative release and model fitting of nanoemulsion
The in vitro release of the nanoemulsion was investigated using the dialysis method, and the drug release in the gastrointestinal tract was simulated in vitro. A dialysis tube with a biofilm structure was placed in the buffer in a thermostatic bath at 37°C for simulating gastrointestinal fluid, and the drug digestion and peristalsis in the gastrointestinal tract were simulated by magnetic stirring.
The in vitro release of the buckwheat flavonoids nanoemulsion and suspension was continuously monitored for 48 h. The cumulative release of the suspension was 5.17% in 15 min, and only 15.14% of the drug was released in 48 h because of its poor solubility (Fig. 4). The nanoemulsion released 25.09% in 15 min and 42.56% in 1 h, and the cumulative release increased gradually until 88.16% of the drug was released in 48 h. These results show that the cumulative release of the nanoemulsion was about 5-fold higher than that of the suspension, and the in vitro performance of the drug was greatly improved.
Further studies on the release model fitting of the buckwheat flavonoids nanoemulsion were performed to understand the release pattern. Various dynamic models were fitted to the release curve, and the correlation coefficients (R 2 ), residual sum of squares (RSS), and Akaike information criterion (AIC) were calculated ( Table 2). The drug release model was closer to Weibull (R 2 = 0.9947, RSS = 18.6194, AIC = 33.2369) and Ritger-Peppas (R 2 = 0.9723, RSS = 113.4027, AIC = 51.3095) models.
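The two release models can be fitted with scipy's curve_fit as sketched below; the parameterisations shown are common forms of the Ritger–Peppas and Weibull equations (the study's exact parameterisation is not given), and only the three release points quoted above are used, for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def ritger_peppas(t, k, n):
    # Q(t) = k * t**n; Fickian diffusion when n < 0.45
    return k * t ** n

def weibull(t, a, b):
    # Q(t) = 100 * (1 - exp(-(t/a)**b)), one common parameterisation
    return 100.0 * (1.0 - np.exp(-(t / a) ** b))

t = np.array([0.25, 1.0, 48.0])        # h (15 min, 1 h, 48 h)
q = np.array([25.09, 42.56, 88.16])    # cumulative release (%) quoted above
(k, n), _ = curve_fit(ritger_peppas, t, q, p0=[30.0, 0.3])
(a, b), _ = curve_fit(weibull, t, q, p0=[2.0, 0.5])
print(k, n, a, b)
```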
Pharmacokinetic parameters and bioavailability of nanoemulsion
The efficacy of the buckwheat flavonoids nanoemulsion was further verified by monitoring the buckwheat flavonoids plasma concentrations in vivo after intragastric administration to the test and control groups. The samples were analyzed by HPLC. The curves of the mean plasma concentration versus time for buckwheat flavonoids nanoemulsion and suspension are shown in Fig. 5. The pharmacokinetic parameters were calculated with the Statistics 2.0 software program (DAS 2.0), and the results are shown in Table 3.
In our study, the AUC for the nanoemulsion was 1621.2±141.60 ng/ml·h, which was 2.2-fold higher than that of the control suspension (734.48±109.81 ng/ml·h). The Cmax (maximum plasma drug concentration) of the nanoemulsion was 156.15±8.14 ng/ml, which was 2.6-fold higher than that of the suspension (59.89±6.49 ng/ml). The Tmax (time to maximum plasma drug concentration) of the nanoemulsion was short (0.90±0.22 h), i.e., the nanoemulsion reached peak concentration faster than the suspension, which might be related to the good solubility of buckwheat flavonoids in the nanoemulsion. The nanoemulsion of buckwheat flavonoids exhibited a significantly higher AUC than the suspension (P<0.05).
Discussion
The CCD response surface experiment is a multivariate statistical method for optimization analysis [25]; it provides additional probability values, which can lead to a more comprehensive, intuitive, and accurate design. CCD was used in this study to determine the optimum conditions for the nanoemulsion, which led to a suitable particle size and encapsulation efficiency. The model effectively described and predicted the responses of the particle size and encapsulation efficiency to changes in the formulation conditions within the given experimental ranges. The particle size of the nanoemulsion increased with the Smix and water phase, and the effect of the oil phase was not very significant. The surfactant and cosurfactant showed the greatest effects, and the oil phase and water phase also affected the encapsulation efficiency. The results of these experiments were close to the values predicted by the optimization analysis, suggesting that the multiple regression models are reasonable and reliable. The nanoemulsion obtained from the CCD experiment had small particle sizes and hence displayed a smaller surface tension and greater stability. A smaller particle size provides a greater interfacial surface area for drug absorption, so the solubility and bioavailability are enhanced [26]. The zeta potential of the nanoemulsion is caused by the adsorption of charged molecules on its surface, which depends on the type of emulsifier and the conditions of the medium; this charge causes repulsion between dispersed-phase droplets, so an adequate zeta potential can improve the stability of the nanoemulsion [27–29]. In our study, the buckwheat flavonoid nanoemulsion displayed a particle size within the abovementioned range, suggesting that our nanoemulsion system was stable.
Flavones possess antioxidant properties, and the buckwheat flavonoid nanoemulsion showed significantly greater antioxidant activity than the suspension in all three test methods, because nanoemulsion technology can enhance the solubility and release of insoluble or poorly soluble drugs. A higher amount of available drug might be responsible for the enhanced antioxidant potential [30]. The results of this study suggest that the SNEDDS is a feasible strategy to improve the antioxidant application of buckwheat flavonoids, which can potentially be used as a natural antioxidant.
Drug release is the basis for all biological functions, including absorption, distribution, metabolism, and excretion. Drug release model fitting was performed to understand the release pattern [31]. As shown in Table 2, the best fitting parameters (R²=0.9947, RSS=18.6194, AIC=33.2369) were derived from the Weibull model, and the in vitro drug release from the nanoemulsion followed this distribution, suggesting that the in vitro release process is continuous and dynamic, simulating the human digestive tract environment. It also indicates that the drug release has a sustained-release trend. The Ritger–Peppas model (R²=0.9723, RSS=113.4027, AIC=51.3095) also described the release pattern well, and the release mechanism might follow Fickian diffusion (n<0.45) during the drug release process of the nanoemulsion. In our results, the in vitro release of the buckwheat flavonoids from the nanoemulsion was more advantageous than from the suspension, which demonstrates the potential of nanoemulsions for flavonoid delivery.
Bioavailability, an important criterion for evaluating the effect of drugs in vivo, is usually assessed by three parameters: Cmax, Tmax, and AUC. AUC is the most reliable indicator of bioavailability, as it is directly proportional to the amount of the parent drug that enters the body. This study demonstrated that the buckwheat flavonoid nanoemulsion achieved 2.2-fold higher absorption than the suspension, as the drug particles in the nanoemulsion are relatively small and more drug can be released. The absorption of a poorly water-soluble drug is often limited by its insufficient dissolution in the gastrointestinal tract [32]. The small droplets of the drug-loaded nanoemulsion can pass through the membrane directly and be absorbed easily [33]. A self-nanoemulsifying drug delivery system of persimmon leaf extract was shown to accelerate drug absorption without precipitation of the drug in the gastrointestinal tract [34]. Thus, the main problems of poor solubility and weak absorption of flavonoids were addressed through the self-nanoemulsifying technology, and the oral bioavailability was significantly improved [35]. Nanoemulsions can be used as effective carriers to improve the therapeutic effect of water-insoluble drugs.
In summary, this study was based on a previous experiment for developing buckwheat flavonoids into a nano-drug delivery system [36]. We constructed a drug delivery system and investigated its characteristics in vitro and the drug delivery to animals in vivo, which provided a theoretical basis for the application of buckwheat flavonoids in clinical practice. Our results demonstrated that nanoemulsion is an effective drug delivery system that can significantly improve the antioxidant activity, in vitro release, and bioavailability compared with the corresponding drug suspension. Nanoemulsion can improve the oral absorption of drugs, promote the dissolution of the poorly soluble drugs, and enhance the bioavailability of drugs.
Sorafenib in advanced melanoma: a critical role for pharmacokinetics?
Background: Inter-patient pharmacokinetic variability can lead to suboptimal drug exposure, and therefore might impact the efficacy of sorafenib. This study reports long-term pharmacokinetic monitoring of patients treated with sorafenib and a retrospective pharmacodynamic/pharmacokinetic analysis in melanoma patients. Patients and methods: Heavily pretreated patients with stage IV melanoma were started on sorafenib 400 mg twice daily (bid). In the absence of limiting toxicity, dose escalation in steps of 200 mg bid was performed every 2 weeks. Plasma sorafenib measurement was performed at each visit, allowing a retrospective pharmacodynamic/pharmacokinetic analysis for safety and efficacy. Results: In all, 19 of 30 patients underwent dose escalation over 400 mg bid, and 28 were evaluable for response. The overall disease control rate was 61% (95% confidence interval (CI): 42.6–78.8), including three confirmed responses (12%). Disease control rate and progression-free survival (PFS) were improved in patients with high vs low exposure (80% vs 32%, P=0.02, and 5.25 vs 2.5 months, P=0.005, hazard ratio (HR)=0.28 (95% CI: 0.11–0.73)). In contrast, drug dosing had no effect on PFS. In multivariate analysis, drug exposure was the only factor associated with PFS (HR=0.36 (95% CI: 0.13–0.99)). Diarrhoea and anorexia were correlated with drug dosing, while hypertension and hand–foot skin reaction were correlated with drug exposure. Conclusions: Although sorafenib had modest efficacy in melanoma, these results suggest a correlation between exposure and efficacy of sorafenib. Therefore, dose optimisation in patients with low exposure at standard doses should be evaluated in validated indications.
Sorafenib is an oral agent that inhibits a large spectrum of cellular targets (VEGFR-2, PDGFR, c-KIT, FLT-3, CRAF, wild-type BRAF or BRAF V600E; Wilhelm et al, 2004). The recommended dose of sorafenib in patients with hepatocellular carcinoma and advanced renal cell cancer is 400 mg twice daily (bid) (Strumberg et al, 2007). In preclinical studies, sorafenib efficiently inhibited BRAF activity in BRAF-mutated melanomas, leading to growth retardation (Sharma et al, 2005; Wilhelm et al, 2008). A phase II trial of sorafenib in 37 metastatic melanoma patients reported modest activity, with only three partial responses (8%; Min et al, 2008). Another phase II randomized discontinuation trial confirmed these results, with no confirmed objective response and only 19% stable disease (Eisen et al, 2006). Unfortunately, BRAF mutations were not predictive of clinical outcome in several trials involving sorafenib in melanoma patients (Eisen et al, 2006; Flaherty et al, 2008; Amaravadi et al, 2009; Ott et al, 2010). Recently, the BRAF V600E inhibitor vemurafenib has shown significant clinical activity in patients with advanced melanoma (Chapman et al, 2011). Hence, it is unclear whether sorafenib exerts anti-tumour activity in melanoma through the inhibition of BRAF or of other targets, such as c-Kit. For instance, imatinib, another c-Kit inhibitor, is active in KIT-mutated melanomas (Guo et al, 2011). NRAS, GNAQ and GNA11 are other potential molecular targets, particularly in uveal melanoma (Alsina et al, 2003; Van Raamsdonk et al, 2010).
Sorafenib dose-limiting toxicities (DLTs) included diarrhoea, hypertension and hand-foot skin reaction (HFSR). Notably, dose increases from 400 to 800 mg bid did not substantially increase the sorafenib area under the curve (AUC) in phase I trials (Strumberg et al, 2007). However, intra-patient dose escalation has not been evaluated by pharmacokinetics. Owing to a large inter-patient variability (~50%) of the sorafenib area under the plasma concentration-time curve over 12 h (AUC; Strumberg et al, 2007; Hornecker et al, 2011), a suboptimal exposure to sorafenib could result in a lack of anti-tumour activity in some patients. To date, this hypothesis could not be ruled out, as sorafenib exposure was not assessed in previous phase II and III trials. Otherwise, dose adjustment of sorafenib based on plasma exposure is not currently recommended. In addition, two clinical trials suggest a potential benefit of sorafenib dose-escalation strategies in RCC, even after failure of sorafenib 400 mg bid dosing (Amato et al, 2008; Escudier et al, 2009).
In this context, we hypothesised that optimisation of sorafenib exposure might improve its efficacy in patients with metastatic melanoma, and that sorafenib AUC could be related to antitumor efficacy.
PATIENTS AND METHODS
From January 2008 to December 2009, consecutive patients with metastatic melanoma who had progressed under previous therapeutic regimens containing one or more of the following: dacarbazine, fotemustine, interleukin-2, cisplatin, interferon or vaccine therapy, were offered sorafenib treatment in two academic cancer centres located in Paris, France (Cochin and Saint Louis Teaching Hospitals). At this time, vemurafenib was not available for patients with BRAF-mutated melanoma, and BRAF mutation status was not assessed in our patients.
The schedule included an intra-patient dose escalation. A total of 30 patients with histologically confirmed metastatic melanoma started sorafenib. All patients provided written informed consent, and the study was approved by the Local Ethics Committee.
Treatment plan
Patients were treated with sorafenib at a starting dose of 400 mg bid. In the absence of limiting toxicity, intra-patient dose escalation in 200 mg bid increments every 2 weeks was planned. No maximum dose was specified. Sorafenib daily doses were adjusted based only on adverse events and not on plasma sorafenib exposure, as the values of the sorafenib AUC were not transmitted to clinicians.
Assessments
The primary endpoint was safety. Safety was assessed every 2 weeks during the whole-treatment period. In addition to summaries of adverse events classified and graded according to the National Cancer Institute Common Toxicity Criteria for Adverse Events version 3.0 (by term and category), safety analyses included evaluation of clinically significant laboratory test results and vital signs. A DLT was defined as any toxicity leading to dose reduction or to discontinuation of treatment. Tumour response was assessed by CT scan using one-dimensional measurements made at baseline, every 8 weeks thereafter and at the end of the treatment period if applicable. Treatment activity was evaluated using the revised RECIST guidelines (Therasse et al, 2000).
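For illustration, the RECIST target-lesion rules referenced above reduce to simple thresholds on the sum of longest diameters: at least a 30% decrease from baseline for partial response and at least a 20% increase from the nadir for progression. The sketch below encodes only this target-lesion arithmetic; the function name is invented, and the full guidelines also cover non-target lesions, new lesions, and response confirmation.

```python
def recist_response(baseline_sld_mm, current_sld_mm, nadir_sld_mm):
    """Classify target-lesion response from sums of longest diameters (SLD).

    Simplified sketch of the RECIST target-lesion rules only; non-target
    lesions and new-lesion assessment are deliberately ignored here.
    """
    change_from_baseline = (current_sld_mm - baseline_sld_mm) / baseline_sld_mm
    change_from_nadir = (current_sld_mm - nadir_sld_mm) / nadir_sld_mm
    if current_sld_mm == 0:
        return "CR"   # complete response: disappearance of all target lesions
    if change_from_baseline <= -0.30:
        return "PR"   # partial response: >=30% decrease from baseline
    if change_from_nadir >= 0.20:
        return "PD"   # progressive disease: >=20% increase from nadir
    return "SD"       # stable disease otherwise

print(recist_response(100.0, 65.0, 65.0))  # -> "PR"
```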
Plasma exposure to sorafenib
Sorafenib plasma concentrations were assessed in one sample drawn every 2 weeks (at the end of each period of dose escalation) by high-performance liquid chromatography (Blanchet et al, 2009). The accuracy, within-assay precision and inter-assay precision of this method were 96.9-104.0%, 3.4-6.2% and 7.6-9.9%, respectively. A specific Bayesian estimator developed in our institution allowed estimation of the sorafenib AUC with a limited sampling strategy (Hornecker et al, 2011).
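To make the limited-sampling idea concrete, the sketch below shows one common form of Bayesian exposure estimation: a maximum a posteriori fit of a one-compartment oral model to sparse concentration samples, from which the dosing-interval AUC follows as dose/clearance. This is a minimal illustration only; the priors, absorption constant and residual error below are invented placeholders and do not reproduce the published estimator of Hornecker et al (2011).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical log-normal population priors for a one-compartment oral model
PRIOR_CL = (6.0, 0.4)    # median clearance [l/h], sd of log(CL)
PRIOR_V  = (100.0, 0.3)  # median volume of distribution [l], sd of log(V)
KA       = 0.3           # assumed first-order absorption rate [1/h]
SIGMA    = 0.5           # assumed residual error sd [mg/l]

def conc_ss(t, dose, tau, cl, v, ka=KA):
    """Steady-state plasma concentration of a one-compartment oral model."""
    ke = cl / v
    return (dose * ka / (v * (ka - ke))) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * tau))
        - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

def map_auc(t_obs, c_obs, dose=400.0, tau=12.0):
    """MAP estimate of the 12-h AUC from sparse samples: AUC = dose / CL."""
    def neg_log_post(x):
        cl, v = np.exp(x)
        pred = conc_ss(np.asarray(t_obs), dose, tau, cl, v)
        lik = np.sum((np.asarray(c_obs) - pred) ** 2) / (2 * SIGMA ** 2)
        pri = ((x[0] - np.log(PRIOR_CL[0])) ** 2 / (2 * PRIOR_CL[1] ** 2)
               + (x[1] - np.log(PRIOR_V[0])) ** 2 / (2 * PRIOR_V[1] ** 2))
        return lik + pri
    res = minimize(neg_log_post, x0=np.log([PRIOR_CL[0], PRIOR_V[0]]))
    cl = np.exp(res.x[0])
    return dose / cl

# Single sample 2 h post-dose; the prior pulls the estimate to plausible values
print(map_auc(t_obs=[2.0], c_obs=[5.1]))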
Statistical analyses
Overall survival (OS) was defined as the time from treatment initiation to death (all causes). Survivors were censored at last follow-up. Progression-free survival (PFS) was defined as the time from treatment initiation to the first recorded evidence of progression. Patients without progression were censored at the date of last follow-up or death.
To retrospectively investigate the relation between clinical outcomes and drug exposure, three parameters were used: the AUC measured 1 month after treatment initiation, and the mean and maximal AUC (AUCmax) over the whole-treatment period.
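As a minimal illustration of how these three exposure metrics can be derived from longitudinal monitoring data, the following sketch computes them from a hypothetical per-measurement table; the column names and values are invented for the example.

```python
import pandas as pd

# Illustrative only: one row per AUC measurement per patient
pk = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2],
    "day":     [14, 28, 56, 14, 30],   # days since treatment initiation
    "auc":     [55.0, 71.0, 64.0, 48.0, 52.0],
})

# AUC measurement closest to 1 month after treatment initiation
auc_1m = (pk.assign(dist=(pk["day"] - 30).abs())
            .sort_values("dist")
            .groupby("patient").first()["auc"])

# Mean and maximal AUC over the whole-treatment period
summary = pk.groupby("patient")["auc"].agg(auc_mean="mean", auc_max="max")
summary["auc_1m"] = auc_1m
print(summary)
```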
As AUCs were not normally distributed, AUCs between groups were compared using a Wilcoxon rank-sum test. The correlation between the daily dose of sorafenib and the AUC was computed with Spearman's test. Response rates and toxicities were compared using Fisher's exact test. Survival curves were estimated using the Kaplan-Meier method and compared using the log-rank test. Univariate Cox proportional hazards models for PFS and OS were built to compute the hazard ratios (HRs), with their 95% confidence intervals (95% CIs), of potential baseline predictors. Potential baseline predictors tested for OS were as follows: sex, WHO PS (≥2), age (>59 years), AJCC stage, brain metastases, baseline LDH level (>ULN), time as metastatic disease (>15 months), number of previous treatment regimens (>2) and primary histological type. Variables tested for PFS included: sex, WHO PS (≥2), age (>59 years), AJCC stage, brain metastases, BMI (>25 kg m⁻²), primary histological type, time as metastatic disease (>15 months), number of previous treatment regimens (>2), baseline LDH level (>ULN), AUCmax (≥100 mg l⁻¹ h⁻¹) and early grade ≥2 adverse events (at 2 months), including diarrhoea, hand-foot skin reaction (HFSR), skin rash and hypertension, considered separately or jointly. Multivariate analyses were then conducted on all potential factors with a P-value <0.2 in univariate analysis, using a stepwise Cox model (variables entered at P<0.05 and removed at P>0.1). The median served as the cutoff point when continuous variables (mean and max AUCs) were separated into two groups.
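The sketch below reconstructs the main steps of this analysis plan in Python using scipy and lifelines, rather than JMP, which was the software actually used; the data frame, its column names and all values are invented toy data.

```python
import pandas as pd
from scipy.stats import fisher_exact, ranksums, spearmanr
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented toy data, one row per patient; column names are hypothetical
df = pd.DataFrame({
    "auc":             [55, 120, 64, 140, 48, 98, 75, 110, 60, 130],
    "daily_dose":      [800, 1200, 800, 1600, 800, 1200, 800, 1600, 800, 1200],
    "high_exposure":   [False, True, False, True, False, True, False, True, False, True],
    "disease_control": [0, 1, 0, 1, 0, 1, 1, 1, 0, 1],
    "pfs_months":      [2.5, 5.2, 2.0, 7.1, 1.8, 6.0, 3.1, 5.5, 2.2, 8.0],
    "progressed":      [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
})
hi, lo = df[df.high_exposure], df[~df.high_exposure]

# Non-parametric group comparison, correlation, and contingency test
print(ranksums(hi["auc"], lo["auc"]))
print(spearmanr(df["daily_dose"], df["auc"]))
print(fisher_exact(pd.crosstab(df["high_exposure"], df["disease_control"])))

# Kaplan-Meier estimate and log-rank test for PFS by exposure group
km = KaplanMeierFitter().fit(hi["pfs_months"], hi["progressed"], label="high exposure")
print(km.median_survival_time_)
print(logrank_test(hi["pfs_months"], lo["pfs_months"],
                   hi["progressed"], lo["progressed"]).p_value)

# Univariate Cox proportional hazards model for PFS
cph = CoxPHFitter()
cph.fit(df[["pfs_months", "progressed", "high_exposure"]],
        duration_col="pfs_months", event_col="progressed")
cph.print_summary()
```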
Missing data were not estimated or carried forward in any statistical analyses. All analyses were performed using JMP 8.0.2 (SAS Institute Inc., SAS Campus Drive, Cary, NC, USA). P-values were two-tailed and considered significant when ≤0.05.
Patient characteristics
A total of 30 patients with histologically confirmed metastatic melanoma were treated with sorafenib. Baseline patient characteristics are summarised in Table 1. The median daily dose was 800 mg (range: 400-2600 mg), and 19 patients (63%) underwent dose escalation (range: 600-2600 mg). The median duration of treatment was 2.9 months (range: 0.4-16.3).
Response and survival
Two patients discontinued treatment owing to severe toxicity before the first evaluation. Therefore, 28 patients were evaluable for response. One complete response and five partial responses were observed, including three confirmed responses. The overall response rate was 21% (95% CI: 6.2-36.6). The objective responses were assessed early, with a median time from treatment initiation of 2.3 months (range: 1.3-3.4 months). In all, 3 of 10 patients (30%) with cerebral metastases had cerebral partial responses. The median duration of confirmed response was 6.1 months. In total, 11 patients (39%) had stable disease, with a median duration of 4.4 months, for an overall disease control rate (PR + SD) of 61% (95% CI: 42.6-78.8).
Safety
A total of 18 severe adverse events (grade ≥3) occurred in 11 patients at the starting dose of 400 mg bid: 8 hand-foot skin reactions (HFSR), 5 skin rashes, 2 stomatitis, 2 hypertension and 1 fatigue. Sorafenib was discontinued in the four patients who experienced both grade 3 rash and HFSR, and then reintroduced at 200 mg bid. Despite this daily dose adjustment, the severity of toxicity was unchanged; the treatment was therefore definitively discontinued. The four patients with isolated grade 3 HFSR were able to continue sorafenib for up to 5 months with a 50% dose decrease.
During the dose escalation, only two patients discontinued sorafenib because of toxicity: a symptomatic grade 3 pancreatitis in the first case and a grade 4 diarrhoea in the second case. Dose escalation was associated with an increased rate of grade ≥3 diarrhoea (26% vs 3%, P=0.03) and anorexia (26% vs 3%, P=0.03). None of the other severe adverse events, in particular hypertension and HFSR, occurred more frequently during dose escalation (Table 2).
Pharmacokinetics
During the whole-study period, 216 sorafenib plasma concentrations were assessed (Supplementary Table 1). The median sorafenib AUC was 63 mg l⁻¹ h⁻¹ (range: 16-206). The median intra-patient variability was 31% (range: 7-71%) and the inter-patient variability was 45% at 400 mg bid. Sorafenib exposure did increase with dose (Spearman's test r=0.4, P<0.0001). Inter-patient PK analysis showed that the median AUC was higher at all doses ranging from 600 to 1200 mg bid relative to 400 mg bid (Figure 1). Relative to 600 mg bid, the median AUC did not increase at higher doses. Intra-patient PK analysis showed that dose escalation (range: 600-2600 mg) in 19 patients allowed a greater sorafenib exposure to be achieved in 13 (68%) of them (Figure 2). Two and four patients had stable and decreasing exposure, respectively.
The long-term drug exposure monitoring showed that the AUC rapidly reached its maximum after treatment initiation. The maximal AUC occurred during the first 2 months in 18 of 27 patients (67%), and the median time to reach the AUCmax was 36 days (range: 8-161 days). Sorafenib exposure tended to decrease over time in case of prolonged treatment. In 11 patients receiving sorafenib for >4 months, the AUC had decreased in the last part of treatment (after 90 days; 77 vs 61 mg l⁻¹ h⁻¹, P=0.002). One month after treatment initiation, the median sorafenib AUC was greater in patients with grade ≥2 hypertension compared with those with normal blood pressure (82 vs 54 mg l⁻¹ h⁻¹, respectively, P=0.02). Each measurement of sorafenib was compared with the simultaneous safety report (n=194 pairs). The median AUC was greater in case of grade ≥2 hypertension (84 vs 58 mg l⁻¹ h⁻¹, P<0.0001) and grade ≥2 HFSR (76 vs 61 mg l⁻¹ h⁻¹, P=0.0008). Besides, the AUC was not correlated with other adverse events such as diarrhoea, anorexia, and allergic and non-allergic skin rash. The rate of severe adverse events (grade ≥3) was not increased with AUCs ≥100 mg l⁻¹ h⁻¹ (Table 3).
Concerning the relation between plasma sorafenib exposure and efficacy, it was first noticed that five of the six responses occurred at 400 mg bid, but these patients had high exposure at this dose (with AUCs of 102, 101, 84 and 75 mg l⁻¹ h⁻¹ in four patients, and the AUC not available for the remaining patient). Then, the median AUCmax (100 mg l⁻¹ h⁻¹, range: 51-206 mg l⁻¹ h⁻¹) was used to classify patients into high- or low-exposure groups. Patients with high exposure had a higher probability of tumour control on target lesions (86% vs 50%, P=0.04, Figure 3), RECIST partial response or stable disease (80% vs 33%, P=0.02) and longer PFS (21 vs 10 weeks, P=0.005; HR=0.28 (95% CI: 0.11-0.72); Figure 4; Table 3). The Youden index of the receiver operating characteristic (ROC) curve of disease control relative to the AUCmax was 100 mg l⁻¹ h⁻¹ (data not shown). Maximal exposure had a positive impact on PFS in univariate analysis (Table 3), and this was confirmed by the multivariate analysis, as AUCmax ≥100 mg l⁻¹ h⁻¹ (HR=0.28 (95% CI: 0.11-0.72)) was the only significant variable associated with PFS (Table 3).
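The Youden index calculation referred to above can be illustrated as follows: the cutoff is the threshold that maximises sensitivity plus specificity minus one along the ROC curve of disease control against AUCmax. The arrays below are invented toy data, not the study values.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical per-patient arrays: disease control (1 = PR or SD) and AUCmax
disease_control = np.array([1, 1, 0, 1, 0, 0, 1, 0])
auc_max = np.array([120.0, 101.0, 60.0, 140.0, 75.0, 55.0, 98.0, 70.0])

# ROC curve of disease control vs AUCmax; Youden's J = TPR - FPR
fpr, tpr, thresholds = roc_curve(disease_control, auc_max)
j = tpr - fpr
best = thresholds[np.argmax(j)]
print(f"Optimal AUCmax cutoff (Youden index): {best:.0f} mg/l/h")
```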
Neither the AUC at 1 month after treatment initiation nor the mean AUC over the whole-treatment period was associated with a higher disease control rate (69% vs 46%, P=0.4, and 54% vs 64%, P=0.7, respectively) or a longer PFS (HR=0.94 (95% CI: 0.40-2.28) and HR=0.51 (95% CI: 0.19-1.24), respectively). The discrepancies between the three pharmacokinetic parameters (AUC at 1 month, mean AUC and AUCmax) were therefore investigated. Indeed, 6 (21%) and 8 (28%) patients were misclassified by the AUC at 1 month compared with the mean AUC and the AUCmax, respectively. Moreover, despite a low mean AUC, three responding patients had a high AUCmax, which could explain the clinical effect. Conversely, four patients with a mean AUC above the average but a low AUCmax did not respond to the treatment.
DISCUSSION
In this multi-institutional experience with sorafenib dose escalation in patients with metastatic melanoma, the main result was the positive correlation between AUCmax, objective response and PFS. Although modest in melanoma, sorafenib efficacy was directly correlated with exposure, as seen with sunitinib in RCC and GIST (Houk et al, 2010) or pazopanib in differentiated thyroid cancers (Bible et al, 2010). To go further, the changes in sorafenib clearance and bioavailability at doses >400 mg bid were described in a cohort of 71 patients treated with sorafenib in our institution, including the present series of melanoma patients (Hornecker et al, 2011). A one-compartment model with saturable absorption, first-order intestinal loss and first-order elimination best described the pharmacokinetics of sorafenib. Absolute bioavailability dropped significantly with increasing daily doses of sorafenib, and the AUC increased less than proportionally with increasing doses. Therefore, a split schedule of three times a day might overcome absorption saturation, thereby leading to a higher exposure (Hornecker et al, 2011). Notably, tumour type did not seem to influence sorafenib pharmacokinetics, and only albumin was found to influence sorafenib clearance at standard doses. Likewise, in an independent cohort (Jain et al, 2011), no clinically important PK covariates were identified.
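To illustrate the structural model described above, the sketch below integrates a one-compartment system with Michaelis-Menten (saturable) absorption, first-order intestinal loss and first-order elimination, and shows how the 12-h AUC then increases less than proportionally with dose. All parameter values are illustrative placeholders, not the estimates published by Hornecker et al (2011).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the published estimates)
VMAX, KM = 30.0, 200.0   # max absorption rate [mg/h], half-saturation [mg]
K_LOSS   = 0.05          # first-order intestinal loss [1/h]
KE, V    = 0.06, 100.0   # elimination rate [1/h], volume of distribution [l]

def rhs(t, y):
    gut, central = y
    absorption = VMAX * gut / (KM + gut)    # saturable uptake from the gut
    return [-absorption - K_LOSS * gut,     # drug remaining in the gut
            absorption - KE * central]      # drug in the central compartment

def auc_12h(dose_mg):
    sol = solve_ivp(rhs, (0.0, 12.0), [dose_mg, 0.0], dense_output=True)
    t = np.linspace(0.0, 12.0, 500)
    conc = sol.sol(t)[1] / V                # plasma concentration [mg/l]
    return np.trapz(conc, t)                # AUC over 12 h

# AUC rises less than proportionally once absorption saturates
for dose in (400, 600, 800, 1200):
    print(dose, round(auc_12h(dose), 1))
```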
In this series, the highest AUC (AUCmax) was correlated with antitumour efficacy, while the other PK parameters were biased by the dose-escalation schedule: the AUC at 1 month was measured too early, and the mean AUC did not reflect periods of high exposure, which were shown to be correlated with antitumour efficacy in our study. The Youden index of the ROC curve of disease control relative to the AUCmax was 100 mg l⁻¹ h⁻¹, suggesting that the highest exposures are responsible for efficacy. These properties of antiangiogenic treatments have been previously described and represented by a bell-shaped dose-response curve (Reynolds, 2009). Strikingly, only 15% of samples assessed at 400 mg bid had an AUC over 90 mg l⁻¹ h⁻¹, vs 36% of samples at 600 mg bid and above (P=0.0003). With a target AUC of 90-100 mg l⁻¹ h⁻¹, these results pinpoint that most patients are underexposed to sorafenib at 400 mg bid and that individualised dose adjustments would be required. In line with these results, a recent study has shown the superiority of sunitinib 50 mg daily 4 weeks out of 6 over a continuous daily dosing of 37.5 mg, pinpointing the need to reach a threshold exposure.

Figure 2: Effect of dose escalation on intra-patient sorafenib AUC (mg l⁻¹ h⁻¹). Median AUCs from 19 patients are represented (red: increased exposure; orange: stable exposure; green: decreased exposure).

Figure 3: Investigator-assessed tumour regression, i.e., maximum change from baseline in target lesion diameter (n=27). Patients with RECIST progressive disease are indicated by an asterisk (clear grey: AUCmax <100 mg l⁻¹ h⁻¹; dark grey: AUCmax ≥100 mg l⁻¹ h⁻¹).

Figure 4: PFS probability according to maximal exposure to sorafenib (AUCmax). Dotted line: patients with AUCmax <100 mg l⁻¹ h⁻¹; solid line: patients with AUCmax ≥100 mg l⁻¹ h⁻¹.
Long-term pharmacokinetic follow-up allowed us to detect that the AUC decreased over time, as previously described in hepatocellular carcinoma (Arrondeau et al, 2011). This unexpected result could explain the clinical efficacy of sorafenib dose escalation after failure at standard doses (Escudier et al, 2009) and argues for long-term pharmacokinetic follow-up. This decrease of the AUC over time could result from an increased expression of drug efflux pumps, as seen with imatinib (Burger et al, 2005). We therefore suggest validating, in a prospective trial, the AUC as a surrogate marker to tailor sorafenib dose adjustments, thereby avoiding increasing the sorafenib dose until intolerable toxicity. This approach could probably improve the therapeutic index of sorafenib in approved indications such as hepatocellular carcinoma and renal cancer.
The limitations of this study include the limited number of patients, the limited sampling strategy and the proportion of patients in whom the sorafenib standard dose was not tolerated. Dose escalation was feasible and no unexpected severe adverse event was seen, even in highly pretreated patients with brain metastases. Only two patients discontinued sorafenib during dose escalation. Several hypotheses on the pathogenesis of sorafenib-related adverse events could be raised. Indeed, toxicities could be classified in three categories according to their correlation with dose and exposure. Diarrhoea and anorexia were related to the sorafenib dose but not to its AUC. Regarding diarrhoea, this result is in line with a previous hypothesis assuming that intestinal toxicity may be due to a local effect of poorly absorbed drug. Indeed, the low solubility of sorafenib in aqueous media hampers its complete dissolution in the digestive tract at high doses. Thus, the fraction of sorafenib not absorbed could exert a direct toxic effect on enterocytes. Interestingly, patients with abnormal gastrointestinal function are prone to develop diarrhoea under sorafenib (Lauritano et al, 2009), and patients with abnormal liver function have a higher rate of diarrhoea without elevated exposure (Miller et al, 2009;Michels et al, 2010). As a consequence, diarrhoea per se may decrease sorafenib exposure, owing to reduced intestinal absorption and interruption of the entero-hepatic cycle.
Regarding the prediction of toxicity, hypertension and HFSR were related to the AUC in the present series. To date, only one pharmacodynamic study has identified a rare polymorphism of VEGFR-2 as a predictor of HFSR and hypertension (Jain et al, 2010). Regarding the prediction of efficacy, biomarkers have failed to select patients who would respond to sorafenib. The results of four independent trials indicate that the BRAF V600E mutation is not a predictive biomarker of response to sorafenib (Eisen et al, 2006;Flaherty et al, 2008;Amaravadi et al, 2009;Ott et al, 2010). We propose an optimised maximal AUC (>90-100 mg l⁻¹ h⁻¹) as an alternative predictor of the activity of sorafenib, as illustrated here in melanoma patients. Dose individualisation with drug monitoring might prevent underexposure to standard doses of sorafenib and favour antitumour activity in other tumour types. Dedicated phase II studies guided by pharmacokinetics are mandatory to prospectively confirm these results.
Primary palliative care team perspectives on coordinating and managing people with advanced cancer in the community: a qualitative study
Background Primary health care teams are key to the delivery of care for patients with advanced cancer during the last year of life. The Gold Standards Framework is proposed as a mechanism for coordinating and guiding identification, assessment, and support. There are still considerable variations in practice despite its introduction. The aim of this qualitative study is to improve understanding of variations in practice through exploring the perspectives and experiences of members of primary health care teams involved in the care of patients with advanced cancer. Methods Qualitative, semi-structured interviews, focus groups, and non-participatory observations involving 67 members of primary health care teams providing palliative care. Data were analysed using a grounded theory approach. Results We identified distinct differences in the drivers and barriers of community advanced cancer care coordination, which relate to identification and management, and access to effective pain management, and go some way to understanding variations in practice. These include proactive identification processes, time and resource pressures, unclear roles and responsibilities, poor multidisciplinary working, and inflexible models for referral and prescribing. These provide valuable insight into how professionals work together and independently within an infrastructure that can both support and hinder the provision of effective community palliative care. Conclusions Whilst the GSF is a guide for good practice, alone it is not a mechanism for change. Rather it provides a framework for describing quality of practice that was already occurring. Consequently, there will continue to be variations in practice. Electronic supplementary material The online version of this article (10.1186/s12875-018-0861-z) contains supplementary material, which is available to authorized users.
Background
Good provision of palliative care is a continuing clinical priority worldwide. People with advanced cancer are living longer and illness trajectories are changing [1,2]. Prevalence of symptoms, such as complex pain, are likely to increase [3], requiring more input and support from a range of health professionals over longer periods of time. Receiving care at home is of great importance for most patients [4]. In the UK, primary health care teams (PHCTs) are multidisciplinary teams which are intended to provide the majority of palliative care to community based patients in the last year of life (see Table 1 for how the organisation of care across the PHCT is configured). High quality community-based palliative care requires good multidisciplinary teamwork [5][6][7] and close working relationships [8].
The Gold Standards Framework (GSF) describes and brings together a number of evidence-based principles of practice as a guide for the care of palliative patients and their families [9]. It was intended to help PHCTs identify, assess, and support patients with palliative care needs. It can be incorporated into general practice infrastructures at various levels ( Table 2). Practices which achieve level 1, the maintenance of a register and regular palliative care meetings, are currently financially rewarded through the Quality and Outcomes Framework (QOF) [10]. Whilst the GSF is thought to have contributed to better-quality care, for example: in improving early identification of patients, there is still much room for improvement [11,12].
Although the majority of practices in England have adopted the GSF in some form, to at least level 1, there are well-recognised variations in practice [6,13] associated with shortcomings in the processes for coordinating, managing, and providing effective care [14,15].
Whilst there has been evaluation and measurement of the effects of using the GSF at every stage, it is hard to describe exactly what the benefits have been for those who have used it and whether it has facilitated good multidisciplinary working. Previous research has shown that there is considerable variation between practices in the effectiveness of interprofessional communication [10]. Practices categorised as 'high performing practices' are reported to display a clear shared purpose among staff for palliative care, whereas those categorised as 'minimal performing practices' demonstrate little utilisation of basic processes recommended in the framework and deficiencies in interprofessional communication. Effective primary palliative care requires good team relationships and communication [6]. We therefore explored the perspectives and experiences of PHCT members involved in the care of people with advanced cancer, to see whether these could improve understanding of variations in practice.
Design and participants
We conducted semi-structured interviews and focus groups with purposively selected PHCT members and non-participatory observations of multidisciplinary GSF meetings. We explored the perspectives and experiences of PHCT members who provide care to people with advanced cancer in the community to see whether these can improve understanding of variations in practice. For the purpose of this study, care provided in the community relates only to people with palliative care needs being cared for in the home. Advanced cancer is defined as active and non-curative. The research was conducted in Leeds, UK and was part of a research programme on the management of advanced cancer pain.
Data collection
First, L.Z. (an experienced psycho-oncology researcher) conducted five single-professional focus groups with key community professionals who covered the whole city: clinical nurse specialists (CNSs), community matrons, joint care managers, members of the complex and palliative continuing care service (CAPCCS), and GPs. These explored experiences of coordinating and managing people with advanced cancer in the community using a topic guide, which was based upon a review of the current literature at that time (Additional file 1). This provided a comprehensive picture of practice across the city and suggested that effective cancer pain management extended beyond individual professional practice, skills, or experience to broader barriers related to the organisations and systems within which professionals work.
Subsequently, as a sampling strategy, we asked practice managers and GP leads across all general practices in Leeds a series of questions based on the levels of the GSF to determine practice approaches to coordination and management. This allowed us to identify different models of advanced cancer care coordination. We purposively selected six of these practices, representing two different approaches to care, which we categorised as high and minimal performers according to their responses, and contacted them by letter/follow-up telephone call. Three responded and gave written consent to allow non-participant observations of multidisciplinary GSF meetings (one at each practice). The aim of these observations was to build up a picture of how care of patients with advanced cancer is routinely organised, how decisions are made about assessment of pain, how this is communicated between professionals, how strategies for management are devised and which professionals are seen as key in delivering support and care. Notes on these aspects and on non-verbal communication were taken by the observer (J.H., an experienced health sciences researcher) during these meetings and subsequently expanded.

Interviews were conducted by J.H. with individuals identified during GSF meetings as being most involved in coordinating care. Interviews explored experiences of managing people with advanced cancer in the community using a topic guide, which was based upon a second review of the literature and findings from the initial focus groups (Additional file 2). Informed written consent was obtained from all individual participants included in the study. All focus groups and interviews took place in participants' place of work, during working hours; no one else was present. Participants were not known to the researchers prior to the study.
Data analysis
Interviews and focus groups were audio-recorded and transcribed verbatim. Data were collected over two periods, from June to August 2010 and from April 2013 to September 2014. Each participant was given a unique identifier to maintain confidentiality. We adopted a grounded theory analytic approach [16,17], as it provides a means to construct a theory-driven understanding of health professionals' behaviour and the factors that influence it. This combined concurrent data collection and analysis with modification of the topic guide to pursue emerging lines of enquiry. Debrief meetings for researchers took place after each interview, providing space to reflect on the interview process and explore initial ideas. Further steps, carried out by authors separately and together, were familiarisation with the whole data set through multiple readings of transcripts, open and focused coding, memo writing and engagement with the literature, to facilitate the development of categories and concepts. Constant comparison and searching for negative cases were used throughout, whereby data segments and the developing codes and categories were compared both within cases, to identify the sequencing of events and how these were understood and acted upon, and between cases, to examine variations between participants. Throughout data collection and analysis, data, codes and concepts were discussed within the research team (of varied disciplinary backgrounds: psychology, health sciences, sociology, and academic palliative care medicine) and with the wider steering group. The latter included patient representatives, clinicians and academics.
Results
In total, 67 health professionals took part. Five single-professional focus groups comprised 27 health professionals: six GPs, eight CNSs, five joint care managers, four members of the complex and palliative continuing care service, and four community matrons. Twenty-four general practice managers and/or GP leads responded to a series of questions about practice approaches to the coordination and management of people with advanced cancer. Three practices were then selected (Tables 3 and 4) for non-participatory observation, involving a total of 32 health professionals. Eight interviews were then conducted with professionals: three GPs, three CNSs, and two district nurses. The mean focus group length was 31 min (range: 27 to 36 min) and the mean interview length was 52 min (range: 27 to 64 min). Total observation time was 145 min.
There were distinct differences in the drivers and barriers within the two models of community advanced cancer coordination. Each practice provided the system within which examples of multidisciplinary working, implementation of policy, and professional behaviours and attitudes could be mapped and subsequently compared. Drivers and barriers for these distinct models of operation include proactive identification processes, time and resource pressures, unclear roles and responsibilities, poor multidisciplinary working, and inflexible models for referral and prescribing. These are now presented within two key themes: identification and management, and access to effective pain management, with three subthemes: prescribing restrictions, home visits, and referrals.
Identification and management
A proactive approach, with early identification of and care planning for patients, was key to coordination and management. Proactivity demonstrated whether practices viewed people with advanced cancer as different, or not, from the general practice population and therefore requiring specific, flexible input. High performing practices, such as A and B, adopted formal proactive processes to ensure identification and monitoring of patients who were not previously known to GPs. They would initiate contact and, in doing so, commence assessment, monitoring and management of pain.
If I receive a letter from the hospital that tells me somebody's got a new diagnosis, I make contact. Then the obvious triggers like multiple hospital admissions, we look at that, and because of the evidence you'll identify your 1% that way so if somebody's bouncing in and out of hospital… Over time we've found that the register's grown quite significantly, so we try to just discuss patients who are at the amber to red end of the traffic light system. (GP 8, Practice B)

These practices were engaged with and had ownership of the GSF, demonstrated through their management of the palliative care register and GSF meetings, and had clear roles and responsibilities within the primary palliative care team. For example: in practices A and B, the palliative care lead GP, both of whom had a special interest in palliative care, was the nominated coordinator of all work relating to the GSF; they led the meeting, and GPs generated and maintained the register personally, ensuring that all eligible patients were recorded. Patients mentioned at the meeting by the CNS or district nurse were already known to GPs, and new referrals were noted down and recorded.
Through this system they not only had ownership of the GSF, but also achieved level 1. They also adopted flexible working practices; for example: a buddy system was used, whereby patients had a second named GP to account for periods of annual leave or part-time working. Meetings consisted of discussions on patients' status, managing needs, bereavement support, and significant event or after-death review to re-appraise the appropriateness of response, thereby achieving levels 2 and 3 of the GSF. Practice C did not differentiate between people with advanced cancer and other patients, did not regard them as requiring any special input, and therefore failed to adopt any flexible working practices. Observations highlighted a lack of ownership over the GSF and unclear roles and responsibilities within the primary palliative care team. For example: although they had a nominated coordinator to oversee all work relating to the GSF, this was the CNS, and not the palliative care lead GP; as a result, the CNS led the meeting and identified patients for the register. Patients identified by the CNS and district nurse were not already known to the practice. There were no formal, proactive processes for identifying patients embedded as standard practice.
I try and actively encourage them to look at the palliative care register in terms of getting additional people on. They really don't engage with that. I've asked several times "do you know if we've any other patients to discuss?" and they don't have a system. They do have a list, I'm just not sure how up to date that is and they've said they don't want a long list, there'd be loads of patients. And we're not really meant to lead the meeting, it should be the GP. I'm meant to be there as an additional person to contribute, and they just listen to me and chip in, it's meant to be the other way round. (CNS 9, Practice C)

Only patients already registered with the palliative care service were on their register. Although discussions about updates on patients' status and how to manage needs occurred, these were instigated by the CNS and district nurse, who felt these discussions were time limited and not seen as a priority. Consequently, although aspects of levels 2 and 3 of the GSF were covered, the impetus for them came from specialist practitioners, demonstrating a lack of ownership from the practice.
Access to effective pain management
Patient access to effective cancer pain management was highly variable and influenced by professional priorities and complex practice level policies. The levels to which the GSF was adopted led to variations in effective pain management activities, in particular: prescribing restrictions, home visits, and referrals.
Prescribing restrictions
Many general practices operate a practice-level policy whereby non-cancer patients are required to wait 48 h before prescriptions can be collected; however, such policies may be flexible based on patient need. Practice C was relatively inflexible in its prescribing policy for people with advanced cancer. This limited the potential for a timely response to changing pain needs. The importance of this barrier relates not only to the need to offer fast, accessible services to cancer patients, but also to the fact that some cancer patients only visit their practices to obtain repeat prescriptions, and this represents their only encounter with primary care. This can influence their confidence in, and opinion of, their GP when care is transferred at a later stage of their illness.
I had a very elderly lady on absolutely huge doses… but she still needed quite a lot of breakthrough analgesia, I went to see her and I rang the GP. "This is what I need, I need it today…", "Yes, it will be ready". When her husband went to pick it up, it wasn't ready and they said to him, "No, you will have to wait 'til Monday now". And then over that weekend they tried to struggle through with the OxyNorm and he had to get the emergency doctor out and his family had to drive round looking for a chemist that was open for these drugs. (CNS 8, focus group)

They also maintained a tight rein on prescribing budgets and minimised costs wherever possible. Medications prescribed on discharge from hospital were changed by GPs to explore whether a cheaper alternative might be effective. In an attempt to minimise potential wastage, patients were given limited supplies of anticipatory drugs. These universal cost-saving prescribing practices were described in the context of patients with advanced cancer and demonstrated that, despite such a diagnosis, the policy was not modified. Advanced cancer patients were treated in the same way as other patients in judgements made about prescribing.

I reckon if someone isn't long for this world I would give them two vials of something to get them through the weekend and I'll see if they're still here on Monday. And then I'm more than happy to give them more. Otherwise I think there's too much wastage in the NHS. (GP 12, Practice C)

In contrast, others believed that there was no justification for trying to limit costs, particularly given the limited length of time the patient would require the drug and that the greatest costs to the NHS would be having to fast-track a patient or an unplanned hospital admission. The ethos underpinning policy and practice here was that palliative care requires effective management of symptoms. In the multidisciplinary focus groups and interviews with CNSs, there was agreement that patients with advanced cancer should be prioritised in terms of access to the most effective and tolerable pain control, irrespective of cost. This suggests that minimising the cost of analgesia in advanced cancer patients is not a high priority for all, and in this instance the policy appeared to be generated and maintained at a practice level.
I think cost does have a bearing, whereas I probably don't give a great deal of consideration to costing. I know that some GPs do, some of the drugs they're maybe not too keen to try and some will even suggest that they would prefer to go for something else as an alternative and they will mention costing. (CNS 7, focus group)

In instances where GPs were known to be difficult to work with, poor multidisciplinary working was evident, as CNSs recounted times when they went around them instead of negotiating with them.
There are always some GPs that you feel might be less amenable to prescribing for various reasons, if you felt that the GP wasn't willing to do what you thought was appropriate and couldn't give you a valid reason in your mind as to why, then in essence we would bypass the GP if we felt we couldn't negotiate with them. (CNS 8, focus group)

However, this then has repercussions for overall coordination and management, particularly the monitoring of symptoms and care planning.
Home visits
Patient access to effective cancer pain management was highly variable and influenced by professional priorities and complex practice and service level policies and pressures. One such practice-level policy relates to GPs undertaking home visits, especially within the context of perceived increasing pressures and demands on time. Within practices A and B, GPs initiated and negotiated their involvement and viewed themselves as being involved in patient care from diagnosis to death.
I think most people who are having palliative care at home will see their GP as the coordinator of that care. A GP is key to these patients, I do think we're very important. We ask them what they want, I don't force the issue, you know you can say to me if you want me to, "I would happily see you on a monthly basis" or whatever. In their last weeks to months of life when you have known them very well, I think you are a befriender as much as a clinician. (GP 8, Practice B)

In contrast, Practice C was reactive in its approach to care, evidenced by a lack of initiating or actively maintaining involvement with patients. Home visits were only undertaken in exceptional circumstances and were identified as a nurse's responsibility rather than the GP's.
We run more of a demand led system so it's up to the patient to ask, to make appointments, we don't have capacity really, if patients can come to the surgery we encourage it… Once they get more poorly, I think the district nurses and the Macmillan team take over more, we have more to do with people when they're able to come to see us. (GP 7, Practice C)

Others highlighted home visits as a high priority, with GPs and nurses proactively advocating for joint visits as a key component of effective pain assessment and management. Sometimes it was necessary to spend time with patients and their relatives, perhaps over consecutive days, to fully understand the nature of their symptoms. This also contributed to building a relationship, facilitating disclosure of the reality of patients' pain, and providing psychological support.
I think personally it's nice to see them face to face. It's a lot nicer to see them in their own home, you probably feel like you've got more time, they're feeling more comfortable and a bit more secure to talk about things that they're not happy to talk about, it's more on their terms then. (GP 8, Practice B)
Referrals
GPs were highlighted by CNSs as the main referrers to their service. However, owing to a lack of proactive identification processes, instances were recounted where patients had been missed and therefore not referred.
Quite often they'll say on the letters 'may be worth involving palliative care' but it still relies on the GP seeing and deciding to do that. I'm thinking of that patient we saw at home, when you got the information from the oncologist the last two or three letters to the GP had suggested that there should be a referral to palliative care and the GP just hadn't done it. (CNS 4, focus group)

Poor multidisciplinary working was also evidenced by CNSs describing cases where patients were referred to their service without their GP knowing.
By far the majority of the referrals come from GPs to the CNSs. There's occasional one that a consultant will refer direct to me from an outpatients and I will look at this and think why have you done that? (CNS 1, focus group)

This had consequences for patient access to other services, particularly district nursing, who are supposed to provide general palliative care alongside the GP. These services were then at a disadvantage, as they had been brought in late to patient care.
It just doesn't happen, we get the referral and then we then refer to the DNs. They're a really vital role of that consistent support monitoring and holistic care for them and we're coming in focussing on their specialist needs, not their general. If they don't have that district nurse, they often end up in crisis and then just come straight to us. Whereas, any district nurses then are deskilled in the palliative care. (CNS 9, Practice C)

Consequently, this lack of coordination and poor multidisciplinary working resulted in instances where there was no coherent pathway for patients to navigate through the services.
Discussion
We explored the perspectives and experiences of PHCT members who are involved in the multidisciplinary care of people with advanced cancer, to see whether these improved understanding of variations in practice. Within our research, Practices A and B were identified as high-performing sites, whilst Practice C was identified as a minimal performer. Where this paper adds knowledge is that there were distinct differences in the drivers and barriers within these models of community advanced cancer coordination, which help explain variations in practice. Practices A and B adopted formal proactive processes for identifying advanced cancer patients. They had clear roles and responsibilities within their primary palliative care teams, which enabled good multidisciplinary working, and adopted flexible approaches to care, evidenced in particular by their attitudes to referrals, home visits, and prescribing. Within these practices, we identified that if professionals adopted a flexible approach to care, the scope to deliver effective individualised pain management was enhanced. Practice C, however, was reactive in its approach to coordinating and managing care for these patients. There were no formal mechanisms in place for identifying patients; there were unclear roles and responsibilities within the team, which impacted upon multidisciplinary working; there was evidence of time and resource pressures at a practice level; and there were inflexible models for referrals and prescribing. This reactive, inflexible, and untimely approach to care meant that individualised care was difficult to achieve.
The extent to which a practice subscribes to a palliative care philosophy appeared to be fundamental to the provision of effective advanced cancer care coordination and management. We demonstrate how professionals within a multidisciplinary team work within an infrastructure that can both support and hinder the provision of effective community palliative care. Practice C operated within an ethos that did not differentiate the specific needs of people with advanced cancer, and this inhibited a flexible approach to pain management. Adopting a universalist approach with people with advanced cancer had unintended consequences for coordination and management. The policies and procedures to which they adhered appeared to provide a structured mechanism for decision making in relation to pain management and diverted practice away from the subjective and interactive processes related to pain management evident in Practices A and B. However, when such a policy is formalised within a practice setting, the scope for individual professional perspectives and subsequent variation in practice becomes limited, even when the professional was not part of the devising body. In addition, the operation of inflexible prescribing restrictions is a policy that could result in unintended consequences that are potentially more costly to the NHS, such as unplanned hospital admissions. Practice C illustrates how a not uncommon set of external and internal constraints [18] can have unintended consequences for pain management. Any strategy to support practices in improving pain management must be informed by an understanding of such constraints.
Challenges to effective multidisciplinary team working are evidenced by nurses' accounts of the difficulties in working with GPs, reporting power relationships and implicit and explicit rules governing the process of inter-professional work [6,18]. These dynamics sit at the core of providing effective primary palliative care. Professional identities and organisational structures affect coordination and management because these are key aspects of effective teamwork. The GSF may have been intended to provide a framework to guide care and a toolkit for the coordination and management of advanced cancer care; however, our data illustrate that, despite the financial incentives associated with it, it is inadequate in recognising the complexity of practice and the implementation of change. Instead of providing a mechanism for change, we suggest that it provides a framework for describing quality of practice that was already occurring. It is a guide for good practice, but it fails to describe an implementation approach and therefore cannot itself change practice.
In high-performing practices, GPs were proactive in identifying and coordinating care in order to aim for continuity with patients [19,20]. They were engaged with and took ownership of the GSF and had clear roles and responsibilities. Although engagement and continuity are key, the workload of primary care is growing [20,21], with more GPs working part-time and out-of-hours care more frequently provided by health professionals who are unfamiliar with the patient [22]. Our findings show how these developments can be overcome by providing proactive care and putting flexible systems in place to take account of these changes, for example: having a second named GP to cover periods of annual leave or part-time working. Future developments must recognise the changing landscape of primary care to enable adaptation.
Timely referrals were highlighted as enabling professionals to develop relationships with patients and their families earlier, enhancing the ability to deliver effective individualised patient care and enabling continuity [6]. The current focus of the GSF on the last year of life does not take account of the shifting trajectory of advanced cancer, including the increasing need for input and support over longer periods of time [1]. We question how care should be initiated and coordinated when different members of the PHCT enter and exit patient care at different points, therefore have different levels of engagement, and view the meaning of palliative care from the perspective of their own input. Standardised definitions of roles and responsibilities are needed [20,23].
The way professionals, policies, and services within the UK primary care system interact is dynamic and complex, with many aspects of exactly how this occurs remaining unclear. This lack of clarity is likely to be due to the considerable variability in how the three components of the system interact, specifically the variability in the level of engagement between: generalists and specialists; professionals and patients; and professionals, policies, and service-level initiatives. This means that although we are beginning to understand the component parts of this system, we do not fully understand the whole. For example, one of our study practices clearly recognised or perceived strong pressures to control costs and demand, and is unlikely to be atypical in doing so. This has implications for developing and targeting interventions. The recognition that advanced cancer coordination, management, and pain management in primary care occur within a multidisciplinary team suggests that an intervention embedded at a professional or service level alone will struggle to be effective. Further measures to improve continuity and coordination need to be developed through close working with a range of practices with varying abilities to respond to clinical policy frameworks.
Strengths and limitations
Exploring the involvement of all members of the PHCT allowed us to gather a wide range of views and subsequently focus on those most involved in care. A limitation is that the study took place in one UK city and that we were unable to recruit more practices to take part in the observations and associated interviews, particularly from more deprived areas. Future research could explore these drivers and barriers within a larger number of practices, representing a wider range of deprivation. Whilst this could be a potential limitation of the analysis, the themes that emerged concerning the organisation of care resonate with those reported more widely [24,25]. We identified two key contrasting approaches; although these may not be the only models of advanced cancer coordination and management, they illustrate and highlight drivers and barriers that can shape variation in practice. Secondly, our study spanned a change in the structure of primary care, when clinical commissioning groups replaced primary care trusts. Local priorities may have changed; however, this was not evident within our findings. The significance of our study is that it provides insight into the specific practice cultural and organisational factors that shape the interpretation of policies and subsequent practice.
Conclusion
We identified distinct differences in the drivers and barriers within these models of community advanced cancer care coordination. These provide valuable insight into how professionals work together and independently within an infrastructure that can both support and hinder the provision of effective community palliative care. Whilst the GSF is a guide for good practice, it fails to describe an implementation approach and is therefore not a mechanism for change. Consequently, there will continue to be variations in practice. If general practices remain purely reactive in their approach to care, this will have unintended consequences for coordination and management. Overcoming these issues is key to ensuring the provision of effective community palliative care.
Description and evaluation of the community aerosol dynamics model MAFOR v2.0
Numerical models are needed for evaluating aerosol processes in the atmosphere in state-of-the-art chemical transport models, urban-scale dispersion models, and climatic models. This article describes a publicly available aerosol dynamics model, MAFOR (Multicomponent Aerosol FORmation model; version 2.0); we address the main structure of the model, including the types of operation and the treatments of the aerosol processes. The model simultaneously solves the time evolution of both the particle number and the mass concentrations of aerosol components in each size section. In this way, the model can also allow for changes in the average density of particles. An evaluation of the model is also presented against a high-resolution observational dataset in a street canyon located in the centre of Helsinki (Finland) during afternoon traffic rush hour on 13 December 2010. The experimental data included measurements at different locations in the street canyon of ultrafine particles, black carbon, and fine particulate mass PM1. This evaluation has also included an intercomparison with the corresponding predictions of two other prominent aerosol dynamics models, AEROFOR and SALSA. All three models simulated the decrease in the measured total particle number concentrations fairly well with increasing distance from the vehicular emission source. The MAFOR model reproduced the evolution of the observed particle number size distributions more accurately than the other two models. The MAFOR model also predicted the variation of the concentration of PM1 better than the SALSA model. We also analysed the relative importance of various aerosol processes based on the predictions of the three models. As expected, atmospheric dilution dominated over other processes; dry deposition was the second most significant process. Numerical sensitivity tests with the MAFOR model revealed that the uncertainties associated with the properties of the condensing organic vapours affected only the size range of particles smaller than 10 nm in diameter. These uncertainties therefore do not significantly affect the predictions of the whole of the number size distribution and the total number concentration. The MAFOR model version 2 is well documented and versatile to use, providing a range of alternative parameterizations
Introduction
Urban environments can contain high concentrations of aerosol particle numbers as a result of the emissions from local sources, most frequently vehicular traffic (Meskhidze et al., 2019), ship traffic (Pirjola et al., 2014), airports (Zhang et al., 2020), industrial emissions (Keuken et al., 2015), or all of these sources. The majority of the urban aerosol particles, in terms of number concentration, are ultrafine particles (UFPs), having aerodynamic diameters less than 100 nm (e.g. Morawska et al., 2008). UFPs exhibit high deposition efficiency and large active surface area, and they are often associated with toxic contaminants, such as transition metals, polycyclic aromatic hydrocarbons, and other particle-bound organic compounds (Bakand et al., 2012). Owing to their small size, inhaled UFPs can penetrate deep into the human lungs, deposit in the lung epithelium, and translocate to other organs. Long-term exposure to UFPs negatively affects cardiovascular and respiratory health in humans (Wichmann and Peters, 2000;Evans et al., 2014;Breitner et al., 2011). Sub-micrometre soot particles emitted from diesel engines, mainly consisting of light-absorbing black carbon (BC), other combustion-generated carbonaceous materials, and condensed organics (Kerminen et al., 1997), often dominate the absorption of solar light by aerosols, thereby influencing the visibility in urban areas (Hamilton and Mansfield, 1991). The physicochemical characteristics of UFPs and their dynamic evolution also play an important role in changing the optical properties, as UFPs quickly coagulate with each other and with larger particles, or grow by the condensation of vapours into the size range of cloud condensation or ice nuclei, thereby affecting the indirect climate effects of atmospheric aerosol by regulating cloud formation and cloud albedo, as well as changing the precipitation processes (Andreae and Rosenfeld, 2008).
In urban areas, the temporal variation and spatial inhomogeneity of both the particle number (PN) and particulate matter (PM) concentrations are closely linked to local meteorology and traffic flows (e.g. Kumar et al., 2011;Singh et al., 2014;Kukkonen et al., 2018). For example, particle concentrations in street canyons can be several times higher than in unobstructed locations. PN concentrations in a street canyon depend upon traffic characteristics, building geometry, turbulence that can be induced by traffic, the prevailing winds, and atmospheric stability (e.g. Kumar et al., 2009). However, measurements of particle number and size distributions in urban environments are scarce, and the complexity of the urban environment prevents extrapolation from single point measurements to the wider urban area.
A key challenge in applying aerosol process models is the scarcity of reliable and comprehensive emission data. Kukkonen et al. (2016) presented an emission inventory for particle numbers (PN) in the whole of Europe and in more detail in five target cities. The modelled PN concentrations (PNCs) were compared with experimental data on regional and urban scales. They concluded that it is feasible to model PNCs in major cities with reasonable accuracy; however, there were major challenges, especially in the evaluation of the PN emissions. The rapid transformation of freshly emitted aerosol particles by condensation and evaporation, coagulation, and dry deposition was also found to pose challenges for dispersion modelling on the urban scale.
A substantial fraction of state-of-the-art chemical transport models contain treatments of aerosol processes (e.g. Kukkonen et al., 2012). However, only a limited number of urban dispersion models can deal with PN dispersion and processes affecting the particle size distribution, especially addressing the modelling of the dispersion of particles in complex urban terrain, such as street canyons (Gidhagen et al., 2004). This has been partly caused by the large effort toward model development that is necessary to implement size-resolved aerosol and particle dynamics models in urban modelling systems.
Modelling of particle transformation in parallel with plume dispersion is necessary to represent the evolution of the particle number and mass size distribution from the point of emission to the point of interest. Since the particle size and composition evolve on a short timescale, it is important to examine the evolution near the source at high spatial and temporal resolution. Modelling studies examining the evolution of particle emissions have used zero-dimensional (0-D) models (Vignati et al., 1999; Pohjola et al., 2003, 2007; Karl et al., 2016), one-dimensional (1-D) models (Fitzgerald et al., 1998; Capaldo et al., 2000; Boy et al., 2006), two-dimensional (2-D) models (Roldin et al., 2011), and three-dimensional (3-D) models (Gidhagen et al., 2005; Andersson et al., 2015). Jacobson and Seinfeld (2004) have modelled the near-source evolution of multiple aerosol size distributions with a 3-D chemistry-transport model (CTM) over a high-resolution limited-area grid; however, only a few minutes were simulated. Long-range aerosol transport models coupled with numerical weather prediction models can be used to trace the mass and number concentrations of aerosols from point source emissions at the surface and at different vertical levels (Sarkar et al., 2017; Chen et al., 2018). The size distribution of emissions in large-scale models can only be approximated because they need to take into account the size distribution of the primary emitted particles at the point of emission and the ageing processes that occur at sub-grid scales (Pierce et al., 2009). Higher temporal resolution is therefore necessary to better characterize primary and secondary particle sources. Computational fluid dynamics (CFD) models, notably building-resolving large eddy simulation (LES) models, are advantageous in simulating the airflow and dispersion of air pollutants in urban areas. Until now, only a few LES models have included modules for treating aerosol particles and their dynamics (Tonttila et al., 2017; Kurppa et al., 2019; Zhong et al., 2020). The implementation of aerosol dynamics into LES models increases their computational load tremendously.
Lagrangian approaches to the fluid flow are often employed in 0-D models that combine a vehicular plume model with an aerosol dynamics model in order to assess the impacts of coagulation, condensation of water vapour, and plume dilution of the particle number size distribution (e.g. Pohjola et al., 2007). On the urban scale, application of Lagrangian models is limited because of the large variability of emission sources and because they do not account for different wind speed or direction at different altitudes. However, the Lagrangian approach is advantageous for the examination of exhaust plumes in street environments, as it allows for the inclusion of more details on the representation of the aerosol dynamics and gas-phase chemistry than would be possible in a 3-D CTM. The traffic exhaust plume can be considered an isolated air parcel moving with the fluid flow, without mixing with other air parcels on the neighbourhood scale.
The Multicomponent Aerosol FORmation model MAFOR (Karl et al., 2011) is a 0-D Lagrangian-type sectional aerosol process model, which includes multiphase chemistry in addition to aerosol dynamics. It was originally developed to overcome the limitations of monodisperse models with respect to the simulation of continuous new particle formation in the marine boundary layer. Later, the model was extended with a module for dilution of particles in urban plumes with particles from background air. The aerosol dynamics module of MAFOR simultaneously solves the time evolution of particle number concentration and mass concentration of aerosol components in each size section in a consistent manner. The model allows for changes in the average density of particles and represents the growth of particles in terms of both the particle number and mass.
The aerosol dynamics in MAFOR are coupled to a detailed gas-phase chemistry module, which offers full flexibility for the inclusion of new chemical species and reactions. Many aerosol dynamics models are designed to be coupled with a separate gas-phase chemistry module when implemented in atmospheric 3-D models. However, there are only a few other aerosol dynamics models for use in atmospheric studies that inherently integrate gas-phase chemistry together with aerosol processes as a function of time. Examples are ADCHEM (Roldin et al., 2011) and AEROFOR (Pirjola, 1999; Pirjola and Kulmala, 2001), which both use the same kinetic code, originally representing a modified EMEP chemistry scheme (Simpson, 1992). An advantage of AEROFOR is that it allows for multicomponent condensation to an externally or internally mixed particle population. AEROFOR has been applied to study aerosol dynamics and particle evolution under different atmospheric conditions such as arctic, boreal forest, and marine environments (e.g. Pirjola et al., 2002, 2004; Kulmala et al., 2000) as well as for the study of diesel exhaust particles under laboratory conditions (Pirjola et al., 2015). However, the model has limitations with respect to the treatment of particle-phase chemistry and does not solve mass concentration distributions as a function of time.
MAFOR has been proven to be particularly useful for studying changes in the emitted particle size distributions by dry deposition (to rough urban surfaces), coagulation processes, considering the fractal nature of soot aggregates, and condensation and evaporation of organic vapours emitted by vehicular traffic. The model is very versatile in its application: due to its modular structure, the model user can switch the different aerosol processes on and off or use alternative parameterizations for the same process, depending on the research question.
The first objective of this paper is to present the model's structure, the treatment of aerosol processes, the coupling to multiphase chemistry, and the main updates compared to the first publication of the model (version 1, in Karl et al., 2011). The second objective of the paper is the evaluation of the performance of MAFOR version 2 with respect to its ability to predict particle number and mass size distributions.
Several of the new features of MAFOR version 2 were investigated in numerical scenarios and compared to reference data. Specifically, they included the evaluation of (1) the model's sectional representation of the aerosol size distribution in a scenario of new particle formation in urban areas (Case 1; Sect. S2 in the Supplement), (2) Brownian coagulation under the condition of continuous injection of nanoparticles (Case 2; Sect. S3), (3) the dynamic treatment of semi-volatile inorganic gases by condensation and dissolution (Case 3; Sect. S4), and (4) a new parameterization for nucleation in the case of neutral and ion-induced particle formation (Appendix H).
The main performance evaluation of MAFOR version 2 is addressed in a real-world scenario of a street canyon environment in comparison with other aerosol process models and experimental data. In combination with the plume dispersion module, MAFOR version 1 has previously been evaluated against PN measurements at a motorway (Keuken et al., 2012) and against observed particle size distributions in the exhaust plumes of passenger ships arriving at or leaving a ferry terminal (Karl et al., 2020). The real-world scenario in the present study focuses on the application of MAFOR version 2 for plume dispersion in a street canyon based on a published dataset of observations (Pirjola et al., 2012); from now on, it is referred to as the Urban Case. Results from the MAFOR model are intercompared to the aerosol process models AEROFOR and SALSA (Kokkola et al., 2008). The relative importance of aerosol dynamic processes in this scenario is evaluated for the three models, with the dispersion-coagulation model LNMOM-DC (Sarkar et al., 2020) as a reference for the relevance of coagulation. The performance of the aerosol dynamics models is evaluated based on defined criteria, such as statistical performance indicators, computational demand, and the number of model output variables.
Section 2 describes the structure of the community aerosol dynamics model MAFOR version 2, the included physical and chemical processes, and their numerical solution. In addition, previous applications of the model are summarized and the new setup for modelling of the particle evolution in a street canyon is introduced. Section 3 presents the methods and the experimental data that are used for evaluation of the model in the Urban Case scenario. Section 4 discusses the results from the evaluation and from the comparison with other aerosol dynamics models.
2 Model description

MAFOR v2.0 is available as an open-source community aerosol model. The publication of MAFOR v2.0 as a community model is driven by the intention to provide both newcomers and experts in aerosol modelling with an easy-to-use stand-alone aerosol box model. A consortium of aerosol scientists guides the development of the community model. For application in atmospheric studies, apart from SALSA (Kokkola et al., 2008) and PartMC (Riemer et al., 2009), there is no other aerosol dynamics model to date that is available as open-source code. In recent years, several aspects of the MAFOR model have been revised and updated with aerosol process parameterizations published in the peer-reviewed literature. The main new features of MAFOR v2.0 compared to the original version (MAFOR v1.0, Karl et al., 2011) are the following:

1. coupling to the chemistry sub-model MECCA (Module Efficiently Calculating the Chemistry of the Atmosphere) of the community atmospheric chemistry box model CAABA/MECCA v4.0 (Sander et al., 2019);

2. extension of the Brownian coagulation kernel to consider the fractal geometry of soot particles, van der Waals forces, and viscous interactions;

3. inclusion of new nucleation parameterizations for neutral and ion-induced nucleation of H 2 SO 4 -water particle formation (Määttänen et al., 2018a, b) and H 2 SO 4 -water-NH 3 ternary homogeneous and ion-mediated particle formation (Yu et al., 2020);

4. the Predictor of Nonequilibrium Growth (PNG) scheme (Jacobson, 2005a), implemented and linked with the thermodynamic module MESA (Zaveri et al., 2005b) of the MOSAIC model (Model for Simulating Aerosol Interactions and Chemistry; Zaveri et al., 2008) to enable dynamic dissolution and evaporation of semi-volatile inorganic gases; and

5. absorptive partitioning of organic vapours to form secondary organic aerosol (SOA), following the formulation of the two-dimensional volatility basis set (2-D VBS; Donahue et al., 2011), within the framework of dynamic condensation and evaporation.

The model can be run in three different types of operation: (1) simulation of an air parcel extending from the surface to the height of the planetary boundary layer (PBL) for multiple days along a given air mass trajectory or as a box model at a single geographic location, assuming a well-mixed boundary layer and clear-sky conditions (as a variation of this operation type, the multiphase chemistry during a fog cycle with pre-defined liquid water content and pH value of the fog and cloud can be simulated); (2) chamber experiment simulation assuming homogeneous mixing of constituents in a defined air volume for a given chamber geometry, considering sink and source terms of gases to and from chamber walls, deposition of particles to chamber walls, and constant dilution by replenishment of air; and (3) plume dispersion simulation that considers the evolution of the particle number and mass composition distributions in a single exhaust plume along one dimension in space by treating the transformation of emitted gases, condensing vapours, and particles concurrently with the dilution with background air during the spread of the plume volume. A special case is the simulation of dilution and ageing in a laboratory system for diesel exhaust using a simple parameterization for the dilution and cooling processes, as described in Pirjola et al. (2015).
In the following sections, a detailed description of the physical and chemical processes and their numerical solution will be given. The focus is on presenting the new features that have been implemented after version 1.0. We begin with a review of the currently available aerosol process models in Sect. 2.1. Section 2.2 gives an overview of the structure and workflow of the MAFOR model. Section 2.3 describes the multiphase chemistry processes and each of the individual aerosol transformation processes in the model. Section 2.4 explains the dynamic treatment of semi-volatile inorganic gases in more detail. Section 2.5 presents SOA formation by absorptive partitioning of organic vapours according to the 2-D VBS. The numerical solution of the aerosol dynamics in the model is given in Sect. 2.6. A brief overview of previous applications of the model in plume dispersion scenarios is given in Sect. 2.7.
Throughout the paper, index q (q = 1, . . . , N C ) is used to denote chemical constituents, with N C being the number of constituents in the aerosol. Index i (i = 1, . . . , N B ) is used to denote the size section of the particle distribution, and N B is the number of size sections (bins). A list of acronyms and mathematical symbols is given in Appendix A.

Review of current aerosol process models

Table 1 provides a comparison of selected aerosol dynamics models that are currently used in studies of atmospheric aerosols. According to their representation of the particle size distribution, aerosol dynamics models can be divided into sectional, modal, monodisperse, and moment models (refer to Whitby and McMurry, 1997, for a detailed review).
Sectional models (Gelbard and Seinfeld, 1990; Warren and Seinfeld, 1985; Jacobson and Turco, 1995; Pirjola and Kulmala, 2001; Korhonen et al., 2004) place a grid on the independent variable space (e.g. particle diameter or volume). The aerosol size distribution is approximated by a finite number of size sections (bins) whose locations on the grid can either vary with time or be fixed. The first attempts to solve the stochastic collection equation for a droplet size distribution used a single-moment sectional approach, which tracks either particle number or particle mass. Later, two-moment sectional models were developed, which explicitly track both the particle number (i.e. zeroth moment) and the mass concentration of aerosol components (i.e. first moment) in each size bin to predict the particle number and mass size distributions (Tzivion et al., 1987). The two-moment sectional approach can conserve both number and mass very accurately (Adams and Seinfeld, 2002). Two-moment sectional models have been implemented in global aerosol microphysics models for improving the understanding of the processes that control concentrations of cloud condensation nuclei (CCN), for example the climate model GISS-TOMAS (Lee and Adams, 2010) and the global offline CTM GLOMAP (Spracklen et al., 2005).
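To make the two-moment bookkeeping concrete, the sketch below illustrates how number (zeroth moment) and component mass (first moment) can be tracked per size bin, with a condensation step that adds mass without changing number. This is a schematic Python illustration with hypothetical array names, not code from any of the cited models (MAFOR itself is written in Fortran).

```python
import numpy as np

# Hypothetical two-moment sectional state: NB size bins, NC components
NB, NC = 4, 2
N = np.array([1e9, 5e8, 1e8, 1e7])   # number concentration per bin (m^-3)
m = np.full((NC, NB), 1e-12)         # component mass concentration (kg m^-3)

def mean_particle_mass(m, N):
    """First moment divided by zeroth moment: average particle mass per bin."""
    return m.sum(axis=0) / N

# Condensation adds mass to a bin without changing its number concentration,
# so the average particle mass (and hence size and density) evolves freely.
m[0, 1] += 2e-13                     # illustrative mass gain of component 0 in bin 1
print(mean_particle_mass(m, N))
```

Because both moments are carried explicitly, the average particle density per bin can be diagnosed at any time from the composition, which is the property highlighted for MAFOR above.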
Modal models (Wright et al., 2001; Vignati et al., 2004) represent the particle distribution as a sum of modes, each having a lognormal or similar size distribution, typically described by mass, number, and width. Modal size distributions can be solved very efficiently, which makes them favourable candidates for global 3-D CTMs. However, the accuracy of the modal method is lower compared to the sectional method, especially if the standard deviation (width) of the modes is treated as constant (Zhang et al., 2002). In monodisperse models, all particles in each mode have the same size but can have different composition.
Moment models (McGraw, 1997) track a few low-order moments of the particle population but do not explicitly resolve the size distribution. Anand and Mayya (2009) have developed a formalism based on an analytical solution of the coagulation-diffusion equation for estimating the survival fraction of aerosols in dispersing puffs and plumes under the assumption of an initially Gaussian-distributed particle number concentration and spatially separable size spectra. The parameterization scheme has been further developed and is termed the Log Normal Method Of Moments -Diffusion Coagulation (LNMOM-DC) model, enabling the simultaneous treatment of aerosol coagulation and dispersion in an expanding exhaust plume.
The sectional aerosol dynamics model MAFOR allows for multicomponent condensation of vapours (sulfuric acid - H 2 SO 4 , methane sulfonic acid - MSA, ammonia - NH 3 , amines, nitric acid - HNO 3 , hydrochloric acid - HCl, water - H 2 O, and nine different organic compounds) to an internally mixed aerosol that includes all atmospherically relevant aerosol constituents, i.e. sulfate, ammonium, nitrate, methane sulfonate (MSA p ), sea salt, soot, primary biological material, and mineral dust. The assumption of internally mixed particles, i.e. that all particles in the same size bin have the same chemical composition, lowers the accuracy in cases of high humidity in air because the ability to take up water can vary considerably for particles of the same size that have different composition (Korhonen et al., 2004). However, handling multivariate distributions that allow for same-sized particles with different hygroscopic properties involves large storage and computation requirements. The particle-resolved model PartMC-MOSAIC (Riemer et al., 2009; Tian et al., 2014) stores the composition of many individual aerosol particles (typically about 10 5 ) within a well-mixed computational volume. The computational burden is reduced by simulating the coagulation stochastically, assuming coagulation events are Poisson-distributed with a Brownian kernel.
The size-segregated aerosol model UHMA (Korhonen et al., 2004), another sectional aerosol dynamics model, has demonstrated good performance in reproducing new particle formation and solves the evolution of the particle number and surface size distribution together with the composition distribution. In UHMA, the discretization of particle sizes is based on the volume of the particle core. A shortcoming of UHMA is that it does not explicitly solve the mass concentration change in individual aerosol components with time, whereas MAFOR takes into account the fact that the condensation or evaporation of an individual component results in the growth or shrinkage of the (total) mass concentration size distribution, affects the total aerosol mass, and moves the component's mass concentration distribution on the diameter coordinate.
The aerosol process models M7 (Vignati et al., 2004) and SALSA (Kokkola et al., 2008), partly owing to their computational efficiency, have been implemented into the 3-D aerosol-climate model ECHAM5 (Bergman et al., 2012). SALSA is a sectional aerosol module developed with the specific purpose of implementation in large-scale models. It is part of the Hamburg Aerosol Model (HAM) (Stier et al., 2005), which handles the emissions, removal, and microphysics of aerosol particles and the gas-phase chemistry of dimethyl sulfide (DMS) within ECHAM5. Other implementation examples for SALSA in 3-D models are UCLALES-SALSA (Tonttila et al., 2017), PALM (Kurppa et al., 2019), and ECHAM-HAMMOZ. The focus of the implementation of SALSA is the description of the aerosol processes with sufficient accuracy, which is important for understanding aerosol-cloud interactions and their impacts on global climate. SALSA includes the aerosol microphysical processes nucleation, condensation, hydration, coagulation, cloud droplet activation, and oxidation of sulfur dioxide (SO 2 ) in cloud droplets. The main advantage of SALSA is that the particle size bin width does not have to be fixed, and a lower size resolution can be used in the particle size range less affected by microphysical processes.
Model structure
Figure 1 illustrates the model structure of MAFOR v2.0. The model consists of three basic modules: (1) a chemistry module, (2) an aerosol dynamics module, and (3) a plume dispersion module. MAFOR is coupled with the chemistry sub-model MECCA v4.0, which allows the dynamic generation of new chemistry solver code and photolysis routines after adding new species and/or reactions to the chemistry mechanism. The newly generated code is packaged into a Fortran library that is included during the compilation of MAFOR, avoiding the need to build the MECCA interface each time changes are made in the model code.
The chemistry module of MAFOR calculates time-varying gas-phase concentrations and aqueous-phase concentrations (in the droplet mode) by solving the non-linear system of stiff chemical ordinary differential equations (ODEs). The photolysis module JVAL (Sander et al., 2014) is used to calculate photolysis rate coefficients for photo-dissociation reactions. JVAL includes the JVPP (JVal PreProcessor), which pre-calculates the parameters required for calculating photolysis rate coefficients based on absorption cross sections and quantum yields of the atmospheric molecules. The Kinetic PreProcessor (KPP v2.2.3) (Sandu and Sander, 2006) is used to transform the chemical equations into programme code for the chemistry solver. The numerical integration of the ODE system of gas-phase and aqueous-phase reactions is done with Rosenbrock 3 using automatic time step control. The chemistry module also includes the emission and dry deposition of gases.
The aerosol dynamics module includes homogeneous nucleation of new particles according to various parameterizations, Brownian coagulation, condensation and evaporation, dry deposition, wet scavenging, and primary emission of particles. The composition of particles in any size bin can change with time due to multicomponent condensation and/or due to coagulation of particles. The aerosol dynamic solver updates number and component mass concentrations in the following order: (1) condensation and evaporation, (2) coagulation, (3) nucleation, (4) dry and wet deposition, and (5) emission. It returns an updated number concentration, updated component mass concentration per size bin, and updated gas-phase concentration of condensable and nucleating vapours.
The plume dispersion module calculates the vertical dispersion of a Gaussian plume as a function of x (the downwind distance from the point of emission) and the dilution rate for the particle and gas concentrations in the plume. The temperature in the plume and the plume height vary with time according to prescribed dispersion parameters. If the MAFOR model were included in a dispersion or climate modelling system, the plume dispersion module in Fig. 1 would be replaced by the advection-diffusion modules of that system.
The model starts with the initialization of the particle number and mass composition distributions as well as gas-phase concentrations. In the plume simulation, the aerosol distribution and gas-phase concentrations of the background air and dispersion parameters are initialized based on the user input. Meteorological conditions are updated on an hourly basis. It is possible to tailor the properties of the (lumped) organic compounds for the simulation to best represent the conditions in a chamber experiment or specific atmospheric region. As the model begins the integration over time, each process is solved using operator splitting in the following order: plume dispersion, chemical reactions, and aerosol dynamics. The changed gas-phase concentrations from the chemistry module are used in the aerosol dynamic module in the condensation and evaporation as well as nucleation processes. Pre-existing mass and number are input in the calculation of aerosol dynamic processes. The module first calculates the mass concentration of liquid water in each size section and consequently the wet diameter of particles, which is used for the calculation of aerosol dynamic processes. The dilution of particles is calculated after the number and mass concentrations of the current time step have been updated.
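The operator-splitting sequence described above can be summarized in a short sketch. The three step functions below are placeholders for the actual modules, and the dilution term is written as a generic first-order relaxation towards background concentrations; this form is an assumption for illustration and not MAFOR's exact dispersion formulation.

```python
import numpy as np

def dilution_step(c, c_bg, dil_rate, dt):
    # Generic first-order dilution towards background: dc/dt = -dil_rate * (c - c_bg)
    return c_bg + (c - c_bg) * np.exp(-dil_rate * dt)

def chemistry_step(c_gas, dt):
    return c_gas  # placeholder for the stiff ODE solver (Rosenbrock 3 in MAFOR)

def aerosol_step(N, m, c_gas, dt):
    return N, m, c_gas  # placeholder: condensation, coagulation, nucleation, deposition, emission

dt, dil_rate = 1.0, 0.05                           # s, s^-1 (illustrative values)
c_gas, c_bg = np.array([1e8]), np.array([1e6])     # molecules cm^-3
N, m = np.full(4, 1e10), np.full((2, 4), 1e-12)
for _ in range(60):                                # one minute of plume travel
    c_gas = dilution_step(c_gas, c_bg, dil_rate, dt)
    c_gas = chemistry_step(c_gas, dt)
    N, m, c_gas = aerosol_step(N, m, c_gas, dt)
```

The key design point carried over from the text is the fixed ordering of the split operators within each time step, with the chemistry output feeding the condensation and nucleation calculations of the aerosol module.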
MAFOR has an interface to the MOSAIC model (Zaveri et al., 2008) for the treatment of condensation and evaporation of semi-volatile inorganic gases. This interface encapsulates a reduced version of the MOSAIC solver code in an external Fortran library. The thermodynamic module of MOSAIC is the Multicomponent Equilibrium Solver for Aerosols (MESA) model (Zaveri et al., 2005b). MESA is used here to calculate aerosol phase state, the activity coefficients of electrolytes in the aqueous solution, the equilibrium concentration of ammonium (NH + 4 ) in all size bins, and the parameters for dynamic growth by dissolution. An operator-split aerosol equilibrium calculation in MESA is performed to recalculate electrolyte composition and activity coefficients in each size bin. Finally, the MOSAIC interface provides the parameters required to determine the solubility terms in the PNG scheme (Jacobson, 2005b). In the PNG scheme, condensation (dissolution) and evaporation of HNO 3 , HCl, and H 2 SO 4 are solved first. Following the growth calculation for all acid gases, NH 3 is equilibrated with all size bins, conserving charge among all ions. In this method, ammonia growth is effectively a time-dependent process because the equilibration of NH 3 is calculated after the diffusion-limited growth of all acids. The PNG scheme allows operator split to be done at a long time step (e.g. 150-300 s) between the growth calculation and the equilibrium calculation without causing oscillatory solutions when solving the condensation and evaporation of acid and base as separate processes (Jacobson, 2005b).
Two aspects in the implementation of the dynamic partitioning of inorganic and organic aerosol components in MAFOR v2.0 advance beyond the original concepts:

1. The condensation and dissolution of HNO 3 and HCl were modified compared to the original PNG scheme. Condensation of the two gases to a particle size bin is applied when a solid is present in the bin, using the minimum saturation vapour concentration. This leads to more nitrate mass being transferred to the aerosol phase compared to the original PNG scheme, which considers only solubility.
2. The mass-based formulation of the 2-D VBS framework (Donahue et al., 2011) for organic aerosol phase partitioning, considering non-ideal solution behaviour, was coupled with the dynamics of organic condensation and evaporation according to a so-called hybrid approach, addressing the critical role of condensable organics in the growth of freshly nucleated particles.
Multiphase chemistry
The gas-phase and aqueous-phase chemistry mechanism is based on the MECCA chemistry sub-model of CAABA/MECCA v4.0 (Sander et al., 2019). In addition to the basic tropospheric chemistry it contains the Mainz Organic Mechanism (MOM) as an oxidation scheme for volatile organic compounds (VOCs), including alkanes, alkenes (up to four carbon atoms), ethyne (acetylene), isoprene, several aromatics, and five monoterpenes. Most of the VOC species of MOM are available for initialization in simulations with MAFOR. Diurnal variations of photolysis rates are based on Landgraf and Crutzen (1998) with the updates included in the JVAL photolysis module (Sander et al., 2014), such as updated UV-Vis cross sections as recommended by the Jet Propulsion Laboratory (JPL), evaluation no. 17 (Sander et al., 2011). The chemistry mechanism of MECCA was extended by a comprehensive reaction scheme for DMS adopted from Karl et al. (2007) and oxidation schemes of several amines: methylamine, dimethylamine, trimethylamine (Nielsen et al., 2011), 2-aminoethanol (Karl et al., 2012b), amino methyl propanol, diethanolamine, and triethanolamine. In total, the current chemistry mechanism of MAFOR v2.0 contains 781 species and 2220 reactions in the gas phase, as well as 152 species and 465 reactions in the aqueous phase. Initial concentrations of relevant gas-phase species, their dry deposition rate, and their emission rate can be provided by the model user.
The aqueous-phase chemistry is currently restricted to the liquid phase of coarse-mode aerosol (short: droplet mode). The composition of the liquid phase may be initialized with concentrations of the most relevant cations and anions. Transfer of molecules between the gas phase and the aqueous phase of coarse-mode aerosol and vice versa is treated by the resistance model of Schwartz (1986), which considers gas-phase diffusion, mass accommodation, and the Henry's law constants. The mass transfer coefficient k m,q , a first-order loss rate constant, describes the mass transport of compound q from the gas phase to the aqueous phase:

$$ k_{m,q} = \left( \frac{r_d^2}{3 D_q} + \frac{4 r_d}{3 c_{m,q} \alpha_{l,q}} \right)^{-1}, \quad (1) $$

where D q is the molecular diffusion coefficient in the gas phase, c m,q is the molecular speed, α l,q is the mass accommodation coefficient (adsorption of the gas to the droplet surface), and r d is the droplet radius (mean radius of the monodisperse droplet mode). The first term represents the resistance caused by gas-phase diffusion, while the second term represents the interfacial mass transport. It is assumed that the liquid aerosol (cloud-fog droplet) behaves as an ideal solution and that no formation of solids occurs in the solution. The change in gas-phase and aqueous-phase concentrations, C g,q and C aq,q , of a (soluble) compound with time due to chemical reactions in a system with equilibrium partitioning is then described by

$$ \frac{dC_{g,q}}{dt} = Q_{g,q} - k_{m,q}\,\mathrm{LWC}\,C_{g,q} + k_{m,q}\,\mathrm{LWC}\,\frac{C_{aq,q}}{H_{A,q}} \quad (2a) $$

and

$$ \frac{dC_{aq,q}}{dt} = Q_{aq,q} + k_{m,q} C_{g,q} - k_{m,q}\frac{C_{aq,q}}{H_{A,q}}, \quad (2b) $$

where Q g,q and Q aq,q are the gas-phase and aqueous-phase net production terms in chemical reactions, respectively, and LWC is the liquid water content. The dimensionless Henry's law coefficient, H A,q , for the equilibrium partitioning is independent of the liquid water content. Aqueous-phase partitioning parameters and aqueous-phase reactions are adopted from the MECCA chemistry sub-model (Sander et al., 2019).
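As a worked illustration of Eqs. (1)-(2), the sketch below evaluates the two-resistance mass transfer coefficient and the resulting exchange tendencies for a soluble gas; all parameter values are illustrative assumptions, not MAFOR defaults.

```python
import numpy as np

def k_mass_transfer(D_q, c_mq, alpha_lq, r_d):
    """Two-resistance mass transfer coefficient, Eq. (1):
    gas-phase diffusion resistance plus interfacial resistance."""
    return 1.0 / (r_d**2 / (3.0 * D_q) + 4.0 * r_d / (3.0 * c_mq * alpha_lq))

# Illustrative values for a soluble gas and a 5 um droplet
D_q, c_mq, alpha_lq, r_d = 1.0e-5, 300.0, 0.05, 5.0e-6  # m^2 s^-1, m s^-1, -, m
k_m = k_mass_transfer(D_q, c_mq, alpha_lq, r_d)

LWC, H_A = 1.0e-7, 1.0e4   # liquid water volume fraction, dimensionless Henry coefficient
C_g, C_aq = 1.0e-9, 0.0    # gas-phase (per air volume) and aqueous (per water volume)
dCg_dt = -k_m * LWC * C_g + k_m * LWC * C_aq / H_A   # cf. Eq. (2a), without chemistry
dCaq_dt = k_m * C_g - k_m * C_aq / H_A               # cf. Eq. (2b), without chemistry
```

At equilibrium the two exchange terms cancel, recovering C aq = H A,q C g independently of the liquid water content, as stated above.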
Condensation and evaporation
The growth of particles through multicomponent condensation is implemented in MAFOR according to the continuum-transition regime theory corrected by a transitional correction factor (Fuchs and Sutugin, 1970). The scheme used for condensation and evaporation is the Analytical Predictor of Condensation (APC; Jacobson, 2005b) for the dynamic transfer of gas-phase molecules to the particles over a discrete time step.
The difference between the partial pressure of a condensable compound in air and the vapour pressure on the particle surface is the driving force for condensation and evaporation in the model. Condensation and evaporation are solved by first calculating the single-particle molar condensation growth rate I q,i (m 3 s −1 ) for each compound q in each size bin i, given by

$$ I_{q,i} = \frac{d\upsilon_i}{dt} = 4\pi\,r_i\,D_q\,\beta_{q,i}\,\upsilon_{g,q}\,\frac{N_A}{10^6\,\mathrm{MW}_q}\left(C_{g,q} - S_{q,i}\,C_{eq,q}\right), \quad (3) $$

where υ i is the particle volume, υ g,q is the molecular volume of the condensing vapour, and C eq,q (in µg m −3 ) is the saturation vapour concentration over a flat solution of the same composition as the particles. The factor N A /10 6 MW q is for conversion from mass-based to molecular units, where N A is the Avogadro constant (N A = 6.022 × 10 23 mol −1 ) and MW q is the molecular weight of the condensing vapour (g mol −1 ). The diffusion coefficient D q is estimated using an empirical correlation by Reid et al. (1987). The equilibrium saturation ratio of the condensing vapour, S q,i , is determined by the Kelvin effect and Raoult's law, S q,i = γ q,i Ke, with the molar fraction in the particle phase, γ q,i , and the Kelvin term Ke. The transitional correction factor β q,i is (Fuchs and Sutugin, 1970)

$$ \beta_{q,i} = \frac{1 + \mathrm{Kn}}{1 + \left(\frac{4}{3\alpha_q} + 0.377\right)\mathrm{Kn} + \frac{4}{3\alpha_q}\,\mathrm{Kn}^2}, \quad (4) $$

where α q is the mass accommodation (or sticking) coefficient of compound q. The default values for the accommodation coefficient are 0.5 for H 2 SO 4 and 0.13 for MSA. The model user can replace these values by unity. The accommodation coefficient of organic vapours and all other inorganic vapours is assumed to be equal to unity. The Knudsen number is Kn = λ v /r i , where λ v is the mean free path of vapour molecules and r i is the particle radius. The Kelvin effect due to the curvature of particles is considered for the condensation and evaporation of all vapours. Inclusion of the Kelvin term reduces the condensation flux of vapours to particles smaller than 10 nm in diameter. The Kelvin term Ke is expressed as

$$ \mathrm{Ke} = \exp\left(\frac{2\,\sigma_q\,\mathrm{MW}_q}{R\,T\,\rho_{L,q}\,r_i}\right), \quad (5) $$

where R is the universal gas constant (R = 8.3144 kg m 2 s −2 K −1 mol −1 ), T is the air temperature (K), σ q is the surface tension (kg s −2 ), ρ L,q is the density of the pure liquid (kg m −3 ), r i is the particle radius in size bin i (m), and MW q is here expressed in kg mol −1 . Surface tension and density of the pure liquid for the condensing vapours are given in Table 2. The vapour pressure of the lumped organic compounds is modified by their molar fraction in the particle phase (according to Raoult's law) and by their molar volume and surface tension according to the Kelvin effect. The condensation flux of H 2 SO 4 and MSA is corrected by the effect of hydrate formation following Karl et al. (2007). For organic vapours, the revised flux formulation by Lehtinen and Kulmala (2003) is used, which accounts for the molecule-like properties of the small particles, by modification of the transitional correction factor, Knudsen number, and mean free path. The condensation of NH 3 is coupled to the concentration of acid gases (H 2 SO 4 , HNO 3 , and HCl). If the NH 3 concentration is at least 2-fold compared to the H 2 SO 4 concentration, then two NH 3 molecules are removed from the gas phase, assuming the formation of ammonium sulfate [(NH 4 ) 2 SO 4 ]. If there is excess NH 3 available for reaction with HNO 3 to produce ammonium nitrate (NH 4 NO 3 ), then each HNO 3 molecule removes one NH 3 molecule from the gas phase. NH 3 can also react with HCl to produce ammonium chloride (NH 4 Cl). The formation of NH 4 NO 3 and/or NH 4 Cl then determines the saturation vapour pressures of NH 3 , HNO 3 , and HCl.
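The following sketch evaluates the transitional correction factor and Kelvin term of Eqs. (4)-(5) for a small sulfuric acid containing particle; the mean free path and the H 2 SO 4 property values are round illustrative numbers rather than values taken from Table 2.

```python
import numpy as np

def fuchs_sutugin(Kn, alpha):
    """Transitional correction factor, Eq. (4)."""
    return (1.0 + Kn) / (1.0 + (4.0 / (3.0 * alpha) + 0.377) * Kn
                         + (4.0 / (3.0 * alpha)) * Kn**2)

def kelvin_term(sigma, M_kg, rho_L, r_i, T, R=8.3144):
    """Kelvin term, Eq. (5); M_kg is the molar mass in kg mol^-1."""
    return np.exp(2.0 * sigma * M_kg / (R * T * rho_L * r_i))

r_i = 5.0e-9                     # particle radius (m)
lam_v = 1.0e-7                   # mean free path of vapour molecules (m), illustrative
beta = fuchs_sutugin(Kn=lam_v / r_i, alpha=0.5)   # default alpha for H2SO4
Ke = kelvin_term(sigma=0.055, M_kg=0.098, rho_L=1830.0, r_i=r_i, T=293.15)
```

For the 10 nm size range quoted above, Ke is markedly above unity, which is what suppresses the condensation flux to the smallest particles.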
At equilibrium, the relation between the saturation concentration and the gas-solid equilibrium coefficients K p,NH 4 NO 3 and K p,NH 4 Cl , together with the mole balance equation, can be used to obtain the analytical solution for the saturation concentration of NH 3 (i.e. C eq,NH 3 ), as follows:

$$ C_{eq,\mathrm{NH_3}}\; C_{eq,\mathrm{HNO_3}} = K_{p,\mathrm{NH_4NO_3}}, \quad (6a) $$

$$ C_{eq,\mathrm{NH_3}}\; C_{eq,\mathrm{HCl}} = K_{p,\mathrm{NH_4Cl}}, \quad (6b) $$

$$ C_{g,\mathrm{NH_3}} - C_{eq,\mathrm{NH_3}} = \left(C_{g,\mathrm{HNO_3}} - C_{eq,\mathrm{HNO_3}}\right) + \left(C_{g,\mathrm{HCl}} - C_{eq,\mathrm{HCl}}\right). \quad (6c) $$

The saturation concentrations of HNO 3 (i.e. C eq,HNO 3 ) and HCl (i.e. C eq,HCl ) are obtained accordingly. The reaction of alkylamines with HNO 3 to alkyl ammonium nitrate is treated in analogy to the ammonia-nitric acid system. Alternatively, the PNG scheme, applicable across the entire relative humidity range, can be used to solve the growth by dissolution of HNO 3 and HCl, as well as the equilibration of NH 3 , as will be described in Sect. 2.4.
Saturation vapour pressures of the organic compounds are based on the C 0 values (pure-compound saturation mass concentration) provided by the model user. Typical C 0 values are shown in Table 2. Alternatively, the absorptive partitioning of organics is considered using the 2-D VBS method, as will be described in Sect. 2.5.
The gas-phase concentration of a condensing vapour with respect to condensation and evaporation as well as gas-phase chemistry is predicted according to

$$ \frac{dC_{g,q}}{dt} = Q_{g,q} - \sum_{i=1}^{N_B} k_{T,q,i}\left(C_{g,q} - S_{q,i}\,C_{eq,q}\right), \quad (7) $$

where N i is the number concentration of particles (m −3 ). The second term on the right-hand side (RHS) of this equation represents the condensation-evaporation flux to a particle population, as defined in Eq. (3). The change in the particle-phase mass concentration, m q,i , of the compound in each size bin with time due to condensation and evaporation is described by

$$ \frac{dm_{q,i}}{dt} = k_{T,q,i}\left(C_{g,q} - S_{q,i}\,C_{eq,q}\right), \quad \mathrm{with} \quad k_{T,q,i} = 4\pi\,r_i\,N_i\,D_q\,\beta_{q,i}, \quad (8) $$

where k T,q,i is the mass transfer rate (s −1 ) of gas to the particles of a size bin. A non-iterative solution for the gas-phase and particle-phase concentration in each bin due to condensation over time is obtained by making use of the mass balance equation of the final aerosol- and gas-phase concentrations (Jacobson, 2005b). Details of the APC solver are given in Appendix B.
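A minimal sketch of the non-iterative idea behind the APC solver: the gas concentration at the new time level follows semi-implicitly from the mass balance summed over all bins, after which the bin masses are updated and clipped at full evaporation. This is a simplified single-vapour illustration of the approach of Jacobson (2005b), not the complete algorithm of Appendix B.

```python
import numpy as np

def apc_step(C_g, m, kT, S_Ceq, dt):
    """Semi-implicit condensation/evaporation step for one vapour.
    C_g: gas concentration; m: particle-phase mass per bin; kT: mass
    transfer rates per bin (Eq. 8); S_Ceq: S_{q,i} * C_eq,q per bin."""
    # Semi-implicit gas update from the mass balance summed over bins
    C_g_new = (C_g + dt * np.sum(kT * S_Ceq)) / (1.0 + dt * np.sum(kT))
    # Explicit bin update with the new gas concentration; no negative mass
    m_new = np.maximum(m + dt * kT * (C_g_new - S_Ceq), 0.0)
    # Enforce exact conservation of total (gas + particle) mass
    C_g_new = max(C_g + m.sum() - m_new.sum(), 0.0)
    return C_g_new, m_new
```

The final conservation step is what makes the scheme unconditionally mass-conserving regardless of the time step length.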
The condensation of H 2 O is accounted for by assuming the particles to be in equilibrium with the ambient water vapour. The uptake of water is calculated based on equilibrium thermodynamics (Binkowski and Shankar, 1995) using empirical polynomials (Tang and Munkelwitz, 1994) for the mass fraction of solute as a function of water activity. Polynomials for ammonium nitrate and ammonium sulfate are adopted from Chan et al. (1992). The water uptake of (soluble) semi-volatile organics is treated as for sodium succinate, with polynomials adopted from Peng and Chan (2001), and the water uptake of sea salt particles is treated as for sodium chloride (NaCl), according to Tang et al. (1997).
Nucleation
New particles are introduced into the atmosphere either by direct emission or by in situ nucleation of semi-volatile or low-volatility vapours. Nucleated particles (critical clusters) have initial sizes of the order of a few nanometres or less, which is much smaller than typical primary emission particle size ranges. Competition between growth by condensation and loss by coagulation determines the survival probability of a nucleated particle through a certain size range, usually up to 100 nm. Since freshly nucleated particles are small, they are highly diffusive and have a high propensity to collide with pre-existing particles. Nucleation in the atmosphere is a dynamic process that involves interactions of precursor vapour molecules, small clusters, and pre-existing particles (Zhang et al., 2012). However, the atmospheric nucleation mechanism remains subject to considerable uncertainty. Several options of parameterized nucleation mechanisms can be chosen in the model; Table 3 provides a list of the available mechanisms.
Sulfuric acid is a highly probable candidate for atmospheric nucleation. Sihto et al. (2006) reported that nucleation-mode particle concentrations observed in a boreal forest (Hyytiälä, southern Finland) typically depend on the H 2 SO 4 concentration via a power-law relation with an exponent of 1 or 2. The proposed theory of atmospheric nucleation by cluster activation (option 5) or kinetic nucleation (option 1) could be used to explain the observed behaviour. Charged clusters formed on ions are more stable and can grow faster than neutral clusters. Ion-mediated nucleation (IMN) considers the role of ubiquitous ions in enhancing the stability of pre-nucleation clusters (Yu and Turco, 2001). The ionization rate of air is about 2 ion pairs cm −3 s −1 at ground level and increases up to 20-30 ion pairs cm −3 s −1 in the upper troposphere. A constant ionization rate of 2 ion pairs cm −3 s −1 is used in all nucleation parameterizations that consider charged clusters in MAFOR. The combined nucleation scheme (option 7) is a combination of IMN and cluster activation (hereafter K2011), providing an upper estimate for the nucleation rate at low H 2 SO 4 concentrations under tropospheric conditions. Binary homogeneous nucleation (BHN) of H 2 SO 4 -H 2 O may be the prevailing mechanism in the upper troposphere, and in some cases, classical BHN theory has successfully explained the observed formation rates of new particles (Weber et al., 1999). BHN is implemented in MAFOR based on the parameterization of Vehkamäki et al. (2002; hereafter V2002), which takes into account the effect of hydrate formation (Jaecker-Voirol et al., 1987; Noppel et al., 2002), extended to temperatures above 305 K (Vehkamäki et al., 2003), which makes it suitable for predicting the particle formation rate at high temperatures in exhaust conditions (option 2).
Määttänen et al. (2018a; hereafter M2018) presented new parameterizations of neutral and ion-induced H 2 SO 4 -H 2 O particle formation (option 11) valid for large ranges of environmental conditions, which have been validated against a particle formation rate dataset generated in Cosmics Leaving OUtdoor Droplets (CLOUD) experiments. The implementation of the M2018 parameterization in MAFOR v2.0 has been tested in an urban background scenario (Case 1, T = 288 K and RH = 90 %), giving a maximum particle formation rate of 0.95 cm −3 s −1 when the H 2 SO 4 concentration peaked at 5 × 10 7 cm −3 (Sect. S2). Only the ion-induced nucleation was active under these conditions.
Participation of a third compound in the nucleation process might explain discrepancies between H 2 SO 4 -water nucleation theories and laboratory measurements as well as field studies. Ternary homogeneous nucleation (THN) involving NH 3 is a strong option due to the abundance of NH 3 in the atmosphere and its ability to lower the partial pressure of H 2 SO 4 above the solution surface. Merikanto et al. (2007) revised the classical theory of THN by including the effect of stable ammonium bisulfate formation (option 3), resulting in predicted nucleation rates that are several orders of magnitude lower compared to the original ternary nucleation model by Napari et al. (2002). More recently, the particle formation rates for THN have been updated based on simulations with the Atmospheric Cluster Dynamics Code (ACDC; Olenius et al., 2013) using quantum chemical input data (option 13). ACDC simulates the dynamics of a population of molecular clusters by numerically solving the cluster birth-death equations. Details of the ACDC simulations of the ternary H 2 SO 4 -NH 3 -H 2 O system can be found in Henschel et al. (2016; hereafter H2016). The ACDC/THN lookup table published by Baranizadeh et al. (2016) was implemented in MAFOR v2.0, allowing for the interpolation of particle formation rates under various conditions. MAFOR v2.0 also includes an implementation of the lookup table parameterization of ternary nucleation (TIMN, option 4) by Yu et al. (2020; hereafter Y2020). TIMN includes both ion-mediated and homogeneous ternary nucleation of H 2 SO 4 -NH 3 -H 2 O. At very low NH 3 concentrations ([NH 3 ] ≤ 10 5 cm −3 ), TIMN predicts nucleation rates according to BHN. Hence, the TIMN scheme offers the clear advantage that it can be directly applied to calculate nucleation rates in the whole troposphere in 3-D models. Figure 2 compares the most relevant parameterizations for particle formation from sulfuric acid nucleation under conditions relevant for the Urban Case scenario (T = 262 K and RH = 80 %) as a function of the H 2 SO 4 concentration. The H 2 SO 4 concentration for which the particle formation rate reaches J nuc = 1 cm −3 s −1 is 3.2 × 10 6 , 4.6 × 10 6 , 1.8 × 10 7 , 7.4 × 10 7 , and 6.0 × 10 7 cm −3 for K2011, M2018, Y2020 (at [NH 3 ] = 10 5 cm −3 ), H2016 (at [NH 3 ] = 2 × 10 6 cm −3 ), and V2002, respectively. K2011 gives the highest nucleation rates at low H 2 SO 4 concentrations and shows an almost linear dependence on [H 2 SO 4 ] because this parameterization does not consider kinetic limitation. The M2018 curve shows two turning points: the first at [H 2 SO 4 ] ∼ 1 × 10 6 cm −3 , when ion-induced nucleation reaches the kinetic limit, and the second at [H 2 SO 4 ] ∼ 3 × 10 7 cm −3 , when neutral BHN starts to dominate the total particle formation rate. The Y2020 parameterization is very sensitive to [H 2 SO 4 ] at low H 2 SO 4 concentrations but becomes insensitive to [H 2 SO 4 ] at high concentrations due to the limitation of nucleation by the ionization rate. Particle formation rates from M2018 at high [H 2 SO 4 ] are an order of magnitude higher than those predicted from the earlier V2002 parameterization.
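For orientation, the simplest of the listed mechanisms can be written down directly: cluster activation scales linearly with the H 2 SO 4 concentration and kinetic nucleation quadratically (Sihto et al., 2006). The prefactors below are order-of-magnitude values chosen for illustration and are not MAFOR defaults.

```python
import numpy as np

h2so4 = np.logspace(5, 8, 50)   # H2SO4 gas-phase concentration (cm^-3)
A = 1.0e-6                      # activation coefficient (s^-1), illustrative
K = 1.0e-13                     # kinetic coefficient (cm^3 s^-1), illustrative

J_act = A * h2so4               # cluster activation: J proportional to [H2SO4]
J_kin = K * h2so4**2            # kinetic nucleation: J proportional to [H2SO4]^2

# H2SO4 concentration at which each mechanism reaches J = 1 cm^-3 s^-1
print(1.0 / A, np.sqrt(1.0 / K))
```

The crossover behaviour of these two power laws is the reason why exponents of 1 or 2 in the observed J versus [H 2 SO 4 ] relation are used to discriminate between the activation and kinetic mechanisms.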
Furthermore, observations have indicated enhanced particle formation when the concentration of organics was increased. Paasonen et al. (2010) proposed different empirical parameterizations for the nucleation of organics-H 2 SO 4 clusters, analogous to the kinetic and cluster activation mechanisms for H 2 SO 4 clusters. From their proposed organics-H 2 SO 4 nucleation mechanisms, three are included in MAFOR: (1) activation of non-identified clusters by both H 2 SO 4 and organics (OS1, option 8), (2) homogeneous heteromolecular nucleation between H 2 SO 4 and organic molecules combined with homogeneous homomolecular nucleation of H 2 SO 4 according to kinetic nucleation theory (OS2, option 9), and (3) homogeneous nucleation of the organics in combination with the nucleation routes of OS2 according to kinetic nucleation theory (OS3, option 10). The same low-volatility organic vapour (SOA precursor BLOV) is used in all three parameterizations; it may also be involved in particle growth by condensation. Further nucleation options are organics-H 2 SO 4 nucleation in diesel exhaust (HET, option 12), as suggested in Pirjola et al. (2015), and kinetic nucleation of amine-HNO 3 (option 6) proposed by Karl et al. (2012b) for amine photo-oxidation experiments.
Coagulation
Coagulation of particles leads to a reduction in the total number of particles, changes the particle number size distribution and the chemical composition distribution, and leaves the total particle mass concentration unchanged. Coagulation is more efficient between particles of different sizes (intermodal coagulation) than between same-sized particles (self-coagulation). The rate of coagulation is a product of collision cross section and mobility: large particles provide a large collision surface, and smaller particles have high mobility (Brownian motion). For instance, a particle of 10 nm diameter coagulates about 170 times faster with a 1 µm particle than with another 10 nm particle (Ketzel and Berkowicz, 2004). Thermal coagulation of particles caused by the Brownian motion of the particles is treated accurately in MAFOR: a semi-implicit solution is applied to coagulation (Jacobson, 2005b). The (non-iterative) semi-implicit solution yields an immediate volume-conserving solution for coagulation with any time step. Brownian coagulation coefficients between particles in size bins i and j are calculated according to Fuchs (1964). For particles in the transition regime, the Brownian coagulation coefficient can be calculated with the interpolation formula of Fuchs (1964):

$$ K^{B}_{i,j} = 4\pi\left(r_i + r_j\right)\left(D_{m,i} + D_{m,j}\right)\left[\frac{r_i + r_j}{r_i + r_j + \sqrt{\delta_{m,i}^2 + \delta_{m,j}^2}} + \frac{4\left(D_{m,i} + D_{m,j}\right)}{\sqrt{\nu_{p,i}^2 + \nu_{p,j}^2}\left(r_i + r_j\right)}\right]^{-1}, \quad (9) $$

where δ m is the mean distance from the centre of a sphere reached by particles leaving the sphere's surface and travelling a distance of the particle mean free path. Further, r is the particle radius, D m is the particle diffusion coefficient, and ν p is the mean thermal speed of a particle, with indices i and j for the respective size bins. Details on the Brownian coagulation algorithm are given in Appendix C. Brownian coagulation is well understood for coalescing particles of spherical shape. Soot particles in diesel exhaust, however, are fractal-like agglomerates that consist of nano-sized primary spherules. In the direct exhaust plume, the fractal shape of freshly emitted soot particles larger than 50 nm might increase their effective surface area, which acts as a coagulation sink for the smaller particles (Ketzel and Berkowicz, 2004). The coagulation rate for agglomerate particles depends on particle mobility and the effective collision diameter; it is usually assumed that the collision diameter is equal to either the mobility diameter or the outer diameter (Rogak and Flagan, 1992).
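The sketch below implements the Fuchs interpolation kernel of Eq. (9) and illustrates the unconditional stability of a semi-implicit number update, using the simplest case of self-coagulation within one bin; the full volume-conserving multi-bin scheme of Jacobson (2005b) additionally redistributes the coagulated volume over the size grid.

```python
import numpy as np

def fuchs_kernel(r_i, r_j, D_i, D_j, v_i, v_j, d_i, d_j):
    """Brownian coagulation coefficient in the transition regime,
    Fuchs (1964) interpolation, Eq. (9). d_i, d_j are the mean
    distances delta_m and v_i, v_j the mean thermal speeds."""
    rsum, Dsum = r_i + r_j, D_i + D_j
    delta, vbar = np.hypot(d_i, d_j), np.hypot(v_i, v_j)
    return 4.0 * np.pi * rsum * Dsum / (rsum / (rsum + delta)
                                        + 4.0 * Dsum / (vbar * rsum))

def semi_implicit_selfcoag(N, beta, dt):
    """Semi-implicit number update for self-coagulation,
    dN/dt = -0.5 * beta * N^2; stable and positive for any time step."""
    return N / (1.0 + 0.5 * beta * N * dt)
```

Because the loss term is evaluated at the new time level, the updated number concentration can never become negative, which is the practical benefit of the semi-implicit formulation noted above.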
The effect of fractal geometry on coagulation is treated in the model by considering the effect of shape on the radius, diffusion coefficient, and Knudsen number in the Brownian coagulation kernel. It is assumed that the collision radius, r c , is equal to the outer radius, r f , of the agglomerate, defined as

$$ r_f = r_s\, n_s^{1/D_f}, \quad (10) $$

where n s is the number of primary spherules in the aggregate, r s is the radius of the spherules, and D f is the fractal dimension. The model user is asked to provide values for r s and D f for the fractal (soot) particles. In accordance with Lemmetty et al. (2008), the effective density of fractal (soot) particles larger than the primary spherules is expressed as

$$ \rho_{\mathrm{eff},i} = \rho_s \left(\frac{D_{p,i}}{d_s}\right)^{D_f - 3}, \quad (11) $$

where D p,i is the particle diameter of size bin i, while d s and ρ s are the diameter and density of the primary spherules (for soot: 1200 kg m −3 ), respectively. The Brownian coagulation kernel is modified for fractal geometry (Jacobson and Seinfeld, 2004) by replacing the particle radii in Eq. (9) with the collision radii,

$$ K^{B}_{i,j} = 4\pi\left(r_{c,i} + r_{c,j}\right)\left(D_{m,i} + D_{m,j}\right)\left[\frac{r_{c,i} + r_{c,j}}{r_{c,i} + r_{c,j} + \sqrt{\delta_{m,i}^2 + \delta_{m,j}^2}} + \frac{4\left(D_{m,i} + D_{m,j}\right)}{\sqrt{\nu_{p,i}^2 + \nu_{p,j}^2}\left(r_{c,i} + r_{c,j}\right)}\right]^{-1}, \quad (12) $$

with the mean distance, δ m , from the particle's centre and the Knudsen number for air evaluated at the mobility radius. Here, the particle diffusion coefficient is also evaluated at the mobility radius. For D f = 3 (spherical shape), the fractal radius, mobility radius, area-equivalent radius, and collision radius are identical and equal to the volume-equivalent radius; hence, Eq. (12) simplifies to the Brownian kernel for spheres. Two forces that increase or decrease the rate of aerosol coagulation are van der Waals forces, which result from the interaction of fluctuating dipoles, and viscous forces, which arise from the fact that velocity gradients induced by a particle approaching another particle in a viscous medium affect the motion of the other particle. It has been shown that van der Waals forces can enhance the coagulation rate of particles with diameters < 50 nm by up to a factor of 5 (Jacobson and Seinfeld, 2004). Viscous forces retard the rate of van der Waals force enhancement in the transition and continuum regimes (Schmitt-Ott and Burtscher, 1982).
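Equations (10) and (11) are straightforward to evaluate; the sketch below uses the primary spherule size quoted for Fig. 3 (r s = 13.5 nm, D f = 1.7) and the soot spherule density given in the text, with an illustrative spherule count.

```python
def fractal_outer_radius(n_s, r_s, D_f):
    """Outer (collision) radius of an agglomerate, Eq. (10)."""
    return r_s * n_s ** (1.0 / D_f)

def effective_density(D_p, d_s, rho_s, D_f):
    """Effective density of a fractal particle, Eq. (11)."""
    return rho_s * (D_p / d_s) ** (D_f - 3.0)

# Illustrative fresh diesel soot agglomerate
r_c = fractal_outer_radius(n_s=50, r_s=13.5e-9, D_f=1.7)
rho_eff = effective_density(D_p=100e-9, d_s=27e-9, rho_s=1200.0, D_f=1.7)
```

Since D f < 3, the effective density decreases with increasing agglomerate size, while the collision radius grows faster than the volume-equivalent radius, both of which enhance the coagulation sink for small particles.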
In MAFOR, the correction of the Brownian kernel for van der Waals and viscous forces is done as in Jacobson and Seinfeld (2004). An interpolation formula for the van der Waals-viscous collision kernel K V i,j between the free-molecular and continuum regimes is applied (Alam, 1987; Jacobson and Seinfeld, 2004):

$$ K^{V}_{i,j} = K^{B}_{i,j}\left\{\frac{W_{c,i,j}\left(1 + x_{i,j}\right)}{1 + \left(W_{c,i,j}/W_{k,i,j}\right)x_{i,j}}\right\}, \quad x_{i,j} = \frac{4\left(D_{m,i} + D_{m,j}\right)}{\sqrt{\nu_{p,i}^2 + \nu_{p,j}^2}\left(r_i + r_j\right)}. \quad (13) $$

The quotient inside the curly brackets is the enhancement factor due to van der Waals-viscous forces. The correction factors W k for the free-molecular regime and W c for the continuum regime are given in Appendix D. Figure 3 shows the predicted effect of van der Waals forces and viscous forces on Brownian coagulation for spherical as well as for fractal particles (r s = 13.5 nm and D f = 1.7) when the volume-equivalent diameter of the first particle is 10 nm.
Brownian motion by far dominates the collisions of sub-micrometre particles in the atmosphere. The coagulation of particles in turbulent flow is affected by two mechanisms: spatial fluctuations of the turbulent flow and particle inertia, which causes the larger particles not to follow the flow. Since turbulent shear coagulation is only important for particles larger than several micrometres in diameter under conditions characterized by intense turbulence (Pnueli et al., 1991), its treatment is not considered in the model.

Figure 3. Modelled effect of fractal geometry and van der Waals-viscous forces when the volume-equivalent diameter of the first particle is 10 nm and the volume-equivalent diameter of the second particle varies from 5 to 1000 nm.
Dry deposition and wet scavenging of particles
Different mechanical processes contribute to the deposition of particles, mainly Brownian diffusion, interception, inertial impaction, and sedimentation. The effectiveness of the deposition process is usually described with the dry deposition velocity, V d , which depends on the properties of the deposited aerosol particle, the characteristics of the airflow in the atmospheric surface layer and inside the thin layer of stagnant air adjacent to the surface (the so-called quasi-laminar sub-layer), and the properties of the surface. Four alternative dry deposition schemes are available in the model, referred to as SPF1985, KS2012, HS2012, and ZH2001. The SPF1985 scheme considers dry deposition of particles by Brownian diffusion, interception, and gravitational settling. This parameterization is derived for deposition to completely rough surfaces based on the analysis of several field studies.
The KS2012 scheme can consider the deposition to a vegetation canopy and can be used for smooth and rough surfaces. In the KS2012 scheme, the deposition pathway is split into the aerodynamic layer between heights z 1 and z 0 and the in-canopy layer. Within the aerodynamic layer the Monin-Obukhov profiles of turbulence are assumed. The in-canopy layer is assumed to be well mixed and to have a regular wind speed U top (U top is the wind speed at top of the canopy, i.e. at height z C ). The deposition in the in-canopy layer is treated as a filtration process. KS2012 defines a collection length scale to characterize the properties of rough surfaces. This collection length depends on the ratio U top /u * and the effective collector size, d col , of the canopy.
The HS2012 scheme is based on a three-layer deposition model formulation with Brownian and turbulent diffusion, turbophoresis, and gravitational settling as the main particle transport mechanisms to rough surfaces. An effective surface roughness length F + is used to relate the roughness height to the peak-to-peak distance between the roughness elements of the surface.
The ZH2001 scheme calculates dry deposition velocities as a function of particle size and density as well as relevant meteorological variables. The parameterization is widely used in atmospheric large-scale models because it provides empirical parameters for dry deposition over different land use types.
The model user defines the roughness length, friction velocity near the surface, and other parameters specific to the dry deposition schemes in an input file. Figure 4 shows a numerical comparison of the deposition schemes for a typical rough urban surface, representative of a street canyon using friction velocity u * = 1.33 m s −1 , roughness length z 0 = 0.4 m, and an average particle density of 1400 kg m −3 . This example is chosen to illustrate the differences in the size dependence of the dry deposition velocity when all parameterizations are used with identical meteorological parameters and particle density. Effects of buildings on deposition are not considered.
Size-dependent deposition velocities calculated with the SPF1985 and KS2012 schemes agree within a factor of 2, except for large particles. Both curves have a minimum in the diameter size range 0.2-0.5 µm, while the curve from the ZH2001 scheme has a minimum at ∼ 2 µm. For the HS2012 scheme, an upper-limit value of the effective surface roughness length (F + = 2.75) was chosen, which is adequate for dry deposition to rough environmental surfaces and results in higher deposition velocities for particles above 0.1 µm in diameter compared to the other schemes. For particles in the size range between 0.01 and 0.5 µm, the calculated deposition velocities with HS2012 are nearly independent of particle size.
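As an illustration of the resistance-type formulation shared by such schemes, the sketch below computes a dry deposition velocity as V d = V g + 1/(r a + r s ) with Brownian diffusion and impaction collection efficiencies in the spirit of the ZH2001 scheme; the aerodynamic resistance, the empirical constants, and the omission of interception and rebound are simplifying assumptions, not the published scheme.

```python
import numpy as np

g, kB = 9.81, 1.380649e-23
mu, rho_air, T = 1.8e-5, 1.2, 288.15   # air viscosity (Pa s), density, temperature
lam = 6.5e-8                           # mean free path of air molecules (m)

def cunningham(d_p):
    return 1.0 + 2.0 * lam / d_p * (1.257 + 0.4 * np.exp(-0.55 * d_p / lam))

def v_settling(d_p, rho_p):
    """Gravitational settling velocity with slip correction."""
    return rho_p * d_p**2 * g * cunningham(d_p) / (18.0 * mu)

def v_dep(d_p, rho_p, u_star, r_a=50.0, eps0=3.0):
    """Simplified resistance-model deposition velocity,
    V_d = V_g + 1/(r_a + r_s); r_a and eps0 are illustrative placeholders."""
    vg = v_settling(d_p, rho_p)
    D = kB * T * cunningham(d_p) / (3.0 * np.pi * mu * d_p)  # Brownian diffusivity
    Sc = mu / (rho_air * D)                    # Schmidt number
    St = vg * u_star**2 * rho_air / (g * mu)   # Stokes number (smooth-flow form)
    E_b, E_im = Sc ** (-2.0 / 3.0), (St / (0.8 + St)) ** 2
    r_s = 1.0 / (eps0 * u_star * (E_b + E_im))
    return vg + 1.0 / (r_a + r_s)

vd = v_dep(d_p=1.0e-7, rho_p=1400.0, u_star=1.33)  # values from the example above
```

The competing size dependences of the Brownian (decreasing with size) and impaction/settling (increasing with size) terms are what produce the deposition velocity minimum in the accumulation-mode range seen in Fig. 4.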
Wet scavenging of particles is described with a simple parameterization of the scavenging rate for in-cloud removal of particles by accretion, based on Pruppacher and Klett (1997). Nucleation-mode particles are not scavenged. The wet scavenging rate of particles, λ wet (s −1 ), is parameterized as a function of the precipitation rate and the volume fraction occupied by clouds, f c , which is assumed to be 0.1, a value typical for the marine boundary layer. The precipitation rate P (mm h −1 ) can be provided in the input by the model user and may vary with time.

Figure 5. Sea salt particle source function (size-dependent number flux, F ) at different wind speeds, sea surface temperatures, and salinities with the parameterization by Spada et al. (2013). The effect of wind speed is shown with the green, violet, blue, and red solid lines (at SST 283 K and salinity of 35 g kg −1 ). The effect of SST is shown with the solid and dashed violet lines (at 9 m s −1 and salinity of 35 g kg −1 ). The effect of salinity is shown with the solid and dashed blue lines (at 8 m s −1 and SST 283 K).
Emission of particles
Emissions of primary particles are controlled by an input file. The prescribed particle emissions can either occur at a constant rate during the entire simulation period or be time-varying as in the simulation of the Urban Case. The emitted size spectrum of particles and their chemical composition are defined by the model user.
Emissions of marine sea salt particles are calculated online using the emission parameterization from Spada et al. (2013), which combines the number flux parameterizations of Mårtensson et al. (2003), Monahan et al. (1986), and Smith et al. (1993). Sea salt particles are assumed to be composed of NaCl. A treatment of primary organic aerosol (POA) particle emissions from the ocean surface will be developed in the future. The parameterization of Spada et al. (2013) describes the size distribution of sea salt particle emissions in terms of number for the diameter size range 0.2-10.0 µm. Sea salt particle emissions in the model depend on wind speed (provided in the meteorological input), sea surface temperature (SST; user-provided value), and salinity (user-provided value). The wind speed dependence is described by the whitecap coverage, relating the 10 m wind speed to the fraction of the sea surface covered by whitecaps. Figure 5 shows the size-dependent sea salt particle flux as a function of particle size for different conditions.
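Of the combined source functions, the whitecap-based formulation of Monahan et al. (1986) can be stated compactly; the sketch below implements that published formula for the number flux per radius increment (radius at 80 % relative humidity), without the Mårtensson et al. (2003) and Smith et al. (1993) branches or the SST and salinity dependencies of the combined Spada et al. (2013) scheme.

```python
import numpy as np

def monahan_flux(r80, u10):
    """Monahan et al. (1986) sea salt source function: number flux per
    unit sea surface area and radius increment, dF/dr80 (m^-2 s^-1 um^-1),
    with r80 in micrometres and the 10 m wind speed u10 in m s^-1."""
    B = (0.380 - np.log10(r80)) / 0.650
    return (1.373 * u10**3.41 * r80**-3.0
            * (1.0 + 0.057 * r80**1.05) * 10.0 ** (1.19 * np.exp(-B**2)))

r80 = np.logspace(-0.5, 1.0, 30)   # 0.3-10 um
flux = monahan_flux(r80, u10=9.0)  # wind speed as in Fig. 5
```

The strong u10**3.41 dependence through the whitecap coverage is what produces the wide spread between the wind speed curves in Fig. 5.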
Dynamic partitioning of semi-volatile inorganic gases
Several aerosol models rely on thermodynamic equilibrium principles to predict the composition and physical state of inorganic atmospheric aerosols. Examples of thermodynamic equilibrium aerosol models commonly applied in 3-D CTMs include EQUISOLV II (Jacobson, 1999), MARS (Binkowski and Shankar, 1995), ISORROPIA (Nenes et al., 1999), and AIM (Wexler and Clegg, 2002). However, in cases in which the equilibrium timescale is long compared to the residence time of particles in a given environment, the thermodynamic equilibrium is not a good approximation (Meng and Seinfeld, 1996). A dynamic partitioning approach for the formation of secondary inorganic aerosol (SIA) is therefore preferable and is expected to give results that are more realistic.
To enable dynamic partitioning of semi-volatile inorganics in the model, the APC scheme for condensation and evaporation (Sect. 2.3.2) was extended with the PNG scheme (Jacobson, 2005a). The PNG scheme involves four steps: (1) calculation of the growth of semi-volatile acidic gases by dissolution at moderate and high aerosol LWC (determined as total liquid water over all sizes), (2) calculation of the growth of semi-volatile acidic gases by condensation at low LWC, (3) calculation of the growth of non-volatile gases (such as H 2 SO 4 when forming ammonium sulfate) at all LWC, and (4) equilibration of NH 3 / NH + 4 and pH between the gas phase and all particle size bins while conserving charge and moles.
In this implementation, the PNG scheme is coupled with the iterative equilibrium code MESA (Zaveri et al., 2005b) that calculates internal aerosol composition and the size-dependent solubility terms. Figure 6 illustrates the workflow for the coupling between the PNG scheme and the thermodynamic equilibrium module of the MOSAIC model. MESA computes aerosol phase state, temperature-dependent equilibrium coefficients, activity coefficients of electrolytes (solutes), and the water activity coefficient in all size sections for solid, liquid, and mixed-phase aerosols. MESA solves the solid-liquid equilibrium by applying a pseudo-transient continuation technique to the set of ODEs describing the precipitation reactions and dissolution reactions for each salt until the system satisfies the equilibrium or mass convergence criteria. The internal aerosol composition in MESA includes sodium (Na + ), chloride (Cl − ), potassium (K + ), calcium (Ca 2+ ), magnesium (Mg 2+ ), sulfate (SO 2− 4 ), NH 3 / ammonium (NH + 4 ), and HNO 3 / nitrate (NO − 3 ) in the ionic, liquid, and/or solid phases. MESA employs the multicomponent Taylor expansion method (MTEM; Zaveri et al., 2005a) for estimating activity coefficients of electrolytes. MTEM calculates the mean activity coefficient of the electrolyte in a multicomponent solution on the basis of its values in binary solutions of all the electrolytes present in the mixture.
The PNG scheme solves the growth of particles by dissolution of semi-volatile compounds (here HNO 3 and HCl) when the LWC is moderate or high (here: > 0.01 µg m −3 ); i.e. a liquid solution pre-exists on the particle surface. The concentration change in particle compound q (here either the dissolved, undissociated nitric acid plus the nitrate ion or the undissociated hydrochloric acid plus the chloride ion) due to dissolution in one size bin is

dC_q,i / dt = k_q,i ( C_g,q − S_q,i C_q,i / H_q,i ),   (15a)

where k_q,i is the mass transfer coefficient to size bin i, S_q,i accounts for the Kelvin effect, and H_q,i is the dimensionless effective Henry's law coefficient for the respective size bin. However, if a solid pre-exists in a particle size bin, condensation occurs and

dC_q,i / dt = k_q,i ( C_g,q − S_q,i C_eq,q,i ).   (15b)

The saturation vapour concentration C_eq,q,i (short: SVC) varies continuously over the aerosol size distribution as a function of particle composition. The size-dependent SVC and the effective Henry's law coefficient are calculated in the MOSAIC solver at the beginning of the time step. The size-dependent SVC of HNO 3 and of HCl is determined by several processes (gas-ion reaction, solid-gas equilibrium, and solid-ion reactions). The minimum SVC arising in any of the processes is chosen for the calculation of the condensation term when a solid is present in a particle size bin. The gas concentration C_g,q and the total dissolved concentration are unknowns in Eq. (15).
Integration of Eq. (15a) for one size bin over a time step Δt gives the updated dissolved concentration (Jacobson, 2005a). The final gas concentration of the semi-volatile acid and the final particle concentration in each bin are obtained analogously to the APC scheme, with the solution described in Appendix B. The solution is unconditionally stable and mole-conserving.
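A single-bin illustration of such a stable, mole-conserving update of Eq. (15a) is sketched below; the coupled multi-bin solution actually used in the model is the one in Appendix B, and the initial values here are invented.

```python
import numpy as np

# Single-bin analytic integration of dc/dt = k*(Cg - S*c/H) with the total
# (gas + dissolved) concentration conserved; illustration only.
def dissolve_one_bin(c0, cg0, k, S, H, dt):
    """c0: initial dissolved conc. in the bin (ug m-3); cg0: initial gas-phase
    conc. (ug m-3); k: mass-transfer coefficient (s-1); S: Kelvin factor;
    H: dimensionless effective Henry's law coefficient; dt: time step (s)."""
    ctot = c0 + cg0                      # moles conserved between gas and bin
    ceq = ctot / (1.0 + S / H)           # equilibrium dissolved concentration
    c = ceq + (c0 - ceq) * np.exp(-k * (1.0 + S / H) * dt)
    return c, ctot - c                   # new particle and gas concentrations

c, cg = dissolve_one_bin(c0=0.5, cg0=2.0, k=1e-3, S=1.01, H=1e4, dt=120.0)
```

Because the update is an exact solution of the linearized single-bin problem, it remains stable and mole-conserving for any time step, mirroring the property stated for the full scheme.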
When the LWC is below 0.01 µg m −3 , the growth of nitric acid is treated as a condensation process rather than a dissolution process. The saturation vapour concentrations of HNO 3 and HCl are calculated considering the gas-solid equilibrium of ammonium nitrate and the gas-solid equilibrium of ammonium chloride as described in Jacobson (2005b). The solution for the coupled ammonia-nitric acid-hydrochloric acid system is then obtained from Eq. (6) and the growth by condensation is treated in the APC solver (Sect. 2.3.2). The condensation and evaporation of low-volatility or non-volatile gases, such as H 2 SO 4 and high-molecular-weight organics, are solved as a condensation process among all size bins independent of the aerosol LWC.
Following the growth calculation for the acidic gases, NH 3 is equilibrated simultaneously with all ions and solids in all size bins of the aerosol phase. This results in an exact charge balance among all ions in solution, including those that enter the liquid solution during the dissolution and condensation step, and conserves the mass of NH 3 between the gas phase and all particle size bins.
Following the ammonia calculation, an operator-split internal aerosol equilibrium calculation in the MESA solver is performed to recalculate aerosol ion, liquid, and solid composition, activity coefficients, and Henry's law coefficients, accounting for all species in solution in each size bin. In order to reduce the computational time, the liquid solution terms and composition are updated at longer time intervals than the aerosol dynamic solver time step (Δt_aero). The operator-split time interval between growth and equilibrium is 115 s in the current implementation. An advantage of the PNG scheme is that it can be applied at a long time interval (several minutes) without causing oscillatory behaviour in the numerical solution (Jacobson, 2005a). Such oscillatory behaviour at a long time step was observed in an earlier dissolution solver (Jacobson, 1997b) that did not treat the condensation (dissolution) of acid and base separately.

Figure 6. Workflow of the dynamic partitioning of semi-volatile inorganic gases. The MOSAIC interface is called every Δt_eq = 120 s, while the PNG solver is called every time step of the aerosol dynamic solver (Δt_aero). The MOSAIC interface outputs the gas-solid equilibrium coefficient for ammonium nitrate, the minimum saturation vapour concentration (SVC min ), the effective Henry's law coefficient, the ion concentrations, and a dissolution flag (indicating if a solid is present in a size bin or not) for each size bin of the particle population.
Absorptive partitioning of organic vapours
The new concept for SOA formation in MAFOR v2.0 relies on the 2-D VBS framework introduced by Neil Donahue and co-workers (Donahue et al., 2011). This classification uses the carbon oxidation state and the saturation concentration of the pure compound to define the organic aerosol composition in a two-dimensional space. The 2-D VBS is able to represent the variety of organic aerosol components in the atmosphere and their conversion due to ageing chemistry.
A hybrid approach of condensation-evaporation (Sect. 2.3.2) and the absorptive partitioning into an organic liquid is used to treat condensation to an organic mixture considering non-ideal solution behaviour. For absorptive partitioning, the equilibrium gas-phase concentration (or saturation concentration) of the condensing organic vapour can be obtained from the following relation (Bowman et al., 1997):

C_eq,q = m_tot,q / ( K_om,q f_om m_tot,p ),   (17)

where m_tot,p is the total particle mass concentration, m_tot,q is the total mass concentration of compound q in the particle, f_om is the fraction of absorbing organic material in the aerosol, and K_om,q (m 3 µg −1 ) is the absorption partitioning coefficient of the compound. Using the relation for the mass-based absorption partitioning, C*_q = 1/K_om,q (Donahue et al., 2006), Eq. (17) can be rewritten as

C_eq,q = C*_q m_tot,q / ( f_om m_tot,p ),   (18)

with the effective saturation mass concentration C*_q (in µg m −3 ) of compound q:

C*_q = γ_om,q C_0,q,   (19)

where γ_om,q is the activity coefficient of the individual compound (solute) in the organic mixture (solvent) and C_0,q is the saturation concentration of the pure compound. A simplifying assumption of the 2-D VBS framework is that the activity coefficient is a function of the average carbon fraction (O : C) of the organic aerosol as well as the properties of the individual organic solute. Donahue et al. (2011) give an empirical relation to estimate the activity coefficient γ_om,q for organic mixtures (at 300 K), in which b_CO is an empirical constant for the carbon-oxygen non-ideality (b_CO = −0.3), n_M is the size of the solute calculated as the sum of carbon and oxygen atoms, f_C^q is the carbon fraction of the individual solute, and f_C^s is the carbon fraction of the solvent. The activity coefficient for compound q depends exponentially on the size of the solute, while the non-ideality is driven by the differences between the carbon fraction in the solvent and the solute. The formulation of the activity coefficient neglects the role of water or other inorganics in the absorbing material. The effect of these constituents may be treatable within the 2-D VBS framework in the future.
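To make the partitioning step concrete, the sketch below iterates the standard VBS equilibrium split between gas and particle phase. The three-bin input values are invented, and activity coefficients are implicitly set to unity (i.e. C* = C0) for simplicity, so this is a minimal sketch rather than the model's hybrid condensation-partitioning treatment.

```python
# Equilibrium absorptive partitioning in a volatility basis set: the
# particle-phase fraction of compound q is xi_q = 1/(1 + C*_q/C_OA),
# where C_OA is found here by fixed-point iteration.
def partition(c_tot, c_star, c_oa0=1.0, n_iter=50):
    """c_tot, c_star: total and effective saturation concentrations (ug m-3)
    per volatility bin; returns (C_OA, particle-phase fractions)."""
    c_oa = c_oa0
    for _ in range(n_iter):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        c_oa = sum(ct * x for ct, x in zip(c_tot, xi))
    return c_oa, xi

# Three-bin example: semi-volatile, low-volatility, extremely low volatility
c_oa, xi = partition(c_tot=[2.0, 1.0, 0.2], c_star=[10.0, 0.1, 1e-4])
```

Compounds with C* well below C_OA reside almost entirely in the particle phase, while semi-volatile bins split between the phases, which is the behaviour Eqs. (17)-(19) encode.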
Three classes of organic compounds are represented in the model: oxidized secondary biogenic organics, oxidized secondary aromatic organics, and primary emitted organics. Each class is divided into three volatility levels, resulting in a total of nine lumped gaseous SOA precursors. Formation of secondary organic compounds is coupled to the gas-phase chemistry of biogenic VOCs (isoprene, monoterpenes) as well as aromatic VOCs (toluene, xylene, trimethylbenzene). The lumped SOA precursors are produced in the gas-phase oxidation reactions via their molar stoichiometric yields. They can undergo oxidative ageing and/or oligomerization. Primary emitted organics can either undergo oxidative ageing or fragmentation. Figure 7 presents a scheme of SOA formation reactions in the model.
Extremely low-volatility organic compounds (ELVOCs) may play an important role in new particle formation. Ehn et al. (2014) have demonstrated the significant formation of ELVOCs with a branching ratio of ca. 7 % in the reaction of α-pinene with ozone (O 3 ). The compounds have been identified as highly oxygenated molecules (HOMs). Their formation is induced by one attack of ozone in the initial reaction of the monoterpene, followed by an autoxidation process involving molecular oxygen. In the model, the production of ELVOCs from monoterpenes (represented by BELV) is simplified by assuming direct formation in the reaction of the monoterpene with O 3 . The formation of HOMs in the reaction of aromatics with hydroxyl (OH) radicals occurs either via an autoxidation mechanism or via multi-generation OH oxidation steps. Again, only direct formation of ELVOCs (represented by AELV) in the initial reaction of toluene with OH radicals is implemented here. The model further assumes that BELV and AELV are the products from the oligomerization reaction of more volatile organics. It is possible to implement a more detailed treatment of the autoxidation mechanism in the future.
The implementation of the 2-D VBS framework requires a series of input parameters for each SOA precursor, namely the number of carbon atoms, number of oxygen atoms, saturation concentration C 0 , and enthalpy of vaporization. The user-provided C 0 value (in µg m −3 ) of the lumped organic compound is then used to compute the saturation vapour concentration according to Eqs. (17)-(20).
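Since the enthalpy of vaporization is among the required inputs, the temperature dependence of the saturation concentration can be illustrated with the common Clausius-Clapeyron adjustment below. Whether MAFOR uses exactly this form is not stated here, so the sketch is an assumption for illustration.

```python
import math

# Clausius-Clapeyron adjustment of the user-provided saturation
# concentration C0 (given at reference temperature T0) to ambient T.
R = 8.314  # universal gas constant (J mol-1 K-1)

def c_star_T(c0, dh_vap, T, T0=298.15):
    """c0 in ug m-3 at T0; dh_vap (enthalpy of vaporization) in J mol-1."""
    return c0 * (T0 / T) * math.exp(dh_vap / R * (1.0 / T0 - 1.0 / T))

print(c_star_T(c0=1.0, dh_vap=80e3, T=283.15))  # lower T -> lower C*
```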
Numerical solution of the aerosol dynamics
The model solves the particle number and mass concentration distribution of a multicomponent aerosol using the full stationary (fixed) sectional method. The fixed sectional method (Gelbard and Seinfeld, 1990;Tsang and Rao, 1988) is computationally efficient and advantageous when treating continuous nucleation of new particles, which is relevant for the modelling of new particle formation. The method is also convenient for the combined treatment of nucleation, emission, coagulation, and particle transport because the particle volume in one size section is always constant (Korhonen et al., 2004). This is achieved by a splitting procedure for the particle growth that determines the fraction of particles in one size bin that will grow to the next size bin. However, this splitting procedure is prone to numerical diffusion, causing a wider particle size distribution with lower peak concentrations than the accurate solution. Relevant alternative sectional methods are the full moving structure (Gelbard, 1990), the hybrid structure (Jacobson and Turco, 1995), and the moving centre structure (Jacobson, 1997a), which all eliminate the numerical diffusion arising from the splitting between size sections. The full moving structure allows the particles to grow to their exact size. However, the full moving structure causes problems if new particle formation is considered. The disadvantage of the hybrid structure is that if the particles gain or lose non-volatile material, they must be fitted back to the fixed grid. The moving centre structure allows the particle size to vary in a section within certain boundaries. It causes some numerical diffusion due to averaging of moved particles with pre-existing ones in a section. Korhonen et al. (2004) tested different sectional structures in the simulation of the particle distribution during a new particle formation event and found that the hybrid structure was most vulnerable to numerical diffusion upon particle growth. The moving centre structure permitted fairly realistic treatment of the particle evolution (Korhonen et al., 2004). The ADCHEM model uses the moving centre structure due to its good performance when the size distribution is represented by only a few size sections (see Table 1). In the SALSA model, the moving centre structure is used for particles below 730 nm in diameter, whilst for particles larger than that, fixed size sections are used. In SALSA, the particle size spectrum is divided into three subranges based on the size. This enables variation in including or excluding microphysical aerosol processes and chemical components in simulations in each subrange based on the relevance of the process in the range. For instance, in the lowest subrange cloud processing can be neglected and particles contain only sulfate and organic matter.
Because of the advantages when simulating new particle formation, the fixed structure has been chosen for MAFOR. A fixed sectional grid on the diameter coordinate is used, and the number of size sections can be selected by the model user. By using a high number of size sections, the numerical diffusion can be largely reduced. Karl et al. (2011) showed that in an 80 h simulation of the particle distribution in the Arctic marine boundary layer, the final number distribution for the model using 60 size bins closely agreed with the solution of the model using 120 sections.
To determine the number of size bins that are necessary to accurately represent an urban particle size distribution, numerical calculations using different numbers of size sections were performed (Sect. S2). This test (Case 1) confirmed that the model using 60 bins performs very well in comparison to a sectional representation using 160 bins (the reference in Case 1), although slight spreading of the nucleation mode due to numerical diffusion could be noted. For lower size resolution, the discretization errors were more relevant, leading to a broader nucleation mode with peak diameter at smaller size.

Figure 7. Chemical reactions involved in SOA formation. BRO2 and ARO2 stand for all the peroxy radicals of the respective biogenic or aromatic VOCs. The molar stoichiometric yields α 1 , . . . , α 5 and β 1 , . . . , β 5 represent the formation yields of SOA precursors in the gas-phase reaction of biogenic and aromatic VOCs, respectively. Oligomerization and fragmentation reactions are approximated with first-order rate constants (Tsimpidi et al., 2010; Lambe et al., 2009; Carlton et al., 2010; Lim and Ziemann, 2009). The nine lumped organics are BSOV (biogenic semi-volatile compound), BLOV (biogenic low-volatility compound), BELV (biogenic extremely low-volatility compound), ASOV (aromatic semi-volatile compound), ALOV (aromatic low-volatility compound), AELV (aromatic extremely low-volatility compound), PIOV (primary intermediate-volatility compound), PSOV (primary semi-volatile compound), and PELV (primary extremely low-volatility compound).
In model simulations, size bins are evenly distributed on a logarithmic scale, ranging from the smallest diameter of 1 nm to the largest diameter of 10 µm. It is possible to use a different maximum diameter (in the range 1-10 µm). Typical model applications in plume dispersion simulations use 120 size sections to represent the aerosol size distribution in the size range 0.001-1.0 µm, resolving the nucleation mode at the molecular level. Simulations are initiated with the particulate mass concentrations of the aerosol constituents in four aerosol modes: nucleation mode (Nuc; diameter range 1-25 nm), Aitken mode (Ait; diameter range 25-100 nm), accumulation mode (Acc; diameter range 100-1000 nm), and coarse mode (Coa; diameter range 1-10 µm). The initial mass concentrations of the lognormal modes are distributed over the size bins (Jacobson, 2005b):

M_q,i = [ M_A,q ΔD_p,i / ( √(2π) D_p,i ln σ_A ) ] exp[ − ln²( D_p,i / GMD_m,A ) / ( 2 ln²σ_A ) ],   (21)

where D_p,i is the particle diameter of section i, ΔD_p,i the corresponding width of the section, and M_A,q and σ_A the mass concentration of the constituent q and geometric standard deviation of the lognormal mode A, respectively. The initial number concentration in each mode is then matched by varying the geometric mean mass diameter, GMD_m,A (see the sketch below).

Due to the full stationary structure, collision of particles from section k with particles from section j generates a particle which has a volume between those of two sections i and i + 1 and needs to be partitioned between the two bins, as described in Appendix C. A semi-implicit method is applied to coagulation, which yields an immediate volume-conserving solution with any time step (Jacobson, 2005b). Though particle number is not exactly conserved, the error in number concentration is reduced when the number of bins used to describe the size distribution is increased. Condensation and evaporation of vapours result in the redistribution of particles between adjacent size sections. The number concentration in section i increases when particles from section i − 1 grow by condensation or particles from section i + 1 shrink due to evaporation. It decreases when particles in section i grow or shrink out of the section by condensation or evaporation of vapour.
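The following Python sketch illustrates the logarithmic bin grid and the mode-to-bin initialization of Eq. (21) as reconstructed above; the mode parameters (mass, GMD, geometric standard deviation) are invented for illustration.

```python
import numpy as np

# Distribute the mass of one lognormal mode over a fixed logarithmic grid,
# following the form of Eq. (21).
def lognormal_mass_to_bins(M_A, gmd, sigma_A, dp, ddp):
    """M_A: mode mass (ug m-3); gmd: geometric mean mass diameter (m);
    sigma_A: geometric std dev; dp, ddp: bin centre diameters and widths (m)."""
    lnsig = np.log(sigma_A)
    return (M_A * ddp / (np.sqrt(2.0 * np.pi) * dp * lnsig)
            * np.exp(-np.log(dp / gmd)**2 / (2.0 * lnsig**2)))

# 120 bins, 1 nm to 10 um, evenly spaced on a logarithmic scale
edges = np.logspace(np.log10(1e-9), np.log10(10e-6), 121)
dp = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centres
ddp = np.diff(edges)                     # bin widths

# Accumulation-mode example with illustrative parameters
m_acc = lognormal_mass_to_bins(M_A=5.0, gmd=150e-9, sigma_A=1.6, dp=dp, ddp=ddp)
```

In the model, the geometric mean mass diameter would then be adjusted until the number concentration implied by the binned mass matches the prescribed modal number concentration.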
Considering the presence of a supersaturated vapour (e.g. H 2 SO 4 ), stable clusters containing a certain number of monomers, g*, will form continuously at the rate of neutral or ion-induced nucleation (see Sect. 2.3.3), denoted by J nuc (t). Then coagulation, heterogeneous condensation, and evaporation of vapour on and from particles of size i ≥ g* and nucleation of g*-mers are distinct processes. The time evolution of the particle number concentration (in m −3 ) and mass concentration (in µg m −3 ) of all aerosol constituents in section i (with i = g*, g* + 1, . . . , g* + N B ) can be written as discrete general dynamic equations in Eqs. (22) and (23).
Here, f is the volume fraction of the intermediate volume of the colliding particles, δ is the Kronecker delta function, λ dry (s −1 ) is the dry deposition rate, λ dil (s −1 ) is the dilution rate, N bg,i is the number concentration of background particles in the same size section, m bg,q,i is the mass concentration of background particles of compound q in the same section, Q i m,q (t) is the mass-based emission rate (µg m −2 s −1 ), H mix is the height of the simulation box (m), ρ q is the density of compound q (kg m −3 ), and c v is a conversion factor to convert kilograms to micrograms. In Eq. (23), M k is the total mass of a particle (µg) in section k (i.e. the sum of the masses of its individual components), M j is the mass of a particle in section j, and q nuc indicates that the compound is able to nucleate (e.g. H 2 SO 4 ). The first term on the RHS of Eq. (23) describes the effect of condensation and evaporation of a vapour on the total aerosol mass. The second and third terms on the RHS take into account the fact that the mass of the individual constituent increases or decreases and consequently the mass concentration distribution moves on the diameter coordinate.
The discrete equations describing the change in particle number and mass concentration with time are solved with forward finite differences. In plume dispersion simulations, MAFOR uses a time step of 0.1 s for the integration of chemistry and of the aerosol processes, which is sufficiently small when compared to the typical timescales in the range 0.5-4 s for dilution in exhaust plumes (Ketzel and Berkowicz, 2004). When simulating an air parcel along multiple-day trajectories and for chamber experiments, the time step is 5 s.
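As a minimal illustration of this explicit time integration, the sketch below advances only the dilution and dry deposition terms of the number equation for a single size bin, using the 0.1 s time step mentioned above. The rate values are invented, and coagulation, growth, nucleation, and emission are omitted.

```python
# Explicit Euler step for the dilution and deposition terms of the GDE;
# variable names follow the terms described for Eq. (22).
def step_number(N, N_bg, lam_dry, lam_dil, dt):
    """One forward-difference step for bin number concentration N (m-3)."""
    dNdt = -lam_dry * N - lam_dil * (N - N_bg)
    return N + dt * dNdt

N = 5.0e10                                 # initial concentration (m-3)
for _ in range(100):                       # 10 s of plume time, dt = 0.1 s
    N = step_number(N, N_bg=5.0e9, lam_dry=1e-4, lam_dil=0.05, dt=0.1)
```

With a dilution timescale of seconds, as in exhaust plumes, the 0.1 s step keeps the explicit update well within the stability and accuracy limits.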
Previous applications of MAFOR in plume dispersion studies
In this section, published applications of MAFOR version 1 in plume dispersion studies and the previously developed procedure for treating the dilution term in the model are presented. An evaluation of MAFOR version 2, including the new features, against experimental data and two aerosol dynamics models is presented in Sect. 3. The MAFOR model version 1 has been used in the European TRANSPHORM (Transport related Air Pollution and Health impacts - Integrated Methodologies for Assessing Particulate Matter) project to examine the influence of aerosol transformation processes on PN concentrations in several European cities (Kukkonen et al., 2016). Dry deposition and coagulation were found to be generally relevant on the neighbourhood scale, but less so in efficient dispersion conditions. Sensitivity tests with the model showed that coagulation causes removal of particles with < 25 nm diameter between the roadside and ambient environment. Particle removal was further enhanced when the fractal nature of soot aggregates and the combined effect of van der Waals and viscous interactions were considered.
For the treatment of dilution of vehicular exhaust gases and particles in combination with aerosol transformation processes on the neighbourhood scale, it is practical to divide the exhaust dilution near roadways into two distinct dilution stages: the first stage (tailpipe-to-road) is characterized by traffic-generated turbulence, and in the second stage (road-to-ambient) atmospheric turbulence prevails (Zhang and Wexler, 2004). The dilution ratio in the first stage can reach up to about 1000 : 1 in around 1-3 s, while the dilution ratio in the second stage is commonly of the order of about 10 : 1 on a timescale of about 10 min. A detailed simulation of the first stage would require the use of LES to explicitly describe the plume turbulent dispersion and account for the fluctuations in the wake of the vehicles (e.g. Chan et al., 2008). However, in practical applications, the early plume phase has been mainly treated using analytic equations for the jet-plume development up to a few seconds (e.g. Vignati et al., 1999). Due to the rapid temperature decrease immediately after exhaust release, the formation of a nucleation mode has already occurred within the timescale of the first dilution stage.
In the study of Karl et al. (2016), model simulations with MAFOR for the road-to-ambient particle evolution were initialized with particle size distribution measurements at the roadside and at an urban background station. It was assumed that emission of primary exhaust particles and nucleation processes had already occurred before the exhaust plume reached the air quality (AQ) monitoring site, located a few metres away from the street. The horizontal particle dilution parameterization was defined by a numerical power function of the dilution ratio with distance, D R = a · x^b, where x is the distance from the roadside and U is the horizontal wind speed (m s −1 ) perpendicular to the road (Pohjola et al., 2007). Typical values of the dispersion parameters a and b were chosen to represent different meteorological dispersion regimes. Assuming a circular plume cross section, the particle dilution rate as a function of time is then simply λ dil = b/t, with travel time t = x/U.
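A small sketch of this dilution treatment is given below. The power-law form of the dilution ratio, D_R = a·x^b, is the reconstruction assumed above (it is consistent with the stated rate λ_dil = b/t), and the parameter values are illustrative rather than taken from the cited studies.

```python
import numpy as np

# Road-to-ambient dilution: power-law growth of the dilution ratio with
# distance gives lambda_dil = b / t along the plume centreline.
def dilution_rate(t, b):
    """lambda_dil (s-1) at plume travel time t (s)."""
    return b / t

U = 2.0                          # wind speed perpendicular to road (m s-1)
x = np.linspace(1.0, 100.0, 100) # distance from the roadside (m)
t = x / U                        # travel time of the air parcel (s)
lam = dilution_rate(t, b=0.7)    # dispersion parameter b is illustrative
```

Since d(ln D_R)/dt = b/t for D_R proportional to t^b, the dilution rate decays hyperbolically with travel time, which reproduces the rapid early dilution and the slow far-field mixing described in the text.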
The dispersion parameters can either be derived from dispersion models or from concentration measurements (typically of NO x ) at several distances perpendicular to the road. The applied treatment of particle dilution assumes a wellmixed state within each cross-wind cross section of the plume. The simple dilution model coupled with the aerosol dynamics model has been tested and evaluated in an earlier study (Keuken et al., 2012) simulating the particle evolution downwind of a motorway under free dispersion conditions. The comparison of the modelled total PN and size distributions with measurements at different distances from the motorway gave reasonable agreement.
The model has also been applied to study the formation of particles in the exhaust of a diesel engine equipped with an oxidative after-treatment system (Pirjola et al., 2015) consisting of a dilution unit and an ageing chamber. The rapid dilution and cooling in the dilution unit were described with empirical parameterizations, wherein temperature follows the exponential curve of the Newtonian cooling and dilution is modelled by using an exponential equation for the dilution ratio, as in Lemmetty et al. (2006). These functions have been implemented in MAFOR and in AEROFOR. Modelled particle number size distributions of the two models were in good agreement with each other and with measurements after 2.7 s of exhaust dilution.
In a study of ship exhaust plumes, MAFOR was applied to determine the in-plume number size distribution and chemical composition of ultrafine particles at different distances from passenger ships (Karl et al., 2020). The dilution of aerosol particles in the ship exhaust plume was approximated using dilution parameters provided by the 3-D atmospheric dispersion model EPISODE-CityChem (Karl et al., 2019). The aerosol dynamics model was used to compute the particle number and mass distributions during the second dilution stage as a function of the distance from the ship stack along the centreline of the ship plume. Dilution in the first stage, when rapid cooling and expansion occur, was calculated with the jet plume model of Vignati et al. (1999), assuming a circular cross section of the plume. Neglecting the removal of particles by coagulation during the first-stage dilution was estimated to introduce an error of 10-15 % in the computed PN concentrations. The particle evolution in the ship plume during the second dilution stage was computed with the aerosol dynamics model considering nucleation, condensation and evaporation, coagulation of particles, dry deposition of particles, gas-phase chemistry within the plume, and mixing of the air parcel with gases and particles from the background. Modelled PN concentrations agreed within 50 % with measured PN concentrations when peaks related to the ship passage were detected in the signal.
Recently, the MAFOR model has been utilized to investigate the particle number concentrations induced by aviation emissions in the surrounding communities of Zurich airport (Zhang et al., 2020). The offline coupling between the atmospheric dispersion model and MAFOR was achieved through the plume dilution curve, which was approximated by fitting a power-law function using the dispersion results and then adopted by MAFOR for the aerosol dynamics calculations. The plume dilution curve was analysed based on the centreline concentration of the plume. The particle evolution in the aviation exhaust was calculated with the aerosol dynamics code using the obtained dilution curve in conjunction with meteorological data (humidity, temperature, precipitation, and wind speed) and the background PN concentration. Kinetic nucleation of H 2 SO 4 , condensation and evaporation, coagulation, deposition, and mixing of the air parcel with the background particles were considered in the model simulations. The results suggested that particles between 10 and 30 nm contributed significantly to the particle number concentration. The predicted PN concentrations were within a factor of 2 of the measurements.

For the Urban Case scenario, measurements in the street canyon M2 on Mannerheimintie (MA) in Helsinki (Fig. 8a), during the afternoon traffic rush hour between 17:00 and 18:00 local time, were selected. The length of this street canyon is 230 m. In M2, the buildings downwind of the main street are oriented perpendicular to MA, and the distance between the buildings is ∼ 22 m (Fig. 8b). On the other side of the street, buildings are parallel to MA. The buildings are ∼ 21 m tall and the width of the canyon is 38 m, leading to an aspect ratio of 0.55. Although the aspect ratio is relatively shallow and MA is a half-open environment at the place of measurements, it can be considered a street canyon due to the large traffic intensity (Vardoulakis et al., 2003).
Measurements with Sniffer for dispersion studies in M2 were taken during the driving times on the second lane (outwards from the centre of Helsinki, A), during the standing times (5-10 min) downwind of MA in the space between the buildings (B, C, and D), and during the driving times on the side street (E, towards the city centre) shown in Fig. 8b. Monitoring with Sniffer included measurements of particles (particle number concentration, size distribution, particulate matter PM 2.5 , and BC) as well as of gases (NO, NO 2 , and NO x ); see details of the instruments in Pirjola et al. (2012). A weather station on the roof of the van at a height of 2.9 m above ground level provided measurements of the temperature and relative humidity as well as wind speed and direction. A GPS device saved the van's speed and location. Background concentrations of particles were measured by Sniffer at Lääkärinkatu, 300 m north of M2; additionally, background air concentrations of O 3 , NO, and NO 2 were monitored at the nearby urban background site Kallio-2 (60° 11′ 14.85″ N, 24° 57′ 02.04″ E). Measurements of NO, NO 2 , PM 2.5 , PM 10 , and BC from an air quality monitoring station (AQS) operated by the Helsinki Region Environmental Services Authority (HSY), located on the pavement of M2 (60° 11′ 24.51″ N, 24° 54′ 56.81″ E) (Fig. 8b), were also available.
Hourly meteorological data were estimated in this study using the meteorological preprocessor MPP-FMI (Karppinen et al., 2000). The MPP-FMI results for the selected day are based on meteorological measurements at Helsinki Vantaa airport (60.3267° N, 24.95675° E), a site which has been found to be meteorologically representative for the whole of the Helsinki Metropolitan Area. Data from MPP-FMI include the parameters defining the atmospheric stability, in addition to wind data. However, the meteorological data measured by Sniffer during the standing times in M2 were used whenever possible, as they better represented the local conditions in the street canyon. The dispersion situation for the Urban Case scenario is evaluated at the Sniffer inlet height for particles, i.e. at a height of 2.4 m above the ground level.
Configuration of the simulation
In the Lagrangian air parcel simulation we assume that the initial height of the air parcel volume corresponds to the situation in which vehicular exhaust gases and particles have been diluted at a timescale of less than 0.5 s after release from the tailpipe (Pohjola et al., 2007), and the process of initial nucleation in the exhaust has been finalized. The initial air parcel height was assumed to be 0.80 m (Pohjola et al., 2007). As in previous plume dispersion studies for exhaust dilution near roadways (see Sect. 2.7), a two-stage dilution process was applied for the Urban Case scenario. The initial air parcel (sub-scale box in Fig. 9) is initialized with a concentration of particles and gases in the background air. In the first dilution stage, the dispersion of the plume and the growth of the (diluted) exhaust plume are calculated with the jet plume model of Vignati et al. (1999), which takes into account the turbulence generated by traffic, the atmospheric turbulence, and the entrainment of fresh air due to the jet effect of the exhaust gas. In the second dilution stage, when the air parcel reaches the curbside and is further transported to the ambient environment, atmospheric turbulence dominates the plume dispersion. Growth of the air parcel and dilution parameters are calculated with a line source dispersion model that considers the geometry of the street canyon.
The combination of the dispersion model and the aerosol process models was straightforward: the jet plume model and the street canyon dispersion model provided the required parameters for the dilution function of the Lagrangian air parcel, while the aerosol process models then allowed the analysis of the aerosol transformation within the temporally expanding volume of the plume. Figure 9 illustrates the coupling of the plume dispersion models with the aerosol dynamics models. The dilution of particles in the moving air parcel is divided into two regimes, i.e. the first between the sub-scale box from emission source to curbside and the second between curbside and the ambient environment (street environment box). The change in particle number concentration in a size section due to dilution with background air is expressed by Eq. (24) during the first stage and by Eq. (25) during the second stage. The dilution ratio D R in the vehicle exhaust plumes increases approximately linearly with time during the first seconds of the dilution. Details on the calculation of the plume height as a function of the air parcel transport time and the dilution functions are given in Appendix E. The two dilution functions were implemented in MAFOR and in the other Lagrangian-type aerosol process models that were used in the comparison for the Urban Case scenario.

The dispersion situation in the street canyon was first evaluated using the simplified street canyon model (SSCM), a component of the urban dispersion model EPISODE-CityChem (Karl et al., 2019). This street canyon model follows the Operational Street Pollution Model (OSPM; Berkowicz et al., 1997) in most respects but simplifies the geometry of the street canyon. The dilution parameters for the second stage were then derived from the simulated concentrations obtained from the street canyon model using line source emissions of total PN in both directions of the street.
In the Lagrangian simulation, a continuous flux of vehicular emissions to the moving air parcel occurs during the times when the air parcel is transported over the lanes. The air parcel is released at d = −22.5 m (d is the distance from curbside) and transported over the street (with the street geometry in Fig. 8b). All gaseous and particulate constituents of the air parcel are diluted during the transport, with the rate of dilution changing at curbside (d = 0 m). The air parcel receives emissions while passing over the two lanes in the outwards direction, is only diluted while passing over the tram tracks, and then again receives emissions while passing over the three lanes in the direction of the city. After passing d = 0 m, the air parcel is freely diluted, with no influence from buildings and ground surfaces (smooth terrain assumption).
The composition of the air parcel was initialized with particle size distribution data from Sniffer measurements in the background air, 300 m north of M2 (Fig. 8a). The chemical composition of the initial aerosol was based on the urban background aerosol described in Pohjola et al. (2007; Table 2 therein). Table 4 summarizes the meteorological input and initial conditions for the Urban Case scenario.
Emission factors of gases and particulates for the Urban Case were adopted from Kurppa et al. (2020; Table 3 therein). Kurppa et al. (2020) applied a particle number emission factor of EF PN = 4.22 × 10 15 per kilogram of fuel. Fuel consumption per vehicle (veh) of 9.8 L per 100 km is assumed here for the conversion of emission factors in particle number per kilogram of fuel to units of veh −1 km −1 . From this we obtain a particle emission factor of 4.14 × 10 14 veh −1 km −1 . This emission factor is 34 % lower than the estimate from Gidhagen et al. (2003) of 6.23 × 10 14 veh −1 km −1 , which was used in the model simulations of the LIPIKA campaign (Pohjola et al., 2007). Emissions of total particle numbers were distributed over the particle size spectrum by utilizing the number size distribution measured while Sniffer was driving northwards on Mannerheimintie, so that the modelled size distribution after 5.5 m of distance from the start (in the middle of lane 2; d = −17 m) matched the measured size distribution on lane 2.
Exhaust particles were assumed to be composed of organic carbon (OC) and BC with constant modal OC-to-BC ratios -nucleation mode: 100 : 0, Aitken mode: 80 : 20, accumulation mode 1 (Acc1): 40 : 60, accumulation mode 2 (Acc2): 60 : 40, as in Karl et al. (2016). The emission factors for vehicle exhaust gases EF NO , EF NO 2 , EF H 2 SO 4 , and EF SVOC were 4.94 × 10 −4 , 1.39 × 10 −4 , 1.0 × 10 −7 , and 3.9 × 10 −7 g m −1 veh −1 , respectively (SVOC is the sum of semi-volatile organic vapours), adopted from Kurppa et al. (2020). The emission factors for the two line sources were then weighted by the vehicle count in each direction. Traffic flow was 1462 veh h −1 in the outward direction and 1085 veh h −1 in the city direction (Pirjola et al., 2012). Emissions of particles and gases in the outward direction were shared equally between the two lanes and the emissions toward the city were shared equally between the lanes in this direction. To calculate the particle emission rates (particles cm −3 s −1 ) and gas emission rates (molecules cm −3 s −1 ), the emission factors were divided by the width of the lanes to one direction and by the air parcel box height (plume height), assuming the air in the box is well mixed. The plume height, dilution rate, and emission rate of exhaust particles during the Urban Case simulation are plotted in Fig. E1.
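The conversion from emission factors to volumetric emission rates described above can be illustrated with a short calculation. The emission factor and traffic flow are taken from the text; the lane width is an illustrative placeholder, and the plume height is taken as the initial air parcel height.

```python
# Worked example: EF (per kg fuel) -> EF (veh-1 km-1) -> particles cm-3 s-1.
EF_PN = 4.22e15        # particles per kg of fuel (Kurppa et al., 2020)
FUEL = 9.8 / 100.0     # kg fuel per km (9.8 L per 100 km, ~1 kg per L assumed)
ef_per_veh_km = EF_PN * FUEL                 # -> ~4.14e14 particles veh-1 km-1

flow = 1462.0          # veh h-1 (outward direction)
width = 7.0            # width of the lanes in one direction (m), placeholder
h_mix = 0.8            # air parcel (plume) height (m), initial value from text

line_source = ef_per_veh_km * flow / 1000.0 / 3600.0   # particles m-1 s-1
q_num = line_source / (width * h_mix) * 1e-6            # particles cm-3 s-1
print(f"{ef_per_veh_km:.3g} veh-1 km-1 -> {q_num:.3g} cm-3 s-1")
```

The division by lane width and plume height corresponds to the well-mixed-box assumption stated in the text.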
Comparison with other aerosol models
Results from simulations of the Urban Case scenario with MAFOR were compared to results from two other aerosol dynamics models, AEROFOR and SALSA. Processes included in the simulation of the Urban Case for the respective aerosol process models are summarized in Table 5. MAFOR, AEROFOR, and SALSA consider the condensation of H 2 SO 4 and organic vapours emitted from the vehicles, in addition to Brownian coagulation and dry deposition. The dilution of particles and gases according to Eqs. (24)-(25) was implemented in AEROFOR and SALSA, ensuring that the same dilution schemes were applied in all models. The three sectional aerosol dynamics models used 120 bins for the diameter range between 1 and 1000 nm, a model time step of 0.01 s for the aerosol dynamics, and a time step of 0.5 s for changes in the dilution rate. The model evaluation was done without the inclusion of sulfuric acid-water nucleation. A preliminary run with MAFOR showed that freshly nucleated particles formed by the atmospheric nucleation of H 2 SO 4 emitted from the vehicles, based on nucleation rates using the Määttänen et al. (2018a) parameterization, did not grow beyond a diameter of 2 nm.
Emissions of particles were inserted differently in the models. In AEROFOR and SALSA particle emissions were distributed over the respective size sections, while in MAFOR the emitted particles as a function of size were fitted with a lognormal distribution and attributed to four modes in terms of mass and modal composition (see Eq. 21). SVOC emissions were treated slightly differently in the models: in AEROFOR they were represented by one compound with properties of adipic acid, in SALSA as semi-volatile organic carbon (Kurppa et al., 2019), and in MAFOR they were split with half to PIOV (intermediate volatility; C 0 = 1.0 µg m −3 at 298 K) and half to PSOV (semi-volatile; C 0 = 0.01 µg m −3 at 298 K).
LNMOM-DC treats simultaneous coagulation and dispersion from a continuous emission source (Sarkar et al., 2020). With respect to the coagulation-dispersion system, a parameterization scheme for the near-source region is applied.

The model performance of MAFOR version 2 was evaluated in terms of total particle number, number size distributions, total particulate matter, and composition (only BC) by comparison against experimental data and against results from two other aerosol dynamics models in an urban environment. Model runs for the Urban Case were performed with the three aerosol dynamics models under identical conditions for plume dispersion, using the same configuration in the models to the extent that this was possible (Sect. 3.3). The focus of the model evaluation is on the analysis of aerosol processes that are relevant in urban environments. Experimental data on particle number and mass concentrations from observations within the street canyon M2, obtained with the Sniffer mobile lab, were used for the comparison.

Statistical performance indicators for the model-observation (M-O) comparison were the index of agreement (IOA), the coefficient of efficiency (COE), and the mean absolute error (MAE). The definitions of these indicators are given in Appendix F. In short, IOA is a refined index (Willmott et al., 2012) that spans values between −1 and +1, with values close to 1.0 representing better model performance. A COE value of 1.0 indicates perfect agreement, while negative values of COE indicate that the model predicts the observed variation less effectively than the mean of the observations. The M-O comparison was based on a four-point dataset obtained at the locations A, B, C, and D (see Fig. 8b) where Sniffer was positioned during the measurement campaign. Location E was excluded from the analysis because it appears that the measurements at E were affected by emissions from outside the street canyon. The statistics were prepared for each of the models. Note that model results are instantaneous concentrations, whereas experimental data represent an average over a longer time period (typically 5-10 min). Therefore, it is worth noting that the large variation in the traffic situations, especially while Sniffer was driving on the main street and on the side street, might have affected the experimental results.

First, the predicted total PN concentrations from the three aerosol dynamics models were compared against measurements by SMPS (scanning mobility particle sizer; combined with a nano-SMPS). Figure 10 shows the modelled time series of total PN from the three models and the measured total PN (including 1σ standard deviation) as a function of downwind distance, which is the distance from the edge of the road (d = −22.5 m; Fig. 8), i.e. the starting point of the simulation, in the downwind direction. All models matched the total PN concentration at street level and the reduction of PN concentrations with increasing distance from the street, as the vehicular exhaust plume is diluted in the open space between the buildings. The total PN curve predicted by SALSA deviates from the other models after curbside; at 120 m of downwind distance, total PN remains 52 % higher than in the other models. The statistical evaluation revealed that AEROFOR and MAFOR were in slightly better agreement with the measurement data than SALSA, although the differences in performance are small.
Measured and modelled concentration values at the four measurement points, together with the statistical performance parameters for all models, are displayed in Table 6.
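For reference, the three indicators can be computed as in the following sketch, using the refined index of agreement of Willmott et al. (2012) and the common form of the coefficient of efficiency. The exact definitions applied by the authors are those in Appendix F, and the four concentration values below are invented.

```python
import numpy as np

def mae(m, o):
    """Mean absolute error between model m and observations o."""
    return np.mean(np.abs(m - o))

def coe(m, o):
    """Coefficient of efficiency; 1.0 = perfect, negative = worse than mean."""
    return 1.0 - np.sum(np.abs(m - o)) / np.sum(np.abs(o - np.mean(o)))

def ioa(m, o):
    """Refined index of agreement (Willmott et al., 2012), bounded in [-1, 1]."""
    a = np.sum(np.abs(m - o))
    b = 2.0 * np.sum(np.abs(o - np.mean(o)))
    return 1.0 - a / b if a <= b else b / a - 1.0

obs = np.array([8.1e4, 2.5e4, 1.6e4, 1.1e4])   # hypothetical PN at points A-D
mod = np.array([7.6e4, 2.9e4, 1.4e4, 1.2e4])
print(ioa(mod, obs), coe(mod, obs), mae(mod, obs))
```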
Next, the modelled and measured particle number size distributions were compared at the four point locations A, B, C, and D (Fig. 11). Modelled number size distributions at point A, at street level, to a large extent reflect how the vehicular particle emissions were distributed over the relevant size range. SALSA and AEROFOR, both using a bin-wise distribution of emitted particles, capture the measured size distribution at point A, especially in the size range < 20 nm in diameter, better than MAFOR using a mode-wise distribution. Clearly, the bin-wise distribution allows for a more accurate representation of particle emissions. However, the particle size distribution of SALSA does not match the peak of the measured size distribution at 15-30 nm, in contrast to MAFOR and AEROFOR. At the second location, point B, at 8 m of distance from the street, particle concentrations have been strongly diluted (Fig. 10) and the modelled distributions are now closer to each other and to the measured distribution. At points C and D, both modelled size distributions from AEROFOR and SALSA apparently overestimate number concentrations in the size range 7-20 nm compared to the measurements, indicating that the small particles are not removed efficiently enough. Number concentrations of larger particles (> 100 nm in size) at greater downwind distance (points C and D) show a large variability that was not captured by the models. The possibility that sources of large particles from outside the street canyon contributed to the number size distribution measured at points C and D cannot be excluded.
The measured size distribution from SMPS spans the size range of 3-420 nm in diameter with a size resolution of 138 bins. For the M-O comparison, the modelled size distributions (dN/dlog 10 D p ) were synchronized to the size resolution of the measured size distribution by linear interpolation. The statistical comparison of size distributions was evaluated separately at points A, B, C, and D. Results of the performance evaluation at the four points and the average performance are presented in Table 7. It turns out that MAFOR and AEROFOR performed better in the prediction of the size distribution at street level (point A) compared to SALSA. However, the deviation between modelled size distributions from AEROFOR and the measured ones becomes larger with increasing downwind distance. All models show the weakest predictive capability at point D. Overall, modelled size distributions from MAFOR are in good agreement with the measured distributions (IOA range: 0.71-0.85; mean IOA: 0.78), and the model has the smallest MAE at points B-D. MAFOR best reproduced the development of the number size distribution with increasing distance from the road edge. The weaker performance of SALSA (mean IOA: 0.63) is mainly due to the lower peak diameter of the modelled size distributions compared to the measured size distributions (Fig. 11).

Modelled and measured total particle mass and BC concentrations were also compared. Modelled PM 1 (particles smaller than 1 µm in diameter) from MAFOR and SALSA was compared against measurement data on PM 1 from an ELPI (electrical low-pressure impactor), assuming a particle density of 1000 kg m −3 . MAFOR outputs mass concentrations and mass size distributions, while SALSA outputs volume distributions of total mass and components. From AEROFOR no output of particle mass or volume is available. Comparison of PM 1 from ELPI to PM 2.5 measured with DustTrak at Sniffer indicates that the mass of supermicron particles contributed little to PM 2.5 (Fig. 12a). The DustTrak measurements had large relative uncertainties, which can be attributed to short-term variations caused by passing exhaust plumes at street level, for instance from heavy-duty vehicles, or from other sources outside the street canyon. Measurements of BC with an Aethalometer similarly show high uncertainty at street level and at point E (Fig. 12b).
Modelled PM 1 from SALSA considerably overestimated measured PM 1 . Modelled PM 1 from MAFOR was closer to the measurements, although modelled PM 1 at point A was 45 % higher than measured PM 1 (Fig. 12a). Measurements of black carbon concentrations show a steeper decline between points A and D than the modelled BC concentrations from the two aerosol process models (Fig. 12b). MAFOR overestimated measured BC concentrations between points B and D but captured the decreasing trend.

The comparison of gas-phase concentrations of condensing vapours was of particular interest to analyse discrepancies in the magnitude of condensation and evaporation between the models. In the absence of measurements of these compounds, only the model results were compared with each other. Figure 13 shows the comparison of modelled gas-phase concentrations of sulfuric acid and semi-volatile organics (sum of condensable organic vapours) calculated by the three aerosol dynamics models. While modelled peak concentrations of condensable vapours at street level were very similar among the models, differences can be noted at greater downwind distance. For H 2 SO 4 , the maximum deviation of a single model from the model mean was ±3.0 % at peak concentration but ±96 % at 100 m of distance from the road edge. For SVOCs, the maximum deviation was ±2.4 % at peak concentration and ±32 % at 100 m of distance.
Modelled H 2 SO 4 from MAFOR shows a notably lower second peak (at around 18 m downwind distance) than the other two models. This appears to be a sign of faster condensation of H 2 SO 4 to the particle population in the simulation with MAFOR compared to the other models. The applied vapour pressure and accommodation coefficient of H 2 SO 4 were not identical in the different aerosol models. The relevance of condensation in MAFOR simulations will be discussed in more detail in Sect. 4.1.3.
Importance of aerosol processes
The importance of aerosol processes was evaluated for total PN concentrations by comparing the model runs including all processes to model runs excluding one of the aerosol processes, i.e. either condensation and evaporation, dry deposition, or coagulation, and to a run excluding all aerosol processes (dilution only). The evaluation was based on the change in total PN concentration between point A and point D relative to the PN concentration at point A:

ΔPN = ( PN(D) − PN(A) ) / PN(A) × 100 %.   (26)

The relative contribution of dilution was calculated as RC_dilution (%) = ΔPN_dilution / ΔPN_all × 100, whereas the relative contribution RC_proc (%) of the aerosol processes was defined as

RC_proc = ( ΔPN_all − ΔPN_proc ) / ΔPN_all × 100.   (27)
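A hypothetical worked example of Eqs. (26) and (27), with invented PN values for the runs with and without dry deposition:

```python
# Relative PN change between points A and D, and relative contribution (RC)
# of one aerosol process; all input values are invented for illustration.
def delta_pn(pn_a, pn_d):
    """Relative change in total PN between points A and D (%), Eq. (26)."""
    return (pn_d - pn_a) / pn_a * 100.0

def rc_process(d_all, d_proc):
    """RC (%) from the all-process run and the run excluding the process."""
    return (d_all - d_proc) / d_all * 100.0

d_all = delta_pn(8.0e4, 1.8e4)          # run with all processes
d_nodep = delta_pn(8.0e4, 2.0e4)        # run without dry deposition
print(rc_process(d_all, d_nodep))       # contribution of dry deposition (%)
```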
Table 8 summarizes the results of the process evaluation. Dilution dominated the change in total PN between street level and the neighbourhood scale in the model runs, with a relative contribution in the range 86 %-96 %. Although the same dilution function was implemented in the models, the PN change in simulations with AEROFOR was more strongly controlled by dilution than in simulations with the other models. In all aerosol dynamics models, dry deposition was the most important aerosol process, while coagulation played a minor role.
Dry deposition caused a reduction in the total PN concentration (ΔPN_all − ΔPN_deposition) by 9 %, 3 %, and 6 % in model runs with MAFOR, AEROFOR, and SALSA, respectively. Differences in the relative contribution of deposition in the models are most probably due to the different schemes for dry deposition in the models (Table 5). To assess the differences in the model results due to the application of different deposition schemes, additional model runs including all processes were performed with the MAFOR model, first using the deposition scheme in AEROFOR (SPF1985) and second the deposition scheme in SALSA (ZH2001). The comparison of the final particle size distributions at point D, obtained from MAFOR runs with the different dry deposition parameterizations, is shown in Fig. E2. The HS2012 deposition scheme that was used in the reference run with MAFOR was more efficient in removing particles > 10 nm in diameter than the other two deposition schemes. However, differences between using either the scheme SPF1985 or ZH2001 were negligible, which implies that the application of different dry deposition parameterizations was not the main reason for differences in the predicted particle size distributions.
LNMOM-DC was employed to estimate the relevance of coagulation in the Urban Case by modelling the coagulation-dispersion system with an identical setup. The change in the total PN due to coagulation at 100 m of downwind distance was estimated to be less than 2 %. Due to the small impact of coagulation, LNMOM-DC could not be utilized further to calculate the change in the size distribution parameters due to coagulation.
Condensation and evaporation contributed almost negligibly to PN changes but effectively increased total PN (negative RC value; Table 8). Under inefficient dispersion conditions, an increase in total PN due to condensation has been noted previously by Karl et al. (2016) in a study of aerosol processes on the neighbourhood scale. While condensation of vapours is not expected to change the total number concentrations, it serves to increase the volume of particles (Seinfeld and Pandis, 2006) and can modify the shape of particle size distributions. The increase in total PN is related to the competition between condensation and dry deposition or coagulation: small particles that grow by condensation as the air parcel moves away from the emission source will be less affected by removal through deposition or coagulation.
The results on the importance of aerosol processes from the three models in this study agree with the general notion that dilution dominates over other processes and that dry deposition onto the road surface is the only competitive aerosol process that alters total PN concentrations and size distributions related to vehicular traffic emissions in a street canyon (Kumar et al., 2011).
One method of determining the relative importance of various processes is timescale analysis (Ketzel and Berkowicz, 2004). Timescale analysis for a street canyon in Cambridge, UK, showed that timescales were of the order of 40 s for dilution, 30-130 s for dry deposition on the road surface, 600-2600 s for dry deposition on the street walls, about 10^5 s for coagulation, and about 10^4-10^5 s for condensation (Kumar et al., 2008). The timescale analysis by Nikolova et al. (2014) based on results from CFD modelling for an urban street canyon in Antwerp, Belgium, showed that the timescale for coagulation was about 3 times longer than for dilution, while the timescale for dry deposition was close to that of dilution under low-wind-speed conditions.
The importance of coagulation in street canyons is subject to ongoing controversy. The relevance of coagulation may depend on a variety of different factors, such as exhaust emissions, meteorological conditions, canyon geometry, and complexity of the area (Kumar et al., 2011). The timescales for self-coagulation and inter-modal coagulation of nucleation-mode particles are typically longer than the timescales for dilution (Kerminen et al., 2007; Pohjola et al., 2007). Kerminen et al. (1997) concluded that under conditions characterized by exceptionally slow mixing, simultaneous processing of ultrafine particles by dilution, self- and inter-modal coagulation, and condensation and evaporation can occur. Karl et al. (2016) found that coagulation was relevant for street environments in situations when large numbers of small particles (diameter < 50 nm) from vehicle exhaust emissions co-occurred with a significant PN fraction of larger particles (diameter > 100 nm). Kerminen et al. (1997) estimated the timescale for inter-modal coagulation of particles with D p = 10 nm to be 900-1200 s during rush hours, which is short enough to allow moderate removal of nucleation-mode particles by inter-modal coagulation.

Table 8. Importance of dilution and aerosol processes in the Urban Case scenario: relative changes in total PN concentrations between points A and D (ΔPN) and relative contribution (RC) of dilution and aerosol processes.
Effect of condensation and evaporation of organics
In the following, the relevance of condensation and evaporation of organic vapours in the Urban Case scenario is analysed with the MAFOR model. Condensation and evaporation are potentially important processes in the Urban Case simulation because condensable vapours are first emitted from the vehicles, then condense onto primary emitted particles inside the street canyon, and eventually re-evaporate from the condensed phase as the air parcel moves away from the street. Condensation and evaporation do not change the total number concentrations but will alter the size distributions and particle volume. According to Kumar et al. (2011), the effect of condensation in street canyons is uncertain, especially regarding the sub-10 nm particles. Evaporation reduces the volume concentration of particles. Partial evaporation can also increase the rate of coagulation by increasing the diffusion coefficient of the remaining particles (Jacobson et al., 2005). The uncertainties of condensation and evaporation in the models are partly attributable to the algorithm of the condensation process (e.g. mass accommodation coefficient in Eq. 4) and partly to the properties of the condensing or evaporating vapours (e.g. volatility of the chosen substances, vapour pressures of the liquid). In addition, the emission of semi-volatile organic vapours by vehicles is highly uncertain. Several sensitivity runs were done with MAFOR to evaluate the effect of uncertain parameters in the condensation of organic vapours. The evaluation of modelled size distributions was done by grouping particle sizes into six size categories (size classes S1-S6; see Karl et al., 2016).
Sensitivity runs with MAFOR were as follows (cf. Appendix G):

1. C_0(SVOC) × 100, i.e. a 100-fold increased saturation concentration (higher volatility) of the semi-volatile organics (SENS1);
2. adipic acid used as the condensing organic compound (SENS2);
3. a lower mass accommodation coefficient, α = 0.1, for the organic vapour(s) (SENS3);
4. a 20-fold increase in SVOC emissions (SENS4);
5. a 50-fold increase in SVOC emissions (SENS5).

The model run with all processes presented in the previous sections is used as a reference. Results are shown in Table 9. The sensitivity tests reveal that uncertainties associated with the properties of the organic vapour(s) affect only the sizes of particles that are smaller than 10 nm, and these do not limit the ability to simulate most of the number size distributions and total PN concentrations. Even a 20-fold increase in SVOC emissions only affects the sub-10 nm particles. A 50-fold increase in SVOC emissions results in clear growth of < 25 nm particles, mainly to sizes of 75-100 nm. The chemical composition of the traffic exhaust aerosol at points A and D computed with MAFOR indicates that condensation of organic vapours in the high-emission case leads to uniform mass increases in the size range 20-200 nm compared to the reference (Fig. 14).
Modelled and measured mass size distributions of total particles at different distances from the edge of the road in the reference run and the sensitivity runs are presented in Appendix G and Fig. G1. The highest emission rate of SVOCs clearly leads to an overestimation of the measured mass concentration in the size range below 100 nm diameter. The simulations with MAFOR therefore allow estimating the magnitude of vehicle-emitted organic vapours to be on the order of 10^-7 to 10^-6 g m^-1 veh^-1.
Uncertainties in the Urban Case scenario
Computation of the aerosol evolution within the street canyon environment of the Urban Case scenario involves several assumptions and uncertain parameters. In the following, the uncertainties of the processes and the design of the street canyon scenario are discussed.
Table 9. Effect of the chosen parameters for the condensing organic vapour(s) in the MAFOR model when simulating the Urban Case scenario (all processes included). The reference is the model run with all processes presented in Sect. 4.1.1. The size ranges of the six size classes are S1: 1-10 nm, S2: 10-25 nm, S3: 25-50 nm, S4: 50-75 nm, S5: 75-100 nm, and S6: > 100 nm. Reference run: change in number concentration (S1-S6): −76.2, −73.1, −70.9, −59.1, −57.9, −54.8; change in diameter (S1-S6): 0.9, 1.9, 1.9, 9.5, 3.4, 3.8.

Dry deposition is identified as the most important aerosol process in the Urban Case; at the same time, the size dependence of the dry deposition velocity is very uncertain. Measurements of dry deposition velocities for one particular surface type generally vary by 1 order of magnitude for a given particle size range of half of a logarithmic decade (Petroff et al., 2008). The HS2012 scheme used in the model is representative of dry deposition to rough environmental surfaces, which results in higher deposition velocities than for the other two aerosol dynamics models. The relative contribution of dry deposition, averaged over the three models, was 9.7 %; together with an uncertainty of ±60 %, the RC of dry deposition could be as high as 15 %. The Zhang et al. (2001) parameterization used in SALSA predicts a size-dependent deposition velocity with a minimum at particle diameters of ∼ 1 µm, but measurements over vegetated surfaces suggest that the deposition velocity minimum occurs closer to ∼ 0.1 µm at the lower bound of the accumulation mode (Emerson et al., 2020). Dry deposition onto the road surface and/or building walls in a street canyon is mainly influenced by traffic movement and can reduce total PN concentrations by about 10 %-20 % (Gidhagen et al., 2004; Kurppa et al., 2019).

Brownian coagulation was identified as a minor aerosol process. While the timescales for coagulation of nucleation-mode particles are typically longer than the timescales for dilution, the effect of fractal geometry may enhance the coagulation rates. For small particles, fractal geometry enhances the coagulation kernel with increasing size of the colliding particle compared to spherical shape. A preliminary test of fractal geometry (r_s = 13.5 nm and D_f = 1.7) in a model run for the Urban Case (all processes included) resulted in a PN reduction 0.2 % higher than that for compact particles. This suggests a somewhat higher importance of coagulation but does not change the conclusion that coagulation is a minor aerosol process in the Urban Case.
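As an illustration of why fractal geometry enhances coagulation, the following minimal Python sketch computes the collision radius of a fractal agglomerate from its volume-equivalent radius via the scaling N_pp = (r_c/r_pp)^D_f with a prefactor of 1. The primary particle radius and fractal dimension follow the test values quoted above, while the chosen volume-equivalent radii are illustrative assumptions.

```python
# Minimal sketch: collision radius of a fractal agglomerate vs a compact sphere.
# Scaling law (prefactor taken as 1): N_pp = (r_c / r_pp)**D_f, with
# volume conservation N_pp = (r_ve / r_pp)**3 for primary-particle radius r_pp.

def collision_radius(r_ve, r_pp=13.5e-9, D_f=1.7):
    """Collision radius (m) of a fractal agglomerate with volume-equivalent
    radius r_ve (m), primary particle radius r_pp, and fractal dimension D_f."""
    if r_ve <= r_pp:
        return r_ve  # below the primary particle size the particle is compact
    n_pp = (r_ve / r_pp) ** 3          # number of primary particles
    return r_pp * n_pp ** (1.0 / D_f)  # fractal scaling

for r_ve_nm in (20.0, 50.0, 100.0):    # illustrative volume-equivalent radii
    r_c = collision_radius(r_ve_nm * 1e-9)
    print(f"r_ve = {r_ve_nm:5.1f} nm -> r_c = {r_c * 1e9:7.1f} nm")
# The enlarged collision radius increases the coagulation kernel relative
# to a compact sphere of the same volume.
```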
Evaporation might play a role in removing small particles and shrinking larger particles (Harrison et al., 2016), but the low temperature applied in the Urban Case scenario favoured condensation over evaporation. Uncertainties associated with the properties of the organic vapour(s) may affect the sizes of sub-10 nm particles. In particular, using a lower mass accommodation coefficient (α = 0.1) for the organic vapour(s) may suppress condensation on small particles (Fig. G1), since more vapour molecules are reflected from the particle surface back into the air. However, molecular dynamics simulations and measurements indicate that the accommodation coefficient of atmospherically relevant organics is consistent with α = 1 (nearly perfect accommodation), regardless of the molecular structural properties (Julin et al., 2014).
Traffic-originated particles in the diameter range of 1.3-3.0 nm, so-called nanocluster aerosol (NCA), have been measured in different traffic environments (Rönkkö et al., 2017). Hietikko et al. (2018) reported a clear connection between NCA concentrations and traffic volume in a street canyon. In the M2 street canyon, no significant number concentrations of particles with a diameter less than 4 nm have been observed. The measurement techniques of the instruments used, i.e. nano-SMPS and ELPI, are not suitable for detection of these small particles. The formation mechanism of NCA particles is not fully understood. It has been hypothesized that, depending on the after-treatment systems of vehicles, NCA consists of non-volatile nano-sized particles formed during combustion in the cylinder or exhaust manifold, or of particles formed by an atmospheric nucleation mechanism during the dilution of the exhaust (Järvinen et al., 2019; Alanen et al., 2015). The model is not able to simulate solid particles that form in the early stage of the engine exhaust. Nor did the sulfuric-acid-driven (atmospheric) nucleation produce these small particles (Sect. 3.3). Currently, the relative contribution of traffic-emitted NCA versus atmospheric nucleation to the formation of clusters and/or particles in this size range is not known and very likely depends on the driving conditions and environmental factors. Based on model calculations, condensational growth of NCA to larger sizes is more important than removal by coagulation on the street scale (Kangasniemi et al., 2019).
In the coupled dilution-aerosol process modelling of the present study, an average line source is assumed so that high particle emissions from certain vehicles (e.g. trucks or buses) are not considered. Gidhagen et al. (2004), using a CFD model for a street canyon, found a relatively high influence of coagulation on the removal of particles inside a street canyon. For a wind speed of 2 m s −1 , the effect of coagulation on total PN was 15 % at the leeward side and 21 % at the windward side. The reason for the higher influence of coagulation might be the more realistic simulation of dispersion in the street canyon, resulting in a longer residence time of particles inside the street canyon. The CFD simulation considered the plumes of all vehicles inside the street canyon (diluted with clean air), which enhances the effect of removal by coagulation because coagulation is more efficient close to the particle source. The average dilution timescale in the Urban Case (from road edge to point D) was 31 s, which is close to the dilution timescale of a real street canyon at a wind speed of 3 m s −1 (Nikolova et al., 2014). For low wind speeds and low traffic intensity the dilution timescale in a street canyon with a unit aspect ratio is typically 120 s (Ketzel and Berkowicz, 2004). With a longer residence time in the street canyon, processing of ultrafine particles by coagulation and condensational growth would be more relevant.
Based on the national calculation system for traffic exhaust emissions and energy consumption in Finland (LIPASTO, 2021), the average exhaust emission of PM2.5 by vehicles in 2010 was 1.5-2.9 times higher than that in 2017 (the reference year of the EF_PN used in the present study). The decreasing trend is qualitatively in agreement with the corresponding data in Fig. 6 in Kukkonen et al. (2018); however, that figure only addresses developments until 2014. Ultrafine particles originate from exhaust emissions, so their emissions have probably decreased over time, mainly due to the implementation of diesel particulate filters. The exact magnitude of this reduction is not known, as it depends on the development of engine technology, fuels, and other factors. Model simulations of the Urban Case show that the EF_PN from 2017 is in accordance with the total PN concentrations measured in the street canyon.
Discussion of model performance
Statistical performance indicators in the comparison of model data against observation data in the Urban Case scenario provide an unambiguous criterion for evaluating the performance of MAFOR in comparison to that of other models. The results on the statistical performance of the model with respect to total PN, number size distribution, PM1, and BC are summarized here.

1. The model reproduced the reduction of total PN concentrations with increasing distance from the street in excellent agreement with the experimental data.
2. The model performed well for the number size distributions at street level and at different distances from the street, despite the coarser resolution of the particle emission size spectrum from vehicles.
3. The model performed less well for PM1, but the mean error of the prediction is still acceptable given the high relative uncertainties of the measurements. The low predictability of the observed PM1 variation is partly attributed to the long averaging interval of the measurements (ca. 5-10 min) compared to the instantaneous model simulation.
4. The model performed fairly well for BC, but varying traffic conditions may have affected the measurements, making the model-observation comparison for BC less reliable.
Overall, the simulation of the Urban Case demonstrates the good performance of MAFOR v.2 in predicting particle number, size distribution, and chemical composition of traffic exhaust aerosol. A major advantage of the model is the consistent treatment of particle number concentrations and mass concentrations of each aerosol component through the simultaneous solution of aerosol dynamics processes in terms of number and mass. This procedure allows the changes in the average density of particles to affect the predicted number and mass size distributions. An added value of the model is that it can be used to determine the order-of-magnitude emission rate of SVOCs by comparison between the modelled and the observed size distribution of total mass.
In addition to the statistical model performance of the aerosol process models presented in Sect. 4.1.1, we define a set of additional criteria for the overall evaluation. Clearly, this is not a strength-and-weakness analysis because model user feedback cannot be provided at the current stage. The additional indicators are intended to characterize the capabilities of the models in an objective way to be comparable between the models. The selected additional criteria are 1. computing time, 2. comprehensiveness of model outputs, and 3. representation of aerosol chemical composition.
Computing time is an important criterion for comparing the computational efficiency of models and algorithms. Computer models that demand excessive computing time are less attractive for the model user and are usually not suitable for integration in 3-D models. The computational time on a single CPU for the base simulation of the Urban Case scenario (all processes included) for a plume travel distance of 120 m was 1.5 min for MAFOR (Linux mini PC, 7.6 GB RAM), 1.2 min for SALSA (Linux desktop PC, 32 GB RAM), and 5.2 min for AEROFOR (desktop PC, Windows XP, 2.96 GB RAM, year 2002). Since the different aerosol dynamics models were run on different computers, it is not possible to give an accurate ranking of the time required by each model. Nevertheless, roughly comparing the computational times of the models indicates that MAFOR runs at a similar speed as SALSA.
Particle number size distribution is the basic output of all models. Additionally, model output of MAFOR comprises size distributions of total mass and the chemical composition (mass fractions). SALSA outputs volume size distributions of particle components, which at known density can be translated to mass concentration. An added value of MAFOR is the capability to resolve the chemical composition of each size section in terms of mass, which allows the size-resolved quantification of the condensed mass of volatile species within the full diameter range.
Regarding the speciation of the aerosol chemical composition in the models, MAFOR has a similar degree of detail and capabilities as SALSA, with the addition that two organic vapours (optionally three) of different volatility were used to represent condensation and evaporation of SVOCs. AEROFOR used two condensable vapours (H2SO4 and SVOCs) to describe the condensation and evaporation to an internally mixed aerosol, with all particles containing both compounds. In MAFOR and SALSA, the composition of the background aerosol (sulfate, BC, mineral dust, sea salt, etc.) can be defined separately from the composition of exhaust emissions.
Consistent treatment of mass- and number-based concentrations of PM
The consistent treatment of mass- and number-based concentrations of particulate matter in the model has several aspects: 1. initialization of the aerosol size distribution, 2. insertion of particles from aerosol source emissions, 3. mathematical solution of the aerosol dynamics processes, and 4. comparability to both the observed PM mass and number concentrations.
In the MAFOR model, the aerosol is initialized based on the modal mass composition, which is then distributed over the size bins of the model (Eq. 21) and converted to number based on the material density of the different aerosol components, assuming spherical particles.
This procedure ensures that the initial aerosol is consistent in terms of mass and number. The model simultaneously solves the number concentrations and mass concentrations for each size section as they change with time due to different aerosol dynamics processes in a given scenario. This method has two advantages: (1) it takes into account the concurrent change in average particle density during the evolution of an aerosol size distribution in the prediction of number and mass concentrations, and (2) it represents the growth of particles in terms of both the number and the mass. Finally, the output of modelled particle number size distribution and mass concentration size distribution can be directly compared to observed number and mass concentration size distributions, respectively.
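As a minimal illustration of this mass-to-number conversion, the following Python sketch distributes the mass of one lognormal mode over fixed size bins and converts each bin to a number concentration assuming spherical particles. The modal parameters, density, and grid are illustrative assumptions, and the function is a simplified stand-in rather than the model's actual routine (cf. Eq. 21).

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: convert a modal mass concentration into bin-wise number
# concentrations, assuming spherical particles. All values are illustrative.

def mass_to_number(mass_total, gmd, gsd, rho, d_edges):
    """Distribute total mass (kg/m^3) of a lognormal mode (geometric mean
    diameter gmd in m, geometric std dev gsd) over bins with edges d_edges (m)
    and return the number concentration per bin (1/m^3) for density rho."""
    d_mid = np.sqrt(d_edges[:-1] * d_edges[1:])   # geometric bin midpoints
    # Mass fraction per bin from the lognormal mass distribution (CDF differences)
    z = (np.log(d_edges) - np.log(gmd)) / np.log(gsd)
    frac = np.diff(norm.cdf(z))
    frac /= frac.sum()                            # renormalize to the grid
    m_bin = mass_total * frac                     # mass per bin
    v_p = (np.pi / 6.0) * d_mid ** 3              # single-particle volume
    return m_bin / (rho * v_p)                    # number per bin

d_edges = np.logspace(np.log10(10e-9), np.log10(1e-6), 17)  # 16 size sections
n_bins = mass_to_number(mass_total=5e-9, gmd=100e-9, gsd=1.8,
                        rho=1400.0, d_edges=d_edges)
print(f"{n_bins.sum():.3e} particles per m^3 (total)")
```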
Some of the above-mentioned aspects have uncertainties and limitations, which results in a certain deviation from the full consistency of number and mass.
In the real-world scenario in a street canyon environment (Urban Case), particle emissions are reported on the basis of numbers. However, the emissions in the MAFOR model are mass-based, and these are subsequently converted to number-based emissions using assumptions on particle densities. The total PN emission factor is dependent on the setup of the measurements. First, the emissions may include either only solid particles or solid and volatile PN; second, the PN emission factor has a variable lower particle size cut-off, depending on the employed instrumental method.
In the case of the street canyon simulation, the PN emission factor was adopted from Kurppa et al. (2020) and emissions were distributed over the particle size distribution. This was done so that the modelled size distribution after a distance of 5.5 m from the start matched the measurement of the particle size distribution at street level. A limitation of this modelling was that the particle emissions were attributed to a modal distribution in MAFOR. The MAFOR model represented the variation of particle emissions between different size bins less well than the two other models, SALSA and AEROFOR, which used a bin-wise representation, in particular for the particles with sizes below 20 nm diameter.
When comparing the modelled total particle mass concentration distribution to observations from ELPI in the Urban Case (see Fig. 12), we have assumed that all particles were spheres and had the same density of 1000 kg m^-3. The ELPI charging efficiency depends on the particle mobility diameter, whereas the ELPI measures the aerodynamic diameter of particles. This dilemma is usually circumvented by assuming that the particles are unit-density spheres, for which the mobility diameter equals the aerodynamic diameter. For soot particles that form as agglomerates of approximately spherical primary particles with 10-30 nm diameter, the effective density decreases with particle growth. This in turn narrows their aerodynamic size distribution relative to their mobility distribution. The uncertainty due to changes in the effective density of soot particles is estimated to cause a systematic error of about 20 % in the determination of PM with ELPI (Maricq et al., 2006). Salo et al. (2019) compared ELPI+ to PM10 cascade impactors in combustion emission measurements. ELPI+ mass concentrations were larger for most combustion cases, probably because the effective density of the particles was not the assumed unit density and because volatile particles were measured by ELPI+, but not with the cascade impactors. DeCarlo et al. (2004) mention two issues that affect the conversion of particulate matter mass to numbers: ultrafine particles with irregular shape and the internal void volumes of diesel soot agglomerates. Therefore, the evaluation of modelled total mass concentration in comparison against the measurements relies on the assumption of spherical particles without internal voids.
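To illustrate the density assumption involved, the following sketch converts a mobility diameter to an aerodynamic diameter using the simplified relation d_a ≈ d_m·sqrt(ρ_eff/ρ_0), with slip corrections neglected; the effective densities are illustrative assumptions.

```python
import math

# Minimal sketch: effect of effective density on aerodynamic vs mobility
# diameter, using the simplified relation d_a = d_m * sqrt(rho_eff / rho_0)
# (Cunningham slip corrections neglected). Densities are illustrative.

RHO_0 = 1000.0  # unit density, kg/m^3

def aerodynamic_diameter(d_m, rho_eff):
    """Approximate aerodynamic diameter (m) for mobility diameter d_m (m)
    and effective particle density rho_eff (kg/m^3)."""
    return d_m * math.sqrt(rho_eff / RHO_0)

for rho_eff in (1000.0, 600.0, 300.0):   # compact -> increasingly fractal soot
    d_a = aerodynamic_diameter(100e-9, rho_eff)
    print(f"rho_eff = {rho_eff:6.0f} kg/m^3 -> d_a = {d_a * 1e9:5.1f} nm")
# For unit-density spheres d_a equals d_m; for agglomerates with decreasing
# effective density, d_a falls below d_m, which narrows the aerodynamic
# size distribution relative to the mobility distribution.
```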
Evaluation of the model improvements
The Urban Case scenario was selected for the evaluation of the model because it considers the scale between the release of exhaust and the roadside, for which the aerosol dynamics processes are typically not resolved in city-scale dispersion models. Semi-volatile organic vapours can grow nucleation-mode particles with a non-volatile core that formed in the vehicle exhaust before the dilution process, without any significant chemical transformation in the atmosphere (Rönkkö et al., 2013). The improved treatment of semi-volatile organic compounds in MAFOR v2.0 with respect to their volatility distribution and their role in the growth of small particles was evaluated in Sect. 4.1. However, it was not possible to evaluate SOA formation through VOC photo-oxidation because the gas-phase concentrations of VOCs in the street canyon environment have not been measured. In follow-up work, it is planned to evaluate the performance of MAFOR v2.0 in simulations of secondary aerosol formation in aged vehicle exhaust in a smog chamber experiment or in an oxidation flow reactor (OFR). The model evaluation will be designed to consider the production of SOA precursors from the oxidation of VOCs using the mass-based formulation of the embedded 2-D VBS framework for organic aerosol-phase partitioning.
The simulation of SOA formation with coupled photochemistry and aerosol dynamics has previously been evaluated in a smog chamber experiment for the OH-initiated oxidation of 2-aminoethanol (Karl et al., 2012b). In the applied version of the MAFOR model, the coupling was with the gas-phase chemistry scheme of MECCA v3.0. The main advantage of using the new version 4.0 of MECCA in MAFOR v2.0 is the much more detailed VOC chemistry of the Mainz Organic Mechanism (MOM). In a study of the oxidation processes in the Mediterranean atmosphere, simulated atmospheric OH concentrations with the CAABA/MECCA box model using MOM chemistry were in good agreement with in situ OH observations (Mallik et al., 2018).
The performance of the improved coagulation kernel in MAFOR v2.0 was analysed in the simulation of a chamber experiment in the presence of continuous emission of nanoparticles (Case 2). For details, we refer to Sect. S3. When assuming compact spherical particles, the simulation of the evolution of the particle size distribution due to Brownian coagulation was in good agreement with the modelled particle size spectra and total particle number concentrations for the same case published in Anand et al. (2012). When fractal particles are considered in the model (D f = 1.75), the resulting particle size distribution is similar to the same case of Anand et al. (2012); however, growth of the fractal particles into a secondary mode is less efficient (Fig. S3). Differences in the coagulation efficiency probably lie in the details of the implementation of the fractal geometry in the coagulation kernel, although the same particle morphology was used in the present evaluation. The coagulation solution with respect to particle mass conservation is sufficiently accurate, with an error of less than 0.5 %.
The performance of the new binary parameterization of Määttänen et al. (2018a, b; M2018) in MAFOR v2.0 was compared to the AEROFOR model, as described in Appendix H. Simulation of particle formation was evaluated in a numerical experiment with zero background particles, mimicking conditions over the high Arctic in summer characterized by a very low number of pre-existing particles and low temperatures (Mauritsen et al., 2011; Karl et al., 2013). A particle burst occurred in simulations with both models 2 h after the start of the simulation due to neutral nucleation. The maximum nucleation rate, total particle (D_p > 3 nm) number concentration, and H2SO4 concentration calculated by MAFOR agreed well with results from AEROFOR (Fig. G2). Growth of the nucleated particles in MAFOR was weaker and resulted in a size band of new particles that was narrower than in the simulation with AEROFOR (Fig. 15). The weaker particle growth might be attributed to differences in the treatment of sulfuric acid condensation and particle deposition.
The coupled PNG-MOSAIC system that enables dynamic dissolution and evaporation of semi-volatile inorganic gases (Sect. 2.4) was tested in numerical scenario calculations with different initial concentrations of NH3 and HNO3 at RH = 90 % (Case 3), as described in Supplement Sect. S4. The initial conditions for Case 3 were adopted from the tests of the PNG-EQUISOLV II scheme presented in Jacobson (2005a). In simulations of Case 3, H2SO4 was condensed, HCl and HNO3 were dissolved and/or dissociated, and NH3 was equilibrated with dissolved and dissociated species. Uptake of water occurred at each model time step based on equilibrium thermodynamics. Under conditions of high concentrations of both NH3 and HNO3, an equilibrium was reached within about 6 h, and the time-dependent summed concentrations of inorganic aerosol species matched the equilibrium levels from EQUISOLV II fairly well (Fig. S2 in the Supplement). Under low nitrate conditions, the performance of the PNG-MOSAIC scheme is very accurate. Under low ammonia conditions, the simulated time series of summed concentrations of inorganic aerosol species from MAFOR are smooth, showing no sign of oscillation, and the model achieves similar accuracy as PNG-EQUISOLV II.
Planned developments for MAFOR
The future development of MAFOR beyond version 2.0 in view of application in urban settings is briefly outlined in the following. Specifically, the further improvement for application of the model in plume dispersion scenarios and the integration in 3-D atmospheric models on the urban scale will be the focus of the planned development for the next versions of MAFOR.
Plume dispersion simulation
The processes relevant for simulating urban cases and the emissions from mobile transport sources are the focus of the upcoming development. The following topics will be addressed in the continued development of the model.
-Currently, the size spectra of particle emissions can only be represented in four modes. Improving the size resolution of particle emissions (bin-wise) in the model has high priority.
-Traffic-originated NCA particles may be formed via a delayed primary emission route by rapid nucleation of low-volatility vapours (e.g. primary emitted H 2 SO 4 ) during exhaust cooling after release from the tailpipe (Olin et al., 2020), or they are directly emitted as solid particles (Alanen et al., 2015). While the emission of nano-sized solid particles is already implemented (Karl et al., 2013), it is envisaged to implement the delayed primary route in the model to test this hypothesis.
-Ammonia emissions from road traffic represent an emerging issue (Farren et al., 2020); NH 3 is released from catalyst-equipped gasoline vehicles as well as light-duty and heavy-duty diesel vehicles that rely on selective catalytic reduction (SCR). Vehicle emissions of NH 3 may affect new particle formation via the ternary route and secondary aerosol formation in urban areas. It is planned to activate the PNG-MOSAIC module in plume dispersion runs in order to simulate SIA formation in vehicle exhaust plumes.
-Soot particles acquire a large mass fraction of sulfuric acid during atmospheric ageing. Condensation of H2SO4 to soot particles was shown to occur at a similar rate for a given mobility size, regardless of their morphology (Zhang et al., 2008). Coating of fractal soot agglomerates with H2SO4 and water is accompanied by restructuring to a more compact form. The change in fractal dimension and effective density during soot ageing will be implemented in the model.

Figure 15. Evolution of the modelled particle number size distribution in a 10 h simulation to compare the performance of the nucleation code M2018 in (a) MAFOR v2.0 to that in (b) AEROFOR. New particles were inserted at 1.0 and 1.5 nm diameter in MAFOR and AEROFOR, respectively. The first particle formation after 15 min is due to ion-induced nucleation, and the main particle burst after 2 h is due to neutral nucleation. Final mean D_p was 18 and 22 nm, respectively, in the MAFOR and AEROFOR simulations. Details on the configuration of the numerical experiment are given in Appendix H.
-Additional dilution schemes for ship exhaust for ocean-cruising vessels may be implemented. Chosson et al. (2008) proposed a dilution parameterization for use in CTMs based on sophisticated methods to represent dilution in boundary layers by taking into account the initial buoyancy flux of the ship exhaust. For close-to-stack dispersion, the current method in Karl et al. (2020) is considered to be more suitable (Sect. 2.7).
-Particles from ship exhaust can act as CCN. Aerosol activation will be implemented in the model based on the scheme of Abdul-Razzak and Ghan (2002) with a sectional representation. Instead of using a single-parameter representation for hygroscopic growth (Petters and Kreidenweis, 2007), the dynamically calculated concentrations in the liquid droplet will be used.
With the proposed implementations, it is ensured that the model will remain state-of-the-art and could even become a benchmark model for aerosol dynamics process simulations.
Integration in 3-D atmospheric models
Implementation of the presented aerosol dynamics module into 3-D atmospheric dispersion models is facilitated by the operator splitting of processes and by the efficient integration of particle number and mass concentrations. The fixed sectional method is the most practical way to consider continuous nucleation of new particles together with the atmospheric transport and emission of particles. Coagulation is the process with the highest computational demand due to the representation of collisions of a particle from one size section with particles from all other sections. It will be considered in the future to implement an adaptive time stepping scheme for solving the coagulation process.
With regard to implementation of the aerosol dynamics code into large-scale atmospheric models it is of special interest to assess how much one can lower the accuracy of the size distribution description without compromising on the accuracy of the model results. The evaluation of the sectional size representation in Case 1 (Sect. S2 in the Supplement) revealed that the use of 16 size sections causes a numerical error of ∼ 10 %, and the use of 32 size sections causes an error of only ∼ 3 % in the final total PN concentrations under those conditions. The error of both representations is considered still acceptable when compared to measurement errors of observed total PN concentrations. Further, the computational demand increases only slightly when using a larger number of size sections. Overall, the size representation using 32 size sections is adequate for the simulation of long periods, as the accuracy in terms of size distribution changes and total number concentration is sufficiently high, while the computational demand is only 2 % higher compared to the lowest tested resolution of 16 size sections.
Aerosol representations in large-scale models are often limited to fewer than 20 size classes, as the particles in each size section have to be included in the advection routine and a higher number of advected species increases the computing time. Therefore, methods need to be developed for the mapping between the size representation used in the aerosol dynamics code and the advected particle species. The effect of changing the number of size classes in a 3-D model needs to be tested thoroughly.
Studies have demonstrated the relevance of episodes of new particle formation in cities situated in high insolation regions such as southern Europe. Both photo-induced nucleation and traffic emissions play a critical role in determining UFP concentrations in cities (Rivas et al., 2020). In addition, there is the highly dynamic sequence of chemical and physical processes such as condensation, deposition, and coagulation that modulates the number size distributions, making modelling of UFP concentrations on the city scale a complex task.
It is planned to integrate the aerosol dynamics code into the open-source city-scale model EPISODE-CityChem (Karl et al., 2019). The first requirement is the implementation of a size-resolved particle number emission inventory that compiles PN emission factors and size distributions for different sectors (e.g. Paasonen et al., 2016). The basic assumption of these PN emission inventories is that all primary particles are non-volatile and composed of the same material, although one could assume a certain fraction of particles (in each size section) to be either BC, OC, or a different material. According to this definition, volatile particles would always be secondary particles, i.e. forming in photo-induced nucleation or by condensation of gases already existing in the atmosphere, ignoring the fact that volatile particles may also form rapidly very close to the source of emissions, on the sub-grid scale of the 3-D model (grid resolution typically 100-1000 m). Nevertheless, the division into primary non-volatile particles and secondary volatile particles serves as a good starting point for the implementation of aerosol dynamics in the city-scale model.
There are certain specifications of the MAFOR box model that need to be retained in the large-scale model: (1) the structure of the four aerosol modes (nucleation, Aitken, accumulation, coarse), with each mode divided into the same number of size sections, and (2) the consistency between number and mass calculations. Condensation and evaporation of a chemical species in MAFOR adhere to the mass balance between the gas phase and the particle phase. Therefore, the mass concentration of the condensing species in each size section has to be considered an additional model species. If, for instance, 16 tracers for PN (16 size classes) are used, then the condensation of a single gas species will require the addition of 16 tracers for mass concentration. For computational reasons, one should aim to restrict the variety of chemical aerosol components as much as possible, for example by lumping all components of primary emitted particles (BC, primary OC, sea salt, etc.) into one single non-volatile model species, consistent with the PN emission approach outlined above. The MAFOR box model inherently includes coupling to a detailed gas-phase chemistry. However, the aerosol dynamics solver can be applied as a separate module in 3-D atmospheric models. The treatment of secondary organic aerosol by a hybrid approach in MAFOR (Sect. 2.5) is already in line with possible implementation in 3-D models. For the implementation in an atmospheric model, it is important to connect the vapours to their origin and source region, e.g. biogenic versus anthropogenic, for later research applications. The chemistry solver of the 3-D model needs to be modified to account for chemical reactions that lead to the production of gaseous precursors, or a subset of these, involved in SOA formation (Fig. 7).
Summary and conclusions
The open-source aerosol dynamics model MAFOR v2.0, as a new community model, was described and evaluated against measured data, and the predictions were intercompared with those of two other aerosol process models.
The main new features of MAFOR v2.0 compared to the original model version (v.1) are the following.
(1) The model has been coupled with the chemistry module MECCA, comprising detailed, up-to-date VOC chemistry and photolysis rates. This coupling also enables the partitioning of chemical species into, and subsequent aqueous-phase reactions within, the liquid phase of coarse-mode particles.
(2) The model includes a revised Brownian coagulation kernel that takes into account the fractal geometry of soot particles, van der Waals forces, and viscous interactions.
(3) The model contains a multitude of state-of-the-art nucleation parameterizations that can be selected by the model user. (4) The model has been coupled with PNG-MOSAIC, enabling size-resolved partitioning of semi-volatile inorganics at a relatively long time interval.
(5) The model includes a hybrid method for the formation of SOA within the framework of condensation and evaporation. These features make the model well suited for studying changes in the emitted particle size distributions by dry deposition, coagulation, and condensation and evaporation of organic vapours in urban environments as well as for the simulation of new particle formation over multiple days.
The performance of MAFOR v2.0 was evaluated against field-scale measurements of plume dispersion in a street environment located in the centre of Helsinki, published by Pirjola et al. (2012). The experimental data were obtained with a mobile laboratory van at different locations in the street environment. The data included particle number measurements in the size range of 3-414 nm, black carbon, and fine particulate mass PM1. The model was also intercompared with the results from two other aerosol dynamics models (AEROFOR and SALSA). MAFOR reproduced the reduction of total number concentrations with increasing distance from the street in good agreement (IOA = 0.85) with observations. MAFOR performed well in predicting the number size distributions at street level and at different distances from the street (average IOA = 0.78), and it was able to reproduce the development of the size distributions with increasing distance better than AEROFOR and SALSA. A limitation of MAFOR is that it represents the particle emission size spectrum as a multi-modal distribution, which may result in an underestimation of the number of small particles, while the total number of emitted particles is not affected. MAFOR predicted the variation of fine particulate matter PM1 (IOA = 0.25) in the street environment in better agreement with experimental data than SALSA. The difficulty in predicting the variation of observed PM1 is related to the long averaging interval of the mass measurements compared to the model simulations, which reflect instantaneous concentrations.
Dry deposition was found to be the only aerosol process that can compete with dilution, in agreement with several previous aerosol process studies in street canyons. Brownian coagulation played a minor role, and this was also confirmed by a simulation with the dispersion-coagulation code LNMOM-DC. Longer residence time in the street canyon and higher-than-average emissions from certain vehicles may increase the relevance of self-and inter-modal coagulation of nucleation-mode particles. For future aerosol process modelling studies in urban environments it is recommended to (1) select an appropriate deposition scheme based on the environmental conditions, (2) parameterize the dilution rate based on turbulence-resolving CFD simulations, and (3) constrain the particle emission size spectrum by independent measurements in the same environment.
The early phase of the vehicle exhaust plume was not resolved in this study. The vehicle wake is the first spatial scale from which the emitted UFPs will disperse into the ambient environment (e.g. Kumar et al., 2011). The parcel of exhaust emission at the tailpipe contains pre-existing particles from fuel combustion, unburnt droplets from lubricant oil, and various precursor gases for condensation. This parcel may already contain traffic-originated particles in the diameter range of 1.3-3.0 nm, so-called nanocluster aerosol (NCA) particles that were previously not detected by the instruments due to their small size. Their origin might be either the direct emission of non-volatile particles that formed in the engine or the rapid nucleation of low-volatility vapours during exhaust cooling after the tailpipe. The delayed primary emission route to explain the formation of NCA during exhaust cooling should be implemented in MAFOR in the future. The subsequent growth of NCA by organic vapours also needs to be investigated; MAFOR could be an ideal research tool for this, as the model allows constraining the emission rate of condensable organic vapours based on the measured mass size distribution.
For the consideration of the aerosol processes in urbanscale 3-D models, a division into primary non-volatile particles and secondary volatile particles is proposed here as a starting point for the implementation of the aerosol dynamics code. The treatment of primary particles as non-volatile is consistent with current size-resolved PN emission inventories. The volatile particles form by nucleation, and both particle types grow by condensation of semi-volatile or lowvolatility vapours. The division enables the mass-conserving approach to condensation and evaporation of vapours, and it allows minimizing the total number of aerosol chemical species in the 3-D model.
The continued development of the open-source code by the community is advised and steered by a consortium of aerosol scientists. Several aspects of the numerical solutions (efficient integration of number and mass concentrations, operator splitting of processes, use of the fixed sectional method, and low numerical diffusion) make the aerosol dynamics code a promising candidate for implementation into large-scale atmospheric models. Ultimately, it is intended to establish MAFOR v2.0 as a state-of-the-art benchmark model for evaluating aerosol processes in dispersion studies from local to regional and global scales. We encourage and support the integration of this aerosol dynamics code into urban-, regional-, and global-scale atmospheric chemistry-transport models, possibly also into Earth system models.
Appendix A: List of acronyms and nomenclature

A list of the acronyms and abbreviations used in this work is given in Table A1. The nomenclature used in this work is summarized in Table A2, which defines each symbol with a description and unit, covering gas-, aqueous-, and particle-phase concentrations (e.g. C_g,q, C_a,q, C_eq,q, C_tot,q, in µg m^-3), particle radii, volumes, and densities (r_i, r_c,i, υ_i, ρ_p,i, ρ_eff, ρ_q), the mass accommodation and transitional correction factors (α_q, β_q,i), the van der Waals correction factors W_c and W_k, and the dilution, dry deposition, and wet scavenging rates (λ_dil, λ_i^dry, λ_i^wet, in s^-1).
Appendix B: Analytical Predictor of Condensation
The Analytical Predictor of Condensation (APC; Jacobson, 2005b) obtains a non-iterative solution for the change in the gas-phase concentration of the condensable compound with time using the mass balance equation of the final aerosol-and gas-phase concentrations. An equation that describes the condensational growth of a component q onto particles of size i is Eq. (8) in Sect. 2.3.2 (Jacobson, 1997b).
The mass transfer rate $k_{T,q,i}$ between the gas phase and all particles of size i can be approximated as

$$k_{T,q,i} = 4\pi r_i N_i D_q \beta_{q,i},$$

where $N_i$ is the number concentration of particles of size i, $r_i$ is the radius of a single particle, $D_q$ is the diffusion coefficient, and $\beta_{q,i}$ is the transitional correction factor. With this, the condensational growth equation (cf. Eq. 8) reads

$$\frac{\mathrm{d}m_{q,i}}{\mathrm{d}t} = k_{T,q,i}\left(C_{g,q} - S_{q,i}\, C_{eq,q,i}\right). \quad \text{(B1)}$$
For the dissolution process, the saturation vapour concentration is a function of particle composition, and the corresponding equation is (Jacobson, 1997b)

$$\frac{\mathrm{d}m_{q,i}}{\mathrm{d}t} = k_{T,q,i}\left(C_{g,q} - \frac{S_{q,i}\, m_{q,i}}{H_{q,i}}\right), \quad \text{(B2)}$$

where $S_{q,i}$ is the equilibrium saturation ratio and $H_{q,i}$ is the dimensionless effective Henry's law coefficient for the respective size bin. Mass is conserved between the gas phase and all size bins of the particle phase by respectively writing the gas conservation equations for Eqs. (B1) and (B2) as

$$\frac{\mathrm{d}C_{g,q}}{\mathrm{d}t} = -\sum_{i=1}^{N_B} k_{T,q,i}\left(C_{g,q} - S_{q,i}\, C_{eq,q,i}\right), \quad \text{(B3)}$$

$$\frac{\mathrm{d}C_{g,q}}{\mathrm{d}t} = -\sum_{i=1}^{N_B} k_{T,q,i}\left(C_{g,q} - \frac{S_{q,i}\, m_{q,i}}{H_{q,i}}\right). \quad \text{(B4)}$$
Equations (B1) and (B3) together represent $N_B + 1$ ordinary differential equations for condensation and evaporation that are solved in the APC scheme. The APC solution follows from integration of Eq. (B1) to obtain the final concentration of compound q in size bin i. The resulting implicit expression for the mass concentration after a time step of condensational growth is

$$m_{q,i,t} = m_{q,i,t-\Delta t} + \Delta t\, k_{T,q,i,t-\Delta t}\left(C_{g,q,t} - S_{q,i,t-\Delta t}\, C_{eq,q,i,t-\Delta t}\right), \quad \text{(B5)}$$

where the subscripts t and t − Δt indicate the current time and one time step backward, and Δt is the length of the (growth) time step. The final gas-phase concentration of the compound is currently unknown. Based on the mass balance equation, the total concentration $C_{tot,q}$ of the compound in gas and particles is constrained by

$$C_{tot,q} = C_{g,q,t} + \sum_{i=1}^{N_B} m_{q,i,t} = C_{g,q,t-\Delta t} + \sum_{i=1}^{N_B} m_{q,i,t-\Delta t}. \quad \text{(B6)}$$

Substituting Eq. (B5) in Eq. (B6) and solving for $C_{g,q,t}$ gives the final gas-phase concentration in the condensation process at the end of time step t:

$$C_{g,q,t} = \frac{C_{g,q,t-\Delta t} + \Delta t \sum_{i=1}^{N_B} k_{T,q,i,t-\Delta t}\, S_{q,i,t-\Delta t}\, C_{eq,q,i,t-\Delta t}}{1 + \Delta t \sum_{i=1}^{N_B} k_{T,q,i,t-\Delta t}}, \quad \text{(B7)}$$

where $C_{g,q,t-\Delta t}$ is the gas-phase concentration of compound q calculated at the end of the chemistry time step. The concentration calculated from Eq. (B7) for condensation and evaporation cannot fall below zero but can increase above the total mass of the compound. Therefore, the gas-phase concentration is limited by $C_{g,q,t} = \min(C_{g,q,t}, C_{tot,q})$. This value serves as an estimate and is substituted into Eq. (B5).
It is problematic that Eq. (B5) can result in a negative aerosol mass concentration or in a concentration that exceeds the maximum (i.e. the total compound concentration). Therefore, two limits have to be placed subsequently after the computation of Eq. (B5). The first limit is $m_{q,i,t} = \max(m_{q,i,t}, 0)$, and the second limit is a renormalization of the bin concentrations to the available condensed mass (Jacobson, 2005b),

$$m_{q,i,t} = m_{q,i,t}\, \frac{C_{tot,q} - C_{g,q,t}}{\sum_{k=1}^{N_B} m_{q,k,t}},$$

where the values of $m_{q,k,t}$ on the right side of the equation are determined after the first limit has been applied for all size bins.
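To make the APC update concrete, here is a minimal Python sketch of one condensation time step following Eqs. (B5)-(B7) and the two limits above. It is a simplified single-vapour illustration with made-up inputs, not the MAFOR implementation.

```python
import numpy as np

# Minimal sketch of one APC condensation step (Eqs. B5-B7 plus limits).
# Single condensing vapour, illustrative inputs; not the MAFOR code.

def apc_step(c_gas, m_bins, k_T, S, c_eq, dt):
    """Return updated gas concentration and per-bin aerosol mass after one
    analytical-predictor condensation step of length dt (s).
    c_gas: gas-phase concentration (ug/m^3); m_bins: aerosol mass per bin;
    k_T: mass transfer rates (1/s); S: saturation ratios; c_eq: saturation
    vapour concentration (ug/m^3)."""
    c_tot = c_gas + m_bins.sum()                        # mass balance (B6)
    # Final gas concentration (B7), bounded by the total mass
    c_gas_new = (c_gas + dt * np.sum(k_T * S * c_eq)) / (1.0 + dt * k_T.sum())
    c_gas_new = min(c_gas_new, c_tot)
    # Implicit bin update (B5)
    m_new = m_bins + dt * k_T * (c_gas_new - S * c_eq)
    m_new = np.maximum(m_new, 0.0)                      # first limit
    cond = c_tot - c_gas_new                            # condensed mass available
    if m_new.sum() > 0.0:
        m_new *= cond / m_new.sum()                     # second limit (renormalize)
    return c_gas_new, m_new

k_T = np.array([2e-3, 1e-3, 5e-4])   # 1/s, illustrative
S = np.ones(3)                       # saturation ratios ~ 1
c_gas, m = apc_step(c_gas=2.0, m_bins=np.array([0.5, 1.0, 2.0]),
                    k_T=k_T, S=S, c_eq=0.1, dt=10.0)
print(c_gas, m)   # final values are bounded between 0 and the total mass
```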
A solution for the growth by the dissolution process is given in Jacobson (2005a). Equations (B2) and (B4) together represent $N_B + 1$ ordinary differential equations for growth by dissolution. The solution to the equations of dissolutional growth is obtained by integration of Eq. (B4) to obtain the final mass concentration of component q in size bin i. The resulting expression is Eq. (16) in Sect. 2.4. The final gas-phase concentration $C_{g,q,t}$ in Eq. (16) is currently unknown.

Substituting Eq. (16) into the mass balance Eq. (B6) and solving for $C_{g,q,t}$ gives the final gas-phase concentration for dissolution at the end of time step t (Jacobson, 2005a). The Analytical Predictor of Condensation, with the mass balance restrictions above, and the solution for the growth by dissolution are unconditionally stable, since all final concentrations for the gas and particle phases are bounded between 0 and $C_{tot,q}$, regardless of the time step.
Appendix C: Brownian coagulation
In the model size distribution, particles from the first size section collide with particles from all other size sections. Particles from the second size section collide with particles from the third to largest size section, and so on. The number concentration of particles in section i, N i , increases if the colliding particles result in a particle of the same size as particles in section i. It decreases if particles in section i coagulate with particles of other size sections or of the same section. When particles of volume υ k and υ j collide, the resulting particle has an intermediate volume V k,j = υ k + υ j . If the intermediate volume falls between the two size sections i and i + 1, then the new particle is split between the two sections and constrained by volume conservation. Thus, a size-splitting operator, the volume fraction f k,j,i for the partitioning to each model section i, is defined as in Jacobson (2005b).
$$f_{k,j,i} = \begin{cases} \left(\dfrac{\upsilon_{i+1} - V_{k,j}}{\upsilon_{i+1} - \upsilon_i}\right)\dfrac{\upsilon_i}{V_{k,j}}, & \upsilon_i \le V_{k,j} < \upsilon_{i+1},\; i < N_B \\ 1 - f_{k,j,i-1}, & \upsilon_{i-1} < V_{k,j} < \upsilon_i,\; i > 1 \\ 1, & V_{k,j} \ge \upsilon_i,\; i = N_B \\ 0, & \text{all other cases} \end{cases} \quad \text{(C1)}$$

An advantage of this method is that the volume fractions obtained in Eq. (C1) are independent of the representation of the size distribution.
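A minimal Python sketch of this size-splitting operator is given below; it follows the case structure of Eq. (C1) on a fixed volume grid. The bin volumes are illustrative, and the function is a simplified stand-in for the model's routine.

```python
import numpy as np

# Minimal sketch of the coagulation size-splitting operator (Eq. C1).
# Fixed volume grid; illustrative values, not the model's actual routine.

def volume_fraction(V_kj, v, i):
    """Fraction of the intermediate volume V_kj assigned to section i of a
    fixed volume grid v (ascending array of section volumes, length N_B)."""
    n_b = len(v)
    if i < n_b - 1 and v[i] <= V_kj < v[i + 1]:
        return (v[i + 1] - V_kj) / (v[i + 1] - v[i]) * (v[i] / V_kj)
    if i > 0 and v[i - 1] < V_kj < v[i]:
        return 1.0 - volume_fraction(V_kj, v, i - 1)
    if i == n_b - 1 and V_kj >= v[i]:
        return 1.0   # volume at or above the largest section stays there
    return 0.0

v = np.logspace(-24, -18, 7)           # illustrative section volumes, m^3
V_kj = v[2] + v[3]                     # collision of sections 2 and 3
fracs = [volume_fraction(V_kj, v, i) for i in range(len(v))]
print(np.round(fracs, 3))              # nonzero only for the two bracketing bins
# The fractions sum to 1, so the coagulated volume V_kj is fully assigned
# (volume-conserving); the number added to bin i follows as f * V_kj / v[i].
```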
Viscous force correction of the diffusion coefficient in the continuum regime is

$$\frac{D^{\infty}_{i,j}}{D_{r,i,j}(r)} = 1 + 2.6\,\frac{r_i r_j}{(r_i + r_j)^2}\left(\frac{r_i r_j}{(r_i + r_j)(r - r_i - r_j)}\right)^{1/2} + \frac{r_i r_j}{(r_i + r_j)(r - r_i - r_j)}. \quad \text{(D4)}$$

In Eq. (D4), $D_{r,i,j}$ is a relative diffusion coefficient between particles i and j, and $D^{\infty}_{i,j} = D_{p,i} + D_{p,j}$ is the sum of the individual diffusion coefficients of the two particles.
The integral in the correction factors $W_k$ and $W_c$ is approximated by numerical integration using the Gauss-Legendre quadrature formula after transforming the variable r using the relation x = b/r (with the dimensionless coordinate x), so that the limits of the integral become 0 and 1/(1 + a/b), and

$$\int_{0}^{1/(1+a/b)} (\text{integrand})\, \mathrm{d}x$$

can be evaluated as a function of x (where a = r_i, b = r_j, and b ≥ a).
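As an illustration of this variable transformation, the following sketch integrates a dummy placeholder integrand over [0, 1/(1 + a/b)] with Gauss-Legendre quadrature via NumPy; the full expressions for W_k and W_c are not reproduced in this appendix, so the integrand here is an assumption for demonstration only.

```python
import numpy as np

# Minimal sketch: Gauss-Legendre quadrature on [0, 1/(1 + a/b)] after the
# substitution x = b/r. The integrand below is a dummy placeholder; the
# actual integrands of W_k and W_c are not reproduced here.

def integrand(x, a, b):
    return np.exp(-x) * (1.0 + a * x / b)   # placeholder only

def gauss_legendre_integral(a, b, n=16):
    """Integrate the placeholder integrand from 0 to 1/(1 + a/b)."""
    upper = 1.0 / (1.0 + a / b)
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
    x = 0.5 * upper * (nodes + 1.0)                       # map to [0, upper]
    return 0.5 * upper * np.sum(weights * integrand(x, a, b))

a, b = 10e-9, 50e-9   # illustrative radii with b >= a
print(gauss_legendre_integral(a, b))
```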
Appendix E: Dilution function for the Urban Case
The first dilution stage of the diluted exhaust plume, between the upwind curbside and the downwind curbside of the street, was described with the jet plume dispersion model of Vignati et al. (1999). In this model, dispersion of the plume is calculated taking into account the atmospheric turbulence, the traffic-generated turbulence, and the entrainment of fresh air due to the jet effect of the exhaust gas. The expression for the evolution of the plume cross section S(t) during the first dilution stage (Eq. E1) involves the cross-sectional area S of the air parcel or exhaust plume (in m^2), the cross section S_0 of the initial air parcel (here: 0.8 m^2), the initial entrainment velocity σ_w(0) (in m s^-1), and the initial exhaust gas velocity u_0 (here: 0.23 m s^-1). The entrainment velocity (Eq. E2) depends on the street-level wind speed u_street, the traffic-generated turbulence σ_wt, and the plume jet velocity u_jet. The proportionality constant α is set to 0.1, which is typical for mechanically induced turbulence (Berkowicz et al., 1997). The traffic-generated turbulence is estimated using the traffic count, street width, horizontal area of a vehicle, and typical vehicle speed. The evolution of the plume height, H_p, during the first stage is derived from Eq. (E1), assuming a circular plume cross section (Eq. E3). The dilution ratio D_R in the vehicle exhaust plumes increases approximately linearly with time during the first seconds of the dilution and is given by D_R(t) = S(t)/S_0 (Eq. E4). The change in the dilution ratio with time, dD_R/dt, is obtained from the derivative of S(t)/S_0:

$$\frac{\mathrm{d}D_R}{\mathrm{d}t} = \frac{-2\alpha^2 u_0^2\, t + 2\sigma_w(0)\left(\sqrt{S_0} + \sigma_w(0)\, t\right)}{S_0}. \quad \text{(E5)}$$

The particle dilution rate as a function of time for the first dilution stage follows from Eq. (E5) (Eq. E6).

For the second dilution stage, between the downwind curbside and the ambient environment, the dispersion situation was analysed with the simplified street canyon model SSCM, a component of EPISODE-CityChem (Karl et al., 2019), using a realistic street canyon geometry, line source emissions of total particles in both directions of the street, and the meteorological conditions of the Urban Case. Modelled total PN concentrations were obtained at certain receptor points located perpendicular to the street in the downwind direction, beginning at the curbside, at distances of 10 m. A numerical power function was fit to the modelled PN concentration data. The resulting fit equation for the total particle number concentration was found to be

$$N_{tot}\,(\mathrm{cm}^{-3}) = 1.24 \times 10^{5}\, d^{-0.306},$$

with downwind distance d from the curbside in metres (m).

Figure E2. Modelled particle number size distribution with MAFOR using different dry deposition parameterizations at point D (after 78.5 s of plume transport time). HU2012 is the reference dry deposition configuration used in MAFOR, SPF1985 is the dry deposition scheme in AEROFOR, and ZH2001 is the dry deposition scheme in SALSA. Measured size distributions from SMPS are shown as red circles.
The dilution parameter b = 0.306 ± 0.05 is close to the value of 0.34 reported in Pirjola et al. (2012), which was derived from PN measurements. The obtained parameter b = 0.306 is used in Eq. (25) to calculate the change in particle number concentration with time due to dilution in the aerosol dynamics models.
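To show how such a dilution parameter can be obtained, the following sketch fits a power function N = A·d^(-b) to concentration-distance data with SciPy. The data points below are synthetic, generated for illustration, and are not the SSCM results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit N_tot = A * d**(-b) to concentration-vs-distance data.
# The data below are synthetic, for illustration only (not SSCM output).

def power_law(d, A, b):
    return A * d ** (-b)

d = np.array([10.0, 20.0, 30.0, 40.0, 60.0, 80.0])   # downwind distance, m
noise = 1.0 + 0.03 * np.random.default_rng(1).standard_normal(d.size)
n_tot = 1.24e5 * d ** (-0.306) * noise               # synthetic PN data, cm^-3

popt, pcov = curve_fit(power_law, d, n_tot, p0=(1e5, 0.3))
A_fit, b_fit = popt
b_err = np.sqrt(np.diag(pcov))[1]
print(f"A = {A_fit:.3g} cm^-3, b = {b_fit:.3f} +/- {b_err:.3f}")
# The fitted exponent b is the dilution parameter used in the dilution-rate
# expression of the aerosol dynamics models (cf. Eq. 25).
```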
The height of the air parcel containing the vehicle exhaust during the second dilution stage is parameterized as a function of time in terms of H_p,0, the height of the plume at the end of the first dilution stage, and the dispersion parameters a and b, which depend on the atmospheric stability. For the stable conditions prevailing in the Urban Case scenario, a = 61.14 and b = 0.91 (Petersen, 1980) were chosen. The evolution of air parcel height, dilution rate, and particle emission rates during the Urban Case scenario simulation is shown in Fig. E1.
To assess the differences in model results due to the application of different dry deposition schemes (Table 5), model runs including all aerosol dynamics processes and dilution were performed with the MAFOR model for the Urban Case scenario using the dry deposition parameterizations HU2012 (reference configuration in MAFOR), SPF1985 (deposition scheme in AEROFOR), and ZH2001 (deposition scheme in SALSA). The dry deposition parameterizations are introduced in Sect. 2.3.5. The comparison of the final particle size distribution at point D, obtained from these MAFOR runs with different dry deposition parameterizations, is shown in Fig. E2.
The HU2012 deposition scheme that is used in the reference run with MAFOR is more efficient than the other two deposition schemes in removing particles with sizes above 10 nm diameter. The final size distribution resulting from SPF1985 is similar to that from ZH2001.
Appendix F: Statistical indicators and model performance indicators
Statistical performance indicators for the model-observation comparison were calculated with the modStats function of the openair R package (Carslaw and Ropkins, 2012). The mean absolute error, MAE (also named mean gross error, MGE), is defined as

$$\mathrm{MAE} = \frac{1}{N_o} \sum_{i=1}^{N_o} \left| M_i - O_i \right|, \quad \text{(F1)}$$

where M and O stand for the model and observation results, respectively, and $N_o$ is the number of observations. The use of MAE compared to measures that are based on squared differences was preferred here because the absolute values of the differences are less sensitive to high values. Two measures of model performance were selected, the index of agreement (IOA) and the coefficient of efficiency (COE). In this study, the COE is used to rank the models according to their performance in predictive capability.
The calculation procedure of COE in openair is based on Legates and McCabe (1999). A COE of 1 indicates a perfect model. A COE of 0.0 indicates a model that is no better than the observed mean; therefore, such a model can have no predictive advantage. If COE takes negative values, the model is less effective than the observed mean in predicting the variation in the observations. COE is defined as

$$\mathrm{COE} = 1 - \frac{\sum_{i=1}^{N_o} \left| M_i - O_i \right|}{\sum_{i=1}^{N_o} \left| O_i - \overline{O} \right|}, \quad \text{(F2)}$$

where $\overline{O}$ is the observation mean.
The index of agreement (IOA) is a refined index for measuring model skill (Willmott et al., 2012). IOA spans values between −1 and +1, with values approaching +1 representing better model performance. When IOA is 0.0, it signifies that the sum of the magnitudes of the errors and the sum of the perfect-model deviation and observed deviation magnitudes are equivalent. Some caution is needed when IOA approaches −1 because it can either mean that the model-estimated deviations about $\overline{O}$ are poor estimates of the observed deviations or that there is simply little observed variability. IOA is defined as

$$\mathrm{IOA} = \begin{cases} 1 - \dfrac{\sum_{i=1}^{N_o} \left| M_i - O_i \right|}{2 \sum_{i=1}^{N_o} \left| O_i - \overline{O} \right|}, & \text{if } \sum_{i=1}^{N_o} \left| M_i - O_i \right| \le 2 \sum_{i=1}^{N_o} \left| O_i - \overline{O} \right| \\[2ex] \dfrac{2 \sum_{i=1}^{N_o} \left| O_i - \overline{O} \right|}{\sum_{i=1}^{N_o} \left| M_i - O_i \right|} - 1, & \text{otherwise.} \end{cases} \quad \text{(F3)}$$
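A compact Python sketch of these three indicators, matching the definitions in Eqs. (F1)-(F3), is given below; the sample arrays are fabricated for illustration. (The study itself used the openair R package; this is only a minimal re-implementation of the formulas.)

```python
import numpy as np

# Minimal re-implementation of MAE, COE, and the refined IOA (Eqs. F1-F3).
# The sample data are fabricated for illustration.

def mae(m, o):
    return np.mean(np.abs(m - o))

def coe(m, o):
    return 1.0 - np.sum(np.abs(m - o)) / np.sum(np.abs(o - o.mean()))

def ioa(m, o):
    err = np.sum(np.abs(m - o))
    dev = 2.0 * np.sum(np.abs(o - o.mean()))
    return 1.0 - err / dev if err <= dev else dev / err - 1.0

obs = np.array([8.0, 10.0, 12.0, 9.0, 11.0, 14.0])
mod = np.array([7.5, 10.5, 11.0, 9.5, 12.0, 13.0])
print(f"MAE = {mae(mod, obs):.2f}, COE = {coe(mod, obs):.2f}, "
      f"IOA = {ioa(mod, obs):.2f}")
```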
Appendix G: Comparison of modelled and measured mass size distributions

Modelled mass size distributions (dM/dlogD_p) of total particles obtained for the reference run (all processes) and the sensitivity runs with different representations of condensable organic vapours were compared to the measured mass size distributions. The measured mass size distribution was obtained from particle number data observations with SMPS (138 size sections in the range of 3-420 nm; 150 s resolution data; on board the mobile lab Sniffer), assuming a particle density of 1000 kg m^-3. For points A and D, the modelled mass size distributions of the reference run and the five sensitivity tests for condensation of organics are plotted together with the measured mass size distributions in Fig. G1. The modelled mass size distribution obtained in the reference run matches the measured distribution at point A closely, except for the size range 40-100 nm, in which the model overestimates the measured mass (Fig. G1a), mainly because of inaccurate particle emissions in this size range. Increased volatility of the semi-volatile organics (SENS1) and a lower accommodation coefficient (SENS3) to some extent suppressed the condensation to the sub-10 nm particles. For the sensitivity run with adipic acid (SENS2), no deviation from the reference run is apparent in the mass size distributions. The 20-fold increase in SVOC emissions (SENS4) increases the mass concentrations of sub-10 nm particles at point D (by roughly a factor of 2), but not at point A. The 50-fold increase in SVOC emissions (SENS5) increases mass concentrations of sub-10 nm particles at point A, still consistent with the measured mass size distribution. However, at point D the mass concentrations of particles with diameter < 160 nm are largely overestimated compared to the measurements. Given a factor of 2 uncertainty of the experimental mass concentration data (measurement error and uncertain particle density), the emission rate of condensable organics is bound between the reference emission (EF_SVOC = 3.9 × 10^-7 g m^-1 veh^-1) as a lower limit and the 20-fold emission (SENS4) as an upper limit for the model to be in agreement with observations. Based on MAFOR simulations, vehicle-emitted organics are thus determined to be on the order of 10^-7 to 10^-6 g m^-1 veh^-1.

Figure G1. Comparison of particle mass size distribution for the diameter range 5-500 nm in the Urban Case simulation. Plots show the modelled mass size distribution from the reference run (including all processes) and the five condensation sensitivity tests (SENS1 to SENS5) with MAFOR together with the observed mass size distribution derived from SMPS measurements, using a particle density of 1000 kg m^-3 to convert from number to mass: (a) size distribution of the total mass at point A; (b) size distribution of the total mass at point D.

Figure G2. Comparison of modelled total particle number with diameter > 3 nm (N_3), gas-phase concentrations of H2SO4 and SO2, and nucleation rate from MAFOR v2.0 (black lines) and AEROFOR (red lines) in a test of the new nucleation parameterization M2018 for new particle formation under clear-sky conditions (T = 267 K, RH = 90 %). The simulations started at 8:00 local time.
Drosophila apoptosis and Bcl-2 genes: outliers fly in.
After much speculation (Abrams, 1999; McCall and Steller, 1997; Meier and Evan, 1998), work described in this issue of The Journal of Cell Biology and in a recent issue of Proceedings of the National Academy of Sciences of the United States of America (Igaki et al., 2000) unveils the long anticipated, missing piece of the apoptosome in flies. On page 703 in this issue, in a paper by Colussi et al., Kumar, Richardson, and colleagues characterize the first Drosophila members of the Bcl-2 gene family whose function is important for programmed cell death (PCD). The founding member of this gene family was identified as the proto-oncogene upregulated by t(14;18) translocations in B cell follicular lymphomas. Since this discovery, and its link to the regulation of cell death, the Bcl-2 family of proteins has grown to include more than twenty death-suppressing and -promoting members found in the genomes of worms, mammals, viruses, and now flies (Gross et al., 1999; Vander Heiden and Thompson, 1999).
Central components of the apoptosis machinery in worms, mammals, and flies are schematized in Fig. 1. In C. elegans, both Ced-3 and Ced-4 are required for all PCD during worm development. Ced-3 encodes a founding member of the caspase family (cysteine proteases), while Ced-4 promotes the activation of Ced-3 through direct physical interaction. The upstream death regulator, Ced-9, protects cells from death by forming a complex with Ced-4, thus preventing the activation of Ced-3 by Ced-4. A proapoptotic protein with limited Bcl-2 similarity, Egl-1, can interact with Ced-9 to derepress Ced-4 and permit activation of Ced-3 (reviewed in Metzstein et al., 1998; Gumienny et al., 1999; Horvitz, 1999).
In mammals, the Ced-4 homologue, Apaf-1, together with cytochrome c released from mitochondria, promotes oligomerization and activation of the mammalian apical caspase, caspase-9. Activated caspase-9 subsequently processes downstream effector caspases (e.g., caspase-3), thus initiating a caspase cascade leading to apoptosis. In this pathway, the Bcl-2 family of proteins is thought to act upstream of caspase activation, perhaps regulating the initial phase of caspase processing, while in other situations (e.g., death receptor signaling), Bcl-2 proteins may function to amplify or propagate an already initiated caspase cascade. The Bcl-2-related proteins can form homo- and heterodimers through their conserved domains (BH domains).
This ability to dimerize, and the relative abundance of the pro- and anti-apoptotic members, are thought to be important determinants regulating the propensity of a given cell to convert death signals into an apoptotic response (reviewed in Adams and Cory, 1998; Gross et al., 1999; Vander Heiden and Thompson, 1999).
How do the mammalian Bcl-2 proteins function to regulate cell death? Since human Bcl-2 can partially reverse cell death defects found in Ced-9 mutant worms, Ced-9 and Bcl-2 are thought to share at least some functional properties (Vaux et al., 1992; Hengartner and Horvitz, 1994). Thus, one proposed mechanism draws on possible functional analogies to the Ced-9 protein, which physically binds to and inhibits Ced-4. Consistent with this hypothesis, some Bcl-2 proteins were found to associate with Apaf-1 (Inohara et al., 1998; Pan et al., 1998; Song et al., 1999), leading to speculation that Bcl-2 proteins could regulate apoptosis by interfering with Apaf-1-dependent caspase-9 activation. This is an attractive model, easily reconciled with the relationship between Egl-1, Ced-9, Ced-4, and Ced-3 in C. elegans. However, when native Bcl-2 proteins were directly examined, the predicted associations with Apaf-1 were not found, raising doubts as to whether the Apaf-1/Bcl-2 overexpression studies truly reflect the physiological condition (Moriishi et al., 1999). Alternative models for Bcl-2 function have also developed over the years. Some members of the Bcl-2 family are either resident proteins of mitochondrial membranes or transit from the cytosol to mitochondria in association with an apoptotic signal. Together with a resemblance to bacterial toxins and their ability to form channels in lipid bilayers, it has been suggested that Bcl-2 proteins also function to regulate the flow of caspase-activating substances, such as cytochrome c and apoptosis-inducing factor (AIF), from the mitochondria to the cytosol by forming and/or regulating pores on mitochondrial membranes (Reed, 1998; Gross et al., 1999). This picture is consistent with the ability of Bcl-2 to suppress the release of cytochrome c from isolated mitochondria (Kluck et al., 1997; Yang et al., 1997) and a reported association with the mitochondrial voltage-dependent anion channel (Shimizu et al., 1999). Although we have learned a great deal since their discovery more than a decade ago, the precise mechanism(s) by which these proteins exert their function remains both an elusive and intensely controversial issue. Given that evolutionary perspectives often provide clues to the nature of molecular function, an implicit undertone coloring this debate is whether (or to what extent) the anti-apoptotic Bcl-2 proteins actually share biochemical functions with Ced-9. Accordingly, the newly identified Bcl-2 homologues from a distantly related genetic model promise additional avenues to help resolve this problem.

Colussi et al. (2000) identified Debcl (death executioner Bcl-2 homologue) and a second Bcl-2 homologue from a database search of the Drosophila genome. These two Drosophila proteins share a high degree of similarity, and among the published Bcl-2 members, they are most similar to Bok. Both contain BH1, BH2, and BH3 domains as well as a hydrophobic transmembrane region. Interaction profiling studies demonstrated that Debcl can bind to most of the mammalian anti-apoptotic family members (e.g., Bcl-2, Bcl-xL, etc.) but not to pro-apoptotic members (such as Bik, Bax, and Bak). In cell culture and in transgenic animals, directed expression of Debcl provokes extensive cell death, which required an intact BH3 domain and was suppressible by the virally derived caspase inhibitor, p35. In the embryo, the appearance of Debcl RNA correlates with PCD at many stages and in various tissues. Colussi et al. (2000) also used the RNA interference technique, a relatively new method of blocking gene expression, to validate a pro-apoptotic function for Debcl and demonstrate a requirement for this gene during embryonic cell death. A concurrent paper by Igaki et al. (2000) characterizes the same gene, which they refer to as Drob-1. They used a clone that is 25 residues longer at the NH2 terminus and found that Drob-1, like Debcl, was pro-apoptotic. In related studies, they also uncovered a requirement for the carboxy-terminal hydrophobic transmembrane region and found that, like its mammalian counterparts, Drob-1 localized to mitochondria when expressed in cultured cells. While the two groups studying this gene agree on its expression profile and its pro-death properties, there are significant discrepancies as well. For instance, whereas the Debcl group contends that the gene does not include a BH4 domain, the Drob-1 group contends that it does. The issue might be more than academic, since it would be the first example of a pro-apoptotic member of the family that contains a BH4 motif (outside of Bcl-xS). Another discrepancy is that although expression of Debcl/Drob-1 provoked caspase activation, Colussi et al. (2000) found that p35 was able to suppress the accompanying cell death (in cultured cells and the animal) whereas Igaki et al. (2000) report that it was not (only cultured cells were tested). The differential effects of p35 in the two studies might be due to the slightly different clones that were used or the differing expression levels obtained in the two labs. Other possibilities include the induction of p35-insensitive caspases or the activation of caspase-independent events that lead to cell death. Again, given the importance of caspase-independent events associated with killing by pro-death Bcl-2 genes in mammalian cells (Xiang et al., 1996; McCarthy et al., 1997), reconciling these differences will be more than simply an academic exercise.
How does Debcl/Drob-1 fit into our current view of cell death genetics in flies? Although it is far too early for a well-focused picture, the similarities to pro-death Bcl-2 genes, combined with epistasis data connecting Debcl to existing players of the Drosophila cell death pathway, suggest a tentative molecular order for gene action (see Colussi et al., 2000, and Fig. 2). Debcl-associated death phenotypes were sensitive to the dosage of DIAP1 and Dark (the Apaf-1/Ced-4 ortholog), indicating that the protein functions upstream of, or parallel to, the action of these genes. Expression of Debcl/Drob-1 also provoked caspase activation, which (at least in the animal) was reversed by the broad-spectrum caspase inhibitor, p35. In contrast, cell killing by Debcl was insensitive to the dosage of the death activators Rpr, Grim, and Hid, suggesting that the protein functions either downstream of or parallel to these genes. While the pathways in Fig. 2 (and Colussi et al., 2000) offer a reasonable interpretation of the current data, the usual caution and caveats apply, since the position of the fly Bcl-2 proteins is largely based upon dominant phenotypes resulting from directed overexpression studies. Nevertheless, given the attention these genes are likely to receive, we can expect rigorous testing of the model for years to come. In this regard, the isolation of null mutations in these genes and the identification of an anti-apoptotic ortholog are perhaps the highest priorities.

Figure 2. In Drosophila, all embryonic PCD requires the activities of three closely linked genes, Rpr, Grim, and Hid. Expression of these apoptosis regulators initiates multiple downstream pathways to activate caspases and kill cells. Rpr, Grim, and Hid may induce formation of an apoptosome complex consisting of cytochrome c (cyt. c), Dark (Kanuka et al., 1999; Rodriguez et al., 1999; Zhou et al., 1999), and apical caspases such as Dronc (Dorstyn et al., 1999) and Dredd, which in turn promotes caspase activation and propagation of proteolytic activity to downstream, effector caspases (Abrams, 1999). Alterations in cyt. c (Varkey et al., 1999) could be regulated by Scythe, a protein that binds all three death activators (Thress et al., 1998, 1999). Rpr, Grim, and Hid also engage caspases via one or more Dark-independent pathways; these involve derepression of native caspase inhibitors such as Diap1. The pro-apoptotic Bcl-2 proteins (Drob-1/Debcl) probably function downstream of Rpr, Grim, and Hid and might directly engage caspases or function through Dark/cyt. c to propagate death signals. Currently, no pro-survival Bcl-2 gene has been reported in Drosophila.
Salbutamol Transport and Deposition in the Upper and Lower Airway with Different Devices in Cats: A Computational Fluid Dynamics Approach
Simple Summary
Administration of inhaled salbutamol via metered-dose inhalers can effectively treat bronchoconstriction. Different devices are used for the delivery of this drug in cats, either in the hospital or at home, for long-term treatment. Effective drug administration may depend on the drug delivery device as well as on patient cooperation. By using non-invasive computational fluid dynamics techniques, the impact of these devices on the deposition and transport of salbutamol particles in the cat airways was simulated and assessed. The results confirm a variable drug distribution depending on the device used. The percentage of particles reaching the lung was reduced when using spacers and increased when the drug was applied directly into an endotracheal tube.

Abstract
Pressurized metered-dose inhalers (pMDI) with or without spacers are commonly used for the treatment of feline inflammatory airway disease. During traditional airway treatments, a substantial amount of drug is wasted upstream of its target. To study the efficiency of commonly used devices in the transport of inhaled salbutamol, different computational models based on two healthy adult client-owned cats were developed. Computed tomographic images from one cat were used to generate a three-dimensional geometry, and two masks (spherical and conical shapes) and two spacers (10 and 20 cm) completed the models. A second cat was used to generate a second model having an endotracheal tube (ETT) with and without the same spacers. Airflow, droplet spray transport, and deposition were simulated and studied using computational fluid dynamics techniques. Four regions were evaluated: device, upper airways, primary bronchi, and downstream lower airways/parenchyma ("lung"). Regardless of the model, most salbutamol is deposited in the devices and/or upper airways. In general, particles reaching the lung varied between 5.8 and 25.8%. Compared with the first model, pMDI application through the ETT with or without a spacer had significantly higher percentages of particles reaching the lung (p = 0.006).
Introduction
Asthma and chronic bronchitis are common inflammatory airway disorders in cats, as are asthma and chronic obstructive pulmonary disease (COPD) in people. Related computational airway models have been described most commonly in the rat [19] and monkey [20], without clear clinical veterinary applications or mainly for improving human medicine and health. Each type and formulation of drug and delivery device must be optimized for the species and respiratory disorder of interest. The deposition and transport of salbutamol vary and depend on particle size, inspiratory flow velocity, and the design of the devices used, among other factors. The CFD technique has been used to understand pMDI performance, reduce drug waste, and improve the delivery of aerosol particles to the lung considering different inhalers and devices [14]. In addition, the airflow varies throughout the respiratory system: in the upper airways the flow is turbulent (Reynolds number > 2000), whereas as the airways narrow in the lower zones the flow becomes laminar. This also influences the behavior of the particles [7]. The CFD tool allows these flow regimes to be simulated.
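To make the flow-regime distinction concrete, a small Python sketch estimating the Reynolds number for a circular airway segment (the 4 mm diameter is an illustrative assumption, not a measurement from this study; the air properties match the fluid properties quoted later in this paper):

```python
import math

RHO_AIR = 1.185   # air density, kg m^-3
MU_AIR = 1.83e-5  # dynamic viscosity, Pa s

def reynolds_number(flow_m3s, diameter_m):
    """Re = rho * v * D / mu for a circular duct, with v = Q / (pi D^2 / 4)."""
    velocity = flow_m3s / (math.pi * diameter_m**2 / 4.0)
    return RHO_AIR * velocity * diameter_m / MU_AIR

# e.g., a 110 mL/s peak inspiratory flow through a hypothetical 4 mm airway:
print(reynolds_number(110e-6, 0.004))  # ~2.3e3, turbulent by the Re > 2000 criterion
```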
In this study using healthy cats, our first aim was to assess the efficiency of salbutamol pMDIs used with different spacers, masks, and an ETT, comparing particle deposition and transport in the respiratory tract of cats with numerical models. To reach this goal, CT-based CFD models were created to determine particle deposition and/or transport in four regions: device, upper airways (nose to the trachea), primary bronchi, and downstream lower airways/parenchyma (herein referred to as "lung").
Our hypothesis was that by using CFD to model pMDI delivery of salbutamol in healthy cats, most particle deposition would take place in the drug delivery devices and the upper airways. We also hypothesized that particle transport would be enhanced in cats with an endotracheal tube compared with a spacer chamber or a pre-oxygenation mask. This study would help to better understand conditions for optimal pMDI drug delivery in healthy cats as a steppingstone to the investigation of cats with inflammatory airway disease such as asthma and chronic bronchitis.
Animals
Two client-owned cats presented to the teaching veterinary hospital (Centre hospitalier universitaire vétérinaire d'Alfort-Chuva) between September 2019 and July 2020 were enrolled in this study with informed owner consent. One cat, without intubation, was under sedation for a head CT for otitis media, and a second cat was intubated (ETT 4 mm ID) for a front limb CT which included the thorax. The weights of the cats enrolled were 4.25 and 4.5 kg, respectively. CT images were selected for geometrical reconstruction and further simulations.
Ethics
All procedures were conducted as part of normal veterinary clinical practice with owner's consent form (Art. R242-48, Ordre National de Vétérinaire) and approved by the Clinical Research Ethical Committee of the École Nationale Vétérinaire d'Alfort (ENVA), France (Number: 2020-05-30).
Pressurised Metered Dose Inhaler (pMDI) and Devices Selection, CT Images, and Design
Nine different possible combinations of devices were geometrically reconstructed based on CT images or manufacturer data (see Figure 1).
Using the CT study from the cats, a geometrical reconstruction was performed and then used for further analysis. The selected devices were two types of pre-oxygenation masks with spherical (Figure 1a) and conical (Figure 1d) shapes. Then, a spacer of 10 cm length and 4 cm diameter and a spacer of 20 cm length and the same diameter were connected to the two previous masks (Figure 1b,c,e,f). No cat CT images were required for these device geometries. Finally, a CT image of an intubated cat was selected with the aim of subsequently creating a numerical model simulating direct administration of a pMDI through an endotracheal tube (Figure 1g) or through the two spacers (Figure 1h,i).

Figure 1. Pressurized metered-dose inhaler (pMDI) and device models for computational fluid dynamics simulations: (a) pMDI connected to a spherical preoxygenation mask, (b) pMDI connected to a 10 cm spacer and a spherical preoxygenation mask, (c) pMDI connected to a 20 cm spacer and a spherical preoxygenation mask, (d) pMDI connected to a conical preoxygenation mask, (e) pMDI connected to a 10 cm spacer and a conical preoxygenation mask, (f) pMDI connected to a 20 cm spacer and a conical preoxygenation mask, (g) endotracheal tube (ETT) 4 mm ID, trachea, and main bronchus, (h) pMDI connected to a 10 cm spacer and an ETT 4 mm ID, trachea, and main bronchus, (i) pMDI connected to a 20 cm spacer and an ETT 4 mm ID, trachea, and main bronchus. Computational CT-based models with Ansys: a-f, non-intubated cat model; g-i, intubated cat.
Sedation, Multidetector Computed Tomography (MDCT) Protocol and Image Analysis
Cats were fasted for 12 h prior to anesthesia but had free access to water until premedication was administered. Sedation or general anesthesia was adapted to each cat. Cats were spontaneously breathing and placed in sternal recumbency with the head elevated and the neck fully extended. Non-contrast-enhanced MDCT examinations were carried out using a 64-detector-row CT system (Brilliance 64; Philips, Amsterdam, The Netherlands) before the programmed CT study. The CT images were obtained using a matrix of 768 × 768, a tube voltage of 120 kV, a tube current of 196 mA, a display field of view of 35 cm, and a pitch of 0.5. Images for the study were acquired first from the nostrils to the most caudal border of the lungs [21]. One-millimeter-thick images were reconstructed using a high-resolution algorithm. At the conclusion of the CT examination, the cat was supervised until completely recovered.
Images were reviewed using a commercial DICOM (Digital Imaging and Communications in Medicine) viewer (Horos v.1.1.7, 64-bit, Horos™, Brooklyn, NY, USA) using a lung window (window width (WW): 1600; window level (WL): −550). A board-certified radiologist evaluated the images for major abnormalities involving the airways.
Geometrical Reconstruction and Numerical Discretization
The DICOM files derived from the CT studies were imported into the image-based geometry reconstruction software (MIMICS, Materialise Software, Leuven, Belgium; Figure 2).
Manual reconstruction of the upper airway (defined as the nasal cavity, nasopharynx, oropharynx, larynx, and trachea) and primary bronchi (also called principal or mainstem bronchi) geometry was conducted for the non-intubated cat.
By means of the commercial software (Ansys IcemCFD, v.20, Ansys Inc., Canonsburg, PA, USA), device volumes and airways were filled with tetrahedral elements that composed the computational mesh in which the airflow governing equations, droplet formation, and droplet trajectories were solved. In Figure 3, a grid is shown for the spacer of 10 cm and the spherical mask using sections and corresponding images.
Finally, the two different masks of conical and spherical shape and the two lengths of spacers described previously were connected to the nose of the reconstructed cat model. The devices were created and attached to the patient-specific cat model in the commercial software package Rhinoceros (release 5, Robert McNeel and Associates, Seattle, WA, USA). The final model of the non-intubated cat is depicted in Figure 3.

Figure 3. Representation of the numerical discretization in a non-intubated feline CT-based model of pressurized metered-dose inhaler (pMDI) drug delivery. The airway volumes are filled with three-dimensional elements (tetrahedrons). In this example, a pMDI is connected to a 10 cm spacer and a conical preoxygenation mask.
The same procedure was used for the manual reconstruction of the intubated cat. In this case, only the trachea and the first generations of the respiratory tract were segmented, and the tracheal tube and the inhaler were separately generated and attached in Rhinoceros. The models are represented in Figure 1. Here, the geometry is much less complex than that of the other cat, as it starts directly from the trachea, bypassing the entire nasal-to-laryngeal airway tract. Thus, the numerical grid contains far fewer elements, resulting in lower computational costs. The volume was filled with three-dimensional elements (tetrahedrons; see Table 1), and the total number of elements varied depending on the presence or absence of spacers and the type of spacer. The number of tetrahedrons in the airway geometry of the non-intubated cat was approximately 18 million. The numbers of elements for the 10 cm and 20 cm spacers were approximately 3 and 6 million, respectively, and for the spherical and conical masks approximately 2.5 and 5 million, respectively. Prior to final computations, a mesh independence study was carried out in order to assess the dependence of the results on the grid size. Details of the study are provided in Appendix A.
CFD Analysis
Once the devices and airways volumes were filled with tetrahedrons, the resultant numerical grids of the cats were imported into a simulation software package (Ansys CFX, v.20, Ansys Inc., Canonsburg, PA, USA). This software solves the Navier-Stokes equations that describe flow motion in different conditions within the geometrical grids using numerical algorithms. In particular, the Ansys CFX software adopts the finite volume method. The exact mathematical formulation and the solving algorithms used by Ansys CFX are provided in the software manual (Ansys, 2020).
The peak inspiratory/expiratory flow of the non-intubated cat was 110 mL/s [22] and of the intubated cat, 30 mL/s. The flow of the intubated cat was obtained by means of a thermal anemometry ("hot wire") flow sensor (Dräger Julian Anaesthesia Machine, Lübeck, Germany). The peak inspiratory flow was imposed at the top of the pMDI (see Figure 4). The flow was considered turbulent, and the k-ω model was used. An initial turbulence intensity value of 5% was adopted.
A respiratory cycle of 3 s (1 s inspiration, 2 s expiration) was selected, and 7 respiratory cycles (21 s in total) were computed using a time step of 0.01 s. Fluid properties of air density 1.185 kg m−3 and dynamic viscosity 1.83 × 10−5 Pa·s [21] were used. The pMDI modeling included a dose of 100 µg per puff, an initial droplet diameter of 10 µm, a velocity of 150 m/s, and a spray angle of 20° [23,24]. The pMDI puff was applied just before the first inhalation waveform started in order to simulate clinical conditions.
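Collected as a plain configuration sketch (the values are transcribed from this section; the dictionary layout itself is illustrative, not taken from the solver setup files):

```python
# Unsteady simulation settings transcribed from the text above.
SIMULATION = {
    "cycle_s": 3.0,             # 1 s inspiration + 2 s expiration
    "n_cycles": 7,              # 21 s simulated in total
    "time_step_s": 0.01,        # 2100 steps overall
    "air_density_kg_m3": 1.185,
    "air_viscosity_Pa_s": 1.83e-5,
}

PMDI = {
    "dose_ug_per_puff": 100.0,
    "droplet_diameter_um": 10.0,
    "spray_velocity_m_s": 150.0,
    "spray_angle_deg": 20.0,
}
```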
The pMDI used in the model consists of a canister connected to a metering valve capable of producing required dosages of 25 to 100 µg. The user controls the release of the drug through the actuator nozzle, which generates a metered liquid dose. We used a nozzle orifice of 0.5 mm diameter located at the center line of the canister, in the center of the mask or the spacer (Figure 4). The injected particles were then tracked through the geometric regions until they met one of three specific conditions: (1) they collided with and were trapped on the mask or spacer tube and/or on the airway walls, (2) they escaped from the domain through one of the outlet geometry faces, or (3) they remained in suspension in the flow. Suspension means that particles are still in a hold-up and could subsequently be exhaled or impact an airway wall during further breathing. The numerical models allow computing the flow velocity and structures inside each cat's airways and devices. The structure of the flow was depicted using 3D streamlines, while the flow intensity was represented using a heatmap.
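The bookkeeping behind these three particle fates, together with the deposition percentage defined later (deposited/injected × 100), can be sketched as follows; this is a simplified post-processing illustration with assumed fate labels, not Ansys CFX API calls:

```python
from collections import Counter

def summarize_fates(fates, n_injected):
    """Turn a list of per-particle fate labels into percentages of injected particles."""
    counts = Counter(fates)
    return {fate: 100.0 * n / n_injected for fate, n in counts.items()}

# Hypothetical fates for the 2000 particles injected per simulation:
fates = (["deposited:device"] * 1200 + ["deposited:upper_airways"] * 500
         + ["escaped_to_lung"] * 250 + ["suspended"] * 50)
print(summarize_fates(fates, n_injected=2000))
# {'deposited:device': 60.0, 'deposited:upper_airways': 25.0,
#  'escaped_to_lung': 12.5, 'suspended': 2.5}
```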
All computations were performed on an Intel i9 workstation and parallelized on 8 processors. The computational cost of every single simulation is given in Table 1. Further details of the particle modeling and the independence studies are described in Appendices A and B [25][26][27].
Data Analysis
Airflow streamlines (peak inspiratory flow structures) and droplet deposition were represented using heat maps and described qualitatively for the various conditions in both models. Percentages of particles deposited on the devices, deposited in the upper airways, and transported to the lung, as previously defined, were reported for all conditions in each model. Further data analysis used IBM® SPSS® Statistics software version 23 (Chicago, IL, USA). A Shapiro-Wilk test was used to test for the normal distribution of the percentages of particles deposited in the aforementioned three regions and in suspension for further comparisons. A t-test for independent values was used to compare the means and SD of the delivery percentages of particles to devices, to upper airways, to the lung, and in suspension between the two models (i.e., the non-intubated cat and the intubated cat). Values of p < 0.05 were considered statistically significant, with a 95% confidence interval (CI).
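An equivalent open-source sketch of this analysis using SciPy (the array contents are placeholders only; the study itself used SPSS):

```python
import numpy as np
from scipy import stats

# Placeholder percentages of particles reaching the lung for each device
# configuration in the two models (illustrative values, not study data).
non_intubated = np.array([5.8, 9.1, 12.4, 14.0, 15.5, 16.2])
intubated = np.array([19.7, 22.9, 25.8])

# Normality check on each group, then an independent-samples t-test (alpha = 0.05).
print(stats.shapiro(non_intubated))
print(stats.shapiro(intubated))
print(stats.ttest_ind(non_intubated, intubated))
```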
Results
Using CT scans from two cats, nine different combinations of the model devices were simulated. Airflow streamlines, which represent the flow direction colored by velocity intensity (red = high velocity, dark blue = low velocity), are represented for all simulation models in Figure 5 at peak flow during inspiration. In the case of the spherical mask, the presence of the spacer seemed to have less influence on the flow: the recirculating airflow structure appeared similar regardless of the presence of the spacer (Figure 5a-c). This may have been due to the geometry of the mask; the spherical mask was shorter than the conical mask, producing a flow recirculation close to the cat nose in all cases. In general, the presence of the spacers greatly influenced the flow in the masks, especially in the case of the conical mask (Figure 5d-f). The flow recirculation appearing in the conical mask in the presence of the spacer was largely reduced when the spacer was not attached. In fact, the air jet generated at the entrance of the conical mask was instead generated at the entrance of the two spacers when these were connected; hence, the flow velocity was reduced and the flow structure modified accordingly. The flow structure inside the spacer appeared similar across the models, independent of spacer length. After the skewed inlet at the inhaler ring, the airflow in the spacer underwent a sudden expansion at the spacer entrance, generating a local recirculation near the spacer walls. The airflow velocity increased again at the entrance of the mask and in the oral cavity and further accelerated in the trachea because of the constriction at the larynx. At this stage, the flow was completely turbulent.

The distribution of inhaled airflow was heterogeneous within the cat nasal cavity due to the complex geometry of the airways. The initial high-speed flow in these regions was slowed down in the dorsal meatus (Figure 5), progressively becoming a low-speed flow in the ventral regions (blue regions of Figure 5). Then, the flow was directed to the nasopharynx and laryngeal regions, where local acceleration occurred due to the physiological constriction represented by the larynx. Finally, the flow moved forward and divided into the lower airways through the bifurcations and generations from the carina.

The airflow in the intubated cat was more homogeneous in comparison to that of the non-intubated cat (Figure 5g-i). As the inhalation proceeded from the endotracheal tube, the airflow entered directly into the trachea and was distributed to the airways with a lower velocity, as the cat was anesthetized. No influence on the flow patterns was visible when the spacers were added to the tracheal tube. However, important variations were expected regarding particle inhalation.
The percentages (%) of particles reaching the lung, attached to the device and/or upper airways, or remaining in suspension after seven respiratory cycles (I:E ratio 1:2, inspiratory time 1 s; see Table 2) are represented in Figure 6. The particle deposition percentage was computed as the ratio of deposited to injected particles times 100. These percentages are summarized for the inhaler, the spacers, and the masks in Table 3; they represent the amount of drug that is not able to reach the lungs. The particles deposited on the muzzle or in the upper airways (nasal turbinates, oropharynx, larynx, and trachea) are represented in Table 4. In the Supplementary Materials, the unsteady behavior of the droplets injected through the salbutamol pMDI is visualized during the breathing cycles (see Videos S1-S4).
The particle deposition in the devices was 2.6 times higher when a conical mask was used compared with a spherical mask alone. The drug deposition was twice as high when using a 20 cm spacer, and 0.7 times higher when a 10 cm spacer was used, with the conical mask compared to the spherical mask attached to the same spacers. However, the deposition on the muzzle was 1.6 times higher when using the spherical mask with respect to the conical one. Finally, 66.41% of the particles were deposited before reaching the upper airways with the conical mask and 48.96% with the spherical mask when they were used alone. When using these masks with the 10 cm spacer, which, as noted, is specifically designed for cats, the particle deposition in the regions before the upper airways was found to be 22.65% and 26.85% (CM and SM, respectively). When a human pediatric spacer of 20 cm was used, these percentages were 41.35% (CM) and 30.15% (SM).
Mean and standard deviation (±SD) of the percentage of particles deposited in the devices, in the upper airways, and reaching the lungs (exiting the computational model) when using a mask or an ETT, with or without a spacer, are represented in Table 5.

Table 5. Percentage of particle distribution and deposition when using a mask or an ETT, with or without a spacer.
Mean and standard deviation (±SD) of the percentage of particles deposited in the devices, in the upper airways, and reaching the lung when a spacer was added or not are represented in Table 6.
In Figure 7, the salbutamol droplet deposition patterns are represented on the geometrical model surfaces. The pattern provided a qualitative picture of the regions affected by high or low salbutamol concentration. Droplets visibly tended to deposit in non-uniform ways, with a tendency to deposit especially in the upper mask region (see Figure 7a,b). When the spacer was introduced into the model, the deposition on the mask was slightly reduced, and a considerable amount of salbutamol was concentrated on the spacer surface (see Figure 7c-f). Salbutamol deposition on the muzzle was enhanced when using the spherical mask. In any case, salbutamol loss due to deposition was not significantly different between particles attached to the devices or to the upper airways (p = 0.428 and p = 0.089, respectively).
Discussion
The present study aimed to assess the transport, distribution, and deposition of 10 µm salbutamol particles in the upper and lower airways of cats by means of the CFD technique. The pMDI application through the ETT, with or without a spacer, had significantly higher percentages of particles reaching the lung compared with the non-intubated model. In humans, similar studies have been used to improve aerosol delivery techniques with similar aims, and the use of the CFD technique is well established [15,23,28,29]. However, compared with humans, the anatomy of cats' upper airways is more complex and thus challenging for this technique. Studies assessing the deposition of aerosol clinical therapies in small animals are limited and based on the use of scintigraphy in dogs [30]. The use of pertechnetate (99mTc) delivered via a spacer and face mask apparatus in conscious cats has been published [31], but this technique employs radioisotopes and gamma rays, which is more invasive and may be toxic for the patient. Less invasively, Leemans et al., 2009, compared the duration of action and the efficacy of β2-agonists in cats with induced bronchoconstriction using barometric whole-body plethysmography (BWBP) [32]. The advantage of the numerical tool used here is that it is a non-invasive technique.
We simulated the bronchodilator most commonly prescribed at home and used in clinics, salbutamol, with a concentration of 100 µg per puff (Ventolin, GlaxoSmithKline plc (GSK), Brentford, UK). It is the same product, with exactly the same dosage, as the one used in humans (average weight of 70 kg). It is unknown what plasma level is required to achieve effective bronchodilatation in cats, and toxicity trials with aerosolized formulations in cats are missing. Leemans et al., 2009, are the only group that studied the effects of and justified a dose for salbutamol pMDI. Their study recommended a single dose of 100 µg, with a peak effect around 15 min post administration and an antispasmodic effect in the airways lasting up to 4 h. Two- or four-fold increments of the drug dosage only slightly improve the bronchodilatory effects. Similar results were reported in humans [33,34]. Consequently, in this study, a single dose of 100 µg was simulated, with a fixed peak inspiratory flow for the non-intubated and intubated models. Treatment recommendations vary regarding the time of application and/or the number of breaths after activation of the pMDI [34]. Some companies recommend allowing the cat to breathe through the mask for 10 to 15 s or to take 5 to 10 breaths after activation of the pMDI (Trudell Medical International, London, ON, Canada). In this study, seven respiratory cycles for a total of 21 s (physiological I:E ratio of 1:2) were imposed, since we wanted to allow enough time to make sure that all the particles were distributed into the model. Even so, in most cases there was still a percentage of particles that remained in suspension, except for direct pMDI endotracheal tube (ETT) administration. In this case, after less than one complete respiratory cycle (after 1.1 s), all particles were already distributed. It is also worth mentioning that for the conical mask, only 2% of the particles remained in suspension. Hence, with the ETT and with the conical mask alone, particles were transported faster.
The peak inspiratory/expiratory flow was lower in the simulated intubated cat (30 mL/s) compared with the non-intubated cat (110 mL/s). Significantly greater percentages of particles reaching the lung were found in this study when applying the pMDI directly into the ETT (19.7-25.8%) compared with the other techniques (5.75-16.2%; p = 0.006), as predicted in our hypothesis. In humans, the serum concentrations of an inhaled drug are greater in intubated patients, as there is no oropharyngeal deposition, no enteral absorption, and better lung deposition [35]. One-third of the total aerosol output per pMDI puff is expected with a smaller ETT size (4 mm ID, as was used in the current study) compared with an ETT of >5 mm ID [36]. Side effects after β2-agonist administration directly into the ETT during general anesthesia have been described in horses: specifically, sinus tachycardia, premature ventricular complexes and/or hypotension [37], and sweating [38] were observed after the administration of aerosolized salbutamol. As there are no reports in cats, the exact plasma concentration at which bronchodilation is expected versus that at which side effects will appear is unknown, and thus this practice should be employed with caution. Side effects in cats may include tremors, central nervous system (CNS) excitement, vomiting, mydriasis, and/or dizziness [39].
Flow velocities and patterns affect particle behavior and hence their deposition and transport [40]. It has been suggested that the use of a spacer might reduce particle velocity from the pMDI and may allow a better distribution [8]. Other studies found that an add-on spacer with a pMDI may result in a considerable reduction in the number of respirable particles available to the patient [41]. The clinical significance of this effect is not well established in humans [7,42]. Although a spacer was specifically designed for cats (AeroKat®, Trudell Medical International, London, ON, Canada), some veterinarians and/or clients still choose the option of pediatric/baby spacers. These spacers are usually longer and may or may not have a one-way valve (valved holding chambers, VHC). The dose delivered may vary considerably between spacers, and this has to be considered when changing from one spacer to another [43]. For this reason, we included both types of spacers in the current study. Our results suggest that no benefit is produced by the use of one or the other type of spacer (Figure 5), independently of the type of mask. The percentage of particles reaching the lung was similar in the presence and in the absence of a spacer (p = 0.757). This may be due to the flow structures inside the masks, which revealed high recirculation. The Global Initiative for Asthma (GINA) recommends the use of a pMDI and a dedicated spacer/VHC with a facemask for children aged 4 years and younger, and a pMDI plus a dedicated spacer/VHC with a mouthpiece for children aged between 4 and 6 years. In comparison, for adult humans, no mask is added to the spacer, to maximize the spacer's function of reducing particle velocity while improving particle transport [23]. No guidelines or consensus around this topic exist in veterinary medicine. In this study, more particles were deposited in the masks when the spacer was not added, probably because the spray nozzle was nearer (see Tables 3 and 4). In any case, more particles remained in suspension when using a spacer (p = 0.037); that is, even after 21 s, there were still particles that had not reached any structure or the lung. It is possible that when using a spacer, more time is needed for the salbutamol to fully reach the lungs. However, the total number of particles transported to the lungs was similar between all groups.
The percentage of particles reaching the lung using a pMDI in humans varies considerably between studies. Drug deposition of 11 to 14% was reported in ambulatory patients using radiolabeled aerosols [34]. Other studies suggest that less than 10% of the inhaled bronchodilator finally reaches the target area when using a pMDI alone [44,45], and from 10% to 38% with the use of the pMDI plus a spacer [46]. In a computational study using a salbutamol pMDI with a spacer attached to a human upper airway model and a flow velocity of 30 L/min, 52.9% of particles traveled to the lung [23]. In the present study, the percentage of particles reaching the lung was higher than 10% in most of the cases and varied between 5.75 and 25.8% depending on the device used. Most of the studies considered particle sizes from 1 to 10 µm [29,40]. In humans, it was reported that deposition also depends on particle size [29]: while nanoparticle deposition tends to increase when the particle size decreases (<0.1 µm), deposition of microparticles tends to increase when the size increases (>1 µm).
Reduced lung deposition percentages will be expected in clinical situations, where cats do not tolerate excessive handling, particularly when respiratory distress is present. This is in comparison to the current study's models in which the devices fit perfectly with the cat's anatomy, and all the particles are utilized. The deposition fractions summarized in Table 3 suggest that spherical masks may enhance particle transport into the lungs compared to conical masks. However, in the presence of the spherical mask, the deposition in the rostral area, on the muzzle, and in the nasal passage is higher than in the presence of the conical mask (see Table 4). Additional studies, including a greater number of cat models, are needed for a better comparison between masks.
Other factors that may modify particle lung deposition when using pMDI aerosol in ventilated humans are the ventilator mode, settings, and circuit [47], and synchronization of pMDI application with breaths, among other factors [48]. In addition, the therapeutic response may be influenced by the patient's airway anatomy and disease severity [49]. Further studies are needed to determine all these factors in cats.
The results that can be extracted from the computational analyses have to be considered carefully, as computational models in general are affected by unavoidable limitations. The main limitation of this study was that an ideal, perfectly fitting mask with or without a spacer was simulated, in which 100% of the particles are directed to the cat airways; this may not reflect reality. Another limitation concerned the limited number of animals, and further studies including additional animals are necessary. However, the aim of the study was to compare different devices and not different cat anatomies; for this reason, only two cats were used to develop the models. Finally, only particles of 10 µm size were simulated. Smaller particle diameters should also be investigated, as their behavior strongly depends on size, as was demonstrated in humans [29].
Conclusions
Using CT scans to develop feline models and CFD to investigate salbutamol transport and deposition, this study determined that most of the particles deposit in the devices and/or upper airways before reaching the lung. Direct administration of the pMDI through the ETT resulted in the largest and fastest lung deposition compared with the rest of the devices. The use of a pediatric spacer versus a spacer specifically designed for cats did not impact droplet transport. Further studies using a larger number of cats, patient-specific airflows, and different particle sizes are warranted to investigate which delivery devices may have better and safer performance.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ani11082431/s1, Video S1: Unsteady simulation of the particle deposition and transport within the non-intubated cat model with spherical mask. Video S2: Unsteady simulation of the particle deposition and transport within the non-intubated cat model with conical mask. Video S3: Unsteady simulation of the particle deposition and transport within the non-intubated cat model with spherical mask and spacer of 10 cm length. Video S4: Unsteady simulation of the particle deposition and transport within the non-intubated cat model with conical mask and spacer of 10 cm length.
Data Availability Statement:
The data generated during the current study are not publicly available because they are part of a national research project. However, some data could be available from the corresponding author on reasonable request.
Acknowledgments:
The authors acknowledge the statistical support provided by V. Herrería Bustillo. The support of the Institute of Health Carlos III (ISCIII) through the CIBER-BBN initiative is highly appreciated.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Appendix A

Appendix A.1. Governing Equations
The fluid dynamics model presented in this study is based on the work presented by Kleinstreuer and co-workers [23] and Inthavong and co-workers [24] in humans, and it is well known and widely used for numerical simulations of particle transport and deposition in the human airways. For this reason, here we provide only a summary of the main features of the model, and additionally, validation with the literature data is presented.
The fluid dynamics of the cat breathing, subject to turbulence and specific boundary conditions, were solved simultaneously with the droplet formation and particle trajectories. The latter were solved using Newton's second law, governed by drag and gravity as the major forces for dispersed droplets in a low-density gas flow [23]. For solving the viscous and incompressible airflow in the cat airways, the well-known Reynolds-averaged Navier-Stokes equations were used:

$$\nabla \cdot \bar{v} = 0$$

$$\frac{\partial \bar{v}}{\partial t} + (\bar{v} \cdot \nabla)\bar{v} = -\frac{1}{\rho}\nabla \bar{p} + \nabla \cdot \left[ \nu_{eff} \left( \nabla \bar{v} + (\nabla \bar{v})^{T} \right) \right]$$

where $\bar{v}$ is the averaged fluid velocity vector, $\rho$ is the gas density, $\bar{p}$ is the modified pressure, and $\nu_{eff} = \nu + \nu_t$ is the effective kinematic viscosity ($\nu$ is the gas kinematic viscosity and $\nu_t$ the turbulent kinematic viscosity). The kinematic and turbulent kinematic viscosities are related through $\nu_t = k/\omega$. The turbulence was modeled using the Wilcox k-ω model [26]. This model predicts the turbulence using two partial differential equations for the variables $k$ and $\omega$, where $k$ represents the turbulence kinetic energy and $\omega$ the rate of dissipation of the turbulence kinetic energy:

$$\frac{\partial k}{\partial t} + (\bar{v} \cdot \nabla) k = P_k - \beta^{*} k \omega + \nabla \cdot \left[ \left( \nu + \frac{\nu_t}{\sigma_k} \right) \nabla k \right]$$

$$\frac{\partial \omega}{\partial t} + (\bar{v} \cdot \nabla) \omega = \alpha \frac{\omega}{k} P_k - \beta \omega^{2} + \nabla \cdot \left[ \left( \nu + \frac{\nu_t}{\sigma_\omega} \right) \nabla \omega \right]$$

where the turbulent production term can be obtained as $P_k = \nu_t \, \nabla \bar{v} : \left( \nabla \bar{v} + (\nabla \bar{v})^{T} \right)$, and $\alpha = 5/9$, $\beta = 0.075$, $\beta^{*} = 0.09$, and $\sigma_k = \sigma_\omega = 2$ are the turbulence model constants [26].
Appendix A.2. Model of the Salbutamol Spray
The droplet spray was simulated with the gas-liquid interactive Enhanced Taylor Analogy Breakup (ETAB) model [23]. With this model, a particle exposed to gas flow is subjected to a large drag force by the surrounding gas if a velocity difference between particle and gas exists. The drag force produces droplet deformation and breakup according to the Weber number (the ratio of the inertia force to surface tension):

$$We = \frac{\rho v^{2} r}{\sigma}$$

where $\rho$ is the air density, $v$ is the relative velocity between air and particle, $r$ is the particle radius, and $\sigma$ is the coefficient of liquid surface tension. This non-dimensional ratio connects droplet deformation and the liquid surface tension. Particle trajectories are determined by CFX using Newton's second law in the presence of drag force and gravity:

$$m_p \frac{d v_p}{dt} = \frac{\pi}{8} \rho \, d_p^{2} \, c_{Dp} \, |v - v_p| (v - v_p) + m_p g$$

where $v_p$ is the particle velocity, $m_p$ is the particle mass, $\rho$ is the gas density, $g$ is the gravitational acceleration, $c_{Dp}$ is the drag coefficient, and $d_p$ is the particle diameter. The drag coefficient $c_{Dp}$ can be obtained with the relation

$$c_{Dp} = \frac{c_D}{C_{slip}}$$

where $C_{slip}$ is the Cunningham correction factor and $c_D$ is the sphere drag coefficient, a function of the particle Reynolds number

$$Re_p = \frac{|v - v_p| \, d_p}{\nu}$$

where $\nu$ is the kinematic viscosity of the gas. Particles of 10 µm were injected through the nozzle orifice with the aforementioned velocity and the described spray properties and modeling.
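An illustrative sketch of this Lagrangian force balance, integrated with an explicit Euler step; the Schiller-Naumann drag correlation is substituted here as a common choice, since the exact c_D expression used by CFX is not reproduced above, and the initial conditions are assumptions:

```python
import numpy as np

RHO_G = 1.185     # gas density, kg m^-3 (as used in the main text)
NU_G = 1.544e-5   # gas kinematic viscosity, m^2 s^-1 (mu/rho)
RHO_P = 1000.0    # droplet density, kg m^-3 (assumed)
G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration, m s^-2

def drag_coefficient(re_p):
    # Schiller-Naumann correlation (assumed stand-in, valid for Re_p < ~1000);
    # the Cunningham slip correction is neglected (C_slip ~ 1 for 10 um drops).
    return 24.0 / re_p * (1.0 + 0.15 * re_p**0.687)

def euler_step(v_p, v_gas, d_p, dt):
    """One explicit Euler step of m_p dv_p/dt = F_drag + m_p g."""
    rel = v_gas - v_p
    speed = np.linalg.norm(rel)
    m_p = RHO_P * np.pi / 6.0 * d_p**3
    re_p = max(speed * d_p / NU_G, 1e-12)  # avoid division by zero
    f_drag = np.pi / 8.0 * RHO_G * d_p**2 * drag_coefficient(re_p) * speed * rel
    return v_p + dt * (f_drag / m_p + G)

# A 10 um droplet injected at 150 m/s into still air decelerates rapidly:
v = np.array([150.0, 0.0, 0.0])
for _ in range(300):
    v = euler_step(v, np.zeros(3), 10e-6, 1e-6)
print(v)  # the slip velocity relaxes toward the gas velocity
```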
Appendix A.3. Mesh and Particle Independence Study
Due to the complex geometry of the cat's upper airways, unstructured tetrahedral computational grids were created using the commercial software Ansys ICEM CFD, Version 20.0 (ANSYS Inc., Canonsburg, PA, USA). Tetrahedral elements filled the 3D airways volumes with a variable number and dimensions of elements. For this reason, to ensure independent results from the selected mesh element size, a mesh independence study was carried out. For this study, we considered the biggest geometry volume that is the cat model with the conical mask and the spacer of 20 cm. The model volume was discretized with an increasing number of elements, and by using each generated mesh, a CFD solution was obtained. We tested several different grids from 10 to 50 million elements. After the analysis of the velocity profiles at different geometrical cross-sections, it was clearly seen that from a grid size of around 30 million elements, the results were grid-independent.
The final model contains about 15 million tetrahedrons in the cat airways, about 6 million tetrahedrons in the spacer, about 5 million tetrahedrons in the mask, and about 4 million for the remaining model objects. As the mesh independence study established the necessary element size per region, the other geometries used the same discretization technique; of course, depending on the presence or absence of the spacer and its length, and on the mask and its type, the global grid size changed. In other words, all geometries were previously subdivided into regions in which the tetrahedron size could be specified, and the grids for all the analyzed cases used the same regional element sizes obtained from the independence study. As the mesh independence study was carried out on the biggest geometry (the 20 cm spacer and the conical mask are the biggest devices), the grid sizes of the rest of the models were smaller. In particular, the grid of the spherical mask was composed of about 2.5 million tetrahedrons, and that of the 10 cm spacer of about 3 million tetrahedrons. The element sizes per model used for the presented simulations are reported in Table 1.
A particle number-independence study was also conducted in order to verify that 2000 injected particles were statistically sufficient for describing the salbutamol transport and deposition. Since each of the 2000 injected particles is representative of a certain number of physical particles, it had to be proved that the selected number correctly describes the physical phenomenon under study. The geometry selected for this purpose was the same as that used for the mesh independence study (20 cm spacer and conical mask). The CFD simulation was repeated using the same grid size but releasing different numbers of particles, and the particles exiting the geometries and depositing on the different regions of the airways were systematically collected while increasing the injected number up to 200,000. No relevant changes in the percentages were observed.
Validation of the Numerical Methodology with Literature Data
To validate the results obtained with the presented simulations, the computational methodology used for creating the in silico models was benchmarked against literature data. With this aim, and because no information is available in cats, spherical microparticle deposition was studied in a human upper airway geometry and compared with experimental and computational literature data (see below). For this, we used the typical idealized "path lung model" [26], widely known and used in human computational aerosol studies. The comparisons for this model, at different breathing rates and for different particle diameters, are shown in Figure A1. In particular, particle diameters of 1, 5, and 10 µm were injected into the model through the mouth at light, normal, and heavy breathing conditions (i.e., 15, 30, and 60 L/min). Steady inspiratory flows were computed, and particles deposited in or exiting from the model were collected. The results show that the model set-up is correct, as it is capable of reproducing the results presented by Zhang et al. [29]. Additionally, the human upper airways in the presence of the pMDI and of the aerosol spray were simulated. The deposition fractions within the upper airways and the fraction of particles reaching the lung were compared with the results obtained by Yousefi et al. [24], who present a model of the airflow, droplet spray transport, and deposition for a pMDI attached to a human upper airway model, considering Ventolin as the propellant. The model described in the present work includes the human oral cavity described in the previous Section B1 and a trachea truncated at its bottom. A pMDI model, as described by Kleinstreuer et al. [23], was attached to the mouth or to the model entrance (see Figure A2).
The following boundary conditions were given to the model: an inspiratory flow rate of Q = 30 L/min and a turbulence intensity of 5% for the turbulence kinetic energy were applied at the canister ring (see Figure A2); a pressure condition was applied at the outlet (bottom section of the trachea); and perfect absorption of particles was assumed at the model walls. At the inhaler nozzle, an initial droplet diameter of 10 µm was assigned, with a spray velocity of 110 m/s and a spray angle of 10° [24]. Finally, the Ventolin density used was 1230 kg m−3, with an actuation dose of 100 µg [24]. Again, deposited particles were collected in the oral tract (mouth + soft palate + pharynx + larynx + trachea), and particles traveling to the lung were computed. The comparison with the data presented by Yousefi et al. demonstrates that the methodology used in the present work is correct and that the model is capable of reproducing the literature results.

Figure A2. Computational human model with human upper airway tract and pMDI used for validation (a) and regional deposition fractions (DF; in %) computed and displayed in comparison with the literature data (Yousefi et al., 2017) for an inspiratory flow Q of 30 L/min (b). In (c), the velocity distribution inside the oral airways is represented by means of velocity streamlines.
Novel applications of photobiocatalysts in chemical transformations
Photocatalysis has proven to be an effective approach for the production of reactive intermediates under moderate reaction conditions. The possibility for the green synthesis of high-value compounds using the synergy of photocatalysis and biocatalysis, benefiting from the selectivity of enzymes and the reactivity of photocatalysts, has drawn growing interest. Mechanistic investigations, substrate analyses, and photobiocatalytic chemical transformations will all be incorporated in this review. We seek to shed light on upcoming synthetic opportunities in the field by precisely describing mechanistically unique techniques in photobiocatalytic chemistry.
Introduction
Considering that it is secure to use and easily accessible, light is the ideal energy source for environmentally friendly chemical synthesis. 1[4] Photocatalysis and biocatalysis (photobiocatalysis) [5][6][7][8][9][10] have attracted a lot of attention in the eld of catalysis due to features that make these chemical synthesis methods more effective and environmentally sustainable.2][13][14] In contrast, photocatalysis has become a potent method that uses visible light excitation to reach particular reactivities, via the formation of open-shell intermediates, that are inaccessible via thermal activation modes.While in the past organic compounds were directly activated using ultraviolet (UV) light, modern photochemical activation approaches depend on the selective excitation of photocatalysts with visible light and can prevent the harmful destruction of organic Praveen P: Singh Praveen P. Singh is an Assistant Professor in the Department of Chemistry at the United College of Engineering and Research, Prayagraj, India.He obtained his BSc and MSc degrees in Organic Chemistry from T. D. P. G. College (V.B. S. Purvanchal University) Jaunpur and his DPhil.from the Department of Chemistry, University of Allahabad, India.His current research interests include the development of synthetic receptors for the recognition of biological target structures and the application of visible light chemical photocatalysis towards organic synthesis as well as nanophotocatalysis.
molecules brought on by using high-energy UV light [17–21]. Replicability and reproducibility challenges are frequently raised by researchers in novel fields of research, including in photobiocatalysis. Studies are often conducted by a select group of laboratories utilising their own in-house solutions to run and investigate reactions. It is therefore vital to provide criteria for reporting the technical and chemical specifics of photobiocatalytic reactions as the subject advances to broader applicability. Since these guidelines have been shown to be significant for both biocatalysis and photocatalysis, we emphasise here the characteristics that are particularly relevant to photobiocatalysis (Fig. 1) [22–26]. Because enzymes are highly evolved for specific biological functions, their limited synthetic potential can be addressed by providing them with additional reactivity by using photoactive cofactors like flavin or nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) or photocatalysts like ruthenium and iridium organometallic complexes, organic dyes, etc. 27,28 The protein framework of enzymes, in turn, can provide a regulated environment that offers possibilities to direct reactive intermediates toward the required stereo- and chemo-selective outcome, a current issue for conventional photocatalysts. 29 The problems of compatibility between enzymes and photocatalysts, which arise because the two work under different reaction conditions, and the mismatch between the kinetics of the photogenerated reactive intermediates and enzyme catalysis are obstacles to merging the two types of catalysis. Furthermore, it is difficult to employ protein engineering techniques for the improvement of photoenzymatic systems due to the challenges in achieving uniform illumination and an inert atmosphere for a significant number of samples. 30,31 The link between the photochemical process and the enzymatic transformation will be used to group the reports [33–37] in this review. We will clarify crucial terms and concentrate on subtle mechanistic differences that set several techniques apart from one another. We aim to add to the understanding of how light can be employed in biocatalytic synthesis by describing the mechanistic differences between various techniques.
The first technique focuses mostly on photoenzymatic catalysis, where a cofactor inside a protein active site is photoexcited and facilitates the electron or energy transfer necessary to change a starting material into a finished product (Fig. 3a). Synergistic photoenzymatic catalysis is addressed in the second approach (Fig. 3b). In these processes, an external cofactor is excited to enable a chemical change within a protein active site. The third procedure incorporates tandem photocatalyst/enzyme reactions (Fig. 3c). In these transformations, the photochemical reaction takes place in the presence of the enzyme but is not a part of the mechanism by which the enzyme transforms raw materials into the end products. This procedure is further divided into (i) processes that use light to regenerate cofactors and (ii) reactions where the enzyme substrate is altered by photoexcitation. The fourth method addresses natural photosynthesis and enzymatic processes. Through the utilisation of cyanobacteria, these systems generate NADPH, which enzymes can use to reduce substrates (Fig. 3d). This concise review deals with photobiocatalysis as an effective method in energy production and chemical manufacture and is a follow-up to our work on photocatalyzed organic synthesis, 38,39 paying particular attention to the most recent and noteworthy developments in this field. Those already working in the field of photoredox-catalysed synthesis and interested in the usage of photobiocatalysis-driven chemical and energy production may find this review to be particularly useful.
Photobiocatalytic H2 production
The long-term goal of researchers, and a potential remedy for energy problems, environmental degradation, and global warming, is to transform renewable solar energy into fuels and value-added chemicals. In order to efficiently capture and store solar energy in chemical bonds, photosynthetic biohybrid systems (PBS) can utilise both artificial semiconductor materials with high solar conversion efficiency and living cells with high product selectivity. 40 An intriguing approach for producing H2 has been established through photobiocatalysis. Kosem et al. 41 have examined the natural properties of biocatalysts and the significance of each component in determining the effectiveness of the system. Tris (2-amino-2-(hydroxymethyl)-1,3-propanediol) was shown via photocatalytic research to be the optimal electron donor for the reduction of viologen by TiO2 (Scheme 1). A study of the biocatalytic reaction showed that the function of the whole-cell biocatalysts was strongly influenced by cell permeability, the redox potential of the electron mediators, and the cell envelope. Recombinant Escherichia coli, which has a turnover frequency of 39.43 ± 3.77 s−1 based on [FeFe]-hydrogenase activity, was shown to be a more effective biocatalyst than Anabaena variabilis in a photobiocatalytic system. A thorough investigation revealed that Tris and MV2+ had less of an impact on H2 synthesis than TiO2, light, and the biocatalysts. With a solar-to-H2 conversion of 1.58 ± 0.10%, a maximum rate of 16.73 ± 1.03 µmol min−1 was achieved. The production of highly effective photobiocatalysts will be guided by an understanding of the functions of each component. Table 1 summarises the performance of hydrogen evolution photobiocatalytic systems, including relative H2 yield (relative to bare cells) and duration.
Photobiocatalytic CO2 reduction and conversion
A promising technique to address climate challenges and store solar energy is the conversion of CO2 into value-added compounds utilising renewable solar energy as the driving force [58–65]. The capabilities of the photobiocatalytic CO2 conversion systems are summarised in Table 2. The mechanism 66a,b by which photocatalytic CO2 reduction (CO2RR) proceeds is given in Scheme 2.
Coordination of electron transfer and enzyme protection
In order to coordinate electron transmission and enzyme protection for photo-enzymatic alcohol synthesis, Jiang and colleagues reported 78 a metal hydride-embedded titania (MH/TiO2) coating engineered on graphitic carbon nitride (GCN). The MH/TiO2 coating serves two essential functions: (1) it prevents the GCN core from deactivating alcohol dehydrogenase (ADH); and (2) it relays electron transfer from GCN to nicotinamide adenine dinucleotide (NAD+) and then to the ADH-catalyzed reduction of formaldehyde. The coordinated photo-enzymatic system was able to produce methanol at a rate of 1.78 ± 0.21 µmol min−1 mg(ADH)−1, which is 420% more than the rate of the system made up of ADH and GCN without the coating. Additionally, the coordinated system is capable of producing methanol continuously for a minimum of three light–dark cycles, whereas the GCN and ADH system entirely shuts down after one light–dark cycle. By combining synthetic and biological modules for solar chemical-based conversion, this study reveals the potential of redox-active mineral coatings (Scheme 3).
Photobiocatalytic conversion of proteomembranes into artificial chloroplasts
By employing inverted E. coli vesicles and heterodinuclear tpphz-bridged Ru–Rh photocatalysts, Rau et al. presented 79 a hybrid system that can concurrently produce the two physiologically active cofactors NADH and ATP while drawing energy from an external source, such as visible light. Thus, it mimics how natural chloroplasts work, which is to produce reduced nicotinamides and ATP for subsequent energy-demanding reductive cascade reactions. It was discovered by investigating the various ATP synthesis steps that the photocatalytically produced NADH actually acidifies the inside of the proteomembrane vesicles. The phosphorylation of glucose ultimately results from the usage of the resultant proton-motive force (pmf),
transforming ADP and Pi into ATP. In green plants, photochemically generated ATP is used in a biochemically comparable phosphorylation reaction to activate ribulose-5-phosphate for CO2 fixation and reduction. Thus, the final step of the studied reaction is also similar to the natural chloroplast system. Additionally, it was discovered that the overall charge and lipophilicity of the coordination compounds can influence how the examined Ru polypyridine complexes interact with the E. coli-generated vesicles. These findings pave the way for more applications, such as chain reactions using both the cofactors ATP and NADH. If appropriate enzymes were added to the cofactor-producing inorganic–biologic hybrid system indicated, energy-intensive reductive activations of N2, as well as CO2 fixation processes, would be feasible (Scheme 4).
Morpholine-based buffers that activate aerobic photobiocatalysis
According to the findings of Gonçalves et al., 80 morpholine-based buffers, particularly 3-(N-morpholino)propanesulfonic acid (MOPS), encourage photoinduced flavoenzyme-catalyzed asymmetric redox transformations by regenerating the flavin cofactor through sacrificial electron donation and by raising the functional stability of the flavin-dependent oxidoreductase. In order to solve the oxygen problem under aerobic conditions, which is harmful to delicate enzymes, the active forms of flavin are stabilised by MOPS through the creation of a spin-correlated ion-pair ensemble, 3[flavin•−–MOPS•+] (Scheme 5).
Photobiocatalytic artificial dehalogenase for cross-coupling reactions
A light-harvesting metallo-enzyme platform for organometallic cross-coupling reactions under moderate conditions has been created by Wang et al. 81 It is feasible to increase the synergism of dual catalysis by rationally combining an artificial photosensitizer (i.e., benzophenone) and a NiII(bpy) complex, two catalytic entities that are relatively incompatible in solution. The effective transformations of various aryl halides to phenols and a useful C–N bond formation were two examples of the catalytic utility. The use of the NiII cofactor, a wholly synthetic metal complex, also differs significantly from the most popular photobiocatalytic procedures that rely on natural redox enzymes. In addition, this synthetic enzyme is the first dehalogenase used for organic synthesis, which complements natural counterparts that are only known for bioremediation. As a result, this study opens up new possibilities for combining synthetic photo- and biocatalysts to push the limits of artificial enzyme catalysis for a variety of difficult bond configurations (Schemes 6 and 7).
Aerobic photobiocatalysis
Due to the variety of enzymes available, their high catalytic activities and specificities, and the environmental friendliness of the processes, biocatalytic transformation has gained increasing attention in the field of green chemical synthesis. The majority of redox enzymes in nature rely on nicotinamide cofactors such as nicotinamide adenine dinucleotide in its oxidized and reduced forms (NAD+/NADH). An exceptional possibility to create fully integrated green processes is provided by the utilisation of solar energy, particularly visible light, in the regeneration of cofactors through the coupling of photocatalysis and biocatalysis. However, the quick decomposition and inactivation of the enzymatic material caused by photogenerated reactive oxygen species (ROS) has made the combined use of photocatalysts and enzymes difficult. Li et al. developed 82 core–shell structured polymer micelles and vesicles with aggregation-induced emission (AIE) properties as visible-light-mediated photocatalysts for extremely stable and recyclable photobiocatalysis under aerobic conditions. The photoactive hydrophobic core of the polymer micelles and the hydrophobic membrane of the polymer vesicles can effectively regenerate NAD+ from NADH, while the hydrophilic surface layer of the polymer colloids protects the enzymatic material (glucose 1-dehydrogenase) from the attack of photogenerated ROS. The enzyme maintains its active state after at least 10 regeneration cycles, and the polymer micelles and vesicles continue to function as photocatalysts. These polymer colloids could potentially help to establish commercially viable photobiocatalytic systems (Scheme 8).
Triplet-triplet annihilation-based photon-upconversion
The application range of synthetic organic chemistry has been substantially expanded, particularly through the use of light-driven enzymatic catalysis. However, photoenzymes can often utilise only a restricted wavelength range of visible (sun)light. Using triplet–triplet annihilation-based upconversion (TTA-UC), which transforms light with long wavelengths into light with shorter wavelengths, the usable wavelength range can be expanded. In their study on the viability of light upconversion, Park et al. developed 83 TTA-UC poly(styrene) (PS) nanoparticles that were doped with a platinum(II) octaethylporphyrin (PtOEP) photosensitizer and a 9,10-diphenylanthracene (DPA) annihilator (PtOEP:DPA@PS). Using 550 nm light, PtOEP:DPA@PS nanoparticles were photoexcited, resulting in the upconverted emission of DPA at 418 nm. With a high energy transfer efficiency, the TTA-UC emission can photoactivate flavin-dependent photodecarboxylases. As a result, under green light irradiation (λ = 550 nm), the photodecarboxylase from Chlorella variabilis NC64A was able to catalyse the conversion of fatty acids into long-chain secondary alcohols (Scheme 9). Complementary alcohol dehydrogenases were used, taking advantage of the mild conditions used in this stage. Numerous valuable 1-arylpropan-2-ols were produced with low to good overall yields (14–76%) and excellent stereoselectivity (90 to >99% ee) via the screening of various ADHs for the enzymatic carbonyl reduction and the optimisation of the reaction conditions to facilitate the sequential photobiocatalytic linear approach (Scheme 10). Hyster et al. reported that flavin-dependent "ene"-reductases can catalyze the asymmetric synthesis of tertiary alcohols via a photoenzymatic alkene carbohydroxylation (Scheme 11a). 85 Mechanistic investigations indicate that the production of C–O bonds happens via a 5-endo-trig cyclization with the pendant ketone, resulting in an α-oxy radical that is subsequently hydrolyzed and oxidised to generate the product. Similarly, they also reported a highly chemo- and stereoselective C-alkylation of nitroalkanes with alkyl halides catalyzed by an engineered flavin-dependent "ene"-reductase (ERED) (Scheme 11b). 86 According to a mechanistic investigation, radical initiation is triggered by the excitation of an enzyme-templated charge-transfer complex that develops between the substrates and the cofactor. Furthermore, they also reported a highly chemoselective and enantioselective Csp3–Csp3 photoenzymatic cross-electrophilic coupling (XEC) between alkyl halides and nitroalkanes catalysed by flavin-dependent "ene"-reductases (EREDs) (Scheme 11c). 87 This synthetic methodology demonstrates the unprecedented efficacy of biocatalysts in controlling stereoselectivity and differentiating Csp3 electrophile substrates. In continuation of their work on asymmetric synthesis, the Hyster group have also carried out several other organic chemical transformations 88 using photobiocatalysis.
Future prospects for photobiocatalysis
Since photobiocatalysis is still in its infancy, there are undoubtedly many obstacles as well as much potential in this emerging field. In this review, we have covered the most recent advances in the merging of photocatalysis with biocatalysis and its application in chemical transformations. 31 This field has opened up several new opportunities along with various challenges. In fact, the substance used as the photocatalyst plays a key role in the conversion of light energy. The catalytic performance of the cascade system will be further enhanced by using robust, superior photocatalysts and enzymes that are complementary to one another. It might be possible to accomplish difficult conversions with the combination of novel catalysts. The combination of directed enzyme evolution and innovative biocatalytic reaction mechanisms holds enormous potential for addressing long-standing issues in chemical synthesis. The application of photobiocatalytic techniques will definitely enhance the range of chemical reactions that can be carried out by organic chemists. The use of photobiocatalysis will continue to be advanced further with new concepts and approaches.
Conclusions
In recent years, photobiocatalysis has grown at an exponential rate, revealing important variations in the biological processes involved, the most important of which is the way light drives the chemical transformation. Synthetic organic chemists have a growing awareness of the potential of photocatalysis and its application in synthetic problem-solving and are becoming more well-versed in it than ever. The potential to synthesise fine compounds in this rapidly developing area has been demonstrated by the tactics that have emerged in the last decade to combine photocatalysis with biocatalysis in an advantageous manner. Without the need for harsh chemicals or heat energy, visible-light photoexcitation of naturally occurring photoactive enzymatic cofactors can facilitate challenging reactions that are not achievable using ground-state catalysis. Although photocatalysis can facilitate a wider range of chemical transformations, biocatalysis benefits from the high selectivity that photochemistry at enzyme active sites provides, something made possible by the defined environment of the biocatalyst. However, for photobiocatalysis to be widely used, there must be defined guidelines for reporting photobiocatalytic experiments and access to reasonably priced, well-characterized illumination equipment. It is anticipated that light-dependent enzymatic reactions will become a common tool in biocatalytic laboratories as a result of all these discoveries.
Fig. 2 In the presence of light, the combination of photocatalysis and biocatalysis simplifies challenging reactions. Photocatalyst = PC; S0 = singlet ground state; S1 = first singlet excited state; T1 = first triplet excited state; FMNsq = semiquinone flavin mononucleotide. Reproduced with permission from ref. 26. Copyright © 2022 American Chemical Society.
Scheme 2 An energy diagram depicting photosynthetic CO2RR production coupled with water oxidation. The reduction potentials are given versus the NHE at pH 6.7. Reproduced with permission from ref. 66a. Copyright 2020 Springer Nature Publishing AG.
Scheme 5 Proposed mechanism for MOPS-mediated photobiocatalysis. Steps of the photoredox cycle: (i) the oxidized FAD is excited by light to FAD*, capable of (ii) oxidizing MOPS. The resulting FAD semiquinone (FAD•−) can (iii) generate ROS in the presence of O2 via electron transfer or (iv) reduce the oxidized FAD bound to the enzyme (E-FAD), regenerating its oxidized form FAD. Steps of the biocatalytic cycle: (v) the resulting caged radical pair [E-FADH• + O2•−] forms (vi) C(4a)-(hydro)peroxyflavin, responsible for the conversion of the substrate, and E-FAD is regenerated (vii). The ROS produced may induce enzyme deactivation, which is minimized in the presence of MOPS due to stabilization of the FAD semiquinone via formation of a [FAD•−–MOPS•+] ensemble (in purple). For simplicity, protonation equilibria are not shown. Reproduced with permission from ref. 80. Copyright © 2019 Royal Society of Chemistry.
Table 1. Summary of the performance of hydrogen evolution photobiocatalytic systems, including relative H2 yield (relative to bare cells) and duration. (a) The apparent quantum efficiency (AQE) of a light-driven H2 production system is the number of additional evolved H2 molecules multiplied by 2, divided by the number of incident photons.
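For readers who want to reproduce the footnoted definition, the following hedged Python sketch evaluates the AQE formula quoted above; the power, wavelength, and H2 amount are illustrative values, not data from any of the cited systems.

```python
# Hedged sketch of the AQE definition in Table 1: two electrons are consumed
# per evolved H2 molecule, so AQE = 2 * N(extra H2) / N(incident photons).
from scipy.constants import Avogadro, Planck, speed_of_light

def incident_photons(power_w: float, seconds: float, wavelength_m: float) -> float:
    """Number of photons delivered by a monochromatic light source."""
    photon_energy = Planck * speed_of_light / wavelength_m  # J per photon
    return power_w * seconds / photon_energy

def aqe_h2(extra_h2_mol: float, photons: float) -> float:
    """Apparent quantum efficiency (as a fraction) for H2 evolution."""
    return 2.0 * extra_h2_mol * Avogadro / photons

# Illustrative numbers: 0.1 W at 420 nm for one hour, 1 umol of extra H2
photons = incident_photons(power_w=0.1, seconds=3600, wavelength_m=420e-9)
print(f"AQE = {100 * aqe_h2(extra_h2_mol=1e-6, photons=photons):.3f}%")
```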
Table 2. Summary of the performance of photobiocatalytic CO2 conversion systems. (a) The apparent quantum efficiency (AQE) refers to the ratio of the number of electrons used to convert the substrate to product to the number of incident photons.
Expression of HMGB1 and its Clinical Significance in T-cell Lymphoma
Introduction
T-cell lymphoma is a heterogeneous tumor in which T lymphocytes become cancerous. It is a common cancer in China, and its incidence rate has increased steadily in recent years. High mobility group box-1 (HMGB1) protein is a highly conserved nuclear protein that binds to DNA and regulates gene transcription. Recent studies show that HMGB1 plays important roles in the development and progression of tumors. It is highly expressed in many malignant tumors, including liver cancer, breast cancer, colon cancer, and ovarian cancer. Survivin is a newly described member of the inhibitor of apoptosis (IAP) family. Its expression is undetectable in normal adult tissues but significantly upregulated in transformed cell lines and several malignant tumors. It may protect cells against apoptosis and promote tumor growth and invasion (Lu et al., 2007; Reis et al., 2011). Wang et al. showed that HMGB1 may inhibit apoptosis of hepatoma cells and promote their proliferation via Survivin (Wang et al., 2004). However, in T-cell lymphoma, the expression of HMGB1 and its relationship with Survivin have not been reported. In this study, we investigated the expression of HMGB1 and Survivin in tissues from patients with T-cell lymphoma and its correlation with clinicopathology.
Materials
Tissue samples from 102 patients diagnosed with T-cell lymphoma by histopathology and immunohistochemistry from Jan 2002 to Dec 2011 (60 males, 42 females; age range 14–81 years; mean age 56 ± 4.5 years) were included in this study. The pathological grading and malignancy of T-cell lymphoma were determined according to the "WHO Classification of Tumours of Haematopoietic and Lymphoid Tissues, Fourth Edition, 2008". Pathological stages of T-cell lymphoma were classified according to the Ann Arbor staging system. Tissues from 40 cases of reactive lymphoid hyperplasia were obtained from biopsy specimens in our hospital.
Immunohistochemical analysis
Samples were fixed in 4% formaldehyde solution, embedded in paraffin, and sectioned (4 μm). The sections were dewaxed in xylene, dehydrated in alcohol, and subjected to antigen retrieval by microwave irradiation. Then, sections were incubated with rabbit anti-human polyclonal antibodies against HMGB1 and Survivin (1:200 dilution; Santa Cruz, USA), followed by incubation with peroxidase-conjugated secondary antibodies. The peroxidase reactivity was visualized using 3,3'-diaminobenzidine (DAB) according to the manufacturer's instructions for the DAB detection kit. As a negative control, a parallel experiment in which the primary antibodies were substituted with phosphate-buffered saline (PBS) was performed.
Evaluation of immunohistochemistry
Antigen expression was evaluated using light microscopy. The immunoreactivity of HMGB1 was localized to the cytosol of tumor cells, while the immunoreactivity of Survivin was in the nucleus. Dark brown staining indicates positive expression of HMGB1 and Survivin. Using a semiquantitative scoring system (Kawasaki et al., 1998), we evaluated the intensity and extent of HMGB1 and Survivin expression. The percentage of cells positive for HMGB1 and Survivin was determined and graded as follows: 0 = 0–5%, 1 = 6–25%, 2 = 26–50%, 3 = 51–75%, and 4 = 76–100%. The intensity of HMGB1 and Survivin staining was graded as follows: 0 = none, 1 = weak, 2 = moderate, and 3 = intense. An immunoreactive score was calculated by adding the grade of the percentage of positive cells to the grade of the intensity of staining. Scores of 0 and 1 were judged as HMGB1- or Survivin-negative, and scores of 2 or higher were judged as positive.
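The scoring rule above is simple enough to express directly; the sketch below (our illustration, not the authors' software) encodes the percentage and intensity grades and the positivity cut-off.

```python
# Minimal sketch of the semiquantitative immunoreactive scoring described
# above; bin edges and cut-off follow the text.
def immunoreactive_score(percent_positive: float, intensity: int) -> int:
    """Sum of the percentage grade (0-4) and the intensity grade (0-3)."""
    if not 0 <= percent_positive <= 100 or intensity not in (0, 1, 2, 3):
        raise ValueError("percent in [0, 100], intensity in {0, 1, 2, 3}")
    if percent_positive <= 5:
        pct_grade = 0
    elif percent_positive <= 25:
        pct_grade = 1
    elif percent_positive <= 50:
        pct_grade = 2
    elif percent_positive <= 75:
        pct_grade = 3
    else:
        pct_grade = 4
    return pct_grade + intensity

def is_positive(score: int) -> bool:
    """Scores of 2 or higher are judged positive; 0-1 are negative."""
    return score >= 2

score = immunoreactive_score(percent_positive=30, intensity=2)  # grade 2 + 2 = 4
print(score, is_positive(score))
```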
Statistical analysis
All data were analyzed with SPSS 17.0 statistical software. The chi-square test was used to determine the relationship between the expression of HMGB1/Survivin and clinicopathologic features. Spearman rank correlation analysis was used to analyze the correlation between HMGB1 expression and Survivin expression. Statistical significance was defined as P < 0.05.
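As an illustration only, the following Python sketch runs the two named tests with scipy on the co-expression counts reported in the Results; the binary positive/negative coding used here is a simplification, so the toy Spearman value need not match the r = 0.467 computed by the authors, which was presumably based on graded scores.

```python
# Hedged sketch of the two tests named above, with scipy in place of SPSS.
from scipy.stats import chi2_contingency, spearmanr

# 2x2 table: rows = HMGB1 +/-, columns = Survivin +/- (counts from the text)
table = [[57, 8],   # HMGB1-positive: 57 Survivin+, 8 Survivin-
         [6, 31]]   # HMGB1-negative: 6 Survivin+, 31 Survivin-
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4g}")

# Spearman correlation on paired positive/negative calls (1 = positive);
# on binary data this reduces to the phi coefficient and will differ from
# the paper's r = 0.467, which likely used graded immunoreactive scores.
hmgb1 =    [1] * 57 + [1] * 8 + [0] * 6 + [0] * 31
survivin = [1] * 57 + [0] * 8 + [1] * 6 + [0] * 31
rho, p_rho = spearmanr(hmgb1, survivin)
print(f"Spearman r = {rho:.3f}, p = {p_rho:.4g}")
```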
Expression of HMGB1 and Survivin in samples of T-cell lymphoma and reactive lymphoid hyperplasia and its relationship with clinicopathologic features
The positive immunoreactivity of HMGB1 was observed in the cytosol of cells. As shown in Figure 1A, specimens from 65 of 102 (64%) cases of T-cell lymphoma were HMGB1-positive, which is much higher than the rate in reactive lymphoid hyperplasia (16 of 40, 40%; P < 0.01). The immunoreactivity of Survivin was localized to the nucleus. The percentage of Survivin-positive cases was also significantly higher in T-cell lymphoma than in reactive lymphoid hyperplasia (61.8% [63/102] vs 45% [18/40], P < 0.05) (Figure 1B).
As shown in Table 1, there was no correlation between the expression of HMGB1 or Survivin and the gender, age, or tumor location in patients with T-cell lymphoma (P > 0.05). In indolent, aggressive, and highly aggressive lymphoma, the positive expression rate was 32.3%, 71.4%, and 100.0%, respectively, for HMGB1, and 16.1%, 78.6%, and 93.3%, respectively, for Survivin. Overexpression of HMGB1 and Survivin was associated with tumor aggressiveness (P < 0.001). The positive expression rate in stage I/II lymphoma and stage III/IV lymphoma was 49.3% and 96.8%, respectively, for HMGB1, and 47.9% and 93.5%, respectively, for Survivin. Quantitative analysis showed that the expression of HMGB1 and Survivin was associated with the tumor stage (P < 0.001).
The correlation between HMGB1 expression and Survivin expression in T-cell lymphoma
In patients with T-cell lymphoma, 65 cases were positive for HMGB1. Of them, 57 cases (87.7%) were also positive for Survivin. However, of 37 HMGB1-negative cases, only 6 cases (16.2%) were positive for Survivin. Statistical analysis revealed a significant correlation between the expression of the two proteins (r=0.467, p=0.001) (Figure 2 and Table 2).
Discussion
HMGB1, originally identified as a chromatin protein in 1973, is an important regulator of cell cycles (Wang and Chen, 2009). It is expressed in tumor tissues of several cancers, including liver cancer, breast cancer, colon cancer, and ovarian cancer, and plays critical roles in gene regulation and tumor immunology (Jube et al., 2012). It has been shown that HMGB1 can be secreted from cells, and its level in serum is significantly upregulated in cancer patients compared to that in normal controls (Cheng et al., 2008). We found that HMGB1 was expressed in T-cell lymphoma as well as in reactive lymphoid hyperplasia and normal lymphoid tissue. However, its expression was significantly elevated in T-cell lymphoma compared to that in reactive lymphoid hyperplasia and normal lymphoid tissue. Survivin is an endogenous inhibitor of apoptosis and participates in early events of tumorigenesis. It is highly expressed in many malignant tumors, and its expression is correlated with the malignancy, stage, recurrence, and metastasis of tumors (Wang et al., 2004; Lu et al., 2007; Reis et al., 2011). Our results showed that the expression of Survivin was upregulated in T-cell lymphoma compared to that in reactive lymphoid hyperplasia. Consistent with previous reports (Kuniyasu et al., 2002; Cheng et al., 2008; Chung et al., 2009), the expression of HMGB1 and Survivin was correlated with the aggressiveness and stage of tumors. Therefore, HMGB1 and Survivin may be involved in the development and progression of T-cell lymphoma and could be new targets for targeted molecular therapy.
Both HMGB1 and Survivin function in the G2/M phase to regulate cell cycles. They also promote tumorigenesis by inhibiting apoptosis (Tang et al., 2010). Wang et al. (2004) found that HMGB1 may promote tumor proliferation by upregulating Survivin expression. We evaluated the correlation between HMGB1 expression and Survivin expression and found that they were positively correlated in T-cell lymphoma, indicating that Survivin may be a downstream target of HMGB1 in T-cell lymphoma and that HMGB1 may promote proliferation by upregulating Survivin expression.
Taken together, our findings suggest that HMGB1 may play important roles in the development and progression of T-cell lymphoma. Examination of the expression of both HMGB1 and Survivin will help us to determine the malignancy and stage of T-cell lymphoma.
COMPARISON OF THE REPRODUCTIVE STATUS OF THE SCLERACTINIAN CORAL SIDERASTREA STELLATA THROUGHOUT A GRADIENT OF 20° OF LATITUDE
Most studies on the sexual reproduction of scleractinian corals in Brazil have been concentrated on a single population or a small area (Pires et al., 1999; Francini et al., 2002; Neves & Pires, 2002; Pires & Caparelli, 2002; Pires et al., 2002; Lins-de-Barros et al., 2003). The present paper presents a comparison of the sexual reproductive status of six populations of the Brazilian scleractinian coral Siderastrea stellata Verrill, 1868. The six studied sites are distributed along a latitudinal gradient of 20° in the Southwestern Tropical Atlantic Ocean, comprising areas throughout almost the whole of the species' geographical distribution (Fig. 1).
Siderastrea stellata is endemic to Brazil and a common coral species there (Laborel, 1970; Castro & Pires, 2001). It is a colonial, massive, and zooxanthellate coral, occurring in all the Brazilian reefs and in coral communities from Maranhão (00°53'S, 044°16'W) to Rio de Janeiro State (23°S, 042°W) (Castro & Pires, 2001) (Fig. 1). In some reef areas, such as the Atol das Rocas (03°52'S, 033°49'W), it is the main reef-building coral species (Echeverría et al., 1997). It usually occurs in shallow waters (up to 10 m depth) and often occupies horizontal substrates (Segal & Castro, 2000). It is considered resistant to sedimentation, temperature and salinity variations, and strong wave action (Laborel, 1970). Siderastrea stellata is a gonochoric brooder species, with a high female-to-male sex ratio and an annual gametogenetic cycle (Lins de Barros et al., 2003). Released planula larvae contain zooxanthellae and settle after 48 hours in close contact with the parental polyps. The first septal cycle is formed by day 2–3, and colonial development may take many months, according to laboratory observations (Neves & Silveira, 2003).
Ten colonies of S. stellata were collected from each site, at depths of around 5 m and during the species' reproductive peak. According to Lins de Barros et al. (2003), the reproductive peak and planulation of S. stellata colonies occur from the end of January to early February in the Abrolhos Reef Complex (18°S). Collections were carried out in 2001 on the following dates: January 24th at Corumbau (16°54'S, 039°05'W) and Tamandaré (08°46'S, 035°87'W); January 27th at Fernando de Noronha (hereafter called "Noronha") (03°51'S, 032°27'W); and January 29th at Salvador (13°S), Guarapari (21°S), and Búzios (23°S). Colonies were fixed in a 10% formaldehyde–seawater solution and deposited in the Cnidaria Collection of the Museu Nacional/Universidade Federal do Rio de Janeiro.
Colonies were decalcified in a solution of 5% formaldehyde and 10% formic acid, and at least 10 polyps of each colony were dissected under a stereomicroscope. The total number of polyps examined was 645. Polyp fecundity was determined by counting all the oocytes within each polyp. The presence of larvae inside the polyp coelenteron, or being expelled through the mouth, was also recorded. Three colonies from each site were processed for histological examination. After decalcification, polyps were dehydrated, cleared, and embedded in paraffin. Serial cross-sections (7 µm) were obtained, and at least ten slides, with up to five polyps each, were produced from each colony and stained with Mallory's triple stain (Pantin, 1948). Slides were examined under a binocular microscope to determine the stage of gametogenesis and the general condition of the tissues, which could indicate recent release of gametes.
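A minimal sketch of the fecundity bookkeeping described above is given below; it is our illustration with invented per-polyp counts, not the authors' workflow.

```python
# Minimal sketch of the fecundity summary used in Table 1: fecundity is the
# mean number of oocytes per polyp, reported as mean +/- SD, plus the shares
# of polyps without oocytes and of polyps carrying larvae.
import numpy as np

oocytes_per_polyp = np.array([0, 2, 5, 0, 1, 7, 0, 3])  # one entry per polyp
larvae_per_polyp = np.array([1, 0, 0, 2, 0, 0, 1, 0])

fecundity_mean = oocytes_per_polyp.mean()
fecundity_sd = oocytes_per_polyp.std(ddof=1)            # sample SD
pct_without_oocytes = 100 * np.mean(oocytes_per_polyp == 0)
pct_with_larvae = 100 * np.mean(larvae_per_polyp > 0)

print(f"fecundity = {fecundity_mean:.2f} +/- {fecundity_sd:.2f} oocytes/polyp")
print(f"polyps without oocytes: {pct_without_oocytes:.0f}%")
print(f"polyps with larvae: {pct_with_larvae:.0f}%")
```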
Our observations indicated synchrony in the late development of oocytes among the six sites. All colonies had fertile polyps, and more than 50% of the oocytes observed were mature, with the nucleus at the periphery of the cells (stage III; Lins-de-Barros et al., 2003) (Table 1). However, the planulation season apparently started earlier in colonies from Búzios than in those from the other five studied sites. Histological analyses showed that the mesenteries of the polyps from Búzios colonies seemed brittle, suggesting a recent gamete release. Colonies from Búzios presented the highest percentage of polyps without oocytes (48%), but of those, 13% had larvae (Table 1). The percentage of polyps with larvae in Búzios colonies was also high (32%) compared to the other sites (Table 1). Another indication that the planulation season of S. stellata at Búzios was at its peak on the date of collection (29 January 2001) was the low average number of oocytes per polyp observed (1.86 ± 3.44 oocytes/polyp [mean ± SD]; Table 1). The low polyp fecundity and high percentage of polyps without oocytes suggested that fertilization of most oocytes had already occurred, generating larvae that were being released. Colonies from Búzios collected at the end of December 1999 and the beginning of January 2000 (13 months before the collections of the present study) had at least five oocytes per mesentery, approximately 140 oocytes per polyp (unpublished data).
In contrast to Búzios, colonies from the other five sites showed a small percentage of polyps without oocytes (Table 1). However, as at Búzios, the planulation season had also started, since larvae inside the polyps were always observed (Table 1). Nevertheless, the high percentage of polyps with oocytes (100% of the examined polyps of colonies from Noronha and Guarapari had oocytes) and the high polyp fecundity versus the low number of larvae (Table 1) suggested that most of the oocytes produced had not been fertilized yet.
Búzios was the southernmost studied site (23°S) and is near the southern limit of the S. stellata geographical distribution. It is located in the Cabo Frio region, characterized by an upwelling phenomenon that occurs mostly during January and February, when minimum seawater temperatures can drop to circa 18°C (Valentin & Moreira, 1978). Francini et al. (2002) also reported that the gamete release of Mussismilia hispida colonies from Búzios (23°S) was asynchronous compared to colonies from Abrolhos (18°S) and Santos (24°S). The influence of temperature on coral reproduction is widely recognized (Wallace, 1985; Shlesinger et al., 1998; Pires et al., 1999; Heltzel & Babcock, 2002; Lins de Barros et al., 2003). The regulatory influence of upwelling on coral reproduction in Búzios, which specifically advances the onset of planulation of S. stellata, should be considered, but further studies are necessary to draw sound conclusions.
ACKNOWLEDGEMENTS
This research was funded by the "Fundação O Boticário de Proteção da Natureza" and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). We also thank the Brazilian Environmental Agency (IBAMA) for providing collecting permits. We thank A. Baird for valuable suggestions and text revision.
Table 1. Reproductive data of colonies of Siderastrea stellata collected at six sites, comprising a gradient of 20° of latitude. "N": number of polyps examined; "Fecundity": average number of oocytes per polyp; "n": number of oocytes measured and classified according to their development stage. For descriptions of stages I, II, and III of oogenesis of S. stellata, see Lins de Barros et al. (2003).
Fig. 1. Reproductive comparison of Siderastrea stellata. Map of Brazil, indicating the six studied sites (arrows) and the geographical distribution of S. stellata (vertical bar).
New Estimates for the Numerical Radius
In this article, we present new inequalities for the numerical radius of the sum of two Hilbert space operators. These new inequalities will enable us to obtain many generalizations and refinements of some well known inequalities, including multiplicative behavior of the numerical radius and norm bounds. Among many other applications, it is shown that if $T$ is accretive-dissipative, then \[\frac{1}{\sqrt{2}}\left\| T \right\|\le \omega \left( T \right),\] where $\omega \left( \cdot \right)$ and $\left\| \cdot \right\|$ denote the numerical radius and the usual operator norm, respectively. This inequality provides a considerable refinement of the well known inequality $\frac{1}{2}\|T\|\leq \omega(T).$
Introduction
Let H be a complex Hilbert space, endowed with the inner product ⟨·, ·⟩ : H × H → C.
The C*-algebra of all bounded linear operators on H will be denoted by B(H). An operator A ∈ B(H) is said to be positive (denoted by A > 0) if ⟨Ax, x⟩ > 0 for all non-zero vectors x ∈ H, and self-adjoint if A* = A, where A* is the adjoint of A. For A ∈ B(H), the Cartesian decomposition of A is A = ℜA + iℑA, where ℜA = (A + A*)/2 and ℑA = (A − A*)/(2i) are the real and imaginary parts of A, respectively. It is clear that both ℜA and ℑA are self-adjoint. If ℜA > 0, the operator A will be called accretive, and if both ℜA and ℑA are positive, A will be called accretive-dissipative.
Among the most interesting scalar quantities associated with an operator A ∈ B(H) are the usual operator norm and the numerical radius, defined respectively by ‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} and ω(A) = sup{|⟨Ax, x⟩| : ‖x‖ = 1}. It is well known that ‖A‖ = sup{|⟨Ax, y⟩| : ‖x‖ = ‖y‖ = 1}. The operator norm and the numerical radius are always comparable by the inequality (1/2)‖A‖ ≤ ω(A) ≤ ‖A‖ (1.1). The significance of (1.1) is having upper and lower bounds for ω(A), a quantity that is not easy to calculate. Due to the importance and applicability of the quantity ω(A), interest has grown in finding better bounds for ω(A) than those in (1.1).
The main goal of this article is to present new inequalities for the numerical radius. More precisely, we study inequalities for quantities of the forms ω(A + B) and ω(A + iB), A, B ∈ B(H).
Our approach will be based mainly on the scalar inequality (1.2), valid for a, b ∈ R. Using this inequality is a new approach to tackle numerical radius inequalities.
This approach will enable us to obtain new bounds, some of which are generalizations of certain known bounds. For example, we will refine the Kittaneh inequality [4], noting that the inequalities (1.3) refine the inequalities (1.1).
In fact, our approach will not only refine these inequalities; it will also yield a new proof and a generalization of (1.3).
Moreover, using this approach, we will show that (1/√2)‖A‖ ≤ ω(A) for any accretive-dissipative operator A. This inequality presents a considerable improvement of the first inequality in (1.1). As a result, we will be able to introduce a better bound for the sub-multiplicative behavior of the numerical radius when dealing with accretive-dissipative operators. More precisely, we show that ω(ST) ≤ 2ω(S)ω(T) when both S and T are accretive-dissipative. See Corollary 2.3 below for further discussion.
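The claimed bound is easy to probe numerically. The following Python sketch (ours, not part of the paper) builds a random accretive-dissipative matrix and checks (1/√2)‖T‖ ≤ ω(T), computing the numerical radius through the standard characterization ω(T) = max over θ of the largest eigenvalue of ℜ(e^{iθ}T):

```python
# Hedged numerical sanity check of (1/sqrt(2))*||T|| <= omega(T) for an
# accretive-dissipative T, using the standard identity
#   omega(T) = max_theta lambda_max( (e^{i theta} T + e^{-i theta} T*) / 2 ).
import numpy as np

rng = np.random.default_rng(0)

def random_positive(n: int) -> np.ndarray:
    """Random Hermitian positive definite matrix."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return m @ m.conj().T + np.eye(n)

def numerical_radius(t: np.ndarray, grid: int = 2000) -> float:
    w = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False):
        re_part = (np.exp(1j * theta) * t + np.exp(-1j * theta) * t.conj().T) / 2
        w = max(w, np.linalg.eigvalsh(re_part).max())
    return w

a, b = random_positive(5), random_positive(5)
t = a + 1j * b                  # accretive-dissipative by construction
norm_t = np.linalg.norm(t, 2)   # operator norm = largest singular value
print(norm_t / np.sqrt(2) <= numerical_radius(t) + 1e-9)  # expected: True
```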
Other results including reverses of (1.3) and a refinement of the triangle inequality will be shown too.
In our proofs, we will need to recall the following inequalities.
Also, we will need the following result [6, Proposition 3.8].
In the following proposition, we restate (1.3) in terms of the Cartesian decomposition.
New Results
We begin our main results with the numerical radius version of the inequality (1.2), which can be stated as follows. Further, this inequality is sharp, in the sense that the factor 1/√2 cannot be replaced by a smaller number.
Proof. Let x ∈ H be a unit vector. Then the first inequality follows from the triangle inequality, the second inequality is obtained by Lemma 1.1, the third inequality is obtained by the arithmetic–geometric mean inequality, and the fourth inequality follows from (1.2). Therefore, we have shown the bound for any unit vector x ∈ H. Now, by taking the supremum over all unit vectors x ∈ H, we obtain the first assertion of the theorem. To show that the factor 1/√2 is best possible, let B = 0 and assume that A is positive. Direct calculations show that the inequality is sharp, which completes the proof.
The following result shows how Theorem 2.1 refines (1.3). Proof. The first inequality follows from Theorem 2.1 by taking B = 0, and the second inequality follows from Proposition 1.1. This completes the proof.
Using the same method as in Theorem 2.1, we can obtain the following result, a different form of Theorem 2.1.
Theorem 2.2. Let A, B ∈ B(H). Further, the factor 1/√2 is best possible.
The next corollary follows from Theorem 2.1, taking into account that the sum of two normal operators need not be a normal operator.
Corollary 2.2. Let A, B ∈ B(H) be two normal operators.
In particular, if T = A + iB is accretive-dissipative, then the corresponding bound holds for any unitarily invariant norm ‖·‖u. It is implicitly understood that ‖·‖u is defined on an ideal in B(H), and that T belongs to that ideal when we speak of ‖T‖u. We notice, first, that Corollary 2.2 provides the numerical radius version of (2.2), in which A, B are normal, a wider class than positive matrices. Further, Corollary 2.2 provides a refinement of (2.2) in the case of the usual operator norm. The next result provides a considerable improvement of the first inequality in (1.1) for accretive-dissipative operators.
Proof. Let T = A + iB be the Cartesian decomposition of T, in which both A and B are positive. Then Corollary 2.2 together with Lemma 1.3 implies the result. This completes the proof.
From (1.1) and the fact that the operator norm is sub-multiplicative, we obtain the well-known inequality ω(AB) ≤ 4ω(A)ω(B). It is well established that the factor 4 cannot be replaced by a smaller factor in general. However, when A or B is normal, we obtain the better bound ω(AB) ≤ 2ω(A)ω(B), and it is even better when both are normal, as we have ω(AB) ≤ ω(A)ω(B). In the following result, we present a new bound for accretive-dissipative operators, which is better than the bound ω(AB) ≤ 4ω(A)ω(B).
We refer the reader to [2] for a detailed study of this problem. If either T or S is accretive-dissipative, then the following bound holds. Proof. Noting the submultiplicativity of the operator norm and Theorem 2.3, we obtain the first inequality. The second inequality follows similarly.
It is interesting that the approach we follow in this paper allows us to obtain reversed inequalities as well. In [4], a reverse inequality of this type is shown. In the following, we present a refinement of this inequality using our approach.
Corollary 2.4. Let T ∈ B(H) have the Cartesian decomposition T = A + iB. Proof. In Corollary 2.2, replace A and B by A^2 and B^2; in the resulting estimate, the last inequality follows by the triangle inequality for the usual operator norm. If T = A + iB is the Cartesian decomposition of the operator T, this is equivalent to the desired result.
On the other hand, manipulating Proposition 1.1 yields the following refinement of the triangle inequality. The connection of this result to our analysis is the refining term ω(A + iB).
On the other hand, noting Proposition 1.1, the required estimate follows. This completes the proof.
Predictors of Postoperative Seizure Recurrence: A Longitudinal Study of Temporal and Extratemporal Resections
Objective. We investigated the longitudinal outcome of resective epilepsy surgery to identify the predictors of seizure recurrence. Materials and Methods. We retrospectively analyzed patients who underwent resections for intractable epilepsy over a period of 7 years. Multiple variables were investigated as potential predictors of seizure recurrence. The time to first postoperative seizure was evaluated using survival analysis and univariate analysis at annual intervals. Results. Among 70 patients, 54 (77%) had temporal and 16 (23%) had extratemporal resections. At last follow-up (mean 48 months; range 24–87 months), the outcome was Engel class I in 84% (n = 59) of patients. Seizure recurrence followed two patterns: recurrence was “early” (within 2 years) in 82% of patients, of whom 83% continued to have seizures despite optimum medical therapy; recurrence was “late” (after 2 years) in 18%, of whom 25% continued to have seizures subsequently. Among the variables of interest, only resection site and ictal EEG remained as independent predictors of seizure recurrence over the long term (p < 0.05). Extratemporal resection and discordance between ictal EEG and resection area were associated with 4.2-fold and 5.6-fold higher risk of seizure recurrence, respectively. Conclusions. Extratemporal epilepsy and uncertainty in ictal EEG localization are independent predictors of unfavorable outcome. Seizure recurrence within two years of surgery indicates poor long-term outcome.
Introduction
Epilepsy is a common disorder, with an incidence of 50/100,000 per year and a prevalence of 5–10/1000 in North America [1]. About 30% of epilepsy patients are intractable to antiepileptic medication treatment [2]. Randomized controlled trials have established resective surgery as an effective treatment for intractable epilepsy [3,4].
Despite improved outcome, seizure recurrence after resective surgery is not uncommon. A multicenter study demonstrated that only 50–68% of patients remained completely seizure-free after anterior temporal lobectomy (ATL) with two or more years of follow-up [5]. Multiple predictors of postoperative seizure recurrence have been identified, including older age at surgery, extratemporal seizure onset, discordant ictal and interictal findings, previous history of secondarily generalized convulsive seizures, normal MRI, widespread imaging abnormalities, early postoperative seizures, and prolonged seizure history [6–10]. However, the published literature on epilepsy surgery outcome is limited by several factors: first, comparison of different studies shows conflicting results for the same variables, making it difficult to draw definitive conclusions; second, many studies have employed cross-sectional designs that do not address temporal changes in postoperative outcome, potentially leading to conflicting results and conclusions among studies with different follow-up periods; third, the studies have tended to address outcome in selected subgroups defined by pathology (e.g., mesial temporal sclerosis (MTS)) or resection site (e.g., temporal lobe epilepsy (TLE), frontal lobe epilepsy) rather than the entire group of patients undergoing epilepsy surgery, which is more clinically meaningful in terms of the success of epilepsy surgery [7,8,10–13]. Recognizing these limitations, we sought to investigate the longitudinal outcome of epilepsy surgery over time, regardless of the presumptive diagnosis or resection site. Our goal was to identify the predictors of seizure recurrence in patients who underwent temporal and extratemporal resective epilepsy surgery.
Patient Selection and Presurgical Evaluation.
From the epilepsy database, we retrospectively identified the patients who underwent resective surgery between January 2006 and July 2012 at Parkland Memorial Hospital, affiliated with the comprehensive epilepsy program at the University of Texas Southwestern Medical Center. All patients had inpatient video-EEG monitoring and brain MRI. Positron emission tomography (PET), single-photon emission computed tomography (SPECT), neuropsychological assessment, and Wada test were performed in selected patients depending on clinical indication. Invasive monitoring with subdural grid electrodes or depth electrodes was performed when the scalp EEG findings were inconclusive. Intraoperative electrocorticography (ECoG) was done in selected patients to further tailor the resection area. The patients were discussed in the multidisciplinary epilepsy conference to reach consensus before proceeding with the surgery. The resected tissue was evaluated by experienced neuropathologists. We included patients who underwent surgery for intractable epilepsy during the above period and had at least 2 years of postoperative follow-up in our center. We excluded patients whose postoperative pathology confirmed high-grade malignant tumor because their outcome could be significantly influenced by the underlying tumor itself.
From chart review, we extracted the following demographic and clinical data: age, gender, preoperative seizure frequency, epilepsy duration prior to surgery, history of secondary generalized tonic-clonic seizures (SGTCS), number of antiepileptic drugs (AEDs) tried prior to surgery, resection site, ictal EEG findings, interictal EEG findings, imaging findings, Wada memory lateralization, and pathological findings. We did not consider auras in estimating the seizure frequency. MRI was considered as abnormal only if the observed findings were consistent with well-established, potentially epileptogenic entities; in other words, isolated abnormalities that are unlikely to cause seizures (e.g., chronic microvascular disease and nonspecific white matter changes) were not considered as abnormal. We classified the resection site as temporal or extratemporal. We determined the ictal onsets based on established criteria [14][15][16]. With respect to the resection site, we classified the ictal and interictal EEG findings as concordant (i.e., strictly confined to the resection area) or discordant (i.e., any evidence of a wider, even if lateralized, spatial distribution outside the resection area including more than one seizure onset zone). The Wada memory was classified as concordant with the resection site if the surgical hemisphere showed poorer memory function compared with the nonsurgical side as noted by ≥20% difference in the total recall of items; all other Wada results were classified as discordant.
Outcome Assessment.
We reviewed the charts of patients who had at least 2 years of postoperative follow-up. Seizure outcome was assessed using Engel classification [17]. Outcome was classified as class I (seizure-free or free of disabling seizures); class II (rare disabling seizures); class III (worthwhile improvement); and class IV (no worthwhile improvement). Specific to this study, the outcome was further stratified as seizure-free (class I) or seizure recurrence (classes II-IV). Presence of only isolated auras postoperatively was not considered to indicate seizure recurrence.
Longitudinal outcome was evaluated at annual intervals. Outcome at the 2-year interval was classified as class I if the patients remained seizure-free for the 2-year period prior to the follow-up visit. Starting at the 3rd postoperative year, the outcome was classified as class I if the patients remained seizure-free for the 1-year period prior to the follow-up visit. The time to first postoperative seizure was evaluated (see below). Immediate postoperative seizures, within 1 month after surgery, were not included in the analysis.
Statistical Analysis.
The data range and median values were summarized for the continuous variables such as age, number of AEDs, preoperative seizure frequency, and duration of epilepsy. Continuous variables were converted into categorical variables by grouping the values into categories for univariate analysis using chi-square or Fisher's exact tests, as appropriate. Variables with p values < 0.05 on univariate analysis were then tested in a multivariate Cox regression model to obtain hazard ratios and 95% confidence intervals (CI). Kaplan-Meier survival analysis was used to evaluate longitudinal seizure outcome. Statistical significance of the survival analysis was tested by log-rank tests. All statistical analyses were performed using SPSS 10.0 (IBM Corp., Armonk, NY, USA).
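A hedged sketch of this analysis pipeline in Python is shown below; it uses the lifelines package in place of SPSS, and the toy data frame and column names are our assumptions for illustration.

```python
# Hedged sketch (not the authors' SPSS workflow) of the survival analysis
# described above: Kaplan-Meier curves, a log-rank test, and a Cox model.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy data: months to first postoperative seizure (or last follow-up),
# recurrence indicator, and the two predictors retained in the study.
df = pd.DataFrame({
    "months": [24, 10, 48, 60, 6, 36, 30, 18],
    "recurred": [0, 1, 0, 0, 1, 0, 1, 1],
    "extratemporal": [0, 1, 0, 0, 1, 0, 1, 0],
    "eeg_discordant": [0, 1, 0, 1, 1, 0, 0, 1],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["recurred"])
print(kmf.survival_function_.tail(1))  # seizure-free probability over time

# Log-rank test comparing temporal vs extratemporal resections
t, e = df["extratemporal"] == 0, df["extratemporal"] == 1
res = logrank_test(df.months[t], df.months[e], df.recurred[t], df.recurred[e])
print(f"log-rank p = {res.p_value:.3f}")

# Multivariate Cox regression for hazard ratios with 95% CIs
cph = CoxPHFitter().fit(df, duration_col="months", event_col="recurred")
cph.print_summary()
```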
Patient Characteristics.
There were 78 patients eligible for inclusion in the study. Of these, 8 patients were excluded because the postoperative pathology was consistent with a high-grade malignant tumor. Thus, 70 patients (32 males and 38 females) were available for analysis (Table 1). The age at epilepsy surgery ranged from 21 to 64 years (mean 39 years). Preoperative epilepsy duration ranged from 1 to 57 years (mean 18 years). Forty-six patients (66%) had a history of SGTC seizures. Patients had tried multiple AEDs prior to surgery (range 1–10; mean 5.1). Fifty-four patients (77%) had temporal resections (including ATL and lesionectomy in the temporal lobe), whereas 16 (23%) had extratemporal resections (frontal, n = 14, and parietal, n = 2). Twelve patients had extraoperative invasive grid or depth electrode evaluation before resection. Forty-one patients had intraoperative ECoG. Ictal EEG findings were concordant in 52 (74%), discordant in 16 (23%), and inconclusive (not localizable) in the remaining 2 (3%).
During the follow-up period, 22 patients experienced seizure recurrence. Using a 2-year cut-off period, we found that the seizure recurrence followed two clearly different patterns (p < 0.05). In other words, 18/22 patients (82%) had recurrence within 2 years of surgery ("early" recurrence), whereas 4/22 patients (18%) had recurrence after 2 years ("late" recurrence). Among those with early recurrence, 15 (83%) patients continued to have seizures, whereas 3 patients became seizure-free with medication management during subsequent follow-up (Table 2). On the contrary, among those patients with late recurrence, only one (25%) patient continued to have seizures, whereas the other 3 patients became seizure-free with medication management (Table 2). All the patients who regained seizure freedom were reclassified as class I at their subsequent follow-up visits. These findings suggest that early seizure recurrence, within 2 years of surgery, predicts poor long-term outcome.
Univariate Analysis of Predictors of Seizure Recurrence.
We analyzed the postoperative outcome (seizure-free versus seizure recurrence) at various follow-up periods using univariate analysis (Table 3). Among the variables of interest, the nonpredictors of seizure recurrence were age, gender, history of SGTC seizures, epilepsy duration, preoperative seizure frequency, number of AEDs, MRI findings, interictal EEG findings, Wada memory lateralization, and lesion pathology. Extratemporal resection predicted seizure recurrence at 2-, 3-, and 4-year intervals (p < 0.05). Discordance between ictal EEG localization and resection site predicted seizure recurrence at 2-, 3-, 4-, and 5-year intervals (p < 0.05). Outcome beyond 5 years could not be determined due to the small sample size. Of note, among the 7 patients with normal pathology, 6 were seizure-free at last follow-up visit, whereas 1 had never achieved seizure freedom.
Survival Analysis of Long-Term Seizure Outcome.
Kaplan-Meier survival analysis demonstrated statistically significant differences in seizure outcome with regard to resection site and ictal EEG findings (Figure 1). Temporal resections (versus extratemporal resections) and concordance between ictal EEG and resection site (versus discordance) were associated with class I seizure outcome over the long-term follow-up intervals (p < 0.05).
Discussion
In this study, we present our single-center experience of longitudinal seizure outcome after epilepsy surgery in a heterogeneous group of 70 patients regardless of resection site or presumptive etiology. The main findings were as follows: (1) >80% of the patients experienced class I outcome at the last mean follow-up of 4 years; (2) in patients with seizure recurrence, the majority of recurrences (>80%) occurred early (within 2 years after surgery) and a majority of such patients (>80%) continued to have seizures over the subsequent follow-up period despite medical management; and (3) among multiple variables, extratemporal resection (versus temporal resection) and discordance between ictal EEG and resection area (versus concordance between the two) predicted 4.2-fold and 5.6-fold higher risk of seizure recurrence over time, respectively.
Seizure Outcome and Temporal Patterns of Seizure Recurrence.
In our group of patients who had both temporal and extratemporal resections, class I outcome was achieved in 84% at the last follow-up period (mean 48 months). These results are similar to the previous studies and meta-analysis [18] and indicate that epilepsy surgery, regardless of resection site, is beneficial in patients with medically intractable epilepsy. Analysis of seizure recurrence patterns in our study showed that the majority of seizure recurrence (82%) occurred within 2 years after surgery, which we chose as the cut-off for "early" recurrence. This early recurrence predicted poor long-term outcome in our study, with a majority (83%) of such patients continuing to have seizures despite optimum medical management. Previous studies of temporal lobectomy demonstrated an initial phase of steep seizure recurrence at about 1-2 years, followed by a relapse rate of 2-5% per year for 5 years before stable seizure freedom was achieved [19,20]. Using a 6- to 12-month cut-off, other authors have hypothesized that the early seizure recurrences were due to errors in localization of the epileptic focus or incomplete resection, whereas late recurrences were due to de novo epileptogenesis [21,22]. Although our cut-off of 2 years is longer, it provides a practical timeline considering the widely accepted practice of tapering AEDs in patients who have achieved 2 years of postoperative seizure freedom.
Our results suggest that one should exercise caution when attempting to proceed with AED simplification if there has been seizure recurrence within 2 years of surgery.
Predictors and Nonpredictors of Seizure Recurrence.
Studies of predictors of postoperative seizure recurrence are helpful in selecting the best surgical candidates. Extensive research regarding the predictors of postoperative outcome has been done, and multiple positive or negative predictors have been proposed [23]. However, the literature shows conflicting results, often related to methodological issues, preventing direct clinical application. For example, cross-sectional studies used the last follow-up (which was variable within the group) or the follow-up at a specific postoperative anniversary (e.g., 2 years) as cut-off points to assess outcome [24]. Such studies fail to address the changes in outcome over time due to the running down phenomenon, seizure recurrence, and transient improvement or fluctuation [8]. Similarly, studies that have focused on specific disease entities, lesional status, or anatomic resection sites fail to address the overall outcome in an unselected group of patients, making it difficult to understand the true impact or indications of epilepsy surgery.
Resection Sites.
In our study, the resection site was a powerful predictor of seizure recurrence, with extratemporal resections carrying nearly a 4.2-fold higher risk of seizure recurrence than temporal resections. These results are in keeping with prior studies, which showed seizure freedom in the range of 60-70% and 30-50% after temporal lobectomy and extratemporal resections, respectively [3,18,23]. Our study adds longitudinal follow-up data, demonstrating that the patients undergoing temporal resection experience significantly better outcome at each follow-up interval during the 5 years. At last follow-up, 91% (49 of 54) of patients who had temporal resections were seizure-free, whereas only 62% (10 of 16) of patients who had extratemporal resection were seizure-free. Less favorable outcome after extratemporal resections is probably related to the inherent difficulty in localizing and resecting the widespread epileptogenic zones, in contrast to temporal resections, which tend to result in a more complete removal of the epileptogenic zone. In addition, proximity to eloquent cortex (such as somatosensory, speech, and visual cortices) makes the resection of extratemporal foci more challenging, leading to incomplete removal despite invasive monitoring. We did not find any difference in outcome between temporal and extratemporal resections at the 5-year follow-up interval, which is most likely attributable to the relatively small sample size for extratemporal resections at that interval.
EEG Findings.
In our study, the ictal EEG findings concordant with the resection site predicted favorable outcome over the long term. At last follow-up, 92% of patients (48/52) with concordant EEG and 56% of patients (9/16 patients) with discordant EEG were seizure-free. This is along the lines of prior studies showing the value of EEG as a predictor of postoperative seizure outcome in the presence or absence of MRI abnormalities in patients with temporal or neocortical epilepsies [23,25,26].
In our study, interictal EEG was not a predictor of seizure recurrence, which is along the lines of the conclusions from a prior meta-analysis [23]. Studies are conflicting as to whether interictal EEG findings predict seizure recurrence [8,11,23]. This may be partly attributable to the variations in the definition of concordance between the interictal discharges and the resection site. Concordant interictal findings were defined as exclusive discharges in single brain regions, absence of generalized spikes, or occurrence of >70% lateralized discharges among various studies [8,27,28]. While it is intuitive to think that ictal and interictal EEG findings would be predictive of seizure outcome, our study showed that only the ictal EEG was a predictor of outcome. It is possible that ictal and interictal findings, taken together, might be predictive of outcome, but we did not specifically address that issue. Our results suggest the continued need for ictal recordings, rather than just the interictal data, for planning resection.
MRI Findings.
Our results are in agreement with other studies that have demonstrated that nonlesional MRI can be associated with an outcome as good as lesional MRI, provided the scalp EEG findings are concordant with other functional studies and the planned resection site [9, 12, 29-31]. The relatively good outcome in patients with nonlesional MRI in our study is not surprising. Among 17 patients with nonlesional MRI, the postoperative pathology showed gliosis, nonspecific changes, and MTS in 11, 5, and 1 patients, respectively. Many of the patients with nonlesional MRI showed localizing findings on other studies, concordant with the intended surgical site; for example, 9/11 patients (82%) had PET abnormalities, 10/13 patients (77%) had SPECT abnormalities, and 13/18 patients (72%) had concordant ictal EEG findings. Thus, electrophysiological and non-MRI imaging studies strongly supported a well-localized epileptogenic zone in these patients. Our findings suggest that patients with nonlesional MRI can be good candidates for surgery as long as other data are concordant.
Other Nonpredictors of Seizure Recurrence.
Besides interictal EEG and MRI findings, the other nonpredictors of seizure recurrence in our study were age, gender, history of SGTC seizures, epilepsy duration, preoperative seizure frequency, number of AEDs, Wada memory lateralization, and lesion pathology. It is well established that gender is not predictive of outcome, but the literature is conflicting as to whether the other characteristics have predictive value [32,33]. The discrepancies may be due to different patient populations, study methods, follow-up intervals, and classification methods.
Our study has a few limitations. Because of its retrospective nature, we were unable to determine if there were discrepancies in selecting patients for surgery. However, all the patients were discussed in a multidisciplinary conference, which ensured at least some degree of uniformity. The number of patients completing the long-term follow-up beyond 5 years was smaller due to loss to follow-up. We were unable to investigate the mechanisms of seizure recurrence because only a few patients with postoperative seizure recurrence underwent follow-up EEG or video-EEG evaluation. We were also unable to ascertain how many patients remained seizure- and aura-free because of inconsistencies in documentation. Prospective studies in larger cohorts are needed for a better understanding of the pathogenesis of seizure recurrence after epilepsy surgery. Nevertheless, our study demonstrates that epilepsy surgery is beneficial in intractable epilepsy and that temporal resection and concordant ictal EEG are the major determinants of favorable outcome over long-term follow-up.
Neurocysticercosis: An Uncommon Cause of Acute Supratentorial Hydrocephalus
We report a 29-year-old woman with acute supratentorial hydrocephalus due to intraventricular neurocysticercosis (NC). Aqueductal stenosis due to web formation and a free-floating intraventricular cyst with a scolex were pathognomonic and led to the diagnosis of NC. Worldwide, NC is the most important parasitic infection of the central nervous system but is very uncommon in non-endemic regions. Intraventricular abnormalities occur in approximately 30% of the patients. Magnetic resonance imaging (MRI) plays a crucial role in the diagnostic work-up and in guiding intervention. Teaching Point: Brain magnetic resonance imaging in intraventricular neurocysticercosis is pathognomonic and essential in guiding treatment.
INTRODUCTION
NC is the most common parasitic infection of the central nervous system, caused by the larval stage of the pork tapeworm Taenia solium. Increasing globalization, travelling, and migration have triggered the spread of NC in non-endemic regions. The cysts can be found in the parenchyma, in the subarachnoid space, and/or in the ventricles. Ventricular cysts can cause cerebrospinal fluid (CSF) flow obstruction and/or arachnoiditis [1].
CASE REPORT
A 29-year-old Nepalese woman, who had been living in Belgium for 14 years, presented to the emergency department with worsening headache. She described a 10-day history of unilateral headaches with nausea and vomiting, not responsive to analgesics.
The neurological examination was unremarkable. There was no evidence of papilledema.
Brain computed tomography (CT) showed dilatation of the supratentorial ventricles with signs of transependymal edema (Figure 1a). A parenchymal calcification in the left occipital region was noticed. Brain MRI confirmed the acute triventricular hydrocephalus with aqueductal stenosis due to an intraluminal web (Figure 1b). A lobulated cystic lesion was seen in the occipital horn of the right lateral ventricle containing a small solid eccentric nodule with diffusion restriction (Figure 2a-c). There was no evidence of Gadolinium enhancement. Endoscopic ventriculostomy and septostomy under neuronavigation were performed. The changed location of the intraventricular cyst on the postoperative MRI confirmed its free-floating character (Figure 2d).
Subsequently, medical treatment with antiparasitic therapy (Albendazole) and steroids was started for two weeks. The patient responded well to therapy and could leave the hospital after 10 days. Meanwhile the diagnosis of NC was confirmed by serological tests detecting Taenia solium antigen and antibodies (ELISA) in the CSF and serum.
DISCUSSION
NC is a brain infection caused by the encysted larval stage (cysticercus) of the pork tapeworm Taenia solium. According to the location in the brain there are two forms: parenchymal and extraparenchymal NC. Compared to the parenchymal form, patients with intraventricular and/or subarachnoid disease have a worse overall outcome with a higher morbidity and mortality [1].
Intraventricular NC occurs as the cysticerci reach the ventricles through the choroid plexus, where they may pass freely or become attached to the ependyma. Intraventricular infection appears to be more frequent than previously thought, presenting in up to 30% of patients with NC [2].
CT is more sensitive in detecting calcified lesions but has limited sensitivity for identification of intraventricular cysts [3]. In our patient the intraventricular cyst remained invisible on CT. The calcification represents the nodular calcified stage of NC.
Brain MRI is the modality of choice for the detection of extraparenchymal NC. 3D volumetric T2-weighted sequences have enhanced sensitivity for detection of cysticerci in the ventricles or subarachnoid spaces [4]. The intraventricular form of NC is a frequent cause of intracranial hypertension due to CSF outflow obstruction. Intraventricular cysts may be free-floating and cause obstruction at the foramina of Monro, the Sylvian aqueduct, or the fourth ventricle and may lead to rapid clinical deterioration. Intraventricular cysts are most common in the fourth and third ventricles and are less frequently seen in the lateral ventricles [5]. Diffusion-weighted images typically demonstrate diffusion restriction in the scolex, which is visible as an eccentric dot in the cyst [6]. This finding is typically seen in the vesicular and colloidal-vesicular stages of NC. The aqueductal stenosis in our patient was due to the presence of a web, most likely secondary to arachnoiditis. Intraventricular cysts may also become adherent to the ependymal wall of the ventricle and result in ependymitis following cyst involution, which may lead to intraventricular compartmentalization and make CSF diversion more problematic [2]. The differential diagnosis of an intraventricular cystic lesion includes neoplastic and infectious lesions, but the demonstration of a diffusion-restricting scolex is pathognomonic for intraventricular NC.
CONCLUSION
Intraventricular NC is a rare cause of acute obstructive hydrocephalus. With increased international travel and immigration, one should consider NC as a possible etiology. The demonstration of a diffusion restrictive scolex on brain MR imaging is pathognomonic for NC.
The impact of modified cement with a special additive on construction mortar deformations
The study is devoted to the modification of class 42.5 ordinary Portland cement with a 10% special expanding additive (cement powder and gypsum in a 40:60 percentage ratio), and the impact of this cement on mortar is determined. Using modified cement and multi-fraction quartz sand in a ratio of 1:3, prism-shaped samples with a size of 4x4x16 cm were fabricated and left in a water bath for 24 ± 2 hours after formation. After one day of curing, the samples were demolded, their linear dimensional change was measured using an indicator, and they were then submerged in water. After three, seven, and 28 days of curing, the samples' changes in linear dimensions were measured again using the indicator. It was found that introducing a 10% specially developed expanding additive into the composition of ordinary Portland cement produces the maximum expansion of the construction mortar at 3-7 days of hardening. The expansion percentage at 28 days of curing differs very little from the 7-day value. Considering the data on the linear dimensional change of the test samples, we can conclude that the modification of cement with the synthesized additive allows the production of non-shrinkable, expansive construction mortars and concretes, which are currently being studied.
Introduction
Ordinary Portland cement has a variety of beneficial properties, but sometimes its use becomes inconvenient depending on the importance of the designed structures and buildings and on the operating conditions. During strengthening, shrinkage phenomena are observed, which lead to the formation of internal micro-cracks in the cement stone, violate the cohesion of structures during monolithic construction, increase the water and gas permeability of hydraulic structures, and reduce durability. Special forms of cement have been developed to avoid the aforementioned problems. Such cement not only does not shrink during hardening but also expands the strengthening system. Non-shrinking, expanding, and tensioning cements are widely used in the manufacture of various types of construction facilities for a variety of purposes, including hydro- and nuclear power plants; marine waterproofing; industrial structures; and ordinary, pre-stressed reinforced concrete structures. The management of special cement production became possible thanks to the improvement of the theory of binding materials and their production technologies.
Several scientists were involved in the development of such special cements, in particular V.V. Mikhailov, I.V. Kravchenko, P.P. Budnikov, A.E. Sheikin, and P.K. Meta, who, considering cement stone as a capillary-porous material, revealed the patterns of relationships between the structure of the resulting artificial conglomerate and its main properties: strength, deformation, and stability in various aggressive environments. They also showed that the structure of cement stone depends on the mineral composition of the cement, its hydration kinetics, and the phase composition of the hydrated new formations.
The authors mentioned have developed different methods for obtaining special, particularly non-shrinkable, expanding cement, using quite expensive and scarce alumina cement, high-alumina blast furnace slag, specially synthesized clinker, and different additives, as well as free CaO and MgO. According to their research, the expansion phenomenon is mainly explained by the formation of calcium hydrosulfoaluminate as ettringite (3CaO·Al2O3·3CaSO4·31H2O) in the hardening system. When the system gains enough stiffness, the growth of needle-shaped ettringite crystals contributes to the system's expansion. The formation of ettringite at later times, when the cement stone has gained high strength, leads not only to the expansion of the system but also to the formation of microcracks in the artificial stone and sometimes to its complete deterioration, which is often observed in reinforced concrete (1-17).
When examining the current state of expanding cement production technology, it is worth noting that, while previously the composition of these cements consisted primarily of costly alumina cement and scarce high-alumina furnace slag, since the end of the twentieth century new, cheaper, and more efficient methods for the production of such cements have been developed, namely the chemical activation of ordinary Portland cement with special expanding additives.
Based on a comparative evaluation of the effectiveness of non-shrinking and expanding cement production methods, and considering the current state and capabilities of cement factories in the Republic of Armenia, we concluded that the most cost-effective and preferable approach in our circumstances is the development of a special-purpose expanding additive for the activation of ordinary Portland cement.
Materials and methods
Based on the above, and taking into account the lack of alumina cement and high-alumina furnace slag in the Republic of Armenia as well as the complexity of the special clinker production scheme, we came to the conclusion that for the production of such cements it is more appropriate and more affordable to develop a special expanding additive based on local raw materials. For the development of such an additive, the country's raw material base and various production wastes were studied. Gypsum can be used as the sulfate-containing component of the additive, and, for the first time, an attempt was made to use the production waste of the cement factory as the carbonate component. Electrostatic filters capture the dust from the flue gases removed from the furnaces; this dust contains a sufficient amount of CaO. The average chemical compositions of the selected components are given in Table 1. Raw mixtures were prepared from the mentioned components in different percentage ratios; these were fired, and the chemical compositions of both the mixture and the sinter were determined.
Discussion of results
It has been proven by the calculation method (18) and by physicochemical (X-ray, petrographic) studies that the sinter obtained from a 40:60 (cement powder : gypsum) mixture, fired at 1000 °C, is preferable because, in this case, the formation of a high amount of the main expanding mineral, 3(CaO·Al2O3)·CaSO4, is observed. A further increase in temperature does not significantly affect the amount of mineral formed. The calculated average chemical composition of the raw mix and sinter is given in Table 2, and Table 3 shows the average modulus characteristics and mineral composition of the raw mixture and sinter. Thus, considering that there is a sufficient amount of gypsum in the chemical-mineral composition of the processed raw material mixture, it can be assumed that with the synthesis of the special additive and the sharp cooling of the resulting sinter, the formation and stabilization of individual minerals are possible, especially calcium sulfoaluminate, 3(CaO·Al2O3)·CaSO4, which is important for the production of non-shrinkable and expansive cement and similar construction mortars based on them.
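The modulus characteristics summarized in Table 3 are conventionally derived from the oxide analysis. The sketch below computes the standard textbook cement-chemistry moduli from mass percentages; the calculation method cited as (18) may use different expressions, and the oxide values are placeholders rather than the actual compositions in Tables 1-2.

```python
# Standard cement-chemistry moduli from an oxide analysis (mass %).
# Illustrative placeholder values only; formulas are the conventional ones.
def moduli(CaO, SiO2, Al2O3, Fe2O3):
    lsf = 100 * CaO / (2.8 * SiO2 + 1.18 * Al2O3 + 0.65 * Fe2O3)  # lime saturation factor
    n = SiO2 / (Al2O3 + Fe2O3)     # silica (silicate) modulus
    p = Al2O3 / Fe2O3              # alumina modulus
    return lsf, n, p

lsf, n, p = moduli(CaO=44.0, SiO2=10.0, Al2O3=3.5, Fe2O3=2.0)
print(f"LSF = {lsf:.1f}, n = {n:.2f}, p = {p:.2f}")
```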
It has been studied and proven that introducing 10% of this additive into the composition of ordinary Portland cement has a favorable effect on the expansion of cement mortar of normal density (19), which is clearly visible in the petrographic study of the cement stone microstructure. It has been revealed that with the introduction of the additive, well-formed needle-shaped crystals of calcium hydrosulfoaluminate are formed in the cement stone, with refractive index Ng = 1.463 ± 0.002. The microstructure of the cement stone was studied with an electron microscope and is shown in Fig. 1. The obtained data served as a basis for checking the effectiveness of the synthesized special additive on the change of linear dimensions of construction mortars. For this purpose, a cement mortar composition was prepared based on modified cement and multi-fraction quartz sand in a ratio of 1:3, from which 4x4x16 cm prism-shaped samples were formed, which were kept in a water bath for 24 ± 2 hours after preparation.
The samples were then removed from the mold. After one day of curing, they were submerged in water, and the change in linear dimensions was detected using an indicator. The samples were then tested for changes in their linear dimensions using the indicator at three, seven, and 28 days of curing. The data are plotted in Fig. 2.
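For reference, the linear expansion plotted in Fig. 2 is simply the indicator reading divided by the base length of the prism, expressed as a percentage. A minimal sketch, with placeholder readings rather than the measured values:

```python
# Linear dimensional change of a 4x4x16 cm prism from indicator readings,
# as a percentage of the base length. Readings are illustrative placeholders.
L0_mm = 160.0  # prism length along the measured axis

readings_mm = {1: 0.00, 3: 0.12, 7: 0.16, 28: 0.15}  # day -> reading
for day, dL in readings_mm.items():
    expansion_pct = 100.0 * dL / L0_mm
    print(f"day {day:2d}: expansion = {expansion_pct:.3f} %")
```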
The phenomenon of expansion is also explained by the formation of calcium hydrosulfoaluminate, i.e., ettringite (3CaO·Al2O3·3CaSO4·31H2O), as a result of the hydration of the mineral 3(CaO·Al2O3)·CaSO4 during hardening. The growth of these crystals contributes to the volume expansion of the cement-sand composition system.
Moreover, the main expansion of the system is also observed at 3-7 days of hardening, which is explained by the intensive growth of ettringite crystals. Some reduction in the degree of linear expansion at 28 days of curing is due to the evaporation of residual moisture from the system.
Fig. 2. Dependence of additive expansion on curing time.
From the data presented in the graph, it is clear that in pure cement and construction mortars, the effective influence of the additive is also observed at the age of 3-7 days of hardening.
Conclusion
Studying the effective influence of the cement modified with a 10% special expanding additive (40:60 percentage ratio of cement powder and gypsum) on the changes of construction mortar linear dimensions at different curing times, it has been revealed that the greatest expansion is observed at 3-7 days. The mortar does not have enough stiffness after one day of hardening, and the small quantity of ettringite does not guarantee system expansion. At 28 days, when the system has gained enough stiffness, further ettringite formation no longer contributes to system expansion.
Table 1. Chemical composition of raw materials.
Table 2. Calculated average chemical composition of the raw mixture and the sinter.
Table 3. Average modulus characteristics and mineral composition of the raw mixture and sinter.
The Rap activator Gef26 regulates synaptic growth and neuronal survival via inhibition of BMP signaling
In Drosophila, precise regulation of BMP signaling is essential for normal synaptic growth at the larval neuromuscular junction (NMJ) and neuronal survival in the adult brain. However, the molecular mechanisms underlying fine-tuning of BMP signaling in neurons remain poorly understood. We show that loss of the Drosophila PDZ guanine nucleotide exchange factor Gef26 significantly increases synaptic growth at the NMJ and enhances BMP signaling in motor neurons. We further show that Gef26 functions upstream of Rap1 in motor neurons to restrain synaptic growth. Synaptic overgrowth in gef26 or rap1 mutants requires BMP signaling, indicating that Gef26 and Rap1 regulate synaptic growth via inhibition of BMP signaling. We also show that Gef26 is involved in the endocytic downregulation of surface expression of the BMP receptors thickveins (Tkv) and wishful thinking (Wit). Finally, we demonstrate that loss of Gef26 also induces progressive brain neurodegeneration through Rap1- and BMP signaling-dependent mechanisms. Taken together, these results suggest that the Gef26-Rap1 signaling pathway regulates both synaptic growth and neuronal survival by controlling BMP signaling. Electronic supplementary material The online version of this article (10.1186/s13041-017-0342-7) contains supplementary material, which is available to authorized users.
Introduction
Transsynaptic retrograde signaling from postsynaptic cells controls the development and survival of presynaptic neurons [1-3]. At the Drosophila larval neuromuscular junction (NMJ), the bone morphogenetic protein (BMP) ligand glass bottom boat (Gbb) is secreted from the postsynaptic muscle and acts as a key retrograde signal that promotes the expansion of synaptic arbors [4-7]. In motoneurons, the Gbb signal is processed by a tetrameric presynaptic complex containing the type II BMP receptor wishful thinking (Wit) and either of two type I BMP receptors, thickveins (Tkv) and saxophone (Sax). Upon Gbb binding, this receptor complex phosphorylates the R-Smad mothers against decapentaplegic (Mad). Phosphorylated Mad (P-Mad) translocates into the nucleus through its interaction with the co-Smad Medea to regulate transcription of target genes [8]. Mutations disrupting this canonical BMP signaling pathway, including gbb, wit, tkv, sax, and mad, all display NMJ undergrowth and defective basal transmission [4-7]. In sharp contrast, genetic conditions that elevate presynaptic BMP signaling cause NMJ overgrowth with excessive formation of small "satellite" boutons [9-12], which bud off the main axis of the motor axon terminal. Based on these findings, it has been proposed that the level of BMP signaling is instructive for the regulation of NMJ synapse growth [10]. Subsequent work on the Drosophila brain has begun to reveal the importance of precise regulation of BMP signaling in the maintenance of adult neurons. It has been demonstrated that, in addition to synaptic overgrowth, elevation of BMP signaling induces abnormal brain neurodegeneration in the adult fly [9].
Drosophila NMJ studies have identified various endocytic proteins as negative regulators of BMP-dependent synaptic growth. For example, loss of two endocytosis regulators, Dap160/intersectin and endophilin, leads to an increase in synaptic P-Mad levels and NMJ overgrowth with excessive satellite bouton formation [10,13]. In addition, a similar phenotype is also induced by loss of spichthyin (Spict), Spartin, and endosomal maturation defective (Ema), all of which are involved in endolysosomal trafficking of BMP receptors [9,11,14]. Importantly, these endocytic genes are shown to functionally interact with BMP signaling pathway components at the NMJ [9-11, 14]. These findings imply that endocytosis and subsequent lysosomal degradation of BMP receptors are important mechanisms involved in attenuating Gbb-induced signaling at the NMJ.
In a genetic screen for mutations that affect synaptic morphology at the Drosophila NMJ, we identified the gef26 gene, which encodes a PDZ guanine nucleotide exchange factor (PDZ-GEF) for the small GTPase Rap1. Gef26 was originally known to control the development of various organs primarily by regulating cadherin-mediated cell-cell adhesion and integrin-dependent cell-matrix interactions [15-19]. Here, we report a novel role for the Gef26-Rap1 pathway in the regulation of BMP-dependent synaptic growth and neuronal survival. Null mutations in the gef26 or rap1 gene cause NMJ overgrowth characterized by excessive satellite bouton formation, recapitulating the phenotype induced by elevated BMP signaling. Genetic interactions between gef26, rap1, and components of the BMP pathway suggest that Gef26 acts through Rap1 to restrain BMP-dependent synaptic growth at the NMJ. Importantly, Gef26 promotes endocytic downregulation of surface expression of the BMP receptors Tkv and Wit. Finally, our genetic data indicate that regulation of BMP signaling by the Gef26-Rap1 pathway is critical for neuronal survival in the adult brain.
Results
Drosophila gef26 is required presynaptically for normal synaptic growth
To identify genes involved in the regulation of synaptic development, we performed an anatomical screen on 1500 independent EP insertion lines [20,21]. We inspected third instar larval NMJs using the axonal membrane marker anti-HRP. In this screen, we isolated an insertion (G3533) localized in the first intron of the Drosophila gef26 gene (CG9491). These mutants displayed NMJ overgrowth with an excessive formation of small "satellite" boutons (data not shown), which protrude from parental boutons located at primary axon terminal arbors.
To determine the null phenotype of gef26 at the NMJ, we utilized the transheterozygous combination of gef26 6 , a previously reported null allele [19,22], and the Df(2L)BSC5 deficiency (henceforth referred to as Df) to delete the gef26 locus. A significant synaptic overgrowth phenotype was observed at every glutamatergic type-I NMJ in gef26 6 /Df third instar larvae. To quantify the gef26 phenotype, we measured overall bouton number and satellite bouton number at NMJ 6/7 and NMJ 4 from abdominal segment 2 (Fig. 1a, b; Additional file 1: Table S1). Compared with wild-type controls (w 1118 ), bouton number normalized to muscle surface area in gef26 6 /Df larvae was increased by 24% at NMJ 6/7 and by 51% at NMJ 4. At the same time, satellite bouton number in gef26 6 /Df was increased by 39% at NMJ 6/7 and by 219% at NMJ 4. Comparable synaptic growth defects were observed in larvae homozygous for gef26 6 (Fig. 1a, b).
To determine whether gef26 function is required preor postsynaptically for normal synaptic growth regulation, we expressed a gef26 cDNA transgene (UAS-gef26) in gef26 6 /Df mutants under the control of tissue-specific GAL4 drivers. Expression of UAS-gef26 using a neuronal driver (C155-GAL4) fully rescued the NMJ growth defect of gef26 mutants (Fig. 1b). In contrast, expression of UAS-gef26 in all somatic muscles using the BG57-GAL4 driver failed to rescue the NMJ growth defect (Fig. 1b), suggesting that Gef26 functions presynaptically to restrain synaptic growth at the NMJ.
Additional evidence for a presynaptic requirement for Gef26 was provided by assessment of the effect of RNA interference (RNAi)-mediated knockdown of Gef26 expression. Neuronal expression of a dsRNA-fragment of gef26 (UAS-gef26 RNAi ) using C155-GAL4 increased both bouton number and satellite bouton number and mimicked the gef26 loss-of-function mutation, whereas muscular expression of the same dsRNA using BG57-GAL4 had no effect (Additional file 2: Figure S1a, b; Additional file 3: Table S2). This result supports the notion that Gef26 acts in presynaptic neurons to restrain synaptic growth at the NMJ.
We further characterized satellite boutons at gef26 mutant NMJs using several synaptic markers. Satellite boutons contained the active zone antigen NC82 and the synaptic vesicle marker cysteine-string protein (CSP) (Additional file 2: Figure S1c, d). In addition, satellite boutons were found to recruit the subsynaptic reticulum (SSR) marker discs-large (Dlg). Finally, NC82 in satellite boutons was nicely juxtaposed to the essential glutamate receptor subunit GluRIIC (Additional file 2: Figure S1e, f ). Thus, satellite boutons in gef26 mutants display the anatomical hallmarks of functional synapses.
Gef26 acts through Rap1 to regulate synaptic growth
Since Gef26 acts via Rap1 to mediate various developmental processes [15-17, 19, 22], we decided to investigate whether Rap1 is the major target for Gef26 in the regulation of synaptic growth. We began by investigating whether loss of rap1 produces NMJ phenotypes similar to those caused by gef26 loss-of-function mutations. For this purpose, we analyzed NMJ morphology in third instar larvae homozygous for the rap1 MI11950 allele (hereafter referred to as rap1 M ) harboring a Minos element within the rap1 gene. Compared with wild-type controls, both overall bouton number and satellite bouton number in rap1 M mutants were significantly increased (Fig. 2a, b; Additional file 4: Table S3). To confirm the requirement for rap1 in the proper regulation of synaptic growth, we also examined NMJ morphology in third instar larvae expressing rap1 dsRNA (UAS-rap1 RNAi ) under the control of C155-GAL4. This genetic manipulation significantly increased overall bouton number and satellite bouton number (Additional file 5: Figure S2a, b; Additional file 6: Table S4). In contrast, muscular expression of UAS-rap1 RNAi did not noticeably alter NMJ morphology (Additional file 5: Figure S2a, b; Additional file 6: Table S4). Thus, loss of presynaptic rap1 produces gef26-like phenotypes at the NMJ.
Next, we assayed the transheterozygous interaction between gef26 and rap1 during synaptic growth. Heterozygous gef26 6 /+ or rap1 M /+ larvae displayed normal NMJ morphology. However, overall bouton number and satellite bouton number were both significantly increased in transheterozygous gef26 6 /+; rap1 M /+ larvae compared with single gef26 6 / + or rap1 M /+ heterozygotes (Fig. 2b). This type of genetic interaction suggests that Gef26 and Rap1 function in the same pathway.
Finally, we explored the epistatic relationship between gef26 and rap1. Neuronal overexpression of dominant-active Rap1-Q63E (UAS-rap1 CA ) using C155-GAL4 produced an NMJ undergrowth phenotype with fewer synaptic boutons (Fig. 2b). Importantly, neuronal overexpression of UAS-rap1 CA was able to induce a similar phenotype even in the gef26 6 /Df background (Fig. 2b), indicating that the overactivity of Rap1 completely suppresses the synaptic overgrowth in gef26 mutants. These results suggest that Gef26 acts upstream of Rap1 to restrain synaptic growth at the NMJ.
Gef26 and Rap1 regulate synaptic growth via inhibition of BMP signaling
Previous studies have identified Gbb as a key retrograde signal that stimulates synaptic growth at the NMJ [4][5][6][7]23]. Consistently, elevation of BMP signaling, which can be achieved by either presynaptic overexpression of a dominantly active Tkv receptor or loss of the inhibitory Smad Daughters against decapentaplegic (Dad), causes synaptic overgrowth with excessive satellite bouton formation [9,10], recapitulating phenotypes exhibited by gef26 or rap1 mutants. Therefore, we wondered whether Gef26 and Rap1 might regulate synaptic growth by inhibiting BMP signaling. To test this possibility, we first examined the transheterozygous interaction between gef26 or rap1 and dad at the NMJ. Like gef26 6 /+ and rap1 M /+ larvae, heterozygous dad J1E4 /+ larvae displayed normal NMJ morphology (Fig. 3a, b; Additional file 7: Table S5). In contrast, both overall bouton number and satellite bouton number were significantly increased in transheterozygous gef26 6 /+; dad J1E4 /+ and rap1 M ,+/+,dad J1E4 larvae compared with wild-type controls (Fig. 3a, b), suggesting a functional link between Gef26/Rap1 and the BMP signaling pathway during synaptic growth.
We next examined whether synaptic overgrowth in gef26 or rap1 mutants depends on BMP signaling. Heterozygosity for the BMP receptor gene tkv (tkv 7 /+), which had no effect on NMJ morphology in a wild-type background, significantly suppressed the synaptic overgrowth of gef26 mutants (Additional file 7: Table S5). Moreover, removal of both copies of tkv (tkv 1 /tkv 7 ) in the gef26 6 /Df background caused a synaptic undergrowth phenotype, which was similar to that of tkv 1 /tkv 7 mutants (Fig. 3c, d). Thus, BMP signaling is necessary for synaptic overgrowth in gef26 or rap1 mutants.
Finally, we directly tested the role of Gef26/Rap1 in inhibiting BMP signaling by assaying P-Mad levels in gef26 and rap1 mutants. P-Mad accumulation at NMJ synapses and in the nuclei of ventral nerve cord (VNC) motoneurons was significantly increased in gef26 6 /Df or rap1 M /rap1 M larvae compared with wild-type controls (Fig. 3e, f ). Neuronal expression of UAS-gef26 in gef26 6 / Df mutants was capable of reversing the increase of P-Mad in motoneurons (Fig. 3f ), establishing the roles of Gef26 and Rap1 as negative regulators of BMP signaling. These results support a model in which Gef26 and Rap1 restrain synaptic growth by inhibiting BMP signaling.
Gef26 and Rap1 control BMP-dependent synaptic growth by regulating Drosophila fragile X mental retardation 1 (dfmr1) expression and microtubule stability
At the Drosophila NMJ, BMP signaling has been shown to repress the expression of the dfmr1 gene [9]. The dfmr1 product (dFMRP) in turn negatively regulates the expression of the microtubule-associated protein 1B (MAP1B) Futsch [24], which promotes synaptic growth by stabilizing synaptic microtubules [25]. Therefore, we hypothesized that Gef26/Rap1 might control synaptic growth by regulating microtubule stability via the dFMRP-Futsch pathway. To test the involvement of dFMRP in Gef26/Rap1-dependent regulation of synaptic growth, we first examined the transheterozygous interaction between gef26 or rap1 and dfmr1 at the NMJ. Total bouton number and satellite bouton number were significantly higher in transheterozygous gef26 6 /+; dfmr1 Δ50M /+ and rap1 M ,+/+,dfmr1 Δ50M larvae than in wild-type controls, although the single heterozygotes displayed normal synaptic growth (Fig. 4a, b; Additional file 8: Table S6). In a subsequent experiment, we directly tested whether loss of Gef26 or Rap1 alters dfmr1 expression. Levels of dfmr1 mRNA were significantly lower in gef26 and rap1 mutants than in wild-type controls, as demonstrated by quantitative real-time PCR (Fig. 4c). Given the roles of Gef26 and Rap1 in inhibiting BMP signaling, these results imply that Gef26/Rap1 restrains synaptic growth by relieving BMP-dependent repression of dfmr1 transcription.
Futsch reliably labels microtubules in presynaptic motor terminals [25]. Therefore, the above results suggest the involvement of microtubule stability in Gef26/Rap1-mediated regulation of synaptic growth. To directly test this possibility, we assayed the extent of synaptic growth in gef26 and rap1 mutants fed vinblastine, a microtubule-severing drug [26]. When vinblastine was fed at a low concentration (1 μM) that did not affect synaptic growth, it completely suppressed the synaptic overgrowth phenotype of gef26 6 /Df or rap1 M /rap1 M larvae (Fig. 4f, g; Additional file 8: Table S6). These results support the idea that Gef26/Rap1 controls synaptic growth by regulating microtubule stability via the Futsch pathway.
Gef26 regulates the endocytic internalization of the BMP receptors Tkv and Wit
We next attempted to determine how Gef26 attenuates BMP signaling. Mutations disrupting endocytosis, including endophilin (endo) and dap160, increase presynaptic P-Mad levels at the NMJ along with simultaneous synaptic overgrowth and the formation of excessive satellite boutons [10,13,27], suggesting that endocytosis of surface BMP receptors is an important mechanism to inhibit BMP-dependent synaptic growth. Since a similar phenotype was observed in gef26 mutants, we wondered if Gef26 regulates BMP signaling through endocytosis. To test this possibility, we first investigated genetic interactions between gef26 and mutations in endocytic genes. In heterozygous gef26 6 /+, endoA Δ4 /+, and dap160 Δ1 /+ larvae, total bouton number and satellite bouton number were at wild-type levels ( Fig. 5a, b; Additional file 9: Table S7). In sharp contrast, both parameters were significantly increased in transheterozygous gef26 6 /+; endoA Δ4 /+, or gef26 6 /dap160 Δ1 larvae (Fig. 5a, b), raising the possibility that Gef26 regulates BMPdependent synaptic growth through an endocytic mechanism. It has been proposed that Dap160 interacts with the endosomal protein Nervous wreck (Nwk) to negatively regulate synaptic growth [10,28]. However, total bouton number and satellite bouton number were normal in transheterozygous gef26 6 /+; nwk 2 /+ larvae (Fig. 5b), suggesting that Gef26 and Nwk regulate BMP signaling through distinct pathways.
We then examined the impact of gef26 knockdown on the endocytic internalization of BMP receptors in neuronal BG2-c2 cells. We transiently transfected a Myc-Tkv-Flag or Myc-Wit-Flag construct into control or gef26-knockdown cells (Fig. 5c) and prelabeled the cells with an anti-Myc antibody at 4°C. We then initiated endocytosis by incubating the cells at 25°C for 10 min and visualized the internalization of the labeled surface receptors by Myc staining. Total Myc-Tkv-Flag or Myc-Wit-Flag was also monitored by staining for the intracellular Flag-tag after cellular permeabilization. In control cells, we observed several Myc-Tkv-Flag- or Myc-Wit-Flag-positive intracellular puncta (Fig. 5d; data not shown). Importantly, when examined only in cells with similar fluorescence intensities of Flag staining, the number of intracellular Myc-Tkv-Flag- or Myc-Wit-Flag-positive puncta per cell was dramatically reduced in gef26-knockdown cells (Fig. 5d, e), suggesting that Gef26 is required for the endocytic internalization of BMP receptors.
Given the role of Gef26 in BMP receptor internalization, we examined whether synaptic vesicle endocytosis is affected in gef26 mutant NMJs. We stimulated third instar fillets with 90 mM K + in the presence of the styryl dye FM1-43FX. During a 1-min labeling period, dye uptake into synaptic boutons was not significantly different between wild-type and gef26 6 /Df mutant animals (Additional file 10: Figure S3a, b). This result indicates that loss of Gef26 does not grossly affect endocytosis at the presynaptic terminal of the NMJ.
Finally, we investigated whether Gef26 collaborates with Rap1 and the BMP pathway to maintain normal locomotor ability and neuronal survival. To this end, we first examined transheterozygous combinations of gef26 and rap1 or dad with respect to locomotor dysfunction. At 20 days of age, transheterozygous gef26 6 /+; rap1 M /+ and gef26 6 /+; dad J1E4 /+ flies displayed mildly reduced climbing response compared with age-matched gef26 6 /+, rap1 M /+, or dad J1E4 /+ flies (data not shown). However, these transheterozygous flies at 30 days of age exhibited severely reduced climbing ability (Additional file 11: Figure S4c, d). We also examined transheterozygous interactions between gef26 and rap1 or dad with respect to brain neurodegeneration. At 20 days of age, heterozygous gef26 6 /+, rap1 M /+, or dad J1E4 /+ flies were not distinguishable from wild-type controls with respect to the total number of vacuoles (Fig. 6f ). In sharp contrast, there was a significant vacuolization in the brains of transheterozygous gef26 6 /+; rap1 M /+ or gef26 6 /+; dad J1E4 /+ flies (Fig. 6f ), supporting a functional link between Gef26, Rap1, and the BMP signaling pathway in the regulation of neuronal survival in the adult brain.
How might Gef26 regulate BMP signaling? Increasing evidence suggests that endocytosis of surface BMP receptors is a key mechanism of signal attenuation at presynaptic NMJ terminals. In support of this notion, our data imply that Gef26 inhibits BMP signaling by regulating the endocytic internalization of its receptor(s). gef26 displays transheterozygous interactions with mutations disrupting endocytosis (i.e., dap160 and endoA) during synaptic growth. In addition, gef26 mutant NMJs show an increase in the level of surface Tkv, supporting the role of Gef26 in receptor endocytosis. Most directly, we show that Gef26 facilitates the endocytic internalization of the BMP receptors Tkv and Wit in cultured cells. These findings imply a model in which Gef26 attenuates BMP signaling by facilitating endocytosis of BMP receptors (Fig. 6g).
Elevated BMP signaling has been implicated in the pathogenesis of hereditary spastic paraplegia (HSP), a group of neurodegenerative motor disorders. In mammalian cells, several HSP proteins, including NIPA1, Spastin, and Spartin, have been shown to inhibit BMP signaling [31]. At the Drosophila NMJ, the NIPA1 homologue Spichthyin (Spict) and Spartin also inhibit BMP signaling to restrain synaptic growth [9,11]. Importantly, it has now been demonstrated that elevation of BMP signaling in adult spartin flies causes progressive neurodegeneration and locomotor dysfunction [9]. Consistent with these studies and the proposed role of Gef26 as an inhibitor of BMP signaling, depletion of gef26 in the adult fly induces neurodegeneration and locomotor dysfunction. Thus, the current study solidifies the notion that precise regulation of BMP signaling is critical for the maintenance of adult neurons. A future challenge will be to investigate whether PDZ-GEF1 and other human Gef26 homologues contribute to the maintenance of the human motor system and, if so, whether this neuroprotective role involves the regulation of retrograde BMP transsynaptic signaling.
A final point of interest is the mechanism of how the Gef26-Rap1 pathway facilitates BMP receptor endocytosis. In various experimental systems, Rap1 has been identified to regulate actin-driven cellular processes. For example, mammalian Rap1 promotes cell spreading by localizing the RacGEFs Vav2 and Tiam1 to sites of lamellipodia extension [32], which is driven by Rac-dependent actin polymerization. In addition, Dictyostelium Rap1 is also involved in chemotaxis by activating the Rac signaling pathway through RacGEF1 [33]. Since actin polymerization is known to provide mechanical forces required for multiple stages of endocytosis [34], it is tempting to speculate that Rap1 facilitates endocytosis by regulating actin polymerization through the RacGEF-Rac signaling pathway. Interestingly, the Rac signaling pathway has been implicated in the regulation of BMP-dependent synaptic growth at the Drosophila NMJ [35]. In future studies, it will be interesting to investigate the role of the Rac signaling pathway in Rap1-dependent endocytosis.
Molecular biology
Full-length cDNAs for gef26 and rap1 were obtained by reverse transcription PCR of total RNA extracted from Drosophila S2R+ cells and introduced into the pUAST or pUAST-Myc vector to generate UAS-gef26 and UAS-Myc-rap1. For UAS-Myc-rap1 CA , glutamine 63 was mutated to glutamate by overlapping PCR using UAS-Myc-rap1 (the template DNA) and the primers 5'-ATGGCCGT-GAACTCCTCCGTACCC-3′ and 5'-TACGGAGGAGTT-CACGGCCATGCG-3′ in combination with the BglII-Myc-linked primer 5'-GGGAGATCTGCCACCATG-GAACAAAAACTCATCTCAGAAGAG-GATCT-GATGCGTGAGTACAAAATC-3′ and the XbaI-linked primer 5'-GGGTCTAGATAGCAGAACACATAGGGAC-3′, respectively, and the assembled product was introduced into pUAST. For pAc-Myc-tkv-Flag, a full-length cDNA (clone ID: LD45557) for tkv (CG14026) was obtained from the Drosophila Genomics Resource Center (Bloomington, IN, USA). The cDNA insert was PCRamplified and then introduced into the pTOP Blunt V2 vector (Enzynomics, Daejeon, Republic of Korea). Myc and Flag epitope-tag sequences were introduced immediately downstream of the signal sequence and at the C-terminus of Tkv, respectively, by PCR-based mutagenesis. The resulting Myc-tkv-Flag insert was subcloned into the pAc5.1 vector. For pAc-Myc-wit-Flag, Flag epitope-tag sequence was introduced downstream of the wit sequence of pAc-Myc-wit [9] by PCR-based mutagenesis. The resulting Myc-wit-Flag fragment was re-introduced into the pAc5.1 vector.
To measure levels of dfmr1 expression, total RNA was extracted from the third instar brain and ventral ganglion using the TRIsure kit (Bioline, Taunton, MA, USA) and reverse transcribed using the SuperScript III cDNA synthesis kit (Invitrogen). Quantitative real-time PCR reactions were performed using SYBR Select Master Mix (Applied Biosystems, Foster City, CA, USA) on an Applied Biosystems 7500 Real-Time PCR System. The mean Ct of triplicate reactions was used to determine relative expression of dfmr1 using the 2^-ΔΔCt method. Expression of rp49 was used as the internal control. The primers used were: dfmr1, 5'-GGATCAGAACATACCACGTG-3' and 5'-CTGGCAGCTATCGTGGAGGCG-3'; and rp49, 5'-CACCAGTCGGATCGATATGC-3' and 5'-CACGTTGTGCACCAGGAACT-3'.
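A worked example of the 2^-ΔΔCt calculation may make the normalization explicit; the Ct values below are illustrative placeholders, not measured data.

```python
# Relative dfmr1 expression by the 2^-ddCt method described above, with
# rp49 as the internal control. Ct values are illustrative placeholders.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # normalize to rp49 (mutant)
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize to rp49 (control)
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: mean Ct of triplicates for a mutant vs. a wild-type control.
fold = relative_expression(ct_target=24.8, ct_ref=18.0,
                           ct_target_ctrl=23.6, ct_ref_ctrl=18.1)
print(f"dfmr1 relative to control: {fold:.2f}-fold")  # ~0.41-fold here
```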
For RNA interference (RNAi) experiments in BG2-c2 cells, gef26 dsRNA was produced by in vitro transcription of a DNA template containing T7 promoter sequences at both ends, as described previously [38]. The DNA template was produced by PCR from the UAS-gef26 vector using primers containing a T7 promotor sequence followed by gef26-specific sequences: 5'-GTGGCCGGCTCTACCAGT-3′ and 5'-TGGTACGC-GAGTCGAACG-3′.
Immunostaining of larval NMJs
Wandering third-instar larvae were dissected in Ca2+-free HL3 solution and fixed in PBS containing 4% formaldehyde for 20 min. Fixed larval fillets were washed with PBT-0.1 (PBS, 0.1% Triton X-100) and blocked with PBT-0.1 containing 0.2% BSA for 1 h. Samples were sequentially incubated with primary antibodies overnight at 4°C and fluorescently-labeled secondary antibodies for 1 h at room temperature. Monoclonal antibodies from the Developmental Studies Hybridoma Bank (DSHB, Iowa City, IA, USA) were used as primary antibodies. FITC- and Cy3-conjugated secondary antibodies (Jackson ImmunoResearch) were used at 1:200. Images were captured with an LSM 800 laser-scanning confocal microscope using a C Apo 40× W or Plan Apo 63×/1.4 NA objective.
Quantification of bouton number and satellite bouton number was performed at NMJ 6/7 and NMJ 4 in abdominal segment 2, as previously described [20]. Bouton number was normalized to muscle surface area. Statistical analysis was performed using SigmaPlot (Systat Software, San Jose, CA, USA). Comparisons were made by one-way ANOVA with a post-hoc Tukey test. For comparison of only two samples, an unpaired Student's t-test was used. Data are presented as mean ± SEM.
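A sketch of this comparison scheme in Python is shown below (the study itself used SigmaPlot); the normalized bouton counts and group names are hypothetical.

```python
# Bouton counts normalized to muscle surface area, compared across
# genotypes by one-way ANOVA with a post-hoc Tukey test; two-sample
# comparisons by unpaired t-test. Data values are hypothetical.
import numpy as np
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wt     = np.array([0.91, 1.02, 0.97, 1.05, 0.99])
mutant = np.array([1.24, 1.31, 1.18, 1.29, 1.35])
rescue = np.array([0.95, 1.01, 1.08, 0.98, 1.03])

F, p = f_oneway(wt, mutant, rescue)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate([wt, mutant, rescue])
groups = ["wt"] * 5 + ["mutant"] * 5 + ["rescue"] * 5
print(pairwise_tukeyhsd(values, groups))   # all pairwise comparisons

t, p2 = ttest_ind(wt, mutant)              # two-genotype case
print(f"t-test: t = {t:.2f}, p = {p2:.4f}")
```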
Histology, immunostaining, and TUNEL staining of adult brains
Heads from adult flies at 2, 10, 20, 30, and 40 days post-eclosion were fixed overnight in PBS containing 4% paraformaldehyde at 4°C, embedded in paraffin, and subjected to serial 5-μm sectioning in a frontal orientation. Serial sections covering the entire brain were placed on a single slide and stained with hematoxylin and eosin (H&E) using a standard protocol. Vacuoles larger than 5 μm were counted throughout the entire brain.
For immunostaining analysis, brains from 20-day-old flies were dissected in ice-cold PBS, and fixed overnight in PBS containing 4% formaldehyde at 4°C. Fixed brains were subsequently permeabilized in PBT-0.3 (PBS, 0.3% Triton X-100) for 1 h and blocked with PBT-0.3 containing 5% BSA for 1 h. The brains were sequentially incubated with primary antibodies for 48 h at 4°C and fluorescently-labeled secondary antibodies for 24 h at 4°C. The following primary antibodies were used in this study: anti-Elav (7E8A10, DSHB) at 1:10, anti-Repo (8D12, DSHB) at 1:10, and anti-cleaved caspase-3 (Cell Signaling) at 1:100. Antibody-stained brains were mounted in SlowFade antifade medium (Invitrogen). Fluorescent images were acquired with an LSM 800 laser-scanning confocal microscope using a C Apo 40× W objective.
TUNEL assays on paraffin sections of adult brains were performed using the In Situ Cell Death Detection Kit (Roche, Mannheim, Germany). Briefly, paraffin sections were dewaxed according to standard procedures. After washing with PBS, the sections were permeabilized in PBS containing 0.1% sodium citrate and 0.1% Triton X-100 for 15 min at room temperature. After washing with PBS, the samples were incubated with the TUNEL reaction mixture in a dark humid chamber for 1 h at 37°C, prior to DAPI staining for 5 min at room temperature. TUNEL- and DAPI-positive cells were counted in three consecutive, middle frontal sections of adult brains.
Adult climbing test
Adult locomotor ability was assayed as described previously [9]. For each genotype tested, approximately 100 flies were collected within 1 day of eclosion; aged for 2, 10, 20, 30, and 40 days; and placed into a glass graduated cylinder. After 5 min of adaptation to their environment, flies were gently vortexed for 5 s. The distance climbed by individual flies in a 30 s period was measured. Climbing assays were repeated 3 times for each genotype, and the results were averaged.
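The climbing score described above reduces to a simple average over flies and trials; a minimal sketch with hypothetical distances:

```python
# Climbing index as described above: distance climbed by each fly in 30 s,
# averaged across flies and across the 3 repeated trials. Values are
# hypothetical placeholders.
import numpy as np

# rows = trials, columns = individual flies (distance climbed, cm)
trials = np.array([[8.2, 7.5, 9.1, 6.8],
                   [7.9, 7.1, 8.8, 6.5],
                   [8.4, 7.8, 9.0, 7.0]])
per_trial_mean = trials.mean(axis=1)   # mean over flies within each trial
climbing_index = per_trial_mean.mean() # mean over the 3 trials
print(f"mean distance climbed: {climbing_index:.2f} cm")
```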
A Qualitative Study of Factors Influencing Unsafe Work Behaviors Among Environmental Service Workers: Perspectives of Workers and Safety Managers: The Case of Government Hospitals in Addis Ababa, Ethiopia
Background: Environmental Service (EVS) is a term that refers to cleaning in healthcare facilities. EVS personnel are exposed to a variety of hazards, including physical, chemical, ergonomic, cognitive, and biological hazards that contribute to the development of diseases and disabilities. Recognizing the conditions that promote unsafe behavior is the first step in reducing such hazards. The purpose of this study was to (a) investigate the attitudes and perceptions of safety among employees and safety managers in Addis Ababa hospitals, and (b) identify the factors that inhibit safe work behaviors. Methods: The data for this study were gathered using 2 qualitative data collection methods: key informant interviews and individual in-depth interviews. Twenty-five personnel from 3 Coronavirus treatment hospitals were interviewed to understand more about the factors that make safe behavior challenging. The interviews were recorded, transcribed, and then translated into English. Open Code 4.02 was used for thematic analysis. Results: Poor safety management and supervision, a hazardous working environment, and employee perceptions, skills, and training levels were all identified as key factors underlying unsafe work behaviors among environmental service workers. Conclusions: Different types of personal and environmental factors were reported to affect safe work behavior among environmental service personnel. Individual responsibility is vital in reducing or eliminating these risk factors for unsafe behaviors, but management's involvement in providing resources for safe work behavior is critical.
Introduction
Cleaning of healthcare facilities is performed for medical, sanitary, and public health reasons. Maintaining an environment with a low pathogenic burden is essential for avoiding complications during the care and recuperation of patients. 1 One specific department that is cardinal to organizational outcomes is environmental services (EVS). From fostering a culture of safety to improving hospital user experience as well as employee engagement, the EVS department plays a vital role in transforming the culture of an organization. 2 EVS is a term that refers to cleaning in healthcare facilities. 2 EVS personnel perform a critical role in health care, collaborating with hospital staff to ensure the safety of patients and staff through proper medical cleaning and disinfection. Because of its importance, the service is frequently referred to as "the first line of defense against infection control." 3 EVS staff clean patient rooms, nursing units, surgical areas, offices, laboratories, waiting rooms, and restrooms regularly to assist in the prevention of hospital-acquired infections. 4 This is a crucial activity in a healthcare facility because a defective environmental service has an impact on a hospital's ability to function and the quality of treatment it provides. 3 Amid a pandemic marked by the need for cleanliness, the individuals cleaning the hospitals where Coronavirus (COVID-19) patients fight for their lives are of critical importance.
Hospital environmental hygiene, however, is far more complex than other types of cleaning. EVS personnel are exposed to a variety of hazards, including physical, chemical, ergonomic, cognitive, and biological hazards that contribute to the development of diseases and disabilities. 3,5,6 Despite these risks, environmental service personnel are generally undertrained, underpaid, and underappreciated by other hospital staff. When this is combined with understaffed environmental service departments, it leads to long-term concerns about patient and healthcare worker safety. 4,7,8 The EVS workforce has also been proven to be one of the most vulnerable groups to nosocomial COVID-19 infections. 9 Currently, in Addis Ababa, 9 government-owned hospitals serve as COVID-19 treatment centers. Despite the many studies on safe work behaviors among front-line health care professionals, there is scant evidence on barriers to safe work practices among environmental service personnel. The limited literature on factors affecting safe practices among hospital cleaners reveals that we currently lack adequate models for standards and techniques that can work at scale to maintain safety in contexts where risks are prevalent, compliance costs are high, and enforcement capability is poor. Given the very different healthcare systems and regulatory environments, the approaches used successfully in developed countries cannot be directly applied to developing ones. In light of this, the purpose of this article is to investigate the attitudes and expectations of environmental service personnel and safety managers in Addis Ababa government hospitals during COVID-19, as well as to identify the factors that impede safe work behaviors among these workers in 2020.
Methods and Materials
The study was conducted in 3 government hospitals in Addis Ababa city, the Ethiopian capital, from June 25 to July 22, 2020. The 3 government hospitals were purposefully chosen for this investigation because of their large COVID-19 patient flow. The study team consisted of 4 investigators, 4 professional data collectors, and 1 supervisor. The supervisor and the 4 data collectors received 2 days of intensive training, which covered data collection strategies and the appropriate handling of study participants.
The study involved 2 groups of participants in each hospital. One group was made up of personnel from the environmental services department, while the other was made up of in-hospital Infection Prevention and Patient Safety (IPPS) personnel. An on-site survey was conducted in each hospital to identify study participants. According to this census, there were 69 environmental service workers and 15 infection prevention and patient safety specialists across all 3 hospitals.
Study participants were then selected using staff rosters provided by the hospitals' Human Resources departments. The authors wanted to include as many EVS/IPPS staff as feasible to reflect workers along the typical patient trajectory. Thus, purposive sampling was used to acquire data that was representative of the setting. Semi-structured key informant interviews (KII) and in-depth interviews (IDI) were used to collect data in this study. Interviews were conducted until the data was saturated, or until recurring patterns emerged in the individuals' narratives. The study included 19 EVS personnel and 6 infection prevention and patient safety officers from various task classifications, hospital settings, and seniority levels.
The study team developed the interview guides after conducting a thorough evaluation of the relevant literature (Supplemental Material S1 file). The interview guides used with both groups of participants were made similar to ensure that the replies were comparable. The interviews lasted 35 to 45 minutes; probing questions were used when replies were unclear or ambiguous, or to get more extensive information. In addition to the study participants, the interview included an interviewer, a note-taker, and an observer.
The tape-recorded interviews were transcribed in Amharic and translated into English by the researchers. The translated data were exported into Open Code 4.03 software to facilitate coding and analysis. A priori themes were coded based on the study objectives and emergent themes were identified based on the narratives of research participants. The credibility of the analysis was further enhanced by having 2 researchers analyze each data set. All members of the research team worked systematically through entire data sets, giving full and equal attention to each data item. Individual extracts of data were coded in as many different themes as they fit and as many times as deemed relevant. Memos were recorded to identify interesting aspects in the data items and emerging impressions that may form the basis of themes across the data set.
Biweekly research meetings were held throughout the coding process to allow time for peer debriefing and to help the research team examine how their thoughts and ideas were evolving as they engaged more deeply with the data. Meeting minutes were recorded as a means of establishing an audit trail and to help keep track of emerging impressions of what the data meant and how they related to each other. A method of negotiated agreement was then used to reconcile any differences. Related verbatim quotes are presented to aid interpretation of the data.
Wachemo University College of Medicine and Health Sciences' institutional review board (IRB) granted ethical approval. Before collecting data, a permission letter was obtained from the Addis Ababa City Health Bureau. Furthermore, participants' involvement was contingent on their full approval and agreement. All the study participants were de-identified during the analysis and the reporting of the data used in this study.
Results
This study included a total of 25 hospital personnel. Environmental service personnel accounted for 19 of the total, while personnel from the Infection Prevention and Patient Safety departments accounted for 6. Females made up 20 of the study participants (80.0%). The participants' mean age was 38.2 years, with a standard deviation (SD) of 8.2 and a range of 18 to 59.
Factors Related to Unsafe Work Behaviors

Themes and sub-themes (categories) identified
Three main themes emerged from the participants' perspectives concerning the factors associated with unsafe work behaviors: (1) Poor management and supervision of safety, (2) Unsafe workplace conditions, and (3) Perceptions, skills, and training level of workers (Table 1).
Theme 1: Poor management and supervision of safety
Ineffective safety management. Almost all of the participants cited a general lack of personal protective equipment (PPE) as a primary impediment to safe behaviors. There was also evidence of a tendency to suggest that hospital administrators were unaware of the need for PPE provided in a timely, adequate, and appropriate manner. One respondent, for example, voiced dissatisfaction with the shortage of crucial PPE supplies, as described below (P12 refers to participant 12).
P12: One barrier to safe behaviors is that the personal protective equipment, if available, is very worn-out and defective. I've been working here for 6 months, and so far I haven't gotten any PPE. I use the worn-out equipment that was used by those who worked here before the pandemic.
Furthermore, many participants stated that personal protective equipment such as masks, gloves, and goggles were either too big or too small for them to wear comfortably. As a result, it appears that the workers were left with nothing to use.
P4: Although I know we are supposed to use facemasks while cleaning, I prefer instead to cover my face with a scarf because the masks the hospital gives us do not fit properly.
On the other hand, key informants considered in this study justified supply shortfalls as follows (K2 refers to Key informant 2):
K2: . . . We [hospitals] survive with our existing old gear because the pandemic has strained all available resources. We take what we can get and use it to the best of our advantage; the government provides what it has and what it can.
Respondents cited a lack of training as another key cause of the staff's failure to fully engage in safe work practices. A lack of best practices in safety management was also evident from the participants' accounts and affected staff work behavior. For example, a worker's job status, such as whether they are a permanent or temporary employee, affects their access to training or PPE, even though both types of employees may be exposed to the same risk. In addition to these ineffectual best-practice regimes, the lack of efficient inventory management systems within hospitals was identified as a significant impediment to employees' pursuit of safe work practices.
P8: Often the hospital wards get so loaded with COVID patients that we have to work overtime and when we ask for mask or glove replacements, our superiors say okay, but once we enter the storeroom, the item is out of stock. We can't afford to lose our jobs so we work under these circumstances.
Finally, the apparent mismatch between participants' expectations of benefiting from reward programs and hospital administrators' incapacity to set up effective programs appears to have an impact on employees' enthusiasm for safe work practices.

Poor monitoring and supervision of safety. Maintaining workplace safety and health in any workplace is as much the duty of the manager as it is the responsibility of the employees themselves. In the present study, participants discussed in detail the lack of strict supervision and scheduled inspection as one of the major barriers to safe work behavior among the cleaning staff.
P5: Supervisors came this morning but they asked us how the work was going and not about our protection. . . . There's no concern about what we're missing, more so when we're on the night shift.
Theme 2: Unsafe workplace conditions
Unsafe psychological environment. The data gathered in this study demonstrates the prevalence of a high-stress environment. Many participants, for example, regarded departmental pressure as onerous due to productivity demands.
P16: We feel the pressure. We feel that any interruption in our services could cause severe problems. These patients may be individuals who we know or whom we work with. We see them fight for their lives and feel like we must make their stay here as comfortable as possible. So it's hard when you're feeling that, that constant urge to fix everything.
Some features of the workers' characterization of workplace conditions were also reflected upon by key informants.
K4: Any worker who develops or is suspected of developing symptoms gets placed in our isolation centers. So the alienation from your family and the concerns about your job security is constantly at the back of your mind . . . you suffer psychologically, of course, ideally not to the point of stress-induced accidents.
Others felt that they were working under extreme pressure. They stated that some mid-level staff members seem to believe, perhaps incorrectly, that exhibiting an authoritarian attitude toward workers is the way to enhance productivity and achieve objectives in the workplace. Similarly, subgroup demographic characteristics (eg, job position and experience level) also had a tremendous impact on safety perceptions. For some, the key factor in avoiding infections was to rely on experience and self-confidence while doing the work; this is reflected in their optimistic views of themselves and their conceptualizations of advantageous improvements related to age, such as the ability to execute tasks with minimal risk to oneself.
P10: The techniques of working safely and preventing any kind of danger have been perfected by seasoned staff, so I feel like I can function perfectly well if there is a pandemic or an outbreak. Guidelines add little to my know-how.
Other young participants seem to have accepted that more seasoned employees are less likely than their novice peers to get a work-related illness. One worker related this to the levels of promotion that, as seen in the excerpt below, come with experience.
P6: Older and more seasoned staffs are more likely to be in team leader roles and are often active in activities that do not require them to face dangerous conditions.

Socio-cultural aspects of the work. Socio-cultural aspects of work such as culture, beliefs, and attitudes also emerged as a major impediment to safe work behaviors. A repeated allusion from the interviewees was that some had a negative attitude toward safe work behaviors within the workforce.
P18: Several workers in our hospital believe that using personal protective equipment is a waste of time and that using masks increases the incidence of syncope because it doesn't give the brain enough oxygen, particularly the older employees. They say to us the only security we need is God.
For some, perceptions of minimal health threat from the Coronavirus dissuaded safe work behaviors.

Workers not skilled enough to deal with safety issues. Participants were quick to point out that many of their coworkers, particularly the new ones, lacked the basic skills and experience required for employees to execute their jobs properly. According to them, this resulted in many employees using untested work techniques and normalizing workplace risks.
P14: We see inexperienced workers take greater risks when conducting routine tasks. The hospital administration frequently assigns the task of training to employees who have been on the job longer. But we don't get additional time to do that many times; we still need to finish our jobs. Therefore we leave it to the new employee to ask questions. So if you're inexperienced in cleaning hospitals in the COVID-19 era, it shouldn't be your meal ticket.
Discussion
The COVID-19 pandemic presents substantial challenges to health systems around the world, including balancing the additional service delivery needs required to combat the pandemic with maintaining and improving access to critical health services. 10 Historically underfunded health systems in parts of Africa were strained to breaking point during the pandemic. 11 In many cases, frontline health workers lacked adequate protective equipment for much of the pandemic, putting their lives at risk. 12 It is well established that effective performance in any healthcare setting necessitates the availability of appropriate facilities and supplies. 13 Employers are responsible for providing, replacing, and paying for personal protective equipment that can protect the user against health and safety risks at work, reduce physiological stress, facilitate engagement, and keep people comfortable, according to the Ethiopian infection prevention and patient safety guideline and the national healthcare waste management guideline. 14-16 However, the implementation of such recommended evidence-based measures leaves much to be desired, as Ethiopian health care workers still lack access to appropriate PPE. 17,18 This was especially evident in the current study; the lack of adequate and appropriate personal protective equipment was a serious impediment to employees' safe work practices. Hospitals in low-income countries use the same supply chains as hospitals in wealthy countries to get medical supplies, but they have significantly less negotiation leverage to secure resources. 19 In Sub-Saharan Africa, healthcare spending accounts for only 5% of GDP, or approximately half of the global average. 20 PPE is in short supply throughout the region. Purchasing PPE can be a difficult process for African healthcare providers, many of whom are small and medium-sized operations. Highly specialized knowledge of the market is needed to gain a clear grasp of what equipment to buy, where to get it, and how much to pay for it. 21 Simultaneously, the pandemic has brought greater financial strains, particularly on healthcare facilities. There are direct expenses associated with the disease, as well as indirect costs associated with the general economic downturn. During the crisis, many of the smaller healthcare facilities that provide crucial services in countries across the region have battled to stay afloat. As a result, healthcare professionals have little additional revenue to spend on PPE. 22 Bridging this financing gap and helping with the knowledge gap is critical to help more healthcare providers access needed equipment. Linking partners across the supply chain, from PPE manufacturers to financial institutions to healthcare businesses, is critical to getting this right. 19 Similarly, any behavior in a health-related workplace must be sustained by a high degree of knowledge and empirical evidence. 23 Within the academic literature, a focus on the training and empowerment of EVS personnel in lower- and middle-income countries (LMICs) has gathered pace in recent years. In an assessment of hospital training practices in India, Bangladesh, The Gambia, and Zanzibar, less than a third of the facilities assessed provided formal training to their EVS personnel. 7 Similar investigations in Ghana, Tanzania, and Nigeria have also highlighted deficiencies in the training, knowledge, and practices of EVS personnel. 24-26
According to these studies, training for both healthcare staff and hospital cleaners represents an enormous opportunity for quality improvement. In the present study, the absence of appropriate and up-to-date safety training in hospitals was seen as contributing to the cleaners' inability to effectively adhere to safety regulations. Moving forward, efforts to develop and maintain a safe atmosphere in Ethiopian hospitals must include the people who are largely responsible for cleaning them.
Hospital policy may also affect the observance of safety protocols. 27 Despite their key role in infection prevention and control (IPC), little reference is made to cleaning staff in many of the international and regional IPC/environmental hygiene guidelines. The absence of cleaners among the key stakeholders included in the WHO Essential Environmental Health Standards in Health Care, generally referred to as the gold standard, is a glaring example of this omission. 28 Past studies also point to a generalized neglect of cleaners in LMICs; cleaners have little control over their role, responsibilities, and work environment. 8,24 In the present study, workers' perceptions of hospital administrative models were identified as impediments to their workplace safety. EVS personnel had the impression that the hospitals' administrative models were geared to prioritize technical outputs over people and the environment. However, these viewpoints are not limited to medical establishments. Within the wider context of LMICs, there is a societal undervaluing of these individuals' roles and rights. 7,8 Even so, small efforts can be made to begin to address these challenges, beginning with the work environment.
EVS personnel, healthcare providers, and the rest of the hospital staff must cultivate productive, mutually respectful relationships. While hospital training is crucial for preventing healthcare-associated infections (HCAIs), it also has the potential to influence relationships with healthcare providers, foster recognition of cleaning staff as valuable members of the workforce, and assist cleaning staff in understanding the importance of their role in infection prevention. 8 Nonetheless, it is important to recognize that without broader system changes, the benefits of training may not be fully realized. A successful program, according to the WHO core components for IPC programs, must work across the entire system and include organizational and cultural change. 29,30 This should include a stronger emphasis on staffing and cleaning equipment. 31 The physical environment, as an element of the workplace, has a direct impact on worker safety. 32 Overcrowding in hospitals harms healthcare delivery and outcomes. 33 Despite WHO recommendations to reduce hospital overcrowding, it continues to be a problem in most African hospitals today. 34 The primary cause is a mismatch between bed supply and demand, as well as a poor flow of patients through beds. As demand increases and the bed supply shrinks, flow through hospitals becomes impaired. 35 EVS personnel in the current study reported that the recent crowded working environment produced by the pandemic jeopardized their ability to work safely. As a result, today's managers must alter their management culture to make significant progress in these areas. First and foremost, it is critical to place a strong emphasis on diverting patients to community services and providing additional services in the community that are often provided in hospitals (eg, hospital outreach programs). Further significant improvements will necessitate large infrastructure and workforce investments in order to expand the workforce's flexibility and the healthcare system's capacity. 36 Because of the trade-offs between hazardous exposures and the challenges of donning, wearing, and doffing PPE, EVS personnel frequently fail to properly adhere to PPE and infection control protocols. 37,38 Individual-level factors including knowledge, beliefs, attitudes, risk perception, and socio-demographics have consistently been highlighted as factors that influence PPE-related behaviors and safety compliance in several studies conducted throughout the developing world. 21,39-41 EVS personnel, managers, and institutions must work together to improve the safety culture in healthcare facilities. This culture necessitates a company-wide commitment to developing, implementing, evaluating, and maintaining effective and current safety practices. 42 Although organizational and cultural considerations in the context of workplace safety have garnered a lot of attention in recent years, individual-level factors affecting healthcare worker safety have received less attention. 7 Several individual-level characteristics were observed to contribute to poor compliance and other safety-related outcomes in the current study. While some EVS personnel relied on their own risk assessments when selecting whether or not to use PPE, others had significant knowledge gaps when it came to correct PPE use, transmission modes, and other infection control concerns. These mindsets eventually lead to increased risk-taking and an inability to prepare for the next unknown. 8
According to a Tanzanian study, however, these beliefs, habits, and knowledge gaps can be altered through scientific problem-based training programs. 24 There are some limitations to the current study. This is a 3-site study, and the findings are unlikely to be representative of all other hospitals. Other hospitals will inevitably have their own characteristics that mediate barriers to safe practices; however, some of those found in this study will likely resonate there as well. Furthermore, social desirability may skew participant responses, causing them to deliver socially desirable answers. As a result, it is impossible to rule out the possibility of some individuals being reluctant to share their real-life experiences. The PPE findings described in this study are almost entirely based on the participants' perceptions, rather than empirical evidence, such as assessments of protective garment effectiveness, durability, and fit. The reported results have not been independently corroborated. Though perceptions are important, they can be skewed by passion and vested interests, and thus may fail to accurately reflect actual circumstances. Aside from these limitations, the study has several strengths. This is the first study in Ethiopia that provides a full and meaningful assessment of barriers to safe practices among EVS personnel.
Future Directions for Research
The number of hospitals and regions included in this study was limited by a lack of research resources; future research should include a broader network of hospitals. Future research may also be required to determine the expenses associated with injuries caused by unsafe work practices, as well as worker and process downtime due to such injuries. Furthermore, while research and, more broadly, publications from LMICs are scarce, what is available illustrates that EVS personnel are a neglected part of the health workforce, with either no or inadequate training; the current study also demonstrated that training is an important determinant of safe practices, although it examined training only in general terms. The type and frequency of training, as well as the information presented, may need to be investigated further in future studies. Finally, in terms of the research agenda, more work should be done to see how findings from high-income countries might complement or be coupled with those from LMICs' neglected frontlines.
Conclusion
This was the first study to look into what drives environmental service workers to engage in unsafe workplace practices. Changes in the organization's policies, processes, managerial actions and priorities, and resources dedicated to safety are all required to implement a safety culture. Furthermore, the commitment to, and support of, safety should be conveyed to workers at all levels through active and sincere engagement by those in leadership positions. Thus, improved access to personal protective equipment, decent working conditions, occupational health and safety training, mental health and psychosocial support, remuneration and incentives, and a supportive work environment, including a manageable workload, should be part of the solution efforts going forward. Finally, and importantly, individual accountability is key to improving and sustaining safe practices.
Author Contributions
AT: wrote the proposal; supervised the collection, entry, and analysis of data; and engaged in the development of the manuscript. AH, FE, AG: participated in the design, methodology, data analysis, and review of the manuscript. The final paper was read and approved by all authors.
Stomach-brain synchrony reveals a novel, delayed-connectivity resting-state network in humans
Resting-state networks offer a unique window into the brain’s functional architecture, but their characterization remains limited to instantaneous connectivity thus far. Here, we describe a novel resting-state network based on the delayed connectivity between the brain and the slow electrical rhythm (0.05 Hz) generated in the stomach. The gastric network cuts across classical resting-state networks with partial overlap with autonomic regulation areas. This network is composed of regions with convergent functional properties involved in mapping bodily space through touch, action or vision, as well as mapping external space in bodily coordinates. The network is characterized by a precise temporal sequence of activations within a gastric cycle, beginning with somato-motor cortices and ending with the extrastriate body area and dorsal precuneus. Our results demonstrate that canonical resting-state networks based on instantaneous connectivity represent only one of the possible partitions of the brain into coherent networks based on temporal dynamics.
Introduction
The parsing of the brain into resting-state networks (RSNs) has been widely exploited to study the brain's functional architecture in health and disease (Fox and Raichle, 2007). At long time scales, RSNs closely match the anatomical backbone of the brain (van den Heuvel et al., 2009; Honey et al., 2009; Shen et al., 2015). At short time scales (~10-100 s), spontaneous brain activity is characterized by the emergence and dissolution of network patterns encompassing and extending classical RSN topologies (Ponce-Alvarez et al., 2015; Shine et al., 2016) with rich temporal trajectories (Mitra et al., 2015). Temporal trajectories indicate the existence of delays between regions, whereas the methods most often used to parse brain activity into functional networks (seed-based correlation and independent component analysis) make the implicit assumption that RSNs are characterized by instantaneous or zero-delay connectivity. Therefore, we analyzed delayed connectivity in resting-state BOLD signals using techniques widely used in electrophysiological studies of large-scale brain dynamics (Lachaux et al., 1999) that quantify the stability of temporal delays between time series.
More specifically, we studied the delayed coupling between resting-state brain activity and a visceral organ, the stomach. The stomach continuously produces a slow electrical rhythm (0.05 Hz, one cycle every 20 s) that can be non-invasively measured (electrogastrogram, EGG [Koch and Stern, 2004]). The gastric basal rhythm is continuously (Bozler, 1945) and intrinsically (Suzuki et al., 1986) generated in the stomach wall by a network of specialized cells, the interstitial cells of Cajal (Sanders et al., 2014), which form synapse-like connections not only with gastric smooth muscle but also with afferent sensory neurons (Powley and Phillips, 2011). The stomach is an interesting candidate for large-scale brain coordination for several reasons. First, visceral inputs can reach a number of cortical targets (Critchley and Harrison, 2013;Park and Tallon-Baudry, 2014). Second, gastric frequency (~0.05 Hz) falls within the range of BOLD fluctuations that are used to define RSNs and that are free from known cardiac and respiratory artifacts (Glerean et al., 2012). Finally, the amplitude of alpha rhythm, the dominant rhythm in the human brain at rest, depends on the phase of gastric rhythm (Richter et al., 2017).
We simultaneously recorded brain activity with fMRI and stomach activity with EGG (Figure 1a) in 30 human participants at rest with open eyes. We then determined the regions in which spontaneous fluctuations in brain activity were phase synchronized with gastric basal rhythm; we refer to these regions as the gastric network.
EGG-BOLD phase coupling defines the gastric network
We first determined gastric frequency (Figure 1b) in each participant as the frequency of the largest spectral peak within the normogastric range (0.033-0.066 Hz). The mean EGG peak frequency across the 30 participants was 0.047 Hz (±SD 0.003, range 0.041-0.053). EGG peak frequency measured inside and outside the scanner did not differ (EGG outside the scanner measured in 29 of the 30 participants, mean 0.046 Hz ± SD 0.006; two-sided paired t-test, t(28)=0.35, p=0.725; Bayes Factor <0.001, indicating decisive evidence for the null hypothesis).
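As an illustration of this first step, the following sketch picks the gastric peak frequency as the largest spectral peak inside the normogastric band; the synthetic EGG trace and the sampling rate are assumptions for demonstration only.

```python
# Sketch: locate the gastric peak frequency as the largest spectral peak
# in the normogastric range (0.033-0.066 Hz). Synthetic EGG for illustration.
import numpy as np
from scipy.signal import welch

fs = 10.0                      # assumed EGG sampling rate (Hz)
t = np.arange(0, 900, 1 / fs)  # 15 minutes of signal
rng = np.random.default_rng(0)
egg = np.sin(2 * np.pi * 0.047 * t) + 0.5 * rng.standard_normal(t.size)

# Welch power spectral density; long segments give resolution below 0.01 Hz.
freqs, psd = welch(egg, fs=fs, nperseg=int(fs * 200))

# Restrict to the normogastric band and take the largest peak.
band = (freqs >= 0.033) & (freqs <= 0.066)
peak_freq = freqs[band][np.argmax(psd[band])]
print(f"Gastric peak frequency: {peak_freq:.3f} Hz "
      f"(one cycle every {1 / peak_freq:.1f} s)")
```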
In each participant and at each voxel, we quantified the degree of phase synchrony between the EGG signal and BOLD time series filtered around gastric frequency (Figure 1c). We computed the phase-locking value (PLV) (Lachaux et al., 1999), a measure widely used in electrophysiology that varies between zero when two time series show no consistent phase relationship (Figure 1c, bottom panel) and one when two time series have a consistent phase relationship over time (Figure 1c, upper panel). PLV has three important properties: PLV is high for any lag between the time series as long as this lag is constant over time, PLV is independent of signal amplitude, and PLV gives no indication on the directionality of interactions between the two time series. In each participant and at each voxel, we estimated the PLV that could be expected by chance from EGG signals that were shifted with respect to the BOLD time series. The empirical PLVs were then compared with chance-level PLVs using a cluster-based statistical procedure that intrinsically corrects for multiple comparisons (Maris and Oostenveld, 2007). Significant phase coupling between the EGG and resting-state BOLD time series occurred in twelve nodes (voxel threshold p<0.01, two-sided paired t-test between observed and chance PLV; cluster threshold corrected for multiple comparisons, Monte-Carlo p<0.05). Exact p-values are reported for each cluster in Table 1.

eLife digest

The brain is always active. Even when it is not receiving sensory input, it generates its own spontaneous activity. This activity shapes how we interpret future sensory signals and creates our inner mental world. Moreover, this spontaneous activity is not random. When a healthy volunteer lies inside a brain scanner without performing any task, his or her brain shows predictable patterns of activity. Specific groups of brain regions, often with related roles, become active at the same time as one another. Each set of regions is referred to as a resting state network.

Of course, the brain does not operate in isolation from the rest of the body. Our internal organs continuously send signals to the brain via the spinal cord and cranial nerves. Specialized cells in the stomach wall in particular produce a slow rhythmic pattern of electrical activity. Known as the gastric rhythm, this activity helps ensure that the stomach muscles contract at the correct speed for digestion. But the stomach also produces this rhythm even when empty, suggesting that it has other roles too.

To find out what these might be, Rebollo et al. placed electrodes on the abdomen of healthy volunteers lying inside brain scanners. By examining the volunteers' spontaneous brain activity, Rebollo et al. identified a new resting state network that is active in synchrony with the gastric rhythm. The regions within this so-called gastric network are not active at the same time as each other, but instead become active in a specific sequence that is repeated at each gastric cycle. Many of the regions within the gastric network belong to other resting state networks too. Some of the regions help regulate automatic bodily functions such as heart rate, while others process information about the body's position in space.

The existence of the gastric network suggests a link between the automatic regulation of processes such as digestion, and spontaneous brain activity. Future studies could examine whether this link impacts perception and cognition, and whether this link plays a role in disorders where the connection between the digestive system and the brain appears to be altered.
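Returning to the coupling measure itself, a minimal sketch of the PLV and its time-shifted chance level is given below. The fMRI sampling rate, the phase-drifting synthetic signals, and the number of shifts are all illustrative assumptions; in the actual analysis both signals are first band-pass filtered around each participant's gastric peak frequency.

```python
# Sketch: phase-locking value (PLV) between two narrow-band signals,
# PLV = |mean(exp(i*(phi_x - phi_y)))|, with phases from the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """PLV between two band-limited signals of equal length (0..1)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs, f_gastric = 0.5, 0.047          # assumed fMRI sampling rate and EGG peak (Hz)
t = np.arange(0, 1800, 1 / fs)      # 30 min of signal
rng = np.random.default_rng(1)

# Gastric rhythm with slowly drifting phase (it is not perfectly periodic) ...
phase = 2 * np.pi * f_gastric * t + np.cumsum(0.1 * rng.standard_normal(t.size))
egg = np.sin(phase)
# ... and a "BOLD" signal locked to it at a fixed, non-zero phase lag.
bold = np.sin(phase - 0.6) + 0.3 * rng.standard_normal(t.size)

print(f"Empirical PLV: {plv(egg, bold):.2f}")    # high despite the lag

# Chance level: PLV recomputed after time-shifting the EGG, as in the text.
shifts = rng.integers(t.size // 4, 3 * t.size // 4, size=200)
chance = np.mean([plv(np.roll(egg, s), bold) for s in shifts])
print(f"Chance-level PLV: {chance:.2f}")         # expected to be lower
```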
The gastric network (Table 1, Figure 2a) comprises the right primary somatosensory cortex (SIr), bilateral secondary somatosensory cortices (SII), medial wall motor regions (MWM), comprising the caudate cingulate motor zone (CCZ), posterior rostral cingulate motor zone (RCZp), and right supplementary motor area (SMA), a region of the right occipito-temporal cortex overlapping the extrastriate body area (EBA), as well as nodes in the posterior cingulate sulcus (pCS), dorsal precuneus (dPrec), occipital cortex (ventral and dorsal portions, vOcc and dOcc), retrosplenial cortex (RSC), and superior parieto-occipital sulcus (sPOS). Estimating chance-level PLV by computing gastric-BOLD coupling between the BOLD signal of one participant and the EGG of the other 29 participants resulted in a qualitatively similar network, with coupling occurring either in the same or neighboring voxels (Supplemental Figure 2). The average shared variance between the EGG and BOLD signals across participants, as estimated from squared coherence, ranged from 12% in the left anterior dorsal precuneus to 16.9% in the posterior cingulate sulcus (Table 1).
An analysis of covariance across nodes did not reveal significant links between gastric-BOLD coupling strength (defined as the difference between empirical and chance PLV) and gender (F(1, 28)=1.02, p=0.46), body mass index (BMI) (F(1, 28)=1.3, p=0.3), or trait anxiety scores (F(1, 28)=1.02, p=0.47). Statistics (including Bayes Factor) per node are reported in Table 2. Note that there is less variation in BMI in our sample than in the general population since all participants had a BMI smaller than 25.

Controls: gastric frequency specificity, false-positive rate, and head micromovements

To assess the robustness of the gastric network, we ran several controls. First, we verified that EGG-BOLD coupling was specific to gastric frequency. We filtered both EGG and BOLD time series at frequencies that were slightly offset from the peak gastric frequency of each participant and recomputed cluster statistics. Summary statistics (sum of the absolute t-values resulting from the paired t-test between empirical and chance-level PLV at each voxel, either summed across the whole brain or within the gastric network) decreased when shifting below or above the gastric peak frequency (Figure 2b). This result indicates that the gastric network corresponds to BOLD fluctuations specifically occurring at gastric frequency. Second, we estimated the likelihood of false positives with our statistical procedure. We randomly sampled surrogate datasets in which a random time shift was applied to the EGG of each participant a thousand times. Next, we tested whether any of those 1000 combinations would generate summary statistics as large as the original data when compared with the chance-level estimate we used to determine significantly coupled regions at the group level (Figure 2c). This result was never observed, indicating that the probability of our results being a false positive is less than 0.001.
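The frequency-specificity control lends itself to a compact sketch as well: coupling is recomputed with the band-pass filter centred at offsets below and above the gastric peak, and should be maximal at the peak itself. Filter design, band width, and signals are again illustrative assumptions rather than the study's exact parameters.

```python
# Sketch of the frequency-specificity control: EGG-BOLD coupling should peak
# when both signals are filtered at the gastric frequency and fall off when
# the filter centre is offset. All signals and parameters are illustrative.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs, f_gastric = 0.5, 0.047
t = np.arange(0, 1800, 1 / fs)
rng = np.random.default_rng(2)
phase = 2 * np.pi * f_gastric * t + np.cumsum(0.1 * rng.standard_normal(t.size))
egg = np.sin(phase) + 0.2 * rng.standard_normal(t.size)
bold = np.sin(phase - 0.6) + 0.5 * rng.standard_normal(t.size)

def bandpass(x, f0, half_width, fs):
    """Zero-phase Butterworth band-pass around centre frequency f0."""
    b, a = butter(2, [f0 - half_width, f0 + half_width], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def plv(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Offset the filter centre below and above the gastric peak, as in Figure 2b.
for offset in (-0.015, -0.009, -0.003, 0.0, 0.003, 0.009, 0.015):
    f0 = f_gastric + offset
    coupling = plv(bandpass(egg, f0, 0.003, fs), bandpass(bold, f0, 0.003, fs))
    print(f"filter centre {f0:.3f} Hz: PLV = {coupling:.2f}")
```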
Third, we verified that gastric-BOLD coupling strength was unrelated to BOLD power at gastric frequency. We computed the correlation between BOLD power at gastric frequency and coupling strength for each participant and voxel, and found the two measures to be unrelated (Fisher z-transformed Pearson correlation coefficients tested against zero, t(29)=1.19, p=0.24; Bayes factor <0.001, indicating decisive evidence for the absence of a link between coupling strength across the brain and BOLD power at gastric frequency).
Finally, we investigated whether submillimeter head movements might have influenced the results. We defined voxel motion susceptibility as the regression coefficient of head movement (Power et al., 2012) from the BOLD time series. Coupling strength and voxel motion were unrelated (Fisher z-transformed Pearson correlation coefficients tested against zero, t(29)=-0.34, p=0.73; Bayes factor <0.001, indicating decisive evidence for the absence of a link between coupling strength and head movement). Stomach contractions might also lead to small head movements that could be phase locked to gastric rhythm. Although gastric rhythm is continuously produced even during fasting, it is larger during stomach contractions. Thus, we tested whether the effects we found were due to differences in EGG power (or frequency) across participants. We found no link between coupling strength in the 12 nodes and EGG power (ANCOVA, F(1, 28)=0.9, p=0.51; all Bayes factor <0.33, indicating substantial evidence for the null hypothesis) nor between coupling strength and EGG peak frequency (ANCOVA, F(1, 28)=1.6, p=0.17; Bayes Factor <0.33, indicating substantial evidence for the null hypothesis in 9 of 12 nodes; Bayes Factor <1.3 in the three remaining nodes, indicating anecdotal or no evidence).
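The group-level test used repeatedly in these controls, Fisher z-transformed correlation coefficients tested against zero, can be written in a few lines; the per-participant correlations below are hypothetical.

```python
# Sketch: per-participant Pearson correlations are Fisher z-transformed
# (arctanh) and tested against zero with a one-sample t-test.
import numpy as np
from scipy import stats

# Hypothetical per-participant correlations (n = 30), e.g. between coupling
# strength and BOLD power at gastric frequency across voxels.
rng = np.random.default_rng(3)
r_per_subject = np.clip(rng.normal(0.02, 0.15, size=30), -0.99, 0.99)

z = np.arctanh(r_per_subject)          # Fisher z-transform
t_val, p_val = stats.ttest_1samp(z, 0.0)
print(f"t({len(z) - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```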
The gastric network is thus specific to individual gastric peak frequency, is highly unlikely to be a chance finding, is not dependent on BOLD power, and is not linked to spurious effects of head movement on the BOLD signal.

Figure 2. The gastric network. (a) Regions significantly phase synchronized to gastric rhythm (n = 30, voxel level threshold, p<0.01 two-sided; cluster level threshold, p<0.05, two-sided, intrinsically corrected for multiple comparisons). (b) Gastric-BOLD coupling is specific to gastric frequency. Summary statistics in the gastric network are maximal at the EGG peak frequency (orange) and decrease when offsetting the filter to higher or lower frequencies. (c) Summary statistics distribution under the null hypothesis from 1000 surrogate datasets in which the EGG signal was time-shifted with respect to the BOLD signal. The empirical finding (orange arrow) falls well outside the null distribution. (d) The gastric network (orange) comprises the following somatotopically organized regions: primary somatosensory cortex (Panel SI, with peak activations during stimulation of the trunk and hand (Fabri et al., 2005), finger (Weibull et al., 2008), face (Kopietz et al., 2009), and mouth, that is, teeth (Trulsson et al., 2010), lips and tongue (Miyamoto et al., 2006)); secondary somatosensory cortex (Panel SII, cytoarchitectonic subdivisions of SII according to Scheperjans et al. (2008); OP1, parietal operculum 1 and OP4, parietal operculum 4, presented on a horizontal slice at z = 18); medial wall motor areas (Panel MWM, with peak activations during movement (Amiez and Petrides, 2014) in the caudate cingulate zone (CCZ), posterior rostral cingulate zone (RCZp) and supplementary motor area (SMA)); and extrastriate body area (Panel EBA, with peak activations during body part viewing (Orlov et al., 2010); note that because of the visualization on an inflated cortex, the extension of the EBA node to the cerebellum is not visible). (e) Regions in which the alpha and gastric rhythms are coupled (green, modified from Richter et al., 2017). Abbreviations are the same as those in Table 1. DOI: https://doi.org/10.7554/eLife.33321.005

The gastric network includes body maps associated with touch, action and vision

We then examined the areas comprising the gastric network in more detail. By definition, the gastric network is composed of regions with activity that co-fluctuates with gastric basal rhythm. Five nodes of the gastric network also share a common functional feature, somatotopic organization, as detailed in Figure 2d.
The gastric network includes the following regions with a well-known body representation based on touch: the right primary somatosensory cortex in the hand and mouth region and bilateral secondary somatosensory cortices. We quantified the overlap between these gastric network nodes and known cytoarchitectonic subdivisions of the somatosensory cortices (Geyer et al., 2000;Grefkes et al., 2001). The gastric network mostly overlapped with area 1 (60.2% of the SIr node) and to a lesser extent, with area 2 (13.1%) and area 3b (9.9%). The SII nodes of the gastric network overlapped with the secondary somatosensory cortices and more precisely with the somatotopically organized subdivisions of the parietal operculum OP1 and OP4 (22). The right SII node mostly overlapped with area OP1 (35.2% of the node), while the left SII node overlapped with both OP1 (21.7%) and OP4 (14.9%). Additionally, both left and right SII nodes extended more ventrally to the temporal cortex.
The gastric network also includes three medial wall motor regions (CCZ, RCZp, and SMA) that reveal their somatotopic organization when participants are required to move specific body parts (Amiez and Petrides, 2014). Note that gastric-BOLD coupling also included a more posterior area in the cingulate sulcus (pCS). Finally, the gastric network overlapped with the EBA, a region of the lateral occipital cortex activated when participants view images of body parts (Downing et al., 2001) with a clear somatotopic organization (Orlov et al., 2010). The overlap between the gastric network and EBA occurred in the lower face region, which includes the mouth. Thus, the gastric network overlaps with body maps classically associated with different modalities, including touch in somatosensory cortices, action in MWM and vision in the EBA.
The gastric network includes regions involved in the generation of the alpha rhythm
Finally, we found gastric-BOLD coupling in the posterior bank of the parieto-occipital sulcus (dOcc and vOcc) and retrosplenial cortex. In a previous study using magneto-encephalography (Richter et al., 2017), the amplitude of the alpha rhythm in these regions was modulated by gastric phase (Figure 2e).
Gastric-brain coupling in the right posterior insula
The insula is one region that receives visceral inputs (Critchley and Harrison, 2013;Park and Tallon-Baudry, 2014), but it did not appear to be significantly phase synchronized to the EGG using our whole-brain, statistically conservative procedure. Thus, we performed post-hoc region-of-interest analysis of the three insular subdivisions (anterior dorsal, anterior ventral, posterior [Deen et al., 2011]) in both hemispheres. Only the right posterior insula showed evidence for gastric-BOLD coupling across participants (empirical vs. chance-level PLV, paired t-test, two sided, t(29)=2.78, p=0.043, Bonferroni corrected; all other regions, p>0.21).
The use of statistical thresholds results in binary outputs. To get a finer-grained picture, we computed effect sizes in the 6 insular subdivisions and in the 12 gastric network nodes (Cohen's d for the difference between empirical and chance PLV on the mean time series in each region of interest). Mean Cohen's d across gastric network nodes was 1.19 ± 0.21 SD, ranging from 0.80 in the dorsal occipital cortex to 1.62 in the right secondary somatosensory cortex. The right posterior insula had an effect size of 0.84, within the lower range of the gastric network. All other insula subdivisions displayed smaller effect sizes (right: dorsal anterior 0.61, ventral anterior 0.54; left: posterior 0.60, dorsal anterior 0.41, ventral anterior 0.52). Thus, the right posterior insula does show evidence for coupling with the stomach, with an effect size comparable to that of the weakest nodes of the gastric network, provided signal-to-noise ratio is first increased by averaging within a region of interest.
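For reference, the effect size used here can be sketched as follows; one common convention for a paired design (empirical versus chance PLV in the same participants) computes Cohen's d on the within-participant differences, with hypothetical values standing in for the real ROI data.

```python
# Sketch: Cohen's d for a paired difference (empirical vs chance PLV within a
# region of interest), computed as mean(difference) / std(difference).
import numpy as np

rng = np.random.default_rng(4)
plv_empirical = rng.normal(0.55, 0.08, size=30)   # hypothetical, one ROI
plv_chance    = rng.normal(0.48, 0.08, size=30)

diff = plv_empirical - plv_chance
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"Cohen's d = {cohens_d:.2f}")
```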
Partial overlap between gastric network and autonomic networks
Is the gastric network specific to the stomach, or is it also linked to other organs such as the heart? We determined brain regions (FWE corrected p<0.05; Figure 3A) fluctuating with high- and low-frequency heart rate variability, which reflect parasympathetic and a mixture of sympathetic and parasympathetic outputs, respectively. Of the gastric network, 30% was also related to heart rate variability, mostly in medial motor regions and in the posterior cingulate cluster (low-frequency heart rate variability), and, to a lesser extent, in the dorsal occipital cluster (high-frequency heart rate variability). Because we did not record any measure that isolates sympathetic output, we additionally analyzed the overlap between the gastric network and known sympathetic areas (Beissner et al., 2013). This overlap was very limited (34 voxels, 4.7% of the gastric network) and confined to SIr and the anterior parts of medial motor regions (Figure 3-figure supplement 1).
We also determined brain regions that correlate with pupil diameter (n = 20 due to data loss or artefacts; Figure 3B). The strongest correlations were found in occipital regions, somato-motor cortices and medial wall motor regions. Of the gastric network, 17% (SI, SIIr, MWM and EBA) overlapped with regions correlating with pupil diameter. Shared variance between pupil diameter and EGG, estimated from squared coherence, was 9.7 ± 2.5%. Coupling strength averaged across SI, SIIr, MWM and EBA did not correlate with shared pupil-EGG variance (mean r = 0.05, p=0.82, BF = 0.17, which indicates substantial evidence for the null hypothesis).
Temporal sequence within a gastric cycle and delayed connectivity between the nodes of the gastric network

In the different nodes of the gastric network, gastric-brain coupling occurred with different phase delays with respect to the gastric cycle. We analyzed between-participant phase-delay consistency and found temporal delays of ~3.3 s between the earliest nodes (somatosensory cortices) and latest nodes (dorsal precuneus and EBA) of the gastric network (Figure 4a,b). The delay in the right posterior insula was in the range of the earliest nodes of the gastric network (Figure 4a). The Watson-Williams test for circular data confirmed that different nodes of the gastric network were coupled to the gastric rhythm with different phase delays (F(11, 29)=5.22, p<10⁻⁶), indicating a precise temporal sequence of activations within each gastric cycle. Thus, each node of the gastric network appears to be characterized by a specific temporal delay with respect to gastric phase. These temporal delays were accompanied by delayed functional connectivity (FC) between the nodes of the gastric network.
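The conversion from phase delays to seconds is worth making explicit: at a gastric frequency near 0.047 Hz one cycle lasts about 21 s, so a phase difference of roughly 1 rad corresponds to ~3.3 s. The sketch below, with hypothetical per-participant phases, computes circular mean phases for two nodes and converts their difference into a delay.

```python
# Sketch: circular mean phase per node and conversion of a phase difference
# into a temporal delay within a gastric cycle. Phase values are hypothetical.
import numpy as np

f_gastric = 0.047                      # Hz, one cycle ~ 21 s

def circular_mean(phases):
    """Mean direction of a set of angles (radians)."""
    return np.angle(np.mean(np.exp(1j * np.asarray(phases))))

# Hypothetical per-participant phase delays (radians) for two nodes.
rng = np.random.default_rng(5)
phases_sii = rng.vonmises(mu=0.2, kappa=8, size=30)   # early node
phases_eba = rng.vonmises(mu=1.2, kappa=8, size=30)   # late node

dphi = circular_mean(phases_eba) - circular_mean(phases_sii)
dphi = np.angle(np.exp(1j * dphi))     # wrap to (-pi, pi]
delay_s = dphi / (2 * np.pi * f_gastric)
print(f"Phase difference {dphi:.2f} rad -> delay {delay_s:.1f} s")
```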
We first illustrated this point with an example in a single participant (Figure 4c), with two 200 s time series of the gastric network (MWM and EBA). The two time series systematically co-varied with a temporal delay. The existence of temporal delays between the nodes of the gastric network is one of the reasons why the gastric network could not be observed in prior studies. Indeed, fMRI RSN studies are typically based on measures of instantaneous FC, such as shared variance estimated from the squared Pearson correlation coefficient, which does not detect the temporally delayed interactions revealed here. These measures differ from delayed FC measures based on the consistency of phase delays over time, such as shared variance estimated from squared coherence. In the example illustrated in Figure 4c, instantaneous FC between the two time series is 56%, whereas delayed FC is 86%. If we advance the timing of the medial wall time series by 2 s, instantaneous FC increases to 86%. This finding shows that the difference between the two FC estimates is due to temporal delays only.
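The distinction between the two FC estimates can be demonstrated directly: for two signals that co-vary with a constant lag, the squared Pearson correlation (instantaneous FC) underestimates the shared variance that magnitude-squared coherence (delayed FC) recovers. The signals below are synthetic stand-ins, not the participant data.

```python
# Sketch contrasting instantaneous FC (squared Pearson correlation) with
# delayed FC (squared coherence) for two signals co-varying at a fixed lag.
import numpy as np
from scipy.signal import coherence

fs, f0 = 0.5, 0.047
t = np.arange(0, 1800, 1 / fs)
rng = np.random.default_rng(6)
phase = 2 * np.pi * f0 * t + np.cumsum(0.1 * rng.standard_normal(t.size))
x = np.sin(phase) + 0.2 * rng.standard_normal(t.size)
y = np.sin(phase - 1.0) + 0.2 * rng.standard_normal(t.size)  # ~3.4 s lag

# Instantaneous FC: shared variance from the squared Pearson correlation.
r = np.corrcoef(x, y)[0, 1]
print(f"Instantaneous FC (r^2): {r**2:.2f}")

# Delayed FC: shared variance from squared coherence at the gastric frequency.
freqs, cxy = coherence(x, y, fs=fs, nperseg=256)  # magnitude-squared coherence
idx = np.argmin(np.abs(freqs - f0))
print(f"Delayed FC (squared coherence at {freqs[idx]:.3f} Hz): {cxy[idx]:.2f}")
```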
We then estimated both instantaneous and delayed FC between all nodes of the gastric network in all participants. Delayed FC between gastric nodes (mean 40.8% ± SD 8%, ranging from 26.5% between the right primary somatosensory cortex and RSC, up to 63.9% between the ventral and dorsal occipital cortices) was systematically larger (paired t-test, t(29)=9.02, p<10⁻¹⁰) than instantaneous FC (mean 30.2% ± SD 11%, ranging from 9.3% between the dorsal precuneus and right SII, up to 61.2% between right and left SII). Next, we verified (Figure 4d,e) that two regions belonging to both the gastric network and the same RSN (i.e. two regions of the gastric network with little temporal delay, such as MWM and SIr) would display large values of both delayed and instantaneous FC, whereas two regions belonging to the gastric network but not to the same classical RSN (i.e. two regions of the gastric network with a large temporal delay, such as MWM and EBA) would show large delayed FC and small instantaneous FC. Thus, in contrast to classical RSNs, the gastric network appears to be characterized by between-node delayed connectivity.
Slow temporal fluctuations in gastric-BOLD coupling are associated with changes in BOLD amplitude and occur simultaneously in all nodes

Thus far, we have identified a sequence of activation that occurs at each gastric cycle, which characterizes gastric-BOLD coupling. We then investigated whether slow temporal fluctuations in the strength of gastric-BOLD coupling were accompanied by fluctuations in BOLD amplitude. As illustrated in Figure 5a, we found that episodes of elevated gastric-BOLD synchronization corresponded to episodes of increased BOLD amplitude. Indeed, time-varying PLV and BOLD time series, computed in sliding time windows of 60 s (approximately three gastric cycles), were significantly correlated (Fisher z-transformed Pearson correlation coefficients t-tested against zero, Bonferroni corrected p<0.006 in all gastric nodes; mean r across nodes 0.18 ± SD 0.02, ranging from 0.15 in MWM to 0.22 in SIIl).
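A sliding-window version of the PLV, and its correlation with the BOLD amplitude envelope, can be sketched as follows; the window length matches the 60 s used above, while the synthetic signals, whose coupling and amplitude wax and wane together by construction, are purely illustrative.

```python
# Sketch: time-varying PLV in sliding 60 s windows (~3 gastric cycles),
# correlated with the BOLD amplitude envelope in the same windows.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import pearsonr

fs, f0 = 0.5, 0.047
t = np.arange(0, 1800, 1 / fs)
rng = np.random.default_rng(7)
phase = 2 * np.pi * f0 * t + np.cumsum(0.1 * rng.standard_normal(t.size))
# Coupling (and amplitude) wax and wane slowly over the session.
gain = 0.5 + 0.5 * np.sin(2 * np.pi * t / 600)
egg = np.sin(phase)
bold = gain * np.sin(phase - 0.6) + 0.4 * rng.standard_normal(t.size)

phi_e = np.angle(hilbert(egg))
phi_b = np.angle(hilbert(bold))
amp_b = np.abs(hilbert(bold))

win = int(60 * fs)                     # 60 s windows
step = win // 2
plv_t, amp_t = [], []
for start in range(0, t.size - win, step):
    sl = slice(start, start + win)
    plv_t.append(np.abs(np.mean(np.exp(1j * (phi_e[sl] - phi_b[sl])))))
    amp_t.append(amp_b[sl].mean())

r, p = pearsonr(plv_t, amp_t)
print(f"Time-varying PLV vs BOLD amplitude: r = {r:.2f}, p = {p:.3f}")
```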
Next, we tested whether slow temporal fluctuations in gastric-BOLD synchronization occurred simultaneously or independently in the different nodes of the gastric network (Figure 5b). We computed the correlation between time-varying PLVs for all possible node pairs in each participant and found that at the group level, this correlation was significantly positive (Fisher z-transformed average Pearson correlation coefficients against zero, t(29)=9.22, p<10⁻¹⁰; mean r = 0.129 ± SD 0.075, range across participants 0.02-0.35). To determine whether the overall pattern of synchronous fluctuations in gastric-BOLD coupling strength was driven by specific node pairs, we investigated correlations between node pairs. All node pairs but RSC-SIr, sPOS-RSC, sPOS-SIIr and sPOS-ladPrec showed a significant positive correlation at the group level. Thus, slow temporal fluctuations in gastric-BOLD coupling are associated with changes in BOLD amplitude and occur simultaneously in all nodes.
Discussion
Here, we reveal the existence of the gastric network, comprising brain regions with BOLD time series that are phase synchronized with gastric basal rhythm. Within the gastric network, approximately 15% of the BOLD variance is explained by gastric-BOLD phase synchrony. The gastric network cuts across classical RSNs and shows only partial overlap with autonomic control regions. A number of brain regions composing the gastric network have convergent functional properties involved in mapping bodily space through touch, action and vision. The network is characterized by a precise temporal sequence of activations within a gastric cycle, beginning with somato-motor cortices and ending with extra-striate body area and dorsal precuneus. This temporal sequence is accompanied by delayed functional connectivity between nodes of the gastric network, which explains why this RSN could not be identified with standard correlation methods that only capture instantaneous connectivity. Furthermore, slow temporal fluctuations in gastric-BOLD coupling are associated with changes in BOLD amplitude and occur simultaneously in all nodes. Thus, our results suggest that canonical RSNs based on instantaneous connectivity represent only one of the possible partitions of the brain into coherent networks based on temporal dynamics.
Neural origin of gastric-BOLD coupling
SIr, SII and medial wall motor regions likely receive direct gastric inputs. The stimulation of the splanchnic (spinal) nerve that innervates the stomach evokes responses in contralateral SI and bilateral SII in several mammals (Amassian, 1951), and the spinothalamic tract was recently shown to target MWM in monkeys (Dum et al., 2009). Vagal stimulation can also evoke responses in somatomotor cortices (Ito et al., 2003). In addition, these regions showed a phase advance compared with that of other nodes. Thus, these areas could be the entry point of gastric afferences. We found the right posterior insula, a region that receives direct cardiac inputs in monkeys (Zhang et al., 1998) and is considered a visceral cortex, to be coupled with the stomach, with coupling similar to the weakest node of the gastric network. In addition, the right posterior insula appeared with a phase advance, in line with its role in visceroception. To be revealed, gastric-BOLD coupling in the right posterior insula required a region-of-interest approach, that is, an increase of the signal-to-noise ratio by averaging across voxels. The modest involvement of the insula in the present data might be due to the absence of an interoceptive task. Indeed, BOLD signal in the insula increases when participants explicitly monitor a visceral variable (see, e.g. [Critchley et al., 2004]).
Regions receiving direct visceral inputs are also early nodes of the gastric network. This suggests that the BOLD fluctuations locked to the gastric rhythm have a neural origin. An additional argument for a neural origin is that we found gastric-BOLD coupling in parieto-occipital regions, where neural activity in the alpha range is modulated by gastric phase (Richter et al., 2017). However, below, we examine the possibility that other non-neural mechanisms might contribute to gastric-BOLD coupling.
Artefactual BOLD fluctuations caused by head movements driven by stomach contractions seem unlikely. Indeed, gastric-BOLD coupling was neither related to head movement nor to EGG power that increases during stomach contractions. Another possibility is a vascular artifact. During digestion, gastric blood flow does indeed vary (Matheson et al., 2000), but cerebral blood flow is unaltered (Gallavan et al., 1980). Artificial distension of the stomach can cause increases in peripheral blood pressure (Min et al., 2011), but this peripheral increase is mostly due to the insertion of a bag catheter, not to its inflation (Cantù et al., 2008). Finally, spontaneous fluctuations in blood pressure in humans occur at approximately 0.1 Hz (so-called Mayer waves), which is much faster than gastric rhythm. Thus, a vascular effect seems unlikely, and the hypothesis that activity in the gastric network is driven by neural activity in areas directly receiving ascending inputs appears more plausible.
What is the functional role of the gastric network?
Twenty years and thousands of articles after the discovery of the default network, the debate on its functional role at rest or during tasks is still open. Thus, any discussion of the functional role of the gastric network can only be tentative and speculative at this stage. Several non-mutually exclusive interpretations can nevertheless be considered.
The functional role of the coupling between stomach and body maps might be related to homeostatic regulations, which would account for the partial overlap between the gastric network and regions involved in heart rate variability. More specifically, the gastric network might be involved in the regulation of digestion, which is accompanied by changes in cardiac output and heart rate (Kelbaek et al., 1989). In addition, the unusual experimental setting with abdominal electrodes and a moderate fasting state might have drawn participants' attention toward their internal state, notably of hunger. Since participants had been fasting for only 2 hr, their state of hunger was probably rather moderate and unlikely to have dominated their spontaneous thoughts for 20 min.
We find areas containing body maps in the gastric network. This could simply indicate that the stomach, like any other body part, such as the hand, is represented in any body map. The body maps of the gastric network are classically associated with different sensory modalities and resting-state networks. In addition to the primary and secondary somatosensory cortices, the gastric network includes MWM (CCZ, RCZp and SMA) that are involved in motor preparation and display a clear somatotopic organization (Amiez and Petrides, 2014;Picard and Strick, 2001). The gastric network also comprises the EBA, a functional region in the occipito-temporal cortex that selectively responds to visual images of the human body (Downing et al., 2001;Weiner and Grill-Spector, 2011) and is causally involved in body visual recognition (Urgesi et al., 2007), with a fine topographical organization (Orlov et al., 2010). The stomach, an organ that cannot be easily touched, moved or seen, thus appears to be mapped in body maps related to touching, moving or seeing the body. However, the areas where the stomach is represented are more multi-sensory than usually held. The EBA is not purely visual since it is also activated when participants move or imagine body parts without visual feedback (Astafiev et al., 2004), as well as during haptic recognition of body parts (Kitada et al., 2009;Costantini et al., 2011). The primary somatosensory cortex combines internal and external bodily information since it receives both tactile and visceral afferents (Brüggemann et al., 1997;Follett and Dirks, 1994). Medial wall motor regions do not only contain motor maps but also receive visceral inputs (Levinthal and Strick, 2012). Thus, activity in 5 out of the 12 nodes of the gastric network could be simply explained by a representation of the stomach in the brain's body maps.
However, the gastric network is not limited to body maps; it also comprises regions that play a role in mapping the external space in bodily coordinates, namely, the right superior parieto-occipital sulcus, dorsal precuneus and RSC. The superior parieto-occipital sulcus region is a visuo-motor area that encodes visual stimuli in bodily coordinates during action (Bernier and Grafton, 2010). The dorsal precuneus and RSC both implement the integration of information into an egocentric reference frame (i.e. centered on the body), a key basic mechanism involved in many different situations (Burgess et al., 2001;Vann et al., 2009), including foraging. All 12 nodes of the gastric network but three (the posterior cingulate sulcus and ventral and dorsal occipital clusters) either contain body maps or map external information in bodily coordinates. One could thus speculate that the gastric network coordinates these different body-centered maps. Indeed, the gastric rhythm is continuously produced and originates in the center of the body. In this view, the function of gastric-BOLD coupling in those nine areas would be to co-register body-centered maps of the body and of the external space.
In which type of tasks would the gastric network play a role? Foraging and feeding behaviors are likely candidates, since they involve both the coordination of different egocentric maps and the homeostatic regulation of digestion. Besides, in SI and EBA gastric-BOLD coupling is maximal in the hand and mouth region, suggesting a potential link with the stereotypical actions of feeding behavior, where food goes from hand to mouth, and from mouth to stomach. Still, the coordination of different systems of bodily coordinates is important for many actions besides feeding, such as navigating in the environment or grasping any object. Whether the gastric network plays a role in food-related, but also nonfood-related behaviors, remains to be determined.
Delays and directionality of interactions
The gastric network is characterized by temporal fluctuations with delays between the gastric rhythm and brain regions. Delays in resting-state functional connectivity have been highlighted only recently (Yellin et al., 2015;Mitra and Raichle, 2016), but have long been documented in stimulus-induced BOLD responses (Saad et al., 2001;Kruggel et al., 1999). Within functionally coherent systems such as the visual (Saad et al., 2001) or auditory (Kruggel et al., 1999) systems, delays of 2 s are common. In this light, our finding of delays of up to 3 s between areas much further apart does not appear so surprising. Still, the interpretation of long delays is not straightforward. They are unlikely to directly reflect synaptic delays of fast sequential neural transmission between areas since feed-forward transfer, with only minimal local computations, can be as fast as 10 to 15 ms per processing stage (Thorpe et al., 1996). However, if local recurrent processing is involved, longer delays might occur. Delays might additionally reflect regional differences in the timing of the vascular response (Saad et al., 2001;Kruggel et al., 1999), slow changes of neural activity over time, as in accumulation processes (Yellin et al., 2015), or the involvement of neuromodulatory influences. The different factors may further be combined, that is, neuromodulation might affect cerebrovascular reactivity (Krimer et al., 1998).
What is the directionality of the brain-stomach interactions? The methods used here do not allow us to answer this question since PLV is not a directed measure. If the gastric network plays a role in homeostasis, interactions are likely to be bidirectional, because homeostasis implies both the monitoring of ascending inputs, to evaluate the peripheral state, and the production of descending control commands. Medial wall motor regions, which both receive inputs from the spino-thalamic tract and generate sympathetic outputs, might fit with this schema. On the other hand, the gastric-locked modulation of the alpha rhythm in the ventral and dorsal occipital clusters was previously shown to be mostly due to ascending influences from the stomach to the brain (Richter et al., 2017).
The gastric network is a novel resting-state network
RSNs have been defined as segregated systems that show synchronous fluctuations during rest (Fox and Raichle, 2007). The gastric network, albeit distinct from classical RSNs, falls under this definition. In terms of dynamics, the gastric network is defined by its phase synchronization with the stomach and its delayed connectivity between nodes. The gastric network can thus be considered a novel RSN that could not be previously observed due to methodological reasons.
As opposed to classical RSNs, the gastric network is characterized by delayed connectivity, with temporal delays that can extend to several seconds but are stable over time and captured by coherence and phase synchrony. Delays are an intrinsic characteristic of brain dynamics unfolding in anatomically connected networks (Deco et al., 2011) and pervasive even at the timescale of BOLD signal fluctuations (Mitra et al., 2015). Canonical RSNs based on instantaneous connectivity represent only one of the possible partitions of the brain into coherent networks based on temporal dynamics. Therefore, we propose the addition of delayed connectivity to the operational definition of RSNs.
Experimental procedure

Participants
Thirty-four right-handed human participants took part in this study. All volunteers were interviewed by a physician to ensure the following inclusion criteria: absence of digestive, psychiatric or neurological disorders; BMI between 18 and 25; and compatibility with MRI recordings. Participants received a monetary reward and provided written informed consent for participation in the experiment and publication of group data. The study was approved by the ethics committee Comité de Protection des Personnes Ile de France III (approval identifier: 2007-A01125-48). All participants fasted for at least 90 min before the recordings. Data from four participants were excluded. Two were excluded because coughing artifacts caused excessive head movement during acquisition and corrupted the EGG data, and two were excluded because their EGG spectrum did not show a clear peak that could allow us to identify the frequency of their gastric rhythm. A total of 30 participants (mean age 24.2 ± SD 3.31, 15 females, mean BMI 21.48 ± SD 1.91) were included in the analysis described below. Because effect size was not known a priori, the study was powered to detect a medium-sized effect (i.e. slightly above the median sample size of fMRI studies [Poldrack et al., 2017]).
MRI data acquisition
MRI was performed at 3 Tesla using a Siemens MAGNETOM Verio scanner (Siemens, Germany) with a 32-channel phased-array head coil. The resting-state scan lasted 900 s during which participants were instructed to lay still and fixate on a bull's eye on a gray background. A functional MRI time series of 450 volumes was acquired with an echo-planar imaging (EPI) sequence and the following acquisition parameters: TR = 2000 ms, TE = 24 ms, flip angle = 78°, FOV = 204 mm, and acquisition matrix = 68 × 68 × 40 (voxel size = 3 × 3 × 3 mm³). Each volume comprised 40 contiguous axial slices covering the entire brain. High-resolution T1-weighted structural MRI scans of the brain were acquired for anatomic reference after the functional sequence using a 3D gradient-echo sequence (TE = 1.99 ms, TR = 5000 ms, TI-1 = 700 ms/TI-2 = 2500 ms, flip angle-1 = 4°/flip angle-2 = 5°, bandwidth = 240 Hz/pixel, acquisition matrix = 240 × 256 × 224, and isometric voxel size = 1.0 mm³). The anatomical sequence duration was 11 min 17 s. Cushions were used to minimize head motion during the scan.
Physiological signal acquisition
Physiological signals were simultaneously recorded during functional MRI acquisition using MRI compatible equipment. The electrogastrogram (EGG) and electrocardiogram (ECG) were acquired using bipolar electrodes connected to a BrainAmp amplifier (Brain products, Germany) placed between the legs of participants; the electrodes received a trigger signaling the beginning of each MRI volume. EGG was acquired at a sampling rate of 5000 Hz and a resolution of 0.5 μV/bit with a low-pass filter of 1000 Hz and no high-pass filter (DC recordings). ECG was acquired at a sampling rate of 5000 Hz and a resolution of 10 μV/bit with a low-pass filter of 1000 Hz and a high-pass filter of 0.016 Hz. Eye position and pupil diameter were recorded from the right eye with an EYELINK 1000 (SR Research, Canada) and simultaneously sent to BrainAmp amplifiers.
The skin of participants was rubbed and cleaned with alcohol to remove dead skin, and electrolyte gel was applied to improve the signal-to-noise ratio. The EGG was recorded via four bipolar electrodes placed in three rows over the abdomen, with the negative derivation placed 4 cm to the left of the positive one. Figure 1a shows the electrode placement scheme. Electrodes covered a large portion of the left abdomen, to increase the chance of having an electrode close to the pacemaker of the stomach, located at the greater curvature of the mid to upper corpus (O'Grady et al., 2010). Because the gastric rhythm propagates from the pacemaker zone to the whole stomach, any electrode placed over the stomach (not necessarily over the pacemaker) will record the gastric rhythm, but with a possible delay. The midpoint between the xyphoid process and umbilicus was identified, and the first electrode pair was set 2 cm below this area, with the negative derivation set at the point below the rib cage closest to the left mid-clavicular line. Another electrode pair was set 2 cm above the umbilicus and aligned with the first electrode pair. The positive derivation of the third pair was set in the center of the square formed by electrode pairs one and two. The positive derivation of the fourth electrode pair was centered on the line traversing the xyphoid process and umbilicus at the same level as the third electrode. The ground electrode was placed below the lower left costal margin. The ECG was acquired using three bipolar electrodes that shared the same negative derivation, set at the third intercostal space. The positive derivations were set at the fifth intercostal space and separated by 4 cm.
Electrophysiological data were collected during fMRI data acquisition, as well as at least 30 s before and after. In addition, to rule out the possibility that the scanner pulse and B0 magnetic field could distort the frequency content of the EGG, a second EGG acquisition with an 8 min duration was performed after the acquisition of the MRI scans, with the participant positioned outside the tunnel of the scanner. A paired-sample t-test was then performed to compare the peak frequencies obtained for each participant inside the scanner with those obtained outside the scanner for the same channels. This control analysis was run on 29 participants due to corrupted data in the EGG recordings outside the scanner tunnel in one participant.
MRI preprocessing
Brain imaging data were preprocessed using Matlab (Matlab 2013b, MathWorks, Inc., United States) and the Statistical Parametric Mapping toolbox (SPM 8, Wellcome Department of Imaging Neuroscience, University College London, U.K.). Images of each individual participant were corrected for slice timing and motion with six movement parameters (three rotations and three translations). Two participants who moved more than 3 mm during the functional scan were excluded from the study. Each participant's structural image was normalized to the Montreal Neurological Institute (MNI) space of the 152-participant average T1 template provided by SPM, with affine registration followed by nonlinear transformation (Ashburner and Friston, 1999;Friston et al., 1995). The normalization parameters determined for the structural volume were then applied to the corresponding functional images. The functional volumes were spatially smoothed with a 3 mm full-width at half-maximum (FWHM) Gaussian kernel. The time series of voxels inside the brain, as determined using an SPM a priori mask, were subjected to the following preprocessing steps using the FieldTrip toolbox (Oostenveld et al., 2011) (Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, the Netherlands; see http://www.ru.nl/neuroimaging/fieldtrip, release 01/09/2014). Linear and quadratic trends were removed from each voxel's time series by fitting and regressing basis functions, and we bandpass filtered the BOLD time series between 0.01 and 0.1 Hz using a fourth-order Butterworth infinite impulse response filter. A correction for cerebrospinal fluid motion was obtained by regressing out the time series of a 9 mm diameter sphere located in the fourth ventricle (MNI coordinates of the center of the sphere [0 −46 −32]).
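The voxel-wise part of this pipeline (detrending, band-pass filtering, nuisance regression) can be sketched compactly. The original analysis was done with FieldTrip in Matlab; the scipy version below and its variable names are illustrative assumptions, not the authors' code.

```python
# Sketch of the voxel-wise BOLD preprocessing: remove linear and quadratic
# trends, band-pass 0.01-0.1 Hz with a fourth-order Butterworth filter,
# then regress out the fourth-ventricle (CSF) signal.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_bold(bold, csf, tr=2.0, band=(0.01, 0.1)):
    """bold: (T, V) voxel time series; csf: (T,) CSF nuisance regressor."""
    T = bold.shape[0]
    t = np.arange(T, dtype=float)
    # fit and regress out constant, linear and quadratic basis functions
    basis = np.column_stack([np.ones(T), t, t ** 2])
    beta, *_ = np.linalg.lstsq(basis, bold, rcond=None)
    bold = bold - basis @ beta
    # zero-phase fourth-order Butterworth band-pass
    nyq = 0.5 / tr
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    bold = filtfilt(b, a, bold, axis=0)
    # regress out the cerebrospinal-fluid time series
    c = np.column_stack([np.ones(T), csf])
    beta, *_ = np.linalg.lstsq(c, bold, rcond=None)
    return bold - c @ beta

# usage: clean = preprocess_bold(raw_voxels, csf_signal)
```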
EGG preprocessing
Data analysis was performed using the FieldTrip toolbox. Data were low-pass filtered below 5 Hz to avoid aliasing and downsampled from 5000 Hz to 10 Hz. To identify the EGG peak frequency (0.033-0.066 Hz) for each participant, we computed the spectral density estimate at each EGG channel over the 900 s of the EGG signal acquired during the fMRI scan using Welch's method on 200 s time windows with 150 s overlap. Spectral peak identification was based on the following criteria: peaking power larger than 15 μV² and sharpness of the peak. Two participants were excluded from further analysis at this stage because their spectral peak was not well defined, with a power smaller than 15 μV². In 20 participants, peaking power was the largest at the EGG electrode with the best defined spectral peak. In 10 participants, we used the second most powerful channel because its spectral peak was sharper. Data from the selected EGG channel were then bandpass filtered to isolate the signal related to gastric basal rhythm (linear phase finite impulse response filter, FIR, designed with the Matlab function FIR2, centered at the EGG peaking frequency, filter width ±0.015 Hz, filter order of 5). Data were filtered in the forward and backward directions to avoid phase distortions and downsampled to the sampling rate of the BOLD acquisition (0.5 Hz). Filtered data included 30 s before and after the beginning and end of MRI data acquisition to minimize ringing effects.
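A compact scipy sketch of these two steps (peak identification and narrow-band filtering) follows. The original analysis used FieldTrip and Matlab's FIR2; scipy.signal.firwin is used here as a stand-in, so the filter design details are an assumption.

```python
# Sketch of EGG peak identification and narrow-band filtering; window
# lengths follow the text, the FIR design is an approximation.
import numpy as np
from scipy.signal import welch, firwin, filtfilt

def egg_peak_frequency(egg, fs=10.0):
    """Welch spectrum on 200 s windows, 150 s overlap; peak in 0.033-0.066 Hz."""
    f, pxx = welch(egg, fs=fs, nperseg=int(200 * fs), noverlap=int(150 * fs))
    normogastric = (f >= 0.033) & (f <= 0.066)
    return f[normogastric][np.argmax(pxx[normogastric])]

def gastric_bandpass(egg, f_peak, fs=10.0, width=0.015, ntaps=2001):
    """Linear-phase FIR bandpass of +/-0.015 Hz around the gastric peak."""
    taps = firwin(ntaps, [f_peak - width, f_peak + width],
                  pass_zero=False, fs=fs)
    # forward-backward filtering avoids phase distortion, as in the text
    return filtfilt(taps, [1.0], egg)
```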
MR gradient artifacts affect the electrophysiological signal down to approximately 10 Hz, which is far above EGG frequency (~0.05 Hz). Thus, no specific artifact gradient procedure was necessary. We further checked that EGG frequency inside and outside the scanner did not differ (see Results).
Data analysis

Quantification of gastric-BOLD phase synchrony
The BOLD signals of all brain voxels were bandpass filtered with the same filter parameters as the ones used for the EGG preprocessing. The first and last 15 volumes (30 s) were discarded from both the BOLD and EGG time series. The updated duration of the fMRI and EGG signals in which the rest of the analysis was performed was 840 s. The Hilbert transform was applied to the BOLD and EGG time series to derive the instantaneous phases of the signals. The PLV (Lachaux et al., 1999) was computed as the absolute value of the time-averaged complex exponential of the phase difference between the EGG and each voxel (Equation 1):

$$\mathrm{PLV} = \left|\frac{1}{T}\sum_{t=1}^{T} e^{\,i\left(\phi_x(t)-\phi_y(t)\right)}\right| \quad (1)$$
where T is the number of time samples, and x and y are the two time series. The PLV measures phase synchrony irrespective of temporal delays and amplitude fluctuations and is bounded between 0 (no synchrony) and 1 (perfect synchrony). Two pure sinewaves at the same frequency will thus always have a PLV of 1. However, the stomach is not perfectly regular, and the EGG is not a perfect sinewave. The phase-locking procedure identifies BOLD regions that go faster when the stomach goes faster, and slower when the stomach goes slower. The bandpass filter we use is large enough to retrieve all those fluctuations. The PLV was first assessed over the whole duration of the recording. In a second step, we computed the time-varying PLV in a 60 s time window shifted by 10 s.
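The PLV and its time-resolved version can be sketched in a few lines of numpy/scipy; the original computation was done in Matlab with FieldTrip, so this Python version and its names are illustrative assumptions.

```python
# Sketch of Equation 1 and of the time-varying PLV in 60 s windows shifted
# by 10 s. Assumes x and y are already narrow-band filtered around the
# gastric frequency and sampled at the BOLD rate (0.5 Hz).
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two narrow-band time series (Equation 1)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def time_varying_plv(x, y, fs=0.5, win_s=60.0, step_s=10.0):
    """PLV computed in sliding windows (~3 gastric cycles per window)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    n, s = int(win_s * fs), int(step_s * fs)
    starts = range(0, len(dphi) - n + 1, s)
    return np.array([np.abs(np.mean(np.exp(1j * dphi[i:i + n])))
                     for i in starts])
```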
Statistical procedure for determining regions showing significant gastric-BOLD coupling at the group level
We employed a two-step statistical procedure adapted from a previous work (Richter et al., 2017). We estimated chance-level gastric-BOLD coupling at each voxel and in each participant. We then used group-level statistics to determine regions in which gastric-BOLD coupling was greater than chance.
We first estimated the chance-level PLV at each voxel for each participant. We created surrogate datasets in which the phase relationship between the EGG and BOLD time series was disrupted by offsetting the EGG time series with respect to the BOLD time series. In other words, accelerations and decelerations in the EGG are no longer aligned with accelerations and decelerations in the BOLD. In practice, the EGG time series was shifted by a time interval of at least ±60 s (i.e. approximately 3 cycles of the gastric rhythm) with respect to the BOLD time series. Data at the end of the recording were wrapped to the beginning. Given the 420 samples in the BOLD time series, this procedure generated 360 surrogate datasets from which we could compute the distribution of the chance-level PLV for each voxel in each participant. The chance-level PLV was defined as the median value of the chance-level PLV distribution for each voxel and participant. Because the amplitudes of the series in the surrogate datasets are identical to the original ones, any bias due to signal amplitude is present in both original and surrogate datasets. Note that PLV is a measure that depends on sample size (Vinck et al., 2010;Bastos and Schoffelen, 2015). Here, we can safely compare PLV values between the original and surrogate datasets in each participant because the original and surrogate datasets contain exactly the same number of samples (420 samples in all datasets in all participants). We defined coupling strength as the difference between the empirical PLV and chance-level PLV.
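The time-shift surrogate procedure translates directly into code; the sketch below repeats the plv helper from the earlier sketch for self-containment, and all names are illustrative assumptions.

```python
# Sketch of the chance-level PLV: circularly shift the EGG by every offset
# of at least 60 s (30 samples at 0.5 Hz), wrapping the end of the recording
# to the beginning. With 420 samples this yields 360 surrogate datasets.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def chance_level_plv(bold_voxel, egg, fs=0.5, min_shift_s=60.0):
    n = len(egg)                              # 420 samples in this study
    min_shift = int(min_shift_s * fs)         # 30 samples, ~3 gastric cycles
    surrogate = np.array([plv(bold_voxel, np.roll(egg, s))
                          for s in range(min_shift, n - min_shift)])
    return np.median(surrogate)               # chance-level PLV

# coupling strength = plv(bold_voxel, egg) - chance_level_plv(bold_voxel, egg)
```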
In a second step, we tested whether the empirical PLV differed from the chance-level PLV across participants. We used a cluster-based permutation procedure (Maris and Oostenveld, 2007), as implemented in FieldTrip (Oostenveld et al., 2011), that extracts clusters of voxels showing significant differences at the group level while intrinsically correcting for multiple comparisons. This nonparametric method is exempt from the high rate of false positives associated with the Gaussian shape assumption often present in fMRI studies (Eklund et al., 2016). The procedure consists of comparisons between the empirical PLV and chance-level PLV across participants using t-tests at each voxel. Candidate clusters are formed by neighboring voxels exceeding the first-level t-threshold (p<0.01, two-sided). Each candidate cluster is characterized by the sum of the t-values in the voxels defining the cluster. To determine the sum of t-values that could be obtained by chance, we computed a cluster statistics distribution under the null hypothesis by randomly shuffling the labels 'empirical' and 'chance level' 10,000 times and applying the clustering procedure. At each permutation, we retained the largest positive and smallest negative summary statistics obtained by chance across all voxels, and thus built the distribution of cluster statistics under the null hypothesis and assessed the empirical clusters for significance. Because the maximal values across the whole brain are retained to build the distribution under the null hypothesis, this method intrinsically corrects for multiple comparisons. Clusters are characterized by their summary statistics (sum(abs(t))) and Monte-Carlo p value. Clusters with a Monte-Carlo p value<0.05 (two-sided, corrected for multiple comparisons) were considered significant and are reported in the Results section as nodes of the gastric network.
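The logic of the cluster-level statistic can be sketched compactly. For a paired design, shuffling the 'empirical' and 'chance' labels within participants amounts to randomly flipping the sign of the paired differences. This simplified version tracks only the largest cluster and is an illustration, not the FieldTrip implementation.

```python
# Compact sketch of a cluster-based permutation test on paired differences
# (empirical minus chance PLV), with adjacency clustering via ndimage.label.
import numpy as np
from scipy import stats
from scipy.ndimage import label

def max_cluster_stat(diff, t_thresh):
    """diff: (participants, x, y, z). Largest |sum of t| over clusters."""
    t = stats.ttest_1samp(diff, 0, axis=0).statistic
    best = 0.0
    for sign in (1, -1):                      # positive and negative clusters
        labels, n = label(sign * t > t_thresh)
        for c in range(1, n + 1):
            best = max(best, abs(t[labels == c].sum()))
    return best

def cluster_permutation_p(diff, n_perm=10000, alpha_voxel=0.01, seed=0):
    n_sub = diff.shape[0]
    t_thresh = stats.t.ppf(1 - alpha_voxel / 2, df=n_sub - 1)  # two-sided
    observed = max_cluster_stat(diff, t_thresh)
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n_sub).reshape(-1, 1, 1, 1)
        if max_cluster_stat(diff * flips, t_thresh) >= observed:
            count += 1
    return count / n_perm      # Monte-Carlo p value for the largest cluster
```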
As an additional control, we computed gastric-BOLD coupling at each voxel between the BOLD data of each participant and the EGG data of the other 29 participants. Chance-level PLV was defined as the median of the 29 surrogate PLVs and compared to the empirical PLV using the clustering method described above. Note that in this case, the chance-level PLV is estimated from only 29 surrogate datasets, as compared with 360 surrogate datasets in the time-shift approach, resulting in a less precise estimate.
Quantification of gastric-BOLD shared variance
To estimate the amount of variance in the BOLD signal that could be accounted for by gastric coupling, we computed the squared coherence coefficient between the EGG and the average BOLD time course across all voxels in each significant cluster using FieldTrip software. The coherence coefficient measures phase and amplitude consistency across time and is a frequency-domain analog of the cross-correlation coefficient in the temporal domain. Therefore, its squared value can be interpreted as the amount of shared variance between two signals at a certain frequency (Bastos and Schoffelen, 2015). First, we estimated the frequency spectrum of the full-band (0.01-0.1 Hz) EGG and BOLD signals (Welch method on a 120 s time window with 20 s overlap). We then computed the coherence coefficient between each participant's EGG and each cluster's time series at the gastric frequency ω, as the absolute value of the product of the amplitudes (A) of the signals and the complex exponential of their phase (φ) difference, averaged across the N time windows (t), normalized by the square root of the product of their squared amplitudes averaged across time windows (Equation 2):

$$\mathrm{Coh}_{xy}(\omega) = \frac{\left|\frac{1}{N}\sum_{t=1}^{N} A_x(t,\omega)\,A_y(t,\omega)\,e^{\,i\left(\phi_x(t,\omega)-\phi_y(t,\omega)\right)}\right|}{\sqrt{\frac{1}{N}\sum_{t=1}^{N} A_x^2(t,\omega)\;\cdot\;\frac{1}{N}\sum_{t=1}^{N} A_y^2(t,\omega)}} \quad (2)$$
The coherence coefficient was then squared and averaged across participants such that the final group value represented the shared variance between the EGG and each cluster BOLD activity at the normogastric peak.
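Since scipy.signal.coherence returns the Welch-based magnitude-squared coherence directly, the shared-variance estimate of Equation 2 can be sketched as below; the window parameters follow the text, and the rest is an illustrative assumption.

```python
# Sketch of the gastric-BOLD shared-variance estimate: squared coherence
# between the EGG and a cluster-averaged BOLD time series at the gastric
# peak frequency (scipy's coherence is already magnitude-squared).
import numpy as np
from scipy.signal import coherence

def gastric_shared_variance(egg, bold_cluster, gastric_freq, fs=0.5):
    f, cxy = coherence(egg, bold_cluster, fs=fs,
                       nperseg=int(120 * fs), noverlap=int(20 * fs))
    return cxy[np.argmin(np.abs(f - gastric_freq))]

# group value: average gastric_shared_variance across participants
```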
Between-participant phase-delay consistency
To quantify temporal delays in the gastric network, we ran a group-level analysis on the gastric-BOLD phase-locking angle. In each participant, we first computed a mean BOLD time series per node by averaging the voxel time series in each significant cluster. We then computed the relative phase-locking angle φ_k^relative of node k between the node time series x and the EGG y using Equation 3, where φ_k^relative corresponds to the phase-locking angle φ_k of node k with respect to the EGG minus the average angle across the K nodes:

$$\phi_k^{\mathrm{relative}} = \phi_k - \frac{1}{K}\sum_{j=1}^{K}\phi_j \quad (3)$$

φ_k^relative thus quantifies the phase advance or lag of each node relative to the gastric network. We analyzed relative, rather than absolute, phase values because there might be a constant but unknown phase delay between the recorded EGG and the rhythm of the gastric pacemaker.
Between-participant phase-delay consistency was then obtained at each node k by averaging the unit vectors of the relative phase-locking angles across the P participants using Equation 5:

$$\text{Between-participant phase-delay consistency}_k = \left|\frac{1}{P}\sum_{p=1}^{P} e^{\,i\,\phi_{k,p}^{\mathrm{relative}}}\right| \quad (5)$$
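Equations 3 and 5 translate directly into numpy. The input array of phase-locking angles is an assumed precomputed quantity, and the use of a circular mean for the across-node average is an implementation choice, not necessarily the authors'.

```python
# Sketch of the relative phase-locking angles (Equation 3) and of the
# between-participant phase-delay consistency (Equation 5).
import numpy as np

def relative_angles(angles):
    """angles: (participants, nodes) phase-locking angles, in radians."""
    # subtract the (circular) mean angle across nodes, per participant
    mean_vec = np.mean(np.exp(1j * angles), axis=1, keepdims=True)
    return np.angle(np.exp(1j * angles) * np.conj(mean_vec / np.abs(mean_vec)))

def phase_delay_consistency(rel_angles):
    """Length of the mean unit vector across participants, per node (Eq. 5)."""
    return np.abs(np.mean(np.exp(1j * rel_angles), axis=0))
```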
To determine whether there were significant differences across the angles of the gastric network clusters, we submitted the values of each node and participant's relative phase-locking angle to the Watson-Williams test, a circular analog of the one-way ANOVA, using the circstat Matlab toolbox (Berens, 2009).

Functional connectivity: correlation and coherence

FC was defined as shared variance and computed using either the squared Pearson correlation coefficient or squared coherence. We computed the Pearson correlation between the bandpass-filtered BOLD time series (gastric peaking frequency ± 0.015 Hz) averaged across voxels in each gastric node, as well as in two control regions outside the gastric network, the right ventral precuneus and right ventral insula. The ventral precuneus, a core node of the default network, was defined using a 9 mm³ ROI centered on the coordinates provided by Fox et al. (2005) (MNI x = −5, y = −52.5, z = 41). The right ventral insula ROI was provided by the parcellation performed by Deen et al. (2011).
To compute coherence between BOLD time series, we first estimated the frequency spectrum of the full-band (0.01-0.1 Hz) BOLD time series using the Welch method with 36 time windows of 120 s with 20 s overlap. We then computed coherence using the FieldTrip implementation of Equation 2 and used the squared coherence at the gastric peak frequency of each participant as an estimate of shared variance.
Heart rate variability analysis
We first removed the MRI gradient artefact from the ECG data using the FMRIB plug-in (Niazy et al., 2005, version 1.21) for EEGLAB (Delorme and Makeig, 2004, version 14.1.1), provided by the University of Oxford Centre for Functional MRI of the Brain (FMRIB). Data from the three ECG channels were then bandpass filtered between 1 and 100 Hz using a FIR filter designed with the Matlab function firws. We then retrieved the inter-beat-interval (IBI) time series by identifying R peaks using a custom semi-automatic algorithm, which combined automatic template matching with manual selection of R peaks for extreme IBIs. This procedure was performed on the ECG channel of each participant that required the least manual identification of R peaks. The resulting IBI time series were then interpolated at 1 Hz using a spline function (order 3), band-pass filtered in the low (0.04-0.15 Hz) and high (0.15-0.4 Hz) frequency ranges of heart rate variability using a FIR filter (designed with the Matlab function FIR2, LF-HRV centered at 0.1 ± 0.06 Hz, HF-HRV centered at 0.275 ± 0.125 Hz) and then downsampled at the MRI frequency (0.5 Hz). The amplitude envelopes of HF- and LF-HRV were then computed using the Hilbert transform and used as regressors of interest (without convolution with the HRF, as in [Critchley et al., 2003]) in two separate first-level GLMs, which also included six movement parameters as regressors. The MRI pre-processing parameters were the same as for the gastric-BOLD coupling analysis (slice-timing and motion correction, co-registration to MNI space and spatial smoothing of FWHM = 3 mm). The BOLD time series were high-pass filtered (cutoff: 128 s) for the GLM analysis. GLM analysis was performed using SPM8 (Friston et al., 1994). Contrast images from the first level were entered into two separate second-level random-effects analyses to test for consistent effects across the 30 participants, separately for HF and LF HRV. The contrast images were spatially smoothed (FWHM = 8 mm) and submitted to a one-sample t-test. Statistical inference was performed at the voxel level, family-wise-error-corrected (p FWE < 0.05) for multiple comparisons over the whole brain.
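The construction of the HRV regressors can be sketched as follows. The filter lengths and helper names are assumptions; the original pipeline used Matlab (firws, FIR2) rather than scipy.

```python
# Sketch of an HRV envelope regressor: spline-interpolate the inter-beat
# intervals at 1 Hz, FIR band-pass into an HRV band, take the Hilbert
# amplitude envelope, then downsample to the MRI rate (0.5 Hz).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import firwin, filtfilt, hilbert

def hrv_envelope(r_peak_times, ibis, band, rec_len_s, fs=1.0, mri_fs=0.5):
    """band: (low, high) in Hz, e.g. (0.04, 0.15) for LF, (0.15, 0.4) for HF."""
    t = np.arange(0, rec_len_s, 1.0 / fs)
    ibi = CubicSpline(r_peak_times, ibis)(t)          # order-3 spline at 1 Hz
    taps = firwin(201, band, pass_zero=False, fs=fs)  # linear-phase FIR
    env = np.abs(hilbert(filtfilt(taps, [1.0], ibi))) # Hilbert amplitude
    return env[:: int(round(fs / mri_fs))]            # downsample to 0.5 Hz
```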
Pupil diameter analysis
Pupil size during blinks and saccades (as automatically detected by the EyeLink software) was estimated by interpolating between pupil size 100 ms before and 100 ms after each event. Artefacted windows separated by less than 200 ms were combined and treated as a single epoch. Data from seven participants were excluded due to a high (>20%) proportion of artefacted data. Data from three participants were excluded because MRI and pupil data could not be synchronized, due to missing triggers. Pupil data from the remaining 20 participants were downsampled at the MRI frequency (0.5 Hz), bandpass filtered (0.0078-0.1 Hz) using a Butterworth infinite impulse response filter and used as a regressor (convolved with the canonical HRF, as in [Yellin et al., 2015;Schneider et al., 2016]) in a first-level GLM, which also included six movement nuisance regressors. The BOLD time series were high-pass filtered (cutoff: 128 s) for the GLM analysis (SPM8). Contrast images from the first level were entered into a second-level random-effects analysis to test for consistent effects of pupil size across the 20 participants. The contrast images were spatially smoothed (FWHM = 8 mm) and submitted to a one-sample t-test. Statistical inference was performed at the voxel level (p<0.001, uncorrected for multiple comparisons).
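The blink/saccade interpolation step can be sketched in a few lines; the event representation and linear interpolation are assumptions made for illustration.

```python
# Sketch of blink interpolation: pupil samples inside each artifact window
# are replaced by a linear ramp between the values 100 ms before and after
# the event. Merging of windows closer than 200 ms is assumed done upstream.
import numpy as np

def interpolate_blinks(pupil, events, fs, pad_ms=100.0):
    """events: list of (start_idx, end_idx) artifact windows (sample indices)."""
    pupil = pupil.copy()
    pad = int(round(pad_ms / 1000.0 * fs))
    for start, end in events:
        a = max(start - pad, 0)
        b = min(end + pad, len(pupil) - 1)
        pupil[a:b + 1] = np.linspace(pupil[a], pupil[b], b - a + 1)
    return pupil
```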
Bayes factor
Bayesian statistics on correlation coefficients were computed and interpreted according to Wetzels and Wagenmakers (2012) and Kass and Raftery (1995), and Bayesian statistics on two-sample (unpaired) comparisons according to Rouder and Morey (2011). Regarding the specific test of an absence of effect of voxel motion susceptibility on coupling strength (H0), submillimeter voxel motion was estimated as in Power et al. (2012), and H1 was modeled as the minimum effect size required to detect a significant difference from zero, given a one-sample t-test with 29 degrees of freedom on a normal distribution with a mean of 0 and a standard deviation of 1. The same method was used to test for the absence of a difference between the EGG peak frequency measured inside and outside the scanner.
Nifti overlays availability
Unthresholded t maps of empirical vs chance PLV comparisons (intermediate step for Figure 2a), the mask of significant clusters (Figure 2a), unthresholded and significant masks of HF and LF HRV and pupil diameter (Figure 3) and the average phase-locking angle of each significant cluster (Figure 4b) are available at Neurovault (Gorgolewski et al., 2015) at the following address: http://neurovault.org/collections/GMHHGEXA/
Economic Security of Ukraine’s Railway Transport in the Context of National Security
The authors consider the legislative and economic prerequisites for the need to ensure the economic security of Ukraine's railway transport in the context of national security. The research methodology comprises such methods as theoretical generalization, for further clarification of the conceptual apparatus of railway transport economic security; a legalistic method, for the analysis and systematization of legal bases in the field of Ukraine's national security; an abstract-logical method, for theoretical generalization and the formulation of research conclusions; etc. Taking into account the dynamics and nature of railway freight transportation, conclusions about the strategic place of the industry in ensuring sustainable development of the state and, accordingly, the need to protect its economic security were drawn. The authors outline the legal principles of the economic security of railway transport in the system of legal regulation of Ukraine's security policy. The article provides the authors' definition of the "economic security of railway transport" concept, based on current tendencies in national security.
INTRODUCTION
Thirty years of Ukrainian independence is a vivid example of the emergence and change (following internal and external factors) of challenges and threats to the state's economic security, among which both purely national and international ones stand out. An important prerequisite for counteracting threats to economic security is their prevention or the minimization of negative consequences, where world experience or the development of an adequate national path is appropriate. At the same time, failure to solve problems at the nascent stage causes the subsequent emergence of complex threats, overcoming which will require a significant amount of legal, organizational, financial and economic resources.
Historically, Ukraine's railway transport plays an important role in the development of the state's economy. First of all, railway transport accounts for a significant share of freight turnover among other modes of transport, in particular compared with automobile transport. Secondly, railway transport performs a necessary social function of passenger transportation. Thirdly, railways carry out the cargo transportation of the strategic industries, including the agro-industrial complex, the fuel and energy industry, the metallurgical and mining industries, and the chemical industry. Freight transportation for the defense industry and the military-industrial complex, as well as military transportation, is carried out by enterprises of the industry to meet the needs of Ukraine's security and defense sector.
Taking into account the long process of railway transport reformation, economic crises and the military aggression of the Russian Federation against our state, an important prerequisite for Ukraine's national security is, in our opinion, sustainable development of the railway industry and systemic economic security support. These circumstances have determined the relevance of this study, which aims to identify threats to the economic security of railway transport in present conditions and ways to minimize or prevent them, as well as to formulate capabilities to ensure the economic security of Ukraine's railway transport. The information base of the study consists of Ukraine's current legislation, materials of the State Statistics Service of Ukraine, the Ministry of Infrastructure of Ukraine and the joint-stock company "Ukrzaliznytsia".
To achieve this goal, the following forms, ways and research methods were used: a systematic approach (to determine the components of the economic security of railway transport); theoretical generalization (to clarify the conceptual apparatus of the economic security of railway transport); the historical method (to study the processes of emergence, formation and development of new approaches to the formation of the national security system and its components); the legalistic method (to analyze legal sources in the field of Ukraine's national security and systematize them); economic-statistical and economic-mathematical methods (to analyze the dynamics of performance indicators of the railway industry for the reporting period); and the abstract-logical method (to generalize and formulate the conclusions).
The study of the economic security of Ukraine's railway transport in general and of its structural enterprises has received attention in the works of Zh. S. Kostyiuk, B. B. Ostapiuk, I. I. Rekun, etc. Research in the field of national security is much more widely presented in the works of O. S. Vlasiuk, B. P. Horbulin, O. P. Dzoban and V. H. Pylypchuk. At the same time, researchers have not considered the question of the economic security content of railway transport among the components of national security.
RESULTS
Current trends in national security policy and analysis of railway transport activity have allowed us to define the essence of the economic security of railway transport as the resistance of enterprises, institutions and organizations of a single production and technological complex of railway transport enterprises to real and potential threats under conditions of ensuring stability and strengthening capacities in its activities. The analysis of the safety environment allowed formulating the basic components of railway transport capacities, in particular, the legislative framework, organizational structure and management, professional level and education, and resource provision.
DISCUSSION
Active study of the basic principles of the economic security of Ukraine's railway transport is certainly connected with the study of the state's economic security.
The systematization of threats to the economic security of railway transport enterprises was presented in the works of P. V. Lapin and S. P. Mishchenko [1,2]. A. V. Rachynska classified the risks in railway transport as conditions for increasing economic security [3]. Risks and threats to the economic security of railway transport were identified in the works of V. L. Dykan [4]. S. M. Synytsia and O. V. Vakun studied the peculiarities of the economic security management of railway transport enterprises [5]. Estimated figures of the economic security of railway transport enterprises were studied by T. O. Murenko [6]. Systematization of factors influencing the economic security of railway transport was carried out by O. Yu. Cherednychenko [7], among others [8][9][10]. Elaboration of scientific sources to define the essence of the "economic security of railway transport" concept allowed us to conclude that there is no single approach to its formulation (Table 1).
Nowadays, following Ukraine's national security policy and the chosen course for European and Euro-Atlantic integration, the construction of a national system of resilience and capacity planning has become relevant [11][12][13][14]. In the context of ensuring the economic security of railway transport, sustainability is understood by the authors of this article as the ability of enterprises, institutions and organizations of railway transport, as well as the capacity of their employees, to quickly adapt to changes in the safety sector and operate with the aim to minimize vulnerabilities [15,16] caused by the external and internal environment. Similarly, capacity planning is understood as the definition of the main directions of Ukraine's transport policy implementation, strategic goals of railway transport development and the expected results of their achievement, taking into account the real and potential threats to the economic security of railway transport.
Accordingly, taking into account current tendencies in support of the national security components, the economic security of railway transport, in our opinion, should be understood as the resilience of enterprises, institutions and organizations of a single production and technological complex of railway transport enterprises to real and potential threats under conditions of ensuring stability and strengthening capacities in its activities.
The legal basis for ensuring the economic security of railway transport is the Law of Ukraine "On the National Security of Ukraine", in the context of protection of such fundamental national interests of Ukraine as sustainable development of the national economy to ensure growth and the quality of the nation's life [17]. In addition, the Law states that Ukraine's policy in the spheres of national security and defense is aimed at ensuring the economic security of Ukraine, and thus at ensuring its component, the economic security of railway transport.
The National Security Strategy of Ukraine also emphasizes the need to create an effective system of security and resilience of critical infrastructure, the objects of which include railway transport [11]. It should be noted that the National Economic Strategy for the period up to 2030 pays special attention to railway transport in terms of challenges and barriers to further development and identifies ways to overcome them.
It is also important to mention that the Law of Ukraine "On Railway Transport" emphasizes that "the activity of railway transport as part of a unitary transport system contributes to the proper functioning of all sectors of public production, social and economic development and strengthening the state's defense capacities", which also indicates the need to ensure, in particular, the economic security of railway transport [21].
As noted earlier, Ukraine's railway transport is an important component of strengthening the welfare of our country, taking into account that the state of its economic security has an impact on the overall situation with Ukraine's national security. Thus, in recent decades the share of railway transport in goods transportation among other modes of transport has fluctuated from 23% in 2000 to 24% in 2010 and 20% in 2019 [22]. At the same time, the distribution of freight turnover by modes of transport shows favorable statistics for railway transport: from 44% in 2000 to 52% in 2010 and 51% in 2019. Another telling indicator is the average distance of transportation of 1 ton of cargo by different modes of transport: in 2019 this indicator reached 581 km for railway transport against 57 km for automobile transport.
It should be noted that the reduction of the abovementioned indicators began in 2014, due to the military aggression of the Russian Federation against Ukraine, namely the reduction of transit cargo and the termination of railway connection with the aggressor state, as well as the temporary occupation of the Autonomous Republic of Crimea and the City of Sevastopol and some districts of Donetsk and Luhansk regions.
Freight transportation for strategic industries is further evidence of railway transport's contribution to national security. Thus, it is Ukraine's railways that transport significant volumes of coal, oil and oil products, ferrous metals, ore, and grain cargo. In particular, in 2019 coal accounted for 19%, ore for 24% and grain for 13% of the total railway traffic [22].
As noted above, the reduction in railway transport performance began in 2014, a year for which no data are available at all. For clarity, the indicators of 2013 were included (Table 2). It can be seen that in 2019, compared to 2013, the transportation volume of coal, oil and oil products, ferrous metals, etc. decreased by almost half. In the same period, the transportation of grain cargo increased from 23 million tons to 40 million tons. In general, comparing the volumes of cargo transportation of strategic industries by railways in 2019 and 2000, there is a decrease in the main nomenclature and a more than sixfold increase in the volume of grain transportation.
An important element in ensuring national security is the implementation of military transportation by railway enterprises. Thus, the normative documents of the Ministry of Defense of Ukraine stipulate that railways conduct transportation of military units under conditions of "sufficient time and the need to move military units to a distance greater than the day's march" and with the aim to preserve motor resources and fuel [23]. If necessary, such rolling stock as railway passenger (including human) and freight (covered, platforms, gondola) cars is used.
A separate Resolution of the Cabinet of Ministers of Ukraine regulates "the mechanism of military railway transportation, their planning, organization and provision, calculations for such transportation, maintenance and service of railway access tracks to military units, institutions and organizations of the Armed Forces in peacetime and special period" [24]. The document governs the creation of separate structural subdivisions, departments of military communications on the railways, which should ensure the interaction of the Ministry of Defense of Ukraine with enterprises, institutions and organizations of railway transport in the locations of railway departments. According to official data, the Department of Military Communications on Railway Transport of Ukraine performs railway transportation to ensure activities in the interests of the Armed Forces of Ukraine, including during command post exercises, transportation of personnel and equipment of the Armed Forces of Ukraine during international military exercises, in particular in the framework of the international cooperation program "Partnership for Peace", as well as the rotation of peacekeeping forces [25].
It should be noted that railway transport enterprises are entrusted with ensuring "implementation of military railway transportation plans; provision of rolling stock for military railway transportation; movement of military trains by railway according to the relevant schedule; military echelons, military vehicles moving under the protection of military guards, water, heating, lighting; provision of loading and unloading devices and removable military equipment following legislative regulations; control over the movement of military echelons, as well as in some cases of military transport, their priority loading and delivery to the destination" [24].
It should also be mentioned that an important prerequisite for ensuring security at critical infrastructure facilities, which are enterprises and railway transport facilities, is the activity of a special department within the structure of the joint-stock company "Ukrzaliznytsia", which is responsible for organization and control over the implementation of legislative regulations on mobilization training, mobilization and civil defense in the special period of railway transport transfer from peaceful to martial law by the staff, structural subdivisions and branches of the enterprise [26].
Military activities on the territory of our state gave impetus to the use of military means for completely peaceful purposes. Thus, the use of unmanned aerial vehicles (UAVs) can be considered one of the modern means of ensuring the economic security of railway transport. Both foreign (Maghazei O. & Steinmann M., 2020) and domestic (Lapin P. V. & Katsman N. D., 2021) scientists have emphasized the technological capabilities of UAVs in the activity of railway transport. In particular, it is assumed that the prospects for the use of UAVs lie not only in visual control or monitoring, namely in the process of goods protection [27], but also, as the technology improves, in project management in railway transport [28]. Undoubtedly, UAVs can serve not only to ensure the safety of goods, but also the preservation and diagnosis of rolling stock and infrastructure objects, counteraction to terrorist threats and the prevention of critical situations, railway accidents and more.
As noted above, an important prerequisite for ensuring the economic security of railway transport, based on the threats and challenges to its activities, or analysis of the security environment, is to determine its capacity. Thus, in our opinion, among the basic components of railway transport capacity the following can be identified: the legislative framework, organizational structure and management, professional level and education, and resource provision. This choice of components is due, in particular, to the lack of an updated Law of Ukraine "On Railway Transport of Ukraine" and of subordinate legislation; periodic replacement of management and the lack of clear, transparent criteria for effective work; reduction of the requirements for the professional level of staff; the unresolved problem of the critical level of rolling stock wear, etc. Thus, a comprehensive approach to solving the problems of the economic security of railway transport and their prospective analysis will prevent and minimize the consequences of negative tendencies.
CONCLUSIONS
The economic security of railway transport is associated with many components of national security, in particular, technogenic security (train performance security), environmental security (protection of the environment from pollution by railway transport), military security (security of military transportation and cargo), information security (automation and signalization in railway transport), personnel security (professional education of employees, the activities of the supervisory board), because the gaps in these areas directly influence the financial performance of the industry.
Taking into account new approaches to national security, the economic security of railway transport also needs to be renewed. Thus, we stressed the need to ensure the sustainability and capacity strengthening of enterprises in the industry, which are achieved by reducing the level of taxation (property tax, land tax, value-added tax, etc.), the introduction of "zero" tax rates (in particular, value-added tax on all types of passenger traffic), as well as the intensification of the introduction of digital technologies and services.
Table 1. Scientific views on the essence of the "economic security of railway transport" concept

Kostiuk, Zh. S., 2013: "the state of railway transport enterprises, which helps to achieve organizational and technical unity, high quality and efficiency of transport services, which ensures the efficient functioning and sustainable development of railway transport based on a targeted set of measures that aim to prevent or mitigate the negative impact of external and internal threats" [18, p. 177]

Rekun, I. I., 2015: "a system of long-term management of financial and economic security taking into account existing and potential threats and risks, the definition of operational and future goals of the economic security system, formation of plans' strategy and flexibility following changes in the internal and external institutional environment of the enterprise" [19, p. 320]

Mezhokh, Z. P., 2007: "is determined by the strategic priorities and parameters of its functioning, which meet the transportation needs of cargo owners and passengers, the quality and competitiveness of transport products while maintaining sustainable work, social security of employees and financial stability of the industry" [20, p. 41]
Table 2. Transportation of goods by public railway transport, million tons. Source: data from the State Statistics Service of Ukraine [22].
Reproducibility of the First Image of a Black Hole in the Galaxy M87 from the Event Horizon Telescope (EHT) Collaboration
This paper presents an interdisciplinary effort aiming to develop and share sustainable knowledge necessary to analyze, understand, and use published scientific results to advance reproducibility in multi-messenger astrophysics. Specifically, we target the breakthrough work associated with the generation of the first image of a black hole, at the center of the galaxy M87. The image was computed by the Event Horizon Telescope Collaboration. Based on the artifacts made available by EHT, we deliver documentation, code, and a computational environment to reproduce the first image of a black hole. Our deliverables support new discovery in multi-messenger astrophysics by providing all the necessary tools for generalizing methods and findings from the EHT use case. Challenges encountered during the reproduction of EHT results are reported. The result of our effort is an open-source, containerized software package that enables the public to reproduce the first image of a black hole in the galaxy M87.
Introduction
Developing reproducible analyses is a challenging aspect of scientific research. Few real-world studies have been performed to provide guidance on the necessary processes and products, especially in domains relying on scientific computing, where reproducibility is limited by the availability of data, software, platforms, and documentation. Consequently, despite a group's best efforts, other scientists attempting to reproduce an analysis may find that the necessary information is incomplete.
We present an interdisciplinary effort to develop and share sustainable knowledge necessary to understand, reproduce, and reuse the published scientific results of the Event Horizon Telescope (EHT) project's analysis of the black hole in the center of the M87 galaxy [1]. Unlike our previous reproduction of Advanced LIGO's observations [2], none of the authors of this paper was involved in the original EHT analysis. Thus, our work builds exclusively on the several papers describing the EHT project workflow [3], data [4], [5], and software [6] that are available online. Each EHT paper presents specific aspects of the scientific discovery but a comprehensive approach including documentation, software, and environment to reproduce the published results of the EHT project is still missing. To this end, this paper follows rigorous reproducibility directions and expands preliminary work presented in a poster [7].
As part of our contributions, we investigate the availability and integrity of the data used to recreate the images of the M87 black hole. We model the image processing workflow and study its limitations in terms of software availability, dependencies, configuration, portability, and documentation. We rebuild the workflow's software stack to reproduce the published images; we use the software stack for our analysis of discrepancies between original and reproduced results. We document each step in this process, starting from a systematic assessment of the availability of data, software, and documentation. We deliver a collection of fully documented containers for data validation and image reconstruction. Finally, we compile guidelines to increase the reproducibility of computational workflows in scientific projects.
Our work enhances the reproducibility and reach of scientific projects like the EHT project, and facilitates the engagement of the overall scientific community, including postdocs and students, regardless of the domain.
M87 Event Horizon Telescope (EHT)
The EHT project uses Very Long Baseline Interferometry (VLBI) to link together eight radio telescopes around the world to study the immediate environment of a black hole with angular resolution comparable to the size of the black hole itself. In April 2019, the EHT Collaboration published measurements of the properties of the central radio source in M87 [8], including the first direct image of a black hole. The results, which received worldwide attention, revealed for the first time a bright ring formed as light bends in the intense gravity around a black hole in the galaxy M87. The black hole is 6.5 billion times more massive than the Sun.
The EHT project provides links to their calibrated data [5] published in CyVerse Data Commons, a publication describing their data processing and calibration [4], a link to the software used in their imaging workflow [6], and a publication describing the imaging workflow [3]. The EHT Collaboration released both data products and software, hosting them on third-party repositories. This is a common approach for many NSF-funded projects ranging in size from individual investigators to international collaborations.
Characterization of the EHT Workflow
The EHT workflow comprises three key components: data collection, raw data processing, and image building (see Figure 1).
Data Collection. Eight telescopes in the EHT network collect radio interferometry data on days and at times with permissible weather conditions at all sites, allowing the gathering of data from multiple angles and effectively turning the Earth into one single giant telescope. The EHT data used for the generation of the first M87 black hole images consist of spatiotemporal data of visibility amplitudes collected over five days in 2017 (i.e., April 5, 6, 7, 10, and 11). For each day, the collected raw data contain both high and low telescope frequencies.

Figure 1: High-level overview of the EHT project with its eight telescopes collecting radio interferometry data, its three workflow components including three pipelines for image building, and an image of the M87 black hole extracted from Figure 3 in [8].
Raw Data Processing. Raw data is first pieced together by using the Earth's geometry and clock/delay model to obtain a common time reference, and the pairwise correlation coefficients are computed. Then, the data is reduced to a manageable size for use in source imaging and model fitting: data is fringe-fitted, calibrated a priori, and network calibrated. Fringe-fitting is performed using the EHT-HOPS Pipeline for Millimeter VLBI Data Reduction [9]. Data undergoes a priori calibration and network calibration in the post-processing stage of the EHT-HOPS pipeline to create .uvfits [4] files. The processed data is stored in the First M87 EHT Results [5] data repository in .csv, .txt, and .uvfits formats and is available to the community. We use this processed data for our analysis, because the raw data is not open-access and the processing scripts were not open-source at the time of this reproducibility study.
Image Building. To reduce biases and increase trust in results, the EHT Collaboration uses three independently-designed pipelines to generate the black hole images. They are: the Difmap M87 Stokes I Imaging Pipeline (DIFMAP) [10], the EHT-Imaging M87 Stokes I Imaging Pipeline (EHT-Imaging) [11], and the Sparse Modeling Imaging Library for Interferometry (SMILI) [12]. Each pipeline is based on different methods, algorithms, and software libraries but uses the same input data. While the code for each individual pipeline is available as open-source software, the repositories do not contain all of the scripts for image post-processing and generation. Providing documentation for scientific software is challenging, and we find that documentation for packaging, installing, and running the pipelines can be incomplete or unavailable for certain parts of the analysis. Table 1 lists the available, unavailable, and incomplete data, scripts, code, and documentation used by the EHT workflow and shared with the community before our reproducibility study. To succeed in our effort, we generated and made available the missing components.
Validating the Data Integrity
A key aspect of any work reproducing scientific results is the validation of data integrity: the data used for the generation of the original EHT images should match the data made available to the community. The integrity of data is often considered secondary but can compromise any reproducibility effort, as previously demonstrated by the author in [2]. Figure 1 in [3] characterizes the original data in terms of telescope baselines (i.e., u-v coverages). Scripts to compare the properties of the original data with available data were not available. We generated the missing Python scripts and integrated them into a Jupyter notebook using standard Python modules such as matplotlib, pandas, and numpy. Figure 2 shows the comparison between the properties of the original data used in Figure 1 in [3] (set of sub-figures in Fig. 2(a)) and the reproduced properties using the available data (set of sub-figures in Fig. 2(b)). The top left plots in the two sets represent the intra-site EHT interferometer baselines (short baselines). The top right plots represent the aggregate baseline coverage of the EHT array for all four days observed. The bottom plots show the short and long baseline coverage observed by each telescope set at high and low frequencies each day. Qualitatively, we can assess the integrity of the data that we input to the three pipelines (i.e., DIFMAP, EHT-Imaging, and SMILI). The only difference is the incomplete left plot in Fig. 2(b), due to the fact that the analysis is based on both the available processed EHT data and the unavailable intra-ALMA data from the Atacama Large Millimeter/submillimeter Array (ALMA). This external dataset is not included in the EHT Data Products; based on communications with the EHT Collaboration, the data is not needed for the pipelines to be able to reproduce the black hole images.

Table 1: Availability of data, scripts, code, and documentation before our reproducibility study. Available and incomplete components are linked to the paper presenting them; missing components are marked as unavailable.

Data:
  Raw data: Unavailable
  Processed data: Available [5]
Scripts:
  Raw data processing: Unavailable
  Processed data validation: Unavailable
  Image post-processing: Unavailable
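As a rough illustration of what such a validation notebook involves, the sketch below plots aggregate u-v coverage from a CSV export of the processed visibilities. The file name and column names are assumptions for illustration only, not the actual layout of the EHT data products.

```python
# Minimal u-v coverage sketch; "u", "v" column names and the CSV file name
# are hypothetical. The released .uvfits files would first need conversion
# to a tabular format (e.g., via an astropy or ehtim loader).
import pandas as pd
import matplotlib.pyplot as plt

vis = pd.read_csv("eht_2017_visibilities.csv")  # assumed export of processed data

fig, ax = plt.subplots(figsize=(5, 5))
# Interferometer baselines are symmetric, so plot both (u, v) and (-u, -v).
ax.scatter(vis["u"], vis["v"], s=2)
ax.scatter(-vis["u"], -vis["v"], s=2, color="gray")
ax.set_xlabel("u (Gλ)")
ax.set_ylabel("v (Gλ)")
ax.set_title("Aggregate EHT baseline coverage")
ax.invert_xaxis()  # conventional orientation for u-v plots
fig.savefig("uv_coverage.png", dpi=200)
```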
Rebuilding the EHT Software Stack
The three EHT pipelines that are part of image building can be modeled in terms of their functional modules (Figures 3(a), 4(a), and 5(a)). Each pipeline comprises a parameter definition module for users to establish workflow-specific behaviour, as well as data preparation and data pre-calibration modules to pre-process the input files that are fed to the core of each pipeline. A module performing the image reconstruction cycles runs the image reconstruction algorithm; note how each pipeline uses a different number of cycles. The output of each pipeline includes a final image and statistics module that is used for qualitative and quantitative analysis of the reconstructed results, respectively. In SMILI, the order of the first two modules is inverted, and an image evaluation module for data visualization is available at the end of the pipeline.
Although the three pipelines share similar high-level steps, each of them has its own set of auxiliary steps, dependencies, and implementation. Figures 3(b), 4(b), and 5(b) show the dependencies and software components of each pipeline in relation to its functional modules. DIFMAP (Figure 3) is written in C and uses the CLEAN algorithm for image reconstruction involving iterative deconvolution, paired with a technique called "difference mapping." EHT's DIFMAP script takes a file containing observation data, a mask (set of cleaning windows) file that defines areas of interest for the algorithm to iterate upon, and five command-line arguments, which have been provided in the EHT repository [6]. After loading this file, the script initializes values, reads the file specifying the mask, and begins the pre-calibration phase, which involves its first cleaning and phase self-calibration. Afterwards, the image undergoes twenty rounds of amplitude self-calibrations and cleanings, and this is when image reconstruction occurs. EHT-Imaging (Figure 4) uses the Regularized Maximum Likelihood (RML) method of image reconstruction and relies heavily on the eht-imaging Python module (EHTIM) to complete its processes. The EHTIM module defines numerous classes to allow the loading, simulation, and manipulation of VLBI data. By leveraging the classes in this module, the EHT-Imaging workflow loads both the low and high band data files of a single day's observations into a data object and performs various data preparation and pre-calibration steps. The workflow then moves to an imaging cycle with four iterations. Each successive iteration relies directly on the image generated in the previous iteration. After four iterations, the final image is output. The pipeline also allows for optional outputs, including the final image and an image summary file containing various imaging parameters and data related to the imaging process.
SMILI (Figure 5) is also written in Python and uses RML like EHT-Imaging. Prior to imaging, SMILI also uses the EHTIM module in order to use data sets pre-calibrated consistently with the other workflows. After the pre-calibration stage, the software generates data tables that are used for the final imaging process. Reconstruction of an image begins with a circular Gaussian, with successive iterations relying on the image generated from the previous iteration. There are four stages of iterations, with each stage performing three imaging cycles. Once completed, the software outputs the final image and packages the input, pre-calibrated, and self-calibrated data files for traceability.
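To make the warm-started imaging cycles concrete, the following toy sketch mimics the structure shared by the RML pipelines: each round re-fits a regularized model to synthetic "visibilities," starting from a smoothed copy of the previous image. Every quantity here is a synthetic stand-in; the real pipelines use far richer data terms, regularizers, and self-calibration steps.

```python
# Structural sketch only: regularized least squares with warm restarts,
# standing in for the RML imaging cycles described above.
import numpy as np

rng = np.random.default_rng(0)
npix, nvis = 32, 400
truth = np.zeros(npix * npix); truth[500:530] = 1.0   # toy "ring" of flux
A = rng.standard_normal((nvis, npix * npix)) / nvis    # toy measurement matrix
vis = A @ truth + 0.01 * rng.standard_normal(nvis)     # noisy "visibilities"

def solve(img0, lam=1e-3, steps=200, lr=0.5):
    """Gradient descent on ||A x - vis||^2 + lam ||x||^2 from a warm start."""
    x = img0.copy()
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - vis) + 2 * lam * x
        x = np.clip(x - lr * grad, 0, None)            # keep flux non-negative
    return x

img = np.full(npix * npix, truth.sum() / truth.size)   # flat initial image
for cycle in range(4):                                  # four imaging rounds
    img = solve(0.5 * img + 0.5 * img.mean())           # crude "blur" restart
    chi2 = np.mean((A @ img - vis) ** 2)
    print(f"cycle {cycle + 1}: chi2 = {chi2:.3e}")
```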
Note that each pipeline has its own GitHub repository [10], [11], [12]. The compilation of each pipeline's original code from the three EHT repositories resulted in several errors. For example, on a Power9 system we missed dependencies and had to remove optimization compilation flags from the installation script to generate the executable code successfully. In general, none of the three pipeline codes includes a comprehensive list of required software dependencies and libraries used or their versions. We solved dependencies manually by editing problematic scripts; we used Spack, Anaconda, and Pip to install the latest stable version of each necessary library. Once the compilation was successfully completed, we experienced runtime errors with EHT-Imaging and SMILI that we solved by correcting syntax issues in part of the Python code. We could not find documentation on how to transform the grayscale output of DIFMAP and SMILI into the colored and formatted images from Figure 11 in [3]. We solved this issue by utilizing the EHTIM module for post-processing of grayscale output. In the process of rebuilding the EHT software stack, we documented the software packages used, their dependencies, the compilation requirements, and the execution processes for all three pipelines, completing the unavailable or incomplete components in Table 1.
Packaging and Distribution
To support the portability of the EHT workflow across different platforms, we created a collection of four Docker containers that allows users to reproduce two key results from the EHT project: the characterization of available data (i.e., Figure 1 in [3]) and the final EHT images of the black hole (Figure 11 in [3]). A first container hosts the entire setting to reproduce the validation of the data integrity; it includes the data tarball from the EHT Data Products page along with our Bash, Python, and Docker scripts. We developed these scripts to automate the installation and configuration of the environment in an easily accessible and portable way. So that users can be fully confident in the validation of the data integrity, we have incorporated a spare tarball within the container on which users can run the md5sum program and compare the result with the md5sum of the data from the EHT Data Products page. If both checksums match, users know that the data on the Data Products page has not been modified in any way, and they can move on with the validation by running the Python scripts to reproduce the images of the black hole. The other three containers are used to reproduce the final EHT images of the black hole. Each of the EHT pipelines is packaged into an independent container that automates its installation, dependency setup, environment configuration, and execution. The containers include our own scripts and auxiliary files for conducting the image post-processing steps, which are not available in the original EHT repository.
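A minimal sketch of that integrity check follows, assuming hypothetical file names for the downloaded and spare tarballs.

```python
# Streamed MD5 comparison so large tarballs never need to fit in memory.
import hashlib

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

downloaded = md5sum("EHT_Data_Products.tgz")   # tarball from the Data Products page
reference = md5sum("spare_copy.tgz")           # spare tarball shipped in the container
print("data integrity OK" if downloaded == reference else "MISMATCH: do not proceed")
```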
All four containers are publicly available on Docker Hub. Additional documentation for deploying and using these containers is available on GitHub, along with the scripts to generate the figures reproduced in this paper. These materials augment the existing containers in the EHT Docker Hub and the EHT repositories [6].
Reproducing EHT Images
We tested the containerized pipelines both on commodity hardware (a laptop with an Intel CPU) and on a Power9 cluster at the University of Tennessee, Knoxville. Figure 6 compares our results: Figure 6a shows the original images from Figure 11 in [3], and Figure 6b shows our reproduced images using the containerized pipelines. The two figures show that we can reproduce the M87 images for all three pipelines.
The images in Figure 6 provide us with a qualitative comparison. Both sets of images look visually similar in terms of shape and brightness, and the similarity is consistent across pipelines. To perform a quantitative analysis, we compare the "closure" quantities reported in Table 5 in [3] with those reported by our executions of the three pipelines. For each day and each pipeline, we compare both the χ²_CP and χ²_logCA quantities computed across the top set of parameters [3]. For brevity, we only report the values with 0% systematic uncertainty. We observe consistency between the two sets of results, with no perfect agreement for the EHT-Imaging and SMILI pipelines. We also find a larger difference between the original and reproduced values for the DIFMAP pipeline: this is consistent with the discussion of the different time averaging used in DIFMAP.
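Once both sets of χ² values are transcribed, the δ comparison itself reduces to simple arithmetic; a sketch with placeholder numbers (not the published values) follows.

```python
# Placeholder values only, keyed by (pipeline, day); the published numbers
# live in Table 5 in [3] and in our reproduced runs.
original = {("DIFMAP", "Apr 6"): 1.21, ("EHT-Imaging", "Apr 6"): 1.10}
reproduced = {("DIFMAP", "Apr 6"): 1.35, ("EHT-Imaging", "Apr 6"): 1.12}

for key in sorted(original):
    delta = reproduced[key] - original[key]
    print(f"{key[0]:12s} {key[1]}: chi2_CP delta = {delta:+.2f}")
```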
Lessons Learned and Guidelines
We compile lessons learned and guidelines to support the reproducibility of scientific projects based on our experience and observations reproducing the M87 black hole images from the EHT project.
Data Availability. The unavailability of the raw data made the direct validation of the pipeline input data unfeasible. As a proxy for data validation, we reproduced Figure 1 in [3], as this figure captures properties of the data such as telescope frequency and coverage. We are able to reproduce most of these properties except for the intra-site EHT interferometer baselines (short baselines), because the intra-ALMA data is not available. While this was not the case in this study, any incomplete or missing dataset may result in the users' inability to fully verify the data integrity, and can threaten the entire reproducibility process. Data size or ownership constraints can be an obstacle to making raw data available to the public. Under these circumstances, data integrity mechanisms such as hashes ensure the correctness of processed data when releasing the raw data is not feasible. We add the additional service to run an MD5 integrity check for the pipeline input data as part of our EHT container set to facilitate data integrity validation.

Table 2: Closure quantity χ² values and statistics for top set images with 0% systematic uncertainty. We compute the difference δ between the Top Set values in Table 5 in [3] and our reproduced values. None of the values agree exactly, but our values are consistent with the spread reported in [3] for the EHT-Imaging and SMILI pipelines.
Software Availability. Several pieces of software were unavailable at different stages of the EHT workflow and for the three pipelines. The raw data and the corresponding software to process the data are not available; neither are the scripts to run the data validation. We developed those scripts for data validation purposes. The code for running the three pipelines is completely available, but the image post-processing scripts are not, which forced us to experiment with different settings in order to obtain results comparable to the original for each pipeline. Finally, the plotting libraries used to reproduce the results in Figure 1 from [3] were insufficiently defined. Thus, we manually tuned our plotting scripts to obtain a suitable plotting configuration. The qualitative differences between the original and reproduced images can be the result of our manual tuning, which indicates that just sharing the code for the three pipelines is not sufficient to reproduce the original images of the black hole in the galaxy M87. To support portability across platforms, we generated four containers that allow users to execute the data integrity validation and original pipeline codes. We also enable execution of the end-to-end workflow by providing all auxiliary materials for the image post-processing, figure generation, and result analysis.
Documentation Availability. In general, there is insufficient documentation on how to package, install, and execute the EHT pipelines, as well as on how to perform both qualitative and quantitative analyses of the results. This hinders the overall reproducibility effort. For instance, documentation is key to reproducing Table 5 in [3]. Regarding the pipelines, there is insufficient information about software dependencies and versions used, as well as file locations and their use. Documenting the whole EHT workflow beyond its image reconstruction components is crucial for the successful reproducibility of the results. Our documentation covering configuration and use of the entire EHT workflow was instrumental to the success of our reproducibility study.
Software Packaging. The incomplete documentation resulted in installation, dependency, and portability challenges. We manually edited dependencies to allow installation and compilation, and had to override installation instructions that were resulting in unstable environments. We found that by containerizing the workflow we can hide these challenges from the end user, simplifying the installation and deployment of the EHT workflow.
Methods Description. Incomplete descriptions of the results analysis process (e.g., the data averaging time or the additional systematic error budget added to the uncertainties) complicate the reproducibility of the χ² statistics in Table 5 in [3], as errors add up quickly. Conducting an adequate quantitative assessment of the final results becomes very challenging under these circumstances. Members of the EHT Collaboration highlighted in conversations with the authors of this paper how a qualitative comparison of the images is more interpretable.
Access to Final Results. The authors of [3] did not release the output fiducial images, and therefore we did not have a fixed reference to use for direct comparison of our reproduced images and the original ones. This, in addition to partial access to the data and incomplete description of the methods used, prevents us from conducting a complete validation of the reproduced images.
Access to Distributed Knowledge. The EHT Collaboration made substantial investments to allow independent users to qualitatively and quantitatively reproduce their results and ensure the robustness of the EHT project. Nonetheless, we found it challenging to reproduce the original results without direct knowledge of the methods and analyses, or direct collaboration with the authors of the studies. Our experience illustrates the general challenges that users external to a project face when gathering knowledge on data, code, and documentation that was originally generated by multiple teams in a distributed fashion. The effort of the EHT Collaboration to remove biases by designing and deploying three completely separate pipelines, while instrumental for the trustworthiness of the project results, is also an obstacle to the project's reproducibility.
Conclusions
In this paper we deliver our experience reproducing the black hole images from the EHT project, and report new guidance and practices for building reproducible scientific research. Our work complements the work of the EHT Collaboration with supplemental data, scripts, documentation, and a set of containers. Postdocs, graduate and undergraduate students, and even high school students can benefit from accessing our data and code, and using our documentation to reproduce findings from the EHT project, learn about the EHT funding, and ultimately get involved in STEM research. Our guidance and practices can be incorporated more broadly by other scientific workflows. The EHT project continues to be a leader in reproducibility efforts and has provided comprehensive data products for their recent observations of Sgr A*.
Assessing the level of detail required to cover the vast knowledge developed in a project the size of EHT is a complex task. Finding the balance between the effort from original research teams to enable reproducibility, and users attempting to reproduce the results is still an open question. Our experience with the EHT and LIGO projects reveals an important and recurring issue in reproducibility: challenges remain in disseminating findings in a way that allows reproducibility of results without direct interaction with the original team that produced them.
Genome-Wide Association Study Dissecting the Genetic Architecture Underlying the Branch Angle Trait in Rapeseed (Brassica napus L.)
The rapeseed branch angle is an important morphological trait because an adequate branch angle enables more efficient light capture under high planting densities. Here, we report that the average angle of the five top branches provides a reliable representation of the average angle of all branches. Statistical analyses revealed a significantly positive correlation between the branch angle and multiple plant-type and yield-related traits. The 60 K Brassica Infinium® single nucleotide polymorphism (SNP) array was utilized to genotype an association panel with 520 diverse accessions. A genome-wide association study was performed to determine the genetic architecture of branch angle, and 56 loci were identified as being significantly associated with the branch angle trait via three models, including a robust, novel, nonparametric Anderson-Darling (A-D) test. Moreover, these loci explained 51.1% of the phenotypic variation when a simple additive model was applied. Within the linkage disequilibrium (LD) decay ranges of 53 loci, we observed plausible candidates orthologous to documented Arabidopsis genes, such as LAZY1, SGR2, SGR4, SGR8, SGR9, PIN3, PIN7, CRK5, TIR1, and APD7. These results provide insight into the genetic basis of the branch angle trait in rapeseed and might facilitate marker-based breeding for improvements in plant architecture.
control branch angles in Arabidopsis and peach 9 . Moreover, a recent study in Arabidopsis showed that mutants with defects in auxin homeostasis or auxin response genes, such as wei8 tar2, tir1-1, afb4-2 afb5-5, and arf10-3 arf16-2, have altered branch angles 10 . However, despite the increasing understanding of branch angle mechanisms in model plants, the genetic basis of branch angle in rapeseed has not been elucidated, a situation that reflects the complexity of genetic studies of polyploid plants.
A genome-wide association study (GWAS) provides a methodical analysis of the genetic architecture of complex traits in crops. GWAS identifies the underlying QTLs at a relatively high resolution, taking full advantage of ancient recombination events 11,12 . To date, GWAS has been successfully performed in many crops, including rice 13 , maize 14,15 and rapeseed [16][17][18] . Among the proposed statistical approaches for GWAS, the mixed linear model (MLM) is a popular method that can eliminate the excess of low p-values for most traits 19,20 . However, MLM can lead to false negatives by overcompensating for population structure and kinship 21 , and it also has limited statistical power to detect rare alleles, which in fact constitute a substantial proportion of natural variation 22 and have potentially large phenotypic effects 23 . Accordingly, a complementary strategy, the Anderson-Darling (A-D) test, has recently been proposed to rectify these shortcomings 24 . The A-D test is a novel nonparametric statistical method that offers higher power than MLM for traits that have abnormal phenotypic distributions and are controlled by moderate-effect loci or rare variations 24 .
In the present study, we investigated correlations between the angles of branches at different positions and between these angles and other important agronomic traits. Genome-wide SNPs of this panel were assessed using the 60K Brassica Infinium ® SNP array, and the corresponding phenotype was evaluated in four environments.
GWAS was performed with 520 diverse accessions to identify underlying QTLs that contribute to rapeseed branch angle variations. A total of 56 loci that were significantly associated with the branch angle trait were identified by three association methods: MLM, the general linear model (GLM) and the A-D test. Numerous candidate genes were identified based on the LD decay ranges of these loci, including multiple orthologues of well-characterized Arabidopsis genes. This study demonstrates that GWAS can be used as an effective approach for dissecting complex quantitative traits in rapeseed.
Results
Phenotypic variations in branch angle among accessions. High positive correlations were observed among the angles of different branches, indicating similar genetic control (Table 1). In general, the correlation coefficient between branches was reduced with decreasing physical proximity. For example, the angle of the second branch from the top showed a maximum correlation with the angle of the third branch (r = 0.72) and then with the fourth and fifth branches (r = 0.61, 0.53, respectively). In addition, we observed that the average angle of a different number of branches from the top was significantly positively correlated with the average angle of all branches, particularly when the branch number reached five (r = 0.93, Table 1). Thus, it is feasible to measure the five top branches as a representation of the branch angle phenotype. Phenotypic data for 30 individual plants are presented in Supplementary Data S1.
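The representativeness check described above amounts to correlating the mean of the top-k branch angles with the mean over all branches; a sketch on synthetic angle data follows, where the per-plant, per-branch table layout is an assumption for illustration.

```python
# Synthetic stand-in for the 30-plant measurement table (one row per plant,
# one column per branch position from the top).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
angles = pd.DataFrame(rng.normal(42, 6, size=(30, 9)),
                      columns=[f"branch_{i}" for i in range(1, 10)])

all_mean = angles.mean(axis=1)
for k in range(1, 8):
    top_k_mean = angles.iloc[:, :k].mean(axis=1)   # k branches from the top
    r = np.corrcoef(top_k_mean, all_mean)[0, 1]
    print(f"top {k} branches vs all: r = {r:.2f}")
```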
We collected phenotypic data for the association panel in four environments; the trial performed at the Changsha farm was the only trial with phenotypic data across two growing seasons (Table 2). Extensive phenotypic variations were observed for branch angles in the association panel, as indicated by the descriptive statistics shown in Table 2. In the four environments, the branch angle varied from 21.7 ± 1.9° to 71.7 ± 4.8°, with an average ranging from 40.3 ± 6.3° to 43.2 ± 6.3°. The coefficient of variation was constant in the different environments and ranged from 14.5% to 16.2%. The phenotypic data for all accessions in the four environments as well as BLUP values are presented in Supplementary Data S2.
The branch angles of the association panel in the four environments exhibited significantly positive correlations with each other, indicating the reliability and repeatability of these phenotypic data (Supplementary Table S1). Analysis of variance (ANOVA) revealed that the genotype, environment (year and location) and genotype × environment interaction all had significant effects on the branch angle, suggesting the crucial influence of environment on branch angle regulation (Supplementary Table S2). Based on phenotypic data for the four environments, the broad-sense heritability of the branch angle was as high as 78.5%. We then analysed the relationship between the branch angle and other important agronomic traits (only data for 2012/2013 and 2013/2014 at Changsha were available for these traits). Notably, branch angle was significantly positively correlated with plant height, which had the highest coefficient (r = 0.25, 2012/2013 Changsha), followed by branch number (r = 0.17, 2012/2013 Changsha, Table 3). Moreover, significant positive correlations were also observed between the branch angle and four yield-related traits, including the main inflorescence pod number, pod length, seed number per pod and seed yield (2012/2013 Changsha, Table 3). Similar results were observed in the 2013/2014 Changsha phenotypic data (Supplementary Table S3). Therefore, the results indicated a close relationship between branch angle and multiple plant-type and yield-related traits.
SNP performance, quality and in silico mapping. The Illumina Brassica 60K Infinium ® SNP array was used to genotype 530 rapeseed accessions. The raw data generated using the Illumina Infinium platform were further analysed with Genome Studio software by cluster refinement with an optimum accession Call Rate > 0.7; SNP Call Freq > 0.75; Minor Freq > 0.05; AA, BB frequency > 0.03; and GenTrain Score > 0.5. Through this analysis, 520 accessions and 33,218 polymorphic SNPs (63.7%) were retained. After excluding SNPs lacking clearly defined clusters or with multiple loci in the genome, 19,167 high-quality SNPs (36.7%) genotyped across 520 rapeseed accessions were utilized for association mapping. The genotyping scores for all polymorphic SNPs are presented in Supplementary Data S3.
Population structure and linkage disequilibrium. The population structure of the association panel was calculated for the 19,167 SNPs using STRUCTURE, and the parameters LnP(D) and Delta K suggested that the 520 genotypes could be assigned to two groups. A probability of membership threshold of 0.60 was used, and 65 and 398 lines were assigned to Groups 1 and 2, respectively, with the remaining 57 lines classified into a mixed group (Supplementary Data S4). In addition, the lm procedure in R showed that population structure accounted for 15.8% of the phenotypic variation of branch angle. The data for population structure and kinship are presented in Supplementary Data S5.
When r² = 0.1, seven chromosomes (A01 to A07) exhibited comparatively modest LD, with distances ranging from 708 to 873 kb (Fig. 1, Supplementary Table S4). Chromosomes A09, C03, C05, C06 and C09 showed stronger LD, with distances ranging from 1,039 to 2,968 kb. However, particularly reinforced LD patterns were observed for chromosomes A08, A10, C01, C02, C04, C07 and C08, which presented a corresponding LD decay ranging from 4,264 to 8,704 kb. Consistent with the performance of the major A chromosomes, the A subgenome exhibited modest LD, with a distance of up to 1,046 kb, whereas the C subgenome exhibited extremely conserved LD of 7,882 kb when r² = 0.1. The average LD decay for the entire genome was 6,660 kb when r² = 0.1 (Supplementary Fig. S1, Supplementary Table S4).
To better utilize the genotyping and phenotyping information obtained in the present study, two permissive models, GLM and the A-D test, were introduced into our association analysis. Briefly, the two models detected 48 (GLM) and 24 (A-D test) loci significantly associated with BLUP and individual environmental data at the corresponding Bonferroni thresholds, −log10(p) = 4.3 for GLM and −log10(p) = 5.6 for the A-D test (Table 4, Fig. 2). We then compared the consistency of the loci identified among the three methods. All loci detected using MLM were repeatedly detected through either GLM or the A-D test, and four loci were consistently detected across all three models, including Bn-A03-p6228570 on A03, Bn-A04-p4410144 on A04, Bn-A07-p13172047 on A07 and Bn-scaff_16062_1-p345501 on C05 (Table 4). A total of 16 associated loci were consistently detected between GLM and the A-D test, whereas 20 and 8 loci were exclusive to GLM and the A-D test, respectively (Table 4). Altogether, the three methods identified 56 unique loci significantly associated with the branch angle trait. Except for C01, these loci are unevenly distributed over all chromosomes. A03 and A07 both have a maximum of 10 loci, A04 and C08 five loci, and A10 and C03 three loci. The remaining chromosomes have either one or two loci (Table 4). Approximately two-thirds of the loci (38/56) are distributed in the A subgenome; the remaining loci are distributed in the C subgenome. When using a simple additive model, the 56 loci explained up to 51.1% of the phenotypic variation.
Candidate gene mining. When using the whole genome genes as reference, two categories of genes, genes with auxin efflux transmembrane transporter activity (GO:0010329) and genes with auxin transmembrane transporter activity (GO:0080161), were found to be enriched in the LD decay ranges of significant loci (false discovery rate < 0.05, Supplementary Fig. S3, Supplementary Data S6). Based on the GO annotation, Arabidopsis orthologue information and published gravistimulation microarray data, we further predicted candidate causal genes for loci significantly associated with the branch angle trait within the observed LD decay (r² > 0.1), with 77 plausible candidate genes predicted for 53 loci. Due to the low rate of LD decay (776.8 kb on average when r² = 0.1), more than one-third of the GWAS loci (20/53) have at least two candidate genes (Supplementary Data S7). For example, three candidate genes, BnaA01g12950, BnaA01g13320 and BnaA01g13580, which are orthologous to Arabidopsis CRK5 25 , ARF9 26 and TRH1 27 , were collectively identified within the LD decay of the GWAS locus Bn-A01-p7430311 (r² > 0.2, Supplementary Data S7). Briefly, 48 (62.3%) candidate genes are related to auxin asymmetric redistribution, and 10 (13.0%), 5 (6.5%), and 5 (6.5%) candidate genes are involved in gravity perception, gravity signal transduction and organ curvature, respectively (Supplementary Data S7). The remaining nine genes (11.7%) are associated with ROS, phototropism, ethylene, and strigolactone (Supplementary Data S7).
In Arabidopsis, sgr1-sgr9 represent a series of defective shoot gravity perception mutants with abnormal branch angles 5 . The vacuolar membrane dynamics of the stem gravity-sensing cells of the sgr2, sgr3, sgr4 and sgr8 mutants are abnormal and affect the sedimentable movements of statoliths (amyloplasts) 5 . In the sgr9 mutant, interaction between statoliths and actin filaments is perturbed, resulting in attenuated statolith sedimentation 28 . In the present study, two orthologues of SGR4, BnaA04g09380 and BnaC04g31610, are located at 8.4 Mb on A04 and 33.4 Mb on C04, 205.4 kb downstream from the peak SNP Bn-A04-p6929056 and 370.9 kb upstream from the peak SNP Bn-scaff_16876_1-p1162532, respectively (Supplementary Data S7). We also identified an orthologue of SGR2 at 13.6 Mb on C08, which is 302.7 kb downstream from the peak SNP Bn-scaff_16468_1-p450133 (Supplementary Data S7). In addition, the SGR9 orthologue BnaA10g26980 was identified at 17.1 Mb on A10, 132.8 kb upstream from the peak SNP Bn-A10-p17414621 (Supplementary Data S7). Notably, the loci harbouring the SGR4 orthologue on C04 and the SGR9 orthologue on A10 were both identified by stringent MLM. In addition, the orthologues of Arabidopsis SGR3 and SGR5 were detected within the LD decay of the SNPs at 18.1 Mb on A02 and 22.7 Mb on A06 by using the A-D test, respectively, though the corresponding signals were not significant (− log 10 (p) = 4.6 < 5.6, − log 10 (p) = 4.8 < 5.6).
Discussion
The ideotype theory has prompted crop geneticists to map and clone plant type-related QTLs. To date, several genes that control the rice tiller angle, including LAZY1 6 , TAC1 8 13 . In the present study, we performed GWAS in 520 diverse rapeseed accessions to reveal the QTLs affecting the branch angle trait. Based on the results of GWAS and GO annotation, we identified 77 plausible genes underlying the abundant phenotypic variation, including orthologues of well-characterized Arabidopsis genes, such as SGR2, SGR4, SGR9, LAZY1, TIR1, AFB4, PIN3 and PIN7.
The phenotypic data obtained for branches at different positions enabled us to determine the branch(es) that provide the most reliable representation of the branch angle phenotype. High correlation (r = 0.93) was observed between the average angle of five branches from the top and the average angle of all branches, and measuring only these five branches rather than all branches would reduce the time required to perform such an evaluation and preclude confounding due to the variable branch numbers of different plants. Our statistical analyses revealed that plant height and branch angle are positively correlated, suggesting that shorter accessions tend to accompany a more compact canopy architecture. In addition, we also observed positive correlations between plant-type traits and yield-related traits. These results are informative to breeders attempting to adapt branch angles to achieve the ideal canopy architecture and high yields.
The association panel examined in the present study exhibited a strong LD of up to 6,660 kb (1,046 and 7,882 kb for the A and C subgenomes) at a cut-off value of r² = 0.1, which can be explained by three possible reasons. First, the partially cross-pollinating habit of rapeseed (natural outcrossing rate of 10-30%) could partially account for this phenomenon, as limited recombination from inadequate outcrossing is insufficient to break strong LD. A similar phenomenon has been observed in crops with low natural outcrossing rates, such as rice, which showed residual LD at a distance of 2,000 kb 21 . Second, the majority of accessions in the present association panel are Chinese elite breeding accessions; therefore, strong artificial selection for certain traits, such as punctual flowering time, high yield, and low erucic acid and glucosinolate levels, has exerted strong selective sweeps on the flanking regions of favourable genes and consequently has caused strong LD. Similar phenomena have been observed in maize, with LD decay ranging from less than 1 kb in landraces 30 to more than 100 kb in elite breeding lines 31 . Third, the "founder effect" could also account for the strong LD in our association panel, given that more than 39.7% of the 63 certified low erucic acid or glucosinolate cultivars from 1985 to 1996 are directly derived from a small number of donor parents 33 . Consequently, the "founder effect" may have caused the strong LD observed in our association panel. Furthermore, the last two explanations might also be responsible for the stronger LD observed in our association panel as compared to that of several reported B. napus populations [34][35][36] .

The 56 GWAS loci in the present study cumulatively explained up to 51.1% of the phenotypic variation when using a simple additive model. Interestingly, despite the numerous significant SNPs identified, none of them explained more than 7.0% of the phenotypic variation. The significant SNPs identified by GLM and MLM explained 2.6-6.6% and 3.2-6.0% of the phenotypic variation, respectively (Table 4). Similar genetic architectures have been observed for other complex traits, such as maize leaf angle, whereby 96.0% of the 30 significant QTLs had less than a 2.5° effect 15 . In addition, the 29 large branch angle lines in the association panel have favourable alleles for 32.8 ± 5.6 (± SD) of the 56 loci associated with branch angle, whereas the 27 small branch angle lines have 17.9 ± 4.9 (± SD) favourable alleles (Supplementary Fig. S2). Accordingly, these results suggest that the observed large differences in branch angle among inbred rapeseed lines are not caused by merely a few genes with large effects, but rather by the cumulative effects of numerous QTLs having only small individual impacts on the trait.
In the present study, three methods (MLM, GLM and the A-D test) were collectively applied to dissect the genetic architecture of branch angle. Considerable overlap in GWAS loci (described in the Results) was observed between the methods; however, we also observed variations in the power and applicable scenarios of these three methods. Compared with GLM, and particularly MLM, the A-D test showed two advantages. First, because the A-D test is a nonparametric test, it is more robust, particularly for traits with abnormal phenotypic distributions that are controlled by moderate effects or rare variations 24 . For example, two genes tagged by rare alleles, BnaC08g09050 and BnaC08g44370 (MAF = 0.09 and 0.08, respectively), which are orthologous to the well-known Arabidopsis genes SGR2 37 and 5PTASE13 38 , were detected solely by the A-D test. Second, population structure has profound effects on the results of GWAS, as reported in a previous study 34 . Because the A-D test does not include a correction for population structure, p-value overcorrections do not occur when the method is applied to traits that are correlated with population structure 39 . Nonetheless, MLM performed better than the A-D test for major QTLs, particularly those with common alleles 24 , such as BnaC04g31610 and BnaC07g15160 (MAF = 0.29 and 0.35, respectively). Compared with GLM, MLM is more stringent because it incorporates familial relationships (kinship) to further reduce p-value inflation, which caused false negative results in the present study. For example, the loci harbouring BnaA07g19520, BnaA10g19550 and BnaA10g26980 failed to reach the significance threshold in MLM but not in GLM, even though the orthologues of these genes in Arabidopsis, TIR1 10 , LAZY1 40 and SGR9 28 , have well-documented roles in branch angle regulation. Because the robust A-D test and GLM may introduce more false positives, it is feasible to combine the A-D test and GLM with MLM to maximize detection power while remaining alert to potential false positives.
Despite the large number of loci associated with the branch angle trait, few loci were consistently detected in all environments and a considerable number of loci were only detected in one environment (Table 4). There are three possible reasons for these results. First, because GLM and the A-D test are permissive models that potentially introduce false positives, certain loci may have represented spurious signals that reflect confounding population structure. Second, environment can influence QTL expression and its magnitude because environment represents the manifestation of complex biotic, abiotic and agronomic factors 41 . Indeed, environmental effects are evident in previous QTL mapping analyses [41][42][43] . In the present study, ANOVA analysis revealed that environment did have significant effects on branch angle; therefore, the expression of certain sensitive QTLs could be affected by environment. Third, the GWAS model chosen can also affect the results because different models are proposed based on different statistical assumptions. For example, the locus harbouring one copy of LAZY1 on C03 was detected in two environments (2012-2013 Nanjing and 2013-2014 Wuhan) by using the A-D test but not by using GLM.
Branch angle is a special gravitropic set-point angle (GSA) representing a plant architecture trait that is primarily governed by plant gravitropism; however, the precise mechanisms underlying branch angle maintenance and development remain poorly understood. Interestingly, a recent study proposed a model to address this issue 10 , showing that an auxin-dependent antigravitropic response acts antagonistically with the gravitropic response to maintain angled growth: the branch angle value is dependent on the magnitude of the antigravitropic response and is mediated via TIR1/AFB-AUX/IAA-ARF-dependent auxin signalling pathway within stem endodermal cells 10 . Intriguingly, in the present study, the orthologues of Arabidopsis auxin signalling genes, including TIR1, AFB4, AXL1, RUB1, ARF9, ARF10 and IAA13, were collectively identified as candidate genes (Supplementary Data S7). Although the precise mechanism underlying the antigravitropic response is not fully understood, this model provides a conceptual framework for understanding the mechanism responsible for the branch angle trait and highlights a new avenue for further research.
Plant materials.
A set of 530 diverse rapeseed inbred accessions, including landraces and elite varieties, was collected to construct the association panel, a subset of which was reported in a previous study of flowering time 44 . Based on the information obtained for these varieties, plants were assigned to three germplasm types: winter type (41), semi-winter type (435) and spring type (54). The origins of the plants showed that 485 accessions originated from Asia, 32 from Europe, 8 from North America and 5 from Australia (Supplementary Data S4). Remarkably, China contributed 476 accessions that originated from three rapeseed sub-regions with diverse climates, land fertilities and hydrologies, and these accessions broadly represent the major genetic diversity of the Chinese rapeseed gene pool.

Experimental design and trait measurement. In 2011/2012 in Wuhan, we measured the branch angles of 30 randomly selected lines at three weeks after the final flowering stage (Supplementary Data S1). A small section from the base of the stem encompassing each branching node was photographed and analysed using Photoshop to measure the adaxial angle of the branch to the stem. Correlation analysis between the angles of branches at different positions was performed in R 44 .
The association panel was grown in the 2012/2013 and 2013/2014 growing season using a randomized complete block design with three replications on experimental farms at Changsha (N 28.22°, E 113.00°), Wuhan (N 30.52°, E 114.32°) and Nanjing (N 32.05°, E 118.78°) China. Meteorological data for the three locations are presented in Supplementary Data S8. Each line was grown in a plot with two rows and 12-15 plants in each row. The phenotypic investigation started approximately three weeks after the final flowering stage. In 2012/2013, we measured the five branches from the top of each plant, and four plants for each accession from two replicates were selected. In 2013/2014, we extended the sample size to 12.
Correlation analysis and analysis of variance (ANOVA) of branch angle for the association panel across different environments was performed in R 44 . Subsequently, an R script based on a linear model was used to obtain the broad-sense heritability and best linear unbiased prediction (BLUP) of the multi-environment phenotypes for each accession 45 . The BLUPs and individual environment data were used as the phenotypes for association analysis. Pearson's correlation coefficient between branch angle and multiple agronomic traits, including plant height, branch number, main inflorescence pod number, pod length, seed number per pod, seed weight and seed yield (only data from 2012/2013 and 2013/2014 at Changsha were available for these traits), were calculated in R 44 .
SNP genotyping, filtering and in silico mapping. Leaf tissue samples from the entire association panel were obtained from a bulk of at least four individuals for each accession at the seedling stage. DNA was extracted using a modified CTAB procedure according to Murray and Thompson 46 . The DNA quality was carefully assessed prior to genotyping.
SNP genotyping was performed using the Illumina Brassica 60K Infinium ® SNP array according to the manufacturer's instructions (http://www.illumina.com/technology/infinium_hd_assay.ilmn). SNP data were clustered and automatically called using Illumina Genome Studio genotyping software. First, accessions with a Call Rate < 0.7 were excluded and all SNPs were reclustered. Next, SNPs with Call Freq < 0.75, Minor Freq < 0.05, AA or BB frequencies < 0.03 or GenTrain Scores < 0.5 were excluded. The remaining SNPs were manually reassessed, and those that did not show three clearly defined clusters were also excluded. Because heterozygous SNPs cannot be distinguished from hemi-SNPs or false calls, heterozygous calls were treated as missing values.
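The filtering logic applied after clustering can be sketched as follows on a synthetic 0/1/2-coded genotype matrix with heterozygous or missing calls as NaN; the thresholds mirror the Call Rate, Call Freq, and Minor Freq cut-offs above, while the GenTrain and cluster-shape checks are omitted.

```python
# Toy genotype matrix (accessions x SNPs); NaN stands for missing or
# heterozygous calls, matching how the study treats heterozygotes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
geno = pd.DataFrame(rng.choice([0.0, 2.0, np.nan], p=[0.5, 0.45, 0.05],
                               size=(530, 1000)))

call_rate = geno.notna().mean(axis=1)          # per accession
geno = geno[call_rate > 0.7]                   # Call Rate > 0.7

call_freq = geno.notna().mean(axis=0)          # per SNP
freq_alt = geno.mean(axis=0, skipna=True) / 2  # alt-allele frequency (0/2 coding)
maf = np.minimum(freq_alt, 1 - freq_alt)
keep = (call_freq > 0.75) & (maf > 0.05)       # Call Freq and Minor Freq cut-offs
geno = geno.loc[:, keep]
print(f"{geno.shape[0]} accessions, {geno.shape[1]} SNPs retained")
```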
Fifty base pair sequences of retained SNPs after filtration were used to perform a BLASTN 47 search against the B. napus genome database (http://www.genoscope.cns.fr/brassicanapus/). Using an e-value threshold of 1e−12, SNPs corresponding to multiple loci in the genome were excluded, and only the top blast hits were retained for further analysis.
Population structure, kinship and LD decay. The filtered SNP dataset for selected accessions with a Call Rate > 0.7 was entered into STRUCTURE V2.3.3 48 . Five independent runs were performed with a K-value (the putative number of genetic groups) varying from 1 to 10, with the length of the burn-in period and the number of MCMC (Markov Chain Monte Carlo) replications after burn-in both set to 100,000 iterations under the admixture model. The most likely K-value was determined using the log probability of data [LnP(D)] and the ad hoc statistic Delta K, which is based on the rate of change of LnP(D) between successive K-values. The cluster membership coefficient matrices of five independent runs from STRUCTURE were integrated to obtain a Q matrix with the CLUMPP software 49 . The proportion of phenotypic variation contributed by population structure was calculated via the lm function in R 44 . Relative kinship coefficients (K) were calculated using the SPAGeDi software package 50 . All negative values between individuals were set to 0. The linkage disequilibrium measurement parameter r² was used to estimate linkage disequilibrium (LD) of A and C subgenome chromosomes via TASSEL5.0 51 . When calculating LD for a specific chromosome, the LD window size was adjusted to the chromosomal SNP number to force the calculation for all marker pairs. Locally paired scatterplot smoothing in R was employed to obtain a graphical representation of LD curves.
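For a single pair of biallelic SNPs, the r² statistic reduces to the squared Pearson correlation between genotype vectors; the toy sketch below illustrates this, whereas TASSEL's estimator additionally handles phasing and missing data.

```python
# Squared genotype correlation as a simple r^2 LD estimate for 0/1/2 coding.
import numpy as np

def ld_r2(snp_a, snp_b):
    r = np.corrcoef(snp_a, snp_b)[0, 1]
    return r * r

rng = np.random.default_rng(3)
snp_a = rng.integers(0, 3, size=520)
# Make snp_b mostly copy snp_a to simulate two SNPs in strong LD.
snp_b = np.where(rng.random(520) < 0.8, snp_a, rng.integers(0, 3, size=520))
print(f"r^2 = {ld_r2(snp_a, snp_b):.2f}")   # high value -> strong LD
```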
Genome-wide association study. Trait-SNP association analysis was performed using three methods.
Both GLM and MLM were implemented in TASSEL 5.0 51 . GLM takes into account population structure as a fixed effect. On this basis, MLM incorporates kinship as a random effect to further eliminate the excess of low p-values 19 . For GLM and MLM, the significance of associations between SNPs and traits was based on the uniform threshold p ≤ 5.2 × 10⁻⁵ (−log10(p) = 4.3). The A-D test for branch angle was conducted using the R package ADGWAS 1.0 24 . The A-D test is a nonparametric test that includes no correction for population structure. A more stringent threshold was set for the A-D test, with p ≤ 2.6 × 10⁻⁶ (−log10(p) = 5.6).
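The two thresholds are consistent with 1/N and 0.05/N for the N = 19,167 SNPs tested; the short computation below reproduces that arithmetic.

```python
# Threshold arithmetic; the 1/N and 0.05/N interpretation is an inference
# from the numbers above, not an explicit statement in the text.
import math

n_snps = 19167
for label, p in [("GLM/MLM (1/N)", 1 / n_snps), ("A-D test (0.05/N)", 0.05 / n_snps)]:
    print(f"{label}: p = {p:.1e}, -log10(p) = {-math.log10(p):.1f}")
```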
To better understand the explanatory power of the significant SNPs, we used the SNP genotypes at candidate loci as predictor variables in multiple linear models fitted to the phenotypic variables and subsequently ran a model comparison analysis (stepwise AIC procedure implemented as the R function "stepAIC") 52 to determine the best fitting model. The adjusted R 2 of the best fitting multiple regression model was referred to as the phenotypic variation explained by the significant SNPs.
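The adjusted R² reported by such a model comparison can be sketched as follows on simulated genotypes and phenotypes; the stepwise AIC selection step is omitted for brevity.

```python
# Multiple regression of phenotype on 56 simulated SNP genotypes, reporting
# the adjusted R^2 as the "phenotypic variation explained".
import numpy as np

rng = np.random.default_rng(4)
n, k = 520, 56
X = rng.integers(0, 3, size=(n, k)).astype(float)   # genotypes at 56 loci
beta = rng.normal(0, 0.5, size=k)
y = X @ beta + rng.normal(0, 5, size=n)             # simulated branch angle

X1 = np.column_stack([np.ones(n), X])               # add intercept
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ coef
ss_res, ss_tot = resid @ resid, ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```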
Candidate gene mining. To define regions of interest containing potential candidate genes, local LD decay was calculated within flanking regions up to 12,000 kb on either side of significant SNPs using TASSEL5.0 51 , and a cut-off value of 0.1 was used for the LD statistic r 2 . Genes within the observed LD decay were annotated using the software Blast2GO v3.3.5 with the default settings 53 . In particular, genes with GO terms for gravitropism, amyloplast, and auxin were highlighted. Using the whole genome genes as reference, GO enrichment analysis of all genes within the LD decay ranges was implemented using Fisher's Exact Test in Blast2GO v3.3.5 (false discovery rate < 0.05) 53 . Next, we performed BLASTX searches against the Arabidopsis genome to determine whether candidate SNP-tagged genome regions contain genes orthologous to Arabidopsis genes with established roles in shoot gravitropism. Additionally, we exploited the Arabidopsis "Electronic Fluorescent Pictograph" (eFP) browser 54 and microarray data from previously published gravistimulation studies [55][56][57] to further characterize candidate genes. Notably, associated SNPs that are not in or near branch angle-related genes within the LD decay (r 2 = 0.1) were considered linked to a more distant gene, and the closest one of these genes was considered the most likely candidate.
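A sketch of the candidate-window query follows, with toy coordinates standing in for the real gene annotation and SNP positions.

```python
# Keep genes whose span falls within the local LD decay distance of a
# significant SNP; gene names are taken from the text, coordinates are toys.
import pandas as pd

genes = pd.DataFrame({"gene": ["BnaA04g09380", "BnaA04g09500"],
                      "chrom": ["A04", "A04"],
                      "start": [8_400_000, 9_900_000],
                      "end":   [8_403_000, 9_903_000]})
snp_chrom, snp_pos, ld_decay = "A04", 8_200_000, 776_800   # bp

hits = genes[(genes.chrom == snp_chrom) &
             (genes.start >= snp_pos - ld_decay) &
             (genes.end <= snp_pos + ld_decay)]
print(hits)
```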
|
2018-04-03T00:45:02.255Z
|
2016-09-20T00:00:00.000
|
{
"year": 2016,
"sha1": "b9d7dedc5b4a82a979dce58744797c1a6a801f99",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep33673.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9d7dedc5b4a82a979dce58744797c1a6a801f99",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
229261912
|
pes2o/s2orc
|
v3-fos-license
|
Design and Fabrication of Pneumatic Can Crushing Machine
This paper covers the design and fabrication of a pneumatic can crusher that reduces the volume of aluminum cans by 70%. The can crusher is made up of various parts, such as a lever, base frame, can bin, piston-cylinder arrangement, chain-sprocket mechanism and bearing. The inspiration behind this design came from the wastage in eateries and canteens of big companies where people gather and consume a lot of canned beverages. Thus, it makes sense that there should be an easy way to dispose of used cans properly during large social gatherings. The can crushing machine works with the help of a pneumatic single-acting cylinder. The machine is portable in size and, as such, is easily transportable. Most companies find it difficult to dispose of their used cans in hotels and canteens and to create the storage space that is required. This paper deals with the operation, design and structural analysis of the can crusher. A can crusher is a device that reduces a large material object to a smaller volume. The crusher reduces the size or changes the form of waste materials so that they can be disposed of or recycled easily. The can crushing machine is designed to crush aluminum waste cans with an 80% reduction in volume. It is used primarily to ease transportation of aluminum waste for recycling purposes. The machine is designed to smash an empty can of diameter 65 mm and height 120 mm to a height of between 25 mm and 30 mm. It uses compressed air for its operation, with the following component parts: pneumatic cylinder, solenoid valve, control unit and hoses. The cans are fed into the hopper and travel in an orderly manner through the chute into the crushing chamber. The air compressor, through the pneumatic cylinder, supplies the required crushing force. The crushed cans drop through the created space into the collection tray below the crushing chamber.
INTRODUCTION
When people step on cans after finishing their drinks, the cans do not end up symmetrically flat and look messy. This sometimes leaves sharp edges that can be harmful or injurious to people. Furthermore, people often throw cans anywhere, and these conditions pollute the environment. This design is therefore used to crush cans as flat as possible, to reduce time and cost, and to eliminate the sharp edges.
A can crusher is a machine designed to reduce large solid materials into a smaller volume, or smaller pieces. Crushers may be used to reduce the size, or change the form, of waste materials so they can be more easily disposed of or recycled, or to reduce the size of a solid mix of raw materials (as in rock ore), so that pieces of different composition can be differentiated.
The can crushing machine which was fabricated incorporates the use of the "quick return mechanism" for crushing the cans one at a time in one stroke, i.e., one can is crushed during the forward stroke of the piston (Jayakumar 2011) [2]. This mechanism is most commonly used in shaping machines. The quick return mechanism is an inversion of the slider crank which converts rotary motion into reciprocating motion. All parameters related to the design aspects were considered and calculated. Various stress factors were considered, and suitable tolerances and a factor of safety were accordingly employed to reduce the chances of failure and increase the life and durability of the machine (Shadab et al.) [6]. Adequate care was also taken to ensure negligible slipping of the belt to achieve maximum efficiency. The objective of this project is to design and fabricate a can crusher which incorporates the use of the "quick return mechanism". This can crusher can be used to crush aluminum cans (200 ml) for suitable disposal and subsequent recycling. The quick return mechanism is a mechanical device which is used here for crushing cans. On crushing the cans, the can crusher reduces their size and hence creates enough space to accommodate the waste. This paper is centered on the design and fabrication of a pneumatic can crusher meant to reduce aluminum waste cans, one that will reduce the volume of aluminum by 80 percent. Can crushers are primarily used to reduce the size of cans for easy transportation of large volumes for recycling. A can crusher is defined as a device used for crushing aluminum cans for easier storage in recycling bins, thereby giving the needed extra space by reducing the size of the cans.
The inspiration behind this design came from festivals and from the wastage in canteens of big companies that host large parties where people gather and consume a lot of canned beverages. Thus this can crusher was created, with a portable and pneumatically operated mechanism. There are many designs of can crushers; some are manual, some pneumatic, and some hydraulic.
Canned beverages, drinks and fruits are frequently consumed, even in homes. These cans take up a lot of space, and transportation cost is also high for moving huge numbers of cans from one place to another. Even if people step on the tin after usage, it does not always end up symmetrically flat, and it looks messy and irregular. The act of crushing the cans with our feet usually leaves sharp edges that can harm people. Furthermore, we tend to dispose of these cans badly, thereby leading to environmental pollution. Recycling aluminum requires only 5% of the CO2 emissions of primary production. More than 100 billion aluminum cans are sold every year, but less than half of them are recycled. A similar number of aluminum cans in other countries are also incinerated (Suryakant et al. 2016) [10]. Aluminum cans are among the easiest materials to recycle; new drink cans appear on the shelf just six weeks after recycling. The aim of this paper is to create a crushing device that reduces the volume of aluminum cans by about 80% of the original size at low cost, and also reduces the rate of environmental pollution (Shadab et al. 1997) [6]. A further aim is to fabricate a can crusher that has storage to hold the cans after crushing.
Developmental Trend in Can Crusher
The type of crusher used in the modern day is a product of improvement and modification of the simple components that make up the machine. These modifications arise as a result of research into better, more convenient and faster means of crushing empty beverage cans. Development in can crushing devices has gradually resulted in various types, ranging from the manual to the hydraulic and finally the pneumatic type.
Industrial use of Can Crusher
In industry, crushers are machines that use a metal surface to break or compress materials. Mining operations use crushers, commonly classified by the degree to which they fragment the starting material, with primary and secondary crushers handling coarse materials and tertiary and quaternary crushers reducing ore particles to finer gradations. The manual can crusher is used for crushing cans, and it requires manual effort in order to reduce the size of the can to the barest minimum. There are different means formerly used for crushing cans manually; these include the use of stones, heavy metal or hard wooden objects.
Hydraulic Can Crusher:
The hydraulic system of crushing cans is similar to the pneumatic system; the only difference is that the hydraulic system uses hydraulic oil as the working fluid, whereas the pneumatic system uses compressed air.
Pneumatic Can Crusher:
A pneumatic can crusher uses compressed air as the working medium for power transmission. The system of operation is similar to that of the hydraulic system, except that the hydraulic system uses fluid. The air compressor converts the mechanical energy of the prime mover mainly into pressure energy of the compressed air.
Figure 4. Isometric view of the can crushing machine
Principles of operation of Pneumatic Can Crushing Machine
This type of can crusher generally consists of a pneumatic system that drives a piston forward and back. When the piston moves forward, a plate, which acts as the crushing head of the piston, moves along with it and crushes the can. A groove is created which allows the crushed cans to drop into the collection container at the bottom of the frame.
MAINTENANCE AND SAFETY
Maintenance - As a result of its continuous usage, the machine will undergo wear and tear of the mating and sliding components. Hence it requires maintenance, a process consisting of the repair and replacement of its component parts.
Testing - The test requirements and outcomes were as follows:
Requirement | Test | Expected result | Observed result
Crushing | Crush a can | The can will be crushed to 70% of its original height | The can was crushed to 1.5" in height
Durability | Open and close the drawer of the bin that stores aluminum cans | The aluminum can bin will slide in and out with ease | The aluminum can bin slides with much ease
Bin holds over 20 cans | Load 20 cans in the chute | All 20 cans will be crushed one after the other | Bin could hold 20 cans and still had some space
Self-loading | Put cans into the self-loading mechanism so they slide down | The crusher will self-load appropriately without any complications | The self-loading mechanism can hold cans with about 25 mm of space at the top
Can must end up in bin after crushing | Crush a can to see if it falls through the hole into the bin | The can will fall into the bin | The cans fall into the bin with ease
METHODOLOGY
In designing and fabricating this tin can crusher, a defined flow of methods had to be used to design the crusher and crush the tin. First of all, a process plan had to be charted out (Soakakke 2008) [9]. This acts as a guideline to be followed so that the final model meets the requirements on time, and it determines the efficiency of the can crushing machine. Controlling and analyzing these steps is very important, as each of them leads to an effective and efficient system. The flow chart starts with the introduction to commence the process. A literature review on the title is done thoroughly, covering all aspects of the project. The media for this research were the internet and books, from which essential information related to the project was gathered for referencing. In conceptualization, a few designs are produced by sketching and saved to be reviewed from one stage to the next. Sketching is the first step; the designs and concepts are then reviewed and recalculated to fit the best dimensions and performance of the recycle can crusher. However, the drawing produced using software is just a guideline to be followed to improve the can crusher (Shadab et al.) [6]. After the drawing is done, the project proceeds to the fabrication process, which involves cutting, welding, drilling, bending and other operations. After every process, the parts are checked to make sure that the output of the process conforms to the product requirements. Then come the analysis processes, in which the can crusher is tested to see whether it fulfils requirements such as ease of crushing the can. Methods and processes involved in this project include bending, welding, drilling, and cutting. This project involves designing the different parts of the crusher machine considering the forces and ergonomic factors for the people who will use it. In the design and fabrication of the pneumatic can crusher, the main aim is to study the complete design of a semi-automatic can crusher machine. In this design and calculation procedure, parameters have been taken from design data books, theses and journals.
In this paper we have developed a semi-automatic pneumatic can crusher that crushes typical cans as symmetrically as possible before the crushed cans land in the collection bin. The design is environmentally friendly and uses a simple working mechanism. The main aim of this machine is to crush an empty can of 65 mm diameter (32.5 mm radius) and about 120 mm height.
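To illustrate the kind of sizing calculation behind such a design, the sketch below checks whether an assumed cylinder bore and supply pressure deliver an assumed required crushing force; none of these numbers are stated in the paper.

```python
# Back-of-envelope cylinder sizing. The paper gives neither the bore, the
# supply pressure nor the crushing force, so all numbers are assumptions.
import math

bore_d = 0.063          # bore diameter in m (63 mm, an assumed standard size)
pressure = 6.0e5        # supply pressure in Pa (6 bar, typical shop air, assumed)
required_force = 1.2e3  # assumed force in N to crush a 65 mm aluminium can

piston_area = math.pi * bore_d ** 2 / 4.0
available_force = pressure * piston_area   # single-acting: force on the out-stroke

print(f"available force: {available_force:.0f} N")   # ~1870 N with these numbers
print("bore adequate" if available_force >= required_force else "choose a larger bore")
```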
RESULTS AND DISCUSSION
The can crushing machine was tested and found to be quite effective and efficient in its operation.
It can crush between fifteen and twenty cans per minute. It is very robust and cost-effective, and can be readily available to hoteliers and proprietors of fast-food outlets. The frame is made of mild steel, which makes it very stable and able to withstand the crushing force and other auxiliary vibrating forces. The machine has all it takes in terms of speed, reliability, durability and performance.
CONCLUSION
The above design procedure was adopted for the fabrication of a fully automatic can crusher machine, which will make the product durable for a long time as well as efficient. Thus, with the help of a functional design, we can fabricate an automatic can crusher machine to increase the number of cans crushed as well as to reduce the human fatigue involved. At the start of this project, our priority was to research can crushers, how they work and how to build them, and we had to make sketches of a frame, bin, and crushing mechanism. Once we had our design picked out, we were ready to make all the decisions necessary to finish and complete our project successfully (Singh Pankaj et al.) [8]. In building the can crusher, we started with the component parts and handled them one after the other until we had successfully completed them all. We saved the building of the crushing mechanism for last because we knew it would take the longest and be the most complex to build.
|
2020-11-19T09:16:12.394Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "83033d8c9fd2595f07d978d4f2ac06a6371793ef",
"oa_license": null,
"oa_url": "https://ijasre.net/index.php/ijasre/article/download/1140/1688/2180",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d693bddc9f2360ec334811f92ab37f35d7baffb6",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
235429191
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of cadmium and arsenic effects on wild and cultivated cardoon genotypes selected for metal phytoremediation and bioenergy purposes
Cynara cardunculus L. is a multipurpose crop, characterized by high production of biomass suitable for energy purposes and green chemistry. Taking advantage of its already demonstrated ability to grow in the polluted environments that characterize many of the world's marginal lands, the aim of this work was to investigate the response of different cardoon genotypes to cadmium (Cd) and arsenic (As) pollution, in order to use this crop for rehabilitation of contaminated sites and its biomass for energy production. In this study, seeds of two wild cardoon accessions harvested in rural and industrial Sicilian areas and of a selected line of domestic cardoon were used, and the grown plants were spiked with As and Cd, alone or in combination, at two different concentrations (500 and 2000 μM) and monitored for 45 days. The growth parameters showed that all the plants survived until the end of the experiment, with growth stimulation in the presence of low concentrations of As and Cd relative to metal-free controls. Biomass production was mostly allocated to the roots under As treatment and to the shoots under Cd treatment. Cd EXAFS analysis showed that tolerance to high concentrations of both metals was likely linked to complexation of Cd with oxygen-containing ligands, possibly organic acids, in both root and leaf biomass, with differences in behaviour among genotypes. Under As+Cd contamination, the ability of the plants to translocate As to the aboveground system increased, and for both metal(loid)s there were significant differences between the genotypes studied. Moreover, the results showed that the Cynara cardunculus var. sylvestris accession collected in an industrial area is the genotype that, among those studied, had the best phytoextraction capability for each metal(loid).
Introduction
Heavy metal and metalloid pollution is a major environmental and human health problem in all industrialized countries, resulting from industrial activities, modern agricultural practices and mining (Adriano 2001; Miguel and Marum 2011; Pérez-Sirvent et al. 2012; Fernández et al. 2013; Guarino et al. 2018; Sahito et al. 2021). Among metals and metalloids, of most concern are As and Cd; both are highly toxic and have no known physiological benefit. As toxicity is implicated as a probable cause of bladder, lung, skin and prostate cancer in humans, among others (Peralta-Videa et al. 2009). Meanwhile, Cd can be absorbed via the alimentary tract, penetrates the placenta during pregnancy, and damages membranes and DNA (Kabata-Pendias 2004). Moreover, Cd may cause kidney and bone damage, affects the female reproductive system, which implies a serious threat for mammals and humans (Peralta-Videa et al. 2009), and is the only metal that might create human or animal health risks at plant tissue concentrations that are not generally phytotoxic (Peijnenburg et al. 2000).
Although high concentrations of trace elements in agroecosystems influence the growth and development of the plants through negative impacts on processes such as respiration, photosynthesis, electron transport and cell division (Wójcik et al. 2009;Pourrut et al. 2011;Muszyńska and Hanus-Fajerska 2015), different plant species are able to tolerate them, survive, grow, and reproduce on soils contaminated with heavy metals and metalloids (Muszyńska and Hanus-Fajerska 2015). This is thought to occur through a variety of mechanisms, including storage and detoxification/ sequestration of heavy metals and metalloids (Tran and Popova 2013) in the shoot, mainly based on chelation and subcellular compartmentalization (Yadav 2010;Tran and Popova 2013) or maintaining shoot concentrations at low level up to a critical soil value above which relatively unrestricted root-to-shoot transport result (Violante et al. 2010). Phytoremediation is a biological technique that uses such plants to remediate soils contaminated with trace metals; the choice of plant depends on a variety of factors, including high biomass production and high metal tolerance.
Among the species proposed to remediate soils contaminated with metal(loid)s, there is growing interest in Cynara cardunculus L. (cardoon), a perennial species of the Asteraceae family, native to Mediterranean countries. It comprises the subspecies C. cardunculus L. subsp. scolymus (L.) Hegi = C. cardunculus L. subsp. scolymus (L.) Hayek (globe artichoke) and two botanical varieties, C. cardunculus L. var. altilis DC. (domestic cardoon) and C. cardunculus L. var. sylvestris Lam. (wild cardoon), the latter considered to be the wild ancestor of globe artichoke (Rottenberg and Zohary 1996; Raccuia et al. 2004a). The domestic cardoon has been cultivated for many years as a traditional food source in some parts of southern Europe, particularly in Italy, France and Spain. In addition, its high production of biomass and grain (Raccuia and Melilli 2007; Angelini et al. 2009; Raccuia et al. 2012) can be used for different purposes, including feed, bioenergy, green chemistry, pharmaceutical and nutraceutical applications, and phytoremediation of heavy metals (Raccuia and Melilli 2004; Genovese et al. 2016a, 2016b; Leonardi et al. 2016a, 2016b; Raccuia et al. 2016; Toscano et al. 2016; Gominho et al. 2018). The wild cardoon is a robust thistle with a characteristic rosette of large spiny leaves and branched flowering stems that accumulates biomass mainly in the roots (Raccuia and Melilli 2004).
All these characteristics, its good adaptability to the Mediterranean climate and to stressful environmental conditions (salt, heat and drought stress) (Mauromicale and Licandro 2002; Raccuia et al. 2004b; Benlloch-González et al. 2005; Argento et al. 2016; Docimo et al. 2020; Pappalardo et al. 2020), its good tolerance to stress induced by contaminants during both the germination and growth phases (Llugany et al. 2012; Sánchez-Pardo et al. 2015; Leonardi et al. 2016a, 2016b; Pappalardo et al. 2016, 2020; Arena et al. 2017; Capozzi et al. 2020) and its low-input management suggested its potential use for phytoremediation. From these studies, the potential of cardoon to accumulate heavy metals and metalloids from polluted soils is very clear. However, to date there are no specific studies regarding the resistance mechanisms to pollutants of different varieties of cardoon, or whether the genotype has an influence. Such studies could be useful to understand the interesting dual use of this crop, not only to remediate soils contaminated with toxic elements but also for biomass production for bioenergy (Mehmood et al. 2017). The energy potential of cardoon is attributable not only to the characteristics listed above but also to the low moisture content of the biomass at harvest, a biomass composition that is mainly lignocellulosic, and a high heating value (Fernández et al. 2006). Toscano et al. (2016) carried out two different pilot systems for biodiesel and pellet production using cardoon biomass and grain: the results showed that cardoon plants may be used for different energetic purposes, making cardoon a very competitive and sustainable energy crop in the Mediterranean environment and an economic alternative for farmers. In fact, from the perspective of the circular economy, the Asteraceae family and some members of the families Brassicaceae, Poaceae, Fabaceae and Malvaceae are fast-growing economic crops, and their biomass production during phytoremediation activities will make an increasing contribution to meeting sustainable future energy demands (Ingrao et al. 2016; Witters et al. 2012; Sahito et al. 2021; Zehra et al. 2020a; Zehra et al. 2020b).
In this work, the growth under As and Cd stress conditions of two genotypes of wild cardoon and one domestic cardoon was compared, with the aims to (i) evaluate the variability in response of different varieties and genotypes of cardoon plants; (ii) assess the concentration, bioaccumulation and translocation of As and Cd in different parts of the plant; and (iii) understand the tolerance of these plants to heavy metal(loid)s.
Plant material
For this research, we used different cardoon genotypes, belonging to the genetic bank of the National Research Council-Institute for Agricultural and Forest System in the Mediterranean city of Catania, Sicily-IT (CNR-ISAFOM): one C. cardunculus L. var. altilis (Gen.1) and two genotypes of C. cardunculus L. var. sylvestris Lam. (Gen.2 and Gen.3).
The domestic cardoon (C. cardunculus var. altilis DC.) is a selected line by CNR-ISAFOM to produce biomass for green chemistry. The wild cardoon (C. cardunculus var. sylvestris Lam.) populations were collected in two different sites in Eastern Sicily (Raccuia et al. 2004a). The first (R14CT-Gen.2) was collected at 820 m above sea level in Randazzo (CT-Sicily-IT) (37°53′10.9″ N 14°57′13.2″ E), within Nebrodi Regional Park. The second (A14SR-Gen.3) was collected in the territory of Augusta (SR-Sicily-IT) (37°14′ 13″ N 15°11′05″ E) at 2 m above sea level. The two wild genotypes were chosen precisely because they came from two very different sites: the R14CT genotype was selected in a high mountain area characterized by an uncontaminated environment, temperatures ranging from a minimum of 3 to a maximum of 28°C and an average annual rainfall of 830.5 mm; the A14SR genotype, on the other hand, was selected from a coastal and industrial area near the port city of Augusta (SR). This site is characterized by temperatures ranging from 7 to 30°C and an average annual rainfall of 714.1 mm (SIAS -Sicilian Agrometeorological Information Service -SIAS -Sicily n.d.). All the cardoon populations were harvested during the summer of 2014, and both the wild genotypes used for our trials were collected in uncultivated lands within patches of cardoon spontaneous vegetation.
Field experimental design
To simulate field conditions, the trial was conducted outdoors in controlled environmental conditions in CNR-ISAFOM experimental field located at Cassibile, Syracuse (Sicily) (36°5 8′33″ N 15°12′17″ E) at 50 m above sea level from December 2014 (sowing) to July 2015 (last harvested). The monthly minimum temperatures ranged from 4 in February to 20.5°C in July and the maximum ones from 16.6 in December to 34.3°C in July; the average annual rainfall was 555.1 mm (SIAS -Sicilian Agrometeorological Information Service -SIAS -Sicily n.d.). For the sowing experiment, 396 seeds of Gen.1, Gen.2 and Gen.3 were placed in plastic pots (diam. 3 cm) using a clear cover to retain moisture until seedlings appeared, to allow selecting healthy seedlings.
In January 2015, 4-week-old wild and domestic cardoon plants with three or four leaves were transplanted into plastic pots (diam. 45 cm) filled with 13.0 kg of commercial potting soil (1 plant per pot, 3 independent biological replicates, 189 plants in total). After 2 weeks of planting, an N-P-K fertilization (20:10:10) was added to the soil with a ring application at the rate of 50 g per pot, and it was repeated every 30 days at the same rate until the end of the experiment. Five months after sowing, 500 mL of aqueous solution of As, Cd or As+Cd, each at two different concentrations (500 and 2000 μM), was added to each pot and compared with a control. The experimental design sought to vary the concentrations to reflect high metal stress conditions but also low stress conditions where growth might be stimulated through hormesis. The actual concentrations were chosen based on preliminary trials (Leonardi et al. 2016a, 2016b; Pappalardo et al. 2016), which indicated differences in plant response over these concentrations, and by comparison with values used in the literature (Sun et al. 2008; Papazoglou 2011; Llugany et al. 2012). The arsenic solutions, created from sodium dibasic arsenate heptahydrate (Na₂HAsO₄·7H₂O), and those of cadmium, made from cadmium nitrate tetrahydrate (Cd(NO₃)₂·4H₂O), were named As0 (control), As500 and As2000, and Cd0 (control), Cd500 and Cd2000, respectively, with the numbers representing concentration in micromolar. Finally, the mixtures of both metals were prepared from concentrated solutions of As and Cd and named As0+Cd0 (control), As500+Cd500 and As2000+Cd2000.
The pots within each treatment were arranged adopting a randomized experimental design. Crop water requirements were satisfied by a drip irrigation system with a flow control valve which made it possible to reduce percolation losses. During the growth cycle, plant growth parameters (height, number of leaves) and visual systems such as the presence of yellow and dried leaves were recorded, and each individual plant was observed, in order to detect visible toxicity symptoms.
Sampling and chemical analysis
Cardoon plants (3 plants per treatment) were harvested at 3 different time points during growth, every 15 days until 45 days after the artificial contamination of the soil, from June to July 2015. After the harvest, plants were gently removed from the pots, and the fresh weights of the individual plants per genotype and treatment were subsequently determined. Shoots and roots were further separated, and after removing the soil particles from the roots, they were washed with tap water, then with distilled water and finally with 0.01 M HCl for approximately 5 s in order to remove external metals from the root surface. Root length (cm) and dry biomass of roots and shoots were determined. Finally, the samples were dried for 72 h in a temperature-controlled oven at 70°C. Soil samples were collected from each pot, air-dried at room temperature and ground to pass a 2.0-mm mesh.
For chemical analysis, samples of roots and shoots were cut with stainless steel scissors and ground in an agate pestle and mortar with liquid nitrogen to obtain homogeneous samples. The powdered dry plant samples were digested in a closed-vessel microwave digestion system (MARSXPRESS by CEM Corporation, NC, USA) equipped with sensors for temperature and pressure (175°C, 1600 W). Triplicate 0.5 g samples with 1 mL of yttrium internal standard (1 mg L⁻¹) were put inside the microwave vessels and digested in a mixture of 8 mL of 65% HNO₃ and 2 mL of 30% H₂O₂. After digestion, the solution was quantitatively transferred into pre-cleaned 50-mL volumetric flasks and diluted to the mark with deionized water. The samples were stored at 4°C for subsequent analysis. The concentration of As and Cd in soil samples was determined by triplicate digestion of 0.5 g soil samples in a high-pressure microwave system (175°C, 1600 W) with a mixture of 3 mL 65% HNO₃ and 9 mL 37% HCl (USEPA 3051A - USEPA 1998). After digestion, the solution was quantitatively transferred into pre-cleaned 50-mL volumetric flasks and diluted to the mark with deionized water. A Merck mixed metal standard (M6) was used as a certified reference to ensure the accuracy of analyses.
Samples were analysed for As and Cd using an Agilent 7500ce (Agilent Technologies, CA, USA) inductively coupled plasma mass spectrometer (ICP-MS) (with octopole reaction system), employing an RF forward power of 1540 W and a reflected power of 1 W, with argon gas flows of 0.81 L min⁻¹ and 0.21 L min⁻¹ for carrier and makeup flows, respectively. The instrument was operated in spectrum multi-tune acquisition mode, and three replicate runs per sample were employed. Each mass was analysed in fully quant mode (three points per unit mass).
The following isotopes were monitored: ⁷⁵As, ⁸⁹Y and ¹¹¹Cd. ¹⁰³Rh was added as an internal standard at a concentration of 20 μg kg⁻¹. ¹¹¹Cd was analysed in no-gas tune, ⁷⁵As was analysed using helium tuning to remove any polyatomic interferences, while the internal standards ¹⁰³Rh and ⁸⁹Y were analysed in both modes.
A series of standards was prepared by serial dilution of a 1000 mg L⁻¹ stock solution with 2% (v/v) HNO₃ (Merck KGaA, Darmstadt, Germany). The calibration curve fit (at least five standard concentrations) had R² = 0.999 in all cases. The mean concentration in blank digests was 0.07 μg L⁻¹ for As and 0.05 μg L⁻¹ for Cd. The detection limit was 0.01 μg L⁻¹ for both metals. All analyses were performed in triplicate for each pot and are reported on a dry weight (DW) basis.
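For reference, the back-calculation implied by this procedure (0.5 g of sample digested and diluted to 50 mL), from a measured digest concentration to a tissue concentration, can be sketched as follows; the example reading is illustrative, not a reported value.

```python
# Solution reading (ug/L) -> tissue concentration (mg/kg DW) for a 0.5 g sample
# diluted to 50 mL. Blank means and detection limit are those reported above.
BLANK_UG_L = {"As": 0.07, "Cd": 0.05}    # mean blank digests
LOD_UG_L = 0.01                          # detection limit, both metals

def tissue_conc_mg_kg(reading_ug_l, element, sample_g=0.5, final_vol_l=0.050):
    net = reading_ug_l - BLANK_UG_L[element]
    if net < LOD_UG_L:
        return 0.0                       # below detection limit
    # ug/L * L / g = ug/g, which on a dry-weight basis equals mg/kg DW
    return net * final_vol_l / sample_g

# e.g. a root digest reading 153 ug/L As gives ~15.3 mg/kg DW,
# the order of magnitude reported for Gen.1 roots under As2000:
# tissue_conc_mg_kg(153, "As")
```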
X-ray absorption spectroscopy
In order to determine the speciation in which the metals exist in plant biomass, as a basis for understanding uptake and detoxification mechanisms (Adediran et al. 2015; Adele et al. 2018), we analysed roots and leaves using X-ray absorption spectroscopy on beamline B18 at Diamond Light Source. Due to limited beamtime, we chose to compare biomass from the domestic genotype 1 and wild genotype 3 only. Plant material was dried in an oven at 70°C for 72 h and finely ground for speciation analysis. This effectively open-air drying is likely to have altered the oxidation state of As in plant tissues relative to fresh biomass, rendering As speciation outcomes unreliable. Thus, although we have As data (which in fact shows both trivalent and pentavalent As), the remainder of this contribution will focus on Cd only. Spectra were collected using QEXAFS in a liquid nitrogen cryostat to reduce sample damage, at the Cd K-edge, using a Si 311 monochromator. Spectra were acquired in fluorescence mode by means of a 9-element solid-state Ge detector. The beamline energy was calibrated using a Cd foil (26711 eV), and data were collected up to 13 Å⁻¹ with 0.5 eV resolution. Consecutive spectra from the same point were examined for possible beam damage, and damage was minimal.
Spectra were analysed using the Demeter suite of programmes (Ravel and Newville 2005). XANES spectra were compared to freshly prepared Cd standard solutions (nitrate, phytate, cysteine, citrate, malate, and histidine), all prepared at 4 mM (pH 5 for Cd phytate and Cd cysteine, pH 7 for the other standards) and held in polythene tubes. We then performed EXAFS analysis to assess the coordination environment of Cd in leaves and roots. Coordination numbers were changed manually and are estimated to be ±1 given the small useful data range. The goodness of the fit was estimated by calculating the residual R factor, R = Σᵢ(experimentalᵢ − fitᵢ)² / Σᵢ(experimentalᵢ)². A lower R factor represents a better match between the fitted standard spectra and the sample spectrum (Terzano et al. 2008).
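As a one-function illustration, the R factor defined above can be computed directly from the experimental and fitted signals on a common grid:

```python
# The residual R factor defined above; `experimental` and `fit` are the
# k^2-weighted EXAFS signals evaluated on a common k grid.
import numpy as np

def r_factor(experimental, fit):
    experimental, fit = np.asarray(experimental), np.asarray(fit)
    return np.sum((experimental - fit) ** 2) / np.sum(experimental ** 2)
```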
Data analysis
The phytoextraction ability of the cardoon plants was evaluated by calculating the metal yield (mg) in shoot dry biomass and the translocation factor (TF) of As and Cd as below:
metal yield (mg) in shoot dry biomass = heavy metal concentration in the shoots × total biomass weight at the end of the experiment (Zhao et al. 2003; Adediran et al. 2015);
TF = heavy metal concentration in shoots (mg kg⁻¹) / heavy metal concentration in roots (mg kg⁻¹) at the end of the experiment (Zhao et al. 2003; Adediran et al. 2015).
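A minimal sketch of the two indices, assuming a harvest table with the column names shown (the names are ours, not the authors'):

```python
# The two phytoextraction indices defined above; the harvest-table column
# names are ours (assumed), units as in the text.
import pandas as pd

def phytoextraction_indices(df: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns: shoot_conc_mg_kg, root_conc_mg_kg, shoot_dw_kg."""
    out = pd.DataFrame(index=df.index)
    out["metal_yield_mg"] = df["shoot_conc_mg_kg"] * df["shoot_dw_kg"]  # mg in shoots
    out["TF"] = df["shoot_conc_mg_kg"] / df["root_conc_mg_kg"]          # >1 favours phytoextraction
    return out
```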
Differences in the growth of the plants (biomass and root length) and in element accumulation among plant organs, genotypes and treatments were subjected to Bartlett's test for homogeneity of variance and then analysed using factorial analysis of variance (ANOVA), using CoStat software (CoHort Software, Monterey, CA, USA). The means were statistically separated on the basis of the Student-Newman-Keuls test when the 'F' test of ANOVA for treatment was significant at least at the 0.05 probability level. Significance was accepted at the p ≤ 0.05 level (Snedecor and Cochran 1989).
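The same two-stage testing logic (variance homogeneity, then ANOVA) can be sketched with scipy; the paper used CoStat, and the Student-Newman-Keuls post hoc step, which scipy does not provide, is omitted here.

```python
# Two-stage test per trait: Bartlett's homogeneity of variance, then one-way
# ANOVA across treatments (n = 3 replicates per group, as in the text).
from scipy import stats

def test_treatments(groups, alpha=0.05):
    """groups: list of per-treatment arrays of a measured trait."""
    _, bartlett_p = stats.bartlett(*groups)
    _, anova_p = stats.f_oneway(*groups)
    return {"bartlett_p": bartlett_p,
            "anova_p": anova_p,
            "separate_means": anova_p <= alpha}   # post hoc only if significant
```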
Plant growth parameters
To assess the effects of Cd and As on plant growth, the total plant biomass and the root and shoot biomass were measured after harvest. The variation of biomass allocation among organs is considered a useful parameter for the selection of plants to be used in phytoremediation applications (Iori et al. 2017). The statistical analysis showed that sampling time did not influence the plant growth parameters studied; therefore, we compared the data at the end of the experiment (45 days) only.
During the growth cycle, no visible toxicity symptoms (death or defoliation) were observed, and all the leaves were similar to the control. Despite the presence of metals, the treated plants continued to grow and survived until the end of the trial, but biomass production differed depending on genotype and on the type and amount of metal added to the soil. Moreover, the results showed that growth might be stimulated when the heavy metal dose is low; this finding is in line with the results reported by Feng et al. (2018) on Cd and Cu tolerance and bioaccumulation in Sesuvium portulacastrum. Figure 1 reports the dry biomass of different parts of the plants; control plants showed a similar trend in biomass partitioning, allocating most of the dry biomass to the roots with a mean value of 24.81 g DW. The highest biomass was recorded for Gen.1, and the treated plants mostly showed a higher total biomass than control plants. In particular, the plants treated with As showed a behaviour significantly different from controls, but only at low As concentration: the growth of plants was stimulated, with an increase in the total biomass mostly allocated to the roots, with values of 33.45 g DW for As500 and 28.44 g DW for the control. The same trend was observed in As+Cd treatments at low concentration, with an increment of the total plant biomass mostly allocated to the roots, indicating a strong Cd and As tolerance when the two contaminants were together. In particular, for As500+Cd500 the root biomass was 37.10 g DW, statistically different from the control (28.44 g DW). This phenomenon was considered one of the many interesting paradoxes related to As toxicity (Woolson et al. 1971; Carbonell-Barrachina et al. 1997; Miteva 2002; Garg and Singla 2011; Finnegan and Chen 2012). This stimulating effect at low As concentrations arose from a direct interaction of As with plant metabolism or with soil and plant nutrients (Finnegan and Chen 2012; Guarino et al. 2018). Most plants considered tolerant possess mechanisms to retain much of their As burden in the root (Finnegan and Chen 2012), which allows them to avoid As toxicity, with a growth benefit deriving from As stimulation of inorganic phosphate (Pi) uptake (Tu and Ma 2003; Finnegan and Chen 2012). In fact, arsenate is taken up through the transport system of Pi transporter (PHT) proteins (Ullrich-Eberius et al. 1989; Meharg and Macnair 1990, 1991, 1992; Wu et al. 2011) in As hyperaccumulators (Wang et al. 2002; Tu and Ma 2003), As-tolerant non-hyperaccumulators (Meharg and Macnair 1992; Bleeker et al. 2003) and As-sensitive non-accumulators (Abedin et al. 2002; Esteban et al. 2003). From this interaction, as reported in a study on Pteris vittata L., phosphate likely substantially increased plant biomass and arsenate accumulation by alleviating arsenate phytotoxicity (Tu and Ma 2003).
By contrast, with Cd treatments there was an increase of plant biomass, mostly in the shoots, at both concentrations, with values of 35.24 g DW for Cd500, 33.25 g DW for Cd2000 and 28.44 g DW for the control. The same trend was seen when cadmium was in combination with arsenic, but only for As500+Cd500, with a value of 29.26 g DW. Raccuia and Melilli (2007) have reported significant differences among wild and cultivated cardoon genotypes for aboveground biomass yield; in this work, we wanted to study the differences in biomass production of different genotypes even in the presence of heavy metal(loid)s. Results showed that wild cardoon had a biomass production lower than the domestic one, and Gen.3 showed a higher biomass production than Gen.2. It is possible that Gen.3 had developed adaptation strategies to defend itself against environmental stresses due to its provenance from a highly polluted area. In particular, under the As500+Cd500 treatment, the root biomass was 28.92 g DW, statistically different from the control (24.57 g DW). One mechanism that plants use for As detoxification is the reduction of arsenate, As(V), to arsenite, As(III); the complexation of As(III) with phytochelatins (PCs), produced by the plants; and the sequestration to vacuoles (Li et al. 2015).
Regarding shoot biomass, Gen.3 showed a behaviour similar to the control at low concentrations for all treatments studied. In fact, the low concentrations were favourable for plant growth, while high concentrations caused an inhibition effect. However, the growth of Gen.2 was inhibited, showing a decrease of biomass, at both concentrations and for all treatments studied. In particular, the lowest value of 11.56 g DW was under As2000, which was statistically different from the control plants (20.97 g DW).
Root elongation was influenced by metal concentrations (Fig. 2). In particular, regarding As treatment, there were no statistically significant differences among genotypes, but the toxic metal stress stimulated the root length, at low concentration only, for Gen.1 and Gen.3, with values of 22.50 cm and 20.17 cm, respectively, statistically different from the control plants (19.00 cm and 17.33 cm, respectively). The root length of Gen.2 decreased with increasing metal concentration, especially at As2000 with a value of 13.83 cm compared to the control (21.00 cm). Under Cd treatment, Gen.3 was significantly different from the other genotypes in showing stimulation of root length at both concentrations, with values of 19.83 cm for Cd500 and 21.67 cm for Cd2000.
In the presence of both metals, the results showed a significant stimulation of root length at both concentrations for Gen.1 and Gen.3, but the highest value (22.75 cm) was at As500+Cd500 for Gen.1 only. By contrast, statistical analysis showed there was no stimulation in Gen.2.
Heavy metals accumulations
Arsenic and cadmium accumulations in the plants were analysed at different concentrations and at different times of exposure. The statistical analysis showed that the times of exposure did not influence the parameters studied; for this reason, only the accumulations of As and Cd at the end of the experiment were considered. The results showed that for all genotypes, As accumulated mainly in the roots (Fig. 3-1). Moreover, the arsenic root concentrations increased significantly with increasing As contamination in the soil. In particular, under As2000 μM, the As concentrations in roots were 15.32 mg kg⁻¹ in Gen.1, 11.22 mg kg⁻¹ in Gen.2 and 12.50 mg kg⁻¹ in Gen.3 (Fig. 3-1), whereas under As500 μM, the corresponding values were 0.91 mg kg⁻¹, 1.58 mg kg⁻¹ and 0.90 mg kg⁻¹. Our work supports the observation of Llugany et al. (2012), which showed that, regardless of the form of supplied As, cardoon plants accumulated As mainly in the roots, consistent with immobilization of the As in root cells. Our results are also consistent with Gupta et al. (2008), who found that As was preferentially concentrated in roots relative to shoots in chickpeas (Cicer arietinum L.), interpreted to be due to enhanced production of thiols in roots. Thus, although most studies show that As is translocated to shoots, other studies have shown that the actual distribution can depend on a variety of factors, including plant species, pH, redox state of the soil and microbial activity (Abbas et al. 2018).
Regarding shoots accumulation, as shown in Fig. 3-2, Gen.3 showed a different behaviour, compared to the other genotypes, with a higher capacity of accumulation of arsenic in leaves at As500.
Cadmium accumulation in cardoon roots showed the same behaviour as arsenic accumulation and increased significantly with the increase of the Cd concentration in the soil. The highest Cd concentration in roots was 4.79 mg kg⁻¹ in Gen.1 under Cd2000 μM (Fig. 3-1). However, Cd accumulation in roots was lower than that of arsenic for all genotypes. In contrast with As, Cd concentrations in shoots were higher than those in roots, and the plants accumulated higher levels of Cd under the highest concentration of metal in the soil. The highest value of 18.72 mg kg⁻¹ DW was found under Cd2000 μM in Gen.3 (Fig. 3-2). Although other studies generally report higher concentrations of Cd in roots relative to shoots, our observations are consistent with the studies of Capozzi et al. (2020) and Arena et al. (2017), which showed that Cd in cardoon plants exhibited the highest values of the translocation factor (TF), indicating higher concentrations in shoots than in roots. According to Chaney and Giordano (2015) and Alloway (1995), cardoon's efficient translocation of Cd occurs via transporters of Ca²⁺, Fe²⁺, Mg²⁺, Cu²⁺ and Zn²⁺ ions into the aerial parts of plants, through this interaction with the available nutrient elements (Nazar et al. 2012; Arena et al. 2017). Also, cauliflower and sunflower planted in moderately Cd-contaminated soil showed enhanced Cd uptake in shoots and low accumulation in roots (Ma et al. 2021; Zehra et al. 2020a). It has been suggested that in shoots, the detoxification of Cd occurs through the synthesis of sulphur-rich compounds, such as glutathione and phytochelatins, which sequester Cd into vacuoles (De la Huguet et al. 2012).
In their study, Llugany et al. (2012) showed that As accumulation was higher in plants grown in the presence of Cd than in those exposed to As alone. This means that the presence of Cd increased the ability of the plants to absorb As and translocate it to the shoots, suggesting the potential ability of cardoon plants for synergic phytoextraction of Cd with other metals. Our results are consistent with an earlier study of Sahito et al. (2021), which evaluated arsenic accumulation in sunflower accessions in the presence of mercury and found the highest concentration of As in the above-ground parts of plants. In fact, in our work, the concentrations of both metals were always greater than those in treatments of As and Cd alone. Furthermore, we showed that for both metals there were significant differences between the genotypes studied, with the highest accumulation of metals in Gen.3 (Fig. 3, 1-2).
Moreover, our results are in accordance with the study of Pappalardo et al. (2020), which showed that sylvestris activated genes associated with contaminant transport and involved in the synthesis of strong chelators that bind the metals in a non-toxic form. In particular, altilis and sylvestris plants treated with Cd and As expressed genes for phytochelatin synthase (PS), natural resistance-associated macrophage protein (NRAMP3), heavy metal ATPase (HMA), inorganic phosphate transport (PHT), the ABCC transporter and zinc and iron protein (ZIP), which are involved in the abiotic stress response in model plants. The same authors also showed that the NRAMP3, ZIP11, ABCC and PHT genes, which are usually activated in accumulator model plants under Cd or As stress, were activated also in wild cardoon, but not in the domestic one.
Cadmium speciation and coordination in biomass
Fig. 1 Root and shoot biomass of cardoon genotypes in response to As, Cd and As+Cd treatments at 45 days after contamination. Values are expressed as means of biological replicates (n = 3). Different uppercase letters indicate statistically significant differences in biomass among the genotypes at different contamination levels. Different lowercase letters indicate statistically significant differences in biomass at different contamination levels within the same genotype (p ≤ 0.05).
Comparison of standards and samples showed no variation in the XANES (probably due to the large core-hole lifetime of 7.3 eV at the Cd K-edge), precluding reasonable assessment of Cd speciation distribution among plant tissues. Nevertheless, Fourier transformed data showed a shift in the main feature around 1.7 Å between roots and leaves (Fig. 4). Hence, we performed EXAFS fitting of the soil, root and leaf data in order to investigate possible changes in coordinating atoms around Cd. Due to the limited range of the EXAFS data (k ≤ 10 Å⁻¹), only first-shell coordination was possible.
In soil samples, Cd was always coordinated to 6 oxygen atoms at a distance of 2.28 ± 0.01 Å (Table 1), consistent with the Cd-O distance in CdCO₃ (Boyanov et al. 2003). Cd was also coordinated to 6 oxygen atoms in roots of both genotypes, with the same bond distance. Differences between genotypes emerged in the coordination of Cd in leaves. In the domesticated genotype 1, coordination was dominated by Cd-S, with 4 sulphur atoms around each Cd atom at a distance of 2.45 ± 0.01 Å. Addition of oxygen atoms did not improve the fit to the data. By contrast, Cd-O bonding dominated Cd coordination in leaves of the wild genotype 3, with 5 oxygen atoms and one sulphur atom around each Cd. There were no apparent differences between samples treated with Cd only and those treated with a mixture of Cd and As.
We noted above that the wild Gen.3, sourced from an industrial area (assumed to be contaminated), accumulated more of each metal (Cd and As) than either the wild Gen.2, sourced from a clean area, or the domesticated Gen.1. Furthermore, Gen.3 grew better than Gen.2, although it produced lower biomass than Gen.1. We attributed this behaviour of Gen.3 to the possible development of adaptive mechanisms that enable it to tolerate metal toxicity during its growth on contaminated soil. Using similar techniques (EXAFS), Isaure et al. (2015) showed that Cd in a non-metal-resistant species of Arabidopsis halleri was predominantly coordinated to sulphur atoms, whereas in metal-resistant species, Cd was coordinated to sulphur and oxygen atoms. They postulated that coordination to oxygen-containing ligands (possibly organic acids) was responsible for metal tolerance in these phenotypes. Our results are consistent with this interpretation, if we consider Gen.1 to be non-tolerant to Cd, and that it responds to Cd uptake by producing sulphur-containing ligands to complex and translocate Cd into shoots. Such a mechanism is consistent with our previous studies on Zn toxicity and uptake in Brassica juncea when inoculated with bacteria (Adediran et al. 2015; Adele et al. 2018).
Fig. 2 Root length of cardoon genotypes in response to different As, Cd and As+Cd treatments at 45 days after contamination. Values are expressed as means of biological replicates (n = 3). Different uppercase letters indicate statistically significant differences in root length among the genotypes at different contamination levels. Different lowercase letters indicate statistically significant differences in root length at different contamination levels within the same genotype (p ≤ 0.05).
Fig. 3 Concentration of As, Cd and As+Cd in roots (1) and shoots (2) of different cardoon genotypes, spiked with As and Cd, alone or in combination, at the end of the experiment. Values are expressed as means of biological replicates (n = 3). Different uppercase letters indicate statistically significant differences in root and shoot concentrations among the genotypes at different contamination levels. Different lowercase letters indicate statistically significant differences in root and shoot concentrations at different contamination levels within the same genotype (p ≤ 0.05).
Phytoextraction efficiency and translocation factor of arsenic and cadmium
As and Cd yield in shoot dry biomass and the translocation factor (TF) were measured (Fig. 5). To understand the phytoextraction ability of cardoon plants, the association of high biomass production and the ability to accumulate contaminants in the tissues was assessed by calculating the metal yield in shoot dry biomass. The best phytoextraction was achieved in the cadmium treatment, where the yield of metal accumulation in shoots increased significantly with the increase of the Cd concentration in the soil. The highest values of 0.735 mg and 0.798 mg were found under Cd2000 μM in Gen.1 and Gen.3, respectively. When the metals were in association, the highest value of 0.983 mg in dry shoots was found for Gen.3 in the Cd2000 treatment, which was statistically different from the control (0.007 mg).
Moreover, for effective toxic metal phytoextraction, TF is an important parameter for assessing the ability of a plant to translocate the absorbed metal from the root to harvestable aerial biomass, and it should be greater than 1.0 (Wei and Chen 2006;Adediran et al. 2015). In this work, the best results were mostly achieved in the low concentrations of all treatments, but the highest TF (16.19) was in plants of Gen.3 in Cd500 treatment. The results confirmed that Cd showed more phytoextraction (higher in shoots than roots) across treatments and genotypes, suggesting the possibility to use cardoon plants to remediate Cd-contaminated soil through phytoextraction techniques.
Conclusion
All cardoon genotypes accumulated As mainly in the roots, indicating the immobilization of this metal in root cells. By contrast, cadmium accumulated especially in the leaves; this means that cardoon plants had a good ability to translocate Cd from roots to shoots, with translocation apparently effected by sulphur-rich ligands, possibly cysteine, glutathione or phytochelatins, based on the EXAFS analysis. The interaction effect of As+Cd increased the resistance of the plants to these metals, allowing them to survive even in the presence of high concentrations of both metals. Furthermore, the accumulation of metals was higher in plants exposed to co-contamination with As and Cd than in plants under As or Cd alone. Also, under As+Cd contamination, cardoon translocated more As from roots to shoots/leaves.
Fig. 4 Fourier transforms of the k²-weighted EXAFS data for Cd K-edge data of soil, root and leaf spectra comparing the two genotypes analysed for this study (experimental data solid line, fit data dotted line; the y-axis has been manually offset by 0.5 for each sample for clarity).
Lastly, comparing the cardoon genotypes studied, the results demonstrated that C. cardunculus L. var. sylvestris, A14SR (Gen.3), collected from polluted soil, was the one that accumulated high levels of both contaminants, having adapted mechanisms that enable it to tolerate metal toxicity during its growth on contaminated soil. This suggests its use in future work to remediate soils contaminated with these toxic elements, with capability for both phytoextraction and phytostabilization, thanks to its good tolerance of both heavy metal(loid)s. For this reason, it would be useful to continue the trials with the selected genotype 3, with the aim of testing its remediation efficiency in polluted soils over more years, while at the same time taking advantage of these marginal lands for biomass production for sustainable bioenergy purposes.
Author contributions: CL, writing (original draft), methodology, validation, data curation, software and writing (review and editing); VT, investigation, data curation, software and writing (review and editing); CG, methodology, data curation, software and writing (review and editing); JFWM, methodology and visualization; BTN, supervision, methodology, data curation and writing (review and editing); and SAR, supervision, conceptualization, project administration, resources and funding acquisition.
Funding: Open access funding provided by Università degli Studi di Catania within the CRUI-CARE Agreement. This work was a part of the PhD studies of Chiara Leonardi (CL) (grant number 120248/3/6).
Availability of data and materials All data generated or analysed during this study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate Not applicable.
Consent for publication Not applicable.
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
2021-06-15T13:41:36.025Z
|
2021-06-15T00:00:00.000
|
{
"year": 2021,
"sha1": "48beb753a0b86ff0ce94f47972e245aa04158eb9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-021-14705-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "48beb753a0b86ff0ce94f47972e245aa04158eb9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
2761781
|
pes2o/s2orc
|
v3-fos-license
|
Mining gene networks with application to GAW15 Problem 1.
The Genetic Analysis Workshop 15 (GAW15) Problem 1 contained baseline expression levels of 8793 genes in immortalized B cells from 194 individuals in 14 Centre d'Etude du Polymorphisme Humain (CEPH) Utah pedigrees. Previous analysis of the data showed linkage and association and evidence of substantial individual variations. In particular, correlation was examined on expression levels of 31 genes and 25 target genes corresponding to two master regulatory regions. In this analysis, we apply Bayesian network analysis to gain further insight into these findings. We identify strong dependences and therefore provide additional insight into the underlying relationships between the genes involved. More generally, the approach is expected to be applicable for integrated analysis of genes on biological pathways.
Background
Recent genetic dissection of common diseases has largely been through linkage and association studies involving discrete or continuous traits including intermediate phenotypes such as gene expression data from microarray experiments. The latter can involve thousands of genes, and annotation of their roles in biological pathways and in relation to DNA polymorphisms poses immense challenges and has sparked huge interest [1]. These include development of methods appropriate for a much richer structure than classic clustering [2], discovery of interaction between genes, and inference of causal relationships.
A key challenge in analysis of gene expression data is the reconstruction of regulatory networks. Several approaches directly extend classical techniques such as cluster analysis to infer the relationship between plural variables. A novel but apparently unpopular approach of cluster analysis is to extract the patterned information formally and use it in typical linkage and association analyses. More importantly, cluster analysis can be followed by Gaussian graphical modelling [2,3] and multivariate analysis in which a partial correlation coefficient (instead of a correlation coefficient) is used to measure the direct interaction between variables. In graphical modelling, the relationship between plural variables is represented as an independence graph G = (V, E), whose vertices V denote variables and edges E denote conditional dependence structure. Other approaches include regularization and moderation for suitable estimates of the covariance matrix and its inverse, by a full Bayesian or an empirical Bayes approach, followed by heuristic searches for an optimal graphical model (http://www.strimmerlab.org/notes/ggm.html). A Bayesian network is notable because it provides a natural approach to model regulatory networks. As has been argued elsewhere [4], if the expression level of a given gene is regulated by certain proteins then it should be a function of the active levels of these proteins. Due to biological variability and measurement errors, the function would be stochastic rather than deterministic. A Bayesian network provides a generic analytic approach for identifying robust predictors of among-individual variation in expression levels, intermediate phenotypes, or disease end points. It has been successfully applied to APOE gene variation and plasma lipid levels [5]. Mathematical details on Bayesian networks are available [6], as is a comprehensive survey of genomic approaches to biological pathways [7].
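To make the partial-correlation idea concrete, the following minimal Python sketch (an editorial illustration, not from the paper, whose analyses used R and B-course) derives partial correlations from the inverse of the sample covariance matrix, which is the core computation behind Gaussian graphical modelling. Note that this naive inversion is only stable when samples comfortably outnumber genes, which is why the regularized covariance estimators mentioned above matter in practice.

import numpy as np

def partial_correlations(data):
    """Pairwise partial correlations from the inverse sample covariance.

    data: (n_samples, n_genes) array of expression levels.
    Returns an (n_genes, n_genes) matrix whose (i, j) entry is the
    correlation of genes i and j conditioned on all remaining genes.
    """
    cov = np.cov(data, rowvar=False)   # sample covariance matrix
    prec = np.linalg.inv(cov)          # precision = inverse covariance
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)      # standardize and negate off-diagonals
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Toy run on simulated data shaped like the study (56 individuals, 5 genes)
rng = np.random.default_rng(0)
X = rng.normal(size=(56, 5))
print(partial_correlations(X).round(2))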
The Problem 1 data from Genetic Analysis Workshop 15 (GAW15) offers an excellent opportunity for investigating the utility of Bayesian networks. An earlier report [8] showed evidence of substantial variation in expression levels between individuals and association with single-nucleotide polymorphisms (SNPs), as well as a cluster of 25 of 31 target genes in two master regulatory regions. Here, as a further step of analysis, we performed Bayesian network modelling to gain insight into these findings.
Methods
Gene expression levels, treated as continuous variables, can be assumed to follow a multivariate normal distribution and to be consistent with a Bayesian network with linear Gaussian conditional densities. The prior of this network is characterized by a prior network reflecting our belief in the joint distribution of the variables in question, and an equivalent sample size (ESS) effectively behaving as if it were calculated from a "prior" data set of that size. For instance, without a priori knowledge of the regulatory network, the prior network could be one in which all expression levels are independent, in order to avoid explicitly biasing the learning procedure towards a particular edge. The common approach to the learning procedure starts with a training set and evaluates networks according to an asymptotically consistent scoring function that is obtained through the Bayesian framework [6]. In the case of the B-course software (http://b-course.hiit.fi) used here, discretisation of continuous data has been applied to capture the nonlinear relationship between variables, and the choice of prior is such that the resulting ESS prior distribution is close to the Jeffreys prior. The software infers causal relationships according to the statistical dependence under some additional assumptions concerning latent variables. Mathematical details, including the definition of the Jeffreys prior, are given elsewhere [9].
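As a further illustration of the discretisation step, the sketch below implements one common scheme, equal-frequency binning; this is a hedged example for exposition only, and B-course's actual discretisation procedure may differ.

import numpy as np

def discretize_equal_frequency(values, n_bins=3):
    # Equal-frequency binning of one gene's expression values: cut points
    # are placed at empirical quantiles so that each bin receives roughly
    # the same number of observations.
    cuts = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(cuts, values)  # integer labels 0..n_bins-1

# Example: 56 simulated expression values split into three roughly equal bins
expr = np.random.default_rng(1).normal(loc=7.0, scale=1.5, size=56)
print(np.bincount(discretize_equal_frequency(expr)))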
The GAW15 Problem 1 data consist of 194 individuals from 14 three-generation CEPH (Centre d'Etude du Polymorphisme Humain) pedigrees, with baseline expression levels of genes in immortalized B cells. The data provided contain expression levels of 8793 genes. Following an earlier investigation [8], only expressions whose variation is greater among individuals than within individuals were considered, leaving 3554 expressions. By further considering the evidence of master regulation, mapping was done without taking into account possible relationships among phenotypes, leading to 25 of the 31 target genes. These were used here for the network analysis, involving 56 unrelated individuals.
Affymetrix CEL files were preprocessed with the BioConductor package affy, but the target gene expressions were used directly. The probe set IDs were matched with the annotation database of the human genome focus array distributed with GAW15 Problem 1 and from the Affymetrix website (http://www.affymetrix.com). All data management, correlation, and hierarchical cluster analysis were done with the R system (http://www.r-project.org).
Results
Cluster analysis shows that the dendrogram (not shown) differs somewhat from the earlier report [8], possibly due to the difference in sample sizes. Network analysis using B-course (100th checkpoint) showed that the following genes are independent of any other genes in the model: NFYC, LSM3, RAN, VAMP2, RAP80, INPP5A, STC2, and SNRPB. The edges TIMM17A to NDUFB2 and RPN2 to MIR16 are very strong, and removing either of them would result in a model with probability less than one millionth that of the original model. Other results are shown in Table 1. Removing any of the edges in Edge Set 1 from the chosen model would decrease the probability of the model to less than one thousandth the probability of the original model, while removing any of the edges in Edge Set 2 decreases the probability of the model by the ratio listed. The network models are shown in Figures 1 and 2. The so-called causal structure assumes that dependencies between variables are due to causal relationships between variables in the model.
Discussion
Our analysis provides new insights into the complex interactions of gene expression levels in GAW15 Problem 1 data. This work demonstrates the potential usefulness of statistical inference on causal structure. Without an a priori biological hypothesis, it serves as an exploratory tool for subsequent confirmatory analysis. We chose not to repeat the linkage and association analysis but use earlier findings directly [8] and have used the non-informative prior in the analysis as in the current version of B-course. More generally, the influence of the prior network can depend on a variety of factors and is the subject of ongoing research.
An apparent limitation of this work, though not uncommon in gene-expression studies, is the relatively small sample size used. To fully elucidate the biological pathways involved may be difficult. For example, CYCS is involved in six pathways according to http://escience.invitrogen.com/ipath/. Nevertheless, this would be a useful step towards understanding the biological mechanism underlying the master regulators in question. A further limitation relates to the assumption often made in analysis of gene expression data that expression levels of genes are proxies for the activity level of the proteins they encode, although there are numerous examples in which activation or silencing of a regulator is carried out by post-transcriptional protein modifications. Statistical robustness and biological interpretability remain the two main challenges for Bayesian network analyses, for which replication, bootstrapping, and benchmarking have been proposed.
Our inference of gene networks also exploits the covariance structure of the data, like structural equation modelling [10,11], but it is exploratory or hypothesis-generating rather than confirmatory or hypothesis-driven. A number of other software systems are of interest, e.g., ASIAN (a web-based regulatory network framework [12], http://eureka.cbrc.jp) and deal [13]. The B-course software can also generate input files for HUGIN, a commercial tool for inference with Bayesian networks (http://www.hugin.com). Further investigations would be fruitful and may involve genotype data, comparison between groups [5], or SNPs within the same gene, among others.

Figure 1. Importance of the dependencies. A solid line indicates direct causal influence ("direct" meaning that the causal influence is not mediated by any other variable included in the study).

Figure 2. Importance of the causal structure. A solid line indicates direct causal influence ("direct" as in Figure 1). A dashed line with arrowheads indicates that there are two possibilities, but we do not know which holds. A dashed line without any arrowheads indicates that there is a dependency, but we do not know its direction.
Conclusion
Bayesian network modelling is applied to GAW15 gene expression data and shown to be more informative than classic cluster analysis. While the findings are the subject of further investigation, the approach merits further attention.
Endoscopic third ventriculostomy is a safe and effective procedure for the treatment of Blake's pouch cyst
Cystic malformations of the posterior fossa are often revealed by neuroimaging studies in the brains of children. This abnormal amount of cerebrospinal fluid accumulated in the posterior fossa is classified in the spectrum termed the Dandy-Walker complex (DWC) or as arachnoid cysts 1-7. The DWC encompasses the following anomalies: DW malformation 1-5, DW variant 8, and mega-cisterna magna 8. Tortori-Donati et al. 9 have recently suggested that persistent Blake's pouch cyst (BPC) is a separate entity within the DWC 6. BPC consists of posterior ballooning of the superior medullary velum into the cisterna magna. All malformations considered part of the DWC are distinct but overlapping developmental anomalies with different degrees of malformation. Such disorders affect the medullary vela, the cerebellar hemispheres and vermis, the choroid plexus of the fourth ventricle, the subarachnoid cisterns of the posterior fossa, and the surrounding meningeal structures 10-12.
Considering clinical aspects, DWC cysts are often related to other developmental anomalies and very commonly involve hydrocephalus, with greater motor rather than cognitive impairment in most cases 8,9,13. To many authors 9,14-17, treatment options for patients with BPC and hydrocephalus range from marsupialization of the posterior fossa cyst to shunting of the cyst or ventricle; however, all of them have reported complications and high morbidity associated with open surgery and shunt-related problems. Endoscopic third ventriculostomy (ETV) has been effective in the treatment of obstructive hydrocephalus, with some advantages over the conventional approaches 18-20. Thus, in line with the current trend of using minimally invasive procedures in neurosurgery, ETV has been considered a good alternative for BPC treatment. The purpose of this study was to report an experience with ETV in a series of patients with BPC.
METHODS
Patients aged <18 years diagnosed with midline cystic malformations of the posterior fossa, within the DWC, were enrolled between March 1996 and March 2005 at Hospital São Paulo, the neurosurgery service of the Federal University of São Paulo School of Medicine (EPM-Unifesp), in São Paulo, Brazil. The study was approved by the Unifesp Research Ethics Committee and was conducted in accordance with the provisions of the Declaration of Helsinki. Written informed consent was obtained from all patients' parents or legal guardians.
Initially, the sample included 33 children diagnosed with a midline posterior fossa cyst, aged between 1 month and 2 years. Of these, seven patients did not meet the criteria for neurosurgical treatment (individuals in poor clinical condition for surgery or who refused to participate in the research) and were therefore excluded from the study. The diagnosis of BPC was based on transfontanellar ultrasound, computed tomography (CT), cranial magnetic resonance imaging (MRI), and fetal MRI (28-week gestation), and was defined as a midline posterior fossa cyst with a patent cerebral aqueduct and associated hydrocephalus (Figure). Thus, within a series of 26 cases of DWC, eight patients met the criteria for BPC. All of them were treated with ETV and were followed up for five years.
The Denver Developmental Screening Test (DDST), which takes into account the four main areas of child development (personal-social, fine motor-adaptive, gross motor, and language), was applied to each patient for motor and cognitive assessments. Cognitive performance was classified as: the child can learn easily and accomplish school tasks adequately; the child learns more slowly than his/her classmates; the child requires special educational assistance; and the child is not able to learn.
ETV procedures were performed under general anesthesia using a rigid 6-mm neuroendoscope with a zero-degree lens. Cyst fenestration was performed using a 4F Fogarty catheter (Baxter Healthcare Corp., Houston, TX, USA; Edwards Laboratories, Inc., Santa Ana, CA, USA) and complemented with scissors whenever necessary. The procedure was performed with the patient in the supine position and a 30° head-up tilt, with the head in the neutral position or slightly to the left. A 3 × 3 cm area was shaved at the right coronal suture, and a linear skin incision of approximately 4 cm was made down to the periosteum, parallel to and 2 cm away from the coronal suture. The edges of the incision were separated, and a trepanation was performed at Kocher's point 19. It is worth mentioning that in the newborn the bone can be perforated by rotational movements using a 15-blade scalpel, or, in cases of diastasis of the coronal suture, especially those with severe hydrocephalus, a trepanation is not necessary. Then the dura mater was opened, followed by hemostasis and a punctiform opening in the arachnoid. The trocar of the endoscope was inserted through the brain parenchyma to the lateral ventricle and maintained for endoscope insertion (Aesculap, Tuttlingen, Germany; Karl Storz, Tuttlingen, Germany). The endoscope was positioned in the lateral ventricle to visualize the internal structures, such as Monro's foramen, the thalamostriate vein, and the septum pellucidum, which are intraventricular anatomical landmarks for endoscopic navigation and for subsequent location of the fornix and the floor of the ventricle. In the presence of hemorrhage during the procedure, irrigation was performed with saline heated to 37°C.
RESULTS
Of the eight patients with BPC considered eligible for this study, five were male and three were female. Their ages were 1, 7, 26, 2, 11, 9, 2, and 48 months. Three patients were prenatally diagnosed. All of them had hydrocephalus and motor deficiencies, and all underwent ETV.
Motor assessment of these patients at the five-year follow-up yielded normal findings, with a ventricular index lower than 0.4. All subjects showed reductions in cyst volume and ventricular diameters. In general, they improved, and only one had residual cognitive dysfunction despite overall neurological improvement. There were no complications in any of the eight cases.
DISCUSSION
In our case series of 26 midline posterior fossa cysts, all were classified within the DWC, with eight patients diagnosed with BPC and treated by ETV. In all of these cases, ETV led to reductions in cyst volume and ventricular diameters, allowing the development of the cerebellum and cerebellar vermis.
BPCs are considered an embryological regression failure of Blake's pouch (originating from the rudimentary tela chorioidea of the fourth ventricle) caused by an imperforate foramen of Magendie 9,21, leading to dilatation of the superior medullary velum towards the cisterna magna 9,22-25. Another significant aspect is that the foramen of Luschka opens later during embryological development than that of Magendie. Hence, when Magendie's foramen is imperforate, the fourth ventricle will dilate simultaneously with the supratentorial ones up to the opening of the foramen of Luschka. As a result, there is an imbalanced cerebrospinal fluid flow between the ventricles and the cisterns 8,14,21,25-28. Hence, with temporary imperforation of Magendie's foramen, the ventricles remain dilated. Furthermore, based on the theory of BPC, the cerebellar hemispheres and vermis will be subjected to a certain degree of compression and remain underdeveloped. Finally, it has been suggested that BPC and mega-cisterna magna have origins different from those of the DW malformation and variant: the former are now considered to derive from a formative defect of the posterior membranous area, whereas the latter originate from a defect of the anterior membranous area 9,14.
Cyst marsupialization is often proposed in an attempt to reconstruct a new Magendie's foramen and thus establish a new pathway for cerebrospinal fluid. If this procedure achieves its purpose, it may reduce the pouch, enabling re-expansion of the cerebellar hemispheres and vermis and, consequently, relieving hydrocephalus 29-31.
According to Tortori-Donati et al. 9 and Cornips et al. 13, when the lateral ventricles (or cyst) undergo shunting, they return to normal size, with consequent collapse of the cystic fourth ventricle. Conti et al. 17 found that cyst fenestration alone was not efficient as a treatment in cases with associated syringomyelia; the solution was to place a ventriculoperitoneal shunt. In their study, the authors considered that ETV could be an alternative therapeutic solution.
The arachnoid membrane is usually very thin in the absence of inflammatory processes. T2-weighted fast imaging employing steady-state acquisition (FIESTA®, General Electric Company; CISS®, Siemens) MRI images are required for this differentiation; however, membranes that are tightly adhered to the cerebellum are difficult to differentiate. The most important points for diagnosis are cerebral aqueduct patency in continuity with the posterior fossa cyst and the absence of cerebellar malformations of the posterior fossa. This diagnosis can be suspected by ultrasound during the fetal period. CT may also be helpful in the diagnosis, but it should be avoided in this age group.
Regarding clinical status, motor function is the more greatly impaired. The children are unable to walk, but their intelligence is within the normal range. This aspect differentiates BPCs from simple posterior fossa arachnoid cysts, because the latter do not often show such symptoms. Immediate improvement after treatment leads us to believe that the motor deficit occurs only due to compression of the brain stem and cerebellum, and that after normalization of intracranial pressure by ETV, symptoms disappear in most patients.
Ventricular indices, as well as volume, are not suitable criteria for monitoring these patients because, as described in the literature, cystic or ventricular cavities do not always decrease in size. The results in our patients were assessed after evident improvement of motor function and normalization of head circumference.
In our series, ETV was used in patients with symptomatic BPC in an attempt to avoid the risks and morbidity associated with open surgery and issues related to shunt valves, such as occlusion and overdrainage. Surgical treatment was based only on the opening of the third ventricle floor and Liliequist's membrane, which in this age group is not adhered to the tuber cinereum 13. The procedure was effective in all cases, with reductions in cyst volume and fourth ventricular diameter as well as supratentorial ventricular diameter, allowing the development of the cerebellum and cerebellar vermis.
The results were indeed excellent in this series, since we always perform a small frontal craniotomy in infants in order to tightly close the dura mater. Moreover, we do not use any substitutes for dura mater or any glue. We are, therefore, dealing with an obstructive hydrocephalus, and neuroendoscopy is the technique that provides the best results for this type of disease. Furthermore, ETV is an established, minimally invasive procedure that decreases the patient's risk and shortens length of stay.
Figure. T2-weighted magnetic resonance imaging of a Blake's pouch cyst (midline posterior fossa cyst with a patent cerebral aqueduct, wide posterior fossa, and hydrocephalus).
Low neutralizing antibody responses against SARS-CoV-2 in older patients with myeloma after the first BNT162b2 vaccine dose
Patients with multiple myeloma (MM) are at an increased risk for infection because of their immunocompromised state, old age, and comorbidities. 1 Coronavirus disease 2019 (COVID-19) causes moderate to severe acute respiratory dysfunction in 77% of patients with MM, and ~8% end up in critical condition. 2 More than 80% of patients with MM who are infected by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) require hospitalization, 3 whereas ~33% of hospitalized MM patients with COVID-19 may die because of the infection. 4 This is mainly due to the limited therapeutic options for COVID-19. 5
Vaccination against SARS-CoV-2 could be an important preventive strategy against COVID-19 for patients with MM, but its efficacy in MM is largely unknown. 6 The BNT162b2 messenger RNA (mRNA) vaccine is the first anti-SARS-CoV-2 vaccine approved by the US Food and Drug Administration and the European Medicines Agency as a result of its high efficacy in apparently healthy adults. 7 Recently, it was reported that the first BNT162b2 dose provided some protection against COVID-19 among nursing facility residents. 8,9 However, there is no information in the literature about its efficacy in patients with MM or with other malignant diseases. Herein, we report the development of neutralizing antibodies (NAbs) against SARS-CoV-2 in patients with MM after the first dose of the BNT162b2 vaccine.
Major inclusion criteria for the participation of patients with MM in this study included age >18 years; presence of smoldering myeloma or active MM, irrespective of the treatment given or the line of therapy; and eligibility for vaccination, according to International Myeloma Society recommendations. 8 Volunteers of similar age and sex, who served as controls, were included in this analysis. Major exclusion criteria for myeloma patients and controls included the presence of autoimmune disorders or active malignant disease, HIV or active hepatitis B and C infection, or end-stage renal disease. Herein, we report a subanalysis of a prospective study (NCT04743388) evaluating the kinetics of anti-SARS-CoV-2 antibodies following COVID-19 vaccination in healthy subjects and patients with hematological malignancies or solid tumors.
After venipuncture, serum of patients and controls was collected on day 1 (D1; before the first BNT162b2 dose) and on day 22 (D22; before the second dose of the vaccine). Serum was separated within 4 hours of blood collection and stored at −80°C until the day of measurement. NAbs against SARS-CoV-2 were measured using methodology approved by the US Food and Drug Administration (enzyme-linked immunosorbent assay; cPass SARS-CoV-2 Neutralizing Antibody Detection Kit; GenScript, Piscataway, NJ) 10 at the above time points. Samples from the same patient or control were measured on the same enzyme-linked immunosorbent assay plate. The study was approved by the institutional Ethics Committees in accordance with the Declaration of Helsinki and the International Conference on Harmonization for Good Clinical Practice. All patients and controls provided informed consent before entering the study.
The current study population included 48 patients with MM (29 males/19 females; median age, 83 years; range, 59-92 years) and 104 controls (57 males/47 females; median age, 83 years; range, 65-95 years), who were vaccinated during the same period at the same vaccination center (Alexandra Hospital, Athens, Greece). The advanced age of the participants was the result of the Greek vaccination program that prioritizes octogenarians and health care workers for COVID-19 immunization.
The characteristics of the patients with myeloma are depicted in Table 1. In summary, at the time of vaccination, 35 (72.9%) patients were receiving antimyeloma therapy, 4 were in remission after prior therapy and did not receive any therapy at the time of vaccination, and 9 had smoldering myeloma.
On D1, no patient or control had NAb titers ≥30% (the cutoff defining positivity); similarly, there was no difference in NAb titers between patients with MM and controls on D1. After the first dose of the vaccine, on D22, patients with MM had lower NAb titers compared with controls: the median NAb inhibition titer was 20.6% (range, 0-96.7%) for patients with MM vs 32.5% (range, 5.2-97.3%) for controls (P < .01; Figure 1). More specifically, only 12 (25.0%) patients with MM vs 57 (54.8%) controls developed NAb titers ≥30% on D22. Four (8.3%) patients with MM and 21 (20.2%) controls developed NAb titers ≥50% (which corresponds to clinically relevant viral inhibition). 11 All 4 patients with MM were in remission without receiving any antimyeloma therapy: 3 patients after frontline therapy with bortezomib, lenalidomide, and dexamethasone (2 patients had achieved a very good partial response and 1 had achieved a partial response on the day of administration of the first dose of the vaccine) and 1 patient after second-line treatment with lenalidomide and dexamethasone for 14 months (the patient had achieved a very good partial response on D1 of the vaccination). These 4 patients also had normal levels of the uninvolved immunoglobulins after treatment. No other correlation was observed between the antimyeloma treatment given and the development of NAb titers on D22.
Interestingly, only 1 (11.1%) of 9 patients with smoldering myeloma had NAb titers ≥30% (the positivity cutoff) vs 11 (28.2%) of 39 patients with active MM. This patient had normal levels of the uninvolved immunoglobulins, whereas the other 8 patients had immunoparesis in ≥1 uninvolved immunoglobulin. This observation is of great interest, because hypogammaglobulinemia has been associated with an inferior antibody response among patients with chronic lymphocytic leukemia and COVID-19. 12 Our data indicate that the first dose of BNT162b2 leads to production of lower levels of NAbs against SARS-CoV-2 among patients with MM compared with non-MM controls of similar age and sex and without malignant disease. This may be due to the effect of myeloma cells, which suppress normal B-cell expansion and immunoglobulin production. Furthermore, some antimyeloma therapies have a B-cell-depleting activity that, in turn, may impair the immune response to vaccines, whereas the myeloma microenvironment and antimyeloma treatments may impair T-cell function. 13 Patients with MM often exhibit suboptimal seroconversion rates after single-dose vaccines against bacteria and viruses; therefore, booster doses are needed to assure adequate protection, as with the seasonal flu vaccine. 13 We should also take into consideration that the production of NAb titers against SARS-CoV-2 at a level ≥50% on D21 after the first BNT162b2 dose has been low, even among healthy individuals aged 65 to 85 years. 11 However, higher antibody titers after a single dose of an mRNA-based vaccine against SARS-CoV-2 have been detected in individuals who have recovered from COVID-19. 9 Because our results indicate that elderly myeloma patients have a blunted antibody response after the first vaccine dose, they also suggest that the timely administration of a second vaccine dose is essential to develop an adequate antibody-based immune response in this elderly subpopulation with a malignant hematological disease that deregulates immune homeostasis. Antimyeloma therapy seems to negatively affect NAb production (after a single dose), although larger patient numbers are needed to evaluate the effects of specific antimyeloma regimens on the immune responses to anti-SARS-CoV-2 vaccination. Furthermore, this low antibody response of elderly patients with myeloma after the first BNT162b2 dose may not be seen in younger patients. Our ongoing study will also answer this question.
Electrostatic Equilibria on the Unit Circle via Jacobi Polynomials
We use classical Jacobi polynomials to identify the equilibrium configurations of charged particles confined to the unit circle. Our main result unifies two theorems from a 1986 paper of Forrester and Rogers.
Introduction
The use of Jacobi polynomials to describe configurations of charged particles that are in electrostatic equilibrium goes back at least to the work of Heine and Stieltjes in the 19th century (see [4,9,10,11]). Their work considered particles of identical charge confined to an interval in the real line. The key to the calculations is to relate the condition of being a critical point of the appropriate Hamiltonian to the second order differential equation satisfied by the polynomial whose zeros mark the equilibrium points (see [12]). In the case of $n$ particles confined to an interval with charged particles fixed at the endpoints, the relevant second order differential equation is precisely the ODE satisfied by the degree $n$ Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$, namely

$$(1 - x^2)y'' + (\beta - \alpha - (\alpha + \beta + 2)x)y' + n(n + \alpha + \beta + 1)y = 0, \qquad (1)$$

where the real numbers $\alpha$ and $\beta$ are related to the magnitude of the fixed charges at the endpoints of the interval. Many variations and generalizations of Stieltjes' work have been realized since his original papers (see for example [5,6]).
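As a quick numerical sanity check of (1), the following Python sketch (assuming NumPy and SciPy, which are of course not part of the paper) evaluates the left-hand side of the ODE at several points using the standard derivative identity for Jacobi polynomials; the residual is zero up to rounding error.

import numpy as np
from scipy.special import eval_jacobi

def jacobi_ode_residual(n, a, b, x):
    # Residual of (1 - x^2) y'' + (b - a - (a + b + 2) x) y' + n (n + a + b + 1) y
    # for y = P_n^{(a,b)}, using the identity
    # d/dx P_n^{(a,b)} = (n + a + b + 1)/2 * P_{n-1}^{(a+1,b+1)}
    # (applied twice for the second derivative).
    y = eval_jacobi(n, a, b, x)
    yp = 0.5 * (n + a + b + 1) * eval_jacobi(n - 1, a + 1, b + 1, x)
    ypp = 0.25 * (n + a + b + 1) * (n + a + b + 2) * eval_jacobi(n - 2, a + 2, b + 2, x)
    return (1 - x**2) * ypp + (b - a - (a + b + 2) * x) * yp + n * (n + a + b + 1) * y

x = np.linspace(-0.9, 0.9, 7)
print(np.max(np.abs(jacobi_ode_residual(5, 0.3, 1.2, x))))  # ~1e-13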
It was approximately 100 years before the work of Heine and Stieltjes was adapted to the setting of the unit circle by Forrester and Rogers in [1]. In that paper, the authors studied highly symmetric configurations of charged particles that are on the unit circle and in electrostatic equilibrium, meaning the total force on each particle is normal to the circle at its location. They described the equilibrium configurations in terms of the zeros of the appropriate Jacobi polynomials. Our main result (Theorem 1 below) will generalize the results from that paper by allowing for a broader collection of configurations and charges.
Since we will be working with the two-dimensional electrostatic interaction, we will consider Hamiltonians of the form (2), as in [1,2], in which the particles at the points $\{e^{it_j}\}_{j=1}^{M}$ are considered "mobile," the particles at $\{e^{i\eta_j}\}_{j=1}^{K}$ are considered "fixed," and $\sigma(x) > 0$ denotes the charge carried by the particle located at $x \in \mathbb{C}$. To avoid any ambiguity that may arise from rotating the circle, we will always assume $K \geq 1$. In our main result, we will consider the configuration space that consists of all suitably ordered configurations, where $\varphi_{m+1} = 2\pi$. We will denote this configuration space by $S$ and note that $S$ is convex. Let us suppose that $p, q > 0$ are fixed. On $S$ we consider the Hamiltonian $\tilde{H}$. Notice that since $\varphi_1 = 0$ always, we can think of $\tilde{H}$ as being a function of $2mn + 2m - 1$ real variables. This Hamiltonian is of the form $H$ from (2) for an appropriate choice of parameters. More precisely, we will proceed by using the ODE (1) to find a critical point of the Hamiltonian $\tilde{H}$, which we will deduce is the maximizer from a uniqueness result that we prove in Section 2. To find the critical point, we will first consider the case in which the particles with charges $p$ and $q$ are fixed (see Section 3) and then use a symmetry argument to handle the general case in Section 4.
Critical Points of H
Our main result of this section (Theorem 2) is a uniqueness result that applies to all Hamiltonians $H$ of the form (2). We will apply it in several special cases in later sections.

Proof. We follow the method used to prove similar statements in [5,6]. Define $\mathcal{H}$ to be the Hessian of $H$. We will first show that $-\mathcal{H}$ is strictly positive definite, which will then imply that $H$ is strictly concave on each connected component of its domain (using the fact that every such connected component is a convex set; see [8, Theorem 1.5]). To this end, we calculate the partial derivatives of $H$ and observe that the negative of each diagonal entry of $\mathcal{H}$ is precisely the sum of the off-diagonal entries of the same row plus a positive term. It follows that $-\mathcal{H}$ is diagonally dominant, has only positive eigenvalues, and is therefore strictly positive definite, so $H$ is strictly concave on each connected component of its domain.
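The diagonal-dominance step can be illustrated numerically; the sketch below (illustrative only, with nonnegative off-diagonal entries chosen for simplicity so that row sums equal sums of absolute values) builds a symmetric matrix whose diagonal entries equal the off-diagonal row sums plus a positive term, and confirms via Gershgorin's theorem that all of its eigenvalues are positive.

import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.uniform(0.1, 1.0, size=(n, n))
A = (A + A.T) / 2                 # symmetric, nonnegative off-diagonal entries
np.fill_diagonal(A, 0.0)
M = A.copy()
# Diagonal = off-diagonal row sum + a positive term => strict diagonal dominance
np.fill_diagonal(M, A.sum(axis=1) + rng.uniform(0.5, 1.0, size=n))
print(np.linalg.eigvalsh(M).min() > 0)  # True: all eigenvalues are positive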
Recall the convention that if a particle of charge $q$ is located at a point $a \in \mathbb{C}$ and a particle of charge $p$ is located at a point $b \in \mathbb{C}$, then the force on the particle at $b$ due to the particle at $a$ is $2pq/(b - \bar{a})$ (as in [2,3,7]). With this convention, we have the following lemma (Lemma 3) relating critical points of general Hamiltonians of the form (2) to the condition of electrostatic equilibrium (see also [2]).
Proof. We have already seen the formulas for the partial derivatives of $H$; rewriting them in terms of $\sigma(e^{i\eta_b})\sigma(e^{it_k})/2$, the desired result follows.
It follows from Lemma 3 that $\{t_j^*\}_{j=1}^{M}$ is a critical point of $H$ if and only if we have equality in (4) for all $k = 1, 2, \ldots, M$. For future reference, notice that the expression on the right-hand side of (4) is one half of the sum of the charges on all of the particles in the system except the one at $e^{it_k^*}$.
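As a hedged numerical illustration of this remark (not taken from the paper), consider the simplest symmetric case of $M$ equal unit charges at the $M$th roots of unity. The classical identity for roots of unity gives, for each $k$, that the sum over the other particles of $1/(z_k - z_j)$ equals $e^{-it_k}$ times $(M-1)/2$, that is, $e^{-it_k}$ times one half of the total charge of the other particles, exactly the shape of condition (4) described above.

import numpy as np

M = 8
z = np.exp(2j * np.pi * np.arange(M) / M)  # M equal unit charges on the circle
for k in range(M):
    lhs = sum(1.0 / (z[k] - z[j]) for j in range(M) if j != k)
    rhs = np.conj(z[k]) * (M - 1) / 2      # exp(-i t_k) times half the other charges
    assert np.isclose(lhs, rhs)
print("equilibrium identity verified for", M, "equally spaced unit charges")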
p and q Charges Fixed
In this section, we will take a preliminary step towards the proof of Theorem 1 and consider the Hamiltonian $\hat{H}$. The Hamiltonian $\hat{H}$ isolates the $\theta$-dependence of the Hamiltonian $\tilde{H}$ by fixing the locations of the points $\{e^{i\varphi_j}\}_{j=1}^{m}$ and $\{e^{i\psi_j}\}_{j=1}^{m}$ at the $m$th roots of $1$ and $-1$, respectively. Let us also define the configuration space $\hat{S}$ to be the set of all $\{\theta_j\}_{j=1}^{2mn}$ satisfying $\theta_j < \theta_{j+1}$ for $j = 1, 2, \ldots, 2mn - 1$. Observe that $\hat{S}$ is convex. In this context, we have the following result (Proposition 1).

Proof. Notice that the Hamiltonian $\hat{H}$ is of the form $H$ from (2) with $M = 2mn$; $K = 2m$; $\{e^{i\eta_j}\}_{j=1}^{2m}$ equal to the roots of $z^{2m} - 1$; $\sigma(e^{i\theta_k}) = 1$ for all $k = 1, \ldots, 2mn$; $\sigma(e^{2\pi i j/m}) = p$; and $\sigma(e^{(2j+1)i\pi/m}) = q$ for all $j = 1, \ldots, m$. By Theorem 2, it will suffice to show that the zeros of the polynomial in (3) form a critical point of $\hat{H}$ on $\hat{S}$, because the maximum must occur at a critical point.
Let us define the polynomial $Q_{nm}(z)$ to be the polynomial given in (3). Abbreviating $P_n^{(\alpha,\beta)}$ by $P_n$, we compute the derivatives of $Q_{nm}$; using these calculations and the differential equation (1), one can verify that $Q_{nm}$ satisfies a second-order ODE. We see that the only poles of $T$ are at $0$, $1$, and $-1$, and hence the zeros of $Q_{nm}$ and the poles of $T(z^m)$ are disjoint sets. Thus, if $Q_{nm}(e^{i\theta_j^*}) = 0$ for $j = 1, 2, \ldots, 2mn$, then the corresponding critical-point identity holds. Now set $p = \alpha + 1/2$ and $q = \beta + 1/2$ and calculate the expression in (5). Since the resulting equality is true for every $j = 1, \ldots, 2mn$, Lemma 3 and (6) show that the zeros of $Q_{nm}$ form a critical point of $\hat{H}$. By Theorem 2, this is the only critical point on $\hat{S}$ and hence is the maximizing configuration.
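Although the explicit formula (3) for $Q_{nm}$ is not reproduced in this excerpt, the Jacobi zeros at the heart of the construction are straightforward to compute numerically. The following sketch (assuming SciPy; not part of the paper) obtains the zeros of $P_n^{(\alpha,\beta)}$ on $(-1, 1)$, using the relation $p = \alpha + 1/2$ and $q = \beta + 1/2$ stated above.

import numpy as np
from scipy.special import roots_jacobi

# roots_jacobi(n, a, b) returns the Gauss-Jacobi quadrature nodes, which are
# exactly the n zeros of P_n^{(a,b)} on (-1, 1); the weights are discarded.
p, q, n = 1.5, 2.0, 4
a, b = p - 0.5, q - 0.5
nodes, _ = roots_jacobi(n, a, b)
print(np.sort(nodes))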
p and q Charges Mobile
Now we will consider the full Hamiltonian $\tilde{H}$. This Hamiltonian is of the form $H$ with $K = 1$, $\eta_1 = 0$, and $M = 2mn + 2m - 1$, and with the $t_j$'s denoting the arguments of all of the particles in the system other than the particle at $1$.
Proof of Theorem 1. By Theorem 2, it suffices to show that the suggested configuration is a critical point of $\tilde{H}$. Let $\{e^{i\theta_j^*}\}_{j=1}^{2mn}$ denote the zeros of $Q_{nm}$ from the previous section. We know from Proposition 1 that the critical-point equations hold for all $k = 1, 2, \ldots, 2mn$. It remains to check that the partial derivatives with respect to each $\varphi_k$ ($k = 2, \ldots, m$) and $\psi_k$ ($k = 1, \ldots, m$) vanish at this configuration, and for this we will use Lemma 3. From symmetry, we know that in this configuration the sum of the forces on each particle of charge $p$ or $q$ is radial at that point. Also, the magnitude of the force on all particles of charge $p$ is the same, and the magnitude of the force on all particles of charge $q$ is the same. This means that if $e^{i\varphi_j} = e^{2i\pi(j-1)/m}$ and $e^{i\psi_j} = e^{i\pi(2j-1)/m}$, then for each $k = 1, 2, \ldots, m$ there are real constants $C$ and $C'$ determined by these forces. Rewriting the relevant sums in terms of $B_m(z) = z^m - 1$ and $D_m(z) = z^m + 1$, the expressions simplify to

$$2pnm + pqm + p^2(m - 1) = C, \qquad 2qnm + pqm + q^2(m - 1) = C'.$$

This shows that we can write

$$\sum_{j=1}^{2mn} \frac{p}{e^{i\varphi_k} - e^{i\theta_j^*}} + \sum_{j=1}^{m} \frac{pq}{e^{i\varphi_k} - e^{i\psi_j}} + \sum_{\substack{j=1 \\ j \neq k}}^{m} \frac{p^2}{e^{i\varphi_k} - e^{i\varphi_j}} = e^{-i\varphi_k}\,\frac{2pnm + pqm + p^2(m - 1)}{2} \qquad (7)$$

$$\sum_{j=1}^{2mn} \frac{q}{e^{i\psi_k} - e^{i\theta_j^*}} + \sum_{j=1}^{m} \frac{pq}{e^{i\psi_k} - e^{i\varphi_j}} + \sum_{\substack{j=1 \\ j \neq k}}^{m} \frac{q^2}{e^{i\psi_k} - e^{i\psi_j}} = e^{-i\psi_k}\,\frac{2qnm + pqm + q^2(m - 1)}{2} \qquad (8)$$

for all $k = 1, \ldots, m$. At $e^{i\varphi_k}$, the sum of the charges on all of the other particles is $2mn + mq + (m - 1)p$. At $e^{i\psi_k}$, the sum of the charges on all of the other particles is $2mn + mp + (m - 1)q$.
We can now apply Lemma 3 to conclude that the suggested configuration is a critical point of $\tilde{H}$ on $S$ (note that to reach this conclusion, we do not need to apply (7) when $k = 1$ because we assume $\varphi_1 = 0$ always). By Theorem 2, this is the only critical point in $S$ and hence must be the maximizing configuration.
FRENCH CATHOLIC LITERARY REVIVAL: HISTORICAL AND CULTURAL BACKGROUND
Summary The article focuses on the background of the French Catholic literary revival of the first half of the XXth century. Among the fundamentals of the analysed phenomenon, three major ones are distinguished: the historical background, the theological preconditions, and the influence of purely literary predecessors of the XIXth century. The first is presented through research into the French peculiarities of republican secularisation, its educational reforms, and the social-cultural impact of the Dreyfus affair. The ecclesiastical context is described through the challenge of theological modernism and the papal encyclicals that attempted to deal with it. The division of French Catholic identity between Gallicanism and Ultramontanism, and its influence on the cultural context, is also discussed. Finally, the return of Christian spirituality and the birth of a specific apophatic poetics are traced from the symbolism of the Romantics Chateaubriand and Lamartine, through Baudelaire's aesthetic of sin, to the decadent poetization of embodied evil and the divided human soul in d'Aurevilly and Huysmans. Special emphasis is placed on the tradition of political engagement of Catholic writers from J. de Maistre through Ch. Maurras to L. Bloy. Hence, the Catholic literary revival is regarded as a complex cultural, historical, theological, and literary French phenomenon.
Introduction
Contemporary historical literary studies in the broad Western tradition are marked by the return of religious topics that had been marginalized for almost half a century because of the predominance of postmodern critical traditions, in which the heritage of XXth-century engaged Christian writers was analysed as mono-narrative, even highly ideological, and thus out of date. Nevertheless, such phenomena of the late XXth and early XXIst centuries as the resurrection of the author (Compagnon; Vanhoozer, 1998) and the onto-theological turn in French philosophy, in particular phenomenology (Marion, 1977; Lévinas, 1984), within the general emphasis on ethical-political thinking (Derrida, 1991; 1992), generated a new wave of critical interest in Christian literary writing. The French humanities returned to the problems of such a phenomenon as the Catholic literary revival in the context of the XXth century's interwar French culture, as well as to its definition, chronological terms, poetical peculiarities, and representative circles. For instance, H. Serry, in the historical study «Naissance de l'intellectuel catholique» (Serry, 2004), examines the birth and development of the engaged Catholic intellectual in the context of the post-Dreyfus secularisation processes of the French Republic and focuses particularly on the main journalistic centres that could be regarded as the source of the whole movement of «Renouveau catholique» (Catholic revival). However, being a historian, he predominantly deals with the historical and partially the religious causes while neglecting the purely literary ones. On the other hand, R. Griffiths, in his «Révolution à rebours. Le renouveau catholique dans la littérature française» (Griffiths, 2020), diachronically focuses on the preconditions, stylistic peculiarities, and presumptive chronological terms of the French Catholic literary revival, paying less attention to the vast cultural and religious background as well as to the problematic distinction between the literary predecessors of the movement and its XXth-century representatives. Ultimately, the comparative analysis of B. Sudlow, «Catholic Literature and Secularisation in France and England, 1880-1914» (Sudlow, 2011), tries to build a bond between two similar yet poetically and causally distinct literary answers to the historical trend of secularisation in the late XIXth and early XXth centuries, while still confining the research within the methodological boundaries of sociology. Our article, based on the cultural-historical diachronic method as well as P. Ricoeur's hermeneutics, is intended to fill the gaps in the analysis of the vast movement of the French Catholic literary revival. More precisely, our tasks are to distinguish the most fundamental historical, religious, and purely literary premises that caused the revival of explicit Christian literary and cultural activity in the context of the phenomenon of engaged intellectuals.
Historical background: from secularisation to the Dreyfus affair
Among the most representative socio-political processes that influenced the birth and development of Catholic literary identity during the XIXth and early XXth centuries, two major ones must be mentioned: republican secularisation and the Dreyfus affair. During the XIXth century (precisely 1789-1914), France went through a large number of political regimes, finishing with the establishment of the Third Republic in 1871. The Republic, based on Enlightenment philosophy and the prevailing trends of positivism and scientism, with the urban bourgeoisie's support declared a vector towards democratization, secularisation, and the strengthening of liberal values. As a result, the country was divided into two hostile ideological camps. One fought for the republican idea of the political left, while the conservative right wing (mostly Catholic) identified itself with support for the monarchical system.
From the 1870s-1880s, the Republic pursued a course of secularisation, whose object was the complete separation of church and state and whose prime mover was a range of educational reforms. The contemporary sociologist Jose Casanova describes the French model of secularism as follows: «Another direction of secularisation took the form of laïcité, that is, the emancipation of all secular spheres from clerical-ecclesiastical control. In this respect, secularisation is marked by antagonism between the laity and the clergy. In contrast to the Protestant direction, here the separation of religious and secular is strictly preserved, but this boundary is moved to the margins in order to enclose, privatize and marginalize everything religious there, separating it from any presence in the secular public sphere. The paradigm of this direction is France's relationship with Latin Catholicism» (Casanova, 2017: 127-128). One of the first steps of educational secularisation was Jules Ferry's law of 1881-1882, which made primary education in all French schools compulsory, free and, most significantly, secular. In 1886, R. Goblet's law was adopted, according to which the right to teach in public schools was granted only to secular persons. In 1902, secondary education was reformed (réforme du bac). The number of humanities subjects that had made up the core of pre-revolutionary education was reduced, while the number of hours devoted to living languages as well as the natural sciences increased. In the view of republican educators, this kind of division was to meet the needs of the XXth century's secular state. University reforms also took place, although less radically. At the theological faculties, the chairs were occupied by clergymen themselves; this was the core of conservative Catholics, predominantly monarchists. Therefore, in 1886, the Section of Religious Sciences was established at the École pratique des hautes études. This academic structure was intended to spread a secular view of religious phenomena, based on the doctrines of Renan and Taine (Pelletier, 2013). Another manifestation of the Republic's rebuilding was the social policy of the 1870s-1880s, aimed at renewing the state's functionaries on the principles of meritocracy. Consequently, access to state management was granted not only to wider circles of the French bourgeoisie, but also to Protestant and Jewish elites. The division of identities became clearer year by year, and so did that of their printed organs. The apogee of thirty years of republican secular policy came in 1905, when the law on the separation of church and state came into force. This very law caused sharp controversy in French society and finally divided the country into «two Frances».
Another event that contributed not only to the crystallization of Catholic identity in its opposition to the secular one, but also to its social-political manifestation, was the Dreyfus affair (1894-1906): the trial, and the related public debate, concerning the possible espionage of a Jewish officer of the French Army for the benefit of the German Empire. The «two Frances» publicly demonstrated their sides. Monarchist Catholics of the right wing, who devastatingly criticized the Republic and its ideological foundations, mainly chose an anti-Dreyfusard position, while secular democratic republicans as well as liberal Catholics stood on Dreyfusard positions. There were few exceptions. Thus Ch. Péguy, one of the fundamental writers of the French Catholic literary revival, publicly supported Dreyfus' innocence. Both camps resorted to long and heated discussions on the pages of French newspapers and private magazines. On January 13, 1898, in the newspaper «L'Aurore», E. Zola came to the forefront with his article «J'accuse» («I accuse»), addressed to the President of the Republic. He accused the state of anti-Semitism, and the military court of insufficient evidence, hence of conscious bias. Two thousand copies were sold out in a few hours, and the article provoked further debates. The well-known historian of French literature A. Thibaudet notes: «for the first time, a piece of paper sold on the street became a call to religious war» (Thibaudet, 1936: 386). On February 5, 1898, the official Jesuit paper «La civiltà cattolica» issued an anti-Dreyfusard indictment of Jews and Protestants, accusing them of collaboration with Germany, while the Jewish journal «Univers israélite» described the whole Dreyfus affair as a consequence of the church's attack on Reason. As a result, the groups of revisionists and anti-revisionists were formed.
After E. Zola's court verdict on February 24, 1898, the period of the so-called leagues began. «La Ligue des Droits de l'Homme» (League of Human Rights) stood on the Dreyfusard position; Ch. Péguy was on its side. On the contrary, in December 1898 the anti-Dreyfusard intellectuals established «La Ligue de la Patrie Française» (League of the French Fatherland). Such names, significant for the upcoming French Catholic literary revival, as P. Bourget, M. Barrès, and Ch. Maurras were among its supporters. On June 20, 1898, the nationalist conservative movement «Action française» (French Action) and the magazine of the same name were organized. Ch. Maurras quickly became one of its major ideologues; G. Bernanos, a crucial author of the XXth century's Catholic literary revival, would be one of its writers. The intellectuals of both leagues waged an implacable struggle against their opponents on the pages of their own journals: the Republic against the Monarchy, left-liberal republicanism against right-wing conservative traditionalism, atheism against Catholicism, secularism against the union of state and church, the prominence of Jewish politicians and intellectuals against anti-Semitism, pacifism against militarism (Pelletier, 1995). No previous French generation had had the opportunity to publicly identify and discuss the cornerstone ideological contradictions of the era on such a scale. That is how, according to its researcher H. Serry, the phenomenon of the «engaged intellectual» was born. Since then, the French writer has had no moral right to ignore the social-political, philosophical, and ideological challenges of the times. That is why almost all the representatives of the French Catholic literary revival, from Ch. Péguy to F. Mauriac and G. Bernanos, were also political essayists (Julliard, 1995).
Theological preconditions: papal encyclicals and ecclesiastical modernism
For conservative Catholic circles, the whole XIXth century manifested itself in a variety of theological disputes, which at the end of the century crystallized into the single challenge of ecclesiastical modernism. In the French ecclesiastical context, it was primarily a struggle between two movements: Gallicanism (the idea of a national church, dependent on Rome purely in theological matters) and Ultramontanism (the complete subjugation of the French church to the papal throne). After the French Revolution, the clerical representatives of the first movement tried not only to build as autonomous a church as possible, but also leaned towards the liberal values of social progress. In contrast, the second branch was rooted in the ideas of monarchy and loyalty to Rome, and hence received the name «Eglise intransigeante» (the irreconcilable church) (Pelletier, 2019). In addition, during the century, the discourses of historicism, positivism, scientism, and secularism, together with Renan's and Taine's religious theories, raised the question of whether the church was outdated in part of its teachings and whether it could be changed. Some intellectuals, fascinated by the philosophical or political discourses of the time, eventually came to atheism. The majority of writers who converted to Catholicism at the end of the XIXth and the beginning of the XXth centuries had lost their faith either because of a dominant philosophical system (the French Enlightenment, Kantianism, later the philosophy of Schopenhauer, Nietzsche, and Marx), or because their secularized parents had turned away from the church (Mauriac, Barrès, Huysmans) (Gugelot, 2002). The generation of the French Catholic literary revival is predominantly a converted generation, who discovered biblical faith for themselves anew.
Therefore, in 1864, Pope Pius IX issued the «Syllabus», in which he condemned the popular discourses of that time: pantheism, naturalism, positivism, scientism, socialism, communism, laicism, fideism, and rationalism, arguing why each of them could not be combined with Catholicism (Tribault, 1972). In order to reaffirm the traditional Catholic non-contradiction of faith and reason (fides et ratio), in 1879, in the encyclical «Aeterni patris», Pope Leo XIII declared the philosophy of Thomas Aquinas to be the most complete philosophical reflection of Catholic dogmatic faith. J. Maritain, a figure so significant for the French Catholic literary revival, actualized this scholastic system in his own neo-Thomist philosophical thought (Maritain, 1920). Then, in 1891, in the encyclical «Rerum Novarum», Pope Leo XIII dealt with the issue of socialism and communism, proclaiming the social doctrine of the church. In 1892, in the encyclical «Inter Sollicitudines», addressed primarily to the French episcopate, the Pope called all Catholics to «rally» to the Republic. In fact, possible reconciliation with the Republic raised the question of tolerating republican, thus secular, values, and hence provoked a wave of misunderstanding among conservatives (Pelletier, 2003). Subsequently, the French Catholic elites once again divided into two leagues: those who supported the ecclesiastical «Ralliement» (Rallying) and those who were strictly against it.
Finally, during the pontificate of Pius X, all the above-mentioned challenges led to a global modernist ecclesiastical crisis. Modernism in the Catholic theological context must be understood as an attempt to renew the very principles of the church, which meant possible change in its traditional dogmatic and moral foundations in accordance with modern progressive ideas (Dictionnaire de la théologie catholique, 1908). The questions discussed were not external, and consequently variable, matters, but rather the status of the creed itself (depositum fidei). Doctrinal modernism had its roots in Kantian subjectivism, according to which God cannot be known by any effort of the human mind; therefore dogmas that claim to be an objective reflection of his revelation are nothing but subjective creations of dubious value. In France, ecclesiastical modernism was primarily embodied in the biblical modernism of A. Loisy, who in his book «L'Evangile et l'Eglise» («The Gospel and the Church») questioned dogmatic judgments on the divinity of Christ, heaven, and the mystical nature of the church and its sacraments (Ibid., 1908: 2024). After all, in 1907, the papal decree «Lamentabili sane exitu» officially condemned 65 of Loisy's modernist or relativistic theses, and the following encyclical «Pascendi» called modernism the synthesis of all heresies (Pelletier, 2013), while an anti-modernist oath was demanded of the entire Catholic clergy. Thus, the modernist crisis in the French Catholic Church manifested a fundamental religious division that already existed at the political level and brought theological discussion among engaged intellectuals to the public level, which, together with a huge wave of converted writers, cast new light on Christian problematics.
Literary predecessors: from romanticism to decadence
A number of histories of French literature attest to the existence of literary movements that can be considered the literary predecessors of the Catholic literary revival of the first half of the XXth century (Milner, 1985; Delon et al, 2007). Thus, a list of XIXth-century authors directly influenced the generation of the Catholic novel. Firstly, the theological fideism of Jansenist Port-Royal is worth mentioning, and particularly B. Pascal's philosophical system, which became a crucial principle of the Catholic revival's poetics, especially those of Mauriac and Bernanos. In 1802, F.-R. de Chateaubriand's «Génie du christianisme» («The Genius of Christianity») (De Chateaubriand, 2018) was published. It became the first book in French XIXth-century fiction to openly aestheticize Christianity while asserting its artistic and moral superiority over all other religions. «Les Martyrs» («Martyrs, or the Triumph of the Christian Faith») appeared in 1809 (De Chateaubriand, 1936). One of the first French romanticists, Chateaubriand used the Christian lexicon and symbolism to elaborate a poetical style that, in the author's opinion, should testify to the unique light of true beauty, which for him is God. The ideological inspirer of the French Catholic revival, Ch. Maurras, wrote that before Chateaubriand the word was nothing but abstraction (Norra, 2001). Subsequently, Chateaubriand began a long line of writers engaged in political writing, a line that leads directly to Barres, Claudel, Mauriac and Bernanos. Moreover, the theorist of traditionalism and activist of the French Restoration, L. de Bonald, affected the writing of P. Bourget and Ch. Maurras with his political pamphlets, whereas the reactionary, highly metaphorical, even stylistically uncompromising journalistic writing of Joseph de Maistre influenced the foundation of L. Veuillot's professional Catholic journalism around the periodical «L'Univers» («The Universe»), as well as L. Bloy's emotional political pamphlets.
Another romanticist, A. de Lamartine, in his poetry calls to the Lord de profundis of the inner self, using a highly symbolic religious language in which the echo of «The Genius of Christianity» is clearly felt. The poet's reflections are concentrated on the human soul, which, in his view, was created through sacrifice. Hence, the very sacrifice of our soul to save others makes it possible to add the individual sacrifice to Christ's sacrificial gift on behalf of all humanity. The poet elaborates the conception of a gradual ascension to God. Apart from the broad Christian symbolism, a messianic discourse is clearly audible (Tarasiuk, 2014). Moreover, after 1848 the poet resorted to distinctly reactionary political writing, opposing the republican educational project. A. de Vigny, Lamartine's contemporary, often used biblical images in his poetry («Moses», «Flood»); however, his Catholic feeling is radically different. His is a universe where God is silent (De Vigny, 1946), where world sorrow is tragically felt, and where abandonment by God only sharpens human religious senses («Destinées» («Destinies»)).
According to A. Tribaudet, an integral part of Ch. Baudelaire's poetics is its inner Christianity, which, in sharp contrast with the romanticist Christian pathos and lyricism, affirms the Christian view of a human nature affected by original sin. The original title of Baudelaire's «Fleurs du mal» («Flowers of Evil») was «Les Limbes» («Limbo») (Baudelaire, 1942). Depicting a world where evil reigns and where sinful human nature constantly generates and aestheticizes monstrosity, Baudelaire criticized the Enlightenment myth of the originally innocent, naturally good human. In contrast to Rousseau and the Romanticists, Baudelaire portrayed human nature negatively, noting that it is capable of producing nothing but crime, and affirming that all modern heresies came from one huge heresy of modernity: the rejection of the idea of original sin (Tribaudet, 1936). Decadent poetics correlates with the Christian worldview in a way similar to Baudelaire's. Thus, B. d'Aurervilly (converted to Catholicism in 1846) and J.-K. Huysmans (converted in 1891) elaborated the aesthetic of embodied evil, directly using the images of Satan and demons in their novels «Les Diaboliques» («The She-Devils») and «Là-bas» («Down There»). These authors explain evil neither by social deformation (realism) nor by physiological and psychological determinism (naturalism); needless to say, they are far from the enlightened theory of original human decency. Depicting and poeticizing evil and the depth of the human fall of both body and soul, they are deeply rooted in the Christian conception of sin. In 1866, in the introduction to his novel «Une vieille maîtresse» («The Old Mistress»), d'Aurervilly created a theory of the Catholic novel, arguing that while the Christian worldview strictly divides Evil and Good and offers a clear identity, it nevertheless places no limits on the freedom of the Catholic author's imagination (d'Aurervilly, 1851). The Catholic writer must not judge his own characters and their poetical universe, but must depict the eternal human division between God and Satan in all its truth and tragedy. Having read the most decadent novel of the era, «À rebours» («Against Nature») by Huysmans, d'Aurervilly predicted that the author would have to choose between «the barrel of the gun and the foot of the cross» (Tribaudet, 1936). In his post-conversion cycle of novels, «En route» («On the Road», 1895), «La Cathédrale» («The Cathedral», 1898) and «L'Oblat» («The Oblate», 1903), Huysmans, from a Christian perspective and mostly using the apophatic language of the Catholic Rhineland mystics, presents images of saints and sinners, explores the intermedial connections between literature and Christian art, music and architecture, and elaborates the problems of internal spiritual struggle (Huysmans, 2019).
In 1851, in the text «Prophètes du passé» («Prophets of the Past»), d'Aurervilly named the predecessors (de Maistre, Bonald, Chateaubriand, Lamennais) and the successors (l'Isle-Adam, Bloy) of his writing method. Consequently, his literary canon partly approaches our study of Christian discourse in French literature after 1789 (D'Aurervilly, 2011). However, as we analysed above, his emphasis on L. Bloy alone cannot be sufficient. Similar poetics and Catholic worldview coordinates actualized a much broader Christian discourse: the passion-fascinated M. Barres, the ideological novelist P. Bourget, the mystical and visionary Ch. Péguy, the polemical L. Daudet, and finally the flourishing of the Catholic novel and political writing in the first half of the XXth century, which is associated with the names of F. Mauriac, P. Claudel, G. Bernanos and J. Green. This younger generation, having absorbed the elements of the French XIXth-century literary interpretation of Christianity and having answered the major secular challenges of that time, nevertheless created a qualitatively new poetics, which we today call the French Catholic literary revival.
Conclusions
Taking all the above-mentioned peculiarities into consideration, it becomes clear that the movement of the French Catholic literary revival of the first half of the XXth century did not appear without foundation. It was a cultural answer to a range of challenges that came into view during the previous century. We have divided these challenges into three categories in accordance with the nature of the phenomena: historical, theological and purely literary. Among the historical premises, two fundamental ones were distinguished: the birth of the Third French Republic, with its vector of fast secularisation in all social, predominantly educational, domains, and, as a result, the division of the French cultural elites into two hostile camps, each elaborating its own doctrines and forming its own printed organs; and then the Dreyfus affair, the social-political precedent that formed the new cultural phenomenon of the French «engaged intellectual», regardless of which side was defended. An active political response on the pages of periodicals would be one of the characteristics of all the writers of the Catholic revival. Yet the ecclesiastic preconditions were of no less significance. The whole XIXth century was a period of multiple papal encyclicals reacting to the philosophical and social currents of the time: from scientism and Kantianism to socialism and communism. The identity of French Catholic writers was much complicated by the permanent movement between Gallicanism and Ultramontanism and by the deep modernist theological crisis at the end of the century. Nevertheless, exactly this instability gave the writers the opportunity to elaborate Christian spiritual topics much more freely. The romanticists Chateaubriand and Lamartine initiated the use of biblical allusions and symbols, Baudelaire introduced the problematic of evil and of human attraction to it, and the decadents Huysmans and d'Aurervilly returned to the theme of original sin and elaborated a whole apophatic Catholic poetics. Without these literary predecessors, the French Catholic literary revival could not have appeared. However, the broader background is still to be analysed: the phenomenon and the reasons of Catholic literary conversions, the role of printed organs and cultural circles, and the poetical peculiarities and the specifics of the politically engaged writing of both the predecessors and the representatives of the French Catholic literary revival must be studied in our future research.
|
2022-11-17T16:10:26.501Z
|
2022-11-15T00:00:00.000
|
{
"year": 2022,
"sha1": "c1bf33c95fa78a049aadd8161f6169539a8aa472",
"oa_license": "CCBY",
"oa_url": "http://pnap.ap.edu.pl/index.php/pnap/article/download/953/907",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f29032b83622a550117fd0c5774318beea65223",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
}
|
233777641
|
pes2o/s2orc
|
v3-fos-license
|
Formation of the scientist image in modern conditions of digital society transformation
The publication considers the factors that influence the formation of a scientist's image, in particular: the availability of information about the scientist and the scientific organization to which he or she belongs; scientometric indices; the use of global identifiers to improve accuracy in calculating indicators; the publication of papers in journals with a high impact factor; publications in resources that provide visibility in the global information space; involvement in the global communications system; and level of competence. Specialists in various fields of science have developed a number of practical recommendations on the techniques and tools that can be used to create both a personal image and the image of an organization, institution, firm, etc. The main directions of using digital technologies to create the image of a scientist are also identified and substantiated. Based on an analysis of the scientific literature and personal experience, recommendations are formulated for scientists on building their own image using digital systems: the author's digital identifier ORCID, profiles in international scientometric systems, publications saved in electronic libraries, profiles in social and scientific electronic networks, etc.
Introduction
The Concept of Digital Economy and Society Development of Ukraine for 2018-2020 [8] states that the integration of Ukrainian science into the European research space will provide an opportunity to develop advanced scientific ideas and to participate in interdisciplinary projects focusing on promising ideas, technologies and innovations. Also, one of the important elements of the EU Digital Single Market, and part of the paradigm "Open Innovation - Open Science - Openness to the World", is the development of the European Open Science Cloud and the European Data Infrastructure. The main areas of harmonization of Ukraine's research initiatives with the European Research and Innovation Area include: the development of interoperable digital infrastructures for educational and scientific institutions; connection to the GEANT educational network, the distributed computing system, and the data collection, storage and processing capacities of the European grid infrastructure; opening access to data and publications produced at the expense of state funding; and the creation of technological "road maps" of public-private partnership and the commercialization of scientific developments for industry and social challenges, etc. The publication [56] describes the methodological principles underlying the regulations adopted in Ukraine that determine the procedure for the state certification of scientific institutions. The problems of implementing government decisions on the transition to international criteria for evaluating the work of scientists are analyzed. Emphasis is placed on the analysis of the methodological approaches that characterize the publishing activity of researchers and research institutions.
The work "Criteria of a Scholar" [28] highlights the problem of personality in science. Its contribution to the world treasury of knowledge is identified. Aspects of professional and public recognition of the scientist are revealed. The main attention is paid to analysis of modern criteria and methods of evaluation of scientists' activity. The world experience of construction of scientometric indicators and features of their application in Ukraine are analyzed. The scientific biography as phenomenon of scientist recognition is considered.
The research [62] is devoted to the study of the objective and subjective factors that contribute to the realization of the scientific potential of young scientists. The conceptual interpretation of scientific potential is clarified, which enables its empirical measurement. Two levels of theorizing (macro- and micro-level) are taken into account, but the micro-level treatment of scientific potential is more appropriate because it is based on the subjective component of scientific potential and allows the scientific potential of the subjects of scientific activity to be studied. The macro-level approach is based on the neo-institutional paradigm of social development and the resource approach; it emphasizes the resource and institutional components of scientific potential. A theoretical interpretation of the concepts "subject of scientific activity", "factors of realization" and "scientific activity" is carried out. The factors in the realization of the scientific potential of young scientists are empirically determined and investigated.
The problem of the everyday practices of science functioning as a social institution is considered in [11]. Metaphorical comparisons in this area, such as the "black box" and "unconscious science", are discussed. The ethical and pragmatic implications of both approaches are analyzed, and the authors' own metaphor of science, the "reverse side of the mirror", is proposed. The heuristic potential of the practical implementation of this metaphor is revealed on the example of the presented training program of self-presentation (image creation) and self-organization for young scientists and teachers, developed for the Council of Young Scientists of the Belarusian State University.
The work [25] analyzed the culturological aspect of the nature of virtual reality as an object of study and a sphere of a scientist's self-presentation. The peculiarities of the transformation of an individual's status as a consumer of various kinds of information in the network are determined. Crisis aspects of "user-virtual-user" communication are clarified. An attempt is made to outline the significance of the latest changes in the field of culture related to the development and spread of virtual reality.
An analysis of network communities as sources of information communication is performed in the publication [12]. It provides a brief description and comparative analysis of popular web communities. A scheme of the functioning of the site "Ukrainian scientific Internet community" (http://www.naukaonline.org) is developed.
The development of digital infrastructure (for science and education institutions) is also crucial for ensuring open access to scientific data and knowledge, the further commercialization of research, and the creation of innovations, products and services. New knowledge and developments financed from the state budget should be openly available and become the property of society as a whole. However, the lack of access to global scientific digital infrastructures (the global knowledge base, computer services, consulting, and research in fundamental and applied fields) negatively affects Ukrainian science in general and is a significant limitation for Ukrainian scientists, engineers and civil servants. It does not allow the possibilities of Ukrainian science to be assessed or options for cooperation in international projects to be explored, in particular in areas related to digital technologies [8]. It is also important to use the resources of the European Open Science Cloud and the European Data Infrastructure so that Ukrainian scientists can obtain up-to-date research results and implement them in Ukraine.
During this project, the Science Europe Association noted in Plan S [10] that, from 2021, all scientific publications on research results funded by public or private grants provided by national, regional and international research councils and funding bodies must be published in open access journals or on open access platforms. Where applicable, publication fees are to be covered by the funders or research institutions, not by individual researchers, and the structure of such fees should be transparent in order to inform the market and potential standardization and capping of fees. In cases where high-quality open access journals or platforms do not yet exist, the funders will coordinate their creation and support them where appropriate; support will also be provided for open access infrastructures as needed. Governments, universities, research organizations, libraries, academies and research societies are financially encouraged to align their strategies, policies and practices, especially to ensure the transparency of research. The funders do not support the "hybrid" publication model; however, as a transitional path to full open access within a well-defined time frame, and only as part of transformation mechanisms, such activities may receive financial support. The principles listed above apply to all types of scientific publications, but it is clear that the timeframe for achieving open access for monographs and book sections will be longer and will require a separate and appropriate process. There is also a commitment that, when evaluating research results and making funding decisions, the intrinsic and external value of the work will be evaluated without considering the publication channel, the impact factor (or other journal metrics) or the publisher, together with control of compliance with these principles and sanctioning of non-compliance.
Recommendations for scientist image formation
A group of researchers in the work [58] determined that the formation of a scientist's image is influenced by: the availability of information about the scientist and the academic or scientific organization to which he or she belongs; scientometric indices; the use of global identifiers to increase accuracy in calculating indicators; the scientist's publications in journals with a high impact factor; publications in resources whose visibility in the world information space is ensured; involvement in the world communications system; and level of competence. Important qualification mechanisms for proving the scientific significance of a scientist's work, and trust in its results, are the presence of dissertations, students, monographs and regular publications in specialized scientific journals, impact ratings, scientometric systems, the creation of a scientific school or involvement in such a school, and participation in various projects, programs and scientific conferences, which testifies to the demand for the scientist. It is also important to ensure free access to the scientific and creative heritage of the scientist and his or her school: this forms and supports a high degree of confidence in the scientist's authority among his or her students. In this respect, new opportunities are opening up to increase the status of, and confidence in, the results of research. Under these conditions, a phenomenon such as the formation of personal web pages in the global network has appeared, which makes it possible to disseminate information not only in one's own country but also abroad, in the world scientific community [49]. The publication [49] emphasizes that personal information about scientists on the Internet is now growing, both on personal pages and on the websites of official institutions. It can be successfully used in the system of qualification assessments of the scientific status of scientists and the effectiveness of their research, and of the development of their scientific school or direction. In science there is enough information for objective analysis. The web pages of scientists today most vividly show the development and personification of knowledge; they represent not only the scientist's image but also serve as a source of analysis for predicting the development of science, advanced technologies and areas. Personal web pages of scientists have become an important component of international scientific information systems, which present scientific publications in an organized interface (Google Scholar Citations, ORCID, Microsoft Academic Search, etc.) and provide the ability to enter and identify personal information. Scientists who understand the importance of the citation system today seek to present the results of their scientific activities, and their continuous development, in depth. Bibliographic information, digital copies of publications, annotations of works, links to electronic publications on the Internet, hyperlinks to industry sites, audio files of interviews, texts of lectures and reviews are added [49]. Indeed, today the personal websites and web pages of scientists are a comprehensive source of biographical and bibliographic information, expanding the opportunity to present to the international scientific community information about their professional activities, scientific results and ideas in the form of published and unpublished scientific papers. This greatly helps scientists to create their own image in the scientific space.
We agree with the statement in the publication [49] that a page with information about a scientist, professionally structured and filled in according to established principles, makes it possible to significantly raise the level of qualification assessments during the scientific examination of the scientist's work. This is an important issue in the development of Ukrainian science today, during the discussion of methods of scientific research and of the efficiency of scientific institutions. It is even more important to provide objective assessments of the content of the scientific work of an individual scientist and to study the impact of individual scientific schools on the development of a particular branch.
The work [13] considered the problem of searching for data in social networks. The prospects of using ontological models to implement a semantic approach to processing requests from users of social networks are shown. An ontological model of the social network "Scientists of Ukraine" was built; it is designed to ensure the coordination of the scientific activities of domestic scientists. An algorithm for the semantic search of information according to the developed ontological model is proposed.
The research [59] proposed an approach to the creation and integration of user profile data in scientific social networks and open registers. The application of this approach maximizes the presentation of information about a scientist's publications and research work to the world scientific community. In this way the researcher gains significant opportunities to expand cooperation with domestic and foreign organizations and scientists.
A large-scale study of the visibility of American and European higher education institutions in the scientific social network ResearchGate is described in the publication [31]. The institutional visibility in ResearchGate is strongly related to the number of academic staff. The presence of publications in the Web of Science is a sufficient condition for the presence of an institutional profile in ResearchGate. For higher education and research institutions on the Internet, the ResearchGate score is more closely related to the number of publications than to citation impact; the ResearchGate score should therefore not be used to compare institutions on research quality. ResearchGate has become the most popular website among academic social networks in terms of regular users, but not all institutions have joined, and therefore the assessments given to scientists and institutions are contradictory. The presence of European and US higher education institutions in ResearchGate in 2017 was also assessed, and the impact and quantitative scores of these institutions in ResearchGate were reflected. Most of the 2,258 European and 4,355 US higher education institutions included in the sample had an institutional profile in ResearchGate. For institutions with doctoral programs, the presence in ResearchGate was closely linked to the number of Web of Science publications. Thus, institutional results in ResearchGate reflect the volume of research more than its quality; this figure strongly correlates with the number of Web of Science publications. However, the value of ResearchGate scores for institutional comparisons has some limitations [31]. Therefore, it is also important for scientific and scientific-pedagogical workers to have personal profiles in various scientometric systems and specialized social networks and to use such systems for scientific communication and personal image building.
The publication [17] explored a number of services to determine those that best meet the needs of scientists for the publication, dissemination and use of scientific information resources. The importance of using open electronic systems with international recognition, including electronic libraries, international scientometric systems and open journal systems, for performing scientific work was also emphasized.
The following quantitative and qualitative indicators of the publication activity of degree seekers are used when defending dissertations, awarding academic titles, and certifying graduate and doctoral students: the Hirsch index, the i10-index and others. Thus, in the context of the development of the digital society and the improvement of digital technologies, the training of researchers of the new technological era, and in particular the training of graduate and doctoral students, needs significant updating with the use of digital open systems. We believe that it is important to improve the skills of scientists and future PhDs in the use of digital technologies not only for research but also for building a personal image. Figure 1 schematically shows the benefits of the formation of a scientist's professional image: it influences career growth and the receipt of various scholarships, awards, grants, projects, etc. A scientist's image formation is an important part of his or her scientific career and primarily affects ratings in various scientometric systems and the receipt of various grants, awards, scholarships or additional research funding.
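As a brief illustration of how these two indicators are computed from a researcher's per-publication citation counts (a minimal sketch in Python; the citation counts are hypothetical):

```python
def h_index(citations):
    # Hirsch index: the largest h such that at least h publications
    # have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # i10-index: the number of publications with at least 10 citations.
    return sum(1 for cited in citations if cited >= 10)

# Hypothetical citation counts for one researcher's publications
citations = [25, 17, 12, 9, 7, 4, 1, 0]
print(h_index(citations))    # 5
print(i10_index(citations))  # 3
```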
The use of digital technologies as a means of scientist image formation
The following recommendations, based on our own experience [17], [20], [29] and an analysis of the scientific literature [5], [12], [36], [49], [58], [59], [63], [68], [70], are offered to scientists and future PhDs on the use of digital technologies for personal image building: 1. The digital scientist ID ORCID. It is a unique digital identifier of an author that solves the problem of correctly identifying the documents of a particular author. It is advisable to exchange information between profiles and to import and export bibliographic records between profiles and other resources, using the capabilities of specialized bibliography management systems (Mendeley, EndNote) to save time [58]. The ORCID accounting system provides researchers with two main opportunities: obtaining a unique identifier and monitoring the results of research work; and using the application programming interface for data transfer between different accounting systems and the establishment of the authorship of scientific works in each of them. After registration, a project participant is provided with a unique identifier and a personal profile in the ORCID register. This allows you to control the data of your own research results: they can be entered into the ORCID register; you can edit your personal information, transfer data from one accounting system to another, establish the authorship of scientific papers in each of them, and establish communication with other researchers or organizations. Information about the ORCID should be added when submitting publications and when applying for grants, used in other research processes, and entered into various search engines, scientometric databases and social networks to ensure the link between the scientist's name and the results of his or her research. This will contribute to the improvement of information links at the international level and increase the representation of the results of domestic research in the world scientific space [36].
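For instance, the public ORCID API allows anyone to retrieve a registered researcher's public record programmatically. Below is a minimal sketch in Python; the field names follow the JSON layout of the v3.0 public API, and the iD used is the well-known example record published by ORCID itself:

```python
import requests

def fetch_orcid_record(orcid_id: str) -> dict:
    """Fetch a public ORCID record as JSON from the public ORCID API."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"
    response = requests.get(url, headers={"Accept": "application/json"})
    response.raise_for_status()
    return response.json()

# 0000-0002-1825-0097 is the well-known example iD published by ORCID
record = fetch_orcid_record("0000-0002-1825-0097")
name = record["person"]["name"]  # may be None if the name is set to private
print(name["given-names"]["value"], name["family-name"]["value"])
```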
2. Information about the scientist on the official website of the organization where the researcher works. One type of science personalization is the creation of such pages on the official websites of the institutions related to the scientist's activity. An important attribute of the website pages of official institutions is a link to the full-text resources of institutional repositories, which makes it possible to increase the scientist's visibility in the scientific space and to raise the citation rates of his or her works [49].
Personal pages of scientists should also include the Internet addresses of their personal profiles in scientific information systems (Google Scholar Citations, Microsoft Academic Search, ORCID, Scopus, Publons, etc.). This will allow the citation networks of scientists to be explored and links between scientists in a particular area of research to be established. Thus, the completeness of the presentation and assessment of a scientist's personal contribution depends on many components. Since bibliometric and statistical methods are common in the world scientific environment, the most important principle is to involve a wide source base for the bibliographic identification of personal contributions and the development of scientific schools. Scientists themselves should be interested in presenting such important sources as dissertation abstracts and scientific papers, which are subject primarily to bibliometric analysis. Important elements of the metadata of scientists' personal web pages should also be other forms of the scientist's activity: membership in the editorial boards of journals (titles of journals), the scientific connections of the person (supervisors, opponents, editors, reviewers, students, co-authors), and a list of all publications of the scientist and publications about his or her life and work, with references to electronic versions of the documents [49].
The work [49] emphasized that a scientist's personal page should contain accumulated information about the results of his or her scientific activity (information about publications, reports, patents, etc.). Some of this information can be represented in the form of appropriate references to bibliographic databases. Data stored in the system should be accessible from outside (to experts, colleagues, bibliographic institutions, etc.). This approach ensures the evolutionary, bottom-up development of the scientific space, and existing Internet technologies make it possible to implement this approach today.
For a state that seeks to conquer the information space of science, it is very important to take on the formation of consolidated systems of personal information about scientists. This helps to form an idea of national science and of the achievements of scientific institutions and communities. Examples of such consolidated systems exist in many countries, including Russia ("Russian Scientists") and Belarus ("Belarusian Scientists"), where these systems were created by integrating web pages into the search documentation of scientific libraries. Such systems may contain various information that expands the idea of the scientist's identity. The authoritative file "Scientists of Belarus", maintained by the National Library of Belarus, contains not only a detailed questionnaire of the scientist but also, in addition to the usual biographical and bibliographic data characteristic of biographical systems, information about, for example, scientific dynasties (the presence of relatives with degrees and scientific titles, their personal data), other relevant information, contact telephones, etc. Collected in the authoritative file, this branched and semantically related information provides comprehensive information about research teams, scientific dynasties, the publishing activity of scientists, etc. [49].
Ukraine has also implemented a similar project called "Scientists of Ukraine" [40]. It is one of the information blocks of the complex project "Science of Ukraine: access to knowledge" (http://www.nbuv.gov.ua/node/3565). The "Scientists of Ukraine" system is a register of the scientists of Ukraine, systematized by fields of knowledge, scientific degrees and titles, regions, and departmental and institutional subordination. The system is designed to search for the scientific publications of Ukrainian scientists and is related to electronic library and information resources. Retrospective information, and information about scientists who do not have a degree but have scientific publications, may also be entered in the register. The search and information capabilities of the system make it possible to: find colleagues engaged in a relevant field of research; select lists of scientists by place of work, place of dissertation defense, institution, department or city; view lists of scientists' publications (abstracts, dissertations, books, scientific articles); download the available full texts of scientific publications; receive information on the available information sources of a reference and biographical nature; determine the range of scientists connected by scientific and family ties; view the bibliometric profiles of scientists; and use an automatically created list of co-authors. We are impressed by the appeal stated on the main page of the "Scientists of Ukraine" system, namely "We hope to make the image of Ukrainian science better with your help!".
3. Self-archiving of scientific results (electronic libraries, repositories, etc.). One's own scientific works (articles, monographs, manuals, experimental data, audio recordings of various scientific events, electronic presentations and abstracts, etc.) should be placed (self-archived) in electronic libraries or institutional repositories. "Self-archiving" means the placement by the author of a free copy of an electronic document on the World Wide Web in order to ensure open access to it. Mostly, this term refers to the self-archiving of articles in peer-reviewed scientific journals and conference proceedings, as well as of dissertations and research results, to increase their availability, use and citation. Various electronic libraries have a statistics section, with which you can obtain an operational slice of data on the use of information resources. A scientist can track the dynamics of the use of his or her own scientific works and how often the results of the research attract interest, and thus assess how relevant the problem he or she, or colleagues, are working on is [17].
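Most institutional repositories expose their self-archived records through the standard OAI-PMH protocol, so a scientist can also check programmatically how his or her works are represented. The sketch below issues a ListRecords request in Dublin Core format; the repository base URL is a placeholder to be replaced with the OAI-PMH endpoint of a real repository:

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder: substitute the OAI-PMH base URL of a real repository
BASE_URL = "https://repository.example.edu/oai"
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

def list_record_titles(base_url: str, limit: int = 5):
    """Return the titles of the first records from an OAI-PMH ListRecords response."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    response = requests.get(base_url, params=params, timeout=30)
    response.raise_for_status()
    root = ET.fromstring(response.content)
    return [el.text for el in root.iter(DC_TITLE)][:limit]

for title in list_record_titles(BASE_URL):
    print(title)
```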
4. Personal profiles in scientometric systems (Google Scholar, Scopus, Web of Science and others).
After creating a personal profile in such a system, a scientist can track bibliographic references to his or her publications and view the citations and citation graphs of those publications. Scientometric platforms can be a powerful tool for publishing, disseminating and analyzing the use (citation) of research results. Using these systems, it is possible to carry out a quantitative and qualitative evaluation of the scientific results of individual researchers as well as of research teams or organizations [17]. Indeed, the citation index is a kind of rating scale that determines the quantitative and qualitative contribution of a scientist to science. This criterion is formalized and therefore seems to depend less on subjective influences; still, it cannot be considered the only reliable one. Most experts believe that the citation index is only one of the indicators of the scientific level reached by a scientist [27]. 5. Electronic social and professional networks (ResearchGate, Mendeley, Academia.edu, Facebook, etc.). We believe that electronic social and professional networks, due to the convenience of their tools and services, have become the main channels for quick feedback from the public and for the dissemination of one's own scientific results. For example, there are electronic social networks created specifically for the scientific community, namely ResearchGate, Ukrainian Scientists Worldwide, Computer Science Student Network, LinkedIn, Scientific Social Community, SciPeople and others. The areas of application of electronic social networks by scientists and future PhDs are: self-presentation; searching for scientific material and tracking news about scientific mass events; supporting scientific contacts and organizing thematic groups or pages; and evaluating and monitoring the effectiveness of one's own scientific works.
We will describe some examples of the application of electronic social and scientific networks for the formation of a scientist's image and the performance of research work [17]: 1) to search for scientific material and track news about scientific events. Register in the network, create a personal page, post information about yourself and adjust the settings. You can have personal pages in various social networks; it all depends on your goal: communicating with other scientists, finding scientific contacts for events, etc. Many scientific mass events are held in the world every day (conferences, seminars, round tables, master classes, trainings, etc.) on topics related to your research, and new books and journals are published. Researchers try to disseminate their research results to colleagues by posting links to them or announcing where they can be viewed or downloaded. In scientific social networks you should subscribe to a selected person or thematic page, and the news will be displayed in your news feed. If you are doing research, try to watch the news every day, and you will really know about, and be able to focus on, scientific research on the chosen problem; 2) to maintain scientific contacts, present oneself and organize thematic groups or pages. The information should be comprehensive: mention awards, diplomas and certificates when posting information about yourself on a personal page. Academic social networks are the best space for establishing professional contacts: you can write a message to the author whose publication interests you and ask additional questions. Due to their popularity, electronic social and professional networks can be a good tool, allowing you to use different methods: group work, discussion, the solution and analysis of situational problems, getting advice and more. Also, electronic social and professional networks can exert a significant informational impact, which expands users' awareness and changes their worldview. You can create a thematic group, invite participants to it and together explore a problem, share experiences, present research results, etc.; 3) to conduct certain parts of experimental research (surveys, questionnaires, tests) or to implement a joint project. The functionality of the networks allows you to create closed and open groups. A closed group can include only members defined by the administrator, so it is possible to place the necessary material and information there: texts, videos, images, links, surveys, questionnaires, etc. In addition, there is the opportunity to conduct surveys, have learning outcomes independently evaluated by all users of the group, conduct discussions, share experiences, and simply communicate with like-minded people. We believe that electronic social networks can be a powerful tool for conducting certain aspects of research; 4) to assess and monitor the effectiveness of one's own research. Research should be actively discussed in the process of its implementation, and not only after the publication of the results. A researcher's wish to share his or her own experience in professional networks is a great opportunity to hear feedback about the research. For this purpose you should also use the statistics tools offered in most social networks: the analytical reports received will show which publications attract the most attention and approval, and from which countries users are interested in your posts and publications [29].
We emphasize that a modern scientist should maintain a professional presence in electronic and academic social networks.
6. Approbation of research results (reports, speeches, webinars, videos, participation in scientific events). An important role in the formation of a scientist's image is played by his or her dialogue with the public, both directly during meetings and through the media. This includes participation in public and scientific discussions, open round tables, seminars and press conferences, as well as in such image events as Science Days, exhibitions, festivals, intellectual and scientific games, talk shows on television and others [27]. Figure 2 presents the personal profiles of scientists in various scientometric systems, academic social networks, etc.
We state that today the use of digital technologies is a relevant and necessary measure: the general public will be able to get acquainted with scientific results, which will affect the formation of the image of the scientist and of the institution where the researcher studies or works [17]. Also, more and more often the image of scientists is researched and measured by the rating of their scientific publications in various digital open systems, such as the Science Citation Index.
It should be emphasized that, despite the understanding of the importance of scientists' self-presentation in online social communications, the personalized pages and personal websites of scientists have a number of shortcomings in terms of information completeness. Another disadvantage of presenting personal information about scientists on the web is the multiplicity of personal pages posted on different sites. The main problem here is the need for constant support of the current state of many personal pages on various sites [49].
To solve this problem, there are a number of interconnected digital systems: having created personal profiles in them, scientists can import or export personal data through exchange formats to other scientific information and scientometric systems.
Based on our own experience and on information from the resources https://researchgate.net and https://www.mendeley.com, we will describe some recommendations for improving personal digital profiles: every time you appear in search results, your photo and institution name, along with your name, help other researchers to quickly identify you; the presence of a personal photo in your profile is also a great way to increase visibility in search engines, because it has been shown that profiles with photos are viewed three times more often than those without an author photo; the name of the institution where you work must be kept up to date, because it is displayed next to your name; you can help others understand the importance of your research, and open access to your publications makes you more open to potential colleagues, sponsors and employers; indicate your research interests and the research areas that you work in or that interest you; listing your interests and skills increases your search visibility and helps you find researchers with similar interests; provide detailed information on previous jobs and your experience of participating in various studies, grants, projects, etc.; specify keywords to summarize current and future areas of research; specify your other profiles or export data between your different profiles; and regularly update the information in your digital profiles.
Conclusions and prospects for further research
Currently, the use of digital technologies is effective for conducting, presenting and implementing the results of scientific research in practice. The IT market is constantly improving and new digital technologies are being developed. Mastering them is important for the training of scientists, university professors and future PhDs, because they are the ones who carry out research important for the development of science and education.
Having analyzed the scientific literature and our own experience, we emphasize that the formation of the image of a scientist and future PhD is an important, multifaceted and purposeful process aimed at professional recognition and public activity. Therefore, we recommend the use of digital technologies. The authors identified and substantiated the directions and means of forming the image of a scientist and future PhD in the digital transformation of society. Scientists were also given recommendations for forming their own image using digital systems: 1) create an author's digital ID ORCID; 2) create profiles in various international scientometric systems; 3) update your personal information on the website of the institution where you work, adding hyperlinks to your profiles in scientometric systems and to your digital ID ORCID; 4) use social networks to interact with colleagues, share experience, observe colleagues' reactions to discussions or information on certain issues, invite colleagues to participate in various scientific events, etc.; 5) present your own scientific results in open access by self-archiving scientific publications in electronic libraries; 6) monitor the use of your own scientific publications and identify those that are "popular", etc.
|
2021-05-07T00:03:41.786Z
|
2021-03-01T00:00:00.000
|
{
"year": 2021,
"sha1": "aece4870c5d644804f8285d2c7d41bbc3795a983",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1840/1/012039",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4406765362ec1ea1f77bc5191d866329bf8d2671",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Sociology",
"Physics"
]
}
|
2391979
|
pes2o/s2orc
|
v3-fos-license
|
Selecting score types for longitudinal evaluations: the responsiveness of the Comprehensive Developmental Inventory for Infants and Toddlers in children with developmental disabilities
Objective The objective of this study was to examine the responsiveness of the Comprehensive Developmental Inventory for Infants and Toddlers (CDIIT) in children with developmental disabilities (DD). Methods The responsiveness of a measure is its ability to detect change over time and is fundamental to an outcome measure. We compared the responsiveness of four types of scores (ie, raw scores, developmental ages [DAs], percentile ranks [PRs], and developmental quotients [DQs]) in the five subtests of the CDIIT. The CDIIT was administered three times at intervals of 3 months to 32 children with DD aged between 5 months and 64 months (mean =30.6, standard deviation [SD] =17.8). The CDIIT is a pediatric norm-referenced assessment commonly used for the clinical diagnosis of developmental delays in five developmental areas: cognition, language, motor, social, and self-care skills. The responsiveness was analyzed using three methods: effect size, standardized response mean, and paired t-test. Results The effect size results showed that at the 3-month and 6-month follow-ups, responsiveness was small or moderate in the raw scores and DAs of most of the subtest scores of the CDIIT, but the level of responsiveness varied in the PRs and DQs. The standardized response mean results of the 3-month and 6-month follow-ups showed that most of the subtest scores of the CDIIT had moderate and large responsiveness, respectively, in raw scores and DAs, but the responsiveness varied (from none to large) in PRs and DQs. Conclusion The findings generally support the use of the CDIIT as an outcome measure. We also suggest using the raw scores and DAs when using a norm-referenced pediatric developmental assessment to evaluate developmental changes and program effectiveness in children with DD.
Introduction
Developmental disabilities (DD) are a group of chronic conditions that are attributable to physical and mental impairments during the developmental period. 1 Common examples are intellectual disability, cerebral palsy, and autism spectrum disorder. 2 Children with DD often manifest lifelong disabilities in cognition, language, motor, social, and selfcare skills. 2 According to a new report from the Federal Centers for Disease Control and Prevention, the prevalence of DD is approximately one in six, which means that ~15% of children aged 3-17 years have one or more DD 3 with various degrees of severity and need coordinated services for their special health care, education, and social welfare, such as early intervention and continuing special education. Therefore, a comprehensive measure is warranted to detect the area and extent of developmental delays, to evaluate the effectiveness of the early interventions or education programs, and to predict the prognosis and needs for future health care and services in children with DD.
The Comprehensive Developmental Inventory for Infants and Toddlers (CDIIT) 4 is specifically designed for infants and children aged 3-71 months. It is commonly used to assess five important developmental areas: cognition, language, motor, social, and self-care skills. The CDIIT was designed to be used as a diagnostic and screening test to identify strengths and weaknesses in the five developmental areas and to establish developmental levels. 4 The CDIIT is included among the recommended measures for children with DD in child developmental centers in Taiwan because of its comprehensive coverage of pediatric development, concrete and interesting materials, complete norm establishment, and clinical applicability. The CDIIT has been proved to be psychometrically sound, having good internal consistency, test-retest and interrater reliabilities, construct validity, concurrent validity, predictive validity, and diagnostic accuracy, [4][5][6][7][8][9][10] and may have the potential for use as an outcome measure to assess and monitor developmental skills when children with DD are the subjects of intervention.
Responsiveness is fundamental to an outcome measure for detecting changes over time (its evaluative purpose). 11,12 The responsiveness of a measure is its ability to detect change over time, especially in response to an intervention. 11,13 Therefore, in both clinical practice and research, an outcome measure must have sufficient responsiveness to detect treatment effects. 11,[13][14][15] However, the responsiveness of the CDIIT has yet to be established, so the potential of the CDIIT for use as an outcome measure for evaluating children's development and treatment effects longitudinally is unknown.
The purpose of this study was to examine the responsiveness of the CDIIT longitudinally and thoroughly in children with DD. We compared the responsiveness of the four types of CDIIT scores: the raw scores, developmental ages (DAs), percentile ranks (PRs), and developmental quotient (DQ). The results may serve as a reference in determining which scores to use as outcome indicators of the CDIIT and also as a reference for choosing scores for a norm-referenced pediatric developmental assessment.
Methods

Participants
A total of 32 children with DD aged between 5 months and 64 months were recruited from the Child Development and Assessment Center of the Chi Mei Medical Center in Taiwan between March 2012 and December 2014. These children were receiving early intervention programs at the time of the study. The early intervention program was individually based on the results of each child's individualized assessments, including observation of free play or play in a group, interviews of the caregiver, and standard assessment tools, but not targeted to specific activities in the CDIIT. Written informed consent was given by their primary caregivers, and the Institutional Review Board of the Chi Mei Medical Center approved the protocol for this study.
Measures

Comprehensive Developmental Inventory for Infants and Toddlers
The CDIIT consists of two parts: the diagnostic test (CDIIT-DT) and the screening test (CDIIT-ST). 4,5 Only the CDIIT-DT was used in this study. The CDIIT-DT includes five subtests and a behavior rating scale for assessing a child's developmental capacities and behavioral characteristics in five developmental areas: cognition, language, motor, social, and self-care skills. The cognition subtest assesses a child's mental capacities, including attention; perception; memory; reasoning; and concepts of color, shape, size, and number. The language subtest consists of expression and comprehension subdomains. The motor subtest includes two subdomains: gross motor and fine motor. The gross motor subdomain includes items to assess gravity compensation, locomotion, and body-movement coordination, while the fine motor subdomain includes items for basic hand use and visual-motor coordination. The social subtest has sections on interpersonal communication, affection, personal responsibility, and environmental adaptation. The self-help subtest comprises items about feeding, dressing, and hygiene skills.
Every item on the CDIIT-DT is scored 0 or 1, respectively, indicating whether the child "fails" or "passes" that item. Scores can be assigned based on clinical testing or home observation by the caregivers. In the present study, items in the cognition and motor subtests and part of the language subtest were individually and directly assessed by a trained administrator. The social and self-help subtests were scored by the primary caregivers. Based on the CDIIT manual, the raw score of each subtest and total score can be transformed into three other types of scores: DA, PR, and DQ. Altogether, the four types of scores were obtained for each subtest, for the gross motor and fine motor subdomains, and for the whole test.
As regards the reliability of the CDIIT-DT, the internal consistency, 9 test-retest reliability (intraclass correlation coefficient =0.76-1.00), and interrater reliability (intraclass correlation coefficient =0.76-1.00) of the subtests and composites are good. 5 The CDIIT-DT has good accuracy, similar to that of the Peabody Developmental Motor Scales - Second Edition for motor development evaluation in preschool children. 7 With respect to its validity, the construct validity, concurrent validity, and predictive validity have all been established. The construct validity has been validated with exploratory factor analysis. 8 Regarding the concurrent validity, the scores of the CDIIT subtests have been shown to be significantly and moderately correlated with the scores of the Bayley Scales of Infant Development-II in preterm and full-term infants. 6,10 In addition, the CDIIT also has fairly good predictive validity for diagnostic results and later school performances or special education needs, as measured by the Child Problems Referral Survey and Preschool Children Development Checklist. 16
Procedures
The children were administered the CDIIT three times at intervals of 3 months by trained administrators in clinical settings. The administrators were occupational therapists, physical therapists, speech therapists, and psychologists, all of whom were trained in the standard procedures of the developmental center. Demographic information was collected from the caregivers of the children, and the administrators, children, and caregivers were blinded to the purpose of the study.
statistical analysis
Children's CDIIT raw scores were transformed into DAs, PRs, and DQs according to the norms of normally developing children presented in the original manual. The demographic properties of the participants and the CDIIT scores were then characterized with descriptive analysis. The four types of scores (ie, raw scores, DAs, PRs, and DQs) were used for analyzing the responsiveness.
The responsiveness of the CDIIT was examined with the effect size (ES), standardized response mean (SRM), and paired t-test. All statistical analyses were performed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA).
effect size
The ES, a measure of change, is calculated by dividing the mean difference between baseline and follow-up measurements by the pooled SD of the baseline and follow-up measurements. 17 Values of 0.20, 0.50, and 0.80 indicate small, moderate, and large ES, respectively. 18
standardized response mean
The SRM is the mean difference in the scores of two consecutive measurements divided by the SD of that difference. 19 Thus, the SRM gives an estimate of change in the measure that is standardized relative to the variability of change scores. As with ES, values of 0.20, 0.50, and 0.80, respectively, are considered to show small, moderate, and large responsiveness. 18

Paired t-test

The statistical significance of the change in scores was determined using the paired t-test. 20 The alpha level was set at 0.05.
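To make the three responsiveness statistics concrete, the sketch below computes the ES, the SRM, and the paired t-test for a pair of baseline and follow-up score vectors, following the definitions given above; the scores themselves are hypothetical, and since the study used SPSS, this Python version is only an illustration:

```python
import numpy as np
from scipy import stats

def effect_size(baseline, follow_up):
    # ES: mean change divided by the pooled SD of baseline and follow-up scores
    mean_change = np.mean(follow_up) - np.mean(baseline)
    pooled_sd = np.sqrt((np.var(baseline, ddof=1) + np.var(follow_up, ddof=1)) / 2)
    return mean_change / pooled_sd

def standardized_response_mean(baseline, follow_up):
    # SRM: mean change divided by the SD of the change scores
    change = np.asarray(follow_up) - np.asarray(baseline)
    return np.mean(change) / np.std(change, ddof=1)

# Hypothetical raw scores for one subtest at baseline and 3-month follow-up
baseline = np.array([12, 15, 9, 20, 14, 11, 17, 13])
follow_up = np.array([14, 18, 10, 23, 15, 13, 19, 16])

print(f"ES  = {effect_size(baseline, follow_up):.2f}")
print(f"SRM = {standardized_response_mean(baseline, follow_up):.2f}")

# Paired t-test for the statistical significance of the change (alpha = 0.05)
t_stat, p_value = stats.ttest_rel(follow_up, baseline)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```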
Participant characteristics
A total of 32 children with DD (23 boys and 9 girls) ranging in age from 5 to 64 months (mean: 30.6 months, SD: 17.8 months) and their caregivers participated in the study. The diagnoses of the children consisted of psychomotor retardation (n=18), cerebral palsy (n=6), attention deficit hyperactivity disorder (n=4), Prader-Willi syndrome (n=2), Rubinstein-Taybi syndrome (n=1), and Marfan syndrome (n=1). The characteristics of the 32 children are presented in Table 1. Table 2 presents the mean and SD of the raw scores, the DAs, the PRs, and the DQs for each subtest of the CDIIT.
responsiveness
Table 3 shows the responsiveness of the four types of scores (raw scores, DAs, PRs, and DQs) for each subtest of the CDIIT.
effect size
At 3-month follow-up, all the subtests had small responsiveness in the raw scores, except for the language subtest, which was not responsive. Regarding the DAs, all the subtests had small responsiveness (0.21-0.30). However, in the PRs, only the language and the motor subtests had small responsiveness. The other subtests were not responsive in the PRs. For the DQs, only the motor (0.30) and self-care (0.23) subtests had small responsiveness.
At 6-month follow-up, all the subtests had small responsiveness in the raw scores (0.34-0.47). Regarding the DAs, all the subtests had small responsiveness, but greater than that at 3-month follow-up (0.32-0.45). In the PRs, however, the social and self-care subtests were not responsive (0.04 and 0.19, respectively), whereas the cognition and language subtests had small responsiveness (0.43 and 0.38, respectively) and the motor subtest even had moderate responsiveness (0.66). For the DQs, the social and self-care subtests were not responsive (0.10 and 0.12, respectively), and the other three subtests had small responsiveness (0.24-0.48).
standardized response mean
At 3-month follow-up, with regard to the SRMs of the raw scores of the subtests, the social and self-care subtests had small responsiveness (0.39 and 0.48), the language subtest had moderate responsiveness (0.55), and the cognition and motor subtests had large responsiveness (1.22 and 1.24). For the DAs, all subtests had responsiveness that was moderate or better; the cognition and motor subtests had large responsiveness (1.07 and 1.13) and the other three had moderate responsiveness (0.62-0.78). Regarding the SRMs of the PRs, only the language and motor subtests were responsive (0.34 and 0.36). For the DQs, the cognition, motor, and self-care subtests were responsive (0.20-0.29) and the other two were not (0.17 and 0.19).
At 6-month follow-up, all the SRMs of the raw scores of the subtests indicated extremely large responsiveness (1.38-1.96), except for the social and self-care subtests, which were moderately responsive (0.57 and 0.65). For the DAs, all the subtests had responsiveness that was moderate or better; the cognition, language, and motor subtests had extremely large responsiveness (1.29-1.68) and the other two had moderate responsiveness (0.72 and 0.78). Regarding the SRMs of the PRs, the motor subtest was moderately responsive (0.54) and the cognition and language subtests had small responsiveness (0.49 and 0.35); the other two were not responsive (0.04 and 0.19). For the DQs, the cognition subtest was moderately responsive (0.54) and the language and motor subtests had small responsiveness (0.26 and 0.44); the other two were not responsive (0.12 and 0.14).
Paired t-test
At 3-month follow-up, all the changes in subtest scores were significant (P<0.01) for the raw scores and DAs, but not for the PRs and DQs. Furthermore, at 6-month follow-up, the results were similar to those at 3-month follow-up, but with additional significant changes in the PRs and DQs in the cognition and language subtests.
Discussion
We believe that this is the first study to examine the responsiveness of the CDIIT in children with DD. In this study, the responsiveness of the CDIIT was thoroughly analyzed: the raw scores, DAs, PRs, and DQs were examined with three statistical methods of responsiveness. Regarding the variability of the scores of the initial assessment (ES), the results of the 3-month and 6-month follow-ups showed that most of the subtest scores of the CDIIT had small responsiveness in raw scores and DAs, but the responsiveness varied in PRs and DQs. Regarding the variability of the change scores (SRM), the results of the 3-month and 6-month follow-ups showed that most of the subtest scores of the CDIIT had moderate and large responsiveness, respectively, in raw scores and DAs, but the responsiveness varied (from none to large) in PRs and DQs. These findings about responsiveness support the use of the raw scores and DAs of the CDIIT by clinicians and researchers as an outcome indicator to track change over time and to evaluate program effectiveness and developmental changes for children with DD. Based on the results, both the raw scores and the DAs of the CDIIT are suggested for evaluative purposes because the two types of scores serve different purposes. A raw score represents how many items a child passes (1) or fails (0) in a subtest. The DA refers to a child's level of development within a subtest. 21 Therefore, changes in raw scores reflect the degree to which the child has mastered items of functional skills and behaviors in relation to the results of a previous assessment. On the other hand, a change in DA reflects the degree to which the level of development has changed in the intervening time between repeated assessments. Therefore, both the raw scores and the DAs can be used for different purposes to track changes in children's performance over time, depending on whether the focus is on mastery or on development of skills/behaviors.
In this study, the PRs and DQs were less responsive than the raw scores and DAs, or not responsive at all. Thus, the PR of the CDIIT is not recommended for use as an outcome measure. A PR locates a child's score within the normative sample, indicating the percentage of normative scores that are better than, the same as, or lower than it, which might explain why PRs were less responsive to children's changes. The functional performance and behaviors of the children with DD did improve, possibly due to intervention or normal development, as indicated by the raw scores and DAs. However, these improvements did not surpass those of normally developing children of the same age in the normative sample provided in the CDIIT manual. 4 Because no well-accepted index has been acknowledged for evaluative purposes, 18 especially in the pediatric field, we used three indices to examine the responsiveness of the CDIIT. We found that, in general, the values of ES were smaller than those of SRM. This systematic difference can be ascribed to the different denominators of the formulas. The formula for ES is ES = X_change/SD_pooled; its counterpart for SRM is SRM = X_change/SD_change. The two formulas have the same numerator (X_change, the mean change between baseline and follow-up measurements). The denominators, however, are different: that for ES is the pooled SD of the baseline and follow-up measurements (SD_pooled), while that for SRM is the SD of the change in scores (SD_change). In our study, SD_change was smaller than SD_pooled for every type of score (raw scores, DAs, PRs, and DQs). Thus, in our study, the ES values were smaller than the SRM values. From these observations, it appears that multiple indices should be used to examine the responsiveness of a measure for better interpretation in different contexts. 18 One possibility might explain why the responsiveness of the social and self-care subtests was generally smaller than that of the other three subtests, especially in the PRs and DQs. The social and self-care subtests tap composite skills that are comparably advanced and built on the fundamental component skills in the other three developmental areas (cognition, language, and motor). Children's component skills must first improve and become integrated before their advanced skills can improve. Therefore, children's social and self-care skills are unlikely to improve a great deal in a short period of time (eg, 6 months in this study) along with the other three developmental areas.
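The contrast between the two denominators can be made concrete with a short computation (illustrative paired scores only, not the study's data); because repeated measures on the same children are correlated, SD_change is typically smaller than SD_pooled, so the SRM comes out larger than the ES, mirroring the pattern observed here:

    import math

    # Illustrative paired scores (invented; not the study's data).
    baseline = [40, 35, 50, 42, 38, 45, 47, 36]
    followup = [45, 36, 54, 49, 39, 51, 50, 42]

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    change = [f - b for b, f in zip(baseline, followup)]
    mean_change = sum(change) / len(change)

    sd_pooled = math.sqrt((sd(baseline) ** 2 + sd(followup) ** 2) / 2)
    es = mean_change / sd_pooled    # standardized by the spread of the scores
    srm = mean_change / sd(change)  # standardized by the spread of the changes

    print(round(es, 2), round(srm, 2))  # es ≈ 0.70, srm ≈ 1.80: same change, larger SRM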
Limitations
This study has several limitations. First, the children with DD, who had various diagnoses, were recruited from a single medical center in southern Taiwan, so the representativeness of our sample was limited. Second, we did not examine whether differential responsiveness existed in subgroups with different diagnoses because of the small sample size. The responsiveness of the CDIIT may require further investigation in specific groups of children with DD or with a population-based sample. Third, despite the interval of 3 months between assessments, the possibility of a practice effect cannot be excluded, nor can the possible inflation of the responsiveness of the CDIIT as a result of this effect. Fourth, although the interrater reliability has been examined in the clinical setting, 5 it was not specifically examined in the present study. The fifth limitation is the small sample size of this pilot study. Therefore, additional studies in a larger cohort with equal representation across diagnostic categories need to be carried out to generalize the findings and recommendations.
Conclusion
Our results revealed that the CDIIT was responsive in terms of raw scores and DAs, and they supported the use of the CDIIT as an outcome measure for assessing the developmental areas at intervals of 3 months and 6 months in children with DD. In addition, the raw scores and DAs are suggested for evaluative purposes in norm-referenced pediatric developmental measures. Additional studies with a larger sample size are needed to support our findings.
Teachers' Perceptions about Teaching Multimodal Composition: The Case Study of Korean English Teachers at Secondary Schools
Twenty-first-century literacy is not confined to communication based on the reading and writing of traditional printed texts alone. New kinds of literacies extend to multimedia projects and multimodal texts, which include visual, audio, and technological elements to create meanings. The purpose of this study is to explore how Korean secondary English teachers understand 21st-century literacies and multimodal composition in this era of new types of communication. Framing the study are questions pertaining to what these teachers think about teaching multimodal composition in their writing classrooms. The schools of South Korea, including those in this study, prioritize high-stakes standardized tests, and teachers as well as students and parents gauge success by these test scores. As a result, teachers primarily rely on direct instruction via lectures to provide the skills and knowledge needed to ensure that students will succeed in the high-stakes tests. So while teaching and assessment practices in the classroom still adhere to traditional approaches, ongoing technological change outside school has transformed the ways in which young people – the students – generate, communicate, and negotiate meanings via diverse texts. If the primary goal of education is to teach students lifelong skills needed in society, it is the responsibility of schools and teachers to recognize social changes and promote individual learning needs.
Introduction
Since English became a mandatory school subject in Korea in 1997, the focus has been on teaching grammar (Fouser, 2011). Most class activities have involved translating passages from English into the native language, memorizing vocabulary in isolated contexts, and drilling grammatical rules. Teachers did not need good oral skills in English because explanations were provided in the learners' native language, and the focus was not on facilitating communication in the target language (Brown, 2007). As English has taken hold as an international language, English teachers are encouraged to focus more on teaching communicative abilities (Brown, 2007). Also at this time, the increasing communication among people in different countries via the Internet makes geographic boundaries less significant. As a result of the needs of a fast-changing society, changes in the classroom also are needed, including new insights about "texts, new models of learning, and new national needs" (Myers, 1996). However, in spite of the importance of communicative competence in English education, Korean teachers of English are likely to have far less motivation to teach writing compared to other areas such as reading and listening because of the continuing test-driven orientation and a lack of teacher confidence in teaching writing (Yang & Son, 2009).
While the curriculum does not emphasize writing, students now have more opportunities to read and write English outside of school. Online platforms such as Facebook, Twitter, and blogs enable students to read what others write and to express their own ideas in writing (Vasudevan et al., 2010). Many students who do not show any interest in writing in class participate actively and competently in these types of activities without realizing that they are practicing writing (Park & Selfe, 2011; Witte, 2007). This illustrates the gap between the school curriculum and networked environments where students use English for interacting with people globally. It also points to the need to support the provision of improved writing instruction to students, and to encourage teachers to expand their definition of literacy and to learn ways in which to combine digital technology with traditional writing instruction (NCTE, 2008). The purpose of this study is to examine teachers' perceptions of such changed environments saturated with various modes and to suggest practical guidelines to support enhanced writing instruction.
Literacy and Multiliteracies
According to the definition of literacy approved by the National Council of Teachers of English (NCTE, 2008), "literacy is a collection of cultural and communicative practices shared by a group of people". By such a definition, literacy is unstable, dynamic, and flexible, as it reflects ever-changing social values, attitudes, and interests. Individuals today need a wide range of abilities to respond to ever-changing social needs (Myers, 1996), and multiple and multimodal literacies, using the tools of technology, continue to challenge the traditional form of literacy. As a result, the English Language Arts curriculum must change. As the Internet and digital technology require reconsideration of the definitions of text and writing pedagogy (Froehlich, 2013), new media literacies demand that students master three types of skills: (1) functional skills, which enhance their understanding about managing technology; (2) critical skills, which help them regard digital technology as a tool to understand social and political contexts; and (3) rhetorical skills, which may help them choose the best way to convey their ideas (DeVoss et al., 2010).
The new literacies are not confined to communication through reading and writing using only printed texts. Rather, literacy now includes the use of multimedia and multimodal texts - visual, audio, and technological - to produce all types of products (Grabill, 2005). In other words, multimodal aspects of texts challenge the concept of language (Kress, 2000). Kress (2000) used an example of a science classroom where students were asked to write about and to draw what they had done. They did not simply reproduce what they had learned, but transformed their understanding by using different semiotic modes such as speech, images, and writing.
The New London Group (1996), a group of ten academic researchers, expanded the definitions of literacy and literacy pedagogy by introducing the notion of multiliteracies to demonstrate that modes of representation are far broader than language. The importance of various communication modes may differ, depending on a given cultural context; for example, some cultures put more emphasis on visual or aural modes than on print. Even so, new communication media, with rapidly evolving technologies, have reshaped the ways in which people globally understand and use language today.
The key emphasis of multiliteracies is on encompassing a variety of representational modes as communication channels (Mills, 2009). The verbal or linguistic mode is regarded as one of the integral parts of communication, perhaps even the basic mode, but it is not sufficient to account for multimodal text designs. For example, rather than formal language, computer users generate more spoken-like, informal texts, and even use symbols as new standard terms (Mills, 2009). In response to such a fast-changing textual environment, literacy education supporting multiliteracies attempts to move from a formal, standard, mono-modal mode towards more informal, regional, and multimodal forms of communication (Cope & Kalantzis, 2000). Texts such as emails, websites, and images cannot be overlooked in relation to print literature (Mills, 2009). The New London Group (1996) points out many forms of communication as types of literacies that should be recognized in the classroom.
Multimodality and Writing Instruction in Korea
Visual literacy and communication modes have an impact on educational settings (Kress & van Leeuwen, 1996). Specifically, the multimodal approach is believed to be beneficial to English language learners with limited English in that it helps them engage in multiple reading and writing activities. In other words, shifting modes from visual to verbal or vice versa helps students better understand, appreciate, and interpret complex concepts written in English (Early & Marshall, 2008). Britsch (2009) addressed the importance of nonlinguistic representations as central to English language development. As several researchers (Coggins et al., 2007; Gerlic & Jausovec, 1999) have indicated, interactions of verbal and nonverbal communication are likely to promote understanding of content because of the positive relation between brain activity and the use of nonlinguistic representations.
Another benefit of multimodality is its emphasis on recognizing marginalized voices. This approach is closely related to critical analysis, which allows students to become aware of the political and dominant forms of literacy (Rowsell et al., 2008). By understanding the nature of literacies as being conditioned by the situation in which they develop (Bomer et al., 2010), students gain an insight that literacy extends beyond learning only standard English and print-based representational modes (Mills, 2009).
Multimodality, however, represents a complex set of challenges for Korean teachers and schools. English teachers rarely integrate multiple modes into their writing instruction because of the emphasis being placed on form-focused instruction (e.g., error correction) (Vasilopoulos, 2008). They may provide video clips or images to entertain students between lessons, but in many cases, the integration of multimodality may not be relevant to instruction and does not complement traditional literacies (Han & Kinzer, 2008). The mismatch is explained by an imbalance in teaching content. As noted, despite recognition of the benefits of the Communicative Language Teaching (CLT) approach, the main focus of English education in Korea is still on teaching the more receptive skills such as reading and listening rather than productive skills such as speaking and writing (Monaghan & Saul, 1987). Teachers continue to spend most of their instructional time teaching reading comprehension to prepare students for tests, with writing instruction being pushed to a low-profile position (Kwon, 2003). Teachers' use of multimodal instruction mostly limits writing activities to one-time rewards before or after reading instruction. In a culture where the results of high-stakes tests are of paramount importance, teaching writing that is not grounded in social, cultural, and political contexts cannot connect students to understanding communities beyond printed texts (Shin & Cimasko, 2008).
Teacher Perceptions of and Attitudes towards Multiliteracies and Multimodality
Perceptions are shaped by historical, political, social, and cultural contexts (Holloway, 2012). This is the case with perceptions of literacy, which, according to Gee (2000), is situated in a social context, a view that differs from traditional approaches that regard language as a closed system. The use of language and meaning are closely related to the experiences that people make in the material world (Gee, 2000); therefore, literacy practices are complex social acts whereby participants interact and interpret an occasion (Reder & Davila, 2005).
Students live in a complex environment where the mediation of new literacies enables them to generate their identities (Reder & Davila, 2005). Most of them use new technologies that transform traditional print, form multiple identities, and utilize different forms of expression (Luke, 1998). Students "read" a variety of textual forms such as video games, films, graphics, and visual images on a daily basis (Ajayi, 2011). In order for teachers to maximize students' learning opportunities, Luke (1998) suggests that they have a critical dialog with students about how culture and new media affect them, and how contexts and knowledge continue to change. For instance, Rowsell et al. (2008) emphasize several points that may affect teachers' attitudes and perceptions regarding teaching multiliteracies: 1) students recognize cultural, ethical, and social changes in the classroom; 2) students bring a range of diverse representational resources into the classroom and integrate them to make intercultural texts; 3) teachers recognize linguistic and cultural diversity, and use them as teaching resources; 4) teachers recognize students' different interests, preferences, and dialects, and use them as opportunities to teach and learn; and 5) literacy practices provide chances for negotiating, contesting, and refiguring attitudes and mindsets. Teachers could ensure that students' personal and cultural resources are rooted locally and socially, and that the school is not isolated from their communities (Holloway, 2012).
Teachers' reported perceptions and attitudes about multimodality have been both positive and negative. Antonietti et al. (2006) analyzed the psychological correlates of multimedia computer-supported instructional tools through a questionnaire. This study, which included 272 teachers working in kindergartens and primary and secondary schools, examined "motivational & emotional aspects (attraction, involvement, boredom, and tiredness), activation states (participation and effort), mental abilities (attention, language, and logical reasoning), cognitive benefits and learning benefits (better understanding, memorization, application, and overall view), and metacognition (planning)" (p. 273). Teachers responded that the use of multimedia was positive since it facilitated comprehension, memorization, and learning. They also appreciated multimedia for its association with visual thinking and its ability to provide a global view. However, some factors such as confusion, tiredness, and excessive involvement negatively affected teachers' attitudes towards using multimedia. The outcome of this study was consistent with previous studies in that a new tool helped students achieve their desired goals.
Methodology
This study presents an exploratory overview of Korean English teachers' understanding of multimodal composition and practices in the digital age. In particular, it examined how teachers implement multimodal composition in a school culture that privileges, above all, standardized test scores. The guiding question is as follows: What attitudes and perceptions do Korean secondary English teachers articulate regarding teaching multimodal composition?
Research Sites
Research sites located in a metropolitan area in South Korea were purposefully selected based on several factors, such as the type of school (e.g., a high school for academically focused learners versus a vocational school) and teachers' use of multimodality. The metropolitan area is home to 48 percent of the national population and has the greatest number of schools in South Korea. However, the characteristics of the schools vary greatly within this large area.
For example, schools in more affluent sections may be well equipped with technology to aid English language classes. In schools with smaller class sizes, student-centered activities are more likely to be implemented, compared to conventional classrooms with larger numbers of students, where the average class size is 35 to 40. Also, students in a metropolitan area may have greater exposure to multimodal texts compared to their counterparts in rural areas. Sources of information in the city such as television, text media, and advertisements (even on the street) provide students with more opportunities to access (and produce) multimodal texts. In accordance with increasing interest in multimodal texts, some teachers may have considered the possibility of bringing these literacy activities into the classroom.
Teacher-Participants
Teacher-participants were selected from a pool of teachers who indicated in a demographic survey that they were using multimodality. Specifically, five teachers currently teaching English writing and using multimodality in middle and high schools, but teaching different levels of students and types of curriculum, were selected. This is called "purposeful sampling." Unlike random sampling used in quantitative studies (Rossman & Rallis, 2003), purposeful sampling strategies are employed to collect information from specific participants or processes where the researcher gains a great deal of information about each case (Patton, 2002). In other words, the aim of purposeful sampling is not to generalize but to better understand the phenomenon that the researcher is interested in.
Procedure
Before the study began, all participants were given a survey to complete, including open-ended questions about the teachers' backgrounds. This information helped identify the characteristics of the population and narrowed down the interview questions afterwards (Patton, 2002). The questionnaire items comprised gender, teaching experience, education, and questions about their writing classroom. Most of the data were collected via interviews. An in-depth interview is a useful way to collect rich data because it uses open-ended questions to explore participants' feelings and perspectives (Patton, 2002). In this way, a deeper understanding can be developed as the interviewer and interviewee co-construct meanings (Rossman & Rallis, 2003).
Data Analysis
Pre-interviews with all teacher-participants were used to explore their general attitudes towards and perceptions of multimodal composition. The pre-interviews were more likely to be informal conversations with the subjects, and the interview questions were spontaneously formulated to be specific to the teachers' individual interests and situations. Most data were collected during the semi-structured and structured interviews. Documents collected and the researcher's notes were also used to help the researcher investigate how the teachers used multimodal designs to teach writing effectively, how they perceived multimodal composition, and what multimodal elements they used. An observation checklist was used to collect information during classroom visits and to correlate the interviews with the teachers' actual behaviors. Classroom practices were logged using a scale of 1, 2, and 3, with 1 meaning that a given practice was not observed, 2 that it was rarely observed, and 3 that it was observed most of the time. These data sources, along with the interviews, were considered together to increase credibility and accuracy (Patton, 1996). In other words, multiple sources of data, or multiple perspectives, were used to check and interpret the same event by means of triangulation.
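As a small illustration of how such checklist codes might be tallied across visits (hypothetical practices and values; the study does not describe software for this step):

    from collections import Counter

    # Hypothetical observation-checklist codes per classroom visit:
    # 1 = not observed, 2 = rarely observed, 3 = observed most of the time.
    observations = {
        "uses visual aids": [3, 3, 2],
        "multimodal prewriting activity": [2, 3, 3],
        "verbal-only instruction": [1, 2, 1],
    }

    for practice, codes in observations.items():
        mean_code = sum(codes) / len(codes)
        print(practice, dict(Counter(codes)), "mean =", round(mean_code, 2))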
All data were recorded and immediately transcribed, and translated if necessary. The preliminary analysis began with reading the researcher's notes and verbatim transcripts several times. By using cross-case analysis (across teachers working at different types of schools) and constant comparative analysis, the researcher broke down the raw data and synthesized it to find patterns. During this process, conceptual categories or themes were identified. The process of analyzing data was recursive: the researcher compared data against the data corpus, and constantly returned to the research questions and findings. In this way, each question was answered by using the constant comparative analysis technique (Patton, 1996).

Results
Teaching Multimodal Composition and Affective Engagement
The findings of the current study indicated that the teachers anticipated positive effects of multimodal writing on their students' motivation to write. All the teachers surveyed incorporated multiple modes such as images, video, and music as well as printed texts to engage students on a daily basis. The teachers reported that they were interested in the use of technologies and various texts because traditional methods, which depended mainly on linguistic modes, had little effect on learners' affective engagement. They all knew that outside the classroom students were exposed to new texts and resources, whereas in the traditional classroom paper and pencil were the primary tools used for the purpose of conveying messages. These findings are consistent with those of a number of researchers (Hughes & Narayan, 2009; Thompson, 2008; Vasudevan, Schultz, & Bateman, 2010). Specifically, the teachers' comments revealed their perception that students were more likely to participate actively in collaborative projects and reflective learning practices, demonstrating the essential features of willingness and enthusiasm.
Visual images can be used as a cue to elicit students' responses. Teacher 1 stated that a picture is worth a thousand words. She believed that a picture used as a pre-reading or pre-writing activity was more likely than a verbal explanation to stimulate students' curiosity as well as their imagination about the reading content. In order to help her middle school students better understand the content, Teacher 1 showed a cover of Time magazine titled "The Truth about Tiger Moms," in which an Asian girl is playing the violin facing her mom. She asked students to guess the content of the cover story, using vocabulary they already knew, by looking at the cover photo. Groups of four students brainstormed together for about five minutes, and then wrote how they would feel if they were the young girl in the picture. Teacher 2 reported that her students showed a high degree of attention and commitment in a book project. In what she described as a successful lesson, her high school students were engaged in making their own book, using both writing and drawings, based on a book of their own choosing that they had first read.
Newfield (2011) regarded participation as a necessary process by which learners learn how to think independently and critically articulate their own ideas and feelings. In multimodal composition classrooms, students who take part in the whole process, from prewriting activities to writing to presenting their writing, feel ownership of their final products. Hence, intrinsic motivation is likely to increase in conjunction with ownership and participation. The teacher-participants in this study noted that their students seemed to be more engaged in multimodal lessons compared with traditional lessons, as indicated by the willingness of the students of Teachers 1 and 2 to complete the target tasks set.

All of the teachers in this study used multimodal composition as either prewriting or post-reading activities, rather than as the main method of writing instruction. Also, not all recognized the need to have students communicate across modes in school for a variety of authentic purposes and, therefore, did not provide writing instruction using multimodality. That is, multimodal composition played a secondary role of assisting traditional writing instruction. Despite this secondary role, however, the teachers regarded student engagement as a reason for incorporating multimodality by facilitating connections to students' interests (Cope & Kalantzis, 2000).
Helping Students Understand Content
Multimodal composition reinforces formal education by providing various notions of literacies which help students participate in diverse ways of meaning-making (Jewitt, 2008). Teacher 1 gave her students an assignment in which they had to create vocabulary video clips by using a movie-making program. The students made creative video clips to help their classmates memorize vocabulary in more effective ways. Teacher 1 reported that the students seemed to memorize and retain information for a longer period compared with students using conventional methods of studying vocabulary; and the students themselves said they could connect visual modes (e.g., images) with example sentences and remember meanings better.
Multimodal composition also helped Teacher 2's students understand stories when they revisited texts to make an illustrated book. The students were given an assignment in which they were supposed to re-create Aesop's fables. Before doing so, they broke into groups to discuss the stories, characters, themes, plots, and settings. This process helped them clarify related information because they had to reread and/or discuss the stories. According to Teacher 2, it was a challenge for some students to read the stories written in English, but they shaped and reshaped the content of the stories as they built connections via discussions and representations of knowledge.
Based on their understanding of the texts, students also reinterpreted the author's meaning by adding, deleting, and restructuring the stories using English. She considered that multimodal composition provided additional and instructive strategies for some students, allowing low-performing students not merely to copy stories from the text but to interpret them from their own points of view.
For Teacher 3, tables or graphs are ways to facilitate understanding of the reading content. In a writing activity, tables of pros and cons were visually presented to help his high school students determine their position and write supporting ideas. The sample general statement was "GMOs (genetically modified organisms) should be banned." Tables containing the pros and cons helped the students organize their ideas and evaluate the short- or long-term effects of eating GM food. Teacher 3 said, "When it comes to technology, I rarely use it. Sometimes, I use it when it is necessary. But, you know, I usually teach reading. I only need a textbook and chalk. As you see, tables or graphs can provide effective ways to help students' understanding."
Facilitating Effective Communication
For English language learners, incorporating multimodal composition can mean involving different learning strategies (Ajayi, 2009). Teacher 4 created lesson plans integrating hands-on activities because his vocational high school students' English proficiency and confidence were both quite low. As he stated, "Some of my students quit studying English since they graduated from middle school. They prefer drawing because they don't have an ability to express in English what they think." He allowed students to respond by using diverse modes other than language, instead of asking them to answer verbally. Many students depended on visual modes, such as drawings and photos, as alternative ways of presenting their understanding of texts. Teacher 4 observed that, "It is important to keep students going forward at the beginning stage. They do not care about their English scores. So making them draw is the only way in which I encourage them, because they like it." The English proficiency level of Teacher 5's students in a foreign-language-oriented high school was quite high, and the students' motivation to study was relatively greater than that of peers in other schools. She said that many of the students had lived in foreign countries, and even those who had never traveled were good in English and were not afraid of it. She said that students today, unlike during her school years, could use smartphone messenger applications for recreation and correspondence. They could communicate by writing (texting) without extra charges if they could access the Internet. She added, "I know, they may not communicate in English when sending messages to peers. At least, they write and respond to their friends." She was clear that by doing these literacy activities on their own, whether in Korean or in English, the students could have a positive experience of writing and develop the conviction that writing was enjoyable, easy, and practical.
Both Teachers 4 and 5 had more positive perceptions towards multimodal composition because they believed that it facilitated effective communication by allowing learners to use all the available resources to convey messages. However, they did not recognize the equal importance of each semiotic mode, but considered the nonlinguistic modes as secondary methods to assist language learning. Also, even though they believed that diverse modes could play an important role in fostering communication, their focus was still on practicing the linguistic mode.
Providing Different Modes
In the Korean context, final products, including tests, quizzes, and final papers, may be more highly valued than the process. However, classroom instruction in writing can involve multimodality, which students may be familiar with or become interested in. Tables, graphs, and pictures can be used in the classroom to explain concepts or central aspects of meanings. In digital environments, teachers may also select digital resources and make different choices so that their students can develop various competencies (Chang & Lehman, 2002). For instance, lessons might include:
- summarizing a text by using tables or graphs
- making a story using pictures or photos
- students bringing in belongings and creating a story related to them
- writing a script about oneself before shooting a movie
- writing a summary of a movie or a book
Overt Instruction Using Multimodality
It is believed that explicit instruction can maximize learners' academic growth in that it provides clear explanations and demonstrations in small incremental steps. By supporting learning during each step, teachers can not only help students understand content but also reduce pressure, an overwhelming feeling that students may experience. In addition, teachers select the teaching content in accordance with students' cognitive capabilities and interests and then deliver the content in an efficient manner. Explicit instruction in developing a multimodal composition can include:
- providing an introduction connecting previous lessons to new ones, using visual aids (e.g., PPT)
- presenting instructional goals and explaining them in clear language (e.g., using both verbal and visual modes)
- guided practice, from easier to more difficult tasks (e.g., providing multiple modes in the beginning, but explaining mostly in the verbal mode in the later stages)
- independent practice until students can perform tasks without teacher support (e.g., filling in the summary table)
- giving feedback which is timely, concrete, and appropriate (e.g., verbal or written feedback)
Communicative Language Teaching (CLT)
The interview data showed a strong connection between language learning and multimodal composition in that learners of both are required to be sensitive to literate environments and to understand that language learning is dynamic and situated in social trajectories. With this in mind, the Korean government has emphasized that more resources should be directed toward improving students' communicative competence in English in addition to their learning the written language (Kwon, 2003). In a sense, Communicative Language Teaching (CLT) can provide a useful framework within which to teach multimodal composition, since in CLT learners are encouraged to use both linguistic and nonlinguistic resources to negotiate meanings and to complete the communicative task at hand. In order to facilitate communication, teachers in a multimodal composition classroom can:
- encourage students to use nonlinguistic modes to convey meanings
- focus on content over form
- provide a variety of language inputs represented by multiple modes, which enable students to read and use socially and culturally accepted language
- offer opportunities to develop 21st-century literacy skills
Discussion
All teachers articulated positive attitudes toward teaching multimodal composition, while acknowledging challenges posed by a social and school culture that emphasizes end results instead of process. All teachers answered that multimodal composition allowed students to engage in writing by providing different semiotic resources, which is the basis of effective classroom management. They agreed that, although multimodal composition may not improve students' academic performance directly, it may potentially motivate learners during the stages of prewriting and writing and evoke a deeper understanding of the content being taught. In addition, multimodal composition is more effective for students who do not express themselves in traditional ways. For example, some students may understand certain meanings better than others according to their preference for a certain mode.
Therefore, all teachers in the study have used multimodal composition as one of several teaching strategies by allowing students to respond using diverse modes other than language, instead of accepting responses in the linguistic mode only. Many of the students depended on visual modes, such as drawings and photos, as alternative ways of presenting their understanding of texts. That is, the teachers had more positive perceptions toward multimodal composition because they believed that it facilitated effective communication by allowing learners to use all available resources to convey messages.
Conclusion
Today, digital technologies and information development have altered the nature of communication, from the traditional perspectives of literacy, which were limited to reading and writing, to multiliteracies focusing on local diversity and global connectedness. All the teachers participating in the study agreed that knowledge is constructed not only in printed texts, but also in dynamic texts supported by multiple modes. As Johnson and Smagorinsky (2013) indicated, the nature of multiliteracies is participatory and multimodal; 21st-century learners have more opportunities to read and express ideas more actively online. Students have become more engaged in literacy activities outside the classroom in innovative and significant ways through the use of online tools. Such social, cultural, and literacy practices may have a significant effect on both teachers and students in Korea, where digital technology is evolving rapidly.
There is, therefore, a need for educators in Korea to pay attention to social, cultural, and economic changes so that meaningful learning can occur. Of greatest importance is communication among teachers, students, parents, and administrators in order to understand the relevance, importance, and learning outcomes of multimodal composition. Through ongoing discussions of teaching multimodal composition, for example, teachers may develop a wide range of good options for teaching and learning 21st-century skills. In addition, teachers may find ways in which to balance social needs against a test-oriented school culture by considering the practical use of knowledge and learning goals in relation to students' personal interests.
Table 1. Teacher-participants in this study by gender and school type.
Next-generation sequencing reveals mitogenome diversity in plasma extracellular vesicles from colorectal cancer patients
Background: Recent reports have demonstrated that the entire mitochondrial genome can be secreted in extracellular vesicles (EVs), but the biological attributes of this cell-free mitochondrial DNA (mtDNA) remain insufficiently understood. We used next-generation sequencing to compare plasma EV-derived mtDNA to that of whole blood (WB), peripheral blood mononuclear cells (PBMCs), and formalin-fixed paraffin-embedded (FFPE) tumor tissue from eight rectal cancer patients, and WB and fresh-frozen (FF) tumor tissue from eight colon cancer patients. Methods: Total DNA was isolated, and the mtDNA was then enriched by PCR with either two primer sets generating two long products or multiple primer sets (for the FFPE tumors), prior to sequencing. mtDNA diversity was assessed as the total variant number, level of heteroplasmy (mutant mtDNA copies mixed with wild-type copies), variant distribution within the protein-coding genes, and the predicted functional effect of the variants in the different sample types. Differences between groups were compared by paired Student's t-test or ANOVA with Dunnett's multiple comparison tests when comparing matched samples from patients. The Mann–Whitney U test was used when comparing differences between the cancer types and patient groups. Pearson correlation analysis was performed. Results: In both cancer types, EV mtDNA presented twice as many variants and had significantly more low-level heteroplasmy than WB mtDNA. The EV mtDNA variants were clustered in the coding regions, and the proportion of EV mtDNA variants that were missense mutations (i.e., estimated to moderately affect the mitochondrial protein function) was significantly higher than in WB and tumor tissues. Nonsense mutations (i.e., estimated to highly affect the mitochondrial protein function) were only observed in the tumor tissues and EVs. Conclusion: Taken together, plasma EV mtDNA in CRC patients exhibits a high degree of diversity. Trial registration: ClinicalTrials.gov: NCT01816607. Registered 22 March 2013. Supplementary Information: The online version contains supplementary material available at 10.1186/s12885-023-11092-x.
Background
Colorectal cancer (CRC) is the third most common cancer type worldwide [1]. It is a heterogeneous disease in terms of high biological complexity and clinical outcome. Extracellular vesicles (EVs) are known to contribute to tumorigenesis, progression, and drug resistance in CRC [2] and may be important CRC biomarkers [3]. Mutations in mitochondrial genes have been reported to have a role in cancer development [4]. Variations in the mitochondrial DNA (mtDNA) sequence can act as functional adaptors allowing tumor and immune cells to adjust to the metabolic needs imposed by various tissue environments during cancer progression [5]. Recent reports have demonstrated that the entire mitochondrial genome can be packed inside EVs [6,7] and restore metabolic activity in cells with impaired metabolism [6].
The mitochondrial genome is a 16.5-kilobase circular double-stranded DNA molecule present in multiple copies per cell. It contains 37 genes that encode 13 protein subunits of the mitochondrial respiratory chain/oxidative phosphorylation system, two rRNAs, and 22 tRNAs for mitochondrial translation [8]. The mtDNA replication is independent of the cell cycle and also occurs in postmitotic cells. Because the mutation frequency of replicating mtDNA is high, mutant mtDNA copies are often mixed with wild-type copies in the cell (termed heteroplasmy). The mtDNA polymorphisms may alter mitochondrial function, particularly in tissues that are highly dependent on the metabolism. Nevertheless, if a mutation is pathogenic, the cell can often tolerate a certain proportion of the mtDNA variant before the biochemical threshold is exceeded with resulting metabolic defects [8].
Cell-derived mitochondrial components, besides mtDNA, have been found in the extracellular space [9]. For example, secreted cell-free respiratory-competent mitochondria have been detected in blood [10] and EVs have been shown to contain functional mitochondria [11] and be enriched in mitochondrial proteins [12]. In addition, a novel EV population of mitochondrial origin, mitovesicles, was described by D'Acunzo et al. [13].
Despite the increasing interest in EV mitochondrial components, the characteristics of secreted cell-free mtDNA still remain insufficiently understood. Here, we present a method for the successful isolation and sequencing of the full mitochondrial genome from whole blood (WB), peripheral blood mononuclear cells (PBMCs), plasma EVs, and tumor tissue from CRC patients, as an initial investigation of the potential use of EVs as a source of cell-free mtDNA and their potential as a CRC biomarker. We have further analyzed mtDNA diversity by assessing the total variant number, level of heteroplasmy, variant distribution within the protein-coding genes, and the predicted functional effect of the variants in the different sample types.
Patients and procedures
The rectal cancer patients were enrolled in a prospective biomarker study (ClinicalTrials.gov: NCT01816607) conducted at Akershus University Hospital (Lørenskog, Norway) and the colon cancer patients participated in a prospectively maintained CRC database and ancillary biobank at Southern Hospital Trust (Kristiansand, Norway). All patients had histologically verified colon or rectal adenocarcinoma without metastatic disease at the time of diagnosis, but the rectal cancer patients presented tumor manifestations within the pelvic cavity that were considered at high risk of disease recurrence and were consequently given chemoradiotherapy before the surgical procedure. All patients received curative-intent treatment according to prevailing national guidelines. The patient and disease characteristics are shown in Supplementary Table S1. The total group of patients was representative of the distribution of men and women affected by CRC.
Preparation of patient samples
In this study, we included various biospecimens from CRC patients. Each of eight rectal cancer patients provided WB, PBMCs, citrate plasma, and formalin-fixed paraffin-embedded (FFPE) tumor tissue that were sampled at the time of diagnosis and stored for a median of 23 (range, 10-37), 47 (range, 34-62), 50 (range, , and 53 (range, 40-69) months, respectively. Each of eight colon cancer patients provided WB at the time of diagnosis (stored for a median of 78 (range, 68-85) months) and fresh-frozen (FF) tumor tissue sampled within an hour of resection according to a rigorous standard operating procedure (stored for a median of 75 (range, 68-83) months from the surgical resection).
The tumor tissues were cut in 30-μm sections of 25-100 mm2 tissue with at least 20% tumor cells, as determined by an experienced specialist in gastrointestinal pathology, prior to analysis. The WB was collected by venipuncture in sodium citrate-treated BD Vacutainer CPT tubes (Becton, Dickinson and Company, Franklin Lakes, NJ, USA) for preparation of PBMCs and PAXgene RNA tubes (PreAnalytiX GmbH, Hombrechtikon, Switzerland), the latter stored at -80 °C until analysis. Citrate plasma samples were prepared by centrifugation at 2,000 g for 10 min, and aliquots were stored at -80 °C. The PBMC specimens were prepared from 6-8 ml of WB by centrifugation with a horizontal rotor centrifuge at 1,500 g for 20 min. The buffy coat layer was transferred to a fresh 15-ml tube, resuspended and washed twice in phosphate-buffered saline (Gibco by Life Technologies, Paisley, UK) with centrifugations at 300 g for 15 and 10 min. The mononuclear cells were thereafter resuspended in RPMI-1640 medium (Gibco) supplemented with 10% dimethyl sulfoxide (Sigma-Aldrich, Saint Louis, MO, USA) and immediately frozen at -150 °C. Prior to DNA extraction, 150 μl of thawed PBMC preparations or PAXgene WB samples were transferred to microcentrifuge tubes and centrifuged at 5,000 g for 10 min before the supernatants were carefully removed.
Isolation and characterization of EVs
EVs (heterogeneous populations of small and medium-sized vesicles) were isolated from 100 μl plasma using qEV Single Size Exclusion Chromatography Columns (IZON Science, Oxford, UK). The columns were equilibrated with 10 ml of 0.20-μm-filtered phosphate-buffered saline. EVs were isolated after 1 ml void volume as 250-μl fractions, and the eluted fractions number 5 and 6 were combined. All samples were stored at -80 °C. The size and concentration of the vesicles were determined by Nanoparticle Tracking Analysis (NTA; Malvern, Amesbury, UK). Here, three 60-s videos were captured for each sample (slide shutter 1206 or 1259, slider gain 366) and the videos were analyzed by the NTA 3.4 software (Malvern). For morphological examination and detection of EV-associated proteins, EVs from one of the patient samples were analyzed with transmission electron microscopy (TEM) and western blot. Formvar/carbon-supported 100-mesh hexagonal copper grids (Electron Microscopy Sciences, Hatfield, PA, USA) were placed on top of a 5-μl drop of the EV sample for 5 min. The grids were washed three times with distilled H2O before incubation with 2% methylcellulose (Sigma) containing 0.3% uranyl acetate (Electron Microscopy Sciences) for 10 min on ice. Surplus methylcellulose-uranyl acetate was removed using a filter paper and the grids were air-dried before examination using a Tecnai G2 Spirit TEM (FEI, Eindhoven, The Netherlands) equipped with a Morada digital camera using RADIUS imaging software. Images were processed using Adobe Photoshop. Prior to the western blot analysis, 500 μl of the EV solution was concentrated using a Vivaspin® 500 10K centrifugal concentrator (Sartorius Stedim Lab, Stonehouse, UK) and lysed in M-PER® Mammalian Protein Extraction Reagent supplemented with Halt™ Protease Inhibitor Cocktail and Halt™ Phosphatase Inhibitor Cocktail (all from Thermo Fisher Scientific, Waltham, MA, USA). For detection of CD9 and CD63, non-reducing conditions were used. 10 μg protein from EVs and 5 μg from HCT116 cells (a CRC cell line, as positive control) were separated on NuPAGE Bis-Tris gels (Novex by Life Technologies, Carlsbad, CA, USA) and transferred to Immobilon-P membranes (Millipore Corporation, Billerica, MA, USA). The primary antibodies were anti-CD9 (Ts9, 1:500) and anti-CD63 (Ts63, 1:500; both from Thermo Fisher Scientific), anti-ALIX (3A9, 1:500; Abcam, Cambridge, UK), anti-APOA1 (B-10, 1:1000; Santa Cruz Biotechnology, Heidelberg, Germany), and anti-GM130 XP (D6B1, 1:1000; Cell Signaling Technology, La Jolla, CA, USA). Secondary antibodies were from Dako Denmark AS (Glostrup, Denmark). Peroxidase activity was visualized using SuperSignal West Dura Extended Duration Substrate (Thermo Fisher Scientific) and the membranes were scanned with the ImageQuant LAS 3000 system (FujiFilm, Tokyo, Japan). Positive bands were analyzed using Fujifilm Multi Gauge V3.1, and the images of all full-length blots are provided in the Supplementary Information file. All relevant data of our experiments have been submitted to the EV-TRACK knowledgebase (EV-TRACK ID: EV210384).
DNase treatment of EVs
The samples and reagents were thawed on ice and 200 μl of sample was first incubated with 20 μl DNase (DNase I Amplification Grade; Sigma-Aldrich) at 37 °C. After 30 min, 40 μl Proteinase K (Qiagen, Hilden, Germany) was added and the samples were further incubated at 37 °C for 30 min before 20 μl stop solution was added and the samples were incubated at 70 °C for 10 min. The samples were then put on ice and stored at -80 °C. Specifically for the evaluation of contaminating DNA from outside of the EVs, samples from two patients were pooled before 100-μl aliquots were incubated with or without DNase in triplicate.
DNA isolation
QIAamp DNA FFPE Tissue Kit and DNeasy Blood & Tissue Kit (Qiagen) were used to extract DNA from FFPE tissues and the other tissues, respectively, according to the manufacturer's protocols. To increase the DNA yield from the EV samples, an additional spin with open tubes was performed prior to DNA elution, and the samples were eluted with water preheated to 70 °C. For all samples, DNA was quantified using a NanoDrop ND-1000 Spectrophotometer and the Qubit 2.0 Fluorometer in combination with the Qubit dsDNA HS Assay Kit (all from Thermo Fisher Scientific).
mtDNA sequencing
For the WB, PBMC, EV, and FF tumor specimens, the mtDNA was amplified using two pairs of site-specific primers (forward: MTL-F1 5'-AAA GCA CAT ACC AAG GCC AC-3' and MTL-F2 5'-TAT CCG CCA TCC CAT ACA TT-3'; reverse: MTL-R1 5'-TTG GCT CTC CTT GCA AAG TT-3' and MTL-R2 5'-AAT GTT GAG CCG TAG ATG CC-3') and TaKaRa LA Taq DNA polymerase (TaKaRa-Bio, Kusatsu, Japan) to generate two long fragments spanning the complete mitochondrial genome. The two primer pairs failed to amplify the mtDNA in FFPE samples, so we developed an mtDNA amplification procedure based on 21 primer sets that produced overlapping PCR products to generate the complete mitochondrial genome from FFPE tissue DNA. Of these, 12 primer sets (Supplementary Table S2) have been published by Levin et al. [14] and 9 primer sets (Supplementary Table S3) were combinations of the published primers. The library preparation followed the protocols suggested for the Human mtDNA Genome on the Illumina sequencing platform (Illumina, Inc., San Diego, CA), with the first steps of the protocol adjusted. The master mixes were divided into 4 PCR tubes per sample, and a temperature gradient (51-68 °C) was used during the first amplification. The DNA was subsequently purified using gel electrophoresis, and bands representing the circular mtDNA amplicons (9.1 kilobases and 11.2 kilobases) or the short PCR products were cut out from the gel. Extraction and quantification of mtDNA were performed using the QIAEX II Gel Extraction Kit (Qiagen) and the Qubit dsDNA HS Assay (Thermo Fisher Scientific). Successful long-range PCRs were represented by a bright band of the expected size. The amplicons were pooled and libraries were generated using the Nextera XT DNA Library Preparation Kit and Nextera XT Index Kit (both Illumina). AMPure XP beads (Beckman Coulter, Brea, CA, USA) were used to purify the DNA library and provide a size-selection step to remove short library fragments. Bioanalyzer-based normalization was performed using the Agilent High-Sensitivity DNA Kit (Agilent Technologies, Waldbronn, Germany), and the libraries were pooled and sequenced on a MiSeq Benchtop Sequencer (Illumina) using a MiSeq Reagent Kit v3 (Illumina) with 2 × 300-basepair read lengths.
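As a simple arithmetic cross-check (ours, not part of the protocol), the two long-range amplicons must jointly over-cover the 16,569-bp circular mitochondrial genome; a minimal Python sketch, with the amplicon sizes taken from the text:

```python
# Sanity check: two long-range amplicons tiling the circular human
# mitochondrial genome (rCRS, 16,569 bp). Sizes are from the text;
# how the overlap is split between the two junctions is not recoverable here.
MT_GENOME_BP = 16_569
amplicons_bp = [9_100, 11_200]  # the two long-range PCR products

total = sum(amplicons_bp)
overlap = total - MT_GENOME_BP  # combined overlap across both junctions
assert overlap > 0, "amplicons would leave a gap in the circle"
print(f"combined amplicon length: {total} bp; total junction overlap: {overlap} bp")
```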
mtDNA variant analyses
All sequence data generated were mapped to the revised Cambridge Reference Sequence (GenBank ID NC_012920.1) [15,16] using the MiSeq Reporter built-in software v2.6 (Illumina). This software applies a Burrows-Wheeler Aligner [17] and generates BAM alignment files. Mutserve via the mtDNA-Server (https://mtdna-server.uibk.ac.at) [18] was used for variant calling and annotation with default parameters and filter settings: Minimum Base Call Quality Score for a Call (< 30), Indel Repeat Length (> 8), and Low Variant Frequency (< 0.010). Only variants with a final filter pass were included for downstream analysis. This variant caller has various internal quality controls and was shown to have the best performance compared to other variant callers with regard to evaluating heteroplasmy [19]. Variant frequencies > 0.990 were defined as homoplasmy, while heteroplasmy was defined by frequencies of 0.10-0.990 and low-level heteroplasmy by < 0.10. Variants flagged by Mutserve as previously reported nuclear mitochondrial DNA segments (NUMTs) were identified. Haplocheck v1.3.2 was used to detect contamination in the mtDNA samples [20] and Haplogrep v2.3.0 for haplogroup classification [21] through the mtDNA-Server. The Ensembl Variant Effect Predictor software was used with default parameters to predict the potential role of the variants [22]. An mtDNA circular plot was made in Geneious (v2023.0).
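The frequency cut-offs above amount to a simple thresholding rule; the following minimal sketch (function name and structure are ours) makes the classification explicit:

```python
def classify_variant(freq: float) -> str:
    """Classify an mtDNA variant by allele frequency using the thresholds
    stated above: >0.990 homoplasmy, 0.10-0.990 heteroplasmy,
    <0.10 low-level heteroplasmy."""
    if freq > 0.990:
        return "homoplasmy"
    if freq >= 0.10:
        return "heteroplasmy"
    return "low-level heteroplasmy"

# Usage example with illustrative frequencies
for f in (0.995, 0.42, 0.03):
    print(f, "->", classify_variant(f))
```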
Quantification of mtDNA damage
This assay relies on the ability of a modification on the template DNA to inhibit restriction enzyme cleavage, as detailed previously [23]. Total DNA from the DNase-treated and non-treated pooled patient samples was analyzed with droplet digital PCR. A sequence flanking a TaqI restriction enzyme site in the 12S ribosomal RNA gene (MT-RNR1) was amplified using the forward (5′-AAA CTG CTC GCC AGA ACA CT-3′) and reverse (5′-CAT GGG CTA CAC CTT GAC CT-3′) primers in the absence and presence of the enzyme. The samples were partitioned by the QX200 Droplet Generator (Bio-Rad Laboratories, Oslo, Norway) and analyzed with the QX200 Droplet Reader (Bio-Rad Laboratories). The data were given as the percentage of non-digested mtDNA [(mtDNA copies per μl with TaqI / mtDNA copies per μl without TaqI) × 100].
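To make the read-out concrete, a minimal sketch of the percentage calculation follows; the droplet counts are hypothetical, and the ratio form follows our reconstruction of the formula above:

```python
def percent_non_digested(copies_with_taqi: float, copies_without_taqi: float) -> float:
    """Percentage of mtDNA templates resistant to TaqI cleavage.

    Damaged/modified templates block restriction digestion, so the
    non-digested fraction is the copy number measured in the presence
    of TaqI relative to the undigested control.
    """
    return copies_with_taqi / copies_without_taqi * 100.0

# Hypothetical droplet-reader outputs (copies/µl), for illustration only
print(f"{percent_non_digested(12.4, 1480.0):.2f} % non-digested")
```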
Statistical considerations
Analyses were performed using GraphPad Prism v9.2.0. Differences between groups were compared by paired Student's t-test, or by Repeated-Measures ANOVA with Dunnett's multiple comparison tests when comparing more than two groups.
mtDNA variant number and heteroplasmy - rectal cancer patients
The median coverage depth was 15,237×. After processing the sequences with adequate quality scores (Q30, median of 89.9%), the proportions of aligned sequence reads for WB, PBMCs, EVs, and FFPE tumor tissue were 98.7%, 99.7%, 99.8%, and 99.8%, respectively. The ratio of transversions to transitions and the GC content across the mitochondrial genomes were comparable for all four tissue specimens from the eight rectal cancer patients (Supplementary Table S4). As expected [24], the total number of variants was similar in WB and PBMCs, with medians of 41.0 (range, 31-107) and 36.5 (range, 27-58), respectively (Fig. 1a). When comparing all tissue types, differences were detected (Repeated-Measures ANOVA: sample types (column) p < 0.0001 and patients (row) p = 0.71).
Characterization of plasma EVs by NTA showed a median concentration of 1.85 × 10^9 (range, 1.5 × 10^9-2.1 × 10^10) particles/ml and a median mode size of 124.3 (range, 102.0-255.9) nm (Fig. 2a, b). The NTA histogram (Fig. 2c), western blot (Fig. 2d; full-length blots are presented in Supplementary Figure S1), and TEM images (Fig. 2e) from the selected patient sample confirmed vesicles of various sizes with the expected cup shape and the presence of the expected EV proteins (CD63, CD9, ALIX). The heterogeneous EV sample showed absence of the Golgi apparatus contamination marker (GM130), but APOA1, a protein found in high-density lipoproteins, was detected. The samples were pre-treated with DNase and Proteinase K to eliminate contaminating molecules adherent to the EV surface or present in plasma, with a significant reduction (19%) in total DNA concentration (Supplementary Figure S2a; paired t-test: p = 0.0005). To examine whether the DNase treatment might artificially generate new mtDNA variants, samples treated with and without DNase were analyzed for damage in MT-RNR1, with similar damage levels (Supplementary Figure S2b; paired t-test: p = 0.49). For the total rectal cancer cohort, plasma EVs presented twice as many mtDNA variants as WB and PBMCs, with a median number of 113.0 (range, 70-224; Dunnett's multiple comparison: p = 0.046 and p = 0.020, respectively; Fig. 1a).
The FFPE tumors showed a significantly higher mtDNA variant number than the other tissues, with a median of 327.5 (range, 167-391; Dunnett's multiple comparison: p < 0.0001; Fig. 1a), corresponding to approximately sevenfold the WB number. This finding was not unexpected, as the FFPE-derived DNA was fragmented and the mtDNA genome was amplified in a small-amplicon format that could possibly introduce false positives. The fixation process impacts the quality of the DNA with undesirable modifications such as deamination, which introduces C:G > T:A mutations, and these substitutions were associated with the total variant number (TVN) (Pearson correlation: r = 0.93, p = 0.0009); hence, the FFPE data must be considered with care because of the different pre-processing protocols. Both FFPE tumor tissue and EVs had considerable numbers of private variants not detected in WB or PBMCs. In total, 104 variants (4.8%) were shared among all the sample types from the rectal cancer patients, and the EVs and FFPE tumors had the greatest overlap in variants of any two sample types (Fig. 1c). The mtDNA variants were also analyzed for the level of heteroplasmy, as an initial investigation into the diversity of the mitochondrial genomes in the various tissues (Fig. 1e). As shown in Fig. 1g, low-level heteroplasmy (< 0.10) mtDNA variants were more frequent in the EVs than in WB and PBMCs (Repeated-Measures ANOVA: sample types p < 0.0001 and patients p = 0.84; Dunnett's multiple comparison: p = 0.042 and p = 0.017, respectively), whereas FFPE tumor samples had an increased number of low-level heteroplasmy variants compared to EVs (Dunnett's multiple comparison: p < 0.0001).
mtDNA variant numbers and heteroplasmy - colon cancer patients
We sequenced WB, EV, and FF tumor samples (Q30, median of 88.4%) available from eight colon cancer patients to investigate further whether EVs and tumor tissue hold increased mtDNA variants. Here, all biospecimens were rapidly frozen, circumventing the effects of artificial mutations induced by formalin fixation. The median sequencing coverage depth was 12,289×, and the proportions of reads mapping to the reference mitochondrial genome for WB, EVs, and FF tumor tissue were 99.3%, 99.7%, and 99.5%, respectively. The ratio of transversions to transitions and the GC content were comparable for WB, EVs, and FF tumors (Supplementary Table S4). When comparing the various tissue types, differences were detected between groups (Repeated-Measures ANOVA: sample types p = 0.0061 and patients p = 0.24). The colon cancer plasma EVs had a median concentration of 8.03 × 10^9 (range, 1.3 × 10^9-3.0 × 10^10) particles/ml and a median mode size of 119.3 (range, 105.4-165.5) nm (Fig. 2a, b). We could verify an increased total number of mtDNA variants in EVs (median 86.5; range, 54-202) compared to WB (median 58; range, 34-115) (Dunnett's multiple comparison: p = 0.021) as well as to FF tumor tissue (Dunnett's multiple comparison: p = 0.0047; Fig. 1b). In total, 125 variants (16.3%) were shared among all sample types from the colon cancer patients, and although WB and FF tumor (median 51; range, 53-66) had similar total variant numbers, tissue-specific features appeared (Fig. 1d). As also shown in Fig. 1d, similar to the rectal cancer patients, the EVs contained extensive exclusive variants, and the EVs overlapped with the tumor tissue to a higher degree than WB did. All EV mtDNA sequencing data were used to further emphasize the full mtDNA genome present inside the vesicles, represented by a circular plot (Supplementary Figure S3). Figure 1f shows the colon cancer patients' mtDNA variants represented as homoplasmic, heteroplasmic, and low-level heteroplasmic states. Low-level heteroplasmy variants were abundant in the plasma EVs compared to WB and FF tumor (Repeated-Measures ANOVA: sample types p = 0.0066 and patients p = 0.53; Dunnett's multiple comparison: p = 0.033 and p = 0.0044, respectively; Fig. 1h).
Sequencing the complete mitochondrial genome in FFPE tissues necessitated multiple PCR primer sets for mtDNA amplification because the long-range PCR did not amplify successfully. In order to investigate whether the different mtDNA amplification methods could explain the differences in variant number, the multi-primer method was applied to WB samples and the results were compared with the original (two primer pairs) sequence data from three of the colon cancer patients. The multi-primer method (the technical quality of the sequence reads is shown in Supplementary Table S4) yielded an approximately tenfold increase in variant number, to a median of 612 (range, 562-721) from 60 (range, 53-66), pointing to a bias introduced by the use of multiple primers (Supplementary Figure S4; paired t-test: p = 0.0057). Amplifying the mtDNA genome in a small-amplicon format can increase the risk of involving NUMT segments that can be misinterpreted as mtDNA heteroplasmy. Indeed, a considerably increased risk of NUMT co-amplification with the multi-primer approach was observed for the WB samples (Supplementary Table S5). The FFPE tissue had an increased chance of co-amplified NUMTs compared to the other samples and also indicated cross-contamination (Supplementary Table S5), making FF tumor tissue more appropriate for analysis of the full mitochondrial genome.
Distribution of variants within the mitochondrial genome
Among the mtDNA variants detected in the rectal cancer patient samples, 39.6% in WB, 40.0% in PBMCs, 54.0% in plasma EVs, and 52.0% in FFPE tumors were distributed along protein-coding regions, with the remainder in non-coding regions (D-loop, rRNA, tRNA, intergenic regions). In the colon cancer patient samples, the proportions were 42.7% of the WB variants, 48.4% in EVs, and 47.8% in FF tumor. Normal-cell heteroplasmies tend to cluster within the non-coding D-loop, whereas tumor-specific somatic mutations are more evenly dispersed across both coding and non-coding regions [25]. Interestingly, for both cancer types, the ratio of coding-region variants to D-loop variants was highest in EVs (4.2 for rectal cancer, 3.8 for colon cancer) and higher in tumor tissue (2.3 for both the FFPE rectal cancer specimens and the FF colon cancer specimens) than in WB and PBMCs (1.6 for the rectal cancer samples, 1.8 for the colon cancer samples), suggesting that plasma EVs carried molecular information pointing towards the acquisition of functional variants in their mtDNA.
We found differences between the sample types and the various regions of the mtDNA in both the rectal (2-way ANOVA: regions p < 0.0001 and sample types p < 0.0001) and colon (2-way ANOVA: regions p < 0.0001 and sample types p = 0.038) cancer patients (Fig. 3). For the 13 mitochondrial genes, the majority showed quite similar numbers in WB and PBMCs and higher numbers in EVs and FFPE tumors from rectal cancer patients (Fig. 3a). In the colon cancer specimens (Fig. 3b), the variant counts identified MT-ND4 (NADH dehydrogenase, subunit 4 of complex I) as the most affected gene. Based on mutations per kilobase (Fig. 3a, b), MT-ND5 and MT-ND4 were also the most affected in WB and FFPE tissue samples, respectively, from the rectal cancer patients. Generally, in plasma EVs, MT-ND1 (NADH dehydrogenase, subunit 1 of complex I) and MT-CO1 (cytochrome c oxidase, subunit 1 of complex IV) had the most mutations per kilobase.
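The per-kilobase comparison above is a straightforward length normalization; a minimal sketch with standard rCRS gene lengths and placeholder variant counts (not values from the study):

```python
# Variants per kilobase for selected mitochondrial genes.
# Gene lengths (bp) follow the standard rCRS annotation; the variant
# counts are illustrative placeholders only.
gene_length_bp = {"MT-ND1": 956, "MT-ND4": 1378, "MT-ND5": 1812, "MT-CO1": 1542}
variant_counts = {"MT-ND1": 9, "MT-ND4": 11, "MT-ND5": 15, "MT-CO1": 12}

per_kb = {g: variant_counts[g] / (gene_length_bp[g] / 1000) for g in gene_length_bp}
for gene, rate in sorted(per_kb.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: {rate:.1f} variants/kb")
```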
Predicted variant effects on the protein structure
Finally, we determined the potential consequences of the variants in the coding mtDNA sequences (Fig. 4a, b) to explore whether protein structures might be affected. The tumor tissues and plasma EVs contained nonsense mutations that cause premature stop codons, estimated to have a high effect on protein function (disruptive, probably causing protein truncation, loss of function, or triggering nonsense-mediated decay). These were not detected in WB and PBMCs. In addition, the EVs were more abundant in missense mutations (Fig. 4c, d).
Biological relevance of circulating mtDNA
We further investigated whether circulating mtDNA characteristics were dependent on the patients' TN-status (Supplementary Figure S5). A higher degree of low-level heteroplasmy (< 0.10) mtDNA variants was observed in EVs from patients with lymph node metastasis (N1-2; Mann-Whitney U test: p = 0.046). Taken together, the EV mtDNA pointed towards a complex composition of secreted mtDNA with more low-level heteroplasmy and variants with potential impact on the transcripts.
Discussion
In this study, we successfully isolated and sequenced, for the first time, the mtDNA cargo of plasma EVs for comparison with the mtDNA in WB, PBMCs, and tumor tissue from CRC patients. The EVs were abundant in mtDNA with a more complex composition, including a higher degree of low-level heteroplasmy, compared to WB as reference. The EVs had numerous private mtDNA variants not detected in WB, PBMCs, or tumor tissue. The variants clustered in the coding regions, forming mutations with impact on the transcripts. Our data also highlight the possibility of analyzing the full mitochondrial genome of FFPE tumor tissue, but the technical requirements implied that FF tumor tissue was more expedient for the purpose. In both cases, a higher overlap of variants was detected between EVs and tumor tissue than between WB and tumor tissue, suggesting that circulating EV mtDNA could be interesting to study further.
We have previously shown that plasma EVs from the rectal cancer patients, when fed to cultured human monocytes, caused monocyte transcriptional responses comprising protein binding, apoptotic mitochondrial changes, immune cell signaling, and cell growth, among other biological processes [26]. In the present study, we verified that EVs contained the intact mitochondrial genome. The total DNA concentration of the EV samples was reduced by the DNase and proteinase treatment, but it did not affect the total mtDNA variant number (data not shown) nor did it cause more mtDNA (MT-RNR1) damage. Overall, this suggests that the full genome is present and protected inside the vesicles.
Replication is the primary source of new mutations in mtDNA. The mutation rate observed in mtDNA is 10-17 times higher than that of the nuclear genome, attributed to the lack of histones and of efficient DNA repair mechanisms, and to the proximity to reactive oxygen species generated by oxidative phosphorylation [27]. Tumor cells have altered bioenergetic processes, such as increased glucose metabolism, altered calcium regulation, altered production of reactive oxygen species, or altered interorganelle interaction. These changes may result from pre-existing or de novo mutations of nuclear or mitochondrial DNA, changes in gene copy number, or altered gene expression [5]. The plasma EV mtDNA variants were located in both the coding and non-coding regions, but with a high protein-coding region to D-loop ratio, suggesting that EV mtDNA entails adaptive metabolic features. Of the 13 protein-coding genes, MT-ND5 had the highest number of variants in plasma EVs from both colon and rectal cancer patients. MT-ND5 is the most frequently mutated mitochondrial gene in cancer [28]. There is evidence for negative selection of truncating mutations in the mtDNA genes, but for some malignancies, including CRC, the opposite has been shown, with a suggested functional oncogenic impact of mitochondrial mutations in the initiation and clonal evolution of the cancer [28]. In our study, the EV mtDNA displayed high diversity and several distinctive variants not found in the other samples, pointing towards a possible involvement of EVs in the regulation of the patients' mtDNA heterogeneity. However, since we did not sequence samples from healthy controls and EVs are secreted by all cells, we cannot exclude that heterogeneous EV mtDNA variants originate from tissues other than the cancer cells. To truly answer this, further studies are needed.
The mtDNA contains three relevant classes of variants: recent germline mutations, somatic mutations, and ancient adaptive polymorphisms. These variants appear within a cell with normal mtDNA, generating a mixed mitochondria-containing cytoplasm of variant and reference mtDNA, a state known as heteroplasmy [5]. Detection of low-level (< 0.10) heteroplasmy has been important for the diagnosis and prognostication of mitochondrial diseases, but also in cancer and age-related research [29]. In plasma EVs from both the colon and rectal cancer patients, low-level heteroplasmy variants were frequent. In order to sequence the complete mitochondrial genome in FFPE tissues, the mtDNA amplification necessitated multiple PCR primer sets. The apparently low number of variants in FF tumor compared to FFPE tumor suggested that the tissue conservation method or the mtDNA amplification procedure before sequencing might have had an impact on variant detection. Additional mtDNA variants might have been generated during the formalin-fixation process in the form of nucleotide modifications such as G > A and C > T transitions, which have previously been suggested as artifacts of this conservation method [30,31], as well as by increased co-amplification of NUMTs [32,33]. It may be possible to reduce such artifactual sequence information by experimental or bioinformatic methods, none of which is yet sufficiently standardized. Of note, the mtDNA copy number per cell can vary by several orders of magnitude depending on the cell type [34]. Our current strategy did not allow us to determine whether the variants came from the enrichment of mutant mtDNA or from mtDNA copy number variations.
Although the origin of the EVs is unknown, in our study they showed similarities to the tumor tissues with the presence of nonsense mutations, and could possibly be involved in tumor cell signaling to adjust their metabolic needs. However, the cell has multiple pathways to recover mtDNA and maintain mitochondrial quality, including mtDNA repair, degradation, clearance, and release. Damaged mtDNA can be removed from the cells through EVs (by means of fragmented or the intact full genome), migrasomes, or other pathways of clearance, in order to maintain cell homeostasis (reviewed in [35]). The plasma EV mtDNA was abundant in coding region missense mutations, making it tempting to speculate that functionally detrimental somatic mtDNA mutations in cells can be expelled via EVs.
Limitations of this study include the influence of sample storage and potential mtDNA amplification bias. Experimental factors [29], and especially contaminating NUMTs generated by the transfer of mtDNA into the nuclear genome, can complicate mtDNA sequencing analysis [32,33]. However, the long-range targeted PCR prior to sequencing circumvented this problem to some degree [36] and highlights snap-freezing as the more suitable conservation method for mitogenome analysis of solid tissues. Another limitation is the low number of patients, selected mainly based on the availability of biobank materials, hampering a more thorough investigation of the biological relevance.
Conclusion
In conclusion, our investigations revealed that plasma EV mtDNA exhibits a high degree of diversity, suggesting involvement in CRC biology.
A new pine pest for Diyarbakır: Observations on Buprestis (Ancylocheira) dalmatina Mannerheim, 1837 (Coleoptera: Buprestidae)
This study was conducted in 2019 on pine trees on the Dicle University Campus. Bark samples were taken from dried pine trees and brought to the laboratory. Samples were cultured in plastic containers in climatic chambers set to 26 ± 1 °C, 65 ± 5% humidity and a 16:8 (light:dark) photoperiod. As a result of this study, Buprestis (Ancylocheira) dalmatina Mannerheim, 1837, a species belonging to the family Buprestidae of the order Coleoptera, was identified. This is the first record on pine trees in Diyarbakır. In addition, this species is a first record for the insect fauna of Diyarbakır. This harmful species, one of the rarer insect species in the world, is thought to have entered the province of Diyarbakır, possibly through production materials brought to the region without inspection.
Introduction
Insects (Hexapoda) are the animal group with the largest number of species, with more than one million species described in the world (Price 1997). Consistent estimates suggest that 50-90% of the insect species existing on Earth are still undiscovered. Nevertheless, the insects already identified account for more than half of all known organism species. Insects make up the most diverse form of animal life in terrestrial ecosystems. Many species are harmless and essential components of the natural ecosystem. Since they are cold-blooded, the rates of key physiological processes in their life cycle are determined by environmental conditions, especially temperature and humidity. In general, they have short generation periods, high fertility rates and high mobility (Moore and Allard 2008).
Although Anatolia is not a continent, it contains all the ecosystem and habitat features of one, and it resembles a small continent in terms of biodiversity. Each of the seven geographical regions of Turkey has a distinctive climate, flora and fauna.
Species belonging to the family Buprestidae are generally common in tropical and subtropical regions. Their length varies between 3 and 80 mm. Many are metallic green, black or other colors. The posterior part of the body is tapered. The head is recessed and partly embedded. The antennae are serrate. Most adults feed on flowers and leaves. The larvae cause the main damage by opening galleries in the roots, stems, branches and shoots of trees.
The pine species of the genus Pinus (family Pinaceae) are among the most common trees in Turkey. Pines are mostly coniferous, evergreen trees, although shrub forms also exist. Pine trees grow in climatic regions that vary by species. They grow well in sunny areas. Most species need very little water once established. They develop relatively well on permeable soils.
In Turkey, pest control is carried out against about 50 harmful insect, plant, fungus, mite, bacterium and virus species over an area of 500 thousand hectares of forest. Around 8-10 million TL is spent annually on these efforts. The damage caused by insects in forests is known to far exceed that caused by forest fires (Eroğlu 2017). The most important tree-damaging insects are the species that live in the cambium and the tissues close to it. Although these insects are called "cambium insects", their main and larval galleries (egg and larval chambers) are in the phloem, so they are also called "phloem beetles". The largest group of these insects comprises members of the Curculionidae, Scolytinae (Curculionidae), Cerambycidae and Buprestidae families of the order Coleoptera (Eroğlu 2017).
This study aims to identify Buprestis (Ancylocheira) dalmatina, a species of the family Buprestidae that is harmful to pine trees, and to investigate some of its morphological features.
Material and Methods
This study was carried out on pine trees (Pinus parviflora) on the Dicle University Campus, Faculty of Agriculture, in 2019. The study area (37°53'31.8"N 40°16'22.2"E) consists of 895 trees, 82 of which are blue cypress (Cupressus arizonica) and 813 Persian pine (Pinus parviflora). Samples of dried-out pine trees were taken and brought to the laboratory (Figure 1). The samples were divided into small pieces and placed in 20×20×30 cm plastic containers covered with thin cheesecloth. They were then cultured in climatic chambers set at 26 ± 1 °C, 65 ± 5% humidity and a 16:8 light/dark period. The identification of the specimens obtained was made by Prof. Dr. Göksel Tozlu (Atatürk University, Faculty of Agriculture, Department of Plant Protection, Erzurum). Measurements (with a digital caliper) and weighings (with digital precision scales) were made on the adults.
Results and Discussion
As a result of the study, Buprestis (Ancylocheira) dalmatina Mannerheim, 1837, a species belonging to the family Buprestidae, was identified.
Adult period
In the study, 147 Buprestis (Ancylocheira) dalmatina adults were obtained from larvae cultured in the laboratory (Figure 2). Of the 147 individuals obtained, measurements were made on 22 randomly selected individuals (Table 1). The adult is easily recognized by the golden yellow bands on the metallic black elytra (Figure 3).
Larval stage
The length of the last-instar larvae of the pest reaches approximately 3 cm. The head width of the larva is around 7 mm (Figure 4).
Pupae stage
In the observations, the larvae of Buprestis (Ancylocheira) dalmatina preferred the bark of pine trees for pupation (Figure 5). The average length of the pupae of Buprestis (Ancylocheira) dalmatina was measured as 17 mm and the average width as 7 mm (Figure 6).
Damage
Damage caused by the adults of Buprestis (Ancylocheira) dalmatina is negligible; the main damage is caused by the larvae. The larvae of the pest feed on the wood tissues of pine trees, and the trunks of pine trees suffer great damage as a result of this feeding (Figures 7-8). Buprestid species mainly cause significant damage by feeding on the woody stem tissues of plants. As a result of their feeding, they can dry out the host plants completely or partially in a short time. For this reason, the damage caused by the feeding galleries of the larvae is very important. Species of the family Buprestidae, as primary pests, also pave the way for secondary harmful insect species to settle on the plants, in addition to their direct damage.
In conclusion, how this species, which is rarely found on pine trees in different regions of Turkey, was introduced into a local area in Diyarbakır is thought-provoking in terms of internal quarantine. There is very little information about the biology, ecology and natural enemies of this species in Turkey and in the world. This harmful species, one of the rarer insect species in the world, is thought to have entered the province of Diyarbakır, possibly through pine saplings brought to the region without inspection. Similar to our finding, Lorubio et al. (2018) stated that B. dalmatina, first found on P. halepensis in Italy, may have been introduced via pine saplings brought from the Balkans.
In the observations made in the study area, as a result of the damage of Buprestis (Ancylocheira) dalmatina, 90 Persian pine trees, corresponding to approximately 10% of the stand, were completely dry, and 58 of them had been cut down. Some dry trees were also seen in another pine area close to the study area. Buprestis (Ancylocheira) dalmatina, which is quite destructive, needs to be monitored carefully. It is important to focus on the recognition, biology, natural enemies and control methods of this species. In this study, it was observed that this species, which fed on Persian pine (Pinus parviflora), did not prefer the blue cypress (Cupressus arizonica) in the same area. In addition, it is necessary to monitor carefully whether this harmful species also exhibits specialization in its feeding preference. We believe that this study will form the basis for further research planned on Buprestis (Ancylocheira) dalmatina in the future.
Prodynorphin and Proenkephalin in Cerebrospinal Fluid of Sporadic Creutzfeldt–Jakob Disease
Proenkephalin (PENK) and prodynorphin (PDYN) are endogenous opioid peptides mainly produced in the striatum and, to a lesser extent, in the cerebral cortex. Dysregulated metabolism and altered cerebrospinal fluid (CSF) levels of PENK and PDYN have been described in several neurodegenerative diseases. However, no study to date investigated these peptides in the CSF of sporadic Creutzfeldt–Jakob disease (sCJD). Using liquid chromatography-multiple reaction monitoring mass spectrometry, we evaluated the CSF PDYN- and PENK-derived peptide levels in 25 controls and 63 patients with sCJD belonging to the most prevalent molecular subtypes (MM(V)1, VV2 and MV2K). One of the PENK-derived peptides was significantly decreased in each sCJD subtype compared to the controls without a difference among subtypes. Conversely, PDYN-derived peptides were selectively decreased in the CSF of sCJD MV2K, a subtype with a more widespread overall pathology compared to the sCJD MM(V)1 and the VV2 subtypes, which we confirmed by semiquantitative analysis of cortical and striatal neuronal loss and astrocytosis. In sCJD CSF PENK and PDYN were associated with CSF biomarkers of neurodegeneration but not with clinical variables and showed a poor diagnostic performance. CSF PDYN and PENK-derived peptides had no significant diagnostic and prognostic values in sCJD; however, the distinct marker levels between molecular subtypes might help to better understand the basis of phenotypic heterogeneity determined by divergent neuronal targeting.
Several biofluid markers of neuronal damage, neuroinflammation and synaptic dysfunction have been evaluated in sCJD, aiming to improve diagnosis, prognostic evaluation, stratification and management of patients [3][4][5][6]. However, the continuous identification of new potential cerebrospinal fluid (CSF) biomarkers is still mandatory to achieve a better understanding of other pathogenetic pathways involved in sCJD.
Initial evidence from these studies suggested that decreased CSF PDYN and PENK levels may reflect an impairment and/or neurodegeneration of the striatal medium spiny projection neurons (MSNs), which produce both peptides under dopaminergic signaling [9,10]. Moreover, dysfunctions in the PDYN pathway appear to be involved in developing behavioral and sleep disorders in neurodegenerative disease [12]. Nevertheless, no study to date has evaluated CSF PDYN and PENK in sCJD, a highly heterogeneous disease from both clinical and neuropathological points of view.
Using our previously developed liquid chromatography-tandem mass spectrometry (LC-MS/MS) method in multiple reaction monitoring (MRM) mode for the measurement of CSF PDYN-derived peptides [9] and a new assay for PENK-derived peptides, we investigated the profiles of these markers in sCJD, including its molecular subtypes, and studied the possible associations between the neuropeptide levels and those of other biomarkers and clinical variables, such as disease stage and survival.
Results
Sporadic CJD cases and controls showed no difference in sex distribution, but sCJD cases were slightly (though not statistically significantly) older than the controls (Table 1). There was no effect of sex or age on CSF biomarker levels (Supplementary Results).
Sporadic CJD patients showed no difference in either PDYN-derived peptide level compared to the controls, whereas the PENK peptide [DAE...LLK], but not [FAE...YSK], was significantly lower in sCJD compared with the controls (p < 0.001) (Table 1 and Figure 1). These findings were confirmed even after age adjustment or when using the mean of the two peptides for the calculations, as described [9] (Figure 1). All results were confirmed even after age adjustment and by including the mean of each pair of peptides in the calculations (Supplementary Results). All peptides showed a suboptimal accuracy in the discrimination between sCJD subtypes and controls and among sCJD subtypes, with an AUC < 0.80, except for a borderline performance of the PENK peptide [DAE...LLK] in the comparison between sCJD MM(V)1 and controls (AUC 0.844 ± 0.077) (Supplementary Table S1).
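For readers unfamiliar with the accuracy metric, AUC values such as those above can be computed directly from marker levels; a hedged sketch with synthetic data (the study used its own statistical software, not this code):

```python
# ROC AUC for discriminating sCJD from controls with one CSF peptide.
# The labels and peptide levels below are synthetic placeholders.
from sklearn.metrics import roc_auc_score

labels = [1, 1, 1, 1, 0, 0, 0, 0]                    # 1 = sCJD, 0 = control
peptide = [0.8, 1.1, 0.9, 1.4, 2.0, 1.8, 2.3, 1.6]   # arbitrary units

# The marker is decreased in sCJD, so score with the negated level
auc = roc_auc_score(labels, [-x for x in peptide])
print(f"AUC = {auc:.3f}")
```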
In the sCJD group, we detected strong correlations between the levels of the two PDYN- (r = 0.614, p < 0.001) and the two PENK-derived peptides (r = 0.734, p < 0.001), suggesting a consistent and reliable estimate of both opioid peptide levels in the CSF. The same was also confirmed in the control group (PDYN: r = 0.661, p < 0.001; PENK: r = 0.673, p = 0.001).
In the survival analysis, the hazard ratio (HR, 95% CI) for the significant clinical predictor of mortality in sCJD was 0.122 (0.051-0.294), p < 0.001, whereas the concentration of each peptide was not associated with survival.
Discussion
In the present study, we investigated for the first time the levels of PDYN- and PENK-derived peptides in patients with sCJD, thereby providing new insights into the possible mechanisms underlying the dysregulation of brain opioids in neurodegenerative diseases.
Brain opioid peptides play an important role in the striatal network, which consists of MSNs and GABAergic interneurons [13]. MSNs comprise two sub-populations, belonging to the direct and the indirect basal ganglia pathways and expressing PDYN and PENK, respectively. However, PDYN is also expressed in the cerebral cortex at levels almost comparable to those of the striatum [7,14].
In our study, we showed that both PDYN-derived peptides were more significantly decreased in sCJD MV2K, a subtype with a more severe combined cortico-striatal pathology compared to the typical sCJD MM(V)1 and the VV2 subtypes [2,15], which we confirmed in the present cohort using a semiquantitative neuropathological analysis. Thus, the neurodegeneration of striatal MSNs combined with the cortical neuronal loss may be responsible for the PDYN-derived peptide decrease in the CSF of MV2K.
Accordingly, decreased CSF PDYN levels have also been reported in HD and DLB [9,12,16], i.e., proteinopathies with a prominent striatal or cortico-striatal pathology [9,16]. Since sCJD MV2K and MM(V)1 showed comparably reduced levels of PDYN [SVG...LAR] relative to the VV2 group, correlating with a higher neuropathological score in the cerebral cortex, it is plausible that the degree of cortical pathology significantly affects this peptide's concentration in the CSF.
Concerning PENK, we found a decrease of the PENK [DAE...LLK] peptide in all sCJD subtypes compared to the controls, without differences among subgroups. From the biochemical point of view, the two PENK-derived peptides comprise the amino acid residues 142-157 and 236-251. Given that the protein precursor PENK contains several cleavage sites between amino acid residues 157 and 236 [17], the measured peptides in the CSF belong to distinct cleavage products with possibly different pathophysiological destinies. However, apart from sCJD patients, decreased or trends towards decreased CSF PENK levels were also previously reported in HD, AD, FTD and DLB, suggesting a common pathogenetic pathway in most proteinopathies [8,10]. Further studies on larger patient cohorts will be needed to fully elucidate the precise role of brain opioid peptides in neurodegenerative diseases.
Our findings of positive correlations between the peptides and biomarkers of neuronal damage support the link between ongoing neurodegeneration and CSF PDYN and PENK alterations. Nevertheless, the lack of association between CSF PDYN and PENK levels and disease duration in sCJD appears to exclude an influence of the disease progression rate on the CSF levels of brain opioid peptides. However, the many variables influencing disease duration in sCJD, including the wide phenotypic heterogeneity, would require a much more in-depth analysis in a larger cohort to reach a more definite conclusion on this issue.
The major strength of our study is the inclusion of all the most prevalent subtypes of sCJD, with the large majority of cases being autopsy-verified. On the other hand, the implementation of mass-spectrometry-based biomarkers in clinical routine is currently hampered by the limited availability of the technique.
Despite the overall limited diagnostic and prognostic roles of PDYN and PENK in sCJD, our study showed differentially altered levels of PENK- and PDYN-derived peptides in the most prevalent sCJD subtypes. These findings might reflect distinct degrees of striatal and cortical pathology in the sCJD spectrum and might help to better understand the disease's neuropathological heterogeneity for the development of targeted therapies.
Patient Selection, CSF and Neuropathological Analyses
For the present study, we selected CSF samples of 63 patients with sCJD and 25 healthy controls that had been submitted to the Neuropathology Laboratory (NP-Lab) at the Institute of Neurological Sciences of Bologna (Italy) or to the National CJD Surveillance Unit at the Istituto Superiore di Sanità in Rome (Italy) from 2009 to 2021 for diagnostic activity related to CJD surveillance. The study was conducted in accordance with the Declaration of Helsinki and approved by the Istituto Superiore di Sanità Ethics Committee (approval number CE-ISS 09/266; 29 May 2009). Informed consent was given by study participants or by their next of kin.
The diagnosis of sCJD fulfilled the current European diagnostic criteria [18] and neuropathological consensus criteria [19]. The sCJD group comprised 56 cases with a definite neuropathological diagnosis and seven cases with a probable diagnosis of sCJD (all tested positive by the CSF prion real-time quaking-induced conversion (RT-QuIC) assay) [20,21]. Demographic and clinical data of the sCJD cohort are shown in Supplementary Table S3. For the analysis according to the sCJD molecular subtypes, we merged the subjects with definite sCJD (28 MM(V)1, 15 VV2 and 13 MV2K) with those with a probable sCJD diagnosis (five VV2 and two MV2K) and a high level of certainty for a given subtype, as described [22]. In each case, we calculated the length of survival and the disease stage as described [23]. The control group included 25 subjects lacking any clinical or neuroradiological evidence of central nervous system disease and having CSF p-tau, t-tau and Aβ42 in the normal range [22]. CSF samples were obtained by lumbar puncture following a standard procedure, centrifuged and stored at −80 °C. CSF PDYN- and PENK-derived peptides were analyzed using LC-MS/MS at the Experimental Neurology Laboratory at Ulm University Hospital in all cases, as described [9]. A detailed description of the sample preparation and the MRM method is provided in the Supplementary Methods and Table S4. CSF t-tau, protein 14-3-3, NfL and YKL-40 were analyzed in the sCJD group, and AD core biomarkers in the control group, at the NP-Lab of Bologna using commercially available ELISA kits as already published [22]. PrPSc seeding activity was detected by RT-QuIC [20,21].
We derived a mean combined score for each case based on two operators' semiquantitative assessments of neuronal loss and astrogliosis (0, no significant pathology; 1, mild; 2, moderate; and 3, severe; see Supplementary Figure S2) in the striatum (i.e., the mean score of the caudate and putamen) and the cerebral cortex (i.e., the mean score of the frontal, temporal, parietal and occipital lobes).
Statistical Analyses
Statistical analyses were performed using IBM SPSS Statistics version 21 (IBM, Armonk, NY, USA) and GraphPad Prism 7 (GraphPad Software, La Jolla, CA, USA) software. For continuous variables, depending on the data distribution and the number of groups, we applied the Mann-Whitney U test, t-test, Kruskal-Wallis test (followed by Dunn-Bonferroni post hoc test) or the one-way analysis of variance (followed by Tukey's post hoc test). All reported p-values were adjusted for multiple comparisons. The Chi-Square test was adopted for categorical variables.
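As an illustration of the group comparisons described above, a minimal Python sketch follows; it substitutes Bonferroni-corrected pairwise Mann-Whitney tests for the Dunn-Bonferroni post hoc, and the data are synthetic:

```python
# Kruskal-Wallis across three groups, followed by Bonferroni-corrected
# pairwise Mann-Whitney tests as a simple stand-in for Dunn-Bonferroni.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "MM(V)1": [1.1, 0.9, 1.3, 1.0],  # synthetic marker levels
    "VV2":    [0.8, 0.7, 1.0, 0.9],
    "MV2K":   [0.5, 0.6, 0.4, 0.7],
}
H, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: Bonferroni-adjusted p = {min(1.0, p_raw * len(pairs)):.4f}")
```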
We used multivariate linear regression models to adjust for age differences in CSF biomarkers between the groups, and Spearman's correlations to test the possible associations between the analyzed variables. We also performed univariate/multivariate Cox regression analysis to test the association between marker levels and survival, as well as between clinical variables and survival.
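A minimal sketch of the survival modeling follows; the lifelines package and the toy data are our choices for illustration and do not reproduce the study's analysis:

```python
# Univariate Cox regression of survival on a CSF marker level.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "survival_months": [3, 6, 2, 10, 4, 8],            # toy follow-up times
    "event":           [1, 1, 1, 1, 1, 0],             # 1 = death observed
    "marker":          [1.2, 0.7, 1.5, 0.4, 1.1, 0.5], # arbitrary units
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio
```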
Appraisal of Cymbopogon citratus (Lemon grass) for Antibacterial Activity Against Uropathogens
Urinary tract infections (UTI) are one of the major public health concerns in both genders, but variations in the anatomy, physiology and behaviour of the urogenital and reproductive tract make women more susceptible. UTI is more prevalent and severe in women of all ages and in older men; because of multi-drug resistant strains and high recurrence, it has become an important socioeconomic burden. The microbial resistance, several life-threatening side effects, repeated high doses, high cost and low effectiveness of antibiotics have motivated researchers to explore natural remedies for UTI therapy. The purpose of this research was to evaluate the antibacterial effect of Cymbopogon citratus (C. citratus) against uropathogens isolated from UTI patients, mainly including Staphylococcus aureus (S. aureus), Pseudomonas aeruginosa (P. aeruginosa), Klebsiella pneumoniae (K. pneumoniae) and Escherichia coli (E. coli). Isolates were confirmed through conventional biochemical techniques. Ethanolic extract of C. citratus was evaluated against the isolates through the disc diffusion method, and the minimum inhibitory concentration was also determined. The ethanolic extract of C. citratus was phytochemically characterized through high performance liquid chromatography (HPLC). Antibacterial susceptibility was determined by measuring the zone of inhibition (ZOI): E. coli, S. aureus, P. aeruginosa and K. pneumoniae showed average ZOI of 14.0, 13.0, 13.0 and 8 mm against the ethanolic extract, respectively. HPLC showed flavonoid and phenolic components present in the ethanolic extract of C. citratus. In a mouse model, C. citratus also significantly decreased the number of uropathogens. This study reports the role of lemon grass in treating UTI and provides a new remedy for the treatment of UTI.
INTRODUCTION
Microbial infections, including Escherichia coli (E. coli), Enterococcus faecalis and Klebsiella species, have been found to be the main causes of urinary tract infection (UTI). Various signs, including painful urination or dysuria, haematuria, urinary urgency, burning, frequent urination, nausea and vomiting, are observed in UTI (Anderson et al., 2004). UTIs are among the most common conditions requiring medical treatment, with 6-10% of all young women showing bacteriuria. The incidence of UTIs increases with age, and 25-50% of females aged 80 years and older have bacteriuria (Hung et al., 2009). UTIs occur as a result of interactions between the uropathogens and the host. The uropathogens initially bind to the epithelial surface, colonize and spread throughout the mucosa, causing tissue damage. Pathogens can ascend into the urinary bladder after the initial colonization period, leading to symptomatic or asymptomatic bacteriuria. Further progression may lead to renal impairment and
pyelonephritis. Specific virulence factors residing on the uropathogen's membrane are responsible for bacterial resistance to the normally effective defense mechanisms of the host. Bacterial adhesins and their associated epithelial binding sites have recently been identified, and natural mechanisms of anti-adherence are currently being investigated (Jafari et al., 2012). According to statistical calculations, the association between UTIs and E. coli in females was significant (P<0.05) (Jahandeh et al., 2015). Published research showed that 80% of UTIs are caused by E. coli and 10-15% by S. saprophyticus. In the United States and other regions of the world, Enterococci, Klebsiella, Enterobacter and Proteus mirabilis rarely cause uncomplicated cystitis and pyelonephritis (Ronald, 2002).
Ever-increasing antibiotic resistance in bacteria necessitates alternatives to antibiotics for the control of bacterial infections. Moreover, antibiotics impose many side-effects on the host, including disturbance of the gut flora, hypersensitivity and immunosuppression (Patel, 2007). The pharmaceutical industry has developed the latest generations of antibiotics for the treatment of resistant bacterial strains (Baratta et al., 1998). Antibiotic-resistant strains were first reported in Brazil in the 1980s and now persist worldwide. Until now, various strategies have been developed, including modified drugs, but natural bioactive antimicrobial agents have proved to be a good alternative for the treatment of bacterial infections, with very few limitations (Buckova et al., 2018).
Herbal medicines have greater antimicrobial activity due to the presence of different bioactive chemicals such as allicin, flavonoids, terpenoids, tannins and alkaloids (Fayyaz et al., 2019; Mahmood et al., 2019; Abbas et al., 2019). Different plant materials have been reported to possess immunostimulant, antiviral (Aslam et al., 2014), antibacterial (Arshad et al., 2017; Yasmin et al., 2020) and antiprotozoan activity (Abbas et al., 2017; Zhang et al., 2020), while some plant material extracts have been reported as sources of biofuel (Fatima et al., 2016). C. citratus extract has also been reported as an excellent source of various bioactive compounds which can be used to treat UTIs. It can be utilized as a remedy for ophthalmia, intestinal sickness, elephantiasis, pneumonia and vascular disorders. Researchers have found that C. citratus has nerve-calming, bactericidal, antiseptic, astringent, antioxidant, fungicidal and sedative properties (Ronald, 2002).
The use of medicinal plants as key drugs to sustain human health is emphasized by the World Health Organization (WHO). Brazil, Latin America and Argentina have gradually increased the use of medicinal plants. About 80% of people in developing countries use medicinal plants as conventional remedies. Therefore, these plants should be studied in order to better understand their properties, safety and efficiency (Cimanga et al., 2002). Many medicinal plants have been investigated phytochemically for bioactive compounds and their therapeutic use; bioactive compounds including tannins and phenols are growth inhibitors of pathogenic bacteria (Cui et al., 2016). Identifying new and effective strategies as alternative treatments is a foremost priority. Medicinal plants and plant essential oils have shown significant bactericidal and antioxidant activity (Naik et al., 2018). Multi-drug resistant bacteria cause long-term acute infections for which antibiotic therapy fails to control the pathogens (Patel, 2007).
Recent studies on medicinal plants and their therapeutic use are quite helpful for addressing the antibiotic resistance problem of bacteria. Infectious diseases are a leading cause of illness, and one third of all deaths are attributed to them. Multi-drug resistant bacteria are reported to be mainly associated with epidemics of human diseases. The emerging resistance against antibiotics triggers the use of herbal medicine as an alternative.
The present study was conducted to evaluate the bioactive compounds of C. citratus and their antimicrobial activity against UTI-causing bacteria. C. citratus grows readily in Asia, and no previous work on its antimicrobial activity against UTI-causing bacteria is available in the literature. This study helped to evaluate the role of C. citratus in treating UTIs and provides a new dimension in the medical field.
Plant extract and phytochemical analysis:
Cymbopogon citratus plants were collected from local nurseries, and identification was confirmed by a botanist. Leaves were air dried and crushed to powder. An ethanolic extract (80%) of C. citratus leaves was prepared by the conventional method using a Soxhlet apparatus (Redfern et al., 2014). High performance liquid chromatography (HPLC) was performed for the phytochemical analysis of the ethanolic extract of C. citratus. The plant extract sample was prepared in HPLC-grade solvent at a concentration of 0.1 mg/µl and strained through a 0.2 µm Millipore membrane filter. It was then loaded onto an RP-18 column. The fractions corresponding to the maximum peaks at fixed retention times were collected using a fraction collector. The analysis was performed using two LC-10AT pumps (Shimadzu).
Collection of bacterial cultures and in-vitro antibacterial analysis:
Cultures of UTI-causing bacteria, including the gram-positive Staph. aureus and the gram-negative E. coli, P. aeruginosa and K. pneumoniae, were obtained and confirmed through biochemical testing. The antibacterial activity of the ethanolic extract of C. citratus was assessed through the disc diffusion method. Bacterial cultures adjusted to the 0.5 McFarland standard (1.5 × 10^8 CFU/ml) were grown as lawn cultures on Mueller-Hinton agar plates. Filter paper discs soaked overnight in 50 µl of extract were used to evaluate antibacterial activity against the selected uropathogens. Discs were placed on the Mueller-Hinton agar plates with bacterial lawns and incubated at 37°C for 24 h. Zones of inhibition (ZOI) were measured. Recommended antibiotic discs were used as positive controls and PBS-soaked filter paper discs as negative controls for each bacterium (Cui et al., 2016). ZOI were compared with the negative and positive controls.
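Adjusting a culture to the 0.5 McFarland density is a C1V1 = C2V2 dilution; a minimal sketch follows (the stock density is hypothetical):

```python
def dilution_volume(stock_cfu_ml: float, target_cfu_ml: float,
                    final_volume_ml: float) -> float:
    """Volume of stock culture that, diluted to final_volume_ml,
    gives the target density (C1*V1 = C2*V2)."""
    return target_cfu_ml * final_volume_ml / stock_cfu_ml

# e.g. dilute an overnight culture (~1e9 CFU/ml, assumed) to the
# 0.5 McFarland equivalent of 1.5e8 CFU/ml in a 5 ml suspension
v = dilution_volume(1e9, 1.5e8, 5.0)
print(f"add {v:.2f} ml stock, then diluent to 5 ml")
```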
Minimum Inhibitory Concentration (MIC):
The MIC of the C. citratus ethanolic leaf extract was determined in a microdilution plate. 50 µl of extract in two-fold serial dilutions and 50 µl of nutrient broth were added to wells 3 to 12 of the microdilution plate. Then 20 µl of bacterial inoculum was added to each well. Wells 1 and 2 were maintained as negative and positive controls, containing nutrient broth (50 µl) + bacterial inoculum (20 µl) and antibiotic + nutrient broth (50 µl) + inoculum (20 µl), respectively. After incubation for 24 hours at 37°C, the change in turbidity was observed.
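The two-fold serial dilution across wells 3-12 defines a geometric series of extract concentrations; a minimal sketch (the starting fraction is illustrative):

```python
# Relative extract concentration in a two-fold serial dilution
# across wells 3-12 of a microdilution plate.
start_fraction = 0.5  # assumed: extract mixed 1:1 with broth in well 3
for i, well in enumerate(range(3, 13)):
    print(f"well {well}: {start_fraction / 2**i:.5f} of stock")
```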
Minimum Bactericidal Concentration (MBC):
To determine the MBC, 20 µl of the mixture from wells showing no visible growth in the MIC assay was transferred onto fresh nutrient agar plates with a micropipette by the spread plate method and incubated at 37°C for 24 hours.
In-vivo testing of extract in mouse model: An experimental mouse model as reported previously (Hung et al., 2009; Cui et al., 2016) was used to evaluate the in-vivo effect of the 80% ethanolic leaf extract of C. citratus. Albino mice (7-9 weeks of age) were injected with 1 × 10^7 CFU of bacterial culture in 50 µl PBS and, after three days, were treated for 6 days with 50 µl of C. citratus extract, injected intraperitoneally. Three days post-treatment, mice were sacrificed and bacterial titers were measured from homogenized bladder tissues. 0.2 ml of homogenized tissue was poured onto LB medium and the bacterial titer after 24 hours was calculated using the formula: CFU/ml = (number of colonies × dilution factor)/volume plated (ml)
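A minimal sketch of the plate-count arithmetic follows; the colony count and dilution factor are illustrative:

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_ml: float) -> float:
    """Standard plate count: CFU/ml = colonies x dilution factor / volume plated."""
    return colonies * dilution_factor / plated_ml

# e.g. 86 colonies from a 10^4-fold dilution with 0.2 ml plated
print(f"{cfu_per_ml(86, 1e4, 0.2):.2e} CFU/ml")
```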
Confirmation of bacteria:
The bacteria isolated from UTI patients were characterized through conventional biochemical methods, as presented in Table 1.
Phytochemical analysis of extract: HPLC showed the phenolic and flavonoid compounds present in the ethanolic leaf extract of C. citratus. The quantities and positive controls are presented in Table 2.
In-vitro antibacterial activity of extract: The plant extract of C. citratus was evaluated through the disc diffusion method. ZOI were compared with positive controls (antibiotics); ampicillin was used for S. aureus, E. coli and K. pneumoniae, and gentamycin was used for P. aeruginosa. C. citratus proved to be more effective against P. aeruginosa, Staph. aureus and E. coli and less effective against K. pneumoniae in the disc diffusion method. The largest ZOI were observed against E. coli (14 mm) and P. aeruginosa (13 mm). Table 2 shows all the mean ZOI against the uropathogens.
Minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC): The MIC of the plant extract was determined in a microdilution plate. The minimum concentrations inhibiting the growth of E. coli, Staph. aureus, P. aeruginosa, and K. pneumoniae were 6.25, 0.8, 3.125, and 6.25 µl, respectively. All the details of the MIC are given in Table 3. 20 µl from each MIC-determined well was evaluated by the spread-plate method on agar plates, and all showed MBC values identical to the MIC.
In-vivo testing of extract in mouse model: After 6 days of treatment, all the animals were sacrificed and their bladders were homogenized in PBS. Bacterial titers were significantly decreased for both gram-negative and gram-positive uropathogens, as presented in Table 5.
DISCUSSION
Antimicrobial effects of medicinal plants against UTI-causing bacteria and resistant strains have proved to be an effective alternative to drugs. Increasing multidrug resistance in uropathogens demands safe substitutes for drug therapies. The current study demonstrated the antimicrobial effect of C. citratus (lemon grass) against selected gram-negative and gram-positive uropathogens. Results acquired from the disc diffusion method and MIC showed that the 80% ethanolic leaves extract of C. citratus can inhibit the growth of gram-negative (E. coli, P. aeruginosa, K. pneumoniae) and gram-positive (Staph. aureus) bacteria. Similar observations were reported for C. citratus oil (Baratta et al., 1998; Cimanga et al., 2002; Pereira et al., 2004; Naik et al., 2010). All the uropathogens were sensitive to the ethanolic extract of C. citratus in the disc diffusion method. P. aeruginosa was previously reported as resistant to C. citratus oil, but the 80% ethanolic leaves extract of C. citratus showed a promising effect against P. aeruginosa in the current study, with a measured ZOI of 13 mm compared with 25 mm for the positive control gentamycin. Previous studies (Onawunmi et al., 1984; Duraipandiyan et al., 2006; Naik et al., 2010; Jafari et al., 2012) reported P. aeruginosa as least sensitive or resistant to C. citratus oil. The current study showed that the ethanolic leaves extract of C. citratus had MIC values against gram-negative bacteria of 3.125–6.25 µl, compared with the much weaker response of 400 mg/ml previously reported for methanolic extract against P. aeruginosa and E. coli (Duraipandiyan et al., 2006; Jafari et al., 2012; Naveed et al., 2013). Gram-negative uropathogens were more resistant to the C. citratus extract than the gram-positive bacterium. Staph. aureus showed the lowest MIC value of 0.8 µl, which inhibited bacterial growth in the 96-well plate. Differences in techniques may explain the minor variability of the results compared with other studies. P. aeruginosa was less resistant to the 80% ethanolic leaves extract than to methanolic extract, which is in agreement with Khan et al. (2012).
In the current study, HPLC showed that flavonoid and phenolic compounds were present in the ethanolic extract, which were most effective against the uropathogens. Active compounds including flavonoids, geranyl acetate, phenolic compounds, steroids, and saponins have already been reported in C. citratus oil by Duraipandiyan et al. (2006), Mothana et al. (2010), and Hindumathy et al. (2011).
Authors contribution: JL, XCH, DW, KMW and AR conceived and designed the study. TF executed the experiment. WB, FMZ and ML analyzed the data. All authors have interpreted the data, critically revised the manuscript for important intellectual contents and approved the final version.
Oxygen-conserving reflexes of the brain: the current molecular knowledge
Abstract The trigemino-cardiac reflex (TCR) may be classified as a sub-phenomenon in the group of the so-called ‘oxygen-conserving reflexes’. Within seconds after the initiation of such a reflex, there is a powerful and differentiated activation of the sympathetic system with a subsequent elevation in regional cerebral blood flow (CBF), with no changes in the cerebral metabolic rate of oxygen (CMRO2) or in the cerebral metabolic rate of glucose (CMRglc). Such an increase in regional CBF without a change of CMRO2 or CMRglc provides the brain with oxygen rapidly and efficiently and gives substantial evidence that the TCR is an oxygen-conserving reflex. This system, which mediates reflex protection, projects via currently undefined pathways from the rostral ventrolateral medulla oblongata to the upper brainstem and/or thalamus, which finally engage a small population of neurons in the cortex. This cortical centre appears to be dedicated to reflexively transducing a neuronal signal into cerebral vasodilatation and synchronization of electrocortical activity. Sympathetic excitation is mediated by cortical-spinal projection to spinal pre-ganglionic sympathetic neurons, whereas bradycardia is mediated via projections to cardiovagal motor medullary neurons. The integrated reflex response serves to redistribute blood from viscera to brain in response to a challenge to cerebral metabolism, but seems also to initiate a preconditioning mechanism. Better and more detailed knowledge of the cascades, transmitters and molecules engaged in such endogenous (neuro)protection may provide new insights into novel therapeutic options for a range of disorders characterized by neuronal death and into cortical organization of the brain.
In the last few years, the so-called 'oxygen-conserving reflexes (OCR)' [1] have been gaining increasing interest, especially among neurosurgeons and other neuroscientists [2][3][4][5][6][7][8]. This term was coined by the research work of Wolf et al. [1] and Andersson et al. [9], who studied oxygen consumption in resting human beings. They found that apneic situations with bradycardia were associated with a slightly smaller reduction in arterial O2 saturation than apneic situations without bradycardia.
A typical example of these OCRs in natural life is the 'dive reflex' observed in diving mammals. It is a protective OCR aimed at keeping the body alive during submergence in cold water, preparing the body to sustain life [1,3,10,11]. It is elicited by contact of the face with cold water and involves breath-holding, intense peripheral vasoconstriction, bradycardia, decreased ventilation and increased mean arterial pressure, maintaining the heart and the brain adequately oxygenated at the expense of less hypoxia-sensitive organs [10,12].
Human beings are not able to hold their breath for as long as diving mammals. This might be due to a less-developed diving response [1]. However, the dive reflex is considered to play a major role in the etiopathogenesis of sudden infant death syndrome (SIDS) or crib death, whose underlying pathological substrates are considered to be mostly congenital in nature and to involve the brainstem [13,14]. Another example of OCRs in human beings is the trigemino-cardiac reflex (TCR), which was first reported by Schaller et al. [14] during surgery in the cerebellopontine angle. It was observed that electrical, mechanical or chemical manipulation of the trigeminal nerve on its intra- or extracranial course may provoke a drop in mean arterial blood pressure and bradycardia [8,14,15].
Very little is known of these two principal clinical examples of reflexogenic aberrance, and the international literature seems far from providing the exact pathophysiology of the TCR and dive reflex. It appears, however, that the dive reflex may rather be a sub-phenomenon and that the TCR is the superordinate principle [2][3][4][5]. Understanding both these OCRs would be of enormous clinical importance to resolve major problems, especially during surgery or invasive procedures, but also the dreaded SIDS. The importance and frequency of the TCR and its sub-phenomenon, the diving reflex, prompted us to evaluate in more detail the current knowledge of their molecular bases, as well as of their clinical implications.
It is generally accepted that the diving reflex and ischemic tolerance involve, at least in part, similar physiological mechanisms [8,12,15]. As regards TCR, this seems to be the higher principle, with the diving reflex being one specific clinical manifestation among others when stimulating the trigeminal nerve. The discovery of those endogenous neuroprotective strategies underlines the clinical importance of TCR. Even though no convincing experimental data exist, TCR may be a specific example of a group of related responses generally defined by Wolf as 'OCRs' [2]. Within seconds of the initiation of such a reflex, there is a powerful and differentiated activation of the sympathetic system [2]. The subsequent elevation in cerebral blood flow (CBF) is neither associated with changes in the cerebral metabolic rate of oxygen (CMRO2) nor with the cerebral metabolic rate of glucose (CMRglc). Hence, it represents a primary cerebral vasodilatation [2]; a state in which the arterial blood pressure seems not to have any influence. However, a temporary reduction in peripheral consumption of O2 resulting in a slower O2 uptake from the alveolar space to the blood, would temporarily conserve O2 for the benefit of the central nervous system and the heart, which cannot sustain their metabolism without O2.
It has been largely shown that various noxious stimuli may, when applied below the threshold of brain damage, induce tolerance in the brain against a subsequent deleterious stimulus of the same or even another modality; these phenomena are called 'ischemic pre-conditioning' and 'cross-tolerance', respectively [14,16]. They probably involve separate systems of neurons of the central nervous system [8]. One of these two systems which mediate reflexive neurogenic protection emanates from oxygen-sensitive sympatho-excitatory reticulospinal neurons of the rostral ventrolateral medulla oblongata. These cells, excited within seconds by a reduction in CBF or CMRO2, initiate the systemic vascular components [17]. They profoundly increase regional CBF without changing CMRO2 or CMRglc and hence rapidly and efficiently provide the brain with oxygen [17]. The exact projections are currently undefined. They are thought to project from the rostral ventrolateral medulla oblongata to the upper brainstem and/or thalamus and finally project to the small population of cortical neurons. These appear to be dedicated to reflexively transduce a neuronal signal into cerebral vasodilatation and synchronization of electrocortical activity [17]. Reticulo-spinal neurons of the rostral ventrolateral medulla oblongata are 'premotor' neurons and, as such, are critical for detecting and initiating the vascular, cardiac and respiratory responses of the brainstem to hypoxia and ischemia [18]. The systemic response to excitation of rostral ventrolateral medulla oblongata neurons, however, results from activation of a network of effector neurons distributed elsewhere in the central nervous system [18]. Thus, sympathetic excitation is mediated by projections to spinal pre-ganglionic sympathetic neurons whereas bradycardia is mediated by projections to cardiovagal motor medullary neurons [8,17]. The integrated response serves to redistribute blood from viscera to brain in response to a challenge to cerebral metabolism [18].
The second mechanism that protects the brain itself from ischemia is represented by the intrinsic neurons of the cerebellar fastigial nucleus and mediates a conditioned central neurogenic neuroprotection. This mechanism is activated by excitation of the intrinsic neurons of the fastigial nucleus and is independent of the first mechanism. These two mechanisms initiate the systemic components of the oxygen-conserving TCR within seconds of excitation [18]. The CBF is significantly increased without changing CMRO2 and thus, the brain is rapidly provided with oxygen.
These mechanisms described above need a pre-exposure that can be seen clinically by a repetitive stimulation of the TCR, for example during operation [14]. That the brain may have neuronal systems dedicated to protecting itself from ischemic damage at first appears to be a new concept. However, upon reflection, this is not surprising given that there exist naturalistic behaviours characterized by very low levels of regional CBF, such as diving or hibernation [12]. The exact mechanisms of neurogenic neuroprotection are unknown, but such neuroprotective adaptation may be part of preconditioning strategies [19]. Probably, these reflexes, like the TCR, may prevent other brain insults as well -which therefore remain unrecognized.
Accordingly, it can be suggested that the TCR represents a 'physiological' entity rather than a pathological one. Better and more detailed knowledge of the cascades, transmitters and molecules engaged in such endogenous protection may provide new insights into novel therapeutic options for a range of disorders characterized by neuronal death and into cortical organization of the brain. Hypoxic or anoxic tolerance is found ubiquitously in nature, especially in diving species and hibernating species [20,21]. A common feature in most anoxic-tolerant species or during hibernation is a pronounced metabolic depression [22]. For example, it is now well accepted that during diving, turtle brains undergo metabolic depression, which is characterized by a depression in electrical activity [23,24].
One question that arises in the field of ischemic preconditioning (IPC) is whether it induces metabolic depression in a similar manner. After IPC, decreased release of the excitatory neurotransmitter glutamate was observed [25][26][27], and down-regulation of the excitatory receptors alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) [28] and N-methyl-D-aspartic acid (NMDA) [29] also occurred. In contrast to glutamate, increases in gamma-aminobutyric acid (GABA) release were observed after IPC [26,27]. These results also suggest that IPC, like diving species, promotes a metabolic rate down-regulation in the brain, thus reducing energy-consuming pathways.
In the diving reflex, however, adapted species must be metabolically prepared to respond to a potential hypoxic insult. It is possible that activation of the diving reflex, by shifting blood flow to the brain, provides additional time that allows the brain to prepare for the eventuality that anoxia ensues, by activating signalling pathways similar to those observed as triggers of IPC.
Two good candidates to trigger a neuroprotective cascade during the diving reflex and IPC are adenosine and the activation of the ATP-sensitive potassium channel. Several studies have demonstrated the role of the adenosine A1 receptor in both anoxia tolerance in diving species and in IPC [29][30][31][32][33][34][35][36][37]. Activation of the K+ATP channel likely plays a role in at least some of the mechanisms of IPC [29,38]. However, the precise K+ATP channel involved remains undefined. Recently, two ATP-sensitive potassium channels have been described. One of these channels resides in the plasma membrane; the other resides in the mitochondrial inner membrane.
The mtK+ATP channel has been suggested to be the key channel involved in IPC [39,40]. It has been suggested that opening of the mtK+ATP channel may depolarize the mitochondrial membrane potential, promoting an increase in the electron transport chain rate and thus increasing ATP production [41]. These two triggers are the logical result of the oxygen-sensing mechanism, because they are both linked to ATP levels. Once they are activated, a number of signalling pathways ensue that orchestrate the anoxic-/ischemic-tolerant phenotype. For a more in-depth description of some of these signalling pathways and genes expressed after IPC, see Gidday et al. [16].
Further improvement in knowledge may be provided by state-of-the-art imaging methods in the next few years: first in animal models, then in human beings and finally during operations. Recent clinical studies suggest the existence of such an endogenous neuronal protective effect in the human brain [42,43].
Lessons from the Electric Vehicle Crashworthiness Leading to Battery Fire
Electric vehicles (EVs) are currently emerging as alternative vehicles due to their high energy efficiency and low emissions during driving. However, given rising safety concerns, EVs can be further improved before they completely replace conventional vehicles. This paper focuses on reviewing the safety requirements of EVs, especially those powered by Li-ion batteries, based on the mechanical abuse tests from international standards, national standards, regulations, and other laboratory standards, and the safety of occupants based on the regulations and safety programs. Moreover, the publicly reported real-world fire incidents of EVs arising from road crashes were collected and reviewed. The objective is to highlight the gap and challenges between the current safety requirements and real-world fire incidents of EVs and to assist future research in the area of EV safety, particularly for light-duty passenger vehicles. The serious challenges observed include high impact speeds, multiple crashes per incident, multiple barriers of different types involved in one accident, and the post-crash safety (serious injury and death) of occupants and rescue teams. By addressing these challenges, this review will aid researchers and manufacturers working on batteries, EVs, and fire safety engineering to narrow the gap and enhance the safety of future EVs in the areas of battery materials, fire extinguishing, and vehicle body structure.
Introduction
Currently, strict environmental constraints have made the transport sector focus on clean technologies to reduce emissions [1]. In this context, electric vehicles (EVs) have emerged as a promising solution to reduce tank-to-wheel emissions in road transport [2], as they offer zero emissions in driving mode [3][4][5]. Besides, EVs have a higher tank-to-wheel efficiency of about 60% to 80%, compared to 20% to 35% for conventional vehicles [6], and a lower recharge cost, as electricity is cheaper than petrol/diesel fuels.
A few years ago, the automotive industry experienced an evolution of advanced EVs such as battery electric vehicles (BEVs), hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), and fuel cell vehicles (FCVs) [7]. In addition, EVs have shown an uptrend in both demand and production [8][9][10], incorporating lithium-ion battery technologies for storing electrical energy and propelling EVs. The success of the EV market is strongly attributed to the implementation of clean energy/air policies [8,9] (e.g., the C40 policy for the world's megacities to adopt EV technologies in buses, taxis, and other citywide fleets [10][11][12][13]) and improved technologies (energy-efficient motors, higher-capacity energy storage systems) [9]. Despite these extraordinary attributes, concerns such as safety, short driving range, and heavy weight remain to be addressed.
The Concept of Crashworthiness of EVs
The crashworthiness defines the ability of the vehicle structure to sufficiently provide protection in an accident, protecting persons against bodily injury and cargo against damage [29]. Apart from safety, crashworthiness makes a key contribution toward the development of safe, reliable, and comfortable vehicles. An EV is designed using strategies similar to those of conventional vehicles, except that an electrified drivetrain replaces the internal combustion engine in the front compartment, with the addition of a battery pack and structural modifications to support the additional vehicle weight. As a result of these new components and materials, it is critical to investigate EV crash behavior. The EV structure consists of the body frame, chassis, drive system, and energy storage system. An example of these main components and their functions toward crashworthiness is shown in Figure 1. For frontal and rear crashes, the body frame forms the crumple zone to absorb the kinetic energy, lower the acceleration, and protect the occupant and other vehicle components. The presence of an energy storage system with energetic and flammable materials introduces another safety issue once it deforms due to a collision. The deflection of the cells might result in a short circuit within one or more cells, generating heat that could ignite the chemicals within the cell. The resulting flame propagates to the adjoining cells, which may explode and, in turn, endanger the occupants. Since diverse battery chemistries are already commercialized, their proneness to ignition, large heat release, and toxicity draw the attention of researchers to further examine the crashworthiness of EVs. Figure 1. Examples of EV main components, their functions to enhance crashworthiness, and the impact of crashworthiness on EVs. The body frame and chassis are from Liu et al. [30], with permission from Elsevier; the drivetrain and energy storage system are from Navale et al. [29], with permission from Taylor & Francis. The impacts of the levels of crashworthiness are displayed in the right images as an example. The EV shown on the top right is from Yu et al. [31], with permission from Elsevier, and on the bottom right is from Science Focus [32].
Requirements for the Safety of EVs
EVs must be subjected to the same stringent crash testing and meet the same safety standards as conventional cars. Moreover, EV-specific standards must also be met to ensure the safety of specific components. Production standards for EVs apply not only to whole vehicles but also to the safety of individual components; for example, ensuring protection of the energy storage system against crashes, preventing chemical leakage, and isolating the chassis from the high voltage to prevent electric shock. Moreover, the crashworthiness of EVs is prone to be affected by their mass and the distribution of their center of gravity [33].
The energy storage system is the heart of the EV, and its safety may pose a risk to the whole vehicle. With the increased awareness of safety and stringent crash regulations, automakers have also started to consider the safety of the energy storage system for improved crashworthiness. The Li-ion battery (LIB) is widely employed to store the electrical energy used for EV propulsion, and it falls into three form factors, namely cylindrical, pouch, and prismatic (see Figure 2a). The LIB cells are normally stacked together in an appropriate fashion to form a module and later a battery pack in order to produce the intended voltage (Figure 2b,c). The battery pack is then configured in either an underfloor, T-shaped, or rear configuration to meet the space and performance requirements of the EV. The pack configuration is carefully selected to protect the battery as well as possible in combination with a high quality of occupant protection. Nevertheless, most OEMs buy cells and assemble the cells into modules and a pack. Hence, the safety of LIBs in vehicles is a priority of the manufacturers before installation in the EV.
Figure 2. (a) The three LIB cell form factors (cylindrical, pouch, and prismatic); (b,c) cells stacked into modules and battery packs [15]. The photos of packs are from Warner [34], with permission from Elsevier.
For successful applications in EVs, LIBs need to be tested and meet the safety requirements. Three abuse types, namely mechanical, electrical, and thermal abuse, are typically used to represent the field phenomena. To characterize road circumstances, only mechanical abuses on LIBs are explored here. The principal objective of mechanical abuse testing is to evaluate and ensure that the LIB remains safe in all road circumstances. The level of test varies from the cell to the pack or vehicle level depending on the standard or regulation. Table 1 shows the list of international standards and regulations employed in mechanical abuse tests, including:
• Society of Automotive Engineers: SAE J2464 [35], SAE J2929 [36]
• International Organization for Standardization
The letters C, M, P, and V represent the cell, module, pack, and vehicle levels. More details regarding the standards for performing mechanical abuse tests are found in [45]. During abuse, the battery may develop different levels of hazard depending on the type of abuse, the chemistry, and the internal passive protection devices. It is worth noting that the basic requirement is that the LIB does not ignite or explode for a defined time. For instance, "no fire", "no explosion", "no rupture", and "no leakage" are set as the pass criteria for the UN/ECE-R100.02, ISO 12405-3, and UL 2580 standards [45]. The European Council for Automotive R&D (EUCAR) defined different levels of hazard from an abused Li-ion battery and set categories for passed and failed batteries, as shown in Table 2. The green and yellow colors represent a lower level of hazard, while the red colors represent a high level of hazard.
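As a rough illustration of how such pass/fail criteria can be encoded, the sketch below maps the EUCAR hazard severity levels to short descriptions and a simple verdict. The descriptions are paraphrased from the battery abuse-testing literature, and the pass threshold is an illustrative assumption, not a value quoted from Table 2 of this review.

```python
# Hedged sketch: EUCAR hazard severity levels as commonly cited in the
# battery abuse-testing literature. Descriptions are paraphrased, and the
# pass threshold (level <= 4) is an illustrative assumption.

EUCAR_HAZARD_LEVELS = {
    0: "No effect",
    1: "Passive protection activated",
    2: "Defect / damage",
    3: "Leakage (mass loss < 50%)",
    4: "Venting (mass loss >= 50%)",
    5: "Fire or flame",
    6: "Rupture",
    7: "Explosion",
}

def abuse_test_verdict(hazard_level: int, pass_threshold: int = 4) -> str:
    """Classify an abuse-test outcome: levels at or below the threshold
    pass (consistent with 'no fire, no explosion'); higher levels fail."""
    desc = EUCAR_HAZARD_LEVELS[hazard_level]
    verdict = "PASS" if hazard_level <= pass_threshold else "FAIL"
    return f"Level {hazard_level} ({desc}): {verdict}"

if __name__ == "__main__":
    for level in (2, 4, 5, 7):
        print(abuse_test_verdict(level))
```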
The battery pack of an LIB is normally fixed on the frame together with other infrastructure, including module cases, wiring, and the battery management system, to protect the battery pack from external shocks, heat, and vibration. With the advanced design of LIBs, battery packs, and vehicle structures, most EV collisions are not expected to cause destruction of the battery pack [16]. The common approach employed by automakers is to place the LIB packs in the reinforced areas of the vehicle (Figure 3a), farther away from the front and rear impact-absorbing zones [7] (Figure 3b), aiming to reduce the risk of penetration during accidents.
Requirements for the Safety of Occupants
The safety of the EV occupant after a crash can be considered in terms of electrical safety, protection against mechanical deformation of the vehicle structure, and spillage of hazardous chemicals. Electrical safety aims to ensure the prevention of an electric shock to the occupant during or after the crash. The concept of electrical safety relies on providing protection against direct and indirect contact with high-voltage components.
For the case of occupant protection against vehicle structure deformation, pressure on EV makers is exercised through advanced safety programs, termed New Car Assessment Programs (NCAP). The NCAPs and other regulations test the vehicle as a whole [5]. A series of crash tests is usually performed to mimic the most common road incidents, such as frontal impact, side impact, and rear-end crashes. In all crash tests, dummies are employed to quantify the forces and possible injuries a driver may incur in a crash. The data collected from the dummy (triggered by the motion of the dummy during the crash) are evaluated, together with an inspection of vehicle deformation and other on-board safety systems, and scores are given for each specific crash test. The assessments are based on injury criteria. In this section, a comparison of injury criteria based on the NCAPs from the United States, Europe, Latin America, Japan, China, and the ASEAN countries is highlighted. Additionally, UNECE Regulations and Chinese standards are included to provide a critical comparison. The crash tests examine the injury to the head, neck, chest, and legs of the driver and the front passenger. The details of the regulations employed for the safety requirements of occupants during frontal, side, and rear tests are found in [55,59,64,65]. In the safety requirements for occupants during full-wrap frontal, side, and rear collision tests, speed is mostly considered for comparison with recent research and real-world fire incidents.
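To make the notion of an injury criterion concrete, the sketch below evaluates the Head Injury Criterion (HIC), one of the standard measures computed from the dummy's head acceleration in NCAP-style tests. The half-sine test pulse is synthetic and purely illustrative; it is not data from any test cited in this review.

```python
# Hedged sketch of the Head Injury Criterion (HIC15):
# HIC = max over [t1, t2] of ((1/(t2-t1)) * integral(a dt))^2.5 * (t2-t1),
# with acceleration a in g and the window capped at 15 ms.
import numpy as np

def hic(t: np.ndarray, a_g: np.ndarray, max_window_s: float = 0.015) -> float:
    """Brute-force HIC over all sub-windows of the acceleration trace."""
    best = 0.0
    n = len(t)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window_s:
                break
            avg_a = np.trapz(a_g[i:j + 1], t[i:j + 1]) / dt
            best = max(best, avg_a ** 2.5 * dt)
    return best

if __name__ == "__main__":
    # Synthetic 10 ms half-sine head pulse peaking at 60 g (illustrative only)
    t = np.linspace(0.0, 0.010, 101)
    a = 60.0 * np.sin(np.pi * t / 0.010)
    print(f"HIC15 = {hic(t, a):.0f}")
```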
Recent Research on Crashworthiness of EVs
This section focuses on the crashworthiness of EVs at the vehicle level. For several years, researchers have been analyzing the ability of vehicles to protect the safety of occupants during an accident. This has been done by (a) performing real crash tests using vehicle prototypes, and (b) simulating vehicle crashes in computer software using vehicle parameters obtained from experiments. The tests are complex experiments with large non-linear deformations involving multiple iterations of design, prototyping, and crash testing. In general, performing a vehicle crash test requires many man-hours, although it produces accurate results. Moreover, real crash experiments need expert personnel, a sophisticated environment, a prototype vehicle for crashing, sensing and measuring systems, etc.
O'Malley et al. [33] reported crash tests performed by the Insurance Institute for Highway Safety (IIHS) and the Australasian New Car Assessment Program (ANCAP) to analyze the safety of occupants and vehicle dynamics in frontal collisions under a moderate-overlap 40% offset into a deformable barrier at 64 km/h and a small-overlap 25% offset into a rigid barrier at 64 km/h. The IIHS report showed that the area around the vehicle's battery pack remained undamaged in all test scenarios, with no electrical problems. Palvoelgyi and Stangl [66] crashed an electric G-van at 48 km/h, and the results showed detachment of the battery tray from the vehicle and uncontrolled g-loadings to the car. Uwai et al. [7] tested a newly developed Nissan body structure to evaluate the protection of occupants against direct contact and deformation of the battery pack. The vehicle with the new body and battery pack was tested in a frontal test at 64 km/h with a deformable barrier, a 32 km/h side pole test, and an 80 km/h rear test. An impact sensing system was installed to shut down the high voltage during the crash, while fuses were used to protect against electric shock and short circuits. After all tests, neither battery pack deformation nor electrical conduction was observed. The EVERSAFE project [67] investigated the characteristics of the high-voltage system during and after collision tests, identified signs for the discovery of hazards, and examined the current post-crash handling procedures. To do so, a side impact test with a movable pole impactor, equivalent to the Euro NCAP side pole test, was performed on the Mitsubishi i-MiEV, and electrical, mechanical, thermal, and chemical parameters were examined. The test resulted in no obvious damage to the battery pack, and the high voltage outside the battery pack dropped below 60 V as required by vehicle safety regulations. Moreover, the post-test inspection revealed no indications of electrical, thermal, or chemical hazards to the occupants' compartment. With the same objectives, a BMW i3 was impacted at the rear by a crash trolley. Only mechanical damage was observed. The cabin had no intrusion, but the passenger doors opened easily. Unfortunately, no driver dummy was installed; hence, no injury assessment was done.
The small overlap test is executed to mimic a road crash in which the driver has failed to avoid the collision completely. The goal of the test is to assess the strength and energy dissipation ability of the cabin at the outer edges of the vehicle. However, the minimal energy-absorbing strength of the exposed structure compromises the safety of the occupant.
The current common small overlap tests are the 25% and 40% frontal offset tests. The 25% small overlap test performed by Polestar [68] reported the following observations: quick detachment of the front wheel, which increases the chance of structural stack-up and deformation into the cabin; serious damage to the Severe Partial Offset Crash (SPOC) block (a metal bracket on each side of the chassis that protects against metal intrusions during the impact); and an unscathed battery pack.
Crashes between vehicles and round poles are common both inside and outside urban areas. Normally, this kind of crash occurs in populated areas where many utility poles are installed, mainly along the roads, or on roads where signs or traffic lights are fixed. Outside urban areas, this kind of crash often occurs with trees or utility poles located along the roads. It is also among the crashes that contribute significantly to occupant fatalities and serious harm [69]. The front pole test is common for conventional vehicles; see previous tests in [69,70]. Unfortunately, neither the NCAPs nor the IIHS executes a front pole test on EVs. In this test, the vehicle crashes into a rigid round pole at the center of its frontal width while missing both side members that absorb the impact energy. Due to the absence of structural elements in the middle of the frontal width to stop the vehicle and absorb the impact energy, the intrusion rate is elevated, and this emerges as a tough test for automakers and designers.
In general, performing real-world crashworthiness testing is an expensive and time-consuming task [71]; additionally, very few experimental data are publicly available. Luckily, the availability of high-performance computing machines and crash simulation software with parallel computing techniques has revolutionized crash testing. Useful experimental parameters have been adopted to generate simplified numerical models, which are employed to characterize different vehicle designs. For these reasons, various models have been employed to predict vehicle behavior and reduce the necessity of full-scale impact tests. These approaches include lumped parameter models (LPMs), beam element models (BEMs), and finite element models (FEMs). All approaches originate from the principles of structural mechanics and fulfill the conservation of mass, energy, and momentum. However, the selection of a particular approach varies widely depending on the required simplicity.
Syuhri [72] designed an optimal bumper for a racing EV using a shock absorber to dissipate the impact energy during a frontal collision. First, a mathematical model and LPM were derived to obtain the dynamic behavior of the structure in a frontal crash. Second, a hydraulic crash damper was incorporated into the mathematical model, and numerical simulation was applied to obtain the optimal value for the hydraulic crash damper. Finally, a comparison was made between the initial model and the new model with an optimal crash pulse. When crashed at speeds from 20 km/h to 100 km/h, the percentage of energy absorbed was in the range of 88.03% to 64.7%. Moreover, the developed model showed a better response, dissipating 72.9% of the crash energy when crashed at 65 km/h, compared with the initial model. The new model was able to reduce vehicle deceleration, occupant deceleration, and vehicle deformation by 25% to 28.1% compared to the initial model [72].
Despite the useful results in terms of vehicle design and crash evaluation, many challenges of LPMs have been reported in the literature. For instance, Munyazikwiye et al. [71] reported that the main drawback of an LPM is its reliance on the availability of calibration data prior to undertaking the crash analysis. That is, the spring characteristics of the LPM need to be obtained either from a full-scale impact test or from an FEM model [46]. In addition, the validity of LPMs is limited to data similar to those of their adopted models and the same test speeds, which opens room for further research [71].
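To make the lumped-parameter idea concrete, the sketch below integrates a minimal single-degree-of-freedom spring-damper crash model of a vehicle striking a rigid barrier. The mass, stiffness, and damping values are illustrative placeholders, not calibrated parameters from any of the studies cited here.

```python
# Minimal single-degree-of-freedom lumped-parameter crash model:
# a vehicle of mass m hits a rigid barrier; the crush zone is idealized
# as a linear spring (k) and damper (c). All parameter values are
# hypothetical placeholders, not calibrated data from the cited studies.

m = 1500.0        # vehicle mass [kg] (assumed)
k = 8.0e5         # crush-zone stiffness [N/m] (assumed)
c = 1.5e4         # crush-zone damping [N·s/m] (assumed)
v0 = 64 / 3.6     # impact speed: 64 km/h converted to m/s

dt = 1e-5         # explicit integration time step [s]
x, v = 0.0, v0    # crush displacement and velocity
peak_crush, peak_decel = 0.0, 0.0

while v > 0.0 or x > 0.0:
    a = -(k * x + c * v) / m        # Newton's second law during contact
    v += a * dt
    x += v * dt
    peak_crush = max(peak_crush, x)
    peak_decel = max(peak_decel, abs(a))
    if x <= 0.0:                    # vehicle rebounds off the barrier
        break

print(f"peak crush : {peak_crush * 1000:.0f} mm")
print(f"peak decel : {peak_decel / 9.81:.1f} g")
```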
In recent years, explicit non-linear finite element analysis (FEA) has undoubtedly emerged as the most widely renowned modelling approach in EV crashworthiness due to its high accuracy in material specifications, stresses, and deformation during impact [73]. The aspects analyzed include the structural behavior of the vehicle and the mechanisms for protecting against high voltage. Belingardi and Obradovic [74] developed an FEM model of an impact attenuator to absorb the kinetic energy during the frontal impact of an EV Formula Student car body. The improved structure showed an average deceleration of about 14 g, which meets the requirements of the SAE 2008 rules, a value lower than 20 g along the Y-axis. Arifin and Gunawan [75], using Abaqus software, presented the design and testing of the impact attenuator of the Formula SAE FG17 Garuda UNY. An average deceleration of 15.908 g was achieved by the impact attenuator, meeting the 2017 Formula SAE regulation. Zhang, Zhou, and Xia [76] developed an FEM simulation model of a small lightweight EV (SLEV) to examine the effects of the front wheels on the crash load transfer and load path, intending to quantify the crash energy absorbed through tire deformation.
The SLEV model underwent a full-wrap frontal test with a rigid barrier, a 40% offset frontal test with a deformable barrier, and a small overlap test with a rigid barrier. In general, the front tires absorbed a significant portion of the kinetic energy, comparable to that of other front structural members. Some examples of FEM models developed by researchers to evaluate crashworthiness are shown in Figure 4. Roland et al. [77] used the LS-DYNA explicit FE code to analyze a detailed model of a battery pack and mimic its response under the Euro NCAP side pole test at a test speed of 29 km/h into a rigid pole. Their crashworthiness results showed that a hexagonal pack is superior in occupancy rate and energy absorption characteristics compared with trapezoidal and rectangular packs. Li et al. [78] performed a full frontal crash test according to a Chinese regulation (GB11551-2014) on an FEM model to explore the lightweight design and crashworthiness of an EV. The numerical results of Chen, Cheng, and Kun [79] showed that composite materials are beneficial for energy-absorbing elements. Schäffer et al. [80] performed a study on the door sill structure through multi-level optimization to secure the battery pack floor during a side pole crash. Better energy absorption from the improved door structure provided enhanced protection of the battery pack. The developed model was validated against the Euro NCAP test at 29 km/h. The work in [81] optimized the flexible structure of the battery pack to enhance the crashworthiness of EVs during head-on collisions. The results showed a significant reduction in occupant compartment acceleration when using battery packs as energy-absorbing components. Setiawan and Salim [82] presented an FEM model to examine the performance of energy-absorbing materials for securing the battery pack against an oblique side pole crash at 32 km/h. The use of aluminum foam as a filler for the door sill structure showed an adequate battery pack deformation of 9.4 mm compared to the 15.3 mm limit.
Zhang et al. [83], using LS-DYNA software, presented FEM simulation results of a converted EV by examining the layout of the battery pack followed by topology optimization. Full-wrap frontal and side impacts were used to evaluate the modified EV. The results showed improved crashworthiness of the modified EV while reducing its weight. Sakurai and Suzuki [59] observed that a used conventional car, when converted to an EV, may have a poor distribution of mass affecting the center of gravity.
To sum up, despite the long modeling times involved, FEM remains the most popular approach for EV crashworthiness in recent studies. The dynamics of EV deformation have been critical for analyzing the safety of the occupants and the battery pack. Improving the front-end structure of the EV and using energy-absorbing components can enhance the protection of the occupant, whereas lightweight design and composite materials can limit the damage to the battery pack floor. However, most of the simulations do not consider electrocution or chemical spillage and their hazards. In an EV battery, the chemicals pose the danger of fire ignition while electrocution endangers the occupant; hence, they need to be considered in future research.
Real-World EVs' Fire Incidents
In this section, some of the publicly reported EV fire incidents that occurred on roads are summarized. Only the fire incidents that took the form of mechanical abuse have been considered. The accidents were grouped according to their type of mechanical abuse, such as penetration, crash, drop, rollover, and immersion, and are listed in Table 3. Table 3. Real-world EVs' fire incidents in the form of mechanical abuses.
Penetration:
• Tesla Model S caught fire after colliding with a huge piece of metal debris, which destroyed one of the battery modules. The driver was unharmed. The event occurred close to Seattle. [84]
• Tesla Model S went over a tow hitch, which damaged the vehicle's undercarriage and caused an electrical fire; the driver was not harmed. The event took place in Tennessee. [85]
Drop:
• Tesla Model S jumped over a cliff, collided with two boulders at the canyon's bottom, broke into pieces, and exploded, killing the driver. It happened on Malibu Canyon Road. [19]
Crash:
• Tesla Model S was driven at 177 km/h through a roundabout when it crashed into a concrete barrier on the left wheel, then the right wheel, and finally a tree. [85]
• Tesla Model S crashed into several vehicles and was torn in half after hitting a light pole in West Hollywood; it ignited and seriously injured the driver and two officers, and half of the car got stuck in a synagogue. [18]
• Tesla Model S collided with a tree at high speed, bounced around, and then exploded, killing the driver and passenger in Indianapolis. [86]
• Tesla Model S went off the road, crashed into a house, and set fire to a garage. The motorist was taken to the hospital in Lake Forest, California, with non-life-threatening injuries. The fire was put out with a significant amount of water. [87]
• Tesla Model S exploded instantaneously after hitting a road barrier in Austria. The driver survived the accident. [88]
• Tesla Model X traveling at 120 km/h collided with two other vehicles, ignited instantaneously, and killed the driver. The accident happened near Mountain View, California. [89]
• Tesla Model S drove off the road at 185 km/h, collided with a concrete wall, caught fire, and killed two teens while wounding another near Fort Lauderdale, Florida. There were two re-ignitions. [90]
• Tesla Model S hit a big vehicle at 97 km/h in South Jordan, Utah, USA. The frontal damage was severe, but there was no post-crash fire. [91]
• BYD e6, hit from behind at 180 km/h, crashed into a tree, was pierced 1 m deep, erupted into flames, and killed three people in Shenzhen, China. [92]
• In Guangzhou, China, a Tesla Model X slammed into the center fence at 76 km/h and was promptly rear-ended by a Ford vehicle at 71 km/h; the airbag and rear door failed to deploy, and passengers were evacuated from the backseat through the front doors. [93]
• Fire broke out from an EV two hours after it was hit by a train near Østfold, Norway. [92]
• In Moscow, a Tesla Model S collided with a tow truck on a highway, causing a fire. Three occupants sustained significant post-crash injuries. [94]
• After colliding with a barrier in Taoyuan City, a Tesla Model 3 caught fire and killed the driver. [95]
• At 160 km/h, a Tesla Model S collided with a Honda carrying five passengers, striking a light post and severing the car in half. The automobile collided with numerous more vehicles before coming to a stop near West Hollywood. [96]
• Tesla Model S slammed into a palm tree and caught fire in Davie, Florida. [97]
• A 2020 Tesla Model X SUV collided with a 2017 Subaru Legacy car on Lake Zurich Road near Barrington, injuring all drivers and quickly catching fire. [98]
• Tesla Model S crashed through a barrier, collided with a tree, and exploded. The driver was unharmed. To put out the fire, 12 tons of water were utilized. [99]
• In Connecticut, a Tesla Model 3 rear-ended a police cruiser. No one was hurt. [100]
• Tesla Model X veered off the road and crashed into two trees on Fremont's Thornton Avenue. The driver suffered only minor injuries. The fire was put out using 13 tons of water. [101,102]
• When attempting to park, a Tesla Model X abruptly accelerated, colliding with a parked Toyota Tacoma and swiping its side. [103]
• On the highway near Beijing, a Tesla Model S rear-ended a Volkswagen Santana's right-rear bumper. [104]
• Tesla Model S collided with a bus at low speed after hitting a light post with its front edge and then its right side at high speed. Nobody was hurt. The battery pack was not harmed. [105]
• In Taiwan, a Tesla Model 3 collided with a vehicle while traveling at highway speed. [106]
• A white car slammed into the side of a Tesla Model X at great speed. [107]
• Tesla Model S hit a road sign post, a steel fence, and a tree. Nobody was hurt. The battery pack was not harmed. It happened in Guangdong Province. [108]
• A Nissan LEAF collided with another car. The driver was unharmed. No blaze was ignited. [109]
• Tesla Model 3 was traveling at 80 km/h when a crossing truck lost its tire, causing it to collide with the Model 3 at a combined speed of 160 km/h. The driver was unharmed. There was no harm to the battery. [110]
• In Seattle, Washington, a Tesla Model 3 was rear-ended by another car on the freeway. Neither a fire nor a major injury occurred. [111]
• A Volvo big rig rear-ended a Tesla Model S, sparking a fire. The driver was unharmed. [112]
• In Baltimore-Washington, a Tesla Model 3 was rear-ended by a Honda Civic. [113]
• A Tesla Model S collided with another car on a highway in Arizona, USA. [114]
• In Malibu, a Tesla Model 3 was rear-ended by a Ferrari. [115]
• A Tesla Model S was hit on the side by a conventional car in Contern, Luxembourg. [115]
• Tesla Model X was struck by a crashed aircraft in Sugar Land, Texas. [116]
• In Oregon, a Tesla Model X collided with a Toyota RAV4 and broke a light pole. [117]
• Tesla Model X collided with a Ford Transit van in Yiwu, Zhejiang Province. [118]
• In Laguna Beach, California, a Tesla Model S collided with a stopped police cruiser. [118]
• In Broward County, Florida, a Nissan GTR traveling at 137 km/h smashed a Tesla Model X in half. [119,120]
• A Tesla Model 3's right wheel collided with a barrier at 120 km/h, causing significant damage. There were no injuries or fires. The scene was in Northern Greece. [121]
• In Karlsruhe, Germany, a stopped Tesla Model S was struck on one side by a 7.5-ton truck traveling at 60 km/h and collided with a traffic signal post. There was no fire, but the resident received minor injuries. [122]
• Tesla Model S was involved in a head-on accident followed by a side impact in the Vallvidrera tunnels in Barcelona, Spain, when it was hit by a failing station wagon. [123]
• In Amsterdam, a Tesla Model S collided with a tree and caught fire. [25]
Immersion:
• Hurricane Sandy caused a fire in a Toyota Prius and a Fisker Karma submerged in seawater in New Jersey, USA. [92]
• Hundreds of new Maserati automobiles caught fire as saltwater surged and inundated the port late at night, causing the salt water to ignite fires in the car batteries that spread to other vehicles. [124]
Rollover:
• Tesla Model S collided with a barrier on a central reserve on a highway in Ticino, Switzerland, flipped over, and burst into flames. The driver was killed in the collision. [125]
• In Pennsylvania, a Tesla SUV collided with a guard rail and then a concrete wall before rolling onto its roof. The driver was hurt in the collision. [102]
• In Shanghai, a Model 3 crashed and overturned, but the occupant survived. [126]
• Tesla Model 3 crashed into a car at 113 km/h and rolled in the air several times before landing upside down. [127]

From the publicly reported incidents, several phenomena can be identified. For instance, in the first two incidents in Table 3, the bottoms of the battery modules were hit, causing electrical short circuits and consequently explosions (see an example of an explosion in [84]). However, the first case involved large metallic debris, while the second involved a three-ball steel tow hitch, which may have different dimensions and material specifications. Moreover, the Teslas' battery packs were in motion when they underwent those hits. Tesla has since improved the undercarriage of its packs by introducing a three-shield titanium body to prevent intrusions. For the case of a crash, vehicle speed is the critical factor for the safety of the vehicle. Another observation from Table 3 is the presence of multi-crashes, whereby more than one object is crashed into or a vehicle crashes more than once. Referring to the scenarios reported in Table 3, many of them were caused by the high driving speeds of EVs. As a consequence of high speeds, an EV can crash into more than one object before being stopped. For instance, to mention a few, in incident no. 3 the vehicle was driven at 177 km/h through a roundabout, collided with a concrete wall on the left wheel, later on the right wheel, and finally hit a tree. In incident no. 4, the vehicle crashed into several vehicles and later struck a light pole; see the split car in [96]. Another vehicle, in incident no. 13, was driven at 71 km/h, collided with the fence, and was rear-ended by another car at 76 km/h. The speed did not look high, but the crash ended in a blaze; see [93]. In another scenario, incident no. 22, the vehicle drifted off the roadway and collided with two trees. In the next scenario, incident no. 23, while trying to park, the driver accidentally accelerated the vehicle, hit a nearby vehicle, swiped the side of a parked one, and crashed into two small round poles; fortunately, the vehicle was at moderate speed. In incident no. 34, the vehicle first hit a road sign pole, then a metallic fence, and ended up against a tree. More multi-crashes can be observed in Table 3. In general, out of the 54 incidents reported in Table 3, 44 are based on crash abuse, and 11 of those 44 involved multi-crashes. For an EV tested with a single crash at a time, undergoing multiple crashes per incident endangers its safety. The situation becomes quite alarming when the occupant is choked by guardrails penetrating the windshield; see examples in [108]. Another distinctive frontal crash among the others is one in which two vehicles had a head-on collision while both were in motion; see [98]. A similar crash occurred in [116]. Apart from crash abuses, immersion is a silent enemy by which many EVs may be destroyed in a short time; see [92,128].
Different types of barriers have been involved in the incidents reported above, such as concrete barriers, guardrails, light poles or road sign poles, trees, vehicles, houses, and combinations of these. Examples of a vehicle colliding with a barrier are reported in [93,98], with a concrete block in [88], and with a tree in [99]. Of the reported crashes, 33 incidents involved a single barrier. In terms of barriers, 8 crash incidents involved trees; 7 involved round poles (5 light poles, 1 road sign pole, and 1 traffic light pole); 7 involved roadblocks (6 concrete barriers and 1 guardrail); 29 involved vehicle-to-vehicle crashes (rear, frontal, or side impacts); and 1 involved a vehicle crashing into a house. Another distinct class of crashes involves impacts on areas not normally tested, such as the roof. For example, in incident no. 45 the vehicle flipped and landed on its roof with all its weight (see [126]); in incident no. 36 the vehicle's bonnet was hit by a tire moving at 160 km/h; and in incident no. 43 the front edge of the vehicle was hit by the crashing aircraft.
The high temperatures and toxic gases involved can make a battery pack explosion dangerous to EV occupants and nearby property. For example, in incident no. 5 the driver's body was burnt beyond recognition after his vehicle plunged off a cliff. In incident no. 8, as reported in [87], the battery fire emerged after the vehicle collided with a concrete barrier, became enormously severe, and produced large amounts of poisonous gases; five firefighting trucks and 35 firefighters were needed to put out the fire, and copious amounts of water were employed to cool the battery. Large amounts of water were also used in incidents no. 20 and 22. The burning of a house set alight by a crashed EV was reported in incident no. 7. Comparatively, the regulations and safety programs perform full-wrap frontal tests at speeds between 40 km/h and 70 km/h, in line with recent research, whereas the real-world fire incidents involved speeds between 70 km/h and 190 km/h. Consequently, the post-crash damage was reported as extremely severe, including the demise of occupants and the destruction of the vehicle and nearby property. Since fatal crashes are aggravated by high speeds, most EVs on the global market are prone to fatal crashes when driven at high speed. As examples, the Porsche Taycan Turbo S [129], Tesla Model S [130], 2019 Nissan LEAF [131], and 2019 Volkswagen e-Golf SE [114] can accelerate from 0 to 100 km/h in 2.8 s, 3.2 s, 7.9 s, and 9.6 s, respectively. The serious implications of high-speed crashes include extreme destruction or splitting of vehicles, burning of nearby property when an explosion occurs, and fatalities. Therefore, given the fast acceleration and high speeds of EVs, the safest EVs are those fulfilling the injury criteria with the highest margin. Achieving a high margin, however, comes at the cost of handling elevated impact energy, modifying the vehicle structure, and providing more space for dissipating the impact energy.
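To put the gap between the regulatory test speeds and the reported incident speeds in perspective, the sketch below computes translational kinetic energy, which grows with the square of speed; the 2000 kg vehicle mass is an assumed nominal EV mass, not a figure from the reviewed incidents.

```python
# Illustrative only: impact kinetic energy scales with the square of speed.
# The 2000 kg vehicle mass is an assumed nominal EV mass.

def kinetic_energy_kj(mass_kg: float, speed_kmh: float) -> float:
    """Return translational kinetic energy in kilojoules."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v**2 / 1000.0

MASS = 2000.0  # kg, assumed
for speed in (40, 70, 190):  # regulatory test range vs. highest reported incident speed
    print(f"{speed:>3} km/h -> {kinetic_energy_kj(MASS, speed):7.0f} kJ")

# A 190 km/h crash carries (190/70)^2 ≈ 7.4 times the energy of a 70 km/h
# full-wrap test, which is consistent with the severe post-crash damage reported.
```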
Multiple Crashes per Incident
In the recently reported EV fire incidents, multi-crashes have emerged as a repetitive pattern and have caused disastrous outcomes; for example, 11 of the 53 incidents reported in Table 3 involved multi-crashes. In part, multi-crashes can be attributed to high speed, which causes a crashed vehicle to bounce between barriers; examples have been elucidated in Section 5 paragraph 3. In addition, various types of barriers and crash positions were involved. However, simulating a multi-crash test could be challenging in terms of crash speed, barriers, and assessment of safety criteria. A proper test matrix combining an appropriate crash speed, the number and type of barriers to be used, and the crash positions deserves attention in order to address these parameters carefully.
Types of Barriers
The main barriers involved in many of the reported real-world EV incidents have been explained in Section 5 paragraph 6. However, two major challenges have been raised regarding the barriers. First, round poles can split the EV when it is slammed into them at high speed, as reported in incidents no. 4 and 48. Since slicing can occur on any side of the EV (front or side, depending on the crash position), further enhancement is needed to strengthen vehicle protection. Second, there is a high possibility of moving barriers.
With the growing pool of EVs on the highway, vehicles are prone to crash while both are in motion, which also makes rollovers possible.
Injury Criteria
The major challenge among the safety requirements presented in Section 3 is the protection of occupants against electric shock or fire/explosion from a damaged battery pack. Notably, a crash may compromise the electrical safety measures and elevate the risk of electric shock. To date, as an enhancement of safety technology, many EVs are fitted with a safety switch to isolate the high voltage from the battery pack in the event of a crash. For example, the concept of a safety switch has been introduced in the Mitsubishi i-MiEV [7] and in Mercedes-Benz hybrids and EVs [132]. Since a crash can compromise the safety of the battery itself and cause electrolyte leakage, UNECE R94 and R95 do not allow electrolyte spillage inside the passenger cabin. For leakage outside the cabin, electrolyte spillage is limited to 7% (in UNECE R94 and R95) or to a maximum of 5 L within 30 min (in FMVSS 305) for open-type traction batteries [132].
However, the new type of battery, LIB, employed in EVs, pose a serious safety concern due to their energetic and flammable materials, which are able to generate plenty of energy that can ignite a fire and later explode. The post-crash requirements for frontal and side impact tests permits the fuel leakage rate to 5 × 10 −4 kg/s, equivalent to 30 g/min [132]. Nevertheless, these requirements may seem to not be compatible with LIB where the electrolyte vaporizes in air. LIB varies widely with chemistry having diverse quantities and constituents. Consequently, the resulted thermal and fire hazards after explosion may also depend on the battery chemistry. To improve the safety requirement due to battery pack explosion, this work is in-line with the suggestion provided in [132], that further research could be allowed to establish appropriate limits for different battery types.
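As a quick sanity check of the equivalence quoted above, the 5 × 10⁻⁴ kg/s leakage limit converts to grams per minute as follows.

```python
# Unit conversion of the post-crash fuel leakage limit quoted above.
rate_kg_per_s = 5e-4
print(rate_kg_per_s * 1000 * 60, "g/min")  # -> 30.0 g/min, the figure cited in the text
```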
Summary and Future Outlook
This paper focuses on the safety of EVs and their occupants in crash incidents. The safety requirements from regulations, safety programs, and recent research were reviewed and then compared with real-world fire incidents. The key outcomes can be summarized as follows:
- Most EVs are designed using conventional vehicle design strategies, such as a body frame and chassis, with the inclusion of an electrified powertrain and energy storage system. Replacing the internal combustion engine in the front compartment with an electrified drivetrain, the addition of a heavy battery pack, and the structural modifications necessitate exploring the crash behavior of EVs.
- Currently, crashworthiness defines the ability of the vehicle structure to sufficiently protect the occupant against body injury (mechanical hazard) only, without including thermal and fire hazards.
- The energy storage system is the heart of the EV, and its safety issues pose a risk to the whole vehicle. Amid the EV uptrend, LIB technology has gained popularity, but its safety concerns have attracted much attention. For successful applications in EVs, the requirements of international standards, national standards, regulations, and other laboratories have been reviewed. Moreover, the requirements for the safety of occupants based on regulations and safety programs were elaborated.
- The recent EV fire incidents were compared with the safety requirements outlined above. Several challenges were identified, including high impact speed, multiple crashes per incident, multiple barriers of different types involved in an incident, and the post-crash safety (extreme injury and demise) of occupants.
For improved crashworthiness, future EV models need to be well advanced in the following areas:
- Isolation of the high voltage at the moment of a crash, to protect the first and second responders across different EV models.
- The possibility of externally discharging the electrical energy stored in the battery pack of a crashed EV.
- Enhancing battery technology by replacing flammable and toxic materials with safer ones. Due to the serious mechanical damage inflicted on the battery pack during a crash, understanding the failure process and damage tolerance of the battery becomes crucial; that knowledge is important for designing a safe battery pack for an EV. The knowledge gap regarding the mechanism of battery reaction to crush loading, together with the coupled electrical, chemical, and thermal behavior, needs to be closed.
- Strengthening the battery pack enclosure so that it does not burn easily and no fumes penetrate into the cabin. Severe crashes in the field still deform the battery pack within the crush-safe zone and lead to explosion. More effort should be focused on strengthening the side-area reinforcement, as it is challenging to provide crumple zones there. The critical parameters include the energy absorption capability of the sides of the battery enclosure, the arrangement of the battery enclosure and batteries, the effect of battery materials and the crash response among adjacent batteries, and the frame stiffness of the compartment.
- EV manufacturers and regulatory organizations should widen their studies of the vehicle structure, materials, and control systems in order to attain higher rankings.
- Acquiring real-time vehicle information and incorporating it into the outgoing call of the vehicle's advanced automatic collision notification (AACN) system, for accurate identification of the cause of the crash incident.
- Encouraging a large number of real crash experiments and incorporating the observed real-world crash phenomena, in order to develop inclusive test procedures and evaluation techniques that will be useful for examining the crash safety of EVs and for developing appropriate safety regulations. Current crash regulations for EVs are a good starting point for establishing new ones.
- In the near future, small light EVs (SLEVs) are expected to increase significantly and emerge as a solution, especially for urban mobility, mainly because of their small physical dimensions. The attributes of SLEVs include short front and rear overhangs and capacity for fewer than 5 passengers. Their crash characteristics must be well understood to ensure the sufficient safety of SLEVs in the EV crash regulations. Furthermore, an obstacle to the broader commercialization, acceptance, and further growth of SLEVs is the current regulatory fragmentation, which does not consider the crash safety of SLEVs.
Conclusions
Currently, EVs have become a green mode of transport, helping to save the globe from imminent catastrophes caused by global warming. However, with the steady growth of the automotive industry, EVs raise open questions regarding safety. The key areas that are prominent in the crashworthiness of EVs have been presented and elaborated. The serious challenges observed include high impact speed, multiple crashes per incident, multiple barriers of different types involved in an accident, and the post-crash safety (serious injury and demise) of occupants and rescue teams. In addressing these challenges, this review will aid researchers, engineers, and manufacturers working on EVs and in fire safety engineering to narrow the gap and enhance the safety of future EVs in the areas of the battery, fire extinguishing, and the vehicle's body structure. To conclude, this paper has provided a clear picture of the safety of EVs and their occupants and of potential areas that are in need of further research.
|
2021-09-27T20:15:08.547Z
|
2021-08-06T00:00:00.000
|
{
"year": 2021,
"sha1": "ccdeffea7e4f06e40ab5b3f794b7e0ea3648de3c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/16/4802/pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "a57e9c0a6c0361368b802768df4128a6b7d4a60d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
}
|
222158776
|
pes2o/s2orc
|
v3-fos-license
|
Development of a fluid‐bed coating process for soil‐granule‐based formulations of Metarhizium brunneum, Cordyceps fumosorosea or Beauveria bassiana
Granule‐based products of solid state fermented micro‐organisms are available for biocontrol. Because liquid fermentation has several advantages, we investigated fluid‐bed coating with liquid fermented biomass.
Introduction
In Europe, agriculture based on chemical plant protection agents is currently in transition to integrated pest management (IPM) due to changes in European legislation (Lamichhane et al. 2017). Under the European Directive 2009/128/EC, the implementation of the principles of IPM, in which non-chemical methods must be preferred, is obligatory. Furthermore, a significant reduction of active substances approved in the European Union (EU) is occurring (Santin-Montanya et al. 2017). Especially for soil-dwelling pest insects, for example in potatoes or corn, no chemical pesticides are available or registered in Germany for application in the field (Bundesamt für Verbraucherschutz und Lebensmittelsicherheit 2020, accessed 11 May 2020). Therefore, biocontrol agents (BCAs) play an important role in IPM practice (Lamichhane et al. 2017). In the EU, seven strains of Beauveria bassiana, one of Cordyceps fumosorosea and one of Metarhizium anisopliae are approved as active substances (European Commission, accessed 11 May 2020). The B. bassiana strains 147 and NPP111B005 are formulated as granules for direct application between the rachises of palm trees (SANTE-2016-10424 Rev 1). In the EU database of active substances, Isaria fumosorosea Apopka strain 97 is listed as approved. This Apopka strain 97, which was formerly described as Paecilomyces fumosoroseus, has been reassigned to Cordyceps javanica after genetic reanalysis by Kepler et al. (2017). The approved M. anisopliae strain BIPESCO5 or F52, originally named Ma43, is registered in granule-based products under product names such as GranMet GR™, BIO1020™, Taenure™, Met 52™ or TickEx G™ and is intended for soil application. The M. anisopliae products contain 3.3 g active substance per kg product, which is comparable to 10¹⁰ CFU per kg. This strain was reclassified as Metarhizium brunneum. Since 2017, the product Attracap™ (M. brunneum strain CB15-III) has been allowed for use in Germany under emergency authorisation (Article 53 of Regulation (EC) No 1107/2009). Recommended application rates for granule-based products of entomopathogenic fungi are 122 kg ha⁻¹ in fruits (Met 52™) for control of, for example, black vine weevil; 30-50 kg of Melocont™ with three applications per year for control of white grubs; and, for control of wireworms, the larval stages of click beetles, 30 kg of the M. brunneum-based product Attracap™. In Europe, the prices per kg product range between 10 and 40 €. Because of the high product costs per ha, alternative production and formulation strategies are needed to reduce product costs and finally make biocontrol more attractive.
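A rough cost-per-hectare calculation, combining the application rates and the 10-40 € per kg price range quoted above, illustrates why product costs motivate alternative formulation strategies; the 40 kg figure for Melocont™ is taken as the midpoint of the quoted 30-50 kg range, and the combinations are illustrative only.

```python
# Illustrative cost-per-hectare arithmetic from the rates and prices quoted above.
rates_kg_per_ha = {
    "Met 52 (fruits)": 122,
    "Melocont (one of three applications)": 40,  # midpoint of 30-50 kg, assumed
    "Attracap": 30,
}
price_eur_per_kg = (10, 40)  # quoted European price range

for product, rate in rates_kg_per_ha.items():
    lo, hi = rate * price_eur_per_kg[0], rate * price_eur_per_kg[1]
    print(f"{product}: {lo}-{hi} EUR per ha")
```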
In general, after application of granule-based formulations in the soil, conidia can directly infect the insects, or conidia, microsclerotia or mycelium must grow out from the carrier and resporulate to produce the infectious conidia (Jackson and Jaronski 2009). The infection of insects by entomopathogenic fungi starts with contact of the fungal spores with the cuticle (Ortiz-Urquiza and Keyhani 2013). There, the conidia produce a germ tube and penetrate the insect. This penetration starts mostly at openings of the cuticle, such as the spiracles; however, the fungi can also penetrate the cuticle by a combination of enzymes and mechanical pressure of the germ tube. After successful penetration, the fungus grows in the insect's hemolymph and forms blastospores, spreading throughout the target organism. After the death of the insect, the fungus sporulates on the surface of the insect under wet conditions or inside the insect under dry conditions (Arthurs and Thomas 2001b).
The industrial production of granules of the strain Ma43 is based on solid state fermentation, whereas the product Attracap™ is based on liquid fermentation. The selection of the production system highly depends on the fungal capacity to grow and sporulate in liquid or in solid state culture. Because production in solid state fermenters requires a longer fermentation time, carries a higher risk of contamination, and is more complicated to scale up industrially, we decided to focus only on liquid fermentation. In liquid fermenters, fungi can produce mycelium (Rombach 1989), mycelial pellets (Reinecke et al. 1990), microsclerotia (Behle and Jackson 2014), submerged conidia (Jenkins and Thomas 1996) or blastospores (Kleespies 1993); often, a mixture of different growth stages is produced. The current downstream processing for the fermented biomass includes separation, concentration and filtration-based purification, followed by formulation of the fungal propagules. The formulation strategy has to be selected according to the desired application strategy. In our study, we selected fluid-bed coating because it is a suitable method to produce granules (Stephan and Zimmermann 2001).
In this process, the sprayed liquid evaporates and forms a film on the substrate. Such a product is very uniform and non-dusting, which is very important for the later approval of the product. In summary, the overall goal was to develop and optimize a granule-based formulation technology for entomopathogenic fungi based on liquid fermented biomass. Temperature adjustments, biomass concentration and type of biomass, as well as additives, were investigated to optimize the conidiation on the surface of the granule. Conidiation is the fundamental requirement for these granules to achieve efficient pest control in the field.
Fungal strains
Metarhizium brunneum strain JKI-BI-1339 (formerly described as M. anisopliae and named Ma43, F52 or BIPESCO 5) was isolated in 1971 in Austria from Cydia pomonella by Müller-Kögler. Based on this strain, several products like GranMet, Met52 or BIO1020 were developed by various companies. The Cordyceps fumosorosea strain JKI-BI-1496 was isolated in 1971 in Germany from C. pomonella by Müller-Kögler, and the Beauveria bassiana strain B.b. 007 was isolated in Georgia from soil of the high Caucasus mountains and is deposited under IMI # 501799 at the CABI Genetic Resource Collection and under B.b. 007 at the culture collection of entomopathogenic fungi at JKI.
For JKI-BI-1496 and B.b. 007, submerged spores (blastospores and/or submerged conidia) were partly separated after fermentation by filtration through three layers of muslin gauze. Spores can be formulated for spray applications, while the remaining biomass, including mycelium and spores, was used for the following experiments. To remove remaining residues of the culture media, the filter cake was resuspended in 100 ml of 0.9% NaCl solution and centrifuged at 25°C and 15 433 g for 10 min. The supernatant was discarded and the pellet was again resuspended. After three centrifugations, the biomass in the NaCl solution was homogenized by a disperser (TP 18/10 Ultra-Turrax; Janke & Kunkel KG, Stauffen, Germany; attachment with a diameter of 17.5 mm) at a maximum speed of 20 000 rev min⁻¹ for 5 min. Thereafter, the residual moisture content was determined by a moisture determination balance (Ma30; Sartorius, Göttingen, Germany) and the dry matter of the suspension was adjusted with 0.9% NaCl solution to final fungal biomass concentrations of 0.3, 0.03, 0.003 and 0.0003%, and for further experiments with JKI-BI-1339 additionally to 0.7%. These biomass concentrations were used for coating millet in a fluid-bed drying system.
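A minimal sketch of the dry-matter adjustment step, assuming w/v percentages and that the 0.9% NaCl diluent contributes no fungal dry matter, so the usual C1·V1 = C2·V2 dilution relation applies; the example values are illustrative, not measurements from the study.

```python
# Dilution arithmetic for adjusting the suspension to a target dry-matter
# concentration; assumes the NaCl diluent adds no fungal dry matter.

def nacl_volume_to_add(v_start_ml: float, c_start_pct: float, c_target_pct: float) -> float:
    """Volume of 0.9% NaCl (ml) needed to dilute from c_start to c_target (w/v %)."""
    if c_target_pct > c_start_pct:
        raise ValueError("target concentration must not exceed the measured one")
    return v_start_ml * (c_start_pct / c_target_pct - 1.0)

# Example: 100 ml of suspension measured at 1.2% dry matter, target 0.3%:
print(nacl_volume_to_add(100.0, 1.2, 0.3))  # -> 300.0 ml of 0.9% NaCl to add
```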
Pretreatment of the millet grain
Millet grain for food consumption (Alnatura, Bickenbach, Germany) was autoclaved at 121°C for 20 min and dried in a drying cabinet at 100°C to a water content below 5%.
Thermotolerance
The thermotolerance was investigated under wet and dry heat. For experiments under wet heat conditions, 900 µl of the biomass suspension containing 0.03% biomass was filled into 1 ml reaction tubes. These tubes were transferred into a water bath at temperatures of 25, 50 and 70°C for 6 min. Before and after incubation, the tubes were briefly mixed on a vortex mixer. Because the biomass contained a mixture of mycelium and spores, the viability of the micro-organisms was determined by the most probable number (MPN) method. The principle of the MPN method is that suspensions of micro-organisms are diluted down to a concentration of less than one viable propagule per well; consequently, fungal growth within a well is caused by a single viable propagule. 120 µl of MPB was pipetted into 96-well microtiter plates, and 30 µl of the temperature-treated samples was added to the MPB in four replications. In all, 16 subsequent dilution steps (1:5) were made. The microtiter plates were incubated for 7 days at 25°C. Based on the number of turbid wells, the MPN was calculated with the help of a computer program (Most Probable Number Calculator ver. 4.04 © 1996, Albert J. Klee, Risk Reduction Engineering Laboratory, United States Environmental Protection Agency, Cincinnati, OH). The experiment was repeated six times, independently in time.
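For readers without access to the cited MPN calculator, the sketch below shows a maximum-likelihood MPN estimate under the standard assumption that a well turns positive with probability 1 − exp(−λv), where λ is the propagule density and v the inoculated volume; the plate layout in the example is illustrative and not the exact 16-step layout used here.

```python
# Maximum-likelihood MPN estimate from a serial-dilution plate.
import math
from scipy.optimize import minimize_scalar

def mpn(volumes_ml, n_wells, n_positive):
    """MPN (propagules per ml) maximizing the binomial likelihood per dilution."""
    def neg_log_lik(lam):
        ll = 0.0
        for v, n, p in zip(volumes_ml, n_wells, n_positive):
            prob = 1.0 - math.exp(-lam * v)          # P(well positive)
            ll += p * math.log(max(prob, 1e-300)) - (n - p) * lam * v
        return -ll
    res = minimize_scalar(neg_log_lik, bounds=(1e-9, 1e12), method="bounded")
    return res.x

# Example: three 1:5 dilution steps, 4 wells each, 30 µl (0.03 ml) inoculum
vols = [0.03, 0.03 / 5, 0.03 / 25]
print(mpn(vols, [4, 4, 4], [4, 3, 1]))  # estimated propagules per ml
```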
For dry heat conditions, 900 µl of each biomass concentration was filled into 1 ml Eppendorf tubes. In all, 30 millet grains were added to each tube and mixed using a vortex mixer. Thereafter, the millet grains were placed on a filter paper in a petri dish, followed by incubation for 6 min in drying cabinets at temperatures of 25, 50 and 70°C. The six minutes are equivalent to the duration of the coating procedure in the fluid-bed dryer. After incubation, the millet grains were transferred onto water agar (1.8% (w/v) agar-agar) and incubated at 25°C. After 1 week of incubation, the proportion of millet grains covered with fungus was determined. This experiment was repeated six times, independently in time, with three replicates.
Fluid-bed coating
For the fungal strains JKI-BI-1496 and B.b. 007, biomass concentrations of 0.3, 0.03, 0.003 and 0.0003% were used. For the fungus JKI-BI-1339, initially concentrations of 0.03 and 0.3% were compared, and in an additional experiment concentrations of 0.3 and 0.7% biomass. In all, 20 ml of each biomass suspension was sprayed onto 100 g of sterile millet in a laboratory fluid-bed dryer (Strea-1; Aeromatic-Fielder AG, Bubendorf, Switzerland; nozzle diameter: 1 mm; container volume: 16.5 l). The flow rate for the suspension was 3.3 ml min⁻¹, ensuring a constant spray onto the millet at a pressure of 1.5 bar over a period of 6 min. The inlet temperature of the fluid-bed dryer was set to 50°C, and the volume flow of the drying air to 130 m³ h⁻¹. After drying, three sets of 20 granules per concentration were transferred onto water agar. After incubation for 1 week at 25°C, granule colonization was determined visually. This experiment was repeated six times, independently in time.
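A quick consistency check of the coating parameters quoted above: 20 ml of suspension at 3.3 ml min⁻¹ yields the roughly 6 min spraying time that the thermotolerance experiments were designed to mimic.

```python
# Spray duration implied by the quoted coating parameters.
suspension_ml = 20.0
flow_ml_per_min = 3.3
print(f"spray duration: {suspension_ml / flow_ml_per_min:.1f} min")  # -> ~6.1 min
```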
Conidiation on the granules in soil
The biological activity of the granule is caused by the sporulation of the fungus on the granule. Due to the insufficient outgrowth of strain JKI-BI-1339, this experiment was conducted only with the fungi JKI-BI-1496 and B.b. 007, using granules treated with the biomass concentration of 0.03%. For this experiment, soil (Fruhstorfer Erde Typ T; Hawita Gruppe GmbH, Vechta, Germany) was sieved through mesh widths of 13.5 and 3.0 mm and mixed with 20% (w/w) sand. After autoclaving three times at 121°C for 20 min and drying at 60°C for 2 weeks, the residual water content was approximately 3%. For each treatment, 50 ml of soil was mixed with 0, 5, 10, 15 or 20 ml of autoclaved deionized water. The residual moisture content of the soil was determined using a residual moisture meter (Ma30; Sartorius). A centrifuge tube (15 ml) was filled with soil to the 1.5 ml mark. Here, the first coated granule was placed, the tube was further filled with soil up to the 2.5 ml mark, and another granule was put on top. This was continued for the 3.5, 4.5, 5.5 and 6.5 ml marks. The uppermost granule was not covered with soil and served as an optical indicator for the growth and sporulation of the fungus. To prevent the soil from drying, the tubes were sealed with parafilm. For each treatment with a different soil wetness, three additional centrifuge tubes were filled. The centrifuge tubes were incubated for 7 days at 25°C. After this, the uppermost granule, together with the surrounding soil, was removed. The remaining content of the centrifuge tube was poured into a glass bottle. 10 ml of 0.1% (v/v) Tween 80 was filled into the emptied centrifuge tube and shaken to suspend remaining spores. These 10 ml, plus an additional 40 ml of 0.1% (v/v) Tween 80, were added to the corresponding bottles. The bottles were placed on a reciprocal shaker and shaken for 10 min at the highest setting. Finally, based on the specific form and shape of the fungal conidia, the spore concentration was determined using a hemocytometer. The experiment was repeated six times with three replications using the same material as in the fluid-bed drying experiments.
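A minimal sketch of converting a hemocytometer count into conidia per granule for the washing protocol above; the Neubauer chamber factor of 10⁴, and the counts, dilution, and number of washed granules used in the example, are assumptions for illustration, not values from the study.

```python
# Conidia per granule from a hemocytometer count, for a known wash volume.
NEUBAUER_FACTOR = 1e4  # assumed: mean count per large 1 mm^2 square -> cells per ml

def conidia_per_granule(mean_count_per_square, dilution, wash_volume_ml, n_granules):
    """Scale the counted concentration to the full wash volume, per granule."""
    conc_per_ml = mean_count_per_square * dilution * NEUBAUER_FACTOR
    return conc_per_ml * wash_volume_ml / n_granules

# Illustrative numbers: mean of 60 conidia per square at a 1:100 dilution,
# 50 ml total wash volume, 5 granules remaining after the uppermost was removed.
print(f"{conidia_per_granule(60, 100, 50.0, 5):.2e} conidia per granule")  # -> 6.00e+08
```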
Optimization of granules based on strain JKI-BI-1339
Additional experiments were conducted with the strain JKI-BI-1339 to optimize the outgrowth on the granule by pre-coating nutrients within the fluid-bed coating process.
Screening of nutrients for better sporulation
100 µl of single nutrient solutions of peptone, malt extract, lactose, raffinose pentahydrate (Fluka, Buchs, Switzerland), trehalose (Carl Roth GmbH), sorbitol (Sigma-Aldrich, Buchs, Switzerland) or glucose (Merck) (all 20%, w/v) was pipetted into half of the cavities of 96-well microtiter plates. Into the other half, 50 µl of peptone (20%, w/v) plus 50 µl of lactose, raffinose, trehalose, sorbitol or glucose (all 20%, w/v) were added. Afterwards, each cavity was inoculated with 10 conidia suspended in 100 µl of deionized and autoclaved water. The microtiter plates were incubated at 25°C for 2 weeks, during which the fungus grew and sporulated on the surface of the medium. To count the number of conidia formed, 150 µl of 0.1% Tween 80 was added to each cavity. The number of conidia was determined with the help of a hemocytometer. Three time-independent repetitions with three replicates were set up.
Comparison of different fungal materials and nutrients
For the production of conidia, JKI-BI-1339 was mass-produced for 2 weeks on a medium containing autoclaved rice and barley in a ratio of 5:1 in a laboratory solid state fermenter (L-03; Prophyta, Malchow, Germany) at 25°C. Afterwards, pure conidia were harvested with a mycoharvester (5b; MycoHarvester, Ascot, UK). For the production of mycelium and submerged spores, a medium containing 2% glucose, 2% yeast extract, 1.5% corn steep and 0.4% Tween 80 was selected. Submerged spores were separated from the mycelium in a sieve shaker using sieves with mesh sizes of 400, 180 and 20 µm; only submerged spores were able to pass the 20 µm mesh. The filter cake remaining on the 180 µm sieve was removed and processed as described. The biomass was adjusted as described to a concentration of 0.7%. The filtrate was centrifuged and the pellet was resuspended with 0.9% NaCl solution to a final concentration of 1 × 10⁶ submerged spores or conidia per ml. For pre-coating with different nutrients, 120 ml of nutrient solution (malt extract or peptone, each 20%, or malt extract + peptone, each 10%) was sprayed on 100 g millet using the settings described above. After that, 15 ml of the fungal suspension was sprayed on the nutrient-coated granules using the same settings. Deionized water was used as control. Per treatment, 20 granules were placed on water agar, and the numbers of contaminated granules and of granules colonized by the entomopathogenic fungus, as well as the conidiation, were measured after 7 days at 25°C. Granules were defined as contaminated when bacterial or atypical fungal growth was visible on the granule. The experiment was repeated three times.
Statistical analysis
Data were statistically analysed with the software SAS System for Windows ver. 9.4. The Shapiro-Wilk test was applied for testing normality. The homogeneity of variance was checked by the Levene test (P < 0.1). For separation of means, data were compared with the Student-Newman-Keuls test (SNK) (P < 0.05). For analysing the influence of additives on granule colonization and conidiation, the GLIMMIX procedure based on residual likelihood was applied (P < 0.05). For all other experiments with data showing heteroscedasticity of variance, the non-parametric Mann-Whitney U-test or the Kruskal-Wallis test was chosen. Using the exact methods in the NPAR1WAY procedure (two-sided), data were compared pairwise (Wilcoxon, exact, P < 0.05).
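For illustration, a Python/scipy analogue of the decision flow described above (the study itself used SAS 9.4): test normality and variance homogeneity first, then branch to a parametric or non-parametric comparison. Thresholds follow the text (Levene at P < 0.1, pairwise tests at P < 0.05); the data are invented.

```python
# Sketch of the normality -> homogeneity -> test-choice decision flow.
from scipy import stats

groups = [
    [45.0, 33.0, 48.0, 41.0, 39.0, 44.0],  # e.g. % colonization, treatment A
    [8.0, 11.0, 6.0, 9.0, 12.0, 7.0],      # treatment B
]

normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
homoscedastic = stats.levene(*groups).pvalue > 0.1

if normal and homoscedastic:
    # parametric route (the paper used SNK for mean separation after this step)
    print("ANOVA:", stats.f_oneway(*groups))
else:
    # non-parametric route: pairwise two-sided Mann-Whitney U test
    print("Mann-Whitney:", stats.mannwhitneyu(*groups, alternative="two-sided"))
```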
Thermotolerance under wet heat
Heat treatment for 6 min at 50°C resulted in a significant reduction of MPNs for JKI-BI-1339 and JKI-BI-1496; the values were two orders of magnitude lower in comparison to the 25°C treatment. For B.b. 007, no significant reduction was measured when these two temperatures were compared. Treatment at 70°C resulted in no viability for strains JKI-BI-1496 and B.b. 007. For JKI-BI-1339, some viable propagules were still counted even after incubation at 70°C (Table 1).
Thermotolerance under dry heat
After heat treatment, the number of colonized millet grains was determined. For JKI-BI-1339 and JKI-BI-1496, no significant difference in granule colonization was obtained at temperatures of 25-70°C, whereas for B. bassiana strain B.b. 007 it was significantly reduced at 70°C (Fig. 1). For M. brunneum strain JKI-BI-1339, granule colonization was highly dependent on the biomass concentration: only a biomass concentration of 0.3% resulted in nearly 100% granule colonization. For the strains JKI-BI-1496 and B.b. 007, even concentrations of 0.0003% biomass resulted in nearly 100% granule colonization.
Survival rate of the fungi after fluid-bed drying
For JKI-BI-1339, two sets of experiments were conducted. In the first experiment, granules coated with a biomass concentration of 0.3% showed a granule colonization of 45%, and in the second experiment of 33%. When the biomass concentration was lowered to 0.03%, granule colonization was reduced to 8%. In the second experiment, increasing the biomass to 0.7% resulted in a granule colonization of 48%; granule colonization was thus not significantly improved (Fig. 2). In contrast to JKI-BI-1339, even a concentration of 0.03% of strain JKI-BI-1496 resulted in 100% granule colonization. A further 10-fold reduction of biomass resulted in a significant reduction of granule colonization; at 0.0003%, only 40% of the granules were mycosed (Fig. 3). The highest granule colonization was achieved with strain B.b. 007: even at the lowest concentration of 0.0003% biomass, granule colonization was still more than 60% (Fig. 3).
Conidiation on the granules in soil
Besides granule colonization, conidiation is an important factor for efficient biocontrol of soil-dwelling pest insects. These experiments were conducted only with strains JKI-BI-1496 and B.b. 007, using the lowest biomass concentration (0.03%) that gave 100% granule colonization, and showed that conidiation on the granule depends on soil wetness. For JKI-BI-1496, a water content of 40% resulted in more than 5 × 10⁹ conidia per granule, which was more than five times higher in comparison to a water content of 3% (Fig. 4). For B.b. 007, the highest conidiation was achieved in the wettest soil, with approximately 2 × 10⁹ conidia per granule (Fig. 4).
Optimization of granules based on strain JKI-BI-1339
Because the number of colonized granules was too low for JKI-BI-1339, we tested the potential for optimizing the fluid-bed drying process by pre-coating the granule with nutrients. In the first experiment, the combination of malt extract with peptone resulted in the highest sporulation on liquid media (Fig. 5). Based on these results, malt extract, peptone or a mixture of both was coated on the granule before the fungal biomass was applied. Additionally, in this set of experiments we compared aerial conidia, submerged spores, and a mixture of mycelium with submerged spores. For submerged spores and mycelium, the addition of malt extract, alone or in combination with peptone, had a significant positive effect on granule colonization, which was around 20% higher than in the untreated control (Fig. 6). Peptone alone had a significant negative effect on granule colonization and an unwanted positive effect on the number of contaminated granules. Nearly 100% granule colonization was achieved by coating the granule with conidia; for conidia, no additional positive effect was achieved by adding nutrients to the fluid-bed drying process. The results on conidiation on the granule indicate that although granule colonization was optimized, conidiation was not increased: for none of the fungal materials was higher sporulation achieved by adding nutrients (Fig. 6).
Discussion
The aim of this work was to develop a technology for formulating liquid fermented biomass of entomopathogenic fungi for the control of soil-dwelling insect pests. In this work, we developed granules based on fluid-bed coated biomass of three different genera of entomopathogenic fungi. In the developed system, autoclaved millet was used as the core particle, onto which a thin layer of fungal biomass was coated. Millet has the advantage that it is cheap, its main component is starch, and its size and stability are optimal for application with common granule application technology in agriculture (H. Lehner, pers. communication). Additionally, starch in the form of, for example, corn meal or wheat flour can be used as a natural source of CO2, which attracts soil-dwelling pest insects (Bernklau et al. 2004). Starch can also be used as a carbohydrate source by the fungus; a previous study (2010) observed that millet powder can be utilized by the fungus and improved the sporulation of Pandora nouryi on alginate pellets. In general, fluid-bed coating offers the possibility to alter and improve various characteristics of core particles. The challenge is to achieve a constant coating quality, especially during process up-scaling (Parikh 2017); therefore, a homogeneous coating material is required. When fungi are cultivated in liquid culture, the growth behaviour depends strongly on the cultivated strain, the cultivation media and the cultivation conditions (Kleespies 1993). In liquid, the M. brunneum strain JKI-BI-1339 grows mainly as mycelium. Under specific conditions, it produces only filamentous mycelium or compact pellets, which were the basis for the first BIO1020™ product (Reinecke et al. 1990). The potential of JKI-BI-1339 to sporulate in liquid is limited, and after optimization a maximum of 3.25 × 10⁷ blastospores per ml was formed (Kleespies 1993), which is unsuitable for commercial production. In contrast, this strain can easily be produced in solid state fermenters with high conidiation on the substrates. Based on this material, the two products Met52 Granular™ (conidia on a granular matrix) and Met52 EC™ (liquid-based formulation of conidia) are registered in a range of countries as plant protection products (Novozymes, 2020).
Beauveria bassiana strain B.b. 007 can easily be produced in liquid and on solid media. In liquid culture, B.b. 007 grows in the mycelial phase and also produces up to 1.2 × 10⁸ submerged spores per ml. B. bassiana strain B.b. 007 (IMI # 501799) was registered in 2019 by the Ministry of Environment Protection and Agriculture of Georgia, National Food Agency, under the trade mark Bover-GE (Registration # 3142). It is expected that the market potential of B.b. 007 will rise when new cost-efficient formulations become available. Cordyceps fumosorosea strain JKI-BI-1496 can easily be produced in liquid culture with a spore yield of up to 4 × 10⁸ submerged spores per ml (D. Stephan, unpublished). Because of its interesting biological activity and uncomplicated production in liquid culture, this strain has a high potential for further product development.
To obtain a homogeneous biomass suspension that can pass through the nozzle of the fluid-bed dryer, the biomass was homogenized. The MPN values indicate that the biomass is still viable after homogenization. Assuming that an incubation of 6 min at 25°C does not affect fungal viability, our MPN numbers indicate that 1 g of biomass of JKI-BI-1339 and JKI-BI-1496 contains approximately 10¹⁰ MPN per g dry weight; for B.b. 007, the value is around 10 times lower. Because in biological control living organisms are applied in the field, the organism has to survive the thermal and dehydration stress of the formulation process. Our results confirmed that liquid fermented fungi are sensitive to thermal stress. When fungal biomass suspended in water was heated for only 6 min, even at a temperature of 50°C the viability of strains JKI-BI-1339 and JKI-BI-1496 was reduced to 4% compared with incubation at 25°C. This thermal sensitivity within a formulation process corresponds to the results of Stephan and Zimmermann (2001) and Horaczek and Viernstein (2004). We also confirmed that strains differ in their thermotolerance (Rangel et al. 2005). B. bassiana B.b. 007 tolerated temperatures of 50°C, but only for JKI-BI-1339 was limited growth detected at 70°C. Under the microscope, only mycelium was seen, but it cannot be excluded that, in very low concentrations, microsclerotia, a melanized structure that possibly survives better than mycelium itself (Willetts 1971), were formed under oxidative stress (Georgiou et al. 2006). The second set of thermal experiments was more closely adapted to the fluid-bed coating conditions. Millet was covered by a thin but undefined film of biomass suspension and incubated under different temperature conditions. During this incubation phase, water was able to evaporate and consequently cool the desiccating biomass, with a clear effect on granule colonization of the millet. Under these conditions, no negative effect of an incubation temperature of 50°C was detected for any of the strains, and 70°C significantly affected only B.b. 007. The cooling effect of evaporating water is used in several drying systems: Stephan and Zimmermann (1998) demonstrated that submerged spores of Metarhizium acridum can be spray-dried even at high inlet temperatures without loss of viability. When different biomass concentrations were tested, only for JKI-BI-1339 did granule colonization decline, independently of the temperature treatment. Because the MPN of the biomass suspensions of JKI-BI-1339 and JKI-BI-1496 was nearly the same, the results indicate that the biomass of these two strains has a different desiccation tolerance. It is expected that the fragmented mycelium is more desiccation sensitive than the remaining submerged spores, which were partly present in the biomass of strains JKI-BI-1496 and B.b. 007. Drying of mycelium to obtain formulations for biocontrol has been conducted by several authors (Mc Cabe and Soper 1985; Rombach 1989; Roberts 1990, 1991; Krueger et al. 1992), but in contrast to our experiments, all of these authors described slow drying processes. Our results from the thermal experiment were transferred to the fluid-bed coating system. Again, for JKI-BI-1339 the granule colonization was low: it was not possible to achieve colonization rates of more than 50%, even by increasing the biomass concentration to a maximum of 0.7% dry weight, which was the maximal pumpable and sprayable concentration.
On the other hand, for the two other strains, concentrations of 0.003% dry weight were sufficient to obtain nearly 100% colonized granules. We know from other entomopathogenic fungi that a production of 1.5% dry weight fungal biomass in liquid culture is realistic (Bernhardt, pers. comm.; Ortiz-Urquiza et al. 2010). Based on our fluid-bed coating settings and results, 1 l of fermentation culture of JKI-BI-1496 or B.b. 007 with 1.5% dry weight fungal biomass would be enough to produce 25 kg of granules with around 100% granule colonization. On this basis, fluid-bed coating can be considered an economically interesting technique. Although Gotor-Vila et al. (2017) achieved good results with freeze-drying of Bacillus amyloliquefaciens, this drying process was considered too time-consuming and cost-intensive; based on their positive results, they preferred a combination of spray drying and the conventional fluid-bed drying technique. Reddy et al. (2014) treated oil-coated granules with conidia of entomopathogenic fungi, an easy and suitable system for lipophilic conidia but not suitable for hydrophilic biomass. Several authors described formulations containing a mixture of growing substrate with conidia (Ricaño et al. 2013; Zhang et al. 2019), microsclerotia with clay (Behle and Jackson 2014; Behle and Goett 2016), or combinations with attractants such as pheromones (Kabaluk 2014; Todd Kabaluk et al. 2015) or yeast (Brandl et al. 2017). Whenever growing substrate is part of the granule formulation, the whole granule has to be dried to obtain a storable product. The same applies to other formulations, for example, alginate pellet formulations. As a result, the formulation costs are expected to be higher.
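A back-of-envelope sketch of the scale-up arithmetic behind the 25 kg estimate above, using the lab-scale coating ratio from this study (20 ml of suspension per 100 g millet); the spray concentration is left as a free parameter, and the quoted 25 kg figure is reproduced with a 0.3% (w/v) spray suspension, which is an inference, not a stated value.

```python
# Scale-up arithmetic: granule mass producible from one fermentation batch.

def granule_yield_kg(culture_l: float, culture_dw_pct: float,
                     spray_conc_pct: float,
                     spray_ml_per_batch: float = 20.0,   # from this study
                     millet_kg_per_batch: float = 0.1) -> float:
    biomass_g = culture_l * 1000.0 * culture_dw_pct / 100.0        # dry biomass from fermenter
    biomass_per_batch_g = spray_ml_per_batch * spray_conc_pct / 100.0
    return biomass_g / biomass_per_batch_g * millet_kg_per_batch

print(granule_yield_kg(1.0, 1.5, 0.3))   # -> 25.0 kg of coated granules
print(granule_yield_kg(1.0, 1.5, 0.03))  # -> 250.0 kg with a 0.03% suspension
```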
However, granule colonization is just one quality criterion. The fungus has to resporulate in the soil to infect target insects. Conidiation of fungi depends on environmental conditions such as nutrient resources and humidity (Arthurs and Thomas 2001a; Nuñez-Gaona et al. 2010). Jackson and Jaronski (2012) demonstrated that microsclerotial granules with higher moisture levels produced more conidia immediately after drying, while granules with low moisture produced more conidia after 12 months of storage. In our experiments, we compared different water contents in a sterile soil substrate mixed with sand by adding water. For agricultural soils, the water availability, described as field capacity, and the available water capacity are the important parameters, which are highly influenced by the loam content. Therefore, further experiments with different types of grown soil have to follow. Our results underline that conidiation takes place on the granule in the soil as well as on pure water agar (data not shown); consequently, the nutrient source of millet is sufficient for sporulation. This corresponds to the results of Pandey and Kanaujia (2008).
Additionally, in our system, sterile soil was used to avoid contamination effects, which is far from the reality in the field. Nevertheless, our results indicate that JKI-BI-1496 has a clear optimum at 40% water content. Additional experiments with JKI-BI-1339 indicate that for this strain the optimum is below 27% water content (data not shown). These results underline the differences between strains.
For the C. fumosorosea strain JKI-BI-1496 and the B. bassiana strain B.b. 007, the results were acceptable, so that further experiments in the field can follow. For the M. brunneum strain JKI-BI-1339, the fungal growth on the granules was too low, so further optimization steps followed. Sharma et al. (2020) demonstrated that millet as a granular carrier improved the efficacy of entomopathogenic fungi under field conditions; the reason is unknown. Kim et al. (2020) confirmed that millet-based solid cultured granules of M. anisopliae were effective against soil-dwelling pest insects. Conidiation is highly influenced by the composition of nutrients. Nitrogen and carbohydrate sources are important ingredients, and their ratio as well as their quantity influences growth and sporulation at the strain level (Gao et al. 2007; Uzma and Gurvinder 2009). Soy-peptone is a commonly used nitrogen source. Li and Holdom (1995) compared different sugars in terms of mycelial growth and sporulation of M. anisopliae, with the best effects from D-mannose, maltose and D-glucose. Because of the bee toxicity of mannose (Staudenmayer 1939), this nutrient is not suitable for the development of granules in agriculture and was not included in our studies.
Our results confirm that malt extract, with its main component maltose, resulted in high sporulation. In contrast, D-glucose was not sufficient in our experiments. As already described by Ottow and Glathe (1968), malt extract can be used as a single nutrient component for fungal growth. In our experiments, the best sporulation was obtained with a combination of malt extract and peptone. Further optimization steps should follow, because several aspects, for example, the C/N ratio of the nutrients, are important factors influencing fungal growth and sporulation (Safavi et al. 2007). When the granule was pre-coated with malt extract, peptone or a combination of both, the granule colonization rate was influenced. Pre-coating with malt extract, alone or in combination with peptone, resulted in a significantly higher granule colonization when fungal biomass or submerged spores were used. These results indicate that there is potential to optimize the coating process by adding specific nutrients. The fluid-bed coating process was not done under sterile conditions; therefore, the additional information about the risk of contamination was important. Especially when peptone was pre-coated alone, the contamination rate was more than 40%. Micro-organisms compete for nutrients, and peptone seems to support bacterial growth (Hibbing et al. 2010). These bacterial contaminations were not further investigated, but the results give an important hint of what may occur in unsterile soil. Sugar components are known to protect micro-organisms during a drying process (Stephan and Zimmermann 1998; Stephan et al. 2016; Gotor-Vila et al. 2017; Bisutti and Stephan 2020); therefore, it is likely that malt extract also has a protective effect during the drying process. Aerial conidia are amenable to simple drying techniques with better storability (Bidochka et al. 1987; Lane et al. 1991), and therefore it is likely that conidia survive the coating process better than submerged spores or mycelium. Granules coated with conidia were colonized up to 100% even without any pre-coating. These results demonstrate that conidia can also be used for coating the granule. However, it must be taken into account that aerial conidia are produced on solid substrates, and it is counterproductive to suspend these nearly dry conidia in water and dry them again. In our experiments, none of the pre-treated granules resulted in a higher conidiation after coating with submerged spores, mycelium plus submerged spores, or conidia. Steyaert et al. (2010) stated that the transition from fungal mycelium to spore is determined by the interplay of environmental cues, whereby one factor alone is not necessarily sufficient to evoke change. For the genus Trichoderma, the relative ratio of carbon and nitrogen has a strong influence on conidiation and growth: Gao et al. (2007) have shown that for Trichoderma, higher amounts of nitrogen favoured mycelial growth, whereas nitrogen limitation favoured conidiation when sucrose was used as the carbon source. Possibly, the higher numbers of colonized granules and their conidiation can be explained by carbohydrate and nitrogen availability, but this does not explain the differences between conidia and liquid fermented biomass. It should also be mentioned that conidiation was measured after 1 week of incubation; it cannot be excluded that within this week the fungus did not reach its maximal level of conidiation, which possibly explains the low numbers of conidia per granule in comparison to JKI-BI-1496 and B.b. 007.
The idea of the pre-coating experiments was to add nutrients. Additionally, nutrients like sugars can be used as protectants during the drying process (Stephan and Zimmermann 1998; Horaczek and Viernstein 2004; Stephan et al. 2016). This could possibly be achieved by coating a mixture of biomass with nutrients.
In conclusion, the results clearly indicate that liquid fermented biomass of entomopathogenic fungi can be the effective ingredient of granules for biocontrol of soil-dwelling pest insects. Fluid-bed coating is an efficient process for coating thin layers of fungal material onto the surface of granules. In further experiments, the efficacy of this type of granule has to be tested in the field. Finally, the economic and technical feasibility of the described process has to be verified.
|
2020-10-06T13:33:23.710Z
|
2020-09-28T00:00:00.000
|
{
"year": 2020,
"sha1": "1b9ec320c585e768e9ce2509899574d5b51dd4b1",
"oa_license": "CCBY",
"oa_url": "https://sfamjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jam.14826",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "7f64773897983e7d46076acbc8797d9ce3ada797",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
3045481
|
pes2o/s2orc
|
v3-fos-license
|
A low frequency persistent reservoir of a genomic island in a pathogen population ensures island survival and improves pathogen fitness in a susceptible host
Summary

The co-evolution of bacterial plant pathogens and their hosts is a complex and dynamic process. Host resistance imposes stress on invading pathogens that can lead to changes in the bacterial genome enabling the pathogen to escape host resistance. We have observed this phenomenon with the plant pathogen Pseudomonas syringae pv. phaseolicola, where isolates that have lost the genomic island PPHGI-1, carrying the effector gene avrPphB, from the chromosome are infective against previously resistant plant hosts. However, we have never observed island extinction from the pathogen population within a host, suggesting the island is maintained. Here, we present a mathematical model which predicts different possible fates for the island in the population; one outcome indicated that PPHGI-1 would be maintained at low frequency in the population long term, if it confers a fitness benefit. We empirically tested this prediction and determined that PPHGI-1 frequency in the bacterial population drops to a low but consistently detectable level during host resistance. Once PPHGI-1-carrying cells encounter a susceptible host, they rapidly increase in the population in a negative frequency-dependent manner. Importantly, our data show that mobile genetic elements can persist within the bacterial population and increase in frequency under favourable conditions.
Introduction
Bacterial genomes can evolve rapidly in different environmental conditions, typically through mechanisms such as horizontal gene transfer (HGT) and loss of mobile regions of DNA. Potential mobile regions of DNA include plasmids, transposons and genomic islands (GIs). GIs are regions of the genome that are present in some strains of bacteria but not others, are normally associated with specific integration sites in the genome such as tRNA loci and contain genes that may be responsible for recombination and mobility such as integrases and pili (Hacker and Kaper, 2000; Hacker and Carniel, 2001; van der Meer and Sentchilo, 2003). A number of GIs have been demonstrated to have roles in the virulence of their hosts, for example, the islands PAPI-1 and PAPI-2 contribute significantly to the virulence of Pseudomonas aeruginosa PA14 in acute pneumonia and bacteremia models (Harrison et al., 2010) and loss of island SPI-4 from Salmonella enterica serovars Typhimurium and Enteritidis attenuates oral virulence in mice (Kiss et al., 2007). A number of plant pathogens also carry GIs (Arnold et al., 2003), for example, the hrp/hrc (hypersensitive response and pathogenicity/conserved) genes, which encode a type 3 secretion system used to deliver effector proteins, are carried on a GI with tripartite mosaic structure in Pseudomonas syringae (Alfano et al., 2000). Similarly, the virulence gene hopAB1 (virPphA), which is essential for virulence of P. syringae pv. phaseolicola (Pph), is found on a GI carried by a 154 kb plasmid (Jackson et al., 1999), and in Streptomyces acidiscabies, the production of a phytotoxin called thaxtomin relies on genes carried on a 26 kb GI (Bukhalid et al., 2002). GIs also continue to be identified, for example, a collection of GIs has been recently identified in P. syringae pv. actinidiae, which causes the devastating disease of bacterial canker on kiwifruit (Butler et al., 2013; McCann et al., 2013).
Pph is a plant pathogen that causes halo blight disease of beans, the molecular genetics of which have been studied for many years, and, as such, it is often used as a model plant pathogen. Pph has been subdivided into a number of races based on the gene-for-gene interaction between effector genes in the pathogen and resistance genes in the host (Taylor et al., 1996a,b). An example of this gene-for-gene interaction is between the effector gene avrPphB (also named hopAR1) carried by Pph race 4 strain 1302A, which matches resistance gene R3 and induces a rapid resistance reaction called the hypersensitive response (HR) or effector-triggered immunity (ETI) in bean cultivar Tendergreen (TG) (Jenner et al., 1991; Jones and Dangl, 2006). Induction of the HR leads to programmed cell death at the infection site, and results in the development of antimicrobial conditions due to the production of reactive oxygen and nitrogen species (Fones and Preston, 2012), and secondary metabolites such as phytoalexins (Mur et al., 2008). AvrPphB is a cysteine protease (Shao et al., 2002) that targets the protein serine/threonine kinase PBS1 in Arabidopsis, which in turn triggers cytoplasmic immune receptor RPS5-specified ETI. However, AvrPphB also has a virulence function because, in the absence of RPS5, it inhibits pathogen-associated molecular pattern-triggered immunity (PTI) by cleaving additional PBS1-like kinases (Zhang et al., 2010).
Inoculation of 1302A in TG results in the evolution of a virulent strain derived from 1302A (named RJ3), due to the selection pressure on the pathogen to evade the HR triggered by avrPphB. avrPphB resides on a 106 kb GI designated PPHGI-1 (Jackson et al., 2000; Pitman et al., 2005). The PPHGI-1 integrase xerC gene enables the excision of PPHGI-1 from the chromosome to form a circular molecule, leading to down-regulation of avrPphB transcription. PPHGI-1 also has the potential to self-replicate and can be transferred between strains of Pph by in planta transformation (Lovell et al., 2009). During extended infection within the resistant plant, PPHGI-1 is lost from the genome of Pph 1302A through natural selection, causing a change in host range and the production of water-soaked lesions typical of disease. Comparison of the growth rates of bacteria with and without PPHGI-1, inoculated at equal densities in the susceptible host, shows no advantage to carrying or losing the island (Pitman et al., 2005).
We have built upon this phenomenon to develop an experimental evolution system that allows us to follow the dynamics of pathogen evolution. This relies on passaging of bacteria within plants, for example to follow the loss of PPHGI-1 from Pph strain 1302A in leaves of TG. In doing so, we have observed that PPHGI-1 is rapidly lost from the bacterial population, with around 95% of the population having lost the island by week five (Pitman et al., 2005; Lovell et al., 2011). However, over a number of independent experiments we have never observed 100% PPHGI-1 loss from the bacterial population.
We hypothesised that there is a threshold at which strains expressing AvrPphB either: (i) trigger plant cell death leading to island loss or (ii) are at a low enough frequency that the effects are suppressed by the dominant island-less genotype, thus leading to island maintenance. We developed a mathematical model to determine the circumstances under which PPHGI-1 would be maintained in the bacterial population long term; the model predicted that to be maintained the island must provide a growth advantage to the host bacterium, even when present at low frequencies in a mixed population. We then used an experimental approach to test the predictions resulting from the model. These data show that although host defence leads to significant island loss from the bacterial pathogen population, the island is able to persist until its bacterial host reaches a favourable environment upon which it rapidly increases in the bacterial population.
Results
PPHGI-1 is lost from the genome of Pph 1302A::NCR during exposure to the HR
In our previous work, we have shown the loss of PPHGI-1 through passaging of Pph 1302A in leaves of the resistant bean cultivar TG. This was done by screening strains for the ability to cause the HR and disease in bean pods: strains with the GI trigger the HR, those without the GI cause a water-soaked lesion. Although this is a very sensitive assay, it is extremely time consuming, so to enable longer-term studies we developed a rapid test for PPHGI-1 loss using antibiotic resistance screening. Pph 1302A::NCR is a previously created strain that contains a kanamycin resistance cassette inserted into a predicted non-coding region (NCR) of PPHGI-1 (Pitman et al., 2005). To confirm that strain Pph 1302A::NCR behaved the same as wild-type Pph 1302A with respect to PPHGI-1 loss (and also to control for simple loss of the NCR insertion), we checked for PPHGI-1 loss using our bean pod assay. We passaged Pph strain 1302A::NCR through the resistant bean cultivar TG six times and recorded island loss both by assaying on TG bean pods and by monitoring the frequency of kanamycin-resistant bacteria within populations. Both methods gave the same result, with the highest observed loss of PPHGI-1 being 98% after six weeks (Fig. S1). Thus, the results for 1302A and 1302A::NCR are congruent. This also correlated with our previous work, in which we have never observed 100% loss of PPHGI-1 over this time scale (Pitman et al., 2005; Lovell et al., 2011). Therefore, we concluded that monitoring the loss of kanamycin resistance from 1302A::NCR should allow us to carry out longer-term passaging studies without reliance on bean pod assays.
Mathematical predictions for the long-term dynamics of PPHGI-1 retention
Our passaging experiments indicated that a subpopulation of PPHGI-1-carrying cells was maintained in the population at a low frequency. We hypothesised that PPHGI-1 conferred a fitness benefit to those cells during the suppression of antimicrobial conditions manifested by plant resistance, leading to island retention within the population. Firstly, to attempt to predict the effect of longer-term exposure to a resistant plant environment on the ability of Pph 1302A to retain PPHGI-1, we used a mathematical model (detailed in Supporting Information) to investigate the possible long-term outcomes. Figure 1 summarises the key biological implications derived from our mathematical model. Essentially, the mathematical model predicts three qualitatively distinct long-term (permanent) biological outcomes, depending on the relative values of the parameters in the model:
i. Bacterial extinction. All bacterial populations (1302A and RJ3) die out, with the host plant surviving at its carrying capacity K (in the absence of bacteria the plant cell density would reach a steady-state value of K) (region (i) in Fig. 1);
ii. Loss of the GI. The bacterial populations carrying the GI (1302A) die out but there is co-existence of the GI-free bacterial population (RJ3) with the host plant, the latter surviving at a level below its carrying capacity K (region (ii) in Fig. 1);
iii. Retention of the GI. There is co-existence of bacterial populations both with the GI (1302A) and without the GI (RJ3), with the host plant surviving at a level below its carrying capacity K (regions (iiia) and (iiib) in Fig. 1).
It should be noted that these outcomes are stable in the sense that they do not depend on the particular initial population sizes (frequencies) of the bacteria or host, and that small changes to the mathematical model would not affect the qualitative predictions (the system is 'hyperbolic').
The co-existence predicted in (iii) can take one of two forms: (a) steady-state or (b) cyclic. The term 'steady-state' refers to population densities which are constant over time; the term 'cyclic' refers to population densities which vary periodically over time. Note that (i)-(iii) are long-term behaviours, i.e., population densities undergo transient dynamics before eventually settling down to one of these behaviours. It is outcome (iiia) that appears to correspond to our experimental results (Fig. S1). These three outcomes can be characterised in terms of just two of the model parameters, namely r_R, the intrinsic Malthusian growth rate of R (RJ3), and K, the carrying capacity of the host plant (see Supporting Information for the definition and role of these parameters). In particular, our mathematical model predicts that there exists a critical threshold value r̂ such that the GI is retained if and only if r_R < r̂ and K > d_R/r̂ (see Fig. 1). The value r̂ is given by r̂ = r_c d_R/d_c, where r_c and d_c are the intrinsic Malthusian growth and death rates of B_c (wild-type 1302A), respectively. We can write these conditions more elegantly and naturally by defining the two ratios q_c = r_c/d_c and q_R = r_R/d_R.
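For convenience, the threshold and its equivalent ratio form (stated in full in the next paragraph) can be collected in one display; this merely restates the conditions above and introduces no new quantities:
\[
\hat{r} = \frac{r_c\, d_R}{d_c}, \qquad q_c = \frac{r_c}{d_c}, \qquad q_R = \frac{r_R}{d_R},
\]
\[
\text{GI retained} \iff \left( r_R < \hat{r} \ \text{and} \ K > \frac{d_R}{\hat{r}} \right) \iff \left( q_R < q_c \ \text{and} \ K > \frac{1}{q_c} \right).
\]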
The quantity r/d is commonly referred to as the reproductive ratio of an organism having (Malthusian) growth rate r and natural death rate d, and represents the expected number of offspring produced during its natural lifespan 1/d. The equivalent condition for GI retention is therefore q_R < q_c and K > 1/q_c, providing a very simple algebraic criterion for GI retention and a simple biological interpretation in terms of the carrying capacity of the host plant and the reproductive ratios of the wild-type bacteria B_c (1302A) and the strain R (RJ3) (Fig. 2). The numerical simulations in Fig. 3 illustrate this, showing long-term steady-state GI retention for q_R < q_c and GI loss for q_R > q_c (and with K chosen such that K > 1/q_c). The condition K > 1/q_c is equivalent to saying that wild-type 1302A has an exponential growth phase (as opposed to exponential decay) when the host plant cell density is at its carrying capacity (K), in the absence of all other factors (such as host defence mechanisms and competition). This follows from the assumption that the natural growth rate response of 1302A is then r_c K and its natural death rate is d_c, so exponential growth is possible only if r_c K > d_c, i.e., K > 1/q_c.
Fig. 1. In all cases, the plant survives. Parameter region (i) corresponds to eventual extinction of all bacterial populations, (ii) corresponds to survival of RJ3 only with 1302A becoming extinct, (iiia) corresponds to eventual co-existence of RJ3 and 1302A at steady state (GI is retained) and (iiib) corresponds to cyclic (time-periodic) co-existence of RJ3 and 1302A (GI is retained). The boundaries between regions (i), (ii) and (iiia) are given explicitly by r_R = r̂, K = 1/r̂ and r_R = d_R/K, where r̂ = r_c d_R/d_c. The boundary curve between (iiia) and (iiib) is a locus of Hopf bifurcation points, marking the transition between steady-state and cyclic GI retention.
PPHGI-1 is maintained in the population during long-term exposure to the plant environment
Based on the predictions made by the model described above, a long-term passaging experiment was carried out to investigate whether PPHGI-1 was maintained in the population over a longer time period than we had previously observed. We used strain Pph 1302A::NCR so that we could rapidly screen for the loss of PPHGI-1 following plant exposure by screening for loss of kanamycin resistance. TG leaves were inoculated with a mixture of 2% 1302A::NCR and 98% RJ3 (this ratio mimics the population observed after six passages through TG starting with pure 1302A::NCR, Fig. S1). The experiment was carried out for a total of 18 weeks (Fig. 4A). We observed the rapid loss of PPHGI-1 from the population and, therefore, a decrease in the proportion of the 1302A::NCR strain. Initially, this dropped to around 0.1% but then recovered to be stably maintained at 0.5% for the rest of the experiment. Figure 4B shows a zoomed-in view of weeks 2-18, showing that the PPHGI-1-containing strain drops to a very low level before invading from rare and being maintained in the population at a low level (approximately 0.5%) over time.
To confirm that the loss of PPHGI-1 is triggered by avrPphB-induced HR, we used a 1302A avrPphB insertion mutant and compared this to 1302A::NCR in a bean cv. TG passaging experiment (Fig. 4C); in each case the strains were mixed with RJ3 at a 0.5:99.5% ratio. When avrPphB is inactivated and therefore not triggering the HR, population numbers increase, confirming that it is HR activation by avrPphB that is causing the 1302A population to be held at a very low level.
PPHGI-1 is thought to be lost from the genome of Pph 1302A during exposure to the plant's resistance mechanism because the antimicrobial environment generated by the HR favours PPHGI-1 loss, enabling the evolved bacteria to avoid triggering the HR (Pitman et al., 2005; Arnold et al., 2007). Therefore, we considered at what concentration Pph 1302A was able to cause symptoms of the HR on bean leaves and whether there was a level at which 1302A could retain PPHGI-1 without macroscopic symptoms of the HR being visible. TG leaves were inoculated with Pph 1302A::NCR at OD600 0.1, diluted to 100%, 5%, 2%, 0.5% and 0.1% with Pph RJ3 (Fig. 4D). This mimics the concentrations observed in the previous passaging experiments (Figs. 4A and S1). After 24 hours, the symptoms of the HR could be clearly seen in the leaves inoculated with 100%, 5% and 2% Pph 1302A. However, only very weak HR symptoms could be seen at 0.5%, and no plant response was obvious at 0.1%.
Pph 1302A population frequency increases in a susceptible plant
As Pph 1302A excises PPHGI-1 and loses the GI from its genome during the HR, it becomes identical to strain RJ3. However, even though Pph 1302A (B) could survive at a concentration low enough to significantly reduce HR symptoms, our model suggested that it would only be maintained in a population consisting mainly of RJ3 (R) if PPHGI-1 conferred some fitness benefit. Therefore, to investigate this, the mixed bacterial population (99.5% RJ3 + 0.5% 1302A) harvested from the week 18 passage through TG leaves was inoculated into leaves of the susceptible bean cultivar Canadian Wonder (CW) (Fig. 4A, weeks 19-24). Here, we observed that between weeks 19 and 24, the 1302A::NCR population increased 6.5-fold, whereas the RJ3 population only increased 1.6-fold (Fig. S2). This increased population density suggests that 1302A has a fitness advantage over RJ3 when the HR is not a factor.
Fig. 4. (A) Initially, leaves of bean cultivar Tendergreen (TG) were inoculated at an inoculum density of OD600 0.1 with 2% Pph 1302A::NCR and 98% RJ3. After 7 days, the cells were harvested, diluted to the starting inoculum density, and reinoculated into a new leaf. This process was repeated 24 times, weeks 1-18 in TG followed by weeks 19-24 in bean cultivar Canadian Wonder (CW). (B) Zoomed-in view between weeks 2 and 18. (C) The population of 1302A can increase in planta if avrPphB is inactivated. An avrPphB disruption mutant, Pph 1302A::avrPphB, and 1302A::NCR were passaged six times through resistant bean cv. TG at an initial inoculum concentration of 0.5% 1302A and 99.5% RJ3 to mimic that found at week 18. Colony-forming units (CFUs) retaining kanamycin resistance and therefore PPHGI-1 were counted after each passage. Mean is of three replicates ± SEM. (D) Phenotypes displayed by Pph 1302A::NCR on bean cv. TG at various concentrations of bacteria. Bean cv. TG leaves were inoculated with Pph 1302A::NCR at OD600 0.1 concentrations of 100% (8 × 10⁷), 5% (4 × 10⁶), 2% (1.6 × 10⁶), 0.5% (4 × 10⁵) and 0.1% (8 × 10⁴), made up to 8 × 10⁷ cells ml⁻¹ with Pph RJ3. These concentrations represent the levels of Pph 1302A::NCR observed at various time points in (A) and (B). Symptoms were recorded after 24 hours.
Discussion
Our previous work has demonstrated that Pph strain 1302A carries a GI, PPHGI-1, that harbours an effector gene, avrPphB, that is responsible for the recognition of 1302A by a bean cultivar with the R3 resistance gene (Jackson et al., 2000; Pitman et al., 2005). Yet, although there is a strong selective pressure to lose PPHGI-1 from the bacterial population during infection of this resistant bean cultivar, we have never observed 100% PPHGI-1 loss from the population in any of our previous studies. This suggested three possibilities: (i) that the excised GI integrates into the genome at a low frequency within the population, maintaining a small population of island-carrying bacteria; (ii) that the negative effects of the GI are suppressed at a critical minimal threshold; and (iii) that the GI confers a fitness advantage to the minority. There are 100 predicted ORFs on PPHGI-1 (Pitman et al., 2005), so it is possible that some of these may give the bacteria an advantage in certain environments. However, simply comparing the growth rates of the bacteria with and without PPHGI-1 in a susceptible bean plant showed no clear fitness advantage to maintaining PPHGI-1 when comparing separate inoculations of 1302A and RJ3 (Pitman et al., 2005). We have also shown that there is no difference in growth rate if 1302A::NCR and RJ3 are co-inoculated in a susceptible bean plant at equal densities (Fig. S3).
Here, we developed a mathematical model to predict the circumstances that would lead to the maintenance of PPHGI-1 in a sub-population of the bacteria. Our model suggested that PPHGI-1 would be maintained indefinitely in a small proportion of the bacterial population if PPHGI-1 gave the bacteria a fitness advantage once the selective pressure against the avirulence gene avrPphB was removed (specifically, if the intrinsic reproductive ratio of 1302A is greater than that of RJ3, under the assumption that the carrying capacity of the host plant is sufficiently large). We validated this prediction experimentally and went on to show that it is only when the strain containing PPHGI-1 is at a very low level in the overall population (0.5%) that a fitness advantage can be seen in cv. CW, as the strain containing PPHGI-1 can grow more quickly than the island-less strain and therefore increase as a fraction of the total population.
The mathematical model described here makes a number of assumptions. First, we have assumed that an antimicrobial response remains at very low bacterial population densities, consistent with reports that a single bacterium is capable of eliciting the HR (Turner and Novacky, 1974). This is difficult to verify experimentally for such small population counts, and their effects on the host plant may not even be visible macroscopically (Fig. 4D). It is conceivable that when PPHGI-1-carrying strains are present at low frequencies in a population, antimicrobial activity may be insufficient to restrict bacterial growth. It would, therefore, be interesting to investigate models which include 'threshold' effects in which the antimicrobial response is negligible, which could facilitate the persistence of PPHGI-1 even in the absence of a fitness benefit. Similarly, we have also assumed functional forms of mass-action type in several parts of our model, including bacterial growth rates and the rate of island excision. It seems likely that more quantitatively accurate results could be obtained if growth rates due to nutrient uptake and antimicrobial inhibition were modelled via saturation functions of Monod type (i.e., with an upper limit to the growth response). However, numerical simulations of such saturating effects (not reproduced here) show no important qualitative differences. We take the view that, since not all parameter values are experimentally obtainable for this model, it is essentially qualitative in nature but is amenable to rigorous mathematical analysis. We are also aware that PPHGI-1 is capable of HGT between bacterial cells (Lovell et al., 2009), which is not incorporated into the current model, which focuses on the maintenance of the island in the population and not in individual cells. However, we have also considered the effects of small rates of HGT in the mathematical model by carrying out a stability analysis entirely analogous to the one presented in the Supporting Information, which shows that the qualitative predictions, in particular those shown in Fig. 1, remain unchanged when the model includes HGT effects (data not shown).
There are two biologically interesting aspects of these results that warrant further consideration: (i) how is the PPHGI-1-containing strain maintained at a low level in the population in the resistant host and (ii) what is the mechanism underlying its population increase in the susceptible host, given that it does not appear to have an intrinsic growth advantage when populations are equal? For the maintenance of the GI at a low level in the resistant host, we have previously shown that PPHGI-1 can excise and form a circular form outside of the chromosome and that when this occurs the avirulence gene avrPphB is down-regulated. It is, therefore, possible that strains carrying the excised GI escape the antimicrobial environment that seems to be needed to cause GI loss (Pitman et al., 2005) and that the excised island reintegrates into the genome stochastically, maintaining a population of chromosomal GI-carrying bacteria. It is also possible that at lower densities the PPHGI-1-containing cells benefit from the suppression of plant defences by RJ3. In Young (1974) and Barrett et al. (2011), it was shown that the presence of virulent P. syringae strains significantly enhanced the growth of non-pathogenic and non-host strains in co-inoculations, and we may be observing a similar phenomenon here. However, our model suggests that PPHGI-1 would only be maintained over an extended period of time if it conferred a fitness benefit, and thus there may be other, as yet undefined, mechanisms of maintenance of PPHGI-1. For example, it may be that avrPphB and other genes on PPHGI-1 provide a fitness benefit to bacteria at lower densities, and it would be interesting to investigate the role of genes on PPHGI-1 in the persistence of the island at low levels.
When the mixed population of bacteria (0.5% 1302A and 99.5% RJ3) is moved to a susceptible host, 1302A grows more rapidly than RJ3, but the same effect was not observed when the two strains were inoculated at equal densities or in vitro (data not shown). This may be due to a fitness benefit conferred in the plant by the GI at low population density. This phenomenon of negative frequency-dependent selection, where the fitness of a phenotype decreases as it becomes more common, is observed in other systems as a mechanism that maintains genotypes when they are rare in the population, thus favouring intraspecific diversity (Minter et al., 2015). Possible mechanisms include a role for the GI in the suppression of plant defences, which would provide a diminishing advantage as the proportion of plant cells in which defence responses have been suppressed increases, allowing bacteria that do not possess the GI to benefit from its activity. Here, we show that the population of bacteria is not genotypically homogeneous and that a small proportion of cells still retain PPHGI-1, which, when conditions change (e.g., interaction with susceptible plants), can confer a fitness benefit that is most apparent when the PPHGI-1-carrying bacteria are present at a low frequency.
Overall, our results illustrate the maintenance of a mobile genetic element at low frequency within bacteria, helping to maintain a diversity of genetic material within their population. This enables the bacteria not only to infect the previously resistant plant but also to rapidly colonise susceptible plants if the island-containing genotype is dispersed to them. Given that GIs are common amongst bacterial genera, and include GIs in human pathogens containing virulence factors or antibiotic resistance genes, it is clear that this phenomenon can have serious implications for the persistence of genetic material. Moreover, the longevity of the GI as a persister population may make it difficult to displace the element from the bacterial population, thus causing difficulties in controlling disease outbreaks.
Experimental procedures
Bacterial and plant growth conditions
Pph 1302A::NCR (Pitman et al., 2005) and RJ3 (Jackson et al., 2000) were cultured at 25°C for 48 h on King's B (KB) agar plates (Difco, UK). Overnight cultures were grown in Luria-Bertani medium (Difco) at 25°C with shaking at 200 rpm. Medium was supplemented with 25 µg ml⁻¹ kanamycin where appropriate. Phaseolus vulgaris cultivar TG and cultivar CW were grown at 23°C and 80% humidity with a 16 h photoperiod. Pods were harvested from 8-week-old TG bean plants.
The mathematical model
Mathematical modelling assumptions. We consider the population of bacterial cells to be composed of three distinct sub-classes: (i) Pph 1302A cells with the GI located on their chromosomes (B_c); (ii) Pph 1302A cells containing the excised form of the GI (B_e); and (iii) RJ3 cells without the GI (R). We illustrate the relationship between these classes schematically in Fig. 2.
The presence of B_c and B_e cells, and hence of the effector gene avrPphB present on the GI, triggers an antimicrobial response (the HR) in the plant, via plant resistance gene R3. We denote the concentration of antimicrobial chemicals by A and assume that the antimicrobial response is proportional to the density of Pph 1302A cells and plant cells, in accordance with the law of mass action. The antimicrobial environment causes cell death in all three sub-classes of bacterial cell as well as in the host plant itself, and degrades at a constant rate. Simultaneously, the presence of the antimicrobial field induces excision of the GI from the chromosomes of B_c cells, thereby converting these to the B_e class (Pitman et al., 2005). We assume that the per capita rate of excision is proportional to the concentration of the antimicrobial field A.
We assume that the per capita rate of growth (replication) of the bacterial cells is proportional to the quantity of nutrients available and that the quantity of such nutrients is itself proportional to the density of living plant cells (P). For each of the three bacterial classes (B_c, B_e and R), we consolidate these two constants of proportionality into a single parameter, namely the natural (Malthusian) growth rate (r_c, r_e and r_R, respectively). In the absence of nutrients, the bacterial cells die at a constant per capita natural death rate (d_c, d_e and d_R). Finally, we assume that in the absence of bacteria, the plant cell density obeys a self-limiting (logistic) growth law with carrying capacity K (i.e., in the absence of bacteria the plant cell density would reach a steady-state value of K), but with a per capita death rate proportional to the bacterial cell densities in the presence of a bacterial population.
Mathematical formulation (see also Supporting Information). There are four population densities, representing the three sub-classes of bacterial cells and the plant cells, which vary with time (assumed continuous), in addition to the concentration of the antimicrobial field:
B_c(t) - population density at time t of bacterial cells having the GI on the chromosome;
B_e(t) - population density at time t of bacterial cells having the excised GI;
R(t) - population density at time t of bacterial cells without the GI;
P(t) - population density at time t of plant cells;
A(t) - concentration at time t of the antimicrobial field.
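The governing equations themselves are given in the Supporting Information and are not reproduced here. Purely as an illustration, the following is a minimal numerical sketch of an ODE system consistent with the assumptions stated above; it is not the authors' exact model. Only r_c, r_e, r_R, d_c, d_e, d_R and K are named in the text; every other constant (antimicrobial production and decay, kill rates, the excision constant, and a hypothetical island-loss term converting B_e cells to R) is a placeholder introduced here for illustration, as are all numerical values.

# Minimal sketch of the modelling assumptions (not the authors' exact model).
from scipy.integrate import solve_ivp

r_c, r_e, r_R = 1.0, 1.0, 0.8   # Malthusian growth rates (per unit plant density)
d_c, d_e, d_R = 0.5, 0.5, 0.5   # natural death rates
K = 2.0                          # plant carrying capacity

# Hypothetical placeholder constants: antimicrobial production (a) and decay (g),
# bacterial (eta) and plant (mu) kill rates, excision (eps), loss of the excised
# island (lam), plant logistic rate (rho), plant death per bacterium (c).
a, g, eta, mu = 0.1, 0.5, 0.2, 0.05
eps, lam, rho, c = 0.3, 0.05, 1.0, 0.01

def rhs(t, y):
    Bc, Be, R, P, A = y
    dBc = r_c * P * Bc - d_c * Bc - eta * A * Bc - eps * A * Bc  # excision drains B_c
    dBe = r_e * P * Be - d_e * Be - eta * A * Be + eps * A * Bc - lam * Be
    dR  = r_R * P * R  - d_R * R  - eta * A * R  + lam * Be      # island loss -> RJ3-like
    dP  = rho * P * (1.0 - P / K) - c * P * (Bc + Be + R) - mu * A * P
    dA  = a * P * (Bc + Be) - g * A   # HR driven by GI carriers and plant cells
    return [dBc, dBe, dR, dP, dA]

# Start from 2% GI carriers in a 98% RJ3 background, as in the passaging experiment.
sol = solve_ivp(rhs, (0.0, 200.0), [0.02, 0.0, 0.98, K, 0.0], max_step=0.1)
Bc, Be, R = sol.y[0, -1], sol.y[1, -1], sol.y[2, -1]
print(f"final GI-carrier fraction: {(Bc + Be) / (Bc + Be + R):.3%}")

With these placeholder values, q_R = 1.6 < q_c = 2.0 and K > 1/q_c, so the run sits in the retention region (iiia) predicted by the analysis; whether the simulated GI-carrier fraction settles at a nonzero level also depends on the placeholder constants, which the analysis in the Supporting Information treats in full generality.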
Supporting information
Additional Supporting Information may be found in the online version of this article at the publisher's web-site:
Fig. S1. PPHGI-1 is lost from Pseudomonas syringae pv. phaseolicola 1302A::NCR during passaging through bean. Pph 1302A::NCR was passaged six times (each passage 7 days) through resistant bean cv. TG. At each passage, 200 colonies were tested on TG pods (A) and via antibiotic selection (B) for the loss of PPHGI-1 and the percentage loss recorded. Both tests showed the same result. Means are of three replicates ± SEM.
Fig. S2. Pph 1302A has a faster growth rate than RJ3 when the starting cell proportions are unequal. The mixed bacterial population (99.5% RJ3 + 0.5% 1302A::NCR) harvested from the week 18 passage (Fig. 4) through Tendergreen leaves was inoculated into leaves of the susceptible bean cultivar Canadian Wonder and passaged six times. 1302A::NCR displays an increased growth rate, its population increasing 6.5-fold compared to 1.6-fold for RJ3 between weeks 19 and 24. Mean is of three replicates ± SEM.
Fig. S3. Pph 1302A and RJ3 have similar growth rates in planta when the starting cell proportions are equal. Pph 1302A::NCR and RJ3 were diluted to OD600 0.1 and 250 µl of each strain mixed and inoculated into susceptible bean cultivar Canadian Wonder leaves. Samples were taken every 2 h and total colony-forming units (CFU) calculated. Data shown are log₁₀ CFU ml⁻¹, and mean is of three replicates ± SEM.
Livable city / unequal city : The politics of policy-making in a « creative » boomtown
In a recent issue of the Journal of the American Planning Association, Leonie Sandercock (2004) charts what she calls a "new planning imagination for the 21st Century." Her paper identifies new ways in which we might conceive of the goals, methods, sites of engagement, and political potentialities of urban planning practice. She defines planning as an "always unfinished social project whose task is managing our coexistence in the shared spaces of cities and neighborhoods in such a way as to enrich human life and to work for social, cultural, and environmental justice" (Sandercock, 2004, p. 134; see also Sandercock, 1998; 2003). Her intent is to emphasize the need for planning to remain relevant to the contemporary economic and cultural characteristics of cities.
Introduction
In a recent issue of the Journal of the American Planning Association, Leonie Sandercock (2004) charts what she calls a "new planning imagination for the 21st Century." Her paper identifies new ways in which we might conceive of the goals, methods, sites of engagement, and political potentialities of urban planning practice. She defines planning as an "always unfinished social project whose task is managing our coexistence in the shared spaces of cities and neighborhoods in such a way as to enrich human life and to work for social, cultural, and environmental justice" (Sandercock, 2004, p. 134; see also Sandercock, 1998; 2003). Her intent is to emphasize the need for planning to remain relevant to the contemporary economic and cultural characteristics of cities.
There are three elements of Sandercock's work that are relevant here. First, she draws attention to the relationship between livability/quality of life/"enrich[ed] human life" and economic development in cities. Second, her paper presents a broad definition of urban policy-making that encompasses the actions of activists, the business community, and the media as well as city officials and politicians and, therefore, indicates that the formulation and adoption of urban policies is always a political process (Logan and Molotch, 1987). Third, Sandercock draws attention to imagination, not as separate from reality but as productive of reality through its central role in 'framing' (Tarrow, 1992) social practice. More specifically, her work points to a particular type of social imagination - what David Harvey (1973, pp. 23-27), following Mills (1959), called the geographical imagination - whereby actors recognize how social relations are mediated by space and through which they seek to use, shape, and manage space for specific purposes.
This approach resonates with the wider understanding among geographers of the mutually constitutive relationship between urban social, political, and economic processes and urban space (Harvey, 1973; Soja, 1989; Lefebvre, 1991). This paper will address the contemporary tendency in North American urban policy to uncritically seek to connect specific ideals of urban 'livability' with urban economic development policies that cater to the whims of Richard Florida's 'Creative Class' (Florida, 2004a). It will do so through an analysis of the politics of urban policy-making in Austin, Texas, a city Florida sees as a model for aspirant creative urban regions. Austin has 37.5% of its workers employed in the Creative Class - third among large US urban regions, behind Washington, DC and Raleigh-Durham, NC (Florida, 2004a, p. 368) - and it experienced a recent high-technology boom, a sector defined by Florida (2004a, p. 328) as part of the "Super-Creative Core" of the contemporary economy. The paper will outline two related spatial frames, or parts of a geographical imagination, that underpin Florida's argument - one which identifies an idealized vibrant urban neighborhood as the geographical nexus at which livability and economic competitiveness connect, and another that positions individual cities within a wider context of competitors through the device of rankings and comparative tables. The paper will then address the case of Austin from 1997 to 2001, a period when a charismatic mayor and a Democrat-led, so-called "green council" enacted a set of policies aimed at growing the city's high-tech economy while preserving its environment and enhancing quality of life. It will show how the council's attention to certain parts of the city, and Austin planners' and local economic development specialists' attention to a mental map of cities to be learned from and competed against, reflects Florida's perspective and, as his book indicates, has also influenced his account of how cities can become 'creative' (Florida, 2004a, pp. 190-191, 298-300).
The paper will subsequently turn to the question of inequality and its relationship to policies aimed at nurturing, attracting, and retaining the 'Creative Class'. It will examine debates in Austin over rising economic inequality and a related decrease in housing affordability which arose just as the city was gaining its reputation as an exemplar of 'new economy' urban success. This section will again focus on geographical framings to outline how activists, policy-makers, and politicians struggled over and sought to mitigate the negative consequences of Austin's high-technology boom and how this politics calls into question much of the rosy optimism of the Creative Class thesis.
In this context, the paper makes two related arguments: (1) an attention to space and spatiality - the interaction of space and social action (Soja, 1989) - offers analytical purchase on contemporary issues of urban development, since urban policy, politics, and economic development reflect and shape geographical processes, including geographical imaginations and their spatial framings; and (2) the case of Austin suggests that even the most favored 'creative cities' are quickly forced to address the inequality which seems to result from Creative Class policies, and, thus, advocates of the Creative Class thesis must address evidence of how creative cities are becoming increasingly less livable for many. Addressing this issue requires, the paper argues, the development of concrete policy strategies, not merely the sort of hand-wringing that has characterized much of the most prominent literature to this point.
The spatial politics of urban economic development and quality of life
The geographical aspect of contemporary efforts to fuse urban economic competitiveness with urban quality of life, or livability, is captured by Logan and Molotch's (1987) phrase, "the political economy of place," which, along with Cochrane's (1999, p. 111) notion of "the local politics of business," emphasizes the actions of "locally-dependent" (Cox and Mair, 1988) fractions of capital (rentiers, whose increased profit margins are dependent on the intensification of local land uses) and allied individuals and institutions (from the local media to developers and labor unions) in branding cities, shaping urban landscapes, and framing urban policy in reference to inter-urban competition (Hall and Hubbard, 1998; McCann, 2004; Ward, 2000a, 2000b). The importance of place and scale in these works emphasizes the centrality of spatiality at the heart of sociologists', political scientists', and geographers' understanding of contemporary urban development. A parallel focus of these literatures has been on the discursive and representational aspects of the politics of urban development that, according to Jessop (1998, pp. 84-85; see also Boyle, 1999; Jonas and Wilson, 1999b; McCann, 2002), increasingly involves, among other things, "modifying the spatial division of consumption through enhancing the quality of life for residents, commuters, and visitors." Jessop's words echo Harvey's (1989, p. 12) argument that cities, once set on the treadmill of competition by larger structural forces, such as the reconfiguration of revenue streams from other levels of government, must "keep ahead of the game [by] engendering leapfrogging innovations in life-styles, cultural forms, products, and service mixes, even institutional and political forms, if they are to survive." There is a long social science and policy-making tradition of addressing factors involved in improving quality of life (such as decreased residential overcrowding and mitigating natural hazards) in order to create more socially just, equitable, and humane cities (Pacione, 1982; 1990). Yet, Harvey's emphasis on lifestyle in his discussion of urban entrepreneurialism indicates that quality of life is now routinely understood as a competitive advantage and defined in terms of consumption opportunities for wealthier and/or more economically valued class fractions who are able to choose the cities in which they live or invest on the basis of specific lifestyle characteristics. This definition of urban economic competitiveness in relation to a narrow definition of quality of life (Ley, 1990) is especially evident in Richard Florida's (2004a) 'Creative Class' discourse. It is a new policy vulgate or 'commonplace' ("notions or theses with which one argues but over which there is no argument") (Bourdieu and Wacquant, 2001, p. 2, their emphasis) that has become central to the entrepreneurial rhetoric of North American urban policy-makers (Peck, 2005). Florida's argument is that in order to be economically successful, cities must attract the 'Creative Class' - young workers, primarily working in the sciences, engineering, and the design professions, from architecture to product design, as well as the arts and education (Florida, 2004a). Cities' attractiveness, he suggests, is based on their ability to provide this class fraction with a high quality of life. "They like," says Florida, "indigenous street-level culture - a teeming blend of cafes, sidewalk musicians, and small galleries and bistros, where it is hard to draw the line between participant and observer, or between creativity and its creators" (Florida, 2004a, p. 166).
According to this perspective, the city, if it is to be economically competitive, must be reshaped and repackaged as a consumption and lifestyle space that attracts the Creative Class.
Florida's geographical imagination
This has clearly been a persuasive argument for a wide range of cities (Peck, 2005). One aspect of its persuasive power that I would like to point to is Florida's skillful use of particular spatial frames - related elements of his geographical imagination. Each of these invokes certain images for policy actors; they are "causal stories" (Clarke and Gaile, 1997) or "regulating fictions" (Robinson, 2002) that help policy actors visualize their current practice and their creative future, and allow them to legitimize certain policy strategies over others.
Geographical imaginations are ways of seeing and understanding social space that influence how one acts in it and how one thinks it can be organized and managed (Soja, 1989; Harvey, 1990; Wolford, 2004). A geographical imagination, according to Harvey (1973, p. 24), enables the individual to recognize the role of space and place in his [sic] own biography, to relate to the spaces he sees around him, and to recognize how transactions between individuals and between organizations are affected by the space that separates them. It allows him to recognize the relationship which exists between him and his neighborhood, his territory, or . . . his 'turf'. It allows him to judge the relevance of events in other places . . . It allows him to fashion and use space creatively and to appreciate the meaning of the spatial forms created by others.
More recently, Wolford (2004, p. 413), writing in a different context and elaborating on the cognate term 'spatial imaginaries', suggests that these ways of seeing are "lens[es] for turning context into action." This suggests that imagination "no longer represents transcendence or escape, but is crucial - indeed the most crucial - form of social construction, of productive work" (Buell, 1994, p. 314; quoted in Olds, 2001, p. 48). Geographical imaginations allow urban policy actors to locate themselves in wider flows of knowledge (about good policies, for instance) and also motivate and legitimate their actions (e.g., efforts to attract the Creative Class). Each geographical imagination is a conception of society and space as they are, and as they might, should, or will be. They are also social gathering points, around which actors converge to form interpretive communities "who come together around a shared reading of a set of texts," where "their shared reading serves as the basis for social action" (Duncan, 1990, pp. 155-156; see also Stock, 1986). Geographical imaginations like Florida's are, therefore, powerful. They permeate and constitute everyday urban policy-making.
Two spatial frames are particularly relevant here. The first is a mental map of successful or aspirant creative cities with which to compete or from which to learn. This is created through the "calculative practices" (Larner and Le Heron, 2002, p. 753) used to create Florida's indices of high-tech, innovation, gays, bohemians, talent, the melting pot, diversity, and creativity (Florida, 2004a, pp. 327-334). Subsequently, these calculations lead to a ranked mapping of cities in terms of their levels of creativity (e.g., Florida, 2004a, p. xxii) which, I argue, are powerful and attractive spatial frames of reference for policy-makers interested in competing with or learning from similar cities and interested in legitimizing their activities.
Florida's second spatial frame is an idealized vision of the vibrant, diverse, street-oriented, and creative urban neighborhood. Such neighborhoods - whether existing in the past, currently present, or promised for the future - are the rhetorical and spatial anchors of Florida's vision of creative city-regions (cf. McCann, 2007). His always-polished prose is at its liveliest when he describes these places and, as a result, I argue that the images he presents are powerfully persuasive to policy-makers and the general public, evoking as they do a sense of nostalgic familiarity coupled with anticipation of an urban future that is almost in reach.
This spatial framing is evident in his discussion of Jane Jacobs's Greenwich Village of the 1950s, and particularly Hudson Street, where she lived at the time. Her book "celebrated the creativity and diversity of urban neighborhoods like her own Greenwich Village." "Jacobs's neighborhoods," Florida continues, "were veritable fountainheads of individuality, difference and social interaction. The miracle of these places, she argued, was found in the hurly-burly life of the street. The street, where many different kinds of people came together, was both a source of civility and a font for creativity" (Florida, 2004a, pp. 41-42).
Florida argues that Jacobs's (1961) ideas are increasingly coming into their own: "Not only are urban neighborhoods similar to Hudson Street reviving across the country, but many of the principles that animated Hudson Street are diffusing through our economy and society. Workplaces, personal lives, entire industries and entire geographic regions are coming to operate on principles of constant, dynamic creative interaction" (Florida, 2004a, p. 43).
This evocation of Jacobs's New York lays the foundation of many of the key arguments in Florida's book, particularly those that suggest the need for a fusion of economic development, 'quality of life', and 'quality of place'. Here is Florida, in the chapter entitled 'The Experiential Life', describing the appropriate milieu for the Creative Class - a type of space which cities must foster if they are to be economically competitive: "[T]he Creative Class is drawn to more organic and indigenous street-level culture . . . in multiuse urban neighborhoods. The neighborhood can be upscale like D.C.'s Georgetown or Boston's Back Bay, or reviving-downscale like D.C.'s Adams Morgan, New York's East Village, or Pittsburgh's South Side. Either way, it grows organically from its surroundings, and a sizable number of the creators and patrons of the culture live close by. This is what makes it 'indigenous'" (Florida, 2004a, p. 182).
"And then," he goes on, "if it is a proper street scene, there will be . . .a delicious sense of adventure in the air.One has an awareness of the possibilities of life [in such a place]" (Florida, 2004a, p. 186).Adventure and possibility can best grow in a certain type of space, he suggests, and in the vibrant urban neighborhood, they become the experiential building blocks of creativity : I would further argue . . .that this kind of experience is essential to the creative process.We humans are not godlike ; we cannot create out of nothing.Creativity for us is an act of synthesis, and in order to create and synthesize, we need stimulibits and pieces to put together in new and unfamiliar ways i , existing frameworks to deconstruct and transcend (Florida, 2004a, p. 186, my emphasis ;cf Jessop, 1997, p. 31).
These two spatial frames - the mental map of competitive creative cities and the vision of the livable, creative neighborhood - are geographically-grounded stories which link certain urban spaces and the geography of inter-city competition to current and future visions of urban economies and societies. As 'causal stories' (Clarke and Gaile, 1997) around which groups of interpreters gather and in reference to which they develop policy (Stock, 1986), they are therefore powerful.
Keepin' Austin weird? Geographies of policy and quality of life
Evidence of the power of Florida's geographical imagination and its two key frames is widespread (Peck, 2005). How might a perspective that reads the contemporary influence of the Creative Class thesis in urban policy through the notion of the geographical imagination be useful in analyzing specific cases? In the following paragraphs I will address this question by drawing on the politics of urban policy in Austin, Texas from 1997 to 2001. Austin is widely seen as the epitome of what might be termed a hometown/boomtown ideal in North American urban policy. By this I mean that the contemporary urban policy orthodoxy in North America suggests that successful cities must effectively blend a boomtown atmosphere - a vibrant economy, usually one structured around specific economic clusters such as semiconductors and electronics, computers and peripherals, and film and media - with a high quality of life which makes the place attractive as a hometown for business owners and their most valued employees (McCann, 2004). Austin is a city with a well-developed and still expanding technology sector, a growing population, a relatively low cost of living, an attractive environment, vibrant nightlife, and a strong arts sector, anchored by the music and movie industries. While many North American cities have adopted Florida's language and some of his policy ideas (Peck, 2005), the Austin case is more complex, since the city was an exemplar for Florida - he drew some of his general principles from the study of the Austin case. Thus it might be seen as a proto-creative city, where some of the problems with the Creative Class model (such as the tendency towards increased economic inequality in creative cities) can be identified as clearly as any of its benefits (such as a tolerance for some forms of difference). The following two sub-sections discuss (1) the way Austin's policy-makers attempted to limit sprawl and to channel new development in ways that would enhance existing historical neighborhoods, thus promoting the sort of vibrant street-life that Florida celebrates, and (2) how these internal policy interventions are shaped by an inter-city imagination of best policy practices and exemplary cities to be emulated or competed against - a mental map that is reflected in Florida's rankings of the top creative cities. These will lead to a discussion, in the next section of the paper, of the politics and geography of inequality in Austin.
The empirical material is based on fieldwork in Austin in Fall 2000, which employed semi-structured interviews with key informants (planners, members of the business community, and neighborhood activists), direct observation of planning meetings, and archival research. Eighteen interviews, 45 minutes to two hours long, were conducted. Direct observation in various settings provided the opportunity to view how analyses of policy challenges were articulated in the political process. Archival research on government documents, newspapers, and materials distributed by activist organizations was also used during the period in Austin and afterwards.
Shaping vibrant, livable central city neighborhoods through planning policy
"Redevelopment of downtown Austin is one of the priorities of the Smart Growth Initiative. The vision for downtown is a vibrant, diverse neighborhood with a mix of cultural, employment, entertainment, residential, and retail uses." (City of Austin, 1997, p. 3)
In Austin, after the election of the "green council" in 1997, the policy agenda soon focused on promoting economic development while managing the negative effects of growth both on the surrounding natural environment and on central city neighborhoods. In this context, planning policy was reformulated around a Smart Growth agenda that had at its center a participatory neighborhood planning program, branded with the slogan "Neighborhoods First." The two related intentions of the Smart Growth approach were, first, to discourage growth on environmentally sensitive land on the city's edges and, instead, to channel that growth toward the downtown and a central city core defined by the city's immediate post-war boundaries. Second, the neighborhood planning component of the strategy was intended to manage new central city growth in order to maintain and promote the attractiveness of urban neighborhoods as they were impacted by new investment (McCann, 2003). The city's approach, then, was tied to a specific geographical imagination, frame, or causal story that guided and legitimated policy. In it, certain parts of the city were understood as particularly important objects of planning, either as problem areas to be managed (the growing suburbs) or as areas with the potential to attract the desirable populations (the downtown and surrounding neighborhoods). As a senior planner, resonating with Florida's discussion of vibrant urban neighborhoods, put it: "One of the things, though, that the Mayor sort of hooked on to, was . . . the fact that there's a changing demographic going on here and everywhere else around the country. . . . [M]any young people who are tired and bored with the suburbs [are] interested in living downtown. So you've got this renaissance of downtown going on . . . and the Mayor and the Smart Growth Program have helped each other to help that process move forward" (Interview with senior planner, November 6, 2000).
The control of sprawl, in this context, is part of a policy strategy aimed at promoting the 'quality of life' and 'quality of place' of central city neighborhoods and, in turn, maintaining and encouraging the development of a high-technology economy. While Florida (2004a, p. 290) makes a strong case for the importance of urban neighborhoods like Jacobs's Greenwich Village, he also notes the links between sprawl-control and neighborhood-building strategies: "[I]n one of the most ironic twists in recent memory, both sprawling cities and traditional suburbs are seeking to emulate elements of urban life. Cities like Atlanta, Los Angeles, Phoenix and San Jose have all undertaken major efforts to increase density in and around their urban centers, develop downtown housing and redevelop their downtown cores. San Diego has embarked on an ambitious $2.5 billion 'City of Villages' initiative to generate more compact, community-oriented development by rebuilding its older neighborhoods as pedestrian-friendly centers, where homes are close to shops, parks and public transit."
Vibrant central city neighborhoods, and their relationship to a certain definition of livability, are, then, central to Florida's geographical imagination and to that of Austin's politicians and policy professionals. As a City of Austin (1997, p. 1) document puts it: "Smart Growth calls for the investment of time, attention, and resources in central cities and older suburbs to restore community and vitality to these areas."
Following Clarke and Gaile (1997), this geographical imagination can be understood as a 'causal story' which encourages, frames, and legitimates certain policy interventions. Austin's experience with this particular imagination in turn provided partial inspiration for Florida's arguments, as did a second spatial frame, to which I will now turn.
Cities, good and bad: Politicians' and policy professionals' moral geography of urban policy
"Austin or Houston? You Decide." (Political campaign poster, Austin, Fall 2000)
While the attention of Austin's politicians and planners was focused in part on the creation of a vibrant central city through the Smart Growth Initiative, those same actors - along with the city's economic development professionals, in the city government and in the Greater Austin Chamber of Commerce - were also developing a spatial frame which positioned their practice within a wider inter-urban geography of learning and competition. Through this frame, they positioned Austin, its quality of life, and its economic competitiveness in relation to other cities and thus were able to advocate for and legitimate certain policy strategies. This inter-urban geographical imagination was frequently framed in terms of hierarchies or rankings that: (1) identified cities where positive or negative lessons about urban planning policy could be learned and (2) highlighted cities to be competed against for investment. These hierarchicalizations were simultaneously spatialized as part of a mental map or moral geography of good and bad places.
Local politicians and policy-makers actively sought out examples from elsewhere as they shaped and legitimated their Smart Growth approach. Specifically, the mayor took the lead in bringing the Smart Growth approach to Texas by "basically copying [Governor] Parris Glendening's [highly regarded state-wide Smart Growth] effort in Maryland, but at a local level" (Interview with senior planner, November 6, 2000) and by subsequently hosting a national Smart Growth conference in 1998 (Interview with planner, October 17, 2000). More specifically, the planner in charge of the Neighborhood Planning element of the Smart Growth process described how "we looked to Portland [Oregon] and we looked to [other places to] get these ideals from places, to design the initial program. [We did a] survey of neighborhood planning events in Texas to see what are those cities working with and to really learn about . . . their process and what they thought has worked out. And I think that our process of course is different from Houston because Houston doesn't zone. Our process is somewhat different from Portland, in that they have an excellent, excellent program. They really, they have a lot of funding for other things that we haven't been able to obtain" (Interview with planner, October 12, 2000).
The focus on Portland was an obvious choice for a group of planners with a long-standing regard for the city as an exemplary place for growth management and the maintenance of vibrant neighborhoods. As Austin's chief Smart Growth planner put it when asked to identify cities Austin modeled itself upon: "Portland has always [been] held up as the prime example and it's certainly been mentioned prominently here. I think you could probably talk about a number of west coast cities - Portland, Seattle . . . Just the idea that you could have a very different type of city, from what we tend to have around here, you know. One that is not so auto-based . . ." (Interview with planner, October 17, 2000). The other side of this moral geography highlights the cities that Austin's policy-makers view as cautionary examples. San Jose, California, at the heart of Silicon Valley, is a very different west coast city from Portland. It is a place with a high cost of living and problematic traffic congestion. It is frequently cited in Austin as a cautionary tale. "[We] don't want Austin to become the next Silicon Valley, in terms of quality of life," said a member of a local high-tech trade association. She continued, ". . . there is a real sense that the economic engine that we are fueling is fantastic, but we don't want it to run over what is Austin" (Interview with trade association representative, October 12, 2000). A local columnist echoed these sentiments: "[Silicon Valley] illustrate[s] the conundrum of prosperous, glittering new regions that attract brains and bravado. The regions spawn clusters of businesses, spin wealth for fortunate, highly skilled workers and, inadvertently, create a starkly divided cultural landscape in which poorer people are pushed farther out to the margins of society. . . . Is this Austin's future?" (Austin American-Statesman, 2000, p. A14).
Similarly, in the election of November 2000, Austin's ballot included an opportunity for the city's residents to vote on a proposal to fund a light rail system in the city. The proposal was strongly supported by incumbent politicians and policy-makers, who argued that it would mitigate the worst effects of traffic congestion in the rapidly growing city. Again, this local infrastructure proposal, which failed by the slimmest of margins, was framed in terms of an inter-urban geographical imagination. In the weeks before the election, signs appeared around the city with the following text: "Austin or L.A.? You Decide. Vote Light Rail" (another version asked "Austin or Houston? . . ."). While the campaign was unsuccessful, the imaginative geography it invoked both reflected and resonated with many in Austin who worried about a possible negative future for the city. A similar, if less explicit, sentiment was expressed in a bumper sticker that began appearing at approximately the same time. The sticker demanded, "Keep Austin Weird," and spoke to fears that the influx of technology workers - many of whom were attracted from California by Austin's relatively low cost of living - threatened to dilute the city's bohemian spirit in favor of corporatization and 'Californication'. This spatial frame dovetails with that of the city's economic development professionals and the Greater Austin Chamber of Commerce. They identified a group of US cities that are Austin's main economic competition. The cities - Boulder and Denver, Colorado; Phoenix, Arizona; Portland, Oregon; Raleigh-Durham, North Carolina; Salt Lake City, Utah; San Jose, California; and Seattle, Washington - are, according to the Greater Austin Chamber of Commerce, "[metropolitan] regions chosen because, like Austin, they are high-tech centers, and because they are the competition. The Chamber has seen many businesses consider these benchmarked regions when locating or expanding in Austin" (Greater Austin Chamber of Commerce, 2000, p. iii, my emphasis). This geography, like that described previously, reflects an understanding of Austin's place within more widespread networks of interaction - circuits of policy knowledge and of capital. It resonates with the mental map produced by Florida's ranking of 'creative cities', which places Austin at the top of the list, followed by San Francisco, Seattle, Boston, Raleigh-Durham, and Portland. Again, I want to argue that these imaginations should be seen as produced by and productive of the material practices of politicians, city planning and economic development staff, and business leaders. They are the causal stories that framed and legitimated Austin's late-1990s push towards what Florida would later term the Creative Economy.
Livability for whom? Inequality, politics, and the limits of the Creative Class thesis

In the preceding sections, I have suggested that contemporary urban policy-making aimed at nurturing, attracting, and retaining the group of workers and capitalists Richard Florida dubs the Creative Class has partly entailed the deployment of a particular geographical imagination. This imagination is underpinned by an ideal of vibrant, creative urban neighborhoods and by a mental map of cities to be learned from in terms of good urban policy and to be competed against for creative talent and high-tech investment. This geographical imagination and its intrinsic spatial frames are important, I suggest, because they are causal stories that encourage and legitimate specific policy interventions in the built environment and in the economic base of cities. I have shown that this imagination is evident both in Florida's writing and in the policy discourse of Austin in the period 1997-2001. Austin, I argue, can be seen as a proto-creative city; one that inspired Florida's account and that has continued to develop policies in parallel with those he proposes.
I will now suggest, however, that while Austin's experience and Florida's writings do seem to run in tandem, there is a point at which the reality of the rise of a high-tech, "creative" economy in Austin diverges from the rather rosy account of the Creative Class and its impact on cities that is featured in Florida's work. This divergence is caused by rising levels of economic inequality that are certainly correlated with, and are likely caused by, the rise of the Creative Class. As I will show, Florida is aware of this problem but chooses to avoid dealing with it in any sustained and serious manner; Austin's politicians, planners, and economic development professionals, by contrast, have been forced to address its various dimensions since the mid-1990s. There has been a politics of inequality in Austin - inflected again by a strong spatial framing - which has brought politicians and policy professionals into engagement with a range of critics, journalists, and activists and, during the late 1990s and the beginning of the current decade, has entailed the development of a number of concrete, if unevenly successful, policies to mitigate the effects of the city's high-tech boom. The politics and policy interventions aimed at reducing inequality, and the politics surrounding questions of livability, that were necessary in Austin are not reflected in the generally optimistic and apolitical writings produced by Florida. Arguments about the Creative Class must seriously address the relationship between policies aimed at this group and the economic inequality that is making cities less livable for many.
"The glittering signs of the new economy are becoming a familiar sight around these parts -the cranes and construction cones, the millionaires and megaplexes, the technology and traffic," noted an editorial in the Austin American-Statesman (2000b, p.A14).Yet, "[p]aralyzing poverty in a time of plenty is fast becoming the catch phrase for [Austin's] new economy".Austin's economic boom came with an attendant income bifurcation, which was a prominent discussion point in the city, not just in terms of its impacts on Austin's long-term economic competitiveness but also in terms of its quality of life.In 1990, the city's top decile of earners made 5.7 times the average wage of the lowest-earning ten percent.At the end of the decade, during the height of the city's economic boom, those at the top earned 11.1 times as much as those at the bottom (Bishop, 2000, p.A1).At the same time, 13.1 % of the city's population lived in poverty, while the US average was 12.7 % (Sustainability Indicators Project, 2000).
Addressing these figures, a columnist for the American-Statesman argued that, "[t]he rapidly increasing gap between rich and poor can contribute to ill health and crime, economists contend. And the gap could slow economic growth as companies find it difficult to do business in a region where most workers can't afford to live in most parts of the city. . . . Lower-wage workers can no longer find housing near their work. It becomes more expensive for them to connect to the labor market and harder for the labor market to connect to them" (Bishop, 2000, p. A1).
Local politicians have expressed similar worries (Ibid.), as have planners and activists (Interviews with planners and activists, October and November, 2000). Another columnist crystallizes these concerns, again in terms of economic competitiveness and livability. Referring to the increasing tendency of Austin's new high tech elite to build hilltop mansions on the edges of the city with majestic views of the surrounding Texas Hill Country, while also being involved in local environmental initiatives, she argues that, "People who have the means to enjoy living here often define success in economic or environmental terms. In the past year, environmentalists and business boosters forged a delicate alliance based on a report that said businesses consider quality of life when deciding where to move. . . . Economic development and the environment were linked. Yet we've overlooked one E in the three Es of quality of life: social equity. . . . There is much to be preserved in this region. Including people, not just vistas" (Richardson, 1999).
It is worth noting that Florida is compelled, in the preface to the paperback edition of The Rise of the Creative Class, to acknowledge that there does indeed seem to be a correlation between the characteristics of cities that make them "creative" and the characteristics that make them socially and economically unequal. His analysis reveals that "inequality is highest in the creative epicenters of the US economy" (Florida, 2004a, p. xv) and that Austin - his top creative city (p. xxii) - also ranks fourth in the US in terms of levels of wage inequality (p. xvi). Yet, while Florida expresses disquiet over the probable causal relationship between creativity and inequality, he does not go beyond hand-wringing and offers no ideas for concrete policy solutions - merely opining that some city will eventually figure it out (p. xvii) and elsewhere suggesting that inequality is an "open question" for policy-makers (Florida, 2004b). This seems less than helpful for cities that, like Austin, have found inequality on the rise in parallel with the "new economy."

In Austin, inequality was particularly evident in relation to housing. Austin's boom in the 1990s drove up housing costs to the point where many who worked in the city were forced to look elsewhere for affordable accommodation while many long-term residents with below-average incomes were increasingly likely to experience, or fear the prospect of, displacement as a result of gentrification. In 1997, the median house price in the city ($108,200) exceeded that of Texas' other major cities while Austin's median income ($35,118) was lower than any of those cities. This created a considerable housing affordability gap (Breyer, 1997). By the end of the decade, 55-60% of the city's housing was affordable, down from 62% in 1991. This figure was 8% lower than the national average (Sustainability Indicators Project, 2000). The city also dropped 40 places in a national survey of housing affordability in the 1990s (Austin American-Statesman, 2000) and was ranked as the second least affordable housing market in the US South in 1997 (Breyer, 1997).
The question of economic inequality and declining quality of life for many in the metropolitan population was the focus of policy and politics. In April 2000, the mayor proposed a series of related policies aimed at increasing the amount of middle- and low-income housing. These proposals complemented a longer-standing set of "SMART [Safe, Mixed-income, Accessible, Reasonably-priced, Transit-oriented] Housing Incentives" that were part of Austin's original Smart Growth approach (Interviews, 2000; Rivera, 2000). Reacting against visions of Silicon Valley and spurred by city staff's assertion that "[t]here's no question we have a housing crisis in Austin" (Hilgers, quoted in Breyer, 2000, p. G1), the mayor argued that "[o]ne of the ways that Austin is no longer Austin is if we are only a city of the rich and poor and we don't have the ability to have other people live in this town" (Watson, quoted in Rivera, 2000, p. A1; see also Greenberger, 1998).
These policy initiatives were spurred, to a great extent, by the prospect of the middle class - including public employees such as police officers, firefighters, teachers, nurses, and planners - being priced out of the city. This prospect evoked a possible future geography of negative social and environmental consequences resulting from sprawl and unaffordability. In this geographical imagination, the Austin metropolitan region would become a sprawling place of commuter-clogged roads, traffic spewing noxious fumes as workers, pushed to the suburbs and surrounding towns in search of affordable housing, commuted back and forth to the central city each day. It further threatened a bland geography of monocultural enclaves linked by arterial highways yet ironically segregated by the individualized car dependency upon which this socio-spatial form is based.
The worries of, and about, the middle class in Austin were paralleled and often challenged by political activist groups based in the city's poorest neighborhoods, located east of downtown. These groups saw the city's economic boom, its housing affordability crisis, and the Neighborhood Planning policies intended to alleviate it as particular threats to Austin's Latino and African-American poor. They were vociferous, yet eventually unsuccessful, in opposing attempts to rezone poor central city neighborhoods to allow mixed uses and multi-family housing (McCann, 2003). These policy changes would, they argued, lead to the displacement of large numbers of existing residents who rented single-family housing in the neighborhoods as landlords converted this housing stock into new profitable developments with shops on the ground floor and lofts or condos above. As a leading activist put it, "whenever there is a big economic boom, all we can do is just pray. Because we know we are going to lose a lot of the land." Expressing her organization's worries over gentrification, she argued that, "commercialization and mixed use [in East Austin neighborhoods] is going to be [high rent] condominiums and lofts. We don't fit into that equation at all . . . so to our people, it's just a major displacement. That's what we're saying. It's just a major displacement that is coming into our communities, and by changing all that zoning, all those people [gentrifiers] have been waiting to cross over [the boundaries of the neighborhoods]. . . . They are going to now move us all east of [Highway] 183. . . . And that's what we're seeing right now - you know the gentrification, and the move out of our community to further east. And the zoning is one way of how they are going to do it" (Interview with neighborhood activist, October 18, 2000). This is a vision of Austin focused not on the benefits of the Creative Class but on the forward march of gentrification frontiers through the urban core (Smith, 1996) and the subsequent scattering of Austin's long-established and tight-knit low-income communities.

Tellingly, gentrification is not a term included in the extensive index of The Rise of the Creative Class. Florida occasionally touches on it in the text, however. Yet when he does, it is in passing and the topic is quickly dispatched in favor of a more familiar optimistic narrative. For example, Florida (2004a, p. 312) acknowledges that "deep social divides remain" in rejuvenating Pittsburgh:
The edgy street-level venues of Garfield and the new upscale development on the South Side do little to address the desperate plight of a large economic underclass. And while growing numbers of Creative Class types infiltrate and gentrify low-income urban areas, huge numbers of people in all classes continue to segregate themselves distinctly into different places - and different ways of life - along income and racial lines.
Here, not only is the topic dealt with quickly and without any concrete policy prescriptions but it is phrased in such a way - ". . . classes continue to segregate themselves . . ." (Ibid., my emphasis) - that blame for gentrification and the inequality it fosters seems to be laid at the feet of its victims and their "choices." While some politicians and activists in places like Austin struggle to shape socially just economic futures, they find little in the largely apolitical and Pollyannaish Creative Class literature to aid them.
Conclusion
Every technical task involves a decision . . . about what counts. Sandercock (2004, p. 136)

This paper makes two related arguments. First, it suggests that an attention to the framing and legitimizing role of geographical imaginations provides useful analytical purchase on contemporary urban development policy-making. Second, using the case of Richard Florida's Creative Class thesis and that of Austin, Texas' experience as it became seen as an exemplary "creative city," the paper argues that the most prominent work on the Creative Class does a disservice to policy-makers looking to fully understand the range of positive and negative consequences of its proposed policy model. Thus, I suggest that Florida and others must take issues of inequality in "creative cities" more seriously, move beyond hand-wringing, and offer concrete policy prescriptions that promise to make those cities livable for more than just the Creative Class.
In reference to Sandercock's words on urban policy-making above, it seems, then, that only certain aspects of cities count for many proponents of the Creative Class thesis. For Florida, questions of inequality seem to count for less than optimistic and idealized visions of vibrant urban neighborhoods and an archipelago of "creative cities" strung out across the United States and, increasingly, the world (Florida, 2005). At first glance, what counted for Austin's party-political, business, and bureaucratic policy actors was the development of technology-oriented industries, the attraction and retention of "creative" workers, and the reassertion of the urban core as a live-work space for this class fraction. At most it seemed that these actors had accepted what Peck (2005, p. 766) describes as Florida's vision of "a form of creative trickle down" to aid the "two-thirds of the population languishing in the working and service classes." It is clear, however, that while it is possible for highly-mobile, trans-local consultants like Florida to remain detached from the questions of inequality that emerge in cities as they experience high-technology booms, politicians, journalists, activists, and residents are forced to engage with the destructive elements of these changes (e.g., wage inequality, housing affordability gaps, displacement, and increased commute times) and ask for whom it is that their quality of life and quality of place is being shaped. Solutions will not be found in the popular Creative Class work. They are more likely to be found through the careful study of the politics of policy-making in cities like Austin. The city is a cautionary example of the limits of the Creative Class thesis, but its experience offers concrete starting points for a discussion of the appropriate policies to mitigate urban inequalities. In Austin, the extreme conditions of the boom years have lessened since 2000. Evidence suggests, however, that this change owes much to a global downturn in the economy, which severely impacted the city in the early years of this century, robbing it of its boomtown status, and that inequality in wages and housing affordability, among other measures, has by no means been eliminated (Central Texas Sustainability Indicators Project, 2004). Thus, the question of the effectiveness of some of Austin's anti-inequality policies remains one to be explored further. However, the widespread acknowledgement of the link between rapid economic growth and problems of inequality and declining quality of life in the city should provide pause for thought for policy-makers attracted to the increasingly hegemonic "creative city" ideal.
On Sum Secure Degrees of Freedom for K-User MISO Broadcast Channel With Alternating CSIT
In this paper, the sum secure degrees of freedom (SDoF) of the $K$-user Multiple Input/Single Output (MISO) Broadcast Channel with Confidential Messages (BCCM) and alternating Channel State Information at the Transmitter (CSIT) is investigated. In the MISO BCCM, a $K$-antenna transmitter (TX) communicates toward $K$ single-antenna receivers (RXs), so that the message for RX $k$ is kept secret from RX $j$ with $j<k$. For this model, we consider the scenario in which the CSI of the RXs from $2$ to $K$ is instantaneously known at the transmitter while the CSI of RX $1$ is known at the transmitter (i) instantaneously for half of the time and (ii) with a unit delay for the remainder of the time. We refer to this CSIT availability as \emph{alternating} CSIT. Alternating CSIT has been shown to provide synergistic gains in terms of SDoF and is thus a viable strategy to ensure secure communication by simply relying on the CSI feedback strategy. Our main contribution is the characterization of the sum SDoF for this model as $SDoF_{\rm sum}= (2K-1)/2$. Interestingly, this $SDoF_{\rm sum}$ is attained by a rather simple achievability scheme in which the TX uses artificial noise to prevent the decoding of the messages of the unintended receivers at RX $1$. For simplicity, the proof for the case $K=3$ is first discussed in detail, after which we present the results for any number of RXs.
I. INTRODUCTION
The Broadcast Channel with Confidential Messages (BCCM) is the multi-terminal channel in which one transmitter (TX) communicates toward a set of receivers (RXs) so that the message of one user remains secret from a given set of receivers. In this paper, we study the Multiple Input/Single Output (MISO) version of this channel: a transmitter with K antennas communicating toward K receivers, each with a single antenna, over an Additive White Gaussian Noise (AWGN) channel. This channel model is particularly relevant in modern wireless communication scenarios in which illegal eavesdropping of down-link communications is easily accomplished. For this scenario, we leverage channel state information at the transmitter (CSIT) to guarantee private and secure communication; that is, since the channel realization between the TX and each of the RXs is unknown at the other RXs, this source of randomness can be used to achieve secure communication. As having the RXs feed back the CSI to the TX is expensive in terms of energy and computational complexity, one would want to minimize the CSIT availability needed to satisfy the security demands of each of the RXs. For this reason, we investigate the high-SNR asymptotics of the secure communication performance attainable through alternating CSIT in the form of the Sum Secure Degrees of Freedom ($SDoF_{\rm sum}$). Our results, although theoretical in nature, validate the effectiveness of a particularly simple strategy to attain secrecy: transmitting artificial noise toward the non-intended receiver to obfuscate the messages for other intended receivers. In the following, we indicate the CSIT availability as a vector with entries P or D, indicating whether the CSIT is available perfectly or with delay, respectively.
Literature review: Let us briefly review the literature on the SDoF of multi-terminal channels, such as the Broadcast Channel with Confidential Messages (BCCM), also relying on alternating CSIT. In [1], the authors considered the problem of secure transmission over a 2-user MISO broadcast channel with an external eavesdropper. First, they characterized the SDoF region of the fixed CSIT states PPD, PDP, and DDP for the first RX, second RX, and eavesdropper, respectively. Next, the authors established bounds on the SDoF region for the symmetric case in which the transmitter is allowed to alternate between the PDD and DPD states in equal fractions of time. When considering more than two receivers, most literature has focused on the MISO case in which the number of transmit antennas equals the number of RXs. The SDoF of a 3-user MISO BCCM when the channel state alternates between the PPP and DPP states at the RXs in equal fractions of time is investigated in [2].
For the BC with a secrecy constraint, which is the channel in which the message for one RX has to be kept secret from all other RXs, partial results (perfect CSIT for some users and delayed CSIT for the others) are presented for the multi-user MISO BC with M transmit antennas and K single-antenna users in [3]. For this problem, it is shown that the minimum amount of perfect CSIT required per user to achieve the maximum DoF of min(M, K) is min(M, K)/K. The DoF for the K-user MISO BC with alternating CSIT is analyzed in [4], and the total achievable DoF is given by $K^2/(2K-1)$.
Contributions:
In this work, we determine the $SDoF_{\rm sum}$ for the MISO BCCM with K receivers and specific secrecy constraints, where the TX with K antennas transmits toward K RXs, each with one antenna, in such a way that the message for RX k is kept secret from RX j for all j < k, as shown in Fig. 1. As such, the present work is an effort to define different levels of confidentiality in the BC model in the high-SNR regime. Indeed, in a practical communication scenario, messages have different significance, corresponding to different confidentiality levels. For this reason, we define the secrecy constraints in such a way that messages with higher importance have higher levels of confidentiality. For example, for the case K = 3, RX 1 is the one who is not security-conscious and thus feeds back its CSI to the TX in a timely manner only half of the time; RX 3 is the most security-conscious, so it provides full CSI to the TX; RX 2 is partially security-conscious but still feeds back all of its CSI. The strength of our work compared to previous works lies in extending the analysis to the model with any number of users, which, in turn, also has stronger confidentiality constraints. Both the achievability and the converse proofs rely on the synergistic benefits of alternating CSIT to achieve the optimal $SDoF_{\rm sum}$.
Paper Organization: The remainder of the paper is organized as follows. In Section II, we first present our system model and mathematical framework. The relevant results for the 2-user, 3-user, and K-user BC are mentioned in Section III. Our main result is presented in Section IV, together with its proofs. We finally conclude the paper in Section V. Notation: The notation [n : m] indicates the set {n, n + 1, . . . , m − 1, m}; we also adopt the shorthand [n] for [1 : n]. The variable of the i-th receiver is indicated with the subscript i, i.e., $X_i$. The time dependency is indicated in brackets, i.e., X(t). We also adopt the shorthand notation $x^n$ for $\{x(t)\}_{t \in [n]}$. Vectors are indicated using bold lower-case letters, i.e., v; all vectors are taken to be column vectors. Matrices are indicated using bold upper-case letters, i.e., M. Random variables/vectors (RVs) are indicated with upper-case letters, i.e., X, and with $\mathcal{X}$ we indicate the support of the RV X. The notation CN(µ, Σ) indicates the circularly symmetric Gaussian distribution with mean µ and covariance matrix Σ.
II. SYSTEM MODEL AND DEFINITIONS
A K-user Broadcast Channel with Confidential Messages (BCCM) consists of a K-user broadcast channel (BC) in which some messages are shared between users while others should be concealed from unintended receivers according to secrecy conditions. The K-user BCCM with alternating CSIT is a variation of the BCCM in which the CSIT is provided in an alternating fashion. In the following, we consider the case in which (i) half of the time, the CSI of the first receiver is known perfectly at the transmitter, while (ii) the other half of the time this CSI is known with a delay of one time unit. The CSI of the other receivers is always perfectly known at the transmitter.
More specifically, we consider the multiple input/single output (MISO) BCCM in which the transmitter (TX) is equipped with K antennas, while each of the K receivers (RXs) is equipped with one antenna, as depicted in Fig. 1. The transmitter communicates to the receivers over T channel uses. The input/output relationship between the transmitter and the k-th receiver at time instant $t \in [T]$ is obtained as

$Y_k(t) = h_k^H(t) X(t) + N_k(t), \quad (1)$

where $X(t) \in C^K$ is the channel input (column) vector, $Y_k(t) \in C$ is the channel output, $h_k(t) \in C^K$ is the channel state vector, and $N_k(t) \in C$, $N_k(t) \sim CN(0, 1)$, is the AWGN. Each entry in the channel state vector is drawn i.i.d. from the continuous distribution $P_H$. Additionally, the channel input is subject to the second moment constraint

$\frac{1}{T} \sum_{t \in [T]} E\left[ \| X(t) \|_2^2 \right] \leq P,$

where $\| \cdot \|_2$ is the $L_2$ norm and $P \in R^+$. When vectorizing the channel output over the user index $k \in [K]$, we obtain the more compact expression

$Y(t) = H^H(t) X(t) + N(t),$

where we have used the vectorizations $Y(t) = [Y_1(t), \ldots, Y_K(t)]^T$ and $N(t) = [N_1(t), \ldots, N_K(t)]^T$, while $H(t) = [h_1(t), \ldots, h_K(t)]$ collects the channel state vectors. In the following, we use the notation $S^t = \{H(i)\}_{i \in [t]}$ to compactly indicate the CSI of all users up to time t. The channel state information (CSI) is assumed to be made available at the transmitter in the following fashion: (i) perfect CSIT for RX k, $k \in [2:K]$, for all channel uses: the CSI $h_k(t)$ is instantaneously available at the transmitter; (ii) perfect CSIT for RX 1 for half of the channel uses: for $t \in [1, T/2]$, the CSI $h_1(t)$ is instantaneously available at the transmitter; (iii) delayed CSIT for RX 1 for half of the channel uses: for $t \in [T/2, T]$, the CSI $h_1(t)$ is made available at the transmitter with a unit time delay. We refer to the CSIT availability above as alternating CSIT [5], and we indicate it in Fig. 1 as a switch. Finally, we assume that the CSI availability, that is, whether the CSI is perfect or delayed, is known at all users instantaneously.
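For concreteness, the following numerical sketch draws one channel use of (1). It is illustrative only: the number of users, the circularly symmetric Gaussian stand-in for the continuous distribution $P_H$, and the power level are assumptions rather than quantities fixed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3       # number of TX antennas / single-antenna RXs (assumed)
P = 100.0   # input power budget (assumed)

# Channel matrix H: column k is h_k, entries i.i.d. CN(0, 1) as a
# stand-in for the continuous distribution P_H.
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)

# A channel input satisfying E[||X||_2^2] <= P (uniform power split here).
X = np.sqrt(P / K) * np.exp(1j * rng.uniform(0, 2 * np.pi, K))

# Unit-variance AWGN at each receiver.
N = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

# Vectorized input/output relationship: Y = H^H X + N.
Y = H.conj().T @ X + N
print(np.round(Y, 3))
```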
Next, we introduce some standard definitions of code, achievable region, and SDoF. In the MISO BCCM, the TX wishes to communicate the message $W_k$, uniformly distributed over $[2^{\lfloor T R_k \rfloor}]$, to user k. A code for the K-user MISO BCCM with alternating CSIT consists of the two encoding mappings

$X(t) = f_P(W_1, \ldots, W_K, S^t)$ and $X(t) = f_D(W_1, \ldots, W_K, \{h_k^t\}_{k \in [2:K]}, h_1^{t-1})$

(P for perfect CSIT from RX 1, D for delayed CSIT from RX 1) and K decoding mappings $\widehat{W}_k = g_k(Y_k^T, S^T)$, $k \in [K]$. For such a code, the probability of error at time T, $P_e(T)$, is defined as $P_e(T) = \max_{k \in [K]} \Pr[\widehat{W}_k \neq W_k]$, where $Y_k^T$ is obtained by applying the encoding functions to produce the sequence of inputs $\{X(t)\}_{t \in [T]}$. A rate tuple $[R_1, R_2, \ldots, R_K]$ is said to be securely achievable if there exists a sequence of codes such that the probability of error vanishes as $T \to \infty$ while also

$I(W_i; \{Y_j^T\}_{j \in [1:i-1]} \mid S^T) \leq T\epsilon, \quad \forall i \in [2:K]. \quad (10)$

In the literature, the condition in (10) is referred to as perfect secrecy.
The secure capacity region is the convex hull of all the securely achievable rate tuples. The sum secure capacity of the BCCM with alternating CSIT, denoted as $C_{\rm sum}(P)$, is the supremum of the sum of the rates over all securely achievable rate tuples, i.e., $C_{\rm sum}(P) = \sup \sum_{k \in [K]} R_k$. Finally, the SDoF is obtained as

$SDoF_{\rm sum} = \lim_{P \to \infty} \frac{C_{\rm sum}(P)}{\log P}.$
III. RELEVANT RESULTS
Let us briefly review some results on the SDoF of various BCCM models available in the literature.
A. 3-user MISO BCCM
In [6], for a 3-user MISO BC in which the transmitter has perfect CSIT of the channel to one receiver and delayed CSIT of the channels to the other two receivers, two new communication schemes are proposed that can achieve the DoF tuple (1, 1/3, 1/3) with a sum DoF of 5/3. The SDoF is, instead, studied in [1]. Theorem 3.1 [1]: For the 3-user MISO BCCM, the SDoF with either perfect or delayed CSIT is characterized in closed form. Additionally, in [1], the authors establish bounds on the SDoF region for the symmetric case in which the transmitter is allowed to alternate between the PDD and DPD states in equal fractions of time. In [7], the optimal DoF for the 2-antenna 3-user MISO BC with alternating CSIT, where the permitted CSIT states are PPP, PPD, PDP, PDD, and DDD, is characterized.
B. K-user MISO BCCM
In [5], partial results are presented for the multi-user MISO BC with M transmit antennas and K single-antenna users. For this problem, it is shown that the minimum amount of perfect CSIT required per user to achieve the maximum DoF of min(M, K) is min(M, K)/K. The work in [4] studied the DoF of the K-user MISO BC by utilizing the synergistic benefits of alternating CSIT. The authors consider perfect, delayed, and unknown CSIT for each user among different time slots. They calculated the fraction of time spent in each CSIT state and showed that the achievable DoF for the proposed network is given by $K^2/(2K-1)$.
IV. MAIN RESULT
The main result of the paper is stated in the next theorem.

Theorem 4.1: For the K-user MISO BCCM with alternating CSIT between the states $P^K$ and $DP^{K-1}$ in equal fractions of time, the sum SDoF is

$SDoF_{\rm sum} = \frac{2K-1}{2}.$
For ease of understanding, we first present the case of K = 3 users in detail in Sec. IV-A and then move on to the case of arbitrary K in Sec. IV-B.
A. 3-User PPP/DPP BCCM Channel

Let us next present in some detail the proof of the following corollary of Th. 4.1.

Corollary: For the 3-user MISO BCCM with alternating CSIT between the states PPP and DPP in equal fractions of time,

$SDoF_{\rm sum} = \frac{5}{2}. \quad (13)$

Proof: The proof is divided into achievability and converse parts. Conceptually, the proof will show that in the PPP state it is possible to attain $SDoF_{\rm sum} = 3$, since the transmitter can use orthogonal transmissions toward each receiver. In the state DPP, instead, only $SDoF_{\rm sum} = 2$ can be achieved, since orthogonal transmissions cannot be used to hide the messages of RX 2 and RX 3 from RX 1. In this latter case, artificial noise is transmitted toward RX 1, so as to hide the messages of RX 2 and RX 3. This artificial noise is orthogonal to the signal space at RX 2 and RX 3, to allow secure transmissions toward these two receivers. Note that the specific order in which the states PPP and DPP occur does not impact the SDoF since the state is known at all users. Also note that, for the case of three receivers, the perfect secrecy conditions in (10) are obtained as

$I(W_2; Y_1^T \mid S^T) \leq T\epsilon, \qquad I(W_3; Y_1^T, Y_2^T \mid S^T) \leq T\epsilon. \quad (14)$

• Achievability: As we are concerned with the high-SNR asymptotics, we will disregard the effect of the additive noise. The achievability proof is as follows: without loss of generality, assume that the channel is in the state PPP at time $t = t_P \in [1, \lfloor T/2 \rfloor]$ and in the state DPP at time $t = t_D \in [\lfloor T/2 \rfloor, T]$. For each pair of times $[t_D, t_P]$, we send one symbol, $a_1$, to RX 1 and two symbols each, i.e., $(a_2, b_2)$ and $(a_3, b_3)$, to RX 2 and RX 3. This is accomplished as follows. Let $X(t_P)$ be obtained as

$X(t_P) = v_1(t_P) a_1 + v_2(t_P) a_2 + v_3(t_P) a_3,$

where $v_k(t_P)$, $k \in [3]$, are vectors satisfying the linear independence conditions $h_j^H(t_P) v_k(t_P) = 0$ for all $j \neq k$, so that the symbol $a_k$ can be securely decoded at RX k. At the time instant $t = t_D$, the encoder can no longer securely communicate to RX 1 using interference nulling: in this case we let the channel input be

$X(t_D) = v_1(t_D) U + v_2(t_D) b_2 + v_3(t_D) b_3, \quad (17)$

where, this time, we satisfy the orthogonality conditions $h_j^H(t_D) v_k(t_D) = 0$ for $j \neq k$ with $j, k \in [2:3]$ and $h_j^H(t_D) v_1(t_D) = 0$ for $j \in [2:3]$, while $U \sim CN(0, P)$ is an artificial noise aimed at concealing the private messages $b_2$ and $b_3$ from RX 1. With the choice of channel input in (17), we obtain the channel outputs

$Y_1(t_D) = L(b_2, b_3, U), \qquad Y_2(t_D) = h_2^H(t_D) v_2(t_D) b_2, \qquad Y_3(t_D) = h_3^H(t_D) v_3(t_D) b_3,$

where $L(\cdot)$ denotes a linear combination of its arguments. Since 5 symbols were transmitted in 2 time instants, we have that $SDoF_{\rm sum} \geq 5/2$. To show that the SDoF of 5/2 is indeed securely achievable, it is necessary to verify the constraints in (14). Let us begin with the secrecy constraint for the message for RX 2 in the signal received at RX 1: here we use the fact that the message symbols are independent and the fact that, due to the high-power artificial noise U, it is impossible to reliably obtain the linear combination of $a_2$ and $b_2$ from $Y_1$. Next, let us consider the secrecy of the message for RX 3 in the signals received at RX 1 and RX 2: again, the constraint follows from the independence of the message symbols and the effect of the artificial noise.
Next, we write a chain of inequalities in which (21b) follows by Fano's inequality, and (21c) follows from a further inequality, (22), in which we have used the secrecy condition in (14). In (21c), we also use a bound in which (23b) is due to the independence of the messages, (23d) follows from the secrecy conditions in (14), and (23f) from Fano's inequality. An alternative bound on the sum rate is obtained by using the inequality of Lemma 4.3, where (25) follows from the fact that conditioning reduces entropy. Now, applying (25) in (24d), we conclude the desired bound. Finally, using the fact that $H(Y_i^T \mid S^T) \leq T \log P$, dividing both sides by $T \log P$, and letting $P \to \infty$ and $T \to \infty$, we conclude that $SDoF_{\rm sum} \leq 5/2$.
B. K-User $P^K/DP^{K-1}$ BCCM Channel

As per usual, the proof of Th. 4.1 is presented in two parts: the achievability and the converse. 1) Achievability: In the following, we derive an achievable scheme that attains $SDoF_{\rm sum} = (2K-1)/2$ by having the TX send messages $W_1, W_2, \ldots, W_K$ to the K RXs while attaining the secrecy conditions (10). This scheme is described as follows. The TX sends the symbol $a_1$ to the first receiver as the message $W_1$ in the time slot $t_P$, and, for every other RX $k \in [2:K]$, it sends the symbol tuple $(a_k, b_k)$ as the message $W_k$ during the two time slots $t_P$ and $t_D$. In the time slot $t_P$, in which the channel is in state $P^K$, the symbol $a_1$ is sent to the first receiver, and the symbol $a_k$ is sent to every RX k with $k \in [2:K]$. Since in this time slot the perfect CSI of all RXs is known at the TX, it is capable of sending the symbols in any direction, knowing that the unintended RXs will not learn anything about the symbols. Moreover, each desired RX privately receives its symbol through the channel state coefficients and a suitable choice of the directions of the interference beamforming vectors. We denote the interference beamforming vectors by $v_1(t_P), v_2(t_P), \ldots, v_K(t_P)$, where, for every $k \in [K]$, $v_k(t_P) \in C^K$ is a normalized column vector of order K for the k-th RX. In the time slot $t_P$, according to (1), the input/output equations are

$X(t_P) = \sum_{k \in [K]} v_k(t_P) a_k, \quad (31a)$

$Y_k(t_P) = \langle h_k(t_P), v_k(t_P) \rangle a_k + \sum_{i \in [K], i \neq k} \langle h_k(t_P), v_i(t_P) \rangle a_i, \quad (31b)$

where the latter term in (31b) represents interference at RX k, i.e., signals not carrying useful information for this RX. Note that, due to the high signal-to-noise ratio, the additive noise is omitted from the output equations. For every receiver k, the interference is completely removed only if the other RXs cannot access the direction of the interference beamforming vector $v_k(t_P)$. In other words, $\langle h_k(t_P), v_i(t_P) \rangle$ should be zero in equation (31b) for every $k \neq i$ with $i \in [K]$. If we define $H^H_{/k}(t)$ as the $(K-1) \times K$ matrix obtained by removing the k-th row from the channel state matrix $H^H(t)$ at time instant t, then, by collecting the interference-nulling equations of all K RXs, we can calculate the interference beamforming vectors as

$H^H_{/k}(t_P) v_k(t_P) = 0, \quad k \in [K].$

As a result, each RX can decode its desired symbol by using its channel state vector and interference beamforming vector. In the other time slot, $t_D$, we assume the channel is in state $DP^{K-1}$. Since the TX does not know anything about the current CSI of RX 1, there is no possibility to transmit any symbol to this RX confidentially. Moreover, we should find a way in which this partial CSIT does not affect the other RXs. To solve this problem, we use the technique of transmitting an artificial noise $u \sim CN(0, P)$. Denoting the interference beamforming vectors in the second time slot by $v_1(t_D), v_2(t_D), \ldots, v_K(t_D)$, the TX at this stage intends to send the symbol $b_k$ as the other part of the message $W_k$ to every receiver $k \in [2:K]$. For the input we can write

$X(t_D) = v_1(t_D) u + \sum_{k \in [2:K]} v_k(t_D) b_k.$

For each receiver $k \in [K]$, $k \neq 1$, the output is

$Y_k(t_D) = \langle h_k(t_D), v_k(t_D) \rangle b_k.$

If we consider $H^H_{/k,1}(t)$ as the $(K-2) \times K$ matrix obtained by removing the first and the k-th rows of the channel state matrix $H^H(t)$ at time instant t, then, similarly to the previous time slot, to dispose of the interference and deliver the desired symbols to each receiver except RX 1, we use

$H^H_{/k,1}(t_D) v_k(t_D) = 0, \quad k \in [2:K].$

Since perfect CSIT is available for RXs 2 to K, we choose $v_1(t_D)$ so that it is orthogonal to all of these channel coefficient vectors; that is, we choose $v_1(t_D)$ such that $H^H_{/1}(t_D) v_1(t_D) = 0$.
For the first receiver, the output is calculated as

$Y_1(t_D) = \langle h_1(t_D), v_1(t_D) \rangle u + \sum_{k \in [2:K]} \langle h_1(t_D), v_k(t_D) \rangle b_k = L(b_2, \ldots, b_K, u), \quad (36b)$

where, in equation (36b), $L(b_2, \ldots, b_K, u)$ denotes a linear combination of the symbols $b_2, b_3, \ldots, b_K$ and the artificial noise u.
The first RX receives this linear combination drowned by the Gaussian noise u, and will be unable to retrieve the irrelevant symbols. In other words, the transmitter sends the high-power Gaussian noise u in the direction of $v_1(t_D)$ so that its lack of knowledge of the CSI of this RX does not affect the outputs of the other RXs. In this achievable scheme, the transmitter sends one symbol to the first RX and two symbols to every other RX over two time slots, which results in the sum SDoF $(K-1) + \frac{1}{2} = \frac{2K-1}{2}$. We now prove the secrecy constraints for this achievable scheme. According to equation (10), for each RX i the secrecy is preserved if $I(W_i; \{Y_j^T\}_{j \in [1:i-1]} \mid S^T) \leq T\epsilon$ holds. By substituting the outputs of the two time slots, we obtain, for each $i \in [K]$, a chain of equalities in which (37a) results from the independence of the messages, and (37b) results from the fact that, due to the high power of the artificial noise, it is impossible to attain the linear combination of the desired symbols.
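Numerically, the beamforming vectors of this scheme are plain null-space computations. The sketch below is illustrative only and uses an SVD-based null space in place of whatever solver an implementation might use; it builds the $DP^{K-1}$-slot precoders, with $v_1(t_D)$ orthogonal to $h_2, \ldots, h_K$ (the artificial-noise direction) and, for each $k \geq 2$, $v_k(t_D)$ orthogonal to every $h_j$ with $j \in [2:K]$, $j \neq k$.

```python
import numpy as np

def null_space_vector(A: np.ndarray) -> np.ndarray:
    """Return a unit vector v with A @ v = 0 (A has fewer rows than columns)."""
    _, _, vh = np.linalg.svd(A)
    return vh[-1].conj()  # right singular vector of the smallest singular value

def dp_slot_precoders(H: np.ndarray) -> list[np.ndarray]:
    """H is K x K with column k = h_{k+1} (0-based). Only columns 2..K are
    usable, since the CSIT of RX 1 is delayed in the DP^{K-1} state."""
    K = H.shape[0]
    Hh = H.conj().T                               # row j is h_{j+1}^H
    v = [np.zeros(K, dtype=complex)] * K
    v[0] = null_space_vector(Hh[1:, :])           # h_j^H v_1 = 0, j in [2:K]
    for k in range(1, K):
        rows = [j for j in range(1, K) if j != k] # drop rows of RX 1 and RX k+1
        v[k] = null_space_vector(Hh[rows, :])
    return v

# Quick check that the orthogonality conditions of the scheme hold.
rng = np.random.default_rng(1)
K = 4
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
v = dp_slot_precoders(H)
print(abs(H[:, 1].conj() @ v[0]) < 1e-10)  # noise direction invisible at RX 2
print(abs(H[:, 2].conj() @ v[1]) < 1e-10)  # b_2 invisible at RX 3
```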
2) Converse: We first introduce a property which will be useful to establish the results in this work, called the statistical equivalence property (SEP) [1]; then, by using this lemma, we present the proof of the converse part in Appendix B. Consider the channel input/output relationship (1) for RX 1 in the channel state $DP^{K-1}$:

$Y_{1,DP^{K-1}}(t) = h^H_{1,DP^{K-1}}(t) X(t) + N_{1,DP^{K-1}}(t).$

We want to define a virtual RX, or a statistically indistinguishable RX, for the first RX, in such a way that the channel output of this virtual RX is independent of the channel output of the actual RX while having the same distribution. Therefore, $h^H_{1,DP^{K-1}}(t)$ is replaceable with $\tilde{h}^H_{1,DP^{K-1}}(t)$ at the virtual RX, where these two vectors are independent and identically distributed. Likewise, we can replace the Gaussian noise $N_{1,DP^{K-1}}(t)$ with $\tilde{N}_{1,DP^{K-1}}(t)$, which has an independent, identical distribution. Given these, the output of the virtual RX is

$\tilde{Y}_{1,DP^{K-1}}(t) = \tilde{h}^H_{1,DP^{K-1}}(t) X(t) + \tilde{N}_{1,DP^{K-1}}(t).$

If we let $\bar{S}^T = \{S^T, \tilde{S}^T\}$, where $\tilde{S}^T$ collects the virtual channel states, and $\Omega$ is an auxiliary random variable, then the SEP states that

$h(Y_{1,DP^{K-1}}(t) \mid \Omega, \bar{S}^T) = h(\tilde{Y}_{1,DP^{K-1}}(t) \mid \Omega, \bar{S}^T).$

Lemma 4.5: For the channel model (1) with the secrecy conditions (10) and alternating CSIT between the states $P^K$ and $DP^{K-1}$, we have: The proof is given in Appendix A.
V. CONCLUSION
In this paper, we used the synergistic benefits of alternating CSIT to study the SDoF of the K-user Multiple Input/Single Output (MISO) Broadcast Channel with Confidential Messages (BCCM) with alternating Channel State Information at the Transmitter (CSIT). In the MISO BCCM, a transmitter (TX) with K antennas transmits toward K receivers (RXs) in such a way that the message for RX k is kept secret from RX j for all j < k. The channel between the TX and each RX is a fading channel: the CSI is assumed to be known instantaneously at the transmitter for receivers 2 to K. On the other hand, the CSI of RX 1 is known at the transmitter (i) instantaneously for half of the time and (ii) with a unit delay for the remainder of the time. For this channel model, we calculated the high-SNR characterization of the secure sum capacity in the form of the Secure Degrees of Freedom (SDoF), as SDoF = (2K − 1)/2. In the achievability proof, we use artificial noise transmission to retain confidentiality and exploit the orthogonal signal space. For the converse proof, we adopt the so-called statistical equivalence property lemma.
APPENDIX A
PROOF OF LEMMA 4.5

The second expansion for the virtual RX is a chain of equalities in which (42b) and (43b) follow by the chain rule. Now, by summing equations (42b) and (43b), we can proceed, where (44b) follows from a further bound. We continue to lower-bound (44d) as follows: (46a) follows because, given $Y_{1,DP^{K-1}}(t)$, $\tilde{Y}_{1,DP^{K-1}}(t)$, $Y^{t-1}_{1,DP^{K-1}}$, and $\{Y_j^T\}_{j \in [0:K-1], j \neq 1}$, we can reconstruct $Y_{K,DP^{K-1}}(t)$ within noise distortion, and (46b) follows from the fact that conditioning reduces entropy. This concludes the proof of the lemma.
APPENDIX B
PROOF OF CONVERSE FOR K USERS

We begin the converse proof as follows: (49a) follows by using the independence of the messages from each other and from the channel states, (49c) follows by Fano's inequality, and (49d) follows from (50c) and (51g) below. For RXs one to K − 1 we can obtain:
Design of indoor air quality monitoring system based on wireless sensor network
With the development of industrial technology, smog weather has become frequent in China, affecting air quality and human health. It is important to monitor indoor air quality in real time and act on it promptly. This paper designs an air quality monitoring system based on a ZigBee wireless sensor network. The system consists of multiple terminal modules, a coordinator module, a control module, and a monitoring center. The terminal modules collect data using a variety of sensors and send them to the monitoring center through GPRS. Once an air quality index exceeds its set threshold, the user is promptly alerted and the corresponding air purification process is performed. The system is simple and convenient, its monitoring results are accurate, it performs well in real time, and it is widely applicable.
Introduction
With the rapid development of China's economy, air pollution has become more and more serious, making environmental governance a focus of society. Environmental problems threaten human health and affect people's daily work and life to a certain extent. To control air pollution, the first step is to monitor air quality and then take control measures [1]. Among air pollutants, PM2.5 accounts for more than half of the total airborne particulate matter, so PM2.5 has become an important air quality index (AQI) for monitoring [2]. Outdoor air quality is monitored by government departments, which issue warnings, but there is little monitoring of the indoor spaces where people spend long periods of time [3]. Moreover, air quality varies over time, so users cannot keep track of indoor air quality in real time. Therefore, it is necessary to design an indoor air quality monitoring system that is easy to install, gives accurate monitoring results, and performs well in real time. Considering the advantages of wireless sensor networks, such as high flexibility, modularity, low energy consumption, and strong anti-interference ability [4,5], an indoor air quality monitoring system based on a wireless sensor network is designed.
Overall system design
The overall design of the system is shown in Figure 1. The system consists of several sensor modules, control modules, a coordinator module, and a user monitoring center. The sensor modules mainly collect air quality data. Air quality is affected not only by PM2.5 but also by nitrogen oxides and sulfur dioxide. Therefore, the sensor modules mainly include smog sensors, SO2 sensors, and NO2 sensors. These sensors are connected to ZigBee terminal nodes to form routing nodes. Several such routing nodes are placed on the ceiling of each room and of the living room, so that the smog concentration of each room can be accurately measured. The ZigBee terminal node processes the collected data and sends it to the coordinator. The coordinator transmits the received node data to the user monitoring center through the GPRS module, and relays the user's commands to the control module. The control module mainly consists of a ZigBee terminal node, a stepping motor, an electric curtain, an infrared module, and an air purifier. When an air quality index exceeds the standard, the smartphone issues commands to close the curtains and turn on the air purifier. When the ZigBee terminal node receives a command, it controls the curtains and the air purifier accordingly. The user monitoring center is monitoring software installed on a smartphone that displays and analyzes the received data in real time and can send control commands remotely.
Terminal node circuit design
The ZigBee terminal nodes and the coordinator node are ZigBee nodes with the CC2530 as their core. The CC2530 is a chip that supports TI's IEEE 802.15.4 and ZigBee protocols in the 2.4 GHz band. It contains a high-performance RF transceiver and an enhanced 8051 MCU core, and it also integrates a 16-bit A/D converter that collects analog data from the sensors. The terminal node is connected to the sensors, collects air quality data, and transmits it through the RF module after processing. The block diagram of the node is shown in Figure 2. The air quality data acquisition module includes a smog sensor, an SO2 sensor, an NO2 sensor, and a temperature and humidity sensor. The smog sensor is Sharp's GP2Y1010AU0F, which also serves as the PM2.5 sensor.
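The GP2Y1010AU0F is an analog optical dust sensor, so the sampled voltage must be converted to a dust density before it can be compared against an AQI threshold. The sketch below illustrates one common way to do this; the 0.6 V clean-air offset and the roughly 0.5 V per 0.1 mg/m³ slope are approximations taken from the shape of the sensor's typical output characteristic, not values specified in this paper, and should be calibrated per unit.

```python
def dust_density_mg_m3(v_out: float, v_clean_air: float = 0.6) -> float:
    """Convert a GP2Y1010AU0F output voltage (V) to an approximate dust
    density (mg/m^3), using the linear region of the typical output
    curve: about 0.5 V per 0.1 mg/m^3 above the clean-air offset.
    Both constants are assumptions to be calibrated."""
    slope_v_per_mg_m3 = 5.0  # 0.5 V / 0.1 mg/m^3
    density = (v_out - v_clean_air) / slope_v_per_mg_m3
    return max(density, 0.0)  # clamp readings below the clean-air offset

# Example: a 1.2 V sample corresponds to roughly 0.12 mg/m^3 (120 ug/m^3).
print(round(dust_density_mg_m3(1.2), 3))
```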
Coordinator node hardware design
The main function of the coordinator is to set up the wireless sensor network, receive the data of the terminal nodes and send them remotely to the mobile phone through GPRS, and forward the instructions issued by the mobile phone to the terminal control node. The GPRS module uses the MC39i from Siemens, which supports data and voice transmission, short messages, and fax. The MC39i communicates with the coordinator node through the serial port, and it embeds the TCP/IP protocol to facilitate Internet data transmission. The coordinator node module is shown in Figure 3, and includes an antenna module, a power supply, a clock, a display, an alarm module, and a GPRS module. The display module shows the air quality data in real time, and the alarm module is activated when the data exceed the set thresholds.
Control module
The control module is mainly used to switch the air purifier and the curtains, and mainly includes an infrared module, a stepping motor, an air purifier, and an electric curtain. The infrared module is used to control the air purifier. First, the air purifier's remote control is learned through the infrared learning mode, and the purifier's infrared code segments are encoded and stored. After a control command for the air purifier is received, the corresponding infrared code segment is sent to control the purifier remotely.
Terminal detection node software design
The software flow chart of the monitoring terminal node is shown in Figure 4. The terminal detection node is divided into a sensor data acquisition module and a control module. The sensor module is mainly responsible for collecting air quality data and transmitting them to the coordinator, while the control module operates the corresponding home appliance according to the received instructions. Each module of the system must first be initialized and successfully join the network before it can exchange data. The terminal has multiple sensor modules, which collect data from the temperature and humidity sensor, the PM2.5 sensor, the SO2 sensor, and the NO2 sensor in turn. The system judges whether the data are valid: if they are, the address code is read and the data are sent; otherwise the data are discarded. After the data are sent, the system can delay or sleep to reduce power consumption.
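A minimal sketch of this acquisition loop is shown below. It is illustrative Python pseudocode rather than the 8051 firmware an actual CC2530 node would run; the sensor names, the validity bounds, and the 60 s sleep interval are assumptions, and read_sensor/send_to_coordinator stand in for the real ADC and radio routines.

```python
import time

SENSORS = ("temp_humidity", "pm25", "so2", "no2")

def read_sensor(name: str):
    """Placeholder for the per-sensor ADC read; returns None on failure."""
    return None

def is_valid(reading) -> bool:
    # Keep a reading only if it lies in a plausible range (bounds illustrative).
    return reading is not None and 0.0 <= reading <= 10_000.0

def terminal_node_loop(node_address: int, send_to_coordinator) -> None:
    """Mirror the flow of Figure 4: poll each sensor in turn, validate,
    tag the sample with the node address, transmit, then sleep."""
    while True:
        for name in SENSORS:
            reading = read_sensor(name)
            if is_valid(reading):
                send_to_coordinator({"addr": node_address,
                                     "sensor": name,
                                     "value": reading})
            # invalid samples are simply discarded
        time.sleep(60)  # delay/sleep to reduce power consumption
```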
Coordinator node software design
The coordinator node mainly has two functions. On one hand, it receives the data transmitted over the ZigBee network, unpacks them, and forwards them to the remote monitoring center through the GPRS module. On the other hand, it receives the commands issued by the remote user center through the GPRS module and relays them to the terminal nodes. After the coordinator initializes the system, it sets up the network. Once the network is successfully established, it waits for serial port interrupts. Because both the GPRS module and the ZigBee terminal modules communicate with the coordinator's main control chip over serial links, the serial interrupt routine handles two categories of traffic: one from the GPRS network and one from the ZigBee network. When an interrupt occurs, the routine first determines whether it comes from the GPRS module; if so, the command is forwarded to the ZigBee network. If not, it determines whether the interrupt comes from the ZigBee module; if not, the data are discarded. Otherwise, the packet is parsed and the received data are compared with the set thresholds: if a value is greater than its threshold, the coordinator raises an alarm and notifies the user; otherwise it sends the data to GPRS at a certain interval. The software flow chart of the coordinator node is shown in Figure 5.
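The dispatch logic just described can be summarized in a few lines. The following Python sketch is illustrative only: the threshold values are placeholders, and send_to_zigbee/send_to_gprs/raise_alarm stand in for the node's actual serial and radio routines.

```python
THRESHOLDS = {"pm25": 75.0, "so2": 150.0, "no2": 200.0}  # placeholder limits

def on_serial_interrupt(source: str, payload: dict,
                        send_to_zigbee, send_to_gprs, raise_alarm) -> None:
    """Mirror Figure 5: commands arriving from GPRS are forwarded to the
    ZigBee network; sensor packets arriving from ZigBee are checked
    against the thresholds and either trigger an alarm or are relayed
    upstream; anything else is discarded."""
    if source == "gprs":
        send_to_zigbee(payload)            # user command, e.g. start purifier
    elif source == "zigbee":
        sensor, value = payload["sensor"], payload["value"]
        limit = THRESHOLDS.get(sensor)
        if limit is not None and value > limit:
            raise_alarm(sensor, value)     # alert the user immediately
        else:
            send_to_gprs(payload)          # periodic report to the center
    # unknown interrupt sources are discarded
```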
System test
In the test experiment, two terminal nodes and one coordinator node were selected to form a monitoring system. One terminal node was connected to the SHT11 temperature and humidity sensor, the PM2.5 sensor, the SO2 sensor, and the NO2 sensor. The other terminal node was connected to the air purifier and the electric curtain. The real-time monitoring software was installed on a mobile phone and a PC. Figure 6(a) shows the real-time air quality data displayed on the display connected to the coordinator, and Figure 6(b) shows the air quality data displayed by the monitoring software. The test results show that the system can monitor the air quality data in real time.
Conclusion
In this paper, the hardware design and software development of an indoor air quality monitoring system based on a wireless sensor network are completed, enabling real-time collection of indoor temperature and humidity and of PM2.5, SO2, and NO2 concentrations. Through the monitoring software, users can keep abreast of indoor air quality information. When the air quality is very poor, users can remotely control the air purifier and the electric curtains to purify the air in advance, so that they return home to a good living environment. The system can also be widely applied to air quality monitoring in other buildings, greenhouses, communities, and similar settings, and has great application prospects.
Validity of the iLOAD® app for resistance training monitoring
Background This study aimed (I) to assess the inter-rater agreement for measuring the mean velocity (MV) of the barbell with the iLOAD® app, and (II) to compare the magnitude of the MV and total work of a training session between the iLOAD® app and a linear encoder (reference method). Method Sixteen young healthy individuals (four women and 12 men) were tested in two sessions separated by 48 h. The 10 repetition maximum (RM) load was determined in the first testing session in the half squat exercise. The second testing session consisted of 3 sets of 10 repetitions of the half squat exercise performed against the 10RM load. Both the iLOAD® app and a linear encoder were used to calculate the MV and total work of each training set. MV was recorded with the iLOAD® app by two independent researchers to evaluate the inter-rater agreement. Results Trivial differences and nearly perfect correlations were observed between raters for the MV values collected under individual sets (effect size [ES] ≤ 0.02, r ≥ 0.987), as well as for the whole training session (ES = 0.01, r = 0.997). Trivial-small differences and nearly perfect correlations were observed between the iLOAD® app and the linear encoder (Chronojump, Barcelona, Spain) for MV (ES ≤ 0.25, r ≥ 0.903) and total work (ES ≤ 0.05, r ≥ 0.973). Bland-Altman plots did not reveal heteroscedasticity of the errors between the iLOAD® app and the linear encoder for MV (r2 = 0.010) and total work (r2 < 0.001). Conclusions iLOAD® is a valid smartphone app which can provide real-time feedback on the MV and total work completed in a set of multiple repetitions in the half squat exercise.
INTRODUCTION
Resistance training (RT) is a fundamental part of training for competitive athletes (Suchomel, Nimphius & Stone, 2016) as well as for the general population (Myers, Beam & Fakhoury, 2017; Guizelini et al., 2018). RT not only provides an essential stimulus for the development of muscle mass and strength (Schoenfeld, 2013; Suchomel et al., 2018), but it may also lead to better performance in different tasks such as jumping, running, sprinting, kicking, and shooting (Folland & Williams, 2007; Suchomel, Nimphius & Stone, 2016). The monitoring of RT is important for the management of fatigue and for exploring the association between the RT performed and the chronic adaptations induced in physical performance (Scott et al., 2016; Fernandes, Lamb & Twist, 2018). A wide range of tools is currently available for RT monitoring, including perceived exertion scales (Singh et al., 2007; Robertson et al., 2008), linear position transducers (Harris et al., 2010), force plates (Dugan et al., 2004), contact mats (Crewther et al., 2011), high-speed cameras (Sañudo et al., 2016), isokinetic dynamometers (Ratamess et al., 2016), and accelerometers (Balsalobre-Fernández et al., 2016). These tools are frequently used to evaluate the effect of RT programs.
One of the tools that has received growing scientific attention in recent years for physical activity and RT monitoring is the smartphone application (i.e., app) (Peart, Balsalobre-Fernandez & Shaw, 2018). Smartphone apps are popular due to their low cost and high portability. These apps collect data using different technologies, such as global positioning systems, accelerometers, gyroscopes, microphones, or high-speed cameras (Higgins, 2016; Peart, Balsalobre-Fernandez & Shaw, 2018). It is currently accepted that movement velocity is one of the most important variables for monitoring and prescribing RT programs (González-Badillo, Marques & Sánchez-Medina, 2011; Jovanovic & Flanagan, 2014). Although linear position transducers and inertial measurement units are the two most commonly used devices for monitoring movement velocity during RT, smartphone apps are beginning to be used for this purpose (Perez-Castilla et al., 2019). For example, the PowerLift® app has been validated for measuring the mean velocity (MV) of individual repetitions during several RT exercises (Balsalobre-Fernández et al., 2017; Perez-Castilla et al., 2019). However, a limitation of the PowerLift® app is that it does not provide real-time velocity feedback because the user must manually select the start and end point of each repetition. Moreover, the current version of PowerLift® does not provide the average velocity of a set of multiple repetitions. Therefore, it would be necessary to develop a smartphone app that provides real-time feedback of the average velocity of a set of multiple repetitions. In addition, to the best of our knowledge, there are no apps providing the total work during resistance exercises. This information would be of great interest, as work has been suggested to be an appropriate parameter for the quantification of training volume in different RT protocols (McBride et al., 2009).
To address this gap, our research group has recently developed the iLOAD® app. The iLOAD® app provides the MV (m·s−1) and total work (J) of a training set in real time using the smartphone's timer and calculator. However, the iLOAD® app has not been scientifically validated. Thus, the main objective of this study was to validate the iLOAD® app for RT monitoring during the half squat exercise. The half squat exercise was chosen because it is related to daily physical activities such as standing up from a sitting position and it has also been demonstrated to be effective for strength and muscle mass development, performance enhancement, and injury prevention (Schoenfeld, 2010; Hartmann, Wirth & Klusemann, 2013). Specifically, in this study we aimed: (I) to assess the inter-rater agreement for measuring the MV with the iLOAD® app; and (II) to compare the magnitude of the MV and total work of a training session between the iLOAD® app and a linear encoder (reference method). It was hypothesized that a high level of agreement would be obtained between raters (rater 1 vs. rater 2) and devices (iLOAD® app vs. linear encoder). Of note, it would be important to obtain a high level of agreement between raters to confirm that the outcomes collected with the iLOAD® app do not depend on the rater. The confirmation of our hypotheses would place the iLOAD® app as a cheap, portable, and time-efficient tool for RT monitoring.
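To give a sense of the quantities involved, set-level outputs of this kind can be reproduced from the load, the vertical barbell displacement per repetition, and the timing of the set. The sketch below is purely illustrative: the app's internal formulas are not spelled out at this point, so defining MV as total displacement over total lifting time and work as the load's weight times the distance travelled per repetition are assumptions.

```python
G = 9.81  # gravitational acceleration (m/s^2)

def set_mean_velocity(distance_m: float, reps: int, lift_time_s: float) -> float:
    """Mean barbell velocity of a set, assuming MV = total vertical
    displacement during the repetitions divided by total lifting time."""
    return (distance_m * reps) / lift_time_s

def set_total_work(load_kg: float, distance_m: float, reps: int) -> float:
    """Total mechanical work of a set, assuming W = m * g * d per repetition."""
    return load_kg * G * distance_m * reps

# Example: a 10-repetition half squat set with an 80 kg load, 0.30 m of
# vertical barbell travel per repetition, and 12 s of total lifting time.
print(set_mean_velocity(0.30, 10, 12.0))  # 0.25 m/s
print(set_total_work(80.0, 0.30, 10))     # ~2354 J
```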
Participants
Sixteen young healthy individuals, four women (mean ± standard deviation [SD]; age = 29.5 ± 7.2 years, body height = 1.61 ± 0.07 m, body mass = 58.7 ± 6.1 kg, squat one repetition maximum [1RM] = 53.0 ± 18.8 kg) and 12 men (mean ± SD; age = 27.4 ± 7.2 years, body height = 1.76 ± 0.05 m, body mass = 78.7 ± 8.2 kg, squat 1RM = 102.7 ± 15.4 kg), volunteered to participate in this study. All participants were familiar with the half squat exercise and had at least one year of RT experience. After a detailed explanation of the procedures and risks of the study, and prior to testing, participants gave their written consent to participate. The study protocol adhered to the tenets of the Declaration of Helsinki and was approved by the Catholic University of Brasilia (54813016.0.0000.0029).
Study design
This study was designed to determine the validity of the iLOAD® app for monitoring MV and total work during a RT session (Fig. 1). Participants came to the laboratory on two occasions separated by 48 h. The 10RM load for the half squat exercise was determined in the first testing session. In the second testing session, participants performed three sets of 10 repetitions of the half squat exercise against the 10RM load. Participants completed 9.9 ± 0.5, 9.5 ± 1.3, and 9.7 ± 1.3 repetitions during the first, second, and third set, respectively. Both the iLOAD® app (v 1.0; ILoad Solutions, Brasilia, Brazil) and a linear encoder (Chronojump, Barcelona, Spain) were used to calculate the MV and total work of each training set. After familiarization with the iLOAD® app, two independent researchers recorded MV over the training sets with the iLOAD® app to evaluate the inter-rater agreement. The average value of both raters was used to explore the concurrent validity of the iLOAD® app with respect to the linear encoder. Both testing sessions were performed at the same time of day for each participant.
Procedures
All measurements were conducted at the same laboratory (''Laboratório de Estudos da Força'' of ''Universidade Católica de Brasília''). Anthropometric data were assessed at the beginning of the first session with a stadiometer (ES2040; Sanny, São Paulo, Brazil) and an electronic scale (W110 H LED; Welmy, Santa Bárbara d'Oeste, Brazil). The vertical distance covered by the barbell for each participant between a knee angle of 90° and the standing position (hips and knees fully extended and feet flat on the floor) was determined with a measuring tape (E095; Stamaco, São Paulo, Brazil). This distance was entered into the iLOAD® app for the computation of MV and total work. This measurement was collected while the participants held the unloaded Smith machine barbell (17 kg). The 90° knee angle was determined with a manual goniometer (187-907; MITUTOYO®, São Paulo, Brazil). To ensure a consistent countermovement depth during all repetitions, an elastic cord was positioned to contact the participants' buttocks when they reached a 90° knee angle. In addition, adhesive tape was placed on the floor to ensure that participants positioned their feet in the same place for every set. After these measurements, participants completed a warm-up consisting of 5 min of submaximal running on a treadmill at a self-selected pace, followed by 15 repetitions with the 17 kg bar of the Smith machine used in the present study (Power Tech, Righetto, São Paulo, Brazil). Thereafter, the 10RM load for the half squat exercise was determined following the protocol proposed by the American College of Sports Medicine (i.e., four attempts separated by a 3-min recovery interval) (Pescatello et al., 2014). The initial load corresponded to 70% of the self-perceived 10RM and was progressively increased until the participant could not complete more than 10 repetitions.
The warm-up of the second testing session consisted of 5 min of submaximal running on a treadmill at a self-selected pace, 15 repetitions with the unloaded Smith machine barbell, and five repetitions with the previously determined 10RM load. Three minutes after completing the last warm-up set, participants were instructed to perform three sets of 10 repetitions of the half squat exercise against the 10RM load with 3 min of rest between sets. Participants were instructed to complete all sets as quickly as possible while maintaining the same range of motion during all repetitions. All sets started with a 'go' instruction from one of the raters and were considered finished when the concentric phase of the last repetition was completed.
Data acquisition and analysis
Mean velocity (MV) and total work of the three sets were calculated from the recordings of both the iLOAD® app and the linear encoder.
- iLOAD® app: The iLOAD® app was installed on two smartphones (iPhone 5S, Apple, USA) running an updated operating system (iOS 11.2). Two independent raters were positioned in front of the participants and recorded the time needed to complete each set. The raters were familiarized with the procedures during two preliminary sessions that followed the same protocol. The inputs to the iLOAD® app for each set were the load (kg), the number of repetitions, the vertical displacement of the barbell, and the time needed to complete the set (s). The time needed to complete each set was determined by the smartphone's chronometer, which was started at the 'go' signal indicating the start of the set and stopped when the participant completed the last repetition of the set (i.e., when the hips and knees reached full extension). Of note, because of the greater mechanochemical efficiency of the eccentric action (i.e., negative work) compared to the concentric action (i.e., positive work) of a single complete repetition (De Looze et al., 1994; Ryschon et al., 1997), a factor of 1.33 rather than 2 was used when summing the concentric and eccentric phases for the total work calculation (Bloomer et al., 2006). In addition, the iLOAD® app was used in the 'squatting' mode, which weights the user's body mass by a factor of 0.88 (Bloomer et al., 2006) when computing total work. The MV and total work of each set were automatically calculated by the iLOAD® app as follows:
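The original equations could not be recovered from the source. The following is a plausible reconstruction, inferred only from the inputs and correction factors described above (barbell displacement per repetition d, number of repetitions n, set duration t, external load m_load, body mass m_body); it is an assumption, not the authors' verbatim formulation:

```latex
% Reconstructed (not verbatim) from the description above.
% Eq. (1): mean velocity of the set; the barbell travels d down and d up per repetition.
\begin{equation}
  MV = \frac{2 \, n \, d}{t}
\end{equation}
% Eq. (2): total work; concentric work per repetition is scaled by 1.33 to account for
% the eccentric phase, and body mass is weighted by 0.88 ('squatting' mode).
\begin{equation}
  W = 1.33 \, n \left( m_{load} + 0.88 \, m_{body} \right) g \, d,
  \qquad g = 9.81~\mathrm{m\,s^{-2}}
\end{equation}
```

Consistent with the statistical analysis below, this reconstructed Eq. (2) contains no time term, which explains why total work does not depend on the time recorded by the raters.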
Statistical analysis
Descriptive data are presented as means and SD. Normality of the distribution was confirmed by the Shapiro-Wilk test (p > 0.05). The inter-rater agreement for the recordings of MV with the iLOAD® app, as well as the concurrent validity of the iLOAD® app with respect to the linear encoder for measuring MV and total work, were assessed with independent samples t-tests, Cohen's d effect size (ES and 95% confidence interval [CI]), Pearson's correlation coefficients (r), and Bland-Altman plots. Note that total work was not compared between raters because its value does not depend on the time recorded by the raters (see Eq. (2)). The scales proposed by Hopkins et al. (2009) were used to interpret the magnitudes of the ES and correlation coefficients.
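As an illustration of this analysis pipeline, the sketch below computes the same agreement statistics for two hypothetical arrays of per-participant MV values. The array contents and the use of NumPy/SciPy are assumptions; the original analysis software is not stated in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean velocities (m/s) from two raters (or two devices).
rater1 = np.array([0.52, 0.48, 0.61, 0.55, 0.49, 0.58, 0.53, 0.60])
rater2 = np.array([0.53, 0.47, 0.61, 0.56, 0.50, 0.57, 0.53, 0.61])

# t-test (the paper reports independent samples t-tests) and Cohen's d.
t, p = stats.ttest_ind(rater1, rater2)
pooled_sd = np.sqrt((rater1.var(ddof=1) + rater2.var(ddof=1)) / 2)
cohens_d = (rater1.mean() - rater2.mean()) / pooled_sd

# Pearson correlation between the two sets of measurements.
r, _ = stats.pearsonr(rater1, rater2)

# Bland-Altman statistics: systematic bias and 95% limits of agreement (random error).
diff = rater1 - rater2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

# Heteroscedasticity: r^2 of absolute differences against pair means.
means = (rater1 + rater2) / 2
r_het, _ = stats.pearsonr(means, np.abs(diff))
r2_het = r_het ** 2

print(f"p={p:.3f}, ES={cohens_d:.2f}, r={r:.3f}, "
      f"bias={bias:.3f} m/s, LoA=±{loa:.3f} m/s, r²={r2_het:.3f}")
```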
Inter-rater agreement
No significant differences and nearly perfect correlations were observed between raters for the MV values of the individual sets (p ≥ 0.38, ES ≤ 0.02, r ≥ 0.987) as well as for the whole training session (p = 0.38, ES = 0.01, r = 0.997). Bland-Altman plots also revealed very low systematic bias (≤0.003 m s−1) and random errors (≤0.033 m s−1), while heteroscedasticity of the errors was observed for sets 2 (r² = 0.208) and 3 (r² = 0.199), but not for set 1 (r² = 0.045) or the whole training session (r² = 0.074) (Fig. 2).
iLOAD® app vs. linear encoder
Given the very high inter-rater agreement reported above, the average value of both raters was used to explore the concurrent validity of the iLOAD® app with respect to the linear encoder. Although significant differences between the iLOAD® app and the linear encoder were observed for MV during sets 2 and 3 as well as for the whole training session (p < 0.05), the magnitude of the differences was trivial to small, and the correlations between the two devices were always nearly perfect for both MV and total work (Table 1). Bland-Altman plots did not reveal heteroscedasticity of the errors between the iLOAD® app and the linear encoder for MV or total work (r² ≤ 0.010) (Fig. 3).
DISCUSSION
This study was designed to determine the concurrent validity of the iLOAD® app with respect to a linear encoder (reference method) for monitoring MV and total work during a RT session consisting of the half squat exercise. The experimental data collected in the present study supported our two hypotheses: the iLOAD® app showed a very high inter-rater agreement for the recordings of MV, as well as a very high validity for the measurements of MV and total work when compared to the data collected with the linear encoder. These results highlight that the iLOAD® app could be a valuable tool for RT monitoring because it is cheap, easy to use, portable, and time-efficient.

Our first hypothesis was confirmed since the inter-rater agreement for the measurement of MV was very high, and comparable to the results reported by Balsalobre-Fernández et al. (2018) for the measurement of MV with PowerLift® (p = 0.549, ICC = 0.941). It should be noted that the main difference between the two apps is that iLOAD® provides the MV of a training set (i.e., from the start of the eccentric phase of the first repetition until the end of the concentric phase of the last repetition), whereas PowerLift® reports the MV of the concentric phase of individual repetitions. These apps therefore provide complementary information that could be valuable for prescribing and evaluating the effect of RT programs. Collectively, these results indicate that the outcomes of smartphone apps specifically designed for monitoring movement velocity should not differ between evaluators. This was to be expected because such apps (e.g., iLOAD® and PowerLift®) are very easy to use. However, further studies should determine the potential effect of testers' experience with smartphone apps on the accuracy of their outcomes.

The linear encoder is frequently considered the gold standard for monitoring movement velocity during RT exercises (Balsalobre-Fernández et al., 2018; García-Ramos, Pérez-Castilla & Martín, 2018). Supporting our second hypothesis, the MV recorded with the iLOAD® app showed a very high level of agreement with the MV collected with the linear encoder. Previous studies have also observed a high validity of PowerLift® for measuring the MV of individual repetitions during a variety of RT exercises compared to a linear encoder (Balsalobre-Fernández et al., 2017) or a high-speed video camera (Perez-Castilla et al., 2019). The results of the present study therefore suggest that smartphone apps are useful not only for determining the MV of individual repetitions (PowerLift®), but also for monitoring the MV of a set of multiple repetitions (iLOAD®). It should be noted, however, that the MV values collected with the iLOAD® app were slightly lower than the MV values provided by the linear encoder (see Table 1). This was most likely because the iLOAD® timer was started at the 'go' instruction provided by one of the raters, whereas the recording of the linear encoder started only when a descent of the barbell was detected; since participants presumably began the movement slightly after the 'go' signal, the set duration recorded by the linear encoder was shorter. Note that although significant, the magnitude of the differences was trivial to small (ES range = 0.15-0.25).
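To illustrate the magnitude of this timing effect, the short sketch below compares the set MV computed with and without a start delay, using the MV expression reconstructed earlier. All numbers are hypothetical (a 0.40 m displacement, 10 repetitions, a 25 s set, and a 0.3 s reaction delay); none are taken from the study's data.

```python
# Hypothetical illustration of how a start delay lowers the set MV reported by the app.
d = 0.40      # barbell displacement per repetition (m) -- assumed
n = 10        # repetitions in the set
t_enc = 25.0  # set duration detected by the encoder (s) -- assumed
delay = 0.3   # reaction delay between the 'go' signal and the first descent (s) -- assumed

mv_encoder = 2 * n * d / t_enc        # encoder timer starts at the first barbell descent
mv_app = 2 * n * d / (t_enc + delay)  # app timer starts at the 'go' signal

print(f"encoder MV = {mv_encoder:.3f} m/s, app MV = {mv_app:.3f} m/s "
      f"({100 * (mv_encoder - mv_app) / mv_encoder:.1f}% lower)")
# With these values the app reports a set MV about 1.2% lower than the encoder,
# consistent in direction with the trivial-small differences observed in the study.
```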
These results suggest that the data collected with the iLOAD® app should not be used interchangeably with the data provided by a linear encoder when the expected differences are small. Our results reinforce the potential applicability of the iLOAD® app for monitoring RT based on movement velocity. However, it remains to be elucidated whether the MV of a set can guide coaches and athletes in the same manner as traditional velocity-based training, which is based exclusively on the velocity of the concentric phase (González-Badillo, Marques & Sánchez-Medina, 2011; Jovanovic & Flanagan, 2014).
Regarding total work, no significant differences and nearly perfect correlations were found between the iLOAD® app and the linear encoder in all sets. These results suggest that the total work of a RT session can be accurately quantified with the iLOAD® app. The only inputs needed by the iLOAD® app to calculate total work are the vertical displacement of the barbell during a single repetition, the user's body mass, the load lifted, and the number of repetitions performed. To our knowledge, iLOAD® is the first smartphone app designed to quantify total work during RT sessions. The recording of total work is important because it is considered one of the most objective measures of total training volume during RT, and one of the most appropriate methods for equating training volume across different RT exercises (Cormie, McCaulley & McBride, 2007; McCaulley et al., 2007; McBride et al., 2009). Note that two athletes with different heights but similar body mass would complete different amounts of work for the same load (kg) during the half squat exercise, because the distance covered in each repetition directly influences the total work performed (see the example below). Therefore, the iLOAD® app allows practitioners to obtain accurate real-time measures of the total work performed during a RT session. This augmented feedback may help to improve both physical performance (Weakley et al., in press) and psychological traits (Wilson et al., 2017) in athletes whilst training.
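A minimal numeric sketch of this height effect, using the total-work expression reconstructed earlier (itself an assumption) and entirely hypothetical values: an 80 kg load, 75 kg body mass, 10 repetitions, and per-repetition displacements of 0.40 m vs. 0.50 m.

```python
# Hypothetical comparison: same load and body mass, different barbell displacement.
G = 9.81                   # gravitational acceleration (m/s^2)
load, body = 80.0, 75.0    # external load and body mass (kg) -- assumed
reps = 10

def total_work(d):
    """Reconstructed total-work expression: 1.33 eccentric factor, 0.88 body-mass weight."""
    return 1.33 * reps * (load + 0.88 * body) * G * d

for d in (0.40, 0.50):     # shorter vs. taller athlete -- assumed displacements
    print(f"d = {d:.2f} m -> total work ≈ {total_work(d):.0f} J")
# Output: ~7620 J vs. ~9525 J; the taller athlete performs ~25% more work
# for the same load and number of repetitions.
```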
The use of the iLOAD® app is not without limitations. The main issue is that the MV it reports encompasses the whole training set rather than only the concentric phase of individual repetitions. Note that velocity-based RT guidelines have been proposed considering only the MV of the concentric phase of individual repetitions (González-Badillo, Marques & Sánchez-Medina, 2011; Jovanovic & Flanagan, 2014). Therefore, future studies should elucidate whether the set MV provided by the iLOAD® app can also offer valuable information for prescribing and monitoring RT programs. Another important issue is that we only examined the validity of the iLOAD® app for sets of approximately 10 repetitions; consequently, future studies should clarify whether the iLOAD® app also provides accurate data when a lower number of repetitions is performed. Finally, for testing purposes, it would also be important to determine the reliability of the iLOAD® app for the measurement of MV during sets consisting of different numbers of repetitions in a variety of RT exercises.
CONCLUSIONS
The main finding of the present study is that the iLOAD® app showed a high validity for monitoring the MV and total work of a set of multiple repetitions during the half squat exercise. Therefore, the iLOAD® app can be considered a cheap, easy-to-use, portable, and time-efficient tool for RT monitoring. Future studies should explore the validity of the iLOAD® app for RT monitoring with other RT exercises.