Electro-Absorbers: A Comparison on Their Performance with Jet-Absorbers and Absorption Columns

This work focuses on the removal of perchloroethylene (PCE) from gaseous streams using absorbers connected to electrolyzers. Two types of absorption devices (a jet absorber and an absorption column) were compared. In addition, the by-products generated when a simultaneous electrolysis with diamond anodes is carried out were evaluated. PCE was not mineralized but was transformed into phosgene, which mainly evolves into carbon tetrachloride. Trichloroacetic acid was also formed, but in much lower amounts. Results showed a more efficient absorption of PCE in the packed column, which is associated with its larger gas–liquid contact surface. The jet absorber seems to favor the production of carbon tetrachloride in the gaseous phase, whereas the packed column promotes a higher concentration of trichloroacetic acid in the liquid phase. The scale-up of the electrolytic stage of these electro-absorption devices was also evaluated, using a stack with five perforated electrode packages instead of a single cell. The effect of the applied current density on the speciation attained after electrolysis of the absorbent was also examined. Experiments reveal similar results in terms of PCE removal and a reduced generation of gaseous intermediates at lower current densities.

- Significant differences in the speciation related to the absorption step.
- PCE was not mineralized but transformed into other products.
- Phosgene and carbon tetrachloride are the main products with the jet absorber.
- Trichloroacetic acid is important when using the absorption column.
- Scale-up of electro-absorption with a stack cell was evaluated.

Introduction

In recent years, many studies have evaluated the removal of persistent pollutants from water and wastewater, but far less attention has been paid to the vapors emitted during these treatments [1,2]. However, gaseous pollutants are considered an important problem for the environment and for human health [3,4], and their removal is a matter of major importance. In this context, gaseous emissions associated with these streams in food waste treatment plants have been studied over the last decades, leading to technologies capable of removing compounds typically associated with odor, such as methyl mercaptan [5], hydrogen sulfide, and trimethylamine [6,7]. Among them, it is important to highlight thermal oxidation, selective catalytic reduction, chemical scrubbing, bioscrubbing, and biofiltration [8,9]. However, these technologies have shown several disadvantages, such as catalyst deactivation and high operating costs [10]. For this reason, other technologies such as scrubbing are gaining relevance, although it is important to bear in mind that this absorption process does not attain complete removal: it only transfers the gaseous pollutants into a liquid stream that requires further treatment [11]. In these systems, mass transfer is a key aspect and efficiency is controlled by residence times and gas–liquid contact surfaces, so a design with good hydrodynamic behavior, using tall and wide contactors, is required [6,12–14]. Considering this necessity, the use of a jet absorber to promote gas–liquid contact in the treatment of a perchloroethylene stream was recently proposed [15].
This system is based on the venturi effect and avoids the use of a compressor, allowing an important reduction in energy requirements [16]. Obviously, this technique needs to be combined with a destructive technology to remove the pollutant from the liquid [17]. Along this line, one option is electrochemical technology, in which an electrochemical cell is combined with the absorption system in order to regenerate an active catalyst in aqueous solution [18,19] or to produce an oxidant from salts such as phosphate [10] or sulfate [20]; these approaches have been applied successfully to remove different gaseous pollutants.

Because of its good physical properties, such as low flammability and chemical stability, and its excellence as a washing solvent, PCE is frequently used in commercial processes such as machine manufacturing, metal degreasing, and dry cleaning. As a result, many cases of PCE contamination of groundwater, soil [21], and the indoor air of different stages of wastewater treatment plants (WWTPs) have been reported. Considering its high toxicity, volatility, and extreme persistence in the environment, as well as the toxic reaction intermediates generated, such as phosgene, chloroform, and carbon tetrachloride, PCE present in water and/or air must be removed. Among the technologies developed to eliminate PCE from liquids, it is worth mentioning photocatalytic degradation [22], dielectric barrier discharge plasma [23], phytoremediation [24], sonochemical degradation [25], electrochemical degradation [26], sonoelectrochemical degradation [27,28], adsorption/electrolysis [29], etc. However, these technologies are not valid for gaseous flows where, on the contrary, wet absorption can be considered a convenient and economical technology. Nevertheless, a preliminary evaluation of these coupled technologies with a jet absorber showed the production of a large number of intermediates, mainly in the gaseous phase, when PCE was used as a model compound, with a negligible concentration of carbon dioxide compared with intermediates such as trichloroacetic acid or carbon tetrachloride [15].

This work focuses on the comparison of the reactivity of gaseous PCE when it is absorbed using a jet absorber and a packed absorption column, each coupled with an electrolyzer. In addition, two types of electrolyzers were coupled: a single cell and a cell stack with five electrodes. This comparison was intended to determine whether the bottleneck of the scale-up is associated with the efficiency of the electrochemical devices.

Absorption Using a Packed Column and Jet Absorber

To determine the absorption capacity of the two absorbers evaluated in this work, a simple absorption process (without electrolysis) was carried out in the proposed set-ups. To do this, PCE concentrations in the liquid and gaseous phases were measured in both the liquid waste desorption (LWD) and absorbent-electrolyte storage (AES) tanks. Figure 1 shows the changes in the PCE concentration in the tank where the simulated gaseous pollutant was produced (part a) and in the tank containing the absorbent (part b). As can be observed, the decay of PCE in the liquid phase of the LWD tank depends on the aeration and absorber used. In the system equipped with an air compressor and the packed absorption column, the PCE concentration decreases by up to 95%, whereas the system equipped with the jet absorber is only capable of reducing the concentration by ~35% after the same operation time (120 min).
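To make the comparison concrete, each absorber's decay curve can be summarized by a first-order stripping rate constant. The following is a minimal sketch with synthetic concentration profiles (the actual time series appear only in Figure 1; the rate constants below are assumptions chosen merely to reproduce the reported ~95% and ~35% removals):

```python
# Minimal sketch: summarizing each absorber's PCE decay in the LWD tank with
# a first-order stripping constant k, fitted from ln(C/C0) vs t. The rate
# constants used to synthesize the data are assumptions chosen to match the
# reported ~95% (packed column) and ~35% (jet absorber) removals at 120 min.
import numpy as np

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # min
c_column = 150 * np.exp(-0.0250 * t)           # mg dm^-3, ~95% removed
c_jet = 150 * np.exp(-0.0036 * t)              # mg dm^-3, ~35% removed

for name, c in (("packed column", c_column), ("jet absorber", c_jet)):
    k = -np.polyfit(t, np.log(c / c[0]), 1)[0]  # slope of ln(C/C0) vs t
    print(f"{name}: k = {k:.4f} min^-1, removal = {1 - c[-1] / c[0]:.0%}")
```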
The effect of PCE condensation in the compressor was ruled out by previous experiments, in which gaseous samples were taken before and after this equipment for two hours and no relevant differences in the PCE mass balance were observed. Thus, the stripped PCE is transferred to the liquid circuit, where it is absorbed. The concentration of absorbed PCE reaches a maximum and then starts to decrease, which seems to indicate that PCE is transformed into other products, whose relevance will be discussed below. These results indicate that the absorption attained by the jet absorber is less efficient than that of the packed column. This observation can be explained by two main factors: the higher gas–liquid surface area attained in the packed column because of the added solid spheres, and the higher countercurrent gas flow promoted by the installed compressor (almost double that of the jet absorber). In both systems, the amounts of PCE in the gas phase are lower than in the liquid phase, but the faster and greater absorption of PCE prevents the total degradation of this compound before the electrooxidation process, and this seems to point to an evolution of the system towards a different gas–liquid equilibrium.
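For orientation on what such a gas–liquid equilibrium implies, a simple Henry's law partition estimate can be sketched. Everything below is illustrative: the dimensionless Henry constant H_cc ≈ 0.7 (gas/liquid, ~25 °C) is a typical literature value for PCE, and the volumes are loosely borrowed from the tank capacities given in the Experimental Setup, not an attempt to reproduce the paper's measurements.

```python
# Illustrative (not from the paper): equilibrium partitioning of PCE between
# the headspace and the absorbent, using an assumed dimensionless Henry's
# law constant H_cc (gas-phase over liquid-phase concentration).

def equilibrium_split(total_mass_mg, v_gas_L, v_liq_L, h_cc=0.7):
    """Mass in gas and liquid phases once C_gas = H_cc * C_liq."""
    # total = C_liq * v_liq + (H_cc * C_liq) * v_gas
    c_liq = total_mass_mg / (v_liq_L + h_cc * v_gas_L)
    return h_cc * c_liq * v_gas_L, c_liq * v_liq_L

m_gas, m_liq = equilibrium_split(total_mass_mg=150, v_gas_L=1.2, v_liq_L=2.5)
print(f"gas: {m_gas:.0f} mg, liquid: {m_liq:.0f} mg")  # ~38 mg vs ~112 mg
```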
Performance of Electro-Absorbers

Once the higher efficiency of the packed column in absorbing the PCE released from the wastewater was determined, it was necessary to determine the viability of the electrochemical process to modify the composition of the absorbed pollutants. Figure 2 shows the changes in the removal of PCE (in terms of aggregated values of the liquid and gaseous phases) in both tanks of the two systems during absorption (ABS) or electro-absorption (EABS) tests.

As can be seen, in the liquid waste desorption (LWD) tank of the system equipped with the packed column absorber, no differences are observed over the complete test when current is applied to the electrochemical cell. On the contrary, minor differences appear in the removal of PCE when the jet absorber is used: during the electrochemical process, a faster depletion of PCE was observed during the first 30 min. However, after this initial change, the response stabilizes at values similar to those attained by the non-electrochemical absorption process. In the tank containing the absorbent (AES tank), there are important differences between the two absorption systems. In the case of the jet absorber, the evolution of the PCE concentration is totally different, although it increases quickly at the beginning in both systems: in the simple absorption, almost 30% PCE recovery was obtained at the end, whereas in the electro-absorption process the PCE declined until only ~5% of volatilized PCE was detected. This behavior would be influenced by the reactivity promoted by the electrochemical process during the absorption. Regarding the process carried out with the packed column absorber, the same trends are observed in both cases, with fast absorption of PCE at early times but less than 15% of PCE absorbed at the end. The results confirm that the liquid–gas contact is more efficient in the column absorber and indicate that the effect of the electrochemical reactions on the global process is not as important as in the case of the jet absorber.

During the absorption processes, several by-products related to the removal pathway followed in aqueous solution were detected by HPLC. One of them is trichloroacetic acid (Figure 3), an acidic compound derived from the single displacement of a chloride radical from PCE by the hydroxyl radicals generated on the electrode surface [30,31], as shown in Equations (1)–(4).

C2Cl4 + OH• → C2Cl3O• + Cl• + H+ + e−   (1)
This intermediate was detected in both tanks, which means that the mere instability of PCE in water promotes its formation. However, the total amount detected in the LWD tank is lower than 5 mg, with a soft increasing trend over time. In the absorbent tank, a higher concentration is detected during both electro-absorption processes, which means that the secondary products generated in the electrooxidation could also promote a hydrogenolysis that increases the total concentration of this intermediate. Trichloroacetic acid, due to its high polarity and low vapor pressure, is commonly found in the liquid phase. Thereby, its greater generation also in the system equipped with the jet absorber suggests that an important fraction of the transformed PCE is transferred directly to the liquid phase, which facilitates further pollutant removal [32].

Another intermediate detected by HPLC analysis is carbon tetrachloride, whose behavior is shown in Figure 4. This compound shows similar concentrations in the waste and absorbent tanks, being undetectable in the waste tank (LWD) when the jet absorber system without electric current was used. Its formation can be explained in terms of the decomposition of phosgene, an intermediate known to be produced by wet hydrolysis of PCE [33]. This compound behaves as a final product, because no decreasing trends were obtained over time and it was produced without marked differences between the ABS and EABS processes.
Both intermediates, carbon tetrachloride and trichloroacetic acid, were the only compounds detected and quantified by HPLC, but they represent different removal pathways of PCE, related either to the higher gas–liquid contact or to the promoted attack of strong oxidants generated during the electrooxidation. However, the mass balance reveals the existence of a third compound, with absorption at 365 nm [34] and a chlorine/carbon ratio of 2:1, which is not detected by GC or HPLC. This information is compatible with the formation of phosgene, an organic compound produced by wet hydrolysis of PCE [33,35].

Figure 5 shows the amount of phosgene estimated by carbon mass balance. As can be observed, there is a clear difference in the evolution of phosgene between the tests carried out with the jet absorber and the packed column. In the electro-absorption with the jet absorber, the concentration of phosgene is negligible, which agrees with the higher efficiency shown in the PCE recovery and the use of electrooxidation with diamond electrodes. Regarding the Jet ABS, although the evolution of phosgene seems higher than in the process applying current density, a similar concentration was reached at the end, which shows that phosgene generation is not strongly influenced by the electrolytic processes. On the other hand, the evolution of phosgene was similar in the Column ABS and Column EABS processes, but very different from that of the Jet ABS and Jet EABS, reaching a total mass around 5 times higher. This behavior is associated with the different absorption set-ups used and the different pathways that control the depletion of PCE in each absorption process.
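The carbon mass balance behind this estimate can be made explicit. The sketch below is a hedged illustration (the molecular weights are standard values, but the detected masses are hypothetical, not the paper's data): any carbon fed as PCE that is not recovered as residual PCE, TCA, CCl4, or CO2 is attributed to phosgene.

```python
# Hedged sketch of the carbon mass balance used to estimate phosgene: carbon
# fed as PCE and not accounted for by residual PCE, TCA, CCl4, or CO2 is
# attributed to phosgene (COCl2). Input masses below are hypothetical.
MW = {"PCE": 165.8, "TCA": 163.4, "CCl4": 153.8, "COCl2": 98.9}  # g/mol
C_ATOMS = {"PCE": 2, "TCA": 2, "CCl4": 1, "COCl2": 1}

def phosgene_by_balance(fed_pce_mg, detected_mg, co2_mmol_c=0.0):
    """Return the estimated mass of phosgene (mg) that closes the C balance."""
    c_in = fed_pce_mg / MW["PCE"] * C_ATOMS["PCE"]   # mmol of carbon fed
    c_out = co2_mmol_c + sum(
        mg / MW[sp] * C_ATOMS[sp] for sp, mg in detected_mg.items())
    return max(c_in - c_out, 0.0) / C_ATOMS["COCl2"] * MW["COCl2"]

est = phosgene_by_balance(150, {"PCE": 10, "TCA": 4, "CCl4": 30})
print(f"phosgene ~ {est:.0f} mg")  # ~143 mg for these hypothetical inputs
```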
Once the main products had been monitored, different removal pathways for the degradation of PCE could be proposed. The distribution of the compounds detected at the end of the process is shown in Table 1. It is noteworthy that carbon dioxide was only detected after absorption with the Jet ABS, although this set-up also attained the lowest removal of PCE. The most effective system for the removal of PCE was the packed column (7.89% and 9.21%, respectively). In the case of the jet absorber, however, a high concentration of trichloroacetic acid is observed in the Jet EABS test. Recent studies [23,36] reported the mechanism of oxidation of PCE to trichloroacetic acid, as well as the formation of chlorine radicals that can attack the carbon–carbon bond of PCE to form phosgene. This compound is unstable and can degrade into carbon tetrachloride and carbon dioxide.

Scale-up of Electro-Absorption Processes

To evaluate the scale-up of the electrolytic process, the system equipped with the packed column absorber was selected for an additional study, due to its better performance.
In this new study, the single cell was replaced by a stack of cells. Figure 6 shows the total mass of PCE (sum of the liquid and vapor phases) and the by-products generated during the electro-absorption process, as a function of the applied current density, after a treatment of 2 h. The total mass of PCE does not vary with the current density used in the electrochemical process; in fact, it is nearly depleted. Surprisingly, TCA is only observed at low concentrations, which may indicate a different performance of the stack with this product as compared with the single electrolyzer. The main products are phosgene and CCl4. However, substantial modifications were observed in their distribution. As can be seen, at low current densities the main final reaction product is phosgene, with around 200 mg remaining in solution after the treatment. When the current density increases, CCl4 becomes the primary final product (up to 450 mg), which is probably motivated by the side reactions that occur on the electrode surface due to mass transfer limitations. Nevertheless, at this point it is important to note that this figure compares results corresponding to different electric charges passed, because the electrolysis time was the same in all experiments. Thus, the higher the applied current density, the higher the electric charge supplied in each test and, hence, the further the progress of the electrochemical reaction. Therefore, despite the same treatment times, the results may reflect that phosgene is produced before carbon tetrachloride, rather than a real effect of the current density.

To see this effect clearly, it is necessary to compare results obtained at the same electric charge passed. This is done in Figure 7, in which all tests were compared at an applied electric charge of 0.35 Ah dm−3; there, it can be seen that higher current densities are less efficient in the degradation of PCE, which can be explained in terms of the promotion of side reactions such as water oxidation and reduction.
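In quantitative terms, the specific electric charge passed is

$$Q = \frac{j\,A\,t}{V},$$

where $j$ is the current density, $A$ the electrode area, $t$ the electrolysis time, and $V$ the electrolyte volume; at fixed $t$, $Q$ grows linearly with $j$. As an illustration with assumed values (not figures reported for these tests): with the single cell's anode area $A = 78.6\ \mathrm{cm^2}$ and $V = 1\ \mathrm{dm^3}$, a 2 h run at $j = 10\ \mathrm{mA\ cm^{-2}}$ passes $Q \approx 1.57\ \mathrm{Ah\ dm^{-3}}$, whereas the benchmark of $0.35\ \mathrm{Ah\ dm^{-3}}$ is reached after only $t = QV/(jA) \approx 27$ min. This is why comparisons at equal charge, rather than equal time, isolate the effect of the current density.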
Meanwhile, low current densities promote the formation of phosgene. The amount of CCl4 produced does not depend on the current density, this being the key product in all tests. The relevance of TCA is very low and limited to the application of low current densities.

In this way, this study indicates that electro-absorption using a cell stack could be a feasible technology to achieve good PCE oxidation. However, further research is needed, because complete elimination of the by-products was not attained. No phosgene was found at the end of the process, although significant concentrations of carbon tetrachloride remained in solution, mainly when higher current densities were applied. Given the toxicity of phosgene, the reduction of its concentration is a considerable achievement of the electrochemical technology. However, the removal of carbon tetrachloride, a recalcitrant by-product, will be the real challenge for subsequent studies aiming at complete mineralization.

Experimental Setup

Absorption and electro-absorption processes were carried out in the two versatile set-ups shown in Figure 8: one integrating a jet absorber (part a) and another integrating a packed absorption column (part b). Both experimental systems are divided into two connected circuits: the liquid (absorbent-electrolyte) circuit and the gas circuit. In both cases, there is a tank (liquid waste desorption (LWD) tank, (4)) where the raw concentrated solution (1 L of aqueous solution with 150 mg dm−3 of perchloroethylene (PCE)) is stored.
From here, the volatilization and stripping of PCE is induced by a gas flow. The jet absorber is based on the venturi effect and is used to create suction, favoring the flow of gas from tank 4 into the liquid circuit; the stripping of PCE from the LWD tank (4) is thereby promoted. The small throat diameter of the jet (4.23 mm) modifies the size of the bubbles generated, affecting the behavior of the absorption process. Additionally, to ensure good mixing conditions in tank 4, a magnetic stirrer (5) was used [15]. In the other system (Figure 8b), the gas phase is generated with an air compressor (Silent Pump (11), model SI6000, ICA SA, Toledo, Spain) with a power of 3.8 W and a flow rate of 360 L h−1. This gas stream passes through the LWD tank (4), favoring the stripping of PCE, and then flows through the packed absorption column (10), where the PCE is transferred to the liquid circuit. The gas and liquid phases flow countercurrently in the column. The absorption column is made of glass, 0.5 m long, with an inner diameter of 500 mm. A length of 0.4 m of the column was packed with glass spheres (8 ± 0.75 mm). These spheres are solid (2.5 g cm−3) and made of high-quality borosilicate glass, a good choice for demanding corrosive environments. The role of tank 4 in both cases is therefore to produce a more realistic gaseous pollutant flow, with real water–PCE gaseous mixtures, which can influence the later treatment because of the reactivity of PCE in wet environments [37].

On the other hand, the liquid phase (electrolyte solution with the absorbed PCE) is pumped with a centrifugal pump (1) to the electrochemical cell (7), which is connected to a power supply (8) and where the degradation of the species present in the liquid phase occurs. Additionally, to test the scale-up of the process, the conventional electrochemical cell in this system is replaced with a stack of cells (12), model ECWP d20X5P (Condias GmbH, Itzehoe, Germany). The absorbent-electrolyte storage (AES) tank (9) is the auxiliary tank of the electrochemical cell in both set-ups. Its function is to provide residence time for the electrochemical cell in order to promote chemical reactions mediated by the oxidants or reductants generated by the application of the electric current. In both systems, there is a connection between the AES and LWD tanks (9 and 4) to equilibrate the total pressure of the system. Both the AES tank (9) and the LWD tank (4) were made of polyvinyl chloride (PVC), with capacities of 2.5 L and 1.2 L, respectively. Sampling points are implemented in both tanks. The liquid phase used as absorbent was an aqueous solution of Na2SO4 (0.1 mol L−1). The centrifugal pump is a Micropump® GB-P25 J F5 S (flow rate 160 L h−1) supplied by Techma GPM s.r.l. (Milan, Italy), connected to the electrochemical cell by a Tecalan® tube. Extreme care was taken to avoid gaseous losses in all compartments by using tight-fitting ground silicone stoppers and by sealing with Teflon tape. To study and compare the two proposed absorption systems in terms of absorption and depletion of PCE, experiments were performed for 120 min, collecting samples in duplicate at specified time intervals. Electrochemical processes were conducted with two kinds of commercial electrochemical cells. The first one was a DiaCell® supplied by Adamant Technologies (La Chaux-de-Fonds, Switzerland), in which conductive diamond electrodes (boron-doped diamond on p-Si) were used as both anode and cathode.
Both electrodes were circular (100 mm diameter) with a geometric area of 78.6 cm2. The BDD coating has a thickness of 2–3 µm, a boron concentration of 500 ppm, and an sp3/sp2 ratio > 150. The active surface is 78.6 cm2 and the interelectrode gap is 1 mm. The stack of cells selected for the scale-up was a CondiaCell® model ECWP d20X5P supplied by Condias GmbH (Itzehoe, Germany), consisting of ten circular stainless steel electrodes (20 × 1.5 mm) as cathodes and five circular Diachem®-type perforated electrodes as anodes. These diamond electrodes (50 × 24 × 1.3 mm3) were built on a substrate consisting of a niobium mesh (type B) and were assembled in two stacks, with a NAFION® cation exchange membrane separating the anode and cathode and acting as the electrolyte [38]. The anodic active area per package is approximately 420 mm2 and the interelectrode gap is 0.5 mm. Four currents were studied: 0.8, 4.4, 8, and 16 A. Both kinds of electrodes were subjected to a cleaning procedure for 10 min in a 1 M Na2SO4 solution at 15 mA cm−2 prior to the electrolysis assays. All processes were conducted at room temperature (20 ± 2 °C) and at atmospheric pressure.

Analysis Procedures

Liquid and gas samples were taken from two different sampling ports placed in the accumulation and PCE tanks. To determine and quantify the PCE concentration in the gas and liquid phases, the procedure described elsewhere [15,39] was followed. For the determination of the by-products produced in the absorption and electro-absorption treatments, two chromatographic methods were employed. The first one was used to analyze carbon tetrachloride by means of a Jasco LC-2000 HPLC with a PDA MD-2018 detector (Jasco, Tokyo, Japan). The mobile phase consisted of an aqueous solution of 0.1% phosphoric acid (flow rate of 1.0 mL min−1). The detection wavelength was 280 nm and the oven temperature was maintained at 25 °C; the injection volume was set to 20 µL. The second method was used to analyze trichloroacetic acid, which was determined using an Agilent 1100 series HPLC (Agilent Tech., Santa Clara, CA, USA) with a detection wavelength of 220 nm. The ion exchange column used was a Supelcogel™ H column (30 cm × 7.8 mm ID). The mobile phase was 1% phosphoric acid (H3PO4), the column temperature 30 °C, the flow rate 0.8 mL min−1, and the injection volume 20 µL.

Conclusions

From this work, the following conclusions can be drawn.
• Both jet absorbers and packed column absorbers can be used for the absorption stage of the removal of PCE with electro-absorbers. Packed columns attained better results.
• The column absorber favors the formation of trichloroacetic acid.
• The electrooxidation process with diamond electrodes increases the removal of PCE but enhances the generation of more dangerous and toxic intermediates. Trichloroacetic acid and carbon tetrachloride are the main compounds detected. A removal pathway for PCE degradation related to the absorption efficiency in both set-ups can be proposed.
• The use of a cell stack with five electrodes does not show remarkable differences in the removal efficiency of PCE as compared to the single cell. However, TCA formation is not promoted with the scaled-up system. In addition, the current density strongly affects the results: low current densities lead to the formation of higher amounts of phosgene, and higher current densities to a less efficient removal of PCE.
Watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) Juice Modulates Oxidative Damage Induced by Low Dose X-Ray in Mice

Watermelon is a natural product that contains a high level of antioxidants and may prevent oxidative damage in tissues due to free radical generation following exposure to ionizing radiation. The present study aimed to investigate the radioprotective effects of watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) juice against oxidative damage induced by low dose X-ray exposure in mice. Twelve adult male ICR mice were randomly divided into two groups: a radiation (Rx) group and a supplementation (Tx) group. Rx received filtered tap water, while Tx was supplemented with 50% (v/v) watermelon juice for 28 days ad libitum prior to total body irradiation with 100 μGy X-ray on day 29. Brain, lung, and liver tissues were assessed for levels of malondialdehyde (MDA), apurinic/apyrimidinic (AP) sites, glutathione (GSH), and superoxide dismutase (SOD) inhibition activities. Results showed a significant reduction of MDA levels and AP site formation in Tx compared to Rx (P < 0.05). Mice supplemented with 50% watermelon juice restored intracellular antioxidant activities, with significantly increased SOD inhibition activities and GSH levels compared to Rx. These findings suggest that supplementation with 50% watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) juice could modulate oxidative damage induced by low dose X-ray exposure.

Introduction

A variety of highly reactive chemical entities known as reactive oxygen species (ROS) are produced by respiring cells as a small amount of the consumed oxygen is reduced [1]. ROS has dual roles, in that it can be beneficial and/or deleterious [2]. In a normal biological system, cellular functions depend on redox balance, which may be defined as the reduction and oxidation of prooxidants and antioxidants [2,3]. Any distortion of the redox balance may promote oxidative stress and lead to a series of pathological conditions [4]. X-rays have been clinically used as diagnostic and therapeutic tools [5]. Despite their usefulness, X-rays may also induce direct or indirect harmful effects on cellular constituents and deoxyribonucleic acid (DNA) [6,7]. X-rays have a high penetrating power due to their low linear energy transfer (LET), and exposure to X-rays can result in the generation of free radicals through radiolysis [8]. When these free radicals interact with biological molecules, they may cause cellular lipid peroxidation and DNA damage [9]. Lipid peroxidation can be defined as the oxidative deterioration of lipids containing carbon–carbon double bonds that yields a large number of toxic by-products [10]. Membrane lipids are highly susceptible to free radical damage [11]. A highly damaging chain reaction occurs as the lipids react with free radicals, and this can lead to the production of various end products, including malondialdehyde (MDA), the main carbonyl compound [11,12]. Free radicals, especially hydroxyl radicals, react with DNA molecules through several mechanisms, producing a broad spectrum of structural damage [13,14]. These structural DNA damages include oxidative base modification, single strand breaks (SSB), double strand breaks (DSB), cross-links, clustered base damage, and mismatch repair (MMR), and they may affect the cell's ability to transcribe the genes encoded by the affected DNA [13]. An antioxidant is a molecule that acts as a free radical scavenger and protects the body from oxidative damage [15].
A study by Srinivasan et al. [16] reveals that an antioxidant defense mechanism is applied to maintain redox balance, and appropriate antioxidants may reduce free radical toxicity and protect from radiation damage [17]. Defense mechanisms such as superoxide dismutase (SOD) are responsible for catalyzing the dismutation of the superoxide anion (O2−) into oxygen and hydrogen peroxide (H2O2) [18], while glutathione (GSH) provides protection against oxidative damage by participating in the cellular defense system; its intracellular level may be assessed as an indicator of oxidative stress [19]. The dietary guidelines recommended by A. V. Rao and L. G. Rao [20] suggest increasing the consumption of plant-based foods that are rich in carotenoids, the brightly coloured microcomponents present in fruits and vegetables. Watermelon (Citrullus lanatus) contains a high level of carotenoids, such as lycopene, beta-cryptoxanthin, beta-carotene, and vitamin E, and has been proven to scavenge free radicals [21]. Citrullus lanatus (Thunb.) Matsum. and Nakai is the most polymorphic of all Citrullus species, with wild, cultivated, and feral forms [22]. Altaş et al. [23] demonstrated that the nature of the chemicals present in watermelon is responsible for the reduction of lipid peroxidation. Thus, the aim of this study was to evaluate the antioxidant capacity of watermelon juice and its protective effect against low dose X-ray-induced oxidative damage in a mouse model.

A 50% (v/v) Watermelon Juice Preparation

A locally harvested, red, seedless watermelon juice was freshly prepared on a daily basis. The watermelon was cleaned with filtered tap water and peeled to obtain the red flesh. The flesh was then processed with a commercial juice maker, which automatically separated the pulp and the juice. A 50% concentration was prepared by diluting the pure watermelon juice with filtered tap water in a ratio of 1:1 (v/v).

Animal Handling and Study Design

All animal studies were conducted in accordance with the criteria of the investigations and the Universiti Teknologi MARA Committee of Animal Research and Ethics (UiTM CARE) guidelines concerning the use of experimental animals. Twelve healthy, four-week-old male ICR mice, each weighing about 30 grams, were obtained from the Laboratory Animal Facility and Management (LAFAM), Faculty of Pharmacy, UiTM Puncak Alam Campus. The animals underwent an acclimatization period of 14 days, and a normal mouse diet along with filtered tap water was given ad libitum. The study involved two groups of seven-week-old male ICR mice, each weighing on average 31.3 grams, consisting of a radiation group (Rx) and a watermelon juice supplementation group (Tx), with six animals in each group. Mice from Tx were supplemented with 50% watermelon juice as the sole liquid source ad libitum for 28 days, while the Rx mice were only given filtered tap water. All the mice were fed a normal mouse diet. Watermelon juices were changed twice a day, and the volume of watermelon juice consumed by each mouse was recorded. On day 29, both groups were exposed to total body irradiation with a single dose of X-ray.

Irradiation and Tissue Collection

Both groups were placed in cages under a Philips Bucky DIAGNOST X-ray machine and treated with a single fraction of 100 μGy X-ray for total body irradiation. This low dose irradiation was performed by a qualified radiographer at the Medical Imaging Laboratory, Faculty of Health Sciences, UiTM Puncak Alam Campus.
All the mice were sacrificed by cervical dislocation within 12 hours of total body irradiation. The brain, lung, and liver tissues were excised immediately and stored at −80 °C prior to analysis.

Lipid Peroxidation Product, MDA Assay

Tissue samples were resuspended at 100 mg/mL in PBS containing 1X butylated hydroxytoluene (BHT). Five grams of the tissue samples was homogenized on ice and spun at 10,000 g for five min. The supernatant was collected and assayed directly for its TBARS level. MDA in samples and standards was reacted with thiobarbituric acid (TBA) at 95 °C, incubated, and then read spectrophotometrically at 532 nm with a POLARstar Omega reader. MDA levels were determined by comparison with a predetermined MDA standard curve.

Oxidative DNA Damage (AP Sites)

Genomic DNA of brain, lung, and liver was isolated with the Invisorb Spin Tissue Mini Kit (Stratec Molecular, Berlin) following the manufacturer's protocol. The DNA Damage Quantification Kit (AP Sites) was used to quantify apurinic/apyrimidinic (AP) sites in the tissues of interest. The aldehyde reactive probe (ARP), which reacts specifically with an aldehyde group on the open ring form of AP sites (ARP-derived DNA), was detected with a Streptavidin-Enzyme Conjugate. The quantity of AP sites in unknown DNA samples of brain, lung, and liver was determined using the POLARstar Omega reader at 450 nm by comparison with a standard curve of predetermined AP sites. All unknown DNA samples and standards were assayed in duplicate.

SOD Activity Assay

The activity of SOD was determined using the OxiSelect Superoxide Dismutase Activity Assay. Tissues were homogenized on ice using a mortar and pestle in 7 mL of cold 1X Lysis Buffer per gram of tissue, followed by centrifugation at 12,000 ×g for 10 minutes. The supernatant of the tissue lysate was then collected and kept at −80 °C until further analysis. Superoxide anions generated by the xanthine/xanthine oxidase system were detected with a Chromagen Solution by measuring the absorbance at 490 nm using the POLARstar Omega reader. The activity of SOD was determined as the inhibition percentage of chromagen reduction.

GSH Antioxidant Assay

Tissues were blot-dried and weighed. Ice-cold 5% metaphosphoric acid (MPA) was added, and the tissues were homogenized using a mortar and pestle and then centrifuged at 12,000 rpm for 15 min at 4 °C. The supernatant was collected. The levels of GSH were measured kinetically with a spectrophotometric kit (OxiSelect Total Glutathione Assay) according to the manufacturer's protocol. The chromogen that reacted with the thiol group of GSH produced a colored compound, which was then detected with the POLARstar Omega reader at 405 nm. The total GSH content in the samples was determined by comparison with a GSH standard curve.

2.9. Statistical Analysis

All mean ± SEM (standard error of the mean) values were calculated, and statistical analysis was done using SPSS version 18.0 (SPSS Inc., Chicago, IL, USA) for Windows. Data were analyzed by one-way analysis of variance (ANOVA), followed by a post hoc Tukey test for multiple comparison of means. The difference was considered significant when the P value was less than 0.05 (P < 0.05).

Dietary Supplementation of 50% Watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) Juice Conferred Remarkable Radioprotection against Lipid Peroxidation

The results obtained from the experimental analysis of MDA levels in mice brain, lung, and liver tissues are presented in Figure 1. There was no significant reduction of MDA levels in the brain tissues of Tx compared to Rx.
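The group comparisons reported here and below follow the ANOVA/Tukey procedure of Section 2.9; a minimal sketch of that analysis (with hypothetical MDA readings, not the study's data) is:

```python
# Hypothetical MDA readings (uM) for two groups of n = 6 mice, analyzed as
# in Section 2.9: one-way ANOVA followed by Tukey's HSD post hoc test.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rx = np.array([30.1, 29.2, 31.5, 28.8, 30.9, 29.8])  # radiation only
tx = np.array([25.6, 24.9, 26.0, 25.1, 24.5, 25.7])  # juice + radiation

f_stat, p_value = stats.f_oneway(rx, tx)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([rx, tx])
groups = ["Rx"] * len(rx) + ["Tx"] * len(tx)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparison
```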
However, MDA levels in the lung and liver tissues of Tx were significantly reduced compared to Rx, with P = 0.004 and P = 0.01, respectively. The average MDA levels in the lung tissues of Tx and Rx were 25.28 ± 0.45 μM and 30.05 ± 0.94 μM, respectively, while the average MDA level in the liver tissues of Tx was 20.20 ± 0.73 μM and in Rx was 25.63 ± 1.43 μM.

Dietary Supplementation of 50% Watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) Juice Conferred Remarkable Radioprotection against Oxidative DNA Damage by Mitigating the Number of AP Sites

The radioprotective effects of 50% watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) juice against oxidative DNA damage (AP sites) in mice tissues are shown in Figure 2. The generation of noncoding AP sites in the brain showed significant differences between Tx and Rx, with P = 0.029.

Figure 3 presents the mean SOD inhibition activities in Tx and Rx for mice brain, lung, and liver tissues. There were significant differences between the brain SOD inhibition activities of Tx (80.02 ± 1.69%) and Rx (52.79 ± 2.03%), with P = 0.001. SOD activities also increased significantly in the lung tissues of Tx compared to Rx (P = 0.001); the average SOD activities in the lung tissues of Tx and Rx were 79.90 ± 1.91% and 42.06 ± 1.24%, respectively. In liver tissue, there was a significant difference between Tx (68.50 ± 1.82%) and Rx (59.13 ± 2.0%), with P = 0.04.

Figure 4 shows the levels of GSH in mice brain, lung, and liver tissues. In the present study, the GSH levels in the brain tissues of Tx (0.18 ± 0.0085 μM) showed a significant increase compared to Rx (0.07 ± 0.006 μM), with P = 0.001. GSH levels in the lung tissues of mice supplemented with watermelon juice (Tx) increased compared to Rx, but no significant differences (P > 0.05) were observed. However, GSH levels in the liver tissues of Tx were significantly increased compared to Rx (P = 0.003); the average GSH levels in Tx and Rx were 0.06 ± 0.001 μM and 0.04 ± 0.002 μM, respectively.

Discussion

Oxygen radicals react with PUFA residues in phospholipids, resulting in end products that are mostly reactive towards protein and DNA. One of the most abundant carbonyl products of lipid peroxidation is MDA [24]. Low dose X-ray exposure may cause lipid peroxidation, and the findings of the present study show that mice supplemented with 50% watermelon juice (Tx) had markedly reduced MDA levels in lung and liver tissues compared to Rx (Figure 1). Supplementation with 50% watermelon juice restored the activities of intracellular antioxidant enzymes in mice lung and liver tissues following exposure to low dose X-ray. Thus, the phytochemical antioxidant content of 50% watermelon juice may contribute to the efficacy of the intracellular antioxidant defense system by providing a potent scavenger of the free radicals that induce oxidative damage. This is in agreement with the study conducted by Asita and Molise [1], which reveals that watermelon contains a high content of carotenoids such as lycopene and has been proven to scavenge free radicals, thus inhibiting lipid peroxidation. DNA continuously accumulates sites of missing bases, termed abasic or apurinic/apyrimidinic (AP) sites, through exposure to endogenous and exogenous sources capable of inducing oxidative DNA damage [25]. This study demonstrated that oxidative DNA damage induced by low dose X-ray exposure was indeed positively correlated with AP site formation.
Here, the results show that mice supplemented with 50% watermelon juice in the presence of low dose X-ray exposure showed significantly less progressive AP site formation in brain, lung, and liver tissues compared to mice irradiated with low dose X-ray alone (Figure 2). It seems possible to suggest that these results are mainly due to a synergistic interaction between the micronutrient content of the watermelon juice and intracellular antioxidant enzymes, which could modulate oxidative DNA damage induced by low dose X-ray exposure. The present finding is consistent with a previous study by Shokrzadeh et al. [26], which showed that mice preadministered Citrullus colocynthis (L.) extract, locally known as watermelon, for seven consecutive days via intraperitoneal injection, followed by injection with 70 mg/kg body weight of cyclophosphamide (CP) to induce DNA damage, had significantly reduced numbers of micronucleated polychromatic erythrocytes (MnPCEs), an index of oxidative DNA damage.

SOD plays an important role in reducing the effect of free radical attack; it is the only enzymatic system quenching O2− to oxygen and H2O2 and plays a significant role against oxidative stress [18]. Referring to Figure 3, the percentages of SOD inhibition activity in the brain, lung, and liver tissues of Tx showed significant increases compared to Rx. It seems possible to suggest that these results are mainly due to watermelon containing a high level of phytonutrients, including lycopene [27]. Perkins-Veazie et al. [28] point out that lycopene is a highly effective antioxidant because it acts as a stronger free radical scavenger in biological systems than other carotenoids, including beta-carotene, alpha-carotene, lutein, beta-cryptoxanthin, and astaxanthin. In this context, the micronutrient antioxidants, especially lycopene, in 50% watermelon (Citrullus lanatus (Thunb.) Matsum. and Nakai) juice accumulate in the tissues and counteract the deleterious effects of the free radicals generated by low dose X-ray through the activation of oxygen molecules.

GSH has been reported to have protective roles against oxidative stress by directly scavenging hydroxyl radicals and singlet oxygen, detoxifying H2O2 and lipid peroxides, and regenerating important antioxidants, vitamins C and E, back to their active forms [2]. In the present study, the GSH levels in the brain and liver tissues of Tx showed significant increases compared to Rx, but no significant increase in the lung. This may suggest that the supplementation of antioxidants in 50% watermelon juice successfully elevated the levels of GSH in both brain and liver tissues. The present results are in line with a study by Saada et al. [29], which emphasized that pretreatment with lycopene, which is abundant in watermelon, significantly improved the oxidant/antioxidant status and helped reduce oxidative damage due to radiation.

Conclusion

This study shows that supplementation with 50% watermelon juice is beneficial in modulating the oxidative damage induced by low dose X-ray exposure, in terms of suppressing the levels of MDA and noncoding AP site formation while enhancing SOD and GSH activities.
TSI-GNN: Extending Graph Neural Networks to Handle Missing Data in Temporal Settings

We present a novel approach for imputing missing data that incorporates temporal information into bipartite graphs through an extension of graph representation learning. Missing data is abundant in several domains, particularly when observations are made over time. Most imputation methods make strong assumptions about the distribution of the data. While novel methods may relax some assumptions, they may not consider temporality. Moreover, when such methods are extended to handle time, they may not generalize without retraining. We propose using a joint bipartite graph approach to incorporate temporal sequence information. Specifically, the observation nodes and edges with temporal information are used in message passing to learn node and edge embeddings and to inform the imputation task. Our proposed method, temporal setting imputation using graph neural networks (TSI-GNN), captures sequence information that can then be used within an aggregation function of a graph neural network. To the best of our knowledge, this is the first effort to use a joint bipartite graph approach that captures sequence information to handle missing data. We use several benchmark datasets to test the performance of our method under a variety of conditions, comparing to both classic and contemporary methods. We further provide insight into managing the size of the generated TSI-GNN model. Through our analysis we show that incorporating temporal information into a bipartite graph improves the representation at the 30% and 60% missing rates, specifically when using a nonlinear model for downstream prediction tasks on regularly sampled datasets, and is competitive with existing temporal methods under different scenarios.

INTRODUCTION

Graph representation learning (GRL) aims to accurately encode structural information about graph-based data into lower-dimensional vector representations (Hamilton, 2020). The basic idea is to encode nodes into a latent embedding space using geometric relationships that can then be used to accurately reconstruct the original representation (Hoff et al., 2002). There are two node embedding approaches: shallow embedding methods and more complex encoder-based models (i.e., graph neural networks, GNNs) (Hamilton, 2020). Shallow embedding methods, such as inner product and random walks, are inherently transductive, meaning they can only generate embeddings for nodes present during training, which can restrict generalizability without retraining (Ahmed et al., 2013; Perozzi et al., 2014; Grover and Leskovec, 2016; Hamilton, 2020). In contrast, GNNs use more complex encoders that depend more on the structure and attributes of the graph, allowing them to be used in inductive applications (i.e., evolving graphs) (Hamilton et al., 2017a; Hamilton et al., 2017b). A key feature of GNNs is that they can use k rounds of message passing (inspired by belief propagation), where messages are aggregated from neighborhoods and then combined with the representation from the previous layer/iteration to provide an updated representation (Hamilton, 2020). Recently, GRAPE (You et al., 2020), a framework for handling missing data using graph representation, proposed formulating the problem using a bipartite graph, where the observations and features in a data matrix comprise two types of nodes, observation and feature nodes, and the observed feature values are the attributed edges between the two types of nodes.
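As a concrete illustration of this formulation, the sketch below builds such a bipartite graph from a toy data matrix with missing entries (our own minimal example, not GRAPE's implementation):

```python
# Build a GRAPE-style bipartite graph from a data matrix: one node per
# observation (row), one per feature (column), and one attributed edge per
# *observed* entry; missing entries simply have no edge.
import numpy as np

X = np.array([[1.0, np.nan, 3.0],
              [np.nan, 2.0, 0.5],
              [4.0, 1.5, np.nan]])

n_obs, n_feat = X.shape
obs_nodes = [f"O{i}" for i in range(n_obs)]
feat_nodes = [f"F{j}" for j in range(n_feat)]

# Edge list: (observation node, feature node, observed value).
edges = [(obs_nodes[i], feat_nodes[j], X[i, j])
         for i in range(n_obs) for j in range(n_feat)
         if not np.isnan(X[i, j])]

# Imputation becomes edge-level prediction: score the missing (O, F) pairs.
missing = [(obs_nodes[i], feat_nodes[j])
           for i in range(n_obs) for j in range(n_feat) if np.isnan(X[i, j])]
print(edges)
print("to impute:", missing)
```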
GRAPE used a modified GraphSAGE (Hamilton et al., 2017b) architecture and introduced edge embeddings during message passing to learn edge attributes, and was shown to outperform a deep generative model (Yoon et al., 2018a) as well as traditional methods on edge-level and node-level prediction tasks (You et al., 2020). Yet one of the shortcomings of GRAPE is that it assumes observations are independent, which is generally not the case in temporal settings with repeated measurements. Therefore, representations learned using GRAPE may not be suitable for temporal data with repeated measurements.

There are numerous contemporary imputation methods. Recurrent neural networks (RNNs) capture sequence information well when handling missing data (Lipton et al., 2016; Che et al., 2018), particularly bidirectional RNNs that use information from the past, present, and future via forward and backward connections (Yoon et al., 2018b; Cao et al., 2018); however, RNNs learn a chain structure, whereas GNNs learn across geometric spaces via message passing in a graph-structured manner. Non-autoregressive models, which rely on bidirectional RNNs to process input data, have been proposed to capture long-range sequence information in parallel, but the implementation does not handle irregular sampling (Liu et al., 2019). GNNs have been combined with matrix completion to extract spatial features, but these approaches do not explicitly capture temporal information and the implementations only use discrete datatypes (e.g., ratings) (Berg et al., 2017; Monti et al., 2017; Zhang and Chen, 2019). Further, separable recurrent multi-graph convolutional neural networks (sRMGCNN), which feed the spatial features extracted by an MGCNN into an RNN to exploit temporal information, are transductive (Monti et al., 2017). Autoencoders (AEs) can efficiently learn undercomplete (i.e., lower-dimensional) or overcomplete (i.e., higher-dimensional) representations (Beaulieu-Jones et al., 2017; Gondara and Wang, 2018; McCoy et al., 2018), but the recovered values are not based on an aggregation over a non-fixed number of neighbors, as in GNNs. Further, AEs cannot explicitly train over incomplete data (i.e., AEs initialize missing entries with arbitrary/default values) (Gondara and Wang, 2018) or explicitly exploit temporal information (i.e., AEs must be combined with a temporal dynamic model, such as a Gaussian process (Fortuin et al., 2020) or an RNN (Park et al., 2020)).

Similarly, there is a myriad of classic imputation methods. Matrix completion can exploit correlations within and across feature dimensions, but it is generally only used in a static setting (i.e., a single measurement that does not change over time) (Candès and Recht, 2009). Interpolation methods have been proposed to exploit correlations within feature dimensions in temporal settings; however, they ignore correlations across feature dimensions (Kreindler and Lumsden, 2012). K-nearest neighbors (KNN) learns an aggregation, but from a fixed number of neighbors with weights based on Euclidean distance, and is usually only applied to static data (Troyanskaya et al., 2001). MissForest is a non-parametric method that uses a random forest trained on the observed values of a dataset to predict the missing values, but it is generally a static method (Stekhoven and Bühlmann, 2012). In contrast to single imputation, multiple imputation methods aim to model the inherent variability of the recovered values to account for the uncertainty in estimating missing values (Yoon, 2020).
While multiple imputation by chained equations (MICE) (White et al., 2011) is the gold standard, it is generally a static method and may not perform well at higher rates of missingness (Yoon, 2020). Some contemporary methods also produce multiple imputations, such as RNN-based and GNN-based methods that utilize a dropout hyperparameter (Srivastava et al., 2014; Yoon et al., 2018b; Rong et al., 2019; You et al., 2020), as well as AE-based methods that initialize with different sets of random weights at each run (Gondara and Wang, 2018).

In this work, we introduce temporal setting imputation using graph neural networks (TSI-GNN), which extends graph representation learning to handle missing data in temporal settings. We build on previous GRL methods by capturing sequence information within the same type of nodes (i.e., observation nodes) in a bipartite graph and by exploring how we can recover an accurate temporal representation that preserves the original representation's feature-label relationships. TSI-GNN incorporates temporal information into a bipartite graph without creating actual edges between the same type of nodes, enhancing the learned representation without violating bipartite graph properties. While we evaluate TSI-GNN using the modified GraphSAGE architecture from GRAPE (You et al., 2020), our approach is general to GNN-based approaches that use a bipartite graph representation.

Representation and Observation Node and Edge Definition

An ideal imputation method learns to recover the original relationships in a dataset (Yoon, 2020). Extending a graph representation to the temporal setting should therefore preserve temporal dynamics such that the recovered representation keeps the original relationships between the features and label across time (Meng, 1994). In temporal settings with repeated measurements, observations are often correlated, particularly frequent measurements (e.g., stocks, energy, healthcare) (Yoon, 2020). Therefore, an imputation method for temporal settings with repeated measurements should capture temporal information, not ignore it. Similarly, features in temporal settings can be correlated. Thus, a GNN-based temporal imputation method should learn to recover important information within and between the sets of the two types of nodes (i.e., observation nodes and feature nodes). We illustrate this with the following scenario in the healthcare setting: Patient 1's labs (e.g., estimated glomerular filtration rate and potassium) and vitals (e.g., respiratory rate and systolic blood pressure) are monitored every 4 h for a sequence length of 3 checks (i.e., 12 h total). In this scenario, Patient 1 had three observations (repeated measurements). According to You et al., the key innovation of GRAPE is to formulate the problem using a bipartite graph representation (You et al., 2020). In a bipartite graph, the absence of an edge between nodes of the same type [e.g., observation node 1 (O_1) and observation node 2 (O_2)] implies that O_1 and O_2 are independent, denoted by O_1 ⊥ O_2. While this assumption may hold in a static setting, in temporal settings with repeated measurements (as illustrated in the healthcare scenario above) it does not necessarily hold, and assuming it does could ignore important temporal information. To adhere to bipartite graph properties, we do not create edges between the observation nodes to exploit temporal information.
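For concreteness, the bipartite formulation can be sketched as follows. This is a minimal NumPy illustration of the node/edge construction only (in GRAPE the edges additionally carry learned embeddings), and the helper name is ours:

```python
import numpy as np

def data_matrix_to_bipartite(X):
    """Build a GRAPE-style bipartite graph from a data matrix X (m x n):
    observation nodes 0..m-1, feature nodes m..m+n-1, and one attributed
    edge per observed entry. Missing entries (NaN) simply have no edge."""
    m, n = X.shape
    obs_idx, feat_idx = np.nonzero(~np.isnan(X))
    edges = np.stack([obs_idx, feat_idx + m])   # feature nodes offset by m
    edge_attr = X[obs_idx, feat_idx]            # observed values as edge attributes
    return edges, edge_attr

# Example: a 3x2 matrix with one missing value yields 5 edges.
X = np.array([[1.0, np.nan], [2.0, 3.0], [4.0, 5.0]])
edges, vals = data_matrix_to_bipartite(X)
assert edges.shape[1] == 5
```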
To capture temporal information without creating such edges, we instead incorporate sequence length, which identifies each window among the windowed sequences of datapoints, into the observation nodes and edges, thereby capturing temporal information in observation nodes and edges that may provide a more accurate chronological representation of the data. We call this type of approach a joint bipartite graph, as it incorporates sequence length, which addresses the independence assumption between observation nodes implied in a bipartite graph.

Intuition Behind Joint Bipartite Graph in TSI-GNN

Our key innovation is to formulate the problem using a joint bipartite graph (Figure 1). Let G = (V, E) be the joint bipartite graph of nodes V and undirected edges E. V comprises two types of nodes: V_observations = {o_1, . . . , o_m}, where the size m is the number of observations minus the sequence length, times the sequence length, and V_features = {v_1, . . . , v_n}, where the size n is the number of features. E contains the undirected edges between V_observations and V_features, where E = {k_1, . . . , k_p} and the size p is the number of observations minus the sequence length, times the sequence length, times the number of features. To incorporate sequence length into observation nodes and edges, we use an operation similar to the idea of reshaping a 3D array with a sequence length dimension to a 2D array (The NumPy community, 2021), which keeps the sequence length information and can then be used as input for a GNN to exploit. Prior to reshaping, we cut the data by the sequence length (i.e., apply a sliding window technique), which is an operation also implemented in existing temporal imputation methods (Yoon et al., 2018b; Yoon et al., 2019). For example, using the stock dataset (see Datasets), after cutting the data by the sequence length, let observations, sequence length, and features respectively denote the parameters in the 3D array, (4120 - 21 = 4099, 21, 6). After reshaping the 3D array, let observations and features denote the parameters in the 2D array, (4099 × 21 = 86079, 6), which keeps the sequence length information (i.e., a sequence length of 21), but in a different shape. Since we know the sequence length, we can verify that it is kept by the 2D representation by demonstrating that we can recover the 3D array with dimensions (4099, 21, 6) by reshaping the 2D array with dimensions (86079, 6). In a similar vein, RNN-based and GAN-based methods have empirically shown that the operation of reshaping 3D arrays with a sequence length dimension into 2D arrays is a suitable method for keeping sequence length information (Yoon et al., 2018b; van der Schaar Lab: T-I, 2020). Specifically, M-RNN uses this reshaping operation in the training and predicting fully connected network functions (Yoon et al., 2018b) and T-GAIN uses this reshaping operation in the fit and transform functions (van der Schaar Lab: T-I, 2020). While our application of the reshaping operation is for graph representation, the logic remains the same. Thus, this joint bipartite graph captures temporal information across the same type of nodes (i.e., observation nodes) without creating actual edges (which we informally refer to as using "ghost" edges), as well as information between the observation and feature nodes (i.e., the edge attributes) for GNN-based approaches to leverage.

Optimizing the Number of Trainable Edges in TSI-GNN

Incorporating sequence length can significantly increase the number of trainable edges in a GNN. As such, it is helpful to be aware of the size of the potential TSI-GNN before its training.
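The sliding-window cut and reshape described above is a short operation in NumPy; the sketch below reproduces the stock-dataset dimensions from the text and verifies that the 3D array is recoverable (the helper name is ours):

```python
import numpy as np

def to_joint_bipartite_input(X, seq_len):
    """Cut a (T, n) series into overlapping windows of length seq_len
    (sliding window), then flatten the 3D array to 2D so each row is one
    time step inside a window. Sequence membership is preserved by row
    position and is recoverable by the inverse reshape."""
    T, n = X.shape
    windows = np.stack([X[i:i + seq_len] for i in range(T - seq_len)])
    flat = windows.reshape(-1, n)        # ((T - seq_len) * seq_len, n)
    return windows, flat

X = np.random.rand(4120, 6)              # stock-like dimensions from the text
w, flat = to_joint_bipartite_input(X, 21)
assert w.shape == (4099, 21, 6) and flat.shape == (86079, 6)
assert np.array_equal(flat.reshape(4099, 21, 6), w)   # 3D array recoverable
```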
Let observations, sequence length, and features respectively denote the main parameters affecting the size of the TSI-GNN, and let a nonnegative threshold be a hyperparameter used to balance the selection of these size parameters, such that size ≤ threshold is practical to implement on the machine running the model. For example, on a MacBook M1 with an 8-core CPU and 16 GB RAM, generating a TSI-GNN with a size ≤ four million is practical for implementation. Furthermore, at lower rates of missingness there is a larger number of trainable edges (i.e., a larger TSI-GNN) relative to higher rates of missingness, where there is a lower number of trainable edges (i.e., a smaller TSI-GNN). As GNNs are inductive, it is feasible to train on smaller subsets of the larger dataset, learn a temporal representation, and then generalize to unseen data, thereby reducing the computational complexity of the model the data is trained on.

Baseline Imputation Methods

In this work, we explore the performance of baseline imputation methods (Figure 2) that include well-established and contemporary approaches commonly used in static and temporal settings: 1) Static methods. Generative adversarial imputation networks (GAIN) introduces a hint mechanism to ensure that the generator generates samples according to the true underlying data distribution (Yoon et al., 2018a); GRAPE formulates the problem using a bipartite graph and modifies the GraphSAGE architecture with edge embeddings during message passing (You et al., 2020).

Datasets

We utilized publicly available datasets from three domains: finance, energy, and healthcare. The healthcare dataset is a subset of the MIMIC-III database (Johnson et al., 2016) consisting of individuals who received antibiotics at any point, with the label based on the daily decision on antibiotic treatment. This dataset was extracted based on the preprocessing guidelines from the Clairvoyance implementation but filtered to produce a complete dataset (Jarrett et al., 2020). We selected 16 features (common labs and vital signs) from the original 27 features. Further, we randomly sampled 550 patients from the subset with a minimum sequence length of 9. While using a lower number of patients may degrade the performance of leading benchmark temporal methods, such as M-RNN (Yoon et al., 2018b), it enables testing our method at higher sequence lengths, which we believe is suitable for this work.

Determining Sequence Length

Sequence length is a vital parameter for temporal imputation methods and should be thoughtfully determined. In this work, the regularly sampled datasets (i.e., stocks, energy) contain the same number of repeated measurements as the number of observations; therefore, selecting a sequence length for these datasets is somewhat flexible. For example, previous methods using a similar stock dataset set the sequence length ranging from 7 to 24 days (Yoon et al., 2018b; Yoon et al., 2019). In this work, we set the sequence length at 21 days. The irregularly sampled dataset (i.e., healthcare) contains multiple observations per patient, and the number of observations varies between patients. Therefore, determining an appropriate sequence length requires careful consideration. Previous research has suggested calculating an average sequence length for electronic health record (EHR) datasets and found that an average sequence length above 10 leads to improved performance in contrast to lower average sequence lengths (Yoon et al., 2018b). Further, RNN-based and GAN-based methods have employed a maximum on the sequence length in EHR datasets to handle irregular sampling (Jarrett et al., 2020).
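Putting the size parameters above together, a quick pre-training check might look like the following sketch, taking size as the number of trainable edges (the helper name is ours; the four-million threshold is from the MacBook example):

```python
def tsi_gnn_num_edges(n_obs, seq_len, n_feat, missing_rate):
    """Rough edge count for a TSI-GNN graph: one trainable edge per
    observed entry of the windowed 2D matrix."""
    rows = (n_obs - seq_len) * seq_len
    return int(rows * n_feat * (1.0 - missing_rate))

THRESHOLD = 4_000_000   # practical limit quoted for a 16 GB machine
size = tsi_gnn_num_edges(4120, 21, 6, missing_rate=0.30)
print(size, size <= THRESHOLD)   # 361531 edges -> fits comfortably
```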
After applying the inclusion criteria, the healthcare dataset in this work has an average sequence length of 15. To handle the irregular sampling, we set the maximum sequence length to be the same as the average sequence length, 15.

Model Training and Evaluation

The datasets are fully observed; therefore, we mask 30% and 60% of the data completely at random, recreating the scenario where data are missing completely at random. Since the majority of the variables in the datasets we explore are continuous, we evaluate imputation performance using the root mean square error (RMSE). To test the effect of imputation on the downstream prediction task, we follow a holdout procedure via a 70:30 training and test set split. In this study, data were normalized before input to the models, and we did not renormalize or round the output of the models. Therefore, we evaluate prediction performance using R², which is a measure of goodness of fit. We compare the R² of the imputed values to that of the original values to assess the congeniality of the models, that is, how well the imputed values preserve the feature-label relationships of the original dataset (Meng, 1994). We use a nonlinear model, gradient boosting regression trees (GBR), as well as a linear model, linear regression (LR), to determine which model may be more appropriate for the dataset as well as more congenial to the original representation.

Configurations

1) TSI-GNN uses the same modified GraphSAGE architecture as GRAPE (You et al., 2020) to fairly evaluate the performance of a joint bipartite graph representation (TSI-GNN) against a bipartite graph representation (GRAPE). Further, for TSI-GNN and GRAPE, we set the number of epochs to 2,000 and use the Adam optimizer with a learning rate of 0.001. We use a 3-layer GNN with 64 hidden units (You et al., 2020). We use the mean aggregation function and ReLU activation function. For TSI-GNN, we set the sequence length at 21, 24, and 15 for the stocks, energy, and ICU datasets, respectively. 2) For M-RNN, we use four hidden state dimensions and a batch size of 64. We train the model for 2,000 iterations with a learning rate of 0.001. We set the sequence length at 21, 24, and 15 for the stocks, energy, and ICU datasets, respectively. 3) For GAIN and T-GAIN, we train the model for 2,000 iterations with a learning rate of 0.001. We use a batch size of 64, a hint rate of 0.9, and an alpha of 100. For T-GAIN, we set the sequence length at 21, 24, and 15 for the stocks, energy, and ICU datasets, respectively.

TSI-GNN Improvement Over GRAPE in Downstream Prediction

TSI-GNN outperforms GRAPE at the 30% and 60% missing rates with respect to the original dataset in the downstream prediction task, specifically when using GBR in the regularly sampled datasets (Table 2). In the stock dataset, the TSI-GNN R² for GBR is 0.141 and 0.123 higher than the GRAPE R² at the 30% and 60% missing rates, respectively (Table 2). In the energy dataset, the TSI-GNN R² for GBR is 0.097 and 0.146 higher than the GRAPE R² at the 30% and 60% missing rates, respectively (Table 2). In the ICU dataset, the TSI-GNN R² for GBR is 0.030 and 0.033 higher than the GRAPE R² at the 30% and 60% missing rates, respectively (Table 2).
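The evaluation protocol above (mask completely at random, RMSE on the masked entries, a 70:30 split, and congeniality via R² under GBR and LR) can be sketched with scikit-learn; the mean-imputation stand-in below is only a placeholder for the method under test, and the helper name is ours:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def evaluate(X_true, X_imputed, mask, y):
    """RMSE on the masked entries plus downstream R^2 for GBR and LR."""
    rmse = np.sqrt(np.mean((X_true[mask] - X_imputed[mask]) ** 2))
    scores = {}
    for name, model in [("GBR", GradientBoostingRegressor()),
                        ("LR", LinearRegression())]:
        Xtr, Xte, ytr, yte = train_test_split(X_imputed, y, test_size=0.30,
                                              random_state=0)
        scores[name] = r2_score(yte, model.fit(Xtr, ytr).predict(Xte))
    return rmse, scores

rng = np.random.default_rng(0)
X = rng.random((500, 6)); y = X @ rng.random(6)
mask = rng.random(X.shape) < 0.30                  # 30% missing completely at random
X_missing = np.where(mask, np.nan, X)
X_imp = np.where(mask, np.nanmean(X_missing, axis=0), X)   # mean-impute stand-in
print(evaluate(X, X_imp, mask, y))
```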
Downstream Prediction Across Static and Temporal Methods

In the stock dataset, at the 30% missing rate, with respect to the original dataset, TSI-GNN, T-GAIN, and GAIN perform similarly and perform best for GBR, followed closely by cubic interpolation, while spline interpolation, cubic interpolation, and KNN perform best for LR (Table 2). At the 60% missing rate, with respect to the original dataset, GAIN performs best for GBR, followed closely by M-RNN and TSI-GNN, while missForest performs best for LR (Table 2). In the energy dataset, at the 30% missing rate, with respect to the original dataset, KNN performs best for GBR, while MICE performs best for LR (Table 2). Notably, at the 60% missing rate, with respect to the original dataset, M-RNN performs best for GBR, while cubic interpolation and GAIN perform best for LR (Table 2). In the ICU dataset, at the 30% missing rate, with respect to the original dataset, TSI-GNN performs best for GBR, while KNN and GRAPE perform best for LR (Table 2). At the 60% missing rate, with respect to the original dataset, TSI-GNN performs best for GBR, while KNN, GRAPE, and TSI-GNN perform best for LR (Table 2).

Imputation Performance Across Static and Temporal Methods

In the stock dataset, at the 30% missing rate, MICE performs best, followed closely by TSI-GNN (Table 3). But at the 60% missing rate, MICE is outperformed by all temporal methods except for T-GAIN (Table 3). In the energy dataset, at the 30% and 60% missing rates, missForest performs best (Table 3). In the ICU dataset, at the 30% missing rate, MICE performs best, followed by TSI-GNN and GRAPE (Table 3). Yet at the 60% missing rate, TSI-GNN and GRAPE outperform MICE (Table 3).

DISCUSSION

In this work, we show that formulating the problem using a joint bipartite graph, which incorporates sequence length information into bipartite graphs, can improve the representation at the 30% and 60% missing rates, specifically when using GBR for downstream prediction tasks in regularly sampled datasets. Moreover, we demonstrate that TSI-GNN is able to capture the temporal information between observation nodes without creating actual edges between them. In contrast, GRAPE formulates the problem using a bipartite graph, which does not incorporate sequence length or capture the temporal relationships between observation nodes. Our proposed method has the potential to capture meaningful temporal dynamics that can be useful in various domains and applications. While determining the sequence length of a dataset can be straightforward in regularly sampled datasets, it requires more consideration in irregularly sampled datasets. In this work, we highlight learning the average sequence length in EHR data and incorporating it into bipartite graphs; however, this approach can be generalized to various irregularly sampled data. A limitation of our proposed method is that it increases the number of trainable edges in a GNN. But, as demonstrated, it can improve the representational capacity. Therefore, using the guidelines we provided regarding managing the size of the generated TSI-GNN, one can practically implement and potentially scale this method. Interestingly, for datasets with higher rates of missingness, this limitation is potentially nullified as the number of trainable edges is lower. Another limitation is the preprocessed ICU dataset we used for testing our method.
It is possible that some of the preprocessing steps used (e.g., using a completely observed subset or a fixed sequence length) removed important temporal information that degraded the performance of the temporal methods in the healthcare domain. Further, in the ICU dataset at the 30% missing rate, while the TSI-GNN R² for GBR was most similar to the original R², it was slightly higher (0.009). Similarly, in the stocks dataset at the 30% missing rate, the TSI-GNN R² for GBR was slightly higher than the original R² (0.018). In this work, we empirically show that a joint bipartite graph representation captures temporal information; however, future work is needed to provide theoretical foundations that can elucidate how GNNs exploit temporal information. Furthermore, RMSE may be a more biased performance metric when handling missing data for categorical variables (Wang et al., 2021). Although the vast majority of the variables in the datasets we explored are continuous, there still exists some ambiguity regarding choosing a single appropriate metric to use when evaluating imputation performance on a dataset with a mixture of categorical and continuous variables. While the main contribution of this work was to introduce TSI-GNN, we also demonstrate the performance of a nonexhaustive collection of benchmark temporal and static imputation methods. Not surprisingly, we did not find a single temporal method that worked best across all domains. Rather, our findings suggest that each data domain has unique characteristics that can make optimizing various classic and contemporary methods across multiple domains difficult. Recently, a Bayesian optimization/ensemble approach was applied on top of various imputation methods, which seems to help reduce the challenges associated with selecting and tuning the appropriate imputation method for a given domain (Jarrett et al., 2020). Still, this suggests that the choice of an imputation technique must be carefully considered in light of the underlying data distribution as well as the downstream application in data analysis; no singular method will be superior without sufficient context regarding its usage. In future work, we plan to explore temporal imputation boosting with interpolation layers (TIBIL) for healthcare datasets with less frequent measurements (e.g., annual intervals) and shorter sequence lengths (e.g., 4). More specifically, TIBIL uses: 1) an upsample interpolation layer to produce more frequent and longer sequence lengths; 2) temporal imputation, such as TSI-GNN, to handle missing data; and 3) a downsample interpolation layer to rescale the interpolated and imputed data back into the original less frequent and shorter sequence length. We also plan to explore TSI-GNN and TIBIL using appropriate missingness mechanisms as well as other aggregation functions (e.g., LSTM, which does not necessarily assume order invariance). Further, we plan to explore combining reinforcement learning with TSI-GNN and TIBIL.

CONCLUSION

Incorporating temporal information into GNN-based methods for handling missing data improved the representation, specifically when using GBR for downstream prediction tasks in regularly sampled data. We tested our method using several benchmark datasets and compared to classic and contemporary methods. We provided insight into practically implementing our proposed method by managing the size of the generated TSI-GNN.
TSI-GNN outperformed GRAPE, specifically when using GBR in downstream prediction tasks in regularly sampled datasets. Our proposed method is competitive with existing temporal imputation methods.

AUTHOR CONTRIBUTIONS

Most intellectual work was done by DG with guidance from AB. DG was responsible for planning and implementing the paper methodologies. DG was responsible for running the experiments and evaluating the models. AB, PP, HZ, and DZ reviewed drafts of the paper and provided feedback.

FUNDING

This work was supported in part by NIH T32 EB016640 and NSF NRT-HDR 1829071.
Efficient design of cold-formed steel bolted-moment connections for earthquake resistant frames

Cold-formed steel (CFS) sections can be designed in many configurations and, compared to hot-rolled steel elements, can lead to more efficient and economic design solutions. While CFS moment resisting frames can be used as an alternative to conventional CFS shear-wall systems to create more flexible space plans, their performance under strong earthquakes is questionable due to the inherent susceptibility of thin-walled CFS elements to local/distortional buckling and the limited ductility and energy dissipation capacity of typical CFS bolted-moment connections. To address the latter issue, this paper presents a comprehensive parametric study on the structural behaviour of CFS bolted beam-to-column connections with gusset plates under cyclic loading, aiming to develop efficient design solutions for earthquake resistant frames. To simulate the hysteretic moment-rotation behaviour and failure modes of selected CFS connections, an experimentally validated finite element model using ABAQUS is developed, which accounts for both nonlinear material properties and geometrical imperfections. Connection behaviour is modelled using a connector element, simulating the mechanical characteristics of a bolt bearing against a steel plate. The model is used to investigate the effects of bolt arrangement, cross-sectional shape, gusset plate thickness and cross-sectional slenderness on the seismic performance of CFS connections under cyclic loading. The results indicate that, for the same amount of material, folded flange beam sections with diamond or circle bolt arrangements can provide up to 100% and 250% higher ductility and energy dissipation capacity, respectively, compared to conventional flat-flange sections with square bolt arrangement. Using gusset plates with the same or lower thickness as the CFS beam may result in a premature failure mode in the gusset plate, which can considerably reduce the moment capacity of the connection. The proposed numerical model and design configurations can underpin the further development and implementation of CFS bolted-moment connections in seismic regions.

Introduction

Compared to hot-rolled steel elements, cold-formed steel (CFS) thin-walled elements are easier to manufacture with a far greater range of section configurations. Having a high strength to weight ratio, they are easy to transport and erect and can lead to more efficient and economic design solutions. However, CFS sections are more prone to local/distortional buckling due to the large width-to-thickness ratio of their thin-walled elements. As a result, CFS cross-sections have traditionally been employed mainly as secondary load-carrying members such as roof purlins and wall girts. However, in the modern construction industry, CFS members are increasingly used as primary structural elements, especially in modular systems [1]. Conventional CFS structures usually comprise shear walls made of vertical studs, diagonal braces and top and bottom tracks. The performance of shear walls with different bracing systems such as straps, steel sheets, and K-bracing was evaluated experimentally on full-scale specimens under cyclic and monotonic loadings [2][3][4]. The results indicate that most of the shear wall systems tested can maintain their lateral and vertical load bearing capabilities up to the drift limits specified by most seismic codes.
However, CFS shear walls may exhibit poor ductility due to the distortional buckling of the stud elements and the resulting rapid decrease in their load-bearing capacity [5]. Fiorino et al. similarly reported failure modes governed by the connections (e.g., gusset-to-track connection failure) [6]. CFS members have also been used as primary structural elements in low- to mid-rise multi-storey buildings [7] and in CFS portal frames with bolted-moment connections [8,9]. Experimental and numerical investigations on bolted moment connections using CFS sections have generally demonstrated their good strength and stiffness, and adequate deformation capacity for seismic applications [9,10]. However, typical CFS bolted moment connections may exhibit very low ductility and energy dissipation capacity, especially when the width-to-thickness ratio of the CFS elements increases [11]. This highlights the need to develop more efficient CFS connections suitable for moment-resisting frames in seismic regions. The global moment-rotation behaviour of CFS bolted connections is mainly governed by the bolt distribution configuration, bolt tightening and bearing behaviour [12,13], as presented schematically in Fig. 1. Bolt slippage in CFS bolted connections can be avoided by appropriate tightening. The behaviour of beam-to-column CFS bolted moment resisting connections with gusset plates has been investigated experimentally and numerically under monotonic and cyclic loading conditions [12,[14][15][16][17][18]. It was found that, though they usually exhibit semi-rigid behaviour, they can generally provide enough stiffness and moment resistance for low to medium rise moment frames [18]. In another study, Lim et al. [9] examined experimentally the ultimate strength of bolted moment-connections between CFS channel-sections. The tested specimens included apex and eaves connections, and it was concluded that the connections exhibit semi-rigid behaviour due to bolt-hole elongation in the thin-walled steel sheet. Analytical work by Lim et al. [10] indicated that bolt-group sizes in CFS connections can also have a significant impact on the bending capacity of connected sections. Based on the above studies, it can be concluded that the ductility and energy dissipation capacity of CFS bolted connections depend mainly on four factors: (a) material yielding and bearing around the bolt holes; (b) yield lines resulting from the buckling of the CFS cross-sectional plates; (c) bolt distribution; and (d) the cross-sectional shape of the CFS beam elements. The effects of these factors are investigated in this study. Contrary to the common misconception that slender CFS structural elements are not ductile, previous studies showed that with an appropriate design CFS sections can offer significant ductility and energy dissipation capacity even when subjected to local/distortional buckling [19][20][21]. Experimental and analytical research on CFS moment resisting frames [22] at the University of Sheffield, UK, has also demonstrated that increasing the number of flange bends in CFS channel sections can delay the buckling behaviour. Follow-up studies proved that optimised folded flange sections can provide up to 57% higher bending capacity [23] and dissipate up to 60% more energy through plastic deformations [24] compared to commercially available lipped channels. Consequently, higher strength, stiffness, and ductility can potentially be developed in CFS beams with an infinite number of bends (curved flange, shown in Fig. 2(a)).
Nevertheless, this type of cross-section is hard to manufacture and difficult to connect to typical floor systems. Considering the construction and manufacturing constraints, the curved flange can be substituted with a folded flange cross-section (Fig. 2(b)). Both of these sections are examined in this study.

Fig. 2. Configuration of CFS moment resisting connections using gusset plates with (a) curved flange beam, adopted from [22], and (b) folded flange beam, adopted from [23].

This paper aims to develop efficient design configurations for moment resisting CFS bolted beam-to-column connections to improve their ductility and energy dissipation capacity and therefore facilitate their practical application in earthquake resistant frames. Detailed nonlinear FE models are developed taking into account material nonlinearity, initial geometrical imperfections and bearing action, which are known to affect the accuracy of the results [24]. The models also adopt a connector element to simulate the behaviour of a single bolt against the CFS plate. The developed FE models are verified against experimental results of CFS bolted connections under cyclic loading using test data by Sabbagh et al. [12]. An extensive parametric study is then conducted to investigate the effects of a wide range of design parameters including cross-sectional shape, cross-sectional slenderness, bolt configuration and gusset plate thickness on the behaviour of the connections. Finally, the results are used to identify the most efficient design solutions to considerably improve the seismic performance of CFS bolted-moment connections.

Finite element model

Finite Element (FE) modelling is widely used to examine the behaviour of CFS bolted connections with gusset plates [9,[25][26][27][28][29][30]. The results of previous studies in general demonstrate the adequacy of detailed FE models to predict the behaviour of CFS connections under monotonic [30,31] and cyclic loading [27]. This section describes the details of the FE models developed, including the model proposed for simulating single bolt behaviour in a connection assembly. It should be noted that bolt slippage can significantly change the cyclic behaviour of CFS bolted connections [12]. Therefore, this study deals only with CFS bolted moment connections without bolt slippage, as a typical connection type in CFS frame systems. More information about FE modelling of CFS connections with bolt slippage can be found in [27].

Bolt modelling

Lim and Nethercot [17,31] developed a simplified bolt model comprising two perpendicular linear springs to model the bearing behaviour of a single bolt, and reported good agreement with experimental results of full-scale CFS joints subjected to monotonic loading. However, the linear spring elements used are not suitable for modelling the bearing behaviour of bolts under cyclic loads, due to nonlinearities in the bearing plate. Sabbagh et al. [27] used the connector element in ABAQUS [32] to model the behaviour of bolts under both monotonic and cyclic loading and achieved good agreement with experimental tests. Nonetheless, this approach induces stress concentrations around the two connector nodes. This issue can be overcome if the bolt behaviour is modelled explicitly using solid elements and surface-to-surface contact interactions [30,33,34]. The disadvantage of this is that it makes the model more complex and computationally expensive for cyclic modelling, especially when a large number of bolts is needed.
Furthermore, convergence becomes an issue in the presence of bolt rigid body movement and slippage [34]. In this study, a simplified connection element, similar in concept to the "component method" adopted by Eurocode 3 [35], is used to simulate the full-scale CFS connection behaviour. It is anticipated that the proposed model can provide accurate results with considerably lower computational cost compared to complex FE models. The point-based "Fastener" with a two-layer fastener configuration available in the ABAQUS library [32] is employed to model individual bolts (see Fig. 3). Each layer is connected to the CFS beam and gusset plate using the connector element to define the interaction properties between the layers. To model the connector element, a "physical radius" r is defined to represent the bolt shank radius and simulate the interaction between the bolt and the nodes at the bolt hole perimeter. The adopted method can accurately capture the stress concentrations around the nodes at the bolt positions and helps to simulate more accurately the bearing work of the bolts. As shown in Fig. 3, each fastener point is connected to the CFS steel plates using a connector element that couples the displacement and rotation of each fastener point to the average displacement and rotation of the nearby nodes. Hence, rigid behaviour is assigned to the local coordinate system corresponding to the shear deformation of the bolts. Fig. 4 shows the fasteners with connector elements modelled in a typical CFS bolted-moment connection. It should be mentioned that previous studies by D'Aniello et al. [36,37] highlighted the importance of accounting for the non-linear response of the bolts (e.g. due to shank necking or nut stripping) in the modelling of preloadable bolt assemblies. However, no bolt damage was observed in the reference experimental tests, and therefore, the failure modes of the bolts were not considered in this study.

Geometry, boundary conditions and element types

To model the CFS connections in this study, the general-purpose S8R element in ABAQUS [32] (8-noded quadrilateral shell element with reduced integration) is adopted. Shell elements in general can accurately capture local instabilities and have been successfully used by researchers to model CFS connections in bending [22,27]. Following a comprehensive mesh sensitivity study, a mesh size of 20 × 20 mm is selected to balance accuracy and computational efficiency. The boundary conditions are defined to simulate the actual experimental test set-up used by Sabbagh et al. [12]. The translational degrees of freedom U_X and U_Y at the top of the back-to-back channel column are restrained, while the bottom of the column is considered to be pinned (see Fig. 5). Since in the experiments the back-to-back channel beams were assembled with bolts and filler plates, the web lines are tied together in the U_X, U_Y and U_Z directions using the "Tie" constraint in ABAQUS [32]. The out-of-plane deformation of the beam (in the X direction) is restrained at the location of the lateral bracing system of the test set-up (see Fig. 5). The column stiffeners (used in the experiments to ensure the column remains elastic) are tied to the column surfaces. While it was previously shown that deformation of the panel zone can also contribute to the rotational response of bolted-moment connections [38,39], in this study it is assumed that the panel zone remains elastic during cyclic loading (due to the use of a thicker cross-section and column stiffeners).
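The coupling idea behind the point-based fastener can be illustrated with a toy calculation; this is a conceptual sketch of averaging over nodes within the physical radius, not the ABAQUS implementation itself:

```python
import numpy as np

def fastener_coupling(fastener_xy, node_xy, node_disp, r):
    """Couple the fastener point to the average displacement of all shell
    nodes within the physical radius r (the bolt shank radius). This
    distributes the bolt bearing over the hole-perimeter nodes instead of
    concentrating it at a single node."""
    d = np.linalg.norm(node_xy - fastener_xy, axis=1)
    nearby = d <= r
    return node_disp[nearby].mean(axis=0)

nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [40.0, 0.0]])
disp = np.array([[0.1, 0.0], [0.2, 0.0], [0.1, 0.1], [9.9, 9.9]])
print(fastener_coupling(np.zeros(2), nodes, disp, r=18.0))  # far node excluded
```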
The assumption of an elastic panel zone is consistent with the experimental results reported by Sabbagh et al. [12]. To apply the external load, the nodes of the beam end section are coupled to their centroid using a coupling constraint.

Material model

The stress-strain behaviour of the CFS plate is simulated using the constitutive model suggested by Haidarali and Nethercot [40]. The stress-strain relationship consists of a Ramberg-Osgood equation up to the 0.2% proof stress, followed by a straight line with a slope of E/100 (where E is the elastic modulus, taken as 210 GPa). This slope is obtained according to the coupon test results reported by Sabbagh et al. [12], as shown in Fig. 6. The ultimate strain used here is 0.08. Mathematically, the stress-strain model is expressed as:

ε = σ/E + 0.002(σ/σ_0.2)^n for σ ≤ σ_0.2
ε = ε_0.2 + (σ - σ_0.2)/(E/100) for σ > σ_0.2

where σ_0.2 is the 0.2% proof stress, ε_0.2 is the strain corresponding to the 0.2% proof stress and n is a parameter determining the roundness of the stress-strain curve. The parameter n is taken as 10 to give the best agreement with the coupon test results. Fig. 6 compares the stress-strain curve from the coupon tests [12] with the material model used in this study. Since the reference bolted-moment connection exhibited plastic deformations with strain reversals under the applied cyclic loading, the effect of cyclic strain hardening was taken into account in this study. The combined hardening law in ABAQUS [32] was adopted based on the linear kinematic hardening modulus, C, determined as follows:

C = (σ_u - σ_0.2)/ε_pl

where σ_0.2 is the yield stress at zero plastic strain and σ_u is the yield stress at the ultimate plastic strain, ε_pl, all obtained from the monotonic coupon test results shown in Fig. 6. The adopted method is capable of taking into account the Bauschinger effect [41,42] and has been shown to be efficient at simulating the cyclic behaviour of steel material with isotropic/kinematic hardening [27,43].

Imperfections

An experimental and analytical study carried out by Tartaglia et al. [39] showed that geometrical imperfections can significantly affect the cyclic response of an isolated beam, leading to more severe strength degradation under both monotonic and cyclic loadings. Similarly, previous studies have highlighted the importance of considering geometrical imperfections for more efficient design of CFS sections (e.g. [44,45]). No global buckling (lateral-torsional buckling) was observed in the reference beam [12] due to the lateral bracing system used in the test setup. Therefore, in this study, depending on the critical buckling resistance, either a local or a distortional imperfection is incorporated into the FE models. For steel sheets with thickness (t) less than 3 mm, the imperfection amplitude is taken as 0.34t and 0.94t for the local and distortional imperfections, respectively, based on experimental and analytical work by Schafer and Peköz [46]. The selected amplitudes are adopted from the 50% value of the Cumulative Distribution Function (CDF) of the experimentally measured imperfection data. For steel sheet thickness (t) larger than 3 mm, the imperfection magnitude is taken as 0.3tλ_s, based on the model proposed by Walker [47], where λ_s is the cross-sectional slenderness. It is worth mentioning that the Walker model can be considered an upper bound for the data range presented by Schafer and Peköz [46]. The cross-sectional shape of the full CFS connection assembly, including the CFS beam and column sections and gusset plate, with their corresponding imperfections, is then generated using an eigenvalue buckling analysis in ABAQUS [32].
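The material law and imperfection amplitudes quoted in this section can be collected in a short sketch; the 0.2% proof stress value below is illustrative, and we read the garbled Walker expression as 0.3tλ_s:

```python
E, SIGMA_02, N = 210e3, 300.0, 10        # MPa; sigma_0.2 value is illustrative

def strain(sigma):
    """Ramberg-Osgood up to the 0.2% proof stress, then a straight line
    of slope E/100, as described in the text."""
    eps_02 = SIGMA_02 / E + 0.002
    if sigma <= SIGMA_02:
        return sigma / E + 0.002 * (sigma / SIGMA_02) ** N
    return eps_02 + (sigma - SIGMA_02) / (E / 100.0)

def kinematic_hardening_modulus(sigma_u, eps_pl):
    """C = (sigma_u - sigma_0.2) / eps_pl from the monotonic coupon data."""
    return (sigma_u - SIGMA_02) / eps_pl

def imperfection_amplitude(t, lam_s, mode="local"):
    """0.34t (local) or 0.94t (distortional) for t < 3 mm (Schafer and
    Pekoz 50% CDF values); 0.3*t*lam_s (Walker) for thicker sheets."""
    if t < 3.0:
        return 0.34 * t if mode == "local" else 0.94 * t
    return 0.3 * t * lam_s
```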
For the imperfection shape, the first buckling mode of the CFS connection is used to find the general shape of the local and distortional imperfections. This general shape is then scaled by the imperfection amplitude and superimposed to obtain the initial state of the CFS connection, as shown in Fig. 7.

Solution technique and loading regime

Nonlinear FE analyses are conducted using the Static General analysis (available in the ABAQUS library) by applying a displacement at a reference point placed on the beam end section, as shown in Fig. 5. The AISC 341-16 [48] cyclic loading regime used in the reference experimental tests [9] is adopted, as shown in Fig. 8. Fig. 9 shows the moment-rotation hysteresis behaviour of the CFS connection test A1 [49], which is used to validate the developed FE models. This connection was designed to prevent bolt slippage by providing sufficient friction force between the steel plates through pretensioning the bolts with a torque of 240 N m [49]. The simplified bolt model in Fig. 3 is used for modelling the bolts (the radius of the bolts was 18 mm). The experimental and FE responses under cyclic loading are presented in terms of moment-rotation (M-θ) hysteretic curves in Fig. 9. The numerical results show very good agreement with the corresponding experimental measurements. For better comparison, Fig. 10 compares the failure shape of the modelled connection under cyclic loads with the experimental observations. The von Mises stress distribution extracted from the FE analysis is shown in Fig. 10(a), with grey areas indicating yielding. It should be noted that the stress and in-plane deformations developed in the gusset plate are noticeably smaller than in the CFS beam due to its greater plate thickness. It is shown that the developed numerical model successfully captures the shape and the position of local/distortional buckling in the CFS beam. These results validate the modelling approach used in this study.

Design parameters

The main design parameters examined are beam cross-sectional shape, plate thickness (or cross-sectional slenderness), bolt configuration and gusset plate thickness. Considering the capabilities of the cold-rolling and press-braking processes to provide cross-sections with intermediate stiffeners or folded plates, four different geometries, including flat, stiffened flat, folded and curved-shaped channel cross-sections, are selected for the parametric study, as shown in Fig. 11. For each cross-section, four different plate thicknesses of 1, 2, 4 and 6 mm are used. The selected cross-sections have the same total plate width, and therefore use the same amount of structural material. Moreover, to investigate the effect of different flange shapes, all cross-sections are designed to have a similar web slenderness ratio. It should be mentioned that the curved flange cross-section in this study is less practical and is mainly used for comparison purposes and verification of analytical models with experimental results [12]. Also, thin-walled sections with curved or folded flange cross-sections may be sensitive to crippling due to transverse support actions from joints and purlins, which should be considered in the design process of these elements. Since bolt distribution can also affect the stress field distribution of bolted moment connections [10], three types of bolt distributions, including circle, diamond and square shapes, are selected, as shown in Fig. 12. To increase the efficiency of the proposed connection, the number of bolts and their arrangement were also optimised in this study.
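The resulting parametric grid is a full factorial over the design parameters listed above; a short sketch makes the study size implied by the text explicit (gusset plate thickness is varied in a separate set of analyses):

```python
from itertools import product

shapes = ["flat", "stiffened flat", "folded", "curved"]
thicknesses_mm = [1, 2, 4, 6]
bolt_layouts = ["circle", "diamond", "square"]

# 4 shapes x 4 thicknesses x 3 bolt layouts = 48 connection configurations
grid = list(product(shapes, thicknesses_mm, bolt_layouts))
print(len(grid))   # 48
```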
Through this optimisation, the number of required bolts was reduced from 16 in the reference experimental tests [12] to 9. It is worth mentioning that a circular bolt arrangement may be less practical than the square and diamond arrangements in real construction practice; however, in this study it is investigated for comparison purposes. The effect of gusset plate thickness on the cyclic behaviour of the bolted CFS connections is also investigated in the parametric study. It should be noted that, for better comparison, the performance parameters of the sections with different shapes and plate thicknesses are presented as a function of their slenderness ratio, calculated as:

λ_s = √(f_y/σ_cr)

where σ_cr and f_y are the critical buckling stress and yield stress, respectively.

Eurocode regulations

To design CFS beams for bending, Eurocode 3 [50,51] divides the cross-section of CFS beam elements into individual plates subjected to either compression, bending or combined bending and compression. Each plate is identified as an internal or outstand element according to the edge boundary conditions. Based on their susceptibility to local buckling, sections are categorised into four different classes (1, 2, 3 and 4). This classification is based on the slenderness of the constituent flat elements (width-to-thickness ratio), yield stress, edge boundary conditions and applied stress gradient. The overall classification of a cross-section is obtained using the highest (most unfavourable) class of its compression parts. Table 1 lists the Eurocode 3 classifications obtained for the channels with flat, stiffened flat and folded flanges used in this study. It should be noted that the Eurocode classification cannot be applied directly to sections with round elements (e.g. the curved-flange channel). As shown in Fig. 13, the concept of the Eurocode cross-sectional classification is based on the moment-rotation (M-θ) curves of the elements, where M_y, M_p and M_u represent the yield moment, plastic moment and peak moment capacity, respectively. Class 1 cross-sections can form a plastic hinge with the rotation capacity obtained from plastic analysis without reduction of their resistance (M_p < M_u). Class 2 cross-sections are capable of developing their full plastic moment resistance, but have limited rotation capacity due to local buckling (M_p < M_u). In class 3 cross-sections, the stress in the extreme compression fibre reaches the yield strength, but local buckling prevents the development of the plastic moment resistance (M_y < M_u < M_p). In class 4 cross-sections, local buckling occurs before the attainment of the yield stress in one or more parts of the cross-section (M_u < M_y). Since the moment-rotation responses of class 1 and class 2 sections follow a similar trend (M_p < M_u), the two classes can be distinguished based on the pure plastic rotation capacity factor (R) in EN 1993-1-1 [50]:

R = θ_u/θ_p - 1

where θ_u is the ultimate rotation corresponding to the drop in the moment-rotation curve in the softening branch, and θ_p is the rotation corresponding to the plastic moment in the hardening branch, as shown in Fig. 13.

Classification based on moment-rotation behaviour

The Eurocode 3 cross-sectional classification concept is assessed here by examining the predicted moment-rotation behaviour under monotonic load. Non-linear inelastic post-buckling analyses (Static Riks) are performed on 2 m long CFS cantilevers using single channel cross-sections, as shown in Fig. 14.
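Both classification quantities used here are straightforward to evaluate; a minimal sketch with illustrative critical-to-yield stress ratios (the R definition follows the EN 1993-1-1 style form reconstructed above):

```python
import math

def slenderness(f_y, sigma_cr):
    """Cross-sectional slenderness lambda_s = sqrt(f_y / sigma_cr)."""
    return math.sqrt(f_y / sigma_cr)

def rotation_capacity(theta_u, theta_p):
    """Pure plastic rotation capacity factor R = theta_u / theta_p - 1,
    used to separate class 1 from class 2 sections."""
    return theta_u / theta_p - 1.0

# illustrative critical-to-yield stress ratios in the ranges quoted below
for sigma_ratio in (4.0, 2.5, 1.2, 0.8):
    print(sigma_ratio, round(slenderness(1.0, sigma_ratio), 2))
```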
The cantilevers are fully fixed at one end and are subjected to a tip load at the centroid of the other end. The centroid is coupled to the beam end cross-section, and boundary conditions are applied to prevent the lateral movement of the flanges at 1/3 length intervals (see Fig. 14). Other FE modelling parameters (e.g. mesh size, material properties and geometric imperfections) are as described in Section 2.

Fig. 8. Cyclic loading regime for the reference experimental test [12] and numerical study.
Fig. 9. Comparison between the tested moment-rotation hysteretic curve of connection A1 [49] and the results of FE modelling.

Fig. 15 illustrates how the cross-section classification is applied based on the normalised moment and the pure plastic rotation capacity factor (R). As shown in Table 1, all of the sections with 1, 2, 4, and 6 mm plate thickness are identified as class 4, 3, 2 and 1, respectively. The Eurocode slenderness limits for class 1 and 4 channel sections are found to be in general sufficient, but they are not very accurate for class 2 and 3 channel sections. Cross-sectional classifications may also be determined using the classical finite strip method [52,53], which estimates the buckling capacity of thin-walled members by taking into account local, distortional and global elastic buckling modes. The critical elastic buckling stress of the cross-sectional shapes used in this study (subjected to bending) is determined based on the minimum of the local buckling (σ_l) and distortional buckling (σ_d) stresses obtained from the constrained finite strip software CUFSM [54]. Due to the presence of the lateral supports, the global buckling mode is not considered to be dominant in this study (see Fig. 14). The critical buckling stresses obtained for the different cross-sections are presented in Table 1. It is shown that the ratio of the critical stress to the yield stress is in the range of 3-6, 2-3 and 1-1.5 for class 1, 2 and 3 sections, respectively. For class 4 sections, local buckling is always identified as the dominant buckling mode and the critical buckling stress is generally smaller than the yield stress. This indicates that the critical buckling stress can also be used as a simple measure to identify the cross-sectional classification based on Eurocode 3.

Efficiency of CFS bolted-moment connections

In this section, the results of the parametric study are used to identify efficient design solutions for CFS bolted-moment connections. Connections with an initial stiffness ratio (initial rotational stiffness normalised by EI_b/L_b) less than 0.5, between 0.5 and 25, and over 25 are classified as "simple", "semi-rigid" and "rigid", respectively, where L_b is the beam length and EI_b is the flexural rigidity of the beam. Table 2 lists the rigidity of the different connections used in this study. The results indicate that connections with cross-sectional classes 1 and 2 are always classified as "rigid", while those with cross-sectional classes 3 and 4 should be treated as "semi-rigid".

Failure mode

The failure modes of the bolted-moment CFS connections obtained from the detailed FE models are identified in Table 2. It is shown that, in general, the dominant mode is local buckling of the CFS beam section close to the first row of the bolts. This can be attributed to the effect of the bolt group on the stress distribution at the connection zone. The results indicate that for flat and stiffened flat channels, local buckling occurs at both the web and flange of the CFS section.
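A minimal sketch of the stiffness classification reported in Table 2, using our reading of the Eurocode-style boundaries quoted above (the normalisation of the initial stiffness by EI_b/L_b is an assumption):

```python
def classify_rigidity(S_ini, E, I_b, L_b):
    """Classify a connection by its initial rotational stiffness S_ini
    normalised by the beam flexural stiffness EI_b / L_b."""
    k = S_ini * L_b / (E * I_b)
    if k < 0.5:
        return "simple"
    return "semi-rigid" if k <= 25.0 else "rigid"

# e.g. a connection five times stiffer than EI_b/L_b is semi-rigid
print(classify_rigidity(S_ini=5.0, E=1.0, I_b=1.0, L_b=1.0))
```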
Using curved and folded flange channels, in contrast, can postpone the local buckling of the flange by creating in-plane stiffness through arching action, shifting the local buckling failure to the web. Fig. 17 shows the typical failure modes of the beams with flat and bent flange channel sections.

FEMA bi-linear idealisation model

To characterize the cyclic behaviour of the selected bolted-moment connections, the FEMA model [55] is developed based on the cyclic moment-rotation envelope, which is capable of taking into account both positive and negative post-yield slopes. As shown in Fig. 18, the FEMA model uses an ideal bi-linear elastic-plastic response to represent the non-linear behaviour of an assembly by incorporating an energy balance approach. The area below the cyclic moment-rotation envelope curve is assumed to be equivalent to the area under the idealised bi-linear FEMA curve. The yield rotation (θ_y1) is determined on the condition that the secant slope intersects the actual envelope curve at 60% of the nominal yield moment (M_y1), while the area enclosed by the bi-linear curve is equal to that enclosed by the original curve bounded by the target displacement (θ_t). The target rotation was assumed to correspond to the rotation at which the flexural capacity of the system dropped by 20% (i.e. θ_t = θ_u), as also recommended by AISC [48]. The characteristic parameter values of the FEMA models corresponding to the different bolted-moment CFS connections are presented in Table 2. For better comparison between the behaviour of different types of connections, Table 2 also presents the yield moment results calculated based on the FEMA idealised bi-linear curve up to the rotation at the maximum moment capacity (M_y2).

Moment capacity of the connections

Fig. 19 compares the moment capacity of the CFS connections with different cross-sections, plate thicknesses and bolt arrangements. While previous studies showed that CFS channels with folded-flange and curved-flange sections can generally provide considerably higher flexural moment capacity compared to standard lipped channels [12,23], the results in Fig. 19 indicate that using bent flange channels (folded and curved) can only increase the moment capacity of the connections by up to 10%. There are two main reasons for this contradiction: (i) The bolts placed in the web result in a reduction of the moment capacity of the channel-sections at the connection zone due to bi-moment effects [10]. The bi-moment is equal to the product of the major axis moment and the eccentricity of the web centreline from the shear centre, and puts each flange into bending about its own (horizontal) plane. (ii) Using channels with a deep web reduces the effects of the flange on the moment capacity of the connections, since bending moments cannot be transferred directly from web to flanges. As shown in Fig. 19, the slenderness of the CFS beam elements and the arrangement of the bolts seem to be the most important design parameters affecting the capacity of the connections. In general, using a square bolt arrangement leads to a higher bending moment capacity, especially in the case of CFS beam elements with low slenderness ratios (class 1 and 2), where an increase of up to 32% is observed compared to the other bolt arrangements.

Seismic resistance requirements

The American Institute of Steel Construction (AISC) [48] imposes performance requirements for beam-to-column connections in seismic force resisting systems.
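Before applying those requirements, the bilinear idealisation described above can be evaluated numerically; the following is a crude fixed-point sketch (envelope arrays assumed monotonic in rotation up to the peak), not the exact FEMA algorithm:

```python
import numpy as np

def fema_bilinear(theta, M, theta_t, iters=100):
    """Equal-area bilinear idealisation: the elastic branch is a secant
    through the point where the envelope reaches 60% of M_y1, and M_y1 is
    scaled until the bilinear curve and the envelope enclose the same
    area up to the target rotation theta_t."""
    keep = theta <= theta_t
    th, m = theta[keep], M[keep]
    target_area = np.trapz(m, th)
    M_t = m[-1]
    M_y1 = 0.8 * m.max()                       # initial guess
    for _ in range(iters):
        asc = int(np.argmax(m))                # ascending branch only
        theta_60 = np.interp(0.6 * M_y1, m[:asc + 1], th[:asc + 1])
        theta_y1 = theta_60 / 0.6              # yield rotation on the secant
        # elastic triangle + post-yield trapezoid up to theta_t
        area = 0.5 * M_y1 * theta_y1 + 0.5 * (M_y1 + M_t) * (th[-1] - theta_y1)
        M_y1 *= np.sqrt(target_area / area)    # nudge toward area balance
    return theta_y1, M_y1
```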
For special moment frames (SMFs), AISC stipulates that bolted-moment connections should be able to undergo at least a 0.04 rad rotation with less than a 20% drop from their peak moment (M_u). It is shown in Table 2 that the bolted-moment connections with class 1 and 2 beam sections (plate thicknesses of 6 and 4 mm) can satisfy the AISC requirements for SMFs. Connections with class 3 beam sections (plate thickness of 2 mm) are only acceptable when intermediate stiffeners are used in the flanges (stiffened flat). The results show that the connections with class 4 beam cross-sections (plate thickness of 1 mm) are not suitable for SMFs in seismic regions.

Ductility ratio of the connections

Moment resisting connections in seismic resisting systems should provide enough ductility to withstand and redistribute the seismic loads. The fundamental definition of the ductility ratio (µ) is the ratio of the ultimate rotation (θ_u) to the yield rotation (θ_y):

µ = θ_u/θ_y

The ductility ratio of the CFS connections in this study is calculated based on the results of the FEMA bi-linear idealisation models, using the rotation at 80% of the peak moment in the post-ultimate branch as the ultimate rotation (see Fig. 18). The ductility ratios of the connections with different beam cross-sections, plate thicknesses and bolt arrangements are compared in Fig. 20. It can be observed that the connection ductility ratio is highly affected by the beam cross-sectional shape and slenderness ratio as well as the bolt distribution. In general, the ductility of the connections increases with decreasing slenderness ratio of the CFS beam. This increase is particularly high for beams with lower slenderness ratios (higher classes). The results show that in most cases the connections with class 1 and 2 beam sections provide a good level of ductility suitable for seismic applications. This conclusion is in agreement with the AISC requirement checks presented in Section 5.5. As shown in Fig. 20, for the same beam slenderness ratio and bolt arrangement, folded flange sections generally result in the highest ductility ratios, up to 55%, 45% and 30% higher than curved, flat and stiffened flat sections, respectively. The best bolt arrangement appears to be governed by the beam slenderness ratio (or classification) and cross-sectional shape. For folded and curved flange class 1 and 2 channels, the diamond bolt arrangement provides the highest ductility ratios, while for class 3 and 4 channels the circle bolt arrangement leads to the best results. For CFS beams with flat and stiffened flat flanges, the circle bolt arrangement results in the highest ductility ratio for all cross-section classifications. It is worth mentioning that, in general, using the circle and diamond bolt arrangements can significantly increase the ductility of the connections (by up to 100%) compared to the conventional square bolt arrangement, especially for sections with lower slenderness ratios.

Energy dissipation

The seismic performance of structures can be improved considerably by increasing the energy dissipation in the structural elements and connections. For each connection, the area under the FEMA bi-linear idealisation curve of the moment-rotation hysteretic response (see Section 5.3) is used to calculate the energy dissipation capacity of the connection.

Damping coefficient

The equivalent viscous damping coefficient, h_e, is another indicator of the energy dissipation capability for seismic applications [56,57].
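The ductility ratio and the AISC rotation check above reduce to a few lines (rotation arrays assumed increasing; function names are ours):

```python
import numpy as np

def ductility_ratio(theta_u, theta_y):
    """mu = theta_u / theta_y, with theta_u taken at the 20% drop from
    the peak moment on the softening branch."""
    return theta_u / theta_y

def passes_aisc_smf(theta, M):
    """SMF acceptance quoted in the text: at 0.04 rad the moment must
    not have dropped more than 20% below the peak moment M_u."""
    return np.interp(0.04, theta, M) >= 0.8 * np.max(M)
```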
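The coefficient h_e, formalised in the next paragraph, compares the energy dissipated in one hysteresis loop with the equivalent elastic strain energy; a numerical sketch for a closed loop sampled as (theta, M) points:

```python
import numpy as np

def loop_area(theta, M):
    """Area enclosed by a closed hysteresis loop (shoelace formula)."""
    return 0.5 * abs(np.dot(theta, np.roll(M, -1)) - np.dot(M, np.roll(theta, -1)))

def equivalent_viscous_damping(theta, M):
    """h_e = dissipated energy / (2*pi * elastic strain energy), where the
    elastic energy is the sum of the two triangles spanned by the peak
    positive and negative moment points of the loop."""
    i_pos, i_neg = int(np.argmax(M)), int(np.argmin(M))
    elastic = 0.5 * abs(theta[i_pos] * M[i_pos]) + 0.5 * abs(theta[i_neg] * M[i_neg])
    return loop_area(theta, M) / (2.0 * np.pi * elastic)
```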
In this study, the equivalent viscous damping coefficients are calculated for two different loops, based on the peak moment and on the ultimate moment (20% drop from the peak moment in the softening stage) of the connections, as shown in Fig. 23 and Fig. 24, respectively. The results in general indicate that connections with class 1 and 2 beam elements can provide higher equivalent viscous damping coefficients compared to the other classes and, therefore, are suitable for seismic applications. It is shown that reducing the CFS beam cross-sectional slenderness is always accompanied by an increase in the equivalent viscous damping coefficient, while the effect of the cross-sectional shape on the damping coefficient is negligible. It can also be noted that the damping coefficients corresponding to the peak moment (he) are always smaller (by up to 15%) than those corresponding to the ultimate moment. Fig. 23 shows that, up to the peak moment, diamond and circle bolt configurations are capable of dissipating more energy in the connections with class 1 and 2 beam sections. However, the square bolt arrangement provides better results for class 3 and 4 sections, which implies that the majority of the seismic energy is damped by the connection in the elastic stage (before buckling). The results in Fig. 24 indicate that the equivalent viscous damping coefficient at the ultimate moment is not significantly affected by changing the bolt configuration.

Effect of gusset plate thickness
In general, previous studies indicated that the design of the gusset plate in stiffened end-plate bolted joints can influence the transfer mechanism of the forces and affect the cyclic rotational behaviour of the connections [38,39]. Therefore, in this section the effect of the gusset plate thickness on the cyclic behaviour of the bolted-moment connection is investigated. Fig. 25 compares the moment-rotation curves of the connections with class 1-4 flat channel beam sections (see Table 1) using the square bolt configuration and various gusset plate thicknesses. In general, using gusset plates with the same or lower thickness as the CFS beam (1, 2, 4 and 6 mm thickness for class 4, 3, 2 and 1 sections, respectively) shifts the local buckling from the CFS beam to the gusset plate. As shown in Fig. 25, this premature failure mode can significantly reduce the moment capacity of the connections and lead to higher post-buckling strength degradation. This undesirable failure mode can be prevented by increasing the gusset plate thickness slightly above the thickness of the CFS beam.

Summary and conclusions
Experimentally validated FE models were developed by taking into account material nonlinearity and geometrical imperfections. A comprehensive parametric study was conducted on CFS bolted beam-to-column connections with gusset plates under cyclic loading to compare the efficiency of various design solutions for earthquake resistant frames. The following conclusions can be drawn:
while connections with class 4 beam cross-sections are not considered to be suitable for SMFs.
(4) The ductility of the connections is also governed by bolt arrangement and beam cross-sectional shape and slenderness ratio. Ductility is increased radically by decreasing the slenderness ratio of the CFS beam sections. For the same beam slenderness ratio and bolt arrangement, folded flange sections result in up to 55%, 45% and 30% higher ductility levels compared to the curved, flat and stiffened flat sections, respectively. While the best bolt arrangement is related to the beam slenderness ratio and cross-sectional shape, in general, using the circle and diamond bolt arrangements can increase the ductility of the connections (up to 100%) compared to the conventional square bolt arrangement. (5) The effect of cross-sectional shape and bolt configuration on the energy dissipation capacity of the connections is only evident when class 1 beam sections are utilized. For these connections, using folded flange beam cross sections results in up to 250%, 200% and 150% higher energy dissipation capacity compared to flat, curved and stiffened flat sections, respectively. Using diamond and circle bolt arrangements could also considerably increase (up to 250%) the energy dissipation capacity of the connections compared to the conventional square bolt arrangement. It was shown that reducing the CFS beam cross-sectional slenderness is always accompanied by increasing the equivalent viscous damping coefficient, while the effect of cross-sectional shape on the equivalent damping coefficient is negligible. (6) Using gusset plates with the same or lower thickness as the CFS beam may lead to a premature failure mode in the gusset plate and considerably reduce the moment capacity of the connection. However, this failure mode can be efficiently prevented by increasing the gusset plate thickness slightly above the thickness of the CFS beam.
2019-04-23T13:21:46.903Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "31224c1641611d306a0710ea45bd9366b6dedad2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.tws.2018.12.015", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "f34702178433e6d3ef51e9ec17a59c592b3e8a03", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
55918814
pes2o/s2orc
v3-fos-license
Forage and nutritive value of the pruning residues (leaves plus summer lateral shoots) of four grapevine (Vitis vinifera L.) cultivars at grape harvest and two post-harvest dates The annual pruning of vineyards produces shoot and leaf residues that have traditionally been fed to sheep and goats. The aim of this work was to determine the forage and nutritive values of grapevine (Vitis vinifera L.) leaves plus summer lateral shoots at grape harvest and two post-harvest dates. The study cultivars were Cabernet Sauvignon, Merlot, Sauvignon Blanc and Sémillon, all grafted onto 5BB rootstocks. The leaves and summer lateral shoots were removed at the same time from each cultivar at three dates: grape harvest, 15 days post-harvest, and 30 days post-harvest. No significant differences were seen between the cultivars in terms of their mean crude protein (CP) (45.44-46.33 g kg-1), crude fibre (CF) (37.12-37.50%), neutral detergent fibre (NDF) (324.63-324.87 g kg-1), acid detergent fibre (ADF) (247.44-249.44 g kg-1), potassium (2.11-2.14 g kg-1), calcium (3.85-3.95 g kg-1) or iron (0.037-0.038 g kg-1) contents at any of the three sampling dates. The highest fresh matter (1,765.33 kg ha-1) and dry matter (DM) yields (610.67 kg ha-1) were obtained from Sauvignon Blanc. The fresh matter yield, DM yield, CP, CF, NDF and ADF contents on the different sampling dates all differed significantly. The maximum fresh matter yield (1,925.33 kg ha-1), DM yield (634.67 kg ha-1) and CP content (61.67 g kg-1) were recorded at grape harvest. The potassium, calcium and iron contents ranged from 2.11-2.15, 3.86-3.92 and 0.036-0.038 g kg-1 respectively at all stages. The leaves plus summer lateral shoots of Cabernet Sauvignon, Merlot, Sauvignon Blanc and Sémillon grapevine cultivars can be beneficially fed to sheep, goats and cattle in some viticultural regions of Turkey and other parts of the world. Additional key words: acid detergent fibre, mineral content, neutral detergent fibre, yield.

Proteins are vital for proper nourishment. The 70-80 g of protein required daily to maintain good human health should be in the form of both plant (45%) and animal (55%) proteins (Ates and Tekeli, 2001). However, in less-developed and developing countries, the demand for animal protein is often lower than that for plant protein, perhaps because of preference, but partly because people with modest incomes cannot afford to buy animal products. Certainly in Turkey the production of animal products is insufficient, which increases their cost: the country's animal population therefore needs to be increased. The forage used to feed animals in Turkey and other less-developed countries is provided by grazing land, forage crops, and the supernumerary materials of other cultivated plants (Tekeli and Ates, 2006a). Grapevines (Vitis vinifera L.)
are grown all over the world's moderate climate belts (isotherms around 10-20°C), i.e., between 30° and 50° North (including the Mediterranean basin) and South (Celik et al., 1998). Apart from grapes, these plants produce considerable quantities of by-products. For example, the annual pruning of vineyards produces grapevine shoot and leaf residues that have traditionally been fed to sheep and goats after the grape harvest (Romero et al., 2000). These residues are thought to be a source of protein and mineral ions. Mineral elements make up approximately 1.5-5% of animal bodies (Tekeli et al., 2003), and their different functions (Ensminger et al., 1990) require that adequate concentrations of all the necessary types be maintained if health is to be preserved. A lack of one element cannot be balanced by the surfeit of another (Ates and Tekeli, 2005).

The aim of this work was to determine the forage and nutritive values of grapevine leaves plus summer lateral shoots at grape harvest and at two post-harvest dates for four important cultivars grown in Turkey.

All work was conducted in the vineyards of the village of Yagzir, near Tekirdag in western Turkey, over the seasons of 2004 and 2005. These vineyards lie at 40°59'N, 27°33'E at an altitude of 100 m; the mean total precipitation they receive is 482 mm, and the annual mean temperature 10.5°C. The soil of these vineyards is a Xerept, low in organic matter (0.78%), moderate in its P (60.11 kg ha-1) and K (210.12 kg ha-1) contents, and with a pH of 6.8. The Tekirdag area is home to important viticultural (6,000-7,000 ha, 75,000-80,000 Mg yr-1) and stock-raising (210,000-220,000 sheep and goats, 125,000-135,000 cattle) activity. In this region, pruning residues may be fed to animals at different stages after the grape harvest.

The experimental plants, all nine years old and grafted onto 5BB rootstocks, represented four wine grape cultivars: Cabernet Sauvignon, Merlot, Sauvignon Blanc and Sémillon. Spacing in the vineyards was 2.5 × 1.5 m; all plants were Guyot-trained. Downy mildew - Plasmopara viticola (B. et C.) Berlese et de Toni - and powdery mildew - Uncinula necator (Schw.) Burr. - were the most common diseases observed affecting these plants. To control downy mildew, Mancozeb (72%) (200 g per 100 L of water) was applied at the pre-bloom stage when the shoots reached 20-25 cm height, followed by a second spraying after the heavy rain of May and June in both years. Further sprayings were performed according to climatic conditions (rainfall and temperature), disease load and intensity. To control powdery mildew, cyproconazole + sulphur (mixture 0.8% + 80%; 100 g per 100 L of water) was initially applied at the pre-bloom stage when the shoots were 20-25 cm in height. Later, as the disease spread, spraying was performed at two-week intervals from berry set to véraison.

Fertilizers, including N, P and K, were applied to the vineyard soils: N, as (NH4)2SO4 (containing 21% N; total 400 kg ha-1), was applied during shoot growth (200 kg ha-1) and bloom (200 kg ha-1) in both study years; total P, as triple superphosphate (42%) (360 kg ha-1), was applied in the autumn of the first year; K, as K2SO4 (50%; 300 kg ha-1), was applied in the autumn of both study years. None of the vineyards was irrigated.
The study plots were 7.5 × 7.5 m in size and arranged in a randomised block design with three replicates (Turan, 1995). Three grapevine plants were selected for examination in each. The leaves plus summer lateral shoots (which made up 30% of all shoots on both grapevine arms) were removed by hand from plants of each cultivar on three sampling dates: at grape harvest (1st year, October 2; 2nd year, October 5), at 15 days post-harvest (1st year, October 17; 2nd year, October 20), and at 30 days post-harvest (1st year, November 1; 2nd year, November 4).

The fresh matter yield (kg ha-1) for each cultivar and time was determined and recorded. The dry matter (DM) yield (kg ha-1) was calculated by drying approximately 500 g samples at 55°C for 24 h followed by storage for a further day at room temperature (Tekeli and Ates, 2006b). The crude protein (CP) and crude fibre (CF) contents were determined by the micro-Kjeldahl and Weende methods respectively. The mineral contents were analysed after dry-ashing at 550°C in a muffle furnace and dissolving in deionised water to standard volumes. The K ratio was determined by flame photometry (AOAC, 1999). The Ca and Fe ratios were determined by atomic absorption spectrophotometry. The neutral detergent fibre (NDF) and acid detergent fibre (ADF) contents were determined following Romero et al. (2000).

The results were analysed using the least significant difference (LSD) test (P = 0.01); calculations were made using TARIST statistical software (Acikgoz et al., 1994).
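For readers who wish to reproduce the comparison, the LSD threshold used here can be written out explicitly. The following Python sketch is illustrative only; the mean square error, error degrees of freedom and replicate number in the example call are hypothetical placeholders, not values from the ANOVA of this study.

import numpy as np
from scipy import stats

def lsd_threshold(mse, df_error, n_reps, alpha=0.01):
    # Least significant difference for two treatment means with equal
    # replication: t(alpha/2, df_error) * sqrt(2 * MSE / n)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df_error)
    return t_crit * np.sqrt(2.0 * mse / n_reps)

# Hypothetical example: MSE from the ANOVA error line, 3 replicates per mean
# print(lsd_threshold(mse=120.5, df_error=6, n_reps=3))

Two means on different sampling dates are then declared different at P = 0.01 when their absolute difference exceeds the returned threshold.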
The sampling stage significantly affected the fresh matter yield, DM yield, CP, CF, NDF and ADF contents of all cultivars (P < 0.01). The maximum fresh matter yield (1,925.33 kg ha-1), DM yield (634.67 kg ha-1) and CP content (61.67 g kg-1) were found at the grape harvest stage in all cultivars. The CF (33.67%), NDF (320.70 g kg-1) and ADF (246.17 g kg-1) contents were lower at grape harvest than at any other date (P < 0.01) in all cultivars. The CP contents recorded agreed with those reported by Rebolé et al. (1988) and Rebolé (1994) for grapevine branches and leaves. Romero et al. (2000), who investigated the digestibility and voluntary intake of vine leaves by sheep, found a higher CP content for the leaves (68 g kg-1), while the leaf NDF and ADF contents were 319 g kg-1 and 254 g kg-1 respectively. Madibela et al. (2000) reported a CP content of 135 g kg-1 and an ADF content of 214 g kg-1 for the leaves of Tapinanthus lugardii (N.E.Br.) Danser. The present CP contents were lower than those of the parasitic plants T. lugardii, Erianthenum ngamicum, Viscum rotundifolium and V. verrucosum reported by Madibela et al. (2000). Ates and Tekeli (2005) reported the CP and CF ratios to range from 16.30 to 22.57% and 19.60-24.23% respectively in orchardgrass (Dactylis glomerata L.) and white clover (Trifolium repens L.). The present values were lower than those reported by Ates and Tekeli (2005). Tekeli and Ates (2006a,b) reported 16.16-21.20% CF in Persian clover (Trifolium resupinatum L.), higher than in the present findings. After plant cell growth ceases, the cell walls thicken and the secondary wall is formed. Unlike the primary walls, the secondary walls do not contain protein and may vary significantly among cell types in terms of their composition and structure. However, secondary walls are generally composed of a network of cellulose fibrils embedded in an amorphous matrix of hemicelluloses, pectin and lignin. Generally, young plant cell walls are richer in pectin and lower in fibre than older plant cell walls (Ates and Tekeli, 2005; Tanner and Morrison, 1983).

Table 1 shows the effect of sampling date on the mineral content to be non-significant (P > 0.01). The content of K, Ca and Fe in the leaves plus summer lateral shoots ranged from 2.11 to 2.15 g kg-1, 3.86 to 3.92 g kg-1 and 0.036 to 0.038 g kg-1 respectively in all the cultivars examined and at all stages. The NRC (2001) reports the major mineral nutrient requirements of gestating or lactating beef cows to be 0.6-0.8% (w/w) for K and 0.18-0.44% for Ca. Baysal et al. (1991) reported that the Ca, K and Fe contents in vine leaves were 3.92 g kg-1, 2.13 g kg-1 and 0.39 g kg-1 respectively. The present mineral element contents were similar to those reported by Baysal et al. (1991). Khanal and Subba (2001) determined contents of 2.2-57.2 g kg-1 for Ca, 9.0-28.9 g kg-1 for K, and 0.095-0.66 g kg-1 for Fe in the leaves of a number of major fodder trees. Gizachew and Smit (2005), who investigated the CP and mineral composition of major crop residues and supplemental feeds, reported contents of 16.0 g kg-1 for Ca, 19.9 g kg-1 for K and 0.15 g kg-1 for Fe in grass pea (Lathyrus sativus L.) haulms (figures higher than those recorded in the present work).

The high CF content of the grapevine leaves plus summer lateral shoots reported here might be due to the high NDF levels and low protein contents recorded. The highest fresh matter and DM yields were obtained from the Sauvignon Blanc cultivar at the grape harvest stage. The leaves plus summer lateral shoots of Cabernet Sauvignon, Merlot, Sauvignon Blanc and Sémillon grapevine cultivars can be beneficially fed to sheep, goats and cattle in some viticultural regions of Turkey and other parts of the world.

Table 1. Forage (fresh matter yield, dry matter yield, crude protein, crude fibre, neutral detergent fibre, acid detergent fibre) and nutritive value (K, Ca and Fe contents) of the leaves plus summer lateral shoots of four grape cultivars at different sampling dates. TARIST software was used for the LSD comparison of the means of the two experimental years.
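A quick unit check makes the NRC (2001) comparison above concrete: the NRC ranges are quoted in % (w/w), and 1% (w/w) corresponds to 10 g kg-1. The short Python sketch below only restates figures already given in the text.

# NRC requirement ranges (% w/w) converted to g kg-1 (1% w/w = 10 g kg-1)
nrc = {"K": (0.6, 0.8), "Ca": (0.18, 0.44)}          # gestating/lactating beef cows
measured = {"K": (2.11, 2.15), "Ca": (3.86, 3.92)}   # g kg-1, this study

for element, (lo, hi) in nrc.items():
    print(element, "requirement:", 10 * lo, "-", 10 * hi, "g kg-1;",
          "measured:", measured[element], "g kg-1")

# K: requirement 6.0-8.0 g kg-1 vs. measured about 2.1 g kg-1 (below the range);
# Ca: requirement 1.8-4.4 g kg-1 vs. measured about 3.9 g kg-1 (within the range).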
2018-12-05T15:41:13.140Z
2007-12-01T00:00:00.000
{ "year": 2007, "sha1": "1a6ef24088b2bd8653d17e6fc4419a2b391a6a5e", "oa_license": "CCBY", "oa_url": "https://revistas.inia.es/index.php/sjar/article/download/287/284", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "1a6ef24088b2bd8653d17e6fc4419a2b391a6a5e", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
3273791
pes2o/s2orc
v3-fos-license
Effects of TLR3 and TLR9 Signaling Pathway on Brain Protection in Rats Undergoing Sevoflurane Pretreatment during Cardiopulmonary Bypass Objective. To investigate the effects of the TLR3 and TLR9 signaling pathways on brain injury during CPB in rats pretreated with sevoflurane, and the possible molecular mechanism. Methods. SD rats were randomly assigned to a sham group, a CPB group, and a Sev group. Brain tissue was obtained before CPB (T0), at CPB for 30 minutes (T1), 1 hour after CPB (T3), and 3 hours after CPB (T5). ELISA was used to measure S100-β and IL-6. Western blot was utilized to determine TLR3 and TLR9 expression. TUNEL was applied to detect neuronal apoptosis. Results. Compared with the CPB group, S100-β and IL-6 decreased in the Sev group at T1, at termination after 1 hour of CPB (T2), at T3, at 2 hours after CPB (T4) and at T5. Compared with the CPB group, IFN-β was increased in the Sev group at all time points except T0. Compared with the CPB group, TLR3 expression increased, and TLR9 and NF-κB decreased, in the Sev group. The apoptotic neurons were fewer in the Sev group than in the CPB group (P < 0.05). Conclusion. Sevoflurane intervention can activate the TLR3 and TLR9 signaling pathways, upregulate TLR3 expression and downstream TRIF expression, decrease TLR9 expression, and downregulate downstream NF-κB expression in CPB rat models, thereby mitigating brain injury induced by the inflammatory response during CPB.

Introduction
The emergence, application, and popularization of cardiopulmonary bypass (CPB) have transformed cardiovascular surgery and significantly improved the survival rate of patients [1]. The disadvantage associated with CPB is its damage to other systems in the body. Researchers have developed various measures to reduce CPB damage to the body, significantly improving patient survival and reducing the incidence of other systemic complications, but the incidence of complications of the nervous system has remained relatively constant; for example, the incidences of stroke and encephalopathy are approximately 2%-5% and 10%-30%, respectively [2]. After CPB, brain injury prolongs the length of hospital stay, increases the risk of complications, consumes medical resources, and hinders the development of cardiovascular surgery. The mechanism of brain injury after CPB is very complex; the main causes are cerebral emboli (gas, liquid, or solid), cerebral ischemic injury (such as vascular embolism, hypoperfusion, and hypoxia), and inflammatory response [3][4][5]. During CPB, surgical trauma stimulation, blood contact with foreign bodies, endotoxin, and low temperature can activate noninfectious systemic inflammatory response syndrome [6,7]. Thus, a large number of inflammatory cytokines enter the brain and produce brain damage. Current studies of brain injury after cardiac surgery have shown that a series of inflammatory factors activated and released during the inflammatory response are strongly associated with brain damage after CPB. The focus of research on reducing brain injury after CPB has shifted from trying to manage extracorporeal circulation to using a preventive strategy. These preventive strategies include the use of drug pretreatment and appropriate measures such as avoiding aortic manipulation, exhausting the gas in the heart cavity, and preventing air from entering the pump [8]. Pretreatment with narcotic drugs is presented on the basis of simulated ischemic preconditioning and has the same effect as ischemic preconditioning.
Sevoflurane has some advantages, including rapid anesthetic induction, rapid recovery of consciousness, and a low blood-gas partition coefficient, and has become a commonly used inhalation anesthetic [9]. A previous study confirmed that sevoflurane pretreatment could provide extensive neuroprotection, enhance the tolerance of brain tissue to ischemic injury, and improve nerve repair after injury; thus, the neuroprotective effect of sevoflurane pretreatment has attracted much attention [10]. Present studies have shown that the protective effect of sevoflurane pretreatment on the brain may be strongly associated with the inhibition of the systemic inflammatory response [11]. Bedirli et al. [12] considered that sevoflurane could ameliorate inflammation, brain lipid peroxidation, and histological damage by downregulating TNF-α and IL-1β. Ye et al. [13] pointed out that the expression of nuclear factor-kappa B (NF-κB) increased when cerebral ischemic injury occurred. Correspondingly, one of the possible mechanisms of sevoflurane pretreatment in brain protection is to inhibit the expression of NF-κB protein. The existing studies suggest that sevoflurane is associated with inflammatory responses, but its specific molecular mechanisms are unclear. Toll-like receptors (TLRs) recognize pathogens by pathogen-associated molecular patterns (PAMPs) in the early stage of pathogen invasion and then participate in the inflammatory response. So far, 10 human TLRs (TLR1-10) and 12 rat TLRs (TLR1-9, TLR11-13) have been identified [14]. Recent studies have found that the signal transduction pathways mediated by TLR3 and TLR9, which are important members of the TLR family, may have an inseparable relationship with ischemic injury [15,16]. He et al. [17] observed periventricular axonal injury, ependymal rupture, and activation of glial cells around the hippocampus after intracerebroventricular injection of the TLR9-specific ligand, nonmethylated CpG oligodeoxynucleotides, in rats, indicating that the TLR9 signaling pathway may induce an inflammatory response in the nervous system and further cause brain damage [18]. TLR3 is a unique member of the TLR family. First, TLR3 is highly expressed in astrocytes; second, there are two downstream immune pathways in the TLR family [10]. The vast majority of TLRs (TLR2, 4, 8, and 9) signal via the MyD88 pathway [17,19,20]; only TLR3 depends on the TRIF pathway [21]. Pan et al. [22] verified that TLR3 pre-excitation could increase the tolerance of brain tissue to ischemic injury, reduce the inflammatory response, promote the production of anti-inflammatory cytokines and neuroprotective mediators, and attenuate ischemic brain injury. Taking as a starting point the question of whether sevoflurane suppresses inflammatory lesions during CPB by activating the TLR3 and TLR9 signaling pathways and thereby protects the brain against injury, the present study observed the effects of 2.4% sevoflurane pretreatment on brain injury and on the TLR3 and TLR9 signaling pathways during CPB in rat models. We also analyzed whether sevoflurane pretreatment exerted a protective effect on the brain by activating the TLR3 and TLR9 signaling pathways, and investigated the possible molecular mechanisms, so as to establish a basis for investigating the exact mechanism of the TLR3 and TLR9 signaling pathways in the neuroprotection of sevoflurane treatment in rats undergoing CPB.

Preparation of a Rat Model of CPB Primed without Blood
The rat model of CPB was established as previously reported [23].
Rats were intraperitoneally anesthetized with 10% chloral hydrate, 300 mg/kg. Direct orotracheal intubation was conducted using a 16 G venous catheter. Mechanical ventilation was carried out with a small-animal ventilator at a frequency of 60 breaths/min, an oxygen flow of 1 L/min, a tidal volume of 3 ml/kg, and an inspiratory-to-expiratory ratio of 1:1.5. Rat heart rate, blood oxygen saturation, and rectal temperature were monitored using a monitor. The hair was shaved at the site of puncture. After disinfecting and cutting, blood vessels were isolated and exposed. A 23 G venous catheter on the left side was connected to the microinfusion pump. A 24 G femoral artery catheter on the left side was connected to the monitor for real-time detection of blood pressure. A self-made multi-orifice drainage needle (16 G) was inserted in the right internal jugular vein and advanced to the right atrium for blood drainage during CPB. A right femoral artery puncture catheter was fixed for perfusion during CPB. The sites of puncture were connected with a drainage tube, a self-made blood reservoir, a constant-flow peristaltic pump, a silica gel pipeline, and a rat membrane oxygenator to establish the CPB circuit. The left femoral vein was chosen as the site of systemic heparinization and injected with heparin sodium injection, 300 IU/kg. When the ACT reached 400-500 s, it was considered up to standard. After the circuit was primed without blood, rat models of CPB on a beating heart were established. The priming solution consisted of 6% hydroxyethyl starch 12 ml, 5% sodium bicarbonate 2 ml, 20% mannitol 1 ml, and heparin sodium 150 IU/kg. The membrane oxygenator was used immediately after CPB started. The bypass speed was 35 ml/(kg·min) (low flow) at the beginning and was gradually increased to 100-120 ml/(kg·min) (full flow). In order to prevent the formation of air embolism, 1-2 ml of blood was kept in the blood reservoir. MAP was kept higher than 60 mmHg; pH was at the 7.35-7.45 level; PaCO2 was between 35 and 45 mmHg; Hct was greater than 0.25. Vasoactive drugs and fluid supplementation were given during the operation to maintain circulatory stability in the rats. At 1 hour after CPB, extracorporeal circulation was gradually terminated, and mechanical ventilation was carried out. At 2 hours after termination of the circulation, stable vital signs of the rats indicated successful model establishment.

Treatments in Each Group
In the sham group, only tracheal intubation, mechanical ventilation, and right femoral artery and right internal jugular vein puncture catheterization were performed, and bypass was not conducted. In the CPB group, models of CPB were established. In the Sev group, after pretreatment with 2.4% sevoflurane for 1 hour, CPB models were established. In the CPB and Sev groups, CPB was performed for 1 hour. Sevoflurane pretreatment was as follows: soda lime was spread under the gauze at the bottom of the pretreatment box. Two holes were provided at the two ends of the box: one end connected to the anesthesia ventilator, and one end connected to the gas collection port of the monitor for real-time monitoring of oxygen and sevoflurane concentrations. The temperature in the box was maintained at 35-37°C. After the rats were placed in the box, the oxygen and sevoflurane flow meters were opened to adjust their concentrations. When sevoflurane was kept at the needed concentration, the pretreatment time was counted for 1 hour.

Brain Tissue
After the heart was exposed, the circulatory system was perfused and washed with 250-400 ml of physiological saline.
The skull was opened and the brain was obtained on an ice plate. The brain was cut into two pieces through the median sagittal line. The hippocampi were isolated and stored at -80°C (left side) and in 10% neutral formalin (right side).

Venous Blood Serum and Arterial Blood Gas
Venous blood was centrifuged at 3000 rpm for 10 minutes, and serum was collected and stored at -80°C. Blood was collected from the left femoral artery and subjected to blood gas analysis, 1.3 ml per sampling. After sampling, an equal volume of 6% hydroxyethyl starch was intravenously injected.

Cell Apoptosis as Measured by Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Assay In Situ
Paraffin sections were dewaxed and hydrated, treated with proteinase K for 15 minutes, washed four times with water at room temperature, treated with an H2O2-PBS mixture for 15 minutes, and washed three times with PBS, each for 5 minutes. These sections were then treated with TUNEL for 10 minutes and stop buffer for 30 minutes, washed with PBS, incubated with horseradish peroxidase for 30 minutes, and washed with PBS. Afterwards, sections were incubated with a DAB-H2O2 mixture for 3 minutes, washed with PBS, counterstained with methyl green for 10 minutes, and washed three times with distilled water and three times with N-butanol, each for 1 minute. Subsequently, the sections were dehydrated with xylene, dried, and mounted. Nuclei of normal neurons were stained blue using hematoxylin and were considered negative cells. Nuclei of injured neurons were stained brown, with the presence of DNA breakage, and were considered positive cells. Six sections were selected from each group and five fields of each section were examined at 400x magnification. Mean integrated optical density values of injured neurons were analyzed using the Metamorph Dplo BX41 image analysis system.

Serum S100-β, IFN-β, IL-6, TLR3, and TRIF Concentrations in Rats as Detected by Enzyme-Linked Immunosorbent Assay (ELISA)
The sample (50 μl) was added to each well after the standard substance was diluted. The sample (50 μl), diluted five times, was added and incubated at 37°C for 30 minutes. The washing liquid was added to the waste liquid hole for 30 seconds, repeated five times. ELISA reagents (50 μl) were added to each well, and developers A and B (each 50 μl) were added and incubated at 37°C for 15 minutes. Stop buffer (50 μl) was added for 15 minutes. Absorbance values were measured at 450 nm, and the results were analyzed.
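The back-calculation from absorbance to concentration implied by this protocol can be sketched as follows. The standard concentrations and absorbances below are hypothetical placeholders (the kit-specific curve was not reported), simple linear interpolation is used where a four-parameter logistic fit would often be preferred, and the five-fold dilution factor is the one stated above.

import numpy as np

# Hypothetical standard curve for one plate: standard concentrations (pg/ml)
# and their measured 450 nm absorbances, in increasing order
std_conc = np.array([0.0, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.05, 0.12, 0.21, 0.38, 0.70, 1.25, 2.10])

def od_to_conc(od, dilution=5.0):
    # Interpolate the sample absorbance on the standard curve, then
    # multiply by the dilution factor applied to the sample
    return float(np.interp(od, std_od, std_conc)) * dilution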
TLR3 and TLR9 Expression as Measured by Western Blot Assay
The hippocampus was lysed with 1 ml/100 mg lysate and centrifuged at 12000 rpm and 4°C for 15 minutes. The supernatant was incubated with loading buffer, subjected to electrophoresis, and transferred onto the membrane over 90 minutes. The membrane was blocked with 5% defatted milk powder for 2 hours and incubated with approximately 1 ml of TLR3 antibody or TLR9 antibody (Abcam, USA) at room temperature for 2 hours. After removal of the primary antibody, the membrane was washed four times with phosphate-buffered saline/Tween. The incubation with the secondary antibody was performed in the same way. After visualization, results were analyzed using Quantity One software.

Statistical Analysis
Data were analyzed with SPSS 19.0 software. Measurement data were expressed as the mean ± standard deviation. Intergroup comparison was completed using one-way analysis of variance. Intragroup comparison was finished using repeated measures analysis of variance. A value of P < 0.05 was considered statistically significant.

Serum S100-β Concentrations in Rats of Each Group
Serum S100-β concentrations significantly increased during CPB and gradually diminished after CPB in the CPB group (versus T0; P < 0.05). Compared with the sham group, serum S100-β concentrations significantly increased at T1-T5 in the CPB and Sev groups (P < 0.05). Compared with the CPB group, serum S100-β concentrations significantly reduced at T1-T5 in the Sev group (P < 0.05, Figure 1).

ELISA Results of IL-6 Concentrations in Rats of Each Group
Compared with the sham group, serum (Figure 2(a)) and brain (Figure 2(b)) IL-6 concentrations significantly increased in the CPB and Sev groups at T1-T5 (versus T0; P < 0.05). Compared with the CPB group, serum and brain IL-6 concentrations significantly diminished in the Sev group at T1-T5 (P < 0.05).

ELISA Results of TLR3 Protein Expression in the Rat Hippocampus of Each Group
Compared with the sham surgery group, TLR3 protein expression increased at T1, T3, and T5, significantly increased at T1, and gradually diminished at T3 in the CPB and sevoflurane pretreatment groups (versus T0; P < 0.05). Compared with the CPB group, TLR3 expression significantly increased at T1, T3, and T5 in the sevoflurane pretreatment group (P < 0.05; Figure 4).

ELISA Results of TRIF Protein Expression in the Rat Hippocampus of Each Group
Compared with the sham surgery group, TRIF protein expression increased at T1, T3, and T5, significantly increased at T1, and diminished at T3 in the CPB and sevoflurane pretreatment groups (versus T0; P < 0.05). Compared with the CPB group, TRIF protein expression significantly increased at T1, T3, and T5 in the sevoflurane pretreatment group (P < 0.05; Figure 5).

Western Blot Assay Results of TLR3, TRIF, TLR9, and NF-κB Protein Expression in the Rat Hippocampus of Each Group
Compared with the sham group, TLR3, TRIF, TLR9, and NF-κB expression significantly increased in the CPB and Sev groups (P < 0.05). Compared with the CPB group, TLR3 and TRIF expression significantly increased, and TLR9 and NF-κB expression significantly decreased, in the Sev group (P < 0.05; Figure 6).

Discussion
This study established rat models of CPB primed without blood after 1-hour pretreatment with 2.4% sevoflurane. Serum S100-β protein was selected as a biochemical marker for brain injury. 1.0 MAC sevoflurane has been extensively used in the clinic. Previous studies have shown that 2.4% sevoflurane is equivalent to 1.0 MAC in rats [24,25]. Hu et al. [26] demonstrated that sevoflurane could mitigate cerebral ischemia/reperfusion injury after 1-hour pretreatment with 2.4% sevoflurane. Sevoflurane is commonly used in rat models of middle cerebral artery occlusion but seldom used in rat models of CPB. In accordance with the results of our preliminary experiments, the concentration of sevoflurane was 2.4% in this study. S100-β is a sensitive neuron-specific protein, and its expression is very low in the normal brain. Only when brain tissue is damaged is S100-β activated early and expressed rapidly [27]. Because the permeability of the cell membrane and the blood-brain barrier increases, S100-β can be released into the cerebrospinal fluid and the systemic circulation through the blood-brain barrier; thus, the detection of S100-β concentrations in peripheral blood can sensitively reflect brain injury [28]. Researchers therefore regard S100-β protein as a marker that reflects brain injury. Similarly, in the study of CPB, S100-β is considered a specific marker for early brain injury in CPB [29,30].
The hippocampus is more sensitive to cerebral ischemia and hypoxia than other parts of the brain, and ischemia and hypoxia are more likely to cause neuronal damage there, so the hippocampus was selected in this study [31]. In this study, serum S100-β levels were close to the normal value before CPB and were low in each group. After CPB, S100-β levels were still low in the sham group, indicating that simple intubation did not cause significant brain injury. Compared with the sham group, serum S100-β levels obviously increased during CPB in the CPB and Sev groups, and S100-β remained at a remarkably high level after CPB. Singh et al. [32] found that, under anesthesia with sevoflurane or isoflurane, S100-β protein concentrations maximally increased after CPB in patients undergoing coronary artery bypass grafting, and S100-β concentrations were minimal in the Sev group, which was consistent with our results. These findings suggest that CPB induced brain injury in rats through certain mechanisms. During CPB, various factors, such as nonphysiological perfusion, local oxygen supply, and insufficient blood supply, can lead to the release of large numbers of proinflammatory cytokines from lymphocytes, resulting in systemic inflammatory response syndrome [33], which can cause great damage to the brain. Therefore, inflammatory lesion is an important cause of cognitive impairment after CPB. How to effectively reduce inflammatory lesions during CPB is the foothold of our research. Ramlawi et al. [34] demonstrated that serum IL-1β levels remarkably increased, and that the changes in concentration were positively associated with cognitive decline, in patients with postoperative cognitive decline in the early stage after coronary artery bypass grafting and valve replacement. Ashraf et al. [35] thought that, during CPB, S100-β protein expression was positively associated with IL-6 expression; IL-6 expression had a promoting effect on S100-β protein expression; and inflammatory mediators participate in and aggravate brain injury. Vila et al. [36] confirmed that IL-6 was strongly associated with the area of acute cerebral infarction; serum IL-6 concentration and cerebral infarction area showed the same trend and were independent risk factors for cerebral infarction. In this study, IL-6 concentrations obviously increased in the CPB group compared with the sham group from the beginning of CPB, and then gradually reduced after CPB but were still high, indicating that CPB definitely started the inflammatory response, which was consistent with the results from previous studies. Compared with the CPB and sham groups, IL-6 concentrations increased during CPB in the Sev group; however, IL-6 concentrations were still lower in the Sev group than in the CPB group at the various time points. After CPB, IL-6 concentrations were remarkably decreased, and the descent speed was faster than that in the CPB group. Furthermore, the trend of the IL-6 concentration was the same as that of S100-β. S100-β concentrations were lower in the Sev group than in the CPB group, suggesting that sevoflurane mitigated brain injury induced by the inflammatory response during CPB through inhibiting IL-6 expression. Bedirli et al. [12] believed that sevoflurane pretreatment could diminish the concentrations of inflammatory factors (TNF-α and IL-1β) during ischemia and reperfusion, lessen inflammatory response-induced brain injury, and exert a protective effect on the brain, which was consistent with our results.
The establishment of CPB causes the brain tissue to undergo multiple noxious stimuli, such as hypoxia and ischemia of histiocytes induced by abnormal perfusion and subsequent reperfusion injury, activation of the systemic complement system by foreign bodies such as pipes, and excessively released endotoxin [37]. These noxious stimuli can strongly induce the formation of pathophysiological processes characterized by an inflammatory response through a variety of factors and pathways [38]. The TLR9 signaling pathway, as an important hinge between innate immunity and acquired immunity, can play an immune regulatory role in the early stage of pathogen invasion [39]. In ischemic brain injury, TLR9 activates the downstream signaling molecule NF-κB by binding to specific ligands and participates in the inflammatory response [40]. Trop et al. [41] found that CPB increased the inflammatory response, induced high expression of interferon-gamma (IFN-γ), TNF-α, IL-6, IL-8, and IL-10, activated the TLR signaling pathway, and upregulated IRAK-4 and NF-κB expression. Mahle et al. [42] performed a study on the relationship between the inflammatory response and clinical prognosis in neonates after CPB and found that neonatal cardiac surgery can cause a complex and extensive inflammatory response, leading to increases in IFN-γ, TNF-α, IL-2, IL-5, IL-6, IL-8, IL-10, and IL-13. In the present study, the establishment of CPB activated the inflammatory TLR9 signaling pathway in the brain and increased NF-κB expression. This indicates that the TLR9 signaling pathway is involved in the pathological process of cerebral ischemic injury, which is consistent with Trop et al.'s study [41]. This result may be because CPB can cause hypoxia and ischemia in brain tissue; the blood-brain barrier is damaged; apoptosis and necrosis occur in neuronal cells; and the damaged tissues and necrotic cells can release molecules that act as endogenous activators of TLR9 and activate the TLR9 signaling pathway. For example, cleaved DNA fragments can act as ligands of TLR9 receptors and activate the TLR9 signaling pathway [43]. Taken together, the inflammatory response induced by CPB may be associated with the TLR9 inflammatory signaling pathway. TLR3 is the only TLR that signals entirely through the TRIF-dependent pathway. TLR3 is expressed throughout the central nervous system and is prominently highly expressed in astrocytes. Astrocytes account for the highest proportion of cells in the central nervous system, so TLR3 is particularly sensitive to injury [44]. In contrast to the TLR response to brain injury caused by the inflammatory response during ischemia, giving certain stimuli in advance to activate TLRs can provide a powerful neuroprotective effect by reducing the inflammatory response [45]. Previous studies have confirmed that pretreatment with lipopolysaccharide, or upregulation of TRIF expression by ischemic preconditioning, can ultimately increase the release of anti-inflammatory cytokines and inhibit the release of NF-κB-mediated proinflammatory cytokines, thus exerting a protective effect on the brain [8,9]. Nhu et al. [46] found that lipopolysaccharide pretreatment could upregulate TLR3 expression, thereby increasing expression in the downstream TRIF pathway. Because TLR3 signaling is completely through the TRIF pathway, activation of TLR3 expression may contribute to the balance of proinflammatory and anti-inflammatory cytokines, resulting in neuroprotection. Pan et al.
[47] demonstrated that the use of the TLR3 agonist polyinosinic:polycytidylic acid could stimulate TLR3 expression, inhibit the release of the proinflammatory mediators NF-κB and IL-6, and alleviate the damage to nerve cells. In the current study, the trend of TRIF expression is basically the same as that of TLR3, and TLR3 expression is confirmed by the downstream pathway. Because TLR3 signaling is completely through the TRIF pathway, the activation of TLR3 expression may help to restore the balance of proinflammatory and anti-inflammatory cytokines that is broken by CPB, to find novel balances, and to produce neuroprotective effects. IL-6 concentrations and S100-β protein concentrations showed the same trend, suggesting that IL-6 concentrations increased during CPB, accompanying or aggravating the occurrence and development of brain injury. Before CPB, IL-6 concentrations were lower in the Sev group than in the sham group. The trend of the IL-6 change in the Sev group was approximately identical to that of the CPB group, but the IL-6 concentrations were obviously lower. In this study, after pretreatment with sevoflurane in rat models of CPB, sevoflurane pretreatment activated TLR3 expression, increased TLR3 and TRIF levels, and inhibited the production of S100-β and IL-6. Sevoflurane pretreatment is similar to ischemic preconditioning: it can activate TLR3 and TRIF protein expression, enhance the tolerance of brain tissue to the inflammatory lesions caused by CPB, lessen IL-6 release, and reduce inflammatory factor-induced damage. Results from the present study confirmed that TLR3 participates in the brain protection of sevoflurane pretreatment and provide a new target for the prevention and improvement of CPB-induced brain injury in the clinic. In conclusion, sevoflurane pretreatment has a protective effect against brain injury after CPB. Sevoflurane pretreatment can activate the TLR3 and TLR9 signaling pathways, upregulate TLR3 expression and TRIF expression, decrease TLR9 expression, downregulate NF-κB expression, and inhibit the production of S100-β and IL-6, thereby lessening CPB inflammation-induced brain injury.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this article.
2018-04-03T05:20:51.595Z
2017-12-27T00:00:00.000
{ "year": 2017, "sha1": "14f177f81765538fbf2fb297e119970d9609c8aa", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2017/4286738.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "06f5a4bc229ba12377469fef106f72431224a2e3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
117386857
pes2o/s2orc
v3-fos-license
RELATIONSHIPS AMONG WRITING AND READING AS A RESPONSE TO CRITICAL JOURNAL REVIEW The integration of reading and writing in L2 is not a new area; however, only a few studies on reading and writing receive enough attention and are implemented in L1 and L2 teaching. This study aims to reveal the relationships between reading and writing through critical journal reviews. The participants show the ability of L2 learners in reading, which is expressed through writing. On other occasions, the participants show judgment in the representative errors written after reading. This research examines certain L2 writing faults as judgments of reading in reviews of a journal article.

INTRODUCTION
The L2 writing performance of learners seems an interesting area. Before writing, the learners initially build their performance from reading. Researchers have found that phonemic awareness and knowledge of symbol relationships are critical factors for learning to read, and strong phonemic awareness has been found to be crucial for efficient word decoding, especially for those who read a lot in L1 (Sparks et al., 2011). Enjoyment of reading makes learners likely to read more often over time, so that they are exposed to the printed word more frequently (Sparks et al., 2011; Cunningham & Stanovich, 1991). From reading more, students acquire early success in learning to read and the subsequent development of language-related skills. Surprisingly, exposure to L1 reading is similar to acquiring L2 reading. The L2 reading factors involve language-related skills such as vocabulary, spelling, grammar, and general knowledge.

The contribution of reading is reflected in writing. Writing performance emerges when the learners' mental representation of writing makes a contribution (Zarei et al., 2016). From this contribution, the writing model has been shown to be a process-based, multi-dimensional and integrated activity inducing self-direction and organization.

LITERATURE REVIEW
Reading as a strategy
There are several hypotheses concerning reading and reading-related skills (Sparks et al., 2011 and Stanovich, 1997). The cognitive efficiency hypothesis attributes differences in vocabulary, general knowledge, and general language skills to variation in the cognitive mechanisms for gaining meaning from texts. The environmental opportunity hypothesis attributes differences in language skills to differential opportunities for word learning. From these hypotheses, we may conclude that reading activity is a measurement of print exposure (reading volume) in which students engage. Learners sometimes show differences in reading-related skills that are associated with the efficiency of the cognitive mechanisms related to reading.

Writing after reading
Writing in L2 has several implications, such as cohesion devices and writing quality. Several studies found that greater cohesion is indicated by perfect linking between paragraphs (Crossley et al., 2016). Cohesion is used to give judgments when investigating writing. A number of studies show positive relations between L2 writing quality and the production of local and text cohesive devices. These implications may arise when L2 learners use their L1 knowledge of reading to produce L2 writing. The essay quality will reveal the learners' cognition and the production result after the transfer.
Recent studies show that the quality of writing after reading depends on pedagogical aspects, because the available reading sources are also linked to the writer's previous knowledge and lead to other interpretations in writing. Li (2014) studied reading summaries and writing as an integrated task. Sixty-four participants were assigned to criticize a textbook and write a summary in their own styles. The results reveal that several reading strategies - identifying and skipping unknown words, reprocessing information to clarify meaning, and rereading for clarification - are focused on word- and sentence-level comprehension. In writing, the participants create the content from the source read. This proves that reading and writing are integrated skills.

In the L2 writing examined, some errors were found. For these judgments of writing, there are some composition variables for analyzing the errors in writing (CLRC writing center), such as verb tense errors, sentence structure errors, and word choice errors.

RESEARCH METHODS
Source of Data and Participants
This research examines the learners' writing based on critical reviews of articles on applied linguistics. In their critical reviews, the participants made conclusions after summarizing the article. The conclusions represented their knowledge from reading and the learners' assumptions as their cognition in writing. Some errors were found in their critical reviews, such as verb tense errors, incorrect items, and sentence structure and word structure errors. These criteria were used as the data source. The samples were taken from five participants. They are students of The State University of Malang in the class of Applied Linguistics in 2016.

Data Analysis
There are some categories for judging L2 writing, such as verb tense errors, incorrect items (conjunction use), sentence structure errors, and word choice errors. These categories most commonly occurred in the L2 writing.

Verb tense errors
2nd writer: The researchers reveals the reality that the Jakarta Post actually has its standpoints.
The correction: The researchers have revealed the reality that ……
4th writer: This article broadens my knowledge about how language taboo not only can make a judgment to someone's personality and intelligence but also they can be used in literature, drama or movies and how they influence the audiences or readers.
The correction: This article broadens my knowledge about how language taboo not only can make a judgment to someone's personality and intelligence but also can be used in literature, drama or movies and how they influence the audiences or readers.

Incorrect items (conjunction use)
There are some mistakes in using conjunctions, explained as follows.
1st writer: Besides, it does not have a single grammatical error nor misspelling.
The correction: Besides, it does not have any single grammatical error or misspelling.
3rd writer: Not only learning general English, but varieties of English must be learnt also.
The correction: Not only learning general English, but also the varieties of English must be learnt.
Sentence structure errors
This is an example of overuse within a sentence, which makes it a jumbled sentence.
3rd writer: Conducting research communication in aviation, the point of view of a linguist a certainly contributes to a deeper understanding of the issues involved.
The correction: Conducting research on communication in aviation from the point of view of a linguist certainly contributes to a deeper understanding of the issues involved.

Word choice errors
5th writer: In case of the results and discussions of the research, not enough explanations are given and there is problem in reporting the data.
The correction: Concerning the results and discussions of the research, the explanations does not give additional information, besides there is still problem in reporting the data.

FINDING
This section interprets the participants' reading and writing judgment errors raised in the data analysis. In terms of verb tense errors, the 2nd writer did not realize that the activity had been completed, and used the present tense instead of the present perfect or simple past tense. In addition, the 5th writer did not realize that a subject should be added to the second clause, while the first clause lacks a subject; this may be connected to the writer's mental cognition. Moreover, the 1st writer also deserves special attention. The use of nor matches with neither, but the sentence does not support adding neither. Next, for the 3rd writer, it is odd to put but separately, away from also. The writer tried to make variation in the sentence but failed. When the 3rd writer used prepositions several times, the sentence produced repetitions. In sentence structure, the sentence from the writer seems haphazardly constructed, so it does not give a good interpretation because of the many prepositions used. From the word choice errors, it can be seen that the 5th writer tried to make the sentence short, but this may bewilder the readers. The word choice, or diction, of the writer has to be reformed to avoid misinterpretation.

This research leads to several interpretations of the participants' writing. Considering L1, it is possible that the participants did not engage deeply enough in their L1 reading. Lightbown and Spada (2006) revealed that certain misunderstandings in acquiring L2 are connected to the participants' L1 background, in that whether they were good or bad at L1 skills impacted their acquisition of L2.

CONCLUSION
Reading plays an important role in writing. The writers were much better able to reuse the article they had read. In contrast, writing still seems challenging for the writers. The L2 writing errors may disappear gradually. As an implication of this research, writers should read a good passage carefully and then try to write a review afterwards.
2019-04-16T13:28:42.461Z
2018-08-14T00:00:00.000
{ "year": 2018, "sha1": "af43abc496892e95c0e3f31b67c652a420bc6bb8", "oa_license": "CCBYSA", "oa_url": "http://jurnalftk.uinsby.ac.id/index.php/IJET/article/download/108/pdf_32", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "af43abc496892e95c0e3f31b67c652a420bc6bb8", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
78939810
pes2o/s2orc
v3-fos-license
Distribution of hepatitis C virus genotypes in patients with chronic hepatitis C infection in Karnataka, South India Hepatitis C is caused by a spherical, enveloped, single-stranded RNA virus, which belongs to the family Flaviviridae and genus Hepacivirus. It is a major cause of chronic hepatitis throughout the world. WHO estimates that 170 million individuals worldwide are infected with hepatitis C virus (HCV). However, the prevalence of HCV infection varies throughout the world. 1 HCV is classified into 6 genotypes and numerous subtypes. Molecular differences between the genotypes are relatively large, with a difference of at least 30% at the nucleotide level. The viral genome undergoes mutation, and thus the parent strain gives rise to different mutants, which coexist as quasispecies in the same individual. 2

INTRODUCTION
HCV leads to chronic hepatitis in about 80% of cases. 3 The virus can cause gradual hepatic fibrosis and eventual cirrhosis, end-stage liver disease, and hepatocellular carcinoma. 4 Without treatment, 33 percent of patients have an expected median time to cirrhosis of less than 20 years. 5 The virus genotype does not influence the presentation of the disease, but different strains of HCV may be involved in the disparity in the course of hepatitis C among infected individuals and in the difference in the pattern of the disease between countries with different dominant genotypes. Since the genotype is a major predictor of the response to antiviral therapy and also determines the choice of antiviral drugs, it is important to understand the prevalence of the genotypes in order to devise strategies to combat the disease. Multiple studies have been done on the distribution of various hepatitis C virus genotypes in India. [6][7][8][9][10][11][12][13] There are no data from Karnataka on the distribution of hepatitis C virus genotypes. We took up this study to find out the prevalence of various genotypes of hepatitis C virus in patients with chronic hepatitis C infection in Karnataka, South India.

METHODS
From October 2012 to September 2016, one hundred and eighty-five consecutive patients diagnosed with chronic hepatitis C infection attending the outpatient department of the medical gastroenterology department were included in this retrospective study. Patients were identified from the outpatient registers, and data were extracted from the liver proforma of the medical gastroenterology department, Victoria Hospital, attached to Bangalore Medical College and Research Institute. All the patients had a positive anti-HCV antibody test done by ELISA (Tri Dot). HCV RNA viral load and genotype were determined for all the patients before starting the combination therapy. HCV RNA quantitative testing was done by reverse transcriptase PCR assay on a Roche Cobas AmpliPrep analyzer (Roche Diagnostics GmbH, Mannheim, Germany), with a range from 43 IU/mL to 6.9 × 10^7 IU/mL. A viral count of <43 IU/mL was considered to be undetectable.
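Given the assay's reported range, the bucketing of a quantitative result can be made explicit. The following Python sketch is illustrative only; the thresholds are the ones stated above, while the function name and labels are invented.

def classify_viral_load(iu_per_ml, lod=43.0, upper=6.9e7):
    # Reported range of the assay used here: 43 IU/mL to 6.9 x 10^7 IU/mL
    if iu_per_ml < lod:
        return "undetectable (<43 IU/mL)"
    if iu_per_ml > upper:
        return "above the reported range (>6.9 x 10^7 IU/mL)"
    return "quantifiable within the reported range"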
HCV RNA quantitative testing was done by reverse transcriptase PCR assay on a Roche Cobas AmpliPrep analyzer (Roche Diagnostics GmbH, Mannheim, Germany), with a measuring range from 43 IU/mL to 6.9 × 10⁷ IU/mL. A viral count of <43 IU/mL was considered undetectable.

DISCUSSION

Hepatitis C infection is the most common cause of chronic liver disease. The severity of hepatitis C, its progression, and the response to therapy may vary depending on the genotype. 5 As regional differences exist in the distribution of HCV genotypes, it is important to know the genotype distribution to understand its prognostic implications. 11 In our study, genotype 1 was predominant, followed by genotype 3, which is similar to that reported by other workers from southern India. 7,9 In northern and western India, genotype 3 was found to be predominant. 6,8,[10][11][12][13] Studies from other parts of the world reveal that genotype 3 is prevalent in South East Asia, whereas genotype 1 is common in the USA and Western Europe. 14 These geographical differences may help in predicting the origin of the HCV virus.

Chronic hepatitis C infection is common in CKD patients on hemodialysis. Very few studies have reported genotypes in these patients. [15][16][17] In two western studies, genotype 1 was the commonest. 15,16 In a study from north India, genotype 3 was the commonest in CKD patients on hemodialysis. 17 In our study, genotype 1 was predominant in CKD patients on hemodialysis, which is in line with the geographical prevalence: in CKD patients, genotype 1 was seen in 93.1% and genotype 3 in 6.9% of patients.

The severity of the disease, its progression, and the response to therapy may vary according to the genotype. [18][19][20] A number of studies have reported that severe liver disease occurs in relation to type 1 infection (especially type 1b), 18,21,22 and that cirrhotic patients infected with HCV type 1b carry a significantly higher risk of developing hepatocellular carcinoma compared to those infected with other HCV types. 23 But the results of the various studies are conflicting. [18][19][20] It is known that genotype 1 is the second most common genotype reported from north India and is the most prevalent genotype in south India, as also seen in the present study. Thus, knowledge of the distribution of the various genotypes in our country is essential for its prognostic implications in chronic hepatitis C infection.

CONCLUSION

Genotype 1 was found to be the most prevalent genotype in patients with chronic hepatitis C in Karnataka, south India.
2019-03-16T13:08:02.806Z
2016-12-24T00:00:00.000
{ "year": 2016, "sha1": "09a505b267fc4123b04c727165b7fc696576e296", "oa_license": null, "oa_url": "https://www.ijmedicine.com/index.php/ijam/article/download/27/26", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "dd94daaf0e591b7668a72634fd5265f10c3b8ad6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231886325
pes2o/s2orc
v3-fos-license
Applying the Rasch model to analyze the effectiveness of education reform in order to decrease computer science students' dropout

Attrition is an important issue in higher education, especially in the field of computer science (CS). Here, we investigate to what extent an education reform affects the attrition of students by analyzing the pattern of grades of CS students' academic achievement from 2010 to 2018, using item response theory (IRT) based on Rasch-model analysis. We analyze data from 3673 undergraduate students of a large public university. In 2016 an education reform, as an intervention, was added to our BSc program: all theoretical lectures became compulsory to attend, and we introduced a learning methodology course for all first-year students. According to our results, after the education reform most subjects became accomplishable, and students with lower levels of ability also tried to take exams. We succeeded in retaining 28% of our students. Analyzing students' results could help administrators develop new programs in order to increase retention.

Introduction

In the last decades, higher education institutions (HE) have been under pressure to reduce the rates of students 'dropping out' and to develop methods that encourage students to continue their studies (Thomas and Quinn, 2003; Mayhew et al., 2016). The aim of our research is to analyze students' academic success and to find subject-related characteristics of retention in the field of computer science (CS).

Retention in CS. At a large public university in Europe (over 30,000 students) the overall dropout rate is 30%, and the worst among the departments is in Informatics, where the average rate of attrition was 60% between 2010 and 2016. The attrition rates are similar in other countries in Europe as well (Borzovs et al., 2015; Zwedin, 2014), but it seems to be a worldwide issue, which can present a significant problem in the job market. Today, more than 800,000 computer scientists would be needed (Europa.eu, 2015), which makes this problem not only an educational but also an economic one. First-year college students fail the Introductory Mathematics course more often than any other course. After the first semester, on average 30% of students leave the field of CS, and this number increases to 60% by the end of the first year (Borzovs et al., 2015; Ohland et al., 2008). Therefore, it is worthwhile to analyze the curriculum of CS in order to find solutions for dropout.

According to the Association for Computing Machinery, an ideal curriculum for computing contains guiding disciplines for CS education (ACM-IEEE, 2017). Students should be able to "analyze complex, real-world problems to identify and define computing requirements and apply computational approaches to the problem-solving process". A general CS program is based on various areas of mathematics as well. For instance, discrete mathematics is essential for higher levels of CS. Every CS curriculum contains mathematics for at least 12 compulsory credits. However, at the department of Informatics of a large public university in Europe, half of the students (51%) had problems with subjects related to mathematics. In the following, a description of some theories behind the phenomenon of dropping out clarifies the reasons and research directions for retaining students.
Theories of student persistence. Tinto (1975) introduced an interactional theory of student persistence in academic life. This theory emphasized the importance of the students' personal characteristics, traits, experiences, and commitment. Furthermore, Tinto (2012) highlighted the interactions between the student and the institution regarding how integrated ("fitted" academically and socially) the student is. Pascarella and Terenzini (1983) also associated the importance of social and academic integration, such as peer relationships and faculty member relationships, with persistence. Interactional theories suggest that students should be connected to one another and to their institutions. Braxton and Hirschy (2004) emphasized the need for community on campus as an aid to social integration that develops peer-to-peer relationships. Terenzini and Reason (2005) and Reason (2009) suggest that the student's pre-college characteristics and experiences interact with the internal structures, policies, and practices of the university. Whether the student persists and continues their studies or not will not turn out until the end of this interactional circle (Terenzini and Reason, 2005; Reason, 2009).

However, according to Braxton and Hirschy (2004), there is little evidence that these characteristics and motivations can provide a successful predictive model of student engagement. It remains a question why some students successfully "fit" at the university while others do not, although they often have similar academic backgrounds and socioeconomic demographics. Because of the remaining questions and the lack of explanation, it is worthwhile to analyze the reasons behind CS dropout, because this field has a large number of students dropping out.

Analyzing the reasons behind CS dropout. In line with the growing attention paid to dropout, models explaining CS students' dropout have been presented in the literature. Unfortunately, most students drop out already in the first year of their studies. Every year many students around the world enter higher education enrolling for CS, but after 3 years only a few will receive a degree. This means that we have to understand which characteristics of the subjects should be taken into account in order to avoid dropout. While one direction of the analyses of dropout is investigating the core subjects which students tend to fail, the other direction is conducting research into students' psychological characteristics. In the following section, some of this research will be discussed in detail.

During the first academic year, CS students have basic subjects, such as mathematics and programming, which provide important basic knowledge for their further academic studies. Most studies claim (Divjak et al., 2010) that most of the students fail in mathematics courses; however, programming courses also cause problems for students (e.g. Bennedsen and Caspersen, 2007). According to Watson and Li (2014), the success rate of passing an introductory programming course is 67.7%, based on their systematic literature review. Baker et al. (2009) claim that difficulties in the introductory courses can cause unwillingness to continue studies in the CS major.
According to investigations of students' characteristics, there is a hypothesis that being a successful student in engineering depends on being successful in math during high school. University students learn a huge amount of new information; therefore, being able to recall most of what you have learnt is a necessary skill (Bacon and Stewart, 2006; Rawson et al., 2013). Pearson and Miller (2012) found that the bachelor degree in engineering is highly dependent on the calculus course taken during high school and on the number of calculus courses taken at university. According to Pearson and Miller, one-third of the students fail to complete the degree due to inadequate knowledge in mathematics. Hopkins et al. (2016) analyzed the psychological mechanism of acquiring information among engineering students. They employed spaced versus massed retrieval practice for students taking the Introductory Calculus for Engineers course, using a hybrid between- and within-subjects design. Spaced retrieval practice could help the academic performance of engineering students. Among the first-year students, they found a relation between mean high school GPA and student success if students had studied the same disciplines in high school. Chen (2013) mentions several reasons behind STEM attrition: STEM attrition was correlated with students' demographic characteristics, pre-college academic preparation, the type of first institution enrolled, etc. It appears that STEM course-taking in the first year, the type of math courses taken in the first year, and the level of success in STEM courses have stronger relationships with this outcome than other factors. Robbins et al. (2004) claim in their meta-analyses that there is a moderate relationship between retention and academic goals, academic self-efficacy, and academic-related skills (ρs = 0.340, 0.359, and 0.366, respectively). Actually, academic self-efficacy and achievement motivation seem to be the best predictors of GPA (ρs = 0.496 and 0.303, respectively). Giannakos et al. (2017) constructed an eight-predictor model explaining 39% of student retention; the model contained variables such as the usefulness of the degree, cognitive gains, and a supportive environment. Other researchers suggest restructuring the education system; Kalmar (2013) claims that the two important pedagogical factors behind attrition rates are the lack of feedback and of practice (Seymour and Hewitt, 1997).

Although the above studies dealt with a huge range of reasons for attrition, further studies are needed for a deeper understanding of the phenomenon and for including it in a practical intervention program.

Some of the intervention techniques. A wide variety of techniques have been employed to minimize student attrition at undergraduate institutions. Many interventions focus on positively affecting retention, and there are many types of intervention. For instance, focusing on difficult, entry-level science courses has managed, in some cases, to increase retention by 10% (Blanc et al., 1983; Tinto, 2005); self-selected groups of students have reached similar results by decreasing attrition from 9.8% to 3.2% (Gregerman et al., 1998); other faculties have orientation sessions (Pascarella et al., 1986), which affect retention indirectly through developing social interaction among the undergraduate students and by increasing commitment to an institution. Recently, Bowman et al.
(2019) presented two studies that analyzed how effective a goal-setting academic advising intervention can be. The studies examined engineering students who were on academic probation, with the aim of improving their grades. The findings show that the intervention notably increased the grades of engineering students on probation who were beyond their first year of college, but it was not effective for students in their first year. This shows that this type of intervention supports academic success after the first year.

Project Success (PS) is the name of an intervention program that helps probationary students achieve academic success. The program has two main parts. The first component of the program is to give students elaborate information about study skills and campus resources. The second component of PS is to help students improve skills necessary for studying, such as time management. The groups are small, and they are required to meet every week in order to gain a letter of completion. The intervention does not grant academic credits (Humphrey, 2005). Hwang et al. (2014) came to the same conclusion analyzing the experience of some college students who had experienced academic effectiveness. During the 'education recovery program', four main topics appeared to be vital: attitude, study strategies, external support, and coping with difficulties. These results suggest that students, by receiving external support, are better able to cope with and overcome academic difficulties. In addition to this, working at a large metropolitan public research university, Kot (2014) studied the first-year GPA of 2745 full-time freshmen and their second-year enrollment behavior. According to these results, students who used centralized counseling services had an increase in their first-term GPA and second-term GPA, and also had a lower probability of first-year dropout. Wlazelek and Coulter (1999) had similar results: students who had participated in counseling during their academic studies had a significantly higher grade point average than students who did not receive counseling.

Mellor et al. (2015) applied a small-class intervention as well and had a 10% lower attrition rate among the students taking the course. Similarly, students who completed a goal-setting intervention for 4 months showed significant improvements in academic performance compared with the control group. The goal-setting program could be an effective and quick intervention for students who are struggling with academic studies (Morisano et al., 2010). These results suggest that small-group interventions can effectively reduce attrition. Furthermore, a meta-analysis using 25 experimental studies on the effects of academic probation and student-faculty mentoring showed that mentoring had the largest positive influence on student outcomes (Sneyers and De Witte, 2018).

According to Herpen et al. (2019), participation in a pre-academic program could encourage students to make a greater effort during their studies, because students who had participated in the program had a better first-year cumulative GPA.

This study: Education reform

Program description: Intervention. Based on the information gathered in the literature review of issues surrounding attrition among CS students, and after analyzing various intervention programs and their effectiveness, the following education reform was performed.
Mentor program: first-year students are organized into fixed-composition groups of 20 students in order to promote community building. Peer mentors serve to support and encourage new first-year students to succeed at the university. Peer mentors, together with a mentor teacher, lead a group meeting weekly and help new students throughout the academic year. In this buddy program, students share their problems with their teachers and fellow students, who then help them to cope with issues in university life. Peer mentors are knowledgeable guides for new students, thoughtful facilitators who provide access to people and resources, and ultimately role models. Peer mentors coordinate and facilitate social and educational programs as desired or needed. There is an emphasis on fostering extra-curricular activities and peer interactions. The aim (Ryan and Deci, 2000) is to develop close student-student and student-teacher relations in order to closely monitor academic performance. This could result in the satisfaction of the basic psychological needs for competence, autonomy, and relatedness, and in a higher level of intrinsic motivation.

So, we started our education reform with non-compulsory mentoring classes, fixed groups, and contemporary tutoring in 2006, but the program itself did not bring a breakthrough in reducing dropout. However, a pilot program with 70 students was successful in 2015. During this period of time (between 2006 and 2015) the CS bachelor's degree program did not change significantly; the subjects and outcome requirements were not modified, and the teachers mostly remained the same as well. The legal background of higher education did not change either. So there was no substantive change that could have helped to reduce dropout.

In the pilot program in 2015, we invited first-year students who had performed badly during the semester. Students who participated in the Study course performed better than those who did not (Takács and Horváth, 2017). After the pilot, we extended the intervention to every first-year student to prevent them from dropping out.
Promotion and prevention program: Achieving student success. 2016: A special course entitled "Preparation course for university studies and developing learning skills" became obligatory for all first-year students. The course consists of two main parts: an intensive training program and a special mentoring program. The training program is held by psychologists and peer counselors for groups of 20 students. It is a combination of motivation, organization, time management, and concentration training that helps students stay on track and achieve successful test scores. It aims to develop and maintain (1) relationships with first-year students, supporting their acclimation and sense of belonging, (2) students' motivation to find out what they are passionate about and to use their interests to connect with their university tasks, strengthening their CS identity, (3) students' organization, including organizing and labeling all materials and notebooks and keeping a checklist of essential tasks, (4) their ability to prioritize and manage their time and keep track of assignments and tasks, and (5) soft skills that help concentration and preparation for exams, keeping their minds on the task. Topics discussed are the difficulties of the transition from secondary school to university, how families can support students' academic life, general information about the university, the evaluation system used in the courses, general activities of student life, etc. There are many benefits of the program: familiarization with the university, developing teacher-student and peer relations, and getting to know classmates before the academic year starts.

In addition to teaching them general studying and time-management techniques so that they will avoid procrastination, psychologists also develop students' soft skills and build a strong study group. Besides the psychologists, a circle of peer counselors was formed, who serve as positive social and academic role models. Peer counselors hold a special workshop on learning techniques, covering how to study mathematics and programming subjects efficiently. In total, these sessions last 30 h: the prevention course comprises 18 lessons, which are held one week before the semester starts, whereas the 12-h-long second part is held during the semester.

Changing the structure of the education system. In our higher education system every subject is graded on a 5-point scale, where 1 means fail, and grades from 2 to 5 mean pass, with 5 being the best grade. Since the 2016-2017 academic year, all the lessons (in our system we have lectures and practice sessions) have been obligatory to attend (before that, only practice sessions were obligatory). One semester contains 30 credits and 6-8 subjects.

Research question

In the present study, we introduce the different steps of an education reform attempting to help our students become engaged in their university studies. The intervention program can have an effect on the retainment of students, influencing their goals and commitments, their institutional experience, and their integration into the academic environment. Information is available about the benefits and outcomes of university intervention programs, but, unlike our intervention program, most of them are voluntary to attend. The research questions are: (1) To what extent will an education reform at a large public university affect the attrition of students? (2) Can we find evidence of this in the subjects by analyzing the pattern of grades?
Design

Difficulty and differential analysis of subjects. It is worth examining the subjects of the various courses because, although there are grades, there is some kind of expected knowledge or competence that the subject should measure (and the student should develop it, or at least reach the expected level, by the end of the course). It is also important to see whether obtaining sufficient knowledge (or achieving better grades) in a subject really requires increasingly higher ability, or whether the subject differentiates students in a different way.

To find out whether this kind of competence actually exists and whether the subjects measure it, we applied IRT, based on Rasch-model analysis, to examine the mathematical and programming subjects included in CS education (Rasch, 1960). In our higher education system every subject is graded on a five-point scale, where 1 means fail, and grades from 2 to 5 mean pass, with 5 being the best grade. In the analyses, the final grades in each subject were included.

Rasch models are a special case of IRT models. The essence of Rasch modeling is to bring the difficulty of the subjects and the ability of the students onto the same scale. A subject with a given difficulty can be completed by a student with the same ability level with a given probability. Obviously, all subjects that are less difficult are more likely to be completed successfully than more difficult subjects.

Let us look at an example of this: if a right triangle is given with both of its legs and the student has to calculate the hypotenuse, then they must be partly familiar with the Pythagorean theorem and partly able to apply it: square, add, take the root of the result. This is not a very difficult task; students at the corresponding ability level are 60-70% likely to solve it. A more difficult task is to ask them to determine the length of the third side of an arbitrary triangle on the basis of two sides and the angle enclosed by them (the cosine rule, a generalization of the Pythagorean theorem), while an easier task is to give the total length of the two specified sides, or the circumference of the triangle. The former task can only be solved by students with higher abilities, while the latter is likely to be solved even by students at lower levels; in this light, the difficulty value of the former task will be higher (i.e. more difficult) and that of the latter will be lower.

In the model, we use a second parameter besides the difficulty: this is the "slope" of the subject. The steeper the slope of the subject is, the better it can differentiate students; that is, it measures well and strongly around a given ability level.

Here, too, is an example based on the above. In applying the Pythagorean theorem, it is generally true that by the time students get to this point, they will use a calculator to square and take roots. In the light of this, the slope of this task will be higher, because if one knows the theorem, he or she will most likely solve the problem; so at a higher level of ability, the task will no longer be able to distinguish between students. Conversely, at a lower level, if you do not know the theorem, you cannot be expected to guess the Pythagorean theorem, so you will not be able to solve the problem. In this sense, at lower levels this task will not measure well either, as below that level no one is expected to solve it.
It is clear, therefore, that when examining items, we will first look at these two parameters: on the one hand, how difficult the subjects are, and on the other hand, how well they are able to differentiate (higher values "cutting" sharply, lower values "randomizing").

Results

The analysis was performed using IRT based on the Rasch model, with the STATA 15 software package.

Descriptive statistics. Participants: All first-year students (N = 3673) were full-time students in the BSc course in CS. 2863 participants started the university before 2016 and 809 after 2016; the average age was 19.81 years.

In the longitudinal examination between 2010 and 2015, 3671 students started the university program, 1776 students (48%) left the university, and 24% are retained students (N = 894). According to the student registration system, between 2016 and 2017, 809 students registered and 168 (20%) left the university, while the others' degrees are still in progress (Table 1).

Instead of introducing the whole subject network, we present typical subjects that were analyzed using IRT based on the Rasch model. Below we will discuss the subjects of Discrete Mathematics, Law and Management Theory, and Basic CS (Mathematics, General, and Functional Programming). These subjects enable us to sufficiently interpret the typical phenomena that may occur in such an analysis. The full analysis of the subjects can be found in Appendix A. The periods before 2015 and after 2016 are treated separately in the table, as at the end of 2015 the first step of the education reform took place, when all the lectures became obligatory and the learning methods course was introduced for all first-year students, so it had an impact on academic achievement. We wondered if this manifested itself in some way in the difficulty of completing the subjects and in their ability to differentiate.

Examination of slope and difficulty coefficients. Let us examine Table 2 more closely. As a first step, let us examine the slope indices of the given subjects in different years, and whether they change from one year to another.

We applied a two-parameter procedure: each subject has a difficulty index and a slope. The student's ability value (a) moves on the same scale as the subject difficulty value (d), connected by the slope (s). This means

P(grade | a) = 1 / (1 + e^(-s(a - d))),

that is, the probability of a student reaching a given grade in a given subject at a given ability value can be calculated with the above formula. Let us examine how it can be interpreted. It should be noted that the examined phenomenon cannot have a negative slope (and typically not 0 either), because a slope of 0 means that there is a probability of 1/2 (regardless of ability) that a student passes a given exam; in simple terms, it is essentially a coin toss that decides what grade they get. Fortunately, there is no such thing, so we can assume that all slopes are positive.

If the student's ability is higher than the difficulty of the subject, then the exponent of "e" is always negative, so the higher the ability, the lower the denominator of the fraction, and thus the greater the probability of earning the given grade.

An increase in slope refers to a faster change of the probability, i.e. how steeply the subject discriminates: how quickly the probability of success decreases/increases around the given difficulty/ability.
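To illustrate the model above, here is a minimal sketch in Python of this two-parameter logistic response function. The subject parameters used (difficulty 0, slopes 3.0 and 1.6) are hypothetical round numbers echoing the slope values discussed below for discrete mathematics and functional programming, not fitted estimates from Table 2.

```python
import math

def p_success(ability: float, difficulty: float, slope: float) -> float:
    """Two-parameter logistic item response function: probability of
    reaching a given grade at a given ability level."""
    return 1.0 / (1.0 + math.exp(-slope * (ability - difficulty)))

# Hypothetical subjects: a strongly differentiating one (slope ~3, like
# discrete mathematics) vs. a weakly differentiating one (slope ~1.6).
for ability in (-2.0, -1.0, 0.0, 1.0, 2.0):
    steep = p_success(ability, difficulty=0.0, slope=3.0)
    flat = p_success(ability, difficulty=0.0, slope=1.6)
    print(f"ability={ability:+.1f}  steep: {steep:.2f}  flat: {flat:.2f}")
```

At ability equal to difficulty, both curves give exactly 0.5; the steep subject separates students on either side of that point much more sharply, which is the "differentiating ability" discussed in the text.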
It is thus clear that it is the two parameters of the subject and the ability of the student that determine the probability of success in a subject. The default values are 0 for both; values for easy subjects / weak ability levels are typically around -2, and for difficult subjects / strong ability levels above +2. Typically, if the student's ability is exactly equal to the difficulty of the subject, the exponent is 0, which means that there is a probability of exactly 1/2 of reaching that particular grade. If, on the other hand, the student's ability falls short of the difficulty, the denominator of the fraction will increase, so the probability that the student will pass the exam or earn a good grade will decrease (Fig. 1).

Discrete mathematics lecture and practice, separately. In discrete mathematics (both in the lecture/theory and in the practice course), we see a high slope value above 3 (sometimes 4), both before and after 2016. This means that the subject had strong differentiating ability both before and after the education reform.

If we also observe the difficulty parameters, two things become visible. On the one hand, the subject became less difficult, as after 2016 a lower level of ability was sufficient to earn a given grade than in 2015 and before (this is mainly true for the practice course as well). Regarding performance, our attention can focus on the difficulty of level 1, which moved downwards (2015 and before: -0.599, standard error 0.03; after 2016: -0.71, standard error 0.05; the larger error is due to the fact that the number of measurement points was obviously smaller for this period, and the number of students was smaller due to the shorter elapsed time).

This means that the students tried taking exams more bravely, so students with lower ability levels also attempted an exam, whereas earlier this level presumably meant that the student did not even take the exam; one cannot think that from one year to the next students were "disoriented" and far more able students suddenly took over the university.

On the other hand, one can also see that there is a bigger jump in the difficulty of the subject between grades 2 and 3. This means that the difficulties differentiate most here. It is also noticeable that the difficulties increase substantially evenly (both in practice and in lectures), except for this gap; so it is actually easier to receive grades 1-2 than one of grades 3-4-5.

However, the most noteworthy point for us is that level 1 has demonstrably become easier after the education reform, which means that students are more willing to take the exam (and more people try it); dropout usually begins when students do not attempt the exam. And we can prove that the difference between the two years was in this direction.

Law and management theory. It can be clearly seen that the slope of this subject is significantly lower than that of discrete mathematics, so the subject can be expected to distinguish between students only within a narrower ability range (still high enough not to "randomize" grades).
What makes this well-known subject particularly prominent in the analysis is that it became drastically easier to receive outstanding grades following the education reform. It can clearly be seen that while the lower region (a sufficient grade, or not attempting the exam at all) has not changed significantly, at the level of grades 4 and 5 (or 3 and 4) we can see a very serious, almost one-unit shift (here the standard errors are around 0.05). This means that passing the exam has not become significantly easier or more difficult for students, but students with lower ability levels than before 2015 are now able to score better on the subject.

Also note that the slope of the subject has increased, which seems to confirm that the subject cannot differentiate well below or above the sufficient/medium level, in the sense that students gaining grades 1-2, or students gaining grades 4-5, hardly differ from one another in ability.

Functional programming. As for functional programming, we see a slope similar to that of Law and Management, so its differentiating effect is less expressed (slope: 1.6-1.7); on the other hand, it exhibits a stability similar to that of discrete mathematics.

The reason why we chose to present this subject among the programming subjects is the typical change of the programming/computing subjects: in this subject we can see that while attaining the lower region became more difficult for students (it is harder to "pass" after 2016 than before), after they reach this level, it becomes easier for them to receive a better grade.

In other words, from difficulty level 1 to level 3, the subject is difficult in the sense that only the better (already positive ability level) students take the exam, and in their case an increase has been observed. The level required for achieving the distinguished grade has also been lowered. Thus the subject has a kind of centralizing effect; it has compressed students' achievement toward the middle.

In general, comparing the periods before and after 2016, we were able to observe that: (1) mathematics-related subjects (such as discrete mathematics) became achievable at lower levels of difficulty, and students with lower levels of ability attempted exams and did not regard not passing as a final failure; (2) programming/professional subjects (such as functional programming) became harder, more serious subjects, and retained their differentiating ability.

Discussion

The most notable result of this research is the effect the reform has had on the rate of retention. The 28% lower attrition rate among the students following the education reform suggests that interventions might help to reduce attrition. This reduced attrition, as a result of our reform, gave students the ability to continue their university education. The drop-out rate fell from 48% to 20%. However, we know that in CS programs we might lose these students later as well (for example, because they start working), but retention still has great economic and social significance. In answer to the first research question (i.e., to what extent will an education reform at a large public university affect the attrition of students?), it can be claimed that the education reform might have a substantial part in reducing dropout.
The education reform is still in progress. Each semester we analyze the effectiveness of our education reform, and we introduce new measures in order to reduce further dropout. Annual statistical measurements throughout the program support the effort to help the reform develop along with its students. Other programs in the literature, e.g. Humphrey (2005) and Kot (2014), report high attendance rates with promising results. However, a program which involves every freshman student has not yet been reported. Our education reform involved all first-year students, and participation was obligatory in order to prevent students from dropout, so every student participated in the learning methods course. Both the research results and the feedback from students are positive and encouraging.

Tinto (1975) introduced an interactional theory of student persistence in academic life. Our education reform has a developing impact on every factor that, according to Tinto (2012), could be important for student retention. This theory emphasized the importance of the students' personal characteristics, traits, experiences, and commitment. The study course develops interactions between the student and the institution and helps students to become more integrated, "fitted" academically and socially. The course supports these students in their efforts to become academically successful. The small groups provide a support system and strong relationships for the students. The structure of the course addresses many of the relevant academic skills affecting dropout, including time management skills and effective study skills. The ongoing, reflective, and responsive nature of the course allows mentor teachers to treat students as individuals within the groups. The education reform allows us to give students positive and proven guidance about how to become successful college students. It also enables members of the university to be involved in retention and student success in a meaningful way. The investment by the institution is huge, but we can experience the results immediately by conducting longitudinal research, and we can find new opportunities for the university to invest in developing student retention. All in all, the education reform appears to be a win-win program that could be used by other institutions as well to increase the retention of their students.

As we can see from the literature, most students drop out already in the first year of their studies. Our education reform can be one of the answers to this issue, because our program has been introduced for every freshman student, not just for students at risk. This way, with support from the mentors and by developing study skills, we can prevent our students from dropping out. According to our findings, we could retain 28% of our students.
The role of mathematics in dropout. According to the literature, e.g. Pearson and Miller (2012), one-third of students fail to complete a degree due to their inadequate knowledge in mathematics. Our findings suggest that intervening in the education system can have an effect on students' retention. In answer to the second research question (i.e., can we find any evidence of this in the subjects by analyzing the pattern of grades?), analyzing the pattern of grades helps to clarify the different success rates of students before and after the education reform (2016), because it clearly shows that more students have passed the subjects. After the education reform, the structure of passing the mathematical subjects changed: more students try and pass the exams. Our reform has changed the attitude of students towards mathematics-oriented subjects, because the success rate of passing these exams has risen. We fully agree with Giannakos et al. (2017), who highlight the importance of supporting high-quality education in order to retain more students in CS studies. However, this phenomenon is complex, and further investigations are needed to see how students' motivation can be maintained. It is worth revising the importance of mathematics in CS education and developing teaching methods based on the finding that different skills are needed to elaborate mathematical theories.

In our research we analyzed students' achievement from a new perspective: we applied IRT based on the Rasch model. We found different achievement patterns before and after the education reform. One of the most notable results is that reaching a passing grade has demonstrably become easier after the education reform, which means that students are more willing to take the exam. This means that we could interrupt the dropout process, which begins with students not attempting exams and ends with their leaving the university during the semester. In addition to this, there is another notable result: after the education reform, the structure of the subjects changed as well. Mathematics-related subjects became achievable at lower levels of difficulty, and thus students with lower levels of ability also took exams. Programming and professional subjects became harder, more serious subjects, and retained their differentiating ability.

These findings underline the importance of teachers: they should provide opportunities for students to develop their skills in order to fulfill the academic requirements. Students who struggle with mathematics could be identified by a learning system and could be given the opportunity to consult with teachers. As a consequence, we saw that students were likely to postpone mathematics subjects that required more complex knowledge until a later semester. This type of procrastination can easily lead to dropout. The reform could help students to stay and receive a degree, in order to increase the number of computer scientists in the economy. CS education has a serious responsibility for controlling students' attrition. The present study has important educational implications for universities in the field of CS, namely, that an education reform is worth introducing. The present research could highlight the effect of attending compulsory courses and learning methods. It seems that students with different abilities can succeed in fulfilling the academic requirements.
Conclusion

We addressed some important issues of CS retention in this paper, and we discuss some further solutions to these problems here. It is hoped that by identifying some of the major reasons for high attrition rates among students, efforts can be made to reduce them. We should pay more attention to students.

Much research has investigated higher education dropout, but little attention has been paid to intervention programs that are not voluntary. Such intervention programs may affect students' engagement in university activities and can support them in making their decision whether to stay or not. For the first time in the literature, we have introduced an education reform for every freshman: all the theoretical classes have been made obligatory, and an intervention program offering effective learning skills has been introduced to students who needed it. Our results suggest that we managed to improve student success. Introducing a new course as a compulsory item in the curriculum is a new phenomenon; through this we could reach those students who are not willing to participate in extra classes after school or not willing to ask for help, but who could be at risk of attrition. The implementation of the education reform could provide a further example of caring and develop effective communication between students and mentor students.

Preventing students' attrition and gaining more information about students' needs might result in a better understanding of those needs and in developing more interventions to retain students at the faculty.

Limitations of the study and future research. Despite the fact that this study presented interesting results, we believe that the conclusions derived from it should be interpreted carefully. Future research should be extended to additional variables, other than those taken into account in this study. Data analysis techniques should also be taken into consideration in order to evaluate the academic profile of students who dropped out or graduated in the previous years. The conclusions of the present paper have their own limits, since data were only collected from CS students in Hungary.

As for further research, we have already started to analyze the role of the different subjects in attrition. It is advisable to create a new curriculum for CS students and to rethink the logical order of the subjects.

Table 1: Descriptive statistics of students between 2010 and 2017.
Table 2: IRT model on typical subjects of the CS degree program.
Fig. 1: Difficulty levels of the subjects.
2021-02-12T14:10:24.933Z
2021-02-10T00:00:00.000
{ "year": 2021, "sha1": "97a7ed24c04b22ebf6c75aa9edb4240345127d89", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41599-021-00725-w.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "c811d36bf38ece5fd90ae9b4fd2b2c6ab62d19a2", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
260613795
pes2o/s2orc
v3-fos-license
Local rings of countable Cohen-Macaulay type

We prove (the excellent case of) Schreyer's conjecture that a local ring of countable Cohen-Macaulay type has at most a one-dimensional singular locus. Furthermore, we prove that the localization of a Cohen-Macaulay local ring of countable CM type is again of countable CM type.

Conjecture 0.1 ([12]). An analytic local ring over the complex numbers of countable CM type has at most a one-dimensional singular locus; that is, R_p is regular for all primes p with dim(R/p) > 1.

We verify Conjecture 0.1 more generally for all excellent CM local rings satisfying countable prime avoidance (Lemma 1.2). Some assumption of uncountability is necessary to avoid the degenerate case of a countable ring, which a fortiori has only countably many isomorphism classes of modules.

1. Schreyer's Conjecture

Definition 1.1. A Cohen-Macaulay local ring (R, m) is said to have finite (resp., countable) Cohen-Macaulay type if it has only finitely (resp., countably) many isomorphism classes of maximal Cohen-Macaulay modules.

For the proof of Schreyer's conjecture, we need the following well-known lemma.

Lemma 1.2 (countable prime avoidance [5, Lemma 3]; see also [13]). Let A be a Noetherian local ring which either is complete or has uncountable residue field. Let {p_i}, i = 1, 2, ..., be a countable family of prime ideals of A, a an ideal of A, and x ∈ A. Then x + a ⊆ p_j for some j whenever x + a ⊆ ⋃_i p_i.

Theorem 1.3. Let (R, m) be an excellent Cohen-Macaulay local ring of dimension d, and assume either that R is complete or that the residue field R/m is uncountable. If R has countable CM type, then the singular locus of R has dimension at most one.

Proof. Assume that the singular locus of R has dimension greater than one. Since R is excellent, this means that the singular locus is defined by an ideal J of height strictly less than d − 1. Let M_1, M_2, ... be a complete list of representatives for the isomorphism classes of indecomposable MCM R-modules. Consider the set

Λ = { p ∈ Spec(R) : Ext^1_R(M_i, M_j)_p ≠ 0 for some i, j, and dim(R/p) = 1 }.

Note that Λ is at most countable, and that J is contained in each p ∈ Λ. By countable prime avoidance (applied to R/J), the maximal ideal m is not contained in the union of all p in Λ, so there is an element f ∈ m outside every p ∈ Λ. Choose a prime q containing f and J such that dim(R/q) = 1; then of course q ∉ Λ.

Let X (resp. Y) be a (d − 1)-th (resp. d-th) syzygy of R/q. Then X and Y are both MCM R-modules, and we have a nonsplit short exact sequence

(*) 0 → Y → F → X → 0

with F free. By dimension shifting, Ext^1_R(X, Y) ≅ Ext^d_R(R/q, Y), so q is contained in the annihilator of Ext^1_R(X, Y). To see the opposite containment, note that since q contains J, R_q is not regular. The resolution of the residue field of R_q is thus infinite, and neither X_q nor Y_q is free, so (*) is nonsplit when localized at q. We can write both X and Y as direct sums of copies of the indecomposables M_i, and further write

Ext^1_R(X, Y) ≅ ⊕_{i,j} Ext^1_R(M_i, M_j)^{a_ij},

with all but finitely many of the a_ij equal to zero. Then q is the intersection of the annihilators of the nonzero Ext modules appearing in the above decomposition. Since q is prime, it must equal one of these annihilators, and then q ∈ Λ, a contradiction.

2. Localization of rings with countable CM type

Let (R, m) be an excellent local ring of countable CM type, and assume R either has an uncountable residue field or is complete. By Theorem 1.3, the dimension of the singular locus of R is at most one. Thus there are at most finitely many prime ideals p_1, ..., p_n such that R_{p_i} is not regular and p_i ≠ m. All such primes have dimension one, i.e., dim(R/p_i) = 1 for i = 1, ..., n.
To understand the structure of these rings, one wishes to know what type of singularity R has at each of these primes. A quick inspection of the list of examples given in [12] shows that in those examples R_{p_i} has finite CM type! Our next main result shows that countable CM type localizes; hence in general each R_{p_i} has countable CM type. Note also that in the case where R is complete, while R_{p_i} is no longer complete in general, it will have uncountable residue field.

Theorem 2.1. Countable CM type localizes: with R as above and p a prime ideal of R, the localization R_p has countable CM type.

Proof. Suppose not; then there is an uncountable family {M_α} of finitely generated R-modules whose localizations (M_α)_p are pairwise non-isomorphic indecomposable MCM R_p-modules. Each M_α admits an MCM approximation

(χ_α): 0 → Y_α → X_α → M_α → 0,

wherein X_α is a MCM R-module and Y_α has finite injective dimension [3]. Since there are uncountably many modules X_α, there must be uncountably many X_α of some fixed multiplicity. Fixing that multiplicity, and using that there are only countably many isomorphism classes of MCMs, we then find that there are uncountably many short exact sequences

(χ_β): 0 → Y_β → X → M_β → 0,

where X is a fixed MCM R-module, Y_β has finite injective dimension, and the M_β are among our original list of M_α. Since each (M_β)_p is a MCM R_p-module and (Y_β)_p has finite injective dimension over R_p, we have Ext^1_{R_p}((M_β)_p, (Y_β)_p) = 0. This follows from [11, Proposition 4.9]: if Y is a finitely generated module having finite injective dimension, then for all finitely generated R-modules M,

depth(M) + sup{ i : Ext^i_R(M, Y) ≠ 0 } = depth(R).

In particular, each extension (χ_β) splits when localized at p. This implies that (M_β)_p is a direct summand of X_p for each β. But over a local ring, a finitely generated module Q can have at most finitely many non-isomorphic direct summands.¹ Since there are uncountably many pairwise non-isomorphic (M_β)_p which must be summands of X_p, this contradiction proves the theorem.

The results above, together with known examples, suggest a plausible question: Let R be a complete local Cohen-Macaulay ring of countable CM type, and assume that R has an isolated singularity. Is R then necessarily of finite CM type?

We end the paper with an observation that having countable CM type descends from faithfully flat overrings. The method follows that of [14]. Let S be a faithfully flat overring of R of countable CM type, let Z_1, Z_2, ... be a complete list of the indecomposable MCM S-modules, and suppose that for each i there are an MCM R-module X_i and an S-module W_i with Z_i ⊕ W_i ≅ S ⊗_R X_i. For an indecomposable MCM R-module N, write

S ⊗_R N ≅ ⊕_i Z_i^{a_i},

where all but finitely many of the a_i are zero. We assume that a_i = 0 for i > n and write the sum as a finite one. Then

(S ⊗_R N) ⊕ W_1^{a_1} ⊕ ... ⊕ W_n^{a_n} ≅ S ⊗_R (X_1^{a_1} ⊕ ... ⊕ X_n^{a_n}),

so S ⊗_R N is a direct summand of S ⊗_R (X_1^b ⊕ ... ⊕ X_n^b), where b = max{a_i}. In other words, S ⊗_R N is in the "plus category" of S ⊗_R (X_1 ⊕ ... ⊕ X_n) (see [14]). By [14, Lemma 1.2], N is in the plus category of X_1 ⊕ ... ⊕ X_n, and by [14, Theorem 1.1], there are only finitely many possible such N. Since the set of all finite subsets of {X_1, X_2, ...} is a countable set, this shows that R has only countably many indecomposable MCM modules up to isomorphism.
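For readability, the displayed isomorphisms in the descent argument above can be typeset as follows; the notation (Z_i, W_i, X_i, a_i, b) is the notation reconstructed in the paragraph above, so this block should be read as a summary under those assumptions rather than as an independent statement.

```latex
\[
  S \otimes_R N \;\cong\; \bigoplus_{i=1}^{n} Z_i^{a_i},
  \qquad
  Z_i \oplus W_i \;\cong\; S \otimes_R X_i,
\]
\[
  (S \otimes_R N) \oplus W_1^{a_1} \oplus \cdots \oplus W_n^{a_n}
  \;\cong\;
  S \otimes_R \bigl( X_1^{a_1} \oplus \cdots \oplus X_n^{a_n} \bigr),
\]
% hence S \otimes_R N is a direct summand of
% S \otimes_R (X_1^{b} \oplus \cdots \oplus X_n^{b}) with b = \max_i a_i.
```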
2019-04-12T09:10:28.717Z
2002-05-06T00:00:00.000
{ "year": 2002, "sha1": "108642e9e0da9876c53acbebc3c16ba7a5c6d962", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "108642e9e0da9876c53acbebc3c16ba7a5c6d962", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
4588040
pes2o/s2orc
v3-fos-license
Tau protein liquid–liquid phase separation can initiate tau aggregation

Abstract

The transition between soluble intrinsically disordered tau protein and aggregated tau in neurofibrillary tangles in Alzheimer's disease is unknown. Here, we propose that soluble tau species can undergo liquid–liquid phase separation (LLPS) under cellular conditions and that phase-separated tau droplets can serve as an intermediate toward tau aggregate formation. We demonstrate that phosphorylated or mutant aggregation-prone recombinant tau undergoes LLPS, as does high molecular weight soluble phospho-tau isolated from human Alzheimer brain. Droplet-like tau can also be observed in neurons and other cells. We found that tau droplets become gel-like in minutes, and over days start to spontaneously form thioflavin-S-positive tau aggregates that are competent to seed cellular tau aggregation. Since analogous LLPS observations have been made for FUS, hnRNPA1, and TDP43, which aggregate in the context of amyotrophic lateral sclerosis, we suggest that LLPS represents a biophysical process with a role in multiple different neurodegenerative diseases.

It is clear that both referees appreciate the findings. However, there are also several issues that have to be sorted out in order to consider publication here. This concerns both the in vitro and in vivo data, but the referees offer constructive comments on how to resolve the points. Should you be able to address the comments raised, I would like to invite you to submit a suitably revised manuscript. Let me know if we need to discuss anything further.

Wegmann et al present results on MT-binding, full-length, phosphorylated tau, best known for its role in tangle formation in Alzheimer's disease, and present very interesting evidence that tau is competent for phase separation both in vitro and in cells. The live cellular microscopy images of tau collections/assemblies are interesting and intriguing, though my concern is that overexpression of fluorescent-protein-tagged tau is not representative of the state in cells. More importantly for this review, the manuscript makes many biophysical claims based on in vitro experiments that in my opinion are either not well designed (they cannot be used to make these conclusions) or not well explained, and therefore the claims are not justified. I see these as critical issues that need to be addressed to support the conclusions that tau LLPS is in fact regulated in the way described in the results and discussion and is physiologically relevant.

Major concerns

1. In vitro tau LLPS. Chris Dobson demonstrated that effectively any protein could form amyloid (even super-stable lysozyme; except perhaps very proline-rich or repeat proteins) if the correct solution conditions were found. Certainly, truly any protein can form amorphous assemblies if the solvent conditions are chosen correctly. Therefore, it is reasonable to assume that essentially any IDP can phase separate. (Even folded proteins phase separate in crystallography trays.) Therefore, tau phase separation should be demonstrated first in cells in the manuscript, and the emphasis should be placed there. Additionally, in vitro phase separation here is always initiated with polymeric crowding agents. This is not acceptable, despite what has been previously reported. Here it is unclear if tau can phase separate on its own, and tau has a well-described MT-binding function, unlike the LC domains that have largely dominated the attention of phase separation reports.
The authors here need to show that tau can phase separate without the addition of polymers that may cause phase separation by mediating multivalent weak interactions with tau instead of the excluded volume effects that have been assumed here but not demonstrated. If the authors wish to show data with Ficoll, they should show that fluorescently tagged Ficoll doped into the 12.5% Ficoll does not partition into the droplets with tau. If it does partition into the droplets, then how can we know if tau is forming tau-tau interactions? In addition, if tau needs excluded volume effects to phase separate, they should either raise the tau concentration (after all, tau is very soluble, as they say) and phase separate without crowders, or they should crowd with monomeric crowders: instead of Ficoll they should use sucrose (Ficoll is a sucrose polymer) or glucose (instead of dextran, which is a glucose polymer) or ethylene glycol (PEG is the ethylene glycol polymer) (see Pielak's work http://www.pnas.org/content/113/7/1725.abstract), as these monomers at ~10% w/v will have the same crowding effect but will not be substrates for multivalent interactions. The authors may also wish to look at the edge of an evaporating solution drop (without crowders), where protein molecules are concentrated, to see if such superconcentration results in phase separation (or simply aggregation).

2. Tau concentration in droplets. The authors state that "at 2uM starting p-tau441 concentration, the addition of 10% PEG led to a concentration of p-tau441 in the droplets of ~20uM". There are no details about this calculation presented. The droplets appear to be >200x the background in the right-hand inset; what is the concentration left in the supernatant as determined by other means? That will at least provide some baseline information, even if the fluorescence in the background is not signal but rather noise. The authors should show their calibration curve against free Alexa. The authors should not use free Alexa as a control, because dye photo properties (even emission wavelengths) change upon conjugation. They should at least use tau in the non-LLPS state at a variety of concentrations to form a multi-point calibration curve (see the calibration sketch appended after these reports). Yet: what happens to the fluorescence inside droplets; is the quantum yield expected to change in this dense environment? The authors need to quantify the number of fluorophores attached per molecule and the dilution factor (fluorescently labeled % of p-tau441). The authors should specifically state the concentration of the fluorophore in the droplets: do fluorophore-fluorophore photophysical (dipolar) interactions affect the expected fluorescence in this state? The authors should also calculate the mg/ml of this assembly and discuss whether this is biophysically reasonable. There is one report from Brangwynne of low-uM-range droplets by ucFCS, but that report is subject to the same questions as those brought up above.

3. Viscosity in droplet. The authors use AquaVis, which is sensitive to molecular rotation, to measure viscosity. I cannot find any information on AquaVis in the methods section, nor a reference in the paper (perhaps I missed it), nor even in an online Google search. Second, the authors appear to conclude that the high fluorescence in the droplet suggests the viscosity is higher there.
I certainly believe the viscosity is higher, but the increased fluorescence likely comes from a) partitioning of the dye into this hydrophobic-protein-containing droplet and b) binding to the high concentration of hydrophobes slowing the presumable rotation about the carbon-carbon bond - rather than a higher viscosity slowing rotation.

4. "The hexanediol sensitivity of p-tau441 phase separation can likely be attributed to beta strand interactions". This claim is not supported. The reference (Panas) is a review article on stress granules that (correctly, as far as I understand the consensus) suggests that "1,6-hexanediol, an aliphatic alcohol that disrupts weak hydrophobic interactions, dissolves liquid droplets without affecting insoluble aggregates". It is insoluble aggregates (at least the amyloid fibril variety) that are typically thought to be stabilized by beta-sheet interactions. Hexanediol does not block the hydrogen bonds associated with beta-sheet formation. The authors here present no evidence that beta-sheets are involved in LLPS. They could make variants that introduce proline residues, which break beta-sheets but are common in phase-separating domains, to try to isolate the contribution of beta-sheets, but this would be a challenging task.

5. In-cell LLPS: "Notably because overexpression of GFP-tau leads to intense overall fluorescence of the entire cell body and processes, GFP-tau droplets can only be identified in cells with low GFP-tau expression". To me this seems to argue against LLPS in cells, as clusters should form, leaving the regions without LLPS "granules" at the same concentration (the critical concentration at the cell condition) regardless of expression level.

6. Droplet size - on page 13, the authors compare variants based on droplet size. The droplet size is a kinetic effect, as droplets will fuse over time and will also undergo Ostwald ripening - which may be halted by gel formation. The authors should use the approach of Mackenzie et al, Neuron 2017, on TIA-1 variants to estimate the critical concentration for droplet formation (the left side/arm of the phase diagram) to see if more protein is left in the cleared supernatant for different variants.

7. Full droplet bleach. The partial droplet bleach results are clear. The changes in full droplet bleach do not indicate "that the liquid droplets matured rapidly into hydrogels of high viscosity" - indeed, the small-portion droplet bleach experiments do not support the "maturation" or gel formation until 60 minutes. The full droplet bleach lack of recovery may arise due to suppression of exchange kinetics across the interface, which could be due to a decrease in the availability of free monomers (the droplets get bigger and so the transport across the boundaries slows) or some type of surface hardening on the interface. The authors cannot distinguish these effects, but it cannot be due to maturation, as the small-portion bleach results demonstrate clearly.

8. Thioflavin S. The authors claim that thioflavin S reports on beta-sheet formation. Yes, it and ThT are used for amyloid fibril detection, yet thioflavins are also rotor dyes that fluoresce due to quenched rotation when bound. Indeed, thioflavins also fluoresce when in a high-viscosity solvent (http://pubs.acs.org/doi/full/10.1021/jp805822c), so this experiment is very much like the AquaVis experiment and cannot tell the presence of beta-sheets. X-ray diffraction and TIRF could be used to indicate beta-sheet structures, as demonstrated by McKnight.

9. Phosphorylation effect on tau LLPS.
The authors should add dephospho-tau to Figure 6 so that the differences are not due to source material but rather due to phospho-state.

10. The authors suggest a "conformational shift when negative charges are introduced". What are they referring to? No experiments on conformations are presented.

11. The model of phase separation via the N-terminal domain would suggest that tau on the surface of the MT would be locally phase separated, confined to a 1-dimensional fiber. The authors should discuss this. This would locally hyperconcentrate the N-terminal domain. The reference to membraneless organelles here as polyelectrolyte hydrogels or brushes is not consistent with LLPS organization, but could be consistent with a brush structure on an MT - though that may or may not be phase separated.

Referee #2:

In their study, Wegmann and colleagues address the recent concept of liquid-liquid phase separation (LLPS) as an intriguing pathogenic mechanism for tau aggregation in tauopathies, using in vitro and cellular systems, which are complemented by the analysis of human and mouse tissue. I generally like the study, which used a vast range of complementary techniques and makes a convincing point that LLPS can initiate tau aggregation in AD. My major concerns are about the analysis/claims of tau concentration and phosphorylation pattern and the in vivo part, which in my view is less convincing (the claim in the abstract that droplets are not only observed in culture but also in the intact brain). Besides that, I am suggesting a few experiments that can be easily done and would clarify a few issues. The figures are well-crafted but occasionally would benefit from additional information in the corresponding legends. My specific comments are as follows:

1. Page 7, Fig 1C: A significant part of this paper is based on making claims about the role of phosphorylation of tau in driving LLPS. The authors cite a previous paper of tau preparation in insect cells that has shown phosphorylation of tau at 18 sites. How can they be sure that their current preparation shows the same pattern? Moreover, it is not only important which sites are phosphorylated (most likely using more sensitive methods would pick up more sites), but what the stoichiometry is. Both the phosphorylation sites and the stoichiometry (ratio of moles of phosphate per mole of tau) need to be determined, or the statement needs to be modified.

2. Page 8: What is a physiologically critical tau concentration and how is this being determined? The authors claim (Fig. 2F) that 1.5 µM is 'critical' but they only tested 0, 1, 2 ... µM, so this is really difficult to say. I guess it also needs a better explanation in the legend and of what the green and black labeling of the dots means. The statement on this page that tau is restricted to the axon is not entirely correct, and the question is whether the cited 30-50% of unbound free tau (2005 ref) still holds up considering that tau binds to and interacts with so many proteins.

3. Page 9: Estimation of intraneuronal tau concentrations: This is an interesting approach, but we are only given the end results without the figures for the intermediate steps. I find this entire paragraph problematic, especially as even a correct estimate would not inform on local (i.e. subcellular) protein concentration.

4. Page 9, Figure S3: The C-terminal half of tau should be added as a 'negative' control for p-tau256.
5. Page 10, Figure S3I: Please add non-reduced, unfractionated brain lysate such as to provide an estimate of the relative abundance of the N-terminal fragments in human brain.

6. Page 10, Section 'Tau phase separation in neurons'. Here, I have several comments: The authors want to test 'whether tau can form droplets under physiological conditions' and use an overexpression of GFP-tagged tau. I think it would be better to drop the reference to 'physiological conditions' and refer to this as 'in neurons'. The claim of 'droplet-like tau accumulations' to me is an over-statement, as there is simply a tau accumulation, which could be anything. Wegmann and colleagues use FRAP to assess the behavior of the 'droplets', but FRAP should also be applied to 'droplet-free areas'. If the recovery rate is the same, this would not substantiate the claim. What is also ignored is that in any area of bleaching there can be a mixture of tau populations. Further down the text, 'droplet-like' is replaced by 'droplets' without any proof that droplets actually form in vivo. It can also not be claimed that the droplets in the cell are 'in a viscoelastic hydrogel state'. This can be speculated but should go into the discussion. The Lim et al. reference and statement are out of place.

7. Page 12, Figure 4: Fig 4A: The phosphorylation pattern is mostly assumed but not shown. Fig 4B: There is a huge variability in total tau (between blots and between transfectants) indicating massive differences in tau levels, so the observed effects could be because of phosphorylation and levels.

Minor comments:

1. M. Materials and methods are presented after the figure legends and before the figures. There is some inconsistency in the referencing style.

2. M. Bottom of page 4: Low complexity domain (LCD): The authors make the point that tau is a protein with low amino acid variance, intrinsic disorder and inhomogenous charge or polarity distribution. This led them to postulate that tau could undergo phase separation. But then they argue that tau has no defined LCD domain. This discrepancy needs to be clarified.

5. M. Page 9: 'in contrast to most other LLPS proteins' - please provide info on which ones do and which ones don't.

6. M. Page 14. Some statements should be toned down to 'this SUGGESTED that the exchange ...'; 'this MAY ALSO EXPLAIN ...'. Figure 5A,B: There seems to be a mix-up, as the actual droplets and the bleached area look the same, whereas in A the entire field (i.e. the entire drop) is bleached.

We thank the reviewers for the critical reading and the constructive comments on our manuscript on tau liquid-liquid phase separation. We addressed all concerns raised and performed a set of additional experiments in order to do so. We also changed the manuscript structure and, where needed, the text in order to be clearer in our description and discussion of the data. We hope that the revised version of our manuscript now fulfills the scientific criteria to be published in The EMBO Journal. In the following you find our point-by-point answers to the reviewer comments, including a short description of the new data and the changes made in the manuscript.

Referee #1: Wegmann et al present results on the MT-binding, full-length phosphorylated tau, best known for its role in tangle formation in Alzheimer's Disease. They present very interesting evidence that tau is competent for phase separation both in vitro as well as in cells.
The live cellular microscopy images of tau collections/assemblies are interesting and intriguing, though my concern is that overexpression of fluorescent protein tagged tau is not representative of the state in cells. More importantly for this review, the manuscript makes many biophysical claims based on in vitro experiments that in my opinion are either not well designed (they cannot be used to make these conclusions) or not well explained, and therefore the claims are not justified. I see these as critical issues that need to be addressed to support the conclusions that tau LLPS is in fact regulated in the way described in the results and discussion and physiologically relevant.

Authors: At first, we want to thank the reviewer for the very constructive and scientifically outstanding review of our manuscript. We truly appreciate that the reviewer pointed out several of the weak points and discussed the concerns in the reviewer's comments, and we agree that some major points describing the biophysical process of LLPS needed to be further investigated and clarified. We tried to address all (major and minor) concerns raised by performing additional experiments, and we think that the manuscript benefitted a lot from the additional data we added and the way we restructured and re-phrased the text in Results and Discussion to better reflect our observations.

Major concerns

1. In vitro tau LLPS. Chris Dobson demonstrated that effectively any protein could form amyloid (even super-stable lysozyme, except perhaps very proline-rich or repeat proteins) if the correct solution conditions were found. Certainly, truly any protein can form amorphous assemblies if the solvent conditions are chosen correctly. Therefore, it is reasonable to assume that essentially any IDP can phase separate. (Even folded proteins phase separate in crystallography trays.) Therefore, tau phase separation should be demonstrated first in cells in the manuscript and the emphasis should be placed there.

Authors: We restructured the manuscript and now start with showing that p-tau LLPS can be observed in neurons in vitro, and that droplet-shaped tau accumulations can be found in the cytosol in the living brain of mice. We also made clear that this is happening (for now) in tau-overexpressing experimental systems, although the data from AD brain suggest that a similar mechanism of tau LLPS may also be possible at physiological tau levels in the human brain, or more specifically, when the cytosolic phospho-tau concentration is abnormally elevated in AD. This is also further supported by the conditions we chose to investigate tau LLPS in vitro in more detail: instead of the harsh conditions needed to assemble soluble proteins into amyloids, we used near-physiological tau protein concentrations (1-10 µM) and buffer conditions (50-150 mM NaCl, pH 7.4) to initiate tau LLPS and subsequent aggregation. It is accepted that tau is an amyloid-forming protein both in vivo and in vitro, and we also show that under the usually mild conditions used to trigger tau assembly into amyloid-like fibrils in vitro, which is the addition of the organic polyanionic polymers heparin or RNA, tau LLPS can readily be observed. We thus would argue that tau LLPS and aggregation cannot be compared to the general phenomenon of any given protein assembling under harsh conditions. And in the case of LLPS, it may even be interesting to raise the idea that LLPS could be a general (functional or dysfunctional) common mechanism of IDPs.
Additionally, in vitro phase separation here is always initiated with polymeric crowding agents. This is not acceptable, despite what has been previously reported. Here it is unclear if tau can phase separate on its own; moreover, tau has a well-described MT-binding function, unlike the LC domains that have largely dominated the attention of phase separation reports. The authors here need to show that tau can phase separate without addition of polymers that may cause phase separation by mediating multivalent weak interactions with tau instead of excluded volume effects, which have been assumed here but not demonstrated. If the authors wish to show data with Ficoll, they should show that fluo-tagged Ficoll doped into the 12.5% Ficoll does not partition into the droplets with tau. If it does partition into the droplets, then how can we know if tau is forming tau-tau interactions? In addition, if tau needs excluded volume effects to phase separate, they should either raise the tau concentration (after all, tau is very soluble as they say) and phase separate without crowders, or they should crowd with monomeric crowders - instead of Ficoll they should use sucrose (Ficoll is a sucrose polymer) or glucose (instead of dextran, which is a glucose polymer) or ethylene glycol (PEG is the ethylene glycol polymer) (see Pielak's work http://www.pnas.org/content/113/7/1725.abstract), as these monomers at ~10% w/v will have the same crowding effect but will not be substrates for multivalent interactions. The authors may also wish to look at the edge of an evaporating solution drop (without crowders), where protein molecules are concentrated, to see if such superconcentration will result in phase separation (or simply aggregation).

Authors: We thank the reviewer for being critical and pointing out these concerns about the driving forces behind tau LLPS in crowding conditions, and for the constructive suggestion of experiments to address the open questions. We performed several experiments to answer these questions: First, to shed light on the question of whether the polymeric crowders cause tau LLPS by templating multivalent interactions and co-separation into the tau droplets, we spiked fluorescently labeled dextrans of different molecular weights (3 kDa, 20 kDa, 70 kDa) into dextran (70 kDa) induced p-tau LLPS setups and imaged the droplets by confocal microscopy over a time of 5 to 60 min. We observed that immediately after tau LLPS initiation by dextran, all three dextrans were excluded from the tau droplets, which did not change over time. This suggested that tau LLPS triggered by the polymeric crowder dextran relies on tau-tau interactions induced by excluded volume effects that lead to high local tau concentrations, rather than on interactions of dextran molecules with tau. The results are shown in Supplementary Figure S4C. Then, we tested if crowding induced by the monomeric entities of the polymeric crowders dextran and PEG can initiate tau LLPS as well, as was suggested by the reviewer. At 10% (w/v), we only observed LLPS after the addition of dextran and PEG, but not in the presence of glucose or ethylene glycol (Supplemental Figure S4F), suggesting that excluded volume effects caused by the polymeric crowders are needed to supersaturate tau and thereby initiate tau LLPS. Next, we tested if p-tau can phase separate in the absence of molecular crowding agents at very high concentrations by imaging an air-exposed droplet of 100 µM p-tau.
In this setting, the droplet surface was exposed to evaporation, which led to a gradual local increase of the tau concentration and a supersaturation of tau at the droplet-air interface. We observed that immediately after droplet deposition, tau LLPS started to occur from the outside to the inside of the droplet, suggesting that tau can undergo phase separation at high concentrations of ≥50 µM. These results are shown in the Supplemental Figures. In the revised version of the manuscript, all of the above results are mentioned in the text, and the effect of supersaturation as a possible initiator of tau LLPS and subsequent aggregation is mentioned in the Discussion.

2. Tau concentration in droplets. The authors state that "at 2uM starting p-tau441 concentration, the addition of 10% PEG led to a concentration of p-tau441 in the droplets of ~20uM". There are no details about this calculation presented. The droplets appear to be >200x the background in the right-hand inset - what is the concentration left in the supernatant as determined by other means - that will at least provide some baseline information even if the fluorescence in the background is not signal but rather is noise. The authors should show their calibration curve against free Alexa. The authors should not use free Alexa as a control because dye photo properties (even emission wavelengths) change upon conjugation. They should at least use the tau in a non-LLPS state at a variety of concentrations to form a multipoint calibration curve. Yet: what happens to the fluorescence inside droplets - is the quantum yield expected to change in this dense environment? The authors need to quantify the number of fluorophores attached per molecule, the dilution factor (fluo-label % of p-tau441). The authors should specifically state the concentration of the fluorophore in the droplets - do fluorophore-fluorophore photophysical (dipolar) interactions affect the expected fluorescence in this state? The authors should also calculate the mg/ml of this assembly and discuss if this is biophysically reasonable. There is one report from Brangwynne of low-µM-range droplets by ucFCS, but that report is subject to the same questions as those brought up above.

Authors: We apologize for the previously incomplete presentation of the data on the tau concentration in PEG-induced droplets in vitro. In the revised version of the manuscript, we added detailed information about the experimental procedure to the Methods part and report all the calibration and measurement steps in an additional supplemental figure panel (Supplemental Figure S4B). To confirm our previous results, we redid the entire experiment with a proper calibration of the fluorescence intensity in the recorded images against different concentrations of Alexa568-labeled p-tau441 in the absence of LLPS (no PEG). We verified that the Alexa568 fluorophore does not majorly change its fluorescence characteristics in the presence of 0-20% PEG. We determined the average number of Alexa568 fluorophores per molecule of p-tau441 as [tau:Alexa568] = [1:1.45]. With this information and the imaging of >100 p-tau441-a568 droplets, we come to the very similar conclusion that at a starting concentration of 5 µM p-tau441, the tau concentration in the droplets (as measured by fluorescence intensity of p-tau441-a568 droplets by image analysis) is ~22 µM (~1.25 mg/ml p-tau441-a568; min = 0.64 mg/ml; max = 2.01 mg/ml). The concentration of the remaining tau in the non-droplet phase is ~3 µM (~0.17 mg/ml).
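(As a quick plausibility check of these numbers, the µM-to-mg/ml conversion behind them can be reproduced in a few lines of Python. The sketch below is illustrative only; the molecular weight it assumes, ~45.9 kDa for the unmodified 2N4R tau441 sequence, slightly underestimates the mass of the phosphorylated, Alexa568-labeled protein, so the reported ~1.25 mg/ml implies a somewhat higher effective MW.)

```python
# Plausibility check for the droplet concentration numbers above: convert
# µM to mg/ml and compute the droplet/dilute-phase enrichment.
# Assumption: MW ~45.9 kDa for the unmodified 2N4R tau441 sequence;
# phosphorylation and ~1.45 Alexa568 labels per molecule add a few kDa.

MW_TAU441_DA = 45_900  # g/mol (assumed, bare sequence)

def micromolar_to_mg_per_ml(conc_um: float, mw_da: float = MW_TAU441_DA) -> float:
    """Convert a protein concentration from µM to mg/ml."""
    return conc_um * 1e-6 * mw_da  # (mol/L) * (g/mol) = g/L = mg/ml

droplet_um, dilute_um = 22.0, 3.0  # values reported in the response above
print(f"droplet phase: {micromolar_to_mg_per_ml(droplet_um):.2f} mg/ml")  # ~1.01
print(f"dilute phase:  {micromolar_to_mg_per_ml(dilute_um):.2f} mg/ml")   # ~0.14
print(f"enrichment:    {droplet_um / dilute_um:.1f}x")                    # ~7.3x
```

With the bare-sequence MW, both computed values fall within the stated min-max range (0.64-2.01 mg/ml).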
These new data are presented in the text and in Figure 2B, and the data are discussed in the Discussion part.

3. Viscosity in droplet. The authors use AquaVis, sensitive to molecular rotation, to measure viscosity. I cannot find any information on AquaVis in the methods section, nor a reference in the paper (perhaps I missed it), nor even in an online Google search. Second, the authors appear to conclude that the high fluorescence in the droplet suggests the viscosity is higher there. I certainly believe the viscosity is higher, but the increased fluorescence likely comes from a) partitioning of the dye into this hydrophobic-protein-containing droplet and b) binding to the high concentration of hydrophobes slowing the presumable rotation about the carbon-carbon bond - rather than a higher viscosity slowing rotation.

Authors: We apologize for the sloppy referencing of the viscosity measurement dye Viscous Aqua (Ursa Bioscience, Maryland, USA). However, after considering the interpretation of the reviewer, that the dye becomes concentrated in the tau droplets due to hydrophobic interactions with the concentrated protein, we agree that we cannot at this point conclude or prove the viscosity in the tau droplets. We thus decided to take the Viscous Aqua data out of the manuscript and deleted the statement about the increased viscosity in the droplets, also because the FRAP data actually already make that point. In fact, the Viscous Aqua and Thioflavin S fluorescence in the freshly formed droplets both indicate the retention of these hydrophobic dyes in the droplets; and we further verified this idea with results showing also the retention of Methylene Blue in the droplets immediately after formation. These results are included in the revised version of the manuscript in Supplemental Figure S4D.

4. "The hexanediol sensitivity of p-tau441 phase separation can likely be attributed to beta strand interactions". This claim is not supported. The reference (Panas) is a review article on stress granules that (correctly, as far as I understand the consensus) suggests that "1,6-hexanediol, an aliphatic alcohol that disrupts weak hydrophobic interactions, dissolves liquid droplets without affecting insoluble aggregates". It is insoluble aggregates (at least the amyloid fibril variety) that are typically thought to be stabilized by beta-sheet interactions. Hexanediol does not block the hydrogen bonds associated with beta-sheet formation. The authors here present no evidence that beta-sheets are involved in LLPS. They could make variants that introduce proline residues, which break beta-sheets but are common in phase-separating domains, to try to isolate the contribution of beta-sheets, but this would be a challenging task.

Authors: We agree with the reviewer that, although we observe the inhibiting effect of hexanediol on full-length p-tau LLPS, we do not know if beta-strand interactions per se are affected. This is a speculation based on previous publications (Eckermann et al, 2007; von Bergen et al, 2001, 2005) that showed that hydrophobic beta-strand interactions in the tau repeat domain play a major role in the aggregation of tau. Based on this knowledge, we speculated that these interactions may also drive or at least influence tau LLPS.
Because we cannot clearly prove the relation between beta-strand interactions and tau LLPS at this point, we now mention this interpretation as one possible mechanism of the inhibiting effect of hexanediol. This interpretation is supported by the finding that in mutants with enhanced beta-strand propensity in the repeat domain (e.g. DeltaK280; von Bergen et al, 2000), LLPS can occur even in the absence of phosphorylation, and that mutants breaking this enhanced beta-strand propensity through proline insertions in the beta-strand motif (e.g. DeltaK280/PP; Eckermann et al, 2007) abolish tau LLPS. We hope that this idea is more precisely described and better discussed in the revised version of the manuscript. We also made clear that further experiments will be needed to examine the role of particular intra- and intermolecular tau:tau interactions in the process of tau LLPS.

5. In-cell LLPS: "Notably because overexpression of GFP-tau leads to intense overall fluorescence of the entire cell body and processes, GFP-tau droplets can only be identified in cells with low GFP-tau expression". To me this seems to argue against LLPS in cells, as clusters should form, leaving the regions without LLPS "granules" at the same concentration (the critical concentration at the cell condition) regardless of expression level.

Authors: We agree with the reviewer that this statement is somewhat confusing when assuming that tau LLPS is dependent mainly on tau concentration. However, the tau distribution in neurons is per se very heterogeneous (normally by far higher in axons than in dendrites and the soma; de novo tau synthesis was reported to occur in dendrites upon Abeta exposure (Zempel & Mandelkow, 2015; Li & Götz, 2017)), and tau LLPS is also dependent on stress conditions and highly dependent on local and transient phosphorylation. While the dependency of tau LLPS on phosphorylation becomes clear from the data in this manuscript, it is an entirely separate and very complex issue to understand the dynamics of phosphorylation in the neuron, a problem we and multiple research groups currently work on. To remove the confusion caused by the statement pointed out by the reviewer, we added a sentence explaining this issue of local tau concentration and phosphorylation and the dependency of tau LLPS on these parameters.

6. Droplet size - on page 13, the authors compare variants based on droplet size. The droplet size is a kinetic effect, as droplets will fuse over time and will also undergo Ostwald ripening - which may be halted by gel formation. The authors should use the approach of Mackenzie et al, Neuron 2017, on TIA-1 variants to estimate the critical concentration for droplet formation (the left side/arm of the phase diagram) to see if more protein is left in the cleared supernatant for different variants.

Authors: We absolutely agree with the reviewer that droplet growth is a result of various contributing and overlapping effects, including Ostwald ripening, fusion/fission, and gelation. The detected growth differences between the phosphorylation states of tau are thus likely also due to a combination of these factors. We also agree that it would be of interest to sort out in more detail which biophysical parameters during LLPS and the subsequent ripening of the droplets change with alterations in the phosphorylation.
However, at this point in our studies, the complexity of the biophysical mechanisms behind LLPS and droplet phase transitions, as well as the complex biology behind tau phosphorylation and its relevance for tau function and molecular behavior, prevents us from analyzing all the details behind the observation that phosphorylation obviously has a large impact on tau LLPS. To clarify the raised question of the critical LLPS concentration of the phosphorylation forms of tau, we followed the reviewer's advice and performed experiments in analogy to Mackenzie et al, Neuron 2017, on TIA-1, in which we aimed to separate formed droplets from the non-droplet phase by simple centrifugation (20,000 g for 15 min at room temperature). Measuring the p-tau441 concentration in the soluble phase before (no PEG) and after LLPS (+10% PEG, supernatant), we were indeed able to detect a reduction of tau in the soluble non-droplet phase (~3-fold reduction at a starting concentration of 10 µM p-tau441) by protein determination at 280 nm. Unfortunately, we were unable to detect changes in the other tested phospho-tau constructs (E17, MARK-tau441), and when checking the droplet phase (pellet after centrifugation) and non-droplet phase by microscopy, no enrichment of tau droplets in the pellet could be detected for these constructs. This inefficient separation could be explained by the large difference in droplet sizes between p-tau441 and E17 and MARK-tau441, which drastically changes their sedimentation behavior; also, the droplet phase stability may be different between the phospho-tau constructs. As much as we want to sort out these questions, we believe that it will take an extended amount of time and that such an aim can be seen as a whole new project in itself. In fact, we are already performing a complex experimental setup with multiple different approaches to precisely evaluate the impact of individual tau phosphorylation - and other PTMs - on tau LLPS behavior. For example, we are currently preparing a mutant tau library, in which all phosphorylation sites relevant for function and misfunction/aggregation of tau are systematically disabled or pseudophosphorylated, and plan to carefully characterize these mutants with different biophysical and protein biochemical methods to sort out the role of cellular PTMs on tau LLPS. However, to account for the raised concerns of the reviewer in the manuscript, we added a sentence to the text discussing the issue and analyzed the number of droplets per volume and the volume fraction in z-stacks recorded for each construct (Figure 3D) in order to have at least a qualitative idea about the amount of tau phase separating into droplets.

7. Full droplet bleach. The partial droplet bleach results are clear. The changes in full droplet bleach do not indicate "that the liquid droplets matured rapidly into hydrogels of high viscosity" - indeed, the small-portion droplet bleach experiments do not support the "maturation" or gel formation until 60 minutes. The full droplet bleach lack of recovery may arise due to suppression of exchange kinetics across the interface, which could be due to a decrease in the availability of free monomers (the droplets get bigger and so the transport across the boundaries slows) or some type of surface hardening on the interface. The authors cannot distinguish these effects, but it cannot be due to maturation, as the small-portion bleach results demonstrate clearly.

Authors: Thank you for pointing out our simplified interpretation.
We changed this statement in the text. We agree with the reviewer and would also interpret the lack of recovery after full-droplet bleach as some kind of surface hardening at the interface; this idea is in part supported by the observed shape changes of droplets (deviation from spherical shape), as shown for the p-tau constructs in Figure 3D, and by the growth of tau aggregates on the interface of the droplets (Figure 4D). We now mention these possible explanations in the Discussion.

8. Thioflavin S. The authors claim that thioflavin S reports on beta-sheet formation. Yes, it and ThT are used for amyloid fibril detection, yet thioflavins are also rotor dyes that fluoresce due to quenched rotation when bound. Indeed, thioflavins also fluoresce when in a high-viscosity solvent (http://pubs.acs.org/doi/full/10.1021/jp805822c), so this experiment is very much like the AquaVis experiment and cannot tell the presence of beta-sheets. X-ray diffraction and TIRF could be used to indicate beta-sheet structures, as demonstrated by McKnight.

Authors: We absolutely agree with the reviewer that Thioflavin S does not only function as a beta-sheet detection compound, but also partitions into droplets (likely due to hydrophobic interactions) and hence produces enhanced fluorescence in viscous droplets even in the absence of beta-sheet structures (see Figure 4E). However, Thioflavin S has a high preference for binding to amyloid and amyloid-like beta-sheet-rich macromolecular assemblies and exhibits a ~10-fold increase in fluorescence upon binding these structures. It thus became the classic amyloid detection dye used in neurological research over multiple decades, and has undoubtedly proven its usefulness to specifically detect amyloid plaques as well as tau aggregates in neurofibrillary tangles, both in vitro and in the brain. While being aware of the "unspecific" partitioning of ThioS into tau droplets, we would argue that the very intense ThioS fluorescence of the tau aggregates that form on the droplet surfaces is a strong indication of beta-sheets in these tau aggregates. The beta-sheet content of tau aggregates remains difficult to detect even in pure paired helical filament preparations assembled in vitro. This is likely because the actual beta-strand motifs are very short (a hexapeptide motif at the beginning of repeat 3), and beta-sheets formed by this motif are detectable by CD, FTIR and X-ray in "clean" amyloid fibrils made from tau constructs containing only the motif or the repeat domain (von Bergen et al, 2005), but barely in full-length tau amyloid assemblies. In the current version of the manuscript, we made sure to better explain the ambiguity between ThioS fluorescence intensity in viscous droplets and in beta-sheet-rich aggregates, and we toned down our statement on the beta-sheet content of the tau aggregates on tau droplets. We also added a Supplemental Figure S4D, in which we show that other dyes like Methylene Blue and Viscous Aqua also co-partition into fresh tau droplets, likely due to hydrophobic interactions, as the reviewer pointed out.

9. Phosphorylation effect on tau LLPS. The authors should add dephospho-tau to Figure 6 so that the differences are not due to source material but rather due to phospho-state.

Authors: Figure 6 shows data on non-phosphorylated (E. coli derived) tau proteins that carry FTD mutations, which can initiate tau LLPS even in the absence of any phosphorylation. The source of all tau proteins is in this case the same (E. coli).
deP-tau441 from Sf9 cells still carries residual phosphorylation (an average of 4 phosphates per tau molecule), which appears to be sufficient to initiate tau LLPS. To avoid any confusion of the reader, we stated that more clearly in the text as well as in the legend of this Figure. We added these references as well as a sentence of better explanation to the text.

11. The model of phase separation via the N-terminal domain would suggest that tau on the surface of the MT would be locally phase separated, confined to a 1-dimensional fiber. The authors should discuss this. This would locally hyperconcentrate the N-terminal domain. The reference to membraneless organelles here as polyelectrolyte hydrogels or brushes is not consistent with LLPS organization, but could be consistent with a brush structure on an MT - though that may or may not be phase separated.

Authors: Thank you for pointing out this very interesting aspect of possible roles of tau phase separation. We had already done some Gedankenexperiments on this previously, and now added a sentence to the Discussion part of the revised manuscript.

A number of imprecisions and misrenderings of the literature should be addressed:

Page 3 - change "comprise" to "contain".
Page 3/4 - the sentence says "The members of this protein family (presumably IDPs) ... also aggregate in protein aggregation diseases"; this is not generally true, most IDPs/IDRs are not aggregation prone.
Page 4 - prion protein in prion disease is not based on an IDD.
Page 4 - "accounted for" - rephrase.
Page 4 - inhomogenous charge - this is not clear.
Page 5 and many places - McKnight and coworkers generated hydrogels by super-concentrating solubilized forms of LC domains (mCherry attached, for example) and then waiting days at cold temperature for amyloid polymerization. These are distinct from gel-like forms created by "aging" of essentially instantaneous LLPS/demixing of proteins below the coexistence line for LLPS. I suggest that the authors not use the term "hydrogel" for these gel-like forms created by incubation of LLPS granules (for example, change "viscoelastic hydrogel state" to "gel-like state").
Page 5 - "By analogy to hnRNPA1, TDP-43, ... C9orf72": the authors should say C9orf72-derived DPRs, not the gene.
Page 6 - "can also manifest" should be changed, and the sentence is confusing.
Page 6 - How are "These domain charges, however, ...altered by phosphorylation, which introduces negative charges"? Phosphorylation seems to occur in the negatively charged domains already.

Authors: We thank the reviewer for the careful and critical reading and for pointing out these potentially conflicting choices of our wording. We changed the text at all mentioned points to correct the mistakes and improve the manuscript.

Referee #2: In their study, Wegmann and colleagues address the recent concept of liquid-liquid phase separation (LLPS) as an intriguing pathogenic mechanism for tau aggregation in tauopathies, using in vitro and cellular systems, which are complemented by the analysis of human and mouse tissue. I generally like the study, which used a vast range of complementary techniques and makes a convincing point that LLPS can initiate tau aggregation in AD. My major concerns are about the analysis/claims of tau concentration and phosphorylation pattern and the in vivo part, which in my view is less convincing (the claim in the abstract that droplets are not only observed in culture but also in the intact brain). Besides that, I am suggesting a few experiments that can be easily done and would clarify a few issues.
The figures are well-crafted but occasionally would benefit from additional information in the corresponding legends.

Authors: First, we want to thank the reviewer for the careful and critical reading of our manuscript and for his/her constructive criticism and ideas for experiments to resolve some issues that needed clarification. We tried to address most points experimentally or changed the way we presented and described the data in the text in order to be more specific. We believe the manuscript benefitted a lot from these changes. We also toned down our statement on tau LLPS in the intact brain so as not to overstate.

My specific comments are as follows:

1. Page 7, Fig 1C: A significant part of this paper is based on making claims about the role of phosphorylation of tau in driving LLPS. The authors cite a previous paper of tau preparation in insect cells that has shown phosphorylation of tau at 18 sites. How can they be sure that their current preparation shows the same pattern? Moreover, it is not only important which sites are phosphorylated (most likely using more sensitive methods would pick up more sites), but what the stoichiometry is. Both the phosphorylation sites and the stoichiometry (ratio of moles of phosphate per mole of tau) need to be determined, or the statement needs to be modified.

Authors: We absolutely agree that some more effort was needed to verify the phosphorylation state of the p-tau preparation used. Since the source of the protein derived from Sf9 insect cells is the same as before - meaning the protein was produced in the same laboratory using the same expression system, cell line, passage number, and purification protocol - we think that it is reasonable to assume a very similar phosphorylation pattern in the p-tau we used in this study. To verify the phosphorylation state of the used protein, we went back to the previously determined phosphorylation pattern of p-tau441 (Mair et al, 2016) and verified the existence of the described phosphorylation sites by Western Blot analysis. Using 16 different phosphorylation-site-specific antibodies, we could verify the presence of all except two previously described sites in p-tau441. This indicates that the phosphorylation of the proteins used here and in the previous studies is indeed very similar. The data are included in the manuscript in Supplemental Figure S3C. Of course, we cannot exclude small variations in the stoichiometry of the PTMs. This would require another detailed mass spectrometry analysis, which at this time would unfortunately exceed our timeline for the manuscript revision. However, we are in the process of planning and performing experiments using tau constructs with local single amino acid mutations to inhibit and/or pseudophosphorylate single phospho-sites in order to figure out the importance and contribution of individual phosphorylation sites for tau LLPS. We hope that these data will be available in the near future and will inform about the biological function of neuronal tau LLPS.

2. Page 8: What is a physiologically critical tau concentration and how is this being determined? The authors claim (Fig. 2F) that 1.5 µM is 'critical' but they only tested 0, 1, 2 ... µM, so this is really difficult to say. I guess it also needs a better explanation in the legend and of what the green and black labelling of the dots means.
The statement on this page that tau is restricted to the axon is not entirely correct, and the question is whether the cited 30-50% of unbound free tau (2005 ref) still holds up considering that tau binds to and interacts with so many proteins.

Authors: We apologize for the unclear description of the phase diagram of tau LLPS under different conditions and added this information to the Figure Legend. We also performed additional experiments on p-tau441 LLPS around the critical concentration (at 0.5, 1.0, and 2.0 µM p-tau in the presence of 10% and 20% PEG) to determine the critical concentration in more detail. We found that tau LLPS - as visible by light microscopy using a 40x objective - can occur even at ~1 µM p-tau441. Since this is a microscopic and not a molecular evaluation of the LLPS process, it may well be possible that LLPS can occur at even lower concentrations but is missed in this analysis due to the insensitivity of the method caused by the diffraction limit. At this point, there is - to our knowledge - no better way of determining LLPS, because chemical shifts in NMR upon tau LLPS are almost not detectable and turbidity measurements are rather insensitive. Since this is a critical issue also for almost all previously published studies claiming concentration limits for protein LLPS, we point out this issue in the text to make the reader aware of this topic. We also added a sentence to the text mentioning the fact that neuronal tau has multiple binding partners in the cell, which likely causes large fluctuations in the local intraneuronal tau concentration available for LLPS. We also mentioned in the text (page 10) that 30-50% of tau has been reported to be unbound (free). At an assumed average intraneuronal concentration of ~2 µM tau, that would give a concentration of ~0.7-1 µM free tau detached from microtubules, and this free tau is usually phosphorylated, which increases the local concentration of p-tau species, potentially leading to a higher local LLPS propensity.

3. Page 9: Estimation of intraneuronal tau concentrations: This is an interesting approach, but we are only given the end results without the figures for the intermediate steps. I find this entire paragraph problematic, especially as even a correct estimate would not inform on local (i.e. subcellular) protein concentration.

Authors: Thank you for this critical comment, which reflects a thorough understanding of tau biology, on the issue of intraneuronal tau concentration - a topic that is most of the time under-appreciated as well as a longstanding open question. We tried to tackle the issue by relating the tau concentration range critical for in vitro LLPS to the brain tau content. However, we are aware that the global tau content we estimated in the human brain becomes rather irrelevant when talking about precise local tau concentrations in neuronal sub-compartments. We also apologize for the lack of intermediate data presentation. In the revised version of our manuscript, we mention this brain tau level in the text to present the relevance of the tau concentration in LLPS to the reader (also without tau biology background knowledge) and present all the data and methods of how we estimated the human brain tau content in a separate Supplementary Information file. We hope that this now clarifies the question about brain tau concentration and tau LLPS relevance without confusing the reader.

4. Page 9, Figure S3: The C-terminal half of tau should be added as a 'negative' control for p-tau256.
Authors: Thank you for pointing out this critical lack of data! We designed and expressed a tau construct in insect cells that contains the C-terminal half of full-length human tau (amino acids 242 to 441) fused to GFP (p-tauCt-GFP). We found that this construct can actually undergo LLPS in the presence of molecular crowding and that p-tauCt-GFP LLPS was insensitive to high salt concentrations as well as to the presence of hexanediol. These data are now included in the revised version of the manuscript, mentioned in the text, and shown in an extra Supplemental Figure S5. Similarly, LLPS of the tau microtubule binding domain has very recently been reported by others as well (Ambadipudi et al, 2017).

5. Page 10, Figure S3I: Please add non-reduced, unfractionated brain lysate such as to provide an estimate of the relative abundance of the N-terminal fragments in human brain.

Authors: A Western Blot of non-reduced whole brain lysates from the same AD and Control brains has been added to Figure S6H.

6. Page 10, Section 'Tau phase separation in neurons'. Here, I have several comments: The authors want to test 'whether tau can form droplets under physiological conditions' and use an overexpression of GFP-tagged tau. I think it would be better to drop the reference to 'physiological conditions' and refer to this as 'in neurons'. The claim of 'droplet-like tau accumulations' to me is an over-statement, as there is simply a tau accumulation, which could be anything. Wegmann and colleagues use FRAP to assess the behavior of the 'droplets', but FRAP should also be applied to 'droplet-free areas'. If the recovery rate is the same, this would not substantiate the claim.

Authors: We agree with the reviewer that stating "physiological conditions" in overexpressing neurons and tau LLPS in the "intact brain" may have been slightly overstated in the manuscript. We followed the reviewer's advice and toned down these statements accordingly. We also shifted the data on Dendra2-tau LLPS in the mouse cortex by two-photon imaging into Supplemental Figure S3A. Regarding the FRAP data, all data have been normalized to FRAP of droplet-free adjacent areas in the same cell during the data analysis. To make this clear, we added this information to the figure legend and explained it more explicitly in the Methods part.

What is also ignored is that in any area of bleaching there can be a mixture of tau populations. Further down the text, 'droplet-like' is replaced by 'droplets' without any proof that droplets actually form in vivo. It can also not be claimed that the droplets in the cell are 'in a viscoelastic hydrogel state'. This can be speculated but should go into the discussion.

Authors: Thank you for pointing out this imprecise description of our data. We did not intend to claim droplets in vivo at this point without definitive proof. We changed the wording as suggested by the reviewer and removed the speculation about a viscoelastic state of intraneuronal tau droplets.

7. Page 12, Figure 4: Fig 4A: The phosphorylation pattern is mostly assumed but not shown.

Authors: As described in more detail in our answer to Major Concern #1, we tested the existence of the phosphorylation sites that we previously reported for p-tau441 using tau-specific mass spectrometry, here now by extensive Western Blot analysis, and present the data in Supplemental Figure S3C. This analysis showed that almost all (two exceptions) phosphorylation sites tested were present in the current p-tau441 preparation as well.
deP-tau441, the alkaline-phosphatase-treated p-tau441, showed a clear reduction in molecular weight down to the size of non-phosphorylated E. coli tau441, indicating the efficient removal of most phosphorylation; this also becomes clear in the massive reduction of Western Blot signal using the antibodies PHF1 (pS396/S404) as well as 12e8 (pS262/S356) in Figure 4B. The phosphorylation of tau by MARK2 in vitro has been analyzed in great detail previously in our lab (Schwalbe et al, 2013; Timm et al, 2003). And although the in vitro phosphorylation efficiency may vary from batch to batch, Western Blot analysis of the two main tau target sites of MARK (pS262 and pS356, antibody 12e8) shows excellent phosphorylation efficiency in our current preparation. We believe that with the new Western Blot analysis of the tested phospho-tau versions, the data became more convincing and clear. Our previous studies on these tau phosphorylations are explicitly cited in the text as well, in order to ensure more transparency and easier context access for the reader.

Fig 4B: There is a huge variability in total tau (between blots and between transfectants) indicating massive differences in tau levels, so the observed effects could be because of phosphorylation and levels.

Authors: We apologize for the previously poor quality of the Western Blots and repeated the blots using protein from the same batch used for the LLPS experiments and loading the same amount of protein (protein concentration was determined by BCA). The new data are now included in the Figure and show the differences in phosphorylation at equal total tau protein amounts.

Minor comments:

1. M. Materials and methods are presented after the figure legends and before the figures. There is some inconsistency in the referencing style.

2. M. Bottom of page 4: Low complexity domain (LCD): The authors make the point that tau is a protein with low amino acid variance, intrinsic disorder and inhomogenous charge or polarity distribution. This led them to postulate that tau could undergo phase separation. But then they argue that tau has no defined LCD domain. This discrepancy needs to be clarified.

Authors: We corrected and changed all mentioned issues in the manuscript in order to address all minor comments of the reviewer. We hope that the revised version is now suitable for publication in The EMBO Journal.

2nd Editorial Decision - 4th December 2017

Thank you for submitting your revised manuscript to The EMBO Journal. Your manuscript has now been re-reviewed by the two referees and their comments are provided below. Both referees appreciate the introduced changes and find that the analysis has been strengthened. Referee #1 has some remaining concerns with the technical aspects that I would like to ask you to take into consideration in a final revision. Some of the issues should be straightforward to address. Some of the issues (concentration of the droplets) are more difficult to sort out. The referee also suggests that, as this aspect is not key to the overall message, this part could also be removed from the manuscript. Either option is fine with me - happy to discuss further if helpful.
MAJOR

In their experiments in cells, the authors should comment on the possibility of GFP autorecovery from photobleaching due to noncovalent darkening of the GFP fluorophore - the authors should do control FRAP on GFP fusions that form aggregates (so they should not recover in FRAP) with the same imaging and FRAP parameters, to show that the recovery observed is much smaller than with GFP-tau441. As these effects are extremely dependent on imaging parameters and can account for up to the majority of the recovery observed in some cases, it is important to show this in parallel in the same setup. See this report: doi: 10.1016/j.bpj.2012.02.029. The authors should also include images from the FRAP timecourse in Fig 1D.

The authors suggest that electrostatic interactions are important for phase separation but then show some data for LLPS vs salt concentration that they say suggests salt concentration does not affect LLPS. First, these need to be reconciled. Second, the experiments showing salt concentration effects in Figures 2G and EV3C are not convincing, as there are droplets at all conditions. The authors should present a two-dimensional "phase diagram" (like in Figure 2G, left) or at least perform the experiments at lower tau concentration - either way, so that they pass through a transition to no droplets, like in EV4F.

CFP appears to show punctate distribution alone. Is this an artifact of aggregation? I suggest high-speed centrifugation (14000 rpm in a microcentrifuge tube) of the CFP stock before use, as that can clear aggregates. It is important to show this negative control has no assemblies if it is to be shown at all.

Droplet volumes are presented but no information is presented about how they are calculated - the droplets imaged on the glass slide surface are not spherical and no z-stack quantification is described or presented, hence it is difficult to understand. Also, is this the maximum volume observed, as there are different sizes of droplets? What is the error bar derived from? Similarly, the volume fraction measurement is not described. The volume fraction on a glass slide will be dependent on the droplets that have fallen onto the surface - how much height is being integrated? It is not clear to me what this is measuring or if it will be helpful. Perhaps % area of slide covered is more straightforward and more precise - and will make clear that the property measured is a function of the experimental setup (extensive) in a way that "volume fraction" (an intensive property) does not imply. It is possible I have misunderstood these measurements and missed the explanation in the methods, but then I suggest additional explanation in the methods and also in the results.

Concentration in the droplet. The authors have now explained their measurements. However, this causes concern for at least three reasons, and additional caution is still warranted in my opinion. The authors nicely show a calibration curve with p-tau441-a568 showing linearity and low uncertainty. Presumably this was conducted with the confocal volume/image placed far away from the coverslip, so the entire confocal volume is filled with solution and not partially in the air or in the glass. But is the confocal volume in the droplet-containing pixels filled with droplet, or does it extend to the glass or tau-depleted phase above the droplets?
1) For the droplets imaged on the coverslip, it is not clear if the focus is adjusted too close to the glass - in other words, is the confocal volume going beneath the surface? A z-stack showing a clear decrease going down into the glass and then, above it, a uniform fluorescence intensity with increasing z is required to demonstrate that the focus is not too low.

2) It is not clear if the confocal volume extends above the droplet into the tau-depleted solution above the droplet. In my understanding, the confocal resolution is poor in z and the volume can in fact extend approximately 2 microns or more in z. The "gelled" droplets observed by AFM presented here are not bigger than this height. What evidence do the authors show that the confocal volume is filled? The droplet cross sections highlight this significant concern - the fluorescence vs x/y profiles (at a constant height z) appear rounded (indeed the highest one is a near-perfect semi-circle profile - Figure 2C), directly suggesting that the confocal volume is not filled (at least it cannot be filled at any point except the max height - and still, point 1 above would have to be shown not to be too low). If the volume is not filled and extends above the droplet, then the authors are averaging in the tau-depleted phase above, which decreases the concentration estimate. The authors should look for the highest, biggest droplet-forming conditions and show a z-stack with a clear z slice that is demonstrated not to be at the bottom but has a sharp, flat profile in an x vs z plot, not the rounded profile shown in this version.

3) Additionally, to rule out dye-dye "self-quenching" interactions (https://www.nature.com/articles/srep20237), the authors should show linearity with decreasing fraction of fluo-labeled peptide. Are these experiments conducted with samples made of 100% tau labeled with fluorophore and 0% unlabeled tau? I remain cautious that fluorophore-fluorophore proximity in the droplet may alter photophysical properties, and I strongly suggest qualifying any concentration determined in this way unless linearity in total fluorescence (e.g. no effect of fluorescence label concentration) as a function of fluo-labeled concentration is demonstrated. This would also control for dye effects on phase separation.

4) However, even fluorophore dilution does not control for fluorophore quenching by being brought into greater proximity with amino acids (aromatic, histidine, methionine), as is well known though poorly understood (http://pubs.acs.org/doi/abs/10.1021/ja100500k), and for this I can see no easy control experiment - if the droplets are as concentrated as reported for Ddx4 (100-350 mg/ml of protein), the fluorophore is effectively in a >100 mM concentration of quenching amino acids. In other words, the inside of the droplet is effectively a different solvent condition that could (dramatically) change fluorescence intensity.

The authors would also have to rationalize their other observations - A) if the droplets are approximately 20 µM in concentration, how could 20 µM tau in the absence of crowders remain completely one low-tau phase? Indeed (Fig 2G), the same 5% PEG is required to see phase separation at 2 µM and at 20 µM, suggesting they are both far from the saturation concentration. Indeed, no tau phase separation is seen at much higher concentrations (50-100 µM) except at the evaporating edge of a solution drop, where protein concentration (and buffer concentration?) are again much higher than 50 µM.
In other words, if a droplet inside has only 22uM, how can the solution support 50uM ptau441? No crowding agent should be needed to push the tau together since it is already above the droplet concentration and hence the system should actually be crossed all the way into a single tau-rich phase regime (all tau rich "droplet") with no droplets. B) Furthermore, if the 5uM solution demixes into 3uM in tau depleted phase and 22uM in tau concentrated phase, this is a concentration factor of only 11x (2uM of the protein goes into assemblies that are 22uM dense). Therefore, the phase separated material should make up about 9% of the total volume. That means that for a water drop on a cover slip that is approximately 1 millimeter in height, there should be a uniform layer 90 microns tall of tau dense "droplet" protein that falls to the bottom. It does not appear to have this much volume of phase separated material formed based on the image in 2C, indeed the height is much less -about 2 micron height for 5 micron width droplet 4F. In my reading, the concentration is not a critical finding and I would suggest further investigation of these aspects as described above and a second method to confirm the concentration of the droplet material is required if they wish to keep this claim, or much more easily, removing the claim and the related part in the discussion. To summarize the major concerns listed -I think they are major issues in the manuscript as written but can be easily addressed by simple experiments or removal of claims (concentration) MINOR Introduction -FUS, TDP-43, hnRNPA1 are not best described as "LLPS proteins" -they are prionlike domain containing proteins (King et al Brain research -"tip of the iceberg") and they are a subset of proteins shown to be directed to cellular RNP granules and phase separate in vitro. please correct spelling of hnRNPA1 throughout -(not "hnRNP1", "hRNP-A1", or "hRNP1" as it is abbreviated differently in nonstandard fashion in three places). Or clarify if they mean a different protein by spelling out the name before using the abbreviation. substitute "β-sheet" for "beta-pleated sheet" -though the authors may be more precise in saying amyloid fibril cross-β structure as these dyes are not thought to be sensitive to globular β-sheet proteins, rather they detect amyloid fibrils. I again believe that the authors should change the statement "whereas the C-terminal MT binding domain can stabilized tau droplets through β-sheet interactions" to "whereas the C-terminal MT binding domain can stabilized tau droplets through hydrophobic interactions, possibly made up of βsheet structures". The authors do not conclusively show β-sheet assembly so I think it would be best to temper the conclusion, though I think it is an exciting possibility and hence it should remain in the discussion and in qualified statements as suggested above. Change "The aggregation of tau in the droplets appears to release enough free energy to enable symmetry breaking of the droplets, which can be detected as droplet deformation and the growth of non-spherical solid aggregates." to be more simple and precise as no thermodynamic measurements are performed: "The aggregation of tau can be detected as droplet deformation and the growth of non-spherical solid aggregates." What happened to heparin or RNA added p-tau441 droplets over time -did aggregates emerge? 
Results In the description of cellular assemblies on pg 16, the authors should call them "rounded" or "spherical" instead of "droplet-shaped" to be more precise. Pg 20 - change "extend" to "extent". Referee #2: Susanne Wegmann, Brad Hyman and colleagues have satisfactorily addressed my concerns, in particular with regards to statements for the in vivo situation. This is a really nice study. The authors may wish to check that all figure panels are properly referenced, as my specific comment 1 was addressed by the authors in Figure EV1C, not S3C. My specific comment 4 was addressed in Figure EV3, not S5. My specific comment 5 was addressed in EV4H, not S6H. I have one final (minor) comment regarding point 5: "5. Page 10, Figure S3I: Please add non-reduced, unfractionated brain lysate such as to provide an estimate of the relative abundance of the N-terminal fragments in human brain. Authors: A Western Blot of non-reduced whole brain lysates from the same AD and Control brains has been added to the Figure S6H" Thank you for adding these blots but still, because separate fractions have been analysed (SEC fraction 3 for HMW species showing only full-length tau, and SEC fraction 14 for LMW, showing only N-terminal fragments) the relative abundance of the N-terminal fragments in human brain cannot be determined. Authors: We added the requested FRAP control data to Figure 1 (Figure EV1B-C). We also added time course images of a GFP-tau441 droplet FRAP experiment (Figure EV1A). The additional data is now also mentioned in the text. 2) The authors suggest that electrostatic interactions are important for phase separation but then show some data for LLPS vs salt concentration that they say suggests salt concentration does not affect LLPS of p-tauCt. First, these need to be reconciled. Second, the experiments showing salt concentration effects in Figures 2G and EV3C are not convincing, as there are droplets at all conditions. The authors should present a two dimensional "phase diagram" (like in Figure 2G, left) or at least perform the experiments at lower tau concentration -either way, so that they pass through a transition to no droplets like in EV4F. Authors: The new data has been added to the main Figure 2 (panels G and H) and is presented in Expanded View Figure EV3. 3) CFP appears to show punctate distribution alone. Is this an artifact of aggregation? I suggest high centrifuge spinning (14000rpm in microcentrifuge tube) of the CFP stock before use as that can clear aggregates. It is important to show this negative control has no assemblies if it is to be shown at all. Authors: We repeated the control experiments with a fresh stock of GFP, which we sonicated and centrifuged at 21000x g before the experiment. We did not see any evidence of LLPS at 2 µM GFP in the presence of 10% PEG. The new data is included in Figure EV2A. 4) Droplet volumes are presented but no information is presented about how they are calculated -the droplets imaged on the glass slide surface are not spherical and no z-stack quantification is described or presented, hence it is difficult to understand. Also, is this the maximum volume observed as there are different sizes of droplets? What is the error bar derived from? Similarly, the volume fraction measurement is not described. The volume fraction on a glass slide will be dependent on the droplets that have fallen on to the surface -how much height is being integrated? It is not clear to me what this is measuring or if it will be helpful.
Perhaps % area of slide covered is more straightforward and more precise -and will make clear that the property measured is a function of the experimental setup (extensive) in a way that "volume fraction" (an intensive property) does not imply. It is possible I have misunderstood these measurements and missed the explanation in the methods, but then I suggest additional explanation in the methods and also in the results. Authors: We apologize for the lack of explanation of our methods applied for the droplet volume analysis. To clarify, the droplets imaged were not attached to the surface but free floating in the solution. We collected Z-stacks over larger volumes (z = ~20-40 µm) at a confocal plane thickness of 2 µm. In each plane of the z-stack, only droplets that were in focus in the focal plane were analyzed by fitting a circle on their phase boundary and calculating their volume (assuming spherical shape). The data represent mean droplet volume ±SD; this is mentioned in the figure legend. We removed the data of the droplet volume fraction since we agree that it does not add additional information. 5) Concentration in the droplet. The authors have now explained their measurements. However, this causes concern for at least three reasons and additional caution is still warranted in my opinion. The authors nicely show a calibration curve with ptau441a568 showing linearity and low uncertainty. Presumably this was conducted with the confocal volume/image placed far away from the coverslip, so the entire confocal volume is filled with solution and not partially in the air or in the glass. But is the confocal volume in the droplet containing pixels filled with droplet or does it extend to the glass or tau-depleted phase above the droplets? 1) for the droplets imaged on the coverslip, it is not clear if the focus is adjusted too close to the glass -in other words is the confocal volume going beneath the surface -a z-stack showing a clear decrease going down into the glass and then above it a uniform fluorescence intensity with increasing z is required to demonstrate that the focus is not too low. 2) it is not clear if the confocal volume extends above the droplet into the tau depleted solution above the droplet. In my understanding, the confocal resolution is poor in z and the volume can in fact extend approximately 2 microns or more in z. The "gelled" droplets observed by AFM presented here are not bigger than this height. What evidence do the authors show that the confocal volume is filled? The droplet cross sections highlight this significant concern -the fluorescence vs x/y (at a constant height z) appear rounded (indeed the highest one is a near perfect semi-circle profile - Figure 2C) directly suggesting that the confocal volume is not filled (at least it cannot be filled at any point except the max height-and still point 1 above would have to be shown not to be too low). If the volume is not filled and extends above the droplet, then the authors are averaging in the tau-depleted phase above, which decreases the concentration estimate. The authors should look for the highest, biggest droplet forming conditions and show a z-stack with a clear z slice that is demonstrated not at the bottom but has a sharp, flat profile in an x vs z plot, not the rounded profile shown in this version. 3) Additionally, to rule out dye-dye "self quenching" interactions (https://www.nature.com/articles/srep20237), the authors should show linearity with decreasing fraction of fluo-labeled peptide.
Are these experiments conducted with samples made of 100% tau labeled with fluorophore and 0% unlabeled tau? I remain cautious that fluorophore-fluorophore proximity in the droplet may alter photophysical properties and strongly suggest qualifying any concentration determined in this way unless the linearity in total fluorescence (e.g. no effect of fluorescence label concentration) as a function of fluolabeled concentration is demonstrated. This would also control for dye effects on phase separation. 4) However, even fluorophore dilution does not control for fluorophore quenching by being brought into greater proximity with amino acids (aromatic, histidine, methionine) as is well known though poorly understood (http://pubs.acs.org/doi/abs/10.1021/ja100500k) and for this I can see no easy control experiment -if the droplets are as concentrated as reported for Ddx4 (100-350mg/ml of protein) the fluorophore is effectively in a >100mM concentration of quenching amino acids. In other words, the inside of the droplet is effectively a different solvent condition that could (dramatically) change fluorescence intensity. The authors would also have to rationalize their other observations -A) if the droplets are approximately 20uM in concentration, how could 20uM tau in the absence of crowders remain completely 1 low-tau phase -indeed LLPS at 20uM (Fig 2G) requires the same 5% PEG to see phase separation at 2uM and 20uM, suggesting they are both far from the saturation concentration. Indeed, no tau phase separation is seen at much higher concentration (50-100uM) except at the evaporating edge of a solution drop where protein concentration (and buffer concentration?) are again much higher than 50uM. In other words, if a droplet inside has only 22uM, how can the solution support 50uM ptau441? No crowding agent should be needed to push the tau together since it is already above the droplet concentration and hence the system should actually be crossed all the way into a single tau-rich phase regime (all tau rich "droplet") with no droplets. B) Furthermore, if the 5uM solution demixes into 3uM in tau depleted phase and 22uM in tau concentrated phase, this is a concentration factor of only 11x (2uM of the protein goes into assemblies that are 22uM dense). Therefore, the phase separated material should make up about 9% of the total volume. That means that for a water drop on a cover slip that is approximately 1 millimeter in height, there should be a uniform layer 90 microns tall of tau dense "droplet" protein that falls to the bottom. It does not appear to have this much volume of phase separated material formed based on the image in 2C, indeed the height is much less -about 2 micron height for 5 micron width droplet 4F. In my reading, the concentration is not a critical finding and I would suggest further investigation of these aspects as described above and a second method to confirm the concentration of the droplet material is required if they wish to keep this claim, or much more easily, removing the claim and the related part in the discussion. To summarize the major concerns listed -I think they are major issues in the manuscript as written but can be easily addressed by simple experiments or removal of claims (concentration). 
Authors: Images for calibrating the fluorescence intensity of soluble p-tau441 were taken by confocal imaging (pinhole = 2 µm) in the center of the droplet, meaning far away from the droplet surface or glass surface to avoid any surface effects on tau concentration and imaging artifacts. When imaging the sample with droplets, the imaging plane was of course chosen high enough above the glass surface to not penetrate into the glass. We agree that the confocal plane thickness of 2 µm may be larger than the diameter of some of the droplets, which may cause an underestimation of the actual fluorescence intensity in these droplets because of the averaging across both the droplet phase and the soluble phase above the droplet. We now account for this artifact in our corrected measurements by analyzing the maximum fluorescence intensity in each droplet and only considering large droplets with a plateau profile. Using these parameters, we now estimate a higher tau concentration in the droplets of 30.5 µM. Because these data are not essential for the manuscript but may be interesting for the reader, we removed the data from the main figure and now show it in the Supplemental Data Figure EV2B. We also mentioned in the text that the estimated concentration of 34.3 µM in the droplets rather underestimated the actual concentration because of fluorophore:fluorophore quenching and other imaging artifacts due to the high condensation and viscosity in the droplets. We feel that the data, although presented self-critically and as an estimate, is informative enough to be reported. MINOR Introduction -FUS, TDP-43, hnRNPA1 are not best described as "LLPS proteins" -they are prionlike domain containing proteins (King et al Brain research -"tip of the iceberg") and they are a subset of proteins shown to be directed to cellular RNP granules and phase separate in vitro. > We edited the sentence in the introduction and added the reference. please correct spelling of hnRNPA1 throughout -(not "hnRNP1", "hRNP-A1", or "hRNP1" as it is abbreviated differently in nonstandard fashion in three places). Or clarify if they mean a different protein by spelling out the name before using the abbreviation. > This has been corrected throughout the manuscript. substitute "β-sheet" for "beta-pleated sheet" -though the authors may be more precise in saying amyloid fibril cross-β structure as these dyes are not thought to be sensitive to globular β-sheet proteins, rather they detect amyloid fibrils. > We replaced β-sheet with "amyloid fibril cross-β structure" in the context of ThioS fluorescence. I again believe that the authors should change the statement "whereas the C-terminal MT binding domain can stabilized tau droplets through β-sheet interactions" to "whereas the C-terminal MT binding domain can stabilized tau droplets through hydrophobic interactions, possibly made up of βsheet structures". The authors do not conclusively show β-sheet assembly so I think it would be best to temper the conclusion, though I think it is an exciting possibility and hence it should remain in the discussion and in qualified statements as suggested above. > We changed the sentence in the discussion accordingly. Referee #2: Susanne Wegmann, Brad Hyman and colleagues have satisfactorily addressed my concerns, in particular with regards to statements for the in vivo situation. This is a really nice study. The authors may wish to check that all figure panels are properly referenced, as my specific comment 1 was addressed by the authors in Figure EV1C, not S3C.
My specific comment 4 was addressed in Figure EV3, not S5. My specific comment 5 was addressed in EV4H, not S6H. Authors: We are glad that we could answer the reviewer's questions in a satisfying way. We double checked the figure references in the newest version of the text again to make sure there are no errors. I have one final (minor) comment regarding point 5: "5. Page 10, Figure S3I: Please add non-reduced, unfractionated brain lysate such as to provide an estimate of the relative abundance of the N-terminal fragments in human brain. Authors: A Western Blot of non-reduced whole brain lysates from the same AD and Control brains has been added to the Figure S6H" Thank you for adding these blots but still, because separate fractions have been analyzed (SEC fraction 3 for HMW species showing only full-length tau, and SEC fraction 14 for LMW, showing only N-terminal fragments) the relative abundance of the N-terminal fragments in human brain cannot be determined. > We agree with this comment and changed the word "abundance" to "presence" of N-terminal fragments in the figure legend of Figure EV5H. We also mention now that N-terminal fragments of tau appear in the LMW but not in the HMW brain lysate SEC fractions. Thanks for submitting your revised manuscript to The EMBO Journal. Your study has now been re-reviewed by referee #1 whose comments are provided below. The referee appreciates the introduced changes and has just a few minor comments left that can be resolved with appropriate text changes. I am therefore very happy to say that we are pleased to publish your study in The EMBO Journal. Concentration in the droplet. The authors have addressed most of the points regarding the confocal microscopy and avoid others by their caution. However, they do not address my point "A" about the total concentration and point "B" about the total volume. Therefore, I believe that instead of saying "estimate a concentration" the authors should change it to say they "measure an apparent concentration". And, in the results section paragraph ending that section where they describe having "50-100uM" tau in solution without crowding agents, they need to say that the droplet concentration should therefore be bigger than this concentration. The caution/qualifications they provide is useful. I still remain highly cautious about this apparent concentration.
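For reference, the mass-balance arithmetic behind the reviewer's point B (and the editor's closing remark) can be written out with the standard lever rule; the concentrations below are the reviewer's numbers, and the rule itself is textbook two-phase thermodynamics rather than anything stated in the letter:

φ_dense = (c_tot − c_dil) / (c_den − c_dil) = (5 µM − 3 µM) / (22 µM − 3 µM) ≈ 0.105,

i.e. roughly 10% of the sample volume in the dense phase, so a ~1 mm tall drop that settles would produce a dense layer on the order of 0.105 × 1 mm ≈ 100 µm — consistent with the reviewer's ~90 µm estimate.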
Novel encryption for color images using fractional-order hyperchaotic system The fractional-order functions show better performance than their corresponding integer-order functions in various image processing applications. In this paper, the authors propose a novel utilization of fractional-order chaotic systems in color image encryption: the 4D hyperchaotic Chen system of fractional order combined with the Fibonacci Q-matrix. The proposed encryption algorithm consists of three steps: in step#1, the input image is decomposed into the primary color channels, R, G, & B, and the confusion and diffusion operations are performed for each channel independently. In step#2, the 4D hyperchaotic Chen system of fractional orders generates random numbers to permute pixel positions. In step#3, we split the permuted image into 2×2 blocks, where the Fibonacci Q-matrix diffuses each of them. Experiments were performed, and the obtained results confirm the efficiency of the proposed encryption algorithm and its ability to resist attacks. Introduction Users of the Internet and other networks share and transmit millions of color images every day; these images are used in different applications such as telemedicine, distance learning, business, and military. Securing digital images is extremely important to prevent image content loss during transmission and to hide image information from attackers. Different techniques such as watermarking (Hosny et al. 2018; Hosny et al. 2019; Hosny et al. 2021a, b), image steganography (Kadhim et al. 2020), and image encryption (Naskar et al. 2020) are frequently used for securing digital images. The image encryption technique is based on two main stages: encryption and decryption. In the encryption stage, the input image is converted into an unreadable image using a secret key. In the decryption stage, the contents are retrieved by using the same key. One main advantage of using encryption over other methods is that the image is retrieved without losing information. The RGB color space is commonly used in color image encryption algorithms. Each pixel in the color image consists of three values, one in each channel. Most color image encryption algorithms encrypt each channel independently. Scientists have proposed different techniques of color image encryption. These techniques depend on chaotic systems (Wang et al. 2019a, b; Parvaz and Zarebnia 2018; Wang et al. 2019a, b; Xian et al. 2020; Liu et al. 2019a, b), DNA computing (Jithin and Sankar 2020; Nematzadeh et al. 2020), and compressive sensing (Yao et al. 2019). Also, deep learning approaches are applied in image encryption (Ali et al. 2020; Chen et al. 2019) to increase the robustness of 2D/3D image encryption; this technique utilizes a fast and effective CNN denoiser based on the principle of deep learning. In Ding et al. (2020), the authors utilize the Cycle-GAN network as the primary learning network to encrypt and decrypt medical images. Ding et al. (2021) proposed a new deep learning-based stream cipher generator, DeepKeyGen, designed to generate the private key to encrypt medical images. Generally, color image encryption techniques involve confusion and diffusion of pixels. Confusion is the process of changing the pixels' arrangement without changing the pixel values.
This step alone does not give satisfactory results in encryption. For improved security, the confusion step is usually combined with the diffusion step, in which the values of pixels are changed based on specific mathematical operations. Chaotic systems are divided into two main classes (low- and high-dimensional). These systems have many valuable characteristics, such as randomness, ergodicity, complex behavior, and sensitivity to control parameters and their initial conditions. Also, the keyspace generated by most chaotic systems is very large. Based on their ability to improve encryption algorithms' efficiency, various chaotic systems are utilized in color image encryption (Li et al. 2019a, b; Yang and Liao 2018). Pak and Huang (2017) showed that low-dimensional (LD) chaotic systems can be built from combinations of one-dimensional chaotic maps. Teng et al. (2018) converted the color image to a one-bit-level image by combining the three primary channels, Red, Green, and Blue; the combined image was then scrambled using a skew tent map. Irani et al. (2019) designed the chaotic coupled-Sine map, which is used for scrambling color images. Despite the simple structure of low-dimensional systems, their keyspace is small, yielding a lower security level. Essaid et al. (2019) proposed a new method for encrypting both color and grey images using a secure variant of the Hill cipher and improved 1D chaotic maps (logistic map, sine map, and Chebyshev map). Wu et al. (2015) presented a color image encryption algorithm combining DNA sequences with multiple improved 1D chaotic maps. Kamal et al. (2021) proposed a new algorithm for encrypting grey and color medical images; this algorithm is based on image blocks and a chaotic logistic map. High-dimensional (HD) chaotic systems are characterized by complex structures and multiple parameters. These properties enable HD systems to overcome the weaknesses of LD systems. Liu et al. (2019a, b) encrypted color images using dynamic DNA and a 4D memristive hyperchaotic system. Zhang and Han (2021) proposed a new color image encryption scheme based on dynamic DNA coding, a six-dimensional (6D) hyperchaotic system, and image hashing. Kaur et al. (2020) presented a minimax differential evolution-based 7D hyperchaotic map used for encrypting color images. Sahari and Boukemara (2018) combined two chaotic maps, piecewise and logistic, to design a 3D chaotic map; this 3D map is used in information security applications. Wu et al. (2016) proposed a new color image encryption method by combining the discrete wavelet transform with a 6D hyperchaotic system. Zhou et al. (2018) proposed quantum-based encryption of color images using quantum cross-exchange with a 5D hyperchaotic system. Fractional-order polynomials and functions show better performance than the corresponding integer-order ones in color image analysis (Hosny et al. 2020a), pattern recognition (Hosny et al. 2020b), image-based diagnosis of COVID-19 (Abd Elaziz et al. 2020), plant disease recognition (Kaur et al. 2019), and improved recognition of bacterial species (Chen et al. 2018). Generally, chaotic systems of fractional order are more complex and more accurate than integer-order chaotic systems. Accordingly, recent fractional-order chaotic-system-based image encryption methods were proposed (Yang et al. 2019; Yang et al. 2020a, b). These encryption algorithms are limited to encrypting gray-scale images.
Color images contain more information than grey images, so encrypting color images with high efficiency in considerable time is a great challenge. Some recent color image encryption techniques have shortcomings, such as a small keyspace or a key that does not depend on the original image, which makes them weak against differential attacks. Other algorithms cannot resist different kinds of attacks. There is not much previous work that used fractional-order chaotic systems in encrypting color images. These shortcomings motivate the authors to propose a novel utilization of chaotic systems of fractional orders in the encryption of color images. The main contributions of this paper are: 1. A 4D fractional-order hyperchaotic Chen system is applied to generate the secret key used for scrambling the plain image, where the system's initial conditions are based on the plain image. 2. The diffusion step is based on the Fibonacci Q-matrix. 3. Integration between the 4D fractional-order hyperchaotic Chen system and the Fibonacci Q-matrix assures a high level of security and can resist different kinds of attacks. A new three-stage encryption algorithm for color images was proposed. In this algorithm, the 4D hyperchaotic Chen system of fractional orders is combined with the Fibonacci Q-matrix. Step#1: the input image is decomposed into the primary color channels, R, G, & B. Step#2: the confusion and diffusion operations are performed independently for each channel, where the 4D hyperchaotic Chen system of fractional orders is used to generate random numbers to permute pixel positions. Step#3: the permuted image is divided into small blocks, where each of them is diffused by applying the Fibonacci Q-matrix. Various experiments were performed to demonstrate the efficiency of the algorithm and its ability to resist attacks. The rest of the paper is organized as follows: Sect. 2 includes the preliminaries of the 4D hyperchaotic Chen system of fractional orders and the Fibonacci Q-matrix. The proposed method for color images is described in Sect. 3. The experiments, results, and discussion are presented in Sect. 4. The conclusion is presented in Sect. 5. 4D hyperchaotic Chen system of fractional order Li et al. (2005) defined the hyperchaotic Chen system of integer order, while Hegazi and Matouk (2011) used the principles of fractional calculus to derive the hyperchaotic Chen system of fractional order:

D^α x = a(y − x) + u, D^α y = kx + cy − xz, D^α z = xy − bz, D^α u = yz + du, (1)

where α is the fractional order, a, b, c, d, k are constants, and x, y, z, u represent the state variables of the system. A chaotic system is hyperchaotic when the number of positive Lyapunov exponents is greater than one. For a = 35, b = 3, c = 12, d = 0.3, k = 7, and α = 0.97, the system possesses two positive Lyapunov exponents, which means that it is hyperchaotic of fractional order. Fibonacci Q-matrix The Fibonacci sequence is defined by

F_n = F_{n-1} + F_{n-2}, (2)

where F_n is the nth Fibonacci number and F_1 = F_2 = 1. The Fibonacci Q-matrix is a square matrix of size 2 × 2 given by

Q = [1 1; 1 0]. (3)

The nth power of the Q-matrix is

Q^n = [F_{n+1} F_n; F_n F_{n-1}]. (4)

The inverse matrix Q^{-n} has the following form:

Q^{-n} = (−1)^n [F_{n-1} −F_n; −F_n F_{n+1}]. (5)

All symbols used in this section are defined in Table 1.
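As a concrete illustration of the Q-matrix algebra in Eqs. (3)-(5), the short Python sketch below (not the paper's MATLAB code) builds Q^10 and checks that its integer inverse undoes a block multiplication modulo 256; this works because det(Q^10) = (−1)^10 = 1, so Q^{-10} is integer-valued:

import numpy as np

Q = np.array([[1, 1],
              [1, 0]])

# Q^10 = [[F11, F10], [F10, F9]] = [[89, 55], [55, 34]], per Eq. (4)
Q10 = np.linalg.matrix_power(Q, 10)
# Q^-10 from Eq. (5) with n = 10 (even, so the (-1)^n factor is +1)
Q10_inv = np.array([[34, -55],
                    [-55, 89]])

assert np.array_equal(Q10 @ Q10_inv, np.eye(2, dtype=int))  # exact integer inverse

# Because the inverse is integer-valued, multiplication by Q^10 and by Q^-10
# remain inverse operations even after reduction modulo 256:
block = np.array([[123, 45], [67, 89]])
assert np.array_equal((((block @ Q10) % 256) @ Q10_inv) % 256, block)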
The proposed encryption algorithm The input color image of size M × N × 3 is decomposed into three channels of the same size, M × N, and then we encrypt each channel independently. Our encryption algorithm depends on three main steps. First: the fractional-order hyperchaotic sequence is generated. Second: the input color image is scrambled. Third: diffusion of the scrambled image is performed. The scrambling and diffusion processes are executed twice to get the encrypted image. Figure 1 shows a visualized flowchart for one round. In this figure, the input color image "Lena" is decomposed into three grey images. Then, the fractional sequences are computed based on these images. After that, each grey image is scrambled by the generated sequence and diffused using the Fibonacci Q-matrix. Finally, the three encrypted images are combined to obtain the encrypted color image. Also, the decryption process is illustrated in this section to retrieve the original image. Figure 2 shows one round of the decryption process, where the encrypted image is converted into three grey images. First, the diffusion steps using the Fibonacci Q-matrix are applied; then the same sequence (S) generated in encryption is used in the scrambling step. Generating a fractional hyperchaotic sequence Step#1: the initial condition of the Chen hyperchaotic system of fractional order, as defined in Eq. (1), is image-dependent. The image is converted into a vector P, and the initial conditions x_1, x_2, x_3, x_4 of the fractional hyperchaotic system are then computed from the sum of the elements of P using the mod (modulo) operation, where MN is the length of the image vector P. Step#2: get the sequence L by iterating the system in (1) N_0 + MN/4 times and then discarding the first N_0 values. Step#3: sort the sequence L in ascending order and return the sorted pixel positions in a vector S of size MN. Image scrambling Image scrambling is the process of changing the positions of the pixels without changing their values. In our algorithm, the image vector P is scrambled by the sequence S as defined by

R(i) = P(S(i)), i = 1 : MN, (8)

where R is the scrambled vector obtained by changing the pixel positions in P using the values in the vector S. Diffusion In diffusion, the Fibonacci Q-matrix is used to change the image pixel values as follows: Step#1: reshape the scrambled vector R into the matrix R′ of size M × N. Step#2: divide R′ into non-overlapping 2 × 2 blocks. Step#3: obtain the encrypted image E by multiplying each sub-block of the scrambled image R′ with the Fibonacci Q-matrix (Q^10) defined in Eq. (4) as follows:

[e_{i,j} e_{i,j+1}; e_{i+1,j} e_{i+1,j+1}] = [r′_{i,j} r′_{i,j+1}; r′_{i+1,j} r′_{i+1,j+1}] × [89 55; 55 34] mod 256. (9)

Decryption The inverse process of encryption is decryption. This process aims to retrieve the input image from the encrypted one. In the proposed algorithm, the diffusion Eq. (9) is changed to

[r′_{i,j} r′_{i,j+1}; r′_{i+1,j} r′_{i+1,j+1}] = [e_{i,j} e_{i,j+1}; e_{i+1,j} e_{i+1,j+1}] × [34 −55; −55 89] mod 256, (10)

where i = 1 : M and j = 1 : N with a unified step of 2. Also, the sequence S generated in Sect. 3.1 is used to retrieve the original image by replacing Eq. (8) in the scrambling process with the following equation:

P(S(i)) = R(i), i = 1 : MN.
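To make the scrambling and diffusion steps concrete, here is a minimal, hypothetical Python sketch of one round on a single channel; the seeded random generator merely stands in for the fractional-order Chen iterates, and the helper names are illustrative, not the authors':

import numpy as np

def scramble(channel, L):
    # Confusion, Eq. (8): R(i) = P(S(i)), where S ranks the chaotic sequence L
    S = np.argsort(L)
    return channel.flatten()[S].reshape(channel.shape), S

def unscramble(scrambled, S):
    # Inverse scrambling: P(S(i)) = R(i)
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[S] = scrambled.flatten()
    return flat.reshape(scrambled.shape)

def diffuse(img, Qn):
    # Eqs. (9)-(10): multiply every non-overlapping 2x2 block by Qn modulo 256
    out = img.astype(np.int64)
    for i in range(0, img.shape[0], 2):
        for j in range(0, img.shape[1], 2):
            out[i:i+2, j:j+2] = (out[i:i+2, j:j+2] @ Qn) % 256
    return out.astype(np.uint8)

rng = np.random.default_rng(0)                      # mock chaotic source
channel = rng.integers(0, 256, (4, 4), dtype=np.uint8)
L = rng.random(channel.size)

R, S = scramble(channel, L)
Q10 = np.array([[89, 55], [55, 34]])
Q10_inv = np.array([[34, -55], [-55, 89]])
E = diffuse(R, Q10)                                 # encrypted channel
assert np.array_equal(unscramble(diffuse(E, Q10_inv), S), channel)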
Experiments and results Various experiments were conducted to show the efficiency of the proposed method in color image encryption. These experiments are based on computing the information entropy and the correlation between adjacent pixels to show the high randomness of the image encrypted using our proposed algorithm. Also, the efficiency of our algorithm in resisting differential attacks, brute force attacks, noise, and data attacks is presented. In these experiments, the authors used standard color images that are available in the SIPI datasets. The experiments were performed using MATLAB (R2015a) on a laptop computer equipped with a Core i5-2430M 2.4 GHz CPU and 4 GB RAM. Table 2 shows all test images used for evaluating our algorithm; it contains eight color test images. Information entropy Image randomness is usually measured using the information entropy, where a successful image encryption method can generate an encrypted image with high randomness. The entropy is computed for each primary color channel, R, G, and B, in color images. If the entropy for each channel is near 8, this indicates high randomness of the color image. The mathematical formulation for the entropy is

H(s) = − Σ_{i=1}^{k} P(s_i) log_2 P(s_i),

where k refers to the total number of pixels in the image and P(s_i) refers to the probability of s_i. The entropy is computed for eight standard color images in the three channels, where the three primary channels of these images are encrypted using the proposed method. The obtained values are shown in Table 3. The values of entropy for all color images in all channels are almost identical to the optimum value. In another experiment, the color image of Lena was encrypted by using our method and the recent algorithms (Chai et al. 2019; Li et al. 2019a, b; Rehman et al. 2018; Wu et al. 2018; Zhang et al. 2020; Hosny et al. 2021a, b; Xuejing and Zihui 2020). The values of the entropy are calculated and shown in Table 4. Generally, the average entropy (encrypted image) using our technique outperforms the methods (Rehman et al. 2018; Wu et al. 2018; Zhang et al. 2020) and is very similar to the results of the methods (Chai et al. 2019; Li et al. 2019a, b; Hosny et al. 2021a, b; Xuejing and Zihui 2020). The results indicate that our algorithm can produce an encrypted image with high randomness, as all values are near 8. Correlation analysis Typically, the original image has a high correlation between adjacent pixels. A good encryption algorithm should remove this correlation between neighboring pixels. In successfully encrypted images, the correlation between adjacent pixels should be close to 0. For any two adjacent pixels x and y, their correlation is

r_xy = cov(x, y) / (√D(x) √D(y)),

where cov(x, y) = E[(x − E(x))(y − E(y))], and E(·) and D(·) denote the mean and the variance, respectively. Additional experiments were performed to calculate the correlation coefficients of our method and the recent encryption methods (Chai et al. 2019; Li et al. 2019a, b; Rehman et al. 2018; Wu et al. 2018; Zhang et al. 2020; Hosny et al. 2021a, b; Xuejing and Zihui 2020), where the obtained results are shown in Table 6. As displayed in Fig. 3, a visualized correlation analysis confirms the strong correlation between adjacent pixels in the original image, as all pixels cluster in the diagonal direction. However, in Fig. 4, the selected adjacent pixels from the encrypted image occupy the whole space, indicating a weak correlation. Differential attack The efficiency of image encryption algorithms depends on their high sensitivity to minimal image changes. The algorithm is considered more efficient when a minimal change in the input image produces an entirely different encrypted image. Attackers try to trace differences between two images, which are encrypted using the exact encryption method from input images with only a one-pixel difference. This process enables attackers to find a relation between the input and encrypted image to guess the secret key. This kind of attack is known as a differential attack. The higher the efficiency of an algorithm, the more difficult it is for attackers to guess image information. The resistance to this attack was measured quantitatively using NPCR and UACI:

NPCR = (1/(M × N)) Σ_{i,j} D(i, j) × 100%,
UACI = (1/(M × N)) Σ_{i,j} (|E_1(i, j) − E_2(i, j)| / 255) × 100%,

where NPCR is the number of pixels change rate, UACI is the unified average change intensity, D(i, j) = 0 if E_1(i, j) = E_2(i, j) and D(i, j) = 1 otherwise, and E_1 and E_2 refer to images encrypted from the same input image after changing a single pixel. In the literature of image encryption, 99.6094% and 33.4635% are the typical values of NPCR and UACI. An experiment was performed where we randomly changed one pixel in the input image and then encrypted both the original input image and the modified one using our algorithm. The NPCR and UACI were computed for both encrypted images.
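A hedged Python sketch of these evaluation metrics (the standard definitions given above, not the authors' scripts) could look as follows; random arrays stand in for actual cipher images:

import numpy as np

def entropy(channel):
    # Shannon entropy of an 8-bit channel; the ideal value for a cipher image is 8
    counts = np.bincount(channel.flatten(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adjacent_correlation(channel):
    # Correlation of horizontally adjacent pixel pairs:
    # close to 1 for natural images, close to 0 after encryption
    x = channel[:, :-1].flatten().astype(float)
    y = channel[:, 1:].flatten().astype(float)
    return np.corrcoef(x, y)[0, 1]

def npcr(E1, E2):
    # Number of pixels change rate; the typical value is ~99.6094%
    return 100.0 * np.mean(E1 != E2)

def uaci(E1, E2):
    # Unified average change intensity; the typical value is ~33.4635%
    return 100.0 * np.mean(np.abs(E1.astype(int) - E2.astype(int)) / 255.0)

rng = np.random.default_rng(1)
C1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # cipher of the plain image
C2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # cipher after a one-pixel change
print(entropy(C1), adjacent_correlation(C1), npcr(C1, C2), uaci(C1, C2))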
As shown in Table 7, the obtained results show that all NPCR and UACI values for the three primary channels are almost equal to the typical values. Also, Table 8 compares the results of our algorithm and the methods (Chai et al. 2019; Li et al. 2019a, b; Rehman et al. 2018; Wu et al. 2018; Zhang et al. 2020; Hosny et al. 2021a, b; Xuejing and Zihui 2020). The average values of NPCR and UACI are ideal. Thus, encrypted images produced using our algorithm are secured against differential attacks. Histogram analysis The histogram for each grey level of the color image shows its pixel value distribution. The histograms of the original and encrypted images must be entirely different. Also, the histogram of the original image is usually distributed randomly, whereas the histogram of an encrypted image should be uniform. Mathematically, the variances of the histograms are calculated by

var(Z) = (1/n²) Σ_i Σ_j (z_i − z_j)² / 2,

where Z is the vector of histogram values and z_i and z_j are the numbers of pixels with grey values equal to i and j, respectively. The variance of the histogram reflects the uniformity of the histogram distribution of the encrypted image, where the two are inversely proportional: a low variance means a uniform histogram. Table 9 shows the variances of the histograms for the test color images. The variances of the encrypted images are smaller than the variances of the original images. Figure 5 shows the histograms of the original and encrypted primary channels of the Lena color image; the histograms of the encrypted channels are uniform. Data cut and noise attacks Encrypted images may be exposed to data loss or noise when transmitted over the network. An encryption method is successful when it efficiently restores the encrypted image after noise and data cut attacks. This ability is measured by the peak signal to noise ratio (PSNR), which is calculated by

PSNR = 10 × log_10(255² / MSE) (dB), (19)

where the MSE is defined by

MSE = (1/(M × N)) Σ_{i,j} (I_O(i, j) − I_D(i, j))²,

and I_O and I_D refer to the original and decrypted images. The value of PSNR is directly proportional to image quality, where high values reflect a high similarity between the decrypted and the original image. When the value of PSNR is above 35, it is challenging to distinguish between the original image and the decrypted image. To test the ability of our algorithm to resist these attacks, we performed the following experiments on the encrypted image: (1) add "salt & pepper" noise (SPN) with density 0.002 and 0.005; (2) cut attack by cropping with size 64 × 64 and 128 × 128 at the left corner. The image contents are still recognizable, which proves the robustness of our algorithm against noise and data cut attacks. Figure 6 shows the decryption results after attacking the encrypted image with different sizes of data cut; the decryption of Lena after adding salt & pepper noise with different densities is presented in Fig. 7. Also, Table 10 shows the PSNR values for two color images in the three channels. When we add noise with density 0.002 and 0.005, the average PSNR over the three channels is 29 dB and 26 dB, respectively. In the case of a data cut of size 128 × 128, the average PSNR for Lena is 12 dB and for Baboon is 18 dB. When the data cut is of size 64 × 64, the average PSNR values for Lena and Baboon are 18 dB and 24 dB, respectively. The experimental results indicate that our algorithm has an excellent performance in restoring encrypted images after noise and data cut attacks.
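The PSNR check also reduces to a few lines; this is a sketch of Eq. (19) and the MSE above, with a synthetic salt & pepper attack standing in for the real decryption experiments:

import numpy as np

def psnr(I_o, I_d):
    # Eq. (19): peak signal-to-noise ratio in dB;
    # above ~35 dB the two images are hard to tell apart
    mse = np.mean((I_o.astype(float) - I_d.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(2)
original = rng.integers(0, 256, (64, 64), dtype=np.uint8)
noisy = original.copy()
mask = rng.random(original.shape) < 0.002            # salt & pepper at density 0.002
noisy[mask] = rng.choice([0, 255], size=mask.sum())
print(psnr(original, noisy))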
Keyspace Attackers try all possible keys in the keyspace of an encryption algorithm to find the correct secret key. If the keyspace is large, it is more difficult for attackers to guess it correctly. This kind of attack is known as a brute force attack. The efficiency of the encryption method depends on having a large keyspace to resist this attack. When the keyspace is larger than 2^100, the algorithm can achieve a higher level of security. In our encryption algorithm for color images, the secret key is constructed from the four initial conditions of the fractional hyperchaotic system, a, b, c, d, k, α, and N_0. If we consider a computational precision of 10^−15, the size of the keyspace is N_0 × 10^150. Therefore, the keyspace of this method is sufficient to resist this attack. Key sensitivity The robustness of the encryption algorithm depends on its sensitivity to the secret key. In other words, minimal changes in the generation process of the secret key result in another key. Therefore, the modified key cannot be utilized in decrypting an image encrypted by the original key; the decrypted image should be a noisy image without any information about the original image. In this experiment, the original key was used in encrypting the image of Lena, and the encrypted image using this key is shown in Fig. 8b. When we decrypt the image in Fig. 8b after changing x_4 in the initial conditions to x_4 + 10^−10, the plain image is not restored, as in Fig. 8c. Only the original secret key will restore the plain image, as seen in Fig. 8d. Conclusion A novel utilization of hyperchaotic systems of fractional order in encrypting color images was proposed. In this three-stage encryption algorithm, the 4D hyperchaotic Chen system of fractional order is used in changing the pixel positions. The scrambled color image is divided into a group of 2 × 2 blocks, and then the Fibonacci Q-matrix with n = 10 is used in changing the pixel values for each block. The new encryption algorithm showed high sensitivity to any minimal modifications to the secret key and the pixel distribution, where an entirely different encrypted image was obtained. The obtained results confirm the ability of the proposed color image encryption algorithm to resist all frequent attacks. Future work will focus on improving the running speed of the proposed algorithm. Also, we will study applying the proposed algorithm to video encryption and super-resolution images. The emerging deep learning-based encryption approach for color images is another direction for future work.
GALEX Ultraviolet photometry of NGC 2420: searching for WDs We present color-magnitude diagrams of the open cluster NGC 2420, obtained from Galaxy Evolution Explorer (GALEX) ultraviolet images in FUV and NUV bands and Sloan Digital Sky Survey (SDSS) u,g,r,i,z photometry. Our goal is to search for and characterize hot evolved stars and peculiar objects in this open cluster, as part of a larger project aimed to study a number of open clusters in the Milky Way with GALEX and ground-based data. Introduction NGC 2420 is a metal-poor (Z=0.008), old open cluster (2.0 Gyr) at a distance of 2.48 kpc, well studied in the visible range (Sharma et al. 2006). From isodensity curves, Sharma et al. (2006) estimate an almost spherical core and a cluster coronal region with a radial extent of 10 arcmin. We present preliminary results of our search for hot evolved objects using combined UV and visible photometric measurements from GALEX and SDSS data. The FUV and NUV GALEX bands are very sensitive to the detection of hot stellar objects, and the combination of FUV to NIR bands allows us to uncover hot evolved objects in binary systems where the unevolved cooler companion dominates the optical light. GALEX and SDSS data The FUV and NUV photometry was measured from a GALEX observation made for the All-sky Imaging Survey (AIS) on Jan 04 2006 with an exposure time of 128 s (equal for the FUV and NUV images). The data were downloaded from the MAST archive. GALEX provides simultaneous imaging in two UV bands, far-ultraviolet (FUV; λ_eff=1516 Å) and near-ultraviolet (NUV; λ_eff=2267 Å), with a circular field of view of 1.2° diameter (Martin et al. 2005). The GALEX pipeline photometry was not used because it merges nearby objects in the crowded parts of the cluster. Therefore, we performed custom photometry using DAOPHOT, for all sources detected on a 2-pixel-smoothed NUV image with significance above 5σ. We used aperture photometry to extract the NUV and FUV magnitudes in the AB magnitude system, adopting an aperture of 6 arcsec. We applied aperture corrections following Morrissey et al. (2007). The source catalogue We matched the UV sources to pointlike optical sources using a match radius of 2.5 arcsec. From the matched-source catalogue we extracted the objects within a 10 arcmin circular region centered on the NGC 2420 center (RA=07h38m28s, DEC=+21deg34m01s, J2000). The resulting catalog includes 344 NUV sources, 17 of which have no significant FUV detection. For comparison purposes, we extracted a sample of targets representing field stars from a region centered at RA=07h36m48s, DEC=+21deg14m06s (J2000) and covering an area equal to that of our cluster sample. Analysis We use the matched GALEX-SDSS photometry to build color-magnitude diagrams (e.g., Fig. 1) and color-color diagrams (examples in Figs. 2 and 3) to investigate the cluster's stellar population. For the analysis, we further limit the sample to objects with good photometry. A detailed discussion will be provided elsewhere. An isochrone (from Girardi et al. 2004) for a 2 Gyr old population is shown on the CMD. There is clearly a concentration of objects along the isochrone, presumably cluster members, in addition to a sparse distribution of sources matching the distribution of the field stars. Model colors (from Bianchi et al. 2007) for stars of varying T_eff and gravity values down to log g=9, reddened with E(B-V)=0.04 mag, are shown in the color-color diagrams.
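For illustration, the UV-to-optical cross-match described above (nearest counterpart within 2.5 arcsec) can be written in a few lines with astropy; the coordinate arrays below are placeholders, not the actual GALEX and SDSS catalogs:

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Placeholder source lists near the NGC 2420 field center
uv = SkyCoord(ra=[114.617, 114.620] * u.deg, dec=[21.567, 21.570] * u.deg)
sdss = SkyCoord(ra=[114.6171, 114.700] * u.deg, dec=[21.5670, 21.600] * u.deg)

# Nearest SDSS neighbor for each UV source, then keep pairs within 2.5 arcsec
idx, sep2d, _ = uv.match_to_catalog_sky(sdss)
matched = sep2d < 2.5 * u.arcsec
print(idx[matched], sep2d[matched].to(u.arcsec))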
Two sources in the NGC 2420 area appear to be hot evolved objects, from their photometric colors. In Fig. 4 we plot the spectral energy distribution (SED) of star A, together with two pure-H models, computed with the TLUSTY code. Assuming that star A is at the cluster's distance, its luminosity would require a small radius, making it a candidate evolved object. However, we point out that a similar small number of hot evolved star candidates is found in the control sample (field stars), and follow-up data are needed to establish cluster membership of the hot stars. Future work SED fitting of the photometry with model colours is in progress for all measured sources, to determine their physical parameters. Further work includes the study of other MW clusters, and follow-up spectroscopy of the hot evolved candidate objects. Acknowledgements. Based on archival data from the NASA Galaxy Evolution Explorer (GALEX) which is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. LB, DT and JH acknowledge support from the GALEX project and from NASA grant NNX07AJ47G (GI cycle 3). Some of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. Fig. 2 caption: The [NUV-g] vs [g-r] color-color diagram, with model colors (described in Bianchi et al. 2007). Fig. 4 caption: SED (FUV to near-IR) of a hot star in the NGC 2420 area. Black dots are the measured magnitudes, plotted at the effective wavelength of each filter. Photometric errors are smaller than the dots. Two pure-H TLUSTY models are shown, scaled to the measured magnitude in the r-band. At the distance of 2.48 kpc, the scaling factor would imply R=0.1 R_⊙ (for log g=7) for the UV source.
Correlation between Clinical and Histopathologic Chorioamnionitis in HIV-positive Women Objective Histopathologic findings of intraamniotic infection include chorioamnionitis and funisitis. Chorioamnionitis is a recognized risk factor for intrapartum HIV transmission. We sought to determine whether histologic placental findings in patients with clinical chorioamnionitis were different between HIV-positive and HIV-negative patients. Study design Retrospective case-control study of all cases of clinical chorioamnionitis at The Johns Hopkins Hospital between 1996 and 2010, diagnosed by presence of maternal fever and one or more of the following criteria: maternal or fetal tachycardia, fundal tenderness, malodorous amniotic fluid, or elevated WBC. Placentas of HIV-positive women were matched 1:2 to HIV-negative controls. Placentas were reviewed by two independent pathologists, blinded to HIV status, using 2003 Society for Pediatric Pathology criteria. The primary outcome was the histopathologic finding of chorioamnionitis by maternal grade/stage and fetal grade/stage. Results Of 28,915 deliveries, 2,228 met criteria for chorioamnionitis (7.7%), and 11 were HIV-positive (0.5%); one had dichorionic-diamnionic twins with placentas evaluated separately. Parity, gestational age, mode of delivery, Apgar scores, and infant birth weight were similar between groups. WBC at delivery was significantly lower in HIV-positive patients (9 vs. 13, p<0.01), and women with HIV were significantly older (34.4 vs. 28.9 years, p=0.05). HIV-positive patients were no more likely to have histologic chorioamnionitis than their HIV-negative controls. Maternal and fetal stage/grade did not vary significantly between groups. Conclusion Among patients with clinical chorioamnionitis, histopathologic confirmation does not differ significantly between HIV-positive and -negative patients. Additionally, histologic chorioamnionitis in the HIV-positive population does not appear to differ in severity from that in the HIV-negative population. The white blood cell count at delivery is significantly lower in HIV-positive women with chorioamnionitis compared to HIV-negative controls. Introduction Chorioamnionitis is infection of the placental membranes surrounding the fetus. Diagnosis can be made either clinically or histologically. Clinical chorioamnionitis is diagnosed by the presence of maternal fever in conjunction with one or more of the following: maternal tachycardia (>100 beats per minute), fetal tachycardia (>160 beats per minute), fundal tenderness, malodorous discharge, or leukocytosis (>15,000 cells/mm3). Histologic diagnosis of chorioamnionitis is made by the presence of a neutrophilic infiltrate of the chorionic plate and membranes, possibly involving the walls of the umbilical vasculature (vasculitis) or progressing to infiltrate the stroma of the cord surrounding the cord vessels (funisitis), as demonstrated by Figures 1-3. At our institution, department protocol dictates that placentas of all mothers with clinical chorioamnionitis be sent for evaluation by a pathologist for histopathologic confirmation of the diagnosis. Histopathologic evaluation is considered the gold standard for making the pathologic diagnosis of chorioamnionitis [1]. A number of studies have evaluated the degree to which the clinical and histologic diagnoses of chorioamnionitis overlap, and have found clinical diagnosis to be less sensitive than histopathologic diagnosis [2].
Published estimates show that histologic chorioamnionitis is observed in 10-38% of term placentas, whereas clinical chorioamnionitis is recognized in 5-12% of term pregnancies [3]. These ranges of values are likely due to variable clinical and histopathologic definitions of chorioamnionitis, which can differ by institution. Chorioamnionitis has important clinical implications for mother and fetus. Mothers with chorioamnionitis are at increased risk of postpartum hemorrhage and endomyometritis, and those that deliver by cesarean have an increased risk of postoperative surgical site infection [4,5]. Infants born to mothers with chorioamnionitis are at risk for sepsis and often undergo prolonged observation in the neonatal intensive care unit, and may undergo invasive procedures such as blood cultures and lumbar puncture to rule out sepsis. Additionally, chorioamnionitis has been associated with cerebral palsy in both preterm and term infants [6,7]. Diagnosis of chorioamnionitis is of particular importance among HIV-positive women, because it is a known risk factor for vertical HIV transmission [8]. In the literature, the histologic features of the placentas of HIV-positive women have been characterized [9]. However, whether a difference exists in the histologic features of HIV-positive and HIV-negative women with chorioamnionitis has not yet been evaluated. We sought to determine whether a difference exists histologically between these two patient populations. We hypothesized that, given the immunocompromise characteristic of HIV, a difference would be observed in the histologic features of these two populations. Materials and Methods Study permission was granted by the Johns Hopkins Institutional Review Board. The study was performed as a retrospective case-control study. All cases of clinical chorioamnionitis between 1996 and 2010 were identified. Patients with HIV were identified and were contemporaneously matched 1:2 with HIV-negative patients. Maternal demographic and medical information including age, parity, gestational age, presence/absence of spontaneous labor, CD4 count, viral load, and white blood cell (WBC) count on admission, and presence/absence of intrapartum fever were obtained from the medical record. Delivery information including mode of delivery, neonatal Apgar scores, NICU admission, use of antiretroviral medications and antibiotics in labor, and neonatal arterial cord gas information were also abstracted from the medical record. Placentas for study patients were reviewed by two independent pathologists (ZM, FA), both blinded to HIV status, using 2003 Society for Pediatric Pathology criteria [9], and were analyzed for the presence of histologic chorioamnionitis as well as maternal and fetal grade and stage of the placenta (Table 1). The primary study outcome was the histopathologic finding of chorioamnionitis (including maternal grade and stage and fetal grade and stage). Secondary outcomes included mode of delivery, APGAR scores, arterial cord pH, WBC count at delivery, infant birth weight, and need for NICU admission. Fisher's Exact Test was used for discrete variables and Student's T-Test for continuous variables, with p<0.05 considered significant.
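As an illustration of this analysis plan, the two tests map directly onto scipy; the 2x2 counts below are back-calculated from the reported proportions (9/12 vs. 16/22 placentas with histologic chorioamnionitis), and the WBC values are invented for the example:

from scipy import stats

# Fisher's exact test on a 2x2 table: rows = HIV-positive / HIV-negative,
# columns = histologic chorioamnionitis present / absent
table = [[9, 3],     # 75% of 12 placentas in the HIV-positive group
         [16, 6]]    # 72.7% of 22 placentas in the control group
odds_ratio, p_fisher = stats.fisher_exact(table)

# Student's t-test for a continuous outcome such as WBC count at delivery
wbc_hiv = [8, 9, 10, 9, 8, 11]           # illustrative values only
wbc_ctrl = [13, 12, 14, 13, 15, 12]
t_stat, p_ttest = stats.ttest_ind(wbc_hiv, wbc_ctrl)
print(p_fisher, p_ttest)                  # p < 0.05 considered significant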
Results Of the 2,228 patients meeting criteria for clinical chorioamnionitis, 11 patients (0.5%) were HIV positive; one woman had a set of dichorionic diamniotic twins whose placentas were analyzed individually (accounting for the total n=12). Maternal demographic and baseline medical information by HIV status are summarized in Table 2. HIV-positive women were significantly older, and had significantly lower WBC counts, than HIV-negative women. Other factors including nulliparity, gestational age, ethnicity, rates of regional anesthesia use, and labor characteristics were similar between the groups. Intrapartum antibiotic treatment was at the discretion of the treating provider. Standard protocol at our institution is to administer ampicillin and gentamicin for clinical chorioamnionitis. Clindamycin is typically added if delivery is by cesarean. Alternate antibiotic treatment in the setting of patient allergy is at the discretion of the treating physician. 10 of 11 HIV-positive patients received zidovudine intrapartum. Ampicillin and gentamicin were administered to 7 of 11 HIV-positive patients and 16 of 22 controls, and clindamycin was administered to 3 of 22 controls. Despite the lower WBC count associated with the HIV-positive cohort, as well as the known immunosuppression associated with the disease process, rates of histologic chorioamnionitis did not differ significantly between the HIV-positive and HIV-negative patients (Table 3: delivery and outcome data by HIV status). Rates of histopathologically confirmed chorioamnionitis were 75% and 72.7%, respectively, for the HIV-positive and HIV-negative patients (p=0.89). Additionally, there was no difference between the groups in maternal or fetal grade or stage of chorioamnionitis. Modes of delivery were similar between the groups, as were neonatal outcomes including NICU admission, umbilical cord pH, and 5-minute APGAR scores. Comment The diagnosis of chorioamnionitis represents a clinical challenge because of variable symptomatology and presentation. Nonetheless, the diagnosis is important to make, because chorioamnionitis is associated with worse maternal and neonatal outcomes, particularly infectious morbidity and mortality. Since the clinical presentation is highly variable, histopathologic diagnosis remains the gold standard. Previous studies have demonstrated significant discrepancy between clinical and histopathologic diagnoses of chorioamnionitis in the general population. However, to date no studies have examined whether infection with HIV has any impact on the correlation between clinical and histopathologic diagnoses of chorioamnionitis. Interestingly, despite the immunosuppression that is the hallmark of HIV, the cellular infiltration of the placental membranes that is diagnostic of chorioamnionitis is relatively preserved in HIV-positive patients. This response is preserved despite a significantly lower white blood cell count in these patients compared to their HIV-negative counterparts. Perhaps one explanation for this finding is that the majority of chorioamnionitis seen in obstetric practice is acute chorioamnionitis, which is a neutrophil-mediated response. Given that HIV affects lymphocyte-mediated immunity, the neutrophilic response may be relatively preserved, which accounts for the similar histologic findings between the two groups. Chronic chorioamnionitis, on the other hand, is a lymphocyte-mediated response, and one might expect different histologic findings among HIV-positive and HIV-negative patients with chronic chorioamnionitis due to their relative deficiency of lymphocytes. This is a potential future avenue of study, but would require a much larger sample size.
Strengths of the study include the standardized, blinded histopathologic evaluation of the specimens by two independent pathologists, as well as the fact that this study attempts to answer a novel clinical question not previously addressed in the medical literature. Limitations of this study are its small sample size and retrospective nature. However, because both HIV and chorioamnionitis are uncommon entities, this was the most reasonable way to approach the study question. The small sample size also limits further analysis of the data to evaluate the association between severity of HIV (based on CD4 count and viral load) and histopathologic placental characteristics, which is another potential future avenue of study. Lastly, the study population is largely an urban, African American population, which limits the generalizability of the results. In conclusion, in this small retrospective case-control study at an urban tertiary care center, histopathologic chorioamnionitis did not appear to differ in severity or frequency between HIV-positive and HIV-negative patients with clinical chorioamnionitis. Chorioamnionitis has important medical implications for both HIV-positive mothers and their neonates.
Intravascular Large B-cell Lymphoma of the Bilateral Ovaries and Uterus in an Asymptomatic Patient with a t(11;22)(q23;q11) Constitutional Translocation

Intravascular large B-cell lymphoma (IVLBCL) is a rare subtype of extranodal diffuse large B-cell lymphoma, characterized by the intravascular growth of lymphoma cells, aggressive behavior, and an often fatal course. Patients with IVLBCL are usually symptomatic. Although any organ may be involved, localized lesions in the bilateral ovaries and uterus are extremely rare. We experienced a rare case of IVLBCL involving the bilateral ovaries and uterus in an asymptomatic patient with a t(11;22)(q23;q11) constitutional balanced translocation. Its association with the disease remains unknown. Even in asymptomatic situations, IVLBCL is possible, and the uterus and ovaries can be involved.

Introduction

Intravascular large B-cell lymphoma (IVLBCL) is a rare, aggressive subtype of B-cell lymphoma characterized by the proliferation of malignant lymphocytes within the lumen of small blood vessels, without an extravascular tumor mass (1). The clinical presentation shows tremendous variation: anemia, neurological symptoms, hepatomegaly, splenomegaly, and constitutional B symptoms (fever, night sweats, and weight loss) are observed in the majority of patients (2-4). Thus, IVLBCL is usually symptomatic. The aggressive clinical behavior of the disease often results in a delayed diagnosis and in fatal complications due to the occlusion of blood vessels (5). The prognosis is poor, with the 3-year survival rate reported to be between 14 and 27% (3,6). Although any organ can be involved, localized lesions in the bilateral ovaries and uterus are extremely rare (7-11).

The t(11;22)(q23;q11) translocation, the most common recurrent non-Robertsonian constitutional balanced translocation in humans, has been reported in more than 160 unrelated families (12). Carriers of the t(11;22)(q23;q11) constitutional translocation manifest no clinical symptoms; however, they often have problems with reproduction (13). The t(11;22)(q23;q11) constitutional translocation has also been reported to be associated with an increased risk of breast cancer (14,15), while the association between the translocation and IVLBCL remains unknown. We experienced a case of IVLBCL involving the bilateral ovaries and uterus in an asymptomatic patient with a t(11;22)(q23;q11) constitutional translocation. To the best of our knowledge, there have been no previous reports of such a case. We present this case with a discussion of the diagnosis and the treatment strategies.

Case Report

A 72-year-old Japanese female without any symptoms underwent an annual positron emission tomography (PET)-computed tomography (CT) examination for opportunistic cancer screening, and metabolically active foci with a maximal standardized uptake value of 6 were found in the fundus and anterior wall of the uterus (Fig. 1). Contrast-enhanced magnetic resonance imaging (MRI) was conducted as a more thorough examination. Although it revealed marked early enhancement in the uterus, few endometrial abnormalities were observed (Fig. 2). The imaging studies showed no abnormalities in the bilateral ovaries, abdominal lymph nodes, or peritoneal membrane. In addition, the patient's epithelial tumor marker levels, including carcinoembryonic antigen, carbohydrate antigen 19-9, carbohydrate antigen 125, and squamous cell carcinoma antigen, were all within normal limits.
These findings suggested that uterine sarcoma or malignant lymphoma was a more likely diagnosis than endometrial carcinoma. The patient was referred to our hospital for a workup and treatment with a provisional diagnosis of uterine sarcoma, malignant lymphoma, or atypical endometrial carcinoma. Although her past medical history revealed no significant medical illness, she had experienced an early fetal death and a miscarriage at 29 and 33 years of age, respectively. A pelvic examination and transvaginal ultrasound at our hospital revealed no abnormalities. To eliminate the possibility of a uterine metastasis from an intestinal malignancy, upper and lower gastrointestinal endoscopies were performed; these revealed no significant findings. The only abnormality in the patient's laboratory data was a significant increase in her lactic dehydrogenase (LDH) level (519 U/L). A cytological examination of the cervix and uterine mucosa revealed the presence of non-epithelial malignant cells (Fig. 3). The cytological features, imaging findings, and clinical presentation suggested that uterine sarcoma was more likely than malignant lymphoma. A total abdominal hysterectomy with bilateral salpingo-oophorectomy was performed as both a diagnostic and therapeutic procedure. The pathological findings showed that the lymphoma cells were almost exclusively contained within the blood vessels, including the capillaries, without an obvious extravascular tumor mass (Fig. 4). A diagnosis of IVLBCL involving the bilateral ovaries and uterus was made. The lymphoma cells were positive for CD20, CD5, BCL2, BCL6, and MUM1, and negative for CD10, CyclinD1, and SOX11, suggesting a non-germinal center B-cell origin. Although IVLBCL is likely to involve the central nervous system (CNS), the results of contrast-enhanced brain MRI and a cerebrospinal fluid analysis did not demonstrate CNS involvement. A bone marrow analysis revealed hypocellular marrow with no lymphoma cells, and a Giemsa-band analysis showed a 46,XX,t(11;22)(q23;q11.2) translocation in all of the analyzed cells, suggesting that the translocation was constitutional. We planned to administer R-CHOP, a combination chemotherapy consisting of cyclophosphamide, doxorubicin, vincristine, and prednisolone with rituximab (a recombinant anti-CD20 antibody), once every three weeks for six cycles. Although the patient had few findings that would indicate CNS involvement, we also planned to administer intrathecal prophylaxis (methotrexate, cytarabine, and prednisolone) once every three weeks for four cycles. After one cycle of chemotherapy, the soluble interleukin-2 receptor (sIL-2R) and LDH levels decreased to 1,076 U/mL and 167 U/L, respectively. The patient has been treated on an outpatient basis and has maintained a good performance status with the chemotherapy. Thus far, no disease progression has been recognized.

Discussion

The present case shows that IVLBCL involving the bilateral ovaries and uterus was incidentally found in an asymptomatic patient with a t(11;22)(q23;q11) constitutional translocation. The lack of lymph node swelling and hepatosplenomegaly, without B symptoms, made a diagnosis of malignant lymphoma seem less likely. The localized FDG accumulation in the fundus of the uterus, the presence of few morphological changes in the endometrium, and the cytological findings suggested that uterine sarcoma was more likely than endometrial carcinoma.
Only the surgical specimens of the bilateral ovaries and uterus revealed the presence of IVLBCL, which had spread throughout the uterine and ovarian blood vessels. To the best of our knowledge, this is the first report of IVLBCL involving the bilateral ovaries and uterus to be diagnosed in an asymptomatic patient.

Interestingly, the clinical presentation of IVLBCL appears to differ according to the country of origin. Two main clinical variants are recognized: the Western and Asian variants. In the Western variant, the CNS (34-100%) and skin (39-60%) are most commonly involved (4,6,16); on the other hand, the myeloid and lymphoid systems, such as the bone marrow (32%) and spleen (29%), are less frequently involved (4). In contrast, in the Asian variant, involvement of the bone marrow (75%), spleen (67%), and liver (55%) is often observed (3,17,18), while neurological symptoms (27%) and skin lesions (15%) are less common (19). However, in our patient, a physical examination, contrast-enhanced brain MRI, bone marrow biopsy, lumbar puncture, and an abdominal enhanced CT scan yielded no relevant findings.

It is impossible to diagnose IVLBCL on the basis of cytological findings alone (8, 20-22). In this case, the presence of features such as dissociated single cells with high nuclear-to-cytoplasmic ratios, as well as nuclear pleomorphism with prominent nucleoli, suggested the presence of non-epithelial malignant cells. In such cells, the cytological features associated with some types of sarcoma are quite similar to those of malignant lymphoma. Thus, cytological findings are sometimes insufficient for distinguishing between the two conditions. When cytological findings that suggest the presence of non-epithelial malignant cells are obtained, the possibility of malignant lymphoma should be considered. A histological analysis is essential for the final diagnosis.

Abnormal laboratory findings, such as increases in LDH and sIL-2R, anemia, and an elevated sedimentation rate, have been reported in more than half of IVLBCL patients. However, these findings are not specific for IVLBCL (1). Our patient also presented with markedly elevated LDH and sIL-2R levels at the initial visit; these abnormally high levels persisted even after the surgery in which the metabolically active foci found on PET-CT were completely resected. Although PET-CT has been reported to be useful in the diagnosis of IVLBCL, false-negative results for some types of organ involvement, such as skin and renal involvement, have also been recognized. The exact sensitivity of PET-CT for the detection of lymphoma cells in IVLBCL remains unknown because of the rarity of the condition, and careful interpretation is required.

IVLBCL is primarily treated with systemic chemotherapy including rituximab and with CNS-directed therapy. Although there are no standard chemotherapy regimens for IVLBCL, anthracycline-containing regimens have been reported to improve the clinical outcome (6). The largest retrospective analysis of treatment, in which approximately 83% of the patients received CHOP or CHOP-like chemotherapy with or without rituximab (2), demonstrated that the patients who received chemotherapy plus rituximab had significantly higher complete response (82% versus 51%) and two-year overall survival (66% versus 46%) rates than patients who did not receive rituximab, indicating the efficacy of rituximab.
As for therapies directed at the CNS, we administered CNS-directed therapy as a prophylactic treatment against potential CNS metastasis in the present case (despite the patient showing few signs of CNS involvement), because IVLBCL is associated with a high CNS relapse rate [the incidence during follow-up is reported to be 25% (19)].

The t(11;22)(q23;q11.2) is the most frequent constitutional balanced translocation in humans (12). Few reports have shown an association between the translocation and IVLBCL; on the other hand, the translocation appears to have a significant association with breast cancer (14,15). To the best of our knowledge, this is the first report of an IVLBCL patient with this translocation.

In conclusion, we experienced a rare case of IVLBCL involving the bilateral ovaries and uterus in an asymptomatic patient with a t(11;22) constitutional translocation. The present case demonstrates that IVLBCL may occur in asymptomatic patients and that the uterus and the ovaries can be involved. The diagnosis of IVLBCL is challenging because the laboratory and imaging studies show low specificity, and the cytological findings are of limited value. The possibility of IVLBCL should be considered when unexplained clinical findings are observed in patients with elevated LDH and sIL-2R levels.

Written informed consent was obtained from the patient for the publication of this case report and any accompanying images and laboratory data.

The authors state that they have no Conflict of Interest (COI).
Feasibility study of stem-cell enriched autologous lipotransfer to treat oro-facial fibrosis in systemic sclerosis (Sys-Stem): Protocol for open-label randomised controlled trial

Highlights
- Oro-facial fibrosis presents a significant disease burden in systemic sclerosis.
- There is currently no treatment available for oro-facial fibrosis.
- Autologous fat grafting is a novel therapeutic method for oro-facial fibrosis.

Context

Systemic sclerosis (SSc) is a complex multisystem disease characterised by autoimmune, microvascular and fibrotic components, affecting a predominantly female population aged 30 to 60 years at onset [1,2]. Skin fibrosis is present in nearly all patients and is often termed the clinical hallmark of SSc [3]. In particular, facial involvement presents a significant disease burden to patients due to its impact on aesthetic appearance and oro-facial function, leading to social disability, isolation and psychological distress. It was ranked by the majority of patients as the most worrying aspect of the disease, overtaking even internal organ involvement [4]. Oro-facial manifestations include skin thickening and atrophy, skin induration, reduction in mouth opening (microstomia), thinning and retraction of the lips (microcheilia), peri-oral furrowing and telangiectasia. With disease progression, this can lead to an inability to achieve oral competence, with breathing and chewing impairment. Involvement of the salivary and lacrimal glands can also lead to xerostomia and xerophthalmia [5]. There is as yet no effective disease-modifying therapy to reverse skin fibrosis [6]. Physiotherapy and self-administered exercises are suggested to improve mouth opening, but relapse is common [7,8]. Autologous lipotransfer is a minimally invasive surgical technique that is used for correcting volumetric and soft-tissue deficits; however, it is now finding a role in fibrotic conditions [9,10]. Our group and others have suggested that it may improve skin fibrosis in different conditions including hypertrophic scars, burns, radiation-induced fibrosis, lichen sclerosis, and hemifacial atrophy [11-14]. Autologous lipotransfer has also been reported in small cohorts of SSc patients with facial or hand involvement [15-19]. A formal clinical trial assessing the safety and efficacy of autologous lipotransfer for facial involvement in SSc has not yet been reported and represents an unmet clinical need within the NHS.

Preliminary work

The Royal Free NHS Trust London is a national referral centre for SSc in the UK. We are the only site to treat SSc patients with autologous lipotransfer. Sixty-two patients with oro-facial fibrosis were retrospectively assessed following oro-facial lipotransfer [2]. Efficacy was assessed by volumetric augmentation, oro-facial function and psychological questionnaires. Results showed improvement in peri-oral volume, lip flexibility and aesthetics, with fat retention in the cheeks (93.7%), nasolabial folds (81.9%), nose (67.4%), chin (68.2%), upper lips (35.5%) and lower lips (27.3%). The Mouth Handicap in Systemic Sclerosis (MHISS) scale and all psychological measures showed significant improvement.

Study design

This is a single-centre, randomised controlled study with an open-label design.
The control arm will be a no-treatment concurrent control receiving care-as-usual. The treatment arm will receive autologous lipotransfer as the intervention, and therefore study participants will not be blinded. Randomisation will be carried out by the clinical research team using a centralised system, sealedenvelope.com, and a single surgeon will carry out the procedure. Patients will be assessed at 6 weeks, 3 months and 6 months (Fig. 1). Table 1 summarises the assessments at each time point. Participants in the control arm will be given the option of receiving the intervention at the end of the study to ensure treatment is offered to all.

Study aims and outcomes

The primary objective of this study is to assess the feasibility of using the Mouth Handicap in Systemic Sclerosis scale (MHISS) as our primary outcome measure. This was determined to be the most important outcome measure by our patient discussion group. We will also assess the feasibility of using the 3 subscales of the MHISS (Opening, Dryness and Aesthetic) as outcome measures. Secondary objectives include the following:

- Estimate the recruitment rate required for a multi-centre clinical trial;
- Estimate the attrition rate required for a multi-centre clinical trial;
- Assess the willingness of participants to be randomised;
- Assess the feasibility of obtaining patient-reported outcomes via psychological and quality of life questionnaires: Visual Analogue Scale (VAS), Derriford Appearance Scale (DAS24), Brief Fear of Negative Evaluation Scale (BFNE), Hospital Anxiety and Depression Scale (HADS), EuroQol (EQ-5D-5L);
- Assess the feasibility of determining cost-effectiveness in the main trial, by quality-adjusted life-years calculated from the EQ-5D-5L, and costs to the NHS according to Patient Resource Use questionnaires (Client Service Receipt Inventory (CSRI));
- Assess the acceptability of a range of qualitative and quantitative outcome measures (mouth opening, salivary flow, ultrasound, thermography, videocapillaroscopy, cutometry, durometry, laser speckle contrast imaging, 3D photography).

Feasibility criteria are as follows:

- Minimum recruitment rate of 80%
- Maximum attrition rate of 10%

Trial population

The Royal Free London NHS Foundation Trust has the highest number of SSc patients on registry in the UK, with currently 1700 patients with both limited and diffuse forms of the disease. Study participants will be recruited from the existing registry over a period of 12 months. Only patients who plan to have surgery as part of their standard of care will be approached. Surgery will not be carried out if there is no clinical need. The inclusion criteria are as follows:

- Age > 18 and < 90 years
- Diagnosis of limited or diffuse forms of SSc in the established phase of the disease (>3 years)

Sample size

This feasibility study will consist of 25 patients in each arm, 50 patients in total. The sample size of 50 was chosen to fulfil one of the main objectives of this study, which is to estimate the recruitment rate and rate of attrition for the main trial. We can estimate a recruitment rate of 80% with 95% CI (69% to 91%) and an attrition rate of 10% with 95% CI (1.5% to 18.5%). A difference of four points in the mean MHISS scores comparing control and intervention groups could be considered clinically significant, assuming a mean MHISS score of 29.3 (SD = 8) pre-intervention (calculated from our retrospective data based on improvement in MHISS scores).
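These interval estimates are consistent with a simple normal-approximation (Wald) confidence interval for a proportion with n = 50; the sketch below (our illustration; the protocol does not state the exact interval method used) reproduces them. The precision of the between-group MHISS difference quoted next can be approximated analogously from the assumed SD.

```python
# Normal-approximation (Wald) 95% CIs for the feasibility proportions.
# Illustrative reproduction only; the protocol does not state the exact
# interval method used.
import math

def wald_ci(p, n, z=1.96):
    """95% CI for a proportion p estimated from n participants."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

n = 50  # total planned sample size
for label, p in [("recruitment", 0.80), ("attrition", 0.10)]:
    lo, hi = wald_ci(p, n)
    print(f"{label}: {p:.0%} (95% CI {lo:.1%} to {hi:.1%})")
# recruitment: 80% (95% CI ~69% to ~91%)
# attrition:   10% (95% CI ~1.7% to ~18.3%)
```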
This mean difference of 4 points can be estimated from the feasibility study with 95% CI (0.5 to 9.5).

Screening and enrolment

Subjects who match the inclusion criteria will be identified from the SSc registry by the clinical research team, and screened in clinic at their next scheduled visit, or telephoned and invited to clinic to discuss the study. The study will be described to the patient, and they will be provided with a patient information sheet (PIS) detailing the specifics of the study and the risks and benefits involved. Subjects willing to participate in the study will be approached for informed consent at the next clinic visit, giving them enough time to consider participation in the trial. The trial nurse will take informed consent and address all queries.

Consent

Subjects who sign the consent form will be deemed recruited into the study and will be assigned a unique subject number. A copy of the signed informed consent form will be given to the participant; the original signed form will be retained in the locked trial file on site, and a copy placed in the medical notes.

Interventions and outcomes

The following non-invasive assessments will be undertaken by the participant at the Royal Free Hospital at baseline, 6 weeks, 3 months and 6 months, following the autologous fat transfer in the intervention arm or following screening in the control arm.

Questionnaires

Patient-reported outcomes will be assessed via the psychological and quality-of-life questionnaires VAS, DAS24, BFNE, HADS, and EQ-5D-5L.

Assessment of oral function

The MHISS scale will be supplemented with measurements of mouth opening. Salivary flow rate will also be measured to assess involvement of the salivary glands in fibrosis.

Assessment of skin fibrosis

The modified Rodnan skin score and high-frequency ultrasound will be used to give a measure of the degree of skin fibrosis.

Assessment of microcirculation

Videocapillaroscopy, thermal imaging and laser speckle imaging will be used as assessment tools for the microcirculation of the skin.

Assessment of biomechanical properties

Durometry and cutometry will be used as quantitative measures to assess the stiffness and elasticity of the skin, respectively.

Photography

Standardised two-dimensional photographs will be taken. Three-dimensional photography will allow volumetric assessment of the face to calculate fat retention rates.

Intervention

Autologous lipotransfer is a minimally invasive clinical procedure and is considered a standard-of-care procedure in reconstructive surgery [9,10]. Adipose tissue is harvested from the abdomen or thighs and centrifuged to separate out the fraction rich in stem and progenitor cells. This is injected into the fibrotic oro-facial tissues using a minimally invasive technique [20]. Any surplus lipoaspirate to be discarded will be collected at the end of the procedure. Two punch biopsies will be taken from the forearm in the same procedure, and a single aliquot of autologous lipotransfer will be injected at a marked site. At the 6-month follow-up, subsequent forearm biopsies will be taken from the injected site.

Tissue collection

The clinical research fellow will be responsible for the collection, isolation procedures and storage of the participant's tissue and cells in accordance with informed consent and the detailed PIS. All samples will remain onsite and will be stored anonymised in the secure onsite storage facility.
Samples will be processed, stored and disposed of in accordance with all applicable legal and regulatory requirements, including the Human Tissue Act 2004 and any amendments thereafter.

Laboratory assessment

The tissue samples, collected at baseline and at 6 months, will be assessed by immunohistochemistry for features of SSc including fibrosis, vasculopathy and immuno-inflammatory markers. Dermal fibroblasts will be isolated from the tissues using established methodologies. Fibroblasts will be analysed by quantitative PCR to assess the genetic phenotype. A two-gene biomarker (THBS1 and MS4A4A) specific to SSc will be used to quantify the profibrotic signature of SSc fibroblasts before and after exposure to lipotransfer. We will also determine the benefit of genetically screening participants. Adipose-derived stem cells (ADSCs) will be isolated from discarded lipoaspirate using established fat digestion techniques and characterised for cell viability and DNA content.

Participant withdrawal

Participants may be withdrawn from the trial whenever continued participation is no longer in their best interests. This may include disease progression, intercurrent illness, participant choice or persistent non-compliance with protocol requirements. The decision to withdraw a participant from treatment will be recorded in the CRF and medical notes.

Patient and public involvement

We have a patient representative who has made a substantial contribution to the research design, including identifying and prioritising the research questions and receiving feedback from a dedicated patient discussion group in shaping these further. Wider involvement of the public and patients is through the social media discussion group (https://www.facebook.com/groups/205999563141399).

Data collection

All data will be handled in accordance with the UK Data Protection Act 1998. Clinical data will be collected into case report forms (CRFs), which will not include the participant's name or other identifiable data. The participant's initials, date of birth and trial identification number will be used for identification. The clinical trial nurse and the clinical research fellow will be responsible for data collection. All source data from medical records and laboratory and clinical reports will be included in the CRFs. All data will be anonymised and encrypted and will be stored in a locked and dedicated filing cabinet. Research data will be stored electronically on-site in REDCap, a secure and trusted resource. In the long term, data will be stored at Iron Mountain. Records will be stored over the lifetime of the patients, as they will continue to be under the care of the consultant.

Statistical analyses

Data on all key variables will be summarised using means (SD), medians (interquartile ranges) or proportions, as appropriate. The difference in the mean MHISS scores between the intervention and control groups will be estimated using linear regression, adjusted for the baseline score, and presented as an estimate with 95% confidence intervals. The secondary outcomes will be compared between the intervention and control groups using appropriate statistical methods and presented as estimates with 95% CI or just descriptive statistics.
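The primary comparison described above is a baseline-adjusted linear regression (ANCOVA). A minimal sketch with statsmodels, using synthetic data and hypothetical variable names (not the trial dataset), could look as follows.

```python
# Baseline-adjusted comparison of 6-month MHISS scores (ANCOVA-style
# linear regression), as described in the statistical analysis plan.
# Synthetic data and column names stand in for the real trial dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_arm = 25
baseline = rng.normal(29.3, 8.0, size=2 * n_per_arm)  # assumed mean (SD)
group = np.repeat([0, 1], n_per_arm)                  # 0=control, 1=lipotransfer
outcome = baseline - 4.0 * group + rng.normal(0.0, 6.0, size=2 * n_per_arm)

df = pd.DataFrame({"mhiss_6m": outcome, "mhiss_baseline": baseline,
                   "group": group})
model = smf.ols("mhiss_6m ~ mhiss_baseline + group", data=df).fit()

# Treatment effect: adjusted mean difference in MHISS with 95% CI.
est = model.params["group"]
lo, hi = model.conf_int().loc["group"]
print(f"Adjusted mean difference: {est:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```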
Trial funding, organisation and administration

The study funding has been reviewed by the UCL/UCLH Research Office and deemed sufficient to cover the requirements of the study. NHS costs will be supported via UCLH and/or the Local Clinical Research Network. The research costs for the study have been supported by the National Institute for Health Research (NIHR) Research for Patient Benefit (RfPB) scheme (reference number PB-PG-1216-20042; funding amount £245,985; date of award 13th Nov 2017). This research is also supported by the Royal Free Charity to cover the salary costs of the clinical research nurse for the duration of the study (funding amount £100,000; awarded Sept 2017).

Ethics and dissemination

The study received ethical approval from the London Camden and Kings Cross Research Ethics Committee (REC reference 19/LO/0718). The results of the study will be disseminated to a national and international audience of patients, patient user groups, the public, clinicians and health services, with the goal of raising awareness and receiving patient and public feedback for the full multi-centre trial. The design and results of the study will be presented at national and international rheumatology conferences, including those of Scleroderma and Raynaud's UK (SRUK), and published in peer-reviewed rheumatology journals.

Availability of data

The protocol, sample case report forms and participant information are available upon request to the corresponding author.

Trial status

The trial opened to recruitment on 22nd October 2019.

Discussion

Oro-facial fibrosis is recognised as a cause of significant concern in SSc patients, and yet there has been no effective treatment to target skin fibrosis. Emerging studies reporting clinical improvement after autologous lipotransfer are encouraging and support the use of this well-established surgical technique as a novel therapeutic approach in these patients [11-19]. It is postulated that the antifibrotic effects are mediated by adipose-derived stem cells (ADSCs). Compared to other adult stem cell populations, ADSCs have drawn interest due to the ease of isolation, quick processing time and abundance of stores to harvest from [21]. ADSCs have been shown to secrete angiogenic, immunomodulatory and anti-apoptotic factors and can differentiate into adipogenic, chondrogenic and osteogenic lineages [22]. The mechanisms by which ADSCs exert their antifibrotic effects are not yet fully understood. Dermal fibrosis in SSc is a complex process involving pathological deposition and accumulation of extracellular matrix in the dermis. Dermal fibroblasts show upregulated proliferation and collagen synthesis with decreased collagenase activity levels, as a result of alterations in several molecular regulators including cytokines and transcription factors. Transforming growth factor-beta-1 (TGF-β1) and connective tissue growth factor (CTGF) are implicated as exerting a significant role by activating collagen synthesis and enhancing fibroblast action [1,3,11]. The TGF-β1 pathway has been explored as a potential mechanism by which ADSCs reverse fibrosis, as have potential modulation of angiogenesis and of the immune response [23-25]. To date, study cohorts remain limited by small sample sizes, limited outcome measures and short follow-up. A formal clinical trial is necessary to assess the efficacy of autologous lipotransfer as a treatment for oro-facial fibrosis in SSc over time, and how this therapy may be optimised.
Considering the heterogeneity of disease presentation, the involvement of auto-antibodies, the overlap with other rheumatological diseases, and the variable disease progression and degree of fibrosis in SSc subsets, our data will enable subset analyses that will allow assessment of treatment response to guide future therapy tailored to the specific patient. As we increase our understanding of the regulation and reversal of the fibrotic pathway, this can form a foundation upon which we can extrapolate to other fibrotic conditions, including hypertrophic scarring, radiation-induced fibrosis, burns, lichen sclerosis and Dupuytren's disease.

Conflicts of interest: None to declare.

Consent: Not applicable.

Registration of Research Studies: Registered on the ISRCTN registry. Identifier: ISRCTN17793055.

Guarantor: Peter EM Butler.

Declaration of Competing Interest: None to declare.
Vector wave dark matter and terrestrial quantum sensors

(Ultra)light spin-1 particles (dark photons) can constitute all of dark matter (DM) and have beyond-Standard-Model couplings. This can lead to a coherent, oscillatory signature in terrestrial detectors that depends on the coupling strength. We provide a signal analysis and statistical framework for inferring the properties of such DM by taking into account (i) the stochastic and (ii) the vector nature of the underlying field, along with (iii) the effects due to the Earth's rotation. Owing to equipartition, on time scales shorter than the coherence time the DM field vector typically traces out a fixed ellipse. Taking this ellipse and the rotation of the Earth into account, we highlight a distinctive three-peak signal in Fourier space that can be used to constrain DM coupling strengths. Accounting for all three peaks, we derive latitude-independent constraints on such DM couplings, unlike those stemming from single-peak studies. We apply our framework to the search for ultralight B−L DM using optomechanical sensors, demonstrating the ability to delve into previously unprobed regions of this DM candidate's parameter space.

Introduction

Dark matter (DM) dominates the non-relativistic matter content in our cosmos. However, we know exceptionally little about the constituent particles/fields of DM. Apart from the fact that they must interact gravitationally, we do not know their mass, spin, or other potential interactions [1,2]. Astrophysical observations allow for a broad range of masses for the dark matter "particles": 10⁻¹⁹ eV ≲ m ≲ few × M⊙ [3,4]. Theoretical models include particle masses that span this range, with ultralight bosons at the lower end and composite particles/primordial black holes at the upper end [5-10].

Among the variety of possibilities, the case of ultralight, bosonic dark matter is particularly intriguing. These include, for instance, the QCD axion [11-13], axion-like particles and other scalars [14-17], and vector particles [18,19]. A wide-ranging observational and experimental program is currently exploring models that can be tested with contemporary technology. Assuming a local dark matter density ρ ∼ GeV cm⁻³ with a typical virial velocity of v₀ ∼ 10⁻³ c, for particle masses smaller than a few eV the typical particle number within a de Broglie volume becomes sufficiently large to allow for a classical field theory description:

N_dB ≃ (ρ/m) (h/(m v₀))³ ∼ 10⁶⁶ × (10⁻¹⁵ eV/m)⁴.

In our detection scheme, the sensor points in a fixed direction relative to the local tangent plane of the experiment, rotating with the Earth at an angular frequency ω⊕ ≡ 2π/(1 sidereal day) ≈ 7.3 × 10⁻⁵ Hz. We will show that the signal in Fourier space contains three distinct peaks located at the angular frequencies m, m − ω⊕, and m + ω⊕, with the last two arising due to the Earth's rotation. Since ω⊕ corresponds to a mass scale of ∼ 5 × 10⁻²⁰ eV, which is already outside of the allowed mass window, we naturally have m > ω⊕. To resolve these three distinct peaks, we shall therefore enforce that the observation time always satisfies T_obs > 1 d. With this basic setup, we will concentrate on the short observation time regime in this paper: experimental expedition timescales which are much smaller than the coherence timescale of the wave DM field,

T_obs ≪ τ_coh ≡ h/(m v₀²). (1.1)
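As a quick numerical orientation, the coherence condition can be turned into a mass window. The sketch below uses τ_coh ≈ ℏ/(m v₀²) in natural units; O(2π) prefactors differ between conventions, so the numbers should be read as order-of-magnitude estimates.

```python
# Order-of-magnitude check of the coherence condition, Eq. (1.1).
# Convention: tau_coh ~ hbar/(m*v0^2); O(2*pi) prefactors vary between
# references, so treat the results as rough estimates only.
HBAR_EV_S = 6.582e-16  # hbar in eV*s
V0 = 1e-3              # typical virial speed in units of c

def tau_coh_seconds(m_ev):
    """Coherence time (s) for a DM mass m_ev given in eV."""
    return HBAR_EV_S / (m_ev * V0**2)

def max_mass_ev(t_obs_s):
    """Largest mass (eV) whose coherence time exceeds t_obs_s."""
    return HBAR_EV_S / (t_obs_s * V0**2)

DAY = 86400.0
print(f"tau_coh(5e-15 eV) ~ {tau_coh_seconds(5e-15) / DAY:.1f} d")
print(f"m_max(T_obs = 1 d) ~ {max_mass_ev(DAY):.1e} eV")
# ~1.5 d and ~8e-15 eV: consistent, at the order-of-magnitude level,
# with the quoted window m <~ 5e-15 eV for day-scale expeditions.
```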
This means that, at most, our framework is valid for masses that give a coherence time of at least 1 d, corresponding to m ≲ 5 × 10⁻¹⁵ eV. For longer expedition times, this upper mass limit decreases: for example, for a runtime of 1 yr, we have m ≲ 10⁻¹⁶ eV. This gives us the mass range of 5 × 10⁻²⁰ eV ≲ m ≲ 5 × 10⁻¹⁵ eV where our analysis is appropriate. We limit ourselves to working within the coherent regime for two reasons. Foremost, we wish to highlight how the inherent stochasticity of the full 3-dimensional vector field should be treated when drawing inferences. This randomness is roughly only manifest for observation times shorter than a coherence time, since the random variables dictating the behaviour of the field can be treated as being sampled once per coherence patch. For longer observation times, this randomness is averaged over. Secondly, remaining in the coherent regime allows us to treat the signal as appearing within only three bins in Fourier space. For observation times outside of this regime, we would instead begin to resolve the shape of the dark matter halo velocity distribution, complicating our inference. Constraining ourselves to this regime thus simplifies the problem, emphasizing the role that the vector nature of this type of dark matter plays when performing inferences.

Given the recent and substantial research and development efforts in quantum technologies, we are specifically interested in timely studies aimed at understanding the potential of quantum optomechanical sensors in the direct detection of dark matter. Mechanical detectors have a rich history in tests of gravity, including LIGO, and in recent years there has been a surge in efforts to explore their potential in quantum sensing for fundamental physics investigations (see reviews [52-54]). We are only beginning to understand the new opportunities for dark matter searches [55-60] in light of significant advances in quantum readout and control of mechanical sensing devices using optical or microwave light [61-63]. Accurately modeling the dark matter signal and the associated statistics is crucial for drawing representative inferences, guiding these quantum research and development efforts, and aiding in experimental design. This is particularly relevant for large-scale accelerometer projects, such as the one proposed by the Windchime collaboration [64]. Such sensors have demonstrated potential as powerful probes for the wavelike signature produced by ultralight dark matter [65].

In this paper, we devise the analysis strategy for ultralight vector dark matter in the coherent regime to draw more representative exclusion inferences in the future. We begin by laying the theoretical groundwork for this DM paradigm in Section 2. Considering equipartition between the longitudinal and transverse modes of the VDM field, we derive the associated signal in both the time and frequency domains. We do this by taking into account both its stochastic and polarization properties, as well as accounting for the rotation of the Earth. We then perform statistical analyses of the DM signal in the frequency domain in Section 3.
We derive a limit on a generalised parameter that is independent of the vector dark matter model and experimental parameters, which can be recast for concrete choices of them. Unlike other studies that focus solely on a single peak, our findings reveal that the signal power is distributed across three distinct peaks. Accounting for this distribution ensures the retention of constraining power, regardless of the experiment's location on Earth. Finally, in Section 4, we apply our framework to a concrete dark matter model and sensor: B−L dark matter and the canonical optomechanical light cavity.

We have also included 5 appendices in this paper. In Appendix A, we discuss the stochastic behaviour of the vector field given an unequal distribution of power amongst its longitudinal and transverse modes, as well as their ultimate equipartition (owing to non-linear gravitational dynamics). In Appendix B, we derive the marginal likelihood for the three-peak signal and also show that the powers in the three peaks are uncorrelated. In Appendix C, we discuss the applicability of our results in the context of the gradient of a scalar, deriving limits on the appropriate generalized parameter. In Appendix D, we derive the likelihood in the case that the vector field is 'linearly polarized' and also show that the covariance matrix is not diagonal. Finally, in Appendix E, we derive an updated limit on the gauge coupling of a new, long-range B−L coupled fifth force given the latest MICROSCOPE results. Throughout the rest of this paper, we will work in natural units, whereby ℏ = c = 1. Moreover, we sometimes quote the DM mass in units of Hz where it is more appropriate to treat it as an angular frequency. The conversion from Hz to eV is given by the relation m = ℏω/c² ≈ 4.14 × 10⁻¹⁵ eV [ω/(2π Hz)].

The Dark Photon Field

The random vector field Â of mass m, at any given location and time, can be decomposed into a superposition of plane waves, where, in the non-relativistic limit and assuming free-field evolution, each Fourier mode of the complex 3-vector field Ψ(x, t) evolves as Ψ̂_k(t) = Ψ̂_k e^(−i k²t/2m). We use hatted notation to indicate that a quantity is stochastic. With non-linear gravitational clustering, we expect an equipartition between longitudinal and transverse polarizations of the plane waves in our vicinity, regardless of whether the early-universe production mechanism favors one over the other. In Appendix A, we discuss this equipartition in greater detail, providing evidence for it via halo-formation 3D simulations of the Schrödinger field Ψ. Given this equipartition, the spectrum f_k is set by the local DM density, where V is the volume and ρ is the local mass density. To be explicit, we work with a finite volume V so that k is discretized.
We can define Ψ̂_k = √(f_k) ε̂_k such that, for every k, ε̂_k is a set of 6 real (3 complex) i.i.d. random variables (independent and identically distributed) with unit norm, ⟨ε̂†_k · ε̂_p⟩ = δ_{k,p}. In other words, for every k there are 5 real random numbers that are uniformly distributed on a unit S⁵. With this, a realization of the random vector field is obtained by summing over these modes.

In this paper, we are interested in the short observation time limit, T_obs ≪ τ_coh = m/k₀² (where k₀ = m v₀ denotes the typical wavenumber). In this case, we can neglect the k²/2m phase in the time-varying sinusoid. Subsequently, we have a summation over many monochromatic waves (all oscillating with frequency m), with different amplitudes and phases for different values of k. Assuming that the halo function is well behaved (meaning any n-th moment Σ_k kⁿ f_k is finite), we can use the central limit theorem: the field Â(x, t) is then described by complex Gaussian random amplitudes ŵ. For T_obs ≪ τ_coh, we can also safely assume that the distance the Earth sweeps during the observation time is negligible compared to the de Broglie length. As such, we shall set x = y and further set x = 0, assuming statistical homogeneity of the DM field. Decomposing the complex Gaussian random variables ŵ into Euler form, ŵ_j = α_j e^(−iφ_j), we may write Â_j(t) ∝ α_j cos(mt + φ_j). The three α_j are independent Rayleigh-distributed random variables, while the three φ_j are independent uniformly distributed angles (ranging from 0 to 2π):

p(α_j) = 2 α_j e^(−α_j²), p(φ_j) = 1/(2π). (2.5)

The three components become statistically independent, mimicking three independent scalars. Also note that our result is similar to the case of the gradient of a scalar [46], in the sense that there are 6 independent normal random variables describing the DM field at a given location on short time scales.

Equipartition and Ellipses

In this subsection, we further justify our assumption of equipartition in the previous section.

Equipartition and Vector Field Ellipses: There exist various production mechanisms where disparate amounts of longitudinal (spin-0) and transverse (spin-1) helicities are produced [20,21,23,25]. That is, for every k, there could be different amounts of longitudinal and transverse components of Ψ_k. However, owing to non-linear gravitational clustering, such disparity within the two sectors is expected to have disappeared by now within our local cosmic vicinity. With non-linear gravitational clustering, we expect virialization, leading to the equipartition of energy within all three degrees of freedom (dof). This would result in 2/3 of the total power being contained within the (two) transverse dof and the remaining 1/3 within the longitudinal ones. In this case, the vector field at each point (within a coherence region), which is formed out of a sum over a large number of Fourier modes, roughly traces out a randomly oriented ellipse (as opposed to oscillating along some fixed direction). This can be seen by noting that, within time scales and length scales much smaller than the coherent ones, the spin current [31,66] is negligible since it scales with the typical speed σ. Hence, the local spin density (given by s = A × Ȧ) is conserved, implying that the local field vector A can execute a two-dimensional sinusoidal motion in general, i.e. it sweeps an ellipse. See Fig. 1 for a visualization and a description of this evolution, and see the video linked in the Fig. 1 caption. Such elliptical motion is appropriately used in, for example, [67-69].
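The fixed-ellipse picture is easy to check numerically: with the three components drawn as above, A(t) sweeps a closed ellipse, and the spin density s = A × Ȧ is constant in time. A minimal sketch (ours, in arbitrary units) follows.

```python
# Verify that A_j(t) = alpha_j*cos(m*t + phi_j) traces a fixed ellipse:
# the spin density s = A x dA/dt should be constant over a period.
# Arbitrary units; alpha ~ Rayleigh, phi ~ uniform, per Eq. (2.5).
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.rayleigh(scale=np.sqrt(0.5), size=3)  # <alpha_j^2> = 1
phi = rng.uniform(0.0, 2.0 * np.pi, size=3)
m = 2.0 * np.pi  # Compton angular frequency (arbitrary units)

t = np.linspace(0.0, 1.0, 2001)  # one Compton period
A = alpha[:, None] * np.cos(m * t + phi[:, None])          # shape (3, N)
Adot = -m * alpha[:, None] * np.sin(m * t + phi[:, None])  # dA/dt

s = np.cross(A.T, Adot.T)  # spin density at each time, shape (N, 3)
print("spin density spread:", np.ptp(s, axis=0))  # ~0: a fixed ellipse
```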
Random Linear Polarization: Some studies make the assumption of linear polarization, i.e. the vector field oscillates along a line in a fixed direction, with this direction changing randomly every coherence time (see, for example, [41, 70-72]). However, as we argued above based on equipartition, an elliptical motion of the vector field is what one should expect. Nevertheless, we note that a preference for linear (circular) polarization could be generated by allowing for a non-gravitational attractive (repulsive) self-interaction [73-75]. While these works argue for such a preference in isolated soliton-like configurations, whether a significant preference for linear (circular) polarization within each coherence patch can be achieved dynamically remains to be seen.

Fixed Direction: There could exist a misalignment production mechanism for vectors where the entire observable Universe (or at least a large portion of it) has the vector field oscillating in a fixed direction [76,77]. Such setups indeed lead to a fixed direction of oscillation of the vector field (apart from randomly oriented small perturbations). This would be distinct from the equipartition case we consider. However, there are difficulties associated with their production mechanisms [78-80].

The Detector Signal

Let ζ(t) be the "antenna" direction; the time series signal is then the projection of the dark electric field onto this axis, Ŝ(t) ∝ g ζ(t) · E(t). Here, g is some overall coefficient containing the coupling constant and any other possible model parameters, and we have used the fact that the produced dark electric field is approximately given by E ≈ ∂ₜÂ in the non-relativistic limit of the vector DM field. In this work, we will take the antenna to always point towards the zenith; however, we will comment on this assumption in Section 3. With ϕ denoting the latitude (where ϕ = 0° and ϕ = 90° respectively correspond to the equator and poles), and ω⊕ denoting the angular rotation frequency of the Earth, we have ζ(t) = (cos ϕ cos(ω⊕t), cos ϕ sin(ω⊕t), sin ϕ)ᵀ. Then, the signal Ŝ(t) is

Ŝ(t)|_{T≪τ_coh} ≈ g √(2ρ/3) [α_x cos ϕ cos(mt + φ_x) cos(ω⊕t) + α_y cos ϕ cos(mt + φ_y) sin(ω⊕t) + α_z sin ϕ cos(mt + φ_z)]. (2.6)

Figure 1 (caption): At each spatial point within a coherence region of size 2π/(m v₀) (the scale of the interference "granules"), the dark matter field A (orange vector) traces out an approximately fixed ellipse (for 2π/(m c²) = τ_Comp ≪ t ≪ τ_coh = 2π/(m v₀²)). These ellipses change their size and orientations on the coherence time scale and smoothly connect with each other from one coherence region to another. For the duration of the measurement, we expect to find ourselves within one such coherence region. The detection signal S(t) is proportional to the dot product of the dark electric field E ≃ ∂ₜA and the detector orientation ζ(t) (green arrow). Also see Fig. 2. The snapshot of the simulated field (leftmost panel) is taken from [31]. For a movie of the vector field behaviour based on the simulations, visit https://www.youtube.com/watch?v=bbw6yFRLS7s.

Ultimately, we perform our analysis in Fourier space, considering the (one-sided) periodogram generated by Eq. (2.6). This is proportional to the mod-square of the discrete Fourier transform of Ŝ(t) and is given by

P(ω) = (2Δt/N) |Σₙ Ŝ(nΔt) e^(−iωnΔt)|², with n running from 0 to N−1. (2.7)

Here, ω is the angular frequency, N is the number of points sampled in the time domain, and Δt ≡ T_obs/N is the sampling interval. The factor of two accounts for the 'folding' of the result from negative angular frequencies to positive angular frequencies, producing the one-sided periodogram that ignores the former. We define the dimensionless parameter β (Eq. (2.8)) in terms of the signal amplitude A ≡ g √(2ρ/3), the observation time, and the noise power spectral density (PSD) σ. Typically, A is an acceleration or a force for accelerometer studies. With these definitions, the signal periodogram normalised by the noise PSD, which we call the excess power λ(ω) ≡ P(ω)/σ², is given by Eq. (2.9) and has support only at three angular frequencies. Here, we have defined the 'sum' and 'difference' angular frequencies, s ≡ m + ω⊕ and d ≡ m − ω⊕, respectively; the δ_{a,b} appearing in Eq. (2.9) are Kronecker delta functions over angular frequencies. We note that σ can be frequency dependent. This means that the expected noise PSD within each signal-containing bin can be different. However, for the remainder of this work, we take σ to be approximately constant, which is a good approximation over the frequency range of the signal, Δω ∼ ω⊕ ≡ 2π/T⊕ ≈ 7.3 × 10⁻⁵ Hz, where T⊕ = 23.93 h is the Earth's sidereal period. We comment on this approximation further in Section 4, when we consider a concrete sensor and DM model.

Fig. 2 shows an example of a signal in the short observation time regime, in both the time (Eq. (2.6)) and Fourier (Eq. (2.9)) domains. To generate it, we have taken A = 1 [A], m = 2π Hz, ϕ = 45°, T_obs = 10T⊕, and, for the purposes of fast convergence, T⊕ = 100 s. Here, [A] are the units of A, which depend on the quantity being measured by the experiment. We have also taken α ≡ (α_x, α_y, α_z)ᵀ = (1, 0.7, 0.2)ᵀ and φ ≡ (φ_x, φ_y, φ_z)ᵀ = (π/2, π/4, π/3)ᵀ. When running our future simulations, we sample these six variables independently from their respective distributions.

Figure 2 (caption): The signal in the time (left) and Fourier (right) domains. It depends on the shape, orientation and period of the vector field A(t), as well as the detector axis ζ(t), which rotates with the Earth, and a model-dependent coupling g (also refer to Fig. 1). The time-domain signal is shown over several coherence times (upper) and sidereal days (lower), the latter being the expected observed signal in our sensor; the inset shows the oscillation of the signal over several Compton times. Note the appearance of three peaks: a single Compton peak at frequency m in the middle and two additional ones appearing due to the Earth's rotation, a difference peak at d ≡ m − ω⊕ and a sum peak at s ≡ m + ω⊕. The time-domain signal is simulated from Eq. (2.6), and its periodogram is taken via Eq. (2.7); the observation time is T_obs = 10T⊕, and the results are expressed in terms of A ≡ g √(2ρ/3) (where ρ is the local DM density). For additional details on the simulation, see text. The peaks are contained within bins whose widths correspond to the resolution of the frequency-space data, Δω = 2π/T_obs. Note that the general elliptical behaviour of the DM field allows for different power in the sum and difference peaks; in contrast, a linearly polarized DM field would lead to equal power in these peaks.

There are three characteristic timescales within the signal: the Compton scale, the Earth's rotation period, and the coherent (de Broglie) scale. The first two of these are present in the larger panels of Fig. 2.
The Earth's rotation period is evident from the time-domain signal, which we have shown for three full rotation periods. The Compton scale is much faster than this scale (see inset), making the signal appear solid in shape. Crucially, we see that the vector ULDM field leaves a characteristic three-peak signal in the Fourier domain. One peak is present at the Compton frequency ω = m, which previous frequency-space analyses have focused on [37,39]. However, a further two peaks manifest as a result of the Earth's rotation, spaced ω⊕ away from the Compton peak. These additional peaks, the use of which has been ignored in previous accelerometer analyses, only appear once T_obs ≥ T⊕; shorter observation times do not give us enough resolution in the frequency domain to resolve them. We call the peak at the Compton frequency the Compton peak, that at s the sum peak, and that at d the difference peak.

We argued earlier that, within a coherence patch, we expect the vector field to undergo an elliptical motion with period 2π/m (see Fig. 1), as opposed to the linear one commonly used in the literature. In both the linear and elliptical cases, the time-domain signal, S(t), is sinusoidal and contains the angular frequencies m and m ± ω⊕. Repeating the analysis of [41] in the time domain, but without the linear polarization assumption, we expect qualitatively similar results (with more statistical spread on the time-averaged power). However, there are some important differences when analysing the expected signal in Fourier space.

In the elliptical case, the power contained in the m and m ± ω⊕ peaks is statistically uncorrelated (see Appendix B). On the other hand, for the linear polarization case, the power at m and m ± ω⊕ is correlated, with equal power in the sum and difference peaks. This can be seen by noting that, in this case, all the components of the vector are in phase. The statistical independence in the elliptical case significantly simplifies our analysis pipeline for projected sensitivities. Furthermore, the distinction in power at m ± ω⊕ is also relevant in the case of a detection, since we would expect different powers in the elliptical case. We show the statistical independence arising from the elliptical case in Appendix B and the statistical dependence of the sum and difference peaks arising from the linear polarization case in Appendix D.

Statistical Analysis

We now consider the projected exclusion limits that a generic experiment would be able to set using our three-peak analysis. To do this, we use a series of likelihood-ratio tests.

Signal Likelihood

For our likelihood, we follow a hybrid frequentist-Bayesian approach, defining a marginalized likelihood in which all nuisance parameters are integrated out. In our case, these are the random Rayleigh parameters, α, and the random uniform DM phases, φ. Such a hybrid approach has already been used in the context of ultralight bosonic dark matter [43,81]. Our work differs from Ref. [81] since they focused on an axion-like signal as opposed to that from vector DM. It goes beyond Ref. [43] since they did not consider the peaks arising from the rotation of the Earth in their analysis. The full likelihood in Fourier space is well known to follow a non-central χ² distribution with two degrees of freedom [82]. In our case, the non-centrality parameter is the total signal amplitude in Eq. (2.9).
The marginalized likelihood is then given by Eq. (3.1), where Π describes the priors of our random parameters and p is the random variable we expect to measure in an experiment. We can express the result of Eq. (3.1) completely analytically and provide a full derivation of it in Appendix B, only quoting the final result here. The likelihoods in the signal-containing bins, which we call the Compton and sum/difference likelihoods, are given, respectively, in Eq. (3.2). Note that, when β = 0, we correctly retrieve central χ² distributions in each bin, corresponding to the background-only case. The result for the Compton peak matches that of Refs. [43,81]. The result for the sum/difference peaks is new. For completeness, we also derive the equivalent of Eq. (3.2) for the linear polarization case in Appendix D.

In Fig. 3, we show a comparison between a numerical simulation of the signal likelihoods and the analytical results of Eq. (3.2). For the former, we begin from Eq. (2.6), simulating 10⁶ realisations of the time-domain signal. We take A = 1 [A], T_obs = 10T⊕, σ² = 5 [A]² Hz⁻¹, and ϕ = 45°. For the purposes of fast convergence in our simulations, we take m = 2π Hz and T⊕ = 100 s. For each run, we sample α and φ from independent Rayleigh and uniform distributions, respectively, as given by Eq. (2.5). To generate the time-domain signal, we add Gaussian-distributed white noise with zero mean and variance given by σ_t² = σ²Δt. Finally, we compute the periodogram of the signal following Eq. (2.7), dividing by σ² to produce the excess power in each frequency bin. The resulting normalised distributions of p(ω) for the Compton bin (ω = m) and the sum/difference bins (ω = m ± ω⊕) are in excellent agreement with our analytical result. We also show the likelihood governing the deterministic result for the Compton peak: a non-central χ² with two degrees of freedom and non-centrality parameter given by the last term of Eq. (2.9). Without the nuisance parameter α_z² integrated out, we instead set it to its expectation value: ⟨α_z²⟩ = 1. We see that higher values of p are favoured in this case, which would ultimately lead to an overly aggressive constraint on β.

Figure 3 (caption): Here, ϕ is the latitude of the experiment, p (a random variable) is the value of the measured excess power, and β is defined as per Eq. (2.8). Also shown as a dashed line is the deterministic result for the Compton peak, where the stochastic variable α_z² is set to its expectation value, ⟨α_z²⟩ = 1.

The full likelihood over all frequency space is then given by the product of the likelihoods in each frequency bin, where p_i represents the excess power density in the i-th frequency bin, p is the full data vector, and the product runs over all N_bins frequency bins. Ultimately, since our signal only manifests in three bins, it suffices for us to consider only those bins that could potentially contain a signal, and we may ignore all other bins. We can express the likelihood in this way because each bin is statistically uncorrelated, as we show in Appendix B.
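The Rayleigh marginalization for a single bin can also be checked without the full time-domain pipeline: draw the nuisance amplitude, draw the binned power from a non-central χ²₂, and compare the histogram against the marginalized density. In the minimal sketch below (ours), the signal scaling s is an illustrative placeholder for the appropriate β- and ϕ-dependent coefficient of Eq. (2.9).

```python
# Monte Carlo check of the marginalized single-bin likelihood:
# p | alpha ~ noncentral chi^2 (2 dof, noncentrality s*alpha^2),
# with alpha^2 ~ Exp(1) (alpha Rayleigh, <alpha^2> = 1).
# 's' is an illustrative stand-in for the bin's signal coefficient.
import numpy as np

rng = np.random.default_rng(1)
s = 4.0       # placeholder signal strength in this bin
n = 10 ** 6

alpha2 = rng.exponential(1.0, size=n)          # Rayleigh amplitude squared
p = rng.noncentral_chisquare(2.0, s * alpha2)  # measured excess power

# Marginalizing the noncentral chi^2_2 over an exponential amplitude
# gives an exponential with mean 2 + s (cf. the paper's Appendix B).
hist, edges = np.histogram(p, bins=200, range=(0, 50), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
analytic = np.exp(-mid / (2.0 + s)) / (2.0 + s)
print("max abs deviation:", np.max(np.abs(hist - analytic)))  # small
```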
This is in contrast with the analysis performed in Ref. [46], where a similar study was conducted for the gradient of a scalar in the time domain. There, a complicated covariance matrix had to be computed to account for correlations in the signal at different times. In Fourier space, these covariances disappear. The power of performing this analysis in the frequency domain is thus not only that the signal is contained within a small number of bins, but also that these bins are statistically independent, which allows us to treat the statistics in a significantly simpler way.

Crucially, once the latitude of the experiment, ϕ, is fixed, the likelihood depends on the product of all experimental variables only via the dimensionless parameter β. This means that we can set a more holistic limit that is independent of the specifics of an experiment. Once the form of A (which depends on both the experiment and the DM model), the observation time T_obs, and the noise profile σ are known, the ensuing limit on β can be recast to one on the model parameters of interest. This makes our analysis, both the results and the overall logic, as generally useful as possible.

3.2 Projected Exclusions

To derive our limits, we construct the one-sided log-likelihood-ratio test statistic defined in Eq. (3.4), where β̂ is the value of β which maximises the likelihood given the observed data set p, characterising the best-fit model. This statistic is defined as a piecewise function, as we only expect excess signals to be disfavoured when excluding a value of β in a one-sided test. This corresponds to values of β greater than the best-fit value; values below this are deemed under-fluctuations and considered consistent with observation. This statistic tells us how consistent the data are with a signal defined by β compared to the best-fit model, with zero representing perfect consistency and large values indicating high inconsistency.
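A minimal sketch of this statistic is given below. Here `loglike(beta, p)` stands for the logarithm of the product of the per-bin likelihoods of Eq. (3.2); it is an assumed helper, not reproduced from the paper, and the optimisation bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def q_beta(beta, p, loglike):
    """One-sided log-likelihood-ratio statistic of Eq. (3.4)."""
    fit = minimize_scalar(lambda b: -loglike(b, p),
                          bounds=(0.0, 1e4), method="bounded")
    beta_hat = fit.x                     # best-fit signal strength
    if beta <= beta_hat:                 # under-fluctuation: consistent with data
        return 0.0
    return 2.0 * (loglike(beta_hat, p) - loglike(beta, p))
```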
The general idea is that we want to exclude those values of β that lead to excessive values of q_β. We do this by considering the distribution f(q_β) that we expect to arise when many hypothetical experiments perform a measurement. Generating the α_conf % confidence level (CL) limit then depends on finding the value of the test statistic, q_β^lim, for which an experiment has an α_conf % probability of attaining that value or below; that is, we solve for q_β^lim in Eq. (3.5). From q_β^lim, we then find the β^lim which gives this value for the test statistic. This is our α_conf % CL limit on our parameter of interest, β.

Ultimately, we compute 90% CL limits (α_conf = 0.90) via a series of Monte Carlo (MC) simulations, following the above procedure in each MC run. For a given choice of β, we begin by simulating the distribution f(q_β). We do this by simulating 10⁶ experiments, sampling the data p for each signal bin directly from the verified likelihoods given in Eq. (3.2). In each run, we find β̂ and compute the distribution of q_β for a range of β and ϕ values. We then fit our distributions to the ansatz of Eq. (3.6), where ϖ ∈ [0, 1] is a scale factor ensuring that the distribution is normalised to 1. This ansatz is inspired by the asymptotic result of Chernoff (where ϖ = 1/2), which is itself a limiting case of Wilks' theorem for when the true value of β lies on the boundary of its domain (which is true in our case, since our background-only data set is defined by β = 0) [84, 85].

For each run, we find the best-fit value of ϖ. From Eq. (3.6), we can then invert Eq. (3.5) to find q_β^lim; for the α_conf % CL limit, this yields the closed-form result of Eq. (3.7) (a sketch of this inversion is given at the end of this subsection). In generating our distributions, we find a weak dependence on the chosen values of β and ϕ; however, this leads to only a small change in the ultimate value of q_β^lim. We take the mean value over our fits as its best estimator, yielding q_β^lim ≃ 2.43, and use this throughout the rest of our study. Note that this is close to the asymptotically expected result of q_β^lim ≃ 2.71 for a one-degree-of-freedom problem [83]. We could have derived a similar result by integrating the resulting q_β histograms; however, we attain a good fit to Eq. (3.6), and it provides us with a closed-form solution for q_β^lim, as per Eq. (3.7).

With q_β^lim at hand, we may derive the main result of this section, β^lim. We once again follow an MC approach, generating 10⁶ data sets consistent with a background-only observation (setting β = 0), and finding, for each hypothetical experiment, the value of β for which Eq. (3.4) returns q_β = q_β^lim. This produces a distribution of limits, for which we take the median as our best estimator. We also produce the 1σ error bars on our limits by finding the 16th and 84th percentiles of the distribution of β^lim. We show the 90% CL limit arising from our three-peak analysis in Fig. 4. Also shown are the corresponding results from two single-peak analyses focusing solely on the Compton peak and on either one of the sum or difference peaks. These results follow the same MC procedure as above, but take as the full likelihood only the Compton or the sum/difference likelihood given in Eq. (3.2). We see that the three-peak analysis produces a limit that is largely latitude-independent, rising slightly towards the pole. This is because, at the pole, the sensitivity axis only has a component parallel to the Earth's rotation axis and is thus only able to pick out the Compton peak.

This latitude-independent limit is in contrast with the analyses that focus on only single peaks, which are both highly sensitive to where the experiment is placed. For the study focusing on the Compton peak, the constraining power is optimal at the pole, where all of the power is contained in the Compton frequency bin, and it rapidly declines towards the equator, where the Compton peak disappears. Note that the Compton-only and three-peak results join at the pole, with no difference between the approaches there. Conversely, for an analysis focusing on one of the sum or difference peaks, the situation is the opposite. In this case, the results of the single-peak and three-peak methods do not converge even at the equator, since only half of the total power is contained in any single one of the sum or difference peaks there; the three-peak analysis captures all of this power, whereas the single-peak analysis misses half of it. Thus, the strength of our analysis is that the constraining power is retained no matter where an experiment is placed, such that its latitude is rendered largely irrelevant from the viewpoint of constraining a ULDM signal.
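Returning to the inversion of Eq. (3.5) described above, a minimal sketch follows. The explicit ansatz is an assumption here: following the Chernoff-inspired description in the text, we take f(q) = ϖ δ(q) + (1 − ϖ) χ²₁(q); the paper's exact Eq. (3.6) is not reproduced in this excerpt and may differ in detail.

```python
from scipy.stats import chi2

def q_lim(alpha_conf, w):
    """Invert alpha_conf = w + (1 - w) * F_chi2_1(q_lim) for q_lim."""
    return chi2.ppf((alpha_conf - w) / (1.0 - w), df=1)

# Pure Chernoff case (w = 1/2) gives ~1.64; the paper's fitted w yields ~2.43.
print(q_lim(0.90, 0.5))
```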
We emphasize that the interpretation of Fig. 4 is as a set of exclusion lines, whereby background pseudodata is generated and the assumed DM signal strength is constrained. The key point is that the level of this constraint depends on the assumption one makes about the nature of the DM signal given a non-detection. Taking this signal to be only a single peak in Fourier space leads to constraints that are generally dependent on the latitude of the experiment, rapidly weakening towards latitude extremes. On the other hand, employing the full signal model consisting of three peaks yields stronger constraints that are almost independent of the experiment placement.

Throughout the above analysis, we assumed that the sensitivity axis pointed in the zenith direction. However, we can relax this assumption and consider what our results would look like if this axis pointed in some different direction, for instance in directions perpendicular to the zenith, namely the East/West and North/South directions. If North/South-pointing, all of the curves in Fig. 4 would be flipped about the line ϕ = 45°. While the strongest limit for the Compton peak would now occur at the equator instead of the pole (and vice versa for both the sum and difference peaks), crucially we would still retain a largely latitude-independent constraint for our three-peak analysis. This is because, throughout the experiment, the directionality of the detector traces out a cone, making it sensitive to the vector DM power in all three directions. If, instead, the axis pointed East/West, we would only see the sum and the difference peaks. This is because, throughout the experimental expedition, the directionality of the detector is restricted to lie on a plane (which is necessarily perpendicular to the rotation axis of the Earth). Therefore, it is always insensitive to the power contained along the perpendicular direction, which is tied to the standalone Compton peak. One final possibility is when the directionality of the detector traces out a line throughout the experiment. This is only possible when it points parallel to the Earth's rotation axis (at any given latitude). In this case, we would naturally be oblivious to the Earth's rotation and hence to the sum and difference peaks, and we would only be able to resolve the Compton peak.

In summary, the best-case scenario is when the detector's sensitivity axis traces out a cone. In this case, we capture all three peaks, since we are sensitive to the vector DM power across all three directions (the Earth's rotation axis and the two orthogonal directions).

For comparison, we also derive the scaling relationship of the limits on β under the assumption of linear polarization in Appendix D, using a simpler two-peak Asimov analysis. We find that the limits are mostly affected towards the equator, with the largest scaling factor being ∼ 1.2. This difference becomes larger for higher desired confidence levels.

4 Application to Accelerometer Studies

As an application of our analysis strategy, we consider a concrete sensor and DM model. As our sensor, we take the canonical optomechanical light cavity, which can be used to perform acceleration measurements by continuously measuring the distance between fixed and movable cavity mirrors. As our model, we consider 'dark photon' DM stemming from a gauged U(1)_{B−L} symmetry, leading to wavelike DM in the ultralight regime that couples to the difference between baryon number, B, and lepton number, L.
Gauging such a charge is popular in the context of particle physics, since it naturally leads to the introduction of right-handed neutrinos and, hence, can account for the non-zero neutrino masses [86–88]. Motivation for such an ultralight gauge boson can also be found in [89, 90]. The associated Lagrangian density contains the standard terms

L ⊃ −(1/4) A′_{μν} A′^{μν} + (1/2) m² A′_μ A′^μ − g_{B−L} j^μ_{B−L} A′_μ,

where A′_{μν} ≡ ∂_μ A′_ν − ∂_ν A′_μ is the field strength tensor of the new field, m is the mass of the field, and j^μ_{B−L} is the B − L vector current. Explicitly, it is given by j^μ_{B−L} = Σ_f Q^f_{B−L} f̄ γ^μ f, where the sum runs over all fermions in the SM and Q^f_{B−L} is the B − L charge of the fermion f. A multitude of studies have considered this combination and set limits or projections on B − L coupled DM [36, 37, 39, 42, 43, 45].

However, those works that performed a Fourier space analysis only had access to the Compton peak. This is because the total experiment integration times had to satisfy T_obs ∼ 1 h < T⊕ to maintain experimental stability, dictated by retaining the coherence of the laser in optomechanical cavity setups. In the event that one can instead measure for at least ∼ 1 day, the other peaks can be resolved. As we discussed in Section 3, an analysis for an axial sensor that does not account for these additional peaks is sub-optimal, as it fails to capture the full signal and therefore suffers from a signal loss at a range of latitudes. Moreover, ignoring the randomness of the nuisance variables leads to an overly aggressive constraint, as was illustrated in Fig. 3 and as was also pointed out in Refs. [43, 81]. Our more holistic three-peak strategy, which also accounts for this stochasticity, retains the full signal and is largely latitude-independent in its constraining power. Therefore, this choice of sensor and DM model makes for an excellent case study with which to showcase the improved constraining power of our method. Furthermore, a proper vector treatment of the DM field in Fourier space, which includes the stochasticity of the ULDM field variables and the effect of the rotation of the Earth, has not previously been done.

4.1 Recasting Generalised Limits onto B − L Dark Matter

To recast our limits on β shown in Fig. 4 into one on the parameter of interest for this model, the gauge coupling strength g_{B−L}, we must define four quantities. Firstly, we must make clear what the quantity A is for this experiment, which will depend on both the model and the signal of interest for this sensor. Secondly, we must choose an observation time, T_obs. Thirdly, we must specify the concrete noise profile, σ(ω), for this type of experiment. Lastly, and least importantly following from our discussion above, we must choose a latitude for the experiment. Once these are known, Eq. (2.8) can be rearranged for g_{B−L}, giving us our model- and experiment-specific 90% CL limit.

For B − L coupled dark photon dark matter, the relevant signal is a differential acceleration.
This is given by Eq. (2.6) in the time domain, with the now concrete choice A = g_{B−L} ∆_{ij} a₀ for the signal amplitude. Here, g_{B−L} is the gauge coupling strength of the model, ∆_{ij} is the differential B − L charge per nucleon between materials i and j, and a₀ ≡ √(2ρ/3) u⁻¹ ≃ 10¹² m s⁻² is a characteristic acceleration imparted by the field to each nucleon (with u being the atomic mass unit). For most materials, ∆_{ij} ∼ 0.1, which is the value we take in the following analysis [37]. Note that, for this particular model, g ≡ g_{B−L} ∆_{ij}/u.

For our observation time, we take T_obs = 10 T⊕, so as to be firmly in the regime where the three peaks can be resolved. For light cavities, which may only be able to remain coherent over the scale of hours rather than days, such a runtime is optimistic. However, since our aim here is merely to showcase how our method can be used concretely, and to compare with the works of Refs. [37, 39], we do not see this as an issue. Our strategy is general and can be applied to any axial sensor and vector-like DM model, and we have settled on this configuration only for the sake of argument; other sensor technologies, such as magnetically levitated sensors, do not have this issue. Moreover, multiple cavities or data-stacking techniques can be employed to mitigate it.

We model the background according to Refs. [37, 39]. Namely, we split the total expected background PSD σ², which we will write as S_aa as per convention, into thermal, shot-noise, and back-action components, S_aa = S_aa^th + S_aa^SN + S_aa^BA. In what follows, we do not discuss the detailed forms of these noise terms; we instead refer the reader to Refs. [54, 91] for a review of the topic. The thermal component is set by γ, the coupling between the sensor and the thermal bath of temperature T, the sensor mass m_s, and Boltzmann's constant k_B (an illustrative evaluation is sketched below). Typically, we parametrise the thermal coupling as γ ≡ ω₀/Q, where ω₀ is the resonance frequency of the cavity and Q is its quality factor. The measurement-added noise terms, the shot noise and the back-action noise, depend on the cavity parameters: κ is the cavity loss, which quantifies the efficiency of the optical modes of the cavity, L is the cavity length, ω_L is the angular frequency of the laser, and P_L is its power. They also involve the mechanical susceptibility of the sensor and the susceptibility of the cavity.

Our choices for all of the above parameters, except for the laser power, which we expand on below, are summarised in Table 1. These are in keeping with the choices made in Refs. [37, 39]. The choice of where to tune the laser power is critical to giving us competitive limits over a wide range of dark matter masses. We have found that we can achieve excellent limits at low dark matter masses, which are of most relevance to our work, by tuning the laser power such that the back-action and shot-noise components are minimised at low frequencies, that is, by finding the P_L for which ∂_{P_L}[S_aa^SN(ω → 0) + S_aa^BA(ω → 0)] = 0 for a given choice of ω₀. For the resonance frequencies we have considered, the required laser power ranges from P_L ∼ 10⁻¹² W for f₀ = 0.1 Hz to P_L ∼ 10⁻⁸ W for f₀ = 10 Hz. We note that this choice of power tuning differs from the strategies usually employed in other studies. Typically, the laser power is tuned so that the measurement-added noise is minimised either on resonance or, as LIGO implements it, well above resonance [92]. However, we found that both of these choices are detrimental to the limit that we can draw at low masses, increasing the background at low frequencies beyond the thermal noise floor.
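The sketch below evaluates the thermal noise floor. The explicit formula is an assumption: we use the textbook one-sided form S_aa^th = 4 k_B T γ / m_s with γ = ω₀/Q, since the paper's expression is not reproduced in this excerpt, and the parameter values are placeholders rather than the (unshown) entries of Table 1.

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant [J/K]

def S_aa_thermal(T_bath, m_s, f0, Q):
    """One-sided thermal acceleration noise PSD [(m/s^2)^2 / Hz] (assumed form)."""
    w0 = 2.0 * np.pi * f0
    gamma = w0 / Q                           # thermal coupling parametrisation
    return 4.0 * kB * T_bath * gamma / m_s

# Illustrative values only (hypothetical, not the paper's Table 1):
print(S_aa_thermal(T_bath=1e-2, m_s=1e-3, f0=1.0, Q=1e8))
```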
For these other strategies to be beneficial both at their respective frequency targets and at low frequencies, the thermal noise would have to be significantly lower than that achieved with our choice of sensor mass, quality factor, and bath temperature.

In the treatment of our backgrounds, we have neglected seismic noise, which can become important at frequencies below ∼ 10 Hz. Since we require the use of two materials to measure the differential acceleration peculiar to B − L DM, we can envision constructing two sensors: one with a moveable mirror made of material i and another of material j. By subtracting their signals, we both isolate the differential acceleration signature and remove backgrounds common to both; this includes the seismic noise component [65].

Finally, we choose Houston as the location of our experiment, ϕ = 29.76°. From Fig. 4, we find that β^lim ≃ 3.1 for this latitude. However, we note that this information is almost unnecessary for the three-peak analysis, since it is largely location-independent. We may then rearrange Eq. (2.8), accounting for Eq. (4.3), to give us our limit on ultralight B − L-coupled DM (a sketch of this recasting follows below).

We show our limits in Fig. 5 for three choices of resonance frequency: 0.1 Hz, 1 Hz, and 10 Hz. For all but the last of these frequencies, we are able to exclude new regions of the ULDM B − L parameter space, currently best excluded by the fifth-force satellite experiment MICROSCOPE [98] and the torsion-balance experiment Eöt-Wash [93]. This showcases the power of such sensors in searching for this DM candidate, as was first pointed out in Ref. [65]. In a single-peak analysis focusing solely on the Compton peak, our limits would be weakened by approximately a factor of 2, as can be seen from Fig. 4. This difference becomes more dramatic for sensors located closer to the equator.

For the existing limits outlined above, we extract the Eöt-Wash limit from [94]; however, we recompute the MICROSCOPE limit. This is because the result of [94], at the time of writing, is based on [96], which computed constraints based on the first MICROSCOPE results [95, 97]. We update this limit to reflect the final results given in [98], following the reasoning outlined in [99, 100]; see Appendix E for details. At low masses, we find that g_{B−L} ≲ 7 × 10⁻²⁶, improving the limit given in [94], which we also show in Fig. 5, by approximately a factor of 6.2. For the B − L limits computed in [99, 100], we find an improvement by approximately a factor of 2.6.
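A minimal sketch of the recasting step follows, assuming the reconstructed amplitude A = g_{B−L} ∆_{ij} a₀ from above and β = A² T_obs/(2σ²) as per Eq. (2.8); here `sigma2` stands for the noise PSD evaluated at the Compton frequency, and the numerical values are purely illustrative.

```python
import numpy as np

def g_BL_limit(beta_lim, sigma2, T_obs, Delta_ij=0.1, a0=1.0e12):
    """Recast a limit on beta into one on g_{B-L} (assumed amplitude form)."""
    A_lim = np.sqrt(2.0 * sigma2 * beta_lim / T_obs)   # invert Eq. (2.8)
    return A_lim / (Delta_ij * a0)

# e.g. beta_lim ~ 3.1 (Houston latitude) with illustrative noise/runtime values:
print(g_BL_limit(beta_lim=3.1, sigma2=1e-22, T_obs=10 * 86400.0))
```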
We only extend our limits to where they are appropriate. At the higher end of the mass range, we are limited by keeping our observation time shorter than the coherence time, T_obs ≪ τ_coh ≃ 10 d × (5 × 10⁻¹⁵ eV/m). At the lower end, we must keep our observation time longer than a day, T_obs ≳ 2π/ω⊕, to be able to resolve the three peaks (corresponding to m ≃ 5 × 10⁻²⁰ eV). This leads to the mass range 5 × 10⁻²⁰ eV ≲ m ≲ 5 × 10⁻¹⁵ eV over which our study is valid (computed explicitly in the sketch at the end of this subsection).

We note that σ(ω), though mostly slowly varying over the frequency width ∆ω = ω⊕, does exhibit a large gradient around the resonance frequency of the cavity. In the neighbourhood of this frequency, our assumption that σ(ω) does not vary greatly over this range is incorrect, and a more careful analysis, in which all three peaks take on different noise levels, would have to be conducted for more representative limits. However, we do not expect that our limits would differ greatly from our calculation and, at any rate, they would still smoothly join the regimes on either side of the resonance frequency, where our assumption holds.

5 Future Directions

5.1 Longer Observation Times

In this work, we have focused on the wave-like vector DM signal on time scales much shorter than the coherence time, τ_coh = 2π/(m v₀²) ≃ 50 d × (10⁻¹⁵ eV/m). This allowed us to treat the amplitudes and phases of the three different components of the vector as constant random variables. Realistically, there would be modulations giving these random variables a time dependence, with corrections of order O(t/τ_coh). Simultaneously, there would be spatial variations/correlations due to the finite distance covered by the Earth/detector during the observation, leading to corrections of order O(v₀ t/ℓ_coh) = O(t/τ_coh) again. For longer observation times, T_obs ≫ τ_coh, such modulations necessarily need to be taken into account. While we can easily generate realistic time series for such long time-scale signals (see the top panel of Fig. 2), a more comprehensive treatment of how they affect our limits is left for future work.

Nevertheless, we comment on a simplified study that could be done within the incoherent regime. In this limit, the stochasticity of the field is averaged over as O(T_obs/τ_coh) coherence patches cross the Earth. Thus, for T_obs ≫ τ_coh, the randomness in the signal disappears, and we are left with a deterministic signal. Moreover, the signal in Fourier space loses its coherent T_obs enhancement, reaching its maximal value at T_obs = τ_coh (c.f. Eq. (2.8)).

In a simplified experimental study, one could then analyse the data by first splitting the long time series into N_coh ∼ T_obs/τ_coh smaller, independent time series of coherence-time duration. Each one of these series would lead to our three-peak signal in Fourier space, with randomly drawn Rayleigh amplitudes and uniform phases. One could then average over all N_coh PSDs, resulting in a deterministic amplitude where the randomness is no longer manifest. Crucially, this procedure would also lead to the noise within each signal-containing bin being averaged over, resulting in a noise suppression by the factor N_coh^{−1/2}. Similar arguments have been made in Refs. [37, 39]. To make inferences, we could then proceed by redefining our β parameter as an 'incoherent' version of it, β_incoh. This parameter can then be used in the deterministic likelihood, which is of the form of a non-central χ² distribution (c.f. Eq. (3.1)), to find the limit β_incoh^lim given the observed (averaged) PSD.
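As a quick cross-check of the validity window quoted above, the sketch below evaluates the two boundaries; the virial velocity v₀ ∼ 10⁻³ c and a day of 86400 s are standard assumptions.

```python
import numpy as np

hbar = 6.582e-16     # eV s
day = 86400.0        # s
v0 = 1.0e-3          # halo velocity dispersion in units of c

def tau_coh_days(m_eV):
    """Coherence time 2*pi/(m v0^2), expressed in days."""
    omega = m_eV / hbar                      # Compton angular frequency [rad/s]
    return 2.0 * np.pi / (omega * v0 ** 2) / day

m_low = 2.0 * np.pi * hbar / day             # Compton period equal to one day
print(f"m_low ~ {m_low:.1e} eV")             # ~5e-20 eV
print(f"tau_coh(5e-15 eV) ~ {tau_coh_days(5e-15):.1f} d")   # ~10 d
```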
5.2 Expanding the Mass Window

The detection scheme considered in this paper is one where the detector points in a specific fixed direction locally on Earth while rotating along with it. As noted above, in the short-observation-time scenario we are bounded from below by a day (corresponding to m ≃ 5 × 10⁻²⁰ eV) and from above by the coherence time scale τ_coh ≃ 10 d × (5 × 10⁻¹⁵ eV/m) in order to resolve the three distinctive peaks. To have a large working window in which we can neglect the previously mentioned corrections, the masses we can probe using this setup lie in the range 5 × 10⁻²⁰ eV ≲ m ≲ 5 × 10⁻¹⁵ eV. To expand this window towards higher masses, we need to push down the lower bound coming from T_obs > 1 d.

Instead of a fixed detector rotating with the Earth, a detector can also be made to rotate at an angular frequency faster than a day. For example, if ω_exp ≃ 2π/(1 min), then the detector needs to collect data for only a few minutes to be able to isolate the three peaks. With τ_coh ≃ 15 min × (5 × 10⁻¹² eV/m), we can then probe masses up to m ≃ 5 × 10⁻¹² eV. Note that while the time signal would contain modulations due to the Earth's rotation, these effects would be suppressed by O(ω⊕/ω_exp). Furthermore, in frequency space, this modulation would split each of the three peaks at ω_left = m − ω_exp, ω_middle = m, and ω_right = m + ω_exp into three, giving nine peaks in total. However, unless the frequency resolution ∆ω becomes smaller than ∼ ω⊕, the experiment would not be able to resolve this splitting due to the Earth's rotation. Thus, we expect our whole analysis in this paper to carry forward, with ω⊕ replaced by ω_exp everywhere. Such a setup would also be beneficial for the optomechanical light cavities we considered in Section 4, whose typical laser coherence times are of the order of hours rather than days.

6 Conclusions

We have provided an analysis strategy for inferring the properties of ultralight vector dark matter from terrestrial experiments, taking into account the stochastic and vector nature of the field (see Fig. 1). Our main results are suited for observation times that are longer than a sidereal day but shorter than the coherence time. They are as follows:

• We focused on the signal in Fourier space, deriving the power spectral density that such dark matter is expected to leave on an axial sensor that is sensitive to its oscillatory signal. Accounting for the rotation of the Earth, we found that the signal manifests as three peaks at definite frequencies but with random amplitudes (see Fig. 2).

• We derived the likelihoods in each of the signal-containing bins in Fourier space. We did this by considering the marginal likelihood after integrating out the six random variables exhibited by the ULDM signal in the coherent regime: the three Rayleigh amplitudes and the three uniformly distributed DM phases (see Eq. (3.2) and Fig. 3). We found that the general elliptical motion of the vector field, arising out of equipartition, afforded us a simpler analysis in Fourier space than the linear polarization assumption. This is because, in the former, all peaks become statistically uncorrelated.
• We drew exclusion limits on a generalised, dimensionless parameter that can be reinterpreted in the context of a concrete sensor setup and dark matter model. We did this via a series of log-likelihood-ratio tests, following a hybrid frequentist-Bayesian approach. Crucially, we found that, unlike analyses focusing on only a single peak, our approach retains constraining power for experimental setups at all latitudes. This is because we make use of the entire DM signal, which is distributed across all three peaks, instead of constraining ourselves to the signal in any one peak, which is dependent on the latitude of the experiment (see Fig. 4).

• We considered a specific sensor technology (the optomechanical light cavity) and dark matter model (ultralight dark matter stemming from a new gauged U(1)_{B−L} symmetry) as a concrete application of our analysis strategy. We recast our general limit onto one on the gauge coupling of this model, g_{B−L}, finding that long-exposure cavities can rule out previously unexplored regions of the B − L parameter space (see Fig. 5).

In this work, we have established a framework for future experimental efforts in the detection of ultralight vector dark matter. Novel direct-detection probes require an understanding of how the signal of ultralight vector dark matter behaves in our local neighborhood and manifests itself in a sensor. We hope that our work aids in (i) designing search strategies using emerging detector technologies that are not traditionally used for dark matter searches, and (ii) understanding how well a given model can be tested in the context of calls for Big Science projects using quantum sensing [64].

[Appendix A, continued] In the large volume limit, the two random variables â and b̂ satisfy the correlations of Eq. (A.6), and the cross correlation between â and b̂ is zero, i.e., ⟨â^{i*} b̂^j⟩ = 0. Here, k/m is our velocity relative to the rest frame of the halo, and σ is the velocity dispersion. Owing to their Gaussian nature, we can further combine the two random variables and define ŵ_j ≡ √Υ₀ â_j + √(1 − 2Υ₀) b̂_j. This new combined random variable will have zero mean, and its variance will be equal to the sum of the individual two in Eq. (A.6) (weighted by Υ₀ and (1 − 2Υ₀), respectively). We can also extract the factor ρ/m. With all of this, we arrive at the form of Â_j(0, t) quoted in Eq. (A.8), where ⟨ŵ_j⟩ = 0 and the two-point correlation of ŵ is given by Eq. (A.9).

With the form of the stochastic vector field derived, we now need to generate the random variables ŵ. We can do this by finding an operator matrix G such that its square gives the right-hand side of the two-point correlation in Eq. (A.9). Then, we can simply pick a 3-dimensional normal complex random variable, say ĥ (which can equivalently be thought of as a 6-dimensional normal random variable), and hit it with G to get ŵ. That is, we have ŵ_i = G_ij ĥ_j (Eq. (A.10)), with G the matrix square root of the correlation in Eq. (A.9). While this serves as a procedure to generate the stochastic random vector field Â, the problem simplifies dramatically when the power is equipartitioned between the longitudinal and transverse modes, that is, when Υ₀ = 1/3 (see the sampling sketch below). We expect this to be the case when the vector field accounts for (at least the majority of) the virialized dark matter around us. We discuss this next.

Figure 6: Fractional power in the transverse and longitudinal modes around halo formation, for four simulations with different initial conditions: Υ(0) = 0 (solid), Υ(0) = 1/6 (dashed), Υ(0) = 1/3 (dot-dashed), and Υ(0) = 1/2 (dotted). The convergence of the fractional power in transverse modes towards 2/3, and of that in longitudinal modes towards 1/3, demonstrates the ultimate equipartition of power, i.e., Υ(t) → 1/3.
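A minimal sketch of sampling one coherence patch in the equipartitioned case follows. The explicit form A_j(t) = √(2ρ/3)/m · α_j cos(mt + φ_j) is assumed from the equipartition result described above, with the normalisation fixed by requiring the ensemble-averaged energy density to equal ρ.

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence_patch(t, m, rho):
    """One realisation of the equipartitioned vector field (assumed form)."""
    alpha = rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=3)  # <alpha_j^2> = 1
    phi = rng.uniform(0.0, 2.0 * np.pi, size=3)
    a = np.sqrt(2.0 * rho / 3.0) / m
    return a * alpha[:, None] * np.cos(m * t[None, :] + phi[:, None])

t = np.linspace(0.0, 3.0, 600)
A = coherence_patch(t, m=2.0 * np.pi, rho=1.0)
# The three components A[0], A[1], A[2] trace out a fixed random ellipse in 3D,
# one revolution per Compton period: the "random ellipse" picture of the text.
```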
To set the initial conditions for our simulations, we draw a random complex polarization vector ε̂ on an S⁵ so that it is unit normalized, that is, ⟨ε̂^{i*}_k ε̂^j_p⟩ = δ_{k,p} δ^{ij}. We then hit it with the transverse and longitudinal projection operators to obtain the transverse and longitudinal polarizations. With these, we construct the full transverse and longitudinal initial fields, with the full field being their sum. With this as the initial condition, we evolve the SP (Schrödinger–Poisson) system of Eq. (A.15). In Fig. 6, we present simulation results for various different initial conditions, including different values of Υ(0). There, we plot the fractional powers in the two different sectors. With Υ → Υ₀ = 1/3, we achieve equipartition. The stochastic vector field, c.f. Eq. (A.8), then takes the simple form of Â_j(0, t) that we have used in Eq. (2.3). This gives rise to the "random ellipse" picture discussed in the main text.

To conclude, we have shown that even if the vector field is initialized with an unequal distribution of power between its longitudinal and transverse modes, non-linear gravitational dynamics eventually leads to its equipartition. However, we have only begun to explore this topic and leave a detailed study (including a k-dependent Υ) for future work.

B Derivation of Marginal Likelihood with Stochastic Field Amplitude

The full signal in time space is given by Eq. (B.1): the signal plus a noise term N̂ ∼ N(0, σ_t), with σ_t² ≡ σ²/∆t and σ² being the equivalent noise PSD in frequency space. From this, the (two-sided) periodogram normalised by the noise PSD, P′/σ², can be written in terms of two random variables X̂_R and X̂_I, each distributed as N(0, 1/√2). The one-sided, noise-normalised periodogram, P/σ², therefore follows a non-central χ² distribution with non-centrality parameter given by Eq. (2.9). Introducing randomness in the parameters α and φ, with prior Π′({α_i, φ_i}), the marginalized likelihood takes the form of an integral involving I₀, the modified Bessel function of the first kind.

To evaluate this integral, we first note that the prior is factorizable into that for α_z and that for the set {α_x, α_y, φ_x, φ_y}. The latter four random variables (which correspond to the ω = s and ω = d peaks) can be redefined as two 2D random vectors x and y with relative angle π/2 − (φ_x − φ_y); here, the subscripts 1 and 2 correspond to the two components of the vectors along the two directions of the 2D Euclidean space. Since the α are Rayleigh distributed and the φ are uniformly distributed on (0, 2π), the four variables {x₁, x₂, y₁, y₂} are normally distributed with zero mean and variance equal to 1/2. Furthermore, we can redefine the x and y as x_i + y_i = u_i and x_i − y_i = v_i for i = {1, 2}, while the prior for the Compton amplitude is Π′(α_z) = 2 α_z e^{−α_z²}. Using the series representation of the Bessel function, together with Gamma function identities, the five random variables can be integrated out analytically. We arrive at the marginalized (and normalized) likelihood quoted in the main text, which can be split into the three individual likelihoods for the sum/difference peaks and the Compton peak, as given in Eq. (3.2). The form of the likelihoods for the sum and difference peaks is equivalent.
To treat the total likelihood as the product of the individual likelihoods in each frequency bin, we must check that the covariance matrix is diagonal. We consider a signal-only analysis, discarding the noise, since the noise merely adds to the power and is uncorrelated between different frequency bins. We may write the power in the sum peak as

P̂₁ = (A² T_obs/8) [α̂_x² + α̂_y² + 2 α̂_x α̂_y sin(φ̂_y − φ̂_x)] cos² ϕ,

with the difference peak P̂₂ following with the sign of the cross term flipped, and the Compton peak P̂₃ proportional to α̂_z² sin² ϕ (c.f. Eq. (2.9)). We wish to compute the covariance matrix of these three quantities. We can do this using the expression for the raw moments of the Rayleigh distribution, where, for us, σ = 1/√2. Aside from this, we only need to note that ⟨sin(φ̂_y − φ̂_x)⟩ = 0. We then obtain a diagonal covariance matrix: crucially, the covariance between peaks is 0, allowing us to treat them as statistically independent and hence permitting us to express the total likelihood as the product of the individual likelihoods (a numerical check is sketched below).
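The following sketch verifies the vanishing covariances numerically; the common prefactors (A² T_obs/8 and the latitude-dependent factors) are dropped, since they do not affect whether the covariances vanish.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
alpha = rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=(n, 3))   # <alpha^2> = 1
phi = rng.uniform(0.0, 2.0 * np.pi, size=(n, 3))

base = alpha[:, 0] ** 2 + alpha[:, 1] ** 2
cross = 2.0 * alpha[:, 0] * alpha[:, 1] * np.sin(phi[:, 1] - phi[:, 0])
P_sum, P_diff, P_comp = base + cross, base - cross, alpha[:, 2] ** 2

print(np.cov(np.vstack([P_sum, P_diff, P_comp])))   # off-diagonal entries ~ 0
```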
C The Case of the Gradient of a Scalar

In this case, there is a preferential direction, because ∇a points in the direction of the local DM velocity. Aligning the lab's working coordinate system such that this local velocity vector is parallel to the z axis, the amplitudes associated with the three different directions in Eq. (2.9) are not all the same. Effectively, there is an extra factor associated with the z direction, and the random signal in frequency space (c.f. Eq. (2.9)) takes the following form:

λ̂(ω_n) = (β̃²/4) { [α̂_x² + α̂_y² + 2 α̂_x α̂_y sin(φ̂_y − φ̂_x)] cos²ϕ δ_{ω_n,s} + [α̂_x² + α̂_y² − 2 α̂_x α̂_y sin(φ̂_y − φ̂_x)] cos²ϕ δ_{ω_n,d} + 4 Υ̃ α̂_z² sin²ϕ δ_{ω_n,m} },   (C.1)

where β̃ and Υ̃ are defined in Eq. (C.2), following the notation of [46]. Proceeding similarly as in Appendix B, we can evaluate the marginalized likelihood by making redefinitions of the variables so that they become independent and the integral becomes analytically tractable. The overall result is that Y simply gets rescaled by γ̃. Following the same MC analysis as outlined in Section 3, we show the 90% CL limits for the case of the gradient of a scalar in Fig. 7. The largest difference in this case is the increased constraining power of the Compton peak compared to the vector case shown in Fig. 4. This is because of the scaling that its amplitude receives by the factor γ̃ > 1.

D Linear Polarization Statistics

Here we present the marginal likelihood for the linear polarization case. To obtain the relevant non-centrality parameter, we set φ_x = φ_y in Eq. (2.9); the resulting λ̂′(ω) ≡ P(ω)/σ² is quoted in Eq. (D.1). As in Section 3, we verify our analytical expressions for the likelihoods via a series of MC simulations. We begin from Eq. (2.6), this time setting all φ_i to be equal, drawing them from a single uniform distribution, φ_i ∼ U(0, 2π). We draw each of the three Rayleigh variables independently from their respective distributions, as given in Eq. (2.5). For each simulation, we compute the PSD as in Eq. (2.7) and consider the distributions of the values of each of the Compton, sum, and difference peaks, normalised by some noise level. For our simulations, we take A = 1 [A], T_obs = 10 T⊕, σ² = 5 [A]² Hz⁻¹, and ϕ = 45°. For the purposes of fast convergence, we also take m = 2π Hz and T⊕ = 100 s. Our results for 10⁴ simulations are shown in Fig. 8, displaying excellent agreement with our derived likelihoods in Eq. (D.4).

As in Appendix B, we can also compute the covariance matrix for the linear polarization case. Following the approach there, we find a non-diagonal covariance matrix Σ: the sum and difference peaks have a non-zero covariance, while the Compton peak remains statistically uncoupled from the other two (a numerical illustration is given at the end of this appendix). Due to this non-diagonal covariance matrix, we cannot simply write the total likelihood as in Eq. (3.3), and we instead require a more complicated treatment accounting for the non-zero covariances.

D.1 Effect of Elliptical versus Linear Polarization on Limits

We now comment on the effect that the linear polarization assumption has on our limits compared to the more realistic elliptical polarization treatment. Since the sum and difference peaks are correlated in the linear polarization case (c.f. Eq. (D.5)), a full three-peak analysis is difficult to perform without accounting for the full covariance matrix. However, we can conduct a simplified, uncorrelated two-peak analysis in which we consider both the Compton peak and one of the sum/difference peaks. We can then compare the results of this analysis with a similar two-peak one done in the elliptical polarization case, to learn how the limits should scale between these assumptions. Since we are only interested in this scaling, we perform a simpler Asimov analysis in which the data are assumed to be perfectly consistent with the background [83]. The result of an Asimov analysis is expected to asymptotically converge to the true result in the limit of high statistics.

The two-peak likelihood, when considering the Compton peak and one of the two sum/difference peaks, is the product of the corresponding per-bin likelihoods, which for the elliptical and linear polarization cases follow from Eq. (3.2) and Eq. (D.4), respectively. In an Asimov analysis, we replace the data vector p with the expectation values in the background-only case, which can be shown to be p = 2 for each bin. In this case, using the log-likelihood-ratio test statistic given in Eq. (3.4), we will get that β̂ = 0 for this data. The problem then becomes finding the β for which q_β reaches a value that we can exclude at our desired confidence level. This is the same procedure we followed in Section 3, and we take q_β^lim ≃ 2.43 for the 90% CL limit, as found there. For comparison, we also consider the case where one wants to draw a limit at the 3σ level, equivalent to a ∼ 99.7% CL limit. Solving Eq. (3.7), we get q_β^lim ≃ 8.49 in this case.
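Returning to the covariance structure noted above, the same numerical check as in Appendix B, but with a single common phase, illustrates the non-diagonal covariance of the linear polarization case:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
alpha = rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=(n, 3))

# With phi_x = phi_y, the cross term ~sin(phi_y - phi_x) vanishes identically,
# so the sum and difference peaks carry identical power in each realisation.
base = alpha[:, 0] ** 2 + alpha[:, 1] ** 2
P_sum = P_diff = base
P_comp = alpha[:, 2] ** 2

print(np.cov(np.vstack([P_sum, P_diff, P_comp])))
# Cov(P_sum, P_diff) = Var(base) > 0, while the Compton peak stays uncorrelated.
```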
We show our results in Fig. 9. We find that the scaling is greater towards the equator. This is because the difference in the form of the likelihoods is greater there, since only the sum/difference likelihood changes between the two polarization assumptions. At the poles, only the Compton peak is present and the two likelihoods are equivalent; this leads the two limits to converge towards the same value. At the 90% CL, which we have used throughout this work, we see that the limit at worst scales by a factor of ∼ 1.2. As the CL grows, this factor increases; for example, at the 3σ level, the scaling from the linear to the elliptical assumption becomes ∼ 2.3. Nevertheless, in the incoherent regime, we expect both limits to match, since the stochasticity of the field vanishes in that limit.
Figure 3: Example likelihoods for each of the three signal peaks once the stochastic variables have been marginalised over. The bars show the result of a numerical simulation of the noise-normalised periodogram values beginning from Eq. (2.6), while the solid lines show the analytical result of Eq. (3.2). Here, ϕ is the latitude of the experiment, p (a random variable) is the value of the measured excess power, and β is defined as per Eq. (2.8). Also shown as a dashed line is the deterministic result for the Compton peak, where the stochastic variable α_z² is set to its expectation value, ⟨α_z²⟩ = 1.

Figure 4: The 90% CL limits (β^lim) on the dimensionless parameter β ≡ A² T_obs/(2σ²) (see Eq. (2.8)) derived from our MC analysis. We show the results of our three-peak analysis and those of two single-peak analyses focusing on the Compton peak and on one of the sum or difference peaks. The shaded region indicates the 1σ error bar on our three-peak analysis. The dotted line indicates the latitude of Houston, which we use in Section 4. Our three-peak analysis generally provides a better or comparable limit to either of the two single-peak analyses and is largely latitude-independent.

Figure 5: The 90% CL limits on the gauge coupling for ultralight B − L DM placed by an optomechanical cavity setup using our statistical framework. The limits using our three-peak analysis strategy (solid) for three resonance frequencies, f₀ = 0.1 Hz, 1 Hz and 10 Hz, are shown. Existing bounds from the Eöt-Wash [93, 94] and MICROSCOPE experiments are shown in grey. For MICROSCOPE, we show the bound based on the first [94–97] and final [98] results, the latter of which we compute in Appendix E. The vertical shaded region indicates where the observation time T_obs = 10 T⊕ becomes greater than the coherence time, where our framework is no longer valid. The top axis shows the Compton frequency for a given DM mass in Hz.

Figure 7: The 90% CL limits (β̃^lim) on the dimensionless parameter β̃ ≡ ρ g²_eff σ²_v T_obs/σ²_n (see Eq. (C.2)) derived from our MC analysis (the same as in Fig. 4 but for the case of the gradient of a scalar). We show the results of our three-peak analysis and those of two single-peak analyses focusing on the Compton peak and on one of the sum or difference peaks. The shaded region indicates the 1σ error bar on our three-peak analysis. Our three-peak analysis generally provides a better or comparable limit to either of the two single-peak analyses and is largely latitude-independent.

Figure 8: Example likelihoods for each of the three signal peaks once the stochastic variables have been marginalised over in the linear polarization case. The bars show the result of a numerical simulation of the noise-normalised periodogram values beginning from Eq. (2.6), while the solid lines show the analytical result of Eq. (D.4). Here, ϕ is the latitude of the experiment, p (a random variable) is the value of the measured excess power, and β is defined as per Eq. (2.8).
Figure 9: Scaling of the limit on the dimensionless parameter β (c.f. Eq. (2.8)) assuming elliptical versus linear polarization, as a function of latitude ϕ. The 90% (solid) and 99.7% (dashed) limits are shown, with the latter equivalent to a 3σ confidence level. Near the equator, the difference is more pronounced because the difference in the likelihoods is greater; the likelihoods are equal at the poles.

Table 1: The optomechanical cavity configuration we have assumed in this work.
2024-03-06T06:44:50.310Z
2024-03-04T00:00:00.000
{ "year": 2024, "sha1": "c0a7a9b86c5e7834d013d5159dd8bd0d4e6d70fc", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1475-7516/2024/06/050", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "3bd23974e5f34a38eaec351072ab24b566b831dc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
207895803
pes2o/s2orc
v3-fos-license
On the Physical Layer Security Characteristics for MIMO-SVD Techniques for SC-FDE Schemes Multi-Input, Multi-Output (MIMO) techniques are seeing widespread usage in wireless communication systems due to their large capacity gains. On the other hand, security is a concern of any wireless system, which makes schemes that implement physical layer security key to assuring secure communications. In this paper, we study the physical layer security issues of MIMO with Singular Value Decomposition (SVD) schemes, employed along with Single-Carrier with Frequency-Domain Equalization (SC-FDE) techniques. More concretely, the security potential against an unintended eavesdropper is analysed, and it is shown that the larger the distance between the eavesdropper and the transmitter or receiver, the higher the secrecy rate. In addition, in a scenario where there is Line of Sight (LOS) between all users, it is shown that the secrecy rate can be even higher than in the previous scenario. Therefore, MIMO-SVD schemes combined with SC-FDE can be an efficient option for highly secure MIMO communications. Introduction Multiple-Input, Multiple-Output (MIMO) techniques are being increasingly considered for new wireless communication systems, due to their huge capacity gains over traditional single-antenna techniques. In fact, it can be shown that the capacity can even scale linearly with the number of antenna elements [1][2][3][4]. As such, several MIMO techniques have been selected to integrate wireless communications standards, such as WiFi [5] and Long Term Evolution (LTE) [6], and will likely be key elements in future 5G systems [7]. Although wireless channels have considerable advantages, they also present additional security difficulties when compared with wired channels. In fact, since anyone in range can listen to the channel (such as an eavesdropper that knows transmitting characteristics such as the frame and block structures and the carrier frequency), the security levels of conventional wired communications might not be enough, particularly for Internet-of-Things (IoT) devices [8]. Therefore, it is desirable to have an additional physical layer security level [9][10][11] on top of conventional security measures, so as to increase the overall system security. Thanks to their increased security capabilities, physical layer security techniques have become increasingly attractive for both industry [12] and IoT applications [13]. Security measures in the physical layer can take advantage of the different characteristics of the legitimate and eavesdropper links, which can be done using channel estimates and equalization schemes, among other techniques. System Characterization In this paper we consider a point-to-point MIMO system with a transmitter, denoted A (Alice in conventional wiretap channels nomenclature), employing T antennas and a receiver, denoted B (Bob in conventional wiretap channels nomenclature), employing R antennas. For the sake of simplicity we assume T = R, although this work could easily be extended to the case where T ≠ R. In addition, there is a third user, denoted E (Eve in conventional wiretap channels nomenclature), that is attempting to eavesdrop the signal transmitted between A and B. Although we assume a scenario with a single eavesdropper, a scenario with more eavesdroppers can also be taken into account [27]. In fact, a scenario with multiple co-located eavesdroppers can be approximated by a single eavesdropper with KR antennas, where K is the number of eavesdroppers.
This three-user scenario is shown in Figure 1. The distance between each antenna at the transmitter and the receiver is assumed to be much larger than the transmitted signal's wavelength, and the receiver is in the far-field region of the transmitter. The transmitter can send up to C = R data streams over a highly frequency-selective channel. To cope with the strong levels of inter-symbol interference (ISI) associated with such channels, we employ an SC-FDE transmission technique. The data blocks are composed of N quadrature phase shift keying (QPSK) symbols (the generalization to other constellations with IB-DFE is straightforward [28]), plus an appropriate CP that is longer than the maximum overall channel impulse response. A block diagram of the considered system is depicted in Figure 2.

Figure 2. Proposed MIMO SVD-based system, employing T transmitting antennas and R receiving antennas, with channel, precoding, decoding and equalization blocks.

The data symbols to be transmitted by the C single-carrier data streams will be denoted by the N × C matrix s, where each data stream is represented as an N × 1 vector whose nth element is the QPSK symbol transmitted on the cth stream at time instant n. The frequency-domain counterpart of the transmitted data is defined by the discrete Fourier transform (DFT) of s, which is the N × C matrix S. The group of symbols associated with the kth sub-carrier is represented as the 1 × C vector S_k = [S_k^(1), ..., S_k^(C)]. The channel frequency response for the kth sub-carrier is modeled by the R × T matrix H_k. Since we are considering a point-to-point communication where we have a multi-antenna transmitter and a multi-antenna receiver, the separation of the MIMO streams can be done using the SVD technique [20]. To perform the SVD, we need channel knowledge at both the transmitter and receiver. To achieve this, the transmitter and receiver exchange training sequences. This process is relatively straightforward in time division duplex (TDD) schemes, where we can take advantage of the channel's reciprocity. The SVD technique allows us to obtain up to C decoupled channels, onto which we can multiplex up to C data streams. Since we are employing SC-FDE schemes over frequency-selective channels, this decomposition is made at the sub-carrier level. Therefore, we can decompose the channel matrix associated with a given sub-carrier, H_k, as

H_k = U_k Λ_k V_k^H, (2)

where U_k is the R × R decoding matrix, V_k is the T × T precoding matrix and Λ_k is a C × C diagonal matrix composed of the singular values of H_k, sorted in descending order according to their power (a minimal numerical sketch of this per-subcarrier decomposition is given below). Transmission Although SVD techniques allow for the orthogonalisation of the different data streams, the performance associated with each stream can vary substantially. This is explained by the fact that the performance depends essentially on the magnitude of the singular values, which vary considerably from stream to stream [29]. To overcome this problem, one can employ appropriate loading techniques, with power and/or constellation differentiation between streams, as proposed for some OFDM-based systems [30]. An interesting alternative for SC-FDE MIMO-SVD systems was described in [19], which is based on interleaving the data to be transmitted between all streams, thereby forcing each stream to be affected by singular values with different powers and avoiding streams with very poor performance (which would otherwise determine the average BER performance). We define S̄_k as the interleaved data symbols associated with sub-carrier k.
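The sketch below illustrates the per-subcarrier SVD decoupling with NumPy; the Rayleigh-fading channel generation is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, R = 64, 4, 4    # sub-carriers, transmit antennas, receive antennas

# Illustrative i.i.d. Rayleigh-fading frequency response per sub-carrier
H = (rng.normal(size=(N, R, T)) + 1j * rng.normal(size=(N, R, T))) / np.sqrt(2.0)

U = np.empty((N, R, R), complex)
S = np.empty((N, min(R, T)))
Vh = np.empty((N, T, T), complex)
for k in range(N):
    U[k], S[k], Vh[k] = np.linalg.svd(H[k])   # singular values sorted descending

# Sanity check: U_k^H H_k V_k is diagonal, i.e. C = min(T, R) decoupled streams
k = 0
assert np.allclose(U[k].conj().T @ H[k] @ Vh[k].conj().T,
                   np.diag(S[k]), atol=1e-12)
```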
As already pointed out, the channel estimates at the transmitter side, required for computing the precoding matrix, can be obtained from a training sequence previously sent by the receiver. After that, the transmitter sends a training sequence to the receiver (typically at the beginning of the data block), which is used by the receiver to compute the detection matrix, perform the channel equalization and complete the SVD decomposition (the details are described below). Naturally, we assume that the channel coherence time is greater than the time it takes to transmit both sequences and the data block. The eavesdropper listens to both training sequences, so it can compute its own channel estimates. We can summarize this process in three steps, as shown in Figure 3. In the first step, the receiver sends a training sequence to the transmitter, which is overheard by the eavesdropper; in this step, both the transmitter and the eavesdropper obtain channel estimates. In the second step, the transmitter sends a sequence of training symbols, so that the receiver can obtain a channel estimate and compute the decoding matrix to complete the SVD; the eavesdropper also listens to this sequence and obtains another channel estimate. The third and last step is the data transmission itself. The transmitter uses its channel estimate to precode the signal, while the receiver uses its channel estimate to decode the received signal. Similarly, the eavesdropper tries to decode the overheard signal. To increase the accuracy of its detection, the eavesdropper uses a channel computed as the average of its two channel estimates. Figure 3. Steps for obtaining the channel estimates. (a) The receiver sends a training sequence, P_k, that is received by the transmitter and the eavesdropper. (b) The transmitter sends a training sequence that is received by the receiver and the eavesdropper. (c) The transmitter begins sending data to the receiver, which is also received by the eavesdropper. As described in [31], the channel can be expressed as

H_k = ρ_A Ĥ_k^A + ε_k, (3)

where Ĥ_k^A is the channel estimate used by the transmitter, ρ_A is a correlation factor with the true channel, and ε_k is the error associated with the channel estimation process (our analysis can easily be extended to other models for the channel estimation errors). This error ε_k is characterized as a complex Gaussian variable with variance 2σ_N²/β, where σ_N² is the noise variance for a specific Signal-to-Noise Ratio (SNR) value and β is a scaling factor. For β → ∞ and ρ_A = 1, we have perfect channel estimation, i.e., Ĥ_k^A = H_k. The SVD decomposition of Ĥ_k^A is

Ĥ_k^A = Û_k^A Λ̂_k^A (V̂_k^A)^H.

Therefore, the transmitter computes the T × 1 vector of precoded symbols for the kth sub-carrier as X_k = V̂_k^A S̄_k^T. Reception Both the intended receiver and the eavesdropper employ the same reception approach. However, the channels that they observe are different, i.e., they work with different channel estimates, since in general the eavesdropper is at a position different from both the transmitter and the receiver. We also consider the pessimistic scenario where the eavesdropper knows the interleaving pattern in use (in practice, keeping it secret could add an extra security layer, which is not considered in this paper). The received signal can be expressed as

Y_k = H_k X_k + N_k,

where N_k denotes the frequency-domain additive white Gaussian noise (AWGN) samples associated with the kth sub-carrier. Naturally, both the receiver and the eavesdropper must perform the decoding operation.
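The sketch below illustrates Eq. (3) for one sub-carrier: the true channel is modelled as H = ρ_A Ĥ + ε, precoding uses V̂ from the estimate, and decoding uses Û from the estimate, leaving a small residual inter-stream coupling. The numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T = R = 4
rho_A, sigma2_eps = 0.99, 1e-3

H_hat = (rng.normal(size=(R, T)) + 1j * rng.normal(size=(R, T))) / np.sqrt(2.0)
eps = np.sqrt(sigma2_eps / 2.0) * (rng.normal(size=(R, T))
                                   + 1j * rng.normal(size=(R, T)))
H = rho_A * H_hat + eps                       # Eq. (3)

U_hat, s_hat, Vh_hat = np.linalg.svd(H_hat)
sym = (np.sign(rng.normal(size=T)) + 1j * np.sign(rng.normal(size=T))) / np.sqrt(2.0)
x = Vh_hat.conj().T @ sym                     # precoded QPSK symbols (noise omitted)
w = U_hat.conj().T @ (H @ x)                  # decoded streams

# Small residual: per-stream gains stay close to rho_A times the singular values
print(np.round(np.abs(w / sym - rho_A * s_hat), 3))
```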
For the intended receiver, B, we can define the channel estimate as in (3), namely

Ĥ_k^B = ρ_B H_k + ε_k.

We can assume that there is little difference between the estimation of the transmitter and the intended receiver, so it is reasonable to approximate ρ_A = ρ_B ≈ 1. For the sake of simplicity, we will also assume that the power of the channel estimation error is equal for A and B (the generalization to other cases is straightforward). The SVD decomposition done at the intended receiver's side is

Ĥ_k^B = Û_k^B Λ̂_k^B (V̂_k^B)^H,

with Û_k^B, Λ̂_k^B and V̂_k^B being the corresponding estimates of the matrices defined in (2). It should be noted that the eavesdropper cannot directly estimate the channel between A and B, since its actual value is never transmitted between A and B. Therefore, the eavesdropper can only approximate such an estimate by estimating the channel between A and E and between B and E. The eavesdropper obtains these estimates by listening to the training sequences exchanged between A and B. We can define both of these channel estimates as

Ĥ_k^E1 = ρ_E1 H_k + ξ_k and Ĥ_k^E2 = ρ_E2 H_k + ξ_k,

with ρ_E1 and ρ_E2 referring to the correlation between the different channels and the real channels, and ξ_k being an appropriate Gaussian distributed error term with variance σ_N²/β_M, where β_M is a scaling factor. Since the eavesdropper does not know the channel, we can assume that ρ_E1 = ρ_E2 < 1. As mentioned before, in order to improve the quality of the estimation of the channel between A and B, the eavesdropper can calculate an average of the estimates of the intermediate channels, i.e.,

Ĥ_k^E = (Ĥ_k^E1 + Ĥ_k^E2)/2.    (7)

As in conventional SVD techniques, the decoding is made by multiplying the received signal by the decoding matrix estimate: (Û_k^B)^H for the intended receiver, or (Û_k^E)^H for the eavesdropper. Since the process is the same for both receivers, we will use Û_k^H as a place-holder for either receiver. The decoding is then computed as

W_k = Û_k^H Z_k,

where W_k is a C × 1 column vector with the interleaved, decoded symbols. These symbols can be written as

W_k ≈ Λ̂_k S̄_k^T + Û_k^H N_k,

with Λ̂_k corresponding to the diagonal matrix composed of the singular values of the estimated channel. However, before performing equalization, we must group all the data symbols associated with a given stream, i.e., restore the original symbol order. This is done by applying the deinterleaving to the matrix W_k, which yields the deinterleaved symbol vector. Thanks to the interleaving, each stream becomes affected by a frequency-selective channel, composed of the interleaving of the different singular values.

Multiple Eavesdroppers

Let us now assume a scenario with K eavesdroppers. Moreover, let us consider the worst case, i.e., the case where the different eavesdroppers are co-located and can perform joint estimation and equalisation. Under these conditions, we can model the existence of K eavesdroppers by considering one eavesdropper with KR receive antennas. Thus, the channel being estimated by the eavesdroppers can be defined as the KR × T matrix

Ĥ_k^E = [(Ĥ_k^(E,1))^T · · · (Ĥ_k^(E,K))^T]^T,    (15)

obtained by stacking the K individual estimates. The received signal Z_k^E is expressed as

Z_k^E = H_k^E X_k + N_k.

It should be noted that the eavesdroppers do not require any changes to the equalization algorithm, since the number of singular values is the same; moreover, they can take advantage of the increased singular value power due to employing more receiving antennas. Considering the SVD, the channel represented in (15) can be decomposed as

Ĥ_k^E = Û_k^E Λ̂_k^E (V̂_k^E)^H,

where Λ̂_k^E is a C × C diagonal matrix composed of the singular values of Ĥ_k^E, V̂_k^E is the T × T precoding matrix, which is not utilised by the eavesdroppers, and Û_k^E is the KR × T decoding matrix, computed economically so as to not have null columns.
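The averaging of the eavesdropper's two indirect estimates and the Û^H decoding step described above can be sketched as follows; the correlation factor, error level and toy transmit vector are assumed values for illustration only:

```python
import numpy as np

def decode(Z_k, H_hat):
    """Multiply the received vector by the decoding matrix U^H of the
    given channel estimate, as in conventional SVD reception."""
    U, s, _ = np.linalg.svd(H_hat)
    return U.conj().T @ Z_k, s

rng = np.random.default_rng(2)
R, T = 4, 4
H = (rng.standard_normal((R, T)) + 1j * rng.standard_normal((R, T))) / np.sqrt(2)

# The eavesdropper only has two indirect, noisy estimates and averages them.
rho_E, std = 0.7, 0.1                      # assumed correlation / error level
noise = lambda: std * (rng.standard_normal((R, T)) + 1j * rng.standard_normal((R, T)))
H_E1, H_E2 = rho_E * H + noise(), rho_E * H + noise()
H_hat_E = 0.5 * (H_E1 + H_E2)

Z_k = H @ (np.ones(T) / np.sqrt(T))        # toy received signal
W_k, singvals = decode(Z_k, H_hat_E)
```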
Line-of-Sight Link Scenario

Another possible scenario is one where there is LOS between the transmitter and both the receiver and the eavesdropper. Under these conditions, the channel is defined as the sum of a LOS component (that does not suffer fading effects) with several multipath rays (assumed uncorrelated and subject to fading). In a worst-case scenario, we can assume that the eavesdropper can estimate the LOS component (eventually with a certain error), although that is not feasible for the remaining multipath rays [32]. In this case, we define the channel as

H_k = D_k^los + R_k^mp,

where D_k^los is the low-fading, highly correlated LOS component and R_k^mp is the high-fading multipath component of the channel. We then substitute this channel into the estimation models (3) and (7). The intended receiver's and transmitter's remaining operations are calculated as described previously. The eavesdropper, however, cannot estimate the multipath component of the channel, and must instead rely on the estimate of the LOS component. We define this component as

Ĥ_k^(E,los) = (Ĥ_k^(E1,los) + Ĥ_k^(E2,los))/2,

where Ĥ_k^(E1,los) = ρ_E1 D_k^los + ξ_k + ε_k and Ĥ_k^(E2,los) = ρ_E2 D_k^los + ξ_k + ε_k. In this scenario, the channel estimates Ĥ_k^(E1,los) and Ĥ_k^(E2,los) only concern the LOS component between A and E and between B and E, respectively. The difference between these estimates and the real channel will be proportional to the power of the multipath component. We define the ray power coefficient as

α_RP = P_R / (P_D + P_R),

where P_D and P_R are the powers of the LOS and multipath components, respectively. Clearly, if α_RP = 0, the channel is composed only of the LOS component, whereas at α_RP = 1 the channel is composed only of the multipath component.

Iterative Equalization

In order to reduce the ISI, we employ a nonlinear FDE technique based on the IB-DFE concept [24,33]. The IB-DFE is a frequency-domain receiver which performs an iterative equalization based on the minimum mean squared error (MMSE) criterion. This equalization is done on a sub-carrier basis, and is composed of a feedforward and a feedback branch, the latter employed to remove the residual ISI. The equalization process is iterative and can be repeated up to L times. The set of equalized symbols associated with the kth sub-carrier and lth iteration is given by

S̃_k^(l) = F_k^(l) W_k − B_k^(l) S̄_k^(l−1),

where S̄_k^(l−1) is a C × 1 vector with the soft-decisions of the previous iteration, and F_k^(l) and B_k^(l) are the feedforward and feedback equalization matrices for the kth sub-carrier and lth iteration, respectively. The feedforward equalization matrix for the kth sub-carrier and lth iteration is the MMSE coefficient matrix computed from Λ̂_k and ρ^(l−1), where ρ^(l−1) denotes the block-wise reliability associated with the data estimated in the (l−1)th iteration (when l = 1, we have ρ^(0) = 0). On the other hand, the feedback equalization matrix is defined as

B_k^(l) = F_k^(l) Λ̂_k − I.

The soft-decision estimates of the transmitted data, employed in the feedback equalization, are calculated using the reliability of each bit in each symbol, expressed as a log-likelihood ratio (LLR); for QPSK these are

L_n^I = (2/σ²) Re{s̃_n} and L_n^Q = (2/σ²) Im{s̃_n},

where σ² is the variance of the residual noise-plus-interference at the equalizer output. After obtaining the LLR for each bit, we can calculate the soft decision of a given data symbol as

s̄_n = tanh(L_n^I/2) + j tanh(L_n^Q/2).

The estimated data symbols are obtained through the hard-decision of the equalized symbols.

Secrecy Rate

The secrecy rate is defined as the difference between the capacity of the channel between A and B and the capacity of the channel between A and E [34-36].
For simplicity, we define the total capacity as the sum of the capacities of each sub-carrier, i.e.,

C = Σ_{k=0}^{N−1} C_k,    (32)

where C_k denotes the capacity of a single sub-carrier, defined as [3]

C_k = I(X_k, Z_k),    (33)

where I(X_k, Z_k) is the mutual information between the transmitted signal and the received signal, which can be computed as

I(X_k, Z_k) = H(Z_k) − H(Z_k|X_k),    (34)

with H(Z_k) being the differential entropy of Z_k and H(Z_k|X_k) being the conditional differential entropy of Z_k given X_k. Since we know that X_k is independent from N_k, we can simplify H(Z_k|X_k) = H(N_k) and define both entropies as

H(Z_k) = log₂ det(πe (H_k R_X H_k^H + R_N))    (35)

and

H(N_k) = log₂ det(πe R_N),    (36)

where R_X = σ_X² I and R_N = σ_N² I, with σ_X² and σ_N² corresponding to the variances of X_k and N_k, respectively. By substituting (35) and (36) in (34), we can write

C_k = log₂ det(I + (σ_X²/σ_N²) H_k H_k^H).    (37)

Since we have two different transmitter-receiver pairs, we can likewise define two different system capacities. Let us start by defining the system capacity associated with the link from A to B (i.e., the capacity of the intended receiver), given by expression (38): a log-det expression of the same form as (37), evaluated with the singular values of the estimated channel and with the noise power augmented by σ_B², the power of the interference associated with the imperfect channel estimation. This interference power is obtained from the interference matrix at the receiver, which gathers the residual cross-stream interference that results from precoding and decoding with imperfect channel estimates. Similarly, we can define the capacity of the eavesdropper, expression (41), where ρ_E is a simplification defined as ρ_E = ρ_E1 = ρ_E2, and σ_E² is the interference power due to the imperfect channel estimation, which is larger than σ_B² and is computed from the corresponding eavesdropper interference matrix. With (38) and (41), we are able to obtain the total capacity by using (32). Moreover, we are also able to compute the secrecy rate, defined as the difference between the intended receiver's capacity and the eavesdropper's capacity, i.e.,

SR = C_B − C_E.    (44)

Results and Discussion

In this section we present a set of performance results regarding the BER and secrecy rate of the considered point-to-point MIMO system with, unless otherwise mentioned, T = 8 transmit antennas and R = 8 receive antennas. These performance results involve scenarios with and without a LOS component and are obtained through Monte Carlo simulations. Unless otherwise stated, the block size is N = 256.

NLOS Scenario

We begin by analyzing the impact of the ρ_E factor on the eavesdropper's performance. This can be observed in Figure 4, where we measure the BER of the eavesdropper for different ρ_E values. From the figure, it can be observed that the system performance can be severely degraded at low levels of ρ_E. In accordance with our system definition, it is not unreasonable to assume that the eavesdropper will operate with small values of ρ_E. In the next set of results, we compute the secrecy rate of the system under different conditions. Figure 5 shows the secrecy rate as a function of ρ_E, considering different MIMO configurations. From the figure it can be concluded that, with perfect CSI, the maximum attainable secrecy rate increases with the number of antennas of both users. Figure 6 shows the secrecy rate of an 8 × 8 system, considering different values of β_N (i.e., considering different channel estimation errors at both receivers), at an SNR of 12 dB. As expected, the addition of a channel estimation error negatively impacts the secrecy rate of the system, particularly for lower values of ρ_E. In Figure 7, we have introduced the channel mismatch error, represented by β_M, in addition to the channel estimation error and SNR of the previous simulations. From the figure, it can be seen that when the channel estimation error assumes low levels, the secrecy rate increases.
It should also be noted that even for high values of ρ_E, the secrecy rate is higher than in a scenario with no channel mismatch error. This is expected, since the mismatch error affects only the eavesdropper's capacity. In addition, a relatively small difference between the theoretical and simulated results can be observed. This arises from the residual error of the Gaussian approximation. Figure 8 combines various levels of SNR for the same levels of channel estimation and mismatch errors. From the figure, it can be noted that a higher SNR leads to a higher secrecy rate, as expected, with the secrecy rate gain increasing further for smaller values of ρ_E.

Multiple Eavesdroppers Scenario

Let us now consider the existence of K eavesdroppers performing joint estimation and equalization. As mentioned before, this scenario is approximated by a single eavesdropper employing KR antennas, for K > 1. Figure 9 shows the secrecy rate of the system for K = 1, 2 and 4. From the figure, it can be seen that increasing the number of eavesdroppers leads to a lower attainable secrecy rate. This fact is not limited to the scenario without errors, as can be observed in the scenario with channel mismatch in Figure 10. From this figure, it can be noted that by considering more eavesdroppers and/or antennas, the impact of the channel mismatch error can be reduced (or even eliminated).

LOS Scenario

In addition to varying ρ_E and the error factors, let us evaluate the secrecy rate of a scenario where we also vary the ray power ratio between the multipath component and the main LOS component. Figure 11 shows the secrecy rate with no errors, considering different values of ρ_E and different ray power coefficients α_RP (simulated and theoretical curves for α_RP = 0.5 and α_RP = 0.8).

Figure 11. Secrecy rate of the system for various ray power ratios with β_N = ∞.

From the figure it can be observed that the higher the ray power ratio, the higher the achievable secrecy rate. In fact, this is somewhat expected, since the component that the eavesdropper can estimate contributes less to the total channel power. Let us now consider a scenario with imperfect CSI. Figure 12 shows the secrecy rate when the SNR is 12 dB and different values of α_RP are considered.

Figure 12. Secrecy rate of the system for various ray power ratios with β_N = 100 at 12 dB SNR.

The unknown multipath component introduces a permanent error in the eavesdropper, which accounts for the higher secrecy rate at ρ_E = 1, similar to the mismatch error. In Figure 13, we have introduced the mismatch error in the previous scenario.

Figure 13. Secrecy rate of the system for various ray power ratios with β_N = 100 and β_M = 10 at 12 dB SNR.

We verify that the mismatch error leads to an overall increased secrecy rate at all power ratios, since by varying this ratio, only the eavesdropper's channel estimate and the corresponding capacity are affected.

Conclusions

In this paper, we proposed a physical-layer security approach against eavesdroppers based on MIMO-SVD schemes along with SC-FDE techniques. The security potential was studied, and it was shown that the secrecy rate can increase sharply as the distance between the eavesdropper and the transmitter or receiver increases. It was also demonstrated that in LOS scenarios, the secrecy rate increased with the multipath component's power.
Therefore, MIMO-SVD schemes combined with SC-FDE techniques can be an efficient option for highly secure MIMO communications.
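To tie the capacity definitions of the Secrecy Rate section to the LOS results, here is a rough Monte Carlo sketch. It uses the plain log-det capacity (ignoring the interference terms σ_B² and σ_E²) and a toy eavesdropper that only recovers a correlated copy of the LOS component, so all parameter values and the eavesdropper model are illustrative assumptions rather than the paper's exact expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = R = 8                                    # 8 x 8 MIMO, as in the simulations
snr = 10 ** (12 / 10)                        # 12 dB SNR

def capacity(H):
    # log2 det(I + snr * H H^H) per channel use, evaluated via singular values.
    s = np.linalg.svd(H, compute_uv=False)
    return float(np.sum(np.log2(1.0 + snr * s ** 2)))

def secrecy_rate(alpha_rp, rho_E, trials=200):
    # H = D_los + R_mp with alpha_rp = P_R / (P_D + P_R); 0 <= alpha_rp < 1.
    # The eavesdropper only recovers a correlated copy of the LOS part.
    total = 0.0
    for _ in range(trials):
        D = np.exp(1j * rng.uniform(0, 2 * np.pi, (R, T)))   # unit-power LOS
        mp = (rng.standard_normal((R, T)) + 1j * rng.standard_normal((R, T))) / np.sqrt(2)
        H = D + np.sqrt(alpha_rp / (1 - alpha_rp)) * mp
        H_eve = rho_E * D
        total += max(capacity(H) - capacity(H_eve), 0.0)
    return total / trials

print(secrecy_rate(alpha_rp=0.5, rho_E=0.8))
print(secrecy_rate(alpha_rp=0.8, rho_E=0.8))  # more multipath -> higher rate
```

Even in this simplified form, the sketch reproduces the qualitative trend of Figure 11: raising the ray power ratio increases the achievable secrecy rate.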
Assessing the Print Communication and Technology Attributes of an Academic Medical Center

Background: Historically, health literacy has been viewed as the patient's problem; however, it is now accepted that the responsibility for improving health literacy lies with health care professionals and systems. An Institute of Medicine report outlines the health literacy attributes, such as printed patient education and technology, which both play a role in patient decision-making and in engaging patients in their health care. Research suggests that patients who are engaged in their health care have improved health outcomes. For health care organizations to accommodate the needs of all patients, it is imperative that they determine the current organizational state and discover opportunities for improvement.

Methods: The Health Literacy Environment of Hospitals and Health Centers (HLEHHC) Print Communication Rating and Technology Rating Tool were used to measure the internal aspects of organizational health literacy at The University of Tennessee Medical Center (UTMC). Included in the print assessment were the 150 most distributed patient education handouts. Researchers also used the Simple Measure of Gobbledygook and the Patient Education Materials Assessment Tool to assess print material. Technology was assessed using UTMC's website as the authoritative source.

Key Results: The HLEHHC was useful for assessing print material and technology. Reviewing and reporting the data question by question revealed more granular, actionable information on where there are opportunities to improve the health care environment for all patients. This analysis resulted in proposing actions based on best practices that UTMC could implement in the coming year. The process is replicable in other settings.

Implications: Responsibility for improving informed medical decision-making lies with health care organizations. Low health literacy influences the effectiveness of print patient education and technology in informing patients about their health. Assessing these aspects of the health care organization as part of quality improvement provides necessary data for improvements. The Health Literacy Environment of Hospitals and Health Centers was a useful tool to measure characteristics of print and technology. [HLRP: Health Literacy Research and Practice. 2018;2(1):e26-e34.]

Plain Language Summary: A task force at an academic medical center assessed the health literacy attributes of their organization. Researchers assessed print patient education and patient-related technology. The researchers found areas for improvements to make health information easier to understand.

In the past, health literacy was viewed as the patient's problem, and it was their burden to acquire the necessary skills to understand and make decisions about their health. However, as illustrated in the literature (Koh, Brach, Harris, & Parchman, 2013; Koh & Rudd, 2015; Parker, Ratzan, & Lurie, 2003), it is now understood, and generally accepted, that the primary responsibility for improving health literacy lies with health care professionals and systems. The National Action Plan to Improve Health Literacy recognized the health literacy problem in the United States and focused on systematic issues in the health care system rather than on the shortcomings of patients (U.S. Department of Health and Human Services, 2010). The National Action Plan put forth seven goals to restructure how health education is conducted and how health information is disseminated.
The second goal of the National Action Plan called on health care organizations to "Promote changes in the health care delivery system that improve health information, communication, informed decision making, and access to health services" (U.S. Department of Health and Human Services, 2010). Subsequent to the report, a set of criteria was created by which organizations could gauge whether they made it "easier for people to navigate, understand, and use information and services to take care of their health" (Brach et al., 2012). Among the focal points of these two documents are several references to health materials, including print patient education, paper forms, audiovisual materials, and technology, such as patient portals, touch screens, social media content, and blogs. Each of these materials has the potential to inform patients about their health care. "Informed medical decision-making," a term introduced by McNutt in 2004, best describes this process, and research suggests that patients who are engaged in their health care have improved health outcomes (Stacey et al., 2017; Weiner et al., 2013). Technology and print resources have both been promoted as ways to inform patients about the decisions they must make and to better engage them in their health care (Woolf et al., 2005). As part of a performance improvement process, The University of Tennessee Medical Center (UTMC), located in East Tennessee, formed a task force, led by medical librarians, to assess the current state of the organization in regards to health literacy and to provide a basis for promoting changes. This article reports on the assessment of print communication and use of technology, including the assessment tool choice, the research methods, statistical analyses, and results.

INSTRUMENTS

There are several assessment tools available to assess an organization's health literacy attributes. All of the tools fall into the categories of surveys and checklists, and although they are helpful and easy to apply, none have been truly validated as research tools (Kripalani et al., 2013). Authors of the tools include government and private institutions, both in the U.S. and internationally. The target respondent for these measures is either an organization, an individual provider, or a patient (Kripalani et al., 2013). The criteria for choosing an assessment tool to apply to UTMC's setting included the following: that it be based in the U.S. health care system; include organization respondents; have been used in other health care organizations; and assess the most health literacy attributes, as defined by the Ten Attributes (Brach et al., 2012). The Health Literacy Environment of Hospitals and Health Centers (HLEHHC), created by Rudd and Anderson (2006), offered a set of tools to measure five aspects of the health care organization that impact patients with low health literacy. The HLEHHC is not meant for comparison purposes between health care organizations; instead, the tool measures internal aspects of the organizational health literacy of one organization. The document is composed of five categories: Print Communication, Oral Communication, Navigation, Policies and Protocol, and Technology. Each category contains background information for the researcher followed by a series of questions. Respondents answered questions by ranking them as 1, 2, or 3. A ranking of 1 represented "this is something that is not done." A ranking of 2 represented "this is done, but needs some improvement."
Lastly, a ranking of 3 represented "this is something that is done well." The resulting aggregate score for each of the five categories was then assigned to one of three predefined ranges: "begin a focused initiative to eliminate literacy-related barriers," "augment efforts to eliminate literacy-related barriers," and "continue to monitor and eliminate literacy-related barriers" (Rudd & Anderson, 2006). This article discusses the results of the Print Communication and Technology portions of UTMC's larger HLEHHC assessment project. The HLEHHC for print communication assessed factors that influence how a patient engages with and uses printed material for health decisions. The HLEHHC delineates the complexity of print materials through four distinct sections that highlight areas of influence, including: writing style; organization and design; type style (size of print and contrast with paper); and photographs (illustrations, symbols, and diversity). The HLEHHC assessed the use of technology through review of televisions, telephones, patient engagement, website content, and computers.

In addition to using the HLEHHC to assess technology and print, researchers used other methods for assessing the printed patient education material. Assessing the grade level is important, as The Joint Commission (2010) recommends that all patient education be at the sixth-grade reading level or below. The HLEHHC recommends using the Simple Measure of Gobbledygook (SMOG) to review the grade level of print material. Rudd and Anderson (2006) state that SMOG is useful for doing quick assessments and predicts 100% comprehension. Based on these recommendations, researchers chose SMOG to review grade level. The HLEHHC recommended the Suitability Assessment of Materials (SAM); however, the Patient Education Materials Assessment Tool (PEMAT) was deemed the better choice to assess the understandability and actionability of print materials, based on recent research demonstrating its validity (Shoemaker, Wolf, & Brach, 2014). Researchers included SMOG and PEMAT in addition to the HLEHHC also to help prepare reviewers to better answer the HLEHHC print communication questions. The research project received an exemption from the Institutional Review Board because there was no identifiable patient information.

METHODS

For the print assessment, researchers downloaded the 150 most distributed patient education documents from the hospital's system for review and assigned each document an identification number for tracking and data entry. The 150 pieces of patient education included both materials from ExitCare (a patient education material provider) and custom materials, which were created by UTMC staff. Materials were excluded if they were no longer available through ExitCare or if they were only charts or images with no text content. Each document was assessed by three independent reviewers using SMOG, PEMAT, and the Print Communication Rating (PCR) form of the HLEHHC. Six graduate nursing students, as well as two master's degree students in public health and counseling, were selected as reviewers to complete the print assessment. Each patient education document was reviewed three times by three different reviewers. Reviewers were randomly assigned materials.
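For reference, the SMOG grade used for the grade-level assessment is computed as 3.1291 + 1.0430 × sqrt(30 × polysyllables / sentences). A minimal Python sketch follows; it uses a crude vowel-group syllable heuristic, whereas the official SMOG procedure counts syllables exactly from a 30-sentence sample, so the output should be treated as approximate:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (min. one syllable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    """SMOG estimate: 3.1291 + 1.0430 * sqrt(polysyllables * 30 / sentences).
    Assumes the text contains at least one sentence-ending punctuation mark."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 3.1291 + 1.0430 * (poly * 30 / len(sentences)) ** 0.5

print(round(smog_grade("Take one tablet daily. Call your doctor if pain continues."), 1))
```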
A medical librarian trained in all three assessment tools provided reviewers with an overview of health literacy and principles of examining easy-to-read materials based on the National Network of Libraries of Medicine's class "Promoting Health Literacy Through Easy-to-Read Materials" (Ottosen, 2015). During training, reviewers practiced applying the tools using documents that were not included in the study. To avoid the bias that final PCR scores might have on reviewers, PCR forms were returned to researchers untotaled. Researchers then totaled the scores for the PCR and entered the data into a spreadsheet for further data analysis. Frequency statistics were conducted on all variables to check for data entry errors. Skewness and kurtosis statistics were run on continuous variables to assess normality. Independent sample t-tests were used to compare groups on normally distributed continuous variables. Mann-Whitney U tests were employed for outcomes that were not normally distributed. Means, medians, interquartile ranges, standard deviations, and 95% confidence intervals (CI) were reported and analyzed. Pearson's r correlation was used to test associations between continuous variables. Intra-class correlation coefficients (ICC) were used to establish inter-rater reliability for survey instrument ratings. All analyses were conducted using SPSS Version 21 (IBM Corporation; Armonk, NY), and statistical significance was assumed at an alpha value of 0.05.

Researchers evaluated technology use at UTMC using the Technology Rating Tool (TRT) of the HLEHHC. With permission from one of the tool's original authors (R. Rudd, personal communication, September 9, 2016), researchers edited the TRT to better reflect modern-day technology, including accessing test results online, accessing prescription history, and requesting health information and video chat from hospital rooms (Table A). One of the researchers completed the TRT using UTMC's website as the authoritative source. If the website provided answers to questions directly and affirmatively, a rating of "3" was given. If answers were not available on the website, but known to be true by researchers based on experiences and observations, a rating of "2" was given. If answers were not available on the website and were not known to be true, a rating of "1" was given. The data were then entered in a spreadsheet for further analyses. Descriptive statistics were used to explain the prevalence of ratings.

RESULTS

Of the 150 print materials analyzed, 91.3% (n = 137) were original, unedited documents from the ExitCare collection, and 8.7% (n = 13) were custom documents created or edited by UTMC health care providers. All data were normally distributed, as determined by skewness and kurtosis statistics. There was good inter-rater reliability for the PCR between reviewers (ICC = 0.67). The mean PCR score for all 150 documents was 53.9 (95% CI 53, 54.9). When comparing original documents to custom, there was a significant difference (p = .02), with a lower score of 50.2 (95% CI 47.6, 52.8) for custom versus 54.3 (95% CI 53.3, 55.3) for original documents. The researchers evaluated the individual means for each question on the PCR to determine more granular results. In the category of formatting, UTMC scored average means of 2.82 to 2.97 of a possible 3. Included in formatting are "font size is 12 points or greater" (µ = 2.96) and "text avoids splitting words across two lines" (µ = 2.97).
For the category of visuals and cultural sensitivity, UTMC scored average means of 1.39 to 2.74 of a possible 3. Included in this category is that visuals are "representative of the intended audience" (µ = 1.39) and "reinforce key messages" (µ = 1.76). See Figures 1-4 for a complete list of questions and their aggregate means. Due to extremely low inter-rater reliability, researchers were unable to run statistics on PEMAT scores. There was poor inter-rater reliability for the Understandability PEMAT (ICC = 0.25) and very poor inter-rater reliability for the Actionability PEMAT (ICC = 0.06). No further results are reported for PEMAT. The overall technology rating score was 47 of a possible 54. The following is the proportion of rankings on the TRT: 72.2% were ranked as a 3 (highest ranking), 16.7% were ranked as a 2 (middle ranking), and 11.1% were ranked as a 1 (lowest ranking). See Table B for details on question rankings.

DISCUSSION

This research is unique in its use of the HLEHHC assessment tools for both print communication and technology. Although Horowitz et al. (2014) and Groene and Rudd (2011) each referenced the HLEHHC in reporting their research, both used it only to assess the navigation of health care settings. Although these authors reported on the assessment of print patient education, neither used the PCR tool, relying instead on SMOG, SAM, or Flesch-Szigriszt. In addition, although the Fox Chase Cancer Center (Philadelphia, PA) reported using the HLEHHC and SMOG to evaluate patient education, researchers did not break down the PCR question by question to find recommendations for further evaluation and research (Raivitch et al., 2010). Finally, Fox Chase Cancer Center (Raivitch et al., 2010), Horowitz et al. (2014), and Groene and Rudd (2011) used the HLEHHC tools but did not assess technology. The process of assessing the health literacy attributes of a health care setting using the HLEHHC provided rich data, which can be used to make improvements. Overall, UTMC scored well in both Print Communication (53.9 points) and Technology (47 points). The PCR score ranges from 0 to 72, with higher being better, and the Technology Rating score ranges from 0 to 54, with higher being better. By breaking down the PCR and TRT scores question by question, researchers could determine the areas in which UTMC scored well and the areas with opportunities for improvement. In doing so, researchers learned that for print communication UTMC scored well in the formatting of print patient education, as well as the use of headings, logical grouping of events, and bullets. Areas in which UTMC has opportunities for improvement include cultural sensitivity, use of visuals, and reading grade level. UTMC scored well in regards to patient engagement through technology because of the availability of bedside televisions to deliver patient education, the ability to request health information from patient rooms, and the availability of computers in more than one location. The organization established an environment for patients to engage. Opportunities for improvement in technology included providing a more engaging patient portal. By examining the aggregate score per question for print communication and technology, researchers could get a specific picture of where UTMC stood and then make recommendations for change to UTMC's senior leadership based on these findings. Researchers edited the technology tool to reflect today's modern technology.
The Centers for Medicare and Medicaid Services emphasize the importance of technology for patients to have the capability to access their health records and, in so doing, become more connected to their provider (Weinstock & Hoppszallern, 2015). The Hospital and Health Network (HHN) awards organizations as the "Most Wired" based on their use of technology to partner with patients on their health (Vesely, 2017). According to Weinstock and Hoppszallern (2015), organizations on the Most Wired list are consistently improving their patient engagement by connecting daily with patients through the Internet, such as by providing education and allowing for e-visits with the health care team. Additionally, the Most Wired organizations note the importance of patient portals being user-friendly and useful (Weinstock & Hoppszallern, 2015). By editing the TRT, researchers felt the addition of patient engagement better reflected today's technology.

LIMITATIONS

Limitations to this study include those that exist within the HLEHHC instrument. Options for responses on each of the tools within the HLEHHC manual are limited to a 3-point scale. The preferred scale is a 5- or 7-point scale, which results in data being available on a continuum from strongly agree to strongly disagree, therefore offering a richer dataset. In addition, the method of using a website and personal knowledge as an authoritative source for the assessment of technology was a novel approach, and outdated items on the technology form were updated by the researchers, thereby challenging content validity. The cross-sectional design of the print assessment limits the ability of researchers to infer a "causal effect" due to lack of randomization. We cannot say with certainty that results found with the sample of 150 documents we reviewed would be duplicated in the whole population of documents. Future research should include a truly randomized sample of the total number of documents. The low inter-rater reliability between raters using the PEMAT precluded using the data from that part of the print assessment study; therefore, we did not have valid data on "actionability and usability." Further research should be done to understand why there was a low inter-rater reliability and to further explore the validity of this tool.

CONCLUSION

Health literacy affects people of all ages and education levels. The National Action Plan to Improve Health Literacy calls for a focus on systematic problems rather than potential shortcomings of patients (U.S. Department of Health and Human Services, 2010). At UTMC, a librarian-led task force was created to assess the organization's current state of health literacy and to serve as a catalyst for promoting changes at UTMC. The HLEHHC offered a set of tools to measure aspects of the health care organization that impact patients with low health literacy. As previously mentioned, the HLEHHC was used at UTMC to assess the health literacy environment of the medical center; included in this report were Print Communication and Technology. The HLEHHC was a useful way to evaluate an organization's health literacy attributes in relation to print and technology. Health care organizations that do this demonstrate commitment to patient-centered care. UTMC's score for both Print Communication and Technology ranked in the highest of the three-category scoring rubric, which translates within the HLEHHC scoring rubric as "continue to monitor and eliminate literacy-related barriers."
Researchers took a unique approach to reviewing and reporting the data for each tool on a question-by-question basis, thereby revealing more granular, actionable information on where there are opportunities to improve the health care environment for all patients. This analysis resulted in proposing specific actions based on best practices that UTMC could implement in the coming year. Future plans for UTMC in regards to print communication include the following: the task force members will provide instruction to medical center team members on how to create easy-to-read and engaging patient education; create an advisory committee to evaluate the cultural sensitivity of the print communication; implement focus groups to evaluate print communication; and research vendors to find one that offers patient education written below the sixth-grade level. Future plans regarding technology include providing an engaging patient portal and promoting the use of a smart phone app for accessing patient portals.

Table A (excerpt). Original and updated Technology Rating Tool items:
15. "Kiosks are available to patients in one or more locations (i.e., waiting areas, testing sites, pharmacy, resource rooms)." → "Patients can access their test results online."
16. "Kiosks are programmed for orientation purposes." → "Patients can access their prescription history online (i.e., patient portal)."
17. "Kiosks are programmed for educational purposes." → "Patients can request health information from their room."
18. "Kiosks have headsets connected to them." → "Patients can request 'video chat' from their rooms."
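The group comparison reported in the Results (an independent-samples t-test on PCR scores, with means and 95% CIs) can be outlined in a few lines. A minimal Python sketch assuming SciPy is available; the sample values are synthetic stand-ins generated to resemble the reported means, not the study data:

```python
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    """Mean with a t-based confidence interval, as reported for PCR scores."""
    x = np.asarray(x, dtype=float)
    m, se = x.mean(), stats.sem(x)
    h = se * stats.t.ppf((1 + level) / 2, len(x) - 1)
    return m, m - h, m + h

rng = np.random.default_rng(5)
original = rng.normal(54.3, 6.0, 137)       # toy scores, not the study data
custom = rng.normal(50.2, 6.0, 13)

t, p = stats.ttest_ind(original, custom)    # independent-samples t-test
print(mean_ci(original), mean_ci(custom), p)
```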
Tracheostomy in special groups of critically ill patients: Who, when, and where?

Introduction

Tracheostomy may facilitate weaning from mechanical ventilation and is one of the most common surgical procedures performed in the Intensive Care Unit (ICU). Tracheostomy is performed in 9% and 10% of all mechanically ventilated patients in the United States and the United Kingdom, respectively [1-4]. Mortality due to tracheostomy is rare, with periprocedural complications requiring fluid or blood replacement occurring in 7% of all cases [5]. The timing of tracheostomy is controversial, with single-center trials showing a benefit of early tracheostomy while larger multicenter studies have failed to replicate such favorable outcomes [5-9]. This article will focus on the current evidence around the timing of tracheostomy and its impact on different patient subpopulations.

Advantages of Tracheostomy over Conventional Intubation

Sedation and mobilization

Multiple single-center prospective and retrospective studies have shown an association of early tracheostomy with decreased use of sedation, facilitating early weaning from mechanical ventilation [10,11]. Tracheostomy may reduce translaryngeal stimulation, facilitating patient comfort, and encourage patient autonomy and communication; it has been associated with improved mobility and decreased length of intensive care stay [12,13].

Weaning from the ventilator

Good-quality single-center studies [7] and larger, methodologically less rigorous studies have suggested a reduction in the duration of mechanical ventilation, nosocomial pneumonia, and hospital length of stay (LOS) in patients undergoing early tracheostomy [14]. Several multicenter randomized controlled trials and meta-analyses in mixed intensive care populations, however, failed to show a statistically significant reduction in the duration of mechanical ventilation [5].

Work of breathing

In patients with failed extubations, tracheostomy can reduce expiratory resistance to airflow and improve lung mechanics [15]. In this cohort of patients, a reduction in work of breathing may facilitate liberation from the ventilator [15,16].

Secretion clearance and mucociliary function

Translaryngeal intubation is associated with a high incidence of nosocomial sinusitis in critically ill patients. Over 90% of translaryngeally intubated patients have opacified sinuses by day 7, which improves following extubation or tracheostomy. The clinical significance of this is yet to be determined [17].
Complications of Prolonged Translaryngeal Intubation

Laryngotracheomalacia is a rare complication following prolonged intubation in the critical care unit, due to the use of a high-volume, low-pressure cuff. Prolonged translaryngeal intubation is associated with lip and vocal cord pressure ulceration (in >90% of patients on autopsy) [18] and vocal cord dysfunction post-extubation. This can be minimized through the use of a tracheostomy. Tracheal stenosis, defined as a 10% reduction in the internal tracheal diameter, is more common with tracheostomy [Figure 1] [19].

Percutaneous versus Surgical Tracheostomy

The percutaneous approach offers several advantages over surgical tracheostomy and is the preferred choice in over 97% of UK ICUs [2]. Most notably, these include the speed of insertion, comparable complication rates, and a smaller wound size, improving cosmetic outcome and minimizing the incidence of infection. A bedside tracheostomy avoids the deterioration of patients associated with intrahospital transfer [20]. This has been demonstrated through several meta-analyses and has led to percutaneous tracheostomies being the predominant method of choice for tracheostomies in the ICU [3]. Studies have revealed that 70-97% of all tracheostomies are performed in the ICU by intensivists, the favored technique being the single-stage dilatation technique [3]. The potential benefit of percutaneous tracheostomy was analyzed by Freeman et al. [20], who performed a meta-analysis comparing the surgical versus percutaneous approach. They pooled data from 5 prospective randomized controlled trials, which included 236 patients in total. They found that the percutaneous method reduced the operative time, perioperative bleeding, stomal infection, and postoperative complications. There was no significant difference in mortality [20]. In addition to the above advantages, percutaneous tracheostomies are more cost-effective. An economic analysis comparing percutaneous and surgical tracheostomies found that the percutaneous technique reduced the cost by one-third in United States hospitals [21]. This evidence has been reflected in the recently published national guidelines suggesting percutaneous tracheostomy as the standard method for tracheostomy in intensive care patients [22].

Timing of the Tracheostomy in Unselected Intensive Care Patients

There have been numerous single-center [7,14] and multicenter trials [5], meta-analyses [23,24], and retrospective studies [25] investigating early (<5 days) versus late (>7-10 days) tracheostomy in critically ill patients. The well-conducted single-center trials showed a reduction in ICU and hospital LOS, a reduced requirement for sedation in ventilated patients, and earlier weaning from the ventilator in the early tracheostomy group [26]. While they showed a reduction in the quality indicators of ICU care, they failed to show a mortality benefit.
All these well-controlled single-center trials point toward cost savings without affecting patient mortality. Unfortunately, these benefits were not replicated in well-conducted multicenter trials [5] and meta-analyses [24] of the published data. These well-controlled randomized multicenter trials could be criticized for the heterogeneity of the patients recruited. For example, in the study by Young et al. [5], each center contributed 2-3 patients per year, probably an underestimate of the total number of tracheostomies performed. Furthermore, this study did not report on the subgroups of patients who may have benefited from an early tracheostomy.

Tracheostomy in Cardiac Intensive Care

Ventilatory dependence after cardiovascular surgery is common, partly due to an improvement in the provision of services to patients with comorbidities [6]. One study found that, of 12,777 patients undergoing cardiovascular surgery, 704 (5.5%) developed ventilator dependence, defined as intubation for over 72 h. Of those, survival at 30 days and 2 years was 74% and 26%, respectively, compared to 84% and 58% in patients who did not develop ventilator dependence [27]. Another study found that only 31% of cardiac surgery patients ventilated for over 3 days were successfully weaned by day 10 [28]. The results from well-conducted prospective randomized controlled trials and retrospective studies are conflicting in patients in cardiac intensive care [29,30]. In these studies, there were no differences in the duration of critical care stay, mortality rates up to 90 days, or ventilator-associated pneumonia (VAP). In the early percutaneous tracheostomy group, there was reduced usage of sedation, less delirium, fewer unscheduled extubations, and earlier mobilization, with associated patient comfort and ease of administering nursing care [31]. In this cohort of patients, where establishment of early nutrition is of paramount importance, tracheostomy was associated with early resumption of oral nutrition [6]. Some investigators believe that early tracheostomy within 48 h is associated with deep-seated mediastinitis in patients following median sternotomy [32,33]. However, a large retrospective analysis of 5095 patients who underwent tracheostomy over 6 days after cardiac surgery found no link to mediastinitis or sternal wound infection [33].

Tracheostomy in Neurocritical Care

Small prospective and large retrospective studies in patients with brain injuries have shown that early tracheostomy reduced the duration of mechanical ventilation through reduced usage of sedation [36-38]. Despite these improvements after tracheostomy, there was no significant difference in long-term mortality, attributable to the diffuse nature of brain injury. Patients with brain injury are at a higher risk of developing VAP (up to 60% in mechanically ventilated patients after traumatic brain injury), and it is associated with significant morbidity [39] and mortality. While it is controversial, tracheostomy may reduce secondary insults through a reduction in the incidence of VAP and unscheduled extubations after a brain injury. Tracheostomy is a safe and well-tolerated procedure in patients with brain injury, but caution should be exercised to reduce the incidence of periprocedural secondary neurological insults. When performed in an appropriate setting, there was no evidence of periprocedural insults during percutaneous tracheostomy in patients with brain injury [40].
Tracheostomy in Trauma Patients

The incidence of tracheostomy in patients with polytrauma is high (over 40%) [41]. Risk factors for tracheostomy in this cohort are age over 55, pulmonary contusions, multiple rib fractures, presence of head injury, low Glasgow coma score at admission, high abbreviated injury scale scores (>75), and craniotomy [41,44,45]. Adopting a standardized protocol of early consideration of tracheostomy may reduce time on the ventilator, VAP, and ICU LOS.

Tracheostomy in Burns

The surface area, depth, and location of the burn (particularly head and neck burns) are the main factors determining the need for early tracheostomy in patients with burns. It is performed in patients with total body surface area burns of over 60% because of the requirement for multiple trips to the theater. It is also performed in patients with head or neck burns and associated inhalation injury [46]. In a small randomized controlled trial of 44 burns patients comparing early (next operative day) versus late tracheostomy (if needed, on day 14), there were no significant differences in the incidence of VAP, LOS, or mortality [47]. Despite these results, early tracheostomy may confer a microbiological advantage in burns patients: in pediatric patients with burns, early tracheostomy conferred a microbiological benefit to burned children [48].

Conclusion

There is insufficient evidence to support either an early or a late tracheostomy in routine clinical practice. Large retrospective national registries have shown a significant linear increase in hospital costs and LOS with time to tracheostomy. Timing of tracheostomy should be personalized for the patient after taking into account the risks and benefits of this procedure. We believe that a bedside percutaneous tracheostomy is a safe and economically feasible alternative to formal theater tracheostomy. Well-conducted randomized controlled trials with the relevant subgroups of patients are needed.

Financial support and sponsorship: Nil.

Figure 1: Theoretical advantages of tracheostomy in the Intensive Care Unit.
Ongoing measles outbreak in Romania, 2011

A Stanescu (aurora.stanescu@insp.gov.ro)1,2, D Janta1,2, E Lupulescu3, G Necula3, M Lazar3, G Molnar4, A Pistol1,2
1. National Institute of Public Health, National Centre for Communicable Diseases Surveillance and Control, Bucharest, Romania
2. These authors contributed equally to this work
3. National Institute of Research and Development for Microbiology and Immunology 'Cantacuzino' – National Reference Laboratory for Measles and Rubella, Bucharest, Romania
4. Ministry of Health, Bucharest, Romania

Since January 2011, Romania has been experiencing a measles outbreak, with 2,072 cases notified in 29 of the 42 Romanian districts. Most cases occurred in the north-western part of the country among unvaccinated children, with the highest number of cases (893 cases) registered in children aged one to four years. This report underlines once more the need for additional measures targeting susceptible populations to achieve high vaccination coverage with two doses of measles-mumps-rubella vaccine.

Between January and June 2011, 2,072 measles cases were notified in 29 of the 42 Romanian districts, with most cases registered in the north-western part of the country, mainly among unvaccinated children. No measles-related deaths have so far been notified in 2011. An outbreak of measles was first noticed in late August 2010 in the north-eastern part of Romania [1] and, by the end of the year, 193 cases were registered in the whole country.

Measles has been a statutorily notifiable disease in Romania since 1978, and medical practitioners have to immediately report all suspected measles cases to the local public health authorities. At the national level, the National Centre for Communicable Diseases Surveillance and Control in Bucharest collects and analyses all notifications of measles cases. National case-based notification was initiated in 1999, and the European Union (EU) case definition and case classification have been adopted since 2005 [2].

The monovalent measles-containing vaccine was introduced in 1979 in the Romanian immunisation schedule for children aged 9-11 months. In 1994, the second measles vaccine dose was introduced for children aged between six and seven years (first school grade). The combined measles-mumps-rubella (MMR) vaccine replaced the monovalent measles vaccine in 2004 and was recommended as a first dose for children aged 12-15 months. The second MMR vaccine dose has been recommended for children aged between six and seven years since October 2005.
Between 2000 and 2008, the national measles vaccination coverage for children aged between 18 and 24 months with the first dose of measles-containing vaccine was estimated at 97%-98%, and for children aged seven years, the vaccination coverage with the second dose of measles-containing vaccine was estimated at 96%-98% [3]. In the last two years, a constant decrease could be noticed in the measles vaccination coverage for children aged 12 months. According to the vaccination coverage reports, in 2009 the coverage for the first MMR vaccine dose was 85.1% (95% CI: 82.4-87.8) at the age of 12 months and reached the target of 95% (95% CI: 93.4-95.8) coverage for children aged 18 months. A high number of children remain unvaccinated, not only in the hard-to-reach communities but also in the general population, due to parental refusal and scepticism regarding the benefits of vaccination. Vaccination coverage for the second dose of MMR vaccine is reported every year by the school medical staff to the local health authorities after the school vaccination campaign. In 2010, the reported coverage for the second dose of measles-containing vaccine, calculated as the number of doses administered divided by the total number of eligible children aged seven years, was 93.4% (95% CI: 90.7-95.0).

Here we report an ongoing measles outbreak in Romania by analysing measles data available from 1 January to 30 June 2011. Descriptive analysis was performed using the national surveillance standardised form sent by the public health authorities of each district to the National Centre for Communicable Diseases Surveillance and Control.

Outbreak description

From the beginning of 2011 until 30 June, a total of 2,072 measles cases were notified by the local public health authorities. The highest number of cases was registered among children aged between one and four years (893 cases), followed by the five to nine year-olds (445 cases) and infants under one year of age (303 cases). Among the 10-14 year-olds, 189 cases were identified, 150 cases occurred in those aged 20 years and above, and 92 cases were registered among adolescents aged between 15 and 19 years. Approximately half of all cases occurred in hard-to-reach communities. The monthly incidence increased from 131 cases registered in January to a peak of 515 cases in May, and decreased in June, when the number of notified cases was 437 (Figure 1).

Laboratory confirmation was performed by detecting measles IgM antibodies in serum samples. Due to many local outbreaks, laboratory confirmation was performed only on some of the first cases identified in a particular area until the D4 genotype was confirmed. For cases with a clear epidemiological link with the outbreak, the epidemiological confirmation criteria were used. Of the 2,072 notified measles cases, 898 were laboratory-confirmed, 1,161 were probable cases with a documented epidemiological link, and 13 were clinical measles cases for whom sera could not be obtained due to parental refusal. RT-PCR techniques to detect measles virus nucleic acid were also used to confirm the first cases from some affected districts. Twelve viruses were genotyped by a nested RT-PCR reaction which targeted a 450 nt region at the C-terminus of the N protein (Nc region). All of them belonged to the D4 genotype currently circulating in Europe [4].
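Coverage figures of the form 85.1% (95% CI: 82.4-87.8) are typically normal-approximation binomial confidence intervals around a proportion. A minimal Python sketch follows; the survey denominators are not given in the report, so the numbers below are made-up placeholders:

```python
import math

def coverage_ci(vaccinated, eligible, z=1.96):
    """Normal-approximation 95% CI for a coverage proportion, in percent."""
    p = vaccinated / eligible
    half = z * math.sqrt(p * (1 - p) / eligible)
    return 100 * p, 100 * (p - half), 100 * (p + half)

# Hypothetical counts only; the real survey denominators are not published here.
print(coverage_ci(vaccinated=638, eligible=750))
```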
Measles spread in 29 districts (including Bucharest) of a total of 42, and the geographical distribution shows a concentration of measles cases in the north-western part of the country (Figure 2). The median age was three years (range: three weeks to 43 years). The highest incidence (138.4 per 100,000 population) was in infants not eligible for vaccination (under one year of age), followed by the one to four year-olds (103.4 per 100,000 population) and the five to nine year-olds (42.3 per 100,000 population) (Figure 3). For the older age groups, the incidence ranged between 17.1 per 100,000 population and 1.8 per 100,000 population. Most cases occurred among unvaccinated children, representing 72.8% of the total number of cases registered during the period mentioned above. Of these, only 19.8% were not eligible for MMR vaccination due to their age (under 12 months) (Figure 4).

Control measures

Several control measures have been implemented by the local health authorities in their efforts to stop this outbreak. An additional MMR vaccination campaign started in the affected areas, targeting all children aged between seven months and seven years, irrespective of their measles vaccination status. Nevertheless, no change has yet been foreseen in the national immunisation schedule regarding the administration of the first dose of MMR vaccine. The MMR vaccine is supplied by the Ministry of Health and is offered free of charge through the routine immunisation services (family doctors) and special outreach teams. As of 30 June 2011, 4,500 children had been vaccinated with measles-containing vaccine following this additional vaccination campaign. Active case finding was initiated by general practitioners in the areas most affected by the outbreak, as well as contact-tracing in hospitals and in the community. Other activities, such as meetings with local public health representatives, were undertaken by the national public health authorities in order to increase awareness of the ongoing outbreak, not only among physicians but also in the general population.

Despite the high national immunisation coverage with MMR vaccine reported during the last 10 years, this outbreak highlights the presence of pockets of individuals vulnerable to measles, particularly, but not exclusively, members of hard-to-reach communities. We observed that more parents, even among highly educated persons, have lost their confidence in the benefits of vaccination for their children, and this has become an important problem that needs to be addressed. The current measles outbreak in Romania and those in other European countries reveal the need for increased awareness of the declining confidence that people have in vaccination benefits for their children, and for public health interventions focused on hard-to-reach communities. In addition, after the pandemic influenza A(H1N1)2009, a constant scepticism towards and refusal of vaccination in general could be noticed, not only in hard-to-reach communities but also in the general population. In areas and communities where vaccine coverage remains sub-optimal, large cohorts of susceptible people accumulate and represent a potential for large outbreaks. The large proportion of cases observed in infants suggests an intensely circulating measles virus [7].

Figure 4. Percentage of measles cases by age group and vaccination status, 1 January-30 June 2011 (n=2,072 cases).
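The age-specific incidences quoted above are simply cases per population, scaled to 100,000. A minimal sketch; the population denominators below are back-calculated assumptions chosen to reproduce the reported rates, not official census figures:

```python
cases = {"<1": 303, "1-4": 893, "5-9": 445}           # notified cases
population = {"<1": 219_000, "1-4": 864_000, "5-9": 1_052_000}  # assumed denominators

# Incidence per 100,000 population: cases / population * 1e5.
for group in cases:
    inc = cases[group] / population[group] * 100_000
    print(f"{group}: {inc:.1f} per 100,000")
```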
Basic life support knowledge, self-reported skills and fears in Danish high school students and effect of a single 45-min training session run by junior doctors; a prospective cohort study

Background
Early recognition and immediate bystander cardiopulmonary resuscitation are critical determinants of survival after out-of-hospital cardiac arrest (OHCA). Our aim was to evaluate current knowledge of basic life support (BLS) among Danish high school students and the benefits of a single training session run by junior doctors.

Methods
Six hundred and fifty-one students were included. They underwent one 45-minute BLS training session including theoretical aspects and hands-on training with mannequins. The students completed a baseline questionnaire before the training session and a follow-up questionnaire one week later. The questionnaire consisted of an eight-item multiple-choice test on BLS knowledge, a four-level evaluation of self-assessed BLS skills, and an evaluation of fear of being first responder based on a qualitative description and a visual analog scale from 0 to 10.

Results
Sixty-three percent of the students (413/651) had participated in prior BLS training. Only 28% (179/651) knew how to correctly recognize normal breathing. The majority were afraid of exacerbating the condition or causing death by intervening as first responder. The response rate at follow-up was 61% (399/651). There was a significant improvement in correct answers on the multiple-choice test (p < .001). The proportion of students feeling well prepared to perform BLS increased from 30% to 90% (p < .001), and the level of fear of being first responder decreased from 6.8 ± 2.2 to 5.5 ± 2.4 (p < .001).

Conclusion
Knowledge of key areas of BLS is poor among high school students. One hands-on training session run by junior doctors appears sufficient to empower the students to act as first responders to OHCA.

Introduction
Early recognition, performance of cardiopulmonary resuscitation (CPR) and immediate activation of emergency medical services (EMS) are critical determinants of survival after out-of-hospital cardiac arrest (OHCA) [1]. Recent data from the Danish OHCA registry show an improvement in bystander CPR, increasing from 21% of cases in 2001 to 45% in 2010, with a concomitant improvement in overall 30-day survival from 4% in 2001 to 11% in 2010 [2]. However, the majority of OHCA victims still receive no bystander CPR, especially in the non-public areas where most OHCA occur, and delayed or absent bystander CPR thus contributes to the low survival [3,4]. Bystanders with previous CPR training are more likely to perform CPR [5]. Accordingly, the International Liaison Committee on Resuscitation and the American Heart Association (AHA) recommend that CPR training be implemented throughout the community and incorporated as a standard part of the school curriculum [6]. Over the past years, several methods to teach school children have been developed and tested, showing that BLS training is effective in children from the age of 4 years; however, the most efficient method is not well established [7]. Previous studies have shown that training should start at an early age, be repeated at regular intervals and be hands-on oriented, because children receiving only theoretical training perform poorly [8]. In spite of this knowledge, there is no consensus as to which method or material should be used to train students in BLS [6,9]. Additional barriers to implementing BLS in schools are limited resources and limited time in the curriculum [6].
The aim of the study was to evaluate Danish high school students' current BLS knowledge, and the effect of a single 45-minute BLS hands-on training session run by junior doctors on theoretical knowledge, self-assessed skills and self-perceived fears related to performing BLS.

Study design and participants
This was a prospective cohort study conducted in October 2012. Six hundred and fifty-one students were included, regardless of previous attendance of a CPR course. They represented all three high school levels (first to third year) at the public Cathedral High School in Aalborg, the fourth largest city in Denmark. Study participation was voluntary and no student declined to participate at baseline.

The questionnaire
The students answered an identical questionnaire immediately before and one week after the training session. The questionnaire covered history of prior BLS training, multiple-choice questions on BLS theory (items 1-8), self-assessed skills (item 9) and self-perceived fear of being first responder to a person with OHCA (item 10). Item 9 was evaluated on a 4-point scale: "Not able to perform BLS", "In doubt and would probably not help", "Know the theory but have no practical skills" and "Well prepared and would take action". Item 10 was evaluated on a visual analogue scale from zero (no fear) to ten (the worst imaginable fear). A supplementary qualitative description, in a single sentence, of their worst fear was encouraged. The questionnaire is available as supplementary online material in an English version translated from the original Danish version by an English-Danish correspondent (Additional file 1). The content of the questionnaire was validated through cognitive interviews of 10 lay persons to ensure that each item was understandable and the answers unambiguous. Test-retest reliability was examined in 32 students from another high school in the area, who did not receive the BLS training session, using two questionnaires at a one-week interval. No significant differences in any of the above items were observed.

The BLS training and data collection
The instructors were four doctors aged 27 to 30 years with less than one year of clinical experience since graduation from medical school. Generic teaching experience varied among the instructors, but none had prior BLS teaching experience. All the junior doctors had undergone the same training course in acute medicine within the last 1½ years of medical school, led by Aalborg University Hospital. The course comprises advanced life support training in accordance with current guidelines of the European Resuscitation Council (ERC). The training course for the high school students in this study was developed by two of the authors (ARA, CBL) based on a short protocol, in accordance with the ERC Guidelines for Resuscitation 2010. The students were trained in groups of 40-60, giving a student-to-instructor ratio that did not exceed 15:1. Each training session lasted 45 minutes and included theoretical aspects of BLS and hands-on training using mannequins. The students were divided into subgroups circulating between three skill stations with the themes "Breathing assessment", "Recognition of a cardiac arrest" and "How to perform CPR". An introduction to the automated external defibrillator (AED) was given to the whole group at the end of the lesson.

Statistical analysis
The statistical analysis was performed using Stata 11.2 (StataCorp, College Station, Texas, USA). A p-value < .05 was considered statistically significant.
Missing values in questionnaires of responders were imputed using multiple imputation by sex, age, previous BLS training and the students' answers in the rest of the questionnaire. Responders and non-responders at follow-up were compared using unpaired t-tests and chi-squared tests. For the responder subcohort, change over time in continuous variables was tested using paired t-tests, and change in dichotomous variables was tested using conditional logistic regression.

Demographics
All 651 students answered the baseline pretest questionnaire. Age ranged from 17 to 21 years, 17.5 ± 1.2 years (mean ± SD). Sixty-eight percent were women. Sixty-three percent (413/651) of the students had received prior BLS training, either in primary school, driving schools or sports clubs. The follow-up questionnaire was completed by 399 students (response rate 61%). There were no significant differences at baseline between responders and non-responders at follow-up regarding age, sex, prior BLS training, knowledge concerning BLS (questions 1-8), self-assessed BLS skills or level of self-perceived fear of being first responder to a cardiac arrest situation.

Theoretical knowledge
The proportions of correct answers to the multiple-choice items at baseline are shown in Table 1. Ninety-nine percent of all the students knew how to call EMS in case of a cardiac arrest, 28% knew how to evaluate whether an unconscious person has adequate breathing, and 57% knew what to do in a situation where the level of unconsciousness is uncertain. Sixty-six percent of the students knew the correct 30:2 compression-ventilation ratio during CPR. The odds ratios for a correct answer at follow-up are shown in Table 1. There were significant improvements for all but one multiple-choice item (p < .001); the exception was item 4, whom to call in case of a sudden loss of consciousness (p = .66), which had already been answered correctly at baseline by 99% of the students.

Self-assessed BLS skills
The figure illustrates the changes in self-assessed BLS skills (item 9) (Figure 1). At baseline, approximately one third of the students answered that they would not respond or were uncertain about how to respond in case of OHCA, but at follow-up 90% felt well prepared and would take action (p < .001).

Self-perceived fear of being first responder to a cardiac arrest situation
The level of fear of being a first responder to a person having a cardiac arrest decreased significantly from baseline to follow-up, with a decline in visual analog scale score from 6.8 ± 2.2 to 5.5 ± 2.4 (p < .001). The supplementary qualitative descriptions of their worst fear at baseline had three common themes: fear of doing something wrong, fear of being the cause of exacerbating the situation, and fear of being the cause of the person's death. The most common sentiment at baseline was the thought of panic. Only a minority of the answers showed that the students were afraid of being accused of murder or of feeling guilty if "things go wrong". Most of the students were afraid of misjudging the situation and of providing incorrect and inappropriate assistance. At follow-up the themes were similar to those at baseline; the students also still mentioned on numerous occasions that they feared the person would die.
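To make the paired pre/post comparison described in the statistical analysis concrete, here is a minimal sketch using synthetic visual-analog-scale scores (the study used Stata; this is an illustrative re-implementation with simulated data, not the authors' code or dataset):

```python
# Illustrative paired t-test on simulated fear scores (n, means and SDs
# loosely mirror the reported figures; the data are synthetic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 399  # responders at follow-up
baseline = np.clip(rng.normal(6.8, 2.2, n), 0, 10)        # fear VAS pre-training
follow_up = np.clip(baseline - rng.normal(1.3, 2.0, n), 0, 10)  # post-training

t, p = stats.ttest_rel(baseline, follow_up)  # paired t-test on the change
print(f"mean change {np.mean(baseline - follow_up):+.2f}, t={t:.2f}, p={p:.4g}")
```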
Discussion
The main findings from this study of high school students' BLS knowledge, skills and fears were that most high school students lacked knowledge regarding the first three steps of the chain of survival, had poor self-assessed BLS skills and had high self-perceived fear of being first responder, despite previous training. Notably, one 45-minute training session based on ERC guidelines, consisting of hands-on training and a short theoretical introduction run by junior doctors, had an impact on empowering the students as first responders to a cardiac arrest. This shows that even though the students had divergent a priori qualifications, as is likely to be the case in real-world settings, they increased their knowledge and self-assessed skills.

Theoretical knowledge and interventions on BLS in high school students
It is noteworthy that a significant proportion of students lacked knowledge regarding the first three links in the chain of survival despite previous training. Although our study was not designed to assess qualitative aspects of students' previous training and how these affected their current knowledge, our results are in accordance with previous findings that retaining CPR skills requires repetitive training [7].

Self-assessed BLS skills and self-perceived fears of being first responder
The high school students' self-assessed BLS skills and self-perceived capability to take action as the first responder increased significantly in the present study. There is inconsistency concerning self-confidence to perform CPR among high school students in similar studies. Meissner et al. found that only 27% of high school students dared to initiate CPR before the training session, which increased to 99% after a four-hour training day [8]. Parnell found that 84% of students would be willing to perform CPR on a family member and 64% on a stranger. The findings of a Norwegian study were that, despite good theoretical knowledge about handling a lifeless adult, self-reported confidence in having sufficient BLS knowledge was only modest [10]. The fears of being first responder in a cardiac arrest situation observed in this study reflect common reasons for not providing bystander CPR identified in other studies [11]. Unfortunately, our study was not designed to address fear as an independent topic, since fear was not a topic in the training course. Even though there was a change in the level of fear of our participants, many participants still reported the same themes of fear after the course. A more thorough analysis of our qualitative question was not possible owing to the study design. We acknowledge that this is an important aspect and that future studies should be designed to investigate how to decrease fear among high school students by including in BLS courses a brief discussion of the students' fears concerning basic life support.

Teaching high school students as a strategy to reach a broad audience
Even though bystander CPR is a well-established determinant of survival after OHCA, the proportion of cardiac arrest victims who receive bystander CPR remains low, especially after OHCA at home [2]. Both acquisition of skills and skill retention after conventional CPR training have been disappointing, and in this context, training children and young adults is valuable for three main reasons. First, as a long-term investment, since training the young will eventually lead to a whole generation of trained adults, as reported in Stavanger [12].
Secondly, schoolchildren are at an age when knowledge and skills are well retained [13]. Thirdly, although they account for only a small percentage of OHCA, children and young adults should be capable of acting as first responders at OHCA, as the majority occur in private homes. If the students share the acquired knowledge with their family and untrained friends, it indirectly introduces BLS training to a broader audience, as previously shown [8]. Finally, the positive attitude towards learning BLS among high school students [14] could be a critical determinant of the impact of these short training sessions and emphasizes the importance of seizing this golden opportunity to increase knowledge of BLS at an early time in life.

Implementation of BLS in schools
Even though the AHA recommends that CPR training of schoolchildren be mandatory, implementation and maintenance of teaching BLS within the school system is a challenge worldwide [6]. Several key aspects of achieving successful implementation remain unknown, such as the length and repetition schedule of training sessions, the content, the teacher (professional vs. non-professional instructor) and the methods or materials that should be used [15-17]. Previous studies have identified cost, limited time in the curriculum and instructor scheduling difficulties as the main barriers to implementation of BLS in schools [6,18]. Our results are valuable in this context, since our study used a new method in which voluntary junior doctors used standard ERC BLS guidelines to assess students' level of knowledge and skills and to carry out an efficient 45-minute hands-on training session, achieving a considerable increase in knowledge and self-assessed skills and a significant decrease in self-perceived fear. Further, the short time frame used in our study (one 45-minute lesson) is more realistic to include in school curricula than the four-hour sessions of previous studies [19]. Collectively, our results propose a new, simple and feasible method to teach BLS in schools that is shorter and cheaper than previous hands-on approaches and could more easily be repeated once a year [8,19].

Study limitations
In the present study, the follow-up questionnaire was handed out one week after baseline. The long-term effect of a single 45-minute training session on high school students therefore cannot be evaluated in this study. Furthermore, we did not evaluate the quality of the educational skills of the instructors; however, we only used junior doctors who were trained using the same protocol based on the ERC guidelines. There is some risk of selection bias because we only examined one high school, but for logistical reasons this was the only option. Aalborg Cathedral School represents a broad sample of students, because the students attending are enrolled from throughout the whole region of northern Denmark, including both rural areas and the city. Since this study represents a broad sample of students, we believe it provides good insight. Moreover, the responders and non-responders were comparable in age, gender, education level (high school), prior BLS training, knowledge concerning BLS, self-assessed skills and self-perceived fear of being first responder to a person with OHCA. Even though all the students enrolled on a voluntary basis, the dropout was 39% (254/651) at one-week follow-up.
Some of the reasons were students skipping class, illness among the students, and confusion among the teachers about when to hand out the follow-up questionnaires and where to hand them in.

Conclusion
BLS knowledge among high school students is poor, despite previous training. One 45-minute hands-on session run by junior doctors appears sufficient to increase BLS knowledge and empower high school students to act as first responders in case of cardiac arrest on a short time scale.

Additional file
Additional file 1: The questionnaire in an English version translated from the original Danish version by an English-Danish correspondent.
In Vivo Characteristics of Premixed Calcium Phosphate Cements When Implanted in Subcutaneous Tissues and Periodontal Bone Defects

Previous studies showed that water-free, premixed calcium phosphate cements (Pre-CPCs) exhibited longer hardening times and lower strengths than conventional CPCs, but were stable in the package. The materials hardened only after being delivered to a wet environment and formed hydroxyapatite as the only product. Pre-CPCs also demonstrated good washout resistance and excellent biocompatibility when implanted in subcutaneous tissues in rats. The present study evaluated characteristics of Pre-CPCs when implanted in subcutaneous tissues (Study I) and used for repairing surgically created two-wall periodontal defects (Study II). Pre-CPC pastes were prepared by combining CPC powders that consisted of CPC-1: Ca4(PO4)2O and CaHPO4, CPC-2: α-Ca3(PO4)2 and CaCO3 or CPC-3: DCPA and Ca(OH)2 with a glycerol at powder-to-liquid mass ratios of 3.5, 2.5, and 2.5, respectively. In each cement mixture, the Ca to P molar ratio was 1.67. The glycerol contained Na2HPO4 (30 mass %) and hydroxypropyl methylcellulose (0.55 %) to accelerate cement hardening and improve washout resistance, respectively. In Study I, the test materials were implanted subcutaneously in rats. Four weeks after the operation, the animals were sacrificed and histopathological observations were performed. The results showed that all of the implanted materials exhibited very slight or negligible inflammatory reactions in tissues contacted with the implants. In Study II, the mandibular premolar teeth of mature beagle dogs were extracted. One month later, two-wall periodontal bone defects were surgically created adjacent to the teeth of the mandibular bone. The defects were filled with the Pre-CPC pastes and the flaps replaced in the preoperative position. The dogs were sacrificed at 1, 3 and 6 months after surgery and sections of filled defects resected. Results showed that one month after surgery, the implanted Pre-CPC-1 paste was partially replaced by bone and was converted to bone at 6 months. The pockets filled with Pre-CPC-2 were completely covered by newly formed bone in 1 month. The Pre-CPC-2 was partially replaced by trabecular bone in 1 month and was completely replaced by bone in 6 months. Examination of 1 month and 3 month samples indicated that Pre-CPC-2 resorbed and was replaced by bone more rapidly than Pre-CPC-1. Both Pre-CPC pastes were highly osteoconductive. When implanted in periodontal defects, Pre-CPC-2 was replaced by bone more rapidly than Pre-CPC-1.
Previous studies showed that Pre-CPCs were stable in the package, had good washout resistance, and only hardened after being exposed to a wet environment [16]. Former studies also reported that Pre-CPCs showed excellent biocompatibility [17-19]. Even though Pre-CPCs generally have a longer hardening time and lower strength than the conventional CPC mixed with water, they have the following advantages: (1) they are ready to be used for clinical applications, and (2) they give the surgeon practically unlimited working time to apply the Pre-CPC to a desired site, because the cement begins to harden and to form hydroxyapatite only after being exposed to water from the surrounding tissues. Besides uncertainty about how glycerol behaves in the body, there have always been questions as to whether a Pre-CPC could maintain its original shape in the body. Therefore, this study investigated the in vivo characteristics of the Pre-CPCs when implanted in subcutaneous tissues in rats and when used to repair surgically created two-wall periodontal defects in dogs.

Cement Powders
CPC-1 consisted of tetracalcium phosphate (TTCP: Ca4(PO4)2O) (73 % mass fraction) and dicalcium phosphate anhydrous (DCPA: CaHPO4) (27 % mass fraction). CPC-2 consisted of α-tricalcium phosphate (α-TCP: α-Ca3(PO4)2) (90 % mass fraction) and CaCO3 (10 % mass fraction). CPC-3 consisted of DCPA (73 % mass fraction) and calcium hydroxide (Ca(OH)2) (27 % mass fraction). In each CPC powder, the Ca to P molar ratio was 1.67, the ratio found in stoichiometric hydroxyapatite (HA). TTCP was prepared by heating an equimolar mixture of commercially obtained DCPA (Baker Analytical Reagents, J. T. Baker Chemical Co., Phillipsburg, NJ) and CaCO3 (J. T. Baker Chemical Co.) at 1500 ºC for 6 h in a furnace, followed by quenching at room temperature. α-TCP was prepared by heating a mixture that contained 2 mol of DCPA and 1 mol of CaCO3 to 1500 ºC for 6 h and then quenching in air.
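As a quick arithmetic check (a sketch using approximate standard molar masses, which are assumptions of this example rather than values from the paper), the stated mass fractions do give the hydroxyapatite Ca/P molar ratio of about 1.67 for all three blends, and the P/L mass ratios translate into paste formulation masses as follows:

```python
# Hedged verification of the stated Ca/P molar ratios and P/L formulations.
M  = {"TTCP": 366.25, "DCPA": 136.06, "a-TCP": 310.18,
      "CaCO3": 100.09, "Ca(OH)2": 74.09}                    # g/mol, approximate
CA = {"TTCP": 4, "DCPA": 1, "a-TCP": 3, "CaCO3": 1, "Ca(OH)2": 1}  # Ca per formula unit
P  = {"TTCP": 2, "DCPA": 1, "a-TCP": 2, "CaCO3": 0, "Ca(OH)2": 0}  # P per formula unit

def ca_p_ratio(blend):
    """blend: {component: mass fraction}; returns the molar Ca/P of the mix."""
    ca = sum(f / M[c] * CA[c] for c, f in blend.items())
    p  = sum(f / M[c] * P[c]  for c, f in blend.items())
    return ca / p

print(ca_p_ratio({"TTCP": 0.73, "DCPA": 0.27}))     # CPC-1 -> ~1.67
print(ca_p_ratio({"a-TCP": 0.90, "CaCO3": 0.10}))   # CPC-2 -> ~1.67
print(ca_p_ratio({"DCPA": 0.73, "Ca(OH)2": 0.27}))  # CPC-3 -> ~1.68

def paste_masses(total_g, pl_ratio):
    """Powder and liquid masses for a paste at a given P/L mass ratio."""
    liquid = total_g / (pl_ratio + 1)
    return total_g - liquid, liquid

print(paste_masses(10.0, 3.5))  # Pre-CPC-1: ~7.78 g powder, ~2.22 g glycerol liquid
```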
The powders were ground individually in a planetary ball mill in cyclohexane, in 95 % volume fraction ethanol, or without a liquid to obtain the desired median particle sizes based on data from previous studies. The median particle sizes of TTCP and DCPA for CPC-1 were 17 μm and 1 μm, respectively. The median particle sizes of α-TCP, CaCO3 and Ca(OH)2 for CPC-2 and -3 were (4 to 5) μm. The particle sizes of the powders were measured using a centrifugal particle size analyzer (SA-CP3, Shimadzu, Kyoto, Japan) with an estimated standard uncertainty of 0.2 μm.

Preparation of Pre-CPCs
Pre-CPC-1, -2 and -3 were prepared by mixing CPC-1, -2 and -3 powder with the cement liquid at powder-to-liquid mass ratios (P/L ratios) of 3.5, 2.5 and 2.5, respectively. These ratios were chosen in order to produce pastes that exhibited workable consistencies.

Experimental Design of Study I (Biocompatibility)
This study was permitted by the Animal Experimentation Committee at Nihon University School of Dentistry and performed in the animal and cell culture laboratories at the Nihon University School of Dentistry. The experiments followed the "Guidelines for Animal Experimentation Committee at Nihon University School of Dentistry." The experimental procedures of the study are shown in Fig. 1. Each experimental material was tested in five adult Donryu rats with an average body weight of 200 g to 250 g. All experimental procedures on a given animal were completed as aseptically as possible. Each animal was anesthetized with an injection of pentobarbital sodium (Nembutal, Abbott Laboratories, North Chicago, IL, U.S.A.) at a dose of 1.5 mg/kg body mass. Under general anesthesia, an area of approximately 3 cm × 2 cm on the back of the rat was shaved and swabbed with 70 % volume fraction ethanol (Wako Pure Chemical Industries Ltd., Osaka, Japan). A 15 mm horizontal incision was made along the side of the backbone, and a skin pocket was created 20 mm to 25 mm away (Fig. 2). A total of four pockets were formed on each rat, and the pockets were separated by a distance of more than 40 mm. A cylinder-shaped sample (3 mm diameter and 4 mm length) of the premixed CPC was inserted into each subcutaneous pocket as shown in Fig. 3, and the pocket was then closed with interrupted sutures. The material sample had not yet hardened when it was inserted into the pocket. Four weeks after surgery, the animals were sacrificed and the tissues including the test materials were excised en bloc. Tissues were fixed in 10 % volume fraction neutralized-buffered formalin, decalcified with Plank-Rychlo solution and embedded in paraffin. Subsequently, the paraffin-embedded blocks of decalcified samples were cut into 6 μm sections and stained with hematoxylin and eosin. Histopathological features of each section were observed using an optical microscope (Vanox-S, Olympus, Tokyo, Japan).

Experimental Design of Study II (Osteoconductivity)
The results of the subcutaneous tissue reaction study (Study I) showed that each Pre-CPC produced negligible inflammatory reaction. The implanted material maintained its original graft shape and was encapsulated by extremely thin fibrous connective tissue. Our previous study showed that CPC-1 and -2, mixed with a 0.5 mol/L sodium phosphate solution, exhibited excellent biocompatibility and osteoconductivity. Therefore, Pre-CPC-1 and -2 were selected as the experimental materials in this study.
The study protocol was reviewed and approved by the Animal Experimentation Committee at Nihon University School of Dentistry at Matsudo, and the study was performed in the animal laboratory at Nihon University School of Dentistry at Matsudo. Mature (2 to 4 years old) beagle dogs were used in this study. The outline of the study is illustrated in Fig. 4. The investigation of each experimental material was carried out in three adult beagle dogs (average body weights approximately 8 kg to 12 kg). Before starting this study, both the left and right mandibular fourth premolar teeth were extracted to make enough space for the bone graft. All experimental procedures on each animal were completed without any interruption. The surgical procedures were performed under strict aseptic conditions. General anesthesia for each dog was administered by intravenous injection of pentobarbital. One month later, surgical procedures were performed on the designated teeth under general anesthesia supplemented by local administration of lidocaine-HCl (2 % mass fraction Xylocaine, Astra Japan Ltd., Fujisawa Pharmaceutical Co., Ltd., Osaka, Japan) to reduce hemorrhage at the surgical site. Intracrevicular incisions were made on the facial and both mesial and distal interproximal surfaces. Full-thickness envelope flaps were then reflected on the facial and interproximal surfaces, extended apically just past the mucogingival junction. Two-wall bone defects (4 mm width and depth) were created in the alveolar bone close to the mesial of the first molar, distal of the third premolar, mesial of the third premolar or distal of the second premolar (Fig. 5). Each bone defect was filled with temporary filling material (Caviton: G-C Corp., Tokyo, Japan) to the level of the neighboring bone to maintain the surgically created bony defect. Sutures were removed one week after the surgery. The temporary filling material was removed 4 weeks later. After an additional four weeks, a second surgical procedure was performed. Before surgery, clinical measurements (free gingival margin to the base of the pocket and free gingival margin to the cementoenamel junction) and the gingival index (inflammation scores [20]) were recorded for the experimental teeth. A mucoperiosteal flap was elevated after an incision in the gingival sulcus except in interproximal areas, where incisions were extended to the lingual line angle of each tooth to permit papillary reflection and were not made superior to the defect site. Full-thickness envelope flaps were then reflected on the facial and interproximal surfaces just past the mucogingival junction. Scaling and root planing were performed on the cervical area of the alveolar bone. The distance from the cementoenamel junction to the crest of the alveolar bone and to the bottom border of the bone defect was recorded in the mid-interproximal area of each experimental root. Each bone defect was randomly assigned as either an experimental or a control site. Experimental sites were treated with each Pre-CPC to fill the bony defect prior to flap closure (Fig. 6). The flaps were replaced in their preoperative position and sutured to secure complete coverage of the alveolar bone. Periodontal dressing and antibiotic medications were not provided. After surgery, daily plaque control was carried out using cotton balls moistened with saline. Two weeks after surgery, daily oral hygiene procedures were started using a soft nylon toothbrush moistened with 0.2 % chlorhexidine (Sigma Chemical Co.,
St. Louis, MO, U.S.A.). All dogs received a prophylaxis with fluoride phosphate prophylaxis paste (Prophy paste, Clean Chemical Sweden) once a week. The periodontal status was recorded as previously described for all three dogs at 1, 3 and 6 months after surgery.

Histological Preparation of the Specimens
One, 3 and 6 months after the operation, the animals were sacrificed and the tissues including the test materials were excised en bloc. The samples were fixed in 10 % neutralized buffered formalin, decalcified in Plank-Rychlo solution (Fujisawa Pharmaceutical Co., Ltd., Osaka, Japan) and embedded in paraffin. Subsequently, the paraffin-embedded blocks were mounted in a mesio-distal plane, serially sectioned at a thickness of 7 μm and stained with hematoxylin and eosin. The sectioned area in this experiment is shown in Fig. 7. Histopathological features of each section were observed using an optical microscope (Vanox-S, Olympus, Tokyo, Japan).

Results
All acronyms used in the animal studies are shown in Table 1.

Study I (Fig. 8)
Pre-CPC-1: The grafted material (GM) was surrounded by thin fibrous connective tissue (FCT) with very few inflammatory cells. Small numbers of foreign body giant cells were found adjacent to the material. The tissue reaction to the material was extremely mild.
Pre-CPC-2: Relatively thin FCT surrounded the material. Few inflammatory cells were observed around the material. Infiltrated connective tissues (ICT) were predominantly seen in the mass of the material. The tissue reaction to the material was very mild.
Pre-CPC-3: The material was surrounded by thin FCT and small numbers of inflammatory cells. Multinuclear giant cells (MGC) were found in the thin connective tissue. Accumulated granulation tissue (GT) with fibrous cells was observed around the CPC mass. The histopathological reactions to this material were basically similar to those to CPC-1 and -2.
All Pre-CPC implants were found to retain their original cylindrical shape, even though the CPC in the pocket had not yet hardened at the time of suturing. In general, all Pre-CPCs showed similar histopathological reactions. No differences were observed among the pastes prepared with the different CPC powders. Each experimental material was encapsulated by thin FCT with small numbers of foreign body giant cells. Some sections showed a few infiltrated cells adjacent to the material. Tissue reactions to all Pre-CPCs were very mild.

Study II (Fig. 9-11)
Clinical Findings
All dogs demonstrated various degrees of gingival inflammation at the second flap surgery. One week prior to being sacrificed, they showed a significant improvement in the gingival index. Before sacrifice, all dogs showed a healthy periodontal condition without any gingival recession or inflammation. There were no significant differences between the control and the experimental surgical sites in the clinical observations.

Histological Findings at One Month After Surgery (Fig. 9)
Pre-CPC-1: The grafted area (GA) was covered by relatively thick FCT. The grafted material (GM), which should have mostly converted to HA, was partially replaced by newly formed bone (NB) without any inflammatory reaction. Multinuclear giant cells (MGC), similar in appearance to osteoclasts, appeared around the resorbing front, where the material resorbed and was replaced by NB. Apical proliferation of the junctional epithelium (JE) was prevented at the crestal level of the instrumented root surface (IRS). Newly formed cementum (NC) was found on the apical side of the IRS.
Pre-CPC-2: No inflammatory reactions were observed in the GA. Woven bone (WB) had formed throughout the entire GA and some trabecular bone (TB) was also noted. NC formation was observed around the apical area of the IRS, and apical proliferation of the JE was prevented at the crestal level of the IRS. Bone formation in the Pre-CPC-2 GA was slightly faster than that in the Pre-CPC-1 GA.

Fig. 9. Study II, one month after surgery: histological features of the Pre-CPC-1 and Pre-CPC-2 grafted areas, as described above.

Histological Findings at Three Months After Surgery (Fig. 10)
Pre-CPC-1: No inflammatory reaction was observed in the GA. Most of the GM had converted to NB. The GA was completely covered with thin mature bone tissue with FCT. TB had formed throughout the GA, but clusters of GM were still present. NC was clearly formed, and JE proliferation was completely prevented at the crestal level of the IRS. The number of phagocytic cells was reduced compared with the 1-month histological features of Pre-CPC-1.
Pre-CPC-2: The GA was covered by relatively mature bone tissue with dense FCT. Some GM remained in the GA, but it was mostly replaced by relatively mature TB with Haversian lamellae (HL). NC had been generated along the entire IRS. The phagocytic response was decreased compared with the 1-month sample of Pre-CPC-2.

Fig. 10. Study II, three months after surgery: histological features of the Pre-CPC-1 and Pre-CPC-2 grafted areas, as described above.

Histological Findings at Six Months After Surgery (Fig. 11)
Pre-CPC-1: The GM was completely resorbed and converted to normal alveolar bone (AB). New bone had formed at the crest of the AB, with HL and osteocytes (OC). Osteoblastic activity was almost quiescent in the GA. Bone marrow (BM) had formed among the TB. NC was found along the entire IRS. A thin FCT layer, whose structure closely resembled a periodontal ligament-like structure (PLS), had been generated between the AB and the NC.
Pre-CPC-2: The defect was completely converted to normal AB with BM and HL and was covered with periosteum (PO) attached to FCT. NC had been generated along the entire IRS, and a PLS was clearly formed between the AB and the NC. The histological features were quite similar to those of Pre-CPC-1 at 6 months.
In general, both Pre-CPCs retained their shape integrity and restored the original alveolar bone contour after grafting of the two-wall periodontal bone defects. Defects filled with the Pre-CPCs were replaced by natural bone within 6 months after the surgery. Periodontal tissues including NC and a PLS were also gradually regenerated. The defect filled with Pre-CPC-2 showed relatively faster bone replacement compared with Pre-CPC-1.
Discussion
Reconstructive surgery for alveolar bone deficiencies, especially periodontal bone defects, can be performed with a number of techniques, including guided bone regeneration, in which an occlusive barrier membrane is placed between the connective tissues and the residual alveolar bone to create a space for new bone formation. However, it would be quite difficult to repair large defects, such as 1- or 2-wall periodontal bone defects, with either a membrane alone or in combination with bone grafting materials, including autogenous bone. These grafting materials do not have adequate properties, such as hardening, washout resistance and bioresorption, in the biological environment [16-19]. Therefore, it was difficult for a long time to surgically reconstruct alveolar bone defects. CPC and several other similar CPCs harden in 10 min when a phosphate solution is used as the liquid and form a resorbable HA crystalline scaffold as the final product [16]. On the other hand, CPCs have some difficulty maintaining the original grafted shape at defect sites when used as implant materials, because they do not have sufficient washout resistance and viscosity in body fluid during their hardening period [17-19]. Despite these properties, our previous studies reported that alveolar ridge augmentations and 3-wall periodontal bone defects reconstructed using CPC, without a barrier membrane, were replaced by natural bone within 6 months after surgery [10,14]. Pre-CPCs are stable in the package and have sufficient viscosity and excellent washout resistance in body fluids, so they harden only after delivery to defect sites, where glycerol-tissue fluid exchange occurs, and they can be prepared in advance under well-controlled conditions. An important handling property of a Pre-CPC is an adequate working time for the surgeon to place and shape the cement in the defect. Therefore, we assume that Pre-CPCs could be used to repair a variety of bone defects. The results obtained from this study showed that defects filled with either Pre-CPC-1 or -2 were gradually replaced by newly formed bone. The defect filled with Pre-CPC-2 showed significantly faster bone formation than that filled with Pre-CPC-1. Our former studies indicated that alkaline phosphatase (ALPase) activity, which is closely related to new bone formation, was enhanced significantly in the presence of either CPC-1 or CPC-2 [21-23]. The activity with CPC-2 was enhanced faster than with CPC-1. In addition, CPC-2 contained carbonate in its original constituents, so the final product of CPC-2 contained more carbonate apatite and had lower crystallinity than that of CPC-1 [1,24-26]. This poorly crystalline, carbonate-containing HA is easily resorbed by osteoclasts, so new bone formation could be expected to occur at the resorption site simultaneously. As a result of the above properties, Pre-CPC-1 and -2 showed excellent osteoconductivity, with faster bone formation occurring in the presence of CPC-2. Six months after the surgery, the defects filled with either Pre-CPC-1 or -2 were converted to natural bone without the use of any barrier membrane. The reconstructed area kept the shape in which the Pre-CPC was originally implanted. These results suggest that Pre-CPCs are not only useful as bone reconstruction materials, but may also be effective for other types of bone deficiencies.
Conclusion
Based on the results obtained from Study I, all Pre-CPCs were shown to be highly biocompatible and retained their original cylindrical shape in subcutaneous tissues, suggesting that these materials may be useful for bone graft applications. Pre-CPCs filled into surgically created two-wall periodontal bone defects were resorbed and converted to natural alveolar bone within 6 months after surgery. The 1- and 3-month results indicated that Pre-CPC-2 was resorbed and replaced by NB significantly faster than Pre-CPC-1. The faster implant-to-bone turnover may be attributed to the lower crystallinity and higher carbonate content of the HA formed in Pre-CPC-2. These results indicate that Pre-CPC-1 and -2 should be effective and suitable materials for large periodontal bone defects. Moreover, accelerated bone formation can be expected with Pre-CPC-2 when it is used as a bone graft material.
Melanin-templated rapid synthesis of silver nanostructures

Background
As potent antimicrobial agents, silver nanostructures have been used in nanosensors and nanomaterial-based assays for the detection of food-relevant analytes such as organic molecules, aromas, chemical contaminants, gases and foodborne pathogens. In addition, silver-based nanocomposites act as antimicrobials in food packaging materials. In this context, the food-grade melanin pigment extracted from the sponge-associated actinobacterium Nocardiopsis alba MSA10 and the melanin-mediated synthesis of silver nanostructures were studied. Based on the present findings, antimicrobial nanostructures can be developed against food pathogens for food industrial applications.

Results
Briefly, the sponge-associated actinobacterium N. alba MSA10 was screened and fermentation conditions were optimized for the production of melanin pigment. A Plackett-Burman design followed by a Box-Behnken design was used to optimize the concentrations of the most significant factors for improved melanin yield. The antioxidant potential, reductive capabilities and physicochemical properties of Nocardiopsis melanin were characterized. Optimum production of melanin was attained at pH 7.5, temperature 35°C, salinity 2.5%, sucrose 25 g/L and tyrosine 12.5 g/L under submerged fermentation conditions. A highest melanin production of 3.4 mg/ml was reached with the optimization using the Box-Behnken design. The purified melanin showed rapid reduction and stabilization of silver nanostructures. The melanin-mediated process produced uniform and stable silver nanostructures with broad-spectrum antimicrobial activity against food pathogens.

Conclusions
The melanin pigment produced by N. alba MSA10 can be used for the environmentally benign synthesis of silver nanostructures and can be useful for food packaging materials. The broad spectrum of activity of the silver nanostructures against food pathogens gives an insight into their potential applicability in food packaging materials and as antimicrobials for stored fruits and foods.

Background
Silver particles and nanostructures have long been used as effective antimicrobial agents in food and beverage storage. Silver-containing plastics have been incorporated in refrigerator liners and food storage containers [1-3]. The FDA has approved the use of silver-based particles for disinfection of food-contact materials [4]. Silver-based nanomaterials and nanocomposites can be devised for the easy detection of commonly found food adulterants, chemical contaminants and allergens, and of changes in response to environmental conditions. Cellulose pads incorporating silver nanoparticles are used to control food pathogens in packed beef meat and to reduce the microbial count in fresh-cut melon [5]. Apart from this, silver nanoparticles slow the ripening of stored fruits by catalyzing the destruction of ethylene gas, thereby increasing the shelf life of stored fruits [5]. Several studies have demonstrated the efficacy of packaging materials loaded with silver nanoparticles in combating microbial growth in foods [5-8]. Nanostructured antimicrobials have a higher surface area-to-volume ratio than their microscale counterparts, and their incorporation in food packaging systems is expected to be particularly efficient against microbial cells [9].
The development of a reliable green synthesis of stable, monodisperse, metallic silver nanostructures has been an important aspect of current nanotechnology research. The aggregation of silver nanostructures and the insufficient stability of their dispersions lead to loss of their special nanoscale properties. Researchers employ polymer-assisted fabrication routes and various chemical stabilizing agents (surfactants such as CTAB and SDS, and polymers such as PVP) to prevent the self-aggregation of nanostructures [10-12]. However, such chemical compounds are toxic and reduce biological applicability. The use of natural products such as biosurfactants, monosaccharides and plant extracts as enhancers and stabilizing agents for silver nanostructure synthesis has been extensively studied. Marine glycolipid biosurfactant-stabilized silver nanoparticles were synthesized by Brevibacterium casei MSA19 under solid-state fermentation using agro-industrial and industrial waste as substrate [13]. Apte et al. [14] studied L-DOPA-mediated synthesis of melanin by the fungus Yarrowia lipolytica, and the induced melanin was exploited in the synthesis of silver and gold nanostructures. In this study, a rapid, reliable approach was developed to produce uniform silver nanostructures using purified melanin from the marine actinobacterium Nocardiopsis alba MSA10. Melanin pigments are used as food colorants and nutritional supplements, which reflects the industrial need for their large-scale production as natural ingredients. Natural pigment production, especially from microorganisms, is emerging as an important field owing to wide acceptance in various industrial sectors [15]; such pigments can replace chemically synthesized pigments, which cause harmful effects in the natural environment [16]. Among microbial pigments, melanin has received considerable attention because of its useful biological activities, especially in the food and pharmaceutical industries. Melanins are high-molecular-weight pigments produced in microorganisms by oxidative polymerization of phenolic or indolic compounds, with free-radical generating and scavenging activity [17]. Based on chemical structure, properties and species affiliation, melanins are classified as allo-, pheo- and eumelanins. The black or brown eumelanins are produced by oxidation of tyrosine, through tyrosinase, to DOPA (o-dihydroxyphenylalanine) and dopachrome; further cyclization leads to 5,6-dihydroxyindole (DHI) or 5,6-dihydroxyindole-2-carboxylic acid (DHICA) [18]. The yellow-red pheomelanins are synthesized like eumelanins in the first step; the intermediate DOPA then undergoes cysteinylation, directly or mediated by glutathione, to form various derivatives of benzothiazines [19]. The third type, the allomelanins, is a heterogeneous group of polymers synthesized via the pentaketide pathway [20]. Brown pigments may also be produced from the L-tyrosine pathway via accumulation and autooxidation of intermediates of tyrosine catabolism [18]. Microbial melanin has a wide range of applications, including photoprotective, radioprotective, immunomodulating, antimicrobial and antitumour activities [21-23]. Actinobacteria are resilient bacteria found among culturable sponge microbes and are a current focus for bioactive leads from the marine environment [24]. Sponge-associated actinomycetes have wide applications as antiviral, antibacterial, antitumour, anthelminthic, insecticidal, immunomodulating and immunosuppressant agents and as food colorants [25].
Melanin-producing microorganisms are ubiquitous in nature; however, limited literature is available on actinobacterial melanin production under different culture conditions. Therefore, this study aimed to enhance the production of melanin from the marine actinobacterium N. alba MSA10 by optimizing various cultural and environmental parameters under submerged conditions, and to study melanin-mediated synthesis of silver nanostructures.

Screening and identification of melanin producers
The strain MSA10 was considered the most potent melanin producer among the isolates obtained from the sponge Dendrilla nigra. The MSA10 strain was Gram positive with a mycelial appearance under the phase contrast microscope, and produced white powdery colonies on actinomycetes isolation agar. It showed positive results in the indole, citrate utilization, urease and triple sugar iron tests and negative results in the methyl red, Voges-Proskauer and catalase tests. Based on the morphological, biochemical and phylogenetic analyses (UPGMA algorithm) and taxonomic affiliation (RDP-II), the isolate MSA10 was identified as Nocardiopsis alba MSA10. The 16S rRNA sequence was deposited in GenBank under accession number EU563352. The isolate MSA10 clustered exclusively with a pigment-producing Streptomyces strain and is also an efficient biosurfactant producer [26]. Melanin production by N. alba MSA10 was initiated at 72 h of incubation, when the medium turned light brown; the color then deepened at 96, 120 and 144 h to light brown, brown and dark brown, respectively. Melanin production depended on the biomass yield, and the highest yields of biomass and melanin were obtained at 144 h of incubation (Figure 1).

Formulation of fermentation media for melanin production
It is evident that different media constituents, such as carbon, nitrogen, metal ions and organic solvents, and environmental factors, such as pH, temperature and salinity, play a vital role in melanin production. The fermentation conditions and media constituents sucrose, tyrosine, temperature and salinity, identified as the most significant variables, were optimized for enhanced melanin yield. The correlation between melanin yield and the four critical control factors (variables) was analyzed by the Box-Behnken design, and a quadratic polynomial model (equation 1; its general form is sketched below) was obtained to explain the melanin yield in mg/ml (Y). The statistical significance of equation 1 was checked by the F-test, and the results of the ANOVA are shown in Table 1. The model F-value of 251.68 implies that the model is highly significant (<0.0001). The coefficient of determination (R²) was found to be 0.9960, which implies that 99.60% of the variation in melanin yield was attributed to the independent variables and only 0.40% of the total variation could not be explained by the model. The R² value found in this study, close to 1, shows that the developed model could effectively describe and increase melanin production (3.4 mg/ml). The 3D response surface plots show the effect of the medium components and fermentation conditions on melanin production (Figure 2). Each response surface curve was plotted with two factors varied at a time while the other two factors were kept at a fixed level. A higher melanin yield (3.4 mg/ml) was obtained with 12.5 g/l of tyrosine and 25 g/l of sucrose in the medium, maintaining the other parameters, salinity (2.5%), pH (7.5) and temperature (35°C), constant (Figure 2).
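The fitted coefficients of equation 1 are specific to this dataset and are not reproduced here; as a generic template (not the paper's fitted equation), a four-factor Box-Behnken response surface model takes the standard second-order form

\[
Y = \beta_0 + \sum_{i=1}^{4}\beta_i X_i + \sum_{i=1}^{4}\beta_{ii} X_i^{2} + \sum_{i<j}\beta_{ij} X_i X_j ,
\]

where \(Y\) is the melanin yield (mg/ml), the \(X_i\) are the coded levels of sucrose, tyrosine, temperature and salinity, and the \(\beta\) coefficients are estimated by least squares; model adequacy is then judged by the ANOVA F-test and R², as reported above.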
When the pH was below 7.5 (6.0-6.5) and the temperature above 35°C, the growth of N. alba MSA10 as well as melanin production declined drastically. Pigment production increased with increasing temperature up to 35°C, although the growth of N. alba MSA10 was optimal at 28-30°C. At pH 7.5, the growth of N. alba MSA10 and melanin production were found to be linear, suggesting that a near-neutral pH was optimal for higher biomass and melanin production.

Lights on melanin production
Light is considered an important environmental parameter for melanin production. The literature shows that pigments absorb light at particular wavelengths and emit different colors. In this study, the effects of various light sources (green, red and yellow light) on enhanced melanin production were investigated. Green light excitation resulted in the highest melanin production, with the formation of a dark brown color. Considerable pigment production was observed under red light; under a yellow light source there was no pigment production on the culture plate, but slight production was observed in the fermentation medium at 144 h of incubation (Figure 3).

Characterization of melanin pigment
The chromatogram showed a violet spot on the TLC plate with an Rf value of 0.74, corresponding to the melanin pigment. A strong peak at 220 nm was obtained in the UV-visible spectrum of Nocardiopsis melanin (data not shown). The colorimeter values L* (lightness, ranging from 0 to 100, dark to light), a* (red-green) and b* (yellow-blue) of the melanin reflected its dark brown color. In the FT-IR spectrum, the band at 3397 cm-1 corresponds to the OH groups of the polymeric structure, and the bands at 1638 and 1118 cm-1 are associated with primary amine N-H and primary amine C-N stretching vibrations of melanin, respectively. The band at 1385 cm-1 is assigned to methylene scissoring of C-H groups, and the band around 2077 cm-1 arises from carbonyl stretching vibrations. The TLC chromatogram and FT-IR spectrum analysis confirmed the melanin pigment produced by N. alba MSA10.

Physicochemical properties of purified melanin
The Nocardiopsis melanin dissolved immediately in alkaline water and in hexane, compared with water at room temperature. A precipitate formed when the melanin was allowed to dissolve in ethanol, methanol and HCl. It remained insoluble in ether, chloroform and ethyl acetate. Nocardiopsis melanin was stable over a range of temperatures (20-100°C), even for 3 h (Figure 4), and under different light conditions including UV, natural sunlight and complete darkness. The stability of melanin tested at different pH values (3-12) showed slight variation in the absorption spectrum scanned at 190-220 nm (data not shown). A strong peak at 215 nm was observed (peak value 3.9) at alkaline pH (9, 10 and 12), indicating the relative stability of melanin under alkaline conditions compared with neutral and acidic conditions. A similar water-solubility behavior of melanin has been reported for a mutant strain of Bacillus thuringiensis [27]. The other physicochemical properties are similar to those of melanin obtained from Osmanthus fragrans seeds [28].

Antioxidant activity and reducing power of melanin
The antioxidant assay is based on the reduction of Mo(VI) to Mo(V) by melanin, with the formation of a green phosphomolybdenum complex at different temperatures.
Even though the green complex formation takes place at room temperature, maximum phosphomolybdenum complex formation increased with increasing temperature, at 90 and 180°C. Figure 5 shows the antioxidant property exhibited by Nocardiopsis melanin. Similar results were obtained with melanin from the berry of Cinnamomum burmannii and from Osmanthus fragrans [29]. The reducing capability of Nocardiopsis melanin, from Fe³⁺ to Fe²⁺, was investigated against the standard BHT (Figure 6), and the results evidenced and validated its antioxidant property. The presence of antioxidant substances enhances the reduction of the Fe³⁺/ferricyanide complex to the Fe²⁺ form, which can be monitored at 700 nm [30].

(Figure 3 caption: green light on tyrosine broth (C1) and tyrosine agar (C2) produced dark brown pigment; red light produced brown pigment in tyrosine broth (D1) and tyrosine agar (D2); yellow light gave light brown pigment in tyrosine broth (E1) at 144 h of incubation but no pigment on tyrosine agar (E2); normal light on tyrosine broth (F1) and tyrosine agar (F2) produced dark brown pigment.)

Melanin mediated synthesis of silver nanostructures and antimicrobial assay
The synthesis of melanin-mediated silver nanostructures was confirmed by the appearance of strong peaks in the UV-visible spectra at 420-460 nm. With the increase in temperature (100°C), stable and rapid synthesis of similarly sized particles took place (Figures 7A and 8B). The UV-visible synthesis pattern at different temperatures is depicted in Figure 7B. It is evident from the UV-absorbance spectrum that synthesis was most effective at 100°C. Notably, the temperature stability testing of Nocardiopsis melanin described above showed stability at 100°C over 3 h. The antioxidant and reductive capabilities of the melanin compound enable the rapid synthesis of silver nanostructures without the addition of any capping agent; thus, melanin acts as both the reducing and the capping agent in silver nanostructure synthesis. Synthesis at various time intervals showed that an incubation time of 30 min gave more stable particles than 0, 10, and 20 min (Figure 7B). The FT-IR spectrum of the melanin-mediated silver nanostructures (Figure 9) showed band shifts consistent with melanin capping (detailed below), and TEM analysis showed the particles to be spherical in shape and well dispersed in aqueous medium, with particle sizes ranging from 20-50 nm. The melanin-silver nanoparticles showed antimicrobial activity against all food pathogens tested (Figure 10), with the highest activity against B. cereus (140 mm²), followed by P. fragi and E. coli (120 mm² each).

Discussion
Industrial production of colorants from microorganisms is attractive owing to factors such as ease of availability, ease of culturing, higher pigment production, and the potential of microbes to be genetically manipulated. Isolation of new strains is still of particular interest because of the necessity to obtain microorganisms with characteristics suitable for submerged cultivation. Recently, sponge-associated marine bacteria have been considered a potential source of food-grade pigments [31]. The production of melanin by N. alba MSA10 was attributed to the supplementation of tyrosine in the production medium via the tyrosinase enzyme. The formation of dopachrome (red coloration) and an OD of 0.148 in the tyrosinase assay confirmed the tyrosinase activity of N. alba MSA10.
The strain N. alba MSA10 utilized up to 12.5 g/l of tyrosine, but on further addition up to 20 g/l, the rate of melanin production declined. This shows that the strain N. alba MSA10 produced melanin through the mediation of tyrosinase. According to Williams [32], about one third of the taxa of the genus Streptomyces produce melanin. In strains including Streptomyces antibioticus, S. glaucescens, and S. lavendulae, the tyrosinase gene for melanin production has been cloned, sequenced, and recombinantly expressed; the protein has sequence similarity to mammalian tyrosinase [33,34]. Melanin-like pigments formed from L-tyrosine via different melanogenic pathways in S. avermitilis [35], Xanthomonas campestris [36], Shewanella colwelliana [37], and Vibrio cholerae [38] have been well documented. Sucrose (25 g/l) as the carbon source significantly increased melanin production, up to 3.4 mg/ml, followed by glucose as an alternative carbon source in N. alba MSA10. To date, there is no report on the production of melanin from sucrose as a sole carbon source. The red pigment producer Paecilomyces sinclarii showed maximum mycelial growth with sucrose as the carbon source, even though its highest pigment production was attributed to a soluble starch medium [39]. The stimulatory effects of various nitrogen sources, including peptone, beef extract, yeast extract, urea, and ammonium nitrate, were tested by the Plackett-Burman experimental design, and significant effects on melanin production were found for beef extract and ammonium nitrate. This shows that only a trace amount of the nitrogen source was utilized by N. alba MSA10 for melanin production, as the amino acid tyrosine mediates the melanin synthesis pathway. The strain MSA10 utilized a considerable level of nitrogen sources for growth and mycelial development; however, melanin production was enhanced by the addition of tyrosine to the production medium. The strain grew optimally up to 3.5% NaCl, and the highest melanin production (3.4 mg/ml) was obtained at 2.5% salinity; on further increasing the salinity, melanin production decreased. Melanin production by N. alba MSA10 was highest at 35°C and pH 7.5. The highest yield of pigment from Monascus was reported at 30°C [40]. An initial pH of 6 and a temperature of 32°C increased pigment production by Monascus sp. [41]. Pigment production by Monascus purpureus under various light sources was characterized by Babitha et al. [42], who described that red light has little effect on growth and pigment production compared with green and blue light sources, which probably inhibit pigment production, even though there is a significant increase in biomass under green light. Despite the importance of the influence of light on pigment production, as investigated in Monascus purpureus [42], little has been determined for actinomycete melanin. Therefore, the strain N. alba MSA10 may be the first record among the actinomycetes of melanin production under illumination with a green light source. The predicted melanin yield was close to the actual melanin yield, and the production rate increased onefold over the wild strain N. alba MSA10. This reveals that the generated Box-Behnken design captured the interactions and actual relationships between the critical control factors. The RSM-based experiments showed that N. alba MSA10 has high melanin (3.4 mg/ml) productivity potential.
The FT-IR absorbance bands of Nocardiopsis melanin, ranging from 3400 cm⁻¹ to 674 cm⁻¹, had a high degree of similarity to the BC58 melanin, standard Sigma melanin [43], and synthetic pyomelanin and pyomelanin extracted from Aspergillus fumigatus [44]. The shifting of the bands related to the primary amine (N-H), carboxylate group, and C-O stretch vibrations clearly evidenced that Nocardiopsis melanin reduces silver nitrate and at the same time stabilizes the synthesized silver nanostructures. The stability of the melanin-mediated silver nanostructures was determined by allowing the synthesized particles to stand for 3 months at room temperature. The color intensity of the silver particles increased with aging, and no aggregation was observed over the 3-month period. The free amine or carboxylate groups of proteins can bind to silver particles [45]. The interaction of melanin with metal ions, proteins [46], and double-stranded DNA [47] has been extensively studied. The melanin-mediated silver nanostructures were found to be most effective against food pathogens such as B. cereus, P. fragi, and E. coli. Thereby, the incorporation of melanin-mediated synthesized silver nanostructures in food packaging materials could effectively inhibit the growth of food pathogens and increase the shelf life of packed food products. Nanomaterials are being explored for their promising roles in the food industry, such as providing longer shelf life for foods, better barrier properties, improved heat resistance and temperature control, and antimicrobial and fungal protection [48]. Silver nanoparticles that act as antibacterial agents or nanoclay coatings are currently used in food packaging [49]. Future studies can focus on the rapid formation of differently shaped silver nanostructures under optimal conditions with melanin as the reducing and capping agent. Size- and shape-controlled silver nanostructures have many positive attributes, such as good conductivity, chemical stability, and catalytic and antimicrobial activity, that make them suitable for many practical food packaging applications. The melanin-mediated silver nanostructures can be incorporated in food packaging materials, and the efficacy of melanin-silver nano-conjugates on the shelf life of packed food products needs to be investigated.

Conclusion
The melanin pigment was successfully purified and characterized from N. alba MSA10. The cultural conditions and environmental factors for enhanced melanin yield were optimized through an RSM Box-Behnken design. The purified melanin was used to synthesize and stabilize silver nanostructures in vitro. The antioxidant activity, reducing power, and physicochemical properties of Nocardiopsis melanin were well characterized. The antioxidant, antimicrobial, and natural coloring potential of Nocardiopsis melanin supports its use as a food additive, which could significantly reduce the usage of artificial or synthetic colorants and antioxidants. The UV-protective role, ability to withstand higher temperatures, stability under alkaline conditions, and water solubility of Nocardiopsis melanin broaden its applications in the food, cosmetics, and biomedical industries. Thus, the synthesis and stabilization of silver nanostructures by Nocardiopsis melanin demonstrate the metal-interacting nature of the pigment. Furthermore, its antibacterial properties against food pathogens would facilitate applicability in the food processing and food packaging industries.
Isolation, screening and identification of melanin producing marine actinobacterium
The marine actinobacteria were isolated from the marine sponge Dendrilla nigra as described by Selvin et al. [50]. The isolated actinobacteria were screened for melanin production on tyrosine agar medium (per liter: peptone 5 g, sodium chloride 20 g, beef extract 1.5 g, yeast extract 1.5 g, tyrosine 10 g, agar 20 g; pH 7.3) and were incubated at 30-35°C for 6-7 days. Pigment production was confirmed by the formation of a brownish color around the colonies. The melanin-producing strains were identified based on morphological, biochemical, and phylogenetic analysis [50].

Fermentation by shake flask culture
Melanin production was carried out in five sets of 250 ml Erlenmeyer flasks under shake flask culture, each containing 100 ml of tyrosine medium. The culture flasks were incubated at 35°C for 7 days on a rotary shaker (Oasis) at 200 rpm. Samples were removed periodically after the initial color change for determination of biomass and melanin yield. The biomass yield was estimated by washing the cells with phosphate buffered saline (per liter: NaCl 8 g, KCl 0.20 g, Na₂HPO₄ 1.44 g, KH₂PO₄ 0.24 g; pH 7.4) and drying at 50-60°C for 2 h. The melanin supernatant was first adjusted to pH 9 with 10 N NaOH to ensure polymerization and then adjusted to pH 3 with 5 N HCl to precipitate the melanin. The precipitated melanin was centrifuged at 10,000 rpm for 15 min (Eppendorf), washed thrice with deionized water, and lyophilized for dry weight determination.

Formulation of fermentation media for melanin production
To formulate media with various concentrations of constituents for melanin production by MSA10, different carbon and nitrogen sources, metal ions, and organic solvents were used. The carbon sources used in this study were 20 g/L of glucose, dextrose, sucrose, mannitol, or galactose. The organic nitrogen sources were 15 g/L of peptone, yeast extract, or beef extract, and the inorganic nitrogen sources urea and ammonium nitrate were used at a concentration of 100 mg/L. The effect of pH on melanin pigment production was studied in shake flask cultures at different initial pH values (4-10). The effect of temperature on pigment production was determined at different incubation temperatures (25-60°C). The NaCl requirement for pigment production was optimized with 0.5 to 3.5% NaCl supplementation. Different metal ions, CuSO₄, FeSO₄, MgSO₄, MnCl₂, and MnSO₄, were added to tyrosine broth at 100 mg/L to determine the effect of metal ions.

Experimental design and statistical analysis
For all experiments, fermentation was conducted in 500 ml Erlenmeyer flasks containing the different media constituents for melanin production. All experiments were carried out in triplicate, and the final melanin yield was taken as the response (Y). A Box-Behnken experimental design with four variables (A, B, C, and D: sucrose, tyrosine, temperature, and salinity, respectively) at three levels, high (+), middle (0), and low (−), was employed to optimize the fermentation conditions and thereby obtain the maximum melanin yield. The experimental design with four variables is summarized in Table 2.
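To make the design concrete, the following sketch enumerates the coded Box-Behnken runs for four factors and maps them to the actual levels given in the Table 2 footnote (sucrose 20/25/30 g/l; tyrosine 5/12.5/20 g/l; temperature 20/35/50°C; salinity 1.5/2.5/3.5%). The construction and the number of center replicates are illustrative; the study's actual run order and replication are not reported here.

```python
from itertools import combinations
import numpy as np

# Actual levels for coded -1 / 0 / +1 (from the Table 2 footnote).
levels = {
    "sucrose_g_per_l":  (20.0, 25.0, 30.0),
    "tyrosine_g_per_l": (5.0, 12.5, 20.0),
    "temperature_C":    (20.0, 35.0, 50.0),
    "salinity_percent": (1.5, 2.5, 3.5),
}
names = list(levels)

def box_behnken(k, centers=3):
    """Coded Box-Behnken design: +/-1 on every factor pair, others at 0,
    plus a few center-point replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * centers
    return np.array(runs, dtype=float)

def decode(coded):
    """Map a coded row (-1/0/+1) to actual experimental settings."""
    return {n: levels[n][int(c) + 1] for n, c in zip(names, coded)}

design = box_behnken(4)
print(f"{len(design)} runs; first run -> {decode(design[0])}")
```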
Based on the Plackett-Burman experimental design, the most significant variables, sucrose, tyrosine, temperature, and salinity, were identified from the 11 variables analyzed: glucose, sucrose, yeast extract, mannitol, tyrosine, ammonium nitrate, ferrous sulphate, pH, temperature, salinity, and inoculum size (data not shown). The experimental data were analyzed using the software Design Expert 8.0.4.1 trial version (Stat-Ease, Inc, USA).

Light source on melanin production
The effect of light on melanin production by MSA10 was studied by passing light of different wavelengths, red (620-750 nm), blue (450-475 nm), and green (495-570 nm), over the fermentation medium. The culture flasks were exposed to a light intensity of 32 W m⁻² for 7 days.

Assay for tyrosinase activity
Tyrosinase activity was assessed by growing the MSA10 isolate in glutamate medium [51]; 2 ml of culture supernatant was mixed with 2 ml of 0.1 M phosphate buffer (pH 5.9), and finally 1 ml of DOPA (10 mM) was added. The reaction mixture was incubated at 37°C for 5 min. The red coloration resulting from dopachrome formation was observed and read spectrophotometrically at 475 nm (PG Instruments).

Characterization of melanin pigment
The cell-free supernatant was collected from the fermented broth by centrifugation at 10,000 g for 15 min (Eppendorf 5804 R). The supernatant was filtered through Whatman No. 1 filter paper to remove residual cell debris. The initial purification of melanin was performed according to Wan et al. [20]. Briefly, the melanin supernatant was first adjusted to pH 9 with 10 N NaOH to ensure polymerization and then adjusted to pH 3 with 5 N HCl to precipitate the melanin. The precipitated melanin was centrifuged, washed thrice with deionized water, and lyophilized for further use. The absorbance spectrum of the melanin produced by MSA10 was measured with a UV/VIS spectrophotometer (PG Instruments) over the wavelength range 190 to 500 nm. The color intensity of the melanin was measured with a CR-300 colorimeter using the HunterLab color system; the L* (lightness, ranging from 0-100, dark-light), a* (red-green), and b* (yellow-blue) values were determined. The lyophilized melanin pigment was spotted on a TLC plate, and the chromatogram was developed with the solvent system n-butanol:acetic acid:water (70:20:10). After drying, the pigment spot was sprayed with ninhydrin. The TLC-purified pigment was applied to a column of DEAE-cellulose (Bio-Rad, 1 × 30 cm) that had been equilibrated with 25 mM Tris-HCl buffer (pH 8.6) containing 50 mM sodium chloride. The column was eluted at a flow rate of 100 ml/h with a 1:1 volume gradient from 0.1 M to 2 M NaCl in the same buffer.

Physico-chemical properties of the melanin
The physicochemical properties of Nocardiopsis melanin were analyzed according to Wang et al. [21]. The solubility of the purified melanin was checked by adding 0.05 g of melanin to 10 ml of water, aqueous acid, alkali (Na₂CO₃ or NaOH solution), or organic solvents such as chloroform, ethyl acetate, ethanol, methanol, acetic acid, petroleum ether, or hexane, with stirring at 25°C for 1 h; the mixtures were then filtered and the absorbance of the solutions recorded spectrophotometrically at 220 nm. The temperature stability of the melanin pigment was measured after treatment at various temperatures in a thermostatically controlled water bath at 20, 40, 60, 80, and 100°C for 3 h, after which the absorbance of the solutions was recorded at 220 nm.
Light stability of melanin was assessed by holding the melanin solution (5 mg/ml) under natural light, in a dark place, and under ultraviolet light at a distance of 30 cm for two days; at every 12 h interval, the maximum absorbance was measured at 220 nm. The pH stability was assessed by adjusting the melanin solution (5 mg/ml) to a range of pH values (3, 4, 6, 7, 9, 10, and 12) with 0.5 N NaOH and HCl. All samples were held for 30 min at 25°C, and the absorption spectrum (190-220 nm) was scanned.

(Table 2 coded levels: A: sucrose, −1 (20 g/l), 0 (25 g/l), +1 (30 g/l); B: tyrosine, −1 (5 g/l), 0 (12.5 g/l), +1 (20 g/l); C: temperature, −1 (20°C), 0 (35°C), +1 (50°C); D: salinity, −1 (1.5%), 0 (2.5%), +1 (3.5%).)

Determination of antioxidant activity and reducing power of melanin
The antioxidant activity of Nocardiopsis melanin was determined by a standard spectroscopic method [52]. Briefly, aliquots of 2 ml of melanin solution at different concentrations (0.5, 1, and 1.5 mg/ml), prepared in phosphate buffer (0.2 M, pH 6.6), were mixed with 2 ml of reagent solution (0.6 M sulfuric acid, 28 mM sodium phosphate, and 4 mM ammonium molybdate). The tubes were capped and incubated in a thermal block at 95°C for 120 min, and every 30 min the absorbance of the mixture was measured at 695 nm against a blank. The reducing power of the melanin pigment was determined by a standard method [53]. Briefly, different concentrations of melanin were mixed with phosphate buffer (2.5 ml, 0.2 M, pH 6.6) and potassium ferricyanide [K₃Fe(CN)₆] (2.5 ml, 1%). The mixture was incubated at 50°C for 20 min. Then 2.5 ml of TCA (10%) was added, and the mixture was centrifuged at 3000 rpm for 10 min. The supernatant (1.0 ml) was mixed with distilled water (7.0 ml) and FeCl₃ (0.5 ml, 0.1%), and the absorbance was measured at 700 nm. Butylated hydroxytoluene (BHT, in ethanol solution) was used as the standard, and the value obtained was used to compare and interpret the results for melanin.

Synthesis of melanin mediated nanostructures by boiling method
Silver nanostructures were synthesized in vitro by adding 10 ml of purified melanin solution (20 μg/ml) to 40 ml of 1 mM AgNO₃ (Sigma) and stirring vigorously for 5 minutes. The mixture was incubated at 60°C for 30 min. Melanin and AgNO₃ alone were maintained separately as controls. Silver nanostructure synthesis over a temperature range of 40-100°C and at different time intervals (0-30 min) was studied at 1 mM AgNO₃. The nanostructures were then characterized by UV-vis spectrophotometry (PG Instruments), FT-IR spectroscopy (Spectrum RX1), and TEM analysis. TEM measurements were performed on a TECHNAI 10 PHILIPS instrument operating at an accelerating voltage of 80 kV.

Antimicrobial assay of melanin and silver nanostructures against food pathogens
The silver nanostructures and the column-purified melanin compound were tested for antimicrobial activity using the well diffusion method, and the area of the halo was measured [54]. The synthesized nanostructures were tested against common food pathogens: Bacillus subtilis (MTCC 1305), Bacillus cereus (MTCC 1307), Staphylococcus aureus (MTCC 2940), Escherichia coli (MTCC 739), Vibrio cholerae (MTCC 3906), Vibrio parahaemolyticus (MTCC 451), Vibrio vulnificus (MTCC 1145), Pseudomonas fragi (MTCC 2458), and Salmonella typhi (MTCC 734). These were cultured on Mueller-Hinton agar (Himedia). Wells were made with a sterile steel cork borer (1 cm diameter), 50 μl of purified melanin or silver nanostructures was added to the wells, and the plates were incubated at 30°C for 24 h.
After incubation, the clear halo was measured and the area of inhibition in mm² was calculated.
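Since the assay reports inhibition as an area rather than a diameter, the arithmetic is worth making explicit: the halo disc area minus the well area gives the net zone of inhibition. A minimal sketch follows; the 10 mm well diameter comes from the cork borer described above, while the example halo diameter is hypothetical.

```python
import math

WELL_DIAMETER_MM = 10.0  # 1 cm cork borer, as in the assay above

def inhibition_area_mm2(halo_diameter_mm: float) -> float:
    """Net zone of inhibition: halo disc area minus the well area."""
    halo = math.pi * (halo_diameter_mm / 2) ** 2
    well = math.pi * (WELL_DIAMETER_MM / 2) ** 2
    return halo - well

# Hypothetical measurement for illustration only.
print(f"{inhibition_area_mm2(15.0):.1f} mm^2")  # ~98.2 mm^2
```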
2017-07-09T17:53:54.847Z
2014-05-01T00:00:00.000
{ "year": 2014, "sha1": "1b790c8ad089a185f2c082892f4738c3d21bd897", "oa_license": "CCBY", "oa_url": "https://jnanobiotechnology.biomedcentral.com/track/pdf/10.1186/1477-3155-12-18", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6c04b50cc5f61cc10843f02dd46b446a623a3b68", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
203836
pes2o/s2orc
v3-fos-license
The topical 5% lidocaine medicated plaster in localized neuropathic pain: a reappraisal of the clinical evidence
Topical 5% lidocaine medicated plasters represent a well-established first-line option for the treatment of peripheral localized neuropathic pain (LNP). This review provides an updated overview of the clinical evidence (randomized, controlled, and open-label clinical studies, real-life daily clinical practice, and case series). The 5% lidocaine medicated plaster effectively provides pain relief in postherpetic neuralgia, and data from a large open-label controlled study indicate that the 5% lidocaine medicated plaster is as effective as systemic pregabalin in postherpetic neuralgia and painful diabetic polyneuropathy, but with an improved tolerability profile. Additionally, improved analgesia and fewer side effects were experienced by patients treated concurrently with the 5% lidocaine medicated plaster and pregabalin, further demonstrating the value of multimodal analgesia in LNP. The 5% lidocaine medicated plaster provides continued benefit after long-term (≤7 years) use and is also effective in various other LNP conditions. Minor application-site reactions are the most common adverse events associated with the 5% lidocaine medicated plaster; there is minimal risk of systemic adverse events and drug-drug interactions. Although further well-controlled studies are warranted, the 5% lidocaine medicated plaster is efficacious and safe in LNP and may have particular clinical benefit in elderly and/or medically compromised patients because of the low incidence of adverse events.

Introduction
Neuropathic pain, one of the underlying causes of chronic pain, may result from a lesion or a disease of the somatosensory system. 1 Depending on the site of the lesion within the nervous system, the origin of neuropathic pain can be either central or peripheral. 2,3 Although prevalence estimates vary, neuropathic pain is reported to affect up to ∼18% of the population in developed countries, 4 with up to ∼60% of patients presenting with localized symptoms (localized neuropathic pain [LNP]). 5,6 Based on the International Association for the Study of Pain definition of neuropathic pain, LNP is defined as a type of neuropathic pain that is "characterized by consistent and circumscribed area(s) of maximum pain associated with negative or positive sensory signs and/or spontaneous symptoms characteristic of neuropathic pain". 7 Common LNP conditions, predominantly occurring in elderly individuals, include postherpetic neuralgia (PHN), diabetic polyneuropathy (DPN), and neuropathic postoperative pain. 2,[8][9][10] Neuropathic pain conditions can be debilitating, with a serious negative impact on patient functioning, daily activities, and overall quality of life (QoL). 11,12 The management of neuropathic pain is complex and multidisciplinary, requiring thorough physician knowledge of the various underlying pain mechanisms involved, the pharmacological options available for optimal pain management, and the individual needs of the patient (eg, elderly, receiving multiple medications). 13 Nevertheless, despite the availability of numerous management guidelines, many patients do not receive adequate pain management, and many are not satisfied with their treatment. 13 Pharmacological treatment options include the topical 5% lidocaine medicated plaster, tricyclic antidepressants, serotonin-norepinephrine reuptake inhibitors, gabapentin and pregabalin, and opioids. 8,[13][14][15]
As discussed previously, 14 the 5% lidocaine medicated plaster (a 10 cm×14 cm adhesive plaster, containing 700 mg [5% w/w] lidocaine; Versatis®, Grünenthal) has a dual mode of action, providing a mechanical barrier effect and a pharmacological action via voltage-gated sodium channel blockade as a direct result of lidocaine action. Numerous reviews and clinical guidelines recommend the topical 5% lidocaine medicated plaster as a first-line option for LNP, [16][17][18][19][20][21][22][23][24][25][26][27][28] with the majority of clinical evidence available for patients with PHN. However, because of differences in data analysis, and without significant changes in the available data in the last 5 years (see "Discussion" section), recommendations are not always aligned. 29 The topical 5% lidocaine medicated plaster is approved in ∼50 countries worldwide for the symptomatic relief of neuropathic pain associated with previous herpes zoster infection; in nine of these countries, it is also approved for the treatment of LNP. It is estimated that, since the first marketing approval in 1999 and up to June 2014, the topical 5% lidocaine medicated plaster has been prescribed to ∼20 million patients worldwide. 30 This article presents an updated narrative appraisal of the clinical evidence (efficacy and safety in clinical trials, in addition to extensive experience gained in daily clinical practice) with the topical 5% lidocaine medicated plaster in LNP, focusing primarily on its use in patients with PHN and DPN and presenting a brief overview of recent evidence in other LNP conditions. In order to provide a reappraisal of the clinical evidence for the use of the 5% lidocaine medicated plaster in the treatment of LNP conditions, all efficacy and safety studies (randomized, controlled, or open label with a well-described methodology), case reports, and observational studies on the 5% lidocaine medicated plaster were retrieved from a PubMed literature search (1960 to September 30, 2015). Additional references were identified from the reference lists of published articles. Search terms were "lidocaine" and ("patch" or "topical") or "lidocaine medicated plaster". Inclusion of studies was based mainly on the methods section of the trials. If available, large, well-controlled trials with appropriate statistical methodology were preferred.

Clinical evidence with the topical 5% lidocaine medicated plaster in LNP
PHN or painful DPN
PHN is the most common chronic complication of the reactivation of the varicella zoster virus that results in shingles, manifesting as LNP, with ∼20% of patients with herpes zoster reporting some pain at 3 months after the onset of symptoms. The frequencies of herpes zoster infection and PHN increase with age. 9 Painful DPN, a common chronic complication that occurs in up to ∼20% of patients with diabetes, is associated with a significant negative impact on the patient's QoL. [31][32][33] Several articles have previously reviewed clinical trials in which the topical 5% lidocaine medicated plaster was administered to patients with localized PHN 14,27,34 or DPN. 14,35 An overview of topical 5% lidocaine medicated plaster clinical trials is provided here for completeness, in addition to a review of more recent experience gained in daily clinical practice and in long-term studies.
One of the earliest clinical trials to demonstrate the efficacy of the 5% lidocaine medicated plaster was a four-session (12 hours each session), randomized, double-blind, vehicle-controlled study in 35 patients with established PHN affecting the torso or extremities. 36 Lidocaine plasters were applied in two of the four sessions, a vehicle plaster in one session, and the remaining session was a no-treatment observation session. The 5% lidocaine medicated plasters significantly (P<0.05) reduced pain intensity at each time point compared with no-treatment observation (from 30 minutes to 12 hours) and compared with vehicle plasters (from 4 to 12 hours). Lidocaine plasters were superior to both no-treatment observation (P<0.0001) and vehicle (P=0.033) in mean category pain relief scores. Minimal systemic absorption of lidocaine was observed (maximum blood lidocaine level 0.1 µg/mL). No systemic side effects were reported. 36 Two clinical studies used a randomized, withdrawal (enriched enrollment) design. 37,38 In the study by Galer et al, 37 patients had been treated successfully with topical 5% lidocaine medicated plasters on a regular basis for at least 1 month before study enrollment. Subjects were subsequently enrolled in a randomized, two-treatment period, vehicle-controlled, crossover study. The primary efficacy variable was "time to exit" due to a lack of efficacy (defined as a decrease in the pain relief score by two or more categories on a six-item pain relief scale for any 2 consecutive days). The median time to exit with the lidocaine plaster was significantly greater than with the vehicle plaster (>14 days vs 3.8 days, P<0.001). At study completion, significantly more patients expressed a preference for the lidocaine plaster than for the vehicle plaster (78.1% vs 9.4%, P<0.001). There were no statistically significant between-group differences with regard to side effects. 37 The study by Binder et al, 38 a double-blind, placebo plaster-controlled, parallel-group study, was conducted at 33 outpatient centers in 12 European countries between April 2003 and June 2004. Patients aged ≥50 years with PHN and neuropathic pain persisting ≥3 months post-rash healing, and with a mean pain intensity of ≥4 on the 11-point numerical rating scale (NRS-11), were enrolled in an 8-week open-label, active treatment (5% lidocaine medicated plaster) run-in phase. Responders entered a 2-week, double-blind phase and were randomized to the 5% lidocaine medicated plaster or a placebo plaster. Patients applied up to three plasters for up to 12 hours/day. The primary endpoint was time to exit due to a ≥2-point reduction in pain relief on 2 consecutive days of plaster application, using a six-item verbal rating scale. Among the 263 patients entering the initial 8-week run-in phase, 51.7% (n=137) achieved at least moderate pain relief on active treatment (responders). Seventy-one responders completed the entire 8-week initial phase, subsequently entered the double-blind phase, and were randomized to the 5% lidocaine medicated plaster (n=36) or a placebo plaster (n=35). Median time to exit was numerically longer for the 5% lidocaine medicated plaster group than for the placebo plaster group (13.5 [range: 2-14] vs 9.0 [range: 1-14] days, P=0.151).
For per-protocol patients (n=34), the median time to exit was significantly longer in the 5% lidocaine medicated plaster group than in the placebo plaster group (14.0 [range: 3-14] vs 6.0 [range: 1-14] days, P=0.0398). During the 8-week run-in phase, treatment with the 5% lidocaine medicated plaster was associated with clinically relevant improvements in extremely painful and painful allodynia, QoL, and sleep measures, particularly in patients identified as responders. 38 The 5% lidocaine medicated plaster reduced pain intensity in patients with PHN with impaired nociceptor function (determined by heat pain thresholds and histamine-induced flare) 39 but not in those with preserved function, in a randomized, double-blind, placebo-controlled substudy of 40 patients from a larger study in patients with any focal neuropathic pain. 40 Recently, Casale et al 41 reported data from a retrospective case review of eight patients with PHN who received the 5% lidocaine medicated plaster. The study cohort comprised mainly elderly patients taking multiple drugs (a mean of four ± two nonanalgesic drugs) to treat comorbidities, representing a population at high risk of drug-drug interactions. Good pain relief (of at least 30%) was observed during a 3-month follow-up period, and pain relief was associated with a 46% reduction in the size of the painful area after 1 month (from 236.38±140.34 to 128.80±95.7 cm²) and a 66% reduction after 3 months (to 81.38±59.19 cm²). Although these observations confirm the effectiveness of the 5% lidocaine medicated plaster in the treatment of PHN, the authors also noted that reduction in the size of the painful area represents a possible additional clinical benefit of the 5% lidocaine medicated plaster that warrants confirmation in large randomized controlled clinical trials. 41 This outcome was also reported in a prospective, observational study of 19 patients with traumatic injuries to peripheral nerves accompanied by LNP of >3 months' duration. 42 The 5% lidocaine medicated plaster effectively reduced both pain intensity and the size of the painful area, and no local or systemic adverse effects were reported. 42 This observation has significant neurobiological implications, as it suggests that long-term treatment may be associated with a reversal of central sensitization, as judged by the reduction in the receptive field zone. 43

Painful DPN
As overviewed in Table 1, the effectiveness and safety of the 5% lidocaine medicated plaster have been evaluated in several open-label studies in patients with DPN, 44-48 some of which also included patients with PHN 44,46-48 or low back pain. 44,46 The comparative efficacy and tolerability of the 5% lidocaine medicated plaster and pregabalin were evaluated in one study (discussed in more detail later). 47 In the study that enrolled only patients with clinically defined painful DPN of >3 months' duration, significant improvements in pain and QoL outcomes were observed after 3 weeks of treatment with up to four 5% lidocaine medicated plasters daily for 18 hours. 45 Patients received the 5% lidocaine medicated plaster as add-on therapy to a stable analgesic regimen. The mean daily pain rating (using the Brief Pain Inventory [primary outcome]) was reduced from 6.3±1.5 (baseline) to 3.6±2.1 (week 3; P≤0.001). Significant improvements from baseline to week 3 were also observed in sleep quality and other QoL scales (P≤0.001).
Improvements were maintained for up to a total of 8 weeks in a subgroup of patients (tapering of concomitant analgesic therapy was permitted during the 5-week extension phase). There was no systemic accumulation of lidocaine, and adverse events were minimal (mostly minor application-site events). 45 A systematic review and meta-analysis of the 5% lidocaine medicated plaster in patients with DPN indicated that the effects of the 5% lidocaine medicated plaster on pain reduction are comparable to those of amitriptyline, capsaicin, gabapentin, and pregabalin. 35 In the meta-analysis, all interventions remained effective compared with placebo in terms of the mean difference in the change of pain from baseline. The authors concluded that topical agents such as the 5% lidocaine medicated plaster may be associated with fewer and less clinically significant adverse events than is the case for systemic agents. However, the results of the systematic review were limited by the number and size of the studies included, warranting further well-designed studies in this patient population. 35

Compared with pregabalin
The 5% lidocaine medicated plaster has been compared with pregabalin in an open-label trial in patients with PHN (n=96) or painful DPN (n=204). 47,49 At baseline, patients had a mean pain intensity score of 6.75 on the 11-point NRS over the previous 3 days (NRS-3). Patients received the topical 5% lidocaine medicated plaster (applied to the most painful skin area) or twice-daily pregabalin capsules (150−600 mg/d titrated to effect) in a 1:1 ratio at 51 European centers in this two-stage, randomized, open-label, multicenter, noninferiority study. During the initial 4-week comparative stage, the response rate (average reduction from baseline of ≥2 points or an absolute value of ≤4 points on the NRS-3) in the full analysis set (all randomized patients who received at least one dose of the investigational products and for whom at least one postbaseline assessment of pain intensity was available) was 66.4% (101/152) with the 5% lidocaine medicated plaster and 61.5% (91/148) with pregabalin, indicating noninferiority of the 5% lidocaine medicated plaster to pregabalin (P=0.00229). When the results were analyzed by indication, more patients in the PHN group responded to the 5% lidocaine medicated plaster than to pregabalin treatment (63.3% vs 46.8%; statistical data not reported), while in the painful DPN group the between-treatment response was comparable (68.0% vs 68.3%). Among the secondary endpoints, ≥30% (57.8% vs 48.8%) and ≥50% (35.6% vs 20.9%) reductions in NRS-3 scores were more frequent with the 5% lidocaine medicated plaster than with pregabalin in patients with PHN but not in patients with DPN (59.6% vs 56.4% and 40.4% vs 37.2%, respectively). Despite greater baseline values in patients with PHN than in those with painful DPN, reductions in the rates of "painful" and "extremely painful" allodynia were greater with the 5% lidocaine medicated plaster (from 57.8% at baseline to 25.0%) than with pregabalin (from 62.8% to 41.2%) in patients with PHN; the between-treatment reduction in allodynia severity was comparable in patients with painful DPN. Significantly fewer patients using the 5% lidocaine plaster experienced drug-related adverse events compared with those taking pregabalin (P<0.0001).
Adverse events associated with the use of the 5% lidocaine medicated plaster were mainly mild-to-moderate application-site reactions, whereas in pregabalin recipients adverse events mainly affected the central nervous system and were of moderate-to-severe intensity. 47

In combination with pregabalin
The benefits of the 5% lidocaine medicated plaster in combination with pregabalin for 8 weeks were evaluated in patients with PHN or painful DPN 48,50 who had an inadequate response to monotherapy for 4 weeks in the first phase of the comparative study. 47,49 Patients continuing on monotherapy demonstrated additional decreases in NRS-3 scores. However, patients receiving combination therapy achieved further mean reductions in NRS-3 scores, beyond those experienced during the initial 4 weeks of monotherapy. 48 These improvements were similar between patients who started with pregabalin and added the 5% lidocaine medicated plaster (5.8±0.8 to 4.0±1.7; n=43) and those who initially received the 5% lidocaine medicated plaster and then added pregabalin (6.1±1.0 to 3.6±1.5; n=57). In a secondary analysis of only those patients with PHN from the first phase of the comparative study who were unresponsive to either the 5% lidocaine medicated plaster (n=18) or pregabalin (n=17) monotherapy, combination therapy provided additional efficacy and was well tolerated. 50 The results of these two studies give further support to the concept of multimodal analgesia 51 and suggest that patients treated this way can experience not only better analgesia but also fewer of the bothersome side effects that are frequently observed with high doses of pregabalin or gabapentin.

LNP of different etiologies
In addition to its efficacy and safety in PHN and DPN, the 5% lidocaine medicated plaster has been evaluated in a diverse range of other LNP conditions, including myofascial pain syndrome, [52][53][54] burn sequelae in children, 55 cervical radiculopathy, 56 inguinal postherniorrhaphy pain, 57 postsurgical neuropathic pain in patients with cancer, 58 cancer pain with neuropathic components or trigeminal neuropathic pain, 59 orofacial pain, 60 persistent postmastectomy pain, 61 and various other conditions 62 (Table 2). Most reports indicate clinical benefits with the 5% lidocaine medicated plaster in various LNP conditions. However, two double-blind, placebo-controlled, crossover studies, in patients with severe, persistent inguinal postherniorrhaphy pain 57 or postsurgical neuropathic pain in patients with cancer, 58 reported no significant benefit with the 5% lidocaine medicated plaster (Table 2). In studies in which the safety of the 5% lidocaine medicated plaster was evaluated, a very low incidence of local or systemic adverse events was reported.

In daily clinical practice
In addition to the evidence gained in the clinical trial setting, the use of the topical 5% lidocaine medicated plaster has been evaluated in the daily clinical practice setting in patients with LNP. [63][64][65][66][67][68] In an effectiveness study performed at 42 US centers (large institutional primary care programs and academic centers, including pain centers, neurologists, and pain specialists affiliated with a university), the 5% lidocaine medicated plaster was associated with significant reductions from baseline in all mean pain intensity and composite scores at each time point in 332 patients with PHN (P=0.0001).
Overall, 66% of patients reported improvements in pain intensity after 7 days of treatment; ∼43% of patients who did not respond after 7 days experienced improvement in pain intensity after 14 days of treatment. 63 These findings suggest that, when initiating therapy with the 5% lidocaine medicated plaster, a trial of at least 14 days should be implemented before censoring patients as nonresponders. Moreover, if there is some degree of improvement, the plaster should not be removed, and other antineuropathic medications should be started, in keeping with the multimodal therapeutic approach, in order to obtain adequate pain relief. 20 The day-to-day clinical use of the topical 5% lidocaine medicated plaster was evaluated in a prospective, observational study as part of a compassionate use program in 625 elderly patients (mean age 73.6 years) with PHN in France. 64 Treatment with the 5% lidocaine medicated plaster resulted in a significant quantitative reduction in concomitant neuropathic pain treatments and their associated side effects, while maintaining the quality of analgesia. The safety analysis showed that the 5% lidocaine medicated plaster was well tolerated, with an adverse event incidence of 2.6% (n=16). Adverse events were mainly related to application-site reactions, for which six patients discontinued treatment; no events were considered serious. 64 Another prospective, observational study evaluated patients' perceptions of the topical 5% lidocaine medicated plaster in almost 1,000 patients with chronic neuropathic pain in daily clinical practice in Germany. 65 In this patient population, of whom 44.8% had PHN, patients perceived the 5% lidocaine medicated plaster as an efficacious treatment for chronic neuropathic pain (mean pain intensity over the previous 24 hours improved by 5.1 points [74%] from 6.9±1.6 points at baseline, assessed using the NRS-11). The most notable treatment effects were in patients with PHN or DPN. A 30% reduction in overall pain intensity was observed within the first 2−3 weeks, with continuous further reductions until the end of the study. Marked improvements in anxiety and depression scores (40% and 52%, respectively) and in pain-related restrictions in activities of daily living (66%) and QoL (157%) were also noted. The mean burden of pain (calculated on a 0−100 scale as the sum of three pain intensity scores [lowest, average, highest intensity] plus the modified pain disability index sum score plus [40 minus the QoL impairment by pain inventory sum score]) was reduced by 56.2 points (73%) from 77.5 points at baseline. The greatest pain relief and associated improvements in pain-related restrictions were observed within the first 5 weeks of treatment; however, beneficial effects continued until the end of the 12-week observation period. 65 Consequently, this study showed that treatment of these individuals with the 5% lidocaine medicated plaster was associated with improvement not only in the level of analgesia but also in anxiety, depression, and QoL measurements. This is a very important finding, because the success of an analgesic therapy should not be assessed solely by its effects on pain but also on QoL variables. This was also the case in a study conducted within a large teaching hospital in the UK, in which pain, functioning, and patient satisfaction improved significantly in 408 evaluable hospital patients, of whom 197 were already receiving this form of therapy. 67
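The burden-of-pain composite quoted above is simple arithmetic, and a small sketch makes the formula concrete. The component values below are hypothetical, and the admissible ranges of the disability and QoL sum scores are assumptions based only on the quoted description.

```python
def burden_of_pain(lowest, average, highest, pdi_sum, qol_sum):
    """Composite burden-of-pain score on a 0-100 scale, following the
    formula quoted above: three NRS-11 pain intensity scores (lowest,
    average, highest), plus the modified pain disability index sum,
    plus (40 minus the QoL impairment by pain inventory sum)."""
    return lowest + average + highest + pdi_sum + (40 - qol_sum)

# Hypothetical patient values for illustration only.
print(burden_of_pain(lowest=3, average=6, highest=8, pdi_sum=30, qol_sum=15))  # 72
```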
Before using the plaster, the median pain score (assessed using the NRS-11) was 8 (interquartile range: 7−9). One month after therapy was started, pain had decreased to a level of 6 of 10 in all patients and to 5 of 10 in those who were already receiving this form of therapy. The reductions were statistically significant (P<0.001 for both groups).

Long-term use
Long-term use of the topical 5% lidocaine medicated plaster (for up to 5 years) has been evaluated in several clinical trials 69-71 and a >7-year follow-up survey. 72 Furthermore, extensive long-term experience (>20 million patients) has been gained since the introduction of the 5% lidocaine medicated plaster into numerous markets worldwide in 1999. 30 The long-term treatment of neuropathic pain symptoms in patients with PHN was evaluated in a 12-month, open-label, noncomparative, phase III study conducted at 34 outpatient clinics in 12 European countries (247 evaluable patients). 69 Up to three 5% lidocaine medicated plasters were applied to the painful area for up to 12 hours each day, with a treatment-free period of at least 12 hours required per day. Patients were permitted to continue receiving concomitant medication. In newly recruited patients (n=97), the mean average pain intensity (NRS-11) scores at baseline, week 12, and the end of the 12-month study were 5.9±1.4, 3.9±1.6, and 3.9±2.3, respectively. Pain intensity also decreased from baseline (3.9±1.9) to study end (3.4±2.0) in pretreated patients (n=150; no statistical data reported). Pain relief values were consistent with the reductions in pain intensity and were sustained in the long term. Overall, 77.3% (191 of 247) of patients were classified as "improved" from baseline. Infections (eg, bronchitis and nasopharyngitis) were the most common adverse events. In total, 48 treatment-related adverse events (mainly mild-to-moderate administration-site disorders) occurred in 31 (12.4%) patients. 69 A total of 102 patients (mean age 71 years, 64% female) continued from the main 12-month long-term study 69 into an extension phase of up to 3 years (a total of up to 4 years of treatment with the 5% lidocaine medicated plaster). 70 The mean pain relief of at least 4.3 on the six-point verbal rating scale, which had been achieved after 6 weeks in the initial 12-month phase of the study, was maintained throughout this 3-year extension period. At all visits, the global impression of change, assessed by the investigator and the patient using the clinical global impression of change and patient's global impression of change questionnaires, respectively, was "much" or "very much" improved in ∼80% of patients. For the global evaluation of the 5% lidocaine medicated plaster, clinicians and patients were asked at each visit to rate the study medication as poor, fair, good, very good, or excellent. At the final visit, the 5% lidocaine medicated plaster was rated as "excellent", "very good", or "good" by 91% (67/74) of physicians and 88% (67/76) of patients. Compared with the initial 12-month study, there was no increased frequency of treatment-related adverse events during the 3-year extension phase. 70 These results indicate that the 5% lidocaine medicated plaster provides effective long-term treatment of neuropathic pain symptoms in patients with PHN without evidence of tolerance or tachyphylaxis. 69,70
A retrospective, observational study investigated the efficacy and safety of the 5% lidocaine medicated plaster in 431 evaluable patients (25.0% aged >70 years) with refractory chronic neuropathic pain who attended eleven pain centers in France over a 5-year period. 71 Treatment of refractory neuropathic pain with the 5% lidocaine medicated plaster clearly demonstrated efficacy and an excellent safety profile. The 5% lidocaine medicated plaster reduced pain intensity by >50% or ≥30% in 45.5% and 82.2% of patients, respectively. Statistically significant reductions in the use of concomitant analgesics were also reported. 71 Under a compassionate use agreement, 20 geriatric patients (mean age 75 years) who had used the topical 5% lidocaine medicated plaster in clinical trials and had been offered continued therapy (mean duration 7.6 years [range: 4−15 years]) completed a survey to assess effectiveness, tolerability, and patient satisfaction. 72 Patients reported a high degree of satisfaction with long-term 5% lidocaine medicated plaster use, as judged by overall satisfaction, comparison of efficacy with previous treatment, pain relief, dosing convenience, ability to perform normal daily activities, and tolerability. 72 The long-term safety of the topical 5% lidocaine medicated plaster has been reported in a pooled analysis of clinical trial data for 502 patients with PHN and from spontaneous safety reports from consumers and health-care professionals covering ∼20 million patients (as of July 2014). 15,30 Among patients with adverse drug reactions, application-site erythema and application-site pruritus were the most frequently reported side effects. No serious adverse drug reactions occurred. 15 Moreover, based on postmarketing surveillance experience in ∼20 million patients worldwide, application-site reactions or reports of a lack of drug efficacy constituted the majority of spontaneously reported adverse events, findings that concur with the safety profile identified during the clinical development program. 30

Effects on QoL
Improvements in QoL have been reported in several studies of the topical 5% lidocaine medicated plaster in patients with LNP. 45,47,63 In an open-label effectiveness study, 249 of 332 patients with PHN reported improved QoL after treatment with the 5% lidocaine medicated plaster for 7 days, with further improvements until the end of the study (28 days; P=0.0001). For all measures of pain intensity, pain relief, and interference with QoL, improvements from baseline were equally significant regardless of the time interval since the onset of shingles. 63 In 300 evaluable patients with PHN (n=96) or painful DPN (n=204), the 5% lidocaine medicated plaster improved QoL (based on the EuroQol-5 dimension QoL index) to a greater extent than pregabalin. 47 The mean change in the EuroQol-5 dimension estimated health state score from baseline (all patients) was 0.12 and 0.04 in 5% lidocaine medicated plaster and pregabalin recipients, respectively. 47 Other measures of health-related QoL, Patient's Global, and Clinical Global Impression of Change scores indicated greater improvements with the 5% lidocaine medicated plaster than with pregabalin in the PHN group but not in the painful DPN group. 47
The 5% lidocaine medicated plaster (a maximum of four plasters daily for 18 hours) also significantly improved QoL ratings (sleep quality, pain interference, depression, and mood) in 56 patients with painful DPN (19 of whom had DPN with allodynia) in an open-label 3-week study (Table 1). 45 A subgroup of patients received the 5% lidocaine medicated plaster for an additional 5 weeks, during which tapering of concomitant analgesic therapy was permitted; QoL benefits were maintained during the extended treatment period. 45

Discussion
This review provides an updated summary of the published clinical experience with the 5% lidocaine medicated plaster in a wide range of LNP conditions. The data presented suggest that the topical 5% lidocaine medicated plaster is an effective and well-tolerated treatment option in patients with LNP, particularly those with PHN. Indeed, numerous systematic reviews and international guidelines include the topical 5% lidocaine medicated plaster as a first-line option in PHN. [16][17][18][19][20][21][22][23][24][25][26]28 In contrast, a recent systematic review/meta-analysis, using Grading of Recommendations Assessment, Development, and Evaluation criteria and an assessment of the number needed to treat (NNT) for 50% pain relief as a primary measure, recommends the 5% lidocaine medicated plaster as a second-line treatment for peripheral neuropathic pain. 29 The analysis included randomized, double-blind, placebo-controlled studies with parallel-group or crossover designs that had at least ten patients per group; from these data, NNTs were generated. Randomized, enriched enrollment withdrawal trials were summarized separately. As discussed earlier, a number of pivotal studies of the topical 5% lidocaine medicated plaster were enriched enrollment/withdrawal studies, a study design that is not conducive to inclusion/consideration in meta-analyses. This is despite the fact that this study design is in agreement with regulatory authority (eg, US FDA) guidance for the approval of analgesic medications. 73 Enrichment designs can be useful for determining the success of a medication compared with placebo because they reduce early study dropouts caused by adverse events. This is particularly important in studies evaluating the therapeutic effect of a pain medication, because the placebo effect is very strong in patients with pain. Furthermore, an enriched enrollment randomized withdrawal trial design allows the detection of desirable efficacy in a subgroup (and may, therefore, provide a strategy for establishing pharmacokinetic and pharmacogenetic patient profiles), and it can accommodate initial dose titration to mimic clinical practice, 74 with the promise of greater translational impact. 75 Based on a comparison of results from enriched and nonenriched enrollment randomized withdrawal clinical trials of opioids in chronic noncancer pain, there also appears to be no difference in efficacy between enriched and nonenriched studies. 76 However, in the systematic review by Finnerup et al, 29 one of the consequences of summarizing enriched enrollment studies separately and excluding studies in everyday clinical practice, which represent a large proportion of actual usage, is that NNTs were not determined for the 5% lidocaine medicated plaster, resulting in a weak recommendation for use.
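Because the NNT figures prominently in the guideline debate above, a short sketch of how it is computed from responder proportions may be helpful. The function below implements the standard definition, NNT = 1/(responder rate with treatment − responder rate with control). The example rates are the ≥50% NRS-3 responder proportions for PHN reported earlier (plaster 35.6% vs pregabalin 20.9%), used purely to illustrate the arithmetic: NNT is normally defined against placebo, so this is not a formal head-to-head NNT.

```python
def nnt(p_treatment: float, p_control: float) -> float:
    """Number needed to treat: reciprocal of the absolute
    difference in responder proportions."""
    diff = p_treatment - p_control
    if diff <= 0:
        raise ValueError("treatment responder rate must exceed control")
    return 1.0 / diff

# Illustrative arithmetic only, using the PHN subgroup rates above.
print(f"NNT ~= {nnt(0.356, 0.209):.1f}")  # ~6.8
```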
The use of the NNT can be criticized for several reasons, and it can only be calculated reliably for parallel-design, placebo-controlled studies with comparable inclusion and exclusion criteria. 17 As the study designs for the 5% lidocaine medicated plaster trials were mainly withdrawal designs, NNT calculation was often not possible. Thus, using this assessment method, very few studies with NNT data are available for the 5% lidocaine medicated plaster. However, the available NNT data are in line with those of the recommended first-line medications. 17 In fact, in patients with various localized peripheral neuropathic pain syndromes, including those with mechanical allodynia, the 5% lidocaine medicated plaster as add-on therapy reduced ongoing pain and allodynia with an NNT of 4.4 (2.5-17.5). 17 This is an important observation because, in clinical practice, multimodal therapy is considered the "gold standard" for the treatment of localized peripheral neuropathic pain. 77,78 Moreover, there is a knowledge gap in the majority of systematic reviews and clinical guidelines, as they have not been able to provide recommendations for the treatment of individuals who fail monotherapy. [16][17][18][19][20][21][22][23][24][25][26]28,29 In fact, for patients who are treated based on these guideline recommendations and do not experience at least 50% pain control, the core of the NNT concept, clinicians currently use multimodal therapy with the addition of a second, third, or even fourth medication, based on the age of the patient, the potential for drug-drug interactions, the potential for side effects, and the opportunity to also treat comorbid conditions (eg, insomnia, depression, or anxiety). Consequently, the available guidelines have very little application to daily clinical practice, as data on the use of multimodal therapy in the treatment of neuropathic pain are lacking. 79 Moreover, there are serious flaws in the way the analysis of the studies was performed for the guidelines:
1) Recommendations are mainly based on NNTs derived from the evaluation of pain on visual analog scales. Clinical pain researchers have recognized that this evaluation may not be accurate, and the patient's global impression of pain improvement, psychosocial functioning, and activity are now utilized to fully evaluate the success of an analgesic medication.
2) The role of anxiety and depression in amplifying pain symptoms is not accounted for in these studies.
3) The placebo effect introduced by the research nurses may also be a potential bias in these evaluations. 80-82
4) The statistical design varies from study to study: some studies use the baseline evaluation carried forward, whereas others use the last evaluation carried forward when analyzing data for patients who dropped out. This has not been accounted for in the analysis done for the guidelines.
5) The maximum dose used for the majority of the medications studied varies from study to study, so efficacy can be expected to vary as well. Clinicians are universally using higher doses/numbers of plasters for the treatment of their patients, as postmarketing studies have demonstrated increased analgesic efficacy with this approach.
Consequently, it is not surprising that the general findings of the recent evaluation by Finnerup et al 29 are largely reflected in a recent Cochrane review of all topical lidocaine preparations, which found no evidence from good-quality randomized controlled studies to support the use of topical lidocaine to treat neuropathic pain, although individual studies indicated that it was effective for pain relief. 83 The Cochrane review also noted that clinical experience supports the efficacy of topical lidocaine in some patients. 83 Despite the general paucity of direct comparative data from randomized, controlled studies, there is a substantial body of clinical evidence and experience that the 5% lidocaine medicated plaster is a valuable and safe option in the management of LNP. Given the recognition that LNP is a subset of neuropathic pain, a treatment algorithm was developed recently in order to identify patients with LNP and to guide targeted topical treatment with the 5% lidocaine medicated plaster. 84 Generally, the more localized the pain (ie, within an area about the size of an A4 sheet of paper), the better the results of topical treatment. 84 The 5% lidocaine medicated plaster is easy to use, improves patient QoL, has a good tolerability profile, and is associated with a lack of systemic adverse events and a low potential for drug-drug interactions (particularly when compared with systemic medications); moreover, in contrast to systemic therapies, there is no requirement to titrate the dose. 15,85 These characteristics are particularly beneficial in elderly and medically complicated patients, including those with underlying comorbidities that require a polypharmacy management approach. 85 Indeed, the most recent NeuPSIG recommendations also acknowledge the first-line use of the 5% lidocaine medicated plaster as a safe and well-accepted option, particularly in frail or elderly individuals, where adverse effects or safety issues associated with systemic therapy are of concern. 29 Extensive postmarketing surveillance has confirmed the favorable safety profile of the 5% lidocaine medicated plaster, supporting its first-line use in the treatment of LNP after herpes zoster infection. 15 Based on the results of randomized, controlled, and open-label trials and numerous studies designed to gauge response and experience in real-life clinical practice settings, the use of the 5% lidocaine medicated plaster would appear to be indicated as the first step in the treatment of LNP as part of a multimodal approach or as a single agent. Recent developments with regard to the potential clinical benefit of reducing the size of the painful area using the 5%
Theoretical Framework for Characterizing Strain-Dependent Dynamic Soil Properties This paper proposes a theoretical framework for the characterization of the strain-dependent dynamic properties of soils. The analysis begins with an analytical constitutive model for soils under steady-state cyclic loading. The model describes the dominant soil characteristics, i.e., the hysteresis and nonlinearity, with an intrinsic material property α, which physically represents the degree of the hysteretic nonlinearity in a medium. Explicit formulas for the backbone curve, tangent shear modulus, secant shear modulus, and damping ratio as a function of shear strain are derived directly from the constitutive model. A procedure is then developed to determine the parameter α, in which the derived damping ratio equation is fitted to damping ratio data measured from the resonant column test (RCT). Clay and sand under three different levels of confinement stress are considered in the numerical evaluation. The capability of the proposed theoretical framework in predicting strain-dependent soil properties and responses is demonstrated. Introduction When the ground motion is severely affected by shear waves propagating vertically from the underlying rock, the soil deposits may undergo cyclic shear deformations. The dynamic properties of soils, including the strain-dependent shear modulus and damping ratio, are the basic input parameters in the analysis of the seismic ground response and site amplification [1]. The soil properties exhibit strong nonlinear responses; the shear modulus decreases, and the damping capacity increases, with the amplitude of shear strain. The increasing damping capacity is directly associated with the hysteresis. These macroscopic, hysteretic, nonlinear behaviors of soils are the consequences of complex physics at microscales, including inter-grain contact, friction and adhesion, and rearrangement of grain structures under loading-unloading. Numerous stress-strain models have been proposed for the analysis of dynamic soil responses [2,3]. Classical models in terms of empirical fitting parameters are often used for their simplicity [4][5][6][7][8]. More sophisticated models for cyclic loading require more fitting parameters to better describe the soil hysteresis loops. Hysteresis models have been proposed to capture a closed hysteresis loop characterized by the imposed shear strain amplitude and state of stress [9][10][11][12][13][14][15][16][17]. While these models have been successful in expressing the complex strain-dependency for certain soil types under steady-state cyclic loading, they still lack robustness and universality due to the physical uncertainty of the input parameters, and their use is largely limited to the soils for which they were calibrated. Therefore, a simple but robust model is needed to describe the inherently hysteretic, nonlinear nature of soils. The present study develops a new theoretical framework for characterizing the strain-dependent dynamic shear modulus and damping ratio of soils. This paper extends the author's previous study [18] to (1) derive the explicit formulas for the second model order; (2) test the model against damping ratio data obtained from the resonant column test (RCT); and (3) explore the effect of confinement stress on the hysteretic parameters. In particular, a modified procedure for conventional data interpretation is proposed to accurately characterize the hysteretic soil properties and responses from resonant column measurements.
Constitutive Model: Stress-Strain Relation Considerable progress in modeling such hysteretic nonlinear behaviors of granular materials has been made in the geophysics community. The constitutive models are derived within the mathematical framework of the Preisach-Mayergoyz space representation from a unit physical mechanism [19][20][21][22]. This study uses the relevant stress-strain relationship of soils for one-dimensional cyclic shear motion [23][24][25][26]. In particular, the classical nonlinearity terms (higher-order power series in strain) are neglected, where Gmax is the shear modulus in the limit of infinitesimal strain, γ is the shear strain, α is a non-dimensional parameter that measures the degree of hysteretic nonlinearity, γ̇ (= ∂γ/∂t) is the strain rate, ∆γ is the shear strain amplitude, and sgn(x) is the signum function. Backbone Curve The nonlinear-hysteretic soil models follow the basic and extended Masing's rules, which are adopted in conjunction with the backbone curve to express unloading, reloading, and cyclic degradation behavior. It represents the trajectory of the extrema of the hysteresis curves. By replacing the strain amplitude (∆γ) with the shear strain γ in Equation (1), the backbone curve can be readily obtained for the entire strain range. The backbone curves are superimposed on the stress-strain curves for three different strain amplitudes (Figure 1). The results show increasing hysteresis (nonlinear damping) and decreasing slope (the strain-softening effect) with increasing strain amplitude. In addition, the backbone curve is constructed by two quadratic functions joined at the coordinate origin. In fact, similar hysteresis loops are observed in many soils and other granular materials. While the stress-strain model in Equation (1) may not describe all of the complex constitutive behaviors of granular materials, it has been shown to capture the essential features of the stress-strain relationship under steady-state cyclic loading. Modulus Degradation The instantaneous shear modulus (Gtan = dτH/dγ), defined as the slope of the stress-strain hysteresis loop along the loading path, can be obtained by taking the derivative with respect to shear strain in Equation (1). Given γ̇ > 0 and ∆γ = γ, the tangent shear modulus can be expressed in explicit form (Equation (4)); this explicit form of the tangent modulus can be useful in numerical simulations to reflect the strain-softening response of granular materials under cyclic loading.
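The display equations of this section (Equations (1)-(6)) did not survive extraction. For orientation only, the hysteretic-modulus form used in the cited Preisach-Mayergoyz geophysics literature — an assumption here, not necessarily the author's exact Equation (1) — reads

$$G(\gamma,\dot{\gamma}) = G_{\max}\left[1-\alpha\left(\Delta\gamma+\gamma\,\operatorname{sgn}(\dot{\gamma})\right)\right],$$

which, on the backbone, yields the two quadratics joined at the origin described in the text,

$$\tau_b(\gamma)=G_{\max}\left(\gamma-\tfrac{1}{2}\alpha\,\gamma\lvert\gamma\rvert\right),\qquad G_{\mathrm{sec}}=G_{\max}\left(1-\tfrac{1}{2}\alpha\lvert\gamma\rvert\right),\qquad G_{\tan}=G_{\max}\left(1-\alpha\lvert\gamma\rvert\right).$$

These forms reproduce the properties stated in the text: the ratio $G_{\mathrm{sec}}/G_{\tan}$ exceeds one by a small positive quantity in $\alpha\gamma$, and both moduli remain positive for $\alpha\gamma<1$.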
The tangent stiffness Gtan instantaneously captures the soil fabric change during a large-strain test. It is not related to the small-strain stiffness Gmax measured under constant fabric [27,28]. The secant shear modulus (Gsec = τb/γ), i.e., the secant slope drawn from the origin to any specified point on the stress-strain curve, can be derived using Equation (2), and the ratio between the secant and backbone tangent moduli follows (Equation (6)). The small positive quantity in αγ indicates that the secant modulus is always larger than the backbone tangential modulus. Damping Ratio The damping ratio of a granular material is the summation of the linear damping ratio in the limit of small strain (ζL), which represents the inherent viscoelastic absorption, and a nonlinear damping ratio due to the hysteresis that increases with the strain (ζNL(γ)), i.e., ζ(γ) = ζL + ζNL(γ). The nonlinear damping ratio is defined [1,26] in terms of WS, the maximum strain energy stored during the cycle (= Gsec·∆γ²/2), and WD, the area enclosed by the hysteresis loop in the stress-strain curve, which represents the dissipated energy per cycle. The energy dissipation per cycle can be obtained by integrating Equation (1); the dissipated energy depends on the strain amplitude, the hysteretic nonlinearity parameter, and the shear modulus in the limit of infinitesimal strain. Thus, the nonlinear damping ratio can be obtained from Equation (8). The strain-dependent damping ratio response cannot be described with the Kelvin-Voigt model, which consists of a linear spring element and a linear dashpot element in parallel, where η is the viscosity and γ is the harmonically varying strain ∆γsin(ωt). In that case there is no strain-dependence in the shear modulus and the damping ratio; these correspond to the constant damping ratio and resonant frequency monitored at low shear strains. Examples The resonant column test (RCT) characterizes the resonance frequency and damping ratio as a function of shear strain. The shear modulus is obtained from the resonance frequency using the characteristic equation derived from the linear vibration of the column-mass system; in fact, the hysteretic nonlinear nature of the vibration problem is ignored in the calculation of the shear modulus. Thus, this study uses damping ratio data to avoid data interpretation errors. The linear damping ratio ζL is experimentally constrained below the elastic threshold regime, and the hysteretic nonlinearity parameter α is then determined by least-squares fitting of the nonlinear damping ratio model to the measured data (Figure 2).
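The displayed definitions are likewise missing here; the standard forms consistent with the surrounding text are (the exact coefficients in the author's Equations (8)-(10) may differ)

$$\zeta(\gamma)=\zeta_L+\zeta_{NL}(\gamma),\qquad \zeta_{NL}=\frac{W_D}{4\pi W_S},\qquad W_S=\tfrac{1}{2}\,G_{\mathrm{sec}}\,\Delta\gamma^{2},$$

and, for the Kelvin-Voigt comparison, $\tau=G\gamma+\eta\dot{\gamma}$ with $\gamma=\Delta\gamma\sin(\omega t)$ gives the strain-independent damping ratio $\zeta=\eta\omega/(2G)$.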
Figure 2. Determination of the hysteretic nonlinearity parameter α using the damping ratio measured from the resonant column test (RCT) and the proposed model (sand data from 16~18 in Table 1): (a) least-squares fitting of the model to measured damping ratio; (b) error function (L2 norm). Figure 3 shows the data points and fitted models for the damping ratios, and the estimated values are tabulated in Table 1. Since the α values are on the order of 10² and the maximum shear strain is less than 10⁻³, αγ is at most 10⁻¹, which proves the inequality in Equation (6). Higher confinement stress extends the elastic threshold region, with lower α values. In addition, clay is expected to be less nonlinear than sand. These results show a consistent trend in the α values, implying that the α parameter carries physical information about the soil fabric changes. With the fitted parameters, the secant shear modulus data were compared with the proposed model (Figure 4). As the shear strain levels increased beyond the elastic threshold strain, the conventional method predicted larger reductions in the shear modulus. These deviations show that the use of the linear characteristic equation produces data interpretation errors in the shear modulus. However, larger reductions in the shear modulus would be more conservative data for design purposes. Figure 5 shows the hysteresis curves, corresponding backbone curves, and the instantaneous tangent shear moduli for sand under confinement stresses of 100 and 400 kPa. The instantaneous tangent shear moduli have a bow-tie shape with end-point discontinuities, thereby showing that the shear modulus is dependent on the loading path and exhibits a significant drop with the strain amplitude, again due to the strain-softening effect. The secant modulus at the maximum strains is superimposed on its tangent counterpart. Note that the tangent modulus at γ = 0 is equal to the average of the two secant moduli at each shear strain amplitude. [Table 1 fragment: data set number, confinement stress (kPa), and fitted α; e.g., sand sets 16-18 [29] give α = 590, 469, and 370 at 100, 200, and 400 kPa, respectively.] Figure 3. Damping ratios of sand and clay under three different confinement stress levels (sand data from 16~18 and clay data from 1~3 in Table 1): (a) sand; (b) clay. The symbols are experimental data and the lines are defined by Equations (7) and (10).
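The fitting step lends itself to a short numerical sketch. The 2/(3π) prefactor below is taken from the Preisach-Mayergoyz damping result and is an assumption, not the author's exact Equation (10), and the data points are illustrative, not values from Table 1:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative RCT data (shear strain, damping ratio); not from Table 1.
gamma = np.array([1e-6, 1e-5, 5e-5, 1e-4, 3e-4, 1e-3])
zeta  = np.array([0.010, 0.011, 0.016, 0.022, 0.048, 0.135])

def damping_model(g, zeta_L, alpha):
    # zeta(gamma) = zeta_L + zeta_NL(gamma), with the hysteretic part
    # assumed linear in alpha*gamma (the 2/(3*pi) prefactor is an assumption).
    return zeta_L + (2.0 / (3.0 * np.pi)) * alpha * g

popt, _ = curve_fit(damping_model, gamma, zeta, p0=(0.01, 100.0))
zeta_L_fit, alpha_fit = popt
print(f"zeta_L = {zeta_L_fit:.4f}, alpha = {alpha_fit:.0f}")  # alpha ~ O(10^2)

# L2-norm error over a sweep of alpha, cf. panel (b) of Figure 2.
alphas = np.linspace(100, 1000, 200)
errors = [np.linalg.norm(zeta - damping_model(gamma, zeta_L_fit, a)) for a in alphas]
print("alpha at minimum error:", alphas[int(np.argmin(errors))])
```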
Correlation: Hysteretic Parameter and Confinement Stress The 38 data sets of resonant column test (RCT) results compiled from the literature are compared with the proposed model for different confinement stress levels and soils. Figure 6 shows that (1) there is an inverse relation between the hysteretic nonlinearity parameter α and the confinement stress σ'c; and (2) sand leads to higher α values than clay at the same confinement stress. Comments: Shear Strain Range The constitutive model in Equation (1) has been validated for many different granular material systems [25,30,31], while this research for the first time applies the constitutive model to characterize soil properties of geotechnical interest. The functional form determines the shape of the hysteresis, while the hysteretic nonlinearity parameter α quantifies the amount of the macroscopic hysteretic nonlinear effect caused by microscopic sources of nonlinearity. In addition, this parameter is not a mere fitting parameter but has a full physical meaning related to the sources of nonlinearity [24,32]. Thus, an ad-hoc fitting procedure is not required to characterize strain-dependent soil properties. It was revealed that the constitutive model in Equation (1) is valid in the strain range up to 10⁻² [25]. For example, the α parameters for the sand and clay considered in this research are on the order of 10². This means αγ < 1 for γ < 10⁻², and thus Gsec > 0 and Gtan > 0 in this model. The upper validity limit of 10⁻² shear strain is high enough for most geotechnical applications. Figure 7 summarizes the entire procedure based on the RCT data. While the data from the RCT are used as an example in this research, data from the torsional shear test can additionally be used in a similar procedure.
Conclusions This paper addresses a theoretical framework for characterizing the dynamic behavior of soils. In particular, a simple but robust constitutive model involving a single physical parameter α was used to capture the strain-dependent response of granular materials under steady-state cyclic loading conditions. The main conclusions can be drawn as follows: • The constitutive model yields explicit formulas that describe the hysteretic nonlinear response. Note that all the formulas derived from this model depend on the hysteretic nonlinearity parameter α. • The 36 damping ratio data sets measured from the resonant column test were compared with the proposed model for different confinement stress levels and soils. The comparison shows an inverse relation between the α parameter and the confinement stress, and higher α values for sand. • The data analysis reveals that the model is valid for αγ < 1, and thus the explicit formulas can be used to simulate the ground motion within the intermediate strain range.
Computationally Efficient Market Simulation Tool for Future Grid Scenario Analysis Abstract-This paper proposes a computationally efficient electricity market simulation tool (MST) suitable for future grid scenario analysis. The market model is based on a unit commitment (UC) problem and takes into account the uptake of emerging technologies, like demand response, battery storage, concentrated solar thermal generation, and HVDC transmission lines. To allow for a subsequent stability assessment, the MST requires an explicit representation of the number of online generation units, which affects power system inertia and reactive power support capability. These requirements render a full-fledged UC model computationally intractable, so we propose unit clustering, a rolling horizon approach, and constraint clipping to increase the computational efficiency. To showcase the capability of the proposed tool, we use a simplified model of the Australian National Electricity Market with different penetrations of renewable generation. The results show that the number of online units resulting from the proposed tool is very close to that of a binary UC run over a week-long horizon, which is confirmed by the loadability and inertia analysis. This confirms the validity of the approach for long-term future grid studies, where one is more interested in finding weak points in the system than in a detailed analysis of individual operating conditions. Index Terms-Electricity market, future grid, electricity market simulation tool, optimization, scenario analysis, unit commitment, stability assessment, inertia, loadability. (Shariq Riaz is also with the Department of Electrical Engineering, University of Engineering and Technology Lahore, Lahore, Pakistan.) Sets: C - set of consumers c; G - set of generators g, G = G^syn ∪ G^RES; G^syn - set of synchronous generators, G^syn ⊆ G; G^RES - set of renewable generators, G^RES ⊆ G; G^CST - set of concentrated solar thermal generators, G^CST ⊆ G^syn. Variables: s_g,t - number of online units of generator g, s_g,t ∈ {0,1} in the BUC and s_g,t ∈ Z+ in the MST; u_g,t - integer startup status variable of a unit of generator g, u_g,t ∈ {0,1} in the BUC and u_g,t ∈ Z+ in the MST; d_g,t - integer shutdown status variable of a unit of generator g, d_g,t ∈ {0,1} in the BUC and d_g,t ∈ Z+ in the MST; δ_n,t - voltage angle at node n; p_l,t - power flow on line l; ∆p_l,t - power loss on line l.
p_g,t - power dispatch of generator g. Initial conditions: d̂_g,t - minimum number of units of generator g ∈ G^syn required to remain offline for time t < τ^d_g; ê_g - energy stored in the TES of g ∈ G^CST at the start of the horizon; ê^b_p - battery state of charge for prosumer p at the start of the horizon; ê_s - energy stored in storage plant s at the start of the horizon; p̂_g - power dispatch of generator g at the start of the horizon; ŝ_g - number of online units of generator g ∈ G^syn at the start of the horizon; û_g,t - minimum number of units of generator g ∈ G^syn required to remain online for time t < τ^u_g. Parameters: ∆t - time resolution; η_x - efficiency of component x; λ - feed-in price ratio; p_p,t - load demand of prosumer p; S_g - MVA rating of a unit of generator g; r^{+/−}_g - ramp-up/down rate of a unit of generator g. I. INTRODUCTION Power systems worldwide are moving away from domination by large-scale synchronous generation and passive consumers. Instead, in future grids 1 new actors, such as variable renewable energy sources (RES) 2, price-responsive users equipped with small-scale PV-battery systems (called prosumers), demand response (DR), and energy storage will play an increasingly important role. Given this, in order for policy makers and power system planners to evaluate the integration of high penetrations of these new elements into future grids, new simulation tools need to be developed. Specifically, there is a pressing need to understand the effects of technological change on future grids, in terms of energy balance, stability, security and reliability, over a wide range of highly uncertain future scenarios. This is complicated by the inherent and unavoidable uncertainty surrounding the availability, quality and cost of new technologies (e.g. battery or photovoltaic system costs, or concentrated solar thermal (CST) generation operating characteristics) and the policy choices driving their uptake. The recent blackout in South Australia [1] serves as a reminder that things can go wrong when the uptake of new technologies is not planned carefully. Future grid planning thus requires a major departure from conventional power system planning, where only a handful of the most critical scenarios are analyzed. To account for a wide range of possible future evolutions, scenario analysis has been proposed in many industries, e.g. in finance and economics [2], and in energy [3], [4]. In contradistinction to power system planning, where the aim is to find an optimal transmission and/or generation expansion plan, the aim of scenario analysis is to analyze possible evolution pathways to inform power system planning and policy making. Given the uncertainty associated with long-term projections, the focus of future grid scenario analysis is limited only to the analysis of what is technically possible, although it might also consider an explicit costing [5]. In more detail, existing future grid feasibility studies have shown that the balance between demand and supply can be maintained even with high penetration of RESs by using large-scale storage, flexible generation, and diverse RES technologies [6]-[10]. However, they only focus on balancing and use simplified transmission network models (either copper plate or network flow; a notable exception is the Greenpeace pan-European study [11] that uses a DC load flow model). This ignores network-related issues, which limits these models' applicability for stability assessment.
To the best of our knowledge, the Future Grid Research Program, funded by the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO), is the first to propose a comprehensive modeling framework for future grid scenario analysis that also includes stability assessment. (1: We interpret a future grid to mean the study of national grid type structures with the transformational changes over the long term out to 2050. 2: For the sake of brevity, by RES we mean "unconventional" renewables like wind and solar, but excluding conventional RES, like hydro, and dispatchable unconventional renewables, like concentrated solar thermal.) The aim of the project is to explore possible future pathways for the evolution of the Australian grid out to 2050 by looking beyond simple balancing. To this end, a simulation platform has been proposed in [12] that consists of a market model, power flow analysis, and stability assessment (Fig. 1). The platform has been used, with additional improvements, to study fast stability scanning [13], inertia [14], modeling of prosumers for market simulation [15], [16], the impact of prosumers on voltage stability [17], and power system flexibility using CST [18] and battery storage [19]. In order to capture the inter-seasonal variations in renewable generation, computationally intensive time-series analysis needs to be used. A major computational bottleneck of the framework is the market simulation. Within this context, the contribution of this paper is to propose a unified generic market simulation tool (MST) based on a unit commitment (UC) problem suitable for future grid scenario analysis, including stability assessment. The tool incorporates the following key features: a market-structure-agnostic modeling framework; integration of various types and penetrations of RES and emerging demand-side technologies; a generic demand model considering the impact of prosumers; an explicit network representation, including HVDC lines, using a DC power flow model; an explicit representation of the number of online synchronous generators; an explicit representation of system inertia and reactive power support capability of synchronous generators; and computational efficiency with sufficient accuracy. The presented model builds on our existing research [14]-[19] and combines all of these in a single coherent formulation. In more detail, to reduce the computational burden, the following techniques are used, building on the methods proposed in [20], [21]: unit clustering, a rolling horizon approach, and constraint clipping. The computational advantages of our proposed model are shown on a simplified 14-generator model of the Australian National Energy Market (NEM) as a test grid [22]. Four cases with different RES penetrations are run for one- to seven-day horizon lengths, and computational metrics are reported. To reflect the accuracy of the proposed MST, system inertia and voltage stability margins are used as a benchmark. In the simulations, RES and load traces are taken from the National Transmission Network Development Plan (NTNDP) data, provided by the Australian Energy Market Operator (AEMO) [23]. The remainder of the paper is organized as follows: literature review and related work are discussed in Section II, while Section III details the MST. A detailed description of the simulation setup is given in Section IV. In Section V, results are analyzed and discussed in detail. Finally, Section VI concludes the paper.
II. RELATED WORK In order to better explain the functional requirements of the proposed MST, we first describe the canonical UC formulation. An interested reader can find a comprehensive literature survey in [24]. A. Canonical Unit Commitment Formulation The UC problem is an umbrella term for a large class of problems in power system operation and planning whose objective is to schedule and dispatch power generation at minimum cost to meet the anticipated demand, while meeting a set of system-wide constraints. In smart grids, problems with a similar structure arise in the area of energy management, and they are sometimes also called UC [25]. Before deregulation, UC was used in vertically integrated utilities for generation scheduling to minimize production costs. After deregulation, UC has been used by system operators to maximize social welfare, but the underlying optimization model is essentially the same. Mathematically, UC is a large-scale, nonlinear, mixed-integer optimization problem under uncertainty. With some abuse of notation, the UC optimization problem can be represented in the following compact formulation [26]: min_x f(x) (1), subject to g_D(x_c) ≤ 0 (2), g_C(x_b) ≤ 0 (3), and g_CD(x_c, x_b) ≤ 0 (4). Due to the time couplings, the UC problem needs to be solved over a sufficiently long horizon. The decision vector x = {x_c, x_b} for each time interval consists of continuous and binary variables. The continuous variables, x_c, include generation dispatch levels, load levels, transmission power flows, storage levels, and transmission voltage magnitudes and phase angles. The binary variables, x_b, include scheduling decisions for generation and storage, and logical decisions that ensure consistency of the solution. The objective (1) captures the total production cost, including fuel costs, start-up costs and shut-down costs. The constraints include, respectively: dispatch-related constraints such as energy balance, reserve requirements, transmission limits, and ramping constraints (2); constraints on the commitment variables, including minimum up- and down-time and start-up/shut-down constraints (3); and constraints coupling commitment and dispatch decisions, including minimum and maximum generation capacity constraints (4). The complexity of the problem stems from the following: (i) certain generation technologies (e.g. coal-fired steam units) require long start-up and shut-down times, which requires a sufficiently long solution horizon; (ii) generators are interconnected, which introduces couplings through the power flow constraints; (iii) on/off decisions introduce a combinatorial structure; (iv) some constraints (e.g. AC load flow constraints) and parameters (e.g. production costs) are non-convex; and (v) the increasing penetration of variable renewable generation and the emergence of demand-side technologies introduce uncertainty. As a result, a complete UC formulation is computationally intractable, so many approximations and heuristics have been proposed to strike a balance between computational complexity and functional requirements. For example, power flow constraints can be neglected altogether (a copper plate model), can be replaced with simple network flow constraints to represent critical inter-connectors, or, instead of the (nonconvex) AC load flow, a simplified (linear) DC load flow is used. B. UC Formulations in Existing Future Grid Studies In operational studies, the nonlinear constraints, e.g.
ramping, minimum up/down time (MUDT) and thermal limits, are typically linearized; startup and shutdown exponential costs are discretized; and non-convex and non-differentiable variable cost functions are expressed as piecewise linear functions [20], [27]. In planning studies, due to long horizon lengths, the UC model is simplified even further. For example: the combinatorial structure is avoided by aggregating all the units installed at one location [21], [28], [29]; piecewise linear cost functions and constraints are represented by one segment only; some costs (e.g. startup, shutdown and fixed costs) are ignored; a deterministic UC with perfect foresight is used; and non-critical binding constraints are omitted [30], [31] 3. To avoid the computational complexity associated with the mixed-integer formulation, a recent work [33] has proposed a linear relaxation of the UC formulation for flexibility studies, with an accuracy comparable to the full binary mixed-integer linear formulation. In contrast to operation and planning studies, the computational burden of future grid scenario analysis is even bigger, due to the sheer number of scenarios that need to be analyzed, which requires further simplifications. For example, the Greenpeace study [11] uses an optimal power flow for generation dispatch and thus ignores UC decisions. Unlike the Greenpeace study, the Irish All Island Grid Study [34] and the European project e-Highway2050 [35] ignore load flow constraints altogether; however, they do use a rolling horizon UC, with simplifications. The Irish study, for example, does not restrict the minimum number of online synchronous generators, to avoid RES spillage, and the e-Highway2050 study uses a heuristic to include DR. The authors of the e-Highway2050 study, however, acknowledge the size and the complexity of the optimization framework in long-term planning, and plan to develop new tools with a simplified network representation [35]. In summary, a UC formulation depends on the scope of the study. Future grid studies that explicitly include stability assessment bring about some specific requirements that are routinely neglected in the existing UC formulations, as discussed next. III. MARKET SIMULATION TOOL A. Functional Requirements The focus of our work is stability assessment of future grid scenarios. Thus, the MST must produce dispatch decisions that accurately capture the kinetic energy stored in rotating masses (inertia), active power reserves, and the reactive power support capability of synchronous generators, which all depend upon the number of online units and the respective dispatch levels. For the sake of illustration, consider a generation plant consisting of three identical (synchronous) thermal units, with the following characteristics: (i) constant terminal voltage of 1 pu; (ii) minimum technical limit P_min = 0.4 pu; (iii) power factor of 0.8; (iv) maximum excitation limit E_fd^max = 1.5 pu; and (v) normalized inertia constant H = 5 s. We further assume that in the over-excited region, the excitation limit is the binding constraint, as shown in Fig. 2. Observe that the maximum reactive power capability depends on the active power generated, and varies between Q_n at P_max = 1 pu and Q_max at P_min. We consider three cases defined by the total active power generation of the plant: (i) 0.8 pu, (ii) 1.2 pu, and (iii) 1.6 pu. The three scenarios correspond to the rows in Fig.
3, which shows the active power dispatch level P, the reactive power support capability Q, the online active power reserves R, and the generator inertia H. The three columns show feasible solutions for three different UC formulations: all three units aggregated into one equivalent unit (AGG), a standard binary UC (BUC) in which each unit is modeled individually, and the proposed market simulation tool (MST). A detailed comparison of the three formulations is given in Section V. Although the results are self-explanatory, a few things are worth emphasizing. In case (i), aggregating the units into one equivalent unit (AGG) results in the unit being shut down due to the minimum technical limit. The individual unit representation (BUC), on the other hand, does allow the dispatch of one or two units, but with significantly different operational characteristics. In cases (ii) and (iii), the total inertia in the AGG formulation is much higher, which has important implications for frequency stability. A similar observation can be made for the reactive power support capability, which affects voltage stability. Also, dispatching power from all three units results in a significantly higher active power reserve. And last, a higher reactive power generation due to a lower P reduces the internal machine angle, which improves transient stability. In conclusion, a faithful representation of the number of online synchronous machines is of vital importance for stability assessment. An individual unit representation, however, is computationally expensive, so the computational burden should be reduced, as discussed in the following section. Next, an explicit network representation is required. An AC load flow formulation, however, is nonlinear (and non-convex), which results in an intractable mixed-integer nonlinear problem. Therefore, we use a DC load flow representation with a sufficiently small voltage angle difference on transmission lines. Our experience shows that an angle difference of 30° results in a manageably small number of infeasible operating conditions that can be dealt with separately. B. Computational Speedup The MST is based on the UC formulation using constant fixed, startup, shutdown and production costs. To improve its computational efficiency, the dimensionality of the optimization problem is reduced by employing: (i) unit clustering [21] to reduce the number of variables needed to represent a multi-unit generation plant; (ii) a rolling horizon approach [25], [30], [36] to reduce the time dimension; and (iii) constraint clipping to remove most non-binding constraints. 1) Unit Clustering: Linearized UC models are computationally efficient for horizons of up to a few days, which makes them extremely useful for operational studies. For planning studies, however, where horizon lengths can be up to a year or more, these models are still computationally too expensive. Our work builds on the clustering approach proposed in [21], where identical units at each generation plant are aggregated by replacing binary variables with fewer integer variables. The status of online units, startup/shutdown decisions and dispatched power are tracked by three integer variables and one continuous variable per plant per period, as opposed to three binary variables and one continuous variable per unit per period. Further clustering proposed in [21] is not possible in our formulation because of the explicit network representation required in the MST. A minimal sketch of this idea for the three-unit plant from the illustration above is given below.
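To make the clustering idea concrete, here is a minimal sketch — not the authors' implementation; PuLP and all names are choices of this sketch — of the three-unit plant from the illustration in Section III-A, modeled with a single integer status variable instead of three binaries. With 1 pu, H = 5 s units, the number of online units s directly sets the plant's inertia proxy and spinning reserve:

```python
from pulp import LpProblem, LpMinimize, LpVariable, value

# One clustered plant: 3 identical units, each with P in [0.4, 1.0] pu, H = 5 s.
N_UNITS, P_MIN, P_MAX, H, S_UNIT = 3, 0.4, 1.0, 5.0, 1.0
demand = 1.2  # pu, case (ii) of the illustration

prob = LpProblem("clustered_dispatch", LpMinimize)
s = LpVariable("online_units", lowBound=0, upBound=N_UNITS, cat="Integer")
p = LpVariable("dispatch", lowBound=0)

prob += s                       # proxy objective: commit as few units as possible
prob += p == demand             # power balance for this toy case
prob += p >= P_MIN * s          # clustered minimum stable loading
prob += p <= P_MAX * s          # clustered capacity limit
prob.solve()

s_on = value(s)
print("online units:", s_on)
print("kinetic energy [pu.s]:", H * S_UNIT * s_on)        # inertia proxy
print("spinning reserve [pu]:", P_MAX * s_on - value(p))  # online headroom
```

For a demand of 1.2 pu the solver commits two units (0.8 ≤ 1.2 ≤ 2.0), mirroring case (ii) of Fig. 3, while the number of online units — and hence inertia and reserves — remains explicit.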
2) Rolling Horizon: Solving the UC as one block, especially for long horizons, is computationally too expensive. This can be overcome by breaking the problem into several smaller intervals called sub-horizons [25], [30], [36]. To ensure accuracy and consistency of the solution, a proper overlap between sub-horizons is maintained, and the terminating state of the previous sub-horizon is used as the initial condition of the next sub-horizon (a sketch of the driver loop is given below, after the reserve constraints). The minimum sub-horizon length depends on the time constants associated with the decision variables. While these might be in the order of hours for thermal power plants, they can be significantly longer for energy storage. Large-scale hydro dams, for example, require horizon lengths of several weeks, or even months. In our research, however, the sub-horizon length is up to a few days, to cater for the thermal energy storage (TES) of CST plants and battery storage. The optimization of hydro dams is not explicitly considered; however, it can be taken into account heuristically, if needed. 3) Constraint Clipping: The size of the problem can be reduced by removing non-binding constraints, which does not affect the feasible region. For instance, an MUDT constraint on a unit with an MUDT less than the time interval is redundant 4. Similarly, a ramp constraint for flexible units is redundant if the time step is sufficiently long. With a higher RES penetration, in particular, where backup generation is provided by fast-ramping gas turbines, this technique can significantly reduce the size of the optimization problem and hence improve the computational performance, due to a larger number of units with higher ramp rates and smaller MUDTs. It should be noted that optimization pre-solvers might not be able to automatically remove these constraints. C. MST Formulation 1) Objective Function: The MST minimizes the total operating cost, minimize over Ω: Σ_{t∈T} Σ_{g∈G} (c^fix_g s_g,t + c^su_g u_g,t + c^sd_g d_g,t + c^var_g p_g,t), where Ω = {s_g,t, u_g,t, d_g,t, p_g,t, p_s,t, p_l,t} are the decision variables of the problem, and c^fix_g, c^su_g, c^sd_g, and c^var_g are the fixed, startup, shutdown and variable costs, respectively. As typically done in planning studies [21], [33], the costs are assumed constant to reduce the computational complexity. The framework, however, also admits the piecewise linear approximation proposed in [20]. 2) System Constraints: System constraints 5 include power balance constraints, power reserve requirements, and minimum synchronous inertia requirements. Power balance: Power generated at node n must be equal to the node power demand plus the net power flow on the transmission lines connected to the node, where G_n, C_n, P_n, S_n, L_n represent, respectively, the sets of generators, consumers, prosumers 6, utility storage plants and lines connected to node n. Power reserves: To cater for uncertainties, active power reserves provided by synchronous generation g ∈ G^syn are maintained in each region r: Σ_{g∈(G^syn−G^CST)∩G_r} (p̄_g s_g,t − p_g,t) + Σ_{g∈G^CST∩G_r} min(p̄_g s_g,t − p_g,t, e_g,t − p_g,t) ≥ Σ_{n∈N_r} p^r_n,t (7). For synchronous generators other than concentrated solar thermal (CST), reserves are defined as the difference between the online capacity and the current operating point. For CST, reserves can be limited either by the online capacity or by the energy level of the thermal energy storage (TES). The variable s_g,t in (7) represents the total number of online units at each generation plant, and G_r and N_r represent the sets of generators and nodes in region r, respectively.
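Returning to the rolling-horizon scheme of Section III-B.2, the driver loop admits a compact sketch; `solve_subhorizon` is hypothetical and stands in for one MILP solve of the formulation being assembled here, returning the dispatch and the terminal state (online units, storage SOC) that seeds the next sub-horizon:

```python
# Rolling-horizon driver: split T_total periods into sub-horizons of length
# `horizon` with an extra `overlap` lookahead; only the firm part is kept.
def rolling_horizon(T_total, horizon, overlap, initial_state, solve_subhorizon):
    results, state, t = [], initial_state, 0
    while t < T_total:
        t_end = min(t + horizon + overlap, T_total)
        sol = solve_subhorizon(t_start=t, t_end=t_end, initial_state=state)
        results.extend(sol.dispatch[:horizon])   # discard the overlap lookahead
        if t + horizon < T_total:
            state = sol.state_at(t + horizon)    # terminal -> next initial state
        t += horizon
    return results
```

The overlap exists solely to avoid myopic decisions at sub-horizon boundaries; its length must cover the slowest time constant carried across boundaries (TES and battery SOC here).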
Minimum synchronous inertia requirement: To ensure frequency stability, a minimum level of inertia provided by synchronous generation must be maintained at all times in each region r (more details are available in [14]). 3) Network Constraints: Network constraints include DC power flow constraints and thermal line limits for AC lines, and active power limits for HVDC lines. Line power constraints: A DC load flow model is used for computational simplicity for AC transmission lines (a sufficiently small, ∼30°, voltage angle difference over a transmission line is used to reduce the number of nonconvergent AC power flow cases), where the variables δ_x,t and δ_y,t represent the voltage angles at nodes x ∈ N and y ∈ N, respectively. Thermal line limits: Power flows on all transmission lines are limited by the respective thermal limits, |p_l,t| ≤ p̄_l, where p̄_l represents the thermal limit of line l. 4) Generation Constraints: Generation constraints include the physical limits of individual generation units. For the binary unit commitment (BUC), we adopted a UC formulation requiring three binary variables per time slot (on/off status, startup, shutdown) to model an individual unit. In the MST, identical units of a plant are clustered into one individual unit [21]. This requires three integer variables (on/off status, startup, and shutdown) per generation plant per time slot, as opposed to three binary variables per generation unit per time slot in the BUC, as discussed in Section III-B. Generation limits: Dispatch levels of a synchronous generator g are limited by the respective stable operating limits: s_g,t p̲_g ≤ p_g,t ≤ s_g,t p̄_g, g ∈ G^syn. The power of RES generation is limited by the availability of the corresponding renewable resource (wind or sun). Unit on/off constraints: A unit can be turned on if and only if it is in the off state, and vice versa. In the rolling horizon approach, consistency between adjacent time slots is ensured by (13) and (14), where ŝ_g is the initial number of online units of generator g. Equations (13) and (14) also implicitly determine the upper bounds of u_g,t and d_g,t in terms of changes in s_g,t. Number of online units: Unlike the BUC, the MST requires an explicit upper bound on the status variables (15). Ramp-up and ramp-down limits: Ramp rates of synchronous generation should be kept within the respective ramp-up (16), (17) and ramp-down (18), (19) limits: p_g,t − p_g,t−1 ≤ s_g,t r⁺_g, t ≠ 1 (16); p_g,t − p̂_g ≤ s_g,t r⁺_g, t = 1 (17); p_g,t−1 − p_g,t ≤ s_g,t−1 r⁻_g, t ≠ 1 (18); p̂_g − p_g,t ≤ ŝ_g r⁻_g, t = 1 (19); each for g ∈ {G^syn | r^±_g ∆t < p̄_g}. In the MST, the ramp limit of a power plant is defined as the product of the ramp limit of an individual unit and the number of online units in the plant, s_g,t. If s_g,t is binary, these ramp constraints are mathematically identical to the ramp constraints of the BUC. If a ramp rate multiplied by the length of the time resolution ∆t is not less than the rated power, the rate limit has no effect on the dispatch, so the corresponding constraint can be eliminated. The constraints explicitly defined for t = 1 are used to join two adjacent sub-horizons in the rolling-horizon approach.
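The elimination rules for the ramp limits above (and for the minimum up/down times immediately below) translate directly into a pre-filter applied while building the model; `Unit` and its fields are hypothetical, and, as the text notes, MILP pre-solvers may not perform these reductions automatically:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    p_max: float      # rated power of one unit (MW)
    ramp_rate: float  # MW per hour
    min_up: float     # minimum up time (h)
    min_down: float   # minimum down time (h)

def needs_ramp_constraints(u: Unit, dt: float) -> bool:
    # Redundant if the unit can sweep its whole output range within one step.
    return u.ramp_rate * dt < u.p_max

def needs_mudt_constraints(u: Unit, dt: float) -> bool:
    # Redundant if the unit may cycle freely within one step.
    return u.min_up > dt or u.min_down > dt

ocgt = Unit(p_max=200.0, ramp_rate=600.0, min_up=0.5, min_down=0.5)
assert not needs_ramp_constraints(ocgt, dt=1.0)   # fast gas unit: clipped
assert not needs_mudt_constraints(ocgt, dt=1.0)   # short MUDT: clipped
```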
Minimum up and down times: Steam generators must remain on for a period of time τ^u_g once turned on (minimum up time). Similarly, they must not be turned on for a period of time τ^d_g once turned off (minimum down time). Similar to the rate limits, if the minimum up and down times are smaller than the time resolution ∆t, the corresponding constraints can be eliminated. Due to the integer nature of the discrete variables in the MST, the definition of the MUDT constraints in the rolling horizon approach requires the number of online units over the last τ^{u/d} time intervals to establish the relationship between adjacent sub-horizons. If τ^{u/d}_g is smaller than the time resolution ∆t, then these constraints can be eliminated. 5) CST Constraints: CST constraints include the TES energy balance and storage limits. TES state of charge (SOC): The TES energy balance is subject to the accumulated energy in the previous time slot, thermal losses, the thermal power provided by the solar field and the electrical power dispatched from the CST plant: e_g,t = η_g e_g,t−1 + p^CST_g,t − p_g,t, t ≠ 1, g ∈ G^CST (24), where p^CST_g,t is the thermal power collected by the solar field of generator g ∈ G^CST. TES limits: The energy stored is limited by the capacity of the storage tank. 6) Utility Storage Constraints: Utility-scale storage constraints include the energy balance, storage capacity limits and power flow constraints. The formulation is generic and can capture a wide range of storage technologies. Utility storage SOC limits determine the energy balance of storage plant s: e_s,t = η_s ê_s + p_s,t, t = 1. Utility storage capacity limits: The energy stored is limited by the capacity of storage plant s. Charge/discharge rates limit the charge and discharge powers of storage plant s, where p⁻_s and p⁺_s represent the maximum power discharge and charge rates of a storage plant, respectively. 7) Prosumer Sub-Problem: The prosumer sub-problem captures the aggregated effect of prosumers. It is modeled using a bi-level framework in which the upper-level unit commitment problem described above minimizes the total generation cost, and the lower-level problem maximizes the prosumers' self-consumption. The coupling is through the prosumers' demand, not through the electricity price, which renders the proposed model market-structure agnostic. As such, it implicitly assumes a mechanism for demand response aggregation. The Karush-Kuhn-Tucker optimality conditions of the lower-level problem are added as constraints to the upper-level problem, which reduces the problem to a single mixed-integer linear program. The model makes the following assumptions: (i) the loads are modeled as price anticipators; (ii) the demand model representing an aggregator consists of a large population of prosumers connected to an unconstrained distribution network who collectively maximize self-consumption; (iii) aggregators do not alter the underlying power consumption of the prosumers; and (iv) prosumers have smart meters equipped with home energy management systems for scheduling of the PV-battery systems, and a communication infrastructure is assumed that allows two-way communication between the grid, the aggregator and the prosumers. More details can be found in [15]. Prosumer objective function: Prosumers aim to minimize their electricity expenditure, where λ is the applicable feed-in price ratio. In our research, we assumed λ = 0, which corresponds to maximization of self-consumption.
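For intuition about the λ = 0 objective, a greedy self-consumption heuristic is sketched below — this is an illustration only, not the bi-level KKT formulation actually used in the tool, and all numbers are made up:

```python
import numpy as np

def self_consume(load, pv, e_max=10.0, p_rate=3.0, eta=0.95, e0=0.0):
    """Greedy lambda=0 policy: charge the battery from PV surplus, discharge
    to cover deficits; whatever remains is grid import (+) or feed-in (-)."""
    e, grid = e0, []
    for L, P in zip(load, pv):
        surplus = P - L
        if surplus >= 0:                                # charge from surplus
            c = min(surplus, p_rate, (e_max - e) / eta)
            e += eta * c
            grid.append(-(surplus - c))                 # leftover is fed in
        else:                                           # discharge to cover load
            d = min(-surplus, p_rate, e * eta)
            e -= d / eta
            grid.append(-surplus - d)                   # shortfall is imported
    return np.array(grid)

load = np.array([0.5, 0.5, 0.8, 1.5, 2.0, 1.0])  # kW, illustrative
pv   = np.array([0.0, 1.0, 3.0, 2.0, 0.5, 0.0])
print(self_consume(load, pv))
```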
The prosumer sub-problem is subject to the following constraints. Prosumer power balance: The electrical consumption of prosumer p, consisting of the grid feed-in power, p^g−_p,t, the underlying consumption, p_p,t, and the battery charging power, p^b_p,t, is equal to the power taken from the grid, p^g+_p,t, plus the power generated by the PV system, p^pv_p,t: p^g−_p,t + p_p,t + p^b_p,t = p^g+_p,t + p^pv_p,t. Battery charge/discharge limits: The battery power should not exceed the charge/discharge limits, where p⁻_b and p⁺_b represent the maximum power discharge and charge rates of the prosumer's battery, respectively. Battery storage capacity limits: The energy stored in the battery of prosumer p should always be less than its capacity. Battery SOC limits: The battery SOC is the sum of the power inflow and the SOC in the previous period, where ê^b_p represents the initial SOC and is used to establish the connection between adjacent sub-horizons. IV. SIMULATION SETUP The case studies provided in this section compare the computational efficiency of the proposed MST with alternative formulations. For detailed studies on the impact of different technologies on future grids, an interested reader can refer to our previous work [14]-[19]. A. Test System We use a modified 14-generator IEEE test system that was initially proposed in [22] as a test bed for small-signal analysis. The system is loosely based on the Australian National Electricity Market (NEM), the interconnection on the Australian eastern seaboard. The network is stringy, with large transmission distances and loads concentrated in a few load centres. Generation, demand and the transmission network were modified to meet future load requirements. The modified model consists of 79 buses grouped into four regions, 101 units installed at 14 generation plants, and 810 transmission lines. B. Test Cases To expose the limitations of the different UC formulations, we have selected a typical week with sufficiently varying operating conditions. Four diverse test cases with different RES penetrations are considered. First, RES0 considers only conventional generation, including hydro, black coal, brown coal, combined cycle gas and open cycle gas. The generation mix consists of 2.31 GW of hydro, 39.35 GW of coal and 5.16 GW of gas, with a peak load of 36.5 GW. To cater for demand and generation variations, 10% reserves are maintained at all times. The generators are assumed to bid at their respective short-run marginal costs, based on regional fuel prices [37]. Cases RES30, RES50 and RES75 consider, respectively, 30%, 50% and 75% annual energy RES penetration, supplied by wind, PV and CST. Normalized power traces for PV, CST and wind farms (WFs) for the 16 zones of the NEM are taken from AEMO's planning document [23]. The locations of the RESs are loosely based on AEMO's 100% RES study [10]. C. Modeling Assumptions The power traces of all PV modules and wind turbines at one plant are aggregated and represented by a single generator. This is a reasonable assumption given that PV and WFs do not provide active power reserves and are not limited by ramp rates, MUDTs, or startup and shutdown costs, which renders the information on the number of online units unnecessary. Also worth mentioning is that RES could be modeled as negative demand, but this can lead to an infeasible solution: modeling RES (wind and solar PV) as negative demand is identical to preventing RES from spilling energy. Given the high RES penetration in future grids, we model RES explicitly as individual generators.
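A minimal illustration of the distinction drawn above (PuLP again, with hypothetical numbers): negative-demand modeling fixes the RES injection at its availability, whereas an explicit generator variable lets the optimizer spill:

```python
from pulp import LpVariable

res_avail = 850.0   # available wind/PV output for one interval, MW (illustrative)
node_load = 1000.0  # underlying demand, MW

# Negative-demand modeling: injection fixed at availability, spill forbidden;
# at high RES penetration this can render the UC infeasible.
net_demand = node_load - res_avail

# Explicit-generator modeling: 0 <= p_res <= availability, spill is a decision.
p_res = LpVariable("p_res", lowBound=0.0, upBound=res_avail)
```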
Unlike solar PV and wind, CST requires a different modeling approach. Given that CST is synchronous generation, it also contributes to spinning reserves and system inertia. Therefore, the number of online units in a CST plant needs to be modeled explicitly. An optimality gap of 1% was used for all test cases. Simulations were run on a Dell OPTIPLEX 9020 desktop computer with an Intel(R) Core(TM) i7-4770 CPU with a 3.40 GHz clock speed and 16 GB of RAM. V. RESULTS AND DISCUSSION To showcase the computational efficiency of the proposed MST, we first benchmark its performance for different horizon lengths against the BUC formulation, employing three binary variables per unit per time slot, and the AGG formulation, where identical units at each plant are aggregated into a single unit, which requires three binary variables per plant per time slot. We pay particular attention to the techniques used for computational speedup, namely unit clustering, rolling horizon, and constraint clipping. Last, we compare the results of the proposed MST with the BUC and AGG formulations for voltage and frequency stability studies. A. Binary Unit Commitment (BUC) We first run the BUC for horizon lengths varying from one to seven days, Fig. 4 (top). As expected, with the increase in the horizon length, the solution time increases exponentially. For a seven-day horizon, the solution time is as high as 25 000 s (7 h). Observe how the computational burden is highly dependent on the RES penetration. The variability of the RES results in an increased cycling of the conventional thermal fleet, which increases the number of on/off decisions and, consequently, the computational burden. B. Aggregated Formulation (AGG) Aggregating identical units at a power plant into a single unit results in a smaller number of binary variables, which should in principle reduce the computational complexity. Fig. 4 confirms that this is mostly true; however, for RES50-HL7 the computation time is higher than in the BUC formulation. The reason is that, in this particular case, the BUC formulation has a tighter relaxation than the AGG formulation and, consequently, a smaller root node gap. The MST, which has a similar number of variables to the AGG formulation, has a considerably shorter computation time due to a smaller root node gap. In terms of accuracy, the AGG formulation works well for balancing studies [18], [19]. On the other hand, the number of online synchronous generators in the dispatch differs significantly from the BUC, which negatively affects the accuracy of voltage and frequency stability analysis, as shown later. Due to the large number of online units in a particular scenario, a direct comparison of dispatch levels and reserves from each generator is difficult. Therefore, we compare the total number of online synchronous generators, which serves as a proxy for the available system inertia. Fig. 5 shows the number of online generators for four different RES penetration levels for a horizon length of seven days. For most of the hours there is a significant difference between the number of online units obtained from the BUC and the AGG formulation. In conclusion, despite its computational advantages, the AGG formulation is not appropriate for stability studies due to large variations in the number of online synchronous units in the dispatch results. In addition to that, its computational time is comparable to the BUC in some cases.
We now evaluate the effectiveness of the techniques for the computational speedup.

1) Unit Clustering: In unit clustering, binary variables associated with the generation unit constraints are replaced with a smaller number of integer variables, which allows several identical units to be aggregated into one equivalent unit while retaining the number of online units. This results in a significant reduction in the number of variables and, consequently, in a computational speedup. Compared to the BUC, the number of variables in the MST with this technique alone reduces from 24 649 to 5990 for RES75 with a horizon length of seven days. The solution time for RES75-HL7 thus reduces from 25 000 s in the BUC to 450 s in the MST with unit clustering alone.

2) Rolling Horizon Approach: A rolling horizon approach splits the UC problem into shorter horizons. Given the exponential relationship between the computational burden and the horizon length, as discussed in Section V-A, solving the problem in a number of smaller chunks instead of in one block results in a significant computational speedup. The accuracy and the consistency of the solution are maintained by having an appropriate overlap between adjacent horizons. The required overlap depends on the time constants of the problem; long-term storage, for example, might require longer solution horizons. The solution times for different RES penetrations are shown in Table I. Observe that in the RES75 case the effect of the rolling horizon is much more pronounced, which confirms the validity of the approach for studies with high RES penetration.

3) Constraint Clipping: Eliminating non-binding constraints can speed up the computation even further. Table II shows the number of constraints for different scenarios with and without constraint clipping. Observe that the number of redundant constraints is higher in scenarios with a higher RES penetration. The reason is that a higher RES penetration requires more flexible gas generation whose ramp rates allow the full operating range to be traversed within one time step (one hour in our case), which renders the corresponding ramp constraints non-binding. Note that the benefit of constraint clipping will be smaller with a shorter time resolution.

C. MST Computation Time and Accuracy

The proposed MST outperforms the BUC and AGG in terms of computational time by several orders of magnitude, as shown in Fig. 4 (bottom). The difference is more pronounced at higher RES penetration levels. For RES75, the MST is more than 500 times faster than the BUC. In terms of accuracy, the MST results are almost indistinguishable from the BUC results, as evident from Fig. 5, which shows the number of online synchronous units for different RES penetration levels. Minor differences in the results stem from the nature of the optimization problem. Due to its mixed-integer structure, the problem is non-convex and therefore has several local optima. Given that the BUC and the MST are mathematically not equivalent, the respective solutions might not be exactly the same. The results are nevertheless very close, which confirms the validity of the approach for the purpose of scenario analysis. The loadability and inertia results presented later further support this conclusion.

D. Stability Assessment

To showcase the applicability of the MST for stability assessment, we analyze system inertia and loadability, which serve as proxies for frequency and voltage stability, respectively.
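Before turning to the stability results, here is a minimal sketch of the rolling-horizon decomposition described in point 2) above: the horizon is solved in overlapping chunks, and the terminal state of each chunk initialises the next. The chunk and overlap lengths are assumed values, and solve_chunk() is a stub standing in for the actual UC solver.

```python
def solve_chunk(t0, t1, initial_state):
    # Stub: a real implementation would build and solve the UC MILP for
    # hours [t0, t1); here the state is simply carried through unchanged.
    return {"dispatch": [(t, initial_state) for t in range(t0, t1)],
            "final_state": initial_state}

def rolling_horizon(n_hours, chunk=48, overlap=24, state0=0.0):
    results, state, start = [], state0, 0
    while start < n_hours:
        end = min(start + chunk, n_hours)
        sol = solve_chunk(start, end, state)
        # Commit only the non-overlapping part; the overlap is re-solved in
        # the next chunk, which keeps the solution consistent at the seam.
        keep = end if end == n_hours else end - overlap
        results.extend(d for d in sol["dispatch"] if d[0] < keep)
        state = sol["final_state"]  # in practice, the state at hour `keep`
        start = keep
    return results

week = rolling_horizon(n_hours=168)  # a seven-day horizon in 48 h chunks
```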
More detailed stability studies are covered in our previous work, including small-signal stability [13], frequency stability [14] and voltage stability [17].

1) System Inertia: Fig. 6 (bottom) shows the system inertia for the BUC, AGG and the proposed MST, respectively, for RES0. Given that inertia is the dominant factor in the frequency response of a system after a major disturbance, the minuscule difference between the BUC and the MST observed in Fig. 6 validates the suitability of the MST for frequency stability assessment. The inertia captured by the AGG, on the other hand, is either over- or underestimated and therefore does not provide a reliable basis for frequency stability assessment.

2) Loadability Analysis: The dispatch results from the MST are used to calculate power flows, which are then used in loadability analysis. The loadability analysis is performed by uniformly increasing the load in the system until the load flow fails to converge; the loadability margin is calculated as the difference between the base system load and the load in the last convergent load flow iteration. Fig. 6 (top) shows loadability margins for the RES0 scenario for different UC formulations. Observe that the BUC and the MST produce very similar results. The AGG formulation, on the other hand, gives significantly different results. From hours 95 to 150, in particular, the AGG results show that the system is unstable most of the time, which is in direct contradiction to the accurate BUC formulation. Compared to the inertia analysis, the differences between the formulations are much more pronounced. Unlike voltage, frequency is a system variable, which means that it is uniform across the system. In addition, inertia only depends on the number of online units, not on their dispatch levels. Voltage stability, on the other hand, is highly sensitive both to the number of online units and to their dispatch levels, which affect the available reactive power support capability, as illustrated in Fig. 3. Close to the voltage stability limit, the system becomes highly nonlinear, so even small variations in dispatch results can significantly change the power flows and, consequently, the voltage stability of the system. One can argue that, in comparison to the BUC, the proposed MST results in a more conservative loadability margin, although this is not always the case (around hour 85, the MST is less conservative).

VI. CONCLUSION

This paper has proposed a computationally efficient electricity market simulation tool based on a UC problem suitable for future grid scenario analysis. The proposed UC formulation includes an explicit network representation and accounts for the uptake of emerging demand-side technologies in a unified generic framework while allowing for a subsequent stability assessment. We have shown that unit aggregation, used in conventional planning-type UC formulations to achieve computational speedup, fails to properly capture the system inertia and reactive power support capability, which are crucial for stability assessment. To address this shortcoming, we have proposed a UC formulation that models the number of online generation units explicitly and is amenable to the computationally expensive time-series analysis required in future grid scenario analysis. To achieve further speedup, we use a rolling horizon approach and constraint clipping.
The effectiveness of the computational speedup techniques depends on the problem structure and the technologies involved, so the results cannot be readily generalized. The computational speedup varies from 20 times to more than 500 times, for zero and 75% RES penetration respectively, which can be explained by the more frequent cycling of the conventional thermal units in the high-RES case. The simulation results have shown that the computational speedup does not jeopardize the accuracy. Both the number of online units, which serves as a proxy for the system inertia, and the loadability results are in close agreement with more detailed UC formulations, which confirms the validity of the approach for long-term future grid studies, where one is more interested in finding weak points in the system than in a detailed analysis of an individual operating condition.
Emotional Competences of Primary Education Teachers: A Need in School Post COVID-19

The COVID-19 pandemic has increased the number of students with mental health problems: depression, anxiety, stress. Faced with this reality, teachers and schools must be prepared to respond quickly and effectively. Therefore, the objective of this article is to analyze the emotional competences of primary school teachers in the city of Valencia based on the following sociodemographic variables: sex, age, professional experience, type of center and whether they have children. For this purpose, a quantitative methodological approach has been followed, through which the emotional competencies of primary education teachers are analyzed. These results allow us to establish teacher profiles according to sociodemographic variables and help to detect possible training deficiencies. A sample of 371 primary education teachers in the city of Valencia has been analyzed. The Questionnaire on Teaching Competences of Primary Education Teachers, developed under the Planned Action Model, has been used, and descriptive, univariate, bivariate and cluster analyses have been carried out. The mean, the standard deviation and the interquartile range (IQR) have been computed, and non-parametric tests such as the Wilcoxon, Kruskal-Wallis and Z tests have been applied. The most significant results are that female teachers have a greater ability to interpret emotions and to listen to students, whereas it is male teachers who most strongly reject prejudice, discrimination and racism. Younger teachers are the ones who implement more inclusive learning environments. Finally, in general, all teachers are very respectful of students and claim to know how to manage classroom conflicts. The results obtained coincide, in general terms, with most of the research on teachers' emotional competencies; only some aspects do not coincide with the literature. The teachers who participated in our research perceive themselves as having a greater capacity to observe and interpret students' emotions, to generate learning situations that cater to diversity and to listen to their students, whereas other studies place these competencies at lower levels.

Introduction

Are teachers prepared to deal with the emotional problems caused by COVID-19 among their students? Do they have the right professional skills? These questions are necessary today, when many students are in a situation of vulnerability [1]. Since March 2020, the time of the school closures, research on teaching competencies has focused on teachers' ability to manage online education [2][3][4][5] or to compensate for the possible learning gaps caused by this teaching [6][7][8][9][10]. Currently, the question is whether they can address the possible psycho-emotional sequelae detected among students. Confinement, safe distancing, masks, the fear of contagion, the loss of family members and the overload of information in the media have generated stress, depression, anxiety, exhaustion, fear and uncertainty [11], along with relationship problems between peers.

With respect to teachers, it was found that, after COVID, those with long experience and high levels of compassion suffered to a greater extent [12]. Other studies analyzing the growth trajectories of teachers' levels of psychological well-being [13] have shown that well-being decreases significantly in those who are older and more experienced, affecting positive relationships with others.
UNESCO [14] has already called for the inclusion of socio-emotional competencies and resilience in teacher training, caring for both students and teachers in order to respond to future challenges. It also calls for the development of "emotional resilience" in online teaching and in the return to face-to-face school, as well as for promoting the learning of socio-emotional skills among students so that they can face the future in a positive way [15,16]. In this process, the skills of school leaders that reinforce the socioemotional capacities of the educators in their schools are important [17].

Save the Children [18], meanwhile, warned of the risk of students' disaffection with school and their loss of interest in learning due to confinement. Teachers were called on to work affectively, incorporating accompaniment, listening and personalized teaching, to make them a key protective factor, a "tutor of resilience" [19].

The World Bank [20] also warned of possible mental health problems among students and schools and of the need to implement social and emotional learning programs to combat anxiety, stress and low self-esteem and to create positive emotional climates in the classroom [21].

In summary, UNESCO [22] saw the crisis as an opportunity to rethink the meaning of education, review evaluation techniques and instruments, improve the teaching of higher thinking skills (questioning, creativity and problem solving) and socio-emotional skills (empathy, teamwork, collaboration, resilience, proactivity, initiative and responsible behavior) and re-emphasize the role of the school as a safe environment for peer relationships [23].

This article has the following objective: to analyze the emotional competences of primary education teachers in the city of Valencia according to the following sociodemographic variables: sex, age, professional experience, type of center and whether they have children. These results allow for the establishment of teacher profiles based on sociodemographic variables and help to detect possible training deficiencies. These emotional competences of teachers are very necessary nowadays, due to the psycho-emotional consequences and mental health problems derived from COVID-19 [24][25][26][27][28] among primary school students: anxiety, stress, depression and fear.

For this purpose, this research proposes two hypotheses to analyze the emotional competences of teachers. These have been formulated based on the observation of reality, the reading of the scientific literature and the analysis of the results of similar research [29][30][31][32][33][34][35].

Hypothesis 1. The personal characteristics and emotional competences of teachers interfere in the management of students, finding differences depending on sociodemographic and professional variables (sex, age, professional experience, type of center in which they work, whether they have children and teaching vocation).

Hypothesis 2. Classroom management presents significant differences with respect to the emotional competences of teachers, depending on professional and sociodemographic variables (sex, age, professional experience, type of center in which they work, whether they have children and teaching vocation).
This research is part of a broader design to evaluate the teaching competencies of teachers based on the theory of reasoned action of Ajzen and Fishbein [36]. For this purpose, some items were taken from the beliefs-and-attitudes subscale of the CCPES II questionnaire [37], which allows for the evaluation of the teacher's self-perceived socioemotional competencies, such as conflict resolution, active listening, empathy, assertiveness, teaching commitment to students, etc. Teachers' emotional competencies can be defined as the set of beliefs, attitudes, skills, subjective norms, behavioral intentions and behaviors that favor the adequate management of students and the classroom, taking into account the emotional aspects and feelings of students, teachers and families [30].

Emotional Competence in Teachers

The teacher plays a fundamental role in the construction of students' personal identity, above all through their emotional influence [38]. It could even be affirmed that the internal socioemotional competence of teachers is a predictor of the relationship between them and their students [39]. In this process, the influence of teachers' own identity and their conceptions of power has been demonstrated [40]. It has also been shown that teachers with high emotional competence also show high personal well-being and an assertive style, which correlates with their educational style [41] and influences students' learning and their disposition to learn [42]. For this reason, it is necessary to include emotional competence in teacher training [43], since it has a direct impact on the personal, social and academic development of the students (on personal relationships, the classroom climate, methodology, motivation, academic results, conflict management, and their self-esteem, empathy and resilience) [44].

Competences for the Management of Students

The scientific literature demonstrates the link between the emotional competences of teachers and their ability to manage students [45][46][47]. Teachers who have not developed emotional competence cannot educate their students [48]. In addition, there must be an adequate interrelation between cognitive and emotional processes to respect students in their personalized, integral development process [49]. Teachers must self-regulate in order to help students manage and interpret their own emotions, fostering their self-esteem and enabling them to respond appropriately to life's challenges [50]. This emotional component is reflected in their teaching style [51], which must be linked to their teaching vocation and their professional identity, shaped during their training [48] and their internship [52].

Hence the importance of university training for future teachers. When comparing the levels of emotional intelligence and empathy among students of different degrees, it was observed that, in the first year, the level of emotional attention was higher in education students, and that, in the fourth year, the levels of emotional repair and clarity increased significantly in the education and medicine degrees [53]. Regarding the differences between university students in early childhood and primary education, no differences have been found in their emotional competencies [34]. The combination of experience and specific training has been shown to be the most important factor for the majority of the components of emotional competence and the one that best enables students to develop it [32].
Competences for Classroom Management

Several studies show how the emotional competences of teachers affect the climate and management of the classroom and have an impact on the well-being of students [54]. When practicing teachers are analyzed, it is observed that they perceive themselves as having an average level in this competence, but that, as they advance in age and experience, they decrease in attention and emotional clarity [33]. In addition, Palomera et al. [55] and Gutiérrez and Buitrago [56] add that contextual and methodological variables must also be considered for conflict management, in such a way that having a well-planned didactic style or taking care of the contexts by generating adequate learning environments facilitates the classroom climate and reduces the impact of interpersonal conflicts [57]. The emotional competencies of teachers and their values are present in the way conflicts are resolved in the classroom, as well as in their predisposition to commitment, listening and the timing of their intervention [58]. On the other hand, it is necessary to create a living, innovative, flexible, dynamic, versatile, changing and transformative learning environment, with a clear didactic intention [59], i.e., an environment that responds to the what, how and why of teaching, that is based on a deep knowledge of the developmental psychology and personal characteristics of students [60], and that generates human relationships based on emotions and feelings [61].

Materials and Methods

This research follows a quantitative methodological approach, through which the emotional competences of primary education teachers are analyzed.

Participants

The sample was obtained via non-probability convenience sampling. Teachers participated voluntarily and anonymously, through the online response to the Questionnaire on Teaching Competences, which was sent to schools by email. A total of 371 teachers participated, from both public and subsidized schools (Table 1). To avoid bias in the selection, the sample size was checked against the formula for finite populations, with a confidence level of 95% and an estimation error of 5%.

Instrument

The Questionnaire on Teaching Competences of Primary Education Teachers [36], developed under the Planned Action Model [37], is used. This theory allows us to predict teacher behaviors based on their beliefs, attitudes, skills, subjective norms and behavioral intentions. A discussion group and a commission of experts in primary education put forward the first formulation of the questionnaire items, and all the proposals were submitted to evaluation and judgement. Subsequently, a pilot study was carried out to validate the questionnaire and to debug those items that could present problems (n = 154). During this process, we went from the initial 97 proposals to a pilot questionnaire made up of 65 items, and finally to a questionnaire made up of 60 items on a 5-point Likert scale, where 1 was Totally Disagree and 5 was Totally Agree. The questionnaire was divided into 6 factors, following the theory of planned action, and its reliability (Cronbach's alpha = 0.917) and validity (KMO = 0.757) were excellent. For this research, 13 items referring to teachers' emotional competencies were used (Table 2). The reality resulting from the COVID-19 pandemic, in which a multitude of primary school students present psycho-emotional health problems, requires the evaluation of teaching skills to face this new challenge.
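As a side note on the sampling described above, a minimal sketch of the finite-population sample-size calculation (95% confidence, 5% error) follows. The population size N used here is an assumed illustrative value, not a figure reported in the study.

```python
import math

def sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's formula with the finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / e ** 2       # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # corrected for population N

print(sample_size(N=10000))  # ~370, in line with the 371 respondents
```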
(Table 2 note: CMM stands for "cuanto más mejor" in Spanish ("the higher, the better"); CMP stands for "cuanto más peor" in Spanish ("the higher, the worse"). An example of an item scored CMP is "I don't like the teaching profession".)

Data Analysis

The (summative) scale is configured under the criterion of "the more the better". Thus, the higher the value on the scale obtained by adding up all the scores of the questions, the higher the rating in socioemotional competence (Table 2). Its evaluative character appears in the right column: CMM indicates that a higher score means a better rating on the scale, and CMP indicates that a higher score means a worse rating. CMP items are presented in a negative way, so their answers have been reversed for analysis: answer 1 equals 5, answer 2 equals 4, answer 3 equals 3, answer 4 equals 2 and answer 5 equals 1.

Descriptive, univariate, bivariate and cluster analyses were performed to answer the research questions. Where the type of variable required it (scale variables), we used the mean, standard deviation (sd), interquartile range (IQR), minimum, first quartile, median, third quartile and maximum.

It is necessary to resort to the corresponding nonparametric tests: the Mann-Whitney-Wilcoxon test (hereinafter the Wilcoxon test) to compare two groups and the Kruskal-Wallis test to compare more than two. For the contrast of proportions, the Z test for two samples is used whenever possible; its objective is to determine whether two independent samples were taken from populations with the same proportion of elements presenting a certain characteristic. When necessary, the 95% confidence interval is also presented for the proportion of elements that have a certain characteristic. For the independence analysis of contingency tables, the χ² test is used or, when it is not applicable, Fisher's exact test. For the analysis of the results, the R program is used, together with the RCommander libraries, the graphics library ggplot2 and the ca library for correspondence analysis. For some independence contrasts, the RcmdrPlugin.IPSUR library is used [62].

Analysis of Compliance with the Hypotheses Raised

Hypothesis 1. The personal characteristics and emotional competences of teachers interfere in the management of students, finding differences depending on sociodemographic and professional variables (sex, age, professional experience, type of center in which they work, whether they have children and teaching vocation).

(a) Ability to interpret emotions

If the ability of teachers to interpret emotions is analyzed according to sex, it is observed that female teachers obtain better results (Wilcoxon, p-value = 0.00001717 < 0.05). They can observe and understand the emotions and feelings of their students better than their male colleagues (Figures 1 and 2).
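These two-group and multi-group comparisons recur throughout the results. The study itself used R, as noted above; purely for illustration, here is a minimal sketch of the same tests in Python with scipy, on made-up scores rather than the study's data:

```python
from scipy import stats

women = [5, 4, 5, 5, 4, 5, 3, 5, 4, 5]  # assumed item scores, female teachers
men = [4, 4, 3, 5, 4, 4, 3, 4, 5, 3]    # assumed item scores, male teachers

# Mann-Whitney-Wilcoxon test for two independent groups
u_stat, p_two = stats.mannwhitneyu(women, men, alternative="two-sided")

# Kruskal-Wallis test for more than two groups (e.g., age bands)
g20_30, g31_40, g41_50 = [5, 5, 4, 5], [4, 4, 5, 4], [3, 4, 4, 3]
h_stat, p_multi = stats.kruskal(g20_30, g31_40, g41_50)

print(f"Wilcoxon p = {p_two:.4f}, Kruskal-Wallis p = {p_multi:.4f}")
```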
It should be noted that the proportion of teachers who score high (4 or 5) is CI95% = (93.9, 98.0); that is, the vast majority believe they are able to interpret and observe the emotions and feelings of students. These high scores do not depend on sex (Fisher, p-value = 0.3309), the type of center (Fisher, p-value = 0.5726), experience (Fisher, p-value = 0.1368) or age (Fisher, p-value = 0.3380).

If differences in terms of having children are analyzed, it is observed that there are no significant differences for female teachers (Wilcoxon, p-value = 0.2073 > α = 0.05) (Figure 3), nor for male teachers (Wilcoxon, p-value = 0.4671 > α = 0.05) (Figure 4). This shows that there is no greater sensitivity among teachers with children compared to those without children.

On the other hand, the emotional competencies of teachers also have a lot to do with the way in which teachers interact with students, how they treat them and how they see them. In that sense, it is necessary to analyze personal, educational and cultural respect.

As for whether teachers respect their students, it is observed that the answers do not depend on sex, age, teaching experience, the type of center or whether they have children (Figure 5). All teachers believe they have an attitude of respect towards students. This positive view is confirmed by the number of teachers who score 4 or 5 (Table 3).

In the same way, teachers present a very positive, inclusive vision of their students. In a very high percentage, they reject prejudice, racism and discrimination, although, unfortunately, there is a percentage of teachers (7.5%) who confess a disrespectful attitude (Figure 6).
If analyzed in terms of sociodemographic variables, no differences are observed in terms of age (Kruskal-Wallis, p-value = 0.1439 > 0.05), experience (Kruskal-Wallis, p-value = 0.6898 > 0.05), the type of center (Wilcoxon, p-value = 0.7334 > 0.05) or whether they have children (Wilcoxon, p-value = 0.08935 > 0.05, inconclusive). However, differences are observed regarding sex (Figure 7). Men score higher than women (Wilcoxon, p-value = 0.01283 < 0.05); that is, they more strongly reject prejudice, racism and discrimination. Of the teachers who answer 1, that is, who disagree with the statement of rejecting prejudice, racism and discrimination, 88% are women and 12% are men. This finding is difficult to explain.

The contingency tables of those who score 4 or 5, compared to the variables sex, type of center, children and teaching experience, are as follows (Table 4):

The proportion of men who score high (4 or 5) is higher, with a p-value = 0.02053 < 0.05, although, with such a tight p-value, the result is inconclusive.

The contrast of proportions gives a p-value = 0.7532 > 0.05. Consequently, the proportion of teachers who score high (4 or 5) is the same in subsidized and public schools.

The contrast of proportions of those who score 4 or 5, to test whether the proportion is higher among those who have children, gives a p-value = 0.03624 < 0.05. Therefore, it is higher among those who have children, although with an inconclusive result (very tight p-value).

The p-value = 0.3390 for the independence contrast (Fisher) indicates that the proportion of those who score 4 or 5 is similar according to teaching experience.

The independence contrast (Fisher) for the variables age and high scores (4 or 5) gives a p-value = 0.02939. So, there is dependence between both variables. It seems that teachers over the age of 40 tend to give higher scores.
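The proportion contrasts and Fisher tests reported above follow this pattern; here is a minimal Python sketch (the study used R), with assumed counts rather than the study's data:

```python
from scipy.stats import fisher_exact
from statsmodels.stats.proportion import proportions_ztest

# Z test for two proportions, e.g., high scorers (4 or 5) among men vs women
count, nobs = [120, 210], [130, 241]          # assumed high scorers / group sizes
z_stat, p_z = proportions_ztest(count, nobs)  # H0: equal proportions

# Fisher exact independence test on the corresponding 2x2 contingency table
table = [[120, 10],    # men: high vs not high (assumed)
         [210, 31]]    # women: high vs not high (assumed)
odds_ratio, p_fisher = fisher_exact(table)

print(f"Z test p = {p_z:.4f}, Fisher exact p = {p_fisher:.4f}")
```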
If the dependency relationship between the variables age and rejection of prejudice is analyzed, it is observed that the independence contrast (Fisher) gives a p-value = 0.03183 < 0.05. Therefore, both variables are dependent. The contingency table is as follows (Table 5):

Once the dependency relationship between the two variables has been established, it is interesting to know what this relationship is like. The answer to that question lies in correspondence analysis, which is used to represent possible associations between variables (factors) and to determine if it is possible to observe patterns. The distances between the different categories indicate the greater or lesser relationship between them; the relationships between the elements are displayed as distances in a graph (Figure 8). Correspondence analysis shows that a score of 5 is associated with the highest age group (over 41 years).

Regarding the commitment of teachers to the personal and cultural management of students, which, again, is an indicator of respect for the students, it is observed that there are no major discrepancies depending on the type of center (Figure 9). The Wilcoxon test gives a p-value = 0.3149, confirming the visual impression. The distributions of the scores are similar. All teachers have a high commitment to the construction of the personal and cultural identity of their students, despite the difficulties that may arise. In addition, the number of teachers who score 4 or 5 depending on the type of center (contrast of proportions, p-value = 0.6715) reinforces this idea (Table 6).

(c) Teaching vocation-self-esteem

In this section, it is observed that teachers who like their work tend to promote the self-esteem of their students. This denotes a concern for the emotional development of their students. Although this linear relationship is observed (ANOVA for linear regression, p-value = 0.006 < 0.05), it is not very intense: R² = 0.018. For nonparametric correlations, the results are similar. The Kendall rank correlation coefficient (Kendall's τ), of value τ = 0.175, is significant with a p-value = 0.001 < 0.05. Spearman's rank correlation coefficient (Spearman's ρ), of value ρ = 0.177, is significant (p-value = 0.001 < 0.05). The
bubble graph of the scores, as well as the line adjusted to the point cloud using least squares, attest to this.

Hypothesis 2. Classroom management presents significant differences with respect to the emotional competences of teachers depending on professional and sociodemographic variables (sex, age, professional experience, type of center in which they work, whether they have children and teaching vocation).

(a) Conflict management

The way teachers manage conflicts has a high emotional charge within the educational process. Analyzing the preparation of teachers to resolve such conflicts in an appropriate way, taking into account the emotional aspects, falls within their professional competence.

As shown in Figure 10, all teachers score similarly in terms of their ability to deal with conflicts, regardless of their teaching experience (Kruskal-Wallis, p-value = 0.4863 > α = 0.05). It should be noted that the number of teachers who believe they are not prepared to manage conflicts in the classroom is remarkable (Table 7). These percentages of teachers who feel unprepared do not depend on sex (contrast of proportions, p-value = 0.9753), on whether they have children (contrast of proportions, p-value = 0.5505), on the type of center (contrast of proportions, p-value = 0.7839), on experience (contrast of proportions, p-value = 0.4920) or on age (contrast of proportions, p-value = 0.9227).

(b) Methodological aspects

Creating learning environments and listening to students improve academic performance and the quality of interpersonal relationships. Both pedagogical initiatives have great emotional depth and generate motivation and a desire to learn.

Analyzing the results, it is observed that the younger teachers (20-30 years old) are the ones most concerned with creating learning environments that stimulate work in the classroom and attention to diversity, mainly with respect to the group of teachers between 41 and 50 years old (Figure 11). As the variance of the groups, centered on the median, is not homogeneous according to age (Levene with center in the median, p-value = 0.02509 < 0.05), we opt for a one-factor ANOVA that does not assume equal variances (Welch test).
(a) For this case, the one-way ANOVA p-value (age), not assuming equal variances (Welch test), is p-value = 0.0251 < 0.05. Therefore, there are significant differences between the groups. (b) Post hoc contrasts indicate that the highest average score is obtained by the 20-30 age group and the minimum average score by the 41-50 age group, with the difference between the two groups being significant. (c) The differences between the rest of the age groups are not significant.

Although the Kruskal-Wallis test is not recommended here, it is convenient in this case to see whether this test (with the corresponding post hoc contrasts) confirms the above results (Table 8): (a) The Kruskal-Wallis test, with a p-value = 0.002005 < 0.05, rejects the null hypothesis that the populations from which the samples have been extracted are equidistributed. (b) Post hoc contrasts using Wilcoxon to compare each pair of age groups, adjusting the p-values using the Holm method, give a similar result. From the table, it can be deduced that the only significant difference occurs between the age groups 20-30 vs. 41-50.

If the high responses (4 or 5) of teachers under 30 years of age are compared with those of teachers over 30, the p-value for the contrast of proportions is p-value = 0.02477 < 0.05. It is accepted, therefore, that younger teachers score higher than older ones, but with such a tight p-value the decision is not conclusive (Table 9). The proportion of teachers who score high (4 or 5) is 95% CI = (90.24, 95.40), which shows that most feel committed to attention to diversity.

On the other hand, regarding the predisposition to listen to students and let them express their ideas and opinions, it is observed that female teachers are the ones who listen the most, although the difference is not significant (Wilcoxon, p-value = 0.1717) (Table 10). It is observed that 78.9% of men score 5, compared with 85.1% of women, while 20.0% of men score 4, compared with 13.4% of women (Figure 12). The teachers who score high (4 or 5), according to sex, are shown in Table 11. The Fisher test to contrast the equality of proportions between both groups gives a p-value = 0.9999 > 0.05. Therefore, it cannot be denied that the proportion of teachers who score high (4 or 5) is the same for both sexes. The confidence interval for those who score high, regardless of sex, is CI95% = (96.9, 99.4).
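The post hoc procedure described above (pairwise Wilcoxon comparisons between age groups with Holm adjustment of the p-values) can be sketched as follows; again a Python illustration of what the study did in R, on made-up scores:

```python
from itertools import combinations
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

groups = {                      # assumed item scores per age band
    "20-30": [5, 5, 4, 5, 4, 5],
    "31-40": [4, 4, 5, 4, 4, 3],
    "41-50": [3, 4, 3, 4, 3, 4],
}

pairs = list(combinations(groups, 2))
raw_p = [mannwhitneyu(groups[a], groups[b], alternative="two-sided")[1]
         for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for (a, b), p, sig in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: Holm-adjusted p = {p:.4f}, significant = {sig}")
```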
(c) Vocation-conflict management

The results show that teachers who like their work tend, slightly, to prepare more to manage conflicts well, although this relationship is not very marked, since the correlation coefficients, Kendall's τ (τ = 0.018) and Spearman's ρ (ρ = 0.112), are only slightly positive. Moreover, with such tight p-values, the decision on their significance is inconclusive. The bubble graph of the scores, as well as the line adjusted to the point cloud using least squares, attest to this.

Discussion

The aim of this research is to analyze the emotional competencies of primary school teachers in relation to two hypotheses. The first hypothesis, Hypothesis 1, which asks whether the personal characteristics and emotional competencies of teachers interfere in student management as a function of sociodemographic and professional variables, is not confirmed. There are no differences with respect to age, years of experience, the type of center in which they work, whether they have children or not, or teaching vocation. Differences are only found with respect to the sex variable. In this study, female teachers show a greater tendency to interpret and respond to the emotional demands of students. This result coincides with those of Molero et al. [63], Furnham [64] and Stamatopoulou et al. [65], where it is the female student teachers who show better management of emotional situations. In the same line, it also coincides with the studies of López-Lujan and Sanz [37]; Pena, Rey and Extremera [66]; Llorent et al. [30]; Llorent and Núñez-Flores [29]; Molina et al. [31]; and Palomera et al. [67], who affirm that female teachers project themselves with more ability to attend to and perceive students' emotions. A study by Avissar [35] maintains that male teachers present greater difficulties in managing students' emotions.

However, in the present study, the differences with respect to gender only tangentially support this hypothesis, since, this being a self-report, all teachers report respecting students' emotions and indicate that they tend to foster students' self-esteem. In this sense, the scientific literature advocates that all teachers become "actors of coherence" [68] (p. 107). The adequate development of emotional competence among teachers improves the respectful teacher-student relationship [69][70][71], educational practice [43] and civic education [72,73]. Similarly, all teachers show a high commitment to the construction of their students' personal and cultural identity, despite the difficulties that may arise. In this sense, Jordán and Codena [74] agree on how the emotional aspect of teachers influences this construction, and a study with future teachers highlights that what concerns them most is to favor the personal growth and development of their students [75].

Regarding Hypothesis 2, which asks whether classroom management presents significant differences with respect to teachers' emotional competencies as a function of professional and sociodemographic variables, the results of this study indicate that there are no differences in terms of gender, age, professional experience, the type of center in which they work, whether they have children or their teaching vocation. Therefore, this hypothesis is not confirmed either.
This conclusion coincides with studies such as those by Lozano et al. [76], Barrientos and Pericacho [43] and Keefer et al. [77], which indicate that all teachers are aware of the importance of adequate emotional management of the classroom for managing students' social-emotional competencies and make an effort to do so. However, our results are not in line with those obtained by Cuesta and Azcárate [78], Martínez-Saura et al. [32] and García-Domingo [33], who in their research find significant differences according to age and professional experience in classroom management, especially in relation to the management of attention to student diversity. Nor do our results coincide with those obtained in the macro study by Hattie [79], who affirms that teachers do not listen to their students, or with that carried out by Hattie and Yates [80], who confirm this conclusion and indicate that teachers' lack of empathy distances them emotionally from students. That result does not coincide with the self-perception presented in the present study, in which teachers indicate that they listen to and attend to the emotional demands of the students.

It should be noted that this study coincides with many other studies carried out via self-reporting, which indicate that, although the results usually show differences with respect to gender in emotional management in the classroom, this conclusion presents an important bias, since all teachers in general perceive themselves as able to manage their students' emotions adequately. This high self-perception of teachers' emotional competence is also reflected in the studies of Llorent and Núñez-Flores [29] and Lira et al. [81]. In contrast, there are studies in which teachers perceive themselves as having simply adequate competence [33,34] and demand more training [82].

These results have a clear implication from a practical point of view. All the teachers evaluated, regardless of gender, age, professional experience, type of center or whether they have children, show a high perception of their emotional competence as teachers. These results are very positive, since student and classroom management is not conditioned by any sociodemographic variable. Therefore, these results reaffirm that the educational system presents high levels of equity, equal opportunities and quality in all educational contexts and realities.

Conclusions

Primary school teachers in the city of Valencia perceive themselves as having very high emotional competence, in all its dimensions. If we consider the hypotheses put forward in this research, we observe the following:

Hypothesis 1, "The personal characteristics and emotional competences of teachers interfere in the management of students, finding differences depending on sociodemographic and professional variables (sex, age, professional experience, type of center in which they work, if they have children and teaching vocation)", is not confirmed. In general terms, personal and professional characteristics do not interfere with students' emotional management. All teachers feel competent or very competent in the interpretation of emotions, in respect for the student, in the construction of their personal and cultural identity and in the promotion of self-esteem.
Only a few small differences are found. It is the female teachers who feel more able to observe and interpret the emotions and/or feelings of their students. Male teachers more strongly reject any type of prejudice, racism or discrimination that may occur with respect to a student. The percentage of teachers who say they do not reject prejudice, discrimination and racism is worrying. Finally, it is the teachers with the most vocation who most promote the self-esteem of the students.

Hypothesis 2, "Classroom management presents significant differences with respect to the emotional competences of teachers depending on the professional and sociodemographic variables (sex, age, professional experience, type of center in which they work, if they have children and teaching vocation)", is also not confirmed. Classroom management does not present significant differences depending on the variables analyzed.

Only small differences are observed: it is the youngest teachers who show the most sensitivity when it comes to creating learning environments that cater to diversity. In the same way, the teachers with the most vocation are those who claim to feel more qualified to resolve classroom conflicts. The percentage of teachers who say they do not feel prepared to manage classroom conflicts is a cause for concern.

The hypotheses cannot be fully confirmed. Only a few small aspects establish significant differences according to sociodemographic variables.

As for the possible limitations of this research, these lie in the sample, since the results are limited to the city of Valencia and cannot be generalized to other realities. Another limitation lies in the structure of the research, since it is a self-perception questionnaire and the scores tend to be very high. It would be interesting to contrast these results with the opinions of the students.

As for future lines of research, these revolve around the need to implement this study in other cities and/or autonomous communities, to present an overview of teachers nationwide regarding their emotional competences and, thus, to be able to implement continuous training programs. The aim is also to analyze the emotional competencies of teachers at all levels of the educational system (secondary education, baccalaureate and training cycles), since the psycho-emotional impact and mental health problems among young students at those levels are also very important.
Figure 1. Absolute frequencies according to sex.
Figure 2. Percentage of each response over the total, by gender.
Figure 3. Percentage of each score (women), according to whether they have children or not.
Figure 4. Percentage of each score (men), according to whether they have children or not.
Figure 5. Percentages for each response for sociodemographic variables.
Figure 7. Percentage of each score according to gender.
Figure 8. Symmetrical map of the answers, based on age.
Figure 9. Percentage of each score according to the type of center.
Figure 10. Absolute frequencies of responses vs. teaching experience.
Figure 11. Graph of means by age.
Figure 12. Percentage by sex of each of the responses.
Figure 13. Average overall score of teachers' emotional competencies.
Table 2. Items regarding emotional competence and character.
Table 3. Proportion of teachers who score high against sociodemographic variables. (Note: the proportion of those who score high differs according to age; the decision is inconclusive.)
Table 4. Contingency tables for teachers scoring 4 or 5, compared to the variables sex, type of center, children and teaching experience.
Table 5. Absolute frequencies and percentage of each score.
Table 6. Teachers who score high according to the type of center.
Table 7. Percentage of teachers scoring low vs. teaching experience.
Table 8. Pairwise comparisons using Wilcoxon's rank sum test with continuity correction.
Table 9. High scores for ages over 30 and under 30.
Table 10. Absolute frequencies and percentage of each score according to sex.
Table 11. High scores by sex.
Ni2Fe3 Metal Catalyst and Cellulose Ratio Impact on Pyrolyzed Bio-Oil

Carbon-neutral bio-oil is needed to replace liquid fossil fuels in order to reduce CO2 emissions and global warming. However, bio-oil produced from biomass contains too much sugar, which is undesirable. Investigating a new method to reduce the sugar concentration by changing the ratio between cellulose and catalyst is necessary, because the other available methods all incur additional costs. In this study, Ni2Fe3 and cellulose were chosen as the model catalyst and model biomass compound, respectively. Five different ratios were investigated: 4:1, 2:1, 1:1, 1:2 and 1:4 (weight ratio = cellulose:catalyst). The cellulose pyrolysis experiments show that the bio-oil yield first increases and then decreases as the catalyst amount is increased; the 2:1, 1:1 and 1:2 ratios give the highest yields. From GC-MS analysis, the sugar reduction likewise first increases and then decreases. Based on both results, the best cellulose/catalyst ratio was 2:1 or 1:1, which is attributed to optimized heat conduction.

Introduction

Renewable liquid fuels are needed in order to reduce CO2 emissions and global warming. The production of bio-oil as a transportation fuel is a potential answer (Muis et al., 2010; Krajačić et al., 2011; Shafiei and Salim, 2014; Charfeddine and Kahia, 2019). Bio-oil is produced through contemporary biological processes, typically involving renewable feedstocks such as biomass, and possibly by anaerobic digestion. Bio-oil can also be derived directly from domestic resources and from industrial and commercial waste. Bio-oil produced through pyrolysis of biomass has a high oxygen content (e.g., sugars) and a low energy density, and it is hard to use directly in vehicles.

Until now, various methods and techniques have been used to remove sugar in order to reduce the oxygen content. One option is to pretreat the reactant before the experiment: Torr's group (Xin et al., 2019) pyrolyzed pretreated pine wood after soaking it in a 1 wt% acetic acid solution for 4 h, and the results show that this method could remove more than 10% of the sugar. Reduced-pressure distillation is another useful method: Zheng's group (Zheng and Wei, 2011) performed reduced-pressure distillation to obtain distilled bio-oil, reducing the sugar content from 0.9% to 0.1%. In addition, some groups have used supercritical solvents to remove oxygen, such as Savage's group (Duan and Savage, 2011), who upgraded crude algal bio-oil using the supercritical water (SCW) method; the SCW method can improve the product oil by reducing its sugar and oxygen content. However, few works have focused on how the ratio between biomass model compounds and catalyst influences the composition of the bio-oil. There is a clear need to choose a suitable catalyst and biomass in order to investigate this relationship.

Based on our knowledge, an easily recovered and recycled homogeneous cluster catalyst is the best choice to produce bio-oil efficiently. A cluster is an ensemble of bound atoms or molecules that is intermediate in size between a molecule and a bulk solid. Metal clusters can also be used as catalysts (such as Ru3(CO)12 and Au clusters); they are sometimes also bound to another metal or cluster (Oguri et al., 2013).
Cluster catalysts have excellent properties: the absence of large bulk phases leads to a high surface-to-volume ratio, which is advantageous in any catalyst application because it maximizes the reaction rate per unit amount of catalyst material and thereby minimizes cost. Considering potential large-scale applications, catalysts made of cheap and earth-abundant elements are crucial for economically viable energy conversion processes. Instead of using rare-earth or expensive metals, it is better to focus on nickel and iron in order to build a new type of catalytic cluster.

In this study, cellulose was chosen as the reactant for pyrolysis, because it is the major component of biomass and is often used as a model compound. The Ni2Fe3 cluster catalyst was chosen as the catalyst model (based on our group's previous results). Five cellulose/catalyst ratios were chosen to investigate the relationship between the ratio and the bio-oil composition: 4:1, 2:1, 1:1, 1:2 and 1:4. The final goal of this study is to discern how the ratio influences bio-oil composition in order to assess its economic feasibility.

Catalyst preparation

The biomass feed was 38 μm powdered cellulose from Wako Pure Chemical Industries, used directly without any modification. The Ni2Fe3 catalyst was prepared by the sol-gel method, which is appropriate for obtaining cluster structures (De et al., 1996; Jayaprakash et al., 2015). The chemicals were citric acid, Fe(NO3)3·9H2O, ethylene glycol, and Ni(NO3)2·6H2O, also purchased from Wako Pure Chemical Industries. The mol ratio between the metal ions (Ni, Fe) and citric acid was 1:1.2. The catalyst was prepared at room temperature as follows: 5.81 g of Fe(NO3)3·9H2O and 12.1 g of Ni(NO3)2·6H2O were dissolved in 30 ml of distilled water and stirred for 5 hours; then 5 ml of ethylene glycol and 11.1 g of citric acid were added to the beaker. The transparent solution was stirred for 15 hours, by which time it had become uniform. The transparent sol was dried at 110 °C in an oven for 20 hours and then calcined in a furnace at 700 °C under a 95% N2 / 5% H2 gas mixture to burn off hydrocarbons. Finally, the catalyst was crushed into powder with a mortar and pestle. This process was repeated many times until enough catalyst was obtained.

Pyrolysis method

Cellulose (4 g) was loaded into a 50 mm diameter quartz reactor with a different mass of catalyst (1 g, 2 g, 4 g, 8 g or 16 g). The entire system was purged with flowing nitrogen gas for 7 min in order to expel all air. Afterward, the reactor was heated to the set temperature (450 °C) at a heating rate of 45 °C/min. The nitrogen gas carried the pyrolyzed vapors from the reactor to the condenser, where they were condensed with cold water. The bio-oil was collected and analyzed by GC-MS. The amounts of bio-oil and of char and coke were weighed, and the amount of gas was then calculated by subtracting the amounts of bio-oil and char & coke from the initial feed (Figure 1).
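A minimal sketch of this gas-by-difference mass balance is given below; the function name and example masses are illustrative (the example is chosen so the bio-oil fraction roughly matches the uncatalyzed 39.2% yield reported later), not measured values.

```python
# Sketch of the yield mass balance described above. The catalyst mass is
# excluded from the balance; gas is obtained by difference.

def pyrolysis_yields(feed_g, bio_oil_g, char_coke_g):
    """Return bio-oil, char & coke, and gas yields (wt% of cellulose feed)."""
    gas_g = feed_g - bio_oil_g - char_coke_g  # gas = feed - bio-oil - (char + coke)
    return {
        "bio-oil %": 100.0 * bio_oil_g / feed_g,
        "char+coke %": 100.0 * char_coke_g / feed_g,
        "gas %": 100.0 * gas_g / feed_g,
    }

# Example: 4 g cellulose feed, illustrative product masses.
print(pyrolysis_yields(feed_g=4.0, bio_oil_g=1.57, char_coke_g=1.10))
```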
XRD and SEM test

XRD analysis of the catalysts was conducted with an X-ray diffractometer (MiniFlex 600); 45 kV and 15 mA were used for the X-ray tube. The XRD pattern was collected over a scan range (2 theta) of 5° to 90° using filtered Cu radiation. Images of the catalysts were obtained with a scanning electron microscope (VE-9800) operated at 10 kV. The catalyst was first immersed in alcohol, and the samples were then loaded and dried on a carbon-coated copper grid before the test.

GC-MS analysis method

In order to identify organic compounds and analyze the composition of the bio-oil samples, GC-MS was used. The analysis was performed on a GC-2010 Plus equipped with a GCMS-QP2010 SE mass detector made by Shimadzu. The column was a Stabilwax-DA, 30 m x 0.25 mm, with a 0.25 μm film thickness, and the analysis was run with a 10:1 split injection. The oven temperature was held at 40 °C for 5 min and then ramped to 50 °C at 1 °C/min; next, it was ramped to 130 °C at 2 °C/min; finally, it was ramped to 260 °C at 4 °C/min and held for 10 min. The compounds were identified by comparing the mass spectra to the NIST 11 MS library using the GC-MS software, with a similarity threshold of over 80. All GC-MS experiments were conducted in duplicate, and standard deviations were calculated.
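The oven program above fixes the run time per injection. The following sketch simply tabulates the segment durations implied by those set points; it adds nothing beyond the stated program.

```python
# Segment durations of the GC oven temperature program described above.

segments = [
    ("hold 40 C", 5.0),                        # initial hold, min
    ("40 -> 50 C @ 1 C/min", (50 - 40) / 1.0),
    ("50 -> 130 C @ 2 C/min", (130 - 50) / 2.0),
    ("130 -> 260 C @ 4 C/min", (260 - 130) / 4.0),
    ("hold 260 C", 10.0),                      # final hold, min
]

for name, minutes in segments:
    print(f"{name:26s} {minutes:6.1f} min")
print(f"{'total run time':26s} {sum(m for _, m in segments):6.1f} min")
# -> 97.5 min per injection
```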
XRD and SEM analysis

X-ray diffraction patterns of the unused and used catalyst powder samples are shown in Figure 2 (using the 1:1 ratio as an example). Comparing the used and unused catalysts shows that the XRD pattern is unchanged by the pyrolysis experiments, so the catalyst can be recycled. Meanwhile, no spurious diffraction peaks are found in the pattern, indicating that there are no impurity components in the catalyst and that the sol-gel method is reliable for producing cluster structures. It should be noted that previous literature on bimetallic (Ni, Fe) catalysts exhibited only a single peak, indicating the formation of a Ni-Fe solid solution (Nie et al., 2014). The intensity of the XRD peaks in Figure 2 reflects that the formed particles are crystalline, and the broad diffraction peaks indicate very small crystallites.

The SEM images in Figure 3 show a typical particle with an uneven surface. In image a) the particle size is nearly 800 nm, while image b) shows a particle of more than 30 μm. The particle size clearly increases as more catalyst is added: when the catalyst ratio is high, the catalyst has little physical contact with the cellulose and auto-aggregates at high temperatures.

Yield results of pyrolysis

The catalysts were mixed with the cellulose directly. The masses of bio-oil and coke were weighed, and the yields were calculated after each pyrolysis experiment. Figure 4 shows the bio-oil mass yields for the different ratios; the gas yield is calculated by subtracting the masses of bio-oil and coke from the initial mass of cellulose. Each experiment was conducted more than once to check repeatability. The bio-oil yield from uncatalyzed cellulose pyrolysis was 39.2% with a standard deviation of 0.8%. The yield increases initially as catalyst is added, falling back to 39.3% at a cellulose/catalyst ratio of 1:4. All of the ratios used here improved the bio-oil yield, which is an important property, because other catalysts, such as ZSM-5, ZrO2, TiO2 and silica, have been shown to decrease it (Stefanidis et al., 2014; Xia et al., 2015; Behrens et al., 2017).

The reason for the increase in bio-oil yield lies in the differences in specific heat capacity and heat conductivity coefficient. The specific heat capacity is the amount of heat energy required to raise the temperature of a unit amount of a substance, and the heat conductivity coefficient measures how well a material transfers heat. The specific heat capacity is 1.6 kJ/kg·K for cellulose, 0.44 kJ/kg·K for Ni and 0.45 kJ/kg·K for Fe (The Engineering ToolBox), which means that both nickel and iron heat up quickly, and the heated metal helps transfer heat uniformly to the cellulose. In addition, the heat conductivity coefficients differ greatly (Madelung and White, 1991; see Table 1): those of Ni and Fe are nearly 120 times that of cellulose. For cellulose without a catalyst, heat transfer is therefore poor, so some points reach high temperature in a short time, which tends to produce gas. After adding the Ni2Fe3 catalyst, the effective heat conductivity coefficient is enhanced many times, which helps transfer heat well and produces more bio-oil (see Table 1). But if too much catalyst is added, the effective heat conductivity coefficient becomes very high (e.g., 53.07 W/m·K), and the reaction becomes more like fast pyrolysis, whose major product is gas instead of liquid. It is also possible that other reactions took place during the experiment and increased the yield; these will be investigated in future work.
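To make the heat-conduction argument concrete, here is a deliberately simple mass-weighted rule-of-mixtures sketch. The conductivity values are assumed placeholders (the text only states that Ni and Fe conduct roughly 120 times better than cellulose; the exact Table 1 entries are not reproduced here), so only the qualitative trend, not the numbers, should be read from the output.

```python
# Illustrative rule-of-mixtures estimate of the bed's effective heat
# conductivity. Both conductivities below are assumed, not Table 1 values.

K_CELLULOSE = 0.7   # W/(m*K), placeholder; Ni/Fe are ~120x larger per the text
K_CATALYST = 85.0   # W/(m*K), placeholder effective value for the Ni-Fe cluster

def mixture_hcc(cellulose_g, catalyst_g):
    """Mass-weighted effective heat conductivity of the mixed bed, W/(m*K)."""
    total = cellulose_g + catalyst_g
    return (cellulose_g * K_CELLULOSE + catalyst_g * K_CATALYST) / total

for ratio, (cel, cat) in {"4:1": (4, 1), "2:1": (4, 2), "1:1": (4, 4),
                          "1:2": (4, 8), "1:4": (4, 16)}.items():
    print(f"R = {ratio}: ~{mixture_hcc(cel, cat):5.1f} W/(m*K)")
# Higher loadings push the effective conductivity toward the fast-pyrolysis
# regime discussed above, where gas rather than liquid dominates.
```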
From Figure 5 it can be seen that the bio-oil yield changes with the ratio, and R = 1:2 gives the best yield. The reason the yield depends on the ratio is that when the catalyst ratio is low, the metal catalyst helps transfer heat uniformly and provides sufficient reactive surface (figure inset); however, when the catalyst ratio is high, the catalyst has little physical contact with the cellulose and auto-aggregates at high temperatures, so a high catalyst ratio decreases the bio-oil yield. The conclusion that can be drawn is that R = 2:1, 1:1 and 1:2 are better for producing bio-oil from cellulose.

GC-MS results of cellulose pyrolysis

GC-MS was used to analyze all bio-oil samples produced at the different ratios in order to determine the composition of the bio-oil. The identified peaks are listed in Table 2; on average, 3% of the peaks were unidentified. Because the pyrolysis of cellulose can produce hundreds of compounds, all compounds were classified into groups based on their functional group: acids, aldehydes, alcohols, furans, esters, hydrocarbons (HC), ketones, phenols, sugars, and others. Table 2 shows that the major products of uncatalyzed cellulose were acids (4.9%), furans (20.7%), ketones (17.5%) and sugars (47.7%). Phenolic compounds were also found in uncatalyzed cellulose pyrolysis, in line with other researchers' results (Behrens et al., 2017); the other main components of the bio-oils derived from non-catalytic and catalytic pyrolysis of cellulose also agree well with literature results (Fabbri et al., 2007; Lu et al., 2011; Xia et al., 2015).

In order to examine how the cellulose/catalyst ratio influences the composition of the bio-oil, each functional group is discussed in turn; Figure 6 shows the relationship between the ratio and the bio-oil composition (based on averaged data). First, the data in Table 2 indicate that HC was formed, which is an excellent component for improving the quality of bio-oil, although the percentage is small; to our knowledge, no other published reports describe producing HC from a cellulose reactant under a nitrogen atmosphere. Secondly, all the catalyst loadings reduced the acid and sugar contents, which is good for improving the quality of the bio-oil. Reducing the acid content improves the quality because acids lower the pH of the bio-oil and make it corrosive to common metals such as aluminum, mild steel and brass; reducing the sugar content is important because of its large oxygen content. These results agree well with other literature showing that the sugar content can be reduced by reactions with catalysts (Wang et al., 2016; Behrens et al., 2017).

Figure 7 shows how the different cellulose/catalyst ratios influence sugar production. The sugar reduction increases as catalyst is first added and then decreases. The reason is that more catalyst helps the cellulose decompose and provides enough surface for reaction, but too much catalyst auto-aggregates at high temperatures, reducing the reactive surface. A high catalyst ratio therefore not only decreases the bio-oil yield but also lowers the overall catalyst activity (Figure 7); ratios of 4:1, 2:1 and 1:1 are better for reducing the sugar concentration. Third, as the sugar decreases, the furans and ketones increase, as shown in Figure 7. A cellulose/catalyst ratio of 2:1 is best for increasing the ketone and furan compounds because it removes the most sugar. Ketone compounds can be converted to other hydrocarbon compounds through various methods in order to improve the quality of the bio-oil as the chemical reaction advances (King et al., 2015; Mehta et al., 2015; Ly et al., 2017), so increasing the ketone amount is important for bio-oil. Finally, none of the ratios affect the alcohol, phenol or ester amounts. Phenol compounds are usually formed through secondary reactions from cellulose vapors (Stefanidis et al., 2014; Wang et al., 2016), which is why they do not change much in this study.

Figure 7. Chemical relative compositions of the bio-oil organic phases with different cellulose/catalyst ratios.

Table 2. Composition of cellulose pyrolysis products with different cellulose/catalyst ratios (peak area % of identified peaks; HC = hydrocarbon compounds; unidentifiable compounds are listed under "others"; R = cellulose/catalyst).
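As a minimal sketch of the classification step described in this section (binning identified peaks by functional group and summing peak-area percentages), the snippet below is illustrative only; the compound names, group assignments and areas are placeholders, not the measured Table 2 values.

```python
# Bin identified GC-MS peaks by functional group and report area percent.
from collections import defaultdict

peaks = [  # (compound, functional group, peak area % of identified peaks)
    ("levoglucosan", "sugars", 41.2),
    ("furfural", "furans", 12.4),
    ("acetic acid", "acids", 4.9),
    ("1-hydroxy-2-propanone", "ketones", 9.3),
]

groups = defaultdict(float)
for _, group, area in peaks:
    groups[group] += area

for group, area in sorted(groups.items(), key=lambda kv: -kv[1]):
    print(f"{group:10s} {area:5.1f} %")
```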
Interplay between Co-3d and Ce-4f magnetism in CeCoAsO

We have investigated the ground state properties of polycrystalline CeCoAsO by means of magnetization, specific heat and solid state NMR. Susceptibility and specific-heat measurements suggest a ferromagnetic order at about $T_\mathrm{C}=75$ K. No further transitions are found down to 2 K. At 6.5 K a complex Schottky-type anomaly shows up in the specific heat results. The interplay between Ce-4f and Co-3d magnetism being responsible for that anomaly is discussed. Furthermore, $^{75}$As NMR investigations have been performed to probe the magnetism on a microscopic scale. The As NMR spectra are analysed in terms of first- and second-order quadrupolar interaction, and the anisotropic shift components $K_{\mathrm{ab}}$ and $K_{\mathrm{c}}$ could be derived from the $^{75}$As powder spectra. Towards lower temperature a strong shift anisotropy was found. Nonetheless, $K_{\mathrm{iso}}$ tracks the bulk susceptibility down to $T=50$ K very well. Furthermore, the presence of weak correlations among the Ce ions in the ferromagnetic state is discussed; the observed increase of $C/T$ towards lower temperatures supports this interpretation.

I. INTRODUCTION

The rare earth transition metal pnictides RTPnO (R: rare earth, T: transition metal, Pn: P or As) have attracted considerable attention because of the recent discovery of superconductivity with transition temperatures T_C up to 50 K in the RFeAsO_{1-x}F_x series of compounds, the highest T_C's except for the cuprate systems [1-6]. While most studies of these materials are devoted to the superconducting members, the non-superconducting members of the family remain mostly unexplored. Nevertheless, studying these compounds may provide information that helps to understand the superconducting state as well. Recent studies of CeRuPO and CeOsPO [7-9] indicate dissimilar types of magnetic ordering: CeRuPO is a rare example of a ferromagnetic (FM) Kondo system, showing FM order at T_C = 15 K and a Kondo energy scale of about T_K ≈ 10 K, whereas CeOsPO exhibits antiferromagnetic (AFM) order at T_N = 4.5 K. Recent studies of the related compound CeFePO suggest that it is a heavy-fermion metal with strong correlations of the 4f electrons close to a magnetic instability [10]. In CeFeAsO, on the other hand, a complex interplay of Ce-4f and Fe-3d magnetism is found: Ce orders antiferromagnetically at T_N ≈ 3.8 K, whereas the high-temperature regime is dominated by the 3d magnetism of Fe [11,12]. There is a structural transition from tetragonal to orthorhombic at ≈ 151 K, followed by an SDW-type AFM order of Fe at ∼ 145 K; moreover, the Ce magnetism is not strongly affected by the presence of the Fe moments. Furthermore, neutron scattering, muon spin relaxation experiments and recent analyses of χ(T) and C(T) suggest that there is a sizeable inter-layer coupling in CeFeAsO [12-15]. The pure CeFeAsO system has therefore already proven to be a rich reservoir of exotic phenomena.

Apart from isoelectronic substitution on CeTPnO with T = Fe, Ru, Os, the T = Co series also forms. LaCoAsO and LaCoPO were reported to exhibit ferromagnetic order of the Co moments with Curie temperatures of about T_C = 50 K and T_C = 60 K, respectively. In contrast to Fe, where the 3d magnetism depends on the pnictide (P, As), Co stays magnetic in both the P and the As series. In LaCoAsO, Co saturation moments of 0.3-0.5 µ_B per Co are found [16-18].
It has been proposed that spin fluctuations play an important role in the magnetic behavior of LaCoAsO [16,18], as well as in the magnetic and superconducting properties of the iron-based superconductors. Last year we reported the detailed physical properties of CeCoPO and discussed the interplay between the 3d magnetic moments of Co and the 4f electrons of Ce [19]. In this system, similar to LaCoPO, the Co-3d electrons order ferromagnetically; however, here the Ce ions are on the border of magnetic order, and an enhanced Sommerfeld coefficient, γ ∼ 200 mJ/mol K², was found. In CeTPnO the substitution of P by As changes the magnetism drastically, as is already evident in the case of CeFeAsO [20-22]. It is therefore natural to investigate the physical and microscopic properties of the CeCoAsO compound. In this report we present the physical properties of polycrystalline CeCoAsO using susceptibility χ(T) and specific heat C(T) measurements. Additionally, we discuss preliminary microscopic results as seen by a 75As NMR study.

II. EXPERIMENTAL

Samples were prepared by the solid state reaction technique. The starting materials for the preparation of the parent CeCoAsO were Ce and As chips and Co and Co3O4 powders. First, CeAs was prepared by taking stoichiometric amounts of Ce and As in a 1:1 ratio, pressing them into pellets and sealing them in an evacuated quartz tube. With repeated heat treatment, reaching a maximum temperature of 900 °C, and grinding inside a glove box filled with inert Ar gas, single-phase CeAs was obtained. The CeAs was then mixed thoroughly with Co3O4 and Co powder in stoichiometric amounts and pressed into pellets. The pellets were wrapped in Ta foil, sealed in an evacuated quartz tube and annealed at 1100-1150 °C for 40-45 hours to obtain the final CeCoAsO samples. X-ray powder diffraction revealed a single-phase sample with no foreign phases.

Susceptibility χ(T) measurements were performed in a commercial Quantum Design (QD) magnetic property measurement system (MPMS). Specific heat C(T) measurements were performed in a QD physical property measurement system (PPMS). For the NMR measurements, polycrystalline powder was fixed in paraffin to ensure a random orientation. 75As NMR measurements were performed with a standard pulsed NMR spectrometer (Tecmag) at a frequency of 48 MHz as a function of temperature. The field-sweep NMR spectra were obtained by integrating the echo in the time domain and plotting the resulting intensity as a function of the field. Shift values are calculated from the resonance field H* by K(T) = (H_L - H*)/H*, where the Larmor field H_L is obtained by using GaAs (75As NMR) as a reference compound with 75K ≈ 0 [23].

III. RESULTS

A. Magnetisation and specific heat study

Fig. 1 shows the susceptibility of CeCoAsO as a function of temperature at different fields as indicated. Above 200 K, the susceptibility follows a Curie-Weiss behavior with an effective moment µ_eff = 2.74 µ_B. This value is higher than the value µ_eff(Ce) = 2.54 µ_B expected for a free Ce3+ ion, because of the contribution of Co to the effective moment: the two contributions add in quadrature, µ_eff² = µ_eff(Co)² + µ_eff(Ce)². For the Co ions we thus calculate an effective moment of µ_eff(Co) = 1.03 µ_B. Our results are in good agreement with the findings of Ohta and Yoshimura [18]. A sharp increase of the susceptibility is observed at around 75 K, where a strong field dependence of the susceptibility, typical for a FM system, is evidenced.
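As a quick numerical check of the moment decomposition above (assuming, as in the text, that the Ce and Co contributions add in quadrature):

```python
# Extract the Co effective moment from mu_eff^2 = mu_Ce^2 + mu_Co^2.
import math

mu_eff = 2.74  # mu_B, Curie-Weiss fit above 200 K
mu_ce = 2.54   # mu_B, free Ce3+ ion

mu_co = math.sqrt(mu_eff**2 - mu_ce**2)
print(f"mu_Co = {mu_co:.2f} mu_B")  # -> 1.03 mu_B, as quoted in the text
```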
In the inset of Fig. 1 the magnetisation M(H) of CeCoAsO up to 5 T at 2 K is shown. A large hysteresis, typical for hard ferromagnets, is observed. A large hysteresis was also found for CeCoPO, whereas for the other RCoAsO compounds no such large hysteresis was found [18]. In the left inset of Fig. 1 we compare the susceptibility of CeCoAsO with that of LaCoAsO and CeCoPO at 1 T. It should be mentioned that for CeCoAsO at smaller fields no peak could be resolved, in contrast to CeCoPO [19]. The temperature dependence of the susceptibility is quite similar for CeCoAsO and CeCoPO, whereas there is a pronounced difference to that of LaCoAsO. In LaCoAsO there is no influence of 4f magnetism; therefore the magnetisation curve looks like that of a simple textbook ferromagnet. It has to be mentioned that we took the data for LaCoAsO from Ref. [?]. For this set of data χ(LaCoAsO) > χ(CeCoAsO) is found in the FM state, but surprisingly the data by Ohta et al. [18] show smaller values (at the same field), suggesting χ(LaCoAsO) < χ(CeCoAsO). Nonetheless, our results for CeCoAsO are in perfect agreement with the findings in [18]. The different behavior of CeCoPO and CeCoAsO indicates that the Ce-4f electrons create a significant change in the overall magnetic behavior. This unusual behavior of the susceptibility indicates an intricate magnetic structure with strong polarisation of the more localised Ce moments by the itinerant Co moments. Such effects are also known from 4f-ion Fe4Sb12 skutterudites; EuFe4Sb12, for example, shows similar magnetisation curves [24]. One approach could be to describe CeCoAsO in the framework of a classical ferrimagnet like RCo5 with R = Y, Ce, Pr, ... [25]. The magnetisation in these systems is governed by the two subsystems of 4f and 3d moments and their inter- and intra-molecular interactions. Because of the antiferromagnetic coupling of the rare earth spins with the Co spins, for light rare earth ions like Ce a ferromagnetic alignment of the Ce-4f and Co-3d moments is usually expected in the ordered state [25]. This would imply χ(CeCoAsO) < χ(LaCoAsO); unfortunately, because of the inconsistent literature data for LaCoAsO, we have no proof of that. To summarize the magnetisation section, it should be noted that besides the high-temperature ordering at T_C = 75 K for CeCoPO and T_C = 75 K for CeCoAsO, no further transitions are evident from the magnetisation measurements. This is in contrast to other RCoAsO systems with R = Gd, Sm or Nd, where the rare earth moments order antiferromagnetically at low temperature [18,38].

The temperature and field dependence of the specific heat C(T) of CeCoAsO is shown in Fig. 2. Towards high temperatures C(T) converges nicely to the classical Dulong-Petit limit of ∼ 100 J/mol K. On lowering the temperature, a broad anomaly at T_C ∼ 75 K is visible on top of the phonon-dominated specific heat. We estimated the background using a third-order polynomial from 65 to 85 K, excluding the temperature range near the peak at 75 K, and then subtracted the background from the data. In the left inset we show the C(T)/T vs. T plot after subtracting the background. It is worth mentioning that at the same temperature (75 K) the susceptibility increases sharply; this anomaly is therefore due to the ferromagnetic ordering of Co. At around 6.5 K an additional broad anomaly in C(T) shows up. The right inset shows the C(T)/T vs. T plots, in which both anomalies are rather pronounced.
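A minimal sketch of this background subtraction is given below, assuming placeholder data arrays and an assumed exclusion window of 72-78 K around the peak (the text only specifies "near the peak at 75 K").

```python
# Cubic background subtraction around the 75 K specific-heat anomaly.
import numpy as np

def subtract_background(T, C, fit_lo=65.0, fit_hi=85.0,
                        excl_lo=72.0, excl_hi=78.0):
    """Return C(T) minus a third-order polynomial background.

    The polynomial is fitted only between fit_lo and fit_hi, excluding
    the window (excl_lo, excl_hi) so the anomaly itself does not bias
    the fit. The window bounds here are assumptions.
    """
    in_fit = (T >= fit_lo) & (T <= fit_hi) & ~((T > excl_lo) & (T < excl_hi))
    coeffs = np.polyfit(T[in_fit], C[in_fit], deg=3)
    return C - np.polyval(coeffs, T)
```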
To understand the origin of the low-temperature anomaly, we investigated the field-dependent specific heat in the temperature range 1.8-15 K and in the field range 0-9 T. The right panel of Fig. 2 shows the C/T vs. T plot at different fields (curves are shifted on the y axis by 0.1 J/mol K²). The effect of the field on C(T) is very small; nonetheless, the broad maximum shifts slightly towards higher temperatures with increasing field. The preliminary analysis of the specific heat reveals that this broad low-temperature anomaly is not due to an ordering of Ce; rather, it is reminiscent of a Schottky-type anomaly. It might be attributed to a splitting of the CEF ground state of Ce by the internal field of the ordered Co moments. Furthermore, at zero field we estimated the entropy gain by integrating the 4f part of the specific heat, C_4f/T, in the temperature range 1.8-15 K. For this estimate we subtracted the phonon contribution using reference data of LaCoAsO in the range 1.8-15 K after Sefat et al. [?]; the contribution of LaCoAsO to the specific heat is, however, small below 15 K. The estimated entropy gain for CeCoAsO at 15 K is 75% of R ln 2, which supports the scenario based on a splitting of the CEF doublet ground state.

B. 75As NMR

Fig. 3 shows the field-sweep 75As NMR spectra at different temperatures. Because 75As is an I = 3/2 nucleus, the quadrupole interaction has to be taken into account in the interpretation of the spectra. The main effects are (I) the first-order interaction, i.e., the occurrence of pronounced satellite peaks (3/2↔1/2, -1/2↔-3/2), and (II) second-order interactions, i.e., a splitting of the central -1/2↔1/2 transition. At high temperature the second-order quadrupolar splitting of the central transition is observed along with the two first-order satellite transitions, and the spectra could be nicely simulated (Fig. 3, top right). On lowering the temperature down to 50 K, the whole spectrum initially shifts towards the low-field side with considerable line broadening and develops a large anisotropy. Below 50 K, however, the whole spectrum shifts towards the high-field side with further gradual line broadening, which becomes enormous below 15 K. It is clear from Fig. 3 that the NMR spectra become more and more complex on lowering the temperature, and the simulation of these broad spectra at low temperature is rather complicated. Nevertheless, with considerable effort it is possible to identify the singularities of the spectra down to 20 K, so that an estimation of the shift is possible by fitting the singularities down to 20 K. During the compilation of this paper we realized that a similar analysis has been performed for LaCoAsO [36]. In Fig. 3 (lower right panel), below 10 K a considerable background intensity is perceived on the low-field side; this background signal is a combination of the 59Co and 75As NMR spectra. We have already measured some 59Co NMR spectra. The ν_Q value estimated from the 75As NMR spectra at high temperature is 3.6 MHz. This value is similar to that of the LaCoAsO system [36] and somewhat smaller than that found for the CeFeAsO system [22]. On lowering the temperature ν_Q increases monotonically, and down to 20 K no drastic changes could be detected, which rules out a sudden structural change in this compound. From the simulation of the spectra, we have estimated the shift components 75K_ab and 75K_c corresponding to the H⊥c and H∥c directions, respectively.
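As an illustration of how these shift values follow from the field-sweep spectra, the sketch below applies the shift definition K = (H_L - H*)/H* from the experimental section to assumed singularity fields, and combines the components into the powder average K_iso = (2 K_ab + K_c)/3 used in the next paragraph. The Larmor field for 75As at 48 MHz (≈ 6.58 T) is approximate, and the resonance fields are placeholders, not the measured singularity positions.

```python
# Shift extraction from field-sweep singularities (placeholder fields).

def shift(H_larmor_T, H_res_T):
    """NMR shift from the resonance field H* relative to the Larmor field."""
    return (H_larmor_T - H_res_T) / H_res_T

H_L = 6.58                 # T, approximate 75As Larmor field at 48 MHz
K_ab = shift(H_L, 6.45)    # singularity for H perpendicular to c (assumed)
K_c = shift(H_L, 6.51)     # singularity for H parallel to c (assumed)

K_iso = (2.0 * K_ab + K_c) / 3.0   # powder average
print(f"K_ab = {K_ab:.4f}, K_c = {K_c:.4f}, K_iso = {K_iso:.4f}")
```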
Fig. 4 shows the variation of 75K_ab, 75K_c and 75K_iso as a function of temperature, where 75K_iso was estimated using 75K_iso = (2/3) 75K_ab + (1/3) 75K_c. From Fig. 4 it is evident that 75K_ab and 75K_c increase with decreasing temperature and present a strong anisotropy. At high temperature the anisotropy is small, whereas with decreasing temperature it is enhanced considerably. Comparing 75K_ab and 75K_c at 50 K shows that the anisotropy of the transferred field is really important here: 75K_ab is 2.5 times larger than 75K_c.

From Fig. 4 it is also seen that 75K_ab, 75K_c and 75K_iso increase with decreasing temperature, following the bulk susceptibility down to 50 K. On lowering the temperature further, down to 30 K, the shift decreases, leaving a maximum at around 50 K. This maximum traces back to the results of the susceptibility and specific heat studies, which indicate that the ferromagnetic Co ordering takes place at T_C = 75 K. It is worth mentioning that the bulk susceptibility increases continuously with decreasing temperature. It is well established that the NMR shift probes the local susceptibility and is therefore normally not influenced by small amounts of impurities. There are two possibilities for the decrease of the shift. First, the system may contain a small amount of impurities, not detected by the XRD measurement, which causes the increase of the susceptibility. Second, there may be polarisation effects on the Ce ions by the internal magnetic field of the Co magnetism, which in turn changes the transferred hyperfine field below 50 K. The former is unlikely because of the well-matched magnetisation results for CeCoPO and LaCoAsO [18,19]; the latter scenario is therefore more likely. Below 50 K the decrease of the shift reveals that the ferromagnetically ordered Co moments polarise the Ce moments, which eventually complicates the magnetic structure and changes the hyperfine field in this system.

NMR probes the magnetism on a microscopic scale: the As NMR gives the local hyperfine field arising from the 4f-Ce and the 3d-Co ions. Usually, for itinerant 3d ions the negative core polarisation is the dominant exchange mechanism, whereas for localised Ce moments the strong conduction-electron polarisation contribution (Fermi contact interaction) becomes important. Sometimes both fields cancel each other, leading to a K = 0 condition, but often the Fermi contact interaction is more than one order of magnitude larger [29]. The total shift can therefore be decomposed as K = K_4f + K_3d. Furthermore, K_4f couples strongly to the effective Ce-4f moment; the effective moment is reduced by the CEF splitting, which results in a reduction of K_4f. This might explain the observed shift maximum. For an estimation of the hyperfine coupling constant, 75K_iso is plotted as a function of the bulk susceptibility χ in the inset of Fig. 4, using the susceptibility measured at 5 T. We assume that χ = χ_iso, meaning that there is no alignment or texture in the CeCoAsO sample. From the inset it is seen that 75K_iso nicely follows the bulk susceptibility in the temperature range 200-50 K. From the linear regime we have estimated the hyperfine coupling constant at the 75As site, 75A_iso ≈ 18 kOe/µ_B. The estimated 31A_iso for CeCoPO and LaCoAsO is around 14 kOe/µ_B and 24.8 kOe/µ_B, respectively [36,37]. For CeCoAsO the hyperfine coupling constant is therefore higher than that of the CeCoPO system, which could be interpreted as being due to a weaker 3d-4f polarisation in CeCoAsO.
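A minimal sketch of this Clogston-Jaccarino-type estimate: the slope of K_iso versus molar susceptibility in the linear regime, multiplied by N_A µ_B, gives A_iso. The arrays below are placeholders constructed to yield a slope of the quoted magnitude, not the measured data.

```python
# Hyperfine coupling from the slope of a K vs chi plot (cgs units).
import numpy as np

N_A_MU_B = 5585.0  # emu G / mol, N_A * mu_B in cgs units

# Placeholder K_iso (dimensionless) vs molar susceptibility (emu/mol)
# in the linear 200-50 K range.
chi = np.array([2.0e-3, 3.0e-3, 4.0e-3, 5.0e-3])
K_iso = np.array([0.0065, 0.0097, 0.0129, 0.0161])

slope = np.polyfit(chi, K_iso, 1)[0]        # dK/dchi, mol/emu
A_iso = slope * N_A_MU_B                    # Oe per mu_B
print(f"A_iso ~ {A_iso / 1e3:.0f} kOe/mu_B")  # -> ~18 kOe/mu_B
```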
Below 15 K an additional line broadening shows up in the spectra. Such a broadening cannot be explained by impurities or disorder. As a first approach, this line broadening traces back to the specific heat anomaly at 6.5 K and the increase of C/T below 1.8 K. Such broadening is also typical for the onset of correlations; however, with the presently available data it is not settled whether it results from a complicated magnetic structure or from correlations. For another 4f-3d pnictide, NdCoAsO, McGuire et al. very recently proposed multiple phase transitions at low temperature on the basis of neutron scattering [38].

IV. DISCUSSION AND SUMMARY

Our findings on CeCoAsO yield several interesting phenomena which shall now be discussed. The presented results point to a FM ordering of the Co ions at T_C = 75 K, similar to the FM order in the P-homologue [19]. As for CeCoPO, unusual, more S-like shaped χ(T) curves are observed below T_C. This is in contrast to LaCoPO and LaCoAsO, where χ(T) behaves like that of a textbook ferromagnet. For CeCoPO and CeCoAsO a complex interplay of the Co-3d moments with the more localised Ce-4f moments has to be considered. In more detail: the Co 3d ions order ferromagnetically at T_C = 75 K, and the resulting internal field transferred to the Ce site partially polarises the Ce-4f ions. With decreasing thermal excitation the polarisation becomes stronger towards lower temperatures, leading to the S-like shape of χ(T). Moreover, in the specific heat an additional broad anomaly shows up at around 6.5 K, which is different from the CeCoPO system. This low-temperature Schottky anomaly indicates a complex level splitting of the ground state of the rare earth moment by the internal field.

In order to get deeper microscopic insight into this system we performed 75As NMR investigations. The NMR shift increases with decreasing temperature, following the susceptibility down to 50 K; on lowering the temperature further, the shift decreases, resulting in a broad maximum at 50 K. This indicates a change of the hyperfine field below 50 K. Furthermore, the additional line broadening of the 75As NMR spectra below 15 K traces back to the specific heat anomaly at 6.5 K. Such broadening effects are typical for the onset of correlations, but a reconstruction of the hyperfine fields because of the CEF splitting could also be a possible explanation. To fully understand the low-temperature magnetism, further investigations, especially with microscopic tools such as neutron scattering and/or µSR, are needed.

Furthermore, based on the presented results, doping studies on CeCoAsO might be fruitful in the context of superconductivity. The superconducting state of the doped RFeAsO systems is suggested to be of unconventional nature. One key point for obtaining superconductivity in the CeFeAsO system is to suppress the Fe magnetism by changing the carrier concentration. One approach is to substitute F in place of O, to substitute As by P, or to substitute a small concentration of Co for Fe; in all cases, for a specific doping concentration, superconductivity appears [33,39,4]. The nature of the carrier concentration required for this CeFeAsO-based superconductor is therefore still not settled. Apart from the Fe-based pnictides, superconductivity was also found in LaNiPO and LaNiAsO.
Furthermore, in the related 122 system Ba(Fe,Co)2As2, T_C's up to 22 K are found, and 59Co NMR investigations clearly reveal that Co is nonmagnetic there [34,35]. The absence of superconductivity in CeCo(As/P)O is not surprising considering the fact that here Co carries a moment and long-range order is observed. Furthermore, in these systems there is a complex interplay between the Ce-4f and 3d magnetism that plays a crucial role in controlling the magnetism. It is worth mentioning that, as far as the 3d magnetism is concerned, there is a major difference between CeCoAsO and CeFeAsO: in CeFeAsO the Fe magnetism can easily be tuned by replacing As with P, and the SDW-type transition vanishes, whereas in CeCoAsO the Co magnetism stays rigid under this substitution and an SDW transition is absent. However, there is still the possibility to suppress the Co magnetism either by doping or by pressure, because an ordering temperature of around 70 K is rather low for Co ordering. Further research therefore has to answer the question of whether superconductivity appears in this system after the suppression of the Co magnetism, which would eventually open up the opportunity to understand the nature of the coupling between the 4f and the itinerant electrons in the RTPnO systems; NMR/NQR would be a valuable tool to probe the magnetism and superconductivity.

In summary, we presented magnetisation, specific heat and 75As NMR investigations on polycrystalline CeCoAsO. The magnetisation and specific heat data reveal that in this system Co orders ferromagnetically at 75 K. Moreover, the specific heat study shows a Schottky anomaly at low temperature, at around 6.5 K, which is likely due to a level splitting of the ground state of the rare earth moment by the internal field. Furthermore, the analysis of the 75As NMR spectra clearly demonstrates a strong shift anisotropy towards lower temperatures. Moreover, the breakdown of the K vs. χ linearity below 50 K might be a signature of the CEF interaction.
Teacher Identity in Higher Education: A Phenomenological Inquiry

Teacher identity remains a major issue of research in the field of teachers' professional development. Understanding the construction of teacher identity is important to fully understand how teachers negotiate their selves with broader institutional power relations and to discuss how teachers invest their agency in building their professional identity. This phenomenological study analyzes the experiences of six university lecturers with a focus on teacher identity construction in relation to broader institutional culture. The study shows that university lecturers, who are at the bottom of the professional hierarchy, do not receive much institutional support, nor are they mentored by their seniors. More strikingly, the study reveals that the academic identity of the university lecturers is not recognized due to a growing culture of partisan politics. This culture has also created a sense of fear and unfriendly collegial relations.

Introduction

Teacher identity has received increased attention in recent studies in the field of teacher professional development (De Costa and Norton, 2017). Keeping teacher identity at the center, studies have focused on the struggles, professional trajectories, and sociocultural and political factors affecting teachers' lives and their continual professional development (Lasky, 2005). However, two major issues have received little attention. On the one hand, most studies focus on the identity of school-level teachers, and on the other, there is an absence of studies on Nepali teachers' identity in higher education. This study investigates these issues by looking at the identity construction of teachers in Tribhuvan University. More specifically, it explores the lived professional experiences of university teachers and analyzes their identity, with a focus on their struggles and negotiations in the existing sociopolitical context of higher education.

Theoretical framework

This study draws on theories from the broader teacher identity literature to discuss the processes and importance of teacher identity in understanding the multidimensional aspects of university teachers' identity development. More specifically, we focus on the personal narratives of lecturers, who are at the bottom of the current teacher hierarchy in the university system. Building on previous studies on teacher identity (e.g., Dickinson, 2012; Thomas and Beauchamp, 2007), we consider teacher identity a sociocultural process in which teachers constantly negotiate their own sense of self, both personal and professional, in relation to broader sociopolitical and institutional conditions. Taking teacher identity construction as a socio-interactive phenomenon, this study explores how university lecturers at Tribhuvan University (TU) build their ideas of 'how to be' and 'how to act' as teachers (Thomas and Beauchamp, 2007) in the current university climate. The study is informed by the assumption that teacher identity construction is a continual process shaped by the broader politics and structure of an institution. This process involves teachers' personal strategies and negotiation skills with students, colleagues, and senior teachers. More importantly, it encompasses teachers' engagement in understanding and navigating the power relations that exist within institutional settings. These power relations can be both structural and discursive.
While structural power relations are explicitly linked with the prestige, privilege and symbolic capital that different categories of teachers are given, discursive power relations deal with how lecturers, also known as 'novices' in the current university teacher hierarchy, see their professional identity in relation to 'senior' or 'expert' teachers. The analysis of discursive power relations pays attention to the broader experiences of novice teachers and their sense of imagined professional identity. Another important aspect of teacher identity construction is the support that teachers are provided with by the institution. Whether or not novice teachers receive appropriate induction and other professional support from the institution and from other colleagues, mostly senior teachers, plays a significant role in newly appointed teachers' finding pathways for professional development. In the absence of strong institutional support mechanisms, supportive collegial relationships and mentoring, novice teachers face challenges such as an inability to deal with courses and to build a strong sense of confidence among students.

In line with Varghese et al.'s (2005) framework, we explore and discuss university teachers' identity by drawing on three broad theories which consider teacher identity part of lived social and professional experiences: Tajfel's (1978) classic social identity theory, Lave and Wenger's (1991) community of practice, and Simon's (1995) image-text. While social identity theory allows us to understand the social identities of teachers that are salient in their professional development, community of practice provides a perspective for looking at how novice teachers, the lecturers, construct their professional self and membership in interaction with teachers higher in the hierarchy, the 'Readers' and 'Professors'. In other words, the theory of community of practice helps us understand how lecturers build professional relations with their seniors, Readers and Professors, and in what ways the current hierarchy affects lecturers' broader professional membership in their institution. Similarly, the concept of image-text provides insights into the expectations and imaginations of university lecturers. To put it differently, image-text supports the idea that teacher identity is not just about what teachers 'are' but also about what teachers 'want to be' and 'want to see'. Together, these theories have helped us understand how teacher identity construction involves both the social and the professional trajectories of university lecturers.

Research Questions

The major research question we address is how university teachers construct their professional identity in the current situation of Tribhuvan University. More specifically, we explore the following questions: How do university lecturers perceive their own teacher identity in the existing context of Tribhuvan University? What support have they received for their professional development? How does the existing sociopolitical culture of Tribhuvan University impact teachers' identity? What recommendations do the teachers provide for their professional development as university lecturers?

Rationale for the Study

Teacher identity construction plays a major role in teacher professional development. Recent studies have shown that it is necessary to understand how teachers 'feel' and 'live' their professional trajectories in order to develop appropriate support mechanisms for their professional growth.
This understanding requires engaging teachers in telling and critically reflecting on their own personal stories, which incorporate the professional, social and cognitive aspects of identity construction in a broader institutional setting. Although Tribhuvan University recruits and promotes teachers through its Service Commission, what is missing from the current system is support for teachers, mainly early-career lecturers, in their professional development. There is a general lack of understanding about how university teachers develop their professional identity. This study is important as it explores the struggles, emotions and professional trajectories of university lecturers, which provide critical insights into what support university teachers need for their professional development. More importantly, this study offers important policy implications with regard to the professional development of university teachers. As we look at the interconnectivity of social and professional identities, this study offers critical insights into understanding teacher professional development as a negotiation of power, identities and sociopolitical dynamics in the current academic system and political environment of Tribhuvan University.

Research Methodology and Data Analysis

This study adopts a phenomenological research design to collect and analyze data. As a qualitative method, a phenomenological study focuses on an in-depth analysis of participants' lived experiences, here those of teachers in Tribhuvan University, and draws common themes from the analysis of those experiences. We purposefully selected six lecturers from the University Campus of Tribhuvan University: four lecturers from the Faculty of Education and two from the Faculty of Humanities and Social Sciences. The experiences of these teachers were collected through a series of in-depth interviews. More specifically, we interviewed the teachers in both formal and informal settings and observed their interactions with other teachers and students to understand their experiences. We also observed professional activities, such as conference presentations and publications, that support their professional development. We recorded all the interviews using an audio recorder and field notes. The interviews and field notes were transcribed, and the transcription was coded using an analytic and comparative approach to grounded theory, which focuses on thematic comparison across participants' experiences. In other words, we organized the codes from each narrative under broad themes related to teacher identity and interpreted them using a comparative approach. This approach helped us understand how each teacher's experiences are unique and critical for drawing policy recommendations. Teacher voices, ideologies, social identities and the teachers' sense of membership in the university teachers' community of practice were kept at the center of the data analysis process.

Delimitations and Ethical Issues

By nature, a phenomenological study like this one includes a small number of participants; in this study, we collected detailed narratives of six teachers. Another limitation is that we included lecturers from only two streams: education, and humanities and social sciences. In a phenomenological study, ethical issues, particularly personal safety, are important.
We protected the participants' right to anonymity by using pseudonyms throughout our report. At the beginning of the study, we told the participants that they could withdraw their participation at any time if they felt uncomfortable being part of the study. In the remainder of the paper, we discuss the major themes drawn from the data, beginning with how the university lecturers perceive their own identity in the existing university structure.

'Systematically accepted': Sense of professional identity and support in the structure of Tribhuvan University

Regarding their sense of belonging in the existing university system, the participants said that their respective institutions involve them in professional activities that are 'part of the duty'. Such activities include regular teaching, supervising thesis students, and setting and evaluating tests. However, they feel that such regular activities do not contribute much to developing a professional identity. Teacher A, for example, reveals that it is not institutional support but his own effort that has helped him develop his professional identity. He says that "students made me what I am." He further says:

Frankly speaking, there is little or no institutional support for my professional development. Except for a few guest lecture sessions, I don't remember my department sending me to attend any refresher courses or seminars. There is no such sponsorship. There is no additional incentive, reward or acknowledgment of good works. Else, in these ten years of service, I would have had the opportunity to recall a day when my institutional head called and congratulated me for at least ONE good work I did. No; there is nothing to recall. If we work hard with our own conscience, maybe professional development will come about. There is ZERO initiative from the department.

Like Teacher A, other teachers also said that they hardly receive professional development opportunities from their institution, and that the leadership rarely recognizes their good work. In this regard, Teacher D says:

I don't have any exciting memory to recall from among my colleagues, except for a few of my classmates and colleagues, who always gave me their hands in need. But my leadership never made me feel anything warm. The only memorable anecdotes I have are connected with my students. I remember students saying how, due to my personal love and care, they resolved their depression and withdrew their decision to quit study, and held themselves back until they completed their master's degree. I remember them saying they are in the writing and journalistic field with inspirations from me. Many of my students have become a part of my joys and sorrows. They were with me during my family tragedies and my family celebrations. They still are. With few or no relatives living near me in Kathmandu, I turn to my students whenever I need close people. And they have always presented themselves. This makes me think I am very much with my students and they with me.

The teachers also feel that the existing hierarchical structure is affecting their professional development: junior faculty members do not receive much support from their institutional leadership. For example, Teacher A reveals:

But my department perhaps has no information about this, or at least is indifferent to my overwhelming popularity among my students.
I am a very junior faculty member, and the number of recommendation letters I write for students whenever they are applying abroad makes me think they confide in me and have faith in me. Yet, I am Hemingway's protagonist in a lonely sea. This much is for sure.

Regarding the institutional support, Teacher E says:

In case of opportunities, self goes first. If it is economically or anyway beneficial, then seniors go first. And if a large number or everybody can get involved, then juniors can go as well. In case of institutional support, sometimes ICT related trainings are held for professional development. When I first entered here in the semester system, we were given training on how to teach through PowerPoint. But there's not as much support as there should be.

Teacher C has a similar perspective:

Years back I did ICT related training. Other than that, training for teachers and motivations, I haven't got any. Even when journals are to be published, they look for their own people or those who are close to them. So, whatever you do, you do it on your own. If you can speak up and claim for your article to be published, then something can happen, but if you stay quiet then nothing is going to happen. From the side of the institution, I don't think it has done anything for its teachers. Instead, when people themselves want to do something good, they are demotivated, difficulty is created in front of them, pulling of legs begins. But not everyone is like that. A few teachers have encouraged me at times and that keeps me motivated.

Teacher C, who had taught for 10 years in a campus outside the valley, feels that there is a lack of institutional support for his professional development. Except for regular leave, there are no support plans for the further professional development of faculty members. He tells his experience as follows:

In case of professional development, the organization provides leave to do MPhil and PhD. Besides that, for regular teaching, like in-service teaching and extra motivations and classes, we haven't had any assistance from the university. During my ten years of teaching, while I stayed in the hostel, I once participated in a course dissemination program. Since then I haven't had any opportunity. And in that also, only TA and DA were provided, nothing else. Besides that, research trainings, workshop trainings and other knowledge-building programs organized by the university are very few. Even in those, one can participate only if one has personal influence and contact. There's no HRD [Human Resource Development] department in the university to look after that field. And now I've heard that the University Grants Commission supports some proposals for such programs. But while there are HRM [Human Resource Management] departments in other organizations, we don't have one here in TU. After candidates are appointed, an HRM department looks after creating programs for developing, motivating and uplifting them. There's nothing like that here.

Teacher C stresses the need to establish a human resource development department to support university teachers' professional development efforts. His experience shows that, except for participating in a 'course dissemination' program, he has not received any opportunity for in-service professional development. He recounts:

Whatever is done, is done by yourself, through your own influence. People care very little about each other. Intimacy is low. Employees of one department do not know the employees of the other departments.
The university has never organized any program so that we could get to know each other.

Furthermore, he claims that 'personal influence and contact' are necessary to participate in professional development activities.

Invisible academic and professional identity: Factors affecting identity construction

Teacher identity is shaped by the sociocultural and institutional landscape. The participants in this study see that the new institutional culture, which is influenced by the national political context, has significantly shaped their professional identities. One of the participants (Teacher D), who has taught for more than 10 years in Tribhuvan University, shares that it has been difficult for teachers to survive by doing academic work only. For him, academic identity is valued less than the 'membership of political parties'. He claims:

There's a very slim chance of survival in the university through only a pure academic exercise. If you cannot create your clear identity through the affiliation of any political ideology, then it won't be easy for you. Before evaluating your performance, they look for which party you belong to. If you are not able to identify yourself that way, no one will recognize you, no one will protect you. You become like an eklo brihaspati. So, no one will appreciate your academic life. If you think of existing independently by doing only academic work, that will create difficulty.

Teacher D's metaphor of 'eklo brihaspati' (a 'lone Brihaspati', someone left entirely alone) indicates that teachers who have no affiliation with partisan politics hardly receive institutional support for professional development. Since university institutions are run on the basis of 'bhagbanda' (sharing among teachers' groups affiliated with political parties), teachers who do not belong to any group become 'helpless' in many ways. In this regard, he reveals:

Yes, this is the culture of this university. There is a scarcity of capable people in the university, and those who cannot compete and cannot make academic contributions are abundant. So, those people take the shelter of a party to exist, and have strong political power behind them. And the person who does not follow the culture of factionalism has to get frustrated and flee even though he/she is capable.

He further states:

Only with academic identity, it is difficult to grow, survive and thrive, and to have satisfaction. If you have capacity and have made contributions, then you must be given a role. While giving you that role, there is public concern about who you are. And as the positions are already divided according to parties, you cannot be given a position when you don't have a clear political identity, even if you are an expert.

His observation indicates that 'academic and professional identities are secondary' and that teachers are primarily identified by 'their affiliation to teachers' groups, who they are close to and who they vote for.' One critical issue that emerged from the participants' narratives is that teachers' identification by political affiliation begins at the hiring stage. Teachers, even part-time ones, are expected to belong to political parties and to be recommended by teachers' groups in the university to ensure the 'authenticity of their affiliation'. Teacher E told us that once teachers are first known by their 'political identity', their academic identity remains 'invisible' and 'secondary' throughout their academic career.
He shares his story as follows: The concept of political factionalism was there when I entered here. Teachers from one group padlocked our department when part-time teachers were hired. They said that the selection was not according to their demand. At the time of promoting part-time teachers to contract positions, it was very difficult for our Head. He did not know which party the part-time teachers belonged to. He suggested that the part-time teachers should get the support of each faction of teachers. Head sir was saying it for a good cause. Finally, each of them received support from different factions and was promoted to contract teacher. From that point, those teachers' identity has changed. Now they are known as members of political factions. They are all good teachers…but it's unfortunate that their academic identity is secondary. A teacher's professional identity is shaped by multiple factors. One of the primary factors is salary. In this study, teachers reveal that the existing salary is not sufficient for them to carry out research and other professional development activities. Teacher F, for example, states: The University has not been able to utilize people's potential to the maximum level. Less work. Less payment. The service and facilities provided by the Nepal Government are very low. To be satisfied with this, economically, is challenging. The situation of salary is such that people will want to leave the profession in two days. Like Teacher F, teachers in this study are not happy with the salary. Since the salary is 'very low', as Teacher F says, university teachers 'want to leave the profession in two days'. More importantly, teachers in this study are critical of the unsystematic promotion system. Teachers believe in 'capacity' rather than promotion. Teacher E, for example, asserts that 'there's no difference between a lecturer and a professor. If one can build capacity then lecturer is also enough.' Teacher E recounts: There is not much difference between a professor and a reader. Because promotion is done through sequence and setting. If my articles are evaluated by someone who favors me and supports me, then I can easily become a reader and a professor. But if he does not support me and is biased against me, then wherever my article may be published, he won't favor me. I am not promoted. Like Teacher E, other participants in this study also reveal the emerging and influential 'culture of factionalism' in Tribhuvan University. Since teachers receive support based on their loyalty to 'factions', in terms of partisan politics, the participants in this study feel 'alienated'. Teacher E feels that even his individual efforts are 'restricted' by the institution. For him, 'when color does not match', the leadership creates problems for individuals as well. 'Color' here refers to the 'political color' of teachers. Sense of fear and hierarchical structure of the university Tribhuvan University has a strong hierarchical structure that gives more importance to years of teaching experience than to teachers' academic strengths and research. Professors and Readers enjoy more privilege in receiving institutional support than lecturers. Participants in this study agree that the existing structure and 'hierarchical system' in the university is not supportive of lecturers. Indeed, Teacher D feels that there is 'a sense of fear' among teachers due to the hierarchical structure of the university. It's not that our seniors do not mentor us. Our department is small and they do support us.
But even when they are willing, they have not been able to support us all the time. They themselves are marginalized. The teachers in this university also expressed a sense of fear about the existing culture of factionalism and hierarchy. One important phenomenon that emerges from the narratives of the teachers is the 'aaphnomanche' culture. In this regard, Teacher B shares his experience as follows: Unless someone considers you one of their own, you cannot enter anywhere [in the university]. All doors are closed. Someone who is less competent than you goes ahead because of membership of factions. I am really frustrated seeing this situation. When I became open as a member of one faction, everyone's behavior towards me totally changed. Now, I feel safe. If I am in any problem, my faction is behind me. So, here, instead of worrying about academic work and professional development, I have to fear whether I will be a victim of some other party. We have to focus on which party is dominating which sector, and where I can get an opportunity. This is similar to a kind of fear. So, while entering the classroom also, there's a similar kind of situation. Politics among teachers is reflected among students as well. Once, a teacher in our department was replaced through the influence of strong political power. And that teacher was not even informed in advance. So, we raised our voice against such humiliation. Then they made a plan. Earlier, too, a few problems had been seen in his class. So, they thought that when the students themselves rejected the teacher, they would bring their member into that place. But the students did not reject the teacher. Such plans are also made. The hierarchical structure also affects teachers' identity. Teachers in this study consistently argue that there is 'unsaid discrimination' between 'senior' and 'junior' teachers. Teacher C, for example, reveals: There are divisions like lecturer, part-time teacher, associate professor and professor. The routine is made according to the convenience of senior professors, according to their time. And part-time teachers are compelled to come at any time the department wants. I also began that way. I had to come in the morning, in the afternoon, whenever they wanted me to come. But not all teachers are like that. Some are there to encourage as well. But there's a gap between seniors and juniors. Similarly, Teacher C reveals: There's a gap between senior and junior. In any committee, only seniors are kept; there's no place for juniors. We as juniors do not even get to know what kind of committee that is and how it functions. People think about their future. And seniors are there in the committee for promotion and everywhere. If they speak against the seniors, they are finished. Those people who are close to them, who obey and work for them, are favoured by the seniors. So, it's difficult to challenge. Everything is connected to seniors. In the same line, Teacher D says: Yes. And it's very difficult. Even in routine allocation, there is hierarchical influence. Like I already said, the convenience of seniors is acknowledged while preparing the routine. But even if juniors have a problem, they must come at any time they are called. And, also, people don't speak. There's a trend that whatever a senior says must be done. And the one who speaks is considered to be disrespectful. As Teacher D has revealed, the existing hierarchical culture impacts the daily activities of lecturers.
Since the allocation of time and classes is also influenced by the hierarchy among faculty members, junior faculty are forced to follow what their seniors say. More importantly, this hierarchical culture has silenced the voices of the juniors. Junior faculty members 'don't speak' even if they would like to share their discomfort, because speaking up is considered 'disrespectful' to seniors. The experiences of teachers who are at the bottom of the professional hierarchy as determined by Tribhuvan University show that there is a lack of institutional support for the professional development of faculty members. Teachers work hard to explore their own professional opportunities. The teachers' experiences also imply that the existing hierarchical structure is less supportive of university teachers' professional development. Teachers' sense of fear is a critical issue that emerges from the existing hierarchical structure of the university. More importantly, the growing influence of political factions has rendered their professional identity invisible and foregrounded their political identity. Findings and Conclusion According to the research, since academic university teachers, particularly lecturers, are not provided many opportunities for their professional development, they have a strong sense of demotivation and fear. Academic identity remains in crisis as teachers are evaluated in terms of their political identity. University lecturers have a strong sense of fear within the existing hierarchical structure of the university system. Institutional support for the professional development of teachers hardly exists in the university. Lecturers wish to see the university as an ideal place for academic practice, and the responses of the respondents reveal that appointments made on the basis of political groupings have negative effects. The research concludes that the lack of motivational factors influences the professional development of lecturers, and that their evaluation vis-à-vis political identity hinders their professional life. The existing hierarchy in the university affects one's development, ultimately influencing the professional identity of the individual; therefore, unless a lecturer is supported by his/her institution, s/he faces problems in career development. Institutional negligence breeds frustration. The recent scenario reveals that the aura of the university is gradually declining; to restore its standing, it must be made a temple of knowledge, not a place of political foul play. Finally, appointments should be made not on the basis of political affiliation but on the basis of expertise.
2020-07-23T09:09:36.377Z
2019-07-31T00:00:00.000
{ "year": 2019, "sha1": "66f9d9f515eec8508af8f59a327295b28776d4ff", "oa_license": null, "oa_url": "https://www.nepjol.info/index.php/batuk/article/download/30119/24145", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "21dd940c9c37ba4ca511930258100eaa484f4e73", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
18009076
pes2o/s2orc
v3-fos-license
Evaluation of vascular lesions using circulating endothelial cells in renal transplant patients Objective: To investigate the correlation between circulating endothelial cells (CECs) and vascular lesions in renal allografts. Methodology: Sixty-two renal transplant patients were divided into four groups according to biopsy data. CECs were isolated from peripheral blood with anti-CD146-coated immunomagnetic Dynabeads and counted by microscopy at the time of biopsy. CEC numbers were compared across groups, as was the correlation between CECs and C4d and vascular changes in the different groups. Results: CEC counts were higher in the acute rejection (AR) with endarteritis group than in the normal group (p < 0.01), the acute tubular necrosis (ATN) group (p < 0.01) and the chronic allograft nephropathy (CAN) group (p < 0.01); there was no difference among the ATN, normal and CAN groups (p = 0.587). There was no difference among the normal group without hyaline changes, the normal group with hyaline changes and the CAN group with hyaline changes. An increasing CEC count was related to C4d-positive AR (p = 0.008; κ score = 0.519) and to infiltration of inflammatory cells (p = 0.002; κ score = 0.573) in peritubular capillaries (PTCs). The CEC count decreased after intensive therapy in five patients (p = 0.001). Conclusion: Elevation of the CEC count in blood was related to endarteritis. Elevation of the CEC count was also related to C4d deposition and infiltration of inflammatory cells in PTCs. As surgical techniques and new immunosuppressive agents have developed, the short-term survival of patients who have undergone kidney transplantation has greatly improved. However, refractory acute rejection (AR) after kidney transplantation remains the main cause of short-term graft loss. This has increased the prevalence of chronic allograft nephropathy (CAN), which influences the long-term survival of grafts (1,2). Several studies have focused on the diagnosis and treatment of AR to find a marker for the prognosis of AR, as well as to find an individual treatment for each patient that can improve survival after AR (3-5). Vascular rejection of renal allografts has been associated with corticosteroid resistance as well as poor short- and long-term outcomes (6,7). Antibody-mediated AR was initially identified by Racusen et al. (6) and then further described by the same author (7). Because of the different pathogenesis of each type of AR, they must be distinguished and treated individually (7). More than 30 yr ago, Bouvier and colleagues (8) first reported the presence of non-hematopoietic cells of endothelial origin in the blood of rabbits after endotoxin injection. This was also confirmed by subsequent studies by Hladovec et al. (9,10). Circulating endothelial cells (CECs) have been associated with several pathological conditions that have vascular injury in common (11-13). Also, endothelial cells or endothelial progenitors in the circulation can "home" to sites of ischemia (14,15) as well as play a part in the formation of thrombotic neointima and angiogenesis on vascular prosthetic surfaces in vivo (16,17). Identification of the origins of CECs and blood endothelial outgrowth may facilitate the use of these cells in clinical diagnosis. Also, measurement of CECs is useful in antineutrophil cytoplasmic antibody (ANCA)-associated small vessel vasculitis (18). Woywodt et al. reported that CECs are a novel marker of cyclosporine-induced endothelial damage in renal transplant patients (19).
The number of CECs in patients with acute vascular rejection was elevated, and the authors concluded that the CEC number was a novel marker of endothelial damage in renal transplantation (20). There was also a report disclosing that an increase in circulating endothelial cells predicts the development of cardiovascular and vascular events (21). Vascular injury in renal allografts can be assessed by renal allograft biopsy. Intimal arteritis and fibrinoid necrosis are signs of vascular rejection (6). Inflammatory cells infiltrating peritubular capillaries (PTCs) have also been associated with antibody-mediated rejection (7). Intimal thickening in patients with CAN has also been documented (2). The relationship between CECs and such changes was unclear until now. The present study was designed to analyze the relationship between CECs and vascular injury in renal allografts and to find a non-invasive marker for vascular injury in renal allografts. Ethical approval of the study protocol The study protocol was approved by the Ethical Committee of Jinling Hospital. All patients provided written informed consent to be included in the study. Materials M-450 Dynabeads were purchased from Dynal (Oslo, Norway). Anti-CD146 antibodies were obtained from Biocytex (Marseille, France). All other reagents were of the highest grade commercially available. Patients and control subjects Sixty-two subjects who had undergone renal transplantation were selected for this study. These patients were hospitalized and underwent renal biopsies at the Renal Transplantation Center of the Research Institute of Nephrology in Jinling Hospital from November 2006 to December 2007. Eighteen healthy volunteers were included so that a normal range of CECs was available. These healthy subjects were selected from the staff of the Research Institute of Nephrology. All patients were diagnosed according to histological changes in renal allograft biopsies using the criteria of Banff 07. They were initially separated into four groups: AR (n = 25); acute tubular necrosis (ATN) (n = 6); normal allograft (n = 18); and CAN (n = 13) (21). The AR group was then subdivided into the acute antibody-mediated rejection (AAMR) group (n = 13) and the T-cell-mediated rejection (TCMR) group (n = 12). The AR with endarteritis group (n = 12) was specially analyzed in this study. The normal group was defined as patients with normal renal function and normal histological changes on a protocol biopsy. The selected CAN group had the characteristics of intimal thickening accompanied by interstitial fibrosis or tubular atrophy (21). Renal histological examination Ultrasound-guided percutaneous biopsy was performed on each transplant patient. Formalin-fixed tissue was embedded in paraffin using standard procedures. Sections (thickness, 2 μm) were stained with hematoxylin/eosin (H&E), periodic acid-Schiff (PAS), silver methenamine and Masson's trichrome for microscopic pathological diagnoses. For immunofluorescence analyses, renal tissues in optimum cutting temperature (OCT) compound were snap-frozen and maintained in liquid nitrogen. Immunofluorescent staining was carried out on 3-μm cryostat sections using fluorescein isothiocyanate (FITC; Dako, Copenhagen, Denmark)-labeled rabbit anti-human immunoglobulin (Ig) G, IgA, IgM, complement (C)3, C4, and C1q (Dako, Carpinteria, CA, USA). All samples were evaluated by two pathologists who were blinded to the CEC data.
Isolation and counting of CECs Isolation of CECs was carried out by immunomagnetic separation after an antibody incubation step, according to previously reported and validated methodology (20). Three milliliters of ethylenediaminetetraacetic acid (EDTA) blood was collected for the isolation of CECs, after informed consent was obtained, from renal transplant patients and from healthy volunteers. Anti-endothelial cell monoclonal antibody (anti-CD146)-coated M-450 Dynabeads were stored at 4°C for a maximum of four wk. Blood from study subjects and healthy controls was obtained by venipuncture. After careful rotation of the tube, 1 ml blood was mixed with 1 ml isolation buffer (phosphate-buffered saline [PBS], 0.1% bovine serum albumin [BSA], 0.1% sodium azide and 0.6% sodium citrate) at 4°C. Samples were mixed in a head-over-head mixer for 30 min at 4°C and separated using a Dynal MPC-1 Magnetic Particle Concentrator (Dynal, Oslo, Norway). The sample was washed with buffer four times inside the magnet at 4°C. Between each washing procedure, the sample was flushed ten times with buffer using a 100-μL pipette. The cell-bead suspension was then resuspended in 200 μL buffer. Cells were counted using a Nageotte chamber. Endothelial cells were larger than other blood cells, had a well-delineated round or oval cell shape, and carried >5 beads (Fig. 1). Various concentrations of fresh human umbilical vein endothelial cells were diluted in the blood of healthy volunteers to serve as positive controls. Statistical analyses Data are mean ± standard deviation. Significant differences between two groups were analyzed using the χ² test. Concordance between two groups was evaluated using Fisher's exact test. All p values were two-sided. p < 0.05 was considered significant. Analyses were carried out using the SPSS version 13.5 statistical package (SPSS, Chicago, IL, USA). Demographic information and clinical characteristics Sixty-two renal transplant patients were separated into four groups according to the histology of the allograft biopsy. Demographic information and clinical characteristics are shown in Table 1. CEC count in different vascular injury groups The CEC count in the different vascular injury groups is listed in Table 2. Vascular injury included endarteritis in the AR group, hyaline arteriolar thickening in the normal renal function group, and chronic hyaline arteriolar thickening in the CAN and ATN groups. The CEC count was highest in the endarteritis group. The differences in CEC count between the other groups were not significant (Table 2; Fig. 2). We also analyzed the CEC count among three groups (Table 2). CEC count in different types of AR groups To identify the relationship between CEC count and AR, we further analyzed the CEC count in the different types of AR. C4d deposition in PTCs was considered to be a marker of antibody-mediated rejection. The criteria for C4d deposition were those stated in Banff 07. All patients in the C4d-positive group were graded C4d3 according to Banff 07. The CEC count in the AR group was higher than that of the normal group (p < 0.01). The CEC count in the C4d-positive group was higher than that of the C4d-negative group (p < 0.01; Table 3; Fig. 3). The CEC count in the C4d-negative group was also higher than that of the normal group (p < 0.01).
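To make the statistical workflow above concrete, the following short Python sketch (using scipy) reproduces the kinds of calculations reported in this paper: a χ² and Fisher's exact test on a dichotomized CEC count against a binary histological finding, together with Cohen's κ, sensitivity and specificity. The 2×2 counts below are synthetic placeholders, not the study data, so the printed values will not match those reported here.

# A minimal sketch of the group statistics described above.
# The 2x2 counts are synthetic placeholders, NOT the study data.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: CEC count >=24/uL vs <24/uL; columns: finding present vs absent.
table = np.array([[10, 4],
                  [3, 8]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# Cohen's kappa for concordance between the two binary classifications.
n = table.sum()
observed = (table[0, 0] + table[1, 1]) / n
expected_agree = (table[0].sum() * table[:, 0].sum() +
                  table[1].sum() * table[:, 1].sum()) / n**2
kappa = (observed - expected_agree) / (1 - expected_agree)

# Sensitivity and specificity of the CEC threshold for the finding.
sensitivity = table[0, 0] / table[:, 0].sum()
specificity = table[1, 1] / table[:, 1].sum()

print(f"chi2 p = {p_chi2:.3f}, Fisher p = {p_fisher:.3f}, kappa = {kappa:.3f}")
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")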
Relationship of increasing numbers of CECs with C4d-positive cells and inflammatory cells in congested peritubular capillaries According to the range of CECs in the healthy group and the normal group, we considered a CEC count ≥24 CECs/μL as indicating an increased number of CECs. We further analyzed the relationship between increasing numbers of CECs and the number of C4d-positive cells: there was a significant correlation between the two factors (p = 0.028; κ score = 0.437). The presence of inflammatory cells in congested peritubular capillaries was considered to reflect changes of acute humoral rejection. We also analyzed the correlation between the number of inflammatory cells in congested peritubular capillaries and increasing numbers of CECs in the AR group: there was a significant correlation between the two factors (p = 0.002; κ score = 0.573). Pathological characteristics and short-term prognosis To evaluate the relationship between the CEC count and pathological characteristics and short-term outcome, we first divided patients in the AR group into a CEC count ≥24/μL group and a CEC count <24/μL group. We then compared the pathological characteristics and short-term outcome between the two groups. The mean prevalence of glomerulitis, mononuclear cell interstitial inflammation, and tubulitis was compared between the two groups according to Banff 07 criteria. C4d deposition in PTCs, intimal arteritis, and mononuclear cell interstitial inflammation in PTCs were compared, as were corticosteroid resistance and graft loss at one yr. Only the prevalence of intimal arteritis was significantly different between the two groups. Corticosteroid resistance and graft loss at one yr were higher in the CEC count ≥24/μL group (Table 4). The sensitivity and specificity of a CEC number >24 for AR with endarteritis were 83.3% and 69.8%, respectively. Changes in CEC count in subjects with acute vascular rejection before and after effective treatment Five AR patients suffered endarteritis two wk after transplantation. These AR patients were corticosteroid-resistant and received 3-5 rounds of immunoadsorption. The immunosuppressive protocol was tacrolimus combined with mycophenolate mofetil and prednisone. The renal allografts recovered gradually after intensive immunosuppressive therapy. The CEC number also decreased to within the normal range as the function of the renal allograft recovered (Fig. 4). Discussion Over the past 30 yr, CEC numbers have been measured in normal individuals and in patients with various diseases (22,23). However, these reports are diverse not only because of the different diseases studied, but also because of different methods of isolation and detection (16,24). In 1991, George et al. (25) unequivocally demonstrated CECs in whole blood using an endothelial cell-specific antibody. Subsequently, several research teams identified CECs in whole blood using endothelial cell-specific monoclonal antibodies. Damage to endothelial cells is the hallmark of acute vascular rejection, which is an important predictor of graft loss. Nevertheless, endothelial damage and cell death do not necessarily lead to scarring and loss of vascular function. Instead, repopulation of endothelial leaks by recipient stem cells has recently been documented in renal transplant recipients who have previously sustained acute vascular rejection (20). A continuing interplay between vascular damage and repair has therefore been postulated.
This concept mandates that damaged endothelial cells undergo detachment from the basement membrane at some point in the disease process. Putative mechanisms of detachment, and factors that protect against it, have been reviewed (26). An increased number of CECs in acute vascular rejection has been reported (20). The present study confirmed this finding. The CEC count was highest in patients with AR with endarteritis within the AR group (Fig. 1). However, the new Banff 07 criteria emphasize that C4d deposited in PTCs is considered to be a marker of antibody-mediated rejection; this type of rejection leads to poor outcomes. Therefore, we compared the CEC count between the C4d-positive group and the C4d-negative group and further analyzed the correlation between increasing numbers of CECs and C4d deposition in PTCs in the AR group. We found a correlation between these two factors. We also evaluated the relationship between increasing numbers of CECs and monocyte infiltration around PTCs. We found these two factors to be correlated. These findings indicate that injury to the vascular endothelium of the graft probably plays an important part in antibody-mediated rejection. The mechanism of this phenomenon should be investigated further. Hyalinization of arteries is common in renal allografts and is related to hypertension and calcineurin inhibitor nephrotoxicity. This change has also been associated with injury to endothelial cells (27). We evaluated CEC numbers in the subgroup of the normal-function group with hyaline arteriolar thickening and compared them with CEC numbers in the normal group without hyaline arteriolar thickening and with CEC numbers in the CAN group with hyaline arteriolar thickening. The results showed no significant difference among the three groups. One study revealed that cyclosporine can increase the number of CECs in transplant patients compared with those in healthy subjects (19). In the present study, all patients received calcineurin inhibitors, and the CEC number was also higher than that in the healthy group (data not shown). This observation could be explained by this effect (19,28). Therefore, hyaline arteriolar thickening does not lead to an increase in the number of CECs. Contrasting this with the increased number of CECs in the AR with endarteritis group described above, we concluded that only acute injury to endothelial cells leads to an increase in the number of CECs in the peripheral circulation. When recovery from such acute injury begins, the CEC number decreases, but the vessel does not completely recover (Fig. 3). Graft endarteritis was considered to be a characteristic of T-cell-mediated AR according to the Banff 97 and Banff 07 criteria. Endarteritis showed good correlation with C4d deposition in PTCs (p = 0.003; κ score = 0.601). Antibody-mediated rejection also requires the activation of T cells, which leads to complement activation and allograft injury; a mixed type of rejection was also noted. Therefore, T-cell-mediated and antibody-mediated rejection cannot be completely separated in practice, and treatment for such patients should be tailored to the individual. Patients with an increasing number of CECs in the AR group had a high prevalence of corticosteroid resistance and poor short-term outcomes. This phenomenon might be explained by an increasing number of CECs being related to endarteritis and C4d deposition in PTCs.
Banff 97 guidelines suggest that C4d positivity can be considered a form of corticosteroid-resistant AR, and that the short- and long-term prognosis is poor (21,29-31). Work from our institution confirmed this. Tacrolimus combined with mycophenolate mofetil can effectively treat C4d-positive AR in the short term, but the predictive value for long-term survival should be studied further (32). We did not evaluate the origin of the CECs. Based on the present study and other reports, we hypothesized that the CECs originated from the donor, because increasing numbers of CECs were related to injury of the endomembrane of the allograft vessels. This question could be answered by gene sequencing of these CECs. In summary, we revealed that an increasing CEC number was related to acute injury to the endomembrane of the renal allograft. The highest CEC counts were related to endarteritis and decreased with recovery from the injury caused by endarteritis. CEC number was also related to C4d-positive AR and to the presence of inflammatory cells in congested peritubular capillaries (which also supports the notion that antibody-mediated rejection is related to injury to the endomembrane). CEC number was not related to hyaline arteriolar thickening or chronic vascular injury in renal allografts. An increasing number of CECs can be used as a predictor of poor short-term outcome of AR of renal allografts.
2018-04-03T01:11:18.913Z
2012-04-19T00:00:00.000
{ "year": 2012, "sha1": "06dab94dc2fafbe35489392f5f2c785a464957eb", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1399-0012.2012.01620.x", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "06dab94dc2fafbe35489392f5f2c785a464957eb", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
229370561
pes2o/s2orc
v3-fos-license
Index Forecast Study Based on an Amended Weighted Markov Chain in China The Shanghai Composite Index is one of the most representative indexes of the Chinese stock market; the stock market index is a gauge of the national economy and an indicator of the direction of economic development. Therefore, forecasting the Shanghai Composite Index is of great significance in theory and practice. Analysis shows that the short-term fluctuation of the Shanghai Composite Index conforms to the fundamental assumptions of Markov chain forecasting. This paper mainly applies an amended weighted Markov chain model to study and forecast the short-term trend of the Shanghai Composite Index, hoping to help investors and potential investors make investment decisions and to provide a reference for them. I. INTRODUCTION This paper aims to apply an amended weighted Markov chain model, implemented with the MATLAB tool, to study and forecast the short-term trend of the Shanghai Composite Index so that the model can provide references and guidance for economic individuals making investment decisions. The stock market index is an index describing the market price level and its changes. Investors bear price risk due to the fluctuation of stock prices. Rational investors want to know the relevant information about stock prices before investing. It is easier for investors to understand the trend of individual stocks, but more difficult for them to understand the trend of many stocks or even the whole market. Therefore, in order to meet their needs, some financial institutions make full use of the public information, professional knowledge and capabilities in the market and compile stock market indexes according to certain rules to help investors understand the stock market more clearly. A stock market index generally synthesizes the representative stocks of various industries, which helps investors evaluate investment performance and make predictions and judgments about the market. In addition, the stock market reflects the current economic situation. Therefore, the government, enterprises and other institutions take the stock market index as a reference index to judge the current economic situation. A high-quality stock market index can accurately reflect market information and be widely used in the investment community. (Manuscript received February 2, 2020; revised July 18, 2020. This work is supported by Shandong social science planning project (19CFZJ42), Qingdao social science planning project (QDSKL1901115), and 2019 National Statistical Science Research Project (2019LY31). Yanpeng Sun is with the School of Economics, Qingdao University, Qingdao, Shandong, 266071, China; e-mail: jjxy_syp@163.com.) With the advent of the era of big data, more and more statistical learning models have been applied in the financial field. Compared with traditional manual methods, statistical learning models have great advantages. An important application of statistical learning models in the financial field is stock market prediction. When trading stocks, rational investors usually perform extensive analysis on historical data before buying and selling. However, this method of manually analyzing data is time-consuming and laborious, and it easily generates errors, which increase investment risk and lead to investment losses. Statistical learning models can perform deep and accurate analysis on large amounts of data and give more accurate prediction results in a short time. Compared with manual analysis, statistical learning models improve investment efficiency and accuracy.
Statistical learning models can not only process data at magnitudes that cannot be handled manually, but can also browse news and social networking sites through computer technology to collect and process more investment-related information, which both broadens investors' information sources and saves their time. In recent years, support vector machines, neural networks, genetic algorithms and other statistical pattern recognition algorithms have been widely used in stock market prediction. The application of statistical pattern recognition algorithms to stock price prediction has become a hot topic in financial circles. A large body of results shows that statistical learning models are effective in predicting stock prices, and the combination of artificial intelligence and finance is the future development trend. With the development of the world economy, world finance is in a stage of rapid development and financial activities are increasing. The uncertainty of the trends of financial activities is also increasing. How to learn and master the rules of financial activities and predict their future trends has become the focus and the main research content of the academic and financial communities. Financial prediction can effectively provide the basis for making financial plans and decisions, and thereby maintain the healthy development of the financial market and maximize the profit of financial organizations. The more accurately the stock price tendency is predicted, the more correctly investors can make decisions on investment portfolios. This paper is based on the Markov chain model, but considers the fluctuation of the stock price, and thus increases accuracy to some extent. This paper is mainly divided into the following parts. Firstly, the work that has been accomplished by scholars will be summarized briefly and the amended weighted Markov chain model will be introduced. Secondly, the theoretical model will be presented. Then, the main body of this paper comes: the empirical analysis will be demonstrated in detail. The overall conclusion will be presented at the end. II. LITERATURE The origin of the Markov model can be traced back to the second half of the 1960s, when Baum et al. presented the original prototype of the model in a series of statistical papers. First, Baum (1966) [1] published a paper on the initial prototype of the Markov model, giving statistical inference for probabilistic functions of finite-state Markov chains. Then, Baum (1970, 1972) [2], [3] respectively gave a maximization technique for Markov chain probability functions, and an inequality and associated maximization technique for statistical estimation of probabilistic functions of Markov processes, which further expanded the 1966 work and laid a foundation for later work on the Markov model. Later, Ryan & Nudd (1973) [4] summarized the relevant content and application fields of the Viterbi algorithm, and extended the algorithm. Then, Levinson et al. (1983) [5] presented the application of Markov chain probability function theory to automatic speech recognition, combined theory with practical problems, and gave a model suitable for isolated word recognition. Rabiner & Juang (1986) [6] summarized the relevant contents of the Markov model on the basis of previous research. Krishnalal et al. [7] combined the Markov model with support vector machines and applied it to text mining and news classification. The Markov model is also widely used in the financial field.
Hassan & Nath (2005) [8], [9] first applied the Markov model to stock price prediction and gave a new method to predict stock prices. This method takes the opening price, closing price, highest price and lowest price of the stock as the input of the model, and predicts the stock price through parameter estimation, state decoding and other steps. Later, Srivastava et al. (2008) [10] applied the Markov model to credit card fraud detection, and the empirical results showed that it was effective to use the Markov model to detect such fraud. Hassan (2009) [11] combined the Markov model with a fuzzy model and applied it to stock price prediction. The model uses the Markov model for data pattern recognition, then uses fuzzy logic to obtain predictions, and can be tested on stock markets from different industries. The empirical results show that the prediction accuracy of the improved model is obviously improved. Recently, Caccia & Remillard (2017) [12] proposed a multiple autoregressive Markov model on the basis of the Markov model. With the improved model, a likelihood ratio test and a new goodness-of-fit test were conducted on the S&P 500 daily return rate, and it was found that the improved model had obvious advantages. Domestic research on the Markov model started relatively late, but in recent years the application of the Markov model in the financial field has attracted more and more attention from domestic experts, scholars and investment professionals. Using the Shanghai Composite Index and the Dow Jones Industrial Average, Jiang & Xu (2013) [13] proposed a stock index performance forecast method based on a grey residual model and a BP neural network. Shi (2014) [14] provided an ARIMA model based on wavelet analysis using the monthly average closing price of the Shanghai Composite Index and forecast the long-term trend of the index price. Song (2014) [15] determined share price inflection points and forecast the share price trend by smoothing share price fluctuations. Cui (2016) [16] introduced the normal Markov chain to analyze the economic market with main economic data indexes of China, demonstrating that Markov chain theory is practicable for economic data. Fei (2016) [17] analyzed the characteristics of random data in different environments and demonstrated that the Markov chain shows good precision in describing and predicting economic data in multiple random environments. Lin & Yang (2017) [18] introduced an improved hybrid neural network based on fuzzy granulation to study the stock market. Their study shows that the stock index is predictable and that stock market data have statistical regularities when no force majeure affects the stock market, and they obtained good results in predicting the price range of the stock index. Liang (2016) [19] creatively introduced the adaptive-network-based fuzzy inference system (ANFIS) model to the stock market, using the statistical characteristics of stock market data and analyzing the relationship between data changes and the flow of time; their empirical study shows a good description of stock time series data. These studies focus on the accuracy of point forecasts in share price forecasting but neglect the random fluctuation of the stock market, which inevitably compromises forecast accuracy. In contrast, the amended weighted Markov chain is used to forecast the future fluctuation range and state probabilities of the research object. It fully considers the random fluctuation of the stock market, featuring forecast results that are more scientific and practicable. III.
THEORETICAL MODEL Given a probability space (Ω, F, P), a random sequence X0, X1, ... defined on it is called a Markov chain if the following two conditions are met: (1) the state space of {Xn: n ≥ 0} is a countable set I; (2) for any n and states i0, i1, ..., in+1 ∈ I, as long as P{X0 = i0, X1 = i1, ..., Xn = in} > 0, the following equation holds: P{Xn+1 = in+1 | X0 = i0, X1 = i1, ..., Xn = in} = P{Xn+1 = in+1 | Xn = in}. Condition (2) is called the Markov property (also known as the absence of after-effect), which is the basic characteristic of a Markov chain. It indicates that the state at time n+1 is related only to the state at time n, and is independent of the states before time n. Put simply, the Markov property means that, given the present, the future has nothing to do with the past. Setting up an amended weighted Markov chain forecast model includes: 1) sequencing the forecast samples from small to large and partitioning them to construct the state space I; 2) determining the price index states in the different time buckets; 3) testing the Markov property; 4) calculating the autocorrelation coefficient of each order, rk = Σ(t=1..n−k) (xt − x̄)(xt+k − x̄) / Σ(t=1..n) (xt − x̄)², where rk stands for the autocorrelation coefficient of order k, xt for the price index of bucket t, x̄ for the mean price index value, and n for the length of the price index sequence; 5) normalizing the autocorrelation coefficients of all orders into weights, wk = |rk| / Σ(k=1..m) |rk|, wherein m is the maximum order to be calculated in the forecast; and 6) forecasting the state probability vector as the weighted sum of the k-step transition probability rows determined by the states of the last m periods, taking the state with the largest probability as the predicted state (a Python sketch of this procedure is given below, after the data description). IV. EMPIRICAL ANALYSIS A. Choosing Samples This paper uses the weekly closing price data of the Shanghai Composite Index from November 28, 2014 to July 17, 2015 as research samples, and the price trend of the sample data is shown in Fig. 1. Fig. 1 shows that the Shanghai Composite Index presented a steady rise from 2000 to more than 5000 points before June 2015, reflecting an inspiring vision of China's economy. During this period, the stock market was soaring, and the prices of almost all kinds of stocks increased, from which investors would benefit a lot. Therefore, people from all walks of life were willing to invest in stocks. However, constant fluctuation within the rising trend always exists. The objective law also indicates that the stock price will fall when it rises to a high point; that is, the market faces high downturn pressure. Since June 2015, the Shanghai Composite Index has temporarily dropped from its high point, which exposed enormous pressure in the Chinese economic recovery process. Those who purchased securities at a relatively high price confronted huge losses. Facing huge fluctuations in the stock market trend, an effective study of the future trend of the stock market is of great significance in social practice, as it will guide people to make relatively correct decisions on investment portfolios. B. Preprocessing Data The Shanghai Composite Index sample data are sequenced from small to large and evenly divided into five sample ranges according to the maximum difference: state 1, closing price lower than 2400; state 2, closing price between 2400 (included) and 3000; state 3, closing price between 3000 (included) and 3600; state 4, closing price between 3600 (included) and 4200; and state 5, closing price above 4200 (included). See Table I for details. The basic statistical distribution properties of the sample data (shown in Table II and in Fig. 2 and Fig. 3) were calculated with the MATLAB software. The statistical results indicated that the sample data do not follow a normal distribution and that their trend fluctuates around 3442.
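As a concrete illustration of steps 1)–6) above: the paper's computations were done in MATLAB, but the same procedure can be sketched in Python on a synthetic price series using the five-state partition from Table I. The sketch below is an illustration under those assumptions, not the authors' code, and the synthetic data will not reproduce the values in Tables III–V.

# Minimal Python sketch of the amended weighted Markov chain forecast
# (the paper used MATLAB; this is an illustration, not the authors' code).
import numpy as np

def to_states(prices, edges):
    """Map each closing price to a state index (0..4 for states 1..5)."""
    return np.digitize(prices, edges)

def autocorr(x, k):
    """Lag-k autocorrelation r_k as defined in step 4."""
    xbar = x.mean()
    num = np.sum((x[:-k] - xbar) * (x[k:] - xbar))
    den = np.sum((x - xbar) ** 2)
    return num / den

def transition_matrix(states, n_states, step):
    """Empirical step-k transition probability matrix."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-step], states[step:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    # Rows with no observations fall back to a uniform distribution.
    return np.divide(P, rows, out=np.full_like(P, 1.0 / n_states), where=rows > 0)

# Synthetic weekly closing prices standing in for the real sample data.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(30, 80, size=34)) + 2500

edges = [2400, 3000, 3600, 4200]           # state boundaries from Table I
states = to_states(prices, edges)
n_states, m = 5, 5                         # five states, five delay orders

r = np.array([autocorr(prices, k) for k in range(1, m + 1)])
w = np.abs(r) / np.abs(r).sum()            # step 5: normalized weights

# Step 6: weighted sum of the k-step transition rows selected by the
# state observed k weeks ago (each lands on the next forecast week).
prob = np.zeros(n_states)
for k in range(1, m + 1):
    Pk = transition_matrix(states, n_states, k)
    prob += w[k - 1] * Pk[states[-k]]

print("forecast state probabilities:", np.round(prob, 3))
print("predicted state:", prob.argmax() + 1)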
It can be judged from the autocorrelation test that the autocorrelation coefficient is always positive within a short delay and negative thereafter. The autocorrelation distribution is not triangularly symmetric, indicating that the distribution features of the Shanghai Composite Index sequence satisfy the basic assumptions of the amended weighted Markov chain model. The model can therefore be employed in the analytical prediction of the Shanghai Composite Index trend. The autocorrelation coefficients of all orders were calculated with the MATLAB software. The operation results show that the autocorrelation coefficients of five delay orders are significant, so the autocorrelation coefficients of five delays are taken, that is, k = 1, 2, 3, 4 and 5 (see Table III). The autocorrelation coefficients of all orders are normalized to obtain the Markov chain weights of each delay order. See Table IV for the operation results. E. Forecasting Index From the foregoing, with the weekly closing price data of the last five weeks (from Jun. 19, 2015 to Jul. 17, 2015) of the Shanghai Composite Index as initial data and the corresponding transition probability matrixes according to the states, the closing price states of the Shanghai Composite Index in the following two weeks and the corresponding state probabilities can be forecast; see the calculation results in Table V. The weights of the various prediction probabilities for the same state, the forecast probability of the Shanghai Composite Index being in that state, and the 'state space probabilities' in Table V are displayed in MATLAB as matrixes. The results are as follows: the probability of the Shanghai Composite Index being in state 4 within the next trading week is the highest, at 50.13%, so we consider that the fluctuation range of the Shanghai Composite Index should be between 3600 (included) and 4200. Using state 4 as the result of the first prediction week, the Shanghai Composite Index trend of the following week (from Jul. 25, 2015 to Jul. 31, 2015) can be predicted by repeating the above steps on this basis; in this way, the obtained prediction range still falls in state 4. In view of the slowly rising state of the Chinese stock market, we firmly believe the range of state 4 is the operating range of the Shanghai Composite Index over one to two weeks; the actual fluctuations are shown in Fig. 4. F. Back Test The actual operation of the Shanghai Composite Index market shows that the Shanghai Composite Index closed at 4070.91 on July 24, 2015 and at 3633.73 on July 31, 2015, and both values fall within state 4, indicating that the actual results are highly consistent with the prediction results. V. CONCLUSION In conclusion, we find that the amended weighted Markov chain can effectively forecast the fluctuation ranges and probabilities of the short-term trend of the Shanghai Composite Index. The predictive results fully indicate that the slowly rising status of the Chinese stock market will not change in the short term, and that the Shanghai Composite Index will fluctuate between 3600 and 4200. Meanwhile, the Chinese economic recovery will not change in the short term but still faces huge downturn pressure. The Chinese government should introduce more supporting policies, given that the market plays the leading role in resource allocation, in order to stabilize and improve investment sentiment and safeguard the achievements of the economic recovery. Since the stock market can reflect the operation of the real economy, it is necessary to actively promote the development of the real economy.
It is essential to encourage the development of small and medium-sized promising enterprises, which are important and active parts of the market economy. It is of great importance to set up a sound and healthy environment in which capital can flow to the enterprises in need and create real wealth. This can be accomplished by governmental policies that regulate the capital flow from financial institutions. As to investment sentiment, it will expand if the real economy develops at a stable or accelerating speed. High investment sentiment will in turn contribute to the advancement of the real economy. Then, there will be a virtuous circle. CONFLICT OF INTEREST The author declares no conflict of interest. AUTHOR CONTRIBUTIONS Yanpeng Sun performed the conceptualization, methodology, formal analysis, investigation, resources, data curation, writing (original draft), writing (review & editing), and visualization.
2020-12-10T09:08:12.248Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "09567299bb5275043ff03f7aff8970e67bb10157", "oa_license": null, "oa_url": "https://doi.org/10.18178/ijtef.2020.11.5.674", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3ec5347dbfe8a19baa670b61357315848a81f3d5", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
4942955
pes2o/s2orc
v3-fos-license
HIV-1 Subtype C-Infected Individuals Maintaining High Viral Load as Potential Targets for the "Test-and-Treat" Approach to Reduce HIV Transmission The first aim of the study is to assess the distribution of HIV-1 RNA levels in subtype C infection. Among 4,348 drug-naïve HIV-positive individuals participating in clinical studies in Botswana, the median baseline plasma HIV-1 RNA levels differed between the general population cohorts (4.1–4.2 log10) and cART-initiating cohorts (5.1–5.3 log10) by about one log10. The proportion of individuals with high (≥50,000 (4.7 log10) copies/ml) HIV-1 RNA levels ranged from 24%–28% in the general HIV-positive population cohorts to 65%–83% in cART-initiating cohorts. The second aim is to estimate the proportion of individuals who maintain high HIV-1 RNA levels for an extended time and the duration of this period. For this analysis, we estimate the proportion of individuals who could be identified by repeated 6- vs. 12-month-interval HIV testing, as well as the potential reduction of HIV transmission time that can be achieved by testing and ARV treatment. Longitudinal analysis of 42 seroconverters revealed that 33% (95% CI: 20%–50%) of individuals maintain high HIV-1 RNA levels for at least 180 days post seroconversion (p/s), and the median duration of the high viral load period was 350 (269; 428) days p/s. We found that it would be possible to identify all HIV-infected individuals with viral load ≥50,000 (4.7 log10) copies/ml using repeated six-month-interval HIV testing. Assuming individuals with high viral load initiate cART after being identified, the period of high transmissibility due to high viral load can potentially be reduced by 77% (95% CI: 71%–82%). Therefore, if HIV-infected individuals maintaining high levels of plasma HIV-1 RNA for an extended period of time contribute disproportionally to HIV transmission, a modified "test-and-treat" strategy targeting such individuals by repeated HIV testing (followed by initiation of cART) might be a useful public health strategy for mitigating the HIV epidemic in some communities. Introduction HIV-infected individuals with high plasma viral load progress to AIDS faster [1,2,3], and are more likely to transmit the virus [4,5], than those with a lower viral load. As a modified version of the "test-and-treat" strategy [6], identification and antiretroviral (ARV) treatment of individuals who maintain high HIV-1 RNA levels for an extended period of time might represent an important public health strategy to significantly curtail HIV incidence. An extensive body of literature supports the idea that higher levels of plasma viral load in HIV-1 infection are associated with higher transmission of HIV [4,5,7,8,9]. Each 0.5 log10 increment in HIV-1 RNA level may lead to a 40% greater risk of heterosexual transmission [10]. Studies focusing on mother-to-child transmission (MTCT) demonstrate that the levels of plasma viral RNA load [11,12,13] in HIV-infected mothers are the best predictors of viral transmission. Individuals with primary or late-stage HIV infection are highly infectious [14,15] due to increased levels of viral RNA load. Although the individual benefits of starting combined ARV therapy (cART) in acute seroconverters remain uncertain, early initiation of cART may offer the secondary public health benefit of reducing transmission caused by those with recent seroconversion and higher viral loads. Viral load dynamics following HIV-1 subtype B infection have been well characterized by previous studies [1,16,17,18,19,20,21].
The initial peak of viral load resolves into a steady-state viral set-point within four to six months. Individuals with higher viral set-points in HIV infection generally lose CD4+ cells more quickly, progress to AIDS more rapidly, and experience mortality sooner than those with lower HIV-1 RNA set-points. Mellors et al. demonstrated that 80% of individuals with viral load ≥30,000 (4.48 log10) copies/ml progress to AIDS within 6 years [1]. In the MACS cohort, the upper quartile of HIV-infected individuals maintained viral RNA loads from 59,987 to 72,651 (4.78 to 4.86 log10) copies/ml for approximately 6 to 18 months post-infection [22], and those who progressed to AIDS within 3 years maintained levels of viral load over 4.5 log10 (Figure 2A in [22]). Recent HIV-1 subtype B-based studies from the USA and Canada [23], and the mainland USA and Hawaii [24], reported median viral RNA from 3.88 to 4.80 log10 with inter-quartile ranges from 2.7 to 4.9 log10 among a total of 9,115 drug-naïve participants. Limited data regarding the levels and distribution of plasma viral RNA load are available for HIV-1 non-subtype B settings, and particularly for HIV-1 subtype C. Gray et al. reported a median viral load in a cohort of 51 HIV-1 subtype C-infected individuals from Zambia, Malawi, Zimbabwe, and South Africa within the 3.82 to 4.02 log10 range during 2 to 24 months post-seroconversion [25]. In a cohort of 958 HIV-infected women attending antenatal clinics in Zambia, the median viral RNA load was between 4.56 and 4.62 log10 [26]. The median viral load in a cohort of 62 acutely and recently HIV-1 subtype C-infected individuals from Botswana was 4.10 log10 [27]. The median (IQR) plasma HIV-1 RNA set-point was estimated at 4.45 log10 (4.32 to 5.14 log10) in a cohort of 31 seroconverters from Malawi [28]. The median (IQR) plasma HIV-1 RNA in a cohort of 377 subtype C-infected infants from South Africa was as high as 5.90 (5.6-5.9) log10 [29], which is consistent with infants exhibiting higher levels of viral load than adults. Utilizing data from clinical studies in Botswana, this study aimed to assess the levels and distribution of plasma viral RNA in HIV-1 subtype C infection, to identify the proportion of subjects who maintain high viral load for an extended period of time, and to determine how long such individuals sustain high viremia. The main rationale for employing data from cohorts representing different stages of HIV infection was to determine the levels and distribution of plasma HIV-1 RNA in the local epidemic, and to assess the HIV-1 RNA variability among different populations. While the clinically meaningful threshold of viral load affecting HIV transmission is unknown and is likely to be a continuum between 10,000 copies/ml and 100,000 copies/ml, we used 50,000 copies/ml as the threshold, supported by the Quinn et al. [5] study that demonstrated that the highest HIV-1 transmission rates were in persons having plasma HIV-1 RNA levels greater than 50,000 (4.7 log10) copies/ml. We also estimated the proportion of individuals with high viral load that can be identified by repeated HIV testing (6-month- versus 12-month-interval testing) and the potential reduction of the period of high HIV transmissibility that can be achieved by repeated HIV testing and initiation of ARV treatment in the community. Ethics statement This study was conducted according to the principles expressed in the Declaration of Helsinki.
The study was approved by the Institutional Review Boards of Botswana and the Harvard School of Public Health. All patients provided written informed consent for the collection of samples and subsequent analysis. Study participants and cohorts A description of the Botswana-Harvard Partnership (BHP) studies has been presented elsewhere [30]. For the purposes of this study, baseline data were used from the following seven BHP cohorts, which were monitored with extensive clinical and laboratory follow-up for prolonged periods. The time of enrollment in each cohort is shown in Supplementary Table S1. Three types of cohorts were distinguished: general population, MTCT, and cART-initiating cohorts. MTCT cohort BHP004, Mashi study: Prevention of milk-borne transmission of HIV-1C in Botswana (completed). The main goals of this project were two-fold. First, to assess whether the addition of a single dose of maternal nevirapine (NVP) at labor along with zidovudine (AZT or ZDV) from week 34 of gestation provides additional benefit in reducing HIV transmission from mother to child. The study was amended to determine whether maternal NVP (per the HIVNET 012 protocol) is necessary in the setting of maternal ZDV from 34 weeks of gestation through delivery and single-dose prophylactic infant NVP (at birth) plus ZDV (from birth to 4 weeks) for the reduction of HIV transmission from mother to child. The second goal was to determine the effectiveness and safety of prophylactic AZT given to breast-feeding infants to prevent milk-borne HIV transmission. The baseline HIV RNA load in plasma was available for 1,189 Mashi participants. Results of the Mashi study were presented elsewhere [11,31,32,33,34]. cART-initiating cohort BHP007, Tshepo study: The adult antiretroviral treatment and drug resistance study (completed). The study was an open-label, randomized combination ARV study with a multi-factorial, 3×2×2 design. The factors included a comparison of three NRTI combinations (ZDV/lamivudine (3TC), ZDV/didanosine (ddI), and 3TC/stavudine (d4T)), a comparison of two NNRTIs (NVP and efavirenz (EFV)), and a comparison between two adherence strategies (standard of care (SOC) versus an intensified adherence strategy, SOC plus community-based supervision). The baseline HIV RNA load in plasma was available for 631 Tshepo participants. Results of the Tshepo study were presented elsewhere [35,36,37]. General population cohort BHP010, Botsogo study: A natural history of HIV-1 subtype C disease progression study (completed). This observational study gathered data on HIV-1 subtype C disease progression from ARV-naïve HIV-infected individuals with CD4+ cell counts ≥400 cells/mm³. The objectives of the study were (i) to determine the kinetics of HIV-1 subtype C disease progression, (ii) to estimate the rate of CD4+ cell decline, and (iii) to analyze the time to the first HIV-associated or AIDS-defining condition or death in persons with an initial CD4+ cell count ≥400 cells/mm³. The baseline HIV RNA load in plasma was available for 444 Botsogo participants. General population cohort BHP011, Dikotlana study: Micronutrient therapy and HIV in Botswana (completed). The study was a randomized, multifactorial, double-blind placebo-controlled trial to determine the efficacy of micronutrient supplementation in improving immune function and preventing early mortality in HIV-1-infected adults whose CD4+ counts were >350 cells/mm³.
The design compared the efficacy of multivitamins, or selenium, or the combination of multivitamins and selenium to a placebo supplementation. The baseline HIV RNA load in plasma was available for 842 Dikotlana participants.

MTCT cohort BHP016, Mma Bana study: A randomized trial of ZDV + 3TC + lopinavir/ritonavir vs. ZDV + 3TC + abacavir for virologic efficacy and the prevention of MTCT among breastfeeding women having CD4+ >200 cells/mm³ in Botswana (ongoing). This study involved cART initiation by week 28 of gestation in breastfeeding women having CD4+ >200 cells/mm³. In addition to the two randomized arms, a third group included pregnant women who received ZDV + 3TC (given as co-formulated Combivir™ or Lamzid™) + NVP as the National Program regimen because they had CD4+ <200 cells/mm³. This group also breast-fed their infants. The baseline HIV RNA load in plasma was available for 726 Mma Bana participants.

cART-initiating cohort BHP019, Mashi Plus study: The study was designed to determine the response to NVP-containing cART among women who had previously taken single-dose NVP for the prevention of MTCT (completed). The baseline HIV RNA load in plasma was available for 302 Mashi Plus participants. Results of the study were reported elsewhere [32,38].

cART-initiating cohort BHP026, Bomolemo study: A prospective cohort study evaluating the efficacy and tolerability of tenofovir and emtricitabine (given as co-formulated Truvada™) as the NRTI backbone for first-line cART in treatment-naïve adults (ongoing). The baseline HIV RNA load in plasma was available for 214 Bomolemo participants.

Although HIV-1 subtyping was not performed systematically for all individuals included in the seven BHP cohorts analyzed, our previous studies provide strong evidence for the overwhelming dominance of HIV-1 subtype C as the etiologic agent of the HIV/AIDS epidemic in Botswana [27,39,40,41]. According to the HIV Sequence Database at LANL [42], 99.4% of the 1,425 deposited sequences from Botswana belong to HIV-1 subtype C. Therefore, we assume that the vast majority of subjects in this study are infected with HIV-1 subtype C.

Both baseline and longitudinal data were used from the eighth cohort, BHP012, Tshedimoso study (n = 42), Markers of Viral Set Point in Primary HIV-1C Infection (ongoing). The study was designed to evaluate potential trends between viral load and viral genetic diversity in acute and early HIV-1 subtype C infection, to determine the relationship between virologic parameters and viral set point, and to identify immunological parameters that correlate with viral set point in primary HIV-1 subtype C infection. All subjects included in the longitudinal analysis were genotyped and were found to be infected with HIV-1 subtype C. Results of the study were reported elsewhere [27,39,41,43,44,45]. The primary infection cohort comprised individuals with an estimated time of seroconversion. For acutely infected subjects (n = 8) the time of seroconversion was estimated as the midpoint between the last seronegative test and the first seropositive test (within a week in most cases). For recently infected subjects (n = 34) the time of seroconversion was estimated by Fiebig stage assignment as described elsewhere [43,45].
For time zero we used the estimated time of seroconversion rather than the estimated time of HIV infection, because frequent sampling in this study allowed reliable measurement of the time of seroconversion based on a series of laboratory tests, which can be more accurate than estimation of the time of HIV infection. Time points of sampling and HIV-1 RNA testing in the primary infection study (n = 42) are presented in the Supplementary Figure S1. Individuals whose CD4+ cell count dropped below 200 cells per cubic millimeter, or who developed an opportunistic infection, had access to antiretroviral therapy (Combivir (ZDV/3TC) 300/150 mg twice a day plus nevirapine 200 mg twice a day if female, or efavirenz 600 mg every day if male) free of charge, in accordance with Botswana National Treatment Program guidelines.

Viral load testing

Plasma HIV-1 RNA was quantified by the COBAS AmpliPrep/COBAS AMPLICOR HIV-1 Monitor Test, version 1.5, according to the manufacturer's instructions as described previously [27]. The method of viral load quantification used in the study has been certified by the Virology Quality Assurance at Rush University, Chicago, IL, as a part of the laboratory proficiency testing. The level of detection was from 50 (1.7 log10) copies/ml for the ultrasensitive method and 400 (2.6 log10) copies/ml for the standard method, up to 750,000 (5.88 log10) copies/ml. Analysis of individuals with estimated time of seroconversion from the Tshedimoso study included both pre- and post-cART data, which is clearly indicated in the presenting materials.

Statistics

Descriptive statistics (mean and accompanying 95% confidence intervals, median and corresponding inter-quartile range) were quantified using Sigma Stat v. 3.5. Comparisons of continuous outcomes between two groups were based on the Mann-Whitney Rank Sum test. A Spearman rank correlation was used for analysis of potential associations between continuous variables. The Kolmogorov-Smirnov test was used to test whether the distribution of a continuous outcome follows a normal distribution. For the purpose of analysis in this study we defined a "high-viral-load individual" as a subject with plasma HIV-1 RNA levels ≥50,000 (4.7 log10) copies/ml at a given test time-point. We defined the "period of high transmissibility," or "duration of high viral load," as the time period during which an HIV-infected individual has plasma HIV-1 RNA ≥50,000 (4.7 log10) copies/ml. For the 14 seroconverters with high early HIV-1 RNA levels in the Tshedimoso study, we estimated the duration of high viral load in the absence of cART using cubic smoothing splines for those with more than 5 data points, and ordinary least squares regression for those with fewer data points (a sketch of this threshold-crossing procedure is given below). For individuals with increasing HIV-1 RNA levels, the duration of high viral load was imputed as the time from seroconversion to the last observation prior to cART initiation. In the sensitivity analysis, the duration of high viral load for subjects with increasing HIV RNA (n = 3) was estimated using the Kaplan-Meier method. To describe the procedure for estimating the potential reduction in the period of high viral load, we introduce some notation: let X denote the time when new infections occur and Y denote the duration of high viral load. We assume that X follows a uniform distribution within the testing interval and is independent of Y. The distribution of Y is estimated from the empirical distribution based on the 14 seroconverters.
Using t to denote the length of the testing interval, the proportion of individuals with high viral load who can be identified by using repeated HIV testing at a t-month interval is Pr(X + Y > t), the probability of X + Y being greater than t. The potential reduction in the period of high HIV transmissibility in individuals with high viral load that can be achieved by repeated HIV testing and ARV treatment was approximated by E(X + Y - t | X + Y - t ≥ 0), the expected value of X + Y - t when it is positive. Confidence intervals for these two quantities were derived using the bootstrap method [46] (a simulation sketch is given below). All reported p-values are 2-sided and not adjusted for multiple comparisons.

Results

Baseline HIV-1 RNA levels were quantified in 4,348 drug-naïve HIV-infected individuals who participated in seven clinical research studies in Botswana. Two (Mashi and Mma Bana) were MTCT cohorts; two (Botsogo and Dikotlana) were general population cohorts comprising asymptomatic HIV-positive individuals; and three (Tshepo, Mashi+, and Bomolemo) were cART-initiating cohorts. Although the time of HIV infection for participants within these cohorts was unknown, CD4-based inclusion criteria were used at enrollment. Therefore, it is likely that the times from infection are shorter for subjects in the general and MTCT cohorts than for those in the cART-initiating cohorts, as illustrated in the supplementary Figure S2. The baseline levels of HIV-1 RNA in the seven BHP cohorts are presented in Figure 1. Both median and mean values ranged within about one log10 copies/ml among the analyzed BHP cohorts, from 4.12 log10 in the Botsogo cohort to 5.30 log10 in the Tshepo cohort. The lowest values were in the general population cohorts, Botsogo and Dikotlana, with medians (IQR) of 4.12 (3.43; 4.68) log10 and 4.15 (3.49; 4.79) log10, respectively. The MTCT cohorts were close to the general population cohorts with slightly elevated median and mean values, although the differences were statistically significant between Mashi and Botsogo (p<0.001), between Mashi and Dikotlana (p<0.001), and between Mma Bana and Botsogo (p = 0.026); the difference between Mma Bana and Dikotlana was not significant. As expected, the levels of HIV-1 RNA were significantly higher in the cART-initiating cohorts, Tshepo, Mashi+, and Bomolemo (all p-values between any cART-initiating cohort and any general population or MTCT cohort were less than 0.00001). The distribution of plasma HIV-1 RNA among BHP cohorts is shown in Figure 2. Deviation from a normal distribution was evident for each cohort, and the observed patterns were common within the categories of general population cohorts, MTCT cohorts, and cART-initiating cohorts. The HIV-1 RNA distributions of the Botsogo, Dikotlana, Mashi, and Mma Bana cohorts were close to the normal "bell-like" shape, but were enriched by HIV-infected individuals with low/undetectable levels of viral load, which was evident from spikes at the left side of the histograms representing these cohorts. In contrast, the three cART-initiating cohorts, Tshepo, Mashi+, and Bomolemo, demonstrated deviation from a normal distribution of plasma HIV-1 RNA and were skewed to the right part of the histograms, providing evidence that these cohorts were over-represented by HIV-infected individuals with high viral loads. The normality test failed for all cohorts (p = 0.0027 for Mma Bana, and p<0.001 for all other cohorts), suggesting that the HIV RNA loads in the analyzed BHP cohorts were not normally distributed.
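Referring back to the notation introduced in the Statistics section, the following minimal Python sketch (not the authors' code) estimates Pr(X + Y > t) and E(X + Y - t | X + Y - t ≥ 0) by simulation, with percentile-bootstrap confidence intervals; the 14 duration values are placeholders, not the study data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder durations of high viral load, in days post-seroconversion,
# for 14 seroconverters; NOT the values observed in the study.
durations = np.array([200., 250., 280., 300., 330., 350., 360.,
                      380., 400., 420., 450., 480., 500., 520.])

def estimate(y, t):
    """Pr(X+Y > t) and E(X+Y-t | X+Y-t >= 0), with X ~ Uniform(0, t)."""
    n = 100_000
    x = rng.uniform(0.0, t, size=n)        # infection time within interval
    s = x + rng.choice(y, size=n) - t      # Y drawn from the empirical dist.
    return (s > 0).mean(), s[s >= 0].mean()

t = 365.0                                  # 12-month testing interval, in days
point = estimate(durations, t)

# Percentile bootstrap: resample the 14 durations and recompute.
boot = np.array([estimate(rng.choice(durations, size=len(durations)), t)
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"fraction identified: {point[0]:.2f} (95% CI {lo[0]:.2f}-{hi[0]:.2f})")
print(f"mean residual high-VL time: {point[1]:.0f} days "
      f"(95% CI {lo[1]:.0f}-{hi[1]:.0f})")
```

The paper reports the second quantity as a fraction of the period of high transmissibility; dividing the mean residual time by the mean duration of high viral load is one plausible normalization under the same assumptions, though not necessarily the authors' exact computation.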
The observed lack of a normal distribution of plasma HIV-1 RNA can be explained, at least in part, by the varying inclusion criteria for each of the different cohorts. To address this, the baseline CD4+ cell count data for each cohort are presented in Supplementary Table S2. Due to the known inverse correlation between CD4+ cell counts and plasma HIV-1 RNA levels, the specified levels of CD4+ cell counts at enrollment are likely to contribute to the observed lack of normal distribution of HIV-1 RNA levels. In addition, spikes at the edges of histograms can be explained by censoring of the data at the low and high thresholds of HIV-1 RNA quantification. We analyzed the proportion of HIV-infected individuals within each cohort with a pre-cART HIV-1 RNA level exceeding three thresholds: ≥10,000 (4.0 log10) copies, ≥50,000 (4.7 log10) copies, and ≥100,000 (5.0 log10) copies (Table 1). Consistent with the analysis of levels and distribution, the proportion of individuals exceeding each threshold was lowest in the general population cohorts, Botsogo and Dikotlana, followed by the MTCT cohorts, Mashi and Mma Bana, and was highest among the cART-initiating cohorts, Tshepo, Mashi+, and Bomolemo. The proportion of individuals with HIV-1 RNA ≥50,000 (4.7 log10) copies ranged from 24%-28% in the general population cohorts to 65%-83% in the cART-initiating cohorts. Potential gender differences in levels of HIV-1 RNA were analyzed in four cohorts: Tshepo, Botsogo, Dikotlana, and Bomolemo (the three remaining cohorts comprised only females). The results of the HIV-1 RNA level comparisons between genders are presented in Figure 3. Male participants had higher HIV-1 RNA levels in plasma than female participants. In the general population cohorts, Botsogo and Dikotlana, there was a significant difference of about 0.3-0.5 log10 between genders (p<0.001), while in the two cART-initiating cohorts, Tshepo and Bomolemo, we observed smaller differences of about 0.1-0.2 log10 (p = 0.030 and p = 0.052 for the Tshepo and Bomolemo cohorts, respectively). Analysis of CD4+ cell values revealed no statistically significant gender difference in three out of four cohorts (data not shown). In the fourth cohort, Bomolemo, male participants had lower values of CD4+ cells than female participants (p = 0.001). No associations were found between HIV-1 RNA levels and age in five of the seven analyzed cohorts (data not shown). A weak direct association was found in the Botsogo and Dikotlana cohorts (r = 0.099, p = 0.038, and r = 0.072, p = 0.036, respectively). A weak to moderate inverse association between HIV-1 RNA levels and CD4+ cell counts was found, statistically significant in all analyzed cohorts (Supplementary Table S3; p = 0.030 in Mashi+, and p<0.001 for all other cohorts). For every 1.0 log10 increase in HIV-1 RNA, the loss in CD4+ cells ranged from 21.1 to 98.8 cells (Supplementary Table S3). To assess the duration of high HIV-1 RNA levels following initial infection with HIV-1 subtype C, a primary infection cohort of subjects enrolled before or within a short time after infection [27,39,43,44,45] was utilized (Tshedimoso study). Early viral set point was defined as the mean viral RNA from 50 to 200 days post-seroconversion (p/s) [27,39], and was ≥50,000 (4.7 log10) copies/ml in 14 of 42 (33%; 95% CI: 20%-50%) subjects. The observed dynamics of HIV-1 RNA levels in the subset of 14 subjects are presented in Figure 4.
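The duration estimates reported in the next paragraph follow from the threshold-crossing procedure described in the Statistics section: fit a cubic smoothing spline (more than 5 data points) or an ordinary least squares line (fewer points) to log10 viral load versus time, and find when the fitted curve first drops below the 50,000 copies/ml (4.7 log10) cutoff. A minimal sketch, with an assumed smoothing parameter and hypothetical data, not the authors' code:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

THRESH = np.log10(50_000)      # 4.7 log10 copies/ml, the high-VL cutoff

def high_vl_duration(days_ps, log10_vl, smooth=0.5):
    """Days post-seroconversion until the fitted log10 VL falls below
    THRESH; returns None if the fit never crosses (VL still increasing)."""
    x = np.asarray(days_ps, dtype=float)
    y = np.asarray(log10_vl, dtype=float)
    if x.size > 5:                              # cubic smoothing spline
        fit = UnivariateSpline(x, y, k=3, s=smooth)
    else:                                       # ordinary least squares line
        slope, intercept = np.polyfit(x, y, 1)
        fit = lambda g: slope * g + intercept
    grid = np.linspace(x.min(), x.max() + 365, 2000)  # modest extrapolation
    below = grid[fit(grid) < THRESH]
    return float(below[0]) if below.size else None

# Hypothetical sparse series, so the OLS branch applies here.
print(high_vl_duration([30, 90, 180, 270], [5.4, 5.1, 4.9, 4.75]))
```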
Ten of the 14 subjects initiated cART once their CD4+ cell counts fell below the threshold level indicating treatment. The pre-cART HIV-1 RNA data were used to estimate the duration of high viral load in the absence of cART. In three cases, subjects A-1811, OQ-2990, and RB-6380, the pre-cART HIV-1 RNA slopes had positive values, and their duration of high viral load was estimated as the time interval to the last observation prior to initiation of cART. The mean (95% CI) and median (IQR) durations of high viral load were 384 (296; 472) days p/s and 350 (269; 428) days p/s, respectively. Assuming that HIV-infected individuals with high viral load may contribute disproportionally to HIV transmission, two questions related to public health interventions were addressed. To assess the proportion of individuals with high HIV-1 RNA levels that can be identified at selected intervals (with intent to initiate cART), we tested 6- and 12-month-interval algorithms of repeated HIV testing in the community (Fig. 5), under the assumption that the empirical distribution of the durations of high viral load for these 14 subjects is a good approximation to the true distribution. Because every high viral load subject was observed or predicted to remain above 50,000 (4.7 log10) copies/ml for 6 months or more, HIV testing every 6 months would be able to identify all of them. In the case of 12-month-interval testing, 85% (95% CI: 77%-94%) of high viral load individuals can be identified. We used the same 6- and 12-month-interval HIV testing algorithms to estimate the fraction of the period of high transmissibility that would be eliminated by immediately treating identified high viral load individuals (with intent to initiate cART) in the community. It was estimated that 77% (95% CI: 71%-82%) of the period of high transmissibility could be eliminated by 6-month-interval testing and 56% (95% CI: 47%-64%) by 12-month-interval testing. We note that for those subjects with increasing HIV-1 RNA over time, using the time to the last observation prior to initiation of cART as their high viral load duration led to conservative estimates of the fraction of identified subjects. This approach also underestimates the fraction of higher-risk transmission time that can potentially be eliminated by initiation of cART. In a sensitivity analysis using a less conservative Kaplan-Meier approach, we estimated that the high viral load durations of these individuals changed from 181 to 493 days, from 264 to 493 days, and from 417 to 527 days, respectively (Fig. 5). The high viral load durations for the remaining 11 subjects remained unchanged. Based on this set of high viral load durations, we obtained similar results as before. More specifically, the fraction of HIV-infected individuals with high viral load that can be identified by repeated 6-month-interval testing remained at 100%, and the fraction of potential high-risk HIV transmission time that can be reduced was 79% (95% CI: 75%-83%). The fraction of individuals with high viral load that can be identified by 12-month-interval testing was 91% (95% CI: 83%-99%), and the fraction of potential high-risk HIV transmission time that can be eliminated was estimated at 59% (95% CI: 53%-66%).

Discussion

The Botswana population is one of those most severely affected by HIV-1 subtype C infection. To assess the levels and distribution of HIV-1 subtype C RNA in plasma, the analysis was performed utilizing existing data from three types of cohorts: general population, MTCT, and cART-initiating cohorts.
Because of their size and breadth, these cohorts adequately represent the entire population in the local HIV/AIDS epidemic. The HIV-1 RNA data were presented per cohort to highlight the heterogeneity of HIV-1 RNA levels between different cohort types, in contrast to the relative homogeneity within each cohort type. We found a one log10 difference of HIV-1 RNA levels between, and a differential distribution of HIV-1 RNA levels within, different types of cohorts. The variation in HIV-1 RNA levels between cohorts observed in this study suggests that one should use caution when comparing different types of cohorts of HIV-infected individuals, even those originating from the same geographic area and infected with the same HIV-1 subtype. Our analysis provides evidence that a substantial proportion of HIV-1 subtype C-infected individuals have high HIV-1 RNA levels. Although the time of infection was not known in the seven analyzed cohorts, it is likely that some individuals with high HIV-1 RNA levels had been infected for a long time before enrolling in the research studies. Given the fact that approximately 25% of subjects in the general population cohorts have HIV-1 RNA levels above 50,000 (4.7 log10) copies/ml, there is a possibility that in a majority of the 33% of seroconverters who had high early HIV-1 RNA levels, the viral load would never drop below 50,000 (4.7 log10) copies/ml without extensive monitoring (as per study protocol) and initiation of ARV treatment. The high proportion of individuals with elevated levels of HIV-1 RNA deserves further attention and the design of interventions targeting individuals with high viral load. In the cohort of individuals acutely or recently infected by HIV-1 subtype C, we observed that 33% (95% CI: 20%-50%) of individuals maintained HIV-1 RNA levels of ≥50,000 (4.7 log10) copies/ml. Identifying HIV-infected individuals who maintain high levels of viral load for an extended period of time and intervening among them, including treating them with ARVs along with behavioral modification, might be an important public health HIV prevention strategy, because such individuals are likely to transmit HIV more efficiently than those who maintain viremia at lower levels [4,5]. This fraction of HIV-infected individuals with elevated levels of viral RNA for extended periods of time may be responsible for a high proportion of HIV transmissions in the community. If the hypothesis that individuals with high HIV-1 RNA levels are fueling the HIV epidemic is true, the strategy of identifying HIV-infected individuals with high viral loads followed by initiation of cART might represent a modified and more practical version of the "test-and-treat" approach [6]. Longitudinal data from the cohort of acutely and recently infected individuals allowed us to estimate the duration of the time with viral loads remaining above 50,000 (4.7 log10) copies/ml, the proportion of individuals with high viral loads that can be identified using repeated HIV testing, and the potential reduction of the period of high HIV transmissibility that can be achieved by repeated HIV testing and treating in the community. The mean duration of approximately 384 days p/s and median of 350 days p/s are conservative estimates of the time maintaining viral RNA ≥50,000 (4.7 log10) copies/ml, because for those whose viral loads had an increasing trend before starting cART, the duration of high viral load was taken to be the time from seroconversion to the last observation prior to cART.
This interval provides a lower bound for the true duration. Our analysis suggests that repeated HIV testing in the community could identify a high proportion of infected individuals with high viral loads if the interval between HIV tests is approximately 6 months. This approach could also reduce the period during which individuals with high HIV-1 RNA levels can transmit virus, if immediate cART initiation follows HIV testing. We observed higher HIV-1 RNA levels in men in the two cohorts representing the general population, and low or no gender difference in the two cART-initiating cohorts. Gender differences in the levels of HIV-1 RNA were described previously [47,48,49]. Despite the initial levels of HIV-1 RNA being lower in women than in men, the rates of progression to AIDS did not differ [47]. The gender differences in viral load might have implications for initiation of cART, if the treatment strategy is based on the levels of HIV-1 RNA. Conversely, a selection bias in different cohorts cannot be completely excluded. In future studies, it would be important to determine whether the rate of HIV transmission differs between genders with similar levels of HIV-1 RNA. The limitations of the current study include the small sample size in the primary infection cohort and the unknown time of HIV infection in the large cohorts representing later time points over the course of infection. The relatively small sample size (n = 42) of the primary infection cohort reflects well-known challenges in identifying acutely infected individuals, which include but are not limited to a lack of specific clinical signs and symptoms, and the extremely short time period preceding seroconversion. To reflect the uncertainty of the analyses associated with the relatively small sample, we included 95% confidence intervals and/or inter-quartile ranges for all analyses throughout the paper. Another limitation of the study is the unknown time of infection in the large cohorts where baseline pre-cART data were used for analysis. The analyzed data represent snapshots of HIV-1 RNA levels at different time points in the HIV/AIDS epidemic in Botswana, spanning the time period from 2000 to 2009. The concern of unknown time from infection could be lessened, at least partially, by grouping cohorts (as presented in Fig. S2), which was largely driven by the CD4-based inclusion criteria.

Figure 5. Identification of high HIV-1 RNA individuals by repeated HIV testing: six-month intervals vs. twelve-month intervals. All subjects are assumed to be HIV-seronegative at initial HIV testing, and to acquire HIV-1 infection shortly after that. The study subjects' code is shown at the left of the graph, and four acutely infected individuals are highlighted. The high viral load for each subject is presented as a shaded triangle symbolizing the "tip of the viral load iceberg". The base of each triangle corresponds to the estimated time of HIV-1 RNA levels dropping below 50,000 (4.7 log10) copies/ml for each subject. In all subjects the estimation of high viral load duration is outlined by gray shading delineating the time from seroconversion to the last observation before cART. In addition, in three subjects (A-1811, OQ-2990, and RB-6380) yellow shading corresponds to the estimation of high viral load duration using the Kaplan-Meier method. Six-month-interval HIV testing is delineated at the top, and 12-month-interval testing is shown at the bottom. doi:10.1371/journal.pone.0010148.g005
Conversely, the large amount of information presented on HIV-1 subtype C RNA levels, from existing carefully monitored studies that target different subsets of the population in one geographic region infected with a single HIV-1 subtype, is a strength of the analysis performed. Although cost-effectiveness was outside the scope of the current study, the ultimate goal of our research is to develop cost-effective means for the mitigation or control of HIV infection in the community. Early treatment for HIV can be cost-effective by virtue of greatly reducing the need for treatment of opportunistic infections and decreasing mortality. In fact, the per-person survival gains with cART greatly exceed those of many other therapeutic approaches [50]. Mathematical modeling supports early initiation of cART, genotypic testing in treatment-experienced and treatment-naïve patients, and expanded programs for HIV screening and linkage to care [50] as appropriate cost-effective public health approaches for better control of the HIV/AIDS epidemic. When data are available on transmission incidence as a function of HIV-1 RNA levels and other factors in Botswana, the analysis can be extended to directly estimate the numbers of transmissions per month averted by testing-and-treating of individuals with high HIV-1 RNA levels under different testing schedules. In the meantime, under the assumptions that no high HIV-1 RNA individuals transmit once placed on cART and that transmission incidence is constant during the period of high viral load, the quantity that we were able to estimate (the fractional reduction in the period of high HIV-1 RNA levels due to test-and-treat) usefully measures the fractional reduction in transmission incidence during the period of high HIV-1 RNA levels. In summary, we suggest that HIV testing aimed at identifying and offering cART to HIV-infected individuals with high viral load could be a reasonable goal in the global fight to reduce HIV incidence. If HIV-infected individuals maintaining high levels of HIV-1 RNA for extended periods of time contribute disproportionally to HIV transmission, a modified "test-and-treat" strategy targeting such individuals by repeated HIV testing (followed by initiation of cART) might be a useful public health strategy for mitigating the HIV epidemic in some communities, particularly those with high HIV prevalence. The small sample size in this study is a limitation of the estimated parameters of interest. It would be important to apply similar analyses to other existing larger sample sets in the HIV-1 subtype B (e.g., VaxGen efficacy trials) and non-subtype B (e.g., CHAVI cohorts) settings.

Figure S1. Time points of sampling and HIV-1 RNA testing in the primary infection cohort (n = 42). The time scale is set to the estimated time of seroconversion as time 0. The sampling time points were limited to 500 days p/s. The study subjects' code is shown in the column at the left. Eight acutely infected subjects are highlighted. Fourteen HIV-infected individuals with HIV-1 RNA levels ≥50,000 (4.7 log10) copies/ml are shown with arrows preceding the subjects' code. Gray bars indicate time on cART in ten subjects (delineated by arrows on the right).
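For completeness, the Kaplan-Meier sensitivity analysis mentioned in the Results (the three subjects with increasing pre-cART HIV-1 RNA treated as censored observations) can be sketched with a hand-rolled product-limit estimator; all duration values below are placeholders, not the study data, except that the three censored times echo the conservative estimates quoted in the text:

```python
import numpy as np

def kaplan_meier(times, event):
    """Product-limit survival estimate for durations of high viral load.
    event = 1: drop below threshold observed; event = 0: censored, i.e.
    viral load still >= 50,000 copies/ml at the last pre-cART visit."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(event, dtype=int)[order]
    at_risk, s, steps = len(t), 1.0, []
    for ti, di in zip(t, d):
        if di:                        # an event steps the curve down
            s *= (at_risk - 1) / at_risk
        steps.append((ti, s))
        at_risk -= 1                  # censored subjects leave the risk set
    return steps

# Placeholder durations (days p/s); censored at 181, 264, and 417 days.
times = [181, 200, 264, 280, 300, 330, 350, 360, 380, 417, 420, 450, 480, 500]
event = [0,   1,   0,   1,   1,   1,   1,   1,   1,   0,   1,   1,   1,   1]
for ti, s in kaplan_meier(times, event):
    print(f"day {ti:>3.0f}: S(t) = {s:.2f}")
```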
2014-10-01T00:00:00.000Z
2010-04-12T00:00:00.000
{ "year": 2010, "sha1": "0585f42d1aa994b86a10a806b6a745b528eadbda", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0010148&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca860bb3f6fdfafc4cf4064878e887812c86a3fe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10502812
pes2o/s2orc
v3-fos-license
Chemical constituents isolated from Zygophyllum melongena Bunge growing in Mongolia

Abstract: We report the first investigation of the chemical constituents of Zygophyllum melongena Bunge, a species growing in Mongolia. The quinovic acid glycosides 3-O-(β-D-glucopyranosyl)quinovic acid and 3-O-(β-D-glucopyranosyl)quinovic acid (28→1)-(β-D-glucopyranosyl) ester were identified in the chloroform fraction along with the flavonoid glycoside astragalin. The n-butanol fraction contained (+)-D-pinitol as the major component, a cyclitol with anti-diabetic properties. The structures of the isolated natural products were confirmed using ESI-MS and NMR spectroscopy (¹H, ¹³C, COSY, HSQC, HMBC, NOESY and ROESY). This is the first report of the isolation of (+)-D-pinitol from the genus Zygophyllum.

Introduction

Zygophyllum melongena is a medicinal plant belonging to the flowering plant family Zygophyllaceae. The genus Zygophyllum is represented by about 150 species growing in deserts and steppes from the Mediterranean to Central Asia, South Africa and Australia. Twelve species of Zygophyllum are known members of the flora of Mongolia (Grubov 1982; Ligaa et al. 2006; Ayad et al. 2012).

Compound 1 was obtained as colourless crystals. Its molecular weight was found to be 648 g/mol, and the molecular formula was C₃₆H₅₆O₁₀ based on the mass and ¹³C NMR spectra. Based on the ¹H and ¹³C NMR data of compound 1 (Table S1), supported by analysis of the two-dimensional NMR spectra (Figures S1-S15) and a comparison with the corresponding values reported in the literature (Ahmad et al. 1993; Zi-jun et al. 2011), the structure was identified as 3-O-(β-D-glucopyranosyl)quinovic acid (1). Compound 1 is an inhibitor of the snake venom phosphodiesterase-I (Fatima et al. 2002; Mostafa et al. 2006). Compound 2 was isolated as colourless crystals. Its molecular formula was C₄₂H₆₆O₁₅, corresponding to a molecular weight of 810 g/mol, on the basis of the mass and ¹³C NMR spectra. By complete assignment of the ¹H and ¹³C NMR data of compound 2 (Table S1), supported by the two-dimensional NMR spectra (Figures S16-S31) and a comparison with the corresponding values reported in the literature (Pizza et al. 1987; Aquino et al. 1988; Cerri et al. 1988; Aquino et al. 1989; Kang et al. 2004; Zhang et al. 2007; Zi-jun et al. 2011), the structure was assigned as 3-O-(β-D-glucopyranosyl)quinovic acid (28→1)-(β-D-glucopyranosyl) ester (2). Compound 3 was isolated as a green amorphous solid. The mass spectrum combined with the ¹H and ¹³C NMR data (Table S2) suggested a molecular formula of C₂₁H₂₀O₁₁. Based on analysis of the one- and two-dimensional ¹H and ¹³C NMR data of compound 3 (Figures S32-S42) and comparison with the corresponding values reported in the literature (Davgadorj 1999; Chludil et al. 2008; Desai et al. 2014; Gaivelyte et al. 2014; Zhang et al. 2015), the structure was assigned as kaempferol 3-O-β-D-glucopyranoside (astragalin) (3). Compound 4 was isolated as a colourless amorphous solid and proved to be optically active; NMR data (Table S2) indicated a molecular formula of C₇H₁₄O₆. Analysis of the ¹H and ¹³C NMR data of compound 4, supported by the 2D NMR spectra (COSY, HMBC, HSQC and NOESY) (Figures S43-S49), led to the assignment as (+)-D-pinitol (4). The structure of compound 4 was also confirmed by comparison with the literature data reported for (+)-D-pinitol (4) (El-Youssef 2007; Jain et al. 2007; El-Youssef et al. 2008).
(+)-D-Pinitol (4) is known for its anti-diabetic (Narayanan et al. 1987), anti-inflammatory (Singh et al. 2001) and feeding-stimulant activities (Numata et al. 1979). In the present report, (+)-D-pinitol (4) was isolated for the first time from the genus Zygophyllum.

Conclusions

No previous investigation concerning the chemical constituents of the Mongolian medicinal plant Z. melongena Bunge has been reported. In the present work, we have isolated four compounds from the aerial parts of this plant. The quinovic acid glycosides 3-O-(β-D-glucopyranosyl)quinovic acid (1) and 3-O-(β-D-glucopyranosyl)quinovic acid (28→1)-(β-D-glucopyranosyl) ester (2), and astragalin (3), a flavonoid glycoside, were identified in the chloroform fraction. On the other hand, large amounts of (+)-D-pinitol (4), a cyclitol with anti-diabetic properties, were isolated from the n-butanol fraction. This is the first report of the isolation of (+)-D-pinitol (4) from the genus Zygophyllum. From 4.0 kg of dried plant material of Z. melongena, an amount of 3.49 g of (+)-D-pinitol (4) was obtained. Thus, we conclude that the Mongolian medicinal plant Z. melongena is a major natural source of (+)-D-pinitol (4). The unequivocal structural assignments for the four compounds 1-4 are based on extensive NMR spectroscopic investigations (¹H, ¹³C, COSY, HSQC, HMBC, NOESY and ROESY). The isolation of the bioactive compounds 1-4 from Z. melongena may help to explain the pharmacological properties of this plant.

Supplementary material

Supplementary material to this article is available online: experimental details, Tables S1 and S2, and Figures S1-S49 (¹H NMR, ¹³C NMR, COSY, HSQC, HMBC and ROESY spectra of compounds 1-4). A voucher specimen of Zygophyllum melongena Bunge has been deposited at the herbarium of the Ulaanbaatar Institute of Botany, Mongolian Academy of Sciences (voucher number: chl.08.2010).
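As a quick, illustrative consistency check (not part of the authors' workflow), the molecular weights quoted above can be reproduced from the molecular formulas using nominal integer atomic masses:

```python
import re

# Nominal integer atomic masses; only C, H and O occur in these formulas.
MASS = {"C": 12, "H": 1, "O": 16}

def nominal_mass(formula):
    """Sum of integer atomic masses, e.g. 'C36H56O10' -> 648."""
    return sum(MASS[el] * int(count or "1")
               for el, count in re.findall(r"([A-Z])(\d*)", formula))

for formula in ("C36H56O10", "C42H66O15", "C21H20O11", "C7H14O6"):
    print(formula, nominal_mass(formula), "g/mol")
# C36H56O10 -> 648 and C42H66O15 -> 810 match compounds 1 and 2.
```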
2018-04-03T06:11:15.500Z
2016-01-11T00:00:00.000
{ "year": 2016, "sha1": "561ce0cd7893877f5268a68a72dfb01f09b19e98", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/journal_contribution/Chemical_constituents_isolated_from_i_Zygophyllum_melongena_i_Bunge_growing_in_Mongolia/1632797/files/3675549.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "9d93d6c07912d780e84262367438188fc6f60b2c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
228819654
pes2o/s2orc
v3-fos-license
Fertility control in ancient Rome

ABSTRACT: This paper surveys and evaluates the range of methods recommended mostly to promote but also to prevent pregnancy in ancient Rome, and then discusses the practices of adult adoption and infant exposure in more detail in order to interrogate the notion of 'fertility control' from an ancient historical perspective. Is this formulation sufficiently flexible to encompass Roman procreative projects and the resources they were able to bring to bear on them? Were the methods deployed sufficiently effective to qualify as 'control', and was it 'fertility' that was being acted on through adoption and exposure? This essay answers these questions positively and argues that the Roman case has plenty to offer wider debates about the history of reproduction as it includes the desires to have and not to have children, to limit and increase offspring, to shape families in different ways.

One of the key shifts in the long history of human procreation is from societies in which the dominant fertility project was the production of healthy children to those in which the limitation of that production dominates. The causes of this transition continue to be debated, and the move is a complex one beyond questions of etiology. 1 Globally the pace and patterning of change has been and remains uneven, both between and within the developed and developing world. Nor do the pressures and motivations around generation ever operate universally in any society, past or present. The reproduction of some groups is always enabled and encouraged more than others. Individual aims and circumstances vary: it is not as if there was no interest in contraceptives and abortion before the nineteenth century, nor is there an absence of those desperately seeking to have children now. Still, the broad move is clear. This was not a shift from passivity to activity, from easy thoughtlessness to careful attention. The business of having healthy children had to be worked at consciously and strategically no less than the business of not having children. Agency in the fertility domain, particularly female agency, should not be restricted to action around contraception and abortion, but understood more holistically, as recent scholarship on medicine and childbearing in medieval Europe has emphasized. 2 Studies of the historical demography of East Asia have increasingly found themselves occupying the space between Louis Henry's foundational 'natural' and 'controlled' fertility regimes, a space characterized by deliberate family planning that none the less fails to meet the narrow requirements of the traditional parity-based model. 3 Behavior was not bound simply to the number of children already born but also to their sex and survivorship, among other considerations. Recognizing that in all cases families and individuals (as well as communities and states) had procreative aims toward which they consciously worked, in divergent historical circumstances and with different resources to call upon, entails a certain comparability across time and space. It suggests a long-term narrative in which the reproductive project itself, whether more expansive or restrictive, provides the unifying thread to be tracked and analyzed. 'Fertility control', as the subtitle of this volume indicates, might be the most useful formula to encompass these different possibilities under a single rubric, and allow a richer history of continuity and change to emerge: joining two words of sufficient flexibility and openness.
It is not entirely unproblematic in its resonances, however, as noted in the concluding remarks of a recent attempt to take the long view of generation: 'While helpful in linking the prevention and promotion of procreation, the term may be too modern for centuries before the twentieth. People have always aimed to achieve certain objectives for family continuity and population size, individual health and happiness, but their conceptual and practical tools have changed.' 4 'Control' is a modern reproductive term and has taken on some particular meanings in the late modern world more generally. It raises two sets of questions in relation to its broader applicability in this field. The first are practical, or empirical, the second more conceptual. There is an issue about whether 'control' sets the efficacy bar too high for the pre-modern world: whether, or to what extent, success in respect to, or at least real purchase on, the challenges and aims involved is required to use this language. Then there is the sense in which 'control' has now become an aim in itself rather than a means to an end, and so perhaps lacks the categorical stability necessary to do the requisite heuristic work. The two converge and overlap at various points too, interweaving with a further series of queries about the reach of 'fertility', which is where the two practices this essay gives particular attention to, infant exposure and adult adoption, come in. Both actions were enabled by Roman law and ingrained in Roman society, and their effectiveness in limiting or ensuring family size is obvious. Their relationship to notions of fertility, on the other hand, seems strained by a mixture of temporal and somatic distance, which may also cross back over with ideas of control. Is this something which can be exercised after the event? Does it have to be exercised on physical fecundity, and focused around birth, or is that to construe a more complex phenomenon too narrowly? These definitional issues are worth investigating further, as are the specificities of the historical practices themselves. The two perspectives need to be placed in dialogue. This article thus both elucidates aspects of the Roman world (the kinds of procreative strategies pursued within it and the resources available to support them) and contributes to the larger comparative project around 'fertility control'. It proceeds in two parts, the first focused on questions of control, the second on what counts as fertility. The article will discuss the methods recommended to both prevent and promote pregnancy in ancient Rome, to be precise, in late Republican and early imperial Rome, over roughly the last century BC and the first two centuries AD, including an assessment of their effectiveness. Attention then turns to exposure and adoption, to the evidence for the ways they were thought about and practiced, as similar to or different from other interventions around family size, before bringing the themes of fertility and control back together in conclusion.

Fertility control

A little after AD 100, in Rome, the noted physician Soranus of Ephesus composed his Gynecology, the only such dedicated treatise to survive from the early Roman Empire. 5 Like other ambitious doctors, Soranus had traveled from his provincial birthplace (the city of Ephesus in Asia) to the imperial capital, via the medical schools of Alexandria, and, though he spent most of his career in Italy, he continued to write in his native Greek.
Greek was still the dominant language of learned medicine, and works in it addressed not only other physicians but also a largely bilingual (or multilingual) Roman elite. 6 The rich and powerful of Rome wanted to have healthy children, and Soranus offered instructions about how to achieve that goal, advice which he constructed polemically, through opposition to and criticism of past medical authorities and present rivals. In particular, he positioned himself against the traditional Hippocratic view that female health depended on generation, arguing instead that a woman's physical well-being was undermined by her 'child-production' (the literal translation of the Greek teknopoia). 7 This production was necessary to ensure the 'succession of beings', the continuation of the species, but the task was more challenging than the majority of doctors admitted. It required greater attention to looking after the female body itself, to counteract the damage of childbearing. Soranus' pro-procreative program was thus complex but unremitting. It started with female anatomy and moved on to a systematic study of all the processes involved in generation, from menstruation to birth and the care of the newborn. Sex and marriage (synonymous for respectable Roman women) follow menstruation in this sequence; indeed Soranus insisted that girls pass menarche and become physically mature enough to sustain intercourse with a man and bear a child before being married. 8 This advice was somewhat at odds with elite practice in the Roman empire, though his judgement that delaying matrimony too long was also dangerous would have been less contentious. 9 He argued that questions about the fertility of any prospective bride, 'whether or not they are able to conceive (sullambanein) or have the right physical formation to give birth', should accompany the customary inquiries 'about the excellence of their lineage and the abundance of their wealth' in assessing her suitability. His aristocratic audience, however, seem to have ignored this advice. 10 All the evidence indicates that the Roman elite stuck to their traditional interests in birth, money and looks. 11 Not that they were indifferent to fertility, but concrete matrimonial decisions were usually dictated by particular contingencies, by the conjunctural needs of family alliance, and there was a view that women's childbearing prowess was something to be proved rather than guessed at. The only women who possess the virtue of 'fecunditas' in the Annals of the Roman historian Tacitus, for instance, have already borne children. 12 After the account of how to recognize the capacity for conception, Soranus proceeded to provide detailed advice on the best time for procreative intercourse. 13 What condition should a woman be in to optimize her chances of receiving and retaining the man's seed, and of being able to begin to nourish and mold what has been held in a proper fashion, as Soranus understood the process of conception (sullēpsis in Greek, literally 'grasping') itself? 14 His answer followed the Hippocratic view that women are most likely to conceive as their periods are dwindling and stopping. 15 The womb was at its most receptive at this juncture, warm and moist in good measure, turning from evacuative to accumulative mode but not yet congested and overburdened. For the rest, body and soul must be in the right condition, feeling good and appropriately inclined.
The question then is whether the basic sense in Soranus' stress on a well-balanced body and a willing soul is outweighed by the mismatch between ancient and modern understandings of the relationship between fertility and menstrual periodicity. In modern medicine, the 'fertile window' refers to the six days during which heterosexual intercourse can result in pregnancy, those being the five days before and the day of ovulation itself. If ovulation occurs exactly halfway through a standard twenty-eight-day cycle, then the window would be between days ten and seventeen (counting commences on the first day of bleeding), which clearly does not align with Soranus' best time. Recent work has emphasized variability in respect to both menstrual and ovulatory cycles, however, and there has been something of a forward drift in fertility, thus rendering Soranus' advice less problematic. 16 More importantly, it was never intended to be exclusive. These specific instructions were located within a wider frame of assumed marital intercourse. With conception complete, or at least well underway, guidance on care for the pregnant woman followed. It had three stages: the first aimed at guarding the deposited seed, the second at alleviating the ensuing symptoms, such as those associated with kissa (characterized by cravings, nausea, and general digestive disarray), and the last, as lying-in approaches, aimed at perfecting the embryo and preparing for the demands of birth. 17 Every aspect of the woman's life was to be regulated, from what she ate and drank to the frequency of her baths; her emotional and physical range was to be restricted, her thoughts and actions modulated. The main message was to take things easy (not too easy, gentle exercise was mandatory) and eat well, while avoiding shocks and traumas, excess and anything harmful. Given the agenda so far, it is perhaps surprising that the first book of the Gynecology ends with a chapter on contraception and abortion. However, as mentioned, Soranus also explicitly brought out the detrimental effects on women of all these processes: menstruation, sexual intercourse, conception and pregnancy. Childbearing uses up resources, saps vigor, and causes premature aging: 'just like with the earth, which becomes so exhausted from continuous fruit-production that it is not able to carry fruit every year'. 18 Soranus thus opened up conceptual space in which talk of family limitation could occur, within the pro-procreative program. There are the woman's interests to be balanced against the need for family continuity, and while the latter has priority, some allowance can be made for the former without compromising the overall project. The chapter opens by distinguishing contraceptives and abortives, differentiating between those items and actions which prevent conception (sullēpsis) and those which destroy what has been conceived. 19 The latter were called 'destroyers' (phthoria), with the former termed 'non-birthers' (atokia). Destruction of what is carried has been controversial, Soranus reported, with some physicians opposed to any such interventions while others argued for a discriminating approach. The opposition called Hippocrates as a witness, who said 'I will give no woman an abortive', and asserted that the medical art must guard and preserve what has been generated by nature. 20
The proponents of judgement explained that they would not assist a woman who wished to destroy what she had conceived on account of adultery or vanity, but rather to prevent dangers in birth caused by the womb being too small, or by calluses or fissures of its mouth, or any similar difficulties. They said the same about contraceptives, and Soranus concurred. He placed himself firmly in the camp of those willing to prescribe both atokia and phthoria in the appropriate circumstances, preferring the first, since prevention was safer than destruction. Soranus' contraceptive prescriptions can be roughly divided into three. The first clustered around the act of intercourse itself. 21 The 'best time' for procreative sex should be avoided, the woman should move away as the man is about to ejaculate, or get up immediately afterwards and encourage the seed to leave her body by squatting, sneezing or other actions. The second involved applications to the mouth of the womb prior to intercourse. 22 Substances such as old olive oil or a moist cerate of myrtle oil and white lead can be externally applied to assist in 'non-conception' (asullēpsia), or pessaries composed of items to close up the womb or heat and irritate it, preventing the entry or retention of the seed respectively, can be inserted and then removed before sex. Soranus provided several recipes, with pomegranate rind, oak gall, and various minerals the most favored ingredients. Last were what might be termed oral contraceptives. 23 Plant materials (seeds, especially rue seeds, and balms) are ingested monthly with liquids. These things destroy as well as prevent, Soranus concluded, sullēpsis being, after all, rather a complex process with a vague finishing point, and they are damaging to the body. 24 Soranus' discussion of abortives followed a similar pattern. 25 For the thirty days after conception, do the opposite of what he advised to guard the deposited seed. The woman should jump around, carry heavy loads, eat the wrong foods, attempt purges and take long baths. The next stage of intervention involved more medicinally potent baths (with linseed, mallow, wormwood and rue plants in them, for example), together with the application of similarly composed poultices and enemas. Last, women could be extensively bled, or abortive pessaries resorted to. Avoid anything too powerful, however, and any kind of physical removal with sharp objects, for wounding the surrounding area is dangerous. Several of the ingredients listed here have been identified as having fertility-suppressing effects in a range of ethnobotanical and laboratory studies. John Riddle was the first to survey this evidence in relation to ancient medical writings and to argue very strongly that knowledge of effective contraceptives and abortifacients was widespread in antiquity. 26 His work has been subject to sustained criticism ever since: its orientation, presuppositions, methodology and conclusions have all been called into question. 27 Discovering what modern species might be designated by ancient plant names is far from straightforward, for example, while experiments showing that feeding rats large amounts of pomegranate rind decreased their fertility by almost 30 percent may reveal nothing about its impact on human women when applied in pessary form. Still, the possibility of efficacy must be allowed for, as burgeoning global research into traditional herbal remedies, including those aimed at generation, indicates. 28
This is, of course, efficacy broadly construed, as meaningful effect rather than the guaranteed success demanded by modern biomedicine, but it seems likely that some of Soranus' prescriptions would have diminished fertility to some degree. 29 More importantly, Soranus explicitly located his discussion of contraceptives and abortives within marriage. Traditionally, though recipes and substances might simply be labeled 'atokia', or even 'phthoria', in pharmacological contexts or works on medical materials, actual engagement with the business of prevention or destruction occurred in association with prostitution. Soranus himself referred to the case of the enslaved 'entertainer', made to expel the 'seed' (gonē) she had retained following intercourse by the Hippocratic physician who authored On the Nature of the Child. 30 The philosophical poet Lucretius, writing his Latin epic On the Nature of Things in the last decades of the Roman Republic, had asserted that women themselves can 'prevent or resist' conception, by pulling away and becoming limp as a man climaxes. 31 This technique belongs, however, to 'whores' (the Latin is the more pejorative scorta), who wish to minimize their chances of becoming pregnant and maximize their client's pleasure. It is not the business of 'our wives'. They were there for the production of legitimate children, while prostitutes' role was the production of legitimate male sexual pleasure, a legitimacy predicated on the separation of the transaction from procreation: that it was not generative in itself and did not compromise other men's family strategies. 32 Soranus, however, wanted to make methods of non-conception, and even abortion, available to wives, under the pro-natalist banner. The second book of the Gynecology covers the business of normal birth and the ensuing care of both mother and baby. For the purposes of this discussion there are two important points in the detailed descriptions and instructions. First and foremost is the section on how the midwife (maia in Greek) was to recognize whether the infant she had just delivered was fit for rearing or not. 33 The main positive indicators were that the mother had enjoyed good health during pregnancy, birth had occurred at the proper time, and the newborn had cried vigorously when placed on the ground, was well-formed in all its parts, and had good movement and sensitivity all over its body. If these criteria were not met, then the midwife was to adjudge the infant unfit for rearing, too weak to survive, though more qualitative considerations may have entered the frame around formation and function too. In any case, the midwife was undertaking an essentially physical assessment that would contribute to but not necessarily determine the father's decision to rear or expose, that is, to put the new-born out to die or be picked up and raised by someone else. This will be examined more fully below; for now it is sufficient to note the way Soranus' medical narrative engages with this critical social moment. The other issue of interest is the nutrition of the newborn. On a pragmatic level, Soranus favored wet-nursing over maternal breastfeeding and offered advice accordingly. 34 He aligned himself with the dominant elite practice of the early Empire, and against arguments by some philosophers and traditional moralists that women should nurse their own infants. 35
The employment of wet-nurses has, of course, practical implications for the possibility of birth-spacing in these aristocratic families, though outside those circles mothers were generally assumed to breast-feed their own babies. The latter half of the Gynecology deals with the diseases of women, in which the dangers and damaging impact of pregnancy and parturition loom large. It is not just that discussion of difficult birth (dustokia) dominates the fourth and final book, but that the experience of such travails, along with miscarriages, is among the most frequent causes of many of the pathologies described, particularly of the womb. 36 There is even a condition termed the 'exhausted' (or perhaps 'debilitated') womb, caused by frequent pregnancies, stretching, and, especially, large embryos. 37 It renders the uterus almost entirely unfit for procreative duties. The sections on several of these uterine ailments are not preserved in their original Greek but, apart from their headings, survive only in the later 'Latinizations' of the Gynecology by the fifth-century AD North African physician Caelius Aurelianus and his less firmly located successor Muscio (or Mustio). 38 Similarly, the contents of the final chapter in book three of Soranus' composition, listed as 'On non-generation (agonia) and non-conception (asullēpsia)', are transmitted only in Latin. 39 Despite variations in vocabulary and construction, the message is the same in both versions. 'Sterilitas', the Latin for infertility or barrenness, accompanies or arises from a range of affections. It may be that 'the seed is not received, or having been received it is not retained or having been nourished it is not perfected'. 40 Though all of these failures occur in the female body, the cause may lie with either party, and may be a problem of the whole body or of the particular parts. The man can be too ill or weak to produce sound seed, or have a malformed penis which prevents him from ejaculating in the right area, while the woman may be too thin or feeble or too fat or dense, for example, or have a misaligned, obstructed or injured uterine opening. All these complaints can be treated, mostly dietetically if addressing the overall somatic condition, and through pharmacological applications or surgery if the problem is more localized and specific. Little detail about these therapies is offered, adding to the difficulty in assessing efficacy. The emphasis on general health seems more promising than many of the more specific interventions, though the disruptive conditions which can be fixed that way are limited. It is, however, worth bearing a couple of modern statistics in mind here. One is that, in contrast to the 91 percent effectiveness of the contraceptive pill in Britain today, the current success rate for a cycle of IVF is only 29 percent in women under thirty-five, and it drops pretty precipitously after that. 41 The second, interrelated point is about the causal complexity and uncertainty that surrounds infertility. Modern studies implicate biological, behavioral, psychological and sociocultural factors, in a range of combinations, differently distributed in a partnership, and in 15-25 percent of cases, no physiological dysfunction can be identified at all. 42 So, it may be that simply addressing the problem, doing something which was thought to help, would have had beneficial results. There were non-medical courses of action available to those struggling to have children in the Roman Empire.
One such avenue did find physiological support (though missing from the Latinizations of Soranus) in the standard recognition in ancient medical and philosophical discussions that infertility could be relational. Generative failure could be caused by some sort of incongruity or incompatibility between the couple having intercourse. The flaw in the partnership was variously construed-as a mismatch of bodies or sexual pace, of constitutions or seeds-but the suggested remedy stayed the same. 43 Changing partners might bring better results. The formulations were mostly vague, but Lucretius clearly recommended divorce and remarriage, repeatedly if necessary, in contexts where no progeny had been forthcoming. 44 Indeed, he referred to both men and women in previously barren unions who had subsequently found spouses with whom they had sweet and dutiful children. By the time Lucretius wrote his didactic epic in the first century BC, divorce and remarriage were legally (if not practically) straightforward for both parties at Rome, especially if there were no surviving offspring. 45 There was some debate about the propriety of divorcing a loyal and virtuous wife solely on the grounds of procreative failure, at least if she did not agree, but it was perfectly possible to do so and the absence of children made the logistics of separation easier. 46 This was, moreover, simply a variation on a key theme in Roman matrimony-the main reason for divorce in the late Republic and early empire was to remarry, for political, economic, or generative purposes, maybe a combination of them all. If Soranus' apparent omission of the relational aspects of infertility is puzzling, his failure to mention the non-medical practitioners who provided procreative advice and assistance in the Roman world is more understandable. A range of texts from the imperial period demonstrate that dream interpreters, astrologers and fortune-tellers were often consulted about the production of children, pregnancy, birth and the prospects of the new-born. 47 Stories of such encounters from the client's perspective are missing from the record, but there are plenty of literary allusions to the general but problematic rise of private divination in the early empire. 48 Soranus' silence on those who might be considered his competitors for the attention and largesse of the elite is hardly surprising. Much more could be said on these options, and on the possibility of direct appeal to the gods for help in having healthy children, but the point here is simply to return to the pro-procreative shape of Roman society, with which this section opened, having sketched out some of its particular lineaments and complexities. The range of resources-legal, medical, cultural and religious-available to those pursuing their particular family strategies within this frame has been part of this picture. These resources, moreover, certainly meet the requirement of real purchase and impact on the generative projects involved, and while not all were accessible to those below the elite, many were, at least in some form. Maximum effectiveness still resides in infant exposure and adult adoption, however, so it is to these phenomena that the discussion now turns. Fertility control As Soranus assumed, in the Roman world birth was followed by a decision about whether to rear the newborn. 
A positive judgement meant that the processes of welcoming the child into the family and community would begin, while a negative one entailed the opposite, the separation of the child from their natal family through exposure, their being put out (expositio in Latin, ekthesis in Greek) either to die or be picked up and raised by someone else. 49 It should be stressed that both possibilities were real, though the main reason for third party rescue was to bring up the infant as a slave. Exposure is, therefore, to be distinguished from infanticide: it was about separation, or rejection, not about the fate of the child. 50 It was a means of regulating family size and family composition. Soranus described a physical assessment of suitability to rear, one that was entirely gender neutral, but other ancient sources and modern scholarship raise the possibility of selectivity by sex in these post-parturition judgements, a selectivity that favored boys over girls. As Judith Evans Grubbs has explained, however, 'the case for widespread exposure of females has been hugely overblown', and, indeed, archaeology also demonstrates that at least some infants who would have failed Soranus' fitness test were brought up. 51 Issues of sex and disability surely played a role in Roman decision making about raising children, but in complex and relative rather than absolute ways. Control can be exercised over quantity and quality, of course, and the efficacy of expositio is obvious in respect to both. It allowed the number of children in a family to be limited, and decisions to be made about a balance of girls and boys (or not). Indeed, until the development of reliable fetal sex discernment tests in the twentieth century, exposure and infanticide were the only means of sex selection in relation to offspring. The question is whether that control should be understood to have been exercised over fertility as well as family. The demographic orthodoxy would seem to be not, though some have assumed and Fabian Drixler has argued otherwise, at least for infanticide in early modern Japan. 52 Scholars have also raised wider problems with solely parity-based definitions of fertility, so further consideration is required. Here the focus will be on whether the Roman sources themselves included expositio with other forms of family limitation or considered it as a distinct practice. Where did it fit in the overall demographic system of the Roman world? Soranus' approach was essentially inclusive, covering contraception, abortion, and exposure, as well as infertility treatments, in a single treatise. His discussion around raising was carefully circumscribed, however, limited by the role of the maia as reporter of the newborn's physical condition to those in the family who would make the actual decision: most critically, the father, in whose power (patria potestas) any child raised would most likely be. 53 Other factors would have been taken into consideration at this point, outside the purview of medicine or midwifery, with familial economics (broadly construed) most frequently mentioned in the sources. This came in two forms, one relating to the poor, the other to those with sufficient resources not to have to worry about an extra mouth to feed as such, but who were more concerned about the workings of a partible inheritance system in a world in which inherited wealth was key. 
Roman law made all legitimate offspring, female and male, automatic heirs (sui heredes) who had to be left a fair share of the estate unless explicitly disinherited. 54 Bringing up more children could therefore be understood as diminishing the financial prospects of those already integrated into the family, though this was not the only way of thinking, and there were risks involved in deciding not to rear those born later even if the older offspring had already passed the most dangerous years of life. A couple of decades before Soranus was writing, the Stoic moralist Musonius Rufus argued strongly in support of the thesis that all children born should be raised, which was more or less the position of the Stoa. 55 Musonius reserved his greatest ire for those who 'do not even have poverty as an excuse' for exposing their infants, but who decide not to rear their later-born offspring 'so that those earlier-born may inherit greater wealth'. As a rich Roman, as well as a philosopher, he would have been familiar with such practices, and the theme is repeated elsewhere. The most detailed story appears in a Greek novel of the second century AD, Daphnis and Chloe, both of whom are expositi, raised as slaves, providentially recovered by their parents so that they can end up happily married. Daphnis' father, Dionysophanes, explained that, having married young, he was already lucky enough to have two sons and a daughter when the fourth child, another boy, was born. 56 He thought his family 'was big enough', and so had the infant put out, a decision he later regretted, as his eldest son and his daughter subsequently died on a single day from the same illness. Even in his joy at finding his abandoned child, glad that he and his wife would have more support in their old age, Dionysophanes sought to reassure his other surviving son that his estate was substantial enough to make both of his children rich men. Though the first-born, Chloe had been exposed so that her father could continue to make the public expenditure required to maintain his civic status, again a matter of regret, since the expected future offspring failed to materialize. 57 To return to Musonius Rufus, however, it should be stressed that his argument was essentially a civic one. Having lots of children was an obligation citizens owed to the state and the gods, though the benefits accrued to both the community and the family concerned, far outweighing the pragmatic excuses for limiting offspring that he dealt with. Failure to raise children who have been born was the dominant means to that impious and detrimental limitation, and so the primary point of attack, but Musonius also praised a variety of measures against abortion and contraception, public rewards for the parents of multiple progeny and penalties for the childless. The focus was thus fertility; he was opposed to all deliberate attempts to restrict the number of children produced and kept in a citizen marriage, favoring both discursive and practical encouragement to large-scale childbearing. He clearly considered exposure to be the main threat to his maximizing drive but as part of a wider set of practices with the same aims and outcomes. Expositio was not just about family limitation, at least not directly. The end of marriage, through death or divorce, could have resulted in the exposure of any progeny born in the aftermath. Both pragmatic and emotional reasons seem to have been in play here, including matters of inheritance, once again. 
Some reported decisions appear strategic, such as the 'clean-break' agreement between a pregnant widow and her former mother-in-law recorded on a papyrus dated to 8 BC: the first acknowledged the return of her dowry, surrendered any further claims on her husband's estate, and was then permitted to put out the child and remarry. 58 Others look more impetuous, such as the second century AD case of a divorced wife who did not even tell her ex-husband (who had quickly remarried) about her pregnancy, electing to expose the baby instead. 59 Then there is the question of 'fatherless' children, those born to a woman not in a Roman marriage (iustum matrimonium), so who were not born in patria potestas with all that entailed. Most of these would still have been engendered in a relationship which provided them with recognition and support, mostly marriages between persons (such as a Roman citizen and a free non-citizen) who could not contract a full iustum matrimonium under Roman law. These were not babies born to a 'single' woman, one who society deemed should not be having children or was having them by the wrong man, for example in adultery. 60 So, though those latter women would likely have exposed their offspring, the numbers involved were probably small. From whatever sources, however, sufficient numbers of newborns were exposed and then picked up by others that the raising of foundlings, a kind of 'fostering', became a regular and to some extent regulated occurrence in the Roman world. 61 In contrast to later periods, there were no locations designated to receive abandoned infants, and no state or charitable institutions involved in their reception. 62 It seems, rather, that certain local places became informally known as spots where newborns would be put out and could be taken up, by anyone who wanted to. As already stated, the main destination for expositi was slavery: exposed infants were picked up to be raised as slaves by individuals and in a more organized, business-like manner. The other possibility consistently mentioned in the sources is that expositi might be smuggled into reasonably wealthy, even positively elite households lacking offspring and presented as the product of their marriages by wives unable or unwilling to bear children for themselves. Another Stoic philosopher, and older contemporary of Soranus, Dio Chrysostom, referred to the former situation, not unsympathetically, for example. 63 Around the same time, the satirist Juvenal viciously attacked wealthy women allegedly reluctant to bear the burdens of pregnancy or the pains and perils of giving birth, thus fostering the obnoxious traffic in suppositious children, obnoxious because of the men fooled and the aristocratic lines thus sullied. 64 Legislation and juristic discussions condemned the practice-there was no time-limit on fraud accusations concerning the introduction of such children, for instance-but they also recognized that husbands might collude in such undertakings as well as being their primary victims. 65 The other significant legal interaction with exposure on the acquisition side related to the possibility that the foundling might be reclaimed, for freedom, for their natal family, or both. This may sound rather counter-intuitive, given the intimate association of expositio and slavery in the Roman Empire but, under classical Roman law, exposure did not affect the birth status of the infant. 66 It remained free if born to a freewoman, and remained in patria potestas if that woman was in a Roman marriage. 
Now slave dealers could have moved expositi around, to ensure their ignorance of their origins and distance from any who did know and might have been willing to act on their behalf, adding further obstacles to a system already stacked in their own favor. Surely some did, but an alternative approach available to ordinary individuals and organized slavers also developed at least in some areas of the Roman Empire. It allowed these redemptions as long as the person who had raised the foundling was compensated for what they had spent on maintenance. Successive imperial rulers were asked to decide on this conflict between legal principle-of absolute continuity of status-and more pragmatic local custom, and while some had permitted this kind of purchase of freedom to be enforced in parts of Greece, the emperor Trajan preferred the principle. In AD 111 he responded to a letter from Pliny the Younger, governor of Bithynia-Pontus, which cast the dispute about the status and maintenance of 'those called threptoi' (foster-children) as one which affected 'the whole province', stressing the inviolability of free birth. 67 If that status were proven, end of story, they should not have to 'buy back their freedom'. The scenario described by Pliny was not one in which the freeborn status of the threptoi was disputed; the issue was simply the payment of compensation. The situation was, therefore, characterized by knowledge of what had happened, not ignorance or concealment, whether that knowledge belonged to the parents, the rescuer, some third party, or was shared by them all. As Evans Grubbs argues, this openness changes how the practice, or at least some versions of it, should be understood. 68 It suggests that some expositi did return to their original homes, even if not in the idyllic way imagined in Daphnis and Chloe. In fact, that may have been the plan all along. This chimes with the idea that among the (married) poor exposure was mostly a response to a specific crisis-such as crop failure, for whatever reason, or internal family disaster-rather than to poverty as such. For ordinary Romans, children were, despite the initial outlays, economically valuable as well as socially and culturally invaluable, as Saskia Hin has demonstrated. 69 If desperate circumstances compelled them to put out a newborn it may well have been in the hope (though not the expectation) of future recovery, when things had improved, thus locating expositio among the adaptive strategies developed to spread the burden of childbearing and improve procreative outcomes as well as among the methods of family limitation. For the wealthy and 'single' women, the contexts were different. Through discussing the redistributive, circulatory, aspects of exposure, the overlap with adoption has become apparent. Attention switches from the loss to the gain column of the family ledger and the transaction becomes more formal, stable and secure but the basic pattern is shared. In Roman adoption, a man who lacked a direct heir could acquire one, more or less fully formed, from another lineage to inherit his family name and cult as well as property. 70 More complex heirship strategies and specific political aims could also be pursued by this means, but the rich surviving juristic discourse makes it clear that supporting family continuity was central to the institution, and adoptive households should roughly replicate natural ones. 
71 The model adopter was over sixty or otherwise known to be unable to procreate, had tried to have and maintain his own children, without lasting success. He should adopt an adult male at least eighteen years his junior, of similar social status if not actually part of the same kin group. 72 The adoptee should also come from a family which could bear his transfer elsewhere, indeed his move would ideally benefit both parties. So, for example, the Terentius who features in one of the exempla collected by Valerius Maximus in the early first century AD had raised eight sons to young adulthood and gave one in adoption, intending that they should all be enriched by his inheritance. 73 Even more exemplary was the case of Lucius Aemilius Paulus, scion of a noble house and victor over the Macedonian king Perseus at the decisive battle of Pydna in 167 BC, whose story was often retold. Earlier he had given up the older two of his four sons for adoption, 'from abundance' said Valerius Maximus, and to provide heirs for childless branches of two other illustrious families of Republican Rome-the Fabii and the Scipiones. 74 While the adopted sons flourished, the younger boys died, a few days on either side of the triumph he celebrated for the victory at Pydna. The narrative is a poignant one, often embellished with a speech in which Paulus expressed the view that his personal catastrophe was a counterbalance to the excessive good fortune he enjoyed in the service of the state. The adoptions did their job, however. Fabius and Scipio were able to transmit their 'names, rites and households' to outstanding heirs. They achieved their objectives in respect to their families. Their adopted sons also maintained links with their natal father, links strengthened by the particular circumstances as well as being part of the underlying structure, one of a number of signs that adoption was not the same as birth, that these were distinct ways of constituting households. Adopted children were legally in the same relationship to their paterfamilias as children who had been born to him in a legitimate marriage, but they had been raised by someone else. That raising, the emotional and material resources invested in it, its formative effects, the physical and moral resemblance between parents and offspring it forged, left its mark and was neither wiped out nor replaced by the formal transfer to a new family. The ideal, moreover, was continuity, of birth, rearing and inheritance: natural children were the preferred option, adoption was the second choice, effective but not the same in a social or personal sense. 75 Less formal practices of fostering, of raising the offspring of others, might produce closer emotional ties, but without the same legal results: foster-children could not be heirs in the same way that adopted sons were. 76 Adoption was a transaction involving Roman citizens, which required the agreement of both parties-the adopter and the father of the adoptee (the consent of the adoptee was relevant only if his father was dead). In the absence of any of those conditions raising a child without legally integrating them into the family was the only option, and there were a range of reasons why that course might be followed anyway, particularly below the elite. 
This also meant that any offspring born to a master by his slave women, since they followed the status of the mother, could not be adopted, and while it would have been theoretically possible to adopt children produced outside marriage, if the mother were a citizen (freed or otherwise), there is no evidence that this happened. The position of the adoptee, at least in elite circles, would have been socially untenable, and his inheritance would undoubtedly have been challenged in the courts, with some chance of success. The Roman focus on adopting adults rather than young children puts a greater distance between adoptive and 'natural' families than in most modern societies and makes it more of a stretch to include the practice under the banner of 'fertility'. The stretch may still be worth it, however, for, past or present, adoption is clearly part of a joined-up system that has fertility as its substance. The suggestion that the increased success of Assisted Reproductive Technologies is responsible for recent falls in the number of adoptions in the UK, for example, demonstrates the basic linkage. 77 Ancient Roman decisions about whether to try to have or raise a child or not were surely informed by the presence of the institution of adult adoption. It cut both ways in these considerations. The main role may have been as insurance against future losses, thus underwriting a lower fertility regime than might otherwise be expected, as in various East Asian contexts. But the practice also offered support for extra offspring, having sons who could be beneficially given away to other families. The entanglements between adoption and fertility were many and varied. Conclusions This essay has outlined the procreative projects pursued by the population of the Roman world in relation to the resources available to support and facilitate them. The aim has been to enable a fuller assessment of questions of control over those matters, as part of a longer history of fertility control. To summarize, everybody was in the business of family continuity, of having children to pass their name, status, cult and whatever property they might have owned on to, of forging links to posterity. The slaves who have, like others outside the group of elite Roman citizens, flitted in and out of this analysis, pursued a particular version of this project: wanting to have free children, to establish and then enact the possibility of family continuity after a period of generalized, definitional lack of control, including over their fertility. 78 This and other more specific generative aims and contingencies operative in the Roman world deserve more attention in their own right but for the moment the focus will be on what was more or less shared across the board. These broad family objectives were widely achieved, though keeping the offspring produced healthy was an ongoing challenge and some losses were almost inevitable for everyone. If production itself was a problem then divorce and remarriage, appeals for divine assistance and medical treatments were all possible courses of action, which could be selected, combined and repeated according to inclination and means. None of these options offered any guarantees but all improved the chances of success to some extent in a culture that studiously avoided blame for reproductive failure. Adult adoption did provide a guarantee, though of a slightly different form of family continuity, more openly instrumental and less ideal. 
Still, overall, it is hard not to see a comparable level of control operating in the Roman context as today in respect to the positive side of the equation, to having children. While adult adoption certainly stretches what might be considered a more traditional definition of fertility, so does modern child adoption, and various forms of surrogacy, all of which are clearly part of a single system through which progeny are currently generated and distributed. Not to consider these different actions around fertility together would seem to be a serious mistake, for both present and past. Turning to more specific projects about family size and composition, it is important to distinguish between the elite and the rest. For the vast majority, while there would have been definite advantages to birth spacing, achieved through abstinence and breastfeeding, advantages for the prospects of the children as well as for overall family wellbeing, absolute limits were not an issue. Parents seem generally to have wanted both sons and daughters, a son first and foremost to ensure the continuity of the paternal line but also daughters, who made a range of important contributions to the family enterprise. It is likely that, if there were Roman data comparable to that from various historical periods in China, a similar pattern would emerge of women continuing to bear children for longer if they had either only daughters or sons among their preceding offspring. 79 Sex-selective exposure might have been deployed in such circumstances, but decisions not to raise children were largely in response to crisis, albeit in a precarious world, where food shortages and famine were not infrequent, and with some wishful hopes of retrieving those given up when fortunes improved. Among the elite, the pressures were greater, and strategizing around inheritance more developed. 80 Birth spacing was neither sufficient nor so easily organized, given the reliance on wet-nurses; a pattern of rapid generation, of some sons and daughters, and then stopping, with the possibility of re-starting after either child mortality or a new marriage was more suited to family needs. This was harder to arrange, and though Soranus attempted to facilitate it through making contraception and abortion available to respectable married women and not just prostitutes, to protect those women from the most damaging effects of repeated childbearing, his recommendations would have been of limited efficacy, in respect to either pregnancy or well-being. Control would have come from abstinence or exposure, ultimately relying on the latter, without any benefits to female health. This was, as Musonius indicated, the dominant means of family limitation but one that operated within a wider suite of actions with the same aims, all of which he opposed while promoting moves encouraging childbearing. It is then not just measures to either support or restrict procreation which went together. Both were joined under the banner of fertility and its control.
2020-11-05T09:11:07.176Z
2020-11-02T00:00:00.000
{ "year": 2020, "sha1": "271474b888fba5fe066c33d190dc9a1d4266b490", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09612025.2020.1833491?needAccess=true", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "22a8b6194667c8c024ee1f3ba2a2687027373084", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "History", "Medicine" ] }
251264973
pes2o/s2orc
v3-fos-license
Development and Validation of a Novel Robot-Based Assessment of Upper Limb Sensory Processing in Chronic Stroke Upper limb sensory processing deficits are common in the chronic phase after stroke and are associated with decreased functional performance. Yet, current clinical assessments show suboptimal psychometric properties. Our aim was to develop and validate a novel robot-based assessment of sensory processing. We assessed 60 healthy participants and 20 participants with chronic stroke using existing clinical and robot-based assessments of sensorimotor function. In addition, sensory processing was evaluated with a new evaluation protocol, using a bimanual planar robot, through passive or active exploration, reproduction and identification of 15 geometrical shapes. The discriminative validity of this novel assessment was evaluated by comparing the performance between healthy participants and participants with stroke, and the convergent validity was evaluated by calculating the correlation coefficients with existing assessments for people with stroke. The results showed that participants with stroke showed a significantly worse sensory processing ability than healthy participants (passive condition: p = 0.028, Hedges’ g = 0.58; active condition: p = 0.012, Hedges’ g = 0.73), as shown by the less accurate reproduction and identification of shapes. The novel assessment showed moderate to high correlations with the tactile discrimination test: a sensitive clinical assessment of sensory processing (r = 0.52–0.71). We conclude that the novel robot-based sensory processing assessment shows good discriminant and convergent validity for use in participants with chronic stroke. Introduction Upper limb somatosensory impairments are common after stroke and associated with decreased functional performance [1]. Somatosensory function is generally divided into three modalities, namely exteroception, proprioception and sensory processing [2]. Exteroception and proprioception are defined as the primary perceptual functions, while sensory processing is the secondary function requiring higher cortical processing of the primary modalities to interpret and discriminate between stimuli [2]. Unlike exteroception, which shows nearly full recovery after stroke [3], proprioception and sensory processing often remain impaired in the chronic phase after stroke. Proprioception is still impaired in up to 50% of participants at six months after stroke, while sensory processing is still impaired in about 22-28%, depending on the assessment used [4,5]. A systematic review from 2014 showed that somatosensory impairments are associated with motor function and functional performance [6], for example, with sensory processing being the second strongest predictor for functional outcome at 6 months after stroke, only preceded by muscle strength [7]. More recent studies [4,[8][9][10][11][12][13] have confirmed these findings. Low to moderate correlations were found between somatosensory impairment and functional outcome [8,12]. Interestingly, sensory processing was found to be a prognostic factor for bimanual performance in mildly affected participants with chronic stroke [13]. Others have also suggested that recovery of somatosensory impairment might be a prerequisite for full motor recovery of the upper limb [11]. 
Given the persistence of somatosensory impairments into the chronic stage after stroke, and their importance for motor and functional recovery, it is key to accurately assess these impairments. Current clinical scales have suboptimal psychometric properties, such as having coarse ordinal scoring and ceiling effects [14]. For this reason, robot-based assessments have been recommended to assess upper limb impairments after stroke [15]. Various robot-based assessments of proprioception have been validated for use in participants with stroke [16][17][18], but we are not aware of any robot-based assessment for sensory processing. The aim of this study was to develop a robot-based assessment of upper limb sensory processing and to provide an easily interpretable outcome by performing a factor analysis on the different robot parameters. We hypothesized that all robot parameters would be related to the same latent factor, indicating the overall sensory processing ability. In addition, we aimed to assess the discriminative validity and convergent validity of the novel assessment. To establish the discriminative validity, the novel test should find worse performance in participants with stroke compared to healthy participants. To establish convergent validity, the novel test should show high correlations with other assessments of sensory processing, while lower correlations should be found with assessments of exteroception, proprioception, motor function and performance. We hypothesized to find good discriminative and convergent validity of the novel assessment. Participants A flowchart of participant inclusion can be found in Figure 1. Sixty healthy participants and twenty participants with chronic stroke participated in this cross-sectional study (see Table 1 for participant characteristics). Healthy participants were included if they were aged 18 years and above, had no history of stroke or transient ischemic attack and did not present with upper limb sensorimotor impairments. Participants with stroke were included if they were 18 years or older, at least six months after a first-ever unilateral supratentorial stroke (as defined by the World Health Organization) and able to perform at least some shoulder abduction and wrist extension against gravity. They were excluded if they presented with any other neurological or musculoskeletal disorders, or severe communication and cognitive deficits. This study was registered at clinicaltrials.gov (NCT04721561). Since the results obtained with this exploratory study will inform future power-based studies, a sample size of 60 healthy participants and 20 participants with chronic stroke was deemed sufficient. 
Experimental Set-Up For the robot-based assessments, the Kinarm End-Point Lab (BKIN Technologies Ltd., Kingston, ON, Canada) was used. This bimanual end-point robot allows 2D movement in the horizontal plane, without anti-gravity support, while permitting control of visual feedback through a virtual reality screen. The robot collects positional data of both upper limbs at a rate of 1 kHz. All tests were performed in a seated position with bilateral trunk restraints to avoid compensatory trunk movements. A black cloth prevented vision of the upper limbs. In three participants, hand fixation was used to maintain hand position of the affected limb, due to limited grasp function. Experimental Task Sensory processing was evaluated using two versions of a three-step sensory processing task, which differed only in their first step. For the passive condition of the task (Figure 2A), the robot first passively moved the participant's affected arm (or nondominant arm for healthy participants) in the shape of a triangle, tetragon or pentagon by starting from and returning to a starting point positioned 20 cm in front of the shoulder. 
For the active condition (Figure 2B), the participant was asked to explore the same shapes by moving the affected or nondominant arm between virtual walls delimiting the shape, which are described below. For both conditions, there was no visual feedback of shape or hand position. The participant was then asked to reproduce the shape without mirroring with the less affected or dominant arm within 15 s, by starting from and returning to the same starting position. Here, visual feedback was provided on the hand position but not on the reproduced path. Finally, the participant was asked to identify the explored shape out of six options presented on the screen of the Kinarm robot. Both conditions consisted of 15 randomized trials and were preceded by 5 practice trials (all shapes are provided in the Supplementary Material (Figure S1)). Feedback on task performance was only provided during the practice trials. In the passive condition, the robot used a bell-shaped speed profile with a maximum speed of 0.67 m/s. In the active condition, the shape was delimited with use of position-dependent force regions. Along the lines of the shape, a zero-force region with a width of 0.2 cm existed in which the participant could actively move. Outside these lines, a virtual wall with a stiffness of 6000 N/m and a viscosity of −50 Ns/m was applied. Participants were allowed to explore each shape once, at a self-determined speed within a time limit of 30 s. For both conditions, each line of the explored shapes was between 2.92 cm and 14.14 cm in length. Other Robot-Based Assessments To assess motor function, a 4-target visually guided reaching test was performed with each arm separately [19]. In this test, participants were required to perform center-out reaching movements as quickly and as accurately as possible. Ten outcome parameters were calculated, covering reaction time, speed and accuracy of reaching, after which they were combined into a single task score with higher values meaning worse motor function [19,20]. 
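To make the experimental task parameters above concrete, the following is a minimal MATLAB sketch of the two haptic elements of that task: a bell-shaped speed profile for the passive condition and the virtual-wall force delimiting shapes in the active condition. The minimum-jerk form of the profile, the sign convention for the −50 Ns/m viscosity (implemented here simply as damping opposing penetration) and the example segment length are assumptions, since only the peak speed, stiffness and viscosity values are stated in the protocol.

```matlab
% --- Passive condition: bell-shaped (minimum-jerk) speed profile ---
% A minimum-jerk profile is one common way to realize a bell-shaped speed
% curve; the exact Kinarm profile is not specified in the text.
L    = 0.10;                 % segment length in m (illustrative value)
vmax = 0.67;                 % peak hand speed in m/s (from the protocol)
T    = 1.875 * L / vmax;     % duration chosen so the peak speed equals vmax
t    = linspace(0, T, 500);
tau  = t / T;
v    = (L / T) * (30*tau.^2 - 60*tau.^3 + 30*tau.^4);  % bell-shaped speed

% --- Active condition: virtual wall outside the 0.2 cm zero-force corridor ---
k = 6000;   % wall stiffness, N/m (from the protocol)
b = 50;     % damping magnitude, Ns/m (sign convention assumed)
% pen: penetration depth beyond the corridor (m, > 0 inside the wall)
% penVel: penetration velocity (m/s); the force pushes the hand back out
wallForce = @(pen, penVel) -(k * pen + b * penVel) .* (pen > 0);
```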
To assess proprioception, a 9-target arm position matching test was performed [16]. In this test, the robot passively moved the participant's affected or nondominant arm, after which the participant was asked to actively mirror this position with their other arm. Twelve outcome parameters were calculated, including magnitude and variability of position errors, and combined into a single task score with higher values meaning worse proprioception [16,20]. In the visually guided reaching test, visual feedback of the hand position was provided, while in the arm position matching test, visual feedback was completely blocked. No practice trials were performed, but good understanding of instructions was checked by the examiner. Both tests showed good reliability and validity in participants with stroke [16,19,21]. Clinical Assessments Clinical assessments were performed on function and activity levels of the International Classification of Functioning, Disability and Health [22]. An overview of all clinical assessments can be found in Table 2. Data Analysis For the robot-based sensory processing task, position and velocity data of both upper limbs were imported into MATLAB (MathWorks, Natick, MA, USA). The start of the exploration and reproduction phases was selected based on a hand velocity threshold of 0.02 m/s after leaving the starting point. This threshold was established based on a close examination of pilot data and aimed to exclude postural oscillations of the hand while at the starting point. Both phases ended when the hand reached the starting point again. Hand position data were normalized in time ('interp1' in MATLAB) to ignore speed differences between the exploration and reproduction phase. To evaluate reproduction accuracy, three parameters were calculated using custom MATLAB scripts. First, we computed cross-correlation values ('xcorr' in MATLAB) between the horizontal or vertical normalized hand position signals from the explored and reproduced shapes. Cross-correlation values ranged between −1 and 1, with higher values indicating larger similarity. Next, we calculated the dynamic time warping parameter ('dtw' in MATLAB) between the explored and reproduced shapes, by representing both shapes as two temporal sequences of X and Y hand position signals and finding optimal alignment between them irrespective of speed. Dynamic time warping values equalled the distance between the two sequences, with higher values indicating less similarity. A Procrustes analysis ('procrustes' in MATLAB) compared the similarity between explored and reproduced shapes by optimally superimposing both shapes by translating, rotating and scaling the reproduced shape on top of the explored shape. Procrustes values indicated the distance between both superimposed shapes, with higher values indicating less similarity. Finally, we calculated the percentage of correctly identified shapes during the identification phase. Certainty of the participant's answer during the identification phase was evaluated using a 4-point Likert scale ranging from 0 to 3, with higher values indicating higher certainty. Statistical Analysis All statistical analyses were performed in R version 4.0.3 (R Foundation, Vienna, Austria) [41]. Statistical tests were performed two-tailed with an alpha level of 0.05. 
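As a minimal sketch of the three reproduction-accuracy parameters described in the Data Analysis paragraph above, the MATLAB fragment below applies the named built-ins to one explored/reproduced path pair. The synthetic triangle, the noise level, the 'coeff' normalization option for xcorr and all variable names are illustrative assumptions, not details from the paper; xcorr and dtw require the Signal Processing Toolbox and procrustes the Statistics and Machine Learning Toolbox.

```matlab
% Synthetic stand-ins for time-normalized hand paths (2 x n, rows = X/Y).
n    = 300;                                  % samples after time normalization
tri  = [0 0; 0.10 0; 0.05 0.10; 0 0]';       % 2 x 4 triangle vertices (m)
expl = [interp1(1:4, tri(1,:), linspace(1, 4, n)); ...
        interp1(1:4, tri(2,:), linspace(1, 4, n))];   % explored path
repr = 1.2 * expl + 0.002 * randn(2, n);     % reproduced: enlarged + noisy

% 1) Peak normalized cross-correlation per axis (closer to 1 = more similar)
ccX = max(xcorr(expl(1,:), repr(1,:), 'coeff'));
ccY = max(xcorr(expl(2,:), repr(2,:), 'coeff'));

% 2) Dynamic time warping distance on the 2-D paths, speed-invariant
%    (for matrix inputs, dtw treats rows as dimensions, columns as samples)
dtwDist = dtw(expl, repr);

% 3) Procrustes dissimilarity after optimal translation, rotation, scaling;
%    inputs are transposed so that rows are points
[procDist, ~, tform] = procrustes(expl', repr');
reprScale = 1 / tform.b;   % > 1 when the reproduction is larger (one convention)
```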
Because the Shapiro-Wilk test indicated a non-normal distribution for most outcomes, we compared participant characteristics between healthy participants and participants with stroke using Mann-Whitney U tests and Fisher's exact tests ('shapiro.test', 'wilcox.test', and 'fisher.test' from the stats package [41], respectively). To combine all five parameters of the robot-based sensory processing assessment into one factor score, an exploratory factor analysis using the principal factor method [42] was performed for the passive and active conditions separately. This analysis was performed on the data of healthy participants using the 'fa' function from the psych package [43]. Scree plots indicated one latent factor, which was defined as the sensory processing ability. The factor scores of healthy participants were calculated using the regression method for the passive and active conditions separately. We then obtained factor scores for participants with stroke by first calculating the standard scores of all five parameters against the mean and standard deviation of healthy participants, and then calculating the weighted mean of these standard scores by using the factor loadings of the exploratory factor analysis as weights. This way, a factor score of zero equals the mean performance of healthy participants, and the scores of participants after stroke can be interpreted as the magnitude of deviation from normal performance. To evaluate the discriminative validity, we compared the performance between healthy participants and participants with stroke. A robust three-way ANOVA based on 20% trimmed means ('bwwtrim' from Wilcox 2017 [44]) was performed on cross-correlation values using the participant group (healthy participants vs. participants with stroke) as a between-group factor, and task condition (active vs. passive) and axis direction (X vs. Y) as the within-group factors. A robust two-way ANOVA based on 20% trimmed means ('bwtrim' from Wilcox 2017 [44]) was performed on dynamic time warping parameters, on the outcomes of the Procrustes analysis and on the percentage of identified shapes, with the participant group as the between-group factor and task condition as the within-group factor. When no interaction effect was present, we reported the main effects. When an interaction effect was significant, the simple main effects were evaluated. We corrected for multiple comparisons using the Holm-Bonferroni method whenever simple main effects were calculated ('p.adjust' from the stats package [41]) [45]. For all ANOVA analyses, we reported the effect sizes as generalized eta squared [46,47] ('anova_summary' from the rstatix package [48]). We compared the factor scores of the passive and active conditions between healthy participants and participants with stroke with independent t-tests ('t.test' from the stats package [41]). Normality and homoscedasticity were confirmed a priori using Shapiro-Wilk tests and F-tests of equality of variances ('shapiro.test' and 'var.test' from the stats package [41]). We calculated effect sizes using Hedges' g with the 'cohen.d' function from the effsize package [47,49]. In addition, we compared the performance of all participants with stroke with 95% confidence intervals of healthy participants, to identify participants presenting with abnormal sensory processing ability, as was done previously by others [16,17,19]. 
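The factor-score arithmetic for participants with stroke described above (standard scores against healthy norms, then a weighted mean using the factor loadings) can be sketched as follows. All numeric values are placeholders rather than the study's actual loadings or normative values, and the handling of parameter direction (some parameters are better when higher, others when lower) is deliberately glossed over here.

```matlab
% Factor score for one participant with stroke: z-score the five task
% parameters against healthy means/SDs, then take the loading-weighted mean.
x        = [0.82 0.79 3.10 0.04 60];    % participant's five parameter values
muH      = [0.91 0.88 2.20 0.02 85];    % healthy means (placeholder)
sdH      = [0.04 0.05 0.60 0.01 12];    % healthy SDs (placeholder)
loadings = [0.72 0.70 0.61 0.58 0.46];  % factor loadings (placeholder)

z = (x - muH) ./ sdH;                   % standard scores vs. healthy norms
factorScore = sum(loadings .* z) / sum(loadings);   % loading-weighted mean
% A factor score of zero equals mean healthy performance; the further the
% score lies from zero, the larger the deviation from normal performance.
```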
The convergent validity was evaluated by calculating 20% Winsorized correlation coefficients ('wincor' from Wilcox 2017 [44]) for participants with stroke between outcomes on the robot-based sensory processing task, and standardized clinical and robot-based assessments of somatosensory function, motor function, cognitive function and activities. The strength of correlation was interpreted as follows: rW < 0.30 = negligible correlation; rW = 0.30-0.50 = low correlation; rW = 0.50-0.70 = moderate correlation; rW > 0.70 = high correlation [50]. In addition, we calculated 95% confidence intervals for all correlation coefficients by performing a Fisher z' transformation ('CIr' from the psychometric package [51]) [52]. Results Sixty healthy participants and twenty participants with chronic stroke were evaluated for their sensory processing abilities, by means of robot-based passive or active exploration, reproduction and identification of different shapes. The mean time needed to perform the passive and active conditions was 6.91 and 8.59 min, respectively. (F(1,76) = 0.42, p = 0.518, η²G < 0.01). For the Procrustes analysis, the group difference was influenced by task condition (Figure 3C; group × condition: F(1,76) = 4.88, p = 0.031, η²G = 0.02). In both conditions, participants with stroke showed slightly worse values than healthy participants, and the largest difference was found for the active condition. However, for both conditions, the difference was not significant (Figure 3C; passive condition: F(1,76) = 0.60, p = 0.554; active condition: F(1,76) = 1.87, p = 0.166). In addition, the active condition showed significantly worse values than the passive condition for both participant groups (Figure 3C; healthy participants: F(1,76) = 7.08, p < 0.001; participants with stroke: F(1,76) = 6.77, p < 0.001), and the difference was largest in participants with stroke. Results from the Procrustes analysis also showed that reproduced shapes were larger than explored shapes, with a mean scale of 1.41 for both participant groups. Hand speed during the exploration of shapes did not differ between participant groups or task conditions and averaged 0.04 m/s (SD 0.01). Participants with Stroke Showed Worse Sensory Processing Ability We performed an exploratory factor analysis on all parameters to generate an easily interpretable outcome. This analysis indicated one latent factor, representing the sensory processing ability and expressed as the factor score. All five parameters contributed to the factor score, and their factor loadings are shown in Table 3. Table 3. Factor loadings of the reproduction and identification parameters. Identification of Abnormal Performance in Participants with Stroke Based on the factor score, 11 participants with stroke (55%) had an impaired sensory processing ability on both the passive and active condition of the sensory processing task (Table 4). The percentage of correctly identified shapes showed the largest group of participants with abnormal performance, namely 16 and 15 participants (80% and 75%) for the passive and active condition, respectively (Table 4). Table 4. Number of participants with stroke showing abnormal performance as compared to healthy participants on the passive and active conditions of the sensory processing task. 
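For the convergent-validity computation described at the start of this section, a from-scratch MATLAB sketch of a 20% Winsorized correlation with a Fisher z' confidence interval is given below. It mirrors the role of the R functions 'wincor' and 'CIr' but is not those packages' code, and applying the Fisher interval directly to the Winsorized r is a simplification.

```matlab
function [rw, ci] = wincor20(a, b)
% 20% Winsorized correlation between two paired vectors, with an
% approximate 95% confidence interval via the Fisher z' transformation.
    g  = floor(0.2 * numel(a));          % values to winsorize in each tail
    aw = winsorize(a, g);
    bw = winsorize(b, g);
    C  = corrcoef(aw, bw);
    rw = C(1, 2);                        % Pearson r on the Winsorized data
    zp = atanh(rw);                      % Fisher z' transformation
    se = 1 / sqrt(numel(a) - 3);
    ci = tanh(zp + [-1, 1] * 1.96 * se); % approximate 95% interval
end

function w = winsorize(v, g)
% Clamp the g smallest and g largest values to the adjacent order statistics.
    s = sort(v(:)).';
    w = min(max(v, s(g + 1)), s(end - g));
end
```

A hypothetical call for one assessment pair would be [rw, ci] = wincor20(factorScores, tdtScores), where both inputs are vectors over the participants with stroke; the variable names are illustrative.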
The Robot-Based Sensory Processing Task Was Moderately to Highly Correlated with Sensitive Clinical Tests of Sensory Processing Convergent validity was established by correlating the factor scores of the robot-based sensory processing task with clinical and robot-based assessments of somatosensory function, motor function, cognitive function and activities. We found moderate to high correlations with sensitive clinical tests of sensory processing, in contrast to low correlations with tests of exteroception and proprioception (Table 5). In addition, low to moderate correlations were found with motor function and performance (Table 5). Correlation coefficients of all parameters with clinical and robot-based assessments can be found in the Supplementary Material (Table S1), as well as all scatterplots (Figure S2). Table 5. Correlation coefficients between factor scores of the robot-based sensory processing tasks, and clinical and robot-based assessments of somatosensory function, motor function, cognitive function and activities. Discussion In this study, we developed and validated a novel robot-based sensory processing assessment based on passive or active exploration, reproduction and identification of different shapes. First, the discriminative validity was established by showing a significantly worse sensory processing ability in participants with chronic stroke compared to healthy participants, as revealed by the less accurate reproduction and identification of explored shapes. Second, the convergent validity was established by showing moderate to high correlations with sensitive clinical tests of sensory processing, low correlations with clinical and robot-based tests of exteroception and proprioception and low to moderate correlations with motor function and performance. These novel robot-based assessments show some clear advantages compared to standard clinical assessments. They involve objective evaluation using sensitive outcome parameters measured on a continuous scale; therefore, no ceiling effects are present. Furthermore, a factor analysis creates the potential to simplify complex outcome parameters by calculating overlapping factor scores, in order to provide subsequent analyses which are easier to interpret. Regarding convergent validity, it is important to keep the differences between robot-based and clinical assessments in mind. We found low correlations with the sharp-blunt discrimination subscale of the Erasmus modified Nottingham sensory assessment, the stereognosis section of the original Nottingham sensory assessment and the functional tactile object recognition test. However, these clinical assessments showed a clear ceiling effect and ordinal scaling, whereas the robot-based factor score did not (see Supplementary Material (Figure S2)). Higher correlation coefficients were found with the tactile discrimination test, which is a more sensitive test without ceiling effects. Furthermore, the robot-based sensory processing assessment showed smaller associations with exteroception and proprioception than expected, even though the task requires processing of these primary functions. However, the reported correlations are in line with results found by Connell and colleagues, who found low to moderate agreement (kappa = −0.1-0.54) between the different modalities [53]. 
These results may indicate that, even though sensory processing uses the primary exteroceptive and proprioceptive information, it should be viewed as a distinct modality. Finally, we found low to moderate correlations with motor function and performance, suggesting the association between sensory processing and functional abilities after stroke, which has been reported by others before [8,12]. Correlations with clinical assessments of proprioception and motor function were similar to the correlations with robot-based assessments of these functions, indicating the robustness of the results. Recently, Ballardini and colleagues developed a technology-based evaluation of sensory processing in a limited group of healthy participants and participants with chronic stroke [54]. The described protocol evaluates sensory processing based on exteroceptive information [54], while our protocol is based on the processing of mainly proprioceptive information. A similar task to ours was used in the experiment of Henriques and colleagues in 2004, where six healthy participants were asked to actively explore and reproduce tetragons using a planar robot [55]. Here, healthy participants were relatively accurate in reproducing the explored shapes; however, they consistently overestimated the size (mean scale of 1.15) [55], which is similar to the results found in this study. To the best of our knowledge, such a robot-based approach has never been used or validated in a group of participants with stroke. Therefore, the results from the present study add a novel assessment paradigm to the field of upper limb sensory processing evaluation. The novel robot-based assessments do have some limitations. First, the reproduction phase requires contralateral arm movement; hence, the interhemispheric transfer of information is required, which might be disturbed after stroke. Second, reproduction is performed with the ipsilesional upper limb, which might show subtle but significant impairments [56]. Third, because of the non-simultaneous execution of the exploration, reproduction and identification phases, information has to be stored in the working memory, which is often impaired after stroke [57]. Still, our novel evaluation paradigm showed valid results in our subgroup with stroke, suggesting applicability in further research. Finally, because of a possible increase in performance accuracy through learning, an additional analysis was performed to evaluate for the learning effects of the novel sensory processing task, which can be found in the Supplementary Material (S1). It is important to acknowledge the limitations of this study. First, only a limited group of participants with chronic stroke was included, which reduces the generalizability of the results. As a result of this sample size, there was low power to compare the results between subgroups of participants with stroke with and without clinically diagnosed sensory processing deficits. However, an additional subgroup analysis was performed which can be found in the Supplementary Material (S2). This additional analysis showed some interesting results and can guide further research. There also remains a lot of uncertainty about the magnitude of the correlation coefficients, given the large confidence intervals. Future research should therefore replicate results in a larger sample. 
As a second limitation, due to the set-up of the Kinarm robot and the active condition, which required active grasping of the end-point handles and active shoulder and elbow movements, only mildly to moderately affected participants with stroke were eligible to participate. Indeed, in the present study we found mostly mild upper limb impairments. This might have led to an underestimation of the severity of sensory processing deficits in the general stroke population, and it also limits the generalizability of the results to that population. Future studies should therefore assess the usefulness and feasibility of the proposed measures in a more severely affected population. A robotic set-up which allows anti-gravity support might be preferred over the current end-point set-up for use in the more severely affected population.
Based on the results described here, suggestions can be made for further implementation of the novel robot-based sensory processing assessments. Despite requiring active movement of the affected arm, and therefore being restricted to mildly to moderately impaired participants, the active condition might be preferred over the passive condition given its greater discriminative and convergent validity (i.e., a greater difference between healthy participants and participants with stroke (Figure 3), and larger correlation coefficients with clinical tests of sensory processing (Table 5)). In addition, when the primary aim is to identify upper limb sensory processing deficits, the reproduction phase can be skipped, as the identification parameters showed more favorable discriminative validity results than the reproduction parameters (Figure 3 and Table 4). However, future research should prioritize replication of the current results in a larger and more heterogeneous sample, and should include an additional evaluation of the reliability and responsiveness of both the robot-based passive and active sensory processing assessments, as advised by the COSMIN initiative [58].
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/brainsci12081005/s1, Figure S1: Explored shapes during the identification phase; Table S1: Correlation coefficients with reproduction and identification parameters; Figure S2: Scatterplots; S1: Learning effects; S2: Subgroup analysis between participants with stroke with and without clinical sensory processing deficits; Figure S3: Results of the subgroup analysis.
An E-C Translation Study under the Theory of Eco-translatology: Children and Nature in The Secret Garden
Approaching the second decade of the 21st century, the theoretical translation field of China was graced with a new theory that might open up a new paradigm for the discipline. Inspired by ancient Chinese wisdom on the harmonious relationship between man and environment, the theory of eco-translatology was put forward by Gengshen Hu. Eco-translatology is an interdisciplinary subject that combines translation studies with ideas from ecology. In most translated (English to Chinese) children's literature, translators select different words according to their translational environment and their own understanding of cross-cultural adaptation. Therefore, an ideal translation would achieve harmony among linguistic, communicative, cross-cultural and other translation-related factors, as well as harmony among the author's original intent, the translator's choice of words, and the reader's experience. This paper makes a specific analysis of different E-C translated texts of the famous children's novel The Secret Garden to explore the translators' different interpretations of the author's unique ideas and their word choices under the theory of eco-translatology, in the context of their time period and social background. Finally, the author concludes how eco-translatology theory can be of practical use in translation.
Relevant Background
Eco-translatology, as a new translation theory, metaphorically compares the activity of translation, and the hidden principles within translation activities, to an actual ecosystem and the natural processes operating within it, drawing on Darwinian theory [1]. Darwinian theory includes principles such as: species react differently in different environments, species attempt to adapt to non-indigenous environments, and the fittest survive in the end. Correspondingly, eco-translatology holds that translators consciously or unconsciously try to adapt to a non-native language environment, and that the words which survive best under cross-cultural transition become the optimal choices for the translation.
Children's literature is a distinguishable part of the literary field owing to its frequent use of rhetorical devices, interjections and, at times, onomatopoeia. Most children's literature emphasizes figurative depiction, which ultimately serves an educational purpose. The work chosen for this paper is The Secret Garden, written by Frances Hodgson Burnett [2]. This paper explores the differences between two E-C translated versions of The Secret Garden, produced by Wenjun Li and Hong Xu [3,4]. First, it examines how the two translators adapted the main idea of The Secret Garden; it then compares the two translated texts on the linguistic, cultural and communicative dimensions, the three key dimensions of eco-translatology.
Theoretical Implications
In order to explain the significance of eco-translatology, the origin of the theory must be traced. The combination of ancient Chinese philosophy and Darwinian theory, together with years of research and theoretical accumulation by Newmark, Warren, Bassnett & Lefevere, Wilss, Gengshen Hu and others in the field of translation theory, produced eco-translatology [5-8].
Before Hu put forward the independently defined theory of eco-translatology, most academics who used the term "the ecological environment of translation", or similar terms, treated it as a sub-category of another translation theory, or used it in a sense closer to actual ecology than to translation.
In fact, before promoting eco-translatology, Gengshen Hu had already spent years trying to work out a more complete and inclusive translation system, like many other researchers who had attempted "multidisciplinary" or "integrative" translation studies. Earlier translation theories, such as Eugene Nida's Functional Equivalence Theory of 1969 and the Skopos theory of the 1970s, had proven to be foundation stones of the discipline, yet had also been gathering dust on the shelves for decades [9]. In 2004, inspired by Darwinian theory, Hu put forward the theory of translation as adaptation and selection, arguing that translation is an activity of adapting and selecting within the entire translating environment; this represented the bud of eco-translatology beginning to take shape in Hu's mind [10].
For years, most translation theories centred on the output of translation or focused on the way of reaching that result. Even the studies that did treat translators as their dominant factor failed to connect the translators' central position with a broader environment, since the earlier perception of translators as mere carriers of knowledge and skills for translation had not yet been outmoded.
One of the theoretical perspectives of eco-translatology is translator-centredness, while seeing the translational environment as an organic whole [11]. This differs from other theories that mainly focus on the output of the translation, and it can be attributed to the influence of traditional Chinese wisdom. In the ancient Chinese philosophy of "天人合一", man is not merely considered a carrier of his own mind; no one can escape the influence of their own environment (physical or mental). Thus, this ancient Chinese philosophy holds that people's minds must be affected by their environment, and that the best way of living is to reach a "harmonious relationship between man and environment". Building on this ancient wisdom, Hu points out that translators cannot escape the influence of their own translational and language environment. Another ancient philosophy, "适中尚和", holds that "the one that fits the environment best is the one that achieves true harmony". The features extracted from these philosophies helped form eco-translatology.
Organization of the Thesis
This paper examines the translators' attempts to adapt eco-translatological methods (whether consciously or unconsciously) to Frances Hodgson Burnett's unconventional description of nature and children in The Secret Garden. It analyses the different translated texts to explore the translators' different interpretations of the author's unique ideas and their word choices under the theory of eco-translatology, in the context of their time period and social background.
Translation Analysis Study based on Eco-translatology
This paper explores the differences between the two E-C translated texts of The Secret Garden, produced by Wenjun Li and Hong Xu. First, it examines how the two translators adapted the main idea of The Secret Garden; it then compares the two translated texts on the linguistic, cultural and communicative dimensions within the framework of eco-translatology. The purpose of this study is to compare the different translation techniques adopted by the translators under the theory of eco-translatology, not to subjectively criticize either translator's version.
The Representation of Children's Desire for Nature
The source text of these translation practices is excerpted from The Secret Garden, a book that tells the story of three children growing, interacting and changing with the help of nature. The three main characters are Mary Lennox, a grumpy, mean girl who was neglected by her parents and sent to a mansion owned by her uncle Mr. Craven; Colin Craven, a paranoid, weak child ignored by his father, who refuses to get out of his bed while hiding in the mansion; and Dickon, a positive peasant boy who can befriend animals, from a poor family of twelve children. Mary discovers a deserted garden hidden in the mansion and later shares the secret with Dickon and Colin. The three children then tend the garden to bring back its liveliness; in return, the garden gradually guides the children to recognize and overcome their own grumpiness, loneliness and other problems.
Previous research on translations of The Secret Garden has mostly studied it from the perspective of Skopos theory, functional equivalence theory and other earlier theories. There are two reasons for selecting eco-translatology here. First, this book is mostly made up of dialogue (sometimes with accents), so it is the translators' responsibility to select the most appropriate tone to present each character's features within the eco-translational environment. Second, nature is frequently mentioned in this book, so translating the names of plants, animals and other species requires a certain botanical and ecological knowledge of the translators in both the cultural and linguistic dimensions.
Garden as the Symbol of One's Inner Sanctum
In The Secret Garden, the author Frances Hodgson Burnett successfully conveys the main idea that embracing nature benefits children physically and mentally. But the nature described in this book has greater depth than the main idea alone: the hidden secret garden also represents the inner sanctum of one's heart [12,13].
According to the original book, the garden was once carefully tended by Mrs. Craven, Colin's mother. The garden became deserted after her unfortunate death while giving birth to Colin.
Mr. Craven's and Colin's mental states also became "a deserted and lifeless garden". Mary, the mean girl raised in India, is always thin, sick and grumpy; however, she eats and exercises much more frequently after discovering the garden and setting the goal of bringing it back to life. Colin is feeble and paranoid to the verge of suicidal thoughts in the beginning, yet he finds meaning in life when Mary accidentally walks into his room and shares the goal of tending the garden; he begins to step outside and focuses on living instead of death. Dickon is the only main character who begins with a positive attitude towards life; he serves mostly as a guardian and helper within the trio, vividly providing a contrasting example of a child growing up in the arms of nature and parental love.
The Difference Between the Different Translators' Versions of The Secret Garden
Eco-translatology includes three key dimensions. The communicative dimension indicates that translators should pay great attention to interpreting the communicative intention of the original author. The linguistic dimension concerns the translators' adaptation and selection of language form at different aspects and levels in the process of translation. And the cultural dimension refers to the translators' effort in transmitting cultural connotations during the translation process.
The following part of the paper analyses the translation of selected paragraphs or sentences in the linguistic, cultural and communicative dimensions respectively, in order to uncover the different translation methods and the adaptation and selection techniques in the translated texts under the theory of eco-translatology.
The Linguistic Dimension
The linguistic dimension refers to the translators' adaptation and selection of language form, such as word meaning and language style, at different aspects and levels in the process of translation. The following are two examples.
Example 1: Li translated "a sour expression" as "还老哭丧着脸", while Xu translated it as "那副愠怒少欢的面孔". Li's translation is distinctly colloquial and carries a subjective tone that reminds Chinese children of their parents' frequent scolding, which is vivid and emotionally engaging for Chinese readers. Xu's translation uses a traditional Chinese four-character idiom, "愠怒少欢", meaning "always sulky and seldom happy"; in Chinese, four-character idioms are usually used formally and are considered the language of educated people, so Xu's translation lends a sense of sophistication to the literary tone on the linguistic level.
Example 2: Li translated "had always been ill in one way or another" as "从小就这病那病不断", while Xu translated it as "加上体弱多病". Li's translation includes the Chinese pronouns "这" ("this") and "那" ("that"), emphasizing that Mary is always ill with this disease or that disease, which neatly parallels "ill in one way or another" and is also colloquially catchy. Xu's translation uses the four-character idiom "体弱多病", meaning "feeble with many diseases", which is direct in meaning and concise in length.
The Cultural Dimension
The cultural dimension refers to the translators' effort in transmitting cultural connotations during the translation process.
Throughout the book, a noticeable number of Indian words appear, owing to Mary's background of being raised in India. The opening paragraphs of the first chapter contain at least three Indian words: "Ayah", "Mem Sahib" and "Missie Sahib".
It can be seen that the two translators chose different approaches for these words. Li chose to keep the indigenous flavour by translating "Ayah" as "阿妈", "Mem Sahib" as "女主人" and "Missie Sahib" as "小主人", all colloquial expressions commonly used in the feudal society of China, helping readers to better understand the background of India at that colonized time period. Xu chose to translate "Ayah" as "保姆", "Mem Sahib" as "夫人" and "Missie Sahib" as "小姐"; these words are more modern and formal, meaning that Xu sacrificed the colloquial, indigenous flavour in favour of a more standardized translation, which enables readers in general to better understand what these Indian expressions represent. Another noteworthy benefit of Xu's translation is that a modern, standardized rendering helps younger readers, who lack knowledge of the feudal society of China, to grasp the meaning of these Indian expressions.
In chapter IX, the words "the natives charm snakes" and "snake-charmer" describe people in India using sounds to control snakes. Li translated these words as "能引着蛇扭身起舞的印度土著", showing his attempt to picture the vivid scene of a person playing music while a snake twirls and dances along; although the translation is much longer than the original English words, it does add a certain level of literary description. Xu translated "snake-charmer" as "耍蛇人" and "训蛇师", which is just as concise as the original English while still keeping the core idea of a snake tamer; readers can understand the Chinese translation at once.
The Communicative Dimension
The communicative dimension indicates that translators should pay great attention to interpreting and conveying the communicative intention of the original author during the translation process.
After analysing the two versions of the translation, it was found that Li inclines towards colloquial expressions, even informal regional slang, and tends to draw readers into the book's dialogue, while Xu chooses more formal expressions and positions the readers in a third-party perspective.
For example, Li translated "everybody said" as "谁都说", while Xu translated it as "人们都说": Li rendered "everybody" as "谁", Xu as "人们". Li translated "It was true" as "这说的也是大实话", and Xu translated it as "的确如此". The difference in these sentences lies in the choice of perspective. In Chinese, most colloquial expressions are longer and more vernacular; Li's linguistic choices use such expressions to create an inclusive conversation between the fictive storyteller and the readers, while Xu's translation places readers in a third-person perspective.
These two sets of choices reflect the translators' decisions in the communicative dimension: by positioning the readers in different perspectives, they create two different literary distances, and the two translated texts achieve distinct reading experiences.
Specifically, when reading Li's translated text, readers respond to the colloquial and informal tone and try to connect with the text as in a conversation, placing themselves personally at the scene, which is the kind of closeness some literary critics pursue. Xu's translated text provides readers with a more distant position by using more formal terms and Chinese four-character idioms, allowing readers to gain a more concise reading experience and a chance to view the text from a more analytical position, which helps readers better deconstruct and understand the text in translation.
Conclusion
Studying the practical use of eco-translatology theory by comparing different translated texts of The Secret Garden is the main aim of this paper. By tracing the theoretical development of eco-translatology, it can be seen that eco-translatology is a cross-disciplinary theory combining translation and ecology. Furthermore, Gengshen Hu concluded and put forward the key features of eco-translatology: translator-centredness, seeing the translational environment as an organic whole, translators consciously or unconsciously trying to adapt to a non-native language environment, and the words that survive best under cross-cultural transition becoming the optimal choices for the translation [14]. Therefore, exploring the translators' different interpretations of the author's unique ideas and their word choices in The Secret Garden under the theory of eco-translatology, in the context of their time period and social background, can facilitate both translation and literary understanding.
Taking the translator as the centre, analysing and studying the ecological environment of the translated texts and then comparing the translations of different translators can help obtain broader perspectives and more conclusive observations of the translation, which can further help to understand and identify the cultural and linguistic factors introduced by the translator. Secondly, analysing the ecological environment of translation can help identify which words were discarded and which descriptions were added by the different translators during the process of translation, so as to better visualize the "abstract art contained in translation". At the same time, it can also help to check the quality of the translated text, and may even detect mistranslations caused by the translation ecological environment.
Recurrent Hemoptysis in a Patient with Primary Pulmonary Hypertension – A Case Report and Literature Review
Pulmonary hypertension (PH) is defined as an increase in mean pulmonary arterial pressure (mPAP) ≥ 25 mmHg at rest, as assessed by right heart catheterization (RHC). The symptoms of PH are non-specific and mainly related to progressive right ventricular (RV) dysfunction. In some patients, the clinical presentation may be related to mechanical complications of PH and the abnormal distribution of blood flow in the pulmonary vascular bed, including hemoptysis related to rupture of hypertrophied bronchial arteries. Hemoptysis is a serious complication that is rarely reported in patients with pulmonary arterial hypertension (PAH). Its severity ranges from mild to very severe, leading to sudden death. Hemoptysis is reported to be a terminal-stage complication of PAH, with a variable prevalence of 1% to 6%. Although the incidence is quite rare, the presence of recurrent hemoptysis in patients with pulmonary hypertension is a sign of poor prognosis. Bronchial artery embolization is suggested as an acute emergency procedure in the case of severe hemoptysis, or as an elective intervention in cases of frequent mild or moderate episodes.
Microscopically, PAH is characterized by intimal hyperplasia, hypertrophy of the tunica media, thickening of the tunica adventitia, and endothelial proliferation. The disease was first described by Dr. Ernst von Romberg in 1891. 2,3 The incidence of PAH is very rare; in France, for instance, it is only 15 cases per million people. 2,3 Hemoptysis is a serious complication that is rarely reported in patients with PAH and is considered a terminal-stage complication of the disease. The incidence of hemoptysis in PAH patients remains uncertain and tends to be under-reported. Similarly, the patho-mechanism of hemoptysis in PAH remains uncertain. 4 Mortality associated with hemoptysis in PAH is multifactorial. Some evidence suggests that patients with PAH associated with congenital heart disease have a better prognosis for hemoptysis complications than other PAH types; however, the underlying mechanisms remain unknown. 4
Chest computed tomography showed dilatation of the pulmonary truncus with a diameter of ± 3.59 cm, dilatation of the right pulmonary artery with a diameter of ± 2.69 cm, dilatation of the left pulmonary artery with a diameter of ± 2.51 cm, and dilatation of the bilateral pulmonary artery branches. The diameter of the pulmonary truncus was greater than that of the ascending aorta, and there was enlargement of the right atrium and the right ventricle. Conclusion: dilatation of the pulmonary artery supporting the diagnosis of pulmonary hypertension (Figure 4). The patient was diagnosed with primary pulmonary arterial hypertension with a non-reactive O2 test, hemoptysis suspected to be associated with pulmonary hypertension, suspected Health Care-Associated Pneumonia (HCAP) and hypokalemia. The treatments were: sildenafil 40 mg t.i.d., digoxin 0.25 mg q.i.d., furosemide 20 mg q.i.d., iloprost nebulizer 2.5 mcg q.i.d., Aspar K 1 tablet q.i.d., intravenous ceftazidime 1 g t.i.d., and intravenous ciprofloxacin 400 mg b.i.d.
There is usually no orthopnea or paroxysmal nocturnal dyspnea. 1,7 Physical examination is relatively insensitive for making the diagnosis, but it can help to rule out differential diagnoses.
If wheezing and rales are found on lung examination, the possibility of bronchial asthma, bronchitis or fibrosis should be considered; rales and cardiomegaly may also be seen in congestive heart failure. 9 The chest X-ray of the patient presented in this case report supports the presence of pulmonary hypertension, as the image shows cardiomegaly (RV dilatation).
In PAH, the production of vasoconstrictors increases while the production of vasodilators such as prostacyclin decreases. 3,13 In addition, an elevated plasma serotonin level has been found in the vessel lumen of patients with PAH; serotonin can stimulate the proliferation of smooth muscle cells and is an important factor in the pathogenesis of PAH. Nitric oxide (NO), produced in the endothelium, is a vasodilator that inhibits platelet activation and vascular smooth muscle cell proliferation; it is formed by the nitric oxide synthase isoforms (NOS1/NOS2/NOS3). However, the role of NO in PAH pathophysiology remains unclear. Pulmonary arterial hypertension has a poor prognosis and high mortality, despite adequate treatment. 13,14
Pathophysiology of Hemoptysis in Pulmonary Artery Hypertension
Hemoptysis is defined as the coughing up of blood originating from the lungs or bronchial tubes, as a result of pulmonary or bronchial haemorrhage. Hemoptysis is divided into massive and non-massive forms based on the volume of coughed blood, although there is no universally agreed volume threshold. Pulmonary arterial hypertension (PAH) is a pathological condition whose primary cause remains unknown. 19 In patients with PAH, elevated endothelin-1 and its plasma products have been reported, and these are associated with the severity of the disease.
Prognosis of Hemoptysis Associated with Pulmonary Hypertension
The prognosis of PAH is usually poor because it is generally caused by other diseases. Patients usually realize that they have pulmonary hypertension only after clinical symptoms appear, which means the disease is already at an advanced stage. If it is diagnosed earlier, the prognosis will be better, at least in terms of reducing symptoms. Hemoptysis is known to be one of the complications of pulmonary hypertension. Although the incidence is quite rare, the presence of recurrent hemoptysis in patients with pulmonary hypertension is a sign of poor prognosis. 4
Predicting the Early Stages of Solid-State Precipitation in Al-rich Al-Pt Alloys
The high strength of structural aluminium alloys depends strongly on the controlled precipitation of specific intermetallic phases, whose identity and crystal structure can be difficult to predict. Here, we investigate the Al-Pt system, which shows some similarity to the Al-Cu system, as one of its main intermetallic phases, $Al_2Pt$, is nearly isostructural with $\theta^{\prime} (Al_2Cu)$, the metastable phase responsible for the high strength of Al-Cu alloys. However, phases in Al-Pt alloys are complex and have not been studied in detail. Using a combination of density-functional theory (DFT) calculations and classical nucleation theory (CNT) applied to the Al-Pt system, we design a workflow to predict the thermodynamics of the solid solution, intermediate phases such as GP zones, stable and metastable precipitates, and their precipitation sequence. This workflow can be applied to an arbitrary binary alloying system. We confirm the known stable phases $Al_4Pt$, $Al_{21}Pt_8$, $Al_2Pt$, $Al_3Pt_2$, $AlPt (\alpha \&\beta)$, $Al_3Pt_5$, $AlPt_2 (\alpha \&\beta)$ and $AlPt_3 (\alpha \&\beta)$. We also reveal the possible existence of two phases of chemical formulae $Al_5Pt$ and $Al_3Pt$. This large number of intermetallic phases is due to the strong bonding between Al and Pt, which also leads to a significantly favourable Pt solute formation energy in the Al matrix. Our findings are compared with the known precipitation characteristics of the binary Al-Cu and Al-Au systems. We find that the $\theta^{\prime}$-like $Al_2Pt$ precipitate phase has a lower coherent interfacial energy than $\theta^{\prime}$. Our calculations strongly suggest that $Al_2Pt$ will precipitate first in Al-rich Al-Pt alloys and will form bulk-like interfaces similar to $\eta (Al_2Au)$ rather than like $\theta^{\prime}(Al_2Cu)$.
Introduction
Controlled solid-state precipitation is commonly used for achieving significant strengthening in light structural alloys. This is because precipitates of appropriate shape, distribution, number density and crystallography can constitute effective barriers to the movement of dislocations. In general, high-strength alloys require uniformly and densely distributed precipitates with high aspect ratios and a rational crystallographic orientation [1]. A well-known alloy system containing such precipitates is the Al-Cu system, which forms the basis of an important class of engineering alloys for aircraft and aerospace applications [2]. In Al-Cu alloys, a fine and uniform precipitation of θ′′ (Al3Cu) and θ′ (Al2Cu) precipitates with aspect ratios greater than 50:1 can lead to tensile strengths of up to 500 MPa [2]. The θ′ phase is metastable and its crystal structure bears little resemblance to known bulk phases in the Al-Cu system [3]. More generally, precipitates forming in the early stages of precipitation are commonly metastable phases not expected to form based on the bulk phase diagram (e.g., η′ (AlCu) in Al-Cu alloys [4]), or they exhibit distorted structures of known bulk phases (e.g., the Ω phase in Al-Cu alloys [5]). Consequently, predicting the early stages of precipitation in a given alloy system through atomic-scale modelling, rather than in an ad hoc and empirical way, is a very challenging task. Yet this endeavour is of great importance for discovering and enhancing the properties of structural Al alloys.
This challenge has motivated us to investigate the solid-state precipitation in the Al-rich part of the Al-Pt alloy system using advanced atomistic simulation methods. In the periodic table, Pt sits next to the column of Cu and Au, and shares their cubic close-packed structure. The Al-Pt system exhibits several features that should make it interesting to compare with the better-known Al-Cu and Al-Au binary alloys containing plate-shaped precipitates. Firstly, among all Al binary alloys, the Al2Pt phase is the only stable θ′-like intermetallic phase besides η (Al2Au) [6]. Secondly, several gaps remain in our knowledge of the precipitation characteristics of Pt-containing aluminium alloys. The latest research on the Al-Pt system indicates the following phases to be stable: Al4Pt, Al21Pt8, Al2Pt, Al3Pt2, AlPt (α & β), Al3Pt5, AlPt2 (α & β) and AlPt3 (α & β) [7-9]. Here, α is the low-temperature polymorph and β is the high-temperature one (see Table 1 for details). Al2Pt has a cubic CaF2-type structure with a lattice parameter $a_{Al_2Pt}$ = 5.91 Å [10], which is similar to the crystal structure of θ′ (Al2Cu) [3] when viewed along <110>, as shown in Figure 1. There remain many unknowns regarding the Al-Pt alloy system: recent experimental and computational studies have focused on the determination of bulk intermetallic phases [7,8,11-13] and their applications [14,15], but have not addressed the thermodynamics of the solid solution, precipitate interfacial structures and precipitation sequences. One particularly intriguing feature of the Al-Pt binary system is the existence of several stable and metastable phases of lower Pt concentration than Al2Pt [9], contrary to the Al-Cu [16] and Al-Au [17] systems, where no stable Al-rich phases exist before the θ′ and η phases. However, there is hitherto no consensus on the precipitation sequences in the Al-Pt system. For instance, it remains unclear which phase will form after quenching an Al-Pt alloy, and which one will precipitate first [18-20]. In this work, we used first-principles calculations based on density-functional theory (DFT) combined with classical nucleation theory (CNT) to determine the thermodynamic stabilities of the different phases, the precipitate interfacial energies, and the precipitation sequences in the Al-Pt binary system. We reveal the possible existence of two new phases, Al5Pt and Al3Pt [7-9]. Among all phases, only θ′-like Al2Pt is predicted to form thermodynamically stable coherent interfaces along the {001} planes, and it is likely to precipitate first directly from the supersaturated solid solution, thereby bypassing phases with higher Al content which could be expected to form first based on chemical potential alone. We illustrate that solute clusters, GP zones, and non-bulk-like interfaces are unlikely to form due to the significantly low solute formation energy of Pt in the Al matrix. Our study thus fills knowledge gaps in the Al-Pt alloy system and, more generally, provides a workflow to determine precipitate phases and precipitation sequences in an arbitrary binary alloying system.
Methods
We first calculate the relaxed lattice parameters, thermodynamic characteristics, elastic properties, and interfacial energies of phases in the binary Al-Pt system using density-functional theory (DFT). These DFT results are then adopted as input parameters for the classical nucleation theory (CNT) calculations to examine the precipitation sequences. For calculations of phonons, we used a tighter energy tolerance criterion of 1 × 10⁻⁸ eV.
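To make the role of the phonon calculations concrete, the following minimal sketch evaluates the harmonic vibrational free energy, $F_{vib}(T) = \sum_i [\hbar\omega_i/2 + k_B T \ln(1 - e^{-\hbar\omega_i/k_B T})]$, from a set of phonon frequencies. This is an illustrative sketch rather than the authors' actual post-processing code; the frequency list is a hypothetical stand-in for the output of a phonon calculation.

```python
import numpy as np

HBAR = 6.582119569e-16  # reduced Planck constant, eV*s
KB = 8.617333262e-5     # Boltzmann constant, eV/K

def vibrational_free_energy(freqs_thz, temperature):
    """Harmonic vibrational free energy (eV per cell) from phonon
    frequencies: zero-point term plus thermal occupation term."""
    omega = 2.0 * np.pi * np.asarray(freqs_thz) * 1e12  # rad/s
    e_quanta = HBAR * omega                             # hbar*omega in eV
    zero_point = 0.5 * np.sum(e_quanta)
    if temperature == 0.0:
        return zero_point
    kt = KB * temperature
    thermal = kt * np.sum(np.log1p(-np.exp(-e_quanta / kt)))
    return zero_point + thermal

# Hypothetical frequencies (THz) standing in for a real phonon run:
freqs = np.array([2.1, 3.4, 5.0, 7.8, 9.2, 10.5])
for T in (0.0, 300.0, 600.0):
    print(f"T = {T:6.1f} K  F_vib = {vibrational_free_energy(freqs, T):+.4f} eV")
```

In practice the sum runs over all q-points and branches of the phonon band structure; this one-cell version only illustrates how the entropy term enters the finite-temperature stability analysis described next.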
We applied the CNT formalism to calculate the thermodynamics of precipitate nucleation:

$\Delta G = V\left(\Delta g_{ch} + \Delta g_{el}\right) + \bar{\gamma} A$,

where $V$ is the volume of the nucleus, $\Delta g_{ch}$ is the chemical energy, $\Delta g_{el}$ is the elastic strain energy, $\bar{\gamma}$ is the average interfacial energy, and $A$ is the area of the interfaces between the precipitated phases and the matrix. Further details of the CNT calculations are presented in the Supplementary Information 1.
Results
We first present the thermodynamic characteristics of the various phases in the Al-Pt system.
Stable and Metastable Phases
We first investigate intermetallic phases between Al and Pt to provide a preliminary understanding of the binary Al-Pt system. Here we investigated the crystal structure and stability of the well-accepted as well as the controversial phases in this alloy system. Our results and the data reported in the literature are summarised in Table 1. A challenge in determining the thermodynamic characteristics of the Al-rich Al-Pt alloy system is that the full crystal structures of some reported phases, such as Al5Pt [18] and Al3Pt [7,19,20], are still unknown. We thus modelled these phases based on known intermetallic phases of the same chemical formula in binary systems with elements in the same groups as Al and Pt. Among all possible crystal structures, those with the lowest formation energy are selected. Al5Pt is modelled from Ga5Pd and Al3Pt is modelled from Tl3Pt. Further details for these calculations are provided in Supplementary Information 2. We assess the stability or metastability of the different Al-Pt phases by the formation energy per atom ($E_{f.}$), calculated as

$E_{f.} = \left(E_{\mathrm{Al}_x\mathrm{Pt}_y} - x E_{\mathrm{Al}} - y E_{\mathrm{Pt}}\right) / (x + y)$,

where $E_{\mathrm{Al}_x\mathrm{Pt}_y}$ is the total energy of a bulk phase containing x Al and y Pt atoms, and $E_{\mathrm{Al}}$ and $E_{\mathrm{Pt}}$ are the per-atom energies of bulk Al and Pt, respectively. A more negative formation energy represents a higher stability of the concerned phase. Since precipitation in binary Al alloys occurs at temperatures in the range of 100-300 °C, we have also calculated the Helmholtz free energy of formation per atom ($F_{f.}$), defined analogously with Helmholtz free energies in place of total energies. The corresponding Helmholtz free energy ($F$) for each case is calculated as

$F(T) = E_{0\,\mathrm{K}} - T S_{vib}(T)$,

where $E_{0\,\mathrm{K}}$ is the energy calculated at 0 K and $S_{vib}(T)$ is the vibrational entropy contribution as a function of temperature. Again, a more negative free energy of formation represents greater stability. Here we have only considered vibrational contributions to the total entropy, since the configurational entropy can be neglected due to the low content of Pt, and the electronic entropy is small compared to the vibrational entropy even for transition metals [29]. Figure 2 shows the calculated formation energies of the intermetallic phases in the binary Al-Pt alloy system. These agree well with experimental measurements [9], except for an outlier at a concentration of 74.7 at.% Pt. The Al5Pt phase possesses a formation energy close to the convex hull, in agreement with a previous DFT calculation [11]. Therefore, Al5Pt is a possible stable phase. Figure 3 shows the changes in the Helmholtz free energy of formation per atom from 0 K to 1000 K of selected Al-rich phases in the Al-Pt binary system. Whereas most phases become less stable at high temperatures, Al3Pt shows the opposite behaviour because it is stabilised by entropy at high temperatures.
(Reference values from [8], [10] and [30], respectively.)
Figure 3. Change in the Helmholtz free energy of formation from 0 K to 1000 K of selected Al-rich phases in the Al-Pt binary system. Whereas most phases become less stable at high temperatures as expected, Al3Pt shows the opposite behaviour.
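The stability criterion above, distance to the convex hull of formation energies, is straightforward to evaluate numerically. The sketch below computes per-atom formation energies and the hull distance for a binary system; all energies and compositions are hypothetical placeholders, not the paper's values.

```python
import numpy as np

def formation_energy_per_atom(e_total, n_al, n_pt, e_al, e_pt):
    """E_f = (E_AlxPty - x*E_Al - y*E_Pt) / (x + y), eV/atom."""
    return (e_total - n_al * e_al - n_pt * e_pt) / (n_al + n_pt)

def lower_hull(points):
    """Lower convex hull of (x, E_f) points (Andrew's monotone chain)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # negative cross product: hull[-1] lies above the chord
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def hull_distance(x, e_f, phase_points):
    """Energy above the hull; pure elements are pinned at E_f = 0."""
    hull = lower_hull(phase_points + [(0.0, 0.0), (1.0, 0.0)])
    xs, ys = zip(*hull)
    return e_f - np.interp(x, xs, ys)

# Hypothetical (Pt fraction, E_f in eV/atom) pairs:
phases = [(1/6, -0.45), (0.2, -0.54), (1/3, -0.90), (0.5, -0.95)]
print(hull_distance(0.25, -0.60, phases))  # > 0 means above the hull
```

A phase sitting slightly above the hull, as found here for Al5Pt, is metastable at 0 K but may still be close enough to compete once entropy is included.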
The Pt Solid Solution
Since we are interested in predicting the precipitation behaviour of Al-rich precipitate phases from a supersaturated solid solution of Pt in the Al matrix, the natural starting point is the determination of the Pt solute formation energy and solubility. To interpret the results, we also examine the nature of the electronic interactions between solute and matrix. The solute formation energy per solute ($E_{f.}^{sol}$) can be calculated by

$E_{f.}^{sol} = E_{\mathrm{Al}_{N-1}X} - \frac{N-1}{N} E_{\mathrm{Al}_N} - E_{X}$,

where $E_{\mathrm{Al}_{N-1}X}$ is the total energy of a supercell containing one solute atom X, $E_{\mathrm{Al}_N}$ is the energy of the pure Al supercell, and $E_X$ is the per-atom energy of bulk X. Here, a more negative value indicates a more stable state of solute X in the Al matrix. We use a 4 × 4 × 4 supercell to avoid any interactions between solute atoms, and N is the number of atoms in the supercell. Because Pt shows certain similarities to Cu and Au in the Al matrix, as mentioned, we compare the solute formation energy of Pt to those of the group 11 elements Cu, Ag and Au (see Table 2). To our knowledge, no previous work exists for Pt. Our calculated results for the other elements compare well with previous DFT calculations [31], and the minor discrepancies may be due to the smaller supercell size used in previous work. We find that Pt has the most negative formation energy among all selected solutes, indicating that the Pt solute should be stable in the Al matrix. However, the solute formation energy is not directly correlated with the experimental solubility limit (see Table 2). This is because ordered phases form beyond the solubility limit, and these phases are easier to form when two elements bind strongly, according to the Hume-Rothery rules [32]. In addition, the formation energy is calculated at 0 K and misses the contribution from the entropy at finite temperatures. Therefore, to evaluate the solubility of Pt in the Al matrix, we calculate its solubility limit ($c_s$) as [33]

$c_s(T) = \exp\!\left(-\Delta F_{exc.} / k_B T\right)$,

where $c_s$ is the solubility limit of Pt in Al in at.% as a function of temperature T, $\Delta F_{exc.}$ is the excess free energy per solute atom of the dissolved state relative to the Helmholtz free energy of the first ordered phase, and $k_B$ is the Boltzmann constant. Since the solubility limit depends on the first ordered phase, we have chosen both Al4Pt and Al5Pt, as they are the two possible first ordered phases in the phase diagram, as discussed in the previous section. Figure 4 shows the calculated solubility limit for Pt in the Al matrix, with and without the entropy contribution. It can be observed that the calculated solubility limit is close to the experimental value at temperatures slightly above the eutectic temperature when considering Al5Pt as the first stable Al-rich phase (see Figure 4) [9]. This also suggests that Al5Pt could be the first ordered phase in the Al-Pt system, as indicated in the previous section. To quantitatively analyse the solute formation energies and the electronic interactions between Pt and Al, we calculate the bonding electron density as

$\rho_{bond} = \rho_{SCF} - \rho_{IAM}$,

where $\rho_{SCF}$ is the electron density determined by a self-consistent calculation after full relaxation, and $\rho_{IAM}$ is the electron density based on the independent atom model. Figure 5 presents the bonding electron density in the (110) plane, since the bonding electrons are located in the tetrahedral holes in Al [34]. We also compare the electron densities along the 3 directions with the largest magnitudes of bonding electron density (see Figure 5 (b)). It is evident that Pt forms the strongest and Ag the weakest bond to Al among the 4 alloying elements, in the same order as their solute formation energies.
Table 2. Solute formation energy ($E_{f.}^{sol}$) and solubility at the eutectic temperature ($c_s$) of Pt, Cu, Au and Ag in the Al matrix. The eutectic temperatures are 930 K, 830 K, 923 K and 840 K for Pt, Cu, Au and Ag, respectively. a From [31]; b, c, d, e obtained from the experimental phase diagrams in [9], [16], [17] and [35], respectively.
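The dilute-limit solubility expression above is simple enough to evaluate directly. The sketch below does so for a few temperatures; the excess free energy used is a hypothetical illustrative number, not a value from this work.

```python
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def solubility_limit(delta_f_exc, temperature):
    """Dilute-limit solubility c_s(T) = exp(-dF_exc / (k_B * T)),
    with dF_exc the excess free energy (eV) per dissolved solute
    relative to the first ordered phase."""
    return np.exp(-delta_f_exc / (KB * temperature))

# Hypothetical excess free energy per solute (eV):
dF = 0.55
for T in (500, 700, 900):
    print(f"T = {T} K  c_s = {100 * solubility_limit(dF, T):.4f} at.%")
```

Note the strong (exponential) temperature dependence: small changes in the excess free energy, such as the vibrational-entropy contribution discussed above, shift the predicted solubility by orders of magnitude at low temperature.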
For the Cu, Au and Ag solutes, our results agree well with previous work. These elements have much higher solute formation energies than Pt in the Al matrix.
Pt Clusters and GP Zone-like Structures in the Al Matrix
Having investigated the thermodynamics of the Pt solute in the Al matrix, the next step is to determine whether clusters or GP zones will form in the early stages of precipitation, as these will affect the subsequent formation of precipitate phases [2]. To this aim, we model clusters and GP zone-like structures of Pt in the Al matrix; we also include the data for Cu solute atoms for a comparison with Pt. The calculated excess energies for all configurations are shown in Figure 6. We find that the energetically most favourable cluster configurations consist of three nearest-neighbour Pt atoms located on (001) or (111) planes (see Figure 6 (a)). These planes are the most common precipitate habit planes in Al alloys [2]. Whereas three coplanar Pt atoms can exhibit a negative excess energy, a single Pt layer in the Al matrix shows positive excess energies on both (001) and (111) planes (0.07 eV and 0.36 eV, respectively). The excess energies are positive for all configurations of GP zone-like structures of Pt (see Figure 6 (b)). Two Pt layers on (001) planes separated by 7 layers of Al are more favourable than the other configurations. In contrast, two Cu layers on (001) separated by 3 layers of Al are more favourable than the other configurations (i.e., identical to the θ′′ phase). When compared with Pt, Cu has negative excess energies in all configurations of clusters (Figure 6 (a)), GP zones, and θ′′ [36]. Based on these results, we conclude that coherent precipitates such as GP zones are unlikely to form in the Al-Pt system.
Precipitate Shape and Crystallographic Relationship with Matrix
Precipitate shape and crystallographic orientation in the matrix have a strong influence on the mechanical properties of the binary alloy: high-aspect-ratio precipitates on low-index planes such as {001} or {111} tend to be most efficient at blocking stress-induced dislocations and thus imparting high strength to the alloy [2]. Such precipitates are usually coherent with the matrix in one or two dimensions [2], which means some of their interfaces or edges match well with the matrix. It is therefore critical to determine the possible crystallographic relationship(s) between precipitates and matrix, which will determine the precipitate shape and interfaces. To do so, we adopt Kelly et al.'s method [37,38], whereby low-index crystallographic planes in Al and in the precipitate structures are compared. The search was limited to the three most densely packed planes of each crystal structure, as they are more likely to result in a good match and hence low-energy interfaces. More specifically, we looked for cases where both the interplanar and the interatomic spacings have less than 10% misfit [37], and consequently a greater likelihood of forming a coherent interface; a minimal sketch of such a misfit screening is given below. Only Al2Pt, Al3Pt and Al5Pt were found to satisfy these matching conditions. Figure 7 shows the matched planes and the corresponding interfaces between Al2Pt and the Al matrix. The best match for Al2Pt in Al is for the following crystallographic relationship: $(002)_{Al_2Pt} \| (002)_{Al}$ and $(220)_{Al_2Pt} \| (200)_{Al}$, leading to one coherent interface, $(002)_{Al_2Pt} \| (002)_{Al}$, and 2 sets of semicoherent interfaces perpendicular to the coherent interface.
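The sketch below illustrates the misfit screening in its simplest form: it compares interplanar spacings allowing for small integer multiples, and flags pairs within the 10% criterion. It is a simplified stand-in for Kelly et al.'s full method (which also screens interatomic spacings); the d-spacings are derived from the lattice parameters quoted in this work, but the code itself is hypothetical.

```python
from itertools import product

def best_rational_misfit(d_p, d_m, max_mult=3):
    """Smallest relative misfit between n*d_p and m*d_m for small n, m."""
    best = None
    for n, m in product(range(1, max_mult + 1), repeat=2):
        f = (n * d_p - m * d_m) / (m * d_m)
        if best is None or abs(f) < abs(best[0]):
            best = (f, n, m)
    return best

# d-spacings in angstrom: Al (a = 4.04 A) and Al2Pt (a ~ 5.94 A)
al = {"(002)": 2.02, "(111)": 2.33, "(200)": 2.02}
al2pt = {"(002)": 2.97, "(220)": 2.10}

for (pl, dp), (ml, dm) in product(al2pt.items(), al.items()):
    f, n, m = best_rational_misfit(dp, dm)
    if abs(f) < 0.10:  # the 10% screening criterion
        print(f"{n}x{pl}_Al2Pt || {m}x{ml}_Al  misfit = {f:+.3f}")
```

Running this toy screen recovers, for example, the small misfit between 2 × (002)_Al2Pt and 3 × (002)_Al, consistent with the orientation relationship stated above.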
The coherent interface is likely to be much larger than the semicoherent interfaces. This is analogous to the case of θ′ (Al2Cu) and η (Al2Au) precipitates in Al [3,39]. Based on the orientation relationship determined above, we constructed supercells representing an Al2Pt precipitate embedded in the Al matrix to examine the stability of the different interfaces using DFT calculations. When the in-plane lattice parameters a and b of Al2Pt embedded in the Al matrix are fixed to $a_{Al}$ = 4.04 Å to model the coherent interface, the lattice parameter $c_{Al_2Pt}$ is found to increase from 5.94 Å in the bulk to 6.27 Å in the Al matrix. As shown in Table 3, the formation energy per atom and the excess energy per solute are both negative, suggesting that a supersaturated solid solution of Pt in Al can be expected to decompose into stable Al2Pt precipitates with the above orientation relationship. Because non-bulk-like interfaces have been observed within the θ′ (Al2Cu) structure as a result of Cu solute segregation, the segregation of Pt into the interfaces of Al2Pt was also considered [40]. Segregation of a Pt atom increases the formation energy by 0.70 eV, indicating that Al2Pt will display a bulk-like interface similar to η (Al2Au) rather than θ′ [40,41]. This is mainly due to the significantly lower solute formation energy of Pt in Al compared to that of Cu. The entropy is not considered when examining interfaces, as the embedded precipitate in our model is small compared to the Al matrix. The matching of the Al3Pt and Al5Pt crystal structures with the Al matrix was analysed with the same method. We found that Al3Pt and Al5Pt precipitates are unlikely to form coherent interfaces because of the positive excess energies (see Supplementary Information 3). In summary, Al2Pt precipitates are likely to have a plate shape because of their 2-dimensional matching with the Al matrix. This is structurally analogous to η (as already mentioned) and to the θ′ phase [3]. Al3Pt and Al5Pt are likely to have equiaxed shapes since no low-energy interface was found, which is also the case for Al4Pt and Al21Pt8 [42].
Table 3. Formation energy ($E_{f.}$) and excess energy ($E_{exc.}$) for Al2Pt precipitates in the Al matrix as calculated by DFT. The negative energy values indicate that Al2Pt precipitates are thermodynamically stable in the orientation relationship indicated in Figure 7.
Interfacial Energies for Al2Pt Precipitate in Al Matrix
Interfacial energy is a key parameter controlling the shape and thermodynamics of precipitation. To calculate the interfacial energy, we first calculate the formation energies of Al2Pt precipitate supercells of different sizes (see Figure 8). The interfacial energies between an Al2Pt precipitate and the Al matrix can then be obtained from the slope of the change in formation energy as a function of supercell size [43,44]. We use the bulk-like interfaces for θ′ (Al2Cu), analogous to those of Al2Pt, for an equivalent comparison. Al2Pt has a significantly smaller interfacial energy of 129 mJ/m² for the coherent interface, but a similar interfacial energy for the semicoherent interfaces, when compared to the θ′ precipitate in the Al matrix. Therefore, Al2Pt precipitates may have a higher equilibrium aspect ratio than θ′ precipitates in the Al matrix.
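The slope-based extraction of the interfacial energy amounts to a simple linear fit, as sketched below. The (area, excess energy) pairs are hypothetical placeholders, and the sketch ignores details such as the number of equivalent interfaces per periodic supercell, which a real extraction would have to account for.

```python
import numpy as np

# Hypothetical (interface area, excess formation energy) pairs for
# embedded-precipitate supercells of increasing size (A^2, eV):
areas = np.array([120.0, 240.0, 360.0, 480.0])
e_form = np.array([1.9, 3.4, 4.9, 6.4])

# interfacial energy = slope of excess energy vs. interface area
slope, intercept = np.polyfit(areas, e_form, 1)
EV_PER_A2_TO_MJ_PER_M2 = 1.602176634e-19 / 1e-20 * 1e3  # eV/A^2 -> mJ/m^2
print(f"interfacial energy ~ {slope * EV_PER_A2_TO_MJ_PER_M2:.0f} mJ/m^2")
```

The intercept absorbs size-independent contributions (e.g., residual strain of the embedded precipitate), which is why the slope rather than the raw energy per area is used.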
Elastic Properties
The strain energy associated with the misfit of all precipitate phases with the Al matrix is the last parameter required as input for our CNT calculations. The elastic properties of all Al-Pt binary phases considered here are calculated based on DFT [45]. Strains within the range of ±1.5%, with an increment of 0.5%, were applied in all effective directions according to the crystal symmetry of each binary phase. Our results, presented in Table 4, are in good agreement with published values [46], but display appreciable differences from the calculations reported in [11], especially for the shear modulus and Young's modulus of Al4Pt and Al2Pt. The validity of our results is supported by the fact that the elastic properties calculated for pure Al are closer to the experimental measurements in our work than in [9].
Thermodynamics of Precipitate Nucleation
Having calculated the interfacial energies and strain energies, we can now predict the energy change during nucleation, which measures the likelihood of precipitation of the various phases in the Al-Pt binary system. We have assumed a plate-like shape for the Al2Pt precipitate and a spherical shape for the remaining binary phases with incoherent interfaces. By assuming homogeneous nucleation directly from a supersaturated solid solution, and the same assumed average interfacial energy ($\bar{\gamma}$ = 2) for all incoherent spherical phases, we calculate the thermodynamics of precipitate nucleation based on the CNT formalism. Figure 9 shows the change in Gibbs free energy for the nucleation of phases in the Al-Pt binary system. Among all phases considered here, Al2Pt has the lowest nucleation energy barrier and critical radius, indicating that it will be the first phase to precipitate, in agreement with experimental observation [19]. Among the three smallest thicknesses of Al2Pt nuclei that we considered, $1c_{Al_2Pt}$ is the most favoured thickness according to our calculations. This is because the large, favourable chemical potential for Al2Pt to nucleate outweighs the unfavourable contribution of misfit strain energy due to a greater thickness (see Supplementary Information 4). Also worth noting are the much smaller critical radius and barrier to nucleation for Al2Pt compared to θ′ (Al2Cu) (see Supplementary Information 4). Again, this is mainly due to the significantly larger chemical potential for Al2Pt to nucleate compared with that of θ′. As displayed in Table 5, the nucleation temperature has a negligible effect on the ranking of the nucleation-related parameters. Based on these calculations, the phases can be ranked in terms of ease of nucleation.
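For the spherical phases, the standard CNT expressions for the critical radius and barrier follow directly from the free energy of a spherical nucleus, $\Delta G(r) = \frac{4}{3}\pi r^3 \Delta g_v + 4\pi r^2 \gamma$, giving $r^* = -2\gamma/\Delta g_v$ and $\Delta G^* = 16\pi\gamma^3/(3\Delta g_v^2)$. The sketch below evaluates these for hypothetical inputs; the values are illustrative only and are not the paper's DFT-derived quantities.

```python
import numpy as np

def cnt_sphere(gamma, dg_chem, dg_elastic):
    """Critical radius (nm) and barrier (eV) for a spherical nucleus.
    gamma in J/m^2; dg_chem (negative, driving) and dg_elastic
    (positive, opposing) in J/m^3."""
    dg_v = dg_chem + dg_elastic
    assert dg_v < 0, "no net driving force: nucleation impossible"
    r_star = -2.0 * gamma / dg_v                          # m
    dG_star = 16.0 * np.pi * gamma**3 / (3.0 * dg_v**2)   # J
    return r_star * 1e9, dG_star / 1.602176634e-19

# Hypothetical candidate phases (gamma, chemical driving force):
for name, gamma, dg in [("phase A", 0.5, -2.0e9), ("phase B", 0.2, -1.0e9)]:
    r, G = cnt_sphere(gamma, dg, 0.1e9)
    print(f"{name}: r* = {r:.2f} nm, dG* = {G:.2f} eV")
```

Because the barrier scales as $\gamma^3/\Delta g_v^2$, a phase combining a low interfacial energy with a large chemical driving force, as found here for Al2Pt, dominates the ranking even when competing phases are closer in composition to the matrix.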
Phases in the Binary Al-Pt System
Intermetallic phases with higher Al content than the precipitate phase (i.e., Al2Pt) are also of interest here because they may form in Al-rich Al-Pt alloys. As reflected by the studies published to date [7-9,12,13,18,19,48], it hitherto remains unclear which Al-Pt phases are stable or metastable, and the full crystal structures of these phases are also not known. For example, the stable phase with the highest Al concentration is believed to be Al21Pt5 [9,12,13], even though its full crystal structure is yet to be determined. In disagreement with this report, a recent experimental work claimed that the equilibrium phase with the highest Al content might be Al4Pt [7,8]. Our work supports this view: Al4Pt is found to have a more negative formation energy per atom (-0.54 eV) than Al21Pt5 (-0.36 eV), and it sits close to the convex hull (see Figure 2). Therefore, our work strongly suggests that Al4Pt is the most likely equilibrium phase with the highest Al content. When exploring the Al-rich part of the Al-Pt phase diagram, we considered the possible phase Al5Pt (see Section 3.1), as it may compete with Al4Pt for precipitation.
A metastable phase with a composition of Al5Pt was reported to precipitate first in rapidly quenched Al-Pt alloys; this phase was determined to have an atomic volume of 17.05 Å³ [18]. As shown in Section 3.1, we proposed a crystal structure for Al5Pt (I4/mcm) and calculated its formation energy, which was found to be negative and slightly above the convex hull, similar to a previous calculation [11]. Our proposed crystal structure results in an atomic volume of 17.09 Å³, which is very close to the experimental value [18]. Therefore, Al5Pt can be considered a plausible candidate for precipitation. A metastable phase of composition Al3Pt has also been suggested as a possible precipitate phase at room temperature [19,20]. More recently, a high-temperature phase of the same composition Al3Pt was reported to be stable [7]. Based on DFT calculations for several possible archetypal structures, we proposed a crystal structure for Al3Pt with space group Pm-3n (see Section 3.1). For Al3Pt in our work, we used the most stable structure (i.e., Pm-3n), because the full crystal structure of the reported high-temperature phase is still unknown [7]. However, the calculated XRD powder pattern for Al3Pt (Pm-3n) (see Supplementary Information 1) does not match the reported XRD pattern of that phase, indicating that the phase characterised in Ref. [7] does not have our proposed structure. Metastable Al6Pt was also reported in quenched Al-Pt samples [19]. The only crystallographic information available is that Al6Pt is isostructural with the Ga6Pt structure, with lattice parameters $a_{Al_6Pt}$ = 15.762 Å, $b_{Al_6Pt}$ = 12.103 Å and $c_{Al_6Pt}$ = 8.318 Å [19]; the detailed atomic positions remain unknown [48]. In summary, our work confirms that the equilibrium phase is Al4Pt rather than Al21Pt5. We also suggest possible crystal structures for Al5Pt and Al3Pt.
Precipitation Sequence in Al-Pt
Several studies have examined precipitation in Al-Pt alloys [7,8,18-20]; however, all but one [19] deal with precipitation from the melt rather than from solid solution. In this section, we focus on solid-state precipitation, which is important for high-strength properties. First, no GP zones have been reported to form in Al-Pt alloys, and our DFT results presented in Section 3.3 also suggest that large clusters and GP zones of Pt are thermodynamically unstable in Al. We predict the Al2Pt precipitate phase to adopt an orientation relationship similar to those of the θ′ and η phases in the Al-Cu and Al-Au binary systems, respectively. This is different from what was proposed in [19] based on electron diffraction methods. We found that the orientation relationship in [19] provides poor matching between DFT-relaxed Al2Pt and the Al matrix, in contrast to our proposed orientation relationship. As shown in Section 3.7, Al2Pt appears to be the strongest candidate to precipitate first from the solid solution. This agrees with early studies of solid-state precipitation in Al-Pt alloys by Chattopadhyay et al. [19,49]. This may seem surprising at first, because Al2Pt is in competition with as many as four phases of much greater Al content, namely Al6Pt, Al5Pt, Al4Pt and Al3Pt. These phases have been observed after rapidly quenching Al-(2-3) at.% Pt alloys from the melt [18-20]. Indeed, the Al6Pt phase was reported to precipitate from the solid solution, but after Al2Pt [19]. Given that the full crystal structure of Al6Pt is still not known, it is difficult to accurately calculate the thermodynamic parameters for its nucleation, such as the nucleation energy barrier and critical radius.
Nevertheless, using crude approximations, we find that Al6Pt is likely not to precipitate prior to Al2Pt (see Supplementary Information 5), in agreement with Ref. [19]. Regarding the Al5Pt and Al3Pt phases, they contain lattice planes that match the Al matrix planes well. However, the interfaces that were considered based on matching these planes were found to be unstable by DFT. Although no thermodynamically stable interfaces were found in this work, a more systematic search may reveal low-energy interfaces; in that case, Al5Pt and Al3Pt might form prior to Al2Pt (see details in Supplementary Information 6). However, this situation would contradict the experimental evidence available to date [18,20].
Effect of Bonding Electron Density on Solute Formation Energy
Our DFT calculations indicate that Pt has a significantly negative solute formation energy in Al. The large bonding electron density between Al and Pt, as shown in Figure 5, accounts for this strong stabilisation of the Pt solute in the Al matrix.
Workflow to Predict Solid-State Precipitation in an Arbitrary Binary Alloy System
Finally, to predict solid-state precipitation in an arbitrary binary alloy system M-X, where M is the matrix element and X is the solute, we propose the following systematic workflow. The first step is to identify all stable and metastable intermetallic phases known between the two chemical elements, based on the equilibrium phase diagram and databases of crystal structures [6]. A precipitate phase is likely to be structurally related to known bulk intermetallic phases, but not always, as is the case for θ′ (Al2Cu). The full crystal structure of each phase must be known in order to carry out thermodynamics calculations using DFT methods. Because some phases are only stable in a certain temperature range, we need to consider the temperature effect on their stability by estimating the entropy contribution. A major challenge here is that some intermetallic phases are either unknown or have not had their crystal structure solved. In this case, one approach is to adopt crystal structures of phases in similar systems, such as systems between elements in the same groups of the periodic table. The second step is to determine the solute behaviour of X in M; this allows us to determine properties such as solubility, segregation energy and chemical potential, which underpin the subsequent steps. The third step is to assess whether solute clusters or GP zone-like structures can form in the matrix, as these early configurations affect the later stages of precipitation (cf. Section 3.3). The fourth step is predicting the shape and interfacial structure of the precipitate phases, which is a great challenge. This requires the orientation relationship(s) between a given precipitate phase and the matrix to be determined. One approach is to match the crystal structure of the precipitate phase with that of the matrix [37,38]. Based on the matched planes, we can then propose possible low-energy interfaces, calculate their interfacial energies and misfit strain energies, and suggest the possible shape of each precipitate phase. Finally, the last step is to predict the nucleation barrier and critical size of each precipitate phase, thus allowing the precipitation sequence to be proposed. One approach, as used here, involves CNT calculations using DFT-calculated quantities. Our CNT calculations are rather crude, with many assumptions, especially for the interfacial energies of incoherent precipitates. However, they allow a qualitatively convincing investigation of precipitation in the unexplored Al-Pt alloy system.
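The five-step workflow can be expressed compactly as a schematic pipeline, as sketched below. Every function body here is a toy placeholder for the corresponding DFT/CNT machinery (names, numbers and the final figure of merit are all hypothetical); only the structure of the workflow is the point.

```python
def step1_phases():
    """Known/proposed intermetallics with formation energies (eV/atom)."""
    return {"M2X": -0.9, "M4X": -0.5, "M5X": -0.45}

def step2_solute():
    """Solute behaviour of X in M: formation energy, solubility, ..."""
    return {"E_f_sol": -1.0, "c_s_900K": 0.002}

def step3_clusters():
    """Excess energies of clusters / GP-zone-like layers; positive
    values mean such intermediates are unlikely to form."""
    return {"GP_zone": +0.10, "trimer_(001)": -0.02}

def step4_interfaces(phases):
    """Interfacial energies (J/m^2); only a phase with a matched
    low-index plane is assigned a low coherent value."""
    return {p: (0.13 if p == "M2X" else 0.50) for p in phases}

def step5_rank(phases, gammas):
    """Rank by a crude CNT figure of merit ~ gamma^3 / (driving force)^2."""
    return sorted(phases, key=lambda p: gammas[p] ** 3 / phases[p] ** 2)

phases = step1_phases()
print("solute:", step2_solute(), " early stages:", step3_clusters())
print("predicted sequence:", step5_rank(phases, step4_interfaces(phases)))
```

In this toy ranking, the phase with the matched plane and low interfacial energy nucleates first despite not being the most Al-rich candidate, mirroring the Al2Pt result of this work.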
Conclusion

In this work, we propose a workflow to predict the solid-state precipitation behaviour in the binary Al-Pt system, based on a combination of atomistic first-principles methods using density-functional theory and classical nucleation theory calculations. The bulk intermetallic phases Al4Pt, Al21Pt8, Al2Pt, Al3Pt2, AlPt (α & β), Al3Pt5, AlPt2 (α & β) and AlPt3 (α & β) are confirmed to be thermodynamically stable. Our calculations also support the possible existence of Al5Pt and Al3Pt. A significantly negative solute formation energy of Pt in Al is found. This results in a high energy barrier for Pt to form clusters, GP zones, and non-bulk interfaces. When considering precipitate phases, our calculations strongly suggest that Al2Pt will adopt a similar orientation relationship and shape to the θ′ (Al2Cu) phase, and will precipitate first. Similarly to θ′, Al2Pt precipitates are expected to exhibit plate-like shapes with high aspect ratios. Our results are in partial agreement with previous inconclusive experimental work [19,49], and motivate further experimental validation through a detailed microscopic characterisation of solid-state precipitation in Al-Pt alloys. Our work constitutes an initial step towards predicting solid-state precipitation in an arbitrary Al alloy system and, ultimately, towards the discovery of high-performance Al alloys.

This research was funded by the Australian Government through an Australian Research Council Discovery Project grant (DP210101451). Computational work was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), the Pawsey Supercomputing Centre, and the MonARCH HPC cluster.

S1. Details of Classical Nucleation Theory (CNT) Calculations

To predict the precipitation sequences, the energy change during nucleation of different precipitate phases is calculated according to classical nucleation theory (CNT) as

$\Delta G = V\left(\Delta g_{chem} + \Delta g_{el}\right) + \gamma A$,

where $V$ is the volume of the nucleus, $\Delta g_{chem}$ is the chemical energy per unit volume, $\Delta g_{el}$ is the elastic strain energy per unit volume, $\gamma$ is the interfacial energy, and $A$ is the area of the interfaces between the precipitate phase and matrix. $\Delta g_{chem}$ for a precipitate phase can be estimated from its excess Helmholtz free energy per atom, $\Delta f_{exc}$, at a certain temperature $T$:

$\Delta g_{chem} = \Delta f_{exc}(T)/\Omega$,

where $\Omega$ is the atomic volume and $\Delta f_{exc}(T)$ is the excess energy per atom calculated at temperature $T$. For plate-like Al2Pt, the elastic energy can be calculated based on Christian's approximation [1,2]:

$\Delta g_{el} = \dfrac{\mu(1+\nu)}{1-\nu}\,\varepsilon_{33}^{2}$,

where $\mu$ is the shear modulus, $\nu$ is the Poisson's ratio, and $\varepsilon_{33}$ is the tensile strain normal to the habit planes. Since Al and Al2Pt have very similar shear moduli, $\mu$ is assumed to be the same for matrix and precipitates (25 GPa). For near-spherical precipitates, the elastic energy of a misfitting inclusion is estimated as

$\Delta g_{el} = \dfrac{18\,\mu K}{3K + 4\mu}\,\varepsilon^{2}$, with $\mu = E/[2(1+\nu)]$,

where $E$ is the Young's modulus of the matrix, $K$ is the bulk modulus of the precipitates, and $\varepsilon$ is the linear misfit strain. Assuming that at least one unit cell of the precipitate phase needs to nucleate to represent that phase, a minimum radius can be calculated as

$r_{min} = \tfrac{n}{2}\max(a,b,c)$,

where $\max(a,b,c)$ is the largest lattice parameter of the precipitate and $n$ is an integer starting from 1. The required number of Pt atoms is also calculated in terms of the critical size of the nucleus,

$N_{Pt} = V^{*}\rho_{Pt}$,

where $V^{*}$ is the critical volume of the precipitate calculated from the critical radius, and $\rho_{Pt}$ is the number of Pt atoms per unit volume in the corresponding intermetallic phase.

S2. Crystallographic Information and Thermodynamics of Metastable Phases

Here we investigate the phases Al5Pt, Al21Pt5 and Al3Pt, whose crystal structures are not fully known.
For these phases, crystal structures are proposed based on intermetallic phases between elements in the same groups as Al and Pt. Table S1 lists the calculated formation energies of the different phases. The crystal structures of these phases are adopted from intermetallic phases between elements in the same groups as Al and Pt, based on the Materials Project database [4]. Amongst the two possibilities, the I4/mcm structure is found to be the most stable one for Al5Pt and is selected for further calculations. For the Al3Pt phase, the Pm-3n structure is found to have the lowest energy. Because Al3Pt (Pm-3n) is stabilised by entropy at high temperatures, entropy contributions are considered for all possible crystal structures of Al3Pt; see Figure S1. It can be observed that the Pm-3n structure remains the most stable one. A high-temperature phase of a composition close to Al3Pt was reported to be stable above 801°C [5]. Our newly proposed Al3Pt (Pm-3n) phase is also stable at high temperatures. Therefore, we calculate the XRD powder patterns of all possible structures of Al3Pt. As shown in Figure S2, compared with the experimental measurements [5], Al3Pt (Pnma) shows a pattern broadly similar to that of this high-temperature phase. However, small peaks between 30° and 40° suggest that Al3Pt (Pnma) has a different crystal structure. Al3Pt (Pm-3n) also shows distinct peaks compared with the measured pattern. Therefore, the reported high-temperature phase may differ from any possible structure of Al3Pt considered in this work, or it could possibly contain more than one phase. Because the Pm-3n structure is the most stable one for Al3Pt, and the full crystal structure of the high-temperature phase has not been solved [5], Al3Pt (Pm-3n) is used for further calculations in this work.

Table S1. Crystal structures and formation energies per atom of possible metastable phases in the Al-Pt system.

Figure S1. Change in free energy of formation per atom from 0 K to 1000 K for Al3Pt with different crystal structures. The Pm-3n structure remains the most stable one for Al3Pt.

Figure S2. Calculated XRD powder patterns of Al3Pt (Pnma) and Al3Pt (Pm-3n) based on their DFT-relaxed crystal structures.

S3. Interface Matching of Al3Pt and Al5Pt

We adopted Kelly et al.'s method [6,7] to determine the interfacial structures between a given precipitate phase (Al3Pt or Al5Pt) and the Al matrix. Both Al3Pt (Pm-3n) and Al5Pt (I4/mcm) possess certain planes that can match the Al matrix (see Figures S3 and S4; Tables S2 and S3). However, no stable coherent interface was found for these phases, so a spherical shape is assumed for Al3Pt and Al5Pt.

S4. Comparing the nucleation behaviour of Al2Pt to θ′ (Al2Cu)

Because Al2Pt shows certain similarities to θ′ (Al2Cu), we compared their nucleation behaviour. In Figure S5, the dark blue and light blue curves are references showing the energy change during nucleation of θ′ and Al2Pt, respectively. The Al2Pt phase has a much smaller nucleation energy barrier and critical radius compared with θ′. The medium blue curves indicate the contribution of individual parameters. The large difference in chemical potential is the major effect on nucleation, and the interfacial energy also has a noticeable effect. In contrast, the influence of the misfit strain is negligible. The thickness of Al2Pt affects precipitation as it leads to different strains: ε₃₃ = −0.25, +0.15 and +0.03 for thicknesses of 1, 1.5 and 2 half unit cells (c/2), respectively. A thickness of two half unit cells (i.e. one full unit cell) is most favourable for θ′.
However, for Al2Pt a thickness of c/2 is more favourable than 1.5c/2 or 2c/2, as it gives the lowest nucleation energy barrier and critical number of Pt atoms. This is because the large chemical driving force for Al2Pt nucleation outweighs the influence of the elastic strain energy.

Figure S5. Determining the influence of CNT parameters on the nucleation energy barrier and critical radius at 500 K. Dark and light blue represent the references of θ′ (Al2Cu) and Al2Pt. In (a), (b), and (c), a single parameter (chemical potential, interfacial energy, strain) of θ′ is changed to the corresponding value of Al2Pt.

S5. Comparing the nucleation behaviour of Al2Pt to Al6Pt

The Al6Pt phase was reported to precipitate from the solid solution after Al2Pt [8]. Because the full crystal structure of Al6Pt is still unknown, we use crude approximations to estimate its nucleation behaviour. Assuming Al6Pt has the same formation energy per solute atom as Al2Pt (i.e., Al6Pt sits on the convex hull), an interfacial energy of 296 mJ/m² is required for Al6Pt to have the same nucleation energy barrier as Al2Pt (see Figure S6). This is a small interfacial energy, considering that the semicoherent interfacial energies of Al2Pt and θ′ (Al2Cu) are around 500 mJ/m². Because no stable phase with the Al6Pt stoichiometry has been reported, and no interface with such a low interfacial energy has been identified, Al6Pt is unlikely to precipitate prior to Al2Pt.

S6. Comparing the nucleation behaviour of Al2Pt to Al5Pt and Al3Pt

For the phases Al5Pt and Al3Pt, which contain lattice planes that match Al matrix planes well, possible, yet unrevealed, low-energy interfaces may facilitate their precipitation. Following a similar process to that in Sec. S5, interfacial energies of 302 mJ/m² and 196 mJ/m² are required for Al5Pt and Al3Pt, respectively, to obtain the same nucleation energy barrier as Al2Pt (see Figure S7). These interfacial energies are plausible considering their good interface matching. A noticeable obstacle is the larger critical radius of Al5Pt and Al3Pt compared with that of Al2Pt, which may impede their precipitation from the solid solution.

Figure S7. Comparing the energy change of a spherical nucleus of Al3Pt and Al5Pt at 500 K with different interfacial energies against that of Al2Pt.
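The barrier-matching arguments in S5 and S6 amount to inverting the spherical CNT barrier: because ΔG* scales as γ³/Δg_v², the interfacial energy that would give a competing phase the same barrier as Al2Pt follows in closed form. The sketch below illustrates that inversion under stated assumptions; the numerical inputs are placeholders, not the quantities computed in this work.

```python
def gamma_for_equal_barrier(gamma_ref, dg_ref, dg_other):
    """Interfacial energy giving the same spherical-nucleus barrier.

    From dG* = 16*pi*gamma^3 / (3*dg_v^2), equal barriers imply
    gamma_other = gamma_ref * (dg_other / dg_ref)**(2/3).
    Driving forces dg_* are in J/m^3 (both negative); gamma in J/m^2.
    """
    return gamma_ref * (dg_other / dg_ref) ** (2.0 / 3.0)

# Placeholder numbers: Al2Pt as the reference with gamma ~ 0.5 J/m^2, and a
# competing phase with a weaker (hypothetical) volumetric driving force.
g = gamma_for_equal_barrier(gamma_ref=0.5, dg_ref=-3.0e8, dg_other=-1.2e8)
print(f"required interfacial energy ~ {g * 1e3:.0f} mJ/m^2")
```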
2023-02-22T06:42:41.553Z
2023-02-21T00:00:00.000
{ "year": 2023, "sha1": "b8f0e130a40315496571961b28885fcc6baeb2da", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b8f0e130a40315496571961b28885fcc6baeb2da", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
236997444
pes2o/s2orc
v3-fos-license
Immune Checkpoint Inhibitors in Special Populations

Cancer is the second leading cause of death worldwide. With the growing burden of cancer, studies on the early diagnosis, treatment and prevention of cancer are rapidly increasing. Recently, many new therapeutic strategies have been discovered, among which immunotherapy has dramatically changed the outlook for cancer treatment. Several clinical trials are underway around the world to produce potential treatments. However, these trials set strict eligibility criteria, so their clinical data cannot be fully applied in the real world. To help clinical oncologists with treatment decision-making, this review collected recent studies on special populations receiving immunotherapy, including organ transplant patients, pregnant women, pediatric patients, patients with pulmonary tuberculosis, patients with human immunodeficiency virus, and patients with autoimmune diseases and mental illness.

Introduction

In China, the incidence rate of cancer in men and women was 301.67 per 100 000 and 253.29 per 100 000, respectively, and the mortality rate of cancer in men and women was 207.24 per 100 000 and 126.54 per 100 000, respectively. 1 According to the World Health Organization (WHO), cancer caused 9.6 million deaths globally in 2018. 2 Immunotherapy is revolutionizing the treatment of cancer. It has increased the overall survival (OS) and progression-free survival (PFS) of many types of cancers, such as melanoma, [3][4][5] advanced non-small cell lung cancer (NSCLC), [6][7][8] renal cell carcinoma, 9 and Hodgkin lymphoma. 10,11 The targets of immune checkpoint inhibitors (ICIs) include programmed cell death 1 (PD-1) and cytotoxic T lymphocyte antigen 4 (CTLA-4) on T-cells, and programmed cell death ligand 1 (PD-L1) on tumor cells. These ICIs exert anti-tumor effects by activating T-cells. However, they can also cause immune-related adverse events (irAEs) by changing the immune environment, such as checkpoint inhibitor pneumonitis, immune-related thyroiditis, hepatitis, myocarditis, enteritis and diarrhea, fatigue, itching, rash, and endocrine disorders. [12][13][14] IrAEs most commonly occur in the skin, lung, gut, and endocrine system. 12 The incidence of irAEs is 26.82% in patients treated with anti-PD-1/PD-L1 inhibitors. 15 Although some patients in clinical trials stop treatment because of irAEs, most develop minimal symptoms during treatment and can still lead a high-quality life. Because of concern about potential side effects and compromised efficacy, patients with organ transplants, tuberculosis, HIV, preexisting autoimmune diseases, or mental illness have been excluded from prospective randomized trials. At the same time, the majority of current immunotherapy studies are focused on non-pregnant adults; the pediatric and obstetric populations need more attention. Many oncologists cannot provide precise treatment plans when facing trial-ineligible patients. Fortunately, several studies have evaluated the safety and efficacy of immunotherapy in special populations, including organ transplant patients, pregnant women, pediatric patients, patients with pulmonary tuberculosis (PTB), patients with human immunodeficiency virus (HIV), and patients with autoimmune diseases and mental illness.
Although the mechanisms of these diseases are all associated with the immune system, there are great differences in clinical practice, such as treatment, risk of irAEs, and outcomes. Therefore, we analyzed these special groups separately. We hope this review can help oncologists in their clinical work.

Transplant

Solid organ transplantation (SOT) or hematopoietic stem cell transplantation is not rare in cancer patients, and cancer is the second leading cause of death in all SOT recipients, indicating a substantial cancer burden in this population. 16 The increasing use of ICIs makes it possible to study the safety and efficacy of these inhibitors in transplant patients. After transplantation, allograft rejection and graft-versus-host disease (GVHD) can usually be prevented with intense maintenance immunosuppression. 17 More interestingly, clinical studies have shown that PD-1 or PD-L1 expression is associated with allograft tolerance, 18,19 and PD-1 gene polymorphism contributes to the reduction of allograft failure. 20 Whether ICIs will break this immune tolerance and cause severe post-transplant complications therefore remains under discussion. A review of the existing literature in PubMed showed that patients treated with ICIs had different clinical responses. According to Abdel-Wahab et al, 21 among 39 cancer patients who underwent solid organ transplantation (59% with prior renal transplantation [n = 23], 28% with hepatic transplantation [n = 11], and 13% with cardiac transplantation [n = 5]), 16 patients (41%) developed allograft rejection after ICI therapy (renal transplantation rejection n = 11, 48%; hepatic transplantation rejection n = 4, 36%; and cardiac transplantation rejection n = 1, 20%). In total, 8 patients (21%) developed irAEs, and adverse reactions were observed in those without allograft rejection. The median OS was 12 months (95% CI 8-16 months) in patients without allograft rejection, and 5 months (95% CI 1-9 months) in those with rejection (P = 0.03). Similar conclusions were reported by De Bruyn et al, 22 who found that among 48 advanced cancer patients who received ICI treatment, there were 19 liver transplant recipients and 29 renal transplant recipients. Rejection was observed in patients receiving liver (37%) and kidney transplants (45%). These results revealed that transplant recipients are at a high risk of allograft rejection when treated with ICIs. Chae et al 23 hypothesized, based on their extensive literature study, that CTLA-4 inhibitors are safer than PD-1 inhibitors in certain solid organ transplant recipients. Several other clinical reports of cancer patients treated with ICIs after solid organ transplantation showed similar results. 24,25 Some patients could tolerate ICI therapy, while others encountered severe post-transplant complications. The PD-1/PD-L1 axis might play a critical role in allograft rejection. It has been shown 26 that PD-L1 in the donor tissue can interact with the PD-1 receptor expressed on the recipient's alloreactive T cells, thus down-regulating the recipient's alloreactive T cell responses and limiting rejection. PD-1/PD-L1 inhibitors could disturb this balance of the immune microenvironment, leading to allograft rejection in SOT patients treated with ICIs. In murine models, MEK inhibition and the BTK inhibitor ibrutinib delayed GVHD progression and improved survival. 23
Combination therapy of ICIs with MEK or BTK inhibitors could therefore reduce the transplantation failure rate in SOT cancer patients. 23

Pregnancy

The incidence rate of cancer during pregnancy is 16.9 per 100,000 live births and 24.5 per 100,000 births. 27 If cancer occurs during pregnancy, both the mother and the embryo are at greater risk of death. It is important to weigh maternal and fetal benefits to prolong survival and reduce teratogenicity. Recent studies have reported that the majority of pregnant women are already at advanced stages when they are diagnosed. In patients with positive driver genes, targeted therapy might be considered a good choice. 28,29 Immunotherapy can be assumed to be the next treatment option for pregnant women with negative driver genes. Flint et al 30 analyzed the feasibility of ICIs for pregnant women by comparing the immunological similarities and differences between pregnancy and cancer. Maternal-fetal immune tolerance involves complex mechanisms that might share the same pathways as the immune checkpoints blocked in cancer therapy. 30,31 It has been demonstrated [32][33][34] that blockade of PD-L1 can reduce the allogeneic fetal survival rate, and that CTLA-4 on Treg cells may play a role in the maintenance of pregnancy by inducing the enzyme indoleamine 2,3-dioxygenase in dendritic cells and monocytes. Therefore, some worry that immunotherapy may destroy maternal tolerance to the fetus by blocking these immune checkpoints, and that pregnant women therefore cannot receive immunotherapy. In contrast, 2 case reports set forth the possibility of applying immunotherapy in pregnant women. The first was a case of metastatic melanoma at 7 weeks of pregnancy; the patient received nivolumab plus ipilimumab and successfully delivered a healthy baby. 31 Menzer et al reported a similar case of metastatic melanoma at 18 weeks of gestation. The patient was treated with nivolumab plus ipilimumab, but her condition slowly deteriorated and she died of the underlying disease the day before delivery. Fortunately, a premature female baby was born with no deformities or intrauterine growth retardation. 35 These reports suggest that certain patients can benefit from the use of ICIs. Multi-center trials are difficult to conduct due to ethical challenges and differing cultures and laws. For doctors, it is important to balance the benefits and risks and to make decisions in a multidisciplinary setting.

Pediatrics

In developing countries, cancer is the leading disease-related cause of death in children and adolescents. 36 Treatment of cancer in pediatrics differs significantly from that in adults. As reported by Ward et al, 37 the most common types of cancer in childhood included acute lymphoblastic leukemia (ALL) (26%), brain and central nervous system (CNS) tumors (21%), neuroblastoma (7%), and non-Hodgkin lymphoma (NHL) (6%), whereas the most common cancers in adolescence were Hodgkin lymphoma (HL) (15%), thyroid carcinoma (11%), brain and central nervous system tumors (10%), and testicular germ cell tumors (8%). The principles behind pediatric cancer treatment are similar to those for adults, but specific drug approvals for children are lacking. Traditional therapies for pediatric cancer include surgery, chemotherapy and radiation therapy. Compared with adult cancer, immunotherapies have not demonstrated significant activity in the front-line treatment of pediatric cancer.
However, for many patients with refractory and recurrent tumors, immunotherapy has become a viable therapeutic option. 38 Recent studies have reviewed immunotherapy development for pediatric cancer. Immunotherapies include monoclonal antibodies (mAbs), checkpoint inhibitors, bispecific T-cell engagers (BiTEs), and chimeric antigen receptor T cells (CAR-Ts), which may offer a chance to treat children with resistant or recurrent cancer. 38 Checkpoint inhibitors such as anti-PD-1 or anti-CTLA-4 agents have a safety profile in children similar to that in adults, but the response rate in pediatric solid cancers is far lower than in adults. Recent reports [39][40][41][42][43][44] applying immunotherapy in pediatric cancer were collected (Table 1); only a small proportion of patients achieved an objective response (5.9% [95% CI 2.6-11.3]), and adverse reactions were shown to be tolerable. The results of a phase I study (NCT01445379) in pediatric patients with melanoma and other solid tumors who received CTLA-4 blockade therapy demonstrated good tolerance to anti-CTLA-4 therapy, but there were no objective responses. 42 However, 2 cases treated with mAbs showed clear efficacy and safety of immunotherapy in recurrent and refractory pediatric cancer. Pinto et al 45 demonstrated that the levels of PD-1, PD-L1, and PD-L2 are low in pediatric solid tumors. The poor response of pediatric cancer patients to PD-1/PD-L1 inhibitors could be associated with this low expression of PD-1/PD-L1. Along the same lines, Majzner et al 46 also suggested that tumors with low immunogenicity are less likely to respond to single-agent checkpoint inhibition. Clinical data on the tolerability of ICIs and mAbs for treating pediatric cancer remain limited. Compared with chemotherapy and radiotherapy, which can cause neurological dysfunction, skeletal deformities and short stature, immunotherapy is associated with fewer long-term toxicities and is more conducive to healthy growth in children. 38,[47][48][49] Immunotherapy in pediatric cancer is still in the exploratory stage. By identifying optimal targets and accurate biomarkers, we believe that immunotherapy will revolutionize the treatment of pediatric cancer and increase the survival and quality of life of pediatric cancer patients.

Tuberculosis

According to the WHO, more than 10 million people fell ill with tuberculosis (TB) globally in 2018. 50 According to Japanese data from the last 20 years, the incidence of active pulmonary tuberculosis in lung cancer patients was 1.9%. 51 Cheon et al 52 reported that, compared with other cancers, patients with esophageal cancer, multiple myeloma, lung cancer, pancreatic cancer, leukemia, head and neck cancer, and lymphoma were more susceptible to the development of TB. Cheng et al 53 reported that hematologic cancer patients had the highest rate of active tuberculosis. Dobler et al 54 also reported that the relative risk of TB in adults with hematologic cancer was higher than in adults with solid cancers (IRR 3.53, 95% CI 1.63-7.64 vs IRR 2.25, 95% CI 1.96-2.58). In the past few years, some cases of acute tuberculosis have been reported in cancer patients treated with nivolumab or other PD-1/PD-L1 inhibitors (Table 2). [55][56][57][58][59][60][61][62][63][64][65] At the same time, one patient with advanced pulmonary adenocarcinoma developed tuberculous pericarditis after nivolumab treatment. 60 At present, there are no large clinical trials providing accurate data on the incidence of TB reactivation after immunotherapy.
A review of the literature revealed 2 hypotheses about the mechanism of TB activation. First, blockade of the PD-1/PD-L1 pathway might result in the proliferation of T cells, which in turn could produce interferon-γ (IFN-γ) against Mycobacterium tuberculosis (Mtb). 66 This reaction might be similar to that in patients with coexisting HIV and TB receiving antiretroviral treatment, who develop TB rapidly because of restoration of the anti-TB-specific immune response by a rapid increase of CD4+ T cells. 66,67 Second, activation of pulmonary tuberculosis causes diffuse lymphocyte infiltration. 60,66 These hypotheses still require clarification. In summary, it is important to pay attention to potential Mtb infection in patients and to screen for latent TB clinically. For PTB patients undergoing ICI treatment, there is no established timing for safely applying immunotherapy. Anastasopoulou et al 58 suggested that ICI therapy should be paused until PTB is controlled, because of the potential exaggeration of inflammatory responses caused by immunotherapy. A 2-week interval might also be appropriate between anti-tuberculosis treatment and immunotherapy. 58 If anti-tuberculosis treatment and immunotherapy start simultaneously, attention should be paid to their overlapping toxicities, especially liver dysfunction. 58

Autoimmune Disease

About 11.3% of patients with advanced cancer have a personal history of preexisting autoimmune disease. 68 Previous studies have shown that PD-1/PD-L1 and CTLA-4 are associated with the development of autoimmune diseases. Nishimura et al 69 demonstrated that PD-1 receptor-deficient mice may develop immune-mediated cardiomyopathy, and Klocke et al 70 showed that CTLA-4-deficient mice suffered from various autoimmune diseases. CTLA-4 gene polymorphisms are linked with several autoimmune diseases. It has been suggested 79 that the use of more than 10 mg/day of steroids at the start of ICI therapy is associated with inferior clinical efficacy. Dr. Cornec 78 also suggested that, for cancer patients with stable preexisting autoimmune diseases (PADs), declining immunosuppressive treatment during the initiation of ICIs did not reduce the efficacy of cancer treatment. The safety of ICIs in patients with severe autoimmune disease is still unknown, and high doses of steroids might reduce the efficacy of ICIs. Collaboration between a PAD specialist and the oncologist is therefore very important when facing these patients.

HIV

The risk of cancer is 69% higher in people infected with HIV compared with the healthy population. 80 However, HIV-infected cancer patients have usually been excluded from clinical trials. In the past few years, several clinical trials have evaluated the safety and efficacy of immunotherapy in cancer patients with HIV infection. Uldrick et al 81 enrolled 30 patients with Kaposi sarcoma (KS) (n = 6), NHL (n = 5), and non-AIDS-defining cancers (n = 19), all treated with pembrolizumab. The primary objective was to assess the safety of pembrolizumab in cancer patients with HIV who were on antiretroviral therapy (ART). Grade 1-2 irAEs were observed in 22 patients (73%), and grade 3 irAEs were observed in 6 patients (20%). HIV was controlled in all participants. With regard to tumor responses, a complete response (CR) was seen in 1 patient, partial response (PR) in 2 patients, stable disease (SD) in 17 patients, and progressive disease (PD) in 8 patients, and 2 patients were not evaluable (NCT02595866).
Ostios-Garcia et al 82 enrolled 7 lung cancer patients with HIV infection who were treated with nivolumab (n = 2) or pembrolizumab (n = 5). All of these patients received ART during immunotherapy. Tumor responses included PR (n = 3), SD (n = 2), and PD (n = 2). Only 4 patients had grade 1-2 irAEs. Guaitoli et al 83 summarized the clinical efficacy of immunotherapy in 28 HIV-infected cancer patients, which revealed that immunotherapy in HIV-infected patients was as effective as in the general population, with safety and toxicity similar to those in general cancer patients. In summary, these results suggest that, barring specific contraindications, HIV-infected cancer patients receiving ART can be treated with immunotherapy similarly to general cancer patients.

Mental Illness

According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, 84 Alzheimer's disease (AD), depression, bipolar disorder and anxiety disorder are all classified as mental illnesses. Several recent studies have demonstrated an association between the immune system and mental illness. It has been reported 85 that CNS-specific T cells can promote hippocampal neurogenesis, spatial learning and memory through microglial activation. This could partially explain age-related and HIV-related cognitive impairment, because these patients show various degrees of decline in immune system function. Rosenzweig et al 86 successfully mitigated cognitive deficits and reduced brain pathology in the 5XFAD AD mouse model through blockade of the PD-1/PD-L1 axis. This result suggests that ICIs might have an excellent clinical application in AD patients. When faced with health threats, emotional distress such as depression and anxiety is commonly observed in cancer patients. It has been reported 87 that the incidence rate of depression in cancer patients varies from 1% to above 50%, depending on the cancer type, stage, treatment, and the depression rating scale used. Depression and anxiety are both considered immune-mediated inflammatory conditions and have been extensively investigated from the perspective of chemokines, cytokines, and immune cell numbers. [88][89][90][91] Fundamental research has not yet fully explained the relationship between mental illness and the immune system. No clinical trials or cases have evaluated the efficacy and safety of immunotherapy in patients with mental illness. We hypothesize that persons who suffer from cancer and a mental illness such as AD could benefit more from ICI therapy, and this will require further research.

Conclusions

With the rapid expansion of ICI treatment in special populations, it is important to clearly understand its safety and efficacy in trial-ineligible populations. SOT patients receiving immunotherapy are at risk of allograft rejection. There are not enough data on the efficacy and safety of immunotherapy in pregnant women with cancer. In the limited reports available, there was no evidence that immunotherapy is associated with a risk of fetal malformation. 92 We advise the use of CTLA-4 and/or PD-1 inhibitors during pregnancy only if the benefit to the mother is so great that it outweighs the substantial theoretical risks to the fetus. Patients with Mtb infection exhibit a potential risk of developing acute PTB when treated with ICIs. Before immunotherapy, a TB screen is important. Pre-existing autoimmune disorders are not an absolute contraindication to ICI therapy.
However, patients with life-threatening autoimmune diseases or myasthenia gravis may not be good candidates for ICI therapy. 93 In HIV-infected cancer patients on ART, although viral load and CD4+ T cell numbers during treatment are heterogeneous, immunotherapy efficacy and safety are similar to those in general cancer patients. We therefore consider HIV not to be a contraindication to treatment. Cancer patients with a mental illness such as AD may be potential beneficiaries of immunotherapy. We collected ongoing clinical trials on the application of ICIs in special populations (Table 3). Systematic studies and multicenter clinical trials are warranted to facilitate the acquisition of more useful data, which could guide drug application in special populations. Finally, clinicians can refer to these results to provide patients with a suitable plan by balancing potential benefits and toxicity risks. At the same time, multidisciplinary consultation is also needed when making treatment decisions.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by the Zhejiang Province Medical Science and Technology Project (No. 2020ZH001).
2021-08-14T06:16:56.530Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5b92eaa7ed9c45447b7503a38ab7d498802eda12", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/15330338211036526", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0380bfd7f79721b5f9492351beb7f007ebbdbc09", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
248451630
pes2o/s2orc
v3-fos-license
Inter-annual and inter-species tree growth explained by phenology of xylogenesis

Summary

Wood formation determines major long-term carbon (C) accumulation in trees and therefore provides a crucial ecosystem service in mitigating climate change. Nevertheless, we lack understanding of how species with contrasting wood anatomical types differ with respect to phenology and environmental controls on wood formation. In this study, we investigated the seasonality and rates of radial growth and their relationships with climatic factors, and the seasonal variations of stem nonstructural carbohydrates (NSC), in three species with contrasting wood anatomical types (red oak: ring-porous; red maple: diffuse-porous; white pine: coniferous) in a temperate mixed forest during 2017-2019. We found that the high ring width variability observed in both red oak and red maple was caused more by changes in growth duration than growth rate. Seasonal radial growth patterns did not vary following transient environmental factors for any of the three species. Both angiosperm species showed higher concentrations and lower inter-annual fluctuations of NSC than the coniferous species. Inter-annual variability of ring width varied among species with contrasting wood anatomical types. Due to the high dependence of annual ring width on growth duration, our study highlights the critical importance of xylem formation phenology for understanding and modelling the dynamics of wood formation.

Introduction

Wood formation determines the long-term carbon (C) accumulation in trees and therefore provides a crucial ecosystem service in mitigating climate change. Differences in wood formation processes, for example timing, duration, and rates, and their climatic sensitivities have major influences on the assessment of global change impacts on terrestrial carbon sequestration (Rathgeber et al., 2016). Current knowledge on wood formation has mainly been acquired from studies monitoring annual ring formation in conifer species (Rossi et al., 2006b; Cuny et al., 2013). The formation of wood in conifers involves multiple phases of cell development, including cell division, enlargement, wall thickening and maturation (Plomion et al., 2001; Rathgeber et al., 2016), which can occur at different periods and rates depending on the climate responses of the species. The occurrences of these processes are usually confined within a growing season and constrained by the availability of resources, for example warmth and soil water, with maximum growth rate usually observed to coincide with the maximum seasonal temperature or the maximum day length (Rossi et al., 2006a; Cuny et al., 2013). The activation of cambial activity in extratropical regions is regulated by temperature and photoperiod to avoid frost damage (Rathgeber et al., 2016; Oribe & Funada, 2017). Recent studies have also identified critical temperature and soil water potential thresholds below which radial growth does not occur (Parent et al., 2010; Eckes-Shephard et al., 2021; Peters et al., 2021). Variations in a specific environmental condition above these thresholds may enhance or slow down radial growth during a growing season (Grabner et al., 2006; Oberhuber & Gruber, 2010; Cabon et al., 2020). However, conifers are only one component of forests globally, and their growth dynamics might differ from those of other tree types.
For example, species with contrasting wood anatomical types exhibit different physiological needs, for example in the way they rely on hydraulic transport, to adapt to the external environment. A well-known example is that ring-porous species grow faster radially in the early growing season than diffuse-porous and coniferous species, due to their need to produce new vessel conduits each year (Garcia-Gonzalez et al., 2016). These differences have also been reflected in different onset and cessation timings of xylem activity between coexisting coniferous and deciduous species (Martinez del Castillo et al., 2016; del Castillo et al., 2018). Moreover, a recent study suggested that the peak radial growth rate of diffuse-porous species is synchronised with high soil water availability (D'Orangeville et al., 2022), rather than with the maximum temperature or day length as for coniferous species. Evidence indicates that wood formation in species with different wood anatomies is under diverse environmental, phenological and physiological controls, with impacts on intra- and inter-annual ring width variations. As the substrate and energy provider for wood formation, nonstructural carbohydrates (NSC) are expected to reflect well how tree species differently mediate the response of radial growth to environmental factors. Functionally, NSC are considered to buffer the deficits between demand (maintenance and growth) and supply (Dietze et al., 2014). The dependence of wood formation on NSC should vary among species depending on how xylem water transport is coordinated with foliage formation. At the beginning of a growing season, reinitialisation of wood formation is thought to occur earlier than bud break in ring-porous species due to the need to rebuild earlywood vessel conduits to support water transport to the leaf buds (Wang et al., 1992; Takahashi et al., 2013; Pérez-de-Lis et al., 2018). With the absence or near absence of photosynthesis during this stage, stored C is crucial to provide sufficient energy and material (Pérez-de-Lis et al., 2017). By contrast, the onset of wood formation in diffuse-porous trees is less associated with C storage, as cell enlargement generally begins at or right after budbreak (Schmitt et al., 2000; Čufar et al., 2008). The onset of wood formation of coniferous species can occur either before or after bud break (Rossi et al., 2009). Stem NSC concentrations were observed to fluctuate during the growing season for both ring-porous and diffuse-porous trees under different climate and site conditions (Michelot et al., 2012; Richardson et al., 2013; Scartazza et al., 2013). By contrast, total NSC concentrations in stems of coniferous trees tend to accumulate first but begin to decline around the middle of the growing season (June or July), potentially caused by a lower photosynthesis rate (Martínez-Vilalta et al., 2016; Furze et al., 2019). At the end of a growing season, the NSC of both ring-porous and diffuse-porous trees accumulates, probably to ensure winter survival (but see Hoch et al. (2003) as an exception), whereas an opposite trend has been widely observed for coniferous trees (Martínez-Vilalta et al., 2016). Such differences in the seasonal dynamics of NSC among wood types might provide key insights into different C allocation strategies, with important implications for the modelling of source and sink controls over wood formation.
These diverse coordinations of phenology and C usage among wood structures could induce differential climate sensitivities of wood formation that would significantly affect any assessment of forest C sink capacity in the context of climate change. Although wood formation within deciduous species has received some attention in recent studies (Michelot et al., 2012; Kraus et al., 2016; Pérez-de-Lis et al., 2017; Gričar et al., 2022), few of them have comprehensively linked intra- and inter-annual NSC variations with ring formation dynamics, and none has focused on differences between coexisting species with contrasting wood anatomical types. In this study, we monitored xylogenesis, foliage phenology and NSC dynamics in Quercus rubra L. (northern red oak, hereafter red oak, a ring-porous species), Acer rubrum L. (red maple, a diffuse-porous species), and Pinus strobus L. (eastern white pine, a coniferous species) in a mixed temperate forest during the 2017-2019 growing seasons. We investigated the patterns and environmental drivers of annual radial growth for each species and their relationships with stem C storage using observations across 3 consecutive yr. Our aim was to understand how intra-annual processes, for example the timing, duration and rate of growth, generate inter-annual variability of ring width, and to determine the drivers across tree species with contrasting wood anatomies. Specifically, we hypothesised that (H1) the relative importance of physiological and phenological terms to annual ring width development is wood anatomical-type specific, and (H2) the primary environmental factors driving radial growth rate are also dependent on wood anatomy. We further investigated the variations of seasonal and inter-annual NSC concentrations to explore their potential linkages with wood formation dynamics. We hypothesised that (H3) both seasonal and inter-annual variations in NSC concentrations differ between species with varying wood anatomical types.

Materials and Methods

The study site

Harvard Forest is a mesic temperate mixed forest dominated by red oak (Quercus rubra L.) and red maple (Acer rubrum L.), with hemlock (Tsuga canadensis) and white pine (Pinus strobus L.) as the most abundant conifers. It is located in central Massachusetts, USA (42.51°N, 72.22°W). Soils at Harvard Forest are on average 1 m deep and consist mainly of well-draining, slightly acidic sandy loam. The recorded mean annual temperature and mean total annual precipitation are 7.57 ± 0.78°C (µ ± σ) and 1138.6 ± 227.2 mm (µ ± σ), respectively (Boose & Gould, 2022). The data were collected in the Prospect Hill Tract, which has regenerated naturally since a stand-replacing hurricane in 1938. Our study trees were located along Prospect Hill Road at an average elevation of 355 m. Initially, we selected eight codominant trees of each species: red oak, red maple and white pine. Two red maples and two white pines of the initial selection had snapped or suffered substantial damage during a storm event in 2017. We excluded data from those trees from our analysis, leaving us with eight oaks, six maples and six white pines. The trees had an average age of 69 ± 6, 75 ± 15 and 81 ± 9 yr, an average height of 21.5 ± 2.1, 23.8 ± 1.9 and 25.3 ± 3.6 m, and an average diameter at breast height of 25.0 ± 4.1, 47.8 ± 16.0 and 48.7 ± 15.2 cm for red maple, red oak and white pine, respectively (µ ± σ).
Data collection and processing

Climate data

Two datasets from Harvard Forest were used to quantify the site's long-term climate conditions. Daily temperature and precipitation during 1964-2002 and 2003-2019 were collected from the Shaler Meteorological Station (Boose & Gould, 2013) and the Fisher Meteorological Station (Boose, 2006) at Harvard Forest, respectively. Both stations are a few hundred metres from our observational site. The potential evapotranspiration (PET) during 2001-2019 was taken from MOD16A2 (Running et al., 2019). The spatial and temporal resolutions were 500 m and 8 d, respectively. Values of the pixel at the site and its neighbouring pixels, that is the surrounding 8 pixels, were averaged to represent the site condition. Average conditions for each month and for the growing season (April to September) were quantified. The ratio of precipitation to PET (P/PET) for each month and growing season was then calculated. The PET dataset was obtained and processed through Google Earth Engine (earthengine.google.com).

Xylogenesis

To monitor weekly growth dynamics throughout the 2017, 2018 and 2019 growing seasons, we collected microcores using a Trephor tool (Rossi et al., 2006b). In 2017, the microcores were sampled every week from day of year (doy) 67-305. During subsequent years, the sampling period was better calibrated to cover only each species' growing period. Sampling took place from doy 94-304, doy 115-283 and doy 108-304 for red oak, red maple and white pine in 2018, respectively. The corresponding periods for 2019 were doy 128-240, doy 107-269 and doy 107-289 for red oak, red maple and white pine, respectively. Freshly sampled microcores were immediately put into Eppendorf tubes containing a 3:1 solution of ethanol and glacial acetic acid, which was replaced by 75% ethanol after 24 h. Using a rotary microtome (Leica RM2245; Leica Biosystems, Nußloch, Germany), we cut microsections (7 µm-thick cross-sectional cuts) from paraffin-embedded samples (Tissue Processor 1020; Leica Biosystems). All samples were double stained with astra blue and safranin, and images were produced using a digital slide scanner (Zeiss Axio Scan.Z1; Carl Zeiss AG, Jena, Germany). For each image, three radial files were chosen, and the total ring width as well as zone widths for each development stage (cell division, cell enlargement, cell-wall thickening and mature xylem cells) were measured according to Rossi et al. (2006a). Dates of onset and cessation of each developmental phase, that is enlargement, wall thickening and onset of maturation, were determined with the R package CAVIAR (v.2.10-0; Rathgeber et al., 2011a, 2018). These dates were defined for each species when at least 50% of the counted radial files showed the target phase. The target phase was defined according to the criteria given in Rossi et al. (2006a) for conifer tracheids. These criteria have been similarly applied to fibre cells in both ring-porous and diffuse-porous species, using the width of the tangential band to assess its radial progress (see also Prislan et al., 2013; Gričar et al., 2020). To estimate the final annual ring width, observations of xylogenesis from images taken after the cessation of the enlargement phase were averaged for each individual tree.
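The 50% criterion used above for dating phase onset can be made concrete with a small sketch. The following code is a simplified stand-in for the CAVIAR routine (hypothetical data layout and field names, not the package's actual API): it returns the first sampling day on which at least half of the counted radial files show the target phase.

```python
import pandas as pd

def phase_onset(obs: pd.DataFrame, threshold: float = 0.5):
    """First day of year on which >= `threshold` of radial files show the phase.

    `obs` has one row per (doy, radial file) with a boolean `in_phase`
    marking whether the target phase (e.g. enlargement) was observed.
    Returns the onset doy, or None if the threshold is never reached.
    """
    frac = obs.groupby("doy")["in_phase"].mean().sort_index()
    hits = frac[frac >= threshold]
    return int(hits.index[0]) if not hits.empty else None

# Toy weekly observations for one tree (three radial files per microcore):
obs = pd.DataFrame({
    "doy":      [94, 94, 94, 101, 101, 101, 108, 108, 108],
    "in_phase": [False, False, False, False, True, False, True, True, False],
})
print(phase_onset(obs))  # -> 108 (first doy with 2/3 of files in phase)
```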
Foliage phenology

To identify the onset and cessation dates of foliage activity, the foliage phenology of the observed trees was monitored during 2017-2019. Throughout all 3 yr, we determined key phenological dates visually using the method of O'Keefe (2019). Briefly, we observed the crown of each tree with binoculars on a daily basis to determine the percentage of branches on which a phenological event (i.e. bud burst, foliage elongation, foliage coloration, or foliage fall) had occurred. We then determined a single date for each tree as the day when the individual process had occurred over half of the canopy of that tree.

Intra-annual variation in NSC

To monitor intra-annual NSC storage dynamics, stem tissues were collected in April, July and October during the 3 yr of the study, using a standard increment corer (5.15 mm diameter; Haglöf Co. Group, Långsele, Sweden). The measuring dates were 5 April 2017, 5 July 2017 and 4 October 2017; 23 April 2018, 11 July 2018 and 10 October 2018; and 10 April 2019, 3 July 2019 and 2 October 2019. All samples were immediately shock frozen on dry ice in the field and brought to a freezer (maximum temperature of −60°C) for storage within 2 h of collection. After storage, all samples were freeze dried (FreeZone 2.5; Labconco, Kansas City, MO, USA and Hybrid Vacuum Pump; Vacuubrand, Wertheim, Germany), ground in a Wiley mill with a mesh 20 (Thomas Scientific Wiley Mill, Swedesboro, NJ, USA), and homogenised (SPEX SamplePrep 1600 MiniG, Metuchen, NJ, USA). Particularly small samples were ground with an agate pestle and mortar (JoyFay International LLC, Cleveland, OH, USA) to minimise loss of material. We homogenised the first centimetre of xylem tissue (not including the bark and phloem). Here, c. 40 mg of finely ground and dried powder for each sample was analysed using a colorimetric assay with phenol-sulfuric acid following ethanol extraction, according to the protocol of Landhäusser et al. (2018). Absorbance values were read twice using a spectrophotometer (Genesys 10S UV-Vis; Thermo Fisher Scientific, Waltham, MA, USA) at 490 nm for sugar and 525 nm for starch. For quality control, we included at least eight blanks (both tube and sample blanks) and between 7 and 16 laboratory control standards (red oak stem wood, Harvard Forest, Petersham, MA, USA; potato starch, Sigma Chemicals, St Louis, MO, USA) with each batch of samples. The coefficient of variation for the laboratory control standards was 0.07 and 0.09 for sugar and starch concentrations in oak wood, respectively, and 0.13 for potato starch. To convert the sample absorbance values to concentrations in % dry weight with uncertainties, we used the in-house R package NSCPROCESSR (https://github.com/TTRademacher/NSCprocessR), which calibrates absorbance values with a 1:1:1 glucose:fructose:galactose (Sigma Chemicals, St Louis, MO, USA) standard curve for sugar and a glucose (Sigma Chemicals) standard curve for starch. Total stem NSC concentration was then calculated as the sum of total stem soluble sugar and stem starch concentrations. The xylogenesis, foliage phenology, and NSC data are publicly available in the Harvard Forest Data Archive (Rademacher, 2021).

Calculation of growth degree-days (GDD)

To investigate cell enlargement onset in relation to temperature, we quantified GDD using the following equation:

$\mathrm{GDD} = \sum_{i=1}^{m} \left(T_i - T_{base}\right)$,

where $T_i$ is the mean air temperature (°C) on the $i$th day of the year, and the sum runs over the $m$ days with a temperature higher than the base or threshold temperature ($T_{base}$, °C).
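As a minimal illustration of the GDD equation above (assuming daily mean temperatures supplied as a plain list), the accumulation can be coded directly:

```python
def growing_degree_days(daily_mean_temp, t_base):
    """GDD = sum of (T_i - T_base) over days with T_i > T_base (deg C)."""
    return sum(t - t_base for t in daily_mean_temp if t > t_base)

# Toy spring warm-up series; the study used t_base = 2.5, 5.0 and 7.5 deg C.
temps = [1.0, 3.5, 6.0, 8.2, 4.9, 9.1]
for t_base in (2.5, 5.0, 7.5):
    print(t_base, round(growing_degree_days(temps, t_base), 1))
# -> 2.5 19.2, 5.0 8.3, 7.5 2.3
```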
We set three different T_base values representing a general range for temperate forests (2.5, 5, 7.5°C) to calculate GDD.

Statistical analysis

Generalised additive models were used to fit the raw xylogenesis data in the different cellular development phases, that is the enlargement, wall-thickening and mature zones, for each individual tree, following Cuny et al. (2013). The fitted models were then used in the subsequent statistical analysis. Linear mixed effects models were used as the major statistical tool to analyse the relationships between key variables and to identify differences between years and species. To test species-specific variations in the dates of wood and foliage phenology, we set year and species as categorical fixed effects. To test the seasonal variations of the NSC terms, that is soluble sugar (SS), starch, and total NSC (SS + starch), we set observation date, year and species as categorical fixed effects. To test how the variations of radial growth rates responded to transient environmental factors, we set the weekly width of the enlargement zone as the response variable and tested its relationship with the weekly mean day length (DL), air temperature (T_a), and precipitation (Prep), plus two categorical predictors, year and species. To reduce the complexity of variable combinations, we built models using single species as an initial test to select the important environmental factors. To test the relative importance of physiological and phenological terms in determining annual ring width, we set predictors to represent physiological and phenological conditions, that is the mean and maximum weekly radial growth rates (G_mean and G_max) as the physiological terms and the duration of radial growth (G_len) as the phenological term, together with two categorical predictors, year and species, as fixed effects. Here, G_max and G_mean were quantified as the maximum and mean widths of the enlargement zone, respectively. G_len was calculated as the duration of cell production, that is the days between the start and end of cell enlargement. In all of the above analyses based on linear mixed effects models, a tree-specific random intercept was considered. Fixed effects were tested by setting up models with individual components or multiple components. All models were fitted by maximum likelihood using the LMERTEST package in R (v.3.1-3; Kuznetsova et al., 2017). Models were ranked according to their corrected Akaike's Information Criterion scores (AICc). AIC increments (ΔAICc) for each model were calculated with respect to that of the model with the lowest score, that is the best-fitted model; ΔAICc > 2 was considered significant (an illustrative sketch of this model-ranking procedure appears below, after the Discussion text).

Climatic conditions

Long-term climatic conditions of the growing season (1964-2019 for temperature and precipitation observations and 2001-2019 for PET and P/PET, respectively) at the observation site are summarised in Fig. 1. Climate indices from April to September were averaged to represent the growing season condition. The mean growing season temperatures during 2017-2019 were c. 1σ warmer than the 56 yr mean (1964-2019): 0.98σ, 1.51σ and 0.76σ for 2017, 2018 and 2019, respectively. The growing season precipitation of 2018 was the highest of the three observational years (3.48σ above the average), mainly owing to high rainfall during the late growing season (July to September; Supporting Information Fig. S1). The year 2018 was also the rainiest year since the start of instrumental recording.
The other 2 yr were also wetter than average, with precipitation 0.28σ and 0.67σ above the average condition. All 3 yr showed low PET, with a general decreasing trend from 2017 to 2019; the lowest value was −1.48σ in 2019. The pattern of P/PET was very similar to that of precipitation, with significantly higher values in 2018. Additionally, it should be noted that a relatively dry year (2016) preceded the observation period: the seasonal precipitation and P/PET of 2016 were both c. 1σ lower than the average condition.

Inter-annual variations of ring formation

Contrasting inter-annual patterns of ring formation were found for the three species. For red oak, the mean annual ring width was significantly lower in 2017, being on average 57.7% as wide as in the other 2 yr (59.5% and 55.9% with respect to 2018 and 2019; Fig. 2; Table S1). For red maple, higher mean ring width was also found in 2018 and 2019, but with very high variability between individuals (Table S1). The mean annual ring width of white pine barely varied during the 3 yr, with 2017 showing a slightly thinner mean ring width.

Foliage and wood phenologies

The relationship between foliage and wood phenologies for the different tree species was generally consistent across the 3 observational yr (Fig. 3). The beginning of cell enlargement preceded budburst for red oak and white pine, by an average of 14.7 and 29.9 d, respectively. The order was opposite for red maple, with budburst preceding the onset of cell enlargement by 18.9 d on average. Cessation dates of cell enlargement and wall thickening were much earlier than those of foliage fall for the two angiosperm species. Using different baseline temperatures (T_base), growth degree-days (GDD) generally accumulated faster in 2017 than in the other 2 yr during the onset period of cell enlargement for red oak (doy 112.3 ± 7.8) and white pine (doy 132.2 ± 8). Later in the season, GDD in 2018 gradually became higher than in 2017 and 2019 around the onset period of cell enlargement for red maple (doy 147 ± 8), especially for the calculations using 5°C and 7.5°C as T_base. Dates of budburst, foliage fall, and onset of cell enlargement showed significant differences among species and years (Table 1). With high variation between individuals, the cessation dates of enlargement and wall thickening were significantly different between years but not between species. The mean annual duration of cell production and wood formation (cell enlargement and wall thickening) was longest for white pine, followed by red oak and red maple (Fig. 3; Table S1). We observed shorter durations of enlargement and wood formation in 2017 than in the other 2 yr for red oak, but similar durations for red maple and white pine over the 3 yr. In the mixed effects models, significant differences in duration were found between species but not between years. Regarding foliage phenology, the duration of foliage activity, that is the days between budburst and foliage fall, showed significant differences between both years and species (Table 1).

Intra- and inter-annual xylogenesis dynamics

As the critical phase for radial growth, cell enlargement started and peaked earlier for red oak than for the other two species in all 3 yr (Figs 2, 3). The timings of peak zone width (first peak in the case of red maple) were doy 137.8 (±9.1), 172.9 (±7.0), and 163.2 (±8.4) for red oak, red maple and white pine, respectively.
Individual trees from all three species exhibited a single peak pattern in 2017, with more similar timing of the peak than in the other 2 yr (Figs S2-S4). On average red maple showed a bi-model growth pattern in 2018 and 2019, but individual trees exhibited uni-and bi-modal growth dynamics (Fig. S3). The end of cell enlargement from all species showed high variabilities between individuals, ranging from doy 175 to 250. We first tested the importance of different environmental factors, that is weekly mean daylength (DL), air temperature (T a ), and precipitation (Prep) on variations in intra-annual radial growth rate, that is weekly mean width of the enlargement zone. Based on an initial analysis within each species, we identified DL and T a as the most significant environmental factors, and so were tested further using the two factors (Tables S2-S4). DL played a dominant role in affecting the variations of weekly radial growth rate during the 3 yr. Models involving DL showed substantial better performance than those only involving T a (Table 2). Among the species, the weekly radial growth rate of white pine showed the closest relationship against DL, with marginal R 2 reaching 0.44 using DL as the single fixed effect (Table S4). T a can explain a relatively smaller proportion of the variations of cell enlargement, and mainly for red oak and white pine (Tables S2-S4). Incorporating the interactions between T a and DL could further enhance the model performance, but could only marginally increase the explanatory power for the variations of radial growth rate. No general effect of Prep was found. Its effect was more transient. One example is a synchronisation of high rainfall and growth stimulation in 2018 for red maple (Figs 2, S5). Overall, the annual ring width can be better explained by the duration of radial growth (G len ) than mean (G mean ) or maximum (G max ) weekly radial growth rates (Table 3). Models with consideration of G len showed substantial lower AIC and higher explanatory power for annual ring width than the models considering only G mean or G max . The varied role of G mean and G max was probably a consequence of the high intra-annual variability of cell enlargement between years and individuals (Figs S2-S4). Models considering interactions between G len and species compared favourably against models not including these interactions, indicating different responses existed between species (Tables 3, S5). The year factor showed a minor effect on the annual ring width prediction compared with that among species. For individual species, G len played a major role in controlling the annual ring width for red oak and red maple (Tables S6-S8). For white pine, there was a strong effect on annual ring width from the individual trees (R 2 = 0.68). We also found an effect of G mean for white pine, but it was significantly less important than the tree factor (Table S9). By contrast, G max and G len showed comparably minor effects on ring width growth for this species (Table S7). To further explain this strong effect from the individuals, we further tested the relationship between diameter at breast height (DBH) and mean annual ring width of individual trees, and found a significant relationship for white pine (R 2 = 0.73, P < 0.01), but not for the other species (Fig. S6). Seasonal dynamics of NSC Concentrations were significantly different among the three species for total stem NSC, soluble sugar (SS), and starch, with all the well fitted models containing the species effect ( Fig. 
Seasonal dynamics of NSC

Concentrations of total stem NSC, soluble sugar (SS), and starch differed significantly among the three species, with all the well-fitted models containing the species effect (Fig. 4; Table 4). In contrast with the obvious differences in annual ring width for red oak and red maple, both year and observational date had minor effects on the stem NSC concentration terms. Seasonal variations of mean stem NSC were consistent over years for both red oak and red maple. Relatively low NSC concentrations appeared in the middle of the growing season (early July), corresponding to the middle of wall thickening, the major stage at which trees deposit C into wood, whereas high NSC concentrations appeared in the early and late growing season, before and after wall thickening. For white pine, stem NSC concentrations first decreased and then increased; however, due to variation between individuals, no significant difference between years was found (Fig. S7).

Discussion

Duration vs rate in determining annual ring width

The duration of radial growth (G_len) is determined by xylem formation phenology, that is the timing of onset and cessation of cell enlargement, while the radial growth rates (G_max or G_mean) reflect more the intrinsic growth potential under a specific environment. Our results confirmed H_1, that the relative importance of physiological and phenological terms to annual ring width development is specific to wood anatomical type, by revealing the dominant role of G_len in determining annual ring width variation between years for both angiosperm species, in contrast with white pine. Previous studies have found that variations of ring width in conifers (Vaganov et al., 2006; Rathgeber et al., 2011b; Cuny et al., 2012; Ren et al., 2019) and ring-porous species (Delpierre et al., 2016; Pérez-de-Lis et al., 2017) were mainly attributed to G_max rather than G_len. One exception to this G_max-dominant view comes from a comparison of trees with three different wood anatomical types, in which annual ring width was significantly correlated with the end date of the growing season, but not with G_max, implying a phenological control over radial growth on an annual basis across tree types (Michelot et al., 2012). Through multiyear observations, this study provides the first direct evidence of a strong dependence of annual ring width on G_len for typical angiosperm species under the same environmental conditions.

Regarding white pine, we observed relatively minor effects of both duration and rate terms in explaining annual ring width. Instead, a major effect came from the tree factor. This suggests that radial growth in this species is controlled by intrinsic biological condition, for example vitality, as a high correlation between DBH and mean annual ring width was found (Fig. S6). In addition, the observed growth patterns in pine seemed not to be determined by a single species-specific strategy, as individual white pines exhibited different growth patterns, for example short G_len and low G_max, short G_len and medium G_max, long G_len and medium G_max, and so on (Fig. S4). The outcome therefore also reflects each individual's fit to its corresponding micro-physical and biological environment in the ecosystem (Rathgeber et al., 2011b).

Inter-annual variability of phenologies

The general features of foliage and wood phenologies from the different species in this study were consistent with previous observations. For all 3 yr, the onset of wood development preceded or followed budburst for red oak and red maple, respectively.
This corroborated previous conclusions for ring-porous and diffuse-porous trees (Takahashi et al., 2013; Guada et al., 2020). White pine showed an earlier onset of growth than budburst, similar to findings from other coniferous species (e.g. Rossi et al., 2009; Moser et al., 2010; Takahashi & Koike, 2014). Environmental drivers of the beginning of cell enlargement are relatively well understood. Recent studies have suggested that a comprehensive consideration of photoperiod and temperature can accurately estimate the onset of radial growth of various species (Delpierre et al., 2019; Huang et al., 2020). Based on the GDD calculation (a minimal accumulation sketch is given at the end of this subsection), the inter-annual variability of spring growth onset of the different species in our study can be interpreted. The early spring of 2017 was warmer than in the other 2 yr, leading to a correspondingly faster accumulation of GDD during the onset period for red oak and white pine in 2017. By contrast, the GDD of 2018 accumulated faster than in the other 2 yr for red maple, possibly leading to a slightly earlier growth onset in that year (Fig. 3d). The relatively low variability of spring onset between individuals further confirmed the robustness of the inter-annual variations of growth onset for the three species.

[Table 3: Linear mixed models evaluating the effect of the maximum enlargement zone width (G_max) or mean enlargement zone width (G_mean), and the duration of the enlargement phase (G_len), on annual ring width for the three species (red oak, red maple and white pine) during 2017-2019. Corrected Akaike Information Criterion increments (ΔAICc) for each model are shown with respect to the model with the lowest score (the best-fitted model); ΔAICc > 2 is considered significant.]

Factors determining the timing of cessation of cell enlargement, however, are not well understood in either conifers or angiosperms. Current knowledge on this phenological transition in wood formation comes mainly from conifers, at the cellular or individual level, and suggests a potential combined effect of hormonal signals, resource availability, and direct environmental factors (Uggla et al., 2001; Sorce et al., 2013; Cartenì et al., 2018). At the individual level, inconsistent patterns of enlargement cessation were found between years, which did not seem to be controlled by any single factor (Fig. S8). Once averaged to the stand level, high variability between individuals remained evident for each species. Interestingly, we observed a potential linkage between the start and end of enlargement for red oak at the stand level, similar to findings from foliage phenology (Fu et al., 2014; Keenan & Richardson, 2015; Zani et al., 2020). The early cessation of cell enlargement for red oak in 2017 corresponded to the earlier beginning of cambial activity, and the later cessation in the other 2 yr seems linked to the delayed onset (Fig. 3). One possible explanation for this covarying pattern is sink limitation from nutrient supply or phloem loading (Paul & Foyer, 2001; Ryan & Asao, 2014). However, the phenomenon of earlier growth cessation has not been widely observed in free-air CO2 enrichment (FACE) experiments (Norby, 2021), nor in the other two species in this study. In addition, the preceding autumn phenology may also affect the phenology of the current year by modifying the timing and duration of the dormancy period (Marchand et al., 2021). Our results, therefore, call for future tests to better understand the underlying mechanisms driving the variations of autumn phenology, especially for species other than conifers.
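For reference, the GDD accumulation used in the interpretation above amounts to summing daily exceedances of the mean temperature over a baseline; a minimal sketch follows (the sinusoidal temperature series is a toy assumption standing in for station data).

```python
import numpy as np

def gdd(tmean, t_base):
    """Cumulative growing degree-days from daily mean temperatures (deg C)."""
    return np.cumsum(np.maximum(tmean - t_base, 0.0))

# Toy daily mean-temperature series for doy 1..200 (seasonal sinusoid).
doy = np.arange(1, 201)
tmean = 8.0 + 14.0 * np.sin(2.0 * np.pi * (doy - 105) / 365.0)

for t_base in (0.0, 5.0, 7.5):  # baseline temperatures compared in the text
    print(f"T_base = {t_base}: GDD by doy 200 = {gdd(tmean, t_base)[-1]:.0f}")
```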
Endogenous properties, rather than transient environmental factors, control radial growth dynamics

The variations in weekly width of the enlargement zone tracked photoperiod more closely than the other transient environmental factors at the site. We therefore reject H_2, having identified a common primary environmental factor for all three species independent of wood anatomy. Among the three species, white pine exhibited the highest correlation with photoperiod. On the one hand, the close relationship between the variability of weekly enlargement-zone width and photoperiod suggests a relatively consistent bell-shaped pattern of radial growth for white pine; G_max of white pine was generally reached around the summer solstice (doy 173), confirming the conclusions of previous studies (Rossi et al., 2006b; Cuny et al., 2015). On the other hand, this relatively constant pattern over years also indicates a lower plastic response to environmental factors compared with the two angiosperm species. In red oak, cell enlargement peaked earlier than in the other two species due to the requirement to form vessels at the beginning of the growing season, and therefore presented an asymmetric growth pattern with rapid width increment during the early growing season. Red maple exhibited the highest plasticity of cell enlargement among the three species. All individuals reached the first growth peak around the summer solstice (Figs 2, S3), but individuals with longer G_len exhibited a second surge of growth after the first peak in 2018 (and several trees in 2019 as well). The second peak was synchronised with the high water input after July (Fig. S5). This species-specific response is potentially related to the high sensitivity of red maple wood formation to water supply. Due to its shallow root system, the water demand of red maple is strongly dependent on surface soil moisture, potentially causing variation in both foliage and wood activities (Gilman, 1990; Tschaplinski et al., 1998). The substantially lower ring width for red maple only in 2016, the year with a dry growing season, seems to corroborate this explanation (Fig. S9; Notes S1).

Seasonal and inter-annual variability of NSC

Species-specific NSC concentrations clearly reflected the differential dependence on C storage of each species through the growing season. We therefore accept H_3. The substantially higher NSC concentrations and larger magnitude of changes for red oak and red maple indicated a greater demand for C storage during particular periods, for example growth onset and dormancy, than in the coniferous species. Our study therefore provides evidence and explanation for the substantial differences in C storage levels by linking them to species-specific growth strategies. In contrast with the annual ring widths, no significant changes in red oak and maple NSC were observed across years, suggesting that radial growth was not always limited by C supply. In red oak, for example, the earlier onset of wood formation in 2017 did not result in lower stem NSC concentrations than in other years. This result suggested a sink limitation to wood formation, possibly induced by temperature limitation.
However, we could not exclude the possibility that the seasonally constant NSC concentration was maintained by NSC remobilisation from other organs, as suggested by Barbaroux & Breda (2002). As shown by Furze et al. (2019), there was a significant reduction in branch NSC and a corresponding increment in stem NSC during April in both red oak and maple at the same site. A recent study has also suggested that storage in living bark tissue can be a source of NSC remobilisation for xylem formation (Schoonmaker et al., 2021). This xylem NSC maintenance may cause C depletion in other organs and induce further feedbacks between source and sink activities (Hartmann & Trumbore, 2016). In addition, low stem NSC concentration was observed during the middle of the wall-thickening stage for red oak and red maple. This result differs from the existing evidence for coniferous species, for which stem NSC concentration generally remains high at a similar time of year (Oberhuber et al., 2011; Simard et al., 2013). Our observations therefore imply a more significant investment of C storage in support of C deposition into wood for ring-porous and diffuse-porous species than for coniferous ones. Nevertheless, given the high variability of NSC observed in above-ground organs (Tixier et al., 2018), the frequency and timing of NSC observations are also critical for characterizing the seasonal dynamics of the C supply condition. Implementing comprehensive measurements of the whole-tree NSC pattern at fine temporal resolution will help to better understand the role of C storage and its effects on wood formation.

Conclusion

We have identified species-specific inter-annual variability of ring width linked to wood anatomical types and their drivers. Both red oak and red maple showed a high dependence of annual ring width on the duration of radial growth (G_len). For red oak, with shorter G_len in 2017, a thinner ring width was identified in that year than in the other 2 yr. For red maple, similar G_len was found for all 3 yr; we observed a larger mean ring width in 2018, potentially due to the high water supply during the late growing season, while its high plasticity of cell enlargement produced large differences in annual ring width between individuals. For white pine, ring width mainly varied between individuals rather than years, with only minor inter-annual variability. Such diversity of wood formation activity between species at a single site suggests that inter-annual variability aggregated to the ecosystem level would differ from that of any single species. This may also explain the lack of a clear role for transient environmental factors in the intra-annual variations of radial growth in our study. It is important to emphasise that our current understanding of wood formation is largely derived from conifers, and model development is inevitably based on existing knowledge (Vaganov et al., 2006; Fatichi et al., 2019). With increasing recognition of the importance of sink activities, it is necessary to know how to adequately represent xylogenesis in global models across all tree anatomies (Friend et al., 2019). We have identified contrasting patterns of xylogenesis in trees with different wood anatomies at the same site.
Our results highlight that species-specific wood anatomical types and differences in their sensitivities to environmental pressures largely determine the annual developmental dynamics of ring width increment, with differences in timing, duration, and rate (D'Orangeville et al., 2022). It is therefore essential to study the drivers of the dynamics of wood formation in species other than conifers to meet future modelling requirements. Our results are a step in this direction, providing new insights into the importance of the duration of radial growth for wood formation in angiosperms, and therefore increase the understanding of the drivers of inter-annual and inter-species variations in wood formation.

Supporting Information

Additional Supporting Information may be found online in the Supporting Information section at the end of the article.

Notes S1 Quantification of annual ring width based on the standardisation data.

Table S1 Summary of important growth indexes.

Table S2 Linear mixed models evaluating the responses of the weekly variations of the enlargement zone to day length (DL), air temperature (T_a) and precipitation (Prep) for red oak during 2017-2019.

Table S3 Linear mixed models evaluating the responses of the weekly variations of the enlargement zone to day length (DL), air temperature (T_a) and precipitation (Prep) for red maple during 2017-2019.

Table S4 Linear mixed models evaluating the responses of the weekly variations of the enlargement zone to day length (DL), air temperature (T_a) and precipitation (Prep) for white pine during 2017-2019.

Table S5 All linear mixed models tested to evaluate the effect of the maximum enlargement zone width (G_max) or mean enlargement zone width (G_mean), and the duration of the enlargement phase (G_len), on annual ring width for the three species during 2017-2019.

Table S6 Linear mixed models evaluating the effect of the maximum enlargement zone width (G_max), the mean enlargement zone width (G_mean) and the duration of the enlargement phase (G_len) on annual ring width for red oak during 2017-2019.

Table S7 Linear mixed models evaluating the effect of the maximum enlargement zone width (G_max), the mean enlargement zone width (G_mean) and the duration of the enlargement phase (G_len) on annual ring width for red maple during 2017-2019.

Table S8 Linear mixed models evaluating the effect of the maximum enlargement zone width (G_max), the mean enlargement zone width (G_mean) and the duration of the enlargement phase (G_len) on annual ring width for white pine during 2017-2019.
Integrated traditional Chinese and conventional medicine in the treatment of anemia due to lower-risk myelodysplastic syndrome: study protocol for a randomized placebo-controlled trial

Background: Erythropoiesis and iron homeostasis are closely related, and anemia due to lower-risk myelodysplastic syndromes (MDS) remains difficult to treat. In the last decade, we have been committed to improving the regulation of iron metabolism using traditional Chinese medicine (TCM). Previous studies have found that the TCM formula Yi Gong San (YGS) can reduce the expression of transferrin by inhibiting the hepcidin overexpression caused by inflammation, promote the outward transfer of intracellular iron, and improve the symptoms of anemia. Here, our study aims to compare the efficacy of a conventional drug combined with YGS against that of conventional medicine with placebo, to provide a scientific basis for clinical decisions.

Methods: A prospective, multicenter, double-blinded, randomized controlled clinical trial will be conducted to compare the therapeutic efficacy of conventional medicine combined with YGS against conventional medicine alone in the treatment of MDS. A total of 60 patients would be enrolled in this study, with each treatment group (conventional medicine + YGS and conventional medicine + placebo) comprising 30 patients. Oral medication would be administered twice daily for 3 months, and all patients would be followed up throughout the 3-month period. The primary outcome would be measured by assessing blood hemoglobin levels; the secondary outcomes would be measured by assessing TCM symptom scores, iron metabolism, hepcidin levels, and inflammatory factors.

Discussion: This trial aims to demonstrate the effectiveness and feasibility of YGS in the treatment of lower-risk MDS anemia, as well as its impact on inflammatory factors and iron metabolism in patients with lower-risk MDS.

Trial registration: Chinese Clinical Trials Registry (http://www.chictr.org.cn/) ChiCTR1900026774. Registered on October 21, 2019.

Keywords: Anemia of lower-risk myelodysplastic syndrome, Traditional Chinese medicine, Treatment, Clinical trials

Background

Myelodysplastic syndromes (MDS) are a heterogeneous group of hematologic neoplasms characterized as a clonal disorder of hematopoietic stem/progenitor cells leading to a host of hematological malignancies. MDS can manifest as dysplasia in one or more bone marrow hematopoietic lineages, ineffective hematopoiesis, and a high risk of transformation to acute myeloid leukemia [1]. Anemia is the most common clinical manifestation in patients with MDS: approximately 60-80% of patients have symptoms of anemia, and nearly 50% of MDS patients have severe anemia with an increased risk of heart and lung failure. Patients with an International Prognostic Scoring System-Revised (IPSS-R) score of ≤3.5 have been categorized as the lower-risk MDS subpopulation according to the 2012 International Working Group criteria. According to European MDS statistics, lower-risk MDS accounts for approximately 70% of the entire MDS population, and most lower-risk MDS patients have symptomatic anemia; therefore, improving anemia and alleviating its symptoms are the main treatment goals for patients with lower-risk MDS [2].
The clinical treatment of lower-risk MDS anemia has always been challenging, despite the development of various treatment methods in Western medicine, including erythropoietin, immunosuppressants (cyclosporine A), immunomodulators (lenalidomide and thalidomide), and agents undergoing clinical trials. A previous study reported that among patients with MDS anemia treated with the aforementioned drugs, 80-90% still require blood transfusion, and blood transfusion dependence (at least 1 U of red blood cells transfused every 8 weeks over 4 months) is negatively related to patient survival [3]. Lower-risk MDS anemia, characterized by ineffective hematopoiesis, is primarily caused by the excessive proliferation of red blood cell precursors, which fail to mature, impairing the blood's oxygen-carrying capacity.

Additionally, erythropoiesis and iron homeostasis are closely related. Abnormal iron metabolism is an independent risk factor affecting the prognosis of patients with lower-risk MDS [2], and hepcidin plays a key role in iron homeostasis. MDS has a complex iron regulation mechanism: anemia, hypoxia, inflammation, and iron overload have opposing effects on hepcidin production. Compared with healthy individuals, patients with MDS who have not received blood transfusions have significantly increased serum ferritin (SF) and hepcidin levels, but a decreased hepcidin/SF ratio [4,5]. In patients with lower-risk MDS undergoing blood transfusion, hepcidin levels initially increase; however, as the amount of transfused blood increases, hepcidin levels gradually decrease, eventually causing iron overload [6]. Iron overload further aggravates the underlying hematopoietic deficiency in those with MDS, making blood transfusion ineffective [2].

In the last decade, we have been committed to improving the regulation of iron metabolism using traditional Chinese medicine (TCM). According to TCM theory, spleen deficiency is an important triggering factor of MDS, and the approach called "tonifying the spleen" in TCM clinical practice is a crucial step in MDS treatment. The spleen is held responsible for maintaining the normal distribution of iron in the body, a function that depends on the transportation of spleen Qi; deficiency of spleen Qi and diminished spleen function would therefore cause an iron distribution disorder. Yi Gong San (YGS) is a representative TCM prescription for promoting the movement of spleen Qi. Previous studies have found that YGS has no effect on iron metabolism in normal mice; however, it can reduce the expression of transferrin by inhibiting the hepcidin overexpression caused by inflammation, promoting the outward transfer of intracellular iron and thus improving anemia [7-9]. Reports from many clinical trials have shown that YGS can improve inflammation-induced anemia (anemia of chronic disease) [10]. In fact, prior to blood transfusion, lower-risk MDS shows the same pathological process of iron metabolism as anemia of chronic disease. However, it is not clear whether YGS is a suitable therapeutic candidate for anemia in patients with lower-risk MDS. Thus, this study aims to compare the efficacy of a conventional drug combined with YGS with that of the conventional drug alone, as a basis for clinical decisions.
Objective and design

This multicenter, pragmatic, randomized, controlled trial is conducted to evaluate the effectiveness of integrating YGS into the conventional treatment for anemia in patients with lower-risk MDS. A total of 60 patients would be recruited and randomly assigned to one of two treatment groups. During the treatment period, patients in both arms would be followed up for 3 months. The primary outcome (hemoglobin) and secondary outcome measures (TCM symptom score, iron metabolism, hepcidin levels, and inflammatory factors) would be assessed at different points within the trial period. The study has been approved by the research ethics committee of the Shanghai Baoshan Hospital of Integrated Traditional Chinese and Western Medicine (#201809-01). This trial adheres to the Declaration of Helsinki and was registered at the Chinese Clinical Trials Registry (ChiCTR1900026774) on October 21, 2019. The trial results would be reported according to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement and the latest version of the Consolidated Standards of Reporting Trials (CONSORT) statement.

Study setting and participants

All adult patients with lower-risk MDS admitted to the hematology departments of three tertiary care hospitals would be screened and enrolled. Eligible patients would be asked to provide written informed consent and would be centrally randomized to either the integrated YGS and conventional medicine group or the conventional medicine group. In both groups, all medications would be administered orally for 3 months (12 weeks). Patient recruitment began in March 2020 and should be completed in December 2021.

The inclusion criteria are as follows:

Patients aged 18 years or older

Patients whose peripheral blood cell, bone marrow smear, gene, and chromosome profiles meet the criteria for lower-risk MDS according to the 2020 NCCN guidelines (V1)

Patients with a hemoglobin level of < 110 g/L

Patients whose symptoms meet the spleen Qi deficiency pattern of MDS according to the Standard Criteria for Syndrome Differentiation by TCM [11]

The exclusion criteria are as follows:

Patients who are blood transfusion dependent

Pregnant or lactating women, and women planning to become pregnant (a urine pregnancy test, standard for women of childbearing age, must be performed before treatment)

Patients with hepatic or renal insufficiency (blood aspartate aminotransferase, alanine aminotransferase, or creatinine concentrations exceeding 3 times the upper limit of normal)

Patients who recently participated in clinical trials of other drugs (within 2 weeks for TCM, within 7 half-lives for Western medicine)

Patients with comorbidities that may cause anemia, such as neoplastic diseases

Patients who are unwilling to cooperate

Patients who cannot complete the specified observation items after being selected

Randomization and double-blind approach

Participants who meet the inclusion criteria and sign an informed consent form would be allocated to a group (Fig. 1). Based on a stratified block randomization design with a 1:1 ratio, patients would be assigned to the two groups by the study center at the time of enrollment. Allocation is based on random numbers (1 to 60) generated using SPSS software version 25.0, and the allocation details are saved in a sealed envelope kept by an independent clinical statistician, in keeping with the double-blind design.
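The protocol specifies SPSS-generated random numbers; purely for illustration, blocked 1:1 allocation of the kind described can be sketched as follows (the block size and seed are assumptions, not taken from the protocol, and stratification is omitted for brevity).

```python
import random

def block_randomize(n_patients=60, block_size=4, arms=("YGS", "placebo"), seed=2019):
    """Blocked 1:1 allocation: each block contains equal numbers of both arms."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # random order within the block keeps the arms balanced
        allocation.extend(block)
    return allocation[:n_patients]

print(block_randomize()[:8])  # e.g. ['placebo', 'YGS', 'YGS', 'placebo', ...]
```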
In this study, the procedures of generating random numbers, evaluating the primary and secondary outcomes, and performing the statistical analysis are carried out independently by dedicated personnel who are blinded to group assignment. Upon completion of the study and data analysis, the efficacy and safety of the YGS treatment shall be evaluated. During the clinical trial period, the treatment regimen of both study groups would be managed by their respective study center/hospital according to the approved treatment plan. If a patient develops a serious adverse reaction or life-threatening condition during the trial, the clinician responsible for the patient must provide the best on-site medical care. Simultaneously, the clinician must immediately report the incident to the inspector, upon which the envelope containing the patient's treatment group information is opened to break the blind and identify the drug used, so that the patient can be provided with appropriate emergency medical assistance. Once the blind is broken, the case would be regarded as a withdrawal (shedding) case and excluded from the efficacy statistics, with any adverse reactions recorded.

Intervention treatment

All patients would be treated with conventional medicine according to the MDS 2020 NCCN guidelines (V1) and the TCM guidelines for the diagnosis and treatment of MDS (2019) [12]. The YGS granules contain several herbs, including Poria Sclerotium (Fu-Ling); each YGS granule contains 20 g of each herbal raw material, with the full composition listed in Table 1. Granules containing these herbs are manufactured by Jiangyin Tianjiang Pharmaceutical Co., Ltd in strict compliance with Good Clinical Practice conditions for the production process and packaging of mixed granules for TCM. For the conventional medicine group, patients would be administered placebos containing maltodextrin, lactose, and food coloring, prepared using modern pharmaceutical technology according to the placebo specification for granules. The placebos appear identical to the YGS granules in packaging, weight, odor, and color. In all patients, the medications would be administered twice daily, after breakfast and dinner, for up to 3 months (12 weeks).

Data collection

Most patients would be enrolled from the outpatient clinic. We would first obtain the patient's informed consent, then collect demographic data (sex, age, occupation, home address, diagnosis, and past medical history) and help the patient fill out the TCM symptom scale to determine the type of symptoms presented. A peripheral blood sample would be collected from each included patient prior to the initiation of treatment and examined for various biochemical parameters, such as liver and kidney function, iron metabolism-related indicators, hepcidin levels, and inflammatory factor-related markers. After random assignment to either group, the patients would be administered oral doses of conventional medicine with YGS or placebo for 3 months (12 weeks). At 1, 2, and 3 months, the patients would be followed up, complete a new TCM symptom scale form to rate their symptoms, and provide a fresh peripheral blood sample (Table 2). A case record form would be used to collect patient clinical and test data, which would be entered into an electronic data capture system.
Efficacy assessment

Primary outcome

The primary outcome is the blood hemoglobin level; compared with the baseline level prior to the initiation of treatment, an increase of 10 g/L is considered an effective treatment outcome.

Secondary outcomes

The secondary outcomes measure the changes in TCM symptom scores, iron metabolism, hepcidin levels, and inflammatory factors. Spleen Qi deficiency syndrome is primarily characterized by poor appetite, fatigue, abdominal distension after meals or in the afternoon, and abnormal stool. The effectiveness of TCM is evaluated using the TCM syndrome score method (Table 3). According to the Standard Criteria for Syndrome Differentiation by TCM [11], a reduction in the TCM syndrome score of ≥30% indicates that the clinical symptoms of spleen Qi deficiency either improved or disappeared, and the treatment is considered clinically effective. The TCM syndrome score reduction is calculated as follows: [(score before treatment − score after treatment) ÷ score before treatment] × 100%. The indicators of iron metabolism are serum iron, total iron-binding capacity, SF, and soluble transferrin receptor. The inflammatory factors include interleukin (IL)-1, IL-6, and tumor necrosis factor (TNF)-α.

Adverse event reporting

Details of mild or severe adverse effects following TCM treatment would be recorded in the case report form, comprising the time of occurrence, clinical manifestations, number of treatment days elapsed, duration, outcome, and possible side effects of the drug. Patients with abnormal laboratory test results must be followed up until the results return to normal or to the level prior to the administration of medication; the clinicians shall determine whether the abnormalities are related to the treatment drug. If a serious adverse reaction develops, the serious adverse event form should be filled out, and the incident would be reported to the sponsor, the research ethics committee, the Safety Supervision Department of the National Medical Products Administration, and the Health Administration Department within 24 h. The sponsor and the research ethics committee in charge will conduct the quality control and patient compliance assessments for this trial.

Statistical analyses

A descriptive statistical analysis of all the quantified variables in this study would be performed. The mean, median, and standard deviation will be calculated for quantitative variables, while absolute and relative frequencies will be calculated for qualitative variables. SPSS software (version 25.0; IBM Inc., USA) would be used for data processing and analysis. Statistical analysis would be based on the intention-to-treat and per-protocol population principles. Missing data can be handled using the last observation carried forward method (a minimal sketch is given at the end of this subsection). To compare the changes in TCM symptom scores and hematological data before and after the intervention, multiple comparisons would be performed: a repeated-measures analysis of variance would be conducted first, followed by separate-effect analyses, including between-group comparisons at each time point, and a multivariate logistic regression model would be used to analyze the correlation between changes in iron metabolism, hepcidin levels, IL-6, TNF-α, and IL-1 expression and the curative effect. Additionally, a subgroup analysis can be performed according to the IPSS-R, and an analysis of covariance can be performed for factors such as age and sex. A two-sided P value of < 0.05 is considered significant.
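For concreteness, the last-observation-carried-forward imputation and the ≥30% score-reduction rule described above can be sketched as follows (the table layout and column names are hypothetical).

```python
import pandas as pd

# Hypothetical visit-level data: one row per patient per scheduled visit.
df = pd.DataFrame({
    "patient":   [1, 1, 1, 2, 2, 2],
    "month":     [1, 2, 3, 1, 2, 3],
    "tcm_score": [12.0, 8.0, None, 10.0, None, None],  # None = missed visit
})

# Last observation carried forward, applied within each patient.
df["tcm_score_locf"] = df.groupby("patient")["tcm_score"].ffill()

# Clinical effectiveness per the protocol's formula: reduction >= 30% vs baseline.
baseline = df[df["month"] == 1].set_index("patient")["tcm_score_locf"]
final    = df[df["month"] == 3].set_index("patient")["tcm_score_locf"]
reduction_pct = (baseline - final) / baseline * 100
print(reduction_pct >= 30)  # patient 1: True (33.3%), patient 2: False (0%)
```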
Discussion

The release of inflammatory factors plays an important role in the development of lower-risk MDS, including the upregulation of cytotoxic T cells [13]. Bone marrow fibroblasts and macrophages of patients with MDS continuously release IL-6, IL-1, TNFs, and other inflammatory factors that further aggravate the compromised hematopoiesis in the bone marrow and promote the apoptosis of bone marrow cells [14]. Patients with MDS often have low immune function and an increased coinfection risk, especially those with leukopenia. Coinfection further stimulates the release of inflammatory factors in the body, especially IL-6 and IL-1, which can induce hepatic hepcidin overexpression and inhibit iron absorption and the release of stored iron [15], resulting in iron metabolism disorders similar to those of anemia induced by chronic diseases. C-reactive protein (CRP) levels indirectly reflect the expression of inflammatory factors. Previous studies have shown that serum CRP levels in patients with MDS are positively correlated with hepcidin levels, especially in the RAEB type [4]. Similar to SF, CRP has a significant effect on patient survival, indirectly indicating that inflammatory factors can affect the survival of patients with MDS [2].

YGS is an oral decoction first recorded in Xiao'er Yao Zheng Zhi Jue (Key to Therapeutics of Children's Diseases), a famous classical book of TCM written approximately 900 years ago. YGS is known for its functions of tonifying spleen Qi and moving Qi to resolve stagnation. YGS has been traditionally used in Korea to treat a variety of inflammatory diseases; pretreatment with YGS inhibited the production of TNF-α and IL-6 in LPS-stimulated mouse peritoneal macrophages [16], which is consistent with our previous animal experiment results [7], supporting an inhibitory effect of YGS on inflammatory factors.

In the current study, we intend to enroll patients with MDS anemia who are not dependent on blood transfusion and provide them with the relevant treatments during the early stages of lower-risk MDS. In patients with lower-risk MDS, early T cell function is increased and inflammatory factors are highly expressed [17], mirroring the pathological process of iron metabolism in anemia of chronic disease (ACD). Therefore, based on our previous work, we hypothesized that YGS can regulate iron metabolism by reducing inflammatory factors in lower-risk MDS, thereby improving anemia. We intend to include 60 patients with lower-risk MDS anemia; conduct a randomized, double-blind, placebo-controlled parallel trial; administer oral YGS granules or placebo granules to the included patients; and evaluate the clinical efficacy of this treatment.

This trial has the following advantages: (1) it is a randomized, double-blind, placebo-controlled, multicenter study, which is scientific and rigorous; (2) at present, no relevant research has evaluated the effects of TCM on inflammatory factors and iron metabolism in MDS, so the findings of this study would fill this knowledge gap in the literature. This study also has certain limitations. It only evaluates the early stages of MDS, whereas most patients with MDS are already in the late stages when they see a doctor. Therefore, it might be difficult to recruit patients, and we had to set the sample size at the minimum number of cases required for the statistical analyses. In this study, the Western-medicine treatment of MDS is also recorded as an individualized treatment regimen, which would be further analyzed as a stratified group in our statistical analyses.
The results of this clinical trial would reveal the effectiveness and feasibility of YGS in the treatment of lower-risk MDS anemia, as well as its impact on inflammatory factors and iron metabolism in patients with lower-risk MDS. We hope that the combination of Chinese and Western medicine can delay the progression of MDS and improve the prognosis and survival of patients.

Trial status

This study is currently recruiting participants. The protocol version number is 2.0, dated October 08, 2018. Recruitment of participants commenced on March 23, 2020, and is expected to end on December 31, 2021. At the time of manuscript submission, we had recruited 19 patients from three hospitals concurrently; this clinical trial is therefore expected to be completed on time. The funders had no role in the design of the study; the collection, analysis, and interpretation of data; or the writing or approval of the manuscript.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available, owing to the protection of patient privacy, but they are available from the corresponding author upon reasonable request.

Declarations

Ethics approval and consent to participate

The study was approved by the research ethics committee of the Shanghai Baoshan Hospital of Integrated Traditional Chinese and Western Medicine (identifier: 201809-01). All procedures performed in studies involving human participants were carried out in accordance with the ethical standards of the institutional and/or national research ethics committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Written informed consent would be obtained from each participant, with the form provided to patients for signature prior to study enrollment. To encourage adherence to the treatment, all patients would receive the study drug and laboratory tests free of charge during the entire observational period. To protect patient privacy, the original materials are stored in the hospital and cannot be accessed without the consent of the ethics committee; published data would also omit patient names.

Consent for publication

Not applicable.
Self-Tuning at Large (Distances): 4D Description of Runaway Dilaton Capture

We complete here a three-part study (see also arXiv:1506.08095 and 1508.00856) of how codimension-two objects back-react gravitationally with their environment, with particular interest in situations where the transverse 'bulk' is stabilized by the interplay between gravity and flux quantization in a dilaton-Maxwell-Einstein system such as commonly appears in higher-dimensional supergravity and is used in the Supersymmetric Large Extra Dimensions (SLED) program. Such systems enjoy a classical flat direction that can be lifted by interactions with the branes, giving a mass to the would-be modulus that is smaller than the KK scale. We construct the effective low-energy 4D description appropriate below the KK scale once the transverse extra dimensions are integrated out, and show that it reproduces the predictions of the full UV theory for how the vacuum energy and modulus mass depend on the properties of the branes and stabilizing fluxes. In particular we show how this 4D theory learns the news of flux quantization through the existence of a space-filling four-form potential that descends from the higher-dimensional Maxwell field. We find a scalar potential consistent with general constraints, like the runaway dictated by Weinberg's theorem. We show how scale-breaking brane interactions can give this potential minima for which the extra-dimensional size, $\ell$, is exponentially large relative to underlying physics scales, $r_B$, with $\ell^2 = r_B^2 e^{-\varphi}$, where $-\varphi \gg 1$ can be arranged with a small hierarchy between fundamental parameters. We identify circumstances where the potential at the minimum can (but need not) be parametrically suppressed relative to the tensions of the branes, provide a preliminary discussion of the robustness of these results to quantum corrections, and discuss the relation between what we find and earlier papers in the SLED program.

Introduction

In this paper we study the very low-energy dynamics of six-dimensional supergravity interacting with two non-supersymmetric, space-filling, codimension-two branes. Our interest is in situations where the back-reaction of the branes breaks a degeneracy of the bulk system and lifts an otherwise flat direction. As in two earlier papers [1,2] we focus on systems for which the interactions are weak enough to ensure that the energetics lifting this flat direction are amenable to understanding in the effective 4D theory below the Kaluza-Klein (KK) scale. We compute this low-energy potential explicitly within the classical limit, to identify how it depends on the various parameters describing the underlying UV completion.

To this end we study a specific system of branes interacting through the bosonic fields of chiral, gauged six-dimensional supergravity [3]. We use this specific theory for two reasons. First, it is known to admit explicit stabilized extra-dimensional solutions, both without branes [4] and with them [5,6,7,8,9,10,11], for which gravity competes with flux quantization and brane back-reaction to stabilize the extra dimensions. This makes it a good laboratory for studying in detail how interactions amongst branes and fluxes can compete to shape the extra dimensions, while going beyond the restriction to one extra dimension of the well-explored 5D Randall-Sundrum models [12].
In this motivation one wishes to know whether or not it is possible to achieve dynamically stable extra dimensions whose size is an exponentially large function of the not-too-large parameters of the fundamental theory.

Second, this system was proposed some time ago [5,13,14] (and again recently in more detail [15]) as a concrete laboratory in which to explore whether the interplay between supersymmetry and extra dimensions can help resolve the cosmological constant problem [14,16], essentially by having the quantum zero-point fluctuations of the particles we see curve the extra dimensions instead of the four large dimensions explored by cosmologists. In the simplest picture ordinary particles are localized on the 4D branes, so their quantum fluctuations contribute to the brane tensions, while many of the simplest brane solutions [5,6] are flat for any value of the tension. In this motivation the issue is to understand how (and whether) the 4D theory captures this special feature of the extra-dimensional picture, and thereby to understand how robustly (and whether) the effective 4D curvature can be suppressed relative to naive expectations.

In the simplest model [4] flux quantization and gravity drive the system to a supersymmetric ground state with a single flat direction corresponding to a breathing mode, whose origins lie in an accidental scaling symmetry generic to the classical supergravity field equations. Brane back-reaction then typically lifts this degeneracy (and generically breaks supersymmetry), leading to a vacuum configuration whose properties involve a competition between inter-brane forces and flux quantization. Because the energy cost of this lifting is often smaller than the Kaluza-Klein (KK) scale it can be understood purely within the low-energy 4D theory, and a puzzle for these systems has been how this low-energy theory 'knows' about extra-dimensional flux quantization (as it must if it is to properly reproduce the competition with other effects in the 6D UV completion).

An important part of this story is the ability of the branes to carry localized amounts of the stabilizing external magnetic flux [17], through a brane-localized flux term of the form
$$\int_{x_v} \mathcal{A}_v(\phi)\; {}^\star F \,, \qquad (1.1)$$
where the integral is over the 4D brane world-sheet, ${}^\star F$ is the 6D Hodge dual of the 2-form Maxwell field-strength and $\mathcal{A}_v$ is a dilaton-dependent coefficient. This is important because the system often responds to perturbations by moving flux onto and off of the branes, since it is energetically inexpensive to change the value of φ. We use the effective theory that captures the low-energy dynamics of this flux in the higher-dimensional theory, developed in the companion papers [1,2], to work out the effective 4D description provided here, identifying in particular the precise form of the scalar potential that governs the energetics of vacuum determination.

We find the following main results.

• 4D effective description: We describe the low-energy 4D effective theory appropriate for physics below the Kaluza-Klein (KK) scale, within which the extra dimensions themselves are too small to be resolved, and show how this reproduces the dynamics of the known cases where the 6D dynamics is explicitly known. We find that the news of flux quantization comes to the low-energy theory through a space-filling 4-form gauge field, $F_{\mu\nu\lambda\rho}$, whose value satisfies general quantization conditions [18,19] that are ultimately inherited from the higher-dimensional quantization of Maxwell flux.
• Dynamics of modulus stabilization: Most trivially, we verify in more detail earlier claims [17,20,21] that (with two transverse dimensions) brane couplings generically do stabilize the size of the transverse dimensions in supersymmetric models, in a manner similar to Goldberger-Wise stabilization [22] in 5D. They do so because they break the classical scale invariance of the bulk supergravity that prevents the bulk from stabilizing on its own (through e.g. flux stabilization).

• Exponentially large dimensions: We show that simple choices for brane-bulk couplings allow the extra dimensions to be stabilized at a size, $\ell$, that is exponentially large relative to other microscopic scales, $r_B$, in the parameters of the underlying theory: $\ell^2/r_B^2 = e^{-\varphi}$, so $\ell/r_B$ can be enormous if $\varphi$ is only moderately large, say O(10), and negative.

• Connection between brane-dilaton couplings and curvature: As has been known for some time [23] there is a strong connection between the strength of brane-dilaton couplings and on-brane curvatures, with vanishing brane-dilaton couplings implying vanishing on-brane curvatures. More recently [2] (see also [24]) it was found that the absence of dilaton couplings is not as straightforward as demanding dilaton-independence of the brane tension and BLF coefficient, $\mathcal{A}(\phi)$, of (1.1), due to the necessity of holding fixed the Maxwell field far from the brane, rather than at the brane position, when deriving the dilaton dependence of the brane. Complete dilaton-independence of the brane action instead turns out to be equivalent to the condition for scale invariance, despite the presence of the metrics in the Hodge dual of (1.1). Our 4D potential allows us to compute the subdominant size of the curvature as an explicit function of the deviations from scale invariance, and to verify that it reproduces the curvatures found directly within the 6D UV completion.

• Low-energy on-brane curvature: We find that the dynamics of modulus stabilization usually also curves the dimensions along the brane world-sheets, and generically does so by an amount commensurate with their tension, $R \sim G_N T$, where $T$ is the brane tension (defined more precisely below) and $G_N$ is Newton's constant for observers living on the brane. For specific parameter regimes the on-brane curvature can be less than this, however, being parametrically suppressed relative to the tension. In some cases the suppression of $R$ in the near-scale-invariant limit can be regarded as a consequence of the generic runaway present for scale-invariant potentials: weak scale-breaking tends to place minima out at large fields, for which the potential is relatively small. In this way it potentially converts Weinberg's no-go theorem [25] from a bug into a feature.

Although our personal motivation for studying this system is its potential application [5,14] to the cosmological constant problem [14,16,25], the ability to stabilize two transverse dimensions at exponentially large size given only moderately large input parameters potentially puts large-extra-dimensional models [26] on a similar footing to warped Randall-Sundrum models [12].

A road map

We organize our discussion as follows. The following section, §2, describes the 6D system whose 4D physics is of interest, summarizing the main results explained in more detail in [2].
The purpose of doing so is to show how properties of the bulk physics (such as extra-dimensional size and on-brane curvature) are constrained by the field equations, which controls the extent to which they depend on the properties of any source branes. This provides the tools required for matching to the 4D effective theory, relevant to energies below the KK scale. This matching is itself described in §3, which determines the 4D effective theory required to reproduce the dynamics of the full higher-dimensional theory. Next, §4 uses this effective description to explore the implications of several choices of parameters within a class that minimizes the couplings between the brane and the bulk dilaton. In particular we compute here the classical predictions for the modulus mass and vev (and so also the size of the extra dimensions) as well as the on-brane curvature at the minimum. We find examples that produce exponentially large dimensions with parametrically suppressed curvature in the on-brane directions. §4 concludes with a brief discussion of the robustness of the various examples, and surveys some ways that quantum corrections might be expected to complicate the picture. Our conclusions are summarized in a final discussion section, §5.

The higher-dimensional system

We here briefly outline the action and field equations of the UV theory whose low-energy description we wish to capture: the system studied in [2], consisting of a bulk Einstein-Maxwell-dilaton sector that arises as the bosonic part of six-dimensional supergravity, plus two space-filling 3-branes situated within two transverse extra dimensions.

The Bulk

The bulk action, $S_B = S_{EH} + S_\phi + S_A$, is a subset of the action for Nishino-Sezgin supergravity [3], where $\kappa$ denotes the 6D gravitational coupling and $R_{MN}$ the 6D Ricci tensor, while $F_{MN} = \partial_M A_N - \partial_N A_M$ is the field strength for a specific U(1)_R symmetry that does not commute with 6D supersymmetry (with gauge coupling $g_R$); here $S_{EH}$, $S_\phi$ and $S_A$ denote the Einstein-Hilbert, scalar and gauge parts of the action. Notice $S_B$ scales homogeneously, $S_B \to s^2 S_B$, under the rigid rescalings $g_{MN} \to s\, g_{MN}$ and $e^\phi \to s^{-1} e^\phi$, making this a symmetry of the classical equations of motion. Besides ensuring classical scale invariance, this also shows that it is the quantity $e^{2\phi}$ that plays the role of $\hbar$ in counting loops within the bulk part of the theory.

The bulk system enjoys a second useful scaling property: physical properties depend only on $g_R$ through the field-dependent combination $\hat g_R(\phi) = g_R\, e^{\phi/2}$. The value $\phi = 0$ can always be chosen as the present-day vacuum provided the value of $g_R$ is chosen appropriately.

For many purposes it is useful to work with a 4-form field strength, $F_{MNPQ}$, that is dual to $A_{MN}$, in terms of which the bulk action can be rewritten up to a surface term, $L_{st}$ [2,18], that emerges when performing the duality transformation from $A_{(2)}$ to $F_{(4)} = \mathrm{d}V_{(3)}$.

The Branes

We take the brane action to include the first two terms in a derivative expansion at the position of the brane, a tension term, $T_v$, and the brane-localized-flux term, $L_{\zeta_v}$, of eq. (2.4) (with $z^M_v(\sigma)$ denoting the brane position fields). Despite appearances, the localized-flux term, $L_{\zeta_v}$, does not depend on the induced metric, because the explicit dependence cancels against that hidden within the totally antisymmetric 4-tensor, $\epsilon^{\mu\nu\lambda\rho}$, associated with the metric.
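As a quick consistency check of the homogeneous scaling quoted above, here is a brief sketch, assuming the standard dilaton weights of the Salam-Sezgin-type bosonic action ($e^{-\phi}$ multiplying the Maxwell term and $e^{\phi}$ multiplying the scalar potential, as in [3,4]) and using only that in six dimensions $\sqrt{-g} \to s^3 \sqrt{-g}$ while each inverse metric contributes a factor $s^{-1}$:
$$\begin{aligned}
\sqrt{-g}\,R &\;\to\; s^{3}\,s^{-1}\left(\sqrt{-g}\,R\right), &
\sqrt{-g}\;\partial_M\phi\,\partial^M\phi &\;\to\; s^{3}\,s^{-1}\left(\sqrt{-g}\;\partial_M\phi\,\partial^M\phi\right),\\
\sqrt{-g}\;e^{-\phi}F_{MN}F^{MN} &\;\to\; s^{3}\,s\,s^{-2}\left(\sqrt{-g}\;e^{-\phi}F_{MN}F^{MN}\right), &
\sqrt{-g}\;e^{\phi} &\;\to\; s^{3}\,s^{-1}\left(\sqrt{-g}\;e^{\phi}\right),
\end{aligned}$$
so every term acquires the common factor $s^2$, consistent with $S_B \to s^2 S_B$. Choosing $s = e^{\phi}$ for the constant dilaton mode moves all of its dependence into an overall prefactor $e^{-2\phi}$ multiplying the action, which is precisely the statement that $e^{2\phi}$ counts bulk loops.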
Since it turns out that the branes repel one another, their position modes are massive enough to be integrated out in the 4D effective theory, and so we simply assume static branes and choose coordinates so that they are located at opposite ends of the transverse extra dimensions. It is also possible to frame the branes using a more UV-complete theory in which they arise as classical vortex-like solutions (as is done explicitly in [1,2]), though we do not need the details of this explicit extension in what follows.

Bulk geometry and field equations

Our interest is in geometries that are maximally symmetric in 4D (spanned by coordinates $x^\mu$) and axially symmetric in the transverse 2D (spanned by $y^m$) about the positions of two source branes situated at opposite ends of a compact transverse space. We therefore specialize to fields that depend only on the proper distance, ρ, from the points of axial symmetry, and assume the only nonzero components of the gauge field strength, $A_{mn}$, lie in the two transverse directions, so that its dual, $F_{\mu\nu\lambda\rho}$, lies entirely in the space-filling 4D. The metric has the general warped-product form
$$\mathrm{d}s^2 = W^2(\rho)\,\check g_{\mu\nu}(x)\,\mathrm{d}x^\mu \mathrm{d}x^\nu + g_{mn}(y)\,\mathrm{d}y^m \mathrm{d}y^n \,, \qquad (2.5)$$
where $\check g_{\mu\nu}(x)$ is the maximally symmetric metric on d-dimensional de Sitter, Minkowski or anti-de Sitter space. The components of the corresponding 6D Ricci tensor, eqs. (2.6) and (2.7), involve the warp factor $W$ and the 2D covariant derivative $\nabla$ built from $g_{mn}$, with $\check R_{\mu\nu}$ and $R_{mn}$ denoting the Ricci tensors of the metrics $\check g_{\mu\nu}$ and $g_{mn}$. For the axially symmetric 2D metrics of interest we make the coordinate choice
$$g_{mn}\,\mathrm{d}y^m \mathrm{d}y^n = \mathrm{d}\rho^2 + B^2(\rho)\,\mathrm{d}\theta^2 \,. \qquad (2.8)$$

With the assumed symmetries, the nontrivial components of the matter stress-energy are characterized by three quantities, $\varrho = \frac14\, g^{\mu\nu} T_{\mu\nu}$, $\mathcal X = -\frac12\, g^{mn} T_{mn}$ and $\mathcal Z$, each of which can be split into bulk and brane-localized contributions: $\varrho = \varrho_B + \varrho_{\rm loc}$, $\mathcal X = \mathcal X_B + \mathcal X_{\rm loc}$, and so on. The bulk contributions to these quantities follow directly from $S_B$.

Under the above assumptions the field equations simplify to coupled nonlinear ordinary differential equations. Denoting differentiation with respect to proper distance, ρ, by primes, the dilaton field equation takes the form of eq. (2.11), whose brane-dependent source term defines the quantity $\mathcal Y$. Two things are important about $\mathcal Y$: (i) $\mathcal Y$ contains no terms from the bulk lagrangian and so vanishes identically in the absence of the source branes; and (ii) the brane contribution to $\mathcal Y$ vanishes everywhere if and only if the brane lagrangian does not break the scale invariance of the bulk action. Similarly, the three nontrivial components of the trace-reversed bulk Einstein equations reduce to the 4D and 2D trace equations, (2.12) and (2.13), together with the (ρρ) minus (θθ) equation, (2.14). Notice the special feature of codimension-two sources that eq. (2.12), governing the 4D curvature $\check R$, does not depend on the 4D part of the stress-energy, $\varrho$.

Brane stress energies

The integrated localized contributions to the stress energy and to $\mathcal Y$ can be written as sums over each brane of known functions of the brane tension, $T_v$, and localized flux, $\zeta_v$. For instance, the energy density of eq. (2.15) is controlled by $T_v$ together with the metric warp-factor, $W_v$, evaluated at the corresponding brane position. Some of the bulk fields may vanish or diverge at the brane positions, but if so eq. (2.15) shows this can be absorbed into a renormalization of $T_v$ [21,29], as would be expected physically if the value of $T_v$ were to be inferred from a measurement of (say) a defect angle, whose size is governed by the physical energy $\varrho_v$. This is addressed in more detail in Appendix B.
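For orientation, the defect-angle statement just made can be seen directly from the metric choice (2.8); the following is the standard codimension-two relation, quoted up to convention-dependent numerical factors:
$$R_2 = -\,\frac{2B''}{B}\,, \qquad B(\rho) \simeq \Big(1 - \frac{\delta_v}{2\pi}\Big)\,\rho \;\;\hbox{near the brane} \qquad \Longrightarrow \qquad \delta_v \simeq \kappa^2\, T_v \,,$$
so a conical defect angle $\delta_v$ directly measures the physical brane energy, and the weak-response regime $\kappa^2 T_v \ll 1$ discussed below corresponds to a small defect angle.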
Similarly the scale-breaking brane contributions to the dilaton equation are governed by the φ-derivatives of the brane couplings. The brane does not break scale invariance if both the tension and localized flux are independent of φ, so that T ′ v = ζ ′ v = 0. Again, any singularities associated with the vanishing or diverging of fields near the branes can be renormalized into the bulk-brane effective couplings. The off-brane components of the brane stress-energy are somewhat more subtle to obtain since the dependence of the brane action on the extra-dimensional metric is often only given implicitly. In general, however, stress-energy conservation and the equilibrium balancing of stress-energy within any localized brane ensures these are given by the expressions of [1,2], in which the approximation is valid up to terms that are suppressed by at least two powers of the assumed small ratio between the size of the brane and the size of the bulk. Flux quantization The symmetry ansatz requires the 4-form field to be proportional to the 4D volume form, with a coefficient Q independent of the 4 space-filling coordinates. The Bianchi identity, dF = 0, then implies Q also cannot depend on the transverse two coordinates and so is a constant. This constant is the integration constant we would have found if we had explicitly solved the Maxwell field equation for A mn . The value of Q is fixed by flux quantization [2], as in (2.22), where N is the integer measuring the total flux of A M N through the transverse two dimensions, and ζ v is the parameter in (2.4) that measures the amount of this flux that is localized onto the position of the brane. Here, φ v denotes the value of the dilaton at this brane position, and Ω k represents the integral of W k over the transverse dimensions using the scale-invariant metric, ĝ mn := e φ g mn , so the particular case k = 0 gives the extra-dimensional volume, Ω := Ω 0 , as measured by this metric. Boundary conditions The near-source behaviour of the bulk fields is controlled by the properties of the brane sources, and this manifests in boundary conditions that must be satisfied by the bulk fields as they approach the branes. In practice, these boundary conditions can be derived by integrating the field equations over the localized region containing the brane source, as described in more detail in [1,2]. Performing this operation on (2.11), for example, gives the boundary conditions for the dilaton at the positions of the branes, in which the approximation uses (2.20) to identify Y v as the leading contribution to this boundary condition. Similar conditions hold for the transverse stresses, where the suppression of X v and Z v implies they are subdominant to the energy density. Lastly, the boundary condition for the warping in the metric is given by (2.26). Here and above, a v subscript on a bulk field (or its derivative) denotes that this quantity is evaluated at the brane position ρ = ρ v . Control of approximations Because we explore classical behaviour it is important to specify its domain of validity. The fundamental parameters of the problem are the gravitational constant, κ; the gauge coupling, ĝ R (ϕ) = g R e ϕ/2 ; and the size of the brane tensions, T v , and flux-localization parameters, ζ v . In the exact, scale invariant solutions of Appendix A the size of the transverse dimensions, ℓ, can be written in terms of parameters of the lagrangian and the ambient value of the dilaton, ϕ, as ℓ = (κ/2g R ) e −ϕ/2 =: r B e −ϕ/2 . In these solutions, the flux integration constant introduced above is given by Q = 2g R /κ 2 and we use this as a benchmark value when making various estimates.
Weak gravitational response to the energy density of the brane requires κ 2 T v ≪ 1, and this ensures physical observables such as defect angles are small. Similarly, the response to localized flux is controlled by κ 2 Qζ ′ ∼ g R ζ ′ and so requires g R ζ ′ ≪ 1. Since our interest is in the regime where the intrinsic brane width is much smaller than the transverse dimensions we assume throughout ℓ ≫ r V , where ℓ (r V ) is a measure of the extra-dimensional (brane) size. This is accomplished if r V /ℓ ∼ (r V g R /κ) e ϕ/2 ≪ 1, which can usually be ensured by requiring e ϕ ≪ 1 , (2.28) although we discuss below an example where the brane size also depends on the value of the dilaton, thus complicating this argument. Finally, in supergravity semiclassical reasoning also depends on ϕ because it is e 2ϕ that counts loops in the bulk theory. Consequently we also require e ϕ ≪ 1 in order to work semiclassically. Integral relations From the point of view of the low-energy theory, it is the field equations integrated over the extra dimensions that carry the most useful information. Integrating the dilaton field equation, (2.11), over the entire compact transverse dimension gives eq. (2.29). Since integration over the transverse space can be regarded as projecting the field equations onto the zero mode in these directions, (2.29) can be interpreted as the equation that determines the value of the dilaton zero-mode and must agree with what is found by varying the potential of the effective 4D theory obtained in later sections. In the absence of the sources this zero mode is an exact flat direction of the classical equations associated with the scale invariance of the bulk field equations, and the localized contribution to (2.29) expresses how this flat direction becomes fixed when the sources are not scale-invariant. Integrating the trace-reversed Einstein equation over the entire transverse space leads to an expression for the on-brane curvature whose second equality uses (2.30). This again emphasizes that it is the integrated off-source stress-energy, X , that ultimately controls the size of the on-source curvature [23] for generic φ, and that this receives contributions coming from both bulk and brane-localized contributions to the integral. By contrast, using (2.29) to evaluate X , at the specific value of the would-be zero-mode of φ that minimizes its potential, gives a result, (2.32), for the curvature that depends only on brane properties. As we see below, the 6D coupling, κ, is related to its 4D counterpart, κ 4 , by (2.33), once evaluated at the minimum of the potential for the would-be zero-mode. So (2.32) shows that the curvature Ř has a size that is equivalent to what would be obtained from a 4D cosmological constant, U ⋆ , of the size given in (2.34). This explicitly relates the size of the potential at its minimum to the size of scale-breaking on the branes. Orders of magnitude Detailed studies of how the bulk solutions depend on the brane parameters in the UV-complete theories of [2] show several kinds of response are possible. Generic case In the generic situation the scale-breaking quantity Y v is of order the generic size of a gravitational field coming from the brane energy density. When this is true, it follows from the brane constraints in (2.20) that the off-brane stress components X v and Z v are suppressed compared to the naive estimate κ 2 T v [2]. Eq. (2.34) then shows the resulting 4D curvature corresponds to an effective 4D cosmological constant that is generically of order the brane tension. Scale invariant case In the scale invariant case we have T ′ v = ζ ′ v = 0 and so the quantities Y v vanish. Eq.
(2.20) then implies the off-brane components of the brane stress-energies also vanish. Lastly, the vanishing of Y v ensures the same for the 4D curvature: Ř = 0. As we shall see, in the 4D Einstein frame the scalar potential for the dilaton zero-mode, ϕ, turns out to be proportional to X ∝ e 2ϕ , so this vanishing of X and Ř is achieved by having the zero-mode run away to e ϕ → 0 [25]. Decoupling case T ′ v = 0 An intermediate situation is given by the decoupling choice, for which T v is φ-independent but ζ v (φ) is not. In this case Y is not exactly zero, but should be suppressed because Y arises purely from the φ-dependence of a derivatively suppressed term in the brane action, with the last estimate showing how the derivative suppression can be rewritten as a suppression by the size of the extra dimensions. As a consequence we also have suppressions in the off-brane stress-energy components, eq. (2.41), and the effective cosmological constant corresponding to Ř is similarly suppressed. Our goal in the next sections is to reproduce these estimates using a more carefully computed potential for the low-energy 4D effective theory, and to determine the value of the zero mode that ultimately controls the size of these estimates. EFT below the KK scale Consider next the viewpoint of a lower-dimensional observer with access only below the KK scale. In particular we address the following puzzle. We know in the full D-dimensional theory that flux quantization plays a crucial role in determining the d-dimensional curvature that would be seen by any observer below the KK scale [17]. (We know this because it determines Q through (2.22), and this then governs the size of Ľ A = −L F appearing in ̺ and X .) But how is this flux-dependence seen by a lower-dimensional observer who cannot resolve the extra dimensions? The field content naively available in the generic case to the lower-dimensional observer is fairly limited: a massless graviton g µν ; massless gauge bosons, one arising from the higher-dimensional gauge field, A µ , and another, B µ , arising from the metric due to the unbroken axial rotational invariance of the extra dimensions; and the dilaton zero-mode, ϕ, arising due to classical scale-invariance. Although our tale can be told purely using these fields, our interest in practice is in a bulk coming from higher-dimensional supergravity for which additional light particles also exist. The low-energy field content available in 6D within Nishino-Sezgin supergravity [3] also includes the 'model-independent' axion, a, that is dual to the components C µν of the bulk Kalb-Ramond field, as well as the harmonic part of the extra-dimensional components of the same field, C mn . Because the supersymmetry breaking scale in the bulk is also the KK scale these do not appear with superpartners as supermultiplets in the 4D theory. One of these fields, C mn , turns out to Higgs the would-be massless gauge boson, A µ , which then acquires a mass at the KK scale [30]. To understand how flux quantization trickles down to the low-energy EFT it is useful to supplement these fields with the 4-form field, F (4) , that is dual to A (2) . Although this field has trivial dynamics in the low-energy theory, its constant value knows about flux quantization and so can bring the news about it to the lower-dimensional world. Lower-dimensional action With these comments in mind we seek that part of the low-energy 4D EFT describing the dynamics of the 4D metric, g µν (x), the dilaton zero mode, ϕ(x), and the 4-form field strength, F µνλρ .
Because of the appearance of the low-energy scalar we distinguish several important metric frames: the 6D Einstein-frame (EF) metric, g µν , in terms of which the UV theory is formulated; the scale-invariant frame ĝ µν = e ϕ g µν , which does not transform under the classical scaling symmetry of the UV theory; and the 4D Einstein-frame metric, g̃ µν , which must be given by g̃ µν ∝ e −ϕ g µν , since this ensures g̃ µν → s 2 g̃ µν under the scale transformations, as required for the lower-dimensional Einstein-Hilbert term to scale properly. We do not similarly canonically normalize the zero mode kinetic term in 4D because we wish to keep its transformation property under the classical scaling symmetry. For subsequent applications it is important to get right the proportionality constant in (3.1). In particular, we want it to be unity in the present-day vacuum, ϕ = ϕ ⋆ , which we determine below by minimizing the ϕ scalar potential. Having g̃ µν and g µν differ in normalization amounts to a change of units, and so needlessly complicates the dimensional estimate of the size of terms in the low-energy potential. Consequently we use below the following, more precise, version of (3.1): g̃ µν = e −(ϕ−ϕ ⋆ ) g µν . The most general lagrangian for these fields at the two-derivative level can be written as in (3.3), where tildes on upper indices indicate that they are raised using the inverse metric g̃ µν , and ǫ µνλρ is the appropriate volume tensor built from g̃ µν (whose nonzero components are ±(−g̃) −1/2 ). The surface term, L st4 , is required to the extent there are boundaries (including asymptotic infinity) whose behaviour we wish to track [2,18], and uses the definitions introduced above. Notice that the equations of motion for the 3-form gauge potential, ∂ µ ( √ −g̃ Z F F µνλρ ) = 0, imply that evaluating L st4 at a solution gives a nonvanishing result. Combining this with the above, evaluating the gauge part of the 4D action using the 4-form equations of motion therefore gives (3.7). Field equations The field equations obtained from the EF 4D action (3.3) include the field equation for the 3-form gauge potential. Writing F µνλρ = f 4 ǫ µνλρ shows that f 4 is algebraically fixed in terms of an integration constant, K 4 , and couplings in the lagrangian, and because of this F (4) does not describe propagating degrees of freedom. In terms of f 4 we have L F = −(1/2) Z F f 4 2 , so evaluating the action using (3.7) shows that the influence of the 4-form field is to shift the scalar potential of the remaining scalar-tensor theory to U = V 4 − L F . (3.10) The Einstein equations similarly take their standard form, with a stress tensor that includes the 4-form contribution. The traced Einstein equation again shows the effect of the 4-form field is to shift the potential of the scalar field from V 4 to U . Finally, the dilaton equation becomes the corresponding scalar field equation, where primes here denote derivatives with respect to ϕ. This is again consistent with the replacement V 4 → U = V 4 − L F . In this argument it may come as a surprise that L st4 can contribute at all to the field equations for ϕ, given that L st4 is a surface term which therefore should not contribute to the equations of motion. It is indeed true that because L st4 is a surface term it can only contribute to the variation of the action with respect to field variations that are nonzero at the boundaries of spacetime. But when evaluating the ϕ potential we first evaluate the lagrangian (and so in particular L st4 ) at the solution to the V µνλ equation of motion, and this solution necessarily contributes to the surface terms whenever its field strength satisfies F µνλρ = f 4 ǫ µνλρ .
It is for this reason that L st4 contributes to the variation of the action with respect to ϕ if f 4 depends on ϕ and the variation is made after V (3) is eliminated as a function of ϕ. This is why its presence resolves [18] paradoxes that would otherwise arise [33] when handling 4-form fields. Matching Next we try to identify the unknown functions of ϕ in the 4D theory in a way that captures all of the properties of the 6D theory. Since the main focus is on the 4D theory, we adopt in this section (and in the next section) the notation where g µν (x) (rather than ǧ µν ) denotes just the x µ -dependent 4D part of the 6D metric, g M N (x, y), without the warp factors, W 2 (y), in 6D Einstein frame. So (for instance) the 6D line element is ds 2 = W 2 (y) g µν (x) dx µ dx ν + g mn dy m dy n . Form field We first match the 4-form field, since this is what passes the flux-quantization conditions down to the low-energy theory. The 6D dual Maxwell field equation, integrated over the extra dimensions for the geometries of interest, takes a form in which warp factors are written explicitly so that 4D indices are raised (and ǫ µνλκ is built) with the 4D g µν rather than the 6D version. This is to be compared with its 4D counterpart, derived above in 4D EF, where the first equality transforms to 6D EF from 4D EF. Equating coefficients gives (3.18) and (3.19). In the first equality the dilaton evaluated at the brane positions, φ v , is implicitly expressed in terms of the amplitude, ϕ, of the would-be bulk zero-mode. The second, approximate, equality assumes the zero mode u 0 (y) to be y-independent so that φ v = ϕ is the same at the position of all branes. The solution to the 4-form field equation in 4D is given in terms of an integration constant, K 4 . Similarly the solution to the 6D equation, integrated over the transverse space, involves another integration constant, K 6 , and its second equality uses (3.18) and (3.19). Comparing these solutions relates K 4 to K 6 . But in 6D the Bianchi identity [2] also fixes this constant in terms of the flux integer required by 6D flux-quantization, and so brings the news about flux quantization to the lower-dimensional world [18,19]. With this choice L F evaluates in 4D to the combination given in (3.25). Einstein-Hilbert term The 4D Einstein-Hilbert terms dimensionally reduce in the usual way, using √ g 2 = √ ĝ 2 e −φ to express things in terms of the scale-invariant 2D measure, and we absorb the net zero-mode factor, e −ϕ , into the metric when transforming to the 4D EF metric: g̃ µν = e −ϕ g µν (with ∂ϕ terms not written, but handled below). Comparing this with the 4D action gives the ϕ-independent expression (3.27) for the 4D gravitational coupling. Earlier sections remarked on the freedom to shift φ → φ − ϕ ⋆ in the bulk provided one also rescales coupling constants such as g R → g R⋆ = g R e ϕ⋆/2 . Eq. (3.27) reflects this freedom in the following way. If φ = 0 is chosen so that g 2 R ≲ κ, then r B ∼ κ/g R is not particularly large so having a large transverse space requires e ϕ⋆ ≪ 1 so that ℓ = r B e −ϕ⋆/2 ≫ r B . In this case (3.27) shows that it is the explicit factor of e −ϕ⋆ that makes the 4D Planck mass large compared with the 6D Planck mass. On the other hand if φ is shifted so that ϕ ⋆ ≃ 0 then we have g 2 R⋆ ≪ κ and so ℓ 2 ∼ r 2 B⋆ ≫ κ. In this case (3.27) gives a large 4D Planck mass because of the large integration volume, which is of order ℓ 2 rather than order κ.
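This frame-shift freedom is easy to check numerically. The following is a minimal sketch (our own illustration, with hypothetical parameter values) that takes 1/κ 4 2 ∼ (Ω/κ 2 ) e −ϕ⋆ as the schematic content of (3.27), and assumes the Salam-Sezgin-like volume Ω s = πκ 2 /g R 2 quoted in §3.4:

import numpy as np

# Schematic content of (3.27): 1/kappa_4^2 ~ (Omega/kappa^2) * exp(-phi_star),
# with Omega ~ pi kappa^2 / g_R^2 the scale-invariant volume (an assumption here).
def inv_kappa4_sq(kappa, g_R, phi_star):
    Omega = np.pi * kappa**2 / g_R**2
    return (Omega / kappa**2) * np.exp(-phi_star)

kappa, g_R, phi_star = 1.0, 0.1, -10.0
# Picture 1: phi* large and negative, with r_B ~ kappa/g_R modest.
print(inv_kappa4_sq(kappa, g_R, phi_star))
# Picture 2: shift phi so phi* = 0 while rescaling g_R -> g_R e^{phi*/2}.
print(inv_kappa4_sq(kappa, g_R * np.exp(phi_star / 2), 0.0))

Both prints agree, showing how the large 4D Planck mass can be attributed either to the explicit e −ϕ⋆ factor or to a large integration volume, exactly as described above.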
Scalar-tensor properties To determine the scalar potential and kinetic terms we evaluate the 6D actions at the solution of the 2D metric and 4-form equations of motion, but do not use the 4D metric or scalar field equations so that these can be kept free. The starting point in 6D is the 2D integral of the 6D EF lagrangian density. We first evaluate the 4-form field at the solution to its field equations, using a result proven in [2], where we also split the φ kinetic term into its 4D and 2D parts, and use (2.10) and (2.15) to trade the remaining terms for ̺ = ̺ B + ̺ loc . We then eliminate R (2) using the field equation (2.13). In the resulting expressions the combination L F is to be regarded as the function of ϕ and flux quanta given by (3.25). These are to be compared with the 4D action evaluated using only the 4-form field equations, in which we are also to regard L F as the same 4D ϕ-dependent combination. The ϕ kinetic term comes partly from the dimensional reduction of the kinetic term for φ and partly from the kinetic term for the radion, ℓ, in the 6D Einstein-Hilbert action, and gives [30] a contribution in which S = e −2ϕ ∝ ℓ 2 e −φ . The total kinetic contribution is then given by (3.35), and so Z ϕ = 2. The remaining terms determine the scalar potential, which we seek in 4D Einstein frame. On the 6D side we have (3.36). A check on the normalization comes from the 4D Einstein equation, which in Einstein frame states R̃ = −4κ 4 2 U , and this agrees with the above. On the 4D side, earlier sections show how the 4-form effectively shifts the effective 4D EF potential from V 4 to U = V 4 − L F . Comparing with the 6D result then determines V 4 . Although this can be solved for V 4 , this is less useful than directly working with the total effective potential, U . 3.4 Sources of ϕ-dependence within U Eq. (3.36) is one of our main results, since it gives the effective potential whose minimization determines the value of the dilaton zero-mode, ϕ = ϕ ⋆ , and thereby also fixes the size of the extra dimensions, since ℓ 2 = r 2 B e −ϕ⋆ . The value of the potential at this minimum, U (ϕ ⋆ ), also determines the response of the gravitational field implied when ϕ seeks its minimum in this way. To make this ϕ-dependence more explicit it is useful to rewrite (3.36) accordingly. There are four main ways that ϕ enters into this expression. • The explicit overall factor of e 2ϕ . • The ϕ-dependence of the explicit factors of the flux-localization parameter, ξ(ϕ) = Σ v ζ v (ϕ). • The explicit ϕ-dependence of the brane stress-energy parameters, Σ v X v (ϕ). • Some ϕ-dependence potentially enters through the integration volumes Ω k . Because Ω k is scale invariant it contains no explicit factors of ϕ, but there can be a hidden ϕ-dependence because Ω k usually also depends implicitly on T v and ζ v (eg through the defect angle, 1 − α v ∝ κ 2 T v ) and so inherits any ϕ-dependence carried by the brane parameters. We next check several special cases the above potential should reproduce. Scale invariance When neither T v nor ζ v depend on φ the branes preserve the bulk scale-invariance. In this case all of Ω k , T v , X v and ξ are ϕ-independent, so the only dependence on ϕ is the overall factor of e 2ϕ , as would be dictated on general grounds by scale invariance. Although this is always minimized at U = 0, unless the square bracket vanishes this is achieved by a runaway to zero coupling, ϕ → −∞, as required by Weinberg's no-go theorem [25].
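The runaway statement follows from a one-line check (in our notation): for a scale-invariant configuration the potential is a pure exponential,

U (ϕ) = C e 2ϕ ⇒ U ′ (ϕ) = 2C e 2ϕ ≠ 0 for all finite ϕ (when C ≠ 0) ,

so for C > 0 the infimum U = 0 is approached only as ϕ → −∞; no finite-ϕ minimum exists unless something breaks the scale invariance.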
Vanishing U (ϕ) Whenever X v vanishes (such as happens for the BPS vortices of [2]) or is negligible, and the coefficient V 0 of the bulk potential is positive, the quantity e −2ϕ U (ϕ) becomes proportional to a difference of squares and it is simple to enumerate sufficient conditions for it to vanish. In particular, e −2ϕ U can be written as a difference of squares involving Ω 2 := Ω 4 Ω −4 . This clearly vanishes for all ϕ whenever the functions ξ(ϕ) and Ω(ϕ) are related as in (3.41) for all ϕ. When Ω k and Ω are ϕ-independent (which at least requires T v to be independent of ϕ) then (3.41) can only be satisfied for all ϕ if ξ is also ϕ-independent, which implies scale invariance. Salam-Sezgin solution The Salam-Sezgin solution [4] described in Appendix A.1 has no sources and so ξ = T v = X v = 0. It is a supersymmetric solution to 6D supergravity and so V 0 = 2g 2 R /κ 4 and N = ±2π/g R . The solution is unwarped, W = 1, so Ω k = Ω s := πκ 2 /g 2 R for all k. With these choices the scalar potential, (3.42), vanishes identically, as it should, revealing ϕ as the flat direction. Rugby ball solutions We can also investigate the shape of the effective potential when scale invariant branes are added to the system. The rugby-ball solutions presented in Appendix A.2 are generated by identical, scale-invariant, supersymmetric [34] branes, and the potential is expected to vanish in this special case. Explicit solutions are also known when more general scale-invariant branes source the bulk [6], although these solutions generally have bulk fields with nontrivial profiles. We side-step the technical issues associated with nontrivial warping and dilaton profile and treat both cases simultaneously, by assuming that the branes' tension, T , and localized flux, ξ = 2ζ, are small enough that we can linearize about the Salam-Sezgin solution (and so also choose flux quantum N = ±2π/g R ). This assumption allows us to use the linearized scalar potential (B.48) calculated in Appendix B. When specialized to the Salam-Sezgin background around which we are perturbing, it reads as quoted there. In it we have tracked the X v contribution to the potential, but the branes are scale invariant, so this quantity is also suppressed, as in (2.38), and can be neglected. It then follows that the potential vanishes when the branes satisfy a condition that is identical to the supersymmetry condition on the branes [34], as expected. Incidentally, when the branes are UV completed as supersymmetric vortices [2] it is also true that the vortex BPS conditions ensure X v = 0 identically. When the branes are not supersymmetric, the right-hand side reduces to 2T at linear order when ξ = 0, in agreement with the non-SUSY theory [1]. In this case, the resulting potential has the standard runaway form expected for scale-invariant couplings [25]. Self-tuning under scrutiny Now that the tools for computing the dilaton potential are assembled, we can minimize it to explore the size of e ϕ⋆ and U ⋆ = U (ϕ ⋆ ) as functions of the microscopic choices (like T v and ζ v ) that describe the branes. Implications of φ-independent tension We expect special things to happen if we can ensure a small φ derivative near the branes, since we know the curvature vanishes exactly if φ ′ vanishes at both branes [2,23]. This suggests choosing the brane lagrangian to depend as weakly as possible on φ.
The simplest choice is to demand complete φ-independence for both T v and ζ v for all branes, but although it is true that this leads to solutions with R = 0 it also implies scale invariance, and the results of the previous section confirm that flat curvature in this case is found by having ϕ run away to infinity (thereby not breaking scale invariance) [25]. Consequently in this section we instead choose φ-independence just for the leading term, T v , in the hopes that the resulting curvatures can be suppressed. In this case only two sources of ϕ-dependence remain in U : the overall factor of e 2ϕ and any dependence arising within ξ(φ) = Σ v ζ v (φ). (The latter of these includes both the explicit ξ-dependence and any implicit dependence of Ω k on ξ.) Because the branes break scale invariance we expect the flat direction for ϕ to be lifted and the dynamics to choose an energetically preferred value, ϕ ⋆ . Furthermore, since the lifting comes from ξ, which arises only from the derivatively once-suppressed localized-flux term, we expect Y v and direct brane contributions to the potential like X v to be KK-suppressed, as argued in more detail in [2]. This leaves the bulk contribution to U , but because of (2.34) this is also expected to be suppressed once ϕ adjusts to approach the value ϕ ⋆ . What we do here that [2] did not do was compute the shape of U explicitly and minimize it to determine ϕ ⋆ and U ⋆ = U (ϕ ⋆ ), thereby showing in detail how direct brane contributions to U compete with the interference the branes cause in the cancelations among the bulk terms in U . Because ℓ ∝ e −ϕ⋆/2 in the vacuum this calculation of ϕ ⋆ also computes the size of the extra dimensions, and we seek solutions with a large hierarchy between the brane size and the size of the transverse dimensions: ℓ ≫ r V . It is only for such solutions that the above arguments would suggest any suppression in U ⋆ . Consequences of ∂T /∂φ = 0 For these reasons our main interest is in situations where T is ϕ-independent but ζ v = ζ v (ϕ). We next argue that this ensures the contribution of X v to U becomes negligible. Ultimately, it is the derivative suppression of ζ within the brane action in (2.18) that suppresses X v in the potential. For instance, neglecting any ϕ-dependence in Ω gives the first of the estimates used below. Inserting this information into U then shows that the contribution of X v may be dropped relative to the (N + ξ) 2 / Ω −4 term whenever (κξ ′ ) 2 ≪ 2π Ω −4 , as is true when the extra dimensions are much larger than the microscopic sizes determining κ and ξ ′ . As before, for supergravity we have V 0 = 2g 2 R /κ 4 and as argued above the only ϕ dependence enters through ξ and the overall factor of e 2ϕ dictated by scaling, making the scalar potential in the 4D theory the one given in (4.3), where Ω 2 := Ω 4 Ω −4 and in the first line we write Ω k (ϕ) to emphasize that the volumes can also depend on ϕ through ξ. General features Broadly speaking the potential described above has the form U (ϕ) = F (ϕ) e 2ϕ , and so its extrema, ϕ ⋆ , make the derivative U ′ = [F ′ + 2F ] e 2ϕ vanish. Our interest is in minima, so we demand the second derivative be positive. There are two classes of solution: 1. The runaway: ϕ ⋆ = ϕ ∞ = −∞, with e ϕ⋆ = 0 and so U ⋆ = U ′′ ⋆ = 0; and 2. Any nontrivial solutions to F ′ (ϕ ⋆ ) + 2F (ϕ ⋆ ) = 0. Evaluated at any of these latter extrema we have U ⋆ = F (ϕ ⋆ ) e 2ϕ⋆ and U ′′ ⋆ = [F ′′ (ϕ ⋆ ) − 4F (ϕ ⋆ )] e 2ϕ⋆ . Control of approximations requires we check that at any such a minimum e ϕ⋆ is small enough to justify our semiclassical analysis.
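These statements are quick to verify symbolically; here is a minimal sketch (our own cross-check, assuming only the form U = F(ϕ) e 2ϕ ):

import sympy as sp

phi = sp.symbols('phi', real=True)
F = sp.Function('F')
U = F(phi) * sp.exp(2 * phi)

# Stationarity: U' = [F' + 2F] e^{2 phi}, so non-runaway extrema obey F' + 2F = 0.
print(sp.simplify(sp.diff(U, phi) * sp.exp(-2 * phi)))

# Curvature of U at such an extremum, eliminating F' via F' -> -2F there:
U2 = sp.expand(sp.diff(U, phi, 2) * sp.exp(-2 * phi))
print(U2.subs(sp.Derivative(F(phi), phi), -2 * F(phi)))   # -> F'' - 4F

The second output reproduces the expression for U ′′ ⋆ quoted above.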
Our main interest is in the non-runaway minima, and for these notice that using (4.3) to infer F and neglecting the ϕ-dependence of Ω when differentiating the result gives an expression for U ⋆ that agrees with the estimate of (4.1). This shows in a more pedestrian way how the low-energy theory knows of the higher-dimensional connection between U ⋆ and Y . Of particular interest is how specific choices for ζ v (and so also ξ = Σ v ζ v ) influence the shape of F (ϕ), and through this the values of ϕ ⋆ and U ⋆ . We seek to arrange two things: (i) that −ϕ ⋆ be moderately large (to achieve large extra dimensions, given ℓ ∝ e −ϕ⋆/2 ); and (ii) that U ⋆ be suppressed below the generic brane scale T v (as required to make progress on the cosmological constant problem if ordinary particles are localized on the branes and so contribute their vacuum energies as corrections to the corresponding brane tension). One way to achieve these ends would be to arrange F (ϕ) = F 0 F(ǫϕ), where ǫ is a moderately small dimensionless parameter and F 0 is a very small energy density. In this case the linearity of (4.5) ensures the value of ϕ ⋆ does not depend on F 0 at all, and if F(x) contains only order-unity parameters we expect to find |ϕ ⋆ | ∼ O(1/ǫ). Having ϕ ⋆ ∼ −75 would ensure e −ϕ⋆/2 ∼ 10 16 , adequate even for models with very large extra dimensions [5,26]. The question is whether there is enough freedom available in ξ(ϕ) to arrange both of these conditions, and if so whether the choices made can be technically natural. The next sections explore this question by choosing ξ = µf (ϕ) for several simple choices of f , where µ is a mass scale that can be adjusted independently from the scale in T v . Although we find no obstruction in principle to being able to obtain both large ϕ ⋆ and small U ⋆ , the simple examples we explore so far each only appear to accomplish one or the other and not both simultaneously. Perturbative solutions As argued in §3.4, there are several values of ϕ for which we know U (ϕ) must vanish. One of these is the limit ϕ → −∞, for which U → 0 because of its exponential prefactor. The second case where we know U = 0 is when ϕ = ϕ s is such that ξ(ϕ) happens by accident to pass through a point where its value agrees with the supersymmetric limit for the given tension. (As shown in the Appendix, at the linearized level this occurs for any ϕ s satisfying g R ξ(ϕ s ) = ∓κ 2 T , if the two branes share equal tensions.) Whenever this occurs Q also takes its supersymmetric value, which ensures Ř = 0 (and so U = 0). The significance of such a zero is that it guarantees the existence of at least one maximum or a minimum for U in the range −∞ < ϕ < ϕ s . (A similar conclusion is also possible for any interval between two distinct solutions to g R ξ(ϕ s ) = −κ 2 T , should more than one of these exist.) If this extremum is sufficiently close either to ϕ ∞ or to ϕ s then we can analyze the shape of the potential by perturbing around the situation where U vanishes. To that end let us write the brane properties as T v = T 0 + δT v and ζ v = ζ 0 + δζ v , where T 0 and ζ 0 define a supersymmetric configuration for which g R ξ 0 = g R ξ(ϕ s ) = 2g R ζ 0 = ∓κ 2 T 0 . Then the unperturbed potential vanishes, U 0 = 0, and deviations from this can be computed perturbatively in δT v and δζ v . There are two naturally occurring small parameters with which to linearize, κ 2 δT ≪ 1 and g R δξ(ϕ) ≪ 1, whose relative size is a knob we get to dial.
Both of these are small to the extent that the bulk is only weakly perturbed by the source branes. This leads to a potential of a generic linearized form in which y(ϕ) := g R δξ/2π ≪ 1, and the linearized calculation of the Appendix, culminating in (B.48), shows how the coefficients A and B of this form are given in terms of δT avg := (1/2) Σ v δT v . At non-runaway solutions, U ′ ⋆ = U ′ (ϕ ⋆ ) = 0, we have the relations used in the cases that follow. We now describe several types of extrema that such a potential generically possesses. In each case we do not propose an explicit form for δξ for all ϕ (and so also do not compute the potential U for all ϕ), but instead investigate its structure near the extrema of U subject to various assumptions about how δξ varies in this region. As a result we do not in these first examples try to compute the value of ϕ ⋆ from first principles, but only its difference from the position, ϕ r , of a nearby reference point (such as a zero of U or a minimum of ξ(ϕ), etc.). We solve for all quantities in terms of the reference point, ϕ r , and comment on the size of U ⋆ , the KK scale, ℓ, and the zero-mode mass, m ϕ , at the minimum. Case I: Near a zero of U Consider first the simplest situation where δξ depends very weakly on ϕ so we may Taylor expand ξ about the point ϕ = ϕ s where U vanishes, and we assume |g R µ/2π| ≪ 1, with µ measuring the slope of δξ there. The potential near ϕ = ϕ s then has the approximately linear form U ≈ b (ϕ − ϕ s ) e 2ϕ , where b ≃ 2g R µ/κ 2 . Extrema are determined by the vanishing of U ′ = b [1 + 2(ϕ − ϕ s )] e 2ϕ , and so for finite ϕ ⋆ this implies ϕ ⋆ = ϕ s − 1/2. The condition g R µ/2π ≪ 1 ensures that |y ⋆ | ≪ 1 at this point, justifying our perturbative analysis of the extremum. The corresponding physical KK scale is ℓ = (κ/2g R ) e −ϕ⋆/2 . In agreement with [17,20], the breaking of scale-invariance by the branes allows their back-reaction to stabilize the size of the extra-dimensions, in a 6D version of the Goldberger-Wise [22] mechanism in 5D. The stabilized size of the extra dimensions is exponentially large compared to the microscopic scale r B to the extent that ϕ s is large and negative. The full linearization of the 6D system for this example is also given in Appendix C, including a discussion of the warping and dilaton profile generated by the bulk response to the brane perturbations, and of the renormalizations of brane couplings that these require. Later examples also provide concrete cases for which the value of ϕ s can be computed in terms of brane properties, and briefly discuss choices that can make ϕ s large and negative. At this extremum we have U ⋆ = −(b/2) e 2ϕ⋆ . We see we have a local minimum (maximum) between ϕ = ϕ s and ϕ → −∞ when b ∝ g R µ is positive (negative), for which U ⋆ is negative (positive). Keeping in mind the normalization of the ϕ kinetic term in the 4D theory we see the classical prediction for its mass at this minimum follows from U ′′ ⋆ . Since generically W −2 is of order the KK volume we see m ϕ is suppressed below the KK scale by the small factor g R µ/2π, justifying its calculation in the 4D EFT. This same factor provides the suppression of U ⋆ relative to the 6D Planck scale, and as a result m 2 ϕ ∼ |U ⋆ |/M 2 p . We return below to a discussion of the robustness of such predictions to quantum corrections. Case II: Near a minimum of ξ Consider next a situation where ϕ = ϕ m is a local minimum of ξ(ϕ), and where ξ m = ξ(ϕ m ) is not a point where U vanishes. In this case we expand ξ in powers of ϕ − ϕ m to write T v = T 0 + δT v and ξ = ξ 0 + µ(ϕ − ϕ m ) 2 . Here T 0 is chosen so that g R ξ 0 = −κ 2 T 0 (and we choose N = +1) so that it is δT avg = (1/2) Σ v δT v that controls the value of U at ϕ = ϕ m .
To justify the perturbative analysis we assume the resulting δT satisfies |κ 2 δT | ≪ 1 and |g R µ/2π| ≪ 1. With these choices we then have the expansion (4.20) and the potential becomes U ≈ [a + b (ϕ − ϕ m ) 2 ] e 2ϕ , where a ≃ Σ v δT v = 2 δT avg and b ≃ 2g R µ/κ 2 . Their dimensionless ratio is a free parameter. The extrema, ϕ ⋆ , are determined by the vanishing of U ′ , and so the non-runaway solutions satisfy ϕ ⋆ − ϕ m = −(1/2) [1 ∓ (1 − 4a/b) 1/2 ]. Reality of this root requires 4a/b ≤ 1 and so 4κ 2 δT ≤ g R µ. If |a/b| ≪ 1 the roots take the approximate forms ϕ ⋆ − ϕ m ≈ −a/b and ϕ ⋆ − ϕ m ≈ −1 + a/b, and if a/b is large and negative they become ϕ ⋆ − ϕ m ≈ ±|a/b| 1/2 . Because ϕ ⋆− approaches ϕ m as a/b → 0, perturbation theory also justifies the expansion of δξ in powers of ϕ − ϕ m for this root. It may nonetheless be justified in any case for the other roots if it happens that δξ remains quadratic out to sufficiently large ϕ − ϕ m , and that y remains small for all of this range. There are two parameter regimes of interest. The first is |4a/b| ≪ 1 and for this choice ϕ ⋆ − ϕ m has the same order of magnitude as a/b ∼ κ 2 δT /g R µ. As before, the corresponding physical KK scale is ℓ = (κ/2g R ) e −ϕ⋆/2 , and because ϕ ⋆ − ϕ m is at most order unity, having this be large compared to microscopic scales requires ϕ m large and negative. At the extremum ϕ ⋆ − ϕ m ≈ −a/b we have U ′′ ⋆ ≈ 2b e 2ϕ⋆ . We see that this is a local minimum when b ∝ g R µ is positive (ie whenever ϕ = ϕ m was a minimum for δξ). Furthermore the back-reaction with the bulk drags the value of ϕ ⋆ to be smaller (larger) than the minimum of δξ depending on whether a ≃ 2δT avg is positive (negative). The value of the potential at this point is U ⋆ ≈ 2δT avg e 2ϕ⋆ and so is unsuppressed relative to (and shares the same sign as) δT avg . At this minimum the classical prediction for the would-be zero-mode mass is driven by its potential on the brane, and so is below the KK scale because g R µ ≪ 1. Another interesting parameter range enumerated above takes a/b large and negative. In this case ϕ ⋆ − ϕ m ≈ ±|a/b| 1/2 , provided the quadratic form for δξ applies for fields this large. Notice that y ⋆ ∼ g R µ(ϕ ⋆ − ϕ m ) 2 ∼ a ∼ κ 2 δT avg remains small. Of special interest in this case is where |a/b| dominates ϕ m , since this could explain why ϕ ⋆ is also large and negative (and so why ℓ ∝ e −ϕ⋆/2 could be potentially enormous without needing to explain the size of ϕ m ). At this extremum the size of the potential is |U ⋆ | ≈ |ab| 1/2 e 2ϕ⋆ , which is suppressed relative to the tension scale, a = 2δT avg , by the assumed small quantity |b/a| 1/2 ≃ |g R µ/κ 2 δT avg | 1/2 ≪ 1. Similarly, (4.32) gives U ′′ ⋆ , and the concavity of the potential is once again controlled by b, with b < 0 (and so a > 0) giving a minimum at large negative values of ϕ ⋆ . The classical mass of the would-be zero mode at this minimum again follows from U ′′ ⋆ , and lies below the KK scale because |g R µ| ≪ κ 2 δT ≪ 1 by assumption. Although this gives large dimensions or small U ⋆ , it does not provide a phenomenologically viable value for both simultaneously, inasmuch as a large-volume value like ϕ ⋆ ∼ −75 only provides a moderate suppression of U ⋆ relative to tension scales. The extension of this example to a perturbation in the full 6D theory is also given in Appendix C, including a discussion of brane renormalization. Case III: Near a singular point of ξ The previous examples assume ξ varies smoothly with ϕ, so we next consider a singularity in ξ at ϕ = ϕ c . Singularities can arise in low-energy actions at places in field space where the low-energy approximation fails, such as places where integrated-out species of particles become massless.
For purposes of illustration we consider a branch point of the form ξ = ξ 0 + δξ with δξ = µ(ϕ − ϕ c ) η , with η an arbitrary exponent. The case η near zero is particularly interesting because this profits by being near the scale-invariant case η = 0. As above, we write T v = T 0 + δT v and dial T 0 so that it is related to ξ 0 by g R ξ 0 = −κ 2 T 0 . The potential becomes U ≈ [a + b (ϕ − ϕ c ) η ] e 2ϕ , where the ratio between a ≃ Σ v δT v = 2 δT avg and b ≃ 2g R µ/κ 2 is again a dial we can exploit. Assuming 0 < η < 1 the extrema are determined by the vanishing of U ′ , which has solutions in the regime |ϕ − ϕ c | ≫ 1 of the form |ϕ ⋆ − ϕ c | ≈ |a/b| 1/η , which for small η is large even if a/b = κ 2 δT avg /g R µ is only moderately large and negative. (For instance choosing η = 1/3 and κ 2 δT avg ∼ −4g R µ gives ϕ ⋆ − ϕ c ≃ −64.) At this point U ⋆ is correspondingly suppressed. Small η has the virtue of amplifying both the size of ϕ ⋆ and the suppression of U ⋆ , although not in a way that seems phenomenologically viable for both at the same time. Case IV: Exponential ξ Next consider an example whose solutions are perturbatively close to the asymptotic runaway. This example is similar to the scaling case examined for the UV vortex completion in [2]: here T = T 0 + δT and ξ = ξ 0 + µ e sϕ , with g R ξ 0 = −κ 2 T 0 , and our main interest is in s not far from zero. In this case y = (g R µ/2π) e sϕ so y ′ = sy, and U = [a + b e sϕ + · · ·] e 2(ϕ−ϕ⋆) , (4.38) with a ≃ 2δT and b ≃ 2g R µ/κ 2 . Then the non-runaway solutions to U ′ = 0 satisfy 2a + (2 + s) b e sϕ⋆ + · · · = 0. If e sϕ⋆ is small enough to drop all but the first two terms we have e sϕ⋆ ≈ −2a/[(2 + s) b], which requires a and b to have opposite signs. The value of U at the extremum is U ⋆ = a s/(2 + s). The factor of s found in U ⋆ can be understood because when s → 0 the potential becomes scale-invariant and so must then be minimized at U ⋆ = 0 with ϕ ⋆ → −∞. If s > 0 then having small e ϕ⋆ means we must also have |a| ≪ |b| (which corresponds to κ 2 |δT | ≪ |g R µ| ≪ 1). In this case U asymptotes to zero as ϕ → −∞ from below (above) if a is negative (positive), so the extremum is a minimum if a < 0 and b > 0 (ie when g R µ > 0 and δT < 0) in which case U ⋆ < 0. Conversely, if s < 0 then having small e ϕ⋆ means we instead must have |a| ≫ |b| (and so |g R µ| ≪ κ 2 |δT | ≪ 1), and in this case it is for b < 0 and a > 0 (ie for g R µ < 0 and δT > 0) that the above root is a minimum. Writing s = −σ, the extremum condition becomes e −σϕ⋆ ≈ −2a/[(2 − σ) b], which again requires µ and δT to have opposite signs. The value of U at the extremum is U ⋆ = −a σ/(2 − σ), which is again negative and of order σδT . To be much smaller than δT we would need σ ≪ 1. The corresponding physical KK scale is ℓ = (κ/2g R ) e −ϕ⋆/2 , and the classical prediction for the mass of the would-be zero mode is given in (4.45). The minimum found above is most interesting when |s| ≪ 1, for two reasons. First, small s ensures that e ϕ⋆ can be extremely small even if κ 2 δT , g R µ and their ratio are only moderately small. For example, taking κ 2 δT ∼ 0.3 and g R µ ∼ −0.0003 gives κ 2 δT /g R µ ∼ −10 3 and so s = −σ ∼ −0.1 gives the enormous hierarchy e ϕ⋆/2 ≃ r B /ℓ ∼ 10 −15 , appropriate to a picture with micron-sized extra dimensions [5,26] when the bulk is controlled by TeV scale physics. Such large radii arise because the choice 0 < |s| ≪ 1 makes the setup close to scale-invariant, and so the potential in this limit is close to its runaway form, U ∼ U 0 e 2ϕ . The small scale-breaking parameters then give a weak ϕ-dependence to the prefactor U 0 , creating a minimum out at large negative ϕ. The minimum occurs at large −ϕ precisely because of the potential's close-to-runaway form.
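These estimates are easy to check numerically. Here is a minimal sketch (our own verification, with κ set to 1 and the illustrative numbers just quoted):

import numpy as np

# Case IV toy potential U(phi) = (a + b e^{s phi}) e^{2 phi}, with kappa = 1 and
# the quoted numbers: kappa^2 dT ~ 0.3, g_R mu ~ -3e-4, s = -0.1.
a, b, s = 2 * 0.3, 2 * (-3e-4), -0.1      # a ~ 2 dT, b ~ 2 g_R mu / kappa^2

def U(phi):
    return (a + b * np.exp(s * phi)) * np.exp(2 * phi)

# Analytic stationary point: U' = 0 gives e^{s phi*} = -2a / ((2 + s) b).
phi_star = np.log(-2 * a / ((2 + s) * b)) / s
print(phi_star)                   # ~ -70: a huge -phi* from moderately small inputs
print(np.exp(phi_star / 2))       # r_B/ell ~ 1e-15, the hierarchy quoted above
print(a * s / (2 + s))            # U* prefactor = a s/(2+s): negative and s-suppressed

# Brute-force confirmation that this really is the minimum:
grid = np.linspace(-100.0, -40.0, 200001)
print(grid[np.argmin(U(grid))])   # agrees with phi_star to grid resolution

The grid search lands on the analytic ϕ ⋆ ≈ −70, with e ϕ⋆/2 ∼ 10 −15 and a negative, s-suppressed U ⋆ , as claimed.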
Small |s| is also interesting because of the suppression implied by (4.41) for the value of U ⋆ . As mentioned earlier, this suppression arises generically because the system becomes classically scale invariant in the s → 0 limit, and so U ⋆ must vanish in this limit. Effectively this converts Weinberg's runaway no-go from a bug to a feature, with weak scale-breaking driving U ⋆ to be small precisely because the minimum gets driven out to infinity in the scale-invariant limit. As before, however, although both large ℓ and small U ⋆ are possible, no one choice of parameters gets both right at the same time (without very precise tuning to make |a/b| extremely close to unity). When large ϕ does not imply large dimensions Equating large negative ϕ ⋆ to a large hierarchy between KK size, ℓ, and brane size, r V , (as done in the previous examples) implicitly makes an assumption about the ϕ-dependence of r V . The issue is whether or not obtaining large e −ϕ⋆ , as in (4.44), is sufficient to imply a large hierarchy between ℓ and the transverse brane size, r V . It need not be, depending on the other microscopic details that determine r V . In particular it depends on how the brane size itself depends on ϕ ⋆ . For instance, the UV completion considered in [2] provides an example where the connection between large ϕ ⋆ and large ℓ/r V can fail. In this example the branes are resolved in the UV as Nielsen-Olesen vortices [36] with tension, T v ≃ v 2 , set by a scalar vev, v, and brane-localized flux, ξ ≃ (2πnε/e) e sϕ , set by a dimensionless mixing parameter, ε, an integer, n, and a gauge coupling, e. In this UV completion the physical size of the vortex is r V −1 ≃ v ê(ϕ) = ev exp[(1/2)(1 + 2s)ϕ], which turns out to inherit a dependence on ϕ from the effective coupling ê(ϕ). Consequently r V /ℓ can be related to the tension and localized flux in a way that holds for any choice of s or ϕ. What is important here is that r V /ℓ is ϕ-independent when expressed in terms of the parameters, T and ζ, appearing in the brane effective lagrangian, since these are the combinations that are relevant to the long-distance physics governing the size of ℓ. As a result, in this particular model it doesn't matter how large ϕ ⋆ is when predicting r V /ℓ. Notice that this line of reasoning relies on all of ξ depending on ϕ in the same way, rather than there being several contributions involving different scales and depending differently on ϕ. This is why it does not also apply to the previous examples, for which ξ = ξ 0 + δξ(ϕ). Scenarios of scale Before turning to the robustness of the above examples it is useful to have some idea in mind for the mass scales appearing in all sectors of the theory. This is important when estimating quantum corrections in particular, since for naturalness problems the heaviest scales are usually the most dangerous. We also imagine at least one brane lagrangian being modified to include brane-localized particles, including the known Standard Model (SM) particles. There are several mass scales potentially in play: the inverse brane width, M ∼ 1/r V ; the SM electroweak scale, m; and the scale set by bulk couplings, κ −1/2 and g −1 R . Without loss we may shift ϕ in the bulk so that ϕ = 0 corresponds to g −1 R ∼ κ −1/2 ∼ M g defining the same scale. The effective bulk gauge coupling, g ⋆ = g R e ϕ⋆/2 , and the KK scale, m KK ∼ 1/ℓ ∼ g ⋆ /κ = (g R /κ) e ϕ⋆/2 , are then computed from these once the dilaton is stabilized at ϕ = ϕ ⋆ . We assume a hierarchy amongst these scales and ask how loops might depend on them.
It is also useful to imagine the UV completion of the brane eventually becomes supersymmetric at high enough energies, since this is likely necessary to deal with naturalness at the highest scales possible. This could happen at the string scale if the brane UV completes as an object within string theory, or it could happen above or below the scale M if the branes UV complete as vortices in a higher-dimensional field theory. For concreteness we consider the vortex completion, since the extension to string theory of the system used here remains an open question [37]. Since our goal is to explore extra-dimensional approaches to the hierarchy problem, we always take the brane SUSY-breaking scale, M s , much larger than electroweak scales: M s ≫ m. If we choose M s ≪ M then the vortex sector would be supersymmetric (in that it would preserve at most half of the supersymmetries of the bulk [40]) with the branes likely arising as BPS solutions. Until distorted by supersymmetry-breaking effects (if any) we would then expect the largest contributions to T and ζ to be φ-independent, with T = T s ∼ O(M 4 ). The supersymmetry breaking such branes generically imply for the bulk sector is then minimized if the branes carry the supersymmetric amount of flux [34], so we take κ 2 T s = ±(1/2) g R ζ s . This implies ζ s ∼ κ 2 T s /g R ∼ M 4 /M 3 g ≪ M in magnitude. These assumptions ensure a flat potential, U = 0, for ϕ and allow supersymmetry to protect this shape from scales higher than M s , leaving nontrivial corrections to the low-energy theory (where we can try to estimate them). We expect nonzero δT = T (ϕ) − T s and δζ = ζ(ϕ) − ζ s once effects of the SUSY-breaking brane sector are included. This includes but need not be limited to the SM sector (which is assumed to be localized to one of the branes/vortices). On dimensional grounds, if SUSY breaks on the branes with scale M s such that m ≪ M s ≪ M we expect the dominant deviations from the supersymmetric limit to be of order δT (ϕ) ∼ M 4 s and δζ(ϕ) ∼ M s . If the supersymmetry breaking physics respects the bulk scale invariance then δT and δζ remain ϕ-independent; otherwise not. Suppose the supersymmetry-breaking sector does break scale invariance but only through the localized flux term as examined above, so T = T s + δT with δT ∼ M 4 s and ζ = ζ s + δζ with δζ(ϕ) ∼ M s f (ϕ), for some function f (ϕ), although the precise form for f is not yet crucial. Assuming M s ≫ M 4 /M 3 g then there should exist a value, ϕ = ϕ s , for which U (ϕ s ) = 0 because ζ(ϕ s ) accidentally takes the supersymmetric value corresponding to T = T s + δT . We imagine the value, ϕ s , where this occurs to be moderately large (of order −75 or so in the extreme case of very large dimensions). This scenario fits very cleanly into the class of models for which the perturbative methods explored earlier apply, with y(ϕ) = g R δξ/2π ∼ (g R M s /2π) f (ϕ). If f varies slowly enough to be approximated as linear near ϕ s the analysis of earlier sections would predict a minimum with ϕ ⋆ − ϕ s ≃ −1/2, at which point the classical 4D energy density is U ⋆ ∼ −g R M s /κ 2 . Other forms for f (ϕ) would predict different scalings. Finally, loops of Standard Model particles should also contribute to T and ζ and further perturb them away from their supersymmetric relationship, by an amount at least δT SM ∼ m 4 and δζ SM ∼ ǫ m (where ǫ ≲ 1 is a dimensionless measure of the strength with which the SM sector couples to the bulk gauge field).
Even if not supersymmetric, such SM contributions need not contribute any ϕ-dependence if they preserve scale invariance. There are two natural ranges of values to think through, depending on whether our interest is in the electroweak hierarchy (quantum corrections to scalar masses) or the cosmological constant problem (quantum corrections to vacuum energies). We consider each of these briefly in turn. Electroweak Hierarchy For applications to the electroweak hierarchy we ask the extra dimensions to be large and take the large scales all to be of order the electroweak scale, with the minimal hierarchy required for control of approximations. In this case the premium is on predicting the value of ϕ ⋆ from first principles to ensure sufficiently large ℓ/r B using only a relatively modest hierarchy amongst lagrangian parameters, and we are happy to fine-tune away any cosmological constant. This can be done, for example, if the vortex size, r V , is ϕ-independent and controls the supersymmetric brane physics at scale M , and the supersymmetry-breaking brane physics at scale M s generates an exponential δζ ∼ M s e sϕ . Taking for illustrative purposes M g ∼ 50 TeV, M ∼ M s ∼ 5 TeV and m ∼ 100 GeV with s ≃ 0.2 then gives M g ℓ ∼ 10 15 , which is in the ballpark required. Such a dynamical explanation for the exponentially large size of ℓ elevates the large-dimensional models [26] to a footing similar to their warped competitors [12], although this would be more satisfying with a more explicit picture of the SUSY-breaking brane physics, to see how it generates the required ϕ-dependence for ζ and T . The challenge and opportunity in this scenario is to better construct the SUSY breaking physics, partly to see what signals it could imply at the LHC. There is clearly some freedom to dial scales somewhat, though if M s and M g are both taken much larger than the electroweak scale we must again ask what protects the value of the Higgs mass on the brane. Implicit in any such model is that whatever quantum gravity eventually kicks in at M g does not allow the higher scales to feed into the Higgs mass and thereby ruin the naturalness of the low-energy picture. Vacuum Energies Although the ideal situation would be to explain the observed Dark Energy density, it would already be progress on the cosmological constant problem to suppress U ⋆ below the electroweak scale. This requires the classical contribution be smaller than the known quantum effects (usually not hard), while choosing parameters so that the quantum effects themselves can be smaller than the electroweak scale (usually much harder). Because SM loops generate changes to the brane tension, δT ∼ m 4 , the hope here is to find choices that keep these from directly contributing to U ⋆ . A best case in this type of scenario is to imagine that all physics couples to ϕ in the scale-invariant way down to as low an energy (say µ) as possible. If µ ≪ m ≪ M s ≪ M then this implies the UV physics is to first approximation scale invariant though not supersymmetric, so that T and ζ are constants for which κ 2 T and g R ζ are not similar in size. In this case we imagine the scale-invariance breaking at scale µ introduces a ϕ-dependence only to δζ, in such a way that ζ accidentally passes through the supersymmetric point, ζ ∼ ±2κ 2 T /g R , at ϕ s ∼ −75 or so. This ensures the extra dimensions can be very large (best of all would be in the micron range) as desired.
Provided the variation in ϕ is slow enough to justify Case I above, the classical prediction for U ⋆ is negative with magnitude ∼ g R µ/κ 2 . Choosing M g as low as possible (in the 10 TeV regime, say) then gives a suppression of U ⋆ relative to the electroweak scale by a factor of order g R µ. How much suppression depends on how small µ can be, which requires a better theory of the origins of the ϕ-dependence. Since U ⋆ ∼ µM 3 g we see that having |U ⋆ | ≲ (10 −2 eV) 4 and M g ∼ 10 TeV requires fantastically small values like µ ≲ |U ⋆ |/M 3 g ∼ 10 −47 eV. To the extent that useful progress on lowering U ⋆ below the electroweak scale requires scale-invariant couplings of ϕ to ordinary matter, the obstacle is likely to be solar-system constraints on the existence of light Brans-Dicke scalars with gravitational couplings. Robustness As for any approach to naturalness problems the key question concerns robustness of the result. One must check whether conclusions survive the inclusion of subdominant terms in the various approximations being made. Although a full analysis of all of these corrections goes beyond the scope of this article, we make a few preliminary estimates of the size of some of the usual suspects. Potentially fragile choices Assessments of robustness turn on the generality of the choices for parameters in the classical theory. Because it is the branes that are responsible for breaking supersymmetry we might expect that it is choices made for the brane actions in particular that are the most susceptible to perturbations (such as by receiving quantum corrections once these are included). The basic choices used in previous sections concern the magnitude and φ-dependence of the brane action, parameterized by the small dimensionless quantities κ 2 T (φ) and g R ζ(φ) for each of the branes. In particular the previous sections make two non-generic assumptions about the brane action: • We choose no φ-dependence for T but allow φ-dependence for ζ; • We dial freely the relative magnitudes of κ 2 T and g R ζ. It is the sensitivity of these choices to quantum corrections on which we focus. Some quantum estimates UV sensitive quantum corrections in this type of model come in two broad classes: quantum corrections to the bulk lagrangian due to loops of bulk fields; and quantum corrections to the brane lagrangians due to loops of fields on the brane and loops involving bulk fields located close to the brane. In both cases it is loops of the most massive particles that are potentially the most dangerous. Corrections to the Bulk Sector Loops within the supergravity describing the bulk have been studied in some detail [34,38,39], and although loops of individual massive states do renormalize all terms in the bulk and brane lagrangians their contributions to the bulk lagrangian tend to cancel once summed over 6D supermultiplets [39]. The only bulk renormalizations that survive these cancelations are renormalizations of those interactions allowed by bulk supersymmetry, for which we do not make any special requirements. This is as required physically, because UV modes far from the branes effectively do not know that supersymmetry is broken. The UV dangerous renormalizations coming from the supersymmetric sector are those that renormalize the non-supersymmetric brane physics. These should not be dangerous to the extent we do not make special assumptions about the sizes (or the dependence on bulk fields) of couplings like T and ζ in the brane action.
From the point of view of the vacuum energy, the most dangerous renormalizations of the bulk are dimension-four interactions involving curvature-squared terms (and their partners under supersymmetry) since these can acquire renormalizations proportional to the squared mass, M 2 , of the massive bulk supermultiplet [34,38,39]. These can generate contributions to the 4D vacuum energy of order M 2 /ℓ 2 , and so be larger than the 1/ℓ 4 desired to describe Dark Energy in SLED models. But they are generically smaller than the O(M 4 ) contributions described below, and so represent a lesser worry than the brane renormalizations we describe next. The Brane Sector: Bulk Loops Loops of bulk fields involving virtual particles physically near the branes also renormalize the brane lagrangian, as computed in [34,38]. These loops turn out not to be dangerous for our two brane choices, however, for two reasons. The first is that although bulk loops contribute of order M 4 to the brane tension, they do not introduce nontrivial φ-dependence to the tension if this was not already present, because of the underlying scale invariance of the bulk system. Secondly, bulk loops involving massive multiplets that carry gauge charge can also renormalize ζ. But because the correction is of order δζ ∼ g 2 R M 2 ζ [34] it is technically natural (from the point of view of these loops) to choose ζ to be small. The Brane Sector: Brane Loops Massive fields localized on the branes are among the most dangerous (and arguably the most difficult to understand) from the point of view of naturalness, because these fields can be heavy and are not constrained by supersymmetry (at least at scales below M s ). In principle these include loops of familiar SM fields that are the origin of the cosmological constant problem in the first place. Integrating out such particles of mass M generically renormalizes the brane tension by an amount of order M 4 , so we run into naturalness problems as soon as we must demand δT be smaller than this. For applications to the cosmological constant problem this is why all contributions to U ⋆ of order δT are not regarded as being progress. In general such loop contributions to T could also play a role by introducing nontrivial φ-dependence, although this can be protected against by demanding that the couplings of the brane matter preserve scale invariance. For SM fields this is trouble to the extent that it gives them Brans-Dicke couplings [41] to the light scalar ϕ of gravitational strength [42], which are ruled out phenomenologically (for sufficiently light ϕ) by PPN solar-system tests of gravity [43]. Of course, mechanisms exist for weakening the couplings of light scalars [35,44], usually by making these couplings ϕ- or environment-dependent or by making the scalar massive enough not to mediate a sufficiently long-range force. Although much model-building could be forgiven if progress could be made on the cosmological constant problem, we regard this to be a real worry whose resolution goes beyond the scope of this (already very long) paper. The same kinds of problems need not be a worry for brane corrections to ζ, however, because these cannot be generated unless the field in the loop already couples to the bulk gauge field. Brane-generated contributions to δζ should be easy to suppress simply by not coupling heavy brane particles to this field.
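To put rough numbers on the hierarchy of dangers just described, here is a minimal sketch (our own illustration, with hypothetical scales) comparing the three vacuum-energy sizes in play:

import numpy as np

# Hypothetical scales, in eV: a 5 TeV bulk supermultiplet and micron-sized
# dimensions with KK scale 1/ell ~ 10^-2 eV.
M = 5e12
ell_inv = 1e-2

print(M**4)                 # brane-loop contribution ~ 6e50 eV^4 (most dangerous)
print(M**2 * ell_inv**2)    # curvature-squared bulk terms ~ 2.5e21 eV^4 (lesser worry)
print(ell_inv**4)           # ~ 1e-8 eV^4, the Dark-Energy-sized target

The ordering M 4 ≫ M 2 /ℓ 2 ≫ 1/ℓ 4 reproduces the ranking described above, with brane loops the dominant danger.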
Discussion

This paper's aim is to carefully determine how codimension-two objects in 6D supergravity back-react on their environment through their interactions with the bulk metric, Maxwell field and dilaton, and how this back-reaction gets encoded into the effective potential of the low-energy 4D world below the KK scale. To this end, we construct the corresponding four-dimensional effective theory, and show how the flux-quantization conditions of the UV theory are brought to 4D by a four-form gauge flux dual to the Maxwell field. The 4D theory generically contains a light scalar dilaton to the extent that the branes do not strongly break the classical bulk scale invariance. We identify the scalar potential for this scalar and show at the linearized level that it agrees with what is obtained by explicitly linearizing the higher-dimensional field equations. This calculation in particular corrects some errors in [17], which misidentified some of the boundary conditions associated with the brane-localized flux term.

We confirm the result of [17] that the breaking of scale invariance by the branes can lead to modulus stabilization and allow explicit computation of the extra-dimensional size, in a codimension-two version of the Goldberger-Wise mechanism [22]. We confirm that this size can be exponentially large in the brane couplings. A moderate hierarchy of order 75 amongst the brane couplings can be amplified to produce enormous extra dimensions in this way, thereby fixing a long-standing problem with the use of large extra dimensions to solve the electroweak hierarchy problem. For the particular choice of near scale-invariant couplings we can (but need not) also find some parametric suppression in the value of the on-brane curvature and dilaton mass, although for the examples examined so far this suppression seems fairly weak. We are unable to find simple examples which both generate exponentially large dimensions and suppress the classical vacuum energy (though we are also unable to prove this to be impossible). Although we make preliminary estimates of the size of quantum corrections and the robustness of the parametric suppressions of the potential, we leave a more detailed treatment to later work.

A. Scale invariant solutions

In this appendix we present the details of well-known solutions that exist when the branes are scale invariant. We first describe the Salam-Sezgin solution [4] that applies when there are no branes, and we then show how this solution generalizes to the rugby-ball solution [5] in the case where the branes are identical, scale invariant, and supersymmetric.

A.1 Salam-Sezgin solution

In the absence of branes, it is consistent to assume a trivial warp factor W = 1 and no dilaton profile, φ′ = 0. The second of these conditions is satisfied as long as the source terms in the dilaton field equation vanish, where we eliminate the bulk field strength in terms of Q using A_ρθ = Q B e^φ. This fixes the constant Q, and we conclude that the extra dimensions are spherical with proper radius ℓ_s^2 = r_B^2 e^{−ϕ}. Consistency requires verifying that the flux-quantization condition returns Q = Q_s. To check this we evaluate the quantization integral, where g_A is the gauge coupling of the background gauge field (which in principle could differ from g_R if this field gauges a group other than the R-symmetry for which g_R is the coupling). We see that only the supersymmetric choices g_A = g_R and Q/Q_s = N = ±1 are consistent with φ = ϕ being a finite constant, and because Q = ±Q_s the value ϕ remains undetermined by the field equations.
A.2 Supersymmetric rugby ball

Many of the nice properties of the Salam-Sezgin solution are preserved if identical, scale-invariant, supersymmetric branes are added to the system, with an action in which Σ_v(u) denotes the worldsheet of each brane, parameterized by the four coordinates u^µ. By assumption T and ζ are the same for both branes and independent of the dilaton (as required for the branes not to break the classical bulk scale invariance). These choices are necessary if the branes are not to source gradients of the warp factor or dilaton, making it still consistent to assume W = 1 and φ′ = 0. As before, the condition of constant φ requires the bulk sources in the dilaton field equation to vanish, and so flux quantization must return the same value for Q as in the Salam-Sezgin solution: Q = ±Q_s = ±2g_R/κ^2. This choice of Q also preserves the radius of the extra dimensions, so the rugby-ball metric function is solved by

B = α L_s e^{−ϕ/2} sin(ρ e^{ϕ/2}/L_s) . (A.6)

Note the presence of the constant α in this solution, which physically represents a conical singularity at the poles of the sphere with defect angle δ = 2π(1 − α). This differs from the Salam-Sezgin value, α_s = 1, because the presence of branes modifies the boundary condition (2.25) of the bulk metric function B at the position of the branes, where the sign assumes the derivative is taken in the direction away from the brane and T is the brane's tension. Nonzero defect angles make the bulk resemble a rugby ball rather than a sphere.

The other effect of the branes is to introduce a localized piece of A_ρθ at the brane positions, and this modifies the flux-quantization condition (A.4), with ζ/2π describing the fraction of the total gauge flux that is localized in this way. Evaluating as before shows how the brane-localized flux compensates for the reduction of bulk volume caused by the defect angles. Flux quantization is only consistent with constant φ if it returns Q = ±Q_s. Having source branes can allow this if T and ζ are suitably related, in addition to the bulk conditions g_A = g_R and N = ±1. This brane condition also turns out to be required by demanding that supersymmetry not be broken by the presence of the branes [34], showing how supersymmetry again ensures the value Q = Q_s required for a flat potential that does not determine the value φ = ϕ.

B. Linearized solutions

We now assume that the branes are perturbatively close to the identical, scale-invariant, supersymmetric ones just described. However, the perturbations we consider to the tension and localized flux need not respect scale invariance and can differ at each brane, with the background values T_0 and ζ_0 satisfying (A.10). We track the effects of these perturbations on the bulk fields by solving the entire set of field equations, including the equations for warping, the dilaton profile, and flux quantization, at linear order in the perturbations. When the brane perturbations break scale invariance, we also solve for the stabilized value of the zero mode, ϕ = ϕ_⋆, to linear order. We also calculate the 4D effective potential for ϕ at the linearized level, and show how it reproduces this stabilized value of the zero mode computed with the full 6D theory.

Full field equations

We first present the set of field equations and boundary conditions to be solved. Because of the scale invariance of the unperturbed theory, it is useful to switch to the scale-invariant variables

b := e^{φ/2} B and dσ = e^{φ/2} dρ . (B.2)
With these variables the undifferentiated dilaton only appears in the field equations through scale-breaking terms. Since these terms are by assumption perturbatively small, we can simply replace the dilaton factor φ appearing there with the zero mode ϕ. These variables also simplify the linearization of the scale-invariant terms in the field equations, since the background dilaton solution reads φ̄′ = 0 (where bars denote background quantities). Additionally, the equations simplify if we rewrite the warp factor in terms of a field ω, so that we can perturb around the background solution ω̄ = 0. In these new variables the background of the bulk metric function simplifies to b̄ = ᾱ L̄ sin(σ/L̄), with ᾱ determined by κ^2 T_0 and L̄ = L_s = r_B.

Since our interest is in computing the shape of the zero-mode potential, we also follow Ref. [17] and add a stabilizing current J to the bulk action. Choosing J appropriately allows us to investigate values of ϕ away from the minimum of the potential while still solving all of the field equations. In particular, the equation that would have determined the stabilized value ϕ = ϕ_⋆ is instead read as an equation to be solved for J(ϕ), allowing us to trace the shape of the effective potential for ϕ. The stabilized point ϕ = ϕ_⋆ then corresponds to J = 0.

To solve for the perturbations to the bulk metric function and warp factor, we need two linear combinations of the Einstein equations (2.12)-(2.14) that contain second derivatives of the metric fields and no factors of the 4D curvature. The first of these is to be solved for b. (From here on, primes on bulk fields denote differentiation with respect to σ rather than ρ.) The other relevant Einstein equation is to be solved for ω. In these variables the dilaton field equation (2.11) takes a similar form, and flux quantization (2.22) (for N = 1 and g_A = g_R) can be rewritten accordingly. Finally, we rewrite the boundary conditions in the new variables, where σ_v are the brane positions and the signs are such that the derivatives of b and φ are taken in the direction away from the branes. The right-hand sides of these boundary conditions generically diverge as σ → σ_v. As shown explicitly in the examples of Appendix C, this divergence can (and must) be renormalized into the parameters describing the brane-bulk couplings.

Linearized field equations

The perturbations we consider to the tension and localized flux need not respect scale invariance, by depending nontrivially on the dilaton, and they can differ at each brane. The supersymmetric solutions are relatively simple because gradients in the warp factor and dilaton are absent: ω̄ = 0 and φ̄′ = 0. This need no longer be true given any asymmetry in the brane perturbations, and so at linearized order these bulk fields instead satisfy equations in which primes again denote differentiation with respect to σ. These changes feed into the Einstein equations that govern the bulk metric function and the flux-quantization condition that governs the size of Q, so that

b(σ) = b̄(σ) + δb(σ) and Q = Q̄ + δQ . (B.13)

To solve for the field perturbations, we now linearize the full field equations around the supersymmetric rugby-ball case. This gives an Einstein equation for the metric function in which δq = δQ/Q̄.
We also have the linearized dilaton field equation, which can be inserted into the Einstein equation. The linearized field equation for the warp factor simplifies a great deal. The linearized boundary conditions are evaluated at σ̄_v = {0, πL̄}, the unperturbed values of the brane positions, and again the sign assumes that derivatives are directed away from the branes. Finally, combining the two boundary conditions gives an expression for the near-source derivative of the bulk metric function. In many of the above results we use the property of the unperturbed solution that κ^2 Q̄^2 L̄^2 = 1.

Linearized solutions

Integrating the dilaton equation once introduces an integration constant φ_1; integrating again gives the full solution for the dilaton. The constant part of the dilaton profile, ϕ, need not be perturbatively small, so in φ-dependent expressions that are already perturbatively small, like J e^{−φ}, we can make the replacement φ → ϕ. This allows us to simplify the Einstein equation (B.16), whose general solution is given by

δb = ᾱL̄ [ b_0 cos z + b_1 sin z + (δq + δj) z cos z − ω_1 sin z cos z ] , (B.27)

where b_0 and b_1 are integration constants. We are free to shift the radial coordinate to ensure δb(0) = 0 and thereby set b_0 = 0.

Changes in geometry

The points σ_v where the metric function vanishes define the brane positions. These are also perturbed relative to the background values, σ_v = σ̄_v + δσ_v, and we can solve for these perturbations by linearizing b(σ_v) = 0. This shows that the choice b_0 = 0 ensures that one of the branes is always located at the origin, b(σ_0) = 0, at linear order. At the other pole, near σ̄_π = πL̄, we instead find the shift δσ_π/(πL̄) = δq. Evaluating the volume integral using the explicit solutions derived above, the integral over δω vanishes because it is odd on the interval of integration. We learn that the perturbation to the volume is independent of warping at linear order and is determined by the integration constants.

Boundary conditions and integration constants

We next determine these integration constants in terms of the assumed brane perturbations, δT_v and δζ_v, using the near-brane boundary conditions. We first evaluate the combined boundary condition (B.20) using (B.27) at both branes to get a relation between integration constants and brane parameters, in which we use v = {0, π} as an index to represent the branes located near z = 0 and z = π; an explicit sign e^{iv} appears because the boundary conditions assume a radial coordinate that increases away from the brane (so the radial derivative in the boundary condition is −d/dσ near σ = πL̄). The dilaton boundary condition (B.18) similarly evaluates using the dilaton solution in (B.23). The integration constant controlling the gradient in W is fixed by the difference between (B.33) evaluated at v = 0 and v = π, expressed in terms of brane differences such as δT_dif = (1/2)(δT_{v=0} − δT_{v=π}), and so on. Using this in the difference between the two versions of (B.34) similarly determines the gradient of φ by fixing φ_1. The remaining integration constants are found by summing rather than subtracting boundary conditions, and the two versions of (B.34) sum to give δq in terms of brane averages such as δT_avg = (1/2)(δT_{v=0} + δT_{v=π}), and so on.
This expression can be used in conjunction with (B.33) to solve for the last integration constant, b_1. These four conditions completely fix the integration constants φ_1, ω_1, b_1 and δq in terms of the brane parameters, the stabilizing current J and ϕ (which to this point remains arbitrary). A final relation comes from the linearized flux-quantization condition; setting J = 0 in it gives the stabilized value, ϕ = ϕ_⋆, of the zero mode entirely in terms of the brane parameters.

The effective potential

We now construct the effective potential of the 4D theory using the ancillary current J, and verify that it is minimized by the condition (B.41). The addition of ∆S_J to the bulk action gives rise to a corresponding term in the effective theory. To identify how the current contributes to the effective theory, we note that it can be treated as a novel contribution to the 6D potential, ∆V_B = J. Since J is a known function of ϕ, this can be read as a differential equation for the potential. Directly integrating this expression gives the linearized potential

U(ϕ) = 2 e^{2(ϕ−ϕ_⋆)} δT_avg + (2g_R/κ^2) δζ_avg , (B.48)

where we have used Q̄ = 2g_R/κ^2. There is no background contribution to the potential because it vanishes identically in the supersymmetric case around which we are perturbing. This potential agrees with the linearization of the potential found by dimensional reduction in §3.4 of the main text, and correctly predicts that the energy is perturbed by Σ_v δT_v at linear order when the brane tension is perturbed, in a way that vanishes when the brane perturbations are scale invariant and supersymmetric. Furthermore, minimizing the potential for general brane perturbations gives the same condition on the zero mode as (B.41) gives when J = 0, and this confirms that the effective potential reproduces the stabilization of the zero mode that was derived in the 6D theory.

C. Examples of stabilization

We now investigate simple examples of zero-mode stabilization by choosing explicit forms for the φ-dependence of the brane perturbations. In all cases, we imagine the φ-dependence of the brane to appear predominantly in the flux perturbation, since we expect this choice to help suppress vacuum energies, as in (2.42). In many cases we find that exponentially large extra dimensions and suppressed curvatures can be obtained if there is a hierarchy between the sizes of the brane perturbations. Along the way, we also illustrate how classical renormalization of brane parameters can be used to absorb the divergences that arise when the brane is treated as an idealized, infinitely thin source. The procedure renders physical observables finite, such as the value of the zero mode and the potential at its minimum.

Flux with linear φ-dependence

We now investigate a simple example in which the branes are perturbed identically, with a flux perturbation linear in φ. Note that we have regularized the lim_{σ→0} log[sin(σ/L̄)] divergence with the finite expression log(ε/L̄). However, the divergence as ε → 0 must be absorbed into the brane couplings such that physical quantities are finite. In general, divergences associated with brane terms that are linear in a bulk scalar can be absorbed by renormalizing the φ-independent part of the brane tension [21]. This case is no different, and the observables of the theory can be made finite if we renormalize the tension as follows:

κ^2 τ(r̄) = κ^2 τ − (2λ^2 g_R^2/πᾱ) log(ε/r̄) . (C.5)
In particular, this renormalization gives a finite expression for the value of the zero mode. For convenience, we can choose the renormalization scale r̄ = L̄ to eliminate the logarithmic term, and this gives (C.7). Note that the limit λ → 0 sends the zero mode to the expected runaway value ϕ_⋆ → −∞. Also note that the value of the zero mode comes to us as the ratio of two small, dimensionless numbers, κ^2 τ(L̄) and λ g_R, but can itself be made large if λ g_R ≪ κ^2 τ(L̄). Because the proper volume of the extra dimensions is controlled by ℓ^2 = r_B^2 e^{−ϕ_⋆}, a large negative value of the zero mode gives large extra dimensions. Furthermore, this choice does not invalidate the assumed perturbativity of κ^2 Q̄ δζ ≈ λ g_R ϕ ≈ κ^2 τ(L̄) ≪ 1 near the minimum of the potential, and so the approximate, linearized potential is valid in this region.

Flux quadratic in φ

We now consider the case in which the perturbation to the localized flux is quadratic in φ:

δT_v = τ and δζ_v = m φ^2 . (C.11)

We again make the simplifying assumption of identical branes, and this gives φ_1 = ω_1 = 0, so that φ = δq log(sin z) + ϕ. Inserting this into (B.37) allows us to rewrite it as

δq = (2m g_R/πᾱ) [ δq log(ε/L̄) + ϕ ] , (C.12)

where the logarithmic divergence of the dilaton is ε-regularized in the same way as before. This equation can be used to solve for δq in terms of the zero mode:

πᾱ δq = 2m g_R ϕ / [ 1 − (2m g_R/πᾱ) log(ε/L̄) ] . (C.13)

When the brane has a quadratic coupling to a bulk scalar field, the associated divergences can be absorbed into the renormalization of this coupling's coefficient [21,29]. In the present case this amounts to renormalizing m as follows:

m(r̄) = m / [ 1 − (2m g_R/πᾱ) log(ε/r̄) ] . (C.14)

This gives a finite value for δq as a function of the zero mode, πᾱ δq = 2g_R m(L̄) ϕ, because it absorbs the divergences associated with evaluating the dilaton profile at the brane positions: m φ(σ_0) = m(L̄) ϕ. If we assume t ≫ 1, then the stabilized value of the zero mode is dominated by the root of the large ratio t, as follows:

ϕ_⋆ = ± √[ κ^2 τ / (2g_R m(L̄)) ] , (C.18)

and the negative solution can be a minimum if g_R m(L̄) < 0. This would also require κ^2 τ > 0 if t > 0 is to be satisfied. If the stabilized value of ϕ_⋆ is chosen to be large and negative, then the extra dimensions have an exponentially large radius, as suggested by the leading-order result ℓ^2 = r_B^2 e^{−ϕ_⋆}. Finally, the value of the potential at this minimum can be written down explicitly. As in the linear case, this vacuum energy is suppressed relative to the naive expectation 2τ, though the suppression here is weaker because it is sensitive only to the square root of the hierarchy in the brane perturbations.
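To illustrate the exponential amplification at work in these examples, the following sketch assumes ϕ_⋆ ≈ −κ^2 τ(L̄)/(λ g_R) up to an O(1) factor; the exact coefficient sits in the omitted equation (C.7), and the parameter values below are invented purely for illustration:

import math

# How a modest hierarchy between two small brane couplings exponentiates
# into enormous extra dimensions, via ell^2 = r_B^2 * exp(-phi_star).
kappa2_tau = 7.5e-2        # illustrative value of kappa^2 tau(Lbar)
lam_gR = 1.0e-3            # illustrative, hierarchically smaller lambda*g_R
phi_star = -kappa2_tau / lam_gR        # ~ -75 (O(1) coefficient assumed = 1)
ell_over_rB = math.exp(-phi_star / 2)  # linear size relative to r_B
print(f"phi_star ~ {phi_star:.0f}, ell/r_B ~ {ell_over_rB:.1e}")  # ~2e16

A hierarchy of order 75 in the exponent thus produces a size ratio of order 10^16, consistent with the remark in the Discussion above.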
Measurement of a Magnonic Crystal at Millikelvin Temperatures

Hybrid systems combining magnons and superconducting quantum circuits have attracted increasing interest in recent years. Magnonic crystals (MCs) are one of the building blocks of room-temperature magnonics and can be used to create devices with an engineered band structure. These devices, exhibiting tunable frequency selectivity and the ability to store travelling excitations in the microwave regime, may form the basis of a set of new tools to be used in the context of quantum information processing. In order to ascertain the feasibility of such plans, MCs must be demonstrated to work at the low temperatures required for microwave-frequency quantum experiments. We report the first measurements of the transmission of microwave signals through an MC at 20 mK and observe a magnonic bandgap in both continuous-wave and pulsed excitation experiments. The spin-wave damping at low temperatures in our yttrium iron garnet MC is higher than expected, indicating that further work is necessary before the full potential of quantum experiments using magnonic crystals can be realised.

Superconducting quantum circuits have become an increasingly mature experimental technology in recent years [1,2]. As a result, there has been a surge of interest within the circuit quantum electrodynamics (circuit QED) community in combining such circuits with other physical systems such as spin ensembles [3,4], acoustic waves [5], and magnonic structures [6].

The goal of quantum magnonics is to investigate the physics of magnons at the quantum level and to create novel microwave devices useful for quantum information processing. Dipolar magnons (spin waves) [7] have µm wavelengths and are readily excited over a range of microwave frequencies which overlap with those of superconducting quantum circuits. Recent work includes the measurement of surface spin waves in a µm-thick yttrium iron garnet (YIG) waveguide at millikelvin temperatures [8], the demonstration of strong coupling between bulk YIG samples and resonators [9-15], and the excitation of a single magnon in a YIG sphere using a superconducting qubit [16,17].
Magnonic crystals (MCs) [18,19], the magnetic analogue of photonic crystals, are magnetic waveguides with artificially engineered magnonic bandgaps. MCs are created by imposing periodic changes in a waveguide's magnetic properties or environment. Various implementations have been demonstrated, including several static varieties and a dynamic variant with a bandgap that can be switched on and off.

At room temperature, MCs have been used to create a range of devices including oscillators and filters [20], logic gates [21], and magnon transistors [22]. Several of the properties of magnonic crystals (notably their strong and tunable frequency selectivity, storage capability, and ability to alter the propagation direction of signals) have potential utility in the manipulation of single-magnon excitations in experimental solid-state quantum devices [23,24]. Until now, however, it remained to be established that MCs can be used at the millikelvin temperatures required for such devices. In this work, we present the first measurements of a magnonic crystal at millikelvin temperatures, a step towards the incorporation of MCs into quantum devices.

The basis for the magnonic crystal used in our experiments is a structured YIG waveguide (thickness S = 5.19 µm, room-temperature saturation magnetization M_s = 138.6 kA/m) epitaxially grown on a gadolinium gallium garnet (GGG) substrate. YIG, a ferrimagnetic electrical insulator, has extremely low spin-wave damping at room temperature and is therefore much used in room-temperature magnonic device development [25]. The MC is formed from a series of eight equally-spaced grooves, each of width w = 40 µm and depth d = 0.5 µm, chemically etched into the magnetic film. The distance between the grooves is a = 300 µm (see fig. 1(a)). Spin waves are excited and detected by niobium microstrip antennae fabricated 2.66 mm apart on a sapphire crystal substrate in direct contact with the MC. In order to assure compatibility with the thin-film superconducting measurement structures used in circuit QED, it is desirable to apply the required bias magnetic field in-plane. We chose to carry out our experiments in the backward volume geometry (BVMSW) [26] (bias magnetic field parallel to the spin-wave propagation direction, k ∥ B, which is along the longitudinal axis of the waveguide). At room temperature, crystals measured in the backward volume configuration have been shown to display bandgaps with a higher rejection ratio than magnetostatic surface spin waves (MSSW) (k ⊥ B, in-plane field) [27].

A dilution refrigerator is used to cool the MC assembly, housed in a copper sample box, down to 20 mK. The MC is first characterised at room temperature using a network analyser. Figure 2 shows the transmission measured at room temperature as a function of microwave input signal frequency with B = 107 mT. The displayed data is relative to that measured at zero field, i.e. when no spin waves are excited within the waveguide and only directly-coupled electromagnetic signals propagate between the input and output antennae through the vacuum of the sample box. In this figure, the highest frequency at which the BVMSW are observed corresponds to the spins precessing uniformly throughout the material (FMR, k = 0). Propagating modes (k ≠ 0) have lower frequencies.
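For orientation, the frequencies involved can be estimated from the standard lowest-mode BVMSW dispersion in the Kalinikos-Slavin approximation. The formula below is the textbook one and is our assumption, since the paper does not quote its dispersion relation explicitly; the material parameters are those given above:

import numpy as np

# Lowest-mode backward-volume dispersion (Kalinikos-Slavin approximation):
#   f(k)^2 = f_H * (f_H + f_M * (1 - exp(-k*S)) / (k*S))
gamma = 28e9                          # gyromagnetic ratio, Hz/T
mu0 = 4e-7 * np.pi
B, Ms, S = 107e-3, 138.6e3, 5.19e-6   # bias field (T), M_s (A/m), thickness (m)
f_H, f_M = gamma * B, gamma * mu0 * Ms
k = np.linspace(1e2, 2e5, 500)        # wavenumbers, rad/m
f = np.sqrt(f_H * (f_H + f_M * (1 - np.exp(-k * S)) / (k * S)))
print(f"FMR (k -> 0): {np.sqrt(f_H * (f_H + f_M)) / 1e9:.2f} GHz")
print(f"f at k = 2e5 rad/m: {f[-1] / 1e9:.2f} GHz")  # lower: a backward wave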
Below the FMR frequency, the data displays oscillations caused by the interference between the spin-wave signal and the directly-coupled signal; due to the different dispersion relations of the magnonic and photonic waves, these signals accumulate different phases while travelling to the output antenna, resulting in interference fringes. The magnonic bandgaps of the crystal appear as gaps in this pattern: in the bandgaps, the transmitted spin-wave signal is suppressed while the directly-coupled signal is unaffected, resulting in regions without oscillations.

Calculations were made using the transfer matrix method following the treatment of Chumak [27]. In this model, spin waves accumulate phase and experience damping while propagating between neighbouring edges of the grooves defining the lattice of the magnonic crystal. At the interfaces between etched and unetched regions, spin waves undergo partial reflection and transmission. For completeness, it should be noted that the coupling of the antennae to the waveguide has some dependence on k which is not included in the model: the effect of this on the key qualitative features being fitted (namely the position and width of the bandgaps) is negligible.

Apart from the FMR linewidth (∆H), two phenomenological parameters appear in the model: ζ, which accounts for the increased damping due to two-magnon scattering within the grooves, and η, which is used to match the predicted and observed width and depth of the bandgaps. For simplicity, in our calculations ζ is set to zero and η is adjusted to fit the measured widths of the gaps. The theoretical prediction of the transmission characteristics across the magnonic crystal with M_s = 138.6 kA/m (dotted line in fig. 2) is consistent with the observed positions and widths of the bandgaps.

Figure 3 compares the transmission characteristics of an unstructured magnonic waveguide (11 µm film thickness, 1 mm inter-antennae spacing) and the same magnonic crystal at 20 mK. An offset has been applied to the data to shift the baseline to 0 dB. In contrast to the room-temperature measurement in fig. 2, measurements at 20 mK are made as a function of the magnetic bias field (B) while keeping the input frequency constant. The system is excited using a constant-frequency 4 GHz microwave tone with a power of −70 dBm at the input of the antenna. The lowest field at which the BVMSWs are observed corresponds to the FMR. Signals at higher fields are propagating modes (k ≠ 0). At 20 mK, the measurement of the unstructured waveguide (fig. 3(a)) shows oscillations across the spin-wave passband that decay in amplitude as k increases (i.e. as B increases). As in the data of fig. 2, the oscillations are due to the interference between the spin waves and the directly-coupled signals. As anticipated, without the etched grooves, no magnonic bandgap is observed. In the MC measurement (fig. 3(b)), a single bandgap is observed. Its position at ∼66.2 mT agrees with that predicted using the transfer matrix method with a saturation magnetisation of M_s = 197 kA/m [8,28]. Higher-order bandgaps are not visible due to the significant apparent increase in damping.
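A minimal version of such a transfer-matrix calculation is sketched below. It takes the phase-accumulation and interface-reflection picture just described at face value, with a made-up reflection amplitude G per groove edge; the actual coefficients of Chumak's model [27] are not reproduced here.

import numpy as np

def propagate(k, d):
    """Phase accumulation over a distance d for forward/backward waves."""
    return np.array([[np.exp(-1j * k * d), 0], [0, np.exp(1j * k * d)]])

def interface(G):
    """Partial reflection/transmission at a groove edge (amplitude G)."""
    t = np.sqrt(1 - abs(G) ** 2)
    return np.array([[1, G], [np.conj(G), 1]]) / t

def transmission(k, a=300e-6, w=40e-6, n_grooves=8, G=0.05):
    """Power transmission through the n-groove lattice at wavenumber k."""
    cell = propagate(k, a - w) @ interface(-G) @ propagate(k, w) @ interface(G)
    M = np.linalg.matrix_power(cell, n_grooves)
    return abs(np.linalg.det(M) / M[1, 1]) ** 2

k_bragg = np.pi / 300e-6   # first Bragg condition, k = pi/a
for k in (0.85 * k_bragg, k_bragg):
    print(f"k/k_Bragg = {k / k_bragg:.2f}: T = {transmission(k):.3f}")

Even this toy version reproduces the qualitative feature being fitted: a transmission dip when the Bragg condition k ≈ π/a is met.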
The presence of a magnonic bandgap can also be observed in time-resolved measurements at 20 mK. Since the spin waves propagate slower than the directly-coupled signal (which travels at the speed of light to the output antenna), for sufficiently short excitation pulses the two can be separated in time. Care has to be taken, however, not to make the pulses so short as to have a frequency bandwidth that exceeds the width of the bandgap: under these conditions the gap cannot be observed, since the signal always has a component lying outside of the bandgap which can propagate freely through the crystal.

Figure 4 shows the time response of the magnonic crystal to a Gaussian pulse (σ = 30 ns) with a carrier frequency of 4 GHz. Such pulses are slightly too long to allow complete temporal separation between the directly-coupled signal and the spin-wave signal, but the bandgap starts to be obscured if they are made shorter. Initially, only the directly-coupled signal is measured (∼40 ns < t < ∼100 ns). When the spin waves start to arrive at the output antenna (∼100 ns < t < ∼160 ns), they overlap in time with the directly-coupled signals, interfering destructively at 'X' in fig. 4. Beyond 160 ns, the directly-coupled signal disappears, leaving only the transmitted spin-wave signal. Figure 4(b) shows the linecuts from the same data at t = 160 ns and t = 200 ns. The first bandgap of the magnonic crystal is visible at 66.2 mT, consistent with the continuous-wave measurement in fig. 3(b).

A comparison between the room-temperature (fig. 2) and cold (fig. 3(b)) data indicates the presence of a significant increase in spin-wave damping at millikelvin temperatures. There are three possible sources of damping that warrant careful consideration: magnetic impurities in the YIG, enhanced damping due to the scattering processes caused by uneven etching of the grooves, and the GGG substrate upon which the MC is grown.

Previous measurements [28-31] have shown that FMR linewidths in YIG initially increase as the material's temperature is decreased (below 100 K), reach a maximum value, and then begin to reduce again. This is generally attributed to the presence of paramagnetic rare-earth impurities in YIG with temperature-dependent relaxation times. While the lowest temperatures reached in these earlier works are around 5 K, they consistently report decreasing linewidths when the temperature is reduced below 10 K. Furthermore, the linewidths of YIG spheres measured in Refs. [10] and [12] at millikelvin temperatures are similar to the values observed at room temperature. From this, it seems likely that it is feasible to produce a pure YIG material with a linewidth at millikelvin temperatures comparable to the room-temperature value.

The surface roughness of a ferrite sample is known to influence the FMR linewidth because it increases two-magnon scattering, especially in a thin-film sample [32]. Spencer [29] has shown that better-polished YIG spheres do exhibit lower linewidths across a range of temperatures, from 300 K down to 5 K. Rough surfaces inside the grooves which define an etched MC are known to contribute to damping [27] but, as yet, there is no reason to think that this effect would be significantly enhanced at low temperatures.
The substrate upon which the YIG film is grown, gadolinium gallium garnet, is known to be paramagnetic below 70 K [33]. GGG is well known to have a frustrated spin system with an ordered antiferromagnetic state below 400 mK at a relatively high field (∼1 T) [34]. At low field, the material undergoes a spin-glass transition below ∼200 mK [35]. While its behaviour at the intermediate field ranges of our experiments is not well documented, given these known magnetic properties and the relatively narrow linewidths measured in bulk YIG at low temperature (i.e. in the absence of GGG), it seems highly likely that, if not the only culprit, losses due to its low-temperature magnetic system coupling to the YIG are at least an important contributor to the increased damping we observe.

In conclusion, we have measured a bandgap in a magnonic crystal consisting of an etched YIG waveguide at 20 mK. Our results are consistent with calculations based on the transfer matrix method, both for continuous-wave and time-resolved measurements. Room-temperature and cold measurements of the same magnonic crystal indicate the presence of higher-than-expected spin-wave damping in the YIG at millikelvin temperatures. Future experiments investigating spin waves in YIG waveguides at millikelvin temperatures may provide more insight into the nature of this damping. This is essential if magnonic crystals are to be used for manipulation of magnons at the quantum level.

FIG. 2. Transmission of continuous-wave BVMSW signals through the magnonic crystal measured at room temperature using a network analyser (solid line). The bias magnetic field is fixed at B = 107 mT. Data is relative to that measured at zero field, i.e. when no spin waves are excited and only directly-coupled electromagnetic signals contribute. The theoretical transmission (dashed line) is calculated using the transfer matrix method with M_s = 138.6 kA/m, η = 8, ζ = 0, ∆H = 0.5 Oe.

FIG. 3. Transmission of continuous-wave BVMSW signals measured at 20 mK. Measurements are performed by sweeping the magnetic bias field (B) while applying a 4 GHz microwave input tone through (a) an unpatterned magnonic waveguide (11 µm thickness, 1 mm antenna separation), and (b) a magnonic crystal waveguide (5.19 µm thickness, 2.66 mm antenna separation). An offset has been applied to the data to shift the baseline to 0 dB. The theoretical curve is calculated using the transfer matrix method with M_s = 197 kA/m, η = 8, ζ = 0, ∆H = 0.5 Oe.

FIG. 4. Time-resolved measurements of pulsed BVMSW signals through the magnonic crystal at 20 mK. (a) Measurements are performed by sweeping the bias magnetic field B while applying a 4 GHz microwave tone with a Gaussian envelope (σ = 30 ns). The dominant measured response in the horizontal band from ∼40 ns to ∼160 ns is the directly-coupled signal. The markers 'X' indicate where the spin-wave and the directly-coupled signals interfere destructively. Beyond 160 ns, the directly-coupled signal disappears, leaving only the transmitted spin-wave signal. A bandgap is observed at 66.2 mT. (b) Linecuts at t = 160 ns and t = 200 ns as a function of magnetic field.
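As a quick check of the pulse-length trade-off described above (and in the caption of FIG. 4), the spectral width of the σ = 30 ns Gaussian pulse follows from the Fourier transform of a Gaussian. This is a generic estimate; the measured bandgap width itself is not quoted in the text:

import math

# Spectral width of a Gaussian pulse with temporal sigma_t = 30 ns:
# an envelope exp(-t^2 / (2 sigma_t^2)) has an amplitude spectrum of
# width sigma_f = 1 / (2 * pi * sigma_t).
sigma_t = 30e-9
sigma_f = 1 / (2 * math.pi * sigma_t)
fwhm_f = 2 * math.sqrt(2 * math.log(2)) * sigma_f
print(f"sigma_f = {sigma_f / 1e6:.1f} MHz, FWHM = {fwhm_f / 1e6:.1f} MHz")
# Halving the pulse length doubles this bandwidth; once it exceeds the
# bandgap width, part of the signal always propagates and the gap fills in.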
Mental Health of Adolescents and Youth

The health and mental well-being of children and adolescents is, in general, good. Most are satisfied with their lives, perceive their health to be good, and do not regularly suffer from health complaints. The main problems of the first half of the 20th century, such as acute infections and high infant mortality, have diminished in importance. Instead of physical disorders, mental illness now accounts for a large and growing share of ill health among children and adolescents in Europe. Emotional problems, together with conduct problems and learning disabilities, came to the fore in the middle of the last century. Currently, mental health and socio-economic influences on health have risen to prominence within child and adolescent health. The World Health Organization (WHO) has declared that young people's mental health is a key area of concern to which professionals and policy-makers must direct their attention. Focusing only on mental health disorders does not give the whole picture of the state of mental health among young people. A general problem is the predominant understanding of mental health as the absence of mental disorder. Risk-factor research has focused on mental health problems rather than strengths and positive outcomes. Consideration of resilience has emerged from research indicating that a proportion of young people had a positive life trajectory despite having faced diverse, potentially harmful life experiences. Worldwide, up to 20% of children and adolescents suffer from disabling mental health problems. The mental health of adolescents and youth is a complex issue that cannot be captured by isolated indicators; considerable analysis and statistics are required to reach results that help us understand and prevent the problem. Two major questions structure this issue: What is mental health (definition, causes, diffusion, ages, and prevention)? And what can we do to improve the mental health of adolescents and youth (family, society, hospitals, schools, and public places)? The true rates of psychological problems and disorders may be higher than those reported in studies. On an individual level, mental health problems can have deteriorating effects on young people's social, intellectual, and emotional development and, consequently, on their future. At their worst, they can lead to loss of life. Suicide is one of the three leading causes of death in young people and a public health concern in many European countries. Besides the negative effects on an individual level, mental illness also affects many other spheres of life (family, friends, and society at large), causing costs well beyond the health care system. Furthermore, there are close links between child and adult mental illness: the presence of mental illness during childhood may lead to up to 10 times higher costs during adulthood. Children are our future. Through well-conceived policy and planning, government can promote the mental health of children, for the benefit of the child, the family, the community, and society.

Introduction

Mental health problems can emerge in late childhood and early adolescence. Recent studies have identified mental health problems, in particular depression, as the largest cause of the burden of disease among young people.
Poor mental health can have an important effect on the wider health and development of adolescents and is associated with several health and social outcomes, such as higher alcohol, tobacco, and illicit substance use, adolescent pregnancy, school dropout, and delinquent behaviors. There is growing consensus that healthy development during childhood and adolescence contributes to good mental health and can prevent mental health problems. Enhancing social skills, problem-solving skills, and self-confidence can help prevent mental health problems such as conduct disorders, anxiety, depression, and eating disorders, as well as other risk behaviors, including those that relate to sexual behavior, substance abuse, and violent behavior. Health workers need to have the competencies to relate to young people, to detect mental health problems early, and to provide treatments, which include counseling, cognitive-behavioral therapy and, where appropriate, psychotropic medication.

Child mental health policies are stimulated by the interaction of knowledge, public awareness, social mobilization, and advocacy. They are also influenced by other contextual issues, such as historic and existing health, social, and educational policy and services. The need to develop policy on child mental health has, sadly, been widely neglected, but is now recognized as a crucial first step in the development of accessible and effective services for children. Alongside policy, detailed strategic action plans and the identification of entry points and levers for change are all crucial.

What Is Mental Health?

Classic version of the Hippocratic Oath (…): If I fulfill this oath and do not violate it, may it be granted to me to enjoy life and art, being honored with fame among all men for all time to come; if I transgress it and swear falsely, may the opposite of all this be my lot. (Edelstein, 1943)

Mental health is more than just being free of a mental illness; it is, rather, an optimal level of thinking, feeling, and relating to others. What, then, is mental illness? Mental illness refers to all of the diagnosable mental disorders. Mental disorders are characterized by abnormalities in thinking, feelings, or behaviors. They are highly common: many individuals can expect to meet the formal diagnostic criteria for some form of anxiety, depressive, behavioral, thought, or substance-abuse disorder during their lifetime. The common types of mental illness are anxiety, depressive, behavioral, and substance-abuse disorders, including phobias, panic disorder, generalized anxiety disorder, and social anxiety disorder. Behavioral disorders are characterized by problems conforming to the tenets of acceptable behavior. Depressive disorders involve feelings of sadness that interfere with the individual's ability to function or, as with adjustment disorder, persist longer than most people experience in reaction to a particular life stressor.

Definition

Mental health has been defined as "a state of successful performance of mental function, resulting in productive activities, fulfilling relationships with people, and the ability to adapt to change and to cope with adversity". It
might seem easy to define mental health as the absence of mental illness, but most experts agree that there is more to being mentally healthy. The state of being mentally healthy is enviable given the advantages it affords. For example, mentally healthy adults tend to report the fewest health-related limitations of their routine activities, the fewest fully or partially missed days of work, and the healthiest social functioning (for example, low helplessness, clear life goals, high resilience, and high levels of intimacy in their lives). Mentally healthy individuals tend to have better medical health, productivity, and social relationships. Medications may play an important role in the treatment of a mental illness, particularly when the symptoms are severe or do not adequately respond to psychotherapy.

Cause: What Are the Causes and Risk Factors for Mental Illness?

Mental health disorders in children and adolescents are caused by biology, environment, or a combination of the two. Examples of biological factors are genetics, chemical imbalances in the body, and damage to the central nervous system, such as a head injury. Many environmental factors can also affect mental health, including exposure to violence, extreme stress, and the loss of an important person. There is no one test that definitively indicates whether someone has a mental illness. Therefore, health-care practitioners diagnose a mental disorder by gathering comprehensive medical, family, and mental-health information. Mental illness refers to all of the diagnosable mental disorders and is characterized by abnormalities in thinking, feelings, or behaviors. Some of the most common types of mental illness include anxiety, depressive, behavioral, and substance-abuse disorders. There is no single cause for mental illness; rather, it is the result of a complex group of genetic, psychological, and environmental factors. While everyone experiences sadness, anxiety, irritability, and moodiness at times, moods, thoughts, behaviors, or use of substances that interfere with a person's ability to function well physically, socially, at work, school, or home are characteristics of mental illness. One frequently asked question about mental illness is whether it is hereditary. Most mental disorders are not directly passed from one generation to another genetically, and there is no single cause for mental illness. That said, more often than not there seems to be a genetic predisposition to developing a mental illness: mood, behavioral, developmental, and thought disorders are all thought to carry a genetic risk for developing the condition.

Ages

The number of young people and their families who are affected by mental, emotional, and behavioral disorders is significant. It is estimated that as many as one in five children and adolescents may have a mental health disorder that can be identified and requires treatment. Young people who experience excessive fear, worry, or uneasiness may have an anxiety disorder. Anxiety disorders are among the most common of childhood disorders: according to one study of 9- to 17-year-olds, as many as 13 of every 100 young people have an anxiety disorder. A number of children in every 100 may have major depression, and as many as eight of every 100 adolescents may be affected.

Preventive

While medication can be quite helpful in alleviating and preventing overt symptoms for many psychiatric conditions, it does not address the many complex social and psychological issues that can play a major role in how the person with such a disease functions at work, at home, and in his or her relationships.
Psychosocial interventions are therefore seen by some as forms of occupational therapy for people with mental illness. For example, treatment of bipolar disorder with medications tends to address two aspects: relieving already existing symptoms of mania or depression, and preventing symptoms from returning.

Hospitals

Talk therapy (psychotherapy) is usually considered the first line of care in helping a person with a mental illness. It is an important part of helping individuals with a mental disorder achieve the highest level of functioning possible. Psychotherapies that have been found to be effective in treating many mental disorders include family-focused therapy, psycho-education, cognitive therapy, interpersonal therapy, and social rhythm therapy.

What Can We Do to Improve the Mental Health of Adolescents and Youth?

Family

Families and communities, working together, can help children and adolescents with mental disorders. A broad range of services is often necessary to meet the needs of these young people and their families. Parents, practitioners, and policy-makers are recognizing the importance of young people's mental health. Youth with better mental health are physically healthier, demonstrate more socially positive behaviors, and engage in fewer risky behaviors. Conversely, youth with mental health problems, such as depression, are more likely to engage in health risk behaviors. Furthermore, youths' mental health problems pose a significant financial and social burden on families and society in terms of distress, cost of treatment, and disability. Most mental health problems diagnosed in adulthood begin in adolescence. Half of lifetime diagnosable mental health disorders start by age 14; this number increases to three fourths by age 24 (Knopf, Park, & Mulye, 2008, p. 10). The ability to manage mental health problems, including substance use issues and learning disorders, can affect adult functioning in areas such as social relationships and participation in the workforce.

Society

Individuals with mental illness are at risk for a variety of challenges, but these risks can be greatly reduced with treatment, particularly when it is timely, using approaches such as psycho-education, cognitive therapy, interpersonal therapy, and social rhythm therapy. Family-focused therapy involves education of family members about the disorder and how to help (psycho-education), communication-enhancement training, and problem-solving skills training for family members. Psycho-education services involve teaching the person with the illness and their family members about the symptoms of the illness, as well as any warning signs (for example, a change in sleep pattern or appetite, or increased irritability) that the person is beginning to experience another episode, when applicable. In cognitive behavioral therapy, the mental-health professional works to help the person with a psychiatric condition identify, challenge, and decrease negative thinking and otherwise dysfunctional belief systems. The goal of interpersonal therapy tends to be identifying and managing problems that sufferers of a mental illness may have in their relationships with others. Social rhythm therapy encourages stability of sleep-wake cycles, with the goal of preventing or alleviating the sleep disturbances that may be associated with a psychiatric disorder.
The term "vulnerable child population" encompasses a broad spectrum of different individuals who are at greater risk of mental health problems; a non-exhaustive list of such vulnerable populations is given below. Those groups may have very different mental health needs, but often share experiences of stigma, discrimination, and/or difficulties accessing mental health services and promotion or preventive action. In addition, the available services may not be adapted to their specific needs. Some of these groups also constitute low-prevalence groups, and for this reason are further neglected (Knopf, Park, & Mulye, 2008, p. 22). For example, in most of the surveys there are problems with general indices representing the circumstances of children in minority groups (ethnic minorities, Roma, refugee/asylum-seeker, and disabled children), which are too small in numbers to be represented in general samples of the population, and a tendency has also been noted for many of the indicators to relate only to the circumstances of older children. However, these low-prevalence groups usually have a greater need of attention compared to other, more numerous groups which may already receive more attention. They are: children living in poverty; homeless children; early school leavers; children experiencing bullying; traveler children; juvenile offenders; children abandoned due to parental migration for employment; children with physical and learning disabilities; children with mental disorders or drug abuse; children using alcohol; and abused children.

School

Children often feel sad, cry, or feel worthless; they may then lose interest in play activities, or their schoolwork may decline. Physical well-being: children may experience changes in appetite or sleeping patterns and may have vague physical complaints. They may believe that they are ugly, that they are unable to do anything right, or that the world or life is hopeless. It also is important for parents and caregivers to be aware that some children and adolescents with depression may not value their lives, which can put them at risk for suicide.

The situation of young people is rapidly changing across the globe. The group of young people is less homogeneous than the group of school-aged children, and the life trajectory for young adults is not as predictable or as homogeneous as in previous generations (Rowling, 2006). The transition into adulthood is a period marked by many changes. Adolescents and young adults are in a key phase of establishing an independent identity, making educational and vocational decisions and lifestyle choices, and forming interpersonal relationships. All of these have major long-term influences on the individual, particularly in terms of factors that influence mental health and well-being. Young people are particularly vulnerable to social exclusion, notably in the transition stage between education and employment. For example, leaving school early without access to full-time work can lead to economic and social disconnection and a failure to develop a sense of the future. These young people form a specific category of "invisible" young people, as their possibilities and rights to a minimum income or health insurance are in many countries only minor (Stengård & Appelqvist-Schmidlechner, 2010, p. 3). Schools play a major role in supporting young people with emotional and behavioural problems and are often where symptoms of mental disorders are first identified.
A school staff member was among those to suggest that some help for emotional or behavioural problems was needed in two fifths (40.5%) of cases. Just over one fifth (22.6%) of young people who used health services had been referred by their school. Teachers and other school staff provided 18.9% of students with informal support for emotional and behavioural problems. This was higher (51.0%) for students assessed as having a mental disorder. Of the four types of disorder, major depressive disorder had the greatest impact on school attendance. Students with this disorder averaged 20 days absent from school in the previous 12 months due to its symptoms. Major depressive disorder also had the greatest impact on functioning at school, with one third (34.3%) of students experiencing a severe impact and another 34.1% a moderate impact due to this disorder. For adolescents, conduct disorders had almost the same level of impact (22.8% severe and 43.6% moderate), but conversely also had the highest proportion (21.8%) for whom the disorder had no impact (Lawrence et al., 2015, p. 20).

Public Places

Stress has been found to be a significant contributor to the development of most mental illnesses, including bipolar disorder. For example, gay, lesbian, and bisexual people are thought to experience increased emotional struggles associated with the multiple social stressors of coping with reactions to their homosexuality or bisexuality in society. Unemployment significantly increases the odds of an individual developing a psychiatric disorder. It almost quadruples the odds of developing drug dependence and triples the odds of having a phobia or a psychotic illness like schizophrenia. Being unemployed more than doubles the chances of experiencing depression, generalized anxiety disorder (GAD), and obsessive-compulsive disorder.

Mental health problems of young people affect society as a whole. In spite of the fact that most children and adolescents perceive their health to be good, there is a sizeable minority of young people reporting their health to be either "fair" or "poor" and experiencing a number of recurring health complaints. As mental health problems in adolescence tend to be under-recognized and undertreated (Sourander et al., 2004), the true rates of psychological problems and disorders may be higher than those reported in studies.

On an individual level, mental health problems can have deteriorating effects on young people's social, intellectual and emotional development and consequently on their future. At its worst, they can lead to loss of life. Suicide is one of the three leading causes of death in young people and a public health concern in many European countries (WHO, 2001). Besides the negative effects on an individual level, mental illness affects also many other spheres of life - family, friends and society at large - causing costs not only in the health care system. (Stengård & Appelqvist-Schmidlechner, 2010, p. 7)

Conclusion

Young people can have mental, emotional, and behavioral problems that are real, painful, and costly. These problems, often called "disorders", are sources of stress for children and their families, schools, and communities. Monitoring systems are an important component of efforts to promote mental health and to prevent and treat mental health problems. Such efforts promote a healthy adolescence and lay the groundwork for healthy adulthood.
The foundation for good mental health is laid in the early years, and society as a whole benefits from investing in children and families. Fortunately, the majority of young people in the EU enjoy good mental health. However, on average, one in every five children and adolescents suffers from developmental, emotional, or behavioural problems, and approximately one in eight has a clinically diagnosed mental disorder. Therefore, there is a clear and urgent need for the development of effective policies and practices in the enlarged Europe, and for a creative process of interaction and a proactive exchange of information between European countries.
A Dual-Color Fluorescence-Based Platform to Identify Selective Inhibitors of Akt Signaling

Background: Inhibition of Akt signaling is considered one of the most promising therapeutic strategies for many cancers. However, rational target-orientated approaches to cell-based drug screens for anti-cancer agents have historically been compromised by the notorious absence of suitable control cells.

Methodology/Principal Findings: In order to address this fundamental problem, we have developed BaFiso, a live-cell screening platform to identify specific inhibitors of this pathway. BaFiso relies on the co-culture of isogenic cell lines that have been engineered to sustain interleukin-3 independent survival of the parental Ba/F3 cells, and that are individually tagged with different fluorescent proteins. Whilst in the first of these two lines cell survival in the absence of IL-3 is dependent on the expression of activated Akt, the cells expressing constitutively-activated Stat5 signaling display IL-3 independent growth and survival in an Akt-independent manner. Small molecules can then be screened in these lines to identify inhibitors that rescue IL-3 dependence.

Conclusions/Significance: BaFiso measures differential cell survival using multiparametric live cell imaging and permits selective inhibitors of Akt signaling to be identified. BaFiso is a platform technology suitable for the identification of small molecule inhibitors of IL-3 mediated survival signaling.

Introduction

Cell-based screens have been widely used in drug discovery although historically, these assays are conducted using genetically diverse cell lines derived from human tumors [1,2]. Since the complex intracellular signaling networks that drive cancer cell growth and survival have begun to be elucidated, a more rational approach to drug discovery has become feasible [3]. However, the implementation of target-orientated cell-based screens for anticancer drugs remains a challenge, both because of their reliance on defined genetic changes and because of the lack of proper control cells. To overcome this fundamental problem, we have developed a rational strategy for cell-based drug discovery that is based on the convenience and flexibility of the Ba/F3 cell system, an immortalized IL-3-dependent pro-B lymphoblastic cell line [4]. IL-3 supports the growth and survival of Ba/F3 cells through the activation of distinct signaling pathways. Upon binding to its cognate receptor, IL-3 activates the Janus kinase/signal transducer and activator of transcription (JAK/STAT) pathway to induce Bcl-xL [5]. Similarly, IL-3 activation of the PI3K/Akt pathway is involved in inhibiting the intrinsic apoptotic machinery in Ba/F3 cells [6-8]. Overexpression of several constitutively active signaling molecules abrogates the dependence of these cells on IL-3 [9]. Hence, we generated isogenic cell lines derived from Ba/F3 (BaFiso) in which IL-3 independent survival is sustained by independent signaling events. Each of these isogenic lines was genetically labeled with a fluorescent reporter and thus, the ratio of two spectrally distinct cell populations could be used as the primary endpoint of the system to monitor pathway-specific cytotoxicity. Accordingly, compounds can be screened in co-cultures of these lines and the change in the relative cell number of the two lines readily and rapidly measured to identify those molecules that specifically interact with one of the signaling pathways.
In this instance, BaFiso has been designed as a live-cell system suitable to identify specific inhibitors of Akt signaling.

Tagging isogenic Ba/F3 cells individually with two different chromophores

The BaFiso system is a dual fluorescence cell-based screening system in which compounds can be readily monitored thanks to the stable expression of yellow or cyan fluorescent proteins that individually tag each of the isogenic cell lines (Fig. 1). To introduce the genes encoding the different fluorescent proteins into Ba/F3 cells, retroviral supernatants were generated by transfection of LinX packaging cells. Through clonal propagation, we were able to establish Ba/F3 cell lines that robustly and homogeneously expressed ECFP (Fig. 2A and B) or EYFP (Fig. 2C and D). Stable transfectants of these proteins were FACS-sorted to ensure that they expressed similar levels of the fluorescent reporter protein.

Generation of double stable Ba/F3 cell lines

The strategy described here is based on paired isogenic cell lines whose survival in the absence of IL-3 is sustained by the activation of independent signaling pathways. Several signaling pathways have been implicated in IL-3-mediated survival, including those involving Akt and Stat5 [10,11]. In order to introduce constitutively active forms of these genes into our dual fluorescence cell-based system, we used retroviral constructs carrying a myristoylated derivative of Akt and STAT5A1*6, which contains two activating amino acid substitutions [11]. The yellow labeled Ba/F3 cells were used to generate Akt-dependent reporter cells whereas the cyan tagged cells were used to establish PI3K/Akt independent reporter cells. The retroviral supernatants of LinX packaging cells were employed to transduce Ba/F3/EYFP cells (BY) with myr-Akt and Ba/F3/ECFP cells (BC) with STAT5A1*6 (Fig. 2E). Stably expressing cell clones were selected and the expression of the transgenes was confirmed by western blot analysis (Fig. 2F). The level of Akt expression was monitored using an antibody that recognizes Akt irrespective of its phosphorylation state. Akt migrates as a single band with an apparent molecular weight of 60 kDa, although a larger protein was also identified in immunoblots of Akt from lysates of BYA cells. This additional form can be explained by the difference in size produced by the myristoylation signal present in the constitutively active form of Akt used to generate the BYA cell line. Despite the endogenous Stat5a protein present in the parental BC cells, ectopic expression of the constitutively active form of Stat5a in BCS cells could be unequivocally demonstrated in western blots probed with an antibody directed against the Flag-tag. Indeed, STAT5A1*6 expression also increased the total Stat5a protein level in BCS cells as shown by immunoblotting using an antibody recognizing Stat5.

Figure 1. Schematic overview of the BaFiso assay system. BaFiso consists of paired isogenic cell lines that have been engineered to acquire IL-3 autonomous growth through constitutive activation of Akt or Stat5 signaling. The two cell lines to be compared are individually tagged with either yellow or cyan fluorescent proteins. Equal numbers of yellow and cyan cells were co-cultured, treated with compounds and the change in the relative cell number was calculated on the basis of the distinct fluorescent proteins measured.
Our strategy aims to identify lead compounds that specifically kill test cells with activated Akt signaling (yellow cells) and that spare the otherwise isogenic control cells (cyan cells).

To examine whether PI3K/Akt or Stat5 signaling is indeed activated in the stable BYA or BCS cells respectively, we analyzed downstream elements in these two pathways. Phosphorylation of Akt (Ser473) has been widely used as a readout of activation of the PI3K pathway. When we compared the level of Akt phosphorylation in lysates of BY and BYA cells cultured in the presence of IL-3, there was a dramatic increase in Ser473 phosphorylation of Akt in BYA cells, reflecting the activity of this pathway. To investigate whether the activation of Akt in BYA cells had an impact on downstream events, we analyzed the Thr389 phosphorylation of the linker domain of the p70 S6 kinase that is constitutively activated upon overexpression of a gag fusion of Akt [12]. There was a significant increase in the intensity of the band corresponding to p70 S6 kinase (Thr389) in BYA cells when compared to BY control cells. On the other hand, the expression of the known STAT5 target gene, pim-1, was upregulated upon expression of constitutively activated Stat5a, consistent with previous studies [13].

Ectopic expression of activated Akt and Stat5a confers IL-3 independence

Consistent with previous reports, expression of constitutively active mutants of Akt and Stat5a provides signals for cytokine-independent survival of Ba/F3 cells [9,11]. The increased resistance to IL-3 withdrawal of the BYA and BCS cell lines when compared to the parental BY and BC cell lines was confirmed by morphological assessment. Parental BY and BC cells were cultured in the presence or absence of IL-3 and the degree of cell death was assessed after 24 hours by microscopic examination (Fig. 3A). The number of cells with an apoptotic phenotype increased significantly after IL-3 withdrawal in the cultures. The effect of the constitutive activation of Akt or Stat5 signaling was examined when IL-3 was withdrawn from representative BYA and BCS cell clones. As such, the capacity of the constitutively active forms of the signaling molecules Akt and Stat5a to impede apoptosis was evident and accordingly, cell death was dramatically reduced in Ba/F3 cells ectopically expressing myr-Akt or STAT5A1*6, even in the absence of IL-3 (Fig. 3A). We also determined the metabolic activity as a measure of cell viability using the alamar blue assay, in which a redox indicator changes color from blue to pink depending on the metabolic status of the cells (Fig. 3B). The metabolic activity of myr-Akt expressing BYA cells was significantly higher in the absence of IL-3 than that of the parental cells. Similarly, STAT5A1*6 also maintained the activity of BCS cells albeit to a slightly lesser degree (Fig. 3C). We examined the time course of cell viability following IL-3 withdrawal (Fig. 3D) and 24 hours after IL-3 deprivation, approximately 60% of the BYA or BCS cells remained viable compared to approximately 25% of the parental BY and BC cell lines. The viability of BY and BC cells further diminished after 60 hours of IL-3 starvation to 13% and 9%, respectively. In contrast, the viability of BYA and BCS cells remained around 50% after 60 hours in the absence of IL-3.
The protection from IL-3 withdrawal afforded by enhanced Stat5 signaling is independent of Akt activity

The capacity to monitor pathway-specific cytotoxicity in our assay is based on the use of isogenic control cells that confer survival in the absence of IL-3 in an Akt-independent manner. Since Akt is one of the major downstream targets of PI3K signaling, its phosphorylation status is commonly used to monitor the activity of the PI3K/Akt pathway. We analyzed the impact of ectopic expression of myr-Akt and STAT5A1*6 on Akt activation using an antibody that specifically recognizes Ser473 phosphorylated Akt (Fig. 4). The intensity of Akt phosphorylation was compared to the overall expression of Akt and α-tubulin using specific antibodies. Parental BY and BC cells possess relatively low basal levels of Akt phosphorylation which further decreased upon withdrawal of IL-3. Ectopic constitutively active Stat5a expression had no significant impact on the phosphorylation of Akt in the presence or absence of IL-3, indicating that the enhanced survival of BCS cells triggered by STAT5A1*6 upon IL-3 starvation is independent of Akt signaling. Consistent with previous studies, overexpression of myr-Akt dramatically augmented Ser473 phosphorylation [14] and high levels of Akt phosphorylation were still detected in the complete absence of IL-3 (Fig. 4). In conclusion, these results show that we have generated stable Ba/F3-derived cell lines in which the inhibition of the intrinsic apoptotic machinery is mediated by ectopic expression of constitutively active mutants of Akt or Stat5a. Since the abrogation of IL-3 dependence occurred through the activation of independent signaling pathways, these cell lines can be used together as paired isogenic test and control cells to identify pathway specific inhibitors.

Detection of selective toxicity associated with activated Akt signaling

The BaFiso assay was set up in 96-well plates and with an automated workflow [15]. Equal numbers of BYA and BCS cells were mixed and seeded at a density of 20,000 cells per well using a multidrop dispenser. All liquid handling for treatment and staining was carried out by a robotic workstation and the BD Pathway 855 cell imaging platform was used for automated image acquisition. In order to test the sensitivity and the capacity to detect EYFP and ECFP separately using the BD Pathway 855 bioimager, the co-cultured cells were photographed for the two fluorochromes sequentially and the images superimposed. In order to avoid ECFP bleeding into the EYFP emission channel, a special filter set was used that clearly separates the two fluorochromes. A third fluorochrome, the far red/infrared fluorescent cell-permeant DNA probe, DRAQ5, was employed to perform automated segmentation of cell nuclei. An image algorithm was applied to segment the cell nucleus based on local thresholds. The ratios of the cyan and yellow fluorescence signals were determined by dividing the number of ECFP positive cells by the number of EYFP positive cells in each well. As a proof of principle, we sought to determine how a panel of commercially available agents of known mechanism of action would behave in the BaFiso screen.
The test compounds included: the DNA-damaging chemotherapeutic compound cisplatin; the modulator of membrane lipid structure Minerval; the Akt inhibitor 10-(4′-(N-diethylamino)butyl)-2-chlorophenoxazine (Akt Inhibitor X); the protein tyrosine kinase inhibitor Genistein; the inhibitor of nuclear export Leptomycin B; the broad protein kinase inhibitor Staurosporine; the PDK1 inhibitor UCN-01; the Raf1 Kinase Inhibitor; the PI3K inhibitor LY294002; the topoisomerase II inhibitor Etoposide; and Lithium chloride, a GSK-3 inhibitor. A robotic workstation was used to prepare mother plates containing three different concentrations of these compounds. Co-cultured BaFiso BYA/BCS cells were exposed to equal volumes of the test compounds, resulting in a final concentration range greater than two orders of magnitude around the IC50 value for each compound. The final concentration of dimethyl sulfoxide was kept at 1% after addition of the compounds. Each plate contained several internal controls, including untreated wells and wells treated with different concentrations of DMSO or ethanol alone. The performance of the BaFiso system upon exposure to the panel of test compounds was measured in terms of the ECFP/EYFP ratio. The majority of the test compounds reduced the number of DRAQ5 positive cells (Fig. 5A) without affecting the ratio of cyan and yellow fluorescent signals (Fig. 5B), suggesting a non-selective cytotoxic effect on both BaFiso cell lines independent of the gene that has been engineered to sustain interleukin-3 independent survival of the cells. In contrast, exposure to Minerval or LiCl did not affect the viability of the BaFiso cell lines (Fig. 5A) nor did it alter the ratio of the fluorescent signals (Fig. 5B). Most importantly, two compounds that are known to inhibit the kinase activity of Akt, UCN-01 and Akt Inhibitor X [16,17], selectively compromised the viability of the yellow tagged BYA cells thereby increasing the ratio of cyan to yellow fluorescent cells (Fig. 5B, C and D). In contrast, the broad spectrum PI3K isoform inhibitor LY294002 failed to affect the proportion of the fluorescent signals, indicating that the myristoylated form of Akt bypasses the requirement of PIP3-mediated membrane recruitment for its activity. Taken together, these data demonstrate that we have developed an image-based screening system that is capable of identifying specific inhibitors of the Akt pathway.

Discussion

The most frequently used anti-cancer therapies were discovered on the basis of their anti-proliferative activity in functional cell assays but with no pre-existing knowledge of the mechanism of action. As a result, none of the current drugs directly targets the molecular lesions responsible for malignant transformation and they are not selective. Indeed this lack of selectivity between cancer cells and normal cells is currently one of the main reasons for the failure of conventional chemotherapy. In recent years, our understanding of the genetics of human cancer has increased rapidly, enabling more rational approaches to drug discovery for anti-cancer therapies to be adopted. Accordingly, the present study set out to develop a rational cell-based drug discovery strategy, an approach that has historically been compromised by the lack of appropriate control cells [18]. With the objective of identifying lead compounds that specifically kill cells with activated Akt signaling and that spare control cells, we have combined the use of co-cultured isogenic cell lines with fluorescent technology.
We introduced a myristoylated form of Akt which constitutively localizes to the plasma membrane, bypassing the requirement for PIP3 in Akt activation. This myr-Akt has been shown to constitutively inactivate proapoptotic downstream targets [14]. In order to generate Ba/F3 cells that survive in the absence of IL-3 independent of activated PI3K/Akt signaling, we transduced Ba/F3 cells with a retrovirus encoding STAT5A1*6, an activated mutant of STAT5. STAT5A1*6 has two amino acid substitutions and it is constitutively phosphorylated, localized in the cell nucleus and transcriptionally active in the absence of IL-3 [11]. In the BaFiso system presented here, the protective potential of myr-Akt is slightly greater than that provided by STAT5A1*6, which may be explained by the greater expression of myr-Akt. The design of the screen relies on the lack of relevant crosstalk between the pathways engineered to support IL-3 independent survival. Previous work has shown that the induced expression of bcl-xL and pim-1 promotes the IL-3-independent survival of Ba/F3 cells upon activation of STAT5 [13]. In contrast, studies in multiple cell lines suggest that Akt phosphorylates and inactivates proapoptotic proteins such as GSK-3β, Foxo3a and Bad in response to IL-3 [8,19,20]. We confirmed that the activation of Stat5 signaling in BCS cells did not increase Akt activity either in the presence or absence of IL-3. Another common source of interference to be mitigated in multiplexed screening procedures is the bleed-through of fluorescence from one channel to the other. BaFiso allows simultaneous viewing of three different fluorescent signals and sharp separation of the emission signals from the cyan and yellow proteins is achieved using a special filter set. We implemented BaFiso as an automated live-cell assay using a multidrop dispenser, a robotic workstation and a robotic cell imaging platform. We assessed the properties of this HTS co-culture assay using a panel of test compounds of known activity. The cytotoxicity of the test compounds was monitored by quantifying the DRAQ5 labelled cells and all compounds tested except LiCl and Minerval reduced the viability of Ba/F3 cells. The fact that only two compounds known to selectively interfere with Akt signaling, Akt inhibitor X and UCN-01, reduced the number of yellow tagged BYA cells demonstrates the specificity of the BaFiso system. The Akt inhibitor X is an N-substituted phenoxazine that inhibits the activity of Akt even in the absence of its pleckstrin homology domain and it has been suggested that it may bind in the ATP binding site [17]. In contrast, UCN-01 has been reported to inhibit several kinases including PDK1, a key regulator of Akt activity [16]. Interestingly, staurosporine, which differs from UCN-01 only by the absence of a hydroxy group on the lactam ring, failed to change the ratio of the BaFiso cell lines. A specificity analysis against a kinase panel revealed different patterns of inhibition for UCN-01 with respect to staurosporine [16]. It remains to be determined if these differences in specificity could account for the different behaviour observed for these two compounds in the BaFiso assay. The BaFiso screening design presented here offers some major advantages over traditional in vitro biochemical assays or more classical cellular assays. Co-culture and simultaneous testing of the paired isogenic cell lines in this assay provides an internal control and eliminates errors resulting from separate assessments.
BaFiso is an image-based high-throughput assay that enables compounds that produce artefacts and cytotoxicity to be identified on a single-cell basis. Live cell imaging of the BaFiso cell lines permits the repeated monitoring of the same cells over the time course of an experiment, leading to a more accurate assessment that minimizes the variability in cell numbers between wells. Finally, the dual fluorescence co-culture system used in BaFiso is adaptable to any gene or pathway that can support IL-3 independent survival of Ba/F3 cells.

Expression Vectors and Reagents

The enhanced fluorescent protein vectors (pECFP-C1 and pEYFP-C1) were purchased from Clontech. The cDNAs encoding ECFP and EYFP were subcloned into the SnaBI sites of the pBABEpuro retroviral vector. The myr-Akt was kindly provided by Dr. Philip Tsichlis and we PCR amplified myr-Akt-HA using forward 5′-CGCGGATCCATGGGGAGCAGCAAGAGCAAGC-3′ and reverse 5′-ACGCGTCGACTCATCTAGAAGCGTAATCTGGAACC-3′ primers, before subcloning the BamHI and SalI digested PCR product into the corresponding restriction site of the retroviral vector pWZL-Blast. The Stat5A1*6-Flag construct was a kind gift from T. Nosaka (University of Tokyo). The nature of all constructs was confirmed by DNA sequencing. All chemicals were purchased from commercial sources except UCN-01, which was kindly provided by NCI, Cisplatin, which was provided by C. Navarro (Universidad Autónoma de Madrid, Spain), and Minerval, which was generously provided by P. Escriba.

Cell Culture

Murine pro-B Ba/F3 cells were obtained from the American Type Culture Collection (ATCC) and maintained in RPMI 1640 containing: 10% fetal calf serum; 2 mM L-glutamine; 50 µM 2-mercaptoethanol (Sigma); antibiotics and antimycotics (Gibco); and 3 ng/ml of recombinant murine IL-3 (R&D Systems, Minneapolis, MN, USA). LinXE ecotropic retrovirus-producing cells [21] were grown in Dulbecco's modified Eagle's medium with GlutaMAX supplemented with 10% fetal bovine serum (FBS), penicillin, streptomycin and fungizone (Gibco). Cell cultures were maintained in a humidified incubator at 37 °C with 5% CO2. To remove IL-3, the cells were washed twice in PBS at room temperature. Retroviral constructs were introduced into packaging cells by standard calcium phosphate transfection and retroviral-mediated gene transfer was performed as described previously [22]. After infection of Ba/F3 cells with retroviral supernatants containing either EYFP or ECFP, stable cell lines were selected in medium containing 2 µg/ml of puromycin for one week. In order to establish Ba/F3 cell lines homogeneously expressing EYFP or ECFP, we performed clonal propagation in ClonaCell-TCS semisolid culture medium (Stem Cell Technologies, Vancouver, Canada) containing 2 µg/ml puromycin according to the manufacturer's protocol. Ba/F3 cell clones stably expressing EYFP (BY cells) or ECFP (BC cells) were used as parental cells for the secondary stable infection with retroviral supernatants containing either myr-Akt or Stat5A1*6-Flag, respectively. Stable Ba/F3 cells co-expressing EYFP and myr-Akt (BYA cells) were selected with 0.8 mg/ml neomycin and 1 µg/ml puromycin for 2 weeks. Stable Ba/F3 cells co-expressing ECFP and Stat5A1*6-Flag (BCS cells) were selected with 15 µg/ml blasticidin and 1 µg/ml puromycin for 2 weeks. The generation of cell clones was performed as described above. Fluorescence-activated cell sorting (FACS) of EYFP or ECFP expressing cells was performed on a FACSAria (BD Biosciences, San Jose, CA, USA).
Western Blot Analysis

Cells incubated under different conditions were washed twice with TBS prior to lysis in buffer containing: 50 mM Tris-HCl, 150 mM NaCl, 1% NP-40, 2 mM Na3VO4, 100 mM NaF, 20 mM Na4P2O7, and protease inhibitor cocktail (Roche Molecular Biochemicals, Indianapolis, IN). Proteins were resolved on 10% SDS-PAGE, and transferred to PVDF membranes (Immobilon-P, Millipore). The membranes were incubated with the first antibody overnight at 4 °C, washed and incubated with anti-mouse (1:10000) or anti-rabbit (1:5000) horseradish peroxidase conjugated antibodies. Immunoreactive proteins were visualized using the enhanced chemiluminescence (ECL) Western blotting detection system (Amersham Pharmacia Biotech) and Kodak X-Omat LS film (Kodak). Antibodies against phospho-AKT (Ser473) and AKT were purchased from Cell Signaling (Beverly, MA), those against STAT-5 from R&D Systems (Minneapolis, MN, USA), and the antibodies against α-tubulin and Flag were obtained from Sigma (St Louis, MO).

Survival assay

Each cell line was individually seeded at 10⁴ cells per well in a 96-well plate, in the presence or absence of IL-3. AlamarBlue™ (Serotec, Oxford, UK) was added to the culture medium at a final concentration of 10% (v/v) and after 24 hours, absorbance was measured at the two wavelengths of maximal absorbance of the reduced and oxidized forms of AlamarBlue™, 570 and 600 nm, using a Victor 1420 Multilabel Counter (Perkin-Elmer, Wellesley, USA). The percentage cell survival was calculated according to the manufacturer's instructions. Time course experiments of cell viability post IL-3 withdrawal were performed using trypan blue exclusion.

BaFiso assay

Equal numbers of parental BC/BY cells or activated test cells BCS/BYA were mixed in culture medium deprived of IL-3 and seeded in 96-well black clear-bottom microplates coated with Poly-D-Lysine (Becton Dickinson Biosciences, San Jose, California, USA) at a density of 20,000 cells per well using a Titan Multidrop 384 automatic dispenser (Titertek Instruments, Inc., Huntsville, AL). The final volume of the cell suspension was 200 µl in each well. After incubation at 37 °C with 5% CO2 for 1 hour, the far-red fluorescent cell-permeable DNA probe DRAQ5™ (Biostatus Ltd, Leicestershire, UK) was added at a final concentration of 5 µM to all wells 15 minutes prior to obtaining the first images. Then, 2 µl of each test compound or vehicle was transferred from the mother plates to the assay plates using a robotic workstation (Biomek® FX, Beckman). Cells were incubated in the presence of the test compounds for 12 hours.

Image acquisition and processing

Assay plates were read on the BD Pathway™ 855 Bioimager (Becton Dickinson Biosciences, San Jose, California, USA) equipped with a 430/25 nm / 470/30 nm ECFP excitation/emission filter, a 500/20 nm / 535/30 nm EYFP excitation/emission filter and a 635/20 nm / 695/55 nm DRAQ5 excitation/emission filter. Images for each well were acquired in the three different channels for ECFP, EYFP and DRAQ5 using a 20× dry objective. The plates were exposed for 0.55 ms (Gain 14) to acquire ECFP images, 0.68 ms (Gain 32) for EYFP images and 0.47 ms (Gain 5) to acquire DRAQ5 images. The far red fluorescence intensity of DRAQ5 was used to perform automated segmentation of the cell nuclei and in turn to quantify the total cell number.

Data analysis

The data output of the BD Pathway Bioimager is provided as standard text files. These files contained the raw fluorescence data for each cell population.
Data were imported into the data analysis software, BD Image Data Explorer, and the ratios of the ECFP positive cells to EYFP positive cells were determined by dividing the number of cyan fluorescence-emitting single cells by the number of yellow fluorescence-emitting single cells in each well. This procedure was repeated for each well. By measuring changes in the ratio between the cyan and yellow signal, the possible pathway-specific cytotoxicity of each compound could be determined. In order to estimate the quality of the HCS assay, the Z′ factor was calculated by the equation: Z′ = 1 − [(3 × std. dev. of positive controls) + (3 × std. dev. of negative controls)] / |(mean of positive controls) − (mean of negative controls)|, as described previously [23].
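For illustration only (not the authors' code), the two quantities described above, the per-well ECFP/EYFP ratio and the Z′ factor of Zhang et al. [23], can be computed from well-level counts as in the following minimal Python sketch; the control values are entirely hypothetical:

```python
import numpy as np

def ecfp_eyfp_ratio(n_ecfp, n_eyfp):
    """Per-well ratio of cyan (ECFP) to yellow (EYFP) positive cells."""
    return n_ecfp / n_eyfp

def z_prime(pos, neg):
    """Z' factor for assay quality:
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical control wells: ECFP/EYFP ratios from wells treated with a
# selective Akt inhibitor (positive) and from vehicle-only wells (negative).
positive = [3.1, 2.8, 3.3, 2.9]
negative = [1.0, 1.1, 0.9, 1.0]
print(f"Z' = {z_prime(positive, negative):.2f}")  # values above ~0.5 indicate a robust assay
```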
Increasing the Analytical Sensitivity by Oligonucleotides Modified with Para- and Ortho-Twisted Intercalating Nucleic Acids – TINA

The sensitivity and specificity of clinical diagnostic assays using DNA hybridization techniques are limited by the dissociation of double-stranded DNA (dsDNA) antiparallel duplex helices. This situation can be improved by the addition of DNA stabilizing molecules such as nucleic acid intercalators. Here, we report the synthesis of a novel ortho-Twisted Intercalating Nucleic Acid (TINA) amidite utilizing the phosphoramidite approach, and examine the stabilizing effect of ortho- and para-TINA molecules in antiparallel DNA duplex formation. In a thermal stability assay, ortho- and para-TINA molecules increased the melting point (Tm) of Watson-Crick based antiparallel DNA duplexes. The increase in Tm was greatest when the intercalators were placed at the 5′ and 3′ termini (preferable) or, if placed internally, for each half or whole helix turn. Terminally positioned TINA molecules improved analytical sensitivity in a DNA hybridization capture assay targeting the Escherichia coli rrs gene. The corresponding sequence from the Pseudomonas aeruginosa rrs gene was used as cross-reactivity control. At 150 mM ionic strength, analytical sensitivity was improved 27-fold by addition of ortho-TINA molecules and 7-fold by addition of para-TINA molecules (versus the unmodified DNA oligonucleotide), with a 4-fold increase retained at 1 M ionic strength. Both intercalators sustained the discrimination of mismatches in the dsDNA (indicated by ΔTm), unless placed directly adjacent to the mismatch, in which case they partly concealed ΔTm (most pronounced for para-TINA molecules). We anticipate that the presented rules for placement of TINA molecules will be broadly applicable in hybridization capture assays and target amplification systems.

Introduction

The stability of double-stranded DNA (dsDNA) is naturally limited to allow cellular processes that require helix dissociation such as gene transcription, gene regulation and cell division. However, the sensitivity of DNA diagnostic assays depends upon the stability of dsDNA helices. The analytical sensitivity of an assay can be improved by decreasing stringency, but at the risk of cross-reactivity to other targets. In addition, we report the synthesis of a novel ortho-TINA amidite using the phosphoramidite approach. Until now, ortho-TINA containing oligonucleotides have been synthesized by postsynthetic oligonucleotide modification using the Sonogashira Pd-catalyzed coupling reaction [12,16]. Although this approach is advantageous for the screening of different intercalators [14], to achieve sufficiently high coupling yields the reaction has to be repeated several times with fresh portions of the Sonogashira mixture. This can affect the subsequent oligonucleotide purification process. The phosphoramidite approach permits the production of a large number of oligonucleotides with several ortho-TINA molecule insertions in the sequences. We find that inclusion of para- as well as ortho-TINA molecules in an oligonucleotide is capable of improving the analytical sensitivity of probe hybridization without increasing cross-reactivity in a competitive antiparallel duplex hybridization capture assay.
We anticipate that TINA molecules will enable a general improvement in the performance of future clinical diagnostic assays based upon conventional hybridization, as well as Polymerase Chain Reaction (PCR) and other primer-based enzymatic target amplification systems.

Synthesis of ortho-TINA amidite

As an alternative to the traditional method of postsynthetic oligonucleotide modification, we prepared the ortho-TINA monomer for use on a DNA synthesis platform via the more convenient phosphoramidite approach. The newly designed ortho-TINA phosphoramidite was synthesized in two steps from a known starting compound [12]. Full details of the synthesis procedure are provided in the materials and methods section, and in Supplementary Data S1. In brief (Figure 2), the starting compound (3) was prepared (80% overall yield) in three steps from the commercially available compounds S-(+)-2,2-dimethyl-1,3-dioxolane-4-methanol (1) and 2-iodobenzylbromide (2). In the first step of the ortho-TINA phosphoramidite synthesis, 1-ethynylpyrene was coupled to compound 3 using the Sonogashira coupling mixture [14]. To eliminate oxygen, the reaction mixture was degassed with nitrogen prior to the addition of tritylated compound 3; when the reaction mixture was not degassed, the product yield decreased significantly. DMT-protected ortho-TINA (4) was obtained as a yellow foam (85% yield), and its structure was confirmed by NMR spectrometry. Finally, the secondary hydroxyl group was phosphitylated. Signals in the ³¹P NMR spectrum with chemical shifts of 148.9 and 149.3 ppm, respectively, confirmed the formation of the phosphoramidite (5).

Thermal stability of para- and ortho-TINA modified oligonucleotides

To determine the optimal placement of para- and ortho-TINA molecules for stabilizing antiparallel DNA duplexes, we used a fluorescence resonance energy transfer (FRET) based high-speed melting curve method, as described and validated previously [17,18]. An 18-mer oligonucleotide from the Escherichia coli (E. coli) rrs gene (base pairs 772-789) was used as the target. Table 1 shows the melting points (Tm) of para- and ortho-TINA modified oligonucleotides and the changes in Tm associated with mismatches in the target strand (ΔTm). The full data set can be found in Table S1. Ortho- and para-TINA molecule insertions in the oligonucleotide increased Tm when placed terminally on the oligonucleotide, although the para-TINA molecule produced the greater increase. Maximum stability was reached when there was a modification at both termini. Placed internally, para-TINA molecules decreased Tm in all positions, especially when at the center of the oligonucleotide, whereas the positive effect of ortho-TINA molecules on Tm was neutralized towards the center of the oligonucleotide. The combination of a terminal para- or ortho-TINA molecule with an internal para- or ortho-TINA molecule showed the highest increases in Tm when the two modifications were separated by six or twelve nucleotides, equaling a half or complete helix turn. Both ortho-TINA and (especially) para-TINA molecules were found to partly conceal the ΔTm of a mismatch immediately next to them, but when the mismatch was moved one or more nucleotides away, they had no effect on ΔTm. The stabilizing effect of para- and ortho-TINA molecules increased when the oligonucleotide sequence was shortened from eighteen to sixteen nucleotides.

Effect of ionic conditions on dsDNA E. coli rrs gene PCR product capture by para- and ortho-TINA containing oligonucleotides
Until now, the effects of TINA molecules have only been evaluated by Tm analyses, which are good model systems, but do not provide information on how TINA-modified oligonucleotides will perform as competitive annealing probes. To address this issue, we used the Luminex® 200™ instrument to analyze the capture of denatured biotinylated E. coli rrs PCR product by magnetic microspheres coated with oligonucleotide sequences targeting base pairs 772-789 of the E. coli rrs gene. Figure 3 and Figure S1 show the capture of biotinylated rrs PCR product (in a two-fold dilution series from 2.5 µl to 0.0098 µl rrs PCR product) by unmodified DNA oligonucleotides and oligonucleotides terminally modified with para- or ortho-TINA molecules in buffers of increasing ionic strength (100-1,000 mM monovalent cation). The overall level of median fluorescence intensity (MFI) was generally higher at greater ionic strength. In 150 mM buffer, the ortho-TINA modified oligonucleotide increased the analytical sensitivity 27-fold and the para-TINA modified oligonucleotide increased the analytical sensitivity seven-fold, compared with the unmodified DNA oligonucleotide. In 300 mM buffer, the ortho-TINA modified oligonucleotide increased analytical sensitivity eleven-fold and the para-TINA modified oligonucleotide six-fold, and even at 1,000 mM, a four-fold increase in analytical sensitivity was observed with both modified oligonucleotides compared with the unmodified equivalent. To ensure that the increased analytical sensitivity was target sequence independent, the capture sequence was changed to base pairs 446-463 of the E. coli rrs gene. The corresponding sequence from the Pseudomonas aeruginosa (P. aeruginosa) rrs gene is the most closely related sequence among the human pathogens. Consequently, P. aeruginosa was used as cross-reactivity control; its sequence contains a cluster of four mismatches to the E. coli sequence. A helper oligonucleotide (targeting E. coli rrs gene base pairs 464-483) was also added to prevent secondary structure formation (not required for base pairs 772-789). Changing the target sequence did not change the capture curves for the unmodified DNA and ortho-TINA modified oligonucleotides, whereas para-TINA modified oligonucleotides did not perform as well as for the 772-789 base pair target. There was no cross-reactivity with the P. aeruginosa control sequence.

Effect of hybridization temperature on dsDNA E. coli rrs gene PCR product capture by para- and ortho-TINA containing oligonucleotides

To investigate whether the modulating effect of TINA molecules was temperature specific, the DNA hybridization assay was repeated at annealing temperatures from 42-62 °C at three different ionic strengths and with two different concentrations of the E. coli rrs gene 446-463 base pair target sequence. P. aeruginosa was used as a cross-reactivity control sequence. As shown in Figure 4, the relative MFI of the terminally modified ortho- and para-TINA and unmodified DNA oligonucleotides remained unchanged between 42 °C and 52 °C (the temperature used in the ionic experiments), with the modified oligonucleotides generally providing the highest MFI. Above 52 °C the difference in MFI rapidly diminished due to loss of signal. As expected, the level of cross-reactivity with the P. aeruginosa oligonucleotides rose with increasing ionic strength as the annealing temperatures decreased.

Effect of unlabeled helper oligonucleotide on dsDNA E. coli rrs gene PCR product capture
Capture of structured rRNA targets has previously been aided by helper nucleotides that prevent the formation of secondary structures in the RNA [19]. This is in contrast to 16S E. coli rRNA nucleotides 772-789, for which no secondary structure has been found. Accordingly, in the studies reported here, we included an unlabeled DNA helper oligonucleotide targeting E. coli rrs gene base pairs 464-483 when capturing the E. coli rrs gene base pair 446-463 sequence, to avoid formation of secondary structures in the denatured single-stranded DNA. As shown in Figure 5, we also examined the individual effect of this DNA helper oligonucleotide on analytical sensitivity when targeting E. coli rrs gene base pairs 446-463. Addition of the helper oligonucleotide increased the analytical sensitivity of the unmodified DNA and ortho- and para-TINA modified oligonucleotides by approximately two-fold. As shown in earlier experiments (Figure 3), targeting base pairs 446-463 with TINA/DNA modified oligonucleotides plus the helper nucleotide (to relieve secondary structure) gave similar levels of capture sensitivity to those obtained when targeting base pairs 772-789 (no secondary structure).

Discussion

In the current paper, we have characterized the stabilizing effect and established design rules for placement of ortho- and para-TINA molecules into Watson-Crick based antiparallel DNA duplexes. According to thermal stability analyses, both para- and ortho-TINA molecules should be placed terminally in the nucleotide sequence, and preferably at both the 5′ and 3′ terminal positions, to achieve a maximum increase in Tm. Placement of para-TINA molecules at the 5′ and 3′ termini gave the most pronounced increase in Tm compared to ortho-TINA molecules. The stabilizing effect of para- and ortho-TINA molecules changes when they are placed internally in the oligonucleotide sequence. Ortho-TINA molecules have either a positive effect or no effect on Tm, whereas para-TINA molecules decrease Tm when placed internally. However, neither para- nor ortho-TINA molecules interfere with mismatch-induced ΔTm, unless they are placed internally directly adjacent to the mismatch. Overall, when several TINA molecules are placed in an oligonucleotide, the highest increase in Tm is observed if they are placed at the 5′ and 3′ terminal positions (preferable) or, if placed internally as well, with the modifications separated by a half or whole helix turn. The present thermal stability study was done using a single target sequence (the E. coli rrs gene base pairs 772-789). The validity of the design rules is therefore still to be established, but the design rules suggested in this paper are in concordance with previously published thermal stability data on nucleic acid intercalator molecules in other target sequences [11-16,20]. The design rules identified in this study are also identical to the design rules we established previously for placement of para-TINA molecules into Hoogsteen based parallel DNA triplex formations [18]. Since thermal stability data for a number of different nucleic acid intercalating molecules are in perfect agreement with the herein presented design rules [11-16,20], we speculate whether these design rules might represent general design rules for placement of intercalator molecules into Watson-Crick based antiparallel duplex and Hoogsteen type triplex formations.
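For reference, the Tm values underlying design rules like these are taken as the peak of the first derivative of the fluorescence melting curve (the definition used in the melting curve acquisition section below). A minimal Python sketch, not from the paper, assuming a smoothed dissociation trace in which the FRET signal drops as the duplex melts:

```python
import numpy as np

def tm_from_melting_curve(temp_c, fluorescence):
    """Estimate Tm as the peak of the first derivative of a fluorescence
    melting curve; for a dissociation curve this is the peak of -dF/dT."""
    dF_dT = np.gradient(fluorescence, temp_c)
    return temp_c[np.argmin(dF_dT)]  # most negative slope = steepest signal loss

# Hypothetical dissociation trace: sigmoidal loss of FRET signal around 65 °C
temp = np.linspace(37, 95, 300)
signal = 1.0 / (1.0 + np.exp((temp - 65.0) / 1.5))
print(f"Tm ≈ {tm_from_melting_curve(temp, signal):.1f} °C")
```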
Previously, para-TINA has been tested for triplex and quadruplex hybridization in cellular systems [21,22], but the present study is the first evaluation of para- and ortho-TINA molecules in antiparallel DNA duplex based hybridization capture assays. The pronounced increase in analytical sensitivity conferred by para- and ortho-TINA molecules in antiparallel DNA duplex hybridization is noteworthy, especially since the increased analytical sensitivity is seen for two different target sequences, with and without a helper oligonucleotide. In addition, the specificity of the signal is maintained without cross-hybridization under a wide range of ionic conditions (100 mM to 1 M monovalent cations). As previously stated, the corresponding sequence from the P. aeruginosa rrs gene was used as a cross-reactivity control in the hybridization capture assay, as it is the most closely related sequence among the known human pathogens. This sequence contains a cluster of four mismatches to the E. coli sequence, so a more closely related sequence would have been desirable from a pure "cross-reactivity control" point of view. However, we decided to use the P. aeruginosa rrs gene sequence as cross-reactivity control, since we wanted the capture of biotinylated PCR product in the hybridization capture assay to reflect clinical diagnostic reality as closely as possible. So, the true impact of TINA molecules on oligonucleotide cross-reactivity is still to be established. The E. coli rrs gene base pair 772-789 target sequence was used in both the thermal stability study and the antiparallel duplex based hybridization capture assay. In the thermal stability study, placement of para-TINA molecules at the 5′ and 3′ termini gave the most pronounced increase in Tm compared to ortho-TINA molecules, but for capture of denatured E. coli rrs PCR product the analytical sensitivity was highest for ortho-TINA modified oligonucleotides. The thermal stability study reflects the temperature at which the fluorescence signal is changing at the highest rate, whereas the analytical sensitivity established in the hybridization capture assay reflects the hybridization to the target sequence in competitive annealing with the complementary strand of the PCR product. So even though the para-TINA modified oligonucleotides caused the highest Tm, the ortho-TINA modified oligonucleotides performed better in the competitive annealing hybridization capture assay. Since addition of ortho-TINA in particular to the oligonucleotides increases the analytical sensitivity, we expect that ortho-TINA molecules, in particular, will be beneficial for increasing sensitivity, without compromising target specificity, in future clinical diagnostic assays based on target hybridization capture as well as in target amplification systems. An example could be placement of an ortho-TINA molecule at the 5′ end of PCR primers to increase the efficacy of primer annealing, and thereby the overall efficacy of quantitative as well as end-point PCR reactions.

Synthesis of ortho-TINA amidite

Solvents were dried prior to use. All chemicals were obtained from Sigma-Aldrich (Brøndby, Denmark) and were used as purchased. The silica gel (0.040-0.063 mm) used for column chromatography was purchased from Merck & Co Inc. (Whitehouse Station, NJ, USA). Solvents used for column chromatography were distilled prior to use.
NMR spectra were measured on a Varian Gemini 2000 spectrometer at 300 MHz for ¹H using TMS (δ: 0.00) as an internal standard, and at 75 MHz for ¹³C using CDCl₃ (δ: 77.0) as an internal standard. 1-Ethynylpyrene coupling (iv) was accomplished using the Sonogashira coupling mixture [14]. The reaction mixture was degassed with nitrogen prior to addition of tritylated compound 3. DMT-protected ortho-TINA (4) was obtained as a yellow foam, and its structure confirmed by NMR spectrometry. The second step (v) was also performed in an inert nitrogen atmosphere, in the dark at 0 °C to RT. NMR spectrometry confirmed the formation of the phosphoramidite.

Oligonucleotides and fluorescence resonance energy transfer (FRET) system

All oligonucleotides were purchased from IBA GmbH (Göttingen, Germany) or DNA Technology A/S (Risskov, Denmark) on a 0.2 µmol synthesis scale with high performance liquid chromatography (HPLC) purification and subsequent quality control.

Melting curve acquisition

Melting curve experiments were performed on a LightCycler® 2.0 using 20 µl LightCycler® capillaries. 0.5 µM of each oligonucleotide was mixed with sodium phosphate buffer (50 mM NaH2PO4/Na2HPO4, 100 mM NaCl and 0.1 mM EDTA) at pH 7.0. Tm measurements were carried out using a standard program: (i) dissociation at 37 to 95 °C, ramp rate 0.2 °C/sec, 5 min hold at 95 °C; (ii) annealing at 95 to 37 °C, ramp rate 0.05 °C/sec, continued measurement of fluorescence; (iii) 5 min hold at 37 °C; and (iv) denaturation at 37 to 95 °C, ramp rate 0.05 °C/sec, and continued measurement of fluorescence. Tm was determined using fluorescence data from both the annealing and denaturation curves. No hysteresis was observed. Using LightCycler® Software 4.1 for melting curve analysis, Tm was defined as the peak of the first derivative. All melting curve determinations were conducted as single capillary measurements. A setup control (matching oligonucleotides D-624 and D-643) was included in all runs. Prior to Tm identification, runs were color compensated by subtraction of the fluorophore background fluorescence.

Coupling of oligonucleotides to Luminex® MagPlex® microspheres

Conventional DNA oligonucleotides were coupled to MagPlex®-C magnetic carboxylated microspheres following the carbodiimide coupling procedure for amine-modified oligonucleotides, as recommended by Luminex Corporation. In short, 2.5×10⁶ microspheres were activated in 0.1 M MES, pH 4.5, followed by addition of 0.2 nmol oligonucleotide and 25 µg EDC. The coupling reaction was incubated for 30 min in the dark, followed by addition of 25 µg EDC and another 30 min incubation. 1.0 ml of 0.02% Tween-20 was added and the supernatant was removed after magnetic separation for 1 min on a DynaMag™-2 magnetic particle concentrator (Invitrogen A/S, Tåstrup, Denmark). 1 ml of 0.1% SDS was added and vortexed, followed by magnetic separation and resuspension in 100 µl Tris-EDTA buffer, pH 8.0, and refrigerated storage. For ortho- and para-TINA modified oligonucleotides, a novel in-house carbodiimide/sulpho-NHS coupling procedure was followed. In a low retention microcentrifuge tube (Axygen, Union City, CA, USA), 2.5×10⁶ microspheres were washed and activated in 100 µl of 0.1 M MES, pH 6.0, then resuspended in 35 µl buffer. 125 µg sulpho-NHS was added, followed by 625 µg EDC, incubation in the dark for 15 min, addition of another 625 µg EDC and 15 min incubation. Activation buffer was removed and 97 µl of 0.1 M phosphate buffer, pH 7.2, was added, followed by 0.3 nmol oligonucleotide.
Microspheres were incubated for 2 hours at RT on a Thermo-shaker TS-100 (BioSan, Riga, Latvia) at 900 rpm, followed by optional overnight incubation without shaking. Microspheres were washed once in 100 µl of 0.1 M phosphate buffer, pH 7.2, blocked in 0.1 M phosphate buffer with 50 mM ethanolamine, pH 7.2, and incubated for 15 min at RT on the Thermo-shaker at 900 rpm. Microspheres were separated and resuspended in 100 µl Tris-EDTA buffer, pH 8.0, and stored at 5 °C. All separation steps involved placing the microcentrifuge tube in the magnetic separator for 1 min, with low speed vortexing for 20 sec after each addition of buffer or reagent. To ensure equal coupling efficiency for the carbodiimide coupling procedure and the carbodiimide/sulpho-NHS coupling procedure used for the ortho- and para-TINA modified oligonucleotides, a biotinylated oligonucleotide with or without terminal para-TINA modifications was included in each coupling protocol. The coupling efficiency was evaluated by incubation of 0.2 µl microspheres with 0.5 µg Streptavidin-R-PhycoErythrin Premium Grade (S-21388, Invitrogen A/S) with 10 µg albumin fraction V (Merck & Co Inc.), 0.03% Triton X-100 and 10 mM phosphate buffer, pH 6.4, with 200 mM NaCl. The reaction mixture was incubated for 15 min in an iEMS™ Incubator/Shaker HT (Thermo Fisher Scientific) at 25 °C and 900 rpm. After three washes in 10 mM phosphate buffer, pH 6.4, with 200 mM NaCl and 0.03% Triton X-100, 350 microspheres were counted on the Luminex® 200™ instrument. Similar coupling efficiencies were found using both procedures. Microspheres from a single coupling round were used in all experiments. PCR products were pooled and purified using NucleoSpin® Extract II PCR clean-up (Macherey-Nagel GmbH). The purified product was evaluated by gel electrophoresis on a 1.5% agarose gel in TAE buffer with ethidium bromide staining, using the GeneRuler™ 100 bp Plus DNA Ladder (Fermentas GmbH, St. Leon-Rot, Germany). The DNA concentration was 54.8 ng/µl, as determined by OD260 measurement on the NanoDrop™ 1000. The pooled PCR product was used in all experiments. Biotinylated PCR products were detected on the Luminex® 200™ instrument (Luminex Corp.). A 70 µl premix of microspheres, PCR product, Triton X-100 and helper oligonucleotide (for E. coli rrs gene base pair 446-463 capture) was mixed in an Eppendorf® twin.tec 96-well PCR plate and incubated at 95 °C for 10 min in a SensoQuest Labcycler (SensoQuest GmbH, Göttingen, Germany). The PCR plate was immediately transferred to ice for 2 min and 50 µl was transferred to a conical bottom 96 MicroWell™ Plate (NUNC, Thermo Fisher Scientific, Roskilde, Denmark) on ice, and 50 µl of a cold 2x hybridization buffer added. The final mixture consisted of 0.2 µl of the relevant microspheres (approximately 2,500 microspheres/well), a two-fold dilution series of biotinylated E. coli rrs gene PCR product from 2.5-0.0098 µl, 0.03% Triton X-100, and 1x hybridization buffer (20 mM NaH2PO4/Na2HPO4 adjusted with NaCl to monovalent cation concentrations of 100, 150, 200, 300, 400, 500 and 1000 mM at pH 7.0 (52 °C)). The mixture was incubated for 15 min in an iEMS™ Incubator/Shaker HT (Thermo Fisher Scientific) at 900 rpm and 52 °C, or at 42, 46, 50, 54, 58 or 62 °C in the temperature experiments.
After incubation, the plate was washed three times by using a 96-well magnetic separator (PerkinElmer, Skovlunde, Denmark), removing the supernatant, and adding 20 mM NaH2PO4/Na2HPO4 adjusted with NaCl to 50 mM monovalent cation concentration and 0.03% Triton X-100 at pH 7.0. Next, 0.5 µg Streptavidin-R-PhycoErythrin Premium Grade (S-21388, Invitrogen A/S, Tåstrup, Denmark) with 10 µg albumin fraction V (Merck & Co Inc.), 0.03% Triton X-100 and 1x hybridization buffer was added to each well. Plates were incubated for 15 min at 52 °C (or the relevant experimental temperature), and washed three times as previously described. Wash buffer was added and incubated for 30 min at RT before Luminex® 200™ analysis, counting 300 of each microsphere set. The final step at RT avoided decreasing background fluorescence in the Luminex® analysis due to sedimentation of unevenly sized microspheres [23]. All dilution series were run in triplicate, with results presented as mean MFI and 95% confidence intervals. Analytical sensitivity was defined as the limit of detection (LOD), calculated by adding three standard deviations to the mean background MFI. Differences in analytical sensitivity were defined as the ratio between the LODs of the DNA and ortho- or para-TINA modified oligonucleotides.

Supporting Information

Table S1: Change in Tm and ΔTm of Watson-Crick based antiparallel duplexes stabilized by para- (X) and/or ortho- (Y) TINA monomers. Tm was determined using 0.5 µM of each strand in 50 mM phosphate buffer, pH 7.0, with 100 mM NaCl and 0.1 mM EDTA. Tm was defined as the peak of the first derivative using both annealing and dissociation curves. Base mismatches are underlined and marked in bold blue. *Mismatch adjacent to TINA. (XLS)

Figure S1: Competitive annealing of ortho- or para-TINA terminally modified oligonucleotides compared with unmodified DNA oligonucleotide to denatured PCR products in buffer of increasing ionic strength (complete data). E. coli rrs biotinylated PCR product was captured by unmodified DNA oligonucleotide
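To make the LOD definition above concrete (mean background MFI plus three standard deviations, with the fold-difference in sensitivity taken as the ratio of LODs), here is a minimal Python sketch, not from the paper, using entirely hypothetical MFI values and dilution inputs:

```python
import numpy as np

def lod_threshold(background_mfi):
    """Detection threshold: mean background MFI + 3 standard deviations."""
    bg = np.asarray(background_mfi, float)
    return bg.mean() + 3.0 * bg.std(ddof=1)

def lod_input(inputs_ul, mfi, threshold):
    """Smallest PCR-product input in the dilution series whose mean MFI
    exceeds the detection threshold."""
    detected = [x for x, y in zip(inputs_ul, mfi) if y > threshold]
    return min(detected) if detected else None

# Hypothetical two-fold dilution series (µl of PCR product) and mean MFIs
inputs = [2.5, 1.25, 0.625, 0.3125, 0.15625, 0.078125]
mfi_dna = [900, 450, 210, 95, 60, 50]        # unmodified probe
mfi_tina = [2600, 1400, 700, 340, 160, 80]   # ortho-TINA modified probe

thr = lod_threshold([52.0, 48.0, 55.0])      # background (no-target) wells
lod_d = lod_input(inputs, mfi_dna, thr)
lod_t = lod_input(inputs, mfi_tina, thr)
print(f"LOD DNA: {lod_d} µl, LOD TINA: {lod_t} µl, "
      f"fold-improvement: {lod_d / lod_t:.0f}x")
```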
Face recognition for presence system by using residual networks-50 architecture

A presence system is a system for recording individual attendance in a company, school or institution. There are several types of presence systems, including manual presence systems using signatures, presence systems using fingerprints, and presence systems using face recognition technology. A presence system using face recognition technology implements a biometric system in the process of recording attendance. In this research we used one of the convolutional neural network (CNN) architectures that won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2015, namely the Residual Networks-50 architecture (ResNet-50), for face recognition. Our contribution in this research is to determine the effectiveness of the ResNet architecture under different configurations of hyperparameters. These hyperparameters include the number of hidden layers, the number of units in the hidden layer, the batch size, and the learning rate. Because hyperparameters are selected based on how the experiments perform and the value of each hyperparameter affects the final accuracy, we tried 22 configurations (experiments) to get the best accuracy. From these experiments we obtained a best model with an accuracy of 99%.

INTRODUCTION

Biometrics is a term covering characteristics of an individual such as DNA, hand geometry, the face, or other physical and behavioral traits, such as signatures, voice and so on. Biometric systems are generally used to authenticate and identify individuals by analyzing the individual's physical characteristics, such as fingerprints, irises, veins and others [1]. Biometric systems use unique physical characteristics of individuals, different from those of others, which are identified and analyzed to achieve certain goals [2]. Based on the comparisons made, one biometric modality that is also superior and can be applied to attendance systems is the face [3]. As identity information, human faces have the advantage of being unique and difficult to imitate [4]. In a face recognition system, the technologies used are face detection, which is the first step in the facial recognition process, and face recognition itself [5]. One of the supporting media that can be relied upon in an attendance system using face detection and face recognition is a real-time video camera. Cameras in a face detection system have the advantage of application flexibility, so they do not require users to make direct contact with the attendance system [6]. In this research, based on a case study of a presence system, we studied the application of biometric systems using deep learning. Deep learning is a type of machine learning method that makes computers learn from experience and knowledge without explicit programming and extract useful patterns from raw data [7]. With a presence system using deep learning for face detection and face recognition, the process of recording student attendance is expected to be more efficient, as well as reducing fraud that might occur. Developing the presence system involves two stages, namely face detection and face recognition. In the face detection stage, the Haar cascade classifier method is used to detect elements of the face, namely the eyes, nose and mouth [8]. In the face recognition stage, the convolutional neural network (CNN) algorithm is used for the process of recognizing and matching input data with data in the model.
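As a minimal illustration of the detection stage (not the authors' code), a Haar cascade face detector can be built with OpenCV's bundled pretrained classifier; the input filename is a hypothetical camera capture:

```python
import cv2

# Load OpenCV's bundled pretrained Haar cascade for frontal faces
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_faces(image_bgr):
    """Return bounding boxes (x, y, w, h) of detected faces."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Example: crop detected faces from a captured frame for the recognition stage
frame = cv2.imread("person_01.jpg")  # hypothetical capture from the camera
for (x, y, w, h) in detect_faces(frame):
    face = frame[y:y + h, x:x + w]
    cv2.imwrite(f"face_{x}_{y}.jpg", face)
```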
In our research we use a CNN, in which the kernel values are learned through training, whereas Haar features are manually designed. While a well-trained CNN can learn more parameters (and thus detect a larger variety of faces), Haar-based classifiers run faster [9]. The Haar cascade detects human faces enclosed by a square and gives the center points of the face elements (eyes, nose, and mouth) [10]. The Haar cascade classifier is also called the Viola-Jones method, which is the most widely used method for detecting objects. Human face detection using the Haar cascade classifier can yield comprehensive results, for example when detecting human faces in thermal images [11]. Among all deep learning structures, CNNs and recurrent neural networks (RNNs) are the most popular [12]. As stated above, we use a CNN because we want to obtain the best accuracy, and CNNs have proven very effective in areas such as facial recognition and classification compared with other methods [13]. Also, a CNN extracts features automatically, so there is no need to select features manually [14]. There are already several studies of face recognition based on CNNs; some implemented augmented reality to compare against a face database with high accuracy [15], and many implemented a softmax architecture for facial recognition, which has been shown to give good accuracy [16]. Here we propose the ResNet-50 architecture for recognition, because it performs well compared with a simple CNN [17]. Residual networks (ResNet) are convolutional networks trained on more than 1 million images from the ImageNet database; for ResNet-50 the total number of weighted layers is 50, with 23,534,592 trainable parameters [18]. Our contribution in this research is to determine the effectiveness of the ResNet architecture under different hyperparameter configurations. These hyperparameters include the number of hidden layers, the number of units in each hidden layer, the batch size, and the learning rate [19]. Because hyperparameters are selected experimentally and each value affects the final accuracy, we tried 22 configurations (experiments) to obtain the best accuracy [20]. We also aim to show that, among these hyperparameters, the learning rate has the largest influence on accuracy [21].

RESEARCH METHOD

The data processing design scheme, shown in Figure 1, is as follows:
− Data in the form of face images are used as input to be processed.
− Images of each person are captured in 15 different positions and expressions, in RGB color space and JPG format.
− The preprocessing stage makes the image data uniform, consisting of resizing the images and image augmentation.
− Here we give each image the same specification, such as its color space and resolution.
− Classification, namely the stage of recognizing faces, consists of convolutional layers, pooling layers, flatten, fully connected layers, and softmax.
− Here we use a CNN with a certain number of layers. The number of layers that contribute to a model is called the depth of the model [22]. At every stage we apply several hyperparameter configurations for each experiment until we find the best combination and obtain the best accuracy. Details of the hyperparameter configurations are given in the subsection on the model experiment design.
− The output of the processed data is information about the identity that has been recognized.
− Here we check whether the recognition system can identify each person correctly. We ran the experiment with 9 different persons, each of whom was identified by the system we built, from which we can judge how robustly our system recognizes each person's face.

Preprocessing

Preprocessing data for a convolutional neural network has several stages. The first stage is image scaling, in which the input data are made equal in size. This stage is needed because the available image sizes do not always match the image size specified for the dataset. We then continue with the augmentation process, which consists of three steps: applying a blur effect to the image, applying random noise, and adding light intensity to the image. The purpose of this augmentation is to enrich the image data in order to simplify the classification process. We also apply geometric operations such as flip, shift (translation), rotation, and segmentation. The flip operation consists of three variants: flipping the image horizontally, vertically, and both horizontally and vertically. The purpose of flipping is to multiply the data and simplify the classification process. In the shift stage, we translate the location of the object from its original position in the data. In the rotation stage, the image is rotated counterclockwise by a predetermined angle. Last is the segmentation stage, which detects the edges of the face in order to extract the face region from the image.

Classification

The scheme in Figure 2 is as follows:
− Input data enter the convolutional layer, which manipulates the image to produce a new image to be passed to the next stage. We use zero padding in the convolution process [24], as shown in Figure 3.
− The pooling layer performs calculations on each pixel of the image feature, which has been converted into a matrix. The goal is to divide an image into several features to make image matching easier.
− Flatten is the stage where the features produced by the pooling layer, an n × m matrix with n > 1 and m > 1, are converted into a one-dimensional vector.
− The fully connected layer produces output in the form of image probabilities, used in the classification of the output data.
− Softmax calculates the probabilities over all labels in the data.
− The final result of this process is the softmax calculation, namely the probability of each label in the data.

Model experiment design

We divide the preprocessed data into a training set and a test set, with a share of 80% for training and 20% for testing: 1050 data points were divided into 840 training and 210 test samples. The divided data then go through the model training stage using the convolutional neural network algorithm. In the model training process, the first experiment used a learning rate of 0.1, 10 epochs, and 100 steps per epoch. In data processing with the convolutional neural network, the data pass through several stages, namely the pooling layer, flatten, fully connected layer, and softmax calculation, to produce a model. The resulting model is then evaluated to find its accuracy. A sketch of this augmentation and training pipeline is given below.
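To make the pipeline concrete, the following is a minimal sketch in TensorFlow/Keras; it is not the authors' code. The 224 × 224 input size, the SGD optimizer, the noise/brightness parameters, and the "faces/" directory are illustrative assumptions, while the learning rate of 0.1, 10 epochs, and 100 steps per epoch follow the first experiment described above.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 10        # 10 classes were modeled in the experiments

def add_noise_and_light(img):
    """Augmentation from the text: random noise plus extra light intensity."""
    img = img + np.random.normal(0.0, 10.0, img.shape)   # random noise
    img = img * np.random.uniform(1.0, 1.3)              # light intensity
    return np.clip(img, 0.0, 255.0)

# Flip / shift / rotation augmentation plus the 80:20 split described above.
datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
    preprocessing_function=add_noise_and_light,
    validation_split=0.2,
)

# ResNet-50 backbone with a softmax classification head.
backbone = ResNet50(include_top=False, weights="imagenet",
                    input_shape=IMG_SIZE + (3,), pooling="avg")
model = models.Sequential([
    backbone,
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=optimizers.SGD(learning_rate=0.1),   # experiment 1
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical training call on a directory of per-class face folders:
# train = datagen.flow_from_directory("faces/", target_size=IMG_SIZE,
#                                     subset="training")
# model.fit(train, epochs=10, steps_per_epoch=100)
```

In the best configuration reported below (experiment 22), the learning rate, epoch count, and steps per epoch would instead be 0.0001, 100, and 150.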
The experimental design for this model can be seen in Figure 4.

RESULTS AND DISCUSSION

This section explains the results of the research together with a comprehensive discussion. The results and discussion of the implementation of the attendance system using face recognition technology are as follows.

Result of data collection

Each person's dataset must consist of 15 images under different conditions; we base this on the design of a smart door system for live face recognition based on image processing [25], and we do not use an augmented reality database for comparison. We collected images of each individual's face with different variations and then developed the image collection with preprocessing methods to homogenize the whole set. The dataset samples (Table 1) cover conditions such as:
− face tilted to the right 45° with a smiling expression;
− face facing up, eyes closed, without expression;
− face tilted to the right 45°, without expression;
− face facing up 45° with a smiling expression;
− face tilted to the left 45°, without expression;
− face facing up 45°, eyes facing up, without expression;
− face facing right 45° with a smiling expression;
− face facing left 45° with a smiling expression;
− face tilted to the left 45° with a smiling expression;
− face facing up, eyes staring straight at the camera, with a smiling expression;
− face up, eyes closed, without expression.

Result of preprocessing data

The initially collected data have different, non-uniform sizes, so the preprocessing stage was needed in this research. We include an image normalization step to ensure uniformity in image size, followed by augmentation. The tests carried out in the preprocessing stage use 15 image variations per class. An example of the test results for one image that has gone through the preprocessing phase can be seen in Table 2. In the preprocessing phase, one image produces 87 preprocessed images. So, for one class containing 15 data variations, the total data generated after preprocessing is 87 × 15 = 1,305 images. From the 53 classes collected, the preprocessed data amount to 53 classes × 1,305 images = 69,165 preprocessed images.

Result of model testing

In this phase we conducted an experiment with an 80:20 data split: 1050 images, with 840 for training and 210 for testing. We also conducted an experiment using 13,050 images, with 10,440 for training and 2,610 for testing. The number of classes modeled is 10; this is due to inadequate computing resources for modeling all 53 classes. The results of model testing with different hyperparameter configurations can be seen in Table 3. From these modeling experiments, it can be concluded that the 22nd experiment has the best accuracy, with 99% training accuracy and 99% test accuracy. This is because we experimented with the hyperparameters and obtained the right configuration to build a model whose training and test accuracy both reached 99%.
Result of prototype presence system

The result of this implementation is a prototype, built using a graphical user interface (GUI) provided by Python 3.6 and tkinter, that successfully recognizes the face of each student using the previously trained model with the 22nd experiment's hyperparameter configuration, as shown in Figure 5.

Result of object testing

After implementing the data modeling, we tested the models that had been built. To obtain evaluation results that can be compared and summarized, we conducted an object experiment on 9 students, each of whom performed 5 object experiments. The results of the object testing can be seen in Table 4.

CONCLUSION

The conclusion obtained from this research is that the presence system was developed in the form of a prototype using the convolutional neural network (CNN) algorithm, with trial experiments on hyperparameters; a learning rate of 0.0001, 100 epochs, and 150 steps per epoch gave us a model with an accuracy of 99%. We then built a presence system prototype with a graphical user interface (GUI) provided by the Python package tkinter and applied the trained model to the prototype, so that the presence system prototype can be used to predict facial images. A hypothetical sketch of such a prototype is given below.
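As an illustration only (none of this is the authors' code), a minimal prototype of this kind could combine tkinter, OpenCV's Haar cascade for detection, and the trained ResNet-50 model for recognition. The model path, class labels, input size, and camera handling below are all assumptions.

```python
import cv2
import numpy as np
import tensorflow as tf
import tkinter as tk

model = tf.keras.models.load_model("presence_resnet50.h5")   # assumed path
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
CLASS_NAMES = [f"student_{i}" for i in range(10)]             # placeholder labels

def mark_presence():
    """Grab one camera frame, detect a face, and predict the student."""
    cap = cv2.VideoCapture(0)          # a real app would keep this open
    ok, frame = cap.read()
    cap.release()
    if not ok:
        result_var.set("no camera frame")
        return
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        result_var.set("no face detected")
        return
    x, y, w, h = faces[0]
    face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    face = cv2.resize(face, (224, 224)) / 255.0
    probs = model.predict(face[np.newaxis])[0]                # softmax output
    result_var.set(f"{CLASS_NAMES[int(np.argmax(probs))]} present")

root = tk.Tk()
root.title("Presence System")
result_var = tk.StringVar(value="ready")
tk.Button(root, text="Record attendance", command=mark_presence).pack(padx=20, pady=10)
tk.Label(root, textvariable=result_var).pack(pady=10)
root.mainloop()
```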
Field-based screening of selected oral antibiotics in Belize

Introduction: The presence of poor-quality antibiotics on the market has contributed to antibiotic resistance, a global threat to public health. Antibiotic resistance is now a global concern. One way to address this issue is to evaluate the quality of the antibiotics accessible to the public. The purpose of this study was to test and compare (against the corresponding pharmacopeia) the quality of common oral antibiotics available in the country of Belize, with a view to providing baseline data on the testing of medications imported into the country for public consumption. The study focused only on level 2 field-based screening quality assurance of three Key Access Antibiotics from the World Health Organization (WHO) Model List of Essential Medicines.

Methods: Five brands of antibiotic tablets/capsules with denoted pharmacopeia imported into the country of Belize were tested for quality at the University of Belize pharmacy laboratory. A sample of 30 tablets/capsules of each selected antibiotic brand was used for the study. Visual inspection and weight variation were performed for each sample, while a Monsanto-type tablet hardness tester, a Roche® Tablet Friability Test Apparatus (single drum), and an Ajanta® Tablet Disintegration Test Apparatus (double basket) were used on selected antibiotics. Results were recorded and compared with the corresponding pharmacopoeia references.

Results: Most of the samples collected passed the tests performed. Only a few samples from both BP and USP antibiotics failed the visual inspection and weight variation tests. All antibiotics tested conformed to their corresponding pharmacopeia reference in terms of friability and disintegration time.

Conclusion: Most of the selected antibiotics passed the tests performed when compared with their pharmacopeia. Only a few samples from both BP and USP antibiotics failed the tests conducted. There is a need for regular quality assurance tests on all medications imported into Belize, especially antibiotics.
Introduction

The presence of poor-quality antibiotics on the market has contributed in no small measure to global antibiotic resistance and is a threat to public health. Antibiotic resistance has become a global emergency that healthcare professionals have confronted in recent years [1]. In the United States alone, at least 2 million individuals were reported to be affected by antibiotic-resistant bacteria [2]. The Centers for Disease Control and Prevention (CDC) reported more than 23,000 deaths in the United States in 2013 [3], and 33,000 deaths were reported in Europe [4], as a result of antibiotic resistance. The threat of antibiotic resistance applies not only to developed western countries but also to developing countries, where health care provision is a major challenge. In Thailand, for instance, antibiotic resistance has accounted for more than 38,000 deaths [5]. Between 2000 and 2010 alone, a total increase in antibiotic consumption was observed in 71 countries. This increase reached up to 45% for last-resort antibiotics, whose usage is reserved for when other antibiotic treatments are no longer effective [6]. Increased consumption of antibiotics can only increase the risk of developing resistance. While bacteria naturally develop resistance over time, many factors have been reported to accelerate this process, including misuse and overuse of antibiotics, inappropriate prescribing, extensive agricultural usage, and inadequate discovery of new antibiotics [7][8][9][10][11][12]. Furthermore, the poor quality of antibiotics available to the public may itself lead to the development of antibiotic resistance. The quality of a medication is affected by low drug potency, poor formulation, and/or the presence of impurities [13,14]. Although there is limited data linking the quality of specific antibiotics to resistance in particular diseases, evidence shows that resistance develops when bacteria are exposed to sub-therapeutic doses of antibiotics. This makes treatment regimens appear ineffective, so that stronger antibiotics are needlessly introduced, further escalating the possibility of resistance [1]. Presently, the Belize Ministry of Health (MOH), through its drug regulatory department, is actively implementing and enforcing the Belize Antibiotics Act to ensure compliance. The Antibiotics Act ensures, among other things, that antibiotics are only dispensed on prescription from a licensed medical practitioner. The incidence of poor-quality antibiotics, especially in developing countries like Belize, poses a threat to public health, leading to poor disease management and the development of antibiotic resistance. Limited human and financial resources are an additional challenge characteristic of many developing countries. These challenges put a huge strain on the number of personnel available for regulatory affairs and enforcement. Furthermore, a lack of funding makes it difficult to conduct the complex quality assurance tests required on medications, especially antibiotics, imported into the country.
The apparent lack of public awareness of the quality of medications, as well as of the severity of the global antibiotic resistance crisis, makes monitoring antibiotic use in Belize a challenging task for the MOH. This study was therefore designed to provide the first baseline data comparing the quality of selected antibiotics with their corresponding pharmacopeia. To the best of our knowledge, this type of study has not been conducted in the country of Belize, hence its significance to both the Belize Ministry of Health and the University of Belize, currently the only university in the country responsible for the training of pharmacists.

Sampling

A modified version of the Medicine Quality Assessment Reporting Guidelines (MEDQUARG) proposed by Newton et al. (2009) [15] was adopted as part of the sampling strategy for this study. The study specifically addressed the question "Are there antibiotics of poor quality in certified drug outlets in Belize?" The study was not designed to conduct complex tests to ascertain the quality of the selected oral antibiotics, but rather an initial screening to determine quality. Based on the MEDQUARG sampling strategy, five brand products each of Amoxicillin 500mg, Co-Trimoxazole 960mg and Ciprofloxacin 500mg in oral tablet and/or capsule formulations were conveniently purchased from licensed community pharmacies, specifically in San Ignacio Town, Belmopan City and Orange Walk Town, and from licensed distributor companies. 90 units of tablets or 60 units of capsules with the same batch numbers were selected, packed in sterile specimen containers, and coded as follows:
• AMOX C 1 to C 5 for each brand of Amoxicillin 500mg capsules,
• CO-TRI T 1 to T 5 for each brand of Co-Trimoxazole 960mg tablets,
• CIPRO T 1 to T 5 for each brand of Ciprofloxacin 500mg tablets.
Tablets and capsules were purchased locally from verified, licensed and registered pharmaceutical stores in the country of Belize between July and September 2019. A sample was considered to fail screening when it failed visual inspection, weight variation, the friability test, or the disintegration test when compared with the corresponding pharmacopoeia references.

Data analysis

Data entry and analysis were done with both Microsoft Excel 2007 [16] and IBM® Statistical Package for the Social Sciences (SPSS®) Statistics version 25 for Windows [17].

Storage and transportation

Purchased tablets and capsules were transported in carefully packaged sterile specimen containers with sterile gauze to ensure the protection of the samples.
All containers were adequately sealed and packaged in such a manner as to avoid breakage and/or contamination during transportation and storage [18]. Storage conditions were kept in accordance with the storage requirements for each individual drug. In general, samples were stored at an ambient temperature of 69°F (20.56°C) to 77°F (25°C) until tested. The medications were stored in the University of Belize pharmacy laboratory under air conditioning, away from sunlight and away from access by other individuals. The samples were only accessible to the researchers when the tests were performed; no other person had access to them.

Visual inspection

Visual inspection refers to the process of identifying crucial container integrity defects such as cracks, misapplied stoppers/seals, unidentified material, precipitation, and discoloration, as well as cosmetic defects such as cracks, scratches, and dirt [19]. All the packaging and blister packs of the formulations were inspected manually, and any abnormal spelling or unduly faded colours of the packaging were recorded. The visual inspection process followed the International Pharmaceutical Federation (FIP) and USP tool for Visual Inspection of Medicines. A checklist for the visual inspection of medicines was used to identify suspicious products for further examination [20]. Each medication was thoroughly physically examined using the checklist in S1 Appendix. The sizes (length, width and diameter) of each dosage form were measured using a standardized ruler to ensure uniformity, and the results were recorded. Means were calculated and logged in tables.

Weight variation

30 tablets of each sample of Co-Trimoxazole 960mg and Ciprofloxacin 500mg were weighed and measured using a calibrated Ohaus® Scout™ Pro electronic digital scale. The results were recorded for each tablet, and the mean was calculated, recorded, and compared with the corresponding pharmacopeia standards. 30 capsules of each sample of Amoxicillin 500mg were weighed, with whole capsules and empty capsule shells weighed separately. Powder weights for each sample were calculated by deducting the weight of the empty capsule from that of the whole capsule [21,22]. Results for both capsules and tablets were recorded, and the percentage difference in weight variation was assessed against permissible limits.

Hardness test

A Monsanto-type hardness tester was used to test the hardness of each tablet. The tester was first placed across the diameter between the spindle and the anvil. The tablet was then placed in position and the knob adjusted to hold the tablet. Before pressure was applied, the pointer reading was calibrated to zero. Pressure was then applied slowly until the tablet broke, giving its hardness. The test was measured in kilograms (kg) and later converted to newtons (N) as required by the corresponding pharmacopoeia [23,24]. 30 tablets of each brand of Co-Trimoxazole 960mg and Ciprofloxacin 500mg were tested for hardness, and the average was determined and recorded.

Friability test

10 sample tablets of each antibiotic were placed in the Tablet Friability Test chamber (single drum) to identify broken tablets and the amount of mass lost through chipping. Each brand's tablets were tested three times at 25 ± 1 revolutions per minute (RPM) for 4 minutes (approximately 100 rotations) [25,26], and the sample tablets were then weighed and the results recorded for comparison with the corresponding pharmacopoeia reference (a total of 30 tablets per brand).
Friability was calculated using the simple formula:

F (%) = ((W1 − W2) / W1) × 100

where W1 = weight of the tablets before the test and W2 = weight of the tablets after the test.

Disintegration test

An Ajanta® Tablet Disintegration Test Apparatus (double basket) was used with a water bath maintained at 37°C ± 2°C, and the apparatus was set to run for 30 minutes at 29 to 32 cycles per minute [27][28][29]. Results obtained were recorded as means and then compared with the corresponding pharmacopoeia reference(s). The collected brands of Amoxicillin 500mg, Co-Trimoxazole 960mg and Ciprofloxacin 500mg were tested for disintegration using the following procedure: 1 unit of each obtained brand of antibiotic was placed in each of the six tubes (1 round) of each basket in the disintegration test apparatus. Every 5 minutes the mesh of each tube was checked for disintegration progress and the results were logged. Each basket was recorded as one round, and a total of 5 rounds were conducted (a total of 30 units for each brand of antibiotic). Means were calculated and compared with the corresponding pharmacopeia.

Results and discussion

The arbitrary pharmacopoeial acceptance limits of Amoxicillin 500mg, Co-Trimoxazole 960mg and Ciprofloxacin 500mg were compared in this study with the aim of providing baseline information on the quality of these antimicrobial agents. The general and individual guidelines and set criteria of the different antibiotics were assessed and compared for quality. These physical quality factors have serious health and economic consequences for the patient and for the country. Moreover, poor-quality antibiotics have detrimental effects on patient prognosis, antibiotic resistance, and mortality rates [30]. 30 units each of the sample Amoxicillin 500mg (AMOX C 1 -C 5 ), Co-Trimoxazole 960mg (CO-TRI T 1 -T 5 ) and Ciprofloxacin 500mg (CIPRO T 1 -T 5 ) were tested for weight variation and disintegration, for both capsule and tablet formulations. Hardness and friability tests were conducted only for the tablet formulations. Table 1 lists all antibiotic samples collected and tested, with their respective pharmacopoeia standards. The physicochemical parameters of each antibiotic are summarized in Tables 2-4 for Amoxicillin 500mg, Co-Trimoxazole 960mg and Ciprofloxacin 500mg, respectively. The detailed results for each antibiotic are discussed under each test performed.

Visual inspection

Great importance is given to the visual inspection of dosage forms, since it frequently provides a first vital indication of degradation, poor manufacturing, tampering, or counterfeiting [31]. The powdered surfaces, non-uniform scoring depths, and indentations observed on the tablets in this study are indications that further testing is required to identify the problem, which could arise either from manufacturing practices or from transportation and storage. Degradation during storage and transportation is of particular significance in tropical countries like Belize [32].

Amoxicillin 500mg. The checklist (S1 Table) for visual inspection was used to inspect all the sample brands of AMOX C 1 -C 5 collected. Apart from AMOX C 2 , supplied as loose capsules, AMOX C 1 , C 3 , C 4 and C 5 were packaged individually in blister packs. AMOX C 1 and C 5 were packaged in aluminum with transparent polyvinyl chloride (PVC), whereas AMOX C 3 and C 4 were in aluminum with non-transparent (white) PVC.
In terms of labeling, trade names where applicable, spelling, and the information provided (in English or Spanish) were appropriate. AMOX C 1 -C 5 were observed to have uniform size, with a standard deviation (SD) of 0 cm for diameter and thickness, as shown in Table 2. The printing behind the blister packs showed signs of fading for AMOX C 1 , especially when rubbed, but the words were still legible. Shape, color, and texture were also uniform, and the samples were free of contamination. However, a few empty capsules had minimal powder remnants on the PVC after the capsules were removed from the original package (AMOX C 1 and C 4 ), indicating spillage. The results from the present study suggest that more in-depth tests should be conducted to ensure the quality of AMOX C 1 and C 4 .

Co-Trimoxazole 960mg. The checklist (S2 Table) for visual inspection was also used to inspect all the sample brands of CO-TRI T 1 to T 5 collected. CO-TRI T 2 and T 4 were packaged as loose tablets, while CO-TRI T 1 , T 3 and T 5 were packaged individually in blisters. CO-TRI T 1 and T 3 were packaged in aluminum with transparent PVC, while CO-TRI T 5 was packaged in aluminum with semi-transparent (yellow) PVC. The information provided on labeling, trade names, and spelling was appropriate for the tested samples. The results shown in Table 3 indicated that all of CO-TRI T 1 to T 5 have uniform size, with SDs between 0.00 and 0.04 cm for diameter, width, and thickness. Shape, color, texture, and tablet markings were all uniform, and the samples were free of contamination. Minimal powder on the PVC was noted in all samples. Thus, more complex tests may be required to ensure the quality of CO-TRI T 1 -T 5 , as this may be due to the kind of coating used, which may indicate a possible fault in manufacturing, storage, or transportation [31]. Additionally, a few chippings were observed in CO-TRI T 1 and T 5 . Though chippings were observed, the friability test results were found to be within acceptable limits.

Ciprofloxacin 500mg. Furthermore, the checklist (S3 Table) for visual inspection was used to inspect all samples of Ciprofloxacin 500mg tablets. CIPRO T 1 was packaged in aluminum backing and covers (ALU-ALU), while CIPRO T 2 , T 3 and T 5 were packaged in aluminum with transparent PVC. Likewise, CIPRO T 4 was packaged in aluminum with semi-transparent (brown) PVC. Appropriate labeling, trade name, and spelling information was provided for the CIPRO T 1 to T 5 samples. The results shown in Table 4 indicated that all of CIPRO T 1 to T 5 have uniform size, with SDs of 0.00 to 0.02 cm for diameter, width, and thickness. Shape, color, texture, and tablet markings were also uniform, and the samples were free of contamination in CIPRO T 4 and T 5 . Slight chipping was evident in CIPRO T 1 ; the tablet surface was noted to be powdered, and the scoring depths were not uniform. The tablet surface of CIPRO T 2 appeared uneven, and pinholes were noted. Color was not uniform in CIPRO T 3 , and minimal powder was present on the blister after tablets were removed from the original package. Therefore, more detailed examination and tests will be needed for CIPRO T 1 , T 2 and T 3 , as powdered surfaces, non-uniform scoring depths, and indentations on the tablets are indicators that further testing is required to identify the problem, arising either from manufacturing practices or from transportation and storage. This is important to note, as degradation during storage and transportation is of particular significance in tropical countries like Belize [32].
Weight variation test

Weight uniformity is very important, as it ensures that consumers take a precise pharmaceutical dose. Furthermore, weight uniformity ensures that a consistent dose and quantity of API is maintained between all batches and doses [33]. The fluctuations in weight seen in some of the samples may indicate poor quality control measures, whether through inconsistent powder or granulate density or particle size distribution, which are all common sources of weight variation during compression. Regardless of the reason, tablets under the acceptable range may provide sub-therapeutic levels of the antibiotics, which in turn may contribute to antibiotic resistance. Similarly, tablet samples weighing over the acceptable range can negatively impact the patient through increased adverse effects, increased toxicity, and an increased potential for drug-drug interactions. Under pharmacopeia standards, tablets/capsules over 249mg (BP standards) or 324mg (USP standards) should not deviate 10% from the average weight, and no more than 2 tablets/capsules should deviate from the average weight in a test of 20 sample tablets/capsules [19,21,34].

Amoxicillin 500mg. 30 capsules of each brand were weighed as whole capsule, powder, and empty capsule, and the weights logged in grams (S4 Table and S5 Table). The capsules were then examined against their corresponding pharmacopeia standards, either USP or BP. The AMOX C 1 and C 2 samples were tested under USP standards. AMOX C 1 failed to fall within the standard acceptable range: for both whole capsules and powder, three capsules fell more than 5% below the average weight and one capsule more than 5% above it, while another capsule fell more than 10% below the average weight, as shown in Fig 1. The higher SD of 0.0281 g for the powder weight of AMOX C 1 (Table 2) further reflects these results. AMOX C 2 was noted to be within the acceptance value under USP standards, as shown in Fig 1; none of its capsules deviated more than 5% from the average weight. AMOX C 3 conformed to BP standards, as none of the 30 capsules deviated more than 5% from the average weight, as shown in Fig 2. For AMOX C 4 , the weights of four powder samples were under, and three over, 5% of the average weight, as shown in Fig 2. Nevertheless, none of the AMOX C 4 weights deviated more than 10% from the average, despite its higher powder-weight SD of 0.0194 g. Only one powder sample was slightly over 5% of the average weight in AMOX C 5 . The AMOX tests therefore showed that AMOX C 3 and C 5 conformed to BP weight variation standards, while AMOX C 4 failed to comply with weight uniformity according to the corresponding standards: more than 2 individual capsule and powder samples were over 5% of the average weight. Detailed examination and testing are recommended for AMOX C 1 and C 4 , as both failed their corresponding pharmacopeia references in terms of weight variation.
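As a simple illustration of this screening rule (not the authors' analysis, which was done in Excel and SPSS), the deviation check can be expressed in a few lines of Python. The weights below are simulated, and the encoding of the rule (at most 2 units beyond the 5% band, none beyond the 10% band) follows the criteria discussed above.

```python
import random

random.seed(1)
weights_g = [random.gauss(0.620, 0.015) for _ in range(30)]   # simulated capsule weights
avg = sum(weights_g) / len(weights_g)

# Percentage deviation of each unit from the batch average.
dev_pct = [abs(w - avg) / avg * 100.0 for w in weights_g]
over_5 = sum(d > 5.0 for d in dev_pct)
over_10 = sum(d > 10.0 for d in dev_pct)

print(f"average weight: {avg:.4f} g")
print(f"units deviating >5%: {over_5}, >10%: {over_10}")
# Screening rule as interpreted from the text: no more than 2 units beyond
# the 5% band, and none beyond the 10% band.
print("PASS" if over_5 <= 2 and over_10 == 0 else "FAIL")
```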
Co-Trimoxazole 960mg. 30 tablets of each brand of CO-TRI were weighed and the weights logged in grams (S6 Table). All five brands of Co-Trimoxazole 960mg tested had SDs between 0.0045 and 0.0192 g, as shown in Table 3. The weights of the 30 tablets each of CO-TRI T 1 , T 2 and T 3 were analysed and found to conform to BP standards, as none of the 30 tablets deviated more than 5% from the average weight, as shown in Fig 3. CO-TRI T 4 and T 5 were also tested for weight uniformity and found to be within the acceptance value under USP pharmacopoeia standards, as none of the tablets deviated more than 5% from the average weight, as shown in Fig 4. All samples passed their corresponding pharmacopeia references in terms of weight variation. This may indicate constant levels of API, thereby preventing fluctuation of systemic API [33].

Ciprofloxacin 500mg. 30 tablets of each brand were weighed and the weights logged in grams (S7 Table). All five brands of Ciprofloxacin 500mg tested showed weight variation SDs between 0.0079 and 0.0212 g. CIPRO T 2 was noted to have a higher SD than the others, at 0.0212 g (Table 4). These tablets were later used in the tablet hardness test. CIPRO T 1 , T 3 and T 4 were found to be within the acceptable value under USP standards, as none of the tablets deviated more than 5% from the average weight, as shown in Fig 5. However, two tablets were found to be over, and two tablets under, 5% of the average weight in CIPRO T 2 , which is reflected in the higher SD observed earlier; none deviated more than 10% from the average weight. The weights of the 30 CIPRO T 5 tablets were analysed and shown to conform to BP standards, as none of the 30 tablets deviated more than 5% from the average weight, as revealed in Fig 6. We suggest further testing and examination of CIPRO T 2 , as it failed the corresponding specification in the weight variation test. This may indicate variation in API or even the presence of impurities, with patients running the risk of drug-drug interactions, toxicity, and even treatment failure [33].

Hardness test

Tablet hardness serves both as a criterion to guide product development and as a quality-control specification. Tablets that are too hard may reflect excessive bonding between active ingredients and excipients, preventing proper dissolution. By contrast, tablets that are too soft may reflect weak bonding, which subsequently leads to premature disintegration, chipping, or breaking [35]. Both cases are counterproductive, and as observed in the present results, most samples had high standard deviations in the hardness test. Tablets at one extreme may break down and dissolve before reaching the absorption site, while those at the other may pass undissolved through the absorption site, preventing the tablet from exerting its pharmacological activity. Such inconsistent readings may lead to drastic variations in bioavailability between doses. A Monsanto-type hardness tester was used for testing; results were initially in kilograms (kg) and later converted to N as per BP and USP standards (1 kg = 9.8066500286389 N) [23,24].

Co-Trimoxazole 960mg. The breaking force of a tablet is a measure of its mechanical integrity [24]. The hardness test results appeared inconsistent, especially for CO-TRI T 1 , T 2 and T 5 , with high SDs of 21.6487, 22.1141 and 28.0401 N, respectively (Table 3). Figs 7 and 8 present the percentage hardness variation from the average under BP and USP standards, respectively.
CO-TRI T 1 -T 5 were shown to be inconsistent in tablet hardness, which may indicate drastic variations in bioavailability in vivo between doses [35]. However, as the hardness test is one of the factors that determines whether tablets will disintegrate, a batch is still accepted if the disintegration test results are within the specified range [35]. Therefore, these batches cannot be deemed to pass or fail without performing the disintegration test.

Ciprofloxacin 500mg. The hardness test results appeared inconsistent, especially in CIPRO T 2 and T 4 , with high SDs of 22.0765 and 26.8526 N, respectively (Table 4) [35]. Even so, the hardness test cannot be the sole determinant of the acceptability of a batch without a disintegration test [35]. Therefore, these batches cannot be deemed to pass or fail without performing the disintegration test. Even though all tablet samples tested for hardness showed inconsistent results, they were within acceptable standards in terms of the friability and disintegration tests. Therefore, the batches tested are considered acceptable.

Friability test

Friability testing is another test that assesses the physical strength of tablet formulations [25]. Under BP and USP standards, the maximum acceptable loss of mass is 1% (single test or mean of tests) from 10 tablets at 25 ± 1 rotations per minute for 4 minutes (approximately 100 rotations), which can be repeated up to 3 times [25,26]. A sketch of this check is given after the brand results below.

Co-Trimoxazole 960mg. A total of 30 tablets of each brand of Co-Trimoxazole 960mg were tested for friability, 10 tablets per test, and the mean for each brand was calculated. The results for each run and the percentage losses are shown in S8 Table for those that complied with BP and S9 Table for USP. The mean percentage losses of CO-TRI T 1 -T 5 are documented in Table 3. CO-TRI T 1 -T 5 showed less than 1% mean loss, as shown in Table 3. This indicates that these antibiotics can withstand normal transportation and handling conditions without breaking and/or chipping enough to affect the formulation [35].

Ciprofloxacin 500mg. Similarly, a total of 30 tablets of each brand of Ciprofloxacin 500mg were tested for friability, 10 tablets per test, and the mean for each brand was calculated. The results for each run and the percentage losses are shown in S10 Table for those that complied with USP and S11 Table for BP. The mean percentage losses of CIPRO T 1 -T 5 are documented in Table 4. CIPRO T 1 -T 5 showed less than 1% loss, as shown in Table 4. The results show that all Ciprofloxacin 500mg tablets tested are formulated within standards, as they were observed to withstand regular transportation and handling conditions. This signifies that tablets of the same batches will not chip or break before they reach consumers, which would affect efficacy [35].
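A minimal sketch of the friability calculation and the 1% acceptance check follows; the run weights are invented for illustration (the study's actual per-run weights are in S8-S11 Tables).

```python
def friability_percent(w1_g: float, w2_g: float) -> float:
    """F (%) = ((W1 - W2) / W1) * 100, with W1/W2 the pre-/post-test weights."""
    return (w1_g - w2_g) / w1_g * 100.0

# Three repeat runs for one hypothetical brand (weight of 10 tablets, grams).
runs = [(12.431, 12.392), (12.405, 12.371), (12.447, 12.410)]
losses = [friability_percent(w1, w2) for w1, w2 in runs]
mean_loss = sum(losses) / len(losses)

print("per-run loss (%):", [round(x, 3) for x in losses])
# BP/USP acceptance: at most 1% mass loss (single test or mean of tests).
print(f"mean loss: {mean_loss:.3f}% ->", "PASS" if mean_loss <= 1.0 else "FAIL")
```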
Disintegration test

The disintegration test monitors how a dosage form disperses [27,29]. According to USP and BP standards, hard gelatin capsules and regular coated or film-coated tablets should disintegrate completely in a water bath maintained at 37 ± 2°C within 30 minutes at 29 to 32 cycles per minute [27][28][29]; only fragments of capsule shells may remain in the mesh [29].

Amoxicillin 500mg. The disintegration times for AMOX C 1 -C 5 were observed to be within 30 minutes, as shown in Table 2. Fragments of empty shells were present for AMOX C 1 and C 2 at 30 minutes. This indicates that all brands passed the disintegration test in accordance with their corresponding pharmacopeia references.

Co-Trimoxazole 960mg. The time required to disintegrate CO-TRI T 1 -T 5 is shown in Table 3. Apart from CO-TRI T 5 , with a higher SD of 11.18 minutes from the mean time, CO-TRI T 1 to T 4 had SDs of 0.00 to 0.50 minutes. The mean disintegration times for CO-TRI T 1 -T 5 shown in Table 3 indicate that all samples disintegrated within 30 minutes. Since CO-TRI T 1 -T 5 conformed to their corresponding pharmacopeia standards, the results suggest that the tablets are appropriately formulated to disintegrate in vivo, and the release of active ingredients will not be delayed, as would be the case if any failed to disintegrate within the specified time [36].

Ciprofloxacin 500mg. The disintegration times for CIPRO T 1 -T 5 are shown in S12 Table; mean disintegration times were calculated and reported. Apart from CIPRO T 3 , with a higher SD of 10.73 minutes from the mean time, CIPRO T 1 , T 2 , T 4 and T 5 had SDs of 0.00 to 0.89 minutes (Table 4). Table 4 gives the mean disintegration times for CIPRO T 1 -T 5 ; all samples disintegrated within 30 minutes. All Ciprofloxacin 500mg tablets disintegrated within pharmacopeia standards, indicating that all tablets of the same batch can disperse in vivo within the desired time to give the intended pharmaceutical effect [36].

Conclusion

In developing countries such as Belize, poor drug quality is attributed to insufficient quality assurance, poor or substandard storage facilities, and a deficiency in, or lack of, active regulatory systems to effectively evaluate drug quality. Presently, the drug regulatory and quality assurance system in Belize is still developing. Also, as far as we know, little or no research has been conducted in this area, hence the need for this baseline study on drug quality. As a baseline study, the physical qualities of selected oral antibiotics in Belize were tested and compared with their corresponding pharmacopeia. The majority of the selected antibiotics passed the tests performed when compared with their pharmacopeia; only a few samples from both BP and USP antibiotics failed the tests conducted. The results of the present study demonstrate the need for detailed, regular, and consistent quality assurance testing of all medications imported into Belize for public consumption. Any sample that failed, even in only one parameter, is a clear caution that the drug may be potentially unsuitable. Since the tests performed in this study constituted only level 2 field-based screening of quality assurance [37], we cannot draw a generalized conclusion from our findings. We therefore recommend that full analytical quality assurance tests, using instruments such as High Performance Liquid Chromatography (HPLC), be carried out for quality control that meets international standards.

Limitations

The main limitation of this study is the number of tests conducted on the collected samples, which in our opinion was not adequate for wider generalization. We also acknowledge that more quality control tests for both tablets and capsules could have been done to support the current findings. Additionally, the lack of adequate equipment and funding for more complex drug quality testing was a legitimate limitation of the current study.
However, since the objective of the study was to conduct a level 2 field-based screening of the antibiotics with the intent of providing baseline data for planning a much larger study, we believe this objective has been adequately achieved, especially since, to the best of our knowledge and after a careful search of the Internet, a similar study has not been conducted in Belize. This makes the study unique and relevant, and is its strength.
Elongation Patterns of the Collateral Ligaments After Total Knee Arthroplasty Are Dominated by the Knee Flexion Angle

The primary aim of this study was to assess the effects of total knee arthroplasty (TKA) implant design on collateral ligament elongation patterns that occur during level walking, downhill walking, and stair descent. Using a moving fluoroscope, tibiofemoral kinematics were captured in three groups of patients with different TKA implant designs, including posterior stabilized, medial stabilized, and ultra-congruent. The 3D in vivo joint kinematics were then fed into multibody models of the replaced knees, and elongation patterns of virtual bundles connecting the origin and insertion points of the medial and lateral collateral ligaments (MCL and LCL) were determined throughout complete cycles of all activities. Regardless of the implant design and activity type, non-isometric behavior of the collateral ligaments was observed. The LCL shortened with increasing knee flexion, while the MCL elongation demonstrated regional variability, ranging from lengthening of the anterior bundle to slackening of the posterior bundle. The implant component design did not demonstrate statistically significant effects on the collateral elongation patterns, and this was consistent between the studied activities. This study revealed that post-TKA collateral ligament elongation is primarily determined by the knee flexion angle. The different anterior translations and internal rotations induced by the three distinctive implant designs had minimal impact on the length change patterns of the collateral ligaments.

INTRODUCTION

The anterior and posterior cruciate ligaments are primary restraints in the natural knee (Boguszewski et al., 2011; Hosseini Nasab et al., 2016). After cruciate-sacrificing Total Knee Arthroplasty (TKA), the passive restraint in the post-operative joint must therefore be provided by the geometric congruency of the implant components and the surrounding soft tissues, predominantly the collateral ligaments. A thorough understanding of the relationships between postoperative ligament strains and implant design is thus crucial to improve future component geometries and soft tissue balancing techniques. TKA joint stability has traditionally been achieved through concave depressions in the tibial insert that result in highly congruent interfaces with the medial and lateral femoral condyles (e.g., ultra-congruent designs). Posterior stabilizing designs attempt to provide additional stability through a post-cam mechanism, which constrains excessive posterior translation of the tibia relative to the femur. While both designs have been highly successful in achieving anteroposterior (AP) knee stability (Song et al., 2017), overly constrained knee motion has been observed clinically, especially regarding internal tibial rotation (Guan et al., 2017), and is thus a plausible cause of non-physiologic collateral ligament strain and possible pain. Medial stabilized TKA component designs attempt to better replicate the kinematics of the natural knee by allowing substantial AP translation of the lateral condyle (Young et al., 2018). For example, the GMK Sphere (Medacta International, Switzerland) implant possesses a congruent spherical geometry on the medial side to provide anterior-posterior stability, and a relatively flat lateral tibial plateau to enable free movement of the lateral condyle.
These intended implant motion patterns have indeed been confirmed through reconstruction of the 3D joint kinematics throughout both lunge movements and complete cycles of daily activities using mobile fluoroscopy (Scott et al., 2016; Schütz et al., 2019a). However, it is still not well understood how such medially stabilized component designs affect the strains in the passive restraints of the knee. The elongation patterns of the collateral ligaments during functional movements following TKA have important implications for post-operative knee stability, range of motion, and pain (Jeffcote et al., 2007; Babazadeh et al., 2009; Mihalko et al., 2009; Goudie and Deep, 2014). When ligaments are over-strained, cell death, plastic deformation, and micro-tears can occur (Provenzano et al., 2002). On the other hand, ligament unloading can lead to tissue adaptation and a diminished healing response to damage (Provenzano et al., 2003; Martinez et al., 2007). Thus, it is important for TKA component designs to restore normal elongation patterns of the collateral ligaments during functional activities. The ability to estimate post-TKA ligament elongation patterns also has critical implications for how best to balance the soft tissues. While adequate intraoperative balancing of the collateral ligaments during TKA is known to be essential for clinical success (Insall and Scott, 2001; Meloni et al., 2014), there is no consensus on which ligament fibers should be preserved and which can be safely released. It is also not clear how the ligament balancing technique should be adapted to the choice of implant design. Here, knowledge of the elongation patterns of the individual ligament bundles in knees replaced with different implant designs could help surgeons to better understand the consequences of releasing specific ligament fibers and thereby enhance current soft tissue balancing approaches. Although MRI and CT techniques have traditionally been used to provide access to quasi-static strain patterns, a number of methods exist to quantify ligament elongation during dynamic movements. Strain sensors have been applied to measure in vivo anterior cruciate ligament strains in patients undergoing partial meniscectomy (Fleming and Beynnon, 2004); however, implantation and removal of the sensors are highly invasive and require a wire to pass through the skin. Recently, an image-based approach was introduced that leverages fluoroscopy to quantify tibio-femoral kinematics and then uses the relative movement of the ligament attachment footprints to estimate elongation patterns. This approach has been utilized to measure anterior cruciate ligament elongation during stance in downhill running (Tashman et al., 2007), and collateral ligament elongations during the stance phase of walking (Liu et al., 2011) and a single-legged lunge (Park et al., 2005; Van De Velde et al., 2007). However, the use of stationary fluoroscope setups has limited the range of dynamic movements that can be imaged. Using a novel moving fluoroscope to overcome the limitations of a stationary imaging modality, this study aims to investigate the effects of TKA component design on collateral ligament elongation patterns that occur during level walking, downhill walking, and stair descent. Three component designs were evaluated: the GMK Primary posterior stabilized (PS), the GMK Sphere (SP), and the GMK Primary ultra-congruent (UC). In addition, the effect of activity type on ligament elongation patterns was investigated.
METHODS

Tibiofemoral kinematics were captured in three groups of patients with different TKA component designs (Figure 1) throughout complete cycles of activities of daily living. Each group consisted of 10 patients with unilateral TKA: the GMK Primary PS (5 m/5 f, aged 69.0 ± 6.5, 3.1 ± 1.6 years postoperative, BMI 27.6 ± 3.5), the GMK Sphere (2 m/8 f, aged 68.8 ± 9.9, 1.7 ± 0.7 years postoperative, BMI 25.4 ± 3.7) and the GMK Primary UC (3 m/7 f, aged 75.0 ± 5.1, 3.9 ± 1.5 years postoperative, BMI 25.9 ± 3.2). The study was approved by the local ethics committees (KEK-ZH-Nr. 2015-0140). Patients were selected based on the following inclusion criteria: unilateral TKA, more than 1 year after surgery, BMI ≤ 33, and good clinical and functional outcomes (WOMAC between 0 and 28 and pain VAS ≤ 2). Kruskal-Wallis tests did not detect any statistically significant differences between the three groups regarding the age (p = 0.67), BMI (p = 0.81), time postoperatively (p = 0.27), or WOMAC score (p = 0.41) of the subjects. All surgeries were performed by two senior knee surgeons using a medial parapatellar approach following the recommendations of the manufacturer, with the aim of minimizing soft-tissue damage. A mechanical hip-knee-ankle alignment of 180-183° was targeted. Where necessary, a very minimal release of the MCL was permitted in order to balance the knee, together with the removal of osteophytes that potentially blocked the smooth gliding of the soft tissues.

Knee Joint Kinematics

The moving fluoroscope (a modified Philips BV Pulsera videofluoroscopy system mounted on a moving carriage; List et al., 2017; Hitz et al., 2018) at the Institute for Biomechanics, ETH Zürich, was used to quantify tibio-femoral kinematics (Figure 2). Single-plane fluoroscopic images were obtained at 25 Hz throughout five complete cycles each of level walking, downhill walking, and stair descent (Schütz et al., 2019a,b). In total, 450 gait cycles were measured (30 patients, 3 activities, each with 5 repetitions). For each imaging frame, the 3D pose of both the tibial and femoral implant components was determined using the 2D/3D registration algorithm introduced by Burckhardt et al. (2005), with an accuracy of up to 1° in rotation, 1 mm in-plane, and 3 mm out-of-plane (Foresti, 2009).

TKA Modeling

Tibial and femoral bone geometries, as well as the MCL and LCL attachment sites, were adapted from a previously developed multibody TKA knee model (Smith et al., 2016) within the OpenSim modeling environment (Delp et al., 2007). Subject-specific models were created by scaling each bone in the superior-inferior direction based on limb lengths measured using skin-mounted optical markers taken from reference standing trials (Vicon, OMG, Oxford, UK). Each model was additionally scaled in the anterior-posterior and medial-lateral directions based on the dimensions of the femoral implant. The MCL and LCL were represented using a series of one-dimensional elements connecting their origin and insertion sites. The LCL was represented by a single fiber, while three fibers for the MCL corresponded to the anterior (aMCL), intermediate (iMCL), and posterior (pMCL) bundles of the ligament (Figure 3). Ellipsoidal wrapping objects were used to prevent penetration of the ligament bundles into the bone and implant geometries. For each of the 450 trials, the measured 3D implant kinematics were used as input to the subject-specific OpenSim models to calculate the resulting ligament elongation patterns; a simplified sketch of this computation is given below.
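To illustrate the core of this computation (this is a simplified stand-in, not the authors' OpenSim pipeline), the sketch below treats each bundle as a straight line between its femoral origin and tibial insertion and ignores the wrapping surfaces. All attachment coordinates, the Euler-angle convention, and the example poses are invented for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation

BUNDLES = ["LCL", "aMCL", "iMCL", "pMCL"]
# Hypothetical femoral origins (femur frame, mm) and tibial insertions
# (tibia frame, mm), one row per bundle.
origins = np.array([[ 35.0, 10.0,  0.0],
                    [-38.0,  8.0,  5.0],
                    [-40.0,  6.0,  0.0],
                    [-41.0,  4.0, -6.0]])
insertions = np.array([[ 33.0, -55.0,  2.0],
                       [-36.0, -70.0, 10.0],
                       [-37.0, -72.0,  0.0],
                       [-38.0, -74.0, -8.0]])

def bundle_lengths(flexion_deg, rotation_deg, adduction_deg, translation_mm):
    """Straight-line bundle lengths for one tibiofemoral pose.

    The pose maps tibial coordinates into the femoral frame: p_f = R p_t + t.
    """
    R = Rotation.from_euler(
        "zyx", [flexion_deg, rotation_deg, adduction_deg], degrees=True
    ).as_matrix()
    ins_femur = insertions @ R.T + np.asarray(translation_mm)
    return np.linalg.norm(ins_femur - origins, axis=1)

# Reference lengths, e.g., averaged over heel-strike frames of level walking.
ref = bundle_lengths(5.0, 2.0, 1.0, [0.0, 0.0, 0.0])
# One mid-swing frame with deeper flexion (all values invented).
cur = bundle_lengths(60.0, 8.0, 1.5, [2.0, -3.0, 0.0])
elongation_pct = (cur - ref) / ref * 100.0
for name, e in zip(BUNDLES, elongation_pct):
    print(f"{name}: {e:+.2f} %")
```

Normalizing each per-frame length to a heel-strike reference length, as described next, converts the lengths into the elongation percentages reported in the Results.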
The elongation of each ligament was normalized to its own reference length, which was defined as the average fiber length at heel strike of the five level walking trials (Liu et al., 2011), and presented against the normalized time in percentage of the gait cycle (%GC, starting with heel strike) or the corresponding knee flexion angle.

Statistical Analysis

For each implant design and activity, intra-subject variability of the ligament elongation was assessed by calculating the standard deviation (SD) between trials for a single subject at each time point of the activity cycle. These SDs were then averaged over the gait cycle and across subjects within an implant group. A one-way repeated-measures ANOVA based on statistical parametric mapping (SPM), a statistical approach for examining differences in state-space or spatio-temporal data (Pataky et al., 2016), was performed to test the influence of the implant design and activity on the ligament elongation patterns. Here, the test was considered significant if the maximum F-value (Fmax) from the ANOVA exceeded the critical F statistic (the F-value corresponding to a significance level of p = 0.05). To exclude the influence of between-subject variability of the ligament attachment points, the elongation patterns were further compared at the within-subject level. Therefore, for the three groups of implant designs, intra-subject differences in ligament elongations between the studied activities (absolute differences between the elongation patterns of different activities) were calculated at each specific flexion angle and the results were averaged across subjects. These patterns were used to assess differences in ligament elongations between the designs and activities over the range of knee flexion angles.

Sensitivity Analysis

To assess the sensitivity of the ligament elongation patterns to changes in secondary kinematic parameters, the pose of the tibia relative to the femur was perturbed around the baseline kinematics of a single gait cycle. Here, the anterior-posterior translation was perturbed within ±5 mm, while the abduction/adduction and internal/external rotations were varied within ±5°, using 1 mm and 1° intervals. It should be noted that only one of the input kinematic parameters was perturbed at a time, while the other parameters were kept at their baseline values. For each parameter, the perturbed kinematics were fed into the multibody model of the corresponding subject and the output elongation patterns were plotted against time as a percentage of the gait cycle. Moreover, to estimate the range of possible errors in ligament elongation outputs due to inaccuracy of the fluoroscopic kinematics, a worst-case scenario was simulated by perturbing the baseline kinematics captured from a subject with a PS implant design during level walking. Here, an out-of-plane error of 3 mm was introduced to the mediolateral translation. The perturbed kinematics were used to drive the multibody model of the corresponding subject and the output elongation patterns were plotted against time as a percentage of the gait cycle.

RESULTS

Implant Kinematics

A full description of the kinematics of the three implant designs has been reported previously (Schütz et al., 2019a), but is briefly summarized in Table S1 for completeness. All three implants showed similar ranges of knee flexion during the three studied activities, with the GMK Primary UC exhibiting a slightly smaller range of motion compared with the other two implant designs.
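Before turning to the variability results below, the following sketch illustrates how the normalized elongations and the intra-subject SDs just defined can be computed. Array shapes and names are assumptions, not the authors' analysis code; the SPM tests themselves could be run with a package such as spm1d, although the exact configuration used in the study is not reproduced here.

```python
import numpy as np

def normalized_elongation(lengths, ref_length):
    """Percent elongation of one fiber relative to its reference length.
    `lengths` is (n_trials, n_timepoints) in mm, resampled to a common
    0-100 %GC axis; `ref_length` is the average fiber length at heel
    strike of the five level-walking trials (per subject)."""
    return 100.0 * (lengths - ref_length) / ref_length

def mean_intra_subject_sd(elongations_per_subject):
    """Average intra-subject variability for one implant group.
    Input: list of (n_trials, n_timepoints) elongation arrays, one per
    subject. The SD is taken across trials at each time point, averaged
    over the cycle, and then averaged across subjects."""
    per_subject = [np.std(e, axis=0, ddof=1).mean()
                   for e in elongations_per_subject]
    return float(np.mean(per_subject))
```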
The GMK Sphere showed the smallest range of anteroposterior translation on the medial side for level walking, downhill walking, and stair descent, followed by the GMK UC and the GMK PS implants (SP was statistically significantly different from PS and UC, p < 0.006). The GMK Sphere showed the largest range of anteroposterior translation on the lateral side, as well as the largest range of tibial internal-external rotation (13.2 ± 2.2°), both observed during stair descent (SP was statistically significantly different from PS and UC, p < 0.006). All three implant designs exhibited a very limited range of abduction-adduction rotation during the studied activities (average range smaller than 3°).

Intra-subject Variability of the Ligament Elongations

In general, the ligaments demonstrated relatively consistent elongation patterns across subjects and component designs throughout each activity (Figure 4). The largest intra-subject variations were observed in subjects with PS implants, while the smallest were observed in the UC implant subjects. Here, the average intra-subject standard deviation (SD) for the LCL elongation during level walking was 1.19, 1.08, and 1.05% for patients with PS, SP, and UC implants, respectively (p = 0.51). For comparison, the corresponding SDs for the iMCL were 0.37, 0.33, and 0.29% (p = 0.10). Similar intra-subject variations were observed during downhill walking and stair descent (Figures S1, S2), where the average within-subject SDs for LCL elongation were 1.18% (PS), 1.09% (SP), and 0.94% (UC) during downhill walking (p = 0.11) and 1.18% (PS), 1.07% (SP), and 0.98% (UC) during stair descent (p = 0.23). The intra-subject variability of the ligament elongation was generally larger in the swing phase compared to the stance phase. For example, for the Sphere implant, the average intra-subject standard deviation for LCL elongation during level gait was 1.80% for the stance and 2.99% for the swing phase.

Effect of Implant Design

For all three studied groups, slackening of the LCL was observed in the loading response period from 0 to 10% GC (Figure 5). From 10 to 50% GC, the ligament gradually recovered to its reference length. From push-off to mid-swing, the LCL experienced a rapid slackening and reached its shortest length (−11.09% in PS, −11.72% in SP, and −10.88% in UC; averaged across all subjects) at ~70% GC, which corresponded with peak knee flexion. As the knee was extended through terminal swing, the LCL continuously elongated to recover its reference length at the time of the next heel strike. Although the maximum shortening of the LCL was slightly greater in patients with Sphere implants (Table S2), no statistically significant difference was observed between the elongation patterns of the LCL across the patient groups with differing component designs (Table S3). For downhill walking and stair descent, the LCL experienced a bi-phasic lengthening pattern consisting of steady shortening in the stance phase and gradual stretching in the swing phase (Figures S1, S2). Compared to level walking, the LCL reached slightly shorter lengths during downhill walking and stair descent.
Similar to level walking, during downhill walking and stair descent no statistically significant design-dependency was observed in the collateral ligament elongations (Table S3). The length of the three MCL bundles remained nearly isometric during the first 50% of the level gait cycle. However, starting from the late stance phase, the MCL bundles demonstrated considerably different elongation patterns (Figure 5). Regardless of the implant design, the anterior bundle experienced lengthening from 50 to 70% of the gait cycle, with a maximum elongation of 5.11% for PS, 4.02% for SP, and 4.12% for UC implants. The posterior bundle showed the opposite elongation pattern, with shortening until 75% of the gait cycle, where it experienced its shortest length (−5.70% for PS, −6.56% for SP, and −5.29% for UC). The iMCL remained nearly isometric throughout the entire gait cycle, with the average elongation of each implant group remaining between −0.29 and 2.06%. Similar to level walking, non-uniform elongation of the MCL bundles was observed during downhill walking and stair descent (Figures S1, S2). Regardless of the implant design, the aMCL experienced lengthening during the stance phase of downhill walking, while the pMCL became shorter in this period. Except for 50-80% GC, the aMCL and pMCL had opposite elongation patterns during stair descent. The iMCL bundle was close to isometric during downhill walking and was in its slackest condition at toe-off of the stair descent (−5.58, −6.27, and −5.94% of the reference length for the PS, SP, and UC groups). Regardless of the activity, no significant design-dependency was observed in the elongation patterns of the MCL bundles (Table S3).

[FIGURE 4 | Subject-specific elongation patterns of the LCL (top; one color per subject, solid lines represent intra-subject means and shadings represent ± SDs) compared to the average elongation patterns of the three MCL bundles (bottom; solid lines represent inter-subject means and shadings represent ± SDs) during level walking. The vertical dotted line represents the average toe-off time for subjects in the same group.]

Effect of Activity Type

In all three implant designs, shortening of the LCL and pMCL with increasing knee flexion angle was observed during all the studied activities (Figure 6). The length of the iMCL remained unchanged from full extension to 40-50° of flexion and underwent a small shortening thereafter. During all three activities, the aMCL lengthened from 0 to 40-50° of flexion and shortened afterwards. Regardless of the implant design, the statistical tests performed to assess task-dependency of the ligament elongation were generally not significant when comparing across patients with the same implant. This was consistent for the tests performed separately on the stance and swing phases of the activities (Tables S4, S5). However, the within-subject tests for task-dependency yielded highly subject-specific outcomes. In some subjects, the repeated-measures ANOVA indicated significant differences between elongation patterns of the collateral ligaments during different activities. However, the mean differences were generally small (compared to the corresponding standard deviations, Figure 7), and those differences were observed only over small ranges of flexion (Figure S3). In general, all implant designs showed similar differences between the collateral ligament elongations when activities were compared (Figure 7).
The only substantial difference between component designs was found in the comparison of LCL elongation between level and downhill walking (Figure 7, upper middle), where the PS showed far greater differences than the UC. For each component design, the difference in MCL elongation between level walking and stair descent showed a clear flexion dependency.

Sensitivity of Ligament Elongations to Knee Kinematics

The highest impact of the changes in secondary kinematic parameters of the implant was due to variation in the abduction/adduction parameter (Figure 8). During the stance phase of the gait cycle, a variation of ±5° in the adduction of the implant resulted in ±5.2% changes in the LCL elongation and ±3.5% in the iMCL elongation. A change of ±5 mm in the anterior translation of the implant had small influences on the ligament elongation patterns during level walking (about ±1.8% for the LCL and ±0.5% for the iMCL). A variation of ±5° in the internal rotation of the implant had negligible influences on the elongation of the collateral ligaments (about ±1.3% for the LCL and ±0.4% for the iMCL). The simulated out-of-plane error (3 mm in the mediolateral translation) resulted in small changes in the ligament elongation patterns (Figure S4). Here, the LCL was the most affected ligament, with a 0.53% change in the ligament elongation.

DISCUSSION

This study quantified elongation patterns of the collateral ligaments in patients with different TKA implant designs throughout complete cycles of level gait, downhill walking, and stair descent, using kinematics measured by a novel moving fluoroscope to drive a multibody knee model with fiber-based ligaments. Interestingly, the component design and activity type did not demonstrate statistically significant effects on the collateral ligament elongation patterns. Instead, the knee flexion angle was the primary determinant of ligament elongation. The LCL shortened with increasing knee flexion, while the MCL demonstrated regional variability depending on the attachment location. The anterior MCL bundle lengthened from 0 to 40-50° and shortened thereafter, while the iMCL bundle remained isometric over the same range and then shortened with further flexion. The posterior MCL bundle demonstrated shortening with increased flexion across the entire range of motion. These results have important implications for surgeons performing intra-operative soft tissue balancing, and help provide a more comprehensive understanding of post-TKA ligament function during activities of daily living. The presented ligament elongation patterns must be interpreted in the context of the limitations of the model. Since preoperative MRI and postoperative CT images were not available, the location of the ligament attachments was approximated by scaling a generic model using the subject-specific implant dimensions and the locations of bony landmarks. However, even with high-resolution MR images, error on the order of centimeters remains in the assessment of the ligament attachment sites (Rachmat et al., 2014). We assessed the attachment location error using 10 fibers for the LCL and 20 fibers for the MCL with varying attachment locations and found the same general trends for the bundle elongation patterns, as well as the same design and activity dependencies. Moreover, the within-subject design that was used for assessing task-dependency of the ligament elongation patterns minimizes the impact of uncertainty in the ligament attachment points on the study results.
Another limitation originates from the one-dimensional path representations of the ligaments, which may not reflect the actual elongations experienced by ligament fibers, as they do not account for the material continuum, fiber twisting, or interfacial sliding. Similar to all image-based and many sensor-based studies, the zero-strain condition of the ligaments was not assessed in the current study. Thus, the elongation patterns reported in this study cannot be translated into the corresponding strain patterns, and therefore any between-study comparison of the data is limited to those with a consistent choice of reference length. Last but not least, the relatively small sample size and the gender imbalance between the groups are clear limitations that should be addressed in future studies.

[FIGURE 7 | The intra-subject differences in ligament elongations between activities for the three groups of implant designs, plotted against the knee flexion angle. Average patterns (solid lines) and standard deviations (shadings) were calculated only over the flexion ranges achieved by all the subjects within an implant design group during all five included trials.]

In light of these limitations, the presented results should not be interpreted with a focus on the absolute magnitudes of the reported elongations, but rather on the differences between the ligament bundles, implant designs, and activities performed. A key finding was that none of the ligament bundles showed purely isometric behavior during the studied activities for any of the implant designs. Only the iMCL during level walking demonstrated nearly isometric behavior. The LCL and pMCL demonstrated considerable shortening with increasing knee flexion, while the aMCL was elongated at mid-flexion compared to full extension and deep flexion. These findings provide in vivo evidence that the traditional intra-operative soft tissue balancing goal of achieving symmetrical flexion and extension gaps (Bottros et al., 2006; Daines and Dennis, 2014) does not reflect the dynamic function of the collateral ligaments. This is further supported by studies of dynamic limb alignment and internal-external rotation in intact knees that show increased laxity at 90° of flexion compared to extension (Roth et al., 2015). As such, our results support the recent trend in soft tissue balancing to aim for increased laxity in flexion compared to more extended poses (Roth and Howell, 2017). A common cause of TKA patient dissatisfaction is mid-flexion instability (Hasegawa et al., 2018). This is often attributed to improper soft tissue balancing (Chang et al., 2014; Ramappa, 2015). Progressive release of the MCL is often used to correct varus alignment and achieve symmetric and rectangular gaps at 0 and 90° of flexion (Griffin et al., 2000; Whiteside et al., 2000; Chen et al., 2011; Aunan et al., 2012). Our results demonstrate that the aMCL elongates with increasing flexion until 40-50°, with similar lengths at 0° and 75° during stair descent, while the iMCL and pMCL shorten throughout the entire range of flexion. Thus, the aMCL likely provides the majority of the restraint at mid-flexion. Accordingly, any intraoperative release of the aMCL based on intraoperative laxity testing at 0 and 90° should be performed with caution.
This supports previous research suggesting that over-release of the MCL is a plausible cause of mid-flexion instability of replaced knees (Sharma, 2013; Ramappa, 2015), and that in addition to intraoperative laxity assessment at 0 and 90°, the tension balance should also be checked at mid-flexion (Bottros et al., 2006). The flexion dependency of the collateral ligaments' elongations after TKA has previously been indicated by in vitro and in vivo experiments for simplified loading conditions, but it was uncertain whether this phenomenon extended to locomotor movements. In a cadaveric study (Ghosh et al., 2012), a differential variable reluctance transducer (DVRT) was sutured to the middle portion of the collateral ligaments after cruciate-retaining TKA to quantify the MCL and LCL elongation patterns. Assuming average lengths of 100 mm (Liu et al., 2010) and 60 mm (Meister et al., 2000) for the iMCL and LCL, they measured 2% slackening of the iMCL and 12% slackening of the LCL at 60° of knee flexion with an applied quadriceps force. Isometry of the iMCL after TKA was also reported by a similar cadaver study (Jeffcote et al., 2007). Interestingly, another in vitro investigation (Kowalczewski et al., 2015) found near isometry in the MCL and LCL during passive flexion, but during simulated squatting the LCL shortened and the MCL lengthened with increasing flexion. In vivo assessment of a forward lunge using stationary fluoroscopy revealed shortening of the LCL and pMCL, close-to-isometric behavior of the iMCL, and lengthening of the aMCL with increasing flexion (Park et al., 2015). Our results demonstrated similar elongation patterns to this in vivo study. Thus, our findings extend upon the previous literature to show that in multiple component designs and dynamic functional activities, the LCL shortens with increasing flexion, and the MCL elongation patterns vary between the anterior and posterior bundles until 40-50° of flexion, after which all bundles shorten. Despite small but significant differences in the in vivo measured kinematics (Schütz et al., 2019a), we found that posterior-stabilized, medial-stabilized, and ultra-congruent TKA designs all resulted in similar collateral ligament elongation patterns (regardless of minor differences between their peaks) throughout complete cycles of level walking, downhill walking, and stair descent. The measured kinematics indicate that the GMK Sphere implants indeed provide medial constraint, with the medial femoral condyle exhibiting a range of anterior-posterior translation of only 3.7 mm, compared to 10.6 mm for PS and 5.9 mm for UC implants during level walking. During stair descent, the GMK Sphere also demonstrated the smallest range of medial anterior-posterior translation, as well as the largest range of internal/external tibial rotation (Sphere: 13.2°, PS: 9.0°, UC: 8.4°). However, during all the studied activities, abduction/adduction was very consistent between the three implant designs (maximum difference of 0.7°). These kinematic differences due to implant design resulted in only subtle differences in ligament elongations. Here, the MCL and LCL elongations were most sensitive to perturbations in abduction/adduction (Figure 8), but the implant designs resulted in minimal kinematic variability in this degree of freedom. Anterior translation and internal/external rotation caused tibial movements that were perpendicular to the orientation of the collateral ligaments.
These movements resulted predominantly in a gliding of the ligament, but only minimal length changes of the collateral ligament fibers, which are relatively long (100-120 mm for the MCL); the resultant longitudinal strain was therefore very small. Geometrically, a displacement d perpendicular to a fiber of length L lengthens it by only approximately d²/(2L), so a 5 mm perpendicular shift of a 110 mm MCL fiber produces a length change of roughly 0.1 mm, or about 0.1% strain. Thus, as our sensitivity study demonstrates, a larger kinematic shift than what we observed along these degrees of freedom (DOF) would be necessary to induce any detectable change in ligament elongation. As a result, the subtle kinematic differences measured between the component designs did not induce significant changes in MCL and LCL elongations. While a potential goal to improve patient satisfaction after TKA is to restore the ligament elongation patterns of healthy knees, direct comparison of our results is limited by the available data on in vivo collateral ligament elongations during different functional activities. Using stationary fluoroscopy to study the stance phase of walking, Liu and co-workers found that the aMCL elongated with increasing flexion, the iMCL remained relatively isometric, and the pMCL shortened with flexion (Liu et al., 2011). The magnitudes and trends of MCL elongation were generally similar to our measurements on TKA subjects during the stance phase of level walking. In a similar study examining single-leg lunges, shortening of the MCL and of the mid and posterior fibers of the LCL was observed with increased knee flexion (Park et al., 2005). In vitro measurements of MCL and LCL elongation on intact cadaveric knees are widely available. Using DVRT sensors, Harfe and co-workers found the posterior fibers of the MCL and LCL to be longest in extension and to shorten with knee flexion (Harfe et al., 1998). Ghosh and co-workers measured the distance between the centers of the ligament attachment sites and reported shortening of the superficial MCL and LCL during passive knee flexion from 0 to 110° in the presence of a simulated quadriceps force (Ghosh et al., 2012). The elongation patterns presented in this study are therefore generally consistent in trend with those reported for healthy knees at low flexion angles; however, none of the previous studies reported elongation data throughout complete cycles of level gait, downhill walking, or stair descent.

CONCLUSIONS

Understanding the postoperative elongation patterns of the collateral ligaments is crucial for improving intraoperative soft tissue balancing and implant design, in order to achieve better patient satisfaction following TKA. This study revealed that post-TKA ligament elongation is primarily determined by the knee flexion angle. The altered anterior translation and internal rotation induced by the three different implant designs had minimal impact on the length-change patterns of the collateral ligaments. Furthermore, the constrained geometries of the studied implant designs led to very similar flexion-dependent MCL and LCL elongation patterns during level walking, downhill walking, and stair descent. However, generalization of our findings to other implant designs with distinctly different functionalities should only be undertaken with caution. Our results also have clinical implications for soft tissue balancing, based on our observations of non-isometric behavior of the collateral ligaments with flexion. The conventional goal of ligament balancing techniques to achieve similar gaps at 0 and 90° of flexion does not reflect the functional elongation patterns of the ligaments observed during activities of daily living.
Improved techniques that account for differences in knee laxity and ligament elongation throughout the range of flexion should be considered.

DATA AVAILABILITY STATEMENT

The datasets generated for this study will not be made publicly available. Due to patient data privacy issues, these data can only be made available upon an additional request to the ethics commission.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the local ethics committee of the canton of Zurich, Switzerland (KEK-ZH-Nr. 2015-0140). All participants provided written informed consent to participate in this study and allowed publication of their data in an anonymized manner.

AUTHOR CONTRIBUTIONS

SH was involved in planning the study. He also processed the experimental data, performed the modeling and data analyses, and wrote the manuscript. CS helped with the modeling, interpretation of the results, and editing of the manuscript. PS and BP performed the measurements and helped edit the manuscript. WT and RL were involved in planning and supervising the study, interpreting the data, and editing the manuscript. WT provided the resources used for the experiments and simulations.

FUNDING

This study was partially supported by Medacta International AG and the Swiss Commission for Technology and Innovation (CTI).
Imagining Participatory Action Research in Collaboration with Children: an Introduction

For decades, social science researchers have been studying programs, services, and settings that are explicitly designed to have an influence on children (e.g., mental health services for children, school classrooms, after school programs, families, neighborhoods). Researchers who are concerned with the contexts in which children develop, social issues that influence children, and/or social justice generally define and evaluate a problem related to these programs or settings, and sometimes create and assess an intervention. Consequently, these researchers are often the ones to determine the problem definition. Common definitions include poor developmental or educational outcomes, child abuse, child labor violations, and so forth. These problem definitions and subsequent conceptualizations then become part of a larger narrative about what or who needs fixing (Seidman and Rappaport 1986). Frequently, these problems are studied by collecting survey data from adults or by observing children. Generally, these measures and observational procedures are designed by adult researchers.
In the field of community psychology, however, there has been a broad consensus that community members should also be involved in defining problems and solutions, as their participation improves the research and benefits the community. When thinking about issues that affect children, community psychologists have most frequently conceptualized important stakeholders as parents and extended family members, family advocates, teachers, mental health professionals, and other adults in children's lives. These adults may be consulted in interviews or focus groups, usually responding to the problem as conceptualized by the researcher. Increasingly, adult stakeholders and older youth may take on more participatory roles. Rarely, however, are children consulted or asked to help formulate the problem definition or proximate solution. Indeed, research is typically done for children, but not with children. This special issue is a collection of papers about participatory action research with children who are middle school age or younger, and is intended to stimulate dialogue and to offer alternatives when conducting research that affects children.

Imagining Childhood

Community psychology challenges us to create spaces where those who have structurally been denied a voice in democracy can begin to build power for civic engagement. This mandate is of utmost importance because if there is a group that is systematically excluded from civil society, then this structural exclusion tends to breed injustice. Historically, community psychologists have engaged specific populations that fit this description, including those labeled as seriously mentally ill (e.g., Fairweather et al. 1969). More recently, researchers have also worked with those who are or have been incarcerated (Fine et al. 2003), recent immigrants (Solis 2003; Suárez-Orozco 2000), those who are undocumented (Dominguez et al. 2009), and youth (Watts and Flanagan 2007). Children also fit this description. Children are often not consulted or even asked to participate in civil society, nor in research that is about their lives. These omissions are likely the consequences of researchers' views of children, which are informed by societal beliefs. For example, dominant narratives in many societies hold that children are not able to participate in making important decisions that affect them. Yet an empowerment perspective demands that we question these dominant narratives and seek out alternative stories that challenge assumptions about children's capacities (Rappaport 2000). This perspective enables us to imagine shifting roles and relationships, as well as the possibility of meaningful partnerships between adults and children. We may envision children as collaborative change agents in the settings and contexts of their lives. Developmental research supports this vision of children taking up more active roles in second-order setting change, suggesting that children hold more complex cognitions than was earlier presumed (Kellett et al. 2004; Rogoff 2003). This research has generated a more multifaceted understanding of the active and ongoing transactions between individual children and their social worlds. Indeed, a sociocultural approach has directed attention to children's changing participation over time in the meaningful routine cultural practices of families, neighborhoods, schools, and other key settings in their lives (Rogoff 2003).
Researchers studying the timing of children's acquisition of various skills and competencies have become more aware of great variation across different cultural communities. Adult goals and expectations, as well as the routine activities to which children are exposed, influence the development of skills and competencies, such as being responsible for themselves or others, or participating as apprentices in a research team. Another growing area of research, known as the sociology of childhood or childhood studies, has also raised many questions about how children are viewed within many communities, especially Western societies. The critique offered within these perspectives is that childhood is socially and culturally constructed, and that the construction of "child as innocent" or "child as becoming" leaves children without a say in important matters affecting them (Durand and Lykes 2006; Kellett et al. 2004). Instead, the sociology of childhood perspective encourages us to listen to children's perspectives and view children as experts in their own lives. Children's expertise can be cultivated by teaching them specific skills. Participating in research, for example, can help them gain more control of the resources that affect their lives. Children, therefore, can become advocates for themselves and others.

Imagining Research with Children

These sociocultural findings and childhood studies/sociology of childhood perspectives, when combined with other research that indicates the benefits of learning more from the community members being theorized, lay important groundwork for epistemological innovation, especially as it relates to how knowledge is generated and understood. Collaborative methodologies are consistent with community psychology values (e.g., collaboration, valuing human diversity, social justice) and theories (e.g., empowerment, civic participation). For example, research that has asked homeless people what services they need has resulted in a very different perspective and understanding compared with research that asks case workers about the needs of the homeless (Acosta and Toro 2000). Research dealing with children and their lives can similarly be transformed by embracing the role of children as social actors and collaborators/co-researchers. Research that affects children can be further reinvigorated by reconceptualizing the research process as an intervention in and of itself, where children learn skills through guided participation and active engagement. In other words, research and intervention are not separate steps, but rather components of praxis, or an embodied theory, with an agenda of creating conditions that facilitate individual and group empowerment, as well as social change. Using the theoretical framework of participatory action research with children has the potential to strengthen research findings, interventions, and social action. This special issue brings together an eclectic set of papers that engage children (from around the world, of middle school age and younger, of different races and ethnicities, and generally from financially poor communities) in a participatory action research (PAR) process. PAR is a theoretical standpoint and collaborative methodology that is designed to ensure a voice for those who are affected by a research project (Nelson et al. 1998).
Cycles of a PAR project may engage participants in any or all of the following: helping to formulate the problem definition, assessing the problem, determining an intervention, implementing the intervention, and assessing the intervention. Multiple methods are often used with PAR, including surveys, focus groups, interviews, Photovoice projects, observations, and community mapping. Although PAR research has engaged adults and older youth in the process, very little PAR research, especially in the United States, has included the role of the child as social actor, collaborator, researcher, and/or change agent. This state of affairs is problematic given that participatory action researchers and community psychologists argue that problem definitions and interventions are more valid and effective when all stakeholders are involved in the process. What happens when the people of concern are children? Are they afforded the same rights by society and by researchers? If researchers interested in empowerment are obligated to collaborate in communities in ways that enhance the power that people have over their own lives (Rappaport 1981), does this same obligation hold if our participants are children? This special issue addresses these questions as it tests and expands the theoretical underpinnings of empowerment and PAR by collaborating in an embodied theory with children.

Imagining this Special Issue

This compilation of articles is quite diverse in terms of the disciplinary backgrounds of the authors, as well as the countries and settings where the research takes place. In addition to academic contributors, there are practitioners (Chen et al. 2010), and the authors span several disciplines, including research on children's mobility and transport (Porter et al. 2010) and public health (Wong et al. 2010). Also, PAR is represented in many places around the world, allowing readers to examine how PAR is situated in and across several countries. Outside of the US, these places include Sub-Saharan Africa (Porter et al. 2010), Bosnia and Herzegovina (Maglajlic 2010), Canada (Liegghio et al. 2010), and the UK (Clark 2010; Kellett 2010). This diversity allows for rich comparisons with respect to methods, age of children, social and cultural contexts, and settings where the research is conducted. There are, of course, a number of ways that this special issue could have been organized. We chose to group the articles according to whether the primary focus was on theory and methods, school-based examples, or community-based examples. As we read through the articles, many issues arose across the three subsets of articles. We were particularly struck by the observation that, although all of the papers deal with children and PAR, the papers are positioned differently in terms of guiding paradigm and theoretical tradition when engaging children as collaborators. Given that many papers draw from multiple paradigms and theoretical traditions, our intention is not to sort papers into mutually exclusive groups, but rather to examine the papers along these two dimensions.

Guiding Paradigms

The papers draw upon three broad guiding paradigms: postpositivism, social constructivism, and critical theory. This range of perspectives within the special issue is an important reminder that PAR can be a method choice and/or an epistemological choice. PAR as a method can be used, of course, with any paradigm, because a method is simply a tool for collecting data. Where PAR is taken up by researchers primarily as a method choice, it is often used in conjunction with a postpositivist perspective.
In these cases, the reason for using PAR is generally to increase the validity of the data, often to provide evidence to support structural changes within specific settings. For example, in a multisite study reported by Chen and colleagues (2010), girls in five US cities served as evaluators of their after-school programs. The authors found that PAR is a promising evaluation tool; the girls determined what offerings worked well and what could be improved to make after-school programming more engaging for them. Additionally, staff learned that the girls were capable of engaging in research, which challenged their assumptions about the girls and had implications for future programming. Finally, the authors recommend that PAR practices be integrated into future program evaluation across the organization's many US sites as a way to improve data collection, showcase the skills and talents of the girls, and alter relationships between the girls and staff. In another large-scale study discussed by Porter and colleagues (2010), children in Ghana, Malawi, and South Africa used a variety of methods, including interviewing and weighing carried loads, to learn from other children about their travel, transportation problems, and safety concerns. The long-term goal of the project was to improve children's safety as they travel from one place to the next. In both papers, some authors are a part of the community being studied. Yet it is also the case that in both papers, the authors argue convincingly that children are able to collect better data because of where they are positioned (i.e., as insiders) and that children contribute to the strength and integrity of the research findings. As an epistemological choice, PAR is most closely aligned with social constructivism and critical theory. Indeed, epistemology deals with how we know things; by definition, it includes the relationship between the researcher and knowledge, as well as how this relationship is connected to knowledge generation. Both social constructivism and critical theory argue that knowledge is co-constructed and produced through the relationships between researchers and participants, and that these relationships are mediated through values. Within these frameworks, PAR highlights the relationship between the researcher and the researched through a reflexive examination of the researcher, and also brings into question how knowledge is constructed. Like all paradigms, social constructivism and critical theory bring with them specific sets of assumptions and values that shape research and action (e.g., for critical theory, the importance of working toward social justice). Several papers in this special issue emphasize the importance of attending to power relationships and how they affect knowledge construction in PAR with children. For example, Liegghio and colleagues (2010) caution that when working with children who have been diagnosed with mental health issues, adult roles need to be carefully scrutinized with respect to power and privilege. Additionally, adult roles need to change to be more aligned with social justice values. They see children as active, responsible agents and co-constructors of knowledge, and they view PAR as a tool for changing the way children diagnosed with mental health issues are viewed and treated. Power and knowledge construction are also at the forefront for Kellett (2010). In her paper featuring the original research of an 11-year-old girl, she argues for childhood emancipation.
Through using PAR, she helps us imagine a world where children's perspectives are center stage and children have the power to contribute to social change. Both the Liegghio et al. (2010) and Kellett (2010) papers, among others in the special issue, urge us to contemplate how empowerment and social change are connected to the research process, from "before the beginning." Within this special issue, all the papers are connected to empowerment and social change, but how they are connected to these issues varies based on theories of change, which are embedded in their respective guiding paradigms. Another important distinction related to guiding paradigm is how these special issue papers are positioned with respect to best practices and best processes. Goals of interventions from a postpositivist paradigm include looking for best practices that can lead to universal claims, generalizability of knowledge, and empirically supported interventions. A number of papers in this issue take up these goals. For example, drawing upon a study designed to determine what high-quality PAR implementation in middle schools entails, Ozer et al. (this issue) propose core components and key conditions for effective implementation and sustainability of school-based PAR, as well as challenges to implementing best practices. Also aiming toward best practices, Foster-Fishman et al. (2010) offer a clear set of tools for engaging youth in qualitative data analysis. Their ReACT method of data analysis includes a sequence of creative activities in which youth identify important messages and organize those messages into thematic groups. Wong and colleagues (2010) posit a new model for thinking about youth development and participation that is based in best practices from the positive youth development literature. Using an empowerment framework, they identify five types of participation that vary along the dimension of youth-adult control and in their relationship to optimal child and adolescent health promotion. Goals associated with best practices are related to yet distinct from social constructivist and critical theory paradigms, especially when considering interventions. Here, the belief is that a focus on best practices may separate knowledge generation from specific contexts. In other words, practices that work in one context cannot be moved wholesale into another context and be expected to show the same level of efficacy, because of different contextual demands and conditions. To prevent this separation, social constructivism and critical theory focus on applicability through thick description instead of generalizability. In these frameworks, some practices are understood as transferable and others are not, as the focus is on ensuring that all practices are contextually and culturally appropriate; the assumption within these paradigms is that all contexts are rich and varied and therefore require flexibility and adaptation. With these context-dependent ideas in mind, the focus is on best processes, or what processes should be followed to enact a contextually and culturally appropriate intervention. The Maglajlic (2010) contribution takes this perspective by arguing against a common way of conducting PAR across several research settings in Bosnia and Herzegovina. She offers a timely critique of international models of community development as children in three different regions ask one another what they want from their communities and share what they learn about participation in community life with adults.
In a smaller-scale study with younger children, Clark (2010) makes a similar point, suggesting that adult researchers should make multiple methods and roles available and accessible to children. As child researchers choose methods and enact roles, adult partners may further identify and build upon the strengths of these child researchers. In Clark's innovative approach (2010), young children create a composite picture, or mosaic, of their lives from a number of different tools, including child-led tours and map making. Although assumptions about standardization and generalizability differ across perspectives, both best practice and best process approaches are designed to lead to the best outcomes for the stakeholders, who are, in this case, children. Both approaches also emphasize the need for extensive preparation and training for child and adult research collaborators. Additionally, lessons learned from each of our contributors remind us that PAR is always situated in broader social, economic, political, relational, and institutional contexts.

Theoretical Traditions

Along with different guiding paradigms, this set of papers also draws from different theoretical traditions to inform PAR with children. In general, these papers are rooted in one or more of the following literatures: positive youth development, sociocultural perspectives, critical education, and community psychology. Positive youth development (PYD) is an approach that grew out of dissatisfaction with prevention research and interventions focusing on isolated risk factors (e.g., for teen pregnancy, substance abuse, or youth violence). Recognizing that the most effective prevention programs were not directed toward one risk factor, but instead looked more like health promotion and skills development, this strengths-based approach challenged those in the prevention field to think about youth as resources to be developed rather than problems to be managed (Shinn and Yoshikawa 2008). Research grounded in PYD has focused on identifying and supporting contexts that promote educational achievement, healthy outcomes, and strengths. Yet healthy outcomes, milestones, and assets traditionally have been defined by adult experts, including developmental psychologists and youth advocates. Also, a PYD approach tends to be couched in a best practices perspective, looking for common solutions and models across varied contexts and diverse children. PAR programs utilizing this tradition focus on the positive impact for individual youth development: building cognitive and emotional competencies, interpersonal skills, and so forth. Wong and colleagues (2010) draw from and contribute to this tradition by offering a heuristic tool for those interested in settings for child and adolescent health promotion. A sociocultural perspective on children holds that how they move from being novices to experts is shaped by their particular area or setting. Children's expertise is acknowledged through the community, formally (e.g., giving a presentation) and/or informally (e.g., an adult telling a child that she did a good job on a task). PAR from a sociocultural approach tends to engage adults in teaching children sets of skills so that the children can become experts in those skills and then carry out research that is important to them. Clark's MOSAIC method (2010) is rooted in this tradition. She discusses how children become knowledge builders through collecting data (i.e., creating artifacts).
Children develop skills that enable them to share their expertise with others, which moves them from being labeled as novices to being socially recognized as experts. Those working from a critical education framework argue that when people come together and think critically about their world and their position in the world, they develop a critical consciousness that moves them into action. Using a critical education perspective to inform PAR will often mean taking a dialogic approach, or a dynamic approach centered in dialogue with others that can transform the situation, and focusing the analysis of the collected data on how the data relate to broader structural conditions. Van Sluys (2010) deftly uses this framework, researching adult literacy practices with a set of middle school students to facilitate their re-positioning of themselves as literacy students within the broader structural constraints of schooling. The children take the lessons they have learned in their research to change their actions in, and reformulate their relations to, other settings. Finally, a community psychology approach emphasizes empowerment through participation in problem definition and the development of solutions. PAR from a community psychology perspective will therefore focus heavily on identifying subordinated stakeholders and involving these groups in determining problem definitions and solutions, in an effort to ensure that these groups have more control over the resources that affect their lives. Ren and Langhout (2010) take this approach by focusing on children defining problems for their elementary school recess time, as well as determining potential solutions. Many papers in this special issue draw from more than one of these theoretical traditions. Theoretical traditions also inform how the researchers think about the participant and the social change process. At the individual level of change, PAR can be viewed as youth development (positive youth development approach; see Ozer et al. this issue and Wong et al. 2010), skill building and identity development (sociocultural approach; see Chen et al. 2010 and Clark 2010), transformational education (critical education approach; see Van Sluys 2010), or creating an empowering setting (community psychology approach; see Foster-Fishman et al. 2010 and Ren and Langhout 2010). With respect to social change, PAR can be viewed as altering a setting, a policy, social geography, or relationships and roles. Examples in this special issue include attempts to change schools (Duckett et al. 2010; Newman Phillips et al. this issue), playgrounds (Ren and Langhout 2010), after-school programs (Chen et al. 2010), how municipalities function (Maglajlic 2010), mental health care systems (Liegghio et al. 2010), and transportation policies (Porter et al. 2010). One notable point regarding the varied research programs that these papers represent is that many of these activities are aimed at change on more than one level of analysis; by examining a different subset of the data from the same project, the same project at a different point in time, or the data in a different way, changes at other levels of analysis could be highlighted.
Challenges in Conducting PAR with Children

Many of the papers in this issue tell a similar story about the limits of conceptualizing and actualizing PAR as a "project." It may be useful and expedient for adult researchers in academia or community-based organizations to think about a specific beginning and ending point for their work, and it may be necessary to establish clear boundaries around a particular set of events. Yet PAR cannot be successful without attention to the roles and relationships that exist prior to any project, the institutional policies and norms that exist outside the project, and the energies required to sustain change efforts beyond the project. Projects are inextricably connected to the daily realities of children and adults. So, for example, Duckett and colleagues (2010) describe a project in the UK designed to engage children in considering the concept of a healthy school and in building a healthier school together. These university researchers reflected on why the project was not as successful as they had hoped, analyzing power relationships and concluding that institutional strains, both in public schools and in higher education/academia, led to conflicting perspectives and ultimately a failed project. Phillips and colleagues describe a project designed to engage children and teachers in PAR, but point to broader structural issues that created challenges. Factors detracting from the PAR process included limits in the timeframe of the project that did not allow for relationship building over time, inadequate time and institutional support for teachers to feel empowered in the project, and the climate of high-stakes testing in public schools in the US. Both papers provide useful critical analysis of well-planned projects that faced serious barriers in overcoming external stresses. Another challenge in engaging in PAR with children is how to conceptualize the nature of adult and child research relationships. Indeed, there are several ways for adults to work with children within a PAR context, and there is likely no one right way. The special issue features a wide range of child-adult collaborations, from children who serve as primary problem posers to children who participate as data collection experts in studies that have already been clearly defined by adults. Many collaborations feature child-adult research relationships that are somewhere in between these points. Children often have some influence, but within adult-guided parameters. Within this special issue, children are conceptualized as both novices and experts, with expertise coming both from lived experience and from training in research processes. Adults are conceptualized as novices and experts as well. We appreciate Clark's (2010) term, "authentic novice," to describe the stance of the adult researcher who recognizes that communication difficulties between adults and children are not just children's problems. Indeed, these authentic novices seek to build bridges to children's lived worlds in their collaboration. In all cases, the roles of children and adults deserve careful attention. A final set of challenges deals with political and ethical issues. These challenges are addressed, usually by examining power in roles and relationships with children, across many of the papers in this special issue. The papers raise important questions about the conditions under which children's participation may actually increase vulnerability and/or subordination.
They also remind us that listening to children sometimes sounds nice until we hear what they have to say. The perspectives of children may bring conflict as they challenge adult roles and perspectives, as well as institutional norms, cultures, and communities. These papers highlight the potential of PAR projects to pose problems and raise challenges to the status quo rather than offering easy solutions to adult-defined problems. Imagining the Future This special issue is poised to contribute to a conversation that is just beginning, with rich reflections from a variety of orientations, methods, traditions, contexts, and regions. Although the papers are diverse, they have much in common. Together, they make a strong case that participatory action research with children is not just about applying the same set of conceptual or methodological lessons from PAR with adults or older youth to a younger group of people. We require a different skill set to do this exciting work; these competencies include new ways of thinking about children, research expertise, research projects, and research products. We note how strange it is to produce a special issue aimed primarily at adult researchers, even as we imagine new ways of children and adults working together to understand the obstacles that children face in realizing their goals and dreams. Yet, our intention is that in so doing, we facilitate an awakening of the imagination not just in our readers but in the children with whom we collaborate.
2014-10-01T00:00:00.000Z
2010-06-23T00:00:00.000
{ "year": 2010, "sha1": "ff2c59eae93cb61d9195ba05f7cd678f472c3dd1", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1007/s10464-010-9321-1", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d8f15132b62f1c122c6b5817f7abdf96faeada06", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
269707133
pes2o/s2orc
v3-fos-license
Monitoring of spatio-temporal glacier dynamics in the Bhagirathi Basin, Garhwal Himalayas using remote sensing data Glacier retreat represents a highly sensitive indicator of climate change and global warming. Therefore, timely mapping and monitoring of glacier dynamics is strategic for water budget forecasting and sustainable management of water resources. In this study, Landsat satellite images of 2000 and 2015 have been used to estimate area extent variations in 29 glaciers of the Bhagirathi basin, Garhwal Himalayas. The ASTER DEM has been used for the extraction of glacier terrain features, such as elevation, slope, area, etc. It is observed from the analysis that the Bhagirathi sub-basin has the maximum glaciated area (~35%) and Pilang the least (~3.2%), whereas the Kaldi sub-basin has no glacier. In this region, out of 29 glaciers, 25 have shown retreat, while only 4 have shown advancement, resulting in a total glacier area loss of ~0.5%; the retreat rate varies from ~0.06 m/yr to ~19.4 m/yr. The Dokriani glacier has the maximum retreat rate (~19.4 m/yr), whereas Dehigad has the maximum advancing rate (~10.1 m/yr). Glacier retreat and advance have also been analyzed on the basis of terrain parameters: glaciers of both northern and southern orientations have shown retreat, whereas area change is highly correlated with glacier length. The study covers more than 65% of the total glaciated area and, based on the existing literature, represents one of the most exhaustive studies covering the highest number of glaciers in all sub-basins of the Bhagirathi basin. Introduction Long-term mapping and integrated monitoring of glacier change is crucial for planning water security measures, for water budget forecasting, and also for predicting melting rates as a response to future climate warming. Regular glacier mass balance monitoring is thus critical for responsible land management and decision-making. Mapping glaciers regularly using conventional surveying methods is difficult due to the high altitude, inaccessibility, ruggedness, and harsh climatic conditions of mountainous terrain. Remote sensing (Ekwueme 2022; Mann and Gupta 2022; Jabal et al. 2022; Dibs et al. 2023a; Kanmani et al. 2023) provides an alternative for mapping and monitoring glaciers in mountainous terrain (Hall et al. 2003). Numerous satellites have been successfully used for mapping and monitoring glacier facies and features. The use of geospatial techniques is increasing day by day for creating digital maps of various glacier features and facies, for 2D and 3D visualization of glaciers, estimation of mass balance, temporal changes in ice volume, determination of the equilibrium line, etc. (Bolch 2007; Hall et al. 2003; Surazakov and Aizen 2006; Bauder et al. 2007; Leonard and Fountain 2003; Kulkarni et al. 2004, etc.). In the recent past, it has been reported that glaciers in different regions of the Himalayas are receding at varying rates. Receding glaciers will affect the availability of water in different regions of North India. The Himalayas have the largest glaciated area (approx. 33,000 km²) outside the Polar Regions and house a total of 5243 glaciers (Kaul 1999). The Himalayas have three parallel ranges, the Himadri, Himachal, and Shivalik (Bhambri et al.
2011). Himalayan glaciers are generally divided into three river basins, namely Indus, Ganga, and Brahmaputra. The Indus basin has the largest number of glaciers (approx. 3500), whereas the Ganga and Brahmaputra basins contain about 1000 and 660 glaciers, respectively (Kaul 1999; Hasnain 1999). Surface runoff from glacier and snow melt is a source of water for drinking as well as for industry (Barnett et al. 2005; Moore et al. 2009). Himalayan glaciers are a major source of water for the northern states of India. It has been reported that Himalayan glaciers are receding at an alarming rate (Kulkarni et al. 2005, 2007), which may have many implications, e.g., the formation of lakes near glacier snouts and the consequent danger of glacial lake outburst floods, water scarcity in the northern region of India, etc. Various studies have focused on assessing the mass balance of different glaciers in the Himalayas. Dobhal and Mehta (2008) reported a snout recession of the Dokriani glacier at an average rate of ~15.9 m/yr during 1991-2007. They also reported that the frontal area vacated by the glacier was about ~10% from 1962 to 1991. Pavlova et al. (2016) investigated melt-related issues for the Silvretta Glacier. Yong et al. (2010) reported a reduction in glaciers of the central Himalayas in the Nepal region by ~501.91 ± 0.035 km² from 1976 to 2006. Pandey et al. (2011) reported a loss of ~9.23 sq. km of glacier area during 1962-2007 for a Himalayan glacier in Himachal Pradesh. Basnett and Kulkarni (2011) reported that Eastern Himalayan glaciers de-glaciated less than those of the western Himalayas and observed a loss in glacier area of ~1.17% between 1990 and 2004. They also reported that big glaciers (area > 10 sq. km) receded by ~0.51%, while small glaciers (area < 10 sq. km) reduced by ~6.67% during this period. Bolch et al. (2012) reported that ~25% of glaciers in the west of the Karakoram were stable or advancing, while glaciers in the north of the Karakoram (Wakhan Pamir) had shown retreat from 1976 to 2007. Racoviteanu et al. (2015) estimated the glacier area in Sikkim and Nepal from 1962 to 2000 and reported that the glacier area was reduced in Nepal by ~0.53 ± 0.2% per year and in Sikkim by ~0.44 ± 0.2% per year. They also reported that debris-free glaciers show more reduction in area than debris-covered glaciers. In a study conducted by Frauenfelder and Kaab (2009), the glacier area in the North-West Himalayas was reduced by ~8% in 10 years. Garg et al. (2017) studied the influence of topography on central Himalayan glaciers and reported that, out of 18 observed glaciers, 8 had shown a moderate influence and 4-5 a strong influence of topography on area loss and retreat. Meteorological observations near the snout of the Gangotri glacier in the central Himalaya have shown a decreasing trend in snowfall and an increasing trend in temperatures (Gusain et al. 2015). Sunita et al. (2023) analyzed the snow cover area (SCA) over the Beas River basin, Western Himalayas, for the period 2003-2018. Results showed an average SCA of ~56% of its total area, with the highest annual SCA recorded in 2014 at ~61.84%. Conversely, the lowest annual SCA occurred in 2016, reaching ~49.2%. Notably, fluctuations in SCA are highly influenced by temperature, as evidenced by the strong connection between annual and seasonal SCA and temperature. Although several studies have been undertaken in some basins of the Himalayas (Kulkarni et al. 2007; Bolch 2007; Shukla et al.
2009; Shukla et al. 2010; Garg et al. 2017, etc.), the glaciers of the Bhagirathi basin have still not been thoroughly explored. Only a few studies are available on the glaciers of this basin, and those are concentrated in restricted areas. Thapliyal et al. (2023) assessed the spatio-temporal changes in glacier area, volume, and snout positions of the Satopanth (SPG) and Bhagirathi-Kharak (BKG) glaciers of the Mana basin in the Central Himalayan region of India from 1968 to 2017, based on CORONA photographs and satellite data. The ice velocity was found to vary from 0.117 m d⁻¹ to 0.165 m d⁻¹ between January and October. Besides this, they analyzed temperature variability from 2001 to 2020 and found significant warming over the basin, along with a significant negative trend in SCA during different seasons. Khan et al. (2017) evaluated the glacial melt fraction at the exit of the Bhagirathi basin (Devprayag) to be 11% considering pre-monsoon data and 12% considering post-monsoon data. They observed a generally decreasing glacial melt component downstream from the glacier snout. The glacial melt fraction decreases from the snout of the Gangotri glacier to Devprayag during the pre-monsoon and post-monsoon periods, whereas surface runoff in the form of snowmelt is the major fraction during the pre-monsoon and in the form of rainfall during the post-monsoon period in the Bhagirathi river. Since there are only limited studies on the glaciers of the Bhagirathi basin, the novelty of the present study consists in analyzing glacier retreat and advancement for the years 2000 and 2015 in the whole Bhagirathi Basin, Garhwal Himalayas, using remote-sensing data covering more than 65% of the total glaciated area; it therefore represents one of the most exhaustive studies to cover the highest number of glaciers in all sub-basins of the Bhagirathi Basin and is of crucial importance in understanding the overall status of its glaciers. Study area The study area lies in the Garhwal Himalayas of the State of Uttarakhand, India, and is part of the Indian central Himalaya. The Garhwal region's two major drainage basins are the Alaknanda and Bhagirathi River Basins. In this study, the Bhagirathi River Basin (Fig. 1) has been selected, which lies between 78° 08′ 49″ E and 79° 24′ 43″ E longitude and 30° 08′ 49″ N and 31° 26′ 35″ N latitude. Bhagirathi, Bhilangana, Jalandhari, Jahnvi, Pilang, and Kaldi are sub-drainage basins of the Bhagirathi basin. Most of the glaciers in this basin lie at altitudes between 3800 and 7000 m (a.s.l.). The total catchment area of the basin is approximately ~7600 km², 75% of which is above 2000 m (a.s.l.). Gangotri is the largest glacier in the basin. Gusain et al. (2015) have reported annual means of minimum temperature, maximum temperature, and snowfall of −2.3 ± 0.4 °C, 11.1 ± 0.7 °C, and 257.5 ± 81.6 cm, respectively, at the Bhojbasa observation station near the snout of the Gangotri Glacier in the study area. Materials In this study, Landsat images of the Thematic Mapper (TM), Enhanced Thematic Mapper (ETM), and Landsat 8 Operational Land Imager (OLI) (Dibs et al. 2023b) at 30 m spatial resolution have been used to estimate glacier area change between the years 2000 and 2015. The summary of the data set is listed in Table 1.
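Multispectral bands such as those listed in Table 1 are commonly combined into false color composites for visual interpretation of glacier features. The following is a minimal sketch, assuming the three bands have already been read into numpy arrays on a common 30 m grid; the percentile stretch and the synthetic test arrays are illustrative, not taken from the paper.

```python
import numpy as np

def false_color_composite(swir, nir, green):
    """Stack three co-registered Landsat bands into an RGB false color
    composite (SWIR-NIR-Green), which makes clean ice and snow stand out
    against rock and debris for visual snout identification."""
    def stretch(band):
        # Linear 2-98 percentile stretch to [0, 1] for display purposes.
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([stretch(swir), stretch(nir), stretch(green)])

# Synthetic 100 x 100 reflectance arrays standing in for real bands
# (in practice these would be read from the GeoTIFF scenes).
rng = np.random.default_rng(0)
swir, nir, green = rng.random((3, 100, 100))
rgb = false_color_composite(swir, nir, green)
print(rgb.shape)  # (100, 100, 3)
```

The same function applies to the other band combinations mentioned in the Methods (NIR-SWIR-Red, NIR-Red-Green) simply by reordering the arguments.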
The Bhagirathi basin has 288 glaciers, among which 26 glaciers having an area of more than 5 km² have been selected, as they contribute two-thirds of the total glaciated region of the basin, together with three small glaciers. Thus, a total of 29 glaciers have been monitored in this study, which may be regarded as the most representative glaciers of the whole Bhagirathi River Basin. The detailed list of glaciers in the Bhagirathi sub-basins is given in Table 2. Methods The overall approach used in the present study is described below. Landsat images (TM/OLI) over the study area were obtained for the two periods (2000 and 2015) from http://glovis.usgs.org/. Apart from the Landsat images, the ASTER GDEM was obtained from http://www.lpdaac.usgs.gov/datapool/datapool.asp/ and used for topographic analysis. The methodology adopted for the present study is shown in Fig. 2. In the first preprocessing step, all the datasets were co-registered, taking the 2015 OLI scene as the base image, with an RMSE of < 1 pixel (30 m), to achieve congruence among them. The slope match method, considered one of the best methods for the Indian Himalaya (Mishra et al. 2009), was used to obtain topographically corrected reflectance. In this study, various glacier parameters, such as glacier area, glacier length, snout position, snout altitude, and aspect, were extracted using the satellite images and the DEM to evaluate glacier health in terms of retreat and advance. The glacier snout position was estimated by means of visual interpretation techniques using features such as shape, texture, tone, and surroundings (Kulkarni et al. 1991; Basnett et al. 2013). Where the snout position could not be identified visually, False Color Composites (FCC) were generated using different band combinations, such as SWIR-NIR-Green, NIR-SWIR-Red, and NIR-Red-Green, to identify the snout properly. Manual delineation of the glacier boundaries available in the GLIMS datasets has been reported as one of the most accurate methods by Garg et al. (2017); therefore, in the present study, glacier boundaries have been demarcated manually using the Landsat images with the help of 3D visualization techniques. The orientation of the glaciers has been determined in eight directions, i.e., North (N), North-East (NE), East (E), South-East (SE), South (S), South-West (SW), West (W), and North-West (NW), using the ASTER DEM; a minimal version of this computation is sketched in the code below. Glacier length is defined as the length of the longest flow line of a glacier and can also be used to study glacier health. Furthermore, the relationships/correlations between all the selected topographic parameters and the glacier changes (area/length) were estimated and then analyzed.
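As a concrete illustration of the orientation analysis described above, the sketch below derives slope and aspect from a DEM array and bins aspect into the eight compass sectors used in the study. It is a minimal numpy version, assuming square 30 m cells with rows increasing southwards; it is not the exact GIS routine used by the authors.

```python
import numpy as np

def slope_aspect(dem, cell=30.0):
    """Slope (degrees) and aspect (degrees clockwise from north) from a
    DEM with square cells of `cell` metres; rows increase southwards."""
    dz_dy, dz_dx = np.gradient(dem, cell)   # axis 0 = south, axis 1 = east
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Downslope direction: east component = -dz_dx, north component = dz_dy.
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

def aspect_sector(aspect_deg):
    """Bin aspect into the eight sectors N, NE, E, SE, S, SW, W, NW."""
    sectors = np.array(["N", "NE", "E", "SE", "S", "SW", "W", "NW"])
    idx = np.round(np.asarray(aspect_deg) / 45.0).astype(int) % 8
    return sectors[idx]

# Tiny synthetic DEM sloping down towards the east: aspect should be "E".
dem = 4000.0 - 30.0 * np.arange(5) * np.ones((5, 5))
print(aspect_sector(slope_aspect(dem)[1])[2, 2])  # E
```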
Results In the Bhagirathi Basin, 29 glaciers have been considered, and various glacier parameters, such as glacier length, glacier area, and topographic parameters, were computed using Landsat satellite images and the ASTER DEM to assess glacier status and changes in the Bhagirathi River Basin. Here, the results for the basin and all its glaciated sub-basins are discussed to give a microscopic overview of the Bhagirathi Basin. Spatio-temporal variability of glaciers in Bhagirathi basin during 2000-15 The Bhagirathi basin consists of six sub-basins: Bhagirathi, Bhilangana, Jahnvi, Jalandhari, Pilang, and Kaldi. The Kaldi sub-basin has no glaciated region, whereas the remaining five sub-basins contain 288 glaciers. In the present study, 29 glaciers (26 with an area of more than 5 km², together with three smaller ones) have been considered; these are listed in Table 2. It is observed from the analysis, as shown in Fig. 3, that the Dokriani and Dehigad glaciers, lying in the Pilang and Jahnvi sub-basins respectively, show the maximum retreat (~0.204 km²) and the maximum advance (~0.016 km²) in terms of area during 2000-2015. The snout elevation is observed at 3923 m a.s.l. and 4028 m a.s.l. for the Dokriani glacier, and at 4099 m a.s.l. and 4039 m a.s.l. for the Dehigad glacier, in 2000 and 2015, respectively. The study also reveals changes in snout elevation and length of −2.7 m and −291 m for the Dokriani glacier, and of 1.5 m and 152 m for the Dehigad glacier, during the data period. Furthermore, all the glaciers show retreating activity, except four (Dehigad, Maitri, 52 Bhagirathi, and Rudugaira), during the study period. Spatio-temporal variability of glaciers in Bhagirathi sub-basin during 2000-15 Changes in length, area, and snout elevation for this sub-basin are shown in Fig. 4. The change in snout elevation over the data period shows that the maximum increase is observed in the 54 Bhagirathi glacier (~212 m a.s.l.) and the maximum decrease in the Maitri glacier (~25 m a.s.l.). The 54 Bhagirathi, Kedar, Chaturangi, Raktavarn, Swetvaran, Gangotri, and 13 Bhagirathi glaciers have shown an increase in snout elevation, whereas the Maitri, 52 Bhagirathi, and Rudugaira glaciers have shown a decrease during 2000-2015. Furthermore, glacier terrain parameters (i.e., aspect and slope) derived from the ASTER DEM show that more than 50% of the advancing glaciers are northerly oriented, while less than 20% are oriented toward the south. It has been widely reported that north-facing glaciers persist, whereas south-facing glaciers are disappearing (Sakai et al. 2002; Dobhal and Mehta 2010; Garg et al. 2016). The amount of depletion is not large, but it shows a general trend of depleting glaciers in the Bhagirathi basin. Also, it was found that the Maitri glacier, with the maximum mean slope of ~26.50°, has the highest advancing trend. Similar results have been reported in various studies (Bhambri et al. 2011; Dobhal et al. 2013). Additionally, the analysis shows a positive correlation of ~0.3 between variations in glacier length and area, and a negative correlation of ~0.62 between glacier length and snout elevation for retreating glaciers. A positive correlation between mean glacier slope and area variation has been observed for both advancing (~0.68) and retreating (~0.5) glaciers. Spatio-temporal variability of glaciers in Jalandhari sub-basin during 2000-15 It is observed from Fig. 5 that in the Jalandhari sub-basin all the glaciers are decreasing in length during the study period. The Rohilla glacier has shown the minimum decrease in length (86 m), followed by 66 Chhaling (100 m), Sian (119 m), 61 Jalandhari (123 m), 5 Bartikhunt (180 m), and 2 Chhalan (251 m). A decrease in glacier length of up to ~251 m in the Jalandhari sub-basin is indicative of an overall depletion of glacial cover. Furthermore, glacier terrain parameters (i.e., aspect and slope) derived from the ASTER DEM show that northerly and north-easterly oriented glaciers are retreating fast in this sub-basin. The maximum mean slope of 34.40° has been found in the Chhalan glacier, which shows the highest retreating trend (Fig. 5). Additionally, the analysis shows a positive correlation of ~0.8 (in the Sian glacier) between glacier length and area; nevertheless, a high negative correlation of ~0.9 between glacier length and snout elevation has also been observed for retreating glaciers. A positive correlation between length and area highlights that glacier area decreases with a decrease in length, whereas a negative correlation between length and snout elevation demonstrates that glacier length decreases as snout elevation increases. A low negative correlation value of ~0.119 between length and northern orientation suggests that glaciers facing north have no significant length variation. This result conforms to the universally accepted thesis that topographic shading is an important factor affecting glacier mass balance and that north-facing glaciers are more persistent. In the Northern Hemisphere, north-facing slopes at latitudes of about 30-55 degrees receive less direct sunlight than south-facing slopes and therefore remain cooler, causing snow on north-facing slopes to melt more slowly than on south-facing ones. The scenario is the opposite in the Southern Hemisphere, where north-facing slopes receive more sunlight and are consequently warmer. Also, a negative correlation (~0.68) between mean glacier slope and area has been estimated for the glaciers of the Jalandhari sub-basin.
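The correlation values quoted above are ordinary Pearson coefficients between per-glacier parameters. A minimal sketch of the computation, using the Jalandhari length losses from the text paired with purely illustrative area losses (the actual per-glacier area values appear in Fig. 5 and are not reproduced here):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Length losses (m) for the six Jalandhari glaciers, from the text; the
# paired area losses (km^2) are illustrative placeholders only.
length_change = [-86, -100, -119, -123, -180, -251]
area_change = [-0.02, -0.03, -0.05, -0.04, -0.09, -0.12]
print(f"r(length, area) = {pearson_r(length_change, area_change):.2f}")
# A strongly positive r (about 0.99 for these placeholder values) means
# that area loss grows with length loss, as reported for this sub-basin.
```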
Spatio-temporal variability of glaciers in Jahnvi sub-basin during 2000-15 It is observed from Fig. 6 that, out of the 6 glaciers in the Jahnvi sub-basin, 5 have shown a marked decline in length, whereas one has shown a marked increase. The maximum increase in length, of ~152 m, has been found in the Dehigad glacier during the analyzed period, whereas the Surali glacier has shown the maximum decrease in length (172 m), followed by Kailash (133 m), Guligad (119 m), 60 Bambigad (106 m), and Mana (90 m), respectively. Interpretation of satellite imagery for the years 2000 and 2015 indicated a decrease in area for all the selected glaciers except the Dehigad glacier. A marked decrease in glacier area of approx. ~0.33 km² is observed in the Surali glacier. The other glaciers, namely Mana, 60 Bambigad, Guligad, and Kailash, have shown decreases in glacier area of ~0.013 km², ~0.05 km², ~0.06 km², and ~0.097 km², respectively, while the Dehigad glacier increased by ~0.016 km² during the data period (Fig. 6). Furthermore, glacier terrain parameters (i.e., aspect and slope) derived from the ASTER DEM show that north-easterly and easterly oriented glaciers are retreating in this sub-basin. The maximum mean slope of 30.50° has been found in the Dehigad glacier, which shows some advancing trend. Various authors have reported that persisting glaciers are situated on steeper slopes. Additionally, the analysis shows a low positive correlation value of ~0.13 between glacier length and northern orientation, suggesting that glaciers facing north have no significant length variation. This result conforms to the universally accepted thesis that topographic shading is an important factor affecting glacier mass balance and that north-facing glaciers are more persistent. A low negative correlation value of ~0.15 has been observed between glacier length and southern orientation. South-facing glaciers have shown length variations owing to direct solar radiation, but here the result deviates from the common understanding. A positive correlation (~0.47) between glacier mean slope and area has been observed for the glaciers of the Jahnvi sub-basin. A moderate negative correlation (~0.35) has been observed between glacier snout elevation and area, implying that glacial area decreases as snout elevation increases. Furthermore, the calculated results of glacier area change clearly show that all glaciers except one lost area during the study period. The total glacier area changed from 61.77 km² to 61.23 km² (0.88%). Spatio-temporal variability of glaciers in Bhilangana sub-basin during 2000-15 It is observed from Fig.
7 that in the Bhilangana sub-basin all glaciers decreased in length during the study period. The Khatling glacier has shown the maximum decrease in length (212 m), followed by 15 Bhilangana (181 m) and 19 Bhilangana (47 m), respectively. The interpretation of satellite imagery for the years 2000 and 2015 shows a decrease in area for all the selected glaciers of the Bhilangana sub-basin. 19 Bhilangana, 15 Bhilangana, and Khatling have shown decreases in area of ~0.01 km², ~0.03 km², and ~0.07 km², respectively. Although the magnitude of the area variation is very low, these results still reflect the general trend of glacial retreat in the Bhilangana sub-basin (Fig. 7). Furthermore, glacier terrain parameters (i.e., aspect and slope) derived from the ASTER DEM show that northerly oriented glaciers are retreating at a very slow rate in this sub-basin. The maximum mean slope of ~27.88° has been observed in the 15 Bhilangana glacier, which shows a minor decrease in area. Various authors have reported the existence of persisting glaciers on steeper slopes. Additionally, the analysis shows a high positive correlation value of ~0.89 between glacier length and area, and a moderately high negative correlation value of ~0.69 between glacier length and snout elevation in this sub-basin. A positive correlation between length and area suggests that glacier area decreases with a decrease in length, whereas a negative correlation between length and snout elevation shows that glacier length decreases as snout elevation increases; this conforms to the existing hypothesis. A high negative correlation value (~0.80) has been estimated between northern orientation and area, attributable to north-facing slopes showing lower areal variation. A moderate negative correlation value of ~0.45 has been observed between glacier length and northern orientation. The analysis shows that glaciers facing north have no significant length variation. This result conforms to the universally accepted thesis that topographic shading is an important factor affecting glacier mass balance and that north-facing glaciers are more persistent. Similar findings have also been reported by several authors working on Himalayan glaciers. A high negative correlation (~0.94) was estimated between glacier snout elevation and area, implying that with an increase in snout elevation, the glaciated area decreases. Generally, persisting glaciers are found at higher elevations owing to the prevailing low-temperature conditions. The overall results of the area change estimation show that all glaciers of the Bhilangana sub-basin depleted between 2000 and 2015, and the total glacier area changed from 54.72 km² to 54.61 km² (~0.20%). Spatio-temporal variability of glaciers in Pilang sub-basin during 2000-15 In the Pilang sub-basin, a significant negative change in length of ~291 m was found in the Dokriani glacier, followed by Jaonli (109 m) and 27 Ganigad (1 m), respectively. All three glaciers of this sub-basin have shown a decrease in length (Fig. 8); the corresponding retreat rates are computed in the sketch below.
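Retreat rates quoted in this paper follow directly from dividing the length change by the 15-year observation span. The sketch below reproduces, under that assumption, the Pilang figures given above and in the abstract:

```python
# Retreat rate = length change / observation span (2000-2015, 15 years).
span_years = 2015 - 2000
length_loss_m = {"Dokriani": 291, "Jaonli": 109, "27 Ganigad": 1}

for name, loss in length_loss_m.items():
    print(f"{name}: {loss / span_years:.2f} m/yr")
# Dokriani: 19.40 m/yr and 27 Ganigad: 0.07 m/yr, matching the ~19.4 and
# ~0.06 m/yr extremes reported for the basin (up to rounding).
```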
The analysis of the temporal Landsat imagery (2000 and 2015) shows a decrease in area for all the glaciers of the Pilang sub-basin. The 27 Ganigad, Jaonli, and Dokriani glaciers have shown decreases in area of ~0.1 km², ~0.01 km², and ~0.2 km², respectively. A marked decrease in glacial cover has been observed in the Dokriani glacier, as also reported earlier by Dobhal et al. (2004). Although the magnitude of the area variation is low in the other two glaciers of this sub-basin, these results broadly reflect the general trend of glacial retreat in the Pilang sub-basin. Apart from changes in glacier length and area, snout elevation changes have also been examined: snout elevation changed by ~151.3 m, 22 m, and 1.1 m for the Dokriani, Jaonli, and 27 Ganigad glaciers, respectively (Fig. 8). Additionally, the analysis shows a positive correlation (~0.66) between glacier length variation and area, and a very high negative correlation (~0.97) between glacier length and snout elevation in this sub-basin. A positive correlation between length variation and area variation suggests that glacier area decreases with a decrease in length and vice versa, whereas a high negative correlation between length variation and snout elevation shows that glacier length decreases significantly as snout elevation increases. A high negative correlation (~0.85) has been estimated between glacier length and northern orientation, suggesting that glaciers facing north have no significant length variation. A very high negative correlation (~0.99) was estimated between glacier length and mean slope, implying that as the mean slope increases, the glacier length decreases significantly. Different rates of retreat/advance of glaciers within a region where climatic conditions do not change significantly are due to the important role played by the dynamics of ice movement, which in turn is controlled by the mean slope and length of the glacier. A high negative correlation (~0.83) was estimated between glacier snout elevation and area in this sub-basin. Several authors (Kulkarni et al. 2005) have found the same trend in Himalayan glaciers. Generally, more persistent glaciers are found at higher elevations due to the prevailing lower-temperature conditions. A negative correlation value (~0.16) has been estimated between northern orientation and area, indicating that north-facing glaciers can be more or less persistent. The overall results of the area change estimation show that all glaciers of the Pilang sub-basin depleted between 2000 and 2015, and the total glacier area changed from 25.36 km² to 25.04 km² (1.26%). Error estimation Error estimation is an essential step to assess the accuracy of the results; in this study, the following methods have been used: (a) Error in the digitization of glacier boundaries: results obtained from the Landsat images and Google Earth (GE) were compared. Although this is a crude method, as Google Earth images can only be used for reference purposes, it has been widely used by various authors (e.g., Hall et al. 2003; Bhambri et al. 2012; Garg et al. 2016). The same method has been used to estimate the mapping error between the Landsat images and Google Earth, and an error of less than 1% was observed.
(b) Error in the estimation of glacier area: this error is also known as the mapping error and can be calculated using the formula Mapping Error = N × A / 2 (1), where N is the number of pixels along the glacier boundary and A is the area of one pixel (a worked example is sketched below). The errors in estimation vary from 2.86% to 9.41% for the year 2000 and from 2.85% to 9.81% for the year 2015. (c) Error in the ASTER DEM elevation: the image matching method was used to create the ASTER DEM, which is available for 99% of the world's land (Hu et al. 2017). The ASTER DEM has approximately 15 m horizontal and 15-25 m vertical accuracy, depending on the terrain. The vertical and horizontal errors in the ASTER DEM lead to errors in the snout elevation.
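To make Eq. (1) concrete: with 30 m Landsat pixels, A = 900 m², so a glacier outline crossing N boundary pixels carries an area uncertainty of N × 900 / 2 m². A minimal sketch with a hypothetical pixel count (the per-glacier counts are not reported in the text):

```python
# Mapping error per Eq. (1): half the total area of the pixels crossed
# by the digitized glacier outline. Landsat pixel: 30 m x 30 m.
def mapping_error_km2(n_boundary_pixels, pixel_size_m=30.0):
    pixel_area_km2 = (pixel_size_m ** 2) / 1e6
    return n_boundary_pixels * pixel_area_km2 / 2.0

# Hypothetical glacier: 2,000 boundary pixels, 15 km^2 mapped area.
err = mapping_error_km2(2000)
print(f"mapping error = {err:.2f} km^2 = {100 * err / 15.0:.1f}% of area")
# 0.90 km^2, i.e. 6.0% -- inside the 2.86-9.81% range reported above.
```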
Discussion The present study has demonstrated that most of the glaciers in the Bhagirathi river basin are retreating, resulting in a loss of glaciated area; however, a few glaciers have shown advancement and an increase in glaciated area during the period. The present study shows an increase in the retreat of the Dokriani glacier from 2000 to 2015, with the vacated area amounting to about 2.9% of the total glacier area; this is coherent with the result from Dobhal and Mehta (2008), who reported a snout recession of the Dokriani glacier at an average rate of ~15.9 m/yr during 1991-2007. The present study reports glacier area losses of ~0.1% to ~4.4% from 2000 to 2015 for glaciers in the Bhagirathi Basin, Garhwal Himalayas. The highest area loss, ~4.4%, was shown by the Surali glacier during the study period. A similar trend of glacier recession has been observed in other parts of the Himalayas. This is coherent with the results of Yong et al. (2010), who reported a reduction in glaciers of the central Himalayas in the Nepal region by ~501.91 ± 0.035 km² from 1976 to 2006. In the present study, it has also been observed that large glaciers with an area of more than ~10 km² lost a mean glacier area of 0.16%, while smaller glaciers lost a mean glacier area of ~1.14%. This is coherent with the findings of Ahmad (2016), who indicated that glaciers with low maximum altitude, low relief, and shorter lengths are more prone to shrinkage than others in the Indian Himalaya. This is also coherent with the results of Basnett and Kulkarni (2011), who reported a recession of ~0.51% for big glaciers (area > 10 sq. km) and of ~6.67% for small glaciers (area < 10 sq. km) during that period. By analyzing spatio-temporal patterns of glacier change in China over past decades together with their influencing factors, Su et al. (2022) also found a negative correlation between glacier AAC (Annual Area Change) and glacier size. As noted, most of the glaciers in the Bhagirathi river basin are retreating; only four glaciers out of 29 have shown advancement and an increase in glaciated area during the period 2000-2015. This is more critical than the findings of Bolch et al. (2012) regarding the period 1976-2007, who reported that ~25% of glaciers in the west of the Karakoram were stable or advancing, while glaciers in the north of the Karakoram (Wakhan Pamir) had shown retreat. This study has also highlighted the influence of topography on glacier mass balance. In the present study, a significant positive correlation was observed between glacier area and glacier length; however, no significant correlation was observed between the area vacated and the snout elevation. A positive correlation between length variation and area variation suggests that glacier area decreases with a decrease in length. According to Garg et al. (2017), only 4-5 glaciers out of 18 had shown a strong influence of topography on area loss and retreat. Meteorological observations near the snout of the Gangotri glacier in the central Himalaya have shown a decreasing trend in snowfall and an increasing trend in temperatures (Gusain et al. 2015). In this study, high positive and negative correlations are observed between glacier length and snout elevation in advancing and retreating glaciers, respectively. This is because, in retreating glaciers, a decrease in glacier length results in snout recession and an increase in snout elevation. This is coherent with the results obtained by Ahmad (2016), who monitored and predicted glacier recession and found that a shift of the snout to higher altitude contributes a loss to the ablation area, while a shift of the equilibrium line to high altitude results in accretion of the ablation area. He also showed that the decrease in ablation length is partially compensated by accreted ablation length in the accumulation area. Finally, glaciers with both northern and southern orientations have shown retreat, although lower latitude glaciers have shown a higher retreat than higher latitude ones. This is because the former usually experience significant periods of above-zero temperatures and melting in the summer months, whereas in high latitude locations temperatures may never rise above zero, so no melting occurs. This explains why ice sheets are so thick in polar regions, despite very low precipitation inputs. The current retreat and area loss of most of the glaciers may be attributable primarily to the impact of climate change in the region, although the variation in retreat and area loss from one glacier to another can be attributed to variations in the micro-climatology and topography of the region. Though most of the glaciers of the Himalaya are retreating at different rates, some are also advancing, which indicates that global warming is not the only reason behind glacier dynamics. Factors other than climate change must also contribute to glacier behavior. The glacial snout is generally in an unstable equilibrium, its position being determined by a complex interplay of topo-climatic factors such as snowfall, temperature through the year, deposition of rock debris, and the effects of ocean tides on iceberg calving rates. Researchers are still investigating the anthropogenic and natural factors behind glacier retreat and advance. Studies lasting many tens of years are necessary to unravel the phenomena responsible for glacier behavior. Conclusions Himalayan glaciers represent the headwaters of several of Asia's great river systems; they act as a buffer, generally as a reservoir of water in winter, and release melt water in summer to partly satisfy the water needs for irrigation, hydropower, and local water supplies for people living in the Ganges-Brahmaputra, Indus, Mekong, Yangtze, and Yellow river basins. Glaciers of the Himalayas are a sensitive indicator of climate change: regardless of altitude or latitude, they have been melting at a high rate since the mid-twentieth century. However, scientists currently have only a limited understanding of the extent of this melting, since the full extent of ice loss has only been partially measured and understood. Therefore, integrated monitoring of glacier dynamics in the Himalayas is of utmost importance for planning water security measures in the Himalayas.
This study represents an important contribution to glacier mass balance monitoring in the Himalaya. The novelty of the present study consists in applying remote sensing to study glacier retreat and advancement for the years 2000 and 2015 in the Bhagirathi Basin, Garhwal Himalayas. Although several studies have been undertaken in the Himalayas, the glaciers of the Bhagirathi basin have not been explored satisfactorily. This study comprehensively monitors the status of 29 selected glaciers (~65% of the total glacierised area) in the Bhagirathi river basin and investigates glacier dynamics from 2000 to 2015. Impacts of topographic factors on glacier changes have also been examined. The following major inferences may be drawn from the results: • The total glacier area decreased from 494.98 km² in 2000 to 492.49 km² in 2015 (i.e., a loss of ~0.5% of the glacier ice) during the study period. • All selected glaciers, except four (Dehigad, Maitri, 52 Bhagirathi, and Rudugaira), have shown retreat, with retreat rates varying from a minimum of ~0.06 m per year (27 Ganigad) to a maximum of ~19.4 m per year (Dokriani) during the study period. Both of these glaciers lie in the Pilang sub-basin. • The glaciers showing a retreat of more than ~10 m/yr are as follows: Dokriani (~19 m/yr), 2 Chhalan (~16 m/yr), Khatling (~14 m/yr), 13 Bhagirathi (~13 m/yr), 15 Bhilangana (~12 m/yr), Surali (~11 m/yr), Gangotri (10.7 m/yr), Swetvaran (10.5 m/yr), and Raktavaran (10.1 m/yr). • High positive and negative correlations are observed between glacier length and snout elevation in advancing and retreating glaciers, respectively. • Glaciers with both northern and southern orientations have shown retreat, although lower latitude glaciers have shown higher retreat, and glaciers with a higher proportion of northerly orientation have shown less retreat in a few basins. The Dehigad glacier is oriented 92% toward the north and has the maximum observed advance (152 m). Future directions of the present research will be to run simulation models to assess the impact of glacier retreat on groundwater storage dynamics and aquifer system evolution at short and long term in the Himalayas. This will address a current research gap, as present knowledge of the future impacts of glacier melting on aquifer dynamics is limited by the oversimplification of groundwater processes in hydrological models due to computational and/or observational limitations in mountain regions, including the Himalayas (Somers et al. 2019).
Fig. 1 Geographical location map of the study area showing sub-basins and glaciers of the Bhagirathi basin.
Fig. 3 Maximum retreating glacier (i.e., Dokriani glacier) in terms of (a) area and (b) snout elevation, and maximum advancing glacier (i.e., Dehigad glacier) in terms of (c) area and (d) snout elevation during 2000-15.
Fig. 4 Change in length, area, snout elevation during the data period, and terrain parameters in the Bhagirathi sub-basin.
Fig. 5 Change in length, area, snout elevation during the data period, and terrain parameters in the Jalandhari sub-basin.
Fig. 6 Change in length, area, snout elevation during the data period, and terrain parameters in the Jahnvi sub-basin.
Fig. 7 Change in length, area, snout elevation during the data period, and terrain parameters in the Bhilangana sub-basin.
Fig. 8 Change in length, area, snout elevation during the data period, and terrain parameters in the Pilang sub-basin.
Table 1 Dataset used in the study.
Table 2 List of selected glaciers in the Bhagirathi Basin.
2024-05-11T15:52:27.336Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "a4ce36c58631d176480c82a09f4777029160bf88", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12665-024-11565-7.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "910c8396cb5cf5f31b3563c1658d19a860025525", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [] }
239770507
pes2o/s2orc
v3-fos-license
Analysis of Centroid Trajectory Characteristics of Axial-Flow Pump Impeller Under Hydraulic Excitation The hydraulic excitation characteristics of an axial flow pump unit are studied through theoretical analysis, numerical simulation, and field tests in this paper. The correlation between the hydraulic force on the impeller and the radial vibration displacement of the impeller centroid is obtained through theoretical analysis. Through 1-way fluid-solid-interaction (FSI) numerical simulation, the distributions of water pressure and displacement on the impeller surface are obtained, and the time-domain and frequency-domain characteristics of the transient hydraulic force and radial displacement are revealed. Through the field test, the external characteristics of the axial flow pump unit and the time-frequency characteristics of the pressure pulsation at measuring points beside the inlet of the impeller are obtained. Comparisons between simulation results and experimental results show that the simulation is accurate and reliable. INTRODUCTION With the construction of the whole line of the South-to-North Water Transfer Project in China and the renewal of national pumping stations, the number of large-scale axial-flow pumping stations is gradually increasing, and their safe operation has attracted attention from all sides. Experience from pumping stations built at home and abroad shows that vibration problems often affect safe operation. Hydraulic excitation is one of the main causes of vibration of the axial-flow pump unit, especially of its rotor system. Transient hydraulic excitation of an axial flow pump refers to the vibration characteristics of axial flow pump components under complex unsteady flow actions, such as cavitation, rotating stall, rotor-stator interaction, inlet and outlet reflux, and so on. Therefore, research on the hydraulic excitation characteristics of axial flow pumps includes two aspects: the internal flow field of the pump device and the vibration response of the pump unit. At present, there are many achievements in the research of the internal flow field of axial flow pumps. Firstly, the velocity distributions at the inlet and outlet of the rotor have been studied, and unsteady numerical simulation has been carried out to understand the complex internal flow structure in the rotor (Cao et al., 2013). The influence of rotor-stator axial clearance on the tonal noise produced by an axial fan used in automotive cooling systems has been studied (Canepa et al., 2015). The existence of impeller tip clearance amplifies the pressure fluctuation in the impeller area from the hub to the shroud (Feng et al., 2016). The steady and unsteady internal flow fields of a large vertical axial flow pump have been studied, and the mechanism of low-frequency pressure fluctuation in the pump unit under stall conditions revealed (Kan et al., 2018). The effects of sweep angle, blade pitch angle, and hub inlet angle on the hydraulic characteristics of a submersible axial flow pump were studied using the three-dimensional Reynolds-averaged Navier-Stokes equations (Kim et al., 2019). The hydrodynamic characteristics of an axial flow pump with inconsistent blade angles have been studied; the results show that the pressure pulsation caused by blade angle deviation is mainly low-frequency pulsation (Shi et al., 2021). The above research shows that changes in the internal flow field of an axial flow pump are closely related to inlet and outlet velocity, tip clearance, and blade angle.
The stress distribution in the impeller has been calculated; the results show that the equivalent stress in the blade hub first increases and then decreases from the leading edge to the trailing edge (Pei et al., 2016). A study on the stability of a pump station unit operating in reverse generation shows that the stress is mainly concentrated at the root of the pressure side and suction side of the blade, and the maximum equivalent stress appears on the suction side of the blade (Zhou et al., 2021). The above literature reveals the distribution law of the water force acting on the blades. Three-dimensional unsteady numerical simulation of the complex turbulent flow field in an axial flow pump shows that the dominant frequency is in good agreement with the blade passing frequency (Shuai et al., 2014). Simulation of three pumps with different numbers of blades shows that the blade passing frequency and its harmonics dominate the pressure fluctuation spectrum (Kang et al., 2015). These two studies show that the blade passing frequency is the main frequency component of the fluid exciting force in the impeller. Based on numerical simulation, the internal flow pulsation characteristics of a new S-type axial extension pump device have been studied; the radial force vector diagrams under different working conditions show a petal shape, which further reveals the characteristics of the fluid exciting force in the impeller of the axial flow pump (Yang, 2013). Research on the vibration response of axial flow pump units mainly consists of the following results. The transient characteristics of an axial flow pump during start-up have been studied experimentally and numerically (Fu et al., 2020). A numerical study of vibration characteristics reveals the time and frequency laws of fluid pressure pulsation and structural vibration at the same position in a vertical axial flow pump (Zhang et al., 2019). An impeller fluid-structure coupling solution strategy has been used to address the severe vibration of a reversible axial flow pump, showing that the total deformation increases while the equivalent stress first increases and then decreases (Meng et al., 2018). By comparison, it has been found that the pressure pulsation at the inlet of the impeller is the main cause of hydraulically induced vibration of the pump device (Duan et al., 2019). The above research shows that the vibration of an axial flow pump unit is closely related to changes in the internal flow field. The full-scale structural vibration and noise caused by fluid in an axial flow pump have been simulated by a hybrid numerical method; the results are consistent with the spectral characteristics of the unsteady pressure fluctuation, indicating a correlation between the fluid exciting force and the vibration spectrum. The cavitation performance of an axial flow pump has been obtained from vibration signal monitoring (Shervani-Tabar et al., 2018). The effects of the number of guide vanes on the pressure fluctuation and structural vibro-acoustics caused by unsteady flow have been studied by a hybrid numerical method; the results show that the blade passing frequency is the dominant frequency of the pump vibration acceleration. A flexural-torsional coupled rotor-bearing system with rub impact under the electromagnetic excitation of a hydro-generator unit has been established; the model reveals that the dynamic characteristics of the system change significantly when composite eccentricity is considered.
The mechanical response of a selected rotor system model to pressure fluctuation has been simulated, and the model accuracy of the system under turbulence effects improved (Silva et al., 2021). The small difference between vibration test results and fluid-solid-interaction (FSI) analysis results has been confirmed, verifying the applicability and reliability of the FSI analysis method (Lee et al., 2021). The research on the vibration response of axial flow pump units reveals the correlation between unit vibration and the internal flow field, and the analysis methods are advanced, but no clear conclusions about the hydraulic excitation characteristics of the axial-flow pump unit have been drawn. As an important graph of the rotor vibration signal, the impeller's centroid trajectory (the trajectory of its geometric center as the rotating shaft turns) gives an intuitive picture of the motion characteristics of the system. For a rotor system with faults, the centroid trajectory diagram contains a large amount of fault vibration information and is an important basis for judging the rotor operating state and fault symptoms. Because the impeller is the part of the rotor system of the axial-flow pump unit that is in direct contact with water, and the clearance at the blade tips is very small, the axis trajectory of the impeller is also an important criterion for judging whether the rotor system is stable. The axis trajectory of a normally operating rotor system usually presents an elliptical shape with little difference between the long and short axes. The characteristics of the centroid trajectory when mechanical faults occur in the rotor system have been summarized: the axis trajectory appears "8"- or "crescent"-shaped when various misalignment faults occur in the rotor system. The characteristics of the centroid trajectory under mechanical faults such as rub impact, asymmetric support, and cracks have also been studied. It is found that the smaller the rub-impact gap, the thicker the elliptical pattern and the more harmonic components; the closer the crack is to the measured axis, the more serious the axis trajectory distortion; and the more serious the asymmetric support fault, the more obvious the period-doubling phenomenon in the centroid trajectory (Han et al., 2010). The hydraulic excitation characteristics of a centrifugal pump model device have been studied using a combination of experimental measurement and FSI numerical simulation. It was found that, under the action of hydraulic excitation, the elliptical axis trajectory of the centrifugal pump impeller becomes a distorted ellipse containing high-frequency harmonics, which reveals the hydraulic excitation mechanism of the centrifugal pump (Shi et al., 2017). The present work follows a similar approach. Firstly, through theoretical analysis, the relationship between the hydraulic pressure on the impeller and the radial vibration displacement of the impeller is obtained, and a vertical axial flow pump unit with a box culvert channel is taken as an example. Then, the hydrodynamic pressure on the impeller and the radial vibration displacement of the impeller are calculated through 1-way FSI numerical simulation to verify the rationality of the theoretical analysis. Finally, the accuracy of the numerical simulation is verified by comparing the field test results of the pump station with the numerical simulation results.
Mathematical Model of Hydraulic Excitation The hydraulic excitation of the rotor system of an axial flow pump unit satisfies the general equation of rotor dynamics. According to Lagrange's theorem, the hydraulic excitation equation is

M Ẍ(t) + C Ẋ(t) + K X(t) = F(t) (1)

where M, C, and K represent the mass, damping, and stiffness coefficients respectively, and Ẍ(t), Ẋ(t), and X(t) represent the acceleration, velocity, and displacement of the centroid respectively. Assuming that elastic coupling between the coordinates is ignored, X(t) can be expressed as the motion coordinates of the impeller's centroid in the plane, i.e., (x, y). The above equation can then be used to solve for the impeller axis trajectory under hydraulic excitation. F(t) represents the force of the water on the impeller; it can be regarded as the resultant force acting on the central particle of the impeller. If X(t) is expressed as a Fourier series, then X(t) and F(t) can be written, through Eq. 1, as

X(t) = Σ_{i=1}^{n} a_i sin(2π f_i t + φ_i) (2)

F(t) = Σ_{i=1}^{n} b_i sin(2π f_i t + φ_i + α_i) (3)

where a_i and b_i represent the amplitudes of each frequency component of displacement and force, n is the number of frequency components, f_i is the frequency, and φ_i represents the phase. Substituting Eq. (2) into Eq. (1), α_i and b_i can be expressed as

α_i = arctan[2π f_i C / (K − M (2π f_i)²)] (4)

b_i = a_i √[(K − M (2π f_i)²)² + (2π f_i C)²] (5)

It is clear from Eq. (2) and Eq. (3) that the frequencies of force and displacement correspond one to one, but their phases and amplitudes differ. The phase angle α_i and force amplitude b_i are determined by M, C, K, and f_i. If C = 0, X(t) and F(t) are in the same phase, and if C → ∞, the phase difference between X(t) and F(t) is π/2. Rotor-Stator Interaction The axial flow pump studied in this paper has three blades and five guide vanes. The expanded section of the blades and guide vanes at 0.5 times the radius is shown in Figure 1A. Rotor-stator interaction refers to the interaction between the flow field in the channel between adjacent guide vanes and the flow field in the channel between adjacent blades. The interfered flow field produces a periodic force on the blades. It is clear from Figure 1A that the phase difference between adjacent guide vanes is 2π/5, while it is 2π/3 between adjacent blades. Assuming that blade 2 and guide vane 4 are aligned at the initial time, the phase difference between blade 3 and guide vane 1 is 2π/15, which is the same as the phase difference between blade 1 and guide vane 2, as shown in Figure 1B. Extending the case shown in Figure 1B to any number of guide vanes (for example, one) and three blades, the phase difference between blade k and the nearest guide vane is 2(k − 1)π/3; that is, the three blades experience the same periodic force shifted by one third of the rotation period T. The resultant flow force acting on the three blades can then be expressed as

F(t) = Σ_{k=1}^{3} f(t − (k − 1)T/3) (6)

where f(t) is the periodic force on a single blade. It can be seen from Eq. 6 that, since the harmonics of f(t) whose order is not a multiple of three cancel in the summation, the resultant force of the flow field acting on the three blades only contains the blade passing frequency (three times the shaft frequency) and its harmonics; this cancellation is verified numerically in the sketch below. NUMERICAL SIMULATION The hydraulic excitation equation (Eq. 1) mentioned in the theoretical analysis is a second-order differential equation. In addition, the force of the water on the impeller, F(t), must be obtained by solving the fluid control equations (partial differential equations), so it is difficult to obtain the axis trajectory under hydraulic excitation by analytical methods. In this paper, the 1-way FSI analysis method based on the Workbench software is used to realize the numerical simulation of the hydraulic excitation. The adopted turbulence model resolves the flow near the surface of the flow passage components, so it is more suitable for this problem.
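The cancellation predicted by Eq. (6) is easy to verify numerically: three copies of an arbitrary periodic per-blade force, shifted by T/3, sum to a signal whose spectrum retains only harmonics at multiples of three times the shaft frequency. A minimal sketch follows; the per-blade harmonic amplitudes are arbitrary, not the simulated forces.

```python
import numpy as np

T = 0.6                         # rotation period in s (100 RPM)
dt = 0.005                      # same step as the unsteady simulation
t = np.arange(0, 4 * T, dt)     # exactly four revolutions

def blade_force(t):
    # Arbitrary periodic per-blade force: 1x, 2x, 5x harmonics (the 5x
    # mimics the five-guide-vane interaction) plus a 3x component.
    w = 2 * np.pi / T
    return (1.0 * np.sin(w * t) + 0.6 * np.sin(2 * w * t)
            + 0.8 * np.sin(5 * w * t) + 0.5 * np.sin(3 * w * t))

resultant = sum(blade_force(t - k * T / 3) for k in range(3))

amp = np.abs(np.fft.rfft(resultant)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=dt)
print(freqs[amp > 0.05])
# [5.] -- only the 3x harmonic (3/T = 5 Hz) survives; the 1x, 2x and 5x
# components of the individual blade forces cancel in the resultant.
```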
The residual type in the convergence criterion adopts RMS, the residual target is set to 1e−8, and the maximum number of cycles in each iteration step is set to 10. The structural control equation is Eq. 1, which is solved by the ANSYS solver based on the finite element method (an implicit iterative method combining Newmark time integration and the HHT algorithm) to obtain the displacement distribution at any point of the rotor system, including the radial displacement of the impeller's centroid. The data transfer on the FSI interface and the coordination of the two solvers are accomplished through the ANSYS Multifield coupling solver. The fluid and structural calculations use the same time step. Axial Flow Pump Unit Model The axial flow pump unit studied in this paper consists of four parts: the inlet channel, the outlet channel, the guide vanes, and the rotor system. The impeller is contained in the rotor system. The structural composition of the vertical axial flow pump unit is shown in Figure 2. This 3D model can be imported into the Workbench platform to complete the FSI simulation between the water in the channel and the rotor system. The length, width, and height of the inlet channel are 31.0155, 9.66, and 4.4045 m respectively, while those of the outlet channel are 28.704, 9.66, and 4.4045 m. The diameter of the impeller is 3.45 m, the guide vane inlet diameter is 3.4155 m, and the outlet diameter is 4.416 m. The rated speed is 100 RPM. The design lift is 1.22 m and the design flow is 33,459.25 L/s. The time step information required for the unsteady simulation has a great impact on the calculation results. The time step setting should satisfy the Whittaker-Shannon sampling theorem, so as to avoid aliasing at the frequencies of interest. It is not difficult to deduce that the time step, total time, and frequency resolution satisfy Eq. 7:

Δt = Δφ / (6n), T = 60C / n, Δf = 1 / T (7)

where Δt is the calculation time step; Δφ is the rotation angle of each step (in degrees); n is the pump speed (in RPM); T is the total sampling time; C is the number of pump rotations; and Δf is the sampling frequency resolution. The rated speed of the axial flow pump studied in this paper is 100 RPM, so the rotating frequency is 1.67 Hz. The axial flow pump has 3 blades and 5 guide vanes, so theoretically each blade is disturbed by the stationary guide vanes at 5 times the rotating frequency, and a fluid particle between the blades and guide vanes is disturbed at 15 times the rotating frequency. Therefore, the highest frequency of interest is set to 15 times the rotating frequency, i.e., 25 Hz. According to the Whittaker-Shannon sampling theorem, to avoid aliasing at the frequencies of interest, the sampling frequency should be 5 to 10 times the highest frequency. From Eq. 7, with a rotation angle of 3° per step, the time step is 0.005 s. To obtain stable numerical simulation results, the total time is set to 6 rotation cycles, i.e., C = 6 and the total time is 3.6 s; these relations are computed in the sketch below. Once the calculated results are stable in the time domain, the last four rotation periods are taken for the time-frequency domain analysis.
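The setup just described can be checked with a few lines; the numbers below are the paper's own (100 RPM, 3° per step, six revolutions), and the assertion verifies that the resulting 200 Hz sampling rate sits in the recommended 5-10x band above the 25 Hz frequency of interest.

```python
# Unsteady-run setup per Eq. (7).
n = 100        # pump speed, RPM
dphi = 3.0     # rotation per time step, degrees
C = 6          # number of simulated revolutions

f_shaft = n / 60.0          # 1.67 Hz shaft frequency
f_max = 15 * f_shaft        # blade x guide-vane interaction: 25 Hz
dt = dphi / (6.0 * n)       # 0.005 s
T_total = 60.0 * C / n      # 3.6 s
df = 1.0 / T_total          # ~0.28 Hz frequency resolution

fs = 1.0 / dt               # 200 Hz sampling rate
assert 5 * f_max <= fs <= 10 * f_max   # 200 Hz = 8x the highest frequency
print(dt, T_total, df, fs)
```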
Grid Division and Independence Test

The quality of the grid affects the stability of the calculation, and the grid size and count can cause large deviations in the results, so the grid information and an independence test are important. The axial flow pump unit is composed of a fluid domain and a structure domain. The three-dimensional water model of the inlet and outlet channels in the fluid domain is meshed in ANSYS ICEM; the three-dimensional water models of the impeller and guide vane are built and meshed in TurboGrid. Hexahedral structured grids are adopted throughout the fluid domain to improve computing efficiency and ensure calculation accuracy. The water-domain grids of the model pump device are shown in Figure 3, with grid quality exceeding 0.37. The grid and node counts are listed in Table 1. Before numerical calculation, it is necessary to verify whether the number of grid cells affects the calculation results. The grid independence of this model is established in the literature (Shi et al., 2017): when the number of cells exceeds 2,500,000, the results remain essentially unchanged as the count grows further, so the grid used here meets the independence requirement.

The structure-domain grid of the rotor system is generated with the meshing function of ANSYS Mechanical, using an unstructured grid; it is shown in Figure 4. The grid test of the rotor system is carried out by varying the element size from 0.04 to 0.2 m at an interval of 0.02 m. The radial static displacement of the impeller centroid under the steady flow field for each element size is shown in Figure 5: the displacement varies strongly for element sizes in the range 0.1-0.2 m and is stable in the range 0.04-0.1 m, where the variation does not exceed 0.5 μm. For element sizes below 0.04 m, the computation time and memory consumption increase sharply. After comprehensive consideration, the element size of the rotor system is set to 0.1 m, consistent with the fluid grid size, to facilitate data transfer at the fluid-structure interface. The total number of rotor-system elements is 73,251.

Numerical Simulation Platform

The flow chart of the FSI simulation on the Workbench platform is shown in Figure 6. It contains four modules, namely A, B, C, and D. Module A performs the steady-state flow field calculation, while module D performs the unsteady calculation. The water models of the inlet channel, impeller, guide vane, and outlet channel are input to modules A and D, and the steady and unsteady forces on the flow passage parts are obtained by solving the RANS equations together with the turbulence model. The 3D model of the rotor system is input to modules B and C. The results of module A are applied to the blade and hub surfaces of the impeller in module B; the static displacement distribution of the rotor system under the steady-state flow field is then obtained by solving Eq. (1) with F(t) equal to the constant force computed by module A and with the inertia and damping terms dropped (M = C = 0). The results of module D are applied step by step to the blade and hub surfaces of the impeller in module C, and the transient vibration of the rotor system is obtained by solving Eq. (1). The boundary conditions of the rotor model in modules B and C are set separately, as shown in Figure 7; they are essentially the same, the only difference being that the load applied in module C is unsteady while that in module B is steady.
Static Response of Impeller Under Steady Flow Field

The simulation results of modules A and B are shown in Figure 8. The maximum steady water pressure acting on the impeller is concentrated at the blade inlet, followed by the area close to the rim on the pressure surface; a negative pressure area appears on the suction surface. The maximum deformation occurs on the blades, consistent with the pressure distribution, followed by the hub, the shaft, and the motor rotor.

Verification of Rotor-Stator Interaction

The time- and frequency-domain diagrams of the unsteady flow forces on the three blades, calculated by module D, are shown in Figure 9. The time-domain waveform of the force on each individual blade shows no obvious periodicity, but the periodicity of the resultant force on the three blades is obvious. The force spectra of the three blades are completely consistent: the dominant frequency is twice the shaft frequency, denoted f_1, and the secondary frequencies include f_3, f_4, and f_5. The spectrum of the resultant force on the three blades, however, contains only f_3, f_6, and f_9, consistent with the conclusion of Eq. (6).

Time-Frequency Domain Analysis of Impeller Force and Radial Centroid Trajectory

The radial force and the radial centroid trajectory of the impeller are shown in Figure 10. The radial forces on the blades and on the hub are basically the same in magnitude, though their shapes differ, and the rotational symmetry of the radial force on the blades is more obvious. The radial centroid trajectory shows clear triangular characteristics. Analysis of the simulated hydraulic force on the impeller indicates that the character of the blade force is caused by the rotor-stator interaction between the blades, the inlet channel, and the guide vanes. The hub is rotationally symmetric, and the hydraulic force is evenly distributed over most of its wetted wall; however, the force distribution at the junction of blade and hub follows that of the blade, which is the main reason why the radial forces of blade and hub are basically the same. The triangular shape of the radial centroid trajectory indicates that the component at three times the rotation frequency accounts for the main proportion of the radial displacement spectrum.

The time- and frequency-domain diagrams of the impeller radial force and radial centroid displacement are shown in Figure 11. The dominant frequencies of the radial force and of the radial centroid displacement are both f_1, and the other frequency components are negligible. The amplitude and phase of the f_1 component differ between the radial force and the radial displacement, consistent with the conclusions of Eqs. (2) and (3). The force spectra of the blades in the X and Y directions are essentially the same, as are the hub force spectra and the displacement spectra of the axis center point, indicating that the radial force and displacement are balanced.
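The cancellation behind Eq. (6) and Figure 9 can be reproduced with a toy signal model. The following minimal Python sketch builds three identical periodic blade forces, phase-shifted by 2π/3, and shows that only the 3x, 6x, and 9x shaft-frequency components survive in the resultant; the harmonic amplitudes are illustrative, not simulation output.

```python
import numpy as np

# Minimal sketch: the force on each of the 3 blades contains several
# shaft-frequency harmonics, but in the resultant of the three signals
# (phase-shifted by 2*pi/3) only the blade passing frequency (3x shaft
# frequency) and its harmonics survive. Amplitudes are illustrative.
f0 = 100.0 / 60.0                            # shaft frequency (Hz)
dt = 0.005
t = np.arange(0, 3.6, dt)                    # 6 revolutions, 720 samples
harmonics = {1: 1.0, 3: 0.6, 4: 0.4, 5: 0.8, 6: 0.3, 9: 0.1}

def blade_force(shift):
    return sum(a * np.cos(2 * np.pi * h * f0 * t + h * shift)
               for h, a in harmonics.items())

total = sum(blade_force(j * 2 * np.pi / 3) for j in range(3))
spec = np.abs(np.fft.rfft(total)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=dt)
for h, a in harmonics.items():
    amp = 2 * spec[np.argmin(np.abs(freqs - h * f0))]
    print(f"{h}x shaft frequency: resultant amplitude ~ {amp:.2f}")
# Only the 3x, 6x, 9x components remain non-zero (f_3, f_6, f_9), in
# line with Eq. (6); the 1x, 4x, 5x terms cancel across the blades.
```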
According to the time-domain response in Figure 11, the radial force on the axial flow pump blades shows clear periodicity over the last four rotation cycles of the total calculation time, indicating that the numerical results have converged. To further verify the accuracy of the numerical simulation, a field test was carried out, and the simulated and measured external characteristics of the axial flow pump unit and the pressure pulsation at the measuring points at the impeller inlet are compared, as described below.

Test Platform

The axial flow pump unit introduced in this paper has been applied in a pump station of a water conservancy project and has completed trial operation. In the field test, the pressure pulsation signals at two measuring points at the impeller inlet are detected by Keller 23SY pressure sensors and transmitted to a TN-8000 data collector. The head of the pump unit is obtained by measuring the water level difference between the inside and outside of the pump station, and the flow is measured by a Doppler flow meter. The shaft power is calculated from the voltage, current, and power factor data collected by the sensors, and the efficiency is calculated from the head, flow, and shaft power. The flow, head, torque, speed, and other sensors in the field test instruments have been verified by a nationally recognized measurement and calibration department in China, and the calibrations are within their validity period.

The uncertainty of the test system can be characterized by the total uncertainty of the efficiency test, calculated as

E_η = ±√((E_η)_s² + (E_η)_r²),   (8)

where E_η is the total uncertainty of the efficiency test, (E_η)_s is the systematic uncertainty, and (E_η)_r is the random uncertainty. (E_η)_s combines the component uncertainties as in Equation (9),

(E_η)_s = ±√(E_Q² + E_H² + E_M² + E_n²),   (9)

where E_Q is the systematic uncertainty of the flow measurement, whose full-range calibration result is ±0.18%; E_H is the systematic uncertainty of the static head measurement, with a full-range calibration result of ±0.015%. The dynamic head can be ignored because of the large cross-sectional area of the pressure measuring sections between the inlet and outlet channels and the small flow velocity; therefore E_H is the systematic error of the device head measurement. E_M is the systematic uncertainty of the torque measurement; the uncertainty of the torque and speed sensor is ±0.24%. E_n is the systematic uncertainty of the speed measurement; when the sampling period of the measurement system is 2 s and the speed is not less than 1000 r/min, this uncertainty is ±0.05%. From Eq. (9), the systematic uncertainty is ±0.2612%.

Under the working condition near the stable design head, the random uncertainty of the performance test is estimated from the dispersion of the efficiency measurements, using Equation (10):

(E_η)_r = ±(t_{0.95}(n−1) · S_η̄ / η̄) × 100%,   (10)

where S_η̄ is the standard deviation of the average efficiency, equal to 0.0322%; η̄ is the average efficiency, equal to 59.34%; and t_{0.95}(n−1) is the value of the t distribution for a 0.95 confidence level and n−1 degrees of freedom, equal to 2.26. The random uncertainty is ±0.123%. From Eq. (8), the total uncertainty of the efficiency test is ±0.2163%, which is better than the requirements on total uncertainty (Standard, 2007).
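A minimal Python sketch of this uncertainty bookkeeping follows. The component values are the calibration figures quoted in the text; the root-sum-square combinations are the standard forms assumed here for Eqs. (8) and (9), since the equation bodies were lost in extraction. Note that the plain RSS of the four quoted components differs slightly from the ±0.2612% stated above, so the original may combine or weight them differently; Eq. (10) reproduces the quoted ±0.123% exactly.

```python
import math

# Minimal sketch of the uncertainty bookkeeping in Eqs. (8)-(10), using
# the calibration figures quoted in the text. The RSS forms for Eqs. (8)
# and (9) are assumptions (standard practice), not taken from the paper.
E_Q, E_H, E_M, E_n = 0.18, 0.015, 0.24, 0.05  # flow, head, torque, speed (%)
E_sys = math.sqrt(E_Q**2 + E_H**2 + E_M**2 + E_n**2)      # Eq. (9)

S_eta_bar = 0.0322   # standard deviation of the mean efficiency (%)
eta_bar = 59.34      # average efficiency (%)
t_95 = 2.26          # t value at 0.95 confidence, n - 1 degrees of freedom
E_rand = t_95 * S_eta_bar / eta_bar * 100.0               # Eq. (10), ~0.123 %

E_total = math.sqrt(E_sys**2 + E_rand**2)                 # Eq. (8)
print(f"systematic {E_sys:.4f} %  random {E_rand:.4f} %  total {E_total:.4f} %")
```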
Comparison of External Characteristics

The comparison of the external characteristics between test and simulation is shown in Figure 12. The test values of head and efficiency match the simulated values well in the range from small flow to design flow. In the large flow region, the maximum head difference between test and simulation is 0.4 m, and the maximum efficiency difference is 28%. The head and efficiency curves therefore differ somewhat, but the large differences are concentrated in the large flow region, while the agreement is high in the small flow and design flow regions. According to the efficiency calculation formula, the efficiency difference between simulation and measurement is mainly caused by the head being over-predicted by the simulation at large flow. The pump unit model used in the simulation is greatly simplified compared with the actual unit, for instance in the surface roughness of the flow passage components, the installation process, and the structural composition of the unit; the hydraulic losses in the simulation are therefore small, and the calculated head is too large. However, under large flow the impeller is more fully loaded by the fluid, so this difference in external characteristics does not affect the characteristics of the radial centroid trajectory caused by hydraulic excitation at large flow. This paper mainly studies the variation of the hydraulically excited radial centroid trajectory under the flow condition corresponding to the maximum efficiency (i.e., 1.2 times the design flow), where the simulation error is relatively small; the comparison between the measured and simulated pressure pulsation at the measuring points beside the impeller inlet under this flow condition further supports the reliability of the simulation.

Comparison of Pressure Fluctuation

The time-frequency domain diagrams of the pressure fluctuation at the measuring points beside the impeller inlet obtained from simulation and test are shown in Figures 13 and 14, respectively. Comparing the two figures, the pressure signals at the two measuring points are almost the same in form, but the amplitudes differ: the dominant frequency of both is f_3 (the blade passing frequency), with an amplitude of about 3.5 kPa in the simulation and about 2.0 kPa in the field test. The comparisons of external characteristics and pressure fluctuation show that the numerical simulation reflects the actual operation of the axial flow pump unit with acceptable accuracy.

CONCLUSION

In this paper, the hydraulic excitation characteristics of an axial flow pump unit are studied through theoretical analysis, numerical simulation, and field test. The theoretical analysis shows that the frequencies of the impeller displacement correspond one to one to those of the force acting on the impeller, but the amplitudes and phases, which depend on the mass, damping, and stiffness coefficients and on the frequency, differ between displacement and force. The force on a single blade includes the rotor-stator interaction frequency components between blade and guide vane; in the resultant force on the three blades, however, only the blade passing frequency and its harmonics remain.
Through numerical simulation, it is found that the pressure distribution on the impeller corresponds to the vibration deformation distribution, and that the motor rotor and shaft also deform under the hydraulic load on the impeller. The frequency content of the force on the blades is the same as that of the transient centroid trajectory displacement, consistent with the conclusions of the theoretical analysis. The external characteristics of the axial flow pump unit obtained from the field test and the pressure pulsation at the measuring point beside the impeller inlet are basically consistent with the numerical simulation results, showing that the numerical results are accurate and reliable.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS

XD completed the conception and writing of the whole manuscript. FT provided all the parameters of the axial flow pump and gave guidance on the research method. HX completed the numerical simulation. JC provided guidance on the numerical simulation analysis methods. QL helped determine the overall framework of the manuscript. WD helped with the theoretical analysis. RZ helped with the field tests.
DiRe Committee: Diversity and Representation Constraints in Multiwinner Elections

The study of fairness in multiwinner elections focuses on settings where candidates have attributes. However, voters may also be divided into predefined populations under one or more attributes (e.g., "California" and "Illinois" populations under the "state" attribute), which may be the same as or different from the candidate attributes. Models that focus on candidate attributes alone may systematically under-represent smaller voter populations. Hence, we develop a model, DiRe Committee Winner Determination (DRCWD), which delineates candidate and voter attributes to select a committee by specifying diversity and representation constraints and a voting rule. We analyze its computational complexity, inapproximability, and parameterized complexity. We develop a heuristic-based algorithm, which finds the winning DiRe committee in under two minutes on 63% of the instances of synthetic datasets and on 100% of the instances of real-world datasets. We present an empirical analysis of the running time, feasibility, and utility trade-off. Overall, DRCWD shows that a study of multiwinner elections should consider both of its actors, namely candidates and voters, as candidate-specific models can unknowingly harm voter populations, and vice versa. Additionally, even when the attributes of candidates and voters coincide, it is important to treat them separately, as diversity does not imply representation and vice versa. This is to say that having a female candidate on the committee, for example, is different from having a candidate on the committee who is preferred by the female voters, and who themselves may or may not be female.

Introduction

The problem of selecting a committee from a given set of candidates arises in multiple domains, ranging from political science (e.g., selecting the parliament of a country) to recommendation systems (e.g., selecting the movies to show on Netflix). Formally, given a set C of m candidates (politicians and movies, respectively), a set V of n voters (citizens and Netflix subscribers, respectively) give their ordered preferences over all candidates c ∈ C to select a committee of size k. These preferences can be stated directly, as in parliamentary elections, or they can be derived from input, such as when Netflix subscribers' viewing behavior is used to infer their preferences. In this paper, we focus on selecting a k-sized (fixed-size) committee using direct, ordered, and complete preferences.

Which committee is selected depends on the committee selection rule, also called a multiwinner voting rule. Examples of commonly used families of rules when a complete ballot of each voter is given are Condorcet principle-based rules [1], which select a committee that is at least as strong as every other committee in a pairwise majority comparison; approval-based voting rules [1,2,3], where each voter submits an approval ballot approving a subset of candidates; and ordinal preference ballot-based voting rules like k-Borda and β-Chamberlin-Courant (β-CC) [4,5], which are analogous to single-winner rules. We note that this version of the CC rule is different from the Chamberlin-Courant approval voting rule used in the context of approval elections [6,7]. We refer readers to Section 2.2 of [5] for further details on the commonly used families of multiwinner voting rules. In this paper, we focus on ordinal preference-based rules that are analogous to single-winner rules.
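As a concrete illustration of winner determination with ordinal ballots, the following minimal Python sketch brute-forces the highest-scoring committee of size k under the k-Borda rule; the candidates and ballots are hypothetical, not taken from the paper, and brute force is used purely as a specification.

```python
from itertools import combinations

# Minimal sketch: brute-force k-Borda winner determination on a toy
# profile. Each voter ranks all m candidates; a candidate at position i
# (counting from 1) earns m - i points from that voter.
candidates = ["c1", "c2", "c3", "c4"]
ballots = [["c1", "c2", "c3", "c4"],     # hypothetical ordinal ballots
           ["c2", "c1", "c4", "c3"],
           ["c3", "c1", "c2", "c4"]]
k, m = 2, len(candidates)

borda = {c: sum(m - 1 - b.index(c) for b in ballots) for c in candidates}
best = max(combinations(candidates, k),
           key=lambda W: sum(borda[c] for c in W))
print(best, sum(borda[c] for c in best))   # ('c1', 'c2') with score 13
```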
Recent work on fairness in multiwinner elections shows that these rules can create or propagate biases by systematically harming candidates from historically disadvantaged groups [8,9]. Hence, diversity constraints on candidate attributes were introduced to overcome this problem. However, voters may also be divided into predefined populations under one or more attributes, which may differ from the candidate attributes. For example, the voters in Figure 1b are divided into "California" and "Illinois" populations under the "state" attribute. Models that focus on candidate attributes alone may systematically under-represent smaller voter populations. Suppose that we impose a diversity constraint requiring the committee to have at least one candidate of each gender, and a representation constraint requiring the committee to have at least one candidate from the winning committee of each state. Observe that the highest-scoring committee that is also representative consists of {c_1, c_2} (score = 17), but this committee is not diverse, since both candidates are male. Further, the highest-scoring diverse committee, {c_1, c_3} (score = 13), is not representative because it does not include any winning candidate from Illinois, the smaller state. The highest-scoring diverse and representative committee is {c_2, c_3} (score = 12). This example illustrates the inevitable utility cost of enforcing additional constraints. Note that, in contrast to prior work in computational social choice, we incorporate voter attributes that are separate from candidate attributes. Also, our work differs from the notion of "proportional representation" [3,10,11], where the number of candidates selected into the committee from each group is proportional to the number of voters preferring that group, and from its variants such as "fair" representation [12]. All these approaches dynamically divide the voters based on the cohesiveness of their preferences. Another related work, multi-attribute proportional representation [13], couples candidate and voter attributes. An important observation we make here is that, even if the attributes of the candidates and of the voters coincide, it may still be important to treat them separately in committee selection. This is because diversity does not imply representation, and vice versa: having a female candidate on the committee is different from having a candidate who is preferred by the female voters, and who may or may not be female herself.

Fairness in Ranking and Set Selection. There is a growing understanding in theoretical computer science of the possible presence of algorithmic bias in multiple domains [14,15,16,17,18,19], especially in variants of the set selection problem [20]. The study of fairness in ranking and set selection, closely related to the study of multiwinner elections, uses constraints in algorithms to mitigate bias against historically disadvantaged groups. Stoyanovich et al. [20] use constraints in the streaming set selection problem, and Yang and Stoyanovich [21] and Yang et al. [22] use constraints for ranked outputs. Kuhlman and Rundensteiner [23] focus on fair rank aggregation, and Bei et al. [24] use proportional fairness constraints. Our work adds to the research on the use of constraints to mitigate algorithmic bias.

Fairness in Participatory Budgeting. Multiwinner elections are a special case of participatory budgeting, and fairness in the latter domain has also received particular attention. For example, projects (equivalent to candidates) are divided into groups, and for fairness one considers lower and upper bounds on the utility achieved and on the cost of the projects used in every group [25].
Fluschnik et al. [26] aim to achieve fairness among projects through their objective function. Next, Hershkowitz et al. [27] have studied fairness in terms of the utility received by the districts (equivalent to voters), Peters et al. [28] define axioms for proportional representation of voters, and Lackner et al. [29] define fairness in long-term participatory budgeting from the voters' perspective. However, none of these works simultaneously considers fairness from the perspective of both the projects and the districts.

Two-sided Fairness. The need for fairness from the perspective of different stakeholders of a system is well studied. For instance, Patro et al. [30], Chakraborty et al. [31], and Suhr et al. [32] consider two-sided fairness in two-sided platforms, and Abdollahpouri et al. [34] and Burke et al. [35] share desirable fairness properties for different categories of multi-sided platforms. However, this line of work focuses on multi-sided fairness in multi-sided platforms, which is technically different from an election. An election, roughly speaking, can be considered a "one-sided platform" consisting of more than one stakeholder, as during an election candidates do not make decisions that affect the voters. Hence, δ-sided fairness in a one-sided platform is also needed, where δ is the number of distinct user groups on the platform. More generally, δ-sided fairness in an η-sided platform warrants an analysis of δ · η perspectives of fairness, i.e., the effect of fairness on each of the δ stakeholders for each of the η fairness metrics being used. In elections, δ = 2 (candidates and voters) and η = 1 (voting). Additionally, Aziz [36] summarized a line of work on diversity concerns in two-sided matching that focused on diversity with respect to one stakeholder only.

Unconstrained Multiwinner Elections and Proportional Representation. The complexity of unconstrained multiwinner elections has received attention [5]. Selecting a committee using the Chamberlin-Courant (CC) [37] rule is NP-hard [38], and approximation algorithms have resulted in the best known ratio of 1 − 1/e [39,40]. Yang and Wang [41] studied its parameterized complexity. Another commonly studied rule, Monroe [11], is also NP-hard [42,4]. Sonar et al. [43] showed that even checking whether a given committee is optimal under these two rules is hard. Finally, the hardness of problems involving restricted voter preferences and committee selection rules has been studied [44,45], as has proportional representation in dynamic ranking [46].

Constrained Multiwinner Elections. The complexity of using diversity constraints in elections has also received particular attention. Goalbase score functions, which specify an arbitrary set of logic constraints and let the score capture the number of constraints satisfied, could be used to ensure diversity [47]. Using diversity constraints over multiple attributes in single-winner elections is NP-hard [13]. Also, using diversity constraints over multiple attributes in multiwinner elections is NP-hard, which has led to approximation algorithms and matching hardness-of-approximation results by Bredereck et al. [8] and Celis et al. [9]. Finally, due to the hardness of using diversity constraints over multiple attributes in approval-based multiwinner elections [48], these have been formalized as integer linear programs (ILP) [49]. In contrast, Skowron et al. [39] showed that ILP-based algorithms can fail in practice when using ranked-ballot proportional representation rules such as the Chamberlin-Courant and Monroe rules, even when there are no constraints.
Overall, the work by Bredereck et al. [8], Celis et al. [9], and Lang and Skowron [13] is closest to ours. However, we differ in that we: (i) consider elections with predefined voter populations under one or more attributes, (ii) delineate voter and candidate attributes even when they coincide, and (iii) consider representation in addition to diversity constraints. No previous work, to the best of our knowledge, has considered fairness from the perspective of voter attributes or has delineated candidate and voter attributes even when they coincide.

Preliminaries and Notation

Multiwinner Elections. Let E = (C, V) be an election consisting of a candidate set C = {c_1, ..., c_m} and a voter set V = {v_1, ..., v_n}, where each voter v ∈ V has a preference list ≻_v over the m candidates, ranking all of the candidates from the most to the least desired. pos_v(c) denotes the position of candidate c ∈ C in the ranking of voter v ∈ V, where the most preferred candidate has position 1 and the least preferred has position m. Given an election E = (C, V) and a positive integer k ∈ [m] (for k ∈ N, we write [k] = {1, ..., k}), a multiwinner election selects a k-sized subset of candidates (a committee) W using a multiwinner voting rule f (discussed later) such that the score f(W) of the committee is the highest. Formally, given E = (C, V) and k, f outputs the required committee W of exactly k candidates with the highest score. We assume ties are broken using a pre-decided priority order over all candidates.

Candidate Groups. The candidates have µ attributes, A_1, ..., A_µ, where attribute A_i, i ∈ [µ], partitions the candidates into g_i groups. For example, the candidates in Figure 1a have one attribute, gender (µ = 1), with two disjoint groups, male and female (g_1 = 2). Overall, the set G of all such arbitrary and potentially non-disjoint groups is A_(1,1), ..., A_(µ,g_µ) ⊆ C. Note that the number of groups a candidate belongs to is equal to the number of attributes µ.

Voter Populations. Analogously, the voters are divided into predefined populations P ∈ P under π attributes, so the number of populations a voter belongs to is equal to the number of attributes π. Additionally, we are given W_P, the winning committee of each population P ∈ P. We note that a fine-grained accounting of representation is not possible in our model: when a committee selection rule such as the Chamberlin-Courant rule is used to determine each population's winning committee W_P, a complete ranking of each population's collective preferences is not available. Thus, we have designed our model to consider only each population's winning committee W_P.

Multiwinner Voting Rules. There are multiple types of multiwinner voting rules, also called committee selection rules. In this paper, we focus on committee selection rules f that are based on single-winner positional voting rules, and are monotone (for all A ⊆ B ⊆ C, f(A) ≤ f(B)) and submodular (for all A ⊆ B ⊆ C and c ∈ C \ B, f(A ∪ {c}) − f(A) ≥ f(B ∪ {c}) − f(B)) [8,9].

Definition 1. Chamberlin-Courant (CC) rule: The CC rule [37] associates each voter with the candidate in the committee who is their most preferred candidate in that committee. The score of a committee is the sum of the scores given by the voters to their associated candidates. Specifically, β-CC uses the Borda positional scoring rule, which assigns a score of m − i to a voter's i-th ranked candidate when that candidate is the voter's highest-ranked candidate in the committee.
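A minimal Python sketch of the β-CC committee score from Definition 1 follows; the toy profile is hypothetical.

```python
# Minimal sketch of the beta-CC score: each voter is credited with the
# Borda score (m - i for position i, counting from 1) of their
# highest-ranked committee member. The profile below is hypothetical.
def beta_cc_score(committee, votes, m):
    W = set(committee)
    total = 0
    for ranking in votes:                    # ranking: best to worst
        best_pos = min(i for i, c in enumerate(ranking, start=1) if c in W)
        total += m - best_pos                # m - pos_v(c) Borda credit
    return total

votes = [["a", "b", "c", "d"], ["d", "c", "b", "a"], ["b", "a", "d", "c"]]
print(beta_cc_score(["a", "d"], votes, m=4))   # 3 + 3 + 2 = 8
```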
Definition 2. Monroe rule: The Monroe rule [11] dynamically divides the n voters into π populations based on the cohesiveness of their preferences, where π = k (assuming k divides n). Then each sub-population's most preferred candidate is selected into the k-sized committee. Formally, for each population P ∈ P, select the candidate c that has the highest score for that sub-population, max_{c∈C} f_P(c). In other words, each candidate in the committee is represented by an equal number of voters.

A special case of submodular functions are separable functions, which calculate the score of a committee as the sum of the scores of the individual candidates in it. Formally, f is separable if it is submodular and f(W) = Σ_{c∈W} f(c) [8]. Monotone and separable selection rules are natural and are considered good when the goal of an election is to shortlist a set of individually excellent candidates [5]:

Definition 3. k-Borda rule: The k-Borda rule outputs committees of k candidates with the highest Borda scores.

DiRe Committee Model

In this section, we formally define a model to select a diverse and representative committee, namely a DiRe committee, and show its generality.

Definition 4. Unconstrained Committee Winner Determination (UCWD): We are given a set C of m candidates, a set V of n voters such that each voter v ∈ V has a preference list ≻_v over the m candidates, a committee selection rule f, and a committee size k ∈ [m]. Let W denote the family of all size-k committees. The goal of UCWD is to select a committee W ∈ W that maximizes f(W).

We now discuss the diversity and representation constraints. The lowest possible value these constraints can take is 1, which replicates real-world scenarios. For instance, the United Nations charter guarantees at least one representative to each member country in the United Nations General Assembly, independent of the country's population. Similarly, each state of the United States of America is guaranteed at least one representative in the US House of Representatives. Hence, from a fairness perspective, each candidate group and each voter population deserves at least one candidate in the committee. Theoretically, all results in this paper hold even if the lowest possible value the constraints can take is 0.

Diversity Constraints, denoted l^D_G ∈ [1, min(k, |G|)] for each candidate group G ∈ G, enforce that at least l^D_G candidates from group G are in the committee W. Formally, for all G ∈ G, |G ∩ W| ≥ l^D_G. We note that we do not propose to use upper bounds, as they induce a quota system, which is not desirable from a social choice perspective.

Representation Constraints, denoted l^R_P ∈ [1, k] for each voter population P ∈ P, enforce that at least l^R_P candidates from population P's committee W_P are in the committee W. Formally, for all P ∈ P, |W_P ∩ W| ≥ l^R_P. We again do not propose to use upper bounds, as they induce the undesirable quota system.

Definition 5. (µ, π)-DiRe Committee Feasibility ((µ, π)-DRCF): We are given an instance of an election E = (C, V), a committee size k ∈ [m], a set of candidate groups G over µ attributes with their diversity constraints l^D_G for all G ∈ G, and a set of voter populations P over π attributes with their representation constraints l^R_P and winning committees W_P for all P ∈ P. Let W denote the family of all size-k committees.
The goal of (µ, π)-DRCF is to select committees W ∈ W that satisfy the diversity and representation constraints, i.e., |G ∩ W| ≥ l^D_G for all G ∈ G and |W_P ∩ W| ≥ l^R_P for all P ∈ P. All committees that satisfy the constraints are called DiRe committees. If a committee selection rule f is also an input to the feasibility problem, we get the (µ, π, f)-DRCWD problem:

Definition 6. (µ, π, f)-DiRe Committee Winner Determination ((µ, π, f)-DRCWD): Given an instance of (µ, π)-DRCF and a committee selection rule f, let W denote the family of all size-k committees; the goal of (µ, π, f)-DRCWD is to select a committee W ∈ W that maximizes f(W) among all DiRe committees.

(µ, π, f)-DRCWD and Related Models

Our model provides the flexibility to specify the diversity and representation constraints and to select the voting rule. Thus, in this section we express the diverse committee problem [8,9] and the apportionment problem [10,50] as special cases of (µ, π, f)-DRCWD.

(µ, 0, f)-DRCWD and the Diverse Committee Problem. We define the diverse committee problem [8,9] in our model: we are given an instance of UCWD together with a set of candidate groups G and the corresponding diversity constraints, a lower bound l^D_G and an upper bound u^D_G, for all G ∈ G. Let W denote the family of all size-k committees. The goal of the diverse committee problem is to select a committee W ∈ W that maximizes f(W) among the committees that satisfy the constraints. It is clear that (µ, 0, f)-DRCWD, i.e., the problem without any voter attributes, is equivalent to the diverse committee problem. As we do not use upper bounds, our model generalizes the diverse committee model when the upper bound u^D_G equals the size of group G for all G ∈ G and the minimum value of the lower bound is 1 for all G ∈ G. This is in line with the approach used in Theorem 6 of Celis et al. [9]. Formally, u^D_G = |G| and l^D_G ≥ 1 for all G ∈ G.

(0, 1, f)-DRCWD and the Apportionment Problem. We define the apportionment problem [10] in our model: we are given an instance of UCWD together with a set of disjoint voter populations P over one attribute and winning committees W_P for all P ∈ P. Let W denote the family of all size-k committees. The goal of the apportionment problem is to select a committee W ∈ W that maximizes f(W) among all committees that satisfy the lower quota, i.e., ∀P ∈ P, |W_P ∩ W| ≥ ⌊(|P|/n) · k⌋. It is easy to see that (0, 1, f)-DRCWD, which has zero candidate attributes and one voter attribute, is the same as the apportionment problem if we set the representation constraint of each population equal to the lower quota of the apportionment problem. Formally, ∀P ∈ P, l^R_P = ⌊(|P|/n) · k⌋, realistically assuming that ∀P ∈ P, |P|/n ≥ 1/k. Finally, we note that our model can be adapted to accept approval votes as input, and thus, if each population is completely cohesive within itself, the representation constraints can be set so as to formulate known representation notions such as proportional justified representation [3] and extended justified representation [6] as (µ, π, f)-DRCWD, though such reformulations may not be as straightforward as the ones discussed.
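To make the definitions above concrete, the following minimal Python sketch brute-forces (µ, π, f)-DRCWD on a hypothetical toy instance, with f = k-Borda and all constraints set to 1. The profile, groups, and population committees are illustrative only; brute force is exponential in k and serves purely as a specification, not as the paper's algorithm.

```python
from itertools import combinations

# Minimal brute-force sketch of (mu, pi, f)-DRCWD on hypothetical data,
# with f = k-Borda. Groups encode diversity constraints l^D_G = 1 and the
# population committees W_P encode representation constraints l^R_P = 1.
candidates = ["c1", "c2", "c3", "c4"]
ballots = [["c1", "c2", "c3", "c4"],
           ["c2", "c1", "c4", "c3"],
           ["c3", "c4", "c1", "c2"]]
k, m = 2, len(candidates)
groups = {"male": {"c1", "c2"}, "female": {"c3", "c4"}}      # l^D_G = 1
pop_committees = {"CA": {"c1", "c2"}, "IL": {"c3", "c4"}}    # l^R_P = 1

borda = {c: sum(m - 1 - b.index(c) for b in ballots) for c in candidates}

def is_dire(W):
    W = set(W)
    return (all(len(G & W) >= 1 for G in groups.values()) and
            all(len(WP & W) >= 1 for WP in pop_committees.values()))

dire = [W for W in combinations(candidates, k) if is_dire(W)]
best = max(dire, key=lambda W: sum(borda[c] for c in W))
print("winning DiRe committee:", best)      # ('c1', 'c3') on this toy data
```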
Complexity Results

In this section, we give a classification of the computational complexity of the (µ, π, f)-DRCWD problem under different settings. Finding a committee using a submodular scoring function like the utilitarian version of the Chamberlin-Courant rule is known to be NP-hard [38], and selecting a diverse committee when a candidate belongs to three groups is also known to be NP-hard [8,9]. However, the proofs of these hardness results are fragmented over several papers and use reductions from several well-known NP-hard problems. For instance, the hardness proof for the Chamberlin-Courant rule uses a reduction from Exact 3-Cover [38], while the hardness proofs for computing a diverse committee use reductions from 3-Dimensional Matching [8] and 3-Hypergraph Matching [9]. Moreover, we are the first to introduce representation constraints, so the hardness of using them is unknown. Hence, in this section we provide a complete classification of the (µ, π, f)-DRCWD problem by giving reductions from a single well-known NP-hard problem, namely the vertex cover problem, inspired by the similar approach used in [51]. Finally, we note that as the following classification holds for every integer µ ≥ 0 (specifically, every whole number, as µ cannot be negative) and every integer π ≥ 0, our reductions are designed for the same range of values.

Theorem 1. Let µ, π ∈ Z with µ, π ≥ 0 and let f be a committee selection rule; then (µ, π, f)-DRCWD is NP-hard.

Corollary 1. Classification of the complexity of (µ, π, f)-DRCWD.

Tractable Case

Theorem 2. [Theorem 21, Corollary 22 in the full version of Celis et al. [9]] The diverse committee feasibility problem can be solved in polynomial time when µ = 2.

Proof. When π = 0, there are no voter attributes or representation constraints, and hence the (µ, 0, f)-DRCWD problem is equivalent to the diverse committee problem. Moreover, when f is a monotone, separable function, the complexity of (µ, 0, f)-DRCWD is equivalent to the complexity of (µ, 0)-DRCF. Thus, the polynomial-time result for the diverse committee feasibility problem when the number of groups a candidate belongs to is two, which in our model means that the number of candidate attributes equals two (µ = 2), holds for our setting (Theorem 9 [8]; Corollary 22 (full version) [9]). More specifically, when µ = 2, we use the algorithm given in the proof of Theorem 21 by Celis et al. [9] and set the upper bound equal to the group size, i.e., u^D_G = |G| for all G ∈ G. Next, when µ = 1, a straightforward algorithm that selects the top-l^D_G-scoring candidates from each group G ∈ G yields a DiRe committee satisfying the diversity constraints |G ∩ W| ≥ l^D_G.

Hardness Results

NP-hard problem used. As discussed earlier, the NP-hardness of (µ, π, f)-DRCWD under representation constraints is unknown. Moreover, the known hardness results for submodular but not separable scoring functions and for diverse committee selection were established via reductions from different NP-hard problems. We establish the NP-hardness of (µ, π, f)-DRCWD for the various settings of µ, π, and f via reductions from a single well-known NP-hard problem, namely the vertex cover problem on 3-regular, 2-uniform hypergraphs [52,53]: given a hypergraph H = (X, E) with vertex set X = {x_1, ..., x_m} and edge set E = {e_1, ..., e_n}, where each e ∈ E connects two vertices in X (i.e., e corresponds to a 2-element subset of X), a vertex cover of H is a subset S of vertices such that each edge e contains at least one vertex from S (i.e., ∀e ∈ E, e ∩ S ≠ ∅). The vertex cover problem is to find a minimum vertex cover of H.
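The reductions below all hinge on the correspondence between covering the edges of a 3-regular graph and hitting size-2 candidate groups. The following minimal Python sketch checks this correspondence by brute force on K_4, the smallest 3-regular graph; the instance is a hypothetical example, not one used in the proofs, and the two checks are deliberately the same predicate, which is exactly the point of the correspondence.

```python
from itertools import combinations

# Minimal sketch: on K4 (3-regular, 2-uniform), a set S of vertices is a
# vertex cover iff, viewing each edge as a candidate group of size 2, the
# "committee" S satisfies |G ∩ S| >= 1 for every group G (l^D_G = 1).
X = [0, 1, 2, 3]
E = [frozenset(e) for e in combinations(X, 2)]   # the 6 edges of K4

def is_cover(S):
    return all(e & S for e in E)                 # every edge is hit

def satisfies_diversity(S):
    return all(len(e & S) >= 1 for e in E)       # one member per edge-group

for size in range(len(X) + 1):
    for S in map(set, combinations(X, size)):
        assert is_cover(S) == satisfies_diversity(S)

min_cover = next(S for size in range(len(X) + 1)
                 for S in map(set, combinations(X, size)) if is_cover(S))
print("minimum vertex cover of K4:", min_cover)  # any 3 vertices suffice
```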
Diversity Constraints. When π = 0, (µ, π, f)-DRCWD is related to the diverse committee selection problem. However, the hardness of (µ, π, f)-DRCWD when µ ≥ 3 does not follow from the hardness of the diverse committee selection problem when the number of groups a candidate can belong to is at least 3 [8,9], as the reductions in those papers are specifically for the case where this number is exactly 3. More specifically, Theorem 9 of Bredereck et al. [8] uses a reduction from 3-Dimensional Matching that only holds for instances where the number of groups a candidate can belong to is exactly 3. Also, they set the lower and upper bounds to 1, which is mathematically different from our setting, where we only allow lower bounds. On the other hand, Theorem 6 ("NP-hardness of feasibility: ∆ ≥ 3") of Celis et al. [9] uses two reductions. The first, from ∆-Hypergraph Matching, is indeed for the case where the number of groups a candidate can belong to is at least 3, but it is limited to instances where the lower bound is 0 and the upper bound is 1, which is a trivial case in our setting, as we only use lower bounds and do not allow upper bounds. Moreover, in principle, the reduction from ∆-Hypergraph Matching uses a different problem for each ∆, since for ∆ ≠ ∆′ the ∆-Hypergraph Matching and ∆′-Hypergraph Matching problems are separate problems. The second reduction, from vertex cover on 3-regular graphs, is for instances where the number of groups a candidate can belong to is exactly 3. Hence, in this section we give a reduction from a single known NP-hard problem, namely the vertex cover problem, such that our result holds for all integers µ ≥ 3, even when l^D_G = 1 for all G ∈ G. Also, the reductions are designed to conform to the following real-world stipulations: (i) each candidate attribute A_i, i ∈ [µ], partitions the m candidates into two or more groups, and (ii) either no two attributes partition the candidates in the same way, or, if they do, the lower bounds across the groups of the two attributes are not the same. For stipulation (ii), note that if two attributes partition the candidates in the same way and the lower bounds across their groups are also the same, then they are mathematically identical attributes that can be combined into one attribute. The next two theorems establish the statement of Corollary 1(3).

Theorem 3. Let µ ≥ 3 be an odd integer, let π = 0, and let f be a monotone, separable function; then (µ, π, f)-DRCWD is NP-hard, even when l^D_G = 1 for all G ∈ G.

Proof. We reduce an instance of the vertex cover (VC) problem to an instance of (µ, π, f)-DRCWD. We have one candidate c_i for each vertex x_i ∈ X, and m · (2µ² − 7µ + 3) dummy candidates d ∈ D, where m is the number of vertices in the graph H and µ, the number of candidate attributes, is a positive odd integer. Formally, we set A = {c_1, ..., c_m} and let D denote the set of dummy candidates, so the candidate set is C = A ∪ D. We set the target committee size to k + mµ² − 3mµ. Next, we have µ candidate attributes. Each edge e ∈ E connecting vertices x_i and x_j corresponds to a candidate group G ∈ G containing the two candidates c_i and c_j. As our reduction starts from a 3-regular graph, each vertex is connected to three edges; this corresponds to each candidate c ∈ A having three attributes and thus belonging to three groups. Next, for each of the m candidates c ∈ A, we have µ − 3 blocks of dummy candidates, and each block contains 2µ − 1 dummy candidates d ∈ D. Thus, we have a total of m · (µ − 3) · (2µ − 1) = m · (2µ² − 7µ + 3) dummy candidates.
Next, each block of candidates contains three sets: Set T_1 contains one candidate, and Sets T_2 and T_3 contain µ − 1 candidates each. Each of the µ − 3 blocks of each candidate c ∈ A is constructed as follows: each candidate in the block has µ attributes, and the candidates are grouped as follows:

• The dummy candidate d^{T_1}_1 ∈ T_1 is in the same group as candidate c ∈ A. It is also in µ − 1 further groups, one with each of the µ − 1 dummy candidates d^{T_2}_i ∈ T_2. Thus, the dummy candidate d^{T_1}_1 ∈ T_1 has µ attributes and is part of µ groups.

• Each dummy candidate d^{T_2}_i ∈ T_2 is in the same group as d^{T_1}_1, as described in the previous point. It is also in µ − 1 further groups, one with each of the µ − 1 dummy candidates d^{T_3}_j ∈ T_3. Thus, each dummy candidate d^{T_2}_i ∈ T_2 has µ attributes and is part of µ groups.

• Each dummy candidate d^{T_3}_j ∈ T_3 is grouped with the candidates of T_2 as described in the previous point. Next, note that when µ is an odd number, µ − 1 is an even number, which means Set T_3 has an even number of candidates. We randomly divide the µ − 1 candidates into two partitions. Then we create (µ − 1)/2 groups over one attribute, where each group contains two candidates from Set T_3 such that one candidate is selected from each of the two partitions without replacement. Thus, each pair of groups is mutually disjoint, and each dummy candidate d^{T_3}_j ∈ T_3 is part of exactly one such group, shared with exactly one other dummy candidate d^{T_3}_{j′} ∈ T_3, j ≠ j′. Overall, this construction contributes one attribute and one group to each dummy candidate d^{T_3}_j ∈ T_3. Hence, each dummy candidate d^{T_3}_j ∈ T_3 has µ attributes and is part of µ groups.

As a result of the grouping described above, each candidate c ∈ A also has µ attributes and is part of µ groups. Note that each candidate c ∈ A already had three attributes and was part of three groups due to our reduction from the vertex cover problem on 3-regular graphs; additionally, we added µ − 3 blocks of dummy candidates and grouped candidate c ∈ A with the candidate d^{T_1}_1 ∈ T_1 of each of the µ − 3 blocks. Hence, each candidate c ∈ A has 3 + (µ − 3) = µ attributes and is part of µ groups. We set l^D_G = 1 for all G ∈ G, which corresponds to the requirement that each edge be covered by some chosen vertex.

Finally, we introduce m + m · (2µ² − 7µ + 3) voters. For simplicity, let c_i denote the i-th candidate in the set C. The first voter ranks the candidates by their indices. The second voter improves the rank of each candidate by one position and places the previously top-ranked candidate in the last position; the third voter shifts the ranking by one position again, and so on, so the voters' rankings are exactly the cyclic shifts of the first ranking, with the last voter ranking c_1 last. Finally, there are no voter attributes, so π = 0 and there are no representation constraints (l^R_P = ∅). This completes our construction for the reduction, which is polynomial in the size of n and m. Note that we assume the number of candidate attributes µ is always less than the number of candidates |C|; more specifically, our reduction holds when 3 ≤ µ ≤ |C| − 2, a realistic assumption, as we ideally expect µ to be very small [9].

We first compute the score of any committee and then give the proof of correctness. When f is a monotone, separable scoring function, we know that f(W) = Σ_{c∈W} f(c).
Next, consider a scoring vector s = (s_1, s_2, ..., s_{2µ²m−7µm+4m}), where s_1 is the score associated with a candidate c in the ranking of a voter v with pos_v(c) = 1, and so on, with s_1 ≥ s_2 ≥ ... ≥ s_{2µ²m−7µm+4m} and s_1 > s_{2µ²m−7µm+4m}. The score of a candidate c ∈ C is f(c) = Σ_{v∈V} s_{pos_v(c)}, but as each candidate occupies each of the m + m · (2µ² − 7µ + 3) positions exactly once, f(c) can be rewritten as f(c) = Σ_{i=1}^{2µ²m−7µm+4m} s_i. Hence, as all candidates c ∈ C have the same score, every committee W ∈ W of size k + mµ² − 3mµ is a highest-scoring committee, with f(W) = (k + mµ² − 3mµ) · Σ_i s_i. Note that computing a highest-scoring committee using a monotone, separable function takes time polynomial in the size of the input. For clarity with respect to the committee score, consider the following example: w.l.o.g., if f is k-Borda, then s = (m + m · (2µ² − 7µ + 3) − 1, ..., 1, 0); hence all candidates c ∈ C get the same Borda score f(c), namely the sum of the first m + m · (2µ² − 7µ + 3) − 1 natural numbers, i.e., of all the scores in the Borda scoring vector, and every committee of size k + mµ² − 3mµ is a highest-scoring committee. Hence, the NP-hardness of the problem is due to finding a feasible committee that satisfies |G ∩ W| ≥ l^D_G for all G ∈ G, where l^D_G = 1. Therefore, for the proof of correctness, we show the following:

Claim 1. We have a vertex cover S of size at most k satisfying e ∩ S ≠ ∅ for all e ∈ E if and only if we have a committee W of size at most k + mµ² − 3mµ satisfying all the diversity constraints, i.e., |G ∩ W| ≥ l^D_G = 1 for all G ∈ G.

(⇒) If the instance of the VC problem is a yes-instance, then the corresponding instance of (µ, π, f)-DRCWD is a yes-instance, as each candidate group has at least one of its members in the winning committee W, i.e., |G ∩ W| ≥ 1 for all G ∈ G. Note that we have set l^D_G = 1 for all G ∈ G. More specifically, for each block of dummy candidates, we select the one dummy candidate of Set T_1 and all µ − 1 dummy candidates of Set T_3; this satisfies the condition |G ∩ W| ≥ 1 for all candidate groups containing at least one dummy candidate d ∈ D. Overall, we select µ candidates from each of the µ − 3 blocks of each of the m candidates c ∈ A, i.e., µ · (µ − 3) · m = mµ² − 3mµ candidates in the committee. Next, for the groups that do not contain any dummy candidates, we select the k candidates c ∈ A corresponding to the k vertices x ∈ X that form the vertex cover; these k candidates satisfy |G ∩ W| ≥ 1 for all candidate groups without dummy candidates. Hence, we have a committee of size k + mµ² − 3mµ.

(⇐) The instance of (µ, π, f)-DRCWD is a yes-instance when we have k + mµ² − 3mµ candidates in the committee such that each group has at least one of its members in the winning committee W, i.e., |G ∩ W| ≥ 1 for all G ∈ G. Then the corresponding instance of the VC problem is a yes-instance as well: the k candidates c ∈ A that satisfy |G ∩ W| ≥ 1 for all candidate groups without dummy candidates correspond to k vertices x ∈ X that form a vertex cover. This completes the proof.
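The cyclic voter construction above is easy to check computationally. The following minimal Python sketch builds the cyclic-shift profile on a small hypothetical number of candidates and verifies that every candidate receives the same Borda score, so every committee of the target size is highest-scoring.

```python
# Minimal sketch: build the cyclic-shift preference profile used in the
# reductions and verify that all candidates end up with equal Borda
# scores. The number of candidates is a small hypothetical value.
num = 9                                   # total candidates (illustrative)
cands = list(range(num))
profile = [cands[v:] + cands[:v] for v in range(num)]  # voter v's ranking

borda = {c: 0 for c in cands}
for ranking in profile:
    for pos, c in enumerate(ranking):     # position 0 = most preferred
        borda[c] += num - 1 - pos
assert len(set(borda.values())) == 1      # every candidate scores the same
print("common Borda score:", borda[0])    # the sum 0 + 1 + ... + (num - 1)
```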
Theorem 4. Let µ ≥ 4 be an even integer, let π = 0, and let f be a monotone, separable function; then (µ, π, f)-DRCWD is NP-hard, even when l^D_G = 1 for all G ∈ G.

Proof. We reduce an instance of the vertex cover (VC) problem to an instance of (µ, π, f)-DRCWD. We have two candidates c_i and c_{m+i} for each vertex x_i ∈ X, and 2m · (2µ² − 7µ + 3) dummy candidates d ∈ D, where m is the number of vertices in the graph H and µ, the number of candidate attributes, is a positive even integer. Formally, A = {c_1, ..., c_{2m}}. We set the target committee size to 2k + 2mµ² − 6mµ. Next, we have µ candidate attributes. Each edge e ∈ E connecting vertices x_i and x_j corresponds to two candidate groups G, G′ ∈ G, where group G contains the two candidates c_i and c_j corresponding to vertices x_i and x_j, and group G′ contains the two candidates c_{m+i} and c_{m+j}, which also correspond to vertices x_i and x_j. Note that by having 2m candidates in A, we are in fact duplicating the graph H. As our reduction starts from a 3-regular graph, each vertex is connected to three edges; this corresponds to each candidate c ∈ A having three attributes and thus belonging to three groups. Next, for each candidate c ∈ A, we have µ − 3 blocks of dummy candidates, each block containing 2µ − 1 dummy candidates d ∈ D. Thus, we have a total of 2m · (µ − 3) · (2µ − 1) = 2m · (2µ² − 7µ + 3) dummy candidates. Each block contains three sets: Set T_1 consists of a single dummy candidate, d^{T_1}_1, and Sets T_2 and T_3 contain µ − 1 candidates each. Each of the µ − 3 blocks of each candidate c ∈ A is constructed in line with the construction in the proof of Theorem 3: each candidate in the block has µ attributes, and the candidates are grouped as follows:

• The dummy candidate d^{T_1}_1 ∈ T_1 is in the same group as candidate c ∈ A. It is also in µ − 1 further groups, one with each of the µ − 1 dummy candidates d^{T_2}_i ∈ T_2. Thus, the dummy candidate d^{T_1}_1 ∈ T_1 has µ attributes and is part of µ groups.

• Each dummy candidate d^{T_2}_i ∈ T_2 is in the same group as d^{T_1}_1, as described in the previous point. It is also in µ − 1 further groups, one with each of the µ − 1 dummy candidates d^{T_3}_j ∈ T_3. Thus, each dummy candidate d^{T_2}_i ∈ T_2 has µ attributes and is part of µ groups.

• The grouping of the candidates in Set T_3 differs significantly from the construction in the proof of Theorem 3. Each dummy candidate d^{T_3}_j ∈ T_3 is grouped with the candidates of T_2 as described in the previous point. Next, note that when µ is an even number, µ − 1 is an odd number, which means Set T_3 has an odd number of candidates. We randomly divide µ − 2 of the candidates into two partitions. Then we create (µ − 2)/2 groups over one attribute, where each group contains two candidates from Set T_3 such that one candidate is selected from each of the two partitions without replacement. Thus, each pair of groups is mutually disjoint, and each such dummy candidate is part of exactly one group, shared with exactly one other dummy candidate d^{T_3}_{j′} ∈ T_3, j ≠ j′. Overall, this construction contributes one attribute and one group to all but one dummy candidate in T_3, giving those µ − 2 candidates a total of µ attributes and µ groups; this is because the (µ − 2)/2 groups can hold only µ − 2 candidates. Hence, one candidate still has µ − 1 attributes and is part of µ − 1 groups. If this block of dummy candidates belongs to candidate c_i ∈ A, then the corresponding block of dummy candidates for candidate c_{m+i} ∈ A also has one candidate d^{T_3}_z ∈ T_3 with µ − 1 attributes and µ − 1 groups. We group these two candidates from the separate blocks together.
Hence, the one remaining candidate of each block also has µ attributes and is part of µ groups. As there is always an even number of candidates in set A (|A| = 2m), such cross-block grouping of candidates among the total of (µ − 3) · 2m blocks, also an even number, is always possible. As a result of the grouping described above, each candidate c ∈ A also has µ attributes and is part of µ groups. Note that each candidate c ∈ A already had three attributes and was part of three groups due to our reduction from the vertex cover problem on 3-regular graphs; additionally, we added µ − 3 blocks of dummy candidates and grouped candidate c ∈ A with the candidate d^{T_1}_1 ∈ T_1 of each of the µ − 3 blocks. Hence, each candidate c ∈ A has 3 + (µ − 3) = µ attributes and is part of µ groups. We set l^D_G = 1 for all G ∈ G, which corresponds to the requirement that each edge be covered by some chosen vertex. Finally, in line with our reduction in the proof of Theorem 3, we introduce 2m + 2m · (2µ² − 7µ + 3) voters. For simplicity, let c_i denote the i-th candidate in the set C. The first voter ranks the candidates by their indices; the second voter improves the rank of each candidate by one position and places the previously top-ranked candidate in the last position; and so on, so the voters' rankings are exactly the cyclic shifts of the first ranking. Finally, there are no voter attributes, so π = 0 and there are no representation constraints (l^R_P = ∅). This completes our construction for the reduction, which is polynomial in the size of n and m. Note that we assume the number of candidate attributes µ is always less than the number of candidates |C|; more specifically, our reduction holds when 3 ≤ µ ≤ |C| − 2, a realistic assumption, as we ideally expect µ to be very small [9].

We first compute the score of any committee and then give the proof of correctness. When f is a monotone, separable scoring function, we know that f(W) = Σ_{c∈W} f(c). Next, consider a scoring vector s = (s_1, s_2, ..., s_{4µ²m−14µm+8m}), where s_1 is the score associated with a candidate c in the ranking of a voter v with pos_v(c) = 1, and so on, with s_1 ≥ s_2 ≥ ... ≥ s_{4µ²m−14µm+8m} and s_1 > s_{4µ²m−14µm+8m}. The score of a candidate c ∈ C is f(c) = Σ_{v∈V} s_{pos_v(c)}, but as each candidate occupies each of the 2m + 2m · (2µ² − 7µ + 3) positions exactly once, f(c) can be rewritten as f(c) = Σ_{i=1}^{4µ²m−14µm+8m} s_i. Hence, as all candidates c ∈ C have the same score, every committee W ∈ W of size 2k + 2mµ² − 6mµ is a highest-scoring committee, with f(W) = (2k + 2mµ² − 6mµ) · Σ_i s_i. Note that computing a highest-scoring committee using a monotone, separable function takes time polynomial in the size of the input. For clarity with respect to the committee score, consider the following example: w.l.o.g., if f is k-Borda, then s = (2m + 2m · (2µ² − 7µ + 3) − 1, ..., 1, 0); hence all candidates c ∈ C get the same Borda score f(c), namely the sum of the first 2m + 2m · (2µ² − 7µ + 3) − 1 natural numbers, i.e., of all the scores in the Borda scoring vector, and every committee of size 2k + 2mµ² − 6mµ is a highest-scoring committee. Hence, the NP-hardness of the problem is due to finding a feasible committee that satisfies |G ∩ W| ≥ l^D_G for all G ∈ G, where l^D_G = 1. Therefore, for the proof of correctness, we show the following:
Hence, the NP-hardness of the problem is due to finding a feasible committee that satisfies, for all G ∈ G, |G ∩ W| ≥ l_G^D, where l_G^D = 1. Therefore, for the proof of correctness, we show the following:

Claim 2. We have a vertex cover S of size at most k that satisfies e ∩ S ≠ ∅ for all e ∈ E if and only if we have a committee W of size at most 2k + 2mµ² − 6mµ that satisfies all the diversity constraints, which means that for all G ∈ G, |G ∩ W| ≥ l_G^D, which equals |G ∩ W| ≥ 1 as l_G^D = 1 for all G ∈ G.

(⇒) If the instance of the VC problem is a yes instance, then the corresponding instance of (µ, π, f)-DRCWD is a yes instance, as each and every candidate group will have at least one of its members in the winning committee W, i.e., |G ∩ W| ≥ 1 for all G ∈ G. Note that we have set l_G^D = 1 for all G ∈ G. More specifically, for each block of candidates, we select the one dummy candidate from set T_1 and all µ − 1 dummy candidates from set T_3. This helps satisfy the condition |G ∩ W| ≥ 1 for all candidate groups that contain at least one dummy candidate d ∈ D. Overall, we select µ candidates from each of the µ − 3 blocks for each of the 2m candidates in A. This results in (µ · (µ − 3) · 2m) = 2mµ² − 6mµ candidates in the committee. Next, for the groups that do not contain any dummy candidates, we select the 2k candidates c ∈ A that correspond to the k vertices x ∈ X that form the vertex cover. These candidates satisfy the constraints; specifically, these 2k candidates satisfy |G ∩ W| ≥ 1 for all the candidate groups that do not contain any dummy candidates. Hence, we have a committee of size 2k + 2mµ² − 6mµ.

(⇐) The instance of (µ, π, f)-DRCWD is a yes instance when we have 2k + 2mµ² − 6mµ candidates in the committee. This means that each and every group will have at least one of its members in the winning committee W, i.e., |G ∩ W| ≥ 1 for all G ∈ G. Then the corresponding instance of the VC problem is a yes instance as well. This is because the k vertices x ∈ X that form the vertex cover correspond to the 2k candidates c ∈ A that satisfy |G ∩ W| ≥ 1 for all the candidate groups that do not contain any dummy candidates. We remind the reader that we constructed 2m candidates in the instance of the (µ, π, f)-DRCWD problem that correspond to the m vertices in the VC problem, which means that we need 2k candidates instead of k candidates to satisfy the diversity constraints for the candidate groups that do not contain any dummy candidates. This completes the proof.
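The diversity side of this correctness argument reduces to checking |G ∩ W| ≥ l_G^D for every group. A minimal Python sketch of that check (with hypothetical identifiers; the edge-to-group encoding mirrors the reduction, but the instance below is our own toy example):

def satisfies_diversity(committee, groups, lower_bounds):
    """Check |G ∩ W| >= l_G for every candidate group G.

    committee:    iterable of candidate ids (the committee W)
    groups:       dict mapping a group id to a set of candidate ids
    lower_bounds: dict mapping a group id to its lower bound l_G
    """
    W = set(committee)
    return all(len(W & members) >= lower_bounds[g]
               for g, members in groups.items())

# Edge {x1, x2} of the graph becomes group {c1, c2} with bound 1, so a
# feasible committee "covers" every edge, exactly as in the reduction.
groups = {"e12": {"c1", "c2"}, "e23": {"c2", "c3"}}
bounds = {g: 1 for g in groups}
print(satisfies_diversity({"c2"}, groups, bounds))  # True: c2 covers both edges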
(µ, π, f)-DRCWD w.r.t. representation constraints

We now study the computational complexity of (µ, π, f)-DRCWD due to the presence of voter attributes. Note that the reduction is designed to conform to real-world stipulations that are analogous to the stipulations for the candidate attributes. The following theorem helps us prove the statement in Corollary 1(4).

Theorem 5. If µ = 0, ∀π ∈ Z : π ≥ 1, and f is a monotone, separable function, then (µ, π, f)-DRCWD is NP-hard, even when ∀P ∈ P, l_P^R = 1.

Proof. We reduce an instance of the vertex cover (VC) problem to an instance of (µ, π, f)-DRCWD. We have one candidate c_i for each vertex x_i ∈ X, and n · m dummy candidates d ∈ D, where n corresponds to the number of edges and m corresponds to the number of vertices in the graph G. Formally, we set A = {c_1, . . . , c_m} and the dummy candidate set D = {d_1, . . . , d_{nm}}. Hence, the candidate set C = A ∪ D consists of m + (n · m) candidates. We set the target committee size to be k. We now introduce n² voters, n voters for each edge e ∈ E. More specifically, let an edge e ∈ E connect vertices x_i and x_j. Then, the corresponding n voters v ∈ V rank the candidates in the following collection of sets T = (T_1, T_2, T_3, T_4) such that T_1 ≻ T_2 ≻ T_3 ≻ T_4:

• Set T_1: candidates c_i and c_j that correspond to vertices x_i and x_j are ranked at the top two positions, ordered based on their indices. For the a-th voter, where a ∈ [n], we denote the candidates c_i and c_j as c_{i_a} and c_{j_a}.

• Set T_2: m out of the (n · m) dummy candidates are ranked in the next m positions, again ordered based on their indices. For each voter, these m dummy candidates are distinct; hence, for all pairs of voters, the sets of dummy candidates in T_2 are pairwise disjoint.

Next, there are no candidate attributes; hence, µ = 0 and there are no diversity constraints (l_G^D = ∅). The voters are divided into disjoint populations over one or more attributes when ∀π ∈ Z, π ≥ 1. Specifically, the voters are divided into populations as follows: ∀x ∈ [π], ∀y ∈ [n], ∀z ∈ [n], voter v_y^z ∈ V is part of a population P ∈ P such that P contains all voters with the same z mod x and y. Each voter is part of π populations. We set the representation constraint to 1; hence, l_P^R = 1 for all P ∈ P. The winning committee W_P for each population P ∈ P will always consist of the top k-ranked candidates in the ranking of the voters in population P, which means that W_P, ∀P ∈ P, cannot contain candidates from set T_3 or set T_4. This is because, by construction, (a) the ranking of all voters within a population v ∈ P, for all P ∈ P, is the same, and (b) the first k candidates of each population will get selected because either (i) they will indeed be the highest-scoring candidates for the population, or (ii) in case of a tie, they get precedence, as we break ties based on the indices of the candidates such that c_i gets precedence over c_j for all i < j. This completes our reduction, which is a polynomial-time reduction in the size of n and m.
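The population-assignment rule in this reduction is mechanical and may be easier to read as code. The sketch below is our own illustration, assuming voters are indexed as v_y^z with y, z ∈ [n]; it groups the n² voters by the pair (z mod x, y) for each attribute x ∈ [π]:

def populations(n, pi):
    """Partition the n*n voters v[y][z] into populations, one family of
    populations per voter attribute x in 1..pi: two voters share a
    population for attribute x iff they agree on (z mod x, y)."""
    pops = {}
    for x in range(1, pi + 1):
        for y in range(1, n + 1):
            for z in range(1, n + 1):
                pops.setdefault((x, z % x, y), set()).add((y, z))
    return pops

# With pi = 2 and n = 2, every voter lies in exactly pi = 2 populations.
for key, members in populations(2, 2).items():
    print(key, sorted(members))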
For the proof of correctness, we show the following:

Claim 3. We have a vertex cover S of size at most k that satisfies e ∩ S ≠ ∅ for all e ∈ E if and only if we have at least one committee W of size at most k that satisfies all the representation constraints, which means that for all P ∈ P, |W_P ∩ W| ≥ l_P^R, which equals |W_P ∩ W| ≥ 1 as l_P^R = 1 for all P ∈ P.

(⇒) If the instance of the VC problem is a yes instance, then the corresponding instance of (µ, π, f)-DRCWD is a yes instance, as each and every population's winning committee, W_P for all P ∈ P, will have at least one of its members in the winning committee W, i.e., |W_P ∩ W| ≥ 1 for all P ∈ P. Indeed, even had the winning committee of each population been of size 2 instead of k, the instance of (µ, π, f)-DRCWD would be a yes instance, as the vertex cover corresponds to the winning committee representing each and every population via |W_P ∩ W| ≥ 1 for all P ∈ P.

(⇐) The instance of (µ, π, f)-DRCWD is a yes instance when each and every population's winning committee, W_P for all P ∈ P, has at least one of its members in the winning committee W, i.e., |W_P ∩ W| ≥ 1 for all P ∈ P. Then the corresponding instance of the VC problem is a yes instance as well. More specifically, there are two cases in which the instance of (µ, π, f)-DRCWD can be a yes instance:

• Case 1 - when only the candidates from set T_1 are in the committee W: An instance of (µ, π, f)-DRCWD when µ = 0 and π = 1 is a yes instance when each and every population has at least one representative in the committee, i.e., |W_P ∩ W| ≥ 1 for all P ∈ P. We note that for all P ∈ P, each population's winning committee W_P consists of the two candidates from set T_1 and the top k − 2 candidates from set T_2. Hence, when the winning committee W consists only of the candidates from set T_1 of the ranking of each and every voter v ∈ V, it implies that it will be a yes instance, which, in turn, implies that there is a vertex cover of size at most k that covers all the edges e ∈ E, because the vertices in the vertex cover x ∈ S correspond to the candidates in the winning committee c ∈ W.

• Case 2 - when candidates from set T_1 and set T_2 are in the committee W: In Case 1, we showed that if a candidate c in the winning committee W is from set T_1, then it corresponds to a vertex in the vertex cover. Additionally, as the population's winning committee W_P for all P ∈ P is of size k, an instance of (µ, π, f)-DRCWD can be a yes instance even if a dummy candidate from set T_2 is in the winning committee W. More specifically, there are two sub-cases:

- For some population P ∈ P, a dummy candidate d from set T_2 and a candidate c from set T_1 are in the committee W: if a population's candidate c from set T_1, who is also in W_P, is in W, then this sub-case is equivalent to Case 1, and hence, a corresponding vertex in the vertex cover exists. We note that this sub-case does not cover a population having a representative from W_P in W only from set T_2, which is our next sub-case.

- For some population P ∈ P, only a dummy candidate d from set T_2 is in the committee W: if, for a given population P ∈ P, a committee W represents the population via only the dummy candidate d who is in the population's winning committee, d ∈ W_P, then the representation constraint l_P^R = |W_P ∩ W| = 1 is satisfied, as W_P ∩ W = {d}. However, for all pairs of voters, the sets of dummy candidates in T_2 are pairwise disjoint. Hence, we can replace any such dummy candidate d ∈ W_P with a candidate c ∈ W_P, as that candidate d cannot be representing any other population P′ ∈ P \ {P}. Formally, a winning committee W is always tied to another winning committee W′, where W′ = (W \ {d}) ∪ {c} and {c, d} ⊆ W_P for some P ∈ P. This is equivalent to saying that we are replacing candidate d from set T_2 with a candidate c from set T_1 of the population P. Thus, a yes instance of (µ, π, f)-DRCWD due to W, or due to the equivalent committee W′, in this sub-case corresponds to a vertex cover S that covers all the edges e ∈ E.

These cases complete the other direction of the proof of correctness. Finally, we note that for this reduction and the proof of correctness, we assume that ties are broken using a pre-decided order of candidates. We also note that, as we are using a separable committee selection rule, computing the scores of candidates takes polynomial time. This completes the overall proof.

The reduction in the proof of Theorem 5 holds ∀π ∈ Z : π ≥ 1, as each voter in the reduction can belong to more than one population. Next, as the focus of this section was to understand the computational complexity with respect to representation constraints, we ease the stipulation that required each candidate attribute to partition all candidates into more than two groups. Hence, for each candidate attribute A_i, ∀i ∈ [µ], we simply create one group that consists of all the candidates and set l_G^D = 1 for all G ∈ G, and the problem still remains NP-hard.

(µ, π, f)-DRCWD w.r.t. submodular scoring function

The Chamberlin-Courant (CC) rule is a well-known monotone, submodular scoring function [9], which we use for our proof.
The novelty of our reduction is that it holds for determining the winning committee using the CC rule with any positional scoring rule whose scoring vector s = (s_1, . . . , s_m) satisfies s_1 = s_2, s_m ≥ 0, and ∀i ∈ [3, m − 1], s_i ∈ Z : s_i ≥ s_{i+1} and s_2 > s_i.

Theorem 6. If µ = 0, π = 0, and f is a monotone, submodular (but not separable) scoring function such as the CC rule, then (µ, π, f)-DRCWD is NP-hard.

Proof. We reduce an instance of the vertex cover (VC) problem to an instance of (µ, π, f)-DRCWD. Each candidate c_i ∈ C corresponds to a vertex x_i ∈ X. For each edge e ∈ E, we have a voter v ∈ V whose complete linear order is as follows: the top two most-preferred candidates correspond to the two vertices connected by the edge e. These two candidates are ranked based on their indices. The remaining m − 2 candidates are ranked in the bottom m − 2 positions, again based on their indices. We set the committee size to k. This is a polynomial-time reduction in the size of n and m. For the proof of correctness, we note that there are no candidate and voter attributes, and thus no diversity and representation constraints. Hence, we show the following:

Claim 4. We have a vertex cover S of size at most k that satisfies e ∩ S ≠ ∅ for all e ∈ E if and only if we have a committee W of size at most k with a total misrepresentation of zero, which means that at least one of the top-2-ranked candidates of each voter is in the committee W.

(⇒) If the instance of the VC problem is a yes instance, then the corresponding instance of (µ, π, f)-DRCWD is a yes instance, as each and every voter will have at least one of their top two candidates in the committee, and this will result in a misrepresentation score of zero, as s_1 = s_2 and ∀i ∈ [3, m], s_i < s_2.

(⇐) If the instance of (µ, π, f)-DRCWD is a yes instance, then the VC instance is a yes instance as well. When a committee W does not contain one of a voter's top-2 candidates, it implies that the voter's dissatisfaction is greater than zero. Hence, for each voter v ∈ V that is not represented, there exists an edge e ∈ E that is not covered.

Hence, we can say that (µ, π, f)-DRCWD is NP-hard with respect to µ = 0, π = 0, and a submodular function f.
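To make the zero-misrepresentation condition concrete, the following Python sketch (a toy triangle-graph instance of ours, not an artifact of the paper) computes the Chamberlin-Courant misrepresentation of a committee under a positional vector with s_1 = s_2, and shows that it is zero exactly when every voter has one of its top-2 candidates in the committee, which is the property the reduction exploits:

def cc_misrepresentation(profile, committee, s):
    """Chamberlin-Courant total misrepresentation: each voter is charged
    s[0] - s[best position of a committee member in their ranking]."""
    W = set(committee)
    total = 0
    for ranking in profile:
        best = min(pos for pos, c in enumerate(ranking) if c in W)
        total += s[0] - s[best]
    return total

# Triangle graph on vertices a, b, c; one voter per edge, top-2 = endpoints.
profile = [["a", "b", "c"], ["b", "c", "a"], ["a", "c", "b"]]
s = [2, 2, 0]                       # s1 = s2 > s3, as required by the proof
print(cc_misrepresentation(profile, {"a", "b"}, s))  # 0: {a,b} covers all edges
print(cc_misrepresentation(profile, {"c"}, s))       # > 0: edge {a,b} uncovered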
The proof of Theorem 6 shows that when we use a submodular but not separable committee selection rule f, (µ, π, f)-DRCWD is NP-hard even when µ = 0 and π = 0. Next, as the focus of this section was to understand the computational complexity with respect to monotone, submodular scoring rules, we ease the stipulation that required each candidate attribute to partition all candidates into more than two groups and required each voter attribute to partition all voters into more than two populations. The problem remains hard even when we have candidate attributes with the diversity constraints set to one and voter populations with the representation constraints set to one. Specifically, for each candidate attribute, create one group that contains all the candidates, and for each voter attribute, create one population that contains all the voters. Hence, even when l_G^D = 1 for all G ∈ G and l_P^R = 1 for all P ∈ P, (µ, π, f)-DRCWD is NP-hard if f is a submodular committee selection rule.

Table 2: A summary of the inapproximability and parameterized complexity of (µ, π)-DRCF. The values in brackets in the header row represent the values of µ and π, respectively, such that the results hold for all µ ∈ Z and all π ∈ Z that satisfy the condition stated in the brackets. The results are under the assumption P ≠ NP. 'Thm.' denotes Theorem. 'Obs.' denotes Observation. 'Cor.' denotes Corollary. ε denotes an arbitrarily small constant such that ε > 0, and the results are meant to hold for every such ε > 0.

Inapproximability and Parameterized Complexity

The hardness of (µ, π, f)-DRCWD is mainly due to the hardness of (µ, π)-DRCF, which is to say that satisfying the diversity and representation constraints is computationally hard, even when all constraints are set to 1. Formally, the hardness remains even when l_G^D = 1 for all G ∈ G and l_P^R = 1 for all P ∈ P. Hence, in this section, we focus on the hardness of approximation, to understand the limits of how well we can approximate (µ, π)-DRCF, and on the parameterized complexity of (µ, π)-DRCF.

It is natural to try to reformulate representation constraints as diversity constraints. However, in our model, it is not possible to do so, as each candidate attribute partitions all m candidates into groups and the lower bound is set such that l_G^D ∈ [1, min(k, |G|)] for all G ∈ G. However, for representation constraints, W_P, for all P ∈ P, contains only k candidates, and the remaining m − k candidates in C \ W_P, for all P ∈ P, may never be selected. Hence, representation constraints cannot easily be reformulated as diversity constraints. Moreover, even if we relax the lower bound of the diversity constraints to l_G^D ∈ [0, min(k, |G|)] instead of l_G^D ∈ [1, min(k, |G|)], for all G ∈ G, to allow for such a reformulation, the following settings of (µ, π)-DRCF and (µ, π, f)-DRCWD are technically different, and we may not carry out any reformulations between them:

• using only diversity constraints
• using only representation constraints
• using both diversity and representation constraints

The above-listed settings are technically different from each other, as the sizes of the candidate groups and the sizes of the winning committees of the populations have implications for our approach to solving a problem. For instance, using both diversity and representation constraints and using only representation constraints are mathematically as different as the vertex cover problem on hypergraphs and the vertex cover problem on k-uniform hypergraphs, respectively. The differences between the hardness of approximation for the latter two problems are well known. Overall, while reformulations such as converting representation constraints to diversity constraints do not impact the computational complexity of the problem, they affect the approximation and parameterized complexity results. Hence, we study the hardness of approximation and the parameterized complexity of the above-listed settings of (µ, π)-DRCF in detail, without carrying out any reformulations between the different settings of the constraints.

Observation 2. ∀µ ∈ Z and ∀π ∈ Z, the following settings of the (µ, π)-DRCF problem are not equivalent: (i) µ = 0 and π ≥ 1, (ii) µ ≥ 3 and π = 0, and (iii) µ ≥ 1 and π ≥ 1.

Inapproximability

In this subsection, we focus on allowing size violations, as deciding which constraints to violate is not straightforward, especially as the constraints are linked to human groups. Hence, we define the size optimization version of (µ, π)-DRCF and study its inapproximability. For Theorem 9, we assume that the Unique Games Conjecture (UGC) [54] holds, specifically as the result that showed that pseudorandom sets in the Grassmann graph have near-perfect expansion completed the proof of the 2-to-2 Games Conjecture [55], which is considered to be significant evidence towards proving the UGC.
Moreover, GapUG(1/2, ε) is known to be NP-hard, i.e., a weaker version of the UGC holds with completeness 1/2 (see [56] and "Evidence towards the Unique Games Conjecture" in [55] for more details). Without the UGC assumption, the result for our problem when µ = 0 and π ≥ 1 changes: for an arbitrarily small constant ε > 0, the problem is inapproximable within a factor of k − 1 − ε for every integer k ≥ 3 [57] and within a factor of √2 − ε when k = 2 [55,58].

Definition 8. (µ, π)-DRCF-size-optimization: In the (µ, π)-DRCF-size-optimization problem, given a set C of m candidates, a set V of n voters such that each voter v_i has a preference list ≻_{v_i} over the m candidates, a committee size k ∈ [m], a set of candidate groups G and the corresponding diversity constraints l_G^D for all G ∈ G, and a set of voter populations P and the corresponding representation constraints l_P^R and the winning committees W_P for all P ∈ P, find a minimum-size committee W ⊆ C such that W satisfies all the diversity and representation constraints, i.e., |G ∩ W| ≥ l_G^D for all G ∈ G and |W_P ∩ W| ≥ l_P^R for all P ∈ P, respectively.

Theorem 7. If ∀µ ∈ Z : µ ≥ 3 and π = 0, then the (µ, π)-DRCF-size-optimization problem is NP-hard to approximate within a factor of (1 − ε) · (ln µ − O(ln ln µ)).

Proof. We reduce from the set multi-cover problem with sets of bounded size, a known NP-hard problem [52], to the (µ, 0)-DRCF-size-optimization problem. More specifically, given a set X = {v_1, . . . , v_{|G|}} and a collection of m sets S_i ⊆ X such that |S_i| ≤ µ, the goal is to choose a collection of sets of minimum cardinality covering each element v_i. We then construct a (µ, 0)-DRCF-size-optimization instance. To do so, we have a corresponding candidate c_i for each set S_i, and, for each element v_i, a corresponding group G ∈ G that equals {c_j : v_i ∈ S_j}. Hence, there are m candidates and |G| candidate groups such that each candidate belongs to at most µ groups. The diversity constraints l_G^D are set to 1, which corresponds to the requirement that each element is covered. This is an approximation-preserving reduction for all µ ≥ 3 and π = 0. Hence, the minimum cardinality of the constrained set cover problem is at most k if and only if an at most k-sized feasible committee exists. Given that the set multi-cover problem is inapproximable within (1 − ε) · (ln µ − O(ln ln µ)) [59], so is our (3, 0)-DRCF-size-optimization problem. We note that this result holds for the (µ, π)-DRCF-size-optimization problem for all µ ∈ Z : µ ≥ 3 and π = 0.

While the above proof is similar in flavor to the one given in Theorem 7 (hardness of feasibility with committee violations) of Celis et al. [9], we note that our inapproximability ratio differs from their inapproximability ratio of (1 − ε) · ln(|G|). This is because our ratio exploits the candidate structure, where each candidate is bounded by the number of attributes µ, which bounds the number of groups they can be a part of. Hence, our reduction is from the set cover problem where each set is of bounded size.
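Definition 8 can be made concrete with an exhaustive reference solver. The sketch below (a hypothetical helper of ours, exponential time, suitable only for tiny instances) returns a minimum-size committee satisfying every diversity and representation constraint, or None if the instance is infeasible:

from itertools import combinations

def min_feasible_committee(candidates, div_groups, div_bounds,
                           pop_committees, rep_bounds):
    """Exhaustive reference solver for tiny instances of
    (mu, pi)-DRCF-size-optimization: smallest W meeting every
    |G ∩ W| >= l_G and |W_P ∩ W| >= l_P. Exponential time."""
    for size in range(1, len(candidates) + 1):
        for W in combinations(candidates, size):
            Wset = set(W)
            ok_div = all(len(Wset & G) >= div_bounds[g]
                         for g, G in div_groups.items())
            ok_rep = all(len(Wset & WP) >= rep_bounds[p]
                         for p, WP in pop_committees.items())
            if ok_div and ok_rep:
                return Wset
    return None  # infeasible

cands = ["c1", "c2", "c3", "c4"]
groups = {"g1": {"c1", "c2"}, "g2": {"c3", "c4"}}
pops = {"p1": {"c2", "c3"}}
print(min_feasible_committee(cands, groups, {"g1": 1, "g2": 1},
                             pops, {"p1": 1}))  # {'c1', 'c3'}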
We next reduce from the hitting set (HS) problem. An instance of HS consists of a universe U = {x_1, x_2, . . . , x_m} and a collection Z of subsets of U, each of size ∈ [1, m]. The objective is to find a subset S ⊆ U of size at most k that ensures, for all T ∈ Z, |S ∩ T| ≥ 1. We construct the (1, 1)-DRCF-size-optimization instance as follows. For each element x in the universe U, we have the candidate c in the candidate set C. For each subset T in the collection Z, we either have a candidate group G ∈ G or the winning committee W_P of a population P ∈ P. Note that we have |G| + |P| = |Z|. We set l_G^D = 1 for all G ∈ G and l_P^R = 1 for all P ∈ P, which means |W ∩ G| ≥ 1 and |W ∩ W_P| ≥ 1, respectively. This corresponds to the requirement that |S ∩ T| ≥ 1. Hence, we have a subset S of size at most k that satisfies |S ∩ T| ≥ 1 if and only if we have a committee W of size at most k that satisfies |W ∩ G| ≥ 1 for all G ∈ G and |W ∩ W_P| ≥ 1 for all P ∈ P.

Theorem 9. Assuming the UGC, if µ = 0 and ∀π ∈ Z : π ≥ 1, then the (µ, π)-DRCF-size-optimization problem is NP-hard to approximate within a factor of k − ε for every constant ε > 0.

Proof. We give a reduction from the vertex cover problem on k-uniform hypergraphs to (0, 1)-DRCF. An instance of the vertex cover problem on k-uniform hypergraphs consists of a set of vertices X = {x_1, x_2, . . . , x_m} and a set of n hyperedges S, each connecting exactly k vertices from X. A vertex cover X′ ⊆ X is a subset of vertices such that each edge contains at least one vertex from X′ (i.e., s ∩ X′ ≠ ∅ for each edge s ∈ S). The vertex cover problem on k-uniform hypergraphs is to find a vertex cover X′ of size at most d. We construct the (0, 1)-DRCF instance as follows. For each vertex x ∈ X, we have the candidate c ∈ C. For each edge s ∈ S, we have a population's winning committee W_P of size k, for all P ∈ P. Note that we have |P| = |S|. We set l_P^R = 1 for all P ∈ P, which means |W ∩ W_P| ≥ 1. This corresponds to the requirement that s ∩ X′ ≠ ∅. Hence, we have a vertex cover X′ of size at most d if and only if we have a committee W of size at most d that satisfies |W ∩ W_P| ≥ 1 for all P ∈ P.

In addition to this general inapproximability result, we informally conjecture (Conjecture 1) that, when the cohesiveness of the voters' preferences is known, one can improve upon the ratio of k − ε. A proof of this conjecture implies that there exists a polynomial-time approximation algorithm for the (µ, π)-DRCF-size-optimization problem (µ = 0 and π ≥ 1) with approximation ratio at most k − (1 − o(1)) · (k(k − 1) ln ln g(φ)) / (ln g(φ)), where g(φ) is a function that maps the cohesiveness of the preferences φ to the maximum number of winning committees W_P that a candidate can belong to. Specifically, if such a g(φ) exists and if π = 1, then the stated approximation ratio exists directly due to Halperin [63].
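The reduction in the proof of Theorem 9 is a direct relabeling, which the following sketch (our own toy encoding, with hypothetical names) makes explicit: vertices become candidates, and each k-uniform hyperedge becomes a population's winning committee with representation bound 1:

def hypergraph_to_drcf(vertices, hyperedges):
    """Reduction sketch in the spirit of Theorem 9: each vertex becomes a
    candidate, each hyperedge becomes a population's winning committee
    W_P with representation bound 1."""
    candidates = sorted(vertices)
    pop_committees = {i: set(e) for i, e in enumerate(hyperedges)}
    rep_bounds = {i: 1 for i in pop_committees}
    return candidates, pop_committees, rep_bounds

# A 2-uniform hypergraph is just a graph: the triangle on {a, b, c}.
print(hypergraph_to_drcf({"a", "b", "c"},
                         [{"a", "b"}, {"b", "c"}, {"a", "c"}]))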
Parameterized Complexity

In most real-world elections, the committee size k is constant. Hence, our first result here is inspired by the parameterized complexity results in this field [38,41].

Observation 3. The (µ, π)-DRCF problem can be solved in O(m^k · (|G| + |P|)) time. If k is a constant, then this is a polynomial-time algorithm.

We select the set of committees W, each of size k, and then check the satisfiability of the constraints for each committee W ∈ W. It is easy to see that W has at most m^k committees, that is, |W| ≤ m^k. Checking whether a committee W ∈ W satisfies all the constraints takes O(|G| + |P|) time, which is the total number of constraints to be checked. Hence, we can solve (µ, π)-DRCF in time polynomial in m and n, given that k is constant. Next, when the committee size k is not a constant, the rate of growth of the number of candidates to be elected may be much slower than that of the number of candidates (k ≪ m).

Theorem 10. [60,64] The regular hitting set problem with unbounded subset size is W[2]-hard w.r.t. k.

Corollary 4. If ∀µ ∈ Z : µ ≥ 3 and π = 0, then the (µ, π)-DRCF problem is W[2]-hard w.r.t. k, and the hardness holds even when l_G^D = 1 for all G ∈ G.

Proof. In the proof of Theorem 7, we gave a reduction from the minimum set cover problem to the (µ, π)-DRCF problem w.r.t. µ and π for all µ ∈ Z : µ ≥ 3 and π = 0. Additionally, we know that the minimum set cover problem has a well-known one-to-one relationship with the hitting set problem with no restriction on the subset size [60,65,66]. Hence, as the regular HS problem with unbounded subset size is W[2]-hard [60], our result here follows due to the one-to-one relationship between the regular HS problem and the minimum set cover problem.

We next show that if µ = 0 and ∀π ∈ Z : π ≥ 1, then the (µ, π)-DRCF problem is fixed-parameter tractable w.r.t. k.

Proof. The proof of Theorem 9 shows that our problem is equivalent to k-HS when µ = 0 and π ≥ 1. Hence, our algorithm here is motivated by the bounded-search-tree algorithm in Section 6 of [60], where it was shown that when k is small, a d-hitting-set problem, which upper bounds the cardinality of every subset to be hit by d, can be solved using an O(c^k + m)-time algorithm with c = d − 1 + O(d⁻¹). In our case, d = k. We have modified the algorithm from [60] to return all committees that satisfy the representation constraints.

Algorithm 1: Parameterized polynomial-time algorithm for representation constraints
Input: C, k, and W_P and l_P^R for all P ∈ P
Output: W : |W ∩ W_P| ≥ 1 for all P ∈ P
1: C = C \ {y} : ∀x, y ∈ C, if y ∈ W_P then x ∈ W_P, for all P ∈ P
2: for each W_P do
3:   Create k branches, one for each c_i ∈ W_P
4:   Choose c_1 for the hitting set, and, alternatively, choose that c_1 is not in the hitting set but c_i is, for all i ∈ [2, k]

In the above algorithm, steps 3 and 4 create k branches in total. Hence, if the number of leaves in a branching tree is b_k, then the first branch has at most b_{k−1} leaves. Next, let b′_k be the number of leaves in a branching tree where there is at least one set of size k − 1 or smaller. For each i ∈ [2, k], there is some committee W_P in the given collection such that c_1 ∈ W_P but c_i ∉ W_P. Therefore, the size of W_P is at most k − 1 after excluding c_1 from, and including c_i in, the committee W. Altogether, we get b_k ≤ b_{k−1} + (k − 1) · b′_{k−1}. If there is already a set with at most k − 1 elements, we can repeat the above steps and get b′_k ≤ b_{k−1} + (k − 2) · b′_{k−1}. The branching number of this recursion is the c from above, and note that it is always smaller than k − 1 + O(k⁻¹).

As a conclusion of our theoretical analyses, we make an interesting observation: when π = 0, (µ, π)-DRCF becomes NP-hard when µ = 3. On the other hand, when µ = 0, (µ, π)-DRCF becomes NP-hard even when π = 1. This means that introducing representation constraints makes the problem hard "faster" than introducing diversity constraints. In contrast, with respect to the parameter k, the former case is W[2]-hard and the latter is fixed-parameter tractable for all π ∈ Z : π ≥ 1. This reinforces our claim that even if it may seem natural to try to reformulate representation constraints as diversity constraints, we should not do so, as the size of the candidate groups and the size of the winning committees of the voter populations have implications on how one may try to solve the problem efficiently.
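As an illustration of the bounded-search-tree idea behind Algorithm 1, the following compact recursive rendering (our own simplification, assuming all representation bounds equal 1 and each W_P has at most k members) branches on the members of one unhit committee, so the search tree has at most k^budget leaves:

def hit_committees(pop_committees, budget, chosen=frozenset()):
    """Bounded search tree for representation constraints with all
    bounds equal to 1: find W with |W| <= budget hitting every W_P,
    or None if no such W exists."""
    unhit = [WP for WP in pop_committees if not (WP & chosen)]
    if not unhit:
        return chosen                     # every population represented
    if budget == 0:
        return None                       # out of seats, some W_P unhit
    for c in sorted(unhit[0]):            # branch on members of one W_P
        result = hit_committees(pop_committees, budget - 1, chosen | {c})
        if result is not None:
            return result
    return None

WPs = [frozenset({"c1", "c2"}), frozenset({"c2", "c3"}), frozenset({"c3", "c4"})]
print(hit_committees(WPs, budget=2))      # frozenset({'c1', 'c3'})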
Heuristic Algorithm

In the previous sections, we saw that our model, which is useful from the social choice theory perspective for having "fairer" elections, is computationally hard, and it remains hard even when we parameterize the problem by the size of the committee. Hence, we take a pragmatic approach to evaluate whether our model is efficient in practice. We do so by developing a two-stage heuristic-based algorithm, in part motivated by the literature on distributed constraint satisfaction [67], which allows us to efficiently compute DiRe committees in practice. We develop a heuristic-based algorithm because the use of an integer linear program formulation in multiwinner elections is not efficient [39], especially when using the Monroe rule. Moreover, in addition to the known temporal efficiency of using a heuristic approach as compared to a linear programming approach, our empirical evaluation shows that the algorithm returns an optimal solution (discussed later in Section 8.3.1), thus overcoming one of the biggest disadvantages of using a heuristic approach.

DiReGraphs

We represent an instance of the (µ, π, f)-DRCWD problem from Figure 1 as a DiReGraph (Figure 2). The constraints are represented by quadrilaterals and candidates by ellipses. More specifically, there are the candidates (Level B) and the DiRe committee (Level D). Next, there is a global committee size constraint (Level A) and unary constraints that lower bound the number of candidates required from each candidate group or voter population (Level C). The edges connecting candidates (Level B) to unary constraints (Level C) depend on each candidate's membership in a candidate group or a population's winning committee. The idea behind a DiReGraph is to have a "network flow" from A to D such that all nodes on Level C are visited. More specifically, the aim is to select k candidates (Level A) from the m candidates (Level B) such that the in-flow at the unary constraint nodes (Level C) is equal to the specified diversity or representation constraint. A node is said to have an in-flow of τ when τ candidates in the committee W are part of the group/winning population. Formally, τ = |W ∩ G| for each candidate group G ∈ G and τ = |W ∩ W_P| for each population P ∈ P. When the last condition is fulfilled, there will be a DiRe committee (Level D).

Example 2. Creating a DiReGraph: Consider the election setup shown in Figure 1. The candidate c_2 (Figure 1) is a male who is in the winning committees of both states, namely California and Illinois. Hence, c_2 in the DiReGraph (Figure 2) is connected with three sets of constraints, one each for male and the two states, namely CA (California) and IL (Illinois).

DiRe Committee Feasibility Algorithm

Algorithm 2 has two stages: (i) preprocessing, which reduces the search space used to satisfy the constraints and efficiently finds infeasible instances, and (ii) a heuristic-based search for candidates, which decreases the number of steps needed either to find a feasible committee or to return infeasibility.

Create DiReGraph: The first step of the algorithm is to create the DiReGraph based on the variables that are given as input. We have the following input: the variables X = {X_1, . . . , X_{|G|+|P|}} are represented by the nodes on Level C. The domains D = (D_1, . . . , D_{|G|+|P|}) of these variables are represented by the edges that connect a node on Level C to the nodes (candidates) on Level B. Formally, for each D_i ∈ D, where D_i is some G ∈ G or some W_P : P ∈ P, we have an edge e that connects a node on Level B with a node on Level C. The constraints S = {S_1, . . . , S_{|G|+|P|}} correspond to the diversity and representation constraints. Formally, for each S_i ∈ S, S_i is l_G^D for some G ∈ G or l_P^R for some P ∈ P.
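As a sketch of this construction (the function name and memberships below are ours, following the setup of Example 2), the following builds the Level B to Level C adjacency that the algorithm operates on:

def build_diregraph(candidates, div_groups, pop_committees):
    """Level-B/Level-C core of a DiReGraph: an edge joins candidate c to
    a constraint node iff c belongs to that group or winning committee.
    Levels A (committee size) and D (output) are implicit."""
    constraint_nodes = {**div_groups, **pop_committees}
    edges = {c: [x for x, members in constraint_nodes.items()
                 if c in members] for c in candidates}
    return constraint_nodes, edges

groups = {"male": {"c1", "c2"}, "female": {"c3", "c4"}}
pops = {"CA": {"c1", "c2"}, "IL": {"c2", "c3"}}
_, edges = build_diregraph(["c1", "c2", "c3", "c4"], groups, pops)
print(edges["c2"])  # ['male', 'CA', 'IL'] -- out-degree 3, as in Example 2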
Algorithm 3: Pairwise feasibility algorithm
function pairwise_feasible(X, D, S) returns false if an inconsistency is found, or true otherwise
1: queue = {(X_i, X_j) : X_i, X_j ∈ X and X_i ≠ X_j}
2: while queue is not empty do
3:   if domain_reduce(X, D, S, X_i, X_j) then
6:     if |D_i| = 0 return false
7:     for each X_x ∈ X \ {X_i, X_j} do
8:       add (X_x, X_i) to queue
9: return true
function domain_reduce(X, D, S, X_i, X_j) returns true iff the domain D_i of X_i is reduced

Preprocessing

Find the strongly connected components (SCC) of the graph in time linear in the size of m and |G| + |P|, equivalent to m and n in real-world settings. The next step is to find inter- and intra-component pairwise feasibility. We note that we only do a pairwise feasibility test, as previous work has shown that doing three-way, four-way, or greater feasibility tests increases the computational time significantly without improving the scope of finding a group of variables whose combination guarantees an infeasible instance [67].

Inter-component pairwise feasibility: Select two variables X_i, X_j corresponding to constraints S_i, S_j on Level C of the DiReGraph, one each from different components of the SCC. Do a pairwise feasibility check for each pair and return infeasibility if any one pair of variables cannot yield a valid committee. The correctness and completeness of this step are easy to see. If there are more constraints than available candidates, it is impossible to find a feasible solution. Also, if a pair of constraints is pairwise infeasible, then it is clear that they will remain infeasible overall.

Intra-component pairwise feasibility: Repeat the above procedure, but now within a component. This step also helps in returning infeasibility efficiently.

Reducing domains: Based on empirical evidence from previous work that used a setting similar to ours, pairwise infeasibility causes a majority of overall infeasible instances [67]. Hence, if a committee does exist, the domain of each variable is reduced by removing candidates that explicitly do not help to find feasible committees.

Select unsatisfied variable: Use the "minimum-remaining-values" (MRV) heuristic to choose the variable having the fewest legal values. This heuristic picks the variable that is most likely to cause a failure soon, thereby pruning the search tree. For example, if some variable X_i has no legal values left, the MRV heuristic will select X_i and infeasibility will be returned, in turn avoiding additional searches.

Sort most favorite candidates: Use the "most-favorite-candidates" (MFC) heuristic to sort the candidates in domain D_i such that a candidate on Level B of the DiReGraph who is most connected to Level C (i.e., has the highest out-degree) is ranked the highest. This heuristic tries to reduce the branching factor on future choices by selecting the candidate that is involved in the largest number of constraints. Overall, the aim is to select the most favorite candidates into the committee, as they help satisfy the highest proportion of constraints. For completeness, and to get multiple DiRe committees, after the sorting step we use a "shift-left" approach where the second candidate becomes the first, the first becomes the last, and so on. This allows us to get multiple DiRe committees.

Example 4. Sorting candidates: In Figure 2, the ordering of candidates will be c_2, c_1, c_4, and c_3, as c_2 has an out-degree of 3, c_1 and c_4 have an out-degree of 2, and c_3 has an out-degree of 1. Ties are broken randomly. Below, we sketch these heuristics in code and then give an example to explain the entire algorithm.
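The MRV and MFC heuristics and the shift-left rotation can be stated in a few lines of Python. The sketch below uses our own hypothetical data structures (domains, bounds, and in-flows keyed by constraint node; edges keyed by candidate) and reproduces the ordering of Example 4:

def mrv_variable(domains, bounds, inflow):
    """Minimum-remaining-values: among unsatisfied constraints, pick the
    one with the fewest legal candidates per unit of unmet demand, i.e.
    the smallest |D_i| / (S_i - inflow_i)."""
    open_vars = [x for x in domains if inflow[x] < bounds[x]]
    return min(open_vars,
               key=lambda x: len(domains[x]) / (bounds[x] - inflow[x]))

def mfc_order(candidates, edges):
    """Most-favorite-candidates: sort by out-degree in the DiReGraph so
    candidates touching the most constraints are tried first."""
    return sorted(candidates, key=lambda c: len(edges[c]), reverse=True)

def shift_left(order):
    """Rotation used to enumerate further DiRe committees: the second
    candidate becomes first and the old first moves to the back."""
    return order[1:] + order[:1]

edges = {"c1": ["male", "CA"], "c2": ["male", "CA", "IL"],
         "c3": ["IL"], "c4": ["female", "IL"]}
order = mfc_order(["c1", "c2", "c3", "c4"], edges)
print(order)              # ['c2', 'c1', 'c4', 'c3'], as in Example 4
print(shift_left(order))  # ['c1', 'c4', 'c3', 'c2']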
The first step of the algorithm is to create a DiReGraph as shown in Figure 2. Level A is set to 2 as k = 2. Level B consists of four nodes, each representing one candidate. Level C consists of four nodes, which is equal to |G| + |P|, the number of candidate groups and voter populations in the election. Level D consists of the final output. Each node on Level B is connected with Level A and each node on Level C is connected to Level D. The candidate c 1 is a male who is in winning committees of California. Hence, c 1 in DiReGraph is connected with the two sets of constraints, one each for male and CA (California). Next, a subgraph SG consisting of eight nodes from Levels B and C and the corresponding edges that connect these eight nodes is created. As there is only one strongly connected components in the SG, we directly check for the intra-component pairwise feasibility. For each pair of domains, there is always a feasible committee that exists. Hence, the algorithm continues to execute. Moreover, none of the domains get reduced as the constraints are set to one. Before reaching the final step of the algorithm, the algorithm would have terminated if no feasible committee existed that satisfied all the pairwise constraints. In the final step, the select_unsatisfied_variable function selects a candidate at random as all the variables have the same ratio of 2 for |Di| /(Si − Xi.inFlow) as |D i |=2 and S i =1 for all D i ∈ D and S i ∈ S. Next, for each variable that remains, we check whether adding that variable violates the global constraint (committee size on Level A of DiReGraph) or not. We keep on backtracking till we either find a committee or exhaustively navigate through our pruned search space. To get more than one committee, rerun the heuristic_backtrack function by applying an additional "left-shift" operation on the result of the sort_candidates function each time the heuristic_backtrack function is implemented. We note that this increases the time complexity of the algorithm linearly in the size of G and P. Empirical Analysis We now empirically assess the efficiency of our heuristic-based algorithm using real and synthetic datasets. We also assess the effect of enforcing diversity and representation constraints on the feasibility and utility of the winning committee selected using different scoring rules. Real Datasets RealData 1: The Eurovision dataset [68] consists of 26 countries ranking the songs performed by each of the 10 finalist countries. We aim to select a 5-sized DiRe committee. Each candidate, a song performed by a country, has two attributes, the European region and the language of the song performed. Each voter has one attribute, the voter's European region. Specifically for the European region attribute, Australia and Israel were labeled as "Others" as they are not a part of Europe. RealData 2: The United Nations Resolutions dataset [69] consists of 193 UN member countries voting for 81 resolutions presented in the UN General Assembly in 2014. We aim to select a 12-sized DiRe committee. Each candidate has two attributes, the topic of the resolution and whether a resolution was a significant vote or not. Each voter has one attribute, the continent. Dividing Candidates into Groups and Voters into Populations: To assess the impact of enforcing constraints, we generate datasets with varying number of candidate and voter attributes by iteratively choose a combination of (µ, π) such that µ and π ∈ {0, 1, 2, 3, 4}. 
For each candidate attribute, we choose a number of non-empty partitions q ∈ [2, k], uniformly at random. Then, to partition C, we randomly sort the candidates in C and select q − 1 positions from [2, m], uniformly at random without replacement, with each position corresponding to the start of a new partition. The partition a candidate is in is the attribute group it belongs to. For each voter attribute, we repeat the above procedure, replacing C with V and choosing q − 1 positions from the set [2, n]. For each combination of (µ, π), we generate five datasets. We limit the number of candidate groups and the number of voter populations per attribute to k to simulate a real-world division of candidates and voters.
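The attribute-partition procedure can be implemented directly. A sketch under the stated sampling scheme (the function name is ours; it assumes m ≥ k ≥ 2 so the cut positions exist):

import random

def random_attribute_groups(candidates, k):
    """Synthetic-data partition: shuffle the candidates, pick q in [2, k]
    and q-1 cut positions from [2, m] without replacement; each cut
    starts a new attribute group, so all groups are non-empty."""
    m = len(candidates)
    order = random.sample(candidates, m)           # random sort of C
    q = random.randint(2, k)
    cuts = sorted(random.sample(range(2, m + 1), q - 1))
    starts = [1] + cuts                            # 1-indexed group starts
    ends = cuts + [m + 1]
    return [order[s - 1:e - 1] for s, e in zip(starts, ends)]

random.seed(0)
print(random_attribute_groups([f"c{i}" for i in range(1, 11)], k=5))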
Setup

System. We note that all our experiments were run on a personal machine, without requiring the use of any commercial or paid tools. More specifically, we used a controlled virtual environment using Docker(R) on a 2.2 GHz 6-core Intel(R) Core i7 MacBook Pro(R) with 16 GB of RAM running macOS Big Sur (v11.1). We used Python 3.7.

Constraints. For each G ∈ G, we choose l_G^D ∈ [1, min(k, |G|)] uniformly at random. For each P ∈ P, we choose l_P^R ∈ [1, k] uniformly at random.

Voting Rules. We use the previously defined k-Borda, β-CC, and Monroe rules. More specifically, we have two rules from the submodular, monotone class of committee selection rules, due to the inherent difference in their methods of computing committees, and one rule from the separable, monotone class of functions. We deem these to be sufficient due to our focus on the study of (µ, π)-DRCF, as discussed later.

Results

We now present the results of our empirical analyses of the efficiency of the heuristic algorithm, the feasibility of DiRe committees, and the cost of fairness.

Efficiency of Heuristic Algorithm

All experiments in this section combine instances of k-Borda and β-CC, as there was no pairwise significant difference in running time between the sets of instances of these two scoring rules (Student's t-test, p > 0.05). We present the results for the Monroe rule separately.

Algorithm is efficient: Our heuristic-based algorithm is efficient on the tested datasets (Figures 3, 4, and 5). Promisingly, using the DiReGraph made the algorithm more efficient on instances that were sparsely connected, as the average running time for all µ when π ≤ 2 was 281.47 sec (sd = 208.65) for k-Borda and β-CC, and 358.87 sec (sd = 265.82) for Monroe. Higher π led to denser DiReGraphs.

Performance when compared to ILP: The real-world application of an ILP-based algorithm is very limited when using the Chamberlin-Courant and Monroe rules [39]. More specifically, some instances of the ILP algorithm that implemented the Monroe rule for k = 9, m = 30, and n = 100 timed out after one hour. The running time increased exponentially with an increase in the number of voters, as all instances of the ILP algorithm that implemented the Monroe rule for k = 9, m = 30, and n = 200 did not terminate even after one day [39]. Hence, our algorithm, which (i) handles constraints and any committee selection rule and (ii) terminated in 724 sec on average, has a clear edge. Promisingly, the first committee returned by the algorithm (in < 120 sec) was the winning DiRe committee in 63% of all instances. Moreover, our algorithm scales linearly with an increase in the number of voters.

Efficiency and cohesiveness: Our algorithm was the most efficient when the voters were either less cohesive (φ ≤ 0.3) or more cohesive (φ ≥ 0.8) (Figure 5). Among these two efficient sets of instances, the time taken by the preprocessing stage to return infeasibility for low φ was lower (mean = 105.40 sec (sd = 4.16) for k-Borda and β-CC, and mean = 141.06 sec (sd = 8.08) for Monroe), and the time taken by the heuristic-based search stage to return a DiRe committee for higher φ was lower (mean = 156.80 sec (sd = 2.86) for k-Borda and β-CC, and mean = 203.98 sec (sd = 12.66) for Monroe). This shows the efficiency of our algorithm in opposing scenarios: the preprocessing step was efficient when φ was low, as it was easy to find a pair of constraints that are pairwise infeasible, and the heuristic-based backtracking was efficient when φ was high, as it was easy to find a DiRe committee.

Feasibility and Cost of Fairness

All experiments in this section consider instances of k-Borda and β-CC separately, as there was a difference in unsatisfiability between the two (Student's t-test, p < 0.05). We continue to analyze Monroe separately.

Higher numbers of attributes result in infeasible committees: Figure 7 shows the proportion of feasible instances for each combination of µ and π. As the number of attributes increases, the proportion of feasible instances decreases. However, Figure 8 shows that the mean proportion of constraints satisfied for each instance is ≥ 90% (sd ∈ [0, 5]). Hence, from the computational perspective, these results show the real-world utility of breaking the (µ, π, f)-DRCWD problem into two steps: (i) the (µ, π)-DRCF problem, solved using our algorithm, followed by (ii) the utility maximization problem. As we expect a constant number of committees to be feasible in the real world, we can overcome the intractability of using a submodular scoring function, notwithstanding the worst case when all committees are feasible. On the other hand, more promisingly: (i) when the sum of all the constraints was less than (µ · k), then, indeed, a feasible committee did exist in 85% of instances; (ii) more specifically, when the sum of the constraints was less than k for all groups under each candidate attribute individually, then, indeed, a feasible committee did exist in all but one instance.

Infeasibility and unsatisfiability depend on cohesiveness: There was a negative correlation between the maximum proportion of unsatisfied constraints and φ for all three scoring rules (mean Pearson's ρ = −0.95, p < 0.05). It was easier to satisfy the constraints when the cohesiveness (φ) was high, which led to lower infeasibility for higher φ (Figure 6). Note that the correlation is stated keeping the candidate groups and voter populations constant. Only the preferences vary, and hence, so do the winning committees W_P for each population P ∈ P. This is to say that higher cohesiveness of voters leads to higher cohesiveness among all the W_P s, which in turn makes it easier to satisfy the constraints and, in turn, yields a higher proportion of feasible committees.

β-CC and Monroe satisfy a higher proportion of representation constraints: The β-CC and Monroe rules are better at satisfying representation constraints as compared to k-Borda (Figure 7), as they are designed to maximize voter representation and, in turn, population satisfaction. However, we note that even when we use a committee selection rule that guarantees proportional representation, our analysis found that it was indeed the smaller populations whose representation constraints were violated disproportionately more than those of the larger populations.
Hence, the price of diversity was paid more by the smaller populations as compared to the larger populations, which quantitatively reaffirms the need for DiRe committees.

It is easier to satisfy constraints when the k / (|G| + |P|) ratio is higher: The proportion of constraints that are satisfied per instance changed from 100% to 49% (mean = 82%, sd = 12%) as the ratio changed from 1.00 to 0.25. This analysis essentially captures the ratio of committee size to number of constraints. Overall, it is easier to have a feasible instance with a larger committee size (k) or a smaller number of constraints (|G| + |P|).

Higher group size to lower-bound constraint ratios are easier to satisfy: An important step of our heuristic algorithm was the use of the "minimum-remaining-values" heuristic, which helped in the selection of the unsatisfied variable. We quantified the need for this heuristic by systematically varying the ratio of group (and population) size to lower-bound constraint (equivalent to |D_i| / S_i), and found that the utility ratio is the highest when the average of the said ratio across all groups and populations is the highest. Also, the feasibility of an instance increases with an increase in this ratio. Hence, our heuristic, which prioritizes the lower ratio, is efficient, as it makes sense to first satisfy the groups or populations that are the hardest to satisfy.

Real Datasets

For each dataset, we implemented our model using three sets of constraints: constraint 1 only, constraint 2 only, and constraints 1 & 2. For Eurovision, these were at least one from each "region", at least one from each "language", and both combined. The ratios of the utilities of the constrained to the unconstrained committees were 0.97, 0.88, and 0.82, respectively. For the UN resolutions, the constraints were at least two from each "topic", at least six from "significant vote", and both combined. The ratio of utilities was 0.99 for each of the individual constraints. No feasible committee was found when the constraints were combined. Importantly, our algorithm always terminated in under 102 sec across all instances.

Conclusion and Future Work

Conclusion: There is an understanding in the social sciences that organizations that answer the call for diversity to avoid legal troubles or to avoid being labeled as "racist" may actually create animosity towards racial minorities due to the imposing nature of such efforts [72,73,74]. Similarly, when voters feel that diversity is mandatory, and if it comes at the cost of their representation, it can do more harm than good. Hence, it is important to consider all actors of an election, namely candidates and voters, when designing fair algorithms. Doing so in this paper, we first motivated the need for diversity and representation constraints in multiwinner elections and developed a model, (µ, π, f)-DRCWD. (µ, π, f)-DRCWD, which gives DiRe committees, is also needed because the call for diversity is becoming ubiquitous. However, in the context of elections, diversity alone can do more harm than good, as the price of diversity may disproportionately be paid by historically disadvantaged populations. Finally, we show the importance of delineating the candidate and voter attributes, as we observed that diversity does not imply representation and vice versa, which contrasts with the common understanding and hence requires further investigation. This is to say that having a female candidate on the committee is different from having a candidate on the committee who is preferred by the female voters, and who themselves may or may not be female.
These two are separate but equally important aims that need to be achieved simultaneously. We note that (µ, π, f)-DRCWD can satisfy many properties of multiwinner voting rules (e.g., monotonicity) [4] and it can be used as a common framework to solve other problems. As our model was computationally hard (Tables 1 and 2), we developed a heuristic-based algorithm, which was efficient on tested datasets. Finally, we did an empirical analyses of feasibility, utility traded-off, and efficiency. Future Work: It remains open to determine how the diversity and representation constraints are set to have a "fair" outcome. The way these constraints are set can lead to unfairness and hence, newer approaches are needed to ensure fairer outcomes. For instance, existing methods that guarantee representation fail when voters are divided into predefined population over one or more attributes. The apportionment method is one way to set the representation constraints, however, it does not account for the cohesiveness of the preferences within a population. Furthermore, this work can also give mathematical guarantees about the existence of DiRe committees. Additionally, just like correlation does not imply causation, diversity does not imply representation and vice versa. Hence, a mathematical framework is needed that can answer the following question: when does diversity imply representation, or when does representation imply diversity? The implications of such a formal framework range from hiring to clinical trials. Another open question is determining what candidates are used to satisfy the constraints. Assuming that the given constraints are acceptable by everyone, the candidates chosen to satisfy the constraints can lead to unfair outcomes, especially for historically disadvantaged groups. For example, consider a k-sized committee election (k = 4). The accepted constraints are that the committee should have two male and two female candidates. Next, consider two cases: (Case 1) top two scoring male candidates are selected in the committee and the bottom two scoring female candidates are selected and (Case 2) the top and the bottom scoring male candidates are selected in the committee and the top and bottom scoring female candidates are selected. While both the cases satisfy the given constraints, Case 1 is unfair for female candidates as their top-scoring candidate does not get a seat on the committee while male candidates do get their top two scoring candidates in the committee. This inequality is distributed in Case 2 and it seems naturally "fairer". Next open question pertains to the classification of the complexity of (µ, π, f)-DRCWD w.r.t. the committee selection rule f. In this paper, f could take only two values: it can either be a monotone, submodular but not separable function or a monotone, separable function. We established that determining the winning committee using the former is NP-hard, which was done via establishing hardness of using Chamberlin-Courant rule that uses a positional scoring rule whose first two values of the scoring vector are same (Theorem 6). However, the classification of complexity of determining the winning committee using Chamberlin-Courant rule, Monroe rule, and other submodular but not separable scoring functions w.r.t. different families of positional scoring rules remains open. Continuing on the mathematical front, another future direction pertains to the relaxations made to group the candidates and the voters for Corollaries 2 and 3. 
More specifically, we showed that the hardness persists even when a candidate attribute groups all candidates into one group and a voter attribute groups all voters into one population. These are unrealistic instances, as real-world stipulations will require that each candidate attribute partitions the candidates into two or more groups and each voter attribute partitions the voters into two or more populations. Mathematically, Corollaries 2 and 3 will not hold under this stipulation, and new proofs are required such that they conform to the stated stipulations.

Finally, restrictions on voter preferences are another direction for future research. The reduction used to prove Theorem 5 shows that finding a committee using our model is NP-hard even when each population has one voter. This can be generalized to say that the hardness persists even when each population is completely cohesive within itself, which is a very natural assumption to make. For example, all male voters may have the same preferences, all female voters may have the same preferences, and one population's preferences may differ from the other's. However, given a set of constraints, one can explore how the cohesiveness of voter preferences across populations affects the complexity of our model. In addition, it remains open whether the structure of voter preferences across populations affects the complexity. Finally, the winning committees of populations can also be cohesive or structured, independent of the structure and cohesiveness of the voter preferences. Hence, even when voters' preferences are not cohesive or structured, cohesiveness or structure among the populations' winning committees may make our model tractable. An immediate consequence of a positive result on this front may be the narrowing of the gap between the intractability of finding a proportionally representative committee (e.g., using the Chamberlin-Courant rule) and its tractable instances due to structured preferences. If finding a winning committee is tractable under the assumption of cohesiveness and/or structure of the winning committees of the voter populations, then there is hope that finding a proportionally representative committee may also be tractable even under a weaker assumption on the structure of the voter preferences. On the other hand, if we know that the preferences of the voters are cohesive by a factor of φ : 0 ≤ φ ≤ 1, then we conjecture that there exists a polynomial-time approximation algorithm for the (µ, π)-DRCF problem (µ = 0 and π ≥ 1) with approximation ratio at most k − (1 − o(1)) · (k(k − 1) ln ln g(φ)) / (ln g(φ)), where g(φ) is a function that maps the cohesiveness of the preferences φ to the maximum number of winning committees W_P of each population that a candidate can belong to. This approximation ratio improves on the general inapproximability ratio of k (Theorem 9) for (µ, π)-DRCF when φ is not known.
Management of acute metabolic acidosis in the ICU: sodium bicarbonate and renal replacement therapy This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2021. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2021. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from https://link.springer.com/bookseries/8901. Introduction Metabolic acidosis is a process caused by an increase in weak acids or a decrease in strong ion difference (SID) [1]. Serum proteins, albumin, and inorganic phosphate are considered as weak acids. Strong ions, such as Na + , K + , Ca 2+ , Mg 2+ , and Cl − , exist at a fully ionized status in body fluids. SID is the presence of an excess of strong cations over strong anions, and the normal value in plasma is 42 mEq/l. The method to quantify metabolic acidosis using SID and weak acids was introduced by Stewart in the 1980s and still creates debate in its clinical application [2]. Plasma base excess is widely used to identify a metabolic component of acidosis in clinical practice. The base excess approach was shown to be equivalent to Stewart's SID approach in quantifying acid-base status in critically ill patients [3]. Metabolic acidosis is classified into acute and chronic. Although it is not clearly defined, acute metabolic acidosis occurs within a few days. Chronic acidosis is a condition that lasts for weeks or even years [4]. In this chapter, we focus on acute metabolic acidosis in intensive care unit (ICU) patients and provide an update from recently published clinical studies. Epidemiology of metabolic acidosis in the ICU Acute metabolic acidosis is well-recognized in the ICU. However, epidemiological data are scarce, which has limited our understanding of the approach to metabolic acidosis until recently. A retrospective observational study using a large binational ICU database in Australia and New Zealand examined the incidence, characteristics, and outcomes of patients with various definitions of metabolic acidosis [5]. Severe metabolic acidosis was defined as a pH ≤ 7.20, PaCO2 ≤ 45 mmHg, HCO3 − ≤ 20 mmol/l, and total sequential organ failure assessment (SOFA) score ≥ 4 or lactate ≥ 2 mmol/l [6] and occurred in 1.5% of the patients in the ICU. The ICU and hospital mortality rates of these patients were 43.5% and 48.3%, respectively. Moderate or severe metabolic acidosis was defined as pH < 7.30, base excess < −4 mmol/l and PaCO2 ≤ 45 mmHg, and occurred in 8.4% of ICU patients. The ICU and hospital mortality rates were 17.3% and 21.5%, respectively [5]. The mortality of patients with moderate or severe metabolic acidosis was higher than that of patients with sepsis observed in the same database [7], suggesting the clinical relevance of improving care for patients with metabolic acidosis. A French multicenter prospective study described the incidence of severe acidemia in five ICUs [8]. Severe acidemia was defined as pH < 7.2, including respiratory acidosis, metabolic acidosis, and mixed acidosis. This severe acidosis occurred in 8% (200/2550) of the patients within 24 h of ICU admission. After excluding patients with diabetic ketoacidosis (DKA), which is adjudicated to be an entity with a low risk of death, and patients with respiratory acidosis, ICU mortality of patients with metabolic or mixed severe acidosis was as high as 57% (89/155) [8]. 
A recently published international observational study conducted in 18 ICUs in Australia, Japan, and Taiwan reported that 14% (1292/9437) of critically ill patients had moderate or severe metabolic acidosis [9]. The median incidence of metabolic acidosis at a study ICU was 172.5 patients/year, suggesting that the management of metabolic acidosis is a relevant issue in patient care in the ICU. Common types of metabolic acidosis in the ICU The causes of acute metabolic acidosis are diverse in critically ill patients. DKA, lactic acidosis, and hyperchloremic acidosis are responsible for most cases of severe metabolic acidosis due to a decreased SID [10]. DKA is a medical emergency in patients with diabetes mellitus as a result of insulin deficiency. The hepatic metabolism of fatty acids produces beta-hydroxybutyrate and acetoacetate, strong anions in the human body. As hyperglycemia induces osmotic diuresis, patients with DKA have a markedly reduced extracellular fluid volume. The available evidence was summarized in a review that revealed a paucity of data on clinical impact [11]. To date, a comprehensive study investigating the epidemiology and clinical outcomes of DKA is still lacking. Lactate is a strong anion in the human body, as more than 99% of lactate is ionized. Lactic acidosis is observed in cardiogenic or hypovolemic shock, severe heart failure, severe trauma, and sepsis [8], with high mortality rates, ranging from 30 to 88% depending on the definition used [12,13]. The mortality of patients with lactic acidosis was reportedly the highest (56%) amongst patients with metabolic acidosis defined by a standard base excess (SBE) < −2 mEq/l [14]. Recently, hyperchloremic acidosis caused by intravenous fluid products has become widely known and is reported in 19% to 45% of patients in the ICU [14,15]. Table 1 shows the electrolytes and SIDs of intravenous fluid products commonly used in the ICU. Theoretically, acidosis occurs when intravenous fluid products with a SID lower than that of the patient's plasma are administered. Balanced crystalloids, i.e., Ringer's acetate, Ringer's lactate, and Plasmalyte, contain acetate, lactate, or gluconate to replace chloride. Those strong anions do not lower the SID because they are metabolized by the liver faster than chloride is excreted renally. Why metabolic acidosis matters Metabolic acidosis can have various adverse effects, but the most critical consequence is its effect on the cardiovascular system. Recognition of this effect dates back to the 1960s, when a study reported reduced cardiac contractility at pH < 7.1 when lactic acid was administered to dogs [16]. Animal experiments were also performed on dogs given lactic acid and hydrochloric acid to produce lactic acidosis and hyperchloremic acidosis. The dogs were given epinephrine, norepinephrine, and dobutamine to counteract the shock state. The cardiac index decreased when epinephrine or norepinephrine was administered; however, dobutamine administration increased the cardiac index. This result suggested that acidosis decreased reactivity to the catecholamines norepinephrine and epinephrine [17]. Pedoto et al. reported that when hydrochloric acid was administered to rats to mimic hyperchloremic acidosis, nitric oxide (NO) production increased, provoking vasodilation and resulting in reduced systemic blood pressure [18]. Fatal arrhythmias induced by acidosis have also been reported in an experimental model [19].
However, clinical studies in humans have not yet demonstrated a causal relationship between metabolic acidosis and cardiovascular dysfunction [20][21][22]. How we manage metabolic acidosis in the ICU Metabolic acidosis in critically ill patients is not a single disease but a syndrome driven by various underlying conditions. As such, the basic principle is to treat the underlying cause of metabolic acidosis. Sodium bicarbonate may be administered if there is concern about the suppression of cardiac function that metabolic acidosis may cause. The rationale for using sodium bicarbonate for metabolic acidosis is that the intravenous administration of a high-SID solution would increase the pH, resulting in improved cardiac function. The evidence on the biochemical effects of intravenous sodium bicarbonate in acute metabolic acidosis has been systematically reviewed [23]. The summary of 12 relevant studies showed that pH, serum bicarbonate, base excess, serum sodium, and PaCO2 increased during and after the intravenous administration of sodium bicarbonate [23]. By contrast, serum anion gap and potassium decreased. Some concern was raised about intracellular acidosis due to the back-diffusion of CO2 and about decreased ionized calcium that might impair cardiac contraction. However, there was no consistent evidence from the literature review that sodium bicarbonate administration was associated with decreased ionized calcium or decreased cardiac output [20,24]. The effects of sodium bicarbonate on clinically relevant outcomes should be investigated in RCTs. The systematic review [23] identified only two RCTs [5,25]. Hoste et al. compared the effect of sodium bicarbonate and tris(hydroxymethyl)aminomethane (THAM) in 18 patients with mild metabolic acidosis [25]. The trial, published in 2005, did not report clinically important outcomes, perhaps because it was conducted as a pilot trial. The effects of THAM have not been explored since this trial, and it is rarely used in current clinical practice. An important RCT investigating the effects of sodium bicarbonate for severe metabolic acidosis was published in 2018 [5]. The BICAR-ICU trial was conducted in 26 French ICUs and enrolled 389 patients with severe acidemia (pH ≤ 7.20, PaCO2 ≤ 45 mmHg, HCO3− ≤ 20 mmol/l, and total SOFA score ≥ 4 or lactate ≥ 2 mmol/l). The trial excluded patients with DKA or chronic kidney disease (CKD). Patients were allocated to an intervention group receiving 4.2% sodium bicarbonate to maintain pH > 7.3 throughout the ICU stay or to a control group with usual care. There was no difference between the groups in the primary outcome, which was a composite of death by day 28 and at least one organ failure at day 7. However, treatment with sodium bicarbonate was associated with a reduced need for renal replacement therapy (RRT) in the ICU. Furthermore, in the pre-specified subgroup of patients with acute kidney injury (AKI) (AKIN score 2 or 3), sodium bicarbonate was associated with improved survival and a reduced need for RRT [5]. In a retrospective observational study using the Medical Information Mart for Intensive Care (MIMIC)-III database, sodium bicarbonate administration was not associated with improved survival in patients with metabolic acidosis (pH < 7.3, HCO3− < 20 mmol/l, and PaCO2 < 50 mmHg) but was associated with improved survival in septic patients with stage 2 or 3 AKI and severe acidemia (pH < 7.2) [26].
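For clarity, the BICAR-ICU enrollment definition quoted above can be encoded directly. The following Python sketch is illustrative only: the function and variable names are my own choices, not taken from the trial's materials, and real eligibility involved further criteria (e.g., the DKA and CKD exclusions) not modeled here.

```python
# Minimal sketch of the BICAR-ICU severe-acidemia definition as stated
# above (pH <= 7.20, PaCO2 <= 45 mmHg, HCO3- <= 20 mmol/l, and total
# SOFA >= 4 or lactate >= 2 mmol/l). Names are illustrative.

def meets_bicar_icu_criteria(ph: float, paco2: float, hco3: float,
                             sofa: int, lactate: float) -> bool:
    """Return True if the blood-gas and severity values satisfy the
    trial's definition of severe metabolic acidemia."""
    acidemia = ph <= 7.20 and paco2 <= 45 and hco3 <= 20
    severity = sofa >= 4 or lactate >= 2
    return acidemia and severity

# Example: pH 7.15, PaCO2 40 mmHg, HCO3- 14 mmol/l, SOFA 6, lactate 3.5
print(meets_bicar_icu_criteria(7.15, 40, 14, 6, 3.5))  # True
```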
A recent international observational study revealed that 18% of patients with moderate or severe metabolic acidosis receive sodium bicarbonate in current clinical practice [9]. However, the total amount of sodium bicarbonate given during the first 24 h of metabolic acidosis was 110 mmol, which was not adjusted for body weight or base excess. The study also reported that sodium bicarbonate administration was possibly associated with lower ICU mortality in acidotic patients with vasopressor dependency, albeit without statistical significance. Given that the rationale for using sodium bicarbonate is to support cardiovascular function, this finding provides a sound basis for further investigation of the effect of sodium bicarbonate in patients with metabolic acidosis who are on vasopressors. Sodium bicarbonate for subtypes of metabolic acidosis Administration of sodium bicarbonate has been considered for DKA not only because it reverses the acidotic state but also because acidosis possibly contributes to insulin resistance [27]. However, a retrospective single-center study from the USA reported that sodium bicarbonate administration in the emergency department was not associated with time to resolution of acidosis in patients with DKA with a pH < 7.0 [28]. There was also no difference in hospital length of stay [28]. A systematic review in 2011 found that sodium bicarbonate did not shorten the duration of acidosis or ketosis, or improve glycemic control [11]. Furthermore, there was a high incidence of hypokalemia requiring correction in patients who received sodium bicarbonate [11]. These findings imply that the beneficial effects of sodium bicarbonate administration for DKA might be limited. However, the systematic review by Chua et al. revealed a lack of rigorous randomized clinical trials assessing patient-centered outcomes in these patients [11]. Sodium bicarbonate for lactic acidosis has been compared with saline in two small-scale randomized, crossover, single-center trials [20,21]. Cooper et al. reported that sodium bicarbonate administration increased pH and PCO2 with no change in blood pressure or cardiac output [20]. Similarly, Mathieu et al. found an increase in pH but no change in hemodynamic parameters, including cardiac index [21]. For cardiac arrest, several observational studies have reported an increase in the rate of return of spontaneous circulation in patients receiving sodium bicarbonate [29][30][31][32]. However, one study found that this treatment was associated with worse survival and neurological outcomes at hospital discharge [33]. Pilot RCTs showed no improvement in patient mortality [34], the rate of return of spontaneous circulation, or neurologically favorable status in treated patients [35]. At present, routine use of sodium bicarbonate is not recommended for cardiopulmonary resuscitation [36]. Renal replacement therapy for metabolic acidosis There has been no clear consensus on the clinical indications for RRT; however, severe acidosis is a commonly accepted indication. In RCTs on the timing of RRT that have been published over the past 5 years, i.e., the AKIKI trial, the IDEAL-ICU trial, and the STARRT-AKI trial, metabolic acidosis with severe acidemia was used as one of the absolute indications [37][38][39]. The AKIKI trial was a multicenter RCT in France, enrolling patients with stage 3 AKI, in which 67% of the patients had septic shock [37].
The trial compared early initiation of RRT in stage 3 AKI with delayed initiation based on absolute indications. The absolute indications for RRT included severe acidemia with pH < 7.15, either metabolic or mixed acidosis. Of note, 21% of the trial participants in the control group received RRT for metabolic acidosis [37]. The IDEAL-ICU trial was another multicenter RCT conducted in France, enrolling patients with septic shock and stage 3 AKI [38]. The absolute indications for RRT included metabolic acidosis with pH < 7.15 and base deficit > 5 mEq/l or HCO3− < 18 mEq/l. Among the patients who received RRT for an absolute indication, 13.4% met the metabolic acidosis criteria [38]. The STARRT-AKI trial was the largest international RCT, including 3019 patients from 15 countries [39]. The main aim of the trial was to assess whether an accelerated strategy of starting RRT at stage 2 or 3 AKI would improve patient-centered outcomes compared with delayed initiation based on absolute indications. The absolute indications for RRT included severe acidemia and metabolic acidosis, defined as pH ≤ 7.2 or HCO3− < 12 mmol/l. Of the patients treated with RRT, 16.6% met the criteria for severe metabolic acidosis [39]. Taken together, the STARRT-AKI and BICAR-ICU trials suggest that immediate initiation of RRT should be avoided in patients with stage 2 or 3 AKI, and that such patients may benefit from sodium bicarbonate if severe metabolic acidosis persists despite appropriate treatment of the underlying conditions. Agenda for future research Recent clinical research, including large RCTs, has provided new evidence and advanced our understanding of the management of metabolic acidosis. However, high-quality data from rigorous clinical research to guide standard practice are still lacking. Research priorities include the following: • The benefits and harms of sodium bicarbonate on cardiovascular function • Sodium bicarbonate not only for severe metabolic acidosis but also for moderate metabolic acidosis • Sodium bicarbonate for severe metabolic acidosis with stage 2 or 3 AKI (BICAR-ICU-2, Clinicaltrials.gov identifier NCT04010630, in progress). Conclusion We have reviewed the recent clinical data on the epidemiology and management of metabolic acidosis. Metabolic acidosis is common in the ICU, and even moderate metabolic acidosis carries higher mortality than severe sepsis. Sodium bicarbonate or RRT is used occasionally to normalize the acid-base imbalance caused by metabolic acidosis in the ICU; however, high-quality evidence is still limited. Patients with severe metabolic acidosis and stage 2 or 3 AKI might be a possible target population for sodium bicarbonate administration. Further clinical trials are required to provide more robust information in a clinically relevant patient population.
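The acidosis-related absolute indications for RRT quoted above from the three trials can be summarized side by side in code. This is a hedged comparison sketch only: the function names are my own, and the trials' full eligibility rules (other absolute indications, timing windows, exclusions) are not modeled here.

```python
# Acidosis-related absolute indications for RRT as quoted above from the
# AKIKI, IDEAL-ICU, and STARRT-AKI trials. Illustrative only; names are
# invented and the trials' complete criteria are not reproduced.

def akiki_indication(ph: float) -> bool:
    return ph < 7.15  # severe acidemia, metabolic or mixed

def ideal_icu_indication(ph: float, base_deficit: float, hco3: float) -> bool:
    return ph < 7.15 and (base_deficit > 5 or hco3 < 18)

def starrt_aki_indication(ph: float, hco3: float) -> bool:
    return ph <= 7.2 or hco3 < 12

# Example: pH 7.10, base deficit 8 mEq/l, HCO3- 15 mmol/l meets all three.
print(akiki_indication(7.10),
      ideal_icu_indication(7.10, 8, 15),
      starrt_aki_indication(7.10, 15))  # True True True
```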
2021-08-31T13:41:29.505Z
2021-08-31T00:00:00.000
{ "year": 2021, "sha1": "16d916f4f33c94240124d0321a93cbe592a1d734", "oa_license": "CCBY", "oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/s13054-021-03677-4", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "16d916f4f33c94240124d0321a93cbe592a1d734", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
182264383
pes2o/s2orc
v3-fos-license
“The Hard Problem of Consciousness”. Theoretical solution of its main questions The problem of explaining the connection between the phenomena of subjective reality and brain processes is usually called the “Hard problem of consciousness”. The solution of its main theoretical issues is of great importance for the development of modern neuroscience, especially for such a direction as neurocryptology (“brain-reading”). From the standpoint of the information approach, it is proposed to address these issues, namely: 1) the nature of the connection between the phenomena of subjective reality (mental states) and brain processes; 2) their causal ability to control bodily functions; 3) the compatibility of freedom of will with the determinism of brain processes; and 4) the frequently asked question of why information about the acting agent is not just represented but experienced in the form of subjective reality. The connection between the phenomenon of subjective reality and the corresponding brain process is the relationship between information and its carrier; specific features of this relationship are analyzed. Mental causation is a kind of information causality that differs from ordinary physical causation, because information is invariant with respect to the physical properties of its carrier and can be coded in different ways. The author identifies and describes the analytical parameters that must be taken into account when constructing a model of the code neurodynamic structure of the phenomena of subjective reality. A hypothesis is advanced on the emergence of the very quality of subjective reality in the course of biological evolution. On this basis, the methodological aspects of the problem of deciphering the brain neurodynamic codes of the phenomena of subjective reality and the prospects of neuroscientific studies of consciousness are discussed. Introduction. Subjective reality as an object of neuroscientific investigation In modern analytical philosophy the problem of consciousness is called the “Hard problem” [1,2] because consciousness has a specific and inalienable quality of subjective reality (let us abbreviate it SR). It is this quality that is the main stumbling block for its scientific explanation. SR is the reality of the conscious states of the individual, which directly certifies for him that he exists. The quality of SR is designated in the philosophical literature by different but similar-in-meaning terms: "mental", "introspective", "phenomenal", "subjective experience", "qualia", etc. In recent decades, the term "SR" has become quite widely used to describe the specifics of consciousness, including by representatives of analytical philosophy [3]. The concept of SR encompasses both individual phenomena and their types (sensations, perceptions, feelings, thoughts, intentions, desires, volitional efforts, etc.), as well as an integral personal formation united by our self, taken in its relative identity to itself, and thus in its reflexive and non-reflexive unity, its actual and dispositional dimensions. This holistic formation is a dynamic continuum, temporarily interrupted by deep sleep or loss of consciousness. SR always represents a certain "content", which is given to an individual in the form of a "current present", i.e. now, although this "content" can relate to the past and the future.
A specific feature of SR phenomena, as already noted, is that physical properties (mass, energy, spatial characteristics) cannot be attributed to them: they differ from the subjects of study of classical natural science and evince a special ontological status, the definition of which has always presented difficult questions to philosophers of materialistic orientation and to naturalists, especially those who studied the connection of psychic phenomena with the activity of the brain. These complex issues at the ontological level are matched by no less difficult epistemological questions. The point is that the description of SR phenomena is made in terms of intentionality, purpose, meaning, value, will, etc., while the description of physical phenomena and brain processes is made in terms of mass, energy, spatial characteristics, etc., and between these conceptual complexes there are no direct logical connections. Some intermediate conceptual link is required to combine these different types of descriptions in a single conceptual system capable of providing a theoretically grounded explanation of the connection between SR phenomena and brain processes. How to find it and thereby overcome the "explanatory gap"? This is how representatives of analytical philosophy term the situation in regard to the Mind-Brain Problem. At the same time, SR represents the "internal", individual-subjective experience inherent only in a concrete individual (expressed in first-person reports). How do we pass from this individual-subjective experience to intersubjective, generally valid statements (from a third person) and to the substantiation of true knowledge? In general philosophical terms, these questions have been repeatedly posed and variously resolved from one or another classical position. However, in light of the pressing problems of modern science, they remain open. This is especially acute in those branches of neuroscience that are aimed at researching mental activity and the phenomena of consciousness and that do not accept reductionist solutions (i.e. concepts that seek to reduce SR phenomena to physical processes, speech, or behavioral acts). In this respect, the questions of phenomenological analysis and systematization of SR phenomena, the discretization of the SR continuum, and the formation of such invariants of SR phenomena as could serve as sufficiently definite objects for correlating them with brain processes become of fundamental importance (I will return to this later). To solve this problem, first of all, a theoretically substantiated answer to two main questions is required: 1. How to explain the connection between the phenomena of SR and brain processes, if physical properties (mass, energy, spatial characteristics) cannot be attributed to the first, while the latter necessarily possess them? 2. If physical properties cannot be attributed to SR phenomena, how can one explain their ability to causally affect bodily processes? In addition to these basic issues, there are a number of others, which usually form stumbling blocks for natural scientists and urgently require solutions. However, it must be said at once that the answers to them are determined by the solutions available for the first two; moreover, it could be argued that they depend more on the solution of the first fundamental issue. These other significant issues are as follows: 3. How to explain the phenomena of voluntary actions and free will, and how to combine them with the determinism of brain processes? 4.
How to explain the emergence of the very quality of SR in the process of evolution, which, at first glance, seems to be unnecessary for the effective functioning of the organism (which has always served as an excuse for epiphenomenalist interpretations of SR and reductionist constructions, the use of "zombie" models, etc.)? 5. Why is information about the acting agent not just represented but experienced in the form of SR? This question is closely connected with the previous one (it is usually sharply posed by representatives of analytical philosophy). These and a number of other particular questions will be singled out and theoretically interpreted below. The proposed theory It relies on modern knowledge of biological evolution and the processes of self-organization (biological and social, including its technical components) and uses an information approach to address the issues raised [4][5][6][7]. It should be noted at once that, despite the difference in the philosophical interpretations of the notion of information and the absence of a unified information theory, this concept has generally accepted meanings. I use the concept of information in the general sense in which it is used in practically all sciences, namely as the "content of a message" or "content of a signal" (N. Wiener's definitions). Therefore, there is no need to go into its various philosophical interpretations, or to evaluate and choose between the two basic concepts of information (attributive and functional). Although I prefer the functional rather than the attributive concept, the information approach to the Mind-Brain Problem developed below is compatible with both. The theory I propose is relatively clear and simple, and therefore it is convenient for criticism. Three initial premises are accepted in it. The first two are principles with no empirical refutations; the third is an intuitively acceptable convention, which is convincingly confirmed by ordinary and scientific experience. I state these initial assumptions. I. Information must be embodied in its physical carrier (it does not exist outside and apart from it). II. Information is invariant with respect to the physical properties of its carrier, i.e. the same information (for a given self-organizing system: a given organism, a person, or a community) can be embodied and conveyed by carriers of different physical properties, i.e. encoded in different ways. For example, the information that rain is expected tomorrow may be transmitted in different languages, orally, in writing, using Morse code, etc.; in all these cases its carrier can differ in mass, energy, and space-time characteristics. Let us abbreviate this principle as IP. III. An SR phenomenon (for example, my sensory image in the form of a visual perception of some object A, experienced at a given interval) can be considered as information (about this object). Note that information allows not only a syntactic description but also a semantic (content-related) and pragmatic (target, value, "effective", program-managing) description, which meets the requirements for describing the phenomena of SR. If these three initial assumptions are accepted, then the desired explanatory consequences can be logically deduced from them.
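Before turning to the first question, premise II (IP) can be made concrete with a toy sketch: the same message embodied in three carriers of different physical form, each decodable back to the same content. Everything here is illustrative; the Morse table is truncated to the letters actually needed.

```python
# Toy illustration of premise II (IP): one piece of information, three
# physically different code embodiments, all recoverable to the same content.

MORSE = {'R': '.-.', 'A': '.-', 'I': '..', 'N': '-.'}
MORSE_REV = {v: k for k, v in MORSE.items()}

def to_morse(text: str) -> str:
    return ' '.join(MORSE[c] for c in text.upper())

def from_morse(code: str) -> str:
    return ''.join(MORSE_REV[s] for s in code.split(' '))

message = "rain"
carriers = {
    "utf8_bytes": message.encode("utf-8"),       # one physical embodiment
    "utf16_bytes": message.encode("utf-16-le"),  # a different embodiment
    "morse": to_morse(message),                  # yet another
}
decoded = {
    "utf8_bytes": carriers["utf8_bytes"].decode("utf-8"),
    "utf16_bytes": carriers["utf16_bytes"].decode("utf-16-le"),
    "morse": from_morse(carriers["morse"]).lower(),
}
assert all(d == message for d in decoded.values())  # same information, three codes
```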
The following is the answer to the first question indicated above. How to explain the connection between the phenomena of SR and brain processes? 1. Since this phenomenon of SR is information about A (we denote this information by A), it has its own definite carrier (we denote it by X), which, according to the data of neuroscience, is a certain cerebral neurodynamic system. Thus, the phenomenon of subjective reality is necessarily related to the appropriate brain process as information to its carrier. Although the neurodynamic system X necessarily consists of physical components, its functional specificity cannot be explained on the basis of physical properties and regularities (since, as is well known, the description of functional relations is logically independent of the description of physical properties and relations). This is shown by an analysis of the nature of the necessary connection between A and X. 2. The connection between A and X is not causal; it is a special kind of functional connection: A and X are simultaneous and one-cause phenomena; they are in a relation of mutual single-valued correspondence; X is the code embodiment of A or, briefly, the code of A. This kind of connection can be called a code dependence. It is formed in the phylogenesis and ontogenesis of the self-organizing system (it has the character of a historical neoformation and in this sense is random, i.e. the information acquired in this self-organizing system has such a code embodiment, but in principle it could have another; having arisen in this form, however, it becomes a functional element of the process of self-organization in this system). This relation is valid, i.e. retains its functional role, either in a one-time action or over some interval (for example, a conditioned-reflex connection), often throughout the life of the individual and even through the entire history of the species, and, in the case of the fundamental DNA code, for the entire period of the existence of living systems on Earth. But even the genetic code is not an exception: its emergence was not necessary and also had a probabilistic, random character. This is even more characteristic of the origin of the code structure of language (as evidenced by the many different languages). However, the random nature of the formation of a code dependence does not cancel the principle of the necessary connection of information and its carrier, but only indicates that the specific carrier can differ in its physical properties (in accordance with the IP). Of course, in the course of evolution, forms of codes more economical in their mass, energy, and space-time characteristics were selected. In a complex self-organizing system (that is, one consisting of self-organizing elements and subsystems), there is a multi-step hierarchy of code dependencies that reflects its history (both phylogenetically and ontogenetically). This hierarchy of code dependencies constitutes the main levels and nodes of the organization of the given system and, consequently, the main contours of the management structure. The experience of this type of system testifies to very complex relations of centralization and autonomy in their integral functioning. These relations are still poorly understood. However, there is no doubt that this is a kind of fusion of hierarchical centralization of code dependencies with a high degree of autonomy for certain levels of organization, which includes not only cooperative relations but also competitive ones. Self-organization is a multidimensional dynamic structure of code dependencies (or, respectively, of information processes).
Hence the special urgency of studying the nature of code dependence as an element of self-organization. The connection between A and X, like any code dependence, qualitatively differs from a purely physical relationship; it expresses the specifics of information processes. Among them, some informational processes in the brain related to the quality of SR are represented in the form of code formations of type X. A thorough study of the connection between A and X, and of the structural and functional organization of systems of type X, means the deciphering of the brain code of this phenomenon of SR. 3. But what does the decoding operation mean, if the information is necessarily embodied in its carrier, and the latter always represents one or another of its code incarnations (i.e. if the information always exists only in a certain code form and nothing else)? It can only mean the transformation of one code into another: the conversion of a code that is "incomprehensible" for the given self-organizing system into an "understandable" one. Therefore, two types of codes should be distinguished: 1) "natural" and 2) "alien". The former are directly "understandable" to the self-organizing system to which they are addressed; more precisely, the information embodied in them is "understandable" (for example, the values of the patterns of the frequency-impulse code coming from certain structures of the brain to the muscles of the hand, or the words of the native language for the interlocutor). The information is "understandable" in the sense that it does not require a decoding operation and can be directly used to ensure management. The "natural" code carries information in a form open to "understanding"; it requires neither a study of the structure of the signal nor a special analysis of the information. We perceive a friend's smile not as a set of movements of a multitude of facial elements, but immediately in its integral "meaning". Unlike the "natural" code, an "alien" code is directly "not understandable" for a self-organizing system; it cannot perceive and use the information embodied in it. For this, it needs to perform a decoding operation, i.e. a transformation of the "alien" code into a "natural" code. It is important to note that in cryptology, and following it in modern science, the term "code" is usually not used to denote objects that have "natural" codes (because of their "transparency"). However, the approach I propose for deciphering the cerebral codes of SR phenomena relies on a broader theoretical basis in comparison with classical cryptology, in which a narrow interpretation of the concept of code is adopted. The way to convert an "alien" code into a "natural" code is either initially programmed in the structure of a self-organizing system, or it is created on the basis of its experience and as a result of random findings. Often it remains unknown to us and can be uncovered by a researcher only through persistent search (as evidenced by the experience of cryptology, linguistics, ethnography, and other sciences where such a problem arises). 4. Both "natural" and "alien" codes can be, for a given self-organizing system (an organism, its subsystems, a personality, a community, etc.), internal and external. Apparently, the "alien" codes are mostly external. However, they also exist at the level of the individual in the processes of autocommunication.
Here, the internal "alien" codes are manifested in the form of subjective experiences and symptoms that are incomprehensible and often negative in their "meaning" and that take their origin in the unconscious and somatic spheres; this also applies to a variety of cases of psychopathology. Let us pay attention to an apparently paradoxical situation: the code of type X is for me an internal "natural" code in the respect that it directly opens to me the information contained in it (i.e. the image A). In this respect, the code X is decoded in my brain automatically. But at the same time it is for me an external "alien" code in the sense that I do not know anything about its location in my brain or its composition and functional structure (and generally do not feel what is going on in my brain while I am experiencing the image A). In other words, in the SR phenomena I have the information in its "pure" form, while the information about its carrier is completely closed to me. However, in order to understand the specific dependence of A on X, it is necessary to know the structure of this carrier, i.e. to decipher its code structure, just as is required in learning a previously unknown language. Here X, being for me and for all of us an "alien" code, becomes a special object of investigation with the aim of deciphering it: clarifying the information A contained in it in an independent way, i.e. based on the recording of signals from my brain and with the help of certain methods of converting X into a suitable "natural" code (in the form of text, image, digital record, etc.) that is always automatically converted into an internal "natural" code of the researcher's brain, which serves as a guarantee of understanding the content of this information (in the form of the corresponding phenomena of SR). And this provides an understanding of the results of deciphering the code X by other researchers and other people, i.e. its intersubjective status. Thus we can speak about the possibility of the emergence of a new type of communication, which can now serve as an object of serious philosophical reflection on the future of terrestrial civilization. Along with the development of brain-computer interfaces, on the basis of which significant results have been achieved, there is the task of creating a so-called "neuronet", i.e. "brain-brain" interfaces, which promise the creation of a fundamentally new type of communications. If the brain codes of SR phenomena are thoroughly deciphered, this will violate a fundamental principle of social self-organization: the relative autonomy, the "closedness", of the subjective world of the individual. What will happen if it is "opened" against the will of its owner, if some become "open" and others "closed", etc.? No less interesting is the question: what will happen to our society, with its political, economic, and other institutions, if all modern homo sapiens and institutional subjects suddenly become "open"? (Nobody can deceive anyone, everyone tells only the truth; let us conduct such a thought experiment.) 5. Accordingly, two different types of codes ("natural" and "alien") and two different aspects of decoding should be distinguished. When decoding an "alien" code (that is, transforming it into a "natural" code), the task is to understand its information content.
Conversely, when deciphering a "natural" code whose structure is unknown, the task is to recognize and understand precisely its structure (structural-functional and spatial-temporal characteristics, physico-chemical organization). Hence, there are two types of code-deciphering tasks: direct and reverse. Direct task: a code object being given, the task is to find out the information contained in it. In the case of code objects of type X, difficulties are related to identifying and describing the object itself, not to mention the search for ways to decode the code and to implement the decoding process. Reverse task: information is given (say, A, i.e. the information in a "pure" form), and it is required to determine its carrier and to study its functional structure in order to independently reproduce this information. Due to IP, this task is more difficult than the direct one, since the given information may have different carriers (although their diversity is limited by the properties of the brain, by the specificity of its substrate, elements, synaptic connections, morphological structures, etc.). To this we should add that any translation of the information into another language entails some loss of the original content (a question requiring special analysis). In the real process of studying code dependencies, direct and reverse tasks reveal a close interdependence. Nevertheless, in the problematic of deciphering the neurodynamic code of psychic phenomena, the reverse task occupies a dominant position, for here the search is directed from the information given to us to its carrier: in the case under consideration, from A to the desired neurodynamic correlates, which should correspond to varying degrees to X. These correlates are established and investigated in modern neuroscience using various methods (EEG, MEG, fMRI, PET, etc.). In this case, the detected correlates are only indirectly associated with X, which represents an extremely complex, multidimensional circular neurodynamic network system, and they require special analysis and interpretation using mathematical and other means to construct adequate models of the desired code dependence. Over the past ten years, great results have been achieved in deciphering the brain codes of visual perception, not only in the case of static and relatively simple black-and-white visual images, but also in the decoding of moving color images: a fragment of a movie perceived by the subject, the corresponding images experienced by him being reproduced on the computer screen as a result of analysis and synthesis of elements of their brain correlates, obtained mainly through the use of the fMRI method (see [8,9]). (But even in this case, in spite of my experience of A in a "pure" form, I will have to do the same as an external observer, i.e. obtain A, its "content", in an independent way.) This direction of neuroscience, which is called "brain-reading", or which could more accurately be named neurocryptology, is developing rapidly and sets the task of deciphering the brain codes of various phenomena of SR (not only visual, but also auditory and tactile perceptions, emotions, voluntary actions, and even thinking). It acquires strategic importance for the creation of new "brain-machine" interfaces and the development of convergent technologies (NBIC). To increase the effectiveness of this area of neuroscience research, however, a thorough phenomenological development of the objects of code deciphering is necessary, i.e. the articulation and formation of sufficiently defined phenomena of SR.
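The logic of the direct task sketched above can be illustrated with a toy sketch: an unknown code table is inferred from paired observations (information, code object), the way a brain-reading study pairs presented stimuli with recorded brain correlates, and is then used to read out new code objects. Everything here is invented for illustration; it mirrors only the logic, not any actual fMRI pipeline.

```python
# Toy version of the "direct task" of code deciphering: infer an unknown
# symbol-to-symbol code table from (information, carrier) pairs, then use
# it to recover the information carried by a new code object.

def learn_key(pairs):
    """Infer a code table from paired (information, carrier) observations."""
    key = {}
    for info, carrier in pairs:
        for i_sym, c_sym in zip(info, carrier):
            key[c_sym] = i_sym
    return key

def decode(carrier, key):
    """Direct task: given a code object and a key, recover the information."""
    return ''.join(key[sym] for sym in carrier)

# "Training" pairs: known percepts and the code objects observed with them.
pairs = [("rain", "7234"), ("nor", "597")]
key = learn_key(pairs)
print(decode("724", key))  # a new code object reads out as "ran"

# The reverse task (information -> carrier) is underdetermined: by IP,
# "ran" could equally be carried by "724", by Morse, by sound waves, etc.,
# so the actual carrier must be identified empirically.
```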
In existing studies, the object of code deciphering (i.e., the distinguished phenomenon of SR) remains largely undetermined, which reduces their effectiveness. 6. Forming the object of code deciphering requires a correct sampling of the continuum of SR as a "current present", breaking it into specific elements and fragments. Where possible, it is desirable that sampling reach the level of quantification of SR phenomena. Such an operation can be realized for relatively simple phenomena of SR (sensations, perceptions, some emotional states). It involves minimizing the given phenomenon of SR in "content" and in time. For example, in a tachistoscopic experiment, I perceive a white square on a black background in a dark room for a minimum time. This can be called a quantum of visual perception. We agree that the information A discussed above is just such a quantum of SR. Then its carrier X, the desired code structure of A (or at least the observed neurodynamic correlate of A), should be limited to the same time interval. The set of such perception-quanta of mine makes it possible to form a personal invariant of the perception (information) A and thus to assume the corresponding personal invariant X. In the same way, one can also form interpersonal invariants of A and X, when the participants of the experiment are different individuals. Clear invariants of this kind are needed to comply with the principle of repeatability of the experiment. This applies not only to invariants of certain types of SR phenomena, but also to the invariant of any phenomenon of SR, i.e. to the invariant description of any realized state in general, and accordingly to the description of those specific properties of brain activity, those specific information processes, that determine the presence in all of us, in the given interval, of the quality of SR (in contrast to information processes in the brain which, according to D. Chalmers, "go on in the dark"). Despite the IP, there are sufficient grounds to believe that code-based neurodynamic systems of type X, which are the carriers of certain phenomena of SR, although they include a wide range of elements and properties, nevertheless have essential common characteristics that allow us to determine and decipher the code of this information (of this SR phenomenon). One usually refers to the fact that the observer is dealing with individual, original, unique phenomena. But he, one way or another, always overcomes this abyss of diversity by creating suitable invariants. A necessary condition for scientific research is the formation of such invariants, which register unity in diversity, and their use for the purpose of scientific explanation. It can be said that this is a commonplace for a scientist. But, like many simple truths, it contains many theoretical difficulties that particularly affect the problem of deciphering the brain codes of psychic phenomena, first of all in solving the problems of forming clear invariants of those SR phenomena that are offered as objects of deciphering their brain codes. (One of the ways of forming personal and interpersonal invariants of the visual image and the corresponding neurodynamic carriers, using the principle of system isomorphism, was elaborated in detail in paragraph 5 of Chapter 5 of the above-mentioned book [4], pp. 284-300; there, the formation of the interpersonal invariant of any SR phenomenon in general was also considered.) These difficulties are exacerbated by the absence of a taxonomy of SR phenomena, the shortcomings of their classification, and the extreme weakness of attempts to theoretically order their diversity. This applies even more to the understanding of the extremely complex, multidimensional value-semantic and active-volitional structure of SR and its self-organization, the core of which is our "I".
Meanwhile, every single phenomenon of SR, even in the form of its personal invariant, always carries in itself to some extent the properties of this structure and cannot be comprehended outside of these properties. Hence it follows that the task of deciphering the neurodynamic code of a given SR phenomenon must register these properties. 7. As was shown above, A and X are simultaneous, one-cause phenomena that are in a single-valued correspondence. But this means that the phenomenological description of the essential properties of A, at least of its formal, content-related, temporal, structural, and dynamic properties, can be extrapolated to X, i.e. can serve as a primary model of X, pointing to the essential properties of X that are necessary for understanding its code organization. Let us try to distinguish these properties, i.e. the basic parameters of the description of any phenomenon of SR. The processes of coding and decoding primarily indicate the necessary participation of memory in them; they indicate the circular structure of actualization-deactualization of the SR's experienced content in the given interval of the "current present". These aspects of code interpretation are of fundamental importance and are the subject of special studies. We shall concentrate on the phenomenological properties of the object of code deciphering (of the isolated SR phenomenon), which are caused by the multidimensional dynamic structure of SR. In each SR phenomenon, a mapping is shown not only of some "external" content, but also of itself. This is manifested in its irremovable belonging to one's "own" I (which in psychiatry is called the "sense of belonging"; it is broken only in pathological cases, implying the phenomena of depersonalization well described by psychiatrists and the phenomena of derealization usually associated with them). This unity of external-imaging and self-imaging allows us to consider that the basic dynamic structure of SR is bimodal, i.e. its main introsubjective relations, which determine the dynamic integrity of SR, represent a unity of the opposite modalities of "I" and "not-I", which is realized by their mutual position and variable correlation. Such bimodality, including the mechanism of variable correlation of the mirror type, must also be inherent in the neurodynamic organization of the code structure of any SR phenomenon. We are now far from understanding its "device", but it is this feature of every act of consciousness that relieves us of the infamous homunculus and allows us to explain the phenomenon of mapping the mapping (information on information) that is characteristic of any phenomenon of SR. In addition to the above two integral parameters (memory and the basic structure of SR), six more parameters for describing the phenomenon of SR taken for the purpose of deciphering its neurodynamic code must be singled out. They can be called analytical, since each of them denotes one "dimension" in the multidimensional dynamic structure of SR.
Taken together they serve to describe a model capable of displaying the essential properties of the desired codal neurodynamic organization. 8. Let us consider each of them. 1) The time parameter, which has already been mentioned above, fixes the selected phenomenon of SR in a certain time interval. In the same interval, its neurodynamic code also functions, which limits the zone of its search and identification. 2) The content parameter (or, more precisely, the parameter of "content") means that any SR phenomenon is a mapping of, and means, something. This is the "content" of a certain interval of the "current present", regardless of its adequacy or inadequacy, and whether it acts as a "one-time" experience of a given individual or in the form of a personal invariant, an interpersonal invariant, or any other form. This parameter indicates the register of the neurodynamic organization through which the "content" of the given phenomenon is coded, and it aims at an experimental search for this register (functional mechanism). The latter presents, apparently, the greatest complexity in the problem of code deciphering. Although simple types of "content" of SR phenomena are reproduced on the computer screen using the fMRI method, we are aware that the tomogram observed in the brain, being a correlate of the SR phenomenon experienced, nevertheless expresses its real, codal neurodynamic organization only very indirectly. This is, so to speak, only the first step in solving the problem of deciphering the brain codes of SR phenomena. 3) The formal parameter means that any content of an SR phenomenon appears in a certain form and refers to a corresponding class, genus, or species, i.e. it is somehow categorized. When we talk about visual perception or perception in general, we have in mind a certain form of existence of sensual images. It orders their colossal variety. Despite the absence of a scientific taxonomy of SR phenomena, we mostly successfully use formal discretizations, which are set by psychology on the basis of generalizations of everyday experience and natural language (sensation, visual perception, perception in general, representation, concept, etc.). The formal parameter registers a necessary property of the SR phenomenon and therefore compels the introduction of this parameter into the model of its neurodynamic code organization; it aims at elucidating those functional neurodynamic mechanisms that perform the operations of categorization, classification, generalization, and identification. 4) The truth parameter characterizes any SR phenomenon in terms of the adequacy of the mapping of the corresponding object. It can be true or false, questionable or indefinite. However, in all cases we still have a fundamental attitude toward truth and rightness, which functions dispositionally and is often areflexive. We are constantly "tuned" to achieve adequate knowledge of what interests us. Any interval of the "current present" includes the authorizing register of "accepting" or "not accepting" the given "content" (including doubt, probabilistic assessment, and a feeling of uncertainty). It is far from perfect and often selects false and ridiculous ideas as "true" and "right". However, all the really true ideas and theories that originated and began their journey in the minds of individuals were sanctioned in the beginning by a personal register, and only with time did they receive confirmation at the level of interpersonal and suprapersonal social-register sanctioning.
The presence of a personal authorizing register in the structure of SR phenomena allows us to assume the same kind of functional mechanism in the neurodynamic code organization of the SR phenomenon under study and to outline the ways of its special investigation. 5) The value parameter characterizes the significance for an individual of the "content" of the experienced SR phenomenon, the individual's relation to this "content". The value "dimension" of SR has a specificity that is not reducible to the "truth" and other parameters of SR. It is well known that false representations can have an extremely high value for a person, and true ones a very low and even negative significance. In this respect, the value parameter, like the truth parameter, has two poles, one of which expresses a positive value and the other a negative one. The structure of personal value attitudes includes three main types: 1) hierarchical (a clear distinction between higher and lower values, subordination of the lower to the higher, unambiguous choice); 2) rank-and-file (when numerous value intentions are located at approximately one, mostly low, level, they are easily interchangeable, and the choice between them is either extremely difficult or, conversely, very simple); 3) competitive (when two value intentions are incompatible but a choice is required; if it is not made, it generates an agonizing state of ambivalence, which, however, can be successfully overcome). It is the dominant value intention in a given time interval that determines choice, decision, and action. The value parameter denotes a corresponding specific functional register in the code brain organization of SR phenomena, one that implements motivational stimuli and various types of sanction (cognitive, emotional, painful, etc.). 6) The activity (intentional-volitional) parameter characterizes any phenomenon of SR in terms of its activity, highlighting such factors as future projections, probabilistic predictions, goal-setting and purposefulness, volition, action, and creative new formations. This parameter expresses an activity vector as a special quality that cannot be replaced by any of the above parameters, despite a close connection with them, especially with the value parameter. It is important to consider activity in its self-development as a process of new formations, including significant changes in its direction and ways of realization, as an opportunity for the formation of increasingly perfect forms of activity. This parameter aims at the study of those specific functional mechanisms in the brain that support active states and implement them in various activities. A clear awareness of it stimulates the study of the processes of neurodynamic self-organization, which serves as an indispensable factor in the functioning of the neurodynamic code carriers of SR phenomena. Briefly outlined above, then, are the two integral and six analytical parameters indicating those analogous functional registers of the brain's neurodynamic organization that should serve as the object and purpose of neuroscience research in solving the problem of deciphering the brain codes of SR phenomena. Registering the main dynamic dimensions of the multidimensional structure of SR, they can be used to construct more advanced computer models of the code representation of SR phenomena in the brain, and thereby to understand the dynamic self-organizing structure that functionally determines the quality of SR.
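The parameter scheme above can be summarized as a data structure for specifying a decoding target. The following Python sketch is a hypothetical rendering of my own: the field names and types are invented to mirror the author's two integral and six analytical parameters, and do not correspond to any established format.

```python
# A minimal data-structure sketch of the decoding target described above:
# one SR phenomenon annotated with the two integral and six analytical
# parameters. Field names and types are illustrative assumptions only.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class SRPhenomenon:
    # Integral parameters
    memory_context: str                  # involvement of memory in coding/decoding
    bimodal_structure: Tuple[str, str]   # the "I" / "not-I" modalities
    # Analytical parameters
    interval_ms: Tuple[int, int]         # 1) time: bounds of the "current present"
    content: str                         # 2) content: what is mapped/meant
    form: str                            # 3) formal: class/genus/species category
    truth_status: str                    # 4) truth: true/false/doubtful/indefinite
    value: float                         # 5) value: positive or negative significance
    activity: str                        # 6) activity: intentional-volitional vector

# Example: a quantum of visual perception from the tachistoscopic setup above.
quantum = SRPhenomenon(
    memory_context="short-term visual memory",
    bimodal_structure=("I", "not-I"),
    interval_ms=(0, 50),
    content="white square on black background",
    form="visual perception",
    truth_status="true",
    value=0.1,
    activity="passive observation",
)
```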
9. Proceeding from the principle of invariance of information (IP) in relation to the physical properties of its carrier and, accordingly, the principle of system isofunctionalism (substantiated by A. Turing; the principle of isofunctional systems means that the same function or complex of functions can be reproduced on substrates differing in their physical or chemical properties), one can draw a conclusion about the theoretical conceivability of reproducing the quality of SR on other, non-biological substrates. SR is a functional property of a neurodynamic self-organizing system. There is no theoretical prohibition on the realization of this property on other suitable substratum bases. It is possible to create such elements (differing from neural ones in physico-chemical and morphological features) and a dynamic self-organizing system built from them that will be able to reproduce the information processes that determine the quality of SR, i.e. to represent, for the control center of this system, information in a "pure" form and the ability to operate on it, and thus to contain the reflexive and bimodal registers of information processing characteristic of ourselves. In this direction, the convergent development of NBICS (nano-technologies, biotechnologies, information, cognitive, and social technologies and the corresponding scientific disciplines), which is creating new components and ways of self-organization, opens new perspectives for the formation of artificial intelligence and for the transformation of human nature. In recent years, these issues have become the subject of thorough discussion by major scientists and philosophers. This is of strategic importance, determining the future of mankind and the direction of anthropo-technological evolution, and, above all, represents a way out of the steadily deepening global crisis of our consumer civilization. We now turn to the answers to the second main question. If physical properties cannot be attributed to SR phenomena, how can one explain their ability to causally affect bodily processes? 1. The phenomenon of SR causes external or internal bodily changes, and complex actions of the personality, and determines their result as information on the basis of the existing code dependence, which is a kind of "isolated" information in the continuum of physical interactions (as long as the code structure of this self-organizing system is preserved). When I say to a student "Come up to me!" and he performs this action, it is caused and determined not by the physical properties of the words I uttered but precisely by the information expressed with their help, its semantic and pragmatic features. In themselves, the physical properties of the information carrier do not explain the resulting effect, although they necessarily participate in the act of determination. This is confirmed by the fact that I can cause exactly the same effect to be brought about by other words and, in general, by signals very different in their physical properties (by virtue of the principle of invariance of information with respect to the physical properties of its carrier, IP). 2. Here we have a special type of causality: information causation. Its specificity in comparison with physical causality is determined by the IP. Psychic causation is a kind of information causality; in analytical philosophy it is called mental causation. The notion of psychic causality also encompasses unconsciously produced actions. Stressing the intimate relationship
between the conscious and the unconscious levels of the psyche, we are still primarily interested in consciously assumed actions (which are initiated by the ongoing phenomenon of SR, because there are also unconscious psychological forms of causation). Therefore, in this case it is perhaps better to use the concept of mental causality as a subspecies of information causality (if the mental is limited for us to SR phenomena). Turing's principle of isofunctionalism, invoked above, can be given a model example: a natural tooth is removed and an artificial tooth is inserted. The function is the same; the substrate is different. On the path of this type of substitution, it is difficult to establish any limit. Many internal and external organs of man are now successfully replaced by prostheses. This also applies to individual components of the brain (for example, implanting an electronic chip in the brain of a paralyzed person, allowing him to mentally control a wheelchair, etc.). 3. It is important to emphasize that the concept of information causality does not contradict the notion of physical causality. Physical causality retains its entire significance if it does not pretend to be a universal means of explaining all the phenomena of reality without exception, for example, explaining the causes of an economic crisis or the causes of an individual's self-sacrifice. Psychic causality gives a scientifically grounded answer to the classical question of the impact of the mental on the physical. But here the reciprocal question arises: the effect of the physical on the mental. Even if we disregard those cases where severe mechanical, temperature, or radiation-related influences destroy brain code structure and biological organization, physical causes can serve to explain the essential properties of the mental when it comes to direct sensory mappings (sensations, etc.), or influences on code structures that lead to mutations, or electromagnetic, chemical (through the blood), and further effects on the brain. But even here, physical causes are often mediated by information processes and, accordingly, by informational causes. The concept of information causality substantially expands the theoretical means of scientific explanation; it becomes necessary when the subject of research is self-organizing systems (biological, social, and in some respects technical ones). The theoretical and empirical justifications of information causality differ significantly from the principles of description and explanation of physical causality. This determines the ontological status of information (in particular, mental) causality. We now turn to the third question. How to explain the phenomena of voluntary actions and free will, and how to combine them with the determinism of brain processes? 1. Together with the ability to have information in a "pure" form, we are given the ability to operate with it across a fairly wide range. This expresses the activity of SR. It includes voluntary actions that can take place not only in the purely mental plane, but also in the communicative and practical ones. An analysis of the structure of a voluntary action indicates the essential role of the areflexive and dispositional levels in it. However, the initiator and regulator of this action is always a specific phenomenon of SR. Therefore, a voluntary action is vivid evidence of mental causality. Let us take a simpler example (in comparison with the one given in 2.1). I want to turn on the light of the desk lamp, and I do this by pressing the button.
In this case, the mental reason in the form of my desire and motivation is a program of actions which triggers a chain of code transformations that have been well worked out in phylogenesis and ontogenesis (i.e., the sequential and parallel activation of the code programs of the arm movement and of the other bodily changes associated with it, and the programs of energy supply for the whole complex of actions leading to the achievement of the goal). The phenomenon of SR, which has a higher value (and belief) ranking, can also have a more powerful causal effect on bodily processes. Well known are the somatic effects of the "overvalued idea" and many such manifestations of the extraordinary power of mental causation and mental control. We may refer to the experience of the Second World War, with its many striking examples of strength of spirit and will, outstanding feats for the sake of the Motherland, duty, honor, justice, etc. 2. Mental causality means not only the influence of the mental on the corporeal, but also (something not always taken into account) the influence of the mental on the mental. The fact that one thought can influence another, that it can cause another thought, is a universal fact of our psychic experience. Despite the difficulties involved in the discretization of SR phenomena, in relatively simple cases it is possible to fairly clearly represent the associative transition from one of them to another as a cause-effect relation. For example, the visual image of A evokes for me, in the next instant, the visual image of B. Such mental causality, or the "mechanism" by which B follows upon A, does not fundamentally differ from those processes in which the phenomenon of SR causes a certain bodily change. Only the contours of code transformations, the subsystems of the brain in which they occur, and the character of the effector changes (their presence or absence in external organs) are different. 3. But the information A is embodied in the neurodynamic system X, and B is embodied in the neurodynamic system Y, respectively. The transformation of A into B is the transformation of X into Y. If I can do this of my own free will, then I can operate on and control these brain neurodynamic systems. Managing one's SR phenomena, one's thoughts (as per 1.2), is the management of the corresponding brain code structures. Each of us voluntarily manages a certain class of brain neurodynamic systems (often not in the best way for the self), although this is not sensed, and this ability of the self is generally not recognized. 4. But what is our self from the standpoint of neuroscience? According to modern studies (Damasio [10], Edelman and Tononi [11], Matyushkin [12], and others), our self is represented in the brain by a special structural-functional subsystem, which is called the Ego-system of the brain (or the Self). It includes the genetic and biographical levels of the dispositional properties of the individual and forms the highest, personal level of brain self-organization and management, which forms the conscious-unconscious contour of mental processes. It is at this level that the code transformations that represent our self and information in a "pure" form (that is, as SR) are performed; they ensure the activity of the self in the form of voluntary actions, the self's capacity for self-organization, and the ability to maintain identity, aspirations, and target vectors. The Ego-system embodies the personality characteristics of the individual, the individual's ability to express his will.
And here the question arises as to free will and its compatibility with the determinism of brain processes. 5. There is no room here to analyze the problem of free will. But I must say that those who deny free will, like unrelenting physicalists, disclaim themselves as persons, absolving themselves of all responsibility for their actions, including for the assertion that there is no free will. Each of us is sure that in many cases one can, of one's own will, make a choice and operate with this or that idea, thought, or intentional vector, etc., although within SR there are also classes of phenomena that are irresistibly imposed on us from the outside or from within our inner world and that are unmanageable or only partially manageable, often with great difficulty (pain, emotion, etc.). Nevertheless, our self can control itself and its own phenomena of SR across a very broad range (and indeed expand it). The assertion of the existence of free will has to be drawn from particular cases. But this is quite enough for its recognition (see Dubrovsky [13]). 6. If the ability to voluntarily control one's ideas and thoughts is the ability to control their brain code carriers, this means the ability: 1) to manage the energy supply of these operations, including the corresponding biochemical processes; 2) to change the program of actions, and therefore to change their neurodynamic code structures; 3) to expand the contours of mental regulation (including gaining access to vegetative functions, as yogis do when, for example, they change their heart rhythm through volition). This approach allows us to investigate more deeply the phenomena of "exertion of thought" and "exertion of will," ways of intensifying the creative process, and the creation of new resources for mental self-regulation, not only in functional but also in moral terms. In other words, we are capable of constantly expanding the range of possibilities for managing our own brain neurodynamics (with all the desirable and perhaps undesirable consequences involved). 7. But my ability to voluntarily control my own brain neurodynamics means that the Ego-system of the brain is a self-organizing, self-controlled system. Consequently, the act of free will (in terms of both the produced choice and the generation of internal effort to achieve the goal, including the energy supply of the action) is an act of self-determination. This means that the concept of determination should be taken not only in the sense of external determination, but also in the sense of internal determination given by the programs of the self-organizing Ego-system and the brain as a whole. Thus, the thesis of the incompatibility of the concepts of freedom of will and the determinism of brain processes is eliminated, and with it the infamous homunculus. These questions are of fundamental importance for deciphering brain codes, since the latter are also self-organizing systems, the functional elements of the brain's Ego-system. 8. In the context of the problems of mental causation and free will, a sacramental question often appears, implying an explanatory impasse: how can the mental (the SR phenomenon) act on the brain if it is generated by the brain? From the standpoint of an information approach, it is not difficult to answer this.
Of course the mental affects the brain in the sense that the activated neurodynamic code system, carrying the personality's information in its "pure form" (the mental), is able to affect other code structures of the brain, including those that carry information processes at the pre-psychic level (i.e., those information processes that go on "in the dark"), and thereby affect different levels of brain activity, including circulatory processes and biochemical and electrical changes in individual neurons and synaptic networks. Sometimes this happens in a particularly strong form. Suddenly a fortuitous thought arrives: illumination, an emotional outburst, stormy productive activity, mental or practical. This is an extremely valuable mental state. Initiated at the level of the brain's Ego-system, it produces functional changes in other brain subsystems and, as a result, causes strong reactions in a number of internal organs and throughout the body system. Every mental state of the individual is the product of the specific activity of the brain at the level of its Ego-system, and when it is actualized, the functioning of all brain subsystems essentially changes (in comparison with those states in which there is no SR, such as deep sleep or a temporary loss of consciousness). There remains one more, perhaps the most difficult, question: that of the origin of the quality of SR. It is equivalent to the question "What is subjective reality for?": why did it arise in the course of biological evolution? I will try to answer this briefly from the standpoint of information and evolutionary approaches (for more detail, see Dubrovsky [14]). How can the emergence of the very quality of SR in the process of evolution be explained? 1. The process of the emergence of multicellular organisms posed the cardinal task of creating a new type of management and of maintaining integrity, on the solution of which their survival depended. After all, the elements of such a self-organizing system are separate cells, which are themselves self-organizing systems with rather rigid programs, "developed" by evolution over many hundreds of millions of years. But now the latter had to be coordinated with the general organizational program, and vice versa. This is a very difficult task, the solution of which presupposed finding the optimal measure of centralization and autonomization of control loops, a measure that could ensure the preservation and strengthening of the integrity of a complex living system in its continuing interactions with the external environment. This means a measure of centralization of management that does not violate the fundamental programs of individual cells, and such a measure of autonomy of their functioning that does not prevent, but rather promotes, their cooperative participation in the implementation of the programs of the whole organism. Together with the centralization of management, its high-speed efficiency had to be ensured. This measure of centralization and high efficiency was achieved due to the emergence of psychic control in those multicellular organisms that moved actively in the external environment, in ever-changing situations. In organisms with minimal motor activity that are attached to one place, such as plants, the psyche does not develop.
2. Evolution demonstrates the intimate relationship between motor and mental functions, which confirms the emergence and development of the psyche in precisely those complex organisms that are actively moving in the external environment. Hence the obvious causal capacity of the mental (the information in the form of SR) to directly and instantaneously produce external actions and control the organs of motion. Fundamental results on the organic connection between perceptual and motor functions have been obtained in recent years on the basis of the study of the "mirror systems" of the brain (see Rizzolatti and Sinigaglia [15]). In contrast to this, the management of internal organs and processes is performed automatically, on the unconscious and pre-psychic levels. At the same time, there is a constant "adjustment" of certain parameters of local and integral changes (energy, information) in the internal environment of the organism for the effective implementation of its actions in the external environment. 3. Psychic control is associated with the process of specialization of cells and the emergence of the nervous system, which performs the functions of programming and implementing actions based on the analysis and integration of information coming from the external and internal environment of the body. The products of this integration are expressed initially in the form of sensations-emotions, and only at subsequent stages of evolution in more complex forms of SR (perceptions, representations, concepts, mental actions, etc.). Accordingly, the operational registers of SR also become more complex. The psyche of animals possesses the quality of SR, which at high levels of evolution takes on a rather complex structure and includes the hierarchical centralization of SR phenomena, i.e., a kind of "Self," the evolutionary premise of the human Ego-system. The quality of SR represents a specific level of information processes at the level of the Ego-system of the brain. 4. In order for information to acquire the form of SR, a two-step code transformation is necessary at the level of the Ego-system: the first step presents the information to it while it is still "in the dark"; the second forms a "natural code" of a higher order, thus creating the phenomenon of information about information, i.e., the information is "opened up" and made relevant for the Ego-system, for the individual. This is what was called the information given to us "in its pure form" and the ability to operate on it. The state of SR initiates a new type of activity in the living system: a state of awareness, attention, alertness, and constant readiness for immediate action; a state of finding the necessary means of subsistence, probing for danger, and realizing vital functions. The quality of SR created by the subsystem of "natural codes" of the second order within the framework of the Ego-system is the quality of virtual reality, its original, fundamental form, which acquires, in the process of anthropogenesis, in the emergence of language, and in social development, ever newer forms of external objectification. The processing of information in such a code structure, i.e., at the virtual level, is of high operational efficiency and can be carried out autonomously from the external effector functions, which are engaged only once the program of action has been formed and authorized. 5. The development of the psyche initiated the growth of the multistep and multifaceted production of information about information.
The range of virtual operations expands, making the generalization of experience more efficient; developing the ability for "delayed actions" and virtual trial actions, the ability of forecasting, and the ability of building models of a probable future; creating an ever-higher level of search activity; and multiplying its degrees of freedom. In humans, unlike in animals, the information processes that determine the quality of SR acquire new essential features thanks to the emergence and development of language. This primarily concerns an additional and very productive level of coding and decoding, created by the language system, which qualitatively improves the analytical and synthetic abilities of information operation and develops metarepresentation and reflection. 6. The foregoing contains in many respects an answer to question 5: why is the information about the acting agent not simply represented but experienced in the form of SR? Because experience in the form of SR combines the functions of mapping and control; it is such a way of "representing" and actualizing information for the "Self" as allows it to easily, quickly, and, most importantly, voluntarily operate on the information in a "pure" form (i.e., at the level of virtual reality). The question "How (through what mechanisms) do information processes in the brain create the quality of SR?" belongs to the competence of modern neuroscience. Studies show that the condition for the emergence of subjective experience is a circular process and the synthesis of information in certain brain structures. Subjective experience in the form of sensations arises when two types of information are compared and synthesized on neurons of the projection cortex of the brain: sensory information (about the physical parameters of the stimulus) and the information retrieved from memory about the significance of the signal. Information synthesis is provided by the mechanism of returning impulses to the places of initial projections after a response from those brain structures that carry out the processes of memory and motivation. Sensation is an act of "information synthesis" [16] performed within the framework of this cycle; it arises as a result of the high-frequency cyclic process of "self-identification" [17]. 7. In any phenomenon of SR, information about some object is given, together with information about this information (at least in the form of a sense of its belonging to me, to my Self); but, as was already noted, absolutely no information is furnished about its brain carrier. The elimination of the mapping of the brain's information carrier is inherent in all mental activity. Owing to the IP, the ability for such a mapping did not arise and did not develop in the course of evolution. Since the same information can be embodied and transmitted by carriers of different properties, the ability to map the carrier did not matter for the adequate behavior and survival of the organism. For this, the organism needs the information itself (about external objects and situations, about the most likely changes in the environment and about how to interact with it, about its own states, etc.), the ability to operate on it, and the ability to use it for management purposes. Exactly these functions were developed in the process of evolution and anthropogenesis. Humans, in conducting practically all the forms of social functioning required by life, have no need for information about the brain carriers of the information they operate on.
However, the situation has recently begun to change. Following the decoding of the genome, the problem of deciphering the brain codes of psychic phenomena (primarily SR phenomena) has been placed on the agenda, and, as was already noted, it is being successfully pursued. There are reasons to believe that the tasks associated with this problem are driven by the essential needs of society and that successes in their solution mark the beginning of a new stage in human development and social self-organization as a whole. Even an elementary analysis shows that the ability of a self-organizing system to map a carrier of information and to control this carrier vastly expands the scope of its cognitive and transformational activity and, most importantly, the possibility of self-transformation. In this regard, we can talk about new opportunities for transforming those genetically conditioned properties of human nature and consciousness which serve as the initial reason for the steady deepening of the ecological crisis and other global problems of terrestrial civilization. It is primarily about the indefatigable consumer intentions of the social individual and his aggression toward his own kind (and thus toward himself). Of course, there are serious doubts about the possibility of changing these negative properties of human nature while remaining within the framework of our biological organization. But if these properties cannot be changed, an anthropological catastrophe awaits us. Of course, other ways of overcoming the present crisis of our civilization and of ascending to a new stage of it are theoretically conceivable, but all of them are somehow connected with self-transformations that imply a change in the consciousness of the social individual. The latter, to some extent and in certain respects, turns out to be dependent on the results of the development of the Mind-Brain Problem. Conclusion The proposed solution of the main theoretical questions of the "Hard problem of consciousness" can be useful for the development of modern neuroscience studies of the phenomena of consciousness, especially for such a direction as Brain Reading. This concerns the following tasks: 1) the formation, on the basis of phenomenological analysis, of the object of investigation, i.e., the personal and interpersonal invariants of certain phenomena of subjective reality with which neurodynamic correlations are established; 2) the solution of a number of methodological issues in the procedure for deciphering the brain neurodynamic codes of these phenomena of SR (set out above); 3) the refinement of the parameters of the model of the neurodynamic code, which must be taken into account when planning such studies. It should be borne in mind that the neurodynamic correlate of a given phenomenon of subjective reality, determined with the help of appropriate methods, reflects only a particular aspect, fragment, or sign of the real neurodynamic code structure, which, according to modern views, is an activated multidimensional neural network and is cyclical. The proposed theory can contribute to further research on the current theoretical and methodological issues of modern neuroscience, an important condition for reaching new frontiers in the study of consciousness.
2019-06-07T23:03:41.707Z
2019-04-27T00:00:00.000
{ "year": 2019, "sha1": "aa1ba4c9ebf44192ae869f16bde209aa6c274423", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/neuroscience.2019.2.85", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20488d0134715f58f18b58313cfc3b1d2b6cc156", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
248604045
pes2o/s2orc
v3-fos-license
Real-Time Detection of Incipient Inter-Turn Short Circuit and Sensor Faults in Permanent Magnet Synchronous Motor Drives Based on Generalized Likelihood Ratio Test and Structural Analysis This paper presents a robust model-based technique to detect multiple faults in permanent magnet synchronous motors (PMSMs), namely inter-turn short circuit (ITSC) and encoder faults. The proposed model is based on a structural analysis, which uses the dynamic mathematical model of a PMSM in an abc frame to evaluate the system's structural model in matrix form. The just-determined and over-determined parts of the system are separated by a Dulmage–Mendelsohn decomposition tool. Subsequently, the analytical redundant relations obtained using the over-determined part of the system are used to form smaller redundant testable sub-models based on the number of defined fault terms. Furthermore, four structured residuals are designed based on the acquired redundant sub-models to detect measurement faults in the encoder and ITSC faults, which are applied at different levels in each phase winding. The effectiveness of the proposed detection method is validated by an in-house test setup of an inverter-fed PMSM, where ITSC and encoder faults are applied to the system in different time intervals using controllable relays. Finally, a statistical detector, namely a generalized likelihood ratio test algorithm, is implemented in the decision-making diagnostic system, resulting in the ability to detect ITSC faults as small as one single short-circuited turn out of 102, i.e., when less than 1% of the PMSM phase winding is short-circuited. Introduction Permanent magnet synchronous motors (PMSMs) have gained popularity in industrial applications such as electric vehicles, robotic systems, and offshore industries due to their merits of efficiency, power density, and controllability [1][2][3]. PMSMs working in such applications are constantly exposed to electrical, thermal, and mechanical stresses, resulting in different faults such as electrical, mechanical, and magnetic faults [4]. Among these various faults, the stator winding inter-turn short circuit (ITSC) fault is considered one of the most common faults [5] due to the excessive heat produced by a high circulating current in a few shorted turns of the stator winding [6]. Subsequently, this excessive heat causes further insulation degradation and might lead to a complete machine failure [7] if it is not detected and treated in time. Therefore, developing methods for monitoring and detecting the ITSC fault in its early stages can substantially lower maintenance costs, downtime of the system, and productivity loss. ITSC faults can be detected by signal-based, data-driven, and model-based techniques [8]. The first approach aims to detect fault characteristic frequencies in measured motor signals, namely current, voltage, or vibration signals [9][10][11], processed by time-frequency signal analysis tools such as the Fourier transform [12], matched filters [13], the Hilbert-Huang transform [14], wavelet transforms [15], and Cohen distributions [16]. These signal-based methods face challenges in real-time implementation due to the computational burden, and the absence of fault characteristic signals does not guarantee that the machine is healthy [8]. Data-driven approaches such as artificial neural networks (ANNs) [17] and fuzzy systems [18] require a lot of historical data to train models and classify localized faults. 
Historical data is restricted in industry, and producing large amounts of it in healthy and faulty conditions is costly and time-consuming [19]. Alternatively, model-based techniques have been proposed to detect ITSC faults [20][21][22]. Among them, finite element method (FEM)-based models have been widely used due to their accuracy and the convenience of taking into account physical phenomena, e.g., saturation. FEM models, known to be time-consuming and computationally heavy, require deep knowledge of the system, e.g., detailed dimensions and material characteristics. Other model-based methods that use mathematical equations to model a motor's behavior have been reported to have challenges regarding validity when experiencing abnormal conditions such as internal faults [8]. To address the mentioned challenges, structural analysis is proposed as an alternative solution for detecting ITSC faults in electrical motors. The structural analysis algorithm has been well studied and developed in the literature [23][24][25] and applied to different structures. The structural analysis approach has been able to successfully detect faults in automotive engines [26][27][28], hybrid vehicles [29], and battery systems [30]. In [31,32], the algorithm was successfully applied to PMSM electric drive systems to detect sensor faults, such as voltage, current, encoder, and torque sensor faults. In our previous study [33], it was proposed that the algorithm can be used on an electric drive system to also detect common physical faults in PMSMs such as ITSC and demagnetization, and residual responses were obtained by simulation. However, in previous studies, this algorithm has not been implemented for the real-time diagnosis of an industrial PMSM for the detection of ITSC faults. Implementing a structural analysis technique on a PMSM and drive, this paper aims to achieve the following contributions: • Detection of both internal motor faults and external measurement faults, namely ITSC and encoder faults; • Detection of the lowest level of ITSC fault, with one shorted turn in the stator phase winding; • Early detection of an ITSC fault, i.e., considering a lower fault current in the degradation path as compared to shorted turns; • Modeling of the noise in drive-system measurement signals with unknown amplitude and variance. This paper presents a systematic fault diagnosis methodology based on structural analysis for detecting multiple faults in PMSM drives, namely ITSC and encoder faults. To achieve this, a healthy dynamic mathematical model of the PMSM is defined in the abc reference frame based on the dynamic constraints, measurements, and derivatives. To model an ITSC fault in any phase, specific fault terms are added to the three-phase flux and voltage equations. These fault terms include the deviations in the voltage, flux, and currents of the stator winding caused by an ITSC fault: since a part of the winding is shorted, the three-phase voltage and flux signals are subject to change. In addition, fault terms are added to the dynamic model to take encoder faults into account, resulting in errors in the angular speed and angle measurements. Subsequently, the analytical redundant part of the structural model is extracted and divided into minimally over-determined sub-systems, from which three sequential residuals are obtained based on the error in the current signal of each phase. Furthermore, a resultant residual is formed in the αβ frame to achieve a better demonstration of different ITSC fault levels. 
Finally, a generalized likelihood ratio test is developed to detect the faults in the resultant and encoder residuals under unknown-noise-parameter assumptions, i.e., unknown amplitude and unknown variance. Modeling Inter-Turn Short-Circuit Fault The studied PMSM consists of distributed three-phase windings on the stator and PMs on the rotor. Each phase winding contains several coils in parallel, formed by wrapping bundles of wires together. The wire insulation of the stator windings might be degraded over time under electrical, mechanical, and thermal stresses, which may eventually lead to electrical faults such as an inter-turn short circuit (ITSC), a phase-to-ground short circuit (PGSC), and a phase-to-phase short circuit (PPSC). The stator ITSC fault is considered the most common electrical fault [34] and usually occurs in a few shorted turns. The degraded path among the shorted turns is provided by a nonzero resistance of the faulty insulation, leading to a circulating fault current. This circulating fault current results in copper losses and excessive heat in the shorted turns, since only a few turns are involved and the current-limiting impedance is low. The insulation might further degrade, and the damage might even propagate to nearby turns. This might cause other critical faults such as a PGSC fault, a PPSC fault, and even a complete failure. Therefore, monitoring and detecting the ITSC fault in its early stages would reduce the costs and downtime caused by machine failure. To model ITSC faults in a PMSM, it is necessary to know how the motor signals and parameters are affected by the different levels of the fault. The schematic of a PMSM stator winding under ITSC faults with different levels in each phase is shown in Figure 1. The level of fault in the abc phases is denoted by µ_a, µ_b, and µ_c, respectively, defined as the ratio of the number of shorted turns to the total number of turns per phase winding. In a healthy condition, each phase winding of a PMSM has a resistance of R_s and an inductance of L_s. In the presence of ITSC faults in each phase, the phase winding is split into a faulty part with µR_s and µL_s, and a healthy part with (1 − µ)R_s and (1 − µ)L_s resistance and inductance values. As a result, there is not only mutual inductance between the healthy and faulty parts in each phase winding, but also between the faulty winding and the other phase windings [35]. In addition, the degraded resistance of the insulation in each phase is denoted by R_af, R_bf, and R_cf, while the circulating fault currents are i_af, i_bf, and i_cf, respectively. To detect an incipient ITSC fault, the resistance of the degraded path should be higher than the resistance of the shorted turns [36]. This is due to the fact that an ITSC fault forms gradually over time and starts with a low current circulating through the degraded path. Structural Analysis for PMSM with ITSC and Encoder Faults Structural analysis aims to extract the analytical redundant relations (ARRs) of a system based on the mathematical equations that describe the system's dynamics [23,37]. A structural analysis algorithm relies on redundancy in a system (a redundant part of the complex system) and yields residuals for fault detection and isolation (FDI) based on ARRs. Assuming that a model M has outputs z and inputs u, a residual is extracted by eliminating all the unknown variables, i.e., substituting each unknown variable with its equivalent value obtained through a redundant path. 
Therefore, this leads to a relation that contains only the known variables, r(u, z) = 0, which is known as an ARR if the observation z is consistent with the system model [23]. As a result, this residual's response will maintain a zero value under the null hypothesis (non-faulty case) H_0 and a nonzero value under the alternative hypothesis (faulty case) H_1, as follows:
r(u, z) = 0 under H_0, r(u, z) ≠ 0 under H_1. (1)
This methodology is especially effective for fault diagnosis of complex systems, where deep prior knowledge of the whole system is neither needed nor affordable in terms of computational burden and processing time. Instead, a small redundant part of the system is selected and processed to obtain smaller redundant subsystems that can be used in forming residuals for detecting each predefined fault. First, the structural model of a redundant system is formed and represented by an incidence matrix with variables as columns and equations as rows. The variables are categorized as unknown variables, known variables, and faults, while the equations are categorized as dynamic equations, measurements, and differential equations. Each row of the incidence matrix connects an equation to the corresponding variables if they are present in that specific equation. Next, the just-determined and over-determined parts of the system are separated by rearranging the rows and columns to form a diagonal structure known as the Dulmage–Mendelsohn (DM) decomposition. Using the analytical redundant part of this structure and based on the degree of redundancy, several smaller sets of ARRs are identified. These smaller sets are called minimally over-constrained sets and have one degree of redundancy, holding exactly one more equation than the number of variables. Subsequently, a fault signature matrix is formed to demonstrate which faults can be detected or even discriminated. Finally, specific diagnostic tests (residuals) are designed to detect the faults. Here, a structural analysis of a PMSM experiencing independent ITSC faults in each phase is presented, and diagnostic tests are proposed to detect and discriminate them. Figure 2 shows the modeling diagram of a faulty PMSM and the drive system, where measurements are acquired by sensors and the faults are located inside the motor. PMSM Mathematical Model The dynamic equations of a faulty PMSM in an abc frame with ITSC faults present in three phases are represented by equations e_1–e_9 as shown in (2), where v_a, v_b, and v_c are the stator phase voltages; i_a, i_b, and i_c are the stator phase currents; λ_a, λ_b, and λ_c are the stator phase fluxes; T_e is the electromagnetic torque; T_L is the load torque; ω_m is the rotor's angular speed; θ is the electric angular position; R_a, R_b, and R_c are the stator phase resistances; L_a, L_b, and L_c are the stator phase inductances; λ_m is the flux produced by the rotor PMs; p is the number of pole pairs; J is the rotor inertia; and b is the friction coefficient. As discussed in Section 2, an ITSC fault splits the phase winding into a faulty part with resistance and inductance of µR_s and µL_s and a healthy part with resistance and inductance of (1 − µ)R_s and (1 − µ)L_s. The changed resistance and inductance of the winding have a direct effect on the voltage and flux equations. Under a healthy condition, the model of the PMSM, especially e_1–e_6, has no fault terms. 
Therefore, any changes in the inductance will affect both the voltage and flux equations (e_1–e_6) directly, and any changes in the resistance will affect only the voltage equations (e_1–e_3) directly. Here, f_va and f_λa are added to the corresponding equations of the healthy PMSM to account for the ITSC fault in phase a. Similarly, the f_vb, f_vc, f_λb, and f_λc terms are added to account for ITSC faults in phases b and c, respectively. These fault terms are shown in red in (2). The known variables consist of the motor signals, which are measured for both control purposes and fault diagnosis. Thus, in addition to the three-phase currents and angular position, i.e., y_ia, y_ib, y_ic, and y_θ, which are necessary for the control system, the three-phase voltages, i.e., y_va, y_vb, and y_vc, are also measured to complete the diagnostic system. Equation (3) shows these known variables, where the f_θ and f_ω fault terms are also added to account for speed and angle measurement errors. In addition, since the dynamic model of the PMSM includes five differential constraints in the abc frame, the corresponding derivatives need to be defined as unknown variables. Equation (4) shows the differential constraints for the structural model. Structural Representation of the PMSM Model The structural model of the PMSM with ITSC and encoder faults is obtained based on the redundant dynamic model in (2)–(4), as shown in Figure 3. The incidence matrix contains 22 rows, representing the nine defined equations in (2), the eight measured known variables in (3), and the five differential constraints of unknown variables as shown in (4). The columns of the matrix are subdivided into three groups: unknown variables, known variables, and faults. The known variables are obtained directly from the measurements, while the unknown variables can be calculated based on the known variables. The faults considered in the structural model are variations in phase voltage and flux to represent ITSC faults in each phase. Analytical Redundancy of the Model To detect specific faults in a redundant system, faults must first be introduced into the model, and then a proper diagnostic test containing the considered fault is selected. A diagnostic test is a set of equations (or consistency relations) extracted from the system model, in which at least one equation is violated in the presence of the considered fault. A system model is called redundant if it consists of more equations than unknown variables. Assuming that model M = (C, Z) contains constraints (equations) C and variables Z, let the unknown variables X be the subset of all variables Z in model M (X ⊆ Z). The degree of redundancy of the model M is defined as:
ϕ(M) = |C| − |X|, (5)
where |C| denotes the number of equations, and |X| is the number of unknown variables contained in the model M.
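To make this structural bookkeeping concrete, the sketch below encodes a small toy structural model as a map from equations to the unknown variables they contain, and computes the degree of redundancy via a maximum bipartite matching, the mechanism that also underlies the canonical decomposition discussed next. The toy model, the equation and variable names, and the use of networkx are illustrative assumptions; this is not the paper's full 22-row incidence matrix.

```python
import networkx as nx

# Toy structural model (illustrative only): each equation is mapped to the
# unknown variables that appear in it; known (measured) signals are omitted.
structure = {
    "e1":   {"v_a", "i_a", "dlam_a"},   # phase-a voltage equation
    "e4":   {"i_a", "lam_a"},           # phase-a flux/current equation
    "d1":   {"lam_a", "dlam_a"},        # differential constraint on lam_a
    "m_va": {"v_a"},                    # voltage measurement y_va = v_a
    "m_ia": {"i_a"},                    # current measurement y_ia = i_a
}
unknowns = set().union(*structure.values())

# Build the bipartite equation-variable graph (the incidence matrix).
G = nx.Graph()
G.add_nodes_from(structure, bipartite=0)
G.add_nodes_from(unknowns, bipartite=1)
G.add_edges_from((e, x) for e, xs in structure.items() for x in xs)

# A maximum matching pairs unknowns with equations; when every unknown can
# be matched, the equations left over carry the analytical redundancy, so
# phi(M) = |C| - |X|.
matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=set(structure))
print("degree of redundancy:", len(structure) - len(unknowns))  # -> 1
print("unmatched (redundant) equations:",
      [e for e in structure if e not in matching])
```

For this toy model there are five equations and four unknowns, so exactly one equation remains unmatched and a single ARR can be formed by eliminating the unknowns along the matched paths.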
An example of this canonical decomposition is shown in Figure 4 This canonical decomposition is achieved after the rows and the columns of the main structural graph (structural model incidence matrix) are rearranged so that the matched variables and constraints appear on the diagonal. Therefore, having a decomposition tool that analyzes the redundancy of the structural model and forms this diagonal structure is very beneficial. Dulmage-Mendelsohn (DM) is a key decomposition tool that is applied on a structural model directly and obtains a unique diagonal structure by a clever reordering of equations and variables [39]. Figure 5 shows the DM decomposition for the PMSM structural model, where the analytic redundant part is expressed in the bottom-right part containing all the faults. Since this part includes redundancy and can be monitored, diagnostic tests can be designed with the set of ARRs in M + . As a result, if a fault is defined in the model and is supposed to be detected by the diagnosis system, a residual that is sensitive to the presence of that fault must exist. Diagnostic Test Design This section presents the procedure of designing diagnostic tests for ITSC and encoder faults. First, the over-determined part of the structural model is separated into smaller redundant subsystems where faults are observable, and then the sequence of obtaining residuals for the detection of each fault is explained. Minimal Testable Sub-Models According to the definition given by [38], an equation set M is a TES set if: M is a proper structurally over-determined set. 3. For any M M where M is a proper structurally over-determined set, it holds that where F(M) is the set of faults that influence any of the equations in M. A TES M is a minimal test equation support (MTES) if there exists no subset of M that is a TES holding the degree of redundancy of one. Following the algorithm in [38], the structural model is subdivided into efficient redundant MTES sets. Each MTES set contains a group of ARRs that together hold the degree of redundancy of one, meaning that there is only one equation more than the number of variables involved. In addition, they are obtained in a way that the effect of faults is considered. This reduces computational complexity significantly without reducing the possible diagnosis performance as compared to structurally over-determined (MSO) sets. Figure 6 shows all the MTES sets found for the considered structural model here, where each row of the matrix connects the corresponding MTES to the equations involved. Figure 7 shows the signature matrix of MTES sets, indicating which fault terms are included in each MTES. MTES 1 includes f θ and f ω fault terms that can be used for detecting a rotor's speed and angle measurement error. Diagnosability Index An important criterion for selecting MTES sets is to satisfy diagnosability requirements. This includes detectability of any single fault as well as isolability between any two faults. Here, an index for the proper selection of MTES sets that are suitable to be used in sequential residual generators is introduced. Zhang [40] proposed a diagnosability index that is aimed at achieving the maximum degree of diagnosability for each residual by comparing the distance between the fault signature matrices of MTES sets: where D(V f 0 , V f j ) stands for the distance between the fault signature of f j and the healthy case and measures the detectability of fault f j . 
Sequential Residuals for Detecting ITSC and Encoder Faults This section presents the sequence of deriving four residuals (R_1–R_4) based on the obtained MTES sets. These residuals aim to detect ITSC faults in any of the phase windings as well as encoder measurement faults. To form residual R_1, which is sensitive to an ITSC fault in the phase-a winding, an MTES set should be chosen that contains the f_va and f_λa fault terms. As can be seen in the fault signature matrix in Figure 7, MTES_7–MTES_10 can be used for forming such a residual, because these four MTES sets all contain the f_va and f_λa fault terms. Among them, an MTES set is preferred that contains a lower number of fault terms, because it will be more isolated and less influenced by other faults. MTES_7 and MTES_8 contain three fault terms, while MTES_9 and MTES_10 contain four fault terms. Therefore, either MTES_7 or MTES_8 should be chosen, and MTES_8 is preferred due to its lower number of involved equations (MTES_8 contains six equations, while MTES_7 contains eight equations), which leads to less complexity, as seen in Figure 6. MTES_4–MTES_6 can be used for forming residual R_2, because they contain the f_vb and f_λb fault terms. Among them, MTES_5 is preferred because it contains a lower number of fault terms compared to MTES_6 and a lower number of equations compared to MTES_4. Similarly, MTES_3 is chosen to form residual R_3, which is sensitive to an ITSC fault in the phase-c winding, as it contains a lower number of equations compared to MTES_2, given the fact that both contain the f_vc and f_λc fault terms. To form residual R_4, which is sensitive to an encoder fault (angular velocity and position measurements), an MTES set is preferred that contains both f_θ and f_ω, and the only MTES set that contains such fault terms is MTES_1. The combination of these four MTES sets, i.e., MTES_1, MTES_3, MTES_5, and MTES_8, yields a high diagnosability index, m_D = 1.88, and this maximizes the chance of discriminating each fault from the others. The sequential residuals are obtained as follows: 1. R_1: MTES_8 is used for deriving R_1 based on the error between the calculated and measured current of the phase-a winding, i.e., m_4 in (3): R_1 = y_ia − i_a. The sequence of obtaining these variables is as follows: e_4: i_a = (1/L_a)(λ_a − λ_m cos θ), where λ_a is a state variable, updated at each time-step by integrating the voltage equation e_1, i.e., dλ_a/dt = y_va − R_a y_ia. 2. R_2 and R_3 follow the same procedure mentioned for R_1, based on the error between the calculated and measured currents of phase b and phase c using MTES_5 and MTES_3, respectively. 3. R_4: MTES_1 is used for deriving R_4 based on the error between the calculated and measured shaft angular speed, i.e., m_8 in (3): R_4 = y_ωm − ω_m, and the sequence of obtaining the unknown variable ω_m is as follows: e_9: ω_m = p dθ/dt. (13) Experiments and Results The proposed diagnostic method is implemented and validated through an in-house experimental setup in this section. First, ITSC faults were applied to the phase windings of a four-pole PMSM, as shown in Figure 8. Each phase winding of the motor has two coils in series, each of which has 51 turns with three parallel branches. For phase a, one of the turns was short-circuited, i.e., about a 1% fault level. For phases b and c, three and five turns were short-circuited, resulting in almost 3% and 5% fault severity, respectively. 
The connection wires to these extra taps in the phase windings were taken out of the motor and connected to 100 mΩ resistors (similar to R_f in Figure 1), both to limit the short-circuit current and to emulate the winding insulation degradation, as shown in Figure 9. Furthermore, controllable relays were placed between the winding taps and the fault resistors to activate or deactivate the fault. The faulty motor was mechanically coupled to a generator as a variable load and to an incremental encoder to measure the rotor's angle and velocity. The motor was driven by a Watt&Well DEMT 3-ph voltage source inverter with embedded voltage and current sensors, fed by a Keysight N8949A dc supply. In addition, a dSpace MicroLabBox control unit was used as a real-time interface device for implementing both the control strategy and data acquisition from Matlab/Simulink with a sampling time of 50 µs. The parameters of the studied PMSM are listed in Table 1. To test the residual responses and the effectiveness of the diagnostic system, the motor was driven from standstill to the nominal speed, i.e., 1500 rpm, and kept in a steady-state condition. During the operation of the motor, the encoder and ITSC faults were applied at different time intervals using the controllable relays. At t = 1–3 s, the encoder measurement fault was applied with a 1 rad/s error. At t = 4.471–7.238 s, the ITSC fault in phase a was applied, which had 1% fault severity (one shorted turn in the phase-a winding); at t = 9.613–12.76 s, the ITSC fault in phase b appeared with 3% fault severity (three shorted turns in the phase-b winding); and at t = 15.6–18.41 s, the ITSC fault in phase c with 5% fault severity (five shorted turns in the phase-c winding) was applied to the motor. The residual responses for the mentioned faults were obtained and are shown in Figure 10. Before the faults were applied, the motor was operating in a healthy mode (t = 0–1 s), and all the residuals remained at zero on average (neglecting the noise). This is because there was no error between the measured signals and the calculated ones used in each residual. First, when the encoder fault appeared, R_4 obtained a nonzero dc value, and it went back to an average of zero as soon as the fault disappeared. When the ITSC fault in phase a was applied, R_1 was directly affected and obtained a higher oscillating value. Due to the mutual induction of the fault current, this fault was also observable in R_2 and R_3. In addition, the controller response had a role in the increase of the other phase currents: since part of the winding was shorted out, more I_q was required to keep the motor speed constant at 1500 rpm. The same logic applies to the ITSC faults in phases b and c, as the residuals obtain higher oscillating values. The behavior and response of the residuals during each ITSC fault can be used as the basis for the detection of faults in the PMSM. This is implemented using statistical signal processing (detection theory) and explained in the following section. Diagnostic Decision Using the residual responses, a diagnostic decision-making system was designed to detect the ITSC faults based on statistical signal processing and detection theory. While R_4 can be directly used to detect encoder faults, a combination of R_1–R_3 is required to effectively detect ITSC faults. The R_1–R_3 residuals obtained in the previous section are designed based on the abc-frame voltage equations e_1–e_3 in (2), and an ITSC fault in any phase creates an unbalance in the residual output. 
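To illustrate how one of these sequential residuals runs in real time, the sketch below implements the chain derived above for R_1: the phase-a flux state is integrated from the measured voltage and current via e_1, the current is reconstructed via e_4, and the residual is the mismatch with the measured current. The forward-Euler update, the flux initialization, and all names are illustrative assumptions rather than the exact dSpace implementation.

```python
import numpy as np

def residual_r1(y_va, y_ia, y_theta, Ts, Ra, La, lam_m):
    """Sequential residual R1 = measured minus reconstructed phase-a current.
    y_va, y_ia, y_theta: sampled measurements; Ts: sample time in seconds."""
    lam_a = lam_m * np.cos(y_theta[0])   # flux-state initialization (assumed)
    r1 = np.empty(len(y_ia))
    for k in range(len(y_ia)):
        # e4: reconstruct the phase current from the flux linkage
        i_a_hat = (lam_a - lam_m * np.cos(y_theta[k])) / La
        r1[k] = y_ia[k] - i_a_hat
        # e1 (healthy model): d(lam_a)/dt = v_a - Ra * i_a, forward Euler
        lam_a += Ts * (y_va[k] - Ra * y_ia[k])
    return r1
```

In a healthy machine r1 stays near zero up to noise; an ITSC fault in phase a makes the healthy-model flux update inconsistent with the measurements, producing the oscillating pattern seen in Figure 10.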
Before designing the statistical detector, and to form a better index that obtains a nonzero dc value in the case of an ITSC fault, the residuals in the abc frame are taken into the αβ frame using the power-invariant Clarke transformation as follows:
R_α = √(2/3) (R_1 − R_2/2 − R_3/2), R_β = √(2/3) (√3/2)(R_2 − R_3). (14)
The absolute value of the resultant is calculated as:
R_r = √(R_α² + R_β²). (15)
Figure 11 shows the absolute value of the resultant residual in the αβ frame, where the ITSC faults in all phases are more obvious compared to the abc residuals R_1–R_3. In implementing a structural analysis, the goal is to form residuals that have a zero value in a healthy scenario and a nonzero value in a faulty scenario. However, derivatives, integrals, and even uncertainties in the dynamic model affect the calculation of the unknown variables and cause the output signal to be somewhat distorted. In addition, phenomena such as environmental noise and switching noise affect the signals. These lead to a residual output signal that fluctuates around zero, instead of a perfect signal holding an absolute zero value in a healthy scenario. Even in a faulty scenario, the residual signal fluctuates around a nonzero value, as seen in Figure 11. Therefore, extra signal processing is required to deal with the model uncertainties and environmental noise and to be able to distinguish and isolate the indicator signal from the noise. Here, a generalized likelihood ratio test (GLRT) is proposed to deal with such model uncertainties and also to provide the ground for calculating and setting thresholds based on the probabilities of detection and false alarm in a formulated and scientific manner. Generalized Likelihood Ratio Test The GLRT is a composite hypothesis testing approach that can be used for detecting a signal in realistic problems [42]. It is noted that the GLRT does not require prior knowledge of unknown parameters such as the mean (µ) and variance (σ²) values in the probability density function (PDF) of a signal. The GLRT deals with unknown parameters by replacing them with their maximum likelihood estimates (MLEs). If the data x have the PDF p(x; θ_0, H_0) under the null hypothesis H_0 and p(x; θ_1, H_1) under the alternative hypothesis H_1, the GLRT decides H_1 if
L_G(x) = p(x; θ̂_1, H_1) / p(x; θ̂_0, H_0) > γ, (16)
where θ̂_1 is the MLE of θ_1 assuming H_1 is true, θ̂_0 is the MLE of θ_0 assuming H_0 is true, and γ is the threshold. Design of Test Statistic Based on Generalized Likelihood Ratio Test Before going through the design process, it is beneficial to know the PDF of the measurement noise signal. This gives us enough knowledge to make assumptions that are close to our realistic problem. Using the first part (t = 0–1 s) of the resultant residual in Figure 11, the PDF of the noise signal in the noise-only hypothesis is obtained and shown in Figure 12. The PDF of the noise signal in Figure 12 is very close to the PDF of white Gaussian noise (WGN); thus, it can reasonably be modeled with a WGN probability distribution function. To design a realistic detector, it is assumed that the arrival time of the fault is completely unknown. Furthermore, the PDF is not completely known, meaning that the parameters mean µ and variance σ² are to be estimated using MLE. The noise in the resultant residual during operation in a healthy condition is modeled as WGN. Since the resultant residual (R_r) obtains a nonzero dc value when ITSC faults appear, the data are considered as noise only under the non-faulty hypothesis H_0, and as a dc level added to the noise under the faulty hypothesis H_1. 
Thus, the detection problem becomes as follows:
H_0: x[n] = w[n], n = 0, 1, ..., N − 1
H_1: x[n] = A + w[n], n = 0, 1, ..., N − 1 (17)
where w[n] is WGN with unknown variance, and the GLRT decides H_1 if
L_G(x) = p(x; Â, σ̂_1², H_1) / p(x; σ̂_0², H_0) > γ, (18)
where Â and σ̂_1² are the MLEs of the parameters A and σ_1² under H_1, and σ̂_0² is the MLE of the parameter σ_0² under H_0. By maximizing p(x; A, σ², H_1), i.e., solving ∂p(x; A, σ², H_1)/∂A = 0 and ∂p(x; A, σ², H_1)/∂σ² = 0, the parameters Â and σ̂_1² are obtained as follows [43]:
Â = x̄ = (1/N) Σ x[n], (19)
σ̂_1² = (1/N) Σ (x[n] − x̄)². (20)
Similarly, by maximizing p(x; σ_0², H_0), σ̂_0² is obtained as follows:
σ̂_0² = (1/N) Σ x[n]². (21)
Therefore, (18) becomes
L_G(x) = (σ̂_0² / σ̂_1²)^(N/2) > γ,
which is equivalent to
2 ln L_G(x) = N ln(σ̂_0² / σ̂_1²) > 2 ln γ.
From (19) and (21), σ̂_0² can intuitively be obtained as follows:
σ̂_0² = σ̂_1² + x̄²,
which yields
2 ln L_G(x) = N ln(1 + x̄² / σ̂_1²).
Since ln(1 + x̄²/σ̂_1²) is monotonically increasing with respect to x̄²/σ̂_1², an equivalent and normalized test statistic can be obtained as follows:
T(x) = N x̄² / σ̂_1². (27)
The GLRT has normalized the statistic by σ̂_1², which allows the threshold to be determined. Since the PDF of T(x) under the null hypothesis H_0 does not depend on σ², the threshold is independent of the value of σ² [42]. GLRT for Large Data Records As N → ∞, the asymptotic PDFs of x̄ will converge to normal distributions under both hypotheses as follows:
x̄ ~ N(0, σ²/N) under H_0, x̄ ~ N(A, σ²/N) under H_1,
and therefore:
x̄ / √(σ̂_1²/N) ~ N(0, 1) under H_0, x̄ / √(σ̂_1²/N) ~ N(√λ, 1) under H_1. (29)
Squaring the normalized statistic in (29) leads to the modified test statistic T(x) in (27), which produces a central chi-squared distribution under H_0 and a noncentral chi-squared distribution under H_1, with one degree of freedom:
T(x) ~ χ_1² under H_0, T(x) ~ χ_1²(λ) under H_1, (30)
where λ is the noncentrality parameter and is calculated as [42]:
λ = N A² / σ². (31)
It was shown in (30) that T(x) has a noncentral chi-squared distribution with one degree of freedom under H_1, and it is equal to the square of the normalized random variable in (29), which is therefore distributed as N(√λ, 1). Thus, the probability of a false alarm (P_FA) can be obtained as:
P_FA = Pr{T(x) > γ; H_0} = 2Q(√γ), (32)
where Q(·) is the right-tail probability of the standard normal random variable. Thus, the threshold can be obtained as follows:
γ = [Q⁻¹(P_FA/2)]². (33)
Similarly, the probability of detection P_D can be obtained as follows:
P_D = Q(√γ − √λ) + Q(√γ + √λ). (34)
GLRT Test on Residual Response For the case study, the statistical detector should be designed so that it is able to detect even the smallest ITSC fault (<1%). Therefore, the noncentrality parameter λ is calculated based on the implementation of (31) on the resultant residual at t = 4.471–7.238 s, when the motor is experiencing the lowest ITSC fault level in the phase-a winding, which yields λ = 6.78. Using this value, the threshold and receiver operating characteristics (ROC) of the detector are obtained based on (32)–(34) and shown in Figure 13. The P_FA values here correspond to the lowest ITSC fault level, in phase a, which means that the other ITSC faults, in phases b and c, have lower P_FA values. Using P_FA = 2%, the threshold is obtained as γ = 5.41, and this results in P_D = 60.93% for the ITSC fault in phase a. Furthermore, the probabilities of detection for the ITSC faults in phases b and c and for the encoder fault are calculated as P_D = 98.13%, P_D = 100%, and P_D = 99.65%, respectively. The test statistic was implemented on the resultant residual as shown in Figure 14. The values x̄² and σ̂_1² were calculated using a moving window (FIFO register) with a length of N = 10,000, which runs through the resultant residual over time. Figure 14a shows the output of the test statistic on the resultant residual along with the threshold of γ = 5.41, while Figure 14b shows the output of the test statistic on R_4. The test statistic's output value is compared with the threshold value over time, and if it exceeds the threshold, the fault alarm is tripped accordingly. Figure 15 shows the detector's logical output value, which attains a low value in a healthy condition and a high value during a faulty case. 
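The following sketch strings together the decision chain just described: the power-invariant Clarke resultant of the abc residuals from (14) and (15), the moving-window test statistic T(x) = N x̄²/σ̂_1² of (27), the threshold of (33), and the detection probability of (34). Function names and the window handling are illustrative; the formulas follow the reconstruction above.

```python
import numpy as np
from scipy.stats import norm

def resultant_residual(r1, r2, r3):
    """|R_r| from the power-invariant Clarke transform of the abc residuals."""
    r_alpha = np.sqrt(2.0 / 3.0) * (r1 - 0.5 * r2 - 0.5 * r3)
    r_beta = np.sqrt(2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (r2 - r3)
    return np.hypot(r_alpha, r_beta)

def glrt_threshold(p_fa):
    """gamma = [Q^-1(P_FA / 2)]^2, with Q^-1 the standard-normal inverse
    right-tail function (scipy's isf)."""
    return norm.isf(p_fa / 2.0) ** 2

def glrt_statistic(x, N=10_000):
    """Moving-window GLRT statistic T = N * xbar^2 / sigma1_hat^2."""
    T = np.zeros(len(x))
    for k in range(N, len(x)):
        w = x[k - N:k]                        # FIFO window of last N samples
        T[k] = N * w.mean() ** 2 / w.var()    # np.var is the MLE (ddof=0)
    return T

def prob_detection(p_fa, lam):
    """P_D = Q(sqrt(gamma) - sqrt(lambda)) + Q(sqrt(gamma) + sqrt(lambda))."""
    g = np.sqrt(glrt_threshold(p_fa))
    return norm.sf(g - np.sqrt(lam)) + norm.sf(g + np.sqrt(lam))
```

With P_FA = 2% and λ = 6.78, glrt_threshold and prob_detection reproduce γ ≈ 5.41 and P_D ≈ 60.9%, matching the values reported above for the phase-a fault; the alarm signal is simply T(x) > γ.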
Figure 15 thus demonstrates that the detector successfully detected all the faults, with detection rates fairly close to the expected values of P_D, while experiencing no false alarms. Discussion Some remarks can be drawn regarding the presented methodology and the obtained results. First, structural analysis for detecting ITSC and encoder faults was successfully implemented on the in-house setup, including the PMSM and the drive system, and the residuals were formed based on ARRs. Second, a GLRT-based detector was designed to effectively detect the changes in the residuals even with unknown noise parameters. Third, a scientifically grounded threshold was calculated based on the probability of a false alarm (P_FA) and the probability of detection (P_D). The suggested combined method is very effective for fault detection, since it can detect the lowest level of ITSC fault, i.e., one single shorted turn (<1%) in the stator winding. On the other hand, using the Clarke transformation prevents the diagnostic system from isolating the ITSC faults in different phases, and using a moving window with a length of N = 10,000 over the test statistic causes a delay in the detection of the faults. These minor drawbacks were found when testing the diagnostic method under the smallest ITSC fault. In previous studies, a GLRT-based detector was implemented for stator imbalance fault detection in induction motors [44]. The noise parameters were also considered unknown, and therefore they were replaced with their MLEs. Moreover, a threshold was calculated based on P_FA = 0.1% and P_D, which makes the diagnostic system experience fewer false alarms. However, the first fault level that that system can detect is 25% of the stator phase resistance, which is a quite high level of fault severity. As a result, the system would be in severe imbalance from the time that the fault appears until the time the diagnostic system detects it. In our case, even if P_FA were chosen as 0.1%, the P_D for the ITSC fault in phase a would be 24.61%, the P_D for the ITSC fault in phase b would be 86.8%, the P_D for the ITSC fault in phase c would be 99.99%, and the P_D for the encoder fault would be 95.86%. Thus, the diagnostic system still detects the smallest fault, even with P_FA = 0.1%. However, since a slightly higher probability of false alarm is acceptable (P_FA = 2%), a better probability of detection is achieved (P_D = 60.93%) in our study by setting a lower threshold. Other studies with different methods have also chosen a higher level of fault as the starting point. A Kalman filter for the detection of ITSC faults in PM synchronous generators was implemented in [45], which can successfully detect fault levels as low as 8%. In addition, a combination of the extended Park's vector approach with spectral frequency analysis was introduced in [46], which could successfully detect three shorted turns in synchronous and induction motors. Conclusions This paper presents a novel method for the real-time and effective detection of incipient ITSC and encoder faults in PMSMs. Structural analysis was employed to form the structural model of the PMSM. The Dulmage–Mendelsohn decomposition tool was used to evaluate the analytical redundancy of the structural model. The proposed diagnostic model was implemented on an industrial PMSM; ITSC and encoder faults were applied to the system in different time intervals, and the residual responses were obtained. 
Conclusions

This paper presents a novel method for real-time and effective detection of incipient ITSC and encoder faults in the PMSM. Structural analysis was employed to form the structural model of the PMSM. The Dulmage-Mendelsohn decomposition tool was used to evaluate the analytical redundancy of the structural model. The proposed diagnostic model was implemented on an industrial PMSM; ITSC and encoder faults were applied to the system in different time intervals, and residual responses were obtained. Subsequently, a GLRT-based detector was designed and implemented based on the behavior of the residuals under healthy (noise only) and faulty (noise + signal) conditions. To make the GLRT-based detector capable of dealing with such a realistic problem, the parameters of the probability density function of the noise signal, the mean µ and the variance σ², were considered unknown. By replacing these unknown parameters with their maximum likelihood estimates, a test statistic was obtained for the GLRT-based ITSC and encoder fault detector. Following this step, a threshold was obtained by choosing the probability of a false alarm $P_{FA}$ and the probability of detection $P_D$ for each detector, based on which the decision indicating the presence of a fault was made. The experimental results show that the designed GLRT-based detector is able to efficiently detect even small ITSC and encoder faults in the presence of noise, proving the effectiveness of this diagnostic approach.
Fecal Klebsiella pneumoniae Carriage Is Intermittent and of High Clonal Diversity

The Klebsiella pneumoniae complex comprises several closely related entities, which are ubiquitous in the natural environment, including in plants, animals, and humans. K. pneumoniae is the major species within this complex. K. pneumoniae strains are opportunistic pathogens and a common cause of healthcare-associated infections. K. pneumoniae can colonize the human gastrointestinal tract, which may become a reservoir for infection. The aim of this study was to investigate fecal K. pneumoniae carriage in six healthy individuals during a 1 year period. Stool samples were obtained once a week. Using direct and pre-enriched cultures streaked on ampicillin-supplemented agar plates, up to eight individual colonies per positive sample were selected for further characterization. Whole genome sequencing (WGS) was performed for strain characterization. Sequence type (ST), core genome complex type (CT), K and O serotypes, virulence traits, antibiotic resistance profiles, and plasmids were extracted from the WGS data. In total, 80 K. pneumoniae isolates were obtained from 48 positive cultures of 278 stool samples from five of the six test subjects. The samples of the five colonized volunteers yielded at most two, three, four (two persons), and five different strains, respectively. These 80 K. pneumoniae isolates belonged to 60 STs, including nine new STs; they were of 70 CTs and yielded 48 K serotypes, 11 O serotypes, and 39 wzc and 51 wzi alleles. Four of the five subjects harbored serotypes K20 and K47, as well as STs ST37, ST101, ST1265, and ST20, which had previously been linked to high-risk K. pneumoniae clones. In total, 25 genes conferring antibiotic resistance and 42 virulence genes were detected among all 80 isolates. Plasmids of 15 different types were found among 65 of the isolates. Fecal carriage of individual strains was of short duration: 70 strains were found on a single sampling day only, and 5 strains were isolated in samples collected over two consecutive weeks. Two of the five colonized individuals (working colleagues having meals together) shared identical K. pneumoniae types four times during the study period. Our findings point toward the potential role of food as a reservoir for K. pneumoniae in humans.

INTRODUCTION

Klebsiella pneumoniae was first described in 1882 as a bacterium isolated from the lungs of patients who had died from pneumonia (Friedlaender, 1882). The K. pneumoniae complex consists of closely related species designated as K. pneumoniae phylogroups Kp1-Kp7, comprising K. pneumoniae subsp. ozaenae, K. pneumoniae subsp. pneumoniae, K. pneumoniae subsp. rhinoscleromatis, K. quasipneumoniae subsp. quasipneumoniae, K. quasipneumoniae subsp. similipneumoniae, K. variicola subsp. variicola, K. variicola subsp. tropica, K. africana, and K. quasivariicola (Rodrigues et al., 2018, 2019). The K. pneumoniae complex can be found ubiquitously in nature, including in plants, animals, and humans (Lai et al., 2019). Most K. pneumoniae infections in Europe and North America are healthcare-associated and caused by classical K. pneumoniae strains (cKp) (Russo et al., 2018).
With the emergence of carbapenem-resistant strains, infections due to cKp have become a major public health threat (World Health Organization [WHO], 2017; Wyres and Holt, 2018), causing life-threatening nosocomial infections like urinary tract infections, bloodstream infections, and pneumonia in immunocompromised and critically ill patients (Podschun and Ullmann, 1998). K. pneumoniae is a listed ESKAPE pathogen, an acronym defined by the Infectious Diseases Society of America for antibiotic-resistant Enterococcus faecium, Staphylococcus aureus, K. pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp. (Rice, 2008). In 1986, hypervirulent K. pneumoniae (hvKp) strains emerged in Asian countries, associated with community-acquired infections like pyogenic liver abscess, meningitis, endophthalmitis, soft tissue abscesses, urinary tract infections, and pneumonia (Martin and Bachman, 2018; Russo and Marr, 2019). In contrast to cKp strains, hvKp strains cause infections mainly in young and healthy individuals (Struve et al., 2015; Paczosa and Mecsas, 2016). In contrast to cKp, which is the dominating cause of infections in Western countries, hvKp strains are endemic mainly in countries of the Asia-Pacific region. A differentiation between cKp and hvKp is challenging due to overlapping characteristics of both pathotypes (Russo and Marr, 2019). Several virulence factors present on large virulence plasmids (pK2044 and pLVPK) have been identified, allowing the most accurate discrimination of cKp from hvKp (Lee et al., 2017; Russo et al., 2018; Russo and Marr, 2019). Key virulence factors necessary for infection are the polysaccharide capsule (K antigen) and the lipopolysaccharide (O antigen), which contribute to serum resistance and resistance to phagocytosis (Cortés et al., 2002). HvKp clones circulating in the community are associated with particular capsule types, mainly K1, K2, K20, and K57 (Lee et al., 2016), and certain sequence types (STs) like ST23, ST65, ST86, ST375, and ST380 (Bialek-Davenet et al., 2014; Lin et al., 2014; Lee et al., 2017). The convergence of carbapenem resistance and virulence has resulted in the emergence of carbapenem-resistant hvKp strains in China, which is expected to become a serious future public health issue (Zhao et al., 2020). K. pneumoniae can colonize the nasopharynx and the gastrointestinal tract. Gastrointestinal colonization of healthy individuals with undefined pathotypes ranged from 5 to 35% in Western countries (Martin et al., 2016; Gorrie et al., 2017) and from 19 to 88% in Asian countries (Chung et al., 2012). Nasopharyngeal colonization of healthy humans ranged from 1 to 5% in Western countries and from 1.4 to >20% in Asian countries and Brazil (Lima et al., 2010; Farida et al., 2013; Dao et al., 2014). Contamination of food with K. pneumoniae and a generally poor sanitation status have been associated with increased colonization of healthy humans (Farida et al., 2013; Huynh et al., 2020). In a study from Malaysia, 32% of street food samples tested positive for K. pneumoniae (Haryani et al., 2007). Colonization has been identified as a potential reservoir for infection with Kp strains (Gorrie et al., 2017), and the infection risk with K. pneumoniae is considered to be four times higher for colonized patients compared to non-carriers (Selden et al., 1971; Martin et al., 2016). During warm months, K. pneumoniae bloodstream infection rates are 1.5 times higher, reflecting an increased fecal carriage rate in humans in summer (Anderson et al., 2008).
Therefore, screening of healthy individuals is a recommended action to obtain an overview of strain diversity and to detect emerging resistant and virulent strains (Russo and Marr, 2019). To our best knowledge, there is no longitudinal Kp colonization study of healthy individuals. Most studies have focused on the short- or long-term colonization of hospitalized patients. Therefore, the aim of this study was to investigate the colonization pattern of K. pneumoniae in healthy humans during a 1 year period.

Sample Collection and Microbiological Culturing of K. pneumoniae

From calendar week (CW) 15/2018 to CW14/2019, fecal samples from six healthy individuals were screened for the presence of K. pneumoniae. Fecal samples of about 2 g were collected in sterile plastic containers once a week and processed in the laboratory within 24 h. Volunteers lived in six different households in Vienna (subject 1) and Graz (subjects 2-6). Subjects 2 and 4 often spent lunch breaks together, having their meals in various restaurants. Subject 1 was 60-65 years old, subjects 2 and 4 were aged 25-30, subject 3 was aged 40-45, and subjects 5 and 6 were aged 50-55 years. Subject 4 followed a gluten-free diet. Subject 6 was vegetarian but ate fish. To detect K. pneumoniae, all feces samples were plated on Simmons citrate agar with 1% inositol (SCAI) (BIO-RAD, Hercules, United States) and incubated for 48 h at 44 °C. In addition, broth enrichment was performed (1 g feces in 9 ml LB medium with 10 µg/l ampicillin overnight at 37 °C), followed by cultivation on SCAI medium for 48 h at 44 °C. Up to eight single colonies morphologically resembling K. pneumoniae were selected from each agar plate and subcultured for further processing. Species confirmation was carried out using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) on a Biotyper instrument (Bruker, Billerica, MA, United States) according to the manufacturer's instructions.

DNA Extraction and Whole Genome Sequencing

DNA was isolated from bacterial cultures using the MagAttract HMW DNA Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol for gram-negative bacteria. The amount of input DNA was quantified on a Lunatic instrument (Unchained Labs, Pleasanton, CA, United States). Ready-to-sequence libraries were prepared using the Nextera XT DNA library preparation kit (Illumina, San Diego, CA, United States); paired-end sequencing with a read length of 2 × 300 bp using Reagent Kit v3 chemistry (Illumina) was performed on a MiSeq instrument (Illumina).

Sequence Data Analysis

All study isolates were sequenced to obtain a coverage of at least 80-fold. The obtained raw reads were quality controlled using FastQC v0.11.9 and de novo assembled using SPAdes (version 3.9.0) (Bankevich et al., 2012) to produce draft genomes. Contigs were filtered for a minimum coverage of 5× and a minimum length of 200 bp using SeqSphere+ software (v6.0.0) (Ridom, Münster, Germany). The classical multilocus sequence type (MLST) (Diancourt et al., 2005) and the public K. pneumoniae sensu lato core genome MLST (cgMLST; https://www.cgmlst.org/ncs/schema/2187931/) were determined using SeqSphere+. For MLST, new combinations of alleles or new allele types composing new sequence types (STs) were submitted to the curators of the MLST database. For phylogenetic analysis, minimum spanning trees (MSTs) were calculated based on the sensu lato cgMLST scheme; related isolates were identified with a complex type (CT) distance of 15 alleles (see the cgMLST schema URL above).
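The contig filter described above (minimum coverage 5×, minimum length 200 bp) was applied in SeqSphere+, but the same step can be sketched in a few lines of Python. The snippet below is a hypothetical stand-in, not the authors' workflow; it assumes SPAdes' standard FASTA header format (NODE_<n>_length_<len>_cov_<cov>), from which contig length and k-mer coverage can be parsed directly.

```python
import re

HEADER = re.compile(r"NODE_\d+_length_(\d+)_cov_([\d.]+)")

def filter_contigs(fasta_path, min_cov=5.0, min_len=200):
    """Yield (header, sequence) for SPAdes contigs passing both filters."""
    keep, header, seq = False, None, []
    with open(fasta_path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if keep and seq:
                    yield header, "".join(seq)
                m = HEADER.search(line)
                # keep contigs with length >= 200 bp and coverage >= 5x
                keep = bool(m) and int(m.group(1)) >= min_len \
                               and float(m.group(2)) >= min_cov
                header, seq = line, []
            elif keep:
                seq.append(line)
        if keep and seq:
            yield header, "".join(seq)

# Example: write the filtered draft genome of one isolate
# (file names are illustrative only)
# with open("filtered.fasta", "w") as out:
#     for h, s in filter_contigs("spades_contigs.fasta"):
#         out.write(f"{h}\n{s}\n")
```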
The diversity of the capsule synthesis loci (K loci) and the lipopolysaccharide O antigen (O loci), as well as the allele diversity of the K locus genes wzc and wzi, were determined using Kaptive Web (Wick et al., 2018). Plasmids and genes conferring antibiotic resistance were detected using PlasmidFinder 1.3 (Carattoli et al., 2014), available from the Center for Genomic Epidemiology, and the comprehensive antibiotic resistance database (CARD) (Jia et al., 2017). Virulence genes were detected using the virulence allele library from the Institut Pasteur BIGSdb database for K. pneumoniae (Bialek-Davenet et al., 2014).

Nucleotide Sequence Accession Numbers

This Whole Genome Shotgun project has been deposited at DDBJ/EMBL/GenBank under the accession PRJNA663884. The version described in this paper is the first version. The raw sequence reads have been deposited in the Sequence Read Archive (SRA) under accession no. SRR12653693-SRR12653772.

RESULTS

During the study period from CW15 in 2018 to CW14 in 2019, a total of 278 stool samples (43-49 samples per subject) from the six study participants were analyzed (Figure 1). Forty-eight of these 278 stool samples yielded K. pneumoniae: subject 1 in two of 46 weekly samples (in total: 5 clones); subject 2 in 13 of 49 samples (in total: 13 clones); subject 3 in 0 of 45 samples (in total: 0 clones); subject 4 in 15 of 48 samples (in total: 26 clones); subject 5 in 7 of 47 samples (in total: 10 clones); and subject 6 in 11 of 43 stool samples (in total: 17 clones) (Figure 1 and Table 1). Altogether, 80 K. pneumoniae isolates were retrieved from the 278 stool samples. Subject 3 was negative for K. pneumoniae colonization during the whole study period. The remaining five test persons were colonized with K. pneumoniae in samples spanning a total of 2-15 weeks (mean: 8; median: 9) during the 1 year study period. No correlation between the number of K. pneumoniae positive stool samples and the seasons could be observed (Figure 1). The 80 K. pneumoniae isolates were assigned to 60 different classical STs and 70 cgMLST complex types (CTs) (Table 1 and Figure 2). On average, the study isolates had 99.7% (98.6-100%) good core genome targets of the defined cgMLST scheme. For nine isolates, which were obtained from the five colonized volunteers, new STs were determined and submitted to the K. pneumoniae MLST database: volunteer 1 (ST4099), volunteer 2 (ST4133), volunteer 4 (ST4090, ST4098, ST4102), volunteer 5 (ST4092), and volunteer 6 (ST4121, ST4122, and ST4123) (Table 1 and Figure 1). Serotype analysis from the WGS data identified 39 wzc and 51 wzi alleles, 48 K serotypes, and 11 O serotypes (Table 1 and Supplementary Table 1). Eighteen isolates yielded no wzc allele, and three of these additionally yielded no wzi allele. Among the 48 K serotypes, 22 had low or no match confidence as defined by Kaptive Web (Supplementary Table 1). As shown above, inter- and intra-proband strain diversity was high, with 60 different STs and 70 different CTs among 80 isolates. The volunteers were colonized with strains belonging to the same STs (ST20, ST34, ST37, ST45, and ST200) several times during the study period (Figures 1, 2 and Table 1). CgMLST analysis revealed an inter-patient core genome diversity of strains with the same ST of 88 to 679 allelic differences.
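Relatedness in this scheme reduces to counting allelic differences between cgMLST profiles and grouping isolates whose pairwise distance stays within the complex type threshold of 15 alleles. The sketch below is a simplified illustration of that logic (single-linkage grouping over allele-number profiles), not the SeqSphere+ implementation; profiles are assumed to be dictionaries mapping locus names to allele numbers, with loci missing from either profile ignored in the comparison.

```python
def allelic_distance(profile_a, profile_b):
    """Count differing alleles over loci typed in both profiles."""
    shared = profile_a.keys() & profile_b.keys()
    return sum(profile_a[locus] != profile_b[locus] for locus in shared)

def complex_types(profiles, threshold=15):
    """Single-linkage grouping: an isolate within <= threshold alleles
    of any member joins that group (analogous to a CT cluster)."""
    groups = []
    for name, prof in profiles.items():
        hits = [g for g in groups
                if any(allelic_distance(prof, profiles[m]) <= threshold
                       for m in g)]
        merged = {name}
        for g in hits:            # merge every group this isolate links to
            merged |= g
            groups.remove(g)
        groups.append(merged)
    return groups

# Toy example with three loci only (real schemes use a few thousand loci):
profiles = {
    "iso1": {"locus1": 4, "locus2": 7, "locus3": 1},
    "iso2": {"locus1": 4, "locus2": 7, "locus3": 2},  # 1 difference
    "iso3": {"locus1": 9, "locus2": 3, "locus3": 5},
}
print(complex_types(profiles, threshold=1))
# -> [{'iso1', 'iso2'}, {'iso3'}] (set element ordering may vary)
```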
Subjects 2 and 4 shared four strains with identical STs, CTs, and K serotypes: ST1265/CT2688/K33 isolates (Figure 2, cluster 1) were collected in CW16/18 and differed by one allele in their cgMLST (both volunteers had the same meal in the same restaurant the day before sampling); ST632/CT2724/K141 strains (Figure 2, cluster 3) were collected in CW21/18 and CW22/18 and differed by four alleles (both volunteers had the same meal at a birthday party in CW21); ST1758/CT2750/K27 strains (Figure 2, cluster 10) were collected in CW32/18 and shared an identical set of cgMLST alleles (both volunteers had the same meal at a birthday party in CW31); and ST469/CT2721/K105 strains (Figure 2, cluster 2) were collected at an interval of 17 CWs (CW19/18 and CW36/18) and showed one allelic difference (no correlation detectable). All five isolates of proband 1, which were derived from two stool samples, were unrelated as determined by ST, wzc and wzi allele typing, serotyping, and cgMLST analysis (Figures 2, 3 and Table 1). From proband 2, 15 isolates were cultured and assigned to 13 different STs and 13 different K serotypes. The volunteer was colonized with an ST915/CT2759/K107 isolate for two consecutive CWs (CW37/18-38/18).

FIGURE 2 | Minimum spanning tree (MST) based on cgMLST analysis of 80 K. pneumoniae isolates derived from subjects 1, 2, 4, 5, and 6. Numbers on connection lines represent allelic differences between isolates. Isolates are colored by sequence type (ST).

DISCUSSION

Recent studies have shown that gastrointestinal colonization with K. pneumoniae is a common and significant reservoir for the transmission and subsequent infection of patients (Martin et al., 2016; Dorman and Short, 2017; Gorrie et al., 2017). In our study, K. pneumoniae was found in 0.0-31.3% (mean 17.2%) of the stool samples tested. This is lower than in previous studies, which reported colonization rates of 37.5% (Marques et al., 2019) and 55.9% (Huynh et al., 2020), but is in concordance with other studies reporting colonization rates of 4-10% for test subjects (Choby et al., 2020). In contrast to other studies, in which an increased fecal carriage rate during the summer was reported (Anderson et al., 2008), no such seasonal correlation could be observed in our study. It is of interest that one individual remained K. pneumoniae free during the entire 1 year study period. An explanation for this colonization failure might be a specific composition of this test person's microbiota that prevented K. pneumoniae from persisting in the gut, as has previously been shown in ICU patients (Collingwood et al., 2020). All other five participants in our study were colonized with K. pneumoniae strains in at least one of the weekly obtained samples, with individual stool samples yielding up to five different strains. Colonization with multiple strains has already been reported in other studies (Marques et al., 2019). K. pneumoniae high-risk clonal lineages are either multidrug-resistant strains, mainly causing severe infections in hospitals (Navon-Venezia et al., 2017), or drug-susceptible hypervirulent strains (hvKp), causing infections in the community mainly in younger and healthy individuals (Paczosa and Mecsas, 2016).

FIGURE 7 | Minimum spanning tree (MST) based on cgMLST analysis of 19 K. pneumoniae isolates derived from subject 6. Numbers on connection lines represent allelic differences between isolates. Isolates are colored by date of isolation.
High-risk clonal lineages of the multidrug-resistant type exist worldwide and can be assigned to certain K. pneumoniae STs (Roe et al., 2019; Huynh et al., 2020). Although mainly found in hospitals, these clones can also colonize individuals outside hospitals (Holt et al., 2015). Since colonization is a potential reservoir for infection with K. pneumoniae strains (Gorrie et al., 2017), investigation of the rates and duration of carriage is important to assess the potential risk for the community. In our study, the diversity of isolates colonizing the test persons was high, and colonization with a specific strain occurred for a maximum of two consecutive weeks. Also based on the finding that two individuals who regularly ate meals together were colonized several times with identical strains, we hypothesize that the high diversity of isolates in our study is due to the consumption of contaminated food; food as a source of K. pneumoniae carriage has been described previously (Huynh et al., 2020; Koliada et al., 2020). The observed colonization of healthy individuals with diverse strains over short time periods is in contrast to the situation in hospitals, where patients are colonized over long periods of time with specific resistant clones due to treatment with antibiotics (Martin et al., 2016). In our study, no multidrug-resistant K. pneumoniae isolates were found, which is concordant with recent studies on healthy individuals without reported use of antibiotics (Marques et al., 2019; Huynh et al., 2020). In total, 25 resistance genes, mainly genes encoding efflux pumps, were found among all 80 K. pneumoniae isolates. All isolates carried SHV beta-lactamases. In conclusion, our study revealed that fecal K. pneumoniae carriage is intermittent and of high clonal diversity. Colonization with a specific strain could be observed for a maximum of only two consecutive calendar weeks. Two of the five colonized individuals (working colleagues who had the same meals together several times) shared identical K. pneumoniae types four times during the study period, pointing toward the potential role of food as a reservoir of K. pneumoniae for humans, as also described recently (Huynh et al., 2020). In contrast to E. coli, which is a lifelong colonizer of the human gut (Palmer et al., 2007), K. pneumoniae seems unable to colonize a healthy human permanently.

DATA AVAILABILITY STATEMENT

The whole genome sequencing datasets generated for this study can be found in DDBJ/EMBL/GenBank under accession PRJNA663884.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by Dr. Michael Tamchina, Co-chair of the Ethics committee of the city of Vienna, Thomas Klestil Platz 8, 1030 Vienna, michael.tamchina@wien.gv.at. The patients/participants provided their written informed consent to participate in this study.
Role of peers and family in the occurrence of human immunodeficiency virus in the working area of Seberang Padang health center, Padang city

Human immunodeficiency virus (HIV) is a disease that continues to develop and has become a global problem. According to WHO (World Health Organization) data, 2.3 million new HIV cases were discovered worldwide in 2012, of which 1.6 million patients died of AIDS (acquired immunodeficiency syndrome) and 210,000 patients were under the age of 15 years. Indonesia is the country with the fastest HIV/AIDS transmission in Southeast Asia. HIV/AIDS is an iceberg phenomenon: only a few cases are visible, while many more remain unknown. Various efforts have been made by the government and non-governmental organizations in HIV/AIDS prevention and control, but the HIV/AIDS epidemic continues to spread.

INTRODUCTION

Human immunodeficiency virus (HIV) is a disease that continues to develop and has become a global problem. According to WHO (World Health Organization) data, 2.3 million new HIV cases were discovered worldwide in 2012, of which 1.6 million patients died of AIDS (acquired immunodeficiency syndrome) and 210,000 patients were under the age of 15 years. 1 Indonesia is the country with the fastest HIV/AIDS transmission in Southeast Asia. HIV/AIDS is an iceberg phenomenon: only a few cases are visible, while many more remain unknown. Various efforts have been made by the government and non-governmental organizations in HIV/AIDS prevention and control, but the HIV/AIDS epidemic continues to spread. 2

In Padang itself, the first HIV/AIDS case was reported in 1992, detected through the results of a surveillance survey (sero survey). 1 As of the end of 2015, there were 1,300 cases of HIV/AIDS in Padang: 515 AIDS cases, 785 HIV cases, and 77 deaths. In 2016, 300 HIV cases and 53 AIDS cases were reported in Padang, and 4 people died. 3

Based on research conducted by Juliastika in Manado in 2011 on the correlation of knowledge and the role of peers with risky HIV/AIDS behavior in Manado city, the results showed that the role of peers was 80% influential in the occurrence of HIV. 4 Based on research conducted by Nurul on factors related to HIV/AIDS prevention by students of senior high school 8 Padang, the percentage of poor HIV/AIDS prevention was higher in respondents with a lesser role of parents (51.4%). 5 The purpose of this study was to determine the role of parents and peers in the occurrence of HIV in the working area of the Seberang Padang health center.

METHODS

This study is an observational analytic study with a case-control design, in which the dependent variable and the independent variables are measured retrospectively to trace the history of the subjects' experiences. The study was conducted in the working area of the Seberang Padang health center in Padang city over 3 months (August-October 2017). The study population comprised all cases of HIV in Seberang Padang, South Padang city, totaling 41 cases. Based on the sample size calculation, a minimum sample size of 14 people per group was obtained, with a case:control ratio of 1:1.
The inclusion criteria for the sample were being able to read and to communicate well, while the exclusion criterion was that the subject could not be found after two visits. Data collection was done by conducting interviews using questionnaires. 6 Data were analyzed by univariate and bivariate methods to determine the correlation between the independent variables (peer role and family role) and the dependent variable (HIV occurrence). Data are presented in tabular and narrative forms.

Table 1 shows the comparison of the number of respondents in the case group and the control group with regard to the role of peers and the role of the family. More than half of the case group had a peer role that was not good (78.6%) and a family role that was not good (71.4%). Table 2 shows the relationship between the role of peers and the occurrence of HIV. The results of the statistical tests show that there is a significant correlation between peer role and the HIV case group (p value = 0.023), with OR = 9.167 (95% CI = 1.634-51.427), which means that respondents with a peer role that is not good have a 9 times higher risk of suffering from HIV compared with those whose peers play a good role.

The problem of HIV/AIDS can no longer be viewed through medical facts alone but must be seen through a comprehensive social analysis related to social and cultural structures. A problem in handling HIV/AIDS is that coordination in implementing programs in each sector is still weak. There is a lack of common perception about the fundamental problems surrounding HIV/AIDS, and human rights issues related to HIV/AIDS have not been integrated proportionally. 2

The results showed that in the case group, more than half (78.6%) of respondents had peer roles that were not good. This is in accordance with the theory that if the negative influence of friends is strong and one's own resistance is weak, a person will be affected, because people want to be accepted by their group even when this is contrary to the teachings of their parents. Friends are even considered an important source of information. If a friend's knowledge about sexual health is inadequate, he or she can provide wrong information to other friends. 8

From the questionnaires distributed, it was found that respondents affected by HIV tended to have peers who were not good, because those peers led respondents in a negative direction, such as inviting them to have free sex, change partners, and use drugs. A strong negative peer role will lead respondents toward a lifestyle in which they are easily exposed to the HIV virus; if this is left unaddressed, respondents who contract HIV can transmit the disease to others, because the virus is transmitted through free sex and the shared use of syringes, which is how the respondents acquired HIV.

RESULTS

Based on the results of the research, in the case group more than half (71.4%) of respondents had a family role that was not good, and in the control group most (85.7%) respondents had a good family role with respect to HIV occurrence. The results obtained are in line with Nurul's study (2012) on factors related to HIV/AIDS prevention among students of senior high school 8 Padang, in which the percentage of poor HIV/AIDS prevention was higher in respondents with a lesser role of parents (51.4%). 5 The family is a place of shelter, care, and affection for patients and for children left behind by parents who have been taken by AIDS.
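The odds ratio and confidence interval reported above can be reproduced from the underlying 2×2 tables. The cell counts below are not given explicitly in the text; they are inferred from the group sizes (14 per group) and the reported percentages, and the same arithmetic applied to the family-role table yields the OR = 15 reported further below. A minimal sketch in Python:

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

def odds_ratio_ci(table, alpha=0.05):
    """Odds ratio with a Woolf (logit) confidence interval."""
    (a, b), (c, d) = table
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    z = norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Rows: case / control; columns: role "not good" / "good".
# Cell counts inferred from n = 14 per group and the reported percentages.
peers  = [[11, 3], [4, 10]]   # 78.6% of cases had a not-good peer role
family = [[10, 4], [2, 12]]   # 71.4% of cases; 85.7% of controls good

for name, tab in (("peer role", peers), ("family role", family)):
    or_, lo, hi = odds_ratio_ci(tab)
    chi2, p, _, _ = chi2_contingency(tab)  # Yates-corrected chi-square
    print(f"{name}: OR={or_:.3f} (95% CI {lo:.3f}-{hi:.3f}), p={p:.3f}")
# peer role: OR=9.167 (95% CI 1.634-51.427), p=0.023
# family role: OR=15.000 (95% CI 2.258-99.639), p=0.008
```

The printed peer-role line matches the OR, CI, and p value reported in the text, which supports the inferred table.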
Family support, especially care for PLHIV at home, will usually cost less, be more pleasant and more familiar, and enable PLHIV to better manage their own lives. Conditions related to PLHIV usually improve quickly with the comfort of home and the support of friends and, especially, family. 9

Most of the case group respondents did not live with their families, whereas in the control group the family was closely involved in what the respondents did and with whom they made friends and socialized, especially at night. Ideally, teenagers should be at home at night, because activities outside at night tend not to be good: that time is very vulnerable to drug use, and gathering with friends who are not good makes it easy to consume alcoholic beverages to the point of drunkenness and to engage in free sex. Here lies the role of the family, especially parents, in always monitoring their children's activities so that they are not infected with HIV.

To examine the relationship between the role of peers and the occurrence of HIV, a statistical test (chi square) was conducted, and p = 0.023 (p < 0.05) was obtained. It can be concluded that there is a significant correlation between the role of peers and HIV occurrence in the Seberang Padang health center working area. The strong role of peers arises because teenagers have a strong need to be liked and accepted by their peers and group. As a result, they feel happy when accepted and, conversely, feel very depressed and anxious when excluded and belittled by their peers. For teenagers, the views of their friends are the most important thing. 10

To examine the correlation between the role of the family and HIV occurrence, a statistical test (chi square) was conducted, and p = 0.008 (p < 0.05) was obtained. It can be concluded that there is a significant correlation between the role of the family and HIV occurrence in the working area of the Seberang Padang health center. The results of this study are in line with the research conducted by Hassanudin on the correlation between knowledge, attitudes, and environment and HIV/AIDS prevention efforts among students of senior high school 5 in Palu: students who have a good family environment are more likely to make HIV/AIDS prevention efforts than students who have a poor family environment, which shows a meaningful correlation between family environment and HIV/AIDS prevention efforts. 11

From the results of the analysis, an OR = 15 was also obtained, which means that respondents with a family role that is not good have a 15 times higher risk of suffering from HIV compared to those who have a good family role. This can be seen from daily behavior: the case group did not live with their families, did not communicate with their parents when facing problems, and felt uncomfortable with their families. As a result, the case group tended to seek comfort outside the home in activities that were not good, such as drinking, using drugs, and having free sex, which resulted in the case group being infected with HIV.

CONCLUSION

There was a significant correlation between the role of peers and the family and the occurrence of HIV in the Seberang Padang health center working area. A peer role that is not good carries a 9 times higher risk of HIV compared to a good peer role, and a family role that is not good carries a 15 times higher risk of HIV compared to a good family role.
Field Trip B (27 September 2018): Quaternary environments of Giessen and its surrounding areas

Johanna Lomax1, Raphael Steup1, Lyudmila Shumilovskikh2, Christian Hoselmann3, Daniela Sauer4, Veit van Diedenhoven1, and Markus Fuchs1
1Department of Geography, Justus Liebig University Giessen, Senckenbergstr. 1, 35390 Giessen, Germany
2Department of Palynology and Climate Dynamics, University of Göttingen, Wilhelm-Weber-Str. 2a, 37073 Göttingen, Germany
3Hessisches Landesamt für Naturschutz, Umwelt und Geologie, Rheingaustr. 186, 65203 Wiesbaden, Germany
4Department of Physical Geography, University of Göttingen, Goldschmidtstr. 5, 37077 Göttingen, Germany

Introduction

Our 1-day field trip will first lead us to an area south of Marburg in the middle reach of the Lahn valley. After an introduction to the natural setting of the area, we will visit the gravel quarry of Niederweimar, one of the largest of its kind in Hesse. The gravel quarry exposes three units of gravel, which possibly represent the remains of different Quaternary glacial periods. The gravels are covered by late glacial and Holocene floodplain fines showing a high-resolution stratigraphy. The floodplain fines include tephra of the Laacher See eruption that took place during the Allerød, and alternating layers of sands and silts, which may reflect climatic fluctuations of the late glacial. Above the tephra, a dark soil horizon marks the beginning of Holocene conditions. Furthermore, the area around Niederweimar is rich in archaeological finds of different periods. They indicate continuous settlement in the area over the last 11 000 years. Details will be presented at our coffee break at the so-called Zeiteninsel (island of times), an open-air museum showing settlements of different archaeological periods. Our next stop will be the abandoned gravel quarry Niederwalgern, which exposes gravels of the Lahn at the base and a thick sequence of floodplain fines, including a dark palaeosol. The sediments indicate massive deposition during the Holocene, probably due to anthropogenic forest clearing in the surrounding area. At our third stop, we will visit a loess palaeosol section south of Gießen, near a small village called Münzenberg. Our luminescence ages indicate that this profile comprises Middle Pleistocene loess, and possibly also a pre-Eemian palaeosol. The last glacial loess includes the Eltville tephra, another important tephra of the area, which serves as a chronological marker for the Last Glacial Maximum. Establishing a secure chronostratigraphy at the site is, however, challenging due to its position on a steep slope, which triggers erosional events.

Physiogeographic setting of the area

The geomorphological and geological setting of the area comprises a complex pattern of different geological units ranging from the Palaeozoic to the Holocene. An overview of the topography and geological units is shown in Figs. 1 and 2. The current annual rainfall in the area is approximately 700 mm, and the average annual temperature is 8.8 °C. The main unit in the western part of the excursion route is represented by the Rhenish Massif (Rheinisches Schiefergebirge). Marine sands, silts and clays were deposited during the Devonian era and were later metamorphosed to quartzites and slates during the Variscan orogeny (Carboniferous). Locally, limestone, greywacke and radiolarite are also present, the last two especially in an area west of Gießen and Marburg. The Variscan orogen was eroded down to a peneplain during the Permian era.
During the Tertiary, this peneplain was fragmented into several fault blocks, some of which were uplifted during the Tertiary and the Quaternary. Examples of these uplifted blocks are, e.g., the Rhenish Massif or the Harz further to the northeast. Many of the gravels in the gravel quarry at Niederweimar (Stop 1) originate from the Rhenish Massif to the west, like quartzite, radiolarites and greywacke. Locally, this part of the Rhenish Massif is also called the Gladenbach Uplands (Gladenbacher Bergland). It has an average elevation of around 500 m a.s.l. To the north and north-east of the excursion route, we mainly find red sandstones of the lower Triassic (Buntsandstein) and basalts which originate from the Vogelsberg eruptions during the Tertiary (peak activity ca. 15 Ma ago). The Vogelsberg is the largest contiguous volcanic region in central Europe. The highest elevation of the Vogelsberg area is the Taufstein (773 m a.s.l.). The river Lahn intersects the Buntsandstein in an area north and south of Marburg, forming a relatively steep valley. At Niederweimar (Stop 1), the valley opens into a wider basin, which is filled with Pleistocene gravels and Holocene floodplain fines of the river Lahn. Buntsandstein and basalts are further important components of the gravel spectrum in the gravel pit at Niederweimar. Further geomorphological-tectonic units near Gießen and Marburg are depressions filled with Tertiary fines and/or Pleistocene loess. The latter will be the focus of Stop 4. Like the uplifted Rhenish Massif, these basins represent tectonic blocks, which formed and subsided during the Tertiary and Quaternary.

Geology and geomorphology

The gravel quarry at Niederweimar is situated south of Marburg in the central Lahn valley. It is one of the largest gravel quarries in Hesse. The middle reach of the Lahn cuts through a wide range of geological units such as the Rhenish Massif and sandstones of Permian and lower Triassic age. Tributaries coming in from the east pass the basaltic Vogelsberg massif. This leads to a rather diverse gravel spectrum, dominated by greywacke, associated with radiolarites, sandstones, basalts and quartzites. The hard rock base of the gravel pit is formed by red to purple sandstones and claystones of upper Permian age (Zechstein). Sediments within the gravel pit have been deposited not only by the river Lahn, but also by the river Allna, a tributary flowing in from the west and sourced in the Rhenish Massif. More detailed information on the fluvial history of the Lahn valley near Marburg can be found in Heine (1970). From a geomorphological point of view, the gravel pit is situated on the lower terrace of the Lahn. It is currently not inundated by floods, and its cover sediments are of Late Pleistocene and early Holocene age, as evidenced by the Laacher See tephra (LST; 12.9 ka, van den Bogaard, 1995). Chronostratigraphically, the lower terrace would be assigned to the last glacial period. However, it appears that three gravel units are exposed in the pit, of which the lower ones seem to be much older than the last glacial. Elevation differences between the past and current floodplain of the Lahn are minimal (see Fig. 3); thus it is nearly impossible to distinguish different terrace levels from a geomorphological point of view.
It therefore appears that at this location of the Lahn River, we are not dealing with a classical staircase of terraces, but with vertical stacking of terrace units, possibly due to (relative) tectonic subsidence in this part of the Lahn valley. So far, several radiocarbon ages as well as pollen and macrofossil assignments of the cover sediments and the gravel units have been available (e.g. Huckriede, 1982; Schirmer, 1999; Freund and Urz, 2000; Bos and Urz, 2003). But since large parts of the gravel units are older than 40 ka, numerical ages, in particular of the older gravel units, have been missing so far. New optically stimulated luminescence (OSL) and 14C ages for the gravels as well as the floodplain loams are presented on this field trip.

Archaeology

During more than 20 years of excavation by the State Archaeological Service of Hesse on ca. 70 ha of river floodplains and adjacent alluvial terraces, a large area of settlements has been detected, spanning from the Mesolithic (11.7 to 7.5 ka) and different periods of the Neolithic (7.5 to 4.2 ka), Bronze (4.2 to 2.8 ka) and Iron Age (2.8 to 2.0 ka) to the Middle Ages. Such an extensive colonization of a local river landscape is, as yet, unique. The possibility to settle on the drier terraces near the water, as well as the species-rich flora and fauna, made the river landscape of the central Lahn valley attractive to humans (Bos and Urz, 2003). Already in 1994, two early Mesolithic sites were found during a gravel excavation. They were dated to around 10.5 ka cal BP (Bos and Urz, 2003). Pollen and macrofossil analyses, which were part of two research projects during the DFG (German Research Foundation) priority programme "Changes of the Geo-Biosphere during the last 15 000 years, continental sediments as evidence for changing environmental conditions", suggest that forest clearing due to deliberate burning by Mesolithic people occurred in the area (Bos and Urz, 2003). A reconstruction of the Mesolithic landscape in the central Lahn valley is shown in Fig. 4. Since 2017, a DFG-funded research project has focused on plant remains from archaeological records as a source of information on the changing environmental conditions and agricultural systems within the prehistoric settlements near Niederweimar (Ralf Urz, Department of Geography, Philipps University of Marburg). Further details on the archaeology of Niederweimar can be found on the homepage of the archaeological survey of Hesse (https://lfd.hessen.de, last access: 11 July 2018).

Gravel unit

The gravel unit can be divided into three subunits (Fig. 5). The oldest unit (Unit I) forms the base of the gravel pit. It is not present and/or visible in all parts of the pit and is of dark grey to dark reddish colour. Unit II consists of brown gravels with trough and horizontal bedding and with a strong overprint caused by precipitation of iron oxides. This unit can be further divided into two subunits, separated by a discontinuous layer of larger blocks. Unit III is formed by greyish gravels with marked horizontal and trough bedding and a block layer at its base. Units II and III are separated by an erosional disconformity. Further information on the gravel units is given in Freund and Urz (2000), among others. They assign the lower part of the gravels (our Unit II) to the early Weichselian, based on pollen and macrofossil analyses, and the upper part of the gravels (our Unit III) to the last Pleniglacial.
The latter is supported by one 14C age at Niederweimar of around 32 ka, and two further 14C ages between 30 and 40 ka at the gravel quarry Niederwalgern.

Figure 4. Reconstruction of the Mesolithic landscape in the central Lahn valley, showing the Mesolithic camp sites, the differences in relief between the floodplain and the terraces, and the accompanying differences in forest vegetation (Bos and Urz, 2003).

Luminescence dating

Different luminescence methods were applied in order to date the gravel units. Unfortunately, OSL dating of quartz has an upper age limit of around 100 ka for the sediments in question (quartz dose rate 1.5 to 2.3 Gy ka⁻¹). The lower gravel units were thus too old for conventional OSL dating. For this reason, TT-OSL and post-IR IRSL225 dating were tested as additional methods. However, both methods suffered from incomplete bleaching, so the ages need to be treated with care. Results are shown in Table 1 and Fig. 5.

Heavy mineral analyses

Sodium polytungstate with a density of 2.85 g cm⁻³ was used as heavy liquid to separate the heavy from the light fraction in a centrifuge. The samples were boiled with concentrated HCl before centrifugation in order to remove iron and manganese hydroxide crusts, which would complicate the identification. The disadvantage of this method is the dissolution of carbonate, apatite and parts of the monazite and olivine (Boenigk, 1983). Nevertheless, this was deemed acceptable because of the benefit of being able to make comparisons with our own and other previous analyses. The lowermost gravel units (Units I and II) show a very low content of heavy minerals, with 0.02-0.09% in the fine sand fraction, while the other profile sections reveal heavy mineral contents ranging from 0.13 to 1.6%. One of the key questions of this investigation was in which depth levels heavy minerals of volcanic origin, i.e. of the Laacher See tephra (LST), occur. The LST is characterized specifically by the volcanic heavy minerals pyroxene (augite), brown hornblende and titanite (e.g. Henningsen, 1980; Hilgers et al., 2003; Thiemeyer, 1993; Semmel, 2003), which can comprise more than 75% of the overall heavy mineral fraction. Samples from gravel Units I and II show high amounts of extremely stable heavy minerals, especially zircon and tourmaline. The sample from the overlying gravel unit (Unit III), just below the floodplain fines, shows a significant increase in heavy mineral content as well as high amounts of volcanic heavy minerals (pyroxene 75%, brown hornblende 15% and titanite 3%). It is thus assumed that at least parts of this gravel unit post-date the Laacher See event.

Stratigraphic interpretation

Due to the great depositional age of the lower gravel units, it is difficult to provide a numerical chronology for them. So far, only the following conclusions can be tentatively drawn. The lowest gravel unit (Unit I) is probably older than the overlying unit. An assignment to a certain marine isotope stage (MIS) is impossible, but most likely the sample is older than 300 ka.

Table 1. Luminescence dating results. Note that in the uppermost sample, heavy minerals typical of the Laacher See tephra were found.

The intermediate gravel layer (Unit II) seems to have formed during one single glacial period, because the luminescence ages are similar (except for one outlier), independent of the method used. However, the ages are too imprecise and too unreliable for a clear assignment to a certain MIS. According to the luminescence ages, Unit II most likely formed during MIS 8 or MIS 10.
This age strongly contradicts the previous findings of Huckriede (1972, 1982) as well as Freund and Urz (2000), who place the base of the unit in the Eemian and early Weichselian, based on pollen and macrofossil analyses. The uppermost gravel unit (Unit III) showed a surprisingly young age. So far, we had assigned this unit to the middle Weichselian, because earlier, preliminary OSL ages clustered around 30 ka. Also, several 14C ages between 30 and 40 ka at Niederweimar and the nearby site of Niederwalgern (Freund and Urz, 2000) indicate an older age of this unit. It is possible that during the Younger Dryas, the gravels of the middle Weichselian were partially incised by the braided river that shaped the riverbed at that time. The channels were then filled with Younger Dryas gravels and sands shortly before the onset of the Holocene. In many parts of the upper gravel layer, these former channels are visible. This young sedimentation age of the uppermost part of gravel Unit III is supported by the heavy mineral spectrum, which shows a signature characteristic of the LST.

Figure 6. Sketch of section NW-6 in the floodplain loams of Niederweimar, together with results from 14C and OSL dating, grain size and heavy mineral analyses. Please note that the lowermost sample for OSL dating is derived from one of the neighbouring sections (NW-3), from the same stratigraphic unit.

Although the older luminescence ages have been unreliable so far, they allow the following overall interpretation: terrace units of different ages are vertically stacked onto each other, possibly indicating (relative) tectonic subsidence of the area. MIS 6 is not represented by a gravel unit in the studied section; thus it seems to have been completely removed during a later erosional period. Very large blocks in the lower part of Unit III testify to an extremely dynamic fluvial event, which may have caused this erosion. However, it cannot be ruled out that MIS 6 gravels are present in other parts of the gravel pit.

Floodplain loams

Floodplain loams overlie the Pleistocene gravels and show a very detailed stratigraphy, with alternating sand and silt layers in the lower part, one or several light grey layers of varying thickness in the middle and upper part, and a further dark palaeosol horizon in the uppermost part. From field observations, it is tempting to assign the greyish layers to the (relocated) Laacher See tephra, which would place the lower part of the floodplain loams in the late glacial and the upper part of the section mainly in the Holocene. This stratigraphy is supported by detailed pollen and macrofossil studies as well as radiocarbon dating, carried out, e.g., by Bos and Urz (2003) and Schirmer (1999). However, this stratigraphic interpretation contradicts the findings on the gravel unit investigated in the current study. Here, one OSL age places the uppermost part of the gravels in the Younger Dryas. Heavy mineral analyses also confirm that their deposition took place after the Laacher See event. In order to gain further insight into the chronostratigraphy of the site, further 14C and OSL dating as well as palynological, granulometric and heavy mineral analyses were carried out on the floodplain loams.

Methods and results

Particle size distributions were determined by classical pipette and sieve procedures without decarbonation according to Köhn (ISO 11277).
The chronology of the upper unit of floodplain fines is mainly based on calibrated 14C ages (CalPal online; Weninger and Jöris, 2004), determined on wooden macrofossils obtained from the silt-rich sediment layers. Additionally, three luminescence ages were determined, using OSL on the coarse grain quartz fraction. Results are shown in Fig. 6 and Table 2. Palynological analyses were carried out mainly on the silty layers in the bottom part of the section (Unit I to Unit III). They reveal vegetation changes over a short period of only 1000 years according to the 14C ages (Fig. 7). Layers II.1a and II.1c have similar pollen spectra, with a dominance of Poaceae, Thalictrum, Artemisia and other herbs, such as the Helianthemum nummularium group, Ranunculus acris type, Apiaceae and Matricaria type, indicating a dominance of meadows. An open landscape is suggested by a low abundance of arboreal pollen (7%), represented by Pinus and Betula. The presence of Myriophyllum and remains of Gleotrichia type and Spirogyra indicate stagnant or slowly flowing water. The pollen concentration is rather high and varies between 15 000 and 27 000 grains cm⁻³, indicating a low sedimentation rate. A low abundance of mycorrhizal spores of Glomus type indicates low soil erosion rates. A charcoal concentration of up to 6000 particles cm⁻³ reveals the presence of fires. The next four clay layers (Units II.3, II.5, II.7, and II.9) differ from the first ones by very low pollen concentrations of 2000-3000 grains cm⁻³. This can possibly be explained by increased sedimentation rates due to enhanced soil erosion in the catchment. The latter is confirmed by a high abundance of Glomus type (87-302%). The pollen spectra of all four layers are characterized by a significant increase of arboreal pollen like Picea (Bittmann, 2007), possibly partly reworked. Non-arboreal pollen (NAP) still dominates the spectrum, with Cyperaceae, Poaceae and Artemisia, also suggesting wetter conditions and a possible spread of tundra vegetation. Spores of coprophilous fungi (Arnium, Bombardioidea, Podospora, Sordaria and Sporormiella) indicate the presence of herbivores in the area, but their increased abundance can possibly also be explained by increased soil erosion in the catchment. Pollen of Myriophyllum and algal remains indicate aquatic conditions similar to before. Interestingly, there are abundant sheaths of Gleotrichia type, which is known as an aquatic pioneer during the early part of the late glacial due to its ability to fix nitrogen and make conditions suitable for other aquatic plants (van Geel et al., 1989). Layer II.11 has an increased pollen concentration (6000 grains cm⁻³), indicating a lower sedimentation rate during this period. The pollen concentration increases up to 64 000 grains cm⁻³ in the peat layer, indicating a slow peat growth rate. The abundance of arboreal pollen (AP) in layer II.11 exceeds 50%, and it is dominated by Pinus and Betula, indicating a further spread of birch-pine forests under milder conditions. The heavy mineral samples from this section reveal a spectrum typical of the LST throughout the whole section of floodplain loams. Since pyroxene and brown hornblende in particular are not very resistant to weathering, near-surface samples are affected by a higher degree of mineral alteration, which causes a relative enrichment of the stable heavy mineral titanite.
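Both here and for the gravel units, a luminescence age is simply an equivalent dose divided by an environmental dose rate, and the upper dating limit of around 100 ka quoted above for quartz OSL follows directly from signal saturation. The short Python sketch below illustrates this arithmetic; it is not the authors' computation, and the characteristic saturation dose D0 = 75 Gy is an assumed, typical value for quartz (usable equivalent doses up to roughly 2·D0), combined with the dose rates of 1.5-2.3 Gy ka⁻¹ actually given in the text.

```python
def osl_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Luminescence age = equivalent dose / environmental dose rate."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Practical saturation of the quartz OSL signal (assumption: D0 ~ 75 Gy,
# signal usable up to ~2*D0), giving the upper dating limit per dose rate:
d0 = 75.0                      # Gy, characteristic saturation dose (assumed)
de_max = 2.0 * d0              # ~150 Gy maximum resolvable equivalent dose
for dose_rate in (1.5, 2.3):   # Gy/ka, range quoted for these sediments
    print(f"dose rate {dose_rate} Gy/ka -> age limit ~"
          f"{osl_age_ka(de_max, dose_rate):.0f} ka")
# dose rate 1.5 Gy/ka -> age limit ~100 ka
# dose rate 2.3 Gy/ka -> age limit ~65 ka
```

This is why the lower, much older gravel units required TT-OSL and post-IR IRSL225 as alternative methods, whose signals saturate at considerably higher doses.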
Stratigraphic interpretation

The 14C ages place the lower part of the floodplain loams in the Meiendorf and Bölling interstadials, as well as in the Older Dryas period. They coincide with the 14C ages presented by Schirmer (1999) for the same stratigraphic unit and predate the Laacher See eruption. Based on our investigations in the field, we placed the first layers containing LST in Unit III, supporting the 14C ages. A comparison of the preliminary pollen data with the pollen diagrams presented in Schirmer (1999) points in the same direction. However, the heavy minerals suggest that layers from Unit II also contain significant amounts of LST, as does the underlying gravel unit. This finding is consistent with two of the OSL dates from the floodplain loams, which yield ages of 12.6 ± 1.2 and 10.8 ± 0.9 ka, thus post-dating the Laacher See event. Another OSL age from the underlying gravel (12.8 ± 1.2 ka) agrees with the Laacher See event and is consistent with another sample dated to 12.5 ± 0.8 ka investigated in the gravel section (see Sect. 3.3.1). However, the OSL chronology shows an age inversion in the uppermost sample. This is most likely due to methodological problems, namely OSL curves in this sample that decay more slowly than usual. The three lower samples showed typical OSL curves; they thus appear more reliable. On the other hand, the pollen data rule out a Holocene age of the middle OSL sample in the floodplain loams. The beginning of the Holocene in the area is well defined stratigraphically by a strong increase of pine pollen up to 80% (Bos and Urz, 2003), and this signature is absent in our investigated pollen samples. In summary, on the one hand it seems that the OSL data and the heavy mineral analyses support each other, with the uppermost gravel unit and the overlying floodplain loams post-dating the Laacher See event. On the other hand, the 14C chronology is more consistent with the pollen data and the field observations, i.e. the onset of tephra deposition within Unit III. This stratigraphic inconsistency will be further investigated in the near future. The 14C chronology furthermore reveals that the lower part of the section, comprising a sequence of intercalated coarser and finer layers (Unit II), was deposited within a relatively short time, resembling an alluvial channel facies (Aurinnenfazies) sensu Schirmer (1983).

Abandoned gravel quarry of Niederwalgern

The site Niederwalgern is a former gravel quarry, which has now been turned into a lake that serves as a nature reserve for birds and other wildlife and is also the habitat of a small herd of water buffalo. Geomorphologically, the former gravel quarry rests on the lower terrace of the Lahn, which in turn is covered by Holocene alluvial fines. The fines include a thick unit of sediments containing abundant fragments of ceramics and charcoal, indicating anthropogenic alluvium originating from hillslopes further to the west (Fig. 3). Detailed litho-, bio- and chronostratigraphic investigations at the site (during active quarrying) have been carried out previously.

Methods and results

Particle size distribution was determined by classical pipette and sieve procedures without decarbonation according to Köhn (ISO 11277). In order to provide a first chronology of the section, OSL dating was applied to the underlying gravels of the lower terrace and to the upper part of the overlying fine sediments. For this purpose, the coarse grain quartz fraction was analysed.
Due to incomplete bleaching, the De values of the floodplain fines are based on a minimum age model. In contrast, material from a sand lens within the underlying gravel was well bleached; thus, the Central Age Model was applied for deriving the mean De. Results are summarized in Table 3 and Fig. 8.

Stratigraphic interpretation

The investigated section comprises three units: the gravel unit (I) at the base of the site yields an OSL age of 8.5 ± 0.6 ka. In comparison to a 14C age obtained previously, which places this unit in the Younger Dryas, our OSL age appears to be too young. Further investigations on this issue will be undertaken in the near future. The intermediate layer (II) is composed of floodplain loams with a dominant sand fraction. The unit terminates with a dark soil complex, in which the clay content increases to around 40% in the uppermost sample. This soil can possibly be correlated with the so-called black floodplain soil, which is widespread in the area. The formation of this floodplain soil in middle Hesse is assigned to the early Holocene (Mäckel, 1969; Houben, 2002; Urz, 2003) or to the early to mid-Holocene (Rittweger et al., 2000). So far, precise numerical ages for this horizon have been sparse, and the site at Niederwalgern offers the potential for improving the chronology by undertaking further OSL analyses. The sediments of the uppermost unit (III) are dominated by silt. Charcoal pieces and fragmented ceramics are also abundant, indicating strong anthropogenic impact. At the current stage of research, it is, however, not clear whether this sediment layer is a colluvium from the small slopes further to the west or an alluvial sediment. Five OSL ages assign the unit to the Early Medieval Period in its lower part and the High Medieval Period in its upper part. As in many other parts of Germany, the High Medieval Period was characterized by deforestation and intensive farming, not only in the lowlands but also on the hillslopes of the low mountain ranges. This led to intense soil erosion and the deposition of material at toe slopes and on floodplains as colluvial and alluvial deposits.

Study area

The section is situated on a slope within a former brickyard on the east side of the Wetter River (50°26′ N, 08°46′ E; 198 m a.s.l.), in the northern part of the Wetterau basin within the Hessian Depression (Fig. 9). The basin's topography is characterized by a gently rolling landscape, flanked by the northern Taunus mountains to the west and the basaltic Vogelsberg massif to the east. During the Tertiary, tectonic subsidence created a mosaic of small-scale depressions, accompanied by the deposition of marine, fluvial, limnic and aeolian sediments (Bibus, 1974, 1976). The lithology of the study area is therefore dominated by unconsolidated Miocene sediments consisting of sands, gravels and clays. Additionally, Miocene basalts and intensively saprolitized rock form the subsurface of the northern part of the Wetterau, characterizing the lithology of the study area (Kümmerle, 1981; Sabel, 1982). Under periglacial conditions during the Pleistocene, the river Wetter formed terraces above the present-day river bed. These terraces were later covered by calcareous aeolian sands and reworked loess-derived clayey silts. On northeast-facing slopes and in geomorphologically sheltered positions, loess was deposited and has been preserved to thicknesses of up to 10 m (Schönhals, 1996). Farming in the area already started in the early Neolithic, ca.
7500 years ago, favoured by a moderate climate and fertile soils. Because of this long-term cultivation, the present-day soilscape of the area is characterized by truncated soil profiles and anthropogenic colluvium, e.g. truncated Luvisols, Cambisols and Regosols (Houben, 2012; Lang and Nolte, 1999; Schrader, 1978).

Methods

According to Bibus (1974), the investigated loess section can be subdivided into 17 units, including several palaeosols showing different intensities of pedogenesis, reaching a thickness of up to 10 m. However, the chronostratigraphic interpretation by Bibus (1976) was based solely on palaeopedological criteria, whereas there has been no numerical age control so far. Therefore, the existing loess profile has been extended, described and sampled in several field campaigns since summer 2013. Magnetic susceptibility measurements were conducted in the field with a SatisGeo Kappameter KM-7 at a 10 cm depth interval, recording five measurements per depth interval. Samples for sedimentological analyses were collected at high resolution (5 cm), yielding 180 bulk samples, based on the continuous column sampling method described in Antoine et al. (2009). Sedimentological analyses included the determination of the particle size distribution by classical pipette and sieve procedures without decarbonation according to Köhn, and the gas-volumetric determination of carbonate using the Scheibler method. Additionally, spectrophotometric analysis for the determination of colour and lightness was conducted using a Konica Minolta CM-5 spectrophotometer at the laboratory for Physical Geography of RWTH Aachen. Based on the colour values, the Redness Index (RI) was calculated as a proxy for soil rubification and changes in hematite content (Barron and Torrent, 1986). For luminescence dating, 16 samples (Fig. 10; red circles) were taken at night-time by direct sampling into opaque plastic bags, after removing the light-exposed outer sediment layer of the profile wall. Samples for dosimetry measurements were collected within a 30 cm radius of each luminescence sample. Sample preparation and post-IR IRSL measurements, following a modified post-IR IRSL₂₂₅ protocol originally proposed by Buylaert et al. (2009), were carried out at the Luminescence Laboratory of Giessen University. Further information can be found in Steup and Fuchs (2017). A total of 15 undisturbed samples were collected from the profile for micromorphological analyses (Fig. 10; red boxes). For the interpretation of relative variations in the geochemical composition along the loess section, XRF analyses were performed on an ITRAX XRF core scanner at Bremen University. The results are presented as element log ratios (Fig. 12) to characterize weathering intensity and dust provenance.

Profile description and results

The division of the loess section into 14 pedostratigraphic units (Fig. 10) is based on field observations, including the identification of major discontinuities and variations in colour, grain size distribution, magnetic susceptibility and carbonate content, as well as quantitative analyses of grain size distribution, carbonate content, spectrophotometric colour measurements and age estimates obtained from luminescence dating. The lowermost subsequence (Unit 1) of the section consists of reddish brown compacted clayey silt, ∼2 m thick, containing Fe/Mn nodules and characterized by complete decarbonatization. It shows the highest content of illuvial and neoformed clay (<2 µm; Fig. 11a, b), with almost 40 % clay at a depth of 10 m.
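The Redness Index mentioned in the methods above is derived from the spectrophotometric colour values. A minimal Python sketch of one common CIELAB-based formulation attributed to Barron and Torrent (1986) is given below; the exact variant used in the study may differ, and the input values are hypothetical:

import math

def redness_index(L_star, a_star, b_star):
    # RI = a* * sqrt(a*^2 + b*^2) * 1e10 / (b* * L*^6); higher RI indicates
    # stronger rubification, i.e. higher hematite content.
    return a_star * math.sqrt(a_star**2 + b_star**2) * 1e10 / (b_star * L_star**6)

# hypothetical values: weakly red loess vs. a more rubified palaeosol
print(redness_index(L_star=65.0, a_star=4.0, b_star=18.0))   # lower RI
print(redness_index(L_star=55.0, a_star=9.0, b_star=20.0))   # higher RI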
Four subunits can be distinguished within the basal soil complex, based on grain size variability and changes in soil colour and elemental composition (based on XRF). The luminescence age estimates (Table 4) calculated from the pIRIR₂₂₅ signal in subunits a and b range from 177.6 ± 26.8 ka (GI 142) to 204.7 ± 21.8 ka (GI 143), indicating a time of deposition prior to the last interglacial (MIS 5e) and therefore soil formation during MIS 5 or 7. Unit 2 marks a transitional stage between subsequences I and II, showing several indications of translocation, e.g. coarsening substrate, only partial decalcification and diffuse boundaries (Fig. 11c, d). It is superimposed by 1 m of homogeneous yellow-grey, calcareous (11-12 % CaCO₃) and silty loess (Unit 3) with incorporated CaCO₃ concretions (Ø 5-6 cm). Unit 4 differs clearly from the underlying and overlying typical calcareous loess layers (Units 3 and 5) in the occurrence of reworked yellowish brown to grey silt loams and sandy layers, both containing Fe/Mn concretions and erosive and translocated structures. The uppermost laminated calcareous loess (Unit 5) of subsequence II is infiltrated with large calcareous nodules up to 15 cm in diameter and marks the boundary towards the overlying light brown reddish silt loam (Unit 6), with a tabular structure and a lower carbonate content compared to the loess sediments. Luminescence ages of the under- and overlying sediments confirm a gap of ∼100 ka between subsequences II and IV, implying deposition of SS II during MIS 6 (GI 146: 167.5 ± 21.9 ka). The overlying subsequence IV represents MIS 2 and is characterized by the alternation of yellow sandy loess sediments with intercalated coarser brownish sand layers (Units 8, 10, 12) and greyish yellow horizons with higher silt and lower sand contents, reflecting incipient pedogenesis (Units 7, 9, 11, 13). Transitions between sandy loess and bleached tongue horizons are represented by disturbed boundaries accompanied by redepositional features, such as rounded Fe/Mn nodules and the highest coarse sand contents of the entire sequence. In the uppermost loess (Unit 12), a 1-2 mm thin greyish-black layer of volcanic material is observed, showing deformation features caused by solifluction processes. Based on the post-IR IRSL ages (GI 154 and GI 155), the volcanic ash layer can be attributed to the Eltville tephra, which serves as an important marker horizon and thus enables us to correlate the Münzenberg loess section with other sequences from central Germany containing this ash layer. The superimposed Unit 14 of subsequence V corresponds to the modern surface soil.

Author contributions

JL wrote the major part of the article, led the fieldwork and sampling and carried out luminescence dating for the research areas Niederweimar and Niederwalgern. RS wrote the part on Münzenberg, led fieldwork and sampling and carried out all analyses at Münzenberg. LS carried out the palynology and CH carried out the heavy mineral analyses. DS co-led fieldwork, carried out pedological investigations and organized radiocarbon dating. VvD carried out grain size analyses and prepared profile drawings. MF is the main organiser of the research team and research content.
Correlating Electrolyte Inventory and Lifetime of HT-PEFC by Accelerated Stress Testing

Phosphoric acid electrolyte evaporation in a polybenzimidazole based high temperature polymer electrolyte fuel cell is analyzed as a function of reactant gas stoichiometry and temperature. Based on these results, a phosphoric acid vapor pressure curve is derived to predict the fuel cell lifetime with respect to electrolyte inventory. The predicted fuel cell life was validated by means of an accelerated stress test. Additionally, the correlation between electrolyte inventory and fuel cell performance was investigated by recording H₂/air and H₂/O₂ polarization curves during the course of the stress test to gain insight into the relation between acid inventory and the different degradation modes.

High-temperature polymer electrolyte fuel cells (HT-PEFC) have the potential to become an important technology for small-scale combined heat and power (CHP) applications. Today, however, fuel cell based CHP applications are dominated by low-temperature PEFC (LT-PEFC),1 even though the ability to sustain high CO levels of up to 3%,2 the thermal integration of the fuel processing unit and the absence of additional gas clean-up render HT-PEFCs especially suitable for operation on hydrocarbon-based fuels such as natural gas. The high operating temperature of 160-200 °C, the reduced system complexity due to the absence of additional gas humidification, and high system efficiencies are ideal properties of HT-PEFC for stationary CHP applications. Fuel cell durability, efficiency and cost are essential factors for commercialization. Durability is mainly determined by membrane electrode assembly (MEA) degradation. Amongst other degradation modes that HT-PEFCs share with low-temperature PEFC,3 electrolyte loss by evaporation and migration is exclusive to the HT technology and a limiting factor for CHP applications. We have recently demonstrated that PBI based membrane systems exhibit extensive electrolyte migration from cathode to anode under high-current operation.4 This was attributed to the high mobility of free hydrogen phosphate anions, which carry part of the ionic current. While this work focuses on phosphoric acid loss by evaporation and its implication for lifetime and fuel cell performance, it cannot be excluded that the high PA mobility has an effect on electrolyte evaporation, as it can influence the PA resupply and saturation of the electrodes. With respect to electrolyte evaporation, the phosphoric acid vapor pressure below temperatures of 300 °C is extremely low; nevertheless, the loss is expected to be significant considering the targeted lifetime of 50,000 h for CHP systems set out by the US Department of Energy (DOE) for 2015.5 Up to now, no literature data is available for the vapor pressure of phosphoric acid at temperatures below 200 °C.6-9 Determining a phosphoric acid vapor pressure curve at the temperatures of interest for fuel cell operation (160-190 °C) is a tedious task, due to the low phosphoric acid concentration in the gas phase and the accompanying analytical complexity.
Furthermore, phosphoric acid, being a mixture of water, ortho- and polyphosphoric acid, changes concentration and chemical structure10 with changing temperature and water partial pressure. The partial pressure of water over phosphoric acid needs to be adjusted to keep the acid concentration constant when trying to determine the vapor pressure curve. In a fuel cell, this task is even more complicated considering that the water vapor pressure changes with current density, temperature and stoichiometry. Additionally, in fuel cells of technical size, gradients in concentration, temperature and gas saturation occur in the through-plane as well as the in-plane direction. Hence, predicting the lifetime of the electrolyte in a technical HT-PEFC would require measuring a large parameter space. As an alternative to determining the phosphoric acid vapor pressure, the phosphoric acid loss rates can be measured for an operating HT-PEFC. However, literature data3,11 is scarce and persistently given as mass per active area for a given temperature, stoichiometry and current density. Consequently, a change in operating conditions or even fuel cell size might result in vastly different electrolyte loss rates. The existing database is not sufficient for lifetime predictions. The aim of this work is therefore to correlate operating conditions and operating time with electrolyte inventory to predict fuel cell lifetime. The focus is set on clarifying the effects of gas flow rate, cell temperature and current density on the phosphoric acid loss by evaporation. PA losses are evaluated by condensation at the outlet of 45 cm² cells, and the loss rates are validated with an accelerated stress test at 190 °C, where the influence of the phosphoric acid inventory on different performance loss mechanisms is investigated.

Experimental

General.-All experiments were carried out with BASF Celtec membrane electrode assemblies (MEA). These MEA consist of a H₃PO₄ doped polybenzimidazole (PBI) membrane with an acid loading of 36 mg H₃PO₄ cm⁻² ± 6% and a PA to PBI molar ratio of 33 ± 2. The acid loading values and molar ratios were determined before MEA assembly by measuring the acid content of several pristine membranes by ion chromatography (IC). The thickness of the membrane after cell assembly is approximately 100 μm.4 During MEA preparation, PA is partially transferred to the electrodes, and the membrane acid content defines the MEA beginning of life (BoL) acid content. The electrodes consist of Pt/Vulcan XC-72 supported platinum catalyst with a loading of 1 mg Pt cm⁻² on anode and cathode, respectively, coated onto SGL 38 carbon paper gas diffusion layers (GDL) including a microporous layer. For cell assembly, additional Kapton sub-gaskets are used on cathode and anode, which partially overlap with the membrane and electrode.12 Additional 320 μm thick perfluoroalkoxy alkane (PFA) sealings on top of the sub-gaskets act as a hard stop and define the GDL compression by a constant gap.13 All tests were performed on 45.2 cm² active area single cell setups using pyrolytically surface treated and sealed graphite flow fields (proprietary surface treatment by POCO Graphite, USA). The flow field channel structure consists of a serpentine channel geometry (1.2 mm width and 2.0 mm depth) with two and three channels on anode and cathode, respectively. Cells were operated in co-flow with dry hydrogen and air unless stated otherwise. Break-in was performed for 48-120 h at 160 °C and stoichiometries of λH₂ = 1.2 and λair = 2.0.
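For orientation, the reactant flows implied by a given stoichiometry and current density follow directly from Faraday's law and the ideal gas law. The short Python sketch below illustrates the conversion (our own helper, not code from the study); it reproduces the 18/13.6 NL h⁻¹ cathode/anode flows quoted later for λair = 2 and λH₂ = 3.6 at 0.2 A cm⁻²:

# Inlet gas flow from Faraday's law: n_dot = lambda * I / (z * F), with z = 2
# electrons per H2 (anode) and z = 4 per O2 (cathode); air flow scales the
# O2 flow by 1/0.21. NL refers to 0 degC and 101,325 Pa.
F = 96485.0      # Faraday constant, C mol^-1
V_M = 22.414     # molar volume at standard conditions, NL mol^-1

def inlet_flow_nl_h(stoich, j_a_cm2, area_cm2, z, gas_fraction=1.0):
    current = j_a_cm2 * area_cm2                          # total current, A
    mol_per_s = stoich * current / (z * F * gas_fraction)
    return mol_per_s * V_M * 3600.0                       # NL h^-1

h2 = inlet_flow_nl_h(3.6, 0.2, 45.2, z=2)                       # -> ~13.6 NL/h
air = inlet_flow_nl_h(2.0, 0.2, 45.2, z=4, gas_fraction=0.21)   # -> ~18.0 NL/h
print(f"anode H2: {h2:.1f} NL/h, cathode air: {air:.1f} NL/h")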
Phosphoric acid loss.-In order to determine the phosphoric acid loss during fuel cell operation, the phosphoric acid/water vapor mixture in the exhaust gas stream of anode and cathode was condensed by passing each stream through water in a polypropylene bottle. The sample containers were emptied every 24-300 h after flushing the PFA flange and tube between sample container and fuel cell with 50 ml of deionized water; in this way, all phosphoric acid that condensed between cell and sample container is collected as well. The concentration of the water/phosphoric acid mixture was subsequently analyzed by ion chromatography (IC) (Metrohm 882 Compact IC plus system; Metrosep A Supp 5 150 anion separation column). The IC system measures HPO₄²⁻ concentrations in the range of 1-20 ppm (mg L⁻¹) with an accuracy of ±1.5%, determined by periodically measuring standards. The loss rate of H₃PO₄ from the MEA can be calculated as follows:

r_PA = (m_HPO₄ · M_H₃PO₄ / M_HPO₄) / (A · t)   [1]

where m_HPO₄ is the mass of HPO₄²⁻ collected in the condensate (the molar mass ratio corrects for the difference in molar mass between the two species), A the active area of the fuel cell and t the time the bottle was connected to the fuel cell. The downside of this acid loss rate representation is that it does not directly correlate the acid loss with the gas volume flow. Therefore, a different representation was chosen where the PA loss is expressed as a PA concentration in the gas leaving the anode and cathode, respectively, as given in Equation 2:

c_H₃PO₄ = (m_HPO₄ · M_H₃PO₄ / M_HPO₄) / (V̇(T, λ, j) · t)   [2]

The H₃PO₄ concentration is typically in the ppt (ng L⁻¹) range; V̇ is the volumetric outlet gas flow in L h⁻¹ as a function of fuel cell operating temperature T, gas stoichiometry λ and current density j, and is calculated from the ideal gas law and Faraday's law with the assumption that all produced water vapor leaves the cell on the cathode side. Error bars in the plots indicate the variation of phosphoric acid evaporation during the measurement cycles. Phosphoric acid evaporation rates of the anode are not presented for λ_anode ≤ 3.6, due to detection limits of the IC. Standard conditions (0 °C, 101,325 Pa) are indicated for gas volumes by NL; otherwise, the volume of gas is calculated at the respective temperature. Based on the PA content in the gas, a vapor pressure curve can be derived using Antoine's equation. The vapor pressure of phosphoric acid is calculated at the respective temperature T according to:

p_H₃PO₄ = (c_H₃PO₄ / M_H₃PO₄) · R · T   [3]

In this equation p_H₃PO₄ is the vapor pressure of phosphoric acid and R the ideal gas constant. The Antoine equation is then given as:

log₁₀ p_H₃PO₄ = A − B / (C + T)   [4]

where A, B and C are fitting parameters and T is the temperature of the fuel cell in Kelvin. This equation assumes a constant heat of vaporization, reducing its validity to a narrow temperature range.

Accelerated stress test.-After a break-in period of 120 h at 160 °C, the fuel cell was operated at 190 °C, 0.2 A cm⁻² and gas flow rates of 100 NL h⁻¹/50 NL h⁻¹ on the cathode/anode side. The challenge of an AST is to choose operating parameters that trigger only a single degradation mode, in this case electrolyte evaporation, while minimizing all other degradation effects, c.f. carbon corrosion,3 membrane pinhole formation,14 GDL/electrode flooding with PA,3 catalyst particle detachment and agglomeration.3 Since this is realistically not possible, a characterization method is necessary to gain insight into the different cathode degradation modes. Therefore, polarization curves with H₂-air (λH₂/λair = 1.2/2) and H₂-O₂ (λH₂/λO₂ = 1.2/9.5) were recorded at the beginning of life and during the course of the experiment at time intervals of 200-360 h.
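As a worked illustration of Eqs. 1-4, the Python sketch below converts a condensate measurement into a loss rate and a gas-phase concentration and fits Antoine parameters to hypothetical vapor pressure values. All numbers are placeholders, not data from this study, and the simplified fit sets C = 0:

import numpy as np

M_H3PO4, M_HPO4 = 97.99, 95.98   # molar masses, g mol^-1
R = 8.314                        # J mol^-1 K^-1

def loss_rate_mg_cm2_h(c_ic_mg_l, v_sample_l, area_cm2, hours):
    # Eq. 1: PA loss rate from the IC-measured HPO4(2-) concentration
    m_pa_mg = c_ic_mg_l * v_sample_l * M_H3PO4 / M_HPO4
    return m_pa_mg / (area_cm2 * hours)

def gas_concentration_ng_l(c_ic_mg_l, v_sample_l, v_dot_l_h, hours):
    # Eq. 2: PA concentration in the outlet gas (ppt = ng L^-1 range)
    m_pa_ng = c_ic_mg_l * v_sample_l * (M_H3PO4 / M_HPO4) * 1e6
    return m_pa_ng / (v_dot_l_h * hours)

def vapor_pressure_pa(c_pa_ng_l, t_k):
    # Eq. 3: ideal-gas conversion, (mol m^-3) * R * T
    return (c_pa_ng_l * 1e-9 / M_H3PO4) * 1000.0 * R * t_k

rate = loss_rate_mg_cm2_h(c_ic_mg_l=10.0, v_sample_l=0.5, area_cm2=45.2, hours=120)
conc = gas_concentration_ng_l(10.0, 0.5, v_dot_l_h=30.0, hours=120)
print(rate, conc, vapor_pressure_pa(conc, 433.15))

# Eq. 4 with C = 0: log10(p) = A - B/T, linear in 1/T
T = np.array([433.15, 443.15, 453.15])     # 160, 170, 180 degC
p = np.array([1.0e-4, 2.5e-4, 6.0e-4])     # hypothetical Pa values
slope, intercept = np.polyfit(1.0 / T, np.log10(p), 1)
A_fit, B_fit = intercept, -slope
print(f"Antoine fit (C = 0): A = {A_fit:.2f}, B = {B_fit:.0f} K")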
Ohmic cell resistances are measured with 1 kHz AC impedance measurements (Hoecherl & Hackl 80A electronic load with integrated sine wave generation).

Determination of voltage loss terms.-A brief explanation of the different voltage loss terms and how to retrieve them is given here; a more extensive explanation can be found elsewhere.2,3,15-18 It was previously shown that all kinetic and mass transport losses of the hydrogen electrode can be neglected. Therefore, the cell voltage E_cell can be described according to Equation 5:

E_cell = E₀ − η_ORR − η_mtx − η_IR   [5]

where E₀ is the equilibrium potential at a given temperature and gas partial pressures, η_ORR the oxygen reduction reaction overpotential, η_mtx the mass transport overpotential induced by O₂ transport through the GDL and catalyst layers, and η_IR the voltage loss due to the ohmic resistance of the cell. The equilibrium potential is obtained from

E₀ = −ΔG(T, p) / (2F) + (RT / 2F) · ln(p̄_H₂ · p̄_O₂^1/2 / p̄_H₂O)   [6]

where ΔG(T, p) is the Gibbs free energy of formation19 at a given temperature and a pressure of 101,325 Pa for gas phase water. The second term corrects the reversible potential for the mean gas partial pressures of hydrogen, oxygen and water. By correcting for the IR drop and assuming a purely kinetically controlled polarization curve when using pure oxygen as the cathode gas, Equation 5 can be simplified to:

E_cell + η_IR = E₀ − η_ORR   [7]

The oxygen reduction reaction overpotential can now be calculated as the difference between the reversible potential E₀ and the measured ohmic-drop-corrected oxygen polarization curve. In order to calculate the mass transport overpotential, a theoretical Tafel line for air polarization has to be calculated; the difference between the measured air polarization and this theoretical line can then be attributed to mass transport losses. The theoretical air polarization curve is shifted to lower potentials by the difference in reversible potential between the average oxygen partial pressures of air and oxygen operation, as given by:

ΔE₀ = γ · b · log₁₀(p̄_O₂,O₂ / p̄_O₂,air)   [8]

In this equation ΔE₀ is the difference in reversible potential between air and oxygen operation, b the Tafel slope and γ the reaction order (γ = 0.52 ± 0.05).20 The measured Tafel slope in the performed experiments is on the order of 105 ± 5 mV/decade, which is close to the theoretical value of 92 mV/decade (= 2.303 RT/F). Deviations from this ideal value can be attributed to non-ideal design of the electrodes.21 For better comparison of the measured data, the different overpotentials are given as the difference to the values at BoL according to:

Δη_ORR = η_ORR,BoL − η_ORR,t   [9]
Δη_mtx = η_mtx,BoL − η_mtx,t   [10]
Δη_IR = η_IR,BoL − η_IR,t   [11]

Results and Discussion

In this section, phosphoric acid loss rates of BASF Celtec MEA as a function of temperature and gas stoichiometries are presented. Based on these results, vapor pressure curves are derived, which give the basis for lifetime predictions with respect to electrolyte inventory.

Phosphoric acid loss.-In Figures 1 and 2 and Table I, the phosphoric acid concentration in the off-gas stream of cathode and anode is presented as a function of temperature from 160-190 °C and fuel cell inlet gas flow rates. It can be seen that in the temperature range of 160-180 °C the PA content in the gas scales exponentially with temperature but is independent (within measurement accuracy) of flow rate or current density, as additionally pointed out in Table II for measurements at 160 °C. It should also be noted that a change in stoichiometry on only the anode or cathode does not influence the electrolyte content in the gas phase; i.e., doubling the flow rate on the anode has no influence on the phosphoric acid loss rate of the cathode and vice versa.
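A minimal Python sketch of the voltage-loss split defined in Eqs. 5-11 above, assuming the IR-corrected oxygen polarization curve is purely kinetic; the array inputs and the mean oxygen partial pressure ratio are illustrative assumptions, not values from the study:

import numpy as np

def split_overpotentials(E0, e_o2, e_air, j, r_ohm, b=0.105, gamma=0.52,
                         p_o2_ratio=4.76):
    # j: current densities (A cm^-2); e_o2/e_air: measured cell voltages (V);
    # r_ohm: high-frequency resistance (ohm cm^2); p_o2_ratio approximates the
    # mean p_O2(oxygen)/p_O2(air), here roughly 1/0.21 as a placeholder.
    eta_ir = j * r_ohm
    eta_orr = E0 - (e_o2 + eta_ir)                 # Eq. 7
    delta_e0 = gamma * b * np.log10(p_o2_ratio)    # Eq. 8
    e_air_theory = (e_o2 + eta_ir) - delta_e0      # theoretical air Tafel line
    eta_mtx = e_air_theory - (e_air + eta_ir)      # gap = mass transport loss
    return eta_orr, eta_mtx, eta_ir

j = np.array([0.2, 0.8])
orr, mtx, ir = split_overpotentials(E0=1.12, e_o2=np.array([0.78, 0.68]),
                                    e_air=np.array([0.70, 0.55]),
                                    j=j, r_ohm=0.10)
print(orr, mtx, ir)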
Given this insensitivity to stoichiometry, the water drag reported for HT-PEFC22 can be neglected as an influencing factor for the studied parameter range. At first sight, these results are unexpected considering the fact that the phosphoric acid vapor pressure is a function of concentration6,9 and a change in gas stoichiometry induces a change in concentration due to a change in water partial pressure. Additionally, high current densities induce extensive phosphoric acid migration within the fuel cell4 and, at least locally, change the electrolyte concentration. We attribute this insensitivity of the PA gas content to the gas volume flow to a change in the latent heat of vaporization over the relevant concentration range that is negligible compared to the measurement accuracy. Furthermore, the residence time of the gas within the fuel cell is high enough to fully saturate the gas with phosphoric acid independent of the electrolyte concentration; hence, the phosphoric acid gas content scales solely with the outlet gas volume. Consequently, evaporation rate or transport limitations (diffusive gas phase transport through the porous transport medium) can be neglected for the temperature range of 160 °C to 180 °C and stoichiometries of up to λH₂/λair = 8/8 at 0.2 A cm⁻². At 190 °C the evaporation rates still increase compared to lower temperatures, but the PA content in the cathode gas deviates significantly between gas flows of 18 NL h⁻¹ and 54/90 NL h⁻¹. On the anode side, the effect is also present, although less pronounced. It has been shown that gas crossover between adjacent channels in a serpentine flow field occurs when gas velocities increase beyond a certain threshold.23 The reason for this is a pressure difference between adjacent channels; consequently, the residence time of the gas within the fuel cell decreases at higher flow rates. This might leave insufficient time for the gas to saturate with electrolyte vapor. Since the PA gas concentration increases roughly exponentially with temperature (c.f. Figures 1-4) while the residence time decreases inversely with the volumetric gas flow rate, a limitation by diffusive gas phase transport or evaporation rate becomes apparent at high temperatures (≥190 °C) and stoichiometries λair ≥ 2. A comparison of the PA gas content for a phosphoric acid fuel cell (PAFC)24 and the cathode gas content of a HT-PEFC is shown in Figure 3. In the range of 160-180 °C, the linear increase (in the semilog plot) and the overlapping values in PA gas content for both technologies further confirm the suggested PA loss by vapor saturation. Additionally, at 190 °C the vapor concentration for the HT-PEFC shows a distinctive deviation from the linear vapor saturation behavior. In Figure 4, the vapor pressure for cathode and anode is plotted for the same gas inlet flow rates and current density (18 NL h⁻¹/13.6 NL h⁻¹ cathode/anode; 0.2 A cm⁻²) as a function of temperature. Using Antoine's equation, a vapor pressure curve for cathode and anode can be calculated, which is indicated by the straight lines in Figure 4; the derived fitting parameters are given in Table III. It becomes apparent that the vapor pressure of H₃PO₄ in the fuel cell gas outlets of anode and cathode varies noticeably in the range 160-180 °C, while the two values fall together at 190 °C. In the temperature range of 160-180 °C, anode and cathode gas streams are both saturated with PA and no transport or evaporation rate limitation can be observed, as discussed above.
Consequently, only the 160-180 °C values are used to derive the fitting parameters for Antoine's equation. We can only speculate about the reason behind the difference in vapor pressure between cathode and anode. It might be caused by a local temperature sink in the endplate of the anode, causing PA to condense before it reaches the fuel cell outlet. Another explanation could be a non-ideal behavior of the carrier gas/PA vapor mixture, which leads to a vapor pressure depression effect. The measured vapor pressure values at 190 °C for anode and cathode are significantly lower than predicted by the Antoine fit, presumably due to the presence of transport or evaporation rate limitations already at the low flow rates of 18/13.6 NL h⁻¹. In Figure 5, a lifetime prediction with respect to electrolyte inventory was made by applying the derived Antoine fit for 160-180 °C and a stoichiometry of λH₂/λair = 3.6/2. Since the acid loading (≈36 mg PA cm⁻²) of the MEA is known, the total mass of phosphoric acid in the MEA can be calculated. Assuming a constant phosphoric acid evaporation rate throughout the lifetime of the fuel cell, the time for total acid loss can be calculated. It should be noted that the results yield the highest accuracy for current densities between 0.2 and 0.8 A cm⁻² at 160 °C and 0.2-0.6 A cm⁻² at 170-180 °C; values for higher current densities were extrapolated, and therefore mass transport or evaporation limitations cannot be excluded. It is expected that the results can be transferred to single cells or even stacks of similar or bigger size. The results can be compared to the DOE's5 requirement of 50,000 operating hours for medium-sized CHP systems. The DOE's failure criterion is a 10% loss of electrical performance; the plotted time to 0% PA content is therefore certainly an overestimation. Additionally, as will be shown later, the phosphoric acid within the membrane is very mobile, making it necessary to also account for the phosphoric acid in the membrane underneath the sub-gasket, outside of the active area. With the present MEA design, this increases the absolute phosphoric acid amount of the MEA by about 13%. Nevertheless, the examined HT-PEFC MEA is highly unlikely to reach the desired lifetime at temperatures ≥170 °C. Even at 160 °C, any operating point beyond 0.2-0.4 A cm⁻² significantly increases the risk of failing the DOE requirement. With these results in mind, the question then arises as to what extent phosphoric acid loss from the MEA degrades performance. Therefore, an accelerated stress test was carried out, which is discussed in the next section.

Accelerated stress test.-For interpretation of the results in the previous section, an accelerated stress testing protocol was implemented to analyze the influence of electrolyte loss on actual HT-PEFC lifetime and performance. In order to maximize phosphoric acid loss and minimize measurement time, a temperature of 190 °C with high hydrogen and air flow rates of 50 and 100 NL h⁻¹, respectively, corresponding to stoichiometries of λH₂ = 13.2 and λair = 11, was chosen. The voltage profile, including the measured high frequency resistance and the PA loss from the MEA, is shown in Figure 6. During the first 2000 h of operation, an approximately linear voltage degradation of 41 μV h⁻¹ was observed. Up to 2000 hours, the non-linear increase of the HFR is too weak to dominate the voltage degradation, indicating overlapping degradation modes.
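To make the lifetime estimate of Figure 5 concrete: with a known inventory and a constant evaporation rate, the time to complete acid loss is a simple quotient. A worked sketch follows; the loss rate below is a hypothetical placeholder, not a measured value:

# Time to 0% PA, assuming a constant evaporation rate:
# t_total [h] = acid inventory [mg cm^-2] / loss rate [mg cm^-2 h^-1]
bol_loading = 36.0 * 1.13   # mg PA cm^-2, incl. ~13% stored under the sub-gasket
loss_rate = 9.0e-4          # mg cm^-2 h^-1, hypothetical placeholder

t_total_h = bol_loading / loss_rate
print(f"time to complete PA loss: {t_total_h:,.0f} h")   # ~45,200 h in this toy case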
Only after 2000 hours does the exponential increase in ohmic resistance ultimately lead to fuel cell failure after 2830 h. Interestingly, the measured phosphoric acid loss rate from the cell did not change throughout the test, suggesting a negligible influence of the polymer-electrolyte interaction down to low PA doping levels. At end of life (EoL), a circular part of the MEA was cut out, and the phosphoric acid left was determined to be 17 ± 1% of the initial content, which is equivalent to an acid loading of 4.7 ± 0.7 mg H₃PO₄ cm⁻². The remaining amount of acid, calculated from the evaporated PA, was 13 ± 6%, confirming the accuracy of the determination of the evaporated PA. It is important to note that the overlapping part of the membrane, which was partially clamped in between the sub-gaskets, was also taken into account for the total phosphoric acid amount. This was verified by measuring the PA amount of a piece of membrane clamped in between the sub-gaskets at EoL and comparing it to a part of the MEA from the active area; both pieces exhibit almost identical PA loadings. We therefore conclude that PA in these membranes is very mobile and that at EoL most of the remaining electrolyte can be found within the membrane. The contribution of the different overpotentials to the cell performance loss was determined by measuring separate H₂/air and H₂/O₂ polarization curves; the procedure for evaluating the different contributions is described in the Experimental section. The split of the total voltage degradation into ohmic, mass transport and ORR overpotential contributions is plotted for 0.2 and 0.8 A cm⁻² in Figure 7. The voltage degradation can be split into three regions, as indicated in Figures 6 and 7. In region I, the oxygen reduction overpotential at operating points of 0.2 and 0.8 A cm⁻² increases by about 30 mV and becomes the dominating factor at low current densities. It has been pointed out that this decrease in cathode kinetics during the first 500-1000 h can mainly be attributed to platinum particle growth.3 An increase in oxygen reduction overpotential of 30 mV equals a reduction in electrochemically active surface area (ECSA) by approximately 50%. The mass transport overpotential also increases in region I and is the major contributor to the performance loss at high current density. This degradation effect can most probably be attributed to changes in the hydrophobicity of the catalyst layer, resulting in higher saturation with PA, effectively increasing mass transport resistances.25,26 The loss of ECSA could potentially also be attributed to a deficiency of electrolyte within the electrodes, effectively decreasing the three-phase boundary, but the high PA mobility and the strong increase of η_mtx render this highly unlikely. Performance loss due to increases in ohmic resistance can be neglected during the first 1200 h, leading to the conclusion that a loss of almost 40% of the PA (c.f. Figure 6) is of no concern for this kind of MEA. Region II is characterized by a stabilization of the oxygen reduction and mass transport overpotentials at low and high current densities. The ohmic resistance, on the other hand, increases strongly from 1200 operating hours onward. In the constant-gap setup, this can potentially be attributed to an increase in contact resistance concomitant with an increase of the proton resistance of the membrane due to loss of electrolyte. In region III, fuel cell degradation is again mainly driven by the evaporation of phosphoric acid, as can be seen by a strong increase in η_IR.
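A quick consistency check of the end-of-life acid balance reported above; the cumulative condensate total below is a hypothetical placeholder chosen to be consistent with the quoted percentages:

# EoL balance: remaining fraction = 1 - evaporated / BoL inventory
bol_inventory = 36.0 * 1.13   # mg cm^-2, incl. membrane under the sub-gasket
evaporated = 35.4             # mg cm^-2 collected over 2830 h (placeholder)

remaining = 1.0 - evaporated / bol_inventory
print(f"remaining PA: {remaining:.0%}")   # ~13%, cf. 17 +/- 1% measured directly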
The strong increase in η_IR corresponds well to literature data28 where a strong decrease in ionic conductivity occurs for PPA-processed m-PBI membranes at PA/PBI ratios <8-10. Furthermore, at 2300 h, the mass transport improves slightly at 0.2 A cm⁻² and significantly at 0.8 A cm⁻². This can be interpreted as evaporation of excess PA in the electrodes and consequently easier access of the reactant gas to the catalyst layer. Increased loss of PA from the electrodes and reduction of the three-phase boundary area then cause a strong increase in the oxygen reduction reaction overpotential. This is promptly followed by the fuel cell failure after 2830 h. The final PA/PBI ratio of ≈5 is, according to the literature, an optimal compromise between conductivity and mechanical stability27 for film-cast m-PBI membranes. Hence, mechanical failure or insufficient proton conductivity can theoretically be excluded as the failure mode. It is therefore hypothesized that the fuel cell failure is either due to complete loss of PA from the electrodes or due to catalyst layer detachment. If only ohmic losses due to electrolyte evaporation are considered, a phosphoric acid loss of 65-70% at 0.8 A cm⁻² and 75-80% at 0.2 A cm⁻² can be tolerated before the DOE threshold of 10% performance loss is reached. A more sophisticated compression setup, compensating for the thickness loss of the MEA, might push these limits even further by reducing contact resistance and catalyst layer detachment effects. The predicted lifetimes in Figure 5 need to be reduced by about 20% (at the given stoichiometry) to give realistic lifetimes with respect to acid inventory for the tested and highly doped (≈40 PA/PBI) MEAs. PBI film-cast membranes with a much lower doping level of 6-10 PA/PBI29,30 may have different PA loss rates, and it would be of interest to compare these membranes under similar conditions in future experiments. In the case of similar PA loss rates, a drastically reduced lifetime can be expected, and the vast surplus of PA in the examined sol-gel-type membranes would then significantly increase the MEA lifetime. In order to achieve better lifetimes also at higher temperatures and operation at maximum power output, an advanced phosphoric acid electrolyte management seems inevitable. It has been shown that PA within the MEA is very mobile,4 and in-cell condensation or storage of acid as well as pre-saturation of the cathode and anode gas streams31 with PA would be required strategies to extend the fuel cell lifetime. In-cell condensation of phosphoric acid in phosphoric acid fuel cells (PAFC) decreased the acid vapor pressure by a factor of four, concomitant with a three-fold increase in cell life.32 This was realized by implementing a zone at the cell exit where the absence of catalyst in combination with enhanced cooling reduces the reactant temperature by 20-40 °C.33 Additional in-cell storage of PA, as demonstrated for PAFC, e.g. in porous bipolar plates,34 can be used to continuously resupply depleted electrolyte.

Conclusions

In this work, phosphoric acid electrolyte loss by evaporation from 45 cm² active area cells with BASF Celtec MEAs was examined as a function of operating parameters. Even though variations of the reactant gas stoichiometry and current density change the water partial pressure in the cell and consequently the phosphoric acid concentration, the acid loss was found to be only a function of the off-gas flow rates and temperature.
This means that changes in electrolyte concentration, within realistic fuel cell operating conditions, do not significantly influence the vapor pressure of PA and consequently the electrolyte loss rate. Using Antoine's equation, a vapor pressure curve was derived for temperatures between 160 and 180 °C. At 190 °C, the phosphoric acid loss is limited by mass transport or evaporation rate, and the values do not fit Antoine's equation. Based on these results, the time for complete phosphoric acid loss from the fuel cell at 160-180 °C and 0.2-0.8 A cm⁻² was calculated and compared to the DOE's lifetime requirement for CHP systems. The calculated lifetime predictions indicate that 50,000 hours are reached only under few operating conditions (160 °C; 0.2-0.4 A cm⁻²). The results are expected to be applicable to Celtec MEAs with carbon paper gas diffusion electrodes and active areas ≥45 cm², up to the stack level. The correlation of electrolyte inventory and fuel cell performance was explored with an accelerated stress test at 190 °C and high gas flow rates. Within 2830 h of operation, ≈90% of the phosphoric acid was lost from the MEA. The PA loss rate was found to be constant over the entire operation time, suggesting a negligible influence of the polymer-electrolyte interaction down to a molar ratio of about 5 PA/PBI. It was further found that the cell could sustain a PA loss of ≈40% without a significant increase in ohmic resistance. In the first 1200 h, fuel cell degradation was driven by kinetic and mass transport losses due to platinum particle growth and changes in the hydrophobicity of the catalyst layer. Subsequently, phosphoric acid loss induced increasing ohmic losses, concomitant with kinetic and mass transport improvements, possibly due to an advantageous electrolyte distribution within the electrodes. The final fuel cell failure was attributed to electrolyte starvation within the electrodes, caused by insufficient resupply of PA from the membrane, or even to catalyst layer detachment. If only ohmic losses are considered for the PA loss induced degradation, an acid loss of 65-80% can be tolerated before reaching the DOE's 10% performance loss threshold. In conclusion, to achieve the lifetime requirements also at temperatures above 160 °C and at high current densities, an advanced phosphoric acid electrolyte management seems inevitable.
Clinical, Radiological and Pathological Characteristics Between Cerebral Small Vessel Disease and Multiple Sclerosis: A Review

Cerebral small vessel disease (CSVD) and multiple sclerosis (MS) are groups of diseases associated with small vessel lesions; the former often results from the vascular lesion itself, while the latter originates from demyelination, which can damage the small cerebral veins. Clinically, CSVD and MS do not have specific signs and symptoms, and it is often difficult to distinguish between the two on the basis of pathology and imaging. Therefore, failure to correctly identify and diagnose the two diseases will delay early intervention, which in turn will affect patients' long-term functional ability and even increase the burden on their lives. This review summarizes recent studies regarding the similarities and differences of the clinical manifestations, pathological features and imaging changes in CSVD and MS, which could provide a reliable basis for the diagnosis and differentiation of the two diseases in the future.

INTRODUCTION

Cerebral small vessel disease (CSVD) belongs to a group of diseases involving the cerebral arterioles, venules and capillaries. CSVD accounts for about 25% of ischemic strokes, and its etiology remains unclear. The pathogenesis may be related to vascular endothelial injury, hypoperfusion and ischemia, and impaired blood-brain barrier function (1, 2). Currently, CSVD can be divided into six categories according to different pathological characteristics: atherosclerosis, cerebral amyloid angiopathy, hereditary vascular diseases [such as cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), Fabry disease, etc.], inflammatory vascular diseases [such as primary angiitis of the central nervous system (PACNS), Susac syndrome, etc.], venous collagen deposition and other types (3, 4). The imaging features include recent small subcortical infarcts, lacunes of presumed vascular origin, white matter hyperintensities of presumed vascular origin, perivascular spaces (PVS), cerebral microbleeds (CMB) and brain atrophy. Some patients develop CSVD-related symptoms, but many remain asymptomatic, and the occurrence of such imaging features becomes more frequent in individuals above the age of 50. However, after repeated small vessel events, patients often show abnormal gait, numbness of the limbs, cognitive decline and other symptoms, which can ultimately result in cognitive dysfunction and physical disability. It has been reported that CSVD is the etiology in 45% of patients with dementia, ranking second only to Alzheimer's disease (3, 5). Therefore, the complications caused by CSVD bring immeasurable burden and cost to society. Multiple sclerosis (MS) is an autoimmune disease characterized by inflammatory demyelination of the white matter, centered on the central nervous system. The etiology and pathogenesis of MS remain unknown. Anatomically, MRI shows that the white matter lesions of MS are often distributed around the lateral ventricles, in the subcortex, thalamus, brainstem and elsewhere, appearing as hyperintense lesions on T2-weighted images (6). MS is an important cause of non-traumatic nervous system dysfunction in young people. Epidemiologically, the incidence of MS shows obvious differences between regions of the world; the incidence in Europe and America is higher than that in Asia and Africa. The incidence of MS in females is higher than that in males.
Usually, the clinical onset age ranges from 20 to 40 years, and the clinical manifestations are non-specific. The main symptoms include limb numbness, limb weakness, dizziness, ataxia, blurred vision and others, which are often similar to the clinical manifestations of other inflammatory and non-inflammatory diseases, making them difficult to distinguish. If patients are not correctly diagnosed and treated in the early stage of MS progression, more than half of them will lose their independent motor ability within 20 years (7, 8). This not only seriously affects the quality of life of patients, but also brings a greater medical burden to all sectors of society. In clinical work, CSVD and MS often share similar clinical manifestations, imaging features and pathological changes, which causes many difficulties in the identification, early diagnosis and therapeutic intervention of these diseases, and further delays the effective treatment window for patients. Therefore, an in-depth understanding of the similarities and differences in the clinical symptoms, imaging changes and pathological features of these two diseases is helpful for improving diagnosis, comprehensive evaluation and the formulation of treatment strategies in clinical work. This review intends to summarize and discuss the similarities and differences between CSVD and MS from three aspects, namely clinical features, imaging changes and pathological features, so as to provide a solid basis for subsequent clinical diagnosis and treatment.

CSVD

Clinical, Pathological, and Imaging Features of CSVD

CSVD refers to a series of clinical, imaging and pathological syndromes resulting from various etiologies affecting the small intracerebral arteries and their distal branches, microarteries, capillaries, microvenules and small veins. The current definition of CSVD is broader and includes not only the small vessels mentioned above, but also the vascular structures within the brain parenchyma and the subarachnoid space within 2-5 mm surrounding these small vessels (9). CSVD is one of the important causes of white matter lesions. Due to its occult onset and mild symptoms, it is difficult to diagnose by clinical screening. However, repeated cerebrovascular events will eventually result in limb dysfunction and seriously affect the quality of life. Pathologically, cerebral arteriosclerosis often occurs in CSVD, which is highly correlated with age and vascular risk factors (VRFs) (e.g., hypertension) (10). The results of pathological sections indicate that fibrinoid necrosis of the vascular wall, lipohyalinotic degeneration, atherosclerotic plaque, thickening of the vessel wall and narrowing of the vascular lumen can all promote the formation of cerebral arteriolar sclerosis. The loss of smooth muscle cells in the vessel wall leads to dysfunction of vascular autoregulation and the formation of microaneurysms. In view of the fact that arteriosclerosis is a systemic pathological process, similar pathological findings exist in target organs rich in small blood vessels, such as the retina and kidney (5). Although many previous studies on CSVD have made some achievements, the pathogenesis of CSVD remains unclear. Imaging can not only assist the diagnosis of CSVD and reflect the degree of involvement, but also help to understand its pathological basis. However, imaging descriptions show individual heterogeneity (3).
Therefore, an expert consensus was organized and drafted by an international working group, which classified the imaging features of CSVD as recent small subcortical infarcts, lacunes of presumed vascular origin, white matter hyperintensities of presumed vascular origin, perivascular spaces (PVS), cerebral microbleeds (CMB) and brain atrophy (3, 9) (Figure 1).

Recent Small Subcortical Infarcts

Infarct foci usually smaller than 20 mm are considered the imaging feature of recent small subcortical infarcts or acute lacunar infarction. These lesions progress over time to lacunes or white matter hyperintensities without cavitation, and may also disappear (11). The resulting lacunes are generally round or oval, with a diameter of about 3 to 15 mm, and their signal intensity is consistent with that of CSF (3). The territories of the perforating arterioles are the predilection sites of recent small subcortical infarcts, which can gradually evolve into lacunes and usually appear together with white matter hyperintensities in the chronic stage, suggesting that lacunes and white matter hyperintensities may have an interrelated pathological basis (12). Another distinguishing feature is that on fluid-attenuated inversion recovery (FLAIR) images, a hyperintense rim can be seen at the edge of the lesion, which suggests that the lesion is more likely a lacune than a perivascular space. Multiple perivascular spaces in the basal ganglia are called the "sieve state", which is usually related to brain atrophy and neurodegenerative disease (3, 5).

Lacune of Presumed Vascular Origin

Lacunes of presumed vascular origin are generally small cavities that remain in the brain tissue after removal of the necrotic tissue of a subcortical infarction. Most lack clear corresponding clinical manifestations, but after several episodes of mild hemiparesis they may lead to progressive neurological decline, such as cognitive decompensation or even vascular dementia (13), balance disorders, gait disturbances, urinary incontinence and affective disorders. The cognitive deficits caused by lacunes are dominated by a decline in executive function. Regarding motor impairment, an increased number of lacunes is associated with slower gait speed, a wider step base and reduced balance; thalamic and frontal lacunes are associated with slower gait speed, reduced stride length and slower cadence (14).

White Matter Hyperintensities of Presumed Vascular Origin

The frontal and parietal lobes are the predilection sites of WMHs (15), and about 80% of white people over 60 years old have WMHs. According to their extent, WMHs can be divided into three types: punctate, early confluent and confluent (16, 17). The terminal regions of the blood supply are the predilection sites of WMHs. In the supratentorial region, they are more common in the basal ganglia, corona radiata and centrum semiovale (12); in the brainstem, they are more often found in the center of the brainstem (18). Previous studies have shown that the progression of WMHs is related to cognitive impairment, behavioral impairment, gait abnormality and urination disorders, and develops over time (4, 19, 20).

PVS

The PVS is an extension of the extracerebral fluid space surrounding the arteries, small arteries, veins and small veins. Since the PVS enters from the brain surface or passes through the brain parenchyma, it can be traced through the lamellar pia mater (21, 22). As patients age, the PVS becomes increasingly evident, especially at the base of the brain (21).
Several studies have shown that an enlarged PVS is associated with reduced cognitive function (22). The clinical symptoms associated with PVS are still being explored. Several cross-sectional studies have found that PVS is associated with reduced information processing speed, reduced executive function and an increased risk of vascular dementia (23). However, a meta-analysis that included five large studies found that PVS was not associated with cognitive function in healthy older adults (21). Research on PVS and movement disorders is limited. Previous case reports have suggested that severe PVS in the striatal area is associated with motor symptoms in Parkinson's disease, possibly because severe PVS affects striatal structure and function, which in turn leads to extrapyramidal symptoms.

CMB

SWI (susceptibility-weighted imaging) or GRE (gradient recalled echo) sequences are relatively sensitive for the detection of cerebral microbleeds. The lesions are <5 mm in diameter on these images, but are not visible on other MRI sequences or on CT (computed tomography) (24). When associated with chronic hypertension and atherosclerosis, microbleeds usually occur in the deep gray matter, and should be distinguished from calcification, normal blood vessels, iron deposition from other causes, hemorrhagic metastases and brain trauma (3, 25).

Brain Atrophy

Brain atrophy is also an important imaging manifestation of CSVD, and ventricular enlargement can be seen on imaging. Usually, the T1 sequence is the most suitable for evaluation, and corresponding rating scales can also be used (26). However, atrophy caused by large vessel infarction, trauma and other focal lesions is not attributed to brain atrophy caused by CSVD (3).

Cerebral Amyloid Angiopathy

Based on the pathological characteristics of CSVD, cerebral amyloid angiopathy (CAA) is considered to be a vascular disease caused by β-amyloid deposition in the media and adventitia of the cortical and leptomeningeal arteries. The disease preferentially involves the white matter of the occipital and frontal lobes. Different from systemic amyloidosis, CAA commonly occurs in middle-aged and elderly people, and the prevalence in those over 55 years old is as high as 10-40% (27). In addition, 80% of AD patients have coexisting CAA lesions. The pathological features of this disease include hemorrhagic foci of different sizes in the cerebral lobes, subcortex and cortex that evolve over time, and signs of previous subarachnoid hemorrhage (28). Histochemical staining (such as Congo red staining) shows that β-amyloid is deposited in the vascular wall; thinning and loosening of the vascular wall, vascular dilatation and perivascular inflammation are other important pathological features of CAA (29). Increased vascular fragility is often one of the important reasons for the bleeding tendency. Previous imaging studies suggest that chronic hypoperfusion is one of the risk factors for leukoencephalopathy; therefore, lobar hemorrhage, non-traumatic subarachnoid hemorrhage and WMH are also common imaging manifestations of CAA (30). According to the revised Boston diagnostic criteria, imaging findings are suggestive of the diagnosis, but autopsy is still needed for a definite diagnosis (31, 32).

CADASIL

CADASIL is an autosomal dominant disease associated with mutations in the NOTCH3 gene on chromosome 19. The incidence reaches 2-4/100,000, with no significant correlation with gender.
Clinically, it usually presents at the age of 30-40 years, with various clinical manifestations, including migraine, recurrent ischemic stroke and progressive cognitive dysfunction in young people. Pathology suggests that the disease is a non-atherosclerotic CSVD without amyloid deposition (33). CADASIL is the most common genetic cause of subcortical vascular dementia. Under the electron microscope, granular osmiophilic material (GOM) is found deposited in the smooth muscle layer of the leptomeningeal arteries and perforating vessels. This abnormal deposition causes the loss of smooth muscle cells, which leads to narrowing and even occlusion of the lumen of small blood vessels (34-36). The imaging features of the disease show a dynamic evolution with the age of the patients. First, WMHs can be found in the temporal pole in most CADASIL patients around 30 years of age, a characteristic that distinguishes CADASIL from other microangiopathies. At the age of 40, WMHs gradually progress to the posterior temporal region, frontal region, parietal region, basal ganglia and thalamus (37). Subcortical U-shaped fibers can be involved, and subcortical lacunar infarctions can often be found. The external capsule and corpus callosum are also involved, which are important landmark lesions of CADASIL. At the age of 50, cerebral microbleeds appear in the white matter and gradually develop. Subsequently, at the age of 50-60, large areas of WMHs, lacunes and cerebral microbleeds are often seen on imaging examination. Due to the high risk of complications in CADASIL patients, invasive procedures such as angiography should be avoided as much as possible, so as not to aggravate disease progression (36). Since CADASIL is a non-atherosclerotic type of CSVD without amyloid deposition, we further summarize the characteristics of CADASIL, atherosclerotic CSVD and MS (Table 1).

PACNS

PACNS is a type of CSVD characterized by idiopathic vasculitis, which is more common in patients aged 50-60 years, and its lesions are limited to the white matter of the brain and spinal cord. There are no specific clinical manifestations, and headache is the most common presenting symptom (38). Laboratory examination can reveal increased levels of inflammatory markers and of protein in the cerebrospinal fluid. The pathological manifestations of PACNS include inflammation, intimal hyperplasia, and occlusion and necrosis of small and medium arteries, often involving the pia mater and cortical vessels (39). Although there are no specific imaging features of PACNS, findings including multiple cortical-subcortical infarctions, hemorrhage, and enhancement of the brain parenchyma and pia mater provide some evidence for the clinical diagnosis. Conventional angiography can sometimes reveal multiple stenoses of small and medium-sized vessels, but brain biopsy remains the gold standard for the diagnosis of this disease (38, 40).

MS

Clinical Features of MS

The MS disease spectrum includes radiologically isolated syndrome (RIS), clinically isolated syndrome (CIS) and clinically diagnosed MS. According to the characteristics of the disease course, clinically diagnosed MS can be classified as follows: relapsing-remitting multiple sclerosis (RRMS); secondary progressive multiple sclerosis (SPMS); primary progressive multiple sclerosis (PPMS); and progressive relapsing multiple sclerosis (PRMS).
Among them, RRMS is the most common in the clinic, while PRMS is the rarest (41).

Pathological Features of MS

Generally, patchy demyelination of the white matter can be found in typical MS pathology. Classic MS lesions are considered to be slender or oblate, usually distributed around venules (42). This demyelination is thought to be associated with macrophage infiltration, perivascular cuff-like lymphocytic infiltrates, accumulation of fibrin and proliferation of reactive microglia. The white matter of MS patients is usually accompanied by abnormal iron deposition, which is related to disease activity and is usually found in white matter with perivenous inflammation (43). On MRI, the Dawson fingers sign is a classic lesion pattern of periventricular WMH in MS patients; the lesions are usually perpendicular to the ventricle, oval, distributed around venules and adjacent to the ependyma. This has certain significance for the differential diagnosis from non-specific WMHs and CSVD. The lesions of CSVD often appear at the posterior horns of the lateral ventricles, where similar lesions may also appear in MS. In the early stage of MS, specific lesions can be found at the junction of the corpus callosum and the ependyma. These lesions show both local patchy changes and a discrete distribution; we name this distribution feature the "dot-line sign", which is usually easier to observe on sagittal T2-weighted imaging. Such lesions are more likely to appear in the genu and body of the corpus callosum (48). Of course, MS also needs to be differentiated from other types of diseases, including other primary or secondary demyelinating diseases [neuromyelitis optica (NMO), acute disseminated encephalomyelitis (ADEM), CADASIL, Susac syndrome, progressive multifocal leukoencephalopathy (PML), etc.]; tumors [lymphoma, glioma and gliomatosis cerebri (GC), the latter being easily confused with leukoencephalopathy due to its diffuse lesions]; and traumatic injury (traumatic/diffuse axonal injury). The corpus callosum lesions of MS enhance in the acute phase, while the corpus callosum lesions of CADASIL usually do not enhance. Another typical lesion of MS is that juxtacortical WMHs involve the subcortical U-shaped fibers. Generally, cortical lesions appear in the early stage of MS and are related to cognitive dysfunction. Cortical lesions are usually smaller, and their contrast is more obvious on double inversion recovery sequences and at high magnetic field strength (49-51) (Figure 2).

Summary of MS

The McDonald diagnostic criteria require dissemination of MS lesions in space and time: dissemination in space requires the presence of white matter hyperintensities in at least two of the four typical locations (periventricular, juxtacortical, infratentorial and spinal cord); dissemination in time requires the presence of both enhancing and non-enhancing lesions on a single MRI scan, or new lesions on follow-up MRI (45, 46). New white matter hyperintensities on follow-up T2 images and contrast-enhancing white matter hyperintensities correlate with inflammatory activity; enhancement can persist for an average of about 3 weeks (47). Dawson's finger sign is the classic lesion pattern of periventricular white matter hyperintensity in MS patients; these lesions are usually perpendicular to the ventricles, ovoid in shape and distributed around small veins.
Dawson's finger sign is the classic lesion pattern of periventricular white matter hyperintensity in MS patients; these lesions are usually perpendicular to the ventricles, ovoid in shape, distributed around small veins, and contiguous with the ependyma, which is important for distinguishing them from non-specific WMHs (52). Early in MS, specific callosal lesions can be found at the junction of the corpus callosum and the ependyma. These lesions are both localized and discrete, forming a subcallosal "dot-line sign" that is usually more easily observed on sagittal T2-weighted images. Such lesions are more likely to be found in the genu and body of the corpus callosum (48). In MS, corpus callosum lesions enhance in the acute phase, whereas in CADASIL they usually do not. Juxtacortical white matter hyperintensities in MS patients involve subcortical U-shaped fibers, which is also a classic lesion location. In addition, there is a growing awareness of cortical lesions in MS, which may have greater prognostic and diagnostic value in the future. Regarding MR spectroscopy, previous studies have shown increased glutamate concentrations in acute MS lesions, indicating a hypoxic condition (53)(54)(55). The McDonald criteria also include the infratentorial region. MS occurring in the brainstem has certain characteristic features: the lesions are usually located in the periphery of the brainstem (CSVD brainstem lesions are usually located centrally), and they are asymmetric or confined to one side of the brainstem with clear boundaries (56). Cerebellar lesions usually affect larger white matter structures, such as the middle cerebellar peduncle, although the white matter of the cerebellar hemispheres may also be affected. Spinal cord lesions are present in 30-40% of CIS cases and up to 90% of clinically confirmed MS patients (52). MS also often involves the cervical spinal cord; such patients are in a dangerous condition and may even die in severe cases. Typically, spinal cord lesions have clear boundaries, extend over no more than one or two vertebral segments longitudinally, occupy less than 50% of the cross-sectional area of the cord in the axial plane, and are usually located in the peripheral white matter of the cord. Extensive spinal cord swelling is rare. Active lesions may enhance, although less frequently than brain lesions. Spinal cord atrophy can occasionally be seen in long-standing, progressive MS, especially in the upper segments (57). The counterpart of MS in the peripheral nervous system (PNS) is chronic inflammatory demyelinating polyradiculoneuropathy (CIDP), a chronic multiphasic disease. To sum up, the diagnosis of MS should be based on a comprehensive analysis of the patient's clinical data, laboratory tests, and auxiliary examinations, including the medical history, cerebrospinal fluid components such as oligoclonal bands (OCB) and intrathecal immunoglobulin G (IgG) synthesis, abnormal visual evoked potentials (VEP), and imaging analysis. However, MS is often misdiagnosed because of atypical imaging features; about 25% of patients who receive MS treatment do not actually suffer from MS.
The imaging features of CSVD, NMO, and ADEM are often confused with those of MS (58), and there is still no effective means to distinguish them, which poses great challenges for clinical diagnosis and treatment. RELATIONSHIP BETWEEN CSVD AND MS Currently, it is thought that there may be an interaction between CSVD and MS, because the progression of both is positively correlated with age. Advanced age implies that glial cells may suffer from hypoxia (59), mitochondrial dysfunction (60), iron deposition (61), and other impairments, changes that are clearly related to neurodegeneration. Recently, Geraldes et al. noted that CSVD shares some features with MS and has been shown to contribute to the neuronal damage seen in vascular cognitive impairment (62). In addition, CSVD (35) has been found to be associated with neurodegeneration in young people with VRFs (63), which may also promote age-related neurodegeneration in MS. We therefore summarize the main possible reasons for the interaction between MS and age-related CSVD: (i) the life expectancy of MS patients has lengthened (64), now exceeding 60 years (65), so the risk of vascular complications is higher; (ii) vascular complications can promote the progression of MS (66), shorten expected survival (67)(68)(69), and aggravate WML load and brain atrophy (70); (iii) the focal demyelinating lesions in the watershed areas of MS (71) are characterized by hypoperfusion and hypoxia (72)(73)(74), suggesting an important relationship between MS pathology and cerebral arterial perfusion. In addition, Lucchinetti et al. previously divided MS lesions into four patterns, of which two (I and II) showed close similarities to T-cell-mediated or T-cell plus antibody-mediated autoimmune encephalomyelitis, while the other two (III and IV) were highly suggestive of a primary oligodendrocyte dystrophy, reminiscent of virus- or toxin-induced demyelination rather than autoimmunity. Pattern III is a distal oligodendrogliopathy, that is, oligodendrocyte apoptosis due to hypoxia or viral infection (75,76). Similar observations have been reported by independent researchers (77,78). Santiago Martinez Sosa et al. also reported that the deep and periventricular white matter is preferentially affected in several neurological disorders, including CSVD and MS, suggesting that common pathogenic mechanisms may be involved, and they considered the potential pathogenic role of tissue hypoxia in lesion development, arising partly from the vascular anatomy of the affected white matter (1); (iv) the chronic inflammatory microenvironment of MS can promote CSVD; furthermore, Robin et al. found shared mechanisms of white matter damage in ischemia and MS and reported that inflammation acts through distinct pathways because of the differing nature of the primary insult (79); (v) some MS patients benefit from drugs targeting the microvascular system (80)(81)(82). Furthermore, Chitnis et al. reported a case of Balo's concentric sclerosis, a variant of MS, with a CADASIL mutation, and suggested systematic testing for the CADASIL mutation in patients with a demyelinating presentation consistent with Balo concentric sclerosis or significant restricted diffusion on MRI (83) (Table 1). The mechanism by which CSVD influences the imaging features of MS remains unclear.
It is possible that the increased T2 lesion load in MS patients with VRFs is due to the superimposition of periventricular vascular WMLs, lacunes, and microhemorrhages. Similarly, the more severe brain atrophy associated with VRFs in MS patients may be due to the superimposition of VRF-related damage on the underlying MS pathology (84). Although MS was previously understood as multiple areas of demyelination with relatively preserved axons in pathological sections, imaging and pathological studies have found that neuronal/axonal damage can occur at an early stage and is related to active inflammation and demyelination (85,86). However, age and disease duration can affect the inflammatory response of white matter (WM) and gray matter (GM) in lesional and non-lesional areas of MS, and the inflammatory response seems to decrease in elderly patients with a longer disease course (87,88). In these patients, neurodegeneration may be related not only to persistent low-grade inflammatory activity but also to mechanisms such as energy deficiency, oxidative damage, hypoxia, and failure of energy reserves (72). Energy failure is an important concept in MS pathogenesis. Glucose and lactate transporters and connexin gap junctions have crucial roles in energy transport among glial cells in MS (89)(90)(91)(92)(93). Recently, Philips et al. summarized how oligodendrocytes transfer energy metabolites to neurons through cytoplasmic "myelinic" channels and monocarboxylate transporters, which allow the fast delivery of short-carbon-chain energy metabolites such as pyruvate and lactate to neurons. These substrates are metabolized and contribute to ATP synthesis in neurons (93). Failure of this process may contribute to MS pathogenesis. Age-related pathological processes, such as AD or vascular disease, may amplify the effects of these mechanisms and lead to increased neuronal damage (87). The presence of CSVD in MS patients may represent an additional blow to these compensatory mechanisms. In this case, chronic hypoxia caused by vascular dysfunction and hypoperfusion can cause neuronal death and aggravate neurological dysfunction. In neurodegenerative diseases such as AD (94,95) and vascular cognitive impairment (96), cerebrovascular lesions promote neuronal damage and aggravate the pericyte and astrocyte dysfunction caused by chronic hypoperfusion, BBB permeability changes, oxidative stress, inflammation, and mitochondrial damage (97). The "blood vessel-neuron-inflammation" theory, centered on NVU dysfunction in neurodegenerative diseases, is also applicable to MS (95). Energy deficiency and tissue hypoxia caused by mitochondrial dysfunction in MS lead to ionic imbalance and axonal degeneration, and accompanying CSVD may aggravate this process, especially in watershed areas (72). Not only do the relationships between MS lesions and arteries and veins need further study (72), but so do the characterization and quantification of CSVD, including scoring of arterial wall changes, microbleeds, and microinfarctions, which would help to further explore the relationship between CSVD and MS. Moreover, Geraldes et al., based on a post-mortem study, reported that an excess burden of cerebral small vessel disease in multiple sclerosis may explain the link between vascular comorbidity and accelerated irreversible disability (98).
SUMMARY In conclusion, this review compares and summarizes the clinical, pathological, and imaging features of MS and CSVD and the effect of vascular disease on MS. This comparison deepens our understanding of these two diseases and supports correct differential diagnosis and timely, appropriate treatment strategies. AUTHOR CONTRIBUTIONS BW, HaoL, and XL contributed to the conceptualization of the manuscript. BW, HaoL, XL, XH, LG, RF, KangC, and WW contributed to the writing of the manuscript. XH, HaiL, KangnC, ZZ, and LX contributed to the editing and revising of the manuscript. All authors contributed to the article and approved the submitted version.
2022-06-26T15:16:42.171Z
2022-06-24T00:00:00.000
{ "year": 2022, "sha1": "ed8866bbb606b114200720f5934e1a50ad6cc5d3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2022.841521/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "507d6e63a554954321146013352946071b6d93a4", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
202026540
pes2o/s2orc
v3-fos-license
Seeing Is Believing: Nuclear Imaging of HIV Persistence A major obstacle to HIV eradication is the presence of infected cells that persist despite suppressive antiretroviral therapy (ART). HIV largely resides outside of the peripheral circulation, and thus, numerous anatomical and lymphoid compartments that have the capacity to harbor HIV are inaccessible to routine sampling. As a result, there is a limited understanding of the tissue burden of HIV infection or the anatomical distribution of HIV transcriptional and translational activity. Novel, non-invasive, in vivo methods are urgently needed to address this fundamental gap in knowledge. In this review, we discuss past and current nuclear imaging approaches that have been applied to HIV infection, with an emphasis on current strategies to implement positron emission tomography (PET)-based imaging to directly visualize and characterize whole-body HIV burden. These imaging approaches have various limitations, such as potentially limited PET sensitivity and specificity in the setting of ART suppression or low viral burden. However, recent advances in high-sensitivity, total-body PET imaging platforms and the development of new radiotracer technologies that may enhance anatomical penetration of target-specific tracer molecules are discussed. Potential strategies to image non-viral markers of HIV tissue burden or focal immune perturbation are also addressed. Overall, emerging nuclear imaging techniques and platforms may play an important role in the development of novel therapeutic and HIV reservoir eradication strategies. INTRODUCTION Despite the overwhelming success of antiretroviral therapy (ART) in achieving complete or near-complete HIV suppression, residual virus that integrated into host cell genomes prior to ART initiation persists indefinitely.
Blood-derived resting CD4+ T cells comprise one of the best-characterized reservoirs of latent HIV, and integrated viral DNA can exist at frequencies below one copy per million resting CD4+ T cells (1)(2)(3)(4)(5)(6). However, HIV largely resides in organized lymphoid or other tissues outside of the peripheral circulation, and many anatomical regions are inaccessible to routine sampling (7)(8)(9)(10)(11)(12)(13)(14)(15)(16). Only a small amount of tissue from a small number of sites can realistically be obtained from living human participants, and one of the major barriers to the successful design and implementation of HIV eradication or immune-based therapeutic strategies is the limited ability to characterize the tissue-wide burden of HIV in the setting of ART. HIV-1 infection leads to immune activation and inflammation throughout all stages of disease. Markers of T-cell activation remain elevated in blood and lymphoid tissues in HIV-infected individuals, even in the setting of elite control or after years of suppressive ART. Certain immune-privileged environments may be especially important foci of HIV persistence and viral transcriptional activity. For example, CD4+ T-follicular helper cells (TFH) within lymph node B cell follicles have been shown to be highly enriched in HIV-1 DNA, are highly permissive to HIV infection, and are able to produce high levels of replication-competent virus upon ex vivo stimulation (12,(17)(18)(19). TFH cells may be protected from various host immune responses by their location within the unique histological makeup of B cell follicles (12,(17)(18)(19). Even beyond directly infected tissues, persistent HIV has lasting and often profound effects on tissues such as the vascular endothelium, gut, and brain, and leads to sustained, systemic inflammatory responses. Markers of inflammation, coagulation, and immune activation remain elevated in effectively treated HIV infection and are strong predictors of mortality and non-AIDS events, as demonstrated in a variety of cohorts (20)(21)(22)(23). As a result, there are direct and indirect consequences of HIV infection that are clinically relevant, even in the setting of treated and suppressed HIV. For example, HIV has been associated with increased cardiovascular disease, neurological disorders, and various hematological and solid-tumor malignancies (24). The direct and indirect impact of persistent HIV on immune activation, systemic inflammation, and clinical comorbidities has led to interest in positron emission tomography (PET) and other molecular imaging techniques as tools to better understand the whole-body burden and consequences of HIV infection. Molecular imaging has been critical for the diagnosis, treatment, and management of various malignancies and other diseases. Similar modalities have the potential to inform the design, implementation, and analysis of immunotherapies and other interventions to reduce HIV reservoir burden, lower inflammation, and thus reduce HIV-related morbidity. NUCLEAR IMAGING APPROACHES TO HIV PERSISTENCE AND HIV-RELATED MORBIDITY The Molecular Imaging Toolbox Innovative strategies to perform molecular imaging, from microscopic visualization and characterization techniques at the tissue level to whole-body in vivo anatomical and functional imaging incorporating techniques such as SPECT and PET, are rapidly being developed for a wide range of diseases, including HIV and other chronic infections (see Table 1).
Ex vivo molecular imaging at the cellular and tissue level has already provided many important insights into HIV pathogenesis, such as identifying foci of residual infected cells in the setting of ART and characterizing the immunological microenvironments of such foci (58)(59)(60)(61)(62)(63)(64)(65). These studies have focused largely on gut, lymphoid, and central nervous system tissues but may involve a wide variety of other scenarios, such as characterizing tumor microenvironments and quantifying vascular inflammation. However, the focus of this review covers in vivo nuclear medicine approaches, with an emphasis on novel PET imaging approaches to HIV persistence. Nuclear Imaging Approaches to HIV Infection Common nuclear imaging approaches that have been applied to HIV infection for over 20 years include SPECT/CT and PET/CT imaging (44). These modalities involve the detection, anatomical localization, and kinetics of radioactive tracer uptake, with SPECT detecting single-photon gamma emission and PET measuring positron emission. Clinically, these nuclear imaging modalities are commonly used to diagnose various malignancies and to provide information on tumor burden, sites of metastasis, disease staging, and response to treatment. They are also used to differentiate benign, metabolically quiescent tissues from metabolically active foci, which may represent active infections, reactive lymphoid tissues, vascular inflammation, and more. As a result, nuclear imaging has been applied in the setting of HIV infection and HIV-related comorbidities. HIV imaging studies are diverse and have involved numerous tracers and measured outcomes. As summarized in Table 1 and below, PET imaging has been used to (1) measure cellular metabolic activity in a variety of different clinical scenarios (e.g., 18F-FDG); (2) carry out anatomical and functional neuroimaging involving various metabolic measures, cerebral blood flow, dopamine transport, and cellular activation in the setting of HIV-associated neurological disease (HAND), central nervous system malignancies, and opportunistic infections; (3) determine ART-related toxicities; (4) quantify changes in various immune cell types, such as CD4+ T-cell distribution in the setting of immunomodulatory therapies in animal studies; and (5) characterize the effects of HIV on cardiovascular disease. A recent PubMed search using HIV or AIDS and PET yielded 537 references, averaging about 10 articles per year. Over the past several years, there has been increased interest in the development of HIV-specific tracers to provide direct anatomical localization and quantification of the burden of infection. In vivo studies are currently taking place using techniques such as radiolabeling monoclonal antibodies (mAbs) specific for HIV or SIV envelope proteins (66,67). In addition, traditional nuclear medicine approaches, such as FDG-PET, have been applied to examine HIV persistence in the setting of active infection, HIV controllers (i.e., those who are able to suppress virus without ART), and ART-suppressed individuals (see discussion below). These immunoimaging approaches have the potential to significantly improve our understanding of where and how residual viral replication and HIV-related inflammation reside in the setting of suppressive therapy.
More specifically, the diverse nuclear imaging toolbox may prove useful in people living with HIV to: • Understand the temporal changes that occur within the whole body as a function of disease status, ART use, viral recrudescence following cessation of therapy, or foci of HIV reactivation during a "shock and kill" approach to HIV remission. Radiopharmaceutical, Pharmacokinetic, and Nuclear Imaging Considerations The utility of a specific nuclear imaging strategy is tightly linked to the properties of the applied radiopharmaceutical tracer. These properties include radiologic dose and exposure, decay rate, tissue uptake, drug metabolism, and excretion. PET tracers involve a radiolabeled molecule as a source of positrons. These isotopes have a wide range of radiological half-lives (t1/2). Decay rates range from minutes to many days, as summarized in Table 2, and ideally are matched to the pharmacokinetics of the radiolabeled tracer. For example, mAbs may take several days to reach target tissues and bind to specific targets, therefore requiring longer-lived isotopes such as zirconium-89 (t1/2 = 78 h), whereas FDG uptake (fluorine-18, t1/2 = 110 min) is rapid and glucose is internalized relatively quickly by metabolically active cells. Care in matching the appropriate radioactive isotope to the target drug will be critical in the rational design and implementation of HIV-specific imaging agents. In addition, human studies are limited by the total radiation exposure to a participant, leading to challenges in administering doses high enough for clinically meaningful target-to-background contrast, restricting the frequency of tracer administration, and potentially limiting longitudinal imaging studies. Furthermore, target densities may be quite low in various clinical scenarios, such as ART-suppressed HIV infection, where viral proteins may be expressed in very low amounts or frequencies on cells or in tissue, if at all. As a result, there are expected to be significant challenges in increasing signal-to-noise ratios in these participants, which highlights the continued need for non-viral-specific tracers to provide information on the location, burden, and immunological impact of persistent HIV infection.
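The matching of isotope half-life to tracer pharmacokinetics described above can be made concrete with the standard decay relation A(t)/A0 = 2^(-t/t1/2). The following minimal sketch uses the half-lives quoted above (fluorine-18 ≈ 110 min; zirconium-89 ≈ 78 h); the 72-hour time point is an assumed, illustrative timescale for mAb tissue localization, not a figure from this review.

```python
def remaining_fraction(t_hours, half_life_hours):
    """Fraction of initial activity remaining: A(t)/A0 = 2 ** (-t / t_half)."""
    return 2.0 ** (-t_hours / half_life_hours)

# Half-lives quoted in the text above, expressed in hours.
half_lives_h = {"fluorine-18": 110.0 / 60.0, "zirconium-89": 78.0}

t = 72.0  # hours; an assumed ~3-day timescale for mAb tissue localization
for isotope, t_half in half_lives_h.items():
    frac = remaining_fraction(t, t_half)
    print(f"{isotope}: {frac:.2e} of initial activity remains at {t:.0f} h")
# zirconium-89 retains roughly half of its activity at 72 h, while fluorine-18
# has decayed away entirely, illustrating why long-circulating mAbs must be
# paired with the longer-lived isotope.
```

Conversely, pairing a short-lived isotope with a rapidly internalized tracer such as FDG keeps total radiation exposure low, which matters given the dose constraints noted above.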
PET Imaging in HIV Infection-Cellular Metabolic Activity, Immune Activation, and HIV Persistence In the research setting, PET/CT has commonly been used in conjunction with FDG, which provides a measurement of glucose metabolism as a surrogate for inflammation, as FDG is taken up at substantially higher levels by inflammatory cells and macrophages in tissue (68,69). FDG-PET imaging of HIV was reported in the mid to late 1980s, with monitoring of HIV pre- and post-AZT monotherapy (combination ART was not widely available until the mid-1990s) and workup of HIV-associated neurological disorders along with staging of malignancies (44,70,71). In addition, FDG-PET studies have involved anatomical localization of HIV-associated immune activation, correlating lymph node inflammation with disease stage, and associating areas of high FDG uptake in non-human primates with productive SIV infection (72)(73)(74)(75)(76)(77). Since then, studies in the general population have demonstrated that arterial inflammation assessed using FDG-PET/CT can predict future cardiovascular (CV) events (78). Furthermore, lipid lowering using statin therapy, along with thiazolidinedione therapy, has reduced arterial FDG-PET uptake in several clinical trials (79)(80)(81)(82)(83). Our group has also recently reported that a mAb to IL-1β significantly reduced inflammatory markers along with arterial and bone marrow metabolic activity assessed using FDG-PET/CT in the setting of treated HIV (84). Studies involving animal models and humans have shown that both relative and absolute FDG uptake within inflamed tissues (e.g., atherosclerotic plaques) correlate with the degree of immune cell infiltration (12,(17)(18)(19)(85)(86)(87)(88)(89). More recently, FDG-PET has been applied to assess altered glucose metabolism in HIV-associated inflammation and has demonstrated that HIV patients have higher arterial inflammation that is associated with sCD163 (87). Initiation of ART reduced bone marrow activity but did not affect arterial inflammation; furthermore, metabolic activity on FDG-PET/CT prior to ART was predictive of the development of immune reconstitution inflammatory syndrome (90). Subsequently, our group showed that HIV-infected individuals on ART have higher metabolic activity, as measured by FDG-PET/CT, in the arterial vasculature and lymph nodes than matched uninfected controls, and that these markers correlated with measures of HIV persistence in peripheral blood (91). Overall, lymph node FDG activity was significantly associated with levels of integrated HIV DNA measured in peripheral blood mononuclear cells (91). This study suggests that PET-based imaging of inflammation or immune activation has the potential to provide information regarding regional areas of HIV persistence. However, FDG is likely taken up at sites of immune activation/inflammation even in tissues without HIV-persistent foci (e.g., the arterial wall, which may be influenced by monocyte activation); therefore, more specific markers of T-cell trafficking and targeting of infected tissues are needed. Recently, advances in molecular imaging of immune activation by PET have made it possible to use non-invasive strategies to monitor immune activation with greater T-cell specificity than FDG. Increased activity of nucleoside salvage pathways has been associated with the proliferation of adaptive immune cells (92). In preclinical models, a PET probe targeting the deoxycytidine salvage pathway was shown to localize to focal sites of immune activation (93) and to accumulate predominantly in proliferating T cells (94). Recently, a radiofluorinated imaging agent, [18F]F-AraG (95), was synthesized with the goal of development for human use. F-AraG is a fluorinated purine derivative with selective T-cell uptake. A water-soluble AraG prodrug, nelarabine, is FDA-approved for the treatment of relapsed T-cell acute lymphoblastic leukemia and T-cell lymphoblastic lymphomas (96,97). [18F]F-AraG is a high-affinity substrate for deoxyguanosine kinase (dGK) and a low-affinity substrate for deoxycytidine kinase (dCK). Both dGK and dCK are over-expressed in activated T cells. Blocking the expression of either dGK or dCK reduces [18F]F-AraG uptake, while over-expression of either leads to increased accumulation of [18F]F-AraG. T-cell-specific tracers such as these may play an important role in imaging HIV persistence, with the potential to be more specific to regional areas of immune perturbation resulting from HIV replication or residual viral transcriptional activity.
Neuroimaging Microglial Activation in HIV Infection and Related Neurologic Disorders PET imaging using tracers specific for activated microglial cells is another example of how non-specific markers of increased immune activation have been successfully applied to study HIV-related comorbidities in the central nervous system. More specifically, molecules have been developed that target the 18-kDa mitochondrial translocator protein (TSPO), which shuttles cholesterol into mitochondria for steroid biosynthesis (45)(46)(47)(48)(49)(50). TSPO is upregulated in activated microglia and, as a result, has been used in neuroimaging to determine differences between HIV-infected and uninfected individuals and to characterize differences between various HIV clinical disease manifestations, including HAND (50). PET imaging with TSPO-specific tracers appears to be more specific to innate immune activation than FDG (45) and has led to some important insights into central nervous system persistence of HIV. For example, ART-suppressed individuals without cognitive impairment have been observed to have chronically elevated microglial activation (48), whereas other studies showed that TSPO levels correlated with worse executive function and other HIV-associated cognitive vulnerabilities (46,49). Despite varying results, complicated by differing experimental designs and definitions of cognitive impairment (50), there is continued interest in using PET-based immune activation approaches to study the direct impact of residual HIV infection in the setting of suppressive ART. Antiretroviral Drug Labeling The question of whether there is ongoing replication in various tissue sanctuaries in the setting of otherwise suppressive ART remains controversial. For example, there is a paucity of robust phylogenetic evidence for evolution of HIV sequences or development of resistance mutations in suppressed individuals over time, and ART intensification studies have not demonstrated reductions in low-level, residual plasma HIV RNA levels (98)(99)(100)(101)(102)(103). Many of these studies were performed in peripheral blood or were limited by the depth of sequence coverage or the tissues sampled. Other studies have shown potential indirect evidence of replication, such as an increase in unintegrated episomal HIV DNA in blood and cell-associated RNA in tissue (104)(105)(106). One topic of interest is the extent to which various ART drugs reach or have activity in various anatomical tissue compartments (107), potentially creating viral sanctuaries that permit low-level replication or, at the very least, allow higher levels of viral transcriptional activity (9,106,108). Transcriptionally active cells may also lead to chronic immune activation and inflammation. However, sampling all of the potential sites of persistent HIV for concomitant ART concentrations and viral reservoir persistence is not practical. It is also difficult to obtain information on the kinetics of drug distribution within tissues outside of peripheral blood. As a result, PET-based imaging of radiolabeled antiretroviral drugs may play an important role in pinpointing areas of poor ART penetration, and therefore important sites of persistent HIV burden and potential foci of viral rebound following ART cessation. Imaging studies using fluorine-18-labeled raltegravir (a strand-transfer integrase inhibitor) are ongoing (NCT03174977) and have the potential to locate areas of HIV persistence.
PET Immunoimaging of CD4+ T Cell Dynamics in SIV Infection CD4+ T cells are the main target of HIV infection. Active disease leads to a subsequent and profound reduction in CD4+ lymphocytes throughout the blood and tissues. While counts may improve in many individuals on ART, lasting perturbations of tissues such as the lymph nodes and gut-associated lymphoid tissues are common (8,(109)(110)(111)(112)(113)(114). As a result, there has been interest in CD4+ T-cell-specific PET-based imaging techniques to follow CD4+ T-cell dynamics and recovery following various interventions. A recent investigation of the use of an α4β7 mAb in acute SIV infection in macaques demonstrated sustained virological control in mAb-treated monkeys. While these results have yet to be confirmed, the study involved PET-CT imaging using a 64Cu-labeled F(ab′)2 antibody against CD4. The study demonstrated repopulation of CD4+ T cells in a number of tissues, including gut, which was unexpected based on the original study hypothesis that the α4β7 mAb would interfere with CD4+ T-cell trafficking to these areas (67). This investigation is an example of how imaging cell-specific markers may provide critical information regarding whole-body responses to immune-based or other therapies for a wide variety of diseases. For example, CD8+ T-cell responses could theoretically be tracked over time in response to interventions such as vaccines or therapies that block immune checkpoints and reverse T-cell exhaustion (e.g., anti-PD1 therapy). PET-Based Direct Imaging of SIV Infection As above, PET-based imaging techniques have the potential to delineate the tissue burden and sequelae of HIV infection. PET/CT imaging approaches using a 64Cu-labeled SIV gp120-specific mAb clone (7D3) have recently been applied to assess SIV envelope protein expression in infected macaques with varying degrees of viremic control and in the setting of early initiation of ART (66). Results from this pivotal study demonstrated that areas of active SIV replication can be visualized and distinguished from non-selective tracer uptake in uninfected animals, with some SIV-related signal detected several weeks following ART initiation. As would be expected, persistent SIV protein expression localized predominantly to lymphoid-rich areas (66). The study also showed that anatomical regions that are often neglected by in vivo tissue sampling, such as nasal-associated lymphoid tissue, may play an important role in initial HIV seeding and subsequent persistence. A follow-up sub-study of anti-α4β7 treatment in SIV-infected macaques incorporating the radiolabeled SIV gp120 mAb demonstrated a reduction in SIV protein expression in various tissues, including the lung, spleen, and lymph node chains (89). These data suggest that direct SIV or HIV imaging radiotracers have the potential to play a critical role in characterizing HIV persistence and the response to curative strategies. As a result, there is currently a high level of interest in translating direct HIV imaging techniques to humans. However, immunoimaging in SIV infection does have several potential limitations. For example, mAbs or antigen-binding fragments may have heterogeneous tissue distribution in vivo, and humanization or simianization may lead to immunogenicity concerns (115).
Finally, SIV or HIV antigen-specific PET imaging approaches do not allow direct discrimination between actively virus-producing cells, cells expressing SIV or HIV antigens at the surface, viral particles, or simply viral antigen trapped by non-infected cells. Human HIV-Specific PET Imaging: Challenges and Promises Despite the early success of direct SIV-specific imaging in the first non-human primate PET/CT imaging studies, there are several challenges in adapting these techniques to human imaging. For example: 1. Non-human primates are typically infected with a clonal SIV strain with known binding affinity to the gp120-specific mAb. HIV populations in infected humans can be extraordinarily diverse, with both minority and majority clones capable of harboring resistance mutations to the clinically available HIV-specific mAbs, which were previously developed as therapeutic broadly neutralizing antibodies (116)(117)(118)(119)(120)(121)(122). As a result, a wide range of mAb binding affinities is expected between study participants, which will require implementation of mAb resistance testing and careful consideration in data analysis and interpretation. 2. HIV gp120 expression is expected to be very low in infected tissues of participants on suppressive ART. As a result, the signal-to-noise ratio may be insufficient to visualize areas of persistent infection. However, PET imaging may be particularly useful during early infection and for characterizing foci of early tissue HIV recrudescence following cessation of ART; incorporating PET imaging approaches into studies involving analytical ART interruptions is of utmost importance. 3. mAbs do not readily cross the blood-brain barrier. Barring inflammation or major perturbations of the blood-brain barrier, imaging potential foci of HIV in the central nervous system will be challenging. As a result, the development of small-molecule HIV-specific tracers with improved penetration of the central nervous system and other immune-privileged tissues is urgently needed. 4. Longitudinal human trials are limited by radiation exposure; therefore, multiple imaging time points may be difficult to incorporate into a variety of studies. This may be a particular issue when implementing tracers conjugated to radioisotopes with longer half-lives, which are likely to be required given the kinetics of mAb uptake, as discussed above. These limitations provide the rationale for incorporating more than one radiotracer in human studies. For example, administering an HIV-specific mAb tracer following PET imaging using a non-viral-specific marker of inflammation or immune activation may provide important insights into the relationship between ongoing immune perturbations and HIV persistence. Fortunately, several strategies exist or are in development to address these challenges using radiolabeled mAbs in PET imaging. For example, smaller affibody proteins or antibody fragments (e.g., minibodies, nanobodies, and single-chain variable fragments) (123)(124)(125) may have improved tissue penetration and favorable pharmacokinetics for imaging low-level HIV protein expression in various tissues. There is also a high level of interest in the development of dual- or multi-targeted molecules for immunoimaging (126) and in engineering antibodies to have greater anatomical barrier penetration.
One exciting strategy for increasing antibody delivery across the blood-brain barrier is the development of bispecific antibodies or designer molecular shuttles that bind to the transferrin receptor (127)(128)(129)(130). Animal studies are promising, and these approaches can theoretically be applied to HIV-specific mAbs or antibody fragments. The development and implementation of very-high-sensitivity, total-body PET scanners, such as the EXPLORER platform (131)(132)(133), are also likely to overcome some of the signal-to-noise limitations of imaging HIV-infected cells in ART-suppressed individuals or those with low overall HIV envelope protein expression. These platforms are just now coming online for in vivo use and have the potential to revolutionize immunoPET imaging. Approximately 1% of the photons emitted during traditional PET scanning are detected, given the limited axial field of view and body length that can be imaged at one time. The field of view in EXPLORER is extended to the entire individual by using a large number of parallel detectors that simultaneously detect photon emission (134). Early data suggest that EXPLORER PET provides a >40-fold gain in effective sensitivity and a >6-fold increase in signal-to-noise ratio compared with standard PET scanners (135). The first-in-human imaging studies have recently been completed (131) and offer an opportunity to significantly advance PET-based imaging of HIV reservoirs. Other emerging technologies include solid-state digital photon counting PET systems, such as those that use solid-state silicon photomultiplier technology (136). These systems have led to improvements in signal-to-noise ratios and enhanced image contrast (137,138) and may play an important role in improving PET imaging in HIV infection.
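The reported sensitivity and SNR gains quoted above are mutually consistent under a simple Poisson counting model, in which SNR scales roughly with the square root of detected counts. The following back-of-envelope check is an idealization that ignores reconstruction, scatter, and random coincidences; it is offered only to show that the two reported figures agree.

```python
import math

# Under Poisson counting statistics, SNR ~ sqrt(detected counts), so a
# k-fold gain in effective sensitivity implies roughly a sqrt(k)-fold SNR gain.
sensitivity_gain = 40.0  # the ">40-fold" effective sensitivity reported above
expected_snr_gain = math.sqrt(sensitivity_gain)
print(f"Expected SNR gain: ~{expected_snr_gain:.1f}x")  # ~6.3x, in line with ">6-fold"
```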
Limitations of in vitro Modeling of HIV-Specific Immunoimaging Techniques HIV or SIV envelope-specific PET immunoimaging strategies are likely to be semiquantitative at best. For example, PET/MR or PET/CT imaging techniques reveal relative changes in mAb tracer uptake in various tissue regions of interest (e.g., lymph node tissues, gut) before or after initiation of ART or immunotherapy (66,67). However, questions arise as to what the intensity of the PET signal means in terms of the actual number of infected, HIV or SIV envelope-expressing cells. In other words, can PET imaging be used to directly quantify the burden of HIV in vivo? One solution that is often proposed is to perform ex vivo studies involving PET imaging of three-dimensional clusters of known numbers of infected and uninfected cells (either laboratory-infected or derived directly from infected individuals) in order to determine the sensitivity of PET in detecting various levels of HIV protein expression. While appealing, such studies are limited by the multitude of variables within living organisms that determine tracer uptake and PET detection. Modern PET scanners are sensitive and able to detect tracer-derived positron emission events above normal background radiation (139). Simply labeling a cell or a group of cells that express HIV envelope will likely lead to a detectable signal. However, regardless of what threshold number of infected cells can be detected in isolation (e.g., 10, 100, or 1,000 in a sub-centimeter cluster), these types of ex vivo experiments are unable to account for many biasing factors. For example, radiotracers are often delivered in microdoses, with or without a specified amount of unlabeled antibody. The distribution of these microdoses to various tissues depends on many variables, such as blood flow dynamics, tissue fibrosis, and nonspecific tracer uptake, to name just a few. In addition, background radiation is given off by tracers in the macro- and microcirculation and by organs involved in tracer metabolism and excretion. Coupled with the need for attenuation correction and tomographic reconstruction in image acquisition and analysis, it will likely be difficult to correlate the readout of ex vivo PET sensitivity studies with actual uptake in living organisms. In addition, each individual has different metabolic and physiologic dynamics (e.g., liver function, cardiac output, body surface area and mass, renal glomerular filtration rate, local microanatomical variations, etc.). As a result, performing parallel in vivo tissue biopsy studies alongside PET imaging may be the most useful strategy for providing a quantitative understanding of the relationship between radiotracer uptake signal and direct cellular measures of HIV burden or cell activation state. CONCLUSIONS PET imaging offers several exciting strategies to characterize HIV and HIV-related comorbidities. Despite the limitations of traditional nuclear imaging techniques in identifying HIV-infected cells in vivo, proof-of-concept SIV non-human primate studies demonstrate that various immunoimaging approaches have the potential to enhance HIV curative and persistence research. Signal-to-noise issues are likely to limit imaging in ART-suppressed individuals, in whom cell-surface HIV protein expression is expected to be low. However, novel approaches such as high-sensitivity, total-body EXPLORER imaging, PET imaging during latent HIV reservoir reactivation or analytical treatment interruption (ATI), and the development and implementation of non-viral markers of HIV persistence have the capacity to overcome these limitations and provide important tools for the development of novel therapeutic strategies. In addition, technical and data processing advancements may allow for combination imaging approaches, from tissue-level microscopy to whole-body PET imaging. AUTHOR CONTRIBUTIONS TH, PH, and HV wrote the manuscript and obtained funding. FUNDING Funding was provided by grant support from the Delaney AIDS Research Consortium (DARE) UM1AI126611, amfAR Institute for HIV Cure Research, Merck & Co., and K24AI112393 (PH). Additional resources were provided by CellSight Technologies.
2019-09-09T21:21:47.344Z
2019-09-12T00:00:00.000
{ "year": 2019, "sha1": "db2b2e262a7082fd95c3f3743e4c4fee5cc58bf5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2019.02077/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f4be271f4f1daf7c293d991b74728e7442f03735", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54985328
pes2o/s2orc
v3-fos-license
Where Shall We Two Meet, in East or in West: When Po-shen Lu's The Witch Sonata-Psalm of Macbeth Encounters William Shakespeare's Macbeth By using tai-yu (min-nan-hua, the Taiwanese local language) to stage William Shakespeare's Macbeth, Po-shen Lu produced an experimental performance in the southern part of Taiwan in 2003. When producing Antigone in 2001, Lu was challenged by his critics in three aspects: (1) the tradition of tai-yu theatre at Tainan Jen Theatre versus that of Western plays, (2) audience reception in Taiwan, and (3) the advantages and disadvantages of integrating tai-yu with Western classic texts. In spite of these criticisms of his theatrical productions, Lu has continued helping Tainan Jen Theatre transform into a professional theatrical troupe since becoming its artistic director in 2002. By analyzing how and why Lu staged The Witch Sonata-Psalm of Macbeth in the socio-historical context of intercultural adaptation, I propose to re-evaluate Lu's artistic contribution to theatrical development in the southern part of Taiwan. I would argue that Lu is not only challenging Taiwanese readings of Shakespeare but also exploring the possibilities of tai-yu's theatricality, with a view to bringing new life to Taiwan's intercultural theatre. Introduction After Po-shen Lu produced Antigone in 2001, Tainan Jen's Western plays produced in tai-yu met with much applause. Encouraged by this, Lu continued a project of staging Western classic texts in tai-yu. 1 In 2003, Lu, the new artistic director of Tainan Jen Theatre, used tai-yu (min-nan-hua, the Taiwanese local language) to stage William Shakespeare's Macbeth. While producing these Western plays in tai-yu, Lu has been challenged not only by Taiwanese audiences' reception but also by Tainan Jen Theatre's tradition of tai-yu theatre. In this project, Lu tries to create a highly successful intercultural performance and to explore the possibility of tai-yu theatricality. However, is such intercultural theatre acceptable in Taiwan? In order to re-evaluate Lu's artistic contribution to Taiwanese contemporary theatre, we first undertake a study of the socio-historical context of Tainan Jen Theatre to discuss how the troupe developed its tai-yu theatre. In the second part, we focus on Taiwanese audience reception of Lu's The Witch Sonata-Psalm of Macbeth. In the third section, we examine the advantages and disadvantages of integrating tai-yu with Western classic texts. Tainan Jen Theatre and Tai-Yu Theatre The development of Tainan Jen Theatre can be traced to the "little theatre" movement of the 1980s. 2 Although the concept of "little theatre" was introduced by Man-kuei Li in 1960, the movement did not flourish until the 1980s. In the mid-1980s, Taiwan's little theatres started to develop in divergent directions. Some focused on expressions of individualism or personal emotions. Some aimed at becoming professional theatre companies. 3 Others turned into "community theatres." Tainan Jen Theatre, once considered a community theatre, was established as the first modern theatre troupe in the southern part of Taiwan after World War II. In 1987, Tainan Jen Theatre was founded by Rev.
Don Glover, a Catholic priest, and Jui-fang Hsu, the former leader of the troupe. At the very beginning, the troupe was named "Hwa Deng Theatre Troupe" (華燈劇團), and its members were mostly Tainan locals. When the troupe was founded, none of the members was familiar with theatre. With the help of graduates of arts colleges, the troupe gradually took the shape of a theatre company, and its participants learned how to run an acting company. In 1992, the troupe was included in a three-year project for developing community theatre conducted by the Council for Cultural Affairs (CCA). During these three years, the organization of Hwa Deng became more stable, and it grew into the largest and best-organized troupe in the Tainan area. In 1997, the name "Hwa Deng" was changed to the current one, Tainan Jen, to mark the tenth anniversary of the troupe's founding. Meanwhile, the troupe changed from an amateur community theatre into a professional one. Since then, the troupe has not only produced new plays and theatrical adaptations but has also devoted itself to projects on Theatre-in-Education (TIE) and Youth Theatre that give teachers and students a chance to receive theatrical training. Near the end of 2002, Jui-fang Hsu resigned from the troupe, and Lu became the new director in charge of the company's development. From then on, the content of Tainan Jen's performances changed, too. From 1987 to 1992, Tainan Jen's productions focused on localism, such as domesticity and Taiwanese daily life. After the troupe joined the CCA project in 1992, the quality of the performances improved and their content drew closer to modern life in Taiwan. Believing in varied performance styles, Hsu invited Lu to stage Sophocles' Antigone in 2001. For the audience, it was surprising that Lu used tai-yu to produce an ancient Greek tragedy, for the first time, at a Tainan historical site, the Koxinga Shrine.
After that, many critics gave Lu positive comments. Chien-chung Lu, one of the critics, observes: Different from the historical and cultural background of ancient Greece, the pathos in Taiwan's contemporary life is not as strong as that in Greek tragedy. However, the tone and the language of Tainan Jen's Antigone give us the possibility of intercultural interpretation. 4 Yu-hsiu Liu holds a similar attitude towards the play: Tainan Jen's Antigone is one of the most successful adapted plays in Taiwan. On one hand, it does not distort the original. On the other hand, it skillfully fuses the local color of Taiwan with Western plays… Such a play shows us a successful example of how Taiwanese audiences can gain an international perspective through theatre. The reason that Lu used tai-yu to stage Macbeth is related to the practice of the sound spectrum, such as the performers using musical instruments (see Figure 1) and acting to the beats. The sound spectrum, considered a form of theatrical training, helps the actor find his or her way to play the character precisely. By figuring out and controlling the voice and body language of the character, the actor can naturally unearth the role that the playwright intended to portray in the play. Lu asserts: The notion of the sound spectrum comes from Constantin Stanislavsky's Physical Actions and Vsevolod Meyerhold's Bio-Mechanics. Actors can find out the characters' sound spectrum by playing the characters' personalities and desires. After that, actors can control the tempos of the characters' sounds and body languages, and Shakespeare's lines and verses can be interpreted precisely. Thus, the sound spectrum helps not only the actors to play the characters of the play precisely but also the audience to enjoy this kind of audio imagination. Furthermore, the audience can experience the dramatic illusion and images of the play. (Shen, 2004, pp. 73-74) Thus, in Lu's The Witch Sonata-Psalm of Macbeth, tai-yu not only helps the actors perform the characters correctly but also maintains the beauty of Shakespeare's verses. Those who do not understand tai-yu need not depend entirely on the language; instead, they can enjoy its sound. And those who understand tai-yu will be surprised that tai-yu can bring new life to the performance. (Shen, 2004, p. 52) Audience Reception in Taiwan After The Witch Sonata-Psalm of Macbeth was produced in 2003, it impressed the Taiwanese audience with Lu's daring experiments in theatrical design. When the audience enter the auditorium, the floor is covered with rice husks painted red (see Figure 2). With the changing lighting effects, this creates an uneasy, horrifying atmosphere, especially when the witches appear or Duncan is murdered. In addition, Lu employs vocals, a piano, and a percussion instrument to create mood music. Lu also applies many performance techniques, such as the use of stilts and a shadow show, to portray Lady Macbeth's desire and ambition (see Figure 3). The way Lu uses the androgynous witches is also impressive. The audience seem impressed by the visual and sound effects, but opinions on the use of tai-yu in the play are divided.
Most of the positive comments focus on the sound design of the play. For instance, Chien-hung Lan thinks that Lu has successfully presented the visual and sound effects of the performance. Wen-lung Chang also exclaims: I adore those players for their excellent body language and the power and strength of their voices. The audience have been charmed by Lu's production even though tai-yu may trouble their understanding of the play. As Chang's comment suggests, the audience may enjoy the dramatic illusion that Lu created, but the problem of integrating tai-yu with Western plays cannot be ignored. In Taiwan, Mandarin Chinese is the official language, and Chinese characters are the script in general use. Over the past decades, whenever Western plays were adapted for Taiwan's theatre, they were performed in Mandarin Chinese, and the translated drama texts were written in Mandarin Chinese, too. However, when a Western play is translated into tai-yu, the meaning of the words may be twice removed from the original text, because tai-yu does not have its own writing system and readers approach tai-yu through Chinese characters. Ling-ling Shen also points out the problem of language in this play: The gap between tai-yu and Mandarin Chinese must be bridged by phonetic symbols before the meaning of the words can be generated. However, these phonetic symbols cannot be read by everyone. Although the play is a contribution to translation, the translator has to make efforts to find proper words for the translation. (2004, p. 75) As for the problems of the translation that Chou used in the play, I take Banquo's words as an example to examine how Taiwan's audience evaluates the play. In scene one of The Witch Sonata-Psalm of Macbeth, when Macbeth and Banquo return from the battlefield, they meet three witches. After being informed that he will become a future king, Macbeth is shocked, and Banquo says, 知iáⁿ toh一粒會puh-íⁿ,toh一粒bē, 做恁kā我講,我bē求恁恩賜,mā m驚恁ià-hūn--我。(The Witch Sonata-Psalm of Macbeth, Scene I) Good sir, why do you start; and seem to fear Things that do sound so fair? I' the name of truth, Are ye fantastical, or that indeed Which outwardly ye show? My noble partner You greet with present grace and great prediction Of noble having and of royal hope, That he seems rapt withal: to me you speak not. If you can look into the seeds of time, And say which grain will grow and which will not, Speak then to me, who neither beg nor fear Your favours nor your hate. (Mac, Act I, scene III) As the quotation shows, Chou's translation combines Chinese characters with Roman phonetic symbols, but readers may be confused by these unfamiliar words if they have never learned Roman phonetic symbols. In addition, the words and idioms that Chou uses are not colloquial, even though Lu tries to present Shakespeare's blank verse in so-called poetic tai-yu. Thus, the tragic ending of the play may turn into comic relief because of the gap between the languages.
When Macbeth laments the death of Lady Macbeth and his doom in Act V, scene v, and says, "To-morrow, and to-morrow, and to-morrow," the translation "明á再,koh明á再,koh再明á再" made the audience laugh. The reason that the audience may laugh at the tragic end of Lu's The Witch Sonata-Psalm of Macbeth is that the translated words cannot convey the messages of the original. That is to say, the audience cannot be moved or grasp the deeper meaning of the performance. Wan-yi Yang, one of the audience members, remarks: Although the theatrical elements of the play are abundant, it is a pity to see a performance that could not reach the deeper sensibility of the original play. Briefly speaking, there is a gap between the emotional expression and the structure of the play. 9 Besides, when actors are not familiar with the language they speak, it is difficult for them to express their feelings, and the audience consequently cannot get strong feelings from the performance. Oscar G. Brockett also points out: Figures of speech are likely to seem contrived and bombastic if the actor does not appear to be experiencing feelings strong enough to call forth such language spontaneously. Shakespeare's plays may be damaged in performance if actors do not rise to the emotional demands of the poetry. Therefore, the very richness of expression can be a stumbling block for both performer and reader. (2004) Advantages and Disadvantages in Integrating Tai-yu With Western Classic Texts Before discussing the advantages and disadvantages of integrating tai-yu with any Western classic text, the qualities and characteristics of tai-yu should first be recognized and identified. Tai-yu, like any other dialect in the world, vividly embodies the regionalism of a certain locality. Even though Taiwan is not geographically vast and people who speak tai-yu have little difficulty understanding other tai-yu speakers, tai-yu still varies slightly from place to place, whether in accent or in usage of words. When hearing certain accents or figures of speech, one may easily recognize the speaker's cultural or social identity. Therefore, tai-yu functions as a cultural index of one's upbringing and background.
However, in Taiwan, tai-yu can do more than serve as a cultural index showing one's cultural background; it furthermore works politically, as a collective cultural heritage that shapes a strong ethnic consciousness among tai-yu advocates. Although more than half of the Taiwanese speak tai-yu, the language, having been oppressed for long for political reasons, was never recognized as one of the official languages used in Taiwan, or at least as a common cultural fact, until the last decade of the 20th century. With the rising awareness among people to revive local cultures and the supportive encouragement of government grants, tai-yu suddenly turned out to be representative of a long-lost cultural icon victimized by colonialism and dictatorship. Tai-yu has become a strong currency among languages, marking a production as culturally and politically correct. It is no less a huge rebound than a prisoner finally being released from a dominating and confining political censorship; many productions are performed in tai-yu, and some performers who never speak tai-yu must master this new skill to demonstrate their liberal-mindedness to all races and explore more selling points. This change in language use and performing technique came to its high tide in the first decade of the 21st century. However, the change has come so quickly that we may start wondering if there might be some problems in accommodating it.

As a matter of fact, tai-yu never really disappeared from theatrical work, though we seldom saw theatrical productions in full tai-yu before 1990. Tai-yu has often been associated with local peasants or unsophisticated country life. Thus, when Taiwan's society finally produces plays in full tai-yu, it asserts firmly that the use of tai-yu is a cultural and social fact; moreover, it marks the recognition of tai-yu as one of the official languages: no longer marginal! Furthermore, in terms of its trait of freshness (full tai-yu had seldom been used in theatre before the 1990s) and localism, tai-yu does much in inviting more possibilities in representing a foreign work to a Taiwanese audience. Take Antigone (2001) for example: Lu is very much aware that those he faces are none other than Taiwanese viewers. It is quite compulsory for him to help his audience relate to a play that is totally foreign to them. Language could function as a bridge. Using tai-yu could not only be one of many ways to communicate with the audience who could not read Shakespeare's original texts in English, but also create a strong sense of freshness and dramatic surprise on stage.

Moreover, in order to preserve the Shakespearian style of the blank verse, the translator, Ting-pang Chou, also adopts a similar style and has the performers recite tai-yu verses on stage. As a result, many compliments to this theatrical challenge of mixing tai-yu and the Shakespearian verse never cease, seeing this theatrical invention as a way to explore more potential in applying tai-yu to theatrical forms.
However, staging Macbeth in tai-yu is a double-edged sword, which not only diminishes the hindrance of the language gap but meanwhile creates new gaps in translation and adaptation. As we have mentioned before, though tai-yu has been accepted widely because of the improvement of democracy in Taiwan, the late prohibition against speaking tai-yu still has a strong residual pernicious influence over the young generation. To the young generation, tai-yu might be as foreign as Greek and hard to understand. Therefore, a theatrical work in full tai-yu could be a "politically local" production, but "culturally foreign" to young viewers. Moreover, apart from young viewers, young performers also encounter similar problems. Many audience members find it difficult to understand young performers' tai-yu, since these young performers are not 100% native tai-yu speakers. Actually, Taiwanese use a blended language mixing tai-yu, Hakka, Mandarin Chinese, and sometimes a bit of Japanese. Young performers cannot master tai-yu completely, and they might mispronounce words, which also causes communication gaps between the audience and performers.

If tai-yu is a "foreign in usage" but "local in culture" language to some viewers, translation would play a huge part in helping the audience understand the theatrical work. One must bear in mind that, like many regional dialects, tai-yu is a spoken language, and there is no written convention or commonly approved set of written characters for it. In order to write tai-yu down, one must utilize Chinese characters and, especially in Chou's case, English and Roman phonetic symbols to note down tai-yu's pronunciation.10 It may not be too difficult to understand the adapted play by means of "hearing", but it definitely causes problems in "reading" the play, since one needs the training of "decoding" this multiple writing system. As a dramatic text, Chou's revised Macbeth of 2007 may cause constant breaches in semantics and semiotics. For example, in Scene 7, as the porter answers the door knock, he says,

Lòng, lòng, lòng! 門口是siáng-lah? 看著鬼-oh, Káⁿ是hit 個演霹靂火ê 劉文聰來--a--leh? 入--來-lah, 劉文聰, 你ē-tàng 入來kā你ê 番á 火kap 汽油準備h³好 -a. Lòng, lòng, lòng! 你是lòng soah ah 未? (Scene 7, p. 10)11

Here we see at least two different language systems: Mandarin Chinese characters and Roman phonetic symbols. When reading this part, one needs to understand these two languages. However, Roman phonetic symbols are not widely accepted by the mass audience, which makes the translation even less readable and accessible. So misinterpretation and misunderstanding might be foreseeable; for example, people may understand "Lòng, lòng, lòng" as "long, long, long" in English and take this expression as referring to something's length instead of its sound.
However, citing various Taiwanese cultural elements and using tai-yu, though creating interest for local viewers, does not mark the cultural identity of The Witch Sonata. In producing The Witch Sonata, Lu still applies a form similar to that of Macbeth, and intentionally "de-orientates" the East-ness of his production: his performers recite blank verses and address each other by their English names. Moreover, we do not see local Taiwanese theatrical factors, such as Gezaixi (Taiwanese Folk Opera) or Taiwanese Puppet Theatre, emerge in Lu's production. However, the audience are constantly reminded that they are watching a non-Western play, since they are consciously reacting to the ambiguity of Taiwanese performers speaking English names and tai-yu at the same time.

10 Roman phonetic symbols were first introduced into Taiwan by Western missionaries, so this writing system is now commonly used in Christian churches, which makes the system even less popular and accepted, since Christians are a minority in Taiwan.

11 This part is newly added and revised in the version of 2007, different from that of 2003. Its English translation is as follows: "Knock, knock, knock! Who is at the door? Holy Ghost! Are you the one who acted Liu Wen-zong in Pilihou? Come! Liu Wen-zong! You can come in to prepare your matches and gasoline for setting fire. Knock, knock, knock! Will you stop knocking?" (my translation).

5 The positive feedback encouraged Lu to take Shakespeare's Macbeth as his second theatrical experiment. In 2003, with the translator Ting-pang Chou's help, Lu expurgated several scenes from Shakespeare's original text of Macbeth and maintained thirteen scenes in the tai-yu version. After Lu's The Witch Sonata-Psalm of Macbeth toured around Taiwan in 2003, Lu won much appreciation. Hsueh-chen Liu says in her review, "The rhythm of tai-yu is more poetic than that of Mandarin Chinese. It is marvelous to see Tainan Jen interpret Shakespeare's play in tai-yu. Five actors' amazing skills bring the audience to the world of Shakespeare's Macbeth. It could be thought of as the first-rate performance that I have ever seen."
Figure 1. The way that the three witches use the drum as a cauldron to perform the witchcraft can be seen as a practice of sound spectrum. (Photo by Yu-han Tseng; photo courtesy of Tainan Jen Theatre.)

There are three reasons that Lu used tai-yu to stage Western dramas. The first one was related to political reasons. In 2001, Tainan Jen Theatre had a three-year project on staging Western plays in tai-yu. Coincidentally, the political condition changed at the same time. In 2000, Shui-pien Chen, the presidential candidate of the Democratic Progressive Party, won the presidential election in Taiwan. Different from Kuomintang's policy, Chen and his colleagues emphasized the recovery of local culture, including the usage of tai-yu and the recognition of Taiwanese literature. Tainan Jen's project not only fit in with the DPP's political assertions but also served as a government propaganda medium to broadcast the localism of Taiwan. The second reason that Tainan Jen staged Western plays, especially Shakespeare's plays, is related to the fashion that started with the little theatre movement during the 1980s. When the Martial Law was lifted in 1987, many acting companies started to produce or adapt Western plays, especially ancient Greek dramas and Shakespeare's plays. For instance, the Contemporary Legend Theatre (CLT) produced The Kingdom of Desire, adapted from Shakespeare's Macbeth, in 1986. Later, the CLT continued by adapting Hamlet in 1990, Medea in 1993, Oresteia in 1995, King Lear in 2000, and The Tempest in 2004. Godot Theatre Company produced a musical play, Kiss Me Nana, adapted from The Taming of the Shrew, in 1997, A Midsummer Night's Dream in 1999, and Othello in 2008. The third reason that Tainan Jen Theatre used tai-yu to stage Macbeth is related to its tradition. Tainan Jen had produced many tai-yu plays because Tainan is located in southern Taiwan, where most of the inhabitants are tai-yu speakers. Tainan Jen's first tai-yu drama, Taiwanese Comic Dialogue (Tai Yu Xiang Sheng), was produced in 1990. The themes of its following productions in tai-yu concerned localism in Taiwan. In 1998 and 1999, the troupe produced two Western dramas in tai-yu (Eugene Ionesco's The Gap and Anton Chekhov's The Marriage Proposal), but the scale of these productions was not as big as Lu's adaptations. Before 2001, the way that Tainan Jen applied tai-yu to stage Western plays was to get close to the local people. Thus, Tainan Jen's Western plays before 2001 were adapted to cater to the audience's taste. After 2001, when Lu

Figure 2. The way that the red rice husks flow through Macbeth's fingers symbolizes his hands with Duncan's blood. The shadow show is applied in the performance to portray Macbeth's inner world. (Photo by Yu-han Tseng; photo courtesy of Tainan Jen Theatre.)
2018-12-07T20:38:27.062Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "e41a303163c933a7365ec089f1330d43666a5ced", "oa_license": "CCBYNC", "oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/56c431383c8c7.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e41a303163c933a7365ec089f1330d43666a5ced", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "History" ] }
56388640
pes2o/s2orc
v3-fos-license
Effects of equation of state on hydrodynamic expansion, spectra, flow harmonics and two-pion interferometry

We perform a systematic study of the role played by the equation of state in the hydrodynamic evolution of the matter produced in relativistic heavy ion collisions. By using the same initial conditions and freeze-out scenario, the effects of different equations of state are compared by calculating their respective hydrodynamical evolution, particle spectra and elliptic flow parameter v2. Three different types of equation of state are studied, each focusing on different features, such as the nature of the phase transition, as well as strangeness and baryon densities. Different equations of state imply different hydrodynamic responses, and their impact on final state anisotropies is investigated. The results of our calculations are compared to the data at two RHIC energies, 130 GeV and 200 GeV. It is found that the three equations of state used in the calculations describe the data reasonably well; differences can be observed, but they are quite small. The insensitivity to the equation of state weakens the need for a locally thermalized description of the system, at least for the observables analysed in the present work.

I. INTRODUCTION

The equation of state (EoS) of strongly interacting matter plays a major role in the hydrodynamic description of the hot and dense matter created in heavy ion collisions [1-4]. It governs how the hydrodynamic evolution transforms the initial state fluctuations into final state anisotropies in terms of collective flow and particle correlations. Motivated by the lattice QCD simulations which indicate that the quark-hadron transition is a crossover at zero baryon density [5-7], many different equations of state (EoSs) have been proposed by fitting the lattice data [8,9] and combining it with EoSs appropriate for the hadronic phase [10] at low temperatures [11-19]. For a study employing an EoS with a first order phase transition, see Ref. [20]. The assumption of zero baryon density is a fairly good approximation for the initial conditions (IC) of the systems created at RHIC and LHC, but strongly interacting matter possesses several conserved charges, such as electric charge, net baryon number and strangeness.
Studies have shown [21][22][23] that the thermodynamic properties as well as phase transitions are modified when the number of degrees of freedom of the system changes. In the case of a liquid-gas phase transition, for instance, the increase of the number of degrees of freedom increases the dimension of the binodal surface and the corresponding transition is continuous rather than discontinuous [24][25][26]. In view of this, one can expect that in the case of the QCD matter the conserved charges may affect the duration of the hydrodynamic evolution of the system in the transition region and would likely manifest themselves at the stage of hadronization. Therefore, experimental data on multiplicity, ratio of particle yields and their fluctuations need to be analysed through models properly handling finite baryon density and strangeness. A statistical model with finite chemical potential is capable of describing the data reasonably well [27][28][29][30], which indicates that it might be essential for the study of the evolution of the system to use EoSs that provide reasonable description of the matter produced over a large range of densities and temperatures. Following this line of thought, a compromise was proposed by Hama et al. [31], where a phenomenological critical point is introduced to smoothen the transition region where the baryon density is smaller than that of the critical point. In the model, finite baryon chemical potential is taken into consideration in both the Quark-Gluon Plasma (QGP) and in the hadronic phase. Such an approach reflects well the main characteristic of a smooth crossover transition while explicitly considering non-zero baryon density. Unfortunately, in the QGP phase, the model does not accurately reproduce asymptotic properties of the QGP matter. The present work employs different EoSs in an ideal hydrodynamical model to study their effects on particle spectra and flow harmonics. In the following section, we briefly review different EoSs employed in the literature and then discuss those EoSs employed in the present work. In section III we present the results of our hydrodynamical simulations. We compute particle spectra and the elliptic flow parameter v 2 of charged particles as well as of identified particles. The calculations are done for RHIC energies of 130 GeV and 200 GeV, and for various different centrality windows. Conclusions and perspectives for future work are presented in section IV. II. EQUATION OF STATE AND HYDRODYNAMICAL MODEL Many different EoSs compatible with results of lattice QCD simulations have been investigated in the literature. Huovinen [11] proposed an EoS connecting a lattice QCD EoS to another one for a hadronic resonance gas (HRG) model, and requiring continuity of the entropy density and its derivatives in the transition region, where no data is available. In a later work, Huovinen and Petreczky [12] improved the parameterization of Ref. [11] by focusing on the trace anomaly, Θ ≡ T µ µ = e − 3P : the EoS adopts the lattice EoS at high temperature and connects it smoothly to an EoS of a HRG model at low temperature by requiring that the trace anomaly as well as its first and second derivatives to be continuous. Since then the EoS has been adopted in many studies. In Refs. [14,32], an EoS was proposed also based on the lattice data and a HRG model. 
In this EoS, the sound velocity was interpolated in the transition region and, by means of thermodynamical relations, constrained to match the lattice QCD entropy density through an integral in temperature. A few other EoSs were proposed in a similar fashion [15,16,18], using a lattice EoS at high temperatures and connecting it to a phenomenological hadronic EoS using different prescriptions. In some of those, there are issues of thermodynamic consistency. On the other hand, instead of interpolating lattice QCD data, some works focused on EoSs with a critical end point in the phase diagram. In Ref. [31], for instance, a phenomenological critical point is introduced via an EoS from the MIT bag model for the QGP phase, connected to an EoS of a HRG model for the hadronic phase. Another attempt was implemented in Ref. [33], where an SU(3) Polyakov-Nambu-Jona-Lasinio (PNJL) model was used for the high temperature phase. A critical end point is naturally obtained by using the Polyakov loop as the order parameter of the deconfinement transition. We note that most of the EoSs discussed above consider only zero baryon density. Moreover, in the hydrodynamical simulations, usually averaged IC were used, and only a few works previously adopted full three-dimensional (3-D) hydrodynamical simulations. Though it was estimated in Refs. [11,34] that the effect of finite chemical potential is small, less than a few percent, it is not clear whether its importance may increase for event-by-event IC. In addition, in most studies, calculations were only done for some specific collision energies and centrality windows. In view of these points, it seems worthwhile to carry out an event-by-event 3-D simulation of the effects of the equation of state, covering a broader range of the published data. In this work, hydrodynamical calculations are carried out by using the full 3-D ideal hydrodynamical code NEXSPheRIO. For more realistic collisions, the effect of viscosity should be taken into account. However, the main purpose of this study is to investigate the differences between various EoSs rather than to reproduce the data precisely, and viscosity usually reduces such differences. Besides, viscosity may also introduce extra theoretical uncertainties, such as viscous corrections to the equilibrium distribution on the freeze-out surface [35,36]. The NEXSPheRIO code uses IC provided by the event generator NeXuS [37,38] and solves the 3+1 ideal hydrodynamic equations with the SPheRIO code [20,39]. By generating many NeXuS events, and solving the equations of hydrodynamics independently for each of them, one takes into account the fluctuations of IC on an event-by-event basis. At the end of the hydrodynamic evolution of each event, a Monte-Carlo generator is employed to produce hadrons following a Cooper-Frye prescription, and then hadronic decays are considered. A limited list of references describing studies of heavy-ion collisions using the NEXSPheRIO code can be found in Refs. [40-46]. In this work, we investigate three different types of EoS:

• (LQCD) A lattice QCD EoS proposed by Huovinen [11] with zero baryon chemical potential,
• (CEP) A lattice QCD inspired EoS [31] with a smooth transition and a critical end point, which considers finite baryon chemical potential,
• (FOS) An EoS with a first-order phase transition [47] which considers both finite baryon chemical potential and local strangeness neutrality.
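To make the first-order Gibbs construction behind the FO-type EoSs concrete, the following minimal Python sketch matches the pressure of an ideal two-flavour quark-gluon gas in the MIT bag model to that of a massless pion gas at zero baryon chemical potential; the bag constant is an illustrative textbook value, not a parameter of Ref. [31].

```python
import numpy as np
from scipy.optimize import brentq

# Toy FO-like construction at mu_b = 0: Gibbs condition p_QGP(Tc) = p_H(Tc).
# Degeneracies: gluons 16, two quark flavours (7/8)*24 = 21 -> g_QGP = 37;
# massless pions: g_H = 3. Units: GeV.
B = 0.2**4  # bag constant, B^(1/4) = 200 MeV (illustrative)

def p_qgp(T):
    return 37.0 * np.pi**2 / 90.0 * T**4 - B

def p_had(T):
    return 3.0 * np.pi**2 / 90.0 * T**4

Tc = brentq(lambda T: p_qgp(T) - p_had(T), 0.05, 0.5)
print(f"Tc = {1000.0 * Tc:.1f} MeV")  # ~144 MeV for these parameters
```

Below Tc the hadronic branch has the higher pressure and is the stable phase; at Tc the two pressures cross and the transition is of first order.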
The first type of EoS, LQCD, adopts a parameterization of the lattice QCD data for the high temperature region, while assuming a HRG model for the low temperature region. The EoS considers only zero baryon density. In the calculations, the pressure and energy density are obtained through the trace anomaly Θ by the following relations [12]

p(T)/T^4 − p(T_low)/T_low^4 = ∫_{T_low}^{T} dT' Θ(T')/T'^5,   e = 3p + Θ,   (1)

where a sufficiently small lower limit of integration, T_low, is used in practice. The second EoS, CEP, considers the following phenomenological parametrization instead of the Gibbs conditions for the phase transition,

(p − p_Q)(p − p_H) = δ(μ_b),   (2)

where p_Q and p_H are the pressures in the QGP and in the hadronic phase, respectively; δ = δ(μ_b) is a function of the baryon chemical potential μ_b which approaches zero when μ_b is larger than a critical value μ_c. Eq. (2) has the following solution,

p = (p_Q + p_H)/2 − D,   (3)

where

D = [(p_Q − p_H)^2/4 + δ]^{1/2}.   (4)

It is straightforward to verify that, for small δ, p → p_Q when p_Q < p_H and p → p_H when p_Q > p_H. It naturally recovers the first order phase transition when δ = 0 [31]. The values of p_Q and p_H are those determined in an EoS with first-order phase transition (FO) [31], which considers finite baryon chemical potential and assumes an ideal gas model of quarks and gluons for the QGP phase, as well as a HRG model for the hadronic phase. Note that in both CEP and FO, finite baryon density is considered. The third EoS, FOS, introduces an additional constraint in FO, namely, strangeness neutrality, i.e.

ρ_s = 0.   (5)

The strangeness chemical potential, μ_s, is introduced in the EoS as a new variable. Therefore the strangeness chemical potential is not an independent degree of freedom in the system; it merely increases the dimension of the binodal surface of the phase transition [21]. It also modifies the phase structure, as discussed below.

Before carrying out the hydrodynamical simulations, we first discuss qualitatively the differences among the different EoSs. We show in Fig. 1 the phase boundaries of the different EoSs. For LQCD, the deconfinement transition corresponds to the parameterization in the region of 170 MeV < T < 220 MeV on the temperature axis in the plot. For FO and FOS, the phase boundary is determined by the Gibbs conditions between the quark-gluon and hadronic phases. The phase boundary of the CEP is not shown explicitly in the plot; it is almost the same as that of FO beyond the critical point, and is smoothed out below that point. The top plot shows the phase boundaries in terms of temperature as a function of baryon density, while the bottom plot shows those in terms of temperature as a function of baryon chemical potential. We note that FOS possesses a unique feature: the QGP phase boundary and the hadronic phase boundary have different baryon chemical potentials. It can be seen as a result of the local strangeness neutrality condition. This implies that during the phase transition, when the two phases are in equilibrium, it is not necessary that both phases simultaneously have vanishing strangeness density. This is because in the transition region, the strangeness neutrality condition Eq. (5) reads

λ ρ_s^H + (1 − λ) ρ_s^Q = 0.   (6)

In other words, in the case of FOS, neither the strangeness density of the hadronic phase (ρ_s^H) nor that of the QGP phase (ρ_s^Q) is necessarily zero. Therefore, the baryon chemical potential is not fixed during the phase transition, its value being dependent on the fraction of hadronic phase (λ) of the system, which is in chemical and thermal equilibrium.
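As a quick check of the smoothing mechanism, here is a minimal sketch of Eqs. (2)-(4), assuming the quadratic interpolation form reconstructed above; the inputs p_Q, p_H and δ are schematic stand-ins, not the actual FO pressures of Ref. [31].

```python
import numpy as np

# Smoothed two-phase pressure following (p - p_Q)(p - p_H) = delta;
# p_q, p_h and delta below are schematic stand-ins.
def p_smooth(p_q, p_h, delta):
    mean = 0.5 * (p_q + p_h)
    return mean - np.sqrt(0.25 * (p_q - p_h)**2 + delta)

# Sanity checks of the limits quoted in the text:
assert np.isclose(p_smooth(1.0, 2.0, 1e-8), 1.0, atol=1e-4)  # p -> p_Q for p_Q < p_H
assert np.isclose(p_smooth(3.0, 2.0, 1e-8), 2.0, atol=1e-4)  # p -> p_H for p_Q > p_H
assert p_smooth(1.0, 2.0, 0.0) == 1.0                        # first order for delta = 0
```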
In general, the resulting baryon chemical potential attains different values on the hadronic phase boundary (λ = 1) and on the QGP phase boundary (λ = 0) without violating the Gibbs conditions. As a comparison, in the case of FO, the QGP phase boundary coincides with that of the hadronic phase. In Fig. 2, we show the pressure as a function of baryon chemical potential, as well as of baryon density, at a given temperature T = 150 MeV for the different EoSs. It is worth noting here that for FOS, neither the baryon chemical potential nor the strangeness chemical potential is fixed during the isothermal phase transition. As a result, when expressed in pressure and chemical potential, the transition region of FO is a point, but it is a curve in the case of FOS, as shown in the top plot of Fig. 2. On the other hand, the pressure increases during the phase transition in the case of FOS. Therefore, the phase transition in FOS is smoother than that in FO. For the CEP EoS, due to its parameterization, the transition region is smoothed out based on that of FO; the pressure also monotonically increases during the process. The ratios ε/T^4 and 3p/T^4 are plotted as a function of temperature T for all the EoSs in Fig. 3. At zero baryon density, due to its fit to the lattice QCD results, only the LQCD gives the correct asymptotic behaviour at high temperature. In this region, all the other EoSs converge to the non-interacting ideal gas limit. On the other hand, in the low temperature limit, all the EoSs approach the HRG model. The differences between CEP, FO and FOS come from the transition region around T ~ 160 MeV. Since a first order phase transition of a one-component system occurs at a constant temperature, it gives a vertical line in the case of FO. CEP is smoother in comparison with FO due to its phenomenological parameterization. Although the strangeness chemical potential is considered in FOS, it gives exactly the same result as FO. This can be understood by studying the intersection between the phase boundaries and the x-axis in the bottom plot of Fig. 1. Since the two phase boundary curves coincide at zero baryon density, the choice between FO and FOS does not make any difference. The right panel in Fig. 3 shows the results of CEP, FO and FOS for finite chemical potential (hence finite baryon density). The curves at zero chemical potential are also plotted for comparison purposes. It can be seen that μ_B = 0.5 GeV, which is beyond the critical point in the case of CEP, results in a phase transition of the first order; therefore CEP behaves similarly to FO in this case. On the other hand, FOS is slightly different from them, since the corresponding transition is not isothermal at finite baryon density. Nevertheless, all three EoSs show very similar features in the high and low temperature limits.

III. NUMERICAL RESULTS AND DISCUSSIONS

Here we present results for the spectrum and flow parameter v2 using the three EoSs discussed above as input into the hydrodynamical model NEXSPheRIO. The same initial conditions and freeze-out criterion are used in all cases. For illustrating the hydrodynamical evolution, density plots for the energy density and entropy density are shown in Figs. 4 and 5. In Fig. 4 the energy density for a selected random fluctuating event is shown. The temporal evolution of the energy density in the transverse plane is calculated at η = 0 for the three EoSs.
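For reference, in the LQCD-type construction the curves of Fig. 3 follow from the trace anomaly by numerically evaluating the integral in Eq. (1). Below is a minimal sketch, with a schematic Gaussian peak standing in for the parameterized Θ/T^4 of Ref. [12].

```python
import numpy as np
from scipy.integrate import quad

# Obtain p(T) from a parameterized trace anomaly via Eq. (1):
# p(T)/T^4 - p(T_low)/T_low^4 = int_{T_low}^T dT' Theta(T')/T'^5. Units: GeV.
def theta_over_T4(T):
    # Schematic stand-in for the parameterization of Huovinen and Petreczky [12]
    return 4.0 * np.exp(-0.5 * ((T - 0.19) / 0.06)**2)  # peak near T ~ 190 MeV

def pressure(T, T_low=0.05, p_low_over_T4=0.0):
    integral, _ = quad(lambda Tp: theta_over_T4(Tp) / Tp, T_low, T)
    return (p_low_over_T4 + integral) * T**4

def energy_density(T):
    return 3.0 * pressure(T) + theta_over_T4(T) * T**4  # e = 3p + Theta

for T in (0.15, 0.20, 0.30):
    print(f"T = {T:.2f} GeV: p = {pressure(T):.3e}, e = {energy_density(T):.3e} GeV^4")
```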
In the literature, smoothed IC are usually adopted; in our case these can be obtained by averaging over different fluctuating IC in the same centrality window. Since it is understood that event-by-event fluctuating IC lead to important effects on elliptic flow [40,41], triangular flow and two-particle correlations [48,49], the calculations in this work are done using such fluctuating IC. In the ideal hydrodynamic scenario, the total entropy of the system is conserved. To see the above results more quantitatively, the frozen-out entropy for the same event shown in Fig. 4 is considered at typical time instants and the results are depicted in Fig. 5. Due to the differences in EoS, the same IC may give different total entropy, so we plot the entropy in percentage instead of using the absolute value. It can be inferred from the plots that, at both √s = 130 GeV and 200 GeV, the freeze-out process of LQCD stands out from the other EoSs. This can be understood using Fig. 3, where the derivative of pressure with respect to temperature for LQCD is quite different from those of CEP, FO and FOS. In particular, for FO with a first order phase transition, the pressure remains unchanged during the transition process while the system continuously expands. In the case of CEP and FOS, the phase transition is smooth. However, in comparison to LQCD, the differences are not large. At 200 GeV, one observes that for LQCD it takes relatively less time for the system to freeze out than for the other three EoSs. This is probably due to the bigger derivative of its pressure vs. temperature curve in the high temperature region. As can be seen in Fig. 3, on the other hand, the differences between CEP, FO and FOS are very small in the high temperature limit. At 130 GeV, since the incident energy is smaller, the initial temperature is lower, and the local baryon density becomes slightly bigger. Therefore the properties of the EoSs at finite baryon density and in the phase transition region play an increasingly important role. Consequently, the differences between FO, FOS and CEP become observable. Next, particle spectra and elliptic flow are evaluated using 4000 NeXuS events for each centrality window at both 130 and 200 GeV Au-Au collisions. Balancing good statistics against efficiency, only 200 events are used for the calculation of particle spectra, but all 4000 events are used to evaluate the elliptic flow coefficients. At the end of each event, the Monte-Carlo generator is invoked 100 times for decoupling. There are two free parameters in the present simulation, namely, an overall normalization factor to reproduce correctly the multiplicity, and the thermal freeze-out temperature, which is adjusted to the slope of the transverse momentum spectra. The results of the hydrodynamic simulations for the spectra and the flow parameters are shown in Figs. 6 to 10. Results for the p_T spectra are shown for all charged particles in Fig. 6. At a given energy, the same normalization is adopted for all different EoSs to evaluate the dN/dη yields. Our results are compared with PHOBOS Au+Au data at 130 GeV [50] and 200 GeV [51]. For collisions at 130 GeV, a pseudo-rapidity interval −1 < η < 1 is used in the calculations of the p_T spectra, which is then compared with the STAR data, where the pseudo-rapidity intervals are −0.5 < η < 0.5 [52] and 0.5 < |η| < 1 [53], respectively. The freeze-out temperatures are determined as a function of centrality to fit the slope of the spectra, as shown in Table I.
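The adjustment of the freeze-out temperature to the slope of the transverse momentum spectra can be illustrated with a simple inverse-slope fit; the sample spectrum below is synthetic, generated from a Boltzmann-like shape, and is not taken from the STAR or PHOBOS data.

```python
import numpy as np

# Extract an inverse-slope (effective temperature) from a pion p_T spectrum,
# assuming a Boltzmann-like shape dN/(m_T dm_T) ~ exp(-m_T / T_eff). Units: GeV.
m_pi = 0.140
pT = np.linspace(0.2, 2.0, 10)
mT = np.sqrt(pT**2 + m_pi**2)
T_true = 0.135
spectrum = 50.0 * np.exp(-mT / T_true)  # synthetic "measured" spectrum

# Linear fit of log(spectrum) vs m_T: slope = -1/T_eff.
slope, intercept = np.polyfit(mT, np.log(spectrum), 1)
print(f"fitted T_eff = {-1.0 / slope * 1000:.1f} MeV")  # recovers ~135 MeV
```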
For Au+Au collisions at 200 GeV, the rapidity interval used in the calculations of the p_T spectra is 0.2 < y < 1.4, which is the same as in the data [54]. Again, the freeze-out temperatures are determined as a function of centrality, as shown in Table II. The same set of parameters for the freeze-out temperatures and renormalization factor was used for the different EoSs. It turns out that all three EoSs reproduce the measured η spectra (not shown in the figures) and p_T spectra reasonably well, although some deviations occur at p_T > 3 GeV for peripheral centrality windows. Our results indicate that particle spectra are not very sensitive to the choice of EoS. This is consistent with conclusions obtained previously by using smoothed IC [12]. Next, we present the results for the elliptic flow parameter v2. Here all the calculations are done by using the event plane method, and the results for v2 are presented as a function of pseudo-rapidity as well as of transverse momentum. For Au+Au collisions at 130 GeV, the calculated v2 as a function of p_T is shown in the top plot of Fig. 7, and the data points are from the STAR collaboration [55]. In the top plot of Fig. 8, we present v2 as a function of η; data points are from the PHOBOS collaboration [56]. When calculating v2 as a function of p_T, a cut in pseudo-rapidity |η| < 1.3 was implemented. There is no momentum cut in the calculations of v2 as a function of η. In both cases, the freeze-out temperature is taken to be T_f = 135.3 MeV. Similar calculations for Au+Au collisions are carried out at 200 GeV, whose results are presented in the bottom panels of Figs. 7 and 8. In the calculations of v2 as a function of p_T at 200 GeV, only particles in the interval 0 < η < 1.5 are considered, in accordance with the data of the PHOBOS collaboration [57]. The freeze-out temperature for this case is also taken to be T_f = 135.3 MeV. The calculations are also carried out for identified particles. In the left column of Fig. 9, the results are shown for v2 vs. p_T for identified particles at 130 GeV. The calculations were performed for the 0-50% centrality window, where the experimental data are from the STAR collaboration [58,59]. In the right column, we present the corresponding results of v2 vs. p_T for identified particles at 200 GeV. The calculations were done for 0-50% centrality windows, which are compared to the STAR data for 0-80% [60] and 0-70% [61]. From these plots, it is clearly seen that at small p_T the measured elliptic flow coefficients can be reasonably well reproduced by using all three EoSs. In fact, different EoSs give roughly similar results, and all of them fail to describe the data when p_T increases beyond ~2 GeV, due to the ideal hydrodynamics employed in this study. In order to discriminate between the EoSs, we present the results from the different EoSs on the same plot and focus only on the low p_T region, as shown in Fig. 10. The upper-left plot of Fig. 10 shows the results of v2 vs. p_T for all charged particles at 130 GeV; the upper-right plot gives those at 200 GeV; the lower-left and lower-right plots present results at 200 GeV for the 0-5% centrality of all charged particles and of identified pions, respectively. The data are from the STAR collaboration [61]. It can be seen that the results of LQCD are slightly different from those of CEP and FOS for p_T < 1 GeV. The obtained v2 is slightly bigger, and describes the data better. This can be understood as follows.
For an EoS featuring a first order phase transition, such as FO, the pressure gradient vanishes when the system enters this region. In the present case, although CEP and FOS describe smooth phase transitions, their properties are more similar to that of FO than to those of LQCD. For LQCD, it can be inferred from Fig. 3 that the pressure gradient is bigger than those from the other EoSs in the high temperature region (T ≥ 0.3 GeV). As a result, in the case of LQCD, the initial spatial eccentricity of the system is transformed into momentum anisotropy with the biggest amplification at high temperature as well as during the hadronization process. At 200 GeV, on the one hand, the evolution of the matter spends more time in the QGP phase, where most of the eccentricity is developed; therefore the asymptotic behaviour of LQCD at high temperature plays an increasingly important role, which makes it distinct from all the other EoSs. On the other hand, at the lower incident energy of 130 GeV, the system develops relatively more anisotropy in the transition region, so the properties of the phase transition become more important. This makes the curve of CEP come closer to that of LQCD. On the basis of these results, one concludes that the observed differences due to different EoSs are generally small in size. Very recently, LQCD was extended to consider finite chemical potential [62], where the pressure is expanded in terms of the chemical potentials in a Taylor series whose coefficients are parameterized and compared to those obtained from lattice simulations. The effect of this novel version of LQCD is unknown, but it is not expected to be very big. Other factors, such as different types of IC, fluctuations in the IC, viscosity, etc., should also be considered carefully. Generally, LQCD reproduces results closer to the data than the other EoSs investigated here. The EoSs with finite baryon/strangeness density also provide results with observable differences from those EoSs that do not impose conserved charges. Additionally, the time evolution, as well as the momentum anisotropy, are shown to be affected. Therefore, it is interesting to introduce an EoS which considers finite chemical potential while reproducing the lattice data in the high temperature and zero baryon density region. Such an EoS may be employed to consistently study physical systems over a large range of densities and temperatures.

IV. CONCLUSIONS AND PERSPECTIVES

A systematic study of the role of the EoS in the hydrodynamical evolution of the system is carried out and discussed here. By adopting the same set of parameters, which consists of an overall renormalization factor and freeze-out temperatures, the particle spectra and elliptic flow coefficients are calculated with the NEXSPheRIO code. The calculations cover a wide range of centrality windows at two different RHIC energies. It is found that all EoSs successfully reproduce the particle spectra and elliptic flow in the small p_T region. The hydrodynamical evolution of the system is affected by the EoS, which consequently leads to some small differences observed in the elliptic flow.

V. ACKNOWLEDGMENTS

The authors are thankful for valuable discussions with Wojciech Florkowski and Tamás Csörgő. We gratefully acknowledge the financial support from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP),
2018-06-14T12:32:01.000Z
2014-09-01T00:00:00.000
{ "year": 2018, "sha1": "660c8b6c844a44209c1206d316487610088a4293", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1409.0278", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f397ac4a748c3885cc16f6f11f5c36245d000497", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
126211230
pes2o/s2orc
v3-fos-license
Multiscale Modelling and Analysis of Signalling Processes in Tissues with Non-Periodic Distribution of Cells In this paper, a microscopic model for a signalling process in the left ventricular wall of the heart, comprising a non-periodic fibrous microstructure, is considered. To derive the macroscopic equations, the non-periodic microstructure is approximated by the corresponding locally periodic microstructure. Then, applying the methods of locally periodic homogenization (the locally periodic (l-p) unfolding operator, locally periodic two-scale (l-t-s) convergence on oscillating surfaces and l-p boundary unfolding operator), we obtain the macroscopic model for a signalling process in the heart tissue. to a positive angle at the endocardium. In the microscopic model of a signalling process, we consider the diffusion of signalling molecules in the extracellular space and their interaction with receptors located on the surfaces of muscle cells. There are two main challenges in the multiscale analysis of microscopic problems posed in domains with non-periodic perforations: (i) the approximation of the non-periodic microstructure by a locally periodic one and (ii) derivation of limit equations for the non-linear equations defined on oscillating surfaces of the microstructure. First, assuming the C 2 -regularity for the rotation angle γ , we define the locally periodic microstructure which approximates the original non-periodic plywood-like structure. Similar approximation of non-periodic plywood-like microstructure by locally periodic one was considered in [7,29]. Then, applying techniques of locally periodic homogenization (locally periodic two-scale convergence (l-t-s) and l-p unfolding operator), we derive macroscopic equations for the original microscopic model. The l-p two-scale convergence on oscillating surfaces and l-p boundary unfolding operator allow us to pass to the limit in the non-linear equations defined on surfaces of the locally periodic microstructure. In this paper, we consider a simple model describing the interactions between processes defined in the perforated domain and on the surfaces of the microstructure. However, the techniques presented here can be also applied to more general microscopic models as well as to other non-periodic microstructures, provided the variations in the microscopic structure are sufficiently regular. Previous results on homogenization in locally periodic media constitute the multiscale analysis of a heat-conductivity problem defined in domains with non-periodically distributed spherical balls [3,8,31], and elliptic and Stokes equations in non-periodic fibrous materials [4,6,7,29]. Formal asymptotic expansion and two-scale convergence defined for periodic test functions, [27], were used to derive macroscopic equations for models posed in domains with locally periodic perforations, i.e., domains consisting of periodic cells with smoothly changing perforations [5,10,11,22,23,33]. The paper is organized as follows. In Section 2, the microscopic model for a signalling process in a tissue with non-periodic plywood-like microstructure is formulated. In Section 3, we prove the existence and uniqueness results for the microscopic model and derive a priori estimates for a solution of the microscopic model. The approximation of the microscopic equations posed in the domain with non-periodic microstructure by the corresponding problem defined in a domain with locally periodic microstructure is given in Section 4. 
Then, applying the l-p unfolding operator, l-t-s convergence on oscillating surfaces, and the l-p boundary unfolding operator, we derive the macroscopic model for a signalling process in the heart muscle tissue. In the Appendix, we summarize the definitions and main compactness results for the l-t-s convergence and the l-p unfolding operator.

Microscopic Model for a Signalling Process in Heart Tissue

In this work, we consider a receptor-based microscopic model for a cellular signalling process in cardiac tissue. A signalling system is important for the proper function of cells and an appropriate response to changes in the extracellular environment. Many regulatory events in cardiac tissue are mediated via surface receptors, located in the cardiac cell membrane, that transmit signals through the activation of GTP binding proteins (G proteins) [34]. Cardiac cells (myocytes) in the heart wall tissue are joined in a linear arrangement to form muscle fibres. In the left ventricular wall of the heart, the orientation of the layers of muscle fibres changes with position through the wall. The layers of parallel-aligned muscle fibres are rotated from −60° at the epicardium to +70° at the endocardium [25] and create a non-periodic plywood-like microstructure. For simplicity, we assume that the individual muscle fibres are not connected to each other. However, it is possible to consider a periodic distribution of connections between the fibres. In the mathematical model for a signalling process in cardiac tissue, we consider the binding of signalling molecules to receptors located on the cell membrane, which, through the activation of G proteins (not considered in our simple model), results in the activation of a cell signalling pathway. We consider the diffusion, production and decay of ligands (signalling molecules) c and the binding of ligands to the membrane receptors. We shall distinguish between free receptors r_f and bound receptors r_b, which correspond to receptor-ligand complexes. We assume that the receptor-ligand complex can dissociate and result in a free receptor and a ligand. We also consider the production of new free receptors and the natural decay of free and bound receptors. These processes correspond to the reaction-diffusion system

∂_t c = ∇·(A∇c) + F(c)                        in the extracellular space,
∂_t r_f = p(r_f, r_b) − d_f r_f − α c r_f + β r_b   on the cell surfaces,
∂_t r_b = α c r_f − β r_b − d_b r_b               on the cell surfaces,

coupled through the ligand flux −A∇c·n = α c r_f − β r_b on the cell surfaces, where d_f and d_b are the decay rates, β denotes the dissociation rate for the receptor-ligand complex, α is the binding rate, the function p models the production of free receptors, and the function F describes the production and decay of ligands. To define the plywood-like microstructure of the cardiac muscle tissue of the left ventricular wall, we consider a function γ ∈ C^2(R), with −π/2 ≤ γ(x) ≤ π/2 for x ∈ R, and define the rotation matrix around the x_3-axis as

R(γ) = ( cos γ   −sin γ   0
         sin γ    cos γ   0
          0        0      1 ),

where γ(x) denotes the rotation angle with the x_1-axis. Denote R_x := R(γ(x_3)). We consider an open, bounded subdomain Ω ⊂ R^3, with Lipschitz boundary, representing a part of the cardiac muscle tissue, and take the x_3-axis to be orthogonal to the layers of parallel-aligned muscle fibres. We assume that the radius of the muscle fibres depends on the position in the tissue and define the characteristic function ϑ of a fibre of radius ρ(x_R)a, with 0 < ρ_0 ≤ ρ(x_R) ≤ ρ_1 < ∞ and ρ(x_R)a ≤ 2/5 for all x ∈ Ω, i.e., a = 2/(5ρ_1). By the small parameter ε we denote the characteristic size of the microstructure of the cardiac tissue, given as the ratio between the characteristic diameter of the muscle fibres and the characteristic size of the cardiac tissue.
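A small numerical sketch of this fibre geometry: applying R(γ(x_3)) to the x_1-axis gives the local fibre direction at height x_3. The quintic smoothstep profile for γ below is merely one illustrative C^2 choice interpolating −60° (epicardium) to +70° (endocardium); it is not the measured angle profile of [25].

```python
import numpy as np

# Rotation about the x3-axis defining the plywood-like microstructure.
def R(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Illustrative C^2 angle profile across a wall of thickness L
# (quintic smoothstep, whose first and second derivatives vanish at the ends).
def gamma(x3, L=1.0):
    t = np.clip(x3 / L, 0.0, 1.0)
    s = t**3 * (10.0 - 15.0 * t + 6.0 * t**2)
    return np.deg2rad(-60.0 + 130.0 * s)

# Fibre direction at height x3: the rotated x1-axis.
e1 = np.array([1.0, 0.0, 0.0])
for x3 in (0.0, 0.5, 1.0):
    print(x3, R(gamma(x3)) @ e1)
```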
Notice that for plywood-like microstructure the axis of each fibre can be defined by a rotated around the x 3 -axis line, parallel to the x 1 -axis and passing through a point of an ε-grid in the plane x 1 = const. Thus, for j ∈ Z 3 , we define Notice that x ε j,3 = εj 3 and the third variable is invariant under the rotation R x ε j . This ensures that for each fixed εj 3 we obtain a layer of parallel aligned fibres. Then the perforated domain * ε , corresponding to the extracellular space of cardiac tissue, is defined as * We denote 0,1 = {y ∈ R 3 : y 1 = ±1/2}. Then, assumptions on ρ and a ensure that = ∅ for any m, n ∈ ε with n 2 = m 2 or n 3 = m 3 . Hence, * ε is connected. This corresponds to our assumption that muscle fibres do not touch each other and are not directly connected, and the interactions between the muscle fibres are facilitated through the extracellular matrix. Now, using the definition of ϑ, the characteristic function of muscle fibres in cardiac tissue reads and the extracellular space is characterised by The surfaces of muscle cells, i.e., the boundaries of the microstructure, are denoted by Notice that the changes in the microstructure of * ε are defined by changes in the periodicity given by the linear transformation (rotation) R(x) and by changes in the shape of the microstructure (changes in the radius of muscle fibres) given by the linear transformation To determine the non-constant reaction rates for binding and dissociation processes on cell membranes, we consider α, β ∈ C 1 ( ; C 1 0 (Y 1 )), extended in y-variable by zero to R 3 , and define Then, the microscopic model for a signalling process in cardiac tissue reads where the dynamics in the concentrations of free and bound receptors on cell surfaces is determined by two ordinary differential equations with initial conditions defined as For simplicity of the presentation we shall assume that the diffusion coefficient A and the decay rates d f , d b are constant. We also assume that the functions F and p are independent of x ∈ . The dependence of A, d f , d b , F and p on the microscopic and macroscopic variables can be analyzed in the similar way as for α ε and β ε . Notice that the C 1 -regularity of α and β is required for the approximation of the integrals defined on the boundaries of the non-periodic microstructure by the integrals defined on the boundaries of the corresponding locally periodic microstructure. We shall consider a weak solution of the problem (1) and (2), defined in the following way. (1) and (2) are functions c ε , r ε f , Existence, Uniqueness, and a Priori Estimates for a Weak Solution of the Microscopic Problem (1) and (2) In a similar way as in [9,21,30], we can prove the existence, uniqueness, and a priori estimates for a weak solution of problem (1)- (2). Notice that for the derivation of a priori estimates a trace estimate, uniform in ε, where the constant C depends on Y 1 , Y 0 , K and is independent of ε and j ∈ ε . Then, considering the change of variables x = εR x ε j y + x ε j = εR x ε j (y + j) and summing up over where the constantμ depends on Y 1 , Y 0 , R and K and is independent of ε. (1) and (2) satisfying the following a priori estimates Lemma 1 Under Assumption 1 there exists a unique non-negative weak solution of the microscopic problem where the constant μ is independent of ε, , withμ being the constant in the trace inequality (4). 
Proof (Sketch) As in [30] the existence of a solution of the microscopic problem (1) and (2) for each fixed ε > 0 is obtained by applying fixed point arguments and Galerkin method. Also, using the same arguments as in [30] we obtain that c ε (t, x) ≥ 0 for (t, x) ∈ * ε,T and r ε To derive a priori estimates, we consider the structure of the microscopic equations. For non-negative solutions, by adding the equations for r ε f and r ε b , we obtain Then the Lipschitz continuity of p and the non-negativity of r ε f and r ε b imply the boundedness of r ε f and r ε b on ε T . Considering c ε as a test function in (3) and using the trace inequality (4), we obtain the estimates for c ε . Testing (2) by ∂ t r ε f and ∂ t r ε b , respectively, yields the estimates for the time derivatives of r ε f and r ε b . In the derivation of the a priori estimate for ∂ t c ε we use the equation for ∂ t r ε f to estimate the non-linear term on the boundary ε , i.e., Considering (c ε − M 1 e M 2 t ) + as a test function in (3), where M 1 and M 2 are as in the formulation of the lemma, we obtain * ε Using the non-negativity and boundedness of β ε and r ε f , along with the trace inequality (4), the last integral can be estimated as for any δ > 0, where the constants μ 1 , μ 2 and μ δ depend on and on the transformation matrices R and K, but are independent of ε. More specifically, . Using the non-negativity of c ε and r ε f , the Lipschitz continuity of F , and the assumptions on M 1 and M 2 , and applying the Gronwall inequality yield estimate (6). To show the uniqueness of a solution of the microscopic problem (1) and (2), we consider the equations for the difference of two solutions (c ε 1 , r ε f,1 , r ε b,1 ) and (c ε 2 , r ε f,2 , r ε b,2 ). The nonnegativity of α ε , r ε f,j , and c ε j , along with the boundedness of r ε f,j , ensures Testing the sum of the equations for r ε and using the estimate from above yield Combining the last two inequalities and applying the Gronwall inequality imply the estimates for r ε l, Considering (c ε − S) + , with some S > 0, as a test function in (3) and using the boundedness of r ε f and r ε b we obtain , |F (0)|}, μ 1 is some positive constant, and * ,S ε (t) = {x ∈ * ε : c ε (t, x) > S}. Then, Theorem II.6.1 in [20] yields the boundedness of c ε in (0, T ) × * ε for every fixed ε. Considering now (3) for two solutions (c ε 1 , r ε f,1 , r ε b,1 ) and (c ε 2 , r ε f,2 , r ε b,2 ), we obtain the estimates for c ε , shown above, and applying the Gronwall inequality, we conclude that The assumptions on the non-periodic microstructure of * ε and the regularity of the transformation matrices R and K ensure the following extension result. where μ depends on Y 1 , Y 0 , R and K and is independent of ε and j ∈ ε . where μ depends on Y 1 , Y 0 , R and K and is independent of ε. Proof (Sketch) The proof follows the same lines as in the periodic case, see e.g. [15,19]. The only difference here is that the extension depends on the Lipschitz continuity of K and R and the uniform boundedness from above and below of | det ) and obtain the estimates in (7). Notice that due to the definition of K(x), the fibre radius varies between different fibres in the plywood-like structure of the heart tissue, but is constant along each individual fibre. Thus, apart from the end parts of the fibres near ∂ , we have to extend u only in the directions orthogonal to the fibres. In the definition of * ε we consider those j that εR x ε j (Y 1 + j) ⊂ . 
Hence the extension for the end parts of the fibres near ∂ , i.e., for such , we obtain that the constant μ in (7) is independent of x ε j , ε, and j ∈ ε . Then scaling and R x ε j (Y 1 + j) by ε and summing up over j ∈ ε in (7) imply (8). In the case when the boundary ∂ crosses the fibres in a non-orthogonal way, we would obtain only a local extension to a subdomain δ = {x ∈ : dist(x, ∂ ) > δ} for any fixed δ > 0. Derivation of Macroscopic Equations To derive macroscopic equations for the microscopic problem posed in a domain with the non-periodic plywood-like microstructure, we approximate it by a problem defined in the domain with the corresponding locally periodic microstructure and apply the methods of locally periodic two-scale convergence (l-t-s) and l-p unfolding operator (see Appendix for the definitions and convergence results for l-t-s convergence and l-p unfolding operator). Notice that the regularity assumptions on the orientation angle γ are essential for the construction of an appropriate locally periodic microstructure for the non-periodic plywood-like structure. To define the locally periodic microstructure related to the original non-periodic one, we consider, similarly to [8,29], the partition covering of by a family of open nonintersecting cubes { ε n } 1≤n≤N ε of side ε r , with 0 < r < 1, such that For each x ∈ R 3 , we consider a transformation matrix D(x) ∈ R 3×3 and assume that D, D −1 ∈ Lip(R 3 ; R 3×3 ) and 0 < d 0 ≤ | det D(x)| ≤ d 1 < ∞ for all x ∈ . The matrix D will be defined by the rotation matrix R and its derivatives and the specific form of D will be given later. Then, the locally periodic microstructure is defined by considering a covering of ε n by parallelepipeds εD x ε n Y such that and points x ε n ,x ε n ∈ ε n , for n = 1, . . . , N ε , are arbitrary chosen, but fixed. Here Y = (0, 1) 3 , D x := D(x), and D x ε n = D(x ε n ) for 1 ≤ n ≤ N ε . Then, the perforated domain with locally periodic microstructure is given by * . . , N ε , and the transformation matrix K will be specified later. We shall also denotê ε n =x ε n + Int The boundaries of the locally periodic microstructure are defined as where x ε n ,K = K x ε n and = ∂Y 0 \ 0,1 . For the problem analyzed here, we shall consider x ε n = x ε n . The following calculations illustrate the motivation for the locally periodic approximation and determine formulas for the transformation matrices D and K. For n = 1, . . . , N ε , we choose such κ n ∈ Z 3 that for x ε n = R x ε n εκ n we have x ε n ∈ ε n . In the definition of covering of ε n by shifted parallelepipeds, we consider a numbering of ξ ∈ ε n and write ε n ⊂ x ε n + Then for 1 ≤ j ≤ I ε n we consider k n j = κ n + ξ j and x ε k n j = R k n j εk n j . Here R k n j := R x ε k n j and R κ n := R x ε n . Using the regularity assumptions on the function γ and considering the Taylor expansion of R −1 around x ε n , i.e. around εκ n,3 , we obtain where W x ε n = W (x ε n ) with W (x) = (I − ∇R −1 (γ (x 3 ))x). The notation of the gradient is understood as ∇R −1 (γ (x))x = ∇ z (R −1 (γ (z))x)| z=x . Thus, for x, x ε n ∈ ε n , since |x − x ε n | ≤ Cε r , the distance between R −1 is of the order sup 1≤j ≤I ε n |ξ j ε| 2 ∼ ε 2r . This calculation together with the estimates below will ensure that the non-periodic plywood-like structure can be approximated by the corresponding locally periodic microstructure, comprising Y x ε n -periodic structure in each ε n for n = 1, . . . , N ε , and | ε n | ∼ ε 3r for an appropriate r ∈ (0, 1). 
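The ε^{2r} error of this first-order Taylor approximation can be checked numerically; the angle profile γ below is an arbitrary smooth test function, not the physiological one.

```python
import numpy as np

# Within a cube of side eps^r, the first-order Taylor expansion of
# R(gamma(x3)) around x3_n deviates from the exact rotation by O(eps^{2r}).
def R(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 0.0 + 1.0]])

gamma = lambda x3: 0.7 * np.sin(2.0 * x3)
dgamma = lambda x3: 1.4 * np.cos(2.0 * x3)

def dR(g, dg):  # d/dx3 of R(gamma(x3)) by the chain rule
    c, s = np.cos(g), np.sin(g)
    return dg * np.array([[-s, -c, 0.0], [c, -s, 0.0], [0.0, 0.0, 0.0]])

r, x3_n = 0.8, 0.3
for eps in (1e-1, 1e-2, 1e-3):
    h = eps**r  # size of the small cube
    err = np.linalg.norm(R(gamma(x3_n + h))
                         - R(gamma(x3_n)) - h * dR(gamma(x3_n), dgamma(x3_n)))
    print(f"eps = {eps:.0e}: error = {err:.2e}, eps^(2r) = {eps**(2 * r):.2e}")
```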
Here where (x 3 )). The transformation matrix K is defined as K(x) = W −1 (x)K(x) and the boundary of the muscle fibres in the locally periodic approximation is given by The definitions of R, W and γ ensure that the transformation matrices D and K are Lipschitz continuous, as well as 0 Since ϑ is independent of the first variable, we consider in the definition of W (x) the shift only in the second variable. Notice that if the original microstructure would be locally periodic, i.e. R(γ (x 3 )) = R(γ (x ε n,3 )) for x ∈ ε n and some x ε n ∈ ε n , then the matrix W would be constant in each ε n and we would obtain In the approximation of the problem posed in the domain with the non-periodic plywoodlike structure, we shall use the following lemma, proven in [6], that facilitate the estimate for the difference between the values of the characteristic function at two different points. Deriving estimates for the difference of solutions of the original microscopic problem and the corresponding locally periodic approximation and applying techniques of locally periodic homogenization, we obtain the following macroscopic equations for the microscopic problem (1) and (2). (1) and (2) converges to a solution c ∈ L 2 (0, T ; H 1 ( )) ∩ H 1 (0, T ; L 2 ( )) and r l ∈ H 1 (0, T ; L 2 ( ; L 2 ( x ))), Theorem 1 A sequence of solutions of the microscopic problem for (t, x) ∈ (0, T ) × and y ∈ x , where the macroscopic diffusion coefficient A is defined as with w j , for j = 1, 2, 3, which are solutions of the unit cell problems div(A(∇ y w j + e j )) = 0 in Y * x,K , A(∇ y w j + e j ) · n = 0 on x , w j Y x − periodic, Y * x w j d y = 0 . (12) Here (10), and Proof Using calculations from above, we consider a domain with a locally periodic microstructure characterised by the periodicity cell Y x ε n = D x ε n Y in each ε n , with n = 1, . . . , N ε and the shift x ε n ∈ ε n in the covering of ε n by D x ε n (Y + ξ), with ξ ∈ ε n . Then, the characteristic function of the extracellular space * ε in a tissue with locally periodic microstructure is defined by χ * The boundaries of the locally periodic microstructure are denoted by Notice that non-periodic changes in the shape of the perforations (radius of muscle fibres) are approximated by the same transformation matrix K(x). This is consistent with the results obtained in [10,11,23,33]. However spatial changes in the periodicity are approximated by D x = R x W x . The reaction rates (binding and dissociation rates) are defined in terms of locally periodic microstructure in the following way To show that we can approximate the problem (1) and (2) by a microscopic problem defined in the domain with the locally periodic microstructure, we have to prove that the difference between the characteristic function of the original domain χ * ε and of the locally periodic perforated domain χ * ε converges to zero strongly in L 2 ( ) as ε → 0. Also, we have to show that the difference between boundary integrals and their locally periodic approximations converges to zero as ε → 0. This will ensure that as ε → 0 the sequence of solutions of the original microscopic problem (1)-(2) will converge to a solution of the macroscopic equations obtained by homogenization of the corresponding problem defined in the domain with locally periodic microstructure. For the difference between χ * ε and χ * ε , we have We notice that ε 3 |J ε n | ≤ Cε 3r and |N ε | ≤ Cε −3r . For the first integral, we have To estimate the second integral, we use Lemma 3. 
Since in each $\Omega_n^\varepsilon$ the length of the fibres is of order $\varepsilon^r$, applying the estimate in Lemma 3, equality (9), and the estimates $N_\varepsilon \le C\varepsilon^{-3r}$ and $|J_n^\varepsilon| \le C\varepsilon^{3(r-1)}$, we conclude that both contributions vanish in the limit. Thus, for $r \in (2/3, 1)$, we have $I_1 \to 0$ and $I_2 \to 0$ as $\varepsilon \to 0$.

To estimate the difference between the boundary integrals we have to extend $c^\varepsilon$, $r_f^\varepsilon$, and $r_b^\varepsilon$ from $\Omega^{*,\varepsilon}$ to $\Omega$. For $c^\varepsilon$, we can consider the extension as in Lemma 2. Then, using the extended $\tilde{c}^\varepsilon$ and the fact that the reaction rates and the initial data are defined on the whole of $\Omega$, we can extend $r_f^\varepsilon$ and $r_b^\varepsilon$ to $\Omega$ as solutions of the ordinary differential equations with $\tilde{c}^\varepsilon$ instead of $c^\varepsilon$. The non-negativity of $c^\varepsilon$ and the construction of the extension ensure that $\tilde{c}^\varepsilon$ is non-negative. Then, in the same way as for $r_f^\varepsilon$ and $r_b^\varepsilon$, using the properties of $p$ and the non-negativity of the coefficients and initial data, we obtain the non-negativity of $\tilde{r}_f^\varepsilon$ and $\tilde{r}_b^\varepsilon$. Thus, adding the equations for $\tilde{r}_f^\varepsilon$ and $\tilde{r}_b^\varepsilon$, we obtain the boundedness of $\tilde{r}_f^\varepsilon$ and $\tilde{r}_b^\varepsilon$ in $\Omega_T$. Notice that $\varepsilon^{-1}$ in the estimates for $\nabla\tilde{r}_f^\varepsilon$ and $\nabla\tilde{r}_b^\varepsilon$ will be compensated by $\varepsilon$ in the estimate for the difference between neighbouring points in the non-periodic and locally periodic domains, respectively.

Then, for the boundary integrals, considering the regularity of $K$ and $R$ and the uniform boundedness from below and above of $|\det K|$, and using the trace estimate for the $L^2(\Gamma)$-norm of an $H^\varsigma(Y)$-function, with $\varsigma \in (1/2, 1)$, we can estimate the first integral, with $j \in J_n^\varepsilon$ and $n = 1, \ldots, N_\varepsilon$. Here we used the corresponding short notations. Using the regularity of $\gamma$, $K$, and $\alpha$, and applying a priori estimates for $c^\varepsilon$ and $r_f^\varepsilon$, together with (15), we obtain the bound for $0 < \varsigma_1 < 1/2$, with $\varsigma + \varsigma_1 = 1$. Conducting similar calculations as for $I_3$, we obtain analogous estimates for the remaining boundary terms.

The definition of $\Omega^{*}_\varepsilon$, $\Gamma^\varepsilon_L$, $\alpha^\varepsilon$, and $\beta^\varepsilon$ implies that the original non-periodic problem is approximated by equations posed in a domain with locally periodic microstructure. Hence we can apply the methods of locally periodic two-scale convergence (l-t-s) and the l-p unfolding operator to derive the limit equations. Using the extension of $c^\varepsilon$, we have that the sequences $\{c^\varepsilon\}$, $\{\nabla c^\varepsilon\}$ and $\{\partial_t c^\varepsilon\}$ are defined on $\Omega_T$. Then the convergence results for the l-p unfolding operator and l-t-s convergence (see [29,30] or the Appendix) imply that there exist subsequences (denoted again by $c^\varepsilon$, $r_f^\varepsilon$ and $r_b^\varepsilon$) and functions $c \in L^2(0,T; H^1(\Omega))$, $\partial_t c \in L^2(\Omega_T)$, $c_1 \in L^2(\Omega_T; H^1_{\mathrm{per}}(Y_x))$, $r_f, r_b \in H^1(0,T; L^2(\Omega; L^2(\Gamma_x)))$ realizing the corresponding limits. The coefficients $\alpha^\varepsilon$ and $\beta^\varepsilon$ can be defined as locally periodic approximations of $\alpha$ and $\beta$, given by (13); see the Appendix or [29] for the definition of the locally periodic approximation $\mathcal{L}^\varepsilon$. The regularity assumptions on $\alpha$, $\beta$, $K$, and $\gamma$ ensure that $\alpha, \beta \in C(\bar{\Omega}; C_{\mathrm{per}}(Y_x))$. Considering $\psi^\varepsilon(t,x) = \psi_1(t,x) + \varepsilon\mathcal{L}^\varepsilon_\rho(\psi_2)(t,x)$, with $\psi_1 \in C^1(\bar{\Omega}_T)$ and $\psi_2 \in C^1_0(\Omega_T; C^1_{\mathrm{per}}(Y_x))$, as a test function in (3) (see the Appendix or [29] for the definition of $\mathcal{L}^\varepsilon_\rho$) and applying the l-p unfolding operator and the l-p boundary unfolding operator yields the unfolded weak formulation. The regularity assumptions on $\gamma$ and $K$ ensure the convergence of the corresponding unfolded coefficients and test functions.
Applying the results from [30], using the a priori estimates for $c^\varepsilon$, $r_f^\varepsilon$ and $r_b^\varepsilon$, the strong convergence of $\mathcal{T}^\varepsilon_L(c^\varepsilon)$ in $L^2(\Omega_T; H^1(Y))$, the strong convergence and boundedness of $\mathcal{T}^{b,\varepsilon}_L(\alpha^\varepsilon)$, the weak convergence and boundedness of $\mathcal{T}^{b,\varepsilon}_L(r_f^\varepsilon)$, together with the regularity of $D$, $\gamma$, and $K$, we can pass to the limit in the boundary terms. Similar arguments, along with the Lipschitz continuity of $F$, yield the convergence of the nonlinear terms as $\varepsilon \to 0$. Using the convergence results (16), the strong convergence of $\mathcal{T}^\varepsilon_L(\psi^\varepsilon)$ and $\mathcal{T}^\varepsilon_L(\nabla\psi^\varepsilon)$, and the fact that the measure of the remainder set tends to zero as $\varepsilon \to 0$, taking the limit as $\varepsilon \to 0$ and considering the change of variables $y = D_x\tilde{y}$ for $\tilde{y} \in Y$ and $y = D_x K_x\bar{y}$ for $\bar{y} \in \Gamma$, we obtain the limit variational formulation, where the $w^j$ are solutions of (12). Choosing $\psi_2(t,x,y) = 0$ for $(t,x) \in \Omega_T$ and $y \in Y_x$ yields the macroscopic equation for $c$. Using the strong convergence of $\mathcal{T}^{b,\varepsilon}_L(c^\varepsilon)$ in $L^2(\Omega_T; L^2(\Gamma))$, estimates (5a)-(5b) and (6), and the Lipschitz continuity of $p$, we obtain that $\{\mathcal{T}^{b,\varepsilon}_L(r_j^\varepsilon)\}$ is a Cauchy sequence in $L^2(\Omega_T; L^2(\Gamma))$ for $j = f, b$, and hence, up to a subsequence, $\mathcal{T}^{b,\varepsilon}_L(r_j^\varepsilon) \to r_j(\cdot, \cdot, D_x K_x \cdot)$ strongly in $L^2(\Omega_T; L^2(\Gamma))$. Then, applying the l-p boundary unfolding operator to the equations on $\Gamma^\varepsilon$ and taking the limit as $\varepsilon \to 0$, we obtain the equations for $r_f$ and $r_b$. The proof of the uniqueness of a solution of the macroscopic problem is similar to the corresponding proof for the microscopic problem, and hence the convergence of the whole sequences of solutions of the microscopic problem follows.

Remark 1 Notice that for the proof of the homogenization results it is sufficient to have a local extension of $c^\varepsilon$ from $\Omega^{*,\varepsilon}$ to $\Omega_\delta$, with $\Omega_\delta = \{x \in \Omega : \operatorname{dist}(x, \partial\Omega) > \delta\}$ for any fixed $\delta > 0$, and hence the local strong convergence of $\mathcal{T}^\varepsilon_L(c^\varepsilon)$, i.e. the strong convergence in $L^2(0,T; L^2_{\mathrm{loc}}(\Omega; H^1(Y)))$.

Remark 2 For numerical computations of the cell problems (12) and the ordinary differential equations (11), defining the dynamics of the receptor densities, approaches from the two-scale finite element method [24] or the heterogeneous multiscale method [1,2,17,18] can be applied.

Appendix: Definition and Convergence Results for the l-t-s Convergence and l-p Unfolding Operator

We shall consider the space $C(\bar{\Omega}; C_{\mathrm{per}}(Y_x))$ given in a standard way, i.e. for any $\psi \in C(\bar{\Omega}; C_{\mathrm{per}}(Y))$ the relation $\tilde{\psi}(x,y) = \psi(x, D_x^{-1}y)$ with $x \in \bar{\Omega}$ and $y \in Y_x$ yields $\tilde{\psi} \in C(\bar{\Omega}; C_{\mathrm{per}}(Y_x))$. In the same way the spaces $L^p(\Omega; C_{\mathrm{per}}(Y_x))$, $L^p(\Omega; L^q_{\mathrm{per}}(Y_x))$ and $C(\bar{\Omega}; L^q_{\mathrm{per}}(Y_x))$, for $1 \le p \le \infty$, $1 \le q < \infty$, are given. Due to the assumptions on $D$, i.e. $D \in \operatorname{Lip}(\bar{\Omega})$ and $0 < d_0 \le |\det D(x)| \le d_1 < \infty$ for all $x \in \bar{\Omega}$, we obtain that the function $u : \bar{\Omega} \to C(Y_x)$ is well-defined. The separability of $C_{\mathrm{per}}(Y_x)$ for each $x \in \bar{\Omega}$ and the Weierstrass approximation for continuous functions $u : \bar{\Omega} \to C_{\mathrm{per}}(Y_x)$ ensure the separability of $C(\bar{\Omega}; C_{\mathrm{per}}(Y_x))$. Also, the norm is given by $\|\psi\|_{C(\bar{\Omega}; C_{\mathrm{per}}(Y_x))} := \sup_{x \in \bar{\Omega}}\,\sup_{y \in Y_x} |\psi(x,y)|$. The assumptions on $D$ and $K$ ensure that $u(x) \in L^2(\Gamma_x)$ for a.e. $x \in \Omega$ and that the spaces defined via $\int_\Omega \|u\|^2_{L^2(\Gamma_x)}\,dx < \infty$ are well-defined, separable Hilbert spaces; see e.g. [16,26,32]. We recall here the definition of locally periodic two-scale (l-t-s) convergence and the l-p unfolding operator; see [29,30] for details.

Definition 2 ([29]) Let $u^\varepsilon \in L^p(\Omega)$ for all $\varepsilon > 0$ and $1 < p < \infty$. We say the sequence $\{u^\varepsilon\}$ converges l-t-s to $u \in L^p(\Omega; L^p(Y_x))$ as $\varepsilon \to 0$ if $\|u^\varepsilon\|_{L^p(\Omega)} \le C$ and the corresponding limit relation holds for any $\psi \in L^q(\Omega; C_{\mathrm{per}}(Y_x))$, where $\mathcal{L}^\varepsilon$ is the l-p approximation of $\psi$ and $1/p + 1/q = 1$.
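For concreteness, the limit relation in Definition 2 can be written out explicitly. The following is a sketch in the standard l-t-s notation of [29]; the normalization by $|Y_x|$ reflects the usual convention and is our assumption here:

$$\lim_{\varepsilon \to 0} \int_{\Omega} u^{\varepsilon}(x)\, \mathcal{L}^{\varepsilon}(\psi)(x)\, dx \;=\; \int_{\Omega} \frac{1}{|Y_x|} \int_{Y_x} u(x,y)\, \psi(x,y)\, dy\, dx .$$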
Definition 4 ([30]) For any Lebesgue-measurable function $\psi$ on $\Omega$, the locally periodic unfolding operator (l-p unfolding operator) $\mathcal{T}^{\varepsilon}_{L}$ is defined as

$$\mathcal{T}^{\varepsilon}_{L}(\psi)(x,y) \;=\; \sum_{n=1}^{N_\varepsilon} \psi\Bigl(\varepsilon D_{x_n^\varepsilon}\Bigl[\frac{D_{x_n^\varepsilon}^{-1}x}{\varepsilon}\Bigr]_{Y} + \varepsilon D_{x_n^\varepsilon}\, y\Bigr)\,\chi_{\hat{\Omega}_n^\varepsilon}(x) \qquad \text{for } x \in \Omega \text{ and } y \in Y.$$

Definition 5 ([30]) For any Lebesgue-measurable function $\psi$ on $\Gamma^{\varepsilon}_{L}$, the l-p boundary unfolding operator $\mathcal{T}^{b,\varepsilon}_{L}$ is defined as

$$\mathcal{T}^{b,\varepsilon}_{L}(\psi)(x,y) \;=\; \sum_{n=1}^{N_\varepsilon} \psi\Bigl(\varepsilon D_{x_n^\varepsilon}\Bigl[\frac{D_{x_n^\varepsilon}^{-1}x}{\varepsilon}\Bigr]_{Y} + \varepsilon D_{x_n^\varepsilon} K_{x_n^\varepsilon}\, y\Bigr)\,\chi_{\hat{\Omega}_n^\varepsilon}(x) \qquad \text{for } x \in \Omega \text{ and } y \in \Gamma.$$
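Remark 2 above points to two-scale finite element and heterogeneous multiscale methods for computing the cell problems (12). As a toy illustration only — a one-dimensional analogue with a hypothetical coefficient, not the three-dimensional problems (12) themselves — the effective coefficient and corrector can be computed directly, since in 1D the homogenized coefficient is the harmonic mean of the oscillating one:

```python
import numpy as np

# 1D analogue of the cell problem: (a(y)(w'(y) + 1))' = 0 on Y = (0, 1),
# with w 1-periodic and zero mean. The flux a(y)(w' + 1) is then a constant,
# equal to the homogenized coefficient A_hom = (mean of 1/a)^(-1).

def a(y):
    # hypothetical oscillatory coefficient on the unit cell
    return 2.0 + np.sin(2.0 * np.pi * y)

N = 1000
y = (np.arange(N) + 0.5) / N          # midpoint quadrature grid on (0, 1)

A_hom = 1.0 / np.mean(1.0 / a(y))     # harmonic mean: exact in 1D
w_prime = A_hom / a(y) - 1.0          # corrector derivative from the flux relation
w = np.cumsum(w_prime) / N            # integrate w'
w -= np.mean(w)                       # enforce zero average over the cell

print(f"A_hom (harmonic mean) : {A_hom:.6f}")         # ~ sqrt(3) for this a(y)
print(f"arithmetic mean of a  : {np.mean(a(y)):.6f}")  # strictly larger than A_hom
print(f"periodicity defect    : {abs(np.sum(w_prime)) / N:.2e}")
```

In the actual problems (12) the cell $Y^{*}_{x,K}$ and the matrix $A$ vary with the macroscopic point $x$, so a cell problem of this kind has to be solved numerically (e.g. by FEM) at each quadrature point of the macroscopic solver.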
Hepatic Xbp1 Gene Deletion Promotes Endoplasmic Reticulum Stress-induced Liver Injury and Apoptosis*

Background: The unfolded protein response (UPR) either restores homeostasis or promotes apoptosis in response to endoplasmic reticulum (ER) stress. Results: ER stress causes prolonged UPR activation, severe liver injury, and enhanced apoptosis in mice lacking hepatic Xbp1. Conclusion: Hepatic Xbp1 is critical for hepatic recovery from ER stress. Significance: We implicate Xbp1 in mediating the pro-survival response of the UPR.

Endoplasmic reticulum (ER) stress activates the unfolded protein response (UPR), a highly conserved signaling cascade that functions to alleviate stress and promote cell survival. If, however, the cell is unable to adapt and restore homeostasis, then the UPR activates pathways that promote apoptotic cell death. The molecular mechanisms governing the critical transition from adaptation and survival to initiation of apoptosis remain poorly understood. We aim to determine the role of hepatic Xbp1, a key mediator of the UPR, in controlling the adaptive response to ER stress in the liver. Liver-specific Xbp1 knockout mice (Xbp1LKO) and Xbp1fl/fl control mice were subjected to varying levels and durations of pharmacologic ER stress. Xbp1LKO and Xbp1fl/fl mice showed robust and equal activation of the UPR acutely after induction of ER stress. By 24 h, Xbp1fl/fl controls showed complete resolution of UPR activation and no liver injury, indicating successful adaptation to the stress. Conversely, Xbp1LKO mice showed ongoing UPR activation associated with progressive liver injury, apoptosis, and, ultimately, fibrosis by day 7 after induction of ER stress. These data indicate that hepatic XBP1 controls the adaptive response of the UPR and is critical to restoring homeostasis in the liver in response to ER stress.

Endoplasmic reticulum (ER) stress is increasingly recognized as a salient feature of numerous chronic liver diseases, including hepatitis C virus infection, alcoholic liver disease, and nonalcoholic fatty liver disease (1-8). Under conditions of ER stress, normal ER function becomes compromised, leading to the accumulation of unfolded or misfolded proteins, triggering an evolutionarily conserved intracellular signal transduction pathway known as the unfolded protein response (UPR) (9,10). The UPR is comprised of three branches, initiated by three distinct transmembrane proteins. Activation of the inositol-requiring enzyme 1α (IRE1α) pathway, the most highly conserved arm of the UPR, induces splicing of the mRNA encoding X-box binding protein 1 (Xbp1) (11). Spliced (activated) XBP1 is a transcriptional activator that has many targets, including ER chaperones and genes involved in ER-associated degradation (12). Low levels of hepatic XBP1 expression have been associated with more advanced liver disease in patients with nonalcoholic fatty liver disease, but the role of XBP1 in the progression of liver disease remains unknown (3). The primary function of the UPR is to promote cell survival by activating genes and proteins that halt ER protein synthesis and promote protein degradation. If, however, the cell is unable to adapt to the stressor and restore homeostasis, then pathways leading to apoptosis are initiated (13-15). XBP1 has been implicated in the fine-tuning of the UPR and may mediate, in part, the process of recovery from ER stress (16,17).
However, a definitive role of XBP1 in mediating the hepatic response to ER stress has not been established. Although the molecular signals within the UPR that initiate the transition from a pro-survival to a pro-apoptotic response remain incompletely understood, several key mediators of ER stress-induced apoptosis have been identified. C/EBP homologous protein (CHOP) is a transcriptional activator of numerous pro-apoptotic genes, including death receptor 5 (DR5) (18-20). CHOP overexpression has been shown to sensitize cells to apoptosis, whereas CHOP depletion attenuates ER stress-induced apoptosis (21,22). Caspase-12 is considered an ER stress-specific mediator of apoptosis (23). Under conditions of ER stress, but not other types of cellular stress, caspase-12 is cleaved to active caspase-12. JNK is a mitogen-activated protein kinase that is activated in response to ER stress and promotes inflammation and apoptosis in the liver (24,25). BAK and BAX are Bcl2 family members that localize not only to the mitochondria but also to the ER, where they promote apoptosis in response to ER stress (26,27). The role of IRE1α signaling in ER stress-induced apoptosis is complex and incompletely understood. Prolongation of IRE1α signaling has been shown to promote cell survival (28,29). However, under certain pathophysiologic conditions, activation of IRE1α is thought to promote apoptosis (30-32). Although the precise role of IRE1α in ER stress-induced apoptosis has not been fully characterized, there is well established interplay between IRE1α and several key mediators of ER stress-related apoptosis. Activated IRE1α recruits tumor necrosis factor receptor-associated factor 2, leading to activation of JNK (25). BAK and BAX interact directly with IRE1α during ER stress, facilitating the activation of XBP1 and JNK (26). Activated IRE1α associates with caspase-12, allowing its proteolytic cleavage to active caspase-12 (23). The function of the major IRE1α target XBP1 in mediating ER stress-induced apoptosis remains unclear. In this study, we aim to determine the role of hepatic Xbp1 in mediating the hepatic response to ER stress. Specifically, we aim to identify Xbp1 as a critical mediator of the shift between the adaptive and pro-apoptotic phases of the UPR in the liver.

Experimental Procedures

Animals and Treatments
C57BL/6-Xbp1 fl/fl mice with loxP sites flanking exon 2 of the Xbp1 gene were provided by Dr. Laurie H. Glimcher (Cornell University, NY). Xbp1 fl/fl mice were bred with Albumin-Cre transgenic mice in a C57BL/6 background (The Jackson Laboratory) to generate mice bearing a hepatocyte-specific deletion of Xbp1 (Xbp1 LKO). Xbp1 LKO mice expressing Cre recombinase were confirmed to be liver-specific Xbp1 knockout mice by Western blot analysis for XBP1 and real-time PCR using primers targeting a deleted region of the transcript in exon 2. Xbp1 LKO and Xbp1 fl/fl littermate control mice were treated with a single intraperitoneal injection of tunicamycin (0.5 or 1.0 mg/kg) or vehicle (20% dimethyl sulfoxide/PBS) and sacrificed 6, 24, 72, or 168 h post-injection. In a subset of experiments, mice were pretreated with a single intraperitoneal injection of a JNK inhibitor, SP600125 (Sigma-Aldrich, St. Louis, MO), at a dose of 30 mg/kg in 50% dimethyl sulfoxide/PBS for 1 h prior to injection of tunicamycin (0.5 mg/kg). Mice treated with SP600125 + tunicamycin were sacrificed 3 or 7 days after treatment.
At the end of the treatment protocols, mice were sacrificed by CO2 inhalation followed by cardiac puncture. The collected blood was centrifuged immediately to collect the plasma. The livers were excised rapidly, flushed with ice-cold saline, and sectioned. An aliquot of liver was fixed in formalin, and the remaining liver was snap-frozen in liquid nitrogen. All animal protocols were approved by the Northwestern University Animal Care and Use Committee.

Blood and Tissue Analysis
H&E staining, TUNEL staining, and immunohistochemical staining for Ki67 and proliferating cell nuclear antigen were performed on liver tissue by the Northwestern University Mouse Histology and Phenotyping Laboratory. Apoptotic cell death and cellular proliferation were quantified as the average number of positive-staining nuclei in five random high-powered fields (×200) on TUNEL-, Ki67-, and proliferating cell nuclear antigen-stained liver sections. Liver samples were homogenized in Dulbecco's phosphate-buffered saline for hepatic lipid analysis (100 mg of liver tissue/1 ml). Triglyceride levels were measured in liver homogenate using an Infinity spectrophotometric assay according to the protocol of the manufacturer (Thermo Electron Corp., Melbourne, Australia). Plasma ALT was measured using a colorimetric assay according to the protocol of the manufacturer (Teco Diagnostics, Anaheim, CA).

Analysis of Gene Expression and Protein Expression
Total RNA from frozen liver samples was isolated using TRIzol reagent, and real-time quantitative PCR was performed as described previously (5,33). Total protein was isolated from frozen liver samples, and Western blotting was performed as described previously (5,33). Protein detection was performed using polyclonal rabbit antibodies to total and phosphorylated JNK, phosphorylated IRE1α, BAX, and Caspase-12 (Cell Signaling Technology, Danvers, MA). Bound antibody was detected using goat anti-rabbit polyclonal HRP antibody (Cell Signaling Technology) and developed using ECL Western blotting substrate (Cell Signaling Technology). Representative Western blots of pooled samples are shown in the figures.

Statistical Analysis
Data are presented as mean ± S.D. Comparisons between groups were performed using Student's t test.

Results

Deletion of Hepatic Xbp1 Sensitizes Mice to Severe ER Stress
We began by determining the effects of hepatic Xbp1 deletion on the response to severe ER stress in the liver. Tunicamycin is a well established ER stress-inducing agent in mice (34-36). On the basis of work published previously and our preliminary dose-response data, we found that wild-type C57BL/6 mice treated with tunicamycin at doses higher than 1 mg/kg show mortality beginning on day 4 after a single injection (37). Therefore, we began by treating Xbp1 LKO and Xbp1 fl/fl control mice with 1 mg/kg of tunicamycin i.p., the maximum sublethal dose for a wild-type mouse. All Xbp1 fl/fl mice survived and were grossly well appearing on day 7, when the experiment was terminated. Conversely, all Xbp1 LKO mice died between 5 and 6 days after induction of ER stress. To characterize the hepatic effects of severe ER stress in these mice, additional cohorts of Xbp1 LKO mice and Xbp1 fl/fl mice were treated with the same protocol (1 mg/kg tunicamycin i.p.) and sacrificed on day 4. H&E staining of liver sections of Xbp1 LKO mice showed necrosis and marked hepatocyte swelling consistent with severe hepatocyte injury (Fig. 1A).
The plasma alanine aminotransferase level, an indirect plasma marker of liver injury, was elevated markedly in Xbp1 LKO mice compared with Xbp1 fl/fl mice (Fig. 1B).

Xbp1 LKO Mice Appropriately Activate but Fail to Normally Deactivate the UPR Over Time
We next lowered the dose of tunicamycin to 0.5 mg/kg, a dose at which both Xbp1 LKO and Xbp1 fl/fl mice survived for the duration of the experiment. Mice were sacrificed either 6 h, 24 h, 3 days, or 7 days after treatment. 6 h after induction of ER stress, Xbp1 LKO and Xbp1 fl/fl mice demonstrated robust transcriptional activation of the UPR markers Chop, Grp78/Bip, and Atf4 (Fig. 2A). The degree of UPR activation was similar in Xbp1 LKO and Xbp1 fl/fl mice, indicating normal induction of the UPR in Xbp1 LKO mice. As expected, Xbp1 LKO mice showed a near absence of spliced Xbp1 expression in the liver and markedly attenuated transcription of Edem and Erdj4, direct downstream targets of XBP1 (Fig. 2A). At all time points beyond 6 h, the expression of Chop, Grp78/Bip, and Atf4 in Xbp1 fl/fl mice was at a baseline level similar to that of unstressed mice, indicating rapid resolution of ER stress. Conversely, Xbp1 LKO mice showed a persistent elevation in Chop, Grp78/Bip, and Atf4 24 h, 3 days, and 7 days after induction of ER stress, consistent with ongoing UPR activation (Fig. 2A). Phosphorylation of eIF2α, a downstream consequence of PERK (protein kinase RNA-like endoplasmic reticulum kinase) activation, was greatly induced in Xbp1 fl/fl mice at 6 h and showed a delayed course of deactivation relative to other UPR markers. The levels of phosphorylated eIF2α began to attenuate on day 7 after induction of ER stress (Fig. 2B). Unlike other UPR markers, eIF2α was constitutively active in Xbp1 LKO mice. Deletion of Xbp1 is associated with hyperactivation of IRE1α (38). As expected, we found that, in the unstressed state, Xbp1 LKO mice had increased phosphorylation (activation) of IRE1α relative to Xbp1 fl/fl mice (Fig. 2B). Xbp1 LKO mice and Xbp1 fl/fl mice showed similar expression of phosphorylated IRE1α 6 h after induction of ER stress. By day 3, Xbp1 fl/fl mice showed resolution of IRE1α activation, whereas Xbp1 LKO mice showed profoundly increased phosphorylation of IRE1α, which persisted on day 7 after induction of ER stress (Fig. 2B). Phosphorylation (activation) of JNK is a well established consequence of IRE1α activation. Consistent with the observed hyperactivation of IRE1α in Xbp1 LKO mice, we found significant hyperphosphorylation of JNK on days 3 and 7 after induction of ER stress (Fig. 2B).

Xbp1 LKO Mice Show Enhanced ER Stress-induced Liver Injury
We next assessed whether ongoing UPR activation in Xbp1 LKO mice was associated with enhanced liver injury. Xbp1 fl/fl mice showed no histologic evidence of liver injury at any time point after a single dose of tunicamycin (0.5 mg/kg i.p.) (Fig. 3A). Conversely, Xbp1 LKO mice showed progressive hepatic injury from days 3 to 7 after induction of ER stress. Specifically, there was increased infiltration of inflammatory cells and early hepatocyte swelling evident on day 3. By day 7, there was markedly increased inflammatory infiltrate, worsening architectural distortion, and hepatocyte swelling, consistent with progressive liver damage. Plasma ALT levels were also markedly higher in Xbp1 LKO mice compared with Xbp1 fl/fl mice on days 3 and 7, consistent with enhanced liver injury (Fig. 3B).
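The group comparisons reported here (e.g., plasma ALT in Xbp1 LKO versus Xbp1 fl/fl mice) follow the approach described under Statistical Analysis: means ± S.D. compared by Student's t test. A minimal sketch of that computation, using made-up ALT values rather than data from this study:

```python
import numpy as np
from scipy import stats

# Hypothetical plasma ALT values (units/liter) for two groups of n = 5 mice;
# these numbers are illustrative only, not measurements from the study.
alt_flfl = np.array([42.0, 55.0, 48.0, 61.0, 50.0])       # Xbp1 fl/fl controls
alt_lko = np.array([310.0, 420.0, 275.0, 390.0, 455.0])   # Xbp1 LKO

for name, vals in [("Xbp1 fl/fl", alt_flfl), ("Xbp1 LKO", alt_lko)]:
    print(f"{name}: mean = {vals.mean():.1f} +/- {vals.std(ddof=1):.1f} (S.D.)")

# Two-sample Student's t test (classical equal-variance form)
t, p = stats.ttest_ind(alt_lko, alt_flfl)
print(f"t = {t:.2f}, p = {p:.4g}")
```

With scipy, `stats.ttest_ind` implements the classical two-sample test; Welch's variant (`equal_var=False`) would be the usual alternative when group variances differ.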
Hepatic triglyceride accumulation is a well established consequence of chronic or severe ER stress (39). Therefore, one might hypothesize that tunicamycin-treated Xbp1 LKO mice would show enhanced hepatic triglyceride accumulation as a consequence of unrelieved ER stress. However, XBP1 has been shown to transcriptionally activate hepatic triglyceride synthesis genes, raising the possibility that Xbp1 LKO mice may be protected from ER stress-induced hepatic triglyceride accumulation. We found no overt hepatic steatosis present histologically in either Xbp1 LKO or Xbp1 fl/fl mice at the dose and duration of ER stress used in these experiments (Fig. 3A). We did, however, find that, 6 h after induction of ER stress, Xbp1 fl/fl mice showed a modest increase in hepatic triglyceride content from which Xbp1 LKO mice were protected (Fig. 3C). By day 3, Xbp1 fl/fl mice showed normalization of the hepatic triglyceride level, whereas Xbp1 LKO mice showed significantly increased hepatic triglyceride content (Fig. 3C).

Xbp1 LKO Mice Show Enhanced ER Stress-induced Apoptosis
CHOP and JNK are critical mediators of ER stress-induced apoptosis (21,22). Given the finding of persistent CHOP and JNK activation in Xbp1 LKO mice, we hypothesized that loss of hepatic Xbp1 leads to enhanced ER stress-induced activation of pro-apoptotic pathways and apoptotic cell death. Although Xbp1 fl/fl mice showed scant apoptotic cells in response to ER stress, we found a significant amount of apoptosis in the livers of Xbp1 LKO mice 3 and 7 days after induction of ER stress (Fig. 4, A and B). Consistent with persistent CHOP activation and enhanced apoptosis, Xbp1 LKO mice showed markedly increased hepatic expression of Dr5, a downstream target of CHOP (Fig. 4C). The hepatic levels of the pro-apoptotic protein BAX were increased greatly in Xbp1 LKO mice compared with Xbp1 fl/fl mice 3 and 7 days after induction of ER stress (Fig. 4D). Proteolytic cleavage of caspase-12 is considered a highly specific indicator of ER stress-induced apoptosis (23). Xbp1 LKO mice showed enhanced cleavage of caspase-12 in response to ER stress (Fig. 4D). JNK activation has been shown to promote ER stress-induced apoptosis (24,25). We considered whether hyperactivation of IRE1α and the resultant hyperactivation of JNK may mediate the development of apoptosis and injury in Xbp1 LKO mice after prolonged ER stress. We therefore pretreated cohorts of Xbp1 LKO mice and Xbp1 fl/fl mice with a JNK inhibitor, SP600125, followed by induction of ER stress for 3 days. As expected, pretreatment with SP600125 resulted in attenuated activation of JNK in Xbp1 LKO mice in response to ER stress (Fig. 5A). JNK inhibition prevented ER stress-induced apoptosis in Xbp1 LKO mice, as evidenced by a dramatic reduction in the number of positive TUNEL-stained nuclei on liver sections (Fig. 5, B and C). Xbp1 LKO mice treated with SP600125 prior to induction of ER stress showed a similar degree of liver injury histologically and comparably elevated plasma ALT levels compared with Xbp1 LKO mice treated with tunicamycin alone (Fig. 5, B and D). Inhibition of JNK did not attenuate the activation of the UPR markers Chop and Grp78/Bip in Xbp1 LKO mice subjected to ER stress (Fig. 5E).
Furthermore, treatment of Xbp1 LKO mice with SP600125 + tunicamycin increased IRE1α activation to an even greater degree than tunicamycin alone, indicating that the observed protection from ER stress-induced apoptosis among JNK-inhibited Xbp1 LKO mice is not due to attenuated UPR activation (Fig. 5A). Treatment of Xbp1 fl/fl mice with SP600125 + tunicamycin also enhanced phosphorylation of IRE1α relative to Xbp1 fl/fl mice treated with tunicamycin alone, indicating a compensatory hyperactivation of IRE1α in the absence of active JNK. Consistent with increased activation of IRE1α, Xbp1 fl/fl mice treated with SP600125 + tunicamycin also showed increased hepatic expression of spliced Xbp1 in response to ER stress (Fig. 5E).

Xbp1 LKO Mice Show Enhanced Hepatocyte Proliferation
Severe liver damage triggers a compensatory response in which hepatocyte proliferation occurs (40). We next assessed whether a proliferative response to the observed hepatic injury is present in Xbp1 LKO mice. Hepatocyte proliferation, as assessed by proliferating cell nuclear antigen and Ki67 staining, was unchanged in Xbp1 LKO and Xbp1 fl/fl mice at baseline and 6 h after induction of ER stress (Fig. 6A). However, by day 3, hepatocyte proliferation was increased markedly among Xbp1 LKO mice compared with Xbp1 fl/fl mice (Fig. 6A). Induction of hepatic tumor necrosis factor α (Tnfα) is a critical mediator of the proliferative response after severe liver injury (40-42). The expression of hepatic Tnfα was increased to a greater degree in Xbp1 LKO mice compared with Xbp1 fl/fl mice 3 days after induction of ER stress (Fig. 6B).

Deletion of Hepatic Xbp1 Promotes Hepatic Fibrosis
The ultimate consequence of severe liver injury and attempted repair is the development of hepatic fibrosis. 7 days after induction of ER stress, Xbp1 LKO mice demonstrated pericellular collagen deposition on trichrome staining (Fig. 7A). Consistent with the finding of enhanced fibrosis, Xbp1 LKO mice showed a marked elevation in hepatic expression of the fibrosis markers α-Sma, Timp-1, and Collagen I in response to ER stress (Fig. 7, B-D). We have shown that inhibiting JNK attenuates ER stress-induced apoptosis in Xbp1 LKO mice. We next assessed the effect of JNK inhibition on the development of hepatic fibrosis in Xbp1 LKO mice subjected to ER stress. On day 7 after induction of ER stress, we found that Xbp1 LKO mice pretreated with SP600125 showed significantly reduced hepatic fibrosis compared with Xbp1 LKO mice treated with tunicamycin alone (Fig. 7A). The expression of fibrosis markers was also attenuated in JNK-inhibited Xbp1 LKO mice.

Discussion

The unfolded protein response is a highly complex and tightly orchestrated signaling cascade that can function either as a protective or injurious response depending on the severity and duration of ER stress. The precise molecular mechanisms that control the shift from the adaptive to apoptotic phase of the UPR have remained elusive. In this work, we demonstrate that loss of hepatic Xbp1 shifts the pattern of UPR activation in the liver to preferentially activate pro-apoptotic signals, sensitizing mice to ER stress-induced liver injury and apoptosis. These data firmly establish that hepatic Xbp1 promotes the pro-survival response of the UPR in the liver. Moreover, these data strongly implicate Xbp1 in mediating the critical transition from cell survival to apoptotic cell death in response to ER stress. It has been reported previously by Lee et al.
(38) that mice bearing a liver-specific deletion of Xbp1 show normal induction of the UPR, as evidenced by a similar induction of Grp78/Bip and Chop and no evidence of hepatic injury 8 h after a standard dose of tunicamycin. We demonstrate findings consistent with Lee et al. (38) in that Xbp1 LKO mice show normal induction of Grp78/Bip, Chop, and Atf4 6 h after treatment with tunicamycin. However, when followed for up to 7 days after induction of ER stress, we show that Xbp1 LKO mice exhibit persistent UPR activation, whereas Xbp1 fl/fl mice show rapid and complete resolution of UPR activation. Furthermore, we demonstrate that ongoing UPR activation in Xbp1 LKO mice is associated with progressive liver injury, apoptosis, and, ultimately, fibrosis. The role of the IRE1α branch in mediating cell fate in response to ER stress is controversial. Activation of IRE1α has been shown to generate either pro- or anti-apoptotic effects depending on the experimental conditions (28-32, 43). Among the downstream targets of activated IRE1α are XBP1 and JNK. Activated JNK is a well established mediator of the pro-apoptotic response to a variety of cellular stressors, including ER stress (24,25). We hypothesized that the relative activation of XBP1 versus JNK in response to IRE1α activation may underlie the divergent effects of IRE1α activation with respect to apoptosis. This work provides substantial support for this hypothesis, in which deletion of hepatic Xbp1 leads to marked hyperactivation of IRE1α and JNK associated with enhanced ER stress-induced apoptosis. Furthermore, we demonstrated that inhibiting JNK attenuates apoptosis in Xbp1 LKO mice subjected to prolonged ER stress. These findings strongly suggest that, in the absence of hepatic Xbp1, preferential activation of JNK by activated IRE1α may be a dominant driver of enhanced apoptosis. ER stress is known to induce hepatic triglyceride accumulation. As expected, we found that Xbp1 fl/fl mice showed a transient increase in hepatic triglyceride content when challenged with ER stress. Consistent with the known role of XBP1 in the transcriptional regulation of hepatic lipogenesis, Xbp1 LKO mice showed reduced hepatic triglyceride accumulation 6 h after induction of ER stress. However, we found that Xbp1 LKO mice exhibit enhanced hepatic triglyceride accumulation in response to prolonged, severe ER stress. Our data strongly suggest that, in the absence of hepatic Xbp1, the pro-steatotic effects of prolonged, unrelieved ER stress outweigh the anti-steatotic effects of suppressed lipogenesis. Severe liver injury triggers a compensatory repair process characterized by hepatocyte proliferation. We find that, in response to prolonged ER stress, Xbp1 LKO mice show enhanced hepatocyte proliferation associated with induction of Tnfα, a critical mediator of injury-related hepatocyte proliferation. Therefore, we may conclude that Xbp1 is not essential for ER stress-induced hepatocyte proliferation. However, the compensatory response is clearly inadequate to permit complete recovery from the injury and prevent the development of hepatic fibrosis. Therefore, we cannot exclude the possibility that loss of hepatic Xbp1 impedes ER stress-induced hepatocyte proliferation to some degree. Activation of the UPR is a feature of many common chronic liver diseases. Dysregulation of the hepatic IRE1α-XBP1 axis is characteristic of nonalcoholic fatty liver disease, but the precise role of XBP1 in the progression of liver disease has been poorly understood (3).
On the basis of our findings, we speculate that failure to activate hepatic XBP1 in response to ER stress may promote liver disease progression. Furthermore, enhancing hepatic XBP1 levels may serve as a therapeutic strategy to prevent the progression of chronic liver disease.
Combined cryopreservation of canine ejaculates collected at a one-hour interval increases semen doses for artificial insemination without negative effects on post-thaw sperm characteristics

Abstract

A limiting factor in canine artificial insemination (AI) is the low number of insemination doses obtained per ejaculate. In this study, semen was collected from dogs (n = 28) either once and frozen directly after collection, or the same dogs were submitted to a dual semen collection with a 1-hr interval and the two ejaculates were combined for cryopreservation. We hypothesized that combining two ejaculates increases semen doses per cryopreservation process without negative effects on semen characteristics. Total sperm count was lower in semen from a single semen collection in comparison with the combination of the first and second ejaculate of a dual semen collection (p < .001). The percentage of motile and membrane-intact spermatozoa determined by computer-assisted sperm analysis (CASA) in raw semen did not differ between single and combined dual ejaculates and was reduced (p < .001) by cryopreservation to the same extent in single (motility 73.7 ± 1.8%, membrane integrity 65.6 ± 2.2%) and combined dual ejaculates (motility 72.7 ± 2.3%, membrane integrity 64.6 ± 2.5%). The percentage of spermatozoa with morphological defects increased after cryopreservation (p < .001) but was similar in single and combined dual ejaculates. The CASA sperm velocity parameters decreased with cryopreservation (p < .001) but did not differ between single and combined dual ejaculates. The number of insemination doses increased from 2.7 ± 0.4 for single to 4.7 ± 0.8 for combined dual ejaculates (p < .01), based on 100 million motile spermatozoa per frozen-thawed semen dose. In conclusion, combining two ejaculates collected at a short interval for one cryopreservation process increases the number of AI doses without compromising semen quality.

| INTRODUCTION

Although the dog is the first mammalian species where a pregnancy from artificial insemination (AI) has been reported (Spallanzani, 1785), only during the last decades has AI with cooled-shipped or cryopreserved semen been increasingly applied in this species (Farstad, 2000). A limiting factor in canine AI is the low number of insemination doses obtained per single ejaculate. This is even more true for cryopreserved than for cooled-shipped or fresh semen (Nöthling & Shuttleworth, 2005). While one ejaculate may yield between 300 and 400 cryopreserved semen doses in cattle (Humblot et al., 1993) and 10 to 15 doses in horses, usually only one to three cryopreserved semen doses per ejaculate can be obtained in dogs (Farstad, 2000). With the laboratory effort associated with freezing one ejaculate increasing only marginally with the number of straws or AI doses per freezing process, semen cryopreservation in dogs is thus far less economical than in cattle and horses. In order to obtain the number of AI doses desired by dog breeders, male dogs often have to travel repeatedly to the veterinary centre where semen collection and freezing are performed. Both for economic and animal welfare reasons (Herbel et al., 2020), it would be desirable to collect more than one ejaculate within a shorter time window before freezing is done. On the other hand, semen quality decreases during storage at room temperature and semen should thus be frozen within a reasonable time after collection.
When two ejaculates were collected from dogs at a 1-hr interval, there was no difference in sperm motility and percentage of morphologically normal spermatozoa (England, 1999), but both ejaculates were assessed shortly after collection and no attempt was made at liquid storage or cryopreservation. In the present study, semen was collected from dogs either once and frozen directly after collection, or the same dogs were submitted to a dual semen collection at a 1-hr interval and the two ejaculates were frozen directly after the second ejaculate had been obtained. We hypothesized that combining two ejaculates collected 1 hr apart for cryopreservation is without negative effects on post-thaw semen parameters but results in more insemination doses per dog in one cryopreservation process.

| MATERIALS AND METHODS

The study was approved by the Ethics and Animal Welfare Committee of Vetmeduni Vienna (study number ETK-014/01/2020). Informed consent was obtained from all dog owners before their animals participated in the study.

| Animals

Dogs to be included in the study were recruited from breeders who are regular clients of Vetmeduni Vienna or members of a local kennel club. Out of an initial number of 46 dogs, 11 were excluded for azoospermia (n = 3), general health problems (n = 1), pre-pubertal age (n = 1) or because semen collection was not possible (n = 6).

| Experimental design

All dogs were submitted for two semen collection sessions 1 week apart. On one occasion, only one ejaculate was collected, processed immediately after collection and the sperm-rich fraction cryopreserved within 1 hr (protocol 'single', see Figure 1). On the other occasion, two ejaculates were collected at an approximate 1-hr interval (57 ± 6 min, range 45-73 min), the two sperm-rich fractions were processed for cryopreservation and combined before filling the processed semen into straws (protocol 'dual'). Ejaculates were thawed after storage for at least 24 hr in liquid nitrogen (−196℃). Semen analysis (see 2.3) was thus performed in raw semen and in frozen-thawed semen.

| Semen collection and semen analysis

Fractionated semen collection was performed by digital manipulation as described previously (Seager et al., 1975). Swabs with vaginal secretions from an oestrous female dog were used to stimulate mating behaviour and, in addition, a non-oestrous female Beagle was brought into the examination room as an additional stimulus if needed. Immediately after collection, the sperm-rich fraction of all collected ejaculates was analysed for volume, pH and sperm concentration as described previously (Koderle et al., 2009). In both raw semen and semen processed for cryopreservation, the sperm concentration was determined by NucleoCounter (ChemoMetec) and the total sperm count was calculated. Semen motility was estimated in raw semen at 40× magnification under a phase-contrast microscope. In processed semen (i.e. after addition of Uppsala 2 extender, Figure 1) and in frozen-thawed semen, motility was analysed after dilution with a TRIS-fructose-citric acid buffer. After storage for at least 24 hr, the straws were thawed at 37℃ for 20 s in a water bath. After thawing, the same characteristics as described for fresh semen were analysed. The number of straws representing one AI dose was calculated taking into account post-thaw progressive motility and sperm concentration, with at least 100 × 10⁶ motile spermatozoa per AI dose. The number of 0.5 ml straws per AI dose thus ranged from two to four.
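The dose calculation described above reduces to simple arithmetic on post-thaw concentration, straw volume and progressive motility. A minimal sketch follows; the function name and input values are illustrative assumptions, not data from this study:

```python
import math

def straws_per_dose(conc_million_per_ml, progressive_motility,
                    straw_volume_ml=0.5, dose_motile_million=100.0):
    """Smallest number of straws whose combined content reaches at least
    `dose_motile_million` million progressively motile spermatozoa."""
    motile_per_straw = conc_million_per_ml * straw_volume_ml * progressive_motility
    return max(1, math.ceil(dose_motile_million / motile_per_straw))

# Hypothetical post-thaw values: 150 x 10^6 spermatozoa/ml, 45% progressive motility
print(straws_per_dose(150.0, 0.45))  # -> 3 straws for one AI dose
```

With the study's 0.5 ml straws and the 100 × 10⁶ motile-sperm threshold, typical post-thaw concentrations and motilities yield the reported range of two to four straws per dose.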
| Statistical analysis

Statistical comparisons were made with the SPSS statistics software (version 26; IBM-SPSS). Because not all data were normally distributed (Kolmogorov-Smirnov test), non-parametric tests were used throughout. Taking into account that the same animals were studied repeatedly, differences were analysed by Friedman's test for overall significance, followed by pairwise Wilcoxon's test comparisons in case of an overall significant effect. A p-value < .05 was considered significant. Results are presented as scatterplots with individual values and mean ± standard error (SEM) or, for sperm velocity parameters, as mean ± SEM in table format.

| RESULTS

Dual semen collection was successful in 28 out of 35 dogs, and seven dogs refused to ejaculate a second time (Beagle n = 2, Australian Shepherd, Miniature Schnauzer, Terrier Brasileiro, White Swiss Shepherd dog and Rhodesian Ridgeback n = 1 each).

FIGURE 1 Experimental design for semen collection, processing and analysis

All further results are presented only for the 28 dogs where both the single and dual semen collections were successfully performed. Neither ejaculate volume nor total sperm count differed between the sperm-rich fraction of a single semen collection and the sperm-rich fractions of the individual ejaculates of a dual semen collection. As predetermined by the experimental design, volume (single 1.3 ± 0.3, dual 5.3 ± 0.4 ml, p < .001) and total sperm count (single 0.37 ± 0.05, dual 0.63 ± 0.10 × 10⁹ spermatozoa, p < .001) in the combined sperm-rich fraction after dual semen collection were higher than when only a single ejaculate had been collected (Figure 2a,b). The percentage of spermatozoa with morphological defects was close to identical before cryopreservation in semen from a single and a dual semen collection (40.8 ± 4.1 and 40.4 ± 3.8%) and increased (p < .001) in an identical way in frozen-thawed semen from a single and dual semen collection (58.7 ± 3.4 and 60.0 ± 3.5%; Figure 4a). The sperm velocity characteristics VCL, VAP, VSL, ALH and STR decreased with semen cryopreservation (all p < .001) but, both for processed semen before and after cryopreservation, were close to identical in semen from a single and a dual combined semen collection (Table 1). The number of insemination doses with cryopreserved semen increased when dual semen collection at a 1-hr interval was performed (4.7 ± 0.8) compared to freezing a single ejaculate (2.7 ± 0.4, p < .01; Figure 5).

| DISCUSSION

The results from our study suggest an easy procedure to increase the number of cryopreserved semen doses obtained with one semen freezing procedure from the same male dog. At the same time, no detrimental effects on semen characteristics were determined. The protocol is, therefore, interesting for semen cryopreservation in dogs, especially when they have a long journey from their home to the clinic or laboratory where semen collection and freezing are performed. A drawback in semen cryopreservation for the present study is the failure of a second semen collection, or of semen collection at all, in several otherwise healthy and fertile dogs. Semen collection twice at a short interval is without problems, for example, in bulls (Seidel & Foote, 1969), stallions (Pickett et al., 1975) and also in men (Check & Chase, 1985). In contrast, individual alpaca stallions from which semen was collected three times per day ejaculated only seminal plasma during repeated semen collections (Bravo et al., 1997).
It is, however, well possible that training of dogs for the dual semen collection protocol, or extension of the interval between collections, may help to perform a dual semen collection protocol for semen cryopreservation in almost all males. This emphasizes that semen collection in dogs requires systematic preparation of the animals. The experience of the personnel performing semen collections and an optimized environment should also not be underestimated. Dog owners planning to produce frozen semen from their animals should understand that this often cannot be achieved with just a single visit to a veterinary practice. There was no breed or size prevalent among dogs where a dual semen collection was not possible. Optimisation of the semen collection schedule with the aim to improve the reproductive capacity of a semen donor has been addressed in different species (e.g. men: Check & Chase, 1985; dog: England, 1999; bull: Seidel & Foote, 1969, Everett et al., 1978; stallion: Pickett et al., 1975; alpaca: Bravo et al., 1997). With regard to ejaculate volume and pre-freeze semen characteristics, our results are in agreement with data published by England (1999), who demonstrated only a slightly lower total sperm count in the second ejaculate and no difference in the sperm motility and percentage of morphologically normal spermatozoa in semen collected from dogs twice at a 1-hr interval. Similar sperm characteristics in two ejaculates collected from male dogs 30 to 60 min apart were reported (Gunay et al., 2003; Yonezawa et al., 1991), but in these studies total sperm count in the second ejaculate was reduced to 20% to 40% of the first ejaculate.

FIGURE 2 (a) Volume and (b) total sperm count in the sperm-rich fraction of a single ejaculate, two ejaculates collected at a 1-hr interval (dual #1 and dual #2) and the combination of the two ejaculates collected at a 1-hr interval (dual combined) in dogs (n = 28); values are means ± SEM; results of statistical analysis are indicated in the figure

In contrast to our data, the increase in AI doses would thus have been only of limited economic interest. The differences between studies with regard to the total sperm count of dual semen collections may be due to sexual activity of the dogs before the respective studies, semen collection techniques or breed and size. The ability to produce a second ejaculate at a short interval after a first semen collection may also depend on the species, but apparently this procedure often results in a decreased sperm output in the second ejaculate. When semen was collected from cattle bulls twice at 40 min intervals, sperm concentration decreased by 40% and total sperm count by 35%. When both ejaculates were frozen, post-thaw semen motility in the first and second ejaculate were, however, close to identical (Seidel & Foote, 1969), which is similar to our findings in dogs. Considerable reductions in sperm count were also reported from stallions undergoing two semen collections per day (Pickett et al., 1975) and from alpacas submitted to three daily semen collections (Bravo et al., 1997). In contrast to these reports, in the present investigation the dual semen collection was only performed once, and it cannot be excluded that detrimental effects on total sperm count and semen characteristics would occur if the dual semen collection protocol were performed repeatedly.
Movement of spermatozoa through the epididymal tract of the caput and corpus epididymidis depends on continuous peristaltic contractions of its smooth muscular wall and is not hastened by ejaculation. It requires approximately 4.1 days in a stallion. In contrast, the smooth muscular wall of the cauda epididymidis is usually inactive, and the time spermatozoa spend in this part of the epididymal tract is influenced by ejaculation and ranges from 2 to 3 days in a sexually active stallion to 10 days in a sexually rested stallion (Amann, 2011). With regard to overall changes in sperm characteristics caused by cryopreservation, the results of our study are largely in agreement with previous reports. The percentages of motile and membrane-intact spermatozoa after cryopreservation obtained in our present study are in agreement with some previous reports (Hay et al., 1997; Pena & Linde-Forsberg, 2000a,b) but higher than in some more recent studies (Lucio et al., 2016; Nöthling & Shuttleworth, 2005; Pezo et al., 2017).

TABLE 1 (excerpt) Curvilinear velocity (VCL; µm/s) for single and combined dual collections: 160.0 ± 5.9, 158.9 ± 4.4, 121.2 ± 3.6, 120.9 ± 3.6

The studies employed different one-step freezing procedures. Some of these differences among studies may be due to non-optimal freezing curves in experiments where computer-controlled freezing equipment was not available. Besides the percentage of motile spermatozoa, the velocity characteristics determined by CASA, both pre-freeze and after freezing-thawing, in the present study were very similar to previous data from our laboratory (Schäfer-Somi et al., 2006) and other groups (Pena & Linde-Forsberg, 2000a,b). Although absolute values for VCL, VAP and VSL were slightly higher and ALH slightly lower than in another recent study (Lucio et al., 2016), the magnitude of changes induced by freezing and thawing was close to identical. The dual semen collection protocol of the present study required that processed semen from the initially collected ejaculate had to be equilibrated at 5℃ for 2 hr instead of only 1 hr to allow for pooling with the second ejaculate. This increase in the equilibration time was without detrimental effects on semen characteristics, which is in agreement with previous studies (Belala et al., 2016; Okano et al., 2004). Beyond that, further extension of the equilibration period during the cryopreservation process of canine semen may even improve characteristics of frozen-thawed semen and, depending on the extender, is optimal after 6 hr of equilibration (Belala et al., 2016). This suggests that further improvement of a dual semen collection protocol in dogs for cryopreservation is possible by adjusting the interval between semen collections.

| Conclusions

Studies on the cryopreservation of semen are usually aimed at optimizing the processes of cooling, freezing and thawing in order to minimize sperm damage and maximize the number of insemination doses obtained from an ejaculate. Combining two ejaculates collected at a short interval for one cryopreservation process increases the number of AI doses to be collected without compromising semen quality. Although the ultimate test of success for semen cryopreservation techniques is the evaluation of whelping rate and litter size, this study suggests a practical way to improve the efficiency and profitability of semen freezing in dogs.
ACKNOWLEDGEMENTS

The authors are grateful to Silvia Kluger and Bettina Schreiner for expert technical assistance with semen analysis and cryopreservation and Lisa Kornhoffer for help with the animals.

CONFLICT OF INTEREST

None of the authors has any conflict of interest to declare.
The impact of lifestyle interventions on therapy-associated side effects in postmenopausal breast cancer survivors: systematic reviews and meta-analysis

Background: Medically supervised exercise (MSE) is advisable for the prevention and management of treatment-related side effects among breast cancer survivors. Aerobic and resistance exercise, either separately or in combination, have been shown to improve physical functioning and manage some symptoms in breast cancer patients. However, the level of evidence on the effects of lifestyle interventions on therapy-related adverse events and the required dose responses of exercise has not yet been systematically reviewed. This review was conducted to assess the efficacy of medically supervised exercise (MSE) coupled with diet in preventing/managing aromatase inhibitor-induced adverse events and improving range of motion (ROM) and health-related quality of life (HRQOL) in postmenopausal breast cancer patients following treatment.

Methods: Two authors independently extracted data from published clinical trials following PRISMA guidelines. We searched the Cochrane Central Register of Controlled Trials, PubMed, MEDLINE, EMBASE, as well as clinical practice guidelines. We included only randomized controlled trials that examined exercise interventions coupled with diet interventions in postmenopausal women with breast cancer. Health-related quality of life (HRQOL) and range of motion were assessed as the main outcomes.

Results: Random effects meta-analysis was conducted for pooling of the effect size. The age of patients varied from 50 to 60 years. The results illustrate that the mean difference (MD) in improving ROM in the MSE group versus no supervised exercises was 1.35% (95% CI: 0.63 to 2.07%, P = 0.0002; heterogeneity: Tau² = 0.71; Chi² = 112.14, df = 5 (P < 0.00001); I² = 96%). A summary of the data shows that supervised exercises significantly improved ROM and HRQOL in postmenopausal BCS on endocrine therapy compared to no supervised exercises: 3.02 (95% CI: 2.59 to 3.45, P < 0.00001). These outcomes show that lifestyle interventions (MSE + diet) have positive effects on AI-associated adverse events and likely improve ROM and HRQOL in postmenopausal BC patients.

Conclusion: The evidence was based on a body of research with moderate study quality. Moreover, further studies are recommended to assess the effect of lifestyle interventions on markers of inflammation as predictors of treatment non-response and associated comorbidities.

BACKGROUND

Breast cancer (BC) is a major public health challenge globally, with the greatest ramifications in low- and middle-income countries [1]. GLOBOCAN 2012 data indicate that BC accounted for 25% of cancers diagnosed in women worldwide (an estimated 1.7 million cases) and 521,900 related deaths [2]. With these devastating statistics, BC remains an ongoing clinical challenge. BC treatment is multidisciplinary, including surgery, radiation therapy, endocrine therapy and chemotherapy [1]. The two widely used endocrine therapies are aromatase inhibitors (AIs) and tamoxifen, depending on anatomical pathological classification and menopausal status.
AIs are the more effective standard of care for long-term estrogen suppression and reduction of the risk of recurrence in postmenopausal women as compared to premenopausal women [3]. Adherence to endocrine therapy among BC patients ranges from 79.6% at 1 year to 68.3% at 5 years. Non-adherence to endocrine therapy among BC patients is well acknowledged and associated with both morbidity and mortality [4]. However, estrogen deprivation therapy is accompanied by various adverse events, which are associated with late complications and poor prognosis in BCS following a number of treatment strategies [5]. A meta-analysis conducted by Dent et al. (2011) revealed that AIs increase disease-free survival (DFS) and overall survival (OS) when sequentially administered for 2-3 years following 2-3 years of tamoxifen therapy [6]. Similarly, their use after 5 years of tamoxifen treatment also produces an increase in DFS. As for OS, a clinically and statistically significant difference may be obtained only when AIs are administered after 2-3 years of tamoxifen treatment. In comparison with tamoxifen, AIs reduce the incidence of thromboembolic and gynaecologic side effects but increase bone mineral density (BMD) adverse events [6]. Although adjuvant endocrine therapy-related side effects are documented to be associated with both BC recurrence and cardiovascular disease (CVD) risk [7], medically supervised exercise (MSE) programs have been suggested to be beneficial among postmenopausal BC patients on different BC treatment strategies [8]. While multiple adjuvant therapies are used to manage endocrine-related adverse events, current treatment is focused on interventions in patients who have already developed symptoms (tertiary prevention) rather than primary prevention. Diverse risk factors have an impact on health-related quality of life (HRQOL) due to significant functional, psychosocial and metabolic disturbances. Obesity, as one of these risk factors, may require interventions (healthy diet intake and MSE programs) as one of the treatment strategies to improve HRQOL in postmenopausal BCS [9,10]. These exercises aim to restore upper limb function, range of motion (ROM) and muscle strength, and reduce comorbidities associated with BC surgery, radiation therapy and AIs [9]. Current clinical practice guidelines for exercise, recommended by the American Cancer Society (ACS) [9] and the American College of Sports Medicine (ACSM) [10], suggest that aerobic exercise of 150 minutes/week at moderate intensity or 75 minutes/week at vigorous intensity, or an equivalent combination, should be initiated for each BCS upon physician fitness examination. For muscle strength, at least moderate-intensity resistance exercises (2 days/week) should be performed for each major muscle group. A review revealed that exercise may be beneficial in reducing treatment-related adverse outcomes among cancer patients [9]. Moreover, cancer type-specific exercises, clinical heterogeneity, lack of blinding in many trials, frequency and exercise mode, unknown level of evidence, and timing of exercise regimens have not yet been evaluated for evidence-based clinical recommendations. Understanding the role of exercise in side-effect prevention in postmenopausal BC patients will assist in developing more effective therapy guidelines. The relationship between lifestyle risk factors and BC recurrence has not been specifically studied in postmenopausal BCS using adjuvant endocrine therapy such as AIs.
BMI > 30 kg/m² is a consistent risk factor associated with various side effects in this population. Lifestyle modifications which aim at preventing disease recurrence, typically defined as a relapse event at a local, regional, or distal site, have consisted of a healthy diet, nutritional supplements, regular exercise, or some combination of these components [10]. Therefore, this systematic review and meta-analysis were conducted to assess the efficacy of currently recommended lifestyle interventions (MSE + diet) in preventing AI-induced adverse events in postmenopausal BCS subjected to different BC treatment strategies. Ethics proclamations No ethics clearance is required to conduct a systematic review and meta-analysis. The primary studies included in this review were approved by the respective national ethics committees. The first author performed this verification. Search strategy and selection criteria This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines [11]. We conducted a systematic search of the Cochrane Central Register of Controlled Trials, PubMed, MEDLINE, and EMBASE. Inclusion criteria Interventions: RCT studies involving postmenopausal BC patients following different BC management strategies with a detailed MSE program (frequency, duration, ROM, types of exercises) coupled with diet or other BC management strategies including endocrine therapies (AIs, tamoxifen (TAM)). Studies in which the mean difference (MD), relative risk, risk ratio, odds ratios and HRQOL tools used to measure the level of disability were extracted and compared between intervention and control groups, without restriction on year or setting. Exclusion criteria Studies including premenopausal women, men, pharmacological interventions, and traditional medicines. Individual studies, non-randomized studies, case controls, duplicated studies, narrative reviews, grey literature, studies with no defined exercise interventions, studies without control groups, case studies, case reports, cross-sectional and qualitative studies. Screening and data abstraction Two medical reviewers (JPM, JM) independently selected the study abstracts and full articles, and risk of bias was assessed using standard tools. A third reviewer was consulted if there were disagreements, and such disagreements were resolved by consensus. Clinical heterogeneity was assessed by comparing the study designs, settings, sample sizes, countries of publication, and methods used for diagnosis and measurement of outcomes. Random-effects meta-analysis was conducted for pooling of the effect size. Statistical heterogeneity was evaluated using the chi-square test of homogeneity, and I² statistical tests were conducted on quantitative data. Subgroup analysis was conducted for the different tools used to measure the side effects associated with both postmenopausal status and AIs. Articles were classified as potentially eligible if the titles indicated an RCT on the prevention of side effects associated with BC treatment. If no judgment could be made about the eligibility of a study based on the title alone, the judgment was based on its title and abstract. Any disagreements about eligibility were resolved at consensus meetings. The same procedure was applied for references included in this systematic review. Review articles identified in the search were screened for relevance, and reference lists were checked to identify additional potentially eligible studies. Final decisions about inclusion of all articles judged potentially eligible were based on the full texts of the published articles.
Quality assessment and personal study quality Two reviewers independently assessed the quality of the ten eligible studies (Table 1). Risk of bias was assessed using the Cochrane risk of bias tool for the appraisal of RCTs, as outlined in the Cochrane Handbook for Systematic Reviews of Interventions version 5.1.0 [12]. The tool contains six domains, and each domain was assigned a judgement related to the risk of bias (Table 2 and Figure 1). The judgement could be 'low risk', 'high risk', or 'unclear risk'. The latter judgement was assigned if the risk of bias of a characteristic in an included study was judged to be unclear, or if there was insufficient information on which to base the judgement. We compared Excel datasets between the two data extractors, and a third reviewer was consulted to resolve discrepancies. A summary of the risk of bias is reported in Table 2. All analyses were performed using Review Manager software. Figure 2 shows the PRISMA flow for reporting systematic reviews. RESULTS The search strategy identified 4,422 reports. After screening the articles against the inclusion criteria, a total of 109 were assessed for final screening. Of these, 68 were excluded as duplicates or for lacking pre- and post-intervention measurements, 5 did not describe the exercise programs, and 7 did not report control groups. Fourteen were reported in a narrative synthesis because of a high degree of heterogeneity, and 10 were considered for meta-analysis. No adverse events were reported in the included studies. Resistance and aerobic exercises were common among the selected studies. The authors described the mode and frequency of each component of the exercise regimen as recommended by the ACSM [19]. The above results were also confirmed in four other trials [20-23]. Effects of diet and MSE on AI-induced obesity in postmenopausal BCS. Goodwin et al. addressed side-effect prevention to counteract sarcopenic obesity [26]; the outcomes were statistically significant in the MSE group compared to the non-MSE group. The authors concluded that MSE should be incorporated into the BC treatment and survivorship care plan because of its benefits in attenuating metabolic syndrome and risk factors for CVD [28]. Heterogeneity assessment Our screening revealed that the methods used in the RCTs were rigorous. The six risk-of-bias domains assessed revealed that biases were low in most of the included studies. This meta-analysis included ten RCTs in which the ages of patients ranged from 50 to 60 years. The pooled mean difference (MD) in ROM improvement in the MSE program compared to no supervised exercises was statistically significant (P = 0.0002; forest plot, Figure 4). However, the statistical heterogeneity between RCTs was high. The pooled summary of data on the efficacy of MSE in improving HRQOL has shown moderate evidence that MSE improved HRQOL compared to no supervised exercises by up to 3.02 (95% CI: 2.59 to 3.45, P < 0.00001), as illustrated by the forest plot in Figure 4. The results were statistically significant. Heterogeneities were assessed in three forest plots (Figures 2-4). Obesity is a known shared risk factor between postmenopausal BC status and NCDs, such as CVD. Inflammation is considered a major unifying risk factor, sharing the same biological pathways of both CVD and BC [39]. Evidence revealed that lifestyle strategies that target weight loss may decrease perilymphatic inflammatory markers (cells), improve lymphatic function, and reverse pathological mechanisms in gene expression in lymphatic endothelial cells [40,43].
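As a hedged illustration (a sketch, not part of the original analysis; Review Manager computes these quantities internally), the following Python lines reproduce the reported I² from Cochran's Q and back-check the P-value implied by the pooled ROM effect and its confidence interval:

```python
import math

def i_squared(q: float, df: int) -> float:
    """I^2 = max(0, (Q - df) / Q) * 100: percent of variability due to heterogeneity."""
    return max(0.0, (q - df) / q) * 100.0

# Reported heterogeneity for the ROM analysis: Chi^2 (Cochran's Q) = 112.14, df = 5
print(round(i_squared(112.14, 5)))   # -> 96, matching the reported I^2 = 96%

# Back-checking the ROM pooled effect: MD = 1.35, 95% CI 0.63 to 2.07
se = (2.07 - 0.63) / (2 * 1.96)      # standard error recovered from the CI width
z = 1.35 / se                        # z-score ~ 3.68
print(f"{math.erfc(z / math.sqrt(2)):.4f}")  # two-sided P ~ 0.0002, as reported
```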
The best-known side effects related to surgery and radiation therapy are upper limb edema/inflammation and pain, with an incidence rate of about 40% within 5 years following treatment, depending on the type of therapy [44]. Given that postmenopausal BC status shares the same risk factors with metabolic syndromes, such as CVD, dyslipidaemia and diabetes, tailored exercise interventions are suggested to reduce blood pressure and the psychosocial and neurological adverse events associated with adjuvant therapy [39]. The present meta-analysis confirmed with moderate evidence that all types of exercises are effective in improving metabolic, homeostatic and lymphatic system function and are likely to improve BCS survivorship. The generalizability of these findings to other populations with severe-stage BC-associated comorbidities should be established with a sufficient level of evidence. Limitations Finally, further meta-analyses with high-quality RCTs should also explore the correlation between the ACSM and ACS exercise regimens in assessing the dose-response relationship between different types of exercises and survival outcomes stratified by BC subtypes. Further perspective A review on the effects of exercise on cancer-related fatigue suggested that exercise may be used in the rehabilitation of cancer and associated comorbidities to reduce inflammatory markers and fatigue [45]. Evidence from rigorous high-quality studies to recommend the impact of different types of exercises, exercise intensity, and weight loss on inflammatory markers is still lacking in the literature. A narrative synthesis evaluating the effects of exercise on markers of inflammation among BCS and a healthy population revealed that the effects were similar in reducing inflammatory markers in both populations. However, research gaps were identified in the literature; a good understanding of the relationship between exercise and inflammation, as well as the underlying biological mechanisms responsible for these changes in postmenopausal breast cancer patients on endocrine therapy, needs further investigation, as recommended in a previous review [46]. Our review briefly outlined the effects of lifestyle interventions on common adverse events associated with endocrine therapy, specifically with AIs, and the optimal exercise protocols developed to mitigate these comorbidities.

*The basis for the assumed risk (e.g. the median control group risk across studies) is provided in footnotes. The corresponding risk (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). CI: confidence interval. GRADE Working Group grades of evidence: High quality: further research is very unlikely to change our confidence in the estimate of effect. Moderate quality: further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate. Low quality: further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate. Very low quality: we are very uncertain about the estimate.
Computed Tomography Assessment of Tidal Lung Overinflation in Domestic Cats Undergoing Pressure-Controlled Mechanical Ventilation During General Anesthesia Objective This study aimed to evaluate lung overinflation at different airway inspiratory pressure levels using computed tomography in cats undergoing general anesthesia. Study Design Prospective laboratory study. Animals A group of 17 healthy male cats, aged 1.9–4.5 years and weighing 3.5 ± 0.5 kg. Methods Seventeen adult male cats were ventilated in pressure-controlled mode with airway pressure stepwise increased from 5 to 15 cmH2O in 2 cmH2O steps every 5 min and then stepwise decreased. The respiratory rate was set at 15 movements per min and end-expiratory pressure at zero (ZEEP). After 5 min at each inspiratory pressure step, a 4 s inspiratory pause was performed to obtain a thoracic juxta-diaphragmatic single-slice helical CT image and to collect respiratory mechanics data and an arterial blood sample. Lung parenchyma aeration was defined as overinflated, normally-aerated, poorly-aerated, and non-aerated according to the CT attenuation number (−1,000 to −900 HU, −900 to −500 HU, −500 to −100 HU, and −100 to +100 HU, respectively). Results At 5 cmH2O airway pressure, tidal volume was 6.7 ± 2.2 ml kg−1, 2.1% (0.3–6.3%) of the pulmonary parenchyma was overinflated and 84.9% (77.6–87.6%) was normally inflated. Increases in airway pressure were associated with progressive distention of the lung parenchyma. At 15 cmH2O airway pressure, tidal volume increased to 31.5 ± 9.9 ml kg−1 (p < 0.001), overinflated pulmonary parenchyma increased to 28.4% (21.2–30.6%) (p < 0.001), while normally inflated parenchyma decreased to 57.9% (53.4–62.8%) (p < 0.001). Tidal volume and overinflated lung fraction returned to baseline when airway pressure was decreased. A progressive decrease was observed in arterial carbon dioxide partial pressure (PaCO2) and end-tidal carbon dioxide (ETCO2) when the airway pressures were increased above 9 cmH2O (p < 0.001). The increase in airway pressure promoted an elevation in pH (p < 0.001). Conclusions and Clinical Relevance Ventilation with 5 and 7 cmH2O of airway pressure prevents overinflation in healthy cats with highly compliant chest walls, despite producing acidemia by respiratory acidosis. This can be controlled by adjusting the respiratory rate and inspiratory time. INTRODUCTION Mechanical ventilation (MV) aids in maintaining adequate pulmonary gas exchange during general anesthesia for surgical procedures. However, it may lead to the development of areas of atelectasis and hyperinflation within the pulmonary parenchyma (1)(2)(3)(4). Hyperinflation can lead to pulmonary lesions by cyclical stretching of lung tissues, promoting inflammation, acute lung injury, and postoperative complications (5)(6)(7). Despite the evidence in other species, to the authors' knowledge, there is no evidence describing the pulmonary aeration distribution of healthy cats undergoing positive pressure ventilation in relation to lung hyperinflation and alveolar collapse. The current ventilation monitoring methods available at the bedside are unable to detect lung hyperinflation. The quantitative assessment of the distribution of aeration within the lungs during MV can be evaluated using CT (8). Using this technique, it is possible to precisely compute the amount of overinflated lung parenchyma under diverse inspiratory pressure conditions, allowing the identification of less harmful ventilatory strategies (9)(10)(11)(12).
The aim of this study was to assess the inspiratory lung aeration distribution by helical CT and respiratory mechanics in anesthetized cats ventilated with 6 different levels of inspiratory pressure (5, 7, 9, 11, 13, and 15 cmH2O), in order to obtain the best value of inspiratory pressure to ventilate healthy lungs. The hypothesis of the study was that low inspiratory pressures would result in more areas of lung collapse, while high inspiratory pressures would result in more areas of overinflation in healthy cat lungs. MATERIALS AND METHODS The study was approved by the Ethics Committees for Animal Research at the Faculty of Veterinary Medicine and Animal Science of the University of São Paulo (FMVZ-USP protocol number 110/8) and the Faculty of Medicine of the University of São Paulo (CEUA-USP protocol number 100/10). Informed consent was obtained for the enrolled cats. The study was conducted at the Radiology Service of the Department of Surgery, Faculty of Veterinary Medicine and Animal Science of the University of São Paulo, São Paulo, Brazil. Animals A total of 25 intact male cats aged 1-5 years, scheduled for orchiectomy at the University hospital, were investigated in this study. The inclusion criterion was the absence of a history of respiratory and cardiovascular disease. The presence of abnormalities in the tomographic scouts of the thorax, in baseline arterial blood gases, or in the preoperative blood cell count and renal and hepatic plasma chemistry panel were the exclusion criteria. Anesthetic Protocol One day before the experiment, the animals underwent catheterization of the cephalic vein and dorsal pedal artery using a 22-gauge catheter (BD Insyte Autoguard, Juiz de Fora, MG, Brazil) under sedation with dexmedetomidine hydrochloride (Dexmedetor; Orion Pharma, Espoo, Finland) (10 µg kg−1) administered intramuscularly (IM). Sedation was reversed with atipamezole (Antisedan; Orion Pharma, Espoo, Finland) (10 µg kg−1) IM, and the animals were then placed in cages with food and water ad libitum. The animals fasted for 8 h before the experiments. On the day of the experiment, the animals were anesthetized with propofol (5 mg kg−1) intravenously (IV) (Propovan; Cristália, São Paulo, Brazil) and underwent endotracheal intubation with a cuffed cannula of 3.5 mm size and 19 cm length (Solidor, Bonree Medical, Guangdong, China). Anesthesia was maintained with a continuous infusion of propofol (0.5 mg kg−1 min−1). Paralysis was induced by the intravenous administration of 1 mg kg−1 of rocuronium (Rocuron; Cristália, São Paulo, Brazil) and supplemented with increments of 0.5 mg kg−1 if the animal presented any signs of spontaneous breathing effort based on the airflow waveform. After the CT scan protocol, the animals were transported to the operating room where neutering surgery was performed. Study Protocol Prior to the beginning of each study, standard tests of the ventilator (Galileo, Hamilton Medical, Bonaduz, Switzerland) were performed (pneumotachograph calibration, leak testing, and calculation of the breathing circuit compliance). After the initial ventilator tests, a calibrated Wright spirometer (nSpire, Hertford, UK) was connected between the pneumotachograph and the breathing circuit (neonatal, 150 cm length) to verify the accuracy of the VT measurement computed by the ventilator. A variation of up to 5% between the VT values given by the ventilator and the Wright spirometer was accepted.
Experimental Design After anesthetic induction and positioning of the patient in dorsal recumbency, pressure-controlled ventilation (PCV) was initiated with an inspiratory pressure of 5 cmH2O, zero end-expiratory pressure (ZEEP), fR of 15 breaths per min, an inspiratory time of 1 s, and FiO2 of 40%. After 20 min of anesthetic induction and hemodynamic stabilization, inspiratory pressure was progressively increased by increments of 2 cmH2O every 5 min until reaching 15 cmH2O. After that, inspiratory pressure was reduced in a descending stepwise manner in steps of 2 cmH2O until 5 cmH2O. At the end of each period, an arterial blood sample was collected using 1 ml syringes (Becton Dickinson, Curitiba, Brazil) washed with heparin for blood gas analysis, and an inspiratory pause of 4 s was performed to obtain a thoracic CT image (Figure 1). At each inspiratory pressure level, the arterial pressure (AP), heart rate (HR), and pulse oximetry (SpO2) of the animals were evaluated using a multiparameter monitor (Model 2020, Dixtal, Manaus, Brazil). The blood pressure transducer (TruWave, Edwards, California, USA) was connected to the arterial catheter by noncompliant fluid-filled tubing and was zeroed and positioned at the heart level (elbow joint). The peak and plateau pressure (Pplat, cmH2O), minute volume, expiratory tidal volume, and respiratory rate were obtained directly from the ventilator. The static compliance was calculated using the following formula:

Cstat = VE / (Pplat − PEEP)   (1)

where Cstat is the static compliance, VE is the expiratory volume, Pplat is the plateau pressure obtained at the end of a 4 s inspiratory pause with zero flow, and PEEP is the positive end-expiratory pressure, which was zero throughout the study. CT Image Acquisition and Analysis After intubation, the cats were kept in dorsal recumbency throughout the entire procedure with their thoracic limbs extended forward, carefully ensuring that the spine and head of each animal were in a straight line to acquire symmetrical images. The thoracic CT scan was obtained in dorsal recumbency because most surgeries (like laparotomies) are routinely done in this recumbency. The CT images were obtained at the end of a 4-s inspiratory pause at each inspiratory pressure level, from 1 cm cranial to the diaphragmatic dome, using a single-slice helical CT scanner (Xpress/GX, Toshiba, Japan) with the same settings (120 kVp and 150 mA, matrix size 512 × 512). Images were reconstructed at 5 mm thickness with kernel FC50 for standard lung imaging. The images were analyzed using the software OsiriX (version 4.19, University Hospital of Geneva, Switzerland). Briefly, each pulmonary region of interest (ROI) was manually delineated and the pixels contained in it were distributed into 1,200 compartments according to their X-ray attenuation coefficient (CT number). Each pixel is characterized by a CT number, expressed in Hounsfield units (HU), obtained as the attenuation coefficient of the studied structure minus that of water, divided by the attenuation coefficient of water and multiplied by 1,000 (13).
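As a minimal illustration of Equation (1) (the numbers below are hypothetical, not measurements from this study):

```python
def static_compliance(v_e_ml: float, p_plat_cmh2o: float, peep_cmh2o: float = 0.0) -> float:
    """Cstat = VE / (Pplat - PEEP), in ml/cmH2O (Equation (1); PEEP was zero here)."""
    return v_e_ml / (p_plat_cmh2o - peep_cmh2o)

# Hypothetical 3.5 kg cat at 15 cmH2O inspiratory pressure:
# expiratory tidal volume ~ 31.5 ml/kg * 3.5 kg ~ 110 ml
print(round(static_compliance(110.0, 15.0), 1))  # -> 7.3 ml/cmH2O
```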
Therefore, for each compartment of a known CT number, it was possible to compute tissue and gas volumes and tissue mass using the following formulas: (1) Volume of the voxel = (size of the pixel)² × section thickness, with the pixel area provided for each tomographic study; (2) Total volume of the compartment = number of voxels × voxel volume within each radiological density bin; (3) Volume of gas = (−CT number/1,000) × total volume of the compartment, if the compartment has a CT number between 0 and −1,000 HU; volume of gas = 0, if the compartment has a CT number > 0 HU; volume of gas = total volume of the compartment, if the CT number is −1,000 HU; (4) Volume of tissue = (1 + CT number/1,000) × total volume of the compartment, if the compartment has a CT number between 0 and −1,000 HU; volume of tissue = number of voxels × voxel volume, if the compartment has a CT number > 0 HU; volume of tissue = 0, if the compartment has a CT number of less than −1,000 HU; (5) Weight of tissue = volume of tissue, if the compartment has a CT number < 0 HU; weight of tissue = (1 + CT number/1,000) × total volume of the compartment, if the compartment has a CT number > 0 HU. The total lung tissue and volumes of each given ROI were computed by adding all the partial masses and volumes of the compartments. Lung parenchyma was further analyzed according to its aeration, defined as follows: 1) overinflated parenchyma, characterized by CT numbers between −1,000 HU and −900 HU (9,14,15); 2) normally-aerated parenchyma, characterized by CT numbers between −900 HU and −500 HU; 3) poorly-aerated parenchyma, characterized by CT numbers between −500 HU and −100 HU (16); and 4) non-aerated parenchyma, characterized by CT numbers between −100 and +100 HU (17). In the initial analysis, a large ROI encompassing both the left and right lungs was delineated in the CT images obtained at each peak inspiratory pressure level, to evaluate the overall lung volumes and tissue mass, as shown in Figure 1. The portions of the pulmonary hila containing the trachea, main bronchi, and hilar blood vessels were excluded from the ROI. In a second analysis, the left and right lungs were segmented into 3 regions of interest of equal height distributed along the ventral-dorsal axis (ventral, middle, and dorsal, relative to the patient's anatomy) and the overinflated lung tissue mass fraction was computed in each airway pressure condition (Figure 1). Statistical Analysis The normal distribution of the data was evaluated by means of visual analysis and the Shapiro-Wilk test. All data were expressed as the mean value ± SD or as the median (25-75% interquartile range), according to their distribution. The physiological variables, blood gas variables, and lung CT-derived variables were compared at different levels of inspiratory pressure by means of one-way ANOVA or the Friedman test, followed by multiple comparisons when indicated. The fraction of the overinflated parenchyma relative to the overall parenchyma mass within the 3 lung segments (ventral, middle, and dorsal) obtained at each peak inspiratory pressure level was compared using the Kruskal-Wallis test followed by Dunn's test. A p-value of <0.05 was considered significant. All statistical analyses were performed using the IBM SPSS package version 22 (IBM Corp, Armonk, NY, USA) and GraphPad Prism 6 for Mac (GraphPad Software, La Jolla, California, USA).
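To make the voxel bookkeeping concrete, the following Python/NumPy sketch (an illustration with assumed pixel spacing and random stand-in HU data, not the study's images) classifies one ROI into the four aeration compartments and applies formulas (1)-(4):

```python
import numpy as np

PIXEL_MM, SLICE_MM = 0.5, 5.0                    # assumed geometry, not from the study
voxel_vol_mm3 = PIXEL_MM**2 * SLICE_MM           # formula (1)

hu = np.random.default_rng(0).uniform(-1000, 100, (512, 512))  # stand-in ROI data

bins = {"overinflated": (-1000, -900), "normally-aerated": (-900, -500),
        "poorly-aerated": (-500, -100), "non-aerated": (-100, 100)}

total_vol = hu.size * voxel_vol_mm3
for name, (lo, hi) in bins.items():
    ct = hu[(hu >= lo) & (hu < hi)]
    comp_vol = ct.size * voxel_vol_mm3           # formula (2)
    # per-voxel gas fraction -CT/1000, clamped to 0 for CT > 0 HU (formula (3))
    gas_vol = float(np.sum(np.clip(-ct, 0, 1000) / 1000.0) * voxel_vol_mm3)
    tissue_vol = comp_vol - gas_vol              # formula (4) for -1000 < CT < 0 HU
    print(f"{name:16s} {100 * comp_vol / total_vol:5.1f}% of ROI, "
          f"gas {gas_vol / 1000:6.1f} ml, tissue {tissue_vol / 1000:6.1f} ml")
```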
RESULTS From the 25 cats included in this study, only the data from 17 cats were analyzed. Eight cats had to be excluded due to image artifacts (5 cats), pulmonary infiltrates detected on the baseline CT images (2 cats), and hypotension refractory to fluid therapy and ephedrine (1 cat). The animals had an average age of 3.2 ± 1.3 years and a weight of 3.5 ± 0.5 kg. Lung Aeration Distribution The increase in inspiratory pressure caused an elevation in the total CT section volume, which was 34.2% greater than baseline at the airway pressure of 15 cmH2O. The decrease in airway pressure resulted in the return of the lung volume to baseline (p < 0.001). This variation in the total volume was due to an increase in the volume of gas, while the volume of tissue decreased, but to a minor degree (Table 1). The overinflated tissue fraction increased from 2.1% at the airway pressure of 5 cmH2O to 28.4% at the airway pressure of 15 cmH2O (p < 0.001), with a reduction of normally-aerated parenchyma from 84.9 to 57.9%. The reduction of airway pressure decreased the amount of overinflated lung parenchyma and restored the normally-aerated parenchyma fraction to baseline values at 5 cmH2O airway pressure. The fraction of non-aerated and poorly-aerated parenchyma did not change significantly during the study (Table 1). Regional Overinflation in the Ventral, Middle and Dorsal Lung Regions The amount of overinflated parenchyma progressively increased within the ventral, middle, and dorsal lung segments in proportion to the increase in airway pressure up to 11 cmH2O in a heterogeneous fashion, as shown in Figure 2. The overinflation observed in the middle lung region was greater than that in the ventral region. A stepwise decrease in inspiratory pressure toward baseline reversed the alterations in the distribution of regional overinflation within the lung parenchyma. Respiratory Mechanics The plateau pressure increased with the elevation of inspiratory airway pressures and returned to baseline after the stepwise decrease in inflation pressure (p < 0.001) (Table 1). The increase in airway pressure promoted a significant increase in tidal volume, from 6.7 ± 2.2 ml/kg at 5 cmH2O airway pressure to 31.5 ± 9.9 ml/kg at 15 cmH2O airway pressure (p < 0.001), returning to baseline values at the pressure of 5 cmH2O (Table 2). The static compliance increased from 11 cmH2O of airway pressure until 15 cmH2O of airway pressure. After that pressure level, the static compliance remained higher than baseline until 9 cmH2O (p < 0.001). When the airway pressure decreased below 7 cmH2O, the static compliance decreased, returning to baseline values at 5 cmH2O (Table 2). Cardiovascular Variables and Blood Gases The hemodynamic variables of the animals during the study are shown in Table 2. No changes were observed in mean arterial pressure under the different airway pressure conditions; however, heart rate increased as peak airway pressure increased from 9 cmH2O up to 15 cmH2O and returned to baseline as airway pressure was decreased to 5 cmH2O (p < 0.05). A progressive decrease was observed in PaCO2 and ETCO2 when the airway pressures were increased above 9 cmH2O (p < 0.001). The increase in airway pressure promoted an elevation in pH (p < 0.001). During the stepwise airway pressure decrease, PaCO2 and ETCO2 increased and pH values decreased, but did not return to baseline.
An increase in arterial PaO2 was observed when airway pressure was raised up to 9 cmH2O (p < 0.001), remaining stable until the end of the experiment. No variations were observed in base excess throughout the experiment, while HCO3 decreased with airway pressures above 11 cmH2O, returning to the baseline level at 5 cmH2O (Table 2). DISCUSSION In this study, we observed in anesthetized and mechanically ventilated cats with increasing airway pressures: (1) a linear increase of tidal volume; (2) a linear increase in overinflation of the lung parenchyma; (3) a reduction of normally-aerated areas; (4) preservation of the poorly- and non-aerated areas. The airway pressure values of 5 and 7 cmH2O generated the most normally-aerated lung areas (84.9% and 79.9%) and tidal volumes of 6.7 and 10 ml/kg, respectively. To the best of the authors' knowledge, there are no similar studies in cats. For several decades, it has been recognized that MV plays an important role in the development of lung injury in artificially ventilated individuals (18). In 1974, Webb and Tierney described the impact of high pressures and volumes in the lungs of healthy rats ventilated with inspiratory pressures of 14, 30, or 45 cmH2O with and without PEEP (19). They observed that animals ventilated with high inspiratory pressure developed alveolar and perivascular edema, severe hypoxemia, and decreased compliance, while those ventilated with low pressure showed no pathologic lung changes. In subsequent years, it was observed in several other investigations that the use of high tidal volumes was associated with alterations of vascular permeability and with the development of interstitial edema (20)(21)(22). Data from recent studies have shown that the harm caused by high tidal ventilation goes beyond alveolar and endothelial cell damage, being implicated in mitochondrial injury with mitochondrial DNA release (23) and in remodeling of the lung extracellular matrix (24). Many of these findings were attributed to the cyclical tidal overstretch of the parenchyma during the respiratory cycle. Despite the evidence showing its injurious potential, MV remains a cornerstone in anesthesia and critical care medicine. To reduce postoperative pulmonary complications such as pneumonia and postoperative pulmonary hypoxemia, the use of reduced tidal volumes ranging from 6 to 8 ml/kg has become current practice in human anesthesia (7,25,26). Comparative respiratory physiology studies across different mammal species also indicate that spontaneous-breath tidal volumes range from 6 to 8 ml/kg (27,28). Even so, in small animals undergoing general anesthesia, the recommended tidal volumes are in the range of 10 to 15 ml/kg (29,30). It is unknown what the effect of such high tidal volumes and airway pressures is on the pulmonary structure, and what its possible long-term consequences are in healthy cats undergoing general anesthesia and MV for surgical procedures; studies like this one are important to determine the ideal inspiratory pressure and VT to be used clinically in this species. We showed here that in healthy cats ventilated with PCV, an inspiratory pressure of 5-7 cmH2O can be safely used, generating tidal volumes between 6 and 10 ml/kg. However, an individual approach to determine the best ventilatory settings should be used for each patient, monitoring airway pressures and tidal volumes. Quantitative CT has been used clinically and experimentally to investigate the distribution of the ventilation within the lung parenchyma.
However, there are no clear definitions of the radiological limits of pulmonary parenchyma hyperinflation in small animals, and one could argue that assuming the distribution of aeration within the lungs of healthy cats behaves in the same fashion as in humans is a limitation of this study. Nevertheless, other authors have described the same CT attenuation thresholds to define lung aeration compartments, as well as the relative volume in these pulmonary compartments according to aeration, in cats (31,32). In this study, we observed that the relative distribution of pulmonary aeration was similar to that observed in humans, the lung being composed of 85% normally-aerated parenchyma, 10% poorly-aerated parenchyma, about 2% non-aerated, and 2% overinflated parenchyma (9). These last 2 compartments are mainly composed of blood within pulmonary vessels and air filling the large airways, respectively. An increase in airway pressure from 5 to 15 cmH2O increased the tidal volume from 6.7 to 31.5 ml/kg. As a result of the progressive pressurization of the airways, there was a massive increase in the amount of overinflated parenchyma and an expressive reduction of the normally-aerated parenchyma. Probably, as airway pressure increased, a portion of the normally-aerated lung parenchyma became overinflated, without any significant changes in the poorly-aerated parenchyma or in the non-aerated parenchyma. In this way, we can assume that when the airway pressure was increased, the amount of pulmonary parenchyma exposed to tidal overinflation increased correspondingly, as did the risk of ventilator-induced pulmonary injury. Furthermore, increases in airway pressure promote differential patterns of inflation in the dorsal, middle, and ventral lung regions. From 5 to 11 cmH2O, overinflation occurs preferentially in the ventral lung segment, physiologically the most compliant pulmonary segment. At this point, the ventral segment is fully distended, overinflation reaches a plateau, and additional increases in airway pressure do not increase the fraction of overinflated lung. Meanwhile, the middle lung segment keeps over-distending up to 15 cmH2O airway pressure. From the pressure of 13 cmH2O onwards, dorsal lung segment overinflation is comparable to that of the ventral segment, indicating that the lungs became entirely overinflated. Even so, ventilatory driving pressures in these ranges are currently being used in veterinary clinical practice (29). Another phenomenon that must be considered is whether some part of the normally-aerated parenchyma decreased as airway pressure increased due to de-recruitment caused by the enlarging overinflated lung segment, resulting in increases in the poorly- and non-aerated parenchyma fractions. According to our data, the poorly- and non-aerated lung fractions remained largely unaltered as airway pressure varied, rejecting a possible de-recruitment of normally-aerated lung parenchyma in the studied airway pressure range. In the present study, no significant areas of alveolar collapse were observed, whereas the largest area of collapse represented an average of 2.38 ± 3.6% when the inspiratory pressure reached 15 cmH2O. We did not use PEEP, even a fixed low PEEP, as the objective of the study was not to evaluate lung atelectasis. This study was designed to evaluate hyperinflation resulting from increasing airway pressures during pressure-controlled ventilation. With 15 cmH2O inspiratory pressure, the tidal volume was 31.5 ± 9.9 ml/kg.
Current practices for safe driving pressure (plateau pressure minus PEEP) range from 14 to 18 cmH2O in acute respiratory distress syndrome (ARDS) and in healthy human patients (33,34). Higher inspiratory pressures would only be harmful to the cats in our study, without additional recruitment benefit. It is believed that the use of a FiO2 of 40%, the short duration of anesthesia (75 min), the low weight of the abdominal organs of cats, and their high thoracic and lung compliance prevented alveolar collapse at any time during the study. Even though the most prevalent abnormality detected in thoracic CT scans of healthy anesthetized cats was pulmonary atelectasis (41%) (35), different studies have shown significant increases in collapsed lung areas during ventilation with FiO2 values of 100% (7.1 ± 2.7%) under spontaneous ventilation in felines (36) and under mechanically controlled ventilation (12.8 ± 3.7%) in dogs with a tidal volume of 15 ml/kg (37). In both studies, the tendency toward collapsed alveoli was the same at 100% FiO2, although the most striking values were observed in dogs. The most likely reason for this result is the increased visceral weight in dogs compared with cats, resulting in a greater compressive effect on the lung lobe bases. In terms of the hemodynamic effects caused by MV, the use of high pressures and volumes is related to hypotension caused by low cardiac output. Pulmonary hyperinflation can increase pulmonary vascular resistance and pulmonary artery pressure, impairing right ventricular ejection (38). However, this result was not observed in the present study, given the short time during which the animals were exposed to each pressure level, besides the fact that all of the patients were normovolemic and HR increased by 10% at the highest airway pressures (13-15 cmH2O) in a compensatory way. The static compliance of the respiratory system increased with the elevation of inspiratory airway pressure, being 63% higher at the highest pressure of 15 cmH2O when compared to baseline. This increase is in agreement with those observed in the tidal volume and the overall CT section volume. These factors also accompanied the increase in overinflated areas, showing that 15 cmH2O of inspiratory pressure could be harmful to the healthy lungs of cats. A comparison with the values of respiratory system compliance in the veterinary literature is difficult because of the different methods used, although reported values range from 5.1 to 6.8 ml/cmH2O (39). In cats, a PaCO2 below 25 mmHg and a pH above 7.46 are the limits for respiratory alkalosis, and beyond those values (PaCO2 < 20-25 mmHg) arteriolar vasoconstriction may occur, reducing cerebral and myocardial blood flow (40). In this study, we observed a progressive decrease in PaCO2 when the airway pressures were increased, reaching values close to life-threatening levels at 15 cmH2O airway pressure. Based on the arterial PaCO2 analysis, the best airway pressures were 9 and 11 cmH2O during the stepwise increase, but at these levels there was 9.6-14.9% of overinflated parenchyma compared to 5 cmH2O airway pressure (2.1%), with less normally-inflated parenchyma. However, at the airway pressures giving the best aeration of the pulmonary parenchyma (5 and 7 cmH2O), there was respiratory acidosis. Respiratory acidosis, or primary hypercapnia, occurs when carbon dioxide production exceeds elimination via the lungs and is mainly due to alveolar hypoventilation (41).
Despite this, respiratory acidosis can be reversed by increasing alveolar minute ventilation, which is the product of respiratory rate and the portion of tidal volume that reaches perfused gas-exchange units (42). A significant increase in oxygenation during the stepwise airway pressure increase was observed and was maintained during the decreases in pressure. Although oxygenation was normal at baseline, this increase most likely occurred because of the recruitment of collapsed areas not evaluated by the single CT image of the studied region. Limitations This study has some limitations: (A) Tomographic analyses were performed on a single CT image close to the diaphragm. Despite CT scanning of the whole thorax enabling the evaluation of the distribution of ventilation throughout the entire lungs (43), the time to acquire the image under elevated airway pressure conditions may cause unnecessary hemodynamic derangements. Secondly, it would expose the cats to almost 10 times higher doses of radiation and to an avoidable higher risk of cancer (44). Therefore, we decided to obtain fewer images to lower animal exposure to X-rays. (B) The CT section thickness used in this study was 5 mm. This spatial resolution might result in a significant underestimation of the actual overinflation occurring within the lungs when compared to a higher spatial resolution with a 2 mm section (45). Conversely, the use of a thicker CT section may minimize the bias introduced in the CT analysis by the cephalic-caudal displacement of the lung parenchyma outside the limits of a thinner CT section. (C) Another problem related to the analysis of a single CT section is the partial volume effect, which can interfere with the computation of the CT number in voxels near the boundaries of the ROI. These voxels may be composed of gas and much denser structures such as bone, and the actual CT number values may therefore be underestimated. Since the cross-sectional area of the feline thorax is much smaller than in humans, this effect may be potentially greater. The relative values calculated in one CT section may not be representative of the whole lung; moreover, a whole-lung 3D analysis may result in less overestimation of hyperinflation. (D) Another limiting factor might be the duration of the inspiratory pause used in this study (4 s). According to David et al., pauses longer than 4 s could promote alveolar recruitment (46). Moreover, some studies indicate that an inspiratory pause shorter than 10 s when examining tomographic slice areas may overestimate alveolar recruitment due to overinflation, without accounting for the accumulation of gas, providing the same estimates as in regions that are not gravity-dependent (47). (E) The absence of a CT measurement at zero airway pressure during disconnection of the patient from the ventilator could be another limitation, as the overinflated lung area at 5 cmH2O inspiratory pressure cannot be compared to a non-pressurized condition, and the 2.1% of overinflated lung could be a representation of voxels within the large airways. (F) Finally, the time spent at each inspiratory pressure might be considered too short, as evidenced mainly by differences in PaCO2 values during the stepwise inspiratory pressure increase and decrease. With more time to equilibrate CO2 washout, these values would probably be more similar at the same inspiratory pressure level.
Moreover, it is known that a lack of PEEP is related to the appearance of collapsed areas and to acute lung injury by cyclic opening and closing of the alveolar units; with increased time at each pressure level, we could expect greater areas of non-aerated lung. However, a longer time at each inspiratory pressure would greatly increase the total time of anesthesia in each animal, as well as the risk of hemodynamic instability. In conclusion, increases in airway pressure beyond 5 cmH2O cause a progressive overinflation of the lungs that can encompass 28.4% of the parenchyma at 15 cmH2O. Ventilation with low driving pressures and an adequate respiratory rate may provide the best ventilatory strategy for small animals with highly compliant chest walls. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. ETHICS STATEMENT The animal study was reviewed and approved by the Ethics Committees for Animal Research at the Faculty of Veterinary Medicine and Animal Science of the University of São Paulo (FMVZ-USP protocol number 110/8) and the Faculty of Medicine of the University of São Paulo (CEUA-USP protocol number 100/10). Written informed consent was obtained from the owners for the participation of their animals in this study. AUTHOR CONTRIBUTIONS AM: study design, data acquisition, analysis and interpretation of data, and drafting of the manuscript. AA, DF, DO, and LM: conceived the study, study design, advice on data analysis and interpretation, and manuscript revision. AP and LV-M: study design and data acquisition. JS: revised the manuscript. All authors contributed substantially to manuscript revision and approved the final manuscript.
Image-based Navigation for the SnowEater Robot Using a Low-resolution USB Camera This paper reports on a navigation method for the snow-removal robot called SnowEater. The robot is designed to work autonomously within small areas (around 30 m² or less) following line segment paths. The line segment paths are laid out so that as much snow as possible can be cleared from an area. Navigation is accomplished by using an onboard low-resolution USB camera and a small marker located in the area to be cleared. Low-resolution cameras allow only limited localization and present significant errors. However, these errors can be overcome by using an efficient navigation algorithm to exploit the merits of these cameras. For stable, robust autonomous snow removal using this limited information, the most reliable data are selected and the travel paths are controlled. The navigation paths are a set of radially arranged line segments emanating from a marker placed in the area to be cleared, in a place where it is not covered by snow. With this method, using a low-resolution camera (640 × 480 pixels) and a small marker (100 × 100 mm), the robot covered the testing area following line segments. For a reference angle of 4.5° between line paths, the average results are 4° for motion on a hard floor and 4.8° for motion on compacted snow. The main contribution of this study is the design of a path-following control algorithm capable of absorbing the errors generated by a low-cost camera. Introduction Robot technology is focused on increasing the quality of life through the creation of new machines and methods. This paper reports on the development of a snow-removal robot called SnowEater. The concept for this robot is the creation of a safe, slow, small, light, low-powered and inexpensive autonomous snow-removal machine for home use. In essence, SnowEater can be compared with autonomous vacuum cleaner robots [1], but instead of working inside homes, it is designed to operate on house walkways and around front doors or garages. A number of attempts to automate commercial snow-blower machines have been made [2,3], but such attempts have been delayed out of concerns over safety. The basic SnowEater model is derived from the heavy snow-removal robot named Yukitaro [4], which was equipped with a high-resolution laser rangefinder for its navigation system. In contrast to the Yukitaro robot, which weighs 400 kg, the SnowEater robot is planned to weigh 50 kg or less. In 2010, a snow-intake system was fitted to SnowEater [5] that enables snow removal using low auger and traveling speeds while following a line path. In 2012, a navigation system based on a low-cost camera was introduced [6]. The control system was designed using linear feedback to ensure paths were followed accurately. However, due to the reduced localization accuracy, the system did not prove to be reliable. Among the many challenges in the realization of autonomous snow-removal robots is the navigation and localization issue, which currently remains unsolved. An early work on autonomous motion on snow is presented in [7], in which four cameras and a scanning laser were used for simultaneous localization and mapping (SLAM), and long routes in polar environments were successfully traversed. The mobility of several small tracked vehicles moving in natural and deep-snow environments is discussed in [8,9]. The terramechanics theory for motion on snow presented in [10] can be applied to improve the performance of robots moving on snow.
The basic motion models presented in [11][12][13][14][15][16][17][18] can be used for motion control. In addition, Gonzalez et al. [19] presented a model for off-road conditions. Also, the motion of a tracked mobile robot (TMR) on snow is notably affected by slip disturbance. In order to ensure the performance of path-tracking or path-following control against disturbances, robust controllers [20,21] and controllers based on advanced sensors [22] exist. However, advanced feedback compensation based on precise motion modeling is not necessary for this application, because strict tracking is not required for our snow-removal robot. One of the goals of this study is to develop a simple controller without the need for precise motion modeling. Another goal is to use a low-cost vision-based navigation system that does not require a large budget or an elaborate setup in the working environment. In contrast to other existing navigation strategies that use advanced sensors, we use only one low-cost USB camera and one marker for motion control. This paper presents an effective method of utilizing a simple directional controller based on a camera-marker combination. In addition, a path-following method to enhance the reliability of navigation is proposed. Although the directional controller itself does not provide asymptotic stability of the path to follow, the simplicity of the system is a significant merit. The rest of this paper is organized as follows. The task overview and prototype robot are presented in Section 2. The control law, motion and mathematical model are shown in Section 3. The experimental results are shown in Section 4 and our conclusions are given in Section 5. Task Overview and Prototype Robots The SnowEater robot prototype was developed in our laboratory. The prototype is a tracked mobile robot (TMR). Its weight currently is 26 kg, and its size is 540 mm × 740 mm × 420 mm. The main body is made of aluminum and has a flat shape for stability on irregular terrain. The intake system consists of steel and aluminum conveying screws that collect and compact snow at a low rotation speed. Two 10 W DC motors are used for the tracks, and two 15 W DC motors are used for the screws. The front screw collects snow when the robot is moving with slow linear motion. The two internal screws compact the snow into blocks for easy transportation. Once the snow is compacted, the blocks are dropped out from the rear of the robot. In addition, the intake system compacts the snow in front of the robot, reducing the influence of track sinkage. Since the robot requires linear motion to collect snow [5], the path to follow consists of line segments that cover the working area. In our plan, another robot carries the snow blocks to a storage location. Figure 1 shows the prototype SnowEater robot, and Figure 2 shows the line arrangement of the path-following method.
Image-Based Navigation System One of the objectives of the SnowEater project is to use a low-resolution camera as the sole navigation sensor. In our strategy, a square marker is placed in the center of the working area, so as to be visible from any direction, and a camera is mounted on the robot. The robot camera-marker position/orientation is obtained by using the ARToolKit library [23]. This library uses the positions of the marker's four corners in the camera image and information given a priori (camera calibration file and marker size) to calculate the transformation matrix between the camera and marker coordinate systems. The transformation matrix is then used to estimate the translation components [24]. However, with low-resolution cameras, the accuracy and reliability of the measurement vary significantly depending on the camera-marker distance. This is one of the problems associated with localization when using only vision. Gonzales et al. [25] discuss this issue and provide an innovative solution using two cameras. A solution for the localization problem using stereoscopic vision is given in [26,27], while [28] presents a solution using rotational stereo cameras. In [29], a trajectory-tracking method using an inexpensive camera without direct position measurement is presented. Some studies have utilized different sensors, such as a laser scanner, to map outdoor environments [30]. Our study places a high priority on keeping the system as a whole inexpensive and simple; hence the challenge is to exploit robot performance using a low-resolution camera. In this section, the term "recognized marker position" refers to the marker blob in the captured image, and the term "localization" means identification of the robot location with respect to the marker coordinate system. Localization Performance Evaluation To evaluate localization performance, outdoor and indoor experiments were carried out using a low-cost USB camera (Sony PlayStation Eye, 640 × 480 pixels) mounted on a small version of the SnowEater robot. Results with different cameras can be seen in Appendix A. The dimensions and weight of the small version are 420 mm × 310 mm × 190 mm and 2.28 kg, respectively. Because only a path-following strategy with visual feedback is considered, both robots (SnowEater and the small version) behave similarly. Figure 3 shows the small version of the SnowEater robot. The ARToolKit library uses monochromatic square markers to calculate the camera-marker distance and orientation, and the system can be used under poor lighting conditions [31]. Our objective is to exploit the library's localization merits and performance with low-resolution cameras. Localization control calculations and track commands are completed every 300 ms. Following the results presented in [6] and Appendix A, a monochromatic square marker measuring 100 × 100 mm provides enough accuracy to be used in our application; hence the following experiments are done with this marker.
The camera is mounted on the robot facing toward the marker. The robot was placed on the X-axis of the marker-based coordinate system at different xn positions for 1 min. The outdoor experiment was carried out in a snow environment, with air and snow temperatures of −5 °C and −1 °C, respectively. Figure 4 shows the experimental setup. The results show that the robot x-coordinate is more reliable than the y-coordinate. The average values are stable within 800 mm. However, to use the average value the robot must remain in a static state. For motion-state applications, the x-value (camera-marker distance) is more reliable because its variance is smaller than that of the y-value. Figure 6 shows the orientation angle experimental results. Since the robot was oriented toward the marker, the angle is 0°. The orientation angle variance increases with the camera-marker distance, and its application is limited to the marker proximity (x < 300 mm) when using a 100 × 100 mm marker. A simple way to obtain the camera-marker direction is to count the number of pixels between the center of the marker blob and the image center in the camera image. These pixel counts are converted into degrees by a simple pixel-degree relation, found by measuring the number of pixel counts corresponding to a known fixed angle. Figures 7 and 8 show the results. Figure 8 shows the results for the recognized marker position and, as can be seen, there is not a large variance, even when x = 1400 mm. Therefore, the data are reliable and can be used during motion. In contrast to the orientation results shown in Figure 6, as the camera-marker distance becomes larger, the recognized marker position error becomes smaller. This characteristic is very useful for our research, because the relative angle between the marker direction and the robot forward direction can be obtained directly from the image. In summary, from the controlling perspective, the recognized marker position is reliable when the camera-marker distance is large. When the camera-marker distance is small, localization data can be used. The x-coordinate of the camera-marker distance can be obtained more accurately than the y-coordinate. Based on these properties, a navigation method for distant and vicinity regions can be created. Section 3.2 describes a novel navigation algorithm based on these properties. Motion Model Although the SnowEater is a TMR, a differential-drive robot model [11][12][13][14][15][16][17][18] is used in the motion algorithm. Using the marker coordinate system, the robot coordinates are defined as shown in Figure 9. The longitudinal and angular velocities of the robot, v and φ̇, are related to the right and left track velocities vR and vL, respectively, by

v = (vR + vL)/2   (1)

φ̇ = (vR − vL)/d   (2)

where d represents the distance between the left and right tracks. Among these values, φ̇ is used as the control input signal for path following. The velocity v is selected in relation to the snow-processing mechanism of the SnowEater robot [5]. Using the robot orientation angle φ, the robot velocity in Cartesian coordinates is expressed as

ẋ = v cos φ,  ẏ = v sin φ   (3)

By differentiating the polar position relations x = −r cos θ, y = −r sin θ and using Equation (3), the velocity in polar coordinates is expressed as

ṙ = −v cos(θ − φ),  θ̇ = (v/r) sin(θ − φ)   (4)

where r is the distance from the marker and θ is the direction of the marker as seen from the robot. The relative angle θ − φ (= α) corresponds to the recognized marker position, and it is used in the motion controller described below.
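As a minimal implementation sketch of the pixel-degree relation and Equations (1)-(4) (the calibration ratio and track separation below are assumed illustrative values, not the prototype's parameters):

```python
import math

DEG_PER_PIXEL = 0.1   # hypothetical calibration: degrees per pixel of horizontal offset
IMAGE_WIDTH = 640     # PlayStation Eye horizontal resolution
D = 0.25              # assumed track separation d [m], not reported for the prototype

def recognized_alpha(marker_cx_px: float) -> float:
    """Relative angle alpha = theta - phi [rad], read from the marker blob's image position."""
    return math.radians((marker_cx_px - IMAGE_WIDTH / 2) * DEG_PER_PIXEL)

def track_speeds(v: float, phi_dot: float) -> tuple[float, float]:
    """Invert Equations (1)-(2): commanded (v, phi_dot) -> (v_R, v_L)."""
    return v + phi_dot * D / 2.0, v - phi_dot * D / 2.0

def polar_rates(v: float, r: float, alpha: float) -> tuple[float, float]:
    """Equation (4): rates of the marker distance r and marker direction theta."""
    return -v * math.cos(alpha), (v / r) * math.sin(alpha)

# Example: marker blob 80 px right of the image center, creeping forward at 0.05 m/s
print(round(recognized_alpha(400), 3))   # ~0.14 rad (8 deg)
print(track_speeds(0.05, 0.2))           # -> (0.075, 0.025): differential turn command
```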
Recursive Back-Forward Motion A recursive back-forward motion was created to cover the working area. Figure 10 shows the travel route of the robot. The robot trajectory consists of two motions: Motion 1 follows a linear path, while Motion 2 is used to connect two consecutive linear paths through a curved motion in the vicinity region. Point A1 is the initial point of the task. The robot moves along the line segment from A1 to B1 using a straight motion (Motion 1). Point B1 is at the limit of the working area in the distant region. After reaching B1, the robot approaches point C1 in the vicinity region using a straight motion (Motion 1). Later, the robot travels to point A2 using a curved motion (Motion 2). Then, the robot embarks upon a new linear path. Motion 1 control is executed using only the recognized marker position, while Motion 2 uses the distance from the marker in addition to the recognized marker position. With this combined motion, the low-resolution limitation is overcome. The camera-marker direction is obtained from the information in the image using a geometric relation. Figure 11 shows this relation, where the distances in the camera image correspond to the angles α, θ and φ. Point C is the image center, which lies on a straight heading in front of the robot. Since point P (the projective point of the robot onto the Y axis) is not recognized in the image, the orientation angle is not detected directly in the camera image. Hence the angles θ and φ are unknown, even in terms of pixels. However, the relative angle between the marker location and the robot direction is available in terms of pixels (θ − φ = α). In the following discussion, the angles and their pixel representations are not distinguished. Control in Distant Region (Motion 1) Assuming the track speeds vR and vL can be manipulated, the feedback controller is expressed using the angular velocity (φ̇) as the input signal. Throughout the travel, accurate localization can be expected at points A1, C1, A2, C2, …, because of the short camera-marker distance. Motion 1 navigation is executed by a simple direction controller using only the relative angle in the feedback law:

φ̇ := Kα = K(θ − φ)   (5)

where K is a positive feedback gain. A linear path can be executed when the marker is kept at the image center (α = 0). Note that this control does not provide asymptotic stability of the target path itself, because the tracking error from the target path cannot be included in control Equation (5) without accurate localization. The path-following error depends on the initial robot position. For this reason, the initial positioning is executed in the vicinity region. Motion 2 control is described later. Considering the snow-removal task, this method is a reasonable solution because strict tracking is not required. Moreover, the certainty of returning to the marker position is a good advantage. The basic property of the direction control used in Motion 1 is described below. The control purpose is to converge the relative angle α (= θ − φ) to zero. This convergence can be checked by the Lyapunov function [32]:

V1 = (1/2)(θ − φ)² = (1/2)α²   (6)
The basic property of the direction control used in Motion 1 is described below. The control purpose is to converge the relative angle α (= θ − φ) to zero. This convergence can be checked by a Lyapunov function [32]:

V₁ = α²/2 = (θ − φ)²/2  (6)

By differentiating Equation (6) along the solutions of Equations (4) and (5),

V̇₁ = α(θ̇ − φ̇) = (v/r) α sin α − Kα²  (7)

A sufficient condition for the negative definiteness of V̇₁ is:

K > v/r  (8)

When the distance r becomes small, the condition is not satisfied. One solution for this issue is to change the velocity as the distance gets smaller, to v = cr, where c is a positive constant. Under this condition, Equation (7) becomes:

V̇₁ ≤ 2(c − K)V₁  (9)

If there exists ε > 0 such that 2(c − K) ≤ −ε, then:

V̇₁ ≤ −εV₁  (10)

Therefore, for any δ > 0, there exists a time T such that for all t > T, |θ − φ| < δ. The reachability of the vehicle to the marker can be understood directly from Equation (4), because the robot is oriented to the marker: the distance decreases when |θ − φ| ≤ π/2. In Appendix C, the behavior of this controller and of a conventional PI controller using the available signals is shown.

Control for Switching the Target Line (Motion 2)

In Motion 2, the robot changes the target path from the current target line (path i) to the next line (path i + 1). During this phase, the robot trajectory is a curve convergent to the next straight target line (path i + 1). To achieve this, a modification is made to feedback Equation (5):

φ̇ := K(α + α_d)  (11)

where α_d is added to change the robot direction. The value of α_d must be designed to move the robot onto the next target line. We define the value of α_d as a function of the camera-marker distance (α_d = α_d(r)), as described below.

Considering a polar coordinate system oriented to the next target line (path i + 1), the curved path is defined to satisfy the relation r = Aθ, where A is a constant. This path smoothly converges to the origin of the O-YX coordinate system. Relying on the good accuracy of the camera localization within the vicinity region, the initial position is assumed to be known. Figure 12 shows this path. The value of A is selected to create the curved path; because the initial position is known, A can be determined. Next, θ is related to r by:

θ = r/A  (12)

Since the robot motion follows Equation (3), the orientation angle can be expressed as φ = tan⁻¹(ẏ/ẋ). Then, α_d is defined by substituting this expression and the path relation of Equation (12) into the definition of the relative angle, which yields α_d as a function of r (Equation (16)). The parameter A has to be selected so that the marker remains within the camera vision range. In this particular case the vision range is limited and r ≅ x (because the y value is small compared to x). This approximation makes the implementation much easier, and Equation (16) can then be approximated as a function of x alone (Equation (17)). Additionally, the minimum marker recognition distance (B) must be considered. The coordinate system is shifted from O-YX to O'-Y'X'; hence Equation (17) becomes Equation (18), with x replaced by x − B.

In this method, the asymptotic stability of the path itself is not provided. The stability of the direction control used in Motion 2 is checked by using a Lyapunov-like function, differentiated along the solution of Equation (4) and the feedback Equation (11). Note that the error can be suppressed by selecting the correct parameter K. The velocity must be small when the robot is closer to the marker to avoid the growth of the error.
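The convergence argument of Equations (6)-(10) can also be checked numerically. The sketch below integrates the closed loop of Equations (4) and (5) with the distance-proportional speed v = cr and a gain satisfying K > c; the gains, time step, and initial conditions are illustrative assumptions.

```python
import math

def wrap(angle):
    return (angle + math.pi) % (2 * math.pi) - math.pi

K, c, dt = 2.0, 0.5, 0.01          # K > c, so that 2(c - K) < 0 in Equation (9)
r, alpha = 1.5, 0.8                # initial distance [m] and relative angle [rad]

for _ in range(3000):
    v = c * r                      # speed proportional to distance, v = c r
    r_dot = -v * math.cos(alpha)                        # Equation (4)
    alpha_dot = (v / r) * math.sin(alpha) - K * alpha   # theta_dot - phi_dot
    r += r_dot * dt
    alpha = wrap(alpha + alpha_dot * dt)

print(f"final r = {r:.4f} m, final |alpha| = {abs(alpha):.6f} rad")
# Expected: alpha -> 0 (bounded by V1' <= 2(c - K)V1) while r decreases.
```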
Experimental Results

Experimental verification was carried out using the small version of the SnowEater robot. The track motors of the small version are Maxon RE25 20 W motors. Each track speed is PI feedback controlled using 500 ppr rotary encoders (Copal, RE30E) with a 10 ms sampling period. The control signals are sent through a USB-serial connection between an external PC and the robot microcontrollers (dsPIC30F4012). The PC has an Intel Celeron CPU B830 (1.8 GHz) processor with 4 GB of RAM. The interface is made via Microsoft Visual Studio 2010. Figure 13 shows the system diagram. The track response to different robot angular velocity commands (φ̇) can be seen in Appendix B.

Motion 2

In this experiment, Motion 2 navigation is tested. The control objective is to move the robot to the target line. The experiment is carried out under two different conditions: (1) a hard floor, and (2) a slippery floor of 6 mm polystyrene beads. Figure 14 shows the experimental setups. Figure 15 shows a comparison of the Motion 2 experiment on a hard floor and on polystyrene beads, repeated five times.

As Figure 16 shows, direction control for the angle is accomplished. Because the feedback does not provide asymptotic stability of the path to follow, deviations occur in each experiment. If the final position is not accurate, tracking of the path generated by Motion 1 cannot be achieved. If the final error is outside the allowable range, a new curve using the Motion 2 control needs to be generated again. Due to the marker proximity, tracking of the new path is more accurate.

Motion 1 in Outdoor Conditions

To confirm the applicability of this path-following strategy in snow environments, outdoor experiments were carried out. These experiments were conducted at different times of the day under different lighting conditions. Figure 17 shows the setup of the test area. The test area measured 3000 mm × 1500 mm. The snow on the ground was lightly compacted, the terrain was irregular, and conditions were slippery. For the first experiment, the snow temperature was −1.0 °C and the air temperature was −2.4 °C, with good lighting conditions. Figure 18 shows the experimental results for Motion 1 in outdoor conditions. In both experiments, the robot was oriented toward the marker and the path to follow was 3000 mm long. The colored lines in the figures highlight the path followed. As can be seen in Figure 18, the robot follows a straight-line path while using Motion 1.

Motion 2 in Outdoor Conditions

Figure 19 shows the results for the Motion 2 experiment conducted under the same experimental conditions as the Motion 1 experiment. The colored lines highlight the previous path (brown), the path followed (red), and the next path (white).

In the left-hand photograph in Figure 19, the next path is in front of the marker. In the right-hand photograph, the next path has an angle with respect to the marker. The robot covered the testing area using the recursive back-forward motion. For a reference angle of 4.5° between line paths, the average results are 4° for motion on a hard floor and 4.8° for motion on compacted snow. The number of paths in the experiment on snow is different because the test area on snow is smaller than the test area on the hard floor. Although the motion performance on snow is reduced compared to the motion on a hard floor, the robot returned to the marker and covered the area. Table 1 shows the results of Figure 20.

Recursive Back-Forward Motion

In snow-removal and cleaning applications, full area coverage rather than strict path following is required. Therefore, this method can be applied to such tasks.

Figure 21 shows the recursive back-forward motion experiment on snow in outdoor conditions. The snow temperature was −1.0 °C and the air temperature was −1.3 °C. The results in Figure 21 confirm the applicability of the method under outdoor snow conditions. Due to poor lighting, the longest distance traveled was 1500 mm. For longer routes, a bigger marker is required; to improve the library performance in poor lighting conditions, the luminous markers shown in [31] can be used.
Conclusions

In this paper, we presented a new approach for a snow-removal robot utilizing a path-following strategy based on a low-cost camera. Using only a simple direction controller, an area with radially arranged line segments was swept. The advantages of the proposed controller are its simplicity and reliability.

The required localization values, as measured by the camera, are the position and orientation of the robot. These values are measured in the marker vicinity in a stationary condition. During motion, or in the region distant from the marker, only the position of the marker blob in the captured image is used.

With a 100 mm × 100 mm square monochromatic marker, a low-cost USB camera (640 × 480 pixels), and the ARToolKit library, the robot followed the radially arranged line paths. For a reference angle of 4.5° between line paths, the average results are 4° for motion on a hard floor and 4.8° for motion on compacted snow. Under good lighting conditions, 3000 mm long paths were traveled. We believe that with this method our intended goal area can be covered.

Although the asymptotic stability of the path is not provided, our method presents a simple and convenient solution for SnowEater motion in small areas. The results showed that the robot can cover the whole area using just one landmark. Finally, because the algorithm grants area coverage, it can be applied not only to snow removal but also to other tasks such as cleaning.

In the future, the method will be evaluated by using the SnowEater prototype in outdoor snow environments. Also, the use of natural passive markers (e.g., houses and trees) will be considered.

Appendix A

The camera, the marker size, and the pattern (inside the marker) were selected after the following indoor experiments, in which three different parameters were considered: the camera, the marker size, and the pattern inside the marker.

In the first experiment, three different cameras (Buffalo USB camera, SONY PlayStation Eye, and Logicool HD Pro webcam) were tested. The cameras were set to 30 fps and 640 × 480 pixels. The marker size was 100 × 100 mm and the pattern inside the marker was the same. Figure A1 shows the results.

In the second experiment, the pattern inside the marker was changed. The camera was a Sony PlayStation Eye set to 30 fps and 640 × 480 pixels. The size of each marker was 100 × 100 mm. Figure A2 shows these results.

In the third experiment, the marker size was changed. The camera was a Sony PlayStation Eye set to 30 fps and 640 × 480 pixels. The pattern inside each marker was the same. Figure A3 shows these results.

These are the conclusions from these experiments: The camera hardware has no direct influence on the localization results when set to the same resolution and frame rate. The pattern inside the marker has no relevant influence on the localization results. The marker size is the most relevant factor in localization accuracy. A larger marker has a larger accurate region, but this region is in the vicinity of the marker and proportional to the marker size.

Figure 2. Line paths that cover the snow-removal area.
Figure 3. Monochromatic square marker and small version of the SnowEater robot.
Figure 4. Experimental setup and different camera-marker positions during the outdoor experiments.
Figure 5. Results for the robot position.
Figure 7. Recognized marker position in the camera image.
Figure 9. Motion model of the tracked robot.
Figure 11. Orientation angle of the robot in the marker coordinate system and in the camera image.
Figure 12. Curved path to reach the marker.
Figure 13. Control system diagram for the SnowEater robot.
Figure 14. Motion 2 experimental setup on a hard floor and on a slippery floor of polystyrene beads.
Figure 15. Robot position throughout the experiment.
Figure 16. Experimental results when Motion 2 is executed.
Figure 18. Motion 1 experimental results on snow in outdoor conditions.
Figure 20. Recursive back-forward motion experimental results in indoor conditions, on a hard floor and on snow. The Motion 2 region is 1200 mm; the test traveled distance is 2000 mm.
Figure 21. Recursive back-forward motion experimental results in outdoor conditions.
Figure A1. Localization and recognized marker position errors for different cameras.
Figure A2. Localization and recognized marker position errors for different patterns.
Figure A3. Localization and recognized marker position errors for different marker sizes.
Table 1. Recursive back-forward motion experimental results in indoor conditions.
2015-09-18T23:22:04.000Z
2015-04-08T00:00:00.000
{ "year": 2015, "sha1": "a1a99c96e1dd6a8dbea4b4339f6f2d82941af5d7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-6581/4/2/120/pdf?version=1428570856", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "a1a99c96e1dd6a8dbea4b4339f6f2d82941af5d7", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
211077347
pes2o/s2orc
v3-fos-license
Enhanced bactericidal effect of ceftriaxone drug encapsulated in nanostructured lipid carrier against gram-negative Escherichia coli bacteria: drug formulation, optimization, and cell culture study

Background Ceftriaxone is one of the most common types of antibiotics used to treat the most deadly bacterial infections. One way to alleviate the side effects of medication is to reduce drug consumption by changing the ordinary drug forms into nanostructured forms. In this study, a nanostructured lipid carrier (NLC) containing the hydrophilic drug ceftriaxone sodium is developed, and its effect on killing the gram-negative bacterium Escherichia coli is investigated. Methods The double emulsion solvent evaporation method is applied to prepare the NLC. Mathematical modeling based on the solubility study is performed to select the best materials for NLC preparation. Hoftyzer-Van Krevelen's and Hoy's models are employed for this purpose. Drug release from the optimized NLC is examined in an in vitro environment. Then, the efficacy of the optimized sample in killing the gram-negative bacterium Escherichia coli is investigated. Results Mathematical modeling reveals that both methods are capable of predicting drug encapsulation efficiency trends when changing the solid and liquid lipids. However, Hoftyzer-Van Krevelen's method can precisely predict the particle size trend when changing the surfactant types in the water and oily phases of the emulsions. The optimal sample has a mean particle size of 86 nm and a drug entrapment efficiency of 83%. Also, a controlled drug release from the prepared nanostructures over time is observed in in vitro media. The results regarding the effectiveness of the optimized NLC in killing Escherichia coli suggest that by cutting the drug dosage of the nanostructured form in half, an effect comparable to that of the free drug can be observed at longer times. Conclusion The results confirm that the NLC structure is an appropriate alternative for the delivery of ceftriaxone with controlled release behavior.

Introduction

Ceftriaxone sodium is an antibiotic commonly used for the treatment of bacterial infections such as middle ear infection, meningitis, bone and joint infection, intra-abdominal infection, skin infection, and pelvic inflammatory diseases [1]. Ceftriaxone vials are among the most prevalent types of antibiotics, and they are associated with one of the highest mortality rates due to vial injection [2]. Ceftriaxone produces several side effects such as diarrhea, elevated liver enzymes, elevated blood urea nitrogen, eosinophilia, thrombocytosis, and other local reactions [1]. Given the inevitability of using this antibiotic in today's health care system, it is essential to develop new prodrugs [1]. Recently, nanoparticle delivery systems have been employed for the encapsulation of lipophilic, hydrophilic, and poorly water-soluble drugs [3]. The use of lipids for the formation of nanoparticles, such as solid lipid nanoparticles (SLN) or nanostructured lipid carriers (NLC), offers multiple benefits compared to other materials, owing to low cytotoxicity and controlled drug release [4]. Controlled drug release is aimed at maintaining the drug concentration in the blood or the target tissue at the favorable level [5]. Controlled drug release can be applied to both hydrophilic and hydrophobic drugs. For example, tamoxifen hydrochloride, a hydrophilic drug, has been encapsulated in a microemulsion system with controlled release, and it is effective in breast cancer treatment in comparison with the commercial forms of the drug [6].
Raloxifene [7] and clozapine [8], as hydrophobic drugs in microemulsion and nanostructured lipid forms, have exhibited substantial improvement in drug release compared to the free drugs. To date, lipid nanoparticles have been successfully used for hydrophobic drug entrapment, though encapsulating a high content of hydrophilic drugs in these materials is challenging. The key parameters in the preparation of NLC containing hydrophilic drugs are the formation method, the selection of materials as solid and liquid lipids, and the choice of appropriate surfactants for the organic and water phases. Double emulsion solvent evaporation is a technique of NLC preparation suited to hydrophilic drugs [9]. However, the best materials for NLC preparation can be determined by mathematical methods. Group-contribution methods such as Hoy's [10,11] and Hoftyzer-Van Krevelen's [12] are excellent candidates for this purpose. These methods have been successfully applied to predict particle size and drug entrapment efficiency [13,14].

This is the first study to prepare a nanostructured form of ceftriaxone sodium based on NLC with an appropriate controlled drug release. We then test the efficacy of this nanostructure on E. coli, a highly resistant gram-negative bacterium, and compare the results to those of the free drug. Given that particle size and drug entrapment efficiency have a noticeable effect on bactericidal efficacy, we also investigate methods of determining the best materials for forming small particles with high drug encapsulation efficiency. Mathematical modeling based on group-contribution methods is utilized for this purpose. Various formulations are explored to investigate the effect of lipid types and surfactants on the particle size and drug entrapment efficiency. The accuracy of the mathematical predictions is also evaluated by an experimental study. Finally, the best NLC formulation and the more accurate mathematical model are introduced. The main objectives of the study are as follows: To determine the accuracy of mathematical modeling based on the group-contribution method in predicting the particle size and drug entrapment efficiency. To determine the effectiveness of ceftriaxone in both free and nanostructured forms in cell culture media. To find the best drug dosage in the nanostructured form to achieve antibacterial efficacy comparable to that of the free drug. To determine the rate of bacterial death using the NLC form of ceftriaxone.

Mathematical modeling

Hoftyzer-Van Krevelen's method

A group-contribution method for predicting the solubility parameter of components is Hoftyzer-Van Krevelen's method. In this method, the solubility parameter is calculated from the following equations [12]:

δ_d = ΣF_di / V_m  (1)

δ_p = (ΣF_pi²)^1/2 / V_m  (2)

δ_h = (ΣE_hi / V_m)^1/2  (3)

δ_t = (δ_d² + δ_p² + δ_h²)^1/2  (4)

In the above equations, V_m is the molar volume, δ_d and δ_p are the dispersion and polar components, δ_h is the hydrogen-bonding component of the solubility parameter, δ_t is the total solubility parameter, F_di and F_pi are the dispersion and polarization components of the molar attraction function, respectively, and E_hi is the contribution of the hydrogen-bonding force to the cohesive energy between molecules.

Hoy's method

There is another model for predicting the solubility parameter, the equations of which are as follows [11]:

δ_t = (F_t + B/n) / V_m  (5)

δ_p = δ_t [(1/α(p)) · F_p / (F_t + B/n)]^1/2  (6)

δ_h = δ_t [(α(p) − 1) / α(p)]^1/2  (7)

δ_d = (δ_t² − δ_p² − δ_h²)^1/2  (8)

α(p) = 777 Δ_T^(P) / V_m  (9)

n = 0.5 / Δ_T^(P)  (10)

In these equations, α(p) is the number of molecular aggregates in each component, n is the number of units repeated in each part of the molecular chain, and B is a constant (equal to 277) [11]. Also, F_t is the molar attraction function, F_p is its polar component, and Δ_T^(P) is the Lydersen-Hoy constant.
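As an illustration of how the Hoftyzer-Van Krevelen relations of Equations (1)-(4) are applied, the sketch below sums group contributions for a molecule and returns the partial and total solubility parameters. The group values included are placeholder numbers for demonstration only; real calculations must use the published contribution tables [12].

```python
import math

def hoftyzer_van_krevelen(groups, V_m):
    """Equations (1)-(4): solubility parameter components from group sums.

    groups: list of (F_di, F_pi, E_hi) tuples, one per structural group
            (F in (J^0.5 cm^1.5)/mol, E_h in J/mol).
    V_m:    molar volume in cm^3/mol.
    """
    delta_d = sum(g[0] for g in groups) / V_m                   # Eq. (1)
    delta_p = math.sqrt(sum(g[1] ** 2 for g in groups)) / V_m   # Eq. (2)
    delta_h = math.sqrt(sum(g[2] for g in groups) / V_m)        # Eq. (3)
    delta_t = math.sqrt(delta_d**2 + delta_p**2 + delta_h**2)   # Eq. (4)
    return delta_d, delta_p, delta_h, delta_t

# Placeholder group contributions (NOT real table values)
demo_groups = [(420.0, 0.0, 0.0), (270.0, 110.0, 2000.0), (530.0, 420.0, 10000.0)]
print(hoftyzer_van_krevelen(demo_groups, V_m=120.0))
```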
Material selection based on mathematical methods

In this study, two important NLC parameters, particle size and drug entrapment efficiency, are investigated. To examine the trend of variation of these parameters, it is necessary to calculate the solubility parameters of all components separately and then compare their numerical values with each other. The chemical structure of the substances plays a crucial role in computing the solubility parameter. The solubility parameters of the lipid components and the drug can influence the drug entrapment efficiency, while the solubility parameters of the surfactants and lipids may affect the particle size. Since natural oils are composed of several components, it is not possible to calculate their solubility parameters directly. Hence, another effective parameter, the hydrophilic-lipophilic balance (HLB), is used for comparison. The HLB value of pure lipids is calculated from the solubility parameter components (Eq. 11), while the HLB of natural oils such as sesame oil, which is a mixture of different components, is obtained from Eq. 12:

HLB_mix = W_A·HLB_A + W_B·HLB_B + W_C·HLB_C  (12)

In Eq. 11, K equals 43 in emulsion systems, δ_d and δ_p are the dispersion and polar components, respectively, and δ_h is the hydrogen-bonding component of the solubility parameter. These parameters are obtained from the Hoftyzer-Van Krevelen and Hoy methods. In Eq. 12, W_A, W_B, and W_C represent the weight fractions of components A, B, and C, respectively [15].

Materials

Ceftriaxone sodium was purchased from Exir Pharmaceutical Company (Borujerd, Iran). Stearic acid and glycerol mono-stearate as solid lipids, oleic acid as liquid lipid, soy lecithin, Span 80, polyvinyl alcohol (PVA), and Tween 80 as surfactants, and ethanol as solvent were purchased from Merck. Deionized water was used in all experiments.

Preparation of NLC

In the first step, 0.3 g of GMS (solid lipid) and 0.09 g of oleic acid (liquid lipid) are mixed with 0.95 ml of ethanol (solvent) and 0.055 g of soy lecithin (oil phase). The resulting mixture is placed in a water bath at 60 °C. Then, 0.15 ml of deionized water containing 0.2 g/l of drug is placed in a water bath (internal water phase). Due to the high hydrophilicity of the drug, it dissolves rapidly in deionized water and produces a light yellowish color. The external water phase, containing deionized water and Tween 80 at a concentration of 0.275 g/l, is prepared and placed in a water bath. Then, the oily solution is homogenized with the ultrasonic probe at 80 rpm for 5 min. Finally, the internal and external water phases are added, respectively. After 5 min of ultrasonication, the sample is cooled at 0-4 °C. In the final step, a magnet is inserted into the sample and it is stirred for 1 h at ambient temperature to evaporate the solvent [16,17].

Drug entrapment efficiency

A high-performance liquid chromatography analysis is employed to measure drug loading in the NLC structure. Drug loading efficiency is calculated from the amount of drug not loaded in the structure according to Eq. 13:

EE (%) = [(total drug − free drug) / total drug] × 100  (13)

Drug loading was calculated by a high-performance liquid chromatographic (HPLC) test using an Agilent 1260 instrument. A C18 column was used for this purpose. The column diameter and length were 4.6 mm and 100 mm, respectively. The injection volume was 50 μl with a flow rate of 1 ml per min at a pressure of 120 psi and a column temperature of 24 °C. The mobile phase for ceftriaxone detection consisted of 95% methanol and 5% water. Drug detection was conducted using a UV-visible detector at a wavelength of λ = 240 nm. Figure 1 shows the HPLC diagram.
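Worked examples of Eqs. 12 and 13 above: the mixture HLB is a weight-fraction average of the component HLB values, and the entrapment efficiency is computed indirectly from the free (unloaded) drug found in the supernatant. The fatty-acid HLB values and drug masses used below are placeholders for illustration only.

```python
def hlb_mixture(weight_fractions, hlb_values):
    """Eq. 12: weighted-average HLB of a natural oil from its components."""
    assert abs(sum(weight_fractions) - 1.0) < 1e-6, "fractions must sum to 1"
    return sum(w * h for w, h in zip(weight_fractions, hlb_values))

def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """Eq. 13: EE (%) from the unloaded drug measured in the supernatant."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Mid-range sesame-oil composition (linoleic, oleic, palmitic, stearic) paired
# with placeholder HLB values for each fatty acid -- illustrative numbers only
print(round(hlb_mixture([0.49, 0.36, 0.10, 0.05], [16.0, 17.0, 15.5, 15.0]), 2))
print(round(entrapment_efficiency(total_drug_mg=30.0, free_drug_mg=5.1), 1))  # ~83 %
```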
As can be seen in Figure 1, the drug retention time is 1.149 min. To calculate the unloaded drug, 5 ml of each sample was centrifuged at 14000 rpm for 20 min at 4 °C. Then, the supernatant solution was passed through a 0.22 μm filter. After diluting the solution with deionized water at a 1:10 ratio, it was transferred to the chromatography apparatus.

Characterization

Particle size analysis (PSA) and zeta potential of the lipid nanoparticles were measured by light scattering analysis (Vasco 3, Cordouan, France). The NLC was diluted with distilled water to obtain a suitable scattering intensity. The morphology of the nanoparticles was examined by TEM analysis (Leo 912 AB). The surface attributes of the sample were determined with an AFM analyzer (Ara Pazhoohesh, Iran). The crystallography of the samples was examined by X-ray diffraction (XRD) analysis (Bruker D8 Advance, Germany) for the lipid, the drug, the drug-loaded NLC, and the drug-free NLC. To investigate whether all materials are present inside the NLC, FT-IR analysis was conducted for the above four samples (Thermo Nicolet, USA).

In vitro drug release

To examine drug efficacy in a blood-resembling environment, drug release was measured under in vitro conditions. PBS buffer at pH = 7.4 was selected for this purpose. Five milliliters of the NLC sample containing ceftriaxone sodium was placed in a dialysis bag (12 kDa). The same volume of suspension with an identical content of free drug was poured separately into a dialysis bag. The bags were then floated in 50 ml of buffer for 48 h in an incubator shaker at 37 °C. After 0.5, 1, 2, 6, 8, 24, and 48 h, 1 ml of sample was removed from the buffer and an equal volume of fresh buffer was immediately returned to the system. The samples were sent to the HPLC machine to determine the amount of drug. According to the HPLC calibration curve, the drug content in each sample was calculated. Then, the profile of drug release versus time was plotted for both the pure drug and the nanostructure samples. It should be noted that the drug content removed at each sampling time was added back in the subsequent calculations, so the cumulative release of the drug was considered.

Cell culture

In this study, we used a 24-h culture of Escherichia coli strain ATCC 35218. For this purpose, the bacterial cell culture was performed in a Mueller-Hinton agar medium, and a suspension equivalent to 0.5 McFarland was prepared in normal saline (0.85%). Under these conditions, the number of bacteria is about 1.5 × 10⁸ CFU/ml. This suspension was used in the next steps of the experiment [18]. Antibiotic solutions, in free and nanostructured forms, were freshly prepared in sterile distilled water on the day of use.

MIC determination

To determine the minimum inhibitory concentration (MIC) of ceftriaxone in free and nanostructured forms, the broth dilution method (macrodilution) was used following Clinical and Laboratory Standards Institute (CLSI) guidelines [19]. The antibiotic solution (ceftriaxone sodium) was prepared in pre-sterilized water and preserved frozen at −70 °C until use. In the serial dilution, we used concentrations of 0.0078, 0.0156, 0.0312, 0.0625, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, and 128 μg/mL, plus one tube as the positive control (no antibiotics). We prepared a 24-h bacterial culture as a sterile physiological saline suspension equivalent to 0.5 McFarland standard turbidity.
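The two-fold serial dilution just described can be generated programmatically; the minimal sketch below reproduces the concentration ladder listed above, from 128 μg/mL down to 0.0078 μg/mL.

```python
def twofold_series(c_max=128.0, n=15):
    """Two-fold (macro)dilution series in ug/mL, highest to lowest."""
    return [c_max / 2 ** i for i in range(n)]

series = twofold_series()
print([round(c, 4) for c in reversed(series)])
# -> [0.0078, 0.0156, 0.0312, 0.0625, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128]
```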
The cultures were followed by incubation of the media containing antibiotics, which were transferred to a 35 °C incubator and kept there for 24 h to allow full growth. After 24 h, the tubes were examined visually for turbidity as a sign of bacterial growth. Any type of opacity (diffuse or localized) was considered as growth at that dilution. The minimum antibiotic concentration without visible bacterial growth was taken as the MIC [19].

Bacterial death kinetics

To plot the bacterial death curve at the specified times, E. coli was cultured on a nutrient agar medium at 37 °C for 24 h. In the next step, 3-5 colonies of the pure culture were removed and inoculated in Mueller-Hinton broth medium, yielding a suspension equal to 0.5 McFarland turbidity in normal saline (0.85%). Under these conditions, the number of bacteria was about 1.5 × 10⁸ CFU/ml. Then, 100 μl of this standard suspension was added to culture tubes containing 2 ml of Mueller-Hinton broth medium, which contained the free form of ceftriaxone sodium and nanostructured ceftriaxone sodium at various concentrations. The tubes were placed at 35 °C with continuous movement in a shaker incubator. Then, after 2, 4, 6, 8, and 10 h, using 100 μl of each sample, the number of living bacteria was counted using the sequential dilution method [20].

Mathematical results

The results of the solubility parameters calculated from the Hoy and Hoftyzer-Van Krevelen methods are presented in Table 1. To study the effect of changing materials on drug entrapment efficiency, it is necessary to calculate the difference between the solubility parameters of the drug and the lipids. For liquid lipids, the HLB value is examined. Sesame oil is composed of linoleic acid (39-59%), oleic acid (35-54%), palmitic acid (10%), and stearic acid (5%). These values were extracted from our previous study based on GC analysis [21]. The results are shown in Table 2. As shown in Table 2, in both methods of solubility parameter calculation, the drug-GMS solubility parameter difference is lower than the drug-stearic acid difference. It is therefore anticipated that the nanocarrier prepared with GMS will have a higher drug content than nanocarriers prepared with stearic acid. However, drug molecules are usually dissolved in the liquid lipid, too. The HLB values of the various lipids are reported in Table 2. As shown in Table 2, the HLB values of the lipids are not significantly different, indicating a similar ratio of hydrophobic to hydrophilic character for these lipids. However, since ceftriaxone sodium has a hydrophilic nature, it dissolves better in the more hydrophilic lipid. Therefore, based on the HLB calculation, the selected solid and liquid lipids will be glycerol mono-stearate (HLB_GMS > HLB_SA) and oleic acid (HLB_oleic acid > HLB_sesame oil), respectively. The entrapment efficiency predicted for the solid-liquid lipid pairs increases as follows: stearic acid-sesame oil < stearic acid-oleic acid < GMS-sesame oil < GMS-oleic acid. The accuracy of the predicted trend can be assessed with the experimental data later. To predict the trend of NLC particle size with various solid and liquid lipids, we investigated the difference between the solubility parameters of the solid lipids and the surfactants (Table 3). A lower difference between the solubility parameters of the surfactants and the solid lipid indicates a smaller nanoparticle size. Since the content of solid lipid is higher than that of liquid lipid, the effect of the solid lipid solubility parameter on nanoparticle size is expected to be greater than that of the liquid lipid.
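The selection logic above (a smaller solubility-parameter difference implies higher expected compatibility and drug loading) reduces to a simple ranking. The sketch below orders candidate lipids by |δ_drug − δ_lipid| using placeholder parameter values; the actual values are those reported in Table 1.

```python
def rank_by_compatibility(delta_drug, candidates):
    """Rank candidate lipids by |delta_drug - delta_lipid| (smaller = better)."""
    return sorted(candidates.items(), key=lambda kv: abs(delta_drug - kv[1]))

# Placeholder total solubility parameters in MPa^0.5 (illustrative only)
lipids = {"glycerol mono-stearate": 19.2, "stearic acid": 17.8}
for name, delta in rank_by_compatibility(delta_drug=24.5, candidates=lipids):
    print(f"{name}: |d(delta)| = {abs(24.5 - delta):.2f} MPa^0.5")
```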
As depicted in Table 3, based on Hoftyzer-Van Krevelen's method, the internal-phase surfactant whose solubility parameter is closest to that of both solid lipids is soy lecithin. Nevertheless, this parameter is predicted by Hoy's method in a completely different manner: it introduces Span 80 as the best surfactant for the system containing glycerol mono-stearate and soy lecithin for the stearic acid solid lipid. With regard to the external water phase surfactant, Hoftyzer-Van Krevelen's method recommends Tween 80 and Hoy's method recommends PVA for both solid lipid systems. Besides, in both methods, the solubility parameter difference in the system containing glycerol mono-stearate is lower than in the system containing stearic acid. Finally, as predicted by Hoftyzer-Van Krevelen's method, the system containing glycerol mono-stearate, soy lecithin, and Tween 80 should give the minimum size, while according to Hoy's method the system containing glycerol mono-stearate, Span 80, and polyvinyl alcohol should give the minimum particle size. It should be noted that the effect of the external aqueous phase surfactant on decreasing the nanoparticle size is more significant than that of the internal water phase surfactant. The accuracy of the mathematical methods can be verified experimentally.

Comparing mathematical and experimental results

The results for nanoparticles prepared with various components are displayed in Table 4. It is worth noting that we used the same amounts of materials in all formulations and only changed the material type. As Table 4 shows, according to the experimental results, the drug entrapment efficiency increases with the solid-liquid lipid pair as follows: stearic acid-sesame oil < stearic acid-oleic acid < GMS-sesame oil < GMS-oleic acid. The comparison of the mathematical predictions and experimental results confirms the accuracy of both the Hoftyzer-Van Krevelen and Hoy methods for this trend. The trend of particle size change based on the experiments is as follows: stearic acid-sesame oil > stearic acid-oleic acid > GMS-sesame oil > GMS-oleic acid. According to the mathematical models and Table 3, Hoftyzer-Van Krevelen's method predicts that, for both solid lipids, the internal-phase surfactant with the closest solubility parameter is soy lecithin. The sample prepared with soy lecithin as the surfactant in the internal water phase indeed has a smaller nanoparticle size; the results in Table 4 corroborate this prediction. However, the prediction of this parameter with Hoy's method is completely different. According to Hoy's method, Span 80 is the best surfactant for the system containing glycerol mono-stearate and soy lecithin is the best surfactant for the stearic acid lipid. This prediction is inconsistent with the experimental results. As far as the aqueous external phase surfactant is concerned, Hoftyzer-Van Krevelen's method recommends Tween 80, and Hoy's method recommends polyvinyl alcohol, for both solid lipid systems. According to Table 4, the particle size of the sample with Tween 80 as the external water phase surfactant is smaller, which confirms the prediction of Hoftyzer-Van Krevelen's method. Finally, the system containing glycerol mono-stearate and oleic acid as the solid and liquid lipids, with soy lecithin and Tween 80 as the surfactants of the internal and external water phases, was chosen as the optimum sample, with a small particle size and high drug loading.
Characterization of optimized structure

Characterization was performed after the determination of the optimum sample. Figure 2 shows TEM, PSA, and AFM images of the optimum sample. As Fig. 2a illustrates, the NLC particles have a spherical structure with a mean particle size of 70 nm, while the DLS calculation estimated a particle size of 86 nm (Fig. 2b). The TEM and DLS results are relatively close, and the difference between the TEM and DLS analyses is due to the measurement method: the DLS analysis measures the hydrodynamic diameter of the particles in solution, giving a larger apparent diameter [22,23]. Moreover, the polydispersity index (PDI) of 0.131 reveals a uniform distribution of particles, which is in line with the AFM images (Fig. 2c). PDI values close to zero indicate higher stability of the nanoparticles [24]. The zeta potential of the sample is −20.26 mV, which is acceptable for lipid nanoparticles. This value can be attributed to negatively charged functional groups on the structure, which generate sufficient repulsion to prevent agglomeration of the particles in solution over short times.

FT-IR analysis was used to examine the functional groups on the nanocarrier surface. Figure 3a shows the results of the FT-IR analysis for ceftriaxone sodium, glycerol mono-stearate, and the nanocarrier containing ceftriaxone sodium. The FT-IR spectrum of ceftriaxone sodium exhibits a characteristic band at 3444.63 cm⁻¹ [26]. As shown in Fig. 3a, the FT-IR spectrum obtained from the NLC sample containing ceftriaxone sodium resembles that of glycerol mono-stearate. This indicates that the drug has been successfully encapsulated in the lipid structure. By comparing the spectra of the nanoparticles to those of each pure component, the presence of all materials can be verified. The peak at 1649.02 cm⁻¹, caused by the amide C=O, confirms the presence of the drug in the NLC structure. Due to the low drug content in the nanostructure, the drug-related peaks are weaker.

In the next step, XRD patterns of ceftriaxone sodium, GMS, and NLC with and without drug were measured. According to Fig. 3b, ceftriaxone sodium has sharp peaks at 11.18°, 12.56°, 18.92°, 21.24°, 22.74°, 23.80°, 25.20°, and 28.28° [27]. Also, glycerol mono-stearate has sharp peaks in the range of 19.97° to 23.38°, indicating a crystalline structure [28]. However, in the XRD patterns of the NLCs with and without drug, the peaks are broader and less sharp than those of the pure materials. These results indicate the amorphous structure of the prepared NLCs. The change in crystallographic nature can be attributed to the presence of other materials such as surfactants and liquid lipids. Comparing the peaks of the pure drug to those of the drug-loaded NLC reveals that there is no comparable peak in the NLC pattern. This implies that the drug has been successfully loaded into the nanocarrier [29,30].

In vitro drug release

The graphs in Fig. 4 show the release of the drug in both the commercial and NLC forms. As can be seen, the commercial drug shows burst release, while there are two stages of drug release in the NLC form: burst and controlled release. About 42% of the drug is released after 6 h from both formulations. In the case of the commercial market ampoule, the rapid release persists for 10 h and then the drug release rate stabilizes (at about 80%). However, the gradual release from the nanocarrier sample in this study lasted for 72 h with a moderate gradient.
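The cumulative-release bookkeeping described in the release protocol (1 ml withdrawn and replaced with fresh buffer at each time point) can be sketched as follows. The concentration values below are hypothetical; the correction simply adds back the drug mass removed with earlier samples.

```python
def cumulative_release(concs_mg_ml, v_total=50.0, v_sample=1.0, dose_mg=10.0):
    """Percent cumulative release with a sampling-volume correction.

    concs_mg_ml: HPLC drug concentration at each time point. At every point,
    v_sample ml is withdrawn and replaced with fresh buffer, so the mass
    removed with earlier samples is added back to the current mass.
    """
    released, removed = [], 0.0
    for c in concs_mg_ml:
        mass_now = c * v_total + removed        # correct for prior withdrawals
        released.append(100.0 * mass_now / dose_mg)
        removed += c * v_sample                 # mass leaving with this sample
    return released

# Hypothetical concentrations at 0.5, 1, 2, 6, 8, 24, and 48 h
print([round(p, 1) for p in cumulative_release([0.01, 0.02, 0.045, 0.084,
                                                0.095, 0.13, 0.15])])
```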
The efficacy of the two drug forms will be further determined by conducting experiments with bacterial cultures.

Cell culture results

The results of the MIC analysis showed that a 2 μg/ml concentration of ceftriaxone in free form and a 1 μg/ml concentration of ceftriaxone in nanostructured form inhibited the growth of Escherichia coli. The bactericidal activity of ceftriaxone sodium at different concentrations in both free and nanostructured forms was studied by counting bacteria after 0, 2, 4, 6, 8, and 10 h. The results of the death kinetics test are presented in Fig. 5. In this case, we investigated the variation in the number of bacteria after 0, 2, 4, 6, 8, and 10 h for the nanostructured form of ceftriaxone sodium at its MIC, for the free form at different concentrations, and for an antibiotic-free sample. As Fig. 5 shows, at the same time point, the number of bacteria in the system containing the nanostructured form of the drug at a concentration of 1 μg/ml is lower than that in the system containing the free drug at a concentration of 2 μg/ml.

Discussion

In this study, a double emulsion solvent evaporation method was used to construct an NLC for the delivery of ceftriaxone sodium. The Hoftyzer-Van Krevelen and Hoy methods were applied to calculate the solubility parameters. Based on the predictions of the two mathematical models and the experimental results, both the Hoy and Hoftyzer-Van Krevelen models appear to be strong predictors of drug loading in different NLC systems. However, only the Hoftyzer-Van Krevelen model is capable of predicting the NLC components that achieve a smaller particle size, whereas Hoy's method is unable to predict this trend accurately. These results align well with those reported in the literature. Tian et al. demonstrated that the solubility parameter can be a suitable guide for the design and identification of a stable micellar system with a high drug loading capacity [31]. Bahrami et al. argued that solubility parameters based on the group-contribution model offer a useful guide for preparing NLCs with various formulations containing fentanyl citrate as the hydrophobic drug. They illustrated that the surfactant-lipid solubility parameter had a bearing on the nanoparticle size, while the drug-lipid solubility parameter affected drug entrapment efficiency [13].

The in vitro drug release at pH = 7.4 indicates the controlled release of the drug from the structure and confirms the intended behavior of the proposed nanostructure. The burst release of the antibiotic from the nanostructure in the early hours after injection leads to bacterial death; then, the gradual release from the nanocarrier over time, with a mild gradient, can diminish bacterial resistance and enhance drug efficacy. This claim is supported by the cell culture study. The kinetics curve of bacterial mortality indicates that the system containing the nanostructured form can sustain bacterial killing for a longer time. Moreover, the slope of the curve for the nanostructured sample is more controlled than that for the free drug. Furthermore, it suggests that by cutting the drug dosage of the nanostructure in half, the same efficacy can be achieved. It should be noted that the permeability of E. coli to antibiotics and antimicrobial agents is low. It is a gram-negative bacterium with a thinner peptidoglycan layer and a lipopolysaccharide outer membrane. For this reason, E. coli's resistance is higher than that of bacteria such as Staphylococcus aureus [32,33].
Changing the common form of the drug to a nanostructure with controlled drug release can undermine the resistance of the bacterial membrane wall over time and improve the bacterial mortality rate. These observations are aligned with Kumar et al.'s study. They prepared an SLN structure containing ceftriaxone sodium and examined its inhibitory effect on E. coli, with their results revealing a slower and sustained release of the drug from the SLN structure. The drug release profiles in their study exhibited sustained release from the SLN structure, where only 6% of the drug was released from the nano-formulation in 24 h, whereas 80% was released for the drug alone [1].

The present study has some strong points worth mentioning. The use of mathematical modeling helps overcome some of the issues associated with selecting the best materials without experiments, and it can inform assessment methods and modeling in this field. Moreover, investigating the effect of the NLC form of the drug at various doses on E. coli, a highly resistant gram-negative bacterium, is another advantage of this research. However, there are a number of drawbacks that should be mentioned. In the NLC preparation, we changed one parameter at a time and kept the others constant; hence, the inter-variable relationships were not investigated [34]. For the cell culture study, strict control is required to perform the experiments properly. At the same time, a greater degree of control makes the experiments artificial. These problems can be addressed by performing several experiments to get a clearer picture of the process [35]. Future studies can take measures to address this shortcoming.

A number of limitations should also be noted. First, the NLC was prepared using the double-emulsion solvent evaporation method with an ultrasonication device; future studies can utilize other methods and investigate their effects on the particle size and drug entrapment efficiency. Second, we focused on the impact of the NLC form of ceftriaxone on E. coli in a cell culture study, while the nanostructured form of ceftriaxone sodium may also be effective against other types of gram-positive and gram-negative bacteria. A direction for future research can be evaluating the impact of the nanostructured form of ceftriaxone on various resistant bacteria. Finally, we only measured the effect of the NLC form of the drug in an in vitro environment; future research can conduct similar tests in in vivo media.

Conclusion

In the present study, a nanostructured lipid carrier was prepared to load ceftriaxone sodium as a hydrophilic drug. The effect of the drug on E. coli was investigated in bacterial culture media. Based on the mathematical modeling, the Hoftyzer-Van Krevelen model appeared to be more accurate than Hoy's method in predicting the trend of NLC particle size with different materials. Nevertheless, both the Hoftyzer-Van Krevelen and Hoy methods were able to predict the trend of change in drug entrapment efficiency accurately. Based on the bacterial death kinetics, the results revealed that the NLC form of the drug had higher antibacterial activity against the gram-negative bacterium E. coli than the free drug. The greater antibacterial effect of the drug at a lower dose in the NLC form is another important finding of this study with regard to antibiotic dose reduction and the cost-effective treatment of resistant microbes.
2020-02-12T14:04:18.594Z
2020-02-10T00:00:00.000
{ "year": 2020, "sha1": "ee01244076d46044c0654ac78ea5609cadce8710", "oa_license": "CCBY", "oa_url": "https://aricjournal.biomedcentral.com/track/pdf/10.1186/s13756-020-0690-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5b8d34b3913823c3134369663be4c087b009011f", "s2fieldsofstudy": [ "Medicine", "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
264314248
pes2o/s2orc
v3-fos-license
Cracking process in expansive soil with and without vegetation covers in dry and rainy seasons at field scale

ABSTRACT The presence of desiccation cracks in the soil alters its hydromechanical behavior, increasing the soil's water infiltration capacity and mobilizing the potential for expansion. This may affect the performance of the structural elements of constructions. This study aimed to evaluate the mechanics of expansion, contraction, and cracking of the expansive soil of Paulista - Pernambuco, Brazil, through field trials subjected to wetting and drying cycles. The studied soil is a sandy silty clay of high compressibility with medium to very high expansion potential. The process of formation and propagation of cracks was analyzed using digital images and the monitoring of samples subjected to drying and wetting cycles. The indices of crack geometry increased with the advancement of desiccation but did not stabilize; during the wetting period, they tended to close. The pattern of cracks in the tests varied according to the presence or absence of vegetation. It was concluded that the vegetation cover has a significant influence on the crack pattern and on the crack formation and propagation process.

INTRODUCTION

Expansive soils are problematic for buildings and infrastructure worldwide, causing socioeconomic and environmental damage due to volumetric variations under moisture changes. Two main requirements are needed for a soil to display expansiveness: an intrinsic factor, related to the soil mineralogical composition, texture, and structure, and an extrinsic factor, capable of transferring moisture from one point of the soil to another, related to climatology, hydrogeology, vegetation, and land occupation (Chen, 1988; Ferreira, 1995; Nelson et al., 2015). They are often identified in arid or semi-arid regions where evapotranspiration exceeds volumetric precipitation. Expansive soils increase in volume when flooded and decrease in volume with water evaporation. Climatological conditions affect soil moisture, increasing or decreasing the volume (Ferreira, 1995; Guerreiro et al., 2021).

Studies have been carried out to evaluate the influence of the surroundings on soil moisture, such as the influence of air humidity, vegetation, and rainy seasons. Seasonal variations in the environment subject the soil to alternating dry and wet periods. It has already been observed that in a dry period longer than 30 days, micro-cracks open in the soil (Guerreiro et al., 2021, 2022). During dry periods, the soil shrinks and forms cracks that propagate, altering its mechanical and hydraulic properties. Hydraulic conductivity increases due to the formation of cracks, generating a quick and direct movement of water and solutes from the soil surface to the permeable substrate; in compacted landfills, cracks reduce strength and generate infiltration and percolation problems (Chen, 1988; Fredlund and Rahardjo, 1993; Ferreira and Ferreira, 2009).

As mentioned, in addition to the climatological factor, the volume of water in the soil is also influenced by vegetation, which extracts water and dissolved minerals through its roots. Thus, vegetation reduces the saturated permeability and increases the air-entry suction of soils. At the same time, roots in the soil generate greater tensile strength, which tends to suppress cracks and swelling (Jotisankasa and Sirirattanachat, 2017; Ferreira et al., 2020).
Hence, there are five main points to be discussed about soil desiccation cracking in vegetated soil. First, transpiration-induced soil suction increases tensile stress along the crack plane. Second, root reinforcement resists soil deformation. Third, plant root exudates can alter soil aggregation. Fourth, tree spacing and plantation practices influence crack initiation. Finally, plant traits such as stomatal conductance and root and shoot characteristics affect crack patterns (Bordoloi et al., 2020). In this paper, the focus is on the first two points.

As already mentioned, roots in the soil absorb water and cause an increase in soil suction, reducing its hydraulic conductivity and increasing soil shear strength (Ng and Menzies, 2007; Ng and Leung, 2012). Furthermore, this vegetation-induced suction results in a change in the effective soil stress, which can cause soil shrinkage and initiate the cracking process (Fredlund and Rahardjo, 1993). It has been recognized for years that this root desiccation process can cause damage to shallow foundations and road paving in clayey soils (Croney and Lewis, 1948; Ward, 1948; Parry, 1992).

So, roots have a mechanical effect on the soil, which can increase the apparent soil cohesion, also called root cohesion. The amount of increase depends on the density and strength of the roots; these characteristics in turn depend on the age and growth rate of the roots, in addition to the plant species (Genet et al., 2005; Loades et al., 2015; Boldrin et al., 2017). Comparing roots with natural fibers, it can be stated that because natural fibers increase the soil tensile strength, the crack resistance of the soil is improved and hence the CIF is reduced (Consoli et al., 2011; Tang et al., 2021).

Roots in the soil induce suction, which can decrease permeability, increase the tensile stress in the soil, and accelerate the cracking process. However, considering that roots are natural fibers that increase the soil's tensile strength, the crack resistance of the soil increases. It is noted that vegetation has both positive and negative aspects from the point of view of civil engineering (Genet et al., 2005; Boldrin et al., 2017).

In-depth studies of the cracking process have been carried out both in the laboratory and in the field. It is important to point out that most of the research was done in the laboratory, allowing greater control of the environment. However, it is precisely this control that can make the research less representative than studies carried out in the field (Miller et al., 1998).

A better understanding of the crack formation and propagation process in soils, and an evaluation of the impact of this phenomenon, are needed to better understand the mechanical and hydraulic behavior. This study aimed to demonstrate that the hydro-geotechnical behavior, the pattern of crack formation, and the crack propagation process in the expansive clayey soil of Paulista - Pernambuco, Brazil depend on vegetation and climate, using a field scale. The one-year period allowed the process to be studied in both the dry and wet seasons.
MATERIALS AND METHODS

The soil texture consists of 280 g kg⁻¹ of sand, 250 g kg⁻¹ of silt, and 470 g kg⁻¹ of clay. The liquid limit is 76 %, the plastic limit is 30 %, and the grain density (G) of the soil is equal to 2.674 (Ferreira et al., 2017). The soil has a plasticity index of 46, indicating a soil of very high plasticity according to Burmister's (1949) criteria. The soil originated from the physical-chemical weathering of clay and limestone of the Maria Farinha Formation (Bastos, 1994). The local climate is hot and humid tropical with an accentuated dry period of 7 to 8 months, classified as As' according to the criteria of Köppen and Geiger. The average annual temperature at the sampling site is approximately 26 °C and the average annual rainfall is 1819 mm. The soil is acidic [pH(H₂O) 4.93], eutrophic (V value = 56.80 %), highly active (T = 40.92 cmol_c kg⁻¹) and has irregular interstratification involving 2:1 minerals with micas and expansive minerals (smectites), as well as kaolinite (Ferreira et al., 2017).

Field test location and execution

Field crack tests were carried out at the Janga Sewage Treatment Station, located in the Maranguape II neighborhood, in the city of Paulista - Pernambuco (Latitude 07° 55' 35" S; Longitude 34° 50' 49" W; Figure 1).

The climate in the region is hot and humid according to the Köppen classification system (As'). Through the database of the National Institute of Meteorology (INMET), it was possible to access climate data for the region during the test period (Figure 2). During this time, total precipitation and evapotranspiration were 1547.63 and 842.67 mm, respectively, while the average temperature was 26.30 °C. September, October, and November correspond to the dry season, in which evapotranspiration exceeds precipitation and temperatures are the highest of the year. March, April, May, June, and July correspond to the rainy season, in which rainfall is greater than evapotranspiration and temperatures are lower. The maximum and minimum temperatures correspond to the highest and lowest temperatures of each month, which are then averaged.

Initially, two study areas were demarcated, both with a square section of 0.60 m on each side, marked with wooden stakes and plastic tape. In the area designated for the study without vegetation, a 0.15 m deep excavation was made to remove the vegetation with its roots, and the surface was then leveled. Drainage of this excavation was planned to avoid rainwater accumulation. Galvanized metal frames were fixed to the ground, and a hollow aluminum plate was welded to the end of the frame to position the camera. Wooden battens 0.20 m long and 0.05 m wide were made, and graph paper was glued to each batten to serve as a reference for the three-dimensional images. Finally, to prevent the site from being trampled, the studied region was surrounded with nylon mesh. Details of the assembled equipment can be seen in Figure 3.

Twelve visits were made to the study site between October/2019 and September/2020 to capture photos for analysis of soil cracks with and without vegetation. Table 1 presents the sequence of visits carried out. On each visit to the study site, the camera was positioned on the hollow aluminum plate to collect the standardized images. Once the image collection stage was completed, a portion of the surface soil was collected to determine the matric suction, using the filter paper method, and the soil moisture. The same procedure was performed for the two regions.
Data analysis

The cracking mechanism during the soil drying process was studied using the public-domain computer program ImageJ. This software processed the photos collected with a webcam or a semi-professional camera, and several geometric indices were obtained (cracked area, CIF, average crack width, total crack length, and number of crack segments). The procedure for processing the images in the program is described by the flowchart in Figure 4.

The CIF (Crack Intensity Factor) is obtained as the ratio between the cracked area and the initial area of the sample (Miller et al., 1998). The average width of a crack is calculated as the shortest distance from a point on an edge to the opposite boundary of the crack. The total crack length is calculated by counting the total number of black pixels after the image has been skeletonized, and the number of crack segments is the sum of the elements between two adjacent intersections.

RESULTS

The results of the images taken in the field to analyze the formation and propagation of cracks in the soil are presented for a vegetated area and for another area without vegetation, between October/2019 and September/2020. From the images taken, the crack evolution and geometric indices were evaluated. In total, twelve visits were carried out, named C3 to C14 (Table 1).

Analysis of images of an area without vegetation

The total duration of the study was 342 days, or 8208 h. Initially, the surface moisture of the soil was 15.59 % (C3), without cracks. On the second visit (C4), after seven days (168 h), the soil surface already showed the formation of some cracks, and the moisture had decreased to 5.71 %. Cracks developed and propagated until the seventh visit (C9), after 104 days (2496 h), with accumulated precipitation of 46.5 mm and surface soil moisture of 5.98 %. The daily precipitation rate from visit C3 to visit C9 was 0.45 mm day⁻¹.

An increase in rainfall was observed from the eighth visit (C10), after 127 days (3048 h), and the natural process of crack closure due to soil expansion began. All the cracks were found to be closed by the eleventh visit (C13). This took 91 days (2184 h), with accumulated precipitation of 522.3 mm in the period between visit C9 and visit C13, corresponding to a daily precipitation rate of 5.74 mm day⁻¹ and a surface soil moisture of 31.91 %. The last visit (C14) was carried out 342 days (8208 h) after the beginning, and although the soil surface moisture decreased to 15.41 %, no new cracks formed. The accumulated precipitation between visits C13 and C14 was 1070 mm, and the daily precipitation rate was 7.28 mm day⁻¹ (Figure 5). Table 2 presents the summary of the crack analysis results. With these data it is possible to draw the curves of period versus CIF and water content, period versus temperature and RH, the curves of the quantitative evolution of the geometric cracking indices (cracked area, CIF, average crack width, total crack length, and number of crack segments), and the graph of period versus daily rainfall during the reading period in the area without vegetation (Figure 6).
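The geometric indices described in the data analysis above can be reproduced programmatically. The sketch below computes the cracked area, CIF, total crack length, and an approximate average width from a binarized crack image using NumPy and scikit-image; the thresholding step and the tiny example array are illustrative, not the actual field images.

```python
import numpy as np
from skimage.morphology import skeletonize

def crack_indices(binary_crack):
    """binary_crack: 2-D bool array, True where a pixel belongs to a crack."""
    total_px = binary_crack.size
    cracked_px = int(binary_crack.sum())
    cif = 100.0 * cracked_px / total_px        # Crack Intensity Factor (%)
    skeleton = skeletonize(binary_crack)       # 1-px-wide crack centerlines
    total_length_px = int(skeleton.sum())      # length = skeleton pixel count
    # Approximate mean width as cracked area divided by skeleton length
    mean_width_px = cracked_px / max(total_length_px, 1)
    return cif, total_length_px, mean_width_px

# Tiny illustrative "image": a single horizontal crack 2 px wide
img = np.zeros((20, 40), dtype=bool)
img[9:11, 5:35] = True
print(crack_indices(img))
```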
The period versus water content curve (Figure 6a) presents three ranges of water content variation over time. The first range, from the beginning of the readings (C3) up to 104 days (2496 h, C9), shows a reduction in soil moisture from 15.59 to 5.98 %, with a moisture loss rate equal to 0.09 %/day. The second range, from the end of the first range (C9) up to 195 days (4680 h, C13), shows a gradual increase in soil moisture from 5.98 to 31.91 % over time, with a rate of moisture increase equal to 0.3 %/day. The third and last range begins at the end of the second range (C13) and extends up to 342 days (8208 h, C14), showing a reduction in soil moisture from 31.91 to 15.41 %, which results in a moisture loss rate of 0.1 %/day.

There are also three ranges of variation in the period versus CIF curve (Figure 6a). The first range shows an increase in CIF over time from the start of the trial (C3) to 104 days (2496 h, C9), with a rate of change in CIF of 0.08 %/day. The second range shows a reduction in CIF over time, due to the increase in rainfall at the research site, between the end of the first range (C9) and 195 days (4680 h, C13), with a CIF variation rate equal to 0.1 %/day. Finally, the third range starts at the end of the second range and extends up to 342 days (8208 h, C14), representing the CIF stabilization stage as a function of time.

The average temperature over the readings was 30.3 ± 0.75 °C, with a coefficient of variation equal to 0.02. The mean relative humidity (RH) was 76 ± 1.83 %, with a coefficient of variation of 0.02 (Figure 6b). The geometric indices (cracked area, CIF, average crack width, total crack length, and number of crack segments) increased as a function of the period (Figure 6c) as the soil dried out. With the increase in soil moisture, a decrease in all geometric indices was verified, followed by stabilization.

Figure 7 shows the variation of the geometric indices of the cracks during the natural process of drying and wetting of the soil. During the drying phase (Figure 7a), there was an increase in all indices as the soil dried. When the moisture reached a value of 5 %, it stabilized, but all geometric indices continued to increase.

Figure 8 presents the soil water retention curve fitted with the van Genuchten (1980) model, and Table 3 presents the fit indices used in this model. The saturated water content is the moisture content at which all the voids in the soil are filled with water. In the first stretch of the retention curve, the soil remains saturated until it reaches the air-entry suction. The suction starts to vary when air enters the voids, due to the evaporation of water molecules. Increases in suction beyond the air-entry value produce appreciable losses in soil water content, reaching a point of change in the slope of the curve, which determines the residual moisture. With the drying period, the soil contracts and forms cracks.

Analysis of images of the vegetated area

In the region with vegetation, the crack formation and propagation analysis was carried out, in addition to the percentage evolution of the vegetation cover and the variation of the water content over time (Figure 9). The total duration of this part of the study was also 342 days (8208 h).
Initially, the soil surface moisture was 4.32 % (C3). Cracking was not noticed until visit C10 (w = 6.15 %), after 127 days (3048 h), with accumulated rainfall of 141.1 mm and a daily precipitation rate equal to 1.1 mm day-1. Cracks developed and propagated only from visit C11 (152 days, 3648 h) to visit C12 (166 days, 3984 h). The accumulated precipitation in this period was 46 mm, and the daily precipitation rate was 3.3 mm day-1. Soil moisture between visits C11 and C12 ranged from 6.35 to 10.55 %. The geometric indices (CIF, average width, total length, and number of crack segments) obtained are presented in Table 4. From visit C13 (195 days, 4680 h) until the last reading, C14 (342 days, 8208 h), the total closure of the cracks was verified, with no new crack formation during this period. The accumulated precipitation in this period was 1078.1 mm, and the daily precipitation rate was 7.3 mm day-1. Soil surface moisture in visits C13 and C14 was 25.28 and 12.21 %, respectively.

Table 5 presents the results of the variation of vegetation in the region, and Figure 10 presents the evolution curves of the percentage of vegetated area in the region with vegetation. The average temperature and the average relative humidity (RH) throughout the readings were the same as in the study of the region without vegetation, as the analyses were carried out in parallel on each visit.

As for the vegetation in the area, there were three ranges of variation. The first range lasted from the beginning of the readings (C3) until visit C7, with a total duration of 50 days (1200 h), and there was a decrease in the vegetated area from 35.6 to 28.0 % (Figure 10a). During this time, there was a precipitation accumulation of 21.3 mm (Figure 10c), representing a low precipitation index, and a reduction in soil moisture from 4.32 to 2.67 % (Figure 10b), which justifies the reduction of the vegetated area in this period. The reduction rate of the vegetation over that period was equal to 0.15 % per day.

The second range started after visit C7 and extended until visit C13, lasting 145 days (3480 h), showing an increase in the vegetated area from 28.0 to 90.2 % (Figure 10a). During this period, an accumulation of rainfall of 546.6 mm was recorded (Figure 10c). This represents a significant increase in precipitation, increasing soil moisture from 2.67 to 25.28 % (Figure 10b). The rate of vegetation increase over that period was 0.43 % per day.

Finally, the last range started at visit C13 and extended to visit C14, lasting 147 days (3528 h), showing a reduction in the vegetated area from 90.2 to 80.6 % (Figure 10a). An accumulation of rainfall equal to 1078.1 mm was recorded during this period (Figure 10c), and topsoil moisture decreased from 25.28 to 12.21 % (Figure 10b). The vegetation percentage reduction rate was 0.07 % per day. Although this range had the highest accumulation of rainfall, there was a reduction in the vegetated area. This can be explained by the greater rainfall concentration right after visit C13; as visit C14 approached, there was a considerable reduction in rainfall (Figure 10c).

As in the region without vegetation, on each visit to the study site the matric suction of the surface soil was determined using the filter paper method. Figure 11 presents the soil water retention curve fitted with the van Genuchten (1980) model, and Table 6 presents the fit indices used in this model.
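As a minimal sketch of how the van Genuchten (1980) retention model mentioned above can be fitted to suction/water-content pairs from the filter paper tests, the snippet below uses nonlinear least squares. The data values are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Water content as a function of matric suction psi (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Hypothetical suction (kPa) and water content pairs
psi = np.array([1.0, 10.0, 50.0, 200.0, 1000.0, 10000.0])
theta = np.array([0.48, 0.45, 0.38, 0.27, 0.16, 0.08])

popt, _ = curve_fit(van_genuchten, psi, theta,
                    p0=[0.05, 0.50, 0.05, 1.5],
                    bounds=([0, 0, 1e-6, 1.01], [0.3, 0.7, 1.0, 5.0]))
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.4f}, n={n:.2f}")
```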
Comparison of crack formation and propagation between field tests

Figure 12 shows the effect of climate on the formation of cracks in the region without vegetation and on the variation of the vegetated area in the region with vegetation. The formation and propagation of cracks in the region without vegetation increased during the dry season. With the onset of the rains, the cracks then closed completely and remained so despite the reduction in soil surface moisture, as the desiccation was not sufficient to reopen pre-existing cracks or to form secondary cracks (Figure 12a).

During the dry season, the vegetated area remained approximately constant at 30 %. With the increase in rainfall, the surface soil moisture increased and, therefore, the vegetated area increased, reaching approximately 90 %. With the end of the rainy season, soil moisture decreased and there was a reduction in the vegetated area (Figure 12b). Figure 13 shows the evolution of crack formation and propagation as a function of time in the regions with and without vegetation.

The initial soil moisture in the region with vegetation was equal to 4.32 %, and this region did not show surface cracks. In the region without vegetation, the initial soil moisture was equal to 15.59 %. The topsoil in this region had been confined 0.15 m below the ground surface until the removal of the vegetation and roots at the beginning of the study; therefore, the initial moisture of the surface soil in the region without vegetation was higher. With the open-air exposure of the soil in the region without vegetation, there was a decrease in moisture and the geometric indices increased, up to approximately 100 days from the beginning of the readings. Later, with the increase in rainfall in the area, the moisture increased and the soil expanded; consequently, a reduction in the geometric indices of the cracks was verified. After 180 days from the beginning of the readings, the complete closure of the cracks was verified, with surface soil moisture equal to 31.91 %. Then, with the decrease in rainfall, the surface soil moisture reduced again, to 15.41 % at 342 days after the beginning of the readings, and there was no formation of new cracks.

In the region with vegetation, no crack formation was observed up to 140 days after the beginning of the readings. During this period, soil moisture showed low values (2.67 to 6.15 %). A few cracks formed 140 days after the beginning of the readings, and the geometric indices increased. Approximately 200 days after the beginning of the readings, the cracks closed, that is, the geometric indices became null again.

Area without vegetation

In the area without vegetation, the soil desiccated during the dry season and cracks formed (increase in CIF). As rainfall increased at the site, soil moisture increased, causing expansion and, therefore, a decrease in the cracked area of the soil until stabilization, with zero CIF. The same phenomenon was observed by Ribeiro Filho et al. (2023).
The soil had an initial water content of 15.59 % (initial drought); there was a decrease in water content due to evaporation, and the geometric indices increased up to approximately 100 days from the beginning of the monitoring, with surface soil moisture equal to 5.98 %. In the rainy season, the water content increased, the soil expanded, and the cracks decreased. There was a reduction in the crack geometric indices and, after 180 days from the beginning of the monitoring, the surface cracks were closed, with surface water content equal to 31.91 %. With the decrease in rainfall, the surface soil moisture reduced to 15.41 % after 342 days from the beginning of the readings, with no formation of new cracks.

The average width of the cracks increased from the beginning of the test (C3) to 104 days (2496 h, C9), with a variation rate equal to 0.08 mm per day. With increasing soil moisture, there was a reduction in the average width of the cracks from visit C9 up to 195 days (4680 h, C13), with a variation rate equal to 0.1 mm per day. Between visits C13 and C14, the average width of the cracks stabilized. The total length and number of crack segments increased from the beginning of the test (C3) to 50 days (1200 h, C7), with variation rates equal to 80.8 mm per day and 1.7 units per day, respectively. With increasing soil moisture, there was a reduction in the total length and number of crack segments from visit C7 up to 195 days (4680 h, C13), with variation rates equal to 27.9 mm per day and 0.6 units per day, respectively. Between visits C13 and C14, both indices stabilized.

The geometric indices (CIF, average width, total length, and segment number) increased with drying, and no stabilization of the geometric indices was verified: the indices continued increasing even after the soil water content stabilized. This is because the soil water content was obtained at the surface; below the surface, the water content was probably higher due to confinement. As the cracks advanced in depth, the layers below were exposed by the opening of the cracks and also began to dry out, so the geometric indices continued to increase even with the stabilization of the topsoil water content. As for the shape of the cracks developed, secondary cracks formed from the primary cracks, presenting "T" and "X" shapes (Figure 6).

With the increase in rainfall, the wetting phase began (Figure 7b). With wetting, the soil expanded and all geometric indices decreased until the complete closure of the cracks, that is, stabilization occurred at a zero value of the geometric indices. In other studies carried out in semi-arid regions, this correlation between rainfall and the evolution or stabilization of the geometric indices was also noted (Ribeiro Filho et al., 2023). Correlating the intensity of cracking with the clay fraction of the soil (Elias et al., 2021; Ribeiro Filho et al., 2023), it is noted that there was large cracking in the non-vegetated area, also because of the clayey nature of the soil studied.
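The variation rates quoted throughout this section are simply the change in an index divided by the elapsed days between two visits. A small worked example, using values reported above:

```python
# Daily variation rate between two visits
def daily_rate(value_start, value_end, days):
    return (value_end - value_start) / days

# Area without vegetation, C3 (0 d) to C9 (104 d): moisture 15.59 % -> 5.98 %
print(daily_rate(15.59, 5.98, 104))   # ~ -0.09 %/day (moisture loss)

# C9 (104 d) to C13 (195 d): accumulated rainfall of 522.3 mm over 91 days
print(522.3 / 91)                      # ~ 5.74 mm/day precipitation rate
```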
Vegetated area

In the experimental area with vegetation, crack formation was not verified until 140 days from the beginning of the monitoring, during the dry period. A few cracks formed 140 days after the readings began, and the geometric indices increased. After 200 days from the beginning of the readings, in the rainy season, the cracks closed and the crack geometric indices became null again.

The vegetated area presented a small number of primary cracks, an absence of secondary cracks, and a linear crack shape. The few cracks that existed were not formed when the soil moisture was at its lowest, but after the increase in rainfall in the region. This late crack propagation, when compared with the area without vegetation, can be explained by the influence of the vegetation on the soil, which increases the surface tensile resistance of the soil through the roots during the desiccation period.

It can be concluded that, in the dry season, there was a reduction in the vegetated area. As rainfall increased at the site, soil moisture increased and, thus, the vegetated area increased. Silva (2001) also verified that, during the summer months, when rainfall is scarce and temperature and insolation are high, the vegetation practically disappears and cracks appear in the most superficial soil layers.

It is noteworthy that, in the region with vegetation, a few cracks formed in a period in which the moisture of the surface soil was increasing (increase in rainfall). However, with the continuous increase in rainfall and, consequently, in soil moisture and in the percentage of vegetation, the cracks closed.

Thus, the surface soil water content in the experimental area with vegetation was lower than in the area without vegetation (except for one determination in the rainy season). The values of residual water content, saturation water content, and differential moisture capacity in the area without vegetation (Table 3) are higher than the corresponding values of the area with vegetation (Table 6). This is due to evapotranspiration in the area with vegetation, as opposed to the area without vegetation, where only evaporation occurs. It is important to note that, even with higher suction in the vegetated area, crack formation and propagation were smaller, because the roots of the vegetation absorb the surface tensile stresses generated by the suction that would otherwise be transmitted to the soil if it were not covered with vegetation. The importance of vegetation in minimizing or preventing soil crack formation and propagation was thus reaffirmed.

According to Elias et al. (2021), the presence of roots in the soil, noted during the study, tends to make the soil drier due to moisture absorption. However, the presence of roots also restricts soil movement in the driest period.
CONCLUSIONS

Vegetation cover has a significant influence on the process of formation and propagation of cracks in the field and on the cracking pattern. Soil moisture in the area with vegetation is lower than in the non-vegetated area because evapotranspiration occurs in the former, while only evaporation occurs in the latter. During the entire observation period in the field, including the rainy and dry seasons, the appearance of cracks in the area with vegetation was minimal; the roots of the vegetation embedded in the soil mass absorbed the tensile stresses imposed by the increase in suction. In the dry period, with the reduction of water in the soil, there is a reduction in the vegetated cover area, while in the area without vegetation cover the intensity of cracking grows. In the period of the year with higher rainfall intensities, the superficial fissures in the area without vegetation close due to soil expansion and the leaching of surface fines into the cracks, while in the area with vegetation there is an increase in the extent of the vegetation.

Figure 1. Location of the city of Paulista, Pernambuco.

Figure 3. Details of the equipment for analyzing the formation and propagation of cracks in the field: 01 - portico; 02 - support for the camera; 03 - study area without vegetation; 04 - study area with vegetation; 05 and 06 - references for 3D image analysis; 07 - drainage of the area without vegetation; 08 - protection screen; 09 - photo camera.

Figure 4. Flowchart for processing images in ImageJ.

Figure 5. Sequence of images in the area without vegetation.

Figure 7. Quantitative evolution of the geometric cracking indices in the region without vegetation: (a) drying phase; (b) wetting phase.

Figure 8. Soil water retention curve from the tests carried out when the images were collected in the area without vegetation.

Figure 10. Results for the region with vegetation: (a) time versus the evolution of the vegetated area and moisture; (b) time versus daily precipitation during the reading period; (c) moisture versus percentage of vegetated area.

Figure 11. Soil water retention curve from the tests carried out when the images were collected in the area with vegetation.

Figure 12. Effect of climate (rainfall) on: (a) formation of fissures in the region without vegetation; (b) variation of the vegetated area in the area with vegetation; (c) daily rainfall from 10/17/2019 to 09/24/2020.

Figure 13. Period versus geometric evolution of cracks: (a) area without vegetation; (b) area with vegetation.

Table 1. Soil physicochemical characterization before the beginning of the experiment.
Table 2. Crack analysis results for the area without vegetation.

Figure 6. Results for the area without vegetation: (a) period versus water content and CIF; (b) period versus temperature and RH; (c) period versus quantitative evolution of the geometric cracking indices (cracked area, CIF, average crack width, total crack length, and crack segment number); (d) period versus daily precipitation during the reading period.

Table 4. Summary of crack analysis results for the area with vegetation.

Table 5. Crack analysis results for the area with vegetation.
Effect of recent and ancient inbreeding on production and fertility traits in Canadian Holsteins

Phenotypic performance of livestock animals declines with increasing levels of inbreeding; however, this noticeable decline, known as inbreeding depression, may not be due only to the total level of inbreeding, but could be distinctly associated with more recent or more ancient inbreeding. Therefore, splitting inbreeding into different age classes could help in assessing the detrimental effects of different ages of inbreeding. Hence, this study sought to investigate the effect of recent and ancient inbreeding on production and fertility traits in Canadian Holstein cattle with both pedigree and genomic records. Inbreeding coefficients were estimated using a traditional pedigree measure (F_PED) and genomic measures based on a segment-based (F_ROH) and a marker-by-marker (F_GRM) approach. Inbreeding depression was found for all production and most fertility traits; for example, every 1% increase in F_PED, F_ROH and F_GRM was observed to cause a 44.71, 40.48 and 48.72 kg reduction in 305-day milk yield (MY), respectively. Similarly, an extension in first service to conception (FSTC) of 0.29, 0.24 and 0.31 days in heifers was found for every 1% increase in F_PED, F_ROH and F_GRM, respectively. Fertility traits that did not show significant depression were observed to move in an unfavorable direction over time. Splitting both pedigree and genomic inbreeding into age classes resulted in the more recent age classes showing more detrimental inbreeding effects, while the more distant age classes caused more favorable effects. For example, a 1.56 kg loss in 305-day protein yield (PY) was observed for every 1% increase in the most recent pedigree age class, whereas a 1.33 kg gain was found per 1% increase in the most distant pedigree age class. Inbreeding depression was observed for production and fertility traits. In general, recent inbreeding had unfavorable effects, while ancestral inbreeding had favorable effects. Given that more negative effects were estimated from recent inbreeding than from ancient inbreeding, recent inbreeding should be the primary focus of selection programs. Further work to identify specific recent homozygous regions negatively associated with phenotypic traits could also be investigated.

Background

Over the past decade, Canadian Holstein cattle populations have experienced an increase in the annual rate of inbreeding from 0.08 to 0.23%, observed from 2000 to 2010 and 2010 to 2018, respectively [1]. Recently, Makanjuola et al. [2] estimated the effective population size for North American Holsteins to range from 43 to 66 using genotyped animals. The small effective population size and the increasing rate of inbreeding could result in a phenomenon known as inbreeding depression. Inbreeding depression is the noticeable decline in the phenotypic mean of economically important traits within a given population [3]. This decline is often attributable to decreasing heterozygosity and increasing recessive homozygosity resulting from inbreeding and random genetic drift. The underlying genetic mechanism of inbreeding depression has been categorized into three hypotheses: the partial dominance, over-dominance and epistasis hypotheses. In the partial dominance hypothesis, depression is observed when inbreeding exposes deleterious recessive alleles that were previously hidden in the heterozygous state [4].
In the over-dominance hypothesis, over-dominance contributes to inbreeding depression by reducing heterozygous genotypes that show superiority over the two homozygous genotypes [5]. In the epistasis hypothesis, depression could result when inbreeding reduces the combination of favorable heterozygous genotypes across multiple loci [6]. Of these hypotheses, partial dominance has been widely reported to account for most of the observed inbreeding depression [4,7,8].

Before the availability and popularity of genomic data, estimation of inbreeding depression was predominantly done by calculating inbreeding coefficients from pedigree data and regressing the trait of economic interest on the inbreeding coefficients [9,10]. More recently, genomic inbreeding estimates are being used to assess inbreeding depression [11,12]. Genomic inbreeding coefficients have been shown to be closer to true inbreeding estimates [13]. This could be because Mendelian sampling variation is better accounted for by genomic data [14] and genomic data are independent of pedigree depth and completeness [15]. Different methods have been used for estimating genomic inbreeding. Genomic inbreeding can be estimated by subtracting one from the diagonal of the genomic relationship matrix [16,17]. Alternatively, McQuillan et al. [18] proposed the estimation of genomic inbreeding from unbroken stretches of homozygous segments, which are referred to as runs of homozygosity (ROH).

Ideally, the aim of the genetic selection practised in livestock species is to increase the frequency of favorable alleles, thus increasing the level of homozygosity. In essence, inbreeding could result in the depression or enhancement of any trait of economic interest; therefore, not all inbreeding is detrimental. Inbreeding increases the expression of deleterious recessive alleles, which are naturally or artificially selected against in a process called genetic purging [19,20]. Following the theory of genetic purging, inbreeding coefficients can be partitioned into ancient and recent inbreeding. Ancient inbreeding is inbreeding that occurred through a distant common ancestor and, as such, is expected to show less unfavorable effects due to genetic purging, whereas recent inbreeding arose through a recent common ancestor and is hence expected to exhibit larger unfavorable effects [21]. For example, Doekes et al. [22] reported a 2.42 kg decline in fat yield (FY) per 1% increase in new inbreeding and, conversely, an increase of 0.03 kg for ancient inbreeding.

The partitioning of inbreeding into recent and ancient inbreeding can be examined with pedigree and genomic data. For pedigree data, recent inbreeding can be estimated by tracing the pedigree back relatively few generations to the common ancestor, while ancient inbreeding traces the pedigree back to a more distant common ancestor [23]. In addition, classical inbreeding coefficients can be divided into new and ancestral inbreeding based on whether alleles carried by an individual have previously occurred in an identity-by-descent (IBD) state in an ancestor or are occurring in an IBD state for the first time [24,25]. For genomic data, recent and ancient inbreeding can be separated by allocating ROH to different length classes. Over time, recombination tends to break down long chromosomal segments; thus, longer ROH suggest recent inbreeding, due to the lack of time for recombination, while shorter ROH indicate ancient inbreeding [26].
The length of an ROH segment is expected to follow an exponential distribution with a mean of 100/(2g) centiMorgans (cM), where g is the number of generations to the common ancestor [27]. The objectives of this study were to: 1) estimate the effect of inbreeding on production and fertility traits in Canadian Holsteins using pedigree and genomic information; and 2) assess the effect of recent and ancient inbreeding on production and fertility traits in Canadian Holsteins.

Phenotypic description, heritability and inbreeding coefficients

The basic descriptive statistics of the phenotypic data, including the total number of records for each trait evaluated, are presented in Table 1. Moderate heritability estimates of 0.26, 0.23 and 0.22 were obtained for MY, FY and PY, respectively (Table 2). As expected, heritability estimates were low for fertility traits and ranged from 0.01 to 0.07 (Table 2).

The correlation coefficients of all estimated inbreeding coefficients are depicted in Fig. 1. The correlations between classical pedigree inbreeding and classical genomic inbreeding were moderately high, at 0.63 for F_PED and F_ROH and 0.61 for F_PED and F_GRM. A correlation of 0.97 was estimated between F_ROH and F_GRM (Fig. 1). More interesting were the correlations between the classical inbreeding estimates and the different age-of-inbreeding measures: F_PED, F_ROH and F_GRM had moderately positive correlations with recent generations, which dropped to low and negative values as the generations became more ancient. For example, F_PED, F_ROH and F_GRM had correlations of 0.70, 0.40 and 0.40 with the most recent pedigree age of inbreeding (F_PED3), respectively, whereas the correlations with a more distant pedigree age of inbreeding (F_PED7−6) were equal to −0.12, −0.13 and −0.12, respectively. Similarly, correlations of F_PED, F_ROH and F_GRM with F_ROH>16 were estimated to be 0.51, 0.77 and 0.76, respectively, in contrast to −0.10, −0.06 and −0.08 with F_ROH2−4, respectively. For the model-based age of genomic inbreeding, the correlations with the classical inbreeding measures ranged from 0.44 to 0.65 for the most recent age and from −0.01 to 0.00 for the most distant age. The movement of the correlations in different directions across the different age-of-inbreeding classes was notable, with correlations ranging from −0.45 to 0.11 for the pedigree classes, −0.17 to 0.17 for the ROH classes and −0.48 to 0.06 for the model-based classes (Fig. 1).

Effect of classical inbreeding on phenotypic traits

Statistically significant inbreeding depression (P < 0.01) was observed for all production traits based on F_PED, F_ROH and F_GRM (Table 3). For every 1% increase in inbreeding based on F_PED, F_ROH and F_GRM, a corresponding reduction of 44.71, 40.48 and 48.72 kg in MY was estimated, respectively, representing 0.49, 0.45 and 0.54% of the phenotypic mean of the trait. Likewise, the effect of inbreeding was noticeable for fertility traits, with heifers showing a statistically significant (P < 0.05) increase of 0.29, 0.24 and 0.31 days in FSTC for every 1% increase in inbreeding based on F_PED, F_ROH and F_GRM, respectively, which represents 1.50, 1.24 and 1.60% of the phenotypic mean. For instance, a 1% increase in F_ROH resulted in a 0.78 and 0.83 chance of being reinseminated after the first insemination for heifers and cows, respectively.
An inbreeding depression of 0.96 in NS was observed (P < 0.01) for a 1% increase in F_PED for heifers, while a statistically non-significant effect of 0.45 (P = 0.51) was observed for a 1% increase in F_PED for cows. To further support the effect of inbreeding, differences in the phenotypic means of animals with low inbreeding levels (5th percentile) and high inbreeding levels (95th percentile) were estimated and are presented in Table 4. On average, lowly inbred animals produced 144.69, 342.85 and 435.77 kg more milk than highly inbred animals when estimates were based on F_PED, F_ROH and F_GRM, respectively. In a similar fashion, animals with low inbreeding coefficients had 3.26, 5.04 and 7.10 kg more FY based on F_PED, F_ROH and F_GRM, respectively, when compared with animals with high inbreeding coefficients. For fertility traits, heifers with high inbreeding levels had, on average, 6.40, 6.71 and 5.39 more days to age at first insemination (AFS) based on F_PED, F_ROH and F_GRM, respectively. Likewise, 2.83, 4.12 and 3.87 fewer days for FSTC were estimated based on F_PED, F_ROH and F_GRM, respectively, for heifers with low inbreeding compared with heifers with high inbreeding. For cows, a more evident increase in NS of 3.56, 1.04 and 1.82% based on F_PED, F_ROH and F_GRM, respectively, was estimated for highly inbred cows in comparison with lowly inbred cows.

Effect of age of inbreeding on phenotypic traits

Splitting the pedigree inbreeding coefficients into different age (generation) classes showed varying effects on phenotypes. Interestingly, inbreeding occurring within the most recent five generations resulted in unfavorable and statistically significant depressing effects on phenotypic traits, whereas more distant generations showed favorable, but statistically non-significant, effects (Fig. 2). A 1% increase in the inbreeding coefficients obtained from F_PED3, F_PED4−3 and F_PED5−4 caused a reduction of 1.56, 1.10 and 0.77 kg in PY, respectively, whereas a 1% increase in F_PED7−6 and F_PED8−7 resulted in a corresponding 1.06 and 1.33 kg increase in PY, respectively. Similarly for fertility traits, AFS increased by 0.50, 0.55 and 0.70 days in heifers for every 1% increase in F_PED3, F_PED4−3 and F_PED5−4, respectively, and conversely decreased by 0.93 and 0.84 days for a 1% increase in F_PED7−6 and F_PED8−7, respectively. For cows, a similar pattern was observed, with recent generations having more negative effects and remote generations showing more positive effects. However, all estimated effects were statistically non-significant, with the exception of days from calving to first insemination (CTFS), which showed a 0.42-day increase for a 1% increase in F_PED4−3 (P < 0.05).

ROH were split into age classes, with longer ROH indicating more recent inbreeding and shorter ROH suggesting more remote inbreeding. Although the effects of all ROH classes were unfavorable for production traits, only ROH classes with segments longer than 4 Mb were significant at P < 0.05 (Fig. 3). For example, a 1% increase in F_ROH4−8, F_ROH8−16 and F_ROH>16 led to a 1.12, 1.29 and 1.57 kg reduction in FY, respectively. For fertility traits in heifers, the inbreeding effect on 56-day non-return rate (NRR) was not statistically significant for any ROH class; however, shorter segments (ROH < 4 Mb) showed favorable effects while longer segments (ROH > 4 Mb) had unfavorable effects.
For AFS in heifers, unfavorable inbreeding effects were observed for all ROH classes, but only ROH > 8 Mb showed statistical significance (P < 0.05). Additionally, statistically significant and unfavorable effects of 0.62 and 0.96 on NS were obtained for a 1% increase in F_ROH8−16 and F_ROH>16, respectively, whereas statistically non-significant, but favorable, effects of −2.26 and −0.07 were obtained for F_ROH1−2 and F_ROH2−4, respectively. For fertility traits in cows, only an unfavorable and statistically significant (P < 0.05) effect on NRR for ROH > 16 Mb was observed. In general, cow traits followed a similar pattern, with shorter segments tending to have favorable effects and longer segments tending to be unfavorable.

Table 3. Estimates of inbreeding depression on production and fertility traits per 1% increase in classical inbreeding and their standard errors.

Table 4. Estimates of inbreeding depression for all significant traits, expressed as the difference (Diff) in predicted phenotype between lowly inbred (5th percentile) and highly inbred (95th percentile) animals from the mean, for F_PED, F_ROH and F_GRM. NS: incidence of more than one service after the first; NRR: incidence of no subsequent service between 15 and 56 days following the first service.

The age of inbreeding estimated using the model-based approach provided varying effects on phenotypes. Based on this approach, more recent ages of inbreeding had statistically significant and unfavorable effects on production traits, and more distant ages had statistically non-significant and favorable effects (Fig. 4). A 1% increase in F_HBD1, F_HBD2 and F_HBD3 corresponded to a 40.79, 33.76 and 30.53 kg loss in MY, respectively. In contrast, a 1% increase in F_HBD4 and F_HBD5 was related to a 10.06 and 15.65 kg gain in MY, respectively. For fertility traits, a 1% increase in F_HBD1 and F_HBD2 in heifers prolonged FSTC by 0.28 and 0.28 days, respectively. Conversely, a 1% increase in F_HBD5 reduced FSTC by 0.42 days, although this was statistically non-significant. In addition, a statistically significant increase of 0.83 in NS for cows with a 1% increase in F_HBD1, and a statistically non-significant decrease of 0.22 in NS for cows with a 1% increase in F_HBD4, were estimated.

Fig. 2 Effect of a 1% increase in pedigree age of inbreeding on phenotypes. Error bars represent one standard error and stars indicate significance level (*** P < 0.01; ** P < 0.05; * P < 0.1). MY: milk yield; FY: fat yield; PY: protein yield; AFS_H: age at first service for heifers; NS_H: number of services for heifers; NRR_H: 56-day non-return rate for heifers; FSTC_H: first service to conception for heifers; CTFS_C: calving to first service for cows; NS_C: number of services for cows; NRR_C: 56-day non-return rate for cows; FSTC_C: first service to conception for cows.

Effect of new and ancestral inbreeding on phenotypic traits

The partitioning of the classical inbreeding into new and ancestral inbreeding, as proposed by Kalinowski et al. [24], provided insight into how recent inbreeding affects phenotypes. For production traits, no significant effect was obtained with F_K_NEW and F_K_ANC (Fig. 5). Nevertheless, F_K_NEW showed unfavorable effects while F_K_ANC tended towards more favorable effects. A 1% increase in F_K_NEW resulted in a 14.21 and 0.24 kg loss in MY and PY, respectively. On the other hand, a 1% increase in F_K_ANC caused a 13.35 and 0.67 kg increase in MY and PY, respectively. FY showed a favorable effect of 0.31 and 0.77 kg per 1% increase in F_K_NEW and F_K_ANC, respectively; however, F_K_NEW was less favorable than F_K_ANC. Similarly, for both heifer and cow traits, F_K_NEW had a statistically non-significant but unfavorable effect, while F_K_ANC had
statistically non-significant and favorable effects on phenotypes. For fertility traits, only AFS and CTFS showed statistically significant depressing effects, with increases of 1.58 and 1.00 days, respectively, per 1% increase in F_K_NEW. A 1% increase in F_K_ANC corresponded to −0.77 and −0.94 days in AFS and CTFS, respectively.

Discussion

This study sought to investigate the overall effect of classical inbreeding, different age classes of inbreeding and ancestral inbreeding on production and fertility traits using both pedigree and genomic measures. The accuracy of pedigree inbreeding estimates is largely dependent on the completeness and depth of the pedigree recording [28,29]. Therefore, only animals with a complete generation equivalence (CGE) of 10 or more and a pedigree completeness index (PCI) of at least 0.90 were retained for further analyses, to prevent the underestimation of inbreeding coefficients and inbreeding depression. In the present study, as in previous studies, F_PED was moderately correlated with F_ROH and F_GRM. In Dutch Holstein-Friesian cows, Doekes et al. [22] reported a correlation of 0.66 between F_PED and F_ROH and a correlation of 0.61 between F_PED and F_GRM. Similarly, for Finnish Ayrshire cows, correlations ranging from 0.55 to 0.59 were reported by Martikainen et al. [30]. The correlations from this study and those authors are slightly lower than those reported for bulls, which ranged from 0.67 to 0.87 for Australian Holstein bulls [31] and from 0.70 to 0.75 for bulls from multiple cattle breeds [32]. This could imply that bulls generally have more accurate pedigree records than cows.

Fig. 3 Effect of a 1% increase in genomic age of inbreeding estimated using the sliding window approach on phenotypes. Error bars represent one standard error and stars indicate significance level (*** P < 0.01; ** P < 0.05; * P < 0.1). MY: milk yield; FY: fat yield; PY: protein yield; AFS_H: age at first service for heifers; NS_H: number of services for heifers; NRR_H: 56-day non-return rate for heifers; FSTC_H: first service to conception for heifers; CTFS_C: calving to first service for cows; NS_C: number of services for cows; NRR_C: 56-day non-return rate for cows; FSTC_C: first service to conception for cows.

Classical inbreeding depression

As in other studies, a 1% increase in pedigree inbreeding was shown to have a significantly negative effect on production traits [33-36], which ranged from −19 to −173 kg for MY and is in line with the results reported in this study. The pedigree inbreeding effects estimated in the present study represented 0.49, 0.46 and 0.47% of the phenotypic means of MY, FY and PY, respectively. These results are in accordance with the 0.47, 0.45 and 0.45% reported by Doekes et al. [22] for MY, FY and PY, respectively. For fertility traits, varying effects of pedigree inbreeding were observed. For all cow traits in the present study, there was no significant effect of pedigree inbreeding, which corroborates the results of Martikainen et al. [30], who also found no significant association of pedigree inbreeding with fertility traits.
However, for heifers, an extension of 0.44 days per 1% increase in inbreeding was observed for AFS, which is similar to the 0.55 days per 1% reported by Smith et al. [9] for age at first calving (AFC, a trait similar to AFS). With genomic inbreeding measures, Bjelland et al. [37] reported reductions of 20 and 47 kg per 1% increase in 205-day MY using F_ROH and F_GRM, respectively. These results are in line with those reported in this study; however, the larger effect reported for F_ROH in the present study may be attributable to differences in the parameters used for detecting ROH. Furthermore, the effects of F_ROH and F_GRM were found to prolong the interval from first to last insemination (IFL) by 0.27 and 0.42 days, respectively [22]. This trait is similar to the FSTC used in this study, which increased by 0.24 and 0.31 days per 1% increase in F_ROH and F_GRM, respectively. Using genomic inbreeding, Martikainen et al. [30] also found deteriorating effects on NRR and IFL, which are supported by this study.

Genomic inbreeding accounted for larger phenotypic mean differences between lowly and highly inbred animals than pedigree inbreeding. For example, the difference between lowly and highly inbred animals for MY was estimated to be 342.85 and 435.77 kg using F_ROH and F_GRM, respectively. This is in line with the 301 and 315 kg differences between lowly and highly inbred cows reported by Doekes et al. [22] and the 161 and 438 kg reported by Bjelland et al. [37] using F_ROH and F_GRM, respectively. Despite F_PED having a higher estimated effect of inbreeding on phenotypes than F_ROH, F_ROH accounted for a larger difference in phenotypic means between lowly and highly inbred animals. These results are similar to those reported by Doekes et al. [22] and are most likely attributable to the wider distribution of F_ROH relative to F_PED.

Fig. 4 Effect of a 1% increase in genomic age of inbreeding estimated using the model-based approach on phenotypes. Error bars represent one standard error and stars indicate significance level (*** P < 0.01; ** P < 0.05; * P < 0.1). MY: milk yield; FY: fat yield; PY: protein yield; AFS_H: age at first service for heifers; NS_H: number of services for heifers; NRR_H: 56-day non-return rate for heifers; FSTC_H: first service to conception for heifers; CTFS_C: calving to first service for cows; NS_C: number of services for cows; NRR_C: 56-day non-return rate for cows; FSTC_C: first service to conception for cows.

Age classes of inbreeding depression

Few studies have investigated the effect of pedigree and genomic inbreeding age classes on phenotypes [21,23]. These age classes are meant to represent how recent or ancient the observed inbreeding is relative to a common ancestor. In this study, it was hypothesized that recent inbreeding would be more detrimental than ancient inbreeding. Pedigree inbreeding traced back to ancestors in the third and fourth generations had significant negative effects on MY, FY and PY (Fig. 2). Consistent with these results, Silió et al. [23] reported losses of 0.06 and 2.11 kg in daily growth rate and weight at 90 days, respectively, when the pedigree was traced back to the fifth generation (F_PED5). In addition, Doekes et al. [22] reported favorable, but non-significant, effects of F_PED7−6 on MY, FY, PY, IFL and calving interval (CI). These findings are in line with the favorable, but non-significant, effects of F_PED7−6 and F_PED8−7 on production traits, AFS and FSTC in the present study.
The consistency between these studies suggests that recent inbreeding is more detrimental than ancient inbreeding. Previous researchers have also found effects of different ROH length classes on phenotypes [34,38]. In US and Australian Jerseys, Howard et al. [38] observed significant inbreeding depression on MY, FY and PY based on ROH of at least 4 Mb. Likewise, for Australian Holsteins, Pryce et al. [34] found that ROH longer than 3.5 Mb exhibited more significant depression on 305-day MY when compared with shorter ROH. These results are in accordance with the present study, in which significant inbreeding depression was detected for ROH > 4 Mb, and non-significant, but unfavorable, inbreeding effects were observed for ROH < 4 Mb on MY, FY and PY. A similar pattern was observed for heifer fertility traits (AFS, NS, NRR and FSTC), with longer ROH showing unfavorable and significant effects and shorter ROH having favorable, but non-significant, effects. Conversely, ROH > 2 Mb were found to have a more significant effect on the total number of spermatozoa than ROH > 4 Mb [21]. In general, inconsistent conclusions have been reported in the literature, with either shorter ROH or longer ROH harbouring more deleterious alleles [39,40]. In agreement with those studies, unfavorable effects were identified for both short and long ROH in the present study.

The use of a deterministic approach (sliding window) for identifying ROH assumes a uniform recombination rate across the genome; however, the recombination rate has been reported to vary across the genome [41]. In an attempt to circumvent this limitation, IBD regions were identified using the model-based approach [27,42]. To our knowledge, this is the first study to investigate the effect of genomic age of inbreeding on phenotypic traits using the model-based approach. The results from this approach were similar to those reported for the pedigree age of inbreeding. For recent age classes, significant inbreeding effects were found for MY, FY, PY and heifer traits (AFS, NS, NRR and FSTC). In contrast, remote age classes were favorable, but their effects were non-significant. According to Druet and Gautier [27], the model-based approach allows detection of the age at which the inbreeding occurred; hence, this supports the premise that recent inbreeding is more deleterious than ancient inbreeding.

Fig. 5 Effect of a 1% increase in new and ancestral inbreeding estimated using Kalinowski's method on phenotypes. Error bars represent one standard error and stars indicate significance level (*** P < 0.01; ** P < 0.05; * P < 0.1). MY: milk yield; FY: fat yield; PY: protein yield; AFS_H: age at first service for heifers; NS_H: number of services for heifers; NRR_H: 56-day non-return rate for heifers; FSTC_H: first service to conception for heifers; CTFS_C: calving to first service for cows; NS_C: number of services for cows; NRR_C: 56-day non-return rate for cows; FSTC_C: first service to conception for cows.

Impact of new and ancestral inbreeding on phenotypes

Some studies have evaluated the effect of new and ancestral inbreeding on phenotypic traits [22,43,44]. Those authors found more evidence of large inbreeding depression resulting from new inbreeding than from ancestral inbreeding, postulating that purging might have helped remove deleterious alleles from the population. In mice, Hinrichs et al.
[45] estimated an inbreeding depression that ranged from −11.53 to −0.74 per unit increase in F_NEW and from −5.52 to 15.51 per unit increase in F_ANC for the number of pups in the first litter, indicating that new inbreeding causes more deteriorating effects, whereas old inbreeding causes lesser deteriorating effects and sometimes favorable effects. Using Kalinowski's [24] approach of new (F_K_NEW) and old (F_K_ANC) inbreeding, Mc Parland et al. [44] found significant unfavorable effects of −32.4 kg and 3.09 days for MY and AFC, respectively, per 1% increase in F_K_NEW. In addition, they observed significant, but less unfavorable, effects of −8.8 kg and 0.52 days for MY and AFC, respectively, per 1% increase in F_K_ANC. In Dutch Holstein-Friesian cattle, Doekes et al. [22] found a significant unfavorable effect of −2.42 kg on 305-day FY per 1% increase in F_K_NEW and a non-significant, but favorable, effect of 0.03 kg on 305-day FY per 1% increase in F_K_ANC. Those authors mentioned evidence of purging due to the favorable effects found with F_K_ANC. In the present study, no significant effects were observed for production traits; however, estimates for F_K_ANC were favorable for MY and PY, whereas F_K_NEW showed unfavorable effects for MY and PY. Conversely, a favorable effect was detected for FY using estimates from both F_K_NEW and F_K_ANC. For fertility traits, significant effects were found only for AFS and CTFS: a favorable effect of −0.77 and −0.94 days for AFS and CTFS, respectively, per 1% increase in F_K_ANC, and an unfavorable effect of 1.58 and 1.00 days for AFS and CTFS, respectively, for every 1% increase in F_K_NEW.

The varying results among these studies could be due to differences in the populations used, which are subjected to different selection criteria. In the present study, there seems to be no evidence of purging and, given the rate at which inbreeding is increasing following the implementation of genomic selection [2], selection will have less time to remove deleterious effects resulting from fast inbreeding [19,20]. In addition, the evidence of purging due to selection in a controlled or systematic population is widely debated [24,46]; therefore, caution should be taken before concluding that purging has occurred as a result of selection. Furthermore, deleterious alleles could be made less effective by changing environments [24,47], and the removal of these detrimental alleles is also only partial [48].

Conclusions

A significant and unfavorable effect of classical inbreeding was found on all production traits and some fertility traits. Genomic inbreeding measures seemed to capture more of the phenotypic differences between lowly and highly inbred animals. Recent inbreeding was found to have more detrimental effects on both fertility and production traits than ancient inbreeding. However, no substantial evidence of purging was uncovered with ancestral inbreeding. The model-based approach of classifying inbreeding into age classes provided results similar to the pedigree age of inbreeding; hence, in the absence of pedigree records, genomic measures could be used. Overall, heterogeneity of inbreeding depression was observed with recent and ancestral inbreeding. In future studies, the molecular architecture of inbreeding could be investigated to identify regions negatively associated with phenotypic traits.
Pedigree data

Pedigree records for all available animals with genotype and phenotype data were provided by the Canadian Dairy Network (Guelph, ON, Canada). The pedigree consisted of a total of 259,871 individuals tracing back to 1950 as the base year. To ensure that inbreeding estimates were not severely underestimated, the pedigree completeness index (PCI) going back five generations and the number of complete generation equivalents (CGE) were estimated using the EVA software [49]. Animals with both genotypic and phenotypic data, a PCI of 0.90 or greater and a CGE of 10 or more were retained for further analyses.

Genotype data

A total of 50,575 genotyped Holstein cows were available, with birth years ranging from 1999 to 2017. Cows were genotyped with the Illumina BovineSNP50 chip (50K) (Illumina Inc., San Diego, CA) and lower-density array panels (10K-30K). Animals with lower-density genotypes were imputed to medium density (50K) using the FImpute software [50]. Before editing, information was available for 45,187 SNP markers. For quality control, only autosomal SNPs with a call rate > 0.95, a minor allele frequency ≥ 0.01 and a difference of less than 0.15 between observed and expected heterozygosity frequency were retained for further analyses, using SNP1101 [51]. After quality control, a total of 43,126 SNPs were retained.

Phenotype data

Phenotypic records of 46,430 cows with first calving dates ranging from 2008 to 2018 were available for production and fertility traits. For production traits, a total of 21,194 cows had first-lactation records on a 305-day basis for milk yield in kg (MY), fat yield in kg (FY) and protein yield in kg (PY). Fertility traits had a total of 52,948 records, which were split into heifer (first parity) and cow (second parity) traits. Of these records, 33,610 were for heifers and 19,338 were for cows, and all animals with cow records also had heifer records. The following fertility traits were considered in this study: age at first service in days (AFS); days from calving to first service (CTFS); number of services (NS); first-service non-return rate to 56 days (NRR); and days from first service to conception (FSTC). All traits recorded before and during the first parity are termed heifer traits, and traits recorded after the first parity are cow traits. NRR was coded as 1 when no subsequent service took place between 15 and 56 days following the first service, and 0 otherwise. NS was coded from 1 to 10, and animals with more than 10 services were assigned 10. AFS was measured in days and considered a heifer trait. CTFS was measured in days and considered a cow trait. FSTC was measured in days and considered both a heifer and a cow trait.

Measures of inbreeding

Pedigree and genomic data were both used to calculate inbreeding coefficients. With pedigree data, inbreeding measures were divided into three categories: 1) a classical pedigree inbreeding measure; 2) a pedigree age-of-inbreeding measure; and 3) an ancestral pedigree inbreeding measure. For genomic data, inbreeding measures were divided into two categories, namely: 1) a classical genomic inbreeding measure; and 2) a genomic age-of-inbreeding measure. A detailed explanation of these categories follows below.
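As a minimal sketch of the marker quality control described under "Genotype data" above (not the SNP1101 implementation), the snippet below filters a genotype matrix coded 0/1/2 with missing values coded as −1; the missing-value coding is an assumption for illustration.

```python
import numpy as np

def qc_filter(geno, min_call=0.95, min_maf=0.01, max_het_dev=0.15):
    """geno: (n_animals, n_snps) int array; returns boolean mask of SNPs kept."""
    called = geno >= 0
    call_rate = called.mean(axis=0)

    # Allele frequency of the counted allele, over called genotypes only
    p = np.where(called, geno, 0).sum(axis=0) / (2.0 * called.sum(axis=0))
    maf = np.minimum(p, 1.0 - p)

    # Observed vs expected (Hardy-Weinberg) heterozygosity per SNP
    obs_het = (geno == 1).sum(axis=0) / called.sum(axis=0)
    exp_het = 2.0 * p * (1.0 - p)

    return (call_rate > min_call) & (maf >= min_maf) & \
           (np.abs(obs_het - exp_het) < max_het_dev)
```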
Pedigree inbreeding measures

The classical inbreeding coefficient (F_PED) was estimated for all individuals with phenotypic records by tracing the pedigree back to the founder generation using the algorithm proposed by Meuwissen and Luo [52], as implemented in the PEDIG software [53]. The pedigree age-of-inbreeding coefficient (F_PEDn) was calculated by tracing the pedigree back n generations to common ancestors, where n represents the specific number of generations to the common ancestors. More specifically, the inbreeding attributable to ancestors from a specific generation is the difference between two successive generations. For example, the inbreeding that occurred due to ancestors in generation seven (F_PED7−6) can be calculated as the difference between the inbreeding coefficients obtained tracing back seven generations (F_PED7) and those obtained tracing back six generations (F_PED6), i.e., F_PED7−6 = F_PED7 − F_PED6. This procedure was performed to categorize inbreeding into age classes from most recent to most ancient. The most recent age traced back was three generations (F_PED3), because the number of inbred animals in generation two was less than 0.02% of the sample size, while the most ancient age was traced back to generation eight (F_PED8), due to it having inbreeding coefficients similar to those of older generations. The pedigree age of inbreeding was estimated using the vanrad.f function implemented in the PEDIG software [53].

Ancestral pedigree inbreeding was first proposed by Ballou [25], based on the concept that alleles in an IBD state in an individual may have previously been in an IBD state in an ancestor. Kalinowski et al. [24] further modified the ancestral pedigree inbreeding method of Ballou [25] by splitting F_PED into new inbreeding (F_K_NEW) and ancestral inbreeding (F_K_ANC), such that F_PED = F_K_NEW + F_K_ANC. The difference between the two is that F_K_NEW is the probability that alleles in an IBD state in a given individual are in an IBD state for the first time in the pedigree of that individual, while F_K_ANC is the probability that IBD alleles in an individual have occurred previously in at least one ancestor. Kalinowski's ancestral pedigree inbreeding was calculated using a gene-dropping approach with 10^6 replications, as implemented in GRAIN [54].

Genomic inbreeding measures

Two approaches were used for estimating classical genomic inbreeding: 1) a segment-based approach (runs of homozygosity (ROH); F_ROH); and 2) a marker-by-marker approach (F_GRM). Runs of homozygosity were identified with the deterministic sliding-window approach implemented in PLINK, using the following criteria: a minimum physical length of 1 Mb; a maximum gap of 500 kb between two successive SNPs; a minimum of 20 consecutive homozygous SNPs; and a minimum density of one SNP per 100 kb. The following formula was used for calculating the individual segment-based genomic inbreeding:

F_ROH,i = (Σ_j L_ROH,j) / L_AUTO,

where F_ROH,i is the genomic inbreeding of the ith individual, L_ROH,j is the length of the jth ROH segment in bp, the sum runs over the n detected ROH, and L_AUTO is the total length of the autosomes covered by the SNPs in bp. Inbreeding in the marker-by-marker approach was calculated by subtracting one from the diagonal of the genomic relationship matrix (G), following the proposition of VanRaden [55] and using a fixed allele frequency of 0.5.
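A minimal sketch of the segment-based coefficient defined above, together with the 100/(2g) cM expectation quoted earlier (used below to assign ROH length classes to approximate ages, assuming 1 Mb ≈ 1 cM). The autosomal length used here is a placeholder, not the value implied by the paper's SNP map.

```python
def f_roh(roh_lengths_bp, l_auto_bp=2.5e9):
    """Sum of an animal's ROH lengths divided by the covered autosomal length."""
    return sum(roh_lengths_bp) / l_auto_bp

def generations_from_length(length_mb):
    """Approximate generations to the common ancestor for an ROH of given length."""
    return 100.0 / (2.0 * length_mb)

# e.g., one 12 Mb and two 3 Mb segments
print(f_roh([12e6, 3e6, 3e6]))        # ~0.0072

for lo, hi in [(1, 2), (2, 4), (4, 8), (8, 16)]:
    print(f"ROH {lo}-{hi} Mb: ~{generations_from_length(hi):.0f}-"
          f"{generations_from_length(lo):.0f} generations to common ancestor")
```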
Individual marker-by-marker genomic inbreeding was calculated as

F_GRMi = G_ii − 1,

where F_GRMi is the genomic inbreeding of the ith individual and G_ii is the corresponding diagonal element of the genomic relationship matrix.

The genomic age of inbreeding was estimated by classifying the identified ROH into five length classes that indicate the approximate age, or generation, in which the underlying inbreeding arose. As mentioned earlier, deducing the age of inbreeding from ROH length relies on the expectation that segment length follows an exponential distribution with mean 100/(2g) cM, where g is the number of generations to the common ancestor, under the assumption that 1 Mb = 1 cM. ROH were therefore classified into: 1) 1-2 Mb; 2) 2-4 Mb; 3) 4-8 Mb; 4) 8-16 Mb; and 5) > 16 Mb length classes. These length classes indicate inbreeding resulting from ancient to most recent ancestors. Additionally, genomic age of inbreeding was estimated using a model-based method that uses a hidden Markov model (HMM) approach to identify homozygous-by-descent (HBD) segments [27]. With this method, the age of inbreeding is estimated for HBD classes based on the transition probabilities between the different (hidden) HBD segments and non-HBD segments, conditional on class specificity. The probability of staying in a particular state is calculated as e^(−R_k), where R_k is the rate specific to the kth class; thus, the length of an HBD segment of any kth class is exponentially distributed with rate R_k. In the current study, a model with five HBD classes was defined following the predefined default rates implemented in the R statistical package "RZooRoH" [27, 42].

Statistical analyses

To estimate the effect of inbreeding on phenotypes, the models used in the national genetic evaluation for Canadian Holsteins were adapted from Jamrozik et al. [56], with inbreeding coefficients included as a covariate. The specific fixed and random effects fitted for both production and fertility (heifer and cow) traits are presented in Table 5. The fixed effects were as follows: year of calving by season of calving (YSC); age at calving by region of calving (ARC); region by year of birth by season of birth (RYS); month of first insemination (Mf); age at previous calving by month of previous calving by parity (ApMp); and age at previous calving by month of first insemination (ApMf). The random effects were: herd by year of birth (HY); herd within RYS (HRYS); service sire by year of insemination (SS); artificial insemination technician (T); the animal additive genetic effect (a); and the error term (e).

(Abbreviations used in Table 5: MY, milk yield; FY, fat yield; PY, protein yield; AFS_H, age at first service for heifers; NS_H, number of services for heifers; NRR_H, 56-day non-return rate for heifers; FSTC_H, first service to conception for heifers; CTFS_C, calving to first service for cows; NS_C, number of services for cows; NRR_C, 56-day non-return rate for cows; FSTC_C, first service to conception for cows; A, random animal effect.)
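Before specifying the depression models, the two classical genomic measures above can be illustrated with a minimal Python sketch. The data (toy genotypes and hypothetical ROH lengths) are invented for illustration, and this is not the pipeline used in the study; it only mirrors the F_ROH definition and a VanRaden G built with a fixed allele frequency of 0.5.

```python
import numpy as np

def f_roh(roh_lengths_bp, l_auto_bp):
    """Segment-based inbreeding: summed ROH length over autosome length covered by SNPs."""
    return sum(roh_lengths_bp) / l_auto_bp

def f_grm_diag(genotypes):
    """Marker-by-marker inbreeding as diag(G) - 1, with G following VanRaden and a
    fixed allele frequency of 0.5 (genotypes centred by 1; denominator 2*sum(pq) = m/2)."""
    m = genotypes.shape[1]                 # markers coded 0/1/2
    z = genotypes - 1.0                    # centring with p = 0.5
    g = (z @ z.T) / (2 * 0.5 * 0.5 * m)    # VanRaden G with p fixed at 0.5
    return np.diag(g) - 1.0

# toy example: 3 animals, 8 SNPs
geno = np.array([[0, 1, 2, 1, 0, 2, 1, 1],
                 [2, 2, 2, 0, 0, 1, 1, 2],
                 [1, 1, 1, 1, 1, 1, 1, 1]], dtype=float)
print(f_roh([2_500_000, 1_200_000], 2_500_000_000))  # two ROH on a 2.5-Gb autosome
print(f_grm_diag(geno))
```

Note that with p fixed at 0.5 a fully heterozygous animal gets F_GRM = −1, a known property of this estimator rather than a bug of the sketch.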
Inbreeding depression was estimated separately for each trait using the following linear mixed model (Model I):

y = Xb + Fβ + Za + Σ_{j=1}^{n} W_j c_j + e,

where y is a vector of phenotypic measurements for MY, FY, PY, AFS, CTFS, NS, NRR or FSTC; b is a vector of systematic effects; β is the coefficient of the linear regression on F; F is a vector of inbreeding coefficients from pedigree or genomic data (F_PED, F_ROH or F_GRM); a is a vector of random additive genetic effects; c_j is a vector of the jth non-genetic random effect (HY, HRYS, T and SS); e is a vector of random residual effects; n is the number of non-genetic random effects; and X, Z and W_j are incidence matrices linking the fixed effects, random additive genetic effects and jth non-genetic random effects to the phenotypes, respectively. The assumptions for the random effects were: a ~ N(0, Aσ²_a), HY ~ N(0, Iσ²_HY), HRYS ~ N(0, Iσ²_HRYS), T ~ N(0, Iσ²_T), SS ~ N(0, Iσ²_SS) and e ~ N(0, Iσ²_e), where σ²_a is the additive genetic variance, σ²_HY the herd-year variance, σ²_HRYS the herd-within-RYS variance, σ²_SS the service-sire-by-year-of-insemination variance, σ²_T the artificial insemination technician variance and σ²_e the residual variance; A is the numerator relationship matrix and I is an identity matrix.

For age of inbreeding and ancestral inbreeding, inbreeding depression was estimated using the following linear mixed model (Model II):

y = Xb + Σ_{k=1}^{m} F_k β_k + Za + Σ_{j=1}^{n} W_j c_j + e,

where β_k is the coefficient of the linear regression on the inbreeding coefficients within the kth class of inbreeding (F_k), m is the number of inbreeding classes, and all other parameters are as in Model I. All linear models in this study were fitted using the restricted maximum likelihood procedure implemented in ASReml 4.1 [57].
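The full models were fitted in ASReml. As a deliberately simplified, hypothetical sketch of what the regression coefficient β represents, the snippet below regresses a phenotype on an inbreeding coefficient by ordinary least squares, ignoring all fixed and random effects of Model I; the numbers are invented and serve only to show how depression per 1% inbreeding is reported.

```python
import numpy as np

# hypothetical data: phenotype y (e.g., 305-d milk yield, kg) and inbreeding F for 5 cows
F = np.array([0.02, 0.05, 0.08, 0.11, 0.15])
y = np.array([9800., 9650., 9500., 9320., 9100.])

# design matrix with intercept; beta[1] is the depression per unit of F
X = np.column_stack([np.ones_like(F), F])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# depression is usually reported per 1% increase in inbreeding
print(f"estimated change per 1% inbreeding: {beta[1] / 100:.1f} kg")
```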
Design and test of a compressed air driven hydraulic motor system with compressed air booster

In this study a highly efficient compressed air driven hydraulic motor system is designed, developed and tested. The basic concept is to use a hydraulic motor to transform the energy of compressed air (which compresses the hydraulic oil) into mechanical energy; this uses the energy in the compressed air more efficiently than a traditional piston-type air engine. However, the air exhausted from the converter still carries some energy, and a booster is used to recover it. In this study, both a theoretical model and an experimental device were developed, and several experiments were carried out to validate the theoretical model. It was found that the efficiency of the system improves when the booster is used.

Introduction

Compressed air is one of the candidate energy carriers for powering an engine without pollutant emissions at the point of use. A considerable amount of research has been done on transforming the energy of compressed air into mechanical energy. In 1973, Brown [1] filed a patent that modified a multi-cylinder four-stroke engine into a hydraulic engine. Otto [2] also presented a patent in which compressed air pushes hydraulic oil, and the oil pushes pistons that drive a crankshaft; a flywheel connected to the crankshaft stabilizes its rotating speed. In this way, the energy in the compressed air becomes kinetic energy. The intake and exhaust of the air were controlled by air valves according to the angle of the crankshaft. Shen et al. [3] and Castro-Alves et al. [4] used piston-type air engines to drive motorcycles, and several other research works [5-7] on piston-type air engines have been published. However, because the pressure of the exhaust air is high, the efficiency of the piston-type air engine is not good; therefore, several studies have used compressed air to compress hydraulic oil and then used this oil to drive a hydraulic motor. The reasons why the efficiency of a piston-type air engine is not as high as that of a hydraulic motor are: (1) hydraulic oil is incompressible and the exhaust pressure of the oil is essentially that of the atmosphere (even when the pressure is somewhat higher, the energy stored in the hydraulic oil is low); and (2) the high efficiency of hydraulic motors raises the total efficiency of the system. In 2006, Lemofouet and Rufer [8] used compressed air to push hydraulic oil in two air/oil converters to generate pressurized oil, which was used to drive a hydraulic motor; a 4-way, 3-position solenoid valve was used to control the direction of air flow. In their research, a flywheel was used to stabilize the rotating speed of the shaft, but the flywheel reduced the efficiency of the system: at the point of changing the direction of oil flow there is not enough torque and speed to keep the flywheel turning, so the flywheel instead drives the hydraulic motor. Reference [9] presented a system that used two air/oil converters to convert the energy in compressed air into high-pressure oil, which was then used to drive a hydraulic motor. In general, the efficiency of a hydraulic motor is higher than that of a piston-type compressed-air engine.
To make the motor rotate continuously, an accumulator is used to store some of the compressed air, which is released while the oil inlet is switched from one converter to the other. The use of the accumulator increases the efficiency of the system. The volume of air in the air/oil converter at each stage of expansion is larger than that in a piston-type air engine over one cycle, and it was found that the longer the expansion duration, the lower the energy loss. The longer expansion also allows heat from the surroundings to transfer into the air/oil converter, keeping the system temperature close to the ambient temperature and increasing the efficiency further. In reference [9], the energy of the compressed air (which compresses the hydraulic oil) is transformed into mechanical energy using a hydraulic motor. There are two ways to use the compressed air in the air/oil converter. One is the isobaric mode: the air in the converter is kept at constant pressure and the exhaust air pressure equals the inlet pressure, so part of the energy in the compressed air is wasted. The other is the expansion mode: air enters the converter at constant pressure for a while; after a certain volume of compressed air has entered, the inlet is closed and the air expands down to a lower pressure. In the expansion mode, the residual pressure of the exhaust air is lower than in the isobaric mode. Even so, the exhaust pressure does not reach atmospheric pressure, owing to the characteristics of the hydraulic motor, so some energy remains unused. To overcome this weakness of reference [9], in this study a booster is used to recover the energy of the exhaust air: part of this energy is used to raise the exhaust air pressure, and once the pressure reaches a certain value the air is fed back into the converter, where its energy is converted into kinetic energy, increasing the system efficiency. In the following sections, both the theoretical efficiency analysis and the experimental study are presented. The efficiency equations for the isobaric mode, the expansion mode and the booster are introduced; experimental devices for measuring the booster and system efficiencies were developed, and the efficiency of the system was tested at different air pressures and rotating speeds.

Layout of compressed air engine and its operating processes

In this section, the system layout of the compressed air engine and its operating processes are introduced. The layout is shown in figure 1. The system is composed of an air tank, five pneumatic ball valves, two air/oil converters, a booster to recover the residual energy in the exhaust, five check valves, several air and oil pipes, a hydraulic motor, a pressure regulator and a filter for the hydraulic oil in the system. In the system, hydraulic oil flows only from high pressure to low pressure, and the check valves allow the oil to flow into the motor in one direction only. The compressed air is stored in the air tank; when the air is released, the high pressure is regulated down to the working pressure by the pressure regulator. After the air enters a converter, the converter transforms the air pressure into hydraulic pressure and pushes the hydraulic oil through the hydraulic motor and into the other converter (in figure 1, the oil of the red converter flows into the blue converter). According to the oil flow direction, the operation is divided into phase A and phase B.
As shown in figure 1, red denotes the high-pressure oil and blue the low-pressure oil. In phase A, the compressed air enters the left converter and the oil in the left converter flows to the right converter through the hydraulic motor. In phase B, the compressed air enters the right converter and the oil in the right converter flows to the left converter. Note that at the moment the system switches from one phase to the other, the oil flow is interrupted and the motor cannot provide any output torque. To prevent this, an accumulator is used to keep oil flowing continuously into the motor. To complete one cycle of operation, the hydraulic oil must flow back and forth between the two converters, but the direction in which the oil enters the hydraulic motor must remain the same. Continuous operation of the system therefore relies on the oil in the accumulator, which supplies the motor while the converters change phase. Because the energy in the exhaust air is not zero, the exhaust air is fed into a booster that raises its pressure; once the air in the booster has sufficient pressure, it enters the converter and pushes the oil into the motor. In this study, there are two ways to use the energy in the compressed air. The simplest is the isobaric condition (the isobaric mode): air continuously enters the converter at constant pressure until it fully fills the converter, and the air is then released to the booster. Under this condition, much of the energy in the exhaust air is still unused. To extract more of the energy in the compressed air, the amount of compressed air entering the converter can be limited by closing the ball valve; after the valve closes, the air in the converter expands and does additional work. This process reduces the pressure of the exhaust air (the expansion mode). Although the expansion mode has higher efficiency, the isobaric operating mode is still needed: in this mode the output torque of the motor is constant, which suits conditions requiring high torque.

The development of theoretical efficiency equations of the system

Because of the complexity of the hydraulic motor, its efficiency follows the chart provided by the hydraulic motor manufacturer. The theoretical efficiency of the system can be calculated as the product of the theoretical efficiency of the subsystem comprising the converter and the booster and the efficiency of the hydraulic motor. The following sections therefore discuss the efficiency of the booster and the efficiency of the converter-booster subsystem in the isobaric and expansion modes; finally, the equation for the theoretical system efficiency is introduced.

The efficiency of the booster

Because the manufacturer does not provide the efficiency of the booster, experiments had to be carried out to determine it. The experimental setup for measuring the booster efficiency is a line-up of a big air tank as the compressed-air source, a booster (SMC VBA-11A-02) and a small air tank as the reservoir for the boosted air. The compressed air is first stored in the big air tank and released to the booster; the booster increases the pressure of the air, and the high-pressure air is stored in the small air tank. The booster can raise the pressure in the small air tank up to four times the pressure of the air in the big tank.
The efficiency of the booster is calculated by dividing the energy of the air in the small air tank by the energy of the air released from the big tank, as shown in equation (1):

η_b = E_t / E_b,    (1)

where η_b is the efficiency of the booster and E_t and E_b are the energies of the air in the small and big tanks, each evaluated from the corresponding pressure (p_t or p_b) and volume (V_t or V_b) using the compressed-air energy expression of [10], which is a function of pressure, volume and the heat capacity ratio γ. The experimental efficiency of the booster used in this study is almost constant over the pressures tested; the value used in the theoretical efficiency calculations is 22.3%.

The efficiency of the subsystem comprising the converter and the booster for isobaric operation

Before analyzing the efficiency of the converter, five assumptions are made:
• The real air complies with the ideal gas equation.
• The hydraulic oil is an incompressible fluid.
• The velocity of the fluid and height variations are neglected.
• The valves cause no pressure drop.
• The expansion takes place under isothermal conditions.

As shown in the study of Cai et al. [11], the energy in the compressed air is divided into two parts. The first part is the transmission power, which addresses the power required to push the air downstream; the second part is the expansion power, which addresses the work available in the air. The basic thermodynamic equations can be found in [11]. The pressure conversion efficiency η_cp is defined as

η_cp = E_use / E_Total,    (2)

with

E_Total = E_e + E_c,    (3)
E_use = E_e + E_b,    (4)

where E_Total is the total energy of the compressed air delivered from the air tank into the system, combining the energy of the hydraulic oil that drives the hydraulic motor, E_e, and the energy of the residual pressure in the converter, E_c; E_use combines E_e and the energy recovered by the booster, E_b. Figure 2 shows the P-V diagram of the compressed air in the converter for constant-pressure operation, where P is the pressure of the air and P_atm the atmospheric pressure. The total volume of the conversion is V_c, and the area under the dotted line is E_e, which for isobaric operation is

E_e = (P − P_atm) V_c.    (5)

The energy of the residual pressure of the air in the converter, E_c, is not used to drive the hydraulic motor. E_c is defined as the energy required to take a gas from atmospheric pressure P_atm and volume V_atm to pressure P and volume V_c by adiabatic compression. The difference between E_c and E_e reflects the energy of the power output to the motor; E_e corresponds to an isothermal process, because the process is very slow and there is no significant change in temperature, whereas E_c corresponds to an adiabatic process, because while the compressed air is stored in the tank no energy exchange with the ambient environment is considered. E_c follows equation (6):

E_c = (P V_c − P_atm V_atm) / (γ − 1),  with  P V_c^γ = P_atm V_atm^γ.    (6)

The energy E_b recovered from the exhaust air can be written as

E_b = η_b E_c,    (7)

so the pressure conversion efficiency can be rewritten as

η_cp = (E_e + η_b E_c) / (E_e + E_c).    (8)

The efficiency of the subsystem comprising the converter and the booster for expansion mode

The expansion mode is a high-efficiency mode. This section develops the corresponding formulas using the assumptions and thermodynamic equations [11] of Section 3.2. The difference in this section is the introduction of a variable N, defined as the ratio of the total stroke to the intake stroke.
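Before developing the expansion-mode expressions, the isobaric-mode bookkeeping can be made concrete with a short Python sketch. It evaluates the reconstructed relations E_e = (P − P_atm)V_c, the adiabatic E_c of equation (6) and η_cp of equation (8); since the exact forms of the original equations were lost in extraction, treat this as an illustration of the structure of the calculation, not as the paper's implementation, and note that the default booster efficiency of 22.3% is the value reported in the text.

```python
import math

GAMMA = 1.4          # heat capacity ratio of air
P_ATM = 101_325.0    # atmospheric pressure, Pa

def isobaric_eta_cp(p_in, v_c, eta_b=0.223):
    """Pressure conversion efficiency of the converter in isobaric mode (illustrative)."""
    e_e = (p_in - P_ATM) * v_c                        # oil work delivered at constant pressure
    v_atm = v_c * (p_in / P_ATM) ** (1.0 / GAMMA)     # adiabatic relation P V^gamma = const
    e_c = (p_in * v_c - P_ATM * v_atm) / (GAMMA - 1)  # residual (exhaust) air energy
    e_b = eta_b * e_c                                 # energy recovered by the booster
    return (e_e + e_b) / (e_e + e_c)

for bar in (5, 10, 20, 30):
    p = bar * 1e5
    print(bar, "bar:", round(isobaric_eta_cp(p, v_c=2e-3), 3))
```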
The pressure conversion efficiency of the expansion mode, η_exp, is again defined as the ratio of the used energy to the total energy,

η_exp = E_use / E_Total,    (9)

where equations (10)-(13) rewrite E_e, E_b and E_c for expansion operation as functions of the expansion ratio N (in particular, E_c is evaluated at the reduced exhaust pressure). Finally, the pressure conversion efficiency η_exp can be rewritten in closed form as equation (14).

The theoretical efficiency of the system

Two major subsystems dominate the efficiency of the system: the subsystem comprising the converter (and booster) and the hydraulic motor. We assume the efficiency of the system to be the product of the efficiencies of these two subsystems; the effects of the pipes, valves, etc. are not considered in the theoretical efficiency. The theoretical efficiency of the system is therefore defined as

efficiency of system = efficiency of converter (including the efficiency of the booster) × efficiency of hydraulic motor.    (15)

A DANFOSS OMM 50 is the hydraulic motor used in this system; its efficiency at different pressures can be found in the manufacturer's manual [12].

Configuration of the experimental set up

The developed compressed air driven hydraulic motor system is shown in figure 4, and the configuration of the experimental device is given in figure 1; the parts used are listed in the Appendix. The torque variation with time is measured at a fixed speed. The reason is that, unlike other pneumatic engines, this system has a long expansion time; if a fixed torque were used to measure the power output, there would be a chance of the output torque becoming too small to measure during the expansion. The speed is measured and fed back to the brake to keep the hydraulic motor speed near the set value. The configuration of the dynamometer test device is shown in figure 5: the speed of the hydraulic motor is measured by a tachometer and the signal goes to the controller; after PID processing in the controller, the signal is output to the brake actuator, which adjusts the voltage supplied from the power supply to the brake to control the rotation speed.

Equation of the experimental efficiency of the system

The experimental efficiency of the system is defined as the ratio of the output shaft energy to the energy consumption of the pneumatic system:

η_sys = (output shaft energy) / (pneumatic energy consumption),    (16)

computed from the torque of the hydraulic motor τ (N·m), its rotating speed n (rpm), the air pressure P (Pa) entering the converter, the flow rate Q (m³/s) of the air entering the converter and the energy of air expansion E_exp (J).

Results of analysis and experiment

Equations (8) and (14) can be used to calculate the efficiency of the converter. Before doing so, one should know the effect of the booster volume on the converter efficiency. Figure 6 presents the effect of the ratio of booster volume to converter volume on the converter efficiency. Although the volume ratio does not affect the efficiency much (only about 2% in figure 6), the smaller the volume ratio, the higher the converter efficiency; a 2% volume ratio is used in all cases in this study. Figure 7 shows the converter efficiency at different air pressures in the isobaric mode. The booster increases the efficiency of the system compared with the system without a booster: the converter efficiency with the booster can be as high as 70% at 5 bar, and the improvement from the booster is around 9% to 11.5%. However, the efficiency is higher at lower pressures.
At low pressure the efficiency is almost 70%, but the benefit from the booster is smaller when air at higher pressure is used. For the expansion mode, the booster increases the converter efficiency by about 10% to 11% compared with the system without a booster, and again the efficiency is higher at lower pressures: at low pressure the efficiency is almost 98% with the booster and 89% without it. The efficiency of the converter is very high because, under the isothermal process, part of the energy comes from the environment; in the experiments, the temperature of the converter was found to be about 3 °C below room temperature. The system efficiencies are calculated using equation (15). As shown in figure 9, for the isobaric mode the optimum efficiency occurs between 20 bar and 40 bar; the maximum efficiency of this mode is 0.32 without the booster and 0.41 with it. As shown in figure 10, for the expansion mode (N = 2) the optimum efficiency likewise occurs between 20 bar and 40 bar; the maximum efficiency of this mode is 0.55 without the booster and 0.64 with it. Although the optimum efficiency occurs between 10 bar and 40 bar, considering the mechanical affordability and durability, most of the experiments were run at 10 and 20 bar. As shown in figure 11, the experimental efficiency is around 30.15% to 42.01% for the expansion mode with the booster and 25% to 32.32% for the isobaric mode. In figure 12, the theoretical and experimental efficiencies of the system at different pressures in the isobaric operating mode are shown: the higher the pressure, the higher the efficiency. The difference between the experimental and theoretical results increases with pressure, which may be attributed to the piping system: the higher the pressure, the greater the energy loss in the pipes. At 30 bar the efficiency is about 9% higher for the system with the booster. As shown in figure 13, for the expansion mode at 10 bar the experimental efficiency at different rotating speeds with the booster is around 30.15% to 42.01%, against a theoretical system efficiency of around 32.32% to 48.97%; without the booster, the experimental efficiency at 10 bar is around 26% to 38%, against a theoretical efficiency of around 31% to 40%. In all cases, the efficiency of the system decreases as the motor speed increases. At 20 bar, the experimental efficiency with the booster is around 50% to 52% (theoretical maximum around 58%), and without the booster around 44.5% to 49% (theoretical maximum around 51%). It is very interesting to find that at 20 bar the efficiency of the system is independent of the motor speed; this is because at that pressure the motor efficiency does not change over speeds of roughly 120 rpm to 200 rpm. In figure 14, the theoretical and experimental efficiencies of the system at different pressures in the expansion mode are shown (rotating speed 150 rpm): the higher the pressure, the higher the efficiency, with a maximum experimental efficiency of 53.67% at 30 bar. The difference between the experimental and theoretical results again increases with pressure, which may be attributed to energy loss in the piping system at higher pressures.
At 30 bar the efficiency is about 5% higher for the system with the booster. Considering that in most cases the expansion mode is used to keep the system in an efficient state, the data obtained from the experiments above are used to produce the performance curve of the system in figure 15, which indicates the operating range of the system. One may
Comparative study of optimization methods for optimal coordination of directional overcurrent relays with distributed generators

Due to the growing penetration of distributed generators (DGs) based on renewable energy into the distribution network, it is necessary to address the coordination of directional overcurrent relays (DOCRs) in the presence of these generators. This problem has been solved with many metaheuristic optimization techniques, which seek the optimal relay parameters so as to achieve optimal coordination of the protection relays while satisfying the coordination constraints. In this article, a comparative study of the optimization techniques proposed in the literature addresses the optimal coordination problem using digital DOCRs with standard characteristics according to IEC 60255. For this purpose, the three most efficient and robust optimization techniques are considered: particle swarm optimization (PSO), the genetic algorithm (GA) and differential evolution (DE). Simulations were performed in MATLAB R2021a by applying the optimization methods to interconnected 9-bus and 15-bus power distribution systems. The obtained simulation results show that, in the presence of distributed generation, the best optimization method for solving the relay protection coordination problem is differential evolution (DE).

INTRODUCTION

The main role of protection relays is to detect and eliminate faults as quickly as possible by transmitting an opening command to the related circuit breaker. The circuit breaker isolates the faulty part of the network to ensure that the electrical equipment is not affected by the fault current [1]. The directional overcurrent relay (DOCR) is the most widely used type of relay in protection coordination, due to its simplicity of application and its technical and economic characteristics [2]. Coordination of DOCR protection is considered a necessity for distribution networks, as it quickly isolates the faulty area, keeps the system safe and clears fault currents so that the relays remain reliable, flexible and selective [3]. In a properly coordinated system, the main relay must operate first on overcurrent faults, within a predefined time. After this predetermined time, known as the coordination time interval (CTI), the backup relay must operate to isolate the fault if the main relay failed to trip [4]. Relay coordination is usually based on the evaluation of both fault currents and power flow. To optimize relay coordination, two important settings are considered: the time dial setting (TDS) and the plug setting (PS).

The coordination constraint must be satisfied for all primary/backup (P/B) relay pairs. This constraint is indicated in (4):

t_b − t_p ≥ CTI,    (4)

where t_p and t_b are, respectively, the operating times of the main and backup relays, and the CTI value depends on the type of relay (digital or electromechanical), varying between 0.2 and 0.5 s [23]. The reliability constraint is presented in (5): the relay must operate within a time margin, responding in no less than a minimum time t_min and no more than a maximum time t_max, the relay operating time t generally varying between 0.1 and 4 s [21]:

t_min ≤ t ≤ t_max.    (5)

The sensitivity constraints are presented in (6) and (7): the parameters TDS and PS must respect the minimum values TDS_min and PS_min and the maximum values TDS_max and PS_max,

TDS_min ≤ TDS ≤ TDS_max,    (6)
PS_min ≤ PS ≤ PS_max.    (7)

The limits of TDS are generally 0.1 and 1.1 s [24].
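Since the relays follow the IEC 60255 standard inverse characteristic, the operating time and the coordination check of equation (4) can be sketched in a few lines of Python. The 0.14 and 0.02 constants are those of the standard inverse curve; the variable names and numerical values below are illustrative, not taken from the studied networks.

```python
def op_time(tds, ps, i_fault, ctr):
    """IEC 60255 standard-inverse operating time; pickup current = PS * CTR."""
    m = i_fault / (ps * ctr)          # multiple of pickup
    return tds * 0.14 / (m ** 0.02 - 1.0)

def coordinated(tds_p, ps_p, tds_b, ps_b, i_fault, ctr, cti=0.2):
    """Check t_b - t_p >= CTI for one primary/backup pair at a given fault current."""
    tp = op_time(tds_p, ps_p, i_fault, ctr)
    tb = op_time(tds_b, ps_b, i_fault, ctr)
    return tb - tp >= cti, tp, tb

# illustrative pair: primary (TDS=0.1, PS=1.0) backed up by (TDS=0.25, PS=1.2)
ok, tp, tb = coordinated(0.1, 1.0, 0.25, 1.2, i_fault=4000.0, ctr=500.0)
print(ok, round(tp, 3), round(tb, 3))
```

In an optimization run, a check of this kind is evaluated for every P/B pair and every decision vector, and violations are handled through penalties or constraint-handling rules.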
The limits of PS are calculated using (8) and (9), where I_Lmax is the maximum load current and I_Fmin is the minimum fault current [5].

DOCRs coordination with DGs

Due to its various advantages, the integration of distributed generators into power grids has become widespread in the global energy sector, but DGs also have negative impacts on the distribution network parameters in terms of load current and fault current level, which increase according to the capacity of the DGs and their location relative to the fault [10], [25]. Power grid protection systems may therefore be affected. The type of DG and the characteristics of the distribution network have a significant effect on protection coordination [11]. The consequences of connecting distributed generators to the network are nuisance tripping, blinding of protections and loss of coordination of the protection relays, which can decrease system reliability and increase corrective maintenance costs [26]. It is therefore necessary to re-optimize the DOCR protection relay parameters according to the new system configuration.

OPTIMIZATION METHODS FOR THE OPTIMAL DOCRs COORDINATION

This section presents the three most efficient and robust optimization methods used to solve the DOCR coordination problem: PSO, GA and DE. These algorithms start from a randomly generated initial population and search for the best solution by converging to the optimal point in the search space. The variable vector x is presented in (10):

x = [x_1, x_2, …, x_D],    (10)

where D is the dimension of each element of the population, i.e., the number of variables, and N is the population size.

Particle swarm optimization

Particle swarm optimization is an optimization approach based on the social behavior of bird flocks and fish schools, combined with swarm intelligence: individuals can perform extremely complex tasks through interaction with each other, even though each individual has little or no wisdom of its own [20]. Each particle is initialized randomly with a velocity v_j and a position x_j. At each step, each particle moves in the D-dimensional search space according to three criteria: its own best score (Pbest), the best score of all particles (Gbest) and the random factors rand1 and rand2. The velocity and position are updated at each iteration k according to (11) and (12):

v_j^(k+1) = w v_j^(k) + c_1 rand1 (Pbest_j − x_j^(k)) + c_2 rand2 (Gbest − x_j^(k)),    (11)
x_j^(k+1) = x_j^(k) + v_j^(k+1),    (12)

where c_1 is the personal learning coefficient, c_2 is the global learning coefficient and w is the inertia weight.

Genetic algorithm

The GA is based on Darwin's theory of natural selection and searches for optimal solutions best suited to the objective function of the problem while taking the constraints into account. At each iteration, the genes of each individual, which are the decision variables, undergo genetic operations (selection, crossover, mutation and elitism) to generate new individuals better at solving the problem. In this algorithm, each individual is evaluated and receives a score reflecting its ability to satisfy the objective function and the constraints. The selection process consists of choosing a series of individuals from the randomly generated population; this process is totally random and does not favor particular choices within the population. In the crossover process, the two best individuals obtained during selection are chosen as parents; the fundamental role of crossover is the exchange of genetic information in order to increase the genetic variety among the individuals of the population.
The mutation process inserts diversity into the population: it creates new genetic traits that are not present in any individual of previous generations, which improves the search over the solution space of the problem. The crossover factor (CF) represents the probability that a pair of chromosomes will produce offspring, and the mutation factor (MF) represents the probability of a change of state of a chromosome [7].

Differential evolution

The DE algorithm is a simple and efficient evolutionary algorithm based on natural gene selection. DE has been shown to be faster than other evolutionary algorithms, since it involves fewer mathematical operations and less execution time [13]. An initial population is first randomly generated. For each population element, a mutant vector is created using (13):

v_{i,j}^(k) = x_{a1,j}^(k) + F (x_{a2,j}^(k) − x_{a3,j}^(k)),    (13)

where a1, a2 and a3 ∈ {1, 2, …, N} are three mutually different random indices and F is the mutation factor that regulates the amplification of the differential variation. A crossover is then applied to increase the variety of the perturbed parameter vectors and obtain the trial vector. The crossover is performed using the crossover rate (CR) and the random index rand_k, where rand_k = randi(D), as expressed in (14) [27]:

u_{i,j}^(k) = v_{i,j}^(k) if rand_j ≤ CR or j = rand_k, and u_{i,j}^(k) = x_{i,j}^(k) otherwise.    (14)

The selection of the trial solution is made according to (15), with TF_i the trial fitness:

x_i^(k+1) = u_i^(k) if TF_i ≤ f(x_i^(k)), and x_i^(k+1) = x_i^(k) otherwise.    (15)

RESULTS AND DISCUSSION

To ensure coordination of DOCRs in distribution networks with integrated DGs, the optimization methods described in the previous section are applied to interconnected 9-bus and 15-bus distribution systems with digital protection relays and standard characteristics. The PS limits are calculated using (8) and (9), the TDS bounds are 0.1 and 1.1 s, the CTI value for both networks is 0.2 s, and the relay operating time limits are 0.1 and 4 s.

Distribution system 9 bus

The 9-bus interconnected distribution system comprises 24 protection relays (R1, R2, …, R24), with 44 P/B relay pairs between them. The CTR is 500/1 for all relays. Table 1 gives the optimal settings of the 24 protection relays, PS and TDS, obtained with the PSO, GA and DE optimization methods, while Table 2 shows the operating times of the main and backup relays, t_p and t_b, as well as the coordination time interval, for the 44 P/B relay combinations under each optimization approach. The last two rows of Table 1 show the objective function (OF) and the convergence time for each method; the objective function is the sum of the operating times of all main and backup relays with the obtained optimal settings. The CTI between the main and backup relay operating times of the 44 P/B relay pairs for the different methods is illustrated in Figure 2.

Distribution system 15 bus

The 15-bus interconnected distribution system is an example of a distribution system with high DG penetration. Six 15 MVA generators with 15% synchronous reactance are connected to buses 1, 3, 4, 6, 13 and 15. The system therefore comprises 42 directional overcurrent relays and 82 pairs of main/backup relays, with 84 decision variables: 42 for TDS and 42 for PS. More details about this system, including the fault current values, the current transformation ratios of the relays and the P/B relay pairs, are provided in [28], [29].
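Before turning to the results, the DE operators of equations (13)-(15) (DE/rand/1 mutation, binomial crossover and greedy selection) can be sketched compactly in Python. The objective below is a generic sum of squares used only as a stand-in; bounds handling and the relay-specific objective and constraints of the coordination problem are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_step(pop, fitness, objective, F=0.5, CR=0.9):
    """One DE generation: mutation (13), binomial crossover (14), selection (15)."""
    N, D = pop.shape
    for i in range(N):
        a1, a2, a3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        v = pop[a1] + F * (pop[a2] - pop[a3])          # mutant vector, eq. (13)
        j_rand = rng.integers(D)                       # guarantees one gene from v
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])                  # trial vector, eq. (14)
        tf = objective(u)                              # trial fitness TF_i
        if tf <= fitness[i]:                           # greedy selection, eq. (15)
            pop[i], fitness[i] = u, tf
    return pop, fitness

# toy usage: minimise sum of squares in D = 4
pop = rng.uniform(-5, 5, size=(20, 4))
fit = np.array([float(np.sum(x**2)) for x in pop])
for _ in range(100):
    pop, fit = de_step(pop, fit, lambda x: float(np.sum(x**2)))
print(round(fit.min(), 6))
```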
Tables 3 and 4 show, respectively, the optimal settings of the 42 protection relays and the primary relay operating time t_p, the backup relay operating time t_b and the CTI values corresponding to the 82 P/B relay combinations, for the proposed optimization methods. Figure 3 illustrates the plot of the CTI values. The last two rows of Table 3 show the objective function and the convergence time for each method. It is observed from Table 3 that the methods applied to this network give optimal results that respect the sensitivity constraint, since the TDS and PS parameters lie within the previously mentioned limits. It is also observed that the value of the OF is smallest for DE compared with PSO and GA, and that the convergence time is shortest for this method. From Table 4, it can be seen that the operating times of all main and backup relays are greater than 0.1 s and less than 4 s, and that the coordination constraint is greater than or equal to 0.2 s for all primary/backup relay combinations for the studied methods, meaning that the reliability and coordination constraints are well respected. Table 4 and Figure 3 show that the CTI values for DE are the smallest, over the 82 P/B relay pairs, compared with the other two methods. In Figure 3, it can also be observed that the CTI values obtained by DE vary between 0 and 1, while the CTI values of the PSO and GA methods lie in the interval [0, 2]; it can therefore be concluded that the best optimization technique for solving the relay protection coordination problem is differential evolution (DE).

CONCLUSION

This paper applied three different optimization methods, PSO, GA and DE, to the problem of coordination of directional overcurrent relays. These techniques were applied to two distribution networks, with 9 and 15 buses, integrating distributed generators, in order to determine the most efficient method for solving this problem in the presence of DGs; the objective function and the convergence time obtained by each method were compared. The comparative analysis shows that differential evolution gives optimal values of the objective function and a shorter convergence time than the other methods for both distribution networks. Moreover, the CTI values obtained by DE are found to be the most optimal, which explains the choice of DE as the method offering the most satisfactory results among those investigated in this work. DE can therefore be regarded as the most efficient method for reaching the best solution while respecting the coordination constraint between relays in the presence of DGs.
Evaluation of Proton MR Spectroscopy for the Study of the Tongue Tissue in Healthy Subjects and Patients With Tongue Squamous Cell Carcinoma: Preliminary Findings

Purpose

To noninvasively assess the spectroscopic and metabolic profiles of healthy tongue tissue and, as an exploratory objective, of nontreated and treated tongue squamous cell carcinoma (SCC).

Methods

Fourteen healthy subjects (HSs), one patient with nontreated tongue SCC (NT-SCC) and two patients with treated tongue SCC (T-SCC) underwent MRI and single-voxel proton magnetic resonance spectroscopy (1H-MRS) evaluations (3 and 1.5 T). Multi-echo-time 1H-MRS was performed at the medial superior part (MSP) and the anterior inferior part (AIP) of the tongue in HS, while the 1H-MRS voxel was placed at the most aggressive part of the tumor for patients with tongue SCC. 1H-MRS data analysis yielded spectroscopic metabolite ratios quantified relative to total creatine.

Results

In HS, compared with the MSP, spectra from the AIP revealed higher levels of creatine and a more prominent, well-identified trimethylamine-choline (TMA-Cho) peak; however, large, prominent lipid peaks were better differentiated in the tongue MSP. Compared with HS, the patient with NT-SCC exhibited very high levels of lipids and relatively higher values of the TMA-Cho peak. Interestingly, the patients with T-SCC showed almost no proliferation activity; however, high lipid levels were measured, although relatively lower than those measured in NT-SCC.

Conclusion

The present study demonstrated the potential of in-vivo 1H-MRS to noninvasively assess the spectroscopic and metabolic profiles of healthy tongue tissue in a spatial location-dependent manner. Preliminary results revealed differences between HS and patients with nontreated as well as treated tongue SCC, which should be confirmed with more patients. 1H-MRS could, in the future, be included in the arsenal of tools for treatment response evaluation and noninvasive monitoring of patients with tongue SCC.

INTRODUCTION

Magnetic resonance spectroscopy (MRS) is a noninvasive quantitative tool based on the spectroscopic analysis of tissue metabolism. MRS can provide a molecular analysis of biochemical composition and evaluate the presence of specific metabolites in the tissue being investigated. Proton MRS (1H-MRS), the most used MRS method in clinical practice, has mostly been applied to study brain pathologies, particularly brain tumors, for diagnosis, differentiation between nonmalignant and malignant tissues, prognosis and evaluation of treatment responses [1]. Thanks to technological advances in both software and hardware, clinical applications of 1H-MRS have been extended to organs other than the brain, including the diagnosis of cancer in the breast [2], prostate [3] and liver [4] and, more recently, the musculoskeletal system [5]. The value of 1H-MRS in skeletal muscle is clearly established [6], providing information on the metabolic properties (energy, lipids) of muscles. These parameters have the potential to be used as biomarkers to detect pathological processes and to monitor, for example, patients with head and neck (HN) cancer undergoing therapy. Head and neck squamous cell carcinoma (SCC) is the sixth most common cancer type in the world, with 890,000 new cases and 450,000 deaths in 2018. The incidence of SCC is expected to increase by 30% (to 1.08 million new cases per year) by 2030 [Global Cancer Observatory (GLOBOCAN)] and is especially frequent in northeastern France [7, 8].
SCC can occur at various sublocations of the upper aerodigestive tract, including the pharynx, larynx, sinuses, nasal cavity and oral cavity, the most common subsite being the oral tongue, which is also one of the worst subsites in terms of prognosis [9]. SCC can be a very aggressive cancer and carries a poor prognosis if not detected early, and is thus associated with high mortality. The development of simple and reliable biomarkers for the early detection of SCC is one way to better diagnose and treat these tumors, to evaluate and monitor treatment combinations and, hence, to reduce mortality. In this context, 1H-MRS, being a noninvasive, rapid, informative and quantitative technique with a demonstrated higher sensitivity than MRI [1], has great potential to help with early cancer detection, diagnosis and treatment response evaluation. Only a few previous 1H-MRS studies (both in-vivo [10, 11] and ex-vivo [12]) have been performed, each with a limited number of subjects. Moreover, to the best of our knowledge, no previous study has focused on the metabolic profiles of the tongue region in healthy subjects. This could be related to difficulties specific to the HN region, as well as to the past lack of adequate methods of spectroscopic data acquisition. Indeed, performing in-vivo MRS studies on HN sites is challenging: magnetic susceptibility artifacts arising from tissue-bone-air interfaces, as well as motion artifacts related to respiration and swallowing, are the main limitations. However, the tongue region, one of the most common sublocations of SCC (95% of tongue tumors), is mainly constituted of muscle, is a relatively homogeneous structure in healthy subjects and may therefore be suitable for in-vivo spectroscopic measurements. Thus, the main objective of this study was to noninvasively assess the spectroscopic and metabolic profiles of healthy tongue tissue at two different spatial locations (the medial superior and the anterior inferior parts of the tongue) in healthy subjects. As a second, exploratory objective, we challenged the potential of 1H-MRS to differentiate normal tongue tissue from SCC before and after treatment.

Subjects

Fourteen healthy subjects (HSs) were recruited from the Institut "Faire Faces" (Amiens, France). Three patients with SCC were recruited from the Oral and Maxillofacial Surgery department (Amiens University Hospital): one patient with nontreated tongue SCC (NT-SCC) (TNM stage: pT4aN0MxR0); one patient with treated tongue SCC [T-SCC; TNM stage: P16-, T4 N2b Mx; chemotherapy: carboplatin and 5-fluorouracil (5FU) delivered over 3 days; radiotherapy: a curative dose of 69.96 Gy delivered in 33 fractions for the tumor and a prophylactic dose of 54.12 Gy delivered in 33 fractions]; and one patient with T-SCC [TNM stage: T3 N0 M0; chemotherapy: cisplatin (CDDP) 100 mg/m2 delivered over 3 days; radiotherapy: a dose of 70 Gy delivered in 35 fractions for the tumor]. Staging was based on the 8th edition of the TNM staging system of the Union for International Cancer Control (UICC). Ethical approval for this study was obtained from the Clermont-Ferrand Ethics Committee (2018-A02389-46) and written informed consent was obtained from all subjects before the study.
Data Acquisition

HS and patients with SCC underwent MRI and proton magnetic resonance spectroscopy (1H-MRS) using, respectively, an Achieva dStream 3T scanner (Philips Healthcare, Best, Netherlands) with a 32-channel head and neck coil and a GE Optima MR450w 1.5T scanner (GE Healthcare, USA) with a 24-channel head and neck coil. HS and patients with SCC underwent the same MRI acquisitions [three-dimensional (3D) T1-weighted spin-echo without contrast enhancement, gradient-echo T2*, T2-weighted imaging with fat saturation and diffusion-weighted imaging (DWI)], except that patients with SCC additionally had a 3D T1-weighted spin-echo with contrast enhancement. The 1H-MRS acquisitions consisted of a single-voxel Point RESolved Spectroscopy (PRESS) sequence with two echo times (TE = 35 and 144 ms), performed using a voxel of 2 cm × 2 cm × 2 cm. The 1H-MRS voxel was placed at two different locations, the medial superior part (MSP) and the anterior inferior part (AIP) of the tongue, for healthy subjects (Figure 1), while it was placed at the most aggressive part of the tumor (based on the appearance of the lesions on pre- and postcontrast-enhanced 3D T1 MRI and T2-weighted imaging with fat saturation) for patients with SCC.

(Figure 3 caption: Lipid, choline and creatine metabolite ratio differences between healthy subjects and patients with nontreated and treated SCC measured at TE = 35 ms (A) and TE = 144 ms (B). Note that Lip-CH2 represents IMCL-CH2 and EMCL-CH2, and Lip-CH3 represents IMCL-CH3 and EMCL-CH3. HS-AI, healthy subjects, anterior inferior part of the tongue; HS-MS, healthy subjects, medial superior part of the tongue; NT-SCC, patient with nontreated SCC; T-SCC, patient with treated SCC.)

Data Analysis

1H-MRS data from HS were analyzed using the MR SpectroView Analysis package of the Philips Achieva dStream 3.0T TX. 1H-MRS data from the patients with T-SCC and NT-SCC were analyzed on a SUN imaging workstation (Advantage Windows) using SAGE (Spectroscopic Analysis, GE). Before analysis, the quality of the 1H-MRS spectra was assessed (signal-to-noise ratio, spectral resolution and estimation of the full width at half maximum of the water peak). The 1H-MRS spectra obtained from the AIP of the tongue were all (14) of good quality; however, only six of the fourteen spectra obtained from the MSP of the tongue met the quality criteria and were included in the analysis step. Data analysis was performed in the time domain directly on the free induction decays (FIDs). Because the tongue is mainly composed of interlacing skeletal muscle and pockets of adipose tissue, the spectroscopic profile of the healthy tongue was expected to be close to the well-studied skeletal muscle spectroscopic profile [6, 13]. Hence, for metabolite quantification, we selected the most relevant resonances measurable in skeletal muscle. These resonances were specified and described by Gaussian line shapes in the frequency domain and are as follows: intramyocellular lipid-methyl (IMCL-CH3) at 0.9 ppm; extramyocellular lipid-CH3 (EMCL-CH3) at 1.1 ppm; intramyocellular lipid-methylene (IMCL-CH2) at 1.3 ppm; extramyocellular lipid-CH2 (EMCL-CH2) at 1.45-1.5 ppm; creatine-CH3 (Cr-CH3) at 3.03 ppm; Cr-CH2 at 3.92 ppm; trimethylamine (TMA) and choline (Cho) at 3.20-3.22 ppm; and the water resonance centered at 4.72 ppm.
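To illustrate the quantification step, the following hypothetical Python sketch (entirely synthetic spectrum; not the SpectroView or SAGE pipeline used in the study) fits Gaussian line shapes at a subset of the listed resonance positions and forms metabolite ratios to total creatine.

```python
import numpy as np
from scipy.optimize import curve_fit

# fixed resonance positions in ppm (subset of those listed above)
PEAKS = {"IMCL-CH3": 0.90, "IMCL-CH2": 1.30, "Cr-CH3": 3.03,
         "TMA-Cho": 3.21, "Cr-CH2": 3.92}

def gaussians(ppm, *params):
    """Sum of Gaussians; params = (amplitude, width) per fixed peak position."""
    out = np.zeros_like(ppm)
    for center, (amp, sig) in zip(PEAKS.values(),
                                  zip(params[0::2], params[1::2])):
        out += amp * np.exp(-0.5 * ((ppm - center) / sig) ** 2)
    return out

# synthetic spectrum standing in for a processed FID
ppm = np.linspace(0.5, 4.5, 2000)
true = gaussians(ppm, 2.0, 0.05, 6.0, 0.08, 1.0, 0.04, 0.8, 0.04, 0.5, 0.05)
spec = true + 0.02 * np.random.default_rng(1).normal(size=ppm.size)

p0 = [1.0, 0.05] * len(PEAKS)                 # initial amplitudes and widths
popt, _ = curve_fit(gaussians, ppm, spec, p0=p0)

# peak areas (amp * sigma * sqrt(2*pi)) and ratios to total creatine
areas = {name: popt[2 * i] * popt[2 * i + 1] * np.sqrt(2 * np.pi)
         for i, name in enumerate(PEAKS)}
tcr = areas["Cr-CH3"] + areas["Cr-CH2"]
print({k: round(v / tcr, 2) for k, v in areas.items() if not k.startswith("Cr")})
```

A real spectrum would of course require baseline handling, phase correction and the full resonance set, but the ratio-to-tCr logic is the same.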
Spectroscopic metabolites were then quantified as ratios to total creatine (tCr), with tCr representing the sum of Cr-CH2 and Cr-CH3. Mean values (±SD) of the metabolite ratios were calculated for HS (the MSP and AIP of the tongue) and for the patients with NT-SCC and T-SCC.

RESULTS

The present study demonstrated the potential of the 1H-MRS technique to noninvasively quantify the spectroscopic and metabolic profiles of tongue tissue in HS. Preliminary findings from the patients with NT-SCC and T-SCC are also presented. Representative MR images illustrating the 1H-MRS voxel placement at the MSP and the AIP of the tongue in HS are shown at the top of Figures 1 and 2, and two corresponding representative MR spectra obtained from the MSP and the AIP of the tongue in HS are shown at the bottom of Figures 1 and 2. The spectrum obtained from the MSP of the tongue in HS (Figure 1A) showed a very prominent, large peak (from 1.1 to 1.5 ppm) centered at 1.3 ppm arising from protons of the methylene (CH2) groups of lipids, attributed to IMCL-CH2. A less prominent peak, arising from protons of the methyl (CH3) groups of lipids and corresponding to IMCL-CH3, is observed at 0.9 ppm. A weak, relatively broad, nonassigned peak centered at 2.3 ppm is observed. The Cr-CH3 peak is observed at 3.03 ppm, the TMA-Cho peak is measured at 3.21 ppm and the Cr-CH2 peak at 3.92 ppm. The spectrum obtained from the AIP of the tongue in HS (Figure 1A) also showed a very prominent, large peak (from 1.1 to 1.5 ppm) centered at 1.3 ppm arising from IMCL-CH2, with a shoulder attributed to IMCL-CH3 centered at 0.9 ppm. A weak but broader (compared with the MSP of the tongue) nonassigned peak centered at 2.3 ppm is observed, together with a higher Cr-CH3 peak at 3.03 ppm, a more prominent and better-separated TMA-Cho peak at 3.21 ppm and, finally, a higher Cr-CH2 peak at 3.92 ppm.

In the patient with NT-SCC, the spectrum obtained from the voxel placed at the most aggressive part of the tongue SCC (Figure 2A) showed a very prominent peak of larger extent (compared with HS; from 1 to 1.8 ppm) centered at 1.3 ppm arising from IMCL-CH2, with a shoulder attributed to IMCL-CH3 centered at 0.95 ppm. A weak, relatively broad, nonassigned peak centered at 2.3 ppm is observed, together with a reduced Cr-CH3 peak at 3.03 ppm. However, a higher TMA-Cho peak is measured at 3.21 ppm, more visible at a TE of 144 ms (Figure 2A), and a weaker Cr-CH2 peak is measured at 3.92 ppm. In the patients with T-SCC, the spectra obtained from the MSP of the tongue (Figure 2B) showed a very prominent peak of smaller extent than in the patient with NT-SCC (from 1.1 to 1.5 ppm), centered at 1.3 ppm and arising from IMCL-CH2. The spectra also contained a separate, less prominent peak at 0.9 ppm arising from IMCL-CH3. A weak, relatively broad, nonassigned peak centered at 2.3 ppm is observed, together with a weaker Cr-CH3 peak at 3.03 ppm. Furthermore, a very low-intensity TMA-Cho peak is measured at 3.21 ppm and the Cr-CH2 peak is measured at 3.92 ppm.

DISCUSSION

The main objective of this study was to noninvasively assess the spectroscopic and metabolic profiles of tongue tissue at two different spatial locations (the MSP and the AIP of the tongue) in healthy subjects (HSs). As a second, exploratory objective, we challenged the potential of 1H-MRS to differentiate normal tongue tissue from tongue SCC before and after treatment.
The present study demonstrated the ability of the 1H-MRS technique to noninvasively quantify the spectroscopic and metabolic profiles of tongue tissue in HS and highlighted differences in spectroscopic profiles between HS and patients with nontreated as well as treated tongue SCC (Figure 3). Because the type and composition of the muscles of the MSP and the AIP (genioglossus muscles) of the tongue differ, and because SCC lesions most often arise at the borders of the tongue before affecting the body [14], metabolic differences between the MSP and the AIP of the tongue can be expected. For this reason, the first objective of this study was to perform in-vivo single-voxel 1H-MRS measurements, in HS, at two different locations, i.e., the MSP and the AIP of the tongue. As described in the Results section, differences were observed in HS between the spectroscopic profiles of these two locations. One important difference concerned lipid levels (Figure 3). In previous in-vivo 1H-MRS studies of skeletal muscle [15], lipid peaks were well identified and separated, with subpeaks arising separately from EMCL and IMCL protons. In our study, it was not possible to separate IMCL and EMCL for either CH2 or CH3 in a reproducible way. This could be explained by several factors, mainly tongue motion (especially at the MSP), magnetic susceptibility effects and residual dipolar coupling, all of which can affect the separation of metabolite peaks [13]. Moreover, differences in muscle composition between the tongue and skeletal muscles, as well as muscle fiber orientation, can also affect spectral resolution. This is consistent with previous studies [16] reporting that the EMCL and IMCL peaks in calf muscles are separated when the fibers are oriented orthogonally to the main magnetic field (B0), while nonseparated peaks are observed when the muscles are oriented parallel to B0. Further differences between the MSP and the AIP of the tongue were observed in the levels of the Cr peaks (Figure 3), an important metabolite that is often used as a reference to estimate the relative spectroscopic concentrations of all other metabolites. The AIP of the tongue showed higher levels of both Cr-CH3 (at 3.03 ppm) and Cr-CH2 (at 3.92 ppm) compared with the MSP. Since Cr reflects the energy metabolism and the cellularity of the tissue, this difference may be attributed to the higher muscle content of the AIP (the genioglossus muscles) compared with the MSP (with less muscle volume) and to their different functions: the anterior inferior muscles are more involved in motor function, while the superior muscles are rather involved in sensory, gustatory and food-transport functions (providing a smooth surface for food to slide into the hypopharynx). Finally, we noticed a better separation between the TMA-Cho peak and the Cr-CH3 peak in the AIP than in the MSP of the tongue. This point is very important, since it offers the ability to better measure and follow the TMA-Cho peak (particularly the Cho/Cr ratio, a cell proliferation marker) in tumor conditions. In summary, the spectroscopic and metabolic profiles of the AIP of the tongue appear closer to skeletal muscle profiles than those of the MSP, in agreement with the higher muscle content of the AIP and its main, motor, function.
The differences presented above, observed in HS, are important and should be taken into account in the spectroscopic evaluation and follow-up of SCC lesions. The second exploratory objective of this study was to assess the potential of in-vivo ¹H-MRS in differentiating healthy tongue tissue from SCC before and after treatment. The preliminary results presented here suggest important differences in the spectroscopic profiles between HS and patients with NT-SCC and T-SCC.

In NT-SCC (Figure 2A), the TMA-Cho/tCr ratio showed a higher value compared to HS. Although protons related to the Cho peak overlap with protons related to the TMA peak, the increase in the TMA-Cho complex is probably associated with an increase in the Cho-containing metabolites peak [mainly phosphatidylcholine (PC) and glycerophosphocholine (GPC)] rather than the TMA peak. Indeed, Cho (and its derivatives) is an important constituent in the phospholipid metabolism of cell membranes and is identified as a tumor cell proliferation marker. In clinical practice, Cho/Cr is the most studied ratio for the prognosis of brain tumors [17]. An elevated Cho/Cr is also in agreement with a previous in-vitro ¹H-MRS study [18], which showed that the Cho/Cr ratio was significantly higher in tongue SCC than in normal tissue or posttherapeutic tissue. Moreover, in our study, the patient with NT-SCC showed a Cho/Cr ratio of 2. This ratio is relatively low compared to the Cho/Cr levels usually measured in brain tumors, such as glioblastomas (Cho/Cr from 3 to 6), meningiomas (Cho/Cr from 3 to 10, with a concomitant decrease in Cr levels), and medulloblastomas (Cho/Cr from 3 to 16) [1]. We hypothesize that such a difference could be related to the epidermoid type of the SCC tumor itself, which may have lower mitotic activity and cell membrane turnover compared to tumors arising in conjunctive and supporting tissues such as glial cells (glioblastomas), meningeal cells (meningiomas), and medulloblastoma cells. Interestingly, patients with T-SCC showed almost no proliferation activity (with a Cho/Cr ratio <1.5) after radiotherapy or radiochemotherapy treatments. This could be explained by the efficiency of the treatment, by the relatively low level of proliferation before treatment (Cho/Cr = 2), or by intersubject treatment (chemotherapy) variability.

Another interesting finding concerned the variation of lipid levels. Broadening of the IMCL-CH₂ (centered at 1.3 ppm) and IMCL-CH₃ (centered at 0.95 ppm) peaks, combined with their increased FWHM values, tends to indicate an increased amount of phospholipids. In the patient with NT-SCC, this could be related to increased necrotic lipids resulting from an inadequate balance between vascularization and cell proliferation, which may be responsible for metabolic stresses such as hypoxia and energy deprivation, as in solid tumors [1]. However, ¹H-MRS measurements in the patient with NT-SCC showed a relatively low level of proliferation (Cho/Cr ratio of 2), which cannot be the only explanation for such high lipid levels. Thus, more patients with SCC and further complementary measurements are needed to investigate the tumor vascularization environment using quantitative methods, such as dynamic contrast-enhanced MRI (DCE-MRI), to benefit from its derived pharmacokinetic parameters [19]. In contrast, ¹H-MRS measurements in patients with T-SCC showed a lower level of necrotic lipids compared to patients with NT-SCC (Figure 2B).
This result was unexpected and is not in agreement with what is usually measured in brain tumors, especially glioblastomas, where very high levels of necrotic lipids are usually detected [1]. We are unable to explain this result, since many variable factors could interfere (intersubject treatment variability). This emphasizes the interest of further metabolic studies with measurements at different time points, particularly before and after each treatment step, in order to separate treatment effects from all other sources of variability. Further, a decrease in the level of the creatine peak was observed in patients with T-SCC compared to HS. One explanation would be an increased metabolic rate, and thus increased consumption of energy and of the energetic reserves of creatine and phosphocreatine. This phenomenon is known to be observed particularly under treatment or in tumors with a high proliferation rate [1].

Another finding in the present study was the probable presence of NMR-visible polyamine and/or amino acid resonances [between Cr-CH₃ (3.03 ppm) and TMA-Cho (3.21 ppm)] in NT-SCC. The amino acid-derived polyamines have long been associated with cell growth and cancer, and specific oncogenes and tumor suppressor genes regulate polyamine metabolism. Upregulation of polyamines has been found to increase cell proliferation, decrease apoptosis, and promote tumor invasion and metastasis [20]. Such resonances, indicating changes in polyamines that are usually observed in prostate cancer [21], need to be studied further and may be used to help monitor treatment responses in patients with SCC.

Finally, we noticed in all the ¹H-MRS spectra (HS, NT-SCC, and T-SCC subjects) the presence of nonassigned resonances between 2 and 2.4 ppm that could be attributed to an increase in either lipid or glutamine-glutamate resonances [22]. Further studies using two-dimensional total correlated proton nuclear magnetic resonance spectroscopy (TOCSY) could be of great interest. TOCSY is a technique that has been demonstrated to identify and quantify a wide range of metabolites such as amino acids, peptides, triglycerides, and phospholipid precursors [23]. Moreover, high-resolution magic angle spinning (HRMAS) NMR spectroscopy [24] on tongue SCC tumor biopsies may also bring more insight into these resonance changes, especially for the glutamine metabolite, which is usually related to malignancy or tumor evolution.

The present study has some limitations: (1) because of differences in muscle content and muscle fiber orientation, and motion artifacts (respiration and swallowing), the in-vivo ¹H-MRS spectra obtained from the tongue MSP were not of sufficiently high quality, and only six (out of 14) HS were retained and used for metabolite ratio quantification in the tongue MSP, while all fourteen ¹H-MRS spectra obtained from the tongue AIP were included; (2) SCC tumors (especially T-SCC) showed higher tissue heterogeneity compared to healthy tongue tissue, consequently reducing the quality of the ¹H-MRS spectra; and (3) although the present study was mainly focused on studying healthy tongue tissue, the small number of patients with SCC was a major limitation. We recruited only three patients with tongue SCC to preliminarily investigate the potential use of ¹H-MRS in differentiating normal tongue tissue from SCC before and after treatment.
The small SCC patient sample does not allow any generalization, but we hope that our preliminary results can serve as a starting point for future studies with larger samples, screening patients with SCC before and after each treatment step to validate the present results. Combining ¹H-MRS data with other relevant imaging tracers demonstrated in studies dealing with brain tumors, such as 11C-methionine [25] and 3′-deoxy-3′-(18)F-fluorothymidine ((18)F-FLT) [26] (cell proliferation metabolism), and [F-18]fluoromisonidazole (tumor hypoxia) [27] using PET imaging, would be of high added value.

Despite these limitations, the present study opens up new perspectives in the study of SCC lesions. ¹H-MRS studies could be compared and/or coupled to quantitative imaging techniques, such as [F-18]-fluoro-2-deoxy-D-glucose PET (18F-FDG-PET). Together, ¹H-MRS and 18F-FDG-PET have a very high sensitivity compared to MRI and X-ray CT [28] and may be very helpful in determining the extension and depth of invasion of the tumor and in prognosis evaluation in patients with oral SCC [29]. Furthermore, in line with the objective of the present study, in-vivo ¹H-MRS metabolic signatures could be coupled to in-vitro ¹H-MRS analyses of accessible biofluids such as serum and saliva. Previous metabolomics studies revealed the existence of metabolic signatures in such biofluids, with several tumor-specific metabolites that could discriminate oral cancer from healthy controls or even precancerous lesions [30]. To this end, serum, an easily accessible biofluid from a blood sample, could be analyzed concomitantly (without preparation [30]) with ¹H-MRS acquisitions. Blood samples from oral cancer patients analyzed in previous work exhibited altered metabolic profiles occurring at an early stage of cancer [31], characterized mainly by an altered energy metabolism with the presence of ketone bodies, suppression of the tricarboxylic acid cycle, and abnormal amino acid catabolism. Another previous study [32] on esophageal SCC revealed that plasma phospholipid metabolism plays a critical role in oncogenesis; this result is in agreement with our in-vivo results, as depicted by a significant increase of lipids in patients with tongue SCC. Moreover, in a previous NMR spectroscopy study [33], specific variations in the salivary metabolomic profile were identified, revealing that fucose, glycine, methanol, and proline were highly discriminant between patients with HN cancer and control subjects. Further metabolomics studies revealed high sensitivity to saliva metabolic changes [34] and to DNA methylation alterations (which are detectable in saliva) in patients with oral squamous cell carcinoma (OSCC) [35][36][37], suggesting that salivary biomarkers are valuable for OSCC early diagnosis and stratification. Altogether, these elements emphasize the importance of conducting further metabolomics studies with concomitant in-vivo and ex-vivo biofluid (serum and/or saliva) ¹H-MRS analyses.

CONCLUSION
The present study demonstrated the potential use of the in-vivo ¹H-MRS technique to noninvasively assess the spectroscopic and metabolic profiles of the tongue tissue, in a spatial location-dependent manner, in healthy subjects. Relevant differences, observed in healthy subjects, were presented and should be taken into account in the spectroscopic and metabolic evaluations of SCC lesions.
Preliminary results revealed differences between healthy subjects and patients with nontreated as well as treated tongue SCC, which should be confirmed in a larger number of patients. These preliminary findings highlight the potential for ¹H-MRS to be included, in the future, in the existing arsenal of tools for diagnosis, treatment response evaluation, and estimation of the potential for tumor recurrence and patient survival, and thus to improve the surgical management of patients with tongue SCC.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
Ethical approval for this study was obtained from the Clermont-Ferrand Ethical Committee. The patients/participants provided their written informed consent to participate in this study.
2022-07-20T15:27:07.588Z
2022-07-18T00:00:00.000
{ "year": 2022, "sha1": "eb2105ce26398c5a06f78e6d8469aa3ccfa56b0a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/froh.2022.912803/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9303862546dc750c7100900daca1e65bf2ebf142", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257775069
pes2o/s2orc
v3-fos-license
Cooperative motility, force generation and mechanosensing in a foraging non-photosynthetic diatom

Diatoms are ancestrally photosynthetic microalgae. However, some underwent a major evolutionary transition, losing photosynthesis to become obligate heterotrophs. The molecular and physiological basis for this transition is unclear. Here, we isolate and characterize new strains of non-photosynthetic diatoms from the coastal waters of Singapore. These diatoms occupy diverse ecological niches and display glucose-mediated catabolite repression, a classical feature of bacterial and fungal heterotrophs. Live-cell imaging reveals deposition of secreted extracellular polymeric substance (EPS). Diatoms moving on pre-existing EPS trails (runners) move faster than those laying new trails (blazers). This leads to cell-to-cell coupling where runners can push blazers to make them move faster. Calibrated micropipettes measure substantial single-cell pushing forces, which are consistent with high-order myosin motor cooperativity. Collisions that impede forward motion induce reversal, revealing navigation-related force sensing. Together, these data identify aspects of metabolism and motility that are likely to promote and underpin diatom heterotrophy.

Introduction
Eukaryotes fall into fundamentally distinct groups based on their means of energy acquisition. Photoautotrophs (land plants and algae) derive energy from sunlight. By contrast, heterotrophs, such as animals and fungi, obtain energy by feeding on primary producers or each other. Mixotrophs combine these two strategies.

Eukaryotic photoautotrophs evolved several times through endosymbiosis between a heterotrophic eukaryote and a photosynthetic microbe. Such an association between the ancestor of the Archaeplastida and a cyanobacterium led to the emergence of land plants, green and red algae, and glaucophytes. Afterwards, distinct algal lineages arose through secondary and tertiary endosymbiotic events, where the photosynthetic capacity was acquired from a eukaryotic red or green alga [1][2][3][4].

The stramenopiles (also known as heterokonts) obtained their plastid through a secondary or higher endosymbiotic event with a red alga [5]. This group includes oomycetes, multicellular brown algae and unicellular diatoms [6]. With an estimated 100 000 species [7], diatoms are one of the most abundant and diverse groups of marine and freshwater microalgae [8][9][10][11][12]. Notably, they employ biomineralization to construct silica-based [13,14] cell walls (frustules) that fit together like the two halves of a petri dish. Diatoms possess either radial (centric diatoms) or bilateral (pennate diatoms) symmetry. A group of pennate diatoms evolved a fine longitudinal slit through the frustule known as the raphe. These raphid pennate diatoms can move using a lineage-specific form of gliding motility [15,16] and have undergone substantial evolutionary radiation to comprise the most species-rich and diverse lineage of diatoms [9].
The raphe acts as a channel for the secretion of a complex mixture of proteins and glycoproteins [17][18][19][20][21] known collectively as extracellular polymeric substances (EPS). Motility can be blocked by an antibody to EPS [17] and by actomyosin inhibitors [22,23], suggesting that both systems play essential roles. Actin filaments occur in two prominent bundles that underlie the raphe just adjacent to the plasma membrane [24]. This arrangement supports a model of motility where myosin motors exert pushing forces on the extracellular EPS through a transmembrane protein [25]. However, this protein has yet to be identified, and since actin is likely to be required for EPS secretion [26,27], an alternative model where force is generated from EPS polymerization has not been excluded [15,16].

Within each photosynthetic lineage, loss of photosynthesis led to secondary heterotrophs, many of which are parasites that derive energy from their host [28]. Transitions to epizoic [29] and free-living [30,31] heterotrophy are also well documented. However, in most of these cases, the manner of energy acquisition remains unclear. In the diatom genus Nitzschia, loss of photosynthesis led to a group of free-living heterotrophs [32,33]. These apochlorotic diatoms have been isolated from the nutrient-rich waters of the intertidal zone, where they occur as epiphytes on seaweeds, on decaying plant matter and in the surrounding waters [32,[34][35][36][37]. As with many photoautotrophs that transition to heterotrophy [30], they have retained their plastid genomes and certain plastid-localized metabolic functions, but have lost key photosynthetic genes [38]. Early work showed that apochlorotic diatoms can grow on a variety of simple and complex carbohydrates, including cellulose and the red algal cell wall polysaccharides agarose and carrageenan [34,35,39]. Recent genome sequence studies have identified lignin-degrading enzymes in Nitzschia Nitz4 [40] and the expansion of secreted proteins and functions related to organic carbon acquisition in Nitzschia putrida [41]. Thus, candidates for key heterotrophy-related functions are beginning to emerge.

Here, we isolate new strains of apochlorotic diatoms from Singapore's intertidal zone. Live-cell imaging documents EPS trail deposition and complex motility-related behaviours that include high force generation (approx. 800 pN), cooperative motility and collision-induced reversal. Variations in motility and metabolism suggest that apochlorotic diatoms are undergoing substantial ecophysiological radiation. We propose that these new isolates provide excellent models to study the evolutionary transition to free-living heterotrophy.

Isolation and characterization of apochlorotic diatoms
Diatoms were cloned from organic materials collected from the intertidal zone on Sentosa Island, Singapore (see Material and methods). Five clones were initially isolated from decaying plant matter, the brown alga Sargassum and the green alga Bryopsis (figure 1a). Subsequent work revealed the ability of these diatoms to metabolize the brown algal cell wall polysaccharide alginate (figure 2a). Thus, we isolated an additional seven clones from Sargassum. Phylogenetic analysis indicates that these 12 isolates fall into three distinct clades (figure 1a).

Isolates were named Nitzschia singX-Y, where X designates the clade number and Y the isolate number. Isolates in clades 1 and 2 are sister taxa, with clade 2 having an affinity for N. alba, while clade 3 is distantly related to clades 1 and 2.
Isolated diatoms grown on agarose seawater [42] media form a radially expanding colony. Growth rates vary substantially both within and between clades. Rates of colony expansion for clade 1 and 2 diatoms vary between 100 and 300 nm s⁻¹ (electronic supplementary material, figure S1a). By contrast, all clade 3 diatoms and the recently sequenced apochlorotic N. putrida [41] show very little colony expansion, with cells dividing to form aggregates at the site of inoculation (electronic supplementary material, figure S1b,c).

Clade 1 diatoms were isolated from green and brown algae, and decaying plant matter, suggesting that they occupy diverse ecological niches. They are also among the fastest-growing isolates. Thus, we chose N. sing1-1 as our model apochlorotic diatom (hereafter referred to as N. sing1). F-actin staining reveals characteristic bands underlying the raphe (figure 1b), and scanning electron microscopy (SEM) of frustules identifies eccentric raphes, hymenate pore occlusions and strongly hooked distal raphe ends (electronic supplementary material, figure S1d).

Growth on algal polysaccharides and catabolite repression
We next examined the growth of N. sing1 on seawater media solidified with the red algal cell wall polymers agarose and carrageenan. As previously observed with N. alba [39], both of these substrates could be used as the sole carbon source (electronic supplementary material, figure S2). In addition, N. sing1 grows on the brown algal polysaccharide alginate. For each polysaccharide substrate, the rate of radial colony expansion generally has a concentration optimum and tends to decrease with increasing concentration (electronic supplementary material, figure S2). In the case of alginate, the medium underwent liquefaction and browning, indicative of polysaccharide hydrolysis (figure 2a). This is confirmed by a heat-sensitive alginate lyase enzyme activity detected in the media of N. sing1 grown with alginate but not glucose (figure 2b). Representatives of each clade of Singaporean diatoms liquefy alginate, but N. putrida does not. This suggests that modes of heterotrophy vary substantially within the apochlorotic lineage. Neither agarose nor carrageenan undergo liquefaction. However, on these media, the diatoms tunnel to grow invasively (electronic supplementary material, figure S3), as has been documented for N. albicostalis [35] and N. alba [39].

We next examined cellular extracts of diatoms grown on seawater agar medium with and without 0.5% glucose. SDS-PAGE reveals two N. sing1 proteins, p40 and p60, that are abundant on agarose media but substantially diminished when glucose is present. This repression is observed for representatives of each N. sing clade, but not N. putrida (figure 2c). Alginate also promotes catabolite repression in N. sing1, indicating that it is also a preferred carbon source (electronic supplementary material, figure S4a). N. sing1 accumulates large lipid droplets in the presence but not absence of glucose. By contrast, N. putrida lipid droplets have a similar appearance irrespective of glucose presence (figure 2d and electronic supplementary material, figure S4b). This provides further evidence for the metabolic responsiveness of N. sing1 to a preferred carbon source.

Environmental control of EPS trails and motility
While measuring N.
sing1 growth, we found that the EPS can be seen as a refractive trail by bright-field microscopy (figure 3a). This is probably due to swelling of the EPS to form a refractive convex cross-sectional profile. EPS trails formed on 1% agarose have a uniform width and appearance. By contrast, on 2% agarose, where motility is substantially diminished, the trails take on a broken appearance and the EPS forms refractive spherical structures. To examine how the availability of seawater affects motility, we overlayed the medium with seawater (figure 3b). In this condition, a dramatic increase in the speed of motility is observed as compared to plates without a seawater overlay (figure 3c,d). Here, trails are not seen because the EPS is not at the air interface. Together, these findings link nascent EPS swelling and material properties with the promotion of motility. Interestingly, diatoms are also observed gliding in a monolayer at the seawater-air interface, indicating that N. sing1 motility is not strictly dependent on substratum attachment (figure 3c).

Cooperative motility, force generation and force sensing
Movies of EPS deposition allowed us to differentiate diatoms laying fresh EPS trails from those moving on pre-existing trails (figure 4a). We refer to these as trail blazers (blazers) and trail runners (runners), respectively. Blazers instantly accelerate upon joining a trail, while runners that leave a trail instantly decelerate (figure 4b-e). These observations indicate that gliding motility is inherently more efficient when occurring on an EPS trail. Because of this relationship, runners tend to catch up to blazers and form chains of cells, particularly at the colony's expanding edge. When fast-moving runners catch up to blazers, the blazer can instantly accelerate (figure 4f,g). Thus, runners can exert pushing forces to make blazers move faster.

Mathematical modelling shows that the movement of diatoms is well approximated by Brownian motion over large space and time scales (electronic supplementary material). A quasi-steady-state analysis of the model provides a mathematical relationship between motion characteristics and establishes that colony diffusivity increases with diatom speed. This relationship is preserved in runner-blazer groups, which exhibit an increase in speed compared with lone blazers (figure 4f,g). Thus, modelling confirms the tendency for cooperative motility to increase colony diffusivity.

Broadside collisions between diatoms are readily observed, and these frequently lead to reversal of the impacting diatom. In figure 5a, diatom 1 undergoes three successive impacts with a relatively stationary diatom 2. The initial collision is followed by a rapid reversal. However, the second and third collisions are characterized by longer periods spent pushing. This is coincident with increasing degrees of deflection of diatom 2 (figure 5a). Thus, the capacity for force generation appears to be dynamic. Not all collisions lead to reversal. In some cases, the impacting diatom slows dramatically upon collision and continues to move forward as it pushes the other diatom out of its path (figure 5b,c). Together, these observations reveal the ability of moving N. sing1 diatoms to impart force, which impacted diatoms resist through substratum adhesion.
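The modelling result quoted above, that colony diffusivity increases with diatom speed, can be illustrated with a minimal one-dimensional run-and-reverse random walk. This is an illustrative sketch, not the authors' model; the speeds and reversal rate are assumptions:

```python
import random

def run_and_reverse_msd(speed_um_s, reversal_rate_s, t_total=2000.0, dt=1.0, n=500):
    """Mean squared displacement of 1D run-and-reverse walkers (toy model)."""
    msd = 0.0
    for _ in range(n):
        x, direction = 0.0, random.choice((-1, 1))
        t = 0.0
        while t < t_total:
            if random.random() < reversal_rate_s * dt:  # Poisson-like reversal
                direction = -direction
            x += direction * speed_um_s * dt
            t += dt
        msd += x * x
    return msd / n

for v in (0.1, 0.2, 0.4):  # hypothetical gliding speeds, um/s
    msd = run_and_reverse_msd(v, reversal_rate_s=0.01)
    # In this toy model MSD ~ 2*D*t at long times, with D ~ v^2/(2*lambda),
    # so doubling the speed roughly quadruples the effective diffusivity.
    print(f"v = {v} um/s -> effective D ~ {msd / (2 * 2000.0):.3f} um^2/s")
```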
We next employed a method that allows the measurement of forces exerted by single diatoms. Here, a force-calibrated glass micropipette is placed in the path of moving diatoms and the force is estimated through the degree of pipette deflection [43,44]. These data reveal forces between approximately 100 and approximately 800 pN (figure 6a,b; electronic supplementary material, figure S5). Lower force measurements are associated with glancing contact with the pipette. By contrast, high force measurements occur within the context of head-on contact and adhesion between the diatom and the pipette. Adhesion is evidenced by diatom detachment from the agar surface at a load of approximately 740 pN (figure 6b, no. 3) and an apparent EPS tether that attained a length of 3-4 µm before snapping at a load of approximately 800 pN (figure 6b, no. 2). Diatoms pass underneath the pipette while it undergoes deflection, suggesting that they are subject to downward pushing forces of approximately 84 pN while pushing/pulling on the pipette (based on an estimated diatom height of 4 µm). Overall, these data show that N. sing1 is capable of producing surprisingly high forces. For comparison, single intact muscle filaments produce forces of approximately 200-300 pN [45,46].

Collision-associated reversals suggest that N. sing1 diatoms can sense force. However, reversals do not always follow collisions (figure 5c,d) and also occur in free-running diatoms. To compare these two events, diatoms were grown on an agar surface close to a coverslip embedded perpendicular to the medium. Diatoms undergoing collisions with the coverslip were then compared to nearby diatoms that did not experience collision (figure 6c). In this experiment, 100% of colliding diatoms reverse within 400 s of being immobilized. By contrast, 13% of individuals whose motility is unimpeded reverse within the same time interval (figure 6d). In these experiments, colliding diatoms spend a longer period immobile prior to reversal when compared with free-running reversals. Free-running diatoms also slow prior to stopping and reversing, suggesting underlying distinctions between free-running and collision-induced reversals (electronic supplementary material, figure S6). Irrespective of this, these data show that collisions that impede forward motion significantly increase the probability of reversal. This type of force sensing is likely to increase colony diffusivity, especially where substrates possess complex morphologies.

Diatom isolation
Organic materials were collected at low tide from the intertidal zone on Sentosa island, Singapore (latitude 1.259895, longitude 103.810843; Singapore National Parks Board permit no. NP/RP20-016). Samples were inoculated at the centre of synthetic seawater [42], 2% (w/v) agar plates supplemented with 100 µg ml⁻¹ ampicillin (Sigma-Aldrich, A9518) and 50 µg ml⁻¹ kanamycin (Sigma-Aldrich, K4000). After 2-3 days of incubation at 30°C, single diatoms gliding away from the source inoculum were excised with a scalpel and transferred to a fresh plate. To ensure that all isolates are single-cell clones, the isolation process was repeated. Diatom clones were cryopreserved according to the method of Stock et al. [47]. The Sargassum species present at the collection site were identified as a mixture of S. polycystum, S. cf. granuliferum and S.
ilicifolium (electronic supplementary material, figure S8) through a maximum-likelihood (ML) analysis of an ITS-2 alignment as previously described [48]. New sequences are available in GenBank under accessions OQ165106 to OQ165109. N. putrida (strain NIES-4239) was obtained from the Microbial Culture Collection at the National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506, Japan.

The concatenated alignment was partitioned by gene and codon position, and IQ-TREE's partitioning and model selection procedure (-m TESTMERGE) was used to identify the best-fitting partition and nucleotide substitution model [54,55]. The model used edge-proportional branch lengths (IQ-TREE's '-spp' option) to account for differences in evolutionary rates among partitions. Branch support was based on 1000 ultrafast bootstrap replicates [56].

Diatom growth, measurement, microscopy and movies
Diatoms were cultured on a synthetic seawater medium [42] with the following modification: NaCl was employed at a final concentration of 180 mM, and for growth in liquid cultures Na₂SiO₃ was employed at a final concentration of 840 µM. For solid media, salt solutions I and II were prepared as 4× stock solutions, while polysaccharides were prepared at 2× concentrations. When necessary, polysaccharides were boiled and equilibrated to 60°C before mixing with salt solutions I and II to yield 1× final concentrations. Polysaccharides were obtained from the following sources: Bacto agar (BD, 214010), agarose (Vivantis, PC0701), carrageenan (Sigma-Aldrich, C1013), low viscosity sodium alginate (Sigma-Aldrich, A1112) and medium viscosity sodium alginate (Sigma-Aldrich, A2033). For the alginate liquefaction assay (figure 2c), medium viscosity sodium alginate was employed.

To measure the speed of radial colony expansion (electronic supplementary material, figures S1a and S2), a starter culture was prepared on a seawater agar petri dish. From these confluent plates, a small block (approx. 0.5 × 0.5 cm) was excised with a scalpel and used to inoculate fresh plates from which measurements were derived. After overnight growth at 30°C, the edge of the radially expanding colony was marked. A subsequent measurement was made after 24 h. These marks were used to calculate speed in nm s⁻¹. Each speed measurement was done in triplicate.

For movies, diatoms were grown on seawater agar medium in petri dishes. Movies were made using an Olympus BX51 upright microscope equipped with a 5× objective, using a Photometrics CoolSNAP HQ (Teledyne Photometrics) camera controlled by MetaVue software (Molecular Devices). Frames were acquired every 30 s at an exposure time of 20 ms with bright-field illumination. For the movies shown in figure 3c, frames were acquired every 5 s. Graphs of diatom speed were made by manually measuring frame-to-frame diatom movement in ImageJ (https://imagej.nih.gov/ij/index.html). For the quantification shown in figure 4d,e, the average increase or decrease in speed was calculated from the average speed over six frames before and after joining the trail.
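As an illustration of the frame-to-frame speed quantification just described (frames 30 s apart, positions measured manually in ImageJ), the following sketch computes instantaneous speeds and the six-frame averages before and after a trail-joining event. The coordinate values and the join frame are hypothetical:

```python
# Hypothetical diatom centroid positions (x, y) in micrometres, one per 30 s frame.
positions = [(0, 0), (3, 1), (6, 2), (9, 2), (12, 3), (15, 4), (18, 5),
             (24, 6), (30, 8), (36, 9), (42, 11), (48, 12), (54, 14)]
DT = 30.0          # s between frames
JOIN_FRAME = 6     # hypothetical frame at which the blazer joins a trail

def frame_speeds(pts, dt):
    """Frame-to-frame speeds in um/s from successive centroid positions."""
    out = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        out.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return out

speeds = frame_speeds(positions, DT)
before = sum(speeds[JOIN_FRAME - 6:JOIN_FRAME]) / 6   # six frames before joining
after = sum(speeds[JOIN_FRAME:JOIN_FRAME + 6]) / 6    # six frames after joining
print(f"mean speed before: {before:.3f} um/s, after: {after:.3f} um/s")
```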
To examine total cell proteins (figure 2c and electronic supplementary material, figure S4a) by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), plates on which diatoms had been grown to confluence were flooded with 7.5 ml phosphate-buffered saline + (PBS+) (10 mM Na₂HPO₄, 1.8 mM KH₂PO₄, 240 mM NaCl), and cells were scraped off gently using a cell scraper (Corning Incorporated Costar, 3010). The cells were pelleted by centrifugation at 3.9 k × g for 10 min and washed once with 7 ml of PBS+ before being lysed by boiling in SDS-PAGE loading buffer. Proteins were resolved using 10% polyacrylamide gels.

For staining with phalloidin (ThermoFisher, A12379) (figure 1b), a block of seawater agar medium from a confluent plate was transferred to the centre of a coverslip. The coverslip was placed in a 60 mm petri dish, overlayed with 6 ml of liquid synthetic seawater medium (0.5% (w/v) glucose), and incubated at 30°C for 24 h. The agar block was subsequently removed, and the coverslip placed cells-up on a piece of parafilm in a 90 mm petri dish. The cells were washed once with PBS+ and then fixed in PBS containing 4% paraformaldehyde (Electron Microscopy Sciences, 15713) for 30 min at room temperature. Samples were washed three times with PBS+ and then incubated with 0.17 µM phalloidin 488 (stock: 66 µM in dimethyl sulfoxide (DMSO)) in PBS+ for 1 h in the dark. Samples were washed with PBS+ (3 × fast wash + 3 × 10 min wash) at room temperature. After the last wash, 5 µl mounting medium (PBS, 20% glycerol, 2 µg ml⁻¹ DAPI, 1× antifade) (100× antifade stock: 20% (w/v) n-propyl gallate (Sigma-Aldrich, P3130) in DMSO) was added and the coverslip was placed on a microscope slide cells-down and sealed with parafilm.

For staining of lipid droplets with BODIPY, diatoms were grown on seawater agar media. After 2 days of growth at 30°C, a block (approx. 1 cm × 1 cm) was excised with a scalpel and transferred cells-up onto a microscope slide. The diatoms were overlayed with 4 µl of a solution containing 30 µg ml⁻¹ BODIPY 505/515 (Invitrogen, D3921) in PBS+. After approximately 30 min a coverslip was added and the diatoms were imaged by fluorescence microscopy. The diatom frustules were prepared for SEM as previously described [57].

Force measurement
To measure the force produced by diatom movement (figure 6a,b), a small block, approximately 0.25 × 0.5 cm, was excised from a starter culture and used to inoculate 1.5% agarose seawater medium contained by a U-shaped thin-wall chamber made of polydimethylsiloxane (DOW, 4019862) on top of a coverslip [43]. After incubation at room temperature for 1 day, the chamber was flooded with seawater (electrostatic attraction between the pipette and medium necessitated that these experiments be conducted with a seawater overlay, as in figure 3b). The chamber was then placed on a microscope stage with a slide holder, and a micropipette, prepared as described below, was inserted through the open side of the chamber and positioned in front of a diatom using an MP-285 motorized micromanipulator (Sutter Instruments). Movies were made using an Olympus IX81 equipped with a 40× objective and controlled with MetaMorph software (Molecular Devices), with frames acquired every 0.5 s. Frame-to-frame micropipette and diatom movements were manually measured using ImageJ and used to produce force/velocity graphs (electronic supplementary material, figure S5).
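In this kind of assay, the pushing force follows directly from the measured pipette deflection as F = k·δ, with k the calibrated pipette stiffness quoted in the micropipette section below. A minimal sketch; the deflection values are hypothetical:

```python
K_PIPETTE = 21.09          # pN/um, calibrated pipette stiffness (see Methods)
K_PIPETTE_SD = 4.22        # pN/um, reported uncertainty

# Hypothetical frame-to-frame pipette tip deflections (um), measured in ImageJ.
deflections_um = [0.0, 4.1, 12.8, 25.3, 37.9, 36.0]

forces_pN = [K_PIPETTE * d for d in deflections_um]
peak = max(forces_pN)
peak_err = (K_PIPETTE_SD / K_PIPETTE) * peak   # propagate stiffness uncertainty

print(f"peak pushing force ~ {peak:.0f} +/- {peak_err:.0f} pN")
```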
Micropipette production: briefly, thin-wall glass capillaries (1 mm outer diameter, 0.78 mm inner diameter; World Precision Instruments, TW100F-6) were pulled using a P-97 micropipette puller (Sutter Instruments) and the tip was cut to the desired inner diameter of approximately 2 µm with an in-house developed heated platinum wire. The stiffness of the micropipette was calibrated using a standard micro glass rod (k_s = 21.09 ± 4.22 pN µm⁻¹). The preparation of a standard micro glass rod and calibration of the working micropipette were conducted as previously described [43]. Before use, the tip of the micropipette was incubated overnight with 3% fetal bovine serum (Sigma-Aldrich, A7030).

Alginate lyase enzyme activity assay
Diatoms were grown in liquid medium for 3 days at 30°C, after which cells were removed using a cell strainer (SPL Life Sciences, 93040). The medium was then centrifuged at 3.9 k × g for 15 min and concentrated using a Pierce protein concentrator with a 3 kilodalton cut-off (ThermoFisher, 88526). The concentrated medium was diluted 1:5 with synthetic seawater medium before use. For heat-inactivated controls, the diluted culture media was heated at 100°C for 5 min and then briefly centrifuged. To perform the enzyme assay, 5 µl of diluted sample was added to 45 µl sodium alginate buffer (10 mM Tris (pH 7.4), 200 mM NaCl, 200 mM KCl, 2 mM CaCl₂, 0.01% sodium azide and 0.1% low-viscosity alginate (Sigma-Aldrich, A1112)) in a 384-well UV-STAR microplate (Greiner Bio-One International). Alginate lyase activity was determined by measuring the increase in absorbance at 235 nm at 1-min intervals using a Tecan Spark Multimode Microplate Reader (Tecan Inc.).

Discussion
The apochlorotic diatoms described here were isolated from a variety of organic materials (figure 1a) and can grow on a broad range of algal polysaccharides (figure 2a; electronic supplementary material, figure S2). Evidence for catabolite repression mediated by the presence of preferred carbon sources (figure 2c; electronic supplementary material, figure S4a) identifies an aspect of metabolism classically associated with heterotrophy in fungi [58] and bacteria [59]. Overall, these findings are consistent with a general role for apochlorotic diatoms in coastal marine nutrient cycling, one akin to the role of osmotrophic fungi [60] in terrestrial environments. Interestingly, unlike the diatoms identified here, N. putrida does not metabolize alginate nor show evidence of catabolite repression (figure 2a,c). A high degree of ecophysiological variation is further evidenced by the poor motility of clade 3 isolates and N. putrida as compared to clade 1 and 2 isolates (electronic supplementary material, figure S1a-c). Together, these findings suggest that apochlorotic diatoms exploit distinct feeding strategies and are undergoing substantial evolutionary radiation.
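Returning briefly to the alginate lyase assay described in the Methods above: activity is naturally expressed as the slope of the A235 time course, which a simple least-squares fit recovers. A minimal sketch; the absorbance readings are illustrative assumptions:

```python
# Illustrative A235 readings at 1-min intervals (see alginate lyase assay above).
times_min = [0, 1, 2, 3, 4, 5]
a235 = [0.102, 0.131, 0.158, 0.187, 0.214, 0.243]   # hypothetical values

n = len(times_min)
mean_t = sum(times_min) / n
mean_a = sum(a235) / n
slope = (sum((t - mean_t) * (a - mean_a) for t, a in zip(times_min, a235))
         / sum((t - mean_t) ** 2 for t in times_min))   # least-squares slope

print(f"alginate lyase activity ~ {slope:.3f} dA235/min")
```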
A series of evolutionary innovations culminating in force generation from motility are likely to have predisposed the diatoms to a successful transition to heterotrophy. These include the advent of the silica-based cell wall, bilateral symmetry, the raphe and forceful motility. Certain marine bacteria are highly evolved for alginate metabolism [61,62], but are unlikely to generate the forces necessary for tunnelling [63]. Thus, high forces from diatom gliding motility (figure 6a,b) are likely to underlie invasive growth (electronic supplementary material, figure S3) and allow access to nutrient pools unavailable to competing microorganisms. This is consistent with N. alba invasive growth on brown algal tissues [39] and tunnelling in both N. alba and N. albicostalis, which appears to be stimulated by the presence of bacteria [35,39]. In terrestrial environments, the fungi have a similar advantage, where the force from pressurized hyphal networks underlies invasive growth [64]. Thus, distinct manners of force generation appear to provide an advantage to eukaryotic heterotrophs over their bacterial counterparts.

Diatom EPS trails have been visualized and characterized by electron microscopy [65], atomic force microscopy [66,67] and various staining techniques [15,17,19,68]. The direct observation of EPS deposition by living cells provides the opportunity to investigate the relationship between EPS trails and motility. On agarose concentrations that favour motility, uniform refractive EPS trails are presumably visible due to their convex cross-sectional profile. With increasing agarose concentration, motility slows substantially and EPS trails lose their uniform cross-sectional profile to form aberrant refractive puncta (figure 3a). This suggests that freshly secreted EPS undergoes rapid swelling that is sensitive to timely hydration and/or the availability of critical seawater ionic constituents. A critical role for hydration is also consistent with dramatically enhanced motility when agar plates are overlayed with seawater (figure 3b-d). Faster motility on pre-existing trails and increased motility with a seawater overlay have also been observed in Phaeodactylum tricornutum [69]. Thus, these aspects of motility are likely to be general features of raphid pennate diatoms. The sea surface microlayer (SSM) is known to have a distinct physical, chemical and biological composition [70]. The finding that gliding occurs in a monolayer at the SSM (figure 3c) suggests that the EPS has an affinity for the underside of the seawater surface. This provides a physical basis for previous work showing that apochlorotic diatoms are enriched at the SSM [36].

The relationship between runners and blazers leads to cooperative motility (figure 4) and is likely to be related to the dual function of EPS in adhesion and motility. Runners that go off-trail instantly decelerate. This is consistent with more nascent EPS being consumed by the adhesive function. By contrast, blazers joining a trail instantly accelerate (figure 4b-d) because adhesive contacts are already present, and only EPS-EPS contacts are required. Interestingly, line scans of trails do not change dramatically between a fresh trail and one that has been passed over by runner diatoms (electronic supplementary material, figure S7). Thus, runners may be secreting substantially less EPS than blazers. This could also factor into their tendency to move at higher speeds.
Our force measurements are consistent with force generation through myosin motors; however, they do not exclude a role for EPS polymerization. Single myosin molecules exert forces of 3-4 pN [71], while isolated muscle filaments can generate maximum forces of approximately 200-300 pN [45,46]. Thus, N. sing1 peak forces of 700-800 pN (figure 6a,b) are consistent with the cooperative action of multiple myosin motors arrayed along the length of the raphe-associated actin filaments. Force measurements were made with single diatoms and are likely to be higher in runner/blazer chains. Thus, cooperative motility is expected to increase both colony diffusivity (electronic supplementary material) and the ability to tunnel effectively.

A related set of equations describes the colony diffusivity of N. sing1 (electronic supplementary material) and of the mixotrophic diatom Navicula [72]. Thus, periodic reversal combined with random turning is likely to be a general strategy used by raphid pennate diatoms to avoid immobilization after encountering an obstacle. By sensing immobilization (figure 6c), N. sing1 is able to reduce the period of immobility, while presumably maintaining an independent frequency of free-running reversal. Distinct mechanisms could underlie N. sing1 force sensing. For example, mechanosensitive ion channels are established force sensors that could trigger a signalling cascade leading to reversal. In another model, strain on the force-generation machinery could trigger reversal through a feedback mechanism. More work will be required to determine the mechanism of force sensing.

The transition from photoautotrophy to obligate heterotrophy is likely to have been accompanied by a variety of physiological adaptations. However, because many pennate diatoms are highly evolved mixotrophs [73][74][75][76], it remains unclear whether alginate utilization, cooperative motility and force sensing are unique to apochlorotic diatoms or pre-date their emergence. Identifying the genetic basis for diatom obligate heterotrophy will require an integrated approach that combines comparative genomics with molecular, biochemical, cellular and physiological studies.

Ethics. This work did not require ethical approval from a human subject or animal welfare committee.

Figure 1. Characterization of Singaporean apochlorotic diatoms. (a) Maximum-likelihood phylogeny of apochlorotic Nitzschia (shaded box) and Bacillariales photosynthetic outgroup taxa. The Singaporean isolates are identified in magenta. The material from which they were isolated is indicated in green. Filled circles show simplified bootstrap support values. Scale bar = 0.1 nucleotide substitutions. Orange arrowheads identify apochlorotic species with sequenced genomes. (b) F-actin staining of N. sing1-1. The arrows point to the approximate position of the proximal raphe ends. A bright-field (BF) image of N. putrida is shown for comparison. Scale bar = 10 µm.

Figure 4. Live-cell imaging of gliding and EPS deposition by N.
sing1-1. (a) The image shows a field of diatoms near the colony edge. The direction from the colony interior to the colony periphery is indicated. Diatoms laying a new trail are blazers (b), while diatoms gliding on a pre-existing trail are runners (r). Note that these identities are interchanged as diatoms reverse or transition on or off trails. Scale bar = 100 µm. Related to electronic supplementary material, movie S4. (b) A blazer (blue circle) instantaneously accelerates upon joining a pre-existing trail (magenta line). White arrows indicate the periods spent as blazer (b) and as runner (r). Scale bar = 100 µm. The projection is derived from electronic supplementary material, movie S5. (c) The graph shows the speed of the diatom from (b). (d) The graph quantifies the average increase in speed when a blazer becomes a runner. Standard deviation is indicated (n = 6). (e) The graph quantifies the average decrease in speed when a runner becomes a blazer. Standard deviation is indicated (n = 6). (f) Runners can push blazers to make them move faster. A chain consisting of a blazer and two runners is caught by a pair of runners to make a chain of five. The groups are identified at the indicated frames by symbols given in the legend. The moment when cell-cell coupling occurs is indicated. Scale bar = 100 µm. Related to electronic supplementary material, movie S6. (g) The graph shows the speed of the groups as defined in (f).

Figure 5. Pushing and collision-triggered reversals. (a) Diatom 1 collides with a stationary diatom 2 three consecutive times. Frame-to-frame speed is shown in the graph. The images at the bottom show the first (upper panel) and last (lower panel) frames of the collisions. Scale bar = 10 µm. Related to electronic supplementary material, movie S7. (b) Diatom 1 collides into a stationary diatom 2, which is pushed laterally and rotates until diatom 1 frees itself. The original position of diatom 2 is given by the dotted white outline and its degree of rotation is indicated numerically. Scale bar = 10 µm. Related to electronic supplementary material, movie S8. (c) The graph shows the speeds of diatom 1 and diatom 2 from (b).

Figure 6. Forces exerted by N. sing1 diatoms and mechanosensing. (a) The graph shows the maximum force produced by single diatoms pushing/pulling on the pipette. (b) Images taken from the indicated movies show first contact (contact), an intermediate time point (intermediate), the point of maximum pipette deflection (maximum) and pipette recoil (recoil). The arrows in the merge panel show maximum force values. The pipette is overlayed in opaque magenta. Note that for measurements 1 and 2 the diatom passes underneath the pipette. Scale bar = 10 µm. Related to electronic supplementary material, movies S9, S10 and S11. (c) The schematic shows the experimental set-up for immobilization-triggered reversal. Only diatoms that hit the wall at an angle of more than 20° are included in the dataset. (d) The graph shows the percentage of cells reversing within 400 s (left y-axis). The scatterplot (right axis) shows the duration of time spent immobile prior to reversal. The light grey bar identifies the mean. Standard deviation is indicated.
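Relating the force scales quoted in the Discussion, a back-of-envelope estimate of the motor cooperativity implied by the peak forces (illustrative arithmetic only, assuming purely additive motor forces):

```python
F_PEAK_PN = 800.0        # ~peak single-cell force reported here, pN
F_MYOSIN_PN = 3.5        # mid-range single-myosin force (3-4 pN, see Discussion)

n_motors = F_PEAK_PN / F_MYOSIN_PN
print(f"~{n_motors:.0f} simultaneously engaged motors would be needed")
# -> on the order of a few hundred motors acting along the raphe-associated
#    actin bundles, consistent with high-order cooperativity.
```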
2023-03-29T13:12:32.302Z
2023-04-12T00:00:00.000
{ "year": 2023, "sha1": "ec31eda68af637ed7c6fc37c0ea23e3c97e6bae5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1101/2023.03.27.533254", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "75d4a6364d3c80deb8ef5a6d3e83fef7b71b2400", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
125614971
pes2o/s2orc
v3-fos-license
An alternative method for centrifugal compressor loading factor modelling

The loading factor at design point is calculated by one or other empirical formula in classical design methods. Performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for loading factor performances calculated for inviscid flow. The authors of this publication have studied the test performances of thirteen stages of different types. The equations are proposed with universal empirical coefficients. The calculation error lies in the range of plus to minus 1,5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.

Aim of the work
For flow path design, an instrument is necessary for calculation of centrifugal compressor gas dynamic performance. The authors use the primary design procedure described in [1]. The Universal Modelling Method presented in [2] is applied to calculate performances of a primary design and of possible better candidates. Calculated non-dimensional performances of a stage are presented in Figure 1. Efficiency prediction is possible with new versions of the Universal Modelling PC programs without special skill and experience [2][3][4]. Prediction of the work coefficient performance ψ_i = f(Φ) still requires experience and intuition. The authors' aim is to offer an alternative way of work coefficient performance calculation that is simple and precise. The presented work is based on ideas and results of Y. Galerkin and K. Soldatova, who had studied loading factor performances of impellers with inviscid flows.

Scheme of work coefficient modelling
In accordance with the scheme proposed previously in [5], the head transmitted to the gas by an impeller h_i consists of three parts. The main part of the engine's mechanical head h_T is transferred to the gas by the blades of the impeller. Two additional parts appear due to parasitic losses. The part of the head h_lc is lost in labyrinth seals. The friction on the outer surfaces of the hub and shroud transmits an additional mechanical head h_df, but this head does not increase the gas pressure:

h_i = h_T + h_lc + h_df.

The non-dimensional presentation of this equation:

ψ_i = ψ_T (1 + β_lc + β_df).

For an impeller with small flow coefficient Φ_des ≈ 0,015, the sum β_lc + β_df is about 0,055-0,065. It is not a large part of a head coefficient. Semi-empirical formulae in [1] and CFD calculations [9] are good instruments for modelling of these coefficients. Therefore the main problem is modelling of a loading factor. As a rule there is no velocity tangential component at an impeller inlet in industrial compressors.
Then the impeller blades transfer the head to the gas in accordance with the Euler equation:

h_T = c_u2 u_2. (5)

This head is presented by the non-dimensional loading factor:

ψ_T = h_T / u_2² = c_u2 / u_2. (6)

The modelling of a loading factor performance is facilitated by the fact that it is a linear function (Figure 2).

Figure 2. Linear function of a loading factor for ideal and real impellers.

The linear nature of it is evident for an "ideal" impeller with an infinite number of infinitely thin blades:

ψ_T∞ = 1 − Φ_2 cot β_bl2. (7)

A similar linear equation is valid for real impellers. The existing way of modelling [2][3][4] exploits this fact. Two values of a loading factor are necessary to determine a linear performance. In the method described in [1], these two values are the loading factor at the design flow rate, ψ_T des, and at zero flow rate, ψ_T0. The design flow rate corresponds to a non-incidence inlet of the critical streamline, i.e. zero incidence at the blade inlet. Exit and inlet velocity triangles demonstrate the influence of the blades' load and blockage on the critical streamline direction (Figure 3). The empirical coefficient K_μ > 1 represents the influence of the real viscid character of the flow. Its value for different types of impellers may be between 1,5 and 2,3. There is no satisfactory correlation with impeller configuration. A close analog is necessary. The alternative is to model a loading factor performance by the values of ψ_T0 and the angle β_T shown in Figure 2:

ψ_T = ψ_T0 − Φ_2 cot β_T,

where β_T is the angle of inclination of the linear performance to the ordinate axis.

The aim of this work is to model loading factor performances of impellers in real viscid flow on the basis of several impeller geometry parameters. Test data for model stages and factory test data of several pipeline compressors were reduced. The information on the objects of modelling is presented in Table 1. All stages and compressors were tested at M_u = 0,60 or 0,80. Model stage names mean the following. For example, 0,0604-0,527-0,290 means that the design flow rate coefficient of the stage is Φ_des = 0,0604, the design loading factor is ψ_T des = 0,527, and the hub ratio is D_h/D_2 = 0,290. Names of compressors are given by their manufacturers on their own principles. Data on compressors are taken from [4]. The symbol * means that a 2D impeller has an arc blade mean line. In other cases, mean lines are designed by the Method presented in [1]. In columns 3-9, geometry parameters of the impellers are presented; they are included in the equations presented below for loading factor performance modelling. A sample of data on model stages from the "IDENT" program is presented in Figure 5.

Most 2D impellers designed by the authors have either an arc blade mean line or a mean line optimized by analysis of velocity diagrams. Two mean lines and corresponding velocity diagrams are presented in Figure 7. For the impellers of one of the compressors (Table 1), the empirical coefficient is K_μ = 2.62 (eq. 10). It is an unusually large value. The authors have no explanation for this fact. The modelling error in all other cases is within admissible limits. Figure 10 presents the comparison of performances of one of the model stages. Graphics with individual adjustment of the loading factor performance are presented on the left; on the right, the loading factor is calculated by eqs. (14, 16). Efficiency performance is calculated by the 6th version of the Math model in both cases. Despite the visually noticeable difference of the load performances, the error of the calculated ψ_T des is 1,2%. It means that an impeller designed by this method would have +1,2% of head input. It is acceptable for design practice.
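A minimal sketch of the alternative linear model described above: the loading factor is defined by its zero-flow value and the inclination angle of the performance line. The numerical parameter values are illustrative assumptions, not coefficients from Table 1, and the cot(β_T) slope convention follows the reconstruction given above:

```python
import math

def loading_factor(phi2, psi_t0, beta_t_deg):
    """Linear loading-factor performance: psi_T falls linearly with the exit
    flow coefficient phi2. beta_t is the inclination of the line to the
    ordinate (psi) axis, so the slope magnitude is cot(beta_t)."""
    beta_t = math.radians(beta_t_deg)
    return psi_t0 - phi2 / math.tan(beta_t)

# Illustrative parameters (assumed, not taken from the paper):
PSI_T0 = 0.70        # loading factor at zero flow rate
BETA_T_DEG = 45.0    # inclination angle of the linear performance

for phi2 in (0.10, 0.20, 0.30):
    psi = loading_factor(phi2, PSI_T0, BETA_T_DEG)
    print(f"phi2 = {phi2:.2f} -> psi_T = {psi:.3f}")
```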
Conclusion
The authors plan to apply the presented method of loading factor performance calculation in parallel with the existing method in their future projects. In the case of a positive result, the new method will be incorporated into the Math models. This method is simple and does not require a high level of user experience or corrections based on analogs.
2019-04-22T13:07:49.206Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "6be7feaac9e741c7fdaec316ab7adbfd7799db70", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/232/1/012039/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "283857c959aff36d3bf8e4dae55f8e1bf6e5fe82", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
1213994
pes2o/s2orc
v3-fos-license
A pilot study to evaluate the efficacy of adding a structured home visiting intervention to improve outcomes for high-risk families attending the Incredible Years Parent Programme: study protocol for a randomised controlled trial

Background: Antisocial behaviour and adult criminality often have their origins in childhood and are best addressed early in the child's life using evidence-based treatments such as the 'Incredible Years Parent Programme'. However, families with additional risk factors who are at highest risk for poor outcomes do not always make sufficient change while attending such programmes. Additional support to address barriers and improve implementation of positive parenting strategies while these families attend the Incredible Years Programme may improve overall outcomes. The study aims to evaluate the efficacy of adding a structured home visiting intervention (Home Parent Support) to improve outcomes in families most at risk of poor treatment response from the Incredible Years intervention. This study will inform the design of a larger prospective randomised controlled trial.

Methods/design: A pilot single-blind, parallel, superiority, randomised controlled trial. Randomisation will be undertaken using a computer-generated sequence in a 1:1 ratio to the two treatments, arranged in permuted blocks with stratification by age, sex, and ethnicity. One hundred and twenty-six participants enrolled in the Incredible Years Parent Programme who meet the high-risk criteria will be randomly allocated to receive either the Incredible Years Parent Programme and Home Parent Support, or the Incredible Years Parent Programme alone. The Home Parent Support is a 10-session structured home visiting intervention provided by a trained therapist, alongside the usual Incredible Years Parent Programme, to enhance the adoption of key parenting skills. The primary outcome is the change in child behaviour from baseline to post-intervention in the parent-reported Eyberg Child Behavior Inventory Problem Scale.

Discussion: This is the first formal evaluation of adding Home Parent Support alongside the Incredible Years Parent Programme for families with risk factors who typically have poorer treatment outcomes. We anticipate that the intervention will help vulnerable families stay engaged, strengthen the adoption of effective parenting strategies, and improve outcomes for both the children and families.

Trial registration: Australian New Zealand Clinical Trials Registry ACTRN12612000878875.

Background
Child conduct problems are prevalent, affecting an estimated 5 to 10% of children in New Zealand [1,2]. They negatively impact on parental wellbeing and result in increased demands on health, education, and social services [1,3]. Longitudinal studies have established that conduct problems in childhood are precursors to a range of adverse outcomes in adulthood [4,5]. Without effective intervention, these problems have the potential to lead to long-term problems including substance abuse, mental health difficulties, violent behaviour, and poor physical health [6]. Conduct problems, aggressive behaviour, and poor emotional regulation in young children are important predictors of later antisocial and criminal behaviour in some adolescents, and the effectiveness of interventions diminishes with age [1,2,4,6,7].
Therefore, it is prudent to identify those young people at risk and provide an evidence-based intervention early in the life of the child, before problematic behaviours have become entrenched and parent-child relationships have broken down. Intervening early in the life of the child has proven long-term benefits for children with challenging behaviour [5,8], and better outcomes for the family and the community than treatment in the adolescent years [8,9]. Heckman [10] has identified the wider benefits of early childhood intervention, including improved learning in schools, as well as reduced crime, teenage pregnancy, and welfare dependency. Early childhood intervention is also cost-effective [3,11]. For example, Scott and colleagues [3] estimate the cost of public services used by an individual with conduct disorder to be ten times greater than for an individual with no problems. Church [1] found similar costs in New Zealand, with successful intervention for a 5-year-old costing approximately $5,000 compared to $60,000 for an adolescent. Furthermore, Church found the success rate is 70% greater for younger children. Although genetic factors have a role in the development of challenging behaviour, it is the environmental factors that are more readily addressed. Behavioural and social learning theories posit that children learn behaviour within the context of their environment. Children raised in a positive and nurturing environment are more likely to have pro-social friendship skills, an ability to regulate their emotional responses, and to achieve appropriate educational standards. On the other hand, children raised in environments with limited resources, by parents who have health problems and who use punitive parenting practices, are less likely to achieve good outcomes [2]. Intervening with an effective parenting programme has been shown to address many of the environmental factors contributing to the development of antisocial and aggressive behaviours in children, and to improve their long-term outcomes [5,8]. There is a small but growing body of literature demonstrating the effectiveness of Incredible Years Parent (IYP) programmes in New Zealand, for example with Maori participants [9,[20][21][22], single parents of children with Attention Deficit Hyperactivity Disorder [23], and within the Ministry of Education [22,24,25]. However, despite these good results, a third of children with behavioural problems whose parents attend IYP still experience difficulties and are at risk of developing chronic problems in adolescence [17,26]. In a trial with children initially within the clinical range, Webster-Stratton et al. [5] found that post-treatment child behaviour scores remaining within the clinical range were a predictor of adolescent engagement in delinquent acts; achieving post-treatment scores within the normal range was more likely to result in better long-term outcomes. Those who do poorly despite treatment often have risk factors that are identifiable prior to intervention. While the literature is varied on which specific factors contribute to poor treatment outcomes, the factors generally cluster into four categories [7,15,[27][28][29][30]: i. Child variables (severity of child behaviour, referral source, sex). ii. Parent variables (maternal psychopathology/depression, coercive/punitive parenting style, maternal age, negative life events/stressors). iii. Family demographics (single parent, family size, low income, education/occupation, maternal age, minority status). iv. 
Participation variables (treatment attendance, perceived barriers to treatment participation). Other factors for poor response to treatment identified in the literature [12,15,28,31], and those observed from personal experience of delivering the programme (Unpublished), include lack of partner support, resistance to change in the home, parents' unrealistic and developmentally inappropriate expectations of children, adverse child-rearing practices, and negative cognitions and perceptions of child behaviour. Reyno and McGrath [29] concluded from their meta-analysis that providing additional support to parents attending parent training may improve outcomes for high-risk families. In recognition of the need to provide additional support for families attending the IYP programme, the New Zealand Ministries of Health and Education established the Incredible Years Specialist Service (IYSS) as a collaborative venture. This service provides a targeted intervention, Home Parent Support (HPS), for families of young children who are identified as being at greater risk of non-response to treatment for conduct problem behaviours. The specification for IYSS is to provide a comprehensive inter-agency intervention to address conduct/antisocial behaviour and associated mental health problems in children. Key features include: strengthening and supporting a coordinated interagency response; bringing mental health expertise and capacity to a multi-agency team; strengthening interventions for Maori families; a focus on children aged 3-7 years; and prioritising those with more severe conduct problems. The joint commitment from the Health and Education sectors to work collaboratively should improve access to parent information, child health, and educational services for vulnerable families at an optimum time in the life of the child. It is expected that this support will improve engagement in IYP and improve overall outcomes. However, we do not have robust evidence that HPS does improve outcomes compared with IYP alone. The aim of this study is to evaluate the effectiveness of adding a structured home intervention while the parent/carer attends IYP. Like group parent programmes, most home visiting programmes are based on the premise that parents play an important role in shaping the outcomes of their children, and that intervention in early childhood ensures input in a sensitive developmental period [27,32]. There is also an increasing awareness of the importance of the early caregiving environment and the impact this has on early neurological development [33]. Over the last 20 years, there has been an increase in home visiting programmes in an attempt to address child maltreatment, reduce infant mortality, and improve child wellbeing [34]. Home visiting allows interventions to be tailored to the specific needs of the family and provides therapists with the opportunity to assess and address other risk factors such as substance abuse, poor parental mental health, and violence in the home [35]. In spite of the growing popularity of home visiting programmes, reviews report mixed results [33,35]. There are only a few programmes that have demonstrated long-term benefits for parents and children [36][37][38]. The diverse results of home visiting programmes give some indication of how difficult it is to change parenting practices once dysfunctional patterns have become the established norm for the family [34]. 
Gomby [35] suggests that combining an effective home visiting programme with other education programmes may improve outcomes. Characteristics that contribute to an effective home visiting programme include internal consistency (adherence to the curriculum), a collaborative approach when working with parents, well trained and well supervised therapists, a close relationship with other services, and low caseloads [33,35]. These factors are key components of the HPS intervention developed by DL to support families to maximise the benefits of IYP. HPS provides a structured intervention for parents in their home in conjunction with attending IYP. Parents have the benefit of the IYP group curriculum on child development and parenting practices, experiential group learning, and socialisation. HPS is provided by therapists who are trained mental health workers and accredited IYP facilitators. They are familiar with the detail of the course content and key principles, and work collaboratively with the parents in their home. They support parents to implement the key parenting principles, practise new skills, and tailor these strategies to their own circumstances. Therapists focus on building the parent-child relationship and on addressing negative cognitions and coercive patterns of interaction. They also assess barriers to change and support parents to access other appropriate health and education services such as adult mental health services, income support, relationship services, and special education services. Therapists follow a structured guide to ensure adherence to the curriculum and attend weekly supervision to maintain fidelity. In an open trial of HPS, participants reported high levels of satisfaction, and the retention rate was high at 92% (Unpublished). We hypothesise that the addition of a structured home intervention (HPS) will result in better outcomes for families with additional risk factors for poor treatment response, and we expect to increase the percentage of children with post-treatment scores in the non-clinical range. The current study has been designed to evaluate this intervention and, if it is found to be effective, there is the potential for national implementation. The successful widespread implementation of any intervention requires a degree of pragmatism. To identify families at greater risk of non-response, it would be unrealistic to try to screen for all the factors outlined above, and a number of them cluster together. Three domains, drawn from the risk-factor categories above, have therefore been used in this study to identify families at greater risk of non-response; these would be easy to implement in a real-world community setting. Methods/design Design This study is a pilot single-blind, parallel, superiority, randomised controlled trial. Eligible participants will be randomly allocated to receive IYP plus HPS or to the control group of IYP treatment alone. Randomisation will be undertaken using a computer-generated sequence in a 1:1 ratio to the two treatments arranged in permuted blocks. Stratification will be by age, sex, and ethnicity. Data from all participants will be included in the data analysis, irrespective of whether follow-up data are available, using an intention-to-treat design. Ethical approval Approval has been received from the New Zealand Northern B Health and Disability Ethics Committee (NTY/12/06/050). Setting This study is being carried out in a real-world setting within the Bay of Plenty District Health Board, Tauranga, New Zealand. 
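The stratified permuted-block allocation outlined in the Design section above can be sketched in a few lines of Python. This is a minimal illustration only: the block size of four and the way the strata are enumerated are assumptions for the example, not details taken from the protocol.

```python
import random

ARMS = ("IYP+HPS", "IYP")

def permuted_block_sequence(n_blocks, block_size=4):
    """Generate a 1:1 allocation sequence arranged in permuted blocks."""
    assert block_size % len(ARMS) == 0
    sequence = []
    for _ in range(n_blocks):
        block = list(ARMS) * (block_size // len(ARMS))
        random.shuffle(block)  # permute the treatments within each block
        sequence.extend(block)
    return sequence

# One independent sequence per stratum (age group x sex x ethnicity),
# so allocation stays balanced within every stratum.
strata = [(age, sex, eth)
          for age in ("<5", ">=5")
          for sex in ("female", "male")
          for eth in ("Maori", "Non-Maori")]
allocation_lists = {s: permuted_block_sequence(n_blocks=5) for s in strata}
```

In practice such a list would be produced once by the independent statistician, before enrolment, and kept concealed from the assessors.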
Participants Participants are parents/caregivers of children with conduct problems recruited from IYP groups delivered in the community by the Child and Adolescent Mental Health Services (CAMHS), the Ministry of Education, and non-government organisations in Tauranga. Parents attending IYP are either self-referred or referred by health or education services. Criteria for parents to attend IYP are: they speak English, have the child in their custody or have regular access arrangements, and their child does not have an intellectual disability. All families attending IYP are screened for eligibility for IYSS, and those who meet the criteria will be invited to take part in the trial until 126 participants have been recruited. Participants will be randomly allocated to IYP plus HPS or to IYP alone. Where there is more than one child in a family who meets the criteria for IYSS, the parent will identify the child they find most challenging as the focus child. Where more than one parent/carer is attending IYP and their child meets the criteria for IYSS, one parent/carer will be identified as the trial participant. Inclusion criteria Participants will be eligible for inclusion in the trial if: Exclusion criteria None. Withdrawal criteria Participants can withdraw from the intervention at any time but will remain in the trial. If participants require on-going support, they will be assisted to engage with an appropriate community agency. Intervention HPS Participants will receive 10 in-home sessions from a separate therapist accredited in IYP whilst they attend the 14- to 16-week Basic IYP. The intervention will include a comprehensive child assessment, including developmental, medical, and social history, pre-school or school reports, involvement of other agencies, family structure, and parental mental health. Participants will be supported to identify specific goals they wish to achieve and record them. The therapist will visit them in their homes to provide support to personalise and implement the IYP strategies and to address any barriers to implementation of these strategies that they or the therapist identify. The therapist will follow a structured intervention guide to ensure therapist fidelity. Treatment includes follow-up contact at one month post-intervention to assess stability of change and provide further assistance if required. The therapists delivering HPS will meet weekly to review all participants' progress and identify any additional support required for families. Therapists will have fortnightly contact with IYP group leaders to review attendance and participant progress. Participants will be reviewed monthly by a multidisciplinary team that includes a Child and Adolescent Psychiatrist, Paediatrician, Ministry of Education IYP co-ordinator, and the HPS therapists. Specialist psychiatric and/or paediatric assessment is available if required. This multidisciplinary team will also review any adverse events and assess the likelihood that these may be related to the intervention. Therapist guide The therapist guide specifies the important components of the home intervention. It identifies key elements for each session to ensure the intervention is focussed on the content and learning from IYP and that the learning occurs in a supportive, collaborative manner to encourage and motivate participants. 
The key elements of HPS include reviewing IYP principles, tailoring strategies, practising and rehearsing new skills, therapist modelling of praise and affirmation, identifying and reviewing participant goals, and addressing barriers to the implementation of new skills. Intervention fidelity HPS therapists will follow the structured guide in their intervention and keep a record of activities in each session to ensure that key activities are included. This record will be reviewed in weekly supervision. Control Participants will be in the same IYP groups as those in the intervention arm. This is to prevent real or perceived differences between the groups. All IYP groups will be delivered by trained facilitators in CAMHS, the Ministry of Education, or non-government organisations, and facilitators will receive 2 hours of supervision fortnightly. Those in the control group will receive the usual support from IYP group leaders and will still have access to all services that would normally be available to them. Primary outcome The primary outcome is the change in child behaviour from baseline to post-intervention on the parent-reported ECBI Problem scale. Secondary outcomes The percentage of parent scores on the ECBI that are in the normal range at post-treatment. The percentage of parent scores on the child Social Competence Scale (SCS) that are in the normal range at post-treatment. Changes from pre- to post-intervention in child behaviour, parenting practices, parent relationships, and parental wellbeing measured on the Family Questionnaire (FQ) scales. The percentage with at least 80% engagement in IYP measured on the attendance register. Levels of parent satisfaction with IYP measured using the Parent Satisfaction Questionnaire. Maintenance of improvement at 6-month follow-up measured on the FQ, ECBI, and SCS. Parent reports of competence with implementing IYP strategies in the home as reported in the Follow-up Questionnaire at 6 months. Screening measurement The IYP group leaders will carry out screening using the ECBI and the SCS - Parent Version. These measures have been used in similar studies [13,39]. The ECBI is a parent-rated inventory with two scales covering 36 behaviours. The problem scale measures whether parents find each behaviour problematic; total problem scores over 11 are in the clinical range. The intensity scale measures the frequency of the behaviours, rated 1 to 7; intensity scores over 127 are in the clinical range [40]. The SCS - Parent Version was developed by the Conduct Problems Prevention Research Group [41,42]. It consists of 12 items completed by the parent on their child's pro-social behaviours, communication skills, and self-control on a 5-point Likert scale. A total score less than 17 is indicative of poor social skills and is considered a clinically important cut-point for meeting IYSS criteria. Baseline Once eligibility is confirmed, a research assistant will collect pre-intervention baseline data on demographics and the FQ. The FQ was developed by the Incredible Years Pilot Study Working Group for use in a joint-agency national evaluation of the Incredible Years Pilot Study [43]. The questionnaire is a comprehensive assessment of child behaviour, parenting practices, partner relationships, parental depression, life events, cultural participation, and parent satisfaction. The research assistant will read all questions out to the participant and score responses on the questionnaire. 
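The screening cut-points given above translate directly into a simple eligibility check. The sketch below is illustrative only: the function and field names are hypothetical, and how the three flags combine into the overall IYSS criteria is not specified here.

```python
ECBI_PROBLEM_CUTOFF = 11     # total problem scores over 11 are clinical
ECBI_INTENSITY_CUTOFF = 127  # intensity scores over 127 are clinical
SCS_CUTOFF = 17              # totals below 17 indicate poor social skills

def screening_flags(ecbi_problem: int, ecbi_intensity: int, scs_total: int) -> dict:
    """Apply the published cut-points to one child's screening scores."""
    return {
        "ecbi_problem_clinical": ecbi_problem > ECBI_PROBLEM_CUTOFF,
        "ecbi_intensity_clinical": ecbi_intensity > ECBI_INTENSITY_CUTOFF,
        "scs_poor_social_skills": scs_total < SCS_CUTOFF,
    }
```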
Post-treatment The IYP group leaders will collect post-treatment measurements using the ECBI, SCS, and the standard Incredible Years Parent Satisfaction Questionnaire. This is a 24-question assessment of parent views on the programme content and teaching methods. Parents rate their satisfaction on a 1- to 7-point Likert scale [44]. The research assistant will repeat the relevant sections of the FQ within two weeks of the final IYP session. Follow-up At the 6-month follow-up, the research assistant will collect the ECBI, SCS, and FQ and a quantitative/qualitative follow-up questionnaire. This questionnaire includes Likert-type scales and opportunities for written feedback to assess levels of engagement, helpful aspects of the trial, level of competency with implementing IYP strategies, and changes in relationships and behaviour noticed by parents/carers (Table 1). Sample size Previous research indicated that 80% of participants receiving HPS completed the IYP group [43,45]. Therefore, a total sample of 126 participants will be collected in order to achieve 50 participants in each treatment arm at post-treatment. This trial represents the first formal assessment of the HPS intervention and is being undertaken as a pilot study to assess the feasibility of a full randomised controlled trial in the wider clinical setting and to collect data to inform the power calculations for such a study. Thus, there is no formal power calculation for the proposed sample size of 126, but this represents a substantial and adequate number of participants representative of those likely to benefit from the intervention. Standard power calculations show that 50 in each arm will give 80% power to detect an effect size of 0.57 between the control and experimental groups (i.e., Cohen's d = 0.57). Randomisation and sequence generation On completion of baseline data collection, participants will be allocated an identification number and randomised to IYP plus HPS or to IYP alone. An independent statistician will undertake the randomisation using a computer-generated randomisation sequence produced prior to the enrolment of any participants. Randomisation will be stratified on each IYP group so that each intake or source group will have approximately equal numbers allocated to each treatment. The randomisation sequence will allocate in a 1:1 ratio to the two treatments arranged in permuted blocks and will be stratified on age (under 5 years and over 5 years), sex, and ethnicity (Maori and Non-Maori). After a participant has met all inclusion criteria and signed informed consent, they will be assigned the next available randomisation allocation. Allocation concealment The randomisation list will not be available to any researchers directly involved in the assessment or screening of participants. The participant will only be allocated once all inclusion criteria are met. Following randomisation, participant allocation will be returned to the primary investigator, who will inform participants of their allocation and arrange for HPS to begin in the treatment group.
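The sample-size reasoning above is easy to verify with a standard two-sample power calculation; a minimal sketch using statsmodels, assuming a two-sided test at α = 0.05:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-arm sample size that gives 80% power for d = 0.57.
n_per_arm = TTestIndPower().solve_power(effect_size=0.57, power=0.80, alpha=0.05)
print(round(n_per_arm))  # about 49, consistent with the protocol's 50 per arm
```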
Blinding Due to the nature of the study, it is not possible to have a completely blinded design. Participants will know which intervention they are receiving. IYP group leaders will also know who is in the treatment arm, as their contribution is a part of the HPS intervention. The primary investigator leads the IYSS team and conducts the multidisciplinary team review and will therefore be aware of those participants in the treatment arm. However, the research assistant undertaking the assessments will be blind and remain blind to treatment allocation throughout the study. Participants will be asked not to reveal the intervention they are receiving to the research assistant. All participants will be given an identification number to ensure the researcher and all those involved in summarising and inputting the data are unaware of the treatment allocation. Statistical methods Standard descriptive statistics will be used to report demographics, baseline status for outcome measures, and presentation features for the sample as a whole and by randomly allocated group. These will include means, medians, ranges, and standard deviations for metric measures, and frequencies and percentages for categorical measures. The primary outcome measure, the change in the parent scores on the ECBI total problem score from pre- to post-intervention, will be calculated for each individual and compared between randomised groups using ANOVA with randomised group and strata as fixed factors. Additional sensitivity analyses will be undertaken using an ANCOVA model including the baseline score as a covariate. The metric secondary outcome measures that assess change from pre- to post-intervention in the SCS, and in child behaviour, parenting practices, parent relationships, and parental wellbeing as measured by the FQ, will also be compared between randomised groups using ANCOVA models with baseline levels as covariates and randomised group and strata as fixed factors. The categorical outcomes at post-treatment, including the percentage of parent scores on the ECBI and the SCS that are in the normal range at post-treatment and the percentage of participants with at least 80% engagement in IYP, will be compared between randomised groups using χ² tests. As outlined above, the stratification factors will be included as factors in the ANCOVA models analysing the primary and secondary continuous outcomes and, depending on sample size, may also be included in a Mantel-Haenszel χ² analysis of the post-treatment categorical outcomes. The maintenance of post-treatment results for the primary and secondary outcomes at six months post-intervention will be compared between randomised groups using ANOVA. This analysis will explore change in the metric measures from immediately post-treatment to six months between the two randomised groups. Additional exploratory analyses, including correlation coefficients and further ANCOVA and logistic regression models, may be used to identify the characteristics of subsets of participants who respond particularly well or poorly to the addition of HPS to IYP. A two-tailed α = 0.05 will be used for all statistical testing of the above analyses, and results will be summarised using 95% confidence intervals of the differences between randomised groups. Should any of the above metric outcome measures not meet the requisite assumptions for parametric analyses after transformation, non-parametric tests, including the Mann-Whitney U-test, will be used. All participants' data will be included in the intention-to-treat analysis. Considerable efforts will be made to obtain post-treatment and follow-up data from all randomised participants even if they do not complete the treatments. Missing data will in the first instance be managed with a 'last observation carried forward' approach, with additional sensitivity analyses undertaken using multiple imputation methods. 
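As an illustration of the ANCOVA models described above, here is a minimal statsmodels sketch; the data file and column names are hypothetical placeholders, not names used by the trial.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant; hypothetical columns: ecbi_change, ecbi_baseline,
# group (IYP+HPS vs IYP alone), and the three stratification factors.
df = pd.read_csv("trial_outcomes.csv")

model = smf.ols(
    "ecbi_change ~ C(group) + ecbi_baseline"
    " + C(age_stratum) + C(sex) + C(ethnicity)",
    data=df,
).fit()
print(model.summary())  # the C(group) coefficient estimates the treatment effect
```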
The extent of compliance, including information on those who do not complete either HPS or IYP, will be captured and summarised. A per-protocol analysis, including only those who complete the treatments without protocol violations and have all relevant assessments at each time point, will also be undertaken to identify whether compliance factors affect outcomes. Qualitative analysis A small number of qualitative questions are included in the questionnaires to assess participants' unique perspectives and experiences of the intervention. At baseline, open questions include reasons for referral to IYP and ask parents about their expectations of the intervention. Post-treatment questions explore the parents' experience of the intervention they received (HPS or IYP alone) and what, if any, benefits they have gained. Follow-up questions focus on changes in child behaviour and parent-child relationships. Questions also focus on the parents' experience of being part of the trial and any suggested improvements. Responses will be coded using the general inductive approach described by Thomas [46]. All responses will be read systematically to identify meaningful units. These will be coded and then categorised into emerging themes. Any links or relationships between the themes will be established. The frequent, dominant, or significant themes will be identified and will inform the research findings. Participants' responses to open-ended questions are expected to give insight into the impact of child behaviour on the family, their expectations and hopes for change, and their experience of the intervention, including unplanned or unanticipated effects. An independent coder will code 30% of transcripts to ensure reliability of coding. Any discrepancy in themes will be resolved by agreement between the two coders (Additional file 1). Discussion There is considerable evidence for the efficacy of IYP for most families who are experiencing challenges with child behaviour. Research shows that up to two-thirds of families who complete IYP have child behaviour rating scores in the normal range at post-treatment, and this is maintained at follow-up [5,13]. For those families whose children do not make sufficient change during treatment, the risk of later poor outcomes is raised substantially. These families may respond to extra in-home support to encourage engagement in IYP, address barriers to making change, and support the implementation of effective parenting strategies. We anticipate that providing tailored in-home coaching to vulnerable families while they are attending IYP will result in more participants having post-treatment child scores in the normal range. A structured therapist guide has been developed to ensure the intervention is delivered with fidelity. It is costly to provide intervention and treatment for conduct disorder, and the cost increases with age and severity. If the trajectory of just a few young children can be changed early in life, then it is more likely that the improvement will be maintained over time, providing savings to health, education, and social justice services. This is the first formal evaluation of adding a structured home intervention (HPS) to the IYP group-based programme and is a feasibility study to inform the design and implementation of a larger definitive randomised controlled trial. 
It is hypothesised that HPS will improve outcomes in families with risk factors for non-response to treatment, encourage them to stay engaged in IYP, strengthen their adoption of effective parenting strategies, and improve outcomes for both the children and the families. If a significant effect size is found, this would justify expansion and development of HPS. However, if the effect size is small, it could be concluded that HPS does not have additional benefit over IYP alone for the sample identified in this trial. These findings could inform National Ministries on policy and resource allocation. Trial status Recruitment commenced in March 2013. The final participants are expected to complete their 6-month follow-up assessment in December 2014. Additional file Additional file 1: Figure S1. Participant flow.
2016-05-16T04:40:04.375Z
2014-02-25T00:00:00.000
{ "year": 2014, "sha1": "9931e62e67ed0320f621ee99e4c6d462854f2acd", "oa_license": "CCBY", "oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/1745-6215-15-66", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5ca553011c08353507fae5c908b359f78438e31", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
246584096
pes2o/s2orc
v3-fos-license
Proportion And Factors Associated With Intra-Procedural Pain Among Women Undergoing Manual Vacuum Aspiration For Incomplete Abortion At Mbarara Regional Referral Hospital, Uganda Background: Intra-procedural pain (IPP) is common among women undergoing Manual Vacuum Aspiration (MVA) for incomplete abortion. Globally, the proportion varies between 60% and 90%, while in Sub-Saharan Africa, including Uganda, the proportion varies between 80% and 98%. IPP management during MVA includes paracervical block (using 1% lidocaine) or an opioid (using 100 mg of intravenous pethidine). Objectives: This study determined the proportion and factors associated with IPP among women undergoing MVA for incomplete abortion at MRRH. Methods: We conducted a cross-sectional study among 207 women who underwent MVA for incomplete abortion from 17 December 2020 to 28 May 2021. An interviewer-administered structured questionnaire was used, and pain assessment was done using the VAS, considering a pain score of 6 or more as IPP. Participant characteristics were summarized, and the proportion of women with IPP was calculated. We performed multivariable logistic regression to determine the factors associated with IPP. Results: We consecutively enrolled 207 women with a mean age of 25.8 ± 5.8 years. The proportion of women with IPP undergoing MVA at MRRH was 82.6% (95% CI: 76.8-87.2). The factors significantly associated with IPP were age and cervical dilatation. The odds of IPP increased with decreasing age: compared to older women (aged >30 years), the odds were higher for teenagers (age <20 years; OR=8.00, 95% CI 1.85-34.61, p=0.005), women aged 20-24 years (OR=3.45, 95% CI 1.47-8.20, p=0.004), and those aged 25-30 years (OR=2.84, 95% CI 1.20-6.74, p=0.018). Women with a cervical dilatation of 1-2 cm had increased odds of IPP (OR=2.27, 95% CI 1.11-4.62, p=0.024) compared to those with a cervical dilatation of 3-4 cm. Conclusion: The majority of women undergoing MVA at MRRH experienced IPP. Younger women and those with a cervical dilatation of 1-2 cm are more likely to experience IPP. We recommend improvement of pain control among women undergoing MVA. Background Intra-procedural pain (IPP) is common among women undergoing Manual Vacuum Aspiration (MVA) for incomplete abortion (1). It refers to "an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage" (2), or to a mutually recognizable experience that reflects a person's apprehension of threat to their bodily or existential integrity (3). Globally, several studies done among women undergoing MVA found that the proportion of intra-procedural pain varies between 60% and 90% (4). In Africa, the proportion of intra-procedural pain varies between 70% and 92% (1,5), while in Sub-Saharan Africa, including Uganda, the proportion is about 80% to 98% (6). IPP among women undergoing MVA is considered acute because it results from direct surgical trauma producing an afferent neuronal barrage (7). IPP may be physiological and/or psychological, and may be unbearable (8,9). MVA is the main option in the management of first-trimester incomplete abortion. Other management modalities include medical use of misoprostol, curettage, and expectant management (waiting for spontaneous expulsion of remaining products of conception) (10). MVA can be performed routinely (11), thus avoiding general anaesthesia and the need for access to theatre (12). 
MVA is the cheapest, fastest, and safest surgical method of uterine evacuation for incomplete abortion in the first trimester (13,14). MVA has also been found to be effective in terms of completeness of uterine evacuation, shorter procedure time, fewer complications, and shorter duration of hospital stay (15,16). The procedure was designed to be used in low-resource settings, since it is associated with lower costs compared to Electric Vacuum Aspiration (17). Nevertheless, some women may get incomplete uterine evacuation (4), and in some instances there can be uterine perforation due to difficulty in performing the procedure because of IPP (18). The methods of intra-procedural pain (IPP) assessment include use of the Visual Analogue Scale (VAS) and use of the Numeric Rating Scale (NRS) (19). The VAS is the preferred method of IPP assessment because IPP is best assessed by the person who felt or experienced the pain (20,21). Pain scores of 0-5 are considered bearable pain (22), and pain scores of 6 or more are considered unbearable pain requiring analgesics (23,24). The purpose of IPP control is to ensure that women do not suffer anxiety and discomfort, and that there is no risk to their health. Adequate IPP management generally requires medication for the physiological pain and counselling for the psychological pain (8). Methods of IPP management include verbal analgesia, local analgesia, and general anaesthesia. The World Health Organization (WHO) recommends local analgesia by paracervical block or sedation with opioids (19). At MRRH, IPP management includes paracervical block, opioids, and intramuscular diclofenac (unpublished). The factors associated with IPP during MVA include previous history of abortion, partner involvement, prior uterine evacuation (25), and analgesia used (9). Therefore, this study aimed to determine the proportion and factors associated with IPP among women with incomplete abortion undergoing MVA at MRRH, Uganda. Study design, setting and population The study was a cross-sectional study conducted from 17 December 2020 to 28 May 2021 at the gynaecology ward of Mbarara Regional Referral Hospital (MRRH). MRRH is found in Mbarara District, 260 km southwest of Kampala, the capital city of Uganda. It is a public hospital, fully funded by the Government of Uganda through the Ministry of Health (MoH). It is the referral hospital for Southwestern Uganda, serving 12 districts. The hospital serves a population of more than 2.5 million people, including those from the neighbouring countries of Rwanda, the Democratic Republic of Congo, and Northern Tanzania. The study population was women who underwent MVA for incomplete abortion at the gynaecology ward of MRRH; we included all women who underwent MVA for incomplete abortion at 12 weeks of amenorrhea or less, and excluded those who were unconscious at the time of data collection. Sample Size Calculation Sample size was estimated using the Kish Leslie formula for cross-sectional surveys (Kish, 1965), n = Z²pq/d², where n is the sample size, Z is the z-score for a 95% CI (1.96), p is the estimated proportion of women with intra-procedural pain during Manual Vacuum Aspiration, q is 1-p, and d is the desired level of precision (margin of error), set at ±5%. P = 85.9% is the proportion of women with intra-procedural pain undergoing Manual Vacuum Aspiration at a teaching university hospital in Kano, Nigeria (1). 
Substituting into the Kish Leslie formula, n = (1.96)²(0.859)(0.141)/(0.05)² ≈ 186; adding 10% to account for non-response, n = 207. Sampling method Consecutive sampling was used to recruit eligible participants until the desired sample size was achieved. Study procedure On each day of the study period, a member of the research team was stationed at the admission unit of the gynaecology ward. Whenever a diagnosis of abortion was made by the clinical care team, we recorded that patient on our screening log. For women with an incomplete abortion, we tracked which treatment modality was offered, including MVA, curettage, or medical management with misoprostol. A member of the research team was present at the time of MVA and recorded the time when the procedure ended on the screening log. This was taken as time zero, and two hours later the patient was approached for consent. We also kept track of all the women admitted with threatened or inevitable abortion, and in case any of them ended with an incomplete abortion and got an MVA, we approached them for consent 2 hours after the procedure as well. After obtaining informed written consent, each participant was administered the interviewer-led questionnaire to obtain information on sociodemographic, medical, and gynaecological factors, and the information was entered directly into REDCap® software. The participant was then given a coloured picture of the VAS for scoring the pain that she experienced during the MVA. It was explained to her that zero (0) meant no pain while ten (10) meant the worst kind of pain, and she was requested to point to or circle any number from 0 to 10 to represent the pain experienced. A pain score of 6 or more was considered intra-procedural pain in this study (23,24). Data management and analysis Data were coded and entered into a REDCap® database (26) and exported to STATA® version 15 for cleaning and analysis. Data cleaning was done by checking for duplication, missing values, and outliers, and errors were corrected by cross-checking with the original questionnaires. We computed descriptive statistics and displayed baseline characteristics in Tables 1 and 2. We described categorical variables using simple frequencies, proportions, and percentages, while continuous variables were summarized using means and standard deviations. The proportion of women who experienced intra-procedural pain was determined by dividing the number of women with intra-procedural pain by the total number of women (n=207) who underwent MVA; this proportion was multiplied by 100 and reported as a percentage. Factors associated with intra-procedural pain were determined by assessing the sociodemographic, gynaecological, and medical factors at the bivariable analysis level using logistic regression. The Crude Odds Ratios (cOR) were obtained and reported with their 95% Confidence Intervals (CI), at an alpha level of statistical significance of p < 0.05. Variables with p-values less than 0.2 at bivariate analysis were then included in the multivariate logistic regression model, together with biologically plausible factors (gravidity, gestational age, analgesia used), to control for confounding and interaction between the variables. The calculated Adjusted Odds Ratios (aOR) with their 95% CIs were recorded. Variables with p < 0.05 were reported as factors independently associated with intra-procedural pain.
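The sample-size arithmetic above can be reproduced in a few lines; a minimal sketch (note that the 10% inflation gives about 205, slightly below the 207 actually enrolled):

```python
from math import ceil

z, p, d = 1.96, 0.859, 0.05
q = 1 - p
n = z**2 * p * q / d**2  # Kish Leslie formula: n = Z^2 * p * q / d^2
print(n)                 # about 186.1, matching the reported 186
print(ceil(n * 1.10))    # about 205 after adding 10% for non-response
```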
Quality assurance The research team was trained on strict adherence to the COVID-19 risk management plan. This was meant to reduce the potential risk of exposure to COVID-19 for our potential participants, investigators, and other health care workers. The research team was also trained in grief assessment, preliminary counselling, and support. The team assessed and detected early signs of psychological stress. If psychological distress occurred twice with failed attempts at counselling, the participant was to be excluded from the study and linked to a mental health clinic. Access to data was limited to those directly involved in the study. Confidentiality of the information collected was observed by using numbers rather than names, so that participants cannot be traced back from their study variables. Ethical considerations The proposal was presented to and approved by the Department of Obstetrics and Gynaecology, Mbarara University of Science and Technology, and clearance was obtained to carry out this research. Scientific and ethical approval were obtained from the Faculty Research Committee (FRC) and the Research Ethics Committee (MUREC-08/11-20), Mbarara University of Science and Technology, and from the Uganda National Council for Science and Technology (UNCST, Ref. No. HS1462ES). Administrative clearance was sought from the office of the Hospital Director, Mbarara Regional Referral Hospital, through the Head of the Department of Obstetrics and Gynaecology. Informed consent was obtained from all respondents, and participation was free and voluntary. Participants were free to withdraw from the study with no penalty. Privacy was observed by interviewing the study participants in a private and comfortable room. Results A total of 207 women who underwent Manual Vacuum Aspiration were recruited. The average age of study participants was 25.8 ± 5.8 years, with the majority aged 20-24 years (38.7%, n=80), married (84.5%, n=175), and with primary-level education (35.8%, n=74). The majority had caretaker support (83.6%, n=173) and no history of alcohol intake (95.7%, n=198) (Table 1). Proportion of women with intra-procedural pain undergoing MVA at the gynaecology ward, Mbarara Regional Referral Hospital. Factors associated with intra-procedural pain among women undergoing MVA at the gynaecology ward, Mbarara Regional Referral Hospital. At bivariate analysis, the sociodemographic factors independently associated with intra-procedural pain were age and marital status (OR=2.83, 95% CI 1.21-6.65, p=0.017), while the medical and gynaecological factor independently associated with intra-procedural pain was cervical dilatation (cOR=2.25, 95% CI 1.28-3.98, p=0.005). Factors considered biologically plausible included gravidity, gestational age, and analgesia used (Tables 2 and 3). At multivariable analysis, the factors significantly associated with intra-procedural pain were age and cervical dilatation. The odds of intra-procedural pain decreased with increasing age of the women. Discussion This study aimed to determine the proportion and the factors associated with IPP among women with incomplete abortion undergoing MVA at Mbarara Regional Referral Hospital in Uganda. The similarity of our estimate to those of studies done in Sub-Saharan Africa, Nigeria, and the United Kingdom could be because those studies were conducted in similar hospital settings and their participants had characteristics similar to ours. In those studies, MVA was performed under similar analgesia, and the Visual Analogue Scale was used for pain assessment, as in our study. 
Other studies done in Panama (28), the United Kingdom (4), and India (29) found lower proportions. The study done among women who attended the gynaecology department of the Complejo Hospitalario "Arnulfo Arias Madrid", Caja de Seguro Social, Panama determined a proportion of intra-procedural pain of 70.3% during MVA (28). This lower proportion was because, in their study, a prostaglandin was administered prior to the MVA procedure to cause cervical dilatation among the participants, which reduces the degree of cervical manipulation and hence the pain felt. In addition, pain scoring was done using the Wong pain scale, which could have underestimated the pain level since it relies on facial expression, which is highly subjective and difficult to compare with the actual pain experienced. A study done in the United Kingdom estimated an even lower proportion of 25% (4) than found in our study. This could be because, in their study, misoprostol was administered to enhance cervical dilatation before the procedure. In addition, they instilled 5 mL of 4% lidocaine through the cervix, unlike our study in which we used 1% lidocaine, and all MVA procedures were done by specialists. A higher dosage of lidocaine could have reduced pain more, and specialists are expected to be more skilful at performing the MVA procedure with reduced cervical manipulation, hence less pain. A study done in India found a very low proportion of 8% (29) compared to the proportion in our study. This is because their study analysed a small sample of only 50 participants who had MVA. They also included only women aged between 18 and 45 years and those who had incomplete uterine evacuation, unlike our study, in which we enrolled all women, including emancipated minors. In our study, the odds of intra-procedural pain decreased with increasing age of the women. Compared to older women (aged >30 years), teenagers (age <20 years) had 8 times higher odds, women aged 20-24 years had about 4 times higher odds, and those aged 25-30 years had about 3 times higher odds of intra-procedural pain. Our finding was similar to findings from studies done in the United States of America (30) and Spain (31), and from a systematic review (32). The study in the United States of America noted that the degree of pain significantly varies with the age of the woman, with younger patients (teenagers) experiencing more pain compared to older patients (aged ≥ 35 years) (30). A systematic review and meta-analysis of the age effect on pain threshold and tolerance, covering 31 studies on pain threshold and 9 studies assessing pain tolerance threshold, found that pain threshold increases with age. This age-related change in pain perception increases the wider the age gap between groups, without a significant difference in tolerance (32). Another study, conducted in the Balearic Islands, Spain, demonstrated that increasing age was associated with an increased pain threshold (31). Age was a significant factor because aging is associated with changes in the structure, function, and chemistry of the nervous system. These changes directly affect pain perception because aging is associated with a decrease in the density of unmyelinated nerve fibers in the peripheral system, resulting in slower nerve conduction and hence reduced pain perception (33). Our study found that women with a cervical dilatation of 1-2 cm before the MVA procedure had 2 times higher odds of intra-procedural pain compared to those with a cervical dilatation of 3-4 cm. 
This finding is similar to that of a study conducted in the United Kingdom, which found that cervical dilation results in reduced intra-procedural pain (27), whereas a study done in Portland, Oregon (34) found no association between cervical dilatation and intra-procedural pain. The similar finding can be explained by the fact that increased cervical dilatation reduces the degree of cervical manipulation and trauma (13). This results in less sensory activation of nociceptors at the cervix and hence less pain, since the sensory function of the cervix is mediated by parasympathetic nerve fibres running from the uterovaginal plexus through the inferior hypogastric plexus (S2-S4) (35). Besides, there are four processes of pain: transduction, in which mechanical stimuli activate pain receptors; transmission, which involves the relay of nociceptive information to the central nervous system by the afferent axons of primary afferent nociceptors; modulation, a complex process that takes place within specific areas of the brain; and the final perception of pain (36). Our finding differs from that of a prospective randomized study done in Portland, Oregon, which found that dilatation of the cervix prior to the MVA procedure for first-trimester abortion has no effect on patients' intra-procedural pain (34). This is because their study included elective cases who were psychologically ready for the procedure. The participants in their study were given a sedative, either oral diazepam 5 mg or intravenous fentanyl 100 µg, prior to a paracervical block with lidocaine before uterine evacuation. Together, these agents inhibit pain transmission and conduction, causing loss of pain sensation and hence little pain associated with cervical manipulation. It is worth noting that diazepam modulates the postsynaptic effects of GABA-A transmission, resulting in presynaptic inhibition, and acts on parts of the limbic system, thalamus, and hypothalamus to induce a calming effect. Fentanyl, on the other hand, is a narcotic agonist-analgesic of opiate receptors which inhibits ascending pain pathways, altering the response to pain and increasing the pain threshold, thereby producing analgesia. In addition, a paracervical block with lidocaine is thought to block pain conduction via Frankenhauser's plexus through an infiltrative effect that inhibits the generation and conduction of nerve impulses by reducing sodium permeability, which increases the action potential threshold (37-39). Conclusion In conclusion, the proportion of women with intra-procedural pain undergoing Manual Vacuum Aspiration for incomplete abortion at Mbarara Regional Referral Hospital was very high: for every 10 women, 8 experienced intra-procedural pain. The factors associated with intra-procedural pain were age and cervical dilatation; younger women and those with a cervical dilatation of 1-2 cm are more likely to experience intra-procedural pain.
2022-02-06T16:20:15.791Z
2022-02-04T00:00:00.000
{ "year": 2022, "sha1": "8495e78e9093663f3c9cbf25a3835922d450b871", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21203/rs.3.rs-1271589/v1", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "dbce465dec6ae0903925aff9228fe6fffe3e81ad", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
247518833
pes2o/s2orc
v3-fos-license
Medium Transmission Map Matters for Learning to Restore Real-World Underwater Images Underwater visual perception is essentially important for underwater exploration, archaeology, ecosystem study, and so on. The low illumination, light reflections, scattering, absorption, and suspended particles inevitably lead to critically degraded underwater image quality, which poses great challenges for recognizing objects in underwater images. Existing underwater enhancement methods that aim to improve underwater visibility suffer from poor image restoration performance and limited generalization ability. To reduce the difficulty of underwater image enhancement, we introduce the medium transmission map as guidance to assist image enhancement. We formulate the interaction between the underwater visual images and the transmission map to obtain better enhancement results. Even with a simple and lightweight network configuration, the proposed method achieves advanced results of 22.6 dB on the challenging Test-R90 while running an impressive 30 times faster than existing models. Comprehensive experimental results have demonstrated its superiority and potential for underwater perception. The code is available at: https://github.com/GroupG-yk/MTUR-Net. INTRODUCTION With the development of science and technology, underwater research activities are increasing, such as underwater object detection and tracking [1], underwater robots [2], and underwater monitoring [3]. However, light reflections, scattering, absorption, and suspended particles inevitably result in poor visibility with inhomogeneous illumination in the collected underwater images. In detail, light is absorbed and scattered by suspended particles in the underwater setting, resulting in hazy effects in the images captured by cameras. Water also attenuates light as a function of salinity, wavelength, and depth; red light is attenuated more because of its longer wavelength. Besides, light intensity decreases as water depth increases. Such properties reduce visibility underwater and hamper the applicability of computer vision methods. Early single-image underwater restoration work used traditional physical methods to directly change the pixel values of the image [2] [4]. However, these methods have limited capabilities when faced with diverse underwater environments. Recently, driven by the release of a series of paired training sets including [5] [6] [7], deep convolutional neural network (CNN) based models have been proposed that learn the mapping between underwater images and restored images. Representative methods include WaterGAN by Li et al. [8] and Ucolor by Li et al. [9], which consider the restoration in multi-channel color spaces and obtain better results than traditional physical designs, as shown in Fig. 5. However, the quality improvement is limited because other factors, such as distance-dependent attenuation and scattering, are ignored. Considering the underwater imaging process, these factors can be accounted for by utilizing the semantics contained in the medium transmission map [9], as in the design proposed in this paper. Analyzing the results in Fig. 5(e), the improvement contributed by the medium transmission map is fully reflected in more visually pleasing results in terms of color, contrast, and naturalness. 
In this work, our goal is to eliminate the influence of light scattering and attenuation on underwater images in real time to support intelligent underwater perception systems. Inspired by the depth-guided deraining model of Hu et al. [10], we introduce the medium transmission map (MT) and formulate an MT-guided restoration framework. Specifically, a multi-task learning network is designed to generate both the MT and restoration outputs jointly. A multi-level (including both feature-level and output-level) knowledge interaction mechanism is proposed for better mining the guidance from the MT learning space. Furthermore, to maximally reduce the computational burden caused by the MT learning branch, parameters in some specific stages are shared across these two related tasks, thus enabling real-time processing of underwater images. In summary, this work has the following contributions: • We re-examined how to better use the medium transmission map. Whereas prior methods obtain good results by relying on the RGB image alone with various preprocessing and color embeddings, we show that the MT map is of great significance for learning a more powerful real-world underwater image restoration network. • A multi-task learning framework is formulated for leveraging the MT map, and a novel multi-level knowledge interaction mechanism is proposed for better mining the guidance from the MT learning space. • A comparative study on two real-world benchmarks demonstrated the superiority of our MTUR-Net over the state of the art in terms of both restoration quality and inference speed. The rest of our paper is organized as follows. Section 2 briefly introduces existing underwater image enhancement methods. Section 3 presents the proposed underwater enhancement algorithm. The experimental results are reported in Section 4, followed by the conclusion in Section 5. RELATED WORK Physical prior based methods. Early work adjusted pixel values directly to improve visual quality, and physical model-based methods soon came into wide use. These have obtained impressive results but still have shortcomings: they are generally slow and sensitive to different kinds of underwater images. Recently, with the development of artificial intelligence, deep learning based methods have achieved remarkable results. Underwater image enhancement frameworks are mostly based on convolutional neural networks (CNN) or generative adversarial networks (GAN). For example, Li et al. [5] proposed a simple CNN model named Water-Net using gated fusion. Li et al. [11] proposed UWCNN, which is based on underwater scene priors. Li et al. [9] proposed an underwater image enhancement network that embeds multiple color spaces guided by the medium transmission. J. Li et al. [8] used GANs and image formation models for supervised learning. To avoid requiring paired training data, a weakly supervised underwater color correction network (UCycleGAN) was proposed in [12]. A multiscale dense GAN for powerful underwater image enhancement was described in [13]. Despite the above research, these underwater image enhancement models often overlook the most important point: focusing on the real underwater environment and serving data collected under real conditions. For instance, [12] uses the CycleGAN [14] network structure directly, and a simple multi-scale convolutional network is used in [5]. 
In [11], given an input underwater image, how to select the corresponding UWCNN model is challenging, and [9] is still not fully effective under real underwater conditions. In contrast to the above, our method has the following characteristics: (1) we learn deeply guided non-local features and regress a residual mapping to produce a clear output image; (2) our method adopts end-to-end training and is adaptable and convenient for most underwater scenes; and (3) our method achieves strong performance on real underwater image datasets, better than recent state-of-the-art methods. METHODOLOGY Fig. 2 shows the overall architecture of our medium transmission map guided underwater image restoration network (MTUR-Net). This network takes underwater images as input and predicts the corresponding MT map and underwater enhanced images as output in an end-to-end manner. In general, the network first uses a CNN with shared weights to extract semantics and generate feature maps; then two decoding branches are generated: (i) the MT prediction subnet, which uses an encoding-decoding network to regress a medium transmission map from the input, and (ii) the underwater image enhancement subnet, which, guided by the predicted MT map, predicts the enhanced image from the input underwater image. MT Prediction Subnet We review the haze removal method based on the dark channel prior [15], which is widely used in harsh visual scenarios such as fog, dust, and underwater [16][17][18]. The image formation model can be expressed as [19]:

I^c(x) = J^c(x) T(x) + A^c (1 - T(x)),

This equation is defined on the three RGB color channels c. I represents the observed image, A is the airlight color vector, J is the surface brightness vector at the intersection of the scene and the real-world light corresponding to the pixel x = (x, y), and T(x) is the transmission along the light ray. Y.-T. Peng et al. [20] proposed a new Dark Channel Prior (DCP) algorithm that can effectively estimate the ambient light and is suitable for enhancing foggy, hazy, sandstorm, and underwater images.

Fig. 2. Schematic diagram of MTUR-Net. It consists of an encoder-decoder network for predicting the MT map (green), a set of dilated residual blocks (yellow) to generate local features, a convolutional layer (purple) to process MT features before fusion, and convolutional layers (blue) to upsample the feature map and generate underwater enhanced images; ⊕ denotes pixel-wise addition.

Inspired by DCP and the wide applicability of the transmission T(x), we use the medium transmission (MT) map T as our attention map; its effectiveness will be demonstrated in the ablation experiments. Since an actual input underwater image does not have a corresponding ground-truth medium transmission map, it is difficult to train a deep neural network to estimate it directly. Following [20], the medium transmission map can be estimated as:

T̃(x) = 1 - min_{y ∈ Ω(x)} ( min_c I^c(y) / A^c ),

where T̃ is the estimated medium transmission map, Ω(x) is a local patch centered at x, and c indexes the RGB channels. The schematic diagram of the proposed module using the MT map is shown in Fig. 3. We use the MT map as a feature selector to weigh the importance of different spatial locations of the features, assigning more weight to high-quality pixels (pixels with larger MT values), which can be expressed as F = O ⊙ T̃, where F and O represent the output and input features, respectively. 
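A minimal NumPy sketch of this DCP-style estimate and of the MT-guided feature weighting is given below. It is an illustration under stated assumptions: the airlight A is assumed to be known (estimated separately, e.g., following [20]), and the patch size is an arbitrary choice for the example.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(image, airlight, patch=15):
    """Estimate T(x) = 1 - min over patch and channels of I^c(y) / A^c.

    image: HxWx3 float array in [0, 1]; airlight: length-3 vector A.
    """
    normalized = image / np.asarray(airlight).reshape(1, 1, 3)
    dark = normalized.min(axis=2)            # min over the RGB channels
    dark = minimum_filter(dark, size=patch)  # min over the local patch
    return np.clip(1.0 - dark, 0.0, 1.0)

def weight_features(features, mt):
    """MT-guided weighting: emphasize locations with larger MT values."""
    return features * mt[..., None]          # broadcast HxW map over channels
```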
The schematic diagram of the proposed module using the MT map is shown in Fig. 3. We use the MT map as a feature selector that weighs the importance of different spatial locations in the feature maps, assigning more weight to high-quality pixels (pixels with larger MT values). This can be expressed as F = T ⊙ O, where F and O represent the output and input features, respectively, and ⊙ denotes element-wise multiplication.

In detail, the MT map prediction subnetwork uses four blocks to extract features. Each block consists of a convolution operation, group normalization [21], and a scaled exponential linear unit (SELU) [22]. It then uses lateral connections to inject detailed information into the decoded underwater feature maps. Finally, another convolution operation followed by a sigmoid function regresses T, supervised by the MT maps provided in the training data.

Underwater Image Enhancement Subnet In the underwater image enhancement subnet, we first use convolutions to reduce the resolution of the feature map, followed by 11 dilated residual blocks (DRBs) [23] to increase the receptive field without further reducing the resolution. Each DRB consists of a 3 × 3 dilated convolution [24], a ReLU nonlinearity, and another 3 × 3 dilated convolution, with a skip connection adding the input and output feature maps. To avoid gridding issues, we set the dilation rates of these DRBs to 1, 1, 2, 2, 4, 8, 4, 2, 2, 1, following [25]. Moreover, a laterally connected convolution module adds the MT prediction features to the output feature map. After that, we use a convolution to resize the feature map to the size of the MT map and concatenate the two. Finally, a convolution operation scales the feature map back to the size of the input image.
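A minimal PyTorch sketch of the two building blocks just described — the MT-guided feature weighting and a dilated residual block — may help; the channel width and module names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """3x3 dilated conv -> ReLU -> 3x3 dilated conv, plus a skip connection."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.conv2(self.relu(self.conv1(x)))  # skip connection

class MTGuidance(nn.Module):
    """Feature selection by the predicted MT map: F = T (element-wise) O."""
    def forward(self, features, mt_map):
        # mt_map: B x 1 x H x W in [0, 1]; broadcasts over the channel axis.
        return features * mt_map

# Dilation schedule as quoted in the text (ten rates listed for the 11 DRBs).
dilations = [1, 1, 2, 2, 4, 8, 4, 2, 2, 1]
drb_stack = nn.Sequential(*[DilatedResidualBlock(64, d) for d in dilations])
```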
EXPERIMENTS In this section, we first give the details of the parameter settings and then describe the setup of the entire experimental process. We then compare our model with several existing well-performing models and, at the end of this section, provide ablation experiments to study the effective parts of MTUR-Net.

Parameter Settings To train the network, we chose the real underwater image dataset described in Li et al. [9], which contains 890 pairs of images from [5] and 1250 pairs of images from [11]. We trained our network on a single NVIDIA 3090 Ti GPU with a batch size of 8; the initial learning rate was set to 1e-3, and the network was optimized with Adam.

Experiment Setup To test the proposed model, we took the remaining 90 pairs of real data in UIEB, recorded as Test-R90, and, to obtain more comprehensive results, we also tested 60 challenging images in UIEB, recorded as Test-C60.

To demonstrate the advancement of the proposed model, we compared our method with other state-of-the-art approaches, including a physical-model-based method and several deep-learning-based methods. The physical-model-based method is the Underwater Dark Channel Prior (UDCP) [2], an extension of earlier dark-channel work to underwater image restoration. The deep-learning-based methods are Water-Net [5], a simple CNN model with gated fusion; Ucolor [9], a network embedding multiple color spaces guided by medium transmission; FUnIE-GAN [26], a fully convolutional conditional GAN-based model; and a method using generative adversarial networks (GANs) [27]. To control variables, we used the same training data and loss function as for MTUR-Net.

Comparative Study In this experiment, we use two evaluation protocols, visual evaluation and quantitative evaluation, to compare the effects of our model with those of the other models.

Visual Evaluation. In open water, red light, which has the longest wavelength, is absorbed faster than other wavelengths; therefore, underwater images appear blue or green. To clearly observe the effect of processing with MTUR-Net, we provide a comparison of the corresponding results obtained by the different methods. Fig. 4 shows that the output of MTUR-Net has the best performance. Our method can repair the color cast caused by different water types, and in the restored images one can see details in dark water and the texture of fish in turbid water.

Quantitative Evaluation. We provide a full-reference evaluation and a non-reference evaluation to quantitatively analyze the performance of the different methods.

We conduct the full-reference evaluation using PSNR and SSIM, and additionally report FPS. Although the real-world environment may differ from the reference image, a full-reference evaluation against the reference image still provides useful feedback on the performance of the different methods. A higher PSNR means the result is less distorted, a higher SSIM means the result is structurally more similar to the reference image, and a higher FPS means the processing is more efficient. In Table 1, we find that our method achieves the best PSNR and SSIM, while its FPS is also favorable. We then use UCIQE [28] and UIQM [29] for the non-reference evaluation. In principle, a higher UCIQE score indicates a better balance among the standard deviation of chroma, the contrast of luminance, and the average saturation, while a higher UIQM score indicates better subjective visual quality. In Table 2, our proposed model obtains among the best scores in both UCIQE and UIQM. However, when we visually compared our image with the first-place method, we found many small blocky artifacts on UGAN's images even though its score was still very high, indicating that this evaluation standard still needs improvement.

To further verify the effect of the MTUR method, avoid the influence of the experimenters' subjective judgment on the visualized results, and make our proposed method more convincing, we also conducted a user study in addition to the quantitative evaluation. We prepared 420 images expanded from the Test-C60 test set, with each scene corresponding to seven different versions (raw, MTUR, FUnIE-GAN, UGAN, Ucolor, UDCP, and Water-Net). We then invited 20 participants and asked them to compare the quality of the images in terms of color cast, visibility, clarity, etc., and to select the best-performing version without knowing which method produced each image. The results are summarized in Table 3. As shown in the table, MTUR received the best rating for 42 of the 60 images in the Test-C60 test set; in particular, combined with the image features, MTUR generally recovers details better in dark environments.
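Before turning to the ablation study, here is a minimal sketch of the full-reference metrics used above, assuming scikit-image (version 0.19 or later for the channel_axis argument):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(restored, reference):
    """restored, reference: H x W x 3 uint8 images (result vs. ground truth)."""
    psnr = peak_signal_noise_ratio(reference, restored)
    ssim = structural_similarity(reference, restored, channel_axis=2)
    return psnr, ssim
```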
Ablation Study We performed ablation experiments on Test-R90 to verify the effectiveness of each part of the network. First, the basic architecture in the first row removes the entire medium transmission map module, so the network produces the enhanced image directly from the feature maps generated by the dilated residual blocks (DRBs) in the underwater image enhancement subnet. The second row removes the skip connection between the two subnetworks. We then ran a comparative test that removes the concatenation and retains only the skip connection. From the experimental results, we find that without the final concatenation operation, performance drops greatly. These three experiments show that the MT prediction subnetwork has a profound impact on image enhancement. After that, we tried reducing the convolution operations after the concatenation and found that this also degrades performance. In the last two ablation experiments, we tried to concatenate or add all DRB blocks together through skip connections to strengthen the link between the shallow and deep layers, and the results show that this does not perform well.

CONCLUSION In this paper, to address the remaining pain points in underwater image enhancement, we demonstrated the value of physical priors, in particular the medium transmission map, for restoring real-world underwater images. By formulating a simple network that learns the prior and the restoration result jointly, and by encapsulating the knowledge interaction between these two tasks at both the feature and output levels, much better restoration features are learned, guaranteeing much better results. Besides producing the best results on two real-world benchmarks, our model is also able to process underwater images at real-time speed, making it a potential framework for deployment in intelligent underwater systems.

In the future, we will explore the upper bound of the benefits provided by the medium transmission map, and continue to explore knowledge interaction designs better suited to fusing the physical prior.

Fig. 1. Comparison of the results of different methods for processing a real underwater picture. The results show that our method restores the chromatic aberration and enhances the contrast.

Fig. 3. Medium transmission guidance module. The MT map T is a feature selector; T weighs the importance of the different spatial positions for F.

Fig. 4. Visual comparison of different images (from Test-R90) enhanced by state-of-the-art methods and our MTUR-Net.

Fig. 5. Visual comparison of images from Test-C60. Here one can see the difference between our image and the UGAN image: ours has no obvious pixel blocks, and the contrast and color of objects are better.

Table 1. Comparison with the state of the art using PSNR and SSIM on the Test-R90 dataset [5].

Table 3. The image quality evaluation results of the different methods on Test-C60.

Table 4. Component analysis. The basic model is MTUR-Net without the MT-guided non-local module.
Subunit structure of oxygenase component in benzoate-1,2-dioxygenase system from Pseudomonas arvilla C-1. The benzoate-1,2-dioxygenase system from Pseudomonas arvilla C-1 consists of two protein components, benzoate-1,2-dioxygenase reductase and benzoate-1,2-dioxygenase (Yamaguchi, M., and Fujisawa, H. (1980) J. Biol. Chem. 255, 5058-5063). Benzoate-1,2-dioxygenase exhibited two protein bands (α and β) on sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and their molecular weights were estimated to be 50,000 and 20,000, respectively. The intensities of protein staining on polyacrylamide gels suggested that these two subunits were present in equimolar quantities in benzoate-1,2-dioxygenase. The molecular weight of benzoate-1,2-dioxygenase was estimated to be 201,000 by sedimentation equilibrium (Yphantis method). The molecular weights of the native enzyme and its subunits suggested that the subunit structure of benzoate-1,2-dioxygenase may be α₃β₃. Cross-linking experiments also suggested the same subunit structure. The two subunits were separated from each other by Ultrogel AcA44 chromatography in the presence of 6 M urea. The amino acid compositions of the two subunits were examined and compared with that of the native enzyme. The NH₂-terminal amino acids of the α and β subunits were both serine, and the isoelectric points of α and β in the presence of 6 M urea were determined to be pH 5.6 and pH 4.8, respectively. The enzyme contained 8.2 mol of iron and 5.9 mol of labile sulfide/mol of enzyme, suggesting the presence of additional iron atoms besides iron-sulfur clusters. The isolated β subunit did not contain any significant amounts of iron and labile sulfide, but the α subunit contained approximately 2 mol each of iron and labile sulfide and exhibited an absorption spectrum of the binuclear iron cluster type.

* This research was supported in part by a grant-in-aid for Scientific Research from the Ministry of Education, Science, and Culture of Japan, and a research grant from the Kuribayashi Foundation. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Benzoate-1,2-dioxygenase was shown to consist of two nonidentical subunits, the molecular weights of which were estimated to be 50,000 and 20,000, respectively. Furthermore, the iron-sulfur cluster was shown to lie on the larger subunit.

Sedimentation Experiments—Ultracentrifuge measurements were carried out in a Hitachi 282 analytical ultracentrifuge. Prior to sedimentation analysis, the sample was applied to a column of Sephadex G-200 equilibrated with 50 mM Tris/HCl buffer, pH 6.8, containing 5% dimethyl sulfoxide, 0.1 M NaCl, and 1 mM dithiothreitol, and eluted with the same buffer to remove the aggregated forms of the enzyme. Sedimentation equilibrium was carried out according to the method of Yphantis (10) in a three-channel centerpiece. After centrifugation at 12,000–13,000 rpm at 4 °C for 24 h, scanning was done at 280 nm by ultraviolet optics.

Cross-linking Experiments—Cross-linking of benzoate-1,2-dioxygenase with DTBP was carried out in 0.2 M triethanolamine/HCl buffer, pH 8.5 (11), at room temperature in a final volume of 100 µl. The reaction was quenched by the addition of 5 µl of 1 M ammonium acetate solution (8). After incubation for 10 min, 4 µmol of N-ethylmaleimide (in 5 µl of ethanol) were added to prevent disulfide-sulfhydryl exchange.
For electrophoresis, 400 µl of 0.1 M sodium borate buffer, pH 8.5, containing 2% SDS, 10% glycerol, and 0.001% bromphenol blue, were added to the reaction mixture, and the mixture was heated for 3 min at 100 °C.

Disc Gel Electrophoresis—For determination of the molecular weights of the benzoate-1,2-dioxygenase subunits, SDS-polyacrylamide gel electrophoresis was performed at 6 mA/gel for 4 h according to the method of Weber and Osborn (12). The gels were stained with Amido black 10B or Coomassie brilliant blue (R-250) and then scanned at 580 nm with an ISCO UA-5 absorbance monitor. For analysis of the cross-linked enzyme, electrophoresis was performed at 8 mA/gel for 2 h on 5% polyacrylamide gels according to the procedure of Davies and Stark (11). Gels were stained with Coomassie brilliant blue (R-250).

Two-dimensional Gel Electrophoresis—Electrophoresis in the first dimension was performed on 5% polyacrylamide gels in the presence of 0.1% SDS in glass tubing (12 × 0.25 cm), essentially according to the procedure of Davies and Stark (11). After electrophoresis at 2 mA/gel for 3 h, the cylindrical gel was removed from the glass tube and placed in the slot on top of a discontinuous slab gel system similar to that of Laemmli (13). It consisted of a 3% polyacrylamide stacking gel on a 10% polyacrylamide separating gel (10 × 15 × 0.1 cm). Agarose (1%) containing 62.5 mM Tris/HCl buffer, pH 6.8, 0.1% SDS, 10% 2-mercaptoethanol, and 10% glycerol at 80 °C was added on top of the stacking gel to cover the cylindrical gel.

Amino Acid Analysis—Amino acid analysis was performed on a Hitachi KLA-5 automatic amino acid analyzer. Samples were hydrolyzed in 6 N HCl for 22, 48, and 72 h at 110 °C in a vacuum (14). Half-cystine was determined as cysteic acid after performic acid oxidation (15). Tryptophan was determined spectrophotometrically according to the method of Goodwin and Morton (16).

NH₂-terminal Analysis—Identification of the NH₂-terminal amino acids of the benzoate-1,2-dioxygenase subunits was made by reaction of the enzyme protein with dansyl chloride followed by acid hydrolysis and two-dimensional thin layer chromatography on polyamide sheets according to the procedure of Gray (17).

Isoelectric Focusing—Analytical isoelectric focusing was performed on 5% polyacrylamide gels containing 2% Ampholine (a mixture of 4 parts of pH 3.5–5.0 Ampholine and 1 part of pH 3.5–10 Ampholine) and 6 M urea, essentially according to the procedure described by Wrigley (18). Electrofocusing was carried out at 200 V for 4 h at 4 °C. Gels were sliced into 2-mm sections, and each slice was placed into 0.5 ml of water. After standing overnight at 4 °C, the pH of the water extract of each piece was measured. In a parallel experiment, gels were stained with Coomassie brilliant blue (G-250) (19).

Determination of Iron—The iron content was determined using the o-phenanthroline method described by Massey (20).

Other Determinations—The concentration of benzoate-1,2-dioxygenase was estimated by measuring the absorbance at 279 nm, taking 39,900 as the molar extinction coefficient (7). The protein concentrations of the subunits were estimated from the absorbance at 280 nm based on their contents of tryptophan and tyrosine residues (24). All spectrophotometric measurements were carried out with a Shimadzu UV200 recording spectrophotometer.
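The absorbance-based concentration estimate above is an application of the Beer-Lambert law, A = ε·c·l. A minimal sketch using the quoted extinction coefficient (the 1-cm path length is an assumption):

```python
def concentration_from_absorbance(absorbance, epsilon=39_900, path_cm=1.0):
    """Beer-Lambert: A = epsilon * c * l  =>  c = A / (epsilon * l).

    epsilon in M^-1 cm^-1 (39,900 for benzoate-1,2-dioxygenase at 279 nm);
    returns the molar concentration.
    """
    return absorbance / (epsilon * path_cm)

# Example: A279 = 0.42 in a 1-cm cuvette gives roughly 10.5 uM enzyme.
print(concentration_from_absorbance(0.42) * 1e6, "uM")
```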
Molecular Weights—Purified benzoate-1,2-dioxygenase, which was homogeneous on polyacrylamide disc gel electrophoresis (7), showed two protein bands on gel electrophoresis in 0.1% SDS, as shown in Fig. 1A. The molecular weights of the two protein bands were determined to be 50,000 (designated the α subunit) and 20,000 (the β subunit) according to the method of Weber and Osborn (12), as shown in Fig. 2. When the two protein bands on the SDS gel were stained with Amido black 10B or Coomassie brilliant blue (R-250) and scanned with a densitometer at 580 nm to determine the relative amount of protein in each band, a protein-staining ratio of 5:2 was obtained for the α and β subunit bands, indicating a molar ratio of 1:1 for the α and β subunits based on their molecular weights of 50,000 and 20,000.

The molecular weight of purified benzoate-1,2-dioxygenase, as determined by low speed sedimentation equilibrium, was reported to be 273,000 (7). However, the purified enzyme was found to have a tendency to aggregate, as shown in Fig. 3. When the purified enzyme was stored in a frozen state at −20 °C for 1 month at a concentration of 54 mg/ml, approximately 20% of the enzyme appeared to be converted to aggregated forms. Since the elution position of the aggregates preceded that of β-galactosidase, the molecular weight of the aggregates appeared to be higher than 520,000. In order to obtain an accurate value for the molecular weight of the enzyme, high speed sedimentation equilibrium analysis (10) was performed immediately after removal of the aggregates by gel filtration. The value, 201,200 ± 11,500 (Table I), was considerably lower than that reported previously.

Fig. 3. Gel filtration of benzoate-1,2-dioxygenase. The enzyme, stored at −20 °C for 1 month at a protein concentration of 54 mg/ml in 50 mM Tris/HCl buffer, pH 6.8, containing 5% dimethyl sulfoxide, 0.1 M NaCl, and 1 mM dithiothreitol, was applied to a column (1.9 × 50 cm) of Sephadex G-200 equilibrated with the same buffer. The column was eluted in fractions of 1 ml, and the absorbance at 280 nm of each fraction was measured. Standard proteins were used for calibration of the column.

Cross-linking Studies—When benzoate-1,2-dioxygenase was treated with DTBP and analyzed by SDS-polyacrylamide gel electrophoresis, the patterns of protein-staining bands shown in Fig. 4 were obtained. The molecular weight of each band was estimated to be 43,000, 70,000, 98,000, 120,000, 145,000, 170,000, and 190,000, using cross-linked bovine serum albumin as a standard marker. These molecular weights tentatively suggested that the bands might correspond to β₂, αβ, α₂, α₂β, α₃, α₃β, and α₃β₂, respectively. When the tube gel that had been electrophoresed for separation of the DTBP-cross-linked enzyme as described above was subjected to electrophoresis in the second dimension as described under "Experimental Procedures," the patterns of protein bands shown in Fig. 5 were obtained. All of the species assumed to be αβ and α₂β in the first dimension produced protein bands at both positions corresponding to α and β in the second dimension. The two slowest migrating bands in the first dimension produced protein-staining bands at a position corresponding to β in the second dimension, suggesting that these species might be α₃β and α₃β₂ but not α₄. These results, taken together with the densitometric observation described above, suggested that the subunit structure of benzoate-1,2-dioxygenase might be α₃β₃.
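A small arithmetic check of these assignments, comparing the masses expected from the subunit weights quoted above with the observed band estimates (an illustrative script, not part of the original analysis):

```python
observed = [43_000, 70_000, 98_000, 120_000, 145_000, 170_000, 190_000]
species = [(0, 2), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)]  # (m, n)

for (m, n), obs in zip(species, observed):
    expected = 50_000 * m + 20_000 * n  # mass of an alpha_m beta_n species
    print(f"alpha{m}beta{n}: expected {expected:,}, observed {obs:,}")
```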
Fig. 5. Two-dimensional polyacrylamide gel electrophoresis of benzoate-1,2-dioxygenase cross-linked with DTBP. Benzoate-1,2-dioxygenase (2 mg/ml) was cross-linked with DTBP (5 mg/ml) in 0.2 M triethanolamine/HCl buffer, pH 8.5, at room temperature for 2 h. Two samples of the cross-linked enzyme, each containing 20 µg of protein, were subjected to SDS-polyacrylamide disc gel electrophoresis, and one of the two gels was stained with Coomassie brilliant blue (R-250) (A). The unstained disc gel was placed over a slab gel, and electrophoresis was carried out as described under "Experimental Procedures." After electrophoresis in the second dimension, the gel was stained with Coomassie brilliant blue (R-250) (B).

Separation of Subunits—Benzoate-1,2-dioxygenase was separated into α and β subunits by gel filtration chromatography on Ultrogel AcA44 in 50 mM Tris/HCl buffer, pH 6.8, containing 2 mM dithiothreitol, 6 M urea, and 5% dimethyl sulfoxide, as shown in Fig. 6. The α and β subunits were eluted in fractions 38 to 46 and 48 to 58, respectively, and each subunit thus obtained was shown to be free of the other on SDS-polyacrylamide gel electrophoresis (Fig. 1, B and C). Absorbance at 415 nm, presumably due to the iron-sulfur cluster, was eluted in the region corresponding to the α subunit on Ultrogel AcA44 gel filtration. Iron determination of each fraction of the gel filtration revealed that about 70% of the iron was eluted in the region corresponding to the α subunit, and the remainder was eluted in the column volume. These results, taken together, indicated that an iron-sulfur cluster might lie on the α subunit of benzoate-1,2-dioxygenase. Both subunit preparations thus obtained were dialyzed against 50 mM Tris/HCl buffer, pH 6.8, containing 5% dimethyl sulfoxide and 1 mM dithiothreitol, and then used for the studies described below.

Amino Acid Compositions of Subunits—The results of the amino acid analyses of the α and β subunits are summarized in Table II. The numbers of α and β subunits in benzoate-1,2-dioxygenase, m and n, were calculated from the results of the amino acid analyses by the method of least squares, using the following formula: P = Σᵢ [Nᵢ − (m·αᵢ + n·βᵢ)]², where Nᵢ is the number of residues of amino acid i in the native enzyme, αᵢ is that of the α subunit, and βᵢ is that of the β subunit. To minimize P, the following two equations must be satisfied: ∂P/∂m = 0 and ∂P/∂n = 0. The values of m and n were calculated to be 2.9 and 2.8, respectively, from these two equations, supporting the contention that the subunit structure of benzoate-1,2-dioxygenase is α₃β₃.
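This least-squares fit is straightforward to reproduce. A minimal sketch with hypothetical residue counts for four amino acids (the paper's actual values are those of Table II, not shown here):

```python
import numpy as np

# Hypothetical residue counts per amino acid: native enzyme (N) and subunits.
alpha = np.array([40.0, 25.0, 30.0, 18.0])   # residues per alpha subunit
beta = np.array([15.0, 10.0, 8.0, 7.0])      # residues per beta subunit
N = np.array([165.0, 105.0, 114.0, 75.0])    # residues per native enzyme

# Minimize P = sum_i [N_i - (m*alpha_i + n*beta_i)]^2 over m and n.
A = np.column_stack([alpha, beta])
(m, n), *_ = np.linalg.lstsq(A, N, rcond=None)
print(f"m = {m:.1f}, n = {n:.1f}")  # values near 3 and 3 support alpha3beta3
```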
NH₂-terminal Amino Acid Residues of Subunits—The NH₂-terminal amino acids of both subunits of benzoate-1,2-dioxygenase were determined by the dansylation method as described under "Experimental Procedures." In each case, a single major dansyl-serine was identified after acid hydrolysis, indicating that the NH₂ termini of both the α and β subunits might be serine.

Isoelectric Points of Subunits—Isoelectric focusing of each subunit of benzoate-1,2-dioxygenase on polyacrylamide gel in the presence of 6 M urea revealed a single protein band with an isoelectric point of pH 5.6 for the α subunit and pH 4.8 for the β subunit. The isoelectric point of the native enzyme was reported to be pH 4.5 (7).

Absorption Spectra of Subunits—Fig. 7 shows the visible absorption spectra of benzoate-1,2-dioxygenase and its α subunit. Both measurements were performed at the same concentration of α subunit. The native enzyme exhibited a broad absorption spectrum with maxima at about 325 and 464 nm and a shoulder at about 560 nm; the α subunit also exhibited a broad absorption spectrum with maxima at about 325, 415, and 450 nm, presumably due to an iron-sulfur cluster of the [2Fe-2S] type. Thus, the absorption spectra of the native enzyme and the α subunit appeared to resemble each other both in shape and intensity, suggesting that the visible absorption of benzoate-1,2-dioxygenase might be derived primarily from the iron-sulfur cluster on the α subunit. In contrast to the α subunit, the β subunit showed no significant absorption in the visible range.

Fig. 7. Absorption spectra of benzoate-1,2-dioxygenase and its α subunit. The concentration of benzoate-1,2-dioxygenase (—) was 4.2 µM and that of the α subunit (····) was 12.5 µM in 50 mM Tris/HCl buffer, pH 6.8, containing 5% dimethyl sulfoxide and 1 mM dithiothreitol.

Iron and Labile Sulfide Contents of Subunits—The iron and labile sulfide contents of benzoate-1,2-dioxygenase and its α and β subunits are summarized in Table III. The iron and labile sulfide contents of the native enzyme were calculated to be 8.2 and 5.9 mol/mol of enzyme, based on a molecular weight of 201,000. The labile sulfide content of the α subunit, 1.9 mol/mol, accounted for the total labile sulfide content of the enzyme, based on the finding that the subunit structure of the enzyme is α₃β₃. The iron content of the α subunit, 1.8 mol/mol, corresponded to approximately 70% of the total iron content of the enzyme. These results provided evidence for the contention that an iron-sulfur cluster of the [2Fe-2S] type might reside on each α subunit of benzoate-1,2-dioxygenase and, furthermore, suggested that additional iron atoms might be contained in the enzyme. The β subunit preparation contained no significant amounts of either iron or labile sulfide.

DISCUSSION The benzoate-1,2-dioxygenase system, which catalyzes the double hydroxylation of benzoate, consists of two protein components, benzoate-1,2-dioxygenase reductase and benzoate-1,2-dioxygenase (3-7). The former is an iron-sulfur flavoprotein containing one FAD and one iron-sulfur cluster of the [2Fe-2S] type (5, 6), and the latter is an iron-sulfur protein with iron-sulfur clusters of the [2Fe-2S] type (7). In the present study, this iron-sulfur protein, benzoate-1,2-dioxygenase, was shown to be composed of nonidentical subunits, comprising a larger iron-sulfur cluster-containing polypeptide (α) with a molecular weight of 50,000 and a smaller polypeptide (β) with a molecular weight of 20,000.

The molecular weight estimation (Table I), SDS-polyacrylamide gel electrophoresis (Fig. 2), cross-linking studies (Figs. 4 and 5), and amino acid analyses (Table II), taken together, strongly suggested that the subunit structure of benzoate-1,2-dioxygenase might be α₃β₃. Since the majority of proteins consisting of nonidentical subunits are known to be dimers and tetramers (25), benzoate-1,2-dioxygenase appears to be a rare case. Toluene dioxygenase, which catalyzes the double hydroxylation of toluene, has been reported to be composed of two nonidentical subunits, one with a molecular weight of 52,500 and the other with a molecular weight of 20,800 (26). Although the subunit structure of toluene dioxygenase has not been reported, this enzyme appears to exist as an α₂β₂ form, taking into account its molecular weight of 151,000 (26).
Each subunit of benzoate-1,2-dioxygenase was obtained without contamination by the other through gel filtration in the presence of 6 M urea (Fig. 6). The α subunit preparation thus obtained appeared to still retain its iron-sulfur cluster, judging from the contents of iron and labile sulfide (Table III) and the visible absorption spectrum (Fig. 7). The absorption spectrum of the α subunit was very similar to those of spinach ferredoxin (27) and adrenodoxin (28), indicating that the absorption might be derived from an iron-sulfur cluster of the [2Fe-2S] type bound to the α subunit. Thus, an iron-sulfur cluster of the [2Fe-2S] type appeared to be attached to each α subunit as a prosthetic group of benzoate-1,2-dioxygenase.

The finding that benzoate-1,2-dioxygenase contained 8.2 mol of iron and 5.9 mol of labile sulfide/mol of enzyme suggested the presence of additional iron atoms besides the iron-sulfur clusters. The amount of additional iron, 2.3 mol/mol of enzyme, corresponded to approximately 30% of the total iron atoms of the enzyme. Gel filtration of benzoate-1,2-dioxygenase in the presence of 6 M urea produced the β subunit, the α subunit, which still retained about 2 mol each of iron and labile sulfide (Table III), and unbound iron, the amount of which corresponded to about 30% of the total iron of the enzyme (Fig. 6). These findings, together with the α₃β₃ subunit structure, suggested that the enzyme might contain three iron atoms in addition to three iron-sulfur clusters as prosthetic groups. The terminal oxygenase of the 4-methoxybenzoate-O-demethylase system has been reported to have iron-sulfur clusters of the [2Fe-2S] type and high spin ferric ions as active cofactors (29, 30). It is, therefore, reasonable to assume that benzoate-1,2-dioxygenase may be composed of three active units, each consisting of one iron atom, one [2Fe-2S]-containing α subunit, and one β subunit.
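The cofactor bookkeeping behind this model is simple enough to spell out (an illustrative check using the figures quoted above):

```python
units = 3                         # proposed active units per enzyme
fe_expected = units * (2 + 1)     # 9 mol Fe/mol enzyme vs. 8.2 measured
sulfide_expected = units * 2      # 6 mol labile S/mol enzyme vs. 5.9 measured
# In a [2Fe-2S] cluster, iron equals labile sulfide, so the non-cluster iron:
extra_fe = 8.2 - 5.9              # ~2.3 mol/mol, ~30% of the total iron
print(fe_expected, sulfide_expected, round(extra_fe / 8.2, 2))
```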
The Versatile Role of Uromodulin in Renal Homeostasis and Its Relevance in Chronic Kidney Disease

Uromodulin, also known as the Tamm-Horsfall protein, is predominantly expressed in epithelial cells of the kidney. It is secreted mainly into the urine, although small amounts are also found in serum. Uromodulin plays an important role in maintaining renal homeostasis, particularly in salt/water transport mechanisms, and is associated with salt-sensitive hypertension. It also regulates urinary tract infections, kidney stones, and the immune response in the kidneys and extrarenal organs. Uromodulin has been shown to be associated with the renal function, age, nephron volume, and metabolic abnormalities and has been proposed as a novel biomarker of tubular function or injury. These findings suggest that uromodulin is a key molecule underlying the mechanisms or therapeutic approaches of chronic kidney disease, particularly nephrosclerosis and diabetic nephropathy, which are causes of end-stage renal disease. This review focuses on the current understanding of the role of uromodulin from a biological, physiological, and pathological standpoint.

Introduction

Chronic kidney disease (CKD) is a public health burden and an emerging risk factor for end-stage renal disease (ESRD), cardiovascular disease, and mortality (1)(2)(3). The prevalence of CKD increases with age and is significantly higher in hypertensive populations than in normotensive ones (4); therefore, with the increase in the elderly population of Japan, CKD is becoming a common disease. This epidemiological finding may partly underlie the recent changes in the etiology of ESRD.

Diabetic nephropathy has been the most common primary disease among incident dialysis patients in Japan since 1998 (5), but its percentage has remained nearly unchanged for the past few years. In contrast, the percentage of patients with nephrosclerosis has increased, making it the second-most common cause of ESRD (6). Therefore, it is important to appropriately manage diabetes, hypertension, and other CKD-related conditions. Therapies targeting blood pressure, glucose, and other metabolic abnormalities can slow the progression of CKD (7); however, the detailed mechanisms underlying these therapeutic approaches are not fully understood. Furthermore, no biomarkers have been directly linked to the mechanism of CKD. These limitations highlight the need to elucidate the underlying mechanisms and identify causal biomarkers of CKD.
Uromodulin, also known as the Tamm-Horsfall protein, is a glycoprotein that is predominantly expressed in kidney epithelial cells. Rare mutations in UMOD, the gene that encodes uromodulin, are known to cause autosomal dominant tubulo-interstitial kidney disease (ADTKD) (8). Recent genome-wide association studies (GWASs) have revealed various genetic loci associated with the renal function and risk of CKD in European and Asian populations, including the Japanese population (9, 10). Among these, the UMOD locus shows a remarkable association with the renal function (9). The relevance of variation at the UMOD locus to CKD was also evident in another study, which showed that its influence was stronger among older adults than younger ones (11). Furthermore, the UMOD locus is associated with hypertension (12). Recent advances in the investigation of uromodulin and its relevance to kidney diseases have broadened our understanding from rare inherited diseases to common CKD. These observations suggest that uromodulin may be a clue to understanding the underlying mechanism of CKD, particularly age-related or hypertensive nephrosclerosis, thus providing a novel therapeutic target.

In this review, we summarize the current understanding of the biology, function, and relevance of uromodulin in clinical practice.

Biology of Uromodulin

Uromodulin, or the Tamm-Horsfall protein, is the most abundant protein in urine. Its structural components include a leader peptide, epidermal growth factor-like domains, a cysteine-rich domain (D8C), and a zona pellucida domain (13). The leader peptide directs its insertion into the endoplasmic reticulum (ER) (14). Because uromodulin has a complex structure with many cysteine residues involved in the formation of disulfide bonds, its processing in the ER is important for its maturation (15). Mature uromodulin accumulates mainly at the apical membrane by polarized trafficking and is then secreted into the urine via proteolytic cleavage by the serine protease hepsin (16). Previous investigations of ADTKD pathophysiology have revealed that mutant uromodulin alters membrane trafficking, resulting in decreased urinary uromodulin secretion (17)(18)(19). Defective transport of mutant uromodulin in turn causes its accumulation in the ER, leading to ER stress and inflammation (20)(21)(22)(23). In addition to defective membrane trafficking, defective urinary secretion of uromodulin causes ER stress and renal injury. In a mouse model with mutant hepsin, deficient cleavage of uromodulin induced intracellular accumulation of uromodulin and ER stress (24).
Hepsin also regulates the structure of uromodulin in urine. Physiological cleavage by hepsin releases a zona pellucida domain that mediates the polymerization of uromodulin (16). Owing to its ability to polymerize and form filaments, uromodulin can trap uropathogens and inhibit the interaction between the bladder epithelium and uropathogenic bacteria (13). Based on these biological and structural features, uromodulin is known to protect against urinary tract infections (UTIs). This protective role in UTIs is evident in the clinical setting. In a case-control study of UTI patients, patients with low urinary uromodulin levels were more common in the bacteremia group than in the group without bacteremia (25). The relevance of urinary uromodulin to UTIs was further demonstrated in a study of 953 subjects, in which elevated urinary uromodulin levels were associated with a decreased risk of UTIs (26). Recent investigations of the three-dimensional structure of urinary uromodulin by cryo-electron tomography have shown a polymerized zona pellucida domain with protruding arms of an epidermal growth factor-like domain and D8C (27). Most of the previously identified UMOD mutations in ADTKD are located in exon 4, which encodes D8C (28-30); however, the biological functions of D8C have not yet been elucidated.

Overview

Uromodulin has been shown to have pleiotropic roles in renal homeostasis and systemic inflammation (Fig. 1). Among these, its regulatory roles in tubular epithelial cell function have been studied extensively. The UMOD gene is evolutionarily conserved in all vertebrates (31). Uromodulin is predominantly produced by epithelial cells in the thick ascending limb (TAL) of the loop of Henle and is present in lesser amounts in epithelial cells of the early distal convoluted tubule (DCT) of the mammalian kidney (32). It has also been found on the skin and gills of fish and in the distal tubules of some amphibians (33). Since the tissues in which uromodulin is localized share a common function in handling sodium and chloride transport, it has been suggested that uromodulin regulates the balance of salt and water (13). It should be noted that excessive sodium and water reabsorption in the kidney causes fluid overload and hypertension (34). The relevance of uromodulin to hypertension can be partially explained by this regulation of sodium and water balance, which is discussed further below.

Another aspect of uromodulin activity is immunomodulation. Uromodulin shows a distinct distribution in the kidney, being located mostly in the inner stripe of the outer medulla. Because the outer medulla is vulnerable to changes in perfusion and is rich in immune cells, it has been suggested that uromodulin regulates the immune response in the kidney (35).

Electrolytes and water transport

Urine concentration and dilution are among the most important functions of the renal tubules. The TAL, which is the primary uromodulin-producing segment, is impermeable to water and contributes to the reabsorption of approximately 30% of filtered sodium while diluting the tubular fluid. This water-impermeable reabsorption of sodium is critical for the countercurrent multiplier mechanism of free water conservation; the high interstitial osmolality produced by the reabsorption of sodium is the driving force of passive water transport at the collecting duct to concentrate urine (36).
Uromodulin has been shown to regulate sodium transporters in epithelial cells along the TAL and DCT, where the Na-K-2Cl cotransporter (NKCC2) and the Na-Cl cotransporter (NCC) are the main transporters responsible for sodium reabsorption (37). In mice lacking Umod (Umod-/-), NKCC2 was detected in subapical vesicles but was less strongly expressed at the apical membrane than in wild-type (WT) mice. Furthermore, NKCC2 phosphorylation was significantly decreased in Umod-/- mice (38). Because apical expression and phosphorylation are essential for membrane protein function (39), the downregulation of apical expression and phosphorylation of NKCC2 in Umod-/- mice indicates defective transporter activation. Loss of function of NKCC2 generates a phenotype similar to Bartter syndrome, a salt-losing tubulopathy. Similar to NKCC2, uromodulin activates NCC in the early DCT (32), indicating its importance in the regulation of sodium. Recently, vasopressin has been shown to increase the urinary secretion of uromodulin (40). Vasopressin plays an important role in determining the urine concentration by inducing the apical expression of the water channel aquaporin-2 (AQP2) (41, 42). Because vasopressin is a hormone secreted during dehydration or volume depletion, the upregulation of uromodulin secretion by vasopressin is considered a reasonable physiological response. Furthermore, urinary uromodulin secretion increases under dehydration and, conversely, activates AQP2 in collecting duct cells (43). These findings suggest that the TAL and collecting duct work cooperatively to retain sodium and water through cross-talk mediated by uromodulin (Fig. 2).

As mentioned above, excess sodium and water retention causes fluid overload, leading to hypertension. Mice overexpressing WT uromodulin (TgUmodwt) showed increased levels of phosphorylated NKCC2 along with increased blood pressure on a high-salt diet, and their blood pressure decreased on a low-salt diet (44). The relevance of uromodulin and its regulatory effect on sodium homeostasis are consistent with GWAS findings in humans. Single nucleotide polymorphisms in the UMOD gene are associated with hypertension; conversely, variants associated with low urinary uromodulin are associated with a lower risk of hypertension (12). In a cohort study of the general population, subjects with high urinary uromodulin excretion showed a trend toward higher blood pressure with high salt intake, while subjects with low urinary uromodulin excretion showed no association between blood pressure and sodium intake (45). These findings indicate that uromodulin is associated with salt-sensitive hypertension. Uromodulin upregulates NKCC2 and NCC through SPS1-related proline-alanine-rich kinase (SPAK) and oxidative stress response 1 (OSR1) kinase (32, 46), which are involved in the pathogenesis of salt-sensitive hypertension (47)(48)(49).

In addition to its key roles in modulating sodium and water reabsorption, uromodulin is involved in the regulation of calcium and magnesium. Umod-/- mice show supersaturation of urine with calcium oxalate or calcium phosphate and are prone to the formation of calcium crystals in the kidney (50). It has been suggested that osteopontin, an inhibitor of calcium crystal formation, cooperatively prevents crystal formation (51). A proposed mechanism of protection against urinary stone formation is increased Ca²⁺ reabsorption in the DCT through transient receptor potential vanilloid (TRPV) 5.
Uromodulin stimulates the expression of apical TRPV5 by inhibiting its endocytosis (52). Uromodulin thus upregulates membrane proteins, such as TRPV5 and AQP2, by inhibiting endocytosis (43, 52). Interestingly, this regulatory effect of uromodulin on apical membrane proteins is exerted in segments distant from its site of production, suggesting an autocrine-like function of uromodulin.

Immunomodulation

The role of uromodulin is not limited to the TAL, DCT, or the distal parts of the nephrons. Although its concentration is much lower than in urine, it is also secreted from the basolateral side and can be detected in serum (35, 53, 54). Serum and urinary uromodulin secretion are regulated independently and are suggested to have different functions. Immunomodulation is considered a distinct function of circulating uromodulin, protecting against systemic inflammation and oxidative stress. Uromodulin activates interleukin (IL)-23/IL-17 and pro-inflammatory cytokines and induces granulopoiesis (55). In fact, Umod-/- mice show higher levels of pro-inflammatory cytokines/chemokines and neutrophilia than WT mice (56). These regulatory effects contribute to protection against ischemic reperfusion-induced proximal tubular injury (57, 58), sepsis (59, 60), and vascular calcification (61). Furthermore, uromodulin is involved in the binding of probiotic bacteria to the gastrointestinal epithelium and modulates the immune system (62). These findings indicate that uromodulin is a potential therapeutic target for systemic diseases; however, its relevance to extrarenal organs requires further research.

Uromodulin in Clinical Practice

Recently, uromodulin has been proposed as a novel biomarker for the diagnosis of CKD (63). Given that uromodulin is essentially generated in the kidney, primarily in the TAL, and is secreted into urine with lower levels in the serum, it would be expected that uromodulin correlates with several renal properties, such as the nephron mass and renal function. The loss of functioning nephrons causes hyperfiltration of the remaining nephrons, leading to glomerulosclerosis. Morphological parameters of the kidney or renal cortex, such as volume and length, which are proxies of the nephron mass, are well correlated with the renal function and predict the progression of CKD (64, 65). Urinary uromodulin has been shown to be associated with predictors of kidney mass, including the height, birth weight, and age in healthy subjects (66). Furthermore, urinary uromodulin was positively associated with an estimated glomerular filtration rate (eGFR) <90 mL/min/1.73 m² and urinary volume but negatively associated with age and diabetes (67). Similarly, serum uromodulin was associated with the eGFR calculated from cystatin C and was more sensitive than conventional markers, such as creatinine and cystatin C (68, 69). Serum uromodulin is also associated with the kidney function in transplanted kidneys and predicts a delayed graft function (70, 71).
The associations between uromodulin and the renal function or nephron mass indicate that uromodulin is a promising biomarker in CKD; however, several concerns have been raised about its feasibility. Urinary uromodulin secretion increases with diabetes mellitus and water diuresis (72, 73), and in turn, the serum uromodulin level is low in patients with diabetes (74, 75). Therefore, uromodulin secretion varies with physiological stimuli and with pathological conditions underlying kidney injury. Uromodulin production per functioning nephron unit is thought to increase under conditions of kidney injury (35). These characteristics may shed light on uromodulin as a new biomarker of the tubular function, in contrast to creatinine, which reflects glomerular filtration.

Based on its biological aspects, it is likely that uromodulin protects the kidneys against CKD. In fact, a high serum uromodulin level at baseline was protective against a decline in the renal function and urinary albumin excretion in patients at high risk for cardiovascular disease during four years of follow-up (76). A similar link has been observed between urinary uromodulin and the onset of CKD (77). Low serum uromodulin concentrations can be used to detect early kidney injury, even when serum creatinine levels are within the normal range (76), indicating that uromodulin is useful for detecting early CKD. In light of these predictive abilities and its potency as a biomarker for hypertension, metabolic abnormalities, and tubular injury, measurement of uromodulin seems useful for stratifying patients at risk for CKD and comorbid conditions and for detecting alterations in the renal function at an early stage.

Conclusions and Perspectives

In this review, we summarized the current knowledge on uromodulin and its relevance in CKD. Our understanding of uromodulin has expanded from its role in rare inherited diseases to a common public health problem. The biology and functions of uromodulin under physiological conditions have been widely investigated. Uromodulin upregulates tubular epithelial sodium transporters and water channels, and inappropriate reabsorption of sodium and water leads to hypertension. It is reasonable to expect that suppression of uromodulin would modify hypertension or volume overload via natriuresis and water diuresis. It is also reasonable to propose that uromodulin regulates the tubulo-glomerular feedback (TGF) system. The macula densa, located between the TAL and DCT, lacks uromodulin expression. Because the macula densa senses the luminal fluid via apical NKCC2 and regulates the TGF, urinary uromodulin may affect the TGF, thereby leading to a reduction in the intraglomerular pressure. This hypothesis potentiates uromodulin as a novel reno-protective agent. The regulatory roles of uromodulin in ion transport, especially sodium transport, require further studies to establish novel therapeutic approaches.

Although evidence is still scarce at present, basolateral secretion and the immunomodulatory effects are also important characteristics of uromodulin. Renal tubular inflammation and oxidative stress are closely related to the progression of kidney disease caused by various etiologies, including diabetic kidney disease. Understanding the precise function of circulating uromodulin may open new avenues of research and advances in therapeutic strategies for CKD.
In addition, whether or not uromodulin is associated with age-related kidney disease remains unclear. The evaluation of the GFR based on serum creatinine or cystatin C levels has long been the primary index of CKD. Based on its derivation, uromodulin highlights the roles of the renal tubules and the importance of assessing the tubular function, which is another aspect of kidney health. Combining conventional markers of the glomerular function with new biomarkers of the tubular function will improve the assessment of CKD and its complications. Further research on uromodulin will surely benefit the clinical practice of CKD.

Fig. 1. Local and systemic roles of uromodulin. Uromodulin is proposed to be involved in urinary tract infection, blood pressure regulation, kidney stone formation, and immunomodulation.

Fig. 2. Physiological role of uromodulin in water retention. The TAL and CD work cooperatively to retain free water through cross-talk via uromodulin. Vasopressin or water deprivation increases the urinary secretion of uromodulin from the TAL. Urinary uromodulin at the epithelial surface of the CD cells induces the apical sorting of the water channel aquaporin-2. AQP2: aquaporin-2, AVP: arginine vasopressin, CD: collecting duct, NKCC2: Na-K-Cl cotransporter, TAL: thick ascending limb of the loop of Henle, V2R: vasopressin-2 receptor
A machine-learning-based alternative to phylogenetic bootstrap

Abstract

Motivation: Currently used methods for estimating branch support in phylogenetic analyses often rely on the classic Felsenstein's bootstrap, parametric tests, or their approximations. As these branch support scores are widely used in phylogenetic analyses, having accurate, fast, and interpretable scores is of high importance.

Results: Here, we employed a data-driven approach to estimate branch support values with a probabilistic interpretation. To this end, we simulated thousands of realistic phylogenetic trees and the corresponding multiple sequence alignments. Each of the obtained alignments was used to infer the phylogeny using state-of-the-art phylogenetic inference software, which was then compared to the true tree. Using these extensive data, we trained machine-learning algorithms to estimate branch support values for each bipartition within the maximum-likelihood trees obtained by each software. Our results demonstrate that our model provides fast and more accurate probability-based branch support values than commonly used procedures. We demonstrate the applicability of our approach on empirical datasets.

Availability and implementation: The data supporting this work are available in the Figshare repository at https://doi.org/10.6084/m9.figshare.25050554.v1, and the underlying code is accessible via GitHub at https://github.com/noaeker/bootstrap_repo.

Introduction

To estimate the reliability of individual clades in an inferred phylogenetic tree, it is common practice to employ both parametric and nonparametric approaches. Felsenstein (1985) proposed using the nonparametric bootstrap (Efron and Tibshirani 1993), in which resampling of alignment columns is used to generate a set of pseudoalignments. From each such pseudoalignment, a pseudotree (also called a bootstrap tree) is generated. The bootstrap support of each branch (a bipartition of the unrooted tree) is defined as the fraction of bootstrap trees in which this bipartition exists. While bootstrap computations have become the standard in any phylogenetic analysis, the nonparametric bootstrap necessitates repeating the tree-search process numerous times, a task that demands a substantial amount of computational time, especially for maximum-likelihood-based tree searches. Hence, state-of-the-art tree-search software incorporates approximate versions of the standard bootstrap approach, e.g. the ultrafast bootstrap in IQTREE (Minh et al. 2013, Hoang et al. 2018) and the rapid bootstrap in RAxML (Stamatakis et al. 2008). The primary advantage of the bootstrap approach lies in its nonparametric nature, i.e. it does not rely on any distributional assumptions that could potentially be incorrect. However, Efron et al. (1996) showed that Felsenstein's bootstrap provides only a first-order approximation to the actual support values, which may become poor depending on the curvature of the tree-space. To address this limitation, Efron proposed conducting additional second-level bootstrap replications, which would be utilized to correct the standard bootstrap score, accounting for the curvature of the tree-space. The number of second-level bootstrap replicates should be substantially greater than that of the first-level approximation, demanding significant computational resources. Indeed, this correction is not implemented in any common phylogenetic software.
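For reference, the column-resampling scheme just described can be sketched as follows. The infer_tree and bipartitions_of callables are hypothetical placeholders for a tree-search program and for bipartition extraction; they do not refer to any specific package.

```python
import numpy as np

def bootstrap_support(alignment, infer_tree, bipartitions_of, n_reps=100):
    """Felsenstein-style bootstrap support for the ML-tree bipartitions.

    alignment: n_taxa x n_sites array of characters.
    """
    target = bipartitions_of(infer_tree(alignment))
    counts = {bp: 0 for bp in target}
    n_sites = alignment.shape[1]
    rng = np.random.default_rng(0)
    for _ in range(n_reps):
        cols = rng.integers(0, n_sites, size=n_sites)  # resample columns
        pseudo_tree = infer_tree(alignment[:, cols])   # bootstrap tree
        for bp in bipartitions_of(pseudo_tree):
            if bp in counts:
                counts[bp] += 1
    # Support = fraction of bootstrap trees containing each bipartition.
    return {bp: c / n_reps for bp, c in counts.items()}
```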
Fast parametric and semiparametric alternatives to the bootstrap were developed and incorporated into state-of-the-art tree-search software. The aLRT test (Anisimova and Gascuel 2006) was suggested as a fast and robust approximation to the standard likelihood ratio test. The test statistic is computed as twice the difference between the log-likelihood of the best topology and the log-likelihood of its best nearest neighbor interchange (NNI) topology around the branch in question. This statistic is then compared to a mixture distribution composed of χ²₀ and χ²₁ components. The aBayes test (Anisimova et al. 2011) is a Bayesian modification of the aLRT statistic, which approximates the posterior probability of a configuration around the branch in question based on the log-likelihood scores of the three possible NNI configurations. In both the aLRT and aBayes tests, optimization is performed solely for the branch in question and its four adjacent branches, thus reducing running time. The SH-like local branch support test is another variation of the aLRT test, which relies on resampling the data following the SH test (Shimodaira and Hasegawa 1999).

Different branch support values have distinct interpretations and demonstrate varying sensitivity to model misspecifications. For instance, bootstrap supports are generally more conservative than posterior probability values (Douady et al. 2003). Anisimova et al. (2011) compared Bayesian posterior probabilities, parametric support values, and the nonparametric bootstrap, based on simulations and empirical data analysis. For a given support threshold α, they calculated the false positive rate (FPR), false negative rate (FNR), and the Matthews correlation coefficient (MCC) by labeling branches as "correct" when the support value is above the specified threshold. The authors demonstrated that for the task of binary prediction, both aLRT and aBayes support values significantly outperformed nonparametric approaches. The authors asserted that, despite the desirability of a probabilistic interpretation for branch support, none of the mentioned methods for assessing such support can provide it, even when the underlying model is correctly specified.

In this work, we propose employing machine-learning algorithms to develop a new branch-support score. The score relies on multiple features extracted from the multiple sequence alignment (MSA) and the reconstructed maximum-likelihood tree. The machine-learning model was optimized on extensive training data. Analyzing test data, we demonstrate that this score is more accurate than previously suggested scores. This is also true under model misspecification conditions. One of the limitations of previously developed branch support scores is their interpretation. Our score is calibrated to represent the probability of each bipartition to exist in the true tree. We demonstrate that the probabilities obtained by our model are more accurate than those obtained using the widely used support values provided by state-of-the-art phylogenetic software. Finally, our trained model is substantially faster than the classic Felsenstein's bootstrap.
Bipartition inference as a machine-learning classification task

We conceptualize branch support estimation as a classification task (as in Anisimova et al. 2011). We classify only bipartitions that are found in the inferred maximum-likelihood tree. The y label for each such bipartition is 1 when the bipartition is found in the true tree, i.e. the tree that generated the data (and y = 0 otherwise). The true label is known from simulated data. We aim to predict this label based on a set of features extracted for each bipartition, e.g. the branch length associated with this bipartition in the maximum-likelihood tree. Independently generated labeled data are also used to evaluate performance. The trained classifier outputs a score for each bipartition. The predicted label, ŷ, is based on a cutoff value, C. If the machine-learning score is higher than C, ŷ = 1 (ŷ = 0 otherwise). We define true-positive predictions as those with ŷ = y = 1. Such bipartitions were supported by the machine-learning classifier and are also found in the true tree. Similarly, false-positive predictions (ŷ = 1; y = 0) are those bipartitions of the maximum-likelihood tree that were supported by the machine-learning algorithm, yet do not appear in the true tree. This allows us to compute confusion matrices for our classifier. We used C = 0.5 for computing confusion matrices. One can consider the classic Felsenstein's bootstrap methodology as such a machine-learning algorithm, in which there is only one feature (the bootstrap score) and no training is performed. Notably, training a classifier based on a single feature such as Felsenstein's bootstrap should not change the ranking of results, and thus should not have any significant effect on performance measurements such as the area under the ROC curve (AUC).

Branch support methods without machine-learning

Assume that we use a specific program for tree inference and for branch support, e.g. IQTREE (Minh et al. 2013) with its ultrafast bootstrap estimate. We evaluated the performance of this branch support methodology within a classification scheme. To this end, we used a test database of true trees along with their corresponding set of inferred trees. Internal branches (bipartitions) in these inferred trees are associated with ultrafast bootstrap values. True trees were generated using simulations (see below). This allowed us to estimate confusion matrices, and from these confusion matrices, performance was evaluated using AUC, MCC, FPR, FNR, and the F1 score.

The branch support values are often generated by programs that implement a specific maximum-likelihood-based heuristic approach. All combinations of tree inference and branch support methods that were evaluated are listed in Table 1.

A novel machine-learning approach for branch support

Various features can be extracted for each branch in question, e.g. its length and the lengths of the surrounding branches. We thus examined whether using multiple features can provide accurate classifications (see below for a list of features). Following feature selection, we generated a trained machine-learning classifier and evaluated its performance. We generated training data that include a large set of true trees, inferred trees, whether each branch in the inferred tree is in the true tree, and their associated branch support scores. The following classifiers were considered: Gradient Boosting Trees, Random Forest, and Neural Networks (see below).
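To make the evaluation scheme concrete, here is a minimal sketch of the threshold-based metrics described above, assuming scikit-learn (variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, f1_score,
                             matthews_corrcoef, roc_auc_score)

def evaluate_support(y_true, support, cutoff=0.5):
    """y_true: 1 if a bipartition is in the true tree; support: score in [0, 1]."""
    y_pred = (np.asarray(support) > cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUC": roc_auc_score(y_true, support),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "FPR": fp / (fp + tn),
        "FNR": fn / (fn + tp),
        "F1": f1_score(y_true, y_pred),
    }
```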
Interpreting branch support scores as probabilities

Ideally, branch support values should reflect probabilities, thus providing a meaningful and intuitive interpretation. For example, we would like that, on average, a branch support of 70% would signify that the branch is correctly placed in the true tree in 70% of the cases. We used the term calibration accuracy (see definition below) as a measure of how well a specific branch score corresponds to probabilities. We compared calibration accuracies of the developed machine-learning classifier as well as standard branch-support values. As we show below, the developed classifier outperforms previous approaches, both in terms of classification and calibration accuracy.

Simulation of MSAs

We generated train and test data as follows. Dataset1 (DS1) included 6000 simulated MSAs with 100–10 000 sites and between 30 and 1000 taxa. Each such MSA was simulated along a different tree topology using AliSim (Ly-Trong et al. 2022), based on the script provided in the GitHub repository of RAxML-Grove (Höhler et al. 2022). The 6000 different trees were randomly sampled from the RAxML-Grove database, which contains trees derived from empirical datasets (Höhler et al. 2022). Each such MSA was simulated using the DNA model associated with that tree in RAxML-Grove. DS1 was further divided into train (DS1.a) and test data (DS1.b). Specifically, 70% of the MSAs in DS1 were randomly selected to form the training data and the remaining 30% were used as test data. Each MSA and its associated bipartitions derived from the maximum-likelihood tree were either in the train data or in the test data, never in both. We ensured that the number of sequences included in the MSAs is similar between the train and test sets by dividing the data into five equally sized bins and sampling 70% of the MSAs from each bin for the training data. Dataset2 (DS2) included 750 additional MSAs simulated along 250 independent trees sampled from the RAxML-Grove database. DS2 served as validation data, specifically to assess the impact of model misspecification. DS2 comprises three datasets: DS2.a, in which there is no model misspecification (i.e. we used the model assigned by RAxML-Grove), was simulated along the 250 independent trees similarly to DS1; DS2.b was simulated using the same trees as DS2.a but using the Jukes and Cantor (JC) model (Jukes and Cantor 1969) in all simulations; DS2.c was also simulated along the same 250 trees as DS2.a, but using the GTR+F+G+I model (Rodríguez et al. 1990). The trees, the models, and the MSAs have been deposited in the Figshare repository at https://doi.org/10.6084/m9.figshare.25050554.v1.

Tree-searches and bootstrap estimates

For each MSA in DS1 and DS2, we performed a tree-search including bootstrap estimates in FastTree (Price et al. 2010), RAxML-NG (Kozlov et al. 2019), and IQTREE (Nguyen et al.
2015). In FastTree, we used the default local support test, which is based on the SH test on three alternative topologies (Shimodaira and Hasegawa 1999). In DS1 and DS2.a, tree-searches were conducted assuming the default GTR+CAT model. In DS2.b and DS2.c, tree searches were carried out assuming the GTR+CAT and JC+CAT models, respectively. In RAxML-NG, we utilized the default search configuration and the default nonparametric bootstrap configuration, where the number of replicates is automatically determined. Similarly, in IQTREE, we employed the default search configuration. For the bootstrap analysis, we used the ultrafast bootstrap approximation with 1000 replicates, the aBayes test (Anisimova et al. 2011), and the parametric aLRT test (Anisimova and Gascuel 2006). In both RAxML-NG and IQTREE, tree searches within DS1 and DS2.a were conducted assuming the same model used for the MSA simulation. Tree searches within DS2.b and DS2.c were conducted assuming the GTR+F+G+I and JC models, respectively.

Data preparation

Each analyzed MSA was simulated along a "true" tree (corresponding to a tree obtained from the RAxML-Grove database). The MSA is also associated with a corresponding inferred maximum-likelihood tree, together with its branch support estimates. Each bipartition of an inferred maximum-likelihood tree was labeled with a value of 1 if it is present in the true tree and 0 otherwise. Subsequently, as elaborated in the next section, we extract features from each bipartition, both from the inferred tree and from the MSA. This process results in a dataset in which each row represents a single bipartition, encompassing its corresponding features and a label indicating whether it is present in the true tree. Three such datasets were generated, each one inferred by a different tree search software: RAxML-NG, FastTree, and IQTREE. A machine-learning classifier was trained and evaluated on each such dataset. This was compared to several branch support scores obtained by the corresponding software (Table 1).

Features

For each bipartition within an inferred tree, the following features were extracted: (1) number of sequences; (2) number of MSA columns; (3) number of unique MSA columns; (4) percentage of constant sites; (5) PyPythia MSA difficulty (Haag et al. 2022); (6) branch length at the partition site; (7) branch length divided by the mean branch length across the tree; (8) branch length divided by the mean branch length among the four neighboring branches; (9–14) median, 25th percentile, 75th percentile, variance, skewness, and kurtosis of the branch-length distribution in the tree; (15) total tree divergence, i.e. sum of branch lengths; (16) tree deviation from ultrametricity as defined in Tria et al. (2017); (17–18) the count and proportion of taxa on the smaller or equal side of the bipartition; (19–20) the cumulative sum of branch lengths and the corresponding fraction on the smaller or equal side of the bipartition; (21–25) the average, minimum, maximum, minimum-to-maximum ratio, and variance of the neighboring branches; (26) the parsimony bootstrap score (fraction of trees in which the bipartition exists across 100 parsimony trees generated by RAxML-NG); (27) the mean transfer distance (Lemoine et al. 2018) across these 100 parsimony trees; (28–31) same as (26–27), but considering the average and minimum values for the neighboring bipartitions; (32–33) the fraction of trees in which the bipartition exists and the mean transfer distance (Lemoine et al. 2018) across the set of suboptimal ML trees obtained by RAxML-NG; (34–37) same as (32–33), but considering the average and minimum values for the neighboring bipartitions; (38–39) minimal and maximal log-likelihood difference between the current tree and the two NNI neighbors around the bipartition following branch-length optimization. Features 32–37 rely on suboptimal trees. The RAxML-NG software is the only software that returns suboptimal trees, and hence these features were extracted only when bootstrap scores were computed using RAxML-NG. All features were computed using dedicated Python scripts.
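As a rough illustration of how a few of these features can be computed, the sketch below derives some tree-level and branch-level quantities with ete3, NumPy and SciPy. The authors state that they used their own dedicated scripts, so this is only an approximation of the idea, not their implementation.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from ete3 import Tree

def branch_features(newick):
    tree = Tree(newick)
    # Branch lengths of all non-root nodes.
    lengths = np.array([n.dist for n in tree.traverse() if not n.is_root()])
    tree_level = {
        "total_divergence": lengths.sum(),   # feature (15): sum of branch lengths
        "median_bl": np.median(lengths),     # part of features (9-14)
        "var_bl": lengths.var(),
        "skew_bl": skew(lengths),
        "kurtosis_bl": kurtosis(lengths),
    }
    per_branch = []
    for node in tree.traverse():
        if node.is_root() or node.is_leaf():
            continue  # only internal branches define bipartitions
        per_branch.append({
            "bl": node.dist,                               # feature (6)
            "bl_over_mean": node.dist / lengths.mean(),    # feature (7)
            # feature (17): number of taxa on the smaller side of the bipartition
            "smaller_side": min(len(node), len(tree) - len(node)),
        })
    return tree_level, per_branch

tree_level, per_branch = branch_features("((A:1,B:2):0.5,(C:1,D:1):0.5,E:3);")
print(tree_level["total_divergence"], per_branch[0]["bl"])
```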
Machine-learning models

The classification models were built using LightGBM, a decision-tree classifier with gradient boosting (Ke et al. 2017), as implemented in the Python package LightGBM. Prior to model fitting, we performed a recursive feature elimination procedure on the train data based on a 5-fold cross-validation using the Python function feature_selection.RFECV in the scikit-learn library (Pedregosa et al. 2011). In this approach, features are recursively eliminated by searching for the feature with the least importance (as defined below). The feature is eliminated if removing it increases the performance in cross-validation. The process ends when there is no benefit in removing the least important features. Using the same cross-validation strategy, we optimized the following hyperparameters of the LightGBM model: number of leaves in each tree (25, 50, 100, 200), tree depth (3, 6, 12, infinite), learning rate (0.1, 0.01, 0.001), number of tree estimators (100, 300), and subsample (0.6, 0.8, 1). In the cross-validation procedure, we verified that all partitions associated with the same tree were assigned to the same fold, i.e. the partitions of a single tree were either used for training or testing but not both. To evaluate the importance of each feature, we estimated the average information gain, i.e. the average decrease in entropy when using that feature across the node splits of the decision trees. Aside from LightGBM, two learning algorithms were evaluated: (i) Random forest, using the implementation of sklearn.ensemble.RandomForestClassifier in the scikit-learn library (Pedregosa et al. 2011). The following hyperparameters of the random-forest model were optimized: max depth (3, 5, 10) and minimal sample split (2, 5, 10). (ii) Neural network, using the implementation of sklearn.neural_network.MLPClassifier in the scikit-learn library (Pedregosa et al. 2011). For the neural network we used two layers, with a varying number of neurons in each layer. The number of neurons in each layer was considered a hyperparameter and was chosen using cross-validation from three possible options: ((10, 3), (30, 5), (50, 10)). Two options for the learning rate (alpha) were considered: 0.0001 or 0.05. For calibrating the probabilities obtained from the classification model, we used 5-fold cross-validation based on isotonic regression, using the implementation of calibration.CalibratedClassifierCV in the scikit-learn library (Pedregosa et al. 2011).
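A minimal sketch of the fitting procedure just described is given below, assuming a feature matrix X, labels y, and per-bipartition tree identifiers. The hyperparameter grids follow the values quoted in the text; everything else (the function name and the data handling) is an illustrative assumption rather than the authors' code.

```python
from lightgbm import LGBMClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV, GroupKFold
from sklearn.calibration import CalibratedClassifierCV

def fit_branch_support_model(X, y, tree_ids):
    # Keep all bipartitions of one tree in the same fold, as in the text.
    folds = list(GroupKFold(n_splits=5).split(X, y, groups=tree_ids))

    # Recursive feature elimination with 5-fold cross-validation.
    selector = RFECV(LGBMClassifier(), cv=folds, scoring="roc_auc")
    X_sel = selector.fit_transform(X, y)

    # Hyperparameter grid quoted in the text.
    grid = {
        "num_leaves": [25, 50, 100, 200],
        "max_depth": [3, 6, 12, -1],        # -1 = unlimited depth in LightGBM
        "learning_rate": [0.1, 0.01, 0.001],
        "n_estimators": [100, 300],
        "subsample": [0.6, 0.8, 1.0],
    }
    search = GridSearchCV(LGBMClassifier(), grid, cv=folds, scoring="roc_auc")
    search.fit(X_sel, y)

    # Calibrate predicted probabilities with isotonic regression.
    calibrated = CalibratedClassifierCV(search.best_estimator_,
                                        method="isotonic", cv=5)
    calibrated.fit(X_sel, y)
    return selector, calibrated
```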
Performance evaluation

AUC, MCC, FPR, FNR, and the F1 score were used as evaluation metrics to assess accuracy across each dataset (DS1.a, DS1.b, DS2.a, DS2.b, DS2.c) using the implementations in the scikit-learn library (Pedregosa et al. 2011). The AUC score was also assessed individually for each MSA in the test set (DS1.b), and subsequently, a Wilcoxon signed-rank test was employed to compare our model's performance with other branch support scores. This test was implemented using scipy.stats.wilcoxon from the SciPy library (Virtanen et al. 2020). In addition, we evaluated how well branch support values reflect probabilities. The probabilistic interpretation of the branch support values was depicted using calibration plots and quantified using the expected calibration error (ECE) based on 30 equally spaced bins (Guo et al. 2017).

Code availability

The code was implemented in Python version 3.8 and is available through GitHub (https://github.com/noaeker/bootstrap_repo).

Running time analysis

We conducted a performance analysis, comparing the execution times of different branch support approaches on a Linux cluster system running CentOS. The cluster comprises 69 compute nodes, each equipped with a varying number of CPUs ranging from 12 to 256, along with memory configurations ranging from 54 to 754 GB. The evaluation was carried out using a single CPU for consistency.

Empirical data analysis

We applied our machine-learning model to empirical datasets from a database of MSAs curated by Prof. Rob Lanfear, which is available at https://github.com/roblanf/BenchmarkAlignments. From this database, we selected the first 20 DNA MSAs, each containing a maximum of 1000 sequences and 10 000 columns. The corresponding publications are listed in Supplementary Table S3. In addition, we downloaded the "animal dataset" from Yahalomi et al. (2020). These data include 78 protein-coding genes from 119 animal species and 10 outgroup species. From this dataset, we selected the first 20 MSAs. For each MSA, we conducted standard tree searches, including bootstrap analysis, using both RAxML-NG and IQTREE. We then compared the machine-learning-based support for each branch within the maximum-likelihood tree to Felsenstein's bootstrap and Transfer Bootstrap support in RAxML-NG, as well as to aLRT and aBayes support in IQTREE.

Model performance on test data

We formulated the problem of estimating branch support values as a machine-learning classification task. The DS1.a data were used to train the machine-learning algorithm (including cross-validation). The feature-based machine-learning model demonstrated high performance on these training data, regardless of the software that was used for tree search (a different machine-learning model was trained for each software). The AUC scores of the various models were 0.974 for the machine-learning models that were trained on trees inferred using IQTREE and RAxML-NG, and 0.972 for trees inferred using FastTree. When the trained model was applied to the test data DS1.b, similar results were obtained: IQTREE (0.968), RAxML-NG (0.968), and FastTree (0.963). The small difference in performance between the train and test data (<0.009 in AUC scores for all programs) indicates little to no overfitting of the model. Moreover, the very small differences in AUC among the three programs suggest that the impact of the tree search algorithm on the inferred branch-score values is minimal. Similar results are obtained when considering other evaluation metrics such as MCC, FPR, FNR, and the F1 score (Supplementary Table S1).
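For concreteness, the sketch below evaluates a set of hypothetical branch-support scores with the metrics named under Performance evaluation, using their scikit-learn implementations; the scores and labels are placeholders.

```python
from sklearn.metrics import (roc_auc_score, matthews_corrcoef, f1_score,
                             confusion_matrix)

y_true = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.95, 0.80, 0.30, 0.55, 0.60, 0.10, 0.85, 0.40]
y_pred = [int(s > 0.5) for s in scores]     # cutoff C = 0.5, as in the text

auc = roc_auc_score(y_true, scores)         # threshold-free ranking quality
mcc = matthews_corrcoef(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)                        # false positive rate
fnr = fn / (fn + tp)                        # false negative rate
print(f"AUC={auc:.3f} MCC={mcc:.3f} F1={f1:.3f} FPR={fpr:.2f} FNR={fnr:.2f}")
```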
Next, we compared the performance of the machine-learning approach to that obtained by six currently used branch-support approaches, as provided by the above three tree inference software (Table 1). In all comparisons, the developed model was found to be more accurate (Fig. 1). For example, when tree searches were performed using IQTREE (Fig. 1, top panel), the machine-learning approach yielded an AUC score of 0.968 compared with the ultrafast bootstrap method, which yielded an AUC score of 0.928. IQTREE also implements two additional branch-support scores, aLRT and aBayes, both of which obtained higher AUC scores than the ultrafast bootstrap, but still lower compared to the machine-learning approach (AUC scores of 0.943 and 0.942 for the aLRT and aBayes tests, respectively). The machine-learning-based branch support also demonstrated superior performance compared to the bootstrap support computed by RAxML-NG, which obtained AUC scores of 0.946 and 0.907 using the Felsenstein's bootstrap and the Transfer Bootstrap Expectation implementations, respectively (Fig. 1, middle panel). This analysis further revealed that, among the various scores examined, the SH test employed by FastTree exhibited the lowest performance, obtaining an AUC score of 0.876 (Fig. 1, bottom panel). Finally, we evaluated the AUC score for each MSA in the test set (DS1.b) separately, comparing the performance of our model to the other branch support approaches (see Materials and methods). Our model achieved significantly higher AUC scores compared to other branch support approaches (P < 10⁻⁹⁷, Wilcoxon signed-rank test). These results demonstrate that the developed model exhibits superior capability in distinguishing between branches that exist in the true tree and those that do not.

Our machine-learning algorithm does not use scores obtained from any of the above three programs as features. We next examined whether further improvement can be obtained by incorporating any of the support values provided by these programs as features within the machine-learning model. However, such inclusion did not result in enhanced performance (i.e. with these features included, the same AUC scores were obtained).

Probabilistic interpretation of branch support values

In our machine-learning model, branch-support values reflect classification probabilities, i.e. a branch support value of 70% suggests that the probability that the bipartition is found in the true tree is 70%. We next quantified how accurate these inferred probabilities are. Specifically, using simulations, we can estimate which fraction of bipartitions that were inferred to have a branch support between, for example, 15% and 20%, are found in the true tree. In a calibrated methodology, this fraction should also be between 15% and 20%.
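The binned calibration check just described can be written down compactly; the sketch below follows the ECE definition of Guo et al. (2017) with 30 equally spaced bins, operating on arbitrary predicted probabilities and 0/1 labels.

```python
import numpy as np

def expected_calibration_error(probs, y_true, n_bins=30):
    """ECE: weighted mean absolute gap between the mean predicted support and
    the observed fraction of correct bipartitions, over equally spaced bins."""
    probs = np.asarray(probs, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to one of the n_bins bins (1.0 falls in the last).
    bin_idx = np.digitize(probs, edges[1:-1])
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if not mask.any():
            continue
        conf = probs[mask].mean()   # mean predicted support in the bin
        acc = y_true[mask].mean()   # fraction actually in the true tree
        ece += mask.mean() * abs(acc - conf)
    return ece

# Toy example: well-calibrated predictions give a small ECE.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.1, 0.1], [1, 1, 1, 0, 0]))
```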
Figure 2 displays the calibration curves for all branch support methods. We quantified the calibration accuracy using the ECE and compared the machine-learning-based methodology to all alternative methods. For IQTREE, the machine-learning method demonstrated nearly perfect calibration (ECE = 0.002), i.e. an almost perfect overlap with the y = x line. In contrast, the ultrafast bootstrap approach provided values much higher than the true probabilities (ECE = 0.043), i.e. it is overconfident across the entire range of support values. The aLRT obtained an ECE value almost identical to the ultrafast bootstrap (ECE = 0.04); however, it was found to be underconfident for support values below 0.5 and overconfident for support values above 0.5. The aBayes approach obtained an ECE of 0.033, and was thus also inferior to our machine-learning model. In addition, it substantially deviated from expectation for support values below 0.6 (Fig. 2, top panel). The RAxML-NG standard bootstrap values were slightly underconfident for support values above 0.5 (ECE = 0.017) (Fig. 2, middle panel). The RAxML-NG Transfer Bootstrap Expectation obtained an ECE score of 0.059. Finally, FastTree branch support values substantially deviated from the expected probabilities (ECE = 0.055) (Fig. 2, bottom panel). These results indicate that when conducting tree searches with all programs, the machine-learning method demonstrated high calibration, thus providing an accurate probabilistic interpretation of support values.

Effect of model misspecification on model performance

To assess the impact of model misspecification on the accuracy of branch-support estimates, we evaluated performance on additional validation data (see Materials and methods). In the first scenario (DS2.a), MSAs were generated without model misspecification, employing the same procedure as in the train and test datasets, to serve as a control dataset. In the second scenario (DS2.b), data were simulated using the JC model, while tree-searches were carried out assuming the GTR+F+G+I model. In the third scenario (dataset DS2.c), MSA data were simulated assuming the GTR+F+G+I model, while the tree-search was performed assuming the JC model. We calculated AUC scores and generated calibration plots for DS2.b and DS2.c, comparing the results to those obtained for the control dataset DS2.a for each program. For the trained machine-learning model, under both scenarios of model misspecification, the discrimination ability of the model did not decrease (the maximal decrease in AUC when compared to the control dataset is 0.001). Furthermore, our model consistently outperformed the alternative support values provided by each program (Supplementary Table S1).

Regarding calibration, the control dataset (DS2.a) exhibited almost perfect calibration (ECE < 0.007 across all three programs; see Supplementary Fig. S1a), as expected. The dataset DS2.b exhibited slightly worse calibration (ECE < 0.01) (Supplementary Fig. S1b), while DS2.c resulted in poorer calibration (ECE < 0.023), particularly in IQTREE and RAxML-NG, where the model predictions showed an upward bias for support values >0.4 (Supplementary Fig. S1c). In all cases, our model was better calibrated than the other branch support scores (Supplementary Table S1).

Running time analysis

We compared the running times of the various branch support approaches (see Materials and methods). The computation of the RAxML-NG standard bootstrap exhibited a median running time of 138 min on a single CPU. On the same data, our machine-learning model had a median running time of 6.5 min. The most time-consuming feature in our computation is the log-likelihood evaluation of NNI neighbors. Excluding this feature had almost no effect on performance (e.g.
for RAxML-NG, an AUC score of 0.966 compared to 0.968 with all features), but the median running time was reduced to 7.3 s. For the other programs, the branch support values are computed as part of the maximum-likelihood tree search, and hence we could not compare their running times to ours.

Feature analysis

Next, we analyzed which features contributed most to classification accuracy. Following a recursive feature elimination procedure (see Materials and methods), 32, 39, and 31 features were selected out of 33, 39, and 33 features for the models trained on trees inferred using IQTREE, RAxML-NG, and FastTree, respectively. The features chosen by the IQTREE model are detailed in Table 2 (the importance values for all features are given in Supplementary Table S2). For all models (a model for each software), the two most important features were the minimal and maximal log-likelihood differences between the current tree and the NNI trees, respectively. The next most important feature, consistently identified by all models, relies on the proportion of parsimony trees, obtained using RAxML-NG, which contain the branch of interest or its neighbors. We next tested the hypothesis that accurate predictions could be obtained by relying on a single top-scoring feature. To this end, we applied the classification algorithm with each feature separately. The most informative feature, when used alone, is the minimal log-likelihood difference between the final tree and the NNI neighbors. This feature achieves AUC scores of 0.943 in both IQTREE and RAxML-NG and 0.935 in FastTree. Although these AUC scores are high, they are lower than the AUC obtained when all features are combined (0.968, 0.968, and 0.963 for the same test data in IQTREE, RAxML-NG, and FastTree, respectively). These results clearly demonstrate the need to rely on a combination of features to obtain accurate predictions.

Factors affecting model performance

We investigated various factors which might affect the performance of our machine-learning model. The model accuracy was not affected by the number of sequences (Fig. 3A), suggesting that our model is applicable across a broad spectrum of MSAs. As expected, the accuracy of our machine-learning model slightly increased as a function of the number of MSA positions (Fig. 3B). However, it demonstrated minimal variation with respect to the MSA difficulty score (Fig. 3C). We also tested the dependence between accuracy and the number of taxa on the smaller side of the bipartition. Here, a value of 2, for example, indicates a branch leading to a bifurcation to two species, and higher values correspond to deeper bipartitions in the tree. The accuracy was almost the same for deep versus shallow bipartitions (Fig. 3D). Finally, we evaluated whether improved performance may be obtained by increasing the size of the training data. To this end, we examined the logarithmic loss as a function of the number of MSAs used for training. Our findings indicated that accuracy reaches a plateau when the training dataset comprises 400 or more MSAs. In other words, with the current set of features, additional training data from the same source is not anticipated to yield a significant improvement in performance (Fig. 3E). In addition to the gradient boosting ensemble method (GBM), we tested the performance of a random forest model and a neural network model. Both alternative models exhibited a slight decrease in performance, with a minimum decrease of 0.002 in AUC across all software and models.
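As an aside, the information-gain importance used above can be read directly off a fitted LightGBM model; the sketch below illustrates this on synthetic data, with placeholder feature names.

```python
import numpy as np
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Make feature 0 informative and the others noise, for illustration only.
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LGBMClassifier(n_estimators=100).fit(X, y)
# Total gain contributed by each feature across all node splits.
gains = model.booster_.feature_importance(importance_type="gain")
for name, gain in zip(["min_nni_ll_diff", "branch_length", "noise"], gains):
    print(f"{name}: {gain:.1f}")
```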
Applying the model on empirical MSAs

Substantial differences among branch support values were observed when analyzing 20 protein and 20 DNA empirical datasets with the various branch-support inference methodologies (see Materials and methods and Supplementary Fig. S2). The branch support is, on average, higher for our machine-learning approach compared to Felsenstein's bootstrap and similar to that of the transfer bootstrap expectation method: the average branch support values obtained by our machine-learning model, Felsenstein's bootstrap, and transfer bootstrap expectation were 0.85 (0.65), 0.74 (0.39), and 0.88 (0.64) for the DNA (protein) MSAs, respectively (Supplementary Fig. S2). However, the machine-learning-based bootstrap score correlated more strongly with Felsenstein's bootstrap than with the transfer bootstrap expectation: Pearson's correlation coefficients (r) to Felsenstein's bootstrap and transfer bootstrap expectation were 0.73 (0.85) and 0.6 (0.61) for the DNA (protein) MSAs, respectively. In comparison to parametric tests in maximum-likelihood trees obtained by IQTREE, our machine-learning approach yielded lower average support than both aLRT and aBayes: the average support values of our machine-learning model, aLRT, and aBayes were 0.87 (0.76), 0.89 (0.84), and 0.89 (0.86) for the DNA (protein) empirical MSAs, respectively. Both parametric tests exhibited a similar correlation with our machine-learning-based score: Pearson's correlation coefficients to aLRT and aBayes were 0.87 (0.79) and 0.88 (0.83) for DNA (protein) MSAs, respectively.

We next focused on the gene rpl16b from Yahalomi et al. (2020), which includes 701 amino-acid positions. We reconstructed the maximum-likelihood tree using IQTREE with the WAG+G model and computed three branch support values: our machine-learning approach, and the two most accurate other methods based on simulation: aBayes and the aLRT (the last two tests are implemented in IQTREE). The correlation between the machine-learning scores and these two scores is shown in Fig. 4 (Pearson R² of 0.84 and 0.74 between the machine-learning score and aBayes and aLRT, respectively). We searched for the nodes with the highest discrepancy between our approach and each of the two other approaches. The largest difference (for both methods) was in the lineage within stony corals leading to the following species: Agaricia, Galaxea, Porites, Montastraea, and Favia. For this branch, the scores for the machine-learning approach, aBayes, and aLRT were 0.225, 0.763, and 0.816 (see dots labeled as N1 in Fig. 4). Thus, this subclade is not supported by our methodology, while it is supported by the two others. Although we cannot determine for certain if this clade is indeed incorrect, we note that it disagrees with the tree reconstructed from the entire set of 78 protein-coding genes given in Yahalomi et al. (2020).

Another large discrepancy concerns sponge monophyly. Sponges were paraphyletic in the maximum-likelihood tree of this protein, because hexactinellid sponges were grouped together with ctenophores, placozoans, and cnidarians (rather than with the other sponges). The support for this grouping was above 0.5 for aBayes and aLRT (0.587 and 0.729, respectively; see dots labeled as N2 in Fig. 4). In contrast, the support for the machine-learning methodology was 0.32. Of note, most current research, and the tree based on the entire set of genes, support sponge monophyly (Pick et al. 2010, Yahalomi et al.
2020). Our method often provides lower support compared to aBayes and aLRT (average support values over all nodes of 0.87, 0.9, and 0.92 for the machine-learning approach, aBayes, and aLRT, respectively). However, in a few cases, our approach provided higher support compared to the two other ones, e.g. for the grouping of two box-jelly genera, Carybdea and Tripedalia, the three support values were 0.39, 0.33, and 0.13 for the machine-learning approach, aBayes, and aLRT, respectively (see dots labeled as N3 in Fig. 4). Of note, this node is supported when information from all 78 proteins is considered (Yahalomi et al. 2020).

Discussion

Recently, machine-learning algorithms were successfully applied in phylogenetic research, contributing to both runtime efficiency and enhanced inference accuracy. Noteworthy applications include their utilization in model selection tasks (Abadi et al. 2020, Burgstaller-Muehlbacher et al. 2023), inferring phylogenetic trees (Suvorov et al. 2020), ranking candidate trees during a tree-search (Azouri et al. 2021), identification of key genomic loci for elucidating a phylogenetic hypothesis (Kumar and Sharma 2021), sampling of MSA positions to reduce tree-search running time (Ecker et al. 2022), and estimating the difficulty of the MSA (Haag et al. 2022). In this study, we have demonstrated the effectiveness of machine-learning algorithms for branch support estimation, a task traditionally relying on standard statistical tests. We developed a machine-learning classification model to estimate branch support for phylogenies reconstructed using a variety of maximum-likelihood search algorithms. The model was trained using thousands of MSAs which were simulated based on realistic phylogenetic trees, assuming various DNA models. We demonstrated that our methodology provides precise and fast branch support estimates for maximum-likelihood trees obtained using state-of-the-art tree-search software. Furthermore, the developed machine-learning approach outperformed common branch support methodologies in terms of its probabilistic interpretation. We have also shown that our classifier remains accurate under model misspecification scenarios. Finally, the empirical analysis suggests that substantial differences may be obtained by employing different branch-support methodologies, and, together with the simulation results, suggests that this machine-learning methodology provides reliable estimates of branch support and should be incorporated in standard phylogenetic software.

The features incorporated into this model encompass log-likelihood evaluation, including branch-length optimization, for the three NNI neighbors of each bipartition. It is worth noting that these computations or their approximations are typically executed during a tree-search, incurring no additional computational cost. Nevertheless, even when these features are removed, the model still produced favorable results (the maximum difference in AUC compared to the original model across the three programs is 0.005; see Supplementary Table S1).

In the development of our machine-learning models, we employed hand-crafted features specifically designed for estimating branch support. While these features exhibit strong predictive power, further improvement can potentially be achieved by adopting a more comprehensive numerical representation of the maximum-likelihood tree and MSA. The MSA can be represented numerically using an unsupervised learning model, such as the one employed by Facebook's protein language model (Rao et al.
2021), while the nodes within the maximum-likelihood tree can be embedded in high-dimensional space using graph-based embedding techniques (Cai et al. 2018, Matsumoto et al. 2021). Such an approach holds the potential to capture complex and relevant characteristics more effectively.

In all analyses performed here, it was assumed that the MSA is correct. However, the MSA is inferred, and alignment errors were shown to impact many downstream analyses, including tree topology search and bootstrap estimates (e.g. Wong et al. 2008). Ideally, uncertainty in the MSA should be accounted for within the estimate of branch support. This can be achieved within Bayesian approaches, which jointly infer the posterior distributions of alignments and trees (Redelings 2021). However, how to integrate alignment uncertainty within a frequentist inference framework is more challenging. It is possible to repeat the tree search and the branch-support inference for a set of alternative alignments and assign each branch the average support over these alternative alignments (Chatzou et al. 2018, Chang et al. 2021). A set of alternative alignments can be generated by running different alignment programs, by considering co-optimal alignment solutions (Landan and Graur 2007), or by integrating

Figure 1. ROC curves on various test data. Each panel displays the ROC curve obtained with the branch score predictions generated using the trained machine-learning procedure compared to existing scores obtained with the respective tree search software. The top, middle, and bottom panels represent the scores obtained with trees reconstructed using IQTREE, RAxML-NG, and FastTree, respectively, on the test data. The dotted diagonal line is the y = x line. The remaining curves represent the performance of our machine-learning model along with support values provided by the other programs.

Figure 2. Calibration plot on the test data. Each panel displays the calibration curve obtained with the branch score predictions generated using the trained machine-learning procedure compared to existing scores obtained with the respective tree search software. The top, middle, and bottom panels represent the scores obtained with trees reconstructed using IQTREE, RAxML-NG, and FastTree, respectively, on the test data. The dotted diagonal line is the x = y line. The remaining curves showcase the performance of our machine-learning model compared to other programs.

Figure 3. Influence of various factors on prediction accuracy in the FastTree, IQTREE, and RAxML-NG models: (A) AUC as a function of the number of sequences; (B) AUC as a function of the number of MSA positions; (C) AUC as a function of the MSA difficulty score; (D) AUC as a function of the number of sequences in the smaller part of the bipartition; (E) logarithmic loss as a function of the number of MSAs used for training. In panels A–D, the x-axis denotes the median value derived from dividing the numerical column into 30 quantile-based bins.
Figure 4. Comparison of machine-learning-based support values to aLRT and aBayes support values for the rpl16b gene using IQTREE: the x-axis represents the machine-learning score and the y-axis represents the scores of the other methods. Dots labeled as "N1" correspond to the lineage within stony corals leading to the following species: Agaricia, Galaxea, Porites, Montastraea, and Favia. Dots labeled as "N2" indicate support for sponge paraphyly. Dots labeled as "N3" represent the grouping of two box-jelly genera, Carybdea and Tripedalia.

Table 1. Several branch support methods implemented in current tree search software.

Table 2. Analysis of feature importance: Gini importance for the IQTREE model and corresponding AUC values using each individual feature.
2024-06-29T06:17:20.394Z
2024-06-28T00:00:00.000
{ "year": 2024, "sha1": "3f1f8c75d067016a0ee834e0570223e622fffe70", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "80d2077eef5e0840fc739ec9320d7b1ee4089284", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
86743279
pes2o/s2orc
v3-fos-license
Development of a Genetic Algorithm Optimization Model for Biogas Power Electrical Generation

Biogas power generation is renewable energy made from biological materials, and biogas power production is a technology that supports the development of sustainable energy supply systems. This paper develops a Genetic Algorithm optimization model for biogas electrical power generation for Ilora in Oyo, Oyo State. The production is done using a co-digestion system of pig dung and poultry dung under the process of anaerobic digestion. The pig dung and poultry dung were mixed in a 50:50 ratio. MATLAB and Visual Basic software were used to carry out simulations to develop the optimized Genetic Algorithm model for biogas power production, with the aim of improving the accessibility and durability of electricity in the community. The results of the research reveal the empirical biogas power production without and with Genetic Algorithm optimization. The results showed that the biogas electrical power generated without and with Genetic Algorithm optimization was 5 kW and 11.18 kW, respectively. The biogas power generation was increased by 6.18 kW, which is a 38.2% increase after Genetic Algorithm optimization. The results show the application of the Genetic Algorithm optimization model, which can be used to improve biogas power generation when the amount of methane gas produced from the animal dung varies with the speed of the thermal rotating shaft.

I. INTRODUCTION

Bioenergy is renewable energy obtainable from materials derived from biological sources such as animal manure. This renewable energy supply is biological material from living organisms, including plants and animals. The most promising among the renewable energy sources is biomass, but more research is needed to prove that power generation from biomass is both economically and technically viable. Biomass may be burned to provide steam for creating electricity, or to provide heat to industries and homes. In addition, biomass may be converted to alternative usable forms such as methane gas, ethanol fuel and biodiesel fuel for use in biomass power plants [16].

The fact that the fossil fuel resources required for energy generation are becoming scarce, and that climate change is related to carbon emissions to the atmosphere, has tremendously increased interest in energy saving and environmental protection [20]. Minimizing energy consumption that depends on fossil resources requires energy-savings programs focused on energy demand reduction and on energy efficiency in the industrial and domestic spheres [7], [10]. Renewable energy technologies are less competitive than traditional electric energy conversion systems, mainly because of their relatively high maintenance cost and intermittency. The benefits of renewable energy sources are the reduction in carbon emissions to the atmosphere and the reduced dependence on fossil fuel resources. Furthermore, renewable energy sources prevent the safety problems derived from atomic power [17], which is why renewable energy power plants have become more desirable and socially acceptable to adopt [18]. The most important decisions for businesses and governments are whether to establish renewable energy systems in a given place and which renewable energy source, or combination of sources, is best. Lund et al.
[8] analyzed strategies for the sustainable development of renewable energy, observing three major technological changes: energy savings on the demand side, efficiency improvements in energy production, and the replacement of fossil fuels with various sources of renewable energy. The improvement of renewable energy technologies will assist sustainable development and provide several solutions to energy-related environmental problems. In this sense, the improvement of algorithms represents a suitable and effective tool for resolving advanced issues within the field of renewable energy systems.

Optimization can be defined as the discipline of finding the inputs of a function that minimize or maximize its value, possibly subject to constraints [12]. Combinatorial optimization is a branch of optimization which deals with function optimization over discrete variables [5]. Computational optimization is the process of designing, implementing, and testing algorithms for solving a large variety of optimization problems. Computational optimization includes the disciplines of mathematical model formulation, operations research for modeling the system, computer science for algorithm design and analysis, and software engineering for model implementation.

II. RELATED WORKS

Energy resources are very important and crucial for the development of a nation, which is why technological change in energy systems is an inevitable factor that researchers need to deal with [9]. In many papers, optimization methods were proposed for solving problems found in renewable energy systems.

Reference [13] presented a binary PSO-based method for the optimal location of biomass-fueled systems for distributed power generation, with forest residues as the biomass source; the results outperformed those obtained by a GA when maximizing a profitability index under technical constraints. Reference [15] proposed an optimization method for multi-biomass energy conversion applications dealing with various technical, regulatory, social and logistical constraints. PSO has also been applied for the optimal location and supply area of biomass-based power plants, where the maximum electric power generated by the plant is considered as a constraint [14]. Reference [19] applied a nature-inspired algorithm for the optimal location of a biomass power plant with the aim of providing the best profitability for investors. Reference [3] developed a simplex optimization model to optimize biogas energy; the results reveal the economic importance of an increase in power output. Reference [11] provided an interesting review of the first and second generations of biofuels from the standpoint of sustainable resources. There are some promising alternatives among the second generation of biofuels, such as the thermochemical conversion of biomass to biofuels. However, modeling and optimization of process integration methods is required to demonstrate an effective way of exploiting these interactions, given the complexity of the conversion [4]. Reference [2] developed a method to assess the optimal management and energy use of distributed biomass resources, considering features such as biomass resource properties, plant size effect, heat and solid biofuel generation, CO2 emissions balance, available technologies for power, and quantification of
potential biofuel consumers. Some authors have reviewed different types of models, such as emission reduction models, renewable energy models, energy supply-demand models, forecasting models, energy planning models, and control models using optimization methods [6]. For this reason, this paper develops a Genetic Algorithm optimization model for improving biogas electrical power generation.

III. GENETIC ALGORITHM

The genetic algorithm is a technique whose general principles are derived from the genetic mechanisms of living populations and the evolution of natural systems. The principle involves maintaining a population of solutions to a problem (genotypes) that evolves over time [1]. The genetic algorithm has three basic operators: (a) the recombination operator, also called crossover, which selects two individuals and a crossover site, and swaps the string bits to the right of the crossover site between the two individuals [7]. Recombination synthesizes bits gained from both parents exhibiting better-than-average performance, and hence increases the probability of more productive offspring. (b) The reproduction operator, which produces one or more duplicates of any individual that possesses a high fitness value. (c) The mutation operator, which acts as a background operator and can be used to explore uninvestigated points in the search space by randomly flipping a 'bit' in a population string.

A. Proposed Algorithm

The algorithm used in this paper for analyzing the biogas electrical power system is presented in the Visual Basic model for proper analysis of the animal dung used, and the optimization method was programmed in MATLAB/Simulink. The power output is formulated in order to consider the power flow in the thermal engine when the mass of dung varies with the operational load. The optimization is ascertained from the results obtained from the experimental research done in Ilora, Oyo, Oyo State. The algorithm is designed in such a way that the system will trigger the thermal engine to work with high power at low methane gas levels.

B. Materials and Method

This research paper develops an optimization model for Ilora in Oyo State with the aim of improving the accessibility and durability of electricity in the community. The waste materials used in the production of biogas include pig dung, which was obtained from the slaughterhouse, Ilora, Oyo State, and poultry manure, collected from BODFEM farms, Ilora, Oyo State. The system was designed by calculating the daily power generated under thermophilic conditions. The pig dung was mixed with water in the ratio 1:1, and the poultry dung was also mixed with water in the ratio 1:1. The pig slurry and poultry slurry were mixed 50:50. The mixture was fed into the digester through an inlet pipe in the inlet tank, and the slurry flows to the digester vessel for digestion. The methane gas produced through fermentation in the digester is collected in the gas holder. The digested slurry flows to the outlet tank through the main pipe. The slurry then flows through the overflow opening in the outlet tank to the compost pit. The gas is supplied from the gas holder to the gas compressor, which generates the output power.
Fig. 2 shows the Visual Basic model used to analyze the gas flow rate when the system is on full load, and the power generated. The optimization of the generated power is done using a genetic algorithm.

The mass of dry solid in the waste is given by:

m0 = Na · Cw (1)

where Na is the number of animals that produced the dung and Cw is the dry solid in waste per animal per day (kg).

The volume of biogas is given by:

Vb = R · m0 (2)

where R is the biogas yield per unit dry mass of whole input (0.2–0.4 m³ kg⁻¹) and m0 is the mass of manure input.

The volume of fluid in the digester is given by:

Vf = m0 / ρm (3)

where ρm is the density of dry matter in the fluid.

The volume of the digester is given by:

Vd = V̇f · tr (4)

where V̇f is the flow rate of the digester fluid and tr is the retention time in the digester.

The energy generated is given by:

E = η · Hb · Vb (5)

where Hb is the heat of combustion per unit volume of biogas and η is the combustion efficiency of the burners.

D. Optimization Modeling

The empirical (measured) data collected from Ilora, Oyo State were used to develop a mathematical model for optimizing the power generated. The result obtained from the linear programming optimization is embedded in the Genetic Algorithm toolbox of MATLAB in order to obtain the best biogas electrical power generation. The objective is to maximize P(X1, X2, X3), where P is the generated electric biogas power, X1 is the mass of the dung, X2 is the volume of the biogas, and X3 is the generated electric energy.

IV. DISCUSSION OF RESULTS

The results of the Genetic Algorithm optimization are represented in Figs. 3 to 9. Fig. 3 shows the designed Simulink model for optimizing power production in biogas-based electrical power generation with and without using the genetic algorithm. After processing 150 kg of dung, the volume of biogas generated is 36 m³ and the generated electric power is 5 kW/day. 135 kg of manure produced 32.4 m³ of biogas and 4.5 kW/day of electrical power. Dung masses of 120 kg, 105 kg, 90 kg, 75 kg, 60 kg, 45 kg, 30 kg, and 15 kg produced 28.8 m³, 25.2 m³, 21.6 m³, 18 m³, 14.4 m³, 10.8 m³, 7.2 m³ and 3.6 m³ of biogas, and thus generated 4 kW/day, 3.5 kW/day, 3 kW/day, 2.5 kW/day, 2 kW/day, 1.5 kW/day, 1 kW/day, and 0.5 kW/day of electrical power, respectively. With the optimized Genetic Algorithm, the electrical power generated is 1.118 kW/day, 2.236 kW/day, 3.354 kW/day, 4.472 kW/day, 5.59 kW/day, 6.708 kW/day, 7.826 kW/day, 8.944 kW/day, 10.05 kW/day and 11.18 kW/day for dung masses of 15 kg, 30 kg, 45 kg, 60 kg, 75 kg, 90 kg, 105 kg, 120 kg, 135 kg and 150 kg, respectively. Therefore, there is a positive linear relationship between the mass of dung used, the volume of biogas produced and the electrical power generated.

Fig. 4 shows the optimized genetic algorithm result. The optimized results obtained were input into the genetic algorithm and run in the MATLAB environment to ascertain the authenticity of the result, and it gave the same power output of 0.5 kW. Fig. 5 shows the result of the optimized mathematical model. The result shows that the mass of the dung X1 is 0.0295 kg, the volume of the biogas X2 is 0.0160 m³ and the generated biogas electric power output is 0.5 kW. Hence, the linear optimization result is embedded in the Genetic Algorithm toolbox of MATLAB.

Fig. 6 shows the empirical power output without using the optimized genetic algorithm. The result shows that there is a positive relationship between the mass of animal dung and the power generated. Fig. 7 shows the biogas power output using the optimized genetic algorithm. The result indicates the increase in the power output of the biogas electrical plant.
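To make the sizing relations above concrete, the sketch below chains Equations (1)–(5) in Python. All parameter values in the example call are illustrative assumptions, not the values used in the paper.

```python
def biogas_chain(n_animals, dry_solid_per_animal, biogas_yield,
                 dry_matter_density, retention_time_days,
                 heat_of_combustion_mj_per_m3, burner_efficiency):
    m0 = n_animals * dry_solid_per_animal   # Eq. (1): kg dry solid per day
    vb = biogas_yield * m0                  # Eq. (2): m3 biogas per day
    vf = m0 / dry_matter_density            # Eq. (3): m3 digester fluid per day
    vd = vf * retention_time_days           # Eq. (4): required digester volume, m3
    energy = burner_efficiency * heat_of_combustion_mj_per_m3 * vb  # Eq. (5): MJ/day
    power_kw = energy * 1e6 / 86400 / 1000  # average power over a day, kW
    return m0, vb, vd, power_kw

# Hypothetical inputs: 50 pigs at 3 kg dry solid/day, yield 0.3 m3/kg,
# dry-matter density 50 kg/m3, 25-day retention, 20 MJ/m3 biogas, 60% efficiency.
print(biogas_chain(50, 3.0, 0.3, 50.0, 25, 20.0, 0.6))
```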
Fig. 8 shows a comparison of the power output without and with the optimized genetic algorithm. The result reveals that there is an increase in biogas power output when the optimized Genetic Algorithm is used. The experimental power output was 5 kW, while the result of the optimized genetic algorithm is 11.18 kW, a percentage improvement of 38.2%. Fig. 9 shows the optimized Genetic Algorithm model for biogas electrical power generation. The genetic optimization model is given by Equation (6), where the variables are the amount of methane gas produced from the biogas plant, z, the thermal rotating shaft of the biogas electrical plant, and y, the total electrical power generated. The results show the application of the Genetic Algorithm optimization model, which can be used to improve biogas power generation when the amount of methane gas produced from the animal dung varies with the speed of the thermal rotating shaft.

V. CONCLUSION

A Genetic Algorithm optimization model for biogas electric power generation in Ilora, Oyo State has been formulated. The amount of methane gas in biogas production affects the thermal rotating shaft of the biogas electrical plant; therefore, the more methane gas in the biogas thermal engine, the greater the power produced. A mixture of pig dung and poultry dung in equal proportions was used to prepare the digester of the biogas electrical power generation system. MATLAB and Visual Basic software were used to carry out simulations to develop the optimized Genetic Algorithm model for biogas power production, with the aim of improving the accessibility and durability of electricity in the community. The results of the research reveal the empirical biogas power production without and with Genetic Algorithm optimization. The result showed that the biogas electrical power generated without and with Genetic Algorithm optimization was 5 kW/day and 11.18 kW/day, respectively. The biogas power generation was increased by 6.18 kW/day, which is a 38.2% increase after Genetic Algorithm optimization. The results show the application of the Genetic Algorithm optimization model, which can be used to improve power generation when the amount of methane gas varies with the speed of the thermal rotating shaft.

Published on February 16, 2019. T. O. Araoye and C. A. Mgbachi are with the Department of Electrical and Electronics Engineering, Enugu State University of Science and Technology, Enugu, Nigeria (e-mail: timmy4seun@yahoo.com). O. A. Omosebi is with the Department of Works and Services, Federal College of Education (Special), Oyo, Nigeria. O. D. Ajayi and A. Q. Olaniyan are with the Department of Electrical and Electronics Engineering, University of Ibadan, Ibadan, Nigeria.

Fig. 2. Visual Basic GA optimization model for biogas electrical power.

Fig. 3. Simulink model for Genetic Algorithm optimization of biogas electrical power generation.
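Since the explicit form of the optimization model (Equation (6)) is not reproduced in the text, the sketch below only illustrates the genetic-algorithm machinery described in Section III (reproduction, crossover and mutation) on a stand-in fitness function; the bit-string decoding and the linear power response are assumptions for demonstration, anchored to the paper's reported 11.18 kW maximum.

```python
import random

def fitness(bits):
    # Stand-in objective: decode the bit string to a dung mass in [0, 150] kg
    # and score an assumed linear power response (kW).
    mass = int("".join(map(str, bits)), 2) / (2 ** len(bits) - 1) * 150.0
    return mass / 150.0 * 11.18

def crossover(a, b):
    point = random.randrange(1, len(a))   # crossover site
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, rate=0.01):
    # Background operator: flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in bits]

def run_ga(n_bits=10, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                 # reproduction of the fittest
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)  # select among the best
            c1, c2 = crossover(a, b)
            next_pop += [mutate(c1), mutate(c2)]
        pop = next_pop[:pop_size]
    best = max(pop, key=fitness)
    return best, fitness(best)

print(run_ga())
```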
Araoye Timothy Oluwaseun is a postgraduate student of power system engineering and renewable energy in the Department of Electrical and Electronics Engineering of Enugu State University of Science and Technology, Enugu, Nigeria.

C. A. Mgbachi is a Senior Lecturer of power and computer system engineering in the Department of Electrical and Electronics Engineering, Enugu State University of Science and Technology, Enugu, Nigeria. He has published over forty international journal papers accruing from numerous research activities. He has attracted and successfully completed many research grants.

Omosebi Olushola Adebiyi is a Senior Engineer of the Federal College of Education (Special), Oyo, in the Department of Works and Services.

Ajayi Oluwaseun Damilola is a postgraduate student of renewable energy and power systems in the Department of Electrical and Electronics Engineering of the University of Ibadan, Nigeria.

Olaniyan Adeleye Qasim is a postgraduate student in the Department of Electrical and Electronics Engineering of the University of Ibadan, Nigeria.
2019-03-28T13:14:33.096Z
2019-02-16T00:00:00.000
{ "year": 2019, "sha1": "6e2bbf968281f754d89082a3041aec8e8a682307", "oa_license": "CCBY", "oa_url": "https://www.ejers.org/index.php/ejers/article/download/1111/449", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6e2bbf968281f754d89082a3041aec8e8a682307", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
12812570
pes2o/s2orc
v3-fos-license
Human papillomavirus and oropharyngeal cancer in Greenland in 1994–2010

Background: Oropharyngeal squamous cell carcinoma (OPSCC) is associated with the sexually transmitted human papillomavirus (HPV), smoking and alcohol. In Greenland, a high rate of HPV-induced cervical cancer and venereal diseases is found, which exposes the population to a high risk of HPV infection. In Greenland, only girls are included in the mandatory HPV vaccination program.
Objective: To investigate the annual incidence of OPSCC and the proportion of HPV-associated OPSCC (HPV+ OPSCC) in Greenland in 1994–2010.
Design: At Rigshospitalet, University of Copenhagen, we identified all Greenlandic patients diagnosed and treated for OPSCC from 1994 to 2010. Sections were cut from the patients' paraffin-embedded tissue blocks and investigated for p16 expression by immunohistochemistry. HPV analyses were performed with 2 sets of general HPV primers and 1 set of HPV16-specific primers. HPV+ OPSCC was defined as both >75% p16+ cells and PCR positivity for HPV.
Results: Of 26 Greenlandic patients diagnosed with OPSCC, 17 were males and 9 were females. The proportion of HPV+ OPSCC in the total study period was 22%, without significant changes in the population in Greenland. We found an increase in the proportion of HPV+ OPSCC from 14% in 1994–2001 to 25% in 2002–2010 (p=0.51): among males from 20 to 27% (p=0.63) and in females from 0 to 20% (p=0.71). The annual OPSCC incidence increased from 2.3/100,000 (CI=1.2–4.2) in 1994–2001 to 3.8/100,000 (CI=2.4–6.2) in 2002–2010; among males from 2.4/100,000 (CI=1.0–5.7) to 5.0/100,000 (CI=2.9–8.9).
Conclusion: Even though the population is at high risk of HPV infection, the proportion of 22% HPV+ OPSCC in the total study period is low compared to Europe and the United States. This might be explained by our small study size and/or by ethnic, geographical, sexual and cultural differences. Continuing observation of the OPSCC incidence and the proportion of HPV+ OPSCC in Greenland is needed.

Head and neck cancer is the fourth most common malignant cancer worldwide and is known to be associated with high consumption of alcohol and tobacco, and with human papillomavirus (HPV) infection (1,2). The incidence of oropharyngeal squamous cell carcinoma (OPSCC), located in the tonsils, the tongue base and the soft palate, has increased, and it is now the most frequent head and neck cancer in the United States (3). In Denmark, the incidence of OPSCC has increased from a rate of 1.5/100,000 in 1970 to a rate of 5/100,000 in 2010 (4). The increase tended to be in younger males and is hypothesized to be caused by sexually transmitted HPV (5). In Greenland, the annual incidence rate of OPSCC from 1994 to 2003 was 2.1/100,000, accounting for 11% of all head and neck cancers in Greenland (6). The development of OPSCC in Greenland is thought to be explained by high tobacco consumption (79% smokers in 1993–1994), alcohol consumption and possibly by HPV infection (6,7). However, to date, no investigation of the proportion of HPV-associated OPSCC (HPV+ OPSCC) in Greenland has been conducted (6,8). In Greenland, patients have often been diagnosed late, in advanced stages (69% in stage III–IV), with a low 5-year survival rate from OPSCC in 1994–2003 of 30%, compared to 51% for tongue base cancer and 60% for tonsillar cancer in the United States in 2000–2002 (6,9). A lower survival rate in Greenland may be remedied by HPV immunization and improved treatment of OPSCC.
Determining the proportion of vaccine-preventable HPV+ OPSCC in Greenland is needed.

HPV is known to cause cervical squamous cell carcinoma (CSCC). A recent study finds a significant similarity in the miRNA profiles of HPV+ CSCC and HPV+ OPSCC (10). Other recent studies suggest an association of HPV with squamous cell carcinomas such as anal, penile and even breast cancer, which have a similar non-keratinizing epithelium to that of the cervix (11,12). In Greenland, the annual incidence rate of CSCC from 1988 to 1996 was three to four times higher than the rate in Denmark, the age of sexual debut was lower, and a higher incidence of venereal diseases such as gonorrhoea was found (13,14). These facts, and a register study reporting that partners of women with CSCC have an increased risk of acquiring HPV+ OPSCC, indicate a population with a high risk of sexually transmitted HPV that could possibly lead to HPV+ OPSCC (15).

The carcinogenic effect of HPV is caused by the expressed oncoproteins E6 and E7, which inactivate the tumour suppressor gene products p53 and retinoblastoma protein (pRB), leading to uncontrolled growth and a reciprocal up-regulation of the tumour suppressor protein p16 (16,17). p16 is therefore considered a surrogate marker for HPV infection. The specific HPV type can be determined using a DNA-based PCR analysis on tumour DNA (18). More than 100 types of HPV are known, but only the 15 types HPV16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73 and 82 are high risk and carcinogenic (19). In Greenland, 96.3% of HPV+ CSCC was found to be HPV16-induced, and we therefore expect most of the HPV+ OPSCC to be induced by HPV16 (20).

In Greenland, the HPV vaccine is currently administered, free of charge, only to girls aged 12–27 years as a part of the Danish and Greenlandic free mandatory national vaccination programme (21). Introduction of the HPV vaccine to males in Greenland has been discussed, but the government has chosen not to do so yet.

The aims of the present study are to investigate the proportion of HPV+ OPSCC and the OPSCC incidence in the Greenlandic population, and to find potential changes in the time period 1994–2001 compared to 2002–2010. The perspectives of HPV vaccination of boys will also be presented.

Patients

This study is retrospective, including patients born and living in Greenland who were biopsied and diagnosed with OPSCC (ICD codes C05.1, C09, C10 and C14.2) in the period 1994–2010. The patients were biopsied in Greenland or at Rigshospitalet, and all were transferred for further treatment at Rigshospitalet, which serves as a tertiary referral hospital for Greenland. The patients and their medical records were identified through the patient registry at Rigshospitalet, and the diagnoses were verified by the Danish Pathology Registry. Information regarding tobacco and alcohol consumption and the stage of cancer was obtained from medical records and from the Danish Head and Neck Cancer Group database (DAHANCA) (22). The patients' birthplaces in Greenland were validated by a workgroup at Statens Serum Institut (SSI), by finding each patient in the national birthplace database in Greenland using the patient's unique civil registration system number.

p16 immunohistochemistry procedure

Formalin-fixed paraffin-embedded (FFPE) tumour tissue blocks (archival specimens) of OPSCC from the Greenlandic patients were collected at the Department of Pathology, Rigshospitalet, where all histological specimens from Greenland are examined.
Tumour sections of 4 µm were cut and stained with HE, and the OPSCC diagnosis was verified by a specialized head and neck pathologist. For p16 immunohistochemistry, the representative slides were incubated with p16 antibody (JC8, Santa Cruz) and stained on a BenchMark ULTRA (Roche, Copenhagen, Denmark) using the UltraView detection system (Fig. 1). The p16 staining results were scored by the same specialized pathologist as: 0 = no positive staining, 1+ = 1–25%, 2+ = 26–50%, 3+ = 51–75% and 4+ = 76–100%. Only the slides with the score 4+ (>75% staining) and staining of both the nuclei and cytoplasm were considered p16 positive (p16+). PCR analysis Four sections of 10 µm were cut from the FFPE OPSCC specimens and transferred to Eppendorf tubes. The genomic DNA was extracted by the QIAamp procedure (23) after pretreatment of the sections with xylene and precipitation with 100% EtOH. Two DNA extractions were performed on sections from each patient and analyzed for purity, mass and quantity of DNA with Nanodrop equipment (24). Afterwards, the extracted samples were stored at −20°C until the PCR procedure was performed. The quality of the extracted DNA samples was controlled by the expression of the housekeeping gene glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (25). Only DNA samples expressing GAPDH were used for HPV-specific PCR analysis. Two general primer sets, MY09/MY11 and GP5+/GP6+, and 1 set of specific HPV16 primers were used as described by Lajer et al. (10). If one of the primer sets amplified the HPV DNA, visualized on a 2.5% agarose gel, the sample was scored as PCR positive for HPV (Fig. 3). Definition of HPV-associated (HPV+) OPSCC We defined HPV-associated (HPV+) OPSCC by 2 criteria: the p16 immunohistochemistry should be of score 4+ (>75% positive cells) and the specimen should be PCR positive with one of the HPV-specific primer sets (26). As smoking down-regulates p16 expression, this rigorous definition was applied to clarify which OPSCC were induced by HPV (27). Statistics Statistical analyses were performed using SPSS v. 20 from IBM. Fisher's exact test was used for testing proportions. Significant results were defined as a p-value of ≤0.05. Annual incidence rates were age-standardized using the world population and calculated using the mean population of Greenlandic inhabitants as the denominator in 2 study time intervals: 1994–2001 (48,924 people; 26,193 males, 22,731 females) and 2002–2010 (49,618 people; 26,593 males, 23,025 females) (28). The Rothman/Greenland method was used for calculating 95% confidence intervals in Openepi (29). Ethics The study followed the Helsinki II declaration and was approved by the Commission for Scientific Research in Greenland. The Danish Data Protection Agency approved the use of the patients' data. Results A total of 30 patients diagnosed with OPSCC in Greenland from 1994 to 2010 were identified. Of these, 4 patients were not born in Greenland, leaving 26 ethnic Greenlanders with available OPSCC specimens for the study. Table I shows the annual incidence rate of OPSCC in the study period and the proportions of p16+ OPSCC, HPV+ OPSCC and HPV− OPSCC according to sex distribution, the time intervals 1994–2001 and 2002–2010, smoking habits and alcohol intake. Table II shows the median age at OPSCC diagnosis according to p16+ OPSCC, HPV+ OPSCC, HPV− OPSCC and the gender of the patients.
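Referring back to the Statistics paragraph above, the snippet below shows how a comparison of HPV+ proportions between the two periods could be run with Fisher's exact test in Python; the 2×2 counts are hypothetical placeholders for illustration, not the study's actual case numbers.

```python
# Illustrative only: Fisher's exact test of the kind used above to compare
# HPV+ OPSCC proportions between the two periods. The 2x2 counts below are
# hypothetical placeholders, not the study's actual case numbers.
from scipy.stats import fisher_exact

#         HPV+  HPV-
table = [[1, 6],     # hypothetical 1994-2001 counts
         [4, 12]]    # hypothetical 2002-2010 counts

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
```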
The median age at diagnosis of the patients with HPV+ OPSCC was 47 years, compared to 63 years for those diagnosed with HPV− OPSCC (p = 0.3) (Table II). Discussion In our study, we found an increase in the proportion of HPV+ OPSCC from 14% in 1994–2001 to 25% in 2002–2010, and an overall HPV+ OPSCC proportion of 22%. Compared to Europe, the proportion is low. A study conducted in the United Kingdom using similar methods and a similar definition of HPV+ examined 108 OPSCC patients and found an increase in the proportion of HPV+ OPSCC from 15% in 1988 to 57% in 2009 (30). International meta-analyses and reviews have found the proportion of HPV+ OPSCC to be between 20 and 75%, increasing over the last decades (5,12,31). Genetic and cultural aspects may play a role, since a study conducted in Hokkaido, Japan, using solely the sensitive PCR method and including 71 OPSCC patients from 1998 to 2009, found the proportion of HPV+ OPSCC to be only 32% (32). Our lower proportion of 22% HPV+ OPSCC, compared to western countries, may be explained by the small study group consisting of only 26 Greenlandic patients. Also, high smoking and alcohol consumption was so dominant in Greenland that the effect of HPV may be difficult to observe (Table I). In 1993–1994, the prevalence of current smokers in Greenland was 79%, decreasing to 66% in 2005. As this prevalence is expected to decrease further, the proportion of HPV+ OPSCC must be expected to increase. It is known that HPV is sexually transmitted and that the rate of oral infection with HPV16 correlates with the number of sexual partners, oral sex and open-mouthed kissing. Also, from a register study we know that partners of women with positive pap-smears for CSCC are at higher risk of OPSCC (15,33). The Greenlandic inhabitants have an early sexual debut, high rates of venereal disease and 3–4 times the CSCC rate of Denmark, but surprisingly this does not seem to have induced a high proportion of HPV+ OPSCC in Greenland at present (13,14,20). The findings correlate with a study from 2008 that found the prevalence of cervical HPV infection to be lower in Greenlandic women aged more than 20 years than in Danish women (34). The lower proportion of HPV+ OPSCC could be explained by different sexual practices, hygiene, genetics, cultural differences or maybe because the spread of HPV started 10 years later in Greenland (34). These theories need further investigation. The proportion of 22% HPV+ OPSCC cases demonstrates that HPV infection is more common in the Greenlandic patients with OPSCC than in healthy people. An American study conducted in 2009–2010 among 3,977 healthy Americans found 1.3% to have an oral HPV infection (35), and another study found that 3.6% of females and 10% of males had some type of oral HPV infection (36). The proportion of HPV infection in the American study was found to correlate positively with the number of sexual partners (36). The proportion of oral HPV infection in healthy individuals still needs to be investigated in Greenland. We find a discrepancy between the proportion of 42% p16+ OPSCC cases and only 22% PCR-positive OPSCC cases. These p16+ and PCR-negative OPSCC cases may have been induced by other factors, such as other viral infections or mutations (e.g. adenovirus, CMV or polyoma SV40 virus), that might inactivate pRB and thereby lead to an up-regulation of p16 (37,38). p16 up-regulation may also be induced by cellular oxidative stress, cell ageing or physiological stress (39–41).
To exclude laboratory bias and older, possibly degraded, specimens, we controlled the DNA quality by the Nanodrop procedure and by PCR for the housekeeping gene GAPDH before performing PCR for HPV (24,25,42). Males accounted for two-thirds of the OPSCC cases. The alcohol and tobacco consumption was lower in patients with HPV+ OPSCC than in HPV− OPSCC patients, and the HPV+ OPSCC cases were diagnosed 16 years earlier (median 47 years) than the HPV− OPSCC cases (median 63 years). These results are in accordance with international findings (5,43,44). Perspectives Possible treatments of HPV+ OPSCC today include surgery and radiotherapy with or without concomitant chemotherapy. But it is possible that vaccines administered today will prevent HPV+ OPSCC in the future. The quadrivalent vaccine covers the oncogenic HPV16 and HPV18 viruses and the non-oncogenic, papilloma-inducing HPV6 and HPV11 viruses, and the bivalent vaccine covers the HPV16 and HPV18 viruses. In a multiethnic, multinational 4-year follow-up study (12,45) and in a Danish study (46), the quadrivalent vaccine was shown to be effective against HPV6, 11, 16 and 18, with fewer anal papillomas and cervical squamous cell neoplasias and with continuously high anti-HPV titres in the blood. From 1988 to 2004, the annual incidence of HPV+ OPSCC in the United States increased by 225%, especially in males aged 30–50. The incidence of HPV+ OPSCC is predicted to surpass the incidence of HPV+ CSCC in the year 2020 (5,47). As a response to this, and to the fact that HPV is associated with anal and penile cancer, the US Food and Drug Administration in 2009–2011 approved the HPV vaccine for both girls and boys aged 9–16 years (11,12,45). In Greenland, the vaccine is also approved for boys, but it is not included in the mandatory childhood vaccination programme (21). Currently, the proportion of HPV+ OPSCC is low in Greenland, but an increase in HPV+ OPSCC may occur in the future. Continuing studies of the proportion and incidence of HPV+ OPSCC are important for the consideration of including HPV vaccination of boys in the mandatory childhood vaccination programme. Conclusion We found an increase in the annual OPSCC incidence in Greenland from 2.3/100,000 (CI = 1.2–4.2) in 1994–2001 to 3.8/100,000 (CI = 2.4–6.2) in 2002–2010; among males from 2.4/100,000 (CI = 1.0–5.7) to 5.0/100,000 (CI = 2.9–8.9). We found an increase in the proportion of HPV+ OPSCC from 14 to 25% in the same interval (p = 0.51). Patients suffering from HPV+ OPSCC were diagnosed at an earlier age (47 years compared to 63 years), and there was a trend towards lower consumption of alcohol (p = 0.13) and tobacco (p = 0.15) compared to patients with HPV− OPSCC. The overall proportion of 22% HPV+ OPSCC was low compared to Europe and the United States, possibly due to the small sample size and/or geographical, sexual, ethnic or cultural differences. The Greenlandic population is still at high risk of HPV infection, as demonstrated by the high incidence of HPV-induced CSCC. Continuing studies of the OPSCC incidence and the proportion of HPV+ OPSCC in Greenland are needed.
2018-04-03T00:33:35.220Z
2013-01-31T00:00:00.000
{ "year": 2013, "sha1": "4b5949606fad31fc47370005b178367945f3fb28", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3402/ijch.v72i0.22386", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4b5949606fad31fc47370005b178367945f3fb28", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
67856351
pes2o/s2orc
v3-fos-license
Practical Prediction of Human Movements Across Device Types and Spatiotemporal Granularities Understanding and predicting mobility are essential for the design and evaluation of future mobile edge caching and networking. Consequently, research on the prediction of human mobility has drawn significant attention in the last decade. Employing information-theoretic concepts and machine learning methods, earlier research has shown evidence that human behavior can be highly predictable. Despite existing studies, more investigations are needed to capture intrinsic mobility characteristics constraining predictability, and to explore more dimensions (e.g. device types) and spatio-temporal granularities, especially with the change in human behavior and technology. We analyze extensive longitudinal datasets with fine spatial granularity (AP level) covering 16 months. The study reveals device type as an important factor affecting predictability. Ultra-portable devices such as smartphones have an "on-the-go" mode of usage (and are hence dubbed "Flutes"), whereas laptops are "sit-to-use" (dubbed "Cellos"). The goal of this study is to investigate practical prediction mechanisms to quantify predictability as an aspect of human mobility modeling, across time, space and device types. We apply our systematic analysis to wireless traces from a large university campus. We compare several algorithms using varying degrees of temporal and spatial granularity for the two modes of devices, Flutes vs. Cellos. Through our analysis, we quantify how the mobility of Flutes is less predictable than the mobility of Cellos. In addition, this pattern is consistent across various spatio-temporal granularities, and for different methods (Markov chains, neural networks/deep learning, entropy-based estimators). This work substantiates the importance of predictability as an essential aspect of human mobility, with direct application in predictive caching, user behavior modeling and mobility simulations. I. Introduction & Related work In recent years, large-scale research on human mobility has thrived due to the availability of location data collected from portable computing and communication devices, such as laptops, smartphones, smartwatches and fitness trackers. One particular aspect of human mobility that has gained a lot of attention lately is predictability. Prediction techniques constitute fundamental mechanistic building blocks for many mobile protocols and applications, ranging from resource allocation to caching and recommender systems, among others [1], [2]. The seminal work by [3], utilizing cellular network data, established an approach to understanding and measuring the predictability of human mobility patterns; equally important were its data-driven analysis of a large mobile population and its framework for studying the theoretical limits of predictability. The methods introduced in that framework are founded in information theory and have since been extensively applied in the area of mobility modeling and prediction. Later studies that built on [3] addressed either the specifics of the prediction problem (e.g., different formulations [4] of the individual's change of location, or different contexts of mobility) or the shortcomings of the original approach (which relied on coarse spatio-temporal granularity).
Authors in [5] used Wireless LAN (WLAN) traces from a university campus network and reported multi-modal entropy distributions which can be partially explained by the demographics of the population (i.e., age, gender, major of studies). Other entropy-based studies include vehicular mobility [6], [7], [8], online social behavior [9], [10], complex systems [11], cellular network traffic [12] and public transport utilization [13]. In addition, devices' form factors affect the mode of usage and yield varied traffic profiles ([14], [15], [16], [17]), but these studies either do not consider predictability or do not account for different spatio-temporal resolutions. We have chosen our methods based on the literature to measure and compare both theoretical and practical limits of predictability for Flutes and Cellos, with varying degrees of spatio-temporal granularity, while also looking at the correlation of prediction accuracy with mobility and network traffic profiles using extensive fine-granularity traces (based on our earlier work in [17]). The main questions addressed in this study are: i. How different are Flutes and Cellos in terms of predictability? ii. How does the predictability of these device types change with different spatio-temporal granularity (5, 15, 30 min, 1 hour and 2 hours; access point and building level)? iii. Does the choice of method or predictor (e.g. Markov Chain, neural networks such as LSTM and CNN, BWT- or LZ-based estimators, which are introduced in Section II) significantly alter the answers to the aforementioned questions? This study provides the following main contributions: 1. Quantifying the differences of Flutes and Cellos for prediction analysis, evaluated on a real-world large-scale dataset. 2. Comparison of several well-known algorithms (Markov Chains, Neural Networks) and LZ/BWT-based theoretical bounds across different time and space scales for Flutes and Cellos. 3. Use of prediction accuracy as part of the user profile for modeling, and investigation of its correlation with a combination of network traffic and mobility features. The paper is structured as follows: First, the main approach and methods are presented in Sec. II. Then, the details of the dataset and experiment setup are discussed in Sec. III. The experiment results are presented in Sec. IV. Sections V and VI present the discussion on potential implications of the findings and conclude the paper. II. Main Approach & Methods We investigate two methods to measure predictability: a theoretical method based on entropy, and a systems method based on practical predictor algorithms. In the following we provide the entropy-estimation-based definition and discuss the different algorithms studied in this paper, including a baseline Markov Chain approach and a more sophisticated deep learning approach. A. Entropy Estimation Entropy is defined as the level of order (or disorder) of a system and is founded on information theory. It has been adopted in previous studies to establish bounds on predictability under certain assumptions [3], [4]. We utilize it in our study to gauge the performance of our practical predictors. For a random process, this metric is sensitive to both the relative frequency of events and their interdependencies [13]. To estimate a baseline of predictability, we compute the time-uncorrelated entropy (S_unc), which only takes into account the frequency of the observed events.
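As a concrete illustration of this machinery (a minimal sketch in our own words following the framework of [3], [4], not the authors' code), the snippet below computes S_unc for a location sequence and converts an entropy value into the corresponding maximum-predictability bound by numerically solving Fano's inequality; the time-correlated estimators introduced next would simply supply a different value of S.

```python
import math
from collections import Counter

def s_unc(sequence):
    """Time-uncorrelated entropy: depends only on visit frequencies."""
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in Counter(sequence).values())

def max_predictability(S, N, tol=1e-6):
    """Solve S = H(p) + (1 - p) * log2(N - 1) for p (Fano's inequality),
    where H is the binary entropy and N the number of distinct locations."""
    if N <= 1:
        return 1.0
    def rhs(p):
        h = 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(N - 1)
    lo, hi = 1.0 / N, 1.0 - tol   # rhs is strictly decreasing on this range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rhs(mid) > S:          # entropy budget not yet exhausted -> p can grow
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

seq = list("AABABBACAABA")        # toy AP-level location sequence
S = s_unc(seq)
print(f"S_unc = {S:.3f} bits -> predictability bound {max_predictability(S, len(set(seq))):.3f}")
```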
For the upper bound of predictability we compute two time-correlated estimators based on compression algorithms (S_lz and S_bwt) which also consider the memory of the system. We define maximum predictability as the probability of correctly predicting the most likely next state, which is computed from the entropy S of a given sequence of events based on [3], with the refinements proposed by [4]. For a complete description of entropy estimation, we kindly refer the reader to [18] and [19]. B. Predictors Markov Chain-based predictor: A Markov chain (MC) with a discrete state space has been applied for user mobility prediction [20], [21]. In an order-k Markov predictor, the state space consists of tuples of k location names (e.g., APs), where the next-location prediction depends solely on the most recent preceding k-tuple. We build the model on the data so that observed k-tuples comprise the states. The transition probabilities are learned based on the frequency of appearances of such transitions in the observations. The probability of a transition from the current state S = X_i X_{i+1} ... X_j to X_{i+1} X_{i+2} ... X_j X_{j+1}, where j − i = k and each X_i is the symbol for a location, is represented as P(X_{j+1} = c | S = X_i X_{i+1} ... X_j) for all c observed in the data, and is learned based on the reappearance frequency of such a sequence. If the predictor of order k encounters a new sequence that it has never seen before, it falls back to the lower order, k − 1, recursively. The base case is order 0, which is simply the frequency distribution of all symbols observed so far. Deep learning: Recent approaches to sequence prediction use deep Recurrent Neural Networks (RNN) or Convolutional Neural Networks (CNN). Recurrent neural networks have loops within their cells, allowing information to persist and thus enabling the neural network to connect previous information to make a reasonable prediction of the future. Certain types of RNNs are capable of learning long-term dependencies. There are multiple variants of RNNs, including Long Short-Term Memory (LSTM) [22] and the Gated Recurrent Unit (GRU) [23]. These networks can learn dynamic temporal patterns and have successfully been applied in speech recognition, text-to-speech engines and predicting the next location [24], [25]. CNNs learn convolutional filters to extract latent information across the data (i.e. 1D CNNs learn different temporal locality patterns) and use that information for predicting the next location. In our study, we use a multi-layer LSTM and a 1D CNN to predict movements of users based on input tuples similar to those used for the MC-based predictors. Neural networks are computationally expensive and require hyper-parameter tuning; thus the deep model is run only on a sample of users in this study. One goal of this study is to analyze the payoff (and cost) of adding complexity to the predictor (e.g. LSTMs), versus the simpler MC-based predictors. III. Datasets & Experimental Setup To study the regularity of human behavior, we performed a data-driven analysis applying our methods to university campus WiFi traces from the University of Florida. The datasets were collected from networks providing wireless access to a large number of portable devices via access points deployed in non-residential areas, including classrooms, computer laboratories, libraries, offices, administrative premises, cafeterias, and restaurants. Every trace entry contains a unique user identifier (uuid), a time-stamp and an access point unique identifier (apid).
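Before turning to the trace details, here is a minimal sketch (our reading of the order-k Markov predictor described in Section II-B, not the authors' code) of the fallback scheme: prediction from the most recent k-tuple, backing off to lower orders whenever the current context was never observed, down to the order-0 frequency distribution.

```python
# Order-k Markov predictor with recursive fallback, trained online.
from collections import defaultdict, Counter

class MarkovPredictor:
    def __init__(self, k):
        self.k = k
        # one transition table per order: context tuple -> next-symbol counts
        self.tables = [defaultdict(Counter) for _ in range(k + 1)]

    def _context(self, history, order):
        # history[len-0:] == () for order 0; guard against short histories
        return tuple(history[len(history) - order:]) if order <= len(history) else None

    def update(self, history, next_symbol):
        """Learn from one observed transition (online training)."""
        for order in range(self.k + 1):
            ctx = self._context(history, order)
            if ctx is not None:
                self.tables[order][ctx][next_symbol] += 1

    def predict(self, history):
        """Most frequent successor; fall back to lower orders as needed."""
        for order in range(self.k, -1, -1):
            ctx = self._context(history, order)
            counts = self.tables[order].get(ctx) if ctx is not None else None
            if counts:
                return counts.most_common(1)[0][0]
        return None  # nothing observed yet

# Online evaluation on a toy AP sequence: predict first, then update.
seq = ["ap1", "ap2", "ap1", "ap2", "ap1", "ap3", "ap1", "ap2"]
mc, correct = MarkovPredictor(k=2), 0
for t in range(2, len(seq)):
    if mc.predict(seq[:t]) == seq[t]:
        correct += 1
    mc.update(seq[:t], seq[t])
print("accuracy:", correct / (len(seq) - 2))
```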
Based on the apid's string we are able to identify the building as well as the room in which an access point (AP) was located. Only the geographical coordinates of buildings are known. Table II contains a brief summary of the dataset with mean (µ) and standard deviation (std), where N_ap is the number of unique access points observed per device, N_day the number of unique days with at least one record, and N_rec the number of records during data collection, together with the total number of devices available for at least 7 days that accessed more than 5 APs (transient devices are not counted, to ensure the analysis is carried out on devices that are mobile and benefit from predictive systems the most, while stationary devices (e.g. plugged-in Cellos) and guests that never return to campus are ignored). A. UF traces The UF traces were collected for 16 months (September/2011–December/2012) and contain over 1700 wireless access points (APs) deployed in 140 buildings which were used by 300K devices. A sample (synthetic) record is shown in Table I. The raw records were captured from association and session-timeout events, in which the unique user id (uuid) was the MAC address. These uuids, although hashed, still contained the Organizationally Unique Identifier (OUI; see http://standards.ieee.org/faqs/regauth.html#17), allowing us to distinguish Flutes and Cellos, as detailed in [17]. All collected WiFi traces are processed as discrete time-series, defined next. B. Discrete-time Series Given a set of time-ordered events X = {x_t : t = 1, ..., n}, where x_t is the realization of X at time t for t ∈ T, we say that the time series is discrete if the measurements are taken at successive times spaced at uniform intervals w, also referred to as the sampling rate (defining the temporal granularity). Figure 1 depicts an example of how the real location of a device is sensed by the wireless management system through AP associations (red stars) and finally how the discrete-time series is obtained. For a given sampling time window w, our discrete-time series may result in different sequences depending on whether we choose an AP or a building as the level of spatial resolution. In Figure 1, for the first 4 time steps the device switched its associated AP without a real location change. This switch in AP association can be triggered by the mobile device (e.g. stronger wireless signal) or by the network management system (e.g. load balancing). Note that it is important to define the resolution for space and time, i.e., how big a location (or point-of-interest) is in space and how often we are going to sample from the input signal. In this example, larger values of w could eliminate this ping-pong effect of switching between APs without actually moving, but also cause loss of information when the user transits from one location to another. On the contrary, very small values of w could over-sample long periods when the user is not moving. Similarly, different values of spatial resolution could mitigate noise but eliminate information from the traces. Choosing these parameters is often influenced by the characteristics of the available dataset as well as the targeted application of the study. Step Value: A weighting mechanism is used to pick the location that represents a time step. During a time interval, we weight every observed location of the device by the duration of time spent at that location and pick the one with the highest weight to represent that step; a sketch of this discretization is given below.
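```python
# A sketch, under our own simplifying assumptions, of the "step value"
# discretization described above: association events become a discrete-time
# series with window w, each step represented by the location with the most
# dwell time in that window. The t_max/unknown-state handling is omitted.
from collections import defaultdict

def discretize(events, w):
    """events: list of (timestamp_seconds, location), sorted by time."""
    t0, t_end = events[0][0], events[-1][0]
    # assume the device stays at its last AP for one final window
    bounded = events + [(t_end + w, None)]
    series = []
    for start in range(int(t0), int(t_end + w), int(w)):
        end = start + w
        dwell = defaultdict(float)
        for (t, loc), (t_next, _) in zip(bounded, bounded[1:]):
            overlap = min(end, t_next) - max(start, t)
            if loc is not None and overlap > 0:
                dwell[loc] += overlap
        if dwell:
            series.append(max(dwell, key=dwell.get))
    return series

# AP-level ping-pong in the first minutes collapses to one symbol per window.
events = [(0, "ap1"), (120, "ap2"), (180, "ap1"), (300, "ap2"), (1500, "ap3")]
print(discretize(events, w=600))  # e.g. ['ap2', 'ap2', 'ap2', 'ap3']
```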
We assign a user to a specific location in the time interval δt between an association and the next association at any other location, but only if δt < t_max. After t_max the device will be in an unknown state [3] until the next network event, which will reveal its location for future steps. C. Experiments The design of our experiments is based on our study's three questions stated in Section I. Thus, we evaluated a matrix involving combinations of the following dimensions: device type (Flutes vs. Cellos), spatial granularity (AP vs. building level), temporal granularity (5 min to 2 hours) and prediction method. The experiments were implemented in Python; the neural networks were implemented using Tensorflow [26] and Keras. Training is carried out in an online manner, and the evaluation is performed by providing a sliding window of k observations to the predictor and testing the prediction correctness of the next symbol. The fraction of correct next-symbol predictions is the prediction accuracy metric. A. Spatio-Temporal Resolutions To answer the first two questions of this study, particularly "ii. How does the predictability of these device types change with different spatio-temporal granularity?", Table III summarizes the median accuracy of an LSTM predictor for Flutes and Cellos with different spatial and temporal granularity. The choice of granularity is application-dependent; for example, to predict foot traffic at buildings and for congestion planning based on density, building-level analysis is more appropriate. Cellos show more predictable behavior overall, as the fraction of correct next-symbol predictions is higher for Cellos across the board. At the AP level, with longer time bins, the accuracy for both Flutes and Cellos decreases. This observation is in line with previous findings [4]. At 15min time intervals, the difference between Flutes and Cellos is at its maximum; it drops and remains stable for longer time intervals. At the building level, the accuracy follows a less regular pattern, but both Flutes and Cellos are most predictable at 5min intervals (due to repeats of the same location in the sequence). Cellos' accuracy drops for 30min bins and goes back up again. On the other hand, Flutes are more predictable in 30min bins than in 15min, 1h or 2h bins. Looking across all temporal bins, Fig. 2 presents the empirical cumulative distribution function (ECDF) of prediction accuracy at AP and building spatial granularity. The "sit-to-use" Cellos show significantly higher predictability at every percentile; this is reasonable given their lower mobility [17] and mode of usage. In fact, prediction accuracy is highly correlated with other mobility and network traffic features of mobile wireless users; we take a brief look at these correlations in Section V. B. Comparison of Methods To answer the third question of this study, "iii. Does the choice of method or predictor significantly alter the answers to the aforementioned questions?", we compare the experiment results for the different methods in Table IV, for temporal granularities of 1h and 15min, highlighting the difference Cellos − Flutes. In all cases Cellos are more predictable than Flutes, regardless of the choice of method (with a minor exception for the LZ predictor at 15min time and building level, which might be due to the intrinsic instability of the LZ-based estimator).
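For concreteness, the following is an illustrative Keras next-location LSTM of the kind evaluated here, together with the sliding-window accuracy metric described in the experimental setup; the layer sizes, hyper-parameters and the random stand-in trace are our assumptions, as the paper does not specify the exact architecture.

```python
# Illustrative only: a multi-layer LSTM next-location model (Keras) plus the
# fraction-of-correct-next-symbol accuracy metric. Not the authors' exact setup.
import numpy as np
import tensorflow as tf

num_locations = 1700   # on the order of the number of campus APs
k = 5                  # sliding-window length fed to the predictor

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=num_locations, output_dim=64),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(num_locations, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Sliding-window evaluation: predict the next symbol from the previous k
# observations and count exact matches.
seq = np.random.randint(0, num_locations, size=200)    # stand-in trace
X = np.stack([seq[i:i + k] for i in range(len(seq) - k)])
y = seq[k:]
model.fit(X, y, epochs=1, verbose=0)
acc = (model.predict(X, verbose=0).argmax(axis=1) == y).mean()
print(f"fraction of correct next-symbol predictions: {acc:.3f}")
```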
The difference in median accuracy for Flutes vs Cellos is up to 25% (building level, 15min window, sequence length 40: Flutes 33.97% vs Cellos 59.03%). Other temporal choices result in a similar pattern. Another notable observation is that while the neural networks are more complex and require vastly more computing power, they only achieve a modest increase over Markov Chains in some scenarios (e.g., Cellos, at the Bldg. level and Seq. Len. 40, from 48.56% to 52.5%). This is a trade-off that needs to be considered in the design of predictive caching systems. In addition, increasing the sequence length k (i.e. the number of previous time steps available to the predictor) impacts the Markov Chain model more than the neural networks. This is particularly pronounced for the 15min time window; in fact, the neural networks do not lose much accuracy from increasing the sequence length from 5 to 40 in the case of the 1h time window. Also, the theoretical LZ- and BWT-based estimators show higher upper bounds compared with the best of the algorithms, with Seq. Len. 5 Markov Chains and CNNs being the closest practical algorithms. V. Discussion & Future Work In this paper, we define our research problem as predicting the next symbol in a discrete-time series for users with two categories of devices. The accuracy is evaluated as the fraction of next symbols predicted correctly. While some earlier studies investigated a similar problem setup, our study has notable implications. For example, across device types, predictability can vary significantly. Also, with larger time windows such as 1 hour, it is easy to miss short stays (since one location visit with a duration of 31 minutes would result in other locations in that 1h window being ignored). On the other hand, a short time window results in multiple repetitions of the same location in the sequence, potentially achieving high prediction accuracy even when the method is not predicting the transitions well. It is important to consider the device type, context, and application in order to choose an appropriate time and space granularity; the best performing method differs across these dimensions. Besides, the measured accuracy only considers an exact match to be correct, so even if the method predicts a location near the actual location, it counts as incorrect. We plan to investigate measuring how far a predicted location is from the actual location and to embed that information in the loss function of our neural networks for possible improvements in prediction. Correlations with Mobility and Network Traffic: Figure 3 shows the correlation of prediction accuracy with a sample of features that describe the mobility or network traffic of users. PDT(W/E) and TJ(W/E) are mobility features, while AAT(W/E) and AI(W/E) are traffic features. PDTW is the time spent at the user's preferred (most common) building on weekdays (PDTE for weekends). TJW is the total sum of jumps (distance) for weekdays, while TJE describes the same feature for weekends. AATW is the average active time (as indicated by network usage) of the user for weekdays (AATE for weekends). AIW stands for the average inter-arrival time of flows on weekdays, and AIE for weekends ([17], [27]). The results present significant correlations of the prediction accuracy with not only the mobility features but also the network traffic features. These correlations vary across device types (Flutes vs Cellos) and in time (weekdays vs weekends).
This is a very important observation for the design of predictive caching systems: it might be possible to improve the prediction of where the user is going based on the network traffic profile, while noting the different modes of usage based on device types. We leave the investigation of such improvements to future work. Integrated Mobility-Traffic Modeling: Given the observed correlations, we hypothesize that the use of predictability as a feature in an integrated mobility-traffic generative model could lead to more realistic synthetic traces. Such a data-driven generative model would be an essential tool for network simulations and capacity planning. Notably, it can also be made privacy-preserving, since collected traces would be replaced with realistic synthetic data that captures mobility, network traffic, predictability, and their relationships. Further study is beyond the scope of this work and is left for the future. VI. Conclusion In this work, we sought to answer three questions: i. How different are Flutes and Cellos in terms of predictability? ii. How does the predictability of these device types change with different spatio-temporal granularity? iii. Does the choice of method or predictor significantly alter the answers to the aforementioned questions? For this purpose, we processed a large-scale dataset from a campus environment and grouped the devices into two categories, and we chose a set of methods to make the comparisons, including entropy-based estimators and popular algorithms such as Markov Chains and Neural Networks. The results of the experiments show that the movements of Cellos ("sit-to-use") are significantly more predictable than those of Flutes (up to 25% difference in accuracy). This pattern is consistent across various temporal granularities (5 min to 2 hours), spatial granularities (access point and building level), and different methods (Markov Chains, Neural Networks, entropy-based estimators). We illustrate that the performance of predictors depends strongly on the span of the temporal bins. Markov Chains tend to outperform deep learning models in shorter time bins, while LSTMs and CNNs usually show higher accuracy in longer time bins. CNNs have mostly similar accuracy to LSTMs in the latter case but have significantly better run time on a modern GPU. We also found significant correlations among prediction accuracy, mobility features, and network traffic features, an important observation for the design of predictive caching systems, where it might be possible to improve mobility prediction based on the network traffic profile. We plan to further investigate the use of predictability as a feature in an integrated mobility-traffic generative model, and its application in state-of-the-art predictive caching systems.
2019-03-03T17:46:27.000Z
2019-03-03T00:00:00.000
{ "year": 2019, "sha1": "43848eca445750f225ede4c48dee3a989d9e6b17", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "43848eca445750f225ede4c48dee3a989d9e6b17", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237299553
pes2o/s2orc
v3-fos-license
Antimicrobial stewardship, therapeutic drug monitoring and infection management in the ICU: results from the international A-TEAMICU survey Background Severe infections and multidrug-resistant pathogens are common in critically ill patients. Antimicrobial stewardship (AMS) and therapeutic drug monitoring (TDM) are contemporary tools to optimize the use of antimicrobials. The A-TEAMICU survey was initiated to gain contemporary insights into the dissemination and structure of AMS programs and TDM practices in intensive care units. Methods This study involved an online survey of members of ESICM and six national professional intensive care societies. Results Data of 812 respondents from mostly European high- and middle-income countries were available for analysis. 63% had AMS rounds available in their ICU, of whom 78% performed rounds weekly or more often. While 82% had local guidelines for the treatment of infections, only 70% had cumulative antimicrobial susceptibility reports and 56% monitored the quantity of antimicrobials administered. A restriction of antimicrobials was reported by 62%. TDM of antimicrobial agents was used in 61% of ICUs, mostly for glycopeptides (89%), aminoglycosides (77%), carbapenems (32%), penicillins (30%), azole antifungals (27%), cephalosporins (17%), and linezolid (16%). 76% of respondents used prolonged/continuous infusion of antimicrobials. The availability of an AMS had a significant association with the use of TDM. Conclusions Many respondents of the survey have AMS in their ICUs. TDM of antimicrobials and optimized administration of antibiotics are broadly used among respondents. The availability of antimicrobial susceptibility reports and a surveillance of antimicrobial use should be actively sought by intensivists where unavailable. Results of this survey may inform further research and educational activities. Supplementary Information The online version contains supplementary material available at 10.1186/s13613-021-00917-2. Background Antimicrobial resistance (AMR) is a concern in many regions of the world [1]. Antimicrobial use and the horizontal spread of bacteria have been recognized as the most important drivers of resistance and its dissemination [2,3]. In addition to infection prevention measures, antimicrobial stewardship (AMS) is an essential step to optimize the consumption of antimicrobials and thus reduce bacterial resistance [4–6]. Intensive care units (ICUs) are burdened by large numbers of patients with infections and sepsis, high antimicrobial use, and high rates of resistance [7]. Therefore, the rational management of anti-infectives and infection control measures are core competencies of intensive care medicine specialists [8]. In 2016, experts from the European Society of Intensive Care Medicine (ESICM) and the European Society of Clinical Microbiology and Infectious Diseases (ESCMID), in collaboration with the World Alliance Against Antimicrobial Resistance (WAAAR), held a round table meeting on antimicrobial resistance [9]. Besides improving awareness of AMR and surveillance, the meeting recommended the engagement of intensivists in multidisciplinary AMS teams in the hospital. Along the same lines, AMR and AMS were identified as integral components of the intensive care medicine research agenda to be addressed in the future [10].
In addition to a reduction of antimicrobial use to diminish the ecological pressure in the ICU environment, pharmacologic optimization of antimicrobial administration is recognized as an important target in critically ill patients. Standard dosing regimens for many antimicrobials were shown to be associated with extremely variable drug levels [11], potentially causing unacceptable rates of underdosing and adverse outcomes [12,13]. Furthermore, the international ADMIN-ICU survey found a marked heterogeneity of dosing strategies and the use of therapeutic drug monitoring (TDM) [14]. The "Antimicrobial Stewardship, Therapeutic Drug Monitoring and Early Appropriate infection Management in European ICUs" (A-TEAMICU) survey was initiated by the Infection Section of the ESICM to gain insights into the development of AMS programs and TDM practices in ICUs since the publication of ADMIN-ICU in 2015 and the 2016 round table meeting on antimicrobial resistance. Results from this survey can be the basis for educational initiatives on behalf of ESICM by providing real-world information on the dissemination and structure of AMS and TDM. We hypothesized that AMS programs, the availability of TDM and the use of pharmacologically optimized infusion of antimicrobials have markedly increased in comparison to prior surveys. Survey population The survey was endorsed by ESICM and six national professional societies (Australia and New Zealand, Germany, Brazil, United Kingdom, the Netherlands, and Portugal). The societies used their respective members' email addresses to send a link to an online survey. Due to data protection regulations, the numbers of professionals who were contacted and their email addresses remained unknown to the A-TEAMICU investigators. The survey was conducted in English, participation was voluntary (i.e., there was no financial remuneration), and respondents remained anonymous. The number of participants from a single center or hospital was not controlled. Data collection The A-TEAMICU study group consists of experts with clinical expertise in intensive care medicine, infectious diseases, and antimicrobial stewardship who are members of the "Infection Section" of ESICM. The idea to conduct this survey was formulated during a section meeting. Participation in this project was open to everyone interested in the topic. Using a recent survey on AMS by the "ESCMID Study Group in Antimicrobial Stewardship (ESGAP)" as a starting point [15], the questions were adapted to the ICU setting by the authors of this publication. The questions were preformulated and discussed by the group in video/telephone conferences. All questions were agreed upon by the whole group. The final core survey consisted of 23 questions. Dependent on the participants' answers, 13 additional questions were asked (Additional file 1). The questionnaire was divided into four sections (hospital information, organization of an antimicrobial stewardship program, therapeutic drug monitoring, education in antimicrobial stewardship). The A-TEAMICU survey used the "Survey Monkey" platform, which was provided by ESICM. Before starting the survey, ethical approval was sought at the University of Ulm (Germany). The local ethics committee waived the need for formal ethical approval of A-TEAMICU as an anonymous online survey of clinical practice. Descriptive statistics were expressed as total numbers and percentages for categorical variables.
Sample size calculations were not performed, as it is not possible to estimate the number of participants before the survey. Due to this, we only performed univariate analyses for categorical variables, using the Chi-squared test. A p-value < 0.05 was considered statistically significant. Demographic information In total, 812 participants from 71 countries responded to the survey (Fig. 1). The countries of origin were classified using the criteria of the Statistics Division of the United Nations [16] (Table 1). The majority (85%) of respondents worked in high-income countries, while 14% participated from upper-middle and lower-middle-income countries. Seven respondents did not provide information about their country of origin. To our knowledge, this is the largest survey on this topic. Part 1: hospital information and demographics Respondents were generally experienced ICU clinicians, and a substantial proportion considered themselves unit leaders in infection management. 8% of participants had less than 2 years of ICU experience, 15% had 2-5 years, 19% 5-10 years, 33% 10-20 years, and 25% reported more than 20 years. Approximately half reported having received specific training in antimicrobial therapy or infection management. 35% of respondents considered themselves the most qualified intensivist on their respective service regarding infection management. Detailed information about the hospital types and the numbers of ICU beds is included in Table 1. Infectious disease (ID) specialists were available for consultations in 67% of hospitals, with a further 16% available as external consultants. Clinical microbiologists were available for in-house consultation in 60% of hospitals, while 22% had availability of external consultation. A lack of ID support was reported by 16% of participants; 17% could not consult a clinical microbiologist. Most respondents use an electronic medical record in the ICU (59%). Part 2: AMS and infection management A formal antimicrobial stewardship program (ASP) existed in 69% of hospitals, and 63% of participants had an ASP available in their ICU. Common members of the A-team were clinical microbiologists (62%), infectious disease specialists (57%), clinical pharmacists (50%), and infection prevention specialists (21%). In 77% of ICUs with an ASP team available, the intensivist was a member of the team. The A-team visited the ICU weekly in 37% of hospitals; 41% had rounds of the A-team more often (several times a week in 21%, daily in 20% of ICUs). 14% of respondents had the A-team available only on demand. A restriction of selected antimicrobials with the necessity for formal authorization was in place in 62% of hospitals. Detailed information on the methods used for the implementation of restrictions is provided in Table 2. Most respondents (82%) had local guidelines for the treatment of infectious diseases available in their hospitals. In 87% of hospitals with local guidelines, recommendations were based on local susceptibility patterns. Further information on the availability of guidelines/standards is provided in Table 2. Only 19% reported having no specific guidance documents in the ICU. 52% of participants had a written ICU policy requiring prescribers to document the indication of antimicrobials in the patient records. The quantity of antimicrobials prescribed was monitored in 56% of ICUs. In these hospitals, defined daily doses (DDDs) were the most used statistical measure (41%), followed by days of therapy (DOTs) in 29% (an illustrative computation of these two metrics is sketched below).
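```python
# Illustrative only: the two consumption metrics named above, computed from
# administration records. The WHO DDD reference values below are assumptions
# for the sake of the example; in practice they should be taken from the
# current WHO ATC/DDD index.
import pandas as pd

WHO_DDD_G = {"meropenem": 3.0, "ceftriaxone": 2.0}   # grams/day (assumed here)

records = pd.DataFrame({          # one row per administered dose
    "patient": ["A", "A", "A", "B", "B"],
    "drug":    ["meropenem"] * 3 + ["ceftriaxone"] * 2,
    "date":    ["2021-01-01", "2021-01-01", "2021-01-02",
                "2021-01-01", "2021-01-02"],
    "grams":   [1.0, 1.0, 1.0, 2.0, 2.0],
})

# DDDs: total grams administered divided by the drug's defined daily dose.
ddd = records.groupby("drug")["grams"].sum() / pd.Series(WHO_DDD_G)
# DOTs: number of patient-days on which each drug was given at least once.
dot = records.drop_duplicates(["patient", "drug", "date"]).groupby("drug").size()
print(ddd, dot, sep="\n")
```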
26% of participants were unsure of the details of antimicrobial usage surveillance in their hospital. Regarding cumulative antimicrobial susceptibility reports, only 70% of participants had these data available for their ICUs. 17% were uncertain about their local status, while 13% reported a complete lack of such information. Part 3: use of TDM and prolonged/continuous infusion of antimicrobials 75% of participants had written guidelines for antimicrobial dosing in their ICU, of which 77% used a local guideline and 23% national guidelines. Therapeutic drug monitoring of antimicrobial agents was used in 61% of ICUs. Where TDM was available, drug measurements were performed by the clinical chemistry service in 63% of hospitals, by the clinical pharmacy in 16% of cases and by the microbiology department in 11%. Advice on the clinical use of drug measurements was provided by various specialists, including intensivists (72%), microbiologists (30%), ID specialists (29%), clinical pharmacists (28%), and clinical chemistry specialists (14%). Antimicrobials available for TDM are listed in Table 2. To elucidate whether an ASP had an association with the use of TDM, we compared respondents by their ASP availability and found a significant association between the availability of an ASP and the use of TDM. 76% of respondents used prolonged and/or continuous infusion of antimicrobials in their ICU (see Table 2 for the list of antimicrobials). 29% of participants had TDM available for every antimicrobial that they give as an extended infusion. We did not find an association of the availability of an ASP with the use of prolonged or extended infusion of antimicrobials. When focusing on beta-lactams specifically, there was also no significant association. Discussion This international survey among intensive care specialists adds knowledge to the results of previous inquiries that have explored the local organization of AMS in the intensive care setting in Germany [17] and France [18]. To the best of our knowledge, therapeutic drug monitoring in the ICU has only been surveyed once in the past [14], and we are able to provide a current perspective on the evolving use of this technology. The number of participants in A-TEAMICU was considerably larger than in those surveys. Slightly more than 60% of respondents have an ID specialist and a clinical microbiologist available at their hospitals, while the remainder must rely on external consultations or cannot access this resource at all. This finding reflects both the known shortages of ID physicians and clinical microbiologists and a growing centralization of microbiology services in laboratories detached from hospitals [19]. While it is evident that the core responsibility for the management of infections is in the hands of the intensivist, the option to acquire specialized input should be available, as it is a valuable addition to good patient care [20,21]. Notwithstanding these infrastructural challenges, the widespread implementation of formal AMS programs in hospitals and ICUs is an encouraging finding, reflecting a growing dedication of the medical community to the prevention of AMR. This is all the more encouraging because of the documented lack of standardization of training in AMS, infectious diseases, and infection prevention in many countries [22]. As recommended by contemporary guidelines [4], clinical microbiologists, infectious disease specialists, and clinical pharmacists are common members of the AMS team in hospitals with AMS programs.
Our results are comparable with the findings of a recent survey in four European countries, which analyzed AMS on a hospital level, albeit without a special focus on the ICU [15]. In the A-TEAMICU cohort, 77% of respondents with an AMS program in their ICU report that the intensivist is a member of the AMS team, demonstrating that intensive care medicine specialists are actively engaging in AMS activities. Furthermore, about half of participants have received specific training in antimicrobial therapy or infection management, adding to the profile of the intensivist as "infection manager." The availability of local guidelines in 82% of ICUs is a finding that warrants attention. The adaptation of empiric antimicrobial therapy to local epidemiology is essential to guarantee adequacy of therapy and simultaneously curb overtreatment. Thus, every hospital ought to provide such recommendations to its staff. In our survey, 87% of hospitals where such guidelines are available incorporate local resistance data. Again, this leaves room for improvement. On a positive note, a considerable number of ICUs in the A-TEAMICU cohort have specific guidelines and recommendations on antimicrobial de-escalation, duration of therapy, TDM, and antimicrobial discontinuation. This finding is encouraging, as it reflects recent developments in the field of intensive care medicine [23,24]. Regarding surveillance in general, cumulative antimicrobial susceptibility reports for the ICU are only available in 70% of ICUs, and only 56% of participants have a monitoring of antimicrobial use. These data are indispensable for both therapeutic decisions and AMS programs in general, and intensivists should actively demand the provision of surveillance information. Therapeutic drug monitoring of anti-infective substances is widely available in ICUs, and intensivists are the predominant discipline advising on the use of TDM in the A-TEAMICU cohort. We found an association of the use of TDM with the availability of an ASP, which is a plausible finding, as the pharmacologic optimization of antimicrobials is a central tenet of antimicrobial stewardship. In addition to glycopeptide and aminoglycoside antibiotics, approximately 30% of respondents have the ability to monitor β-lactams. This proportion is higher than expected and demonstrates the growing use of pharmacokinetic optimization of antimicrobial therapy in the ICU. A recent position paper by ESICM (published after A-TEAMICU) explicitly recommends the use of β-lactam TDM in critically ill patients [25], and many ICUs appear to already pursue these goals. Concurrent with the finding of an increased use of TDM is the extensive use of prolonged and/or continuous infusion of antimicrobials by 76% of respondents. As is pharmacologically reasonable, this practice predominantly focuses on β-lactam antibiotics, but 50% also use extended infusion regimens for glycopeptides. The latter result was unexpected, as current guidelines recommend this practice explicitly for patients in whom therapeutic targets of vancomycin are not attained with intermittent bolus dosing [26]. Whether the continuous infusion of vancomycin may also be used to reduce toxicity is still a matter of debate. The widespread use of prolonged/continuous infusion of β-lactams is a surprising development relative to the results of previous surveys, where only 20-30% of participants used this technique [14,27,28].
Although our questions did not differentiate between extended and continuous infusion, we found a clear move away from the bolus application of time-dependent antibiotics. This likely reflects the growing evidence base for this therapeutic concept [23]. At the same time, recent evidence has identified a need for education on various pharmacologic topics related to the use of antimicrobials [29]. Of note, in our population we did not find a clear association of the use of prolonged/continuous infusions with the presence of an ASP. On a practical level, the extended infusion of suitable antimicrobials is easier to implement than TDM, as the latter has technical requirements beyond the ICU. We speculate that intensivists do not "need" an ASP to introduce prolonged/continuous infusion, whereas the provision of TDM is a more general infrastructural challenge for a hospital. Thus, it might be argued that an ASP not only propagates the use of TDM but also works to provide the possibility to monitor antimicrobial concentrations. This might be an explanation for the influence of an ASP on the use of TDM, without a clear influence on the use of prolonged/continuous infusion. Taken together, the results of the A-TEAMICU survey provide insight into many aspects of contemporary infection management in the ICU. As the prevalence of infections in critically ill patients remains high [7], knowledge of diagnostics, antimicrobial pharmacology, and infection prevention is essential for the practice of intensive care medicine. Antimicrobial stewardship as a "bundle" of coordinated actions to optimize the use of antimicrobials [30] has established itself in many ICUs, and many intensivists are engaged in AMS. Some components of AMS have their primary application in the ICU setting, e.g., the optimization of antibiotic therapy by means of TDM or de-escalation. Thus, intensivists are principal proponents who assume leadership in these topics [10,12,31]. Professional organizations, like ESICM, might use results from A-TEAMICU to expand and refine their engagement on AMS and TDM in intensive care medicine. Besides research undertakings, the provision of education on infection management appears to be another relevant field of activity. This might also include the formulation of "best-practice statements." A specific example might be the availability of guidelines for empirical therapy, which must not only be available but also be based on local epidemiology. ICUs without such guidelines could use recommendations by specialist organizations to advance this issue with their hospital management. As the management of infections in the ICU needs an interdisciplinary framework to achieve the best possible outcomes, specialist organizations should also assume a leading role in the development of interprofessional cooperation. A limitation of our survey is a probable selection bias of participants. The invitation to the survey was primarily distributed to ESICM members, and intensivists not affiliated with this society were harder to reach. We tried to reduce this bias by asking several national societies to use their respective members' addresses to achieve a higher dissemination among the target population. Still, it is unlikely that participants outside of these professional societies took part. Furthermore, respondents with a personal interest in infections and antimicrobials are more likely to accept an invitation to provide information about their current practice.
The same probably holds true for intensivists who work in an environment (both the ICU and the hospital as a whole) that has an emphasis on infection management and AMS. Therefore, a "positive" selection of hospitals with a good structure and of intensivists with a personal dedication to the management of infections cannot be excluded. Additionally, we cannot exclude the possibility that several participants from the same hospital or ICU provided answers. This selection bias might also be applicable to the high rate of TDM use. Still, we do not consider this possibility disadvantageous, as it might reflect a type of contemporary "best practice." Lastly, a majority of participants of A-TEAMICU came from high-income and upper-middle-income countries, where hospital infrastructure and health system funding can be expected to be better than in lower-income countries. This will have an influence on the availability of staff (e.g., pharmacists, ID specialists) and technology (e.g., TDM, laboratory resources), limiting the global generalizability of the survey results. A way to reduce potential selection/participation biases in future surveys might be a more stringent control of participants. As an example, allowing only one person per hospital or ICU to provide answers to the survey might diminish the impact of institutions where various aspects of infection management are considered essential and are thus endued with sufficient resources. However, this might potentially reduce anonymity and thus either prevent colleagues from participation or introduce a social desirability bias, where participants provide answers that they consider to be seen as favorable. Besides limitations relevant to participants, the set of questions used in A-TEAMICU was not comprehensive with regard to the detailed execution of AMS in participating hospitals. As an example, we did not assess how feedback on AMS interventions was provided to prescribers. This aspect and other omissions were necessary to limit the size of the questionnaire and the time needed to complete the survey. Still, A-TEAMICU focuses on AMS from a primarily intensive care medicine point of view, and if a participant feels that AMS is implemented in their ICU, we believe that this is relevant information. The goal of this survey was not to assess a type of "best practice of AMS" in ICUs but rather to gain insight into the dissemination of basic components of ASPs. In conclusion, many ICU physicians who participated in the A-TEAMICU survey have AMS in their ICUs. A number of "core elements" of AMS are implemented in the respondents' hospitals (Table 2). Of particular interest, TDM of antimicrobials and optimized administration of antibiotics are broadly used.
2021-08-26T13:46:12.386Z
2021-08-26T00:00:00.000
{ "year": 2021, "sha1": "508a854f5414acf3b9530199a63f2cf5f34c14bf", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "508a854f5414acf3b9530199a63f2cf5f34c14bf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214241427
pes2o/s2orc
v3-fos-license
Impact of ocean mixed-layer depth initialization on the simulation of tropical cyclones over the Bay of Bengal using the WRF-ARW model

The sensitivity of the simulated tropical cyclone (TC) intensity and tracks to different ocean mixed-layer depth (MLD) initializations is studied using coupled weather research and forecasting (WRF) and ocean mixed-layer (OML) models. Four sets of numerical experiments are conducted for two TCs formed during the pre- and post-monsoon. In the control run (CONTROL), the WRF model is initialized without coupling. In the second experiment, the WRF-OML model is initialized by prescribing the MLD as a constant depth of 50 m (MLD-CONST). In the third experiment, the spatially varying MLD obtained from the isothermal-layer-depth formulation (MLD-TEMP) is used. For the fourth experiment (MLD-DENS), the model is initialized with the density-based MLD obtained from ARMOR-3D data. The results indicate that the CONTROL exhibits an early intensification phase with a faster translation movement, leading to early landfall and large track deviations. The coupled OML simulations captured the deepening phase close to the observed estimates, resulting in reduced errors both in the vector track and along the track of the storm. The initialization of the different estimates of the MLD in the WRF-OML shows that the TC intensity and translation speed are sensitive to the initial representation of the MLD for the post-monsoon storm. The gradual improvements in the intensity and translation speed of the storm with the realistic representation of the OML are mainly due to the storm-induced cooling, which in turn alters the simulated enthalpy fluxes supplied to the TC, leading to a better representation of the secondary circulation and the rapid intensification of the storm.

INTRODUCTION

Tropical cyclones (TCs) that form over warm tropical oceans are among the most destructive weather phenomena, with a high potential to damage coastal installations through their extreme winds, heavy rainfall, flooding and storm surges (Emanuel, 2003; Samala et al., 2013). As the coastal regions of the North Indian Ocean (NIO), mainly of the Bay of Bengal (BOB), are highly crowded, many densely populated cities, ranging from metropolitan to cosmopolitan, are vulnerable. The prediction of the movement and intensity of TCs and their associated rainfall over the BOB is of great interest and remains a challenging problem (Raghavan and Sen Sarma, 2000; Bhaskar Rao et al., 2001; Srinivas et al., 2010). Though the prediction of the movement of TCs over the BOB has improved significantly over the last few decades with the application of improved dynamic models and the assimilation of new satellite information using advanced assimilation schemes (Yesubabu et al., 2014), there is still a lag and uncertainty in the prediction of storms (e.g., Greeshma et al., 2015; Mohanty and Gopalakrishnan, 2016). One of the fundamental shortcomings of existing numerical models is their failure to recapture the observed pattern of TC intensities, which is possibly due to the lack of a proper representation of coupled ocean-atmosphere processes. Several theories (e.g., Gray, 1975; Emanuel, 2003; Yu and McPhaden, 2011; Vissa et al., 2013) postulate that warm oceans play a critical role in the genesis, intensification and maintenance of TCs by supplying the required energy through enthalpy fluxes.
Identifying and incorporating the critical ocean-atmospheric parameters is extremely useful for improving the skill of numerical weather models. A warmer sea surface temperature (SST), a deeper oceanic mixed-layer depth (OMLD) and the presence of anti-cyclonic eddies (positive sea level anomalies) are the prime indicators of higher upper ocean heat (UOH) content (e.g., Vissa et al., 2012; Mohan et al., 2015), which mainly influences storm intensity. Previous researchers (Leipper and Volgenau, 1972; Sadhuram et al., 2006; Vissa et al., 2013) suggest that storms intensify through the energy fluxes at the air-sea interface over regions of higher UOH. Coupled global models advocate that, along with the SST, the OMLD plays a crucial role in the prediction of TC intensity (e.g., Chan et al., 2001). Mao et al. (2000) highlighted that the intensification rate of simulated storms in a coupled oceanic-atmospheric model largely depends on the feedback of the OMLD. The OMLD acts as a source or sink in the development of a TC, depending on its depth. The upper ocean responds during and after the passage of a TC; turbulent and vertical (diapycnal) mixing are the primary plausible mechanisms that cool the sea surface, which can in turn weaken storm intensity (e.g., Vissa et al., 2012, 2013). Although this negative feedback of wind-induced ocean cooling on the intensity of the storm has already been reported (e.g., Chang and Anthes, 1979; Schade and Emanuel, 1999), it was not incorporated into storm-prediction models, both regionally and globally, until recently (Kim and Hong, 2010; Dare and McBride, 2011; Samala et al., 2013). The weather research and forecasting (WRF) model is widely used for TC predictions in both research and operational applications. For the NIO, several studies (Mohanty et al., 2004; Pattanayak et al., 2008; Srinivas et al., 2012) have proven that the WRF model is capable of simulating tropical storms with reasonable accuracy. In spite of its proven skill at simulating TCs across different basins, the model lacks an implicit representation of ocean feedback, which is only supplied through the variation of SSTs at the lower boundary. Ocean-atmosphere coupling functionality was introduced in the WRF model (v. 3.5) with a 1D OML module to simulate mixed-layer depth (MLD) in an integral sense (Pollard et al., 1973) and to incorporate its associated feedback into the TC. The impact of coupling the WRF with an ocean mixed-layer (WRF-OML) model was studied for the simulation of Hurricane Katrina by Davis et al. (2008), who indicate that atmosphere-ocean coupling successfully eliminated the spurious intensity errors found in the real-time predictions. The study by Chan et al. (2001) reveals the impact of the MLD initialization on TC simulations using a fully 3D-coupled ocean-atmospheric model; they clearly indicate that a deeper MLD is highly favourable for the development of more intense storms at a given translation speed. The coupled model simulations of the WRF-regional ocean modelling system (WRF-ROMS) by Zhao and Chan (2017) reveal that a slower translation speed and a shallower MLD could negatively impact the development of TCs through the development of a symmetric cold wake and the reduction of surface energy fluxes.
Most recently, Prakash and Pant (2017) showed that the coupled ocean-atmospheric model (WRF-ROMS) is capable of reproducing the observed storm-induced ocean response during the passage of Cyclone Phailin, which formed over the BOB. The aforementioned studies suggest that hurricane simulations can be improved if a realistic initial MLD is supplied to the WRF-OML. Recently, Wang and Duan (2012) introduced a second-order OML module in the WRF (WRF-OMLM-Noh) based on the model formulation of Mellor and Yamada (1982) and compared its performance with the existing results of the WRF-OMLM developed on a Pollard formulation (WRF-OMLM-Pollard). Their findings suggest that the performance of the OMLM-Noh is better than the existing version of the mixed-layer model in the WRF (OMLM-Pollard) in the realistic representation of the storm-induced SST, the subsequent MLD changes and the exchange of air-sea fluxes. They also attributed the improvement of the OMLM-Noh to the realistic representation of the MLD in the model's initial conditions. The tendency of the OMLM-Pollard to overestimate intensities has been recognized for a while (Zhu et al., 2004), and is attributed to the underestimation of storm-induced sea-surface temperature (SST) cooling and the subsequent reduction of ocean-coupling feedback in the form of fluxes in the WRF-OMLM (Yablonsky and Ginis, 2009). To replace the OML, several studies on the application of sophisticated atmosphere-ocean coupled models to TC cases have indicated the impact of oceans on the atmosphere, and vice versa (e.g., Warner et al., 2010; Zambon et al., 2014a, 2014b). For the BOB, the recent TC simulation study by Prakash et al. (2018) showed the advantage of using a fully coupled atmosphere-ocean-wave model (COAWST) over the uncoupled ROMS model when analysing storm-induced ocean mixing. However, the operationalization of these fully coupled three-dimensional (3D) ocean models demands high computational resources. Considering these limitations, Yablonsky and Ginis (2009) and Ginis et al. (2010) suggested that a simple one-dimensional (1D) mixed-layer model is sufficient to reproduce the TC-induced SST cooling with limited computational resources. For the BOB, very few coupled ocean-atmosphere modelling studies (Mohan et al., 2015; Srinivas et al., 2016; Greeshma et al., 2019) indicate that the response of storm-induced SST cooling on storm intensity predictions is captured by the coupled WRF-OML model. Mohan et al. (2015) attempted to analyse the impact of assimilating observations and the effect of air-sea coupling using the WRF-OML in the numerical simulation of Cyclone Nilam over the BOB. Their results also indicate that the incorporation of ocean coupling with a constant MLD initialization through the OML has an influence on the improvement of the predicted intensity of TCs, but the WRF-OML has little impact on track and rainfall predictions. A few studies in the literature examine the impact of air-sea coupling in the simulation of TCs over the BOB; however, they have employed the option of a constant MLD at the time of model initialization (e.g., Mohan et al., 2015). Also, a few studies over other basins have used ocean-coupled models with a realistic representation of the MLD in the initial conditions (Warner et al., 2010; Zambon et al., 2014a; Prakash et al., 2018). However, the role of the initialization of realistic MLD patterns in TC predictions has not been explored over the BOB.
The aim of the present study is to analyse the impact of the spatial variation of the MLD on the simulation of TC track and intensity predictions over the BOB by considering two TC cases that formed during two different seasons (i.e., pre- and post-monsoon). The rationale behind considering TCs in the pre- and post-monsoon seasons is to examine whether changes in the spatially varying MLD can alter the results of storm simulation over the BOB. Initial versions of the WRF-OML (up to v. 3.8) work with the initialization of a constant OMLD and deep-layer lapse rate; later versions (from v. 3.8) use the climatological MLD derived from the HYbrid Coordinate Ocean Model (HYCOM), which is based on the isothermal layer depth (ILD) formulation. The performance of the WRF-OML based on the ILD formulation is reasonable over regions (e.g., the mid-latitudes) where evaporation dominates or equals precipitation, since there the ILD coincides with the depth of the mixed layer (ML; calculated from density). However, over the BOB, where precipitation exceeds evaporation, the ILD exceeds the actual MLDs (Vissa et al., 2013), which prompts a search for density-derived MLD sources rather than initializing the WRF-OML model with the ILD as the MLD. The present study has used the MLD derived from the density criteria (ARMOR-3D level 4 data). The paper is structured as follows. A brief history of the two cyclones selected for study is given in Section 2. Section 3 describes the model configuration, experimental design and data used for defining the SST boundary conditions. The results of the simulations are presented in Section 4. Section 5 summarizes and gives the main conclusions drawn from the study.

DESCRIPTION OF THE HISTORY OF THE TWO TCs

To test the sensitivity of the OMLD initialization under different upper ocean conditions with the coupled WRF-OML, two tropical storms (Nargis and Vardah), which formed over the BOB in two different seasons (pre- and post-monsoon), were considered. Detailed reports regarding the history of the two selected TCs are available from the India Meteorological Department (IMD), Delhi (IMD, 2009, 2017). A brief description of the cyclones is now provided. Very severe cyclonic storm Vardah formed initially as a low-pressure system over the southeastern BOB in early December 2016. The storm reached the depression stage on December 6 and then intensified into a deep depression on December 7 that crossed the Andaman and Nicobar Islands. It moved northwards and intensified into a cyclonic storm on December 8. Slightly changing its direction westwards, Vardah consolidated into a severe cyclonic storm on December 9. The very severe cyclonic storm attained its peak intensity on December 11 with a central sea level pressure (CSLP) of 982 hPa and maximum sustained winds (MSWs) of 120 km·hr⁻¹. After it weakened into a severe cyclonic storm, the system made landfall over Chennai on December 12, weakened rapidly into a depression on the following day, passed over the Karnataka region and moved out into the Arabian Sea. Cyclone Nargis was one of the most devastating storms over the NIO in the decade 2000-2009. It originated as a depression over the southeast BOB on the morning of April 27, 2008, intensified into a cyclonic storm at 0000 UTC on April 28, and then into a very severe cyclonic storm at 0300 UTC on April 29.
The system initially moved in a northwesterly direction, then recurved northeastwards and crossed the southwest coast of Myanmar near 16° N between 1200 and 1400 UTC on May 2. It maintained very severe cyclonic storm intensity for about 12 hr after landfall.

DATA, MODEL CONFIGURATION AND EXPERIMENTAL DETAILS

In the present study, the advanced research WRF (ARW) (v. 3.9.1) and a simple OML (WRF-OML) are used to analyse the impact of mixed-layer dynamics on the simulation of TCs over the BOB. The modelling system is configured with two nested domains of 27 and 9 km horizontal resolution covering the BOB and its adjoining regions. The high-resolution inner domain covers the BOB and the east coast of India. The first domain, at 27 km, covers the Indian subcontinent and the NIO basin. The spatial extents of the two model domains are shown in Figure 1a. The model physics used in the study are adopted from previous modelling studies on physics sensitivity (Srinivas et al., 2013; Hari Prasad et al., 2016; Srikanth et al., 2016; Reshmi Mohan et al., 2018; Vijaya Kumari et al., 2019). The physics options used for the prediction of TCs over the BOB include: the Thompson scheme (Thompson et al., 2008) for cloud microphysics; the Dudhia scheme (Dudhia, 1989) for shortwave radiation; the rapid radiative transfer model (RRTM) (Mlawer et al., 1997) for longwave radiation; the Yonsei University non-local diffusion scheme (Hong et al., 2006) for turbulence; the Kain-Fritsch mass flux scheme (Kain, 2004) for cumulus convection; and the Noah land surface model for representing land surface processes (Chen and Dudhia, 2001). The initial conditions are provided by Global Forecast System (GFS) analysis available at a 0.5° × 0.5° horizontal resolution from the National Center for Environmental Prediction (NCEP). Lateral and lower boundaries are updated at 6 hr intervals. Even though the GFS analysis is available at 3 hr intervals, the lower boundary conditions (SST) are available at 6 hr resolution. To maintain the same time resolution for all the boundary conditions, the boundaries were updated with the GFS forecasts at 6 hr intervals. For the WRF-OML system, the ocean initial conditions were obtained from the global reanalysed data of the HYCOM (https://www.hycom.org/data/glba0pt08/expt-91pt2). The WRF-OML is a conventional bulk OML model based on the formulation of Pollard et al. (1973). It incorporates the feedback obtained from the wind-induced ocean mixing and deepening of the mixed layer into the atmospheric model in the form of SST changes. The OML model was introduced into the WRF by Davis et al. (2008), and is designed for the simulation of the mixed-layer process in an integral sense. Using the wind stress supplied to the turbulent mixed layer, the model simulates currents in the OML, which result in mixing with the colder, deeper layers. The entrainment of colder water from the deeper layers towards the top changes the surface temperatures and cools the OML, in turn changing the SST. This process is carried out only through the vertical redistribution of temperatures, as the OML assumes that no heat transfer takes place across the grid points. The effect of the Coriolis term is included in the OML to incorporate the rotation of inertial currents and its associated impact on mixing and SST changes. The model neglects the pressure gradient and horizontal advection terms.
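The bulk relations of a Pollard-type slab model of this kind can be sketched as follows; this is a hedged reconstruction consistent with the variable definitions given in the next paragraph, and the exact functional form of the deepening closure coded in the WRF-OML may differ:

\[
\frac{dh}{dt} \;\sim\; \gamma\,\frac{\Delta u}{R_i},
\qquad
R_i \;=\; \frac{g\,h\,\Delta\rho}{\rho\,(\Delta u)^2},
\]
\[
\frac{d\,(h\,T)}{dt} \;=\; \frac{Q}{\rho\,c} \;+\; T_b\,\frac{dh}{dt},
\]

where \( \Delta\rho \) and \( \Delta u \) are the density and velocity jumps across the layer base, \( \gamma \) is a tunable constant, and \( T_b \) is the temperature just below the base, set by the prescribed deep-layer lapse rate \( \Gamma \). Entrainment deepens the layer while \( R_i \) remains below a critical value of order one, the criterion proposed by Pollard et al. (1973); storm-induced cooling thus emerges when strong wind stress raises \( \Delta u \), lowers \( R_i \), and entrains cooler water (\( T_b < T \)) into the layer.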
The absence of the horizontal advection and upwelling terms in the OML formulation induces uncertainties of about 15% in the simulated storm-induced SST cooling (Price, 1981). For faster moving storms (translation speed of TCs > 4 m·s⁻¹), the horizontal advection term has little impact on the simulation of storm-induced SST cooling (Yablonsky and Ginis, 2009). Since the WRF-OML model computes the changes in the SST mainly based on the initial fields of the MLD, the deep-layer temperature lapse rate and the surface wind stress at the top, Davis et al. (2008) pointed out that the accuracy of the WRF-OML depends greatly on the specification of the initial values of the ocean's thermal state. Also, they suggested that spatially varying and realistic initial MLD values are needed to obtain reliable estimates of storm-induced SST variations. The OML basically computes the rate of change of the simulated mixed-layer depth, dh/dt. For an arbitrary time step n, the rate of change in the depth of mixing is expressed in terms of a constant γ, the velocity u and the Richardson number R_i, while surface ocean temperature changes are derived from the conservation of heat. In these relations, h is the MLD; T is temperature; Γ is the temperature lapse rate; c is the heat capacity per unit mass; ρ is seawater density; g is gravity; α is the thermal coefficient of expansion; u and v are horizontal velocities; Δu is the mean velocity; and Q is the heat flux down through the surface. Four sets of numerical simulations are performed for the two TC cases by initializing the model at 1200 UTC on April 28, 2008, for Nargis and at 0000 UTC on December 9, 2016, for Vardah, and integrating up to 120 hr. In the first experiment (CONTROL), along with the GFS analysis, the lower boundary SST fields are replaced by time-varying fields of high-resolution real-time global (RTG) SST; another three experiments are conducted for each TC by incorporating the coupling feedback of the OML physics through the WRF-OML system. This feedback in the WRF-OML is configured at every model time step (i.e., 60 s): the WRF model supplies surface wind stresses (τ) and heat fluxes (moisture, latent and sensible) to the OML, while the OML in turn provides the storm-induced response in the form of updated SSTs to the WRF (Davis et al., 2008; Mohan et al., 2015). Of the three WRF-OML experiments, the first is configured with a constant MLD of 50 m (MLD-CONST). In the second (MLD-TEMP), the coupled system is initialized with the spatial variation of the MLD computed from the isothermal temperature profiles of the HYCOM data set, and the lapse rate for the OML is configured at −0.14 °C·m⁻¹. In the third (MLD-DENS), the initial values of the MLD are replaced with density-based MLDs obtained from the three-dimensional (3D) thermohaline field (ARMOR-3D level 4 data). Though the density-based MLD could be derived from the global HYCOM reanalysed data set, which has a high horizontal resolution (about 0.08°), it has fewer vertical levels (32). In particular, the major limitation arises from the upper ocean surface levels, as the global data contain only seven layers within the upper 100 m of the ocean's surface. Therefore, the spatial MLDs obtained from the ARMOR-3D were used in the MLD-DENS experiment. The MLD product of the ARMOR-3D was derived from the density formulation by using the high-resolution density profiles of the Argo data.
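To make the density-based MLD concrete, a threshold criterion can be applied to a single density profile in a few lines. The sketch below is illustrative only and is not the ARMOR-3D processing chain: the profile values are invented, and a fixed density offset of 0.03 kg·m⁻³ is assumed as a stand-in for the temperature-equivalent criterion described next.

```python
import numpy as np

def mld_from_density(depth, rho, d_ref=10.0, drho=0.03):
    """Depth where density first exceeds the reference-depth value by `drho`.

    depth : 1-D depths (m, increasing downward)
    rho   : potential density (kg m-3) at those depths
    d_ref : reference depth (m) for the near-surface value
    drho  : density offset defining the mixed-layer base (kg m-3); assumed value
    """
    rho_ref = np.interp(d_ref, depth, rho)       # near-surface reference density
    exceed = np.where(rho > rho_ref + drho)[0]   # levels denser than ref + offset
    if exceed.size == 0:
        return depth[-1]                         # criterion never met in profile
    i = exceed[0]
    if i == 0:
        return depth[0]
    # linear interpolation between the two bracketing levels
    return np.interp(rho_ref + drho, rho[i - 1:i + 1], depth[i - 1:i + 1])

# invented profile: weakly stratified upper layer over a sharper pycnocline at 30 m
z = np.arange(0.0, 200.0, 5.0)
dens = 1021.0 + 0.002 * z + 0.04 * np.clip(z - 30.0, 0.0, None)
print(mld_from_density(z, dens))  # -> 25.0 m for this profile
```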
The ARMOR-3D MLD is computed based on the variable density criterion, that is, the depth at which density has varied from the near-surface value by a threshold equivalent to a temperature change of 0.2 °C (de Boyer, 2004). For more details about the density-based MLD data set, see Buongiorno Nardelli et al. (2017). Among the experiments (MLD-CONST, MLD-TEMP and MLD-DENS), only the spatially varying MLDs were changed while the same physics and domain configurations were maintained, except in the OML, and it was assumed that the simulated changes in track and intensity were primarily due to the combined effect of the initial MLD and the defined initial profiles of temperature and density. To validate the model results, the intensity parameters and track information for the two cyclones were obtained from the cyclone reports of the IMD (2009, 2017). The azimuthally averaged radial-height cross-sections of the temperature anomaly and tangential winds, which provide a quantitative structure of the cyclone from the model simulations, are compared with the Cooperative Institute for Research in the Atmosphere (CIRA) products of radial-height cross-sections of Advanced Microwave Sounding Unit (AMSU) profiles and multi-platform tropical cyclone analysis products (Knaff et al., 2011). The surface wind analysis from the CIRA is available at a horizontal resolution of 10 km and at 6 hr intervals. The gridded rainfall estimates from the IMD and Tropical Rainfall Measuring Mission (TRMM) 3B42v7 satellite-merged precipitation data are used for comparison with the model-simulated rainfall.

RESULTS AND CONCLUSIONS

The results are organized into three subsections. First, the time variation of the simulated storm tracks and intensity is presented. Then, the response of the coupled model in simulating the TC-induced features of the upper ocean, such as the simulated changes in the OML and the storm-induced SST cold wake, is discussed. Finally, the impact of ocean feedback on the secondary circulation, organization of winds and rainfall of the TCs is examined. All the results for the two TCs (Vardah and Nargis) are presented from the high-resolution (9 km) model domain.

Tracks and intensity of the simulated storms

The model-simulated tracks for the two cyclones, along with the IMD best track estimates, are shown in Figure 1b,c. The best track estimates of the IMD indicate a westward movement of the system in the case of the post-monsoon Cyclone Vardah, while an eastward movement is observed in the case of pre-monsoon storm Nargis. All the model simulations captured the pattern of the observed track during the initial 36-48 hr of simulation. Thereafter, the differences in the vector track are clearly visible between the coupled and control simulations. For Nargis, though the model is initialized at 1200 UTC on April 28, 2008, the best track data are available only between 1200 UTC on April 29 and 0000 UTC on May 3. The initial track deviations of the storm from the IMD in all experiments could be due to the dislocation of the initial vortex from that observed. The role of the WRF-OML coupling in the track predictions is seen only after 42 hr of model simulation in both TC cases. This slow response is probably due to the limitation of the OML model discussed in several studies (Zhu et al., 2004; Halliwell et al., 2008).
This is primarily due to the simulated OML response, as the resultant TC-induced cooling strongly depends on both the initial MLDs and the characteristics of the TCs (translation speeds and sizes of the storms) passing over the basin. The initial MLD values (Supporting Information Figure S1a-c) do not vary significantly (< 10 m). This could be the primary reason for the slow response in the track simulations for Nargis. The tracks of the coupled model simulations (Figure 1b) shifted southwards compared with the uncoupled simulations (CONTROL) and moved closer to the observed storm positions during landfall. The track variations in the WRF-OML are due to variations in the simulated mixed layer, which induce differences in the simulated SST cold wakes in the right forward sectors and shift the model storm towards the warmer SSTs (left forward sectors). This southward movement of the coupled model compared with uncoupled simulations has been discussed by Srinivas et al. (2017). In the present study, the translation speeds (km per 6 hr) are calculated for the two cyclones as the distance travelled by the model storm in 6 hr intervals. The translation distances (in 6 hr) for the CONTROL, MLD-CONST, MLD-TEMP and MLD-DENS are 102.4, 93.5, 95.8 and 91.3 km, respectively, for Nargis, and 113.9, 111.5, 108.3 and 107.9 km, respectively, for Vardah. The translation distances (in 6 hr) for the observed storm, computed from the best estimates of the IMD, are 89.7 and 86.5 km for Nargis and Vardah, respectively. Among all the experiments, the deviations in the track are greatest with the CONTROL, and the vector track spread between the WRF-OML experiments is relatively small. Apart from the vector track deviations, the translation speed of the storm is too fast in all the experiments, especially the CONTROL, for both TCs. Of the coupled model simulations, the MLD-DENS tracks are closest to the observed, and its translation speed improved when the model storm encountered relatively shallow MLDs. The time series of vector track error (VTE) computed against the IMD estimates for both TCs are shown in Figure 2a,b. The track errors indicate that the deviations of the CONTROL experiments increase with forecast length at a higher rate than the WRF-OML runs in both TC cases. The response of mixed-layer coupling on the VTE is seen only after 30-36 hr of simulation for both TC cases; the impact and spread of the VTE appear to be higher for the post- than for the pre-monsoon storm. This coincides with the previous discussion on the slow response of the OML (Halliwell et al., 2008) due to the small variations in the initial condition of the MLD. The coupled model simulations consistently simulated westward movement for Vardah and eastward movement for Nargis, closer to the observed storm until landfall, while the uncoupled simulations (CONTROL) moved northward during the mature phase of the storm. Also, the VTE is not only due to cross-track deviations of the model; it also arises from along-track deviations caused by variations in the translation speed of the model storm. Thus, the translation speed and VTE clearly suggest that experiment MLD-DENS gives a relatively better simulation, followed by the MLD-TEMP and MLD-CONST.
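The 6-hourly translation distances and the VTE quoted above both reduce to great-circle distances between pairs of positions. A minimal sketch follows; the track coordinates are invented placeholders, not the IMD best-track or model values:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between two points given in degrees."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

# hypothetical 6-hourly track positions (lat, lon) for model and best track
model = [(12.0, 88.0), (12.4, 87.2), (12.9, 86.4), (13.5, 85.7)]
best  = [(12.0, 88.1), (12.3, 87.4), (12.7, 86.7), (13.2, 86.1)]

# translation distance per 6 hr interval (km per 6 hr)
trans = [haversine_km(*model[i], *model[i + 1]) for i in range(len(model) - 1)]
print("mean translation distance:", np.mean(trans))

# vector track error: distance between model and observed centre at each time
vte = [haversine_km(*m, *b) for m, b in zip(model, best)]
print("VTE (km):", np.round(vte, 1))
```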
The present results suggest that in a basin such as the BOB, where precipitation exceeds evaporation, the variations in the initial MLD obtained from temperature and density criteria have a significant impact on the vector track variations of the simulated storm.

FIGURE 2 Time variation of the vector track error (VTE) (km) (a, b), simulated central sea level pressure (CSLP) (hPa) (c, d) and maximum wind (m·s⁻¹) (e, f) for Nargis (left) and Vardah (right). The arrows on the x-axes indicate the landfall points in the respective simulations.

The time variation of the simulated intensity parameters, CSLP and MSW, along with the best estimates of the IMD, is shown in Figure 2c-f. For Nargis, the observed estimates indicate that the lowest pressure drop and MSW are about 962 hPa and 47 m·s⁻¹, respectively, which occurred for a period of 6 hr. Though all the simulations underestimated the pressure drop, the CONTROL experiment predicted an early life cycle of the storm with a high intensification rate. It attained maximum intensity 30 hr before the observed, and the lowest pressure simulated by the CONTROL experiment was 968 hPa. Even though the OML experiments underestimated the intensity by nearly 8 hPa, resulting in weaker winds compared with the IMD and the CONTROL, the simulations seem to produce gradual deepening and mature phases of the storm close to the timings of the observed storm. The significant underestimation in the pressure drop between the model and the observed, from the pre-deepening to the weakening phases of the storm, is probably due to the cold-start initialization of the WRF model without the assimilation of conventional and satellite-derived data. Previous studies on the simulation of Nargis (Osuri et al., 2012; Srinivas et al., 2012) suggested that the assimilation of satellite-derived winds in the mesoscale modelling system is needed to improve the intensity predictions of Nargis. Moreover, in the coupled ocean-atmospheric model, the OML component, which is a simple slab ocean model, does not account for all the ocean-wave-atmosphere interactions, such as the wind wave-induced positive feedback on storm intensity (Liu et al., 2011). Therefore, considering these limitations, Osuri et al. (2013) highlighted the need for a fully coupled ocean-atmospheric assimilation system for the better prediction of TCs over the NIO. The time variations of the MSW in all the experiments of Nargis exhibit slight variations in attaining the maximum intensity (by 12 hr) compared with the CSLP. The maximum surface winds simulated in the coupled model are about 34, 36 and 35 m·s⁻¹ at 1200 UTC on May 1, 2008, by the MLD-CONST, MLD-TEMP and MLD-DENS, respectively. Moreover, the simulated maximum surface winds in all the experiments are underestimated by 10-15 m·s⁻¹. As discussed in the CSLP analysis, the underestimation of intensity by the model is probably due to the lack of an initial vortex at the time of initialization and also to a mesoscale model tendency, as indicated by several authors (Trivedi et al., 2002; Osuri et al., 2012; Srinivas et al., 2012), which needs to be addressed by increasing the model resolution, with a proper representation of the initial vortex and the assimilation of all available observations. Thus, the intensity estimates of Nargis suggest that the impact of initializing the spatial MLD variations is clearly seen in the simulation of the TC's intensity.
However, the intensity differences between the mixed-layer experiments are less sensitive for Nargis, particularly between the spatial variations of the MLD derived from density and temperature. In the case of Vardah, the IMD estimates showed a gradual deepening of the storm for a period of 36 hr; it attained its lowest pressure drop of 975 hPa and an MSW of the order of 37 m·s⁻¹ in its mature phase, which occurred between 0000 and 0600 UTC on December 12, 2016. The CONTROL experiment rapidly intensified and attained maximum intensities of 978 hPa and 33 m·s⁻¹ 12 hr earlier than the observed estimates. The intensity differences between the OML experiments were small in the first 24 hr of the model integration, and the variations in intensity started to appear after 30 hr of simulation. The differences in both the intensity and the time of attaining maximum intensity were seen when the model reached its mature stage. As discussed in the track simulations, the slow response of the OML (Zhu et al., 2004; Halliwell et al., 2008) is probably the major reason for this time lag of 30 hr. Zhu et al. (2004) reported similar variations in the coupled OML simulations of the fifth-generation mesoscale model (MM5), and their study attributed them to the response of the OML. Further, Zambon et al. (2014b) suggested that coupling the wave component (Simulating WAves Nearshore, SWAN) with the atmospheric model (WRF) can provide the required feedback of surface roughness length coefficients to the atmospheric model and produce a significant difference in simulations of the maximum wind speed and pressure drop just after initialization of the WRF model (as seen in the analysis of Hurricane Sandy). Of all three OML experiments, the MLD-CONST attained maximum intensity first, 12 hr before the MLD-DENS and 18 hr before the MLD-TEMP. Though the MLD-DENS captured the observed intensity pattern during the deepening phase of the storm, it attained the observed maximum intensity of 975 hPa 18 hr before the observed time. The MLD-TEMP simulated a more slowly intensifying storm and attained the observed maximum intensity (975 hPa) after 60 hr of model simulation. Interestingly, its time of attaining maximum intensity is within 12 hr of the observed estimates. However, the intensity of the storm is overestimated by the MLD-TEMP during the deepening phase, between 36 and 54 hr of model simulation. The earlier intensification of the MLD-DENS compared with the MLD-TEMP is possibly due to the specification of shallower depths obtained from the density profiles, which suppressed the SST cooling and enhanced the flux feedback from the ocean, favouring the rapid intensification of the storm. The effect of incorporating the spatial MLD on TC prediction is clearly visible in the case of post-monsoon storm Vardah, as the intensity differences are seen from the deepening phase of the storm; this could be due to the association of strong winds and a deepening of the MLD, which is examined in the next section.

Storm-induced ocean response

In this section, the simulated TC-induced ocean response from the three WRF-OML experiments is presented in terms of the simulated spatial variation of the MLD, the storm-induced SST cooling and the flux feedback. The WRF-OML estimates the MLD after computing the rate of change in the depth of mixing, which is directly proportional to the surface wind speed.
The initial MLDs (Supporting Information Figure S1) supplied to the WRF-OML indicate that there are minor differences in the spatial distribution of the MLD computed from the isothermal temperature and density formulations in the two TC cases. The deviations of the spatial MLD are large in the case of the post-monsoon TC (Vardah) as compared with pre-monsoon storm Nargis. To illustrate how the initial differences in the spatial distribution of the MLD between the MLD-TEMP and MLD-DENS are directly reflected in the storm-induced deepening, the spatial distribution of the simulated MLD, along with the corresponding storm tracks, is shown in Figure 3. In the case of the MLD-CONST, a maximum MLD deepening of about 5-10 m is noticed only during its peak intensification period, while in the MLD-TEMP there is a clear signal of a deeper MLD, up to 15-20 m along the track, gradually decreasing from the right side to the left side of the track. A similar pattern is also seen in the MLD-DENS, with a high magnitude of deepening of up to 20 m. The results of the simulated response of the OML suggest that the deepening of the MLD clearly occurred on the right side of the storm's passage for both TCs (Supporting Information Figure S2). The results suggest that the simulated deepening of the OML is highly sensitive to the initial MLDs supplied to the WRF. Also, the simulated deepening is clearly seen for the MLD-DENS, and is found to be high for Vardah. The impact of deepening is found to be smaller (5-10 m) for the MLD-CONST. Previous studies (Maeda, 1964; Gentry, 1970) showed that when an intensified cyclone passes over a shallow mixed layer, storm-induced strong winds lead to the upwelling of ocean waters from the subsurface, cooling the surface waters. This leads to a decrease in the SST during the passage of the cyclone, which in turn acts as a negative ocean feedback, reducing the flux feedback and suppressing the intensification of the TC (Mao et al., 2000). To present the storm-induced SST cold wake, the simulated SST difference (Figure 4), computed between the SSTs at the time of model initialization and after the passage of the TC, along with the corresponding microwave SST obtained from the special sensor microwave imager (SSMI), is presented. The SSMI SSTs clearly indicate storm-induced cooling (> 1.5 °C) for both TCs on the right side of the cyclone track. The results of the simulated cold wake in both TC cases indicate that when the MLD is set to a constant (50 m) in the initial conditions (MLD-CONST), SST cooling is observed along the right side of the storm track, but its magnitude is small (0.25-0.5 °C) compared with the other WRF-OML simulations. The magnitude of the storm-induced SST cooling is slightly increased, from 0.5 to 0.75 °C, when the initial MLD is obtained spatially from the isothermal layer depth (MLD-TEMP). Though the simulated MLD of the MLD-TEMP is deeper than that of the MLD-CONST (Figure 3), the storm-induced SST cooling is higher in the case of the MLD-TEMP, because the initial MLD is shallower in the MLD-TEMP than in the MLD-CONST (Supporting Information Figure S1).
The initialization of a spatially varying MLD produced significant changes in the simulated life cycle of the storm and resulted in stronger simulated surface winds (than the MLD-CONST), which mixed the layer deeper (Supporting Information Figure S2b) and might have produced the strong cooling conditions reflected in the form of SST cooling. The magnitude of the storm cooling (> 1.25 °C) is significantly increased, and a well-marked pattern is observed on the right side of the cyclone track, in the simulation with the MLD computed from density (MLD-DENS). The simulated storm-induced SST is one of the prominent parameters for coupled model simulations of TCs. The results indicated that the simulated SST cold wake obtained from the MLD-DENS realistically captured the observed pattern of the SSMI SST product in both cyclone cases. However, there is an underestimation of storm-induced SST cooling (0.5-1 °C) even in the best simulation (MLD-DENS); this is probably due to the limitation of the 1D OML model highlighted by Yablonsky and Ginis (2009). Their study shows that the OML (initialized with deeper MLDs) has a tendency to underestimate significantly (by more than half) the storm-induced SST cooling, particularly for TCs translating at speeds of 3.5 and 5.0 m·s⁻¹. Also, their study attributed the underestimation in ocean surface cooling primarily to neglecting the upwelling mechanism. Kossin (2018) indicates that the average translation speed for NIO storms is in the range of 3.5-4 m·s⁻¹. Owing to these factors, the underestimation of the simulated storm-induced cooling can be mainly attributed to the limitation of the OML model, which can be rectified by coupling with a 3D ocean model (Zambon et al., 2014b; Prakash and Pant, 2017). The ML acts as a source enhancing or reducing the intensity of the storm passing over it, and the upper ocean feedback to the TC atmosphere occurs primarily through the exchange of enthalpy fluxes (latent and sensible heat). To illustrate the impact of the simulated MLD changes on the surface fluxes, time series of enthalpy fluxes averaged over a region of 4° × 4° around the TC centre are presented in Figure 5. The results show that there are significant variations in the magnitude of the fluxes between the coupled and uncoupled models. Also, the variation of the sensible heat flux is large for the pre-monsoon storm, as warm summer temperatures could possibly enhance the sensible flux feedback in the coupled model simulations of Nargis, while the major differences in latent heat flux are seen after the storm reaches its highest intensity in both cases. Srinivas et al. (2017) also point out similar variations in the flux feedback in TC simulations between coupled and uncoupled models, and their study explained that the lack of a formulation of wind-induced mixing in uncoupled models is a possible reason for these flux variations. Among the coupled model simulations, significant differences were found mainly in the mature and weakening stages of the storm. To illustrate the precise differences in the simulated fluxes, Supporting Information Figure S3 shows the time variation of the area-averaged latent and sensible heat fluxes over the four quadrants around the centre of the TC (left forward, right forward, right rear and left rear), with the quadrants determined from the simulated track position of the storm at each time interval.
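The flux time series just described are storm-following box averages; a minimal sketch of a 4° × 4° storm-centred average over a gridded flux field is given below, with an invented field and centre position (this is not the post-processing used in the study):

```python
import numpy as np

def storm_box_mean(field, lats, lons, c_lat, c_lon, half_width=2.0):
    """Mean of `field` within a (2*half_width)-degree box centred on the storm.

    field : 2-D array (nlat, nlon), e.g. latent heat flux (W m-2)
    lats, lons : 1-D coordinate arrays in degrees
    c_lat, c_lon : storm-centre position at this output time
    """
    ilat = (lats >= c_lat - half_width) & (lats <= c_lat + half_width)
    ilon = (lons >= c_lon - half_width) & (lons <= c_lon + half_width)
    return field[np.ix_(ilat, ilon)].mean()

# hypothetical gridded latent heat flux and a storm centre at 14 N, 86 E
lats = np.arange(5.0, 25.0, 0.1)
lons = np.arange(78.0, 98.0, 0.1)
lhf = 150.0 + 100.0 * np.random.rand(lats.size, lons.size)
print(storm_box_mean(lhf, lats, lons, 14.0, 86.0))
```

Quadrant averages (left/right forward and rear) follow from the same selection once the box is split along and across the storm's direction of motion.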
Maximum variations in the simulated fluxes are seen in the left quadrants of the storm, where the maximum winds occur; a reduction of latent heat flux and increases in sensible heat fluxes are seen mainly in the left forward and left rear sectors. Significant differences are found in the coupled simulations, mainly from the deepening to the mature stages of the storm. The CONTROL simulation exhibits minimum variations in sensible heat flux, while they are maximum in the case of the simulated latent heat fluxes in all quadrants; this is possibly due to the reduction of storm-induced cooling in the CONTROL, which enhanced the surface moisture fluxes and reduced the sensible flux transfer from the ocean to the atmosphere.

FIGURE 5 Time series of the simulated area-averaged sensible (top) and latent heat fluxes (bottom) around the cyclone grid for Nargis (left) and Vardah (right).

In the coupled OML simulations, the maximum fluxes are simulated by the MLD-CONST, while the minimum transfer of fluxes in all quadrants is seen in the case of the MLD-DENS; this is mainly due to the simulation of enhanced storm-induced cooling in the MLD-DENS, which reduced the transfer of fluxes to the WRF.

Surface winds simulated

In this section, the simulated surface winds are analysed against the multi-platform winds from the CIRA at the mature stage of the cyclonic storms considered in the study. The vector track positions clearly indicated that there are significant differences in translation speed between the coupled and uncoupled models. For Nargis, the CIRA winds (Figure 6) are considered from 0000 UTC on May 2, 2008, while the simulated winds are considered at 1800 UTC on May 1. The observed estimates from the CIRA indicated strong winds of magnitude 65-80 kn confined to the right rear sector of the cyclonic storm, and surface winds of magnitude 35 kn over wider regions from 13° to 18° N, with a circular symmetry around the centre of the TC positioned at 93° E and 16° N. The observed pattern of isotachs around the centre of the storm and the spatial distribution of maximum winds (> 35 kn) are well captured by the MLD-DENS and MLD-TEMP simulations, with simulated peak winds of 65 kn in the right forward sector of the TC. Moreover, the position of the TC centre and the spatial extent of the maximum winds simulated by the MLD-DENS are close to the observed estimates of the CIRA. The MLD-CONST simulated slightly weaker winds, with speeds around 55 kn. Also, the organization of the isotachs is slightly shifted to the left forward sectors, and the position of the system is close to the coast of Myanmar. Though a similar pattern of weaker winds (45 kn) confined over a smaller region is noticed in the CONTROL, the position of the isotachs is shifted to the rear sectors of the TC, and the centre of the TC is displaced from the observed by around 100 km. In the case of Vardah, the CIRA observation is considered at 1800 UTC on December 11, 2016, when the storm attained its mature stage and was about to make landfall, while the simulated winds are shown at 0600 UTC on December 11, 2016. The CIRA estimates clearly indicate strong winds of > 50 kn in the right rear sector of the TC. The spatial distribution and magnitude of the winds are well captured in the MLD-TEMP and MLD-DENS, while the CONTROL and MLD-CONST predicted a weaker storm with a lower magnitude of winds (< 50 kn), and the spatial extent of the maximum winds (50 kn isotach) is confined to a smaller region.
Although the surface winds simulated by the experiments initialized with a spatially varying MLD closely match the pattern and locations of the observed maximum winds (50 kn), the spatial extent of the isotachs is slightly overestimated in these experiments (MLD-TEMP and MLD-DENS). This is probably due to the change in the simulated life cycle of the storm after initialization with a spatially varying MLD. The IMD estimates suggest that the observed storm attained its maximum intensity between 0000 and 0600 UTC on December 12, 2016, for Vardah. However, the CONTROL, MLD-CONST, MLD-TEMP and MLD-DENS experiments showed early intensification and attained their mature stages, respectively, at 1800 UTC on December 10, 0000 UTC on December 11, 1200 UTC on December 11 and 1200 UTC on December 11, 2016. As the surface wind comparison time (1800 UTC on December 11) is close to the mature stage of the MLD-TEMP and MLD-DENS, the surface winds were slightly greater in magnitude in these two experiments.

Radius-height wind and temperature of Nargis

To illustrate the impact of mixed-layer coupling on the simulation of the secondary circulation of the TC, radius-height distributions of the azimuthally averaged tangential winds are plotted (Figure 7a-e) for Nargis at the mature stage of the TC (0000 UTC on May 1, 2008). The observed estimates of the gradient wind indicate calm winds (< 10 kn) within a radius of 20 km from the centre of the TC and strong winds (40 kn) that spread vertically up to 10 km at a radial distance of 80-180 km; further out, the magnitude of the gradient winds decreases significantly in both the vertical and horizontal directions. The simulations show the strongest gradient winds when the model is initialized with the MLD-DENS, though the models also overestimated the winds. The results of the coupled and uncoupled models show that the width of the calm wind region and the magnitude of the maximum gradient wind are similar and close to the observed estimates. However, the radial distance and vertical extension of the strong gradient wind distribution (50 kn) differ in all four simulations. The CONTROL simulations exhibited a weaker secondary circulation, with maximum winds (40 kn) extending up to 4 km vertically and 60 km radially; similar results are also noticed in the MLD-CONST, with the maximum wind confined to a very small radius of 40 km. The results for the secondary circulation are improved with the MLD-TEMP, with the strongest winds (40-50 kn) extending vertically and horizontally up to 5 and 80 km, respectively. The better comparison found with the MLD-DENS indicates the presence of the strongest winds up to 6 km within a radius of 100 km from the centre of the storm. Similarly, the radial-height section of the azimuthally averaged temperature anomaly for Nargis at the mature stage of the storm (0000 UTC on May 1, 2008) is presented in Figure 7f-j. The observed temperature anomaly is available directly from the CIRA, whereas the corresponding simulated temperature anomaly is computed as the difference between the temperature at the mature stage of the TC and the temperature before the formation of the storm. The observed estimates of the temperature anomaly show cooling at lower levels, up to 6 km, and a warming region (3 °C) between 10 and 15 km, extending from the centre of the storm to a radial distance of 50-100 km.
The observed estimates indicate a strong convergence and associated cooling due to evaporation at lower levels, and the existence of a strong outflow region and the subsidence of air, resulting in warming in the mid-troposphere. Though the presence of upper tropospheric warming was captured in all experiments, the model failed to simulate the lower tropospheric cooling. Also, the upper tropospheric warming is overestimated (by 1-2 °C) by the model compared with the observed CIRA estimates.

Simulated rainfall

The spatial distributions of 24 hr-accumulated rainfall, for Nargis from 0000 UTC on May 1 to 0000 UTC on May 2, 2008, and for Vardah between 0000 UTC on December 12 and 0000 UTC on December 13, 2016, are compared with the observed rainfall estimates of the TRMM 3B42v7 (Figure 8). In the case of Nargis, the model overpredicted the rainfall in all experiments, with a slight displacement of the rainfall band, probably due to the faster translation movement compared with the observed estimates, and with the maximum overestimation found in regions north of the track. The TRMM rainfall pattern indicates maximum rainfall distributed in the southern parts of the storm track. The uncoupled model simulated the highest rainfall (> 200 mm), with the maximum rainfall distributed mainly over the north-central BOB, and the simulated rainfall reduced as the storm moved close to the coast of Myanmar. The pattern of rainfall from the coupled model simulations indicates that the rainfall simulated with the WRF-OML is sensitive to the initial MLDs. Among the coupled model simulations, the MLD-DENS is closest to the spatial distribution of the TRMM rainfall, and the overestimation of rainfall is significantly reduced when compared with the MLD-CONST and MLD-TEMP. The rainfall in these simulations (MLD-CONST and MLD-TEMP) was significantly overestimated in the northern sectors close to Myanmar due to the faster translation movement of the simulated storm. In the case of Vardah, all the experiments slightly underestimated the rainfall and failed to capture the observed symmetric distribution of rainfall, particularly that over the southern parts of the storm track seen in the TRMM. Though the spatial pattern of the simulated rainfall was close to the observed estimates, the heavy rainfall band was shifted completely northwards by 100 km due to the northwesterly movement of the storm track in the CONTROL. Among the coupled model simulations, the MLD-DENS produced a better comparison with the observed estimates along both the right and left sectors of the track, and the rainfall peak over the coast of Chennai. Thus, the role of the initialization of the MLD in the simulated TC rainfall is seen in both pre- and post-monsoon storms, but the impact seems to be more significant for post-monsoon storm Vardah. Moreover, the rainfall simulation with the MLD-DENS seems to produce a realistic pattern of rainfall compared with the other coupled model simulations, suggesting that the initialization of the MLD obtained from the density criteria has a positive effect on the simulation of TC rainfall.

SUMMARY AND CONCLUSIONS

In the present study, the response of the simulated tropical cyclone (TC) intensity and track to the initialization of the variable oceanic mixed-layer depth (OMLD) has been analysed for two TCs that formed during the pre- and post-monsoon seasons, using the ocean mixed-layer (OML) model coupled with the weather research and forecasting (WRF) model.
The impact of the OML processes on the prediction of the intensity and track of TCs that formed over the Bay of Bengal (BOB) is also analysed using the best track estimates of the India Meteorological Department (IMD). Of the four numerical experiments, the differences in the simulated track are seen mainly between the CONTROL and WRF-OML simulations, and the translation speed of the storm is found to be faster in the CONTROL simulation. As for the coupled WRF-OML simulations, the translation speeds and vector track error (VTE) clearly indicate that the cross-track variations among the coupled model simulations are small, and the differences are seen mainly along the track due to changes in the translation movement of the storm, particularly for pre-monsoon storm Nargis. Of all the simulations, the MLD-DENS performs relatively better, followed by the MLD-TEMP and MLD-CONST, suggesting that the dynamic initialization of the MLD obtained from density criteria produces positive results for the track simulations, particularly for post-monsoon storm Vardah. The intensity differences between the mixed-layer experiments are less sensitive for pre-monsoon storm Nargis; however, significant differences in the intensity are seen for post-monsoon Vardah. In the case of the latter, the differences in intensity are seen from the deepening phase of the storm. Also, early intensification is noticed in the case of the MLD-DENS, which could be attributed to the specification of shallower depths in the MLD-DENS that reduced the sea surface temperature (SST) cold wake and enhanced the flux feedback from the ocean, leading to the simulation of enhanced intensification of the storm. Moreover, saline stratification and the presence of a thick barrier layer in the BOB during the post-monsoon season also contribute to the reduced TC SST cooling (e.g., Balaguru et al., 2014; Foltz et al., 2018). To identify the major factors that play a critical role in the simulated differences in track and intensity, the results of the ocean response, in terms of variations in the simulated MLD and storm-induced SST cooling and their cumulative effect on the simulated ocean fluxes, are analysed. The results of the simulated MLD and storm-induced SST suggest that the storm-induced MLD deepening and SST cooling are primarily observed on the right side of the storm passage, and the storm-induced deepening is high for the MLD-DENS, in particular for Vardah. The results of the surface enthalpy fluxes indicate that significant variations exist in the magnitude of the fluxes, mainly between the coupled and uncoupled models; the variation in the sensible heat flux is large for the pre-monsoon storm, while the major differences in latent heat flux are seen for post-monsoon storm Vardah. These differences in fluxes in the numerical simulation of the WRF-OML seem to produce a better pattern for the simulated surface winds and the secondary circulation of the TC, with consistent improvement seen from the initialization with a constant MLD to the spatially varying MLD derived from temperature and density. A comparison of the Cooperative Institute for Research in the Atmosphere (CIRA) surface winds and the secondary circulation of the TC is made through the radial-height sections of gradient wind and temperature anomaly. The results provide evidence for the better agreement of the MLD-DENS.
Though the rainfall simulations from all the experiments are not close to the observed estimates of the Tropical Rainfall Measuring Mission (TRMM), the initialization of the spatially varying MLD shows a positive impact on the simulation of TC-induced rainfall, especially for the post-monsoon storm compared with the pre-monsoon one. Among the three different MLD initializations, the positive impact on rainfall is observed with the initialization where the MLD is computed from density. This suggests that the dynamic initialization of the MLD obtained from density criteria better simulates the post-monsoon storms.
2019-12-12T10:31:28.442Z
2019-12-09T00:00:00.000
{ "year": 2019, "sha1": "6a7ccedb7a5db1ba78def6e3e7ab3f80682798ff", "oa_license": "CCBY", "oa_url": "https://rmets.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/met.1862", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "ff72ef3a01d1cc42726fc5a3ec76375a31a1091f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
244827238
pes2o/s2orc
v3-fos-license
Pesticides Burden in Neotropical Rivers: Costa Rica as a Case Study

Neotropical ecosystems are highly biodiverse; however, the excessive use of pesticides has polluted freshwaters, with deleterious effects on aquatic biota. This study aims to analyze concentrations of active ingredients (a.i.) of pesticides and the risks posed to freshwater Neotropical ecosystems. We compiled information from 1036 superficial water samples taken in Costa Rica between 2009 and 2019. We calculated the detection frequency for 85 a.i. and compared the concentrations with international regulations. The most frequently detected pesticides were diuron, ametryn, pyrimethanil, flutolanil, diazinon, azoxystrobin, buprofezin, and epoxiconazole, with presence in >20% of the samples. We observed 32 pesticides with concentrations that exceeded international regulations, and the ecological risk to aquatic biota (assessed using the multi-substance potentially affected fraction (msPAF) model) revealed that 5% and 13% of the samples from Costa Rica pose a high or moderate acute risk, respectively, especially to primary producers and arthropods. Other Neotropical countries are experiencing the same trend, with high loads of pesticides and consequent high risk to aquatic ecosystems. This information is highly valuable for authorities dealing with prospective and retrospective risk assessments for regulatory decisions in tropical countries. At the same time, this study highlights the need for systematic pesticide residue monitoring of fresh waters in the Neotropical region.

Introduction

Neotropical regions are recognized worldwide for their biodiversity. Antonelli and Sanmartín [1] stated that this is "the most species-rich region on Earth", and Costa Rica is not the exception. According to data from the State of the Environment Report [2], the country hosts 5% of the world's biodiversity. However, the same report and [3] consider that although the country has managed to make good decisions in conservation, one of the central oversights in terms of environmental protection has been the management of agrochemicals, their excessive use, and their contaminating effects on the different environmental compartments (air, water, soil), as well as on wildlife and human health [4,5]. Tropical climates have the advantage of allowing year-round cultivation, but this implies the year-round application of agrochemicals as well. Therefore, pesticides become "pseudo-persistent" and recurrent water pollutants [26] because, even though the half-life of many pesticides is short and they can be degraded in a few days, the high application rates in the field result in the detection of these substances in water bodies almost permanently. For example, [11] showed that the fungicide pyrimethanil and the herbicide diuron have a detection frequency of almost 90% in the water samples from the Madre de Dios River basin, while the insecticide ethoprophos and the fungicide epoxiconazole have frequencies of more than 70%. Very high detection frequencies (>50%) are also common in other areas of the country, with different active ingredients varying according to the predominant crops [10,27]. It is clear that monocultures (especially genetically modified crops) have expanded greatly in Latin American countries, and with this expansion, higher use of pesticides has also occurred [28]. In Central America, more than 180,000 tons of 353 a.i.
were imported between the years 2005 and 2009 [29], and even though not all of the imported pesticides are used in the same area, it is clear that a considerable amount of toxic substances is released into the environment regularly in Neotropical countries. When these substances enter water bodies, they interact with the abiotic and biotic components of the ecosystem. The interaction with biota involves processes of entry, metabolization, and/or accumulation in organisms, which can produce direct or indirect deleterious effects [30][31][32]. In events of severe contamination, it is expected that species or entire groups of organisms that are more sensitive or lack escape mechanisms will disappear [33,34]. Therefore, the concentration or toxicity of the pesticides themselves may explain much of the variation in aquatic species community structure, even at regional scales [35,36]. Stehle and Schulz [37] present information indicating that the richness of macroinvertebrate families was reduced by ~30% in the presence of pesticide concentrations that represent acceptable limits at the regulatory level, and that a reduction of up to 63% can be observed at sites with concentrations that exceed acceptable limits. The same authors refer to information that reports concentrations of insecticides that exceed the regulatory limits. It is therefore noteworthy that this situation is widespread and that aquatic organisms are exposed to unacceptable concentrations of pesticides, mainly in tropical countries, where protection measures are laxer and the use of pesticides has increased. For this reason, this study gathered the data from 11 research projects carried out in 5 different regions of Costa Rica, as a case study to generate information on the detection frequency, toxicity, and retrospective environmental risk of pesticides measured in field samples from more than 160 sites. We aimed to reflect the conditions of Neotropical agriculturally influenced rivers and calculate the potential effects of that pesticide burden on the biota of such aquatic ecosystems. Pesticide Detection and Frequency With the collection and digitalization of the information presented in Table S1, a unified database was generated. This database contains the results of pesticide residue analyses for 1036 water samples taken throughout Costa Rica. The pesticide residue analysis database reveals 85 different active ingredients (a.i.) or degradation products that were analyzed in the water samples. Of these, 72 were detected (Table 1). Amongst the analyzed but not detected a.i. are bifenthrin and deltamethrin (pyrethroid insecticides), cyproconazole and fenbuconazole (triazole fungicides), fenthion and malathion (organophosphates), as well as various metabolites of organochlorine pesticides such as PCP, PCNB, DDT, and endosulfan. The majority of these organochlorine pesticides have been forbidden or restricted in Costa Rica since 1999 and 2005 (SFE, 2020); however, their degradation products are still detectable in other environmental matrices (dust, air [38]). Pérez-Maldonado et al. [39] also assessed DDT levels in samples from México and Central America, detecting both DDT and DDE metabolites in soil, fish tissue, and children's blood.
The 72 detected a.i. are representative of several biocide actions and chemical groups, including triazole, benzimidazole, aromatic hydrocarbon, pyridine, imidazole, and chlorinated fungicides; triazine, uracil, urea, oxazolidinone, and triazinone herbicides; organophosphate, organochlorine, pyrethroid, carbamate, thiadiazine, and neonicotinoid insecticides; as well as acaricides and nematicides, among others. They are also representative of a great diversity of toxic modes of action, which is presented in Table S2. There are some herbicides (namely diuron and ametryn), fungicides (pyrimethanil, flutolanil, azoxystrobin, epoxiconazole, and myclobutanil), and insecticides (diazinon, buprofezin, chlorpyrifos, and ethoprophos) for which high detection frequencies (≥20%) are observed at a national scale (Table 1). Furthermore, there are four forbidden substances (lindane, hexachlorobenzene, carbofuran, and bromacil) that were detected in water samples. Lindane and hexachlorobenzene have been forbidden since 1999 and 2005, respectively; therefore, the detections imply illegal use of these pesticides in the mountain horticulture regions of the Central Volcanic Range. On the other hand, carbofuran, which was forbidden in 2014, was detected mostly prior to that year; however, one detection was registered in 2016. This could be the result of the use and application of product remnants already in existence (imported before the ban), but this would be highly improbable for the present and future years and should be analyzed in more detail by the authorities, since a high risk for aquatic biota has been demonstrated for this a.i. [7,18,27]. Bromacil is one of the most recently forbidden a.i. (2017), and it was also detected in subsequent years (up to 2020); consequently, the risks associated with the potential leaching of this pesticide into groundwater are still of concern, as they have been in other countries [40,41]. Differences in detection frequencies can be observed between regions in Costa Rica (Figure 1), with a higher frequency of fungicides in the Caribbean > mountain horticulture > South Pacific > North Pacific > Northern Caribbean > Central Pacific. Herbicides were more frequently detected in the South Pacific > Caribbean > North Pacific > Northern Caribbean > horticulture > Central Pacific, while insecticide and nematicide frequencies were highest in the mountain horticulture > Caribbean > South Pacific > Northern Caribbean > North Pacific > Central Pacific. It is noteworthy that the Central Pacific region has a considerably lower sampling effort than other areas, and almost no pesticides were detected in the analyzed samples; however, Rodríguez-Rodríguez et al. [42] conducted an intensive sampling campaign (84 water samples) from 2008 to 2011 in melon- and watermelon-influenced catchments and found one fungicide and seven insecticides in concentrations that pose an acute and chronic risk to Daphnia magna, fish, and Chironomus riparius. This situation highlights the importance of increasing the sampling effort in that region. Furthermore, the highest individual pesticide frequencies were registered where more sampling effort has occurred; for example, in the horticulture mountain regions, chlorpyrifos was detected in 60% of the samples; in the South Pacific, diuron was detected in 64% and bromacil in 49% of the samples, while in the Caribbean, diuron, ametryn, pyrimethanil, diazinon, and azoxystrobin were detected in >40% of the samples.
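The detection-frequency statistics reported above reduce to a simple count of samples with a quantifiable residue per a.i. (and per region). The sketch below illustrates that calculation; the table layout and column names are hypothetical stand-ins, not the structure of the actual LAREP/CICA database.

```python
# Minimal sketch of the detection-frequency calculation described above,
# assuming a hypothetical table with one row per (sample, a.i.) pair.
# Column names are illustrative only.
import pandas as pd

# Illustrative records: conc_ug_L is NaN when the a.i. was not detected.
records = pd.DataFrame({
    "sample_id": [1, 1, 2, 2, 3, 3],
    "region":    ["Caribbean", "Caribbean", "Caribbean",
                  "Caribbean", "South Pacific", "South Pacific"],
    "ai":        ["diuron", "ametryn", "diuron", "ametryn", "diuron", "ametryn"],
    "conc_ug_L": [0.12, None, 0.05, 0.30, 0.44, None],
})

def detection_frequency(df: pd.DataFrame, by=("ai",)) -> pd.Series:
    """Fraction of samples in which each a.i. was detected (non-null conc.)."""
    return df.groupby(list(by))["conc_ug_L"].apply(lambda s: s.notna().mean())

print(detection_frequency(records))                       # national scale
print(detection_frequency(records, by=("region", "ai")))  # per region
```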
Regarding the measured environmental concentration (MEC) of the a.i., Figure 2 shows all the field concentrations of the 72 detected a.i. The majority of the pesticides were detected in concentrations <1 µg/L; however, in some cases they reached values higher than 10 µg/L (e.g., diazinon, diuron, ametryn, and flutolanil), and at least 18 pesticides were found at >1 µg/L. Comparison with International Regulations We compared the mean and maximum detected concentrations with the hazardous concentration 5% (HC5) and several international standards (EU-EQS, EPA water quality criteria, and the Australian and New Zealand Guidelines for Water Quality; Table 2). We also checked whether the a.i. are priority substances in the EU or US-EPA and whether they are included in the list of highly hazardous pesticides [43]. Table 2. The detected maximum and mean concentrations of analyzed pesticides, as compared with HC5 and international guidelines. Marked in bold are a.i. with mean or maximum concentrations exceeding HC5 or international regulations. Available HC5 calculations show that the concentrations detected in field samples represent a risk for the biota of the aquatic ecosystems of Costa Rica. Likewise, 50% of the detected pesticides have mean and/or maximum concentrations that do not comply with one or more international standards (Table 2). Among the non-compliant a.i. are the herbicides ametryn, bromacil, butachlor, diuron, hexazinone, oxyfluorfen, pendimethalin, and terbutryn; the fungicides azoxystrobin, chlorothalonil, epoxiconazole, fenpropimorph, imazalil, pencycuron, and spiroxamine; and the insecticides cypermethrin, buprofezin, cadusafos, carbaryl, carbofuran, chlorpyrifos, cyhalothrin, diazinon, dimethoate, ethoprophos, fenamiphos, imidacloprid, lindane, phorate, profenofos, terbufos, and triazophos. Vryzas et al. [28] state that limitations in risk assessment, coupled with the low level of implementation of pesticide regulations, partially explain the presence of pesticides above regulatory limits, which implies that environmental protection goals might not be reached. It is worth mentioning that several of the non-compliant pesticides are also the ones with a higher frequency of detection (Table 1) and higher toxicity for aquatic organisms (e.g., the organophosphate and carbamate insecticides), and this should raise alarm about the conservation of aquatic ecosystems throughout the country. Additionally, we are aware that some highly used pesticides in Costa Rica (e.g., mancozeb, glyphosate, 2,4-D, among others) were not evaluated in this study because of analytical and methodological limitations, but these results must by no means be interpreted as evidence that those a.i.
do not exert effects on the aquatic ecosystems of the country. Ecological Risk Multi-Substance Potentially Affected Fraction (msPAF) Model Among the pesticides detected in this study, 21 MoAs were represented. These MoAs were further subdivided when the species sensitivity distribution slopes (constructed with the toxicity data) of one a.i. differed by more than 10% with respect to other a.i. that shared the same MoA (Table 3). Table 3. MoA assigned to each pesticide for the msPAF calculations. The subdivision of MoAs is depicted with letters (a-d). We found that 5% and 13% of the total water samples from all regions of Costa Rica (except the Central Pacific, which had the least sampling effort) pose a high (msPAF > 5%) or moderate (msPAF > 1%) acute risk, respectively, especially to primary producers (plants, algae) and arthropods (insects, crustaceans). Figure 3 shows the mean and maximum msPAF, grouped by region. In the Caribbean, several samples posed an extremely high risk to arthropods (insects and crustaceans) and aquatic plants, followed by the horticulture region, South Pacific, Northern Caribbean, and North Pacific. The msPAF model illustrates the effect of the mixture of substances with different MoAs in the analyzed water samples, but it is also possible to identify the specific pesticides that contribute to the higher risks in each species group (Figure 4). Top risk contributors might pose a low risk on a frequent basis, or they might pose a high risk occasionally, or both. In our study, the herbicides diuron and oxyfluorfen and the fungicides azoxystrobin, chlorothalonil, difenoconazole, and spiroxamine are the top contributors to the risk posed to primary producers. Furthermore, diuron by itself contributes 99% of the cumulative risk to aquatic plants. The study by Rämö et al. [48] found the same result for diuron, suggesting that aquatic plants are more sensitive to this a.i. than algae, given that they have the same exposure data. It is noteworthy that the fungicides contributing to the risks to algae, fish, and arthropods have multisite action (chlorothalonil) or are inhibitors of ergosterol biosynthesis, a pathway that is vital for all eukaryotic cells and, therefore, general enough to cause effects on non-fungal organisms [49]. All other imidazole or triazole fungicides have the same MoA [50] and could also potentially affect other groups of species. Regarding fish, α-cypermethrin, cyhalothrin, and permethrin (all pyrethroid insecticides) and the fungicide chlorothalonil seem to be the a.i. posing the highest risks. Lastly, cyhalothrin and permethrin, as well as other organophosphate or carbamate insecticides (carbofuran, diazinon, fenamiphos, terbufos, chlorpyrifos) and the fungicide chlorothalonil, are the main contributors to the risk for arthropods (crustaceans, insects). However, all these estimations are based on acute toxicity (EC50, LC50), and we cannot deny that many other a.i. (such as organophosphates and carbamates) might be involved in chronic toxicity in all groups of species, but especially in fish, which require higher concentration exposures to show immobility or mortality endpoints but could be affected by the neurotoxic acetylcholinesterase-inhibiting properties of those insecticides [51,52]. Another relevant aspect is the presence of some high-risk pesticides identified in this study in other Neotropical countries.
For example, ametryn in Ecuador [23]; azoxystrobin in Panama [19]; carbofuran in Brazil [21] and Panama; chlorpyrifos and diazinon in Ecuador, México [24], and Panama; diuron in Brazil, Colombia [22], and Ecuador; epoxiconazole in Colombia; ethoprophos in Panama; and terbutryn in Ecuador. Furthermore, researchers in México and Venezuela [25] have detected very toxic pesticides such as aldrin, dieldrin, endrin, and DDT, which are forbidden in many countries and are most likely posing unacceptable risks to the aquatic ecosystems. We believe that greater efforts must be made by government agencies and farmers in the Neotropical region in order to guarantee that toxic substances applied to crops for pest control do not reach natural superficial waters in concentrations that pose unacceptable risks. The protection of riparian vegetation is key to this purpose, since it helps mitigate the effects of pesticides and excess nutrients on aquatic biota [53] and also provides habitat for refuge and later recolonization of organisms into the streams [54]. This study highlights the need for systematic pesticide residue monitoring of fresh waters in the Neotropical region, to establish whether the exposure of biota to specific pesticides is higher or lower than predicted by the risk analysis (toxicity tests and predictive models of exposure) executed prior to registration [28]. Results from such a monitoring program would serve as a retrospective environmental risk assessment to address unacceptable risks. Area of Study Costa Rica is located between 08°22′26″ and 11°13′12″ North latitude and 82°33′48″ and 85°57′57″ West longitude in the Central American Isthmus. Its climate is tropical, with a mean annual temperature of 26-27.6 °C and mean annual precipitation from 1300 mm in the driest regions up to a maximum of 7467 mm in the Grande de Orosi watershed [55]. Moreover, according to [56], Costa Rica harbors 12 different life zones (dry, moist, wet, and rain forests), distributed through several altitudinal ranges (lowland, premontane, lower montane, and montane), which leads to the high variability of temperature and rainfall throughout the country. In this study, we used superficial water samples retrieved from 160 sites throughout 5 different regions of Costa Rica (Caribbean, Northern Caribbean, North Pacific, Central Pacific, and South Pacific, as well as the mountainous horticultural zones of the Central Volcanic Range). Database We used previously generated information. The data (region, project, date, site, watershed, and pesticide residue analysis of 1036 water samples) were derived from 11 research projects carried out by state universities in the period between 2006 and 2019 (Table S1). All samples were analyzed in the Laboratory of Pesticide Residue Analysis at the National University (LAREP, IRET, UNA) or at the Center of Investigation on Environmental Pollution at the University of Costa Rica (CICA, UCR). This assured uniformity of data quality irrespective of the year or the research project. Pesticide Analysis Surface water samples were collected by inserting pre-washed 2 L brown glass bottles into the water. The collected samples were transported in cooled ice boxes to the LAREP, IRET, UNA, or to the CICA, UCR, and stored at 4 °C for a maximum of 24 h before the analyses. LAREP-UNA. Before 2018, pesticide analysis was performed as specified in Rämö et al.
[40], while after that year, samples were analyzed by gas chromatography (Agilent 7890A with 5975C mass detector; GC-MS) and liquid chromatography (Waters Acquity UPLC H-Class with Waters XEVO TQ-S Micro mass detector; UPLC-MS/MS). In both cases, a solid-phase extraction (SPE) was performed prior to the analysis. For GC, the sample was agitated and passed through a previously conditioned Isolute ENV+ (200 mg/6 mL) cartridge, which was later dried and eluted with ethyl acetate. The extract was concentrated under nitrogen and solvent-exchanged into isooctane. The final volume of the extract was 0.25 mL. For UPLC, the same extraction procedure was followed, except that the elution was made with methanol, and the extract was concentrated into methanol/water (10:90 v/v or 40:60 v/v). The final volume of the extract was ~0.5 mL. CICA. The method consists of a solid-phase extraction (SPE) and a liquid-liquid extraction (LLE) with dichloromethane, followed by a solvent change to acetone (for GC analysis) or to 0.1% formic acid in deionized water (for HPLC analysis). Afterward, a high-resolution multiresidue analysis of the water samples by gas chromatography and liquid chromatography was used, as detailed in [13,18]. Comparison with International Regulations We compared the mean and maximum detected concentrations of this study with environmental quality standards (EQS) from the European Union [44,47], the United States Environmental Protection Agency water quality criteria [45], the Dutch Institute for Health and Environment maximum tolerable risk level [44], and the Australian and New Zealand Guidelines for Water Quality [46]. Ecological Risk Multi-Substance Potentially Affected Fraction (msPAF) Model To complement the assessments derived from single-substance ecological risk, the msPAF model calculates the toxicity risk of mixtures of pesticides with known toxic modes of action (MoA). This model uses concentration addition (CA) to calculate a unique risk value for all the substances that have the same MoA and then applies response addition (RA) to summarize the toxicity risks of all the different MoAs. The outcome is a msPAF value that defines the potentially affected fraction (as a percentage) of a species group resulting from the exposure to a complex mixture of pesticides [57,58]. For this study, to calculate the msPAF we followed the methods described in detail by Rämö et al. [48]. However, we updated the information regarding the acute toxicity of each pesticide to aquatic biota, using new studies registered in the US Environmental Protection Agency (EPA) ECOTOX database [59]. Additionally, to assign a MoA to each pesticide, we only used the classifications of the insecticide, fungicide, and herbicide resistance action committees [50,60,61]. We used the same 6 groups of organisms (algae, aquatic plants, arthropods, aquatic insects, crustaceans, and fish), we followed the same hazard unit calculation approach (geometric mean of toxicity data for each "species group-pesticide" combination), and we also set a minimum of 4 species' toxicity data (in each species group-pesticide combination) to be included within the msPAF assessment. To interpret the results, a PAF < 1% is considered low risk, 1% < PAF < 5% is considered moderate, and PAF > 5% is interpreted as a high risk. Additionally, to address the specific pesticides that contribute to the higher risks in each species group, we followed the methods described by [48].
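The CA-within-MoA, RA-across-MoA scheme described above can be summarized in a short numerical sketch. This is a minimal illustration under assumed inputs: the log-normal SSD form, the per-MoA slopes, and the median-toxicity values below are placeholders, not parameters from this study or from Rämö et al. [48].

```python
# Minimal msPAF sketch following the CA-within-MoA / RA-across-MoA logic
# described above. SSDs are taken as log-normal; median toxicities (mu) and
# SSD slopes (beta, in log10 units) are illustrative placeholders.
from math import log10
from statistics import NormalDist

PESTICIDES = {  # hypothetical MoA labels and median toxicities for one group
    "diuron":   {"moa": "PSII inhibition", "mu_ug_L": 10.0},
    "ametryn":  {"moa": "PSII inhibition", "mu_ug_L": 15.0},
    "diazinon": {"moa": "AChE inhibition", "mu_ug_L": 0.5},
}
SSD_SLOPE = {"PSII inhibition": 0.7, "AChE inhibition": 0.9}  # assumed betas

def ms_paf(sample: dict[str, float]) -> float:
    """sample maps pesticide -> measured concentration (ug/L)."""
    # 1) Concentration addition: sum hazard units within each MoA.
    hu_by_moa: dict[str, float] = {}
    for ai, conc in sample.items():
        p = PESTICIDES[ai]
        hu_by_moa[p["moa"]] = hu_by_moa.get(p["moa"], 0.0) + conc / p["mu_ug_L"]
    # 2) PAF per MoA from the log-normal SSD (HU = 1 corresponds to PAF = 50%).
    pafs = [NormalDist().cdf(log10(hu) / SSD_SLOPE[moa])
            for moa, hu in hu_by_moa.items() if hu > 0]
    # 3) Response addition across MoAs.
    survival = 1.0
    for paf in pafs:
        survival *= (1.0 - paf)
    return 1.0 - survival

paf = ms_paf({"diuron": 2.0, "ametryn": 1.0, "diazinon": 0.05})
risk = "high" if paf > 0.05 else "moderate" if paf > 0.01 else "low"
print(f"msPAF = {paf:.3f} ({risk} acute risk)")
```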
- Pesticides are ubiquitous contaminants of fresh waters in Costa Rica and other Neotropical countries; -Several of the highly toxic active ingredients are detected in high frequencies (>20%) throughout Costa Rica, increasing the risks for aquatic biota; -Concentrations at which individual analyzed pesticides are found in the country exceed criteria for biodiversity protection (HC5) and international standards, therefore representing a risk for the integrity and ecological functioning of aquatic ecosystems; -msPAF reveals moderate and high risk derived from pesticide mixtures in water samples across Costa Rica; -Pesticides consistently representing risk in Costa Rica (high frequency of detection, exceeding environmental standards, and identified as risk contributors within the msPAF model and literature) are α-cypermethrin, ametryn, azoxystrobin, bromacil, carbofuran, chlorothalonil, chlorpyrifos, diazinon, diuron, epoxiconazole, ethoprophos, fenamiphos, hexazinone, terbufos, and terbutryn; -We believe these pesticides (except bromacil, which has already been forbidden) should be re-evaluated if their registration did not take into account current risk assessment tools; -Several high-risk pesticides in Costa Rica are also detected in other Neotropical countries; -Deeper analysis of the responses of biota to the detected pesticides might be used to complement the development of numerical water-quality criteria and also for retrospective environmental risk evaluations for Neotropical countries; -There is an urgent need for systematic pesticide residue monitoring of fresh waters in the Neotropical region. Supplementary Materials: Table S1: Data and information sources for the analysis; Table S2: Characteristics (CAS identification number, biocide action, chemical group, and mode of action) of the detected pesticides, as well as references to studies in which they have been identified as high-risk pesticides for the aquatic environment in Costa Rica. References [62,63] are cited in the Supplementary Materials. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available yet, due to pending publication in a public repository.
2021-12-03T16:41:53.269Z
2021-11-29T00:00:00.000
{ "year": 2021, "sha1": "73d44c80a5bf05e2f1be8f573f96d8e7912b6b67", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/26/23/7235/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1699cbb95c95f6f24f0d0b23720205676604e703", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
16197805
pes2o/s2orc
v3-fos-license
Empirical transverse charge densities in the nucleon and the nucleon-to-$\Delta$ transition Using only the current empirical information on the nucleon electromagnetic form factors, we map out the transverse charge density in the proton and neutron as viewed from a light front moving towards a transversely polarized nucleon. These charge densities are characterized by a dipole pattern, in addition to the monopole field corresponding with the unpolarized density. Furthermore, we use the latest empirical information on the $N \to \Delta$ transition form factors to map out the transition charge density which induces the $N \to \Delta$ excitation. This transition charge density in a transversely polarized $N$ and $\Delta$ contains monopole, dipole, and quadrupole patterns, the latter corresponding with a deformation of the hadron's charge distribution. Electromagnetic form factors (FFs) of the nucleon are the standard source of information on the nucleon structure and as such have been studied extensively; for recent reviews see e.g. Refs. [1,2,3]. The FFs describing the transition of the nucleon to its first excited state, the $\Delta(1232)$, contain complementary information, such as the sensitivity to the nucleon shape; see Ref. [4] for a recent review. In more recent years, generalized parton distributions (GPDs) have been discussed (see e.g. Refs. [5,6,7,8] for some reviews) as a tool to access the distribution of partons in the transverse plane [9], and first calculations of these spatial distributions have been performed within lattice QCD [10] and hadronic models (see e.g. [11] for a recent evaluation). By integrating the GPDs over all parton momentum fractions, they reduce to FFs. Given the large amount of precise data on FFs, it is of interest to exhibit directly the spatial information which results from these data. This has been done recently in Ref. [12] for an unpolarized nucleon. In this Letter we extend that work to the case of a transversely polarized nucleon, as well as map out the transition charge density which induces the $N \to \Delta$ excitation. In the following we consider the electromagnetic (e.m.) $N \to N$ and $N \to \Delta$ transitions when viewed from a light front moving towards the baryon. Equivalently, this corresponds with a frame where the baryons have a large momentum component along the $z$-axis, chosen along the direction of $\vec P = (\vec p + \vec p\,')/2$, where $p$ ($p'$) are the initial (final) baryon four-momenta. We indicate the baryon light-front $+$ component by $P^+$ (defining $a^\pm \equiv a^0 \pm a^3$). We can furthermore choose a symmetric frame where the virtual photon four-momentum $q$ has $q^+ = 0$ and a transverse component (lying in the $xy$-plane) indicated by the transverse vector $\vec q_\perp$, satisfying $q^2 = -\vec q_\perp^{\,2} \equiv -Q^2$.
In such a symmetric frame, the virtual photon only couples to forward-moving partons, and the $+$ component of the electromagnetic current, $J^+$, has the interpretation of the quark charge density operator. Considering only $u$ and $d$ quarks, it is given by $J^+(0) = \frac{2}{3}\,\bar u(0)\gamma^+ u(0) - \frac{1}{3}\,\bar d(0)\gamma^+ d(0)$. Each term in the expression is a positive operator, since $\bar q \gamma^+ q \propto |\gamma^+ q|^2$. Following [9,12], one can then define quark transverse charge densities in a nucleon as $$\rho_0^N(\vec b) \equiv \int \frac{d^2 \vec q_\perp}{(2\pi)^2}\, e^{-i\,\vec q_\perp \cdot \vec b}\, \frac{1}{2P^+}\, \Big\langle P^+, \tfrac{\vec q_\perp}{2}, \lambda \,\Big|\, J^+(0) \,\Big|\, P^+, -\tfrac{\vec q_\perp}{2}, \lambda \Big\rangle, \qquad (1)$$ where the two-dimensional vector $\vec b$ denotes the position (in the $xy$-plane) from the transverse c.m. of the nucleon, and $\lambda = \pm 1/2$ denotes the nucleon (light-front) helicity. Using the Dirac nucleon e.m. FF $F_1$, Eq. (1) can be expressed as [12] $$\rho_0^N(b) = \int_0^\infty \frac{dQ}{2\pi}\, Q\, J_0(b\,Q)\, F_1(Q^2), \qquad (2)$$ where $J_n$ denotes the cylindrical Bessel function of order $n$. Note that $\rho_0^N$ only depends on $b = |\vec b\,|$. It has the interpretation of a quark charge density in the transverse plane for an unpolarized nucleon, and is well defined for all values of $b$, even when $b$ is smaller than the nucleon Compton wavelength. In contrast, the usual three-dimensional Fourier transform of the matrix elements of $J^\mu$ in the Breit frame (parameterized in terms of the Sachs FFs) becomes intrinsically ambiguous [13], due to the Lorentz contraction of the nucleon along its direction of motion. Although this does not affect the densities at larger distances (typically larger than about 0.5 fm), the value of the densities at smaller distances is merely a reflection of the prescription for relating the experimentally measured Sachs FFs at large $Q^2$ to the intrinsic charge and magnetization FFs. A feature of viewing the nucleon when "riding a photon" is that one gets rid of the longitudinal direction. This allows one to project the charge density (in the case of the $J^+$ operator) onto the transverse plane, which does not get Lorentz contracted. In this way it was found, e.g. in Ref. [12], that the neutron charge density reveals the well-known negative contribution at large distances, around 1.5 fm, due to the pion cloud, a positive contribution at intermediate $b$ values, and a negative core at $b$ values smaller than about 0.3 fm, reflecting the large-$Q^2$ behavior of the neutron Dirac FF. It was shown in Ref. [9] that one can also define a probability distribution to find a quark with a given momentum fraction $x$ of $P^+$ and at a given transverse position $\vec b$ in the nucleon, when considering a nucleon polarized in the $xy$-direction. In the following, we denote this transverse polarization direction by $\vec S_\perp = \cos\phi_S\,\hat e_x + \sin\phi_S\,\hat e_y$. When integrating the resulting GPD, depending on $x$ and $\vec b$, over all values of $x$, one can define a quark charge density in the transverse plane for a transversely polarized nucleon as $$\rho_T^N(\vec b) \equiv \int \frac{d^2 \vec q_\perp}{(2\pi)^2}\, e^{-i\,\vec q_\perp \cdot \vec b}\, \frac{1}{2P^+}\, \Big\langle P^+, \tfrac{\vec q_\perp}{2}, s_\perp \,\Big|\, J^+(0) \,\Big|\, P^+, -\tfrac{\vec q_\perp}{2}, s_\perp \Big\rangle, \qquad (3)$$ where $s_\perp = +1/2$ is the nucleon spin projection along the direction of $\vec S_\perp$. By working out the Fourier transform in Eq. (3), one obtains $$\rho_T^N(\vec b) = \rho_0^N(b) + \sin(\phi_S - \phi_b) \int_0^\infty \frac{dQ}{2\pi}\, \frac{Q^2}{2 M_N}\, J_1(b\,Q)\, F_2(Q^2), \qquad (4)$$ where the second term, which describes the deviation from the circularly symmetric unpolarized charge density, depends on the orientation of $\vec b = b\,(\cos\phi_b\,\hat e_x + \sin\phi_b\,\hat e_y)$. Furthermore, this term depends on the Pauli FF $F_2$ and the nucleon mass $M_N$. In the following we use the current empirical information on the nucleon e.m. FFs to extract the transverse charge density in a transversely polarized nucleon, complementing the pictures given in Ref. [12] for the transverse charge densities in an unpolarized nucleon. For the proton e.m. FFs, we use the recent empirical parameterization of Ref. [14] and show the resulting transverse charge density for a proton polarized along the $x$-axis (i.e. for $\phi_S = 0$) in Fig. 1.
One notices from Fig. 1 that polarizing the proton along the $x$-axis leads to an induced electric dipole moment along the negative $y$-axis, which is equal to the value of the anomalous magnetic moment, i.e. $F_2(0)$ (in units of $e/2M_N$), as first noticed in Ref. [9]. One can understand this induced electric dipole field pattern based on the classic work of Ref. [16] (see also the pedagogical explanation in Ref. [17]). The nucleon spin along the $x$-axis is the source of a magnetic dipole field, which we denote by $\vec B$. An observer moving towards the nucleon with velocity $\vec v$ will see an electric dipole field pattern with $\vec E\,' = -\gamma\,(\vec v \times \vec B)$, giving rise to the observed effect. For the neutron e.m. FFs, we use the recent empirical parameterization of Ref. [15]. The corresponding transverse charge density for a neutron polarized along the $x$-axis is shown in Fig. 2. One notices that the neutron's unpolarized charge density gets displaced significantly due to the large (negative) value of the neutron anomalous magnetic moment, $F_{2n}(0) = -1.91$, which yields an induced electric dipole moment along the positive $y$-axis. We next generalize the above considerations to the $N \to \Delta$ e.m. transition, as it allows access to $l = 2$ angular momentum components in the nucleon and/or $\Delta$ wave functions. We will use the empirical information on the $N \to \Delta$ transition FFs to study the quark transition charge densities in the transverse plane which induce the e.m. $N \to \Delta$ excitation. It is customary to characterize the three different types of $\gamma N \Delta$ transitions in terms of the Jones-Scadron FFs $G^*_M$, $G^*_E$, $G^*_C$ [18], corresponding with the magnetic dipole (M1), electric quadrupole (E2), and Coulomb quadrupole (C2) transitions, respectively; see Ref. [4] for details and definitions. We start by expressing the matrix elements of the $J^+(0)$ operator between $N$ and $\Delta$ states in terms of helicity form factors $G^+_{\lambda_\Delta \lambda_N}$ (Eq. (5)), where $\lambda_N$ ($\lambda_\Delta$) denotes the nucleon ($\Delta$) light-front helicity, and where $\vec q_\perp = Q\,(\cos\phi_q\,\hat e_x + \sin\phi_q\,\hat e_y)$. The helicity form factors $G^+_{\lambda_\Delta \lambda_N}$ in Eq. (5) depend on $Q^2$ only and can equivalently be expressed in terms of $G^*_M$, $G^*_E$, and $G^*_C$. We can then define a transition charge density for the unpolarized $N \to \Delta$ transition, given by the Fourier transform $$\rho_0^{N\Delta}(b) = \int_0^\infty \frac{dQ}{2\pi}\, Q\, J_0(b\,Q)\, G^+(Q^2), \qquad (6)$$ where the helicity-conserving $N \to \Delta$ FF $G^+$ can be expressed in terms of $G^*_M$, $G^*_E$, and $G^*_C$ as in Eq. (7), with $M_\Delta = 1.232$ GeV the $\Delta$ mass and the isospin factor $I = \sqrt{2/3}$ for the $p \to \Delta^+$ transition, which we consider in all of the following. We also introduce the shorthand notation $Q_\pm \equiv \sqrt{(M_\Delta \pm M_N)^2 + Q^2}$. The above unpolarized transition charge density gives us one combination of the three independent $N \to \Delta$ FFs. To get information on the other combinations, we consider the transition charge densities for a transversely polarized $N$ and $\Delta$, both polarized along the direction of $\vec S_\perp$, as in Eq. (8), where $s_\perp^N = +1/2$ ($s_\perp^\Delta = +1/2$) are the nucleon ($\Delta$) spin projections along the direction of $\vec S_\perp$, respectively. By working out the Fourier transform in Eq. (8), one obtains Eq. (9). One notices from Eq. (9) that, besides half the unpolarized transition density, one obtains two more linearly independent structures.
The $N \to \Delta$ FF combination with one unit of (light-front) helicity flip, which corresponds with a dipole field pattern in the charge density, can be expressed in terms of $G^*_M$, $G^*_E$, and $G^*_C$ as in Eq. (10), whereas the $N \to \Delta$ form factor with two units of (light-front) helicity flip, corresponding with a quadrupole field pattern in the charge density, can be expressed as in Eq. (11). We show the results for the $N \to \Delta$ transition densities, both for the unpolarized case and for the case of transverse polarization, in Fig. 3. In this evaluation we use the empirical information on the $N \to \Delta$ transition FFs from Ref. [19]. One notices that the unpolarized $N \to \Delta$ transition density displays a behavior very similar to the neutron charge density (dashed curve in Fig. 2), having a negative interior core and becoming positive for $b \geq 0.5$ fm. The density in a transversely polarized $N$ and $\Delta$ shows both a dipole and a quadrupole field pattern. The latter, shown separately in Fig. 3, allows one to cleanly quantify the deformation in this transition charge distribution. In summary, we used the recent empirical information on the nucleon and $N \to \Delta$ e.m. FFs to map out the transverse charge densities in unpolarized and transversely polarized nucleons and for the $N \to \Delta$ transition. The nucleon charge densities are characterized by a dipole pattern, in addition to the monopole field corresponding with the unpolarized density. The $N \to \Delta$ transition charge density in a transversely polarized $N$ and $\Delta$ contains monopole, dipole, and quadrupole patterns, the latter corresponding with a deformation of the hadron's charge distribution.
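As a numerical illustration of the Fourier-Bessel relation in Eq. (2), the sketch below evaluates $\rho_0(b)$ by direct quadrature. The dipole form for $F_1$ is a toy placeholder, not the empirical parameterizations of Refs. [14, 15], and the integration cutoff is likewise an assumption.

```python
# Numerical sketch of the transverse charge density as the Fourier-Bessel
# transform of F1: rho_0(b) = int_0^inf dQ/(2*pi) * Q * J0(b*Q) * F1(Q^2).
# A simple dipole F1 stands in for the empirical parameterizations [14, 15].
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

LAMBDA2 = 0.71   # GeV^2, conventional dipole mass squared (illustrative)
HBARC = 0.1973   # GeV*fm, converts between Q in GeV and b in fm

def f1_dipole(q2: float) -> float:
    """Toy Dirac form factor, normalized to F1(0) = 1 (proton charge)."""
    return 1.0 / (1.0 + q2 / LAMBDA2) ** 2

def rho0(b_fm: float) -> float:
    """Unpolarized transverse density (e/fm^2) at impact parameter b."""
    integrand = lambda q: q * j0(b_fm * q / HBARC) * f1_dipole(q * q) / (2 * np.pi)
    val, _ = quad(integrand, 0.0, 20.0, limit=200)  # Q in GeV, cut at 20 GeV
    return val / HBARC**2  # convert GeV^2 -> 1/fm^2

for b in (0.1, 0.5, 1.0, 1.5):
    print(f"b = {b:.1f} fm : rho0 = {rho0(b):.4f} e/fm^2")
```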
2007-10-03T17:34:37.000Z
2007-10-03T00:00:00.000
{ "year": 2008, "sha1": "9a010a49b3ad911d6e4c49f65698f9d45a3d01b8", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0710.0835", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "77843d9b713c36520c60125661259fa9b0d37c2b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
125503507
pes2o/s2orc
v3-fos-license
MICADO: the E-ELT Adaptive Optics Imaging Camera MICADO is the adaptive optics imaging camera for the E-ELT. It has been designed and optimised to be mounted to the LGS-MCAO system MAORY, and will provide diffraction limited imaging over a wide (about 1 arcmin) field of view. For initial operations, it can also be used with its own simpler AO module that provides on-axis diffraction limited performance using natural guide stars. We discuss the instrument's key capabilities and expected performance, and show how the science drivers have shaped its design. We outline the technical concept, from the opto-mechanical design to operations and data processing. We describe the AO module, summarise the instrument performance, and indicate some possible future developments. MICADO OVERVIEW MICADO is the Multi-AO Imaging Camera for Deep Observations, designed to work with adaptive optics (AO) on the E-ELT. It has been optimised for the multi-conjugate adaptive optics (MCAO) module MAORY [1, 2], but it is also able to work with other adaptive optics systems, and includes a separate module to provide a single-conjugate adaptive optics (SCAO) capability [3] using natural guide stars during early operations (see Section 4). As this simple AO mode sets low requirements on the telescope and facilities (e.g. no lasers are required), it is an optimum choice for demonstrating the scientific capabilities of the E-ELT at the earliest opportunity. The optical relay and support structure for SCAO provide the same opto-mechanical interface as MAORY, and in principle enable MICADO to be used with other AO systems such as ATLAS [4]. This phased approach means that MICADO will be able to make use of increasingly sophisticated AO systems as they become available. MICADO is compact and is supported underneath the AO systems so that it rotates in a gravity invariant orientation. It is able to image, through a large number of selected wide- and narrow-band near-infrared filters, a large 53″ field of view at the diffraction limit of the E-ELT. MICADO has two arms. The primary arm is a high-throughput imaging camera with a single 3 mas pixel scale. This arm is designed with fixed mirrors for superior stability, thus optimizing astrometric precision. In addition, MICADO will have an auxiliary arm to provide an increased degree of flexibility. In the current design, this arm provides (i) a finer 1.5 mas pixel scale over a smaller field, and (ii) a 4 mas pixel scale for a simple, medium-resolution, long-slit spectroscopic capability. In principle the auxiliary arm also opens the door to other options, including a 'dual imager' based on a Fabry-Perot etalon to image emission line and continuum wavelengths simultaneously, coronagraphy (perhaps implemented in a comparable way to that in NACO [5]), or a high time resolution detector. KEY CAPABILITIES AND SCIENCE DRIVERS MICADO will excel at several key capabilities that exemplify the unique features of the E-ELT. These are at the root of the science cases, which span key elements of modern astrophysics, and have driven the design of the camera. The science cases are developed in detail elsewhere [6], and here we focus on how MICADO's characteristics enable it to address them.
Sensitivity and Resolution MICADO is optimised for imaging at the diffraction limit, and will fully sample the 6-10 mas FWHM in the J-K bands. With a throughput exceeding 60%, its sensitivity at 1-2 µm will be comparable to, or surpass, that of JWST for isolated point sources. MICADO's resolution means that it will be clearly superior to JWST in crowded regions. In addition, its field of view of nearly 1 arcmin yields a significant multiplex advantage compared to other ground-based cameras on ELTs. Together, these characteristics make MICADO a powerful tool for many science cases. Continuum and emission line mapping of high redshift galaxies will enable it to address questions concerning their assembly, and subsequent evolution in terms of mergers, internal secular instabilities, and bulge growth. The resolution of better than 100 pc at z ~ 2, equivalent to 1″ imaging of Virgo Cluster galaxies, will resolve the individual star-forming complexes and clusters, which is the key to understanding the processes that drive their evolution. Alternatively, one can probe a galaxy's evolution through colour-magnitude diagrams that trace the fossil record of its star formation. Spatially resolving the stellar populations in this way is a crucial ability, since integrated luminosities are dominated by only the youngest and brightest population. MICADO will extend the sample volume from the Local Group out to the Virgo Cluster and push the analysis of the stellar populations deeper into the centres of these galaxies. Precision Astrometry With only fixed mirrors in its primary imaging field, gravity invariant rotation, and HAWAII-4RG detectors (developed to meet the stringent requirements of space astrometry missions), MICADO is an ideal instrument for astrometry. A robust pipeline will bring precision astrometry into the mainstream. An analysis of the statistical and systematic effects [7, 8] shows that an accuracy of 40 µas in a single epoch of observations is achievable; and after only 3-4 years it will be possible to measure proper motions of 10 µas/yr, equivalent to 5 km/s at 100 kpc. At this level, many astronomical objects are no longer static but become dynamic, leading to dramatic new insights into the three-dimensional structure and evolution of many phenomena. Proper motions of faint stars within light-hours of the Galactic Center will measure the gravitational potential in the relativistic regime very close to the central black hole, and may also reveal the theoretically predicted extended mass distribution from stellar black holes that should dominate the inner region. The internal kinematics and proper motions of Globular Clusters will yield insights on intermediate mass black holes as well as the formation and evolution of the Galaxy. Similar analyses of Dwarf Spheroidals will reveal the amount and distribution of dark matter in these objects, and hence test models of hierarchical structure formation.
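The quoted equivalence between 10 µas/yr and 5 km/s at 100 kpc follows directly from the standard conversion v [km/s] = 4.74 µ [arcsec/yr] d [pc]; a quick check:

```python
# Quick check of the quoted astrometric equivalence: a proper motion mu
# converts to transverse velocity via v [km/s] = 4.74 * mu [arcsec/yr] * d [pc].
MU_ARCSEC_YR = 10e-6   # 10 microarcsec/yr
D_PC = 100e3           # 100 kpc
v_kms = 4.74 * MU_ARCSEC_YR * D_PC
print(f"{v_kms:.2f} km/s")  # ~4.74 km/s, i.e. the ~5 km/s quoted above
```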
High Throughput Spectroscopy Spectroscopy is an obvious and powerful complement to pure imaging, and is implemented as a simple slit spectrometer with a high throughput that is ideal for obtaining spectra of compact objects. The resolution of R ~ 3000 is sufficient to probe between the near-infrared OH lines. This simple addition will enhance many science cases, for example: deriving stellar types and 3D orbits in the Galactic Center; using velocities of stars in nearby galaxies to probe central black hole masses and extended mass distributions; measuring absorption lines in galaxies at z = 2-3 and emission lines in galaxies at z = 4-6 to derive their ages, metallicities, and star forming histories; and obtaining spectra of the first supernovae at z = 1-6. TECHNICAL DESIGN CONCEPT The MICADO design to achieve the capabilities outlined in Section 2 is simple, compact, and robust. As such, it minimizes risks on cost and schedule. In the following, we outline the key features of its design. Optics The optics are discussed in more detail elsewhere [9], and so only a brief outline of the main characteristics is given here. The design was reached after a scientific and technical trade-off. The main requirements from this are that: the pixel scale in the primary arm should be fixed to maximise stability; the scale should be 3 mas to cover a large field of view while being Nyquist sampled in J-band; there should be space for a large number of filters; the degree of distortion is less important than its stability (since it must be corrected anyway); and the instrument should cope with a strongly curved input wavefront, as well as a flat input wavefront (for a reduced field of view, limited by anisoplanatism, during SCAO operations). As shown in Fig. 2, the MICADO optics comprise 3 sub-systems: the common path, the primary arm, and the auxiliary arm. The first component in the common path is a tunable atmospheric dispersion corrector (ADC). The tunable design minimizes the ADC's impact on the optical quality by enabling thinner prisms with smaller wedge angles to be used. In its current location (warm and right against the interface to MAORY), the performance is just acceptable. However, this location is not optimal, and during Phase B other options, such as locating it at an appropriate pupil plane within the AO system, will be addressed. Both arms use an off-axis parabola for collimation. However, the sizes and locations of the parabolae are different, and so the primary arm has a fixed mirror, while the auxiliary arm requires an alternative mirror to be rotated into position. In both cases, to keep the optical system compact, the light is reflected in both directions from a large fold mirror. Separate fixed fold mirrors then direct the light out of the common path to opposite sides of the instrument. The collimator creates a pupil image just after this fold mirror, where a large filter wheel is located. The maximum circular diameter of the pupil is 100.5 mm; however, in order to block unwanted thermal background (the pupil has a shape that depends slightly on field position), the cold stop is undersized at 99 mm diameter. The primary arm has only 3 additional working mirrors (based on a three-mirror anastigmat, TMA), although extra fold mirrors are required to keep the occupied volume small. The auxiliary arm is similar, but includes mechanisms for changing the pixel scale. Table 1 summarizes the key characteristics of the two arms.
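The 3 mas primary-arm scale can be checked against the Nyquist-sampling requirement quoted above. The telescope diameter used below (42 m, the E-ELT design value of that era) is an assumption, not a number stated in this text:

```python
# Sanity check of the 3 mas pixel scale against Nyquist sampling in J-band.
# The 42 m aperture is an assumption (E-ELT design value of the era).
import math

D_M = 42.0       # telescope diameter in metres (assumed)
LAM_J = 1.25e-6  # J-band wavelength in metres
RAD_TO_MAS = 180 / math.pi * 3600 * 1000

fwhm_mas = LAM_J / D_M * RAD_TO_MAS  # diffraction limit lambda/D
print(f"lambda/D in J: {fwhm_mas:.1f} mas -> Nyquist pixel ~ {fwhm_mas/2:.1f} mas")
# ~6.1 mas FWHM -> ~3 mas pixels, matching the primary-arm scale in Table 1.
```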
The baseline detector for MICADO is the HAWAII-4RG. These have a number of important advantages: (i) they are large format, so that relatively few detectors need to be characterised and mosaiced; (ii) they have been designed for the stringent requirements of space astrometry missions and so are ideal for MICADO's astrometry applications; (iii) the readout speed can be adjusted (even on multiple sub-regions), greatly reducing the impact of saturation due to bright targets. The cross-talk between pixels is relatively low, and electronic ghosts can largely be suppressed in the same way as is done for XShooter. There will be relatively small gaps (each a few mm, compared to the 25 cm width of the focal plane) between individual detectors, since they are not directly buttable. This can be considered advantageous, since it provides a quasi-coronagraph, allowing one to position a bright star out of the field of view even in dithered exposures. To optimise the stability of the focal plane array, all detectors will be mounted on a single baseplate. A direct result of the strongly curved input focal plane from the MCAO system (which cannot be corrected in a satisfactory way) is that the MICADO focal plane is tilted by 4.1° and curved, with a 1500 mm radius of curvature. Because the 60 mm wide detectors are flat, there will be a small defocus across each detector due to the ±0.32 mm focal plane mismatch (a quick check of this figure is sketched after the mechanics description below). This has only a minor impact on the spot diagrams, and the Strehl ratio at 0.8 µm remains above 88% across the whole field (i.e. significantly larger than that used for tolerancing). Mechanics The mechanical design, and folding of the optical path, has largely been driven by the limited space under MAORY. To keep torques small and to maintain optical alignment during cool-down, the centre of gravity is close to the optical axis, which itself is close to the centre of shrinkage. In order to minimize cable lengths and to limit the mass mounted on the derotator, the electronics racks stand on a co-rotating platform supported on the Nasmyth floor. This platform also houses the cable-wrap for external supplies. Service and maintenance are also key aspects, leading to a design in which the core instrument and optics structure are rotated by 25° with respect to the cryostat. This provides better access through the cryostat doors to the detector arrays, the arm selection and focal plane mechanisms, the filter wheels, and the core optics. MICADO is housed in a stainless steel cryostat (Fig.
3, left), which has a fixed tapered part with sufficient space for all the through-ports and pumps. On either side are 2 large doors which provide access to all key components while MICADO is mounted to the AO system. The 3 electronics cabinets (2 of which are back-to-back) are positioned on the co-rotating platform in such a way that they do not interfere with the doors. The entrance window of the cryostat is located 300 mm above the focal plane and 200 mm below the mounting interface. The warm ADC is currently located in this volume for the reasons outlined in Section 3.1. To minimise flexure (which is critical for astrometry), the instrument has been designed for gravity invariant rotation. Because of this, and since the cool-down times are limited by the thermal contact of the filters, the cryostat has not been lightweighted. The total mass of the cryostat and instrument supported by the derotator is 3000 kg. An additional 2800 kg are supported on the Nasmyth floor, and 500 kg in a calibration unit located in the AO system. Supported inside the cryostat behind the radiation shield is the cold optics instrument (Fig. 3, right). It comprises 3 main structures: the primary arm, the auxiliary arm, and the core sub-assembly. The general design approach for each of these housings is to assemble them from plate material to keep part complexity and accuracy requirements low, and to ensure a rigid boxed structure. Stray light can be reduced by proper shielding and baffling, and also by using a wave-like finish on the surface of walls and applying a low-reflectivity black coating. The filter wheels for each of the arms are mounted on the side panels of the core structure, and supported at their perimeter. The primary and auxiliary arm housings are mounted to the core with a three-point interface in such a way that they connect to the round side panels with direct support underneath from the core wall panels. The instrument core is supported by the cryostat via 3 V-rods and a transfer structure. The rods are co-axially aligned to maintain alignment during cool-down, and are attached to the transfer structure. This has been designed to accommodate the rotating focal plane mechanism, and acts as a bridge to the stiff support structure of the core sub-assembly. MICADO has relatively few cold mechanisms. In order to achieve high repeatability, these are all rotational and use spring-loaded bearings and V-grooves to locate and lock each position. Focal plane selection: the input focal plane is large, and 6 positions are required (a field stop for each arm, 2 long slits, a closed position, and a point source mask for an initial check of internal focus). It is a large structure that is driven around the outside of the core sub-assembly. Primary/Auxiliary Arm selection: the core contains only one mechanism, which rotates in an alternative parabolic collimator mirror if the auxiliary arm is selected. Because this is recognised as being delicate, accessibility is an important design driver, and is facilitated by a large opening in the focal plane wheel. Filter wheels: the pupil sizes are 100 mm and 86 mm diameter for the primary and auxiliary arms. In order to provide space for 20 filter slots, these wheels are large, and hence supported and driven at their perimeter. Since the filters will dictate the cool-down time, care has been given to maximising the thermal contact. Scale changing mechanism: the auxiliary arm has two mechanisms to move mirrors and enable a pixel scale change. The design is similar to the primary/auxiliary selection mechanism.
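Returning to the ±0.32 mm focal-plane mismatch quoted in the detector section, the number can be checked from the 1500 mm radius of curvature and the 60 mm detector width alone; attributing the small remainder to the 4.1° tilt is our assumption:

```python
# Quick check of the +/-0.32 mm focal-plane mismatch quoted in the detector
# section: the sagitta of a 1500 mm radius surface across one 60 mm detector.
import math

R = 1500.0  # focal-plane radius of curvature, mm
W = 60.0    # detector width, mm

sag = R - math.sqrt(R**2 - (W / 2) ** 2)  # exact sagitta over the half-width
print(f"sagitta ~ {sag:.2f} mm")          # ~0.30 mm; the 4.1 deg focal-plane
# tilt plausibly accounts for the remainder of the quoted +/-0.32 mm mismatch.
```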
Cryogenics In order to avoid the use of cryo-coolers, the vibrations from which would have a strong adverse effect on the AO performance, MICADO will be cooled by a continuous flow of liquid nitrogen (LN2) during cool-down/warm-up cycles as well as steady-state phases. For cool-down, an estimated 1000 L will be needed; and to maintain steady state the required flow rate, including some contingency, is expected to be 72 L/day. Continuous flow is preferred over an LN2 bath since it gives more freedom in the location of the detectors, it keeps the cryostat smaller and its mass lower, and it is possible to combine the precooling and steady-state systems. Cooling pads are located at strategic points in the cryostat, and connected so that during a cool-down cycle the heat shield is cooled first, followed by the optical bench and finally the detectors. During steady state the sequence is reversed, so that the detectors have the lowest possible temperature. This series concept is shown in Fig. 4, which demonstrates how the same circuit can be used during both phases. Electronics The electronics cabinets are physically located very close to the MICADO cryostat, on a platform that co-rotates with it. This means that the cables can be kept short. At the current time, no E-ELT electronics standards have been defined. The preferences for MICADO include an architecture based on SIMATIC Programmable Logic Controllers (PLCs) and Realtime Ethernet or other Realtime architectures. A fail-safe version of the SIMATIC S7 PLC is also an option for cryogenic housekeeping (cryogenics control). Realtime LabVIEW and PXI controllers from National Instruments could also be used in the implementation of the control electronics. Instrument Control The user requirements for the instrument software have been developed for observation preparation, science operations (including on-sky calibration), and maintenance operations. The main functionality has been analysed via specific use cases related to the various observing scenarios and modes. An overview of the complete scheme is given in Fig. 5. Specific use cases include: Science Observations: imaging, spectroscopy, on-sky calibrations (e.g. standard stars, twilight flats); Calibrations: dark frames, internal flatfield, linearity, wavelength and distortion calibrations, ghost assessment; Maintenance Operations: telescope focus. It is likely that, apart from the actual instrument control software itself, MICADO will share a common software layer with other E-ELT subsystems. However, at the current time, no E-ELT software standards have been defined. Given the timeline of the project and the need for a stable development infrastructure, open source solutions are preferred over proprietary commercial ones in this context. For the same reason, Linux is favoured as the operating system.
Data Processing The user requirements for the data reduction software have been developed from observational scenarios for imaging and spectroscopy. These are standard techniques, and lead to no surprises. The performance required for photometry and astrometry has led to additional requirements. These include the use of a special internal calibration mask to measure instrument distortions, and additional steps in the data processing for the related science projects. The Astro-WISE system is well suited for the reduction of MICADO data. Astro-WISE is an integrated system in which users can not only perform data reduction but also data archiving, post-reduction analysis, and publishing of the raw, intermediate, and final data products. A salient feature of the data reduction relevant for MICADO is that it performs 'global' astrometry and photometry: astrometric and photometric corrections and calibrations are derived by combining the information from overlapping observations, improving on calibrations based on individual pointings. Data reduction can take place in a fully automated fashion or in a more manual, fine-tuned manner. The data rates estimated for MICADO are up to about 6 Terabytes per night (if all individual exposures are kept and processed for optimal astrometric accuracy), although significantly less if either short exposures can be directly co-added in the detector control system (DCS) or longer exposures are required (e.g. when using narrow-band filters). ADAPTIVE OPTICS The design of MICADO has been optimised for the multi-conjugate adaptive optics module MAORY [1, 2], which uses multiple lasers and natural guide stars to provide diffraction limited performance over a wide field with high sky coverage. However, the Phase A study included a simpler AO system that can be used during initial operations, in order to mitigate the risk associated with such a complex AO system, and to enable MICADO to produce diffraction limited images at the earliest opportunity. This AO system will be single-conjugate and use a single natural guide star as the wavefront reference: there will be sufficient science targets near suitable guide stars for 2-3 years of operation. Including SCAO in the MICADO design is necessary because, at the current time, the E-ELT baseline does not include a wavefront sensing capability for scientific instruments (although any WFS can make use of the E-ELT's deformable and tip-tilt mirrors). And because MICADO cannot interface to the Nasmyth port, a major part of the SCAO system is an optical relay and support structure that provides the same mechanical and operational interface as MAORY. Since it can in principle be re-used with other AO systems such as ATLAS [4], this means that MICADO will be able to make use of increasingly sophisticated AO systems as they become available.
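As a rough consistency check on the ~6 TB/night estimate from the Data Processing section above, one can work backwards to an implied exposure cadence. The detector count, bit depth, and night length below are assumptions for illustration, not MICADO design values:

```python
# Rough consistency check of the ~6 TB/night figure from the Data Processing
# section. Detector count, bit depth, and night length are assumptions.
N_DET = 16           # assumed 4x4 mosaic of HAWAII-4RG detectors
PIX = 4096 * 4096    # pixels per detector
BYTES_PER_PIX = 2    # assumed 16-bit raw frames
NIGHT_S = 10 * 3600  # assumed 10-hour night

frame_bytes = N_DET * PIX * BYTES_PER_PIX  # ~0.54 GB per exposure
frames = 6e12 / frame_bytes                # frames filling 6 TB
print(f"frame: {frame_bytes/1e9:.2f} GB -> {frames:.0f} frames/night "
      f"-> one every {NIGHT_S/frames:.1f} s")
# ~11,000 frames, i.e. a cadence of a few seconds, consistent with the
# 'few seconds' broad-band exposure times quoted under Operation and Calibration.
```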
The top-level requirements for the SCAO module are relatively simple: the optical, mechanical, and communication interfaces should be the same as those to/from MAORY; the WFS bandpass should be 0.45-0.8 µm, to maximise sensitivity without compromising the scientific wavelength range; the WFS should be able to guide on stars anywhere within a 45″ diameter patrol field; and the transmitted scientific field of view need only be 27″ × 27″ (commensurate with the isoplanatic patch size). This last requirement means that, even though there is no field curvature from the telescope (in contrast to the strongly curved field from MAORY for which MICADO is designed), the image quality over the SCAO field is unaffected. It also means that initially only the central detectors need be mounted; the remainder can be integrated later. SAMI, the SCAO module for MICADO shown in Fig. 6, is described in detail elsewhere [3]. It comprises 4 sub-systems: an optical relay made of a 3-mirror Offner relay, a folding mirror directing the light downward to MICADO and the WFS, and a dichroic plate splitting the light between MICADO and the WFS; a field derotator to compensate for the telescope movements while tracking, for both MICADO and the WFS; a support structure for the optical bench of the relay optics, the WFS, the derotator, and MICADO; and the WFS itself, including all opto-mechanics after the dichroic (a pupil steering mirror and, mounted on XY stages, a field stop, a K-mirror for pupil derotation, a lens triplet, and the WFS camera). The initial study suggests that an ADC in the WFS should not be necessary. The performance of the MICADO SCAO module has been estimated using analytical formulae (e.g. for the anisoplanatism error), information from ESO (e.g. for the fitting error), and two home-made simulation tools. It takes into account the control laws that will be implemented (a classical integrator with modal control, together with a Kalman filter for windshake compensation), and also that smart algorithms, such as weighting or pixel selection, will be used for the centre-of-gravity computation. The results of these computations are summarized in Tables 2 and 3. Multiplying values from both tables together will yield an estimate of the Strehl ratio expected for a guide star of a given magnitude at a given distance off-axis. OPERATION AND CALIBRATION The basic operational scenario for MICADO is very similar to other imaging cameras and spectrometers. For imaging, the sky background will be derived either by combining dithered exposures or, when necessary, by offsetting to sky. For spectroscopy, the source will be nodded back and forth along the slit. Typical exposure times will be a few seconds (broad-band filters) up to tens of seconds (narrow-band filters). For the shortest exposure times, several exposures will be made at the same pointing before dithering. The main issue is the size of the dithers, which must be optimised for science while minimizing the AO and telescope overheads. Table 4 summarises the definition of dithers and offsets for MICADO. Table 4. Definition of dithers and offsets to reduce AO and telescope overheads. Small dither: offset of up to ±0.3″ (goal ±0.5″) from the initial pointing in each of the X- and Y-directions, with an accuracy of <2 mas. AO loops remain closed during the operation. Cadence: 10-30 sec. Large dither: offset of up to ±10″ from the initial pointing. AO loops open during the offset, but reclose at the new position. The telescope is involved. Cadence: a few minutes.
OPERATION AND CALIBRATION

The basic operational scenario for MICADO is very similar to other imaging cameras and spectrometers. For imaging, the sky background will be derived either by combining dithered exposures or, when necessary, by offsetting to sky. For spectroscopy, the source will be nodded back and forth along the slit. Typical exposure times will be a few seconds (broad band filters) up to tens of seconds (narrow band filters). For the shortest exposure times, several exposures will be made at the same pointing before dithering. The main issue is the size of the dithers, which must be optimised for science while minimizing the AO and telescope overheads. Table 4 summarises the definition of dithers and offsets for MICADO.

Table 4. Definition of dithers and offsets to reduce AO and telescope overheads
- Small dither: offset of up to ±0.3′′ (goal ±0.5′′) from the initial pointing in each of the X- and Y-directions, with an accuracy of <2 mas. AO loops remain closed during operation. Cadence: 10-30 sec.
- Large dither: offset of up to ±10′′ from the initial pointing. AO loops open during the offset, but reclose at the new position. The telescope is involved. Cadence: a few minutes.
- Sky Offset: offset of up to 15′ (when the background cannot be recovered by dithering). AO loops do not need to close at the new position. Cadence: 10-30 minutes (depending on overhead).
- Sky Return: offset back, after a 'Sky Offset'. AO loops should reclose.

Most of the calibrations can be performed internally during the day while the dome lights are on: flatfields, wavelength calibration, darks. Additional twilight flats will be required in order to correct illumination gradients in the internal flats. The only non-standard calibration required is that to correct instrument distortions in the AO system and MICADO. This will also be possible during the day with the dome lights on, and will be achieved by inserting a special calibration mask into the focal plane in front of the AO system. The only standard nighttime calibration is to observe standard stars for flux calibration.

Astrometric Calibration

In the Galactic Centre, it is possible to achieve a relative astrometric precision of 200-300 µas 10 in the H-band on an 8-m class telescope. This corresponds to about 0.5% of the FWHM of the PSF. If this performance is projected forward to the E-ELT, one can hope to reach a precision of about 40 µas. In Table 5 we summarise the conclusions from a study 8 of the ten error sources that need to be controlled in order to achieve this. Measuring these effects clearly requires careful calibration, and the scheme outlined in Fig. 7 shows how this can be done. An internal calibration mask is used to measure discontinuities between the detectors and instrumental distortions in order to map the detector plane onto the sky. This provides a set of relative (or artificial) coordinates to which subsequent exposures can be matched by applying a low order transformation based on point sources in the field; an illustrative sketch of such a fit is given after Table 5. Exposures within a single epoch can then be combined. Deep integrations obtained in this way at different epochs are mapped to each other via another low order transformation, using either faint compact galaxies (which have negligible proper motions) or the ensemble of stars. In the latter case, their relative proper motions (a high order effect) are preserved during a low order transformation.

Table 5. Sources of astrometric uncertainty
- Absolute plate scale: pre-imaging (seeing limited) can give better than 10^-4 accuracy (i.e. 5 mas over a 50′′ field).
- Sampling & pixel scale: no measurable errors if the pixel scale does not exceed 3 mas pix^-1 (note that a finer scale is beneficial for highly crowded fields).
- Instrumental distortions: measured to 0.01 pixel in many current instruments. For MICADO, calibration with an internal mask will reduce this error to ~30 µas.
- Telescope instabilities: plate scale, rotation, etc., are low order effects that can be absorbed into a coordinate transformation.
- Achromatic differential refraction: this large ~10 mas effect is linear and so is removed by a coordinate transformation.
- Chromatic differential refraction: produces ~1 mas scale effects that depend on the source colour. A tunable ADC can reduce it to <20 µas in most cases.
- AO instrumental & atmospheric effects: shift the relative positions of the NGS used by the AO system. MAORY uses 3 NGS and so this is expected to be a low-order effect. Since they are slow effects, they can also be removed by tracking the barycentre of each NGS.
- Differential tilt jitter: introduces errors of ~100 µas into diffraction limited E-ELT observations. It scales as t^-1/2 and can be integrated down to ~10 µas within about 30 min.
- PSF variations: even with MCAO the PSF changes across the field of view. Measurements on simulated PSFs indicate this should introduce errors of <10 µas.
- Galaxies as astrometric references: galaxies are spatially resolved, but making use of their detailed internal structure enables one to reach ~20 µas accuracy with deep integrations.
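The sketch below illustrates the low order transformation step with a simple affine fit via least squares. The real pipeline may use higher-order polynomials and many more stars; the matched star lists here are made-up numbers purely for demonstration.

```python
import numpy as np

# Fit x' = a*x + b*y + c and y' = d*x + e*y + f mapping an exposure's
# point-source positions onto the reference (mask-based) coordinates.

ref = np.array([[10.0, 12.0], [55.2, 40.1], [80.5, 90.3], [23.4, 70.8]])  # reference coords
obs = np.array([[10.4, 11.7], [55.9, 39.6], [81.3, 89.5], [24.0, 70.2]])  # same stars, new exposure

n = len(obs)
A = np.hstack([obs, np.ones((n, 1))])              # design matrix, shape (n, 3)
coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# Apply the fitted transformation to map the exposure onto the reference frame.
mapped = np.column_stack([A @ coef_x, A @ coef_y])
residuals = mapped - ref
print("rms residual:", np.sqrt((residuals ** 2).mean()))
```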
PSF Calibration and Photometry

The precision with which photometry can be performed is determined by the accuracy to which the PSF is known. This issue is being addressed in 2 complementary ways. The MAORY consortium are developing a simple model for the PSF which enables its shape to be determined with only a few parameters. In principle, one can map the variation of each parameter across the field of view. With relatively few empirical measurements, this might yield a quantitative estimate of the PSF at any point. Such a tool would be extremely important for many science cases. In particular, for studying black hole and host galaxy growth across cosmic time, an accurate estimate of the PSF is needed in order to separate the QSO and host galaxy emission. In crowded stellar fields, simulations indicate that it is possible to derive the PSF from the data itself and perform accurate photometry using currently available tools. This works over a small field where PSF variations are negligible. For MCAO, simulations indicate that although spatial variations in the PSF are small, they will have some impact on photometric accuracy. Therefore, to cover a larger field, one would either need to stitch together multiple sub-fields that are analysed separately, or develop the photometry tools to cope with a spatially variable PSF.
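A minimal sketch of the idea behind a few-parameter, field-varying PSF model follows. The core+halo form (Gaussian core plus Moffat halo) is a common AO approximation, not the actual MAORY model, and the linear field dependence of the core weight is likewise an assumption for illustration.

```python
import numpy as np

def psf(r, s, core_fwhm, halo_fwhm, beta=2.5):
    """Radial PSF: weight s on a diffraction-like Gaussian core, (1 - s) on a Moffat halo."""
    core_sig = core_fwhm / 2.3548
    core = np.exp(-0.5 * (r / core_sig) ** 2)
    alpha = halo_fwhm / (2 * np.sqrt(2 ** (1 / beta) - 1))
    halo = (1 + (r / alpha) ** 2) ** (-beta)
    return s * core + (1 - s) * halo

def field_varying_psf(r, x, y):
    """Evaluate the PSF at field position (x, y) in arcsec; s(x, y) is a hypothetical calibration."""
    s = 0.5 - 0.002 * np.hypot(x, y)                      # assumed field dependence
    return psf(r, s, core_fwhm=0.010, halo_fwhm=0.30)     # arcsec, E-ELT-like core

r = np.linspace(0, 0.5, 6)
print(field_varying_psf(r, x=10.0, y=15.0))
```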
PERFORMANCE

The broadband imaging performance for the MICADO primary field is shown in Fig. 8. This has been calculated for isolated point sources using PSFs provided by the MAORY consortium and for standard broadband filters similar to those in HAWK-I. It shows that the 5σ sensitivity will be better than a few nano-Jy (30 mag AB) for the I to H bands in only 1-2 hours. The K-band performance depends strongly on the thermal background and hence the ambient temperature, but is likely to be about 1 mag less. Advanced filters (see Section 7) will have a very significant impact on MICADO sensitivity. A prototype J-band filter pair increases the sensitivity in a given integration time by 0.3 mag. More advanced design optimisation techniques could lead to a 0.5 mag sensitivity gain in this band, and comparable gains may be expected for the I-band and H-band. The spectroscopic performance has been calculated for isolated point sources that are nodded back and forth along a slit that is 8′′ long and 12 mas wide. Because of the unusually extreme core+halo shape of the adaptive optics PSF, this width maximises the signal-to-noise reached for point sources in the J and H-bands. In the K-band, additional diffraction losses at the slit reduce the throughput slightly. The sensitivity calculation takes account of all effects (including the Strehl ratios predicted by MAORY, the limited coupling efficiency due to the PSF shape, diffraction losses at the slit, and the thermal background). The resulting 5σ sensitivities are J_AB = H_AB = 27.2 mag between the OH lines in a 5 hour integration; and similarly K_AB = 25.7 mag (the poorer K-band value being primarily due to the thermal background).

TECHNOLOGICAL DEVELOPMENTS AND RISKS

MICADO is a simple camera and has been designed to have few risks. Indeed, the preliminary risk register contains no technical or programmatic risks above a low level. Those that do exist at this level are common risks associated with all (cryogenic) instruments, and not specific to MICADO. There are several future developments that could be beneficial. Although none of these is required for the successful functioning of MICADO, they would each increase the competitiveness of MICADO with respect to other facilities. The developments include:

Advanced filters: Substantial gains in the sensitivity of ground-based near-infrared instruments can be attained by sky line suppression or avoidance. We have begun a research project with Laser Zentrum Hannover to develop high throughput broad band filters and OH suppressing interference filters. The initial work, nearing completion, is to make a prototype for the J-band comprising low-pass and high-pass filters coating opposite sides of a substrate. Together these make a broad-band filter with >95% throughput (filters with 96-99% throughput, but suppression over a shorter baseline, were already manufactured several years ago 11 ). The OH suppression is achieved by transmitting several narrow bandpasses within this range where the background is sufficiently low. Future development will focus on extension to other bands, optimisation of the filter profile, process qualification, and coating homogeneity, stress, and characterisation.
Dual Imager: Fabry-Perots are complementary to integral field spectroscopy, but provide higher quality images (greater fidelity, and higher resolution over a larger field) of individual emission lines. The key to success would be to enable simultaneous imaging of emission line and continuum wavelengths. This would avoid problems with variable seeing or AO performance when subtracting the continuum to obtain the line emission map. Some development is required to achieve a good optical design.

High Time Resolution Astronomy: 12 Scientific applications include the stochastic behaviour of neutron stars and white dwarf accretion disks, and pulsar magnetospheres; and time resolved observations of gamma-ray and X-ray transients and anomalous repeaters. Detector technology is available now in the range 0.8-1.2 µm using avalanche photodiodes (APDs) and pnCCDs, 13 and there is every expectation that it will extend towards 2 µm over the next few years. A high time resolution instrument is essentially an imaging device with a fast detector, and so there would be very little additional opto-mechanical development required to include such a capability in MICADO's auxiliary arm.

Figure and table captions:
Figure 1. Illustration of MICADO mounted under the MCAO system MAORY on the E-ELT Nasmyth platform.
Figure 2. Overview of the MICADO optics. The major components are labelled.
Figure 3. Left: the main components of MICADO, with the cryostat access doors open. Right: overview of the MICADO mechanical structure, with the main structures labelled.
Figure 4. Overview of the MICADO reversible cryogenic scheme.
Figure 5. Overview of top-level use cases and actors for the MICADO instrument control software.
Figure 7. Astrometric calibration for MICADO is achieved in several stages.
Figure 8. MICADO sensitivity (as a function of integration time) for broad-band imaging through standard filters. A few reference points for 5 hr integrations are shown for comparison.
Table 1. Characteristics of the optical designs for the 2 arms.
Table 2. SCAO performance as a function of guide star magnitude.
Table 3. Anisoplanatic effect on Strehl ratio for SCAO.
Table 6. Sensitivity (AB mag) for isolated point sources to 5σ in 5 hours.
2010-05-27T09:25:15.000Z
2010-05-27T00:00:00.000
{ "year": 2010, "sha1": "c1b80be39a796739513890df2ba905ed806cadec", "oa_license": null, "oa_url": "https://www.spiedigitallibrary.org/conference-proceedings-of-spie/7735/77352A/MICADO-the-E-ELT-adaptive-optics-imaging-camera/10.1117/12.856379.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "af548e40effd875fb7dde773f4390aa711c8e0cf", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science", "Engineering" ] }
264994956
pes2o/s2orc
v3-fos-license
A Universal Digital Lock-in Amplifier Design for Calibrating the Photo-Detector Responses with Standard Black-Bodies

The lock-in amplifier (LIA) is widely utilized to detect ultra-weak optical periodic signals based on phase-sensitive, enhanced detection theory. In this paper, we present an all-digital and universal embedded LIA platform that accurately and conveniently describes the spectrum generated by standard black bodies at various temperatures with different optical detectors. The proposed design significantly reduces the complexity and cost of traditional analog LIAs while maintaining accuracy. The LIA components are implemented using a single field programmable gate array (FPGA), offering the flexibility to modify parameters for different situations. The normalized mean-square error (NMSE) of the captured spectra in the experiments is within 0.9% compared with the theoretical values.

Introduction

Various methods have been proposed to capture precise optical spectra from heavy noise and conduct accurate measurements, including lock-in amplifiers (LIAs), signal averaging, boxcar integrators, and correlators [1-3]. These methods share a similar philosophy, using differential strategies to reduce the noise bandwidth and amplify the desired signals simultaneously. Among the above methods, LIAs are particularly suitable for suppressing both the environmental and intrinsic detector noise to extract the desired signal. LIAs have been widely applied in diverse applications [4-6], including gravitational wave detection, quantum phenomenon demodulation, and imaging.

The LIA enhances ultra-weak signals by exploiting coherence [7], so phase coherence theory is at its core. The LIA utilizes the signal's time dependence, sometimes termed down-mixing or heterodyne/homodyne detection [8], to enhance weak signals by performing phase-sensitive detection, where the reference signal has the same modulating frequency as the desired one. Through the LIA, the desired signal components, both the amplitude and the initial phase, can be extracted. Additionally, with the continuing development of integrated circuits, all-digital LIAs have become advantageous over analog designs in terms of size, cost, and flexibility. Over the past few decades, field-programmable gate arrays (FPGAs) have proven to be a great prototyping platform for all-digital integrated circuit demonstrations [9,10]. Equipped with a parallel processor, multiplier unit, and direct digital synthesizer (DDS) [11] in the embedded platform, FPGAs enable designers to achieve all-digital LIA designs with higher speed, greater flexibility, and smaller size than traditional analog LIAs [12,13]. In addition, the proposed design contains extra functions, such as auto-phase alignment, oversampling, and dual-phase operation, which are applicable for accurate and flexible applications in the future.

To demonstrate the LIA's ability to capture precise optical spectra of various standard black-bodies, experimental measurements were conducted based on a single printed circuit board (PCB). By combining the circular variable filter (CVF) with optoelectrical detectors, such as MCT (HgCdTe) and InSb, the various blackbody spectra can be captured. Compared with the theoretical values, the NMSE of the simulations and of the spectrum-measuring experiments is less than 0.826% and 0.9%, respectively, which confirms the system accuracy of the proposed design.
Overview

Figure 1 shows the basic structure of the two-channel lock-in amplifier (LIA), which consists of the signal channel and the in-phase and quadrature reference channels, termed P and Q, each of which contains a phase sensitive detection (PSD) part and a low pass filter (LPF). The PSD function is achieved with a mixer in the block diagram by multiplying the input signal with orthogonal reference signals that have a constant π/2 phase difference between each other, and the LPFs utilized in the two channels share the same parameters, such as the suppressing ratio, passband, and stopband. The amplitude and phase of the original signal can then be extracted, and the mathematical relation is presented in detail as follows.
Suppose there are the following signals:

x(t) = A sin(ω_s t + φ_s) + N,
r_p(t) = B sin(ω_r t + φ_r),
r_q(t) = B cos(ω_r t + φ_r),   (1)

where x(t) is the original electronic signal captured by the optoelectronic detector with the noise N, which has an unknown form and frequency distribution; r_p(t) and r_q(t) are the reference signals for the two channels; and A, B, ω_s, ω_r, φ_s, φ_r, and t are the amplitudes, angular frequencies and initial phases of the input and reference signals, and time, respectively. It is clear that there is a constant π/2 phase shift between the two reference channels. The signal after the multiplication in the in-phase part can then be expressed as:

u_p(t) = x(t) r_p(t) = (AB/2) cos[(ω_s − ω_r)t + (φ_s − φ_r)] − (AB/2) cos[(ω_s + ω_r)t + (φ_s + φ_r)] + N r_p(t),   (2)

and the quadrature part signal is similar to the in-phase one. It can be seen that the output signal consists of a high frequency component with an angular frequency of ω_s + ω_r and a low frequency component with an angular frequency of ω_s − ω_r. The noise part, N r_p(t), is modulated by the reference frequency, ω_r. If ω_s = ω_r, the low frequency part directly becomes a DC signal, and then, through the ideal low-pass filter (LPF), the output signal, u_p^LPF(t), is:

u_p^LPF(t) = (AB/2) cos(φ_s − φ_r) + N r_p(t)|_LPF,   (3)

in which N r_p(t)|_LPF is the modulated noise in the LPF passband from the in-phase channel, and the output signal for the quadrature part is u_q^LPF(t) = (AB/2) sin(φ_s − φ_r) + N r_q(t)|_LPF.

It can be seen from the above formulas that if the phase difference between the two signals is constant, the output signal is proportional to the amplitude of the input signal; thus, the noise can be limited within the narrow LPF passband. To suppress the noise effectively in this narrow bandwidth, an appropriate filter ought to be selected. Both the amplitude and phase can therefore be captured from the in-phase and quadrature outputs u_p^LPF(t) and u_q^LPF(t), which are sensitive to the amplitude and phase of x(t).
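A minimal offline NumPy sketch of this dual-phase demodulation follows. The actual design runs on an FPGA with an IIR LPF, so the Butterworth filter and all numerical parameters here are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, f_ref = 100_000, 1_000            # sample rate and reference frequency (Hz), assumed
t = np.arange(fs) / fs                # one second of samples
A, phi_s = 0.2, 0.7                   # unknown amplitude and phase to recover
rng = np.random.default_rng(0)
x = A * np.sin(2 * np.pi * f_ref * t + phi_s) + 0.5 * rng.standard_normal(t.size)

r_p = np.sin(2 * np.pi * f_ref * t)   # in-phase reference (phi_r = 0, B = 1)
r_q = np.cos(2 * np.pi * f_ref * t)   # quadrature reference

sos = butter(4, 10, btype='low', fs=fs, output='sos')  # 10 Hz narrow low-pass
u_p = sosfiltfilt(sos, x * r_p)       # ~ (A/2) cos(phi_s)
u_q = sosfiltfilt(sos, x * r_q)       # ~ (A/2) sin(phi_s)

amp = 2 * np.hypot(u_p.mean(), u_q.mean())
phase = np.arctan2(u_q.mean(), u_p.mean())
print(f"A ~ {amp:.3f}, phi_s ~ {phase:.3f} rad")
```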
The Design of the Digital Lock-in Amplifier

Currently, digital LIAs are mostly built using Microcontroller Units (MCUs), Digital Signal Processors (DSPs) [14], FPGAs [15,16], and Personal Computers (PCs). Compared to analog devices, the quantified signals of digital circuit platforms are more robust and flexible, allowing them to overcome issues caused by temperature drift, random noise sources, and relatively poor stability. Additionally, due to the repeatability of their design, digital LIAs significantly reduce the cost of circuit replacement and have become the mainstream technology for LIAs.

To generate artificially controlled reference waves, the digital platform utilizes the DDS, in which the corresponding relationship between the phase and the amplitude of the reference signal, here a 4-bit digitalized number, is depicted in Figure 2. The time it takes for the phase circle to complete one full rotation determines the frequency of the sine wave, namely:

f_o = M · f_clk / 2^{B_nco},   (4)

in which M is the frequency tuning word (the phase increment accumulated per clock cycle), B_nco is the phase accumulation word width, indicating the number of bits of the phase point, and f_clk is the main frequency of the digital system.
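The following sketch shows the phase-accumulator mechanics behind Equation (4); the word width, clock, and tuning word are illustrative values, not those of the actual design.

```python
import numpy as np

B_NCO = 12                      # phase accumulator width (bits), assumed
F_CLK = 50_000_000              # system clock (Hz), assumed
M = 82                          # frequency tuning word -> f_o = M*F_CLK/2**B_NCO

def dds_samples(n):
    """Generate n sine samples by accumulating phase modulo 2**B_NCO."""
    acc = (M * np.arange(n)) % (1 << B_NCO)          # phase accumulator
    return np.sin(2 * np.pi * acc / (1 << B_NCO))    # phase-to-amplitude lookup

print("output frequency:", M * F_CLK / (1 << B_NCO), "Hz")  # ~1.0 MHz
```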
The phase-sensitive detector plays a crucial role in phase identification and can be treated as a phase comparator that compares the differences between the reference and the original signals, generating the phase error between them [17]. When the input signal and the reference signal have the same frequency but a constant phase difference, as shown in Equation (3), the phase and original amplitude can be extracted from the output signal u_LPF(t), which consists of u_p^LPF(t) and u_q^LPF(t). Generally, in the digital system, the PSD can be regarded as being composed of a multiplier followed by the low-pass filter (LPF), as shown in Figure 3.

For the lock-in amplifier, the LPF is utilized to suppress both the system noise and the high frequency modulated signals mentioned above. Considering that the modulated signal is similar to the DC signal in Equation (3), a low-pass filter with a narrow passband is selected to extract the desired signal. Furthermore, the narrower the passband, the less noise is left for the amplitude and phase calculations. The general amplitude-frequency response of the LPF is illustrated in Figure 4.

Generally, there are two categories of digital filters [18]: the finite impulse response (FIR) and the infinite impulse response (IIR). Even though FIR filters are inherently stable, IIR filters can achieve better filtering effects with lower orders, which means fewer resources and shorter time delays in embedded digital systems. In addition, only magnitude linearity is desired in optical spectrum capturing, whereas phase nonlinearity does not affect the spectrum measurement. However, to ensure stability when implementing an IIR digital filter in the LIA, the poles should be designed to lie within the unit circle. Compared with other types of IIR filters, the Chebyshev type II filter does not contain any amplitude fluctuation in its passband, making it suitable for spectrum capturing applications.

The system function H(z) of the direct-form IIR filter can be expressed as:

H(z) = Y(z)/X(z) = (Σ_{k=0..L} b_k z^{−k}) / (1 + Σ_{k=1..L} a_k z^{−k}),   (5)

in which Y(z) and X(z) are the Z transforms of the output y(n) and input x(n), respectively, and b_k and a_k are the filter coefficients. For the L-th order IIR filter, its schematic diagram can be obtained by the graphical description of Equation (5), as shown in Figure 5 below.
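A short design sketch of a Chebyshev type II low-pass filter in the form of Equation (5) is given below; the order, stopband attenuation, and cutoff are assumed values, not the coefficients used in the FPGA design.

```python
import numpy as np
from scipy import signal

fs = 625_000                          # sampling rate (Hz)
b, a = signal.cheby2(N=4,             # filter order L
                     rs=60,           # minimum stopband attenuation (dB)
                     Wn=100,          # stopband edge frequency (Hz)
                     btype='low',
                     fs=fs)
print("b:", b)                        # numerator coefficients b_k
print("a:", a)                        # denominator coefficients a_k (a[0] = 1)

# The poles must lie inside the unit circle for stability:
assert np.all(np.abs(np.roots(a)) < 1.0)
```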
For digital signal processing, it is often necessary to evaluate trigonometric function values and modulus values. The coordinated rotation digital computer (CORDIC) algorithm [19-21] is a hardware-efficient iterative method that uses rotations to calculate a wide range of elementary functions to achieve the above tasks. In essence, the CORDIC algorithm performs a successive approximation of the mathematical calculation. Since the basic operation unit of the algorithm only includes shifters and adders, the algorithm is simple and efficient in a digital system. The basic principle of the CORDIC algorithm is shown in Figure 6. The coordinate transformation relation between the two vectors is:

j_out = j_in cosθ − k_in sinθ,
k_out = k_in cosθ + j_in sinθ.   (6)

The pseudo-rotation equation can be obtained by dividing both sides by cosθ, namely:

j' = j_in − k_in tanθ,
k' = k_in + j_in tanθ.   (7)

At this time, the rotation angle is correct, but the modulus of the vector changes.
The essence of the CORDIC algorithm is to rotate the coordinate (j_in, k_in). Each rotation is by a fixed angle θ_i, and the rotation direction is d_i = sign(k_i). The goal of the rotation is that the ordinate k_i approaches 0. When the N-step iteration is carried out, the accumulated angle gives the arctangent of the input vector.

In order to simplify the calculation process [22], the CORDIC algorithm uses a series of small rotation angles, denoted as θ_i, that satisfy the equation tanθ_i = 2^{−i}, which allows the multiplications to be carried out using simple shifting operations. This simplification transforms the original algorithm into an iterative shift-addition algorithm. The iterative equations are as follows:

j[i+1] = j[i] + d_i · k[i] · 2^{−i},
k[i+1] = k[i] − d_i · j[i] · 2^{−i},
z[i+1] = z[i] + d_i · θ_i,   (8)

in which z[i] is the angle accumulator and d_i is the rotation direction, and Figure 7 shows the implementation of a single iteration in the CORDIC algorithm.

Suppose that the modulus of the vector needs to be maintained; this can be achieved by adding the rotation compensation factor K, which keeps the modulus constant after N iterations:

K = Π_{i=0..N−1} cosθ_i = Π_{i=0..N−1} 1/√(1 + 2^{−2i}) ≈ 0.6073.   (9)

For the embedded system, to simplify the calculating process, K serves as the initial value of j, j[0] = K, and is saved in the digital memory.
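A minimal floating-point sketch of N-step CORDIC vectoring, as described by Equations (6)-(9), is given below: the ordinate k is driven toward 0 so that z accumulates the arctangent and j approaches the scaled modulus. Floating point is used for clarity; the FPGA version works on fixed-point words with shifts and adds only.

```python
import math

N = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # theta_i = atan(2^-i)
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))     # gain compensation K ~ 0.6073

def cordic_vectoring(j, k):
    """Return (modulus, angle) of the vector (j, k); j > 0 assumed."""
    z = 0.0
    for i in range(N):
        d = 1.0 if k > 0 else -1.0                  # d_i = sign(k_i)
        j, k, z = (j + d * k * 2.0 ** -i,
                   k - d * j * 2.0 ** -i,
                   z + d * ANGLES[i])
    return K * j, z                                 # undo the CORDIC gain

print(cordic_vectoring(3.0, 4.0))  # ~ (5.0, 0.927)
```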
Spectrum Response Capturing Setup

To accurately capture the optical spectrum emitted by a black body, the experimental setup shown in Figure 8 is utilized. The setup includes a chopper, a circular variable filter (CVF), and an MCT detector to modulate the radiance from the source, select the corresponding wavelengths from the electromagnetic spectrum, and convert the radiant signal into an electrical one. However, the gains of the MCT and InSb photodetectors are not sufficient to convert the radiance into the processing range of the LIA, so an additional amplifier (AMP) is introduced. The LIA requires referencing sync signals from both the chopper and the CVF to extract the spectrum precisely. Finally, by combining the LIA output with the CVF wavelength index, accurate black body spectra can be obtained at various temperatures.
Simulations for LIA Design

To judge the performance and stability of the proposed digital LIA, MATLAB is used to simulate the proposed design with the parameters shown in Table 1, in which the chopper rate is set to 1000 Hz. The noise signal is generated using the rand function and scaled to a given signal-to-noise ratio (SNR), which is expressed in dB and defined as:

SNR = 20 log10(A/N),

in which A is the original amplitude and N is the root mean square of the noise. For the LPF part, an IIR Chebyshev type II filter, whose coefficients are set in advance, is utilized to process the mixed signals.

The mixed input signal of the original signal and noise, x(t), and the raw lock-in output signals from the in-phase and quadrature channels after phase locking, u_p^LPF(t) and u_q^LPF(t), are shown in Figure 9. The comparison between the output phase/amplitude and the reference phase/amplitude is shown in Figure 10. The performance of the designed digital LIA in accurately detecting the amplitude and phase information of the input signal is demonstrated in Figure 10. The simulation results show that the measurement accuracies of amplitude and phase are improved compared to the conventional lock-in amplification algorithm. To quantify the average error, the normalized mean-square error (NMSE) is used, which takes the range of the data into account and evaluates the accuracy of the prediction model. A smaller NMSE value indicates better accuracy in describing experimental data. The formula for the NMSE is:

NMSE = Σ_{i=1..n} (y_i − ŷ_i)² / Σ_{i=1..n} (y_i − ȳ)²,

where n is the number of samples, y_i and ŷ_i are the measured and reference values, and ȳ is the average of the samples y. The NMSE is calculated to evaluate the accuracy of the measurement results, as shown in Table 2.

Table 2. The NMSE results of phase and amplitude captured in simulations.
NMSE of Phase: 0.18%; NMSE of Amplitude: 0.826%.
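A direct transcription of the NMSE definition above is shown below; note that the exact normalization used in the paper is partly reconstructed, and the sample arrays are made-up numbers purely to exercise the function.

```python
import numpy as np

def nmse(measured: np.ndarray, reference: np.ndarray) -> float:
    """Mean-square error normalized by the spread of the measured data."""
    return np.sum((measured - reference) ** 2) / np.sum((measured - measured.mean()) ** 2)

y = np.array([1.00, 1.98, 3.05, 4.01, 4.96])   # measured samples (hypothetical)
y_hat = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # theoretical values
print(f"NMSE = {100 * nmse(y, y_hat):.3f} %")
```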
The value range of the NMSE is [0, +∞). The closer the value is to 0, the smaller the error of the measurement results. The NMSE values of the phase and amplitude measurement results are both less than 0.01; therefore, the measurement results are relatively accurate.

Experiments with LIA

To illustrate the actual effect of the lock-in amplification algorithm, a hardware platform was completed. The development board used in the experiment is equipped with a Zynq-7000 series chip, the XC7Z-0710 from Xilinx. The circuit board has two-channel SMA inputs, using a high performance, 24-bit ADC, the AD7760. The pre-amplifier circuit obtains the input photoelectric signal through the SMA interfaces. The proposed LIA design is shown in detail in Figure 11.
In the experimental setup, the desired parameters and embedded resources of the FPGA project are shown in Table 3. The original signal is provided to the LIA PCB board through the SMA connector; the ADC, which has a 625 ksamples/s sampling rate, then converts this signal into a digital one for the FPGA, and the values processed by the proposed digital LIA project are finally sent to the computer through the USB connector at a rate of 40,000 results/s. For the inner data of the FPGA project, the AXI Bus is utilized to connect the modules. An arbitrary waveform generator is utilized to provide the original signal for the LIA, and the measuring results are shown in Figure 12.

Figure 12a illustrates the 0.4000 Vpp measuring results, in which the x-axis is the time, the y-axis is the digital output, and the red dashed lines are the maximum and minimum values of the output, respectively. The oscillation in Figure 12a is 24 counts, corresponding to 0.01 mV for the 10 Vpp system. Furthermore, to illustrate the linear voltage response of the proposed LIA platform, an input signal ranging from 0 to 0.5 V with a 0.1 mV step is implemented, and the measuring results are shown in Figure 12b, in which the x-axis and y-axis are the input analog signal and the LIA digital output, respectively.
Spectrum Response

Since the LIA itself is not able to capture the optoelectrical detector spectrum response, the CVF is introduced. The wavelength-index relation of the CVF, which enables an optical wavelength response ranging from 2.4 µm to 14 µm, is drawn as follows. As shown in Figure 13, there are 500 points, or indices, of the CVF, each of them corresponding to a given wavelength, which establishes the link between the LIA digital outputs and the spectrum wavelength.

With the setup shown in Figure 8, the temperature of the standard blackbody and the rate of the chopper are set to 50 °C and 800 Hz, respectively, and the results are exhibited in Figure 14. Figure 14a shows the MCT spectrum response at the given wavelengths, and Figure 14b shows the five measuring results, with NMSE values of 0.9%, 0.46%, 0.59%, 0.84%, and 0.61%, respectively, which express the robust performance of the proposed LIA system. To analyze the influence of the chopper rate on the LIA, experiments at chopper rates of 400 Hz, 600 Hz, 800 Hz, 1200 Hz, and 1800 Hz were conducted with the InSb detector. Since InSb is sensitive in the spectrum below 6 µm, a blackbody at 1000 °C was implemented to complete the experiments, and the five measuring results, with NMSE values of 0.602%, 0.722%, 0.629%, 0.815%, and 0.664%, are depicted in Figure 15, respectively.
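The sketch below shows how LIA outputs can be re-indexed onto wavelengths. The real relation is the measured calibration curve of Figure 13; a linear 500-point mapping over 2.4-14 µm is assumed here only for illustration.

```python
import numpy as np

index = np.arange(500)
wavelength_um = np.interp(index, [0, 499], [2.4, 14.0])   # assumed linear calibration

lia_output = np.random.rand(500)       # placeholder LIA amplitudes per CVF index
spectrum = np.column_stack([wavelength_um, lia_output])   # (wavelength, response) pairs
print(spectrum[:3])
```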
Discussion

From the above results, even though the chopper rate affects the measuring results, especially in the relatively long wavelength region, the detector spectrum responses remain similar. It is clear that the LIA proposed in this manuscript is sufficient to capture the spectrum response with choppers at various rates. Furthermore, since the results from lower chopper rates are much more stable, a low chopper rate should be set when capturing the calibrating spectrum response.

Conclusions

In this paper, a digital implementation of the LIA is presented, which reduces the complexity and size of the LIA instrument, making it more practical for various applications. The digital LIA is designed and simulated using MATLAB, and experimental results demonstrate its effectiveness in accurately measuring the amplitude of the signals.
Furthermore, a setup is proposed for accurately capturing the optical spectrum from black bodies, which utilizes a chopper, circular variable filter, and MCT/InSb detectors. The accuracy of the setup is demonstrated through experimental results.

Figure and table captions:
Figure 1. Structure diagram of the phase-locked amplifier: AMP, amplifier; LPF, low-pass filter.
Figure 2. Corresponding relationship between the phase word and the amplitude of the trigonometric function.
Figure 3. The phase sensitive detector with digital filter.
Figure 4. The amplitude-frequency response diagram of the digital low-pass filter, where ω_p represents the passband cutoff frequency, ω_st the stopband cutoff frequency, Δω = ω_st − ω_p the transition band, δ_p the passband ripple, and δ_T the stopband ripple, respectively.
Figure 5. The schematic diagram of the low-pass IIR filter.
Figure 6. The CORDIC algorithm vector rotation diagram with the input vector rotated by θ.
Figure 7. The i-th single iteration of the CORDIC algorithm implementation. SHIFTER, phase shifter; ADD/SUB, adder or subtractor that contains a conditional complementor; SIGN, sign bit capture.
Figure 9. The simulation results. (a) Mixed input signal, x(t), with SNR = −10 in two periods; (b) output signal from the in-phase channel, u_p^LPF(t); (c) output signal from the quadrature channel, u_q^LPF(t).
Figure 10. (a) The output amplitude and the reference amplitude. (b) The output phase and the reference phase.
Figure 12. The linear response experimental results. (a) Single voltage measuring results with 0.4000 Vpp for 25 s; (b) amplitude measuring results with the input amplitude ranging from 0 to 0.5 Vpp.
Figure 13. The wavelength-index of the CVF.
Figure 14. The results of the MCT experiments. (a) The spectrum response of MCT; (b) the blackbody spectrum measuring results.
Figure 15. The responses of InSb at various chopper rates.
Table 1. Parameters for simulation.
Table 2. The NMSE results of phase error and amplitude captured in simulations.
Table 3. The parameters in experiments.
2023-11-04T15:03:29.445Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "98a48e81aed1b24fba1b0726d87518e5caff0b59", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/21/8902/pdf?version=1698851829", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf0526c75690ee11486ab5f59de070b37a50e71a", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [] }
17648388
pes2o/s2orc
v3-fos-license
Rights and duties policy implementation in Chile: health-care professionals' perceptions

Abstract

Objective: To explore the perceptions of health professionals in an integrated network of public provision of health services regarding the implementation of the Law on Rights and Duties of People in Chile.

Method: Qualitative descriptive study. A stratified qualitative sample of 53 professionals from five low-complexity centres and one high-complexity centre, all part of the integrated network of health services in Valdivia, Los Ríos Region, Chile, was selected according to the criterion of overall saturation of the explored dimensions. The information was gathered through semi-structured, in-depth interviews carried out after signing the informed consent. Data were analysed using an inductive approach to content analysis.

Results: Three categories emerged from the interviews: conceptualization and knowledge, factors influencing the implementation, and recommendations for strengthening the implementation, along with seven subcategories. Notably, health professionals in the health-care network perceived difficulties in implementing the Law on rights and duties of patients, among them a lack of knowledge about the Law, poor exposure, and a lack of resources for its implementation. As recommendations to improve the implementation of the Law, they suggested adapting the infrastructure of the institutions and offering training.

Conclusions: There are hindering factors for the implementation of the Law related to organizational and professional gaps in the institutions providing health care.

Introduction

Health systems are facing greater demands regarding health financing and the coverage of health services, but also new kinds of patients who are aware of their rights and of windows of opportunity for further public engagement. 1 Some European Union member states have shown greater improvements in shifting their health systems towards patients' rights and duties by passing laws and ratifying multilateral conventions. 2 Changes in the provision of health services arising from the design and implementation of laws on the rights and duties of patients have been widely described from the patients' perspective. 3-9 How health professionals perceive these changes, given their key role in the provision of health services, remains largely without evidence.

Chile has a mixed health system in terms of both financing and provision of health services. The public provision, financed through general taxation, a mandatory health tax and copayments, serves middle- and low-income earners. Meanwhile, the private provision is funded through the mandatory health tax, copayments and out-of-pocket expenses, and can be accessed by high-income earners in the country. The health system reform at the beginning of the 2000s, internationally known as the AUGE reform (acronym in Spanish for Universal Access with Explicit Health Guarantees), established changes in the organization of the health system which, although they did not change its mixed structure, did strengthen the stewardship of the system. The reform separated the functions of public health from those related to the management of integrated health services networks, created the Superintendence of Health as the institution responsible for regulating the health system, and introduced an explicit health prioritization process based on explicit health guarantees.
10,11 Currently, there are 80 health problems and their interventions under the Regime of Explicit Health Guarantees. The reform also included a bill on the rights and duties of people regarding their health care. Although the project entered discussion in the National Congress along with the other projects of the reform, its promulgation and publication in the national Official Gazette did not occur until April 2012. The Law introduced a broad set of rights, from protection in health care to the involvement of people in their own care. 12 Furthermore, it established duties for patients with regard to respecting the internal rules of the institutions, staying informed on the functioning of the institutions and treating members of the health team respectfully. 12 The rights and duties introduced by the Law are mandatory not only for public health-care providers, but also for private ones. Therefore, health managers and health professionals are mandated to ensure the rights and duties independently of the public or private ownership and funding of health-care providers. 12 Moreover, both public and private providers are mandated to display the Patients' Rights and Duties Chart in their facilities, and both are under the surveillance of the Superintendence of Health.
The World Health Report 2010, published by the World Health Organization, placed Chile as an example in the establishment of a system of guarantees that allows for moving towards universal health coverage. 13 Thus, the Chilean health system is presented as an original case of how to provide universal health coverage under a law that establishes rights and duties for people. However, a year after the launch of the implementation phase there was no published empirical evidence on how health professionals perceive this implementation in the provision of health services in Chile. This research aimed to explore the perceptions of health professionals in an integrated network of public provision of health services regarding the implementation of the Law on Rights and Duties of People in Chile.

Study design
A study of qualitative, descriptive and exploratory design was conducted. This type of design allows for the description and investigation of scarcely understood phenomena, the identification or discovery of meaningful units, and the generation of new hypotheses for investigation. 14 It was considered suitable for this study, given the absence of published empirical evidence regarding the perceptions of health professionals on the implementation of the Law on Rights and Duties of People and the public provision of health services in Chile.

Study context
Los Ríos Region is located in the far south of the country. Prior to 2007, it was part of Los Lagos Region, as the Province of Valdivia. Since that year, the new region has been politically and administratively divided into the Provinces of Valdivia and Ranco. Accordingly, the population census of 2002 estimated 356 396 inhabitants for the Province of Valdivia, which territorially corresponds to the current Los Ríos Region. The city of Valdivia is the regional capital, with a population of 140 559 inhabitants according to the same census. 15 The public health network of the city consists of a high-complexity, teaching and assistance-oriented hospital, which is a reference centre for the region, and five family- and community-oriented centres for primary health care.
Sampling
The sample population corresponded to health professionals from public health-care providers who deliver health-care services directly to patients. The first criterion, selecting professionals from public health-care providers, was adopted because these institutions manage a higher proportion of health professionals and deliver health care for almost the entire population of the city of Valdivia. The second criterion, selecting health professionals who provide health-care services directly to patients, reflects their legal responsibility under the Health Code of the Republic of Chile. Professionals in administrative-only positions were excluded, because they do not take part in the direct provision of health services to patients. A stratified sample of 53 health professionals was considered. 16 Participants were recruited until achieving overall saturation of the dimensions and layers explored in the study. 17 The professionals were personally contacted at their institutions by the research team. None of the contacted professionals refused to participate in the study.

Data collection
The qualitative technique of in-depth, semi-structured individual interviews was implemented. 18 An interview protocol was designed to explore the various dimensions of the study. The topics addressed during the interviews included knowledge of the Law, factors that influenced its implementation, and recommendations on policy options to improve implementation. The interview protocol was subjected to a pilot test with professionals who had not been included in the final sample, as a quality criterion in qualitative investigations. 19 Fieldwork was conducted between April and November 2013. Interviews lasted between 20 and 30 minutes and were conducted at the workplace of the respondents. The interviews were conducted in Spanish by trained interviewers, in order to obtain consistency and reduce variation in the way the subjects were presented to the respondents. After the informed consent process and the submission of the respective document, the interviews were carried out, recorded and transcribed verbatim. Once transcribed, and as a good-practice criterion for reporting qualitative inquiries, they were sent to the respondents for comments and corrections. 19

Data analysis
The content analysis technique was used in its conventional approach. 20,21 This technique reduces the information through the definition of analysis units and coding trees. Once the units and their respective codes were established, subject categories and subcategories emerging from the analysis of the talks were defined. During the third stage, verbatim quotes from the interviews were selected. The quotes were translated into English and coded to protect anonymity and to differentiate the workplace of the respondents: low-complexity centres (LC) and high-complexity centres (HC).

Qualitative rigour and quality criteria
The quality of the research was supervised through the scientific criteria of credibility, dependability, confirmability and transferability. [22][23][24] Credibility was achieved through triangulation of researchers and delivery of transcripts to the respondents. Dependability and confirmability were accomplished through the systematic description of the methodology used and reflexivity in conducting the analysis.
Finally, transferability was achieved through the structural, organizational and legal homogeneity of the public provision of the Chilean health system, which allows part of the findings to be transferred to other regional and community care networks that share the characteristics of the studied network.

Ethical considerations
The research protocol was approved by the Ethics Research Committee of the Health Service of Valdivia (Ref: No. 044-2013). The respective authorizations were also requested from the administrative authorities of the institutions providing health services. Finally, an informed consent document was given to the respondents that stated the goal of the research, the voluntary basis of their participation, and their freedom to leave the investigation without any explanation.

Results
According to the level of complexity of the institution providing health services, 54.7% of the professionals worked at low-complexity centres and 45.3% at high-complexity centres. According to gender, 67.9% of them were women (Table 1). Thematic categories and subcategories that emerged from the analysis of the interviews were used to describe the results (Table 2). In Fig. 1, the broader results are exhibited according to the thematic categories and the type of health-care provider complexity.

Conceptualization and knowledge
The professionals at low-complexity centres reported that knowledge regarding the Law was acquired in their workplace through the information provided by administrative staff. The best-known rights were those related to respect and good treatment towards the user, providing timely care, and information on issues related to health care. In the case of duties, compliance with the regulations of the health centre, providing truthful information during care and not abusing workers were mentioned. Professionals at high-complexity centres reported that the information they were given at work was scarce, there was no training, and providing information was emphasized only at the beginning of the implementation of the Law. They mentioned that the most important rights to maintain were privacy, keeping the user informed, and confidentiality of the medical records. Among the duties, they knew the obligation to comply with the internal regulations of the institution, relating it to reported conflicts with the families of patients.
Professionals at low-complexity centres showed ignorance about how the implementation stage was being assessed. Similarly, professionals differed regarding the organizational body responsible for evaluation: the Information, Complaints and Suggestions Office (OIRS) for the patients, the User Satisfaction Survey, or undercover assessment. Professionals at high-complexity centres perceived that the implementation of the Law was not being monitored. They also said they had not been subject to a compliance assessment.
...I don't know, because here it has always been fulfilled, the patients are kept informed... at least in my area I report what the patient has, the privacy issue is maintained and I think it happens that way, but I don't know if at supervising level the Health Service might be implementing some kind of supervision, so to speak...
...I think it was the same for accreditation at least here, because everyone focused just on that and nothing else... but more at national level.
I don't know, I think it is because of misinformation, because it is a law and not a protocol, like infection procedures or that sort of thing; I don't know either if there will be any penalty... [HC-01]
The perceptions of the professionals at low-complexity centres, as well as those of professionals at high-complexity centres, agree on the absence of an information strategy for the implementation of the Law. Moreover, they perceived a lack of assessment of the implementation and enforcement of the Law.

Factors influencing the implementation
For the professionals at low-complexity centres, the factor that facilitated the implementation was the set of skills, motivation and commitment of the health team to educate both workers and users regarding the relevance of the Law. For the professionals at high-complexity centres, however, the organizational and political factors that facilitated the implementation of the Law were scarce; they highlighted only the provision of information by the staff and the willingness of the professionals to assist the patients and their families.
The hindering factor identified by the professionals at low-complexity centres was the general ignorance of the content of the Law among patients, professionals and health technicians. Although professionals at the high-complexity centres also perceived the limited knowledge of the Law as an obstacle to its implementation, they thought that the hospital accreditation process to which the institution was subjected also contributed as an obstacle, because it was given institutional priority over the implementation of the Law.
...The lack of exposure, maybe too little information. Although it is in the waiting room and somehow people might have heard of it, but... you have to read it, but there are people who cannot read. It is as if it was only depending on that; whoever is interested in finding out is going to read it, but perhaps more is needed...
...I think that yes, sometimes they were giving talks, posters were put up, but I think that lasted about a month and then no longer; I saw people worried about accreditation more than anything, about studying the different parameters it has, but the relationship with the rest of the patients seems to be left aside...
The professionals perceive the design of the Law as a highly centralized stage, which did not consider the opinion of those who would be responsible for implementing it. This characteristic of the design stage is seen as having caused the main obstacles to its implementation.

Recommendations for strengthening the implementation
To improve the implementation of the Law, professionals at the low-complexity centres identified as policy options the development of strategies to improve the delivery of information to patients, as well as educating people through the training of community leaders so that they impart the Law in a way that is comprehensible to the people. In the case of professionals at high-complexity centres, a policy option should be directed to the training of health professionals on standardization processes, through protocols, to implement the Law.
...I think that to inform... inform the people, maybe to have more community meetings, maybe channel it through leaders...
...It's super important that training is continuous, well communicated.
It is something every professional must know in order to work in health, so maybe it could be something to be included in the training, when they are students. And everything should be standardized, so that we all speak the same language...
Professionals at both low- and high-complexity centres perceive the need to deepen the fundamentals behind the design and implementation of the Law. For them, the fact that the Law was designed with an almost exclusive focus on the rights of patients, without considering a similar number of duties, is perceived as a barrier to improving the provision of health services. Moreover, they perceive that deepening its rationale would be an advantage in moving towards a greater commitment to the fulfilment and dissemination of the Law among patients.
...We should empower it a bit more, get to know it better; it would be super important to know how this Law was created, why it was created. What was the need for a new bill of rights and duties from the authority and not from the community...
On the other hand, professionals at the low-complexity centres recommend improving structural aspects of the institutions that act as barriers to the implementation of the Law. They recommend investing in infrastructure and increasing the number and size of the rooms for patient care. Similarly, practitioners at the high-complexity centres recommend expanding the physical space of complex services with high demand, to guarantee the privacy, respect and dignity of patients.
...We feel affected by the user, because he demands; we know that the centre is under a process of normalization and that there will be changes in infrastructure; we are all tight, there is no box. You will see me here today, there another day, and the patient gets angry because he doesn't find you and there he goes, letting his anger out, I don't know...
I think one has to know how to work with what you have. But a small caring facility is not fit to serve so many people. I think they have to start by sorting things out at all levels. Because we try to comply with what we have, but overall it should change...
For professionals at both levels of complexity, the changes that are needed to improve the implementation of the Law, as well as its purpose, must include structural changes in the institutions that provide health services, as well as work on alignment with professionals regarding the contents of the Law.

Discussion
The findings of this investigation establish the relevance of the perceptions of professionals who provide health services for the implementation of laws aimed at strengthening the rights and duties of patients. Moreover, they help to identify factors that contribute to overcoming the mismatch between advancing patients' rights and a health system that is partly unprepared to meet them. The experience of Chile shows the complexity of moving towards the implementation of rights and duties of patients under a double logic of providing health services: firstly, the workforce must face the demand from pathologies with Explicit Health Guarantees and, secondly, other pathologies whose provision is not subject to that scheme. Several investigations on the implementation of the health reform have shown how gaps between the design and the implementation capacity of the institutions and the workforce have acted as obstacles to reaching the objectives of the health system.
25,26 This is apparent in the case of the implementation of the Law on Rights and Duties of patients, as such gaps are again placed as a central aspect. An interesting finding related to the perception of professionals on the technical discussion behind the content of the Law: although the Law was part of a widely debated and publicized reform proposal, this lack of awareness could be explained by the time elapsed between the bill and the enactment of the Law. In the same sense, this can be noted in the perception of professionals at different levels of complexity regarding the few duties that the Law requests from patients. Even if studies on the perceptions of health professionals regarding laws on rights and duties of patients are scarce, their main finding is the professionals' lack of knowledge of the rights they must safeguard for their patients. 27,28
Even if the implications for and against imposing duties on patients of publicly funded health systems have been discussed, 29 in the Chilean experience the rights and duties cover providers with both public and private ownership and, therefore, funding. Even though this feature makes the Chilean experience more comprehensive in implementing patients' rights and duties, the obtained results highlight the structural and organizational difficulties faced by the public provision of health services in enforcing not only duties but also rights. In this regard, emphasis has been placed on how institutional policies, equipment, supplies, the work environment and professional deficits affect the implementation of the rights of patients. 30 Nevertheless, in the Chilean case, a top-down policy approach could explain why professionals perceived a lack of a thorough implementation process. In fact, the first inspection by the Superintendence of Health regarding compliance with the Law reported a significant number of public providers that did not have internal regulations for compliance with the Law. 31 As a path for overcoming these shortages, the key role of health professionals has been identified, alongside patients' awareness. 32,33
Although the reported results are limited, as they represent the perceptions of professionals of one integrated network of health services in a medium-sized town in southern Chile, the fact that public policies are designed centrally at the country level and implemented under the same organizational and administrative structure in health across all regions allows transferability of the results. In conclusion, the policy implementation process has shown gaps between the design and implementation stages of the health-care reform. Thus, there are hindering factors for the implementation of the Law related to organizational and professional gaps in the institutions providing health care. Surpassing the professional gaps is also an opportunity for strengthening professionalism in a scenario that furthers patients' rights and duties. It has been argued that social factors such as increased attention to health-care issues, changes in the philosophy of care for patients and rapid changes in the management of care are influencing the current changes in professionalism. 34,35 In the Chilean case, it seems these factors should be a matter of concern, as professionals perceived a strong concentration on patients' rights instead of duties.
Overcoming the implementation gaps will require moving towards policy options focused on bridging the divide between health professionals' education and health system performance on patients' rights and duties. It will also be a priority to move towards a comprehensive design of strategies for supporting health professionals and patients in the implementation phase. However, further research is needed on monitoring the enforcement of the Law, as well as on the relationship between the implementation of the Law and better results linked to the accomplishment of the non-medical expectations of people in both public and private health-care providers.
2018-04-03T05:41:59.715Z
2015-08-18T00:00:00.000
{ "year": 2015, "sha1": "49a853ef6c370647099010e189b5272d5a024da5", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/hex.12396", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "49a853ef6c370647099010e189b5272d5a024da5", "s2fieldsofstudy": [ "Political Science", "Medicine" ], "extfieldsofstudy": [ "Sociology", "Medicine" ] }
57375966
pes2o/s2orc
v3-fos-license
The Impact of CRISPR-Cas System on Antiviral Therapy

Clustered regularly interspaced short palindromic repeats (CRISPR)-associated protein nuclease (Cas) is identified as an adaptive immune system in archaea and bacteria. Type II of this system, CRISPR-Cas9, is the most versatile form and has enabled facile and efficient targeted genome editing. Viral infections have serious impacts on global health, and conventional antiviral therapies have not yet yielded a successful solution. The CRISPR-Cas9 system represents a promising tool for eliminating viral infections. In this review, we highlight 1) the recent progress of CRISPR-Cas technology in decoding and diagnosing viral outbreaks, 2) its applications to eliminate viral infections in both pre-integration and provirus stages, and 3) the various delivery systems that are employed to introduce the platform into target cells.

Introduction
The clustered regularly interspaced short palindromic repeats (CRISPR)-associated protein nuclease (Cas) is a prokaryotic antiviral adaptive immune system, which is present in most archaea (~90%) and some bacteria (~50%). The genomic components of the CRISPR system are made up of the trans-activating crRNA (tracrRNA), the cas operon, a leader sequence and arrays of short direct repeats. These repeats are interspersed with non-repetitive spacer sequences, which are acquired from mobile invasive elements, mainly viruses and plasmids (Figure 1). The CRISPR-Cas system confers resistance against foreign genetic elements whose genomes have previously contributed spacer sequences to the CRISPR array. The CRISPR-Cas9 system is derived from type II, the simplest and most commonly used system in genome editing approaches. 1 Host codon-optimized Cas9 is recruited to the target site by a designable guide RNA (gRNA) and precisely introduces a double strand break (DSB) ~3 base pairs (bp) upstream of the protospacer adjacent motif (PAM). The DSB is then repaired by either the error-prone non-homologous end joining (NHEJ) or the homology-directed repair (HDR) pathway. NHEJ leaves the genome vulnerable to a lethal genomic mutation by frameshifting an open reading frame (ORF) in the target gene. Giant viruses also have a defense structure reminiscent of the CRISPR-Cas system: the viral defense system known as the mimivirus virophage resistance element (MIMIVIRE) is composed of proteins with both nuclease and helicase activities, representing a nucleic acid-based adaptive immune system against virophages. 2
Over recent years, CRISPR-Cas technologies have been well optimized in eukaryotic cells, particularly in human cells. Infectious viral diseases are serious global health concerns, and despite the huge efforts invested in their eradication, only limited success has been achieved. The establishment of long-term infections leading to chronic disease and the development of antiviral-resistant mutants are factors that lead to persistent viral infections. Novel strategies are required to eliminate even traces of viruses within the host. 3 During the last few years, applications of the CRISPR-Cas9 system have introduced novel antiviral therapeutic options. The advantage of CRISPR-Cas9 technology lies in its ability to directly target the viral DNA or RNA; in this way, the viral infection can be eliminated in the host. CRISPR-Cas systems have shown their efficacy in different viral infections in both pre-integration and provirus stages.
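As a small, purely illustrative aside (not part of the review itself): the targeting rule described above, a protospacer followed by an NGG PAM with the DSB introduced ~3 bp upstream of the PAM, can be sketched in a few lines of Python. The scanner below, the fixed 20-nt spacer length and the demo sequence are our own assumptions for illustration; it scans the forward strand only and ignores everything a real gRNA design tool would check (off-targets, GC content, secondary structure).

```python
import re

def find_spcas9_sites(seq, spacer_len=20):
    """Scan the forward strand of `seq` for candidate SpCas9 targets.

    A candidate is a `spacer_len`-nt protospacer immediately followed
    by an NGG PAM; the blunt double-strand break is expected ~3 bp
    upstream of the PAM (illustrative sketch only).
    """
    seq = seq.upper()
    sites = []
    # Lookahead so that overlapping NGG motifs are all reported.
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start < spacer_len:
            continue  # not enough upstream sequence for a full spacer
        spacer = seq[pam_start - spacer_len:pam_start]
        cut_index = pam_start - 3  # position ~3 bp upstream of the PAM
        sites.append({"spacer": spacer, "pam": m.group(1), "cut_index": cut_index})
    return sites

# Made-up demo sequence, not a real viral genome fragment.
demo = "ATGCTAGCTAGGCTTACCGGATCGATCGTACGGTACGATCGATCGGG"
for site in find_spcas9_sites(demo):
    print(site["spacer"], site["pam"], site["cut_index"])
```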
4 Similarly, CRISPR-Cas has generated striking insights for the development of novel vaccination strategies in the poultry industry. It has been reported that the CRISPR-Cas9 system can efficiently modify the genome of the duck enteritis virus (DEV) C-KCE strain. The envelope glycoprotein gene and pre-membrane proteins of duck tembusu virus (DTMUV), as well as the hemagglutinin gene of the highly pathogenic avian influenza virus (HPAIV) H5N1, were inserted at suitable sites in the C-KCE strain to develop a trivalent vaccine that can efficiently prevent infection by DTMUV, H5N1, and DEV in ducks. 5
In addition to targeting DNA viruses, the CRISPR-Cas9 system has demonstrated its feasibility and versatility in targeting RNA viruses. Engineered Francisella novicida Cas9 (FnCas9) can successfully target the positive-sense single-stranded RNA hepatitis C virus in eukaryotic cells. In contrast to Streptococcus pyogenes Cas9 (SpCas9), which needs a synthesized PAM-presenting oligonucleotide to target RNA in vitro, FnCas9 targets the RNA virus PAM-independently. In addition, the ability of FnCas9 to target RNA in the cytosol can reduce the off-target activity of the system on the host DNA, compared to Cas9, which targets DNA in the nucleus. 6 In the current study, we recapitulate the impact of the CRISPR-Cas9 system on different kinds of viral genomes that can cause either detrimental acute or persistent infections in humans.

Decoding and diagnosis of the obscure viruses
The rapid expansion of human flavivirus infections, namely dengue virus (DENV) and Zika virus (ZIKV), has persuaded the research community to devise effective therapies against them. Recent insights into the signaling pathways of flaviviruses, which drive the primary steps of their infection, have provided a schematic picture of the biology of these viruses. Genome-wide CRISPR-Cas9 screening has identified nine host genes that are involved in flavivirus infectivity. The endoplasmic reticulum (ER) plays an indispensable role in replication, translation, polyprotein processing and virion morphogenesis, and consequently in controlling the life cycle of flaviviruses. 7 In this line, most of the candidate genes were associated with the ER. Studies have elucidated the unique dependency of the Flaviviridae on the ER-associated signal peptidase complex 1 (SPCS1) proteins: disruption of the SPCS1 processing pathway reduced the infection level of all Flaviviridae members. 8 Moreover, orthologous functional genomic CRISPR-Cas9 screening revealed various host factors involved in virus entry (AXL), endocytosis (RAB5C, RABGEF) and transmembrane protein processing and maturation (EMC) that are associated with DENV and ZIKV infection. 9 The TLR7/8 agonist R848 strongly restrains ZIKV replication, and it has been indicated that the replication-inhibitory effect of R848 is mediated by viperin, an IFN-inducible protein. To confirm this claim, the CRISPR-Cas9 genome editing tool was used to knock out (KO) viperin in human MDM cells; as a result, the inhibitory effect of R848 was relieved in the KO cells. 10
The emergent outbreak of ZIKV and the complexities of its infection highlight the need for a low-cost, sequence-specific diagnostic platform that can be used in pandemic regions. Likewise, the inferior performance and limitations of antibody-based detection methods, together with the off-target problems and false-positive results of sequence-based diagnostics, have made CRISPR-Cas9 technology an attractive alternative strategy.
11 The many strain-specific PAM sites in the Zika strains provide the opportunity to discriminate viral lineages by utilizing a newly established freeze-dried platform termed 'Nucleic Acid Sequence-Based Amplification (NASBA)-CRISPR'. As part of the NASBA reaction, 1) the strain-specific PAM sequence, 2) an appropriate gRNA, and 3) the double-stranded DNA are produced and subjected to Cas9-mediated cleavage. The presence of a strain-specific PAM leads to the production of a truncated RNA product that lacks the sensor H trigger sequence. Contrary to the full-length RNA, the truncated RNA is unable to stimulate the sensor H toehold. Hence, this method can be employed for detecting the strain-specific lineage of the virus without any contamination from other flavivirus types. 12

The therapeutic application of CRISPR-Cas9 technology to human viruses

Hepatitis B viruses
The CRISPR-Cas9 editing tool presents an alternative approach to uproot HBV replication and abolish its latent viral reservoir, i.e. a form of covalently closed circular DNA (cccDNA), in infected cells. Compared to other potential sites, conserved sequences including the C, P, S, and X ORFs of the HBV genome are preferable as targets for gRNA design. Owing to the minor concordance between the conserved sequences of HBV and the human genome, off-target mutations on the host genome are restrained while the viral infection is alleviated. Designed gRNAs have been shown to reduce the HBV DNA level by 77 to 98% in cultured cells. 13 Likewise, gRNAs designed to target HBV cccDNA in HBV-infected HepG2/NTCP cells resulted in an eightfold reduction in the expression of HBcAg. 14 Targeting multiple regions of the HBV genome by co-transfection of several gRNAs has been reported to increase the effectiveness of the approach. 15 A number of studies have been designed to take advantage of introducing large deletions via the CRISPR-Cas9 system, in combination with the efficiency of lentivirus-mediated gene transfer, to effectively prevent HBV replication. 16 Because CRISPR-Cas9 technology can affect off-target sites, the design and characterization of fastidious CRISPR-Cas9 systems for more precise targeting of invasive elements should be a matter of focus. In this context, a more accurate form of CRISPR-Cas9 technology, the Cas9 nickase (Cas9n), has been proposed for targeting conserved sequences in the S and X ORFs of the HBV genome. This strategy was able to disrupt HBV replication in chronically and de novo infected hepatoma cell lines, as well as episomal cccDNA and chromosomally integrated HBV target sites. 17 As a proof of concept, the antiviral effect of the CRISPR-Cas9 system still needs to be evaluated in more pertinent in vivo models of HBV infection.

Human immunodeficiency virus
Human immunodeficiency virus (HIV-1) is a major global health problem for which no effectual vaccine is available. The latent reservoir of HIV-1 can persist for as long as 60 years in CD4+ T cells. Purging the HIV-1 reservoirs is the effective cure to obviate the expansion of the virus into healthy cells in patients. Two main strategies are currently followed to cure HIV-1 infection: 1) a functional cure, in which viral replication is controlled while the latent reservoir still remains, e.g. by the impairment of the CCR5 receptors, and 2) a sterilizing cure, in which even viral traces are eliminated from the infected cells.
18 Individuals carrying a 32-bp deletion in their CCR5 gene (CCR5∆32) are naturally resistant to HIV-1 infection. By transplanting CCR5∆32 hematopoietic stem cells, one can devise a sterilizing cure strategy. However, a tropism shift to CXCR4 can occur to cope with the impairment of CCR5. Exploiting the CRISPR-Cas9 system can overcome this hurdle, because the system has the potential to disrupt CXCR4 without affecting cell propagation. 19 Introduction of the homozygous CCR5∆32 mutation into induced pluripotent stem cells (iPSCs), using a combination of the CRISPR-Cas9 system and a PiggyBac transposon, caused significant resistance to HIV infection. The downstream lineages, the monocytes and macrophages derived from these engineered iPSCs, showed the same resistance. Therefore, these newly established cells could be considered a source for autologous therapy in HIV infection. 20 To overcome the limited activity of the CRISPR system in CD4+ T cells, it is possible to utilize a dual-gRNA approach to induce a biallelic deletion in the CCR5 gene and, consequently, improve its disruption in CD4+ T cells and CD34+ HSPCs. 21
Recently, the Cas9 ribonucleoprotein (RNP) complex has been used to target host factors that are involved in HIV infection. As a result, a tropism-dependent resistance to HIV infection has been demonstrated in CXCR4- or CCR5-disrupted T cells. Remarkably, simultaneous targeting of CXCR4 and CCR5 by the CRISPR-Cas9 system significantly decreased tropism-dependent HIV-1 infection in CXCR4- and CCR5-modified cells (TZM-bl cells, Jurkat T cells, and human CD4+ T cells) without any cytotoxic effects on cell viability. 22 Moreover, targeting the factors that are involved in later stages of initial HIV infection, such as LEDGF or TNPO3, produced a tropism-independent reduction of infection in T cells. 23 Furthermore, a CRISPR-based genetic screen discovered that three host dependency factors (TPST2, SLC35B2, and ALCAM) play vital roles in HIV infection of primary CD4+ T cells. 24
In order to target the proviral DNA efficiently, it is crucial to eradicate the remnant viral sequences from cells completely. The long terminal repeat (LTR) is an important element in augmenting the transcription of potentially toxic proteins in HIV infectivity. To eliminate the entire viral genome, recruiting Cas9 simultaneously to the 5' and 3' LTRs will excise the HIV genome from infected cells. 25 Recently, it was reported that the HIV-1 genome can be eradicated from the host genome in 2D10 CD4+ T cells, where the CRISPR-Cas9 system was delivered by a lentiviral vector to target the LTR U3 regions. 26 In a further attempt, recombinant adeno-associated virus 9 delivery of SaCas9, a shorter variant of Cas9 derived from Staphylococcus aureus, was adapted to excise segments of integrated HIV-1 by targeting within the 5′-LTR and the gag gene in transgenic mice and rats. This was the first report to demonstrate the promising results of the CRISPR-Cas9 system for in vivo studies. 27
What would happen if we designed gRNAs targeting non-conserved regions in the HIV-1 genome? The question was answered recently in CD4+ T cells expressing Cas9 and gRNA continuously. Targeting non-conserved regions resulted in a noticeable obstruction of the infection in transient assays, but after a variable time all targeted infections recovered to a high level of HIV-1 production. Moreover, after a longer time, targeting conserved regions in the HIV-1 genome showed escape as well.
Genome sequencing of escaped viruses disclosed that the gRNA binding site and the PAM region in the HIV-1 genome were disrupted by mutations introduced by the error-prone NHEJ repair pathway. 28 Several approaches can be used to vanquish HIV-1 escape, including multiplex targeting by designing strong gRNAs that direct Cas9 to conserved regions, 29 utilizing Cas9 variants that recognize different PAM formats, 30 using CRISPR-like enzymes such as Cpf1, which cuts distal to the binding site, 31 and abrogating NHEJ with chemical drugs such as SCR7. 32 Table 1 shows CRISPR-Cas9 targeting sites in other virus infections.

Delivery of CRISPR-Cas9 components
Despite considerable effort invested in gene therapy during the last decades, limited success has been achieved due to the shortcomings of existing viral and non-viral gene delivery approaches. Generally, viral delivery systems can be categorized into four main classes: 1) adenoviruses, 2) adeno-associated viruses (AAV), 3) retroviruses, and 4) lentiviruses. Lentiviral vectors are derived from HIV-1 and have the potential to cause undesirable modifications through long-term expression in cell lines. Thus, integrase-defective lentiviruses, which are replication-incompetent or at least only single-cycle replicable, are preferred. This preference is more prominent in the case of genome editing, where long-term expression of the genome editing components carries an increased risk of unwanted off-target changes. 33 The CRISPR-Cas9 system packaged in lentiviral vectors has shown promising results in eliminating latent HIV-1 infection. Moreover, Cas9 prepackaged in a transient form of virus-like particles targeting CCR5 shows a reduced off-target effect in target cells. 34 Recombinant AAV vectors have low pathogenicity and low immunogenicity compared to other viral vectors, but their main obstacle is their limited packaging capacity. The size limitation of AAV vectors can be overcome by exploiting SaCas9 (3.3 kb) or by using split-Cas9 approaches. 35
In vivo genome editing generally requires an effective method to deliver the components of the editing tools appropriately. For the first time, it was demonstrated that delivering multiplexed SaCas9/sgRNA, targeting two LTR sites and two structural proteins, in an all-in-one AAV-DJ/8 vector can be applied to precisely excise the HIV-1 provirus from pre-clinical mouse models. 36 This strategy is a promising approach to eradicate even traces of provirus in different organs by simultaneously introducing indels and large deletions at HIV-1 reservoirs.
Despite the high productivity of viral vectors, certain limitations such as the immunogenicity and random integration of conventional viral vectors have led studies to shift towards non-viral gene delivery. 37 So far, different classes of non-viral vectors have been introduced. The non-viral expression plasmid is the most convenient delivery approach and can express the CRISPR-Cas system in a safe mode; however, random integration of the plasmids and difficulty in controlling the timeframe of their expression are the main obstacles. To address these drawbacks, CRISPR mRNA delivery systems have been employed, which show great refinement in decreasing the risk of off-target activity by controlling the amount of Cas9 protein and the gRNA level. 38 Besides, the rapid deterioration of plasmid and mRNA by serum nucleases is another major hurdle that must be resolved.
The use of RNPs is another approach to delivering Cas9-gRNA with higher control over the editing timeframe. Delivering RNPs by electroporation has shown promising results in targeting host factors involved in HIV-1-infected CD4+ T cells. 23 However, complications such as the negative charge of the RNAs, the fragile structure, and the large molecular size of the proteins limit the diffusion rate of RNPs across the cell membrane. To overcome the reduced delivery efficiency of non-viral delivery platforms, positively charged nano-carriers can be employed as an ideal delivery system. The yarn-like DNA nano-clew is a form of cationic nano-carrier that can be loaded with the CRISPR-Cas9 machinery to shuttle Cas9-gRNA into the target cell; this method provides a balance between binding and release of the CRISPR-Cas9 system. 39 Microfluidic membrane deformation (MMD), through transient disruption of the cell membrane, has also been exploited as a Cas9-gRNA delivery platform. Similar to microinjection, MMD can deliver a payload into different cell types, even hard-to-transfect cells, but in an easier manner and with a higher yield. Moreover, MMD has shown higher cell viability than electroporation. Collectively, MMD seems to warrant a precise and efficient genome editing approach. 40

Conclusion
The versatility and feasibility of the CRISPR-Cas9 system remove some of the impediments that have challenged gene therapy approaches and introduce new opportunities in antiviral therapies. Despite the massive growth spurt of CRISPR-Cas9 technology over the last years, major efforts are needed to address the remaining impediments and develop safe CRISPR-Cas9-based delivery technologies. Further studies are required to investigate the immune responses to the exogenously expressed CRISPR-Cas9 system and to devise strategies to mask this system and thus reduce its immunogenicity. High-fidelity Cas9 variants have demonstrated their efficacy in the field of genome editing by reducing off-target effects. 1 Application of these variants to eradicate viral infections from host genomes may open new perspectives. Viral and non-viral delivery systems have their own drawbacks when applied in gene therapy approaches. Recent studies have shown that by combining lipid nanoparticle-mediated delivery of Cas9 mRNA with AAVs encoding the gRNA and donor template, efficient in vivo restoration of >6% can be achieved in a mouse model of human hereditary tyrosinemia. 37 The combination of these two conventional delivery methods could pave the way for curing viral infections in clinical settings.
2019-01-22T22:25:22.231Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "4581c7f3ed4e354cc845f92fb9e809c73822c697", "oa_license": "CCBY", "oa_url": "https://apb.tbzmed.ac.ir/PDF/apb-8-591.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4581c7f3ed4e354cc845f92fb9e809c73822c697", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
1807017
pes2o/s2orc
v3-fos-license
Approximate Inference for Nonstationary Heteroscedastic Gaussian Process Regression

This paper presents a novel approach for approximate integration over the uncertainty of the noise and signal variances in Gaussian process (GP) regression. Our efficient and straightforward approach can also be applied to integration over input-dependent noise variance (heteroscedasticity) and input-dependent signal variance (nonstationarity) by setting independent GP priors on the noise and signal variances. We use expectation propagation (EP) for inference and compare the results to Markov chain Monte Carlo on two simulated data sets and three empirical examples. The results show that EP produces comparable results with less computational burden.

Introduction
Gaussian processes (GP, Rasmussen and Williams, 2006) are commonly used as flexible non-parametric Bayesian priors for functions. They provide an analytical framework that can be applied to various probabilistic learning tasks, for example in geostatistics, gene expression time series (Hensman et al., 2013), and density estimation (Riihimäki and Vehtari, 2014). A typical assumption is that the parameters of the GP model stay constant over the input space. However, this is not reasonable when it is clear from the data that the phenomenon changes over the input space (see, e.g., Silverman, 1985). As an improvement for these cases, Goldberg et al. (1997) proposed heteroscedastic noise inference for Gaussian processes, using a second GP to infer the log noise variance and doing the inference by Markov chain Monte Carlo (MCMC). More recent work on heteroscedastic noise models includes solving the problem by a transformation of the mean and variance parameters to the natural parameters of the Gaussian distribution (Le et al., 2005), considering a two-component noise model (Naish-Guzman and Holden, 2007), and an expectation-maximization-like algorithm (Kersting et al., 2007). Adams and Stegle (2008) used expectation propagation (EP, Minka, 2001a,b) to model input-dependent signal variance (signal magnitude) in GPs by factoring the output signal into a product of a strictly positive modulating signal and a non-restricted signal, with independent GP priors on both signals. Non-stationarity can also be incorporated into the length-scales, as proposed by Gibbs (1997) and further developed by Paciorek and Schervish (2004), both of whom used MCMC for the approximate inference. In general, the length-scale and the signal variance of a GP are underidentifiable, and their ratio is more important to the predictive performance (Diggle et al., 1998; Zhang, 2004; Diggle and Ribeiro, 2007). Therefore, we assume that a GP with input-dependent signal variance and a GP with input-dependent length-scale would produce similar predictions. Thus, in this paper we concentrate on the input-dependent signal variance.
In this work, we present a straightforward and fast approach to integration over the uncertainty of the noise and signal variance in GP regression using EP. This approach can also be applied to input-dependent noise and signal variance by giving them independent GP priors. We extend the heteroscedastic noise model by Goldberg et al. (1997) to EP inference, and extend the nonstationary model by Adams and Stegle (2008) to analytical predictions.
We consider the joint posterior of the modulating signal and the non-restricted signal and show that modeling the posterior correlations leads to significant improvements in the convergence of the EP algorithm compared to the factorized approximation. We also obtain stable analytical gradients of the log marginal likelihood. We still need to infer the other covariance function parameters, such as the characteristic length-scale, by maximizing the marginal likelihood or posterior density, or by using quadrature or MCMC integration. The performance of the EP implementation is compared to full MCMC (Neal, 1998), which produces the exact solution in the limit of an infinite sample size. We also compare the EP approximation of the latent posterior to an MCMC approximation where we sample only the posterior of the latent values but use the EP-optimized hyperparameters.
This paper is structured as follows. In Section 2 we briefly go through Gaussian process regression. Section 3 is dedicated to the models and methods, including the EP algorithm for posterior approximation, marginal likelihood evaluation and predictions. The experiments in Section 4 present the performance of our EP approach on two simulated data sets and three empirical problems. Finally, the methods and results are discussed in Section 5.

Gaussian Process Regression
In standard GP regression the output y is modeled as a function f plus some additive noise, such that

y = f(x) + ε, ε ∼ N(0, σ²). (1)

The function f is given a Gaussian process prior, defined by its mean and covariance functions,

f(x) ∼ GP(m(x), k(x, x')). (2)

In this work we use zero-mean Gaussian processes for notational convenience. As for the covariance function, we use the common squared exponential (exponentiated quadratic):

k(x, x') = σf² exp( −½ Σ_{i=1}^{d} (x_i − x'_i)² / ℓ_i² ), (3)

where x, x' ∈ R^d, σf² is the magnitude or signal variance of the covariance function and ℓ_i is the characteristic length-scale corresponding to the ith input dimension. Given a data matrix X = [x₁, x₂, ..., x_n], we can write our GP prior for the latent function as f(X) ∼ N(0, K_f), where the elements [K_f]_ij = k(x_i, x_j). In this work, we focus on models where either the noise variance in (1), or both the noise and signal variances in (3), depend on the input. These cases are handled analogously to (2), with the noise and signal variances being functions of the input, and the observation being a combination of the three signals:

y = σf(x) f̃(x) + ε, ε ∼ N(0, σ²(x)).

From now on, θ = log(σ²(x)) and φ = log(σf²(x)). We set GP priors on the logarithms of the variances to handle the positivity restriction. We use the squared exponential covariance function also for k_n(x, x') and k_m(x, x'), although other covariance functions could be used as well.

Approximate Inference
In this section we go through the EP approximation, the different models we use, and the algorithmic details.

Expectation Propagation
Expectation propagation is a general algorithm for forming an approximating distribution (from the exponential family) by matching the marginal moments of the approximating distribution to the marginal moments of the true distribution (Minka, 2001a,b). The notation in this section mainly follows that of Rasmussen and Williams (2006). With Gaussian processes we wish to form the posterior distribution of the latent variables f given the observations and inputs, p(f | X, y). However, the posterior distribution cannot be computed analytically in most cases, because the likelihood function and the prior distribution cannot be combined analytically.
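For reference, the stationary baseline of Section 2 (the squared exponential covariance (3) together with the exact Gaussian-noise predictive) can be written down directly; a minimal NumPy sketch under those standard equations follows. The variable names are ours and an isotropic length-scale is assumed for brevity. It is exactly this analytic case that breaks down once the noise or signal variance gets its own latent (log-)GP, which is where the EP machinery below comes in.

```python
import numpy as np

def sq_exp(X1, X2, sigma_f=1.0, ell=1.0):
    """Squared exponential covariance k(x, x') of (3), isotropic ell."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 / ell**2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2)

def gp_predict(X, y, Xs, sigma_f, ell, sigma_n):
    """Exact GP posterior mean/variance under stationary Gaussian noise."""
    K = sq_exp(X, X, sigma_f, ell) + sigma_n**2 * np.eye(len(X))
    Ks = sq_exp(X, Xs, sigma_f, ell)
    Kss = sq_exp(Xs, Xs, sigma_f, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v**2).sum(0)
    return mean, var

# Tiny usage example on synthetic data.
X = np.linspace(-3, 3, 20)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(20)
mu, var = gp_predict(X, y, np.linspace(-3, 3, 50)[:, None], 1.0, 1.0, 0.1)
```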
EP forms a Gaussian approximation to the posterior distribution by approximating the independent likelihood terms with Gaussian site approximations t̃_i. This enables the analytical computation of the posterior distribution, because both the likelihood approximation and the prior are Gaussian:

p(y | f) ≈ ∏_i t̃_i(f_i | Z̃_i, μ̃_i, Σ̃_i) = ∏_i Z̃_i N(f_i | μ̃_i, Σ̃_i),

where Z̃_i, μ̃_i and Σ̃_i are the parameters of the site approximations, or site parameters. We use EP to approximate the posterior of f such that

p(f | X, y) = Z⁻¹ p(f | X) ∏_i p(y_i | f_i) ≈ Z_EP⁻¹ p(f | X) ∏_i t̃_i(f_i) = q(f | X, y),

where Z is the normalization constant or marginal likelihood, Z_EP is the EP approximation to the marginal likelihood, p(f | X) is the prior of the latent variables f, and q(f | X, y) is the Gaussian approximation to the exact posterior distribution p(f | X, y).

Noise Variance
To integrate over the uncertainty of the noise variance in GP regression, we approximate the Gaussian likelihood as a product of two independent Gaussian site approximations t̃_i, one for the mean f_i and one for the logarithm of the noise variance θ:

p(y_i | f_i, θ_i) ≈ t̃_i(f_i) t̃_i(θ_i).

The posterior approximation of the latent variables f and θ can now be written in a factorized form, if we set independent prior distributions for f and θ:

q(f, θ | X, y) = q(f | X, y) q(θ | X, y).

Signal Variance
If we wish to use the same approach as for the noise variance to also integrate over the uncertainty of the signal variance, we need to move the signal variance from the GP prior to the likelihood function; otherwise we would need to integrate over an n-by-n matrix determinant, which is computationally expensive. To move the signal variance to the likelihood function, we reparameterize f as f = σf f̃, where σf is the square root of the signal variance and f̃ ∼ N(0, K), with K the covariance matrix computed with unit signal variance in (3). As noted in Section 2, we model the logarithms of the signal and noise variances to take into account the restriction that they be positive. Because both f̃ and φ model the mean of the distribution, we expect them to have a strong correlation. Thus, instead of making a factorized approximation as for the noise variance, we approximate the likelihood with two site approximations: one for the noise variance, and a joint two-dimensional Gaussian for the pair v_i = (f̃_i, φ_i):

p(y_i | f̃_i, φ_i, θ_i) ≈ t̃_i(f̃_i, φ_i) t̃_i(θ_i).

Assuming independent priors for the latent variables f̃, φ and θ, the posterior approximation is analogous to the noise variance case, such that

q(f̃, φ, θ | X, y) = q(f̃, φ | X, y) q(θ | X, y).

It should be noted that we also tested the fully factorized approximation q(f̃ | X, y) q(φ | X, y) q(θ | X, y), but it gave worse predictions, and the EP algorithm needed clearly more iterations to converge.

Input-Dependent Noise and Signal Variance
We can easily extend the presented likelihood approximations to also include input-dependency of the signal and noise variances (or either one), by setting independent GP priors for both the logarithm of the noise variance and the logarithm of the signal variance:

θ ∼ N(0, K_n), φ ∼ N(0, K_m).

If we integrate over the input-dependent signal variance, the latent vector is v = (f̃, φ), with an independent, block-diagonal prior covariance K_v over f̃ and φ; otherwise v = f and K_v = K_f. The covariance matrices are computed from the squared exponential covariance function (3). By setting the GP priors, we assume that the signal and noise variances are also some unknown functions that depend on the input x. The site approximations are of the same form regardless of the input-dependency of the parameters. If we integrate over the (input-dependent) signal variance, the sites t̃_i(v_i) are bivariate. Here we have used Σ̃ both for the scalar variance of the univariate Gaussian and for the covariance matrix of the bivariate Gaussian, but it should be clear from the context which one it represents. The posterior distributions can be computed with

q(v | X, y) = N(v | μ, Σ), Σ = (K_v⁻¹ + Σ̃⁻¹)⁻¹, μ = Σ Σ̃⁻¹ μ̃,

where the joint site covariance Σ̃ has a block structure in which each block is diagonal. The cross-diagonal terms, Σ̃_f̃φ = Σ̃_φf̃, collect the marginal covariances Σ̃_f̃φ,i, and the main-diagonal terms, Σ̃_f̃ and Σ̃_φ, collect the marginal variances Σ̃_f̃,i and Σ̃_φ,i. If we do not integrate over the signal variance, we have Σ̃_v = Σ̃_f.

EP Algorithm
The full EP algorithm is presented in Algorithm 1. The main points of the algorithm are the same as in the standard EP approach for Gaussian processes (Rasmussen and Williams, 2006, pp. 52-60). However, there are some implementation details that should be noted:
1. The overall stability of the EP updates can be improved by working in the natural parameter space of the site approximations. This means that we use the natural parameterization, ν̃ = Σ̃⁻¹ μ̃ and τ̃ = Σ̃⁻¹, for the site approximations. This way we can avoid inverting the site covariance matrices at every iteration.
2. Even though the algorithm should be stable and robust, there are some cases where the site updates exhibit oscillations, for example due to extreme hyperparameter values in the covariance functions. Thus, the updates should be damped after computing the new site approximations in step 4, with some suitable damping factor δ, for example δ = 0.8.
3. In step 3 of the algorithm we minimize the KL divergence with respect to Gaussian distributions. This means that we match the first and second moments of the one-dimensional distributions and, in addition to these, the cross-moment if we have a bivariate Gaussian t̃_i(v_i). The integrals over f_i or f̃_i can be computed analytically in every case in steps 2 and 3. If we do not integrate over the signal variance, this can be done trivially, as both the cavity and the likelihood are Gaussian with respect to f_i. If we integrate over the signal variance, we can utilize the standard factorization of the multivariate Gaussian. The integrals over θ and φ must be computed numerically, but this can be done effectively, for example with Simpson's method.
4. We use parallel EP updates for the site parameters. This means that we compute the site updates for every site approximation before we update the posterior distribution and compute the marginal likelihood. This usually results in a few more EP iterations than sequential EP, but the overall speed of the algorithm is faster.

Algorithm 1 (parallel EP). For each site i: 1. compute the cavity distribution q_{-i}; 2. compute the normalization Ẑ_i; 3. find the best marginal posterior approximations for q_i(v_i) and q_i(θ_i) by moment matching. Then update all site parameters (with damping), recompute the posterior approximation and the marginal likelihood, and iterate until convergence.

Marginal Likelihood
The marginal likelihood can be used for model selection in the GP framework, as it has good calibration and its maximum usually corresponds to good predictions (Rasmussen and Williams, 2006; Nickisch and Rasmussen, 2008; Riihimäki et al., 2013). The marginal likelihood in Gaussian processes is defined as

Z = ∫ p(y | f, X) p(f | X) df.

For our noise and signal variance GPs, an EP approximation to the marginal likelihood is given by (20), where v = (f̃, φ) or v = f. Following Cseke and Heskes (2011), we define a per-site correction term, and the EP approximation for the marginal likelihood can then be computed from these terms, where μ and Σ are the parameters of the posterior distribution approximation q(· | X, y), μ_i and Σ_i are the ith marginal terms of μ and Σ, μ_{-i} and Σ_{-i} are the ith marginal mean and (co)variance parameters of the cavity distributions q_{-i}(·), and the K_j are the prior covariances from the GP. Note that for θ the marginal parameters are one-dimensional, but for v they are two-dimensional if we integrate over the signal variance, as for the site approximations in (15).
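To make the bookkeeping of Algorithm 1 concrete, here is a schematic NumPy sketch of one damped parallel-EP sweep for a single chain of univariate Gaussian sites (so it covers the factorized θ sites, not the bivariate (f̃, φ) sites). The moment-matching step is left as a callback, since it is likelihood-specific and computed numerically in the paper; the damping factor δ = 0.8 follows the suggestion in the text. This is a sketch of the generic EP recipe under those assumptions, not the authors' implementation.

```python
import numpy as np

def parallel_ep_step(K, nu_site, tau_site, match_moments, delta=0.8):
    """One damped parallel-EP sweep for univariate Gaussian sites.

    K           : (n, n) prior covariance
    nu_site     : natural locations,  nu~  = Sigma~^{-1} mu~
    tau_site    : natural precisions, tau~ = Sigma~^{-1}
    match_moments(i, mu_cav, var_cav) -> (mu_hat, var_hat)
        moments of the tilted distribution for site i (steps 2-3 of
        Algorithm 1); problem specific, hence a callback here.
    """
    n = len(nu_site)
    # Posterior from the current sites: Sigma = (K^{-1} + diag(tau~))^{-1}.
    Sigma = np.linalg.inv(np.linalg.inv(K) + np.diag(tau_site))
    mu = Sigma @ nu_site
    nu_new, tau_new = nu_site.copy(), tau_site.copy()
    for i in range(n):
        # Cavity: remove site i in natural parameters (step 1).
        tau_cav = 1.0 / Sigma[i, i] - tau_site[i]
        nu_cav = mu[i] / Sigma[i, i] - nu_site[i]
        mu_hat, var_hat = match_moments(i, nu_cav / tau_cav, 1.0 / tau_cav)
        # Proposed site from moment matching, then damped update (step 4).
        tau_prop = 1.0 / var_hat - tau_cav
        nu_prop = mu_hat / var_hat - nu_cav
        tau_new[i] = delta * tau_prop + (1 - delta) * tau_site[i]
        nu_new[i] = delta * nu_prop + (1 - delta) * nu_site[i]
    # All sites were updated from the same posterior: a parallel sweep.
    return nu_new, tau_new
```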
Predictions
For predicting a future observation y* at input x*, we need to compute the predictive distribution, where

q(v* | x*, X, y) = ∫ p(v* | v) q(v | X, y) dv (24)

can be easily computed using the properties of Gaussian processes. Note that if we assume a stationary signal or noise variance, the respective posterior distributions reduce to one-dimensional Gaussian distributions. This means that q(v | X, y) becomes n + 1 dimensional, and the posterior predictive distribution equals the posterior distribution. Because we approximate the posterior predictive distribution of the latent variables and the predictive distribution of y* by Gaussian distributions, we can always compute the predictions analytically, regardless of whether we have input-dependent signal or noise variance. For a GP with EP-marginalized noise variance the predictive distributions can be written in closed form; for a GP with EP-marginalized noise and signal variance the results are quite lengthy and are omitted here to save space (see the supplementary material).

Factorized Approximation and Convergence
In this section we discuss certain key properties of the posterior approximations introduced in Sections 3.2-3.4. More precisely, we illustrate the importance of the utilized factorization assumptions in terms of both the accuracy and the convergence of the resulting EP algorithm.

Figure 1: Red contours correspond to the fully factorized approximation and blue contours to the partially coupled approximation q(f̃, φ | X, y) q(θ | X, y).

Figure 1 visualizes the marginal posterior distributions of the latent values related to both the unscaled function values f̃_i (x-axis) and the magnitude process φ_i (y-axis). Each of the four subplots shows the latent values associated with four different observations (likelihood terms) resulting from a nontrivial simulated data set (see Section 4). MCMC samples from the true posterior distribution are plotted with black dots together with two different EP approximations: the partially coupled approximation q(f̃, φ)q(θ) introduced in Section 3.4 (blue contours) and a fully factorized approximation of the form q(f̃)q(φ)q(θ) (red contours). The subplots on the left show strong posterior dependencies between the latent values, resulting from the combined effect of the within-observation couplings f_i = f̃_i exp(φ_i/2) and the between-observation correlations controlled by the GP priors. On the other hand, the subplots on the right show much weaker couplings, indicating that the within-observation coupling does not necessarily introduce strong posterior dependencies. Comparison of the joint posterior approximations of θ_i with either φ_i, f̃_i, or f_i = f̃_i exp(φ_i/2) did not show strong dependencies, which is why we used a factorized approximation for θ to facilitate the computations.
According to our experiments, neglecting the posterior couplings (i.e., using the fully factorized approximation) does not significantly affect the predictive performance. However, representing these couplings has a significant effect on the convergence properties of the EP algorithm. Subfigure (b) of Figure 1 shows the EP marginal likelihood approximations as a function of the EP iteration in both settings. The fully factorized approximation (red line) converges very slowly compared to the partially coupled approximation (blue line); the former often requires hundreds of iterations, whereas the partially coupled approach usually converges in fewer than 50 iterations.
Factorized Approximation and Convergence

In this section we discuss certain key properties of the posterior approximations introduced in Sections 3.2-3.4. More precisely, we illustrate the importance of the utilized factorization assumptions in terms of both the accuracy and the convergence of the resulting EP algorithm.

Figure 1 (caption): Red contours correspond to the factorized approximation and blue contours to the full joint approximation q(f̃, φ | X, y).

Figure 1 visualizes the marginal posterior distributions of the latent values related to both the unscaled function values f̃_i (x-axis) and the magnitude process φ_i (y-axis). Each of the four subplots shows the latent values associated with one of four different observations (likelihood terms) from a nontrivial simulated data set (see Section 4). MCMC samples from the true posterior distribution are plotted with black dots together with two different EP approximations: the partially coupled approximation q(f̃, φ)q(θ) introduced in Section 3.4 (blue contours) and a fully factorized approximation of the form q(f̃)q(φ)q(θ) (red contours). Subplots on the left show strong posterior dependencies between the latent values, resulting from the combined effect of the within-observation couplings f_i = f̃_i exp(φ_i/2) and the between-observation correlations controlled by the GP priors. On the other hand, subplots on the right show much weaker couplings, indicating that the within-observation coupling does not necessarily introduce strong posterior dependencies. Comparison of the joint posterior approximations of θ_i with either φ_i, f̃_i, or f_i = f̃_i exp(φ_i/2) did not show strong dependencies, which is why we used a factorized approximation for θ to facilitate computations.

According to our experiments, neglecting the posterior couplings (the fully factorized approximation) does not significantly affect the predictive performance. However, representing these couplings has a significant effect on the convergence properties of the EP algorithm. Subfigure (b) of Figure 1 shows the EP marginal likelihood approximations as a function of EP iteration in both settings. The fully factorized approximation (red line) converges very slowly compared to the partially coupled approximation (blue line); the former often requires hundreds of iterations, whereas the partially coupled approach usually converges in fewer than 50 iterations. In our experiments the convergence properties of the fully factorized algorithm could not be improved by adjusting the damping. This behavior can be explained by the slow propagation of information between the latent values of different likelihood terms under the fully factorized approximation: because each likelihood term is updated separately from the others, information about the posterior dependencies in the other site terms is not available during the update. These findings are fully congruent with the convergence differences observed in multi-class GP classification when between-class dependencies are omitted (Riihimäki et al., 2013).

Experiments

In this section we describe the data sets, the compared methods, and the assessment criteria used in the experiments.

Simulated data 1. The first simulated data set was generated as follows: the training data were generated by first drawing 200 random x values from U(−8, 8); we then computed the mean signal by combining the modulating signal σ_f(x) with f̃(x), and added random noise ε ∼ N(0, σ(x)) with input-dependent standard deviation σ(x). For the test set we used a uniform grid of 1000 points on the interval (−8, 8) and computed the function values analogously to the training set, without adding noise. The experiment was repeated 100 times for different realizations of the training data set to assess the variation in the final predictions on the test set.

Simulated data 2. The second simulated data set was generated with f̃(x) = sin(x), σ_f(x) = exp(2 sin(0.2x)), and σ(x) = exp(0.75 sin(0.5x + 1)) + 0.1. The training and test data were generated analogously to the first experiment, using 150 training points and the generating signals above. The second experiment was also repeated 100 times.

Motorcycle. The motorcycle data (Silverman, 1985) consist of 133 accelerometer readings from a simulated motorcycle crash.

Concrete. The second empirical experiment uses concrete quality data (Vehtari and Lampinen, 2002; Jylänki et al., 2011), where the output is the volume percentage of air in concrete (air-%) and there are 27 input variables. The input variables describe the properties of the stone materials, the additives, and the amounts of cement and water.

SP500. The last empirical experiment concerns predicting the SP500 index. The data set consists of monthly averages of the index between the years 2001-2014, a total of 169 observations. We use this data set to demonstrate how a GP with input-dependent noise variance works as a stochastic volatility model.

We compare the following methods: GP (standard GP regression); EP(n) and MCMC(n) (integration over input-dependent noise variance with EP and MCMC); EP(m+n) and MCMC(m+n) (integration over input-dependent signal and noise variance with EP and MCMC); and EP-MC(n) and EP-MC(m+n) (EP-optimized hyperparameters for the covariance functions combined with MCMC sampling of the posterior of the latent variables). In standard GP regression we use maximum a posteriori (MAP) values for all the model parameters (signal variance, noise variance, length-scales). In the EP methods, when integrating over the input-dependent noise variance, we use MAP values for the signal variance and length-scales; when integrating over the input-dependent signal and noise variance, we use MAP values for the length-scales. Latent MCMC (EP-MC) means that we use EP-optimized MAP values for the covariance function parameters and sample only from the latent posterior.
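For concreteness, a minimal sketch of how the second simulated data set above can be generated (the random seed is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_tilde(x):          # unscaled latent function
    return np.sin(x)

def sigma_f(x):          # input-dependent signal magnitude
    return np.exp(2 * np.sin(0.2 * x))

def sigma(x):            # input-dependent noise standard deviation
    return np.exp(0.75 * np.sin(0.5 * x + 1)) + 0.1

# Training data: random inputs, scaled signal plus heteroscedastic noise.
x_train = rng.uniform(-8, 8, size=150)
y_train = sigma_f(x_train) * f_tilde(x_train) + rng.normal(0, sigma(x_train))

# Test data: noise-free function values on a uniform grid.
x_test = np.linspace(-8, 8, 1000)
y_test = sigma_f(x_test) * f_tilde(x_test)
```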
We also ran the experiments by integrating over stationary (not input-dependent) signal and noise variances. However, the results with these methods coincide with those of standard GP regression and can be regarded as trivial; they are therefore not reported in this paper in order to save space.

The performance of the different methods was assessed by computing the mean log-predictive density (MLPD) over the test points,

MLPD = (1/n*) Σ_i ∫ p(y*_i | x*_i) log p(y*_i | x*_i, X, y) dy*_i,

where p(y*_i | x*_i, X, y) is the posterior predictive density for y*_i and p(y*_i | x*_i) is the true distribution of y*_i.

Figure 2: One-dimensional data sets and the EP predictions with uncertainty intervals. Thin black lines correspond to the true signal in the simulated data sets, and the thick gray lines are the GP predictions with EP. The gray area is the 95% credible interval of the prediction. Red lines correspond to the standard GP prediction with MAP values for the signal and noise variance (credible intervals only shown for SP500).

For the three empirical data sets, we computed the approximate MLPD of the n training data points with 10-fold cross-validation,

MLPD ≈ (1/n) Σ_{i=1}^{n} log p(y_i | x_i, X_{−i}, y_{−i}),

where p(y_i | x_i, X_{−i}, y_{−i}) is the cross-validated posterior predictive density for y_i. Higher MLPD values correspond to better predictions.
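A minimal sketch of the cross-validated MLPD computation used above, assuming a hypothetical helper `fit_and_predict_logdens(X_tr, y_tr, X_te, y_te)` that fits the model and returns the log predictive densities of the held-out points:

```python
import numpy as np

def cv_mlpd(X, y, fit_and_predict_logdens, k=10, seed=0):
    """Approximate MLPD of the training data with k-fold cross-validation."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    logdens = np.empty(n)
    for test_idx in folds:
        train_idx = np.setdiff1d(idx, test_idx)
        # Log predictive density of each held-out point under the model
        # fitted to the remaining folds.
        logdens[test_idx] = fit_and_predict_logdens(
            X[train_idx], y[train_idx], X[test_idx], y[test_idx]
        )
    return logdens.mean()  # higher is better
```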
MLPD values from the experiments are shown in Table 1. We can conclude from the results that integrating over the input-dependent noise variance greatly increases the predictive performance in our experiments compared to standard GP regression. Furthermore, integrating over the input-dependent signal variance tends to enhance the predictions even more. In some cases integration over the signal variance is not needed for prediction, but our results show that even in these cases it does not harm the predictive quality. The results also show that our EP implementation is comparable to the MCMC methods.

The predictive distribution for the SP500 data in Figure 2d illustrates the practical benefits of the input-dependent variances: the period of steady growth between samples 40-80 has clearly lower signal variance compared to the more volatile periods related to the financial crisis of 2008 (samples 90-110) and the subsequent shaky growth characterized by debt crises and monetary interventions (samples 110-140).

With our implementation, MCMC was roughly two orders of magnitude slower than EP. This depends strongly on the implementation and on the number of MCMC draws required for convergence. For example, with the SP500 and Concrete data with ARD length-scales for f̃, the state-of-the-art MCMC methods based on elliptical slice sampling had convergence issues even after thousands of samples, as the results indicate.

Discussion

In this work we have introduced a straightforward, easily implementable, and computationally efficient way to integrate over the uncertainty of the noise and signal variance in Gaussian process regression. Our implementation also applies readily to input-dependent noise and signal variance, and it further extends the well-known nonstationary GP models. We have tested our EP implementation on several different data sets and shown that the EP results are on par with state-of-the-art MCMC methods. Furthermore, our results show that EP can be used in complex problems where even the state-of-the-art MCMC methods have convergence problems. The scope of this paper was not to compare GPs to other models, but to investigate how integration over the signal and noise variance works in the GP framework; we have therefore omitted comparisons to other models.

The results indicate that there exist phenomena for which it is advantageous to have an input-dependent signal variance in addition to an input-dependent noise variance. While adding the input-dependent noise variance greatly enhances the predictive quality, we are still left with oscillation of the estimated mean. Using the input-dependent signal variance in addition to the noise variance makes the estimates smoother and further enhances the predictions.
2014-04-22T10:04:10.000Z
2014-04-22T00:00:00.000
{ "year": 2014, "sha1": "66d3f6d63c69e50833800ede2d4cf8f996f6600f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1404.5443", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "66d3f6d63c69e50833800ede2d4cf8f996f6600f", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
267366103
pes2o/s2orc
v3-fos-license
Risk factors for diagnosis and treatment delay among patients with multidrug-resistant tuberculosis in Hunan Province, China

Background: Multidrug-resistant tuberculosis (MDR-TB) is a global health threat associated with high morbidity and mortality rates. Diagnosis and treatment delays are associated with poor treatment outcomes in patients with MDR-TB. However, the risk factors associated with these delays have not been robustly investigated, particularly in high TB burden countries such as China. Therefore, this study aimed to measure the length of diagnosis and treatment delays and identify their risk factors among patients with MDR-TB in Hunan province.

Methods: A retrospective cohort study was conducted using MDR-TB data from Hunan province between 2013 and 2018. The main outcomes of the study were diagnosis and treatment delay, defined as more than 14 days from the date of symptom onset to diagnosis confirmation (diagnosis delay) and from diagnosis to treatment commencement (treatment delay). A multivariable logistic regression model was fitted, and an adjusted odds ratio (AOR) with a 95% confidence interval (CI) was used to identify factors associated with diagnosis and treatment delay.

Results: In total, 1,248 MDR-TB patients were included in this study. The median diagnosis delay was 27 days and the median treatment delay was one day. The proportions of MDR-TB patients who experienced diagnosis and treatment delay were 62.82% (95% CI: 60.09–65.46) and 30.77% (95% CI: 28.27–33.39), respectively. The odds of experiencing MDR-TB diagnosis delay among patients identified through referral and tracing were reduced by 41% (AOR = 0.59, 95% CI: 0.45–0.76) relative to patients identified through consultations due to symptoms. The odds of experiencing diagnosis delay among patients aged ≥ 65 years were 65% lower (AOR = 0.35, 95% CI: 0.14–0.91) than among children under 15. The odds of developing treatment delay among foreign nationals and people from other provinces were double those of the local population (AOR = 2.00, 95% CI: 1.31–3.06). Similarly, the odds of experiencing treatment delay among severely ill patients were nearly 2.5 times higher (AOR = 2.49, 95% CI: 1.41–4.42) than among patients who were not severely ill. On the other hand, previously treated TB cases had nearly 40% lower odds (AOR = 0.59, 95% CI: 0.42–0.85) of developing treatment delay compared with new MDR-TB cases. Similarly, other ethnic minority groups had nearly 40% lower odds (AOR = 0.57, 95% CI: 0.34–0.96) of experiencing treatment delay than the Han majority.

Conclusions: Many MDR-TB patients experience long diagnosis and treatment delays in Hunan province. Strengthening active case detection can significantly reduce diagnosis delays among MDR-TB patients. Moreover, giving attention to patients who are new to MDR-TB treatment, are severely ill, or are from areas outside Hunan province will potentially reduce the burden of treatment delay among MDR-TB patients.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12879-024-09036-2.
Background

Antimicrobial drug resistance is a human-made threat that results from inadequate treatment, sub-optimal adherence to treatment regimens, or the persistence of resistant strains because of diagnosis or treatment delay [1]. Multidrug-resistant tuberculosis (MDR-TB), defined as tuberculosis (TB) resistant to at least isoniazid and rifampicin [2], is a major threat to global health [3]. MDR-TB is more difficult to diagnose and treat than drug-susceptible tuberculosis (DS-TB) and is associated with higher treatment costs, longer treatment durations, and poorer treatment outcomes [2]. Globally, nearly half a million people developed MDR-TB in 2021, with only 32% of these patients receiving treatment [2].

China has the second-highest MDR-TB burden, accounting for 14% of the global burden [4]. China aims to achieve the END-TB strategy targets described in the WHO 2021 report: to reduce TB incidence by 90%, reduce TB mortality by 95%, and eliminate catastrophic costs in TB-affected households by 2035 [5]. China has implemented phenotypic and molecular diagnostic tests, including GeneXpert MTB/RIF, to minimize diagnosis delay, which may help achieve these targets. Moreover, China's health system adopted the Directly Observed Treatment, Short-course approach to improve MDR-TB treatment success rates [6], but its implementation is poor [7].

Early diagnosis and initiation of MDR-TB treatment are crucial for achieving the global END-TB strategy targets by reducing MDR-TB transmission and improving the likelihood of favourable treatment outcomes [8]. Delays in the diagnosis and treatment of MDR-TB are associated with an increased risk of morbidity [9], mortality, and community transmission, as well as increased treatment costs [10]. Recently, the introduction of the GeneXpert MTB/RIF assay has improved the early detection of TB, MDR-TB, and extensively drug-resistant (XDR)-TB.

Studies among patients with DS-TB have reported that sociodemographic, behavioural, and clinical factors, including patients' attitudes, knowledge, occupation, and educational status, the presence of diabetes mellitus, and access to TB treatment centres, affect timely diagnosis and commencement of TB treatment [11]. However, the risk factors associated with diagnosis and treatment delays in patients with MDR-TB are yet to be robustly investigated, including in high-burden countries such as China. Understanding the factors that influence diagnosis and treatment delays will assist clinicians in identifying patients at higher risk of delay, increasing the chance of timely diagnosis and treatment and good MDR-TB treatment outcomes, and reducing the chance of onward transmission. Therefore, this study aimed to evaluate diagnosis and treatment delays, and the risk factors for these delays, among MDR-TB patients in Hunan province, China.
Study area

A retrospective cohort study was conducted in Hunan province, China. Hunan is one of the 34 provincial regions in China. It has an estimated total area of 211,800 square kilometres (81,800 square miles) and just over 66 million inhabitants as of 2020. According to the census conducted in 2022, Hunan province has 41 ethnic groups. Of these, 89.79% identified as Han and the remaining 10.21% as one of the minority groups, including Tujia, Miao, Dong, Yao, Bai, Hui, Zhuang, and Uyghur [12]. The proportion of TB cases that are MDR-TB in Hunan province ranges from 10.6% to 25.2%, significantly higher than the national average of 1.8% [13]. Hunan Chest Hospital is the only chest hospital in Changsha (the province's capital) that provides diagnosis and treatment services for patients with MDR- and extensively drug-resistant (XDR)-TB. The hospital commenced MDR- and XDR-TB care in 2011 and serves as a referral hospital for presumptive DR-TB patients.

Data sources and variables

The study population comprised all MDR-TB and XDR-TB patients enrolled at Hunan Chest Hospital between 2013 and 2018. Data were obtained from an internet-based TB management system administered by the TB Control Institute of Hunan province. All pulmonary or extrapulmonary MDR- and XDR-TB patients enrolled during the study period were included. The data contained demographic variables (age, gender, year of diagnosis, occupational status, residence, ethnicity, patient source, and detainee status) and clinical factors (initial date of symptoms, date of diagnosis, date of treatment commencement, registration type, diagnosis type, diagnosis institution type, and treatment delay).

Definition of variables

Multidrug-resistant tuberculosis (MDR-TB) is defined as TB resistant to at least isoniazid and rifampicin. Extensively drug-resistant (XDR)-TB is defined as TB resistant to isoniazid, rifampicin, fluoroquinolones, and either bedaquiline or linezolid (or both) [14]. We defined the total interval as the entire time from the onset of symptoms to the start of MDR-TB treatment, which was divided into diagnosis and treatment intervals [15]. The diagnosis interval was defined as the time from the onset of symptoms to the date of MDR-TB confirmation [11]. The treatment interval was defined as the time from MDR-TB confirmation to the commencement of MDR-TB treatment [16]. Severely ill was defined as having severe comorbidities, persistent coughing with breathlessness, and weakness with extreme weight loss. China's ethnic classification recognizes 56 groups: the Han majority and 55 minority groups.

MDR-TB diagnosis in Hunan province

Hunan Chest Hospital follows WHO-recommended methods for the diagnosis and treatment of MDR-TB. Clinical assessments based on symptoms, microscopic sputum examinations, radiological examinations, and molecular techniques such as the line probe assay and GeneXpert are commonly used to diagnose MDR-TB in the province.
There are 131 counties in Hunan province, and only 32 can provide comprehensive diagnostic services, including culture. In Hunan Chest Hospital, drug susceptibility testing (DST) is mainly carried out to diagnose MDR-TB. As a result, sputum specimens from all culture-positive TB patients from all parts of the province are referred to Hunan Chest Hospital for DST. In the hospital, phenotypic DST based on solid and liquid culture techniques and molecular methods using line probe assays and Xpert® MTB/RIF are performed. Hunan Chest Hospital performs DST for rifampicin, isoniazid, ethambutol, streptomycin, kanamycin, and ofloxacin. Solid and liquid cultures are used to follow up patients' progress and treatment outcomes.

Data analysis

Data in an Excel spreadsheet were translated from Mandarin to English and exported to Stata version 17 for analysis. Descriptive statistics were presented as frequencies (percentages) for categorical variables and medians with interquartile ranges (IQR) for continuous variables. The outcome variables (diagnosis and treatment intervals) were calculated in days and categorized as delayed or non-delayed. The numbers of days between the onset of symptoms and diagnosis confirmation and between diagnosis and commencement of MDR-TB treatment were calculated. A 14-day cut-off point was used to dichotomize diagnosis and treatment delay, consistent with a previous study (https://www.health.nsw.gov.au/Infectious/controlguideline/Pages/tuberculosis.aspx).

We used chi-square tests to assess associations between the outcome and explanatory variables. A univariable logistic regression model was first fitted, and variables with a p-value of < 0.2 were entered into a multivariable logistic regression model. An adjusted odds ratio (AOR) with a 95% CI was used to determine the statistical significance and strength of associations between risk factors and delays in diagnosis and treatment initiation. As there is no standard cut-off point to classify delay, a sensitivity analysis was conducted using the median value as the threshold for delay. We also ran two separate analyses based on treatment category (new versus retreatment). A further sensitivity analysis was conducted using a negative binomial regression model, in which the outcomes were treated as counts. The Hosmer-Lemeshow goodness-of-fit test was used to assess model fitness. As the mean and the variance were not equal, the negative binomial model was chosen over the Poisson regression model for the count data analysis. An adjusted relative risk (ARR) with a corresponding 95% CI was used to declare statistical significance.

Median time to diagnosis and treatment among multidrug-resistant tuberculosis patients

Overall, the median diagnosis interval among MDR-TB patients was 27 days (IQR 7-66 days), and the median treatment interval was one day (IQR 0-24 days) (Table 2). The overall prevalence of diagnosis delay among MDR-TB patients was 62.8% (95% CI: 60.1-65.4), and 30.8% (95% CI: 28.3-33.4) of MDR-TB patients experienced treatment delay. The distributions of diagnosis and treatment delay are summarized in the Supplementary files, Figs. S1 and S2.
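The study's models were fitted in Stata; purely as an illustration of the analysis pipeline described above, the following Python sketch dichotomizes the delays at the 14-day cut-off and extracts adjusted odds ratios with 95% CIs from a multivariable logistic regression (the file and column names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv(
    "mdr_tb.csv", parse_dates=["symptom_onset", "diagnosis", "treatment_start"]
)

# Outcomes: delays dichotomized at the 14-day cut-off.
df["diag_delay"] = ((df["diagnosis"] - df["symptom_onset"]).dt.days > 14).astype(int)
df["treat_delay"] = ((df["treatment_start"] - df["diagnosis"]).dt.days > 14).astype(int)

# Multivariable logistic regression; in the study, variables with p < 0.2
# in the univariable models were entered here.
model = smf.logit(
    "treat_delay ~ C(ethnicity) + C(residence) + C(treatment_category) + C(severely_ill)",
    data=df,
).fit()

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
aor = pd.DataFrame({
    "AOR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(aor)
```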
The trend in diagnosis and treatment delay among MDR-TB patients

Diagnosis delay among MDR-TB patients increased yearly, but not linearly. Treatment delay increased over 2013-2015, was steady from 2015 to 2016, and showed a slight reduction between 2016 and 2017. The trend of treatment delay then increased alarmingly over 2017 and 2018 (Fig. 1).

Factors affecting diagnosis delay among patients treated for MDR-TB

The odds of experiencing MDR-TB diagnosis delay among patients who came through referral and tracing were 39% lower (AOR = 0.61, 95% CI: 0.47-0.80) than among patients identified through consultations due to symptoms. Compared with patients younger than 15 years, patients aged ≥ 65 years had 65% lower odds of experiencing diagnosis delay (AOR = 0.35, 95% CI: 0.14-0.91) (Table 3).

In the sensitivity analysis (i.e., using the median of 27 days as the cut-off point instead of 14 days), age, occupation, and patient source were statistically significant factors associated with diagnosis delay, suggesting that the results are sensitive to the threshold defining a delay. The odds of experiencing diagnosis delay among elderly MDR-TB patients (≥ 65 years) were 60% lower (AOR = 0.40, 95% CI: 0.17-0.94) than among children under 15 years old. The odds of developing diagnosis delay among MDR-TB patients identified through referral and tracing were 25% lower (AOR = 0.75, 95% CI: 0.59-0.96) than among MDR-TB patients who sought consultations because of symptoms (Supplementary file Table S1).

No significant variables were identified in the sensitivity analysis that stratified MDR-TB patients as new or previously treated (Supplementary file Table S2).

Supplementary file Table S3 shows the negative binomial regression assessment of factors affecting diagnosis delay among MDR-TB patients. Gender, diagnosis institution type, age, and treatment category were significantly associated with diagnosis delay.

Factors affecting treatment delay among patients treated for MDR-TB

Current residence, treatment category, ethnicity, and severity of illness were associated with treatment delay. Other ethnic minority groups had 45% lower odds (AOR = 0.55, 95% CI: 0.33-0.93) of experiencing treatment delay than the Han majority (Table 4).

A sensitivity analysis was conducted using different thresholds defining a delay, and sensitivity to the threshold was identified. The analysis using the third quartile (24 days) showed that only two variables (ethnicity and patient source) remained significant: the odds of experiencing treatment delay among ethnic minority MDR-TB patients were 62% lower (AOR = 0.38, 95% CI: 0.19-0.74) than among the Han majority, and the odds of experiencing treatment delay among patients identified through referral and tracing were 7.39 (95% CI: 4.75-11.49) times higher than among patients who came to seek consultations (Supplementary file Table S4).

In a sensitivity analysis based on treatment category (new vs re-treatment), patients with other occupations had 38% higher odds (AOR = 1.38, 95% CI: 1.01-1.90) of experiencing treatment delay than farmers (Supplementary file Table S5).

Findings from the negative binomial model showed that patient source was significantly associated with the risk of treatment delay (Supplementary file Table S6).
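As a sketch of the count-data sensitivity analysis (again an illustrative Python equivalent of the study's Stata workflow, with hypothetical column names), a negative binomial model can be fitted to the delay measured in days:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mdr_tb.csv")  # assumes a 'treat_delay_days' count column

# Negative binomial regression for over-dispersed counts
# (variance exceeding the mean, as reported in the study).
nb = smf.negativebinomial(
    "treat_delay_days ~ C(patient_source) + C(ethnicity) + C(severely_ill)",
    data=df,
).fit()

# Drop the dispersion parameter; exponentiated coefficients are then
# adjusted rate ratios with 95% CIs.
params = nb.params.drop("alpha")
ci = np.exp(nb.conf_int().drop("alpha"))
arr = pd.DataFrame({"ARR": np.exp(params), "CI_low": ci[0], "CI_high": ci[1]})
print(arr)
```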
Discussion

This study evaluated risk factors for diagnosis and treatment delay among MDR-TB patients in Hunan province. About two-thirds and one-third of MDR-TB patients experienced diagnosis and treatment delays, respectively, using a 14-day threshold to define a delay. The median diagnosis and treatment intervals among MDR-TB patients were 27 days and one day, respectively. Elderly patients (≥ 65 years) and patients identified through tracing and referral had lower odds of diagnosis delay than their counterparts. Patients with Han ethnicity, previous TB treatment history, residence outside Hunan province, and severe illness had a significantly higher probability of experiencing MDR-TB treatment delay.

Diagnosis and treatment intervals for MDR-TB

The median diagnosis interval found in this study was longer than the nine days reported in Myanmar [17] and the five days reported in Bangladesh [18]. The difference could result from the use of different diagnostic modalities and algorithms for MDR-TB. On the other hand, the interval was shorter than previously reported in China, where the median diagnosis interval was 84 days [19]. The difference could be because of the introduction of GeneXpert, which reduced the laboratory turnaround time in diagnosing MDR-TB patients [20]. Previous studies also showed that using GeneXpert significantly reduced treatment delay among DR-TB patients [21,22].

Fig. 1: Proportions of diagnosis and treatment delay by year of enrolment.

The study also revealed that two-thirds of MDR-TB patients experienced diagnosis delays. This falls short of the WHO recommendation that every patient should have an early diagnosis of TB, including universal drug susceptibility testing (DST) [2]. The WHO recommends that all DR-TB patients commence treatment as soon as possible to prevent unnecessary side effects, complications, and poor treatment outcomes.

The trend of treatment delay increased alarmingly over 2017 and 2018. A possible explanation could be the high case burden of MDR-TB in Hunan province between 2017 and 2018, whereby patients may have been obliged to wait longer to commence treatment. Moreover, variation among study subjects in MDR-TB signs and symptoms and poor health-seeking behaviour may contribute to the difference. Previous studies have also suggested that the initiation of MDR-TB treatment mainly depends on baseline laboratory investigations and on PMDT panel team decisions regarding DST results and the availability of treatment options [23,24]. The alarming treatment delay in 2017/2018 could therefore be due to a lack of treatment options. However, further research is needed to identify the factors contributing to the high burden of treatment delay in 2017 and 2018.

Many MDR-TB patients experienced treatment delays in Hunan province, and it is highly recommended that prompt initiation of MDR-TB treatment following confirmatory diagnosis be prioritized to achieve the END-TB strategy targets. Early diagnosis and treatment of TB are particularly important to minimize community transmission of the disease, reduce side effects, limit disease progression, and improve treatment outcomes and quality of life. They can also help minimize patients' catastrophic costs [25].
Few studies have previously been conducted in China to determine MDR-TB diagnosis and treatment delay. Our study is the first to investigate MDR-TB diagnosis and treatment delays focusing on Hunan Province, one of the provinces with a high MDR-TB burden in China. Our study incorporated important variables, such as ethnic minority status and residence, that were missed in previous studies. The first study, conducted in Taizhou, Zhejiang Province, did not report the overall diagnosis and treatment delay; instead, it primarily determined factors associated with waiting time for DST, pre-treatment attrition, waiting time for treatment, and treatment outcomes and their associated factors, and it did not report the overall diagnosis delay and associated risk factors. Another study conducted in China to determine diagnosis and treatment delay among MDR-TB patients lacked socio-demographic variables such as ethnicity and residence [26]. In our study, by contrast, ethnic minority status and residence outside Hunan Province were identified as factors associated with diagnosis and treatment delay, adding to the existing literature. Moreover, none of the previous studies conducted a sensitivity analysis, which is valuable for better decision-making and more reliable estimates and highlights areas for improvement. As there is no standard cut-off point for diagnosis and treatment delay, applying a sensitivity analysis to show the results at different thresholds is important.

Risk factors for MDR-TB diagnosis delay

Previous studies suggested that elderly patients often struggle to access diagnosis and treatment centres and usually rely on their families to visit health facilities [27]. However, our findings showed that patients ≥ 65 years of age had lower odds of diagnosis delay than patients aged < 15 years. A possible explanation could be that MDR-TB diagnosis in children is impacted by different diagnostic approaches; for example, children may be treated for other respiratory tract infections before diagnostic testing is undertaken, resulting in delayed MDR-TB diagnosis. A previous study revealed that the diagnosis of MDR-TB is bacteriological and that children need a systematic diagnostic approach, which leads to a longer time to MDR-TB diagnosis [28]. Moreover, a lack of awareness that children can develop MDR-TB, a perceived inability to diagnose active MDR-TB without TST and CXR, and the absence of international guidance on preventive therapy against MDR-TB could contribute to MDR-TB diagnosis delay.

In this study, patients identified through tracing and referral had a shorter diagnosis delay than patients identified through symptomatic consultations. Tracing of MDR-TB patients may make a significant contribution to patients visiting health facilities in a timely manner, as the accompanying consultation is a basic component of support and helps overcome major barriers, such as non-disclosure of the disease driven by the strong stigmatization of MDR-TB. Referral also reduces waiting times for primary care and minimizes duplicate investigations, as clinicians can rely on the pre-referral work-up. The shorter delay among the tracing and referral sub-groups may also be caused by bias in the reported time of symptom onset.
Moreover, our findings agree with previous studies carried out on DS-TB patients. For instance, age and active case finding [30] were significantly associated with diagnosis delay among DS-TB patients. Other socio-demographic variables (educational status, knowledge of MDR-TB, and personal beliefs and attitudes) [31], the use of traditional medicines and healers, and limited health service access due to geographical location [32] were also significantly associated with diagnosis delay among DS-TB patients.

Risk factors for MDR-TB treatment delay

Regarding treatment delay, Han ethnicity, previous TB treatment history, residence outside Hunan province, and severe illness were significantly associated with MDR-TB treatment delay. This study found that ethnic minority groups in Hunan province had lower odds of experiencing treatment delay than the Han majority. This could be due to improving educational attainment among ethnic minorities, which is highly prioritized in China [33]. This finding is consistent with a study conducted by Gilmour et al. among DS-TB patients in Hunan province, in which ethnic minority groups had lower odds of experiencing treatment delay than the Han majority. A systematic review and meta-analysis conducted among DS-TB patients in Ethiopia showed that lower educational level was significantly associated with poor health-seeking behaviour [34]. However, further research is recommended to illuminate why the Han majority has higher odds of experiencing treatment delay than ethnic minority groups.

In this study, previously treated MDR-TB cases had lower odds of experiencing MDR-TB treatment delay. This might be because previously treated cases were more knowledgeable about what to do, more aware of the severity of the disease and the risk of developing a resistant form of TB, and already had relationships with clinicians that expedited their care; they were also less likely to experience diagnosis delay than new MDR-TB patients [34,35].

Patients from locations other than Hunan province had an increased likelihood of experiencing treatment delay. In China, the hukou system, a household registration system, issues each family member a permanent residence record for a single address, given at birth or through employment in the formal sector [36]. The hukou system governs each family member's access to essential public services, education, health services, social benefits, and recruitment by governmental or private sectors [37]. As a result, patients from Hunan province have better access to health care and are less likely to experience treatment delays than patients from outside Hunan province. According to previous reports, a previous history of DS-TB treatment [38] and health system factors (staff shortages, cost of services, drug stock-outs, and poor health infrastructure) [11] are significantly associated with treatment delay.
Limitations of the study

The study had several limitations. The absence of a WHO-endorsed, international standard cut-off point for determining diagnosis and treatment delay can significantly affect estimates of the burden of delays and cause misclassification bias when investigating the factors associated with the delays. Some important variables that might be associated with diagnosis and treatment delay, such as educational status, income level, geographical inaccessibility, knowledge of MDR-TB, and the presence of comorbidities (e.g., diabetes mellitus, HIV infection, mental ill health), were not assessed in the current study; future research should prioritize prospective studies that measure these variables. Only patients attending the designated health facilities were included in the study, which might have caused selection bias, and the findings might not be generalizable to all MDR-TB patients in Hunan province. Recall bias might also have affected patients' reporting of the exact date of symptom onset. Finally, the transport of all sputum specimens to Hunan Chest Hospital for DST might introduce turnaround time that further influences the diagnosis delay.

Conclusion

Long diagnosis and treatment delays still occur for many MDR-TB patients in Hunan province. Children under 15 and patients identified through passive case detection were found to have a higher probability of diagnosis delay. Ethnic minority groups and patients previously treated for MDR-TB had lower odds of treatment delay. On the other hand, the odds of treatment delay in patients coming from areas other than Hunan province and in severely ill patients were high. Giving attention to new MDR-TB patients, severely ill patients, and patients from outside Hunan province will potentially reduce the burden of treatment delay among MDR-TB patients.

Table 1: Socio-demographic and clinical characteristics of multidrug-resistant tuberculosis patients in Hunan province.

Table 2: Median diagnosis and treatment delay among multidrug-resistant tuberculosis patients in Hunan province, stratified by demographic and clinical characteristics. CDC: Communicable Disease Control; IQR: interquartile range. a Housekeeping, housework and unemployed. b Bai, Buyi, Dai, Gelao, Hani, Hui, Jingpo, Kazakh, Kirgiz, Korean, Lahu, Li, Lisu, Manchu, Mongolian, Salar, She, Tibetan, Tu, Uighur, Wa, Yi, Zhuang.

Table 3: Univariable and multivariable logistic regression of factors associated with 14 days of multidrug-resistant tuberculosis diagnosis delay. Other ethnic minorities: Dong, Miao, Tujia, Yao. CDC: Communicable Disease Control; Hosmer and Lemeshow test (Prob > chi2 = 0.3135); COR: crude odds ratio; AOR: adjusted odds ratio; TB: tuberculosis. a Housekeeping and retired. CI: confidence interval.

Table 4: Univariable and multivariable logistic regression for treatment delay among multidrug-resistant tuberculosis patients in Hunan Province, 2013-2018. AOR: adjusted odds ratio; COR: crude odds ratio; CI: confidence interval.
2024-02-02T14:19:33.830Z
2024-02-02T00:00:00.000
{ "year": 2024, "sha1": "86013b7644d0363b32f45b9b6cc211d65fcc8d1d", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7a7a27ea1203f5d5e7e5d0fc1fede5012204455d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
30737956
pes2o/s2orc
v3-fos-license
Detection of Borrelia burgdorferi in a Sick Peregrine Falcon (Falco peregrinus) – A Case Report

Abstract

Borrelia (B.) burgdorferi, the causative agent of Lyme disease, is the most important zoonotic pathogen in the northern hemisphere. This report describes a peregrine falcon (Falco peregrinus) infected with B. burgdorferi. The patient was presented with a swollen intertarsal joint, diarrhoea, and a reduced general condition. Radiographs were inconspicuous. Antibacterial treatment against the bacterium Escherichia coli found in the intestine and joint did not lead to success. Serological testing for B. burgdorferi was positive. The bird recovered well after a therapy for borreliosis similar to that used in humans and mammals. In future, it should be taken into account that raptors are susceptible to B. burgdorferi.

Introduction

The importance of Lyme borreliosis, a tick-borne disease caused by the bacterium Borrelia (B.) burgdorferi sensu lato, is constantly increasing; it is the most frequent arthropod-borne disease in the northern hemisphere today. Numerous studies on the prevalence of B. burgdorferi in ticks have been published. A review summarising 1,186 abstracts on epidemiological studies of the tick I. ricinus infected with B. burgdorferi sensu lato between 1984 and 2003 in Europe describes infection rates from 0% (Italy) to 49.1% (Slovakia), with the highest infection rates found in the countries of Central Europe [1]. The annual worldwide number of reported human cases is about 85,000 [2,3]. Lyme disease is a multisystemic infectious disease, and different genospecies appear to have certain organ tropisms. The highest risk of infection is posed by infected tick nymphs, as they are easily overlooked due to their small size and wide distribution [4].

Case Report

A three-year-old female peregrine falcon (Falco peregrinus) weighing 980 g was presented with a mildly swollen left intertarsal joint. The owner had also observed diarrhoea over the previous five days. The bird was kept in a weathering area of about 5 × 2.5 metres and was trained for falconry. The diet consisted of day-old chickens and miscellaneous hunting prey animals (e.g. pheasant, rabbit, pigeon). On clinical examination, the falcon showed a reduced general condition and a swollen left intertarsal joint, with slight signs of diarrhoea. Following these clinical signs, whole-body radiographs were taken (laterolateral and dorso-ventral beam), which revealed a periarticular soft-tissue swelling without lysis of the articular surfaces of the left intertarsal joint. The walls of the intestinal loops were thickened and showed slight gaseous distention. Blood samples were taken from the brachial vein (V. ulnaris) for biochemical and haematological analysis; interestingly, all parameters were within normal ranges. Parasitological faecal examinations (direct smear and flotation) were negative in both cases. Subsequently, bacteriological examinations of the faeces and the periarticular soft-tissue swelling were performed. Escherichia coli, a gram-negative, non-spore-forming bacillus, was isolated in moderate quantity from the cloacal swab and also from the joint. Antibacterial treatment was administered based on bacterial culture and sensitivity (enrofloxacin, 10 mg/kg p.o.).
Furthermore, supportive therapy was carried out: fluids were given via subcutaneous infusions (Sterofundin®), and meloxicam was given for analgesia. In addition to the normal food, the falcon was fed a eupeptic emergency diet via crop gavage (Carnivore Care®). Unfortunately, there was no significant clinical improvement during the six days after starting the treatment. A new blood sample was taken and tested for antibodies against Borrelia burgdorferi using a modified indirect immunofluorescence test described by Büker et al. [5]. The bird showed an antibody titre of ≥ 1:256 (Figure 1). An antibiotic treatment similar to the protocol used in humans and dogs was provided (doxycycline, 50 mg/kg p.o.).

Discussion

This case report describes, for the first time, a presumably clinical borreliosis in a bird. Birds of prey respond immunologically to infections with B. burgdorferi and may therefore play a role in the transmission, maintenance, and movement of Lyme disease [5]. It appears that an appropriate treatment similar to that administered in humans and mammals can be effective against B. burgdorferi. It is known that several bacteria, including Escherichia (E.) coli, are commonly implicated in bacterial joint disease and also in enteritis in raptors [6]. Most are secondary pathogens; treatment is based on bacterial culture and sensitivity and on the identification and elimination of predisposing factors and concurrent disease [7]. Interestingly, the E. coli found in this case does not seem to have been the cause of the clinical findings, because the targeted antibacterial treatment did not prove satisfactory. Nevertheless, it should be considered that the supportive therapy contributed to the physical recovery. Until now, B. burgdorferi infection has appeared to be asymptomatic in avian species [8]; further research is necessary to confirm this. However, infected birds are thought to play a role in the transmission, maintenance, and long-distance movement of Lyme disease [8-10]. A large number of bird species, primarily ground-foraging passerines but also sea birds, act as competent reservoirs for B. burgdorferi [8,10]. Information about the prevalence of B. burgdorferi in different bird species, or in birds generally, is scarce. Large-scale studies with more than one thousand examined birds have reported values of 4.4%-19% [11-13]. Two main enzootic cycles for the spread of B. burgdorferi by birds have been described [8]. 1. The Terrestrial Enzootic Cycle: many birds are associated with the dispersal of vector ticks and therefore with the distribution of B. burgdorferi along their annual migration routes [14-18].
Along these routes, migrating birds use different stopover sites where they feed and rest; at these locations ticks may attach and later detach further along the migration routes or even in breeding and wintering areas. New foci of tick-borne diseases may become established in this way [8]. 2. The Marine Enzootic Cycle: the seabird-associated tick Ixodes uriae is the main vector in this cycle. Seabirds often live in large colonies of thousands to millions of individuals, especially during the breeding season; therefore, ticks and also Borreliae can easily spread [8,19]. A global transmission cycle including a transhemispheric exchange is also assumed, because the same B. garinii spirochetes have been found in seabirds in the northern and southern polar regions, even on mammal-free islands [8,20]. The relatively low body temperature of seabirds may play a role in the maintenance of spirochetemia [8,19,21].

Conclusion

Birds should be considered as potential carriers of Lyme disease; this applies particularly to predisposed persons (e.g. falconers, biologists, zookeepers, hunters, veterinarians).
2019-03-11T13:10:54.864Z
2014-01-23T00:00:00.000
{ "year": 2014, "sha1": "192ab9c194b86b49a08b82ca2dc71eb29ac9a2b9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2161-1165.1000143", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e6111b6f76a84d6300332800d723eaf3a0ee5d0e", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }