86300784
pes2o/s2orc
v3-fos-license
Florida Entomologist 79(2), June 1996. INTERSPECIFIC INTERACTIONS AND HOST PREFERENCE OF ANASTREPHA OBLIQUA AND CERATITIS CAPITATA (DIPTERA: TEPHRITIDAE), TWO PESTS OF MANGO IN CENTRAL AMERICA.

The larvae of Anastrepha obliqua L. and Ceratitis capitata Wied. (Diptera: Tephritidae) destroy mango fruits (Jirón & Hedström 1988), while fruit flies of the genus Anastrepha (including A. obliqua, A. ludens, and A. striata) cause extensive damage to a large variety of Neotropical commercial and wild fruit hosts (Norrbom & Kim 1988). A. obliqua has been found on fruits in the Anacardiaceae, frequently in hog plums (Spondias spp.)
and mangos (Jirón 1995). This species of fly began to attack mango extensively as a host when mangos were introduced into tropical America during the 18th century (Jirón 1995). In the past, C. capitata was found only in Africa and the Mediterranean basin, but in the 1950's it was accidentally introduced to the Western Hemisphere (Jirón & Salas 1992). In these new areas, it occurs in a wide variety of habitats with a large number of host species (such as Citrus and Terminalia catappa) (Nishida et al. 1985). A. obliqua oviposits in a few plant species belonging predominantly to the Anacardiaceae, while C. capitata attacks numerous fruit species belonging to different families (Jirón et al. 1988). Interspecific interactions of adults on mango were examined by placing 30 females of each species, A. obliqua and C. capitata, in a 30 × 30 × 30 cm screen cage with a nearly ripe mango fruit placed in the middle of the floor. Adults of A. obliqua were obtained from colonies maintained in the mass rearing laboratory at the Estacion Experimental Fabio Baudrit, Alajuela, Costa Rica. C. capitata adults and their diet were supplied by H. Camacho, Escuela de Biologia, Universidad de Costa Rica. The following data on the behavior of the flies were taken: 1) incidence of A. obliqua displacing C. capitata from the fruit, 2) incidence of C. capitata displacing A. obliqua from the fruit, and 3) incidence of both species occurring on the same fruit with no interaction. The flies were observed for 30 min per trial, and each trial was replicated four times. The outside of the cage screen was moistened by wiping it with a wet sponge; this procedure kept the humidity within the cage relatively high. Host fruit preference tests with nearly ripe mangos and ripe oranges were done by placing fifteen females of each species in a 30 × 30 × 30 cm screen cage. Three mangos, a reportedly preferred host of A. obliqua (Jirón et al. 1988), and three oranges, a reportedly preferred host of C. capitata (Nishida et al. 1985), were arranged alternately on the floor of the cage. Observations were made on the frequency of visits of each fly species to each host fruit. The flies were observed in four replications, each of 30 min duration. Statistical analysis was performed with a paired t-test. A. obliqua displaced C. capitata from the mango in 60.4 percent of encounters by charging. In 35.8 percent of the cases, there appeared to be no competitive behavior. In the remaining 3.8 percent of the observations, C. capitata appeared to displace A. obliqua from the fruit. When both fruits were presented simultaneously to the two species of flies, C. capitata landed 49 times on oranges and 29 times on mango, a difference that was statistically significant (t value = 6.12, p = 0.0088). A. obliqua landed 20 times on mangos and the remaining 12 times on oranges, a difference that was not statistically significant (t value = 1.36, p = 0.2674). The results of the interspecific competition studies suggest that A. obliqua is a better competitor than C. capitata. This competitive dominance may be attributed to A. obliqua being three times as large as C. capitata. The difference in size can be a factor, since the reaction of A. obliqua towards the presence of C. capitata is to shift its wings from a horizontal to a vertical plane, then to circle around the smaller fly, and finally to charge directly towards C.
capitata, forcing it off the fruit. The distribution of these two species of fruit flies in naturally-infested fruit supports our findings. C. capitata is present early during the mango fruiting season (Jirón & Hedström 1988). Additionally, C. capitata can infest mango before A. obliqua because of physiological restrictions in the fruit that do not allow the eggs of A. obliqua to develop and hatch once they are laid (Soto-Manitiu & Jirón 1989), therefore allowing an early survival of C. capitata on the fruit. Our results showed that in 35.8 percent of the encounters there were no aggressive interactions. The apparent lack of competition could have been caused by C. capitata and A. obliqua not noticing each other on the fruit because of the fruit's curvature. The 3.8% of encounters where C. capitata displaced A. obliqua from the fruit by attacking it can be attributed to chance. A. obliqua showed a slight preference for mango as a landing site when both mango and orange were available. This result supports the findings by Jirón et al. (1988) and Eskafi & Cunningham (1987) that A. obliqua shows a strong preference for anacardiaceous host fruits. C. capitata showed a preference for orange as a landing site compared to mango. We wish to thank Daniel Janzen for his valuable comments, Lisa Bradshaw and the faculty at the Center for Sustainable Development Studies/School for Field Studies in Costa Rica for their support and advice throughout this project, and Hernan Camacho at OIRSA for providing us with Mediterranean fruit flies and other supplies used. This research was partly supported by CONICIT of Costa Rica. SUMMARY. Adult aggressive interactions between Anastrepha obliqua and Ceratitis capitata were examined by exposing adults of both species to each other under laboratory conditions, as well as exposing them to natural host fruits such as mangos and oranges. A. obliqua successfully displaced C. capitata from mango fruit in 60.4% of encounters, 35.8% of encounters showed no adult aggressive interactions, and 3.8% of encounters resulted in C. capitata displacing A. obliqua from the fruit. When adults of both species were exposed to oranges and mangos in the same cage, C. capitata preferred oranges 62.8% of the time and mangos 37.2%, while A. obliqua preferred mangos 62.5% of the time and oranges 37.5%.
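For reference, the paired t-test used above compares, replicate by replicate, a species' landing counts on the two fruits. The minimal sketch below illustrates that comparison for C. capitata; the per-replicate counts are hypothetical placeholders chosen only so that they sum to the reported totals (49 landings on orange, 29 on mango), since the per-replicate data are not given in the note, so the resulting t statistic will not match the published value of 6.12.

```python
# Minimal sketch of the paired t-test on landing counts (hypothetical per-replicate data).
from scipy import stats

orange_landings = [14, 12, 11, 12]  # assumed split over 4 replicates, summing to 49
mango_landings = [8, 7, 6, 8]       # assumed split over 4 replicates, summing to 29

# Each 30-min replicate contributes one (orange, mango) pair of counts.
t_stat, p_value = stats.ttest_rel(orange_landings, mango_landings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```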
2017-09-08T03:38:07.917Z
1996-06-01T00:00:00.000
{ "year": 1996, "sha1": "de85136e550edc70144c848c15a64f60d8fe2246", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2307/3495824", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "de85136e550edc70144c848c15a64f60d8fe2246", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
212927813
pes2o/s2orc
v3-fos-license
Graphene oxide–iridium nanocatalyst for the transformation of benzylic alcohols into carbonyl compounds. A catalyst constructed from graphene oxide and iridium chloride exhibited high activity and reliability for the selective transformation of benzylic alcohols into aromatic aldehydes or ketones. Instead of a thermal reaction, the transformation was performed under ultrasonication, a green process with little byproduct, high atom efficiency, and high selectivity. Experimental data obtained from spherical-aberration-corrected field emission TEM (ULTRA-HRTEM), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy, and Raman spectra confirm the nanostructure of the title complex. Notably, the activity and selectivity for the transformation of benzylic alcohols remained unchanged over 25 catalytic cycles. The average turnover frequency is higher than 5000 h−1, while the total turnover number (TON) is more than one hundred thousand, making this a green and eco-friendly process for alcohol oxidation. Introduction. Aromatic aldehydes or ketones are widely used in various fields, such as the biological, pharmaceutical, chemical, and materials sciences. [1][2][3][4][5] A large number of aromatic carbonyl compounds are consumed by certain industries; for example, the global consumption of benzaldehyde is around 7000 tons per year and that of vanillin is estimated to be 12,000 tons per year. Therefore, a great effort has been dedicated to developing highly selective and reliable approaches to prepare aromatic aldehydes or ketones in modern synthetic chemistry. 6,7 There are several ways to prepare aromatic aldehydes or ketones, such as Friedel-Crafts acylation, 8,9 ozonolysis of alkenes, [10][11][12] hydration of alkynes, [13][14][15] partial reduction of carboxylic acid derivatives, 16,17 and the oxidation of alcohols, 18,19 of which the oxidation of alcohols is the most important and most widely used method, owing to the availability of the alcohols. Traditionally, the oxidation of alcohols uses special reagents such as Dess-Martin periodinane 20 and chromium trioxide (CrO3) as oxidants. 24,25 Most of the above-mentioned methods are well developed for the oxidation of alcohols to carbonyl compounds, but all of them use stoichiometric oxidants and release toxic by-products, such as heavy metals and chemical wastes. Therefore, tremendous efforts have been devoted to the design of catalytic systems that use O2 as the primary oxidant for the catalytic oxidation of benzyl alcohols to prepare benzaldehydes. [26][27][28][29][30] Although these newer approaches offer some advantages, there is still considerable room for improvement. Graphene-based two-dimensional (2D) atomic materials have attracted great attention in the past decade. [31][32][33][34] The single-layered graphene structure, as a uniform platform, provides a large specific surface area with binding points to interact with substrates and/or active metals, and may form strong π-π interlayer interactions for electrical or energy transfer. 35,36 Therefore, it has been explored in various fields, such as catalysts, [37][38][39][40][41][42][43] biosensors, [44][45][46][47][48] supercapacitors, [49][50][51] and Li-ion batteries. 52,53 Iridium, named after the Greek goddess of the rainbow, possesses some unique characteristics and has been extensively investigated in various fields such as organic light-emitting diodes (OLEDs), 54,55 solar cells, 56,57 water splitting to produce hydrogen or oxygen, 58,59 and carbon dioxide reduction.
60,61 This element can form compounds with a wide range of oxidation states between −3 and +9, resulting in a number of organometallic compounds used in industrial catalysis, for example, the Cativa process for the manufacture of acetic acid, 62,63 carbon-hydrogen bond activation for the functionalization of saturated hydrocarbons, 64,65 and the asymmetric hydrogenation of natural products. 66,67 Because of the diversity of oxidation states for iridium, the coordination number of the iridium atom can change flexibly in iridium complexes; therefore, the complexes readily undergo reductive elimination to afford a coordinatively unsaturated metal center which can accept substrates and promote reactions. When iridium atoms land on the surface of graphene oxide, the functional groups of the graphene oxide interact with the iridium atoms to form a graphene oxide complex with a nanostructure. Because of the weak interactions, some donor atoms can depart from the iridium centers to generate open coordination sites on iridium for catalysis. Since the graphene oxides surround the iridium centers, the formation and breakage of the Ir-O and/or Ir-C bonds are reversible and fast. Consequently, the graphene oxide-iridium complex may act as a molecular switch, accepting substrates and promoting the catalysis. Most chemical reactions are performed at high temperature, and sometimes under high pressure, which consumes a great deal of energy, often brings about side reactions, and results in low selectivity and low yields of the desired products. Though a catalyst can lower the reaction temperature and improve the selectivity of some reactions, heated reaction conditions usually lead to catalyst deactivation. In contrast, ultrasonication is usually performed at lower temperatures and can enhance the efficiency by as much as a millionfold. [68][69][70][71][72] For a reaction with ultrasonication, it should be noted that the reactants must be able to efficiently absorb the ultrasonic energy. Herein, we report a graphene oxide-iridium complex that can be activated by ultrasonic energy, which was applied to the catalytic transformation of benzylic alcohols into aromatic aldehydes or ketones. The preparation and structure of the title complex and the catalytic selectivity and productivity for the target products are reported. Catalyst preparation and structural characterization. An illustration of the preparation of the graphene oxide-iridium complex is shown in Scheme 1. The graphene oxide (GO) was prepared by the direct oxidation of graphite by the method of Hummers. The graphene oxide-iridium complex was synthesized by reacting GO with iridium chloride in a mixed solvent under an argon atmosphere. Infrared spectra (Fig. 1) showed the progress of the binding of iridium ions to GO. For graphene oxide, several functional groups can be observed on the GO surface, including carboxyl and hydroxyl groups (ν(O-H) from 2500 to 3600 cm−1), carbonyl groups (ν(C=O) at 1723 cm−1), C=C stretching (ν(C=C) at 1619 cm−1), and C-O stretching (ν(C-O) at 1037 and 978 cm−1) (Fig. 1a). When GO reacted with iridium(III) chloride, the intensity of ν(O-H) gradually decreased and finally almost disappeared after 96 hours of reaction time. The intensities of ν(C=O), ν(C=C), and ν(C-O) also decreased but remained present during the reaction (Fig.
1b and c). This reveals that as the reaction progresses, GO loses the protons of its carboxyl and hydroxyl groups, and the conjugate bases, carboxylate and alkoxide ions, trap the iridium ions through Ir-O interactions. After 96 hours of reaction time (Fig. 1d), most of the iridium ions in solution were trapped, and the IR spectra did not change further. To verify the presence of iridium ions on the GO surface, the normalized X-ray absorption near-edge structure (XANES) spectrum at the Ir K-edge of the GO-Ir complex was measured (Fig. 2c) and compared with the spectra of Ir(0) (Fig. 2a) and [(dfpbo)2IrII]2 (Fig. 2b). The white-line intensity of the GO-Ir complex in Fig. 2c shows that the iridium atoms in the complex have formal oxidation states higher than Ir(0) and Ir(II), 75,76 which reveals that the iridium ions have been trapped on the GO surface to form the GO-Ir complex. (Scheme 1: illustration of the method used for preparing the graphene oxide-iridium complex. Fig. 2: XANES spectra of (a) Ir(0), (b) [(dfpbo)2IrII]2, and (c) the GO-Ir complex; a Si(111) double-crystal monochromator was employed for energy scanning, and fluorescence data were obtained at room temperature using an Ar-filled ionization chamber detector, with each sample scanned 3 times for averaging.) The XRD patterns of graphene, the synthesized GO, and the GO-Ir complex were obtained to examine their fine structures, as shown in Fig. 3. The characteristic XRD peak of the synthesized GO is at around 2θ = 10.2°, corresponding to the (001) plane of GO. In the pattern of the GO-Ir complex, the intensity of the XRD peak of GO decreased and a new peak at 2θ = 24.5°, corresponding to the (002) plane of graphene, was observed, indicating that parts of the GO nanostructure were reduced to the graphene structure. It is interesting that no typical reducing agents such as hydride compounds, hydrazine, or hydrogen were present under the reaction conditions, 77-79 yet the GO was reduced to graphene, which could be attributed to the decarboxylation-protonation of the carboxyl groups and the dehydration of the hydroxyl groups on the GO surface. Furthermore, the carboxyl or hydroxyl groups bound to iridium atoms should be stable, and the decarboxylation-protonation and dehydration do not occur for them. As shown in Fig. 3, the GO-Ir complex exhibits three additional peaks at around 34.2°, 43.6°, and 56.8° that cannot be attributed to elemental carbon or iridium; these were assigned as characteristic XRD peaks of the GO-Ir complex, indicating the formation of the GO-Ir complex. Fig. 4 shows the EDS elemental mappings from a TEM image of GO-Ir, which show that the marked area is made up of iridium, carbon, and oxygen. Carbon is distributed homogeneously over the marked area, but oxygen appears mostly at the locations where iridium exists, implying that the oxygen atoms are bound to iridium. Quantitative analysis by EDS revealed that the atomic ratio of C, O, and Ir is 90.97 : 6.25 : 2.78, which shows that the atomic ratio of iridium to oxygen is about 1 : 2. By comparing the atomic ratio of carbon with iridium, we can deduce that about 76% of the iridium ions were trapped by GO under the reaction conditions used for preparing the graphene oxide-iridium complex. Fig. 5 displays spherical-aberration-corrected field emission TEM images of the GO-Ir complex. Fig. 5a demonstrates that GO was successfully exfoliated by the ethoxyethanol-water mixed solvent without using any surfactant, which is a concise process for preparing the graphene complex. The selected area electron diffraction (SAED) pattern (Fig.
5b) shows the graphene lattice on the surface of the GO-Ir complex in a region without iridium atoms, indicating that parts of the GO nanostructure were reduced to the graphene structure. The SAED pattern of a surface region containing iridium ions (Fig. 5c) shows two d-spacing values of 0.237 and 0.137 nm corresponding to the structure of the GO-Ir complex. A representative high-resolution TEM image (Fig. 5d) reveals that the grain sizes of iridium range between 0.5 and 5 nm. Fig. 5e is the enlarged fragment taken from Fig. 5d (marked by the square symbol), in which the hexagonal lattice of graphene and iridium atoms can be observed; the corresponding ultra-high-resolution TEM image is shown in Fig. 5f. TEM analysis reveals that the iridium ions were bound to the GO layers to form a nanocatalyst. This nanostructure provides a large specific surface area (SSA) and reactive centers for accepting substrates and promoting catalytic reactions. Catalysis using the GO-Ir complex. We have previously demonstrated that iridium complexes are highly active catalysts for redox reactions, such as the reduction of carbon dioxide to carbon monoxide 80 and the selective oxidation of toluene to benzaldehyde. 81,82 Here, the graphene oxide-iridium complex (GO-Ir complex) was used as a heterogeneous catalyst for heterolytic catalysis driven by ultrasonication. The reaction conditions are quite simple (Fig. 6a). The benzylic alcohols and the GO-Ir complex were mixed in a reaction flask exposed to air and irradiated with ultrasound. During the catalytic reaction, the iridium atoms of the GO-Ir complex absorbed the ultrasonic energy (Fig. 6b) to form coordinatively unsaturated metal centers by breaking Ir-O bonds; these centers accepted alcohols and promoted the selective transformation of the alcohols into carbonyl compounds. The reaction progress was monitored by high-performance liquid chromatography (HPLC) and GC-MS to characterize the products. Five kinds of aromatic alcohols were used to test the catalytic ability of the GO-Ir complex, and all the products are useful in various fields such as the agriculture and food industries. The reaction progress is shown in Fig. 7, which indicates that most of the benzylic alcohols were transformed into the corresponding carbonyl compounds within an hour. Though a small amount of by-product was observed in the reaction mixtures, the selectivity for the transformation of benzylic alcohols into carbonyl compounds is higher than 96%. In most cases, the selectivity reached 100% (Fig. 8 and Table 1). Table 1 also shows that the atom efficiency for the transformation of alcohols into carbonyl compounds is higher than 98%, with an excellent turnover frequency (TOF) (close to 5000 h−1). Atom efficiency (atom economy) is an important concept of green chemistry, one of the most widely used metrics for measuring the "greenness" of a process or synthesis, and is of importance for human health and environmental sustainability. The catalysis by the GO-Ir complex provides an eco-friendly and commercially valuable procedure for transforming alcohols into carbonyl compounds. Though the transformation yield for the secondary alcohol is lower than 90%, its selectivity can reach 100%, and the unreacted starting material can be recovered during the isolation of the product. That is, most of the reactant molecules can be transformed into the target molecules.
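For reference, the turnover metrics quoted above follow from standard definitions: TON is moles of product per mole of catalytic metal, and TOF is TON per unit time. The sketch below is a generic illustration only; the molar quantities in the example call are hypothetical placeholders, not values from Table 1.

```python
# Generic turnover bookkeeping; the example inputs are hypothetical, not from Table 1.

def turnover_number(mol_product: float, mol_catalyst: float) -> float:
    """TON: moles of product formed per mole of catalytic (Ir) sites."""
    return mol_product / mol_catalyst

def turnover_frequency(mol_product: float, mol_catalyst: float, hours: float) -> float:
    """TOF: turnovers per hour."""
    return turnover_number(mol_product, mol_catalyst) / hours

# Hypothetical example: 9.5 mmol of benzaldehyde over 1.9 umol of Ir in 1 h gives 5000 h-1.
print(turnover_frequency(9.5e-3, 1.9e-6, 1.0))
```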
The selective transformation of alcohols into carbonyl compounds is an important process for industrial applications, and various catalytic systems have been developed for this purpose, such as Au-Pd nanoparticles on TiO2/GO, 83 Pd nanoparticles encapsulated in a hollow porous carbon sphere, 84 AuPd core-shell nanoparticles, 85 and supported gold nanoparticles. 86 Some of those reports showed good catalyst reliability, high transient TOF, and acceptable TON or selectivity for the transformation of benzylic alcohols, but some challenges remain to be overcome. A main issue is catalyst deactivation resulting from the destruction of "non-bonded" catalysts, such as coprecipitated nanoparticles. In contrast, the reactive centers of the GO-Ir complex are bound to the graphene surface, forming a very stable catalytic structure with good catalytic performance. To evaluate the activity and stability of the catalyst in long-run production, we studied the reuse of the GO-Ir complex by evaluating the selective oxidation of benzyl alcohol over twenty-five catalytic cycles without a regeneration step (i.e., without washing of the catalyst or heat treatment between runs). The TOF of the catalytic cycles for the transformation of benzyl alcohol into benzaldehyde is shown in Fig. 9, which shows that the catalytic capability of the GO-Ir complex for the selective transformation of the alcohol is quite steady: the TOF of the first cycle is 4956 h−1, the average TOF within the 25 cycles is about 5020 h−1, and the TOF of the latest cycle is 5085 h−1. Deactivation of the GO-Ir complex in the selective transformation of the alcohol is not apparent. The TON of the catalytic cycles for the transformation of benzyl alcohol into benzaldehyde was also tested by using a 125 ppm solution of the GO-Ir complex. The reaction time was 2 h per cycle with ultrasonication (ROCKER SONER 220, 0.5 kW, 53 kHz). The TON of the catalytic cycles is shown in Fig. 10, revealing that the catalytic capability of the GO-Ir complex for the selective transformation of the alcohol is fairly stable. The TON of the first cycle is 5334, the TONs of the other cycles are similar to that of the first cycle (Fig. 10a), and the average TON within the 25 cycles is about 5982; therefore, the TON accumulates steadily (Fig. 10b). After 25 cycles, the total TON was more than one hundred thousand (149 500), which implies that the accumulated benzaldehyde productivity reaches 233 880 mol benzaldehyde per kg of GO-Ir complex within 25 cycles; that is, by using one kilogram of GO-Ir complex, 24 760 kg of benzaldehyde could be produced in 50 hours, which demonstrates a very large productivity. Comparative data for the oxidation of benzyl alcohol over previously reported catalysts and the present catalyst are given in Table 2. For each catalytic cycle, most of the benzyl alcohol can be transformed (Fig. 11). The transformation percentage of benzyl alcohol for the first cycle is 96.2%, and the average percentage of transformation within the 25 cycles is 96.8%, indicating a highly efficient process for the transformation of the alcohol; that is, most of the starting material can be transformed into products, and only a minor amount of reactant needs to be recovered. It is a time-saving and energy-saving process.
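The cumulative figures just quoted are consistent with simple unit bookkeeping, as the rough check below illustrates (molar mass of benzaldehyde taken as 106.12 g/mol; small rounding differences from the reported 24 760 kg are expected).

```python
# Rough consistency check of the reported cumulative TON and productivity.
total_ton = 149_500                      # reported cumulative turnovers after 25 cycles
cycles = 25
avg_ton_per_cycle = total_ton / cycles   # ~5980, close to the reported ~5982

mol_per_kg_catalyst = 233_880            # reported productivity, mol benzaldehyde per kg GO-Ir
molar_mass_benzaldehyde = 106.12e-3      # kg/mol
kg_product_per_kg_catalyst = mol_per_kg_catalyst * molar_mass_benzaldehyde
print(avg_ton_per_cycle, round(kg_product_per_kg_catalyst))  # ~24 820 kg vs reported ~24 760 kg
```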
Moreover, the selectivity for the transformation of benzyl alcohol into benzaldehyde is also quite high and steady in each catalytic cycle (Fig. 12). The selectivity for the first cycle is 98.5%, and the average selectivity within the 25 cycles is 99.1%, indicating that most of the benzyl alcohol has been transformed into benzaldehyde, and only a small amount of by-product (benzoic acid) was observed. The purity of the desired product is very high, and the purification procedure for the target compound is quite simple. Because very little waste is formed in the catalytic system, this is a green and eco-friendly strategy for producing carbonyl compounds. To check the structural stability of the GO-Ir complex during catalysis, Raman spectra of the original and reused GO-Ir complex were compared by inspecting the characteristic features of the GO-Ir complex structure. The Raman spectrum of the original GO-Ir complex displays a broad D band peak (the vibration of carbon atoms with sp3 electronic configuration) at 1340 cm−1 and a G band peak (in-plane vibration of sp2-bonded carbon atoms) at 1574 cm−1 (Fig. 13a). After 25 catalytic cycles, the locations of the D band and G band peaks of the reused GO-Ir complex remain intact at 1340 and 1574 cm−1, respectively (Fig. 13b-g). There was also no significant change in the integrated area under the D band for the reused catalysts. The ID/IG ratio is 1.20 for the original GO-Ir complex, 1.21 for the GO-Ir complex after one run, and 1.21 for that after the 25th run (Fig. 14). Furthermore, there was also no obvious change in the full width at half maximum (FWHM) of the D band for the GO-Ir complex; the FWHM of the original GO-Ir complex is 55.2 cm−1. For the GO-Ir complex after the 1st and 25th runs, the FWHM values are 56.7 and 56.9 cm−1, respectively, and the average FWHM within the 25 catalytic cycles is 56 cm−1 (Fig. 15), which implies that the bonding of the GO-Ir complex structure is quite stable and that the GO-Ir complex is a robust catalyst. Materials. Iridium chloride (IrCl3, anhydrous) was obtained from Seedchem Co. Crystalline graphite was purchased from SHOWA Co. All other chemicals, including ethoxyethanol, were purchased from Acros and used as received. Aqueous solutions were prepared with double-distilled water from a Millipore system (>18 MΩ cm). Preparation of graphene oxide. Graphene oxide (GO) was synthesized from graphite powder following a modified Hummers' method. 73,74 The synthesized GO was dispersed in DI water (0.5 mg ml−1) under ultrasonic treatment for 40 min, and then the solution was heated in an oil bath (90 °C) for 60 min. The obtained GO was filtered through a nylon microporous membrane (0.22 μm) and dispersed in DI water. Preparation of the graphene oxide-iridium complex. A solution prepared by dissolving 0.3 g of iridium chloride in 50 ml of mixed solvent (ethoxyethanol : water = 3 : 1, v/v) was added to 100 ml of GO solution (3 mg ml−1). The mixture was stirred at room temperature for 0.5 h and ultrasonicated for 0.5 h, and then refluxed for 96 hours under argon. The obtained graphene oxide-iridium complex dispersion was purified by filtration and washing with DI water and ethanol and then redispersed in ethanol. Characterizations. NMR spectra were measured on a Bruker AVIIIHD-600 MHz or a Mercury 300 MHz NMR spectrometer. UV-vis spectra were obtained using a Hitachi U-3900 spectrophotometer. The infrared spectra were recorded on an Agilent Technologies Model Cary 630 FTIR instrument.
Mass spectra were taken with a Finnigan/Thermo Quest MAT 95XL instrument with electron impact ionization for organic compounds or fast atom bombardment for metal complexes. Transmission electron microscopy (TEM) images were acquired on a JEOL JEM-ARM200FTH microscope operated at 80 kV with a cold field emission gun (CFEG), a spherical-aberration corrector, and a high-angle annular dark-field detector. For the TEM resolution: the point image resolution is 0.19 nm, the lattice image resolution is 0.10 nm, the information limit is 0.10 nm, the bright-field lattice image resolution is 0.136 nm, and the dark-field lattice image resolution is 0.08 nm. Energy-dispersive X-ray spectroscopy (EDS) was also performed on the TEM. Raman spectra (RAMaker Raman spectrometer) with an excitation laser of 532 nm were also used to characterize the samples. (Fig. 12: the selectivity of each catalytic cycle; all cycles show high selectivity for the transformation of benzyl alcohol into benzaldehyde.) X-ray absorption near-edge spectroscopy (XANES) was measured on equipment at the National Synchrotron Radiation Research Center (NSRRC, Taiwan). A Si(111) double-crystal monochromator was employed for energy scanning. Fluorescence data were obtained at room temperature using an Ar-filled ionization chamber detector, where each sample was scanned 3 times for averaging. Catalytic activity of catalysts. The reaction temperature was well controlled in a water bath at a constant temperature (±1 °C). For the catalytic reaction, 0.05-1 g of benzylic alcohol, 0.0005-0.01 g of GO-Ir complex, and 3 ml of toluene were mixed in a reaction flask irradiated with ultrasound, and the reaction progress was monitored by high-performance liquid chromatography (HPLC) and GC-MS to identify the product composition. Catalyst reuse studies. To check the catalytic activity of the reused GO-Ir complex, 1 g of aromatic alcohol, 0.0005 g of GO-Ir complex, and 3 ml of toluene were mixed in a reaction flask irradiated with ultrasound. After 2 h of reaction time, the reaction mixture was centrifuged to separate out the catalyst, and the clear supernatant was analyzed by HPLC and GC-MS to identify the product composition. Then, 1 g of aromatic alcohol and 3 ml of toluene were added to a flask containing the recovered GO-Ir complex for the next catalytic cycle. The GO-Ir complex was recovered and reused 25 times without any evident loss of catalytic activity. Conclusions. A robust catalyst constructed from graphene oxide and iridium chloride absorbs ultrasonic energy to create active sites on the graphene oxide surface, which can accept substrates and selectively transform benzylic alcohols into carbonyl compounds. The transformation yield and selectivity can reach 99% and 100%, respectively. The productivity of carbonyl compounds is 233 880 mol benzaldehyde per kg of GO-Ir complex within 25 cycles, during which the structure of the complex was stable and the activity and selectivity remained consistent. As the catalysis was performed at low reaction temperature with ultrasonication, over-oxidation was markedly suppressed and very little waste was produced, demonstrating a green and eco-friendly process for alcohol oxidation. Conflicts of interest. There are no conflicts to declare.
2020-01-30T09:07:07.900Z
2020-01-24T00:00:00.000
{ "year": 2020, "sha1": "1d4a75496ff308fa02ab70f4a94f6b837eb2d79a", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/c9ra10294a", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a32ed030278cf78e15712c9c527ebfa7deff38c1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
252907664
pes2o/s2orc
v3-fos-license
Toward Secure and Private Over-the-Air Federated Learning In this paper, a novel secure and private over-the-air federated learning (SP-OTA-FL) framework is studied where noise is employed to protect data privacy and system security. Specifically, the privacy leakage of user data and the security level of the system are measured by differential privacy (DP) and mean square error security (MSE-security), respectively. To mitigate the impact of noise on learning accuracy, we propose a channel-weighted post-processing (CWPP) mechanism, which assigns a smaller weight to the gradient of the device with poor channel conditions. Furthermore, employing CWPP can avoid the issue that the signal-to-noise ratio (SNR) of the overall system is limited by the device with the worst channel condition in aligned over-the-air federated learning (OTA-FL). We theoretically analyze the effect of noise on privacy and security protection and also illustrate the adverse impact of noise on learning performance by conducting convergence analysis. Based on these analytical results, we propose device scheduling policies considering privacy and security protection in different cases of channel noise. In particular, we formulate an integer nonlinear fractional programming problem aiming to minimize the negative impact of noise on the learning process. We obtain the closed-form solution to the optimization problem when the model is with high dimension. For the general case, we propose a secure and private algorithm (SPA) based on the branch-and-bound (BnB) method, which can obtain an optimal solution with low complexity. The effectiveness of the proposed CWPP mechanism and the policies for device selection are validated through simulations. I. INTRODUCTION The vast amount of valuable data generated at the edge of wireless networks has enabled various artificial intelligence (AI) services for end-users by exploiting deep learning [1]. In most applications, such as the internet of things (IoT), unmanned aerial vehicles (UAVs), or extended reality (XR), data from sensors normally needs to be constantly collected and processed. There are numerous machine learning (ML) algorithms that have been developed to leverage these large-scale datasets. Most of the conventional ML algorithms are centralized, which aggregates all the raw data to a powerful central server where models are trained [2]. However, such centralized ML approaches may become increasingly undesirable as privacy concerns and the size of the dataset increase. Specifically, uploading large amounts of raw data from the edge of the network to the centre is generally not feasible due to latency, bandwidth or power constraints and, most importantly, it may directly expose personal information. To overcome these challenges, federated learning (FL) [3] has been proposed as a privacy-enhancing distributed ML technique, which enables devices to train a model in a decentralized manner with the help of a central controller, such as a base station (BS). Specifically, edge devices firstly download the latest global model parameter from the BS and then compute gradients or update model parameters locally based on local datasets. Then, the gradients or updated model parameters are sent to the BS for the global model update. By training models locally, FL not only makes full use of the computing power of the edge devices, but also effectively reduces the power consumption, latency, and privacy exposure due to the transmission of raw data. 
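A single training round of the procedure just described (local computation on private data followed by aggregation at the central server) can be sketched as follows. This is a generic NumPy illustration under toy assumptions; the synthetic device data, the squared-error loss, and the learning rate are hypothetical placeholders and not the system model defined later in the paper.

```python
import numpy as np

# Hypothetical toy setup: 5 edge devices, each holding local (x, y) pairs for a linear model.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
global_model = np.zeros(3)

def local_gradient(model, x, y):
    """Squared-error gradient computed on a device's local data; raw data never leaves the device."""
    return 2.0 * x.T @ (x @ model - y) / len(y)

# One federated round: devices compute gradients locally, the server averages them
# and updates the global model, which would then be broadcast back to the devices.
learning_rate = 0.05
local_grads = [local_gradient(global_model, x, y) for x, y in devices]
global_model -= learning_rate * np.mean(local_grads, axis=0)
```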
In practice, FL as a promising technique has been widely applied to several systems, such as model training in wireless networks [4] and content recommendations for smartphones [5]. Inspired by the high communication efficiency of over-the-air computation (AirComp) [6]- [9], over-the-air FL (OTA-FL) has been proposed and attracted a great deal of attention. OTA-FL schedules devices concurrently uploading their local models or gradients through a wireless multiple-access channel (MAC). The gradients in prevailing OTA-FL are normally transmitted via an analog transmission. In this way, the BS receives an aggregated gradient or model directly thanks to the waveform-superposition property. The number of dimensions used for transmitting the gradients or models is independent of the number of devices, which makes it highly energy and bandwidth efficient compared with the traditional communication-and-computation separation method, especially when the number of devices is large [6], [10]. Although FL offers basic privacy protection, which benefits from the fact that all raw data is processed locally, it is far from sufficient. On the one hand, some studies revealed that various attacks [11]- [13] can infer the individual data or recover part of training data by attacking the exchanged messages, i.e., the gradients and models [14]. This is because the updated model or gradient is obtained based on local data and therefore may contain some information from raw data [15]. Specifically, if the BS is "honest but curious", it may silently infer the private information from the intermediate gradients or the trained models. One of the countermeasures to prevent such privacy leakage of FL is differential privacy (DP) [16] which randomizes the disclosed statistics. Some studies focused on differentially private OTA-FL. In [17], artificial Gaussian noise was added to each gradient before transmitting if channel noise cannot provide sufficient privacy protection, and a static power allocation scheme was proposed to determine the scale of the artificial noise. Instead of introducing artificial noise, the work of [18] proposed a more energy-efficient strategy to guarantee DP by adjusting the transmit power. The authors of [19] investigated differentially private FL in both orthogonal multiple access (OMA) and nonorthogonal multiple access (NOMA) channels and proposed adaptive power allocation schemes. Nevertheless, similar to the most prevailing OTA-FL studies, all the above works considered an aligned aggregation by controlling transmit power. Although gradient alignment can ensure an unbiased gradient estimation at the BS, the signal-to-noise ratio (SNR) of the system will be limited to a pretty low level if any device suffers from a poor channel condition. On the other hand, the broadcast nature of wireless channels makes FL vulnerable to security attacks. Although security and privacy are used interchangeably in the existing literature, it is important to highlight the distinction between them. In particular, privacy concerns normally refer to the disclosure of personal information from open access data, while security concerns refer to unauthorised access or alteration of data [20]. Eavesdropping on the FL communication channels is often the first step for malicious third parties to launch security attacks. The main difference between eavesdropping attack and privacy issue at the BS is that the BS is authorised to receive the gradients and updated models while the eavesdropper is prohibited. 
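The over-the-air aggregation and the noise-based privacy mechanisms surveyed above can be illustrated with the toy sketch below. It only mimics the waveform-superposition effect numerically; the noise level and the equal-gain channels are hypothetical simplifications and do not correspond to the power-control or post-processing schemes proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, num_devices = 10, 4
local_grads = [rng.normal(size=d) for _ in range(num_devices)]

# Analog over-the-air aggregation: all devices transmit simultaneously over the MAC,
# so the BS observes the superposed (summed) signal plus additive Gaussian noise.
noise_std = 0.1  # hypothetical channel/artificial noise level
received = np.sum(local_grads, axis=0) + rng.normal(scale=noise_std, size=d)

# The BS recovers a noisy average of the gradients; the same noise that perturbs the
# estimate is what provides differential-privacy-style protection of individual updates.
avg_gradient_estimate = received / num_devices
print(avg_gradient_estimate)
```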
To enhance the security of FL communications, the authors of [21], [22] adopted a covert communication (CC) technique with which a friendly jammer transmits jamming signals to prevent an eavesdropper from detecting the update transmission of the local model from mobile devices in FL. The work of [23] utilized power control to improve the security of FL in the internet of drones (IoD) networks where security rate was employed to measure the security of wireless communications. However, all of these existing works on FL security investigated how digitally coded communication rounds can be protected against eavesdropping. To the best of our knowledge, the security of analog OTA-FL over a wiretap channel has not yet been considered. A. Contributions Inspired by these research gaps, in this paper, we propose a secure and private OTA-FL (SP-OTA-FL) framework where we try to utilize noise to protect privacy and security which are measured by DP and mean square error security (MSE-security) [24], respectively. To mitigate the impact of noise on learning accuracy, we propose a channel-weighted post-processing (CWPP) mechanism, which does not require gradient alignment. We also conduct the privacy, security and convergence analysis to illustrate the impact of noise on privacy, security protection and learning performance. In addition, we propose different device selection policies for different cases of channel noise. Particularly, we formulate an integer nonlinear fractional optimization problem to minimize the optimality gap while guaranteeing privacy and security in the case of insufficient channel noise. The closed-form solution to the optimization problem is obtained when the model is with high dimension. Based on the branch-and-bound (BnB) algorithm, a secure and private scheduling algorithm (SPA) with low complexity is proposed for the general case. The proposed post-processing mechanism and the device selection policies are validated through simulation. Our main contributions are summarized as follows. • We firstly propose a novel SP-OTA-FL framework where noise is employed to protect privacy and security. DP and MSE-security are used to measure the privacy leakage and the security level of the OTA-FL system, respectively. In the proposed framework, a part of the devices are selected as uploaders to participate in the training process and some other devices are selected as jammers to send Gaussian artificial noise aiming at deteriorating the eavesdropper's SNR so as to guarantee the security of the system and privacy of user data at the BS. To the best of our knowledge, this is the first work to consider privacy and security issues together in an FL system. More importantly, this is the first to focus on the security of analog OTA-FL, where gradients are transmitted in an uncoded way and therefore, are more vulnerable to security threats. • We propose a CWPP scheme to alleviate the negative impact of noise on the utility of aggregated gradient, which could improve learning performance. By employing the CWPP mechanism, the gradients are no longer forced to be aligned during the transmission, and therefore, can avoid the issue that the SNR is limited by the device with the worst channel condition. • The privacy, security and convergence analysis are conducted. The results theoretically reveal the benefits of noise on privacy and security protection as well as the negative impact on learning performance. 
The convergence analysis shows that the proposed SP-OTA-FL can converge with the rate of O 1 T . In particular, if the data is independently and identically distributed (IID), the optimality gap tends to zero as the number of data samples and training rounds grows. • We analyze the decision of device selection in three cases where: (1) channel noise is sufficient for protecting privacy and security with all device participation; (2) channel noise is sufficient for protecting privacy and security with partial device participation; (3) channel noise is insufficient for protecting privacy and security with any device participation. We propose two policies to schedule devices to ensure user privacy and system security when channel noise cannot guarantee security and privacy. • We formulate an integer nonlinear fractional optimization problem in the case of insufficient channel noise. For the special case where the model is with high dimension, the closed-form solution is obtained and useful insights are drawn. A BnB-based algorithm is proposed to solve this problem in the general case, which could achieve the optimal solution with low computational complexity. • Finally, simulations are conducted to verify the effectiveness of the proposed algorithms and their superiority over conventional schemes. B. Organization The remainder of this paper is organized as follows. In Section II, we present the system model and introduce the procedure of FL, the definitions of DP and MSE-security. The details of SP-OTA-FL and CWPP mechanism are introduced in Section III, where we also conduct the privacy, security and convergence analysis of the proposed SP-OTA-FL. We propose device selection policies in Section IV. The simulation results are shown in Section V and we conclude the paper in Section VI. II. SYSTEM MODEL AND PRELIMINARIES In this section, the considered system model is presented. We introduce the basic concepts and procedure of FL in Section II-A. The definitions of DP and MSE-security used to measure the privacy leakage and security level of the system are introduced in Section II-B and Section II-C, respectively. Typical notations used in this paper are summarized in Table I. We consider an SP-OTA-FL system where N edge devices, indexed by the set N = {1, 2, ..., N }, collaboratively train a deep neural network (DNN) model with the help of a BS. The devices and BS communicate through a shared MAC where all devices transmit their gradients simultaneously using the same channel. As a result of the waveform-superposition property of MAC, the gradients are aggregated over the air. Specifically, the BS is assumed to be "honest but curious" that may attempt to learn the personal information from the received gradients, which is regarded as a privacy threat. Additionally, the security threat is that there is an eavesdropper (Eve) in the system that tries to wiretap the gradients. To prevent the privacy leakage of user data and the security attack, in the considered system, we employ channel noise and artificial noise as protection in different cases. More specifically, three scenarios are considered as shown in Fig. 1. In the case that channel noise is sufficient for privacy and security protection, all the devices are selected to participate in the training process as shown in Fig. 1 (a). Fig. 1 (b) and Fig. 1 (c) are proposed in the case of insufficient channel noise. In Fig. 
1 (b), we only select devices that the privacy and security can be guaranteed by channel noise as uploaders and others, referred to as offline workers, will be absent in the training. In the third case, part of the devices are selected as uploaders and others are selected as jammers to send Gaussian artificial noise for protecting privacy and security, as shown in Fig. 1 (c). The details of SP-OTA-FL and the three cases are described in Section III and Section IV, respectively. A. Federated Learning In the considered FL network, each device of index n ∈ N is assumed to have a local dataset D n which contains D n pairs of training samples (u, v) where u is the raw data and v is the corresponding label. For simplicity, we assume that D 1 = · · · = D N . Typically, the purpose of the FL task is to obtain the model parameter that can minimize the loss function. Mathematically, the goal of the learning can be expressed as follows: where m ∈ R d is the model parameter to be optimized. More specifically, the objective function of device n is defined as follows: where l (m; (u, v)) is an empirical loss function defined by learning task, quantifying the loss of m at sample (u, v). Some typical loss functions l (m; (u, v)) applied in ML are listed in Table II. To solve the problem in (1), an iterative approach referred to as gradient descent (GD) can be applied. However, it may be impractical to perform GD over the whole local dataset because of the considerably massive data samples in reality. Alternatively, stochastic gradient descent (SGD) as one of the practical solutions is more widely used in FL as it computes the gradient over a batch of data samples, randomly chosen from the local dataset, as an approximation of the full gradient (obtained based on the whole dataset). The main procedure of basic SGD applied in FL is given as follows: Linear regression Cross-entropy on cascaded linear and non-linear transform, see [1] for details. (2) Each device randomly selects a batch of data B n of size B n from D n and computes the stochastic gradient based on B n . More specifically, the stochastic gradient is given by By contrast, the full gradient in GD is given by • Step 3: Gradients aggregation: (1) Devices send the obtained gradients to the BS. (2) Upon receiving all the gradients from the participants, the BS makes aggregation of the received gradients as follows: where w t n is the weight of gradient from device n in round t and satisfies n∈K t w t n = 1. In most of existing studies, w t n = Bn B where B = n∈K t B n . • Step 4: Model update: The BS performs global model update as follows: where τ t is the learning rate (also termed as step size in SGD). The above iteration steps are repeated until a certain training termination condition is met. In order to formally quantify the privacy leakage and the security level of the system, we introduce the DP and MES-security concepts in the following. B. Differential Privacy DP [16] is defined on the conception of the adjacent dataset, which guarantees the probability that any two adjacent datesets output the same result is less than a constant with the help of adding random noise. More specifically, DP quantifies information leakage in FL by measuring the sensitivity of the disclosed statistics (i.e., the gradients) to the change of a single data point in the input dataset. The basic definition of ( , ζ)-DP is given as follows. Definition 1. 
( , ζ)-DP [16]: A randomized mechanism O guarantees ( , ζ)-DP if for two adjacent datasets D, D differing in one sample, and measurable output space Q of O, it satisfies, The additive term ζ allows for breaching -DP with the probability ζ while denotes the protection level and a smaller means a higher privacy preservation level. Specifically, the Gaussian DP mechanism which guarantees privacy by adding artificial Gaussian noise is introduced as follows. Definition 2. Gaussian mechanism [16]: A mechanism O is called as a Gaussian mechanism, which alters the output of another algorithm L : D → Q by adding Gaussian noise, i.e., is the sensitivity of the algorithm L quantifying the sensitivity of the algorithm L to the change of a single data point. According to the Gaussian mechanism described above, privacy leakage depends both on the sensitivity of the algorithm L and on the power of the added Gaussian noise. C. MSE Security In this work, the gradients are transmitted in an analog way and aggregated via AirComp. In accordance with [24], the way to improve the security of an analog AirComp setting, is to employ noise as jamming to degrade the eavesdropper's SNR and thus prevent it from recovering a low-noise estimate of the transmitted message. MSE-security has been proposed in [24] to measure the security of analog messages and is introduced as follows. and bounded output space, guarantees (E, φ)-MSE-security if under a uniform distribution of E g t n n∈K t , for any Eve's estimator e : Z → Y, there is a real number φ 0 satisfies, In statistical terms, a scheme guarantees (E, φ)-MSE-security means that all estimators that the eavesdropper can apply have MSE at least φ. III. SP-OTA-FL FRAMEWORK In the proposed framework, we employ Gaussian noise to improve the security and privacy inspired by DP and the AirComp security [24]. Given the condition that channel noise may be insufficient for all the devices to participate in training with security and privacy protection, we assume that the BS selects some of the devices as uploaders in each training round t, denoted by K t ⊆ N , to participate in the training, and selects some devices as jammers in some cases, denoted by J t ⊆ N \ K t , to send Gaussian artificial noise to enhance system security and protect privacy. Particularly, J t is empty in the cases that there is no device selected as jammers. Assume that the upper bound of g t n 2 is G. Given the selected uploader set K t and the jammer set J t , we next present the details of SP-OTA-FL. The signal from device n is given by where P n is the maximum transmission power of device n and e t n ∼ N (0, I d ) is the artificial noise sent from jammer n in round t. The h t n,B ∈ R + , h t n,E ∈ R + are the channel gain coefficients between device n and the BS and that between device n and the eavesdropper, respectively. We assume real channel gain coefficients for simplicity [25]. The coefficients are independent across devices and training rounds but remain constant within one round. Consequently, the received signals at the BS and eavesdropper in round t are given as follows: are the received noise at the BS and eavesdropper, respectively. For ease of presentation, we use to denote the aggregated noise at the BS and eavesdropper, respectively. A. CWPP Mechanism Similar to [26], we propose a channel-weighted aggregation scheme to alleviate the negative impact of noise on the utility of aggregated gradient. 
By employing channel-weighted aggregation, the gradient does not need to be aligned during the transmission and therefore, can avoid the issue that the SNR of the system will be limited by the device with the worst channel quality. In order to recover an estimate of averaging gradient from the aggregated gradients, the BS performs post- where H t = n∈K t h t n,B √ P n . More specifically, an insight into the CWPP scheme can be given bỹ from which one can find that, by performing the CWPP scheme, the gradient from the device with poor channel quality is assigned a smaller weight in the aggregation, thereby, mitigating the negative impact of noise on the learning process. Additionally, different from the conventional gradient-aligned OTA-FL [17], the power allocation in the CWPP scheme does not force gradient alignment. As a result, the overall SNR of the system will not be limited by the device with the worst channel condition. B. Privacy, Security and Convergence Analysis 1) Assumptions: For analysis purposes, we provide the following assumptions first. Assumption 1. The assumptions on gradients: (1) Assume that the stochastic gradient is an unbiased estimate of the full gradient. (2) The variance of stochastic gradients at each device is bounded: (3) The expected squared norm of stochastic gradients is bounded: G. Proof: : Please refer to Appendix A. 2) Privacy analysis: We here present the privacy analysis for SP-OTA-FL. Following (13), the variance of the aggregated noise at the BS is given by Lemma 2. Assume that Assumption 1 holds. SP-OTA-FL guarantees t n , ζ -DP of uploader n in round t when the following condition is satisfied, where ∆S t n = 2h t n,B √ P n . Proof: : Please refer to Appendix B. Lemma 2 reveals an important insight that devices with better channel quality are more prone to privacy disclosure. Therefore, for reducing the privacy leakage in the system, one can either increase the power of noise or select devices with smaller channel condition coefficient h t n,B √ P n to participate in training. Remark 1. Note that when the "=" in ∆S t n = 2h t n,B √ P n is replaced by " ", it indicates a stronger privacy protection so it still satisfies t n , ζ -DP. 3) Security analysis: We here present the security analysis for SP-OTA-FL. Assume that the goal of eavesdropper is to recover an averaging estimate of the gradients, denoted by g t ave = 1 |K t | n∈K t g t n , which can be used for global model update and further exploring the sensitive information of each device. We define the aggregation mechanism in (12) as E t : g t n n∈K t → z t ∈ Z, then we have the following result. Lemma 3. Assume that the elements of g t n are distributed uniformly in [a, b]. The aggregation mechanism -MSE-security in training round t. Specifically, where Λ t = max n∈K t h t n,B √ P n , and where ϕ N (·) and Φ N (·) denote the probability density function and the cumulative distribution function of the standard normal distribution. Proof: Please refer to Appendix C. -MSE-security means that the gradient estimates recovered in [24]. Therefore, a bigger γ t E means higher system security and we use γ t E to indicate the security level of the system which is referred to as the security coefficient. Similar to the privacy analysis, (24) proves that one way to secure the FL process is to increase the aggregated noise at Eve or to select devices with relatively poor channel conditions to participate in the training to make a smaller Λ t . 
to denote the noise-free aggregated stochastic gradient and full gradient, respectively. Then, it thus follows (6), (13), (15) and (26) that the update of the global model performed by the BS can be given by where r t B,T ot = n∈J t p t n,B √ d e t n + r t B . Then, we have the following results. Lemma 4. Assume that Assumption 1 holds. The noise-free aggregated stochastic gradient is an unbiased estimate of the noise-free aggregated full gradient, i.e., The variance of the noise-free aggregated stochastic gradient is bounded: Proof: Please refer to Appendix D. Lemma 5. [26] Assume that Assumption 1 holds and m * = [m * 1 , · · · , m * d ], m * n = m * n,1 , · · · , m * n,d are the globally optimal model and the locally optimal model of device n, respectively. Then, for each device n, the upper bound of the gap between L n (m * ) and L n (m * n ) is given by where Γ = max Furthermore, if the data is IID, Γ goes to zero as the number of samples approaches infinity [27]. Theorem 1. Assume that Assumption 1 to Assumption 3 hold and let 1 τ t 1 θ with been a constant. The bound of the gap between model m t+1 and the optimal model m * is given by which characterizes the impact of the device schedule in training round t. The expectation is with respect to the stochastic gradient function and the randomness of Gaussian noise. Proof: Please refer to Appendix E. In (33), the impact of noise and the device selection on learning performance has been theoretically illustrated. According to Lemma 2 and Lemma 3, noise and low power of gradient transmission contribute to the security and privacy protection, however, it has a negative impact on the learning process according to Theorem 1. The scale of the noise and the power of the uploaded gradients depend on the scheduling of the devices. Therefore, an appropriate device selection decision is significant for SP-OTA-FL. Corollary 1. Assume that Assumption 1 to Assumption 3 hold. Let τ t = 2 ρt+2θ and = ρT +2θ 2 . When the training process terminates after T rounds and m T is returned as the final solution, the bound of the optimality gap can be given by Proof: Please refer to Appendix F. From (34), one can find that the first term on the right hand side decreases with T , and will go to zero when T approaches infinity, which implies that the proposed SP-OTA-FL can converge with the rate of O 1 T . In particular, if the data is IID, then the optimality gap goes to zero as the number of samples and the number of training rounds T grow. IV. DEVICE SELECTION FOR SP-OTA-FL In this section, we propose device scheduling strategies based on the above analytical results. For simplicity, we consider that all the devices have the same privacy constraint in all the training rounds and the Υ is the coefficient in terms of security requirement. We take one round as an example to analyze the device selection process and therefore omit the index t of the training round in the rest of the paper. We also define κ = 2 ln 1. considered for guaranteeing privacy and security. We replaced the K t in (24) with N for simplicity, which offers a stronger security guarantee. Then, we propose the following policies in different cases of channel noise. 1) Channel noise is sufficient for protecting privacy and security with all device participation: In the case of p M p, the received noise at the BS and Eve is sufficient for the device with the best channel condition to participate in training while satisfying the privacy and security constraints. 
Therefore, all the devices can be selected as uploaders to participate in training in this round as shown in Fig. 1 (a) in Section II. 2) Channel noise is sufficient for protecting privacy and security with partial device participation: In the case of p m p p M , if no device is selected as a jammer that sends Gaussian artificial noise to increase the power of the aggregated noise, the received noise at the BS and Eve can only provide qualified privacy and security protection when the devices satisfying p n p are selected to participate in the training process. In such cases, there are two approaches to device scheduling. • Policy-1: Select those devices with p n p as uploaders to participate in the current training round, and other devices will be absent in this training round, as shown in Fig. 1 (b) in Section II. • Policy-2: Select some devices as uploaders and others as jammers, as shown in Fig. 1 (c) in Section II, via solving optimization problems which aim at minimizing the optimality gap in each round with the guarantee of user privacy and security. The details of the optimization problem are presented in the next section. 3) Channel noise is insufficient for protecting privacy and security with any device participation: In the case ofp p m , channel noise at the BS and Eve cannot guarantee qualified privacy and security for any device as an uploader if no jammer is selected. Therefore, some devices need to be selected as jammers to send jamming signals to degrade the SNR of the eavesdropper. Consequently, Policy-1 will no longer be applicable, and the only solution is Policy-2. Then, how to choose devices that can ensure privacy and security while having a minimal negative impact on learning performance is a tradeoff problem. Therefore, we formulate an optimization problem aiming to minimize the adverse impact on the optimality gap with the consideration of privacy and security constraints in the following subsection. A. Optimization Problem of Policy-2 In this problem we consider that a device is either selected as an uploader or as a jammer. We introduce vector a = [a 1 , ...a N ] to denote the role of devices in each round. Specifically, a n = 1 indicates that device n is selected as an uploader, otherwise, device n plays the role of a jammer. Then, the optimization problem can be formulated as follows: (1 − a n ) p 2 The objective of this problem is to minimize the impact of noise on the optimality gap as shown in (33). Constraint (35b) ensures privacy protection and constraint (35c) indicates the requirement of security. Note that Problem P1 is a discrete nonlinear programming problem. By using the exhaustive search method (ESM), we can obtain an optimal solution to Problem P1. However, this method has exponential complexity. Thus, the computation cost in optimally solving Problem P1 is prohibitive when N is large. To understand the property of the problem, we first consider a special but useful case with high-dimension learning models. B. Closed-form Solution for High-dimensional Models In practical scenarios, the learning model is normally with high dimensions to guarantee the learning performance. In this case, we are able to simplify the optimization problem and therefore propose closed-form optimal solutions, which could provide useful insights for practical FL systems. Assuming that d → ∞, Problem P1 can be recast as, P2. max a N n=1 a n p n,B (36) s.t. 
a n ∈ {0, 1} , ∀n ∈ N , (36a) where Proof: Firstly, it can be found that larger number of variables that are equal to one yield a larger objective value. Therefore, we need to identify at most how many variables a n can be set to one and what indexes n are they. From constraint (36b), we know that some variable a n along with large p n,B cannot be one since it would violate the constraint. Assuming that p i,B is the largest one in p B which satisfies (36b), it is only feasible to let a i , ..., a N equal to one. By analyzing (36c), we can find that there are only N −i+1 solutions which may achieve the best performance. Specifically, the x-th solution corresponds to the setting that a 1 = ... = a i+x−2 = 0 and a i+x−1 = 1. In this case, from (36c), we have max{a n p n,B } = p i+x−1,B and then the maximal number of variable a n which could be equal to one is . Then, we need to decide which K x variables are equal to one. Based on the objective function (36), clearly, the optimal allocation is to Based on Lemma 6, we can perform the one-dimension search method to obtain the optimal solution. The optimal solution for Problem P2 is a y where The solution in (37) proves that only a part of the variables in the middle can be set as one, which means that some devices with best and worst channel conditions cannot be selected as uploaders. This validates the trade-off of using noise between achieving privacy and security and guaranteeing learning performance. This is because the devices with the best channel conditions could result in a high risk of privacy leakage and security issue, and therefore are usually not selected as uploaders. Meanwhile, to benefit learning performance, the devices with the worst channel conditions are not usually selected as the uploader either. Inspired by the insight of the closed-form solution, we propose a heuristic algorithm based on BnB, referred to as SPA, to solve Problem P1, which can achieve the solution as the same as ESM with lower computational complexity. C. BnB-based SPA for Problem P1 In the proposed algorithm, we utilize the idea of BnB to quickly cut down the branch of infeasible solutions by checking the constraints. Assume that the elements in p B = [p 1,B , ..., p n,B , ..., p N,B ] are sorted in the ascending order. It is clear that when N n=1 a n p n,B is small, the value of objective function is large. By contrast, it can be observed from constraints (35b) and (35c) that the fewer the number of a n = 1, n ∈ N and the smaller the p n,B are, the easier the constraints can be satisfied. More specifically, if a = [1, 0, ..., 0, ..., 0] cannot satisfy constraints (35b) and (35c), any other solutions a = [1, a 2 , ..., a n , ..., a N ] cannot meet constraints (35b) and (35c) either. In this context, all the solutions with a 1 = 1 are infeasible and should be discarded. Following this idea, we can delete half of the solution space of the subproblem in each branch-and-bound round, and therefore, we can keep narrowing the search space effectively. Given the property of the objective function, we try to get the solution with more variables equal to 1 while satisfying the constraints. Besides, to introduce more diversity to the solutions, we will branch and bound starting from the different indexes of the nodes, i.e., from a n , ∀n. Specifically, one round of the detailed branch-and-bound process from a n is described as follows: • Branching: Select the current node a n that has not been not branched yet. 
We branch it into two nodes: one sets the device as an uploader, and the other sets it as a jammer. • Pruning: If a = [0, ..., 0, a_n = 1, 0, ..., 0] satisfies constraints (35b) and (35c), the node is selected as an uploader, since this selection would definitely lead to a better objective value than selecting the node as a jammer; in other words, the branch with a_n = 0 is cut off. Otherwise, the node is selected as a jammer and the branch with a_n = 1 is cut off. V. SIMULATION RESULTS In this section, we evaluate the performance of the proposed SPA algorithm, the scheduling policies and the CWPP mechanism. In the case that channel noise is sufficient for the private and secure participation of the full device set, the CWPP mechanism is evaluated separately at the end of this section. (From the SPA listing: compute the objective value Ψ_a^(iter); if Ψ_a^(iter) improves on Ψ*, update Ψ* = Ψ_a^(iter) and a* = a^(iter).) A. Simulation Setting We assume that the wireless channels from the edge devices to the BS and to Eve follow a Rayleigh distribution across communication rounds. We evaluate the proposed scheme by training a convolutional neural network (CNN) on the popular MNIST [28] dataset for handwritten digit classification. The MNIST dataset consists of 60,000 training images and 10,000 test images of the 10 digits. We make the common assumption that each device holds an equal number of training samples and that the local training sets do not overlap [7], [29]. We assume that the local datasets are IID: the initial training dataset is randomly divided into N batches and each device is assigned one batch. In particular, the CNN consists of two 5×5 convolution layers with rectified linear unit (ReLU) activation, having 10 and 20 channels respectively and each followed by 2×2 max pooling, a fully-connected layer with 50 units and ReLU activation, and a log-softmax output layer, in which case d = 21840; a sketch of this architecture is given below. The learning rate is set to η = 0.1. B. Evaluation of SPA for Policy-2 In this subsection, we evaluate the performance of the proposed BnB-based SPA algorithm in solving the optimization problem of Policy-2 by comparing it with a genetic algorithm (GA), ESM, and a random solution. For the GA we use the Python toolbox geatpy. The transmit power budgets of the devices are assumed to be identical and are set to P_k = 5 W. The powers of the additive Gaussian noise at the BS and at Eve are both set to σ_B = σ_E = 1. The privacy and security coefficients are ε = 12 and Υ = 1.5, respectively. In Fig. 2 (a), the result validates that SPA and GA achieve the same learning performance as ESM, which means that SPA and GA obtain the same optimal solution as ESM. When N becomes larger, as shown in Fig. 2 (b), the proposed SPA still achieves the same performance as GA, which demonstrates that the proposed SPA is effective in reaching the optimal solution even for large N. We evaluate the performance of Policy-1 and Policy-2 in Fig. 4 and study the impact of the number of devices and of the power budget on performance, with the security and privacy coefficients set to Υ = 0.5 and ε = 20, respectively. The power budget of each device is set to P = 5 W. In Fig. 4, we observe that as the number of devices increases, Policy-1 gradually outperforms Policy-2. It is well known that more participants and less noise distortion make the model more accurate. The drawback of Policy-1 is that it selects fewer devices to participate in training than Policy-2.
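The CNN referenced in the simulation setting above can be written out explicitly. The PyTorch-style sketch below follows the stated layer description (two 5×5 convolutions with 10 and 20 channels, each followed by 2×2 max pooling, a 50-unit fully-connected layer and a log-softmax output) and reproduces the quoted model dimension d = 21840 as its parameter count; the class and variable names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MnistCNN(nn.Module):
    """Two 5x5 conv layers (10 and 20 channels) with 2x2 max pooling,
    a 50-unit fully-connected layer and a log-softmax output."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = nn.Linear(20 * 4 * 4, 50)   # 28x28 -> 24 -> 12 -> 8 -> 4 after conv/pool
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

model = MnistCNN()
d = sum(p.numel() for p in model.parameters())
print(d)   # 21840, matching the model dimension quoted in the text
```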
Due to the introduction of artificial noise, Policy-2 results in more noise distortion in the training process, which makes the model less accurate. As the total number of devices becomes larger, more devices are qualified to be selected as participants in Policy-1 where the noise only contains channel noise. By contrast, although there are more devices selected to participate in training in Policy-2, the power of noise is also bigger than that in Policy-1. Therefore, Policy-1 performs better than Policy-2 in the case when the number of devices is bigger. We evaluate the performance of the proposed CWPP mechanism by comparing it with the averaging OTA post-processing mechanism [17]. In the averaging post-processing mechanism, which has been referred to as aligned OTA-FL in our previous work [30], the gradients need to be aligned by an alignment coefficient, i.e., c = min n∈K {h n,B P n }. However, in reality, most edge devices are low-powered, which will result in a quite small alignment coefficient. Therefore, the SNR of the system will be degraded to a quite low level and the aggregated gradients will be less accurate. A more detailed analysis can be found in [30]. E. Evaluation of CWPP Mechanism in case of Sufficient Channel Noise For clarity, we compare the two post-processing mechanisms with Policy-1 device selection algorithm where the aggregated noise only contains channel noise and is independent of the channel quality of the selected uploaders. The security and privacy coefficients are set to Υ = 0.5 and = 20, respectively. The results in Fig. 5 validate that the proposed CWPP mechanism is superior to the averaging post-processing. First of all, the CWPP scheme avoids gradient alignment, and therefore the SNR of other devices will not be limited by the devices with poor channel conditions. Additionally, from Equation (16), we can learn that the BS assigns a bigger weight to the gradient from the device with better channel conditions, and thereby, mitigating the negative impact of noise on the learning process. VI. CONCLUSION To enhance the privacy of user data and the security of the FL system, we have proposed an SP-OTA-FL framework in this work. In the proposed FL system, noise is used to protect both user privacy and system security. Specifically, three cases have been considered. In particular, we have quanitified the privacy leakage of user data using DP, and measured the security of the analog gradient using MSE-security. To reduce the impact of noise on the aggregated gradient, we have proposed a CWPP mechanism, which assigns less weight to badly distorted gradients. Further, we have conducted the privacy, security and convergence analysis and theoretically characterized the impacts of noise on privacy and security protection as well as the optimality gap. To obtain an appropriate device selection decision, we have analyzed the proposed framework and proposed two policies for device selection when the channel noise is insufficient. In particular, we have formulated an integer fractional optimization problem, which can be solved with low complexity via a BnB-based SPA algorithm. The effectiveness of the proposed CWPP and the device selection policies has been validated through simulation. The proof of smoothness is given as follows: where (a) is from the definition of L (·) in (1) and (b) is from Assumption 2. The proof of convexity has a similar process and is therefore omitted here. 
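For completeness, the omitted chain of inequalities behind the smoothness claim can be sketched as follows, assuming that the global loss L(·) in (1) is an average of the local losses and that each local loss L_n is θ-smooth (Assumption 2); this is a reconstruction of the standard argument rather than the paper's exact derivation:

\[
\|\nabla L(m) - \nabla L(m')\|
\overset{(a)}{=} \Big\|\tfrac{1}{N}\textstyle\sum_{n=1}^{N}\big(\nabla L_n(m) - \nabla L_n(m')\big)\Big\|
\le \tfrac{1}{N}\textstyle\sum_{n=1}^{N}\big\|\nabla L_n(m) - \nabla L_n(m')\big\|
\overset{(b)}{\le} \theta\,\|m - m'\|.
\]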
APPENDIX B PROOF OF LEMMA 2 Here we use index k instead of n to avoid confusion between the specific index of device n and the notation n in the summation. Assume that B k and B k are two adjacent datasets differing in one sample. y t is the received signal at the BS, which only differs in one gradient with y t . The gradient g t k from uploader k in y t is obtained based on B k . Based on the definition of sensitivity and Assumption 1, one has where (a) is from Triangular Inequality and Assumption 1. In accordance with the Gaussian mechanism of DP and the above result, one completes the proof of Lemma 2. APPENDIX C PROOF OF LEMMA 3 Firstly, since the elements in g t n are uniformly distributed in [a, b], the g t ave follows the same distribution in [a, b]. For analysis, we defineẼ t : g t n n∈K t →z t ∈Z wherẽ Assume that the variance ofz t is σ. Following Lemma 3 and Lemma 4 in [24], the minimum MSE estimator e z t for estimating g t ave from the observationsz t satisfies: The lowest-variance unbiased estimator is: with the variance given by where Λ t = max n∈K t h t n,B √ P n . It thus follows from (42) and Definition 3 thatẼ t guarantees Ẽ t , γ t On the other hand, one has Similarly, we also have where (a) comes from E r t E,T ot = 0. Obviously, E e z t − g t ave 2 is smaller than E e z t − g t ave 2 , therefore, e z t is a closer estimate of g t ave . Then, e z t has a larger variance and can achieve at least Alternatively, from the communication point of view, one can also get thatz t could have a better recovery of gradient than z t because of a higher SNR as Λ t = max n∈K t h t n,B √ P n . Therefore, ifz t can guarantee at least -MSE-security, then so can z t . Then, we complete the proof of Lemma 3. where (a) is obtained by using Jensen's inequality and (b) is from (18). We therefore complete the proof of where (a) and (c) are obtained by applying Lemma 4. Step (b) is from the fact that E r t B,T ot = 0. Then, we obtain the upper bounds for each term in (49), separately. Firstly, we have the upper bound of term A in (49) as follows: where (a) is obtained by applying Jensen's inequality and we applied the property of θ-smooth function that where (a) is from Assumption 3. By combining (50) with (52), we obtain the upper bound of term A + B as follows: τ t 2 E ḡ t 2 2 − 2τ t m t − m * ,ḡ t where (a) comes from that 2τ t 1 − θτ t 0 and L n (m * n ) − L n m t n 0. Substituting (54) back into (53), we finally get the upper bound of term A + B in (49) as follows: where (a) is from 1 τ t . Then, the upper bound of term C in (49) is given by where (a) is from the fact that E e t n = E r t B = 0 and step (b) is obtained by applying Jensen's inequality. Then, we complete the proof of Theorem 1.
2022-10-17T01:16:29.167Z
2022-10-14T00:00:00.000
{ "year": 2022, "sha1": "35a02a2680811e2433ca103b0bde235f021810df", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "35a02a2680811e2433ca103b0bde235f021810df", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering" ] }
9024002
pes2o/s2orc
v3-fos-license
Feasibility and validity of accelerometer measurements to assess physical activity in toddlers Background Accelerometers are considered to be the most promising tool for measuring physical activity (PA) in free-living young children. So far, no studies have examined the feasibility and validity of accelerometer measurements in children under 3 years of age. Therefore, the purpose of the present study was to examine the feasibility and validity of accelerometer measurements in toddlers (1- to 3-year olds). Methods Forty-seven toddlers (25 boys; 20 ± 4 months) wore a GT1M ActiGraph accelerometer for 6 consecutive days and parental perceptions of the acceptability of wearing the monitor were assessed to examine feasibility. To investigate the validity of the ActiGraph and the predictive validity of three ActiGraph cut points, accelerometer measurements of 31 toddlers (17 boys; 20 ± 4 months) during free play at child care were compared to directly observed PA, using the Observational System for Recording Physical Activity in Children-Preschool (OSRAC-P). Validity was assessed using Pearson and Spearman correlations and predictive validity using area under the Receiver Operating Characteristic curve (ROC-AUC). Results The feasibility examination indicated that accelerometer measurements of 30 toddlers (63.8%) could be included with a mean registration time of 564 ± 62 min during weekdays and 595 ± 83 min during weekend days. According to the parental reports, 83% perceived wearing the accelerometer as 'not unpleasant and not pleasant' and none as 'unpleasant'. The validity evaluation showed that mean ActiGraph activity counts were significantly and positively associated with mean OSRAC-P activity intensity (r = 0.66; p < 0.001; n = 31). Further, the correlation among the ActiGraph activity counts and the OSRAC-P activity intensity level during each observation interval was significantly positive (ρ = 0.52; p < 0.001; n = 4218). Finally, the three sedentary cut points exhibited poor to fair classification accuracy (ROC-AUC: 0.56 to 0.71) while the three light PA (ROC-AUC: 0.51 to 0.62) and the three moderate-to-vigorous PA cut points (ROC-AUC: 0.53 to 0.57) demonstrated poor classification accuracy with respect to detecting sedentary behavior, light PA and moderate-to-vigorous PA, respectively. Conclusions The present findings suggest that ActiGraph accelerometer measurements are feasible and valid for quantifying PA in toddlers. However, further research is needed to accurately identify PA intensities in toddlers using accelerometry. Background Early childhood (defined as between 0 and 5 years of age) is one of the critical developmental periods in life in which health behaviors such as physical activity are established [1,2]. Regular physical activity during early childhood protects against unhealthy weight gain and contributes to the overall development and well-being of children under 5 years of age [3,4]. In line with this acknowledgement, governments and professional groups have developed physical activity recommendations for children from birth to age 5 in accordance with their developmental stage, namely for infants (0-to 1-year olds), toddlers (1-to 3-year olds) and preschoolers (3to 5-year olds) [5,6]. One of the challenges faced by researchers trying to promote physical activity is having access to accurate and practical instruments to measure physical activity [7][8][9]. 
Valid measurement of physical activity in children under 5 years of age is challenging, largely due to the sporadic and intermittent nature of their activity behavior [7][8][9]. Proxy reports from parents can be useful for rank ordering children across the 0 to 5 age group on physical activity behavior. However, inaccurate estimates of the amount and intensity of physical activity remain a primary concern with this approach [8,10]. On the other hand, accelerometers and direct observation have become well-established and preferred methods for monitoring physical activity performed by children aged 0 to 5 years [7,8]. Both accelerometers and direct observation are developmentally appropriate and allow detection of short spurts of activity [8,11]. Accelerometers have the additional advantage of enabling objective quantification of the frequency, intensity and duration of physical activity during all waking hours over several days. Moreover, the relatively low researcher and participant burden associated with accelerometers, and their lower cost as compared with direct observation makes them particularly attractive [7][8][9][10]. Therefore, accelerometers are currently considered to be the most promising tool for measuring habitual physical activity in freeliving children aged 0 to 5 years [7,8,10]. Nevertheless, validity, reliability and feasibility of accelerometer measurements have only been established in preschoolers [10,11]. Further, accelerometers have never been applied to evaluate physical activity levels in children younger than 3 years [12]. Only two studies could be located describing physical activity levels in toddlers, one using direct observation [13] and one using parental reports [14]. More research in these areas is warranted to gain a comprehensive understanding of physical activity during the infant and toddler years [10,12]. Additionally, testing accelerometer feasibility and validity and describing physical activity levels in infants, toddlers and preschoolers separately are a necessity as movement patterns during these unique developmental periods differ, and physical activity recommendations are specific for each age group [3,5,6,10]. During the first 12 months of life, activity is characterized by the learning of rudimentary movement skills, such as rolling, reaching, grasping, sitting, crawling, standing and walking. With the onset of walking, the toddler period begins and children start to develop proficiency in locomotor skills such as running, jumping, hopping, galloping and skipping. Further, manipulative and stability skills also begin to emerge in this period. Finally, the preschool period generally encompasses ages 3 to 5 and is characterized by further development and refinement of locomotor, manipulative and stability skills. Next to these differences in activity patterns, daytime naps are more common in children younger than 3 years of age compared to older children suggesting that younger children have less time during the day to be physically active and that the number of hours of monitoring required to represent a typical day might be less in this age group [10]. To date, the potential of accelerometers to assess physical activity in children under 3 years of age is largely unknown. Accordingly, the first purpose of the present study was to test the feasibility of ActiGraph accelerometer-based physical activity assessments in toddlers. A second purpose was to examine the validity of the ActiGraph accelerometer in toddlers. 
Finally, as toddler specific accelerometer cut points are currently lacking, the predictive validity of three independently developed ActiGraph cut points for classifying physical activity intensity among preschoolers was evaluated in toddlers. Participants Forty-four child care centers from Ghent, Flanders, Belgium were randomly selected from the official database of the Flemish governmental agency "Child & Family" (Kind & Gezin) [15]. The head of each child care center was contacted by phone and 27 (61.4%) were willing to participate. Following approval from the head of each center, all parents of 1-to 2-year old children were invited to enroll their child in the present study (n = 272) using written information letters and consent forms, distributed directly to parents by the child care center staff. The study was approved by the Ethics Committee of the University Hospital of Ghent. Fifty-five parents from 11 child care centers agreed to let their child participate in the present study (20.2% of eligible children). Children were finally included in the study if it was observed by a researcher that the child could walk independently. Eight children did not meet this criterion and were consequently excluded, resulting in a final sample of 47 toddlers from 11 child care centers. Of the 47 participants, 31 (63.3%) were observed during free play. Descriptive characteristics for all participants and the observed participants are presented in Table 1. Procedure All participants wore an accelerometer during waking hours for 6 consecutive days, including 4 weekdays and 2 weekend days. Parents and child care staff were instructed to only remove the accelerometer when the toddlers performed water-based activities (e.g., bathing, swimming) and during sleeping and napping because the aim was to define physical activity during waking time. During the measurement period, no reminders were provided to the parents or the child care staff to comply with the protocol. A diary was provided for the parents to log the times the accelerometer was put on and taken off and the reason for doing so. Additionally, parents reported their child's average sleeping and napping time during weekdays and weekend days. Finally, parents were asked to report on toddler's perception of wearing the accelerometer on a 5-point scale with endpoints ranging from 1 (very unpleasant) to 5 (very pleasant). Any other remarks about wearing the accelerometer were also recorded. On the first day of the protocol, the accelerometer was attached and toddler's height and weight were measured by a researcher. After the accelerometer was attached, toddler's activity behavior was videotaped by a researcher during free play at child care (indoors or outdoors) using a Sony Digital Handycam DCR-PC101E. A digital watch was synchronized with the computer used to upload data and download data from the accelerometers. The start and stop times of toddler's free play were recorded. On day 6, the accelerometer and the diary were collected by a researcher at the child care center. Child characteristics While toddlers were barefoot and in light clothing, height was measured to the nearest 0.1 cm using a portable stadiometer SECA 214. Weight was measured to the nearest 0.1 kg using a digital scale SECA 813. Height and weight were used to determine the body mass index (BMI; kg/m 2 ). BMI z-scores were calculated on the basis of the WHO reference data using the LMS method [16]. 
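The LMS method expresses a child's BMI relative to age- and sex-specific reference parameters L (skewness), M (median) and S (coefficient of variation), with z = ((x/M)^L − 1)/(L·S) when L ≠ 0 and z = ln(x/M)/S when L = 0. The snippet below is a generic sketch of that calculation; the reference values shown are made up, whereas the study used the WHO growth reference tables.

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS z-score for a measurement x given reference L, M, S values."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Illustrative only: a BMI of 17.2 kg/m^2 against made-up reference parameters
# for a given age and sex (the study used the WHO growth reference tables).
print(round(lms_zscore(17.2, L=-0.5, M=16.4, S=0.08), 2))
```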
A z-score indicates by how many standard deviations a child deviates from the age and sex specific reference value. Toddlers' demographics (gender and date of birth) were acquired through the child care centers. GT1M ActiGraph accelerometer Toddlers' physical activity levels were objectively measured using the GT1M ActiGraph accelerometer, a uniaxial accelerometer designed to detect vertical accelerations. At present, ActiGraph accelerometers are the most widely used accelerometers for physical activity research in children and adolescents [9,11,17]. Consistent with previous research in preschoolers, the accelerometers were fastened snugly around the waist and positioned on the right hip, using an adjustable elastic belt [10,17,18]. Accelerometers were programmed to start measuring on the first day and to record data every 15 seconds [9,10,19]. After data collection, accelerometers were downloaded for subsequent data reduction and analysis. The software Meterplus 4.2 was used to screen and clean the accelerometer data from the 6 days of measurement [20]. Because of the absence of methodological studies investigating accelerometer data reduction in toddlers, decision rules in accordance with previous research in preschoolers were applied. Both the first and the last day of the registration period were omitted because these days were incomplete [21]. Periods containing 10 minutes or more of consecutive zero activity counts were regarded as non-wearing time and were excluded [10,21]. The minimum number of minutes with recorded accelerometer data (registration time) required to constitute an eligible weekday and weekend day was determined by defining the period during which at least 70% of the study population had recorded accelerometer data and 80% of that observed period was the minimum registration time [18,21]. Days on which participants did not achieve the minimum registration time were considered as non-eligible days and were excluded. Because of toddlers' sleeping and napping patterns, it was expected that toddlers' registration time would be lower compared to preschoolers' registration time. Therefore, the 70/80 rule was preferable to a priori determined criteria previously used in preschoolers (e.g., a minimum registration time of 8 hours) as this rule uses the sample from the study under investigation [18]. Minimum registration time was defined for weekdays and weekend days separately as this may potentially vary [10,18,19]. Ultimately, participants were included in the analyses if data were available for 3 valid days [10,22]. As a measure of toddlers' total physical activity, mean counts per 15 seconds during weekdays and weekend days was calculated. To compare the accelerometer data with physical activity levels during the free play session as measured using direct observation, the start and stop times were applied to extract the corresponding accelerometer data, namely activity counts per 15 seconds epoch. Each 15 seconds epoch was subsequently classified as sedentary behavior, light physical activity and moderate-to-vigorous physical activity. ActiGraph cut points to define physical activity intensities have not yet been developed in toddlers and, therefore, cut points specifically developed for the age group nearest to toddlers (3-to 5-year olds) using a 15 seconds measurement interval were selected. The cut points developed in the following calibration studies were used: Pate et al. 
[23,24] for 3-to 5year old children (sedentary behavior: ≤ 37; light physical activity: 38 -419; moderate-to-vigorous physical activity: ≥ 420), Sirard et al. [25] for 3-year old children (sedentary behavior: ≤ 301; light physical activity: 302 -614; moderate-to-vigorous physical activity: ≥ 615) and Van Cauwenberghe et al. [21] for 5-year old children (sedentary behavior: ≤ 372; light physical activity: 373 -584; moderate-to-vigorous physical activity: ≥ 585). Observational System for Recording Physical Activity in Children-Preschool The criterion measure of physical activity intensity during the free play sessions was assessed by means of an adapted version of the Observational System for Recording Physical Activity in Children-Preschool (OSRAC-P) [26]. OSRAC-P is a focal child, momentary time sampling observation system and scores children's physical activity intensity level every 30 seconds on a 1 to 5 scale where 1 = stationary and motionless; 2 = stationary with movement of limbs or trunk; 3 = slow, easy movement; 4 = moderate movement; and 5 = fast movement [27]. As such, the original coding protocol does not allow for frequent recording of physical activity, possibly resulting in an inability to adequately assess the typical sporadic and intermittent physical activity patterns of young children [7]. In addition, the 30 seconds observation interval does not correspond with the chosen accelerometer measurement interval of 15 seconds. Consequently, it was decided to use a computerized 15 seconds-by-15 seconds coding protocol using the software Vitessa 0.1 [28]. The free play session was videotaped and afterwards the video footage was downloaded and scored every 15 seconds, except for instances when the toddler was not visible in the video footage (e.g., behind furniture, other child or adult). In accordance to the manual and to previous research in toddlers, the highest level of intensity achieved by the toddler during each 15 seconds interval was recorded [13,27]. Afterwards, intervals coded as stationary and motionless or stationary with movement of limbs or trunk were classified as sedentary behavior, intervals coded as slow, easy movement as light physical activity and intervals coded as moderate or fast movement as moderate-to-vigorous physical activity [27]. In addition, the OSRAC-P scale assessing toddler's topography of physical activity behavior (e.g., running, sitting, walking, riding) was used to score toddler's physical activity type during each observation interval. Before data collection, two observers were engaged in a training protocol for 40 hours, including reading OSRAC-P articles, studying observation procedures and practicing and discussing coding definitions. To assess inter-observer reliability, it is recommended that approximately 12% of observations should be independently coded [27]. Inter-observer agreement scores, using stringent interval-by-interval comparisons, were calculated separately for toddler's physical activity level and physical activity type. The remaining observations were coded by both observers individually. Statistical analyses To investigate the feasibility of the ActiGraph accelerometer in toddlers, descriptive statistics of the feasibility variables were conducted. 
Paired Samples t-tests were performed to check for differences in registration time and daily activity level between weekdays and weekend days and Independent Samples t-tests and crosstabs to examine demographic or anthropometric differences between the total sample and the observed sample and between toddlers providing eligible and non-eligible accelerometer data. To evaluate the criterion validity of the ActiGraph accelerometer against direct observation, the correlation between mean accelerometer activity counts and mean directly observed activity intensity levels during the observation of each toddler was calculated. Further, the correlation between the accelerometer activity counts and the directly observed activity intensity during each observation interval across all observations was calculated. Depending on the normal distribution of the variables, Pearson (skewness < 0.7) or Spearman (skewness ≥ 0.7) correlations were performed. For all analyses, SPSS for Windows 15.00 was used and statistical significance was set at an alpha level of 0.05. The predictive validity of each accelerometer cut point was tested by performing two different analyses using Medcalc 11.4.4. First, Bland-Altman plots were conducted to determine systematic bias and 95% limits of agreement between observed and predicted time in each activity intensity by each cut point [29]. Second, the ability to accurately classify sedentary behavior, light physical activity and moderate-to-vigorous physical activity was evaluated for each set of cut points by calculating sensitivity, specificity and area under the Receiver Operating Characteristic curve (ROC-AUC). ROC-AUC provides a measure of classification accuracy that jointly considers sensitivity and specificity [30]. A Receiver Operating Characteristic curve plots the false positive rate (1 -specificity) on the x-axis and the true positive rate (sensitivity) on the y-axis. ROC-AUC of 1 represents perfect classification, whereas an area of 0.5 represents a complete absence of classification accuracy. ROC-AUC values of ≥ 0.90 are considered excellent, 0.80 -0.90 good, 0.70 -0.80 fair and < 0.70 poor [31]. Feasibility During weekdays and weekend days, a minimum registration time of 452 min (7.5 h) and 464 min (7.7 h) per day, respectively, was determined using the 70/80 rule. Twenty-nine weekdays and 36 weekend days did not meet the minimum registration time and were excluded. This resulted in three toddlers having 0 eligible days, six having 1 eligible day, eight having 2 eligible days, 19 having 3 eligible days and 11 having 4 eligible days. Ultimately, accelerometer measurements of 30 toddlers (63.8%) were included. No demographic or anthropometric differences were observed between toddlers providing eligible or non-eligible accelerometer data (all p > 0.05). The mean registration time of the included toddlers was 564 ± 62 min (9.4 h; range: 452 -714 min) during weekdays and 595 ± 83 min (9.9 h; range: 468 -846 min) during weekend days. The difference in registration time between weekdays and weekend days was not statistically significant (t = -1.609; p = 0.115). Daily activity level was 126 ± 39 counts/15 s (range: 51 -213 counts/15 s) during weekdays and 115 ± 35 counts/15 s (range: 62 -209 counts/15 s) during weekend days and did not differ significantly (t = 1.526; p = 0.135). 
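For the predictive-validity analyses described above, each comparison reduces either to an observed and a predicted classification per 15-s epoch (for sensitivity, specificity and ROC-AUC) or to observed and predicted minutes per child (for the Bland-Altman bias and 95% limits of agreement). The sketch below uses made-up numbers and scikit-learn's roc_auc_score as one way to obtain the ROC-AUC; the study itself used MedCalc 11.4.4.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up example: 1 = sedentary epoch, 0 = non-sedentary epoch.
observed = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])   # OSRAC-P classification
predicted = np.array([1, 0, 0, 0, 1, 1, 1, 0, 0, 1])  # cut-point classification

tp = np.sum((predicted == 1) & (observed == 1))
tn = np.sum((predicted == 0) & (observed == 0))
sensitivity = tp / np.sum(observed == 1)
specificity = tn / np.sum(observed == 0)
auc = roc_auc_score(observed, predicted)   # for a single cut point this equals (sens + spec) / 2
print(sensitivity, specificity, auc)

# Bland-Altman for minutes spent in an intensity per child: bias and 95% limits of agreement.
observed_min = np.array([20.1, 15.3, 30.0, 12.5])
predicted_min = np.array([18.4, 17.0, 28.2, 14.1])
diff = observed_min - predicted_min
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(bias, loa)
```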
The diary was filled out by 39 parents (83.0%) and all logged the times when the accelerometer was taken off for sleeping, napping, bathing, swimming and when the accelerometer was put back on. Six parents (15.4%) reported that they forgot to put on the accelerometer during one or more days; five of these children (83.3%) provided non-eligible accelerometer data. Five parents (12.8%) reported that there was some delay in refitting the monitor after their toddler had woken up or taken a nap during one or more days; one of these children (20.0%) provided non-eligible accelerometer data. During weekdays, median sleeping and napping time was 11.3 h (IQR: 10.9 -11.8) and 2.0 h (IQR: 1.9 -3.0), respectively, and during weekend days 11.0 h (IQR: 10.9 -12.0) and 2.0 h (IQR: 2.0 -3.0), respectively. Almost all the parents (82.9%) reported that their toddler found it 'not unpleasant and not pleasant' to wear the accelerometer while none found it 'very unpleasant' or 'unpleasant' to wear the accelerometer (median: 3; IQR: 3 -3). Three parents reported that sometimes the accelerometer did not stay on the correct position and one parent informed that this was due to the curiosity of his/ her child. Free play observations Sixteen of the 47 toddlers could not be observed because they were having dinner, were taking a nap or were being picked up by their parents to go home. Table 1 shows that no significant demographic or anthropometric differences were found between the observed sample and the total sample (all p > 0.05). Seventeen (54.8%) free play observations were indoors, 10 (32.3%) were outdoors and four (12.9%) were both indoors and outdoors. The observation period ranged from 19.5 (78 observation intervals) to 60.0 min (240 observation intervals) resulting in a total of 4553 observation intervals of 15 seconds. Of these observation intervals, 335 (7.4%) could not be scored because the child was not visible. As a result, 4218 observation intervals could be used. To assess inter-observer agreement, four randomly selected participants were independently coded by two observers (490 observation intervals; 11.6%). Inter-observer agreement was 91% and 96% for toddler's physical activity level and type, respectively. Tables 2 and 3 display descriptive statistics for OSRAC-P activity intensity level (scale 1 -5) and Acti-Graph activity counts during each intensity level and activity type. The mean intensity level across all the observations, as assessed by OSRAC-P and accelerometry, was 2.6 ± 0.9 (inter-child range: 1. (50.4%) and were observed in slow, easy movement (36.3%) and moderate to fast movement (13.3%) less frequently. The inter-child range for physical activity level was 14.8% -81.2% for stationary and motionless to stationary with movement of limbs or trunk behavior, 17.1% -64.9% for slow, easy movement and 0.9% -38.5% for moderate to fast movement. Three physical activity types, namely sit and squat (24.3%), stand (24.3%) and walk (33.1%), accounted for the greatest proportion of toddler's physical activity topography during the free play observations. Criterion validity Mean ActiGraph activity counts were significantly and positively associated with mean OSRAC-P activity intensity levels (r = 0.66; p < 0.001; n = 31). Further, the correlation among the ActiGraph activity counts and the OSRAC-P activity intensity level during each observation interval across all observations was also significant (ρ = 0.52; p < 0.001; n = 4218). 
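Before turning to the predictive-validity results, the epoch classification that underlies them can be made concrete. The function below assigns each 15-s epoch to sedentary behavior, light or moderate-to-vigorous physical activity using the three sets of preschool cut points quoted in the Methods (Pate ≤ 37 / 38–419 / ≥ 420; Sirard ≤ 301 / 302–614 / ≥ 615; Van Cauwenberghe ≤ 372 / 373–584 / ≥ 585 counts per 15 s); the example counts are made up.

```python
CUT_POINTS = {                      # counts per 15-s epoch: (sedentary upper, light PA upper)
    "Pate": (37, 419),
    "Sirard": (301, 614),
    "VanCauwenberghe": (372, 584),
}

def classify_epoch(counts, study="Pate"):
    """Label a 15-s epoch as SED, LPA or MVPA using preschool cut points."""
    sed_max, lpa_max = CUT_POINTS[study]
    if counts <= sed_max:
        return "SED"
    if counts <= lpa_max:
        return "LPA"
    return "MVPA"

epochs = [12, 150, 430, 700]        # illustrative activity counts per 15 s
print([classify_epoch(c, "Pate") for c in epochs])
print([classify_epoch(c, "Sirard") for c in epochs])
```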
Predictive validity Bland-Altman analyses for each set of cut points are displayed in Table 4. The Pate sedentary behavior cut point tended to underestimate stationary and motionless to stationary with movement of limbs or trunk behavior (mean difference of 1.5 min) while the Sirard and Van Cauwenberghe sedentary behavior cut points tended to overestimate stationary and motionless to stationary with movement of limbs or trunk behavior (mean difference of -11.5 min and -13.0 min, respectively). The smallest bias was achieved when the Pate sedentary behavior cut point was used. Slow, easy movement was underestimated when the Sirard and Van Cauwenberghe light physical activity cut points (mean difference of 8.2 min and 9.9 min, respectively) were applied while slow, easy movement was overestimated with the Pate light physical activity cut point (mean difference of -3.0 min). Using the Pate cut point resulted in the smallest bias in predicted slow, easy movement. All three cut points underestimated moderate to fast movement (mean difference of 1.5 to 3.3 min) with the Pate moderate-tovigorous physical activity cut point demonstrating the highest level of agreement. In Table 5 overall agreement, sensitivity, specificity and ROC-AUC values for each cut point to categorize activity counts as stationary and motionless to stationary with movement of limbs or trunk behavior, slow, easy movement and moderate to fast movement are presented. The Pate cut points showed the highest level of Discussion The present study is the first to investigate the feasibility and validity of GT1M ActiGraph accelerometer measurements in a convenience sample of 1-to 2-year old toddlers. Furthermore, this study is also the first to examine if ActiGraph cut points developed among preschool children (3-to 5-year olds) are appropriate to accurately identify directly observed physical activity intensities in a toddler population. To test the feasibility of the GT1M ActiGraph accelerometer, 47 toddlers from 11 child care centers wore the device during waking hours for 6 consecutive days. It is of concern that many day care centers were not interested in participating in the present study (response rate of 61.4%). A possible explanation could be that the contacted child care centers were anxious that the results of the present study would be used to evaluate the quality of the child care center. There was also a very low parental response rate in this study (20.2%). Perhaps parents were reluctant to participate because the study included an observation of their child. Another possibility is that child care staff did not motivate the parents to participate in the present study as no incentives were provided for both the child care center and the parents. Based on the experience from this study it can be suggested to use a more active recruitment approach to recruit parents (e.g., parent information session) or to provide incentives for parents and/or child care centers to achieve a higher response rate in this age group. Using the 70/80 rule, a minimum registration time of 7.5 h during weekdays and 7.7 h during weekend days was determined. Applying this criterion, resulted in 29 excluded weekdays and 36 excluded weekend days, and 17 out of 47 toddlers (36.2%) failing to meet the inclusion criteria. Compared to previous research in preschoolers, applying the 70/80 rule and requiring 3 eligible days for inclusion, the proportion of ineligible data is higher in the present study [21,32]. In the study of Van Cauwenberghe et al. 
[21] applying these decision rules resulted in 97 excluded days and 40 out of 154 5year old children (26.0%) providing non-eligible data and in the study of Verbestel et al. [32] this resulted in 48 out of 261 3-to 5-year old children (18.4%) providing non-eligible data. The rather high proportion of data not eligible for inclusion can probably be explained by the fact that wearing the accelerometer in the present study was a responsibility of both the parents and the child care staff. As child care staff were not instructed to fill in a diary, no information is available on the times the accelerometer was put on and taken off and the reasons for doing so when the child was at child care. One way to increase compliance could be to provide reminders (e.g., daily telephone calls) to parents and/or child care staff to put the accelerometers back on after sleeping and napping. Additionally, future studies could test the feasibility of wearing the accelerometer during day time napping as this would alleviate the issue of parents and child care staff having to remove and refit the monitors during the day. Further, it is possible that the inclusion criterion of 3 eligible days was too high in this population. If a criterion of 2 eligible days had been applied, another 8 toddlers (17.0%) could have been included for analyses. However, this suggestion could possibly decrease the reliability of the measurements as previous research in preschoolaged children indicated that the number of days of monitoring was more important to reliability than the number of hours [22]. Future research should establish the minimum number of days accelerometers need to be worn in order to represent habitual physical activity in toddlers. Ultimately, mean registration time of the included days in the present study was 9.4 h during weekdays and 9.9 h during weekend days. Considering the parental reported sleeping patterns of the toddlers, namely 11 h sleeping per night and 2 h napping per day, these findings suggest that there was good compliance to the study protocol. Further, a threshold of 7.5 hours registration time per day appears to be a reasonable suggestion for future research in this age group. Nevertheless, further research is required to determine the variability in toddlers' physical activity behavior within days and the minimum number of minutes required to represent a typical day in toddlers. To evaluate the validity of the GT1M ActiGraph accelerometer, observations during free play at child care were conducted. Results of the free play observations were similar to previous research at child care in 2-to 3-year old children with the majority of the observations classified as stationary and motionless to stationary with movement of limbs or trunk behavior and a minority as moderate to fast movement [13]. The proportion of time spent in each activity level during free play was highly variable between toddlers, reflecting the different activities being undertaken by toddlers during free play. Descriptive statistics of the accelerometer output revealed that median ActiGraph activity counts increased in accordance with physical activity intensity but also demonstrated substantial variability. 
Results of a previous study, using a second-by-second coding protocol and the activity levels categories of the Children's Activity Rating Scale (CARS), indicate that mean accelerometer outputs for sedentary behavior, light physical activity, moderate physical activity and vigorous physical activity during free play are systematically higher in preschoolers, namely 448 ± 196, 734 ± 185, 823 ± 182 and 1115 ± 233 counts per 15 s, respectively [21]. Several explanations are possible for this discrepancy in accelerometer output during free play, including differences in the observation system, the protocol and the activities undertaken during free play and age related changes in anthropometrics, movement patterns and walking biomechanics. In the present study, the criterion validity of the GT1M ActiGraph accelerometer for measuring physical activity in toddlers was considered acceptable (r = 0.66 and ρ = 0.52). A recent review, summarizing the evidence on the validity of the ActiGraph accelerometer to assess physical activity in older children and adolescents, suggests that the results of the present study are in line with previous validation studies where ActiGraph activity counts were moderately to highly correlated with observed activity (r = 0.52 -0.77) [11]. Accelerometer activity counts are a dimensionless unit and researchers have attempted to calibrate these counts into biologically meaningful and interpretable data, such as time spent in activity levels of different intensities [10,18,19]. Calibration studies in toddlers are lacking, but numerous investigations involving preschool children have attempted to calibrate ActiGraph activity counts [10]. Therefore, the present study aimed to explore whether previously developed ActiGraph cut points for 3-to 5-year old children [21,23,25] allow for the accurate categorization of directly observed physical activity intensities in toddlers. It is critical to understand which cut points are able to accurately classify physical activity intensity in young children as others demonstrated that lack of consensus on this issue results in an inability to estimate population prevalence levels of physical activity in young children [21,33]. The Bland-Altman plots illustrate that the mean bias between directly observed and predicted time spent in each activity intensity was the lowest when the Pate cut points were used. Yet, wide limits of agreement were found, indicating that the time in the directly observed physical activity intensities was not accurately classified. Large mean differences and wide limits of agreement were established for the other two sets of cut points, suggesting that they were unable to accurately identify time spent in each observed activity intensity level in toddlers. To evaluate the predictive validity of the cut points thoroughly, sensitivity, specificity and ROC-AUC were calculated. The Pate sedentary behavior cut point performed fairly well to classify activity counts as stationary and motionless to stationary with movement of limbs or trunk behavior while the Sirard and Van Cauwenberghe sedentary behavior cut points exhibited an unacceptably high false positive rate. These findings do support the use of the Pate cut point to define sedentary time and non-sedentary time (a combination of light, moderate and vigorous physical activity) in toddlers. With respect to detecting slow, easy movement, all three cut points performed poorly. 
These findings endorse the development of toddler specific light physical activity cut points. Finally, for the purpose of categorizing activity counts as moderate to fast movement, all three moderate-to-vigorous physical activity cut points performed poorly as a function of a low true positive rate, indicating the need for toddler specific cut points to classify moderate-to-vigorous physical activity. Most importantly, the cut points currently used, appeared to be too high to accurately identify the time toddlers spent in moderate to fast movement. A very important consideration is that the GT1M ActiGraph is a hip-mounted accelerometer and measures accelerations in the vertical plane. Consequently, the accelerometer registers a reduced amount of vertical acceleration when non-ambulatory activities with limited trunk movement occur (e.g., climbing, pulling, pushing, peddling on a tricycle), resulting in misclassification of light physical activity as sedentary behavior or moderate-to-vigorous physical activity as light physical activity. Moreover, results from the present study illustrate that toddlers often engage in such activities during free play at child care. Therefore, combining accelerometers with monitors capable of detecting posture, using multiple monitors to measure movement of the trunk and the limbs simultaneously or applying pattern recognition may provide more accurate information beyond the capability of the GT1M ActiGraph when defining physical activity intensities in toddlers as well as in older children [10,17,18,34]. Research in these areas is urgently needed. Some limitations of the present study need to be acknowledged. To measure physical activity intensity during free play, a 15 seconds measurement interval was used for both the accelerometers and the OSRAC-P. A 15 seconds measurement interval has been put forward to measure the spontaneous activities in young children [9,10]. However, there is a possibility that the 15 seconds measurement interval does not allow for the accurate detection of intermittent changes in physical activity intensity and fidgeting might be missed [10,35]. Furthermore, the OSRAC-P protocol requires to code the highest level of activity and the corresponding activity type during the 15 seconds observation interval which may mask other activity intensities and types during the observation interval. Especially for the classification of slow, easy movement and moderate to fast movement, it can be expected that levels of agreement were reduced because of this coding system. A continuous coding protocol may have been more appropriate to capture physical activity intensity and type during free play. Further, although the OSRAC-P decision rules to classify physical activity intensities in young children are well-established [7,26], the classification of standing as sedentary behavior is questionable [36]. Finally, the present study was limited by the small convenience sample used. Larger and more variable samples are needed to determine if individual factors, such as body size and motor development, modify the findings. Several strengths of the present study are also noteworthy. First, the ActiGraph accelerometer was validated against directly observed free play activities with excellent inter-observer reliability. Additionally, by using direct observation, in-depth information on the physical activity types of toddlers was gathered. 
Second, the criterion validity and the predictive validity were evaluated using appropriate statistical approaches [7,10]. Conclusions In summary, the results of the present study endorse the use of the GT1M ActiGraph accelerometer to assess habitual physical activity in free-living toddlers. In addition, the present findings suggest that the Pate cut point can be used to classify sedentary behavior and non-sedentary behavior in toddlers. However, further research is warranted to classify sedentary time in toddlers with excellent accuracy using accelerometry. None of the three cut points developed among preschool children appeared to be suitable for differentiating light and moderate-to-vigorous physical activity in this age group. Until such equations are developed for toddlers, researchers could use accelerometer counts (e.g., counts per minute) as a measure of physical activity participation in toddlers. List of abbreviations OSRAC-P: Observational System for Recording Physical Activity in Children-Preschool; ROC-AUC: area under the Receiver Operating Characteristic curve.
2014-10-01T00:00:00.000Z
2011-06-26T00:00:00.000
{ "year": 2011, "sha1": "d1920cb0882aa2ff2c958359b0031dc53b6e106c", "oa_license": "CCBY", "oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/1479-5868-8-67", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d1920cb0882aa2ff2c958359b0031dc53b6e106c", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
206252996
pes2o/s2orc
v3-fos-license
The validity of the no-pumping theorem in systems with finite-range interactions between particles The no-pumping theorem states that seemingly natural driving cycles of stochastic machines fail to generate directed motion. Initially derived for single particle systems, the no-pumping theorem was recently extended to many-particle systems with zero-range interactions. Interestingly, it is known that the theorem is violated by systems with exclusion interactions. These two paradigmatic interactions differ by two qualitative aspects: the range of interactions, and the dependence of branching fractions on the state of the system. In this work two different models are studied in order to identify the qualitative property of the interaction that leads to breakdown of no-pumping. A model with finite-range interaction is shown analytically to satisfy no-pumping. In contrast, a model in which the interaction affects the probabilities of reaching different sites, given that a particle is making a transition, is shown numerically to violate the no-pumping theorem. The results suggest that systems with interactions that lead to state-dependent branching fractions do not satisfy the no-pumping theorem. I. INTRODUCTION One of the prominent technological trends of the last century is miniaturization. The race to build smaller devices and machines has led to the realization that when a system is sufficiently small its properties change qualitatively. Molecular machines are one such fascinating example of the notion that "small is different" [1][2][3][4][5][6]. Macroscopic machines work in an ordered and predictable manner, since their interaction with their environment is relatively weak and does not disrupt their operation. In contrast, microscopic machines are strongly agitated by their surroundings [7], resulting in large fluctuations in their dynamics. The coupling of a small system to a thermal environment is naturally described in terms of stochastic equations of motion, expressing our inability to predict the dynamics of the environment. Our theoretical understanding of such systems significantly improved in the last two decades, following the discovery of the celebrated fluctuation theorems [8][9][10][11][12][13][14][15][16][17]. A theory, termed stochastic thermodynamics, allows to assign thermodynamic interpretation to single realizations of a nonequilibrium process [18,19]. It can be viewed as extending thermodynamics to small out-of-equilibrium systems. The small size of molecular machines means that finding reliable ways of driving them is not a trivial task. Molecular machines of biological origin typically exhibit coupling between mechanical motion and a chemical reaction. Many biological motor proteins are driven by the chemical potential difference between ATP and its hydrolysation products. This chemical potential difference changes on a time scale which is much larger than the time it takes such a motor to make a step or complete a cycle. Such machines are therefore driven by an effectively time-independent chemical driving force. Not all microscopic machines are of biological origin. Many research groups are designing and building artificial nano-machines in their labs [3][4][5][6]. These artificially designed machines can be driven by mechanisms which are very different from those of their biological counterparts. One interesting driving mechanism involves a cyclic variation of external parameters, which then results in some kind of directed motion. 
Thermal ratchets are one well known type of a periodically driven system where diffusion plays an integral part of the dynamics [20]. But cyclic variation of parameters can also be used to drive systems modeled by a set of discrete states, where diffusion is not as prominent. The latter class of systems is often referred to as stochastic pumps [21][22][23][24][25][26][27][28][29] since they are driven in a way which is similar to everyday, macroscopic pumps. In a beautiful experiment Leigh et al. have built an artificial molecular machine that operates as a stochastic pump [30]. A complex of interlocked ring-like molecules was synthesized. Relative motion of the ring-like molecules in this machine was achieved by a cycle of local chemical changes at binding sites along a larger molecule. These can be modeled by a time variation of local binding energies, so that the smaller ring-like molecules makes thermally activated transitions in a time-dependent energy landscape. Surprisingly, it turns out that temporal variation of local binding energies does not result in directed motion, even when the time-ordering of the variation breaks the symmetry between directions [31,32]. One needs to also vary local barriers during the cycle to generate directed motion. This result is known as the no-pumping theorem. It's non intuitive nature and simplicity have generated some interest [33][34][35][36][37]. In many-particle stochastic pumps interactions between the particles may have considerable effect on the resulting dynamics. In the context of no-pumping one may wonder whether interactions may result in violation of the theorem. Already in the experiment of Leigh et al. it was noted that exclusion interaction allows one to achieve directed motion in a pump driven by variation of site energies alone [30], meaning that exclusion interaction breaks the no-pumping theorem. (See also [27] for a theoretical discussion.) In contrast, several recent papers studied many-particle pumps with local, or zero-range, interactions and found that the no-pumping theorem holds [38][39][40]. It is not surprising that the two types of interactions used so far in studies of stochastic pumps are exclusion and zero-range. These two interactions serve as paradigmatic models of many-particle nonequilibrium steady states, and have been extensively investigated in that context [41][42][43][44][45]. But the current state of knowledge, where it is known that exclusion interaction violates no-pumping, while zero-range interaction do not, leaves an open question. Which property of particle-particle interactions leads to the breakdown of the no-pumping theorem? A closer inspection of exclusion and zero-range interactions reveals that they differ by two qualitative aspects. One is the range of interaction. Exclusion interaction is a nearest neighbor interaction, and therefore has a finite range, while zero-range is a local interaction. The other qualitative aspect has to do with the conditional probability of a particle to reach different sites when it is known that this particle is making a transition. The so-called branching fractions expressing this conditional probability are time-independent in the zero-range pump, while they gain time dependence on average -through their dependence on the location of neighboring particles -for exclusion interactions. Which of these qualitative aspects of interaction is responsible for the violation of the non-pumping theorem? 
In this paper I answer this question by studying the validity of no-pumping in models with finite-range interactions that still exhibit similarities to zero-range interactions. I find that if the interaction is such that the branching ratios are fixed, for instance because they are independent of the state of the system, the no-pumping theorem still holds. I also study numerically a two-particle system with a more complicated interaction which results in branching fractions that depend on the configuration of particles, and find that the no-pumping theorem breaks down. The structure of the paper is as follows. In Sec. II a model of a stochastic pump with a particular form of finite-range interaction between all particles in the system is constructed. It is then shown that the no-pumping theorem holds for this model. A model of a two-particle stochastic pump with more general interaction is studied numerically in Sec. III, and is found to violate the theorem. In Sec. IV the qualitative differences between the interactions are discussed, and the aspect of interaction that leads to breakdown of the no-pumping theorem is identified. II. NO-PUMPING WITH FINITE-RANGE INTERACTIONS In this section I define a particular model of stochastic pumps with non-local interactions. I then demonstrate that this model satisfies the no-pumping theorem. The model, which is described below, can be viewed as a generalization of the zero-range interaction studied in Refs. [38][39][40]. It also has similarities with models that were studied in the context of pair factorized steady states [46,47]. Consider a system of N particles and M sites. The system's state can be specified by denoting the precise location of each particle, or alternatively by considering the occupation of each site. In this section I will adopt the latter notation since I will be interested in the total particle current, and therefore there is no need to distinguish between different particles. The states of the system are denoted by the occupation numbers n̄ ≡ (n_1, n_2, ..., n_M), with the constraint Σ_{i=1}^{M} n_i = N. When a particle resides in site i it has a local (or site) energy E_i(t), which can be controlled externally. In addition, all the particles interact with each other, via some finite-range many-body interaction U(n̄). Crucially, this interaction depends on the locations of all the particles in the system. As a result every state of the system has an overall energy of E_n̄(t) = Σ_i n_i E_i(t) + U(n̄). (1) The interaction U(n̄) can have any finite value, as long as it does not destroy or add metastable states (that is, sites) to the system. A model of a stochastic pump used to study no-pumping should relax to equilibrium when not driven. The equilibrium distribution is the Boltzmann distribution, which is given by P^eq_n̄ = [N!/(n_1! n_2! ··· n_M!)] e^{-βE_n̄} / Z, (2) where Z ≡ Σ_{n̄ | Σ_i n_i = N} [N!/(n_1! n_2! ··· n_M!)] e^{-βE_n̄} is the corresponding canonical partition function. The rates of transitions between many-particle states are chosen so that this Boltzmann distribution is indeed the equilibrium distribution. To do so one defines operators b̂_i^± that add or subtract a particle at site i. A transition of a particle from site i to site j then connects the states n̄_1 and n̄_2 = b̂_j^+ b̂_i^- n̄_1. While making this transition the system must overcome an energetic barrier, which has the form W_{n̄_2,n̄_1}(t) = Σ_k (b̂_i^- n̄_1)_k E_k(t) + U(b̂_i^- n̄_1) + B_{i,j}, (3) where B_{i,j} = B_{j,i} are local energy barriers for transitions between sites.
(When n_{1i} = 0 this expression is not well defined, but this is not a problem because the transition rate is defined later in a way that ensures that it vanishes for transitions out of an empty site.) The energetic barrier for transitions is therefore a sum of the energy of the N − 1 stationary particles and the local barrier for the transition. The transitions are assumed to be thermally activated, with rates R_{n̄_2,n̄_1}(t) = n_{1i} e^{-β[W_{n̄_2,n̄_1}(t) − E_{n̄_1}(t)]} (4) for n̄_2 = b̂_j^+ b̂_i^- n̄_1, while R_{n̄',n̄} = 0 when the states n̄', n̄ are not connected by a single transition. The factor of n_{1i} expresses the fact that each of the particles in site i can make the transition, and they are all equally likely to do so. Crucially, these many-particle barriers are symmetric, namely W_{n̄_2,n̄_1} = W_{n̄_1,n̄_2}. The system's probability distribution evolves according to a master equation, dP_n̄/dt = Σ_{n̄'} R_{n̄,n̄'}(t) P_{n̄'}(t), where R_{n̄,n̄} ≡ −Σ_{n̄'≠n̄} R_{n̄',n̄} ensures conservation of probability. The transition rates (4) are consistent with the Boltzmann distribution (2) as they satisfy the detailed balance condition R_{n̄',n̄} P^eq_{n̄} = R_{n̄,n̄'} P^eq_{n̄'}. This is true for any symmetric choice of barriers. The specific choice made in Eq. (3) neglects the interaction between the particle that makes a transition and other particles. It is this property of the many-particle barriers which makes the model qualitatively similar to the zero-range process. For this representation of many-particle states the total particle current between sites i and j is expressed by [39] J_{j,i}(t) = Σ_{n̄_1} J_{n̄_2,n̄_1}(t), (6) where in all the terms in the sum n̄_2 = b̂_j^+ b̂_i^- n̄_1. It should be noted that J_{n̄_2,n̄_1}(t) = R_{n̄_2,n̄_1}(t) P_{n̄_1}(t) − R_{n̄_1,n̄_2}(t) P_{n̄_2}(t) is the flux of probability between the configurations n̄_2, n̄_1, whereas J_{j,i}(t) is the total particle current between sites i and j. This model of a jump process relaxes to equilibrium when the site energies and barriers are not varied in time. It can be operated as a pump by varying the site energies E_i(t) and local barriers B_{i,j}(t) periodically in time. The Floquet theorem ensures that in the long time limit the system converges to a periodic state, P^ps_n̄(t + τ) = P^ps_n̄(t) [48]. In this periodic state particles may preferentially move in different directions at different times. Directed motion is achieved when more transitions are accumulated in one direction than in its opposite. The time-integrated currents, Φ_{j,i} ≡ ∫_t^{t+τ} dt' J_{j,i}(t'), can be used as a measure for the directed motion of the particles in the pump. When Φ_{j,i} ≠ 0 for at least one pair of sites (i, j) directed motion is achieved and the system can be operated as a thermodynamic engine, and perhaps do some useful work against a resisting force [28]. (In the following I suppress the superscript ps, as I will only consider systems that are in this periodic state.) The no-pumping theorem states that both the site energies, E_i(t), and the local barriers B_{i,j}(t), must be varied in time to generate directed motion. Variation of only site energies, or alternatively of only local barriers, will not result in currents that have a preferred direction (after a full cycle). The no-pumping theorem is non-intuitive because variation of the site energies can be done in a way that gives the system a preferred direction. As an example consider a system with three connected sites, A, B and C. The site energies can be varied so that during the cycle the most binding site changes from A to B, then to C, and finally back to A.
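To make the model concrete, the following Python sketch builds the full generator for a small example (M = 3 sites, N = 2 particles) from the reconstructed Eqs. (1)-(4) and verifies that the rates satisfy detailed balance with respect to the Boltzmann distribution (2). The particular interaction U, the site energies and the barriers are arbitrary illustrative choices, not values taken from the paper.

```python
import itertools
import numpy as np
from math import factorial

M, N, beta = 3, 2, 1.0
E = np.array([0.3, -0.2, 0.5])                   # site energies E_i (arbitrary)
B = np.array([[0.0, 1.0, 0.8],                   # symmetric local barriers B_ij
              [1.0, 0.0, 1.2],
              [0.8, 1.2, 0.0]])

def U(n):
    """Finite-range many-body interaction; any finite function of the occupations."""
    return 0.4 * n[0] * n[2] - 0.1 * sum(x * (x - 1) for x in n)

states = [n for n in itertools.product(range(N + 1), repeat=M) if sum(n) == N]
idx = {n: k for k, n in enumerate(states)}

def energy(n):                                   # Eq. (1)
    return float(np.dot(n, E)) + U(n)

def rate(n1, i, j):
    """Rate for moving one particle from site i to site j in state n1, Eq. (4)."""
    if n1[i] == 0 or i == j:
        return 0.0
    stationary = list(n1); stationary[i] -= 1     # the N-1 particles that stay put
    W = float(np.dot(stationary, E)) + U(tuple(stationary)) + B[i, j]   # Eq. (3)
    return n1[i] * np.exp(-beta * (W - energy(n1)))

# Build the generator matrix R (column = initial state, row = final state)
R = np.zeros((len(states), len(states)))
for n1 in states:
    for i, j in itertools.permutations(range(M), 2):
        if n1[i] > 0:
            n2 = list(n1); n2[i] -= 1; n2[j] += 1
            R[idx[tuple(n2)], idx[n1]] += rate(n1, i, j)
np.fill_diagonal(R, -R.sum(axis=0))

# Boltzmann distribution with the multinomial degeneracy factor, Eq. (2)
mult = np.array([factorial(N) / np.prod([factorial(x) for x in n]) for n in states])
p_eq = mult * np.exp(-beta * np.array([energy(n) for n in states]))
p_eq /= p_eq.sum()

# Detailed balance check: the flux matrix F[a, b] = R[a, b] * p_eq[b] must be symmetric
F = R * p_eq[np.newaxis, :]
np.fill_diagonal(F, 0.0)
print("max detailed-balance violation:", np.abs(F - F.T).max())   # should be ~0
```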
Such a variation gives a preferred direction to the driving protocol, but due to the no-pumping theorem we know that the particles will not respond in a way that exhibits a preferred direction after a full cycle. I now demonstrate that this theorem, which was shown to be valid for non-interacting particles and for zero-range interactions, is also valid for the model described above. Crucially, this model includes an interaction U(n̄) which clearly has a finite range, as it allows for interactions between particles in all sites. The derivation of the no-pumping theorem for the case of time-independent local energies is straightforward. The system relaxes to the time-independent equilibrium distribution given by Eq. (2). This distribution is undisturbed by the time dependence of the local barriers. Once the system has relaxed to this distribution all the currents vanish, J_{i,j}(t) = 0. There is no directed motion. The derivation for time-independent local barriers is based on the approach developed by Mandal and Jarzynski for single-particle stochastic pumps [36]. This approach was later extended to many-particle stochastic pumps with zero-range interactions [38,39]. Below I show that it applies to the model of Eqs. (2)-(4) with very little modification. This proof of the no-pumping theorem is based on a derivation of two sets of equations, termed conservation laws and cycle equations. These two sets are then shown to be incompatible with each other, so that the only possible solution is one without a preferred direction for particle flow (Φ_{j,i} = 0 for all transitions). The introduction of the finite-range interaction only modifies the derivation of the cycle equations. To avoid repeating already published arguments I will focus on this part of the derivation and will only heuristically describe the rest of the steps. The conservation laws follow from the periodicity of the driving protocol and the fact that the number of particles in the system is conserved. In fact, their derivation does not depend on the form of the interaction, as they just express the fact that particles do not appear or disappear. They have been derived before for many-particle pumps with zero-range interaction, and that derivation remains valid for the model studied here. An example of the derivation which uses site occupations to describe states of a system can be found in Ref. [39]. The conservation laws are given by Σ_j Φ_{j,i} = 0, (8) for every site i. Equation (8) simply expresses the fact that the total flux of particles out of site i must vanish. It should be noted that Ref. [39] considered an open system in which particles could enter from or leave to external particle reservoirs, and as a result also had somewhat different conservation laws for sites connected to particle reservoirs. There are no such sites here. By themselves the conservation laws (8) do not mean that all integrated currents must vanish. Instead they imply that particles must flow in closed cycles of transitions. Any other flow pattern will result in accumulation of particles and will not be consistent with the time-periodicity of the dynamics. This is the periodically-driven analogue of the well-known network theory of steady states [49]. The other set of equations is called cycle equations since they refer to particle currents along a closed cycle of transitions between sites. As pointed out by Mandal and Jarzynski, these equations essentially result from the detailed balance form of the transition rates.
The derivation of the cycle equations is slightly modified by the presence of the finite-range interaction. Consider the combination e^{βB_{i,j}} J_{n̄_2,n̄_1}(t) = n_{1i} e^{β[E_i(t) + U(n̄_1)]} e^{−βU(b̂_i^- n̄_1)} P_{n̄_1}(t) − n_{2j} e^{β[E_j(t) + U(n̄_2)]} e^{−βU(b̂_i^- n̄_1)} P_{n̄_2}(t), (9) where we again used the notation n̄_2 = b̂_j^+ b̂_i^- n̄_1, so that n_{2j} = n_{1j} + 1. Cycle equations will emerge when the summation of such terms from transitions between neighboring sites shows telescopic cancellations. For instance, if one considers also the j → k transition between the many-particle states n̄_2 and n̄_3 = b̂_k^+ b̂_j^- n̄_2, a similar combination can be written. Note that in both transitions the configuration of the stationary particles is b̂_j^- n̄_2 = b̂_i^- n̄_1, leading to the appearance of the same factor of U(b̂_i^- n̄_1) in all the terms. When the two contributions above are added the negative term in the first will cancel the positive one in the second. One concludes that a complete cancellation of terms will happen for any closed cycle i_1 → i_2 → ··· → i_l → i_1 whose many-particle transitions connect a series of many-particle states n̄_1, n̄_2, ..., n̄_l, resulting in e^{βB_{i_2,i_1}} J_{n̄_2,n̄_1}(t) + e^{βB_{i_3,i_2}} J_{n̄_3,n̄_2}(t) + ··· = 0. One can then sum over all many-particle states n̄_1, use Eq. (6) to recast the result in terms of total particle currents between sites, and then integrate over time. This series of steps results in the so-called cycle equations for the net flux of particles between sites, e^{βB_{i_2,i_1}} Φ_{i_2,i_1} + e^{βB_{i_3,i_2}} Φ_{i_3,i_2} + ··· = 0. (10) Crucially, one such cycle equation holds for every closed cycle of transitions between sites. Note that Eq. (10) is valid only for time-independent barriers. When the barriers B_{i,j}(t) are time-dependent, integration of Eq. (9) over time can no longer be expressed in terms of the integrated fluxes Φ_{i,j}. It is also crucial to notice that this derivation relies on the fact that the same prefactor of e^{βB_{i,j}} appears for all currents in cycles that correspond to a transition i → j, independently of the many-body state of the system. This allows one to sum the cycle equations over the possible particle configurations and to replace the probability currents with the total particle currents between sites. The derivation of Eq. (10) will break down when either one of these system properties is modified. The form of the conservation laws (8) and cycle equations (10) derived here is identical to those found for single-particle pumps in Ref. [36] and for zero-range interactions in Refs. [38] and [39]. As a result the rest of the proof of Mandal and Jarzynski [36] is also valid for the interacting model considered here. I will not repeat that part of the argument here. Heuristically the proof is based on the observation that the cycle equations and the conservation laws try to impose different signs on the flux of particles. Conservation laws try to impose the same sign on all fluxes in a cycle. In contrast, due to the positivity of the e^{βB_{j,i}} factors, the cycle equations mandate different signs for some Φ's in a cycle. Mandal and Jarzynski then showed that this means that the equations are incompatible and the only solution is one where Φ_{j,i} = 0 for all transitions, even if the system has more than one cycle of transitions. One sees that the existing derivation of the no-pumping theorem can be extended to include finite-range interactions of the form (1) as long as the transition rates are given by Eq. (4). This means that it is not the range of the interaction that leads to the violation of the no-pumping theorem in the exclusion process.
Instead this result suggests that the ability of the interaction to affect the likelihood of different transitions from the same site at different times is the crucial property of the interaction that may lead to directed motion. III. NUMERICAL EXAMPLE The model studied in Sec. II can be criticized as being somewhat artificial. The main reason for such criticism is an assumption that was made about the interaction between particles. Specifically, it was assumed that particles residing in sites interact with each other, but not with a particle making a transition. While this assumption seems reasonable and internally consistent for zero-range interactions, where particles affect each other only if they are in the same site, it is somewhat problematic for finite-range interactions, since it means that the interaction is felt before and after, but not during the transition. In this section I consider a model of a stochastic pump with interactions between all particles, including ones in the midst of a transition. The model is chosen to have five sites. These sites, and the topology of transitions between them, are depicted in Fig. 1. The vertices of the graph denote the sites, while the links correspond to the possible transitions between sites. Missing links are meant to represent forbidden or impossible transitions. This specific five-site configuration was chosen due to a combination of two reasons: i) It has more than one closed cycle of transitions, and ii) it allows for different "distance" between particles, measured as the minimal number of jumps needed to bring one of them to the other. The latter property will be used in the definition of the interaction, as explained below. In what follows I will focus on a system of two particles moving in the 5 sites depicted in Fig. 1. When the number of particles is much smaller than the number of sites the easiest way of denoting the state of the system is by listing the location of each particle. (I assume here that the particles are distinguishable.) The model has 25 many-particle states and a transition matrix of size 25 × 25 whose off diagonal entries are the transition rates. Diagonal elements are chosen so that probability is conserved. Limiting the number of particles in the system to two restricts the type of interactions that one can study. Nevertheless, I expect that the intuition gained from this model can be applied to more general types of periodically driven interacting systems. I assume that one can neglect events where both particles make a simultaneous transition. The interaction between particles is chosen to depend on the "distance" between them. This distance is defined as the smallest number of transitions needed to bring them to the same site. For the graph depicted in Fig. 1 this interaction can get one of three values, U 0 , U 1 or U 2 . For instance U (CC) = U 0 while U (BE) = U 2 . The proof of no-pumping presented in Sec. II was valid for a finite-range interaction between the particles. But the interaction used was of a special type. During a transition only the stationary particles were assumed to feel the interaction. The jumping particle was unaffected. As discussed above it makes more sense that a finite-range interaction between particles is also felt during the transition. In this section I therefore consider such an interaction, which is also assumed to depend on the "distance" between particles, and therefore can have the three values U 1/2 , U 3/2 and U 5/2 . 
As will be evident later, this part of the interaction can result in the breakdown of the no-pumping theorem. These are all the ingredients needed to fully define the model and determine the transition rates between states. Let us take, for example, the transition from state AB to state CB. In this transition the first particle jumps from A to C, while the second particle resides in site B. The many-particle energy of the initial state is E_{AB}(t) = E_A(t) + E_B(t) + U_1, while the many-particle barrier is W_{CB,AB}(t) = E_B(t) + B_{A,C} + U_{3/2}. The resulting transition rate is given by R_{CB,AB}(t) = e^{−β[W_{CB,AB}(t) − E_{AB}(t)]}. The rest of the transition rates are determined using the same prescription. They are too numerous to be listed explicitly here. Note that all the many-particle barriers are symmetric, since the same particle configuration appears for a transition and its reverse. For the example above W_{CB,AB} = W_{AB,CB}. This ensures that the system relaxes to equilibrium when parameters are not varied in time. This two-particle system is operated like a pump by cyclic variation of the site energies E_X(t) and/or the local barriers B_{X,Y}(t). The master equation that describes the dynamics is solved numerically, using a fourth-order Runge-Kutta integrator. The system is propagated until it settles into a periodic state, and then the time integral of the total particle currents between sites is calculated. These total particle currents are given by sums of fluxes of the many-particle transitions in which a particle makes a specific transition. For example, the total particle current for the A → B transition is given by J^tot_{A→B}(t) = Σ_X J_{X,A→B}(t), (12) where the sum runs over the possible positions X of the spectator particle and over which of the two particles plays the role of the spectator. The cyclic variation of parameters results in directed motion when Φ_{X,Y}(τ) = ∫_t^{t+τ} dt' J^tot_{Y→X}(t') ≠ 0 for at least one of the transitions. To check the validity of the no-pumping theorem the integrated currents Φ_{A,B}(t) ≡ ∫_0^t dt' J^ps_{B→A}(t') and Φ_{D,E}(t) were calculated numerically. Choosing U_{1/2} = U_{3/2} = U_{5/2} = 0, with U_0 = 1, U_1 = −0.3, and U_2 = −0.6, allows one to examine a finite-range interaction of the type discussed in Sec. II, where it was predicted to satisfy the no-pumping theorem. The two-particle pump was driven by a cyclic variation of the site energies, using E_A(t) = cos(2πt), E_B(t) = 2 + 1.1 cos(2πt + π/4), E_C(t) = 1 + cos(2πt + π/2)/2, E_D(t) = 1/2 + cos(2πt + π), and E_E(t) = 2 + 2 cos(2πt + 3π/2). This prediction is tested in Fig. 2, which compares the integrated particle fluxes for fixed and time-dependent local barriers. The solid lines correspond to a stochastic pump with time-independent local barriers. The no-pumping theorem concerns the net accumulation of particles after a full cycle. One should therefore examine the value of Φ(t = 1) to see if it vanishes or not. The results depicted in Fig. 2 show that after a full cycle the particle fluxes vanish for the case of time-independent local barriers. In contrast, there is a net particle flux when the local barriers are varied in time. This matches the prediction of the no-pumping theorem. One should keep in mind that the interaction between particles used to generate Fig. 2 is non-local. The results include the time-integrated particle fluxes for intermediate times, and not only after a full cycle, to demonstrate that these fluxes vary with time in a non-trivial manner, and vanish only after a full cycle.
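The following Python sketch mimics the numerical procedure described above: it builds the 25-state rate matrix, integrates the master equation with a fourth-order Runge-Kutta scheme until the dynamics is approximately periodic, and accumulates the integrated current Φ_{A,B} over a driving cycle. The five-site topology, the (flat) local barrier values and the rule assigning a half-integer "distance during a transition" are assumptions chosen only to be consistent with the examples quoted in the text; the site-energy protocol and the values of U_0, U_1 and U_2 are taken from the text. It is therefore not the exact setup behind Figs. 2 and 3, but with the barrier interaction switched off and fixed barriers the integrated current over a full cycle is expected to vanish, in line with the no-pumping theorem.

```python
import numpy as np
from itertools import product

beta = 1.0
sites = list("ABCDE")
links = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E"), ("A", "E")}
links |= {(b, a) for a, b in links}                 # assumed topology of Fig. 1
B_loc = {e: 1.0 for e in links}                     # assumed flat local barriers
U_site = {0: 1.0, 1: -0.3, 2: -0.6}                 # U_0, U_1, U_2 (from the text)
U_bar = {0.5: 0.0, 1.5: 0.0, 2.5: 0.0}              # barrier interaction switched off

def graph_dist(x, y):
    """Minimal number of jumps between sites x and y."""
    frontier, seen, d = {x}, {x}, 0
    while y not in frontier:
        frontier = {b for a in frontier for aa, b in links if aa == a} - seen
        seen |= frontier
        d += 1
    return d

D = {(x, y): graph_dist(x, y) for x in sites for y in sites}

def E_site(x, t):                                    # driving protocol from the text
    c, a, ph = {"A": (0, 1, 0), "B": (2, 1.1, np.pi/4), "C": (1, 0.5, np.pi/2),
                "D": (0.5, 1, np.pi), "E": (2, 2, 3*np.pi/2)}[x]
    return c + a * np.cos(2*np.pi*t + ph)

states = list(product(sites, repeat=2))              # (particle 1, particle 2)
idx = {s: k for k, s in enumerate(states)}

def rate_matrix(t):
    R = np.zeros((25, 25))
    for s in states:
        for mover in (0, 1):
            here, spect = s[mover], s[1 - mover]
            for there in sites:
                if (here, there) not in links:
                    continue
                new = (there, s[1]) if mover == 0 else (s[0], there)
                E_ini = E_site(s[0], t) + E_site(s[1], t) + U_site[D[s]]
                d_bar = min(D[(here, spect)], D[(there, spect)]) + 0.5   # assumed rule
                W = E_site(spect, t) + B_loc[(here, there)] + U_bar[d_bar]
                R[idx[new], idx[s]] += np.exp(-beta * (W - E_ini))
    np.fill_diagonal(R, -R.sum(axis=0))
    return R

def rk4_step(P, t, dt):                              # dP/dt = R(t) P
    k1 = rate_matrix(t) @ P
    k2 = rate_matrix(t + dt/2) @ (P + dt/2 * k1)
    k3 = rate_matrix(t + dt/2) @ (P + dt/2 * k2)
    k4 = rate_matrix(t + dt) @ (P + dt * k3)
    return P + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

def current_B_to_A(R, P):
    """Total particle current B -> A, summed over the spectator configurations."""
    J = 0.0
    for s in states:
        for mover in (0, 1):
            if s[mover] == "B":
                new = ("A", s[1]) if mover == 0 else (s[0], "A")
                J += R[idx[new], idx[s]] * P[idx[s]] - R[idx[s], idx[new]] * P[idx[new]]
    return J

P, dt = np.full(25, 1/25), 2e-3
for cycle in range(20):                              # relax towards the periodic state
    phi_AB = 0.0
    for m in range(int(round(1/dt))):
        t = m * dt
        phi_AB += dt * current_B_to_A(rate_matrix(t), P)
        P = rk4_step(P, t, dt)
print("Phi_{A,B} over the last cycle:", phi_AB)      # expected ~0 for fixed barriers
```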
To test the possible influence of an interaction that affects the jumping particle I compared the dynamics of a pump without barrier interactions, U_{1/2} = U_{3/2} = U_{5/2} = 0, to that of a pump with interaction-dependent barriers. The latter were chosen as U_{1/2} = −2, U_{3/2} = 0, and U_{5/2} = 3. Here the barriers B_{X,Y} were taken to be time-independent, since the no-pumping theorem tells us that directed motion will appear for time-dependent barriers independently of the interaction. Besides the barrier interactions, all other parameters, namely the local barriers, interactions, and site energies used in the numerics, are exactly those used in the fixed-barrier case presented in Fig. 2. The results of the comparison are depicted in Fig. 3. [FIG. 3. Testing the validity of the no-pumping theorem with interaction-dependent barriers. Solid lines correspond to many-particle barriers that do not depend on the state of the system, that is, to the case of U_{1/2} = U_{3/2} = U_{5/2} = 0. Dashed lines correspond to many-particle barriers that are affected by the interaction as described in the text. All the curves were calculated for time-independent B_{X,Y}. The system with the interaction-dependent barriers clearly violates the no-pumping theorem.] One immediately notices that the inclusion of barrier interactions leads to the breakdown of the no-pumping theorem. Such interactions were not considered in the derivation of Sec. II, so there is no contradiction with the conclusions obtained there. Instead, one should view this numerical result as evidence that it is precisely the presence of interactions in the many-particle barriers that results in violation of the no-pumping theorem. IV. DISCUSSION The results presented in Secs. II and III help to clarify the relationship between the properties of particle-particle interactions and the validity of the no-pumping theorem. Specifically, the results indicate that no-pumping holds for quite general interactions, as long as they affect only the many-particle energies. Once the many-body interaction enters into the transition barriers in a non-trivial way, as happened for the model with U_{j/2} ≠ 0 that was studied in Sec. III, no-pumping is likely to be violated. While this gives an answer to the question posed in the introduction, it is worth taking a closer look at the mechanism through which these interactions break the no-pumping theorem. A better understanding of the reason for the violation of no-pumping can be gained by trying to reproduce the proof presented in Sec. II using the model studied in Sec. III and seeing where the derivation breaks down. Since particle conservation holds for any interaction, the problem must arise in the derivation of the cycle equations. Let us consider all the possible cycles of transitions between many-particle states in which one particle makes the series of transitions A → B → C → A while the other is a stationary spectator. Without affecting the generality of the argument, let us assume that the first particle is the spectator. When it is located at site A, one finds e^{β(B_{A,B}+U_{1/2})} J_{A,A→B}(t) + e^{β(B_{B,C}+U_{3/2})} J_{A,B→C}(t) + e^{β(B_{A,C}+U_{1/2})} J_{A,C→A}(t) = 0. One can derive similar equations for cycles in which the spectator is at C, D, or E. The derivation of the cycle equations (10) involved a summation of equations of this type. A crucial part of the derivation was the identification of certain sums of currents J_{X,Y→Z} as the total particle current between sites.
Such an identification was possible because all contributing currents came with the same prefactor, see Eq. (6) or Eq. (12). One immediately sees that the U_{j/2} part of the interaction gives different prefactors to the currents in the expressions above, and therefore prevents one from combining these equations in a way that would result in cycle equations for total particle currents. As a result there is no simultaneous system of cycle equations and conservation laws for the same type of fluxes (the total particle current in this setup), and no-pumping is not ensured. The numerical results presented in Sec. III explicitly show that no-pumping is violated in such cases. One way of interpreting the way that the interaction affects the dynamics of a stochastic pump is by examining the so-called branching fractions. The branching fractions are defined as the conditional probability that a particle will make a specific transition, e.g. X → Y, when it is known that the particle is making a transition out of site X. Branching fractions therefore express the probability of ending up at different sites in a transition. Consider the model studied in Sec. III. Let us calculate the probability that a particle ends at site B, given that it leaves site C. This branching fraction will depend on the location of the other particle (namely, the spectator). A short calculation shows that this branching fraction is P(C → B | C) = e^{−βB_{B,C}} / (e^{−βB_{A,C}} + e^{−βB_{B,C}} + e^{−βB_{C,D}}) when the spectator is at C. In contrast, when the spectator is at A the barrier-interaction terms U_{j/2} enter the exponents and differ between the possible transitions out of C, so the branching fraction takes a different value. This demonstrates that interactions like the one studied in Sec. III result in branching fractions that depend on the many-particle state of the system, expressed here through the dependence on the location of the spectator. This gives the branching fractions an effective time dependence, which is mediated through the evolution of the many-particle probability distribution during the driving cycle. At different times during the cycle different many-particle states are more likely, and with them different values of the branching fractions. A point worth emphasizing is that for single-particle stochastic pumps, as well as for pumps of particles with zero-range interactions, the conditions of time independence of barriers and of branching fractions are essentially equivalent. (Strictly speaking one can add a time-dependent term to all barriers without affecting branching fractions.) This means that previous work could have used branching fractions to express the conditions for no-pumping. Indeed, Maes et al. have presented a derivation of the no-pumping theorem for non-Markovian systems which uses conditions on the time dependence of branching fractions [35]. The condition that the branching fractions will be time independent is equivalent to the so-called "direction-time independence" that was used in other contexts in semi-Markov processes [50,51]. The results presented in Sec. II demonstrate that in models where the interaction does not affect the branching fractions the no-pumping theorem holds. In contrast, the example studied in Sec. III shows that no-pumping breaks down once the branching fractions depend on the state of the system. While this is a numerical result for a single model, and not a general proof, its connection to the known role that branching fractions play in no-pumping strongly suggests that the results are quite general.
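The state dependence of the branching fractions can be seen directly in a few lines of code. In the sketch below the local barriers and the assignment of the U_{j/2} terms for a spectator at A are illustrative assumptions (chosen to be consistent with the examples quoted above), not the values used in the paper; the point is only that a common barrier-interaction term cancels in the ratio while unequal terms do not.

```python
import numpy as np

beta = 1.0
B = {"A": 1.0, "B": 0.7, "D": 1.3}        # assumed local barriers B_{X,C} for C -> X

def branching(U_extra):
    """P(C -> X | leaving C) given the barrier-interaction term for each target."""
    w = {X: np.exp(-beta * (B[X] + U_extra[X])) for X in B}
    norm = sum(w.values())
    return {X: w[X] / norm for X in w}

# Spectator at C: every transition out of C carries the same U_{1/2} term,
# which cancels in the ratio, reproducing the barrier-only expression above.
spectator_C = branching({"A": -2.0, "B": -2.0, "D": -2.0})   # U_{1/2} = -2 everywhere
# Spectator at A (assumed assignment): C->A carries U_{1/2}, C->B and C->D carry U_{3/2}.
spectator_A = branching({"A": -2.0, "B": 0.0, "D": 0.0})     # U_{1/2} = -2, U_{3/2} = 0

print("spectator at C:", spectator_C)
print("spectator at A:", spectator_A)     # differs -> state-dependent branching
```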
Namely, it suggests that a typical interaction that results in state-dependent branching fractions would lead to the breakdown of the no-pumping theorem. Let us briefly return to the paradigmatic examples of zero-range and exclusion interactions discussed earlier. A stochastic pump of particles with zero-range interactions will have state-independent branching fractions, and indeed it is known that such systems satisfy the no-pumping theorem [38]. In contrast, systems with exclusion interactions between particles must have state-dependent branching fractions, simply because the particles block each other. The results presented here mean that it is precisely this property of the interaction, and not its finite range, that causes no-pumping to be violated.
2017-02-01T06:57:16.000Z
2017-02-01T00:00:00.000
{ "year": 2017, "sha1": "20cf8dbb7e61ffd9f7cc0cc2a05e0e142c3ab2bb", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1702.00149", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "20cf8dbb7e61ffd9f7cc0cc2a05e0e142c3ab2bb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
5007106
pes2o/s2orc
v3-fos-license
Influence of Obesity on the Course of Malignant Neoplastic Disease in Patients After Pulmonary Metastasectomy . Background/Aim: The aim of the study was to determine whether increased body mass index (BMI) in patients operated on for lung metastases influences the course of the disease. Materials and Methods: The retrospective data of 97 patients previously operated on for different malignancies were analyzed. There were 40 obese patients (BMI >30 kg/m 2 , mean 33.9±4.5) and 57 non-obese patients (BMI 25.8±2.7 kg/m 2 , p<0.001). Disease-free interval (DFI), the overall survival (OS) and survival after pulmonary metastasectomy were analyzed. Results: DFI and OS were longer in obese than in non-obese patients (82.1±83.5 months vs. 43.0±44.4, p<0.01 and 110.7±81.3 months vs. 69.9±52.9 p<0.005, respectively). Survival after pulmonary metastasectomy was 27.2±25.6 months and was longer in obese and overweight patients than in normal weight patients (20.2±18.4 months vs. 29.4±26.5, p<0.05). Conclusion: Being obese or overweight is a favorable prognostic factor in patients after surgical resection of lung metastases of different malignancies. With the increasing epidemic of obesity in the world, the incidence of malignancies related to obesity also increases (1). Obesity-related cancers include breast cancer in postmenopausal women, colon cancer, cancer of the lower esophagus, gastric cancer, liver cancer, gall bladder cancer, pancreatic cancer, uterine cancer, ovarian cancer and renal cancer (2). Obesity is also associated with increased risk of metastases, including lung metastases, in some cancers (3). In the course of some malignancies a paradoxical phenomenon has been observed, indicating that obesity may be an oncogenic factor and -at the same time -may constitute a favorable prognostic factor (4,5). The dual and opposite influence of obesity on the course of the same disease has been called the obesity paradox and has been described in some chronic diseases, including cardiovascular (6) and cerebrovascular diseases (7). These paradoxical effects of obesity may occur also in patients with metastases, including patients with malignancies not related to obesity. In a recent large-scale study of 4,010 cancer patients in good general condition, with distant metastases, median OS was twice as high in obese patients as in normal weight patients (8). However, there are also reports stating that there is no beneficial effect of obesity on metastatic neoplastic disease (9)(10)(11). The problem of the influence of obesity on the course of metastatic malignancies has not yet been unequivocally explained. Especially, there are no studies on the influence of obesity on survival of patients with operable lung metastases. Therefore, this study was undertaken to evaluate the long-term outcome of surgical treatment of obese and non-obese patients who after resection of primary neoplasm had lung metastases removed. Materials and Methods Data from 99 patients who had a resection of lung metastasis from different primary malignancies between 2001 and 2016 were analyzed. The study was retrospective, and the condition for including patients in the study was access to anesthesia documentation containing body weight and height prior to performing pulmonary metastasectomy. Analysis was performed in the groups depending on body mass index (BMI). 
Underweight was diagnosed when the BMI was below 18.5 kg/m² (1 patient), normal body weight when BMI was in the range of 18.5-24.9 kg/m² (18 patients), overweight when the BMI was in the range of 25.0-29.9 kg/m² (39 patients), obesity class I-IV when BMI was in the range of 30.0-49.9 kg/m² (40 patients), and obesity class V when BMI was >50.0 kg/m² (1 patient). Two patients with extreme body weight (underweight and obesity class V) were excluded from the analysis. In the final analysis the material of the study consisted of 97 patients, including 50 women and 47 men, mean age 62.8±8.7 years, with a mean BMI of 29.2±5.3 kg/m². There were 18 normal weight patients (19%), 39 overweight patients (40%) and 40 patients with obesity class I-IV (41%). Among normal weight patients, there were 9 women and 9 men, aged 61.3±9.1 years, with a BMI of 22.5±1.2 kg/m². Among the overweight patients there were 18 women and 21 men, aged 61.4±8 years, with a BMI of 27.4±1.1 kg/m². Patients with normal weight and overweight were referred to as non-obese patients. The group of obese patients consisted of 23 women and 17 men, aged 64.9±9 years, with a mean BMI of 33.9±3.2 kg/m². BMI differences between the group of 40 obese patients (41%) and the group of 57 normal weight and overweight patients (59%) were statistically significant (non-obese: 25.8±2.7 kg/m² vs. obese: 33.9±4.5 kg/m², p<0.001). The most common coexisting diseases were cardiovascular diseases (hypertension, coronary heart disease, myocardial infarction, stroke, cardiac arrhythmia), previously treated cancers, diabetes mellitus, hypothyroidism and respiratory diseases such as chronic obstructive pulmonary disease and asthma. The incidence of these diseases according to BMI groups is shown in Table I. Surgical removal of lung metastases consisted of mechanical resection in 69 patients and of anatomical resection in 28 patients. In 1 patient, mechanical resection was associated with partial resection of the chest wall. In 9 patients, metastatic resection was non-radical, because of multiple small nodules detected during the surgical procedure (Table II). The following parameters were calculated: the time between primary tumor resection and pulmonary metastasectomy, i.e. disease-free interval (DFI); the time between resection of the primary tumor and death or the date of the last information about the patient (for all patients alive in mid-November 2016), i.e. overall survival (OS); and the time between pulmonary metastasectomy and death or the date of the last information about the patient. Information about the date of death was obtained from the Regional Office for Civil Status and the Center for Personalization of Documents of the Ministry of Interior and Administration in Warsaw. In the statistical analysis of the results the program Statistica 12 (Krakow, Poland) was used. The survival time analysis was based on Kaplan-Meier curves with Wilcoxon's test according to Gehan for group comparison. When comparing normally distributed data, Student's t-test was used, and for non-parametric data the Mann-Whitney U-test was used. The value of p<0.05 was considered statistically significant. Results DFI was 59.2±6.1 months (median: 39 months) for the entire group and was longer in obese than in non-obese patients: 82.1±83.5 vs. 43.0±44.4 months, p<0.01. Figure 1 shows Kaplan-Meier curves for DFI in the groups of obese and non-obese patients. Figure 2 shows Kaplan-Meier curves for OS in the groups of obese and non-obese patients.
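For readers who want to reproduce this type of analysis, a minimal sketch of the survival comparison is given below in Python using the lifelines package, as a stand-in for Statistica 12. The durations and event indicators are hypothetical, and the "wilcoxon" weighting is assumed to correspond to the Gehan-type generalized Wilcoxon test; check the lifelines documentation before relying on it.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical DFI data in months, for illustration only.
dfi_obese      = np.array([120, 45, 200, 80, 30, 150])
dead_obese     = np.array([1, 1, 0, 1, 1, 0])            # 1 = event observed
dfi_non_obese  = np.array([30, 60, 25, 90, 40, 15])
dead_non_obese = np.array([1, 1, 1, 0, 1, 1])

# Kaplan-Meier estimate for one group
km = KaplanMeierFitter()
km.fit(dfi_obese, event_observed=dead_obese, label="obese")
print(km.median_survival_time_)

# Group comparison; weightings="wilcoxon" is assumed to give a Gehan/Breslow-type test.
result = logrank_test(dfi_obese, dfi_non_obese,
                      event_observed_A=dead_obese,
                      event_observed_B=dead_non_obese,
                      weightings="wilcoxon")
print(result.p_value)
```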
The time from pulmonary metastasectomy to death or until the last observation of the patient was 27.2±25.6 months (median: 23 months). The time from pulmonary metastasectomy to death or until the last observation of the patient in the group of 79 obese and overweight patients was longer than in the group of 18 normal weight patients (normal weight: 20.2±18.4 months vs. obese and overweight: 29.4±26.5 months, p<0.05). Discussion In this study, encompassing a group of 97 patients operated on for pulmonary metastases of different histopathological types, including squamous cell carcinoma, adenocarcinoma, melanoma and sarcoma, and of different localization of primary tumors, obesity was found to be a factor indicating a longer course of the neoplastic disease, counting both from the surgical removal of the primary tumor to the removal of the lung metastasis and to death or to the last observation. Thus, longer DFI and longer OS in obese as compared with non-obese patients revealed a protective effect of obesity in the course of some neoplastic diseases with operable pulmonary metastases. Moreover, obesity and overweight were found to be favorable prognostic factors, indicating longer survival after pulmonary metastasectomy. Occurrence of pulmonary metastases is usually a poor prognostic factor, but in selected cases, mainly isolated, peripheral and small (<3 cm) metastatic lesions in patients with good general condition and without contraindications to surgical treatment, their removal is possible (12). In our material pulmonary metastasectomy was performed in patients with various neoplasms, mostly cancer of the large intestine, kidney, breast, uterus and lung, and less often malignancies of other organs. Mean DFI was 59.2±6.1 months and was almost twice as long in obese as in non-obese patients, indicating that obesity was a factor related to slower progression of neoplastic disease after resection of the primary tumor. Mean OS was 86.7±60.6 months and, similarly to DFI, was longer in obese than in non-obese patients (110.7±81.3 months vs. 69.9±52.9 months, p<0.005). Moreover, comparison of survival after pulmonary metastasectomy between the group of obese and overweight patients and the group of normal weight patients revealed longer survival in the former group. More than half of our patients, i.e. 62 (64%), suffered from a neoplasm belonging to the group of cancers related to obesity, such as colorectal, breast, uterine, ovarian, and kidney cancer. Obesity is a factor associated with the risk of occurrence of multiple malignancies (1, 2, 13-15) and with the risk of developing metastases in the course of some malignancies (3, 16). However, a favorable influence of overweight and obesity on the course of some neoplastic diseases has also been observed (17-19). Recently, a few studies have reported a relation between obesity and overweight and better prognosis in some cancer patients (20-22). In patients with locally advanced non-small-cell lung cancer obesity was associated with increased OS (22). In the early stage of colorectal cancer, the risk of death was lowest in the overweight patients (20). In different histopathological types of cancer, the risk of death decreased with increasing subcutaneous fat tissue (21). Increased OS in obese patients with metastases has been observed in some other studies (8, 23, 24). In patients with remote metastases of nasopharyngeal cancer, obesity and overweight were favorable prognostic factors related to increased OS compared to non-obese patients (24).
In patients with colorectal cancer who had surgery because of liver metastases (median follow-up of 39 months), obesity was a factor related to longer OS (23). In patients undergoing radiotherapy due to metastases of neoplasms of various organs, lower all-cause mortality was observed in both obese patients and overweight patients compared to patients with normal weight (8). However, we could not find any published data regarding the influence of obesity at the time of pulmonary metastasectomy on DFI and OS. In considering the relationship between obesity and the course of neoplastic disease, the impact of such obesity-related factors as adiponectin and impaired glucose metabolism (23) can be taken into account. In patients with non-metastatic kidney cancer a negative correlation between adiponectin and BMI and shorter OS in patients with high serum adiponectin was observed (5). In obese patients surgically treated for liver metastases of colorectal cancer, diabetes had an unfavorable influence on OS, unlike in obese patients without diabetes (23). The strengths of our study are the large number of patients with operable pulmonary metastases and the long time of observation of the patients. The weak point in our study is that the causes of death of the observed patients were unknown; whether these were cancer-related causes of death or related to cardiovascular or other diseases remains unexplained. [Figure 2. Probability of overall survival in 40 obese and 57 non-obese patients operated on for primary neoplasm and lung metastasis.] The relationship between obesity and survival may differ depending on the cause of death: for example, obese patients with renal cancer have been shown to have shortened OS, but after taking into account the causes of death, it turned out that survival related only to cancer was longer in obese than in non-obese patients; the authors called this phenomenon a 'paradox within a paradox' (25). The disadvantage of our work is also that we did not have data on BMI at the time of primary tumor surgery or data on the dynamics of BMI changes in the course of the disease. Conclusion In patients with operable pulmonary metastases of different malignancies, increased BMI significantly influences the course of the neoplastic disease. Overall survival and disease-free interval are longer in obese than in non-obese patients undergoing pulmonary metastasectomy. Being obese or overweight is a favorable prognostic factor in patients after surgical resection of lung metastases of different malignancies. Conflicts of Interest The Authors declare no conflicts of interest in relation to this article.
2018-04-03T05:14:46.387Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "ec8f3dffbb0c5c610970d34984107c260f136e61", "oa_license": null, "oa_url": "http://iv.iiarjournals.org/content/32/1/197.full.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "521f71c85598c00770f3987cf9d9cc9c627e23bc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
243489424
pes2o/s2orc
v3-fos-license
Landscape of Hopx expression in cells of the immune system Homeodomain only protein (Hopx) is a regulator of cell differentiation and function, and it has also emerged as a crucial marker of specific developmental and differentiation potentials. Hopx expression and functions have been identified in some stem cells, tumors, and in certain immune cells. However, expression of Hopx in immune cells remains insufficiently characterized. Here we report a comprehensive pattern of Hopx expression in multiple types of immune cells under steady state conditions. By utilizing single-cell RNA sequencing (scRNA-seq) and flow cytometric analysis, we characterize a constitutive expression of Hopx in specific subsets of CD4+ and CD8+ T cells and B cells, as well as natural killer (NK), NKT, and myeloid cells. In contrast, Hopx expression is not present in conventional dendritic cells and eosinophils. The utility of identifying expression of Hopx in immune cells may prove vital in delineating specific roles of Hopx under multiple immune conditions. Introduction Hopx is a highly evolutionarily conserved, homeodomain-containing, small (73-amino acid) protein that lacks consensus residues required for protein-DNA interactions but can function as a transcription co-factor (Chen et al., 2002;Shin et al., 2002). It is encoded by the Hopx (HOPX) gene, which produces 3 murine and 5 human mRNA splice variants which encode for 1 isoform in mice and 3 in humans (Mariotto et al., 2016). Although Hopx lacks DNA binding capacity, it can interact with various protein complexes that modulate the transcription of various genes thereby regulating cell differentiation and mediating tumor suppression. Hopx expression is associated with differentiation and stemness in various cells including cardiomyocyte-committed cardiac progenitor cells (Jain et al., 2015b), adult intestinal stem cells (Takeda et al., 2011), hair follicle bulge stem cells , type 1 alveolar cells (Jain et al., 2015a) and neural stem cells (Zweifel et al., 2018). The Hopx locus is hypermethylated resulting in decreased Hopx expression in various cancers and metastases including head and neck squamous cell carcinomas (Yap et al., 2016), breast (Kikuchi et al., 2017), colorectal , pancreatic neuroendocrine (Ushiku et al., 2016), and lung cancers (Chen et al., 2015). In tumors, Hopx mediates the promoter silencing of SNAIL, a transcription factor that initiates epithelial-mesenchymal transition (Ren et al., 2017). Hopx also activates Ras-induced senescence to suppress metastasis and tumorigenesis (Chen et al., 2015). Expression of Hopx has also been previously identified in human acute myeloid leukemias (AML) (Gentles et al., 2010;Lin et al., 2017;Torrebadell et al., 2018). However, in contrast to tumor suppressor roles of Hopx in other types of cancer Yap et al., 2016), the upregulation of Hopx expression in AML cells has been correlated with decreased remission and survival (Lin et al., 2017;Torrebadell et al., 2018). Hopx recruits histone deacetylases (HDACs) (Trivedi et al., 2010), Smads, and members of Mi-2/NuRD complex (Nucleosome remodeling deacetylase) to repress Wnt signaling during cardiogenesis (Jain et al., 2015b). Hopx regulates primitive hematopoiesis in human cardiac progenitor cells and endothelial cells by repressing Wnt/β-catenin signaling in order to promote hemogenesis (Palpant et al., 2017). In the immune system, Hopx expression and specific functions have been identified in vivo in Foxp3 þ regulatory T (Treg) cells. 
In peripherally induced Treg (pTreg) cells, Hopx regulates expression of activator protein 1 (AP-1) transcription factors, production of interleukin 2 (IL-2) and the fitness of pTreg cells under inflammatory conditions (Hawiger et al., 2010;Jones and Hawiger, 2017;Jones et al., 2015). In addition to its specific molecular functions described above, Hopx has recently emerged as a crucial marker of the specific developmental and differentiation potentials of progenitor populations in various non-hematopoietic tissues both in human and mouse systems (Mariotto et al., 2016). The most recent results now extended such capacity of Hopx also to cells of the immune cells by uncovering the key roles of Hopx in indicating specific pre-effector differentiation potentials induced early in CD4 þ T cells following their antigen-specific activation in the steady state (Opejin et al., 2020). The expression of Hopx can also be found in other types of murine immune cells, including hematopoietic stem cells, some subsets of effector CD4 þ and CD8 þ T cells, natural killer (NK), and NKT cells (Albrecht et al., 2010;Baas et al., 2016;Bezman et al., 2012;Cano-Gamez et al., 2020;Capone et al., 2021;Crawford et al., 2014;De Simone et al., 2019;Descatoire et al., 2014;Gordy et al., 2011;Lin et al., 2020;Mariotto et al., 2016;Patil et al., 2018;Serroukh et al., 2018;Wirth et al., 2010;Zhou et al., 2015). Further, Hopx has been found to be expressed in some subsets of human CD4 þ T cells, γδ T cells, and B cells (Albrecht et al., 2010;Cano-Gamez et al., 2020;Capone et al., 2021;Descatoire et al., 2014;Mariotto et al., 2016;Patil et al., 2018;Pizzolato et al., 2019;Serroukh et al., 2018;Szabo et al., 2019). However, the comparative expression of Hopx in a broad cross-section of immune cells has not been studied systematically, therefore hampering rigorous elucidation of the specific roles of Hopx in the immune system. Unfortunately, a reliable flow cytometry adaptable antibody specific to murine Hopx is not available. Here we identify the previously uncharacterized pattern of Hopx expression in various immune cells under steady state conditions by using single-cell RNA sequencing (scRNA-seq) as well as flow cytometric analysis based on a Hopx reporter mouse model (Hopx 3FlagV2AGFP ) (here referred to as Hopx GFP ) . This previously established and validated reporter model expresses a fusion protein of Hopx, 3Flag peptides, viral 2A self-cleaving peptide, and green fluorescent protein (GFP), therefore faithfully tracking equimolar expression of Hopx and GFP reporter (Jain et al., 2015a(Jain et al., , 2015bJones et al., 2015;Opejin et al., 2020;Takeda et al., 2013;Zacharias et al., 2018). Within lymphoid tissues, we identify specific expression of Hopx in various subsets of murine CD4 þ and CD8 þ T cells, naive B cells, NK cells, NKT cells, and some myeloid cells, with notable exceptions of dendritic cells and eosinophils. Therefore, these results open new directions and ways to investigate the roles of Hopx in the cells of the immune system. scRNA-seq reveals Hopx expression in various immune cells To identify Hopx mRNA expression in various immune cells, we utilized a publicly available scRNA-seq dataset of CD45 þ splenocytes from un-perturbed C57BL/6 mice prepared by the ImmGen Consortium (Heng and Painter, 2008). 
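A workflow of the kind applied to this dataset can be sketched as follows. The paper used Seurat in R; the snippet below uses scanpy in Python as a rough analogue, with a hypothetical input path and placeholder quality-control and clustering parameters, and it is not the exact pipeline behind Figure 1.

```python
import scanpy as sc

# Cluster CD45+ splenocytes, embed them with UMAP, and inspect Hopx expression
# per cluster. File name and parameters are placeholders for illustration.
adata = sc.read_10x_mtx("immgen_cd45_splenocytes/")     # hypothetical path

# Basic quality control and normalization
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction, unsupervised clustering and UMAP embedding
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=30, use_highly_variable=True)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)
sc.tl.umap(adata)

# Feature plot of Hopx on the UMAP and a per-cluster dot plot,
# analogous to Figure 1B and 1C
sc.pl.umap(adata, color=["leiden", "Hopx"])
sc.pl.dotplot(adata, var_names=["Hopx"], groupby="leiden")
```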
After quality control, unsupervised clustering followed by two-dimensional uniform manifold approximation and projection (UMAP) of 9629 cells revealed 12 clusters that could be identified as 7 immune cell types by differential expression of marker genes ( Figure 1A and S1A) (Butler et al., 2018;Stuart et al., 2019). We next characterized Hopx mRNA expression within these cell types and identified Hopx expression predominantly within Treg cells, monocyte/macrophages, and NK cells, as well as in a few non-Treg CD4 þ T cells, CD8 þ T cells, and B cells. Conversely, very little Hopx mRNA was detected in dendritic cells ( Figure 1B and 1C). Overall, results of scRNA-seq indicated varied expression of Hopx mRNA in different immune cell types. Multiple T cell subsets express Hopx To further robustly characterize Hopx expression in T cells in vivo, we used Hopx GFP Foxp3 RFP double reporter mice on a C57BL/6J background that we originally prepared (Jones et al., 2015) by crossing Hopx GFP reporter mice with Foxp3 RFP reporter mice in which Foxp3 is co-expressed with red fluorescent protein (RFP) (Wan and Flavell, 2005). We isolated thymic, splenic, and peripheral lymph node cells Figure 1. scRNA-seq reveals Hopx expression in various immune cells. scRNA-seq data obtained from CD45 þ splenocytes of C57BL/6J mice were retrieved from the ImmGen database and processed using Seurat (see methods). (A) UMAP plot shows the cluster distribution of various immune cells characterized by differential expression of marker genes (see Figure S1). Doublets were identified as expressing multiple marker genes from unrelated cell types. (B) Feature plot shows the expression of Hopx (in blue) in individual cells on a UMAP plot of the various clusters. (C) Dot plot shows the average expression of Hopx and percent of cells expressing Hopx in each cluster. from adult double reporter mice and analyzed them by flow cytometry. We identified Hopx þ Foxp3 þ and Hopx þ Foxp3 neg T cells in the thymus, spleen, and lymph nodes ( Figure 2A). As expected, almost all Hopx þ Foxp3 þ T cells expressed CD4 in all three tissues, consistent with a Treg cell phenotype (Figure 2A and 2B). However, among Hopx þ Foxp3 neg T cells, the relative proportions of CD4 þ and CD8 þ T cells differ across the tissues, with the highest percentage of CD4 þ T cells in the thymus and the highest percentage of CD8 þ T cells in the lymph nodes (Figure 2A-C). We extended this analysis to characterize Hopx expression in all CD4 þ Foxp3 neg and CD8 þ T cells ( Figure 2D). We identified the highest proportion of Hopx þ cells among CD4 þ Foxp3 neg and CD8 þ T cells in the spleen and lymph nodes ( Figure 2E). Further, whereas no more than 20% of CD4 þ Foxp3 neg T cells expressed Hopx, about 60% of CD8 þ T cells in the spleen and lymph nodes expressed Hopx. Overall, we confirmed Hopx expression among Treg cells and some other CD4 þ T cells and identified Hopx expression in the majority of CD8 þ T cells present under steady state conditions. Graphs show the percentages of CD4 þ cells among Hopx þ Fopx3 þ (left) and Hopx þ Foxp3 neg (right) T cells from the thymus, spleen, or lymph nodes as indicated (n ¼ 5-6 mice from three independent experiments). (C) Graph shows the percentages of CD8α þ cells among Hopx þ Foxp3 neg T cells from the thymus, spleen, or lymph nodes as indicated (n ¼ 5-6 mice from three independent experiments). 
(D) Representative overlaid histograms show Hopx (GFP) expression among CD8α þ (gray) and CD4 þ Foxp3 neg (red) T cells from the thymus, spleen, or lymph nodes as indicated. (E) Graphs show the percentages of Hopx þ cells among CD4 þ Foxp3 neg (left) and CD8α þ (right) T cells from the thymus, spleen, or lymph nodes as indicated (n ¼ 5-6 mice from three independent experiments). (B, C, and E) Graphs show mean AE SD. nsnot significant, *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001 determined by one-way ANOVA with Tukey's multiple comparisons. Hopx is expressed in naïve B cells We next characterized Hopx expression in B cells by analyzing cells from the bone marrow, spleen, and peripheral lymph nodes by flow cytometry. We observed Hopx expression among B cells from all three tissues, with the most Hopx þ B cells in the spleen and lymph nodes in comparison to the bone marrow ( Figure 3A and 3B). To determine whether Hopx was expressed in specific subsets of splenic B cells, we identified naïve B cells as IgD þ cells ( Figure 3C). In comparison to IgD neg cells, IgD þ (naïve) B cells from the spleen expressed more Hopx. Additionally, CD24 has been shown to be expressed at high levels in transitional and memory B cells (Mensah et al., 2018). We identified CD24 hi and CD24 int B cells in the spleen and observed higher expression of Hopx in CD24 int B cells ( Figure 3E and 3F), further suggesting a more specific expression of Hopx in undifferentiated B cells. Overall, we conclude that many B cells, especially naïve B cells in secondary lymphoid organs, express Hopx, but Hopx expression may decrease upon B cell differentiation. Hopx is expressed in most NK and NKT cells Hopx mRNA has been detected in NK and NKT cells ( Figure 1) and (Bezman et al., 2012;Gordy et al., 2011). To confirm such expression and further characterize Hopx in specific subsets of NK and NKT cells, we examined such cells by flow cytometry. We observed Hopx expression in almost all NK and NKT cells ( Figure 4A and B). Further, we observed higher expression of Hopx in NKT cells than in NK cells as determined by median fluorescence intensity (MFI) ( Figure 4C). Killer cell lectin-like receptor G1 (KLRG1) is a C-type lectin inhibitory receptor that is expressed by some NK cells (Huntington et al., 2007). We observed Hopx expression in most KLRG1 þ cells, but also in many KLRG1 neg NK cells, therefore indicating a heterogeneity among Hopx þ NK cells ( Figure 4D and 4E). In conclusion, we identified Hopx expression in most NK and NKT cells regardless of their expression of other key markers. Variegated Hopx expression pattern in myeloid cells We next examined by flow cytometry Hopx expression in myeloid cells in which cells Hopx expression has not been previously reported but was identified by scRNA-seq (Figure 1). We identified different splenic myeloid cells using previously validated cell surface markers (Guilliams et al., 2016;Rose et al., 2012;Tsai et al., 2017;Yeung and So, 2009) ( Figure 5A). Consistent with the scRNA-seq results (Figure 1), we did not observe appreciable Hopx expression in the conventional dendritic cell and eosinophil populations ( Figure 5B and 5C). In contrast and also in agreement with the scRNA-seq, many monocytes and some macrophages were characterized by high expression of Hopx, whereas neutrophils had intermediate Hopx expression (Figure 5B and 5C). 
Interestingly, expression of Hopx in monocytes, macrophages, and neutrophils had a roughly bimodal distribution, indicating possible functional or differentiation states within these cell populations ( Figure 5B). In conclusion, we observed Hopx expression in splenic monocytes, macrophages, and neutrophils while Hopx was mostly absent from eosinophil and conventional dendritic cell populations. Discussion Our results provide a comprehensive and comparative analysis of Hopx expression in multiple types of immune cells in primary and secondary lymphoid organs. By using the flow cytometric analysis of cells from the Hopx GFP reporter mice, we crucially complemented and extended the results of scRNA-seq analysis. This new analysis confirmed some of the previously reported specific profiles of Hopx expression in immune cells including some CD4 þ and CD8 þ T cells, NK, NKT cells, and also some B cells ( Patil et al., 2018;Serroukh et al., 2018;Wirth et al., 2010). However, the previous results were based on multiple techniques and varying species and showed Hopx expression in mostly purified cell populations, including those induced in vitro. In contrast, we now provide a systematic analysis of unsorted material obtained ex vivo and performed using standardized methodology focused on individual murine immune cells. Therefore, our results allow for comparative analysis of Hopx expression among various immune cells across multiple tissues under the same immune conditions. Homeodomain only protein (Hopx) is a regulator of immune cell function, and it has also emerged as a crucial marker of the specific developmental and differentiation potentials among some hematopoietic cells (Opejin et al., 2020;Zhou et al., 2015). We identified Hopx expression in some CD4 þ and CD8 þ T cells as reported previously (Albrecht et al., 2010;Bezman et al., 2012;Cano-Gamez et al., 2020;Capone et al., 2021;Hawiger et al., 2010;Jones et al., 2015;Serroukh et al., 2018;Wirth et al., 2010). Interestingly, in contrast to the majority of all CD8 þ T cells in spleen and lymph nodes that expressed Hopx, only a relatively small portion of CD4 þ T cells expressed Hopx, further underscoring important differences between CD4 þ and CD8 þ T cells and consistent with the recent identification of a population of Hopx hi pre-effectors among CD4 þ T cells (Opejin et al., 2020). Further, contrary to the results previously reported in B cells (Descatoire et al., 2014), we found a preferential expression of Hopx in naïve B cells and not IgD neg or CD24 hi B cells. Interestingly, the proportion of such of Hopx þ B cells is higher in the secondary lymphoid organs than in the bone marrow. Future research may reveal potentially diverse specific fates and functions of such Hopx þ and Hopx neg B cells. Among all major types of the immune cells analyzed, the highest proportion of Hopx þ cells was registered among NK and NKT cells. While the biological significance of such pervasive Hopx expression in these cells remain to be revealed, it is interesting to speculate that such absence of Hopx expression diversity may be determined by the lack of antigenic receptor variety. Finally, our results identified unique Hopx expression patterns among myeloid cells. Hopx expression was previously reported in monocytes (Monaghan et al., 2019). Our results now revealed that the vast majority of monocytes expresses Hopx. However, only about half of macrophages and neutrophils showed such expression. 
In contrast, Hopx expression was not observed in conventional dendritic cells and eosinophils, therefore raising further questions about the specific roles of Hopx in different immune cell types. Overall, we provide a comprehensive analysis of the expression pattern in immune cells of Hopx, an emerging regulator and marker of various cellular programs. Our results may inspire future avenues of research into the specific roles of Hopx in these immune cells. Limitations of the study We focused our analysis on steady state conditions to provide a baseline for future studies. Hopx expression in specific cell types may change upon disruption of homeostasis. We also limited our focus to major secondary lymphoid organs, and we do not characterize expression of Hopx in non-lymphoid resident immune cells, including those at the anatomical barriers. Finally, due to available experimental methods, the study is focused on a complete characterization of murine immune cells only. (B, C, and E) Graphs show mean ± SD. ns, not significant; **P < 0.01 and ****P < 0.0001 determined by unpaired two-tailed t test (B and C) or one-way ANOVA with Tukey's multiple comparisons (E). Mice Foxp3 IRES-mRFP (Foxp3 RFP) reporter mice (Wan and Flavell, 2005), Hopx 3XFlagGFP (Hopx GFP) reporter mice, Hopx GFP-Foxp3 RFP double-reporter mice (Jones et al., 2015), all on the C57BL/6J background, were previously described. 6-9-week-old sex- and age-matched littermates were used for experiments. All mice were maintained in our facility under specific pathogen-free conditions and used in accordance with the guidelines of the Saint Louis University Institutional Animal Care and Use Committee. Flow cytometry Cells from thymi, peripheral (axial, brachial, and inguinal) lymph nodes, spleens, and bone marrow (from femurs and tibiae) were isolated and analyzed separately. For cell surface marker staining, cells were first incubated with Zombie Aqua Live/Dead viability dye according to the manufacturer's protocol (BioLegend), pre-incubated with Fc-block (anti-CD16/32, clone 2.4G2, produced in-house from the corresponding hybridoma obtained from ATCC), and then incubated in FACS buffer (PBS supplemented with 2% fetal bovine serum (FBS)) with fluorochrome-conjugated antibodies (listed in the Key Resource Table) for 25 min at 4 °C. Myeloid cells were isolated from spleens by incubating shredded tissues in 2.5 mg/mL Collagenase D (Roche) in RPMI 1640 medium (Hyclone) supplemented with Penicillin/Streptomycin (100 U/mL), HEPES (10 mM), Sodium Pyruvate (1 mM), and 2-Mercaptoethanol (55 μM) (all Gibco) at 37 °C for 37 min, followed by incubation with EDTA (10 mM) for 5 min at 37 °C. After incubation, cells were passed through 100 μm strainers (VWR) and washed using Hanks' Balanced Salt Solution (Gibco) supplemented with 2% fetal bovine serum (FBS) (Gemini Bio) and 1 mM EDTA to obtain single-cell suspensions, which were then stained for cell surface markers as described above. All samples were acquired on a BD LSRFortessa (BD), and data were analyzed with FlowJo software (FlowJo, LLC). Single-cell RNA sequencing analysis Data from sorted CD45+ splenocytes from a C57BL/6J mouse were obtained from the ImmGen Consortium (Heng and Painter, 2008). The raw count matrix was processed with the R package Seurat (v3.2.0) following the workflow and recommendations in the Guided Clustering Tutorial from the Satija Lab (https://satijalab.org/seurat/v3.2/pbmc3k_tutorial.html).
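A minimal R sketch of that workflow is shown below. It is an illustration of the tutorial cited above rather than the exact code used for this study: the input file name, the quality-control cut-offs, the number of dimensions, and the clustering resolution are placeholder assumptions, and only the overall sequence of steps (quality control, normalization, clustering, UMAP, and inspection of Hopx) follows the text.

library(Seurat)   # v3.x API, as referenced above

# Load the downloaded count matrix (genes x cells); the file name is illustrative
counts <- as.matrix(read.csv("immgen_cd45_spleen_counts.csv", row.names = 1))
spleen <- CreateSeuratObject(counts = counts, project = "ImmGen_CD45",
                             min.cells = 3, min.features = 200)

# Basic quality control on detected features and mitochondrial content
spleen[["percent.mt"]] <- PercentageFeatureSet(spleen, pattern = "^mt-")
spleen <- subset(spleen, subset = nFeature_RNA > 200 & percent.mt < 10)

# Normalization, variable-feature selection, scaling, and PCA
spleen <- NormalizeData(spleen)
spleen <- FindVariableFeatures(spleen, selection.method = "vst", nfeatures = 2000)
spleen <- ScaleData(spleen)
spleen <- RunPCA(spleen, npcs = 30)

# Unsupervised clustering followed by a two-dimensional UMAP embedding
spleen <- FindNeighbors(spleen, dims = 1:20)
spleen <- FindClusters(spleen, resolution = 0.8)
spleen <- RunUMAP(spleen, dims = 1:20)

# Hopx expression per cell and per cluster, analogous to Figure 1B and 1C
DimPlot(spleen, label = TRUE)
FeaturePlot(spleen, features = "Hopx")
DotPlot(spleen, features = "Hopx")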
Quantification and statistical analyses Sex- and age-matched mice of specified genotypes were randomly assigned into individual experimental groups. Data are presented as mean ± standard deviation (SD). No statistical method was used to predetermine sample size. P values were calculated in Prism 9 (GraphPad Software) using unpaired two-tailed t tests and one-way ANOVAs with Tukey's multiple comparisons, as indicated in the corresponding figure legends. Differences were considered to be statistically significant when p < 0.05. Representative overlaid histograms show Hopx (GFP) expression among the indicated cell types. (C) Graph shows the percentages of Hopx+ cells among the indicated cell populations (n = 5 mice from two independent experiments). Graph shows mean ± SD. **P < 0.01 and ****P < 0.0001 determined by one-way ANOVA with Tukey's multiple comparisons. Declarations Author contribution statement Jessica Bourque, Adeleye Opejin: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Alexey Surnov, Cindy Gross: Performed the experiments. Courtney A. Iberg, Rajan Jain, Jonathan A. Epstein: Contributed reagents, materials, analysis tools or data. Daniel Hawiger: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. Funding statement This work was supported by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health (R01AI113903) (to D.H.) and the Burroughs Wellcome Fund (1014786) (to R.J.). Data availability statement Data included in article/supplementary material/referenced in article. Declaration of interests statement The authors declare no conflict of interest. Additional information Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2021.e08311.
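The comparisons described in the Quantification and statistical analyses section can also be reproduced outside Prism. The R sketch below is illustrative only; the data frame and the column names (tissue, group, pct_hopx) are placeholders rather than the actual analysis files.

# One-way ANOVA with Tukey's multiple-comparison test across three tissues,
# as applied to percentages of Hopx+ cells (compare Figure 2)
df$tissue <- factor(df$tissue, levels = c("thymus", "spleen", "lymph_node"))
fit <- aov(pct_hopx ~ tissue, data = df)
summary(fit)     # overall ANOVA P value
TukeyHSD(fit)    # pairwise comparisons between tissues

# Unpaired two-tailed t test for a two-group comparison
# (Welch correction by default; set var.equal = TRUE for the classical Student test)
t.test(pct_hopx ~ group, data = df)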
Identification of Crucial Amino Acids in Begomovirus C4 Proteins Involved in the Modulation of the Severity of Leaf Curling Symptoms Begomoviruses frequently inflict upward or downward leaf curling symptoms on infected plants, leading to severe economic damages. Knowledge of the underlying mechanism controlling the leaf curling severity may facilitate the development of alternative disease management strategies. In this study, through genomic recombination between Ageratum yellow vein virus Nan-Tou strain (AYVV-NT) and Tomato leaf curl virus Tai-Chung Strain (TLCV-TC), which caused upward and downward leaf curling on Nicotiana benthamiana, respectively, it was found that the coding region of C4 protein might be involved in the determination of leaf curling directions. Sequence comparison and mutational analysis revealed that the cysteine and glycine at position 8 and 14 of AYVV-TC C4 protein, respectively, are involved in the modulation of leaf curling symptoms. Cross-protection assays further demonstrated that N. benthamiana inoculated with AYVV-carrying mutations of the aforementioned amino acids exhibited attenuated leaf curling symptoms under the challenge of wild-type AYVV-NT. Together, these findings revealed a new function of begomovirus C4 proteins involved in the modulation of leaf curling severity during symptom formation and suggested potential applications for managing viral diseases through manipulating the symptoms. Introduction The family Geminiviridae consists of viruses with single-stranded circular DNA genomes encapsidated in twinned incomplete icosahedral virions [1]. Geminiviruses in the genus Begomovirus are transmitted by whiteflies (Bemisia tabaci cryptic species complex) and infects dicotyledonous plants, causing severe damages to crops worldwide [2][3][4][5]. Typical symptoms caused by begomoviruses include leaf curling, vein or leaf yellowing, and mosaic [4]. However, the mechanisms underlying the elicitation of specific symptoms and the modulation of disease severity remain largely elusive. Knowledge on the regulation of the formation of specific symptoms may provide deeper insights into the developmental and physiological processes of host plants and pave the way for the development of alternative viral disease management measures through the manipulation of symptom formation. Several proteins encoded by geminiviruses have been demonstrated to be determinants of specific symptoms. Geminiviruses harbor either one or two genomic DNA components, designated mono-or bi-partite genomes, respectively. The single-stranded circular genomic DNA is ambisense, encoding proteins on both the sense of the virion-encapsidated DNA (V-sense) and on the complementary sense (C-sense). On the V-sense are the open reading frames (ORFs) for coat protein (CP, or V1) and movement protein (MP, or V2). On the C-sense reside the overlapping ORFs for replication-associated protein (Rep, or C1), transcription activator protein (TrAP, or C2), replication enhancer (REn, or C3), and C4, which is completely embedded in the ORF for Rep but on a different frame [1]. C4 protein has been studied extensively in recent years and demonstrated to be an important symptom determinant in geminivirus infections [6][7][8][9][10][11][12][13][14][15][16][17]. In addition to C4 protein, Rep [9,11,18], C2 [19][20][21], and V2 [9,11,15] have also been shown to be symptom determinants in the infection cycles of different geminiviruses. 
Furthermore, a protein, βC1, encoded by a betasatellite associated with Tomato yellow leaf curl China virus, has been demonstrated to interfere with leaf development by directly interacting with the host protein asymmetric leaf 1 (AS1) [22]. However, the determinant(s) for the specific directions and severity of leaf curling symptoms remained to be explored. Among these known symptom determinants, C4 protein has attracted much attention in recent years. Being the smallest and one of the least-conserved proteins encoded by geminiviruses [8], C4 proteins may exhibit the highest number of functions in infection cycles and pathogenesis, with new functions continuing to be identified [23,24]. The mechanisms of certain functions of C4 protein have been elucidated. For example, C4 proteins may induce the abnormal development of plants by regulating the brassinosteroid signaling pathway through interactions with members of SHAGGY-like protein kinase [7,12,[24][25][26]. C4 proteins of certain begomoviruses have been shown to be a viral suppressor of RNA silencing [27][28][29][30]. The suppression was mediated either by direct binding to the singlestranded microRNAs (miRNAs) and small interfering RNAs (siRNAs) [27,31] to block the cleavage or by interacting with plasma membrane proteins BAM1 or BAM2 to interfere with the intercellular spread of miRNAs or siRNAs [8,32,33]. C4 protein has also been reported to be involved in nucleocytoplasmic shuttling [34] and drought resistance [35]. However, it should be noted that the C4 proteins of different geminiviruses do not share equal functions [36], as C4 gene is the least conserved among geminivirus-encoded genes [8]. Despite the richness of the previous studies on C4 proteins of geminiviruses, the role of C4 protein in the determination of leaf curling severity has not been revealed in detail. Previously, we have developed a simplified method for the generation of infectious constructs of begomoviruses through rolling circle amplification (RCA) and demonstrated the applicability of the method by constructing infectious clones of two mono-partite begomoviruses, Ageratum yellow vein virus Nan-Tou strain (AYVV-NT), Tomato leaf curl virus Tai-Chung strain (TLCV-TC), designated pBinAYVV, pBinTLCV, respectively, and a bi-partite begomovirus, Squash leaf curl virus Yun-Lin strain (SqLCV-YL), designated pBinSqLA and pBinSqLB [37]. The symptoms associated with these begomoviruses on their orginal hosts are yellow veins without leaf curling for AYVV on ageratum, upward leaf curling for TLCV on tomato, and downward leaf curling for SqLCV on muskmelon plants [37]. With the diverse types of leaf curling symptoms, these constructs provide a practical system for studying the mechanisms underlying the determination of leaf curling symptoms with specific directions and severity. For comparison within the same host genetic background, N. benthamiana was chosen as the test plant for these constructs in the following inoculation assays. In this study, we have focused on the analyses of viral determinant(s) for the specific direction or the degree of severity of leaf curling symptoms elicited by specific begomoviruses. By using AYVV-NT, TLCV-TC, and SqLCV-YL as the experimental system, we have mapped the region on the begomovirus genome for the determination of leaf curling directions in N. benthamiana and further identified the crucial amino acids in C4 protein involved in the regulation of the severity of leaf curling symptoms. 
The applicability of mutants carrying alterations at these two amino acids in the modulation of leaf curling symptoms was also assayed. Taken together, these studies provided further insight into the regulation of leaf curl symptom formation and suggested potential usages in viral disease management. Plant Materials and Virus Inoculation The aforementioned infectious clones of three begomoviruses, AYVV-NT, TLCV-TC, and SqLCV-YL [37] were used as the starting material for the construction of other chimeric viruses or mutants to investigate the viral factors involved in the determination of the severity of leaf curling symptoms. N. benthamiana plants, at the stage with 6-7 fully expanded leaves, were used as the assay hosts. Plants were cultivated in a growth room maintained at 25-28 • C, with 16 h light/8 h dark cycles. The infectious constructs of the wild-type viruses or mutants were inoculated into N. benthamiana plants by using Agrobacterium-mediated infection as described previously [37][38][39][40]. A. tumefaciens cells harboring various constructs were adjusted to an optical density of 1.0 at 600 nm (OD 600 = 1.0) and infiltrated into the back side of the leaves using 1-mL syringes. Symptoms were recorded at 14, 21, and 28 days post inoculation (dpi). The nucleotide sequences of the viral progenies were further verified by DNA extraction from upper un-inoculated leaves and sequencing as described in the following sections. Plasmid Construction and RCA The plasmids used in this study were constructed essentially following the protocols as described previously [37]. Mutant constructs were generated by inverse-polymerase chain reaction (inverse-PCR) [41] using specific primer pairs to introduce the mutations as specified in the relevant section using the circular genomic DNA of the wild-type viruses as the templates. The inverse-PCR makes use of the circular template and a pair of back-to-back primers pointing to the inversed directions as opposed to those used in the conventional PCR. The primers were designed with the mutations and could be used to generate linear full-length DNA of the circular template by PCR. The amplified products containing the desired mutations were then self-ligated into a circular form, digested with unique restriction enzyme (BamHI in our case), and inserted into the cognate restriction enzyme sites in vector pUC119 [42]. The full-length genome of the virus harboring the desired mutations were then released from pUC119 [42] by restriction enzyme digestion, self-ligated into circular form, and subjected to RCA as described previously [37,43]. The amplified products were partially digested with restriction enzyme BamHI to generate dimers of the viral genome, since at least two copies of the origin of replication were required for the construct to be infectious, and subsequently cloned into the BamHI site of the vector pBin19 as described previously [37], followed by nucleotide sequencing to confirm the presence of the desired mutation(s). To confirm the infectivity and the maintenance of mutations in the progenies of the infectious constructs, total DNA was extracted from the newly emerged leaves of the inoculated N. benthamiana, and subjected to RCA as described previously [37,43]. The amplified DNA was digested with restriction enzyme BamHI, cloned into pUC119 vector [42], and subsequently sequenced. At least 10 independent clones of the progeny viruses were selected for sequencing analysis to verify the nucleotide sequences. 
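The back-to-back primer arrangement used for inverse-PCR mutagenesis can be illustrated computationally. The R sketch below is a conceptual illustration only, under simplifying assumptions (the target codon is assumed to lie well away from the ends of the linearized sequence, so circular wrap-around is ignored, and practical considerations such as melting temperature are omitted); the function name and the example template are hypothetical and this is not the procedure or software actually used in this study.

library(Biostrings)

# Given a linearized circular template, the 1-based start of the codon to mutate,
# and the replacement codon, return a back-to-back primer pair: the forward primer
# carries the new codon at its 5' end and anneals downstream, while the reverse
# primer is the reverse complement of the region immediately upstream, so the two
# primers point away from each other around the circle.
design_inverse_pcr_primers <- function(template, codon_start, new_codon, arm = 20) {
  fwd_tail <- substr(template, codon_start + 3, codon_start + 2 + arm)
  rev_arm  <- substr(template, codon_start - arm, codon_start - 1)
  list(forward = paste0(new_codon, fwd_tail),
       reverse = as.character(reverseComplement(DNAString(rev_arm))))
}

# Usage with a made-up 60-nt template and a codon starting at position 31
set.seed(1)
template <- paste(sample(c("A", "C", "G", "T"), 60, replace = TRUE), collapse = "")
design_inverse_pcr_primers(template, codon_start = 31, new_codon = "TTT")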
Sequence Analyses The full-length sequences of AYVV-NT, TLCV-TC, and SqLCV-YL genomes have been deposited in GenBank under the accession numbers EF458639, MZ713252, and EU479710/EU479711 (for SqLCV-YL DNA A/B) respectively. Multiple sequence alignment of different C4 protein amino acid sequences encoded by the above begomoviruses was performed using the software CLUSTAL W [44] with the default settings (using the Gonnet scoring matrix with a gap opening penalty of 10 for both pairwise and multiple alignments and gap extension penalties of 0.1 and 0.2 for pairwise and multiple alignments, respectively) and the alignments were presented using the software GeneDoc [45]. For the design of primers for creating mutations in C4 proteins without affecting the amino acid sequences of the overlapping Rep protein on another reading fame, the software BioEdit [46] was used. Identification of the Determinant of Leaf Curling Direction by Genome Recombination As an initial step to analyze the viral factors involved in the determination of specific leaf curling symptoms, we examined the ability of a specific begomovirus to inflict leaf curling symptoms with specific direction on the same host, N. benthamiana, to standardize the host background. A. tumefaciens cells harboring the infectious constructs of AYVV-NT, TLCV-TC, and SqLCV-YL [37] were infiltrated into N. benthamiana. As shown in Figure 1, pBinAYVV induced severe upward leaf curling symptoms, whereas pBinTLCV and pBin-SqLA + pBinSqLB elicited downward leaf curling symptoms on the young leaves of the N. benthamiana plants inoculated. Although the symptoms on N. benthamiana inflicted by these infectious constructs were different from those on their original host plants, the directions of leaf curling symptoms remained consistent. Thus, these infectious clones provided useful starting materials for the analyses of the determinant of leaf curling direction. To identify the genomic region involved in the determination of the direction of leaf curling symptoms, we adopted the strategy of genomic recombination. The prerequisite for creating chimeric geminiviruses is that the iteron sequences, required for the recognition by Rep protein to initiate replication, of the two viruses to be recombined should be the same or highly similar, so that the Rep proteins of the parental viruses may still recognize those of the recombinants. Therefore, we performed sequence analysis and found that AYVV-NT and TLCV-TC share identical iteron sequences, 5 -GGTGTCTC-3 and 5 -GGTGTACT-3 , for recognition by Rep protein for specific replication [47,48], suggesting that these two viruses are suitable candidates for genome recombination. Since C4 proteins have been implicated as pathogenesis determinants in recent studies [7,8,12,17], our initial attempt was to exchange the C4 protein coding regions between AYVV-NT and TLCV-TC. Thus, chimeric viruses between AYVV-NT and TLCV-TC were constructed as illustrated in Figure 2. The unique restriction sites for NcoI and BamHI were used for the exchange of genome fragments (the inserts) from positions 1969 and 136, respectively. The genomic DNA of AYVV-NT or TLCV-TC was amplified by RCA and digested with NcoI and BamHI to generate the respective backbones and inserts. The inserts were then exchanged and ligated to different backbones and cloned as monomers into the vector pUC119 [42]. 
The recombinant virus with TLCV-TC backbone harboring AYVV-NT insert was designated TA, and the one with AYVV-NT backbone and TLCV-TC insert was designated as AT. Two independent clones were selected for each chimeric virus (TA3-4, TA3-12 for TA chimera and AT5-20 and AT5-24 for AT chimera). Following the verification of the recombination through nucleotide sequencing, the monomeric genomes were released from pUC119 by digestion with BamHI, self-ligated into a circular form, and amplified by RCA. Following partial digestion of the RCA products with BamHI, the dimeric genomic fragments were cloned into the vector pBin19 as described previously [37] to generate the constructs, pBinTA3-4, pBinTA3-12, pBinAT5-20, and pBinAT5-24, used for inoculation assays. These constructs were inoculated into N. benthamiana plants through Agrobacterium-mediated infiltration, and the progenies of the chimeric viruses were sequenced to verify the maintenance of the recombination. The results of the inoculation assay are summarized in Table 1. Representative symptoms caused by these constructs are shown in Figure 3. For unknown reason, pBinTA3-4 was not infectious, possibly due to problems that happened during the construction of dimeric constructs, and the infectivity of pBinAT5-20 was slightly lower than that of pBinAT5-24 (Table 1). Nevertheless, the results clearly showed that the direction of leaf curling symptom was determined by the genome fragment from nucleotide positions 1969 to 136 (highlighted by the green-lined boxes in Figure 2). All plants successfully infected with viruses harboring the AYVV-NT inserts exhibited severe upward leaf curling symptoms, whereas those infected by viruses with TLCV-TC inserts developed downward leaf curling symptoms (Table 1). This insert region encompass the full open reading frame (ORF) of C4 protein, the partial ORF for the N-terminal half of Rep protein, and the intergenic region (IR) including the origin of replication (indicated by the stem-loop in Figure 2). Thus, further experiments were performed to identify the major determinant of leaf curling direction. We have used several approaches to directly test the function of C4 proteins of AYVV-NT and TLCV-TC in the modulation of leaf curling directions by expressing different C4 proteins in plant, either transiently [49,50] or through transgenic plants. However, these approaches have been unsuccessful. Thus, to distinguish the effect of C4 proteins from the Rep protein or IR sequences on the modulation of leaf curling directions, we performed a loss-of-function experiment. The start codon of the C4 protein ORFs of both chimeric viruses were mutated to a stop codon to disrupt the expression of C4 proteins without affecting the translation of the overlapping ORF for Rep protein by using primers AT3-AC4mut F (5 -CTCGCTAGCTCGTGCAATTCTCTGCAGAT-3 , with the mutated nucleotide underlined), AT3-AC4mut R (5 -CGAGCTAGCGAGCCCTCATCTCCACGTGC-3 ), TA5-AC4mut F (5 -GTCGCTAGCTCGTGAAGCTCTCTGCAAAC-3 ), and TA5-AC4mut R (5 -CGAGCTAGCGACTCCTCACCTGCACATTC-3 ) through inverse PCR [41], followed by RCA, and cloned into pBin19 as described above. However, the mutation abolished the infectivity of these constructs, confirming that the expression of C4 proteins is important for the viruses to complete their infection cycles. 
It has been reported that the premature termination of the C4 protein expression of the Beet curly top virus, a geminivirus in the genus Curtovirus, may cause the change of directions of leaf curling symptoms from upward to downward [16], indicating that the C4 protein of curtoviruses is involved in the modulation of leaf curling directions. The involvement of the C4 protein of begomoviruses in the control of leaf curling tendencies has not been reported previously. Thus, we performed the following point mutational analyses to corroborate the hypothesis that the C4 proteins of AYVV-NT and TLCV-TC are involved in the determination of leaf curling directions. In Search of Amino Acids Involved in the Modulation of Leaf Curling Symptoms To fine-map the critical amino acids for the development of leaf curling symptoms with different directions, the C4 protein amino acid sequence of AYVV-NT (GenBank Accession number EF458639), which causes an upward leaf curling symptom, and those of TLCV-TC and SqLCV-YL, which elicit a downward leaf curling symptom [37], were aligned using CLUSTAL W [44]. As shown in Figure 4, candidate positions were highlighted in which the amino acids in AYVV-NT differed from that conserved in TLCV-TC and SqLCV-YL. These highlighted amino acids were selected based on the criterion that the codons for these amino acid should allow for mutations that change the ones in C4 protein of AYVV-NT into those in the corresponding positions in C4 proteins of TLCV-TC and SqLCV-YL without affecting the amino acids of Rep protein encoded on another reading frame. To test the functions of the amino acids in these positions on the determination of the leaf curling directions, specific mutations were generated in the C4 protein of AYVV-NT infectious clone pBinAYVV for inoculation assays. These positions were divided into three groups for mutational study to simplify the experimental design. In A-mut of AYVV-NT, the cysteine and glycine at positions 8 and 14 were replaced with phenylalanine and glutamic acid, respectively, conserved for TLCV-TC and SqLCV-YL (abbreviated as C8F and G14E). Similarly, B-mut contains the phenylalanine to serine exchange at position 64 (F64S). C-mut harbors glutamine to leucine, aspartic acid to valine, and methionine to threonine mutations at positions 71, 74 and 79 (Q71L, D74V, and M79T), respectively (Figure 4). Since the openreading-frame of C4 protein overlaps with that for Rep protein, but on a different frame, the mutations in AYVV C4 protein were designed to maintain the original amino acids in Rep protein, which is required for the initiation of geminivirus replication. Thus, for the generation of A-mut, the primer pair, AY-A-mut-F, 5 -CTGGATAAGAACGTGGAGATGA-3 , AY-A-mut-R, 5 -TTCGAAGGAAAATACCAGTGCA-3 , were used to introduce the C8F and G14E mutations in the C4 protein, while maintaining the V (GTG to GTT) and G (GGG to GGA) in the Rep protein. By the same token, the primer pair, AY-B-mut-F, 5 -CCCATTCGAGGGTGTCTC-3 and AY-B-mut-R, 5 -GTGAGTTCCAGATCGATG-3 were used to generate B-mut; while primer pair AY-C-mut-F, 5 -TGTTGACCTCCTCTAGCAG-3 and AY-C-mut-R, 5 -GACAGCCAACGACGCTTAC-3 were used for creating C-mut. The mutants were constructed by inverse-PCR using the aforementioned primer pairs with the genomic DNA of AYVV-NT as the template [37], verified by nucleotide sequencing. The mutants of AYVV-NT were subsequently inoculated into N. benthamiana through Agrobacterium-mediated infection to examine the effects on leaf curling symptom. 
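The constraint that a C4 mutation must remain silent in the overlapping Rep frame can be checked directly at the codon level. The short R sketch below is illustrative only: the two 4-nucleotide fragments are constructed so that the second frame reproduces the C4 codon changes described above (Cys to Phe, Gly to Glu) while the first frame keeps the Rep codons synonymous (GTG to GTT for Val, GGG to GGA for Gly); they are not the actual AYVV-NT genomic coordinates, which are available from GenBank accession EF458639.

library(Biostrings)

# Translate the first two reading frames of a short fragment
translate_frames <- function(x) {
  sapply(1:2, function(f) {
    s <- subseq(DNAString(x), start = f)
    s <- subseq(s, start = 1, end = 3 * (length(s) %/% 3))   # trim to whole codons
    as.character(translate(s))
  })
}

# C8F: the Rep-frame codon stays Val (GTG -> GTT) while the overlapping
# C4-frame codon changes from Cys (TGT) to Phe (TTT)
translate_frames("GTGT")   # frame 1: "V", frame 2: "C"
translate_frames("GTTT")   # frame 1: "V", frame 2: "F"

# G14E: the Rep-frame codon stays Gly (GGG -> GGA) while the overlapping
# C4-frame codon changes from Gly (GGG) to Glu (GAG)
translate_frames("GGGG")   # frame 1: "G", frame 2: "G"
translate_frames("GGAG")   # frame 1: "G", frame 2: "E"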
The results of inoculation assays are summarized in Table 2, with representative phenotypes shown in Figure 5. It was found that N. benthamiana plants inoculated with A-mut of AYVV-NT showed mild symptoms, with the margins of the young leaves slightly curled up ( Figure 5) at 24 dpi. In contrast, B-mut and C-mut of AYVV-NT elicited severe upward leaf curling symptoms, similar to those caused by the infectious clone of wild-type AYVV-NT, pBinAYVV ( Figure 5). The plants inoculated with the empty vector pBin19 or buffer did not show any symptom, as expected. To verify the maintenance of mutations, the genomic DNA of the progenies of A-mut, B-mut, and C-mut was extracted from the upper un-inoculated leaves, amplified by RCA [43], and fully sequenced. It was confirmed that the mutations were maintained in the progenies, supporting the notion that the symptoms indeed resulted from the infection of the mutants. The results indicated that the hypothesis that these three groups of amino acids in C4 proteins are involved in the determination of leaf curling directions could not be confirmed. Nevertheless, these observations demonstrated that Cys8 and/or Gly14 play a role in the severity of upward leaf curling. In contrast, Phe64, Gln71, Asp74, and/or Met79 may not contribute significantly to the severity of leaf curling symptom in the current experimental settings, as shown by the results of inoculation assays with B-mut and C-mut. benthamiana were aligned using the CLUSTAL W program [44] and represented using GeneDoc [45]. Among the begomoviruses shown, AYVV-NT caused severe upward leaf curling, whereas TLCV-TC and SqLCV-YL induced downward leaf curling. The amino acids selected to create mutants in subsequent analyses were boxed and indicated. The consensus sequence is shown at the bottom of the alignment. The degrees of conservation were indicated by the background color: black, 100% conservation; gray, conserved in two sequences; white, non-conserved. Potential Application of A-Mut in Disease Management The above results demonstrated that A-mut could effectively attenuate the upward leaf curling symptoms caused by the original pBinAYVV construct. To test whether A-mut could be utilized as preventive or therapeutic agents for the treatment of severe symptoms caused by pBinAYVV infections, inoculation assays were performed with different inoculation orders for pBinAYVV and A-mut. For the prevention of the formation of severe upward leaf curling symptoms, A. tumefaciens cells harboring A-mut construct were first infiltrated into N. benthamiana for 7 days, followed by the challenge of pBinAYVV. For use of A-mut as the therapeutic agent, A. tumefaciens cells harboring pBinAYVV were first infiltrated into N. benthamiana, followed by the inoculation of A-mut at 7 dpi. The result of the inoculation assay is summarized in Table 3, with the representative symptoms caused by different treatments shown in Figure 6 and Figure S1. The result revealed that Amut could indeed be used as a preventive treatment to avoid the formation of severe upward leaf curling symptoms caused by pBinAYVV. The preventive effect was maintained for at least 34 days (Figure 6 and Figure S1). However, A-mut could not be used as a therapeutic to heal or attenuate the severe upward leaf curling symptoms caused by pBinAYVV. 
To examine the progeny population ratio of A-mut and pBinAYVV in the "prevention" and "therapeutic" treatments, total DNA was extracted from the upper uninoculated leaves, amplified by RCA, and cloned, for Experiment I and III. From each treatment, 10 independent clones were sequenced. It was found that 18 out of a total of 20 clones from the two independent experiments in the "prevention" treatment were A-mut, whereas all 20 clones were pBinAYVV in the "therapeutics" treatment. It should be noted that, in the "prevention" treatments, one out of four plants in each experiment was not "protected" and exhibited severe upward leaf curling symptoms following the challenge of pBinAYVV (the representative plant indicated by the white arrow in Figure S2). Sequencing analysis of 10 clones from the progeny viruses in Experiment III revealed that only AYVV-NT was present in this plant. This observation provided evidence suggesting that the "preventive effect" of A-mut was dependent on the successful establishment of A-mut prior to the challenge of pBinAYVV and not from the wound-induced resistance through the inoculation process. The results reflected the lower infectivity observed for A-mut (Table 2) compared to pBinAYVV and suggested that the ability of the C4 protein to modulate the symptoms is tightly associated with the accumulation levels of the respective virus and the C4 protein. Figure 6. Inoculation assay on the ability of A-mut in the modulation of leaf curling symptom. To test whether A-mut could be used as a potential "preventive" or "therapeutic" agent for begomovirusinduced leaf curling symptoms, N. benthamiana plants were first inoculated with AYVV-NT or A-mut, followed by the infiltration of either A-mut or AYVV-NT, respectively, as indicated at the bottom, at 7 dpi, with the arrows representing the order of the infiltration. The effects on leaf curling symptoms were recorded at 34 dpi. Discussion The C4 proteins encoded by geminiviruses are multifunctional, with ever-expanding roles being identified continuously [23]. However, the functions of C4 proteins are not conserved for all geminiviruses [36]. In this study, we have revealed an additional function of C4 proteins in the modulation of the severity of the leaf curling symptoms of AYVV-NT and TLCV-TC. Two amino acids, Cys8 and Gly14, of AYVV-NT C4 protein were further fine-mapped as playing crucial roles in modulating the severity of upward leaf curling symptoms caused by AYVV-NT. We further tested the possibility of using A-mut as a preventive or therapeutic measure against the infection of wild-type AYVV-NT. The findings in this study may contribute to the understanding of the mechanisms of symptom formation, which may further be applied in the development of alternative methods to manage viral diseases. One of the initial goals of this study was to identify the viral determinant(s) of the specific directions of leaf curling symptoms caused by the infection of specific begomoviruses. We have created chimeric viruses by exchanging partial genomic regions between AYVV-NT and TLCV-TC and found that the region between the unique NcoI and BamHI restriction sites ( Figure 2) exhibited the ability to modulate leaf curling directions in N. benthamiana (Figure 3). We then focused on the only intact open reading frame, C4, in this region for the following mutational analyses, assuming that the C4 protein might be the viral component for the determination of specific leaf curling direction. 
However, this assumption might be premature, as the exchanged genomic region also contains partial coding sequence for the Rep protein and the non-coding sequences ( Figure S3) required for replication initiation (Ori, iterons) and gene expression (promoters). The possibilities that these viral factors, other than C4 protein, may be involved in the determination of leaf curling directions could not be ruled out. It is also possible that the other viral proteins may also be required for specific symptom development, since it has been shown that the expression of C4 protein alone may not necessarily recreate the symptoms caused by the specific virus, and C4 proteins encoded by different begomoviruses may exhibit different abilities in symptom elicitation [7]. Further experiments are required to resolve this issue. In the assays with A-mut as the "preventive" or "therapeutic" measure in mitigating the severe upward leaf curling symptoms caused by wild-type AYVV-NT in N. benthamiana, the initial results seemed to indicate that the mutations in Cys8 or Gly14 in AYVV-NT C4 protein may only be used as an preventive measure, since the competitiveness of the A-mut was lower compared to that of wild-type AYVV-NT (Table 2 and the sequencing results of the progenies in the "therapeutic" experiments). However, the possibility of using A-mut as a therapeutic agent has not been ruled out, as the conditions for the application of A-mut have not been exhaustively tested. For example, inoculation of A. tumefaciens cells harboring A-mut with a much higher concentration may increase the competitiveness of A-mut, which might ultimately exhibit the therapeutic effect. In addition, several controls should be included in these assays, such as pre-inoculation with pBin19 vector alone followed by inoculation with AYVV-NT or by using test plants in different developmental stages. These controls are important to test thoroughly the abilities of A-mut in the modulation of the severity of leaf curling symptoms. These experiments will be performed in our forthcoming studies. On the other hand, the reciprocal A-, B-, and C-mutations in TLCV-TC were unable to be designed and tested, since these mutations in C4 protein of TLCV-TC would also affect the amino acids of Rep protein in the overlapping open reading frame. Thus the effect of amino acids in these positions on the modulation of symptoms could not be thoroughly tested. Nevertheless, the current results indicated that amino acids Cys8 and Gly14 in the AYVV C4 protein are involved in the modulation of the severity of the upward leaf curling symptoms caused by AYVV-NT in N. benthamiana. Additional experiments are required to comprehend the mechanism of leaf curling symptom formation and develop alternative measures to manage the leaf curling disease. To further explore the possible function of Cys8 and Gly14 in AYVV-NT C4 protein, we have performed 2-and 3-dimensional (2D and 3D) structural predictions using Phyre2 web portal (http://www.sbg.bio.ic.ac.uk/phyre2/html/page.cgi?id=index, as accessed on 20 January 2022) [51]. For 3D prediction, no suitable templates were identified for modeling the structures in this region. However, for 2D analysis, Cys8 is predicted to be located at the first beta-strand and Gly14 in the coil region, similar to those for the corresponding amino acids in TLCV-TC and SqLCV-YL ( Figure S4). 
The mutations in A-mut did not affect the predicted secondary structure ( Figure S4), suggesting that the difference in leaf curling severity might not result from the difference in C4 protein structures. Instead, amino acids in this region might be involved in the interaction with specific host factors. Possible candidates of such host factors involved in the regulation of directions for leaf curling symptoms include host proteins and miRNAs that have been reported to modulate the differentiation and development process of leaf tissues [12,17,[22][23][24][25]30,[52][53][54][55][56][57][58]. Whether the C4 proteins of AYVV-NT or TLCV-TC exhibit differential interactions with these host factors will be one of the focuses in our upcoming studies. To test the ability of C4 proteins of AYVV-NT and TLCV-TC in the modulation of leaf curling directions independent of other viral factors, expressing the C4 proteins in plants is the direct approach for confirmation. Several attempts have been made in our earlier studies to express the C4 protein of AYVV and TLCV in N. benthamiana, either transiently using Bamboo mosaic virus (BaMV)-or satellite BaMV RNA-based expression vectors [49,50] or by transgenic lines. However, these attempts have been unsuccessful, possibly because the levels of C4 protein overexpression posed certain toxic effects on the plant cells. Plants inoculated with the over-expressing constructs usually died or exhibited severe leaf necrosis symptoms before the development of any leaf curling symptoms. Thus, we mutated the strat codon of the C4 gene to stop codon in the infectious construct to analyze the functions of the C4 protein in manipulating leaf curling symptoms. However, the mutant viruses without the expression of C4 proteins were not infectious. Therefore, other approaches in directly testing the functions of C4 proteins, such as the infiltration of leaves using a much smaller amount of A. tumefaciens cells harboring the C4 protein expression constructs or with other expression systems will be applied in our upcoming studies. It is worth noting that the C4 proteins of different begomoviruses may have different subcellular localization, which may significantly affect the functionality of different C4 proteins during infection [36]. Therefore, the difference in the subcellular localization of the C4 proteins of different begomoviruses inflicting leaf curl symptoms in specific directions should also be verified. Together, knowledge on the regulation of leaf curling directions or severity by different begomovirus C4 proteins may contribute to our understanding of plant developmental processes and the development of management measures for virus diseases. Conclusions In conclusion, a new function of begomovirus C4 proteins in affecting the severity of leaf curling symptoms has been elucidated in this study. In addition to the previously known functions in viral infection cycles, begomovirus C4 proteins may serve as the modulator of the severity of leaf curling symptoms. The amino acids Cys8 and Gly14 of AYVV-NT C4 protein were found to be involved in the development of leaf curling symptoms elicited by AYVV-NT in N. benthamiana. The application of A-mut in "preventing" the development of severe upward leaf curling symptoms caused by AYVV-NT was also demonstrated. Taken together, the results of this study opened up an avenue for further dissecting the processes of leaf curling symptom elicitation and developing new strategies for the management of begomovirus infections.
Ultrasonochemical-Assisted Synthesis of CuO Nanorods with High Hydrogen Storage Ability Uniform CuO nanorods with different size have been synthesized in a water-alcohol solution through a fast and facile ultrasound irradiation assistant route. Especially, the as-prepared CuO nanorods have shown a strong size-induced enhancement of electrochemical hydrogen storage performance and exhibit a notable hydrogen storage capacity and big BET surface area. These results further implied that the as-prepared CuO nanorods could be a promising candidate for electrochemical hydrogen storage applications. The observation of the comparison experiments with different concentrations of NaOH, ethanol, CTAB, and HTMA while keeping other synthetic parameters unchanged leads to the morphology and size change of CuO products. Introduction It is widely accepted that the properties of nanomaterials are not only closely related to their sizes but also to their shapes. Therefore, controlling the morphologies of nanomaterials is one of the most important issues and effective ways to obtain desirable properties. one-dimensional (1D) nanoscaled materials such as carbon nanotubes (CNTs) [1], semiconductor nanowires, and nanobelts [2][3][4][5][6][7] exhibit interesting and useful properties and may be applied as building blocks for the integration of the next generation of nanoelectronics, ultrasmall optical devices, biosensors, and so forth. As a p-type semiconductor with a narrow band gap (1.2 eV), CuO nanomaterials have been widely exploited for diverse applications, such as heterogeneous catalysts [8][9][10], photoelectrochemical materials [11], gas sensors [12], lithium ion electrode materials [13], electrochemical hydrogen storage materials [14], and field emission (FE) emitters [15,16]. Many recent efforts have been directed toward the fabrication of nanostructured CuO to enhance its performance in currently existing applications. In particular, a variety of 1D CuO nanostructures have been prepared by high-temperature approaches [17][18][19][20][21][22][23][24][25] and low-temperature wet chemical approaches [26][27][28][29][30][31][32]. Generally, the abovementioned methods for 1D CuO nanomaterials require high temperature or additional templates to act as a support and are constrained by expense and complex. Comparatively, the ultrasonic approach was more attractive for both its simplicity and commercial feasibility. Different from other traditional chemical methods, the sonochemistry route is based on acoustic cavitations through the formation, growth, and collapse of bubbles in the liquid. The implosive collapse is an adiabatic process, which generates localized hot spots with transient temperatures of 5000-25000 K, pressures of about 1800 atm [33,34], and heating and cooling rates in excess of 1010 K/s [35,36]. These extreme conditions have been applied to prepare amorphous metals, carbides, oxides, sulfides, and so forth in various media [37][38][39][40][41]. Herein, we present a fast and facile ultrasound (US) irradiation assistant route to mass-synthesize uniform CuO nanorods with different size in a water-alcohol solution. Especially, the as-prepared CuO nanorods have shown a strong size-induced enhancement of electrochemical hydrogen storage performance and exhibit a notable hydrogen storage capacity and big BET surface area. 
These results further implied that the as-prepared CuO nanorods could be a promising candidate for electrochemical hydrogen storage applications. Experimental In a typical experiment, all of the chemicals were of analytical grade and were used as received. Aqueous solutions were prepared using distilled water. Firstly, 0.11 g CuAc2·H2O was dissolved in a solution of 15 mL ethanol and 35 mL distilled water in a 100 mL beaker. Then 8 mL of aqueous NaOH solution (0.5 M) and 0.3 g CTAB were added slowly. After that, the solution was kept under US irradiation at room temperature for 45 minutes. Finally, the obtained precipitates were collected and washed several times with absolute ethanol and distilled water. Comparison experiments with different concentrations of NaOH, ethanol, CTAB, and HTMA were also conducted, as listed in Table 1. X-ray powder diffraction (XRD) analysis was carried out with a Rigaku D/max-rA X-ray diffractometer (Japan) with graphite-monochromatized Cu Kα radiation (λ = 1.54178 Å). A scan rate of 0.05°/s was used to record the patterns in the 2θ range of 20-70°. SEM images were obtained with a JSM-6700F field emission scanning electron microanalyser (JEOL, Japan), for which the resulting powder was mounted on a copper slice. HRTEM images were recorded on a JEOL-2010 TEM at an acceleration voltage of 200 kV. The porous nature of the nanorods was further confirmed by measurement of the pore size distribution, which was obtained from nitrogen adsorption-desorption isotherms and the Barrett-Joyner-Halenda (BJH) method on an OMNISORP-100CX accelerated surface area and porosimetry system. The electrochemical measurements were carried out following the method reported in [14] with slight modification. Briefly, the electrode was fabricated by directly pressing the synthesized CuO powders onto a sheet of nickel foam at 50 MPa. All of the experiments were performed in a three-electrode cell in 6 M KOH at 25 °C under normal atmosphere. The CuO nanostructures were used as the working electrode, Ni(OH)2/NiOOH as the counter electrode, and Ag/AgCl as the reference electrode. The CuO nanostructure electrode was charged for 3 h at a current density after a 2 s rest. All of the electrochemical hydrogen storage experiments were carried out using the Land battery system (CT2001A) at room temperature. Results and Discussion The overall crystallinity and purity of the as-synthesized samples were investigated by XRD and HRTEM measurements. As shown in Figure 1, the XRD pattern of the product can be indexed to the monoclinic CuO phase. Figures 1(b) and 1(c) show the large-scale CuO nanorods obtained, from which it can be seen that the nanorods are more than 200 nm long and 10 nm in diameter, and a high yield (>95%) of this 1D form can be easily observed. The HRTEM image in Figure 1(d) indicates that the nanorod is a single crystal and can be indexed to the monoclinic CuO phase, which is in accord with the XRD result. As shown in Figure 1(d), the lattice interplanar spacing has been determined to be 2.76 Å, corresponding to the (110) plane of monoclinic CuO, which suggests that the nanorods grow preferentially through (110) plane stacking. Nitrogen adsorption measurements showed that the BET surface area of the CuO nanorods was 49.8 m²/g, which implied that the CuO nanorods obtained here are a potentially porous material, as shown in Figure 2(a). Further study of the pore size distribution of the sample is illustrated in Figure 2(b). The curve shows the relative pore volume distribution according to the average size of the pores, in which there is a distribution centered around 40 nm.
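The interplanar spacing quoted above can be cross-checked against the monoclinic CuO (tenorite) unit cell. The R sketch below uses nominal literature lattice constants (roughly a = 4.68 Å, b = 3.42 Å, c = 5.13 Å, β = 99.5°), which are not reported in this paper and are included only for illustration; the result agrees with the 2.76 Å spacing measured by HRTEM and with the Cu Kα scan range used for XRD.

# Nominal lattice constants for monoclinic (tenorite) CuO, in Angstrom and radians
a <- 4.684; b <- 3.423; c <- 5.129; beta <- 99.5 * pi / 180
h <- 1; k <- 1; l <- 0   # the (110) reflection discussed above

# General d-spacing for a monoclinic cell:
# 1/d^2 = (h^2/a^2 + k^2*sin(beta)^2/b^2 + l^2/c^2 - 2*h*l*cos(beta)/(a*c)) / sin(beta)^2
inv_d2 <- (h^2 / a^2 + k^2 * sin(beta)^2 / b^2 + l^2 / c^2 -
           2 * h * l * cos(beta) / (a * c)) / sin(beta)^2
d_110 <- 1 / sqrt(inv_d2)
d_110            # ~2.75 Angstrom, close to the 2.76 Angstrom measured by HRTEM

# Corresponding Bragg angle for Cu K-alpha radiation (lambda = 1.54178 Angstrom)
lambda <- 1.54178
two_theta <- 2 * asin(lambda / (2 * d_110)) * 180 / pi
two_theta        # ~32.5 degrees, within the 20-70 degree scan range used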
Interestingly, the electrochemical study demonstrated that the discharge capacity of CuO nanorods displayed a noticeable electrochemical hydrogen storage ability (∼ 165 mAh/g), which amounts to the 32.8% hydrogen storage capacity of SWNTs, whose discharge capacity is 503 mAh/g, corresponding to 1.84 wt% hydrogen [42]. In the charge curve of CuO nanorods, as shown in Figure 2(c), with the increase of the electrochemical capacity, the potential increases quickly but remains unchanged when the charge capacity reaches 3 mAh/g. One new obvious plateau of potential is observed between 5 mAh/g and 170 mAh/g. This indicates that two different hydrogen adsorption sites [42] exist in the synthesized CuO nanorods; in other words, Journal of Nanomaterials It is assumed that the H was first adsorbed onto the surface of each nanorod and then diffused into the interstitial sites among CuO. The discharge curve also displays two different hydrogen release processes, which further confirms the above results. The cycle life of CuO nanorod electrode is shown in Figure 2(d). After being cycled 50 times at the charge-discharge current density of 30 mA/g, the discharging capacities of CuO nanobelts remain over 20 mAh/g. Compared with the capacities (100 mAh/g and 130 mAh/g) of other 1D CuO nanostructures obtained by us, as shown in Figures 2(e) and 2(f) and Figures 3(a), 3(c), and 3(d), the as-prepared CuO nanorods exhibited higher capacity. The relatively high capacity was considered to be pertinent to the enhanced electrocatalytic activity of the highly porous and layered structures of the synthesized CuO nanorods. And we believe that the investigations of electrochemical hydrogen storage of CuO nanostructures help us to understand the relationship between morphology, size, and properties and thus inspire us to explore new nanostructures with higher hydrogen uptake. Comparison experiments with different concentrations of NaOH, ethanol, and CTAB while keeping other synthetic parameters unchanged leads to the morphology and size change of CuO products. As shown in Figures 3(a) and 3(b), when the NaOH concentrations were reduced, CuO nanorod bundles composed of rods with smaller size (several nanometers) were obtained. When the ethanol concentrations were increased, short CuO nanorods (∼50 nm long) formed, as shown in Figures 3(c) and 3(d). CuO nanorod bundles were also produced when the CTAB mount was reduced or free, as shown in Figures 3(e) and 3(f). When using other surface active reagent (such as hexamethylene tetramine (HTMA)) instead of CTAB, different CuO 1D nanostructures have also been prepared, as shown in Figure 4. Therefore, suitable thermodynamic experimental conditions favor the oriented crystalline growth process of the CuO nanorods. It is also implied that different shapely CuO nanostructures can be controllably synthesized through adjusting the reaction parameters of this US chemical reaction process. Table 1. Conclusions In summary, CuO nanorods were synthesized through a fast and facile ultrasound irradiation assistant route. The products exhibit excellent hydrogen storage capacity and big BET surface area. Different shapely CuO nanostructures have been controllably synthesized. The comparison experiments show that the reactant concentrations are critical to the formation of 1D CuO nanostructures. 
Further research will explore other novel cupreous 1D nanostructures with different electrochemical hydrogen storage behaviour, among which materials with even higher hydrogen uptake may be found.
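The electrochemical capacities quoted above can be related to a gravimetric hydrogen content through Faraday's law, assuming one electron is exchanged per stored hydrogen atom; the widely used rule of thumb is that roughly 268 mAh/g corresponds to 1 wt% hydrogen. The sketch below applies this conversion to the capacities discussed in the text; it is an illustrative back-of-the-envelope check, not a calculation taken from the paper.

```python
FARADAY_MAH_PER_MOL = 96485.0 / 3.6   # Faraday constant expressed in mAh per mole of electrons
M_HYDROGEN = 1.008                    # molar mass of atomic hydrogen, g/mol

def capacity_to_wt_percent(capacity_mah_per_g: float) -> float:
    """Convert an electrochemical discharge capacity (mAh per gram of host) into an
    approximate hydrogen content in wt%, assuming one electron per stored hydrogen
    atom and neglecting the hydrogen mass in the denominator."""
    mol_h_per_g = capacity_mah_per_g / FARADAY_MAH_PER_MOL
    return mol_h_per_g * M_HYDROGEN * 100.0

# 503 mAh/g (the SWNT reference) comes out near 1.9 wt%, close to the cited ~1.84 wt%;
# the ~165 mAh/g measured for the CuO nanorods corresponds to roughly 0.6 wt%.
for capacity in (503.0, 165.0, 130.0, 100.0):
    print(f"{capacity:6.1f} mAh/g  ->  {capacity_to_wt_percent(capacity):.2f} wt% H")
```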
Trigger Word Detection and Thematic Role Identification via BERT and Multitask Learning The prediction of the relationship between the disease with genes and its mutations is a very important knowledge extraction task that can potentially help drug discovery. In this paper, we present our approaches for trigger word detection (task 1) and the identification of its thematic role (task 2) in AGAC track of BioNLP Open Shared Task 2019. Task 1 can be regarded as the traditional name entity recognition (NER), which cultivates molecular phenomena related to gene mutation. Task 2 can be regarded as relation extraction which captures the thematic roles between entities. For two tasks, we exploit the pre-trained biomedical language representation model (i.e., BERT) in the pipe of information extraction for the collection of mutation-disease knowledge from PubMed. And also, we design a fine-tuning technique and extra features by using multi-task learning. The experiment results show that our proposed approaches achieve 0.60 (ranks 1) and 0.25 (ranks 2) on task 1 and task 2 respectively in terms of F_1 metric. Introduction Using the natural language processing methods to discover and mine drug-related knowledge from text has been a hot topic in recent years. For the goal of drug repurposing, an active gene annotation corpus (AGAC) was developed as a benchmark dataset (Wang et al., 2018b). The AGAC track is part of the BioNLP Open Shared Task 2019, aims to gather text mining approaches among the BioNLP community to propel drugoriented knowledge discovery. It consists of three tasks for the extraction of mutation-disease knowledge from PubMed abstracts: trigger words NER, thematic roles identification, and mutation-disease knowledge discovery. We participated in the trigger words NER and thematic roles identification tasks. Recently, pre-trained models have been the dominant paradigm in natural language processing. They achieved remarkable state-of-the-art performance across a wide range of related tasks, such as textual entailment, natural language inference, question answering, etc. BERT, proposed by Devlin et al. (2019), has achieved a bettermarked result in GLUE leaderboard with a deep transformer architecture (Wang et al., 2018a). BERT first trains a language model on an unsupervised large-scale corpus, and then the pretrained model is fine-tuned to adapt to downstream tasks. This fine-tuning process can be seen as a form of transfer learning, where BERT learns knowledge from the large-scale corpus and transfer it to downstream tasks. While BERT was built for general-purpose language understanding, there are also some pre-trained models following BERT architecture that effectively leverage domain-specific knowledge from a large set of unannotated biomedical texts (e.g. PubMed abstracts, clinical notes), such as SciBERT (Beltagy et al., 2019), BioBERT , NCBI BERT , etc. These models can effectively transfer knowledge from a large amount of unlabeled texts to biomedical text mining models with minimal task-specific architecture modifications. In this paper, we investigate different methods to combine and transfer the knowledge from the three different sources and illustrate our results on the AGAC corpus. Our method is based on finetuning BERT base , NCBI BERT and BioBERT using multi-task learning, which has demonstrated the efficiency of knowledge transformation (Liu et al., 2019) and integrating models for both tasks with ensembles. 
The proposed methods are proved effective for natural language understanding in the biomedical domain, and we rank first place on task 1 (Trigger words NER) and second place on task 2 (Thematic roles identification). Figure 1: The pipeline of our approach. We first split PubMed abstracts into sentences, tokenize them into words and extract some features like POS tags, then a BERT-based method for NER offset and entity recognition, and finally predict relations for each potential entity pair. Background The model architecture of BERT (Devlin et al., 2019) is a multi-layer bidirectional Transformer encoder based on the original Transformer model (Vaswani et al., 2017). The input representation is a concatenation of WordPiece embeddings (Wu et al., 2016), positional embeddings, and the segment embedding. A special classification embedding ([CLS]) is inserted as the first token and a special token ([SEP]) is added as the final token. It is firstly pre-trained with two strategies on large-scale unlabeled text, i.e., masked language model and next sentence prediction. The pre-trained BERT model provides a powerful context-dependent sentence representation and can be used for various target tasks, i.e., text classification and machine comprehension, through the fine-tuning procedure. Hence, the BERT model can be easily extended to the medical domain information extraction pipeline, first extracting the trigger words and then determining the relationship between them, as illustrated in Figure 1. Task 1: Trigger Words NER Task 1 aims to identify trigger words in the PubMed digest and annotating them as correct trigger markers or entities (Var, MPA, Interaction, Pathway, CPA, Reg, PosReg, NegReg, Disease, Gene, Protein, Enzyme). It can be seen as an NER task involving the identification of many domainspecific proper nouns in the biomedical corpus. We first split each PubMed abstracts into sentences using '\n' or '.', and convert each sentence into words by NLTK 1 tokenizer. After that, words are further tokenized into its word pieces x = (x 1 , . . . , x T ). Then we use a representation based on the BERT from the last layer H = (h 1 , . . . , h T ). In order to make better use of the word-level information, POS tagging labels and word shape embedding representation (Liu et al., 2015) of each word 2 are also concatenated into the output of BERT, passing through a single projection layer, followed by a conditional random fields (CRF) layer with a masking constraint 3 to calculate the token-level label probability p = (p 1 , . . . , p T ). When fine-tuning the BERT, we found that the performance of the model performed better in the case of BIO for the selection of the tagging schemes compared to BIOES. We further extend our model to multi-task learning joint trained by sharing the architecture and parameters. Although the differences in different datasets, multi-task means joint learning with other biomedical corpora. The assumption is to make more efficient use of the data and to encourage the models to learn more generalized representations. More specially, the same token-level information and BERT encoder are shared and each data set has a specific output layer, e.g., CRF layer. Our final loss function is obtained as follows: where y c i denote true tag sequence and x c i denote the input tokens for corpora c i , λ c i and λ r are weighted parameters. Task 2: Thematic Roles Identification Task 2 is to identify the thematic roles (ThemeOf, CauseOf) between trigger words. 
We treat it as a multi-label classification problem by introducing "no relation (NA)" label. When constructing the training data of task 2, we use the relationship of two entities with a distance of no more than one sentence. For NA label, random sampling is performed. In the testing process, relation label will be assigned to the corresponding thematic role when its probability is maximum and larger than the threshold. Otherwise, it will be predicted as no relation. We also anonymously use a predefined tag (such as %Disease) to represent a target named entity. And we additionally append two concrete predicted entity words separated by the [SEP] tag after each sentence. Following Shi and Lin (2019), we also add the token-level relative distance to the subject entity information for each token, i.e. 0 for the position t between two entities, t − s for tokens before first entity and t − e for tokens after second entity, where s, e are the starting and ending positions of first and second entity after tokenization, respectively. The relation logits of two entities are performed using a single output layer from the BERT, as where h cls denotes the hidden state of the first special token ([CLS]). Experiments In this section, we provide the leaderboard performance and conduct an analysis of the effect of models from different settings. Experimental Setup The AGAC track organizers develop an active gene annotation corpus (AGAC) (Wang et al., 2018b;Gachloo et al., 2019), for the sake of knowledge discovery in drug repurposing. The track corpus consists of 1250 PubMed abstracts: 250 for public, 1000 for final evaluation. We randomly split the public texts into train and development data sets with the radio of 8:2. The training set is used to learn model parameters, the development set to select optimal hyper-parameters. For evaluation results, we measure the trigger words recognition and thematic roles extraction performance with F 1 score. Table 1 shows the external data sets used under the joint learning method. The BIO form of these data sets is different from that of task 1, hence we use different projection and CRF layers. But not the more data sets, the better. We found that the NCBI disease (Dogan et al., 2014) and BC5CDR (Li et al., 2016) datasets are helpful for the final results, and the performance is reduced when using BC2GM (Smith et al., 2008) and 2010 i2b2VA dataset (Uzuner et al., 2011). Implementation and Hyperparameters We tried the original BERT 4 , BioBERT 5 and NCBI BERT 6 pre-trained models. Each training example is pruned to at most 384 and 512 tokens for named entity recognition (NER) and relation extraction (RE). We use a batch size of 5 for NER, and 32 for RE. We also use the hierarchical learning rate in the training process so that the pretrained parameters and the newly added parameters converge at different optimization processes. For fine-tuning, we train the models for 20 epochs using a learning rate of 2 × 10 −5 for pre-trained weights and 3 × 10 −5 for others. The learning parameters were selected based on the best performance on the dev set. For NER, we ensemble 5 models from 5-fold cross-validation and 2 models using the normal training-validation approach. For RE, we ensemble 3 models that used all the construction data in training. Table 2 compares the results of the two tasks of the pre-trained model in trigger words NER and thematic roles identification. 
We report the impact of using different pre-training models on the 4 https://github.com/google-research/ bert 5 https://github.com/dmis-lab/biobert 6 https://github.com/ncbi-nlp/NCBI_BERT The results for task 1 is summarized in Table 3. The difference in the performance in the different labels is partly sourced by the imbalance distribution of trigger labels in the corpus. Our method ends up first place on the leaderboard and substantially improving upon previous state-of-the-art methods. The results for task 2 is summarized in Table 4. Our method ends up second place on the leaderboard. Our method has a large discrepancy between the development set performance and test set performance. It may be the test set is quite different from our constructed data set. This is also related to how we use recognized entities, sentence-or document-level combinations. Ablation Study As shown in Table 5, we found that adding a layer of BiLSTM behind the BERT encoder did not improve the performance of the model, resulting in a 0.04 loss of F 1 . For NER tasks, external features are effective for the model's performance. So we verified the efficacy of word shape and POS tags on task 1, and we found that adding this information can increase the F 1 value of our model by more than 0.01. Conclusion In this paper, we have explored the value of integrating pre-trained biomedical language representation models into a pipe of information extraction methods for collection of mutation-disease knowledge from PubMed. In particular, we investigate the use of three pre-trained models, BERT base , NCBI BERT and BioBERT, for fine-tuning on the new task and reducing the risk of overfitting. By considering the relationship between different data sets, we achieve better results. Experimental results on a benchmark annotation of genes with active mutation-centric function changes corpus show that pre-trained representations help improve baseline to attain state-of-the-art performance. In future work, we would like to train the entity recognition and relation extraction tasks simultaneously, reducing the cascading error caused by the pipeline model in biomedical information extraction.
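The pipeline described above (BERT encoder, token-level features, CRF output layer, multi-task loss) is not reproduced here, so the following is only a minimal sketch of the trigger-word NER step using the Hugging Face transformers API with a generic pre-trained checkpoint. The model name, label set and example sentence are placeholders; the CRF layer, POS/word-shape features and multi-task training used by the authors are omitted, and the classification head is randomly initialised until it is fine-tuned on the AGAC corpus.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder checkpoint and label subset: the authors fine-tuned BERT-base,
# BioBERT and NCBI BERT on the full AGAC trigger-word label set (Var, MPA, Reg, ...).
MODEL_NAME = "bert-base-cased"
LABELS = ["O", "B-Var", "I-Var", "B-Gene", "I-Gene", "B-Disease", "I-Disease"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
model.eval()

sentence = "The G2019S mutation in LRRK2 increases the risk of Parkinson disease."
encoding = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits            # shape: (1, seq_len, num_labels)

predictions = logits.argmax(dim=-1).squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
for token, label_id in zip(tokens, predictions):
    print(f"{token:15s} {LABELS[label_id]}")
```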
A Power and Resource Efficient Binary Content-Addressable Memory on FPGAs : Content-addressable memory (CAM) is a type of associative memory, which returns the address of a given search input in one clock cycle. Many designs are available to emulate the CAM functionality inside the re-configurable hardware, field-programmable gate arrays (FPGAs), using static random-access memory (SRAM) and flip-flops. FPGA-based CAMs are becoming popular due to the rapid growth in software defined networks (SDNs), which uses CAM for packet classification. Emulated designs of CAM consume much dynamic power owing to a high amount of switching activity and computation involved in finding the address of the search key. In this paper, we present a power and resource efficient binary CAM architecture, Zi-CAM, which consumes less power and uses fewer resources than the available architectures of SRAM-based CAM on FPGAs. Zi-CAM consists of two main blocks. RAM block (RB) is activated when there is a sequence of repeating zeros in the input search word; otherwise, lookup tables (LUT) block (LB) is activated. Zi-CAM is implemented on Xilinx Virtex-6 FPGA for the size 64 × 36 which improved power consumption and hardware cost by 30 and 32%, respectively, compared to the available FPGA-based CAMs. Introduction Field-programmable gate arrays (FPGAs) are becoming an increasingly favorable platform for systems implementation because of their hardware-like performance and software-like reconfigurability.Modern FPGAs provide a vast amount of configurable logic and embedded memory blocks operating at a high clock rate [1,2].For example, the 16-nm Xilinx UltraSCALE FPGA and 14-nm Intel Stratix FPGA provide a massive amount of memory and logic resources which can be configured on-the-fly to implement the given functionality.Software defined networks (SDNs) are implemented using FPGAs where flexibility and reconfigurability are the required features for future 5G/SDN networks.Support for packet classification is one of the key element of networking devices, which in most cases are implemented using a content-addressable memory (CAM). 
In a CAM, every location is accessed by its content rather than by address which is considered to be similar to the behavior of a human brain [3,4].Typically, it returns the address of the input search word in one clock cycle.CAM is classified into binary (BCAM) and ternary CAM (TCAM) [5,6].BCAM stores '0' and '1' while TCAM can store a don't care bit 'X' too.The stored data, in the form of CAM cells, are arranged as rows and columns.The number of rows represents the depth of CAM memory, while the number of columns represents the width of CAM memory.The input search word is compared with each stored word using search lines (SLs) to form the corresponding matchlines (MLs), as shown in Figure 1.The encoder (Enc.)translates MLs into an address where the search key is found.For example, the input search word of value "0101" is matched with the corresponding bits of location 2 to produce a logic-high ML2, which is translated to the address "10" by the encoder as shown in Figure 1.Due to high speed searching capability, CAMs have found its place in almost every scientific field where the look-up time is of utmost importance such as pattern recognition [7], security systems, networking applications [8,9], artificial intelligence [10,11].The efficiency of packet classification and packet forwarding in network routers is tremendously increased by using CAM as its searching memory where look-up time was previously unable to catch the bandwidth of the communication link [12].FPGAs are used in many applications other than networking.To include a hard core for CAM in FPGA is not a good option for FPGA vendors (such as Xilinx, Altera).Thus, CAMs are emulated using the available hardware resources on FPGAs and are continuously improving in terms of speed, power consumption, and hardware resources. Conventional CAM (on ASIC) has the drawback of high power consumption, limited storage density, non-scalability, and high implementation cost [3,13], compared with a random-access memory (RAM) which has high storage density and lower power consumption.One CAM cell requires 9-10 transistors, while one RAM cell needs only 6 transistors which makes conventional CAM power inefficient and limits its storage density.Similarly, modern FPGAs do not have a built-in core to support the searching-based applications.Researchers have utilized the RAM blocks (in the form of distributed RAM and block RAM (BRAM)) to emulate the functionality of CAM, such as HP-TCAM, REST [14,15] etc.As the resources on FPGAs are limited, so it needs to be used efficiently in designing complex circuits such as CAM.Thus, a binary CAM design based on distributed memory (LUTRAMs) with lower hardware cost as well as reduced power consumption is proposed and is successfully implemented on FPGA. BRAMs are large blocks of memory available as 18K and 36K blocks while distributed RAM consists of LUTRAMs which can be configured as 64-bit memory and is beneficial in small size designs.In our proposed design, we use the LUTRAMs and exploit its 6-input structure in modern FPGAs to save power consumption.The input search key is observed for a specific pattern and one of the two different blocks is searched which saves the power consumed by the other block. 
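As a software analogy for the behaviour just described, a binary CAM can be modelled as an inverted mapping from stored word to address: the search key is the lookup key and the returned value is the matching location. The sketch below is only a behavioural illustration of the "0101 returns address 10" example of Figure 1, not a model of the FPGA implementation.

```python
def build_cam(words):
    """Map each stored binary word to the address (row index) where it is stored,
    mimicking a binary CAM that returns the match address for a search key."""
    return {word: address for address, word in enumerate(words)}

# Four stored 4-bit words; searching "0101" reports location 2 (address "10"),
# matching the example of Figure 1.
stored_words = ["1100", "0011", "0101", "1010"]
cam = build_cam(stored_words)

search_key = "0101"
match_address = cam.get(search_key)   # None models a miss (no matchline asserted)
if match_address is not None:
    print(f"search {search_key} -> address {match_address:02b}")
else:
    print(f"search {search_key} -> no match")
```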
The remaining of the paper is arranged as follows: Section 2 discusses the motivations and key contributions of the proposed work.Section 3 provides the prior work on CAM to justify the need for the proposed design.Section 4 elaborates the proposed Zi-CAM architecture, presents an example size CAM architecture, search algorithm, and updating mechanism.Section 5 shows the implementation of Zi-CAM on FPGA and its comparison with other RAM-based CAM architectures.Section 6 concludes this work and discuss the future directions. Motivations Packet classification in modern networking architectures, such as SDN and OpenFlow, is implemented using CAM.The incoming packet needs to be redirected to the destination node based on a table maintained by router in the form of CAM.FPGAs are favorable to these networking application due to its re-configurabiltiy and high performance.Several architectures are available in literature to emulate the functionality of CAM using BRAMs [14,15], distributed RAM [16] as well as flip-flops [17][18][19] on FPGAs.The available CAM architectures are power inefficient due to the accessing of complete memory in each search operation.Therefore, a power efficient binary CAM architecture is proposed in the form of Zi-CAM which also shows reduction in hardware resources. The applications of FPGAs are constantly increasing, specifically, in the field of networking, security, and artificial intelligence [1,19,20], but modern FPGAs lack a soft core for CAM which is an essential element in searching-based applications.Thus, there is a need to develop an optimal CAM core, which can be used for packet classification in modern re-configurable networking systems on FPGAs. Key Contributions Following are the key contributions of the proposed architecture: • The average power consumption of Zi-CAM for size 64 × 36 is 16.6 mW, which is 30% less than the state-of-the-art BCAM counterparts [14,21].The dynamic power consumption of [21] is 24 mW.• The number of static-random access memory (SRAM) cells used by Zi-CAM for 64 × 36 is 208,256, while the available state-of-the-art RAM-based CAM uses 305,856 SRAM cells [21].Zi-CAM uses 32% fewer SRAM cells than state-of-the-art BCAM counterpart [21,22].• Unlike Xilinx CAM [16], there are no useless SRAM cells in the Zi-CAM architecture for any size of CAM implementation on FPGA.• The update latency of the proposed CAM is only 64 clock cycles, compared to the 513 clock cycles of other RAM-based CAM architectures [23,24], which makes it more suitable for practical applications where the data is frequently updated.The update latency of Zi-CAM is independent of the size of CAM while in other RAM-based CAMs, it varies with the size of CAM. Related Works RAM-based CAMs were initially not implementable on FPGAs due to large resource requirement.The hardware resources required for the architecture presented in patent [25] doubles by increasing the search word by a single bit which becomes impossible to be implemented on FPGA.A search word of size 36 bit requires 64 Gb of RAM and other computational resources is a plus to this requirement.An energy efficient design [26] is also available based on RAM, but it lacks the optimal configuration of the storing patterns. 
The CAM memory is partitioned in [14] by mapping CAM bits to different RAM blocks which enabled its implementation on FPGA, but it still suffers from inefficient usage of RAM.UE-TCAM [21] reduced the requirement of RAM blocks and used only half of the memory resources required for HP-TCAM implementation.The hardware utilization as well as power consumption for both binary and ternary CAM implementation of [14] and [21] are almost the same [18] which is why we are comparing its results with Zi-CAM in Table 1.RAM-based CAM architectures [14,15,27,28] are continuously evolving and improving in terms of speed, area, update latency, resources utilization as well as power consumption.We use the same efficient RAM in our implementation but with controlled switching activity and an efficient arrangement of the SRAM cells.With controlled switching activity, 70% to 80% of the hardware is de-activated which results in reduced dynamic power consumption for some input combinations.By using LUTs as memory elements, our proposed architecture uses fewer hardware resources than the available SRAM-based CAM architectures for the same size configuration.EE-TCAM [23] select one set of BRAMs in M BRAMs by pre-classifier bits, to achieve memory efficiency compared to other CAM architecture.BRAM-based CAMs including EE-TCAM have higher update latency of at least 512 clock cycles because of the monolithic structure of BRAM [24].BU-TCAM [29] updates the memory in the form of multiple words and not word by word, but the worst case latency still remains 512 clock cycles for a CAM having 512 locations.On the other hand, our proposed BCAM architecture has update latency of 64 clock cycles independent of the number of words in CAM memory which is discussed in Section 4.5.Similarly, EE-TCAM involves pre-processing of the incoming new words to map it to the SRAM blocks, which is a computational overhead on the whole system.The computational power involved in the pre-processing of the updating stage significantly affect the resultant power consumption where the updates frequency is higher, which is the case in most of the practical applications, i.e., network routers. 
Energy consumption at the circuit level [3,30] is reduced by lowering the supply voltage which is an obvious way of power reduction.In our proposed architecture, we are not changing the supply voltage but arrange LUTs in a novel fashion which leads to the controlling of two blocks on the pattern of bits in the search key.Only one block is active at a time and the power required by the other block is saved.Reconfigurability is a major advantage of FPGA-based CAM, which lacks in circuit level implementation of CAM presented in [6] CAMs that are emulated using block RAMs (BRAMs) [15,31] are efficient only for specific sizes.At some points, an increase of one bit in search word infers a complete BRAM (16k or 32k).For example, Xilinx CAM implements a basic block of CAM (32 × 10) using available BRAM (1024 × 32) and then allows the implementation of 64 × 10, 128 × 10, and so on or 32 × 20, 32 × 30, and so on [16,32].If we want to implement a CAM of 64 × 36, it uses the resources equal to a 72 × 40 CAM on Virtex-6 FPGA [33] where a lot of SRAM cells are wasted and can not be used somewhere else.In our proposed architecture, one can increase/decrease the depth and width by a single number, and there are no useless SRAM cells.Thus Zi-CAM efficiently utilizes the SRAM cells on FPGAs irrespective of the size configuration of CAM.LUT-based CAM, DURE [34], also utilizes LUTs similar to our proposed BCAM to implement TCAM and provides an update latency of 65 clock cycles. RAM-based TCAM, REST, is efficient in terms of memory resources and uses only 3 and 25% of HP-TCAM and Z-TCAM resources, respectively [15].Multiple accesses in REST [15] increase power consumption and the search latency is five clock cycles which is not in favor of the high-speed characteristic of CAM.On the other hand, our proposed architecture is not compromising on the search latency which is the core feature of a CAM and enables high-speed searching operation in only two clock cycles.Zhuo et al. increased the pipeline stages of RAM-based CAM and accesses the RAM blocks in multiple clock cycles rather than just one clock cycle to achieve the output match address.This increased the search latency up to nine clock cycles when nine stages of distributed RAM are used.Our design in worst case takes two clock cycles search latency.Hash-based CAMs [35] have non-deterministic search latency and the drawback of collision as well as bucket overflow are always part of it.Our proposed architecture provides the match address in a deterministic time of two clock cycles and achieves power efficiency. Terminology Table 1 lists the terminologies used in describing the architecture. Zi-CAM Architecture FPGAs have two types of slices: SLICEM and SLICEL.Each slice consists of four LUTs in Xilinx Virtex-6 FPGA.The LUTs inside SLICEM can be used as 64-bit memory, while those in SLICEL can only be used for logic implementation.Zi-CAM is mainly based on LUTs of a SLICEM, which is also called as distributed memory.Zi-CAM consists of two basic blocks; RAM block (RB) and LUTs block (LB).The RB is implemented using distributed RAM or BRAM of the target FPGA.The LB has some combinational circuitry (for logical comparison) and lookup operation (distributed memory in the form of LUTs or LUTRAMs). 
Search word (Sw) is divided into two parts during the search operation.One part consists of the most significant bits (MSB) and is represented by M_bits.The second part consists of the least significant bits (LSB) and is named as L_bits.The number of bits in M_bits & L_bits are represented by p & q, with r as the total number of bits in Sw.The division of the proposed CAM design into two blocks, RB and LB, is to search for the Sw in only one block and save the power consumed by another block.In other FPGA-based CAMs [14,27], all of the RAM is accessed to generate the output address.Zi-CAM is accessing only one of the two blocks to search for the input search key, so that the power consumption of the other block can be saved.The concept is illustrated in Figure 2 in which Sw is searched in RB or LB based on the bit pattern in Sw.If RB is searched for the search key, the power consumption of LB is saved and vice versa.Skipping one of the two blocks results in saving the resultant power consumption of the proposed design, which is given in the experimental implementation with FPGA results.The two blocks (RB and LB) is activated and deactivated by the output of Block Selector (BSel) which takes L_bits of the search word as input and provides a one-bit signal (named as flag_bit) equal to '1' or '0' as shown in Figure 3. BSel checks for a sequence of zero's in the search word.If all of the q bits of L_bits are zeros, the flag_bit gets a value of 1 and RB is activated.If at least one bit in L_bits is '1', flag_bit gets a value of '0' and LB is activated.Both RB and LB generates an address, out of which only one is transferred to the output by 2:1 multiplexer (MUX) as shown in Figure 3. RB is a simple SRAM taking M_bits as input, which serves as an address to this memory, and provides Radd at the output.LB is a complex circuit containing LUTs, XNOR gates, ANDing operation, and priority encoder.A 6-input LUT (LUTRAM) has 64 memory cells, starting from 0th cell and end at 63rd cell.The value of M_bits decides the bit location in each LUT.The depth of RB is always 64 in case of 6-input LUTs, because the same 6-inputs are mapped to the input of RB.The width of RB is dependent on the depth of Zi-CAM, for example, the width of RB is 3 bits for eight locations of Zi-CAM.If the number of locations (depth of Zi-CAM) changes to 32, the width of RB becomes 4 bits.Priority encoder is part of LB because the output of LB are matchlines while the output of RB is an address.To provide two similar inputs to the 2:1 MUX, matchlines of LB block should be converted to an address by the priority encoder.The output of RB does not need an encoder as it is already in the form of an address, and not matchlines. 
Flag_bit serves as input to three units; RB, LB, and MUX.Algorithm 1 describe the steps in the searching process of Zi-CAM.RB or LB is activated if the value of flag_bit is '1' or '0', respectively.Similarly, the flag_bit is given at the selector input of the MUX.The two inputs to MUX are the address from RB (Radd) and address from LB (Ladd).One of these two addresses (Radd or Ladd) appears at the output of MUX, which is ultimately the address (Add) where the Zi-CAM architecture finds the search word.The number of LUTs used by 8 × 10 are 32, which is the product of LUTs in a row (four LUTs) and the total number of rows (eight rows).The number of inputs to a LUT of the target FPGA decides the value of p.The proposed architecture is mainly evaluated and implemented on Xilinx FPGAs which contain 6-input LUTs in modern devices.Thus, the size of p is 6.Increasing the size of p requires high-input LUTs on target FPGA device.If newer FPGAs are released with 7-input LUTs, the proposed design will perform better.For instance, 8 × 10 Zi-CAM is implemented using 32 6-input LUTs.If 7-input LUTs are available, 8 × 10 Zi-CAM will be implemented using only 24 7-input LUTs. To simplify the proposed architecture, we explain it with an example size of 8 × 10 binary CAM in a detailed fashion as follows: 2 shows the content of the CAM that has 10 bits in each of the 8 memory locations (0 to 7).The operation inside LB and RB for 8 × 10 CAM is as follows: Location # BCAM Data 0 0 0 0 1 0 0 1 1 0 0 1 0 1 0 0 1 1 0 0 0 1 2 0 0 0 1 1 1 0 0 0 0 3 0 1 0 0 0 0 1 0 1 0 4 0 0 1 1 0 0 1 1 0 0 5 0 0 1 0 1 1 0 0 0 0 6 1 1 0 0 1 0 0 1 1 0 7 1 0 0 1 0 1 0 0 1 1 LB: For an 8 × 10 Zi-CAM, LB consists of 8 × 4 = 32 LUTRAMs; four LUTRAMs for each of the eight locations.Four LUTRAMs are arranged in a row and a total of eight rows.The Sw is divided into two parts; one part (6 MSBs) is given as input to every LUTRAM, second part (4 LSBs) is given to the four XNOR gates in such a way that the output of each LUTRAM is compared with one of these four LSBs.The output of four XNOR gates is provided to 4-bit AND gate to create a matchline as shown in Figure 4a.Eight locations create eight mathlines which are provided to the 8:3 encoder to generate 3-bit Ladd.RB: Only those two words are mapped to RB which has all four LSBs equal to zero; words at 2nd and 5th location of Zi-CAM as shown in Table 1.The six MSBs of these words are given to RB, which is a 64 × 36 RAM as shown in Figure 4b."010" is stored in the 7th location and "101" is stored in 11th location which is the output address whenever the words stored in 2nd and 5th location of Zi-CAM is given as input search word.Other six words of Table 2 are mapped into LB. Finally, the 2:1 MUX generates Add by getting Ladd and Radd as two inputs.Flag_bit decides between the Ladd and Radd.If the sw has the continuous sequence of zeros like in 2nd and 7th location in Table 2, Ladd is generated; otherwise Radd.Add is the address where the input search word is found. 
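The search flow just described (a block selector on the q least-significant bits, the RAM block for all-zero suffixes, the LUT block otherwise, and a 2:1 MUX on the two candidate addresses) can be summarised in a behavioural sketch. The code below is a software approximation of Algorithm 1 for the 8 × 10 example, not the hardware description; the table contents are the example words of Table 2, and the helper names are invented for illustration.

```python
P, Q = 6, 4   # M_bits / L_bits split of the 10-bit search word in the 8 x 10 example

# Example CAM contents (Table 2): CAM address -> stored 10-bit word.
TABLE = {
    0: "0001001100", 1: "0100110001", 2: "0001110000", 3: "0100001010",
    4: "0011001100", 5: "0010110000", 6: "1100100110", 7: "1001010011",
}

# RB holds only the words whose 4 LSBs are all zero, addressed by their 6 MSBs
# (locations 2 and 5 in this example); every other word is kept in LB.
RB = {word[:P]: addr for addr, word in TABLE.items() if word[P:] == "0" * Q}
LB = {addr: (word[:P], word[P:]) for addr, word in TABLE.items() if word[P:] != "0" * Q}

def zi_cam_search(sw: str):
    m_bits, l_bits = sw[:P], sw[P:]
    flag_bit = 1 if l_bits == "0" * Q else 0      # BSel: an all-zero suffix activates RB
    if flag_bit:
        return RB.get(m_bits)                     # Radd, or None on a miss
    for addr, (m, l) in LB.items():               # LB: XNOR/AND compare, priority encode
        if m == m_bits and l == l_bits:
            return addr                           # Ladd
    return None

for key in ("0001110000", "1001010011", "1111111111"):
    print(key, "->", zi_cam_search(key))
```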
Searching Operation Algorithm 1 represents the search operation of Zi-CAM from taking the input search word (Sw) to generate the output address (Add).To simplify the whole process, an example size of 8 × 10 is assumed for Zi-CAM in this algorithm.There is only one IF statement which decides the type of block (RB or LB) to be activated for searching the memory.If the sequence of zeroes is found in the input search word, the IF condition (flag_bit == 1) becomes true, and RB is activated.Six MSBs of Sw is provided as input to RB which produces the address (Radd) where the input search word is found. Sw is forwarded to LB when the IF condition is false.Eight matchlines (M_L [1], M_L [1], ..., M_L [7]) are created by comparing the 4 LSBs of Sw with the output of 4 LUTRAMs in each of the eight rows of LUTs.The comparison is shown using XNOR gates in line #7 of Algorithm 1.A priority encoder converts the eight matchlines into Ladd.Line #10 of Algorithm 1 shows the MUX operation which selects one of the two addresses, Ladd and Radd coming from LB and RB, respectively. Updating Mechanism The key component used in Zi-CAM implementation is a LUT, which in Xilinx Virtex-6 FPGA is a 64-bit memory when used from SLICEM slices.In our proposed architecture, each page of LUTRAMs represents a row of CAM as shown in Figure 4a.A page of LUTRAMs is selected on the basis of location where the incoming word needs to be written.In 64 clock cycles, all of the 64-locations in LUTs in the corresponding page is updated by the bit values of the incoming word and a 6-bit counter.The counter's value act as a 6-bit address to each LUT in the corresponding page.For a TW × r Zi-CAM, the update latency is independent of the size of CAM and can be updated in a constant time of 64 clock cycles because each LUT in modern FPGAs need only 64 clock cycles to be updated completely. A Generalized Form The example in Section 4.3 can be generalized to visualize the larger sizes of Zi-CAM.The increase in depth and width by a single bit infers the appropriate number of LUTRAMs and other FPGA components, unlike other architecture [14,16] which sometimes infer useless circuitry in the form of SRAM cell (BRAMs) as discussed in Section 2. A TW × r Zi-CAM will infer TW*(r − 6) LUTRAMs for storing of the mapping data with a few additional LUTs for logic implementation.The relationship between the size of Zi-CAM and the hardware cost is linear.Larger the size of Zi-CAM, higher is the hardware cost.The number of XNOR gates are also equal to TW*(r − 6) and the size of priority encoder is TW:log 2 (TW).The dimension of RB is 64 × log 2 (TW), if the number of inputs to a single LUT remains constant.In latest FPGAs, the number of inputs to a single LUT remains 6.The MUX needed is always a 2:1 MUX for directing of the two addresses (Radd & Ladd) to the output of Zi-CAM.The size of the bit-vectors coming to MUX is equal to the address of Zi-CAM. Figure 2 . Figure 2. Dividing of the CAM design in two blocks to save power consumption.(Sw: Search word, Add: Output address where the search word is found).
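The generalized resource figures given above (TW*(r − 6) LUTRAMs and XNOR gates, a TW:log2(TW) priority encoder, a 64 × log2(TW) RB for 6-input LUTs, and a 64-cycle update latency) can be restated as a small cost estimator. The sketch below simply evaluates those formulas; the helper name is invented for illustration and the output is an estimate, not a synthesis report.

```python
import math

def zi_cam_resources(tw: int, r: int, lut_inputs: int = 6) -> dict:
    """Estimate the main resources of a TW x r Zi-CAM built from k-input LUTs,
    following the generalized form in the text (p = lut_inputs MSBs address the
    LUTRAMs / RB, the remaining r - p LSBs are compared with XNOR gates)."""
    addr_bits = math.ceil(math.log2(tw))
    return {
        "LUTRAMs for LB": tw * (r - lut_inputs),
        "XNOR gates": tw * (r - lut_inputs),
        "priority encoder": f"{tw}:{addr_bits}",
        "RB dimensions": f"{2 ** lut_inputs} x {addr_bits}",
        "update latency (cycles)": 2 ** lut_inputs,   # 64 for 6-input LUTs, size-independent
    }

# The 64 x 36 configuration reported in the paper and the 8 x 10 walk-through example.
for depth, width in ((64, 36), (8, 10)):
    print(f"{depth} x {width}: {zi_cam_resources(depth, width)}")
```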
Remaining useful life prediction based on a modified SVR method with health stage division, cluster sampling and similarity matching Remaining useful life (RUL) prediction is an advanced methodology of prognostics and health management (PHM), and is in advantage of life cycles management of equipment and maintenance cost reduction. Among the data driven methods, support vector regression (SVR) is one of the most suitable methods when there are limited failure history data for analysis. However, many uncertain factors such as individual variation and time varying operating conditions, will lead that the failure time of all equipment statistically dispersive, and with the increase of sample set, such dispersity may inevitably increase and further reflects in the model training. In consequent, the dispersity may cause two drawbacks. On the one hand, the linearity of SVR model will increase with the increase of sample set, and overfitting or underfitting tends to occur. And on the other hand, a single model only performs well in its generalization and robustness, but may lost its effectiveness that it may fail to work well for a new on-service equipment. In order to deal with the two drawbacks, this paper proposes a modified SVR method with health stage division, cluster sampling and similarity matching. Through health stage division and cluster sampling, each state of the whole degradation process can obtain the optimal parameters, then the irrelevant linearity can be reduced. In addition, since that similar input results similar output, the optimal parameters of the most similar testing sample are also suitable for the on-service equipment, and through similarity matching the most similar testing sample can be obtained, thus the drawback of a single model can be avoided. Finally, the effectiveness of the proposed method is verified systematically by a simulated dataset of fatigue crack growth and a real-world degradation dataset of GaAs-based lasers. Introduction The operating reliability of mechanical equipment significantly influences the sustainability and competitiveness of manufacturing industry. As the reliability of mechanical equipment decreases with the extension of the operating time, prognostics and health management (PHM) techniques attract more and more attentions [1][2][3]. It determines the optimal maintenance time, inspection interval, spare parts order quantity and other logistics management strategies through efficient and low-cost diagnostic and prognostic activities to achieve the lowest economic cost or equipment failure risk. The key of PHM technology is to predict the equipment's remaining useful life (RUL), i.e. the remaining useful time for a certain part or component to perform its function before its final failure [4]. The accurate results of RUL prediction can help decision makers to make maintenance plans in advance and optimize supply chain management. Compared with traditional reliability-based methods [5,6], existing RUL prediction methods mainly learn the potential degradation law and the failure information through analyzing the condition monitoring data. They can be generally divided into three categories: physics-based methods, data-driven methods and their combinations [7,8]. A physics-based method always requires a broad understanding of the physical mechanism of the equipment, but an excess of physical parameters make it difficult to get an accurate physical model [9]. 
In comparison, when the explicit physical model is unknown, data-driven methods can directly analyze the condition monitoring data, and predict the RUL. Existing data-driven methods can be divided into: (1) the degradation-modeling based methods [8], which aims to model the degradation evolution process of equipment; (2) the machine learning methods [10], which attempts to map the relationship between degradation data and its RUL directly [11]. A degradation-modeling based method mainly uses a statistical model to learn the degradation evolution from the condition monitoring data, and then estimates the RUL by comparing the predicted results with the pre-set failure threshold. Examples such as: Wiener process models [12,13], Gamma process models [14,15], inverse Gaussian process models [16,17], hidden (semi) Markov models (HMM/HSMM) [18,19], etc. The inherent drawbacks of the degradation-modeling based methods derive from two strong premises: (1) the degradation process of equipment's performance should follow a certain statistical model, such as the continuous-time Markov chain, the hidden Markov model, the hidden semi-Markov model or the Wiener process, .etc.; (2) the statistical property of the degradation model, for example the transition probability matrix for Markov-based models, should be a priori known or estimated [20]. However, for practical instances, theoretical statistical models such as Markov chain are very hard to be verified, and the transition probability matrix is often hard to estimate or inaccessible. Therefore, the applicability of the degradation-modeling based methods is limited in engineering practice. In contrast, the machine learning methods such as artificial neural networks (ANN) [21,22], similarity-based RUL prediction (SbRP) method [11,23] and support vector machine (SVM) [24][25][26][27][28][29][30][31], can effectively avoid these issues. Although the machine learning methods don't provide a probability density function (PDF) estimate of the RUL, they are capable of dealing with prognostic issues of complex systems whose degradation processes are difficult to be interrelated by physics-based methods or degradation-modeling based methods, and have been widely studied and applied in recent years. In practice, considering the validity and expenditure, the main challenge of PHM is how to perform RUL prediction with the limited amount of failure samples. To the best of our knowledge, support vector regression (SVR) as the most common application of SVM algorithm in PHM field, was originally proposed for small-sample analysis [24], it can effectively estimate the RUL with limited failure samples. Widodo and Yang [25] developed an intelligent system using survival analysis and SVR to predict the survival probability of bearings. Loutas et al. [26] reported the ɛ-SVR, and employed it for estimating the RUL of rolling element bearings. Benkedjouh et al. [27,28] used SVR to map the degradation time series into nonlinear regression, and then fitted it into the power model for RUL prediction of the mechanical equipment. Liu et al. [29,30] proposed an improved probabilistic SVR model to predict the RUL of equipment components of a nuclear power plant. Fumeo et al. [31] developed an online SVR model to predict the RUL of bearings by optimizing the tradeoff between accuracy and computational efficiency. Tao et al. 
[20] considered the dynamic multi-state operating conditions, and trained the corresponding SVR model under different operating conditions, then the RUL of the on-service equipment can be predicted under time varying operating conditions. Some other SVR-based methods were developed in [32][33][34][35]. These studies promote the application of SVR-based methods in PHM, but due to the dispersity of samples' failure time, there is an inevitable drawback that the linearity of SVR model will increase with the increase of sample set, and overfitting or underfitting tends to occur [9]. Furthermore, such dispersity also represents the diversity of equipment's degradation processes, thus a single SVR model is not always the best one for a brand-new equipment. Therefore, in order to deal with the two drawbacks, this paper proposes a modified SVR method with health stage division, cluster sampling and similarity matching. As we shall show, the novelties and contributes of this paper exhibit in the following aspects: (1) The reasons of SVR model linearization increasing with data set are analyzed. (2) It combines health stage division and cluster sampling in SVR model training, and can obtain the optimal parameters of each health stage, thus the drawback that the linearity of SVR model will increase with the increase of sample set can be solved effectively. (3) It applies similarity matching to select the optimal parameters in the online prediction phase. Based on the idea of similar inputs mean similar outputs, the optimal SVR model for the on-service equipment can be obtained from its most similar testing sample, thus the limitation of a single SVR model can be avoided. The rest of this paper is organized as below. Section 2 introduces several related theories of the proposed method. Section 3 discusses the problem of a traditional SVR-based RUL prediction method. Section 4 describes the modified SVR method for online RUL prediction. Section 5 analyzes systematically the proposed method by two datasets. Finally, Section 6 concludes the whole paper. Support vector regression SVR is the successful extension of SVM for dealing with regression problems [36]. It aims to find a maximum margin in order to minimize the error of the missed training data. Given a training set = { , } =1,2,…, with input data ∈ and output data ∈ , the regression function is expressed as follow: = ( ) = ( ) + (1) where ( ) denotes the feature of inputs, and are the coefficients, and can be estimated by minimizing the following regularized risk function: where the first term 1 2 ‖ ‖ 2 is called the regularized term and used to estimate the flatness of the function, the second term is the so-called empirical error or training error. is the specified criteria chosen to balance the tradeoff between the flatness of the function and the training error. ℓ is the ε linear insensitive loss function: where ε is the tube size of SVR. If the penalty factor is larger, it is proved that the training error is large and the sample penalty of ε is larger, otherwise the opposite is true; if ε is smaller, the error of the regression function is smaller. Then two positive slack variables and * are introduced to represent the distance from desired values to the corresponding boundary values of the ε-tube. Eq. (2) can be rewritten as: Finally, by introducing Lagrange multipliers and exploiting optimality constrains, Eq. 
(1) is transformed into the form rewritten as follow: where , * are the Lagrange multipliers which satisfy the equality * = 0, ≥ 0, * ≥ 0 and obtained by maximizing the dual function of Eq. (4) which Karush-Kuhn-Tucker conditions are applied. The dual Lagrange form of Eq. (4) is given by: with the constrains: where � , � is the kernel function, and its value is equal to the inner product of two vectors and in the feature space ( ) and � �, i.e. � , � = ( ) · � �. Any function satisfied Mercer's condition [24] can be used as the kernel function, such as linear kernel, polynomial kernel, radial basis function (RBF) kernel and sigmoid kernel, etc. Cluster sampling In sampling theory, the coherent sample set composed of several connected basic units is called a cluster. And for cluster sampling, the population is first divided into different sub-clusters according to some criteria, then each sub-cluster is taken as the sampling unit that be extracted integrally for analyzation. Due to the advantages of simplicity, convenience and cost-reduction, cluster sampling is widely used for statistical analysis, and developed a variety of variants for different scenarios, such as adaptive cluster sampling [37], stratified cluster sampling [38], etc. The main disadvantages of cluster sampling are that the sample distribution is not wide and the representation for population is relatively poor. However, it does a good job of preserving the integrity of local information. Thus, the method of cluster sampling is employed in this paper to improve the SVR-based method. Similarity matching Similarity matching is the key procedure of the SbRP method. As an emerging and universal data-driven method, SbRP has been rapidly studied and developed in recent years. It predicts the RUL of an onservice equipment by a weighted arithmetic mean of those of reference ones [23,39] and is capable to realize long-term estimation and dispense with the degradation modeling [40]. The core idea of the SbRP method is that similar inputs product similar outputs. And based on similarity matching, the most similar trajectories of the reference samples are obtained, then the RUL of the on-service equipment can be calculated by the weighted average RUL of those trajectories [41]. Thus, we consider that the most similar sample's optimal SVR model parameters are also the optimal for the on-service equipment, and employ similarity matching method for parameters selection in the online prediction stage. The most commonly used method of similarity matching is the distance-based algorithms, such as the Euclidian, Manhattan, and Canberra distances. Problem statement A large number of experiments and engineering cases show that even the same category or the same batch of equipment, their degradation curves are often different with each other due to the difference of internal structure and the variability of operating environment [42], i.e. the failure time of all equipment are statistically dispersive, reference to Fig. 1. With the increase of sample set, such dispersity may inevitably increase and further reflects in model training. Fig. 1 The degradation curves of the same category of equipment The goal of SVR is to solve the function ( ) that has at most ε deviation from the obtained targets of for all the training data, errors can be ignored if they are less than ε, namely the loss function ℓ doesn't calculate the data points between ( ) + ε and ( ) − ε (according to Eq. 3). For a trained SVR model, ε is a constant and the ε-tube is unique. 
However, the uncertainties [43] in the early stages are always more than that in the late stages, thus the dispersity of RUL in the early stages is usually more obvious than that in the late stages, which causes that in the early stages there are more data points outside the ε-tube, and therefore, follow a constant ε may cause that the data points in the late stages have less impact in model training, thus the linearity will increase with the increase of sample set, and overfitting or underfitting tends to occur. For example, Fig. 2 shows the data point distribution of a training data set, they spread out in the early stages and then gradually concentrate, and when using an SVR model to fit them, there is an underfitting for all those data if > . Therefore, the ε-tube may be too narrow in the early stages and too wide in the late stages. Fig. 2 The problem of RUL prediction by SVR method In addition, the diversity of equipment's degradation processes may lead that a single SVR model constructed offline fails to work well for a brand-new equipment. Methodology As the problem discussed in Section 3, the dispersity of equipment's failure time leads the constant εtube don't suit for the whole regression process of SVR, health stage division is suit for dealing with this issue. By health stage division, the degradation processes can be divided into several stages according to the different degradation patterns, each stage can be analyzed separately. For instance, Hu et al. [44] divided the degradation processes of generator bearings into four stages based on the change points of confidence levels. Kimotho et al. [45] divided the degradation processes of bearings into five stages by analyzing the changes of frequency amplitudes in the power spectral density. They consider that the degradation process of a product can usually be divided into two or more stages, each stage presents different degradation patterns, and by the multi-stage RUL prediction, the accuracy is improved. However, many kinds of equipment's degradation processes don't have the obvious change points, and the high-quality failure historical data often less in engineering practice, such issues lead the multi-stage RUL prediction in challenge. For a multi-stage SVR method, if the training sample set is small, there is an inevitably issue that the corresponding SVR model of two consecutive stages may be overfitting or underfitting in their joining part, for the reason that this part is the predicted part of regression. Reference to Fig. 3, the predicted parts are always full of uncertainty, and may be inaccuracy and unsatisfactory. Therefore, cluster sampling is introduced to solve this problem. The application mode of cluster sampling in the proposed SVR method is shown in Fig. 4. In order to obtain the optimal model parameters of stage k, the corresponding data of stage k of the testing data (such as the data = { , ⋯ , } shown in Fig. 4) are cluster sampled about times and added into the original testing data. Then on the one hand, the weight of one of the data points of increases from 1 to , and where 1 ≤ − + 1 ≤ and 1 = +1 + ≤ ≤ +1 + , thus ≥ 1 is always right, and the partial testing data plays more roles in parameters optimization. On the other hand, other data except greatly decrease the uncertainty in the process of SVR model construction, even though their weights are relatively lower, if these data outside the ε-tube (far from the optimal regression curve), the loss function ℓ will increase. 
Thus, by introducing cluster sampling, overfitting or underfitting in the joining part of two consecutive stages can be reduced.

Fig. 3 The issue of simply dividing the degradation into several stages and constructing the corresponding SVR models
Fig. 4 The difference in model construction between the proposed SVR method and the traditional SVR method

Additionally, to avoid focusing on generalization while neglecting the impact of the diversity of equipment degradation processes, similarity matching is introduced to select better parameters in the online prediction stage. The main flowchart of the proposed method is shown in Fig. 5 and described as follows.

Step 1: Divide the original data into a training set and a testing set. As far as possible, the degradation processes of the testing samples should differ significantly from each other, so that each testing sample is representative.
Step 2: Select an appropriate kernel function. The key of SVR is the choice of kernel function, mainly among the linear, polynomial, RBF and sigmoid kernels. The most widely used is the RBF kernel: it is applicable to small or large sample sets and to high- or low-dimensional data. Compared with the other functions, it has the following advantages: (1) the RBF kernel maps a sample to a higher-dimensional space, and the linear kernel is a special case of the RBF kernel, so once the RBF kernel is considered there is no need to consider the linear kernel separately; (2) compared with the polynomial kernel, the RBF kernel has fewer parameters to determine, and the number of kernel parameters directly affects the complexity of the model; moreover, when the order of the polynomial is high, the elements of the kernel matrix tend to infinity or to zero, whereas the RBF kernel is bounded, which reduces numerical difficulties; (3) for some parameter settings, the RBF and sigmoid kernels have similar performance.
Step 3: Divide the HI of each testing sample into stages. Health stage division should follow some criterion, either according to change points or simply by dividing evenly. Either way, compared with the entire time series, the internal dispersion within each stage shrinks.
Step 4: Cluster sample the data of stage k (1 ≤ k ≤ K) r times and put the sampled data back into the original data. As shown in Fig. 4, the modified testing sample is used for training the parameters of stage k.
Step 5: Train the SVR model and obtain the optimal parameters of stage k. Algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO) or grid search can be used to solve the parameter optimization problem.
Step 6: Repeat steps 3, 4 and 5 until the whole testing set has optimal parameters for all stages.
Step 7: For the HI of the in-service equipment, perform similarity matching with the whole testing set; similarity matching in this paper mainly concerns the similarity of degradation curves (illustrated in the sketch below).
Step 8: For online prediction, select the parameters of the most similar testing sample's corresponding stage for RUL prediction.
Step 9: Finally, output the RUL of the in-service equipment.

Case studies
In this section, we first use a simulated dataset to demonstrate the effectiveness of the proposed method. Then the method is applied to the GaAs-based lasers dataset.
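Before turning to the case studies, the following minimal sketch illustrates Steps 7 and 8 (assuming numpy; the Euclidean distance over a common time grid is used here as the curve-similarity measure, and the variable names are illustrative, not the paper's).

```python
import numpy as np

def most_similar_sample(hi_online, hi_references):
    """Return the index of the reference HI curve closest to the observed one.

    hi_online     : 1-D array, observed health indicator of the in-service unit
    hi_references : list of 1-D arrays, HI curves of the testing samples
    Curves are compared over the length already observed online.
    """
    n = len(hi_online)
    distances = [np.linalg.norm(hi_online - ref[:n]) for ref in hi_references]
    return int(np.argmin(distances))

# Illustrative usage: pick the stage-specific SVR parameters of the best match.
# stage_params[sample_index][stage_index] would hold the (C, epsilon, gamma)
# tuples obtained offline in Steps 4-6.
hi_online = np.array([0.0, 0.4, 0.9, 1.5])
hi_references = [np.linspace(0, 6, 40), np.linspace(0, 3, 40) ** 1.2]
best = most_similar_sample(hi_online, hi_references)
print("most similar testing sample:", best)
```

A real implementation would also have to align the sampling instants of the curves and could use the Manhattan or Canberra distances mentioned above; that choice only changes how the distances list is computed.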
The RBF kernel is used in both the proposed method and the traditional SVR method, and the parameter optimization problem is solved by the grid search method. The RBF kernel is

$$K(x_i, x_j) = \exp\left(-\frac{\lVert x_i - x_j \rVert^2}{2\sigma^2}\right)$$

where $\sigma$ is the radius (width) of the RBF kernel function. In order to demonstrate the performance of the proposed method, we employ the mean absolute error (MAE) and the root-mean-square error (RMSE) as performance metrics:

$$MAE = \frac{1}{N}\sum_{i=1}^{N}\lvert y_i - \hat{y}_i \rvert, \qquad RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$

where $N$ is the total number of data points in the in-service sample's HI, $y_i$ is the accurate value and $\hat{y}_i$ is the predicted value; the lower the MAE or RMSE, the better the accuracy. The performance of the proposed method is illustrated by comparison with the traditional SVR method and the SbRP method.

Simulation study
In this subsection, the prediction is illustrated and the performance is evaluated through numerical simulations. The degradation curves of 30 units are obtained from an exponential degradation model [46]:

$$s(t) = \phi + \theta t + \epsilon(t) \qquad (12)$$

where $s(t)$ is the log value of the degradation data, $\phi \sim N(\mu_0, \sigma_0^2)$, $\theta \sim N(\mu_1, \sigma_1^2)$, and the noise $\epsilon(t) \sim N(0, \sigma^2)$. In this simulation study, all the initial damages start at 0 inch, and the failure threshold (FT) is set to 50 inches. The preset values of the model parameters are given in Table 1. Fig. 6 shows the degradation curves of the 30 units. The numerical simulations are all terminated at 21 million cycles. In this study, 3 units are chosen as the in-service samples and 8 units as the testing samples (with their failure times relatively evenly distributed); the others are used as the training samples. Online prediction is performed for the three in-service samples by the different methods.

Table 1 Preset parameters of the exponential degradation model (preset values: 1, 0.2, 4, 0.6, 0.06)
Fig. 6 The degradation curves of the simulation dataset

For the 8 testing units, we divide their respective degradation processes into 5 stages, with each stage covering an interval of 10 inches, i.e. K = 5. The appropriate number of cluster-sampling repetitions can then be determined from repeated experiments. When r = 0, we define the training error of the corresponding stage as the standard unit, i.e. E(r = 0) = 1; the variation of the training error with the cluster-sampling number r is then shown clearly in Fig. 7. As r increases, the training errors are non-increasing and finally converge to a definite level. Therefore, according to Fig. 7, r = 25 is an appropriate value for cluster sampling in this study.

In the online prediction phase, the prediction results of the three in-service samples are shown in Fig. 8. For the 1st in-service sample, 8 predictions were made over the whole life cycle; the proposed method gave the minimum prediction error in 5 of them (62.5%), while the traditional SVR method did so in 0 (0%) and the SbRP method in 3 (37.5%); the results are listed in Table 2. For the 1st in-service sample, the MAE and RMSE of the proposed method are both lower than those of the other two methods. For the 2nd in-service sample, 12 predictions were made over the whole life cycle; the proposed method gave the minimum prediction error in 3 of them (25%), the traditional SVR method in 1 (8.33%) and the SbRP method in 8 (66.67%); the results are listed in Table 3. For the 2nd in-service sample, the MAE and RMSE of the proposed method are both lower than those of the traditional SVR method but higher than those of the SbRP method.
For the 3rd in-service sample, 15 predictions were made over the whole life cycle; the proposed method gave the minimum prediction error in 7 of them (46.67%), the traditional SVR method in 7 (46.67%) and the SbRP method in 5 (33.33%); the results are listed in Table 4. For the 3rd in-service sample, the MAE and RMSE of the proposed method are both lower than those of the traditional SVR method; the RMSE of the proposed method is higher than that of the SbRP method, while its MAE is lower. The results show that the proposed method performs better than the traditional SVR method, but the relative merits of the proposed method and the SbRP method remain unclear and need further analysis in the next subsection.

Application to GaAs-based semiconductor lasers
In this subsection, the proposed method is applied to a degradation dataset of GaAs laser devices taken from Meeker and Escobar (1998) [47]. Degradation of the laser device is measured as the operating current, which increases over time. Degradation data from 15 testing units were collected with a measurement interval of 250 h and a terminal time of 4000 h; their curves are shown in Fig. 9. In this paper, a laser is assumed to have failed if the percentage increase of its operating current exceeds a predefined threshold level FT = 6%. Note that the observations are recorded only every 250 h, so the true life, i.e. the time of reaching the failure threshold, cannot be known exactly. Because the degradation of a laser is approximately linear, the most probable failure time, which is taken as the real one, can be calculated by a local linear regression method. We choose units 5, 6, 7 and 13 as the testing samples and units 1 and 15 as the in-service samples, with the others as training samples. Online prediction is performed for the two in-service samples by the different methods.

Fig. 9 The degradation curves of the GaAs-based laser data

For the 4 testing units, we divide their respective degradation processes into 6 equal stages, i.e. K = 6. The variation of the training error with the cluster-sampling number r is shown clearly in Fig. 10. As r increases, the training errors are non-increasing and finally converge to a definite level. Therefore, according to Fig. 10, r = 25 is an appropriate value for cluster sampling in this study.

In the online prediction phase, the prediction results of the two in-service samples are shown in Fig. 11. For unit 1, 6 predictions were made over the whole life cycle; the proposed method gave the minimum prediction error in 4 of them (66.67%), while the traditional SVR method did so in 2 (33.33%) and the SbRP method in 0 (0%); the results are listed in Table 5. For unit 1, the MAE and RMSE of the proposed method are both lower than those of the other two methods. For unit 15, 11 predictions were made over the whole life cycle; the proposed method gave the minimum prediction error in 7 of them (63.64%), while the traditional SVR method and the SbRP method each did so in 2 (18.18%); the results are listed in Table 6. For unit 15, the MAE and RMSE of the proposed method are both lower than those of the other two methods.

Fig. 10 The variation trend of the training error in each stage of the testing samples
Fig. 11 The prediction results of the in-service samples of the GaAs-based laser data

Results and discussion
From the above two subsections, the proposed method performs better than the traditional SVR method in all cases; the introduction of cluster sampling into the SVR method can effectively reduce the training error.
However, it should also be noted that as the number of training samples increases, the proposed method sometimes does not perform as well as the SbRP method. Nonetheless, when the training samples are limited, the proposed method performs better than the SbRP method.

Conclusion
This paper proposes a modified SVR method with health stage division, cluster sampling and similarity matching. Through health stage division and cluster sampling, the training error of each stage is non-increasing, and the overall training error decreases with the number of cluster-sampling repetitions until it finally converges to a definite level. Therefore, the drawback that the linearity of an SVR model increases with the size of the sample set can be mitigated effectively. Additionally, similarity matching is introduced to select the optimal parameters in the online prediction phase, which overcomes the limitation of a single SVR model. To demonstrate the advantages of the proposed method, a simulation dataset and the GaAs-based laser dataset are analyzed. The results show that the proposed method is more effective than the traditional SVR method and, when the training samples are limited, performs better than the SbRP method. Although the prediction results of the SbRP method are sometimes more accurate than those of the proposed method, pointwise similarity measurement, as the key procedure of the SbRP method, often requires extensive computation, and as the number of training samples grows it is much more time-consuming than the proposed method. Consequently, the proposed method is of practical significance: it extends SVR-based studies of RUL prediction and is effective in engineering practice.
Metagenomics — A Technological Drift in Bioremediation
Nature has its ways of resolving imbalances in its environment, and microorganisms are one of nature's best tools for eliminating toxic pollutants. The process of eliminating pollutants using microbes is termed bioremediation. Metagenomics is a strategic approach for analysing microbial communities at a genomic level, and it is one of the best technological upgrades to bioremediation. Identification and screening of metagenomes from polluted environments are crucial in a metagenomic study. This chapter emphasizes multiple recent case studies explaining the approaches of metagenomics to bioremediation in different contaminated environments such as soil and water. It then explains different sequence- and function-based metagenomic strategies and tools, starting with a detailed view of metagenomic screening, FACS, and several advanced metagenomic sequencing strategies, dealing with the prevalent metagenomes in bioremediation, and giving a list of widespread metagenomic organisms and their respective projects. Finally, we provide a detailed view of the major bioinformatic tools and datasets most prevalently used in metagenomic data analysis and processing during metagenomic bioremediation.

Introduction
From the day humans started inhabiting this planet, Earth has been burdened with numerous toxic pollutants from multiple sources. Advanced scientific technology has given rise to multiple tools to reduce pollutants in different ways, and bioremediation is considered to be the best way to neutralise polluted environments on Earth [1,2]. In this genomic era, metagenomic approaches have been developed and are known as effective methods of removing various kinds of pollutants [3,4]. Metagenomics is a strategic approach to analyzing microbial communities at a genomic level. This provides a glimpse of the microbial community view of "uncultured microbiota". Recent studies suggest that microbial communities are potential alternatives for eliminating toxic contaminants from our environment [5][6][7][8]. The term metagenomics was coined by Jo Handelsman et al. in 1998, who accessed the collective genomes and the biosynthetic machinery of soil microflora during a study on cloning the metagenome [9]. Bioremediation has always been adopting new advances in science and technology to establish better environments. Compared with previous years, there has been a gradual increase of interest in metagenomics-based bioremediation studies [10][11][12]. These studies show that metagenomics is one of the best adaptations of bioremediation, leading to the establishment of a pure, nontoxic environment. In this chapter, we discuss recent approaches of metagenomics to bioremediation with the help of multiple recent case studies.
Preliminarily, we explained the methodology behind metagenomic analysis, starting from the sample screening and ending up with metagenomic analysis with respect to bioremediation. Metagenomic bioremediation reviews and extracts microbial communities applying their extensive biochemical pathways in degrading toxic pollutants. A part of our study aims to emphasize multiple case studies of metagenomic applications on air, water, and soil contaminations. Our analysis provided a topic-specific landscape with respect to metagenomic bioremediation of water contaminations, soil contaminations, and followed by air contaminations. The following part of our study focuses on recently developed sequence and function-based metagenomic strategies to analyze metagenomes from contaminated environments. In addition to this, our study explains the highly prevalent metagenomes derived from metagenomic communities which are also highly capable of degrading contaminations and toxins in the environment. Finally, we provided a landscape view of multiple bioinformatic tools used in the processing and analysis of metagenomic bioremediation data. Applications of metagenomics in bioremediation Environmental scientists consider metagenomic bioremediation as one of the potential tools to remove contaminants from the environment [13][14][15]. As cited earlier, recent multiple studies have reported metagenomic approaches in bioremediation. When this was compared with the other approaches of bioremediation, metagenomic bioremediation provided best outcomes with better degrading ratios. The results of a recent study emphasized the potential of metagenomic bacteria derived from petroleum reservoirs [16]. In this study, microbial strains and metagenomic clones have been isolated from petroleum reservoirs, and petroleum degradation abilities were evaluated either individually or in pools using seawater artificial ecosystems. The results showed that metagenomic clones were able to biodegrade up to 94% of phenanthrene and methyl phenanthrenes with rates ranging from 55% to 70% after 21 days [16]. The authors concluded that bacterial strains and metagenomic clones showed high petroleum-degrading potential. Metagenomic approaches in bioremediation aid in comprehending the characteristics of bacterial communities in different kinds of contaminated environments. A metaproteogenomic study was carried out on long-term adaptation of bacterial communities in metal-contaminated sediments [17]. The aim of this study was to understand the effect of a long-term metal exposure (110 years) on sediment microbial communities. In this study, the authors selected two freshwater sites differing by one order of magnitude in metal levels. The samples extracted from the two sites were compared by shotgun metaproteogenomics which resulted in a total of 69-118 Mpb of DNA and 943-1241 proteins. The two communities were found to be functionally very similar. However, significant genetic differences were observed for three categories: synthesis of exopolymeric substances, virulence and defense mechanisms, and elements involved in horizontal gene transfer. This study can be considered as a best example of advanced metagenomic approaches applied in bioremediation of different contaminated environments. Metagenomic bioremediation of different contaminations The environment where human activity abounds is being more polluted and contaminated by different kinds of toxic contaminants [18][19][20]. 
The contaminations are diverse and cover almost all sources of life including water, soil, and air which are considered the most important sources of life [21][22][23]. Metagenomic analysis is applied to multiple kinds of polluted environments primarily soil-and water-contaminated environments [24,25]. Metagenomic bioremediation of soil contaminations Soil contamination is a serious contamination [26,27] as soil is considered as one of the major sources of life [28]. Compared with other approaches of bioremediation, microbial and environmental researches are more inclined in applying metagenomic approaches to bioremediation [10,29,30]. A recent case study discusses the metagenomic analysis of arctic soils contaminated by high concentration of diesel in Canada [31]. As this study was on arctic soils, the objective framed was to trace out microorganisms and their functional genes which are abundant and active during hydrocarbon degradation at cold temperature. In this study, scientists have sequenced the soil metagenome and performed reverse-transcriptase real-time PCR (RT-qPCR) to quantify the expression of several hydrocarbon-degrading genes. Pseudomonas species were detected as the most abundant organisms in diesel-contaminated soils at cold environments. RT-qPCR assays confirmed that Pseudomonas and Rhodococcus species actively expressed hydrocarbon degradation genes in arctic biopile soils. The results of this study indicated that biopile treatment leads to major shifts in soil microbial communities which favors aerobic bacteria to degrade hydrocarbons [31]. Metagenomic bioremediation of water contaminations Water pollution has dramatically increased in comparison with the conditions of the 20th century [32,33]. Metagenomic application in the bioremediation of water contamination is one of the best ways to reduce water contaminations [34][35][36][37]. Recent multiple case studies suggest that metagenomic applications have been widely used for the identification and treatment of pollutants and contaminations in the sea, ground water, and drinking water [34][35][36][37]. A recent research performed at the Gulf of Mexico beaches precisely talks about the longitudinal metagenomic analysis of water and soil affected by deepwater horizon oil spill [34]. Approximately 7×105 cubic meters of crude oil were released into the Gulf of Mexico as a consequence of deepwater horizon drilling rig explosion, where thousands of square miles of the earth's surface were covered in crude oil. During this study, researchers performed high throughput DNA sequencing of close-to-shore water and beach soil samples before and during the appearance of oil in Louisiana and Mississippi. The sequencing results have identified an unusual increase in the human pathogen Vibrio cholera, a sharp increase in Rickettsiales sp., and decrease of Synechococcus sp. in water samples [34]. In addition, a metagenomic analysis was also performed for the bioremediation of hexavalent chromium-contaminated water that existed in fixed-film bioreactor [38]. This study talks about hexavalent chromium (Cr 6+ ) contamination from a dolomite stone mine in Limpopo Province, South Africa, causing extensive groundwater contamination. To restrict any further negative environmental impact at the site, an effective and sustainable treatment strategy for the removal of up to 6.49 mg/l Cr 6+ from the groundwater was developed. 
The microbial community shifted in relative dominance during operation to establish an optimal metal-reducing community, including Enterobactercloacae, Flavobacterium sp. and Ralstonia sp., which achieved 100% reduction. This study provides a glimpse of effective demonstration of a biological chromium (VI) bioremediation system [38]. Metagenomic strategies and tools for bioremediation Advanced scientific technology has given rise to the advancements in research tools applied in different fields of scientific research [39]. These technologically advanced inventions have driven scientific researchers towards finding out some unrevealed things of nature [40]. Multiple technologies have started getting embedded to metagenomics for a better understanding of biological and life sciences [41]. Thus, in this section, we have discussed recent major metagenomic strategies and tools applied in the process of metagenomic bioremediation. Screening of metagenomes from polluted environments Identification and screening of metagenomes from polluted environments are crucial in a metagenomic study. The microbial community interaction can be detected precisely when metagenomes are finely screened from a contaminated environment. A methodology proposed from a recent study [42] suggested an updated technology of high throughput genetic screening of a soil metagenomic library. The study was initiated by adding a typical composition of oligonucleotide probes to soil metagenomic DNA for hybridization. The pooled radiolabeled probes were designed to target genes encoding specific enzymes. The soil metagenomic DNA of fosmid clone library were spotted on high-density membranes before the addition of oligonucleotide probes. This next step was followed by affiliation of positive hybridizing spots to the corresponding clones in the library and sequencing of metagenomic inserts. When assembly and annotation were completed, new coding DNA sequences related to genes of interest were identified with low protein similarity against the closest hits in the databases. This work basically highlights the sensitivity of DNA/RNA hybridization techniques as an effective and complementary way to recover novel genes from large metagenomic clone libraries with respect to soil microbiota. Nevertheless, multiple molecular biological-based techniques [43] may also be applied during the process of metagenome extraction and screening. The basic workflow of extracting metagenomes out of contaminated soil has been explained in Fig. 1. The steps were initiated by collecting contaminated soil from the environment. The collected contaminated soil sample can be processed in two ways; one is by direct cell lysis and DNA purification and second, by separation of cells from contaminated soil and then followed by cell lysis and DNA purification. The isolated DNA is then cloned using specific cloning vectors. The cloned contaminated soil DNA is then delivered into host cells using different gene delivery systems. The multiplied host cells containing contaminated soil DNA forms a Metagenome library and these contaminated soil metagenomes were then screened. A recent study conducted screening of biosurfactant producers from petroleum hydrocarbon contaminated sources in cold marine environments. In this study, the researchers have isolated and characterized 55 biosulphant microbiota of 8 different genera including 1 Alcanivorax, 1 exiguobacterium, and 2 halomonas strains [44]. 
Fluorescence-activated cell sorting (FACS)
Fluorescence-activated cell sorting is one of the most widely used cell-sorting techniques and is applied to sort microbial cells based on fluorescence during metagenomic screening [45], at a rate of 5,000 cells per second [46]. Figure 2 shows the schematic flow of the SIGEX and intracellular biosensor methods. High-throughput screening does not require a selectable phenotype; this has led to a focus on readily visible phenotypes, such as pigments, and on the use of fluorescence-activated cell sorting. Moreover, FACS can be used to detect the expression of certain types of genes through the regulation of a fluorescent biosensor present in the same cell as the metagenomic DNA [47,48]. Hence, these screening methods will be a critical tool for rapid selection of cells from metagenomic libraries.

Metagenomic sequencing strategies
Genome sequencing technologies have been upgraded frequently [49] since the completion of the human genome at the beginning of the 21st century [50]. Multiple next-generation sequencing strategies are applied to sequence the metagenomes of different microbial communities [51,52]. Sequencing technologies began with the Sanger sequencing method, which was widely used during human genome sequencing [53,54]. Technological drift has brought next-generation sequencing techniques such as pyrosequencing [55,56], sequencing by ligation [57,58], reversible terminator sequencing [60,61], and single-molecule sequencing by synthesis [62,63], providing high-throughput reads in comparatively less time [64][65][66]. A comparative overview of recent sequencing technologies applied in metagenome sequencing is provided in Table 1. However, most metagenomics researchers prefer the pyrosequencing method for sequencing the metagenomes of microbial communities [67][68][69][70].

Prevalent metagenomes for bioremediation
Metagenomes extracted from uncultured microbial communities at multiple contaminated sites are screened and further characterized for degrading properties [71]. Microbial communities vary according to the characteristics of the source and site of contamination [72]. A metagenomic analysis of heavy metal-contaminated groundwater revealed metagenomes of γ- and β-Proteobacteria, dominated by Rhodanobacter-like γ-proteobacterial and Burkholderia-like β-proteobacterial species, from a habitat with extremely high levels of uranium, nitrate, technetium and various organic contaminants [73]. Moreover, multiple metagenome projects have been taking place around the world; we have compiled a list of environmental metagenome projects together with the top microbe having the highest percentage of presence in each metagenomic community.

Bioinformatic tools for metagenomic bioremediation
In the last two decades, bioinformatics has advanced and has simultaneously been adapted to multiple fields of science, from basic to advanced applied sciences [74]. Our previous study gave a glance at the basic applications of bioinformatics in bioremediation [75]. Bioinformatics performs multiple tasks in the field of metagenomic bioremediation, mainly during metagenomic data analysis [76,77]. A special issue on bioinformatics approaches and tools for metagenomic analysis has provided an advanced view of the comprehensive bioinformatic tools and methodologies used in metagenomics [78].
Multiple metagenomic projects are generating a large chunk of metagenomic sequence data challenging bioinformatics to develop more robust and better tools to analyze metagenomic sequence data. A recent study reveals the metagenomic characterization of soil microbial community using metagenomic approaches [79]. In this study, researchers have used 33 publicly available metagenomes obtained from diverse soil sites and integrated some state-ofthe-art computational tools to explore the phylogenetic and functional characteristics of the microbial communities in soil. Recently, multiple advancements have taken place in the field of bioinformatics with respect to metagenomic bioremediation. In this section, most of our study focuses on recent bioinformatic tools and datasets majorly used in the analysis of metagenomic data in bioremediation. A comparative overview of functions and suitability of mostly used tools for metagenomic analysis is given in Table 3. MEGAN Meta Genome Analyzer (MEGAN) is one of the most widely used software tools for efficiently analyzing large chunks of metagenomic sequence data [80,81]. This tool is most preferably used to interactively analyze and compare metagenomic and metatranscriptomic data, taxonomically and functionally. To perform taxonomic analysis, the program places reads onto the NCBI taxonomy and functional analysis is performed by mapping reads to the SEED, COG, and KEGG classifications. In addition, samples can be compared taxonomically and functionally, using a wide range of charting and visualization techniques like co-occurrence plots. This software also performs PCoA (Principle Coordinate Analysis) and clustering methods allowing high-level comparison of large numbers of samples [82]. Different attributes of the samples can be captured and used during analysis. Moreover, MEGAN supports different input formats of data and is capable of exporting the results of analysis in different text-based and graphical formats. Multiple methods of analysis, acceptance and comparison of high throughput data, robustness and being easy-to-handle are some of the features that made MEGAN as one of the most used metagenome analyzers. SmashCommunity Simple Metagenomics Analysis SHell for microbial communities (SmashCommunity) is a stand-alone metagenomic annotation and analysis pipeline that shares design principles and routines with SmashCell [83]. It is suitable for data delivered from Sanger and 454 sequencing technologies. It supports state-of-the-art software for essential metagenomic tasks such as assembly and gene prediction. It also provides tools to estimate the quantitative phylogenetic and functional compositions of metagenomes, to compare compositions of multiple metagenomes, and to produce intuitive visual representations of such analyses [84]. It provides optimized parameter sets for Arachne and Celera for metagenome assembly, and GeneMark and MetaGene for predicting protein coding genes on metagenomes. SmashCommunity also includes scripts for downstream analysis of datasets. They can generate intuitive tree-based visualizations of results using the batch access API of the interactive Tree of Life (iTOL) web tool. SmashCommunity can also compare multiple metagenomes using these profiles, cluster them based on a relative entropy-based distance measure suitable for comparing such quantitative profiles, perform bootstrap analysis of the clustering, and generate visual representation of the clustering results. 
CAMERA Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing, and sharing data about microbial biology through an advanced web-based analysis portal [85]. CAMERA holds a huge chunk of data including environmental metagenomic and genomic sequence data, associated environmental parameters, pre-computed search results, and software tools to support powerful cross-analysis of environmental samples. CAMERA works on a pattern of collecting and linking metadata relevant to environmental metagenome datasets with annotation in a semantically aware environment that allows users to write expressive semantic queries to the database. It also provides data submission tools to allow researchers to share and forward data to other metagenomic sites and community data archives. CAMERA can be best considered as a complete genome-analysis tool allowing users to query, analyze, annotate, and compare metagenome and genome data [86]. MG-RAST Rapid Annotation using Subsystems Technology for Metagenomes (MG-RAST) is an automated analysis platform for metagenomes, providing quantitative insights into microbial populations based on sequence data [87]. This pipeline performs quality control, protein prediction, clustering, and similarity-based annotation on nucleic acid sequence datasets using a number of bioinformatic tools. Users can upload raw sequence data in FASTA format; the sequences will be normalized and processed, and summaries will be automatically generated. The MG-RAST server provides several methods of access to different data types, including phylogenetic and metabolic reconstructions, and has the ability to compare metabolism and annotations of one or more metagenomes and genomes. In addition, the server also offers a comprehensive search capability. The pipeline is implemented in Perl by using a number of open-source components, including the SEED framework, NCBI BLAST, SQLite, and Sun Grid Engine. Table 3. A comparative overview of functions and suitability of mostly used tools for metagenomic analysis IMG/M Integrated Microbial Genomes and Metagenomes (IMG/M) system supports annotation, analysis, and distribution of microbial genome and metagenome datasets. IMG/M provides comparative data using analytical tools extended to handle metagenome data, together with metagenome-specific analysis [88,89]. IMG/M consists of samples of microbial community aggregate genomes integrated with IMG's comprehensive set of genomes from all three domains of life: plasmids, viruses, and genome fragments. Function-based comparison of metagenome samples and genomes is provided by analytical tools that allow examination of the relative abundance of protein families, functional families or functional categories across metagenome samples and genomes. It seems like registered users can gain more advantage out of IMG/M as the tools focus on handling substantially larger metagenome datasets, are available only to registered users as part of the 'My IMG' toolkit, and support specifying, managing, and analyzing persistent sets of genes, functions, genomes or metagenome samples and scaffolds. Summary Metagenomics is a strategic approach for analyzing microbial communities at a genomic level. This gives a glimpse towards the microbial community view of "Uncultured Microbiota". 
Bioremediation has always been adapting new advances in science and technology for establishing better environments, and metagenomics can be considered as one of the best adaptations ever. Identification and screening of metagenomes from the polluted environments are crucial in a metagenomic study. The second section emphasizes recent multiple case studies explaining the approaches of metagenomics in bioremediation. Accordingly, the third section speaks about metagenomic bioremediation in different contaminated environments such as soil and water. The fourth section explains different sequences and function-based metagenomic strategies and tools starting from providing a detailed view of metagenomic screening, FACS, and multiple advanced metagenomic sequencing strategies. The fifth section deals with the prevalent metagenomes in bioremediation giving a list of different prevalent metagenomic organisms and their respective projects. The last section gives a detailed view of different major bioinformatic tools and datasets most prevalently used in metagenomic data analysis and processing during metagenomic bioremediation. [20] Brimblecombe, P. (2011). The big smoke: a history of air pollution in London since medieval times. Routledge.
Development of TMY database in Northeast China for Solar Energy Applications

Introduction
To relieve the dual pressure of rising energy demand and growing environmental problems, renewable energy sources such as solar energy are increasingly favored. In this respect, solar radiation data, particularly typical solar radiation data, are the most basic and important parameters in many solar energy applications. In the past, several approaches for generating TMYs have been proposed. These methods are similar; the main differences lie in the number of daily indices (weather parameters) to be included and their assigned weightings [1]. In the paper authored by Hall et al. [2], 13 meteorological indices were examined; 4 of the 13 indices were of very little importance, so zero weightings were given to them. Said and Kadry [3] analyzed seven weather indices and gave them different weightings. Kalogirou [4] applied and selected 15 weather parameters. Moreover, Marion and Urban [5], Wilcox and Marion [6], and Petrakis et al. [7] also made attempts to generate TMYs for different locations with their respective weather parameters and assigned weighting factors. In recent years, a few individual studies have been performed to select the TMYs for different zones of China. Chow et al. [1] developed typical weather year files for two neighboring cities, namely Hong Kong and Macau. In the paper of Zhou et al.
[8], typical solar radiation years and typical solar radiation data for 30 meteorological stations of China were produced using only long-term daily global solar radiation records. Jiang [9] generated TMYs only for eight typical cities representing different climates of China, using nine weather parameters. Although a few attempts have been made on this subject, the work is ongoing and still immature for China. In this paper, in view of the actual situation in China, eight meteorological indices and novel assigned weighting factors are chosen and proposed in the procedure of forming TMY data. Based on the latest and accurate long-term weather data and the novel weighting factors, this paper generates the TMYs of eight cities in the three provinces of Northeast China.

Region applied and data used
In China, the related weather data are recorded and managed by the Chinese meteorological stations. Owing to new observation instruments, the relative errors of the global solar radiation measured at Chinese meteorological stations have changed from ±10% to ±0.5% since 1993. The measured weather data at the eight stations used in this study were obtained over the period between 1994 and 2009. The relevant information for the eight stations in the three northeast provinces of China is shown in Table 1.

Method used
The typical meteorological year (TMY) method, which was developed by Sandia National Laboratories, is an empirical methodology that combines 12 typical meteorological months (TMMs) from different years to form a complete year. The process adopted to select the 12 typical weather months is as follows. According to the Finkelstein-Schafer (FS) statistic [10], the cumulative distribution function (CDF) for each weather index x, which is a monotonically increasing function, is formulated as:

$$CDF(x) = \begin{cases} 0, & x < x_{(1)} \\ i/n, & x_{(i)} \le x < x_{(i+1)} \\ 1, & x \ge x_{(n)} \end{cases} \qquad (1)$$

where n is the total number of elements and i is the rank order number (i = 1, 2, 3, …, n−1). From its definition, CDF(x) is a monotonically increasing step function with steps of size 1/n occurring at x_i, bounded by 0 and 1. The FS statistic is calculated for each weather index x by the following equation:

$$FS_x(y, m) = \frac{1}{N}\sum_{i} \delta_i \qquad (2)$$

where δ_i is the absolute difference between the long-term CDF of the month and the one-year CDF for the same month at x_i (i = 1, 2, 3, …, n−1); N is the number of daily readings of the month (e.g. for January, N = 31).

Considering the characteristics of solar energy systems, eight weather indices are considered in this paper. These indices are the maximum, minimum and mean dry-bulb temperature (T_max, T_min, T_ma); the minimum and mean relative humidity (RH_min, RH_ma); the maximum and mean wind velocity (W_max, W_ma); and the daily global solar radiation (DGSR). Only eight indices are used because some data (for instance, maximum relative humidity and minimum wind velocity) are not available in Northeast China.
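As a minimal illustration of Eqs. (1) and (2) (a sketch assuming numpy; the daily values are synthetic and are not station records), the FS statistic of one index for one candidate month can be computed by comparing empirical CDFs on a common grid:

```python
import numpy as np

def empirical_cdf(values, grid):
    """Empirical CDF of `values` evaluated at each point of `grid` (Eq. 1)."""
    values = np.sort(np.asarray(values))
    # fraction of observations less than or equal to each grid point
    return np.searchsorted(values, grid, side="right") / len(values)

def fs_statistic(candidate_month, long_term_month):
    """Finkelstein-Schafer statistic of one weather index (Eq. 2)."""
    grid = np.sort(np.asarray(long_term_month))   # evaluate at the observed values
    delta = np.abs(empirical_cdf(candidate_month, grid)
                   - empirical_cdf(long_term_month, grid))
    return delta.mean()

# Illustrative use: mean dry-bulb temperature of one January vs. all Januaries.
rng = np.random.default_rng(0)
long_term = rng.normal(-12.0, 4.0, 31 * 16)   # 16 years of January daily means
candidate = rng.normal(-11.0, 3.5, 31)        # one candidate January
print(f"FS = {fs_statistic(candidate, long_term):.4f}")
```

The weighted sum described next is then simply a dot product of the eight per-index FS values with the weighting factors of Table 2.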
The weighted sum (WS) of the FS statistics for the above eight weather indices is then calculated for each year, and the five years with the smallest WS values are chosen as the candidate years. The WS is defined as:

$$WS(y, m) = \sum_{x=1}^{M} WF_x \cdot FS_x(y, m) \qquad (3)$$

where WS(y, m) is the average weighted sum for the month m in the year y, WF_x is the weighting factor for the x-th weather index, and M is the number of meteorological indices. Various sets of weighting factors have been suggested in different references. The weighting factors used in this paper, which are significant for forming the TMY data, are shown in Table 2. A large weighting factor of 0.5 is assigned to the solar radiation because the criterion is mainly used for solar energy systems and the other weather variables (e.g. dry-bulb temperature and relative humidity) are affected by solar radiation; in general, the higher the solar radiation, the higher the dry-bulb temperature. The last step is to select the typical meteorological month (TMM) from the five candidate years. This paper applies a simpler selection process introduced by Pissimanis [11]: the month with the minimum root mean square difference (RMSD) of global solar radiation is selected as the TMM. The RMSD is defined as:

$$RMSD(y, m) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(H_{y,m,i} - H_{ma}\right)^2} \qquad (4)$$

where H_{y,m,i} is the daily global solar radiation value of year y, month m and day i; H_{ma} is the mean value of the long-term global solar radiation for the month m; and N is the number of daily readings of the month.

Results and discussion
Based on the above TMY method and the data of the eight stations listed in Table 1, the TMYs of the eight stations in the three provinces of Northeast China are formed and analyzed in the following. To illustrate the selection procedure, the Shenyang station in Liaoning province of Northeast China is chosen as an example. In addition, to reflect the seasonal changes, January and July are selected as the typical months for winter and summer, respectively.

For each calendar month, the short-term and long-term CDFs of each index are compared, calculated by Eqs. (1) and (2). Taking mean dry-bulb temperature and daily global solar radiation as examples, the comparison between the short-term CDFs and the long-term CDFs for the Shenyang station is given in Figs. 1 and 2. In general, the short-term CDFs, showing the typical "S"-type distribution, follow their long-term counterparts quite closely. In Fig. 1, using the January of Shenyang as an example, the CDF of mean dry-bulb temperature (Tma) for 2003 is most similar to the long-term CDF (smallest value of the FS statistic), while the CDF of Tma for 2000 is least similar (largest value of the FS statistic); the CDF of Tma for the TMM of 2009 lies between the two. Likewise, from Fig. 2, the CDFs of daily global solar radiation (DGSR) for January 1997 and July 2008 are closest to the long-term CDFs for January (Fig. 2(a)) and for July (Fig. 2(b)), while the DGSR CDFs for January 2008 (Fig. 2(a)) and July 2000 (Fig. 2(b)) are most dissimilar. It is also found that the years considered representative for a particular index are not necessarily representative for another index in the same month; similarly, the years considered typical for a certain month are not necessarily typical for another month for the same weather index. For example, in Fig. 1(b), the CDF of Tma for July 2005 follows the long-term CDF remarkably well, whereas in Fig. 2(b), the CDF of DGSR for July 2005 is not in the best agreement with the long-term CDF. Also, for instance, in Fig.
1(a), the CDF of Tma for January 2009 compares well with the long-term CDF, whereas in Fig. 1(b), the CDF of Tma for July 2009 is the worst with respect to the long-term CDF.

The FS statistic is estimated and examined for each weather index and for each month of every year in the database. Owing to space limitations, only the FS values of daily global solar radiation for the Shenyang station are shown in Table 3. It is found that the FS statistic (e.g. the FS statistic of DGSR in Table 3) often varies from month to month and differs from one index to another. The weighted sums of the FS statistics for the Shenyang station, with the five candidate years of each month, are shown in Table 4. The RMSD values of daily global solar radiation are then obtained from Eq. (4). The RMSD results of the Shenyang station, with the minimum RMSD value for each month in bold, are shown in Table 5. The smallest RMSD values for each month vary between 1.3528 and 6.5429 MJ/m2. The month with the smallest RMSD is selected as the TMM, and finally the 12 TMMs are used to form a TMY. The TMY for the Shenyang station can be found in Table 6. This database would be useful for the utilization of solar energy systems in Northeast China.

Table 6 shows a summary of the TMYs selected for the eight stations in the three provinces of Northeast China. In order to know which years tend to follow the 16-year (1994-2009) long-term weather patterns more closely than the others, the TMYs acquired for the eight stations in Northeast China are analyzed and investigated. Fig. 3 shows the year selection frequency for the TMYs derived from the 1994-2009 database. It can be found that 2004 and 2007 are the most and least frequently selected years, respectively. In Fig. 3, the frequency of occurrence of the year 2004 is up to 12.5%, which means that the typical data derived from 2004 are in the closest agreement with the long-term (1994-2009) data. Additionally, the accuracy of the TMY data is excellent on a monthly basis. The monthly average values of the long-term measured data and the typical solar radiation derived from the TMY data for the eight cities in the three provinces (Heilongjiang, Jilin, Liaoning) of Northeast China are compared in Fig. 4(a). As can be seen from Fig. 4(a), the deviation from the diagonal between the TMY data and the recorded data is small. More explicitly, the corresponding mean absolute percentage errors (MAPE) between the monthly mean values of the long-term measured data and the typical solar radiation data from the TMY data, for each month and for the eight stations, are shown in Fig. 4(b). From Figs. 4(a) and (b), the TMY data generally show good agreement with the long-term data. In particular, the TMY data for the Fuyu station are the best: the R value for the Fuyu station is up to 0.9983, and the MAPE lies between 0.03% and 5.04%.
Conclusions
The generation of TMY data is essential and important for solar energy utilization. In this paper, the TMY method using the Finkelstein-Schafer statistic and novel assigned weighting factors is applied. Typical meteorological years for eight stations located in the three provinces of Northeast China are formed based on recent and accurate 16-year (1994-2009) recorded weather data. It is found that the cumulative distribution functions of each weather index for the selected TMMs tend to follow their long-term counterparts well. It is also seen that the typical data from 2004 are in the closest agreement with the long-term data. In addition, a comparison between the monthly data from the TMYs and the long-term recorded data for this region shows that the TMYs perform well on a monthly basis. From the analysis and results, it is concluded that the solar energy resource in the three provinces of Northeast China is abundant and has great potential. It is believed that the TMY data developed in this paper will exert positive effects on energy-related scientific research and engineering applications in Northeast China. Future research will focus on TMY data on a larger regional scale.

Fig. 1. Comparison of individual monthly CDFs with the long-term CDF (T_ma) in January (a) and July (b) for the Shenyang station
Fig. 4. Comparison of monthly mean values of long-term solar radiation data and solar radiation from TMY data (units: MJ/m2) (a), and the corresponding MAPE (b) for each month and for the eight stations in Northeast China
Table 1. Geographical locations and data period
Table 2. Weighting factors for the FS statistics (T_max, T_min, T_ma, RH_min, RH_ma, W_max, W_ma, DGSR)
Table 3. Summary of FS statistics of DGSR for the Shenyang station
Table 4. Summary of WS of the FS statistics for the Shenyang station (bold numbers correspond to the five candidate years of each month)
Table 5. RMSD (MJ/m2) of DGSR for the five candidate years at the Shenyang station (bold numbers correspond to the minimum value in each month)
Table 6. Summary of the TMYs selected for eight stations in three provinces of Northeast China
Construction and Analysis of Macroeconomic Forecasting Model Based on Biclustering Algorithm
In recent years, with the globalization of information and the growth of the Internet, the phenomenon of information overload has appeared alongside the production of large amounts of data, and data mining has emerged as the times require. Clustering technology is a representative data mining technology. Cluster analysis has been applied to data mining and has achieved significant results. However, with the deepening of people's understanding, it has been found that this either/or classification is increasingly unsuitable for fuzzy classification problems. Therefore, fuzzy clustering technology, which combines the strengths of machine learning and fuzzy mathematics, has become the new darling of clustering technology and has achieved outstanding results in clustering accuracy. How to obtain a more accurate division from the vast economic statistics of the Statistical Yearbook has become a difficult problem, especially when there is no prior information. Based on China's macroeconomic statistics, this paper applies the biclustering method to the field of economic zoning for the first time, researches and predicts the economic region division plan of China's provinces and the economic growth model of each province, and compares the results with those of traditional hierarchical clustering methods. The research results show that the hierarchical clustering algorithm is relatively intuitive and easy to apply to the overall analysis of the national economic divisions, while the result of the biclustering algorithm has its unique advantages in mining the commonalities of the provinces under certain attribute sets.

Introduction
Macroeconomic zoning is the foundation of regional economic research and provides the most basic unit used to analyze regional gaps, conduct regional adjustments, and promote regional development. In the process of regional socioeconomic development, the scientific division of socioeconomic regions is a prerequisite for the rational formulation of regional economic development plans. China is a typical large developing country, with large differences in the economic, social, resource, and environmental conditions of its regions. How to divide China's overall macroeconomic regions by analyzing macroeconomic statistical data that have both temporal and spatial distribution characteristics over several years is the problem on which this article focuses. The division methods of economic regions are generally divided into traditional classification methods and numerical classification methods. Traditional classification methods are usually based on experience and relevant professional knowledge for qualitative classification. Although they can achieve certain results, the results are relatively general, it is difficult to give a detailed description of the differences and connections between the research objects, and sometimes the researcher's subjective intent affects the objectivity of the classification. Numerical classification methods can weaken the subjectivity and arbitrariness of traditional classification methods to a certain extent and have become more and more widely used in the study of economic zoning. Many scholars at home and abroad have used various numerical analysis methods for national economic zoning, and in-depth research work has been carried out on economic zoning.
However, differences between methods and sample data will affect the final classification results [1][2][3][4][5]. Therefore, when solving specific problems, we need to combine subjective judgment with objective facts to give a more reasonable analysis. The biclustering algorithm is shown in Figure 1. The most notable feature of traditional classical taxonomy is its either/or character; that is, the same thing belongs to, and only belongs to, a certain category, and it can neither belong to no category nor belong to more than one category at the same time. Precisely because the result of this kind of classification is clear and unambiguous, it is also called hard classification [6,7]. However, in real life, people often use inaccurate but meaningful language in their daily communication, that is, vague language. Using computer technology to recognize and analyze such vague language and information is very difficult. The famous American cybernetics expert, Professor L. A. Zadeh, fully recognized this contradiction and put forward the core idea of fuzzy mathematics, which is to give clear and accurate mathematical descriptions of fuzzy concepts; this gave rise to fuzzy mathematics [8,9]. Since the 1950s, Western countries have begun to standardize regional planning and regional policy work. John analyzed in detail the concept of regional planning at the three levels of country, metropolis, and city. The United States was the first to implement standardized regional divisions. In 1969, in order to meet the standardization needs of regional analysis and policy makers, the US Bureau of Economic Analysis (BEA) delineated detailed standard economic zones based on county data and the division of metropolitan areas, and its division method became an important reference for other countries. Bongaer S. D. divides regions according to functional consistency within the economic zone. This division method based on functional consistency makes the divided regions easier to compare and addresses the unfairness that the many previous types of economic division caused for interregional policy making. Davis believes that regional planning should play a central role in coastal management, and that federal guidelines for special regional management plans should support advanced regional planning through coastal management actions. Bryan proposed a systematic regional planning method, which uses integer programming within the framework of multicriteria decision analysis to set priorities for vegetation management and vegetation restoration in order to achieve the goal of multiple natural resource management. R. I. Chman used principal component analysis and cluster analysis prefiltered by core principal components to regionalize and classify sea level pressure. The results show that cluster analysis filtered by the core principal component method captures the nature of the input data more accurately than core principal component analysis alone, and the clustering calculation after the core principal component filter is more efficient [10][11][12][13][14]. After China changed from a planned economy to a market economy, more and more scholars realized the important role of numerical classification in economic regionalization.
On the basis of China's existing administrative regions, Liu Dongliang used the clustering methods of multivariate analysis to discuss China's large-scale economic regionalization at the national level. Through principal component analysis and cluster analysis, Liu Qinpu concluded that the level of economic development in Henan Province can be divided into four levels, which are spatially represented as four geographic regions. Liu Zheng used the basic ideas and principles of the AHP model to construct an index system for ecological economic zoning and established a research method for ecological economic zoning, taking Tanghai County's national ecological demonstration area as an example. Zheng Dexiang took economic indicators reflecting forest location as factors, applied a self-organizing competitive network to establish a model, and, after continuous learning and testing, used the resulting network simulation to carry out economic zoning of the forest land in 68 counties (cities) in Fujian Province. Based on an analysis of traditional methods for analyzing regional economic differences, Zhang Yanwen proposed a new method for analyzing spatial differences in regional economies based on spatial clustering, hierarchical maps, and axis analysis and used data on per capita GDP in Northeast China in 2000 for an empirical analysis. Jiang Ling used the MFPT matrix method, established on the basis of short- and medium-distance passenger flow, to divide economic function zones and tested the effectiveness of the zoning plan. Chen Shuangying applied SPSS to an empirical analysis of the level of circular economy development in various regions of China, focusing on the process and results of clustering using systematic clustering. Peng Ping used the gravity model to study the economic zoning of 91 counties and cities in Jiangxi Province, divided the province into four major economic regions (Nanchang–Jiujiang, Jingdezhen–Yingtan, Xinyu–Pingxiang, and Ganzhou), and put forward scientific planning measures for the coordinated development of these economic regions. Li Xuemei and Zhang Suqin analyzed the application of cluster analysis technology in data mining and explained the implementation process of cluster analysis with an example of regional division in macroeconomics. Based on the principles of ecological economics, Zhang Yongming selected 37 characteristic indicators suitable for classifying the ecological economic system of Shandong Province and used principal component analysis and systematic clustering to divide the 17 prefecture-level cities of Shandong Province into three major ecological economic categories and 7 subcategories. Lin Aiwen proposed a grey clustering method based on weighted common origin; this method uses segmentation and the common origin to calculate the clustering function and, after reasonable weighting, distinguishes each clustering element under its clustering index in order to evaluate regional natural resources, selecting Hubei Province for a case analysis [15][16][17][18]. The rapid development and progress of fuzzy clustering theory has spurred the collaborative development of related fields, especially computer-based intelligent implementations of fuzzy clustering technology [19][20][21][22].
In this paper, the biclustering method is applied to the field of economic zoning for the first time, the characteristics of the biclustering method are analyzed in detail, and the results are compared with those of traditional clustering methods. The study found that the biclustering method has unique advantages in mining the correlations between provinces, especially local correlations.

Biclustering Algorithm. The biclustering algorithm can be summarized as one that effectively identifies sets of objects that show similar behavior patterns over a specific set of attributes. The biclustering algorithm is also widely used in many other fields, especially the analysis of gene expression data in bioinformatics [23][24][25][26][27][28]. The hierarchical method creates a layered structure by decomposing a given set of data objects. According to how the hierarchical decomposition is formed, hierarchical methods can be divided into two types, bottom-up and top-down. The bottom-up (agglomerative) hierarchical clustering method initially treats each object as a cluster by itself and then merges these original clusters to construct larger and larger clusters, until all objects are aggregated into one cluster or certain termination conditions are met. Most hierarchical clustering methods belong to this type but differ in how they define and describe the distance between objects within a cluster. The top-down (divisive) hierarchical clustering method [29][30][31] is the opposite of the bottom-up method. It first regards all objects as the contents of a single cluster, which is then continuously decomposed into smaller and smaller, but more and more numerous, clusters, until every object constitutes a cluster by itself or a certain termination condition is satisfied (such as a threshold on the number of clusters or on the shortest distance between the two closest clusters). The disadvantage of the hierarchical method is that a decomposition or merge cannot be undone once it has been performed. This feature is also useful, because there is no need to consider the combinatorial explosion caused by different options when decomposing or merging, but it also means that the method cannot correct its own wrong decisions. For the cluster analysis of the data, this paper adopts the hierarchical clustering method embedded in the EisenbergCluster3.0 software [32][33][34]. The software provides four core hierarchical clustering algorithms, namely centroid clustering, single-linkage clustering, complete-linkage clustering, and average-linkage clustering. After comparison, the average-linkage clustering algorithm was selected, in which the distance between class A and class B is defined as the average of the pairwise distances between the objects in A and the objects in B. Given a matrix A with n rows and m columns, the element a_ij is given a specific value that represents the relationship between row i and column j; such a data matrix A is defined by its row set and column set. Different biclustering algorithms produce different types of biclusters, as follows: (1) constant-value biclustering (Figure 2).
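In its standard form, consistent with the average-linkage description above (with d(a, b) denoting the assumed pairwise distance between two objects), the distance between classes A and B can be written as

\[ d(A, B) \;=\; \frac{1}{|A|\,|B|} \sum_{a \in A} \sum_{b \in B} d(a, b), \]

and at each agglomerative step the two clusters with the smallest d(A, B) are merged.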
The problem solved by the last biclustering method analyzed here is to find biclusters with consistent evolution, which is also the most general biclustering model. These methods treat the elements of the matrix as symbolic values and try to find subsets of rows and columns with coherent behavior. A biclustering algorithm assumes one of the following situations: there is only one bicluster in the matrix (Figure 3(a)), or the matrix contains K biclusters, where K is the number we expect to identify. Although most algorithms assume that there are several biclusters in the matrix, general algorithms only expect to find one bicluster at a time. In fact, although some algorithms can find more than one bicluster, the target bicluster is usually the best one according to some index test. When the biclustering algorithm considers that there is more than one bicluster, the following biclustering structures can be obtained (Figures 3(b)-3(i)): (b) diagonal-matrix biclustering (rows and columns are reordered to form diagonal matrix blocks); (c) nonoverlapping biclusters with a checkerboard structure; (d) row-specific biclusters; (e) column-specific biclusters; (f) nonoverlapping biclusters with a tree structure; (g) nonoverlapping, nonspecific biclusters; (h) overlapping biclusters with a hierarchical structure; (i) overlapping biclusters placed randomly. The biclustering algorithm has two different goals: to identify one bicluster or a given number of biclusters. Some algorithms try to identify one bicluster at a time; for example, Cheng and Church, and Sheng et al., identify one bicluster each time and repeat the process to find the remaining biclusters. Lazzeroni and Owen also tried to find biclusters in an iterative process to obtain the plaid model. There are also algorithms that try to find all biclusters at the same time. FLOC is one such method: first, each row or column is added to an initial set of biclusters with independent probability, and then the biclusters are improved iteratively. It has been widely and successfully applied. One of the outstanding advantages of fuzzy theory and technology is that it can better describe and imitate the human way of thinking, so that disciplines that in the past seemed to have nothing, or little, to do with mathematics can use quantitative and clear mathematical descriptions to build models. Considering the complexity of the problem, some heuristic algorithms are used to solve it. These algorithms can be divided into the five following categories: (1) iterative row and column clustering; (2) subsystem methods; (3) greedy iterative solution; (4) exhaustive methods; (5) identification of distribution parameters. QUBIC is a qualitative biclustering algorithm proposed by Li Guojun and Ma Qin. Compared with other current methods, this algorithm can solve the biclustering problem under a more general model and basically overcomes all the difficulties faced by current biclustering approaches. A core feature of the QUBIC algorithm is that it can identify all statistically significant biclusters. Another important feature is that it can find the most general biclustering model, that is, biclusters with scaling patterns. At the same time, QUBIC is very efficient and can solve the biclustering of thousands of objects under thousands of conditions within a few minutes of desktop CPU time. The method has been applied successfully in the field of bioinformatics.
There are three main steps in the QUBIC algorithm. The first step is to construct a representation matrix using a discretization method based on the idea of outliers. A qualitative method is used to represent each value, so that a new matrix is obtained that represents the object data set under multiple attributes. The purpose of this is to construct a biclustering model in the general sense effectively. Since the objects in this article are the macroeconomic data of the provinces and the attributes are macroeconomic indicators, applying QUBIC's original discretization (in which each row is discretized separately) would make the economic meaning of the clusters hard to interpret, so the discretization step of the algorithm has been improved as follows; the scheme is illustrated in Figure 4. When constructing the representation matrix, q is an optional parameter. For object i under attribute j of the initial data matrix (m rows and n columns), the values of the attribute are arranged in ascending order: values that fall in the middle of this ordering are considered invalid for attribute j, values in the lowest part of the ordering (as determined by q) are treated as low expression and represented by −1 by default, and values in the highest part are treated as high expression and represented by 1 by default. Of course, the data with high (low) expression can be further subdivided according to the size of the value, using 1 (2) to represent high (second-highest) expression and −1 (−2) to represent low (second-lowest) expression (see the case analysis below for details). At present, fuzzy clustering has made outstanding achievements in computer simulation, e-commerce, and other high-tech areas; similarly, fuzzy clustering analysis has been successfully applied in economic management and environmental science and in traditional fields such as biology, agriculture, and medical care, with good results. The matrix constructed above is called a representation matrix, in which the expression level of each object under any attribute is represented by an integer value. Two objects are considered to have related expression patterns under a subset of attributes if the corresponding integers in the two corresponding rows of the representation matrix are equal. Here, the correlation level between two objects under a specific condition set is defined as the number of attributes that satisfy this condition. In practical applications, we are also interested in the completely opposite state, in which the integers in the corresponding columns have the same absolute value but opposite signs. If every pair of rows in a submatrix is either positively or negatively correlated, the submatrix is said to be feasible, and the biclustering problem is to find all locally optimal submatrices in a given matrix. The second step is to build a weighted graph model. For a given representation matrix, construct a weighted graph G with the objects as vertices and an edge connecting each pair of objects, where the weight of each edge is the correlation level of the corresponding two objects. The greater the weight, the stronger the correlation between the corresponding two rows.
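As an illustration of the column-wise discretization described above, the following minimal Python sketch assigns −1, 0, or +1 by column quantiles; using the q-quantile and (1 − q)-quantile as the cutoffs is an assumption made for this sketch, not necessarily the exact rule of the improved scheme.

import numpy as np

def discretize_columns(data, q=0.06):
    """Quantile-based discretization of each attribute (column) into {-1, 0, +1}.

    Rows are objects (provinces), columns are attributes (indicators). Values
    below the q-quantile of a column are treated as low expression (-1), values
    above the (1 - q)-quantile as high expression (+1), and the rest as 0.
    """
    data = np.asarray(data, dtype=float)
    rep = np.zeros(data.shape, dtype=int)   # the representation matrix
    for j in range(data.shape[1]):
        col = data[:, j]
        lo, hi = np.quantile(col, q), np.quantile(col, 1.0 - q)
        rep[col <= lo, j] = -1
        rep[col >= hi, j] = 1
    return rep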
Intuitively, the objects in a bicluster should constitute a subgraph of G with extremely large weight because, within the conditional subset, these objects have a high degree of correlation and the weight of each involved edge is therefore large. However, it is worth noting that not all subgraphs with extremely high weight correspond to a bicluster. There is no polynomial-time algorithm for identifying all maximum-weight subgraphs in a weighted graph, because the problem of identifying the maximum clique in a graph is a special case of this problem, and the maximum clique problem is a well-known NP-complete problem. Therefore, QUBIC does not solve the subgraph-finding problem directly; instead, a heuristic algorithm is constructed on the basis of the representation matrix to solve the biclustering problem. In the third step, based on the constructed model, a heuristic algorithm is used to find the subgraphs with extremely large weights that correspond to biclusters. At the beginning, an edge with the largest weight is taken as the seed to construct an initial bicluster, and, starting from the selected seed, the bicluster is iteratively expanded within the matrix. Consider the representation matrix M discussed above, which records the expression levels of the objects under the attributes, and the corresponding weighted graph G with vertex set V and edge set E; the weight of each edge is the number of columns in which the two objects have the same nonzero integer. The algorithm iterates over the edge set S arranged in descending order of weight, considering edges e = (g_i, g_j) for which at least one of g_i and g_j is not in a previously determined bicluster. The basic idea is to iteratively expand the bicluster in the vertical and horizontal directions starting from the selected seed. When it can no longer be expanded, that is, when the objective function being maximized reaches its maximum, the algorithm outputs the discovered submatrix (I, J) of M, where I is the row subset and J is the column subset. This algorithm has some unique and powerful features: (1) it will not miss a meaningful bicluster; if for some reason the construction of a significant bicluster is not completed, so that the bicluster is not recognized, this is corrected later by selecting other edges as seeds; (2) the algorithm can find not only objects with related expression but also objects whose expression is exactly the opposite; (3) although this is a greedy algorithm, because it traverses all the seeds, it will not miss the optimal solution.

Research on Macroeconomic Zoning Based on the Biclustering Algorithm. This paper collects and organizes the macroeconomic data of 31 provinces, municipalities, and autonomous regions in China for the 9 years from 1999 to 2007 and focuses on 17 selected macroeconomic indicators. The selection of indicators is mainly based on an indicator design that reflects the sustainable development of China's economic regions. The data for each year come from the National Statistical Yearbook. Some indicators are missing in some years, but this does not affect the clustering results as a whole. At present, commonly used clustering methods cluster either the rows or the columns of the data matrix, while the biclustering method clusters in both the row and column dimensions at the same time.
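The weighted-graph construction and the ordering of candidate seeds described in the second and third steps can be sketched as follows (a minimal illustration rather than the QUBIC implementation itself; the iterative expansion of each bicluster is omitted).

import numpy as np
from itertools import combinations

def seed_edges(rep):
    """Edges of the weighted graph, heaviest first.

    The weight of an edge (i, j) is the number of attributes (columns) in which
    objects i and j carry the same nonzero integer in the representation matrix,
    i.e. their correlation level. The heaviest remaining edge is taken as the
    seed of the next bicluster and expanded row- and column-wise.
    """
    edges = []
    for i, j in combinations(range(rep.shape[0]), 2):
        w = int(np.sum((rep[i] == rep[j]) & (rep[i] != 0)))
        if w > 0:
            edges.append((w, i, j))
    edges.sort(reverse=True)   # descending weight order
    return edges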
Because macroeconomic data have both temporal and spatial characteristics, it is necessary to reduce the dimensionality of the data first to make them suitable for biclustering analysis. In this paper, each indicator combined with a year label is used as a new indicator, thereby reducing the data to the two-dimensional space of new indicators and provinces. This dimensionality-reduction method avoids the loss of information caused by averaging an index over a period of time. To make the data comparable, they were normalized with the EisenbergCluster3.0 software, and the hierarchical clustering method embedded in the same software was used to analyze the data; it shows the clustering results for the various provinces and cities, and the different industries are compared in Figure 5. In the first category, Beijing, Tianjin, and Shanghai are China's three municipalities directly under the Central Government (excluding Chongqing), and all three are economically developed regions. Taking regional GDP as a representative economic indicator, the average GDP growth rates of Beijing, Tianjin, and Shanghai from 1999 to 2007 were 10.8%, 11.4%, and 9.4%, respectively, which are relatively similar values, and all three maintained steady growth. Under other indicators (the gross output value of the primary industry, the employed population, and primary-industry employment), the three cities also show great similarity, which is consistent with the actual situation of the three municipalities. Take Beijing as an example: as the national political, economic, and cultural center, agriculture accounts for only a small proportion of its regional economy. Statistics show that, in 2009, the ratio of the three industries in Beijing was 1 : 23.2 : 75.8, with the tertiary industry already accounting for more than 75% (Figure 6). This reflects the fact that the clustering method looks for a global optimum, while the biclustering method produces local patterns and therefore looks for local optima. The second category covers nine provinces, from south to north, in the central and eastern regions of China. Because of their high values under several attributes, such as gross domestic product, secondary industry output value, and local fiscal revenue, this category can be identified as a collection of economically developed provinces. From 1999 to 2007, the average contribution rate of these nine provinces to the national GDP was 51.77%, more than half of the national total. Through more in-depth comparison and analysis, it can be found that the three provinces of Liaoning, Heilongjiang, and Hubei are closer to one another and can be classified into one subcategory; their average GDP growth rates from 1999 to 2007 were 18.2%, 16%, and 15.5%, respectively. Fujian, Zhejiang, Jiangsu, Guangdong, and Shandong are more similar to one another and can be placed in a second subcategory, with average annual GDP growth rates of 18%, 27.8%, 26%, 29.7%, and 26.5%. Shanxi forms a subcategory of its own, with an average annual GDP growth rate of 31.2%. Generally speaking, the provinces in this category maintained relatively rapid growth, which basically represents the overall speed and level of China's economic development. The prediction is shown in Figure 7. The third category contains five provinces: Hebei, Henan, Anhui, Hunan, and Sichuan. In terms of geographic location, these five provinces are basically located in the central region of China.
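The indicator-plus-year reshaping and normalization described above can be illustrated with a minimal Python/pandas sketch (the EisenbergCluster3.0 software was actually used for normalization; the long-format column names province, year, indicator, and value below are hypothetical).

import pandas as pd

def province_by_indicator_year(df):
    """Reshape long-format records into a provinces x (indicator, year) matrix.

    Each macroeconomic indicator combined with its year label becomes one new
    column, so no information is lost by averaging over the study period, and
    every column is then z-scored so that indicators on different scales are
    comparable.
    """
    wide = df.pivot_table(index="province",
                          columns=["indicator", "year"],
                          values="value")
    wide.columns = [f"{ind}_{yr}" for ind, yr in wide.columns]
    return (wide - wide.mean()) / wide.std()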
The main feature of this category is that the primary industry accounts for a large proportion of each province's economy, and these are also provinces with a large labor force. Under the four attributes of urban per capita disposable income, rural per capita disposable income, urban residents' living consumption expenditure, and rural residents' living consumption expenditure, the values are low, indicating that the living standards of the people in this type of economic region are not high. When a clustering algorithm is used, each object in an object cluster is defined by all attributes, and each attribute in a similar attribute cluster is characterized by the activity of all objects. However, when the biclustering algorithm is used, each object in a bicluster is determined only by a certain subset of attributes, and each attribute in the bicluster is likewise determined only by a certain subset of the objects. The fourth category consists of seven provinces and cities in the central and western regions of China: Jiangxi, Shaanxi, Guangxi, Guizhou, Chongqing, Yunnan, and Gansu. These provinces and cities are among those with relatively slow economic development in China. The average contribution rate of these seven provinces to China's GDP from 1999 to 2007 was only 11.5%, but their average economic growth rate was 20%, which shows that the economic development of these provinces and cities is in a good state. The data are compared in Figure 8. The fifth category covers almost all of the border provinces and regions in the north and southwest of China, where the speed of economic development is relatively slow. Among them, the three provinces of Inner Mongolia, Jilin, and Xinjiang are more similar to one another and belong to the provinces with a shortage of labor, while the remaining attributes show no obvious characteristics; the share of these three provinces in China's GDP is between 1% and 2%, and their economic conditions are average. The other two provinces, Hainan and Tibet, have lower values under the attributes of gross national product, local fiscal revenue, and employed population, indicating that their level of economic development is still relatively low. These four provinces account for less than 1% of China's GDP and are economically backward areas. Therefore, the purpose of the biclustering algorithm is to find a subset of common objects and a subset of attributes by clustering in the row and column directions at the same time, instead of clustering in the two dimensions separately. The QUBIC algorithm is a biclustering algorithm that, through program analysis of the data, generates matrices containing different numbers of attributes and objects; the objects within the same matrix have greater similarity under the attributes of that matrix and are clustered into one category. This method therefore removes the limitation that clustering must involve all attributes and can observe economic phenomena from a relatively novel perspective, thereby discovering their regular characteristics. A total of 9 biclusters were obtained by running the QUBIC software on the standardized data. Owing to space limitations, two typical biclusters were selected for analysis. The convergence is shown in Figure 9. The first of the selected biclusters is a matrix with a scale of 81 (9 × 9).
The 9 provinces and cities are grouped under a subset of the attributes, namely the urban per capita disposable income of particular years, the living consumption expenditure of urban residents, and local fiscal revenue. Their expression values under these attributes are relatively high, and they are therefore clustered into one category, indicating that the living standards of the people in these provinces and cities are better. In the results of the hierarchical clustering described earlier, these nine provinces and cities belong to different categories. By comparison, the biclustering algorithm can find those provinces and municipalities that are similar under more specific attributes, thereby uncovering details hidden behind general economic phenomena. The second selected bicluster simultaneously picks out provinces and cities with high and with low expression values under the two attributes of tertiary industry output value and government fiscal expenditure. Statistics show that Jiangsu, Shandong, and Guangdong contributed 8.6%, 8.1%, and 10.8%, respectively, of the national output value of the tertiary industry from 1999 to 2007; they are three provinces with a very developed tertiary industry. In contrast, Guizhou, Tibet, Hainan, Qinghai, and Ningxia had average contribution rates of 0.9%, 0.17%, 0.5%, 0.7%, and 0.4%, respectively, to the output value of the tertiary industry from 1999 to 2007 and are relatively backward provinces. The clustering result therefore conforms to the actual situation. Such biclustering allows similar and opposite situations to be compared conveniently, which is a highlight of the QUBIC algorithm. Since the previous parameter setting only screens out objects with significantly high or low expression values, provinces and municipalities whose expression values are neither significantly high nor significantly low are rarely involved. Therefore, after adjusting the parameters, objects whose expression values are of the second-most-significant level can also be filtered out. Considering a more detailed division when the data are discretized, running the QUBIC software yields a total of 16 biclusters, of which two are selected for analysis. Under the conditions of primary industry output value, industrial output value, and construction industry output value, Anhui, Hunan, Sichuan, Fujian, Hubei, Heilongjiang, and Shanghai are the stronger regions, while Inner Mongolia, Gansu, and Xinjiang are the weaker regions. Under this parameter setting, clusters with both low and sub-low expression within an attribute set can also be found. For example, in bicluster 4, Shanxi, Guizhou, and Gansu show sub-low expression under the attributes of secondary industry output value and local fiscal expenditure and are the three provinces with low expression under the consumption expenditure of rural residents. Through analysis of the results obtained with the above two parameter settings, it can be seen that biclustering offers a high degree of flexibility in the study of economic zoning, and the scale and composition of the clusters can be controlled by adjusting the parameters.

Conclusion. In summary, this article uses clustering and biclustering algorithms to analyze China's macroeconomic data and obtains some meaningful results. First, with regard to the research method, this paper uses the hierarchical clustering method of the clustering family to carry out the cluster analysis.
On the other hand, this paper uses the QUBIC algorithm for the biclustering analysis, which removes the restriction in ordinary clustering that all conditions must participate. This also resolves the "either/or" problem in the classification of objects: the same object can belong to different categories under different conditions. At the same time, this method can simultaneously find objects with completely opposite expression, which is of great significance in economic analysis. Second, the data processing method in this article also differs from previous work. Most studies of economic zoning process data over a period of time by segmented averaging, and the results are coarse. The dimensionality-reduction method used here makes the analysis results specific to each year, making the results of economic zoning more detailed. Finally, based on the empirical analysis, the following conclusions can be drawn: (1) the result of the hierarchical cluster analysis gives a division of China's overall economic regions; (2) QUBIC produces clustering results different from those of the previous method; (3) the development model of the high-expression regions provides a good template for the low-expression regions. Data Availability: The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest: The author declares that there are no conflicts of interest.
A Childhood Farm Environment Protects from Allergic Sensitization until Middle Age but Not from New-Onset Sensitization in Adulthood: A 15 Year Longitudinal Study Data are insufficient on the protective effect of a farm environment in childhood regarding sensitization in middle age and new-onset sensitization in adulthood. Skin prick test (SPT) and questionnaire data from the Northern Finland Birth Cohort 1966 study (NFBC66) were used to investigate sensitization at age 46 years in relation to the childhood living environment. A subpopulation of 3409 participants was analyzed to study factors related to new-onset sensitization between the ages of 31 and 46 years. Complete SPT data were available for 5373 cohort members at age 46. Professional farming by parents (odds ratio (OR) 0.54; 95% confidence interval (CI) 0.43–0.68) and keeping of farm animals (OR 0.53; 95% CI 0.43–0.66) in infancy were associated with a lower risk of sensitization at age 46. Sensitization (OR 0.58; 95% CI 0.47–0.72) and polysensitization (OR 0.43; 95% CI 0.32–0.57) were less common in those who lived in a rural area in infancy compared to a city area. The childhood living environment had no effect on new-onset sensitization between ages 31 and 46. We conclude that living on a farm or in a rural environment in childhood had a protective effect on sensitization even in middle age, but these factors did not protect from new-onset sensitization in adults. Introduction Allergic diseases are a major health challenge worldwide [1], and the prevalence of atopic sensitization in adults has been reported as still increasing in industrialized countries, such as those in Northern Europe, in the current millennium [2,3]. Allergic diseases have a complex background in which genetics, environmental factors, and timing play a role. Factors early in life and during the fetal period are thought to be important influences on the risk of sensitization later in life [4,5]. Although the farm effect during childhood, currently considered an indicator of exposure to a diversity of microbes [6][7][8], is an important protective factor against future aeroallergen sensitization in several studies [9][10][11][12], most of these studies reach only until early adulthood. In a cross-sectional study on sensitization and asthma conducted in Finland, a childhood farm environment had a protective effect on sensitization to pollens and cat in subjects aged 18-26 years [9]. In another Finnish study of unselected adults, a farming environment in childhood was found to protect from sensitization at the age of 31 [10]. In Denmark, the effect of a childhood living environment on sensitization was studied in 1236 men aged 30 to 40 years. This study found that not only the overall prevalence of sensitization but also specific sensitization decreased with a decreasing degree of urbanization, being lowest in the farm group [11]. The effect of the current living environment in adulthood on sensitization has also been investigated [13]. In a study conducted in Finland, women who lived on a farm were less likely to have sensitization to pollens and cat than those who did not, and this association was strongest in those who had also lived on a farm in childhood [13]. Currently, there are limited data on the effect of early farm exposure on sensitization to aeroallergens later in adulthood. Previous studies have not analyzed these effects with an extensive prospective follow-up, nor have they analyzed the effect of a childhood environment on new-onset sensitization in adulthood.
The aims of the present study were to examine the effect of childhood farm exposure on the risk of sensitization and polysensitization at age 46 years and the effect of a childhood living environment on new-onset sensitization between the ages of 31 and 46. Study Population Our study data originated from the Northern Finland Birth Cohort 1966 (NFBC66), a longitudinal research program in the two northernmost provinces in Finland. The NFBC66 initially included all 12,058 children whose expected date of birth fell in the year 1966. The children of NFBC66 were followed regularly since their birth. The follow-up was conducted through health questionnaires and clinical investigations including skin examination [14,15]. To date, the cohort has been subjected to four main follow-up visits: at birth, and at 14, 31, and 46 years. Skin prick tests (SPTs) were performed at the ages of 31 and 46 years (Figure 1) [3]. Detailed information on the NFBC66 can be found on the research program's website [16]. Skin Prick Test SPTs were conducted with standard dilutions of three of the most common allergens in Finland (cat, birch, and timothy grass), plus house dust mite (HDM) (Dermatophagoides Pteronyssinus) (Alk-Abello Nordic, Espoo, Finland) as described previously [3]. Questionnaire and Confounder Factors Data on environmental factors in childhood were collected with health questionnaires antenatally and at age 31. The questionnaires included information on the place of residence in infancy, the parents' professional farming and keeping of farm animals, keeping of cats and dogs, and the number of animal species. Potential confounders were selected on the basis of previous studies that showed an association with sensitization: residential density [10], maternal smoking during pregnancy [17], maternal and paternal asthma [18,19], maternal and paternal allergy [18,19], mother's age of menarche [20], maternal body mass index (BMI) [21], parity [18], maternal age [22], and the study subject's gestational age [18], sex [23,24], current socio-economic status (SES, defined as level of education) [25], current BMI [26], current residence on a farm [12,27], and current smoking [28] (defined as smoking at least once a week). Maternal education and the study subject's birth weight and height were considered but ultimately excluded from the final analyses due to the absence of a significant association with sensitization. Analyses included gestational age, the alternative confounder for birth height and weight. Statistical Analysis Associations between the environmental factors in infancy and sensitization and polysensitization at age 46 years were analyzed by cross-tabulation and the data are presented as frequencies and percentages. The chi-squared test was used to test differences between sensitized and non-sensitized participants and between classes of polysensitization. Associations between environmental factors in infancy and sensitization or polysensitization at age 46 years were tested using binary logistic regression analysis and multinomial logistic regression.
Two models were used, unadjusted (crude) and adjusted by sex, maternal age, smoking during pregnancy, maternal BMI, residential density, current education, current BMI, current farm living, current smoking, paternal asthma, paternal allergy, maternal asthma, maternal allergy, gestational age, mother's age of menarche and parity. Analyses were conducted using the SAS software package (version 9.4, SAS Institute Inc., Cary, NC, USA). Characteristics of the Study Population At 46 years, invitations were sent to every living member of the cohort whose addresses were known (n = 10,321) and, of these, 5861 participants attended the clinical examination day. In the 46-year follow-up study, SPTs were performed on 5714 (55.4% of invited) participants. Because of invalid data, 331 participants were excluded from the 46 -year follow-up analysis and a further 10 participants did not provide their consent to data processing. The final study population consisted of participants who had complete SPT data on all allergens at age 46 (n = 5373) of which 2394 (44.6%) were men and 2979 (55.4%) were women. A subpopulation of 3409 participants with complete data on all allergens at age 31 and 46 years (longitudinal subpopulation) was further analyzed ( Figure 1). Environmental Factors and Allergic Sensitization at Age 46 After adjusting for multiple potential confounders, the risk of allergic sensitization remained significantly lower in those who lived in outlying districts compared to cities (odds ratio (OR) 0.58; 95% confidence interval (CI) 0.47-0.72), and in those whose parents were professional farmers (OR 0.54; 95% CI 0.43-0.68) or had farm animals (OR 0.53; 95% CI 0.43-0.66). The protective effect of cats (OR 0.74; 95% CI 0.62-0.89) and dogs (OR 0.74; 95% CI 0.62-0.88) remained significant after adjustment. The risk of sensitization was significantly and inversely associated with the number of animal species with an OR of 0.41 (95% CI 0.31-0.54) for three or more animal species (no farm animals as a reference). See Table 1. The risk of polysensitization was significantly lower in those who lived in an outlying district compared to a city (OR 0.43; CI 0.32-0.57). Professional farming by parents (OR 0.34; 95% CI 0.24-0.49) and keeping of farm animals (OR 0.34; 95% CI 0.25-0.48) were both associated with a significantly lower risk for polysensitization. An increasing number of farm animals gradually lowered the risk of polysensitization (OR 0.26; 95% CI 0.17-0.40 for three or more animal species (no farm animals as a reference)). See Table 2. When analyzed by allergen, the keeping of three or more animal species on the farm had the strongest protective effect on sensitization to timothy grass (OR 0.27; 95% CI 0.18-0.40), followed by cat (OR 0.34; 95% CI 0.23-0.50), and birch (OR 0.48; 95% CI 0.34-0.68), but had no effect on sensitization to HDM (OR 0.58; 95% CI 0.32-1.04). Residence in an outlying district protected from sensitization in the same pattern. Parents' professional farming and keeping of farm animals decreased the risk of sensitization to pollen and cat, but not to HDM. Having at least one dog in the family before the participant turned 7 years was associated with a lower risk of sensitization to HDM (OR 0.63; 95% CI 0.43-0.91) ( Table 3). 
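The crude versus adjusted comparison can be made concrete with a minimal sketch (shown in Python/statsmodels rather than the SAS 9.4 software actually used; the variable names are hypothetical and do not correspond to real NFBC66 field names).

import numpy as np
import statsmodels.formula.api as smf

def report_odds_ratios(df):
    """Crude and adjusted odds ratios (with 95% CI) for one childhood exposure."""
    crude = smf.logit("sensitized ~ farm_animals", data=df).fit(disp=0)
    adjusted = smf.logit(
        "sensitized ~ farm_animals + sex + maternal_age + maternal_bmi"
        " + current_smoking + current_farm_living + maternal_asthma",
        data=df).fit(disp=0)
    for label, model in [("crude", crude), ("adjusted", adjusted)]:
        or_ = np.exp(model.params["farm_animals"])
        lo, hi = np.exp(model.conf_int().loc["farm_animals"])
        print(f"{label}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")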
Environmental Factors and New-Onset Sensitization between Ages 31 and 46 Years From the longitudinal subpopulation (n = 3409), two subgroups were further analyzed: the first group stayed unsensitized (no positive SPT findings) during the follow-up period from age 31 to 46 years, while the second group acquired sensitization (no positive SPT at 31 and ≥1 positive SPT at 46 years) during this period. Neither farm exposure nor place of residence in infancy differed between these two groups (Table 4). Maternal asthma was the only statistically significant confounder and was more common in those who acquired sensitization during this period (data not shown). (Table notes: the total number of participants varies due to incomplete responses to the questionnaire; adjusted estimates account for sex, maternal age, smoking during pregnancy, maternal BMI, residential density, current education, current BMI, current farm living, current smoking, paternal asthma, paternal allergy, maternal asthma, maternal allergy, gestational age, mother's age of menarche, and parity; pet keeping refers to before age 7 years.) Discussion From this large birth cohort study, we showed that living on a farm and especially the keeping of farm animals in childhood had a protective effect on sensitization and polysensitization to aeroallergens even in middle-aged adults. This is in line with the hygiene hypothesis, highlighting the importance of environmental diversity [29]. Living in an outlying district in childhood was associated with a decreased risk of sensitization compared to living in a city. The keeping of cats and dogs had a protective effect on sensitization and the risk of sensitization decreased with an increasing number of animal species. However, these childhood factors did not protect from new-onset sensitization acquired between the ages of 31 and 46 years. In previous studies, the effect of a childhood farm environment on aeroallergen sensitization was commonly protective, and our present findings confirm this perception. Although a birth cohort study conducted in Finland found that the overall sensitization rate to common aeroallergens at age 31 was lower in those with a farm background in childhood (n = 5509), the specific aeroallergens were not examined separately in that study [10]. A Danish study found a lower risk of both overall and specific sensitization in the childhood animal farm group among male subjects aged 30 to 40 years (n = 1236) compared to rural, town, or city inhabitants. However, the study population was smaller than ours and excluded women. Moreover, the median age of the population was less than 35 years, and thus unrepresentative of a middle-aged population [11]. Studies have reported differing results when the effect of farming was examined by allergen. Although in the aforementioned Danish study sensitization to birch, grass, cat, and HDM was less common in participants with a farm background [11], others reported differing findings. For instance, in a Finnish study of 18-26-year-old subjects (n = 296), a childhood farming environment led to less sensitization to pollens and cat in the SPT [9]. However, a significantly higher sensitization rate to HDM was found in those with a farming background [9]. In our study, sensitization to birch, timothy grass, and cat allergens was less common if the parents were professional farmers or if the family had farm animals. Still, these factors did not protect from sensitization to HDM.
In the present study, the same protective associations were also observed for polysensitization, which has been linked to an increased risk of allergic multimorbidity [30][31][32]. Atopic dermatitis (AD) is known to increase the risk of atopic comorbidities (allergic rhino-conjunctivitis, allergic asthma, and IgE-mediated food allergy), and this atopic march may start developing in the early years of life [33,34]. In addition to the protective effect on sensitization, studies have also shown a protective effect of early farm exposure on asthma [9] and allergic rhinitis [35,36], which further highlights the clinical relevance of the lower polysensitization rate found in the farm group. Differences in sensitization between sexes have been proposed previously, with men seemingly more prone to sensitization [3,23,24]. This was also found in our results, as the male sex was a risk factor for overall sensitization, specific sensitization, and polysensitization. Sensitization to common aeroallergens is less common in adult farmers. A Finnish study of 433 women, including 231 women currently living on a farm, found a reduced risk of sensitization to pollens and cat in the farm group. In addition, the protective effect was most pronounced in those with both childhood and current farming exposure [13]. In the present study, even when adjusted for current farming, the protective effect of childhood farm exposure remained significant. The other remarkable finding in our study is that a farm environment in childhood did not protect from new-onset sensitization in adults. In longitudinal analyses (between ages 31 and 46 years), there was no difference in the effect of the childhood environment (living on a farm or in a rural environment and having a pet) between the new-onset and non-sensitized groups. In our study, only the mother's asthma was associated with a higher risk of sensitization during the follow-up period from 31 to 46 years of age. Although new-onset sensitization can occur in adults [3,37], most sensitization likely occurs early in life. This suggests that some other factor, independent of the childhood environment, drives the process in those who develop sensitization later in life. This is the first large-scale study on the effect of the childhood living environment on sensitization that extends to the age of 46 years. The strengths of the study include its unselected population and the longitudinal follow-up data, which enabled the analysis of childhood environmental factors for sensitization in middle age. The prospective setting of the study excludes recall bias. Only data on maternal and paternal asthma, maternal and paternal allergy, and pet keeping under the age of 7 years were recorded retrospectively. Potential confounders were included in the analysis comprehensively. The relatively high participation rate allows our results to be generalized to the entire population. We used the SPT technique, which is considered the gold standard for detecting allergic sensitization [38] and has a good positive predictive value for clinical allergy in respiratory allergic diseases [39]. In previous studies, the effect of farming was not always broken down with respect to farm animals, the variety of animal species, and pets. We were able to break down the effect of farming and showed that farm animals and pets had a protective association with sensitization and that the number of animal species affected the risk of sensitization.
After adjusting for current living on a farm, we showed that the protective association between childhood exposure to a farm environment and sensitization remained significant. The lack of a concurrent specific immunoglobulin E assessment can be considered a limitation of this study. The duration of farm exposure in childhood was not recorded and we cannot definitively state that the antenatally reported environment stayed the same after birth. Some participants declined the SPT for unknown reasons. Consequently, some of the most sensitized individuals may not have been included in the analyzed population. Furthermore, there was a possibility of selection bias in the population who participated in the clinical examinations. The permitted use of antihistamines may also have affected the outcome of the SPT, although few among the study population reported current antihistamine medication. Conclusions In conclusion, environmental exposures early in life are important in developing tolerance to aeroallergens and this protective effect is shown to last until middle age. Therefore, measures aiming to increase contact with natural environments and their diversity of microbes are essential in childhood. Funding: This study was supported by the University of Oulu, the Lapland Regional Fund of the Finnish Cultural Foundation and the Finnish Dermatological Society. Institutional Review Board Statement: The Ethical Committee of the Northern Ostrobothnia Hospital District approved the study, which was performed according to the Helsinki Declaration of 1983. Informed Consent Statement: Written informed consent for scientific purposes was received from all participants. Data Availability Statement: The data that support the findings of this study are available from Northern Finland Birth Cohort 1966 Study. Restrictions apply to the availability of these data, which were used under license for this study. Data are available at http://www.oulu.fi/nfbc/node/44315 with the permission of Northern Finland Birth Cohort (accessed on 20 May 2021). Conflicts of Interest: The authors have no conflict of interest to declare.
Short-range Molecular Rearrangements in Ion Channels Detected by Tryptophan Quenching of Bimane Fluorescence Ion channels are allosteric membrane proteins that open and close an ion-permeable pore in response to various stimuli. This gating process provides the regulation that underlies electrical signaling events such as action potentials, postsynaptic potentials, and sensory receptor potentials. Recently, the molecular structures of a number of ion channels and channel domains have been solved by x-ray crystallography. These structures have highlighted a gap in our understanding of the relationship between a channel's function and its structure. Here we introduce a new technique to fill this gap by simultaneously measuring the channel function with the inside-out patch-clamp technique and the channel structure with fluorescence spectroscopy. The structure and dynamics of short-range interactions in the channel can be measured by the presence of quenching of a covalently attached bimane fluorophore by a nearby tryptophan residue in the channel. This approach was applied to study the gating rearrangements in the bovine rod cyclic nucleotide-gated ion channel CNGA1 where it was found that C481 moves towards A461 during the opening allosteric transition induced by cyclic nucleotide. The approach offers new hope for elucidating the gating rearrangements in channels of known structure. INTRODUCTION With the recent unveiling of many high-resolution x-ray crystallographic structures of ion channel proteins, we have entered a new era in ion channel research (Hille, 2001; Swartz, 2004). We are no longer asking simply what does the channel look like but instead are now asking how does the channel structure change or rearrange during gating. Gating is the process by which all ion channels control the opening and closing of their ion-permeable pore. In most cases it is thought to be an allosteric conformational change that is regulated by signals such as changes in transmembrane voltage, the binding of external or internal ligands, or membrane stretch (Sigworth, 1994; Li et al., 1997; Perozo and Rees, 2003).
In each case, the regulator differentially affects the energy of the closed and open conformations and produces changes in the channel open probability. These changes in open probability are of fundamental importance to the physiological function of ion channels, but their detailed molecular mechanisms remain unknown. Unfortunately, structural determination of the same channel protein in different conformational states has proven difficult. As a result, a number of methods have been developed to infer conformational changes from more indirect measurements such as gating effects of channel mutations, state-dependent changes in cysteine accessibility or disulfide bond formation, changes in cysteine-linked biotin accessibility, and state dependence of channel modulators and blockers (Liu et al., 1997; Johnson and Zagotta, 2001; Laine et al., 2003; Phillips et al., 2005; Ruta et al., 2005). Recently, site-specific fluorescence labeling of channels has been used to follow the conformational changes associated with gating in voltage-dependent channels (Mannuzzu et al., 1996; Cha and Bezanilla, 1997; Cha et al., 1999; Zheng and Zagotta, 2000; Posson et al., 2005). The fluorescence of a fluorophore can report changes in local environment, accessibility to soluble quenchers, or proximity to nearby fluorophores by fluorescence resonance energy transfer (FRET) (Selvin, 1995). This method can be combined with whole-cell (or whole-oocyte) recording or excised patch recording, allowing simultaneous monitoring of channel function and structure with a relatively noninvasive probe. The existing fluorescence methods have a number of limitations that have reduced their usefulness: (a) the cause of fluorescence changes is often ambiguous, and its time course often complex, making the molecular interpretation of the results difficult; (b) the fluorophores are often large (e.g., GFP derivatives) or attached by long linkers, making the fluorophore a poor reporter of the movement of its attachment point; (c) the fluorescent labeling is not always completely specific to the channel or the cysteine in question; (d) distances reported by standard FRET are too large (30-70 Å), on the order of the diameter of the channel protein itself; and (e) the distance dependence of FRET is extremely steep (FRET efficiency depends on the sixth power of the distance), making it sensitive to movements only in a very narrow range of distances, around R_o. This steep distance dependence, combined with orientation dependence and inaccuracies in measuring the FRET efficiency, makes distance measurements with FRET unreliable. The availability of high resolution structural information has created the need for a fluorescence approach that utilizes smaller, more specific probes that monitor much shorter distances, on the order of the interaction distances observed in the molecular structures. The need for a new approach is perhaps nowhere better illustrated than in cyclic nucleotide-gated (CNG) channels. CNG channels have been well characterized both functionally and structurally (Kaupp and Seifert, 2002; Craven and Zagotta, 2005). They are nonselective cation channels that are opened by the direct binding of cyclic nucleotides to an intracellular domain and have important roles in signal transduction, both in photoreceptors and olfactory neurons.
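The sixth-power dependence referred to in point (e) is the standard Förster relation between transfer efficiency E and donor-acceptor distance R (a textbook expression, not a result of this study):

\[ E \;=\; \frac{1}{1 + (R/R_0)^6}, \]

so E changes appreciably only when R is near the Förster distance R_0, which is why FRET is insensitive to movements well above or below R_0.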
Recently, a high-resolution x-ray structure has been solved for the intracellular carboxy-terminal region of a related channel, the hyperpolarization-activated cyclic nucleotide-modulated (HCN) channel. This structure is in all likelihood very similar to the corresponding region in CNG channels. The availability of this structure has raised a number of important questions. (a) Does the structure of the isolated HCN fragment reflect a conformation of the intact HCN and CNG channels? (b) If so, which conformation does it represent: closed channel? Open channel? Gating intermediate? (c) And how does this structure rearrange during gating? These questions can be addressed by directly measuring the structure and dynamics of short-range molecular interactions predicted by the structure. Here we report a new method for simultaneously measuring short-range interactions with fluorescence spectroscopy and channel function with patch-clamp recording, and show an example of its use in CNG channels.

MATERIALS AND METHODS

Bimane-C3-maleimide and monobromobimane were purchased from Molecular Probes and dissolved at 100 mM in DMSO. This stock was aliquoted in small volumes and maintained frozen at −20°C. An aliquot was thawed just before each experiment and generally used for no more than two freeze-thaw cycles. All other reagents were obtained from Sigma-Aldrich.

Channel Mutagenesis
The CNGA1 channel with all seven endogenous cysteines removed (cysteineless) was used as a template for mutagenesis (Matulef et al., 1999). C481, I600C, and all tryptophan mutations were constructed by PCR mutagenesis as previously described (Gordon and Zagotta, 1995). All constructs were in the pGEMHE oocyte expression vector, and mRNA was prepared using the mMESSAGE mMACHINE kit (Ambion). Xenopus oocytes were prepared and injected with RNA as previously described (Gordon and Zagotta, 1995).

Electrical Recording and Solutions
Current recording was performed in inside-out patches (Hamill et al., 1981) formed at the tip of 300-500 kΩ glass pipettes using an Axopatch 200B amplifier (Axon Instruments). Macroscopic currents were filtered at 2.5 kHz, sampled at 5 kHz, and digitized by an ITC-18 interface (Instrutech Co.). Data were acquired and analyzed with Pulse (HEKA Elektronik) and Igor (Wave Metrics). Patches were perfused with cyclic nucleotide-containing solutions using a rapid solution changer (BioLogic) in the presence of constant bath perfusion. The pipette and bath solutions contained 130 mM NaCl, 3 mM HEPES, 0.2 mM EDTA, pH 7.2 with NMG. Sodium salts of cyclic nucleotide were dissolved in this solution, compensating to maintain a 130 mM final sodium concentration. Immediately before the experiment, bimane maleimide was diluted to the desired concentration (indicated in figure legends) in this solution, with or without 2 mM cGMP.

Optical Recording Setup
Optical recordings in cell-free patches (patch-clamp fluorometry [PCF]) were performed on a Nikon Eclipse TE2000-E microscope equipped with a 60× 1.4 NA Plan Apo oil immersion objective (Nikon). Laser illumination was achieved using a Coherent Radius solid-state laser (403 nm, 30 mW), coupled via an optical fiber to the total internal reflection (TIR) module from Nikon. The laser input power at the back aperture of the objective was 2.1 mW. In some experiments, this was attenuated to 0.26 mW using a neutral density filter to reduce photobleaching. No excitation filter was used.
Light was collected by the objective, passed through a dichroic mirror centered at 405 nm and a long pass filter centered at 415 nm (Chroma Technology), and then was reflected onto the grating of a spectrograph (Acton MicroSpec 2150i) that was coupled to one of the side ports of the microscope. Light was detected by a Cascade 512B intensified CCD camera with a chip size of 512 × 512 pixels (Roper Scientific). Spectroscopic measurements were done by placing the slit of the spectrograph over the image of the patch at the tip of the pipette and reflecting the image onto the grating (300/500 blazing). This produces a "spectral image" at the camera in which the x axis is a wavelength dimension and the y axis is a spatial dimension. The spectral region resolved by this grating and camera combination is 153 nm. Images were acquired for 300 ms to 1 s and the pixels were binned up to 5 × 5. The resolution is dependent on the level of binning of images and its value is 0.303 nm/pixel without binning. Calibration of the spectrograph was achieved by generating a spectrum from a multi-line Ar-ion laser (SpectraPhysics) and mapping the peak wavelengths (456, 488, and 514 nm) to pixel number. For imaging, the grating was moved to its zeroth-order position and the slit removed. Image acquisition and analysis, and microscope and instrument control were performed with MetaMorph (Universal Imaging Co.).

Data Analysis
Normalized dose responses of currents as a function of cGMP concentration were fitted with the Hill equation, I/I_max = [cGMP]^n/(K_d^n + [cGMP]^n), where I is the cGMP-elicited current, I_max is the maximal current, K_d is the concentration of cGMP for which I/I_max = 0.5, and n is the Hill coefficient. Results from multiple experiments are expressed as the mean ± SEM. The time course of modification in the open state was fit with the equation I_cAMP/I_cGMP = R_max(1 − e^(−kt))^m, where I_cAMP and I_cGMP are the current in saturating cAMP and cGMP, respectively, R_max is the steady-state fractional activation by cAMP, k is the modification rate, and m is an exponent that accounts for the sigmoidal time course. Images and line scans from spectra were acquired in MetaMorph and transferred to Igor (Wave Metrics) for further analysis. Currents were analyzed in PulseFit (HEKA) and also transferred to Igor. Spectra were background corrected by subtracting a line scan of the nonfluorescent region immediately above the pipette. All the vertical pixels corresponding to the length of the patch were averaged to form a spectrum. Quenching of bimane fluorescence (F) by tryptophan in solution was quantified from spectra by plotting the average fluorescence in a 7-nm window centered at 488 nm as a modified Stern-Volmer plot. Quenching of bimane in solution and in patches containing cysteineless CNGA1 channels was fit by a Stern-Volmer equation, F_o/F = 1 + K[q], where F_o is the fluorescence in the absence of quencher, F is the fluorescence at each concentration of quencher, K is the quenching constant, and [q] is the concentration of tryptophan. Data for quenching of bimane in patches containing C481 or I600C channels were fit by a two-component Stern-Volmer equation, F/F_o = f/(1 + K_1[q]) + (1 − f)/(1 + K_2[q]), where f is a number between 0 and 1 representing the relative contribution of each component, and K_i is the quenching constant for each component.
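As a concrete illustration of the Hill fit described above, the minimal sketch below fits a normalized cGMP dose-response with nonlinear least squares. This is not the authors' analysis code, and the data points are invented for illustration only.

```python
# Minimal sketch: fitting a normalized cGMP dose-response with the Hill equation.
import numpy as np
from scipy.optimize import curve_fit

def hill(cgmp, kd, n):
    """Fractional activation I/Imax as a function of [cGMP] (uM)."""
    return cgmp**n / (cgmp**n + kd**n)

cgmp = np.array([1, 3, 10, 30, 100, 300, 2000], dtype=float)   # uM (hypothetical)
i_norm = np.array([0.001, 0.01, 0.10, 0.50, 0.92, 0.99, 1.00])  # hypothetical I/Imax

popt, pcov = curve_fit(hill, cgmp, i_norm, p0=[30.0, 2.0])
kd_fit, n_fit = popt
perr = np.sqrt(np.diag(pcov))
print(f"Kd = {kd_fit:.1f} +/- {perr[0]:.1f} uM, n = {n_fit:.2f} +/- {perr[1]:.2f}")
```

The same curve_fit call, with the equations given above, can be reused for the single- and two-component Stern-Volmer fits.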
RESULTS

Tryptophan Quenches the Fluorescence of Bimane
Bimane is a small, environmentally sensitive fluorophore whose fluorescence has been used as a sensor of the local environment of residues in proteins (Kosower et al., 1979; Kosower et al., 1980; Kachel et al., 1998; Mansoor et al., 1999; Silvius, 1999). More recently it was discovered that bimane fluorescence is quenched by nearby tryptophan residues (Fig. 1 A) (Mansoor et al., 2002). Tryptophan quenching of bimane fluorescence is thought to be due to photo-induced electron transfer from tryptophan to excited bimane, and can be easily observed in a fluorometer with increasing concentrations of tryptophan added to the solution (Fig. 1 B). The decrease in fluorescence emission with increasing tryptophan concentration followed a simple linear Stern-Volmer relation, a plot of the inverse of the fractional fluorescence change as a function of the inverse of the quencher concentration (Fig. 1 C, see Materials and methods). This indicates that the quenching can be described by a single quenching constant K. Mansoor et al. (2002) have shown that intramolecular quenching between bimane and tryptophan can be observed in T4 lysozyme, and the quenching occurs when the tryptophan to bimane distance is <15 Å. Thus, tryptophan quenching of bimane can be used as a proximity detector for distance changes associated with conformational changes in proteins (Mansoor et al., 2002; Janz and Farrens, 2004). Since the distances are much shorter than standard FRET, this approach, in theory, could allow one to compile a series of contact points between different regions of the protein and measure the dynamics of these contacts, akin to spin-spin coupling in EPR or NOEs in NMR. Unlike these other techniques, though, tryptophan quenching of bimane can be measured for very small amounts of channel protein in membrane patches simultaneous with electrical recording. To explore the use of this approach to measure the conformational dynamics of an ion channel, we chose to study the gating rearrangements of the carboxy-terminal region of CNGA1 channels. This region exhibits 47% sequence similarity to the corresponding region of HCN2 channels, whose structure was recently solved by x-ray crystallography. The structure consists of a fourfold symmetric tetramer on the intracellular end of the channel pore (Fig. 2 A). Each subunit contains a cyclic nucleotide-binding domain (CNBD), with bound cAMP (or cGMP), attached to the inner helices of the pore by an α-helical C-linker domain. To get at the molecular dynamics of the channel, we have used patch-clamp fluorometry (PCF) to simultaneously measure the channel structure with fluorescence and the channel function with patch-clamp recording (Zheng and Zagotta, 2000, 2003). In our present PCF experiments, cysteines and tryptophans were introduced into the channel sequence and the mutant channels expressed in Xenopus oocytes. Then, inside-out patch-clamp recordings were made and the cysteine was modified with a cysteine-reactive bimane. This configuration allowed the time course of modification to be followed and the unincorporated fluorophore to be washed away. The fluorescence from the patch during various experimental manipulations could then be collected by a high numerical aperture objective and either imaged on a CCD camera or analyzed on a spectrograph to measure the emission spectrum.
Modification of C481 Channels with Bimane Maleimide
In CNGA1, an endogenous cysteine, C481, resides on the periphery of the carboxy-terminal region at the junction between the C-linker and CNBD domains (Fig. 2 A). We chose this cysteine residue as a site to introduce cysteine-reactive bimane into otherwise cysteineless CNGA1 channels. Based on the effects and state dependence of cysteine-modifying reagents, this cysteine has previously been proposed to undergo a rearrangement during gating (Gordon et al., 1997; Brown et al., 1998). Fig. 2 B shows the gating effect of modifying C481 with bimane maleimide. Modification caused an increase in the apparent affinity for cGMP (K_d decreased from 34.0 ± 5.2 μM to 7.05 ± 6.11 μM, n = 4) (Fig. 2 B) and an increase in the fractional activation by the partial agonist cAMP (I_cAMP/I_cGMP increased from 0.0087 ± 0.0047 to 0.526 ± 0.075, n = 3) (Fig. 2 B). These results can both be explained if modification causes a decrease in the free energy of channel opening, making cAMP a better agonist, and suggest that C481 or nearby regions undergo a rearrangement during gating. Consistent with this interpretation, the rate of modification of C481 was profoundly state dependent. Addition of bimane maleimide in the absence of cGMP (closed state) caused little or no modification of the channels, while addition in the presence of cGMP (open state) caused a large increase in the cAMP-activated current (Fig. 2 C). These results suggest that C481 is more accessible in the open state. After modification of C481 with bimane maleimide, bright fluorescence could be observed confined to the area of the patch (Fig. 3 A). With excitation by a 403-nm laser, an omega-shaped patch was clearly visible inside the pipette. Unlike most fluorophores, bimane maleimide shows a marked increase in fluorescence quantum yield upon reaction with a cysteine (Kosower et al., 1980), suggesting that most of the observed fluorescence arose from cysteine-reacted bimane. Consistent with this idea, little or no fluorescence was observed in patches previously blocked by 10 min in 1 mM N-ethylmaleimide (NEM) in the presence of cGMP (unpublished data). Spectral analysis of the fluorescence revealed that the emission spectrum of bimane was slightly blue shifted (~10 nm; Fig. 3 B, black trace) relative to the spectrum of free bimane in solution (Fig. 3 B, dashed red trace) and similar to bimane reacted with BSA (Fig. 3 B, solid red trace). Bimane has been extensively used as an environmentally sensitive fluorophore, and the blue shift is as would be expected for bimane experiencing the less polar environment of a protein (Wang et al., 1997; Mansoor et al., 1999). Since the fluorescently labeled channels are present in the controlled environment of an inside-out patch, we could directly measure the effects of exogenously applied cyclic nucleotide and tryptophan in solution on channel ionic current and fluorescence. 2 mM cGMP, a saturating concentration, produced a large increase in the current (Fig. 3 C, corresponding to the activation of ~15,000 channels) but no detectable change in the fluorescence (Fig. 3 D, compare spectra 1 and 2). This result indicates that, at this concentration, cGMP has no direct effect on the fluorescence of bimane. The addition of 25 mM tryptophan produced only a small decrease in current (Fig. 3 C) but a large decrease in fluorescence (Fig. 3 D, compare spectra 2 and 3).
The decrease in fluorescence was rapidly reversible. The quenching efficiency of tryptophan in solution was state dependent. Fig. 4 A shows Stern-Volmer plots for the quenching of bimane in a C481-containing patch in the absence (filled squares) and presence (open squares) of cGMP. A decreased slope in these plots reflects a higher quenching efficiency (a higher quenching constant K). Therefore C481-bimane was more easily quenched by soluble tryptophan, and thus more accessible, in the open state in the presence of cGMP (K = 6.1 ± 1.5 mM−1, n = 3) than in the closed state in the absence of cyclic nucleotide (K = 2.8 ± 1.1 mM−1, n = 3). The opposite state dependence occurs upon labeling a cysteine introduced at the 600 position (I600C). I600C is located in the C-helix of the CNBD of CNGA1 (Fig. 1 A) and has been shown previously to form a homotypic intersubunit disulfide bond in closed channels (Matulef and Zagotta, 2002). Unlike C481, I600C-bimane was more accessible to tryptophan in solution in the closed state (K = 1.55 ± 0.45 mM−1, n = 2) than in the open state (K = 0.63 ± 0.08 mM−1, n = 2) (Fig. 4 B), consistent with the proposed movement of the C-helix during CNG channel gating (Goulding et al., 1994; Varnum et al., 1995). In both cases, data could not be fit by a simple linear Stern-Volmer relation, but required two quenching components. The state-dependent component was only a fraction of the total patch fluorescence (15% for C481 and 20% for I600C in Fig. 4, A and B, respectively). This component clearly arises from channel-associated fluorophore, suggesting that under these labeling conditions, only a fraction of the bimane in the patch was directly associated with the channel. The state-independent component probably represents background fluorescence and could be directly observed in patches containing cysteineless CNGA1 channels. Its quenching was similar to quenching of bimane free in solution (Fig. 4 C). Interestingly, the quenching efficiency of the channel-associated component was higher than for free bimane. A similar observation has been made for iodide and thallium quenching of other fluorophores attached at C481 (Zheng and Zagotta, 2000, 2003). The reason for this increased quenching efficiency is unknown. Although the state dependence of quenching provided a useful way to distinguish between background fluorescence and channel-associated fluorescence, we sought an additional approach to improve the specificity of bimane labeling. We capitalized on the state-dependent accessibility of C481 to eliminate much of the nonchannel-associated fluorescence. By applying a nonfluorescent cysteine-modifying reagent in the absence of cGMP, we first blocked reactive cysteines in the patch that were accessible when the channels were closed. We could then specifically label the channels with bimane maleimide in the presence of cGMP. The results of such an experiment are shown in Fig. 5. Labeling the patch without blocking produced a fairly uniform labeling throughout the area of the patch (Fig. 5 A, and red trace in Fig. 5 C). However, blocking the background first for 7 min with 1 mM NEM in the absence of cyclic nucleotide produced more specific labeling of the plasma membrane, as seen by the bright signal in the area of the membrane (Fig. 5 B) and by the now well-defined peak of fluorescence seen in the line scan (Fig. 5 C).
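As a brief aside, the two-component Stern-Volmer analysis applied above can be sketched as follows. The data points are invented for illustration only; the point is to show how a small channel-associated fraction f with a high quenching constant is separated from a weakly quenched background component.

```python
# Minimal, illustrative sketch of a two-component Stern-Volmer fit.
import numpy as np
from scipy.optimize import curve_fit

def two_component_sv(q, f, k1, k2):
    """F/F0 for a mixture of two independently quenched populations."""
    return f / (1.0 + k1 * q) + (1.0 - f) / (1.0 + k2 * q)

q = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])        # mM tryptophan (hypothetical)
f_rel = np.array([1.00, 0.78, 0.71, 0.66, 0.61, 0.57])  # hypothetical F/F0

popt, _ = curve_fit(two_component_sv, q, f_rel,
                    p0=[0.2, 5.0, 0.05], bounds=([0, 0, 0], [1, 50, 5]))
print("f = %.2f, K1 = %.2f mM^-1, K2 = %.3f mM^-1" % tuple(popt))
```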
Preblocking nonchannel-associated cysteines with NEM produced a specific reduction of fluorescence in the interior of the patch (note the different scales in Fig. 5 C), suggesting that most of the background fluorescence arose from cytosolic proteins that are associated with the patch. The rate of modification of the background sites is considerably faster than modification of C481 (Zheng and Zagotta, 2000, 2003), so the specificity of labeling can also be increased by blocking the background sites with short exposures to NEM, eliminating the need for state-dependent modification. However, because fluorescence intensity changes of just a few percent could be easily observed, no background blocking was applied for the experiments to follow.

Intramolecular Quenching of Bimane by Introduced Tryptophan Residues
The real power of this new approach is not its ability to observe state-dependent changes in accessibility of residues to soluble quenchers, but its ability to measure state-dependent changes in short-range interactions in the channel, using tryptophan as an intrinsic quencher in the channel. This is particularly valuable when a high-resolution structure is available that provides testable predictions and a framework within which to interpret the results. Such is the case for the CNG channel. We used our homology model of the carboxy-terminal region of CNGA1 (Fig. 1 A) to identify residues within a sphere of 15 Å radius from C481. Five residues were chosen to be mutated to tryptophan in a C481 background and act as possible quenching partners for C481-bimane. Fig. 6 A shows the position of these five mutated residues, three in the D' helix (A461W, A464W, and I465W) and two in the C helix (Y586W and D588W), and their corresponding β-carbon distances to C481. All of these channels with introduced tryptophan residues produced relatively large cGMP-activated currents in inside-out patches (>1000 pA at 30 mV) and could be modified by bimane maleimide at C481. However, most did not show state-dependent changes in bimane fluorescence that would reflect changes in intramolecular quenching by the introduced tryptophan (Fig. 6 B). One such mutant is D588W. After bimane modification of C481, D588W channels could be normally activated by cGMP (Fig. 7 A) and exhibited bimane fluorescence, but the fluorescence was not significantly different in the absence and presence of cGMP (Fig. 7 B, compare spectra 1 and 2). In contrast, A461W channels showed a significant change in fluorescence upon application of cGMP (Fig. 7, C and D). 2 mM cGMP caused a 14.1 ± 2.9% (n = 5) decrease in fluorescence that was rapidly reversible and reproducible. The fluorescence quenching was only seen in the A461W mutant and was not observed in C481F-A461W channels. Furthermore, the quenching was not associated with a change in shape of the emission spectrum, but only a decrease in peak amplitude (Fig. 7 D), similar to quenching by soluble tryptophan. These results suggest that the quenching in A461W channels arose from intramolecular quenching of C481-bimane by A461W. To confirm that the change in bimane quenching by A461W was due to the molecular rearrangement associated with CNG channel gating, we compared the cyclic nucleotide dependence of quenching to that of channel activation measured in the same patch. As shown in Fig. 8, the magnitude of the quenching exhibited the same dependence on cGMP (Fig. 8 B) as the activation of the modified A461W channel (Fig. 8 A).
The dose dependence of quenching could be fit with the same apparent affinity (K_d) and Hill coefficient (n) as for activation of modified channels. Furthermore, 16 mM cAMP was similarly effective at promoting channel opening and quenching of C481-bimane by A461W. 16 mM cAMP is a saturating concentration that should bind completely but is less able to promote channel opening. These results indicate that the fluorescence quenching of C481-bimane is reporting a molecular rearrangement of the channel associated with the opening gating transition and not cyclic nucleotide binding. The decrease in fluorescence with cGMP suggests that C481 moves closer to A461 during the activating allosteric transition in the channel. The bimane C3-maleimide used in these experiments contains a relatively long linker between the bimane and maleimide moieties, leaving open the possibility that the quenching is not faithfully reporting the movement of C481 relative to A461W. To address this concern, we reacted C481, A461W channels with monobromobimane, which contains virtually no linker. As shown in Fig. 9, the open-state-dependent quenching with monobromobimane modification was quantitatively very similar to that with bimane C3-maleimide. These results confirm that the fluorescence quenching is reporting a movement of C481 relative to A461W during channel opening and does not require the long linker of bimane maleimide.

DISCUSSION

In this paper we have demonstrated a new approach to study conformational changes in channels using quenching of bimane fluorescence by nearby tryptophan residues. Fluorescence from bimane-modified channels is readily observable in inside-out patches and spectroscopic measurements are easily obtained. We observed that the fluorescence of bimane-modified channels can be quenched by both tryptophan in solution and tryptophan in the channel. The state-dependent fluorescence changes in the C481-bimane A461W channels are well correlated with the cyclic nucleotide-induced opening conformational change, indicating that the fluorescence is reporting a conformational change that is tightly coupled to channel opening. Our results explicitly rule out the possibility that the fluorescence is simply reporting cyclic nucleotide binding. (a) cAMP is a partial agonist and only partially quenches bimane fluorescence relative to cGMP. If it were the binding of the negatively charged ligand that caused quenching, then saturating concentrations of cAMP would produce the same degree of quenching as cGMP. (b) In our experiments with C481 channels with no additional tryptophan, we observe no fluorescence quenching effects, arguing that the ligand alone is not responsible for quenching. The absence of a cyclic nucleotide-dependent change in bimane fluorescence in the other tryptophan mutants is difficult to interpret. These negative results could arise in at least three ways: (1) these tryptophans and bimane may not be close enough or properly oriented to produce quenching; (2) the distance between these tryptophans and C481 might not change enough during channel gating to change the quenching; and (3) the change in quenching might be too small to detect above the background bimane fluorescence. Given these uncertainties, it seems most prudent in this approach to interpret only the results from bimane-tryptophan pairs that produce a gating-dependent increase or decrease in bimane fluorescence, as seen in C481-bimane A461W.
In the future, an instrument capable of measuring fluorescence lifetimes could be used to determine the absolute degree of quenching in each state of the channel, allowing a more thorough comparison to the x-ray structure. It is interesting to note that of all of the residues we tested, A461W is the furthest from C481 (15 Å) in the homology model of the carboxy-terminal region. This distance is on the outer limits of the distances reported for tryptophan to quench bimane (Mansoor et al., 2002). One possible explanation for this observation is that the crystallized structure of the carboxy-terminal region is in the resting conformation, at least with regard to the relative position of A461 and C481, and that the two residues are closer in the open channel. A similar proposal was recently made based on the effects of mutating salt bridges observed in the structure of the C-linker region (Craven and Zagotta, 2004). In both CNGA1 and HCN2 channels, mutating these salt bridges caused a potentiation of channel activation, suggesting that the salt bridges helped stabilize the closed conformation. Therefore the crystal structure of the C-linker region may reside in a resting configuration, and activation of the channel may move A461 and C481 into closer proximity. Alternatively, A461W and C481-bimane may be in close enough proximity in the structure to achieve quenching, and closure of the channel may move them even further apart. Data from more bimane-tryptophan pairs are needed to distinguish between these alternatives. With multiple quenching pairs, tryptophan quenching of bimane fluorescence offers great hope for elucidating the rearrangements and dynamics of gating conformational changes in ion channels. This approach should prove generally useful for probing short-range interactions in proteins, especially when a structural framework exists for interpreting state-dependent changes in quenching as structural rearrangements. By focusing on short-range interactions, it overcomes many of the problems associated with intramolecular FRET experiments. In addition, only one fluorophore needs to be introduced into the channel by modification, significantly reducing the problems with specificity of introducing two separate fluorophores into the same protein. Finally, combined with the use of PCF, this approach allows for the study of intracellular sites, where many of the rearrangements in ion channels take place.

We are grateful to Heidi Utsugi, Kevin Black, Shellee Cunnington, and Gay Sheridan for technical assistance. We thank Justin Taraska, Kimberley Craven, Michael Puljung, and Sharona Gordon for comments on the manuscript. We thank David Farrens for helpful discussions. This work was supported by the Howard Hughes Medical Institute and a grant from the National Eye Institute (EY10329) to W.N. Zagotta. Lawrence G. Palmer served as editor.

(Figure legend fragment: A, the data were fit with Eq. 1, with K_d = 15 μM and n = 1.0; the closed circle is the response to 16 mM cAMP after modification. B, cGMP dose-response relation of bimane quenching in modified A461W channels; the data were fit with Eq. 1 with a maximum value of 0.15 and the same parameters used for the cGMP dose-response relation of modified channels in A. The red symbol is the quenching observed when the channels are opened by 16 mM cAMP, corrected for the 3.1 ± 0.6% (n = 3) quenching of bimane produced by 16 mM cAMP alone.)
2016-05-04T20:20:58.661Z
2006-09-01T00:00:00.000
{ "year": 2006, "sha1": "bdf36079c4b39c38fb4d1110a6db51caf1c35e84", "oa_license": "CCBYNCSA", "oa_url": "http://jgp.rupress.org/content/128/3/337.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "bdf36079c4b39c38fb4d1110a6db51caf1c35e84", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
220981401
pes2o/s2orc
v3-fos-license
ATP-dependent hydroxylation of an unactivated primary carbon with water

Enzymatic hydroxylation of unactivated primary carbons is generally associated with the use of molecular oxygen as co-substrate for monooxygenases. However, in anaerobic cholesterol-degrading bacteria such as Sterolibacterium denitrificans the primary carbon of the isoprenoid side chain is oxidised to a carboxylate in the absence of oxygen. Here, we identify an enzymatic reaction sequence comprising two molybdenum-dependent hydroxylases and one ATP-dependent dehydratase that accomplish the hydroxylation of the unactivated primary C26 methyl group of cholesterol with water: (i) hydroxylation of C25 to a tertiary alcohol, (ii) ATP-dependent dehydration to an alkene via a phosphorylated intermediate, (iii) hydroxylation of C26 to an allylic alcohol that is subsequently oxidised to the carboxylate. The three-step enzymatic reaction cascade divides the high activation energy barrier of primary C–H bond cleavage into three biologically feasible steps. This finding expands our knowledge of biological C–H activations beyond canonical oxygenase-dependent reactions.

The selective oxidation of unactivated C-H bonds at alkyl functionalities to alcohols is of great importance for a plethora of synthetic processes. Enzymatic solutions for these challenging reactions have continuously motivated organic chemists to develop bioinspired strategies [1][2][3][4][5] . Biocatalytic C-H bond activations of alkyls via hydroxylation have generally been associated with metal-dependent monooxygenases, peroxidases or peroxygenases [6][7][8] . As an example, the well-studied cytochrome P450 monooxygenases reduce the dioxygen co-substrate to the formal oxidation state of a hydroxyl radical, which allows for the hydroxylation of unactivated primary carbons to alcohols 9 . While cytochrome P450 enzymes require an auxiliary electron donor system, peroxygenases and peroxidases depend on a balanced supply of their reactive co-substrate 6,8 . The use of oxygenases or peroxygenases is not an option for C-H bond oxidations in anaerobic hydrocarbon-degrading microorganisms, for which a few oxygen-independent enzymatic solutions for C-H bond hydroxylation at activated positions have been discovered in the past decades 10 . Prototypical examples are the water-dependent hydroxylations of the benzylic carbons of p-cresol 11,12 , p-cymene 13 , and ethylbenzene 14,15 , as well as of the allylic carbon of limonene 16 . Here, C-H bond activation proceeds via hydride transfer to FAD or Mo(VI) = O cofactors (Fig. 1a-d). In all these reactions the resulting carbocation intermediates are stabilised by the possibility of forming multiple resonance structures. However, the water-dependent hydroxylation of unactivated primary carbons of alkanes or isoprenoids has not been reported so far in enzyme catalysis. The ubiquitous, biologically active steroids are composed of a tetracyclic sterane system and an isoprenoid side chain, resulting in a low water-solubility and persistence in the environment. Bacterial degradation of steroids is of global importance for biomass decomposition, removal of environmental pollutants and for intracellular survival of Mycobacterium tuberculosis and other pathogens [17][18][19] . In aerobic cholesterol-degrading bacteria, side chain oxidation is initiated by the hydroxylation of the unactivated primary C26 (or equivalent C27) to a primary alcohol by cytochrome P450 enzymes, which is further oxidised to a C26-carboxylate 20,21 .
The isoprenoid side chain of the latter is then converted to acetyl-CoA and propionyl-CoA units via modified β-oxidation 22,23 . In denitrifying bacteria, oxygen-independent cholesterol degradation also proceeds via β-oxidation of the 26-carboxylate intermediate involving highly similar enzymes [24][25][26] . However, the oxidation of the primary, unactivated C26 to the 26-carboxylate must proceed in an oxygen-independent manner. In the denitrifying β-proteobacterial model organism Sterolibacterium denitrificans Chol-1S, anaerobic cholesterol degradation is initiated by the periplasmic isomerization/dehydrogenation of ring A to cholest-4-en-3-one (CEO) by the AcmA gene product 27 , which may be further dehydrogenated to cholesta-1,4-dien-3-one (CDO) by AcmB 25 . Both intermediates serve as substrates for steroid C25 dehydrogenase (S25DH), which hydroxylates the tertiary C25 of the isoprenoid side chain with water to 25-OH-CEO/-CDO in the presence of cytochrome c or an artificial electron acceptor 28 (Fig. 1e). Periplasmic S25DH belongs to the DMSO reductase family of Mo-cofactor containing enzymes and is composed of the catalytic α-subunit harbouring the Mo-cofactor and the FeS-cluster/heme b-containing electron-transferring β- and γ-subunits 29 . This enzyme is proposed to abstract a hydride from tertiary C25 to the Mo(VI) = O species, yielding the Mo(IV)-OH intermediate 14 ; S25DH isoenzymes hydroxylate cholesterol and its analogues to tertiary alcohols 30,31 . The rationale for the initial formation of the tertiary alcohol 25-OH-CDO, which cannot be directly further oxidised, during anaerobic cholesterol catabolism has remained enigmatic. Though previous high-resolution mass spectrometry (HRMS)-based analyses suggested the primary alcohol 26-OH-CDO as an intermediate 24,32 , the underlying conversion of a tertiary alcohol to a primary alcohol involves an unknown enzymology. Here, we aim to identify the enzymatic steps involved in the water-dependent oxidation of a primary carbon atom during anaerobic cholesterol degradation. We demonstrate that the formation of a tertiary alcohol at the cholesterol side chain initiates a three-step enzymatic reaction cascade finally resulting in the oxidation of unactivated C26 to a carboxylate.

Results

Enzymatic dehydration of 25-OH-cholesta-1,4-diene-3-one. To elucidate the enzymology responsible for production of CDO-26-oate, we synthesised 25-OH-CDO from CEO using overproduced S25DH and AcmB. Cell-free extracts of S. denitrificans grown with cholesterol under denitrifying conditions were anoxically reacted with 25-OH-CDO in the presence of a multitude of natural and artificial electron acceptors (e.g., NAD[P]+, K3[Fe(CN)6], or 2,6-dichlorophenolindophenol at 0.5 mM each). In no case was conversion of 25-OH-CDO to a product observed. In further trials, electron acceptors were omitted from the reaction assays and a number of potential co-substrates were tested. In the presence of ATP and MgCl2 (5 mM each), the time-dependent conversion of 25-OH-CDO 1 to a minor, more polar intermediate 2 was observed using ultra-performance liquid chromatography (UPLC) analyses, which was readily further converted to a second, less polar product 3 (Fig. 2). After ultracentrifugation of cell-free extracts this conversion was found to occur only in the soluble protein fraction.
UPLC-HRMS analysis of 2 did not allow a clear assignment to a molecular mass, whereas analysis of compound 3 revealed a [M + H]+ ion with m/z = 381.3161 ± 0.4 Da suggesting the loss of H2O from the substrate (m/z = 399.3274 ± 1.1; for MS data of all steroid intermediates analysed in this work, see Supplementary Table 1). The product 3 was purified by preparative HPLC and identified by 1H, 13C, and 2D NMR analyses as desmosta-1,4-dien-3-one (DDO), containing an allylic Δ24 double bond (Supplementary Figs. 1-3). Thus, the hydroxyl functionality introduced by S25DH was subsequently eliminated to an alkene by an ATP-dependent 25-OH-CDO dehydration activity. We tested whether product 2 represents a phosphorylated intermediate during the ATP-dependent dehydration of 25-OH-CDO. For this purpose we chemically synthesised a 25-phospho-CDO standard from enzymatically prepared 25-OH-CDO. We opted for a P(III)-amidite approach due to the steric hindrance of the tertiary OH, using a bis-fluorenylmethyl (Fm)-protected phosphoramidite.

For the determination of cofactor specificity and stoichiometry, the ATP-dependent 25-OH-CDO dehydration activity was enriched via ammonium sulphate precipitation (activity retained in the suspended 50% saturation pellet). During formation of DDO from 25-OH-CDO no nucleoside triphosphate other than ATP was accepted as co-substrate; Mg2+ could substitute for Mn2+ albeit at only 5% activity. UPLC analyses of adenosine nucleotides revealed that 1.2 ± 0.1 mol ATP were hydrolysed to 0.9 ± 0.1 mol ADP and 0.3 ± 0.1 mol AMP per mol 25-OH-CDO consumed (mean value ± standard deviation in three independent determinations); ATP hydrolysis was negligible in the absence of the steroid.

Oxidation of the allylic methyl group. For analysing the further conversion of DDO, we enzymatically synthesised DDO from 25-OH-CDO using cell-free extracts from S. denitrificans. Strictly depending on K3[Fe(CN)6] as electron acceptor, cell-free extracts of S. denitrificans converted DDO to product 4, which was subsequently converted to 5 in assays performed anaerobically (Fig. 3). In summary, DDO 3, formed by ATP-dependent dehydration of the tertiary alcohol, was hydroxylated first to the allylic primary alcohol 4 with water, followed by oxidation to the aldehyde 5. Specific activities in cell-free extracts were 2 ± 0.1 nmol min−1 mg−1 for 26-OH-DDO formation and 1.2 ± 0.2 nmol min−1 mg−1 for 26-OH-DDO dehydrogenation (mean value ± standard deviation in three biological replicates of cell extract preparation). To test whether one of the DH5-7 enzymes may be involved in the water-dependent hydroxylation of DDO, we heterologously expressed the genes potentially encoding the active-site αβγ-subunits of DH5-7 and an assumed δ-chaperone in the β-proteobacterium Thauera aromatica, as described for S25DH1-4 30 . Using cell-free extracts from T. aromatica producing DH5-7, only with DH5 did we observe the K3[Fe(CN)6]-dependent conversion of DDO to 26-OH-DDO; we therefore refer to this enzyme as S26DH1. Heterologously produced S26DH1 was enriched from T. aromatica extracts by three chromatographic steps under anaerobic conditions using a modified protocol established for S25DH1-4 30 . SDS-PAGE analysis of enriched S26DH1 revealed four major protein bands (Fig. 4b) that were excised, tryptically digested, and analysed by electrospray ionisation quadrupole time-of-flight mass spectrometry (ESI-Q-TOF-MS).
The masses of the ions obtained from tryptic peptides confirmed the presence of the α-, β-, and γ-subunits of S26DH1 from S. denitrificans; those obtained from the band eluting at 43 kDa were assigned to a co-purified 6-oxocyclohex-1-ene-1-carboxyl-CoA hydrolase from T. aromatica (Supplementary Table 2). The latter cofactor-free enzyme plays a role in the catabolic benzoyl-CoA degradation pathway 34 and is unlikely to be associated with steroid hydroxylation. Amino acid sequence similarities of S26DH1 with characterised S25DHs 30 suggested the presence of one Mo-cofactor, four [4Fe-4S] clusters, one [3Fe-4S] cluster, and a heme b. The metal content of S26DH1 determined by colorimetric procedures was 0.81 ± 0.03 mol Mo, 19.7 ± 0.4 mol Fe, and 0.89 ± 0.01 mol heme b per mol S26DH1 (mean value ± standard deviation in three different enzyme preparations). These values are in good agreement with the expected cofactor content and with the values determined for other S25DHs 30 . Unexpectedly, enriched S26DH1 catalysed the hydroxylation of DDO to 26-OH-DDO 4 and the subsequent dehydrogenation of the latter to DDO-26-al 5 with K3[Fe(CN)6] as electron acceptor (Supplementary Fig. 13). As a control, T. aromatica extracts containing the plasmid without the S26DH1 genes did not catalyse such a reaction. Both partial reactions of S26DH1 followed Michaelis-Menten kinetics, with Vmax = 219 ± 11 nmol min−1 mg−1 and Km = 123 ± 25 µM for DDO hydroxylation, and Vmax = 73 ± 3 nmol min−1 mg−1 and Km = 83 ± 12 µM for 26-OH-DDO dehydrogenation (mean values ± standard deviations of enzyme activity measurements at various substrate concentrations; for the data points used see Supplementary Fig. 6c, d). No further oxidation to a carboxylate by enriched S26DH1 was observed.

Oxidation of 26-OH-DDO to DDO-26-carboxylate. Enriched S26DH1 used K3[Fe(CN)6] but not NAD+ as electron acceptor for the oxidation of the allylic alcohol 4 to the aldehyde 5, which is in agreement with its proposed periplasmic location (see Discussion). We tested whether this activity is relevant for the cholesterol catabolic pathway or whether an additional cytoplasmic alcohol dehydrogenase is involved. Using NAD+ as electron acceptor, extracts from S. denitrificans grown with cholesterol showed virtually no conversion of 26-OH-DDO 4 to DDO-26-al 5 (<0.1 nmol min−1 mg−1), whereas NAD+ served as acceptor for the ready oxidation of DDO-26-al 5 to DDO-26-carboxylate (5 ± 0.2 nmol min−1 mg−1) (mean value ± standard deviation in three replicates). In this assay, formation of an additional product was observed that, after ESI-Q-TOF-MS analyses, was assigned to 26-OH-DDO (Supplementary Table 1, Supplementary Fig. 14). Obviously, the NADH formed during DDO-26-al oxidation served as donor for a promiscuous dehydrogenase that reduced the C26 aldehyde to the C26 alcohol. Recently, a cholesterol-induced gene was identified in S. denitrificans (WP_154716401) that was hypothesised to encode an aldehyde dehydrogenase (C26-ALDH) involved in cholesterol C26 oxidation 25 . We heterologously expressed this gene in E. coli, and extracts from cells producing the recombinant enzyme catalysed the NAD+-dependent oxidation of DDO-26-al 5 to DDO-26-carboxylate (Supplementary Fig. 15b).

(Fragment of the Fig. 4 legend: the previously assigned DH5 is now referred to as S26DH1 (marked in red). b, SDS-PAGE of S26DH1 heterologously produced in T. aromatica; the numbers refer to the molecular masses (kDa) of a standard; the band marked with an asterisk was identified as co-purified 6-oxocyclohex-1-ene-1-carboxyl-CoA hydrolase from T. aromatica, and the minor band eluting slightly above 55 kDa is a degradation product of the α5-subunit (Supplementary Table 2). c, Cluster of genes encoding the αβγ-subunits and the gene encoding the δ-chaperone of S26DH1 (accession numbers: γβα-subunits, WP_154715926-8; δ-chaperone, WP_154716737). Source data are provided as a Source Data file.)
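As a minimal, purely illustrative sketch of the Michaelis-Menten analysis reported above, the kinetic constants can be obtained by nonlinear fitting. The rate values below are invented, merely chosen to resemble the reported DDO hydroxylation parameters; this is not the authors' analysis code.

```python
# Minimal sketch: fitting Michaelis-Menten kinetics to hypothetical rate data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([10, 25, 50, 100, 200, 400], dtype=float)   # uM DDO (hypothetical)
v = np.array([17, 38, 65, 100, 138, 169], dtype=float)   # nmol min^-1 mg^-1 (hypothetical)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[200.0, 100.0])
print(f"Vmax = {vmax:.0f} nmol min^-1 mg^-1, Km = {km:.0f} uM")
```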
Discussion

In this work we identified an enzymatic reaction sequence that allows for the oxidation of an unactivated primary carbon to a carboxylate using water as the only hydroxylating agent. In this enzyme cascade the high activation energy barrier for C-H bond hydroxylation of a primary carbon is divided into three individual steps, each of which is energetically and mechanistically plausible under cellular conditions. The C-H bond dissociation energies to carbocations and hydrides relevant for the hydroxylation of tertiary C25 in cholesterol and allylic C26 in desmosterol are 142 kJ mol−1 and 117 kJ mol−1 lower than that for the primary C26 of cholesterol, respectively (based on gas phase calculations for the isopentane and 2-methyl-2-butene analogues) 35 . Thus, the rationale for the initial formation of a tertiary alcohol during anaerobic cholesterol catabolism 28,29 is to enable the ATP-dependent dehydration to a trisubstituted alkene that is crucial for the subsequent water-dependent hydroxylation of the allylic carbon via a relatively stable allylic carbocation intermediate. The major challenge of oxygen-independent primary carbon activation is the dehydration of the tertiary alcohol, which is achieved by coupling to exergonic ATP hydrolysis (ΔG' ≈ -50 kJ mol−1). The phosphate eliminated from the phosphoester intermediate is a much better leaving group than water. A related reaction is known from isoprenoid biosynthesis via the mevalonate pathway: (R)-mevalonate-5-diphosphate is ATP-dependently decarboxylated to isopentenyl diphosphate 36,37 . Here, phosphate elimination from an assumed phosphorylated intermediate is accompanied by decarboxylation, which additionally drives the elimination reaction forward (Supplementary Fig. 16). ATP-dependent dehydration has also been reported for the recycling of spontaneously formed NAD(P)H hydrates 38 ; however, it is unclear whether it proceeds via a phosphorylated intermediate. While a reaction mechanism via tertiary and allylic carbocation intermediates is plausible for the two Mo-dependent hydroxylases S25DH1/S26DH1, it remains uncertain whether this is the case for phosphate elimination from 25-phospho-CDO. If phosphate elimination proceeded via the same tertiary carbocation as proposed for S25DH catalysis, the question arises of why it is not directly deprotonated by S25DH to the alkene in a single step. Probably, the rebound of the hydroxyl functionality at the assumed Mo(IV)-OH intermediate to the carbocation is much faster than a competing deprotonation at C24 to an alkene by a putative base. The reaction cascade identified in this work allows in principle for any enzymatic water-dependent hydroxylation at a primary carbon next to a tertiary one. The genome of S. denitrificans contains two further copies of genes for S26DH-like enzymes, and they may represent isoenzymes specifically involved in the hydroxylation of allylic methyl groups of intermediates during catabolism of steroids with modified isoprenoid side chains such as β-sitosterol or ergosterol.
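The ΔG' ≈ −50 kJ mol−1 quoted above refers to ATP hydrolysis under cellular rather than standard conditions. The short sketch below (illustrative only, using typical textbook values for the standard transformed free energy and metabolite concentrations, not measurements from S. denitrificans) shows how such a value arises from ΔG = ΔG°' + RT ln([ADP][Pi]/[ATP]).

```python
# Illustrative estimate of the actual free energy of ATP hydrolysis in the cell.
import math

RT = 2.58          # kJ/mol at 37 C (8.314 J mol^-1 K^-1 * 310 K)
dG0_prime = -30.5  # kJ/mol, standard transformed free energy of ATP hydrolysis (textbook value)
atp, adp, pi = 3e-3, 0.4e-3, 6e-3   # molar, assumed cytoplasmic concentrations

dG = dG0_prime + RT * math.log((adp * pi) / atp)
print(f"Delta G of ATP hydrolysis ~ {dG:.0f} kJ/mol")
```

Under these assumptions the calculation returns roughly −49 kJ/mol, consistent with the value used in the text for the driving force coupled to phosphate elimination.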
Hence, a pair of specific Mo-dependent S25DH and S26DH appears to be required for the hydroxylation of primary carbons at the individual isoprenoid chains. The reaction cascade may also be involved in the degradation of non-steroidal isoprenoids or tertiary alcohols. For example, the anaerobic degradation of the fuel oxygenate methyl tert-butyl ether, an environmental pollutant of global concern, has frequently been described in anoxic environments 39,40 . However, the anoxic degradation of the tert-butanol intermediate is unknown but could in principle be accomplished via ATP-dependent dehydration to isobutene, analogous to the described pathway. Cholesterol degradation in anaerobic bacteria is initiated in the periplasm by the AcmA-, S25DH1-, and probably AcmB-dependent conversion into 25-OH-CDO/25-OH-DDO (oxidation to the diene in ring A may also occur later in the pathway). However, subsequent alkene formation is ATP-dependent and therefore has to occur in the cytoplasm. In contrast, subsequent C26 hydroxylation and oxidation to the C26 aldehyde will again take place in the periplasm, as evidenced by the N-terminal twin-arginine translocation (TAT) sequence present in S25DH and S26DH (Supplementary Table S3). Thus, initial steps of anaerobic cholesterol degradation involve enzymes alternately accessing their substrates from the periplasm and the cytoplasm (Fig. 5). Though flip-flop of cholesterol between the two leaflets of biological membranes is generally considered fast, with rate constants in the 10^4 s−1 range 41-43 , it is 5.5-fold faster for 25-OH-cholesterol vs cholesterol 43 . Thus, initial side-chain hydroxylation facilitates the accessibility of cholesterol-derived intermediates from both sides of the cytoplasmic membrane, which appears to be crucial for the reaction cascade involved in water-dependent primary carbon oxidation.

Fig. 5 Proposed subcellular localisation of initial anaerobic cholesterol degradation steps. For easier presentation, reduction of the Δ24 double bond is omitted. The dotted arrows indicate that the hydrophobic side chain needs to be bound by S25DH1 and S26DH1. The assignment of S25DH1 and S26DH1 to the periplasm is based on their TAT sequence, that of the S25 dehydratase and aldehyde dehydrogenase (ALDH) to the cytoplasm on their cytoplasmic co-substrates. Hydroxylations at C25 and C26 abolish polarity of CDO/DDO and improve access from both sides of the cytoplasmic membrane.

Methods

Heterologous production of AcmB in E. coli BL21. The gene encoding AcmB was amplified using the primers listed in Supplementary Table S4. The resulting 1.8-kb DNA fragment was SacI/HindIII double digested and cloned into pASK-IBA15plus before being transformed into E. coli BL21. Induction of AcmB was carried out in 2YT medium at 20°C, supplemented with 100 µg mL−1 ampicillin and 20 µg mL−1 anhydrotetracycline. Cells were harvested in the late exponential phase (16 h of induction) and used for further experiments.

Heterologous production of DDO-26-al dehydrogenase (C26-ALDH) in E. coli BL21. The C26-ALDH gene (WP_154716401) was amplified using the primers listed in Supplementary Table S4. The resulting 1.5-kb DNA fragment was digested with NheI/NcoI, cloned into pASK-IBA15plus and transformed into E. coli BL21. Induction of C26-ALDH was carried out as described for the AcmB gene.

Enrichment of S26DH1 after heterologous production in T. aromatica K172. All steps were performed under anaerobic conditions in an anaerobic glove box (95% N2, 5% H2, by vol.; O2 < 2 ppm).
The buffers and reagents used were degassed using alternating (20 cycles) N2 (0.5 bar) and vacuum (> -0.9 bar) to reach anaerobicity. Anaerobically harvested cells were lysed with a French pressure cell at 137 MPa using two volumes (w/v) of buffer A (20 mM Tris/HCl pH 7.0, 0.02% [w/v] Tween 20, 0.5 mM dithioerythritol [DTE]). Solubilisation of lysed cells was carried out for 16 h at 4°C using 1% (v/v) Tween 20. After ultracentrifugation at 150,000 × g for 1.5 h, the supernatant was used for enzyme enrichment. The soluble proteins were applied to a DEAE-Sepharose column (75 mL; GE Healthcare) at 7.

Spectrophotometric determination of Mo, Fe and heme b. Iron was determined with o-phenanthroline using a modified protocol as described 45 . Enriched proteins (5 µL) were added to 245 µL H2O and acidified with 7.5 µL 25% HCl, mixed, and incubated at 80°C for 10 min, followed by centrifugation at 10,000 × g for 10 min. The supernatant was transferred and 750 µL H2O, 50 µL 10% [w/v] hydroxylamine and 0.1% [w/v] o-phenanthroline were added, mixed and incubated at room temperature for 30 min. The absorbance was measured at 512 nm and compared with a (NH4)2Fe(SO4)2 standard. Mo was determined colorimetrically with dithiol using a modified protocol 46 . 20 µL of protein solution was incubated at 60°C until dryness was reached. The dried product was dissolved in 250 µL 4 M HCl and heated to 90°C for 30 min. 100 µL reducing solution (15% [w/v] ascorbic acid and 2% [w/v] citric acid in H2O) were added, mixed and incubated for 5 min at room temperature. 300 µL H2O and 100 µL dithiol solution (0.1 g zinc dithiol and 0.4 g NaOH diluted in 600 µL ethanol, 31 mL H2O and 200 µL thioglycollic acid) were added, mixed and incubated for 5 min. Then, 500 µL isoamyl acetate were added and the mixture was shaken vigorously for 1 min. The absorbance was measured in the organic phase at 680 nm and compared with a sodium molybdate dihydrate standard. The heme b content was determined spectrophotometrically at 556 nm after the anaerobically conducted complete reduction of the enzyme with sodium dithionite and was calculated using the extinction coefficient of reduced heme b (ε556 = 34.64 mM−1 cm−1) 47 .

Protein identification by mass spectrometry of peptides. Proteins were identified by excising the bands of interest from SDS-PAGE gels. After in-gel digestion with trypsin, the resulting peptides were separated by UPLC and identified using a Synapt G2-Si high-resolution (HRMS) electrospray ionisation quadrupole time-of-flight (ESI-Q-TOF) mass spectrometry system (Waters); the system has been described in detail previously 48 .

LC-ESI-MS analyses of steroid compounds. Metabolites were analysed by an Acquity I-class UPLC system (Waters) using an Acquity UPLC CSH C18 (1.7 µm, 2.1 × 100 mm) column coupled to a Synapt G2-Si HRMS ESI-Q-TOF device (Waters). For separation, an aqueous 10 mM NH4OAc/acetonitrile gradient was applied. Samples were measured in positive mode with a capillary voltage of 2 kV, 100°C source temperature, 450°C desolvation temperature, 1000 L min−1 N2 desolvation gas flow, and 30 L min−1 N2 cone gas flow. Evaluation of LC-MS metabolites was performed using MassLynx (Waters). Metabolites were identified by their retention times, UV-vis spectra, and m/z values.
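As a small worked illustration of the heme b quantification described above, the Beer-Lambert law gives the heme:enzyme stoichiometry directly. The absorbance and protein concentration below are assumed example values, not data from this study.

```python
# Illustrative heme b quantification from a dithionite-reduced spectrum.
eps_556 = 34.64      # mM^-1 cm^-1, extinction coefficient of reduced heme b
path_cm = 1.0        # cuvette path length (assumed)
a_556 = 0.17         # absorbance at 556 nm (assumed example value)
protein_um = 5.0     # enzyme concentration in uM (assumed example value)

heme_mm = a_556 / (eps_556 * path_cm)      # heme b concentration in mM
ratio = (heme_mm * 1000.0) / protein_um    # mol heme b per mol enzyme
print(f"heme b = {heme_mm*1000:.2f} uM -> {ratio:.2f} mol heme b per mol enzyme")
```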
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
2020-08-06T14:35:47.660Z
2020-08-06T00:00:00.000
{ "year": 2020, "sha1": "24bc05056ef2acdd8bacda4dfaf991cbc900705b", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-17675-7.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "24bc05056ef2acdd8bacda4dfaf991cbc900705b", "s2fieldsofstudy": [ "Chemistry", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
241119315
pes2o/s2orc
v3-fos-license
Three-Year Outcomes of Neovascular Age-Related Macular Degeneration in Eyes That Do Not Develop Macular Atrophy or Subretinal Fibrosis

Purpose To report the 36-month treatment outcomes of eyes with neovascular age-related macular degeneration (nAMD) receiving vascular endothelial growth factor (VEGF) inhibitors in daily practice that did not develop either subretinal fibrosis (SRFi) or macular atrophy (MA).

Methods This is a retrospective analysis of data from the Fight Retinal Blindness! registry. Treatment-naïve eyes starting intravitreal injections of VEGF inhibitors for nAMD from January 1, 2010, to September 1, 2017, that did not have SRFi or MA at baseline were tracked.

Results We identified 2478 eligible eyes, of which 1712 eyes did not develop SRFi or MA, 291 developed extrafoveal SRFi or MA, and 475 developed subfoveal SRFi or MA over 36 months. The estimated visual acuity stabilized from 6 months to 36 months in eyes that did not develop SRFi or MA, with a mean (95% confidence interval [CI]) change in VA of −1 (−2, 0) letters, whereas eyes that developed extrafoveal (−3 [−5, −2] letters) or subfoveal (−10 [−11, −8] letters) SRFi or MA declined in vision in the same period. Eyes with no or extrafoveal SRFi or MA over 36 months were more likely to maintain their visual improvement from six months to 36 months (odds ratio [OR; 95% CI] = 2.3 [1.5, 3.3] for absence vs. subfoveal SRFi or MA, P ≤ 0.01 and OR = 2.0 [1.2, 3.4] for extrafoveal vs. subfoveal MA or SRFi, P = 0.01).

Conclusions Treatment-naïve nAMD eyes receiving VEGF inhibitors maintain their initial six-month visual improvement over three years if they do not develop SRFi or MA.

Translational Relevance nAMD is still a major cause of blindness despite antiangiogenic treatments. We found that eyes that did not develop subretinal fibrosis or macular atrophy maintained their initial vision improvement for at least three years, suggesting that identifying treatments for these complications is the final barrier to achieving excellent outcomes in nAMD.

Introduction

The end-stage sequelae of neovascular age-related macular degeneration (nAMD), macular atrophy (MA), and subretinal fibrosis (SRFi) increase with time, are untreatable, and are associated with poor visual outcomes. [1][2][3][4][5] Both clinical trials and observational studies tend to find that visual acuity (VA) improves from baseline for six months after starting treatment with vascular endothelial growth factor (VEGF) inhibitors and then progressively declines thereafter in association with the development of foveal MA or SRFi, with final vision depending mainly on the presenting VA at the start of the treatment and the number of injections. [6][7][8] Older age, presenting VA, and type of choroidal neovascularization (CNV) may predict risk of progression to MA and SRFi under treatment more strongly than treatment strategy and frequency. 1,9,10 Few studies, if any, have investigated whether there is any other mechanism that causes visual loss in eyes with nAMD independently of these features. Here, we tested the hypothesis that eyes with nAMD treated with VEGF inhibitors continue to lose vision through unknown mechanisms, even if they do not develop SRFi or MA.

Methods

Design and Setting
This was a retrospective analysis of treatment-naïve eyes that had received intravitreal VEGF inhibitors for nAMD in routine clinical practice, tracked in the prospectively designed observational database, the Fight Retinal Blindness! (FRB!) registry. The details of the FRB!
database have been previously published. 11 Analyzed data are 100% completed because all fields must be filled out with in-range values before being accepted by the database. Participants in this analysis included patients from Australia, France, Ireland, Italy, the Netherlands, New Zealand, Singapore, Spain, and Switzerland. Institutional approval was obtained from the University of Sydney, the Royal Australian and New Zealand College of Ophthalmologists, the French Institutional Review Board (IRB) (Société Française d'Ophtalmologie IRB), the Mater Private Hospital IRB in Dublin, Ireland, the Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Milan, Comité de Ética de la Investigación con medicamentos de Euskadi (CEIm-E), and Agencia Española de Medicamentos y Productos Sanitarios (AEMPS), Singhealth Singapore, and the Cantonal Ethics Committee Zurich. All patients gave their informed consent. Informed consent ("opt-in consent") was sought from patients in France, Ireland, Italy, the Netherlands, Singapore, Spain, and Switzerland. Ethics committees in Australia and New Zealand approved the use of "opt-out" patient consent. This study adhered to the Declaration of Helsinki's tenets and followed the Strengthening the Reporting of Observational Studies in Epidemiology statements for reporting observational studies. 12 Data Sources and Measurements We analyzed data from the nAMD module of the FRB! outcomes registry. Data were obtained from each clinical visit, including the VA, the activity of the underlying choroidal neovascularization (CNV) lesion, the presence of SRFi or MA, treatment given, procedures, and ocular adverse events. Distance VA (uncorrected, corrected and pinhole if required) was measured in Snellen chart and converted as the number of letters read on a logarithm of the minimum angle of resolution (logMAR) VA standard ETDRS chart. 13 The activity of the CNV lesion was graded by the treating physician based on findings from clinical examination according to a definition provided in the data collection screen from optical coherence tomography (OCT) and dye-based fundus fluorescein angiography, alone or in combination, at each visit. Physician grading of MA and SRFi was implemented in April 2016 into FRB! to comply with the International Consortium for Health Outcomes Measurements (ICHOM) macular degeneration standard set and was recorded prospectively at each visit from then: these data were retrospectively entered for eyes with data entered before this date (n= 245 eyes). 14 No distinction was made between nonfibrotic scar and fibrotic scar in the grading, 15 because the diagnosis was based on the concordance between the appearance of SRFi on clinical examination, color fundus photography, and SD-OCT. At each visit, documentation of MA and SRFi was recorded according to the ICHOM standard set as: "Not present" or if present, based on location: "Extrafoveal" or "Subfoveal." 14 Repeat treatments were at the physician's discretion in consultation with the patient, thereby reflecting routine clinical practice. 
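The registry converts Snellen measurements to logMAR-based ETDRS letter scores, as noted above. A minimal sketch of that conversion is given below, assuming the widely used approximation letters ≈ 85 − 50 × logMAR; the registry's exact conversion table is not stated in the text and may differ.

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """logMAR = -log10(Snellen fraction), e.g. 6/12 -> 0.30."""
    return -math.log10(numerator / denominator)

def logmar_to_letters(logmar: float) -> float:
    """Approximate ETDRS letter score; 85 letters corresponds to logMAR 0.0.
    Uses the common approximation letters = 85 - 50 * logMAR (an assumption here)."""
    return 85.0 - 50.0 * logmar

if __name__ == "__main__":
    for num, den in [(6, 6), (6, 12), (6, 60)]:
        lm = snellen_to_logmar(num, den)
        print(f"Snellen {num}/{den}: logMAR {lm:.2f}, ~{logmar_to_letters(lm):.0f} letters")
```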
Patient Selection and Definitions Treatment-naïve eyes starting intravitreal injection of VEGF inhibitors of either aflibercept (2 mg Eylea; Bayer Healthcare, Leverkusen, Germany), bevacizumab (1.25 mg Avastin; Genetech Inc/Roche, Basel, Switzerland), or ranibizumab (0.5 mg Lucentis; Genetech Inc/Novartis) for nAMD from January 1, 2010, to September 1, 2017, thereby allowing the possibility of having at least 36 months of observations after the initial treatment, and who did not have SRFI or MA at baseline were tracked in the registry. Eyes were excluded if the grading of SRFI or MA was not entered at baseline. To ensure that eligible eyes did not have SRFI or MA at presentation, the baseline grading of SRFi or MA was based on multimodal imaging at each visit from the start of the treatment to the three-month visit to detect eyes with undiagnosed SRFi or MA at the beginning of the treatment because of intense exudative signs and exclude other reasons of subretinal hyperreflective material such as fibrin or hemorrhage. Three groups were defined based on the physician grading of SRFi or MA over 36 months of treatment: absence (i.e., eyes that did not develop SRFi or MA graded over 36 months), subfoveal SRFI or MA (i.e., eyes that developed subfoveal SRFi or MA graded over 36 months) and extrafoveal SRFI or MA (i.e., eyes that developed extrafoveal SRFi or MA graded over 36 months). Eye that developed first extrafoveal SRFi or MA and then progressed to subfoveal SRFi or MA over the period were included in the subfoveal SRFI or MA group for the analysis. Eyes that completed at least 1035 days of follow-up were defined as "completers." Eyes that did not complete 36 months of observations were defined as "non-completers." To investigate if vision declined after the initial visual improvement, we analyzed the mean change in VA from six months to 36 months. The initial visual improvement was considered as maintained if the VA change from six months to 36 months was more than −5 letters. Outcomes The main outcome was the estimated mean change in VA from 6 months at 36 months. Secondary visual outcomes included the estimated mean change in VA from baseline to 36 months, the proportion of eyes who maintained vision (VA change > −5 letters), lost ≥10, and ≥15 letters of vision from six months at 36 months, and the mean final VA. Other outcomes of interest were the median time to the development of MA and SRFi over 36 months, the baseline predictors of the development of MA, and SRFi over 36 months, the proportion of visits over 36 months in which the CNV lesion was graded as active, the median time to first grading of CNV inactivity over 36 months, the median time interval between injections over 36 months, the median number of visits and injections administered over 36 months, the rate of noncompletion, the median time and mean VA change to dropout, and the reason for discontinuation over 36 months. Statistical Analysis Descriptive data were summarized using the mean (standard deviation), median (first and third quartiles), and number (percentages) where appropriate. Calculation of crude visual outcomes over 36 months used the last-observation-carried-forward for non-completers. We compared visual outcomes between SRFi or MA groups over three years using mixed-effects longitudinal generalized additive models with the interaction between the development and location of MA or SRFi during the treatment and time as the main predictor variable. 
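The longitudinal, logistic, Poisson, and survival analyses described in this section could be sketched roughly as below. This is an illustrative outline only: the paper does not state its software, the column names (va_letters, month, group, age, baseline_va, practice_id, and so on) are hypothetical, and a single random intercept per practice plus a plain logistic model stand in for the nested random effects and generalized additive terms actually specified.

```python
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter, KaplanMeierFitter

def fit_models(visits: pd.DataFrame, eyes: pd.DataFrame):
    # Longitudinal VA model: interaction of follow-up time with SRFi/MA group,
    # adjusted for baseline covariates, random intercept per practice.
    long_model = smf.mixedlm(
        "va_letters ~ month * group + age + baseline_va",
        data=visits, groups=visits["practice_id"],
    ).fit()

    # Maintenance of the 6-month gain: VA change from month 6 to month 36 > -5 letters.
    eyes["maintained"] = (eyes["va_change_6_to_36"] > -5).astype(int)
    logit_model = smf.logit(
        "maintained ~ group + age + baseline_va", data=eyes
    ).fit()

    # Time to first grading of SRFi or MA: Kaplan-Meier curve and Cox model.
    kmf = KaplanMeierFitter().fit(eyes["time_to_srfi_ma"], eyes["developed_srfi_ma"])
    cph = CoxPHFitter().fit(
        eyes[["time_to_srfi_ma", "developed_srfi_ma", "age", "baseline_va"]],
        duration_col="time_to_srfi_ma", event_col="developed_srfi_ma",
    )
    return long_model, logit_model, kmf, cph
```

In practice the nesting of patients within practices, and of bilateral eyes within patients, would need additional random-effect terms (for example through variance-component formulas in the mixed model), which the sketch above omits.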
Longitudinal models included all visits from completers and non-completers (all observations until the 36-month visit or dropout). The proportions of eyes who maintained vision, lost 10 letters and 15 letters of vision from six months at 36 months between groups were compared using logistic mixed-effects regression models. Longitudinal and logistic models included age, gender, VA, type of CNV lesion at baseline and lens status during the followup, and nesting of outcomes within practitioners and patients with bilateral disease as random effects. Generalized Poisson regression models were used to compare the number of injections and visits between groups over 36 months. Longitudinal, logistic and generalized Poisson models included age, gender, VA, type of CNV lesion at baseline and lens status during the follow-up, and nesting of outcomes within practitioners and patients with bilateral disease as random effects. Cox-proportional hazards models and Kaplan-Meier survival curves were used to assess and visualize the time to first grading of inactivity, first grading of SRFi and MA, and non-completion rates over 36 months. Choroidal Neovascularization Activity Outcomes Over Three Years Overall, the proportion of active visits in eyes completing 36 months was 55%, lower in eyes that developed extrafoveal SRFi or MA (43.3%) than eyes that did not develop (56%) or developed subfoveal SRFI or MA (56%) (P = 0.044). The median (Q1, Q3) time to first grading of inactivity was 119 (82, 385) days and was not significantly different between subgroups (Fig. 3). Injections and Visits Over Three Years The median (Q1, Q3) number of injections was 19 (14,25) over three years in completers with 8 (6, 10), 6 (3, 8) and 5 (3,8) median (Q1, Q3) injections yearly at first, second, and third year, respectively ( Table 2). The adjusted ratio (95% CI) of the number of injections and visits from the generalized Poisson regression model was similar between subgroups according to the development of SRFi or MA in eyes completing 36 months ( Outcomes of Eyes not Completing Three Years The overall non-completion rate over 36 months was 45.4% (1125 eyes) and was more frequent in eyes that did not develop SRFi or MA than eyes that developed extrafoveal or subfoveal SRFi or MA (50% absence vs. 30% extrafoveal vs. 38% subfoveal, P < 0.01; see Supplementary Fig. S2). The mean VA at drop out was significantly better than the mean baseline VA each year of drop out ( Supplementary Fig. S3). The reasons for patients discontinuing treatment were tracked in 141 (14%) eyes. These were mainly not related to a poor outcome (71%, 100 eyes): treatment considered as successful 39% (54 eyes), patient transferred to another doctor 14% (20 eyes), death 14% (20 eyes), and medical contraindication 4% (six eyes). Discussion The present study reports that treatment-naïve nAMD eyes receiving VEGF inhibitors maintain their initial six-month visual improvement over three years of treatment in routine clinical practice if they do not develop SRFI or MA. This is not necessarily surprising, because SRFi and MA are well-known associations of poor long-term visual outcomes in treated nAMD eyes. 2,3,5,7,[15][16][17][18] The significance of our findings is that there is probably no other disease process that causes loss of the initial gains seen in eyes treated for nAMD. 
Although it would be helpful for this finding to be consolidated and extended in future studies, it appears that the prevention of MA and SRFi is the final obstacle to achieving better, enduring outcomes in nAMD. Not surprisingly, eyes in the three groups were not comparable at baseline particularly regarding presenting VA. This may be possibly due to an increased amount of blood or fibrin at baseline in the incident SRFI or MA group. We tried to limit the inclusion of baseline SRFi or MA eyes in the study using multimodal imaging definition to differentiate these features with other causes of subretinal hyperreflective material and defined the status of baseline SRFi or MA on the first three months treatment visits. Our results emphasize that the development or extension of MA or SRFi in the subfoveal region is associated with poor long-term visual outcomes. [15][16][17][18][19] Eyes that developed subfoveal SRFi or MA over 36 months had at least two to three lines difference in the final estimated mean VA change from baseline, were at least half as likely to maintain six-month visual improvement at 36 months and twice as likely to have a two-line or three-line VA loss from six months at 36 months than eyes that developed extrafoveal SRFi or MA over 36 months of treatment. Approximately 20% and 25% of eligible eyes developed SRFI and MA respectively by 36 months from the start of the treatment. These cumulative rates of SRFi and MA were similar to those reported elsewhere. 9,16,17 The Macular atrophy in Pro re Nata versus Treat-and-Extend (MANEX) study reported that the incidence of new atrophy lesion in consecutive naïve treated nAMD eyes receiving VEGF inhibitors was approximately 19% and 22% at 2 and 3 years of treatment, respectively. 9 In the comparison of AMD Treatments Trials (CATT), non-geographic atrophy and scar rates were estimated to be approximately 12 to 19% and 16 to 20% at 2 years, respectively, depending on the type of drug and treatment pattern. 17,18 The FRB registry has implemented the ICHOM classification grading of SRFi and MA to standardize the diagnosis of these features and compare outcomes between different reports. Our results are derived from an extensive observational database with multiple practitioners grading the clinical features, which may be less precise than in reports from RCTs such as CATT. [16][17][18] However, most of the practitioners contributing data are retina specialists who have agreed to use the ICHOM multimodal grading definition of these features. These real-world findings also reflect how diagnosis and treatment decision would be made in daily clinical practice if an effective drug were developed for preventing or treating MA or SRFi. The three-year visual real-world outcomes of VEGF inhibitors for nAMD were reasonably good (mean +3 letters improvement from baseline) with a median number of injections yearly of eight, six, and five during the first, second, and third year of treatment. Previous retrospective observational studies have reported poorer outcomes at three years. [20][21][22][23] It is difficult to compare our study to earlier reports because we included only eyes that had been diagnosed early without SRFi or MA when they started treatment. There was no difference in treatment and visits frequency between the subgroups. However, we found that eyes with no SRFi or MA, which achieved the best visual outcomes, were more aggressively treated and monitored during the third year of treatment. 
This reinforces the idea that initial VA improvement in nAMD can be maintained with more intensive or proactive treatment approaches in clinical practice. 8 As previously reported in the literature, presenting VA was a significant predictive factor of MA and SRFi development in our study. 1-3 Type 3 CNV was associated with an increased risk of MA and type 2 CNV with an increased risk of SRFi, which has been confirmed in previous reports. 1,9,10,19 Loss to follow-up may introduce bias because eyes that discontinue may drop out because of poor outcomes or sometimes because of good response to treatment and stabilization of vision. The rate of noncompletion was, in fact, highest in eyes that did not develop SRFi or MA over 36 months of treatment. The mean VA of the eyes that dropped out tended to be better than the presenting VA when treatment had discontinued, suggesting that those eyes tended to have good visual outcomes. Most (70%) of tracked reasons for discontinuation were mainly not related to a poor outcome. Our estimated outcomes, particularly in eyes that did not develop SRFi or MA over the study, may be inferior to the actual results if patients with good vision tended to discontinue follow-up within 36 months. We acknowledge several limitations that are mostly inherent in retrospective observational studies. Injection decisions in routine clinical practice are made without a guided management protocol, so they may vary among retinal specialists compared to RCTs. The grading of SRFi, MA, lesion type, and lesion activity may have interphysician variability. The FRB! registry receives data from a wide variety of international practices and practitioners. Thus we believe our data are fairly representative of clinicians worldwide, which may reduce potential bias caused by this variability to some extent. We also included nesting of outcomes within practitioners in our models to help account for these effects. This analysis's main strengths are its origi-nality and the large number of eyes that were studied over a long time period in daily clinical practice. To conclude, our study suggests that SRFi and MA are the main retinal causes of the long-term (three-year) visual decline in vision in nAMD eyes in routine clinical practice. Early diagnosis and appropriate application of treatment regimens to prevent these features and their extension to the subfoveal region stabilize the visual improvement after the start of the treatment. Further research is warranted to determine whether these findings hold for longer periods. There is a need to develop new drugs with potential antifibrotic and neuroprotective effects, combined with VEGF inhibitors, to prevent or even treat these endstage features of nAMD and further improve visual outcomes and the quality of life of our patients.
2021-11-04T06:22:26.930Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "f9966b58830d46cd09c3f0427d0c015f81b1c645", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1167/tvst.10.13.5", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a90b45c2e84bc194731d9935bc645606e2d1a35d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
120208571
pes2o/s2orc
v3-fos-license
Homogenization of a locally-periodic medium with areas of low and high diffusivity We aim at understanding transport in porous materials including regions with both high and low diffusivities. For such scenarios, the transport becomes structured (here: {\em micro-macro}). The geometry we have in mind includes regions of low diffusivity arranged in a locally-periodic fashion. We choose a prototypical advection-diffusion system (of minimal size), discuss its formal homogenization (the heterogenous medium being now assumed to be made of zones with circular areas of low diffusivity of $x$-varying sizes), and prove the weak solvability of the limit two-scale reaction-diffusion model. A special feature of our analysis is that most of the basic estimates (positivity, $L^\infty$-bounds, uniqueness, energy inequality) are obtained in $x$-dependent Bochner spaces. Introduction We consider transport in heterogeneous media presenting regions with high and low diffusivities.Examples of such media are concrete and scavenger packaging materials.For the scenario we have in mind, the old classical idea to replace the heterogeneous medium by a homogeneous equivalent representation (see [1,2,5,22] and references therein) that gives the average behaviour of the medium submitted to a macroscopic boundary condition is not working anymore.Specifically, now the transport becomes structured (here: micro-macro1 ) [3,14]. The geometry we have in mind includes space-dependent perforations2 arranged in a locally-periodic fashion.We refer the reader to section 2 (in particular to Fig. 1), where we explain our concept of local periodicity.Our approach is based on the one developed in [24,25] and is conceptually related to, e.g., [6,11].When periodicity is lacking, the typical strategy would be to tackle the matter from the percolation theory perspective (see e.g.chapter 2 in [12] and references cited therein 3 ) or to reformulate the oscillating problem in terms of stochastic homogenization (see e.g.[4]).In this paper, we stay within a deterministic framework by deviating in a controlled manner (made precise in section 2) from the purely periodic homogenization. We show our working methodology for a prototypical diffusion system of minimal size.To keep presentation simple, our scenario does not include chemistry.With minimal effort, both our asymptotic technique and analysis can be extended to account for volume and surface reaction production terms and other linear micro-macro transmission conditions.We only emphasize the fact that if chemical reactions take place, then most likely that they will be hosted by the micro-structures of the low-diffusivity regions.We discuss the microscale model for the particular case in which the heterogenous medium is only composed of zones with circular areas of low diffusivity of x-varying sizes.This assumption on the geometry should not be seen as a restriction.We only use it for ease of presentations and it does not play a role in our formal and analytical results.Our asymptotic strategy is based on a suitable expansion (remotely resembling the boundary unfolding operator [7]) of the boundary of the perforations in terms of level-set functions.In particular, we can treat in a quite similar way situations when free-interfaces travel the microstructure; we refer the reader to [24] for a dissolution precipitation freeboundary problem and [20] for a fast-reaction slow-diffusion scenario where we addressed the matter. 
The results or our paper are twofold: (i) We develop a strategy to deal (formally) with the asymptotics → 0 for a locally periodic medium (where > 0 is the microstructure width) and derive a macroscopic equation and x-dependent effective transport coefficients (porosity, permeability, tortuosity) for the species undergoing fast transport (i.e. that one living in high diffusivity areas), while we preserve the precise geometry of the microstructure and corresponding balance equation.The result of this homogenization procedure is a distributedmicrostructure model in the terminology of R. E. Showalter, which we refer here as two-scale model.(ii) We analyze the solvability of the resulting two-scale model.Solutions of the twoscale model are elements of x-dependent Bochner spaces.Our approach benefits from previous work on two-scale models by, e.g., Showalter and Walkington [23], Eck [9], and Meier and Böhm [17,18].A special feature of our analysis is that most of the basic estimates (positivity, L ∞ -bounds, uniqueness, energy inequality) are obtained in the x-dependent Bochner spaces.Our existence proof is constructed using a Schauder fixed-point argument and is an alternative to [23], where the situation is formulated as a Cauchy problem in Hilbert spaces and then resolved by holomorphic semigroups, or to [17], where a Banach-fixed point argument for the problem stated in transformed domains (i.e.x-independent) is employed. Note that (i) and (ii) are preliminary results preparing the framework for rigorously Figure 1.Schematic representation of a locally-periodic heterogeneous medium.The centers of the gray circles are on a grid with width .These circles represent the areas of low diffusivity and their radii may vary. proving a convergence rate for the asymptotics → 0; we will address this convergence issue elsewhere. The paper is organized in the following fashion: Section 2 contains the description of the model equations at the micro scale together with the precise geometry of our x-dependent microstructure for the particular case of circular perforations.The homogenization procedure is detailed in section 3. The main result of this part of the paper is the two-scale model equations as well as a couple of effective coefficients reported in section 4. The second part of the paper focusses on the analysis of the two-scale model; see section 5.The main result, i.e.Theorem 5.11, ensures the global-in-time existence of weak solutions to our two-scale model and appears at the end of section 5.3.A brief discussion section concludes the paper. Model equations We consider a heterogenous medium consisting of areas of high and low diffusivity.The medium is in the present paper represented by a two dimensional domain.We denote the two dimensional bounded domain by Ω ⊂ R 2 , with boundary Γ, and for ease of presentation we suppose in this section that the areas of the medium with low diffusivity are circles.We do not use this restriction in later sections; there the areas with low diffusivity can have different shapes, as long as neighboring areas do not touch each other. Let the centers of the circles B ij with low diffusivity, with radius R ij < /2, be located in a equidistant grid with nodes at ( i, j), where is a small dimensionless length scale. 
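Because the small parameter ε (the microstructure width) was dropped during text extraction in this section, the grid geometry just described is restated below in hedged form; the notation follows the surrounding text.

```latex
% Hedged restatement of the grid geometry just described; the symbol
% \varepsilon (the microstructure width) was lost in extraction.
\begin{align*}
  x_{ij} = (\varepsilon i, \varepsilon j), \quad i,j \in \mathbb{Z}, \qquad
  B_{ij} = \{\, x : |x - x_{ij}| < R_{ij} \,\}, \qquad R_{ij} < \tfrac{\varepsilon}{2}.
\end{align*}
```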
We assume that there is given a function r(x) : Ω → [0, 1/2) such that the radii R ij of the circles B ij are given by R ij = r(x ij ), where x ij = ( i, j).We define the area of low diffusivity Ω l , which is the collection of the circles of low diffusivity, as Ω l := ∪B ij and we define the area of high diffusivity Ω h , which is the complement of Ω l in Ω, as Ω h := Ω\Ω l .The boundary between high and low diffusivity areas is denoted by Γ , which is given by Γ := ∂Ω l .It is important to note that we assume that the circles of low diffusivity do not touch each other, so that Γ ij ∩ Γ kl = ∅ if i = k or j = l, where Γ ij := ∂B ij , and we also assume that the area of low permeability does not intersect the outer boundary of the domain Ω, so that Γ ∩ Γ ij = ∅ for all i, j. We denote the tracer concentration in the high diffusivity area by u , the concentration in the low diffusivity area by v , the velocity of the fluid phase by q and the pressure by p .All these unknowns are dimensionless.In the high diffusivity area we assume for the fluid flow a Darcy-like law and incompressibility, while we neglect fluid flow in the low diffusivity area.The diffusion coefficient in the low diffusivity area is assumed to be of the order of O( 2), while all the remaining coefficients are of the order of O(1) in .We assume continuity of concentration and of fluxes across the boundary between the high and low diffusivity areas. The model is now given by where D h denotes the diffusion coefficient in the high diffusivity region, D l the diffusion coefficient in the low diffusivity regions, κ denotes the permeability in the Darcy law for the flow in the high diffusivity region, ν denotes the unit normal to the boundary Γ (t), where q b and u b denote the Dirichlet boundary data for the concentration u and Darcy velocity q and where u I and v I denote initial value data for the concentration u and v . The gradient of a function f (x, x ), depending on x and y = x is given by where ∇ x and ∇ y denote the gradients with respect to the first and second variables of f . Level set formulation of the perforations boundary Since the location of the interfaces between the low and the high diffusivity regions also depends on , we need an -dependent parametrization of these interfaces.A convenient way to parameterize the interfaces is to use a level set function, which we denote by S (x): x ∈ Γ ⇔ S (x) = 0. Since we allow the size and shape of the perforations to vary with the macroscopic variable x, we might use the following characterization of S : where S : Ω × U → R is 1-periodic in its second variable, and is independent of .In this section we show, using the example of a grid of circles with varying sizes, that this characterization of S is not sufficient to characterize all locally-periodic sequences of perforation geometries.In fact, we need to expand S as where the S i : Ω × U → R are 1-periodic in their second variable, for i = 0, 1, 2, ... and are independent of .In order to find an explicit expression for S (x) in this particular case, i.e. the case of circular domains with radius r(x) (see Fig. 
1), we define P (x) to be the periodic extension of the function x → |x| and Q(x) to be the periodic extension of the function x → x, both defined on the square given by where a := max{n ∈ Z | n ≤ a} denotes the floor of a (rounding down).We can write S (x) as follows: Interestingly, the expression (3.5) plays the same role as the boundary unfolding operator (cf., for instance, [7] Definition 5.1).Note that S is not a continuous function, it jumps when x 1 or x 2 cross a multiple of .Whenever we assume that r(x, t) < 1/2, this is not a problem, since in this case S is continuous and smooth in a neighborhood of its zero level set, which is what we are interested in. To check that the zero level set of S consists indeed of circles around x ij with radius r(x ij ), we consider a curve, which without loss of generality can be parametrized in the square with sides around x ij by x ij +γ(s).For this curve to be a zero level set, it should hold that Using that x ij = ( i, j), with i, j ∈ Z ∩ Ω, we obtain r(( i, j) + γ(s) − Q((i, j) + −1 γ(s))) = P ((i, j) + γ(s)), and using the periodicity of P and Q we get which means that γ(s) should be a circle with radius r(x ij ).Now we can write the level set function S formally as the expansion where S k (•, y, •), for k = 0, 1, 2, ..., are 1-periodic in y = x , and are independent of .In order to find the terms in this expansion, we assume that r is sufficiently smooth and so that we can use the Taylor series of r around x: where D 2 r denotes the Hessian of r w.r.t.x.This suggests the following definition of the terms in the expansion of S : Interface conditions In (2.3 1 ) we have used the superscript for the normal vector ν in the interface conditions for v and u .The reason is that the normal vector depends on the geometry of the different regions, and this in turn depends on .In order to perform the steps of formal homogenization, we have to expand ν in a power series in .This can be done in terms of the level set function S : First we expand |∇S |.Using the chain rule (3.2) (see also [12]), the expansion of S and the Taylor series of the square-root function, we obtain In the same fashion, we get where and If we introduce the normalized tangential vector τ 0 , with τ 0 ⊥ ν 0 , we can rewrite ν 1 as Now we focus on the interface conditions posed at Γ .In order to obtain interface conditions in the auxiliary problems, we substitute the expansions of u , q , and ν into (2.3).This is not so straight-forward as it may seem, since the interface conditions (2.3) are enforced at the oscillating interface Γ , i.e. 
at every x where S (x) = 0.For formulating the upscaled model it would be convenient to have boundary conditions enforced at To obtain them, we suppose that we can parametrize the part of the boundary Γ ij that surrounds the sphere B ij with k (s), so that holds and we assume that we can expand k (s) using the formal asymptotic expansion Using the expansion for S , the periodicity of S i in y, and the Taylor series of S 0 and S 1 around (x, k 0 ), we obtain Collecting terms with the same order of , we see that k 0 (s) parametrizes locally the zero level set of S 0 : For k 1 , we have the equation It suffices to seek for k 1 that is aligned with ν 0 , so that we write where, using (3.11), λ is given by Each of the boundary conditions in (2.3) admits the structural form where K is a suitable linear combination of u , ∇u , q , p , v , and ∇v .Using (3.10) and the Taylor series of K around (x, k 0 ), we obtain where D 2 K denotes the Hessian of K w.r.t.x and y.Substituting (3.12) into (3.14),we can restate (3.14) in the following way: In order to proceed further, we make use of the following technical lemmas.Their proofs can be found in [24]. Diffusion equation in the low diffusivity areas Substituting the asymptotic expansion of v into (2.2),we obtain Similarly expanding the boundary condition (2.3 2 ), we get which, after substitution into (3.15),becomes Collecting the lowest order terms, and using that u 0 does not depend on y, we obtain the boundary condition v 0 (x, y, t) = u 0 (x, t) for all y ∈ Γ 0 (x), x ∈ Ω. (3.23) Convection-diffusion equation in the high diffusivity area Substituting the asymptotic expansion of u into (2.1 1 ), we obtain where Using the expansions for u , v and ν , we first expand (2.3 1 ): + O( 2), for all x ∈ Γ and y = x . Next we substitute this expansion into (3.15), and thus obtain Now we collect the −2 -term from (3.24) and the −1 -term from (3.26).Hence we obtain for u 0 the equations where Y 0 (x) is given by (3.20).This means that u 0 is determined up to a constant and does not depend on y, so that ∇ y u 0 = 0. Collecting the −1 terms from (3.24), the 0 -terms from (3.26), and using that ∇ y u 0 = 0, we get for u 1 the equations u 1 y-periodic. (3.28) Collecting the 0 -terms from (3.24) and the 1 -terms from (3.26), we obtain u 2 y-periodic. Remark 3.3 Note that in this section we have not used any assumptions of the shape of the perforations.They may have any shape as long as their limiting shape is described by the level set function S 0 . Upscaled equations The equations for lowest order terms of q and p , (3.19) and (3.21), v , (3.22), u , (3.30), and the coupling conditions (3.23) together constitute the upscaled model.In this section we collect these equations for the case discussed in Section 2, i.e. for circular perforations. For this purpose we return to a formulation in terms of r(x, t), where we use We write the solutions of equations (3.28) and (3.19) in terms of the solutions of the following two cell problems (see, e.g.[12]) and ∇ y • w j (x, y) = 0 for all x ∈ Ω, y ∈ U, |y| > r(x), w j = 0 for all x ∈ Ω, |y| = r(x), w j (x, y) and π j (x, y) y-periodic, (4.2) for j = 1, 2. 
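The displayed cell problems (4.1)–(4.2) and the defining integrals for A(x) and K(x) did not survive extraction. The sketch below shows the standard structure such definitions take in the cited homogenization literature; only the porosity formula θ(x) = 1 − πr²(x) appears verbatim in the text, while the cell problem for the correctors χ_j and the normalizations are assumptions.

```latex
% Hedged sketch (standard structure only; the paper's exact normalization may differ).
\begin{align*}
  \theta(x) &= 1 - \pi r^{2}(x),\\
  -\nabla_y \cdot \bigl(D_h \nabla_y (\chi_j + y_j)\bigr) &= 0 \ \text{in } Y_0(x), \qquad
  D_h \nabla_y(\chi_j + y_j)\cdot \nu_0 = 0 \ \text{on } \Gamma_0(x), \qquad
  \chi_j \ \text{$y$-periodic},\\
  a_{ij}(x) &= \frac{D_h}{|U|} \int_{Y_0(x)} \bigl(\delta_{ij} + \partial_{y_i} \chi_j(x,y)\bigr)\, dy,
  \qquad
  k_{ij}(x) = \int_{Y_0(x)} w_j^{i}(x,y)\, dy .
\end{align*}
```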
The use of these cell problems allows us to write the results of the formal homogenization procedure in the form of the following distributed-microstructure model where the porosity θ(x) of the medium is given by θ(x) := 1 − πr 2 (x), while the effective diffusivity A(x) := (a ij (x)) i,j and the effective permeability K(x) := (k ij (x)) i,j are defined by w ji (x, y, t) dy. Analysis of upscaled equations In this section we investigate the solvability of the upscaled equations (4.3)-(4.5).Note that the equations (4.3 3,4 ) for q and p 0 , together with the boundary condition (4.4 3 ) are decoupled from the other equations.We may assume that we can solve these equations for q and p 0 such that q ∈ L ∞ (Ω; R 2 ) (see Assumption 2 below).Standard arguments form the theory of partial differential equations justify this assumption if the data q b and r are suitable, see [13] for a closely related scenario.With this assumption the equations where B(x) := Y 0 (x), where Y 0 is defined in (3.20).Notice that in this section we again do not restrict ourselves to circular perforations.The perforations may have any shape as long as they are described by the level set S 0 .In the following sections we discuss the existence and uniqueness of weak solutions to problem (P ). Functional setting and weak formulation For notational convenience we define the following spaces: The spaces H 2 and V 2 make sense if, for instance, we assume (like in [18]): Assumption 1 The function S 0 : Ω × U → R, which defines B(x) := Y 0 (x) in (3.20), and which also defines the 1-dimensional boundary Ω × ∂B(x) of Ω × B(x) as (x, y) ∈ Ω × ∂B(x) if and only if S 0 (x, y) = 0, is an element of C 2 (Ω × U ). Assume additionally that the Clarke gradient ∂ y S 0 (x, y) is regular for all choices of (x, y) ∈ Ω × U . Following the lines of [18] and [23], Assumption 1 implies in particular that the measures |∂B(x)| and |B(x)| are bounded away from zero (uniformly in x).Consequently, the following direct Hilbert integrals (cf.[8] (part II, chapter 2), e.g.) are well-defined separable Hilbert spaces and, additionally, the distributed trace given by is a bounded linear operator.For each fixed x ∈ Ω, the map γ x , which is arising in (5.5), is the standard trace operator from H 1 (B(x) to L 2 (∂B(x)).We refer the reader to [17] for more details on the construction of these spaces and to [19] for the definitions of their duals as well as for a less regular condition (compared to Assumption 1) allowing to define these spaces in the context of a certain class of anisotropic Sobolev spaces.Furthermore we assume Assumption 2 We also define the following constants for later use: Note that M 1 and M 2 depend on the initial and boundary data, but not on the final time T .Let us introduce the evolution triple (V, H, V * ), where Denote U := u − u b and notice that U = 0 at ∂Ω. Definition 5.1 Assume Assumptions 1 and 2. The pair (u, v), with u = U + u b and where (U, v) ∈ V, is a weak solution of the problem (P ) if the following identities hold ) for all (φ, ψ) ∈ V and t ∈ S. As a last item in this section on the functional framework, we mention for reader's convenience the following lemma by Lions and Aubin [16], which we will need later on: Then W → → L p (S; B). Estimates and uniqueness In this section we establish the positivity and boundedness of the concentrations.Furthermore, we prove an energy inequality and ensure the uniqueness of weak solutions to problem (P). 
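For reference, the displayed definitions of the x-dependent function spaces used in the weak formulation above were also lost in extraction. The following is a hedged sketch of the direct-Hilbert-integral spaces the text refers to, in the spirit of the cited constructions of Meier and Böhm and of Showalter and Walkington; the paper's exact labels H_1, H_2, V_1, V_2 may differ, and the evolution triple (V, H, V*) is then built from such spaces with the usual dense and continuous embeddings.

```latex
% Hedged sketch of the x-dependent (direct Hilbert integral) spaces.
\begin{align*}
  \mathcal{H} &:= L^{2}\bigl(\Omega; L^{2}(B(x))\bigr)
    = \Bigl\{ v : \Omega \times U \to \mathbb{R} \ \text{measurable} :
      \int_{\Omega} \|v(x,\cdot)\|^{2}_{L^{2}(B(x))}\,dx < \infty \Bigr\},\\
  \mathcal{V} &:= L^{2}\bigl(\Omega; H^{1}(B(x))\bigr), \qquad
  \|v\|_{\mathcal{V}}^{2} := \int_{\Omega} \|v(x,\cdot)\|^{2}_{H^{1}(B(x))}\,dx,\\
  (\gamma v)(x,\cdot) &:= \gamma_{x}\, v(x,\cdot) \in L^{2}(\partial B(x))
    \quad \text{for a.e. } x \in \Omega .
\end{align*}
```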
Lemma 5.3 Let Assumptions 1 and 2 be satisfied.Then any weak solution (u, v) of problem (P ) has the following properties: (i) u ≥ 0 for a.e.x ∈ Ω and for all t ∈ S; (ii) v ≥ 0 for a.e.(x, y) ∈ Ω × B(x) and for all t ∈ S; (iii) u ≤ M 1 for a.e.x ∈ Ω and for all t ∈ S; (iv) v ≤ M 2 for a.e.(x, y) ∈ Ω × B(x) and for all t ∈ S; (v) The following energy inequality holds: where M 1 and M 2 are given in (5.6) and (5.7), and where c 1 is a constant independent of u and v. Proof We prove (i) and (ii) simultaneously.Similar arguments combined with corresponding suitable choices of test functions lead in a straightforward manner to (iii), (iv), and (v).We omit the proof details.Choosing in the weak formulation as test functions (ϕ, ψ) := (−U − , −v − ) ∈ V, we obtain: Note that, excepting the last two terms, the right-hand side of (5.13) has the right sign.Assuming, additionally, a compatibility relation between the data q, u b , for instance, of the type D∇u b = u b divq a.e. in Ω × S, makes the last term of the r.h.s. of (5.13) vanish.The key observation in estimating the last by one term is the fact that the sets {x ∈ Ω : U (x) ≥ 0} and {x ∈ Ω : U (x) ≤ 0} are Lebesque measurable.This allow to proceed as follows: U + divq∇U − = 0. (5.14) (5.15) After applying the inequality between the arithmetic and geometric means applied to the second term for the right hand-side of (5.13), the conclusion of both (i) and (ii) follows via the Gronwall's inequality. Proposition 5.4 (Uniqueness) Problem (P ) admits at most one weak solution. Proof Let (u i , v i ), with i ∈ {1, 2}, be two distinct arbitrarily chosen weak solutions. from X 2 to X2 ⊂ X 3 .The fact that the linear PDE (5.24) and its weak solution depend (continuously) on the fixed parameter x ∈ Ω is not "disturbing" at this point 4 .Since for any v ∈ X 3 the gradient ∇ y v has a trace on ∂B(x), the well-definedness and continuity of T 3 is ensured. Furthermore we need for the fixed-point argument that the operator T is compact.It is enough that one of the operators T 1 , T 2 and T 3 is compact.Here we will show that T 2 maps X 2 compactly into X 3 . Proof of claim The conclusion of the Lemma is a straightforward consequence of the regularity of S 0 , by Assumption 1. 
Claim 5.9 (Interior and boundary H 2 -regularity) Assume Assumptions 1 and 2 and take (5.32) Proof of claim The proof idea follows closely the lines of Theorem 1 and Theorem 4 (cf.[10], sect.6.3) Claim 5.10 (Additional two-scale regularity) Assume the hypotheses of Lemma 5.9 to be satisfied.Then (5.34) The extension to L 2 (S; H 1 (Ω; H 2 (B(0)) ∩ H 1 0 (B(0)))) can be done with help of a cutoff function as in [10] (see e.g.Theorem 1 in sect.6.3).We omit this step here and refer the reader to loc.cit.for more details on the way the cutoff enters the estimates.To simplify the writing of this proof, instead of V (and other functions derived from V ) we write V (without the hat).Furthermore, since here we focus on the regularity w.r.t.x of the involved functions, we omit to indicate the dependence of U on t and of V on y and t.For all t ∈ S, x ∈ Ω and Y ∈ Y 0 , we denote by U i h and V i h the following difference quotients with respect to the variable x: We have for all ψ ∈ L 2 (Ω , H 1 0 (B(0))) the following identities: and (5.36) Subtracting the latter two equations, dividing the result by h > 0 and choosing then as test function ψ := V i h yields the expression where V i h [J(x + he i )∂ t (V (x + he i ) + U (x + he i )) − J(x)∂ t (V (x) + U (x))] Re-arranging conveniently the terms, we obtain the following inequality: I . (5.37) To estimate the terms I we make use of Cauchy-Schwarz and Young inequalities, the inequality between the arithmetic and geometric means, and of the trace inequality.We get and (Ω ;H 2 (B(0))) . (5.42) Note that all terms |I | are bounded from above.To get their boundedness we essentially rely on the energy estimates for V , U , U i h as well as on the L ∞ -estimates on V and V i h on sets like Ω × B(0) and Ω × Γ 0 .The conclusion of this proof follows by applying Gronwall's inequality. Putting now together the above results, we are able to formulate the main result of section 5, namely: Theorem 5.11 Problem (P) admits at least a global-in-time weak solution in the sense of Definition 5.1. Discussion The remaining challenge is to make the asymptotic homogenization step (the passage → 0) rigorous.Due to the x-dependence of the microstructure the existing rigorous ways of passing to the limit seem to fail [3,14,21].As next step, we hope to be able to marry succesfully the philosophy of the corrector estimates analysis by Chechkin and Piatnitski [6] with the intimate two-scale structure of our model.
2010-03-21T08:07:28.000Z
2010-03-21T00:00:00.000
{ "year": 2010, "sha1": "8c5299d338184f4809632923d1a2a53aac7a6979", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "40ffd761bb4a205922404a1a100538f6ee177990", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
52243281
pes2o/s2orc
v3-fos-license
Spectrophotometric and thermal study and determination of trace amounts of palladium(II), nickel(II) and silver(I) using a pyrazolone azo derivative Azo compounds are an important class of chemical compounds containing a heterocyclic moiety which has attracted the attention of many researchers in recent years.1 There has been increasing interest in the synthesis of heterocyclic compounds that have biological and commercial importance, clinical,2 and pharmacological activities.3 One of the most important derivatives of antipyrine is 4-aminoantipyrine, which is used as a synthetic intermediate to prepare polyfunctionally substituted heterocyclic moieties.4 Introduction Azo compounds are an important class of chemical compounds containing a heterocyclic moiety which has attracted the attention of many researchers in recent years. 1 There has been increasing interest in the synthesis of heterocyclic compounds that have biological and commercial importance, clinical, 2 and pharmacological activities. 3 One of the most important derivatives of antipyrine is 4-aminoantipyrine, which is used as a synthetic intermediate to prepare polyfunctionally substituted heterocyclic moieties. 4 Azo compounds containing N 3 donor atoms act as superior chelating agents for transition and non-transition metal ions and show biological activities. 5 Azo dyes are commonly synthesized by coupling a diazonium reagent with an aromatic compound to form an azo reagent. 6 Azo compounds give bright, high-intensity colors, much more intense than those of most other common dye classes; in addition, they have fair to good fastness properties, and their biggest advantage is their cost-effectiveness, which is due to the processes involved in their manufacture. 7 The coordinating property of the 4-aminoantipyrine ligand has been modified to give a flexible ligand system, formed by condensation with a variety of reagents such as aldehydes, ketones and carbazides. 8 Ellagic acid, a dimeric derivative of gallic acid, is a polyphenolic antioxidant that occurs in its free form as a glycoside or is found as ellagitannins in the fruits and nuts of several plants. 9 Several studies have reported the antioxidant, antimutagenic, anti-inflammatory and cardioprotective activity of ellagic acid. 10 In this research, a novel, sensitive and selective azo dye was prepared by reacting 4-aminoantipyrine with ellagic acid as a coupling agent to determine trace amounts of Pd(II), Ni(II) and Ag(I) as complexes. Materials and methods All chemicals were of the highest purity and were supplied by Fluka and BDH. Spectrophotometric measurements were made with a Shimadzu UV-Visible-1650PC double-beam spectrophotometer. FTIR measurements were made with a Shimadzu 8400 Series instrument (Japan). Thermal measurements used a Linseis STA PT-1000 differential scanning calorimeter (DSC). Atomic absorption measurements used a Shimadzu AA-7000F flame spectrophotometer. Electric molar conductivity measurements were carried out at room temperature using an Alpha digital conductivity meter, model UK 9300. The pH measurements were performed with a HANNA pH meter HI 9811-5 instrument. Synthesis of H 4 L The reagents and solvents were of analytical grade and used without further purification. 4-Aminoantipyrine (1.0000 g, 0.00492 mol) was diazotized by dissolving it in 25 ml ethanol, adding 5 ml of HCl while keeping the temperature at 0-5°C, and then adding NaNO 2 solution gradually; the solution was left for about 15 min.
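As a quick consistency check (not part of the paper), the quoted 0.00492 mol for 1.0000 g of 4-aminoantipyrine can be reproduced with the tabulated molar mass of about 203.24 g/mol, a value taken from standard tables rather than from the text.

```python
# Back-of-envelope check of the quoted amount of 4-aminoantipyrine.
MW_4_AMINOANTIPYRINE = 203.24  # g/mol, from standard tables (not from the paper)

def moles_from_mass(mass_g: float, molar_mass: float) -> float:
    return mass_g / molar_mass

if __name__ == "__main__":
    n = moles_from_mass(1.0000, MW_4_AMINOANTIPYRINE)
    print(f"1.0000 g of 4-aminoantipyrine ~ {n:.5f} mol")  # ~0.00492 mol, as quoted
```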
11 The diazonium salt was spontaneously added slowly drop wise to a well cooled alkaline solution of coupling agent (ellagic acid 1.4869gm), the mixture was allowed to stand for 1h. The dark colored mixture was neutralized with HCl and the solid precipitate was filtered off and washed several times with (1:1) (methanol: water) mixture then recrystallised from boiling methanol and left to dry (Scheme 1). Solutions A stock standard Palladium solution (100ppm) was prepared by dissolving 0.0353g of PdCl 2 , the volume was completed to 200ml with distilled water. A stock standard Nickel solution (100ppm) was prepared by dissolving 0.0806 gm of NiCl 2 .6H 2 O, the volume was completed to 200ml with distilled water. A stock standard Silver solution (100ppm) was prepared by dissolving 0.0314gm of AgNO 3 , the volume was completed to 200ml with distilled water. Solution of azodye reagent (1x10 -3 M) were prepared by dissolving (0.0516gm) and complete the volume to100ml with absolute methanol. Foreign ions solutions (100μg ml -1 ), these solutions were prepared by dissolving an amount of the compound in distilled water completing the volume in a volumetric flask. Procedure To get highest absorbance of complexes formed, it is necessary to get optimum conditions of forming each complex, which include, the selection of the suitable wavelength (λ max ), effect of time, effect of pH values, effect of sequence of additions, stoichiometry and effect of interferences of strange ions. The general procedure was summarized by taking (0.1-3ppm) of Pd(II), Ni(II) and Ag(I) ions with (3ml) of 1x10 -3 M of H 4 L then the volume was completed to 10ml. After 15 min, the absorbance of formed complexes were measured at λ max of 504, 495and492nm respectively. Absorption spectra The electronic spectra of H 4 L and their complexes with ions, Pd(II), Ni(II) and Ag(I). Complexes are at λ max wavelengths were fixed in Figure 1, shows the absorbance λ max at 504, 495 and 492nm respectively. It is clear that according to the red shift that happened in λ max show the stable complexes are formed immediately, π π* transition within the azo group and heterocyclic moieties involving the whole π electronic system of the compound influenced by intermolcular charge transfere character. 12 A great bathochromic shift in the visible region was detected in the complex solution spectra with respect to that of the free ligand. The intensity colored solutions formed from the reaction of the azo ligands with the metal ions is playing an important rule for many UV-Vis spectral studies. This is because of the presence of a sharp and high absorption peak which belongs to the metal complex. The large bathochromic shift of this peak in the visible region with respect to that of the ligand may give a good indication on the complex formation. FTIR spectra The FT-IR data of H 4 L reagent and their complexes are with their probable assignment given in Table 1. The important bands observed in the spectra of the complexes while comparing with reagents, which is helpful in detection of donation sites. The IR spectra of the free ligand have a broad band around 3415.9cm -1 which could be attributed to O-H stretching vibration and it shifted to lower frequency. The stretching frequency of carbonyl group of ligand ν(C=O) 1676.14(s) cm -1 is shifted to a lower frequency in complexes. Similarly the frequency corresponding to ν(N=N) at 1494.8 (s) is shifted to range (1383-1496)cm -1 in complexes. 
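Returning to the stock and reagent solutions described earlier in this section, the underlying arithmetic can be sketched as follows. The molar masses are nominal values from standard tables rather than from the paper, small differences from the quoted weighings may reflect salt purity or hydration, and the molar mass of H4L (C25H16N4O9, about 516 g/mol) is computed here, not quoted.

```python
# Rough arithmetic behind the stock and reagent solutions (nominal molar masses).
M_AG, M_AGNO3 = 107.87, 169.87          # g/mol
M_NI, M_NICL2_6H2O = 58.69, 237.69      # g/mol
MW_H4L = 516.4                           # g/mol for C25H16N4O9 (computed, not quoted)

def salt_mass_for_ppm(ppm: float, volume_l: float, m_salt: float, m_metal: float) -> float:
    """Mass of salt (g) giving `ppm` mg/L of metal in `volume_l` litres."""
    metal_mg = ppm * volume_l
    return metal_mg * m_salt / m_metal / 1000.0

if __name__ == "__main__":
    print(f"AgNO3 for 100 ppm Ag in 200 mL: {salt_mass_for_ppm(100, 0.2, M_AGNO3, M_AG):.4f} g")
    print(f"NiCl2.6H2O for 100 ppm Ni in 200 mL: {salt_mass_for_ppm(100, 0.2, M_NICL2_6H2O, M_NI):.4f} g")
    # Reagent solution: 0.0516 g of H4L made up to 100 mL
    print(f"H4L concentration: {0.0516 / MW_H4L / 0.100 * 1000:.3f} mM")
```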
The shifting in λ max and their intensities of bands led to predict the chelating behavior, i.e., coordination occurs through ring carbonyl oxygen atom with the nitrogen atom of azo group. The spectra of metal complexes also show additional bands in (546-563) (w) cm -1 and (480-418) (w) cm -1 which is probably due to the formation of ν(M-O) and ν(M-N)bond respectively. 13 Effect of pH: The influence of pH value on the absorbance of complexes was studied at different pH ( Figure 2) by using of HCl and NaOH (0.05M) solutions (pH 2-10). It was found that the highest absorbance of Pd-L, Ni-L and Ag-L at pH 3, 8 and 6 respectively, because of the formation of the anionic form of the reagent, which can easily react with ions to form complexes. Effect of reagent volume: Various volume of HL (0.5-4ml)of 1x10 -3 M were added to a fixed amount of Pd(II), Ni(II) and Ag(I) of 10ppm (2ml). It's found enough to develop the color to its full intensity and give a minimum blank value and were considered to be optimum for the volume 3, 3 and 2ml respectively. Figure 3 show the effect of concentration HL with ions. Effect of time: The stability of complexes with time was showed in Figure 4, from the data obtained it was found that the highest absorbance reached at 15min and remains constant up 24hrs with respect to Pd-L, Ni-L and Ag-L respectively. Table 3. Conductivity measurements: The solubility of the complexes in dimethylsulfoxide (DMSO), ethanol and methanol permitted of the molar conductivity of 1x10 -3 M solution at 25°C and by comparison, the electrolytic nature of complexes. The lower values of the molar conductance data listed in Table 3 indicate that the complexes are nonelectrolytes. Accuracy and precision To determine the accuracy and precision of the method, Pd(II), Ni(II) and Ag(I) were determined at three different concentrations. The results shown in Table 4 a satisfactory precision and accuracy with the proposed method. Effect of interferences: The effect of diverse ions as interferences was studied. To test of diverse ions was determined by the general procedure, in the presence of their respective foreign ions. In the experiment, a suitable amount of the standard ions solution, coexisting ion solution and masking agent were added. The metal ions can be determined in thepresence of a 10 fold excess of cation and anion, the results are listed in Table 4. They found that large amounts of NO 3 -1 , Cl -1 , CO 3 -2 and SO 4 -2 have a few effects of ions. It's found that positive ions interfere seriously and can make the absorbance increase or decrease. However, their interferences are masked efficiently by the addition of (0.5-2.0ml) of 0.1M of sodium nitrate for palladium and nickel and (0.5-1.5ml) of 0.1M of oxalic acid for silver. Composition of the complexes (stoichiometery): The empirical formula of the complexes was evaluated by using a continuous variation method (Jop's method) and Mole Ratio Method. 14 It was found that complexes form 1:2 for Pd (II) and Ni(II) but 1:1 for Ag(I) (M:L), (Figure 8-13) show the Composition of the complexes. On the basic of the FTIR, stoichiomstric and conductivity data, the structures of complexes can be suggested as the following (Figure 14 & Figure 15). Thermal analysis: TG/HDSC analyses are very useful methods for investigating the thermal decomposition of solid substances involving simple metal salts as well as for complex compounds. 
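For reference, the continuous-variation (Job's) analysis used above infers the metal-to-ligand ratio from the ligand mole fraction at which the corrected absorbance peaks (x ≈ 0.5 for a 1:1 complex, x ≈ 0.67 for 1:2 M:L). A minimal sketch follows; the absorbance values are invented for illustration and are not the paper's measurements.

```python
def job_ratio(mole_fraction_ligand, absorbance):
    # Mole fraction of ligand at the absorbance maximum; ligand per metal = x / (1 - x).
    x_max = max(zip(absorbance, mole_fraction_ligand))[1]
    return x_max, x_max / (1.0 - x_max)

if __name__ == "__main__":
    x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    a = [0.08, 0.17, 0.25, 0.31, 0.36, 0.40, 0.43, 0.30, 0.15]  # illustrative data, peak near x = 0.7
    x_peak, ratio = job_ratio(x, a)
    print(f"absorbance peaks at x_L = {x_peak:.2f} -> ligand per metal ~ {ratio:.1f} (about 2, i.e. 1:2 M:L)")
```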
15,16 The thermogram follows the decrease in sample weight with the linear increase in heat treatment temperature (10 0 C min -1 ) up to 400 0 C. The decomposition occurs in at least three major detectable steps, each step does not refer in generally to single process, but rather is reflects of two or three overlapping processes and attributed to the ligand alone or companied by chlorine atoms. The aim of the thermal analysis is to obtain information concerning the thermal stability of the investigated complexes as seen in Table 5 and (Figure 16-19), to decide whether water molecules are inside or outside the coordination sphere. All the complexes show three-stage mass loss in their TG/HDSC curves. For H 4 L C 25 H 16 N 4 O 9 from the TG curve, it appears that the sample decomposes in two stages. The first stage decomposition occurs at 207.1 0 C with a mass loss of 2.0% and the second decomposition at 352.8 0 C with a 48% mass loss. For palladium complexes (C 50 H 34 N 8 O 18 Pd), the data obtained support the proposed structure and indicate that Pd(II) complex undergo three step degradation reaction. The first step occurs at maximum peak lying in 85.6 0 C, corresponding to the loss of 2% the weight loss associated with this step agrees quite well with the loss one terminal methyl groups in 4-aminoantipyrine moiety. The second step occurs at T max 150.7 0 C, corresponding to the loss of 3% and it referred to loss of chlorine atom. The third decomposition step occurs at T max 354.6 0 C corresponding to the loss of 25% referred to a single process, but it's reflective of two or three overlapping processes and attributed to loss of the 4-aminoantipyrine and moieties. The residual is in agreement with Pd metal. For Ni complex (C 50 H 34 N 8 NiO 18 ), the data obtained support the proposed structure and indicate that Ni(II) complex undergo three step degradation reaction. The first step occurs at maximum peak lying in 157.7 0 C, corresponding to the loss of 2% the weight loss associated with this step agrees quite well with the loss of one water molecule. The second step occurs at T max 189.2 0 C, corresponding to the loss of 6% and it referred to loss two terminal methyl groups in 4-aminoantipyrine moiety The third decomposition step occurs at T max (358.3 0 C) corresponding to the loss of 35% referred to a single process, but it's reflective of two or three overlapping processes and attributed to loss of the 4-aminoantipyrine and moieties. For Ag complex (C 25 H 16 AgN 5 O 12 ), a mass loss occurred within the temperature 115.6 0 C corresponding to the loss of 1% for one molecule of water the temperature 179.7 0 C a loss of 5.33%, corresponding to a loss of one NO 3 molecule at the end of the thermogram at higher temperature 356.3 0 C. 17 Figure 11 Mole ratio of the Ni-L. Figure 12 Job's method of the Ag-L. Figure 13 Mole ratio of the Ag-L. Figure14 The proposed structural formula of metal complexes (M =Pd(II) and Ni(II)). Figure 15 The proposed structural formula of Ag metal complex. Applications To determination of ions determined according to the spectrophotometric method and atomic absorption in Al-Kufa and Al-Kfel rivers, we use the following methods. 18 For Pd(II): 500ml of water sample was concentrated to about 10ml by heating on a hot plate. 10ml of nitric acid and 5ml of 30% hydrogen peroxide were added in this solution. The mixture was heated on a hot plate and evaporated to near dryness. 
The residue was dissolved in 10 ml of 10% HCl and transferred into a calibrated flask (100 ml). For Ni(II): The water sample (500 ml) was collected in a clean container and evaporated to about 25 ml. Then 5 ml of H 2 O 2 was added and the mixture was evaporated to dryness. The residue was then dissolved in 20 ml of water and filtered to remove insoluble substances. The filtrate was collected in a 100 ml volumetric flask and made up quantitatively to the mark with distilled water. For Ag(I): The water was filtered through filter paper no. 41. The pH of the filtered sample was adjusted to 2 with 1:1 nitric acid solution. The results obtained are given in Table 7.
2019-04-10T13:12:22.647Z
2018-08-31T00:00:00.000
{ "year": 2018, "sha1": "2b1d98388d1e2f0c06a22d3800855e11ae972536", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/JAPLR/JAPLR-07-00275.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "04e3772a6257149e033524632585d0e19e82461a", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
20888483
pes2o/s2orc
v3-fos-license
Purification by Ni2+ affinity chromatography, and functional reconstitution of the transporter for N-acetylglucosamine of Escherichia coli. The N-acetyl-D-glucosamine transporter (IIGlcNAc) of the bacterial phosphotransferase system couples vectorial translocation to phosphorylation of the transported GlcNAc. IIGlcNAc of Escherichia coli containing a carboxyl-terminal affinity tag of six histidines was purified by Ni2+ chelate affinity chromatography. 4 mg of purified protein was obtained from 10 g (wet weight) of cells. Purified IIGlcNAc was reconstituted into phospholipid vesicles by detergent dialysis and freeze/thaw sonication. IIGlcNAc was oriented randomly in the vesicles as inferred from protein phosphorylation studies. Import and subsequent phosphorylation of GlcNAc were measured with proteoliposomes preloaded with enzyme I, histidine-containing phosphocarrier protein, and phosphoenolpyruvate. Uptake and phosphorylation occurred in a 1:1 ratio. Active extrusion of GlcNAc entrapped in vesicles was also measured by the addition of enzyme I, histidine-containing phosphocarrier protein, and phosphoenolpyruvate to the outside of the vesicles. The Km values for vectorial phosphorylation and non-vectorial phosphorylation were 66.6 +/- 8.2 microM and 750 +/- 19.6 microM, respectively. Non-vectorial phosphorylation was faster than vectorial phosphorylation, with kcat 15.8 +/- 0.9 s-1 and 6.2 +/- 0.7 s-1, respectively. Using exactly the same conditions, the purified transporters for mannose (IIABMan, IICMan, IIDMan) and glucose (IICBGlc, IIAGlc) were also reconstituted for comparison. Although the vectorial transport activities of IICBAGlcNAc and IICBGlc.IIAGlc are inhibited by non-vectorial phosphorylation, no such effect was observed with the IIABMan.IICMan.IIDMan complex. This suggests that the molecular mechanisms underlying solute transport and phosphorylation are different for different transporters of the phosphotransferase system. N-Acetylglucosamine (GlcNAc) is the monomer building block of chitin, which forms the organic matrix of the exoskeletons of arthropods (insects, spiders, crabs) and the cell walls of fungi and of zooplankton. Chitin is the second most abundant biopolymer after cellulose. Roseman and colleagues pointed out that the ocean waters would be rapidly depleted of carbon and nitrogen by sedimentation of the insoluble exoskeletons if the carbon and nitrogen could not be returned to the ecosystem by chitinovorous bacteria (Yu et al., 1991). They discern five steps for chitin degradation by bacteria: chemotaxis, adhesion, extracellular degradation, uptake, and catabolism (Bassler et al., 1991a, 1991b; Yu et al., 1991). GlcNAc is taken up and phosphorylated by an inner membrane transport protein of the bacterial phosphotransferase system (PTS). 1 GlcNAc-containing oligosaccharides, in contrast, are transported by a separate, possibly periplasmic-binding-protein-dependent permease. It has been shown that the phosphotransferase proteins of a wide variety of marine Vibrio immunologically cross-react with and functionally complement the PTS proteins of enteric bacteria (Meadow et al., 1988), e.g. the four subunits of the mannose- and glucose-specific permease of Vibrio furnissii have between 28 and 37% sequence identity with the mannose transporter of Escherichia coli. 2 The purification and functional reconstitution of the GlcNAc-specific transporter (IICBA GlcNAc ) of E. coli are the subjects of this report.
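Taking the kinetic constants quoted above at face value, a short Michaelis-Menten comparison (not from the paper) illustrates the point that, although non-vectorial phosphorylation has the higher kcat, vectorial phosphorylation has the higher kcat/Km and is therefore faster at low GlcNAc concentrations.

```python
# Michaelis-Menten comparison of the two activities using the values quoted above.
def mm_rate(s_uM: float, kcat: float, km_uM: float) -> float:
    """Turnover per enzyme (1/s) at substrate concentration s (in uM)."""
    return kcat * s_uM / (km_uM + s_uM)

VECTORIAL = {"kcat": 6.2, "km_uM": 66.6}       # transport-coupled phosphorylation
NON_VECTORIAL = {"kcat": 15.8, "km_uM": 750.0}

if __name__ == "__main__":
    for name, p in [("vectorial", VECTORIAL), ("non-vectorial", NON_VECTORIAL)]:
        eff = p["kcat"] / (p["km_uM"] * 1e-6)  # kcat/Km in M^-1 s^-1
        print(f"{name}: kcat/Km ~ {eff:,.0f} M^-1 s^-1, "
              f"rate at 50 uM GlcNAc ~ {mm_rate(50, **p):.2f} s^-1")
```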
IICBA GlcNAc belongs to the group of carbohydrate transporters known as Enzymes II of the phosphoenolpyruvate (PEP)-dependent phosphotransferase system (Kundig et al., 1964). These proteins act by a mechanism that couples vectorial translocation with phosphorylation of the transported solute. PEP is the phosphoryl donor, and phosphoryl transfer proceeds through phosphoprotein intermediates in the sequence PEP → enzyme I → HPr → IIA → IIB → O-6′ of the hexose. Enzyme I and HPr are cytosolic proteins; IIA and IIB are subunits or domains of the sugar-specific transporters (for comprehensive reviews, see Meadow et al., 1990; Postma et al., 1993; Saier and Reizer, 1994; Lengeler et al., 1994). IICBA GlcNAc has been identified as a 65-kDa membrane protein by Waygood et al. (1984), and the nagE gene was cloned and sequenced (Rogers et al., 1988; Peri and Waygood, 1988; Peri et al., 1991). The transcription control of the nag operon by a specific repressor and the catabolite activator protein (Cap) has been analyzed in E. coli and Klebsiella pneumoniae (Vogler and Lengeler, 1989; Plumbridge, 1990; Plumbridge and Kolb, 1991, 1993). IICBA GlcNAc has 40% sequence identity and is colinear with the IICB Glc and IIA Glc subunits of the glucose transporter. Based on this strong structural similarity, IICBA GlcNAc can be characterized as follows (Weigel et al., 1982a, 1982b; Dörschug et al., 1984; Peri and Waygood, 1988; Hummel et al., 1992; Buhr and Erni, 1993; Meins et al., 1993). The amino-terminal IIC domain of 370 residues spans the membrane eight times, contains the substrate specificity determinants, and provides the interface for dimerization. The IIB and the IIA domains are globular and exposed on the cytosolic face of the membrane. IIB (residues 370-480) mediates phosphoryl transfer between IIA and O-6′ of GlcNAc. In this process IIB becomes transiently phosphorylated on Cys 412. The carboxyl-terminal IIA domain (residues 480-648) mediates phosphoryl transfer between HPr and IIB through a phospho-His 569 intermediate. The IIA and IIB domains of IICBA GlcNAc are linked through an Ala-Pro-rich peptide segment, which is characteristic for structurally independent domains (Erni, 1989; Perham, 1991). The IIB and IIC domains are linked by the invariant sequence LKTPGRED. IICB Glc·IIA Glc and IICBA GlcNAc can functionally complement each other. IIA Glc can complement a truncated GlcNAc transporter (IICB GlcNAc), and IICBA GlcNAc can suppress IIA Glc defects. A chimeric protein between the IIC domain of IICB Glc and the IIA and IIB domains of IICBA GlcNAc was active and glucose-specific (Hummel et al., 1992). Complementation is not restricted to the transport function, but also includes some of the allosteric control functions exerted by the IIA Glc subunit. Under appropriate physiological conditions the IIA Glc-like IIA domain of IICBA GlcNAc inhibits glycerol kinase and maltose uptake, but in contrast to IIA Glc it does not inhibit adenylcyclase (catabolite repression; van der Vlag and Postma, 1995). This functional interaction between two homologous but not identical membrane transporters poses questions with respect to the mechanism of the underlying protein-protein interactions. Does complementation occur between different domains on two homodimeric transporters (e.g., between IIB GlcNAc and IIC Glc in a transient tetrameric intermediate), or is there subunit exchange with concomitant formation of IICB Glc·IICBA GlcNAc heterodimers?
The complexity of native membranes and interferences with other membrane constituents severely limit the elucidation of these aspects. Thus it is vital to reconstitute the purified membrane proteins in artificial phospholipid vesicles to study these functions. The transporters for mannitol and mannose have already been reconstituted into phospholipid vesicles (Elferink et al., 1990; Mao et al., 1995). However, they belong to structurally unrelated families of PTS transporters. In this paper, we describe the purification of IICBA GlcNAc by Ni2+ chelate affinity chromatography and its functional reconstitution in phosphatidylethanolamine vesicles. As a prerequisite for further work and for comparison, IICB Glc and the structurally unrelated mannose transporter were also reconstituted using the same procedure.

Plasmid Construction - Plasmid pJFEH6 (see Fig. 1A) for the controlled expression and purification by Ni2+ chelate affinity chromatography of IICBA GlcNAc was constructed as follows. The 5′-upstream region of nagE in plasmid pTSE21 (Hummel et al., 1992) was trimmed with Bal31, the truncated nagE was cloned into the SmaI site of pJFEH119, and one plasmid (pJFNagE) containing only a 20-nucleotide upstream noncoding sequence was selected as described (Buhr et al., 1994). To append six histidines to the carboxyl terminus of IICBA GlcNAc, the polymerase chain reaction was used. A polymerase chain reaction fragment was amplified with primers GGTGCACTGCAGAAGACGAGATCGTTACT and CCCCCAAGCTTCAGATTAGTGATGGTGATGGTGATGCTTTTTGATTTCATACAGCGG. The polymerase chain reaction product was digested with NdeI and HindIII, and the 270-base pair fragment was ligated with the 7-kilobase vector fragment obtained by digestion of pJFNagE with HindIII and partial NdeI digestion. Standard procedures were used for plasmid purification, restriction analysis, ligation, and transformation (Sambrook et al., 1989).

Expression and Purification of IICBA GlcNAc-6H - E. coli LR2-168ΔG (pJFEH6) was grown in LB broth. When the cells had reached A600 = 1.5, protein expression was induced with 0.1 mM isopropyl-β-D-thiogalactopyranoside and incubation was continued for 3 h. Cells were harvested by centrifugation (16,000 × g; 4°C; 15 min), and the cell pellet was resuspended in buffer A (50 mM Tris-HCl, pH 7.5, 500 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol; 2 ml/g, wet weight, of cells). Cells were broken by two passages through a French pressure cell, cell debris was removed by low speed centrifugation (12,000 × g; 4°C; 10 min), and membranes were collected by high speed centrifugation (300,000 × g; 4°C; 1 h), resuspended in buffer B (10 mM Tris glycine, pH 9.3, 10 mM β-mercaptoethanol), shock frozen in liquid N2, and stored at -80°C. Membrane proteins were solubilized with 2% n-decyl-β-D-maltopyranoside (DM, Sigma). The mixture was sonicated in a bath-type sonicator (Tec 40, Tecsonic, Switzerland) for 1 min, stirred for 15 min at 4°C, and freed of nonsolubilized membranes by centrifugation (300,000 × g; 4°C; 1 h). Without delay, the pH of the extract was adjusted to 8.3 with 1 M acetic acid, mixed with Ni2+-nitrilotriacetic acid-agarose (Qiagen GmbH, Germany; 3 ml of resin for the membrane extract from 1 g, wet weight, of cells; equilibrated with buffer C: 50 mM MOPS, pH 7.5, 300 mM NaCl, 10 mM β-mercaptoethanol, 0.5% DM), and incubated for 1 h at 4°C with gentle shaking. The slurry was transferred to a chromatography column, washed with 5 bed volumes of buffer C, and eluted stepwise with 10, 25, and 100 mM imidazole in buffer C.
IICBA GlcNAc eluted in the 100 mM imidazole step. The active fractions were pooled, supplemented with 10% glycerol (final concentration), shock frozen in liquid N2, and stored at -80°C. Protein concentrations were determined by a modified Lowry assay (Markwell et al., 1978) using bovine serum albumin as standard.

Assay for PEP:Sugar Phosphotransferase Activity - Phosphorylation of GlcNAc was assayed by the ion exchange method of Kundig and Roseman (1971), modified as described (Erni et al., 1982). The reaction mixture contained per 100 µl: 50 mM KPi, pH 7.5, 2.5 mM dithiothreitol, 2.5 mM NaF, 5 mM MgCl2, 1 mM PEP (Sigma), 0.5 mM [14C]GlcNAc (New England Nuclear, 56.3 mCi/mmol, diluted to 1,000 dpm/nmol), 2 µl (20 µg) of a cytoplasmic extract as a source of soluble phosphoryl carrier proteins (enzyme I, HPr), and either the indicated amount of IICBA GlcNAc plus 1 µg of phosphatidylglycerol (Sigma), or IICBA GlcNAc in proteoliposomes prepared as described below. Incubation was for 30 min at 37°C.

Assay for Vectorial Import and Phosphorylation of GlcNAc - Proteoliposomes were loaded with PEP, cytosolic PTS proteins, and 0.5 mM L-[3H]Glc (45,000 dpm/nmol) as aqueous phase marker. A 250-µl aliquot of proteoliposomes was thawed at room temperature, and MgCl2, Mg2+-PEP, enzyme I, and HPr were added to final concentrations of 1 mM, 10 mM, 0.1 µM, and 0.1 µM, respectively. The mixture was sonicated for 45 s in a bath-type sonicator. The sonicated proteoliposomes were freeze-thawed six times (liquid N2/room temperature water bath) and sonicated for 20 s in a bath-type sonicator. The proteoliposomes were separated from free components by gel filtration on Sephacryl S-300 (Pharmacia; 12-ml bed volume, buffer D). The peak liposome-containing fractions were pooled. To measure GlcNAc uptake, the proteoliposomes were diluted 10-fold in buffer D and incubated at 30°C for 15 min with ADP (2 mM, Serva) and pyruvate kinase (25 µg, Fluka) to destroy residual external PEP. The import reaction was started by adding [14C]GlcNAc (New England Nuclear, 5.0 mCi/mmol) to the desired concentration. 50-µl aliquots were withdrawn at the indicated time points, diluted into 1 ml of ice-cold buffer D, and immediately filtered through nitrocellulose membrane filters (ME24, Schleicher & Schuell, 0.2-µm pore size). The filters were washed with 2 × 1 ml of buffer D, and the radioactivity retained on the filters was determined by liquid scintillation counting. To measure the concomitant formation of GlcNAc-6P, 50-µl aliquots were diluted into buffer D containing 0.2% Triton X-100, and GlcNAc-6P was separated from free GlcNAc by anion exchange chromatography (Erni et al., 1982).

Assay for Vectorial Export and Phosphorylation - 250-µl aliquots of proteoliposomes were loaded with 1 mM [14C]GlcNAc and L-[3H]Glc as marker for nonspecific leakage and purified as described above. To 1:10 diluted proteoliposomes from the peak fractions, MgCl2, enzyme I, and HPr were added to final concentrations of 10 mM, 0.1 µM, and 0.1 µM, respectively. The export reaction was started by adding PEP to a final concentration of 5 mM. 50-µl aliquots were withdrawn at different time points, and the radioactivity retained in the vesicles as well as the formation of GlcNAc-6P were measured as described above. To measure competition between vectorial export and non-vectorial phosphorylation, 5 and 10 mM [12C]GlcNAc was added to the external compartment together with enzyme I and HPr.
The reaction was started by the addition of PEP (10 mM) at 37°C, and 50-µl aliquots were analyzed as indicated above.

Protein Phosphorylation Assay - 20 µl of incubation mixture in buffer E (50 mM NaPi, pH 7.5, 10 mM MgCl2, 2.5 mM dithiothreitol, 2.5 mM NaF) contained 18 pmol of purified IICBA GlcNAc reconstituted in phosphatidylethanolamine vesicles, 10 pmol of purified HPr, and 4 pmol of enzyme I. The reaction was started by adding 400 pmol of [32P]PEP (39 cpm/pmol, 80 pmol/µl) at 37°C. After a 5-min incubation, the reaction was stopped with 1 ml of ice-cold buffer E, proteins were adsorbed to cellulose nitrate filters (Sartorius) under suction, the filters were washed twice with 1 ml of buffer E, and the filter-bound radioactivity was determined by liquid scintillation counting (Buhr et al., 1994). Where indicated, proteoliposomes were treated prior to phosphorylation with trypsin, with and without 2% Triton X-100 (25 µg/ml trypsin, room temperature, 15 min; proteolysis was stopped by adding phenylmethanesulfonyl fluoride to 3 mM final concentration). [32P]PEP was prepared as described by Roossien et al. (1983).

Reconstitution of IICB Glc and IIC Man·IID Man - IICB Glc was solubilized and purified by Ni2+ chelate affinity chromatography in the presence of 0.2% DM and IIC Man·IID Man in the presence of 0.02% dodecyl maltopyranoside (Waeber et al., 1993; Huber, 1996). Reconstitution and all subsequent procedures were done exactly as described above for IICBA GlcNAc.

Purification of IICBA GlcNAc by Ni2+ Chelate Affinity Chromatography - Exploratory experiments indicated that IICBA GlcNAc, like the transporters for mannose and glucose, could be purified by isoelectric focusing. However, the yield and purification were not satisfactory. To facilitate purification, nagE was cloned under the control of the inducible Ptac promoter, and the carboxyl terminus of the protein was extended with a hexahistidine tag for purification by metal chelate affinity chromatography (Fig. 1A). 95% of the membrane-bound GlcNAc phosphotransferase activity could be solubilized in 2% DM at pH 9.3. Besides DM, octyl glucoside was also satisfactory, whereas Triton X-100 and pentaethylene glycol octyl ether incompletely solubilized the activity. IICBA GlcNAc could be eluted with 100 mM imidazole in 50 mM MOPS, 300 mM NaCl, pH 7.5. 80% of the phosphotransferase activity present in the membranes was recovered, and the protein was more than 95% pure as judged by polyacrylamide gel electrophoresis (Fig. 1B). Approximately 4 mg of purified IICBA GlcNAc was obtained from 10 g, wet weight, of cells.

Preparation and Characterization of Proteoliposomes - Purified IICBA GlcNAc was reconstituted with E. coli phospholipids by the β-octyl glucoside detergent dialysis method based on the dilution procedure of Racker et al. (1979). The IICB Glc subunit of the glucose transporter and the IIC Man·IID Man complex of the mannose transporter could be reconstituted by the same method. SM-2 Bio-Beads were added to the dialysis buffer to facilitate the removal of detergent and to reduce the dialysis time and the number of buffer changes (Philippot et al., 1988). The preformed, concentrated proteoliposomes were loaded by freeze/thaw sonication with either [14C]GlcNAc, or PEP, enzyme I, and HPr. L-[3H]Glucose was added as a marker to measure the included aqueous space and to control the impermeability of the proteoliposomes. The loaded proteoliposomes were separated from nonincluded components by gel filtration chromatography.
The uranyl acetate-stained vesicles obtained after gel filtration had diameters between 150 and 450 nm (electron micrographs not shown). The internal volume of the vesicles is approximately 0.8 µl/mg phospholipid, as calculated from the amount of L-[3H]Glc coeluting with the proteoliposomes from the gel filtration column. The orientation of membrane proteins in the bilayer depends on the method by which proteoliposomes are formed (Levy et al., 1990, 1992). The orientation of IICBA GlcNAc was determined by phosphorylation, exploiting the fact that the IIA and IIB domains are surface-exposed but covalently linked to the transmembrane IIC domain and can specifically be phosphorylated by HPr and enzyme I. IICBA GlcNAc was phosphorylated before and after detergent solubilization. Twice as much IICBA GlcNAc is phosphorylated in the presence of Triton X-100, indicating that only 50% is accessible in intact proteoliposomes (Table I). When the intact proteoliposomes were first treated with trypsin, phosphorylation was strongly reduced. When the trypsin-treated proteoliposomes were solubilized, about 50% of the total protein could again be phosphorylated (Table I). Proteolytic digestion of IICBA GlcNAc confirmed that only half of the IICBA GlcNAc molecules in intact liposomes were accessible to trypsin, consistent with an approximately equal number of IICBA GlcNAc molecules facing each direction.

Import and Phosphorylation of GlcNAc - A unique direction of transport was imposed on the randomly incorporated IICBA GlcNAc by the addition of PEP and the cytosolic phosphoryl carrier proteins enzyme I and HPr to the inside and of the substrate [14C]GlcNAc to the outside of the vesicles. Vectorial transport and phosphorylation take place in a 1:1 stoichiometry (Fig. 2A). The Km is 66.6 ± 8.2 µM and the turnover number 6.2 ± 0.7 s-1, as calculated from the concentration dependence of GlcNAc transport and taking into account that only 50% of IICBA GlcNAc is correctly oriented (Fig. 2, B-D). From the amount of imported [14C]GlcNAc (Fig. 2A) and the aqueous space in the liposomes, an internal GlcNAc-6P concentration of 2 mM can be calculated. This is a minimum estimate, on the assumption that the total aqueous space is accessible to GlcNAc. GlcNAc-6P is thus concentrated 40 × over the external concentration of 50 µM. When GlcNAc and the cytosolic components are both added to the outside, non-vectorial phosphorylation but no transport is observed. Non-vectorial phosphorylation is concentration-dependent with a Km of 750 ± 19.6 µM and a kcat of 15.8 ± 0.9 s-1 (Fig. 3, A-C). The Km for non-vectorial phosphorylation is 10 times higher than that for vectorial transport. This result will be further discussed below.

Competition between Transport and Non-vectorial Phosphorylation - If proteoliposomes are loaded with [14C]GlcNAc and the cytosolic proteins are added to the outside, inside-out transport can be measured. The imposed orientation allows the system to be manipulated from the "cytosolic face." However, only qualitative changes can be monitored. The system is of limited value for the quantitative determination of kinetic parameters because intravesicular GlcNAc is depleted quickly. The rapid extrusion of GlcNAc is strictly coupled to phosphorylation, and there is no diffusion of GlcNAc through the phospholipid layer and no facilitated diffusion via the unphosphorylated carrier (Fig. 4).
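As a quick consistency check of the internal GlcNAc-6P concentration and roughly 40-fold accumulation quoted above, the arithmetic can be sketched as follows. The uptake value used here is an illustrative assumption chosen to match the stated result (the actual number would be read from Fig. 2A); the vesicle volume and external concentration are the figures given in the text.

```python
# Back-of-the-envelope check of the internal GlcNAc-6P concentration reported above.
# The uptake value is an illustrative assumption consistent with the ~2 mM / ~40-fold
# result stated in the text, not a number taken from the original data.

uptake_nmol_per_mg_lipid = 1.6      # assumed uptake of [14C]GlcNAc (nmol per mg phospholipid)
internal_volume_ul_per_mg = 0.8     # aqueous vesicle volume (µl per mg phospholipid, from L-[3H]Glc)
external_conc_uM = 50.0             # external GlcNAc concentration (µM)

internal_conc_mM = uptake_nmol_per_mg_lipid / internal_volume_ul_per_mg   # nmol/µl equals mM
concentration_factor = (internal_conc_mM * 1000.0) / external_conc_uM     # convert mM to µM first

print(f"internal GlcNAc-6P ≈ {internal_conc_mM:.1f} mM")
print(f"accumulation ≈ {concentration_factor:.0f}-fold over the external 50 µM")
```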
As demonstrated above, IICBA GlcNAc catalyzes two reactions: vectorial transport with concomitant phosphorylation, and non-vectorial phosphorylation. The kcat/Km values of the two reactions are 0.09 and 0.025 s-1 µM-1, respectively. This raises the question of whether the two reactions compete. Therefore the export of encapsulated [14C]GlcNAc was measured in the presence of increasing concentrations of external GlcNAc. External GlcNAc inhibits transport of [14C]GlcNAc in a concentration-dependent manner (Fig. 5A). Concentrations of 5 and 10 mM inhibit the export by 40 and 55%, respectively. Because no competition was observed in previous experiments with the mannose transporter of the bacterial phosphotransferase system (Mao et al., 1995), these experiments were repeated. For comparison they were also done with the glucose transporter IICB Glc·IIA Glc. The two transporters were purified by metal chelate affinity chromatography (Waeber et al., 1993; Huber, 1996), reconstituted by octyl glucoside dialysis, and loaded with [14C]Glc exactly as IICBA GlcNAc. Consistent with the observation of Mao et al. (1995), and in striking contrast to IICBA GlcNAc, the transport activity of IIAB Man·IIC Man·IID Man is not inhibited by external glucose (Fig. 5B). Glucose transport by the IICB Glc·IIA Glc complex, on the other hand, is inhibited exactly as for IICBA GlcNAc (Fig. 5C).

[FIG. 2 caption: Import and phosphorylation of GlcNAc. IICBA GlcNAc-containing proteoliposomes were loaded with enzyme I, HPr, and PEP. Import was started by the addition of [14C]GlcNAc to the outside. Uptake and phosphorylation were measured as described under "Materials and Methods." Panel D, Lineweaver-Burk plot of initial rates of uptake from panels B and C. 100-µl aliquots were taken for counting in panel A; 50 µl in panels B and C.]

Conclusions - The transporter for GlcNAc is the fourth membrane transporter of the bacterial phosphotransferase system that has been purified to homogeneity (Jacobson et al., 1979; Erni et al., 1982; Erni and Zanolari, 1985). IICBA GlcNAc could be purified in a single step by metal chelate affinity chromatography. This method appears generally suitable for the purification of phosphotransferase transporters (Waeber et al., 1993; Huber, 1996), possibly because these proteins have large hydrophilic domains to which the histidine tag can be attached. This can be visualized with the x-ray structure of the IIA domain of the Bacillus subtilis glucose transporter (Liao et al., 1991), to which IIA GlcNAc is homologous. The carboxyl terminus is exposed on the protein surface, at 21 Å from the amino terminus and 23 Å from the active site His 83 (equivalent to His 569). Over these distances, the His tag is unlikely to interfere with docking between the IIA active site and either HPr or the IIB domain. Indeed, the carboxyl-terminal His tag does not affect the phosphotransferase activity of IICBA GlcNAc. Phosphorylation of substrates without transport was first observed in vivo (Nuoffer et al., 1988) and was also found after reconstitution of the transporters for mannitol and mannose (Elferink et al., 1990; Mao et al., 1995). In all cases the Km for transport is lower than that for non-vectorial phosphorylation. This difference (66 µM versus 750 µM for IICBA GlcNAc) suggests that phosphotransferase transporters have different affinities depending on which side of the membrane the binding site is oriented to (assuming that the transporter has only one substrate binding site that can isomerize between inward and outward orientations).
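To make the comparison of the two catalytic efficiencies concrete, the kcat/Km ratios can be recomputed from the kinetic constants quoted above. The short sketch below is only a worked illustration of that ratio; it reproduces values of roughly 0.09 and 0.02 s-1 µM-1 from the stated constants, close to the figures given in the text.

```python
# Sketch comparing catalytic efficiencies (kcat/Km) of vectorial transport-phosphorylation
# and non-vectorial phosphorylation, using the kinetic constants quoted above for IICBA GlcNAc.

params = {
    "vectorial (transport + phosphorylation)": {"kcat_per_s": 6.2,  "Km_uM": 66.6},
    "non-vectorial phosphorylation":           {"kcat_per_s": 15.8, "Km_uM": 750.0},
}

for name, p in params.items():
    efficiency = p["kcat_per_s"] / p["Km_uM"]   # units: s^-1 µM^-1
    print(f"{name}: kcat/Km ≈ {efficiency:.3f} s^-1 µM^-1")

# The vectorial reaction has the higher kcat/Km and thus dominates at micromolar substrate,
# whereas the higher kcat of the non-vectorial reaction lets it compete at millimolar
# concentrations, consistent with the inhibition of export by external GlcNAc described above.
```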
Of particular interest is the difference between IICBA GlcNAc and IICB Glc·IIA Glc on the one hand and IIAB Man·IIC Man·IID Man on the other hand with respect to competition between transport and non-vectorial phosphorylation. To our knowledge this is the first experiment indicating that the two families of PTS transporters not only have different structures but also function differently. The homologous IICBA GlcNAc and IICB Glc·IIA Glc are homodimeric complexes of narrow specificity. The transphosphorylation reaction proceeds through a phosphohistidine and a phosphocysteine intermediate. The IIAB Man·IIC Man·IID Man complex has a broad substrate specificity (including Glc, Man, and GlcNAc), and transphosphorylation proceeds through two phosphohistidine intermediates. It is a heterooligomer composed of two membrane-spanning subunits (stoichiometry 1:2) and a hydrophilic complex of two structurally intertwined polypeptide chains (Nunn et al., 1996). The mechanistic basis of these differences remains to be discovered.

[FIG. 4 caption: Export and phosphorylation of GlcNAc. IICBA GlcNAc-containing proteoliposomes were loaded with [14C]GlcNAc. Enzyme I and HPr were added to the outside. Export was started by the addition of PEP to the outside. Export and phosphorylation were measured as described under "Materials and Methods." Open symbols indicate export of [14C]GlcNAc; closed symbols indicate phosphorylation of exported [14C]GlcNAc. All points are the mean ± S.D. (n = 3). 50-µl aliquots were taken for counting.]

[FIG. 5 caption: Inhibition of vectorial export by non-vectorial phosphorylation. Proteoliposomes containing different enzymes II (IICBA GlcNAc, IICB Glc, or IIC Man·IID Man) were loaded with 1 mM [14C]GlcNAc or [14C]Glc. Enzyme I, HPr, and IIA were added to the outside. The export of the 14C-sugars was measured in the presence of increasing external concentrations (0-10 mM) of the non-labeled sugars. Export was started by the addition of PEP to the outside. Panel A, IICBA GlcNAc (protein/lipid, 37,870:1 mol/mol); 0, 5, and 10 mM GlcNAc. Panel B, IIC Man·IID Man (protein/lipid, 34,500:1 mol/mol); 0.1 µM IIAB Man; 0, 5, and 10 mM Glc. Panel C, IICB Glc (protein/lipid, 27,700:1 mol/mol); 0.1 µM IIA Glc; 0, 5, and 10 mM Glc. All points are the mean ± S.D. (n = 3). 50-µl aliquots were taken for counting.]
2018-04-03T00:41:57.434Z
1996-06-21T00:00:00.000
{ "year": 1996, "sha1": "1a9392e56df4c7b80a0547bce725a7b414fa34ed", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/271/25/14819.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "8a8c831907ab865f63e76ece3b028929e0f0a2e7", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
22755543
pes2o/s2orc
v3-fos-license
Antibiotic prescribing patterns in out-of-hours primary care: A population-based descriptive study

Abstract Objective. To describe the frequency and characteristics of antibiotic prescribing for different types of contacts with the Danish out-of-hours (OOH) primary care service. Design. Population-based observational registry study using routine registry data from the OOH registration system on patient contacts and ATC-coded prescriptions. Setting. The OOH primary care service in the Central Denmark Region. Subjects. All contacts with OOH primary care during a 12-month period (June 2010-May 2011). Main outcome measures. Descriptive analyses of antibiotic prescription proportions stratified for type of antibiotic, patient age and gender, contact type, and weekdays or weekend. Results. Of the 644 777 contacts registered during the study period, 15.0% resulted in an antibiotic prescription; the prescription proportion was 26.1% for clinic consultations, 10.7% for telephone consultations, and 10.9% for home visits. The prescription proportion was higher for weekends (17.6%) than for weekdays (10.6%). The most frequently prescribed antibiotic drugs were beta-lactamase sensitive penicillins (34.9%), antibiotic eye drops (21.2%), and broad-spectrum penicillins (21.0%). Most antibiotic eye drops (73%) were prescribed in a telephone consultation. Most antibiotics were prescribed at 4-6 p.m. on weekdays. For young children, antibacterial eye drops were the most frequently prescribed antibiotics (41.3% of prescriptions in this age group); patients aged 5-17 years and 18-60 years most often received beta-lactamase sensitive penicillins (44.6% and 38.9%, respectively), while patients aged 60+ years most often received broad-spectrum penicillins (32.9% of prescriptions in this age group). Conclusion. Antibiotics were most often prescribed in clinic consultations, but, in absolute terms, many were also prescribed by telephone. The high prescription proportion, particularly for antibacterial eye drops for young infants, suggests room for improvement in rational antibiotic use.

Introduction Increased prescription of antibiotics is a topic of concern and debate in many countries. Antibiotic resistance is a growing problem, which may delay or reduce effective treatment, and high exposure to antibiotics is considered a major cause of antibiotic resistance [1]. Denmark has traditionally had a low use of antibiotics, but its use has increased in the last decade, as in many other European countries [2,3]. Several factors could be related to this change, including changes in medical needs (e.g. population ageing with altered needs), in guideline recommendations (e.g. amoxycillin-clavulanic acid for exacerbations of COPD), introduction of new antibiotics, and changes in the prescribing behaviour of general practitioners (GPs) [3,4]. Furthermore, it has been debated whether the antibiotics form part of effective and efficient treatment of patients. Furthermore, having GPs answering the telephone may also limit follow-up consultations with the patient's own GP, the OOH primary care services, or other health-care providers. To evaluate the existing system and suggest possible future interventions, we need systematically collected information concerning antibiotic prescribing at OOH primary care services. We aimed to describe the frequency and characteristics of antibiotic prescribing (type of antibiotics, patient age and gender, contact type, and time of contact) in one of the Danish OOH primary care services.
Design and setting We conducted a population-based retrospective observational study of all patient contacts with the OOH primary care service from June 2010 to May 2011. The study was performed in the Central Denmark Region (1.2 million citizens). In four of the five Danish regions, GPs provide regional OOH primary care on a rotating basis. The regional OOH primary care service consists of two call centres and 13 consultation centres located throughout the region. Opening hours are from 4 p.m. to 8 a.m. on weekdays, during the entire weekend, and at holiday times. Patients in need of acute care outside office hours must call the OOH primary care service, where GPs answer calls and perform telephone triage to decide the type and level of health care needed. GPs can decide to end the contact on the telephone (i.e. telephone consultation), plan a face-to-face contact with a GP (i.e. clinic consultation or home visit), or refer the patient to the emergency department (ED) or ambulance care. In general, 59% of all contacts are telephone consultations, 28% are clinic consultations, and 13% are home visits [10]. The OOH registration system is fully computerized, and each contact is registered in the patient's medical record through the unique civil registration (CPR) number assigned to every Danish citizen. An electronic copy of the record is subsequently sent to the patient's own GP, and data are transmitted to the regional administration for remuneration purposes as the GPs are paid a fee for service.

Data and variables The electronic OOH registration system provided data on patient age and gender, date and time of contact, type of contact, and detailed prescription information on type, dose, and duration through Anatomic Therapeutic Chemical (ATC) coding [11]. Contact and prescription information was delivered in two separate datasets.

Procedure for coding contacts with antibiotic prescription We selected all antibiotic prescriptions on the basis of the registered ATC codes for antibiotic drugs. We made a list of all prescriptions by using the ATC level-5 codes, and two physician researchers independently defined and selected the antibiotic drugs on the basis of the WHO website coding system [11] and discussed the final list to achieve consensus (see Box 1) on the classification of antibiotics.

Data analysis Descriptive analyses of antibiotic prescription frequencies were performed, including percentage, 95% confidence intervals (CI), and proportion. The prescription proportion (PP) was calculated by dividing the number of antibiotic prescriptions by the number of contacts. First, we presented the proportions of contact type, gender, age group, weekdays or weekend, and type of antibiotic. Second, we stratified for type of contact, patient age group, weekdays or weekend, and type of antibiotic. STATA was used to perform the statistical analyses.

• Denmark has seen increasing antibiotic prescribing in the last decade; out-of-hours primary care services have been suspected of discharging particularly numerous prescriptions.
• Antibiotic prescription proportions were highest for clinic consultations in out-of-hours primary care.
• In absolute terms, a large proportion of antibiotics were prescribed in telephone consultations.
• There was a high prescription proportion for antibacterial eye drops in telephone consultations, particularly for infants.
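Referring to the prescription proportion and 95% confidence intervals described under Data analysis above, a minimal sketch of the calculation is given below. The overall counts are derived from the quoted 15.0% of 644 777 contacts, the per-contact-type counts are made-up illustrative numbers rather than figures from the registry, and the variable names are assumptions; the original analysis was done in STATA, so this Python version is only an illustration of the arithmetic.

```python
# Minimal sketch: prescription proportion (PP = antibiotic prescriptions / contacts)
# with a normal-approximation 95% confidence interval, and a simple stratification
# by contact type. All counts below are illustrative, not registry data.

import math

def prescription_proportion(prescriptions: int, contacts: int):
    """Return PP and a 95% CI using the normal approximation to the binomial."""
    p = prescriptions / contacts
    se = math.sqrt(p * (1 - p) / contacts)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Overall figures quoted in the paper: 644,777 contacts, 15.0% with a prescription
# (the prescription count is back-calculated from the percentage, not an exact figure).
overall_pp, overall_ci = prescription_proportion(96_717, 644_777)
print(f"overall PP = {overall_pp:.1%} (95% CI {overall_ci[0]:.1%}-{overall_ci[1]:.1%})")

# Stratified example with assumed counts per contact type.
strata = {
    "clinic consultation":    (47_000, 180_000),
    "telephone consultation": (40_000, 380_000),
    "home visit":             (9_000,   84_000),
}
for contact_type, (rx, n) in strata.items():
    pp, ci = prescription_proportion(rx, n)
    print(f"{contact_type}: PP = {pp:.1%} (95% CI {ci[0]:.1%}-{ci[1]:.1%})")
```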
Types of antibiotic prescriptions The most frequently prescribed antibiotics were beta-lactamase sensitive penicillins (prescription proportion: 5.2%), antibacterial eye drops (3.2%), and broad-spectrum penicillins (3.2%) (Table II). The type of prescribed antibiotic drug varied slightly with contact type. Antibacterial eye drops were prescribed most often in telephone consultations, followed by penicillins (beta-lactamase sensitive and broad-spectrum types of penicillin). In clinic consultations and home visits, beta-lactamase sensitive and broad-spectrum types of penicillins were most frequently prescribed. In total, 82.6% of all prescriptions for sulphonamides, trimethoprim, and nitrofurantoin and 73.0% of all antibacterial eye drops were prescribed in telephone consultations, whereas 66.5% of all beta-lactamase sensitive penicillins were prescribed in clinic consultations.

Types of antibiotics per age group The most frequently prescribed types of antibiotics varied between age groups: antibacterial eye drops for infants aged 0-4 years (41.3%) and beta-lactamase sensitive penicillins for children aged 5-17 years (44.6%) and adults aged 18-60 years (38.9%) (Table III). Above 60 years, patients most frequently received broad-spectrum penicillins (32.9%).

Types of antibiotics for weekdays and weekends Broad-spectrum penicillins were more frequently prescribed during weekdays than at weekends, for all contact types (Table IV). For beta-lactamase sensitive penicillins the prescription proportion was higher during weekends than during weekdays. Antibacterial eye drops had a similar rate during weekdays and weekends. At weekends antibiotics were more frequently prescribed on Saturdays than on Sundays, most often during the daytime. Most antibiotics were prescribed during weekdays (Monday to Friday) at 4-6 p.m., just after the opening hours of the OOH primary care service, and at 6-8 p.m. (not in Table).

Statement of principal findings In 15% of all contacts with the OOH primary care service, an antibiotic drug was prescribed; antibiotic drugs were prescribed more than twice as often in clinic consultations as in telephone consultations or on home visits. The most frequently prescribed antibiotic drugs were beta-lactamase sensitive penicillins, antibacterial eye drops, and broad-spectrum penicillins; antibacterial eye drops for children aged below five years, beta-lactamase sensitive penicillins for patients aged 5-60 years, and broad-spectrum penicillins for patients aged above 60 years. Nearly half of all antibiotics were prescribed in clinic consultations, but more than 40% of all antibiotics were prescribed in telephone consultations (in particular antibacterial eye drops, sulphonamides, trimethoprim, and nitrofurantoin). The highest prescription proportion was seen just after opening hours on weekdays (at 4-6 p.m.).

Strengths and weaknesses Our study included statistically precise data at detailed ATC level on all patient contacts at a regional OOH primary care service during a 12-month period, thus accounting for seasonal variations. We identified all prescriptions made in a catchment area covering about 1.2 million inhabitants, and the GPs were unaware of the ongoing investigation.
The automatic electronic data collection ensured complete and valid data with limited risk of information or selection bias. The organization of the setting was similar to that in other Danish regions. Our results may, therefore, be generalized to other settings. The routinely collected data did not allow us to review the indications for antibiotic prescriptions or measure guideline adherence.

Findings in relation to other studies Home visits are generally reserved for severely ill patients. However, we did not find a higher proportion of antibiotic prescriptions for home visits, and our data could not identify the reasons behind this finding (such as a lower rate of infections, presence of medication at home, subsequent referral to a hospital, or a low threshold for offering a home visit). Patients often contact the OOH primary care services for health problems related to infections, which may increase the need for antibiotic prescriptions [16]. Several studies report that factors other than strictly medical indications influence decisions as to whether or not to prescribe antibiotics, such as a particular time of day or week, a pending weekend, time constraints, and heightened workload [3,4,17,18]. All these factors are more prominent in OOH primary care. We found an increased propensity for prescription of antibiotics during the first opening hours of the OOH primary care service during weekdays. The high workload during these hours could be a possible explanation, as a higher medical need for antibiotics is unlikely in this particular period. Yet lack of accessibility to one's own GP and convenience for the patient (i.e. direct and immediate access to an OOH GP) may also play a role. During weekends antibiotics were more frequently prescribed; the longer time to the opening hours of one's own GP could influence the prescription behaviour of the GPs on duty. One study found that GPs prescribed antibiotics in a similar way in and out of office hours, but with significant differences between individual GPs [19]. A Dutch study on guideline adherence at OOH primary care services found that prescription of antibiotics had a lower adherence score (69%) than prescription of pain medication and referral of patients, with over-prescription of antibiotics in 42% of cases and under-prescription in 21% of cases [20]. The GPs prescribed nearly half (42%) of all antibiotics on the telephone. Most prescriptions were for antibacterial eye drops and broad-spectrum penicillin, but prescriptions for lower urinary tract infections (LUTIs) were also frequently made by telephone. It is questionable whether all these prescriptions were well indicated from a medical perspective. On the one hand, an uncomplicated case of LUTI can be treated with antibiotics prescribed solely on the basis of history-taking according to national guidelines. On the other hand, conjunctivitis, one of the main indications for prescription of antibacterial eye drops, is mostly of viral origin. Acute conjunctivitis is considered a self-limiting condition, and most patients get better regardless of antibiotic use [14]. Social context seems to play a role as well, because in Denmark child day care institutions often demand ongoing treatment of conjunctivitis for a child to be present. Full-time work participation of Danish women is high, so Danish families have strong incentives for getting children into day care. Thus, future studies could focus on interventions aimed at reducing prescriptions for conjunctivitis (e.g.
use of delayed or wait-and-see prescriptions) [15]. GP telephone triage may also influence the prescription behaviour. GPs may, more often than nurses, decide to prescribe antibiotics in a telephone consultation rather than plan a subsequent face-to-face contact. Many of these patients may also receive an antibiotic prescription in a face-to-face contact. Such a subsequent contact could increase "state of the art" prescribing, but may also decrease patient satisfaction (e.g. face-to-face contact may be less convenient) and put pressure on the consultation shifts. Narrow-spectrum penicillins, such as beta-lactamase sensitive penicillins, were prescribed frequently. Yet a considerable proportion of prescriptions were for broad-spectrum penicillins, as well as for macrolides. We could not find studies presenting comparable figures, but an increase in the use of broad-spectrum antibiotics has been described elsewhere [2,4]. Even though we have no information on the indication for prescribing antibiotics, and thus regarding appropriateness, this proportion seems relevant for future studies and interventions. The general recommendation for prescribing antibiotics is to minimize the use of broad-spectrum drugs as much as possible in order to avoid development of resistance.

Meaning of the study: implications This study suggests that rational prescription of antibiotics in the OOH primary care services may be promoted in Denmark. An earlier Danish study indicated that an intervention in primary care may limit antibiotic prescribing considerably [21]. Our results suggest that areas for targeted intervention could be telephone prescriptions of antibacterial eye drops and penicillin. For instance, GPs could be recommended to advise self-care for conjunctivitis. The high number of antibiotic prescriptions for LUTIs in telephone consultations may be relevant, but it requires high quality of history-taking and clear indications for prescribing. This routine may cause ineffective treatment and lack of proper investigation of serious symptoms of LUTI. Future studies should assess the medical appropriateness of antibiotic prescriptions in OOH primary care and should particularly address diagnosis, indications, and specific patient groups. The relation between access to diagnostic tests in OOH primary care services (e.g. C-reactive protein test and rapid strep test) and antibiotic prescription is also an important area for future studies. GPs currently have limited access to diagnostic tests, and this may affect the use of antibiotic drugs, particularly in the OOH service where patients in need of immediate care are unknown to the GPs and tend to be worried.

Conclusions Antibiotics were most often prescribed in clinic consultations, but, in absolute terms, many were also prescribed by telephone. The prescription proportion seemed high, particularly for antibacterial eye drops for young infants. Also, the frequent prescription of broad-spectrum penicillins and macrolides suggests room for improvement in rational antibiotic use, both in telephone consultations and in clinic consultations. Further studies on the appropriateness of and motives for prescribing antibiotics in out-of-hours primary care are highly relevant to further promote rational prescription.

Ethical approval According to Danish national regulations, research based on registry data on non-identifiable persons does not require approval by a research ethics committee.

Funding No specific funding.
2017-06-16T16:32:50.523Z
2014-12-01T00:00:00.000
{ "year": 2014, "sha1": "9d6f80b1a6b741dc9605ac465fd2d812aac15e38", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.3109/02813432.2014.972067?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "add9649c8639c14d0518ed2751c4e1e5c7c90c2e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
37639905
pes2o/s2orc
v3-fos-license
Two Dimensional Phosphorus Oxides as Energy and Information Materials Two-dimensional (2D) black phosphorus (i.e., phosphorene) has become a rising star in electronics. Recently, 2D phosphorus oxides with higher stability have been synthesized. In this work, we systematically explore the structures and properties of 2D phosphorus oxides on the basis of a global optimization approach and first-principles calculations. We find that the structural features of 2D phosphorus oxides PxOy vary with the oxygen concentration. When the oxygen content is poor, the most stable 2D PxOy can be obtained by adsorbing O atoms on phosphorene. However, when the oxygen concentration becomes rich, stable structures are no longer based on phosphorene and will contain P-O-P motifs. For the 2D P4O4, we find that it has a direct band gap (about 2.24 eV), good optical absorption, and high stability in water, suggesting that it may be a good candidate for photochemical water splitting applications. Interestingly, 2D P2O3 adopts two possible stable ferroelectric structures (P2O3-I and P2O3-II) as the lowest energy configurations within a given layer thickness. The electric polarizations of P2O3-I and P2O3-II are perpendicular and parallel to the lateral plane, respectively. We propose that 2D P2O3 could be used in a novel nanoscale multiple-state memory device. Our work suggests that 2D phosphorus oxides may be excellent functional materials.

In recent years, two-dimensional (2D) materials have attracted much interest for their novel physical properties and potential applications [1]. Graphene is a representative 2D material [1a-d] in which many exotic phenomena were discovered due to the presence of a Dirac-type band dispersion. At the same time, the absence of a band gap unfortunately limits its applications in electronic and optoelectronic devices. Subsequently, monolayer transition metal dichalcogenides (TMDs) were studied extensively due to their intrinsic direct band gaps [1e-g]. Although multiple studies report that electronic and optoelectronic devices based on monolayer TMDs show high on/off ratios and high responsivity, the carrier mobility of these TMD members is still much lower than that of graphene [2]. Most recently, 2D monolayer black phosphorus (BP), i.e., phosphorene, has attracted much attention since it not only has a direct band gap of about 2.0 eV [3] but also a high carrier mobility [4]. Interestingly, other phosphorene-like materials were predicted to display high carrier mobilities and a broad range of band gaps [5]. These unique properties make phosphorene superior to graphene in electronic and optoelectronic applications. In fact, it was demonstrated that a field-effect transistor (FET) made of few-layer black phosphorus presented a high on/off ratio [4a]. Experiments also showed that a photodetector made of multilayer black phosphorus can exhibit high-contrast images both in the visible as well as in the infrared spectral regime [6]. Due to the inherent orthorhombic waved structure of phosphorene, the carrier mobility is highly anisotropic [7], and its electronic properties can be easily tuned by strain, which could be useful in some special applications [8]. Although phosphorene is a promising material with many novel properties, it has a well-known drawback: it degrades easily in oxygen and humid environments [9]. Therefore, it is important to understand the oxidation mechanism in phosphorene.
This is difficult to investigate directly through experiments, since oxidation may lead to amorphous structures that cannot be probed by diffraction, and since most techniques that measure the real space structure lead to further sample degradation. Theoretically, Ziletti et al. found that a single O atom tends to adsorb on one phosphorus atom, forming the dangling-type structure [9a]. This is because the electron lone pair of the sp3 hybridized P atom could interact with an oxygen atom, which needs two more electrons to satisfy the octet rule. Thus, the interaction mechanism between phosphorene and a few oxygen molecules and oxygen atoms is now clear [10]. However, the structure and properties of the recently synthesized layered phosphorus oxide compounds (PxOy) with a high oxygen concentration are not well understood [11]. Although some model structures for 2D phosphorus oxides have been built manually [12], it is possible that these model structures are not necessarily stable. On the other hand, like graphene oxides [13], 2D phosphorus oxides themselves may serve as novel functional materials with fascinating properties. In fact, Lu et al. experimentally demonstrated that phosphorus oxides and suboxides not only have tunable band gaps, but also are much more stable compared with pristine phosphorene under ambient conditions. They further showed that these 2D phosphorus oxides could be used in multicolored display and toxic gas sensor applications [14]. So, a detailed understanding of the structure of 2D phosphorus oxides will not only provide deeper insight into the degradation mechanism, but may also lead to the discovery of new functional 2D materials. In this work, we systematically predict the lowest energy structures of 2D phosphorus oxides with different oxygen concentrations through our newly developed global optimization approach [15]. We find that 2D phosphorus oxides keep the phosphorene framework with dangling P=O motifs when the oxygen concentration is low, while P-O-P motifs appear when the oxygen concentration is higher than one third. In the most stable structures of P4O4 and P2O3 with thicknesses less than 3.2 Å, there are only P-O-P motifs but no dangling P=O motifs. Interestingly, the most stable structure of 2D P4O4, i.e., P4O4-I, has a direct band gap of 2.24 eV, good optical absorption, appropriate band edge positions, and high stability in water, suggesting that it may be used for photocatalytic water splitting. For 2D P2O3, there are two stable ferroelectric structures (namely P2O3-I and P2O3-II) with non-zero out-of-plane and in-plane polarization, respectively, suggesting that 2D P2O3 may be used in novel multiple-state memory devices. Our work shows that 2D phosphorus oxides could act as novel functional materials. We search for the stable structures of 2D phosphorus oxides with different oxygen concentrations through a global optimization approach. For some oxygen concentrations, we find new structures with total energies lower than the previously proposed structures. In the following, we will first discuss the structural features of 2D phosphorus oxides revealed from our particle swarm optimization (PSO) simulations. Then, we will focus on 2D P4O4 and P2O3 to demonstrate that they are promising materials for use in photochemical water splitting and ferroelectric multiple-state memory, respectively. From our PSO simulations, we find that the stable structures of phosphorus oxides display different motifs when the oxygen concentration varies.
This is because the Coulomb interactions between oxygen ions will destabilize the dangling-BP type structure in the high oxygen concentration case. To be more specific, for P8O1 and P6O1, their lowest-energy structures are of the dangling-BP type, as shown in Figure 1. For P4O1, its 2D structure also belongs to the dangling-BP type, which was first proposed by Wang et al. [12a] (see Figure 1). But surprisingly and interestingly, we find that P4O1 has another special one-dimensional (1D) tubular structure (supporting information, Figure S1). This 1D structure has a lower total energy (by about 14 meV/atom) than its 2D configuration. Note that the 1D P4O1 with a 3.3 Å diameter may be the smallest nanotube, since the smallest experimentally observed carbon nanotube has a diameter of 4 Å [16]. For P2O1, it contains not only the dangling P=O motifs but also the P-O-P motifs (hereafter referred to as P2O1-I, see Figure 1). Its total energy is lower than that of the previously proposed P2O1 structure (referred to as P2O1-IV, see supporting information, Figure S4) [12a] by about 174 meV/atom, despite the fact that the P2O1-I structure is thinner than the previous P2O1 structure by about 1.0 Å. It is worth stressing that P2O1-I is unusual and complicated. There exist four-member rings and eight-member rings formed by phosphorus atoms. Since its framework is completely different from that of phosphorene, it is almost impossible to build it manually, indicating that our global optimization approach is powerful. The most stable structures of P4O4 and P2O3 are shown in Figure 2(a) and Figure 5(b), respectively. We can find that both of them belong to the bridge-type structure without the P=O motifs any more. Previously, it was suggested that the band gap of phosphorus oxides increases monotonically with the oxygen concentration [12b, 14]. Here, we find that this is not necessarily true: the band gap of our P2O1 structure is about 2.85 eV, which is larger than that (2.24 eV) of the stable structure of P4O4 (i.e. P4O4-I). This suggests that 2D phosphorus oxides have rich structural and electronic properties. As expected, the average binding energy per phosphorus atom increases with the oxygen concentration (supporting information, Figure S2). The lowest energy structure of P4O4 (i.e., P4O4-I) within the 3.2 Å thickness only contains P-O-P bridge motifs, as shown in Figure 2(a). Six phosphorus atoms and four oxygen atoms form a ring. There are two P-P dimers in a primitive cell. Each oxygen atom is two-fold coordinated, and each phosphorus atom bonds with one phosphorus atom and two oxygen atoms. The phosphorus atom takes the sp3 hybridization with an electron lone pair, while there are two electron lone pairs for each sp3 hybridized oxygen atom. This chemical bonding analysis suggests that P4O4-I is a semiconductor. The calculated band structure indeed confirms the semiconducting nature. As can be seen from Figure 3(a), P4O4-I has a direct band gap of 2.24 eV at the Γ point. The reason why the valence band maximum (VBM) is located at the Γ point is the interaction between the P-P σ bonding states (see supporting information for a detailed analysis). Moreover, we find that the band dispersion near Γ is large, indicating a high mobility. The electron effective mass with the local density approximation (LDA) functional is computed to be 0.58 m0, which is even smaller than that (0.68 m0) of phosphorene. The optical absorption spectrum calculated with the HSE functional is plotted in Figure 4(b).
From it, we can see the absorption starting from 2.24 eV, indicating that the dipole transition between the conduction band minimum (CBM) and the VBM is allowed. This can be understood within group theory. The point group of P4O4-I is C2h, which has four one-dimensional irreducible representations. We find that the wave functions of the CBM and VBM belong to the odd Au and even Bg representations, respectively. This explains the dipole transition between the band edge states. As we mentioned above, the band gap of P4O4-I is even smaller than that of P2O1-I. In order to understand this unusual phenomenon, we plot the wave functions of the VBM and CBM states of P4O4-I in Figure 3(c) and (d). We find that the VBM state is mainly contributed by the P-P σ bonding states. For the CBM state, the wave function is mainly distributed between one phosphorus atom and its next-nearest neighbor in a six-phosphorus ring (namely, the P1 and P2 atoms). The interaction between the P1 and P2 atoms is very important to the low energy of the CBM state, and is thus responsible for the small band gap of P4O4-I. The uncommon long-range P-P interaction is due to the twisting of the P6O4 rings in P4O4-I (for more detailed discussions, see supporting information). Our above results indicate that P4O4-I may be appropriate for photoelectrochemical (PEC) water splitting applications [17]. As we know, in order to be a good photochemical water splitting material, the positions of the band edges should be suitable for solar-driven water splitting [18]. The estimated band edge positions with respect to the vacuum level are shown in Figure 4(a). The hydrogen evolution potential and oxygen evolution potential are marked with black and green dots, respectively. It can be seen that the CBM of P4O4-I is higher than the hydrogen evolution potential by about 0.67 eV and the VBM of P4O4-I is below the oxygen evolution potential by about 0.37 eV. These band edge positions are suitable for PEC water splitting. We investigate the interaction between water and P4O4-I with first-principles molecular dynamics (MD) simulations (supporting information, Figure S3). Our results show that P4O4-I can be stable in water at room temperature. In addition, the adsorption energies of a water molecule on P4O4-I and phosphorene are found to be -256 meV and -283 meV, respectively. This suggests that P4O4-I is more stable than phosphorene in water [19]. Three-dimensional ferroelectric materials have been widely explored both experimentally and theoretically [20]. In two dimensions, the depolarizing field was believed to suppress ferroelectric dipoles perpendicular to the film surface, leading to the disappearance of ferroelectricity below a certain thickness [21]. For this reason, only a few works focused on 2D ferroelectrics in the past [21b, 22]. Recently, 1T monolayer MoS2 and 2D honeycomb binary compounds were predicted to be 2D ferroelectric materials. But their ferroelectric properties still await experimental confirmation. So, discovering new 2D ferroelectric materials will not only help us to understand new physical mechanisms for 2D ferroelectricity, but also accelerate the applications of 2D ferroelectrics. From our PSO simulation, we find two kinds of ferroelectric structures for P2O3 (namely, P2O3-I and P2O3-II), as shown in Figure 5(a) and (b). For P2O3-I, it is the lowest energy structure with thickness less than 1.4 Å. The phosphorus atoms form a honeycomb lattice and each phosphorus atom is surrounded by three oxygen atoms.
Thus, there are only P-O σ bonds, resulting in a large band gap (about 5.79 eV). All phosphorus atoms are located in the top plane and all oxygen atoms are located in the bottom plane. Hence, P2O3-I has a non-zero electric polarization perpendicular to the lateral plane. The presence of a perpendicular ferroelectric polarization in P2O3-I can be explained as follows. Although the pure electrostatic interaction between the P3+ ion and the O2- ion favors a flat structure, the instability in flat-P2O3 is due to the presence of lone-pair electrons of the P3+ ion. As shown in Figure 6, the interaction between the P 3pz and 3s orbitals in flat-P2O3 is forbidden by symmetry. In contrast, the P 3pz orbital can mix with the P 3s orbital to lower the energy level in P2O3-I. Since this level will be occupied by the lone pair electrons, the total energy becomes lower than that of flat-P2O3. This is rather similar to the mechanism of the buckling of the NH3 molecule. The ferroelectric mechanism in P2O3-I is different from that in hexagonal ABC hyperferroelectrics [23], where small effective charges and large dielectric constants play a role. According to Garrity et al. [23], hyperferroelectrics refers to a class of proper ferroelectrics which polarize even when the depolarization field is unscreened. In this sense, P2O3-I is the thinnest hyperferroelectric, originating from a new lone-pair mechanism. P2O3-II is the lowest energy structure of P2O3 with thickness less than 3.2 Å. Its topology is similar to that of P2O3-I. But unlike P2O3-I, the phosphorus atoms are no longer in the same plane, and neither are the oxygen atoms, leading to zero polarization perpendicular to the lateral plane. However, it has a non-zero in-plane polarization due to the collective oxygen displacements along the y axis [see Figure 5(b)]. The total energy of P2O3-II is lower than that of P2O3-I by about 34 meV/atom. However, we note that P2O3-II has a larger thickness than P2O3-I (1.46 Å vs. 0.80 Å). To estimate the switching barrier and the magnitude of the electric polarization, we consider a paraelectric phase of P2O3 [P2O3-III in Figure 5(c)] in which the oxygen plane is sandwiched between the two phosphorus planes. The paraelectric phase P2O3-III is a semiconductor with an LDA band gap of 3.41 eV. The energy barrier between ferroelectric P2O3-I (P2O3-II) and the paraelectric phase is about 75 meV/atom (109 meV/atom). So, it is possible to switch the ferroelectric phase to the paraelectric phase by applying an external electric field. For P2O3-I, the electric polarization can be either along the z or -z direction. For P2O3-II, there are six possible ferroelectric states with in-plane electric polarizations because of the six-fold symmetry of the paraelectric state. Therefore, in total there are eight different ferroelectric states for 2D P2O3. With an electric field, we can change the direction of the electric polarization, and a nanoscale multiple-state (8-state) memory device could possibly be realized (see Figure 7). We note that these 2D phosphorus oxides are thermally and kinetically stable (supporting information, Figure S6). Finally, we propose two possible ways to synthesize these materials. One way to obtain P4O4 and P2O3 is to partially oxidize phosphorene by ozone or oxygen plasma with controlled oxygen concentration. Another way is to reduce phosphorus oxides with high oxygen concentration by the chemical reduction method, which was successfully adopted to obtain partially oxidized graphene [24].
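To make the multiple-state idea concrete, the eight ferroelectric states mentioned above (two out-of-plane states of P2O3-I and six symmetry-equivalent in-plane states of P2O3-II) can be enumerated as in the short sketch below. The polarization magnitudes used here are placeholders rather than calculated values.

```python
# Illustrative enumeration of the eight ferroelectric states discussed above:
# two out-of-plane states for P2O3-I (polarization along ±z) and six in-plane
# states for P2O3-II, 60° apart, reflecting the six-fold symmetry of the
# paraelectric phase. Magnitudes are arbitrary placeholders.

import math

P_OUT = 1.0  # placeholder out-of-plane polarization of P2O3-I (arbitrary units)
P_IN = 1.0   # placeholder in-plane polarization of P2O3-II (arbitrary units)

states = []
# P2O3-I: polarization perpendicular to the layer
for sign in (+1, -1):
    states.append(("P2O3-I", (0.0, 0.0, sign * P_OUT)))
# P2O3-II: six symmetry-equivalent in-plane directions
for k in range(6):
    angle = math.radians(60 * k)
    states.append(("P2O3-II", (P_IN * math.cos(angle), P_IN * math.sin(angle), 0.0)))

for i, (phase, (px, py, pz)) in enumerate(states):
    print(f"state {i}: {phase:8s} P = ({px:+.2f}, {py:+.2f}, {pz:+.2f})")
print(f"total distinct states: {len(states)}")  # -> 8 states, i.e. 3 bits per memory cell
```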
In conclusion, we have systematically predicted the lowest energy structures of 2D P8O1, P6O1, P4O1, P2O1, P4O4 and P2O3 with a global optimization method. We find that the features of the stable structures vary with oxygen concentration. If the oxygen concentration is low, the 2D PxOy structures are based on phosphorene with dangling P=O motifs, as reported by Zeilt et al. With increasing oxygen concentration, the 2D PxOy structures will most likely exhibit P-O-P motifs. We further show that 2D PxOy may have unique properties for functional materials. P4O4-I may be a good candidate for PEC water splitting applications since it has an appropriate band gap, good optical absorption, and high stability in water. Both P2O3-I and P2O3-II are 2D ferroelectrics. In particular, P2O3-I may be the thinnest hyperferroelectric, originating from a new mechanism. We propose that 2D P2O3 could be used in nanoscale multiple-state memory devices in the future.

Experimental Section

In this work, the density functional theory (DFT) method is used for structural relaxation and electronic structure calculations. The ion-electron interaction is treated by the projector augmented-wave (PAW) [25] technique as implemented in the Vienna ab initio simulation package [26]. The exchange-correlation potential is treated by the LDA [27]. For structural relaxation, all the atoms are allowed to relax until the atomic forces are smaller than 0.01 eV/Å. The 2D k-mesh is generated by the Monkhorst-Pack scheme. To avoid the interaction between neighboring layers, the vacuum thickness is chosen to be 12 Å. To obtain more reliable results for electronic and optical properties, the HSE06 functional [28] is adopted, since the LDA underestimates the band gap. The band edge positions are estimated by referencing them to the vacuum level, obtained from the electrostatic potential in the vacuum region of the supercell.

In our implementation, for each 2D structure we first randomly select a layer group [15] instead of a planar space group [29]. The lateral lattice parameters and atomic positions are then randomly generated but confined within the chosen layer group symmetry. Subsequently, local optimization of the atomic coordinates and lateral lattice parameters is performed for each of the initial structures. In the next generation, a certain number of new structures (the best 60% of the population size) are generated by PSO [30]. The other structures are generated randomly, which is critical to increase the structural diversity. When we obtain new 2D structures by the PSO operation or by random generation, we make sure that the thickness of each 2D structure is smaller than the given thickness. Extensive PSO simulations are performed to find the globally stable structures of 2D P8O1, P6O1, P4O1, P2O1, P4O4, and P2O3. We set the population size to 30 and the number of generations to 20. The total number of atoms in the unit cell is less than 16. We consider five different thicknesses (between 1 and 4 Å) for each system. In addition, we repeat each calculation twice in order to make the results reliable. (A schematic sketch of this search loop is given below, after the figure captions.)

Supporting Information

Supporting Information accompanies this paper at http://onlinelibrary.wiley.com. Correspondence and requests for materials should be addressed to H.X.

Author Information

Corresponding Author *E-mail: hxiang@fudan.edu.cn (H. J. Xiang).

Figure and table caption fragments: P4O4-II has a higher energy by 11 meV/atom than P4O4-I. "d" is the thickness of the 2D structures. P2O3-I is ferroelectric with an out-of-plane electric polarization.
b) Top and side views of P2O3-II, which is the lowest energy P2O3 with thickness less than 3.2 Å. P2O3-II is ferroelectric with an in-plane electric polarization. c) Top and side views of the paraelectric P2O3 structure (i.e., P2O3-III). The oxygen atoms are located at inversion centers.
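The toy script below (our sketch, not the authors' code) illustrates only the control flow of the search loop described in the Experimental Section: a population of 30 candidates, 20 generations, the best 60% evolved by a PSO-style move toward the current best candidate, and the remainder regenerated at random to keep structural diversity. A real run would generate symmetry-constrained 2D P-O structures and relax/score them with DFT; here a candidate is just a parameter vector scored with a placeholder function.

```python
import random

POP_SIZE, N_GENERATIONS, KEEP_FRACTION = 30, 20, 0.6
DIM = 6  # stand-in for lattice parameters + atomic coordinates

def random_candidate():
    # placeholder for "random structure within a randomly chosen layer group"
    return [random.uniform(-1.0, 1.0) for _ in range(DIM)]

def energy(x):
    # placeholder surrogate for a DFT total energy after local relaxation
    return sum((xi - 0.3) ** 2 for xi in x)

def pso_move(x, best, step=0.5):
    # move a candidate part of the way toward the global best, with a random kick
    return [xi + step * (bi - xi) + random.gauss(0.0, 0.05)
            for xi, bi in zip(x, best)]

population = [random_candidate() for _ in range(POP_SIZE)]
global_best = min(population, key=energy)

for gen in range(N_GENERATIONS):
    population.sort(key=energy)
    if energy(population[0]) < energy(global_best):
        global_best = population[0]
    n_keep = int(KEEP_FRACTION * POP_SIZE)
    evolved = [pso_move(x, global_best) for x in population[:n_keep]]
    fresh = [random_candidate() for _ in range(POP_SIZE - n_keep)]  # diversity
    population = evolved + fresh

print("best surrogate energy:", round(energy(global_best), 4))
```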
2018-04-03T04:05:43.341Z
2016-05-10T00:00:00.000
{ "year": 2016, "sha1": "189f4aef306853819611749363cb4091da3743d1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "24521b9cb1bc0095388f48c0504b263a83c040fe", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science", "Physics", "Chemistry" ] }
195694665
pes2o/s2orc
v3-fos-license
Evaluating the potential impact and cost-effectiveness of dapivirine vaginal ring pre-exposure prophylaxis for HIV prevention

Background

Expanded HIV prevention options are needed to increase uptake of HIV prevention among women, especially in generalized epidemics. As the dapivirine vaginal ring moves forward through regulatory review and open-label extension studies, the potential public health impact and cost-effectiveness of this new prevention method are not fully known. We used mathematical modeling to explore the impact and cost-effectiveness of the ring in different implementation scenarios alongside scale-up of other HIV prevention interventions. Given the knowledge gaps about key factors influencing the ring's implementation, including potential uptake and delivery costs, we engaged in a stakeholder consultation process to elicit plausible parameter ranges and explored scenarios to identify the possible range of impact, cost, and cost-effectiveness.

Methods and findings

We used the Goals model to simulate scenarios of oral and ring pre-exposure prophylaxis (PrEP) implementation among female sex workers and among other women ≤21 years or >21 years with multiple male partners, in Kenya, South Africa, Uganda, and Zimbabwe. In these scenarios, we varied antiretroviral therapy (ART) coverage, dapivirine ring coverage and ring effectiveness (encompassing efficacy and adherence) by risk group. Following discussions with stakeholders, the maximum level of PrEP coverage (oral and/or ring) considered in each country was equal to modern contraception use minus condom use in the two age groups. We assessed results for 18 years, from 2018 to 2035. In South Africa, for example, the HIV infections averted by PrEP (ring plus oral PrEP) ranged from 310,000 under the highest-impact scenario (including ART held constant at 2017 levels, high ring coverage, and 85% ring effectiveness) to 55,000 under the lowest-impact scenario (including ART reaching the UNAIDS 90-90-90 targets by 2020, low ring coverage, and 30% ring effectiveness). This represented a range of 6.4% to 2.2% of new HIV infections averted. Given our assumptions, the addition of the ring results in 11% to 132% more impact than oral PrEP alone. The cost per HIV infection averted for the ring ranged from US$13,000 to US$121,000.

Conclusions

This analysis offers a wide range of scenarios given the considerable uncertainty over ring uptake, consistency of use, and effectiveness, as well as HIV testing, prevention, and treatment use over the next two decades. This could help inform donors and implementers as they decide where to allocate resources in order to maximize the impact of the dapivirine ring in light of funding and implementation constraints. Better understanding of the cost and potential uptake of the intervention would improve our ability to estimate its cost-effectiveness and assess where it can have the most impact.

Introduction

Despite successes in scaling up antiretroviral therapy (ART) in many countries, which can help reduce HIV transmission when viral load suppression is achieved, there are still as many as 1.8 million new HIV infections annually. In sub-Saharan Africa, women account for more than half of all new adult HIV infections [1]. Access to and use of daily oral pre-exposure prophylaxis (PrEP) has been increasing in sub-Saharan Africa since September 2015, when the World Health Organization (WHO) issued guidance recommending daily oral PrEP for people at "substantial risk" of HIV infection (defined by the WHO as HIV incidence greater than 3% in the absence of daily oral PrEP) [2]. Yet not everyone at substantial risk will be able to effectively use oral PrEP. Prevention gaps will remain, which could be addressed by offering more biomedical prevention choices, similar to how expanded method mix has led to greater uptake in the contraceptive field [3]. Additional biomedical prevention approaches are under development in an effort to provide a suite of HIV prevention options that can be used effectively by a wider subset of the population at risk.

The monthly dapivirine vaginal ring is one of these potential prevention options, which specifically addresses the need for a woman-centered product. Developed by the International Partnership for Microbicides, the dapivirine ring is a flexible silicone ring that provides sustained release of the antiretroviral (ARV) drug dapivirine over one month to reduce the risk of HIV-1 acquisition. Phase III clinical trials (i.e., ASPIRE and The Ring Study) showed that the ring reduced HIV infection by approximately 30 percent overall [4]. Post-hoc exploratory analyses suggested that HIV risk was reduced by up to 75% among a subset of participants who appeared to have better adherence [5]. As these results are influenced by challenges in quantifying adherence due to measurement error of drug levels in the ring, the effectiveness of the ring at near-perfect adherence may be even higher. Further data are anticipated from two recently completed open-label extension (OLE) studies, HOPE and DREAM, in which the ring was provided to previous Phase III trial participants. Preliminary findings from these two studies suggest higher overall levels of adherence, leading to higher effectiveness (estimated to be around 50% using modeling) than was seen during the Phase III trials, similar to the experience with oral PrEP in Phase III versus OLE [6,7,8]. As the ring moves forward through regulatory review, the potential impact and cost-effectiveness of this new HIV prevention product are not fully known. There are significant knowledge gaps with regard to the ring's potential uptake, delivery costs, and effectiveness in real-world settings.
This study used mathematical modeling to explore dapivirine ring impact and cost-effectiveness in different implementation scenarios, alongside scale-up of HIV treatment and other prevention interventions, in order to define the range of potential impact. Given the knowledge gaps related to rollout of the ring, we conducted a stakeholder consultation process to elicit plausible parameter ranges, and we explored different scenarios to bookend impact, cost, and cost-effectiveness. Forty-four stakeholders representing donors, United Nations agencies, product developers, modelers, advocates, and implementers were engaged in consultations individually, in small groups, or a large group in one case (with 37 participants). Stakeholder consultation respondents offered a wide, and often divergent, range of opinions. Stakeholders recommended that modeling include countries where the ring clinical trials had been held and that represented a range of epidemic contexts. Recommendations related to estimates of coverage-meaning the percentage of women in each risk group with oral PrEP pills or the ring in their possession at a given point-ranged from a minimum of 5% to 10%, to a maximum of 20% to 30%, and there were disparate opinions of whether contraceptive prevalence was an appropriate benchmark. Regarding subpopulations, there was a consensus that although younger women generally have higher HIV incidence than older cohorts, younger women may have lower uptake, and this should be included in modeling, although age disaggregation is challenging due to paucity of data. Respondents had trouble quantifying potential adherence to the ring. Efficacy estimates were highly divergent, from 90% to 95% down to 30%. The results of the stakeholder consultation emphasized the degree to which we do not yet know what the actual values of these parameters will be in non-research settings. Model We used Goals, a dynamic, compartmental HIV epidemic model within the Spectrum suite of models (developed by Avenir Health [9]), to simulate scenarios of PrEP implementation in Kenya, South Africa, Uganda, and Zimbabwe among medium-risk women less than or equal to 21 years of age and greater than 21 years, as well as female sex workers (FSWs). Here, we use the term "PrEP" to refer to the general class of products for preventing HIV prior to exposure using ARV drugs, regardless of delivery modality (i.e., oral, ring, injectable, implant). Goals simulates transmission of HIV and its morbidity and mortality consequences for adult populations ages 15 to 49 years, which are structured into five mutually exclusive risk categories: 1) low risk (stable couples, defined as men and women reporting a single sexual partner in the last year), 2) medium risk (men and women with more than one partner in the last year), 3) high risk (FSWs and their male clients), 4) men who have sex with men, and 5) male and female people who inject drugs. Medium-risk women 21 years of age or less and greater than 21 years are embedded within the medium-risk group in Goals. The Goals model was modified to accept correction factors based on the relative population size and relative incidence within each age group by country. These correction factors are derived from the AIDS Impact Module (AIM) in Spectrum, which Goals uses to disaggregate the impact between the two age groups. 
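As an illustration of how such correction factors can be applied (the exact Goals/AIM formulas are not reproduced here, and the inputs below are invented), an aggregate outcome for medium-risk women can be split between the two age groups in proportion to population share weighted by relative incidence:

```python
# Hypothetical sketch only: split a total outcome between the <=21 and >21 groups
# using relative population size and relative incidence, in the spirit of the
# correction factors described above.
def split_by_age(total_infections, pop_share_young, rel_incidence_young,
                 rel_incidence_older=1.0):
    pop_share_older = 1.0 - pop_share_young
    w_young = pop_share_young * rel_incidence_young
    w_older = pop_share_older * rel_incidence_older
    young = total_infections * w_young / (w_young + w_older)
    return young, total_infections - young

# made-up inputs, not country estimates
young, older = split_by_age(total_infections=10_000,
                            pop_share_young=0.25,
                            rel_incidence_young=1.8)
print(f"<=21 years: {young:,.0f}   >21 years: {older:,.0f}")
```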
National AIM files are publicly available in Spectrum and are updated and validated annually by Ministry of Health staff in each country in a process coordinated by the Joint United Nations Programme on HIV/AIDS (UNAIDS) to produce national, regional, and global estimates of HIV burden. In various test scenarios, we varied ART coverage, oral PrEP and dapivirine ring coverage, and ring effectiveness (encompassing efficacy and adherence) by risk group. In the Goals model, we used the average coverage in a year to represent the entire year. Unless otherwise stated, all scenarios used the following assumptions: moderate oral PrEP coverage (level varied by country and risk group, see Table 1), moderate ring coverage (level varied by country and risk group, see Table 1), oral PrEP effectiveness of 71%, ring effectiveness of 50% [6,7], and ART scale-up achieving the 90-90-90 targets by 2020 [10]. Condom use and voluntary medical male circumcision (VMMC) rates were held constant at 2017 levels for all scenarios. See S1 Table for additional details of all scenarios described below. ART scale-up scenarios We evaluated four different ART scenarios: Ring coverage scenarios We developed seven scenarios of ring scale-up involving different combinations of high, moderate, low, and no ring scale-up in the three risk groups, with maximum ring coverage levels The combined PrEP (oral plus ring) reference coverage for the model's medium risk group was set at the level of modern contraception use among sexually active women (minus the level of condom use for the purpose of family planning) in each of the two age groups [11,12,13,14]. We benchmarked PrEP coverage to the use of modern contraception as a proxy for health system access and capacity, as well as the ability and motivation of women in a given context to use a prevention intervention. Our expert consultations advised that this was a reasonable reference in the absence of real-world uptake data on the ring. For FSWs, the combined PrEP reference coverage was set at 60%. The coverage of 30% per method for FSWs was informed by oral PrEP uptake in demonstration projects in South Africa and Benin [15,16]. See Table 1 for coverage values by risk group in each country. High coverage for each method was 60% of the reference coverage level for each risk group/country; moderate coverage was 30%, and low coverage was 15%, such that the total combined PrEP coverage would be 120% of the reference if both methods had high coverage, 60% if both had moderate coverage, and 30% if both had low coverage. It should be noted that because the high and medium risk groups are only a fraction of the total female population, the coverage of PrEP in the overall population is much lower than the indicated value. In addition to the scenarios where all risk groups achieved the same level of coverage, we evaluated three scenarios where coverage varied between risk groups: (5) ring coverage is lower among medium-risk women, (6) ring coverage is lower among younger women (intended to represent a scenario of lower uptake of the ring among younger women, similar to Phase III trial findings indicating lower use and adherence in this group), and (7) no use of the ring among younger women (to represent a case in which the ring is not approved or recommended for use in younger women). See Table 2 for the coverage patterns and Table 3 for an example of the resulting values for South Africa. 
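The coverage arithmetic described above can be summarized in a few lines; the sketch below uses hypothetical inputs rather than the country values in Table 1.

```python
# Sketch of the PrEP coverage benchmarking described in the text:
# reference coverage for medium-risk women = modern contraception use minus
# condom use for family planning; high/moderate/low coverage per method are
# 60%/30%/15% of that reference. Inputs here are placeholders.
LEVELS = {"high": 0.60, "moderate": 0.30, "low": 0.15}  # fraction of reference, per method

def reference_coverage(modern_contraception, condom_for_fp):
    return max(modern_contraception - condom_for_fp, 0.0)

def method_coverage(reference, level):
    return reference * LEVELS[level]

ref_medium_risk = reference_coverage(modern_contraception=0.50, condom_for_fp=0.10)  # hypothetical
ref_fsw = 0.60  # combined PrEP reference coverage assumed for female sex workers

ring = method_coverage(ref_medium_risk, "moderate")
oral = method_coverage(ref_medium_risk, "moderate")
print(f"reference: {ref_medium_risk:.0%}  ring: {ring:.1%}  oral PrEP: {oral:.1%}  "
      f"combined: {ring + oral:.1%}")
# With both methods at 'high', combined coverage is 120% of the reference,
# i.e. some women are covered by one method or the other at different times.
```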
Ring effectiveness scenarios We examined low, moderate, and high ring effectiveness in all risk groups, as well as in different combinations, to represent potential variations in adherence in different risk groups. All ring effectiveness scenarios included moderate PrEP coverage (from Scenario 2 in Table 2). These scenarios were: (11) higher ring adherence among FSWs, (12) lower ring adherence among younger women, and (13) both higher ring adherence among FSWs and lower adherence among younger women. Levels ranged from a low of 30% to a high of 85% as a sensitivity analysis around the standard assumption of 50% effectiveness (see Table 4) [4,5,6,7]. For this analysis we chose to not disaggregate efficacy and adherence (the combination of which constitutes effectiveness), but we assumed that adherence (not efficacy) is the mechanism through which effectiveness would vary. Bookend scenarios In order to represent the extremes of the range of possible impact, the upper bound was set by combining the most optimistic PrEP coverage and effectiveness with the low-ART scenario (i.e., continue current coverage), while the lower bound was set by combining the most pessimistic PrEP coverage and effectiveness with the base ART scenario (i.e., achieving 90-90-90 by 2020). Unit costs The unit costs for oral PrEP in Kenya and South Africa came from costing studies conducted in each country using comparable approaches and cost categories [17,18]. The Kenya costs were translated to Zimbabwe and Uganda, where unit cost data encompassing the same cost categories were not available in the literature, using the gross national income (GNI) per capita to convert values for cost categories that were driven by labor costs. Costs represent a provider perspective. We assumed the service delivery costs for the ring were the same as those for oral PrEP and that only product and laboratory testing costs varied across countries. For the product component of the ring unit cost, we used a cost of US$7 per ring and 12 rings per year. For the ring, we assumed that HIV testing was the only laboratory test required, whereas for oral PrEP, hepatitis B and creatinine testing were also included. Included in Table 5 is the cost of a person-year of ART in each country, as the incremental cost of PrEP for each scenario takes into account ART cost savings from HIV infections averted by PrEP during the period assessed. All of these costs were held constant over time. Results The results for all of the scenarios described above, representing sensitivity analyses around ART scale-up, ring coverage, oral PrEP coverage, and ring effectiveness, are presented in full What is the range of potential impact of PrEP (oral plus ring) in the most extreme scenarios? Evaluating the base and low-ART scenarios, combined with the extremes of ring coverage and effectiveness, allows us to bookend the range of highest and lowest potential impact of PrEP (see Table 6, Scenarios 14 and 15). The maximum potential impact in South Africa is far greater in absolute terms than in any other country included in the analysis, with 310,000new HIV infections averted by ring and oral PrEP from 2018-2035. The maximum potential impact in Kenya, Uganda, and Zimbabwe is 64,000, 53,000, and 50,000 new HIV infections averted, respectively. 
In relative terms, the potential maximum impact ranges from 7.7% (HIV infections averted divided by the total number of HIV infections in the counterfactual scenario) in Kenya to 4.5% in Uganda, based on variations in assumed ring coverage in each country. Comparatively, the minimum potential impact for PrEP is 55,000 new HIV infections averted in South Africa, 10,000 in Kenya, 10,000 in Uganda, and 8,000 in Zimbabwe. In relative terms, the minimum potential impact ranges from 2.3% in Kenya 1.5% in Uganda. How will different scenarios for coverage of the dapivirine ring affect the impact of the HIV prevention program? Comparing each ring coverage scenario with our counterfactual scenario with no ring or oral PrEP gives us an estimate of the contribution of the ring to HIV infections averted at different levels of coverage. All scenarios below (Fig 1) assume moderate coverage of oral PrEP and that countries reach 90-90-90 by 2020. As seen previously, in absolute terms, the largest combined impact of ring plus oral PrEP is in South Africa, ranging from 58,000 infections averted in the low-coverage scenario to 84,000 in the high-coverage scenario. Ring impact roughly corresponds to the level of ring coverage in the overall population. However, the relative impact shows some variability between countries, due to the differences in HIV incidence among risk groups, as well as in levels of ring coverage. Almost identical levels of relative impact are seen in Kenya and South Africa for the low-, moderate-, and high ring coverage scenarios. How will different scenarios for ART scale-up affect PrEP (oral plus ring) impact? Using the standard scenario of moderate oral PrEP coverage, moderate ring coverage, and moderate ring effectiveness (Scenario 2) and varying ART scale-up, we see the strong influence treatment as prevention has on the potential impact of PrEP. How will different scenarios for effectiveness of the dapivirine ring affect impact? HIV infections averted are directly correlated with the level of ring effectiveness used in the scenario, ranging from 30% effectiveness on the low end to 85% effectiveness on the high end. Fig 3 presents the scenarios that vary effectiveness overall and by risk group. These are compared with the moderate-effectiveness scenario (50% effectiveness), which is the same moderate scenario (Scenario 2) seen in the previous three research questions (i.e., What is the range of potential impact of the dapivirine ring plus oral PrEP in the most extreme scenarios? How will different scenarios for coverage of the dapivirine ring affect the impact of the HIV prevention program? How will different scenarios for ART scale-up affect dapivirine ring impact?), where there is moderate coverage of both oral PrEP and the ring and ART scale-up assumes achievement of 90-90-90 by 2020. This illustrates the results of reduced adherence in younger women, the combination of higher adherence among FSWs and lower adherence among younger women, and finally, higher adherence among FSWs. The patterns are similar across countries, with some variation in levels. For example, in South Africa, given the relative incidence rates in the risk groups included in the model, reduced adherence among younger women has a greater impact on new HIV infections than increased adherence among FSWs. This parallels the results seen when increasing ring coverage in FSWs or reducing coverage in younger women. How does the cost-effectiveness of the dapivirine ring vary in these scenarios? 
For the moderate scenario in South Africa, assuming achievement of 90-90-90 by 2020 (Scenario 2a), the cost per infection averted by the ring is US$73,000, compared with US$40,000 per infection averted by oral PrEP with no ring and achievement of 90-90-90 by 2020 (Scenario 1a), US$13,000 for oral PrEP with no ring when continuation of current ART coverage is assumed (Scenario 1b), and US$23,000 for the ring under this same ART scenario (Scenario 2b). S2 Table shows the cost per HIV infection averted by the ring for all countries and scenarios. (A schematic version of this cost-per-infection-averted arithmetic is sketched below.)

Discussion

Just as total contraception use has been shown to increase (and unintended pregnancies decrease) with the number of contraceptive methods made available [3], we anticipate that the addition of the dapivirine ring to the choice of prevention options could result in 11% to 132% more impact for ARV-based prophylaxis than oral PrEP alone. Our analysis showed how the impact of the dapivirine ring depends on the level of ring coverage achieved in each target population and on the effectiveness of the ring, which can be increased by increasing adherence. It also depends on the level of scale-up of other interventions, such as ART and oral PrEP. Not surprisingly, the maximum potential impact in absolute terms is far greater in South Africa than in any other country evaluated, with 310,000 new HIV infections averted from 2018-2035, while the potential impact across the four countries is lowest in Zimbabwe (8,000). In relative terms, the greatest potential impact is seen in Kenya, with 7.7% of new HIV infections averted, while the lowest potential impact is in Uganda (4.5%).

As expected, with higher ring coverage and consistent use, the impact of the ring increases. During implementation of the ring, maximizing adherence would increase both the impact and the cost-effectiveness of the intervention. The scenarios presented here include modest assumptions on uptake, based on feedback from the stakeholder consultation, in the absence of implementation research. Focusing on uptake would increase the overall impact of the intervention, but that would not in itself increase cost-effectiveness. We have assumed here that the ring is less effective than oral PrEP; however, the effectiveness and uptake of both products may vary by subpopulation, based on these groups' different needs and product preferences. A recent user preference study among product-experienced (with placebo injections, tablets, and rings) participants emphasized the importance of offering a variety of method options in order to meet the heterogeneous needs and preferences of different women [19,20]. It is also unknown whether the effectiveness of oral PrEP is actually higher than that of the ring in real-world settings. Because the ring is more user-independent than oral PrEP (requiring only monthly placement of the ring rather than daily use of a pill), there is the potential for the effectiveness of the ring to be greater than that of oral PrEP in real-world application. This may have significant implications for potential future uptake, as prevention efficacy is a strong determinant of method choice [20].

The impact and cost-effectiveness results depend on the likelihood of infection for the users. Any HIV prevention intervention will be more impactful and cost-effective among populations with higher incidence. With high ART coverage, the likelihood of HIV transmission, and therefore of infection, decreases, and so the additional impact of the ring and other primary prevention interventions would also decrease.
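The sketch below illustrates the cost-per-infection-averted calculation referred to above: the cost of delivering the ring, net of ART costs saved for the infections averted, divided by the number of infections averted. All inputs are hypothetical placeholders and do not reproduce the Goals model outputs or the Table 5 unit costs.

```python
# Back-of-envelope incremental cost-effectiveness sketch (hypothetical numbers).
def cost_per_infection_averted(person_years_on_ring, ring_cost_per_year,
                               infections_averted, art_cost_per_year,
                               art_years_saved_per_infection):
    delivery_cost = person_years_on_ring * ring_cost_per_year
    art_savings = infections_averted * art_cost_per_year * art_years_saved_per_infection
    return (delivery_cost - art_savings) / infections_averted

cpia = cost_per_infection_averted(
    person_years_on_ring=500_000,        # hypothetical
    ring_cost_per_year=200.0,            # hypothetical (product + delivery + testing)
    infections_averted=5_000,            # hypothetical
    art_cost_per_year=250.0,             # hypothetical
    art_years_saved_per_infection=10.0,  # hypothetical
)
print(f"cost per HIV infection averted: US${cpia:,.0f}")
```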
For example, if South Africa were to achieve 90-90-90 by 2020, incidence would decline by more than half from 2015 to 2020. This would be a welcome scenario, as lowering HIV infections is the overall goal, but it also would mean that every prevention intervention will be twice as expensive per infection averted, if evaluated as a single intervention. In a scenario in which the current pace of ART scale-up remains steady or slows (Scenarios b-d), as UNAIDS is anticipating [21], the ring's potential impact would be higher, as would that of other primary HIV prevention interventions. The numbers for cost per HIV infection averted that are presented in our findings may seem high. Readers should keep in mind that the scenarios presented here assume not only scale-up of oral PrEP, but also in most cases scale-up of ART to the 90-90-90 targets. Because both of these interventions serve to decrease HIV incidence in the population, the incremental impact of yet another prevention intervention will be smaller than if the ring had been introduced at a time prior to massive scale-up of ART and in the absence of oral PrEP. At this point in the HIV epidemic response, additional interventions are still needed to fill gaps left by scaleup of ART and other prevention approaches, but the incremental cost-effectiveness of these gap-filling interventions, especially those that require delivery on an ongoing basis, such as any form of PrEP, will necessarily be high. Moreover, if we compare the cost per HIV infection averted by ART to that of primary prevention, we need to consider the fact that ART for the index case reduces the risk to all partners. Primary prevention, on the other hand, must overcome the probability of meeting an infected partner first. These results are more robust in illustrating relative cost-effectiveness for different possible scenarios within countries than in cross-country comparison. As few country-specific costing studies of oral PrEP have been conducted, and none have been conducted for the ring, to date, the unit costs for these interventions, and the resulting cost-effectiveness findings, should be interpreted with caution. Unit costs may vary by implementation model, and costs of both oral PrEP and the ring may come down with improved implementation and efficiencies of scale. Additionally, the cost per person-year of treatment was held constant over time. If the actual cost of treatment were to decrease relative to the cost of ring delivery, then the ring would become less cost-effective. Conversely, if the cost to deliver the ring in any country were lower than the cost used in this analysis, then the ring would become more cost-effective. The differences between countries are affected not just by the cost of the intervention and of ART, but also by the HIV incidence in the population that is being provided with the intervention. It is worth calling attention to the fact that this analysis did not include any scenarios in which low-risk women used the dapivirine ring, meaning that the uptake assumptions, even in the highest coverage scenarios, are quite conservative. 
While PrEP is generally discussed as being needed for high risk populations [2], lower risk women in generalized epidemic situations may also wish to protect themselves from HIV, and some may find the ring particularly attractive due to its discretion, ease of use, and lack of side effects compared with oral PrEP, particularly if it is formulated in combination with contraception to protect against unwanted pregnancy [20]. Uptake among lower risk women in generalized epidemics could increase the impact of the ring above the levels projected in this analysis. With the paucity of data to inform every parameter used in the modeling, the limitations to this analysis are significant. While we set our ranges based on early study results and expert consultation, uptake, cost, and effectiveness of both the ring and oral PrEP when fully implemented could be very different. In addition, while the ring and oral PrEP were provided to FSWs and medium risk women in our analysis, it should be noted that the medium risk group in the model is a poor proxy for women at elevated risk for HIV, as this group is not well characterized in the real world. Little is known about how to identify women at elevated risk in implementation settings, how big this subpopulation is, and the degree to which their risk is increased. The levels of impact and cost-effectiveness in this modeling exercise should be interpreted with these caveats in mind. Conclusion Given the persistently high rates of HIV infection among women despite the scale-up of ART and VMMC, and the importance of product choice for effective use, new HIV prevention methods for women are needed. Depending on a number of factors explored in this paper, the dapivirine vaginal ring may provide additional impact on control of the HIV epidemic. Greater understanding of the real-world cost and potential uptake of the intervention would improve our ability to estimate its possible impact and cost-effectiveness. However, ultimately, the purpose of the ring is to increase uptake of HIV prevention to prevent HIV acquisition, not necessarily to maximize cost-effectiveness. Because the ring is a new delivery modality, there are many unknowns. Implementation research and demonstration and pilot projects are needed to improve our understanding of the ring's potential impact and to devise strategies to maximize it. This modeling exercise offers a wide range of scenarios that incorporate the considerable uncertainty about ring uptake, consistency of use, and effectiveness, as well as oral PrEP and HIV treatment use over the next two decades. Even amid this uncertainty, however, it is clear that for the ring to have the greatest impact, implementers and donors should invest in maximizing uptake and adherence to the ring among women who are in need of HIV prevention and who are unlikely to consistently use other primary prevention interventions. Supporting information S1
2019-06-28T13:21:57.465Z
2019-06-26T00:00:00.000
{ "year": 2019, "sha1": "5a2447e0a6658c448c61ce8089e29d4af86b4ee4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0218710", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a2447e0a6658c448c61ce8089e29d4af86b4ee4", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
257191448
pes2o/s2orc
v3-fos-license
A new catfish species of the Trichomycterus hasemani group ( Siluriformes : Trichomycteridae ) , from the Branco river basin , northern Brazil Trichomycterus wapixana is described from the Branco river basin, Roraima State, northern Brazil. It belongs to the T. hasemani group, composed of T. hasemani, T. johnsoni and T. anhanga and defined by the presence of a single wide cranial fontanel delimited by the frontal and supraoccipital, absence of the pores i1 and i3, absence of branchiostegal rays on the posterior ceratohyal and the by the presence of a large and distally expanded process on the palatine. It differs from the other species of that assemblage by having a unique combination of character states, including number of vertebrae, relative position of anal fin, relative position of pelvic and dorsal fin, presence of pelvic fin and pelvic girdle, number of dorsal and ventral procurrent rays in the caudal fin, anal-fin rays, pectoral-fin rays, branchiostegal rays, pleural ribs, morphology of palatine, presence of parasphenoid and relative position of urogenital pore. Introduction Trichomycteridae is a family of catfishes comprising 278 valid species (EschmEyEr & Fong, 2014) distributed from Costa Rica to Patagonia, in both cis-and trans-Andean drainages (dE Pinna, 1998).Trichomycterinae is the only one of the eight recognized subfamilies which monophyly has not been supported in phylogenetic studies (Baskin, 1973;dE Pinna 1989;costa & Bockmann, 1993).Tricho mycterus ValEnciEnnEs is the most species-rich genus of the family, comprising over 140 species (FErnandEz & Vari, 2009;katz et al., 2013).Its extensive geographical range, high number of described species and lack of synapomorphies make Trichomycterus a huge taxonomic problem within the Trichomycteridae (BarBosa & costa, 2003).This condition is illustrated by the description of Ituglanis costa & Bockmann 1993, in which this new genus was described based on nine species that were pre-viously placed in Trichomycterus (costa & Bockmann, 1993). Despite arratia (1990) and datoVo & Bockmann (2010) tried to establish derived character states for the Trichomycterinae, these works did not include Tricho mycterus hasemani (EigEnmann, 1914) and T. johnsoni (FowlEr, 1932).According to de Pinna (1989), T. hase mani and T. johnsoni are each other closest relatives and T. hasemani is related to the Tridentinae due to their expanded cranial fontanel, considering all these taxa derived from a single miniaturization event.Recently, dutra et al. (2012) described T. anhanga dutra, wosiacki & dE Pinna 2012 as being closely related to T. hasemani and T. johnsoni, naming the "T.hasemani group" for this species assemblage.The "T. hasemani group" is monophyletic and possibly related to non-Trichomycterinae taxa (dE Pinna, 1989(dE Pinna, , dutra et al., 2012)).The geographical distribution of this group contrasts with the distribution of other species placed in Trichomycterus, by occurring in lowlands of the Amazon rainforest and Pantanal, instead of being endemic to mountain river drainages of southeastern and southern Brazil (BarBosa & costa, 2010), Andes (arratia, 1998) and those draining the Guyana Shield (EigEnmann, 1909;EigEnmann, 1912;lasso & ProVEnzano, 2002).The new species herein described was collected in the Branco river basin. Material and Methods Measurements follow dutra et al. 
(2012) with the addition of pre-pelvic length (from the middle of the pelvicfin base to the snout tip).Measurements are presented as percentages of standard length (SL), except for subunits of the head, which are presented as percentages of head length (HL).Counts, following BarBosa & costa (2003), were made only in cleared and stained specimens (c&s) prepared following taylor & Van dykE (1985).Scale bars = 1 mm.Nomenclature for the latero-sensory system is according to arratia & huaquin (1995).Specimens were euthanized submerging them in a buffered solution of Ethyl 3-aminobenzoate methanesulfonate (MS-222) at a concentration of 250mg/l, for a period of 10 min, following the guidelines of the Journal of the American Veterinary Medical Association (AVMA Guidelines), and European Commission DGXI consensus for fish euthanasia.Material is deposited in the ichthyological collection of the Instituto de Biologia, Universidade Federal do Rio de Janeiro, Rio de Janeiro (UFRJ), Field Museum of Natural History (FMNH) and in the Academy of Natural Sciences of Philadelphia (ANSP).The method for species delimitation follows the Population Aggregation Analysis (daVis & nixon, 1992), in which one or more populations are recognized as a species by a unique combination of character states.Holotype.UFRJ 10251,14.0mm SL; Brazil: Estado de Roraima: Município de Bonfim: flooded areas in the Tacutu river drainage, tributary of the Branco river drainage, Amazonas river basin, 03° 24' 07"N 59°56'23"W, altitude about 110 m; collected by E. Henschel, F. P. Ottoni, P. Bragança; 10 September 2012. Diagnosis. T. wapixana is distinguished from all other species of the T. hasemani group by the presence of 34 to 36 vertebrae (vs.32 in T. hasemani and T. johnsoni, 29 to 32 in T. anhanga); the origin of the anal fin in a vertical through the base of the 20 th , 21 st or 22 nd vertebra (vs.18 th in T. hasemani, 17 th in T. johnsoni and 16 th in T. anhanga).It is distinguished from T. hasemani and T. johnsoni by having the origin of the pelvic fin in a vertical between the base of 15 th and 17 th vertebrae (vs.14 th in T. hasemani and 13 th in T. johnsoni)and by the presence of a dark spot on the middle of the lower lip (vs.absence).Trichomy cterus wapixana differs from T. anhanga by the presence of pelvic fins and girdle (vs.absence); the presence of 10 to 11 dorsal procurrent rays in the caudal fin (vs.6 to 8); the presence of 9 to 12 ventral procurrent rays in the caudal fin (vs.6 to 7); the presence of seven (ii +5 or iii + 4) anal fin rays (vs.ii + 4); the presence of five (i + 4 or ii + 3) pectoral fin rays (vs.i + 2); the origin of the dorsal fin at vertical through the base of the 20 th , 21 st or 22 nd vertebra (vs.16 th or 17 th ); the presence of six branchiostegal rays (vs.four or five); the presence of two pairs of pleural ribs on first two vertebrae posterior to Weberian Complex (vs.single pair); the presence of a series of dark brown spots in the lateral midline of the body (vs.absence); the broad palatine (Fig. 4) (vs.narrow, comma-shaped palatine) (dutra et al., 2012; fig. 2a) and by the presence of the parasphenoid (vs.absence).It differs further from T. johnsoni by the origin of the urogenital pore in a vertical between the base of the 17 th and 19 th vertebrae (vs.15 th ). 
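The Population Aggregation Analysis logic used for species delimitation above can be expressed compactly: a population is diagnosable as a species if its full combination of character states is not shared by any other taxon in the group. The character codings below are simplified stand-ins loosely based on the diagnosis, not a complete or authoritative matrix (the lower-lip spot of T. anhanga is left uncoded because it is not given in the diagnosis).

```python
# Characters (simplified): vertebral count, vertebra at anal-fin origin,
# pelvic fin present, dark spot on the lower lip.
matrix = {
    "T. wapixana": ("34-36", "20-22", True,  True),
    "T. hasemani": ("32",    "18",    True,  False),
    "T. johnsoni": ("32",    "17",    True,  False),
    "T. anhanga":  ("29-32", "16",    False, None),  # lip spot not coded here
}

def is_diagnosable(taxon, matrix):
    """True if no other taxon shares the full combination of character states."""
    return all(matrix[taxon] != states
               for other, states in matrix.items() if other != taxon)

for taxon in matrix:
    print(f"{taxon:12s} diagnosable: {is_diagnosable(taxon, matrix)}")
```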
Description.Morphometric data for holotype and paratypes given in Table I.Body elongate, subcylindrical on anterior portion, gradually compressed until caudal peduncle.Dorsal profile slightly convex between snout and pectoral-fin origin, straight from that point to caudal peduncle.Ventral profile straight between tip of the snout and insertion of the pectoral fin, gently convex from that point to pelvic-fin origin and straight to end of caudal peduncle.Greatest body depth in vertical immediately in front of pelvic-fin origin.Dorsal and anal fins approximately triangular.Dorsal-fin origin in vertical through base of 20 th , 21 st or 22 nd vertebra.Anal-fin origin in vertical through base of 20 th , 21 st or 22 nd vertebra.Pelvic-fin origin in vertical through base of 15 th , 16 th or 17 th vertebra.Pectoral fin about triangular.First pectoral-fin ray terminating in long filament, about 30 -40% pectoral-fin length.Pelvic fin not covering urogenital pore, bases separated by interspace; insertion in vertical through base of 15 th , 16 th or 17 th vertebra.Caudal fin truncate.Dorsal-fin A B rays 7 -8 (iii + 4, ii + 5 or iiii + 4, iii + 5) ; anal-fin rays 7 (iii + 4, ii +5); pectoral-fin rays 5 (ii + 3, i + 4); pelvic-fin rays 4 (ii + 2); caudal-fin principal rays 12 (ii + 8 + ii, i + 9 + ii), dorsal procurrent rays 10 to 11, ventral procurrent rays 9 to 12. Total vertebrae 34 to 36; pleural ribs on first two vertebrae posterior to Weberian Complex.Head trapezoidal in dorsal view.Mouth subterminal.Teeth conical.Tip of nasal barbel reaching posterior tip of interopercular patch of odontodes.Tip of maxillary barbel reaching middle of interopercular patch of odontodes.Tip of rictal barbel reaching posterior tip of interopercular patch of odontodes.Six branchiostegal rays.Odontodes conical.Interopercular odontodes 6 to 11, opercular odontodes 9 to 14. Lateral line with two pores, LL1 and LL2.Cephalic portion of latero-sensory canal system restricted to s3, i11 and a praeopercular pore, S4. Colouration in preserved specimens (Fig. 1a and b).Ground colour cream.Head with dark brown spot extending from anterior surface of eye to anterior margin of upper lip.Dorsal region of neurocranium with light brown spot.Dark brown spot on basis of opercular patch of odontodes and on basis of interopercular patch of odontodes.Ventral surface of head with small dark spots on upper lip and dark spot on lower lip.Nasal, maxillary and rictal barbels with small dark spots concentrated at basis.Dorsal and lateral surfaces of body with chromatophores distributed between head and caudal peduncle.Ventral surface of body with chromatophores concentrated on head and between pelvic and anal fins.Lateral surface with series of dark brown spots. Dorsal and anal fin hyaline with dark brown blotch on basis of rays.Caudal fin with small dark brown cromatophores.Caudal fin with light brown bar on basis of rays and with small dark brown spot on middle of basis of rays.Pectoral fin hyaline with small dark spot on basis of filament.Pelvic fin hyaline with dark small spot on basis of fin. Etymology.The wapixana is a native tribe from the Serra da Lua region in western Roraima state, northern Brazil.These natives have occupied this region for, at least, three centuries.The villages of Cantá and Bonfim, where Trichomicterus wapixana was mainly collected, are situated in this area.The Wapixana tribe was oppressed by other native tribes and by colonisers, fact that contributed for a huge cultural loss. 
Distribution.Known from the Branco and Negro river drainages, Amazonas river basin (Fig. 5). Discussion The "Trichomycterus hasemani group" has been considered as an incertae sedis group among trichomycterids.The first approach concerning the relationships between T. hasemani and T. johnsoni was made by dE Pinna (1989), where these species were proposed to constitute a clade more related to the Tridentinae than to the Trichomycterinae.This hypothesis was based on the presence of an expanded cranial fontanel delimited by the frontal and supraoccipital in the two taxa.Later, dE Pinna (1998), in a cladogram with information combined from several authors, listed the following synapomorphies to support the clade comprising the Vandelliinae, Stegophilinae and Tridentinae: 1 -absence of lacrimal; 2 -lateral opening of Weberian capsule at the end of a neck like constriction; 3 -jaw teeth S-shaped; 4 -mesethmoid cornu with ventral process.The T. hasemani group shares with this clade only the second condition, which makes the hypothesis of sister group relationships between the T. hasemani group and the Tridentinae doubtful. dutra et al. (2012) established the following character states to diagnose the T. hasemani group: 1 -a wide fontanel that occupies most of the skull roof and is delimited by the frontal and supraoccipital (Fig. 3); 2 -absence of the anterior portion of the infraorbital canal (pores i1 and i3); 3 -first pectoral-fin ray much longer than other rays; 4 -absence of branchiostegal rays on the posterior ceratohyal; and 5 -a large posterior process of the palatine, partly forked and expanded distally (Fig. 4).The species herein described shares all these five char-acter states with T. hasemani and T. johnsoni.However, the palatine condition (Fig. 4) is quite different from the other species of the group in T. anhanga, since the partly forked posterior process of the palatine is entirely absent in this species (dutra et al., 2012; fig. 2a). costa & Bockmann (1994) stated that a sister-group relationship between Sarcoglanidinae and Glanapteryginae would be supported by the reduced dorsal portion of the quadrate, the presence of a large anteriorly directed process in the hyomandibula, vomer rudimentary and miniaturization.Trichomycterus wapixana, T. hasemani and T. johnsoni share with these two subfamilies the last three character states.These authors also established a clade comprising Sarcoglanidinae, Glanapteryginae, Tridentinae, Vandelliinae and Stegophilinae, the so-called TSVSG clade, on the basis of an interopercular patch of odontodes reduced in length and with 15 or fewer odontodes, a reduction in number of the pleural ribs (1 -8), a short posterior portion of the parasphenoid, its tip not reaching the basioccipital or extending only to its anterior part, and metapterygoid reduced or absent.Trichomy cterus wapixana, T. hasemani and T. johnsoni also have all these character states.Since these character states are unique within the Trichomycteridae, they indicate that possibly the T. hasemani group is closely related to the TSVSG clade.However, the position of the group within the family cannot be exactly established, which depends on an inclusive phylogenetic analysis, which is beyond the scope of this study.These species thus remain allocated in Trichomycterus until a proper phylogenetic analysis is developed. 
Other miniature Amazonian species-groups also have a problematic taxonomy.In the case of the Scoloplax BailEy & Baskin, 1976, 110 years have passed since the specimens collection in Thayer Expedition in 1866 and the formal description of the genus.This genus was originally described as a member of the Loricariidae, being placed in a new monotypic family (Scoloplacidae) by isBrückEr (1980).According to schaEFEr et al. (1989) the small size of these catfishes was the main problem to describe them, along with other factors such as lack of collecting effort, absence of any distinctive anatomy and their cryptic habitat.These authors also stated that the reductive characters derived from the miniaturization process represent synapomorphies at some phylogenetic level in Scoloplax.The same occurred with species of the genus Fluviphylax whitlEy, 1965: specimens were collected in Thayer Expedition but remained undescribed until 1955(myErs, 1955)).Previously, garman (1895) improperly identified these fishes as undetermined species of Rivulus PoEy, 1860(costa & lE Bail, 1999).These authors also stated that the miniaturization in Flu viphylax is parsimoniously interpreted as a single event. wEitzman & Vari (1988) published a study focusing on the miniaturization in the several groups of the Neotropical region and the consequences of this process to the phylogeny of those taxa.In the characiform genus Nan nostomus günthEr, 1872, the small size of three miniaturized species is probably derived, but attempts to eluci-date the relationships within the genus were not feasible, and in the catfish genus Corydoras lacEPèdE, 1803, a very specious genus, the relationships within four minitaturized species are unresolved, but possibly involving a single miniaturization event (wEitzman & Vari, 1988).The species herein described belongs to a miniaturized group, but the relationships within the T. hasemani group cannot be established since its phylogenetic relationships are still unknown.
2018-11-01T04:23:25.101Z
2016-09-28T00:00:00.000
{ "year": 2016, "sha1": "c419226ee2e1d9286ea4cd4b0aa050ad025e58c4", "oa_license": "CCBY", "oa_url": "https://vertebrate-zoology.arphahub.com/article/31537/download/pdf/", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "c419226ee2e1d9286ea4cd4b0aa050ad025e58c4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
33917015
pes2o/s2orc
v3-fos-license
Birth Outcomes after the Fukushima Daiichi Nuclear Power Plant Disaster: A Long-Term Retrospective Study Changes in population birth outcomes, including increases in low birthweight or preterm births, have been documented after natural and manmade disasters. However, information is limited following the 2011 Fukushima Daiichi Nuclear Power Plant Disaster. In this study, we assessed whether there were long-term changes in birth outcomes post-disaster, compared to pre-disaster data, and whether residential area and food purchasing patterns, as proxy measurements of evacuation and radiation-related anxiety, were associated with post-disaster birth outcomes. Maternal and perinatal data were retrospectively collected for all live singleton births at a public hospital, located 23 km from the power plant, from 2008 to 2015. Proportions of low birthweight (<2500 g at birth) and preterm births (<37 weeks gestation at birth) were compared pre- and post-disaster, and regression models were conducted to assess for associations between these outcomes and evacuation and food avoidance. A total of 1101 live singleton births were included. There were no increased proportions of low birthweight or preterm births in any year after the disaster (merged post-disaster risk ratio of low birthweight birth: 0.98, 95% confidence interval (CI): 0.64–1.51; and preterm birth: 0.68, 95% CI: 0.38–1.21). No significant associations between birth outcomes and residential area or food purchasing patterns were identified, after adjustment for covariates. In conclusion, no changes in birth outcomes were found in this institution-based investigation after the Fukushima disaster. Further research is needed on the pathways that may exacerbate or reduce disaster effects on maternal and perinatal health. Introduction Perinatal health is a crucial aspect of public health. Birth outcomes, measureable as birthweight and gestational age at birth, have been found to predict both short-and long-term health trajectories of the neonate; a topic which has gained significant attention in the field of epidemiology [1][2][3][4]. A host of factors can influence birth outcomes, including maternal medical history, environmental and behavioural factors, and sociodemographic factors such as ethnicity, age and marital status [5,6]. Evidence has additionally grown for associations between external stressors and adverse birth outcomes [7], opening new discussions on the broad determinants of health at birth [8]. Disasters are one type of external stressor associated with changes in population birth outcomes. Increases in low birthweight births have been documented after natural disasters, chemical disasters and terrorism, with or without concurrent increases in preterm births [9][10][11][12][13][14][15][16][17][18][19][20]. Post-disaster changes in birth outcomes are thought to be mediated through maternal exposure to environmental toxins or disaster-related psychosocial stress, yet an area that remains unclear is the timeframe between exposure and outcome [9,21]. Most studies to date have focused on women who were pregnant at the time of a disaster, yet there is also evidence for increased prevalence of low birthweight and preterm births lasting for years post-disaster [10,22]. 
This finding is consistent with growing evidence that stressful life events prior to conception can increase the risk of delivering a low birthweight neonate later in life [23,24], and highlights the possibility that disasters may not only have immediate health impacts, but additionally lead to long-term changes in the birth outcomes of affected populations. On 11 March 2011, Northeast Japan was struck by an earthquake and tsunami, triggering a nuclear disaster at Fukushima Daiichi Nuclear Power Plant. In contrast to the relatively immediate destruction of the earthquake and tsunami, the nuclear disaster has led to long-term societal changes such as prolonged evacuation [25], and changing health risks have been observed in affected populations [26][27][28]. Issues of stigma, radiation-related anxiety, and increasing mental health problems have additionally been identified [25]. However, there is limited understanding of maternal and perinatal health following this disaster. There has been mixed evidence for immediate post-disaster changes in birth outcomes; some previous studies have found no increased proportions of low birthweight or preterm births in areas affected by the earthquake and tsunami [29], or in areas additionally affected by the nuclear disaster [30][31][32], in the first year post-disaster. However, there have also been findings of a slight increase in low birthweight neonates to women that had been 28-36 weeks pregnant at the time of the earthquake, in earthquake-and tsunami-affected areas [29], and increased proportions of low birthweight and preterm birth to women who conceived within six months post-disaster in areas affected by the Fukushima nuclear disaster [33]. However, despite the continuing social, psychological and physical health impacts of the nuclear disaster [25], there have been very few assessments to date of the long-term trends in birth outcomes in affected areas [32]; an area that calls for further elucidation. In this institution-based study we evaluated data from Minamisoma Municipal General Hospital (MMGH), located 23 km from the plant (Figure 1), to investigate long-term trends in maternal and neonatal characteristics following the 2011 nuclear disaster. The objective of the present study is two-fold: to assess if there were long-term changes in birth outcomes following the Fukushima nuclear disaster, in comparison with pre-disaster baseline data, and to evaluate whether residential address at the time of the disaster, as a proxy measurement of evacuation, and avoidance of Fukushima food products, as a proxy measurement of radiation-related anxiety, were associated with any post-disaster birth outcomes. Setting and Participants All live singleton births at MMGH from April 2008 to 2015 were included in this study. On 12 March 2011, the 20 km radius of the power plant was classified as a restricted zone under mandatory evacuation orders by the central government of Japan [34]. On 15 March, those in the 20-30 km radius were ordered to shelter indoors, and on 25 March, this zone was classified as a voluntary evacuation area [25]. The mandatory evacuation zone has been under frequent updates, as described in previous reports [26], expanding to the northwest mountainous areas heavily affected by radioactive fallout. MMGH falls just outside of the mandatory evacuation zone, and serves areas significantly affected by the nuclear disaster. 
Although the Obstetrics and Gynecology Department of the hospital closed immediately after the disaster, it re-opened in April 2012. The time-period of this study therefore captures three years of post-disaster data (2012-2015) on births in this hospital, compared to the same length of period pre-disaster (2008-2011), defined in the format of Japanese fiscal years which begin in April and end in March of the following year. Data Collection Data on maternal characteristics and birth outcomes were extracted from the hospital's patient records. Maternal characteristics included age at time of the birth, number of previous deliveries (parity) and residential address. Birth data of birthweight, gestational age at birth, mode of delivery (vaginal delivery or caesarean section), date of delivery, and sex of the neonate were collected. Main Outcome Measures The following two outcome measures were considered as primary birth outcomes of interest in this study: low birthweight (<2500 g at birth), and preterm birth (<37 weeks of gestation at birth). Residential Area at the Time of the Disaster The difficulty of defining maternal exposure to a disaster has been previously noted [9]. The present study uses residential area at the time of the disaster [9] to estimate evacuation experience as an indicator of maternal disaster exposure.
For all mothers who delivered in the post-disaster period, data on residential address at the time of the disaster was extracted from hospital records. Mothers were then classified into four groups based on evacuation orders: (1) inside the mandatory evacuation zone; (2) inside the indoor sheltering/voluntary evacuation zone; (3) inside areas of Soso District under no evacuation orders; and (4) outside Soso District. For participants in the pre-disaster period, residential address at the time of delivery was classified in the same manner. The geographical scope of the evacuation orders during the study period is displayed in Figure 1. Soso District is specified in these classifications as it was significantly affected by the disasters, with areas falling in the mandatory, voluntary, and non-evacuation zones, and significant evacuation even in non-ordered areas; it is reported by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) that at the time point of 15 March 2011, 102,882 Fukushima Prefecture residents had evacuated within or outside the Prefecture, and voluntary evacuees were estimated to account for 39.1% (40,256 people), leaving both voluntary evacuation zones and non-ordered areas [35]. Post-Disaster Food Purchasing Patterns This study additionally used data on maternal food purchasing patterns. Since March 2012, all pregnant women under the care of MMGH have been encouraged to undergo free Whole Body Counter (WBC) internal radiation contamination screenings at MMGH during their pregnancies. An exposure risk assessment questionnaire is given at the time of WBC screenings, which contains items on methods of acquiring the following six food products: rice, meat, fish, produce, mushrooms and milk. Each item has four choices: (a) purchasing food products at a supermarket based on origin (Fukushima vs. non-Fukushima); (b) purchasing food products at a supermarket without consideration of origin; (c) using local farms or consuming home grown foods with radiation inspection; or (d) without it. We extracted data on the food purchasing preferences of participants in the post-disaster period, with the hypothesis that avoidance of Fukushima products could be an indicator of radiation-related anxiety. If participants had undergone multiple WBC screenings during their pregnancy, questionnaire data were extracted from the screening closest to the date of delivery. In all WBC screenings of pregnant women from 2012 to 2015, there were no cases of detectable internal radiation contamination (detection limits with a 2-min scan: 210 Bq/body for Caesium-134 and 250 Bq/body for Caesium-137); therefore, internal radiation levels could not be considered as a variable for analysis in this study. Statistical Analyses We conducted two analyses. First, to evaluate any difference in the rates of post-versus pre-disaster birth outcomes, proportions of low birthweight and preterm birth were calculated for each period and compared, expressed as risk ratios (RRs). Second, to examine any associations of these two outcomes (low birthweight and preterm birth) with residential area at the time of the disaster or food purchasing patterns, adjusted for potential covariates, we performed multivariate logistic regression analyses with the post-disaster data. For model building, variables initially entered into the regression models were chosen based on univariate analyses. Additional model selection was performed using backward-stepwise method with p-to-remove of >0.05. 
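As a concrete illustration of the unadjusted comparison described above, the sketch below computes a post- versus pre-disaster risk ratio with a Wald-type 95% confidence interval; the counts are hypothetical placeholders rather than the study data, and the function name is ours. The adjusted analyses (mixed-effects logistic regression with the stepwise selection described next) are not reproduced here.

```python
import math

def risk_ratio(events_post, n_post, events_pre, n_pre, z=1.96):
    """Risk ratio of an adverse outcome (e.g., low birthweight) in the
    post-disaster versus pre-disaster period, with a Wald-type 95% CI
    computed on the log scale."""
    p_post = events_post / n_post
    p_pre = events_pre / n_pre
    rr = p_post / p_pre
    se_log_rr = math.sqrt(1 / events_post - 1 / n_post + 1 / events_pre - 1 / n_pre)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts for illustration only (not the MMGH data)
rr, ci = risk_ratio(events_post=30, n_post=430, events_pre=48, n_pre=671)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```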
Backward-stepwise regression starts with all the candidate variables in the model and removes the least significant variables until all the remaining variables are statistically significant. Basic variables, such as year, maternal age at time of birth, sex of neonate, and the number of previous deliveries, as well as those of main interest in this study (i.e., residential address at the time of the disaster and post-disaster food patterns) were incorporated into the final model regardless of their statistical significance as long as stable models were obtained. The partial F-test was used to verify the entry and removal of variables from the model. Since some participants had more than one delivery at MMGH during the study period, the regression model included a random effect at individual level to control for the fact that the same individual's data were correlated. Ethics Approval Ethics approval for this study was granted by the MMGH Institutional Review Board, reference number 27-21. Participant consent was not found to be necessary, as this was a retrospective analysis of hospital records. All data were anonymised prior to analysis. Maternal and neonatal characteristics by year are described in Table 1. There were no significant differences between years in the proportions of low birthweight or preterm births. The distributions of birthweight and gestational age at birth in pre-and post-disaster periods are displayed in Figure 2. The number of previous deliveries per mother significantly differed between years, with a pre-disaster decrease in mothers with two or more previous deliveries (and increase in mothers with 0 or 1 previous deliveries) in 2009, and a post-disaster increase in first-time mothers peaking in 2014 (p < 0.001). There were significant changes in maternal residential address patterns throughout years (p < 0.001) with post-disaster decreases in deliveries at MMGH by those who had been living outside Soso District or within the mandatory evacuation zone at the time of the disaster, alongside increases in those who had been living in areas under voluntary evacuation orders or areas of Soso District under no evacuation orders. Because there were few mothers aged <19 years old (zero in 2008, four in 2009, three in 2010, zero in 2011, one in 2012, six in 2013 and four in 2014), we were unable to categorize this potentially high-risk group; maternal age at birth was instead categorized as <35 and >35 years, and a significant increase in mothers >35 years of age was observed after the disaster (p < 0.05). Other variables, such as sex of neonate, mode of delivery, and season of delivery were not significantly different by year. Of the 430 mothers included in the post-disaster period, 401 (93.3%) participated in the WBC screenings. Of the 29 study participants that did not undergo WBC screening, six had been living outside Soso District at the time of the disaster, and 19 delivered in the last year of the study period (2014)(2015). Trends in food purchasing choices are outlined in the additional material, and indicate that avoidance of locally produced rice and produce significantly increased as years passed after the disaster (p < 0.05 and p < 0.01, respectively) (Table S1). Table 3 shows results of the regression analysis for post-disaster low birthweight. The final model considered year, sex of neonate, mode of delivery, maternal age, number of prior deliveries, and residential address at the time of the disaster. 
There were no statistically significant associations found with post-disaster low birthweight. This final model for low birthweight was not able to include the variable of post-disaster food purchasing patterns (which were not statistically significant) because of model instability. Sensitivity analyses were performed that constructed three different regression models in which data of 2008, 2009, and 2010 were considered as reference years; similar results were obtained (data not available). Similar results were obtained in the regression analysis for preterm birth (Table S2), in which the final model showed no statistical significance in the relationship between any variables and preterm birth. The results of univariate analyses, showing no statistically significant associations between food purchasing patterns and low birthweight or preterm births, are displayed in Table S3. Discussion This study retrospectively assessed all live singleton births from 2008 to 2014 in a hospital serving areas affected by the 2011 Fukushima Daiichi Nuclear Power Plant Accident, finding no significant long-term changes in the prevalence of low birthweight or preterm births after the disaster, compared to a pre-disaster period. There were additionally no associations between residential address at the time of the disaster or food purchasing patterns, and post-disaster birth outcomes. We did confirm a substantial decrease in the number of births occurring at MMGH after the disaster, with no births in 2011 due to departmental closure, followed by gradually increasing numbers in each post-disaster year. Previous studies on birth outcomes following the Fukushima disaster have produced mixed results, with some studies finding no significant changes in birth outcomes in areas affected by the nuclear disaster [30][31][32], and others finding increased proportions of low birthweight and preterm births [33]; however, most studies to date have only assessed outcomes within the first year of the disaster. The overall inconsistency within results from Fukushima, and between results from Fukushima and other disasters where increases in low birthweight or preterm births have been predominant indicate that the effects of this disaster may differ from those observed in other settings [9][10][11][12][13]. In order to interpret inconsistencies between results of post-disaster studies on birth outcomes, it has been noted that clear assessment of the pathways between disasters and outcomes is crucial [9,21]. Two commonly proposed pathways to post-disaster changes in population birth outcomes are environmental exposures and psychological stress [9,21], and we took particular methodological considerations of these factors in our study, as outlined below. In terms of environmental exposures, nuclear disasters are rare and understudied events that present the danger of exposing populations to radioactive materials. However, impacts of nuclear disasters on population birth outcomes are not well studied, and likely to vary by the scale of each disaster. Studies after the Chernobyl nuclear disaster in 1986 indicate mixed evidence for a small increase in congenital anomalies, yet overall little effect on most pregnancies [9,36,37]. 
Radiation related health risks in Fukushima have been found to be significantly less than those in Chernobyl due to lower exposure doses [38], and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) has predicted no deterministic effects of radiation exposure to the general public in Fukushima [38]. In the present study, none of the post-disaster mothers who underwent WBC screening (n = 401, 93.2%) had detectable levels of internal radiation contamination, and we therefore find it unlikely that radiation exposure would have had any effect on birth outcomes in this study. However, the Fukushima disaster has led to social disruption and public concern for radiation exposure [25] which may pose its own risks to maternal and perinatal health; a previous study after the Chernobyl accident found radiation-related anxiety, not radiation itself, to be associated with earlier births [39]. Psychological stress is a frequently reported pathway from disaster occurrence to changes in population birth outcomes [9,21,40]. We attempted to capture potential effects of stress by categorizing participants based on residential address at the time of the disasters and food purchasing patterns as indicators of evacuation and anxiety about radiation, respectively. Further, following findings on the effects of pre-conception stressful life events on birthweight [23,24] and the long-term increases in adverse birth outcomes seen after the Red River Catastrophic Flood [10] and the 11 September attacks [22], we included three years in the post-disaster period to assess for any long-term changes. Our finding that these indicators were not significantly associated with birth outcomes may suggest that disaster experience in itself does not qualify as a stressful life event with long-term effects on reproductive health. However, it is possible that our approach of measuring food purchasing patterns and residential address at the time of the disaster were unable to accurately capture pathways from disaster exposure to birth outcomes. Food purchasing patterns may be limited as a proxy measurement for anxiety, as they are generally linked with household socioeconomic factors and local food availability, two factors that may be affected by the disaster, yet were not possible to assess in this study. It should also be noted that while 93.2% of the post-disaster participants in this study underwent WBC screening and completed the food questionnaire, out of the 29 that did not complete it, six had been living outside Soso District at the time of the disaster, and 19 had delivered in the last year of the study (2014), meaning that food questionnaire results may be most representative of the population living within Soso District in 2012 and 2013. In terms of evacuation, we should acknowledge that movement after the disaster may have also been influenced by socioeconomic factors, meaning that there may have been socioeconomic differences between those who were able to evacuate and those who remained. Future evaluation of whether birth outcomes were patterned by socioeconomic factors, in both Fukushima and other disaster settings, would be of great benefit to begin filling this evidence gap that was out of the scope of our study. This is not the first time that contradictory results on birth outcomes after disasters have been observed. 
In addition to some inconsistency in findings between different disasters [9,21], there have also been contradictory findings after the same disaster, as seen in the literature on Hurricane Katrina [11,15,17,41]. Recent studies on Hurricane Katrina have suggested that rapid population changes and differing risk profiles of the remaining population may have contributed to null findings, or even apparent reductions in risk of adverse birth outcomes [15,41], highlighting the need for disaster effects on birth outcomes to be considered in relation to potential population changes [15]. In this regard, we must recognize that large population shifts in Fukushima Prefecture have been documented since the 2011 disaster [42], and Minamisoma City in particular has experienced dramatic population loss, from 71,561 to approximately 10,000 within the first month of the disaster [43]. There has been slow population return to the city after the disaster, particularly in adult women [44]. We can speculate that one reason for null results observed in this study may have been the risk profile of the post-disaster population, which may have differed from the pre-disaster population (i.e., those at the highest risk of adverse birth outcomes may have been unable to return to the city after evacuation, as was speculated after Hurricane Katrina [15], and thus would not have been included in this study). The potential for population changes to influence results of birth outcome studies following disasters further underscores the need to understand any socioeconomic shifts in pre-and post-disaster populations, and how post-disaster adverse birth outcomes may relate to underlying population risks, in future studies. Although this study found no changes in birth outcomes after the Fukushima nuclear disaster, our results suggest post-disaster changes in maternal demographics. There were statistically significant increases in the proportions of first-time mothers (p < 0.001), and in the proportions of mothers >35 years of age after the disaster (p < 0.05) ( Table 1). This change is likely to be related to post-disaster population shifts as discussed above [44], with many women of reproductive age leaving to live elsewhere (either temporarily or permanently), and may also reflect a decision to delay childbirth on the part of women who remained in the area, as suggested by increased maternal age in the post-disaster period. Economic instability, community tensions and separation of families are issues that have been observed in post-disaster Fukushima [25,45], and all could have reduced social and economic resources available to women in disaster-affected areas-changes which could be speculated to have impacted fertility decisions being made in the study context. While it is unclear why the proportions of mothers with previous deliveries decreased (and first-time mothers increased) after the disaster, particularly in 2014 (Table 1), it could be hypothesized that anxiety or fear of radiation [46] may have influenced fertility decisions, potentially in different ways between women who already had children and those who did not. We also should consider that there may have been specific mental health impacts of the disaster to mothers; a recent study found high rates of depressive symptoms among mothers who were pregnant in 2010 or 2011 in Fukushima Prefecture [47], with particularly high rates in Soso District compared to other areas of Fukushima [47]. 
Depressive symptoms in women with deliveries around the time of the disaster may be related to the lower proportions of repeat pregnancies observed in the post-disaster period of the present study, as mothers experiencing depressive symptoms may have been less likely to want or try for repeat pregnancies. However, there is still limited information on the drivers of fertility decisions and maternal demographics following the Fukushima disaster, and nuclear disasters in general, and these areas deserve further research beyond our speculations here. Japan is the most rapidly ageing country in the world, and had fertility rates below replacement levels since before the 2011 triple disaster [48]. We could not find any previous research that has discussed the ways in which disaster impacts on fertility patterns and birth outcomes may differ in baseline low fertility settings compared to high fertility settings, and we suggest that additional research in this area may be valuable, particularly as disasters are expected to happen more frequently in the future [49], and will more often hit low fertility countries as they continue to increase. The pathways of disasters effects on birth outcomes are still not conclusively understood, and could be hypothesized to function differently in low fertility vs. high fertility contexts. In this regard, Japan is representative of the global phenomenon of population ageing and declining fertility rates [48], and further assessment of the predictors of post-disaster fertility trends and birth outcomes in this context may be useful. Strengths and Limitations There are limitations to this study. First, we were unable to adjust analyses for maternal risk characteristics for low birthweight such as smoking and alcohol consumption, in addition to socioeconomic characteristics, because of limitations in data availability. Second, we did not have any data from April 2011 through March 2012, due to the closure of the Obstetrics and Gynecology Department in the study institution. Therefore, it was not possible to assess for any changes in birth outcomes in those who were pregnant at the time of the disaster (March 2011). Our lack of immediate post-disaster data may have contributed to the null results, and may not be directly comparable to studies that have assessed immediate post-disaster birth outcomes. The sample size was small for the analyses conducted, and although the data for this study comes from one institution, there are potential differences between our pre-and post-disaster samples that should be acknowledged as they could have caused sampling bias. As presented in the Discussion Section, post-disaster evacuation may have been patterned by socioeconomic factors, which could have influenced the population composition of those remaining in Soso District and thus participating in the post-disaster period of this study. It is further possible that disaster-related psychosocial stress may disproportionately affect women with low-or high-risk pregnancies, or those with lower or higher socioeconomic status, yet we were unable to assess for such characteristics due to limited data availability. For these reasons, we cannot rule out the possibility that sampling bias may have masked any real associations between the disaster and low birthweight or preterm birth. However, alongside these limitations, this study has unique strengths. 
Methodological investigation of evacuation and food purchasing patterns, in addition to a prolonged study period, are points that could be informative to future research. Although efficacy of food purchasing preferences as a measurement tool was limited in this study, we suggest that open discussion of the methodological process undertaken here may be of use to future research in disaster settings; in essence, this is not only a public health study but also an account of the exploratory methods undertaken in a data-constrained post-disaster context. We suggest that there is a great need for in-depth exploration of the pathways to adverse birth outcomes, and the potential for socioeconomic patterning, in future studies. Conclusions The prevalence of low birthweight and preterm births did not significantly change in a hospital affected by the Fukushima Daiichi Nuclear Power Plant Accident, and there were no statistically significant associations between these birth outcomes and evacuation or food purchasing patterns in the post-disaster period. These results are inconsistent with previous findings on associations between disasters and adverse birth outcomes, and call for further research, particularly on the mechanisms by which disaster effects on maternal and perinatal health may be mediated.
A Chalcone-Based Potential Therapeutic Small Molecule That Binds to Subdomain IIA in HSA Precisely Controls the Rotamerization of Trp-214 The principal intent of this work is to explore whether the site-specific binding of a newly synthesized quinoline-appended anthracenyl chalcone, (E)-3-(anthracen-10-yl)-1-(6,8-dibromo-2-methylquinolin-3-yl)prop-2-en-1-one (ADMQ), with an extracellular protein of the human circulatory system, human serum albumin (HSA), can control the rotamerization of its sole tryptophan residue, Trp-214. With this aim, we have systematically studied the binding affinity, interactions, and localization pattern of the title compound inside the specific binding domain of the transport protein and any conformation alteration caused therein. Multiple spectroscopic experiments substantiated by an in silico molecular modeling exercise provide evidence for the binding of the guest ADMQ in the hydrophobic domain of HSA, which is primarily constituted by residues Trp-214, Arg-218, Arg-222, Asp-451, and Tyr-452. Rotationally restricted ADMQ prefers to reside in Sudlow site I (subdomain IIA) of HSA in close proximity (2.45 nm) to the intrinsic fluorophore Trp-214 and is interestingly found to control its vital rotamerization process. The driving force for this rotational interconversion is predominantly found to be governed by the direct interaction of ADMQ with Trp-214. However, the role of induced conformational perturbation in the biomacromolecule itself upon ADMQ adoption cannot be ruled out completely, as indicated by circular dichroism, 3D fluorescence, root-mean-square deviation, root-mean-square fluctuation, and secondary structure element observations. The comprehensive spectroscopic study outlined herein provides important information on the biophysical interaction of a chalcone-based potential therapeutic candidate with a carrier protein, exemplifying its utility in having a regulatory effect on the microconformations of Trp-214. INTRODUCTION Chalcones and their heterocyclic derivatives 1,2 are recognized for their plethora of promising pharmacological activities due to their DNA-targeting properties, 3,4 proven gametocytocidal activity in the life cycle of Plasmodium falciparum, 5 superior vasodilative properties in treating hypertension, 6,7 and usefulness for the treatment of inflammatory diseases. 8,9 Investigations of the binding mechanisms of several bioactive compounds with the transport protein human serum albumin (HSA) present in the human circulatory system is an important course to understand the behavior of drugs in terms of therapeutics as well as toxicity. 10−15 Simultaneously it is also significant to apprehend their transport and disposition under physiological conditions. 16 In this context, HSA, a wellstructured major circulatory system protein, binds such ligands and also acts as an important determinant for the study of the pharmacokinetics of the drug molecules. 17 The fluorescence of HSA is predominantly governed by the sole Trp residue along with minor contributions from few tyrosines. Out of these, Trp is the most investigated fluorescent probe for protein conformation and dynamics. 18,19 The presence of the highly sensitive indole side chain in Trp makes it more capable for exploring the conformational ensembles of proteins in solution. 18 The fluorescence emission profile of Trp is sensitive to slight transitions in the protein quaternary structure, ligand binding, or any other subunit association. 
The governing fluorophore Trp displays strong emission in the 310−350 nm region (when excited at its absorption maximum) and also exhibits multiexponential decay with fluorescence lifetimes ranging from 50 ps to 8 ns. 20 This multiexponential fluorescence decay may be attributed to its various ground-state conformers. Upon interaction with a ligand, the populations of Trp rotamers display variation in their percentage distribution. More precisely, the Trp rotamerization is governed by the orientation of the aromatic heterocyclic indole group, whose changes in geometry are so fast that they can only be captured on a shorter time scale. The different deactivation pathways and rates of depopulation of the excited state account for the existence of different decay times for each rotamer. 21 The decay profile of the host HSA is sensitive to the ligand, which may have a direct influence on the microenvironment of the sole Trp residue. 18 The extent of interaction between the Trp residue and ligand/quencher is governed by their distance and relative orientation. Hence, the different rotamer populations of Trp in HSA provide key information on the binding mode of the ligand. 21−24 In this context, the interaction of quinolineappended chalcone derivatives (which are highly bioactive and characterized by rich photophysical properties) with human plasma proteins may be of great interest as it can alter the pharmacodynamics and pharmacokinetics (including distribution, metabolism, and elimination) of chalcone-based pharmaceutically relevant molecules. In this article, we demonstrate the site-specific interaction and mode of binding of a newly synthesized multitherapeutic quinoline-appended chalcone derivative, (E)-3-(anthracen-10yl)-1-(6,8-dibromo-2-methylquinolin-3-yl)prop-2-en-1-one (ADMQ), with the model plasma protein HSA and its consequences using optical spectroscopic methodologies. The spectroscopic results obtained herein are vindicated by computational molecular modeling studies. Besides, the understanding of the Trp emissive characteristics by virtue of the population distribution of its rotamers evokes interest as to whether the vital Trp rotamerization process is controlled by induced conformational changes or direct interaction of the binding ligand with Trp. RESULTS AND DISCUSSION 2.1. Insights into the Interaction and Binding Affinity of ADMQ with HSA. Each ligand/drug present in the circulatory system binds with serum albumin (SA) to a different extent, which affects the absorption, distribution, metabolism, and excretion (ADME) properties as well as the toxicity of the drug. Thus, it is well-recognized that SA present in human blood binds with the ligand/drug reversibly, which essentially affects the pharmacodynamics and pharmacokinetics of the drug/ligand. 25 In the case of hydrophobic drugs, protein binding enhances the solubility of the ligand in human plasma. Thus, it is necessary to get acquainted with protein− drug binding studies in order to determine the biochemical consequences of the synthesized/designed drug inside the human body, which may further facilitate formulation efficacy. 25−28 Hence, it is necessary to understand the exact mode and affinity of binding of ADMQ with HSA. The intrinsic fluorescence of plasma protein can be monitored using steady-state fluorescence spectroscopy. 18 This emission is attributed to the sole Trp-214 (in subdomain IIA) and 18 tyrosine residues. 
29,30 For the ADMQ−HSA system, a wavelength of 295 nm was chosen for excitation to selectively monitor the emission of Trp-214 (exclusively at 340 nm) at three different temperatures: 298, 303, and 308 K. When the ligand ADMQ was gradually introduced into the aqueous HSA solution at pH 7.4, a concomitant diminution in the protein fluorescence along with a 4 nm hypsochromic shift (340 to 336 nm) in the emission maximum was observed (Figure 1A). This progressive diminution of the tryptophan fluorescence by ADMQ suggests its binding interaction with the host protein molecule, whereas the increased hydrophobicity around the fluorophore is manifested by the hypsochromic shift. This fluorescence quenching phenomenon is quantified by the Stern−Volmer (SV) equation: 31−33 F_0/F = 1 + K_SV[Q] = 1 + k_q·τ_0·[Q] (1), where F and F_0 are the steady-state fluorescence intensities in the presence and absence of the quencher ADMQ, respectively, K_SV is the SV quenching constant, k_q is the bimolecular quenching rate constant, [Q] is the total concentration of quencher (ADMQ), and τ_0 is the average lifetime of the protein in the absence of ADMQ and here is considered to be 1 × 10^−8 s. The SV plots thus obtained (Figure 1B) exhibit an upward deviation at higher ADMQ concentrations (for all three temperatures), as reported earlier for many other drug−protein interactions. 16,33 The calculated K_SV is on the order of 10^4 M^−1, and k_q is on the order of 10^12 M^−1 s^−1 (much higher than the value for a diffusion-controlled process, where k_q = 2.0 × 10^10 M^−1 s^−1), indicating the interaction of ADMQ with HSA via formation of a ground-state complex. This was further confirmed by the destabilization of the complex formed upon increasing the temperature to 308 K (Table 1). Overall, the biphasic nature of the SV plot in the steady-state fluorescence quenching experiment suggested that a dual static and dynamic quenching mechanism is operative, which was further confirmed by time-resolved fluorescence (TRF) studies (discussed later). Determination of the Binding Affinity. The quenching data were further analyzed in order to determine the binding affinity of ADMQ with HSA using the following modified SV (MSV) equation: 17 F_0/(F_0 − F) = 1/(f_a·K_a·[Q]) + 1/f_a (2), where f_a is the fraction of the initial fluorescence accessible to the quencher and K_a is the association (affinity) constant. The affinity constants K_a were obtained with correlation coefficients of 0.99 (Figure 1C) and are listed in Table 1. The magnitude of K_a was found to be on the order of 10^4, which implies moderate binding affinity of ADMQ toward HSA. TRF measurements were also employed to ascertain the simultaneous involvement of dynamic quenching (if any, along with static quenching) in the binding process. In this context, the mean fluorescence lifetime (τ_m) was obtained using the following expression: 34 τ_m = Σ_i α_i·τ_i, where α_i is the relative percentage contribution of the decay component possessing lifetime τ_i. From the TRF results for HSA (shown in Table 4), it can be seen that τ_m significantly decreases from 4.51 to 0.74 ns upon gradual addition of ADMQ to the protein solution. The SV plot obtained from the mean fluorescence lifetimes, which is depicted in Figure 2, follows linearity only up to [ADMQ] = 40 μM. Intriguingly, an upward curvature is seen at higher concentrations of ADMQ, which indicates possibilities such as (i) the presence of a sphere of action around the fluorophore 35 and (ii) the operation of simultaneous static and dynamic quenching. 18 These possibilities may come into play individually or simultaneously.
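To make the quenching analysis above concrete, the following sketch fits the Stern−Volmer and modified Stern−Volmer relations to illustrative intensity data; the numbers are placeholders of the right order of magnitude, not the measured ADMQ−HSA values, and only the low-concentration linear regime discussed in the text is considered.

```python
import numpy as np

# Illustrative (made-up) quenching data: quencher concentration in M and
# Trp-214 fluorescence intensities; not the measured ADMQ-HSA values.
Q  = np.array([5, 10, 15, 20, 25, 30]) * 1e-6          # [ADMQ], M
F0 = 1000.0                                             # intensity without quencher
F  = np.array([810, 690, 600, 530, 470, 430], float)    # intensity with quencher

# Stern-Volmer: F0/F = 1 + Ksv[Q] -> slope of F0/F vs [Q] gives Ksv
ksv, sv_intercept = np.polyfit(Q, F0 / F, 1)
kq = ksv / 1e-8                                         # tau0 = 1e-8 s, as in the text

# Modified Stern-Volmer (Lehrer): F0/(F0 - F) = 1/(fa*Ka*[Q]) + 1/fa
slope, icpt = np.polyfit(1.0 / Q, F0 / (F0 - F), 1)
fa = 1.0 / icpt                                         # fraction of accessible fluorescence
Ka = icpt / slope                                       # association constant, M^-1

print(f"Ksv = {ksv:.3g} M^-1, kq = {kq:.3g} M^-1 s^-1, fa = {fa:.2f}, Ka = {Ka:.3g} M^-1")
```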
Thermodynamics of the Binding Interaction. Generally, four kinds of non-covalent interactions govern the binding of a ligand to a protein, and the thermodynamic parameters of binding help to identify which of them predominate. These include van der Waals interactions, H-bond formation, electrostatic interactions, and hydrophobic interactions. 36 Ross and Subramanian 37 reported that the signs and magnitudes of the thermodynamic parameters so obtained indicate the predominant forces involved in binding of the ligand to the protein. The enthalpy (ΔH) and entropy (ΔS) of formation of the ADMQ−HSA complex were determined on the basis of the van't Hoff equation (eq 3): ln K = −ΔH/(RT) + ΔS/R, where K is in the present case the association constant K_a, R is the gas constant, and T is the absolute temperature. The data are pooled in Table 2. The slope of the van't Hoff plot is related to ΔH, while the intercept of the plot indicates ΔS (Figure 3). The value of ΔG is calculated using the expression ΔG = ΔH − TΔS (eq 4). The negative value of ΔG obtained indicates that the formation of the complex between ADMQ and HSA is spontaneous. The obtained values of ΔS and ΔH are positive, indicating that the binding process mainly involves hydrophobic interactions. It is assumed that whenever the ligand binds to the protein in a water-accessible area, 38 the protein releases the excess solvent from its surface, thus driving the entropy to a positive value. From this perspective, it can be concluded that the nonpolar quinoline and anthracene groups can effectively interact with amino acids present in the hydrophobic region. Hence, we can presume the involvement of hydrophobic interactions to be predominant in securing the ligand ADMQ into the protein scaffold. 2.2. Protein-Induced Rotational Confinement. The principle of fluorescence anisotropy measurements lies in photoselective excitation of a fluorescent molecule using polarized light that ultimately results in polarized emission. It is important to note that the transition dipole moments for absorption and emission lie in specific directions within the fluorophore structure. Such anisotropy measurements provide an outlook on protein conformational dynamics. Fluorescence anisotropy strongly depends on various factors such as solvent viscosity, fluorophore shape, and protein flexibility. A higher value of the anisotropy is observed as the environment around the fluorophore becomes restricted and hinders its rotational diffusion. The anisotropy value in fluids gradually decreases because of the ease of rotation of the fluorophore molecule, but rotation can be restricted in different matrices such as micelles, reverse micelles, dextrin, etc. 39 In the present experiment, the variation in the anisotropy of ADMQ was recorded with increasing concentration of HSA by monitoring the emission of excited ADMQ only. It is relevant to mention here that the synthesized molecule ADMQ exhibits two distinct absorption bands at 259 and 434 nm in water and shows a remarkable emission with a maximum at around 550 nm (λ_ex = 434 nm) in water. 40 The plot (Figure 4) shows that as the concentration of HSA was increased, the fluorescence anisotropy (r) increased to a value of r ≈ 0.232 at 260 μM and then leveled off. This observation is attributed to restricted motion of ADMQ somewhere inside the protein scaffold and not in the aqueous phase. 2.3. Site-Specific Interaction Studies with Site Markers. To discern the location of ADMQ in HSA, site-marker competitive experiments were conducted.
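Returning briefly to the thermodynamic analysis above, the van't Hoff treatment lends itself to a compact numerical illustration. In the sketch below the association constants are illustrative values of the same order of magnitude as those reported (and chosen to give positive ΔH and ΔS, as found here), not the constants listed in Table 2.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Illustrative association constants at the three working temperatures
# (order 1e4 M^-1, as in the text; not the values in Table 2)
T  = np.array([298.0, 303.0, 308.0])
Ka = np.array([4.2e4, 4.8e4, 5.5e4])

# van't Hoff: ln Ka = -dH/(R*T) + dS/R  ->  slope = -dH/R, intercept = dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(Ka), 1)
dH = -slope * R          # enthalpy change, J mol^-1
dS = intercept * R       # entropy change, J mol^-1 K^-1
dG = dH - 298.0 * dS     # Gibbs energy at 298 K, J mol^-1

print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), dG(298 K) = {dG/1000:.1f} kJ/mol")
```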
According to Sudlow's nomenclature, warfarin (War) and ibuprofen (Ibu) occupy the hydrophobic pockets 41,42 and show affinity for binding site I (subdomain IIA) and binding site II (subdomain IIIA), respectively. 43 War binds in a stable fashion within subdomain IIA of SA, 44,45 which is reflected by its enhanced fluorescence that arises as a result of the close proximity of Trp-214 of HSA and its benzyl moiety. 46,47 In this experiment, ADMQ was gradually added to a solution with a fixed [HSA]:[site marker] concentration ratio of 1:1, and the changes in the emission spectrum were monitored upon excitation at 295 nm. A decrease in the fluorescence intensity of HSA was observed, accompanied by a hypsochromic shift from 345 to 340 nm ( Figure S1). This observation reveals that there is an increased nonpolar region in vicinity of Trp-214 and that ADMQ addition somehow perturbs the site in which War is bound to HSA. On the other hand, with Ibu there is just a decrease in fluorescence intensity without any shift, indicating that Ibu is incapable of averting ADMQ binding. In order to quantify the extent of the binding interaction of the ADMQ−HSA complex in the absence and presence of the stereotypical site markers, eq 2 48,49 was used, and the affinity constant (K a ) for ADMQ−HSA was found to be (4.85 ± 0.35) × 10 4 at 298 K and pH 7.4, whereas the values of K a for ADMQ−HSA in the presence of Ibu and War were (4.75 ± 0.29) × 10 4 and (4.20 ± 0.34) × 10 4 , respectively ( Figure 5). It is prudent that the K a dwindled upon War addition but remained similar to the original value upon introduction of Ibu. The lower K a of ADMQ toward HSA in the presence of War indicates that ADMQ competes with War for the same site, i.e., site I of the protein. Hence, this site-marker experiment indicates that the ligand ADMQ is localized in subdomain IIA of serum albumin. 2.4. In Silico Investigation of the ADMQ Binding Site in HSA. In order to substantiate the detailed in vitro spectroscopic observations, a three-in-one molecular modeling exercise was employed, involving regular extra-precision (XP) ACS Omega Article molecular docking, induced-fit docking, and molecular dynamics (MD) to decipher the site-specific interaction between the transport protein (HSA) and ADMQ. Molecular Docking. The arrangement and configuration of ligands inside carrier proteins have a significant influence on their conformational change and bioactivity under physiological conditions. 50 The globular protein HSA is characterized primarily by two sites I and II, each divided into two subdomains A and B. 41,42 The key factors that hold importance for any ligand to bind to HSA are its affinity, site specificity, and binding pose within the protein pocket. Therefore, a complete picture of HSA and its interaction with ADMQ can be better understood through computational studies. ADMQ exists in different structural forms in different environments, as shown in Scheme 1. 40 In this scenario, a molecular docking exercise was carried out with the β-hydroxy keto form of ADMQ. Docking of the β-hydroxy keto form of ADMQ with HSA was performed and compared with that of the site-specific marker warfarin (which prefers to bind in subdomain IIA) by running the docking program Glide to uncover their respective binding modes. Glide searches for favorable interactions between the ligand and the receptor molecule, usually a protein, using the Optimized Potentials for Liquid Simulations (OPLS) force field. 
The molecular docking study of the ADMQ−HSA system suggests that the ligand prefers to occupy Sudlow binding site I (subdomain IIA) of HSA where warfarin resides, near Trp-214 ( Figure 6). The hydrophobic, electrostatic, and H-bonding interactions secure the ligand in the binding cleft of HSA in subdomain IIA. The hydrophobic residues Trp-214, Leu-198, Ala-291, Val-343, Val-344, Tyr-452, and Val-455 facilitate the binding inside the protein pocket. A comparative picture of the docking scores and binding energies of ADMQ and warfarin is documented in Table 3, indicating that ADMQ is stable while accommodating itself in binding site I (BS I), in good agreement with site-specific interaction studies in the presence of site markers. Induced-Fit Docking Study. This preliminary research prompted further that a more sophisticated docking method like induced-fit docking (IFD) 51 had to be adopted to shed light on the binding site, binding affinity, orientation, and binding pose of the ligand and receptor. IFD uses Glide and the refinement module in Prime to induce adjustments in the receptor structure so that it can accommodate the ligand as per its geometry and orientation. The receptor here also is untrimmed but a softened one compared with the standard virtual docking study, in which the receptor is kept rigid. From the IFD studies, it can be judged that the minimumenergy conformation of ADMQ prefers to acquire an arclike shape with a dihedral angle of 113.8°within the hydrophobic pocket with slight structural perturbations in the receptor site and is characterized by an IFD score of −1283.42 kcal mol −1 . Trp-214 is also found to lie in close proximity to ADMQ (estimated distance of ∼1.97 nm), as depicted in Figure 7 left. The IFD-generated pose depicts that the hydroxyl group of ADMQ forms H-bonds with Trp-214 and Asp-451, whereas the nitrogen atom of the quinoline moiety forms a H-bond with Arg-222. The two-dimensional 2D ligand interaction pattern (Figure 7 right) depicts the predominance of hydrophobic amino acids (apple-green balls) in proximity to the ADMQ molecule along with a few positively charged residues like Arg-222, Arg-218, and Lys-195. Thus, several non-covalent interactions shelter ADMQ within HSA. MD Simulations. In order to analyze various features of ADMQ−HSA binding (e.g., stability, types of interactions, etc.) in explicit solvent under physiological conditions, MD simulations were employed. Here the pose-scoring aspect of IFD was taken into consideration, and out of the 12 poses that were generated, the ADMQ−HSA conjugate pose with the highest IFD score (−1283.42 kcal mol −1 ) was taken for the next level of 15 ns MD simulation by Desmond, 52 an explicitsolvent program with emphasis on accuracy, speed, and scalability. MD simulations were used to gain an idea regarding ACS Omega Article the comparative stability of ADMQ and warfarin inside the same binding pocket, as represented in terms of root-meansquare deviation (RMSD), root-mean-square fluctuation (RMSF), secondary structure elements (SSEs), and protein− ligand contacts. The RMSD for ADMQ−HSA is shown in Figure 8 while that for the warfarin−HSA system is presented in Figure S2. The RMSD plot illustrates that ADMQ stays in its primary binding pocket and remains in contact with the protein chain without diffusing away into the solvent. 
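The RMSD and RMSF profiles discussed here and below were generated with the Desmond analysis tools; purely as an indication of how comparable post hoc measures can be extracted from a generic trajectory, a sketch using the open-source MDAnalysis package is given below. The file names are hypothetical and this is not the workflow used in the present study; the discussion of the present RMSD and RMSF profiles continues below.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms, align

# Hypothetical topology/trajectory files for an HSA-ligand complex
u = mda.Universe("hsa_admq.pdb", "hsa_admq_traj.dcd")
ref = mda.Universe("hsa_admq.pdb")

# Backbone RMSD of the protein along the trajectory (Angstrom),
# after least-squares superposition onto the reference structure
rmsd_calc = rms.RMSD(u, ref, select="backbone").run()
rmsd_per_frame = rmsd_calc.results.rmsd[:, 2]   # columns: frame, time, RMSD

# Per-residue C-alpha RMSF (fluctuation profile analogous to Figure 9);
# the trajectory is first aligned on the protein backbone
align.AlignTraj(u, ref, select="backbone", in_memory=True).run()
calphas = u.select_atoms("protein and name CA")
rmsf_calc = rms.RMSF(calphas).run()

print("mean backbone RMSD (A):", rmsd_per_frame.mean())
print("max C-alpha RMSF (A):", rmsf_calc.results.rmsf.max())
```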
The RMSD change during the complete 15 ns run lies within the 1−3 Å range, which shows that the system has equilibrated and the RMSD has stabilized around fixed values in both the case of ADMQ and warfarin. The RMSF plot presents peaks that are indicative of those regions of the protein that have undergone maximum fluctuations. The RMSF changes throughout the run were found to be similar in the cases of ADMQ ( Figure 9) and warfarin ( Figure S3). Inspection of the peak fluctuations reveals how similar side chains of the protein have undergone fluctuations with both ADMQ and warfarin. The agreement between the protein RMSD and protein RMSF is found to be ACS Omega Article excellent, which concretizes our observation that the probable binding site of ADMQ is site IIA (the warfarin binding site). Moreover, the SSE distributions by residue index for ADMQ ( Figure 10) and warfarin ( Figure S4) throughout HSA can be attributed to similar changes in α-helix percentage to the tune of 57.24% for ADMQ and 57.72% for warfarin. The SSE compositions for each trajectory frame for ADMQ on HSA ( Figure 11) and warfarin on HSA ( Figure S5) were also found to be similar, along with analogous changes in αhelix. The findings signify that subdomain IIA undergoes similar changes when ADMQ and warfarin are incorporated at the same site. Hence, this comprehensive in silico exercise provides insight into the binding location, affinity, stability, and orientation pattern of the bioactive ligand ADMQ inside the HSA scaffold. 2.5. Precise Control over Rotamerization of Trp-214. Modulation of the Excited-State Dynamics of Trp-214 by ADMQ. HSA is prone to continuous wobbling motion, and so is its sole Trp residue, Trp-214. In its natural habitat, HSA exhibits three lifetime components, 53 each representing a specific ground-state conformer of Trp with lifetimes of 1.1 ns (τ 1 ), 3.96 ns (τ 2 ), and 7.32 ns (τ 3 ). These three Trp rotamers ACS Omega Article are denoted as conformer I, conformer II, and conformer III, respectively. 22 The disparity in the lifetimes of the three conformers in native HSA itself signifies their relative exposure to the microenvironment. The shortest and intermediate lifetimes reflect a degree of quenching for certain Trp conformers due to the rigidity of the Trp microenvironment, 54 whereas the longest lifetime represents a "free" conformer of Trp. 54 Hence, a longer lifetime (conformer III) corresponds to greater exposure of the conformer to the aqueous microenvironment, and a shorter lifetime (conformers I and II) indicates exposure of the conformer toward a less polar environment and its burial deep inside, away from the influence of bulk water. When HSA is subjected to increasing ADMQ concentrations (Figure 12), the lifetimes of the conformers are affected differently (the data are pooled in Table 4). τ 3 remains almost unaltered throughout the entire concentration range, indicating that it is unlikely that ADMQ interacts with conformer III. For conformer II having an intermediate lifetime, τ 2 shows a marginal variation and decreases from 3.96 to 3.56 ns. The lifetime of conformer I, τ 1 decreases markedly from 1.1 to 0.52 ns, which is nearly a 50% reduction relative to its initial value. Another interesting observation is the profound transformation in the percentage contributions of these three rotamers in HSA with increasing ADMQ concentration. The percentage contributions dwindle radically for both conformer II and III from 34.64 to 3.82% and 38.82 to 1.50%, respectively. 
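To make explicit how percentage contributions of this kind, and the mean lifetimes collected in Table 4, follow from a multiexponential fit (the contrasting behaviour of conformer I is taken up next), the sketch below fits a triexponential model to a noiseless synthetic decay built from the free-HSA parameters quoted above; IRF deconvolution, which the actual FelixGX analysis performs, is deliberately omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, t1, a2, t2, a3, t3):
    """Sum of three exponential decay components."""
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

# Synthetic decay built from the free-HSA parameters quoted in the text
# (tau = 1.1, 3.96, 7.32 ns; contributions 26.54, 34.64, 38.82 %)
t = np.linspace(0.05, 40, 400)                      # time, ns
true_params = (0.2654, 1.1, 0.3464, 3.96, 0.3882, 7.32)
decay = tri_exp(t, *true_params)

p0 = (0.3, 1.0, 0.3, 4.0, 0.4, 7.0)                 # starting guesses
popt, _ = curve_fit(tri_exp, t, decay, p0=p0)

amps = np.array(popt[0::2])
taus = np.array(popt[1::2])
alpha = amps / amps.sum()                           # relative (pre-exponential) contributions
tau_m = np.sum(alpha * taus)                        # amplitude-weighted mean lifetime (~4.51 ns)

print("alpha (%):", np.round(100 * alpha, 2),
      " tau (ns):", np.round(taus, 2),
      " tau_m (ns):", round(tau_m, 2))
```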
However, for conformer I the reverse situation applies, as the percentage contribution mounts from 26.54% to a very large value of 94.68% (Figure 13). The change triggered in the relative percentage contributions and fluorescence decays of the three rotamers/microconformations of Trp (in concentration-dependent ADMQ binding to HSA) points toward ADMQ-induced perturbations of the Trp-214 neighborhood. This disposition also points out the fact that ADMQ does exhibit precise control over the rotamerization of Trp-214. This control over the rotamerization process may be ACS Omega Article due to conformational alteration in HSA, direct interaction between ADMQ and Trp-214, or both. To understand the effect of changing pH on the conformational transformation of HSA, the fluorescence lifetime parameters (lifetime and pre-exponential value) were monitored carefully as the ADMQ concentration was increased in the HSA environment at different pH at 298 K. The changes in lifetime values as well as in pre-exponential factors throughout the pH range under study (Figures S7−S10) are very similar to those reported at physiological pH. 2.6. Verification of Conformational Alteration in HSA upon ADMQ Adoption. Binding of any exogenous ligand to serum albumin may induce an alteration in the conformation and three-dimensional (3D) structure of the host macromolecule. The drug-induced conformational change, if major, may affect the biochemical/biological activity of the protein itself 6 because maintaining the active site in the proper configuration is the cardinal function of the protein structure, and thus, a minimal conformational change may be highly appreciated. The conformational flexibility and adaptability exhibited by a protein to interact with a drug molecule is known to influence the transport mechanism of any drug in the physiological microenvironment. Hence, the ADMQ-induced conformational alteration in HSA was further investigated by circular dichroism (CD) and 3D fluorescence spectroscopy. Verification by CD Spectroscopy. Far-UV circular dichroism in the range of 200−260 nm was measured to observe alterations (if any) in the secondary structure of the host protein to accommodate the newly synthesized ADMQ molecule at increasing concentration. The dichroic absorption bands for α-helix, β-sheet (parallel and antiparallel), β-turn, and random coil content were documented for the addition of ADMQ to HSA. The CD spectrum of HSA manifests itself with two signature absorption bands at 208 and 222 nm, characteristic of α-helices. 55 Upon gradual addition of ADMQ, ACS Omega Article the decrease in the ellipticities at 208 and 222 nm ( Figure 14) indicates an increase in the negative Cotton effect, which reveals slight changes in the secondary structure of HSA. The CD results also indicate that upon addition of ADMQ to free HSA, the antiparallel β-sheet structure rises from 2.30% to 3.30%, the parallel β-sheet arrangement from 2.70% to 3.50%, the β-turn structure from 11.10 to 12.50%, and the random coil structure from 13.00 to 15.50%. Moreover, the αhelical content shows a reduction from 73% (free HSA) to 63.70% (ADMQ−HSA complex), as shown in Figure 14. This diminution in the percentage of α-helices (∼10%) upon introduction of ADMQ is indicative of minor conformational alterations in HSA. 
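The helix percentages quoted in this subsection follow from the MRE_208-based estimate detailed later in the Experimental Details section; the short sketch below reproduces that arithmetic. The MRE_208 inputs are back-calculated so that the function returns roughly the quoted 73% and 63.7%; they are not the measured ellipticities.

```python
def percent_alpha_helix(mre_208):
    """Standard estimate of alpha-helix content from the mean residue
    ellipticity at 208 nm (deg cm^2 dmol^-1); 4000 corresponds to the
    beta-form/random-coil baseline and 33000 to pure alpha-helix."""
    return (-mre_208 - 4000.0) / (33000.0 - 4000.0) * 100.0

# Illustrative MRE_208 values chosen to reproduce the quoted helix contents
print(percent_alpha_helix(-25170))   # ~73.0 %, free HSA
print(percent_alpha_helix(-22470))   # ~63.7 %, ADMQ-HSA complex
```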
The observation that the CD spectrum of free HSA overlapped with the spectrum of the ADMQ−HSA complex suggests that HSA in complex with ADMQ is predominantly α-helical with very little change in the relative quantities of each component, as mentioned above. This indicates that binding of ADMQ may have caused the polypeptide chain to become slightly more flexible in order to accommodate the ligand inside the protein pocket. This meager decrement in the percentage of α-helices may not render HSA inactive but may have some effect on the rotamerization of Trp-214. Verification by 3D Fluorescence Spectroscopy. To gain more insight into the ADMQ-rendered alteration in the polypeptide backbone of HSA, 3D fluorescence measurements were also performed on free HSA and in the presence of increasing concentrations of ADMQ. The 3D fluorescence spectra and contour diagrams for native HSA and the ADMQ−HSA complex are presented in Figure 15. In the 3D spectra, peak a (λ_ex = λ_em) corresponds to Rayleigh scattering. Additionally, two peaks designated as peak 1 (λ_ex = 295 nm) and peak 2 (λ_ex = 235 nm) are also observed. 56,57 Peak 1 is characterized as the fluorescent tryptophan residue in HSA and reflects changes in the tertiary structure of the protein, whereas peak 2 is indicative of changes in the secondary structure of the protein upon addition of ADMQ. The decreasing trends in the fluorescence intensity for both peak 1 (27%) and peak 2 (25%) with increasing ADMQ concentration point toward microenvironmental changes along with slight alterations of the peptide strand (secondary structure) and overall tertiary structure of the host protein. Hence, both CD and 3D fluorescence spectroscopy establish minimal conformational alteration of the host HSA to accommodate the potential bioactive ADMQ molecule. At this juncture it may therefore be inferred that this minor conformational alteration may also be conducive to Trp-214 rotamerization. 2.7. Verification of a Direct Interaction between ADMQ and Trp-214. The relative distribution of the Trp microconformers in HSA can be regulated by ADMQ only when a direct interaction is possible between them. Hence, it was necessary to determine whether these two partners are close enough to each other to achieve an effective dynamic interaction. Determination of the Proximity between Trp-214 and ADMQ. In connection with the possibility of the direct interaction of ADMQ with Trp-214, Förster resonance energy transfer (FRET) measurements were performed for the ADMQ−HSA system to investigate the distance-dependent energy transfer between the donor (HSA) and acceptor (ADMQ) in the biological microenvironment. 58 Energy transfer occurs when the absorption spectrum of the acceptor molecule overlaps with the emission spectrum of the donor. 59 This spectral overlap between the interacting partners is depicted in Figure 16. The efficiency of energy transfer between the donor and acceptor can be estimated from the photoluminescence quenching using the Förster equation: 60 E = 1 − F/F_0 = R_0^6/(R_0^6 + r^6) (5), where E is the efficiency, r is the distance between the donor and acceptor, and R_0 is the distance at which the energy transfer efficiency becomes 50%, which can be calculated as R_0^6 = 8.79 × 10^−25 K^2 n^−4 φ_D J (6), where K^2 is the dipole orientation factor for the donor and acceptor, φ_D is the fluorescence quantum yield of the donor in the absence of acceptor, n is the refractive index, and J is the spectral overlap integral between the donor emission and acceptor absorption spectra, expressed as J = ∫ F(λ) ε(λ) λ^4 dλ / ∫ F(λ) dλ (7), 18 where F(λ) is the fluorescence intensity of the donor at wavelength λ and ε(λ) is the molar absorption coefficient of the acceptor at that wavelength. For ADMQ and HSA, ε_a = 7.49 × 10^3 M^−1 cm^−1 (at 430 nm), n = 1.37, and φ_D = 0.15.
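Using the parameters just listed, together with the overlap integral J reported in the following sentence, the Förster analysis of eqs 5−7 can be reproduced numerically as in the sketch below; K² = 2/3 (isotropic dipole averaging) is assumed, the donor quenching ratio is back-calculated from the reported efficiency, and small rounding differences from the reported R_0 = 2.21 nm are expected.

```python
# Numerical sketch of the Forster analysis (eqs 5-7) with the parameters
# quoted in the text; J is taken from the Results, and F/F0 is an
# illustrative value consistent with the reported efficiency E = 0.357.
K2, n, phi_D = 2.0 / 3.0, 1.37, 0.15
J = 4.80e13            # spectral overlap integral, M^-1 cm^-1 nm^4
F_over_F0 = 0.6426     # donor quenching at the chosen donor:acceptor ratio

# R0 in Angstrom for J in M^-1 cm^-1 nm^4
# (equivalent to R0^6 = 8.79e-25 * K2 * n^-4 * phi_D * J in cm^6)
R0 = 0.211 * (K2 * n**-4 * phi_D * J) ** (1.0 / 6.0)   # ~22.2 A
E = 1.0 - F_over_F0                                     # ~0.357
r = R0 * ((1.0 - E) / E) ** (1.0 / 6.0)                 # ~24.5 A = 2.45 nm

print(f"R0 = {R0/10:.2f} nm, E = {E:.3f}, r = {r/10:.2f} nm")
```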
From eqs 5−7, the evaluated values are J = 4.80 × 10^13 M^−1 cm^−1 nm^4, R0 = 2.21 nm, E = 0.3574, and r = 2.45 nm. The obtained value of r lies within the 2−8 nm range and satisfies the condition 0.5R0 < r < 1.5R0, which implies the existence of energy transfer between HSA and ADMQ. The distance between Trp-214 and ADMQ (r = 2.45 nm) is consistent with the result of the IFD study (Figure 7a). 2.8. Control Experiment with Warfarin and Ibuprofen. ADMQ occupies a similar protein scaffold as the site marker warfarin, i.e., subdomain IIA of BS I in proximity to Trp-214 of HSA. Thus, a control experiment was also performed with the anticoagulant warfarin to see its effect on the three conformers of tryptophan. The addition of warfarin to HSA resulted in an increase in the percentage of the short-lifetime rotamer (conformer I) from 26.19% to 72.69%, pointing toward similar changes in the Trp-214 microenvironment (Figure 17). This observation supports the conclusion that ADMQ and warfarin exhibit similar effects on the carrier protein in subdomain IIA (BS I) and have similar control over the rotamerization of Trp-214. Interestingly, performing the same experiment with another stereotypical ligand, ibuprofen (a non-steroidal anti-inflammatory drug), which binds to subdomain IIIA (BS II) in the cleft/pocket, remote from the vicinity of Trp-214, did not produce the same picture. Upon gradual addition of ibuprofen to HSA at physiological pH, the percentage contribution of conformer I varied only slightly, from 31.91% to 40.50%, which indicates that ibuprofen does not interfere with the microconformer (conformer I) that is buried deep inside the pocket of subdomain IIA and does not control the rotamerization of Trp-214, as observed in the cases of ADMQ and warfarin. Hence, from the proximity determination and the control experiments in the presence of site markers, it may be concluded that although ADMQ exerts a minor conformational perturbation on the polypeptide backbone, the change in rotamerization of Trp-214 is mainly influenced by the direct interaction of ADMQ with Trp-214, as they reside in close proximity to each other (2.45 nm). CONCLUSION The present comprehensive study presents a coherent account of the binding affinity, location, and spatial structural changes induced in HSA by a newly synthesized quinoline-appended anthracenyl chalcone, ADMQ, as shown by several in vitro spectroscopic techniques combined with molecular modeling exercises. The site-marker competitive assay in conjunction with the three-in-one molecular modeling approach generated comparable profiles in terms of binding location for ADMQ and the standard site marker warfarin in the HSA scaffold. ADMQ is enveloped inside the hydrophobic domain (BS I, subdomain IIA) of HSA via hydrophobic interactions with Trp-214, Arg-218, Arg-222, Asp-451, Tyr-452, etc. Our study provides valuable quantitative data not only on how the dynamic behavior of ADMQ with Trp-214 is manifested via the FRET mechanism but also on how the molecule has shown its ability to regulate the subpopulations of different Trp-214 rotamers. Although ADMQ exerts a minimal conformational perturbation in the polypeptide backbone of HSA, its direct interaction with the closely spaced Trp-214 (r = 2.45 nm) is found to predominantly regulate/control the interconversion of the discrete intrinsic fluorophore microconformations.
This in vitro spectroscopic study, amalgamated with an in silico molecular modeling exercise, elucidates the regulatory effect of the newly synthesized quinolinyl chalcone on the rotamers of the sole Trp-214 residue and its site-specific binding interaction with the model transport protein HSA. This may be very useful for understanding the detailed mechanistic and functional effects that chalcone-based therapeutic agents exert on the distribution of rotameric forms of Trp in the human circulatory system. EXPERIMENTAL DETAILS 4.1. Materials. The ligand ADMQ used for the present study was synthesized previously by our group as described elsewhere.40 A JASCO FP-8300 spectrofluorimeter with a spectral resolution of 2.5 nm was used for steady-state excitation and emission measurements with 1 cm path length cuvettes. The excitation wavelength of 295 nm was chosen to measure the emission from the sole Trp residue in HSA. Spectral corrections were carried out by subtracting appropriate blanks from the sample spectra run under the same conditions. Samples were equilibrated for 10 min at a particular temperature before data acquisition. All of the fluorescence intensities were corrected for the inner-filter effect using the following expression:61 Fcorr = Fobs × 10^((Aex + Aem)/2), where Aex and Aem are the absorbances at the excitation and emission wavelengths, respectively. The mean residue ellipticity (MRE) was obtained as MRE = θobs/(10 Cp n l), where θobs is the observed CD signal (in mdeg), Cp is the molar concentration of protein, n is the number of amino acid residues (585 for HSA), and l is the path length (1 cm). The α-helical content was estimated from the MRE value at 208 nm using the following expression: % α-helix = [(−MRE208 − 4000)/(33000 − 4000)] × 100, where MRE208 is the observed MRE value at 208 nm. The MRE value of 4000 corresponds to the β-form and random coils, whereas the MRE value of 33000 is for pure α-helix at 208 nm. 4.2.5. Fluorescence Lifetime Measurements. Time-resolved fluorescence decays were measured using the time-correlated single-photon counting (TCSPC) technique with a PTI Pico Master instrument. A pulsed diode with a wavelength of 280 nm was used as the excitation source for HSA. The instrument response function (IRF) was acquired using sodium dodecyl sulfate as a scatterer. The decays and pre-exponential values obtained at an emission wavelength of 340 nm were analyzed using FelixGX 4.1.2 software. The reduced χ2 value and Durbin−Watson parameter were used to authenticate the goodness of fit. The lifetime data were acquired with similar parameters at pH 2, 4, 9, and 11 using sodium acetate and disodium hydrogen orthophosphate buffers with increasing ADMQ concentration in the HSA environment. 4.3. In Silico Studies. The molecular modeling studies were conducted using an Intel Core i5-2500 CPU running at a clock speed of 3.3 GHz. The 64-bit processors were coupled with 4 GB of high-speed RAM while using Maestro version 9.9 (Schrödinger LLC, New York, 2014). The machines were operated by the Linux CentOS 6 operating system. 4.3.1. Ligand and Protein Preparation. The synthesized ligand ADMQ was prepared in Maestro 9.9 using the LigPrep module63−65 with the OPLS-2005 force field66,67 to rectify the molecular geometry and subsequently ionize the structure at a physiological pH of 7.4, retaining specific chirality in order to obtain a conformation with minimum energy. HSA (PDB ID 1AO6)68,69 was prepared using the Protein Preparation Wizard (Schrödinger).70,71 The HSA model prepared in this fashion may be considered reliable as it closely resembles the native 3D structure. Finally, the whole HSA structure was minimized by convergence to an RMSD of 0.3 Å with the OPLS force field.
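For convenience, the correction expressions given in Section 4 above can be collected into a few small Python helpers. This is a minimal sketch of our own, with hypothetical variable names; it is not code from the study.

```python
# Helpers for the spectroscopic corrections described in Section 4.

def inner_filter_corrected(F_obs, A_ex, A_em):
    """Inner-filter correction: F_corr = F_obs * 10**((A_ex + A_em) / 2)."""
    return F_obs * 10 ** ((A_ex + A_em) / 2.0)

def mean_residue_ellipticity(theta_mdeg, c_protein_M, n_residues=585, path_cm=1.0):
    """MRE (deg cm^2 dmol^-1) from the observed CD signal in millidegrees."""
    return theta_mdeg / (10.0 * c_protein_M * n_residues * path_cm)

def percent_alpha_helix(mre_208):
    """alpha-helix content (%) from the MRE value at 208 nm."""
    return (-mre_208 - 4000.0) / (33000.0 - 4000.0) * 100.0

# Example: an MRE_208 of about -25200 deg cm^2 dmol^-1 corresponds to ~73%
# alpha-helix, the value reported above for free HSA.
print(round(percent_alpha_helix(-25200.0), 1))  # 73.1
```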
4.3.2. Regular XP Molecular Docking and Induced-Fit Docking. The SiteMap module was used to identify possible binding sites in the fully processed protein. The main purpose of the docking exercise was to generate a list of feasible binding sites that could finally lead to a stable protein−ligand complex, considering the penalty scores and other short-range interactions. Subsequently, the best ligand conformation having the lowest penalty score was considered for further analysis. The ligand was docked at the selected sites using the XP mode of Glide.63 The stability of the ligand−protein complex was assessed by molecular mechanics generalized Born surface area (MM-GBSA) binding energy values. Flexible docking was performed using induced-fit docking.51 4.3.3. MD Simulations. Desmond MD System v2.2 (D. E. Shaw Research, Schrödinger)66,72 was used to perform the MD simulations. The TIP3P solvation model was configured in an orthorhombic box. The system charge was neutralized by introducing Na+ ions. To minimize the solvated ligand−receptor complex, 2000 iterations were carried out. The simulation was carried out for 15 ns. The overall method for the MD simulation was adopted as reported earlier6 for the ADMQ−BSA system. Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsomega.8b01079: effect of the selected site marker (warfarin) on the ADMQ−HSA system in HEPES buffer at pH 7.0 (Figure S1); HSA and warfarin RMSD plot during the 15 ns simulation (Figure S2); HSA and warfarin RMSF plot during the 15 ns simulation (Figure S3); SSE distribution by residue index for warfarin throughout HSA during the course of the 15 ns simulation (Figure S4); SSE composition for each trajectory frame over the course of the simulation and SSE assignments of each residue over time for warfarin on HSA (Figure S5); absorption and emission spectra of ADMQ in water/buffer at pH 7.0 at 298 K (Figure S6); percentage contributions of the three rotamers in native HSA with increasing ADMQ at pH 2, 4, 9, and 11 (Figures S7−S10) (PDF).
A Rapid Review of Prescribing Education Interventions Introduction Many studies conducted on the causes and nature of prescribing errors have highlighted the inadequacy of the teaching and training of prescribers. Subsequently, a rapid review was undertaken to provide an update on the nature and effectiveness of educational interventions aimed at improving prescribing skills and competencies. Methods Twenty-two studies taking place between 2009 and 2019 were identified across nine databases. Results and Discussion This review reinforced the importance of the WHO Guide to Good Prescribing to prescribing curriculum design as well as the effectiveness of small-group teaching. However, it also highlighted the lack of innovation in prescribing education and the lack of longitudinal follow-up regarding the effectiveness of prescribing education interventions. Introduction Over time, deficiencies in prescribing education, such as a lack of practical prescribing training, a failure to link theory to practice and the little attention afforded to generic prescribing skills, have led to the increasing emergence of prescribing errors [1]. A prescribing error is defined as: "a clinically meaningful prescribing error occurring when... there is an unintentional significant reduction in the probability of treatment being timely and effective or increase in the risk of harm when compared with generally accepted practice" [2]. Errors in the prescription of medicines are currently one of the biggest dilemmas facing medicine and healthcare. Numerous studies have been conducted on prescribing errors and their impact on patient safety [3][4][5]. Adverse drug effects (ADEs) are found to be one of the main causes of injury to hospitalised patients [6], with over half of all prescribing errors considered potentially harmful to patients, and 7.3% of these errors leading to life-threatening consequences [7]. Previously, only doctors and dentists held the legal authority to prescribe prescription-only medicines; however, this situation recently began to change globally, with pharmacists, nurses or both obtaining the authority to prescribe independently [8]. The United Kingdom (UK) provides the most extensive rights to pharmacists and nurses, where doctors and dentists are known as medical prescribers (MPs) and other healthcare professionals who prescribe are known as non-medical prescribers (NMPs) [9]. The rationale for this development was to provide patients with quicker access to medicines. Not only would this decrease a very heavy workload within general practice but it would also widen the use of the skills of pharmacists and nurses [10]. A small number of studies exploring the effectiveness of NMP prescribing have been encouraging, demonstrating that NMPs are making clinically appropriate prescribing decisions [10,11]. Baqir et al. found that pharmacist prescribers demonstrated an error rate of 0.3%; however, they advocate for further, larger-scale research to be conducted on the prescribing practices of NMPs to obtain a clearer picture of the nature of errors NMPs can be prone to [12]. Cope et al. have also called for more research to investigate how NMPs are trained to prescribe safely and effectively [8]. Prescribing is overall a very complicated task requiring the amalgamation of knowledge of medicines, diagnostic and communication skills, an in-depth understanding of the principles underpinning clinical pharmacology and an appreciation of risk and uncertainty [13]. Dornan et al.
conducted research to determine the causes of prescription errors. They interviewed mainly recently graduated doctors and found that, out of skill-based, rule-based and knowledge-based mistakes, rule-based mistakes were the main cause of prescribing errors. They reported that this suggests a lack of ability among junior doctors to correctly apply the knowledge acquired in undergraduate education. This was supported by a consensus that students felt there was a lack of modules preparing them for the transition from theory to practice and that current pharmacology education was not beneficial enough with regard to prescribing. It was concluded that rule-based mistakes were the most likely to go unnoticed and inflict harm on the patient [1]. Nazar et al. [14] built upon the research conducted by Dornan et al. [1] to delve further into the causes of prescribing errors. Their research implied that a lack of knowledge is not solely responsible for prescribing errors. They found that methods of teaching as well as the environment of prescribing also contribute toward prescribing errors. Audit Scotland questioned the adequacy of undergraduate medical education in preparing new doctors for rational and safe prescribing [15]. Previously, a systematic review was conducted by Kamarudin et al., examining previous work on educational interventions designed to enhance the prescribing competency of both medical and non-medical prescribers [16]. However, Kamarudin et al., as well as other systematic reviews on prescribing education interventions [17,18], have only investigated the quantitatively measured effectiveness of interventions and omitted studies which qualitatively investigate the views and perspectives of students on the various interventions. Given that previous literature reviews have omitted qualitative studies on prescribing education interventions, coupled with the advancement of the nature of educational interventions across the medical education continuum and the time elapsed since a previous review in this area, our aim was to perform a rapid systematic review to provide an update on the scope, nature and effectiveness of educational interventions aimed at developing the prescribing skills and competencies of medical and non-medical prescribers and to investigate the views and perspectives of students regarding different prescribing educational interventions. Design Given that previous literature reviews evaluating prescribing education interventions had been conducted, the aim was to investigate whether and to what extent the nature of these educational interventions had evolved in the last 10 years; therefore, a rapid review was deemed most appropriate. A rapid review is defined as a form of evidence synthesis that provides more timely information for decision-making as compared to a traditional systematic review. In addition, rapid reviews have been the preferred form of evidence synthesis for reviews aiming to serve as an update on previous reviews [19]. Furthermore, due to the heterogeneity of the studies and the inclusion of both qualitative and quantitative studies, the data were synthesised using a narrative approach [20]. Search Strategy The focus was towards identifying studies where an educational intervention was implemented in a curriculum to improve the prescribing skills of medical and/or non-medical prescribing students.
Papers were screened from nine different databases, including MEDLINE, EMBASE, PsycINFO, Scopus, Academic Search Premier, CINAHL Complete, Cochrane Library, NIH PubMed and Google Scholar. A search strategy was developed with the aid of a librarian from the University of York Library. The search terms entered into these databases were as follows: Study Selection Papers were included if they were published in English, were full-text journal papers and evaluated an implemented educational intervention related to prescribing. Both qualitative and quantitative studies of any design taking place in medical schools and/or non-medical prescribing programmes were included, whether the intervention was evaluated through assessments or through qualitative student perspectives. However, they had to have taken place between the years 2009 and 2019. Papers were excluded if the educational interventions were not related to prescribing, or if they were systematic reviews, meeting reports, letters, opinion pieces or studies involving qualified doctors. The screening process took place in compliance with the PRISMA guidelines [21]. The titles and abstracts of the papers were reviewed by two authors to assess the relevance of the studies. Both authors held discussions regarding which papers should be included for full-text screening and an agreement was reached in a timely manner. Both authors also conducted full-text screening and, after agreeing upon 95% of the papers, selected them for data extraction. Data Extraction and Quality Appraisal Initially, a small number of papers underwent dual data extraction by both Usmaan Omer and Evangelos Danopoulos, as recommended by Waffenschmidt et al. [22], based on study design, location, study aims, type and success of the educational intervention, level of innovation and specific areas of prescribing targeted by the intervention. The quality of each study was assessed using the Best Evidence Medical Education (BEME) scale [23]. As both authors agreed on the data extracted, data extraction of the remaining papers was conducted by UO and ED individually. Number of Studies Overall, a total of 1137 papers were identified across all nine databases. Following the removal of duplicates, 696 papers remained, of which 634 were excluded for reasons including having no relevance to prescribing, not including medical and/or non-medical prescribing students as study cohorts or being conducted before 2009. After consultation between the two authors, it was agreed that 58 papers should be included for full-text screening. Following the process of full-text screening, 22 papers were included in the review. (PRISMA diagram included as Appendix) Study Characteristics Of the 22 studies selected for the review, eight were randomised or non-randomised controlled trials, six were before-and-after studies, five were mixed-methods studies, two were qualitative studies and one was a cross-sectional survey study (Tables 1, 2, 3, 4, and 5). Types of Educational Interventions Teaching and Mentoring from Healthcare Professionals Other than Faculty Members Four case-based educational interventions included teaching and mentoring from qualified healthcare professionals other than faculty lecturers [24][25][26][27]. Two studies followed a group learning format using case-based scenarios [24,25], one study used experiential learning through observations of real-life prescribing situations [27] and one study implemented a mentoring scheme between learner and expert [26].
Newby et al.'s study [24] included pharmacist-led tutorials using common case scenarios seen by junior doctors; similarly, Gibson and colleagues used clinical case scenarios in tutorials led by junior doctors, but these were discussed in small groups of students, who devised a clinical management plan for the patient in the clinical scenario. Tittle et al.'s study [27] used small-group tutorials with students shadowing pharmacists in clinical practice, where topics such as prescribing for acute medical emergencies, taking patient drug histories, discharge prescriptions and therapeutic drug monitoring were covered. Bowskill et al. [26] implemented a mentoring scheme in the NMP programme at Nottingham, where students were allocated an alumnus of the programme who would act as their prescribing mentor, aiding them in effectively integrating the prescribing skills learnt during the programme into their area of clinical expertise. The studies used different methods to evaluate their outcomes. Newby et al. [24] employed mixed methods to evaluate the benefits of these sessions, where students undertook a prescribing exercise and a prescribing confidence questionnaire before and after the implementation of the intervention, alongside focus groups where selected students discussed the benefits and potential drawbacks of the tutorials. Post-intervention scores were significantly higher, and both the focus group and questionnaire data indicated that the tutorials had improved prescribing confidence in students. Gibson et al. [25] also used end-of-session questionnaires but observed student examination performance as an indicator of success. The results of the questionnaires showed that most students rated the tutorials as 'excellent', greatly enhancing their prescribing confidence, knowledge and skills, with the role of the junior doctor as the teacher being well received. Both Tittle et al. [27] and Bowskill et al. [26] evaluated outcomes qualitatively through focus groups, semi-structured interviews and surveys. Tittle et al.'s [27] focus group results demonstrated positive perceptions of the intervention; the role of the clinical pharmacist as the teacher and the positive effect of the intervention on students' prescribing confidence were recorded. However, Bowskill et al. [26] found that although students praised the scheme for helping contextualise prescribing within their specific area of practice, they felt that adequate support was already provided by colleagues and tutors. Interventions Designed Using and Featuring the WHO Guide to Good Prescribing Six case-based interventions exposed students, to varying degrees, to the treatment-setting standards of the World Health Organization (WHO) Guide to Good Prescribing (GGP) [28][29][30][31][32][33]. Two studies used a combination of didactic lectures and subsequent prescription-writing for specific paper case scenarios [28,31], two studies implemented an individualised instruction approach where students were provided with the WHO GGP to use individually for creating treatment plans [32,33], one study used an experiential approach where students learned through observing real-life patients [30] and one study implemented the WHO GGP across an entire curriculum and in a variety of teaching formats [29]. Keijsers et al.
[29] made extensive use of the WHO GGP by incorporating it into a whole medical curriculum, where all pharmacology and pharmacotherapy modules were modelled according to the learning goals of the WHO GGP, and the guide was heavily featured during whole-group lectures, small-group tutorials and practical sessions. Kamat et al. [31] themed prior lectures and case-based tutorials (CBT) involving the treatment of varying conditions, such as diabetes mellitus, peptic ulcers and constipation, on the six steps of the WHO GGP. Raghu et al. [28] recruited 117 second-year medical students and asked them to compile prescriptions for three case scenarios. After delivering rational prescribing sessions and subsequently asking for the prescriptions to be rewritten, they assessed and provided feedback to the students according to the WHO GGP standards. Both Krishnaiah et al. and Tichelaar et al. [32,33] required students to use the WHO GGP as an aid in compiling treatment plans for hypothetical case scenarios; however, the purpose of Tichelaar et al.'s study [33] was to compare the impact of the WHO GGP with that of the 'SMART' criteria of goal setting on treatment planning. Thenrajan et al.'s study [30] used a test and a control group, both of which were exposed to the WHO GGP guidelines for selecting the preferred drug following a clinical diagnosis. After receiving five clinical scenarios, the test group underwent patient-based teaching where they would see real patients suffering from the same conditions seen in the clinical scenarios, whereas the control group underwent further prescription-writing training. Outcomes in most studies were assessed by scoring the treatment plans and prescriptions written by students following the intervention. Both Raghu et al. and Krishnaiah et al. [28,32] found student treatment plans to score higher post-intervention and in comparison to control groups. However, Tichelaar et al. [33] found the treatment plans of students using the SMART criteria to score higher than those of students who used the WHO GGP. Keijsers et al. [29] examined the impact of their curricular intervention through a formative standardised assessment testing basic pharmacological knowledge (testing factual knowledge), applied pharmacological knowledge (solving clinical scenarios) and pharmacotherapy skills as well as prescription-writing. The results demonstrated that both fourth- and sixth-year students receiving the WHO GGP intervention significantly outscored their control group peers. Self-Directed and Online Learning Three studies involved interventions which included a component of self-directed or online learning [34][35][36]. Two studies incorporated the self-directed components of the intervention alongside PBL-based tutorials involving case-based scenarios [34,35] and one study implemented an entirely individualised e-learning prescribing module [36]. Al Khaja and Sequiera [34] investigated the impact of an optional 2-h interactive prescribing skills session at the end of each pre-clerkship unit phase, where five to six clinical scenarios were discussed. Hauser et al. [35] required students enrolled in their study to collaborate with tutors to develop model patient-prescriber conversation guides.
Following a PBL session on medication non-adherence where they defined learning goals, students conducted independent research on strategies to achieve their learning goals in anticipation of a second PBL session where they discussed the results of their research findings, which was followed by the workshop where they devised their conversation guides. Sikkens et al. [36] designed a randomised controlled intervention where a group of fourth-year medical students were provided access to a 6-week e-learning module with eight clinical cases based on the WHO GGP. The outcomes of these studies were assessed through observation of the usual course assessment, where the scores of participants were higher compared to those who had not been recruited for the study [34]; through student reflections in the programme portfolio, where students expressed a high level of satisfaction with the intervention [35]; and through MCQ knowledge tests and OSCE simulations, where it was found that students in the e-learning group performed significantly better and pass rates were much higher compared to the control group. Survey results also showed that students rated the e-learning module as having enhanced their prescribing confidence in antimicrobial therapy [36]. Simulation and Role-play Three studies implemented an educational intervention centred around learning through role-play and Simulation-Based Medical Education (SBME) [37][38][39]. Two studies implemented a mixed-disciplinary small-group approach to their role-play method of teaching [37,38] and one study used a large-group experimental observation approach [39]. Cooke et al. [37] split medical and pharmacy students into small mixed-disciplinary groups who consulted with simulated patients and subsequently devised a working diagnosis, a mock prescription and a detailed management plan to explain to the simulated patient. Paterson et al. [38] brought medical and non-medical prescribing students together in multidisciplinary groups where they would devise prescriptions for three scenarios, two with simulated patients and one paper-based. Tayem et al.'s [39] large-group demonstration intervention used student volunteers in demonstrations of patient communication with regard to drug treatment. The faculty member acted as the physician and the volunteer student acted as the patient. All students had the opportunity to act as volunteers in these demonstrations. Study outcomes were assessed through both qualitative and quantitative approaches. Cooke et al.'s [37] participants expressed positive perceptions of the intervention in focus groups, citing the ability to apply theory to practice in a safe environment along with understanding the role of other healthcare professionals in prescribing. Paterson et al.'s [38] focus group discussions indicated that students positively received the master classes, praised the concept of working in small groups and gained a greater awareness and appreciation of the roles of other professionals in prescribing. They also used pre- and post-readiness for interprofessional learning scores (RIPLS) and self-efficacy scores to evaluate the impact of the interprofessional simulation exercise.
Tayem et al. [39] used recorded questionnaires, in which students found the role-play demonstrations instructive, reported that they helped enhance their ability to communicate drug therapy information effectively to patients and increased their prescription-writing confidence, and stated that they would like to be given further opportunities to undertake role-playing exercises in other facets of their medical education. Additionally, students attending focus groups reported that the educational intervention helped develop interaction skills with patients and that the exercise would be most effective within small groups. Moreover, the OSCE scores of those attending these role-play sessions were higher than those of non-attendees. Peer-Based and Inter-Professional Learning Two studies implemented educational interventions where either students from multiple stages of the medical programme were recruited for team-based learning or students from different degree programmes were brought together to partake in an inter-professional learning experience [40,41]. One study implemented a small-group experiential learning approach under supervision [40] and one study used a blended approach of didactic lectures and case-based small-group learning [41]. Dekker et al. [40] recruited first-, third- and fifth-year medical students to take part in a pilot intervention involving student-run clinics (SRCs), where the students were tasked with collaborating in consultations with real patients with a supervisor overseeing the consultation. Like Dekker et al., Achike et al. [41] also conducted a pilot study. However, this intervention brought together both second-year medical and fourth-year nursing students for an inter-professional learning (IPL) class. The class consisted of a brief didactic lecture followed by a small-group discussion on a clinical scenario and a group presentation. Outcomes were measured by Dekker et al. [40] through evaluation questionnaires completed by students, supervisors and patients, from which feedback was positive all-round, with the consensus that the SRC was safe, provided a high level of care and was beneficial to the students [40]. Likewise, Achike et al. (2014) [41] administered feedback questionnaires to students before they left the class, which showed overall positive perceptions of the class, with students complimenting the interactions with students of other professions and learning more about the process of rational drug choice. Two studies implemented peer-based learning between students of the same cohort [42,43]. Both studies implemented small-group teaching; however, one of these also incorporated large-group discussions at the end of the session [42] and the other implemented specific tutorials on a single topic [43]. Zgheib et al.'s study [42] included six clinical pharmacology sessions delivered twice monthly over a period of 3 months, of which five were team-based learning (TBL) sessions including activities such as the compiling of group prescriptions and group formularies, small-group work on MCQs eventually joined into a whole-class discussion on the answers, and group work on clinical scenarios and the appropriate prescribing decisions for them. Wilcock and Strivens [43] conducted a study where a certain segment of the overall prescribing education intervention involved teaching between peers. Groups of six to ten students received one 40-min tutorial every 2 weeks on the medications aspirin, tiotropium and simvastatin.
During the 6 weeks of these tutorials, one student in each group was asked to voluntarily provide their own tutorial to their peers on a fourth medication of their choice while following the same tutorial format. The interventions were evaluated through multiple approaches. Zgheib et al. [42] graded the group prescriptions, formularies and answers to case scenarios compiled in the sessions and provided students with the opportunity to mention the strengths and weaknesses of the course by completing course evaluation forms. The scores of the group prescriptions, formularies and case scenarios improved after each session and students expressed satisfaction with the format of the sessions, mentioning that they helped with improving their group interaction skills. Wilcock and Strivens [43] administered post-tests to their students, who demonstrated struggles with the ethics of prescribing and, although they enjoyed delivering tutorials to their peers, did not appear to display sustained improvements in their critical thinking [43]. Other Studies Two studies did not fit under any specific theme as their objectives were of a more general nature [44,45]. One study investigated whether case-based teaching was more effective in small-group or large-group settings. Small groups were made up of 13 to 15 students each and the large-group session included the entire cohort. Both sessions concluded with the distribution of questionnaires to students regarding their perceptions of the session. Focus group discussions also took place where a small number of students were asked to express their views and perspectives on both the small-group and large-group approaches. The results of both the questionnaires and the focus groups indicated a strong preference by students for the small-group teaching sessions [44]. Celebi et al. [45] conducted a study investigating whether a module on drug-related problems (DRPs) could help reduce the number of prescribing errors. Group 1 underwent the week-long prescription training course followed by a week-long skills laboratory training period, while group 2 acted as the late intervention group by undergoing the week-long skills laboratory training before the prescription training course. Both groups underwent assessments before the training, a week later and at the end of the training programme. The training module included a 90-min seminar on adverse drug reactions (ADRs), prescribing errors and special-needs patients. Another 90 min was dedicated to practical training based on a virtual case of congestive heart failure. The next 3 days involved the students practicing prescriptions for real-life patients every morning and discussing the real-life patient cases with lecturers in afternoon sessions, with attention given to avoiding prescribing errors. At the end of the week, students were required to sit an examination with cases similar to the assessment cases but featuring different diseases. The results of the assessments demonstrated a significant decrease in prescription errors. These results were more prominent in the early intervention group [45]. Discussion In the last 10 years, we found 22 studies which met the inclusion criteria of educational interventions aimed at improving the prescribing skills and competencies of medical and non-medical prescribing students.
These showed that a considerable number of studies continue to be conducted on the best educational approaches to improving prescribing skills; however, as reported by previous systematic reviews [17,18], generalisability and validity continue to be limited due to the diversity and heterogeneity of the reported studies. The most recent literature review on this topic was conducted by Kamarudin et al. [16], which reported that many interventions were designed based on the concepts of the WHO GGP. This review also found that prescribing education interventions continue to be designed using the main concepts of the WHO GGP, demonstrating that, despite being published back in 1994, the guideline continues to be the leading model for safe and rational prescribing to this day. This assertion is supported by the positive results yielded by interventions designed around the WHO GGP, both in assessment and in student perception [28][29][30][31][32][33]. Despite there being a range of different educational interventions to improve the teaching of prescribing, most of these interventions feature the heavy use of clinical case scenarios. Brauer et al. [46] report that clinical case scenarios are vital to problem-based learning in healthcare and to the development of clinical practice guidelines. This also applies to the WHO GGP, which consists of a plethora of case scenarios from various clinical areas such as diabetes, cancers and gastrointestinal, respiratory and cardiovascular disorders. Hence, designing effective prescribing educational interventions requires the inclusion of robust clinical scenarios, as they can be applied to improving multiple aspects of prescribing competency such as prescription-writing, prescribing communication and the recognition of ADRs. In addition, apart from one study, all studies reported a high level of success regarding their interventions, whether through students attaining higher scores in traditional assessments, scored treatment plans and OSCEs in comparison to control groups or through students expressing positive views of the educational intervention. Another theme to emerge from this review was the use of small-group teaching. Many of the interventions required multiple small groups of students to be created to deliver the teaching, with one study specifically evaluating the difference in effectiveness between small- and large-group teaching. Along with demonstrating high scores in assessments, small-group teaching was perceived particularly positively in qualitative interviews with students. NMP programmes consist of far fewer students per cohort as compared to medical school programmes; however, studies introducing educational interventions to NMP programmes remain very few, as this review could only locate two studies involving NMP programmes, one introducing a mentoring scheme to NMP students and the other involving an IPL intervention with medical students. Given that certain areas of the literature indicate an incredibly low prescribing error rate for NMPs [12], the specific benefits of small-group teaching in the context of prescribing skills require further investigation. Despite identifying a range of different educational interventions aimed at improving prescribing education, the level of innovation seen in these interventions appears to be low, given that most studies used orthodox teaching methods such as didactic lectures and group exercises throughout. In a literature review, Dearnley et al.
[47] categorised innovation in medical education to include simulation; digital teaching aids; online/e-learning teaching and assessment; social media and virtual learning environments. Only three studies implemented a degree of innovation, where simulated and real-life patients and role-play were used. Here, although one of the studies failed to provide an insight into the content of the simulated consultations, when students were provided with the opportunity to use their prescribing skills on either simulated or real-life patients, their responses were overwhelmingly positive. Some of the studies mentioned the use of self-directed learning aided through an online e-learning system; however, it was unclear what content was included in these e-learning systems. None of the studies implemented the use of social media or innovative uses of virtual learning environments such as virtual reality with virtual patients. Most studies implemented interventions which, for the most part, were based on paper case scenarios. Although, with the exception of one study, all interventions were reported to be successful in improving the prescribing skills and competencies of students and were perceived positively, questions remain about their long-term effects on the prescribing practice of students beyond graduation and into their full-time clinical careers. These studies failed to implement longitudinal follow-up to establish whether their benefits to the prescribing practice of these students are sustained over a long period of time, which would be a more reliable indicator of whether an educational intervention has achieved its desired outcome. Moreover, studies which only assessed the benefits of an intervention through the views and perspectives of the students undertaking them would be greatly enhanced if they utilised assessments and evaluated whether the scores of these assessments supported the positive viewpoints of the students. Given that most studies only assess the short-term impact of educational interventions on prescribing practice, educators should also assess whether the positive impact of these interventions is sustained over a longer period as prescribers advance in their careers. Also, the WHO GGP continues to be a model from which prescribing educators design their teaching approaches. This could partly be due to it providing comprehensive prescribing guidance across many areas of expertise using clinical case scenarios, something established as being core to problem-based learning. Given the lack of educational interventions being evaluated in NMP programmes, it would be prudent to design an intervention around the WHO GGP and evaluate its effectiveness in an NMP setting, owing to the variety of clinical areas of expertise present in NMP programme cohorts. This review did include certain limitations. Because we limited the inclusion criteria to studies involving students only, relevant studies involving junior doctors were excluded. The search strategy also excluded non-English-language papers. In addition, given that the papers we identified reported positive outcomes and perspectives as a result of the interventions, there is also the possibility of positive publication bias. Overall, this review was able to retrieve a broad range of studies investigating various prescribing education interventions.
Conclusion Although a wide range of educational interventions to improve prescribing skills and competencies have been developed, and despite their high short-term success rate in both assessment and student perception, there still exists a lack of innovation in these interventions. Given that other areas of medical education are adapting their teaching approaches to be more innovative with the recent rise in technology, prescribing curricula also need to adapt and to evaluate the scope for implementing educational approaches which utilise innovations such as virtual reality, exploring settings where students can commit errors in a safe environment and learn from them to better their prescribing skills in preparation for real-life clinical practice. Compliance with Ethical Standards Conflict of Interest The authors declare that they have no conflict of interest. Consent for Publication Not applicable Code Availability Not applicable
ANALYSIS OF THE COMPUTATION OF GRÖBNER BASES AND GRÖBNER DEGENERATIONS VIA THEORY OF SIGNATURES. The signatures of polynomials were originally introduced by Faugère for the efficient computation of Gröbner bases [Fau02], and redefined by Arri-Perry [AP11] as the standard monomials modulo the module of syzygies. Since it is difficult to determine signatures, Vaccon-Yokoyama [VY17] introduced an alternative object called guessed signatures. In this paper, we consider a module Gobs(F) consisting of the equivalence classes of the guessed signatures for a tuple of polynomials F. This is the residue module in_≺(Syz(LM(F)))/in_≺(Syz(F)) defined by the initial modules of the syzygy modules with respect to the Schreyer order. We first show that F is a Gröbner basis if and only if Gobs(F) is the zero module. Then we find a necessity to compute divisions of S-polynomials to find Gröbner bases. We give examples of transitions of minimal free resolutions of Gobs(F) in a signature-based algorithm. Finally, we show a connection between the module Gobs(F) and Gröbner degenerations. Introduction The history of computing Gröbner bases began with Buchberger's algorithm, which selects polynomials by running a multivariate division algorithm and adds them to the set of generators until the set satisfies Buchberger's criterion [Buc65]. The ideas of Buchberger's algorithm are still the basis of Gröbner basis computation algorithms, and most algorithms gradually approximate the input polynomial system to a Gröbner basis by iteratively computing the S-polynomials generated by the cancellations of the leading terms. A practical problem with this method is that the intermediate artifacts produced by the procedure are unpredictable with respect to the choice of generators, term order, and so on. This implies a computational difficulty in applications of Gröbner basis theory. Our motivations in this paper are: • to obtain a quantitative cost function of a tuple of polynomials F that predicts the complexity of the computation of a Gröbner basis from F, • to clarify the necessity of the S-polynomial computation for determining a Gröbner basis, and • to represent the computation of Gröbner bases in a geometrical context, for the future construction of new efficient algorithms intrinsically different from Buchberger's algorithm, such as Newton's method, the midpoint method and so on. To realize this, we give an algebraic and geometric analysis of the syzygies of F in their computational aspects with a theory of signatures. We then obtain an object Gobs(F) that corresponds to the computation of a Gröbner basis from F and to a Gröbner degeneration of F, and we prove that remainders of divisions of S-polynomials must be determined to obtain Gröbner bases with smallest signatures. Let R = K[x_1, . . . , x_n] be the polynomial ring with a term order < over a field K, F = (f_1, f_2, . . . , f_m) a tuple of elements in R, and I the ideal generated by F. By R^m = ⊕_{i=1}^m R e_i we denote the free R-module with the basis (e_1, e_2, . . . , e_m) corresponding to F. Assume that R^m is equipped with a term order ≺. The signature S(f) of a non-zero element f in I is defined as S(f) = min_≺{LM(u) | u ∈ R^m, ū = f}, where ū is the image of u under the canonical surjection R^m → I → 0 (see also Definition 3.1, Proposition 3.2). Faugère first introduced the concept of signatures in his F5 algorithm for the efficient computation of Gröbner bases by avoiding reductions to zero [Fau02].
Several researchers proposed many variants of the F5 algorithm, nowadays called signature-based algorithms. Arri-Perry introduced the above definition of the signatures to give a proof of the termination and correctness of the F5 algorithm, and of signature-based algorithms generally, for any input [AP11]. It is difficult to determine the signature of a general polynomial without a Gröbner basis of I or of the syzygy module Syz(F). Vaccon-Yokoyama defined the "guessed" signatures of the S-polynomials as an alternative object to signatures [VY17]. The guessed signatures are determined only from the computational history of the running instance. They then made a simple implementation of a signature-based algorithm. In this paper we introduce a definition of guessed signatures that is different from [VY17]. We define the guessed signatures for pairs (x^α e_i, x^β e_j) of monomials in R^m such that x^α LM(f_i) = x^β LM(f_j) (i < j) as the monomials x^β e_j in the second components (Definition 3.3). If we attach the Schreyer order to R^m (Definition 2.1), the guessed signature of a pair (x^α e_i, x^β e_j) is the leading monomial of x^α e_i − x^β e_j. In fact, the guessed signature of a pair (x^α e_i, x^β e_j) is not always the signature of the S-polynomial; it partly depends on whether the reduction of the S-polynomial is zero or not. From this point of view, in this paper we suppose that the difference between the set of guessed signatures and the set of signatures might predict the behavior of computations of Gröbner bases from F, and then we focus on this difference. From Schreyer's theorem, the set of guessed signatures is the set of leading monomials LM(Syz(LM(F))) of the syzygy module of the tuple LM(F) = (LM(f_1), . . . , LM(f_m)) [Eis95, Theorem 15.10]. Then our main target is the residue module Gobs(F) = LM(Syz(LM(F)))/LM(Syz(F)). From now on we always attach the Schreyer order to R^m. Our contributions in this paper are the following. (A) We give a criterion for Gröbner bases: F is a Gröbner basis if and only if Gobs(F) = 0 (Theorem 3.5). (B) We show that to get a Gröbner basis consisting of polynomials of smallest signatures, we must find remainders of divisions of S-polynomials (Theorem 4.1). (C) We give examples of transitions of Gobs(F) in a signature-based algorithm (Section 5). (D) We find a closed subscheme X in Spec R ×_K A^1_K and a direct summand N(F) of Gobs(F) such that X is a flat deformation of Spec R/I to Spec R/⟨LM(F)⟩ over A^1_K if and only if N(F) = 0 (Theorem 6.6, Lemma 6.8). For (A), a key lemma is the following (see also Lemma 3.7). Lemma 1.1. For any non-zero element f in I, either LM(S(f)) = LM(f) or S(f) is an element of LM(Syz(LM(F))). The contribution (B) is also based on Lemma 1.1. Let us consider finding an element of I whose leading monomial is not in ⟨LM(f_1), . . . , LM(f_m)⟩. Let f_{m+1} be an element of I such that LM(f_{m+1}) ∉ ⟨LM(f_1), . . . , LM(f_m)⟩ and put F′ = (f_1, f_2, . . . , f_m, f_{m+1}). Assume that f_{m+1} = ū for an element u in R^m with LM(u) = S(f_{m+1}). By Lemma 1.1, the equivalence class of S(f_{m+1}) in Gobs(F) is not zero. On the other hand, since u − e_{m+1} ∈ Syz(F′) and LM(u − e_{m+1}) = LM(u) (see Lemma 3.7), we can show that the equivalence class of S(f_{m+1}) in Gobs(F′) is zero. Then one may interpret finding an element f_{m+1} whose leading monomial is not in ⟨LM(f_1), . . . , LM(f_m)⟩ as annihilating a non-zero element of Gobs(F).
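To make this mechanism concrete, consider the following toy example of our own (it does not appear in the paper): in R = K[x, y] with the lexicographic order x > y, take F = (f_1, f_2) = (x^2, x^2 + y). Then

```latex
\[
\begin{aligned}
&\operatorname{Syz}(\mathrm{LM}(F)) = \langle e_1 - e_2 \rangle, \qquad
 \operatorname{Syz}(F) = \langle (x^2 + y)\,e_1 - x^2\,e_2 \rangle,\\
&\mathrm{Gobs}(F) \;=\; \langle e_2 \rangle / \langle x^2 e_2 \rangle
 \;\cong\; \bigl(R/\langle x^2 \rangle\bigr)\, e_2 \;\neq\; 0 .
\end{aligned}
\]
```

Here the unique standard S-pair is p = (e_1, e_2) with Ŝ(p) = e_2, and Spoly(p) = −y has leading monomial outside ⟨x^2⟩, so F is not a Gröbner basis. Appending f_3 = y makes u − e_3 (with u = e_2 − e_1) a syzygy of F′ whose leading monomial is e_2, so the class of e_2 vanishes in Gobs(F′), exactly as described above.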
If F consists of homogeneous elements and the term order < on R is compatible with the ordinary degree on R, one may consider the signature S(f) to be an index of the computational cost of representing f by F, since degrees are a factor in the complexity of computing polynomials [MM84, Giu05, Dub90, BFSY05]. Therefore a naive idea for computing Gröbner bases efficiently is to choose polynomials of small signatures. In fact, several signature-based algorithms follow this idea [AP11, VY17, Sak20] (see also Algorithm 1). We thus pay particular attention to the polynomials whose signature is smallest in Gobs(F). About (C), as mentioned above, some signature-based algorithms can be intuitively thought of as methods that attempt to reduce the size of Gobs(F) by annihilating the smallest elements. However, in Section 5, we observe examples of transitions of Gobs(F) in an implementation of a signature-based algorithm, and we find examples in which the sequence of Gobs(F) does not monotonically approach the zero module during the procedure. On the other hand, observing such examples leads to the conjecture that, in some cases, the first Betti number of Gobs(F) represents the phase of the monomial ideal generated by LM(F). More precisely, some examples satisfy the statement that if the first Betti number increases in a step, then the new leading monomial found in that step divides another leading monomial of the generators (Example 5.1, Example 5.2, Example 5.4). However, the above statement is not true in Example 5.3. Furthermore, in Example 5.3, Gobs(F) is generated by a single equivalence class for the input F; nevertheless the instance does not terminate in a single step. We still do not know what is going on in the background of all this. About (D), we show that Gobs(F) contains flatness obstructions of a family constructed from F in the context of Gröbner degenerations. We therefore call Gobs(F) the module of Gröbnerness obstructions of F in this paper. Let us recall Gröbner degenerations. We call a closed subscheme X in Spec R ×_K Spec K[t] a Gröbner degeneration of Spec R/I if the projection X → Spec K[t] is flat, the generic fibers X_t of the projection over t ≠ 0 are isomorphic to Spec R/I and the special fiber X_0 at t = 0 is isomorphic to Spec R/in_<(I). There exists a Gröbner degeneration constructed from a weighting on the variables [Bay82, Eis95]. Gröbner degenerations are used in studies of degenerations of varieties, homological invariants, Hilbert schemes and so on [Har66, KM05, LR11, CV20, Kam22]. Our main theorem about the relationship between Gobs(F) and Gröbner degenerations is the following. Theorem 1.3 (Theorem 6.6, Lemma 6.8). There exist a closed subscheme X in Spec R ×_K Spec K[t] and a direct summand N(F) of Gobs(F) such that • the generic fibers of the projection X → Spec K[t] over t ≠ 0 are isomorphic to Spec R/I, • the special fiber at t = 0 is isomorphic to Spec R/⟨LM(F)⟩, and • the projection X → Spec K[t] is flat if and only if N(F) = 0. Preliminary Let K be a field. Let R = K[x_1, . . . , x_n] be the polynomial ring over K in n variables attached a term order <. We use the following notation: • ⟨A⟩: the ideal generated by A in R, • LM(f): the leading monomial of f, • x^α = x_1^{α_1} x_2^{α_2} · · · x_n^{α_n} for a vector α = (α_1, α_2, . . . , α_n). In this paper, we always consider a fixed tuple of polynomials F = (f_1, f_2, . . . , f_m) in R. For an element f in R, consider an expression f = h_1 f_1 + h_2 f_2 + · · · + h_m f_m + r with LM(h_i f_i) ≤ LM(f) for each i, and r = 0 or LM(r) ∉ ⟨LM(f_1), . . . , LM(f_m)⟩. We call this form a division of f with F. We also call h_1, . . . , h_m the quotients and r the remainder of this division of f with F. Let I = ⟨F⟩ be the ideal generated by F in R.
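As an illustration of the division just defined, here is a minimal sketch runnable in SageMath (the system used for the paper's computations in Section 5). It only top-reduces, so the loop ends exactly when r = 0 or LM(r) lies outside ⟨LM(f_1), . . . , LM(f_m)⟩, matching the remainder condition above; the input polynomials are toy examples of our own choosing.

```python
# Division of f with F in SageMath (pure-Python syntax).
R = PolynomialRing(QQ, 'x,y,z', order='deglex')
x, y, z = R.gens()

def divide(f, F):
    h, r = [R(0)] * len(F), f
    while r != 0:
        for i, fi in enumerate(F):
            if R.monomial_divides(fi.lm(), r.lm()):
                t = r.lt() // fi.lt()   # cancel the leading term of r
                h[i] += t
                r -= t * fi
                break
        else:
            break                       # LM(r) not divisible: r is a remainder
    return h, r

F = [x**2 + y, x*y + z]
h, r = divide(x**3 + y*z, F)
assert x**3 + y*z == sum(hi * fi for hi, fi in zip(h, F)) + r  # f = sum h_i f_i + r
```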
We call the ideal ⟨LM(f) | f ∈ I \ {0}⟩ the initial ideal of I and denote it by in_<(I). We say F is a Gröbner basis if the initial ideal in_<(I) is generated by the tuple LM(F) = (LM(f_1), . . . , LM(f_m)). For the elementary theory of Gröbner bases, see [Eis95, Section 15]. Let R^m = ⊕_{i=1}^m R e_i be the free R-module of rank m with the basis (e_1, . . . , e_m). A monomial in R^m is an element of the form x^α e_i. In this paper, we always attach the following order to R^m. Definition 2.1. The Schreyer order ≺ on R^m is the order of monomials in R^m such that x^α e_i ≺ x^β e_j if and only if x^α LM(f_i) < x^β LM(f_j), or x^α LM(f_i) = x^β LM(f_j) and i < j. Let u be a non-zero element in R^m. The leading monomial of u is the largest monomial with non-zero coefficient occurring in u. We define the leading coefficient and leading term analogously. We use the following notation: • LM(u): the leading monomial of u, • LC(u): the leading coefficient of u, • LT(u) = LC(u) LM(u): the leading term of u. Let us define the syzygies. Definition 2.2. The notation ū for u denotes the value at u of the R-module morphism R^m → I, e_i ↦ f_i. If ū = 0, then we say u is a syzygy of F. The syzygy module of F is the kernel of the above morphism. We denote the syzygy module of F by Syz(F). In general, generators of the syzygy module Syz(F) depend on F and need precise computation to determine. On the other hand, generators of the syzygy module Syz(LM(F)) are theoretically determined in an explicit form by Schreyer's theorem. Theorem 2.3 (Schreyer). Put m_{ij} = lcm(LM(f_i), LM(f_j)) and s_{ij} = (m_{ij}/LM(f_i)) e_i − (m_{ij}/LM(f_j)) e_j for distinct indexes i, j. Then the set {s_{ij} | i < j} is a Gröbner basis of Syz(LM(F)). In particular, the initial module of Syz(LM(F)) is generated by the set {(m_{ij}/LM(f_j)) e_j | i < j}. Proposition 2.4. It holds that LM(Syz(F)) ⊂ LM(Syz(LM(F))). Proof. For any u ∈ Syz(F), denote u = Σ_{α,i} c_{α,i} x^α e_i and put x^ξ = max{x^α LM(f_i) | c_{α,i} ≠ 0}. Let us divide u into the following two parts: u_0, the sum of the terms c_{α,i} x^α e_i with x^α LM(f_i) = x^ξ, and u_1 = u − u_0. By definition of the Schreyer order, we have LM(u) = LM(u_0), thus it is enough to show that u_0 ∈ Syz(LM(F)). Let us compute ū = ū_0 + ū_1. Since the second sum consists of terms smaller than x^ξ and ū = 0, the coefficient of x^ξ must vanish, that is, Σ c_{α,i} LC(f_i) = 0 over the pairs (α, i) occurring in u_0; since rescaling each generator by a non-zero constant does not change the leading monomials of syzygies, we conclude that LM(u_0) ∈ LM(Syz(LM(F))). Signatures and guessed signatures We recall the definition of the signatures given in [AP11]. Definition 3.1. The signature of a non-zero element f in I is S(f) = min_≺{LM(u) | u ∈ R^m, ū = f}. Proposition 3.2. The set of the signatures {S(f) | f ∈ I \ {0}} equals the following set of monomials: {s | s is a monomial in R^m, s ∉ LM(Syz(F))}. In particular, the set of the equivalence classes of the signatures is a basis of the residue module R^m/Syz(F) as a K-linear space. As the easiest example of signatures, one may hope that S(f_i) = e_i. However, this is wrong in general. For example, assume that f_3 = f_1 + f_2; then the signature of f_3 is not e_3. Indeed, put u = e_1 + e_2. We have ū = f_3 and LM(u) = e_2. Thus the signature of f_3 is less than or equal to e_2. Since we attach the Schreyer order to R^m, we have e_2 ≺ e_3. Therefore we obtain S(f_3) ≺ e_3. Note that, in general, we need a Gröbner basis of Syz(F) to determine the signature S(f) of a given polynomial f. As a more reasonable object than the signatures, we introduce the guessed signatures. Definition 3.3. An S-pair is a pair of monomials (x^γ e_k, x^δ e_ℓ) such that k < ℓ and x^γ LM(f_k) = x^δ LM(f_ℓ). We denote S-pairs as p = (x^γ e_k, x^δ e_ℓ). The S-polynomial of p = (x^γ e_k, x^δ e_ℓ), denoted by Spoly(p), is the polynomial Spoly(p) = (1/LC(f_k)) x^γ f_k − (1/LC(f_ℓ)) x^δ f_ℓ. For an S-pair p = (x^γ e_k, x^δ e_ℓ), we call the second component x^δ e_ℓ the guessed signature of p. We denote the guessed signature of p by Ŝ(p). We say an S-pair is standard if x^γ LM(f_k) = x^δ LM(f_ℓ) = lcm(LM(f_k), LM(f_ℓ)). Remark 3.4. The original definition of guessed signatures is not as in Definition 3.3. We note the original definition that previous studies (for example, [AP11, VY17, Sak20]) used, in the following: fix a tuple F as a set of generators of the ideal I and consider a set G = {g_1, g_2, . . . , g_b}
. , g b } of elements in I including F , we call a pair of generators ( The guessed signature of a pseudo regular S-pair (g i , g j ) is the maximum element of the set {m In our definition (Definition 3.3), we only consider the situation of G = F , omit hypothesis on pseudo regularity, and use x δ e ℓ as the guessed signature instead of x δ S(f ℓ ) for convenience in the latter. Since it holds that x δ e ℓ , one may guess that the signature of the S-polynomial is x δ e ℓ . This is the reason why we call x δ e ℓ the "guessed" signature. In fact, the equality S(Spoly(p)) =Ŝ(p) is a non-trivial condition to determine if F is a Gröbner basis or not. Here we note the mean of the condition (e). Let u = α,i c α,i x α e i be an element of R m such that u = f and LM(u) = S(f ). Assume that S(f ) = x β e j and put x ξ = LM S(f ) = x β LM(f j ). Then by definition of the Schreyer order we have x ξ = max{x α LM(f i ) | c α,i = 0}. We divide f into the following two parts: Proof. The latter part is clear from Theorem 2.3. Let L be the set of the guessed signature of standard S-pairs. For any S-pair p = (x γ e k , x δ e ℓ ), there exists a monomial x λ such that We have x δ = x λ x β and thenŜ(p) = x δ e ℓ = x λŜ (x α e k , x β e ℓ ). Therefore the guessed signatureŜ(p) is a multiple of an element of L and then an element of LM(Syz(LM(F )). Conversely, for any element of u ∈ Syz(LM(F )), there exist a monomial x λ and an elementŜ(x γ e k , x δ e ℓ ) in L such that LM(u) = x λŜ (x γ e k , x δ e ℓ ) =Ŝ(x λ x γ e k , x λ x δ e ℓ ). Therefore LM(u) is the guessed signature of a S-pair (x λ x γ e k , x λ x δ e ℓ ). Proof. Let u = α,i c α,i x α e i be an element of R m such thatū = f and LM(u) = S(f ). Assume that S(f ) = x β e j and put x ξ = LM S(f ) = x β LM(f j ). Then, by definition of the Schreyer order, we have Therefore as the proof of Proposition 2.4, putting Hence it is enough to show that u 0 ∈ Syz(LM(F )). Since Proof. By definition of signatures, inequality LM(u) S(f ) always holds. If the equality LM(u) = S(f ) holds, then we have LM(u) ∈ LM(Syz(F )) from Proposition 3.2. Conversely, if it holds that LM(u) > S(f ), let v be an element of R m such thatv = f and LM(v) = S(f ). Then u − v is a syzygy of F . Therefore we obtain that LM(u) = LM(u − v) ∈ LM(Syz(F )). Proof of Theorem 3.5. [(a) =⇒ (b)] If F is a Gröbner basis, then for any S-pair p = (x γ e k , x δ e ℓ ), there exist polynomials h 1 , h 2 , . . . , h t in R such that Since LM(h i f i ) < x δ f ℓ , we have LM(u) = x δ e ℓ ∈ LM(Syz(F )). Therefore the guessed signature of p is not the signature of Spoly(p) since any signature is not an element of LM(Syz(F )) (Proposition 3.2). [(d) =⇒ (e)] For any non-zero element f ∈ I, the signature S(f ) is not an element of LM(Syz(F )) (Proposition 3.2). From (d), the signature S(f ) is also not an element of LM(Syz(LM(F ))), therefore it holds that LM S(f ) = LM(f ) by Lemma 3.7. As a consequence of Theorem 3.5, we find an algebraic obstacle where the tuple of generators F is a Gröbner basis. Namely, for a tuple of generators F , We can compute the smallest non-zero element of LSyL(F )\LSy(F ) using a step-by-step method. Let p be a standard S-pair such thatŜ(p) = s i . Assume that i = 1 or s 1 , s 2 , . . . , s i−1 ∈ LSy(F ) (i ≥ 2). Then s i ∈ LSy(F ) if and only if the reminder of any division of the S-polynomial Spoly(p) with F is 0. Proof. Assume that p = (x γ e k , x δ e ℓ ). Let h 1 , . . . , h m be the quotients and r i the remainder of any division of Spoly(p) with F . 
Then it holds that We have LM(u) = s i and u = r. It implies that S(r) ≤ s i if r = 0. If r = 0, then the element u is a syzygy of F . Therefore we have s i ∈ LSy(F ). Let us show the converse. If i = 1 and r = 0, then the signature S(r) is an element of LSyL(F ) from Lemma 3.7 since LM(r) ∈ LM(F ) . Therefore we obtain s 1 = S(r) and s 1 ∈ LSy(F ) since s 1 is the minimum element of LSyL(F ). If i ≥ 2 and r = 0, then the signature S(r) is also an element of LSyL(F ). Since S(r) ≤ s i , there exists an index j smaller than or equal to i such that s j |S(r) (note that LSyL(F ) is generated by {s i }). Since s j ∈ LSy(F ) if j < i and S(r) ∈ LSy(F ), we have j = i. Therefore we obtain s i = S(r) and s i ∈ LSy(F ). Why do we need to compute divisions of S-polynomials? As an application of Theorem 3.5, let us give a mathematical answer to the question "Why do we need to compute remainders of divisions of Spolynomials to get Gröbner bases?". As far as the author knows, all previous algorithms for computing Gröbner bases require computing remainders of divisions of S-polynomials by using division algorithms, Macaulay matrices and so on. Thus, several researchers have evaluated the computational complexity and presented improvements of these computations. It is well known that this method certainly produce a non-trivial leading monomial and is a part of the Buchberger's criterion. However, in the context of simply obtaining Gröbner bases, we still do not know if this method is really necessary. From the previous section, we know that in order to get Gröbner bases we have to vanish the non-zero elements in Gobs(F ). Then the following theorem gives a necessity of computing remainders of divisions of S-polynomials. Then we haveū = r and LM(u) = x δ e ℓ = S(f ) ∈ LSy(F ) (Proposition 3.2). Therefore it holds that r =ū = 0 and S(r) = S(f ) from Lemma 3.8. Let v be an element in R m such thatv = f and LM(v) = S(f ). Put f. If g = 0, then we obtain f = LC(v) LC(u) r and LM(f ) = LM(r). If g = 0, then we have LM(w) < LM(u) = S(r). It implies that S(g) < S(f ). Since the signature S(f ) is the minimum element of LSyL(F ) \ LSy(F ), the signature S(g) is not an element of LSyL(F ). Therefore the leading monomial LM(g) is an element of LM(f 1 ), . . . , LM(f m ) from Lemma 3.7. It implies that LM(f ) = LM(r) since those are not elements of LM(f 1 ), . . . , LM(f m ) . Moreover, g consists of the differences of the non-leading terms of f and r. Therefore if the non-leading terms of f and r are not in LM(f 1 ), . . . , LM(f m ) , then g = 0. Examples of transitions of Gobs(F ) in a signature based algorithm Let us look at computational examples of Gobs(F ). We use a naive implementation of a signature based algorithm (Algorithm 1), which is similar to the algorithms presented in [AP11,VY17,Sak20]. The difference of Algorithm 1 is that it iterates to update the tuple of generators F , and then the signatures change for each step. The performance is not discussed here. The termination is clear since R is a Noether ring. We use SageMath [The22] to implement and run Algorithm 1. Example 5.1. Let R = Q[x, y, z] be the polynomial ring equipped with the graded lexicographic order of x > y > z. Let Using Algorithm 1, we get a sequence of tuples F 3 , F 4 , . . . , F 11 such that • the signature of f j+1 with respect to F j is the minimum element of LSyL(F j ) \ LSy(F j ) and • F 11 is a Gröbner basis of I = f 1 , f 2 , f 3 . Let us observe transition of Gobs(F i ). 
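(The resolutions themselves follow below.) Since the explicit generators f_1, f_2, f_3 of Example 5.1 are not legible in this copy, the following SageMath sketch uses two placeholder polynomials purely to illustrate the elementary step that Algorithm 1, like any Buchberger-style procedure, repeats: form the S-polynomial of an S-pair (Definition 3.3) and take the remainder of a division by the current tuple F. The ring, the generators and the helper name s_polynomial are illustrative assumptions, not data from the paper.

```python
# Minimal SageMath sketch of the step discussed above: build an S-pair's
# S-polynomial (Definition 3.3) and reduce it by the current tuple F.
# The ring and the generators f1, f2 below are placeholders for illustration;
# they are not the f_1, f_2, f_3 of Example 5.1.
R = PolynomialRing(QQ, 'x, y, z', order='deglex')   # graded lexicographic, x > y > z
x, y, z = R.gens()

f1 = x*y - z**2
f2 = y**2 - x*z
F = [f1, f2]

def s_polynomial(f, g):
    """Spoly of the S-pair formed from f and g."""
    L = lcm(f.lm(), g.lm())              # least common multiple of the leading monomials
    return (L // f.lt()) * f - (L // g.lt()) * g

sp = s_polynomial(f1, f2)
r = sp.reduce(F)                         # remainder of a division of Spoly by F

print("Spoly(f1, f2) =", sp)
print("remainder     =", r)
print("LM(r) in <LM(F)>?", r == 0 or r.lm() in R.ideal([f.lm() for f in F]))
# A non-zero, fully reduced remainder exhibits a leading monomial outside <LM(F)>;
# appending it to F is the update that each iteration of Algorithm 1 performs.
```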
The following are minimal free resolutions of Gobs(F i ) computed by sage math packages, and we also compare the monomial ideals generated by LM(F i ). The generator of each monomial ideal wrote in the last is the new leading monomial LM(f i ) added in that x 3 y, xyz, xy 2 , z 3 , Gobs(F 10 ) ← R 1 ← R 2 ← R 1 ← 0, LM(F 10 ) = xy, z 3 , y 2 z, y 3 , xz 2 , yz 2 , Gobs(F 11 ) ← 0, LM(F 11 ) = xy, z 3 , y 2 z, y 3 , yz 2 , xz . From Theorem 4.1, the leading monomial of the remainder of any division of Spoly(xe 1 , ye 2 ) with F is constant and it is xw 2 by computing a reduction of Spoly(xe 1 , ye 2 ). However, in fact, we have xy, x 2 , zw, xw 2 in < (I) = x 2 , xy, xw 2 , y 4 , y 3 z, zw , therefore we do not obtain a Gröbner basis of I by eliminating the minimum guessed signature ye 2 . Moreover, the first Betti number of the module of Gröbnerness obstructions increase. On the other hand, if we set the degree reversed lexicographic order of x > y > z > w on R, then we obtain a Gröbner basis of I by only one step that reduces Spoly(xe 1 , ye 2 ). Let us observe the transitions of Gobs(F i ) in these two cases. For the lexicographic order: x 2 , xy, zw, xw 2 , For the graded reverse lexicographic order: Example 5.4. Let us consider the case of coefficients in a finite field. Let R = Z/5Z[x, y, z] be the polynomial ring equipped with the degree lexicographic order of x > y > z. Let f 1 = xy + 4z + 2, f 2 = xyz + y 2 + 1, Then we get a sequence of tuples F 3 , F 4 , . . . , F 9 with the same conditions as in Example 5.1. Let us see the transition of Gobs(F i ): Gobs(F 9 ) ← 0, LM(F 9 ) = xy, y 2 , xz, z 3 , yz 2 , x 2 . From the above examples, the sequence of Gobs(F i ) does not monotonically go to the zero-module in general. Moreover, the sequence of Betti numbers or projective dimensions of Gobs(F i ) also does not monotonically go to 0. Here one may suggest the following question. Question. Does there exists an algorithm such that the values of some invariant of Gobs(F i ) monotonically go to 0? Is it fast? It seems that the increase and decrease of the first Betti numbers link to phases of the leading monomials (Example 5.1, Example 5.2, Example 5.4). However, there is an exceptional example (Example 5.3). We have not yet obtained consideration of it in this paper. Gröbner degenerations and signatures In fact, there exists an affine scheme X in A n K × K A 1 K such that the projection π : X → A 1 K is flat, generic fibers X t = π −1 (t) over t = 0 are isomorphic to the affine scheme Spec R/I, and the special fiber X 0 = π −1 (0) is isomorphic to the affine scheme Spec R/(in < I) [Bay82,Eis95]. Such affine schemes are called Gröbner degenerations of Spec R/I. We recall how to construct a Gröbner degeneration from a Gröbner basis and a weighting vector of positive integers. Definition 6.1. Let A be a finite set of monomials. In fact, there exists a vector of positive integers ω ∈ Z n >0 such that for any monomials x α , x β ∈ A, x α < x β if and only if ω · α < ω · β [Rob85]. Here we denote by ω · α the ordinal inner product of ω and α. We say that such vector ω is compatible with A. Definition 6.2. Assume that a vector of positive integers ω ∈ Z n >0 is compatible with the set of monomials appeared in elements of F . We define the ω-degree of a monomial x α as deg ω x α = ω · α. Also for any element f ∈ I, we define the ω-degree of a polynomial f as deg ω f = max{deg ω x α | x α appears in f }. 
We denote by Top_ω f the sum of all terms of f whose ω-degree equals deg_ω f, and call Top_ω f the top terms of f with respect to ω. Let f = Σ_α c_α x^α be an element of R. For a new variable t, independent of x_1, . . . , x_n, we define two associated objects. The former is an element of the Laurent polynomial ring R[t, t^{-1}] = R ⊗_K K[t, t^{-1}]; the latter, written f^{(t)}, is an element of the polynomial ring R[t]. Moreover, f^{(t)} is a homogeneous element of R[t] with respect to the grading deg_ω t^d x^α = d + ω · α, and we have (f^{(t)})|_{t=0} = Top_ω(f). In what follows, we fix the setting of Definition 6.2 and assume that all elements of F are monic (namely, LC(f_i) = 1); therefore the top terms Top_ω(f_i) are monic as well. The fibers X_t over t ≠ 0 are isomorphic to Spec R/I. Moreover, if F is a Gröbner basis, this family is flat over A^1_{K,t} = Spec K[t] and the special fiber at t = 0 is isomorphic to Spec R/(in_< I). Our goal in this section is to give necessary and sufficient conditions under which the family X = Spec R[t]/⟨F_ω^{(t)}⟩ is flat. Considering initial modules in R^m, we obtain the following corollary of Theorem 6.4, which states a relationship between flatness and the guessed signatures. Corollary 6.5. The family Spec R[t]/⟨F_ω^{(t)}⟩ . . . We denote by LImSy(F_ω^{(t)}) . . . Proof. If F is a Gröbner basis, then by Theorem 3.5, Theorem 6.3 and Corollary 6.5 we have that Spec R[t]/⟨F_ω^{(t)}⟩ . . . Under an additional assumption on the weight vector ω, we show that the set LSy(F) is included in LImSy(F_ω^{(t)}). Lemma 6.7. Let V_F = {v_1, . . . , v_b} be a Gröbner basis of the syzygy module Syz(F). Let A be the union of the following sets of monomials in R: • {x^α | x^α appears in an element of F}, • {x^α LM(f_i) | x^α e_i appears in an element of V_F}. Assume that ω is compatible with A. Then for any element v of V_F, it holds that LM(Top_ω(v)) = LM(v).
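To make Definition 6.2 concrete, the following SageMath sketch computes the ω-degree, the top terms Top_ω f, and a t-homogenization f^{(t)} with the stated property (f^{(t)})|_{t=0} = Top_ω(f) for a toy polynomial. The weight vector and the polynomial are illustrative choices of ours, and the explicit formula used for f^{(t)} (the factor t^{deg_ω f − ω·α} attached to each term c_α x^α) is one standard realization compatible with the grading above, not necessarily the paper's own displayed definition, which is not legible in this copy.

```python
# Toy illustration of Definition 6.2 in SageMath.  The weight vector, the
# polynomial and the homogenization formula below are illustrative assumptions.
Rt = PolynomialRing(QQ, 'x, y, z, t')
x, y, z, t = Rt.gens()
omega = (3, 2, 1)                          # weights for x, y, z (t handled separately)

def omega_degree(f):
    # f is assumed not to involve t, so only the x, y, z exponents matter
    return max(sum(w * a for w, a in zip(omega, e)) for e in f.exponents())

def top_terms(f):
    """Top_omega(f): the sum of all terms of f of maximal omega-degree."""
    d = omega_degree(f)
    return sum(c * m for c, m in zip(f.coefficients(), f.monomials())
               if omega_degree(m) == d)

def t_homogenization(f):
    """One realization of f^(t); note that (f^(t))|_{t=0} = Top_omega(f)."""
    d = omega_degree(f)
    return sum(c * t**(d - omega_degree(m)) * m
               for c, m in zip(f.coefficients(), f.monomials()))

f = x*y - z**2 + y
print(top_terms(f))                        # x*y            (omega-degree 5)
print(t_homogenization(f))                 # x*y - t^3*z^2 + t^3*y
print(t_homogenization(f).subs({t: 0}))    # x*y  =  Top_omega(f)
```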
2023-05-24T06:14:59.039Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "0d4c44ebdba409a2e1e7aefbc82c42882b0a7d35", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0d4c44ebdba409a2e1e7aefbc82c42882b0a7d35", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
210195582
pes2o/s2orc
v3-fos-license
Immediate early gene fingerprints of multi-component behaviour The ability to execute different responses in an expedient temporal order is central for efficient goal-directed actions and often referred to as multi-component behaviour. However, the underlying neural mechanisms on a cellular level remain unclear. Here we establish a link between neural activity at the cellular level within functional neuroanatomical structures to this form of goal-directed behaviour by analyzing immediate early gene (IEG) expression in an animal model, the pigeon (Columba livia). We focus on the group of zif268 IEGs and ZENK in particular. We show that when birds have to cascade separate task goals, ZENK expression is increased in the avian equivalent of the mammalian prefrontal cortex, i.e. the nidopallium caudolaterale (NCL) as well as in the homologous striatum. Moreover, in the NCL as well as in the medial striatum (MSt), the degree of ZENK expression was highly correlated with the efficiency of multi-component behaviour. The results provide the first link between cellular IEG expression and behavioural outcome in multitasking situations. Moreover, the data suggest that the function of the fronto-striatal circuitry is comparable across species indicating that there is limited flexibility in the implementation of complex cognition such as multi-component behaviour within functional neuroanatomical structures. Results Comparison of ZENK expression between the GO, STOP and STOP-CHANGE groups. In a first step we wanted to determine which brain areas were involved in multi-component behaviour in pigeons. Therefore, pigeons were divided in three groups. The GO group only performed GO trials and served as a baseline condition controlling for movement and reward related neural activity during the task (Fig. 1A). The STOP group performed GO and STOP trials and was important to dissociate simple STOP processes from STOP-CHANGE processes (Fig. 1B). The STOP-CHANGE group performed the whole STOP-CHANGE paradigm (see methods for more details) and was the experimental group of this study in which multi-component behaviour was tested (Fig. 1C). In the final test session, all groups received 400 trials in their particular paradigm and pigeons were perfused 60 minutes after the first trial had been started. With stainings against the immediate early gene ZENK we could quantify the contribution of the NCL, the striatum, the arcopallium and the dorsal portion of the dorsomedial hippocampus (DMd) to the above-mentioned processes. Group differences in the number of IEG-expressing neurons in all areas of interest were analyzed with a repeated measures ANOVA using the within-subject factor "area" and the between subject factor "group". We found a main effect of area (F (3,45) = 17.98, p < 0.001, η p 2 = 0.545) and a main effect of group (F (2,15) = 6.33, p = 0.010, η p 2 = 0.458). Importantly, there was also an interaction between the factor "group" and "area" (F (6,45) = 3.19, p = 0.011, η p 2 = 0.298). Bonferroni corrected pairwise comparisons revealed that the NCL was significantly more active in the STOP-CHANGE group (1081 cells per mm 2 ± 339 SEM) compared to the GO group (172 cells per mm 2 ± 45 SEM; p = 0.009; Fig. 2A,F). Furthermore, the ZENK expression in the NCL was significantly higher in the STOP-CHANGE group as compared to the STOP group (292 cells per mm 2 ± 92 SEM; p = 0.048; Fig. 2A,F). The ZENK expression pattern in the striatum was very similar to that of the NCL. 
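The design just described, a repeated measures ANOVA with the within-subject factor "area" and the between-subject factor "group" followed by Bonferroni-corrected pairwise comparisons (the striatal comparisons continue below), can be sketched in Python along the following lines. The long-format data layout, the column names and the use of the pingouin package are assumptions made for illustration; the section reports only the resulting F, p and η²_p values and does not name the software used for the original analysis.

```python
# Hedged sketch of the reported design: repeated-measures factor "area"
# (NCL, striatum, arcopallium, DMd) x between-subject factor "group"
# (GO, STOP, STOP-CHANGE), then Bonferroni-corrected pairwise comparisons.
# File name, column names and the long-format layout are illustrative assumptions.
import itertools
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("zenk_counts.csv")         # columns: pigeon, group, area, cells_per_mm2

aov = pg.mixed_anova(data=df, dv="cells_per_mm2",
                     within="area", subject="pigeon", between="group")
print(aov)                                   # main effects of area and group + interaction

# Bonferroni-corrected pairwise group comparisons within each area
groups = ["GO", "STOP", "STOP-CHANGE"]
pairs = list(itertools.combinations(groups, 2))
n_tests = len(pairs) * df["area"].nunique()
for area, (g1, g2) in itertools.product(df["area"].unique(), pairs):
    a = df.query("area == @area and group == @g1")["cells_per_mm2"]
    b = df.query("area == @area and group == @g2")["cells_per_mm2"]
    t, p = stats.ttest_ind(a, b)
    print(area, g1, "vs", g2, "p_bonf =", min(p * n_tests, 1.0))
```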
Bonferroni corrected pairwise comparisons revealed that ZENK expression in the striatum was significantly higher in the STOP-CHANGE group (1599 cells per mm 2 ± 370 SEM) compared to the GO group (479 cells per mm 2 ± 198 SEM; p = 0.031; Fig. 2B,F). However, ZENK expression in the striatum in the STOP-CHANGE group was not significantly increased as compared to the STOP group (582 cells per mm 2 ± 114 SEM; p = 0.177; Fig. 2B,F). Another area of interest was the arcopallium which is thought to be a functional equivalent to the mammalian pre/motorcortex 32 . We analyzed this area because it receives motor input from the NCL and should be activated in tasks that require motor feedback 15 . We found a significant difference between groups within this area. The arcopallium was significantly more active in the STOP-CHANGE group (432 cells per mm 2 ± 103 SEM) as compared to the GO group (145 cells per mm 2 ± 42 SEM; p = 0.020; Fig. 2C,F). Furthermore, the ZENK expression in the arcopallium was significantly higher in the STOP-CHANGE group compared to the STOP group (97 cells per mm 2 ± 17 SEM; p = 0.004; Fig. 2C,F). As the hippocampus is not expected to be involved in STOP-CHANGE processes, its subdivision DMd was analyzed as a control area to ensure that group differences were not the result of varying staining intensities. ZENK expression in DMd was similar between the STOP-CHANGE group (428 cells per mm 2 ± 99 SEM) and the GO group (430 cells per mm 2 ± 138 SEM; p = 1.000; Fig. 2D,F). Additionally, the activity within DMd was similar between the STOP-CHANGE group and the STOP group (320 cells per mm 2 ± 134 SEM; p = 0.787; Fig. 2D,F). In all tested areas, the GO and the STOP group displayed similar patterns of activation (for all comparisons p = 1.000; Fig. 2F). As already outlined above, the ANOVA also revealed a main effect of area (F (3,45) = 17.980, p < 0.001) indicating that the brain areas significantly differed in their activity. However, this could easily reflect simple differences in neuron densities. To distinguish the relative importance of the NCL and the striatum that can be traced back to STOP-CHANGE processes, the relative increase in ZENK positive cells between all conditions must be considered. Interestingly, the striatum was significantly more active than the NCL in the GO group (NCL: 172 cells per mm 2 ± 45 SEM; striatum: 479 cells per mm 2 ± 198 SEM; p = 0.047; Fig. 2F) and STOP group (NCL: 292 cells per mm 2 ± 92 SEM; striatum: 582 cells per mm 2 ± 114 SEM; p = 0.049; Fig. 2F). This was however not the case in the STOP-CHANGE group (NCL: 1081 cells per mm 2 ± 339 SEM; striatum: 1599 cells per mm 2 ± 370 SEM; www.nature.com/scientificreports www.nature.com/scientificreports/ p = 0.943; Fig. 2F). The relative increase between the STOP and STOP-CHANGE group in neuronal activity that reflects the CHANGE process was 1.3 times greater for the NCL than for the striatum. Correlation of ZENK expression in the STOP-CHANGE group with the efficiency of multi-component behavior. Another goal of this study was to determine whether the brain activity as measured with ZENK expression was directly correlated with the efficiency of multi-component behaviour. Therefore, we calculated In that paradigm, pigeons received 70% GO trials, where pecking the GO stimulus (green circle) terminated the trial and 30% STOP trials, where a red-light signaled that pecking had to be inhibited for 5 s to terminate the trial. (C) Schematic illustration of the STOP-CHANGE paradigm. 
This group received 70% GO trials where responding to the GO stimulus (left green circle) terminated the trial and 30% STOP-CHANGE (SC) trials where responding to the CHANGE signal (right white circle) terminated the trial. The STOP-SIGNAL delay (SSD) was adjusted by means of a staircase procedure. The STOP-CHANGE delay (SCD) between the onset of the STOP and CHANGE stimuli was fixed and set to 0 ms in half of the SC trials and to 300 ms in the other half. The CHANGE stimulus could appear at two locations (top right and bottom right circle). in NCL, striatum, arcopallium and DMd across all three groups (GO group: light blue, STOP group: darker blue, STOP-CHANGE group: dark blue). ZENK expression was significantly increased in NCL, arcopallium and striatum in the STOP-CHANGE group compared to the GO group. Furthermore, in the NCL and arcopallium ZENK expression was significantly increased in the STOP-CHANGE group compared to the STOP group. In the control area DMd no differences were found between the conditions. The error bars represent the standard error of the mean (SEM). Abbreviations: A: arcopallium; DMd: the dorsal part of the dorsomedial hippocampus, E: entopallium; GP: globus pallidus; NCL: nidopallium caudolaterale; St: striatum *p < 0.05, **p < 0.01. All scale bars represent 50 µm. www.nature.com/scientificreports www.nature.com/scientificreports/ an individual slope value between the CHANGE (GO2) response times (RTs) in the "SCD 0" and "SCD 300" condition for all pigeons that performed the STOP-CHANGE paradigm (for more details see method section). This slope value indicates whether the task was solved using a parallel processing strategy (slope value closer to 1, less efficient) or a serial processing strategy (slope value closer to 0, more efficient) 1,4,31 . This slope value was correlated with the number of IEG expressing neurons in all brain areas of interest. For this data analysis the NCL was subdivided into NCL pars lateralis (NCLl; Fig. 3A) and NCL pars medialis (NCLm; Fig. 3B) since both subdivisions have different neuroanatomical target regions. While the NCLm projects to the medial striatum (MSt), the NCLl projects to the arcopallium 24 . The histological data furthermore suggested to subdivide the striatum into the medial striatum (MSt, Fig. 3C) and the lateral striatum (LSt, Fig. 3D). ZENK expression in the NCLl, the NCLm and the MSt was macroscopically different between pigeons that used a rather serial processing strategy www.nature.com/scientificreports www.nature.com/scientificreports/ Discussion In the current study we examined the IEG-correlates of multi-component behaviour. Thus far, previous studies in humans were able to delineate the functional neuroanatomical network and some neurobiological insights into the mechanisms of multi-component behaviour using pharmacological and genotyping approaches. However, in-depth correlates at a cellular level have remained elusive. To this end, we examined IEG expression (i.e. ZENK expression) in an animal model (i.e. pigeons), which have previously been shown to be able to display multi-component behaviour 10 . Based on the analogy of the NCL and the PFC and the homology of the avian and mammalian striatum, we hypothesized that both structures are involved in multi-component behaviour and would display increased ZENK expression compared to a simple stimulus response task as was conducted in the GO control group and compared to a pure response inhibition task as was conducted in the STOP group. 
Furthermore, we hypothesized that the brain activity measured in the STOP-CHANGE group within both areas would be correlated with the efficiency of multi-component behaviour. Additionally, we analysed ZENK expression in the arcopallium which is comparable to the mammalian premotor cortex 32 . Activity within all tested brain areas and groups was symmetrical between both hemispheres. While a lot of studies report behavioural and neuronal asymmetries in the avian literature [33][34][35] , it needs to be noted that that the basis of brain asymmetries mostly www.nature.com/scientificreports www.nature.com/scientificreports/ refers to timing differences between the two hemispheres 36 . However, the temporal resolution of ZENK is not fast enough to depict such timing differences. Moreover, ZENK is not sensitive enough to visualize certain aspects of lateralization as recently corroborated by an fMRI and ZENK study 37 . It is therefore still possible that aspects of multi-component behaviour are lateralized that could not be identified with ZENK. In the following, the findings will be summarized and discussed for every area separately. Similar to several human studies providing evidence that the prefrontal cortex plays an important role 1,3-7 , we observed ZENK activity in the NCL. ZENK activity was significantly upregulated when birds performed a STOP-CHANGE paradigm, as compared to a condition with simple movement execution (GO group) and as compared to a condition with movement inhibition (STOP group). Furthermore, the ZENK expression in the NCL was significantly correlated with the efficiency of multi-component behaviour. For this correlation analysis, the NCL was subdivided into the NCL pars medialis (NCLm) and the NCL pars lateralis (NCLl). While NCLm is highly connected to MSt, resembling the fronto-striatal network in mammals 15,24 , NCLl shows strong projections to the arcopallium as a motor output structure 15,38 . Additionally, NCLl receives massive sensory input from secondary sensory areas and also projects back to these structures 15 . Both the NCLl and the NCLm revealed strong linear correlations between the number of ZENK-positive cells and the slope of the SCD-RT2 function, which provides an index of the efficacy of multi-component behaviour (see methods section for details). A steeper slope of the SCD-RT2 function has been shown to indicate less efficient multi-component behaviour (parallel processing) 2,4,31 . Thus, the data show that both subdivisions of the NCL revealed stronger activity when multi-component behaviour was less efficient. While this shows that both NCL parts are involved in multi-component behaviour, it is possible that this is due to different reasons: As the NCLm is part of a network resembling the fronto-striatal network, it is possible that this subdivision is responsible for parts of multi-component behaviour that are governed by prefrontal areas in humans 1,3,4,6-9 . Activity in the NCLl, however, could be explained by the tasks necessity to use goal-directed movements to solve this particular paradigm, as premotor units have been associated with this area 39 . Moreover, given the sensory input to NCLl, activity in this area could be a response to stimuli of different modalities involved in this task. In fact, the complexity of sensory integration processes has been shown to modulate neural processes in humans while performing an equivalent task to measure multi-component behaviour 6,7,40,41 . 
Furthermore, also in human studies correlations between EEG correlates and the efficiency in multi-component behaviour have been found indicating that stronger amplitudes were associated with less efficient multi-component behavior 1,2,4 . While the results obtained from ZENK expression studies are not directly comparable to EEG amplitudes, the explanation for both findings might be similar: As outlined, the slope of the SCD-RT2 function becomes steeper whenever STOP and CHANGE stimuli are processed at the same time (i.e. in parallel). When STOP-and CHANGE-associated task goals are processed in parallel, reaction times increase because these processes must share a limited capacity. Especially the prefrontal cortex which is the mammalian equivalent to the NCL 11,23 , is subject to simultaneity constraints. The same lateral prefrontal neurons/circuits have been shown to respond to very different stimuli under different task conditions 3,42,43 . The increased activation as indicated by an increased ZENK-positive cell count might represent an attempt to process different task goals simultaneously. Overall, the ZENK expression data in the NCL indicates that this structure is important for multi-component behaviour in pigeons. As the PFC has been shown to be involved in multi-component behaviour in several human studies 1,3-7 this finding illustrates a further functional similarity of the avian NCL to the mammalian PFC. Since the PFC and the NCL are only functionally comparable but differ in their anatomical position 15 and genetic profile 22 , they cannot be considered homologous 23 . Thus, our data provide further evidence for the idea that the avian NCL and the mammalian PFC are a case of evolutionary convergence and that there is limited flexibility in the implementation of complex cognition such as multi-component behaviour within functional neuroanatomical structures. Comparable to the results of the NCL, ZENK activity in the striatum was significantly upregulated when birds performed a STOP-CHANGE paradigm, as compared to a condition with simple movement execution (i.e. GO trials). However, ZENK expression in the striatum was not significantly increased as compared to a condition with motor response inhibition (STOP group). Thus, it cannot completely be ruled out that the striatal ZENK activity in the STOP-CHANGE group was the result of STOP processes rather than CHANGE processes. Nevertheless, ZENK activity in the MSt was significantly correlated with the efficiency of multi-component behaviour. This indicates that activity within this area is important for the outcome of multi-component behaviour. In contrast to this, ZENK activity in the LSt was not correlated with the efficiency of multi-component behaviour suggesting subregion-specific differences in the functionality of the avian striatum. Both, the MSt and the LSt, are thought to be homologous to the mammalian striatum (i.e. caudate and putamen) based on similar neurochemistry as well as shared hodological and developmental traits 25 . Yet, both subdivisions are not completely identical in their cellular composition and circuitry 24 . While the MSt contains cholinergic and medium-sized aspiny GABAergic interneurons that express neuropeptide Y (NPY) and somatostatin 24,44 , those cells are fewer or even absent in LSt 24,44,45 . This suggest that different neuronal computations can be performed within both striatal structures. 
Furthermore, while the MSt has a strong projection to the substantia nigra and projects only sparsely to the globus pallidus, the LSt shows the reversed pattern with stronger projections to the globus pallidus and only sparse projections to the substantia nigra 24,45,46 . The functional implication of this has not been investigated yet. However, it is conceivable that structures with different cell types and main targets could vary in their functionality. For example, in mammals, the substantia nigra and the globus pallidus externa belong to different subsystems. While the substantia nigra is part of the direct pathway, the globus pallidus externa is part of the indirect pathway 47 www.nature.com/scientificreports www.nature.com/scientificreports/ The finding that MSt was involved in the outcome of multi-component behaviour is in line with conceptual accounts suggesting that the basal ganglia medium spiny neuron system constitutes an important structure mediating response selection processes [49][50][51][52] . Furthermore, the finding is in line with several human studies showing striatal activation during multi-component behaviour 8,9,53 . Also in humans, correlations between striatal activity as measured with BOLD activation and the efficiency of multi-component behaviour have been observed, where a lower BOLD activation in the caudate nucleus was associated with inefficient (parallel) processing, and higher BOLD activation in the caudate nucleus was associated with a more efficient (serial) processing mode 9 . The authors linked this finding to the proposed role of the striatum in producing sequential representations of actions (i.e. action-chunking 54 ). They argued that an increased activation of the striatum is thought to strengthen its role in action-chunking and therefore enforce a rather serial processing of cascaded actions 9 . At first glance, this correlation seems to be in the opposite direction to our findings in the MSt of pigeons, where a greater activation was associated with less efficient parallel processing. However, it needs to be noted, that results obtained from fMRI and ZENK studies are not directly comparable. BOLD reflects the overall activity within a chosen area at a specific point in time but not on a single cell level, whereas ZENK expression indicates the amount of cells that was recruited during a whole session. A possible explanation for our correlation between a high ZENK expression in MSt and a parallel/less efficient processing strategy might be that when more cells are recruited this leads to more interference and thus more inefficient/parallel processing. This result is in line with models that describe action selection as a function of multiple parallel loops running through the basal ganglia. According to those models, the most active loop dominates the selected response, whereas activity within multiple loops creates interference 51 . Taken together, the data suggests that similar to humans, striatal structures in pigeons play an important role during multi-component behaviour indicating that there are evolutionary conserved mechanisms of this behaviour. Another finding of the study was that the ZENK expression in the arcopallium was significantly upregulated when birds performed the STOP-CHANGE paradigm, as compared to a condition with simple movement execution (GO group) and as compared to a condition with movement inhibition (STOP group). 
However, the activity within the arcopallium was not correlated to the efficiency of multi-component behaviour. Activity within this area might have reflected the task's necessity to use goal-directed movements as this structure is functionally comparable to the mammalian pre/motorcortex 32 . Neurons within this area project bilaterally via the tractus occipitomesencephalicus (TOM) to brainstem nuclei to regulate body, head and beak movements [55][56][57] . Electrophysiological studies have found that most neurons within the arcopallium are visuomotor neurons that start firing after GO-stimulus presentation and stop firing after the animal has responded. However, those neurons are not responsive to STOP signals 36 . The finding that the STOP group displayed the lowest ZENK cell count and the STOP-CHANGE group displayed the highest ZENK cell count is well in line with this electrophysiological study. The STOP group encountered 30% STOP trials that probably did not elicit any activity in arcopallial visuomotor neurons. In contrast to this, the STOP-CHANGE group encountered 30% CHANGE trials in which two GO stimuli were presented (GO-stimulus and CHANGE-stimulus) that probably both elicited neuronal activity within different arcopallial visuomotor neurons. Thus, the activity within the arcopallium probably reflected the complexity of visuomotor behaviour that was greater in the STOP-CHANGE group as compared to the STOP and GO group. This idea is further supported by the finding that the ZENK activity within the arcopallium was not significantly correlated with the efficiency in multi-component behaviour suggesting a visuomotor rather than a cognitive mechanism. To summarize, the current data show that comparable to human studies, the "avian PFC" as well as the MSt are involved in multi-component behaviour, and the activity in both areas is directly correlated to its efficiency indicating a similar function of the fronto-striatal circuitry between species in multi-component behaviour. With this study we furthermore provide a first step towards an appropriate animal model for future mechanistic studies in which neuronal activity can be influenced with methods such as optogenetics to investigate the direct effect of stimulation on the processing mode of multi-component behaviour. Materials and Methods Experimental subjects. For this study, N = 18 adult homing pigeons (Columba livia) of undetermined sex were obtained from local breeders. They were individually caged and placed on a 12-hour light-dark cycle. During the time period of training and testing, the birds were maintained at approximately 85% of their free feeding weight. All experiments were performed according to the principles regarding the care and use of animals adopted by the German Animal Welfare Law for the prevention of cruelty to animals as suggested by the European Communities Council Directive of November 24, 1986 (86/609/EEC) and were approved by the animal ethics committee of the Landesamt für Natur, Umwelt und Verbraucherschutz NRW, Germany. All efforts were made to minimize the number of animals used and to minimize their suffering. Skinner boxes. All experiments were conducted in conventional Skinner boxes (32 cm (w) × 34 cm (d) × 32 cm (h)). All Skinner boxes were equipped with white and red house lights and four transparent, round pecking keys (1.5 cm in diameter). Three keys were located on the front panel and one on the left side panel. 
A monitor was attached behind the front panel to display color stimuli behind the pecking keys. The initialization key on the side panel was illuminated by a blue LED. Below the keys on the front panel, a feeder was located, where the birds received a food reward consisting of mixed grains when responding correctly within the paradigm. A feeder light was positioned immediately above the feeder and indicated when the feeder was activated and food was available. All programs for this experiment were created using MATLAB and the Biopsychology Toolbox 58 . www.nature.com/scientificreports www.nature.com/scientificreports/ STOP-CHANGE paradigm (STOP-CHANGE group). For this study, a STOP-CHANGE paradigm was used 10 that reflects a direct translation of a human paradigm used in previous studies delineating the functional neuroanatomical and neurobiological correlates of multi-component behaviour in humans 1,4,[6][7][8][9] . The paradigm is shown in Fig. 1C. The first phase of training consisted of an autoshaping phase, where the birds learned to associate pecking on the illuminated keys with food reward. After the pigeons had learnt to peck on all the keys, they were trained in a STOP-CHANGE paradigm with a total of 400 trials. Each trial began with the illumination of the blue initialization key on the side panel, along with the presentation of a tone, which indicated the start of a trial. After initialization, a GO stimulus (left green pecking key on the front panel) was presented after a delay of 900 ms. This delay allowed the pigeon to turn towards the front in anticipation of the GO stimulus. In 70% of trials, pecking on the GO stimulus was the correct behaviour, which was rewarded with 2 s of food reward (GO trial). In the other 30% of trials, a STOP signal was presented after the GO stimulus by turning on the red houselight. This signal indicated that the reaction to the GO stimulus had to be inhibited and a reaction to a STOP-CHANGE (SC) stimulus had to be performed. The time between the onset of the GO stimulus and onset of the STOP signal (STOP-SIGNAL delay, SSD) was initially set to 450 ms and adjusted using a staircase procedure 31 . It was modified so that the probability of successfully interrupted GO responses was 50%. If a pigeon was able to inhibit its reaction to the GO stimulus and subsequently reacted to the SC stimulus, the SSD was shortened by 50 ms for the next trial. If the animal failed to perform both actions, the SSD was prolonged by 50 ms in the next trial. The SC stimulus consisted of a white illumination of either the upper or lower pecking key to the right of the GO key. The SCD between the presentation of the STOP and the SC stimuli was 0 ms in 50% of all STOP-CHANGE trials and 300 ms in the other 50% of trials. If the pigeon correctly pecked on the SC stimulus it received a 2 s food access reward (STOP-CHANGE trial). If it incorrectly pecked on the GO stimulus at any point after the STOP signal appeared, the lights in the box turned off for 5 s. The end of each trial consisted of a 5 s inter-trial interval before the blue initialization key and a tone signaled the start of the next trial (Fig. 1C). An important aspect to consider is that the analysis of IEG in pigeons performing the paradigms reflects activities to CHANGE, STOP and usual GO processes. Therefore, it is necessary to examine pigeons in two 'control experiments' in which only GO or GO and STOP processes are required. GO group. 
The GO group was trained in the same Skinner boxes and performed the same autoshaping procedure as the STOP-CHANGE group (see Skinner boxes and STOP-CHANGE paradigm). This group was only trained with GO trials and served as a control group for the basic conditions of the paradigm since this condition contains all task-relevant motor executions such as pecking, eating, retrieval of the task and general movement in the Skinner box. ZENK expression patterns going beyond the ZENK expression in this control group can therefore be attributed to STOP or STOP-CHANGE related neuronal activity. The paradigm of the GO group was based on the STOP-CHANGE paradigm described above, but in this case, pigeons were only confronted with GO trials and did not have to perform STOP or STOP-CHANGE actions. Pecking on the green GO key was always rewarded with 2 s of food access (Fig. 1A). STOP group. The STOP group was trained in the same Skinner boxes and performed the same autoshaping procedure as the STOP-CHANGE group (see Skinner boxes and STOP-CHANGE paradigm). This group was only trained with GO and STOP trials and was important to dissociate simple STOP processes from STOP-CHANGE processes. The paradigm for the STOP group was based on the STOP-CHANGE paradigm described above. The birds in this group, however, received 70% GO and 30% STOP trials. While in GO trials pecking on the green GO stimulus was the rewarded action, in STOP trials the pigeons were rewarded for inhibiting their reaction to the GO stimulus for at least 5 seconds. In this group the STOP signal (red light) appeared always in parallel with the GO stimulus. If the birds incorrectly pecked on the GO stimulus after the STOP signal had appeared, they were punished by turning off the house lights for 5 s (Fig. 1B). Efficacy estimation of multi-component behaviour. Psychological models suggest that response selection is a capacity-limited process 4,5,31,59 . As outlined above, the STOP-CHANGE experiment used two different SCD intervals to present the CHANGE signal after the STOP signal. In the SCD 0 condition, STOP and CHANGE stimuli are presented at the same time, whereas in the SCD 300 condition STOP and CHANGE are presented with a 300 ms time gap. Thus, the SCD 0 condition leaves a choice how to process STOP-and CHANGE-associated processes. If the choice is to simultaneously process STOP-and CHANGE-associated task goals (i.e., in parallel), reaction times to the CHANGE stimulus (RT2) increase because these processes must share a limited capacity. However, in the SCD0 condition, it is also possible to choose a strategy in which STOP-and CHANGE-associated task goals are processed in a step-by-step (i.e., serial) manner. In this case, the STOP and CHANGE processes do not have to share a limited capacity when the STOP process is finished before the CHANGE process. This leads to shorter RT2s than the strategy in which STOP-and CHANGE-associated task goals are processed simultaneously. Critically, the SCD 300 condition always enforces a serial processing of the STOP-and CHANGE-related processes because the STOP process has finished when the CHANGE stimulus is presented 300 ms later. If such a serial processing strategy is used in the SCD 0 condition, the RT2s are comparable with those in the SCD 300 condition. The ratio of RT2 differences in the SCD 0 and SCD 300 conditions therefore gives an estimate of the strategy used during multi-component behaviour 31 . The value becomes steeper with increasing differences between RT2SCD0 and RT2SCD300. 
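The interpretation of steep versus flat slope values continues in the next paragraph; as a rough sketch of how such a per-pigeon slope could be computed from the CHANGE response times, consider the following Python fragment. The normalization by the 300 ms SCD difference (so that values near 1 indicate parallel and values near 0 indicate serial processing) and all column names are assumptions for illustration, since the text describes the computation only verbally.

```python
# Hedged sketch of the per-animal slope value of the SCD-RT2 function.
# Column names, the file name and the exact normalization are assumptions.
import pandas as pd

trials = pd.read_csv("stop_change_trials.csv")   # columns: pigeon, scd_ms, rt2_ms
answered = trials.dropna(subset=["rt2_ms"])      # keep trials with a CHANGE response

def scd_rt2_slope(df):
    rt2_0 = df.loc[df.scd_ms == 0, "rt2_ms"].mean()
    rt2_300 = df.loc[df.scd_ms == 300, "rt2_ms"].mean()
    return (rt2_0 - rt2_300) / 300.0             # ~1: parallel, ~0: serial processing

slopes = answered.groupby("pigeon").apply(scd_rt2_slope)
print(slopes)        # one slope per pigeon, later correlated with ZENK counts, e.g.:
# from scipy.stats import pearsonr
# r, p = pearsonr(slopes, zenk_counts_nclm)
```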
When the STOP process has not finished by the time the CHANGE process is initiated (parallel processing strategy), the slope value becomes steeper, indicating that multi-component behaviour is less efficient. If the STOP process has finished (serial processing strategy), the slope approaches zero, which indicates that multi-component behaviour www.nature.com/scientificreports www.nature.com/scientificreports/ becomes more efficient 31 . Therefore, the slope of the SCD-RT2 function is flatter in the case of more efficient processing than in the case of the less efficient processing mode. The mean slope value was individually calculated for each pigeon in the final test session and correlated with the neuronal activity as measured with IEG expression. Activity assessment/final test session. In the final test session, all groups were trained for 400 trials in their specific paradigm (i.e. GO group, STOP group and STOP-CHANGE group). The immediate early gene ZENK has been linked to long-term memory formation and synaptic plasticity 60,61 and was used to visualize the activity in different areas of the pigeon brain. IEGs are a useful tool to visualize brain activity since they have a low basal expression, but a fast induction and degeneration 62 . The ZENK protein for example can be detected in neurons as fast as 15 minutes after stimulation and reaches its expression peek between 1 and 2 hours after stimulation 62 . Therefore, the pigeons were sacrificed 60 minutes after the first trial of the final test session had started and when they engaged in a minimum of 80% of the total trials. Intravenous injections of equithesin (0.45 ml per 100 g body weight) were applied into the brachial vein to minimize the time variance in the uptake of the anesthetic which can occur with intramuscular injections. The perfusion started after the heart of the animal stopped beating and eyelid closure reflex was negative. Perfusion and tissue processing. Pigeons were perfused as previously described elsewhere 63 . The transcardial perfusion via the ventricle started with 0.9% sodium chloride (NaCl) and was followed by cold (4 °C) 4% paraformaldehyde (PFA) in 0.12 M phosphate buffer (PB; pH 7.4). After full blood exchange, the brains were removed from the skull and postfixed in 4% PFA with 30% sucrose at 4 °C for 2 hours. Hereafter, the brains were cryoprotected in 30% sucrose solution in phosphate-buffered saline (PBS; pH 7.4) for 24 hours. To simplify slicing, the brains were embedded in 15% gelatin/30% sucrose and were further fixated in 4% paraformaldehyde in PBS for 24 hours. Brains were sectioned in coronal plane in 40 µm-thick slices using a freezing microtome (Leica, Wetzlar, Germany) and stored at 4 °C in PBS with 0.1% sodium azide until further processing. Immunohistochemistry. For immunohistochemistry against ZENK every tenth slice of all pigeon brains were used. The staining was performed with free floating sections and the ZENK protein was visualized with a DAB (3,3 diaminobenzidinetetrahydrochloride) staining procedure. The DAB-reaction was carried out according to the experimental protocol of the used DAB-Kit (Vector Laboratories, DAB Substrate Kit SK-4100) 64 . After rinsing (3 × 10 min in PBS), the slices were incubated in 0.3% hydrogen peroxide (H 2 O 2 ) in distilled water for 30 min to block endogenous peroxidases. 
Following further rinsing, blocking of unspecific binding sites using 10% normal horse serum (NHS; Vector Laboratories-Vectastain Elite ABC kit) in PBS with 0.3% Triton-X-100 (PBST) was performed for 30 min. In the next step, the slices were incubated with a monoclonal mouse anti-ZENK antibody (1:5000 in PBST, 7B7-A3) at 4 °C over night. The 7B7-A3 antibody was raised in mice against the ZENK peptide from the rock pigeon (Columba livia) and its sensitivity and selectivity for its target was verified in immunblots as well as with histological stainings 65 . The next day, the slices were rinsed in PBS (3 × 10 min) and incubated with a secondary biotinylated anti-mouse antibody (1:1000 in PBST; Vector Laboratories-Vectastain Elite ABC kit) at room temperature for 1 hour. After further rinsing (3 × 10 min in PBS), the slices were transferred into an avidin-biotin complex (Vector Laboratories-Vectastain Elite ABC kit; 1:100 in PBST). After further rinsing (3 × 10 min), slices were transferred to the DAB solution that consisted of 5 ml distilled water with 2 drops (84 µl) of buffer stock solution, 4 drops (100 µl) of DAB stock solution and 2 drops (80 µl) of nickel solution. Sections were transferred to cell wells, whereby each well contained 1 ml of the working solution. The reaction was started by adding 6 µl H 2 O 2 solution to each well. After 2 min incubation time the slices were transferred into cell wells with PBS and rinsed (2 × 5 min in PBS). Finally, the slices were mounted on gelatin-coated slides, dehydrated in alcohol and coverslipped with depex (Fluka). Quantification of ZENK activity. For quantitative analysis of ZENK expression, all slices were imaged at 100× magnification using a ZEISS AXIO Imager.M1 with a camera (AxioCam MRm ZEISS 60N-C 2/3″0.63×). The whole slice was imaged bilaterally for all areas of interest. Depending on the rostro-caudal extent of the analyzed brain area, several slices could contribute to one brain region. Therefore, cells were counted in both hemispheres on four consecutive slices in NCLl, NCLm, arcopallium, MSt and LSt at different anterior (A) levels and the arithmetic mean was calculated for the further statistical analysis (planes: NCLm and NCLl: A 5.0-6.5, arcopallium: A 5.5-7.0, MSt A 9.0-10.5 and LSt A 8.0-9.5). As the hippocampus is not expected to be involved in STOP-CHANGE processes, this area was analyzed as a control area to ensure that group differences were not the result of varying staining intensities. Cells were counted in two consecutive slices in the dorsal portion of the dorsomedial hippocampus (DMd) since this subdivision displayed reliable ZENK expression in all tested animals. Cells were counted at A 5.0-6.0 and the arithmetic mean was calculated for the further statistical analysis. ZENK-positive cells were counted automatically with ImageJ (see more in the methods section ImageJ analysis) with the counter being blind to the group of the animal. The whole analysis was performed in all pigeons separately for the two hemispheres to control for hemispheric differences. A repeated measures ANOVA with the within subject factors hemisphere (left, right) and area (NCL, striatum and arcopallium) and the between subject factor group (GO, STOP, CHANGE) revealed that there was no significant difference between the hemispheres in all tested brain areas and groups (F (1,15) = 0.344, p = 0.566, η p 2 = 0.022) (see also Supplementary Fig. 1). 
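The automated count mentioned above (performed in ImageJ, with the threshold, size and roundness criteria detailed in the Image analysis subsection below) can be approximated in Python with scikit-image. The following sketch is only an analogue of that workflow; the file name, the threshold value and the ImageJ-style roundness formula are assumptions for illustration.

```python
# Rough scikit-image analogue of the automated ZENK-positive cell count
# (the original workflow used ImageJ).  Threshold, roundness formula and
# file name are illustrative assumptions.
import numpy as np
from skimage import io, measure

img = io.imread("ncl_left_slice1.tif", as_gray=True)   # grayscale copy of the imaged slice
binary = img < 0.4                                     # DAB/nickel-stained nuclei are dark

labels = measure.label(binary)
n_cells = 0
for region in measure.regionprops(labels):
    # ImageJ "roundness" is 4*area / (pi * major_axis_length^2)
    roundness = (4 * region.area / (np.pi * region.major_axis_length ** 2)
                 if region.major_axis_length > 0 else 0)
    if region.area > 20 and roundness > 0.4:           # size and roundness criteria
        n_cells += 1

roi_area_mm2 = 2.35                                    # delineated ROI area, measured separately
print("ZENK-positive cells per mm^2:", n_cells / roi_area_mm2)
```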
A Bayesian analysis to assess the evidence for the null hypothesis (a lack of hemispheric differences) 66 revealed a Bayes factor of 3.45. According to Kass and Raftery 67 , this indicates substantial evidence for the null hypothesis. Therefore, the measurements from both hemispheres were pooled together for the further statistical analyses. Furthermore, for the comparison between the GO, STOP and STOP-CHANGE groups, the data from the MSt and the LSt, and the data from the NCLl and the NCLm were pooled together. The correlation analysis between ZENK activity and the processing mode in the STOP-CHANGE group was however performed for all subdivision separately. www.nature.com/scientificreports www.nature.com/scientificreports/ Image analysis. The microscopic images were processed with ImageJ and converted into an 8-bit picture. Mean staining intensities were measured within one section of the HC of all tested pigeons. A One-Way ANOVA revealed that there was no difference in mean staining intensities between all three groups (F (2,15) = 0.022, p = 0.978), indicating that staining intensities were comparable between all animals. Intensely stained neurons within the striatum were taken for the upper threshold and weakly stained neurons within the HC were taken as the lower threshold. Once the threshold was determined it was kept consistent for the analysis of all images. Furthermore, the size (>20 pixel) and roundness (>0.4) of the stained particles was used as a further selection criterion and was adjusted once for all measurements. Since the whole slice had been imaged bilaterally, it was possible to delineate the whole region of interest with anatomically correct borders and count ZENK-positive cells within the whole area. All regions of interest have a different absolute size. Therefore, the size of the delineated area was always measured in ImageJ, so that the counted cells could be standardized to 1 mm 2 in the end. Statistical analysis. We used the Shapiro-Wilk test to test for normal distribution of the data and the Levene's test to test for the homogeneity of the variance. Both tests indicated that the requirements for parametric tests were violated. Therefore, we performed a logarithmic transformation of the data log 10 (x) that improved the normality as well as the equality of variances as follows: Levene's test before logarithmic transformation: NCL: F (2,15) Ethical statement. All experiments were performed according to the principles regarding the care and use of animals adopted by the German Animal Welfare Law for the prevention of cruelty to animals as suggested by the European Communities Council Directive of November 24, 1986 (86/609/EEC) and were approved by the animal ethics committee of the Landesamt für Natur, Umwelt und Verbraucherschutz NRW, Germany. All efforts were made to minimize the number of animals used and to minimize their suffering. Data availability The data will be made available upon request.
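Referring back to the Statistical analysis subsection above, the assumption checks and the log10(x) transformation it describes correspond to standard SciPy calls. The sketch below illustrates that pipeline; the file name, column names and grouping are assumptions, and it is not the original analysis script.

```python
# Illustrative pipeline for the assumption checks described in the
# "Statistical analysis" subsection: Shapiro-Wilk for normality, Levene's test
# for homogeneity of variance, and a log10(x) transformation when violated.
# Assumes all counts are positive; column and file names are illustrative.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("zenk_counts.csv")          # columns: pigeon, group, area, cells_per_mm2

for area, sub in df.groupby("area"):
    groups = [g["cells_per_mm2"].values for _, g in sub.groupby("group")]
    w, p_norm = stats.shapiro(sub["cells_per_mm2"])
    lev, p_lev = stats.levene(*groups)
    print(area, "Shapiro p =", round(p_norm, 3), "Levene p =", round(p_lev, 3))

# If the requirements for parametric tests are violated, transform and re-check:
df["log_cells"] = np.log10(df["cells_per_mm2"])
for area, sub in df.groupby("area"):
    groups = [g["log_cells"].values for _, g in sub.groupby("group")]
    print(area, "Levene after log10: p =", round(stats.levene(*groups)[1], 3))
```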
2020-01-15T15:19:17.340Z
2020-01-15T00:00:00.000
{ "year": 2020, "sha1": "3a54bd25bc77f1e957c8e7e4cd14d3f7e4fbf8ab", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-56998-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a54bd25bc77f1e957c8e7e4cd14d3f7e4fbf8ab", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225543210
pes2o/s2orc
v3-fos-license
The predictive value of mean platelet volume, neutrophil lymphocyte ratio and platelet lymphocyte ratio in acute middle cerebral artery infarction patients Objective: The aim of this study was to determine the predictive value of mean platelet volume (MPV), neutrophil-to-lymphocyte ratio(NLR) and platelet-to-lymphocyte ratio(PLR) in patients with acute middle cerebral artery (MCA)branch infarction. Method: This examination was performed on the files of 50 patients followed up at the Anesthesia and Reanimation ICU of Onsekiz Mart University Hospital between April 2017 and September 2019 for acute occlusion in the MCA branch, with no previous history of an ischemic stroke. These patients were assessed with the National Institutes of Health Stroke Scale (NIHSS) and the Modified Rankin Score (mRS). The ratios of the neutrophil and platelet countsto the lymphocyte count (NLR and PLR ratios) within the first hour of the stroke and MPV were compared to a control group. Results: The study included 50 patients with the acute ischemia of the MCA branch, of which 28 (56%) were female. The control group consisted of 50 healthy people, and 24 (48%) of these were female. The mean NIHSS score of the patients was 11.4 ± 2.6, and their mean mRS was 2.5 ± 1.5. Conclusions: In our study with patients who had acute middle cerebral artery infarction, we found that the NLR, PLR, and MPV levels within the first hour of stroke were higher in the patient group in comparison to the control group ona statistically significant level. However, multicenter studies with larger subject groups are needed for the use of hematological parameters as biomarkers in an ischemic stroke. INTRODUCTION Stroke is a disease whose socioeconomic burden is constantly increasing, and morbidity rate is high especially in developing countries. Especially its high mortality rate and loss of function in survivors, as well as requirement of long-lasting support and care, increase the importance of early diagnosis and effective treatment of acute stroke. Approximately 87% of stroke patients have ischemic stroke, while 16% have hemorrhagic stroke (intracerebral hemorrhage and subarachnoid hemorrhages) 1 . For cerebral infarctions, several risk factors as age, sex, hypertension, diabetes mellitus, smoking, genetic factors and cardiac diseases may be listed 2,3 . Our knowledge on stroke is being updated day by day with various studies from different countries assessing data in the last three decades. Risk factors for cerebral infarctions may be listed as age, sex, race, hypertension, metabolic syndrome, diabetes mellitus, smoking, poor nutrition, alcohol consumption, sedentary life, genetic factors, obesity, psychiatric diseases, cardiac diseases, lipid metabolism disorder, air pollution, diseases causing chronic inflammation, sepsis and acute infection, chronic kidney failure, substance addiction and drug abuse, carotid stenosis and sleep breathing disorders 2,3 . The middle cerebral artery (MCA) is one of the brain arteries that are frequently clogged in infarctions, and the clinical signs in the case of their clogging are very well-known. It is known that, if MCA clogging is not intervened with, intraparenchymal ischemia induces an irreversible injury in neurons. There is rapid deterioration in neurological signs in the first hours of stroke development in 25-40% of ischemic stroke patients. 
Looking at causes of neurological deterioration, extracellular glutamate increase, formation of cellular acidosis, cytokine secretion, free radical formation, nitric oxide production and intracellular calcium increase are among the possible mechanisms, whereas the area in the central nervous system innervated by the clogged artery is also highly important. Studies that have been conducted investigated the acute phase reactants, signal molecules, cytokines, interleukins, hormones and biomarkers that decrease or increase in acute ischemic stroke and their relationship to the prognosis of stroke [4][5] . Recent studies have shown that the ratios of neutrophil and platelet counts to lymphocyte count (NLR and PLR ratios) and MPV (mean platelet volume) values may be an indicator of systemic inflammation, and they are related to the prognosis in several cardiovascular diseases, malignancies and chronic inflammatory diseases [6][7] . Neutrophil activation leads to secretion of reactive oxygen species, cytokines, proteases and cationic proteins (e.g. elastase, lactoferrin). In the case of an infarction in the region of the middle cerebral artery (MCA), the effects of the intensity of these inflammatory cytokines on the patient's prognosis become highly significant. A study conducted in Turkey demonstrated that NLR and PLR increased in the acute period in patients with intracerebral hemorrhage, and these may be associated with mortality 8 . In our study, in patients that were hospitalized at the intensive care unit with a definite diagnosis of acute middle cerebral artery branch infarction who had not had an ischemic stroke before, we compared the neutrophil and platelet to lymphocyte ratios (NLR and PLR) and MPV (mean platelet volume) values to those in a control group. MATERIAL AND METHODS Our study included 50 patients who were admitted to the emergency service in the first 12 hours who were being monitored and hospitalized at our intensive care unit with the diagnosis of middle cerebral infarction between April 2017 and September 2019. The control group consisted of 50 healthy individuals who were admitted to the check-up polyclinic of our hospital. Based on the information obtained from patient relatives and hospital records, patients with usage of drugs that alter platelet, lymphocyte or neutrophil counts, those with systemic or chronic inflammatory diseases, those at the stage of active infection, those with a recent history of trauma and surgical intervention or those with an acute MI picture, total middle cerebral artery, anterior cerebral artery infarction, posterior cerebral artery infarction patients, those with combined infarction of these arteries, patients with lacunar syndrome diagnosis, patients with history of previous stroke and those with liver failure, kidney failure or advanced heart failure were excluded from the study. The study included 50 patients with acute middle cerebral artery branch infarction, 28 of whom were female. Blood samples were obtained and collected in hemogram tubes from all patients to measure their NLR, PLR and MPV levels in the first hour of stroke. All patients received neurological examination when they were admitted to the intensive care unit. The severity of stroke was determined by filling out NIHSS (National Institutes of Health Stroke Scale). At the time of admission, their cranial computerized tomography images were taken, and control CT imaging was performed on all patients at the 24th-48th hours. 
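To make the derived indices concrete, the following is a minimal Python sketch of how NLR and PLR would be computed from a single admission hemogram. The dictionary keys and the example values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch: deriving NLR and PLR from one admission hemogram.
# The dictionary keys and the example values below are hypothetical.

def derive_indices(hemogram):
    """Return NLR, PLR and MPV from absolute counts (x10^3 cells/uL) and MPV (fL)."""
    nlr = hemogram["neutrophils"] / hemogram["lymphocytes"]   # neutrophil-to-lymphocyte ratio
    plr = hemogram["platelets"] / hemogram["lymphocytes"]     # platelet-to-lymphocyte ratio
    return {"NLR": nlr, "PLR": plr, "MPV": hemogram["mpv"]}

if __name__ == "__main__":
    example = {"neutrophils": 7.2, "lymphocytes": 1.8, "platelets": 260.0, "mpv": 9.8}
    print(derive_indices(example))   # {'NLR': 4.0, 'PLR': 144.44..., 'MPV': 9.8}
```

Both ratios are simple quotients of routinely reported counts, which is part of why they are attractive as inexpensive candidate markers.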
In this process, the patients received all necessary monitoring and follow up procedures at the intensive care unit, and all routine biochemistry and hemogram tests were carried out. For our study, approval was obtained from the local ethics board (Decision no: 2019-21). Statistical Analysis In our study, the SPSS (Statistical Package for the Social Sciences) Windows 19.0 package program was used for the statistical analyses. The data were analyzed by using descriptive statistical methods (mean, standard deviation). Student's t-test was used to compare the patient and control groups. The results were interpreted in a 95% confidence interval, and p<0.05 was accepted as statistically significant. RESULTS Our study included 50 patients, 28 (56%) of whom were female. The mean age of the patients was 70±13, with a range of 57 to 83. Among the 50 healthy individuals in the control group, 24 (48%) were female. The mean age of the control group was 67±12, with a range of 55 to 79. The age and sex distributions of the control and patient groups were similar (Table 1). Among the 50 patients, 32 (64%) had hypertension, while 22 (44%) had diabetes. There was a history of cardiac disease in 24 (48%) patients. Table 1 shows the demographic characteristics of the patient and control groups. In the patient group, according to the results of hemograms taken in the first hour of stroke, the neutrophil and platelet to lymphocyte ratios (NLR and PLR) were calculated, and the MPV (mean platelet volume) values were determined. The mean value of the NLR in the first hour of stroke in the patient group was 3.8±1.9, while the mean NLR in the control group was 2.1±0.7. There was a statistically significant difference in the NLR values between the patient and control groups (p<0.001). Table 2 shows the other clinical results in the patient and control groups. The patient group had an NIHSS mean score of 11.4±2.6 (0-42), while their mean mRS was 2.576± 2.100 (0-6). DISCUSSION In our study, the NLR, PLR and MPV levels in the first hour of stroke in the patients who had acute middle cerebral artery branch infarction for the first time were significantly higher than those in the control group. It is thought that free oxygen radicals seen in ischemia, reperfusion, cellular acidosis and hypoxia processes lead to various structural changes in acute stroke patients. In recent years, it has been reported that neutrophilia and lymphopenia are independently related to increased cardiovascular risk. In particular, it was stated that the maximum NLR is one of the determining factors on mortality in myocardial infarction 9,10 . Studies have reported that high MPV, NLR and PLR values are an independent risk factor and indicator of poor prognosis in malignancies, chronic inflammatory diseases and cardiovascular diseases such as myocardial infarction. MPV, NLR and PLR are easily measurable parameters 11 . The inflammatory process develops right at the onset of ischemic stroke and as a response to ischemic brain injury; neutrophils rapidly migrate to the damaged area in the ischemic brain tissue. Studies reported that the basal neutrophil count is also important, and it may lead to poor neurological outcomes by increasing the severity of ischemic damage 12 . Moreover, the increase in lymphocytes starting after stroke and their peak on the seventh day show that especially T cell lymphocytes have a repairing effect on inflammation 13 . 
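A hedged sketch of the patient-versus-control comparison described above (two-sided Student's t-test, significance at p < 0.05) is given below. The NLR arrays are simulated stand-ins generated around the reported group means, not the study data, and the scipy library is assumed to be available.

```python
# Sketch of the patient-vs-control comparison with Student's t-test.
# The NLR values below are simulated placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nlr_patients = rng.normal(loc=3.8, scale=1.9, size=50)   # simulated around the reported means
nlr_controls = rng.normal(loc=2.1, scale=0.7, size=50)

t_stat, p_value = stats.ttest_ind(nlr_patients, nlr_controls)  # two-sided Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```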
Higher neutrophil counts considered together with lower lymphocyte counts result in high NLR values and consequently indicate a highly damaged region with weaker repair effects. Buck et al. showed a significant relationship between the size of cerebral infarction and the increase in leukocyte and neutrophil counts 14 . Another study reported an increase in leukocyte and neutrophil levels after a temporary ischemic episode 15 . Leukocytes are the main cells that form a response to inflammation, cause endothelial damage and disrupt the antithrombogenic property of the endothelium 16 . Studies have demonstrated that, in humans, radioactively marked neutrophils rapidly gather in the region of ischemia within 3-6 hours and reach maximum levels within 12-24 hours. Leukocytes that pass into the ischemic region slow down the blood flow and produce the post-ischemic no-reflow phenomenon after reperfusion 17 . Leukocyte accumulation in the intravascular region disrupts the microcirculation, causing endothelial damage, and may ultimately cause vasospasm and increase the severity of stroke. On the other hand, Beray-Berthat et al. 18 showed that the leukocyte increase may differ according to the damaged intracerebral region and the oxidative stress that develops, and that different leukocyte responses may form in experimental ischemic injuries in different intracerebral regions. Greisenegger et al. 19 revealed the relationship between MPV and stroke severity and determined that platelet reactivity increases with the severity of stroke. However, it should be kept in mind that the lifespan of platelets is short, that the relationship obtained from blood samples taken at the time of admission may reflect an acute phase reaction, and that platelet dysfunction may also have been present in the pre-stroke period. In a study carried out in Turkey by Deveci et al., MPV was higher in ischemic stroke patients in the post-stroke acute period than in the control group, suggesting that an MPV anomaly may not be an outcome of ischemic stroke but rather a significant factor in the formation of ischemic stroke 20 . In a multicenter study conducted by Bath et al. with 3134 voluntary participants with a history of cerebrovascular disease, it was revealed that high MPV levels are an independent risk factor for recurring strokes 21 . Previous studies have shown that some hemogram parameters that may be markers of inflammation and hypoxemia are associated with episodes and stroke. While the mean platelet volume is an indicator of platelet function and activation, it is also directly affected by inflammation 22 . Few studies have investigated hemogram parameters in this group of patients who experience an acute neurological deficit. Özaydın et al. proposed the mean platelet volume as a marker for differentiating simple and complex febrile episodes 23 . In a prospective study, epilepsy patients were compared to healthy controls both during episodes and in the remission period, and the mean platelet volume values in the patient group were higher than in the control group both during episodes and in the period without episode activity 24 . The low number of patients in our study, the use of different inflammatory parameters, and the inability to evaluate clinical correlation alongside repeated measures may be considered the limitations of our study.
Consequently, in this study of patients with acute middle cerebral artery branch infarction, the NLR, PLR and MPV levels measured in the first hour of stroke were significantly higher than those of the control group. We believe that further studies with larger samples are needed to establish whether these parameters can serve as biochemical markers supporting the diagnosis of patients who show clinical signs but have negative imaging results, especially in the first 24 hours.
2020-07-23T09:06:08.713Z
2020-06-28T00:00:00.000
{ "year": 2020, "sha1": "0efc5a18b5baffcbe2ed199f4e99c77550b9889b", "oa_license": null, "oa_url": "https://dergipark.org.tr/en/download/article-file/1211607", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "472593e2add20028accf7c72109f5922f77de38b", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
214727733
pes2o/s2orc
v3-fos-license
Interaction and temperature effects on the magneto-optical conductivity of Weyl liquids Negative magnetoresistance is one of the manifestations of the chiral anomaly in Weyl semimetals. The magneto-optical conductivity also shows transitions between Landau levels that are not spaced as in an ordinary electron gas. How are such topological properties modified by interactions and temperature? We answer this question by studying a lattice model of Weyl semimetals with an on-site Hubbard interaction. Such an interacting Weyl semimetal, dubbed a Weyl liquid, may be realized in Mn$_3$Sn. We solve that model with single-site dynamical mean-field theory. We find that in a Weyl liquid, quasiparticles can be characterized by a quasiparticle spectral weight $Z$, although their lifetime increases much more rapidly as frequency approaches zero than in an ordinary Fermi liquid. The negative magnetoresistance still exists, even though the slope of the linear dependence of the DC conductivity with respect to the magnetic field is decreased by the interaction. At elevated temperatures, a Weyl liquid crosses over to bad metallic behavior where the Drude peak becomes flat and featureless. I. INTRODUCTION Weyl semimetals are 3D analogs of graphene with topologically protected band crossings and interesting transport phenomena. Monopnictides TaAs, TaP, NbAs and NbP are prime examples 1-3 . In the presence of a uniform magnetic field, a Weyl node splits into degenerate Landau levels with a chiral zeroth level that crosses the Fermi level and gives a non-zero DC conductivity. The degeneracy of the zeroth Landau level depends on the amplitude of the magnetic field. Hence, the resistivity in the presence of parallel electric and magnetic fields acquires a magnetic-field-dependent contribution, the so-called chiral anomaly contribution, which is negative, in contrast with a conventional metal. 4 Negative magnetoresistance has been experimentally observed in Weyl semimetals such as TaAs 5,6 or Mn 3 Sn 7 even though it is not always clear whether its origin is the chiral anomaly. [8][9][10] The standard Drude contribution and the chiral-anomaly contribution compete with each other in general, leading to a nonmonotonic dependence of the conductivity (or resistivity) on magnetic field. Whether the chiral-anomaly-related conductivity dominates or not depends on the ratio of different scattering times, such as the inter-Weyl-point scattering time and the transport scattering time, and on the location of the chemical potential with respect to the Weyl points. These quantities are impacted by both temperature and electron-electron interaction, in particular in correlated Weyl semimetals such as Mn 3 Sn 7 , raising the question of their influence on the magnetoresistance of Weyl semimetals. The frequency dependence of the conductivity in Weyl semimetals also has interesting features. At zero temperature and in the absence of a magnetic field, the optical conductivity in the continuum limit and without interactions exhibits a linear frequency dependence with a vanishing DC limit when the chemical potential is at the nodes. 11,12 This is a consequence of the parabolic density of states. It has been observed in the low-temperature, low-frequency optical spectroscopy of the known Weyl semimetal TaAs.
13 Adding a magnetic field, not only leads to the above-mentionned chiral-anomaly in the DC conductivity, it modifies the optical conductivity that now consists of a narrow low-frequency peak, whose DC value manifests the chiral nomaly, and of a series of asymmetric peaks from interband transitions superimposed on the linear background from the no-field case 11,14 . The case of hybridized Weyl nodes has also been considered both theoretically and experimentally for NbP 15 . The effect of long-range Coulomb interactions on interband magneto-optical absorption has been studied theoretically using GRPA 16 . Here, using Dynamical Mean-Field Theory 17 , we study, based on Ref. 18, the effects of both local Hubbardlike interactions and temperature on the magneto-optical conductivity. We show how interactions and temperature broaden the low-frequency peak, renormalize and redistribute optical spectral weight between the low-frequency peak and the inter-band transitions between Landau levels and how interactions transfer optical weight to the high-frequency incoherent satellites. Theoretically, all this is related to the robustness of the quasiparticle picture, which we thoroughly analyze. We will refer to the interacting Weyl semimetal as a Weyl liquid 19 , by analogy with the Fermi liquid. We also investigate the temperature-induced transition to bad-metal behavior. This occurs as follows in normal metals. The half bandwidth of the low-frequency peak in optical spectra, proportional to the scattering rate, grows with increasing temperature because of thermally induced scattering events grow. Upon approaching the so-called Mott-Ioffe-Regel 20 , the scattering rate becomes of the order of the bandwidth, and the low-frequency peak becomes flat and essentially featureless [21][22][23][24] . Quasiparticles and Fermi liquid behavior disappear. How this physics manifests itself in a Weyl liquid is another subject of this study. After we introduce the model in Sec. II, we show the effect of interactions on single-particle properties in Sec. III and then of interactions and temperature on the conductivity in Sec. IV. Appendix A contains a perturbative estimate of the imaginary part of the self-energy, and shows that despite its ω 8 dependence, the usual quasiparticle renormalization Z follows. Appendix B explains how to extract the weight of an isolated Drude peak directly from imaginary frequency. II. MODEL AND METHODS We start from a Weyl semimetal model defined on a cubic lattice and add a particle-hole symmetric Hubbard interaction. In the zero-field case, it readŝ where U is the strength of the Hubbard interaction,n r↑ (n r↓ ) are ocupation numbers for spin up (down), and µ is the chemical potential. The non-interacting Hamiltonian H 0 in second quantized form is 25 where t and t z are independent parameters and σ x , σ y and σ z represent Pauli matrices. The Hamiltonian is written in spin-space, so the creation and annihilation operators are two-spinors,Ĉ k = (ĉ k,↑ ,ĉ k,↓ ). We also take Boltzman's constant k B equal to unity and t = t z = 1 for all calculations and use units whereh = 1, e = 1. In addition, the lattice constant a equals unity. In this regime, t = t z , there are four Weyl nodes in the first Brillouin zone of the non-interacting Hamiltonian located at (0, π, ±π/2) and (π, 0, ±π/2). Time-reversal symmetry is broken, but this model has other symmetries detailed in Ref. 26. We introduce the orbital effects of a uniform magnetic field through the Peierls substitution 27 . 
We do not take into account the Zeeman term. The magnetic Hamiltonian (in the tight-binding form) then includes a site-dependent Peierls phase φ nm (B) that changes the symmetries of the model. We pick magnetic field values commensurate with the original lattice, namely we take eBa 2 /h = p/q with rational values of p/q 28 . In the rest of this paper, and with no loss of generality, we set p = 1. The applied magnetic field is in the z-direction and we use the Landau gauge (i.e. a vector potential A = (0, Bx, 0)) to preserve translation symmetry along the y-direction. This leads to the following Harper matrix: Each of the sub-matrices in the above equation is of dimension q × q. They are defined as follows, with and with the definitions M n = −2t cos (k y + 2πnp/q) − 2t z cos k z and A n = 2it sin (k y + 2πnp/q). Eq. 3 describes a magnetic unit cell with q sites in the x direction. The full Hamiltonian includes a periodic extension of Harper matrix along the x axis, the chemical potential and the Hubbard interaction. For the values of q that we choose, the Harper matrix is sufficiently large and the corresponding reduced Brillouin zone along the k x direction sufficiently small that dependencies on k x can be neglected. These dependencies are associated with the periodicity of the magnetic unit cell and they become important in the large field limit where the magnetic unit cell has only a few sites. Consequently, the sites inside the magnetic unit cell have identical local density of states (spin up + spin down) at half-filling. Therefore, we are in the Landau regime defined in Ref. 29. In this approximation, the free Hamiltonian that takes maximum advantage of translational invariance iŝ where the creation and destruction operators are defined in the basis:Ĉ = (ĉ 1,ky,kz,↑ , . . .ĉ q,ky,kz,↓ ). We solve the interacting Hamiltonian nonperturbatively using the Dynamical Mean Field Theory (DMFT) framework 17 . Ref. 29 contains the derivation of the DMFT equations that include the orbital effects of a uniform magnetic field. In summary, the local self-energy depends on the magnetic field and the self-consistency equation itself is unaltered. The effect of the magnetic field on the self-energy comes from the noninteracting density of states of the Landau levels and the self-consistency equation. 29,30 We use an exact diagonalisation (ED) impurity solver for the impurity problem with a finite number of bath sites, n b . 31,32 Though still of considerable size, the n b = 5 orbital Hamiltonian in this scheme can be diagonalized exactly to compute the local Green's function at finite temperature. III. MAGNETIC FIELD AND ELECTRONIC INTERACTION EFFECTS ON SINGLE-PARTICLE PROPERTIES We first consider the effects of magnetic field and interactions on the density of states and then discuss the self-energy. This leads us to comment on the resilience of quasiparticles in the presence of interactions in a Weyl semimetal. A. Density of states For a non-interacting Weyl semimetal without magnetic field, described by Eq. 2, the density of states at low energy is characterized by ω 2 behavior, with a vanishing density of states at ω = 0. With a magnetic field, a finite field-dependent density of states appears at low energies as shown in Fig. 1a. The non-interacting density of states in this figure is obtained from where N is the total number of Landau levels, n is the Landau level index and n,ky,kz the corresponding dispersion energy obtained from the Harper Matrix. 
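As an illustration of the Peierls/Harper construction and of the Landau-level density of states just described, the sketch below builds the q x q Harper matrix for a generic single-band cubic tight-binding model in the Landau gauge with flux p/q per plaquette, and histograms its eigenvalues over a (k_y, k_z) grid. It is a simplified stand-in, not the paper's two-band Weyl Hamiltonian, and the parameter values are illustrative.

```python
# Sketch: Harper matrix for a single-band cubic lattice with flux p/q per plaquette
# (Landau gauge A = (0, Bx, 0)), and its density of states from the eigenvalues.
# Illustrative of the construction only; the paper's model carries an extra 2x2 spin structure.
import numpy as np

def harper_matrix(ky, kz, p=1, q=16, t=1.0, tz=1.0):
    """q x q Bloch Hamiltonian of the magnetic unit cell (kx dependence neglected)."""
    H = np.zeros((q, q), dtype=complex)
    for n in range(q):
        # Peierls phase shifts the y-dispersion on column n of the magnetic cell.
        H[n, n] = -2 * t * np.cos(ky + 2 * np.pi * n * p / q) - 2 * tz * np.cos(kz)
        H[n, (n + 1) % q] += -t          # plain hopping along x inside the magnetic cell
        H[(n + 1) % q, n] += -t
    return H

def dos_histogram(p=1, q=16, nk=40, bins=200):
    """Density of states from eigenvalues sampled on a (ky, kz) grid."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    energies = []
    for ky in ks:
        for kz in ks:
            energies.extend(np.linalg.eigvalsh(harper_matrix(ky, kz, p, q)))
    hist, edges = np.histogram(energies, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist

if __name__ == "__main__":
    omega, rho = dos_histogram()
    print("DOS near the band centre:", rho[len(rho) // 2])
```

In the paper's model each scalar entry would be replaced by the corresponding 2 x 2 spin block, which is why the traces in the conductivity formulas run over 2q x 2q matrices.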
The finite field-dependent density of states at low energy increases with increasing field. Apart from this, a magnetic field does not influence the main characteristics and the bandwidth of the density of states of a non-interacting Weyl semimetal. Fig. 1a also shows the density of states of the interacting system. Apart from the lower and upper Hubbard bands (incoherent satellites around ω = ±10 on Fig. 1a), the interaction tends to shrink the coherent quasiparticle bandwidth by the quasiparticle weight Z. However, the density of states at the Fermi level is not affected by electronic correlations. This can be explained as follows. Physically, the spectral function is proportional to the quasiparticle weight Z, but the one-dimensional density of states of the chiral level is proportional to one over the renormalized Fermi velocity Zv F so that the two factors of Z cancel each other. The density of states for chiral Landau levels in the quasipaticle approximation is then given by where v F stands for the Fermi velocity, k c is an energy cut-off and Θ is the step function. Eq. 8 shows that the density of states is insensitive to the quasiparticle spectral weight Z, but the range of frequencies where it applies is narrowed down. There is another way to explain that the density of states at the Fermi level is independent of interactions: It is a general property of singlesite DMFT at low enough temperature. Indeed, one can show, using Luttinger's theorem for a momentumindependent self-energy 33 , that the density of states at the Fermi level (ω = 0) is independent of interactions. One has to assume that this remains valid for a range of energies close to the Fermi level. B. Self-energy The scattering amplitude is related to the imaginary part of the self-energy, which depends on U . At fixed U , we investigate the effect of the magnetic field on the imaginary part of the self-energy at low temperature (corresponding to an inverse temperature β = 80) by com-paring the case without a magnetic field, i.e. B = 0, with the case where B = 2π/40, and with the case where B = 2π/16, i.e. in the quantum limit. The latter is characterized by a clear seperation between the zeroth and the first non-zero Landau levels. Fig. 1b illustrates the effect of magnetic field on the imaginary part of the self-energy for the first few Matsubara frequencies at β = 80, U = 12 and half-filling. As in the case of the Hubbard model on the square lattice 34 , the magnetic field does not have a large effect on the self-energy. For B = 2π/40, there are few differences with the self-energy without a magnetic field. However, for B = 2π/16 we can clearly see that the self-energies depart from each other at larger Matsubara frequencies. Those differences at intermediate frequencies come from the significant modification of the local density of states due to the orbital effect of the magnetic field. It is also interesting to note that despite these differences, the three self-energies have the same value at the first Matsubara frequency. This indicates that the scattering times at the Fermi level for half-filling are essentially the same with or without magnetic field. C. Resilience of quaiparticles Although it is not completely apparent from the above results, quasiparticles are remarkably resilient in a Weyl semimetal. To show this, let us momentarily remove the magnetic field. At small interaction strength, the selfenergy can be calculated with the IPT solver 35 but without the DMFT self-consistency. 
Then one can use the quadratic effective density of state of Weyl semimetals near the Fermi energy and obtain analytically the imaginary part of the self-energy at low frequency for a single Weyl node (see the derivation in appendix A): where v F is the Fermi velocity. Nevertheless, one expects that since this suggests the existence of quasiparticles, this result should be valid for larger values of U . Note that this unusual behavior is very different from Fermi liquid theory where Σ (ω) is quadratic in frequency. From the point of view of lifetime, the "Weyl liquid" seems to lead to even more stable quasiparticles than Fermi liquids. But what about the quasiparticle spectral weight Z? Appendix A shows that there is a finite value of Z and calculations show that, as expected, it decreases (roughly as −U 2 ) as U increases. The smallness of the imaginary part of the self-energy at low frequency offers a clue for the robustness of quasiparticle physics. This justifies a quasiparticle approach where the main effect of the interactions is encoded in the quasiparticle weight with an (extermely) small lifetime for the quasiparticles. However, the above derivation is only valid in the absence of magnetic field since it relies on a vanishing density of states at the Fermi level. When an external magnetic field is applied, a finite density of states appears and that could change the physics. IV. CONDUCTIVITY OF INTERACTING WEYL SEMIMETALS In this section, we use our knowledge of single-particle properties to compute the optical conductivity and find out how it is affected by magnetic field, temperature and finite bandwidth. Let us first consider the current-current correlation function in Matsubara frequency. In linear response theory at zero momentum and neglecting vertex corrections, this correlation function Π zz (q → 0, iν n ) along the z direction is 36,37 where β = 1/(k B T ) is the inverse of temperature, iω n is the fermionic Matsubara frequency, Tr is the trace over 2q × 2q matrices, a zz (k) = ∂ 2 kz H H is the inverse effective mass tensor, v z (k) = ∂ kz H H the velocity matrix along the z direction, and G(k, iω m ) is the interacting Matsubara Green function. Π zz is composed of two parts: Π Dia , the diamagnetic part and Π P ara , the paramagnetic part (respectively, first and second term in Eq. 10). Gauge invariance imposes that these two parts cancel each other perfectly at ν n = 0. After analytic continuation, the real part of the retarded conductivity σ zz is where Π zz (ω) is the imaginary part of Π zz (ω). One can also compute the real part of conductivity directly in real frequency from 38 where A is the spectral function matrix and f is the Fermi-Dirac distribution function. In the DC limit, the difference of the Fermi-Dirac distribution functions in Eq. 12 can be replaced by the derivative with respect to frequency at ω = 0. Eq. 12 is valid only when the trace is real, otherwise, additionnal terms coming from the paramagnetic term of the current-current correlation function have to be taken into account to obtain the real part of the conductivity. Moreover, since the system breaks time-reversal symmetry, the spectral weight must be computed from the anti-hermitian part of the Green's function matrix where the spectral weight is normalized as follows, dωA(k, ω) = 1. The real-frequency dependent Green's functions are obtained by analytic continuation using the Padé approximant method. 39 A. 
Interaction effects in the low-temperature limit Figure 2 shows the magneto-optical conductivity of the interacting Weyl semimetal for several interaction strengths and B = 2π/16. It is an even function of frequency. In the presence of Hubbard interactions, the magneto-optical conductivity has three well defined features: The Drude Peak near zero frequency, the interband transitions between Landau levels 11 and incoherents peaks that are a consequence of Hubbard bands in the density of states. The interband transitions at lower frequency are quite similar to the results obtained from a non-interacting continuum model of Weyl nodes 11 , i.e. a series of asymmetric peaks superimposed on the linear background from the no-field case. In our case, the lattice introduces a natural cutoff and differences with the continuum model at higher energy. The three parts of the optical conductivity are affected differently by the Hubbard interaction. It is the chiral zero'th Landau level that leads to a finite density of states at the Fermi level and to a Drude peak in the optical conductivity. 4,40 Upon increasing U , the Drude peak decreases in intensity and in weight because of the quasiparticle weight Z, even if the density of state at ω = 0 is not affected by electron-electron interactions. The optical spectral weight of the interband contribution is also reduced by interaction and the optical weight is transferred to the incoherent satellite at high energy. Furthermore, the optical gap between low-frequency peak and the interband contribution is decreased by the interaction, as can be seen from Fig. 2a. A quantitative study of the Drude peak in real frequency is cumbersome since it requires analytic continuation of numerical data. This could introduce artifacts, especially in presence of interactions. Fortunately, at low temperatures the weight of the Drude peak can be extracted from the Matsubara-frequency current-current correlation function Π P ara zz (iν n ). As shown in Fig. 2b, Π P ara zz (iν n ) is a smooth function of the Matsubara frequencies except at the lowest frequency, i.e., ν 0 = 0. The sudden increase at this frequency indicates a non-zero DC conductivity. Indeed, in an insulator, Π P ara zz (iν n ) smoothly reaches its zero frequency value as one can see from B = 0 result. Furthermore, in the interacting Weyl semimetals at low temperatures, the Drude peak is separated from the rest of the optical spectrum by a clear gap (see Fig. 2a). This allows us to show that the jump in With the help of Eq. B5, we are able to follow the fate of the Drude weight, W D = dωσ D (ω), with and without interaction. Here, we define σ zz (ω) = σ D zz (ω) + σ res (ω) because the Drude peak is separated with an energy gap from the rest of the optical spectrum. As one can see from Fig. 3a, at low temperatures the interaction dependence of the low-frequency peak weight normalized by the non-interacting one follows exactly the quasiparticle weight for all values of U tested. The normalized Drude weight scales like the quasiparticle weight Z, even near the Mott transition (around U = 20), a clear sign of the robustness of quasiparticle physics. The shrinking of interband contributions with increasing U is also a sign of quasiparticle physics. Indeed, the quasiparticle weight Z renormalizes the whole band, so the Landau levels become closer to each other. This leads to excitations at lower frequency than in the absence of interactions. 
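The extraction of the isolated Drude weight from the jump of the paramagnetic correlation function at the lowest bosonic Matsubara frequency can be sketched as follows: fit a low-order polynomial to the n >= 1 points, extrapolate the regular part back to nu = 0, and take the difference with the actual nu_0 = 0 value. The Matsubara data below are synthetic placeholders rather than DMFT output, and the constant prefactors relating the jump to W_D (the paper's Eq. B5) are omitted.

```python
# Sketch: isolating the Drude contribution from the jump of Pi_para(i nu_n) at nu_0 = 0.
# The Matsubara data below are synthetic placeholders; in practice they come from DMFT.
import numpy as np

beta = 80.0
n = np.arange(0, 12)
nu = 2 * np.pi * n / beta                          # bosonic Matsubara frequencies

pi_smooth = -0.50 - 0.30 * nu**2 + 0.02 * nu**4    # made-up regular (non-Drude) part
pi_para = pi_smooth.copy()
pi_para[0] += 0.35                                 # jump at nu_0 encoding a finite Drude weight

# Fit the n >= 1 points and extrapolate the regular part back to nu = 0.
coeffs = np.polyfit(nu[1:], pi_para[1:], deg=4)
pi_regular_at_zero = np.polyval(coeffs, 0.0)

jump = pi_para[0] - pi_regular_at_zero             # quantifies the isolated Drude peak
print(f"extracted jump = {jump:.3f} (input was 0.35)")
```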
Contrary to interband transitions, transitions between incoherent Hubbard bands and quasiparticle bands and between the lower and the upper Hubbard bands is due to the frequency dependence of the self-energy and cannot be explained by the quasiparticle spectral weight alone. However, Z still governs a large part of the optical conductivity. In Fig. 3b, we present W res = dωσ res (ω) normalized by its value at U = 0. This quantity is easily computed with the help of equation B5 and of whereΠ P ara zz denotes the paramagnetic part of the current-current correlation function without the jump at the lowest frequency. It can be obtained from an extrapolation to ν n = 0 of a polynomial fit of Π P ara zz (iν n = 0). In the weak to intermediate range of interaction, W res depends only weakly on the interaction strength. For interaction strengths larger than the bare band-width, W res decreases at a faster rate and eventually saturates to a finite value when the system undergoes a phase transition to a Mott insulator. Hence the variation of W D or of the effective mass with interaction is more pronounced than the variation of W res . Figure 3c shows the normalized W res as a function of square root of the quasiparticle weight √ Z. The point at Z = 0 is in the insulator. The linearity of the curve and the value of the slope, equal to 1/2 over the whole range, except for the transition from metal to insulator, indicates that the ratio is directly proportional to the square root of Z. We have not found a simple argument for this result. Finally, consider the magnetoresistance of a Weyl liquid. For the values of q tested in this paper (q ∈ [16, 100]), the Drude weight increases linearly with the magnetic field (see figure 4). This is the famous negative magnetoresistance phenomenon, which is a consequence of the quantum limit where only the chiral Landau levels contribute to the Drude peak 4 . As one can see from Fig. 4, the linear dependence of the conductivity is not impacted by the interaction but the slope decreases upon increasing U . At higher U , the system undergoes a phase transition to a Mott phase with zero DC conductivity. Electron-electron interactions do not destroy the negative magnetoresistance, they only renormalize it, as expected from the quasiparticle picture. This statement finds its experimental proof since Weyl physics has been observed in correlated materials. 7 B. Temperature effects Bad metal behavior is a consequence of interactions. 21,22,41 In a Fermi liquid (FL) the crossover between a coherent metal and a bad metal can be identified from the frequency dependence of the optical conductivity at low frequencies 22 . At low-T , the Drude peak of a FL decays as 1/ω 2 . The crossover out of the FL regime leads to a broader low-frequency peak whose frequency dependence is no longer 1/ω 2 . At the transition to the bad (color online) a) Interacting Drude-peak weight normalized by the non-interacting one (left vertical axis) and quasi-particle weight for B = 2π/16 (right vertical axis) both as a function of interaction strength. Both quantities show identical interaction dependence. b) Spectral weight of the Drude peak normalized by the interband spectral weight as a function of U , calculated using Eq. 14. c) Normalized Wres as a function of square root of the quasiparticle weight √ Z that suggests Z 1/2 dependence. For all panels, β = 80. metal regime the Drude and interband features merge. 22 Similar behavior can be seen in the optical conductivity of a Weyl semimetal. 
At low-T , the Drude peak is separated from the interband contribution and it decays very quickly with frequency. At higher temperatures though, both Drude peak and interband contributions broaden and merge together as one can see from The inset shows the magneto-optical spectral weight W (ω) as a function of cutoff frequency (cf. Eq. 15). Three distinct plateaux corresponding to the three features discussed in IV A can be seen in the inset crossover affects the optical conductivity on all frequency scales. To illustrate this, we calculated the optical spectral weight integrated up to a cutoff ω and we plotted it as an inset in Fig. 5. At low temperatures, three frequency ranges can be identified: the Drude weight for ω ∈ [0, 1] followed by the intraband contributions and finally the Hubbard band contributions at higher energies. This three-part structure is unchanged until the crossover between metal and bad metal occurs at T * . Above T * , the Drude and finite frequency features merge. That temperature affects dynamical properties over scales much larger than thermal energies is characteristic of strongly correlated systems. (color online) a) Imaginary part of the self-energy as a function of Matsubara frequencies for different temperatures. U is fixed to 12 and the system is at half-filling. The continuous lines are computed using DMFT. The dashed lines are a zero frequency extrapolation of the self-energies based on a fourth-order polynomial fit of the first five values of the self-energy. At low temperature, the self-energy extrapolates to zero like in ordinary metals. However, at higher temperature, it extrapolates to a value far from being zero at zero frequency. b) Double occupancy as a function of temperature for U = 12 and for several magnetic fields. T * can be defined by the inflection point around 0.45, marked by a vertical dashed line. c) T * coincides with the minimum in the approximation (1 − ImΣ(ω0)/ω0) −1 for the single-particle spectral weight Z. Values of (1 − ImΣ(ω0)/ω0) −1 larger than unity are not physical and in fact quasiparticles disappear in the regime where there is an increase after reaching a minimum. In bad metals, the quasi-particles become ill-defined. This can be seen from the quasi-particles scattering rate, given by the imaginary part of the self-energy on the real axis. At low enough temperature, it can be approximated by the imaginary part of the Matsubara self-energy using In Fig. 6a, the continuous lines represent the value of the self-energy obtained from DMFT for U = 12 and at different temperatures. The dashed lines represent a zero frequency extrapolation based on a fourth order polynomial fit of the first five values of the self-energy. It extrapolates to zero frequency at low temperature, consistent with existence of the low-energy well-defined excitations, but at higher temperature this behavior is no longer observed. The crossover temperature can be identified clearly from static quantities as well. Consider double occupancy, shown in Fig. 6b. The crossover temperature is around the inflection point of these curves at T * ≈ 0.45, consistent with the optical conductivity. Moreover, we also find that the crossover temperature in the presence of a magnetic field is very close to the crossover temperature obtained without magnetic field. This is because, as shown for example in Ref. 22, the bandwidth and U are clearly the two energies that control T * . 
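The low-temperature estimate used here for the quasiparticle weight, Z approximately equal to [1 - Im Sigma(i omega_0)/omega_0]^(-1) with omega_0 = pi*T the first fermionic Matsubara frequency, can be written as a small helper. The self-energy values fed in below are invented for illustration only.

```python
# Sketch: low-temperature estimate of the quasiparticle weight from the first
# Matsubara frequency, Z ~ 1 / (1 - Im Sigma(i w0) / w0), with w0 = pi * T.
import numpy as np

def z_estimate(im_sigma_w0, beta):
    w0 = np.pi / beta                 # first fermionic Matsubara frequency (k_B = 1)
    return 1.0 / (1.0 - im_sigma_w0 / w0)

if __name__ == "__main__":
    beta = 80.0                       # inverse temperature used for the low-T data
    # Illustrative values only: Im Sigma(i w0) is negative in a metallic solution.
    for im_s in (-0.05, -0.15, -0.40):
        print(f"Im Sigma(i w0) = {im_s:+.2f}  ->  Z ~ {z_estimate(im_s, beta):.3f}")
```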
The energies associated with the magnetic fields that we consider are small compared with the bandwidth and with U , and are thus not very relevant. The crossover can also be seen from the temperature dependance of (1 − ImΣ(ω 0 )/ω 0 ) −1 , plotted at the lower panel of the Fig. 6c. At low-T , this quantity gives the quasi-particle Z. As one can see, it reaches a minimum around T * ≈ 0.45 and becomes even larger than unity at high enough temperature. This behavior is not physical and coincides with the disappearance of quasi-particles. V. CONCLUSION In summary, our study reveals that an interacting Weyl semimetal is extremely robust to short-range interactions. The density of states at the Fermi level coming from the magnetic-field-induced chiral level is not modified by interactions. Interactions are manifest mostly through a quasiparticle renormalization Z, as in a Fermi liquid, but with a frequency-dependent self-energy that vanishes much faster with frequency as one approaches the Fermi level. The slope of the negative magnetoresistance dependence on the field is reduced by Z. At elevated temperatures, Weyl liquids exhibit a crossover to a bad metal phase at a crossover temperature that is essentially magnetic-field independent. In order to derive Eq. 9, we use second order perturbation theory for the self-energy. The latter is given by the formula: where ρ is the non-interacting local density of state of the impurity. This is justified in the limit U → 0. Using the density of states of the low-energy Hamiltonian of a Weyl semimetal leads to formula 9, which shows that the imaginary part of the self-energy scales like ω 8 . It seems that the zero-temperature Weyl semi-metal is a sort of "super Fermi liquid" with an imaginary part that is even smaller than the ω 2 of a Fermi liquid. Nevertheless, there is a quasiparticle spectral weight that is smaller than unity, as in Fermi-liquid theory. To show this, start from the Kramers-Kronig relations imaginary parts of self-energy Σ (ω) = P dω π where P stands for Cauchy's principal value. We can solve this equation A3 by using the identity (A4) Noting that ω 8 − ω 8 = (ω − ω)(ω + ω)(ω 2 + ω 2 )(ω 4 + ω 4 ) (A5) and using a particle-hole symmetric model, we find Σ (ω) = − U 2 40320π 6 v 9 F 2D 7 ω 7 + 2D 5 ω 3 5 with D the bandwidth of our toy model. Equation A6 has a term linear in frequency that dominates at low frequency, other terms being of higher order in (ω/D) 2 . This behavior affects the quasiparticle weight the same way as in Fermi liquid theory since This proof can clearly be generalized to models without particle-hole symmetry and for Σ (ω) ∼ −ω n with n an arbitrary integer because (ω − ω) is always a factor of (ω n − ω n ).
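The omega^8 law derived in this appendix can be checked numerically. The sketch below assumes the standard zero-temperature second-order form of the imaginary self-energy, in which Im Sigma(omega > 0) is proportional to U^2 times the integral of rho(eps1) rho(eps2) rho(omega - eps1 - eps2) over the triangle eps1, eps2 > 0, eps1 + eps2 < omega; with rho(eps) proportional to eps^2 the fitted log-log slope should come out close to 8. All prefactors are dropped, so only the power law is tested.

```python
# Numerical check of the low-frequency scaling Im Sigma ~ omega^8 for rho(eps) ~ eps^2.
# Assumes the standard T=0 second-order form; prefactors (U^2, pi, v_F powers) are dropped.
import numpy as np

def rho(eps):
    return eps ** 2          # low-energy Weyl density of states, up to constants

def im_sigma(omega, n=400):
    """T=0 second-order integral over the triangle eps1, eps2 > 0, eps1 + eps2 < omega."""
    e1 = np.linspace(0.0, omega, n)
    e2 = np.linspace(0.0, omega, n)
    E1, E2 = np.meshgrid(e1, e2)
    inside = (E1 + E2) < omega
    integrand = rho(E1) * rho(E2) * rho(omega - E1 - E2) * inside
    d = (omega / (n - 1)) ** 2
    return integrand.sum() * d

if __name__ == "__main__":
    omegas = np.array([0.02, 0.04, 0.08, 0.16])
    sigmas = np.array([im_sigma(w) for w in omegas])
    # Fitted slope on a log-log plot should be close to 8.
    slope = np.polyfit(np.log(omegas), np.log(sigmas), 1)[0]
    print(f"fitted power-law exponent ~ {slope:.2f}")
```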
2020-04-01T01:01:24.468Z
2020-03-31T00:00:00.000
{ "year": 2020, "sha1": "5b9b6880111739ffb1cf7eed534f4ae1a1d3eb84", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2003.14246", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c2f6b87f6bb91b40a748f7a579d5da3598a32e28", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259196316
pes2o/s2orc
v3-fos-license
Towards a global partnership model in interprofessional education for cross-sector problem-solving Objectives A partnership model in interprofessional education (IPE) is important in promoting a sense of global citizenship while preparing students for cross-sector problem-solving. However, the literature remains scant in providing useful guidance for the development of an IPE programme co-implemented by external partners. In this pioneering study, we describe the processes of forging global partnerships in co-implementing IPE and evaluate the programme in light of the preliminary data available. Methods This study is generally quantitative. We collected data from a total of 747 health and social care students from four higher education institutions. We utilized a descriptive narrative format and a quantitative design to present our experiences of running IPE with external partners and performed independent t-tests and analysis of variance to examine pretest and posttest mean differences in students’ data. Results We identified factors in establishing a cross-institutional IPE programme. These factors include complementarity of expertise, mutual benefits, internet connectivity, interactivity of design, and time difference. We found significant pretest–posttest differences in students’ readiness for interprofessional learning (teamwork and collaboration, positive professional identity, roles, and responsibilities). We also found a significant decrease in students’ social interaction anxiety after the IPE simulation. Conclusions The narrative of our experiences described in this manuscript could be considered by higher education institutions seeking to forge meaningful external partnerships in their effort to establish interprofessional global health education. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-023-04290-5. Introduction The ability to work effectively as a member of interprofessional teams has been recognized as both a practice standard for different professions and a desirable graduate attribute of most universities [1,2]. In healthcare, interprofessional collaboration is linked to optimal patient-centered care because, in contrast to the in-silo model, it leverages a team's concerted expertise in managing the growing complexity of patient needs. Fostered through interprofessional education (IPE), an important assumption is that when professionals work in alliance, new practice-transforming solutions will emerge, medical errors will decline, and patient outcomes will improve [3]. Historically, IPE has been promoted with the goal of breaking down disciplinary silos, by providing healthcare students or professionals from two or more professions the opportunity to learn about, from, and witheach other to optimize healthcare [4]. IPE is conventionally implemented as a cross-faculty programme that allows complementary disciplines within a university to work together in transforming the workplace. While within-University IPE is the default standard in many higher education institutions (HEIs; e.g., the study of El Ansari et al. [5]), the inherent limitation of this model is its inability to foster the development of students' global and intercultural perspectives in health care. These desirable perspectives cannot be achieved within the bounds of a single institution, but need to be cultivated through external partnerships between universities. 
It is necessary to recognize that the inherently different perspectives of team members reflect the curricular and cultural influences imparted by the HEIs where they were trained. For this reason, a global IPE model which is co-created and co-implemented through the strategic cooperation or partnership of HEIs becomes relevant. The Interprofessional Education and Collaborative Practice (IPECP) at the University of Hong Kong is one of the biggest interprofessional simulation programmes in Asia, training an annual average of 1,644 health and social care students. IPECP is an authentic experiential learning programme that aims to develop interprofessional collaboration-related competencies (e.g., values/ ethics for interprofessional practice, roles/responsibilities for collaborative practice, interprofessional communication, interprofessional teamwork, and team-based care) among health and social care students [6], in response to the call of various health organizations to promote team-based healthcare [4,7]. In the seventh year since its inception in 2016, and amidst the unprecedented changes to the landscape of education due to the COVID-19 pandemic, we reappraised and redesigned the IPECP programme to address both the students' changing needs and the needs of the evolving healthcare delivery ecosystem. This provided the impetus for internationalization [8] as a way to evolve the programme. Internationalization focusing on digital teaching channels is framed as a means to foster international cooperation, intercultural understanding, and a sense of global citizenship [9] early in students without the need to meet face-to-face. This provided an opportunity for four HEIs in Hong Kong and the United Kingdom (The University of Hong Kong, Tung Wah College, Hong Kong Metropolitan University, and University College London) to forge an IPE partnership (Table 1). In this international crossand inter-institutional collaboration project, we set out to model how to advance IPE by co-designing and co-implementing creative IPE learning experiences notwithstanding the pandemic. Since its formal launch in 2016 [10], interprofessional education team-based learning (IPTBL) has been delivered using a blended learning approach that leverages on combined strengths of online learning and classroom face-to-face learning [11]. Although synchronous, face-to-face interaction is ideal if the goal is to simulate face-to-face teamwork. Owing to the differential time schedules of the 13 programmes involved from eight faculties (Table 1), the use of blended learning affords time and place flexibility to participating students and content experts from four HEIs to come together to learn with, about, and from one another. In the course of developing the programme, we targeted an intervention that could yield desirable interprofessional collaboration competencies and facilitate smooth social interaction amongst students from different participating HEIs. We expected students' potential interaction anxiety which might affect their engagement and achievement in learning progress [12,13]. Acknowledging the importance of students' smooth social interactions in spite of their diverse academic backgrounds and culture, in the present study, we initiated a simple experiment aimed at helping students who may be showing anxiety in social interactions in a culturally diverse IPE learning environment. We conducted a sentencecompletion intervention and examined how this could reduce students' interaction problems [14]. 
In the inclusion of this experiment, we hope to demonstrate the importance of designing a learning environment where students, regardless of their level of ease of social interaction, were supported. After launching the IPE global partnership model using an online platform, the next important step for IPE was to conduct an initial clarificative evaluation [15] by revisiting the programme activities more closely to ensure their alignment with programme goals and outcomes. While the HKU, UCL, TWC, and HKMU partnership was established primarily to co-train our students with interprofessional learning competencies, we took advantage of this initiative to generate new research directions. To set the momentum for research partnership, we developed a research framework that outlined the goals and priorities and evoked the cooperation of specialists with diverse expertise from the involved HEIs. As a starting point, we aimed to understand if the global IPE model in a digital online format would yield desirable collaboration-related outcomes (e.g., teamwork and collaboration, professional identity, roles and responsibilities) similar to conventional face-to-face [16]. The IPECP three-tier model The IPE programme is a spiral model composed of three tiers (Tier 1: IPE literacy, Tier 2: IPE simulation, Tier 3: IPE collaborative practice). Tiers 1 and 2 were implemented through an online learning management system (LMS) called Open edX, which was founded by Harvard and Massachusetts Institute of Technology to cater to the needs of online collaborative learning [17]. We also used Zoom, Miro, Padlet, Metaverse, and Qualtrics embedded in Open edX. These were integrated into the "Asynchronous and Synchronous Interprofessional Education" to train health and social care students for collaborative practice. This model was based on a constructivist approach [18], in which activities were designed to provide students with experiential learning. We identified learning targets that were mapped alongside the Canadian National Interprofessional Education Competency Framework: 1. patient/client centeredness, 2. collaborative communication, 3. role understanding, 4. team functioning, 5. shared leadership and collaborative decision making, and 6. conflict resolution [19]; and Core Competencies for Interprofessional Collaborative Practice: 1. values and ethics, 2. roles and responsibilities for collaborative practice, 3. interprofessional communication, and 4. teamwork and team-based care [20]. The "IPE Simulation" (Tier 2) was designed following a PRAE sequence of asynchronous (online) and synchronous activities (face-to-face). This acronym stands for a sequence of activities: Preparation, Readiness assurance, Application exercise, and Enrichment activity (Fig. 1). Consistent with the partnership model, clinical cases on Dementia and Fracture including all other learning activities were co-developed by all the content experts from the four HEIs through various discussions. These cases underwent a number of iterations to meet the suggestions of all the content experts involved and to ensure clarity, relevance, correctness, and cognitive load or appropriateness to the levels of the students. The activities were framed within Garrison et al. 
's social constructivist framework called Community of Inquiry (CoI) [15] which highlights three essential elements of educational experience: social presence (encouraging connection with others), cognitive presence (meaning construction from learning experience), and teaching presence (activities surrounding the course design). This framework has been helpful for us in designing comprehensive IPE experiences to promote a community of inquiry in which meaningful learning experiences could be realized. The present study In establishing partnerships with international and local HEIs, we considered a number of factors before launching the idea and before signing the letters of understanding (Table 2). These factors were taken into account as we sought to develop meaningful partnerships in an effort to provide our students with a relevant interprofessional global health education programme. To fine-tune the IPE programme, increase the likelihood that its implementation leads to desired outcomes, and provide a basis for monitoring and eventual impact evaluation, it is important to examine the effectiveness of this global partnership IPE model. In this connection, the aims of this study were threefold: 1. Describe the core components of the IPECP programme model, 2. Evaluate its effectiveness using the following indicators: 3. students' behavioral change across the indices of interprofessional collaboration, including teamwork and collaboration, positive and negative professional identity, and roles and responsibilities; and 4. programme's ability to facilitate social interaction adjustments, relatedness, and engagement in the IPE context; 5. Identify general programme areas needing improvement. We aim to contribute to the growing body of knowledge of IPE by modeling the importance of the global partnership IPE model in healthcare curricula. To our knowledge, no similar attempt has been undertaken to understand this global partnership IPE model; hence, this study is essential because it addresses this significant knowledge gap. We hope to build from the conversation on various IPE topics to represent the global IPE model we developed and describe all the parts to understand the psychological and pedagogical basis for which they are included in the model. This is an important step within which best practices in managing a global IPE partnership model may be uncovered. Additionally, we aim to model the cooperation and partnership of HEIs in providing a narrative of how HEIs can come together to provide students with a richer and more authentic learning experience. Design We utilized a descriptive narrative format in describing the programme and used a quantitative design to understand students' potential gains in IPECP. To provide preliminary evidence to suggest the acceptability of the programme, the programme implementers and content experts used debriefing as a strategy for learning about and making future improvements. We conducted this investigation in pre-clinical IPE simulations: The Online IPECP Model (Fig. 1) consisted of around two hours of pre-class preparation and 3.5 h of the face-to-face session. Participants We collected data from a total of 747 health and social care students with a mean age of 22.42 in the academic years 2020-2021 (n = 285) and 2021-2022 (n = 462) ( Table 2). These students participated in any of the IPE simulations (Tier 2, explained in the results section) as part of their curricula. 
Recruitment of participants was facilitated by content experts of each of the participating HEIs. Students' participation was completely voluntary, and we explained that their participation in the study would not affect their course grades. Participants signed the consent form to indicate their participation in the investigation. The content experts attended the debriefing which led to the identification of factors we considered in forging a partnership model in IPE. Readiness of students towards IPE To estimate the readiness of students to engage in Online IPE, the Readiness for Interprofessional Learning Scale was administered before and after an IPE simulation intervention [21]. The 19 items were rated on a scale from 1 (strongly disagree) to 5 (strongly agree) under the following domains: teamwork and collaboration (9 items, "Learning with other students will help me become a more effective member of a health care team"; α = 0.94), negative professional identity (3 items, "I don't want to waste my time learning with other health-care students"; α = 0.90), positive professional identity (4 items, "Shared learning will help to clarify the nature of patient problems"; α = 0.89), and roles and responsibilities (3 items, "I'm not sure what my professional role will be"; α = 0.83). This scale has been validated in Hong Kong students [22]. We reported here the Cronbach's alpha reliability (α) based on the current data. Behavior engagement and disaffection To measure students' engagement and disaffection, we used the two subscales of Engagement Versus Disaffection with Learning: Student Report in the IPE context: behavior engagement (5 items, "In IPE, I work as hard as I can; α = 0.92"), and behavior disaffection (5 items, "When I'm in IPE, I just act like I'm working; α = 0.84") [23]. Responses are scaled from 0 (not at all true for me) to 3 (very true for me). This scale was administered after the Ten-Day Asynchronous and Synchronous Interprofessional Education. This scale was previously validated in IPE in the current setting [22]. Sense of relatedness Aiming to understand the sense of relatedness of the students [24], we used four key items from the previous study. Items were rated from 1 (strongly disagree) to 4 (strongly agree). To understand students' sense of relatedness in two learning contexts e.g., peer and IPE, we adapted the original questionnaire measuring peer interaction (e.g., "When I'm with peers in my discipline, I feel ignored"; α = 0.73) to the IPE context (e.g., "When I'm in IPE, I feel ignored"; α = 0.85). Social Interaction Anxiety Scale (SIAS) and Social Phobia Scale(SPS) [25]. The SIAS-6 measures general anxiety in terms of initiation and maintenance of social interactions. The SPS-6 intends to measure the experience of anxiety in the performance of various tasks while being examined by others. The items were rated from 0 (not at all characteristic or true of me) to 4 (extremely characteristic or true of me). These scales have been validated in HEIs [25]. We used analysis of variance (ANOVA) to understand potential differences in students' behavioral engagement, behavioral disaffection, sense of relatedness in IPE, and sense of relatedness with peers. We used paired sample t-tests to compare the pretest and posttest scores on readiness for interprofessional learning, potential social interaction anxiety, and social phobia. For all the data analysis, we used the Statistical Package for Social Sciences (SPSS) Version 23. 
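The internal-consistency coefficients (Cronbach's alpha) reported for each subscale can be computed from item-level responses in a few lines. The sketch below assumes a respondents-by-items matrix and uses randomly generated responses purely for illustration.

```python
# Sketch: Cronbach's alpha for one subscale (rows = respondents, columns = items).
# The response matrix below is randomly generated, purely for illustration.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))                         # shared trait driving all items
    responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(200, 9))), 1, 5)
    print(f"alpha = {cronbach_alpha(responses):.2f}")          # correlated items -> high alpha
```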
We examined potential differences between disciplines in behavioral engagement and disaffection and sense of relatedness on the post-test data (Table 4). We performed a similar analysis with data from the IPE Fracture module (Table 4). One-way ANOVA results showed a significant discipline effect in behavior engagement (F(4,162) = 6.125, p < 0.001), no significant effect in behavior disaffection (F(4,160) = 0.357, p = 0.839) or in the sense of relatedness in IPE (F(4,162) = 0.654, p = 0.625), and a marginal effect on the sense of relatedness with peers (F(4,162) = 2.536, p = 0.042). We conducted a paired sample t-test to compare the pre-test and post-test scores on the Social Interaction Anxiety Scale (SIAS) to understand how IPE social interaction with sentence-completion priming can help reduce students' potential interaction anxiety (Table 5). Results revealed a significant difference (t(20) = 1.724, p = 0.01) between the pre-test (M = 2.02, SD = 0.34) and the post-test (M = 1.70, SD = 0.86) in the degree of students' social interaction anxiety. A paired-sample t-test was also conducted to compare the score on the Social Phobia Scale (SPS) before and after the IPE activity. The mean of the pre-test scores was 1.98 (SD = 0.28) compared to the mean of 1.40 (SD = 0.83) in the post-test, which was a significant difference (t(18) = 2.77, p = 0.01). Only participants who scored high on the measures of social anxiety and social phobia administered at Time 1 (pre-test) were included in these comparisons.

Table 3. Mean comparison of students' readiness for interprofessional learning (n = 183, IPE Depression, 2021). Teamwork: interaction of two or more individuals who work interdependently for a common purpose; professional identity: a sense of oneself reflecting the attitudes, values, and knowledge specific to a professional group; roles and responsibilities: one's position on a team, including the tasks and duties of that particular role or job description. Items rated from 1 = strongly disagree to 5 = strongly agree; *** p < .001; ** p < .01; * p < .05; ns = not significant.

The content experts, together with the programme coordinator, attended a debriefing where they discussed their experiences that led to the creation of a partnership model in IPE (Table 6).

Discussion
The recognition that members of healthcare teams in workplace clinical settings usually obtained their prelicensure training from different institutions of higher learning suggests the need to align this workplace reality with the IPE training in medical schools. This recognition leads to the need for an inter-institutional or global IPE. To the best of our knowledge, this is the first study of an IPE global partnership model with multiple local and international partners. The strengths of our study were the representation of study participants from 13 different health and social care programmes from four HEIs, the integration of our own experiences, and the students' performance data collected using standardized and validated questionnaires. We believe that we were able to drive innovation in the way IPE is developed and implemented through modeling cross-institutional collaboration to benefit our students. Our data suggest the acceptability of the implementation outcomes [26] of an Online IPE jointly implemented by collaborating HEIs. Our experience in co-implementing the global IPE was a meaningful learning opportunity both for content experts and students.
From our prior experience, we learned that an IPE programme that was previously hybrid in format and delivered for a sole university could be successfully co-implemented and offered completely online. The integration of interprofessional care planning underpinned by constructive controversy using the online platform MIRO board was an important innovation designed to foster interprofessional teamwork and collaboration. An interesting observation noted by the facilitators and content experts relates to the demonstration of students' positive interdependence in social interactions during live synchronous activities. We observed that students showed higher motivation to do well in the context of mixed team membership from different institutions, in contrast with a team with members from a single institution. Students were very active in various team activities and were willing to turn on their cameras. This observation may be explained by the ability of social situations or associational forces to influence one's tendency to perform well in social settings, a phenomenon known as social facilitation [27]. In particular, it may be a case of co-action effects (the tendency to do well as others are doing similar tasks) and audience effects (the tendency to do well in front of an audience). Our students in IPECP were diverse and dispersed, with an average of five disciplines in a single simulation model. Given the diversity of students' backgrounds, we underscored the development of a shared or collaborative mindset, providing the team with a compelling direction to act as a collective healthcare team [28]. We provided the teams with the opportunity to identify healthcare management goals explicitly. Based on our observation, despite their differing disciplinary expertise, the shared mindset which we emphasized in whole-class briefings provided them with motivation and direction. We reiterated the core value of interdependence by demonstrating teamwork and collaboration. We emphasized to the students the need to understand their professional identity in the context of teams, and the roles, responsibilities, and partnerships among various professionals. Mutual trust, respect, communication, and accountability are crucial elements for synergistic work outcomes. We believe that our partnership with other HEIs is a strong starting point from which we can jointly promote the advancement of science and the scholarship of IPE through research. We examined whether Online IPE can yield the desirable effects that are associated with face-to-face delivery. Similar to face-to-face IPE [29], our data suggest that the Online IPE model can develop students' teamwork and collaboration, positive professional identity, and roles and responsibilities (Table 7). Additionally, our data from running the IPE Dementia and Fracture simulations suggest that students, in general, show high behavioral engagement (and low disaffection) and a sense of relatedness in IPE and with peers (Figs. 2, 3, 4, 5). We would like to emphasize that the absence of a statistically significant programme effect was in line with our expectations, suggesting that the effect of IPE held across all the programmes.
Aside from behavioral engagement, which was found to be significantly lower among medical and pharmacy students than among nursing and physiotherapy students, there were no significant post-test differences in students' (low) behavioral disaffection and sense of relatedness, suggesting equal benefits among programmes. Taken together, these pieces of evidence suggest the effectiveness of the programme. We wish to emphasize that we planned ahead to mitigate potential social interaction problems among students, given the mix of students from different disciplines, faculties, and HEIs. To do this, we built from social psychology ideas [14] and conducted a simple experiment aimed at facilitating positive social interaction of students across HEIs, which explored interaction anxiety through explicit priming (a sentence-completion test about IPE). Our data suggest that there was a significant decrease in social interaction anxiety and social phobia after the intervention by explicit priming, suggesting that the environment can be designed to help students overcome social interaction problems.

The factors we considered in forging the partnership model (Table 6) included the following:
Institutional goals and priorities: Our consultation with HEIs aimed at a mutually beneficial partnership. We discussed how our cooperation was aligned with the achievement of each institution's thrusts and priorities.
Complementarity of expertise: The appropriateness and combination of pooled expertise or disciplines in a team were of primary importance. In the initial identification of HEIs to join the programme, we considered the expertise of the target HEI to complement, and not duplicate, the existing disciplines. For example, the inclusion of Physiotherapy students from Hong Kong Metropolitan University and Tung Wah College complements the existing disciplines.
Internet connectivity and electronic platform: The Open edX learning management system adopted for IPECP has been fine-tuned over the years to meet the needs of the increasing number of students who concurrently use Open edX.

Programme challenges and opportunities for improvement
While global IPE is important for preparing health and social care students for future collaborative efforts, we experienced a number of difficulties in its implementation. We summarized these challenges and proposed actions in response to these limitations. The significant involvement of facilitators was one of the challenges we encountered. Given that IPECP is a large-scale interinstitutional collaboration, it necessitates a large number of facilitators. In line with this, we involved and trained near-peer-teachers (NPTs) as floating facilitators who rotated through teams. The time difference between the involved HEIs was an additional challenge, suggesting the need to plan ahead to identify common times when students can meet. Monitoring team interactions was also a challenge. Many of the team activities were designed to be completed asynchronously. The use of learning analytics was necessary to ease the monitoring of team progress in completing their tasks. This study is not without limitations. First, in terms of participants, they were composed of only those who volunteered to participate in this study. Furthermore, there was a great difference in the number of participants from the four HEIs who volunteered to participate in this investigation. Second, the self-report nature of the questionnaires makes them susceptible to social desirability bias, although the anonymity of participants was ensured.
Third, even though we had a large number of participants (N = 747), the sample was not representative of the four HEIs. These limitations notwithstanding, we believe that they do not undermine the strength of this paper, which is the integration of both descriptive and quantitative data collected from a large-scale global IPE model. While we know of various face-to-face IPE programmes developed and implemented for a single institution [30,31], the present study extends our understanding of the considerations in forging a global IPE co-implemented by collaborating HEIs.

Conclusion
We end by reflecting on our journey in co-developing global IPECP. With clear common goals shared by collaborating HEIs and institutional commitment, cross-institutional collaboration provides a win-win situation for all. The areas for improvement identified in our evaluation suggest that no collaboration is perfect. However, we are optimistic that no barrier is insurmountable with the synergy of our collective efforts to champion global IPE and revolutionize how care is delivered. We hope that our partnerships in developing global IPE will serve as a model for school administrators to remain committed to designing innovative programmes to equip students with skills that will enable them to thrive in the twenty-first-century workplace.
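As a methodological footnote to the reliability coefficients reported in the Methods (e.g., α = 0.94 for teamwork and collaboration), the snippet below is a generic sketch of how Cronbach's alpha can be computed from item-level responses. The response matrix is made up for illustration; this is not the study's analysis script.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of six students to a 4-item subscale (1-5 Likert).
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```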
Relationship between mucosal healing by tacrolimus and relapse of refractory ulcerative colitis: a retrospective study

Background
Tacrolimus (TAC) is a powerful remission-inducing drug for refractory ulcerative colitis (UC). However, it is unclear whether mucosal healing (MH) influences relapse after completion of TAC. We investigated whether MH is related to relapse after TAC.
Patients: Among 109 patients treated with TAC, 86 patients achieved clinical remission and 55 of them underwent colonoscopy at the end of TAC. These 55 patients were investigated.
Methods
Patients with MH at the end of TAC were classified into the MH group (n = 41), while patients without MH were classified into the non-MH group (n = 14). These groups were compared with respect to 1) clinical characteristics before treatment, 2) clinical characteristics on completion of treatment, and 3) the relapse rate and adverse event rates. This is a retrospective study conducted at a single institution.
Results
1) There was a significant difference in baseline age between the two groups before TAC therapy, but there were no significant differences in other clinical characteristics. The NMH group was younger (MH group: 48.1 (23–79) years, NMH group: 36.3 (18–58) years, P = 0.007). Endoscopic scores showed significant differences between the 2 groups at the end of TAC. There were also significant differences in the steroid-free rate after 24 weeks (MH group: 85.3%, NMH group: 50%, P = 0.012). There was no significant difference in the relapse rate between the 2 groups at 100 days after remission, but a significant difference was noted at 300 days (17% vs. 43%), 500 days (17% vs. 75%), and 1000 days (17% vs. 81%) (all P < 0.05).
Conclusions
TAC is effective for refractory ulcerative colitis. However, even if clinical remission is achieved, relapse is frequent when colonoscopy shows that MH has not been achieved. It is important to evaluate the mucosal response by colonoscopy on completion of TAC.

Background
UC is a chronic benign intestinal disease. The prevalence of UC has increased worldwide and is expected to increase further [1]. Surgery may be indicated for severe UC, especially refractory prednisolone-resistant or prednisolone-dependent disease, which is often difficult to treat medically. Nonresponse of UC to steroids leads to surgery 55-85% of the time [2]. However, surgery is associated with various complications [3,4]. TAC is used as remission induction therapy for refractory UC, and the short-term remission rate achieved with TAC is high [5]. However, the relapse rate at 1 year after induction of remission by TAC is also high, in the range of 20 to 30% [6]. Preventing relapse after remission has been achieved by TAC is therefore an important issue in patients with refractory UC. Reports have recently been published concerning the control of UC relapse by achieving MH [7], but it has not been clarified whether MH influences relapse after TAC therapy. Therefore, this study was performed to investigate the relationship between MH and relapse of UC after TAC.

Methods
At our department, TAC was administered to 109 patients from April 2016 to December 2018 (mean follow-up period: 819 ± 781 days). TAC was given at 0.025 to 0.075 mg/kg body weight twice daily before breakfast and dinner. Blood samples were collected daily for measurement of TAC levels until the target blood concentration was reached.
The TAC dose was adjusted to reach the target trough concentration of 10 to 15 ng/mL blood within two weeks of starting TAC remission induction therapy. Then, 2 to 3 weeks after the TAC concentration was within the target range, the dose was adjusted again to reach a new lower target concentration of 5 to 10 ng/mL. Clinical remission was achieved in 86 patients. Colonoscopy was performed at the end of TAC therapy in 55 patients who had agreed to undergo colonoscopy (Fig. 1). All patients received steroids before TAC. PSL was administered to all 55 patients before initiation of TAC, so all patients had prednisolone-dependent or prednisolone-resistant refractory UC. Prior to TAC, 5 patients had used biologics. Endoscopic evaluation was performed to determine the Mayo score and the UCEIS score [8,9]. Before steroid administration, 42 patients underwent endoscopy (MH group: n = 33, NMH group: n = 9). MH was defined as a Mayo score of 0 or 1 [7]. Patients who achieved MH at the end of TAC were classified into the MH group (n = 41), while patients who did not were classified into the non-MH group (n = 14). The Lichtiger score was determined as the clinical activity index (CAI) [10]. The MH and NMH groups were compared with respect to the following three factors: 1) clinical characteristics before PSL (CAI, hemoglobin, albumin, CRP (C-reactive protein), and endoscopic scores (Mayo and UCEIS)) and before TAC (sex, age, duration of UC, site of UC, PSL responsiveness (dependent/resistant), CAI, hemoglobin, albumin, CRP, endoscopic scores (Mayo and UCEIS), and time to reach the target trough level of TAC); 2) clinical characteristics at the end of TAC (CAI, hemoglobin, albumin, CRP, endoscopic scores (Mayo and UCEIS), total PSL dose during hospitalization, duration of TAC, frequency of combined azathioprine (AZA) therapy, and steroid-free rate after 24 weeks); and 3) the relapse rate at 100 (MH group: n = 39, NMH group: n = 15), 300 (MH group: n = 35, NMH group: n = 11), 500 (MH group: n = 32, NMH group: n = 3), and 1000 days (MH group: n = 16, NMH group: n = 3) after achieving remission. Remission was defined as a CAI ≤ 4 at 4 weeks or longer after initiation of remission induction therapy. The total PSL dose during hospitalization was the amount of PSL used until clinical remission and discharge. No patient required surgery as a result of induction therapy. Relapse was defined as the need for high-dose intravenous steroid therapy, switching to a biologic, re-administration of TAC, or re-administration of TAC at a higher dose (target trough level ≥ 10 ng/mL) to induce remission again. Adverse events were defined as any undesired or unintended illness or signs thereof (including abnormal laboratory values) occurring in subjects receiving TAC.

Statistical analysis
The results are expressed as the number of patients or as the mean ± standard deviation. The Wilcoxon test was used for comparisons between the 2 groups, and differences were considered to be significant at P < 0.05. JMP Pro 12 (Statistical Discovery, SAS) was used for all analyses.

Clinical characteristics
The clinical characteristics of the MH and NMH groups are summarized in Table 1.

Relapse rate
The relapse rate at 100 days after induction of remission showed no significant difference between the MH and NMH groups, being 8% in both groups.
However, there was a significant difference in the relapse rate between the 2 groups at 300 days (MH group: 17%, NMH group: 43%), 500 days (MH group: 17%, NMH group: 75%), and 1000 days (MH group: 17%, NMH group: 81%) after induction of remission (P < 0.05) (Fig. 3, Table 3). The adverse events related to TAC were tremor, renal impairment, headache, and hypomagnesemia, reported in 7, 5, 3, and 3 patients, respectively (Table 4). The prevalence of adverse events was not different between the MH and NMH groups. In no patient was TAC discontinued due to adverse events.

Clinical characteristics
UC is a chronic inflammatory disease characterized by repeated episodes of relapse and remission. Only 20-30% of patients with UC have moderate or more severe relapses, and treatment is difficult in these cases. PSL is used first in moderate or more severe cases [2]. The patients in this study also had moderate or worse UC. The clinical activity index (CAI) before PSL administration was high, and the endoscopy scores (Mayo, UCEIS) were high in the 42 patients who underwent endoscopy (Table 1). Therefore, PSL was given before TAC in all cases that were steroid-resistant or refractory. PSL use is one of the poor prognostic factors for UC. In cases where PSL must be used, repeated relapses require surgery [11]. Therefore, PSL use has been reported to be one of the markers of UC severity. For intractable cases, it is important to prevent relapse after induction and to maintain remission. Various reports have been published on the benefits of achieving MH, and this is considered a therapeutic goal for UC, as MH has reduced the rate of relapse [7,12]. TAC is used to induce remission of UC. However, the duration of treatment with TAC is not clearly defined in the ECCO Guidelines [2]. In this study, we compared patients treated with TAC who achieved mucosal healing (MH group) or did not achieve mucosal healing (NMH group). Among the clinical characteristics that we investigated, the age at initiation of treatment was significantly lower in the NMH group. In this study, TAC was started at the time of hospital admission and was completed during outpatient follow-up, so oral administration of TAC was managed by the patients themselves after discharge from the hospital. The percentage of young people in the NMH group was high. This may have contributed to reduced compliance with oral medication. It has been reported that compliance with oral medication is likely to be reduced among young people [13]. TAC blood levels are affected by diet. Therefore, the timing of oral dosing must be adjusted twice a day. Compliance with oral administration declines when dosing becomes more complex due to other drugs [14]. However, this study did not confirm compliance with oral medication after discharge and did not measure blood trough levels frequently. Therefore, reduced compliance with oral administration among young people is only speculation. Furthermore, as clinical symptoms improve, compliance with oral administration tends to be further reduced [15]. Therefore, it can be suggested that the compliance of younger patients decreases after discharge from the hospital, resulting in failure to achieve MH. Other clinical characteristics, particularly the duration of UC, CAI, and endoscopic severity scores before initiation of TAC, were not correlated with MH.

Clinical characteristics on completion of TAC
Comparison of clinical characteristics between the MH and NMH groups at the end of TAC revealed no significant differences in CAI, hemoglobin, albumin, or CRP.
At the end of treatment, detailed data on clinical characteristics and laboratory parameters were obtained from both groups. However, there were no significant differences in clinical or laboratory characteristics between the 2 groups, suggesting that it is difficult to predict the achievement of MH based on these factors. By definition, there were significant differences in endoscopic findings between the MH and NMH groups (Table 2, Fig. 2). Importantly, the duration of TAC was markedly different between the MH and NMH groups, being significantly longer in the MH group (Table 2). There are no rules regarding the duration of TAC administration. Therefore, the duration of TAC administration varied from physician to physician. This suggests the possibility that MH is more likely to be obtained by long-term treatment with TAC. Therefore, it may be necessary to continue TAC after clinical remission if MH has not been achieved. There was no significant difference in the rate of combination with AZA between the MH group and the NMH group. Therefore, whether AZA contributed to the promotion of MH was unclear in this study. The steroid-free rate at 24 weeks showed a significant difference between the 2 groups. PSL is generally the first-line treatment for moderate/severe UC [2,16]. If there is no response to PSL, switching therapy or the addition of other drugs is required. When a response is noted, the PSL dosage should be tapered or discontinued as soon as possible, because PSL has no remission-maintaining effect and prolonged use may cause steroid dependence, leading to intractability of UC [2,16-19]. In this study, TAC was not used in the NMH group for long enough. Therefore, MH could not be achieved in the NMH group, and steroid-free treatment could not be achieved. TAC must be used for a sufficient period to achieve steroid-free status. Prolonged use of steroids may also cause a wide variety of adverse events [18]. Accordingly, achievement of a steroid-free status in UC patients is important to prevent intractability of the disease and steroid-related adverse events [2,20].

Relapse rate
At 100 days after induction of remission, the relapse rate was the same in the MH and NMH groups (MH group: 8%, NMH group: 8%), but the relapse rate showed a significant difference between the two groups at 300 days (MH group: 17%, NMH group: 43%), 500 days (MH group: 17%, NMH group: 75%), and 1000 days (MH group: 17%, NMH group: 81%) (P < 0.05) (Fig. 3, Table 3). That is, the relapse rate was lower from 300 days onward in the MH group. It has been reported that MH affects recurrence and the maintenance of remission [11]. In studies of infliximab, achieving MH has been reported to contribute to the maintenance of remission, and achieving MH is considered a target for UC treatment [12]. Unlike previous reports, we focused on the achievement of MH by TAC in this study. There are few reports examining the relationship with MH in refractory cases treated with TAC. Miyoshi et al. reported that colonoscopy results 3 months after TAC were associated with later relapse [21]; that study also reported that MH was involved in maintaining remission. However, our study differs from Miyoshi's report in the duration of TAC use: in our study, TAC was continued until MH was achieved in order to prevent relapse. The importance of preventing relapse differs between patients with refractory or moderate/severe UC and patients with mild UC.
Refractory or relapsing moderate/severe UC results in frequent hospital attendance, admission, and intensified drug therapy, which have a detrimental impact on quality of life [22,23]. Furthermore, surgery is often required to manage patients with refractory or relapsing moderate/severe UC [4]. Therefore, maintenance of remission is very important in refractory or relapsing moderate/severe UC. The adverse events related to TAC therapy observed in the present study were tremor (7 patients), renal impairment (5 patients), headache (3 patients), and hypomagnesemia (3 patients) (Table 4). All of these events improved after dose reduction of TAC. Renal impairment was improved by the addition of fluids, and TAC could be continued [24,25]. A strength of this paper is that the total amount of PSL until remission (mg), the number of days until the target TAC trough was reached, and the duration of TAC administration were examined. There have been several reports on TAC and MH, but none has reported on these points. These data may therefore serve as a reference for treatment in clinical practice. Unfortunately, this study did not use calprotectin to assess MH. Calprotectin has been reported to be a simple and excellent biomarker for predicting MH and relapse, and such additional factors should be considered in the future [26]. The limitations of this study included its retrospective design and its collection of data from a single center, which could have resulted in bias. To confirm our findings, it will be necessary to perform a prospective multicenter study in a larger number of patients.
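For readers who wish to replicate the kind of two-group comparisons described in the Methods, the sketch below shows how a Wilcoxon rank-sum (Mann-Whitney) test between the MH and NMH groups and a comparison of relapse proportions at a fixed time point could be coded in Python. All numbers are illustrative placeholders derived loosely from the reported percentages, not the study data, and this is not the authors' JMP analysis.

```python
from scipy import stats

# Hypothetical CAI values at the end of TAC for the two groups.
mh_cai = [2, 1, 3, 2, 2, 1, 3, 2, 1, 2]
nmh_cai = [3, 4, 2, 3, 4, 3, 2, 4]

# Wilcoxon rank-sum (Mann-Whitney U) test for two independent groups.
u_stat, p_value = stats.mannwhitneyu(mh_cai, nmh_cai, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Comparison of relapse proportions at 300 days with Fisher's exact test.
# Rows: MH group, NMH group; columns: relapsed, not relapsed (hypothetical counts).
table = [[6, 29],   # roughly 17% of 35 patients
         [5, 6]]    # roughly 43% of 11 patients
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact p = {p_fisher:.3f}, odds ratio = {odds_ratio:.2f}")
```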
Setting of import tolerance for quizalofop-P-ethyl in genetically modified maize

Abstract
In accordance with Article 6 of Regulation (EC) No 396/2005, the applicant Dow AgroSciences submitted a request to the competent national authority in Finland, to set an import tolerance for quizalofop-P-ethyl in grain from genetically modified maize containing the aad-1 gene. The data submitted in support of the request were found to be sufficient to derive a maximum residue level (MRL) proposal for quizalofop-P-ethyl in maize grain. Adequate analytical methods for enforcement are available to control the residues of quizalofop-P-ethyl in maize grain. Based on the risk assessment results, EFSA concluded that the authorised use of quizalofop-P-ethyl on genetically modified maize containing the aad-1 gene and the subsequent import of maize grain into Europe will not result in a consumer exposure exceeding the toxicological reference value and therefore is unlikely to pose a risk to consumers' health.

The livestock dietary burden calculated in the framework of the MRL review of quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop was now updated with risk assessment values derived for maize grain. The calculated dietary burdens exceed the trigger value of 0.1 mg/kg dry matter (DM) for all livestock species and the intake is mainly driven by residues in potatoes from the existing use of propaquizafop assessed in the MRL review. Residues of quizalofop-P-ethyl in maize grain contribute insignificantly to the livestock exposure and thus would not affect the MRL proposals for animal commodities derived by the MRL review. The consumer risk assessment was performed with revision 2 of the EFSA Pesticide Residues Intake Model (PRIMo). In the framework of the MRL review, a comprehensive consumer exposure to residues arising in food from the existing European Union (EU) uses of quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop was calculated, considering the lowest acceptable daily intake (ADI) value set for quizalofop-P-ethyl (0.009 mg/kg body weight (bw) per day) and the lowest acute reference dose (ARfD) set for quizalofop-P-tefuryl (0.1 mg/kg bw), expressed as quizalofop equivalents. This exposure was now updated with the supervised trial median residue (STMR) values derived for GM maize grain assessed in this application. The estimated long-term dietary intake was in the range of 5-30% of the ADI. The contribution of quizalofop-P-ethyl residues in maize grain to the overall long-term exposure is insignificant. No short-term intake concerns were identified with regard to residues in maize grain (0.2% of the ARfD). EFSA concluded that the authorised use of quizalofop-P-ethyl on GM maize expressing the aad-1 gene and consequent residues in maize grain will not result in a consumer exposure exceeding the toxicological reference value and therefore is unlikely to pose a risk to consumers' health. EFSA proposes to amend the existing MRL as reported in the summary table below; full details of all end points and the consumer risk assessment can be found in Appendices B-D. The proposed MRL for maize grain is 0.02 mg/kg; the import tolerance application from Canada is supported by data and no consumer risk has been identified. The GM maize that expresses the aad-1 gene has been assessed by the EFSA Panel on Genetically Modified Organisms (GMO) and is authorised within the EU for the marketing of food and feed and derived products.

Assessment
The detailed description of the authorised use of quizalofop-P-ethyl in Canada on genetically modified (GM) maize, which is the basis for the current maximum residue level (MRL) application, is reported in Appendix A.
The placing on the market of products containing, consisting of, or produced from GM maize DAS-40278-9 has been authorised by the Commission Decision (EU) 2017/1212 1 . Quizalofop-P-ethyl is the ISO common name for ethyl (2R)-2-[4-(6-chloroquinoxalin-2-yloxy) phenoxy] propionate (IUPAC). It is an ester variant of quizalofop-P. Quizalofop-P is the ISO common name for (R)-2-[4-(6-chloroquinoxalin-2-yloxy)phenoxy]propionic acid (IUPAC). The unresolved isomeric mixture of this substance has the common name quizalofop. Quizalofop-P belongs to the class of aryloxyphenoxypropionic herbicides which are taken up via leaves and hinder the synthesis of fatty acids by inhibition of the enzyme Acetyl-CoA carboxylase (ACCase). The chemical structures of the active substance and its main metabolites are reported in Appendix E. Quizalofop-P (considered variants quizalofop-P-ethyl and quizalofop-P-tefuryl) was evaluated in the framework of Directive 91/414/EEC 2 with Finland, designated as rapporteur Member State (RMS) for the representative uses as herbicide on oilseed rape, sugar/fodder beet, potato, pea, beans, linseed and sunflower. The draft assessment report (DAR) prepared by the RMS has been peer reviewed by European Food Safety Authority (EFSA) (EFSA, 2009 In accordance with Article 6 of Regulation (EC) No 396/2005, Dow AgroSciences submitted an application to the competent national authority in Finland (evaluating Member State, EMS), to set an import tolerance for the active substance quizalofop-P-ethyl in GM maize grain, containing the aryloxyalkanoate dioxygenase (aad-1) gene. The EMS drafted an evaluation report in accordance with Article 8 of Regulation (EC) No 396/2005, which was submitted to the European Commission and forwarded to the EFSA on 25 January 2018. The EMS proposed to establish an MRL for maize grain imported from Canada, at the level of 0.02 mg/kg. EFSA assessed the application and the evaluation report as required by Article 10 of the MRL regulation. EFSA based its assessment on the evaluation report submitted by the EMS (Finland, 2018), the DAR (and its addenda) (Finland, 2007(Finland, , 2008 prepared under Council Directive 91/414/EEC, the conclusion on the peer review of the pesticide risk assessment of the active substance quizalofop-P (EFSA, 2009) as well as the conclusions from the MRL review on quizalofop-P-ethyl, quizalofop-Ptefuryl and propaquizafop (EFSA, 2017). For this application, the data requirements established in Regulation (EU) No 544/2011 5 and the guidance documents applicable at the date of submission of the application to the EMS are applicable (European Commission, 1997a-g, 2000, 2010a,b, 2017OECD, 2011OECD, , 2013 A selected list of end points of the studies assessed by EFSA in the framework of this MRL application, including the end points of relevant studies assessed previously, are presented in Appendix B. The evaluation report submitted by the EMS (Finland, 2018) and the exposure calculations using the EFSA Pesticide Residues Intake Model (PRIMo) are considered as supporting documents to this reasoned opinion and, thus, are made publicly available as background documents to this reasoned opinion. 1. Residues in plants 1.1. Nature of residues and methods of analysis in plants 1.1.1. 
Nature of residues in primary crops The metabolism of quizalofop-P-ethyl in primary conventional crops belonging to the group of fruit crops, root crops and pulses/oilseeds following foliar applications has been investigated in the framework of the EU peer review and the MRL review (EFSA, 2009(EFSA, , 2017. In the framework of the current application, a new metabolism study was submitted investigating the nature of quizalofop-P-ethyl in GM maize, containing the aryloxyalkanoate dioxygenase (aad-1) gene. The aad-1 gene is a herbicide tolerant gene that encodes an enzyme which detoxifies aryloxyphenoxypropionate herbicides via an aketoglutarate-dependent dioxygenase reaction. The AAD-1 protein can degrade the R-enantiomers of aryloxyphenoxypropionates (AOPPs) such as quizalofop-P to an inactive phenol. The first major product in the metabolic pathway is quizalofop-P-acid (Wright et al., 2009). Herbicide tolerant maize containing aad-1 gene was treated with 14 C-quizalofop-P-ethyl (labelled in phenyl and quinoxaline moiety) at an application rate of 98 g/ha at the growth stage of six leaves unfolded (ca. BBCH 16). The samples of mature grain, cobs, forage and fodder were taken for analysis. The total radioactive residues (TTR) in grain (0.004-0.005 mg eq/kg) and cobs (0.002 mg eq/kg) were low and therefore not further characterised. Because of very low TRR levels in maize grain, potential quizalofop conjugates, if present, will unlikely be a significant part of the residue and are therefore considered of no relevance for maize grain. The TRR in forage accounted for 0.007-0.122 mg eq/kg and in the fodder for 0.26-0.35 mg eq/kg. Quizalofop-P-ethyl was identified at low levels in fodder from quinoxaline study (0.4%; 0.001 mg/kg) and in forage from phenyl study (1.4%; 0.004 mg/kg). Quizalofop (acid) was a minor metabolite identified in all fodder and forage samples (0.9-1.4%; 0.003 mg/kg). In total 17-30% of the radioactivity was characterised as polar fractions, accounting for a maximum of 0.043-0.049 mg eq/kg (17-14% TRR) per fraction, depending on retention times. Attempts to further identify the polar unknown fractions were not undertaken, but would be desirable for full elucidation of the metabolic pattern of quizalofop-P-ethyl in maize. The bound residues accounted for up to 33% TRR in forage and for up to 29% TRR in fodder. Quizalofop-P-ethyl and quizalofop (acid) which were identified at very low levels in maize fodder and forage have been present also in the metabolism of quizalofop-Pethyl in conventional plants. EFSA concludes that the metabolism of quizalofop-P-ethyl in GM maize grain is sufficiently investigated, indicating very low residues in grain when treated with quizalofop-P-ethyl at the rate tested. Additional metabolism studies are currently not required; this conclusion is valid only for maize grain derived from GM maize expressing aad-1 gene. Nature of residues in rotational crops Investigations of residues in rotational crops are not required for imported crops. Nature of residues in processed commodities The studies investigating the effect of processing on the nature of quizalofop-P-ethyl have not been performed. However, the hydrolysis study with quizalofop (acid), which was investigated in the framework of the MRL review, is considered sufficient to address the nature of quizalofop-P-ethyl under standard processing conditions (EFSA, 2017). 
The results of the study demonstrate that quizalofop (acid) is stable under pasteurisation, sterilisation and baking/brewing/boiling. Methods of analysis in plants The availability of the analytical enforcement methods for the determination of quizalofop-P-ethyl residues in plant matrices was investigated in the framework of the MRL review (EFSA, 2017). The common moiety method using liquid chromatography with tandem mass spectrometry (LC-MS/ MS) is validated for the determination of quizalofop-P-ethyl and quizalofop (through hydrolysis) in high starch content commodities at a combined limit of quantification (LOQ) of 0.01 mg/kg. The MRL review noted that the extraction efficiency and hydrolysis of conjugates and other ester variants were not demonstrated. Since in maize grain, according to metabolism studies, conjugates of quizalofop-P-ethyl are not expected to occur in significant amounts, the lack of validation of the extraction efficiency and hydrolysis of conjugates was not considered relevant in the framework of this import tolerance application. It is concluded that a sufficiently validated analytical enforcement method is available to determine quizalofop-P-ethyl and quizalofop (acid) residues in GM maize grain containing aad-1 gene at the validated LOQ of 0.01 mg/kg. Stability of residues in plants The storage stability of quizalofop-P-ethyl and quizalofop-P according to studies reported in the MRL review has been sufficiently demonstrated in high starch content commodities (wheat grain) for 12 months when stored at À18°C (EFSA, 2017). In the framework of the current application, new studies were submitted investigating the storage stability of quizalofop-P-ethyl and quizalofop (acid) in GM maize grain, fodder, forage, starch, flour and oil when stored at À20°C for 13 months (Finland, 2018). Homogenised samples were spiked with quizalofop-P-ethyl and quizalofop (acid) at 0.1 mg/kg. The results of the study demonstrate degradation of quizalofop-P-ethyl beyond 30% in maize grain, fodder and flour as from 3 months of storage and in maize forage after 1 month of storage. Increased amounts of quizalofop (acid) in all maize fractions during storage indicate hydrolysis of quizalofop-P-ethyl to quizalofop (acid). As both compounds are included in the risk assessment and enforcement residue definition, the freezer storage stability of the sum of quizalofop (acid) and quizalofop-P-ethyl is considered addressed for 13 months in the GM maize containing aad-1 gene. Proposed residue definitions Based on the metabolic pattern identified in conventional crop metabolism studies with quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop, the results of hydrolysis studies, the toxicological significance of metabolites and the capabilities of enforcement analytical methods, the following residue definitions were proposed by the MRL review (EFSA, 2017): • residue definition for risk assessment: sum of quizalofop, its salts, its esters (including propaquizafop) and its conjugates, expressed as quizalofop (any ratio of constituent isomers). • residue definition for enforcement: sum of quizalofop, its salts, its esters (including propaquizafop) and its conjugates, expressed as quizalofop (any ratio of constituent isomers). The peer review of quizalofop-P proposed provisional enforcement and risk assessment residue definitions as 'sum of quizalofop esters, quizalofop and quizalofop conjugates expressed as quizalofop (sum of isomers)' (EFSA, 2009). 
EFSA concludes that the previously derived residue definitions are appropriate for GM maize grain containing aad-1 gene. 1.2. Magnitude of residues in plants 1. Magnitude of residues in primary crops In support of the authorised use, the applicant submitted in total 25 residue trials where field maize (event DAS-40278-p, expressing AAD-1 protein) was treated with quizalofop-P-ethyl at application rates ranging from 89-99 g/ha. Residue trials were performed in the United States (23 trials) and Canada (two trials) in 2009. Plants were treated at the growth stage of BBCH 16-42. Samples were taken at the preharvest interval (PHI) intervals of 79-144 days. Samples were analysed separately for quizalofop (acid) and quizalofop-P-ethyl, and results indicate that, in none of the samples, residues were above the individual LOQs of 0.01 mg/kg. It is noted that 18 of these trials were performed at higher application rates (differing by more than 25% from the application rate defined in the good agricultural practices (GAP)). However, as residues in all trials were below the limit of detection (LOD), residue trials were considered acceptable. Residue data on forage and fodder were not provided and are not relevant for the import tolerance application for maize grain. Prior to analysis samples were stored frozen for a maximum interval of 376 days; the storage stability of the total quizalofop (acid) and quizalofop-P-ethyl residues has been demonstrated for this storage period. The analytical method used in the residue trials did not include hydrolysis step and samples were analysed separately for quizalofop (acid) and quizalofop-P-ethyl. As quizalofop-P-ethyl conjugates were not identified in maize grain according to metabolism studies, the omission of hydrolysis step of the analytical method is not considered to affect the validity of residue data. The analytical method is thus considered sufficiently validated and fit for the purpose. It is noted that to comply with the enforcement residue definition proposed by the MRL review, residues shall be expressed as quizalofop (acid). Also, the existing enforcement residue definition in Regulation (EC) No 396/2005 refers to quizalofop (acid). As residues in all trials were below the LOD, in this case, conversions are irrelevant. Residue data are considered sufficient to derive a MRL proposal of 0.02 mg/kg for the sum of quizalofop-P-ethyl and quizalofop, expressed as quizalofop, in GM maize grain. The MRL proposal refers to the sum of LOQs of quizalofop-P-ethyl and quizalofop (acid). EFSA notes that enforcement method is a common moiety method for which an LOQ of 0.01 mg/kg (for total residues) is validated. As residues in all trials were below the LOD (0.003 mg/kg) for each compound, it would be more appropriate to propose the MRL at the enforcement LOQ of 0.01 mg/kg. However, as the MRL proposal of 0.02 mg/kg corresponds to the tolerance set for quizalofop-P-ethyl in Canada, 7 EFSA considered it acceptable. Magnitude of residues in rotational crops Investigation of residues in rotational crops is not relevant for the import tolerance application. Magnitude of residues in processed commodities In the framework of the current application, studies investigating the effect of processing on the magnitude of quizalofop-P-ethyl and quizalofop residues in processed maize commodities were submitted (Finland, 2018). In two field trials, GM maize containing aad-1 gene was treated with quizalofop-P-ethyl at an application rate of 184 g/ha. 
Grains were first cleaned by aspiration and screening and then processed by dry or wet milling into flour, meal, refined oil, starch and aspirated grain fraction. Residues of quizalofop-P-ethyl and quizalofop were below the individual LOQs of 0.01 mg/kg both in raw commodities and in all processed fractions. Processing factors were thus not derived. Proposed MRLs The available data are considered sufficient to derive MRL proposal as well as risk assessment values for quizalofop-P-ethyl GM maize grain (see Appendix B.1.2). The MRL proposal accommodates the residue definitions proposed by the MRL review and peer review. In Section B.3., EFSA assessed whether residues on maize grain resulting from the uses authorised in Canada are likely to pose a consumer health risk. Residues in livestock Maize grain and by-products can be used for feed purposes. Hence, it was necessary to update the livestock dietary burden calculated in the framework of the MRL review (EFSA, 2017) to estimate whether the import of GM maize grain would have an impact on the livestock exposure and residues in the food of animal origin. The livestock dietary burden in the MRL review was calculated according to the OECD methodology (OECD, 2013) and took into consideration the highest residue expected in livestock feed from the authorised uses of quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop. The livestock dietary burden was now updated with risk assessment values derived for maize grain and various grain by-products according to the current assessment. The data on residues in maize stover were not provided and are not considered relevant for the import tolerance request. The calculated dietary burdens exceed the trigger value of 0.1 mg/kg dry matter (DM) for all livestock species and the intake is mainly driven by residues in potatoes from the existing use of quizalofop-P-tefuryl assessed in the MRL review. Residues of quizalofop-P-ethyl in maize grain contribute insignificantly to the existing livestock exposure and thus would not affect the MRL proposals derived for commodities of animal origin in the framework of the MRL review of quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop. 3. Consumer risk assessment EFSA performed a dietary risk assessment using revision 2 of the EFSA PRIMo (EFSA, 2007). This exposure assessment model contains food consumption data for different subgroups of the EU population and allows the acute and chronic exposure assessment to be performed in accordance with the internationally agreed methodology for pesticide residues (FAO, 2016). In the framework of the MRL review, a comprehensive consumer exposure to residues arising in food from the existing EU uses of quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop was calculated, considering the lowest acceptable daily intake (ADI) value set for quizalofop-P-ethyl (0.009 mg/kg body weight (bw) per day) and the lowest acute reference dose (ARfD) set for quizalofop-P-tefuryl (0.1 mg/kg bw), expressed as quizalofop equivalents (EFSA, 2017). This exposure was now updated with the supervised trial median residue (STMR) values derived for quizalofop-Pethyl GM maize grain assessed in this application. The estimated long-term dietary intake was in the range of 5-30% of the ADI. The contribution of residues expected in maize grain to the overall long-term exposure is insignificant and is presented in more detail in Appendix B.3. 
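As a rough illustration of how a chronic exposure figure of this kind is obtained, the sketch below computes a long-term intake as a percentage of the ADI from per-commodity consumption and median residue (STMR) values. The consumption and residue numbers are invented placeholders, and the real assessment uses the full PRIMo diet tables for each EU consumer group rather than this simplification.

```python
# Simplified chronic dietary exposure: sum(consumption_i * residue_i) / body weight,
# expressed as a percentage of the ADI (0.009 mg/kg bw per day for quizalofop-P-ethyl).
ADI = 0.009          # mg/kg bw per day
BODY_WEIGHT = 60.0   # kg, illustrative adult body weight

# commodity: (daily consumption in kg, STMR residue in mg/kg) -- placeholder values
diet = {
    "maize grain": (0.05, 0.01),
    "potatoes":    (0.20, 0.02),
    "oranges":     (0.10, 0.01),
}

intake_mg_per_kg_bw = sum(cons * res for cons, res in diet.values()) / BODY_WEIGHT
percent_of_adi = 100 * intake_mg_per_kg_bw / ADI
print(f"chronic intake = {intake_mg_per_kg_bw:.5f} mg/kg bw per day "
      f"({percent_of_adi:.1f}% of the ADI)")
```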
No short-term intake concerns were identified with regard to residues in maize grain (0.2% of the ARfD). EFSA concluded that the long-term and short-term intake of residues occurring in food from the existing uses of quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop and from the authorised use of quizalofop-P-ethyl on GM maize in Canada, is unlikely to present a risk to consumer health. Conclusion and Recommendations The data submitted in support of this MRL application were found to be sufficient to derive an MRL proposal in maize grain accommodating the authorised use of quizalofop-P-ethyl in Canada on GM maize. EFSA concluded that residues of quizalofop-P-ethyl in maize grain will not result in a consumer exposure exceeding the toxicological reference values and therefore is unlikely to pose a risk to consumers' health. Not available for quizalofop-P-ethyl but not required since study performed with quizalofop in the framework of the MRL review for quizalofop-P-tefuryl is expected to cover all three ester variants. Existing enforcement residue definition (Regulation ( Residue trials on genetically modified maize expressing AAD-1 protein (event DAS-40278-9). 18 residue trials overdosed in terms of an application rate (> 25% deviation), but as residues in all grain samples were below the limit of detection, trials were accepted. The residue data do not cover possible conjugates but due to low residues in grain, this deviation is accepted and the residue data are considered valid also for the residue definitions proposed by the MRL review. .2. Residues in rotational crops Not relevant for the import tolerance application. B.1.2.3. Processing factors New processing studies were submitted in the framework of the current application. Residues of quizalofop-P-ethyl and quizalofop (acid) were below the LOQ in the raw agricultural commodity (GM maize aad-1) and in all processed commodities derived from maize grain: flour, meal, refined oil, starch and aspirated grain fraction (Finland, 2018) Assumptions made for the calculations The STMR value for maize grain derived from the residue trials submitted in the framework of this application was used as input value. For the remaining commodities, the input values were as referred to in the MRL review: For each commodity, the median residue levels obtained for quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop were compared and the most critical values were selected for the exposure calculation. For certain commodities, the available residue trials were not sufficient to derive risk assessment values for the use of all the variants and could not be excluded that those uses not supported by data will result in higher residue levels, in particular when the existing EU MRL is higher than the MRL proposal derived. In these cases, EFSA decided, as a conservative approach, to use the existing EU MRL for an indicative exposure calculation Also for those commodities where data were insufficient to derive an MRL for any of the variants, EFSA considered the existing EU MRL for an indicative calculation. The contributions of other commodities, for which no GAP was reported in the framework of this review, were not included in the calculation. All input values refer to the residues in the raw agricultural commodities. Assumptions made for the calculations The STMR value for maize grain derived from the residue trials submitted in the framework of this application was used as input value. 
For the remaining commodities, the input values were as referred to in the MRL review: For each commodity, the median residue levels obtained for quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop were compared and the most critical values were selected for the exposure calculation. For certain commodities, the available residue trials were not sufficient to derive risk assessment values for the use of all the variants, and it could not be excluded that those uses not supported by data will result in higher residue levels, in particular when the existing EU MRL is higher than the MRL proposal derived. In these cases, EFSA decided, as a conservative approach, to use the existing EU MRL for an indicative exposure calculation. Also for those commodities where data were insufficient to derive an MRL for any of the variants, EFSA considered the existing EU MRL for an indicative calculation. The contributions of other commodities, for which no GAP was reported in the framework of this review, were not included in the calculation. All input values refer to the residues in the raw agricultural commodities.

Conclusion: The estimated Theoretical Maximum Daily Intakes (TMDI), based on pTMRLs, were below the ADI. A long-term intake of residues of quizalofop-P is unlikely to present a public health concern.

The acute risk assessment is based on the ARfD. The results of the IESTI calculations are reported for at least 5 commodities; if the ARfD is exceeded for more than 5 commodities, all IESTI values > 90% of the ARfD are reported. pTMRL: provisional temporary MRL (for the unprocessed commodity where indicated). No exceedance of the ARfD/ADI was identified for any unprocessed commodity.

Conclusion: For quizalofop-P, IESTI 1 and IESTI 2 were calculated for food commodities for which pTMRLs were submitted and for which consumption data are available. In the IESTI 1 calculation, the variability factors were 10, 7 or 5 (according to the JMPR manual 2002); for lettuce, a variability factor of 5 was used. In the IESTI 2 calculations, the variability factors of 10 and 7 were replaced by 5; for lettuce, the calculation was performed with a variability factor of 3. For each commodity, the calculation is based on the highest reported Member State (MS) consumption per kg bw and the corresponding unit weight from the MS with the critical consumption; if no data on the unit weight were available from that MS, an average European unit weight was used for the IESTI calculation. The threshold MRL is the calculated residue level which would lead to an exposure equivalent to 100% of the ARfD.

Other feed commodities on which uses were reported in the MRL review: STMR/HR values as reported in the EFSA reasoned opinion on the review of the existing MRLs for quizalofop-P-ethyl, quizalofop-P-tefuryl and propaquizafop (EFSA et al., 2017). STMR: supervised trials median residue; HR: highest residue; PF: processing factor. (a): As residues in the raw commodity (maize grain) were below the LOQ, no concentration of residues is expected in processed commodities and a processing factor was therefore not applied.
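For orientation, the sketch below codes the general form of the IESTI case 2a equation referred to above, in which the first unit of the commodity eaten carries the highest residue multiplied by a variability factor and the remainder of the large portion carries the highest residue, with the result compared against the ARfD. The formula is the standard FAO/WHO expression as we understand it, and every numeric input is an invented placeholder rather than a PRIMo value.

```python
def iesti_case_2a(lp_kg, unit_kg, hr_mg_kg, nu, bw_kg):
    """IESTI case 2a: first unit carries HR * variability factor, the rest carries HR."""
    return (unit_kg * hr_mg_kg * nu + (lp_kg - unit_kg) * hr_mg_kg) / bw_kg

ARFD = 0.1            # mg/kg bw, the quizalofop-P-tefuryl ARfD used in the assessment
lp = 0.35             # kg, large-portion consumption (placeholder)
unit = 0.15           # kg, unit weight of the commodity (placeholder)
nu = 5                # variability factor (placeholder)
hr = 0.02             # mg/kg, highest residue (placeholder)
bw = 60.0             # kg, body weight (placeholder)

iesti = iesti_case_2a(lp, unit, hr, nu, bw)
print(f"IESTI = {iesti:.5f} mg/kg bw ({100 * iesti / ARFD:.1f}% of the ARfD)")
```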
Larval Mortality and Ovipositional Preference in Aedes albopictus (Diptera: Culicidae) Induced by the Entomopathogenic Fungus Beauveria bassiana (Hypocreales: Cordycipitaceae) Abstract Entomopathogenic fungi allow chemical-free and environmentally safe vector management. Beauveria bassiana (Balsamo-Crivelli) Vuillemin is a promising biological control agent and an important component of integrated vector management. We investigated the mortality of Aedes albopictus (Skuse) larvae exposed to five concentrations of B. bassiana using Mycotrol ESO and adult oviposition behavior to analyze the egg-laying preferences of wild Ae. albopictus in response to different fungal concentrations. We examined the mortality of mid-instars exposed to B. bassiana concentrations of 1 × 104, 1 × 105, 1 × 106, 1 × 107, and 1 × 108 conidia/ml every 24 h for 12 d. In the oviposition behavior study, the fungus was applied to wooden paddles at 1 × 105, 1 × 107, and 1 × 109 conidia/ml, and the paddles were individually placed into quad-ovitraps. Both experiments contained control groups without B. bassiana. Kaplan–Meier survival analysis revealed that larval mortality was concentration dependent. The median lethal concentration was 2.43 × 105 conidia/ml on d 12. The median lethal time was 3.68 d at 1 × 106 conidia/ml. Oviposition monitoring revealed no significant difference in egg count between the control and treatment paddles. We observed an inverse relationship between the concentration of B. bassiana and the percentage of paddles with eggs. We concluded that concentrations above 1 × 106 conidia/ml are larvicidal, and Ae. albopictus laid similar numbers of eggs on fungus-impregnated and control wooden substrates; however, they were more likely to oviposit on substrates without B. bassiana. With these findings, we suggest that B. bassiana-infused ovitraps can be used for mosquito population monitoring while also delivering mycopesticides to adult mosquitoes. through population suppression and mosquito longevity reduction (Achee et al. 2019). Beauveria bassiana infects hosts through contact with the cuticle, where it adheres and germinates; it then penetrates the insect and extracts hemolymph nutrients, eventually killing the host (Litwin et al. 2020). It has also been proposed that larval death occurs due to secondary metabolites produced by the fungus and mechanical blockage of the siphon and trachea (Clark et al. 1968, Daniel et al. 2016. Entomopathogenic fungi are currently utilized on a variety of hexapod families in agriculture and ornamental crops (Arthurs andDara 2019, Mascarin et al. 2019), and B. bassiana has been approved for use in In2Care mosquito traps (In2Care BV, Wageningen, Netherlands, EPA Reg. No. 91720-1) (Snetselaar et al. 2014, Su et al. 2020. The existing literature evaluating B. bassiana as a vector control agent generally focuses on either larvae or adults and either Aedes aegypti (Linnaeus) or Anopheles gambiae Giles (Diptera: Culicidae), with few studies analyzing Ae. albopictus in the two developmental stages (Deng et al. 2017(Deng et al. , 2019a. Studies evaluating mosquito larval mortality after exposure to various strains of B. bassiana or secondary metabolites produced by the fungus aim to reduce the vector population before adulthood and therefore reduce the risk of disease transmission (Clark et al. 1968, Miranpuri and Khachatourians 1991, Montoya-Treviño et al. 2008, Alcalde-Mosqueira et al. 2014, Daniel et al. 2016, Deng et al. 2017, Vivekanandhan et al. 2018, Deng et al. 
2019a, de Luna-Santillana et al. 2020, Vivekanandhan et al. 2020. Additionally, the mortality rates in adult Aedes exposed to B. bassiana varied between 60% and 100%, depending on the length of study and exposure amount, with most studies reporting 80-90% mortality rates (de Paula et al. 2008, Darbro et al. 2012, Deng et al. 2019b, Lee et al. 2019, de Luna-Santillana et al. 2020, Shoukat et al. 2020). However, the aforementioned studies did not analyze fungus-induced behavior and rather focused on olfactory and visual attractants. These strong stimuli likely conceal fungus-induced attraction or repulsion, resulting in unexplored areas involving fungusinduced behavior (Snetselaar et al. 2014, Buckner et al. 2017. When mycopesticides are used for the control of adult mosquitoes, they must be applied at a concentration that causes pathogenicity but does not deter contact between the pathogen and host. In this study, a larval bioassay investigated the mortality of second-and third-instar Ae. albopictus exposed to five different concentrations of B. bassiana in Mycotrol ESO. The second part of the study investigated wild Ae. albopictus ovipositional behavior to determine if three different concentrations of B. bassiana altered the number of eggs laid and percent ovipositon in the ovitraps. With an understanding of these preferences, future vector management research can determine how the use of this entomopathogen would benefit surveillance and management programs. Larval Bioassay Mosquitoes Aedes albopictus were collected using ovipositional traps from the University of Hawai'i at Mānoa campus and maintained in a colony. Eggs were hatched in distilled water, and larvae were maintained at ambient temperature and humidity (22 ± 1.5°C, 65 ± 10%), with a 12L:12D photoperiod. Larvae were fed a 1:1:1 mixture of brewer's yeast, skim milk powder, and bovine liver powder (MP Biomedicals LLC., Solon, OH, USA) with 0.4% Vanderzants Vitamin Mixture for Insects (MP Biomedicals LLC., Solon, OH, USA). Adults were provided 10% (w/v) sucrose ad libitum. Adult females were periodically provided a blood meal of defibrinated bovine blood (HemoStat Laboratories, Dixon, CA, USA) warmed at 40°C for a minimum of 20 min via a parafilmwrapped Petri dish with a heat pack for warmth (Moutailler et al. 2007). Damp filter paper folded in a cone on a Petri dish was provided as an egg-laying substrate and replaced when needed. Fungus Preparation Beauveria bassiana strain GHA was purchased in the form of Mycotrol ESO (LAM International Corporation, Butte, Montana, USA). Stock Mycotrol ESO was halved with distilled water at a concentration of 1.06 × 10 10 conidia/ml (rounded to 1 × 10 10 ). From the stock solution, serial dilutions were prepared using distilled water. Five concentrations of B. bassiana were tested, in addition to a control without fungus. Furthermore, 0.05% of polysorbate 20 (Sigma-Aldrich, St. Louis, Missouri, USA) was added to each treatment and control to reduce formula separation (Dong et al. 2012). Experimental fungus concentrations were based on previous studies and WHO guidelines (WHO 2005;Deng et al. 2017Deng et al. , 2019a. Bioassay Twenty-five second-and third-instar Ae. albopictus larvae were added by a transfer pipette to containers with 100 ml of distilled water. Each serial dilution was dispensed into the larval containers for final concentrations of 1 × 10 8 , 1 × 10 7 , 1 × 10 6 , 1 × 10 5 , and 1 × 10 4 conidia/ml of B. bassiana and a control. 
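As an editorial illustration of the dilution arithmetic just described (not part of the published protocol), the short Python sketch below uses C1·V1 = C2·V2 to check how much of each ten-fold dilution of the 1 × 10^10 conidia/ml stock would be dispensed into a 100 ml larval container to reach each target concentration. The function names, the ten-fold step, and the 0.1-10 ml "pipettable volume" window are illustrative assumptions, not values taken from the study.

# Illustrative sketch (not from the paper): check dispense volumes with C1*V1 = C2*V2.
# Assumptions: 1e10 conidia/ml stock, ten-fold serial dilutions, 100 ml of water per container,
# a pipettable dispense volume between 0.1 and 10 ml; the added volume itself is neglected.

STOCK = 1e10          # conidia/ml (halved Mycotrol ESO stock reported in the text)
CONTAINER_ML = 100.0  # final water volume per larval container
TARGETS = [1e8, 1e7, 1e6, 1e5, 1e4]  # final concentrations tested

def serial_dilutions(stock, n_steps, factor=10.0):
    """Concentrations of the stock and its successive dilutions."""
    return [stock / factor ** i for i in range(n_steps + 1)]

def dispense_volume(dilution_conc, target_conc, final_volume_ml):
    """Volume of a dilution needed so the container reaches target_conc (C1*V1 = C2*V2)."""
    return target_conc * final_volume_ml / dilution_conc

if __name__ == "__main__":
    for target in TARGETS:
        for conc in serial_dilutions(STOCK, n_steps=6):
            vol = dispense_volume(conc, target, CONTAINER_ML)
            if 0.1 <= vol <= 10.0:
                print(f"{target:.0e} conidia/ml: add {vol:.2f} ml of the {conc:.0e} conidia/ml dilution")
                break

In practice the choice of which dilution to dispense trades pipetting accuracy against how much the added volume perturbs the 100 ml container, a detail the sketch ignores.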
Each trial comprised four containers for each treatment and the control. Each container was provided food and maintained at ambient temperature and humidity (22 ± 1.5°C, 65 ± 10%). Mortality was monitored every 24 h for 12 d using established WHO guidelines (WHO 2005). Up to ten mosquito larvae from each container were separated upon death and placed individually in a Petri dish with dampened black filter paper. The dishes were placed in an incubator (18 ± 2°C, RH 65 ± 10%), and fungal presence was later confirmed on the larval carcasses. The experiment was repeated three times, with a total of 300 larvae tested in each treatment and control. Oviposition Behavior Trap and Fungus Preparations Mycotrol ESO was diluted with distilled water to produce concentrations of 1 × 10 5 , 1 × 10 7 , and 1 × 10 9 conidia/ml. The diluted concentrations of B. bassiana were applied to all sides of a wooden paddle with a paint brush and allowed to dry for three h. Twenty-four 473-ml (16 oz.) black plastic cups with five small holes placed at two-thirds the height (approx. 300 ml) were arranged in groups of four to create quadovitraps. Three wooden paddles (2.5 × 14.0 × 0.5 cm) on which each concentration of B. bassiana (treatment) was applied and one paddle on which no B. bassiana applied (control) were set in individual cups of the quad-ovitrap, for a total of four paddles (Supp. Fig. 1 [Online only]). Each cup was filled with tap water to the overflow holes. Ovitraps were placed in the environment for seven days before servicing, which included collecting and replacing the wooden paddle and tap water. Trapping was conducted for six weeks, totaling 128 trapping events. Location The study was conducted at six sites, with one quad-ovitrap placed at each site. The sites were located on the University of Hawai'i at Mānoa campus (21.301° N, 157.816° W) at approximately 30 m elevation (USGS 2021). The study was conducted during the end of the transition from the wet winter season to the drier summer season from February-May 2021. The average temperature during the study was 23.44°C, and the average relative humidity was 69.45%. The average weekly rainfall was 2.42 mm. Climatology data were obtained from the Department of Atmospheric Sciences at the University of Hawai'i at Mānoa (UH Mānoa Atmospheric Sciences 2021). Sample Processing The collected paddles were placed on paper towels for two days to dry. Eggs were counted using a dissecting microscope (Leica M80, Leica Instruments Pte Ltd., Germany). Paddles without eggs were placed in an incubator to allow fungal growth to determine viability. After a two-day embryonation period, paddles with eggs were submerged into individual containers with distilled water and 0.05 g of yeast. Paddles were left submerged for one week to allow hatching. Hatched larvae were reared to adulthood, at which point they were frozen and identified to species (Darsie and Ward 2016). Statistical Analysis For the larval bioassay, a Kaplan-Meier survival analysis was performed using the R package 'survival' v.3.2-7, followed by a log-rank test with pairwise comparisons to determine differences in mosquito survivorship between the treatments and the control (Therneau 2021). Lethal concentrations (LC 50 , LC 90 , LC 99 ) at d 12 and median lethal time (LT 50 ) were calculated using a probit analysis in the R package 'ecotox' v.1.4.2 (Hlina 2021). 
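The study estimated LC50, LC90, LC99, and LT50 with a probit analysis in the R package 'ecotox'; purely as a hedged illustration, the Python sketch below re-implements a basic probit dose-response fit by maximum likelihood. The mortality counts are back-calculated from the mean mortality percentages reported for each treatment rather than the raw per-container data, control-mortality correction and confidence limits are omitted, and the function names are invented for the example.

# Illustrative probit dose-response fit (Python); the study itself used the R package 'ecotox'.
# Counts are reconstructed from the reported mean mortalities (32.00, 47.33, 98.00, 100, 100 %).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

conc = np.array([1e4, 1e5, 1e6, 1e7, 1e8])   # conidia/ml
tested = np.full(5, 300)                      # 300 larvae per treatment, as in the text
dead = np.array([96, 142, 294, 300, 300])     # reconstructed 12-d mortality counts
x = np.log10(conc)

def neg_log_lik(params):
    a, b = params
    p = np.clip(norm.cdf(a + b * x), 1e-9, 1 - 1e-9)   # probit link on log10 concentration
    return -np.sum(dead * np.log(p) + (tested - dead) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.array([-5.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = fit.x

def lethal_concentration(frac):
    """Concentration at which the fitted model predicts a mortality fraction `frac`."""
    return 10 ** ((norm.ppf(frac) - a_hat) / b_hat)

for frac in (0.50, 0.90, 0.99):
    print(f"LC{int(frac * 100)} ~ {lethal_concentration(frac):.3g} conidia/ml")

An LT50 can be estimated in the same way by fitting cumulative mortality against log time at a fixed concentration.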
For the oviposition behavior experiment, a generalized linear model (GLM) with negative binomial distribution was constructed to determine if there was a difference in the number of eggs laid among the three concentrations of fungus and the control. The goal of the model was to evaluate if concentration of fungus and site had influences on the number of eggs laid on the paddles. The model was constructed using R package 'MASS' and best fit was selected using AICc in R package 'bbmle' (Akaike 1973, Venables and Ripley 2002, Bolker and R Development Core Team 2020. Subsequently, a presence or absence GLM with a binomial distribution was constructed to determine if there was a difference in the percentage of paddles with and without eggs by concentration. All paddles that had one or more egg were included in the "present" category, while all paddles that had zero eggs were included in the "absent" category. A log-likelihood ratio test was performed against a null model to determine if the concentration significantly influenced the presence of mosquito eggs. All analyses were performed using R software v. 1.0.143 (R Core Team 2020). Kaplan-Meier Survival Analysis We conducted a Kaplan-Meier survival analysis followed by pairwise log-rank tests to determine the significant differences among each tested concentration. There was an overall significant effect of fungus concentration on survival of the mosquitoes (χ 2 = 67.50, df = 3, p < 0.0001), with all pairwise comparisons resulting in significance; therefore, each concentration of fungus resulted in a different rate of mortality (Fig. 1). The best performing negative binomial distribution GLM for the egg count predictions contained the factors of concentration of fungus and site (Table 2). There was a statistically significant decrease in the amount of eggs laid with increasing fungus concentrations (95% CI: (−2.03 × 10 −9 -−2.13 × 10 −10 ), P = 0.0104). The negative association indicated there was a deterring effect caused by the fungus to the mosquito when a gravid mosquito is scouting for oviposition locations. A likelihood ratio test between the best model and a null model resulted in significance demonstrating the concentration of fungus improved the model performance (χ 2 = 5.403, df = 1, P = 0.0201). Significant site variations in the number of eggs were also determined by the model. Presence/Absence of Eggs by Concentration The lowest percentage of paddles with eggs, 31.25% (10 out of 32) was associated with the highest concentration of fungus, 1 × 10 9 conidia/ml. At 1 × 10 7 conidia/ml, 50% (16 out of 32) of the paddles contained eggs. The lowest concentration, 1 × 10 5 conidia/ml, had the highest percentage of paddles with eggs among the three treatments, with 56.25% (18 out of 32) of paddles containing eggs. The control had the highest percentage of paddles with eggs, at 62.5% (20 out of 32). The GLM model created to assess the effect of concentration of fungus and site location on the presence and absence of mosquito eggs determined the best fit model contained the fixed effects of concentration and site (Table 2). Results showed there was a significant decline in the likelihood of eggs with an increase in fungus concentrations (95% CI: (−2.20 × 10 −9 -−3.30 × 10 −10 ), P = 0.0092). The significant negative association between occurrence and concentration indicates selective behavior when determining an oviposition location. 
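To make the presence/absence analysis concrete, the Python sketch below fits a simple logistic model of egg presence against fungal concentration using the paddle counts reported above (20/32, 18/32, 16/32, 10/32) and compares it to an intercept-only model with a likelihood-ratio test. It is an editorial illustration only: the study's GLM was fitted in R and also included a site term, which is omitted here, and the rescaling of concentration is purely for numerical stability.

# Simplified presence/absence sketch (Python); the study's GLM was fitted in R and included site.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

conc = np.array([0.0, 1e5, 1e7, 1e9])      # conidia/ml on the paddle (0 = control)
with_eggs = np.array([20, 18, 16, 10])      # paddles with eggs, out of 32, as reported above
n = np.full(4, 32)
x = conc / 1e9                              # rescaled so the optimizer behaves

def neg_log_lik(params, use_slope):
    eta = params[0] + (params[1] * x if use_slope else 0.0)
    p = np.clip(1.0 / (1.0 + np.exp(-eta)), 1e-9, 1 - 1e-9)
    return -np.sum(with_eggs * np.log(p) + (n - with_eggs) * np.log(1 - p))

full = minimize(neg_log_lik, x0=[0.5, -1.0], args=(True,), method="Nelder-Mead")
null = minimize(neg_log_lik, x0=[0.5], args=(False,), method="Nelder-Mead")
lr = 2.0 * (null.fun - full.fun)            # likelihood-ratio statistic, df = 1
print(f"slope per 1e9 conidia/ml = {full.x[1]:.3f}, LR = {lr:.2f}, p = {chi2.sf(lr, 1):.4f}")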
A likelihood ratio test resulted in a significant improvement with the addition of fungus concentration (χ2 = 7.247, df = 1, P = 0.0071). Additionally, there were significant differences in egg presence between sites in the model. Earlier studies (… 2008) reported LC50 values that were both higher than the current finding of 2.43 × 10^5 conidia/ml, at 3.6 × 10^6 and 3.86 × 10^6 conidia/ml, respectively. These three previous studies examined the lethal concentrations of various strains of B. bassiana against third- and fourth-instar Ae. aegypti. The age of larvae at introduction of the fungus has been observed to determine susceptibility, with faster-growing species experiencing lower mortality attributable to fungal exposure (Clark et al. 1968, Scholte et al. 2004). The current experiment used both second- and third-instar larvae hatched at the same time, and the second instars may have survived exposure to a higher concentration due to molting soon after exposure to the fungus. Several environmentally sampled strains of B. bassiana have been shown to require higher concentrations of conidia to cause equivalent pathogenicity (Vivekanandhan et al. 2018, de Luna-Santillana et al. 2020). For example, B. bassiana sampled from southern Indian soils was used in a larval bioassay involving Ae. aegypti, and the results indicated an LT50 of 5.91 d and 78.66% mortality after 10 d of exposure to 1 × 10^8 conidia/ml (Vivekanandhan et al. 2020). These findings at 1 × 10^8 conidia/ml indicate lower pathogenicity than that in our experiment, in which all larvae died within 24 h. [Table: mean mortality (% ± SE) was 100.00 ± 0.00 at 1 × 10^7, 98.00 ± 0.60 at 1 × 10^6, 47.33 ± 8.55 at 1 × 10^5, and 32.00 ± 7.24 at 1 × 10^4 conidia/ml, and 3.00 ± 1.00 in the control; lethal-concentration estimates (with 95% confidence limits) were 6.65 × 10^5 (5.79 × 10^5 to 7.52 × 10^5) and 1.01 × 10^6 (8.76 × 10^5 to 1.14 × 10^6) conidia/ml; additional tabulated values: 1.08 ± 0.06; χ2 = 67.50, df = 3, p < 0.0001. SE = standard error, LC = lethal concentration (conidia/ml), LCL/UCL = 95% lower/upper confidence level.] Discussion A larval mortality trend was observed: there were two to three days with a high mortality rate, with the onset staggered by concentration, followed by gradual mortality resulting in similar daily mortality percentages. This was observed for all three concentrations, 1 × 10^4, 1 × 10^5, and 1 × 10^6 conidia/ml, from d 7 to the study's conclusion (Fig. 1), diverging from the control group. There were no consecutive days in which the control exhibited a similar mortality range. The continued mortality after the large initial mortality event suggests lingering fungal effects on the larvae, potentially weakening the immune system and other vital pathways (Yassine et al. 2012, Ramirez et al. 2019, Shoukat et al. 2020). The question of whether B. bassiana, secondary metabolites, or other factors caused the rapid mortality observed in our experiment remains to be elucidated. Because Mycotrol ESO contains petroleum distillates, it is possible that the larval environment became coated in an impermeable layer of oils, causing the larvae to die of suffocation rather than fungal exposure, although the addition of polysorbate 20 reduced this possibility. By subsampling dead larvae, fungal growth was visually confirmed; however, the rapid mortality observed at the two highest concentrations is not consistent with the generalized infection timeline of the fungus (Clark et al. 1968). Secondary metabolites have been isolated and shown to cause larval mortality in past experiments (Daniel et al. 2016, Vivekanandhan et al. 2018).
Both our research team and previous researchers observed melanization of larval siphons and midguts during experimental periods, which is part of the immune response to fungal infection (Clark et al. 1968, Yassine et al. 2012. Mechanical blockage of the tracheal trunks and larval siphon by the fungus has been suggested as one of the causes of larval mortality (Clark et al. 1968, Miranpuri and Khachatourians 1991, Daniel et al. 2016, Amobonye et al. 2020. We hypothesize that multiple insecticidal modes of action occurred, causing initial and sustained mortality. Genetic engineering of B. bassiana has been proposed to increase the efficacy of the fungus in second instar Ae. albopictus through increased and quicker lethality (Deng et al. 2017(Deng et al. , 2019a. These studies hypothesized that the integration of a single-chain neurotoxic polypeptide (AaIT) from the venom of the buthid scorpion (Androctonus australis) or a toxin expressed by Bacillus thuringiensis (Cyt2Ba) into the genome of B. bassiana will cause increased virulence and decrease the survival of Ae. albopictus (Deng et al. 2017(Deng et al. , 2019a. The current experiment with B. bassiana strain GHA resulted in an LC 50 of 2.43 × 10 5 conidia/ml at d 12, whereas the LC 50 of the toxin-infused B. bassiana strain GIM3.428 was 1.47 × 10 3 conidia/ml at d 10, and the LC 50 of a wild type of the same strain was 1.65 × 10 4 conidia/ml at d 10 (Deng et al. 2017). It would appear that the toxin-infused strain caused mortality at a lower concentration than the commonly used entomopathogenic strains. However, the B. bassiana strain GHA had a shorter LT 50 of 1.05 d at 1 × 10 7 conidia/ml and 3.68 d at 1 × 10 6 conidia/ml compared to the Aa-IT toxin-infused strain, with an LT 50 at 4.4 d and 5.5 d, respectively, and the Cyt2Ba toxin-infused strain, with an LT 50 of 4.0 d and 5.5 d, respectively (Deng et al. 2017(Deng et al. , 2019a. This can likely be attributed to Mycotrol ESO formula optimization as a pesticide. When assessing whether adult behavior was altered by different concentrations of B. bassiana, we found that control paddles and paddles coated with 1 × 10 5 conidia/ml more commonly had eggs than paddles coated with 1 × 10 7 and 1 × 10 9 conidia/ml. Model results show a significant negative association between the presence of mosquito eggs and the concentration of B. bassiana. The declining percentage of paddles with eggs with increasing concentration may be explained by the fungus itself or the product formulation. The fungus-impregnated wooden substrate may have been a deterrent to gravid mosquitoes, as hyphae or product oils potentially made the paddle unfavorable for oviposition. Additionally, secondary metabolites produced by the fungus may cause a repulsive effect in Ae. albopictus. However, little is known about secondary metabolite interactions with mosquitoes, and a previous study speculated that secondary metabolites have minimal behavioral influences on mosquito oviposition preferences . Within the past ten years, research has routinely evaluated B. bassiana and other entomopathogenic fungi toxicity in adult mosquitoes and found high rates of mortality using different infection protocols (Darbro et al. 2012, Jaber et al. 2016, Buckner et al. 2017, Lee et al. 2019, de Luna-Santillana et al. 2020, Shoukat et al. 2020, Vivekanandhan et al. 2020. One experiment sprayed a 1 × 10 8 conidia/ml suspension of various isolates of B. bassiana into a cage with Ae. 
albopictus and found that multiple isolates had 100% adult mortality at 10 d, with an LT 50 as low as 4.5 d for the most pathogenic strain (Lee et al. 2019). A different study used a concentration of 3 × 10 8 conidia/ml and found a mortality rate of 87.5% at seven d with an LC 50 of 3 × 10 6 conidia/ml (Shoukat et al. 2020). The concentration of conidia transferred to an ovipositing mosquito is lower than the concentration examined in these studies, which is why our methods used a higher concentration of 1 × 10 9 conidia/ml. Future research can investigate attractive measures in low-cost oviposition traps to increase the likelihood of ovipositing and the number of eggs laid in the ovitrap, further increasing mycopesticide contact time and removing more vectors from the environment. Using current trapping methods, many experiments have been successful in transferring conidia from traps to mosquitoes, resulting in mosquito mortality (Scholte et al. 2005, Farenhorst et al. 2008, Howard et al. 2010, Snetselaar et al. 2014, Buckner et al. 2017. These studies reported that mosquitoes are more attracted to dark colors, yeast-infused water, and synthetic lures than fungi (Snetselaar et al. 2014, Buckner et al. 2017. The fungus delivery system has varied widely in trials; passive methods included applying conidia to resting container walls and netting, and active methods in laboratory trials included applying conidia topically, injecting conidia into the thorax, and spraying conidia directly onto adults (Farenhorst et al. 2008, Howard et al. 2010, Mnyone et al. 2010, Johnson et al. 2019, Lee et al. 2019, Ramirez et al. 2019, Shoukat et al. 2020). Our study applied the mycopesticide formula to the substrate with a paintbrush for an even coating that dried on the wooden substrate. Two studies found no repellent effect between nets treated with the fungus and control nets by observing at the number of mosquitoes that traveled toward an odor cue through placed holes in fungus-impregnated netting (Howard et al. 2010, Mnyone et al. 2010. Conversely, George et al. (2013) performed an olfactory choice experiment in which fungus was placed at one end of the Y-tube and found that fungus was an attractant to Anopheles stephensi Liston. Using ovitraps and different formulation of B. bassiana, we demonstrated Ae. albopictus did not exhibit a similar response. The inconclusive body of literature necessitates further research on fungus-induced behavior modifications as well as different strains, formulations, and species of mosquitoes. Aedes albopictus exhibits an ovipositional behavior known as "skip oviposition", where one gravid female lays eggs from the same gonotrophic cycle in multiple larval habitats (Trexler et al. 1998, Reinbold-Wasson andReiskind 2021). Thus, multiple eggs from the same gravid female may have been laid at multiple paddles in the same quad-ovitrap. If this occurred with the highest concentration of fungus habitat being selected against, or had a repulsive effect on the gravid mosquito, it would further support our observation that Ae. albopictus altered the number of eggs laid due to the presence of fungus. 
The two highest concentrations of fungus, 1 × 10 9 and 1 × 10 7 conidia/ml, were associated with lower medians and numbers of paddles with eggs than the control and 1 × 10 5 conidia/ml concentration, as would be predicted if there was a repellent effect, causing the gravid female to oviposit at a perceived better location without a high concentration of fungus and then move on to a different site. This may have been occurring during the experiment, as there was a higher likelihood of the presence of eggs on control paddles, indicating that control paddles and paddles with lower concentrations of B. bassiana were considered more attractive oviposition sites. Additionally, skip oviposition may also increase the likelihood of fungal transference to the mosquito, as increased contact time results in higher infectivity (de Paula et al. 2008). One challenge in creating an effective fungus delivery system is the low tolerance of entomopathogenic fungi to sunlight. As fungi degrade quickly in UV light, the traps were placed in shaded and humid areas, which are also mosquito-favored habitats (Fargues et al. 1996, Zimmermann 2007, Dickens et al. 2018. Observations of the postcollection paddles showed sporulated fungal bodies, indicating successful application of the fungus. The current study shows that UV degradation can be minimized with attention to trap placement, aided by temperatures that allow for fungus survival. Conclusions This study was conducted to ascertain the integration feasibility of two existing vector management strategies, entomopathogenic fungi and ovitraps. The results showed larvicidal activity of B. bassiana at concentrations at or higher than 1 × 10 6 conidia/ml against second-and third-instar Ae. albopictus. We also demonstrated that gravid Ae. albopictus laid fewer eggs on wooden paddles with higher concentrations of B. bassiana and the likelihood of egg laying decreased with increasing fungal concentrations. By using a precise concentration of entomopathogenic fungi, infectivity and pathogenicity can be optimized for control measures without sacrificing the number of mosquitoes infected. The use of B. bassiana can be integrated with existing integrated vector management strategies to better control vectors and to minimize the spread of insecticide resistance in mosquito populations.
2022-07-08T06:15:54.398Z
2022-07-07T00:00:00.000
{ "year": 2022, "sha1": "72e386de45fa17bfed881869b98445db54ac0c80", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jme/advance-article-pdf/doi/10.1093/jme/tjac084/44514013/tjac084.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "72e386de45fa17bfed881869b98445db54ac0c80", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
6267202
pes2o/s2orc
v3-fos-license
Narrow Fingerprint Template Synthesis by Clustering Minutiae Descriptors SUMMARY The narrow swipe sensor has been widely used in embedded systems such as smart-phones. However, the size of the captured image is much smaller than that obtained by the traditional area sensor. Therefore, the limited template coverage is the performance bottleneck of such systems. Aiming to increase the geometric coverage of templates, a novel fingerprint template feature synthesis scheme is proposed in the present study. This method can synthesize multiple input fingerprints into a wider template by clustering the minutiae descriptors. The proposed method consists of two modules. Firstly, a user behavior-based Registration Pattern Inspection (RPI) algorithm is proposed to select the qualified candidates. Secondly, an iterative clustering algorithm, Modified Fuzzy C-Means (MFCM), is proposed to process the large number of minutiae descriptors and then generate the final template. Experiments conducted on a swipe fingerprint database validate that this innovative method gives rise to significant improvements in reducing FRR (False Reject Rate) and EER (Equal Error Rate). Introduction Nowadays, fingerprint authentication has been widely applied in smart-phones and is universally acknowledged to be an ideal method to protect private information from being divulged [1]. To embed a fingerprint sensor into a smart-phone, the size of the sensor has to be reduced greatly. In this situation, the narrow swipe sensor is particularly applicable due to its relatively small physical volume. However, the size of the image captured by a narrow swipe sensor is much smaller than that captured by a conventional area sensor, as shown in Fig. 1. Consequently, the enrolled template image may not cover input fingerprint images with large displacement. In this regard, high FAR (False Accept Rate) becomes one of the most serious problems. Additionally, there is no space left for a swiping guide. As a result, fingerprint images with non-linear distortion are generated during the swiping process. Conventional solutions for the limited sensing area problem are mainly divided into two kinds of approaches: (1) image mosaicking, which combines multiple impressions at the image level [2]-[6]; and (2) template feature synthesis, which merges features from multiple impressions at the feature level [7]-[11]. As for image mosaicking, A. Jain and A. Ross merged multiple fingerprint images by employing the iterative closest point (ICP) algorithm [4]. A. Ross et al. performed mosaicking by utilizing a thin plate spline as a transformation model to combine several fingerprint images into one template [5]. However, processing multiple images is too computationally expensive for an embedded system. Additionally, the mosaicked result is quite sensitive to distortion and scaling. As for template feature set synthesis methods, W.C. Ryu et al. employed successive Bayesian estimation to merge several enrollment minutiae sets [10]. T. Uz et al. merged several impressions based on hierarchical Delaunay triangulations [11]. Nevertheless, compared to the area sensor case, the distortion and scaling problems are more serious in the narrow swipe sensor situation. Furthermore, the image captured by a narrow swipe sensor can be rotated or distorted by the user's swiping behavior.
If we take the distortion and scaling issues into account, the traditional approaches might generate inferior template and also lower the system performance. Besides the two kind of methods mentioned above, S. Sin et al. updated a ninetemplates structure during verification [12], however multiple templates dramatically increased the matching time and also the storage requirement. In this work, we attempt to explore a convincing solution: a novel feature synthesis scheme to solve the nonlinear distortion and scaling problems, thereby expanding the template coverage in narrow swipe sensor case. This scheme consists of two consecutive modules: 1) Registration Pattern Inspection (RPI) and 2) a novel Modified Fuzzy Copyright c 2017 The Institute of Electronics, Information and Communication Engineers C-Means (MFCM) cluster algorithm. Actually, the proposed two modules are closely interrelated, since the second module relies on the first module to ensure the synthesis candidates quality. In the first module, synthesis candidates with serious non-linear distortion are eliminated. Based on a large amount of experimental data, we find that, if the user swipe over the sensor in an improper way, the distortion will be appeared in the reconstructed image. To model and address the distortion problem, a continuous vector representation of the swipe pattern: trajectory is derived from image reconstruction process. The trajectory directly describes the swiping behavior of human being. By utilizing this feature, the large distorted images could be rejected in advance even without image quality assessment. Consequently, the following cluster module will not suffer from distortion problem. The second module is used to neutralize the scaling issue and eliminate the spurious minutiae, which can severely affect the matching performance. The proposed method is inspired by the following two aspects. (1) Minutiae descriptor has been validated to be an effective feature to represent fingerprint and to fulfill the matching tasks [13]- [23]. Generally, minutiae descriptor based methods perform much better than those based on minutiae in verification tasks, since the descriptors contains more information than a single minutia. In other words, the representation ability of standard minutia is enhanced by the auxiliary information from descriptor. (2) Motivated by the fact that fuzzy c-means (FCM) with its derivatives have been successfully applied in many classification tasks [24]- [28], in this work we employ the improved FCM algorithm to classify the minutiae descriptors from multiple impressions into several separated clusters. The statistical information derived from each cluster is applied to compensate the scaling issue, eliminate the spurious, and enhance the robustness of synthesized feature. The original FCM algorithm does not take any contextual information into consideration resulting in low robustness to noise. Response to this, in our approach the minutiae descriptor similarity constraint is incorporated intrinsically into the clustering process by supplementing a penalty term. Finally, the wider final template is obtained by merging each minutiae descriptor clusters. Furthermore, the unstable minutiae will be eliminated and the position of minutiae will be rectified as well. Experiments conducted over database, which is composed of narrow fingerprint images, approved the effeteness and robustness of proposed synthesis scheme. The rest of this paper is organized as following: Sect. 
2 presents the RPI algorithm. The minutiae descriptor is introduced in Sect. 3. Section 4 describes the MFCM clustering algorithm. Section 5 exhibits the experimental results and analysis to confirm the validity and robustness of the proposed method. A brief conclusion will be given in Sect. 6. Narrow Swipe Sensor Image Reconstruction In order to address the distortion issue, the image acquisition mechanism of the sweep sensor based system should be investigated, since it is significantly different from area sensor. Regarding the swipe sensor situation, the fingerprint image (as shown in Fig. 2 (a)) need to be assembled from large amount of raw frames (as shown in Fig. 2 (b)). This process is briefed as follows. Firstly, a sequence of sampled raw image frames F are captured by the swipe sensor at a fixed sampling rate, which can be denoted as where F i is a pixel matrix with dimensions of N rows and M columns. K is the total frame count. Finally, by employing the MV x and MV y , the redundant image lines are removed and then raw frames are reconstructed to a fingerprint image R (as shown in Fig. 2 (c)). L i is the i th line from the reconstructed image R, which is represented in the matrix notation By utilizing the continuous vector array: MV x and MV y , the characteristic of the user's sweep behavior as shown in Fig. 3 (a) could be visualized and analyzed. Registration Pattern Inspection To quantify the distortion level, a continuous vector representation of the swipe pattern: trajectory T K is introduced. Figure 3 (b-d) illustrate the representative images with good quality and distorted ones. To formulate T K , the moving vectors MV x and MV y are accumulated consecutively along horizontal and vertical direction and then turn out to be a series of key points. T K is defined by connecting those isolated key points with cascade lines as illustrated in Fig. 4 (b) and formulated by Eq. (4), Eq. (5) and Eq. (6). where mvx i ∈ MV x and mvy i ∈ MV y are instant moving vectors, x k and y k are accumulated horizontal and vertical displacement, respectively. According to these equations, each horizontal displacement in raw frames is accumulated along the swiping direction. To this end, each tiny finger movement in the swipe process is recorded by the trajectory T K . An example of distorted image with corresponding trajectory is shown in Fig. 3 (e and f). As a result, T K explicitly reflects the user's sweep movement over the reconstructed image. After the trajectory T K is obtained, we use the standard deviation S D(ϕ) of instantaneous swipe angles (as shown in Fig. 4 (a)) to describe the distortion level as de-tailed in Eq. (8). This is due to the fact that if the user swipe over the surface of the sensor with a uniform direction, the S D(ϕ) would be equal to zero. In other words, if the user rotates the finger irregularly during the swipe procure, the reconstructed image is distorted along with large S D(ϕ). As a result, the distortion level influenced by the user's behavior can be approximately estimated by S D(ϕ). To simplify the computation process, the trajectory is equidistantly divided into M partitions with fixed window size of W and then the instantaneous swipe angle array {ϕ k } M k=0 is derived from the trajectory T K . Namely, the ϕ k is the relative angle of two consecutive partitions which could be calculated by Eq. (7). where, Γ(θ, ϕ) denotes the rotation angle from θ to ϕ and W is the window size, x k and y k ∈ T K , respectively. 
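Because the expressions in Eqs. (6)-(8) are not reproduced in this extracted text, the Python sketch below is only a hedged reconstruction of the RPI idea as described above: it accumulates the moving vectors into a trajectory and measures the spread of windowed swipe angles. The windowing, the use of per-partition direction angles rather than the paper's relative angles, and all names (window, distortion_sd) are assumptions introduced for illustration, not the authors' exact formulation.

# Hedged sketch of the Registration Pattern Inspection (RPI) distortion measure.
# Per-partition direction angles are used as a simplification of the relative angles of Eq. (7).
import math

def trajectory(mvx, mvy):
    """Accumulate per-frame moving vectors into trajectory key points (x_k, y_k)."""
    xs, ys, x, y = [], [], 0.0, 0.0
    for dx, dy in zip(mvx, mvy):
        x += dx
        y += dy
        xs.append(x)
        ys.append(y)
    return xs, ys

def partition_angles(xs, ys, window):
    """Direction angle (radians, relative to the vertical swipe axis) of each partition."""
    return [math.atan2(xs[k] - xs[k - window], ys[k] - ys[k - window])
            for k in range(window, len(xs), window)]

def distortion_sd(mvx, mvy, window=8):
    """Standard deviation of windowed swipe angles; near zero for a uniform, undistorted swipe."""
    xs, ys = trajectory(mvx, mvy)
    ang = partition_angles(xs, ys, window)
    if len(ang) < 2:
        return 0.0   # swipe too short to assess in this sketch
    mean = sum(ang) / len(ang)
    return math.sqrt(sum((a - mean) ** 2 for a in ang) / len(ang))
    # The paper then normalizes this spread into a [0, 1] score S(phi) (Eq. (10), described next)
    # and keeps only candidates whose score exceeds the threshold D_thr.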
To normalize the distortion level, score S (ϕ) is defined as the following Eq. (10). As a result, the distortion score S (ϕ) spreads between 0 and 1. where T c is a normalization parameter. According to different values of the distortion score: S (ϕ), the reconstructed images can be classified as follows: (1) Class I: Straight image without distortion; (2) Class II: Distorted image; (3) Class III: Rotated image without distortion. In case that S (ϕ) is larger than a predefined threshold D thr , the candidate is considered as good quality (Class I and Class III). Note that D thr is derived from sufficient experimental observation. Consequently, the uniform swipe anglē ϕ can be simply estimated by μ ϕ , and the minutiae feature extracted from rotated image (Class III) could be compensated. After candidates filtering, the images in Class I and Class III are selected for the synthesis process. Minutiae Descriptor In this approach the minutiae descriptor is employed to facilitate the cluster process. This descriptor consists of two features: Minutiae Ridge Tracing Points (MRTP) and Minutiae Orientation Descriptor (MOD). Minutiae Ridge Tracing Points In our approach, a series ridge tracing points R i are served as a representation of the corresponding ridge, where i is the minutiae index. Two rotation invariant features, the Euclidean distance RL i k and rotation angle Rθ i k between two neighbor sampling points, are derived from ridge tracing points as follows: where k is the tracing points index, N R is the total number of each ridge tracing points. An instance of MRTP is shown in Fig. 5 (a). Minutiae Orientation Descriptor The MOD is extracted by following procedures. (1) ROI (Region of Interest) definition. The circular area centering at minutia position, and ranging from radius R i to R o is served as the ROI. In our work, the R i is 10 pixels, while the R o is 40 pixels, respectively. (2) Tessellation. We circularly tessellate the ROI area into equidistant sectors and bands, which is denoted as S = {S n } s×b n=1 , where s is the sector count and b is the band count, n is the sub-sector index. The red circular area in Fig. 5 (b) is an example of band, while the blue area shows one sub-sector S n . In this figure, there exists 3 bands and each band consists of 8 sectors. Of note, the first band starts from the minutia direction to achieve rotation invariant. (3) Orientation calculation. The descriptor can be denoted as follows: where ω i k ∈ [0, π) are the corresponding orientation sets derived from the circular tessellation S , i indicates that the descriptor belongs to the i th minutia. In our experiments, the s is selected as 8 and b is set to be 3, respectively. An instance of tessellation result is shown in Fig. 5 In the narrow swipe sensor case, the ROI easily falls outside the foreground region, which will be considered as invalid area. To address this issue, we predict the orientation value ψ i of invalid sector from its N O -nearest valid cells, as shown in Fig. 5 (c) by Eq. (13): where ψ i is the predicted orientation value, while {ω o } N O o=1 are orientation of its nearest neighbors. Template Synthesis by MFCM The MFCM synthesis process is detailed as follows. Firstly, the prime template is selected from several enrolled fingerprint candidates by examining the distortion score and image quality. Secondly, the core point which consists of coordinate x and y in pixels is extracted. 
Note that for the arc type fingerprint, the point with highest curvature in the ridge, is assigned as the core point. The core extraction algorithm in [29] is used in this work. Thirdly, all the qualified input fingerprints are aligned with the prime template by the core position. The qualified inputs are defined as follows: 1) Matching score against prime template is higher than the threshold S thr . The minutiae descriptor based greedy matching algorithm in [18] is applied to get the matching score. Furthermore, we use the Eq. (19) and Eq. (20) to obtain the similarity of one minutia descriptor pair. The final matching score of a pair of fingerprint images is normalized to [0, 1000) in our work. 2) Distortion score is higher than D thr . In our experiments the S thr is selected as 600 out of 1000, while the D thr is selected as 0.8 out of 1.0. We also notice that the robustness of extracted minutiae descriptor set and core also have significant impact on the performance of proposed synthesis algorithm. To specific, the inferior image quality can increase the possibility of spurious appearance and lower the accuracy of core detection result. Responding to this, we employ an image quality map to select robust minutiae and core according to the quality assessment. Firstly, a block-wise (e.g. 16 × 16 pixels for one block) quality index as shown in Fig. 6 (a and b) is derived from the original image. We use the strength of average squared binary gradients which is introduced in [30] to calculate the block-wise image quality value. Secondly, the quality map is binarized with respect to the threshold as shown in Fig. 6 (c) which only consists of black and white blocks. Finally, the minutiae located in the black block are selected for the synthesis process. The candidates, whose core is located at white block region, are eliminated. Since we select the candidates with strict quality regulation, the core feature could be considered as a robust landmark for alignment. Finally, the minutiae descriptors are clustered by MFCM and then the clusters are merged by predefined rules followed by that new minutiae are added into the prime template. The overall flowchart of proposed synthesis scheme is illustrated in Fig. 7. Fuzzy C-Means Clustering Algorithm FCM clustering algorithm, an unsupervised clustering technique, has been widely used in classification applications. The algorithm iteratively clusters the data set to an optimal c partitions by minimizing the squared error objective function J m by the following equation: where X = {x i , i = 1, 2, . . . , N|x i ∈ R d } is the data set in the d-dimensional vector space, c denotes the number of clusters, N F is the number of points in the data set. u ki describes the degree of membership of x i in the k th cluster, m is a fuzzy parameter to determine the fuzziness of the resulting classification, v k is the prototype of the center of cluster k, • denotes the Euclidean norm. Modified FCM Clustering Algorithm Intuitively, the minutiae descriptors can be considered as some points distributed on the coordinate plane. Our target is to classify those points into c clusters with respect to their position and intrinsic feature. In this work, the position refers to the coordinate of minutia descriptor describe the intrinsic characteristic of minutia, where N D denotes the aligned minutiae number. We extend the original FCM by incorporating the minutiae descriptor similarity into the objective function. 
where v k and u ki are defined as follows: where D R is the neighborhood minutiae descriptors set within block-size R by R. n R is the overall minutiae descriptor number in the neighborhood block. ξ controls the strength of influence from neighborhood descriptors. r represents one of the neighbor minutiae descriptors of cluster prototype descriptor k. b is the loop counter. S M rk is the similarity between minutiae descriptor r and cluster prototype descriptor k. x i and v k are the coordinates of the minutiae descriptor which consists of x and y. S M rk is formulated by the following equation: where S R (r, k) and D MO (r, k) are the similarity measurement function (Eq. (19) and Eq. (20)) of minutiae ridge tracing points and orientation descriptor, respectively. α and β are the weights assigned to each part, respectively. In our experiments, the α is set to be 0.8 and the β is set to be 0.2. where N R is the common tracing points number of corresponding minutiae pair. ε 1 = 0.4 and ε 2 = 0.6 are the weights of each contribution. The MFCM algorithm is described in the following. Step 1. Initialize the membership matrix U = [u ki ] 1≤k≤c,1≤i≤N with random values between 0 and 1 and also satisfy the constraint c k=1 u ki = 1, i = 1, 2, . . . , N. Since the spurious minutiae clusters could be eliminated by the merging step, the clusters number c is decided by the maximum number count of aligned minutiae from synthesis input. Note that, we initialize the cluster centroids V k by randomly selected minutiae. Empirical setting of neighbour block size R to 32 pixel. Step 4. The objective function can get minimum as follows: when V new − V old < , the iteration will be terminated accompanied with generating c partitions, where V is the cluster center vectors. The value of = 10 −3 is found to be appropriate by large amount of experiments. Merging Minutiae Descriptor Clusters By employing the statistical information derived from the minutiae cluster, new minutiae position is decided by the corresponding cluster center, and also spurious minutiae are eliminated. The rule to examine the spurious minutiae is that the ratio of member count of a cluster to the average member count is less than 0.38. In this situation this cluster would be eliminated. Experiment Environment In our experiments, the accuracy and robustness of the proposed algorithm are evaluated. The database is captured by a swipe sensor named FPC1080. The physical width of the swipe sensor is only 8 mm, covering approximately half width of normal human finger, and the width of generated raw image is 126 pixels in this system. The height of the reconstructed image is approximately 400 pixels. The original raw image frames are reconstructed firstly, then the fingerprint image and corresponding moving vectors are used for evaluation. The database consists of 100 fingers, where each finger includes 100 images. Figure 2 shows an example image of the database. Note that, the images with nonlinear distortion and bad quality region are included in this database. False Accept Rate (FAR), False Reject Rate (FRR) and Equal Error Rate (EER) are commonly used to estimate the performance of a fingerprint identification system. The FAR is calculated by the probability that imposter impressions are falsely accepted, on the other hand FRR is calculated by the probability that the genuine impressions are falsely rejected. Therefore, the FAR is considered as the measurement on security while FRR is the measurement on convince. 
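Following the FAR/FRR definitions just given, the Python sketch below shows, purely as an illustration, how an EER can be located by sweeping a decision threshold over genuine and impostor score lists on the paper's [0, 1000) matching-score scale. The Gaussian score distributions are placeholders; only the list sizes echo the 7,000 genuine and 693,000 impostor matches described in the next section.

# Illustrative FAR/FRR/EER computation from genuine and impostor score lists (Python).
import numpy as np

def far_frr(genuine, impostor, threshold):
    far = float(np.mean(impostor >= threshold))   # impostors falsely accepted
    frr = float(np.mean(genuine < threshold))     # genuine attempts falsely rejected
    return far, frr

def equal_error_rate(genuine, impostor, score_max=1000.0, steps=1001):
    """Sweep thresholds and return (EER, threshold) where FAR and FRR are closest."""
    best_gap, best_eer, best_thr = 2.0, 0.0, 0.0
    for t in np.linspace(0.0, score_max, steps):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_eer, best_thr = abs(far - frr), (far + frr) / 2.0, t
    return best_eer, best_thr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genuine = rng.normal(700.0, 80.0, 7000)      # placeholder genuine-match scores
    impostor = rng.normal(300.0, 90.0, 693000)   # placeholder impostor-match scores
    eer, thr = equal_error_rate(genuine, impostor)
    print(f"EER ~ {eer:.4%} at threshold ~ {thr:.0f}")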
DET (Detection Error Tradeoff) curve plots the FRR against the FAR at different thresholds. FAR100, FAR1000 and FAR10000, which denotes the value of FRR for FAR equals 1/100, 1/1000, 1/10000, respectively. Evaluation of Registration Pattern Inspection In order to validate the performance gained by the proposed RPI algorithm, two experimental results are compared. The first one is conducted with the templates selected by RPI. The image with highest distortion score is assigned as the template. The second one is conducted by randomly selecting templates, and matching against the same testing set. In order to evaluate the algorithm more objectively, we randomly select 10 sets of the original templates from candidates and calculate the mean and standard deviation of the EER and FAR values, respectively. The minutiae descriptor based greedy matching algorithm in [18] is applied to get the matching result. The database is divided into two groups, the first 30 images of each finger are served as the template candidates, while the rest 70 images are used for matching. Therefore, the number of genuine matching is 100 × 70 =7000, and the number of imposter matching is 100 × ((100-1) ×70) = 693000. The experimental procedures are shown as Fig. 8. Comparison results for with RPI and without RPI, are illustrated in Fig. 9. The results suggest that the perfor-mance is significantly improved by proposed RPI algorithm. The mean value of EER of ten tests decreases from 6.28% to 2.75%, and FAR10000 value decreases from 17.70% to 8.25% as detailed in Table 1. Based on our experiments, there are approximately 12 qualified inputs for each finger in average. This result could be interpreted that the templates candidates with non-linear distortion are removed by RPI algorithm. The templates with better quality significantly contribute to the performance gain. This result also indicates that the quality of templates has strong impact on the verification performance. Several instances of images which from the same finger with respective distortion score are shown in Fig. 10. Note that, larger value indicates less distortion appears in the fingerprint images. Evaluation of MFCM In order to evaluate the synthesis algorithm, we compare three results: (1) Fig. 9. From the DET curves we can see that, the random tests yield the lowest performance due to distortion and small coverage area. When the RPI filter the template candidates, the EER value drops from 6.28% to 2.75%. Finally, by applying the proposed synthesis scheme with 30 candidates, EER decreases from 2.75% to 1.51%, and FAR10000 decreases from 8.25% to 4.90%, respectively. These results confirm that the performance has been dramatically improved with the benefits of robustly expanded templates. To specific, we further explain these results by following two aspects. Firstly, the expanded template coverage brings up the genuine matching score. The Fig. 11 shows the average template coverage at different merging times. Upon exhausting 30 candidate impressions, the average template coverage is expanded from 126 pixels to 176 pixels and also the average minutiae number is increased from 14 to 28 as well. Secondly, the template feature is refined by the proposed MFCM algorithm, minutiae position and descriptor are refined with higher precision and spurious minutiae are eliminated. According to examples of cluster centroids and cluster members as shown in Fig. 
12, it is evident that the orientation descriptors of cluster members have ridge flow patterns similar to those of their corresponding cluster centroids. This result confirms the validity of the descriptor-based clustering algorithm. The maximum memory required by this method is less than 20k words, which allows the algorithm to run in an embedded system. Additionally, the processing time is approximately 10 milliseconds on a 2.8 GHz quad-core PC. Algorithms Evaluation Since almost all fingerprint template synthesis algorithms target the area sensor case, we compare our work with Sin's template update method [12], which, like our method, is designed for the narrow swipe sensor. The same training and matching sets are applied, and then the matching results are compared. The DET curves are shown in Fig. 13 and detailed in Table 2. The EER drops from 2.52% to 1.51%, while the FAR10000 drops from 10.2% to 4.90%. Since the method in [12] did not take the distortion issue into consideration, distorted images are easily enrolled as templates, and thus the verification accuracy is compromised. Furthermore, Sin employed multiple templates to form the template structure, which makes obtaining a matching result computationally expensive. Thanks to the RPI module and the synthesis module, our method achieves lower EER and FAR10000 values than those obtained by Sin's method. Furthermore, the final template size in our method is almost half of that in Sin's method. Thus our synthesis scheme is a more compact and effective method. Additionally, the matching time drops from 60 milliseconds to 10 milliseconds compared with Sin's method. Figure 14 shows several synthesis instances produced by MFCM; the spurious minutiae marked by black rectangles are eliminated. We also compare our proposed technique with four well-known minutiae descriptor based approaches [13], [16], [21], [23], the fourth of which, Zhou [23], is an image texture based minutiae descriptor. The DET curves of the comparison results are shown in Fig. 15. The corresponding performance indices (EER and FAR) of the four methods [13], [16], [21], [23] and the proposed algorithm are reported in Table 3 (comparison results of the proposed algorithm and the algorithms in [13], [16], [21] and [23]). One can see that our proposed method achieves the lowest EER value. This result can be explained as follows: (1) The local minutiae neighbour based methods [13] and [16] suffered from the limited number of minutiae in the narrow fingerprint situation. (2) The methods in [21] and [23] employed the ridge feature and the texture information around minutiae, respectively, to increase the discrimination power; however, both suffered from the border effect in the narrow fingerprint case. (3) By employing the refined minutiae descriptors obtained from the proposed MFCM algorithm, our method outperforms the above four approaches. In other words, the template feature representation becomes more accurate by employing the MFCM. Conclusions In this work we employ a novel minutiae descriptor based synthesis algorithm to generate an expanded new template for narrow swipe sensor based systems. Our major contributions can be summarized as follows: (1) A novel user behavior based distortion analysis algorithm, RPI, is proposed to quantify the distortion level in the reconstructed image of a narrow swipe sensor. The candidates containing nonlinear distortion can be removed in advance without a time-consuming image quality check.
(2) A minutiae-descriptor based cluster algorithm MFCM is proposed, which works robustly to deal with the scaling issue, eliminate spurious minutiae, and generate the expanded template. By applying the proposed method, the matching accuracy is improved without sacrifice of the compactness of template. Moreover, the matching time is dramatically decreased compared with multiple templates based approach. These results demonstrate the feasibility of obtaining a very effective fingerprint template representation for narrow sensor based fingerprint authentication systems. These results also encourage further exploitation of the matching method for distorted image caused by the user's behavior.
2018-04-03T03:29:35.532Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "9162a067f556e9312e63638a1464059b15d6691d", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/transinf/E100.D/6/E100.D_2016EDP7401/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0bebe794eba5bcd4aa3be9e4a25388e85e63cf7d", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
116539606
pes2o/s2orc
v3-fos-license
Searching optimal shape in viscous flow: its dependence on Reynolds number In this work a simple problem on 2D optimal shape for body immersed in a viscous flow is analyzed. The body has geometrical constraints and its profile would be found in the class of cubics which satisfy those conditions. The optimal profile depends on the leading coefficient of these cubics and its relation with the Reynolds number of the system is found. The solution to the problem uses a method based on a suitable transformation rule for the cartesian reference. We would study the problem of searching the optimal 2D shape or profile of a cartesian object immersed in a constant fluid viscous flow and subjected to some constraints on the boundary. This is a particular question on the general field of shape optimization for bodies immersed in flows (see [4]). Let be {x, y} a cartesian system, and y = f (x) a function whose graph is the profile which we want to optimize. The function f is subjected to the following constraints, arising from engineering requirements: where x 0 and y 0 are positive numbers. The graph is immersed in a viscous flow having, at freestream zone, constant velocity V = (v ∞ , 0) parallel to the x-axis, with v ∞ > 0. What is, or which are, the functions f that minimize the pressure drop along its profile, that is, if p = p(x, y) is the static pressure along the graph, what is the curve for which the difference ∆p = p(0, 0) − p(x 0 , y 0 ) is minimum? The question is related to drag and lift optimization (see [1]). We consider the cubics f (x) = ax 3 + bx 2 + cx + d. Using c1, c2 and c3, we find The derivative of the general cubic is f ′ (x) = 3ax 2 +2bx+c so that f ′ (0) = c. From c4, we have the following condition on parameter a: while a few algebra, imposing the condition f ′ (t) = 0 if and only if t ≤ 0 or t ≥ x 0 , shows that c5 is satisfied if Then the search of the optimal profile is defined on the class of cubics We use Newton's theory on fluid velocity distribution on the profile or surface of a body immersed in a flow with constant freestream speed v ∞ ( [2]). If α = α(x) is the angle between the direction of the flow and the tangent to the curve, the velocity in a point of the profile has two components, the tangential one with module v ∞ cosα, and the normal one with module v ∞ sinα. The latter gives the amount for the body resistence (see [3]). But we would analyze the flow on the upper neighbour of the profile, where fluid has a tangent velocity field given by From usual trigonometrical formulas the following identity holds Let µ the dynamic viscosity, ρ the density and p = p(x, f (x)) the static pressure of the fluid on the body profile. We could write the Navier-Stokes equations for this flow in the upper neighbour of the profile, with (7) and p(0, 0) = p(x 0 , y 0 ) as boundary condition: We try to simplify the resolution of previous system by the following transformation rule on coordinates system: It is important to note that, from c5 condition and usual notions of real analysis, the function f is invertible on the interval (0, Then, the (X,Y )-flow near the curve is parallel to X, that is the velocity field U in the system {X, Y } has only the X-component: U = (U 1 , 0). If σ = ρ(g(X), Y + X), ν = µ(g(X), Y + X) and P = p(g(X), Y + X) are the representation of the scalar functions ρ, µ, p in the {X, Y } system, the flow is described by the simple Navier-Stokes equation The boundary conditions are now U(X, 0) = v 1 (g(X), Y + X) and P (0, 0) = P (y 0 , 0). 
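The displayed transformation rule introduced above was lost in extraction; from the inverse expressions that do survive, σ = ρ(g(X), Y + X) and P = p(g(X), Y + X) with g = f⁻¹, it can be inferred and restated in LaTeX as follows. This is a reader's reconstruction consistent with those expressions, not the paper's own equation.

% Reader's reconstruction of the lost coordinate transformation, inferred from the inverse
% expressions given in the text (x = g(X), y = Y + X, with g = f^{-1}):
\[
  X = f(x), \qquad Y = y - f(x),
\]
so that $\rho(g(X), Y + X) = \rho(x, y)$, the profile $y = f(x)$ is mapped onto the line $Y = 0$,
and the flow in a neighbourhood of the curve has only an $X$-component, $U = (U_1, 0)$.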
We can write the expression of U 1 , where g ′ = d X g, and the expression of its first derivative: Note that U 1 doesn't depend explicitly on Y . For X = f (x 0 ) = y 0 and Y = 0, the value of U 1 is defined by continuity extension, because f ′ (x 0 ) = 0 and consequently d X g(y 0 ) = ∞. From (10) In the case f ′ (0) = 0, at the same manner the definition of U 1 can be extended at (0, 0) by U 1 (0, 0) = v ∞ . Now we integrate the two members of (9) respect to the X variable from 0 to f (x 0 ) = y 0 : therefore it must be U 1 (0, 0) = U 1 (y 0 , 0). Using previous identities, this equation can be written as Then it must be f ′ (0) = 0, that is c = 2y 0 +ax 3 0 x 0 = 0. Therefore, in the case of inviscid flow, the optimization problem is solved by the cubic, which belongs to the class C, with The cartesian equation is y = −2 y 0 Now we consider the viscous case. The condition to impose in equation (13) is always [P ] y 0 0 = 0, therefore The first member is For expliciting the second member, from (11) we have to find g ′′ = g XX . Apply usual differentiation rules: Therefore we can write For a cubic of the class C the following identities hold: f ′ (0) = . Equation (17) can be written as It is an algebraic equation of the 5th order in the parameter a. But we are interested in physical situations where viscosity is small, that is when the parameter a has values near the previous computed quantity (16): then, in this situation, f ′ (0) is small, f ′′ (0) = 2b = 6 y 0 x 2 0 and f ′′ (x 0 ) = −6 y 0 . We can consider the following Taylor expansion of the quantity (1 + f ′ (0) 2 ) 2 : Then the algebraic equation in the parameter a can be simplified in the following 2nd order one: The admissible solution to this equations is Note that, in the case ν = 0, the solution has the expression (16), as expected. In viscous case, on the contrary as inviscid one, the optimal profile depends on the values of density ρ, viscosity µ, and freestream speed v ∞ . We can see that the main effect of viscosity is its influence on f ′ (0), that is on the angle of attack between flow and profile. In fact, a is an increasing function of µ, therefore f ′ (0) = c = therefore the increasing of flow speed is equivalent to a vanishing of viscosity. Now multiply by y 0 and divide by νx 2 0 both numerator and denominator of the radicand in (26). Introducing the label z 0 = y 0 x 0 and the Reynolds number (recall that σ is the density and ν the dynamic viscosity) where y 0 is the characteristic length of this geometrical system (see [4]), the parameter a can be written in the form This expression separates the dependence of a on geometrical (x 0 , z 0 ) and physical parameters (Re). As expected, for high Reynolds number the influence of viscosity vanishes and the optimal profile tends to the shape of the case µ = 0:
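Several displayed formulas in this section, including the expression that should follow the final colon, were lost in extraction. The LaTeX block below is a hedged reconstruction, consistent with the surviving statements (f(0) = 0 so d = 0, c = (2y₀ + a x₀³)/x₀, and the truncated line beginning "y = −2y₀ …"), of the constrained cubic and of the inviscid limit toward which the viscous optimum tends as Re → ∞; it does not reproduce the paper's equation numbering or the viscous correction itself.

% Reader's reconstruction (not the paper's numbered equations): the cubic through the geometric
% constraints f(0) = 0, f(x_0) = y_0, f'(x_0) = 0, and its inviscid limit.
\begin{align*}
  f(x) &= a x^{3} + b x^{2} + c x, &
  b &= -\frac{2 a x_0^{3} + y_0}{x_0^{2}}, &
  c &= \frac{a x_0^{3} + 2 y_0}{x_0},
\end{align*}
so that the inviscid condition $f'(0) = c = 0$ gives
\[
  a = -\frac{2 y_0}{x_0^{3}}, \qquad
  f(x) = y_0\left[\, 3\Bigl(\frac{x}{x_0}\Bigr)^{2} - 2\Bigl(\frac{x}{x_0}\Bigr)^{3} \right],
\]
which is the profile the viscous solution approaches as $\mathrm{Re} \to \infty$.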
2008-10-09T15:54:32.000Z
2008-10-09T00:00:00.000
{ "year": 2008, "sha1": "9f236c3fb9dbb5e348d6cc429a96fc7e51cc5782", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9f236c3fb9dbb5e348d6cc429a96fc7e51cc5782", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
188383357
pes2o/s2orc
v3-fos-license
Analysis of the Performance of National Foreign Exchange Bank in Indonesia This research aims to compare the performance of the national foreign exchange bank in 2010-2014 based on the ratio of performance and compliance has been determined by Bank Indonesia. The data used in the research was obtained from annual reports published by the website of the national foreign exchange bank. Period of data using the annual reports National foreign exchange bank in the period December 2010-December 2014. After passing the purposive sample ̧ then a decent sample used as many as 22 National foreign exchange bank recorded in Bank Indonesia. The results of this research show that the national foreign exchange bank should get special attention from the research by Bank Indonesia is PT Bank Antardaerah, PT Bank Bumi Arta, PT Bank Ekonomi Raharja, PT Bank Ganesha, PT Bank Hana, PT Bank Himpunan Saudara 1906, PT Bank Bumiputra ICB, PT Bank ICBC Indonesia, PT Bank International Indonesia, PT Bank Maspion Indonesia, PT Bank Mega, PT Bank Mestika Dharma, PT Bank of India Indonesia, PT Bank Sinarmas, PT QNB Bank Kesawan. The ratio values are examined on the bank during the years 2010-2014 has a high rating from a rating assessment criteria and should get special attention by Bank Indonesia. Hypothesis testing in this research using MANOVA Test, the results show the value of 13 ratio in the past 5 years (20102014) shows a noticeable difference in the 22 National foreign exchange bank. Citation: Fauzan K, Kuswanto A (2018) Analysis of the Performance of National Foreign Exchange Bank in Indonesia. J Glob Econ 6: 308. doi: 10.4172/2375-4389.1000308 Introduction Bank has been a partner for societies in order to meet all their financial needs. Bank serve as a place to perform various financial transactions related to such, where securing money, make investments, remittances, make payments or perform billing. Besides, the role of banks greatly affect a country's economic activity. Bank can be regarded as the blood of a country's economy. Therefore, the progress of a bank in one country can also be used as a measure of progress of the country concerned. The more developed a country, the greater the role of banks in controlling the country's economy. In developing countries, an understanding of the new bank piecemeal. Most people only understand the extent of a bank to borrow and borrow money alone. In fact, sometimes some people did not understand the bank as a whole, so the view of the bank often interpreted incorrectly. All this is understandable since the introduction of the banking world as a whole on the community is minimal, so it is not surprising collapse of the banking world is inseparable from his lack of understanding of banking in the country managers in understanding the banking world as a whole. In this modern world, the role of banks in promoting the economy of a country is very large. Almost all sectors related to various financial activities always require the services of a bank. Therefore, at present and in the future we will not be able to escape from the world of banking, if you want to run a financial activity, whether individuals or institutions, whether social or company. Once the importance of the banking world, so there is a presumption that the bank is a "soul" to drive the economy of a country. 
This assumption is certainly not wrong, because the function of banks as financial institutions is vital, for example in terms of the creation of money, circulate money, providing money to support business activities, where securing money, where to invest and other financial services. According to Law No. 10 of 1998 is a bank is a business entity that collects and of the community in the form of savings and channel them to the public in the form of credit and/or other forms in order to improve the standard of living of the people. Bank Indonesia Regulation Number 13/1/PBI/2011 on Rating Bank Chapter 1 Article 2, the Bank is required to maintain and/or improve the health level of the Bank by applying the principles of prudence and risk management in conducting business activities [1]. Bank financial health must be maintained and/or improved so that public confidence in the Bank can be maintained. In addition, the Bank is used as a means to evaluate the condition and problems faced by the Bank and to determine the follow-up to address the weaknesses or issues of the Bank, either in the form of corrective action by the Bank or supervisory action by Bank Indonesia. The financial ratios required to be reported by the Bank is the ratio of performance and compliance. Performance Ratio consists of the Capital Adequacy Ratio (CAR), Productive Assets Troubled and Nonproductive Assets Troubled to Total Productive and Nonproductive Assets, Productive Assets Troubled to Total Productive Assets, Allowance for Impairment Losses financial assets to productive assets, Gross non-performing loans (NPL), Net Non-Performing Loans (NPL), Return Type of data In conducting this research, the data used is secondary data in the form of historical reports ratios financial ratios of each national foreign exchange bank recorded at Bank Indonesia as well as the financial statements in the form of annual report national foreign exchange bank has been recorded in the Bank Indonesia has been published in the period of December 2010-2014. Source of data From Table 1 data needed in this research is secondary data historically, which is obtained from the Annual Report published by the national foreign exchange bank. Period of data using the annual report national foreign exchange bank period December 2010-December 2014 period is deemed sufficient to follow the development of the Bank's performance because it used the time series data and includes the latest period financial statements annual reports published by bank. Samples were taken by purposive sampling, where samples are used if it meets the following criteria: Based on the sampling criteria, then the number of samples used in this research were 22 banks. The bank became the sample can be seen more clearly in Table 2. Result and Discussion The object of this research is the entire National foreign exchange bank recorded at Bank Indonesia during the period December 2010-December 2014, but after the purposive sampling, the samples are fit for use (meet the criteria) in this research was 22 national foreign exchange bank recorded at Bank Indonesia. Data taken from the Annual Report of the bank, especially in Financial Ratio Calculation Report. The data on the dynamics of movement performance ratios and compliance with National foreign exchange bank recorded at Bank Indonesia in the period December 2010-2014, there were 13 ratios were assessed in this research and there is a matrix assessment criteria set by Bank Indonesia. 
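The purposive-sampling step and the assembly of the 13 ratios for the sampled banks over December 2010-December 2014 can be expressed compactly. The sketch below is only an illustration: the file name and column names (bank, year, total_assets and one column per ratio) are hypothetical, and the asset-range criterion it applies is the one listed in the sampling criteria reproduced next.

import pandas as pd

df = pd.read_csv("ratios_2010_2014.csv")     # hypothetical tidy panel, one row per bank-year

# Criterion 1: annual report and ratio calculations available for every year 2010-2014.
complete = df.groupby("bank")["year"].nunique().eq(5)

# Criterion 2 (see the sampling criteria below): total assets within one standard
# deviation of the average total assets across banks.
assets = df.groupby("bank")["total_assets"].mean()
in_range = assets.between(assets.mean() - assets.std(), assets.mean() + assets.std())

kept = complete[complete].index.intersection(in_range[in_range].index)
sample = df[df["bank"].isin(kept)]
print(sample["bank"].nunique())              # 22 banks in the paper's final sample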
National foreign exchange bank that provide annual report data and report the calculation of financial ratios (the period December 2010-December 2014). 28 28 28 28 28 144 National foreign exchange banks which have total assets of the average total assets minus the standard deviation of total assets with the average total assets plus the standard deviation of total assets x ̅ total assets − ∂ total assets < x < x ̅ total assets + ∂ total assets (the period December 2010-December 2014). Based on Rating Criteria Matrix Components of capital, it can be concluded that the 22 National foreign exchange bank very significantly higher and significantly compared to ratios set forth in the provisions of CAR. Rank 1 and rank 2 shows that the Capital Adequacy Fulfilment by 22 National foreign exchange bank have been satisfied with the excellent and good. This indicates that the national foreign exchange bank can finance the total assets to risk capital through the bank itself. In the Figure 1 can be seen that the highest CAR occurred in 2011, namely PT Bank Hana and PT Bank QNB Kesawan is equal to 43.60% and 45.75%, PT Bank Hana maintain CAR at rank 1 of the year 2010-2014, PT Bank QNB Kesawan has increased from 2010-2011 and 2011-2014 decreased but the change of CAR at PT Bank QNB Kesawan exhibited significantly higher and were significant enough of the Capital Adequacy ratio that has been determined. Capital Adequacy Ratio of the lowest occurred in 2011 at PT Bank ICB Bumi Putera amounted to 10.12%, this ratio shows the significant value of the Capital Adequacy Ratio that has been determined by Bank Indonesia [2]. Based Matrix Rating Criteria component of KAP, it can be concluded for the quality of productive assets and non-productive the 22 National foreign exchange bank gain special attention is PT Bank ICB Bumiputera and PT Bank SBI Indonesia because the ratio of productive assets and non-productive at PT Bank ICB Bumiputera achieving a rating of 4, which means the development of the ratio at PT Bank ICB Bumiputera high enough. The ratio in PT Bank SBI Indonesia reached number 4 in 2012 and may imply that the development of the ratio of productive and non-productive assets at PT Bank SBI Indonesia is quite high and should receive special attention by Bank Indonesia. In Figure 2 can be viewed quality of earning assets and non-productive assets to total earning and non-earning assets showed that of PT Bank ICB Bumiputera in 2014 had the highest ratio is 7.33% and the ratio of this particular attention that gets criteria 4 ranked PT Bank ICB Bumiputera in the last 5 years have always exceeded 3%, which means moderate growth rates ranging from 5% to 8%. PT Bank SBI Indonesia has a ratio of 6.44% in 2012 and belonging to rank 4 that shows the development ratio is high enough to require special attention by Bank Indonesia [3]. Based Matrix Rating Criteria component of KAP, it can be concluded for the quality of productive assets 22 National foreign exchange bank gain special attention is PT Bank ICB Bumiputera and PT Bank SBI Indonesia because the ratio of earning assets in PT Bank ICB Bumiputera achieve a rating of 4, which can mean the development of the ratio of productive assets troubled in PT Bank ICB Bumiputera high enough. In the Figure 3 can be seen quality of earning assets to total earning assets showed that of PT Bank ICB Bumiputera in 2014 had the highest ratio is 6.67% and the ratio of this particular attention that gets ranking criteria 4. 
The ratio of earning assets Asel to total earning at PT Bank ICB Bumiputera rank 3 and rank 4 in the year 2010 to 2015, which means the development of the ratio of earning assets is always in the determination of the ratio is quite high and growing [4]. Based on Bank Indonesia Regulation No.14/15/PBI 2012 Chapter V Assets and Reserves Allowance for impairment losses that commercial banks have to set aside reserves for impairment losses amounting to at least 1% (one percent) of Productive Assets classified as current [5]. It can be concluded on the 22 National foreign exchange bank that meet this provision in the year 2010-2014 is PT Bank International Indonesia and PT Bank ICB Bumiputera. In Figure 4 can be seen Ratio of Allowance for Impairment Losses against the ratio Credit of Micro, Small and Medium Enterprises or had nonperforming loans in total loans and or the ratio of Non-Performing Loans Credit of Micro Small Medium Enterprises greater than or equal to 5% (five percent), subject to a reduction in current accounts. In Table 2, it can be seen that the Bank did not comply with the provisions ratios Loans Small Medium Micro Enterprises or had non-performing loans to total loans is PT Bank ICB Bumiputera, PT Bank Mestika Dharma, PT Bank SBI Indonesia. In Figure 5 [6]. In Figure 6 can be seen 22 national foreign exchange bank have Net NPL below 5%, are in the NPL ratio of the Bank Indonesia provisions according to Bank Indonesia Circular Letter No.17/19/DPUM 2015 [6]. Use of Net Non-Performing Loans as an indicator of the health of the bank is less incentive for banks to maintain credit quality. However, it seems that Bank Indonesia would indeed set very loose conditions. Only banks with net non-perfoming 29%. On the other bank indicating the rank 1, 2, and 3 which shows that the profit is very high, higher profits, and profits high enough or ROE ratios ranging from 5% to 12.5% [10]. Based Matrix Rating Criteria Component NIM, it can be seen at 22 National foreign exchange bank that suffered a loss that is at rank 4 is PT Bank Hana in 2011 and 2012 with a value of 1.50% and 1.44%, which means they net interest margin of the Bank Hana low. 21 Bank another indicating the rank 1, 2, and 3 which shows the net interest margin is very high, high, and high enough or NIM ratios ranged from 1.5% to 2%. In the Figure Another bank that showed LDR with a rating of 1, 2, 3 which shows the level of ability to repay the withdrawal of funds by depositors is very good, good, and good enough or LDR ranging from 85% to 100% [12]. [14]. Net Open Position as a whole as referred to in paragraph (1) letter a is the sum of the absolute values of the net difference between assets and liabilities in the balance sheet for each foreign currency the net difference between claims and liabilities which are commitments and contingencies in administrative accounts for each foreign currency are all stated in rupiah ( Figure 13). F test at MANOVA F test used to see whether overall if there is a significant effect of group (independent variable) on a set group of dependent variables. Test F is said to be significant if the p-value 4 that Pillai's Trace Test, Wilks' lambda Test, Hotelling Trace Test, Test Roy's Largest Root Test in Table 3 Multivariate test of less than 0.05, which means significant at the 95% confidence level. F test at MANOVA also shows that there is not one single multivariate analysis, there are four different types of test. Table 3 [15,16]. 
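The four multivariate statistics reported in Table 3 (Pillai's Trace, Wilks' lambda, Hotelling's Trace and Roy's Largest Root) can be reproduced with standard software once the ratio panel is in tidy form. A minimal sketch using statsmodels is given below; the file name and the 13 ratio column names are hypothetical placeholders for the ratios examined in this research, with the bank as the grouping factor.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("ratios_2010_2014.csv")     # hypothetical tidy panel: one row per bank-year
formula = ("car + kap + apt + ckpn + npl_gross + npl_net + roa + roe + nim "
           "+ bopo + ldr + gwm + pdn ~ bank")    # hypothetical names for the 13 ratios
result = MANOVA.from_formula(formula, data=df).mv_test()
print(result)   # reports Pillai's trace, Wilks' lambda, Hotelling-Lawley trace, Roy's greatest root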
H 1 =Variable CAR, productive and non-productive assets, Productive Assets, Allowance for Impairment Losses financial assets to productive assets, NPL Gross, NPL Net, ROA, ROE, NIM, BOPO, LDR, Statutory Reserves main rupiah, Net Open Position as whole together; show the difference in National foreign exchange bank within a period of 5 years (2010-2014 ) [17]. Or it could be said, 13 ratio in the period of 5 years (2010-2014) is different in National foreign exchange bank. Criteria for decision If the number sig. >0.05 then H0 accepted. If the number sig. <0.05 then H0 rejected. Output Consider the Bank row in Table 3, shows that the number of significance was tested by the procedure Pillai 'Trace, Wilk's lambda, Hotelling and Roy's. All procedures showed the significance below 0.05 with the numbers the same significance for each procedure is 0,000. Thus, H 0 rejected [18]. This indicates that the value of 13 ratio in the period of 5 years (2010-2014) showed significant differences at the 22 National foreign exchange bank [19,20]. Conclusion Based on the provision Bank Indonesia regarding the rating of the bank, each ratio observed in this research assessed based on the statutes and the Circular Letter of Bank Indonesia shows that the conclusions of this research are as follows: 2. The ratio in this research has significant differences in the 22 national foreign exchange bank. In Table 3 test of betweensubjects effects showed significant differences in the ratio of 13 tested individually and the results in the Recommendation From the findings of this research concluded that during the period of the research, it was revealed that the National Foreign Exchange Bank recorded in Bank Indonesia is less than optimal in the intermediation function, evidenced by the ratio of national foreign exchange bank in ranking high on the ranking criteria of Bank Indonesia. National foreign exchange banks must maintain stability and compliance performance ratio is at Bank Indonesia, the ranking criteria so that the national foreign exchange banks can have a good performance. The bank's performance is important to remember banks manage funds of public funds. Banks that do not have a good performance, not only harm themselves but others. Bank Indonesia as the regulatory and banking supervisors should be advised on improvements. Improvements will include change management, merge such merger, consolidation, acquisition, or even liquidated (dissolved) whereabouts if it is already severe conditions of the bank. Consideration for this depends on the conditions experienced by the bank concerned. If the bank's condition was so severe, but still has some potential, then you should find a way out with a merger with another bank.
2019-06-13T13:19:42.072Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "65e34f3e315f18096604ed867a65f3a6197b9178", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2375-4389.1000308", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c4507401e8d59ee019a06656370e9f4e7814ffb1", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Business" ] }
88519378
pes2o/s2orc
v3-fos-license
Assessing Relative Volatility/Intermittency/Energy Dissipation We introduce the notion of relative volatility/intermittency and demonstrate how relative volatility statistics can be used to estimate consistently the temporal variation of volatility/intermittency when the data of interest are generated by a non-semimartingale, or a Brownian semistationary process in particular. This estimation method is motivated by the assessment of relative energy dissipation in empirical data of turbulence, but it is also applicable in other areas. We develop a probabilistic asymptotic theory for realised relative power variations of Brownian semistationary processes, and introduce inference methods based on the theory. We also discuss how to extend the asymptotic theory to other classes of processes exhibiting stochastic volatility/intermittency. As an empirical application, we study relative energy dissipation in data of atmospheric turbulence. Introduction The concept of (stochastic) volatility is of central importance in many fields of science. In some of these the term intermittency is used instead of volatility. Thus volatility/intermittency has a central role in mathematical finance and financial econometrics (Barndorff-Nielsen and Shephard, 2010), in turbulence, rain and cloud studies (Lovejoy and Schertzer, 2006;Waymire, 2006) and other aspects of environmental science (Pichard and Abbott, 2012), in relation to nanoscale emitters (Frantsuzov et al., 2013), magnetohydrodynamics (Mininni and Pouquet, 2009), and to liquid mixtures of chemicals (Sreenivasan, 2004), and last but not least in the physics of fusion plasmas (Carbone et al., 2000). In turbulence the key concept of energy dissipation is subsumed under that of intermittency. For discussions of intermittency and energy dissipation in turbulence see Frisch (1995, Chapter 6) and Tsinober (2009, Chapter 7) (cf. also the illustration on p. 20 of the latter reference). Speaking generally, volatility/intermittency is taken to mean that the phenomenon under study exhibits more variation than expected; that is, more than the most basic type of random influence (often thought of as Gaussian) envisaged. Hence volatility/intermittency is a relative concept, and its meaning depends on the particular setting under investigation. Once that meaning is clarified the question is how to assess the volatility/intermittency empirically and then to describe it in stochastic terms and incorporate it in a suitable probabilistic model. Such 'additional' random fluctuations generally vary, in time and/or in space, in regard to intensity (activity rate and duration) and amplitude. Typically the volatility/intermittency may be further classified into continuous and discrete (i.e., jumps) elements, and long and short term effects. In finance the investigation of volatility is well developed and many of the procedures of probabilistic and statistical analysis applied are similar to those of relevance in turbulence, for instance in regard to multipower variations, particularly quadratic and bipower variations and variation ratios. Other important issues concern the modelling of propagating stochastic volatility/intermittency fields and the question of predictability of volatility/intermittency. 
This paper introduces a concept of realised relative volatility/intermittency and hence of realised relative energy dissipation, the ultimate purpose of which is to assess the relative volatility/intermittency or energy dissipation in arbitrary subregions of a region C of space-time relative to the total volatility/intermittency/energy in C. The concept of realised relative volatility/intermittency also paves the way for practical applications of some recent advances in the asymptotic theory of power variations of non-semimartingales (see, e.g., Corcuera et al. (2006) and Barndorff-Nielsen et al. (2011) to volatility/intermittency measurements and inference with empirical data. In the non-semimartingale setting, realised power variations need to be scaled properly, in a way that depends on the smoothness of the process through an unknown parameter, to ensure convergence. This makes inference, in particular, difficult. Realised relative power variations, however, are self-scaling and, moreover, admit a statistically feasible central limit theorem, which can be used, e.g., to construct confidence intervals for the realised relative volatility/intermittency. (Self-scaling statistics have also been recently used by Podolskij and Wasmuth (2013) to construct a goodness-of-fit test for the volatility coefficient of a fractional diffusion.) We start the further discussion by describing, in Section 2, how energy dissipation in turbulence is defined and traditionally assessed. This is followed by a brief outline of some results from the theory of Brownian semistationary (BSS) processes that are pertinent for the main topic of the present paper. The definition of realised relative volatility/intermittency/energy dissipation is given in Section 3. For concreteness and because of its particular importance we focus on realised relative energy dissipation. Asymptotic probabilistic properties-consistency and a functional central limit theorem-for realised relative power variations are derived in Section 4. Applications to data on turbulence and energy prices are presented in Section 5. Section 6 concludes. Energy dissipation in turbulence In a purely spatial setting the energy dissipation of a homogenous and isotropic turbulent field is (up to an ignorable constant involving viscosity) where y i denotes the velocity at the spatial position x ∈ R 3 . The coarse grained energy dissipation over a region C in R 3 is then given by Furthermore, if only measurements of the velocity component in the main direction x 1 of the flow are considered one defines the surrogate energy dissipation as By Taylor's frozen field hypothesis (Taylor, 1938), this may then be reinterpreted as the timewise surrogate energy dissipation which would be the relevant quantity in case the measurements were of the same, main, component of the velocity but now as a function of time rather than of position. Associated to this is the coarse grained energy dissipation corresponding to the interval [t, t + u] and given by Supposing that the velocity y t has been observed over the interval [0, T ] at times 0, δ, . . . , T /δ δ, when it comes to estimating ε + (t), as given by (2), this is traditionally done by taking the normalised realised quadratic variation Correspondingly, the coarse grained energy dissipation over [0, T ] is estimated bŷ The definitions (1) and (2) of course assume that the sample path y is differentiable. 
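In discrete form, the classical estimate of the coarse-grained surrogate energy dissipation over [t, t + u] reduces to a normalised sum of squared increments. The sketch below assumes the normalisation 1/(u * delta), which is the natural Riemann-sum reading of (1/u) times the integral of (dy/ds)^2 over [t, t + u] for differentiable sample paths, and it ignores the viscosity constant mentioned above.

import numpy as np

def coarse_grained_dissipation(y, delta, t, u):
    # Classical surrogate estimate over [t, t + u] from samples y[j] = y(j * delta):
    # sum of squared increments on the window divided by u * delta (normalisation assumed).
    y = np.asarray(y, dtype=float)
    j = np.arange(1, len(y))
    mask = (j * delta > t) & (j * delta <= t + u)
    increments = y[1:][mask] - y[:-1][mask]
    return np.sum(increments**2) / (u * delta)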
On the other hand, going back to Kolmogorov, it is broadly recognised that turbulence can only be comprehensibly understood by viewing it as a random phenomenon-see Kolmogorov (1941aKolmogorov ( ,b,c, 1962 and, for a recent overview, Tsinober (2009). Accordingly, y should be viewed as a stochastic process, henceforth denoted Y , and it is not realistic to assume that its sample paths are differentiable. Thus a broader setting for the analysis of the energy dissipation in Y is called for, and in the following we propose and discuss such a setting. A Brownian semistationary (BSS) process, as introduced by Barndorff-Nielsen and Schmiegel (2009), may be used as a model for the timewise development of the velocity at a fixed point in space and in the main direction of the flow in a homogeneous and isotropic turbulent field. For focus and illustation we shall consider cases where Y is a stationary BSS process, where g and q are deterministic kernel functions, B is Brownian motion and σ is a stationary process expressing the volatility/intermittency of the process. In that context the gamma form of the kernel g has a special role. In particular, if ν = 5 6 and σ is square integrable, then the autocorrelation function of Y is identical to von Kármán's autocorrelation function (von Kármán, 1948) for ideal turbulence. In relation to the BSS process (3) with gamma kernel (4) a central question is that of determining σ 2 from Y . In case the process is a semimartingale the answer is given by the quadratic variation of Y ; in fact, then is the accumulated quadratic volatility over the interval [0, t]. However, in the cases of most interest for turbulence, that is ν ∈ ( 1 2 , 1) ∪ (1, 3 2 ) the process Y is not a semimartingale and in order to determine σ 2+ by a limiting procedure from the realised quadratic variation the latter has to be normalised by a factor depending on δ and ν. Specifically, as shown by Barndorff-Nielsen and Schmiegel (2009), this factor is δc (δ) −2 where is defined using the Gaussian core Using this result for estimation of σ 2+ t requires either that ν is known or that a sufficiently accurate estimate of ν can be found, The latter question has led to detailed studies of the application of power and multipower variations to estimation of ν ( Barndorff-Nielsen et al., 2011Corcuera et al., 2013). Realised relative V/I/E Supposing again that the velocity Y t has been observed at times 0, δ, . . . , T /δ δ, we are interested in the relative energy dissipation of Y over any subinterval [t, t + u] where ε + (T ) is the energy dissipation in [0, T ]. Within the turbulence literature, this definition of the relative energy dissipation is strongly related to the definition of a multiplier in the cascade picture of the transport of energy from large to small scales (see Cleve et al. (2008) and references therein). We now introduce the concept of realised relative energy dissipation. Specifically, whether Y t is deterministic and differentiable or an arbitrary stochastic process we define the realised relative energy dissipation over the subinterval [t, t + u] as is the realised relative quadratic variation of Y . We note that the quantity R + δ (t, t + u) is entirely empirically based. In the "classical" case of turbulence, where Y t is differentiable, as δ → 0 we have and hence, as δ → 0, i.e., the limit equals the relative energy dissipation (8). Now suppose that Y is a stationary BSS process (3) with gamma kernel (4) and ν > 1 2 , as it needed for the stochastic integral to exist. 
Then, if ν > 3 2 , Y has continuous differentiable sample paths, i.e. we are essentially in the "classical" situation. If ν = 1 the process Y is a semimartingale and the realised quadratic variation [Y δ ] converges to the quadratic variation [Y ], that is where σ 2+ t is the accumulated quadratic volatility/intermittency (5). Consequently, for the realised relative energy dissipation we have i.e., the limit is the relative accumulated squared volatility/intermittency. Finally, suppose that ν ∈ ( 1 2 , 1) ∪ (1, 3 2 ), i.e., we are in the non-semimartingale case and the sample paths are Hölder continuous of order ν − 1/2. Then, subject to a mild condition on q (see Appendix C for a result covering the case where q is of the gamma form), we have again, as δ → 0, that . This follows directly from limiting results of Barndorff-Nielsen and Schmiegel (2009) and Barndorff-Nielsen et al. (2011). In view of these results, in the turbulence context we view the limit of R + δ as the relative energy dissipation. Remark 1. As mentioned in Section 2, use of the original assessment procedure (7) requires determination of the degree of freedom/smoothness parameter ν. The realised relative quadratic variation [Y δ ] t,T is entirely empirically determined, self-scaling, and its consistency does not rely on inference on ν. Asymptotic theory of realised relative power variations We develop now a probabilistic asymptotic theory for realised relative power variations, going slightly beyond the earlier discussion of quadratic variations and energy dissipation. To highlight the robustness of realised relative power variations to model misspecification, we consider both a BSS process and a Brownian semimartingale as the underlying process. While we limit the discussion to power variations for the sake of simpler exposition, our results can be easily extended to multipower variations. Probabilistic setup and consistency Let us consider a stochastic process Y = {Y t } t≥0 , defined on a complete filtered probability space (Ω, F, (F t ) t∈R , P ) via the decomposition where A = {A t } t≥0 is a process that allows for skewness in the distribution of Y t . The process A is assumed to fulfill one of two negligibility conditions, viz. (10) and (13) given below (Appendix C presents more concrete criteria that can be used to check these conditions). Given a standard Brownian motion B = {B t } t∈R and a càglàd process σ = {σ t } t∈R , adapted to the natural filtration of B, we allow for the following two specifications of the process X = {X t } t≥0 . (I) X is a local Brownian martingale given by (II) X is a BSS process given by for all t ≥ 0. On the one hand, by choosing A to be absolutely continuous in the case (I), we see that this framework includes rather general Brownian semimartingales. On the other hand, in the case (II) the process Y is typically a non-semimartingale, as discussed above. Remark 2. In addition to the setting (II), asymptotic theory for relative realised power variations could also be developed in the context of the fractional processes studied by Corcuera et al. (2006). Then, we would define where {Z H t } t≥0 is a fractional Brownian motion with Hurst parameter H ∈ 1 2 , 3 4 and σ satisfies certain path-regularity conditions (Corcuera et al., 2006, pp. 716, 723). The proofs would be analogous to the case (II), but take as an input the asymptotic results of Corcuera et al. (2006), instead of those of Barndorff-Nielsen and Schmiegel (2009) and Barndorff-Nielsen et al. (2011). 
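For the BSS specification in case (II), a discretised moving-average simulation makes the self-scaling nature of the relative statistic concrete. The sketch below uses a gamma-type kernel g(t) = t^(nu-1) exp(-lam t), in line with the gamma form discussed above; the decay rate lam, the grid parameters and the toy volatility path are illustrative assumptions rather than the paper's calibration, and the realised relative quadratic variation at the end requires no knowledge of nu.

import numpy as np

rng = np.random.default_rng(0)
nu, lam = 5.0 / 6.0, 1.0             # nu = 5/6 as in the ideal-turbulence case; lam assumed
delta, n, burn = 1e-3, 20000, 5000   # grid step, sample size, kernel truncation (illustrative)

def g(t):
    # gamma-type kernel g(t) = t**(nu-1) * exp(-lam*t), evaluated for t > 0 only
    return t ** (nu - 1.0) * np.exp(-lam * t)

dB = rng.normal(0.0, np.sqrt(delta), n + burn)                       # Brownian increments
sigma = 1.0 + 0.5 * np.sin(np.linspace(0.0, 4.0 * np.pi, n + burn))  # toy intermittency path
w = g(np.arange(1, burn + 1) * delta)                                # truncated kernel weights
Y = np.convolve(sigma * dB, w, mode="full")[burn:burn + n]           # discretised moving average

def relative_qv(Y, i0, i1):
    # Realised relative quadratic variation of the stretch (i0, i1] against the whole sample;
    # self-scaling, so no normalisation by delta * c(delta)**-2 (and hence no nu) is needed.
    sq = np.diff(Y) ** 2
    return sq[i0:i1].sum() / sq.sum()

print(relative_qv(Y, 0, n // 4), relative_qv(Y, n // 2, n))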
Recall that for any p > 0, the p-th order realised power variation of the process Y with lag δ > 0 is given by The power variations [A δ ] (p) and [X δ ] (p) are, of course, defined analogously. Similarly to earlier literature on power and multipower variations of BSS processes ( Barndorff-Nielsen et al., 2011Corcuera et al., 2013) we assume that the kernel function g behaves like t ν−1 near zero for some ν ∈ ( 1 2 , 1) ∪ (1, 3 2 ), or more precisely that where L g is slowly varying at zero, which implies that X is not a semimartingale in the case (II). Then, under some further regularity conditions on g (Corcuera et al., , pp. 2555(Corcuera et al., -2556, which include the assumption that c(δ) = δ ν−1/2 L c (δ) with L c slowly varying at zero, and which are satisfied for instance when g is the gamma kernel (4), we have where c(δ) is given by (6), σ p+ t = t 0 |σ s | p ds, and m p = E{|ξ| p } for ξ ∼ N (0, 1), by Theorem 3.1 of Corcuera et al. (2013). In the case (I), setting c(δ) = √ δ, the convergence (9) holds without any additional assumptions, e.g., by Theorem 2.2 of Barndorff-Nielsen et al. (2006). Additionally, note that the convergence (9) holds also when X is replaced For fixed time horizon T > 0, we introduce the p-th order realised relative power variation process over [0, T ] by The relative power variation has the following evident consistency property. Central limit theorem Recall first that random elements U 1 , U 2 , . . . in some metric space U converge stably (in law) to a random element U in U, defined on an extension (Ω , F , P ) of the underlying probability space (Ω, F, P ), if for any bounded, continuous function f : U → R and bounded random variable V on (Ω, F, P ). We denote stable convergence by st − →. Remark 6. Stable convergence, introduced by Rényi (1963), is stronger than ordinary convergence in law and weaker than convergence in probability. It is key that the limiting random element U is defined on an extension of the original probability space, because in the case where U is F-measurable, the convergence U n st − → U is in fact equivalent to U n p − → U (Podolskij and Vetter, 2010, Lemma 1). Remark 7. The usefulness of stable convergence can be illustrated by the following example that is pertinent to the asymptotic results below. Suppose that U n st − → θξ in R, where ξ ∼ N (0, 1) and θ is a positive random variable independent of ξ. In other words, U n follows asymptotically a mixed Gaussian law with mean zero and conditional variance θ 2 . Ifθ n is a positive, consistent estimator of θ, i.e.,θ n p → θ, then the stable convergence of U n allows us to deduce that U n /θ n d − → N (0, 1). We refer to Rényi (1963), Aldous and Eagleson (1978), Jacod and Shiryaev (2003, pp. 512-518), and Podolskij and Vetter (2010, pp. 332-334) for more information on stable convergence. Let us write D([0, T ]) for the space of càdlàg functions from [0, T ] to R, endowed with the usual Skorohod metric (Jacod and Shiryaev, 2003, Chapter V). (Recall, however, that convergence to a continuous function in this metric is equivalent to uniform convergence.) Under slightly strengthened assumptions, the realised power variation of X satisfies a stable central limit theorem of the form where λ X,p > 0 is a deterministic constant and {W t } t∈[0,T ] a standard Brownian motion, independent of F, defined on an extension of (Ω, F, P ). 
Indeed, in the case (I) the convergence (11) holds with λ X,p = m 2p − m 2 p , provided that σ is an Itô semimartingale ( Barndorff-Nielsen et al., 2006, Theorem 2.4). Moreover, we have (11) also in the case (II) if we make the restriction ν ∈ ( 1 2 , 1), the situation of most interest concerning turbulence, and assume that σ satisfies a Hölder condition in expectation (Corcuera et al., 2013, Theorem 3.2). Then, in contrast to the semimartingale case, where λ p : ( 1 2 , 1) → (0, ∞) is a continuous function defined using the correlation structure of fractional Brownian noise (see Appendix B for the definition and proof of continuity). Analogously to (10), the convergence (11) The realised relative power variation of Y satisfies the following central limit theorem, which is an immediate consequence of Lemma 14 in Appendix A. Remark 9. In the case (II) the restriction ν ∈ ( 1 2 , 1) can be relaxed when one considers power variations defined using second or higher order differences of Y ( Corcuera et al., 2013). Then, (11) holds for all ν ∈ ( 1 2 , 1)∪(1, 3 2 ). As Theorems 3 and 8 do not depend on the type of differences used in the power variation, they obviously apply also in this case. Conditional on F, the limiting process on the right-hand side of (14) is a Gaussian bridge. In particular, its (unconditional) marginal law at time t ∈ [0, T ] is mixed Gaussian with mean zero and conditional variance Note also that when σ is constant, the limiting process reduces to a Brownian bridge. In effect, the result is analogous to Donsker's theorem for empirical cumulative distribution functions (see, e.g., Kosorok (2008) for an overview of such results). Clearly, we may estimate the asymptotic variance (15) consistently using In the case (II) the estimator V t (δ) is not feasible as such since ν appears as a nuisance parameter in λ X,p = λ p (ν). However, we may replace λ X,p with λ p (ν δ ), whereν δ is any consistent estimator of ν based on the observations Y 0 , Y δ , . . . , Y T /δ δ (they have been developed by Barndorff-Nielsen et al. (2011; Corcuera et al. (2013)). Using the properties of stable convergence, we obtain the following feasible central limit theorem. Inference on realised relative V/I/E Proposition 10 can be used to construct approximative, pointwise confidence intervals for the relative volatility/intermittency σ p+ t,T . Since, by construction, σ p+ t,T assumes values in [0, 1], it is reasonable to constrain the confidence interval to be a subset of [0, 1]. Thus, we define for any a ∈ (0, 1) the corresponding (1 − a) · 100% confidence interval as where z 1−a/2 > 0 is the 1 − a 2 -quantile of the standard Gaussian distribution. Another application of the central limit theory is a non-parametric homoskedasticity test that is similar in nature to the classical Kolmogorov-Smirnov and Cramér-von Mises goodness-of-fit tests for empirical distribution functions. This extends the homoskedasticity tests proposed by Dette et al. (2006) and Dette and Podolskij (2008) to a nonsemimartingale setting. Another extension of these tests to non-semimartingales, namely fractional diffusions, is given by Podolskij and Wasmuth (2013). The approach is also similar to the cumulative sum of squares test (Brown et al., 1975) of structural breaks studied in time series analysis. To formulate our test, we introduce the hypotheses H 0 : σ t = σ 0 for all t ∈ [0, T ], H 1 : σ t = σ 0 for some t ∈ [0, T ]. 
As mentioned above, Theorem 8 implies that under H 0 , The distance between the realised relative power variation and the linear function can be measured using various norms and metrics. Here, we consider the typical sup and L 2 norms that correspond to the Kolmogorov-Smirnov and Cramér-von Mises test statistics, respectively. More precisely, we define the statistics Note that also in (17), we may use λ p (ν δ ) instead of λ X,p in the case (II). By (16) and the scaling properties of Brownian motion, we obtain under H 0 the classical Kolmogorov-Smirnov and Cramér-von Mises limiting distributions for our statistics, namely, Remark 11. Well-known series expansions for the cumulative distribution functions of the limiting functionals in (18) can be found, e.g., in Lehmann and Romano (2005, p. 585) and Anderson and Darling (1952, p. 202). Remark 12. The finite-sample performance of the test statistics S KS δ and S CvM δ is explored in a separate paper (Bennedsen et al., 2014). Brookhaven turbulence data We apply the methodology developed above first to data of turbulence. The data consist of a time series of the main component of a turbulent velocity vector, measured at a fixed position in the atmospheric boundary layer using a hotwire anemometer, during an approximately 66 minutes long observation period at sampling frequency of 5 kHz (i.e. 5000 observations per second). The measurements were made at Brookhaven National Laboratory (Long Island, NY), and a comprehensive account of the data has been given by Drhuva (2000). As a first illustration, we study the observations up to time horizon T = 800 milliseconds. Using the smallest possible lag, δ = 0.2 ms, this amounts to 4000 observations. Figure 1(a) displays the squared increments corresponding to these observations. As a comparison, the same time horizon is captured in Figure 1(b) but with lag δ = 0.8 ms. Figure 1(c) compares the associated accumulated realised relative energy dissipations/quadratic variations. The graphs for these two lags show very similar behaviour, exhibiting how the total time interval is divided into a sequence of intervals over which the slope of the energy dissipation is roughly constant. On the other hand, the amplitudes of the volatility/intermittency are of the same order in the whole observation interval. To be able to draw inference on relative volatility/intermittency using the data, we need to address two issues. Firstly, for this time series, the lags δ = 0.2 ms and δ = 0.8 ms are below the so-called inertial range of turbulence, where a BSS process with a gamma kernel, a model of ideal turbulence, provides an accurate description of the data-see Corcuera et al. (2013), where the same data are analysed. Secondly, the data were digitised using a 12-bit analog-to-digital converter. Thus, the measurements can assume at most 2 12 = 4096 different values, and due to the resulting discretisation error, a non-negligible amount of the increments are in fact equal to zero (roughly 20 % of all increments). These discretisation errors are bound to bias the estimation of the parameter ν, which is needed for the inference methods. We mitigate these issues by subsampling, namely, we apply the inference methods using a considerably longer lag, δ = 80 ms, which is near the lower bound of the inertial range for this time series Figure 1). We divide the time series into 66 non-overlapping one-minute-long subperiods, testing the constancy of σ, i.e., the null hypothesis H 0 , within each subperiod. 
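The subperiod analysis just described amounts to computing, within each one-minute window, the accumulated realised relative energy dissipation at lag delta = 80 ms and measuring its deviation from the straight line t/T. The sketch below shows that core computation; the fully studentised statistics in (17) additionally involve the factor lambda_p(nu_hat) and the variance estimator, which are omitted here for brevity, so the distance returned is only the unscaled ingredient of the Kolmogorov-Smirnov-type statistic.

import numpy as np

def relative_energy_dissipation_path(y):
    # Accumulated realised relative energy dissipation from one subperiod of velocity
    # samples y, already subsampled to the desired lag delta.
    sq = np.diff(np.asarray(y, dtype=float)) ** 2
    return np.cumsum(sq) / np.sum(sq)

def deviation_from_linearity(y):
    # Unstudentised sup-distance between the accumulated relative energy dissipation
    # and the line t/T; the formal statistic in (17) rescales this quantity.
    r = relative_energy_dissipation_path(y)
    line = np.arange(1, len(r) + 1) / len(r)
    return np.max(np.abs(r - line))

# subperiods: a list of 66 arrays, each one minute of data at lag 80 ms (hypothetical input)
# scores = [deviation_from_linearity(y) for y in subperiods]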
Figure 2(a) displays the estimates of ν for each subperiod using the change-of-frequency method ( Corcuera et al., 2013). All of the estimates belong to the interval ( 1 2 , 1) and they are scattered around the value ν = 5 6 predicted by Kolmogorov's (K41) scaling law of turbulence (Kolmogorov, 1941a,b). The homoskedasticity test statistics, for p = 2, and their critical values, derived using (18), in Figure 2 To understand what kind of intermittency the tests are detecting in the data, we look into two extremal cases, the 27th and 40th subperiods (the red bars in Figure 2(b) and (c)). To this end, we plot the realised relative energy dissipations, with δ = 80 ms, during the 27th and 40th subperiods in Figure 3(a) and (b), respectively. We also include the pointwise confidence intervals, the p-values of the homoskedasticity tests, and as a reference, the realised relative energy dissipations using the smallest possible lag δ = 0.2 ms. While the realised relative energy dissipations exhibit a slight discrepancy between the lags δ = 80 ms and δ = 0.2 ms, it is clear that 40th subperiod indeed contains significant intermittency, whereas the during the 27th subperiod, the (accumulated) realised relative energy dissipation grows nearly linearly. EEX electricity spot prices We also briefly exemplify the concept of relative volatility using electricity spot price data from the European Energy Exchange (EEX). Specifically, we consider deseasonalised daily Phelix peak load data (that is, the daily averages of the hourly spot prices of electricity delivered between 8 am and 8 pm) with delivery days ranging from January 1, 2002 to October 21, 2008. Weekends are not included in the peak load data, and in total we have 1775 observations. This time series was studied in the paper by and the deseasonalisation method is explained therein. As usual, we consider here logarithmic prices. Figure 1(d) shows the squared increments up to the total time horizon T = 1775 days with lag δ = 1 day. The same time horizon is captured in Figure 1(e) but with a resolution δ = 4 days. Figure 1(f) compares the corresponding accumulated realised relative quadratic variations. The results for these two lags do not show the same similarity as with the turbulence data (Figure 1(a-b)). Judging by eye, we observe that the intensity of the volatility is changing with lag δ. This lag dependence is also observed in the amplitudes, again in contrast to the figures on the left hand side. (However, more quantitative investigation of such amplitude/density arguments is outside the scope of the present paper.) The dependence of the estimation results on the lag δ is, at least partly, explained by the relatively low sampling frequency of the data. With δ = 1 day, the increments are dominated by a few exceptional observations (which may correspond Remark 13. It was shown by that by suitably choosing both g and q to be of gamma type it is possible to construct a BSS process with normal inverse Gaussian one-dimensional marginal law, which corresponds closely to the empirics for the time series of log spot prices considered. Moreover, the estimated value of the smoothness parameter ν for this time series falls in the interval ( 1 2 , 1). Conclusion The definition of realised relative energy dissipation introduced in this paper applies to any continuous time, real valued process Y . 
An extension to vector valued processes is an Figure 3: Brookhaven turbulence data: Realised relative energy dissipation during the 27th (a) and 40th (b) subperiods with δ = 80 ms and δ = 0.2 ms. Additionally, p-values for the hypothesis H 0 , estimates of ν using the change-of-frequency method, and 95% pointwise confidence intervals, all using the lag δ = 80 ms. issue of interest, in particular in relation to the definition (1) of the energy dissipation in three-dimensional turbulent fields. The extent to which the realised volatility/intermittency/energy is an empirical counterpart of what can be conceived theoretically as relative volatility/intermittency/energy depends on the model under consideration. As discussed above this is the case, in particular, both under Brownian semimartingales, as these occur widely in mathematical finance and financial econometrics, and under stationary BSS processes. In the timewise stationary setting, the realised relative energy dissipation is a parameter free statistic which provides estimates of the relative energy in subintervals of the full observation range, by relating the quadratic variation over each subinterval to the total realised energy for the entire range. It provides robust estimates of the relative accumulated energy as this develops over time and is intimately connected to the concepts of volatility/intermittency and energy dissipation as these occur in statistical turbulence and in finance. This was illustrated in connection to the class of BSS processes with g of the gamma form. Lemma 15. The function λ p is continuous. C Sufficient conditions for the negligibility of the skewness term Suppose first that the process A = {A t } t≥0 is given by where µ ∈ R is a constant and the process {a t } t≥0 is measurable and locally bounded. Then we can establish rather simple conditions for its negligibility in the asymptotic results for power variations. By Jensen's inequality, we have for any p ≥ 1, s ≥ 0, and t ≥ 0, where C a > 0 is a random variable that depends locally on the path of a. Thus, the condition (10) holds whenever δ c(δ) → 0, and (13) holds if δ p−1/2 c(δ) p → 0. Suppose now, instead, that A follows where q is the gamma kernel q(t) = c t η−1 e −ρt for some c > 0, η > 0, and ρ > 0. We assume that the process {a t } t∈R is measurable, locally bounded, and satisfies for any t ≥ 0, which is true, e.g., when the auxiliary process u −∞ q(u − s)|a s |ds, u ≥ 0, has a càdlàg or continuous modification. Next, we want to show that In the case η ≥ 2 the derivative q is bounded and (28) is immediate. Suppose that η < 2. Then, |q (t)| ≤ Ct η−2 on any finite interval, where C > 0 depends on the interval. Using the mean value theorem, we obtain which implies (28). To bound |I 4 δ |, note that, by (27), |q (t)| ≤ C q(t) for all t ≥ −s * , where C > 0 is a constant. For any s < s * , we have (j − 1)δ − s > η−1 ρ . Thus, by the mean value theorem, q(jδ − s) − q((j − 1)δ − s) ≤ C q (j − 1)δ − s δ and, consequently, Collecting the estimates, we have Checking the sufficiency of the asserted conditions is now a straightforward task.
2015-09-15T19:36:09.000Z
2013-04-24T00:00:00.000
{ "year": 2013, "sha1": "d6ad50b0c2c102a35551e2b755dc48004a144048", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1214/14-ejs942", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "7b895ab8cbe8b5c4f45927dfe3fa179713092613", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
227739257
pes2o/s2orc
v3-fos-license
Using multiple ASR hypotheses to boost i18n NLU performance Current voice assistants typically use the best hypothesis yielded by their Automatic Speech Recognition (ASR) module as input to their Natural Language Understanding (NLU) module, thereby losing helpful information that might be stored in lower-ranked ASR hypotheses. We explore the change in performance of NLU associated tasks when utilizing five-best ASR hypotheses when compared to status quo for two language datasets, German and Portuguese. To harvest information from the ASR five-best, we leverage extractive summarization and joint extractive-abstractive summarization models for Domain Classification (DC) experiments while using a sequence-to-sequence model with a pointer generator network for Intent Classification (IC) and Named Entity Recognition (NER) multi-task experiments. For the DC full test set, we observe significant improvements of up to 7.2% and 15.5% in micro-averaged F1 scores, for German and Portuguese, respectively. In cases where the best ASR hypothesis was not an exact match to the transcribed utterance (mismatched test set), we see improvements of up to 6.7% and 8.8% micro-averaged F1 scores, for German and Portuguese, respectively. For IC and NER multi-task experiments, when evaluating on the mismatched test set, we see improvements across all domains in German and in 17 out of 19 domains in Portuguese (improvements based on change in SeMER scores). Our results suggest that the use of multiple ASR hypotheses, as opposed to one, can lead to significant performance improvements in the DC task for these non-English datasets. In addition, it could lead to significant improvement in the performance of IC and NER tasks in cases where the ASR model makes mistakes. Introduction Recent years have seen a dramatic increase in the adoption of intelligent voice assistants such as Amazon Alexa, Apple Siri and Google Assistant. As use cases expand, these assistants are expected to process ever more complex user utterances and perform many different tasks. Some of the key components that enable the performance of these tasks are housed within the spoken language understanding (SLU) system; one being the Automatic Speech Recognition (ASR) module which transcribes the users' vocal sound wave into text and another being the Natural Language Understanding module which performs a variety of downstream tasks that help identify the actions requested by the user (Ram et al., 2018;Gao et al., 2018). These modules perform in tandem and are crucial for the successful processing of user utterances. Typical ASR models generate multiple hypotheses for an input audio signal, that are ranked by their confidence scores (Li et al., 2020). However, only the top ranked hypothesis (referred to hereafter as the ASR 1-best) is usually processed by the NLU module for downstream tasks (Li et al., 2020). Three major tasks performed by the NLU module are Domain Classification (DC), Intent Classification (IC) and Named Entity Recognition (NER). DC predicts the domain relevant to the utterance (Weather, Shopping, Music etc.) and IC extracts actions requested by users (some examples are, buy an item, play a song or set a reminder). NER is focused on identifying and extracting entities from user requests (names, dates, locations, etc.). Current NLU models usually take in the ASR 1-best hypothesis as input to perform NLU recognition (Li et al., 2020). 
However, the highest-scored ASR hypothesis is not always correct and, at times, can lead to downstream failures including incorrect NLU hypotheses. These errors can be mitigated by uti-lizing multiple top-ranked ASR hypotheses (ASR n-best hypotheses) in NLU modeling, which have a higher likelihood of containing the correct hypothesis. Even in the case of all n-best hypotheses being incorrect, the NLU models may be capable of recovering the correct hypothesis by integrating the information contained within the n-best hypotheses. Hence, the use of multiple hypotheses should help obtain firmer predictions from ASR modules for their corresponding NLU module and result in improved performance. In this study we focus on two non-English internal datasets, German and Portuguese, and evaluate the use of ASR n-best hypotheses for improving NLU modeling within these contexts. Given that the ASR models we use in this experiment produce a maximum of five (or less) hypotheses per input utterance, we utilize all available hypotheses (referred to hereafter as the ASR 5-best) for our work. We leverage two BERT-based summarization models (Devlin et al., 2019;Liu, 2019;Liu and Lapata, 2019) and a sequence-to-sequence model with a pointer generator network (Rongali et al., 2020) to extract the information from the ASR 5-best hypotheses. We show that using multiple hypotheses, as opposed to just one, can significantly improve the overall performance of DC, and the performance of IC and NER in cases where the ASR model makes mistakes. We describe relevant work in Section 2 and present a description of our data set and opportunity cost analysis in Section 3. In Section 4 we describe the architecture of our models. In Section 5, we present our experimental results followed by our conclusions in Section 6. Related work Using deep learning models for summarization has been an active area of research in the recent past. Two popular types in current literature have been extractive summarization and abstractive summarization. Extractive summarization systems summarize by identifying and concatenating the most important sentences in a document whereas abstractive summarization systems conceptualize the task as a sequence-to-sequence problem and generate the summary by paraphrasing sections of the source document. Extensive work has been done on extractive summarization (Liu, 2019;Cheng and Lapata, 2016;Nallapati et al., 2016a;Narayan et al., 2018b;Dong et al., 2018;Zhang et al., 2018; and abstractive summa-rization (Narayan et al., 2018a;See et al., 2017;Rush et al., 2015;Nallapati et al., 2016b) used in isolation. Furthermore, studies have shown improvement in summary quality when extractive and abstractive objectives have been used in combination (Liu and Lapata, 2019;Gehrmann et al., 2018;Li et al., 2018). Liu (2019) proposed a simple, yet powerful, variant of BERT for extractive summarization in which they modified the input sequence of BERT from its original two sentences to multiple sentences. They used multiple classification tokens ([CLS]) combined with interval segment embeddings to distinguish multiple sentences within a document. They appended several summarization specific layers (either a simple classifier, a transformer or an LSTM) on top of the BERT outputs to capture document level features relevant for extracting summaries. 
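In practice, the multi-sentence input of Liu (2019) is built by prepending a [CLS] token to every unit, here every ASR hypothesis, and alternating the segment ids between adjacent units. The sketch below shows one way to assemble such an input with a standard BERT tokenizer; it is an illustration of the scheme just described, not the authors' implementation, and the sample hypotheses are hypothetical.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

def build_multi_cls_input(hypotheses):
    # Concatenate hypotheses as [CLS] hyp1 [SEP] [CLS] hyp2 [SEP] ... and assign
    # interval segment ids 0, 1, 0, 1, ... so that the encoder can tell them apart.
    input_ids, token_type_ids, cls_positions = [], [], []
    for i, hyp in enumerate(hypotheses):
        ids = tokenizer.convert_tokens_to_ids(["[CLS]"] + tokenizer.tokenize(hyp) + ["[SEP]"])
        cls_positions.append(len(input_ids))
        input_ids.extend(ids)
        token_type_ids.extend([i % 2] * len(ids))
    return input_ids, token_type_ids, cls_positions

ids, segs, cls_pos = build_multi_cls_input(
    ["spiel madonna", "spiele madonna", "spiel mad donna"])   # hypothetical ASR 3-best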
Following this work, Liu and Lapata (2019) proposed a model that comprises of the pre-trained BERT extractive summarization model (Liu, 2019) as the encoder and a decoder which consists of a 6-layered transformer (Vaswani et al., 2017). The encoder was fine-tuned in two stages, first on the extractive summarization task and then again on an abstractive summarization task resulting in a joint extractive-abstractive model that showed improved performance on summarization tasks. The utilization of multiple ASR hypotheses for improved NLU model performance across DC, IC tasks was first introduced by Li et al. (2020). They proposed the use of 5-best ASR hypotheses to train a BiLSTM language model, instead of using a single 1-best hypothesis selected using either majority vote, highest confidence score or a reranker. They explored two methods to integrate the n-best hypothesis: a basic concatenation of hypotheses text and a hypothesis embedding concatenation using max/avg pooling. The results show 14%-25% relative gains in both DC and IC accuracy. In our work, we explore the performance improvement offered by utilizing the ASR 5-best hypotheses in previously unexplored languages, German and Portuguese. We also differ from previous studies due to our use of the superior BERTbased extractive (Liu, 2019) and joint extractiveabstractive (Liu and Lapata, 2019) summarization models to extract a summary hypothesis for the DC task, from the ASR 5-best. Voice assistants traditionally handle IC and NER tasks using semantic parsing components which typically comprise of statistical slot-filling systems for simple queries and, in more recent time, shiftreduce parsers Einolghozati et al., 2019) for more complex utterances. Rongali et al. (2020) proposed a unified architecture based on sequence-to-sequence models and pointer generator networks to handle both simple and complex IC and NER tasks with which they achieve stateof-the-art results. In this work, we use a model that expands this approach to consume the 5-best ASR hypotheses and evaluate its performance on IC/NER tasks for the two language datasets considered. Data Our experiments focus on two non-English internal datasets; German and Portuguese. We run all utterances in each language through one languagespecific ASR model and take the top-ranked ASR hypothesis for each utterance as ASR 1-best and all available hypotheses for each utterance (a maximum of five in our models) as ASR 5-best. In addition, we also obtain a human transcribed version of each utterance. For German, we use 1.48 million utterances from 21 domains for training and validation. We split the data randomly within each domain, with 85% used for training and 15% for validation. An independent set of 193K utterances are used for testing. Within the independent test set we find 17K utterances where the ASR 1-best did not match the transcribed utterance exactly and mark them as the "mismatched" test set. (Table 1). For Portuguese, we use 890K utterances from 19 domains for training and validation, split the same way as with German. Another 247K utterances are used for testing. We find 41K utterances within test, where the ASR 1-best did not match the transcribed utterance exactly, and mark them as the mismatched test set (Table 1). Li et al. (2020) showed improvement in NLU model performance on English (en-US) upon utilizing the ASR 5-best hypotheses instead of only ASR 1-best. However, the impact of this on non-English languages has not yet been explored. 
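The mismatched subsets described above are simply the utterances whose top ASR hypothesis differs from the human transcription, and the same data also yield the per-rank exact-match counts used in the opportunity analysis that follows. The sketch below assembles both; the file name and column names (transcription, asr_1, ..., asr_5) are hypothetical.

import pandas as pd

df = pd.read_csv("test_set.tsv", sep="\t")   # hypothetical columns: transcription, asr_1 ... asr_5

mismatched = df[df["asr_1"].str.strip() != df["transcription"].str.strip()]
print(len(mismatched), "utterances in the mismatched test set")

# Exact matches found at each rank, as a fraction of those found at rank 1 (cf. Table 2).
matches_at_1 = (df["asr_1"] == df["transcription"]).sum()
for rank in range(2, 6):
    matches = (df[f"asr_{rank}"] == df["transcription"]).sum()
    print(rank, round(100.0 * matches / matches_at_1, 2), "% relative to rank 1")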
To understand the opportunity for improvement that the ASR 5-best hypotheses can lend to NLU model performance on the German and Portuguese datasets, we analyze the ASR 5-best hypotheses in comparison to the ground-truth human transcribed data for each of the considered language datasets. First, we calculate the number of exact matches to the transcribed utterance occurring in each of the top 5-best hypotheses. It should be mentioned that each ASR hypothesis is different from the others, so at most one hypothesis (if any) can match the transcribed utterance. Next, we compute the number of exact matches found in the n-th best hypothesis set as a fraction of the number of exact matches found at 1-best. The results are shown in Table 2. We find that the number of exact matches that occur in the 2-5 best hypotheses, compared to the number of exact matches that occur in the top-ranked hypothesis, is large for Portuguese (30.16%) and German (20.83%) (see Table 2). This gives an indication of the opportunity present in using hypotheses beyond the ASR 1-best for each language dataset.

Opportunity Cost Measurement

In Table 3, we further illustrate the use of the ASR 5-best hypotheses by showing three possible cases of stored information that we want our NLU model to extract: selecting the best matching hypothesis (first and second rows) and combining hypotheses (third row).

DC models

For our DC experiments, we compare performance across the following classification models:
• Baseline - A BERT-based classification baseline model with an MLP classifier, trained on the transcribed utterance and tested on the ASR 1-best
• BSUMEXT - A BERT-based extractive summarization model trained and tested on the ASR 5-best
• BSUMEXTABS - A BERT-based joint extractive and abstractive summarization model trained and tested on the ASR 5-best
Standard testing on transcribed utterances underestimates the combined ASR and NLU errors. In order to avoid this, our test sets exclude transcribed utterances and thus reflect the real situation. In Section 2, we described the simple extractive summarization model proposed by Liu (2019). We adapt their extractive summarization model to take the ASR 5-best hypotheses as input and output a probability score per domain based on a summarized hypothesis. Figure 1 shows the architecture. The model takes the [CLS] representation of each ASR 5-best utterance and performs multi-headed attention to obtain the summary hypothesis. For the BSUMEXTABS model, the BERT encoder is fine-tuned on an abstractive summarization task and then further fine-tuned on the extractive summarization task. In this model, the summary hypothesis fed into the multi-layer perceptron classifier is generated token by token in a sequence-to-sequence fashion. Similar to Liu and Lapata (2019), a decoupled fine-tuning schedule which separates the optimizers of the encoder and the decoder is used. We trained each of our models for up to 30 epochs and use the best performing model, based on validation metrics, for evaluating the independent test set.

IC/NER models

We compare the following models for the IC and NER tasks:
• Baseline - A BERT-based classification baseline model trained on the transcribed utterance and tested on the ASR 1-best
• BERT S2S NBEST PTR - A BERT-based sequence-to-sequence model which employs a pointer generator network, trained on the ASR 5-best + transcribed utterance and tested on the ASR 5-best
Instead of treating IC and NER as a typical sequence tagging problem, Rongali et al.
(2020) propose a unified architecture that handles them as a sequence generation problem. We build upon that approach. BERT S2S NBEST PTR is a sequence-to-sequence model augmented with a pointer generator network which functions as a self-attention mechanism. We expand the architecture proposed by Rongali et al. (2020) to include multiple input queries. The model's task is to generate target words which can be either intent or slot delimiters or words from the source sequences. The pointer generator network enables the model to generate pointers to the source sequences (instead of using a large vocabulary of tokens) within the target sequence. An example of a source sequence with two ASR hypotheses and a target sequence looks as follows (we use spaces to delimit hypotheses and & to delimit separate tokens within an utterance):

Source: ply_&_madonna play_&_mad_&_owner
Target: PlaySongIntent( @ptr1_0 ArtistName( @ptr0_1 )ArtistName ) PlaySongIntent

where @ptr0_1, for example, is a pointer to the second word "madonna" in the first utterance of the source query. One advantage of using pointers instead of the actual tokens is the smaller target vocabulary required for the decoder, resulting in a more light-weight model. The architecture consists of a pre-trained BERT encoder and a transformer decoder (Devlin et al., 2019). The decoder is augmented with a pointer generator network that functions as a self-attention mechanism. Figure 2 shows the high-level architecture. The BERT encoder processes each ASR hypothesis separately. The encoder hidden states over all ASR hypotheses are then concatenated and passed to the decoder. The decoder hidden states are used to update the attention mechanism and the tagging vocabulary and pointer distributions (see Rongali et al. (2020) for detailed descriptions). These probability distributions of tags and pointers are used to determine the next word and tag output by the decoder. The model is trained by minimizing sequence cross-entropy loss over the training set. These models are domain-specific multi-task models which handle both the IC and NER tasks simultaneously. We trained one model per domain, with all models trained for up to 50 epochs. The best performing model based on validation metrics was used for evaluating the independent test set.

Evaluation

We measure the success of our DC experiments by comparing both micro- and macro-averaged F1 scores of our experimental models to those of the baseline model. Micro- and macro-averaged F1 scores are defined as

F1_micro = 2PR / (P + R)
F1_macro = (1/K) Σ_i 2 P_i R_i / (P_i + R_i)

where P and R are the overall precision and recall respectively, P_i and R_i are the within-class precisions and recalls respectively, and K is the number of classes. We also calculate the relative change in error of each experimental model run with respect to the baseline, as shown in equation 3:

∆err = 100 × ((100 − F1_experiment) − (100 − F1_baseline)) / (100 − F1_baseline) (3)

Note that "lower-is-better" for this metric. In addition to these metrics calculated on the full test data set, we also calculate these metrics on the mismatched test set utterances where the ASR 1-best did not match the transcribed utterance. For the IC and NER experiments, we use the Semantic Error Rate (SemER) (Su et al., 2018) as our metric of choice. SemER is defined as follows:

SemER = (D + I + S) / (C + D + S)

where D = deletions, I = insertions, S = substitutions and C = correct slots; the intent is treated as a slot in this metric and intent errors are counted as substitutions.

Figure 2: A schematic of the sequence-to-sequence model with attention. Each ASR hypothesis is encoded separately. The encoder hidden states are then concatenated and passed to the decoder to have a cross-attention between encoder and decoder outputs over all ASR hypotheses.
We use the relative change in SemER with respect to the baseline model (equation 5), both overall and per domain, in order to evaluate the success of our models. Note that "lower-is-better" for the relative change in SemER as well. Table 4 describes the performance of all the models defined in Section 4.1 on the full test set and the mismatched test set (see Section 3 and Table 1). The full test set enables us to understand the general performance improvement that can be achieved by using summarization models. Although utilizing the full ASR 5-best hypotheses might offer some improvement even in cases where the ASR 1-best hypothesis is an exact match to the transcribed utterance, much more value-add is expected when using the ASR 5-best hypotheses in cases where there is a mismatch between the transcribed utterance and the ASR 1-best. To study this use case, we use the mismatched test set.

DC experiments

We observed that a majority of F1 scores across all models for German exceeded their corresponding values for Portuguese. Our opportunity cost analysis showed that exact matches between the transcribed utterance and the ASR 2-5-best for Portuguese are higher than for German (see Section 3.1). This suggests that the German ASR model tends to perform better than the Portuguese ASR model. In this light, the smaller gains in relative change in error observed for German when compared to Portuguese are likely due to the German ASR model being superior and therefore leaving smaller room for improvement. Figure 3 displays the relative changes of each model against the baseline for each dataset. When considering micro-averaged F1 scores, the BSUMEXT and BSUMEXTABS models outperform the baseline in all cases, with the latter outperforming the former. This shows that the use of ASR 5-best hypotheses can significantly improve overall classification for both language datasets. The BSUMEXTABS models also consistently outperform the baseline on macro-averaged F1 scores, showing improvement in mean within-class classification scores as well. This suggests that BSUMEXTABS, with additional fine-tuning on the abstractive task, is in general more successful at creating a firmer hypothesis for DC than the pure extractive summarization of BSUMEXT. For Portuguese, even with the relatively large percentage of exact matches available for extraction within its ASR 2-5 hypotheses (see Section 3), BSUMEXTABS consistently outperforms BSUMEXT across all metrics and datasets. Table 5 describes the performance of all the models defined in Section 4.2 on domain-level data from the full test set and the mismatched test set. As with the DC experiments, we use the full test set to understand the general overall performance improvement, and use the mismatched test set to identify improvement in cases where the ASR 1-best hypothesis is not an exact match to the transcribed utterance.

IC and NER experiments

When evaluating the BERT S2S NBEST PTR model, we find that it tends to improve performance specifically on the mismatched test set. For German, we find improved performance across every domain on the mismatched test set (see Table 5), with an overall SemER improvement of 11.6% against the baseline. However, we only observe improvement in three domains on the full set, while other domains show degradation in SemER.
It is also interesting to note that the domains that improved also had low utterance counts. For Portuguese, testing on the mismatched test set yields improved performance across 17 out of 19 domains (see Table 5), with an overall SemER improvement of 8.1% against the baseline, while only three domains show improvement on the full test set. Our results suggest that the ASR 1-best hypothesis works well for the IC/NER tasks. The noise added by the additional hypotheses seems to degrade results in the general use case. However, the additional hypotheses tend to be very helpful in cases where the ASR model makes mistakes (i.e. mismatched set data, where the ASR 1-best is not an exact match to the transcribed utterance). Our full test set results show that the baseline model appears to be a better choice for the IC/NER tasks. However, if we could detect user utterances where the ASR model might have made a mistake in its top hypothesis, the ASR outputs (i.e. the set of all hypotheses) of these utterances could be channeled to a separate NLU model such as BERT S2S NBEST PTR, which could build a better hypothesis than the baseline and improve overall IC/NER performance. We analyzed the confidence scores of our ASR models on the full and mismatched test set hypotheses to explore the possibility of detecting a mismatched-set ASR output. For each ASR output we obtain the mean confidence score across all available hypotheses. We then compare the frequency distributions of the mean confidence scores in the full and mismatched test sets. Figure 4 shows the resulting distributions for two example domains for each language dataset. We find that the full set shows a strong peak at high confidence scores while the mismatched set shows a more uniform distribution. The pronounced difference in distribution shape suggests that a thresholding mechanism based on the confidence score output by the ASR model (or a simple classifier trained on ASR outputs and scores) might be used to predict mismatched test set outputs with good confidence. Leveraging such a mechanism might enable the use of a second model such as BERT S2S NBEST PTR to improve performance in these mismatched cases, and in turn improve overall IC/NER performance.

Conclusions and future work

In this study, we explore the benefits of using ASR 5-best hypotheses for NLU tasks on German and Portuguese datasets. We explore several models to perform the DC and IC/NER tasks and evaluate their performance against baseline models that use the ASR 1-best. We find significant overall improvement in performance for the DC task. We also find significant improvement in the performance of the jointly evaluated IC/NER tasks in cases where the ASR 1-best hypothesis is not an exact match to the transcribed utterance. For the DC task, our results suggest that the use of the ASR 5-best helps produce better hypotheses, and thereby greater improvements, in the case of slightly lower-quality ASR models. Our next steps will include exploring how different data splits based on ASR confidence scores might affect the sequence-to-sequence model performance. Furthermore, we will explore performance improvements in the IC and NER tasks using different model architectures and training schedules. We will also expand our study to a larger set of languages in order to understand how the use of multiple ASR hypotheses might affect languages with different lexical distributions. Languages which use multiple scripts (Japanese, Hindi, Arabic, etc.)
or which are more opaque and likely to have heterographs (e.g., "serial", "cereal"), and those that have less standardized spelling systems (Hindi, etc.), are more likely to have ASR errors. They may show different levels of improvement with the use of ASR 5-best hypotheses, and we hope to analyze this in our future work.
Challenges in the real world use of classification accuracy metrics: From recall and precision to the Matthews correlation coefficient

The accuracy of a classification is fundamental to its interpretation, use and ultimately decision making. Unfortunately, the apparent accuracy assessed can differ greatly from the true accuracy. Mis-estimation of classification accuracy metrics and associated mis-interpretations are often due to variations in prevalence and the use of an imperfect reference standard. The fundamental issues underlying the problems associated with variations in prevalence and reference standard quality are revisited here for binary classifications, with particular attention focused on the use of the Matthews correlation coefficient (MCC). A key attribute claimed of the MCC is that a high value can only be attained when the classification performed well on both classes in a binary classification. However, it is shown here that the apparent magnitude of a set of popular accuracy metrics used in fields such as computer science, medicine and environmental science (Recall, Precision, Specificity, Negative Predictive Value, J, F1, likelihood ratios and MCC) and one key attribute (prevalence) were all influenced greatly by variations in prevalence and the use of an imperfect reference standard. Simulations using realistic values for data quality in applications such as remote sensing showed each metric varied over the range of possible prevalence and at differing levels of reference standard quality. The direction and magnitude of accuracy metric mis-estimation were a function of prevalence and the size and nature of the imperfections in the reference standard. It was evident that the apparent MCC could be substantially under- or over-estimated. Additionally, a high apparent MCC arose from an unquestionably poor classification. As with some other metrics of accuracy, the utility of the MCC may be overstated and apparent values need to be interpreted with caution. Apparent accuracy and prevalence values can be mis-leading, and calls for the issues to be recognised and addressed should be heeded.

Introduction

The value and use of a classification is dependent on its accuracy. Many metrics of accuracy are available to express different aspects of classification quality and have well-defined properties. Unfortunately, with many metrics some key properties, such as independence of prevalence and the meaning of calculated values, are defined on the assumption that a gold standard reference is used in the accuracy assessment. This assumption is often untenable in real world applications. The use of an imperfect reference standard can lead to substantial mis-estimation of classification accuracy and derived variables such as class prevalence.
This article adds to the literature on accuracy assessment by revisiting fundamental issues with widely used accuracy metrics.The latter range from recall and precision that express the accuracy of the positive class of a binary classification through to the MCC that is claimed to provide a truthful guide to overall classification quality.Using simulated data it is shown that the theoretical independence of prevalence of some popular metrics is not realised and that the apparent accuracy of a classification can differ greatly from the truth when an imperfect reference is used.Estimates of classification accuracy and prevalence are shown to change substantially with variation prevalence and the quality of the reference standard.The direction and magnitude of the biases introduced by the use of an imperfect reference varies as a function of the nature of the errors it contains.Critically, accuracy metrics do not always possess the properties claimed in some of the literature because a fundamental assumption underlying accuracy assessment is unsatisfied.For example, it is shown that a relatively high MCC may be estimated for an essentially worthless classification arising from a classifier with the skill of a coin tosser.Consequently, some claims made about the value and use of MCC, and other metrics, for accuracy assessment are untrue.Researchers need to avoid naïve interpretation of accuracy metrics based directly on estimated apparent accuracy values and act to address challenges in the interpretation and use of a classification.They could, for example, interpret apparent values with care, estimate reference data quality and apply methods to correct for the biases introduced into an analysis through the use of an imperfect reference. Overview A critical attribute of a classification is its quality, often described in terms of its accuracy.Classification accuracy reflects the amount of error contained in a classified data set and indicates the classification's fitness for purpose.An unacceptably low accuracy might be taken to be a spur to refine and enhance a classifier (e.g.add additional discriminating variables) until a sufficiently accurate classification is achieved.As such, accuracy can be fundamental to classifier development.Such activity is, for example, central to cross-validation in model development.Additionally, a low accuracy might cause a researcher to discard a classifier as inappropriate and select an alternative that might be superior.Ultimately, the accuracy of a classification provides a guide to the performance of the classifier and impacts on the quality of products derived from the classification.For example, the accuracy of a classification influences estimates of class abundance such as the prevalence of a disease in a typical medical application.Critically, the accuracy of a classification is fundamental to its use and ultimately the inferences and decisions made based upon it.Incorrect evaluations of accuracy, however, can be deceiving and can lead to the questioning of the quality and indeed validity of conclusions drawn from a classification analysis [1,2]. 
In principle, accuracy assessment is a straightforward task.The accuracy of a classification is simply an indicator of the amount of error in the labels generated by a classifier such as a diagnostic test.The error can be calculated by comparing the classifier's labels with reality.In practice, the labels predicted by the classifier are compared against those obtained from a reference standard.Typically, the core focus is on the magnitude of a quantitative accuracy metric; often informed with confidence intervals and rigorous statistical testing [3].Sometimes a scale may be used to aid interpretation of a metric, with the range of possible values divided up to provide an ordinal scale such as low, medium and high accuracy (e.g.[4]).In many situations, the relative magnitude of accuracy metrics is critical, often relative to a target or threshold value (e.g.[1,5]) or between a set of results obtained from other studies perhaps based on different samples.None-the-less the key focus in evaluating a classification is typically the magnitude of a calculated metric of accuracy. There is no single perfect metric of classification accuracy [6,7].There are many measures of classification accuracy which typically reflect different aspects of the quality of a classification.With binary classifications, which are the focus of this article, popular approaches include, overall accuracy, recall, specificity, F 1 and measures such as the area under the receiver operating characteristics or precision-recall curves [8][9][10][11][12]; the metrics used in this paper are defined in section 1.2.These various measures all convey different information about a classification, each with merits and demerits for a particular application.Consequently, there have been many calls for the use of multiple metrics although this can complicate interpretation [13,14].Additionally, a common, albeit unrealistic, desire is to have a single value to summarise a binary classification and recent literature has promoted the Matthews correlation coefficient (MCC) for all researchers and all subjects [11,15]. The focus of this article is on some of the challenges of interpreting the magnitude of a computed accuracy metric.The interpretation of an accuracy statement is more challenging than it may first seem to be in many real world situations.A fundamental concern is that the magnitude of an accuracy metric is not solely a function of the quality of the classification.The magnitude of an accuracy metric can, for example, also be a function of the population being studied and the specific sample of cases used in its calculation [9] as well as of the quality of the reference standard used [16].The former issue is associated mostly with the effects of prevalence and the latter with deviation from a true gold standard reference.Both of these variables can result in the apparent accuracy of a classification differing from reality.In this situation, the accuracy assessment is also reduced from an objective and generalizable assessment of classification quality to an assessment relative to only the specific sample of data cases and reference standard used [2,15].The apparent accuracy indicated by the magnitude of an accuracy metric may differ greatly from reality.Moreover, the magnitude and direction of the deviation from the true accuracy is a function of the nature of the data set used and of the imperfections that exist in the reference standard [2,[16][17][18][19][20]. 
Critically, the magnitude of an accuracy metric is not influenced solely by the quality of the classification.This makes comparisons of accuracy metric values, whether to a scale or between classifications, difficult.The challenges introduced by variations in prevalence and the use of an imperfect reference standard are well known but sadly are often not or only poorly addressed [1,16,21].Here, the aim is to revisit some of the fundamental issues for a set of popular accuracy metrics but with particular regard to the MCC that has recently been strongly promoted as an accuracy metric yet actually has some undesirable properties associated with prevalence and use of an imperfect reference standard. The classes in a classification analysis are often imbalanced in real world applications.With a focus on widely used binary classifications, this situation commonly arises when a class, often the one of particular interest, is rare.The prevalence may also vary, perhaps in space and or time.For example, in studies of Crohn's disease the prevalence varied from 20 to 70% in different sub-groups investigated [22].Even larger ranges of prevalence can sometimes be expected to occur.For example, in satellite remote sensing of tropical deforestation, deforestation may be relatively rare at the global scale but in small local studies it could be completely absent while other sites have been completely cleared of forest making the full range of prevalence possible.As a result, imbalanced classes may be common [12,23].Moreover, the degree of imbalance can be very large.For example, recent literature highlights this issue with reference made to situations in which the ratio of the number of cases in the majority class to that in the minority class included values such as 2,000:1 [24] and 10,000:1 [12]. Some studies seek to reduce the problem of imbalanced classes by sampling the population to achieve a balanced sample.Alternatively, researchers sometimes artificially balance the sample by use of suitable data augmentation procedures or other means to adjust a sample to achieve a desired level of balance [25][26][27].But such approaches are not problem-free.Synthetic minority oversampling data augmentation methods have, for example, the potential to increase biases in the data set and overfit to the minority class [28].Imbalanced classes are, therefore, common and researchers should really correct estimates of accuracy metrics and derived products such as prevalence for the bias induced by class imbalances [29][30][31][32].Commonly, many seek to address this problem by making a call for the use of accuracy metrics that are believed to be independent of prevalence such as recall, specificity and Youden's J [33][34][35]. The assumption of independence of prevalence also underpins the use of Bayes theorem in applications such as clinical diagnosis [34,36].However, the claimed independence of prevalence linked to these and some other accuracy metrics can disappear if an imperfect reference standard is used in the accuracy assessment. While more attention has been paid to sample issues such as sampling design and class imbalance than to reference data quality [21], the effects associated with the use of an imperfect reference standard are well known [1].Despite this situation the negative effects associated with the use of an imperfect reference standard are often ignored or only poorly addressed [21]. 
The use of a perfect, gold standard, reference data set in which all class labels contained are completely correct is often assumed implicitly in an accuracy assessment.However, such a gold standard may not exist [37,38] or might be unavailable because it is perhaps too costly or impractical to use [16,20,39].Such situations force the use of an imperfect reference standard.In many studies the reference data arise from expert labelling [22].Unfortunately, such an approach is far from perfect with the level of disagreement between expert interpreters often large.For example, values of disagreement up to 25% noted have been noted in some medical research [40] and even higher, approximately 30%, in a remote sensing studies [41].These disagreements arise because there are many error sources [20,21,42].Real world data are often messy.Errors can be anything from a typographical mistake to an issue connected to a random event (e.g.shadow in an image complicating labelling) to systematic errors associated with the skills, training and personal characteristics of the people providing labels and the tools that they use.Errors can range from honest mistakes through poor contributions from spammers to deliberately mis-labeled cases provided with malicious intent [21,43].Sometimes imperfections in the reference data are noted and the aim is to use a reference that is expected to be more accurate than the classification under-study [3].The bottom line is that the reference data set used is often not a true gold standard.However, it is common for the reference standard to be poorly, if at all, discussed and imperfections rarely addressed [1,16].Although the deviations from perfection can be large it is important to note that even small errors in the reference data can lead to large bias and mis-estimation from which can follow mis-interpretation and the drawing of incorrect conclusions [1,16,44].Furthermore, the magnitude and direction of mis-estimation varies as a function of the nature of the errors present in the reference standard.The challenges in accuracy assessment should not be ignored and researchers have been urged to take action to address them which includes the generation of error corrected estimates of classification accuracy [20,[29][30][31][32] and derived variables such as prevalence [45]. 
The effects of variations in prevalence and reference data error on accuracy assessment are well known and recent literature has focused on metrics to provide meaningful information.Recent literature has also promoted the use of the MCC in accuracy assessment in all subjects and highlighted merits relative to other popular and widely used measures such as recall, precision and F 1 [11,15].Key features behind the arguments put forward for the use of the MCC are that it can be extended from binary to multi-class classifications, uses all four elements of a binary confusion matrix and is more informative than other popular accuracy metrics [11,15].A key property claimed about the MCC is that a high value can only arise if good results are obtained for all four confusion matrix elements [11] and thus a high score can only be obtained if the classification performed well on both of the classes [46].Moreover, it is claimed that the MCC is robust to imbalanced data sets [46] although variation with prevalence is known to occur [15].While the magnitude of the MCC can vary with prevalence workarounds exist and it has been suggested that a metric that is claimed to be unbiased by prevalence, such as J, be used if class imbalance is a concern [15].Here, it is suggested that in real world applications the magnitude of the MCC, and other metrics such as J, can be difficult to interpret and that the potential of the MCC may be over-stated.This situation arises since classification accuracy is a function of more than just the performance of the classifier.For example, a low apparent accuracy can be obtained from a highly accurate classification.Alternatively, the magnitude of a metric, including the MCC, may be artificially inflated complicating the interpretation of large values.Here, a key aim is to revisit some fundamental issues with accuracy assessment and show that some well-known problems apply to metrics such as the MCC. Background to accuracy assessment The quality of a binary classification is typically assessed with the aid of a 2x2 confusion matrix.The latter is simply a cross-tabulation of the labels allocated to a set of cases by a classifier against the corresponding labels in a reference data set.The labels used for the two classes may vary between studies (e.g.change v no change, yes v no etc.) but often take the form of positive v negative.It is assumed that the classes are discrete, mutually exclusive and exhaustively defined (i.e. each case lies fully and unambiguously in one of the two classes and no other outcome for a case is possible).The binary confusion matrix comprises just four elements but these, and their marginal values, fully describe the classification (Fig 1 ). 
A range of measures of classification quality may be generated from the confusion matrix to illustrate different aspects of the classification. Commonly, however, there is a general desire for a single value to summarise the accuracy of the classification. Unfortunately, the assessment of classification accuracy can be a more challenging and difficult task than it may first appear. In addition, the challenge is further complicated by the use of different terminology in the vast array of disciplines that require and use accuracy assessment (e.g. [33,44]). This section aims to cover some of the fundamental issues and introduce the terminology to be used to avoid potential confusion. The discussion is focused on only some of the most popular and widely used classification metrics and does not seek to be exhaustive but to show some key issues and trends. The discussion ranges from popular metrics focused on the positive cases, such as recall and precision, through to the MCC, which has recently been promoted as a standard metric for use in all subjects [15]. The four elements of the confusion matrix show both the correct and incorrect allocations made by the classifier. The cases correctly allocated, and lying in the main diagonal of the matrix, are the true positives (TP, cases that were classed as positive and also have a positive label in the reference data) and true negatives (TN, cases that were classed as negative and also have a negative label in the reference data). The cases incorrectly allocated in the classification are the misclassifications and are false positives (FP, cases labelled positive by the classifier but having a negative label in the reference data) and false negatives (FN, cases labelled negative by the classifier but having a positive label in the reference data). The discussion in this section assumes that the reference data used in determining which element of the confusion matrix to place a classified case into is a true gold standard (i.e. it is perfect, containing zero error). A simple and widely used metric to express the quality of the entire classification is the proportion of cases correctly allocated, which is often referred to as accuracy or overall accuracy and may be calculated from:

Accuracy = (TP + TN) / (TP + FN + FP + TN) (1)

Although this can be a useful metric that makes use of all four elements of the confusion matrix, there are concerns with its use [13]. For example, a key problem is that the metric is strongly impacted by imbalanced classes, with a bias toward the majority class, and can be uninformative [11,47]. A variety of other accuracy metrics have been proposed to evaluate the accuracy of binary classifications. Often an analysis has a focus on the positive cases. An important issue in a classification accuracy assessment is the proportion of positive cases in the reference data set, which is often defined as the prevalence (P) and is calculated from:

P = (TP + FN) / N (2)

where N is the total sample size (i.e. the number of cases in the data set, the sum of all the positive and negative cases in the data set). Prevalence is a property of the population under study [9]. The prevalence also indicates the relative balance or size of the two classes in the data set.
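As a small illustration, here is a sketch of Eqs 1 and 2 computed directly from the four confusion-matrix cells; the example matrix is an illustrative choice with prevalence 0.1 and Recall = Specificity = 0.8.

```python
# A minimal sketch of Eqs 1 and 2: overall accuracy and prevalence from cell counts.

def overall_accuracy(tp, fn, fp, tn):
    return (tp + tn) / (tp + fn + fp + tn)      # Eq 1

def prevalence(tp, fn, fp, tn):
    return (tp + fn) / (tp + fn + fp + tn)      # Eq 2: proportion of positive reference cases

if __name__ == "__main__":
    # Example matrix: 100 positive and 900 negative cases, Recall = Specificity = 0.8
    print(overall_accuracy(tp=80, fn=20, fp=180, tn=720))   # 0.8
    print(prevalence(tp=80, fn=20, fp=180, tn=720))          # 0.1
```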
When prevalence equals 0.5 the classes are balanced, with an equal number of positive and negative cases in the reference data set and a ratio of positive:negative cases of 1:1. The more the magnitude of the prevalence deviates from 0.5, the greater the degree of imbalance present. In many studies, the positive cases are relatively rare and hence imbalance can become an issue. While some studies seek balance, and may achieve this via careful sampling or the use of augmentation methods, many proceed with imbalanced data sets. With a focus on the positive cases, a simple metric to quantify the accuracy of the classification is Recall, which is calculated from:

Recall = TP / (TP + FN) (3)

Recall indicates how well the classifier labels as positive cases that actually are positive, showing the ability to correctly identify cases that are positive [35,48]. This measure is widely used in computer science and machine learning but is also described by other communities as sensitivity, hit rate, true positive rate and producer's accuracy [3,9,48]. Although useful, it has limitations such as the provision of no information on the FP cases [47]. Additionally, Recall does not capture the full information on the accuracy of the classification with regard to the positive cases. Recall expresses the probability of correctly predicting a positive case [47]. Some researchers may have a different perspective and be interested in the probability that a positive prediction is correct. The suitability of a metric depends on the researcher's needs and the application in hand. For example, some studies may be more focused on commission rather than omission errors, and so researchers may at times wish to focus on the rows rather than the columns of the confusion matrix, as determined by their perspective [36,49-51]. An alternative metric to Recall that is focused on the positive cases is Precision, which is calculated from:

Precision = TP / (TP + FP) (4)

Again, although this is a term widely used in computer science and machine learning, other expressions such as the positive predictive value and user's accuracy are used in other disciplines [3]. Should interest focus on the negative cases, two additional metrics may be defined. Similar to Recall for the positive cases, the accuracy of the negative classifications may be expressed by Specificity, calculated from:

Specificity = TN / (TN + FP) (5)

Specificity indicates the ability of the classifier to correctly identify negative cases [35,48]. This metric is sometimes referred to as the true negative rate. Additionally, from the same perspective used in the calculation of Precision for the positive cases, the accuracy of the negative cases can be expressed by the Negative Predictive Value (NPV), which can be obtained from:

NPV = TN / (TN + FN) (6)

The magnitudes of the four basic metrics of Recall, Precision, Specificity and NPV are all positively related to the aspect of accuracy that they measure and lie on a 0-1 scale. If a gold standard reference is used in the accuracy assessment, a key feature of Recall and Specificity is that they are independent of prevalence, while Precision and NPV are dependent on prevalence [35].
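A corresponding sketch of the four basic metrics (Eqs 3-6), using the same illustrative matrix as above:

```python
# A small sketch of the four basic metrics (Eqs 3-6), computed directly from the cells.

def basic_metrics(tp, fn, fp, tn):
    return {
        "recall":      tp / (tp + fn),   # Eq 3: sensitivity / true positive rate / producer's accuracy
        "precision":   tp / (tp + fp),   # Eq 4: positive predictive value / user's accuracy
        "specificity": tn / (tn + fp),   # Eq 5: true negative rate
        "npv":         tn / (tn + fn),   # Eq 6: negative predictive value
    }

if __name__ == "__main__":
    # Prevalence 0.1, Recall = Specificity = 0.8, N = 1000
    print(basic_metrics(tp=80, fn=20, fp=180, tn=720))
    # {'recall': 0.8, 'precision': ~0.31, 'specificity': 0.8, 'npv': ~0.97}
```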
Each of these four metrics of accuracy (Eqs 3-6) can be informative and useful. Each does not, however, fully summarise the accuracy of the entire classification; each is based on only two of the four confusion matrix elements [15,47]. It is, therefore, common for two or more metrics to be used together. For example, Recall and Precision are widely reported together to obtain a fuller characterisation of accuracy than that arising from one alone. Other approaches to more fully characterise a classification have been presented in the literature (e.g. [9,47]). An alternative characterisation of classification accuracy can be achieved by combining metrics. For example, a popular approach is to determine Youden's J from:

J = Recall + Specificity − 1 (7)

This metric is sometimes referred to as the true skill statistic and bookmaker informedness [9,15]. The magnitude of J is related positively to classification accuracy and lies on a scale from -1.0 to 1.0. The metric J is often promoted for use in accuracy assessment since its magnitude is independent of prevalence [15] if a gold standard reference is used in the accuracy assessment. Another widely used metric that essentially combines the information of two of the basic metrics is F1. Specifically, the F1 metric is the harmonic mean of Recall and Precision and may be calculated from:

F1 = (2 × Recall × Precision) / (Recall + Precision) (8)

The magnitude of F1 ranges from 0, if Recall and/or Precision are zero, to 1.0, if both Recall and Precision indicate a perfect classification. Although widely used as an accuracy metric, F1 is described as being inappropriate for use with imbalanced data sets and its magnitude is dependent on prevalence [11], making it potentially misleading. In addition, the F1 metric does not use all of the information contained in the confusion matrix [15]. Recently, the MCC has been promoted as a standard metric of classification accuracy for all subjects and data sets. The MCC uses all four elements of the confusion matrix [11,47] and is calculated from:

MCC = ((TP × TN) − (FP × FN)) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)) (9)

The magnitude of the MCC is positively related to the quality of a classification and lies on a scale from -1.0 to 1.0, with an MCC = 0 indicating an accuracy equivalent to that from a coin-tossing classifier [11]. Although some claims to the MCC being robust or relatively unaffected by class imbalance have been made [52], it is known that the MCC can be impacted by prevalence. However, it has been suggested that workarounds exist for this situation or that a metric such as J should be used if class imbalance is a concern [11,15]. The MCC has been forcefully promoted as superior to other measures such as accuracy and F1, which are the most widely used measures [11]. For example, Chicco et al. (2021) [15] argue that a high accuracy (Eq 1) or F1 (Eq 8) guarantees that two of the basic metrics (Eqs 3-6) are high, a high J guarantees that three of the basic metrics are high, while a high MCC guarantees that all four basic metrics are high. Thus, the MCC "produces a high score only if the predictions obtained good results in all four confusion matrix categories" [11, page 1].
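A sketch of these combined metrics (Eqs 7-9), evaluated on the same illustrative matrix:

```python
# A sketch of Youden's J, F1 and the MCC (Eqs 7-9) from the confusion matrix cells.
import math

def youden_j(tp, fn, fp, tn):
    return tp / (tp + fn) + tn / (tn + fp) - 1.0     # Recall + Specificity - 1

def f1_score(tp, fn, fp):
    recall, precision = tp / (tp + fn), tp / (tp + fp)
    return 2 * recall * precision / (recall + precision)

def mcc(tp, fn, fp, tn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

if __name__ == "__main__":
    tp, fn, fp, tn = 80, 20, 180, 720            # prevalence 0.1, Recall = Specificity = 0.8
    print(youden_j(tp, fn, fp, tn))              # 0.6
    print(round(f1_score(tp, fn, fp), 3))        # ~0.444
    print(round(mcc(tp, fn, fp, tn), 3))         # ~0.41
```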
Many other metrics of classification accuracy are available. Popular metrics beyond the set defined above include likelihood ratios (LR) and the area under the receiver operating characteristics and/or precision-recall curves. LRs are often calculated with regard to both classes. The positive LR is the ratio of the true positivity rate to the false positivity rate [53] and is calculated from:

LR+ = Recall / (1 − Specificity) (10)

The negative LR is the ratio of the false negative rate to the true negative rate [54], which may be calculated from:

LR− = (1 − Recall) / Specificity (11)

LRs lie on a scale from 0 to infinity. A value of 1 indicates a poor classification in which the probability of a classifier predicting a positive label is the same for cases that belong to the positive class and to the negative class. Classifications that have a high LR+ (>1) and low LR− (<1) demonstrate high discriminating ability [54,55]. The LRs are typically claimed to be unaffected by prevalence [53]. The receiver operating characteristics and precision-recall curves are also based on the basic metrics of accuracy. The receiver operating characteristics curve is simply a depiction of the relationship between Recall and 1-Specificity. The precision-recall curve is also, as evident from its name, the relationship between Precision and Recall. As these, and other metrics, are essentially based on the basic metrics discussed above, they will share some properties, including the degree of sensitivity to variations in prevalence [11,23,56]. The core focus in this paper will be the accuracy metrics represented by Eqs 3-9 and prevalence calculated from Eq 2; a limited discussion of LRs will be included to illustrate issues with an additional approach used widely in accuracy assessment. Throughout this section it has been explicitly assumed that a gold standard reference is used in the accuracy assessment. However, an uncomfortable truth in real world studies is that the reference standard is often imperfect. In such situations, an apparent rather than true confusion matrix is generated and this, together with the associated metrics calculated from it, can differ greatly from the truth.

Use of an imperfect reference

Error in the reference data has a simple effect: in essence, it simply moves an affected case from one confusion matrix element to another. This has the effect of altering the magnitude of the entries in the confusion matrix and thereby the magnitude of the accuracy metrics that may be calculated from it. Unfortunately, even small errors in the reference data can be a source of major mis-estimation of accuracy metrics. Furthermore, the magnitude and direction of the mis-estimation are a function of the nature of the errors in the reference data set as well as the prevalence [16-19]. If the classifier's errors are conditionally independent of those made with the reference standard, the errors are unrelated or independent. In this situation, it is common to find that the magnitude of an accuracy metric is often under-estimated. Commonly, it is impossible to assume independence of errors, especially if the classifier and reference are based on the same phenomenon or process [16,17,44].
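A small sketch of the likelihood ratios (Eqs 10-11), expressed in terms of Recall and Specificity:

```python
# Likelihood ratios (Eqs 10-11) written in terms of Recall and Specificity.

def likelihood_ratios(recall, specificity):
    lr_pos = recall / (1.0 - specificity)     # LR+ : true positive rate / false positive rate
    lr_neg = (1.0 - recall) / specificity     # LR- : false negative rate / true negative rate
    return lr_pos, lr_neg

if __name__ == "__main__":
    print(likelihood_ratios(recall=0.8, specificity=0.8))   # (4.0, 0.25) -> high LR+, low LR-
```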
Different trends may be observed if the errors made by the classifier and the reference standard are conditionally dependent and so tend to occur on the same cases. Mis-estimation can then be in either direction, depending on the strength of the correlation [16]. When the degree of correlation is relatively strong, the error rates in an analysis can be substantially under-estimated and hence accuracy metrics over-estimated [16,17,31,32]. Mis-estimates of a derived property such as prevalence can be in either direction. The magnitude of mis-estimation is dependent on the degree of correlation in the errors and the true prevalence [16]. Critically, the fundamental assumption of the use of a gold standard reference in an accuracy assessment is often unsatisfied. The use of an imperfect reference standard results in the generation of an apparent confusion matrix which can differ greatly from the true matrix that would be formed with a gold standard reference. Consequently, the metrics estimated from the confusion matrix are also apparent values that may differ from the truth.

Materials and methods

A simple simulation-based approach was used to explore the effects of variations in prevalence and reference standard error on the magnitude of a suite of accuracy metrics and prevalence from apparent confusion matrices. The focus is on the assessment of the accuracy of classifications of known and constant quality (as defined by Recall, Specificity and J) but with differing prevalence, evaluated using imperfect reference standards of varying quality. For simplicity, simple scenarios in which the classification being assessed and the reference standard each had Recall = Specificity were used; the equations given below can be used to explore other scenarios. With knowledge of a classification's true values of Recall and Specificity, together with the prevalence, it is possible to generate a confusion matrix. The equations to determine the entries in the four elements of a binary confusion matrix, each expressed as a proportion of the sample, are:

TP = P × Recall (12)
FN = P × (1 − Recall) (13)
FP = (1 − P) × (1 − Specificity) (14)
TN = (1 − P) × Specificity (15)

For illustrative purposes, some example matrices will be generated for display. This simply requires multiplying the computed value for each element by the sample size. For this purpose it was assumed that the sample size was N = 1,000. Critically, the above equations allow generation of the actual or true confusion matrix that would be observed if a gold standard reference data set was used. The true value for each of the selected accuracy metrics and prevalence may then be estimated from the confusion matrix using Eqs 2-9.
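A minimal sketch of this construction, assuming the reconstructed Eqs 12-15 above (cell proportions from Recall, Specificity and prevalence, here scaled by the sample size N):

```python
# Hypothetical helper that builds the true confusion matrix used in the simulations.

def true_confusion_matrix(recall, specificity, prevalence, n=1000):
    tp = n * prevalence * recall                       # Eq 12 (scaled by N)
    fn = n * prevalence * (1.0 - recall)               # Eq 13
    fp = n * (1.0 - prevalence) * (1.0 - specificity)  # Eq 14
    tn = n * (1.0 - prevalence) * specificity          # Eq 15
    return tp, fn, fp, tn

if __name__ == "__main__":
    # The 'good' classification scenario (Recall = Specificity = 0.8) at prevalence 0.1
    print(true_confusion_matrix(0.8, 0.8, 0.1))   # (80.0, 20.0, 180.0, 720.0)
```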
A wide range of values for Recall and Specificity are reported for reference standards in the literature (e.g. [2,57,58]). Here, attention was focused initially on a simple scenario in which the outputs of a classification, measured against a gold standard reference, could be summarised as Recall = Specificity = 0.8. As a consequence of these latter values, the classification also had J = 0.6. The values selected are essentially arbitrary but are taken to represent what in many instances would be viewed as a 'good' classification. The values are comparable to others reported in the literature but also allow comparison against a set of imperfect reference data sets that are, as is often desired, more accurate than the classification under evaluation. Again, the magnitudes of the imperfections are arbitrary, but here three imperfect reference standards of relatively high, medium and low accuracy were generated. These reference data sets contained 2% (i.e. Recall = Specificity = 0.98), 10% (i.e. Recall = Specificity = 0.90) and 18% error (i.e. Recall = Specificity = 0.82) respectively. Consequently, it was possible to estimate accuracy metrics and prevalence using three imperfect standards of differing quality and also to know the true values that would arise from the use of a gold standard reference. To further extend the study, the analyses were undertaken twice, once with independent errors in the reference standard and then again using correlated errors. Generating the apparent confusion matrices and estimating the magnitude of the apparent accuracy metrics and apparent prevalence was undertaken using approaches used previously in the literature. Specifically, for the situation in which the errors in the reference standard and classified data are independent, the equations presented in [30,59] were used. In this approach, the apparent confusion matrices were generated using the following equations:

TP' = P × Recall_C × Recall_R + (1 − P) × (1 − Specificity_C) × (1 − Specificity_R) (16)
FP' = P × Recall_C × (1 − Recall_R) + (1 − P) × (1 − Specificity_C) × Specificity_R (17)
FN' = P × (1 − Recall_C) × Recall_R + (1 − P) × Specificity_C × (1 − Specificity_R) (18)
TN' = P × (1 − Recall_C) × (1 − Recall_R) + (1 − P) × Specificity_C × Specificity_R (19)

in which the superscript ' highlights that this is the apparent rather than true value, and the subscripts R and C refer to the reference standard and the classification respectively. The apparent values for accuracy metrics and prevalence were then calculated from the apparent confusion matrices using Eqs 2-9. In the case of correlated errors, the approach discussed by [16] was used. In this approach, the true confusion matrices generated earlier were adjusted to reflect the level of error contained in the reference standard. To do this, the number of positive cases corresponding to the relevant error amount (2%, 10% or 18%) were relabelled to be incorrectly negative in both the classification and the reference data. Similarly, the number of negative cases corresponding to the selected error amount were relabelled to be incorrectly positive in both the classification and the reference data.
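The following sketch implements both procedures under the stated assumptions: the independent-error case uses the reconstructed Eqs 16-19, and the correlated-error case applies the relabelling rule described above, with reference errors falling on cases the classifier also misclassified. It reproduces the worked example discussed later in the Discussion.

```python
# Illustrative sketch (not the authors' code) of generating apparent confusion matrices.

def apparent_independent(recall_c, spec_c, recall_r, spec_r, p, n=1000):
    """Apparent cells when reference and classification errors are independent (Eqs 16-19)."""
    q = 1.0 - p
    tp = n * (p * recall_c * recall_r + q * (1 - spec_c) * (1 - spec_r))
    fp = n * (p * recall_c * (1 - recall_r) + q * (1 - spec_c) * spec_r)
    fn = n * (p * (1 - recall_c) * recall_r + q * spec_c * (1 - spec_r))
    tn = n * (p * (1 - recall_c) * (1 - recall_r) + q * spec_c * spec_r)
    return tp, fn, fp, tn

def apparent_correlated(recall_c, spec_c, ref_error, p, n=1000):
    """Apparent cells with correlated errors; assumes the classifier misclassifies at least
    as many cases as the reference relabels (true of the scenarios in the text)."""
    tp = n * p * recall_c                   # true matrix (Eqs 12-15, scaled by N)
    fn = n * p * (1 - recall_c)
    fp = n * (1 - p) * (1 - spec_c)
    tn = n * (1 - p) * spec_c
    flip_pos = ref_error * n * p            # truly positive cases relabelled negative in both
    flip_neg = ref_error * n * (1 - p)      # truly negative cases relabelled positive in both
    return tp + flip_neg, fn - flip_pos, fp - flip_neg, tn + flip_pos

if __name__ == "__main__":
    # Prevalence 0.1 with a reference containing 10% error (TP, FN, FP, TN):
    print(apparent_independent(0.8, 0.8, 0.9, 0.9, 0.1))   # (90, 90, 170, 650)
    print(apparent_correlated(0.8, 0.8, 0.1, 0.1))          # (170, 10, 90, 730)
```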
The apparent confusion matrices were generated, across the full range of prevalence at 0.05 increments; to avoid complications associated with the extreme values of 0 and 1.0 the actual start and end points of the prevalence scale were 0.01 and 0.99 respectively.In each analysis, the classification being evaluated against any of the reference standards had the same basic properties with Recall = Specificity = 0.8.Thus, in essence, at each level of prevalence, a confusion matrix was generated using a gold reference standard and accompanied by confusion matrices generated using two imperfect reference standards.A total of six sets of confusion matrices were generated with the imperfect reference data as there were three levels of imperfection (2%, 10% and 18%) for situations in which the errors were independent and then when the errors were correlated.From each apparent confusion matrix a set of standard metrics of accuracy were calculated.These are all apparent values rather than truth as the reference data used to form each apparent confusion matrix are imperfect.The core focus was on the four basic metrics of accuracy (Recall, Precision, Specificity and NPV) and then four important and widely used metrics.The latter were the apparent values of J (suggested as an alternative to MCC if imbalance issues are a concern as claimed to be independent of prevalence), F 1 , MCC and prevalence.Other metrics can, of course, be estimated and as an example results for LRs will also be presented.Some example confusion matrices will also be provided to aid readers wishing to explore other metrics and issues beyond the scope of this article. Finally, one further set of simulations was undertaken to help illustrate a specific situation in which the MCC is expected to be over-estimated and complicate the interpretation of a relatively high score.This additional set of simulations was focused on evaluation of an unquestionably poor classification using an imperfect reference standard.The classification to be evaluated had Recall = Specificity = 0.5 and J = 0, values that would be obtained from an unskilled or coin-tossing classifier.The imperfect reference data contained 30% correlated error (i.e.Recall = Specificity = 0.7).This is a less accurate reference data set than used in the other simulations discussed above but still in the range of values reported in the literature.Thus, in this scenario a dreadful classification is assessed relative to an imperfect but still realistic reference data set. The supporting information files S1-S3 Tables contain the components of the confusion matrices generated in the scenarios reported.The latter information allows the calculation of all of the accuracy metrics used from the equations provided in section 1.2. Results Confusion matrices generated using a gold standard reference (Recall = Specificity = 1.0) and imperfect reference standards with independent and correlated errors (both with Recall = Specificity = 0.9) for two levels of prevalence are shown in Fig 2. 
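The simulation design just described can be summarised in a short sketch (an illustration under the stated scenario, not the authors' code): sweep prevalence from 0.01 to 0.99 and compare the true MCC of a Recall = Specificity = 0.8 classification with its apparent MCC under a reference containing 10% independent error.

```python
# Illustrative prevalence sweep; apparent cells follow the reconstructed Eqs 16-19,
# and setting the reference Recall/Specificity to 1.0 recovers the true matrix.
import math

def apparent_cells(recall_c, spec_c, recall_r, spec_r, p, n=1000):
    q = 1.0 - p
    tp = n * (p * recall_c * recall_r + q * (1 - spec_c) * (1 - spec_r))
    fp = n * (p * recall_c * (1 - recall_r) + q * (1 - spec_c) * spec_r)
    fn = n * (p * (1 - recall_c) * recall_r + q * spec_c * (1 - spec_r))
    tn = n * (p * (1 - recall_c) * (1 - recall_r) + q * spec_c * spec_r)
    return tp, fn, fp, tn

def mcc(tp, fn, fp, tn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

prevalences = [0.01] + [round(0.05 * k, 2) for k in range(1, 20)] + [0.99]
for p in prevalences:
    true_mcc = mcc(*apparent_cells(0.8, 0.8, 1.0, 1.0, p))      # gold standard reference
    apparent_mcc = mcc(*apparent_cells(0.8, 0.8, 0.9, 0.9, p))  # 10% independent reference error
    print(f"prevalence={p:.2f}  true MCC={true_mcc:.3f}  apparent MCC={apparent_mcc:.3f}")
```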
The relationship between the apparent accuracy indicated by the four basic metrics (Recall, Precision, Specificity and NPV) and prevalence, generated from the use of three imperfect reference data sets of differing accuracy associated with the inclusion of independent errors, is shown in Fig 3. Fig 7 shows the variation in true and apparent MCC with prevalence for the poor classification (Recall = Specificity = 0.5, and hence J = 0) assessed relative to an imperfect reference standard containing correlated errors (Recall = Specificity = 0.7, and hence J = 0.4). Finally, to illustrate the effects of variations in prevalence and reference data quality on other metrics that are founded upon the core set of metrics discussed, the LR+ and LR−, which are based on Recall and Specificity, were also calculated.

Discussion

For any simulated situation constructed, variation in prevalence and the use of an imperfect reference standard could make substantial changes to the confusion matrix from the truth. Consequently, the values of accuracy metrics and prevalence calculated from an apparent confusion matrix could deviate from the true situation. Fig 2 summarises the key issues at two levels of prevalence: 0.1 and 0.3. In both cases, the classes were imbalanced, with positive cases rarer than negatives. Note that from the scenario used to drive the simulations, the use of a gold standard reference would show Recall = Specificity = 0.8 (and J = 0.6). However, it is evident that the dissimilarities between the true and apparent matrices can be substantial. Critically, aside from the row marginal values (TP+FP and FN+TN) and total sample size (N), all of which were fixed, the value for every other element of the confusion matrix and the column marginal values could change with variation in prevalence and the use of an imperfect reference standard. It was, therefore, unsurprising that the values for the accuracy metrics calculated from the apparent confusion matrices differed from the truth. In the limited example provided in Fig 2 it is evident that the magnitude of mis-estimation varied greatly between the various metrics calculated. For some accuracy metrics, the magnitude of mis-estimation was very small. For example, the Specificity when prevalence was 0.1 and the errors independent was calculated to be 0.79, while the true value was 0.80 (Fig 2A). Conversely, the value of some other metrics was greatly mis-estimated. For example, the Precision when the prevalence was 0.1 and the errors correlated was 0.65, while the true value was less than half of this value at 0.31 (Fig 2A). With a prevalence of 0.1 (100 positive and 900 negative cases for N = 1,000) and a reference standard containing 10% error, 10 of the truly positive cases would be allocated to the negative class with the remaining 90 cases labelled positive, while 90 of the truly negative cases would be labelled positive. Hence, the column marginal values became 100-10+90 = 180 and 900+10-90 = 820 for the positive and negative class respectively. Consequently, an initial impact of the use of the imperfect reference was that the apparent prevalence rose from the true value to 0.18. The distribution of cases in the confusion matrix differed between the situations in which the errors were independent and correlated. Consequently, the accuracy metrics calculated from the confusion matrices associated with the use of the imperfect reference depended on the nature of the errors it contained.
The differences between Fig 2A and 2B illustrated a sensitivity of the apparent accuracy metrics to variation in prevalence.Note that claims that a key set of metrics, namely Recall, Specificity, J and MCC, are independent of prevalence holds when a gold standard reference was used but is untenable when an imperfect reference standard was used.The magnitude of the apparent accuracy assessed with all four of these metrics was underestimated when the errors in the reference standard and classification were independent of each other.Conversely, the magnitude of each of these four metrics was over-estimated when the errors in the reference standard and classification were correlated.The differences between the two apparent confusion matrices generated with the use of the imperfect reference standards (e.g. in the classification, the impacts of the use of this imperfect reference on the confusion matrix can be illustrated following the discussion in [30].When the classifier was applied to the 90 truly positive cases, 72 (90x0.8)were labelled positive and the remaining 18 labelled as negative.For the 90 truly negative cases that were incorrectly labelled positive in the reference set 72 (90x0.8)remained labelled negative with the other 18 labelled positive.The net result of this situation was that TP 0 = 72+18 = 90 and FN 0 = 18+72 = 90.Similarly, of the 10 truly positive cases allocated to the negative class 8 (10x0.8)remained in the negative class with the remaining 2 cases labelled positive.Of the 810 cases truly negative cases 648 (810x0.8)remained as labelled negative with the other 162 labelled positive.Thus FP 0 = 8+162 = 170 and TN 0 = 2+-648 = 650; the apparent confusion matrix values could also be calculated using Eqs 16-19.The cases were distributed differently within the confusion matrix when the errors in the reference standard were correlated with those in the classification. The distribution of cases in the apparent confusion matrix generated when the imperfect reference standard contained errors correlated with those in the classification can be illustrated following the discussion in [16].Maintaining a focus on the situation depicted in use of a reference containing correlated error is to alter the distribution of cases in the confusion matrix from the truth.While the column marginal values remained the same as in the situation when the errors were independent the distribution of cases within the confusion matrix could differ greatly.With correlated errors, the 90 truly negative cases that were labelled positive in both the reference and classification inflated TP' relative to the true value.Specifically, TP 0 = 80+90 = 170.Similarly, the 10 truly positive cases that were labelled negative in both the reference and classification inflated TN 0 ; TN 0 = 720+10 = 730.Since the column marginal values are fixed at 180 and 820 for the positive and negative cases the values for FP 0 and FN 0 could be calculated.The net effect of this situation was that Recall and Specificity were over-estimated.Additionally, as captured in the differences between Fig 2A and 2B, the magnitude of mis-estimation varied with prevalence.Variation in the apparent values of the set of key accuracy metrics and prevalence over the full range of prevalence is explored below. 
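The cell-by-cell arithmetic in the two preceding paragraphs can be checked directly; the short sketch below (an illustration, not the authors' code) reproduces the quoted counts for the prevalence 0.1 scenario.

```python
# Prevalence 0.1, N = 1000: 100 truly positive, 900 truly negative cases.
# Reference: Recall = Specificity = 0.9; classifier: Recall = Specificity = 0.8.

# Independent errors: the two labelling processes cross within each true class.
tp = 100 * 0.8 * 0.9 + 900 * 0.2 * 0.1      # 72 + 18  = 90
fn = 100 * 0.2 * 0.9 + 900 * 0.8 * 0.1      # 18 + 72  = 90
fp = 100 * 0.8 * 0.1 + 900 * 0.2 * 0.9      #  8 + 162 = 170
tn = 100 * 0.2 * 0.1 + 900 * 0.8 * 0.9      #  2 + 648 = 650
assert [round(x) for x in (tp, fn, fp, tn)] == [90, 90, 170, 650]

# Correlated errors: the reference's (fewer) errors fall on cases the classifier
# also mislabels, so cases wrong in both inflate the apparent TP and TN.
tp_c = 100 * 0.8 + 900 * 0.1                # 80 + 90  = 170
tn_c = 100 * 0.1 + 900 * 0.8                # 10 + 720 = 730
fn_c, fp_c = 180 - tp_c, 820 - tn_c         # column marginals fixed at 180 and 820
assert [round(x) for x in (tp_c, fn_c, fp_c, tn_c)] == [170, 10, 90, 730]
```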
The magnitude of accuracy metrics beyond the set reported could also be expected to vary between scenarios.As one example, Accuracy (Eq 1) can be calculated from the confusion matrices shown in Fig 2 .While the scenario adopted, in which the classification being assessed always had Recall = Specificity = 0.8 and hence Accuracy remains constant as prevalence varies, it is evident that the use of an imperfect reference resulted in Accuracy being mis-estimated.Specifically, while the simulation approach adopted ensures that the true value was always 0.80 the apparent values differ.With correlated errors, the Accuracy was over-estimated (0.90) while it was under-estimated if the errors were independent (0.74). The nature of the difference between apparent and true values varied as a function of prevalence and the magnitude and nature of reference data error.If the errors in the reference standard were independent it is evident that the apparent value for all four basic metrics of accuracy varied with prevalence (Fig 3).A key feature to note is that Recall and Specificity were no longer independent of prevalence and their magnitudes were under-estimated.Indeed, with the use of an imperfect reference standard containing independent errors, the apparent values for Recall and Specificity varied greatly with prevalence and were modulated by the magnitude of reference data error.Recall was substantially underestimated at low prevalence while Specificity was substantially under-estimated at high prevalence and the magnitude of mis-estimation was positively related to the size of the error in the reference data. If attention was focused on the Precision and NPV, the apparent values of these metrics varied greatly with both prevalence and the magnitude of reference data error (Fig 3).For both Precision and NPV, the apparent values were under-estimated over part of the scale of prevalence and over-estimated for the remainder.The point of transition from under-to over-estimation occured at (1-Specificity R )/(2-Recall R -Specificity R ) for Precision and (1-Recall R )/ (2-Recall R -Specificity R ) for the NPV [16].The magnitude of mis-estimation was again positively related to the amount of error in the reference standard used.The apparent values of J, F 1 , MCC and prevalence all also varied with prevalence and the magnitude of reference data error (Fig 4).Since Recall and Specificity lost independence of prevalence due to the use of an imperfect reference standard so too did J.Indeed, the apparent value of J varied notably at the extremities of the scale of prevalence and for this metric the magnitude was always under-estimated.The estimates of apparent F 1 and prevalence also varied with prevalence and the magnitude of error-contained in the reference data set.Over most of the scale of prevalence, F 1 was underestimated.The apparent values of the MCC varied with prevalence, being particularly low at extreme values of prevalence.Again, the magnitude of mis-estimation was positively related to the degree of imperfection in the reference standard used.Of particular concern to this article is that very low, near zero, values for the MCC were obtained for a 'good' classification (Recall = Specificity = 0.8) if an imperfect reference standard was used and the data set was highly imbalanced.Such low values for the MCC could lead to the inappropriate decision to disregard the classification as being of insufficient accuracy when its actual accuracy could be adequate for the intended purpose.Finally, the 
apparent prevalence was linearly related to the actual prevalence and changed from over-to under-estimation as the actual prevalence increased.(Fig 4).At low prevalence, the apparent prevalence was substantially over-estimated (e.g. with 18% independent error, a prevalence of 0.01 was mis-estimated to be over 18 times too high).Conversely, at high prevalence the opposite trend was noted with prevalence under-estimated. Different trends in the mis-estimation of accuracy metrics were observed when the reference standard contained correlated rather than independent errors.With the use of an imperfect reference standard containing correlated errors, all of the evaluated metrics of accuracy were over-estimated (Figs 5 and 6). For the four basic metrics that are often calculated (Recall, Precision, Specificity and NPV), the effect of using an imperfect reference standard with correlated errors was to generate optimistically biased estimates across the entire scale of prevalence and with the magnitude of misestimation positively related to the degree of error in the reference standard used (Fig 5).As with the case of independent errors, Recall and Specificity varied with prevalence if an imperfect reference was used.The magnitude of the mis-estimation for the four accuracy metrics was most marked for Precision and NPV at extreme levels of prevalence.Precision was overestimated at low prevalence and NPV over-estimated at high prevalence. Given the changes to the confusion matrix associated with changes in prevalence and/or reference data error it was unsurprising that other metrics that in some way build on them were impacted.Fig 6 shows the apparent values for four key metrics often calculated: J, F 1 , MCC and the prevalence.Since the magnitude of Recall and Specificity were no longer independent of prevalence, J too varied with prevalence, especially at the extreme values of the scale of prevalence.The two popular measure of F 1 and MCC also showed substantial dependency on prevalence.In all cases the magnitude of J, F 1 and MCC were over-estimated relative to the truth, notably at one or both extremities of the scale of prevalence.A key issue to note is that very high values of MCC, up to 0.96, were observed with the use of the least accurate reference data set.Finally, the prevalence itself, which may be the key property a study seeks to estimate, was also substantially mis-estimated.The trend for apparent prevalence was the same as that observed with the use of an imperfect reference containing independent error. The over-estimation of apparent MCC arising through the use of an imperfect reference standard containing correlated error was also evident in the analyses based on the unquestionably poor classification (Recall = Specificity = 0.5, J = 0).For this classification, the use of a gold standard reference would result in MCC = 0.The apparent MCC values, however, were substantially over-estimated, with apparent values of up to 0.65 observed, with a relatively small degree of variation over the range of prevalence (Fig 7).Critically, a relatively high apparent MCC value could be obtained from an unquestionably poor classification. 
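The behaviour reported for the deliberately poor classifier can be reproduced with the same construction. The brief sketch below is again an illustration rather than the authors' code; it shows how an apparent MCC of roughly 0.60-0.65 emerges across the prevalence range even though the true MCC is zero.

```python
import math

def mcc(tp, fn, fp, tn):
    return (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

# Coin-toss classifier (Recall = Specificity = 0.5) against a reference with
# 30% correlated error (Recall = Specificity = 0.7).
n, se_c, sp_c, se_r, sp_r = 1000, 0.5, 0.5, 0.7, 0.7
for prev in (0.01, 0.1, 0.3, 0.5):
    pos, neg = n * prev, n * (1 - prev)
    tp = pos * se_c + neg * (1 - sp_r)      # cases wrong in both land in the apparent TP...
    tn = pos * (1 - se_r) + neg * sp_c      # ...and in the apparent TN
    fn = pos * (se_r - se_c)
    fp = neg * (sp_r - sp_c)
    print(f"prevalence={prev:.2f}  true MCC=0.00  apparent MCC={mcc(tp, fn, fp, tn):.2f}")
```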
Variation in prevalence and impacts arising from the use of an imperfect reference standard would be expected with other popular metrics of accuracy.For example, popular approaches based on the receiver operating characteristics curve or the precision-recall curve are based on the set of basic accuracy metrics and hence would also be impacted by variations in prevalence and reference data imperfections.Similarly, the LRs which are based on Recall and Specificity would be expected to vary with prevalence even though they are often claimed to be unaffected by it.This latter issue is illustrated in Fig 8 as an example of impacts on metrics beyond the core set assessed here.It was evident that the LRs varied with prevalence and the magnitude of mis-estimation was positively related to the amount of error in the reference standard used.The LRs under-estimated the quality of the classification when the errors in the reference standard were independent.Conversely, the LRs over-estimated the quality of the classification when the imperfect reference standard contained correlated errors. Of key relevance to this paper in relation to the use of the MCC was that a very low or very high apparent MCC could be observed for a 'good' classification depending on the level of prevalence and the quality and nature of the reference standard used.Thus, for example, a high MCC score could potentially arise from a modest or even poor classification.Alternatively, a low apparent MCC value may not reflect the actual status of a 'good' classification.The use of apparent MCC values may unjustifiably lead researcher to believe a classification to be of very different quality to the true situation.The comment that in relation to the four basic metrics (Eqs 3-6) the "MCC generates a high score only if all four of them are high" [15, page 13] may be true if implicit assumption of the use of a gold reference standard holds but sadly this may often not be the case.Naïve use of the apparent MCC such as in direct comparison against some popular threshold value or against values from another classification analysis with different properties (e.g.prevalence) may lead to inappropriate and incorrect interpretation of classifications and incorrect decision making.Just as other accuracy metrics which have been over-sold [13,60], the MCC has limitations which can result in mis-leading interpretations of classification quality.Indeed some researchers deliberately stress that they do not endorse the use of the MCC [61]. 
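As discussed in the next paragraph, apparent values can in principle be corrected when the quality of the reference standard is known. The sketch below illustrates one such correction for the independent-error case only; it is closely related in spirit to the matrix-based corrections cited in the text [16,30] but is an illustrative reconstruction rather than the exact published equations, and it assumes Recall_R = Specificity_R = 0.9 is known.

```python
import numpy as np

# Reference misclassification matrix: rows = true class (+, -), columns = reference label (+, -).
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# Apparent counts from Fig 2A (independent errors): rows = classifier label (+, -),
# columns = reference label (+, -).
apparent = np.array([[90., 170.],
                     [90., 650.]])

# Within each classifier row, apparent counts = true counts @ M, so invert M.
corrected = apparent @ np.linalg.inv(M)
print(corrected)     # recovers the true matrix [[80, 180], [20, 720]]
```

Applying the inverse of the reference's misclassification matrix within each classifier row recovers the true Fig 2A counts exactly; no equally clean inversion is available when the errors are correlated, which is why that case is harder to correct.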
The fundamental concerns about the effects of variations in prevalence and imperfections in the reference standard on accuracy estimation are well known, and many have called for these issues to be recognised and addressed. Rather than naively using apparent values, researchers are instead encouraged to correct the assessment and estimate true values for accuracy metrics and prevalence [2,16,29,30]. In some situations, such as when the reference standard contains independent errors but is of known quality, simple equations may be used to obtain the true values for accuracy metrics and prevalence [16,30]. If the quality of the reference standard is not fully known, it may also sometimes be possible to estimate it, allowing the generation of truer values [62]. While correction for independent errors is easier than for correlated errors, means to estimate truer values exist [16,17]. In addition, for both independent and correlated errors, it is possible to effectively construct a reference standard, perhaps by using the outputs of multiple classifications in a latent class analysis to estimate properties such as the Recall and Specificity of the classifications [20,63,64]. It is important that the calls to address the challenges associated with prevalence and the use of imperfect reference data lead to change in the way classifications are routinely assessed and used. Researchers need to avoid bad habits such as the routine and unquestioning use of inappropriate metrics [60], especially if these are subject to mis-estimation due to commonly encountered challenges.

Finally, this article has focused on issues connected with classification accuracy assessment, but the challenges associated with variations in prevalence and reference data error also impact upon other aspects of a classification analysis. Prevalence and reference data error also affect activities such as the training of supervised classifications and classifier development. The training of machine learning methods, for example, is impacted greatly by class imbalances [65], and hence the composition of the sample used in cross-validation should be carefully selected, perhaps with regard to class abundance in the population under study [66]. Error in the reference standard can also degrade training data and ultimately classification performance and accuracy [67]. Further complications to accuracy assessment can arise if other fundamental assumptions that underlie the analysis are unsatisfied. The conventional confusion matrix, for example, cannot be formed if the assumption that each case belongs fully to a single class is untenable. In such circumstances, a soft or fuzzy approach to accuracy assessment is required [68-70].

Conclusions

Classification analyses are widely used in a diverse array of disciplines. A fundamental issue in the use of a classification is its quality, which is typically assessed via analysis of a confusion matrix that cross-tabulates the labels predicted by the classification against those obtained from a reference standard for a sample of cases. The calculation of accuracy metrics and associated variables such as prevalence from the confusion matrix is, however, fraught with challenges.
The literature promotes a wide range of accuracy metrics and a common concern is that each typically only provides partial information on classification quality.Additionally, the magnitude of some metrics may vary with prevalence that is not a property of the classification but of the population under study.It is common to see researchers encouraged to use one or more metrics that are claimed to be prevalent independent (e.g.Recall, Specificity and J).Such metrics are widely used but do not fully capture the entire quality of a classification.Recent literature has encouraged the use of the MCC.The latter has been claimed to be a more truthful accuracy metric than other popular methods.Additionally, if prevalence is a concern with the use of the MCC it has been suggested that J could be used to summarise key aspects of classification quality.As with other accuracy metrics, however, important challenges arise in real world applications.An uncomfortable truth is that the reference standard used to assess accuracy is often imperfect and hence a fundamental assumption in classification accuracy assessment that is often made implicitly is unsatisfied.This is well known but rarely checked or addressed. Here, the effect of variations in prevalence and reference standard error are shown to substantially impact on the assessment of classification accuracy and calculation of properties such as class abundance.Accuracy metrics such as Recall, Specificity and J lost their independence of prevalence when an imperfect reference standard was used in the accuracy assessment.Indeed, all of the accuracy metrics included in this article (Recall, Precision, Specificity, NPV, J, F 1 , LR+, LR-and MCC) were sensitive to variations in prevalence and use of an imperfect reference standard.Critically, the estimated values of accuracy metrics deviated substantially from the true values that would be obtained by the use of a gold standard reference. The magnitude and direction of mis-estimation was notably a function of prevalence and the size and nature of the imperfections in the reference standard.For example, the four basic metrics of accuracy (Recall, Precision, Specificity and NPV) were all under-estimated when the reference standard contained errors that were independent of the classification.However, when the errors in the reference standard were correlated, that is tending to err on the same cases the classifier mis-labelled, the opposite trend was observed with the values of all of the accuracy metrics over-estimated. Of particular importance, however, is that the MCC displayed undesirable properties.It was possible for the apparent value of the MCC to be substantially under-or over-estimated as a result of variations in prevalence and/or use of an imperfect reference.Critically, a high value for the MCC could be obtained from a poor classification, notably if the reference standard used was inaccurate with errors correlated with those in the classification under evaluation.This observation runs contrary to arguments promoting the MCC that contend that a high value is only possible when the classification has performed well on both classes.Such arguments are founded on unrealistic conditions (e.g. on the use of a gold standard reference).In reality, the magnitude of the MCC is influenced greatly by class imbalance and reference data imperfections. 
Real world challenges to accuracy assessment need to be addressed.Many of the fundamental issues are well known but rarely acted on.Researchers should recognise the problems and take action to address them such as by estimating true values.This may require a culture change as methods and practices often seem to be fixed firmly and research communities resistant to change.However, if classifications are to be evaluated and used appropriately the routine use of inappropriate metrics and mis-interpretation of apparent values must stop.Readers of published studies based on a classification analysis should also interpret results and associated interpretations with care, especially if no explicit account is made for the effects of prevalence and reference data error.This is not to question the integrity of the authors of a study but simply to recognise the possible impacts arising from a failure to satisfy an underlying assumption in an accuracy assessment.Fortunately, a range of methods exist to address the problems that arise from class imbalance and use of an imperfect reference in order to allow an enhanced evaluation of classifications [20,63,71]. Fig 1 . Fig 1.The binary confusion matrix with positive (+) and negative (-) classes.Note in the format shown, the labelling provided by the reference data are in the columns and the results of the classifier being assessed are in the rows.Abbreviations used are defined in the text.https://doi.org/10.1371/journal.pone.0291908.g001 Fig 3 . The associated relationships for the apparent values of J, F 1 , MCC and prevalence with prevalence are shown in Fig 4. The true values for each metric arising from the use of a gold reference standard are plotted in Figs 3 and 4 for comparison.Throughout, the classification being assessed had Recall = Specificity = 0.8.The relationship between the apparent accuracy and prevalence for the four basic metrics of classification accuracy generated from the use of three imperfect reference data sets of differing accuracy associated with inclusion of correlated errors is shown in Fig 5.The associated relationships for the apparent values of J, F 1 , MCC and prevalence with prevalence are shown in Fig 6.Again, the true values for each metric arising from the use of a gold reference standard are plotted in Figs 5 and 6 for comparison.Throughout, the classification being assessed had Recall = Specificity = 0.8. Fig 8 illustrates the variation in apparent LR+ and LR-values with prevalence obtained with the use of the reference standards of varying quality. Fig 2A and 2B show variation in confusion matrix values and metrics derived from them due to the difference in prevalence.Moreover, in both Fig 2A and 2B the use of an imperfect reference resulted in the apparent accuracy values deviating from the truth, the magnitude and direction of which differed between situations in which the errors were correlated or independent. 
Fig 2 provides a basis to illustrate some of the key impacts of reference data error and variation in prevalence on the confusion matrix and the metrics calculated from it.In the scenario in Fig 2A, the actual prevalence is 0.1.If, as shown in Fig 2A, an imperfect reference standard(Recall = Specificity = 0.9) was used the way the sample of cases was distributed in the matrix was changed from the true situation.Specifically, the values in all four matrix elements and the column marginal values could change; the row marginal values and total sample size were fixed.As Fig2Ashows, the use of the imperfect reference resulted in 90 (calculated from 100x0.9) of the 100 cases that truly were positive being labelled positive with the remaining 10 cases labelled as negative.Similarly, 810 (900x0.9) of the cases that truly were negative would Fig 3 . Fig 3. Relationships between apparent Recall, Precision, Specificity and Negative Predictive Value with prevalence assessed using three imperfect reference standards of differing quality that contain errors independent of those in the classification.The true relationship, obtained using a gold standard reference, is also shown for comparative purposes with a dashed black line.https://doi.org/10.1371/journal.pone.0291908.g003 Fig 4 . Fig 4. Relationships between apparent J, F 1 , MCC and prevalence with prevalence assessed using three imperfect reference standards of differing quality that contain errors independent of those in the classification.The true relationship, obtained using a gold standard reference, is also shown for comparative purposes with a dashed black line.https://doi.org/10.1371/journal.pone.0291908.g004 Fig 2A) arose because of the way in which cases are distributed within them.The accuracy of the classification being assessed in Fig 2A can be summarised by Recall = Specificity = 0.8.With the errors in the reference standard being independent of those Fig 5 . Fig 5. Relationships between apparent Recall, Precision, Specificity and Negative Predictive Value with prevalence assessed using three imperfect reference standards of differing quality that contain errors correlated with those in the classification.The true relationship, obtained using a gold standard reference, is also shown for comparative purposes with a dashed black line.https://doi.org/10.1371/journal.pone.0291908.g005 Fig 2A, the Fig 6 . Fig 6.Relationships between apparent J, F 1 , MCC and prevalence with prevalence assessed using three imperfect reference standards of differing quality that contain errors correlated with those in the classification.The true relationship, obtained using a gold standard reference, is also shown for comparative purposes with a dashed black line.https://doi.org/10.1371/journal.pone.0291908.g006 Fig 7 . Fig 7. Relationship of apparent MCC with prevalence for a poor classification (Recall = Specificity = 0.5, J = 0) assessed with an imperfect reference (Recall = Specificity = 0.7) containing correlated errors.The true relationship, obtained using a gold standard reference, is also shown for comparative purposes with a dashed black line.https://doi.org/10.1371/journal.pone.0291908.g007 Fig 8 . Fig 8. 
Relationship between apparent LR+ and LR- values with prevalence assessed using three imperfect reference standards. (a) Error in the reference is independent of that in the classification and (b) error in the reference is correlated with that in the classification. Note that in panel (b) the Y axis for the positive LR was trimmed for visualisation purposes; the apparent value obtained for LR+ rises to 909.2. The true relationship, obtained using a gold standard reference, is also shown for comparative purposes with a dashed black line. https://doi.org/10.1371/journal.pone.0291908.g008
Prognostic Value of Dual-Energy CT-Based Iodine Quantification versus Conventional CT in Acute Pulmonary Embolism: A Propensity-Match Analysis Objective The present study aimed to investigate whether quantitative dual-energy computed tomography (DECT) parameters offer an incremental risk stratification benefit over the CT ventricular diameter ratio in patients with acute pulmonary embolism (PE) by using propensity score analysis. Materials and Methods This study was conducted on 480 patients with acute PE who underwent CT pulmonary angiography (CTPA) or DECT pulmonary angiography (DE CT-PA). This propensity-matched study population included 240 patients with acute PE each in the CTPA and DECT groups. Altogether, 260 (54.1%) patients were men, and the mean age was 64.9 years (64.9 ± 13.5 years). The primary endpoint was all-cause death within 30 days. The Cox proportional hazards regression model was used to identify associations between CT parameters and outcomes and to identify potential predictors. Concordance (C) statistics were used to compare the prognoses between the two groups. Results In both CTPA and DECT groups, right to left ventricle diameter ratio ≥ 1 was associated with an increased risk of all-cause death within 30 days (hazard ratio: 3.707, p < 0.001 and 5.573, p < 0.001, respectively). However, C-statistics showed no statistically significant difference between the CTPA and DECT groups for predicting death within 30 days (C-statistics: 0.759 vs. 0.819, p = 0.117). Conclusion Quantitative measurement of lung perfusion defect volume by DECT had no added benefit over CT ventricular diameter ratio for predicting all-cause death within 30 days. INTRODUCTION parameters have been proposed as potential predictors of PE severity and clinical outcome. Currently, the CT ventricular diameter (VD) ratio is a well-established and widely used prognostic indicator in patients with acute PE. A previous meta-analysis demonstrated that the quantitative CT parameter of right to left ventricle diameter ratio greater than 1 showed the strongest predictive ability and most robust evidence for an adverse clinical outcome in patients with acute PE (3). Dual-energy computed tomography (DECT) has been used in the diagnosis and evaluation of PE, and recent studies have shown that quantitative parameters of DECT are helpful in predicting the clinical outcome of patients with PE (4)(5)(6)(7)(8)(9)(10)(11)(12)(13). A previous study demonstrated that DECT perfusion imaging could display pulmonary perfusion defects with good agreement to scintigraphic findings (5). kjronline.org Several studies have described the functional relevance of perfusion defects (PDs) detected on DECT, and studies have shown that the extent of PDs measured with DECT correlates with an adverse clinical outcome in patients with PE (9,10). Studies demonstrating the clinical utility of PDs using DECT have been published, but there is little evidence for the additional risk stratification benefit of the CT VD ratio in patients with acute PE (8)(9)(10)(11). The purpose of the present study was to investigate whether quantitative DECT parameters offer incremental risk stratification benefits over the CT VD ratio in patients with acute PE by using a propensity score analysis. Patient Population This single-center, propensity score-matched study compared the predictive value of quantitative DECT parameters and CT VD ratio in patients with acute PE. 
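The propensity-score procedure used in this study (a covariate-based propensity model, 1:1 nearest-neighbour matching, and balance checked by absolute standardized mean differences with a target of 0.1 or less) is detailed in the Materials and Methods that follow. The study's analyses were performed in R; the Python sketch below uses entirely hypothetical variable names and simulated data and is only meant to illustrate the logic, not to reproduce the authors' implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "age": rng.normal(65, 13, n),
    "male": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "cancer": rng.integers(0, 2, n),
    "group": rng.integers(0, 2, n),        # 1 = DECT, 0 = CTPA (hypothetical labels)
})
covariates = ["age", "male", "hypertension", "diabetes", "cancer"]

# 1) Propensity score: probability of being in the DECT group given the covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["group"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated = df[df["group"] == 1].sort_values("ps")
controls = df[df["group"] == 0].copy()
pairs = []
for idx, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    pairs.append((idx, j))
    controls = controls.drop(j)
matched = df.loc[[i for pair in pairs for i in pair]]

# 3) Balance check: absolute standardized mean difference (target <= 0.1).
def smd(a, b):
    return abs(a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)

for c in covariates:
    g1 = matched.loc[matched["group"] == 1, c]
    g0 = matched.loc[matched["group"] == 0, c]
    print(f"{c:>12}: SMD = {smd(g1, g0):.3f}")
```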
Institutional Review Board approval was obtained, and the requirement for informed consent was waived for this retrospective propensity score-matched study. All consecutive patients who underwent CT pulmonary angiography (CTPA) or DECT pulmonary angiography (DE CT-PA) and were suspected to have acute PE between January 2015 and December 2017 were considered potentially eligible for this analysis. Among 3419 patients (CTPA group, n = 2045, DECT group, n = 1374), the following patients were excluded: patients with negative CT results (n = 2486), patients who did not clinically or radiologically meet the criteria for acute PE (n = 47), those in whom DECT or CT was performed as a follow-up CT examination after receiving anticoagulation therapy (n = 105), and those for whom CT image quality was insufficient or CT image data were not available (n = 21). Finally, 484 patients (23.6%) who were diagnosed with acute PE by CTPA and 276 patients (20.1%) who were diagnosed with acute PE by DECT were recruited for the present study (Fig. 1). Patient clinical information, including age, sex, and medical history (hypertension, diabetes mellitus, smoking, heart disease [including congenital heart disease, coronary artery disease, myocardial infarction, valvular heart disease, heart failure, arrhythmia and cardiomyopathy], chronic obstructive pulmonary disease [COPD], pneumonia, history of cancer, history of deep vein thrombosis [DVT]), was recorded based on patient medical records. To reduce potential selection bias related to the use of a non-randomized cohort to generate two groups (CTPA and DECT groups) with comparable characteristics, propensity score-matched analyses were performed (14). The following variables were used to develop the propensity score and create a well-matched control group: age, sex, hypertension, diabetes mellitus, smoking, heart disease, COPD, pneumonia, history of cancer, and history of DVT. The balance of covariates between the groups was assessed by the absolute standardized mean difference before and after the matching procedure. An absolute standardized mean difference of 0.1 or less indicates balanced covariates between the two groups (15). The propensity-matched study population included 240 patients with acute PE in the CTPA group and 240 patients with acute PE in the DECT group. Altogether, 260 (54.1%) were men, and the mean age was 64.9 years (64.9 ± 13.5). CT Examination CTPA was performed for all participants by using a 64or 128-channel CT system (Revolution EVO, GE Healthcare, Chicago, IL, USA or Somatom Definition AS, Siemens Healthineers, Forchheim, Germany), and DE CT-PA was performed for all participants by using a dual-source CT system (Somatom Definition Flash, Siemens Healthineers). kjronline.org injector. Following contrast injection, 30 mL of saline was administered. During the scan, patients held their breath on inspiration. Pulmonary trunk attenuation was tracked by a bolus-tracking technique. Image acquisition was triggered manually once attenuation in the pulmonary trunk reached 100 Hounsfield units (HU). Radiation exposure was estimated from the dose-length product (DLP). The calculated mean radiation dose was 5.2 mSv (DLP range, 189-903 mGy*cm) based on the scan range and patient body weight. Image Analysis A radiologist with over 10 years of experience in chest CT analysis analyzed the CT data; the radiologist was blinded to patient identities and clinical histories. 
All scans were processed and read using a dedicated workstation equipped with dual-energy post-processing software (Syngo MMWP VE36A, Siemens Healthineers). The weighted average image was approximately 120 kV and was automatically generated from a combination of the 140-kV and 100-kV data used for DE CT-PA. Color-coded iodine maps were merged with the corresponding CT angiographic images with soft tissue settings to create fusion images, allowing simultaneous depiction of occluded PAs and lung perfusion. For quantitative analysis, maximal diameters of the right and left ventricles (RV and LV) were measured on transverse sections by identifying the maximal distance between the ventricular endocardium and the interventricular septum perpendicular to the long axis of the heart (Fig. 2). RV/LV diameter ratios were calculated by dividing the maximum diameters of the RV and LV. PD volume was analyzed and quantified from iodine maps by using dedicated Volume analysis software (version VE36A, Siemens Healthineers). PD attenuation values were measured automatically from -1 to -1024 HU in HU (Fig. 3). Total lung volume was analyzed by Lung Parenchyma Analysis (Syngo InSpace, Siemens Healthineers) and measured automatically by determining the sum of values from 1024 to 1 HU and from -1 to -1024 HU. The trachea and bronchus were excluded by a semiautomatic segmentation technique. The PDs values measured on iodine maps were carefully reviewed and compared to CT findings. PDs related to lung parenchymal abnormalities (e.g., infiltration, effusion, or emphysema) were manually excluded. The relative perfusion defect kjronline.org volume (RelPD%) was calculated as follows: RelPD% = PD volume / total lung volume x 100. To assess the interobserver agreement for quantitative measurements, 50 of 240 patients in the DECT group were randomly selected and an independent reviewer with over 5 years of experience in chest CT analysis measured the PD volume and ventricular ratios. Clinical Outcome Clinical outcome data were obtained via a review of the electronic medical records or by telephone contact from a dedicated research nurse who was blinded to the CT results. The primary endpoints of the present study were death within 30 days from any cause. Patient death status was ascertained by querying the National Health Insurance Corporation. Statistical Analysis An analytic sample was created using propensity scorebased matching to correct for differences in patient characteristics in the two groups. Propensity score matching was conducted in a 1:1 ratio by nearest neighbor matching. The adequacy of the propensity model was confirmed by checking the covariate balance before and after matching. Comparisons between the CTPA and DECT groups were performed. The differences between categorical variables were analyzed by chi-squared test or Fisher's exact test. The differences between continuous variables were analyzed by the Shapiro-Wilk test or Mann-Whitney U test. A Cox proportional hazards regression model was used to identify associations between CT parameters and outcomes and to identify potential predictors. Only variables with p values less than 0.20 in univariate analyses were added to the final multivariate models to prevent model over-fitting. From the Cox proportional hazards model, hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated. Concordance (C) statistics were used to compare the predictive prognosis between the two groups. 
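A minimal illustration of the survival modelling just described (Cox proportional hazards for 30-day all-cause death with the dichotomized RV/LV diameter ratio, summarised by a concordance statistic) is sketched below. It assumes the Python lifelines package and simulated data purely for illustration; the study's own analyses were performed in R, and in practice the same model would be fitted separately in the CTPA and DECT groups so that their C-statistics can be compared.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 240
df = pd.DataFrame({
    "vd_ratio_ge1": rng.integers(0, 2, n),   # RV/LV diameter ratio >= 1 (hypothetical data)
    "pneumonia": rng.integers(0, 2, n),
    "cancer": rng.integers(0, 2, n),
})
# Synthetic survival times: higher hazard when the VD ratio is >= 1 or cancer is present.
hazard = 0.01 * np.exp(1.3 * df["vd_ratio_ge1"] + 0.8 * df["cancer"] + 0.5 * df["pneumonia"])
time = rng.exponential(1.0 / hazard)
df["time"] = np.minimum(time, 30.0)          # administrative censoring at 30 days
df["death30"] = (time <= 30.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death30")
print(cph.hazard_ratios_)                    # hazard ratios; CIs are in cph.summary

# Harrell's C-statistic for the fitted model (also exposed as cph.concordance_index_).
c = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["death30"])
print("C-statistic:", round(c, 3))
```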
Inter-observer agreement was tested using intraclass correlation coefficients (ICCs). A p value < 0.05 was considered statistically significant. All statistical analyses were performed using R (version 3.2.2., R Foundation for Statistical Computing, Vienna, Austria). Baseline Clinical and CT Characteristics Baseline characteristics of the two groups are shown in Table 1. Both groups were matched for baseline variables, and no significant differences were observed for any of the baseline comparisons. In the CTPA group, patients who died showed a higher prevalence of pneumonia and cancer (all p < 0.05). In the DECT group, patients who died showed a higher prevalence of pneumonia, cancer, and DVT (all p < 0.05). Other clinical characteristics were not significantly different between patients who survived and those who died (Table 2). In both groups, VD ratios (1.10 vs. 0.97; p < 0.001 and 1.09 vs. 0.94; p < 0.001) were significantly higher in kjronline.org the death group than in the survival group. In the DECT group, the RelPD% (10.21% vs. 7.73%; p < 0.001) was also significantly higher in the death group than in the survival group (Table 2). Clinical and CT Variables Associated with Outcome During the median follow-up period of 133 days (interquartile range: 35-401 days), there were 35 deaths within 30 days from any cause in the CTPA group and 45 deaths within 30 days from any cause in the DECT group. In univariate analysis using a Cox hazards regression model in the CTPA group, pneumonia and cancer were predictors of all-cause death within 30 days (all, p < 0.05) ( Table 3). Patients with a larger VD ratio (≥ 1 vs. < 1) had a significantly higher risk of death within 30 days (p = 0.001) ( Table 3). In the DECT group, univariate analysis using a Cox hazards regression model revealed that pneumonia, cancer, and DVT were predictors of all-cause death within 30 days (all, p < 0.05). Patients with a larger VD ratio (≥ 1 vs. < 1) and a larger PD volume had a significantly higher risk of death within 30 days (all p < 0.001) ( Table 3). DISCUSSION This study was designed to investigate whether quantitative DECT parameters provide incremental risk kjronline.org stratification benefits over the CT VD ratio in patients with acute PE by using a propensity score analysis. Based on this study, quantitative measurement of lung PD volume by DECT offered no added benefit over CT VD ratio for predicting allcause death within 30 days. Risk stratification for patients with acute PE is important to establish appropriate treatment and management. CT parameters have emerged as prognostic markers to assess the severity of hemodynamic compromise from acute PE and identify patients at heightened risk for fatal or nonfatal adverse events, thus guiding clinical management (3,16,17). For clinical purposes, the RV/LV diameter ratio measured on CT shows the strongest predictive value across all endpoints and provides the most robust evidence for adverse clinical outcomes in patients with acute PE. Many studies have supported that RV dysfunction assessed on CT was associated with an increased risk of early complications, including all-cause death and PE-related serious adverse events (16)(17)(18)(19)(20)(21)(22). In addition, previous metaanalyses have demonstrated that increased RV/LV diameter ratio is the strongest predictor of adverse clinical outcomes in patients with acute PE (3,21,22). This measurement is a simple quantitative value that can be easily measured in axial or 4-chamber images using CT. 
Our results are in agreement with those of previous studies. In our study, right ventricular dysfunction was assessed by CT by using two-dimensional axial transverse images. According to our study, right ventricular dysfunction on CT was an independent predictor of all-cause death within 30 days. DECT has been proposed as a new imaging technique kjronline.org for detecting PE (11,12). A unique feature of DECT is that it allows differentiation of materials based on their energy absorption (4,23). Thus, DECT allows simultaneous assessment of pulmonary vasculature and parenchymal iodine distribution (5,6,24). In the lung, the pattern of iodine enhancement on DECT has been shown to correspond to lung blood volume on planar scintigraphy (5). Several studies have reported that the quantitative values of lung PDs on DECT correlated with right ventricular dysfunction and adverse clinical outcomes (8)(9)(10)(11). A previous study reported that the extent of lung PDs on DECT correlated well with right ventricular dysfunction on CT and death (9). Another study demonstrated that of all evaluated CT parameters, the PD volume measured by DECT showed the highest predictive power for detecting an adverse clinical outcome (10). Conversely, our previous study revealed that lung PDs quantified on DECT had no added benefit in predicting death within 30 days or for predicting PE-related death (11). Based on previous studies, quantitative DECT parameters have potential for use as prognostic makers in acute PE. However, the value of quantitative DECT parameters for prognosis and risk stratification in acute PE is controversial. The heterogeneity of study groups, definitions, and outcomes prohibits consensus on the prognostic performance of DECT. We conducted a propensity score-matched study to compare the predictive value of quantitative DECT parameters and CT VD ratio in patients with acute PE. Propensity score adjustment is a method of balancing the distribution of biases and confounders between groups, thereby increasing between-group comparability. Propensity score analysis is increasingly being applied as a statistical method in observational studies (14). We constructed two models to evaluate the added value of DECT parameters (lung PD volume) in predicting all-cause death within 30 days. Although PDs measured on DECT were associated with an increased risk of death within 30 days, C-statistics showed no statistically significant difference between the two groups (CTPA group and DECT group) in predictive prognosis with respect to predicting death in patients with acute PE. These results suggest that DECT parameters (lung PD) had no added benefit over the simple quantified CT value of VD ratio for predicting death within 30 days in patients with acute PE. Lung perfusion imaging is based on quantification of tissue enhancement at serial time points following contrast administration. Previous studies have demonstrated that the extent of PDs, identified by perfusion scintigraphy, correlated with clinical outcomes in patients with PE (25,26). Consequently, the extent of PDs quantified on DECT is potentially predictive of hemodynamic changes in acute PE. Thus, these are emerging as imaging biomarkers for risk stratification. However, there are several issues regarding quantitative measurements in DECT. First, DECT scans are usually obtained at a single time-point, so DECT provides an iodine distribution map of the lung microcirculation at a given time point (27). 
Therefore, quantitative DECT parameters can vary according to different clinical settings and with different imaging protocols. Second, there is no standardized analytical method for lung PDs using DECT in terms of HU threshold and analytical software. In addition, additional time is required to analyze lung perfusion using special software. Our study has certain limitations. First, this study was conducted at a single center with a modest sample size. In addition, the retrospective nature of this study may be associated with a selection bias. However, we conducted a propensity score-matched study to balance the distribution of biases and confounders between groups. Second, the imaging protocol and analytical method for lung perfusion may have significantly influenced the results. Currently, there is no standardized analytical method for assessing lung PDs using DECT in terms of HU threshold and analytical software. In conclusion, an increased RV/LV diameter ratio was associated with increased risk of all-cause death within 30 days in patients with acute PE. However, quantitative measurement of lung PD volume by DECT offered no added benefit over CT VD ratio in predicting all-cause death within 30 days. Our present data failed to provide an additional benefit of functional lung assessment on DECT for predicting future death in patients with PE. Future large trials with much longer follow-up periods must be performed to estimate the potential influence of DECT findings on treatment strategies and optimize the management and outcome of patients with acute PE. Conflicts of Interest The authors have no potential conflicts of interest to disclose.
Modified multiplicative decomposition model for tissue growth: Beyond the initial stress-free state The multiplicative decomposition model is widely employed for predicting residual stresses and morphologies of biological tissues due to growth. However, it relies on the assumption that the tissue is initially in a stress-free state, which conflicts with the observations that any growth state of tissue is under a significant level of residual stresses that helps to maintain its ideal mechanical conditions. Here, we propose a modified multiplicative decomposition model where the initial state (or reference configuration) of biological tissue is endowed with residual stress instead of being stress-free. Releasing theoretically the initial residual stress, the initially stressed state is first transmitted into a virtual stress-free state, resulting in an initial elastic deformation. The initial virtual stress-free state subsequently grows to another counterpart with a growth deformation and the latter is further integrated into its natural configuration with an excessive elastic deformation that ensures tissue compatibility. With this decomposition, the total deformation may be expressed as the product of elastic deformation, growth deformation and initial elastic deformation, while the corresponding free energy density depends on the initial residual stress and the total deformation. We address three key issues: explicit expression of the free energy density,predetermination of the initial elastic deformation, and initial residual stress. Finally, we consider a tubular organ to demonstrate the effects of the proposed initial residual stress on stress distribution and on shape formation through an incremental stability analysis. Our results suggest that the initial residual stress exerts a major influence on the growth stress and the morphology of tissues. Introduction It has long been recognized that growth, death and all other bio-behaviors of living matter are controlled by a combination of genetic and epigenetic factors including biochemistry, bioelectricity and biomechanics (Cowin, 2004;Fung, 2013). One of the most accepted biomechanical epigenetic factor in living matter may be its internal mechanical stress, which is also called residual stress in unloaded conditions (Cowin, 2006;Eskandari and Kuhl, 2015;Hosford, 2010;Schajer, 2013). It exists in all real living matter such as ripe fruits, tree trunks, blood vessels or solid tumors, and is generally induced by non-uniform plastic deformation, surface modification, material phase changes and/or density changes (Chen and Eberth, 2012;Chuong and Fung, 1983;Schajer, 2013;Stylianopoulos et al., 2012). Though these residual stresses are locally self-equilibrating in mechanics, they still serve some special functions, as they, for example, influence the morphogenesis, growth rate and internal mechanical condition of bio-tissues (e.g. Ben Amar and Goriely (2005); Fung (1991); Li et al. (2011b); Taber (1998)). The first presentation on the relationship between stress and bio-behavior goes back to the German anatomist and surgeon Julius Wolff (1893) who showed that healthy bone creates structural adaptation where external loads Roux (1894) proposed the functional adaptation concept that stress should be regarded as a functional stimulus to growth and remodeling. Later, Fung and collaborators (Chuong and Fung, 1986;Fung, 1991) used the opening angle method to quantify residual stress in arteries. 
An explanation on the origin of residual stresses in living bio-tissues was first presented theoretically by Rodriguez et al. (1994) via the multiplicative decomposition (MD) method. They showed that residual stresses in a biotissue are created by heterogeneous growth and can be calculated from a given growth gradient tensor. The constrained growth deformation is decomposed into unconstrained growth deformation and pure elastic deformation ( Figure 1) with the relation F = F e F g , where F is the total deformation, F e the pure elastic deformation, and F g the growth deformation. From a modeling standpoint, this explanation is concise but powerful enough to predict the growth-induced residual stresses. The MD model was subsequently widely employed to solve many biomechanical problems related to growth process (e.g. Balbi et al. (2015); Du and Lü (2017); Li et al. (2011b); Lü and Du (2016); Stylianopoulos et al. (2012); Wang et al. (2017)). One prerequisite of the MD model is that the reference configuration must be a stress-free state, which ensures that the growth process is under the unconstrained condition. To study the growth process of bio-tissues starting from an arbitrary stage, the initial reference state should be properly defined. As evidenced by cutting experiments, many bio-tissues still exhibit large amounts of residual stresses even when the external loads are removed, as seen with a cut scallion, a duck heart or liver, see Figure 2, and also with cut arteries or weasands (Li et al., 2011b). An approximate method to access the stress-free state is by cutting the material to remove constraints from surrounding tissues, which is the basic idea behind the opening angle method (e.g. Chuong and Fung (1986); Gower et al. (2015); Schajer (2013)). However, the extent to which residual stresses can be released depends significantly on the number and direction of the cuts (Figure 2b). In effect, an entirely stress-free state for a real living body can only be accomplished by an infinite number of cuts to release all residual stresses held by the neighboring regions. In practice, this ultimate discrete state is impossible to reach for real living matter (Schajer, 2013). This suggests that stress-free state may not be achievable for real bio-tissues, and, therefore, the stress-free reference configuration in theoretical modelling may not be appropriate. From this point of view, the conventional MD model (Rodriguez et al., 1994) is inadequate for predicting the growth of biological tissues starting from an arbitrary growth state that is regarded as the stressfree reference configuration. A more practical reference configuration containing residual stresses is needed, a fact which has already been well recognized and achieved in mechanics of materials with initial residual stresses. For example, using the concept of mathematical limitation, Hoger and collaborators (Johnson and Hoger, 1995;Hoger, 1997) put forward the idea of a virtual stress-free state to deduce the residual stress T in the form of T =¯ (F, τ ), where¯ is a mapping from the initially stressed state to the current configuration, F is the elastic deformation gradient tensor and τ is the initial stress. The virtual stress-free configuration is adopted only to give a physical interpretation for the mathematical derivation of the stress. 
Hence, the constitutive equation is directly established in the initially stressed state, and the reference configuration for a large deformation is no longer constrained to be the stress-free state. As an extension of the initial residual stress theory (Johnson and Hoger, 1995;Hoger, 1997), Skalak et al. (1996) proposed a diagram to show the kinematic description for initially stressed growing matter, in which the growth Figure 2: Evidence of residual stress in biological tissues. (a) Demonstration by the opening angle of a scallion ring after cutting along the axial direction; (b) a different deformation induced by releasing residual stresses in scallion strips, where W is the width of the strip and C is the scallion circumference. Influence of the number and direction of cuts to release residual stress in (c) a duck liver and (d) a duck heart. process from an initially stressed state is decomposed into a sequence of releasing initial stresses, growing unconstrained in a stress-free field, and finally yielding the residual stress. This is a more general description of growth, in which, however, initial stresses are created by an initial external load and can be entirely released by removing it. Hence Skalak et al.'s (1996) description of initially stressed growing matter does not consider existing initial residual stress. Goriely and Amar (2007) proposed a cumulative growth law to analyze the cumulative effect of residual stress during a large growth deformation. The total growth deformation was divided into many small steps that are further decomposed by the MD model. Except for the first step, any accumulative step grows from a residually stressed state, which provides a way to analyze the influence of residual stress on the growth process. However, these residual stresses can only be determined from the prior growth step, and, especially in the first step, it is still necessary to assume an initial stress-free state that is very difficult to prescribe in real biological tissues. Later, Shams et al. (2011) proposed a free energy density ψ=ψ (F, τ ) as a function of ten tensor invariants to derive the residual stress created by the elastic deformation from an initially stressed state. Then Gower et al. (2015) showed that the initial stress symmetry (ISS) condition can demonstrate the natural rationality of using the virtual stress-free state. To describe the growth process of a bio-tissue from any state with residual stresses, we propose an initially stressed reference configuration without external load from which the tissue grows to a current configuration. With this basic idea, we may avoid the drawbacks of using an unpractical virtual stress-free configuration as an initial state of growth in the conventional MD model (Rodriguez et al., 1994). However, there remain two challenges for the current approach of modelling growth: one is how to construct a free energy density for the growth with initially stressed configuration, the other is how to perform a complete analysis for the growth process. These challenges will be addressed. The paper is organized as follows. Section 2 outlines the detailed framework of the modified multiplicative decomposition (MMD) model, including the modified kinematic description, the virtual stress-free configurations, and the governing equations. Three key issues involved in the MMD model are also reviewed and primary resolutions are described. 
In Section 3, we consider a residually-stressed, growing neo-Hookean material as an illustrative example and derive the corresponding constitutive equations. Typically, we perform an inverse analysis to achieve the initial elastic deformation. Then, we analyze theoretically in Section 4 the growth-induced residual stresses for a tubular organ for which the distribution of initial residual stress is determined according to the self-equilibrium conditions. In Section 5, we analyze the growth-induced morphology of an initially stressed tubular organ using incremental theory to signal the onset of wrinkles. Both the influence of initial stress and initial wall thickness on the critical differential growth extent and the stability pattern are investigated. Finally, we discuss the significance of the work and draw some conclusions. Modified kinematic description of growth In the conventional MD growth model, the growth process is decomposed into two successive steps: first the mass accumulation process from the reference configuration B 0 to the virtual stress-free configurationB, second the elastic deformation process fromB to the current configuration B, modulating the morphologic compatibility and inducing residual stresses ( Figure 1). As a result, the entire process of deformation can be expressed mathematically by F = F e F g . As mentioned previously, the reference configuration B 0 in the conventional MD model is a stress-free state that can only be achieved by cutting the solid into an infinite number of discrete elements rather than be prescribed in a real bio-tissue as a continuous configuration. With this consideration in mind, we propose to modify the reference configuration B 0 of the growing tissue by regarding it as a state endowed with a certain level of residual stress τ (τ = 0). Here, we are not concerned with the origin of the initial stress τ , and the boundary conditions in the initial reference configuration may be arbitrarily in an unloaded or a loaded state. Starting from this state, the tissue grows to the current configuration B through which a total deformation F occurs ( Figure 3). In order to determine quantitatively the total deformation F due to growth, the multiplicative decomposition method (Rodriguez et al., 1994) is adopted with modifications. Since the initial state or reference configuration is modified to include an initial residual stress τ , our decomposition is modified by adding a step to release the initial residual stress so that the continuous tissue is discretized into an infinite number of elements that are free of stress, i.e. the virtual stress-free configurationB 0 . Afterwards, the tissue grows (or shrinks) to a state with more (or less) mass or volume but without residual stress, Before growth After growth i.e. another virtual stress-free configurationB. Then, the discrete elements are integrated into the final continuous body in the current configuration B (Figure 3). Here, the second and third steps are similar to those used in the conventional MD model (Rodriguez et al., 1994), and the main difference is in the first step. Similarly, we assume that unconstrained growth only occurs between two stress-free states, while releasing residual stress can only induce elastic deformations or the residual stress is only created by elastic deformations ( Figure 3). 
Therefore, the total deformation may be expressed as where F e is the elastic deformation, F g is the growth deformation, and F 0 induced by releasing the initial residual stress from the body. This latter deformation gradient refers to what we call the initial elastic deformation. For this modified multiplicative decomposition (MMD) model, there are still some unsolved questions since the initial stress and virtual stress-free configuration were introduced. For instance, what is the form of the free energy function when we simultaneously consider initial stress and growth factor? For a specific real living matter, how can we obtain the initial stress distribution? Moreover, for a given initial stress distribution, how can we construct the relationship between the initial elastic deformation and the initial stress? First, for a pure elastic deformation in a continuum, the strain energy density can be defined as a function of the elastic deformation gradient tensor. So, from the virtual stress-free configurationB to the current configuration B, the energy density function for stress-free materials can be defined as ψ = ψ (F e ). In addition, based on the decomposition shown in Figure 3 and the Eq. (1), the elastic deformation required to create residual stress is F e = FF −1 0 F −1 g . Then, for this current, initially stressed, growing matter, the free energy function can be expressed as where the growth deformation gradient tensor is assumed to be independent of the stress states (Ben Amar and Goriely, 2005). Second, we recall that there are some existing methods to access the distribution of initial residual stresses beyond the destructive experiments such as the opening angle method. For instance, with the minimal stress gradient method combining the initial stress symmetry (ISS) condition for the initially stressed materials, Gower et al. (2015) showed that the initial stress distribution can be accessed via the Cauchy stress distribution solved by the minimal stress gradient method in the current configuration. Alternatively, by adopting the Airy stress function method, Ciarletta et al. (2016a) gave three distribution forms of residual stresses satisfying the equilibrium equation and proposed a morphological method to quantify their magnitude. Finally, by assuming that the constitutive equation for the stress-free material is known and invertible, we will show that the initial elastic deformation for incompressible materials can be solved via a method proposed by Johnson and Hoger (1995) . The virtual stress-free configuration The most basic assumption in this current MMD decomposition is that the stress-free state is a discrete configuration and is unavailable in practice, which means there is no real continuum configuration and it is difficult to define the deformation. Here, we use the limitation concept proposed by Johnson and Hoger (1995) to access approximately the stress-free configuration by letting the infinitesimal volume surrounding a material point tend to zero. This derivation is in the same spirits with the proof of Cauchy's theorem and we do not reproduce it here to save space. In short, it shows that the real stress-free configuration is made of infinitesimal discrete regions and that the elastic deformation gradient tensor F 0 for a stress-free material can be mathematically approximated to the corresponding elastic deformation gradient tensor for a continuum material. 
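To make the decomposition concrete, here is a minimal numerical sketch (Python/NumPy; the numbers are purely illustrative and are not taken from the paper). It assembles the elastic deformation F_e = F F_0^{-1} F_g^{-1} quoted above, equivalently Eq. (1) in the form F = F_e F_g F_0, for diagonal tensors, checks elastic incompressibility, and evaluates the neo-Hookean stress that anticipates Section 3.

```python
import numpy as np

mu  = 1.0                               # shear modulus of the neo-Hookean solid (Section 3)
F_g = np.diag([1.00, 1.20, 1.00])       # growth tensor: illustrative circumferential growth
F_0 = np.diag([1 / 1.02, 1.02, 1.00])   # isochoric release of the initial residual stress

# Choose a total deformation F compatible with elastic incompressibility,
# det F = det F_g (since det F_0 = 1): fix the hoop and axial stretches and
# solve for the radial one.
lam_t, lam_z = 1.30, 1.00
lam_r = np.linalg.det(F_g) / (lam_t * lam_z)
F = np.diag([lam_r, lam_t, lam_z])

# Modified multiplicative decomposition: F = F_e F_g F_0  =>  F_e = F F_0^{-1} F_g^{-1}
F_e = F @ np.linalg.inv(F_0) @ np.linalg.inv(F_g)
print("det F_e =", np.linalg.det(F_e))  # ~1: the elastic part is isochoric

# Incompressible neo-Hookean Cauchy stress, up to the Lagrange multiplier p:
B_e = F_e @ F_e.T
sigma_plus_pI = mu * B_e                # sigma = mu * B_e - p * I
print(np.round(sigma_plus_pI, 4))
```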
Therefore, the usual constitutive equations of continuum mechanics such as the neo-Hookean, Mooney-Rivlin, or Fung models can be used for the stress-free materials.

Governing equations

Here, if we consider the internal material constraints, the free energy function is modified by ψ → ψ − pC, where p is a Lagrange multiplier, and C is a scalar function encapsulating the internal elastic constraints. Then the nominal stress S is obtained as where F_g0 = F_g F_0, and the volume change J = det(F) appears because the elastic strains are computed from the grown state. In particular, for elastic incompressible materials, C = det(F_e) − 1. Then the nominal stress for incompressible materials becomes Because the relationship between the Cauchy stress σ and the nominal stress S is σ = J^{-1} F S, we have in general, and for elastic incompressible materials, In the absence of body forces, the equations of equilibrium are where Div and div are the divergence operators in B_0 and B, respectively, and the boundary conditions are

Key issues for initially stressed growing matter

So far we have presented the kinematic description, the basic assumptions, and the governing equations of the modified multiplicative decomposition (MMD) growth model for initially stressed biological tissues. However, we still need to clarify some key issues: (1) selection of the energy density function; (2) determination of the initial elastic deformation; (3) determination of the initial residual stress.

Selection of the free energy density function

For materials with the initial residual stress τ, the Cauchy stress σ in the current configuration satisfies the constitutive equation σ = ζ(F, τ). Here the free energy density function yielding σ = ζ(F, τ) should also satisfy the requirement of initial stress symmetry (ISS) of Gower et al. (2015, 2017). According to ISS, when we exchange the roles of the current configuration and the reference configuration, the initial stress needs to satisfy the equation τ = ζ(F^{-1}, σ), showing that the constitutive function has no preferred reference configuration. When F = I, we obtain τ = ζ(I, τ), which indicates that ISS also recovers the initial stress compatibility (ISC). As pointed out by Gower et al. (2017), the restrictions of ISS are a consequence of energy conservation rather than an assumption made for convenience. These restrictions ensure that the predicted stress and strain energy do not depend upon an arbitrary choice of reference configuration. If a strain-energy function does not satisfy these restrictions, the resulting constitutive response function may lead to unphysical behavior (Gower et al., 2017). Moreover, Gower et al. (2015) also showed that the constitutive equation of an initially stressed material derived by using a virtual stress-free configuration satisfies ISS. For an initially stressed growing material, we now explore the consequences of swapping the reference configuration and the current configuration for the constitutive equations, and exhibit a constitutive equation that satisfies the ISS condition. We call ϑ the constitutive law of the body, giving the stress for a deformation taking place from an initially stress-free state. Both τ and σ are related to their respective stress-free configurations B̃_0 and B̃ by the same constitutive law, ϑ. Explicitly, we see from Figure 3 that where the scalars p_0 and p are arbitrary Lagrange multipliers, to be determined from the boundary conditions on ∂B_0 and ∂B, respectively. Now by Eq.
(1), F e = FF −1 0 F −1 g , and Eq.(9) 2 can be rewritten as where ζ is the constitutive equation for the initially-stressed, growing materials. Then, we may swap the configuration B 0 and B by performing the following swaps for the fields, see Figure 3, With these swaps, Equation (9) 2 now reads as: τ = ϑ F −1 0 F −T 0 , p 0 and Equation (9) 1 now reads as: σ = ϑ F e F T e , p . Effectively, τ and σ have swapped roles, as required. However, this swapping has consequences on the constitutive law ζ, because Eq.(10) now reads Combining Eqs. (10) and (12) now shows that the initially stressed growing material satisfies ISS when a constitutive law of the form σ When a constitutive law is proposed without reference to virtual stress-free configuration, assuming ISS imposes restrictions on its form (see Gower et al. (2015) for these restrictions in a non-growing material). With the introduction of virtual stress-free configurations, we can construct a constitutive law which satisfies ISS, and then we may work with that law without making any further reference to virtual stress-free configuration. In Section 3 we present such a constitutive law, based on the neo-Hookean form of nonlinear elasticity. It presents the further advantage that it allows for the same residual stress to result from different pre-deformation decompositions, see Gower et al. (2017). Note, however, that our analysis is not restricted to this constitutive choice and can easily be applied to other forms, such as the ones proposed by Shams et al. (2011) for instance. Determination of initial elastic deformation In Section 2.3, the constitutive equations for a growing material with initial residual stresses are expressed in terms of the initial stress τ , and not explicitly in terms of F 0 . Recalling the fundamental assumption that the constitutive equation for a stress-free material is unique and invertible, there must exist a relationship between F 0 and τ . For infinitesimal deformation, Lematre et al. (2006) proposed an optimization algorithm to approximately obtain the initial strain component related to the initial pre-stress for layered piezoelectric structures. For finite deformation in soft material or biological tissues, Hoger (1993, 1995) presented a method to access the left Cauchy-Green strain tensor B 0 related to initial residual stress τ . However, note that it is impossible in general to obtain the uniquely explicit expression between the F 0 and τ (Holzapfel, 2000). Nevertheless, by assuming that growth deformation takes place along an axi-symmetric or principal direction, we show an example in the following Section 3 that the Cauchy stress can be obtained from a relation between B 0 and τ . Here we present the method proposed by Hoger (1993, 1995) for obtaining the left Cauchy-Green tensor B from a given initial stress for growing elastic incompressible materials. Here, we note that B Based on Eq. (9) 1 and the assumption that the constitutive equation for stress-free materials is known and invertible, the left Cauchy-Green tensor B (−1) 0 for the deformation fromB 0 to B 0 has a formal functional relationship to the initial stress τ and the Lagrange multiplier p 0 , written as From incompressibility det F −1 By solving Eq. (14), p 0 can be obtained (at least, in principle). Then B (−1) 0 can also be computed by substituting p 0 and the given τ into Eq.(13). Initial residual stress distribution There are two major ways to measure experimentally residual stresses. 
One relies on relaxation measurement methods, which are suitable to measure residual stresses in simple or axisymmetric shapes such as the tubular organs examined by the opening angle method. The other uses diffraction methods, including ultrasonic, photoelastic, and X-ray diffraction (Schajer, 2013). From a theoretical point of view, it is difficult to access the distribution of initial stress, especially for biological tissue with an arbitrary shape or with a complex structure. For some simple or axisymmetric shapes, as found for tubular organs, we may use the stress potential function method (Ciarletta et al., 2016b), see Section 4. Growth of an initially stressed neo-Hookean tissue Accounting now for the large deformation generated during the growth process, we present a simple constitutive equation for initially stressed growing materials based on the elastic constitutive equation of neo-Hookean solids. Constitutive equation From the virtual stress-free configurationB to the current residually stressed configuration B, the free energy density of a neo-Hookean solid is where µ is the initial shear modulus. Here we assume that the initial shear modulus remains constant during the growth process. Based on Eq. (1), we have the nominal stress S is and the corresponding Cauchy stress σ is For axisymmetric growth deformation, the deformation gradient tensors and the growth process are diagonal in their respective bases of orthogonal unit vectors. Then, by commutativity the Cauchy stress can be expressed as 3.2. The initial elastic deformation From Eq.(18) written in B 0 we see that the initial residual stress τ can be expressed as τ = µB so that B (−1) 0 can be found as Here, p 0 is the only yet unsolved parameter; it is related to the boundary condition in the initially stressed configuration. The first three principal invariants of B (−1) 0 are related to those of τ as (Gower et al., 2015) From the incompressibility constraint, I 3,B (−1) 0 = 1, so that Eq.(21) reads By solving Eq. (22), we obtain the Lagrange multiplier p 0 formally as p 0 = ℘(τ ). As the explicit form is complicated, we do not present the corresponding explicit general expression here to save space. It suffices to note that only one root of the cubic is relevant. The details on how to identify the adequate root of Eq. (22) are given by Gower et al. (2015), based on continuity of the root with changing residual stress. Then the tensor B (−1) 0 can be written as Finally, we find the Cauchy stress for axisymmetric growth deformation based on Eq.(18). Growth stress of a tubular tissue Tubular strutures such as plants, blood vessels, weasands, or gastrointestinal walls are the most common biological tissues found in living organisms. Healthy organs always keep an ideal state, with moderate stress levels and a functional morphology. To understand further the growth or evolution rules for tubular organs, it is important to incorporate residual stress into growth theory. Also, it is vital to analyze the influence of the initial stress in reference configuration on the residual stress in the current configuration so that we can observe the growth process in a more real and practical way. Here we treat the example of a simplified plane strain growing axisymmetric tube model with an initial residual stress field τ , see Figure 4. The reference configuration B 0 is associated with the cylindrical coordinates (R, Θ, Z), and the current configuration B with the coordinates (r, θ, z). 
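For the neo-Hookean case just described, the inverse determination of the initial elastic deformation can be carried out numerically. The sketch below (Python/NumPy; the initial stress values are illustrative and the root-selection rule is the continuity criterion of Gower et al. (2015)) solves det(τ + p_0 I) = μ³, which is equivalent to Eq. (22), recovers B_0^{(-1)} = (τ + p_0 I)/μ, and then verifies the ISS property of Section 2.3 for the non-growing law σ = F(τ + p_0 I)F^T − p I implied by Eqs. (18)–(20) with F_g = I.

```python
import numpy as np
from scipy.linalg import expm

mu = 1.0
tau = np.array([[ 0.20, 0.05, 0.00],     # illustrative symmetric initial stress
                [ 0.05, -0.30, 0.00],
                [ 0.00, 0.00, 0.10]])

# --- Lagrange multiplier p0 from det(tau + p0 I) = mu^3 (a cubic in p0) -------
I1 = np.trace(tau)
I2 = 0.5 * (I1 ** 2 - np.trace(tau @ tau))
I3 = np.linalg.det(tau)
roots = np.roots([1.0, I1, I2, I3 - mu ** 3])
real = roots[np.isclose(roots.imag, 0.0)].real
p0 = real[np.argmin(np.abs(real - mu))]   # root continuous with p0 = mu at tau = 0

B0_rel = (tau + p0 * np.eye(3)) / mu      # B_0^{(-1)}, left C-G tensor of F_0^{-1}
print("p0 =", round(p0, 6), " det B_0^(-1) =", round(float(np.linalg.det(B0_rel)), 6))

# --- ISS check for the non-growing, initially stressed neo-Hookean law --------
def zeta(F, tau_ref, p_ref, p_cur):
    """sigma = F (tau_ref + p_ref I) F^T - p_cur I   (F_g = I)."""
    return F @ (tau_ref + p_ref * np.eye(3)) @ F.T - p_cur * np.eye(3)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
A -= np.trace(A) / 3 * np.eye(3)          # traceless generator ...
F = expm(A)                               # ... so that det F = exp(tr A) = 1 exactly
p = 1.4                                   # Lagrange multiplier of the current state
sigma = zeta(F, tau, p0, p)

# Swapping the two configurations (and the multipliers) must return the initial
# stress, tau = zeta(F^{-1}, sigma): this is the ISS requirement.
tau_back = zeta(np.linalg.inv(F), sigma, p, p0)
print("ISS satisfied:", bool(np.allclose(tau_back, tau)))
```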
Distribution of the initial stress

We adopt the Airy stress function method to define a possible type of distribution of the initial stress field τ which satisfies the self-equilibrium equation. In the reference configuration B_0, its non-zero components should satisfy subject to the traction-free boundary conditions τ_RR = τ_RΘ = 0 on the inner and outer surfaces at R = R_i, R_o. Then, introducing the Airy stress function φ(R, Θ), we find that the general solution is For solutions such that φ = φ(R) only, this reduces to where f = φ′ is a stress potential function. Here, we take a logarithmic stress potential function for illustration, where α is a non-dimensional measure of the residual stress amplitude. See Ciarletta et al. (2016b) for other examples of stress potential functions, such as parabolic and exponential variations. We also conducted the analysis presented in this paper for those functions, and found similar results. For the logarithmic potential, the circumferential stress component τ_ΘΘ varies almost linearly across the wall thickness and vanishes close to the mid-thickness, see Figure 5. The corresponding initial residual stress components are given in Eq. (27). Figure 5 shows the resulting transmural distribution of the initial residual stress when α > 0. The radial stress is everywhere compressive (τ_RR < 0), attains its largest magnitude near the mid-thickness, and is small compared to the circumferential stress. The circumferential stress has an almost linearly antisymmetric variation with respect to the centroid surface: the maximum tensile stress is on the outer surface while the maximum compressive stress is on the inner surface (reversed when α < 0).

Growth-induced residual stress

Figure 4 and Figure 3 show the overall diagram and the decomposition for the growth process. First, the corresponding deformation gradients read, in their respective bases, where g_r, g_θ are the growth factors along the radial and circumferential directions, respectively. Incompressibility for the pure elastic deformation gradient tensor F_e = F F_0^{-1} F_g^{-1} reads det(F F_0^{-1} F_g^{-1}) = 1. Because F_0 is also assumed to correspond to a pure elastic deformation gradient, we have det F_0^{-1} = 1, and the incompressibility condition reduces to det(F F_g^{-1}) = 1, which is integrated to Then from Eqs. (18) and (20) we find the following non-zero Cauchy stress components, and the sole non-zero equilibrium equation reads Next, we introduce the dimensionless initial radial position ς = (R − R_i)/(R_o − R_i) and Eq. (32) becomes where H = R_o − R_i is the thickness of the tube in the initially stressed reference configuration. Integrating this expression subject to the boundary condition σ_rr = 0 on the inner surface ς = 0 (r = r_i), the growth-induced residual stress from an initial stress state is obtained as Eq. (34). Finally, imposing the boundary condition σ_rr = 0 on the outer surface at ς = 1 (r = r_o) in this expression, we access the value of the inner radius r_i for a given initial stress, initial geometry and growth tensor. Then the Cauchy stress components follow from Eqs. (34) and (32).

Figure 6 (caption, partially recovered): circumferential growth only (left: g_θ = 1.1, g_r = 1.0; right: g_θ = 0.9, g_r = 1.0), compared with Rodriguez et al. (1994).

Results

First we take the current MMD growth model to start from an initial zero-stress state, and check that we recover the results of the MD model (Rodriguez et al., 1994) when α = 0 (no residual stress), g_r = 1 (no radial growth), and g_θ ≠ 1 (circumferential growth only).
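The transmural profiles discussed next (Figures 6–8) can be reproduced by a direct numerical integration of the radial equilibrium equation. The sketch below (Python/SciPy) is only an illustration of the procedure: it uses a polynomial stand-in for the self-equilibrated initial stress rather than the paper's logarithmic potential, assumes τ_ZZ = 0 and plane strain, and fixes the current inner radius by shooting on the traction-free outer face.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import brentq

mu, alpha = 1.0, 1.0
Ri, Ro = 1.0, 2.0                      # initially stressed reference geometry
g_r, g_t = 1.0, 1.2                    # growth factors (an RCG example)
R = np.linspace(Ri, Ro, 400)

# Stand-in self-equilibrated initial stress: tau_RR vanishes on both faces and
# tau_TT = d(R tau_RR)/dR, so the radial equilibrium in B_0 is satisfied.
tau_RR = alpha * mu * (R - Ri) * (Ro - R) / (Ro - Ri) ** 2
tau_TT = tau_RR + R * alpha * mu * (Ro + Ri - 2 * R) / (Ro - Ri) ** 2

def p0_of(tRR, tTT):
    """Root of det(tau + p0 I) = mu^3 with tau = diag(tRR, tTT, 0), closest to mu."""
    out = np.empty_like(tRR)
    for k, (a, b) in enumerate(zip(tRR, tTT)):
        r = np.roots([1.0, a + b, a * b, -mu ** 3])
        r = r[np.isclose(r.imag, 0.0)].real
        out[k] = r[np.argmin(np.abs(r - mu))]
    return out

p0 = p0_of(tau_RR, tau_TT)

def radial_stress(ri):
    r = np.sqrt(ri ** 2 + g_r * g_t * (R ** 2 - Ri ** 2))      # det(F F_g^{-1}) = 1
    lam_r, lam_t = g_r * g_t * R / r, r / R                     # dr/dR and r/R
    le_r2 = lam_r ** 2 * (tau_RR + p0) / (mu * g_r ** 2)        # elastic stretches squared
    le_t2 = lam_t ** 2 * (tau_TT + p0) / (mu * g_t ** 2)
    dsig_dR = mu * (le_t2 - le_r2) * g_r * g_t * R / r ** 2     # radial equilibrium in R
    sig_rr = cumulative_trapezoid(dsig_dR, R, initial=0.0)      # sigma_rr(r_i) = 0
    return sig_rr, mu * (le_t2 - le_r2)

# Shoot on the current inner radius so that sigma_rr also vanishes on the outer face.
ri = brentq(lambda x: radial_stress(x)[0][-1], 0.3 * Ri, 3.0 * Ri)
sig_rr, diff = radial_stress(ri)
sig_tt = sig_rr + diff
print(f"r_i = {ri:.4f};  hoop stress inner/outer = {sig_tt[0]:+.4f} / {sig_tt[-1]:+.4f}")
```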
Figure 6 shows the resulting transmural distribution of differential growth-induced residual stress ("differential growth" means that the body grows differently along different directions). For circumferential expansion, the circumferential stress decreases monotonically from a tensile stress on the inner surface to a compressive stress on the outer surface, and this behavior is reversed for circumferential shrinkage. As expected, we recover the results of Rodriguez et al. (1994). Now we consider a more realistic scenario, where an initial residual stress exists in the reference configuration, and compare the results to those obtained without considering the initial stress. Figure 7 displays the comparative results: the solid lines correspond to the initial stress state with magnitude α = 1.0 and the dashed lines are for no initial stress (α = 0, as in Figure 6). The residual stresses depend on the ratio of growth factors g_θ/g_r, and we study the following differential growth scenarios: relative radial growth (RRG), when g_θ/g_r < 1; isotropically compatible growth (ICG), when g_θ/g_r = 1; and relative circumferential growth (RCG), when g_θ/g_r > 1.

Figure 7: The transmural distribution of residual stresses for different differential growth ratios (α = 1, full lines): relative radial growth (RRG), when g_θ/g_r = 0.5; isotropically compatible growth (ICG), when g_θ/g_r = 1.0; and relative circumferential growth (RCG), when g_θ/g_r = 2.0, compared with Rodriguez et al.'s (1994) model (without initial residual stress, α = 0, dashed lines).

Figures 7(a) and (b) show the transmural distribution of circumferential stress and radial stress, respectively. For isotropically compatible growth (ICG, green curves), the residual stresses predicted by the MD model (Rodriguez et al., 1994) are zero throughout. By contrast, the MMD growth model with α = 1.0 is endowed with a significant distribution of initial residual stress. But of course, no matter what the magnitude of the isotropically compatible growth is, as long as g_θ/g_r = 1.0 the curves remain the same for both the MD and MMD models, confirming that ICG does not give rise to growth-induced residual stress. Going now from g_θ/g_r = 1.0 to relative circumferential growth with g_θ/g_r = 2.0 (RCG, red curves), we see that the distribution of growth-induced residual stress in the circumferential direction undergoes an approximately clockwise rotation, both for the MD and MMD models. Specifically, on the inner side, the circumferential stress changes from zero stress to tensile stress in the MD model and from compressive stress to tensile stress in the MMD model. Analogously, going from g_θ/g_r = 1.0 to relative radial growth with g_θ/g_r = 0.5 (RRG, blue curves), we see an approximately anticlockwise rotation and an increasing value of compressive stress on the inner surface. Figure 8 shows the changes in circumferential stresses on the inner and outer surfaces with the differential growth ratio g_θ/g_r.

Figure 8: Variations of the residual circumferential stresses on the inner (r = r_i) and outer (r = r_o) faces of the tube with the differential growth ratio (RRG: g_θ/g_r < 1, ICG: g_θ/g_r = 1, RCG: g_θ/g_r > 1), with (α = 1.0, 2.0, full lines) and without (α = 0, dashed lines) an initial residual stress (Rodriguez et al., 1994).

On the inner side, circumferential growth creates tensile stress while radial growth creates compressive stress, and vice versa on the outer side.
So here, greater circumferential (radial) growth ratios lead to greater tensile (compressive) stresses, and the introduction of initial residual stress accentuates these trends. Clearly, the final distribution of residual stress depends not only on the differential growth ratio, but is also affected by the magnitude of the initial stress. In the next section, we study the appearance of wrinkles due to loss of stability for a growing, initially stressed cylindrical tube and investigate the role played by these factors on the development of its morphology.

Growth-induced morphology of an initially stressed tube

Residual stress accumulates as a tubular organ grows. Similarly to tubes that develop circumferential instability under a critical pressure, leading to a non-circular cross-section (Moulton and Goriely, 2011), we expect our tube to buckle with increasing residual stress induced by the accumulation of differential growth. To understand the generation and the development of wrinkles on the inner side of the tube, we now conduct an instability analysis for the residually stressed state. Using linearized incremental theory, Balbi et al. (2015) and Ben Amar and Goriely (2005) determined the critical differential growth ratio leading to instability in a tube with no initial residual stress. Here, we use the MMD growth model to show the influence of an initial residual stress on the critical differential growth ratio and the resulting instability patterns.

Incremental theory

Following the growth process, an infinitesimal elastic deformation χ′ is applied in the current configuration B relative to the reference configuration B_0, so that the particle position in the new configuration B_I can be expressed as x′ = χ′(X). Letting ẋ = x′ − x denote the incremental displacement relative to the reference configuration, we introduce the incremental displacement gradients Ḟ = ∂ẋ/∂X with respect to the initial reference configuration B_0, and Ḟ_I = ∂ẋ/∂x with respect to the current configuration B. By the chain rule, they are related to the deformation gradient tensor F = ∂x/∂X through Ḟ = Ḟ_I F. Based on the incremental theory for tissue growth introduced by Ben Amar and Goriely (2005), we assume that the incremental deformation is infinitesimal and transient, so that the growth process is independent of the stress and strain fields. In other words, Ḟ_I can be seen as purely elastic and not influenced by the growth process. So, combining Eqs. (1) and (35), we obtain the corresponding incremental relation. Next, expanding det(F + Ḟ) to first order, we find the incremental incompressibility condition tr(Ḟ_I) = 0. With a Taylor series expansion, the incremental nominal stress Ṡ can be expressed in terms of the (fourth-order) referential elasticity tensor. Because the push-forward form of the incremental nominal stress is Ṡ_I = J^{-1} F Ṡ, we find the instantaneous elasticity tensor (Ogden, 1984). In component form, the non-zero components of A^I_e in the coordinate system aligned with the principal axes are (Ogden, 1984) where ψ_i = ∂ψ/∂λ_i and ψ_ij = ∂²ψ/∂λ_i∂λ_j. The equations of incremental equilibrium are Finally, the incremental nominal stress and the displacement satisfy the boundary conditions.

Incremental field in the tubular organ

We write the incremental displacement field in cylindrical coordinates, resulting in the corresponding incremental displacement gradient tensor. The incremental equilibrium equations (43)–(47) then follow. We now seek a solution in the form where n is the wrinkle number in the circumferential direction, and U, V, P, Σ_ij are functions of r only.
This mechanical field describes a sinusoidal pattern along the circumferential direction, with amplitude variations along the radial direction. Next we introduce the incremental displacement-traction vector η as and find that the governing equations can be put in the Stroh form as where the components of the 2 × 2 sub-blocks are with, in general, According to Eq.(42), we have for the initially stressed growing neo-Hookean model (15), where is the circumferential stretch ratio of the elastic deformation gradient tensor F e = FF −1 0 F −1 g . The surface impedance method The surface impedance method was first proposed by Biryukov (1985) to investigate wave propagation in inhomogeneous solids and later generalized to study of the stability of inhomogeneously deformed solids (Ciarletta et al., 2016b;Destrade et al., 2010Destrade et al., , 2009. The main result is that the critical growth-induced instability state is reached once the inner surface impedance matrix Z i (r) satisfies the equation where Z i (r) is obtained by using the boundary condition [Σ rr (r i ) , Σ rθ (r i )] T = 0 on the inner surface. To find Z i (r o ) we must integrate numerically the following Riccati differential equation for Z i , through the thickness, from r = r i with the initial boundary condition Z i (r i ) = 0, to r = r o with Eq.(55) as the target. Once the target is reached, we obtain the corresponding critical value of differential growth ratio g θ /g r and Z i (r o ), and also the shape of the outer surface from the following ratio, Finally, to determine the through-thickness incremental displacement field of the tube, we solve simultaneously the following equations for U (r) = [U (r) , V (r)] T and the outer conditional impedance matrix Z o , see Destrade et al. (2009) with the initial boundary conditions: Specialisation to a non-growing, residually stressed tube First we check that we recover the instability analysis of (Ciarletta et al., 2016b) when F g = I. In that case, the tube is not growing and the instability is triggered by increasing the amplitude of the initial stress in Eq.(27) until the critical initial stress amplitude α cr is reached. Figure 10 shows the same results as those by calculated by (Ciarletta et al., 2016b), which validates our code for the MMD growth model when F g = I. Figure 10(a) shows the lines of critical magnitude α cr of the residual stress against the tube aspect ratio R o /R i , for different wrinkle numbers n. For a given R o /R i , there exists a minimal critical stress magnitude, which we record; then by varying R o /R i , we construct the bottom envelope line. We also record the corresponding critical wrinkle number, to create Figure 10(b), showing the variations of n cr with R o /R i . There we see that for a tube with a larger wall thickness, the wrinkle number is smaller, consistent with the results of Ciarletta et al. (2016b). These results show that the instability of a soft tissue with residual stress is directly related to its geometry and to the magnitude of the residual stress. The instability analysis demonstrates that when wrinkles are present, the magnitude α cr of the residual stress can be found non-destructively by observing its shape and counting the number of wrinkles. There is no need to rely on the multiple decomposition method and on cutting the tube. Now we introduce differential growth to see how initial stress, geometry, and growth affect pattern formation. 
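The numerical procedure just described can be organized as in the following skeleton (Python/SciPy). It is only a schematic sketch: the 2×2 Stroh sub-blocks depend on the instantaneous moduli of Eq. (42) and are not reproduced here, so the function `stroh_blocks` is a placeholder to be filled in, and the bifurcation target is written as det Z(r_o) = 0, the usual condition for a traction-free outer face.

```python
import numpy as np
from scipy.integrate import solve_ivp

def stroh_blocks(r, n, params):
    """Placeholder: return the 2x2 blocks G1, G2, G3, G4 of the Stroh matrix at
    radius r for circumferential mode number n (to be filled in from Eq. (42))."""
    raise NotImplementedError

def det_Z_outer(ri, ro, n, params):
    """Integrate the Riccati equation for the surface impedance matrix Z(r) from
    the traction-free inner face, Z(ri) = 0, and return det Z(ro)."""
    def rhs(r, z):
        # Riccati update associated with a Stroh system of the form
        # d(eta)/dr = (1/r) G eta with traction amplitudes Sigma = Z U (assumed form).
        Z = z.reshape(2, 2)
        G1, G2, G3, G4 = stroh_blocks(r, n, params)
        dZ = (G3 + G4 @ Z - Z @ G1 - Z @ G2 @ Z) / r
        return dZ.ravel()
    sol = solve_ivp(rhs, (ri, ro), np.zeros(4), rtol=1e-8, atol=1e-10)
    Z_out = sol.y[:, -1].reshape(2, 2)
    return float(np.linalg.det(Z_out))

# Intended use: for each wrinkle number n, scan the control parameter (here the
# differential growth ratio g_theta/g_r) and bracket the zero of det_Z_outer with
# a root finder such as scipy.optimize.brentq; the envelope over n gives the
# critical ratio and the critical wrinkle number, as in Figures 10 and 11.
```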
Instability analysis for a growing, initially stressed tube

Here we adopt the constant growth rate model presented by Eskandari and Kuhl (2015) and the logarithmic distribution of initial stress in the reference configuration given by Eq. (27). Hence, the components of the growth deformation gradient tensor at time t are g_r(t) = 1 + ġ_r t and g_θ(t) = 1 + ġ_θ t, where ġ_r, ġ_θ are the constant growth rates in the radial and circumferential directions, respectively. We call v = ġ_θ/ġ_r the limit of the differential growth ratio g_θ/g_r as time increases. Here we take α, the non-dimensional amplitude measure of the initial stress τ, and R_o/R_i, the initial relative wall thickness, as conditional parameters, and take the differential growth ratio g_θ/g_r as the critical parameter. We first analyse the case of varying α and fixed initial relative wall thickness, R_o/R_i = 2.0. We found in the previous section that α_cr = 4.02 for R_o/R_i = 2.0 when there is no growth. Here we look in turn at the cases α = 0 (no initial residual stress), 1.0, 2.0, 4.0, when buckling occurs on the inner surface of the tube due to relative radial growth (RRG), that is when g_θ/g_r < 1.

Figure 11: Relationship between the wrinkle number n and the differential growth ratio (g_θ/g_r)_cr for different initial residual stress levels α = 0, 1, 2, 3, 4, when R_o/R_i = 2. The peak point of the n ∼ g_θ/g_r curve indicates the onset of wrinkles at a critical value of g_θ/g_r with a critical half-wave number n_cr.

To fix ideas we take v = 0.1 and find numerically the critical differential growth extent g_θ/g_r such that Eq. (55) is satisfied. Figure 11 reveals that there exists a different maximal differential growth ratio for each initial configuration and that the level of initial stress has a significant influence on the instability pattern. Hence when there is no initial residual stress (α = 0), we find (g_θ/g_r)_cr = 0.360, which indicates a large difference in the growth rates (and then there are n_cr = 32 wrinkles), while when α = 4.0, the growth rates are almost equal: (g_θ/g_r)_cr = 0.987 (n_cr = 13 wrinkles), indicating that only a small relative radial growth process is required to accumulate more residual stress and induce instability. Hence relative radial growth makes it easier to induce instability. This is particularly true when the initial stress state is α = 4, very close to the α_cr = 4.02 found for non-growing tubes: this state is very unstable, as a small relative radial growth can induce instability.

Figure 12: Morphology of unstable states from different initially stressed states for different initial residual stress levels α = 0, 1, 2, 3, 4, when the initial geometry of the tube is given by R_o/R_i = 2 (panels: α = 0, n_cr = 32; α = 1, n_cr = 28; α = 2, n_cr = 22; α = 4, n_cr = 13; all with v = 0.1).

Figure 13: (a) Relationship between the initial stress level α and the critical wrinkle number n_cr or the critical differential growth ratio (g_θ/g_r)_cr for the growth-induced unstable state, for R_o/R_i = 2. (b) Relationship between the initial stress level and the critical geometric size or the critical circumferential residual stress on the inner surface for the growth-induced unstable state.

Once we know the critical wave number n_cr and the critical differential growth extent (g_θ/g_r)_cr, we can compute the whole incremental mechanical displacement field as explained in Section 5.3.
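With the constant growth rates above, the time needed to reach any prescribed differential growth ratio follows from a one-line computation. The short sketch below (Python) uses the critical ratio (g_θ/g_r)_cr ≈ 0.360 reported for α = 0 and the ratio v = 0.1 used above, while the absolute rate ġ_r is an arbitrary illustrative value.

```python
def time_to_reach_ratio(c, gdot_r, gdot_t):
    """Solve (1 + gdot_t * t) / (1 + gdot_r * t) = c for t >= 0.
    The ratio starts at 1 and tends to v = gdot_t / gdot_r, so a finite solution
    exists only when c lies between v and 1."""
    v = gdot_t / gdot_r
    if not (min(v, 1.0) <= c <= max(v, 1.0)):
        return None                       # the prescribed ratio is never reached
    denom = gdot_t - c * gdot_r
    if abs(denom) < 1e-14:                # c == v: reached only asymptotically
        return float("inf")
    return (c - 1.0) / denom

# alpha = 0 case: critical ratio ~0.360 (from the text), v = 0.1, gdot_r taken as 0.1.
t_cr = time_to_reach_ratio(0.360, gdot_r=0.10, gdot_t=0.01)
print(f"growth time to reach the critical ratio: t_cr = {t_cr:.2f}")
```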
Figure 12 presents the resulting unstable morphologies for the same levels of initial residual stress as in Figure 11. For the same initial geometric size R_o/R_i = 2, we calculate the critical differential growth extent (g_θ/g_r)_cr when the initial stress level varies continuously between 0 and 4. The resulting Figure 13(a) shows the effect of the initial stress level on growth-induced instability. For an initially stressed material, the conditional parameters α and (g_θ/g_r)_cr are positively correlated.

Figure 14: (a) Critical wave number n_cr and (b) critical differential growth extent (g_θ/g_r)_cr for initially stressed materials.

Additionally, because the time limit of the differential growth ratio g_θ/g_r is equal to the relative differential growth rate v, the red line in Figure 13(a) can also be seen as giving the critical (or minimal) relative differential growth rate v_cr. In other words, if the ratio of the two constant growth rates is less than v_cr, then no instability will occur in the tissue. Figure 13(b) shows the influence of α on the ratio of the outer radius to the inner radius r_o/r_i in the current configuration and on the circumferential residual stress on the inner surface σ_θ(r_i). Considering now the influence of the other conditional parameter, the relative wall thickness R_o/R_i, we conduct the instability analysis when it varies continuously between 1.5 and 2.0. Figure 14(a) shows the values of the resulting critical wrinkle number n_cr and of the critical differential growth extent (g_θ/g_r)_cr when the magnitude of the initial residual stress varies between 0 and 4. We see that a higher initial stress amplitude α and a thicker initial relative wall thickness R_o/R_i lead to fewer wrinkles and a lower differential growth rate ratio. We conclude that for an arbitrary initially stressed state, the initial conditional parameters, such as the non-dimensional amplitude measure α of the initial stress τ and the initial relative wall thickness R_o/R_i, are vital for determining the onset of critical instability patterns. We find that the constant differential growth rate ratio v is also a decisive parameter for evaluating whether instability occurs. In Figure 15 we display the results of our calculations for different growth rate ratios v = 0.05, 0.08, 0.1, when the initial stress level α varies from 0 to 4 and when the initial relative wall thickness is fixed as R_o/R_i = 2.

Figure 15: Influence of the differential growth rate v on instability patterns, when the initial relative wall thickness is R_o/R_i = 2 and the time-scale is equal to 1.

Here, we find that the critical wave number n_cr, the critical differential growth extent (g_θ/g_r)_cr and the critical relative wall thickness (r_o/r_i)_cr of the growing initially stressed tube do not depend on the differential growth rate ratio v, as the three curves superpose. However, the growth factor (g_r)_cr, the absolute wall thickness (r_o − r_i)_cr and the growth time required for instability t_cr do depend on v.

Discussion and conclusion

In this work, we proposed a modified multiplicative decomposition (MMD) growth model to simulate growth in a general way, starting with an initially stressed reference configuration instead of the inaccessible stress-free state used in the MD model (Rodriguez et al., 1994). In Section 2, we presented the kinematic description for initially stressed growing matter.
We introduced a virtual stress-free (VSF) configuration corresponding to the reference configuration to show the unconstrained or incompatible discrete state and we assumed that the constitutive equation for the stress-free state is known and invertible. Based on the current growth model, we derived the free energy function for the material. Based on the neo-Hookean model, we established a constitutive equation for initially stressed growing matter in Section 3. By assuming axisymmetric growth deformation and elastic incompressibility, we obtained the releasing initial stress deformation and further, the Cauchy stress in the current configuration. In Section 4, we treated the example of a growing axisymmetric tube with an initial residual stress field τ that is found by using the Airy function method. Figure 6 shows that the MMD growth model recovers the classical volume growth model when there is no initial residual stress (τ = 0). Then we showed that residual stress is caused by differential growth. Relative radial growth (RRG) accumulates compressive stress on the inner surface, while relative circumferential growth (RCG) generates tensile stress on the inner surface. The residual stress induced by incompatible growth can be quantified by the differential growth ratio g θ /g r . In Figures 7 and 8, we calculated the growth-induced residual stress with different initial stress levels and showed the differences between the results of Rodriguez et al. (1994) (MD growth model) and of our MMD growth model. The initial stress clearly has a significant impact on the residual stress distribution in the current configuration. Finally in Section 5 we applied the incremental theory to initially stressed growing matter and derived the incremental field equations for wrinkles in a growing tube. With the surface impedance method, we accessed the entire incremental displacement and traction fields. In Figure 10, we showed that our MMD model recovers the results of Ciarletta et al. (2016b) for instability in a non-growing residually stressed configuration (F g = I). We checked that residual stress level and relative wall thickness are two critical factors to determine the critical instability pattern. Then we implemented a growth process with a constant growth rate ratio v. We saw in Figure 14 that the same two conditional parameters again have a significant impact on the critical instability pattern. With Figure 15, we showed that the differential growth rate ratio v does not affect the critical wave number, the relative wall thickness and the critical differential growth ratio but will generate different absolute wall thickness and critical time requirement. These results present a valid approach to tune or control the growth process and the tissue morphology. In conclusion, this work shows that the multiplicative decomposition (MD) growth model can be modified and enhanced by considering an initial stress in the reference configuration. Compared with the cumulative growth law (Goriely and Amar, 2007), the current MMD growth model proposes a more general framework where the residual stress does not need to stem only from a prior growth process. Even if the residual stress in the initial state stems from growth, the growth process can be analyzed using the current model in one step rather than using Goriely and Ben Amar's multi-step scheme (Goriely and Amar, 2007), with the first step depending on a prescribed unstressed configuration. 
Hence growth can be expanded to model the growth process between two arbitrary residually stressed states, a situation which is extremely common in real biological growth. The modified multiplicative decomposition (MMD) growth model proposed here recovers the MD growth model when there is no initial stress and the residually stressed model when there is no growth. Our instability analysis for a cylindrical tube with constant growth rates shows that wrinkle patterns are related to initial conditional parameters including the initial stress level and the corresponding geometric size. The results demonstrate that the initial residual stress may affect the growth and surface morphology of bio-tissues. We believe that the initial residual stress may also affect the formation of creases, or other post-buckling patterns, symmetry breaking, wrinkle mode transition, period doubling, etc. However, this process cannot be covered by the current incremental analyses but requires non-linear finite element simulations, similar to those conducted by Jin et al. (2011) (growth without initial stress) or Ciarletta et al. (2016b) (residual stress without growth). This remains an open topic for future study.
Stallings folds for CAT(0) cube complexes and quasiconvex subgroups We describe a higher dimensional analogue of the Stallings folding sequence for group actions on CAT(0) cube complexes. We use it to give a characterization of quasiconvex subgroups of hyperbolic groups which act properly co-compactly on CAT(0) cube complexes via finiteness properties of their hyperplane stabilizers. An action on a CAT(0) cube complex often allows one to deduce algebraic and geometric consequences on the group in question. One example is the recent solution of the Haken conjecture using results of Agol [1], Haglund-Wise [13], Wise [31] and others. This was made possible through the study of quasiconvex subgroups of hyperbolic cubulated groups. An important observation about quasiconvex subgroups of hyperbolic cubulated groups was made by Haglund in [12] who proved that a quasiconvex subgroup of a cubulated hyperbolic group has a convex subcomplex on which it acts cocompactly. It is easy to see that this result could serve as a characterization of quasiconvex subgroups in cubulated hyperbolic groups. A similar statement for relatively hyperbolic groups was proved independently by Sageev and Wise [26]. A finitely generated group H acting on a metric space X is undistorted if some (hence every) orbit map H → X is a quasi-isometry (where H is endowed with the metric of shortest word with respect to a finite generating set), and is distorted otherwise. In the setting of a finitely generated subgroup H in a finitely generated group G one can consider distortion with respect to the length induced from finite generating sets of H and G. If G is hyperbolic, a subgroup is undistorted if and only if it is quasi-convex. A lot of research has been done on distortion of groups. Perhaps the easiest way of finding a distorted subgroup of a hyperbolic group is to find an infinite normal finitely generated subgroup of infinite index. For example the surface fiber subgroup of a fibered hyperbolic 3-manifold group. Rips [22] showed how to construct finitely generated normal subgroup of hyperbolic groups using a small cancellation construction. A similar construction was carried by Wise [29] arranging such that the ambient group is cubulated (in fact, it is the fundamental group of a compact 2-dimensional non-positively curved square complex). Dison and Riley [8] construct examples of groups, called Hydra groups, that are fundamental groups of non-positively curved square complexes, and have very distorted free subgroups. The distortion they achieve exceeds those found in previous works of Mitra [18] and the subsequent 2-dimensional CAT(-1) groups of Barnard, Brady and Dani [2]. This paper aims to provide a characterization of quasiconvex subgroups via finiteness properties of their hyperplane stabilizers. LetĤ be the set of hyperplanes andÎ H = n i=1ĥ i = ∅ n ≥ 0, (ĥ 1 , . . .ĥ n ) ∈Ĥ n be the set of Intersections of collections of pairwise transverse Hyperplanes. We prove the following characterization of quasiconvex subgroups of hyperbolic cubulated groups. Theorem 1.1. Let G be a hyperbolic group acting properly and co-compactly on a finite dimensional CAT(0) cube complex X. Let H ≤ G be a finitely presented subgroup. Then the following are equivalent: 1. The subgroup H is quasiconvex in G. The subgroup H is hyperbolic and for allk ∈Ĥ, Stab H (k) is quasiconvex in H. Before discussing the proof of the theorem, let us examine it in an example. 
Let G be the fundamental group of a closed hyperbolic 3-manifold which is fibered over the circle, and let H be the fundamental group of the surface fiber. As we mentioned before H is distorted, and forms a short exact sequence 1 → H → G → Z → 1. Consider the cubulation of G obtained in [16,4]. By construction, the stabilizer L = Stab G (t) of the hyperplanet is a quasiconvex surface subgroup of G. Since both L and H are surface subgroups, if L ≤ H then [H : L] < ∞, contradicting the fact that L is undistorted. Hence L/(H ∩L) Z and thus Stab H (t) = H ∩ L is not finitely generated (since it corresponds to an infinite cyclic cover of a surface). This shows that (2), (3) and (4) of Theorem 1.1 do not hold (cf. Theorem 1.2 which also applies to this case). In fact, in this case we showed that every hyperplane stabilizer in H is not finitely generated. The main tool in the proof of Theorem 1.1 is a higher dimensional analogue of Stallings' folds. In his seminal paper [27] Stallings introduced the notion of Stallings' folds in order to study finitely generated subgroups of free groups. Given a finitely generated subgroup of a free group and a generating set, Stallings' folds provide an algorithm to replace the generating set with a minimal generating set. His main observation is that given any combinatorial map between finite graphs, one can find a finite sequence of identifications of adjacent edges, called folds, of the domain graph such that the induced map from the resulting folded graph would be a local isometry. In [28], he extended this idea to more general G-trees. One useful property of these foldings is the following (see [5,Proposition p.455]). Let G be a finitely generated group. Let f be a G-equivariant simplicial map of G-trees T → T sending edges to edges, and assume the action on T has finitely generated edge stabilizers. Then one can perform finitely many Gequivariant folds of T such that the induced map on the resulting tree T → T is a G-equivariant isometric embedding, and the map f is the composition of the folds and of the embedding. In [5], Bestvina and Feighn applied this property in the case where T is the Dunwoody resolution of the tree T (see [9]). In [3], we described a generalization of resolutions in the setting of CAT(0) cube complexes. The construction can be summarized as follow. Let G be a finitely presented group, and let K be its presentation complex. Let X be a cube complex on which G acts. We build a G-equivariant map fromK, the universal cover of K, to the CAT(0) cube complex X . A connected component of the preimage of a hyperplane is called a track, and can be seen as an embedded graph inK. It defines a wall on K, and the set of all such walls defines a CAT(0) cube complex X endowed with a natural action of G and a G-equivariant map to X . The construction and the properties of resolutions are described more thoroughly in [3]. In this paper, we introduce an analogue of Stallings' folds of G-trees in the context of CAT(0) cube complexes. Given a finitely presented subgroup H of a cubulated hyperbolic group G X , we first resolve the cube complex and get an H-equivariant map X → X , where X is the geometric resolution of the action of H on X . We then provide conditions for factoring this resolution through a finite sequence of folds until the resulting folded complex embeds into the cubulation of G with respect to the combinatorial metric on cube complexes. 
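For orientation, the classical construction is easy to implement in the simplest setting. The sketch below (Python; a standard textbook illustration for subgroups of free groups, not the cube-complex machinery developed in this paper) builds the wedge of loops spelling the given generators and folds edges until the labelled graph is an immersion, i.e. no vertex carries two incident edges with the same label and orientation.

```python
from collections import defaultdict

def stallings_fold(words):
    """Stallings core graph of the subgroup generated by `words` in a free group.
    Generators are lowercase letters; their inverses are the capital letters."""
    parent = {0: 0}                           # union-find over vertices, 0 = basepoint

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):
        parent[find(u)] = find(v)

    edges, nxt = [], 1
    for w in words:                           # spell each word as a loop at the basepoint
        prev = 0
        for i, x in enumerate(w):
            if i == len(w) - 1:
                tgt = 0
            else:
                tgt = nxt
                parent[tgt] = tgt
                nxt += 1
            if x.islower():
                edges.append((prev, x, tgt))            # edge labelled x from prev to tgt
            else:
                edges.append((tgt, x.lower(), prev))    # inverse letter: reverse the edge
            prev = tgt

    changed = True
    while changed:                            # fold until no two same-labelled edges share
        changed = False                       # an endpoint with the same orientation
        out, inc = defaultdict(dict), defaultdict(dict)
        for (o, x, t) in edges:
            o, t = find(o), find(t)
            if x in out[o] and find(out[o][x]) != t:
                union(out[o][x], t); changed = True; break
            if x in inc[t] and find(inc[t][x]) != o:
                union(inc[t][x], o); changed = True; break
            out[o][x], inc[t][x] = t, o
        edges = list({(find(o), x, find(t)) for (o, x, t) in edges})
    return edges

# <a b a^-1, a b^2 a^-1> in F(a, b) folds to a single a-edge followed by a b-loop,
# reflecting that this subgroup is conjugate to <b>.
print(stallings_fold(["abA", "abbA"]))
```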
We denote such a folding sequence by where each is an elementary fold, and the map | −→ is an L 1 embedding of cube complexes. Finally we show that under some assumptions H acts coboundedly on the resulting folded complex X n , from which we deduce that H is quasiconvex in G. In general, it is not clear whether one can replace "finitely presented" with "finitely generated" in Theorem 1.1. Since our methods use the geometric resolution, which is only defined for finitely presented groups, we were unable to treat these cases. However, if the group is a surface group or if the cube complex is two dimensional then we obtain the following easier criterion for undistortion. Theorem 1.2. Let G be a finitely presented group that acts properly on a CAT(0) cube complex X , with finitely generated hyperplane stabilizers, and if one of the following holds: 1. the group G is a surface group or a free group, or 2. the complex X is 2 dimensional, then the orbit maps G → X are quasi-isometric embeddings. We remark that in the case of trees it was sufficient for the map to be a local isometric embedding for it to be a (global) isometric embedding. While such a statement is true for the CAT(0) metric on the cube complex (i.e, the L 2 metric), it is not true for the L 1 metric. On the other hand, in the setting of the above theorem we are forced to use the L 1 metric, since having an L 2 combinatorial embedding would imply the existence of a convex cocompact subcomplex core for G. Such a subcomplex does not exist even for the simple example of the cyclic group generated by the translation by (1, 1) on R 2 with its standard tiling by unit squares. Theorem 1.2 Case 2 has been independently proved using similar ideas of Stallings' folds for VH square complexes in Samuel Brown's PhD thesis [7]. In some sense, this paper can be considered as a continuation of our previous paper [3] on resolutions and finiteness properties of resolutions, in that we try to study the properties of the resolution map via Stallings' folding sequences. CAT(0) cube complexes We begin by a short survey of definitions concerning CAT(0) cube complexes. For further details see, for example, [25]. A cube complex is a collection of euclidean unit cubes of various dimensions in which faces have been identified isometrically. A simplicial complex is flag if every (n + 1)-clique in its 1-skeleton spans a n-simplex. A cube complex is non-positively curved (NPC) if the link of every vertex is a flag simplicial complex. It is a CAT(0) cube complex if moreover it is simply connected. A cube complex X can be equipped with two natural metrics, the euclidean and the L 1 (or combinatorial) metric. With respect to the former, the complex X is NPC if and only if it is NPCà la Gromov (see [10]). However, the latter is more natural to the combinatorial structure of CAT(0) cube complexes described below. Given a cube C and an edge e of C. The midcube of C associated to e is the convex hull of the midpoint of e and the midpoints of the edges parallel to e. The hyperplane associated to e is the smallest subset containing the midpoint of e and such that if it contains a midpoint of an edge it also contains all the midcubes containing it. Every hyperplaneĥ in a CAT(0) cube complex X separates X into exactly two components (see [19]) called the halfspaces associated toĥ. Thus, a hyperplane can also be abstractly identified with its pair of complementary halfspaces. 
For a CAT(0) cube complex X we denote byĤ =Ĥ(X) the set of all hyperplanes in X, and by H = H(X) the set of all halfspaces. For each halfspace h ∈ H we denote by h * ∈ H its complementary halfspace, and byĥ ∈Ĥ its bounding hyperplane, which we also identify with the pair {h, h * }. Conversely, a choice of a halfspace h for a hyperplaneĥ is called an orientation ofĥ. We denote the inclusion of halfspaces by ≤. We briefly review the terminology that will be used throughout the paper. Two distinct hyperplanesĥ,k ∈Ĥ can be either disjoint or transverse. The latter is denoted byĥ k . Two distinct halfspaces h, k ∈ H can be in one of the following arrangements: facing if h > k * , or equivalently, if bothĥ ⊂ k andk ⊂ h. transverse ifĥ andk are transverse. In this case, we denote h k. incompatible if h and k have empty intersection, or equivalently, if h * ≥ k. Otherwise, they are said to be compatible. A hyperplane in a CAT(0) cube complex separates two points if they belong to different halfspaces of the hyperplane. A hyperplaneĥ separates two hyperplanesĥ andĥ if it separates any point ofĥ from any point ofĥ , or equivalently if there exists an orientation of each of them such that h < h < h . The collection is said to be facing if any two distinct hyperplanes in are disjoint andÂ-inseparable, or equivalently if the hyperplanes have an orientation for which every pair is facing. Remark 2.2. Since any two hyperplanes in a CAT(0) cube complex are separated by finitely many hyperplanes, for every non empty set of hyperplanesÂ, and every hyperplaneĥ, there exists a hyperplanek ∈ such thatĥ andk arê A-inseparable. Pocsets to CAT(0) cube complex We adopt Roller's viewpoint of Sageev's construction. Recall from [23] that a pocset is a triple (P, ≤, * ) of a poset (P, ≤) and an order reversing involution * : P → P satisfying h = h * and h and h * are incomparable for all h ∈ P. A pocset is locally finite if for any pair of elements, the set of elements in between them is finite. In what follows we assume that all pocsets are locally finite. The set of halfspaces H of a CAT(0) cube complex has a natural pocset structure given by the inclusion relation and the complement operation * . Roller's construction starts with a locally finite pocset (P, ≤, * ) of finite width (see [25] for definitions) and constructs a CAT(0) cube complex X(P) such that (H(X), ≤, * ) = (P, ≤, * ). We briefly recall the construction, for more details see [23] or [25]. An ultrafilter U on P is a subset verifying # (U ∩ {k, k * }) = 1 for all k ∈ P and such that for all h ∈ U , if h ≤ k then k ∈ U . If we denoteĥ = {h, h * } andP = ĥ h ∈ P , then U can be viewed as a choice function U :P → P. Throughout the paper we will use both viewpoints. An ultrafilter U satisfies the Descending Chain Condition (DCC) if any descending chain k 1 > k 2 > · · · > k n > . . . of element of U has finite length. The vertices of X(P) are the DCC ultrafilters of P. Two vertices of X(P) are connected by an edge if the corresponding ultrafilters differ on a single pair in P = {{h, h * }|h ∈ P}. An n-cube is added to every one skeleton of an n-cube. Or equivalently, any n-cube corresponds to 2 n distinct DCC ultrafilters that differ on a set of n hyperplanes inP. We remark that an ultrafilter can be defined equivalently as a subset of H whose elements are pairwise compatible and it is maximal for this property. Quotients of pocsets The basic construction that enables one to fold hyperplanes is the introduction of quotients of CAT(0) cube complexes. 
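Roller's construction can also be run by brute force on a small finite pocset, where every ultrafilter automatically satisfies the DCC. In the sketch below (Python; the pocset is a made-up three-hyperplane example, not one taken from the paper), an ultrafilter is encoded as a choice of one halfspace per hyperplane, the pairwise-compatibility characterization quoted above is used to enumerate the vertices of the dual cube complex, and edges join ultrafilters differing on a single hyperplane.

```python
from itertools import product

# A small, made-up pocset: hyperplanes 1, 2, 3 with h3 < h1 (hence h1* < h3*),
# and hyperplane 2 transverse to both. The dual complex is a strip of two squares.
star = {'h1': 'h1*', 'h1*': 'h1', 'h2': 'h2*', 'h2*': 'h2', 'h3': 'h3*', 'h3*': 'h3'}
strict_leq = {('h3', 'h1'), ('h1*', 'h3*')}
hyperplanes = [('h1', 'h1*'), ('h2', 'h2*'), ('h3', 'h3*')]

def le(a, b):
    return a == b or (a, b) in strict_leq

def is_ultrafilter(choice):
    # pairwise compatibility: no two chosen halfspaces h, k with h <= k*
    return all(not le(a, star[b]) for a in choice for b in choice if a != b)

vertices = [c for c in product(*hyperplanes) if is_ultrafilter(c)]
edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]

print(len(vertices), "vertices,", len(edges), "edges")   # 6 vertices, 7 edges
for v in vertices:
    print(v)
```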
The details of this construction will be given in the language of pocsets, and thus might seem cumbersome and lacking a geometric intuition. However the basic idea remains quite simple; given an admissible equivalence relation ∼ on the hyperplanes of a CAT(0) cube complex X we would like to introduce a pocset structure on the quotient H(X)/∼ such that the dual CAT(0) cube complex X/∼:= X(H(X)/∼) is the 'smallest' CAT(0) cube complex to have a combinatorial map X → X/∼ for which the preimages of hyperplanes are equivalence classes of ∼. To explain the geometric outcome of this construction we illustrate it with an example. The example also shows an interesting phenomenon that may occur; the dimension of the cube complex can increase when passing to a quotient. Example 3.1. Consider the CAT(0) cube complex X shown on the top of Figure 1. The hyperplanes of X are shown as colored midcubes. Let ∼ be the equivalence relation on the hyperplanes of X whose classes are shown by colors, that is, two hyperplanes are ∼-equivalent if they have the same color. The quotient CAT(0) cube complex X/∼ is shown on the bottom of Figure 1 and the image of the map X → X/∼ is shown with bold lines. For the quotient to carry a pocset structure one needs to restrict to a subclass of equivalence relations. Definition 3.2. Let (H, ≤, * ) be a pocset. An equivalence relation ∼ on H is an admissible equivalence if it satisfies the following ∀h, k ∈ H: We define a pocset structure on H/∼ in the following way. • The complementation * : • In Lemma 3.6 we will show that this definition indeed gives a pocset structure. For this we will need the following two easy lemmas. Lemma 3.5. Given two halfspaces h and l and an equivalence class Proof. Letk be a representative of [k] such thatk andĥ are [k]-inseparable. By Remark 2.2, such an element exists and Lemma 3.4 implies that k containsĥ. There are two cases: eitherk andl are [k]-inseparable which by Lemma 3.4 implies that k * containsl, which proves thatk separatesĥ andl, or there is a hyperplanek ∈ [k] such thatk separatesk andl, but sincek cannot intersect h and cannot separateĥ andk, it must also separateĥ andl. In both cases we found a hyperplane in [k] that separatesĥ andl. If moreover h < l, letk be a hyperplane that separatesĥ andl and such thatk andĥ are [k]-inseparable. Then from Lemma 3.4 and h < l it follows that h < k < l. * and that the complementary operation is order reversing. It is thus a pocset. We now prove that the pocset is locally finite. [k] in H/∼ only depends on the orientations of elements of I ∪ J . Let us first prove thatÎ ∪Ĵ is a facing collection of hyperplanes. Assume otherwise that a hyperplanel 1 separatesl 2 andl 3 inÎ ∪Ĵ . We are in one of the following case. We can assume thatl 1 andl 2 belong toÎ andl 3 belongs tô J . As before, we can findl 2 in J such thatl 1 separatesl 2 andl 3 , again a contradiction. Note that the converse to Lemma 3.7 does not hold in general (e.g. the red and light blue hyperplanes in Example 3.1). Maps between pocsets In what follows, quotients will arise from maps between pocsets. For classical Stallings' G-tree folding sequences the maps considered are simplicial (Gequivariant) maps of trees. The analogous notion in our setting will be called resolutions. Similarly, the analogous notion of injective simplicial maps of trees -L 1 isometric embeddings -will be simply called embeddings. The next definitions make these notion precise in the language of pocsets, using the notion of admissible maps. 
has an orientation h that is compatible with all the halfspaces in f (H). An admissible map f : H → H is an embedding of pocsets if f is injective and for all h, k ∈ H, if f (h) ≤ f (k) then h ≤ k. We denote such a map by H | −→H . Note that by injectivity property (AM3) is superfluous in this case. An admissible map f : The motivating example for this definition is the geometric resolution of an action of a finitely presented group on a CAT(0) cube complex, as the following lemma shows. The proof of the lemma is straight forward from the definition of the geometric resolution. Lemma 4.3. Let G be a finitely presented group, and let X be a finite dimensional CAT(0) cube complex on which G acts. Let X be the geometric resolution of X described in [3]. Then the map f : H(X) → H(X ) is a resolution of pocsets. The following lemma describes how resolutions between pocsets can be realized as maps between the associated CAT(0) cube complexes. Lemma 4.4. Let f : H → H be a resolution, and let X, X be the CAT(0) cube complexes associated with H, H respectively. There is a CAT(0) cube subcomplex Z ⊆ X that decomposes as a product Z = Z 1 ×Z 2 , such that f (H) = H(Z 1 ), and the map f induces a canonical combinatorial (and hence L 1 -distance-nonincreasing) map F : X → Z 1 . In particular, for every choice of vertex z ∈ Z 2 the map f induces a map F : If moreover, f is an embedding of pocsets, the induced map F is an L 1embedding. Proof. We partition the setĤ into three subsets in the following way: letĤ 1 = f (Ĥ); letĤ 2 be the set of all hyperplanes inĤ \ f (Ĥ) that are transverse to all hyperplanes in f (Ĥ); and letĤ 3 be the remaining set, i.eĤ 3 By property (AM4) and the definition, every hyperplaneĥ ∈Ĥ 3 has a unique choice of halfspace h that contains or is transverse to any hyperplane in H 1 . By the definition ofĤ 2 the same choice of h ∈ H 3 will either contain or be transverse any hyperplane inĤ 2 . Thus the subcomplex Z = ĥ ∈H3 h , where h is the unique choice of halfspace that satisfies the above, is isomorphic to X(H 1 ∪ H 2 ), which naturally decomposes as a product We define the map F : X → Z 1 in the following way. For a vertex x of X, which we think of as the ultrafilter choice function x :Ĥ → H, we define F (x) to be the following ultrafilter. For allĥ ∈Ĥ 1 , let F (x)(ĥ ) be f (x(ĥ)) whereĥ is a hyperplane of f −1 (ĥ ) such that x(ĥ) is minimal in x. This is well defined by the axiom (AM3). The function F (x) :Ĥ 1 → H 1 is an ultrafilter. Letĥ andk be distinct hyperplanes, we have to show that F (x)(ĥ ) and F (x)(k ) are compatible. Assume by contradiction that they are incompatible. Then, their pre-images in H/∼ f under the embedding of pocsets H/∼ f | −→H are incompatible. Let h, k be the minimal halfspaces x(ĥ), x(k) associated to hyperplanesĥ andk in f −1 (ĥ ) and f −1 (k ) respectively, as described in the definition of F (x) above. Then they satisfy one of the following. • The hyperplaneĥ separates x andk, which similarly gives a contradiction. • The two halfspaces h and k are facing. We may assume that they are since otherwise we can find representative in x satisfying the first bullet point. Thus, h and k must be incompatible, contradicting the fact that they both contain x. Finally, the ultrafilter F (x) is DCC since x is. This defines a map on vertices. To show that this map extends to edges, it is enough to show that adjacent vertices are sent to adjacent vertices. Recall that two vertices are adjacent if their ultrafilters differ on exactly one hyperplane. 
Each of the two orientation of this hyperplane is minimal in the corresponding vertex. Hence their images have to differ exactly on this hyperplane by the construction of the map F . Similarly, the map extends to higher dimensional cubes because any pairwise transverse set of distinct hyperplanes projects injectively to a pairwise transverse set of distinct hyperplanes, by property (AM2). If moreover f is an embedding of pocsets, using (AM4) the collection of hyperplanes that separate x and y is in one-to-one correspondence with the collection of hyperplanes that separate F (x ) and F (y ). Thus, F is an L 1embedding. Stallings' folds We say that a group acts without inversion on a CAT(0) cube complex if there are no elements that send a halfspace to its complement. Given a group acting on a CAT(0) cube complex, by replacing the CAT(0) cube complex by its cubical barycentric subdivision we may assume that the action is without inversions. Therefore, in what follows we consider only group actions without inversions. The main goal of constructing Stallings' folds is to prove that, under some conditions, a resolution can be decomposed as a finite sequence of simpler quotients, called elementary folds, which we introduce in the following definition. It is worth noting at this point that classical Stallings' folds for G-trees are indeed elementary folds also in our setting. As we said, the goal is to show that certain resolutions can be factored by a finite sequence of elementary folds; this is the content of Lemma 6.1 and Proposition 6.2. Before diving into more technical lemmas, we demonstrate the basic principles of these lemmas in the following example. Figure 2) As pocsets, the former is H = {h i , h * i |i ∈ Z} with the poset structure h i ≥ h j and h * i ≤ h * j for all i ≤ j, and the obvious complementation involution. The latter is H = {k i , k * i |i ∈ Z/4Z} with the poset structure k * i ≤ k i+2 for all i ∈ Z/4Z. The action of Z =< a > on H is given by ah i = h i+2 (and ah * i = h * i+2 ), and the action of Z on H is given by is an elementary fold. After folding, we obtain a fan shaped square complex, in which Z-many squares share a common vertex and two consecutive squares share an edge, on which Z acts by fixing the shared vertex and shifting the squares (see the middle figure in Figure 2). As a pocset, it is isomorphic to H 1 = {t i , t * i |i ∈ Z}, with the poset t * i ≤ t j for all |i − j| ≥ 2, the Z action is given by at i = t i+1 and the folding map φ 0 is given by The map f induces a map f 1 : H 1 → H which can be written explicitly by f 1 (t i ) = k i (mod 4) . This map is again a resolution, and the Z-equivariant equivalence relation generated by t 0 ∼ t 4 is an elementary fold. The resulting quotient H 2 is isomorphic to the pocset H , and under this isomorphism the quotient map φ 1 : H 1 → H 2 = H is the map f 1 . Thus, our sequence of folds terminated with the pocset H 2 which embeds in H (in this case, they are isomorphic). Our main immediate goal is Lemma 6.1, which states that a G-equivariant resolution can be factored through an elementary fold. However, we will first need some technical lemmas that describe what the equivalence relation of an elementary fold looks like, and how to relate it to the arrangement of the hyperplanes in its quotient. Proof. Let h 1 , h 2 be the orientation ofĥ 1 ,ĥ 2 . By definition h 1 , h 2 are facing. Extend the orientation using the action of G to the orbits G.ĥ 1 and G.ĥ 2 , this is well defined since G acts without inversions. 
For any two equivalent hyperplaneŝ t 1 ∼t 2 there exists a sequence of distinct hyperplanest 1 =l 1 ,l 2 , . . . ,l n =t 2 , such that for all i there exists g i ∈ G such that (l i ,l i+1 ) = (g i ·ĥ 1 , g i ·ĥ 2 ) or (g i ·ĥ 2 , g i ·ĥ 1 ). Since the set of elementary foldable pairs is stable by the action, all the pairs are elementary foldable and none of the hyperplanesl i separate any of the pairs (l j ,l j+1 ). In addition, by property (AM2),l i is not transverse tol j for all i, j. This implies that for all i = j,l j ⊂ l i , which proves the lemma. Proof. By definition, the equivalence relation ∼ satisfies Property (AER2). Properties (AER1) and (AER4) follows directly from Lemma 5.4. Property (AER3) follows from the fact that the map f 0 is admissible, and thus ifĥ k then their images in H are distinct and in particular they are not ∼-equivalent. The following lemma characterizes when two hyperplanes are transverse after an elementary fold. It might be useful to compare the two cases of the lemma with Example 5.3. The transverse pairs of hyperplanes after the first fold correspond to Case 2, while those of the second fold correspond to Case 1. Using the same construction as in the proof of Lemma 5.4, there exists a sequence of distinct hyperplanesĥ =l 1 ,l 2 , . . . ,l n =ĥ • , such that for all i there exists g i ∈ G such that (l i ,l i+1 ) = (g i ·ĥ 1 , g i ·ĥ 2 ) or (g i · h 2 , g i ·ĥ 1 ). Since no element of [k] is transverse to an element of [h], there exists i such that g i · h * 1 < k < g i · h 2 (up to interchanging h 1 and h 2 ). For the last part of the lemma, let us first remark that from Lemma 5.4 we know thatĥ ∼k. Ifk is transverse to an element of [ĥ] then by definition [ĥ] [k]. So we may assume thatk is disjoint from the elements of [ĥ]. Using the same construction as previously, there exists g ∈ G such thatĥ ∼ g ·ĥ 1 and k separates g ·ĥ 1 and g ·ĥ 2 . Sinceĥ 1 andĥ 2 is an elementary foldable pair, the hyperplanek cannot be equivalent to any other hyperplane separating g ·ĥ 1 and g ·ĥ 2 , that is, both the pairsk, g ·ĥ 1 andk, g ·ĥ 2 are [ĥ]∪[k]-inseparable, which by the definition of the pocset structure of the quotient implies that [ĥ] [k]. Proof. Let h and k be two halfspaces of H and assume φ(h) and φ(k) are transverse. By Lemma 5.6, either some preimages h • and k • are transverse in H, in which case, by Property (AM2) their images are transverse in H . Or, up to swapping h and k there exists k • ∼ k and g ∈ G such that g · h * 1 < k • < g · h 2 and g · h 1 ∼ g · h 2 ∼ h where φ is the elementary fold that is generated by the elementary foldable pair h 1 ∼ h 2 . Since h 1 and h 2 are an elementary foldable pair, the only preimage of f (k) separating g ·ĥ 1 and g ·ĥ 2 is k • . Therefore, by ĥkl hklĥkl Figure 3: Hyperplanes that were separated can be reunited thanks to Stallings. the definition of the order, the images of h and k in H/∼ f are transverse, and thus f (k) and f (ĥ) are also transverse. In the setting of trees, if two edgesĥ andl are separated by a third edgek, and after a fold φ the edge φ(k) does not separate φ(ĥ) and φ(l), then either φ foldsk to one ofĥ,l, or φ foldsk to another edgek that also separatesĥ andl, and the triple φ(ĥ), φ(l) and φ(k) is a facing triple. See Figure 3. The following lemma describes a similar behavior of CAT(0) cube complex folds. Again, it is worth comparing also to Example 5.3, where after the first fold there is no pair of hyperplanes which is separated by a third. 
To prove the last part of the lemma, notice that there are at least two lifts of [k ] in betweenĥ andk. Indeed under the orientation such that all three halfspaces are facing, every inseparable lift of pairs should be facing, but this implies that there is more than one lift of [k ] in betweenĥ andl. Lemma 5.4 shows that there are exactly two. Folding sequences Let us now state the key lemma in proving that resolutions can be decomposed as sequences of folds. Proof. To simplify the notation, we denote H ∼ , H 0 and H 1 instead of H/∼, H/∼ f0 and (H/∼)/∼ f1 . Elements in H ∼ , H 0 and H 1 will be denoted with indices · ∼ , · 0 and · 1 . We first show that f 1 is admissible. • Property (AM2) is given by Lemma 5.7 • Property (AM3). Let h ∼ and k ∼ be facing halfspaces in H ∼ and assume that f 1 (ĥ ∼ ) = f 1 (k ∼ ) and that they are )-inseparable, they are not separated by the images under φ of the elements of f −1 0 (f 0 (ĥ)) that separateĥ andk in H. Thus by Lemma 5.8 the number of elements in f −1 0 (f 0 (ĥ)) in betweenĥ andk is even because they come in pairs of ∼ equivalent hyperplanes. Now using Property (AM3) of f 0 , we obtain that these elements form an alternating sequence of facing and incompatible pairs, therefore We are left to show that H 0 = H 1 . By definition there is a bijection between the two sets, and it is easy to see that it commutes with * . We need to show that the pocset structure is the same. That is, given two elements h and k in H, • if h 0 and k 0 are transverse then h 1 and k 1 are transverse, For the first bullet point, assume h 0 and k 0 are transverse. Then, from the definition of the partial order of a quotient, two cases may occur. • There are preimages in H that are transverse, in which case, by construction of H ∼ and Property (AM2) of f 1 , the halfspaces h 1 and k 1 are transverse. • There are two pairs of halfspaces (h, k) and (h , )-inseparable with contradictory orientations. But then their images (h ∼ , k ∼ ) and (h ∼ , k ∼ ) are transverse or inseparable with contradictory orientations, therefore h 1 and k 1 are transverse. For the second bullet point, let h 1 and k 1 be the elements associated to h 0 and k 0 . We want to show that h 1 < k 1 . By definition The fact that no elements in f −1 1 (h 1 ) and f −1 1 (k 1 ) are transverse is a direct application of Lemma 5.7. Let It is sufficient to show thatĥ ∼ ⊂ k ∼ , then by symmetry of the argument , such thatk andĥ are f −1 0 (k 0 )-inseparable, and is betweenĥ andk or equal tok. Since h 0 < k 0 , we haveĥ ⊂ k , and thus by inseparabilityĥ ∼ ⊂ k ∼ . Lemma 6.1 together with Remark 5.2 show that as long as the resolution f is not an embedding, it admits an elementary fold φ, and it can be decomposed as f = f 1 • φ where f 1 is a resolution. Iterating this gives us a factorization of f into a sequences of folds. Proposition 6.2 shows, under some conditions, that if we choose the folds correctly then this process terminates. Proposition 6.2. Let G be a group, let H, H be G-pocsets, and let f : H → H be a G-equivariant resolution. If G has only finitely many orbits in H, and the G-stabilizers of hyperplanes inĤ are finitely generated, then there exists a finite folding sequence Let us first prove the following lemma. Proof of Proposition 6.2. We first show that by a finite folding sequence one can get H i such that H i /G → H /G is injective. If this map is not injective, choose two hyperplanesĥ,k such that f (ĥ) = f (k) butĥ andk belong to different G-orbits. 
By Lemma 6.3, we can find a sequence of folds that identifies h and k, and thus reduces the number of G-orbits of hyperplanes. Now, we may assume that the map f /G : H/G → H /G is injective. For each G-orbit of a hyperplane G.ĥ in its image we fix a hyperplaneĥ in H that belongs to this orbit and a finite set Sĥ of generators of the hyperplane stabilizer Stab G (ĥ ). We choose some representativeĥ ∈ H that is mapped by f toĥ . Let cĥ be the number of generators in Sĥ that do not belong to Stab G (ĥ). Let us define the complexity of f to be We prove that if c f > 0 then by a finite folding sequence one can reduce c f . Letĥ be such that cĥ > 0, and let Sĥ andĥ be as in the definition of cĥ , and let s ∈ Sĥ be a generator of Stab G (ĥ ) that does not belong to Stab G (ĥ). Note that f (ĥ) = f (sĥ) thus by Lemma 6.3, we can perform finitely many elementary folds untilĥ and sĥ are identified. The stabilizer of the resulting hyperplane contains s thus reduces the complexity by at least 1. We complete the proof by observing that if f /G is injective and c f = 0 then f is an embedding of pocsets. Finally, we show that a cobounded action remains cobounded after folding. Note that this is immediate for classical G-tree folds, however this becomes less clear in our setting since the dimension can increase. Lemma 6.4. Let G be a group acting coboundedly without inversion on a CAT(0) cube complex X. Let X be a CAT(0) cube complex obtained by a Gequivariant elementary fold φ of H(X) and assume that X has finite dimension. Then the action of G on X is cobounded. Proof. Let F : X → X be the map between CAT(0) cube complex associated to φ. From Lemma 4.4, the map F is distance non-increasing. It is thus sufficient to prove that X is at bounded distance from the image of F . We will show that any maximal cube in X contains at least a vertex in the image, which would imply that that the L 1 distance between X and F (X) is bounded by the dimension of X . For a maximal cube C in X , we associate a DCC ultrafilter U on H whose corresponding vertex x ∈ X satisfies F (x) ∈ C . LetĈ be the set of hyperplanes that cross C . We partition the set of hyperplanes inĈ into two disjoint subsets: the setĈ N F of non-folded hyperplanes inĈ (i.e, hyperplanes that have one preimage under φ), and the setĈ F of folded hyperplanes inĈ . LetĈ,Ĉ N F ,Ĉ F be the corresponding sets of preimages. We define the ultrafilter U according to the following cases: • Forĥ inĈ N F , we set U (ĥ) to be an arbitrary choice of orientation ofĥ. • For eachĥ inĈ F , by lemma 5.4, there is an orientation for the hyperplanes in φ −1 (φ(ĥ)) for which they are facing. We set U (ĥ) to be this orientation. • For eachĥ inĤ \Ĉ, since C is maximal, φ(ĥ) is disjoint from some hyperplanek ∈Ĉ . We choose U (ĥ) to be the orientation h ofĥ that contains the preimages φ −1 (k ). Such an orientation exist since otherwise the hyperplaneĥ would be transverse to a preimage ofk or separate two preimages ofk . But then, by the converse implication of Lemma 5.6, φ(ĥ) andk would be transverse, contradicting the hypothesis. Let us first check that U is an ultrafilter of X. Since any hyperplane in H \Ĉ is oriented such that it contains all preimages of a hyperplane inĈ , it is sufficient to check that the orientations on hyperplanes inĈ are compatible. Now takeĥ andk inĈ . By assumption, the hyperplanesĥ andk are transverse. 
By Lemma 5.6, either there existsk in the preimage ofk that is transverse to a preimageĥ ofĥ , in which case, since all orientations of φ −1 (ĥ ) ∪ φ −1 (k ) containk ∩ĥ, any pair of preimages is oriented in a compatible way, or, up to interchangingĥ andk , there are two hyperplanesk 1 andk 2 in φ −1 (k ) that are separated by a hyperplaneĥ ∈ φ −1 (ĥ ). So U (k 1 ) and U (k 2 ) are facing and containĥ. By construction any hyperplane in φ −1 (ĥ ) is oriented to containĥ, and hyperplanes in φ −1 (k ) are oriented such that they are pairwise facing, in particular they containk 1 andk 2 , and thus alsoĥ. This shows that any two hyperplanes in φ −1 (ĥ ) ∪ φ −1 (k ) have compatible orientations in U . Moreover U satisfies the DCC condition. Indeed, assume by contradiction that there is an infinite descending chain (h n ). By construction, each h n is oriented towards all preimages of an element ofĈ . As there are finitely many elements ofĈ , there exist a preimage of an element ofĈ that is contained in every halfspace h n (we have h n ⊂ h n−1 ), contradicting the local finiteness given by Lemma 3.6. Let x be the vertex corresponding to the ultrafilter U . Let us prove now that F (x) belongs to C . We have to show that any hyperplaneĥ not in C is oriented toward a hyperplane of C in the ultrafilter defined by F (x). Take the hyperplaneĥ in φ −1 (ĥ ) such that x(ĥ) is minimal in x. By construction there existsk ∈Ĉ such that U (ĥ) contains all preimages ofk . Hence, F (x)(ĥ) containsk . Undistorted subgroups In this section we prove the two main results regarding undistorted (and quasiconvex) group actions on CAT(0) cube complexes. We begin this section by recalling the construction of the geometric resolution, and provide the setting for the remainder of the section. Let G be a finitely presented group, let K be a finite two-dimensional simplicial complex such that G = π 1 (K), and letK be its universal cover. Let G act on a finite dimensional CAT(0) cube complex X . Without loss of generality we may assume that G acts without inversions. Since G K (0) freely, there exists a G-equivariant mapK (0) → X (0) . Extending this map G-equivariantly by mapping edges to combinatorial geodesics and simplices to area minimizing discs, we get a G-equivariant mapK → X . The connected components of preimages of the hyperplanes of X are embedded graphs inK, which we call tracks. Each track separatesK into two connected components. Therefore, the collection of all tracks defines a natural pocset structure (associated to a system of walls) which we denote by H. In this pocset structure each hyperplaneĥ ∈Ĥ is associated to a track which we denote by tĥ. Moreover there is a natural resolution of pocsets H → H , where H is the pocset of halfspaces of X . The collection of tracks is invariant under the action of G, and thus, descends to a collection of immersed graphs in K. Since K is finite, this collection is finite. This shows that G has finitely many orbits of hyperplanes inĤ. Thus, if we further assume that for allĥ ∈Ĥ the stabilizer Stab G (ĥ ) is finitely generated then by Proposition 6.2 there is a finite folding sequence factoring H → H . Let us denote by the corresponding sequence of CAT(0) cube complexes. Moreover, since K is finite each track has a finite (immersed) image in K. Thus, it is easy to see that tĥ is Stab G (ĥ)-invariant (in fact, Stab G (tĥ) = Stab G (ĥ)) and that Stab G (ĥ) acts cocompactly on tĥ. However, it might not act cocompactly onĥ. 
In the following claim we generalize these properties of tracks to hyperplanes inĤ i . To do that we extend tracks to immersed graphs (which are not necessarily tracks) which we call "saturated tracks". Claim 7.1. For each 1 ≤ i ≤ n there is a graph T i with an action of G and an immersion T i K such that the following holds. 1. The graph T 0 is the disjoint union of the tracks of the geometric resolution, that is, T 0 = ĥ ∈Ĥ tĥ →K where tĥ →K is the track associated tô h ∈Ĥ =Ĥ 0 . 2. For all 1 ≤ i ≤ n, T i−1 ⊆ T i and both the G-action and the immersion of T i extend that of T i−1 . Let T i+1 K extend the map T i K G-equivariantly such that the edgeα is mapped to α. By construction T i ⊆ T i+1 comes with a cocompact G-action, and an immersion T i+1 K which is G-equivariant and extends T i K , proving properties 2 and 3. G acts cocompactly on T i and the immersion T The equivalence relation on hyperplanes that produces the quotientĤ i Ĥ i+1 is generated by G-equivariantly identifying the two hyperplanesĥ i 1 ∼ h i 2 . Similarly, the equivalence relation on i-saturated tracks (i.e, components of T i ) in which two i-saturated tracks are equivalent if they belong to the same component of T i+1 is generated by G-equivariantly identifying tĥi 1 and tĥi 2 (as this is exactly the effect of connecting them with the edgeα). Thus the following is a well defined bijection. The (i + 1)-saturated track tĥ i+1 associated to a hyperplaneĥ i+1 ∈Ĥ i+1 is the connected component of T i+1 that contains the i-saturated track tĥ i of a preimageĥ i ofĥ i+1 under the mapĤ i Ĥ i+1 . Clearly, this bijection is G-equivariant which proves property 4. Finally, notice that since the saturated tracks are connected components, if g ∈ G is such that gtĥ i ∩ tĥ i = ∅ then g ∈ Stab G (tĥ i ) = Stab G (ĥ i ). Therefore, property 5 follows from property 3. Proof. Ifĥ i k i ∈Ĥ i then from Lemma 3.7 there are two possible cases for their preimages under the mapĤ →Ĥ i . There are intersecting preimagesĥ,k ∈Ĥ ofĥ i ,k i respectively, which implies that tĥ and tk intersect, and thus also tĥ i and tk i . Or, up to exchangingĥ i ,k i there is a preimageĥ ofĥ i that separates two preimagesk 1 ,k 2 ofk i . In this case, the corresponding track tĥ separates the tracks tk 1 , tk 2 . But since tĥ ⊆ tĥ i and tk 1 , tk 2 ⊆ tk i , and tk i is connected we see that tĥ i and tk i intersect. Theorem 7.3. Let G be a finitely presented group that acts properly on a CAT(0) cube complex X , with finitely generated hyperplane stabilizers and such that the action on the geometric resolution is cobounded. Then the orbit maps G → X are quasi-isometric embeddings. Proof. Following the above discussion, we see that there is a folding sequence By applying inductively Lemma 6.4, G acts coboundedly on each of X i , and in particular on X n . Since G acts properly on X and the map X n | −→X is combinatorial, G acts properly on X n . The action of G on X n is proper and cobounded, and therefore X n is G-equivariantly quasi-isometric to G. The Gequivariant embedding X n | −→X completes the proof. As a Corollary we obtain Theorem 1.2. Proof of Theorem 1.2. Using theorem 7.3, we only need to show that in each of these cases, the action on the resolution is cobounded. Case 1 follows from [24]. Case 2 follows from Theorem 1.1 of [3]. The second theorem gives equivalences between different quasiconvex properties of subgroups and the quasiconvexity of the group. For this we need the notion of cocompact core. 
For the remainder of this section we assume that G is Gromov hyperbolic. The following lemma can be viewed as a generalization of the thin triangle condition for higher dimensional simplices. Definition 7.5. For R ≥ 0, a collection of subsets A 1 , . . . , A n is R-coarsely intersecting if their R-neighborhoods have a non-empty common intersection. A collection of subsets A 1 , . . . , A n is pairwise R-coarsely intersecting if any pair is R-coarsely intersecting. The following lemma appears as Lemma 7 in [20] in the case of convex subsets. Even though their proof works in the quasiconvex case, we include a proof for the purpose self-containment. Lemma 7.6 (The Thin Simplex Lemma). Let H be a δ-hyperbolic geodesic space, let A 1 , . . . , A d be R-quasiconvex subsets for some R, and assume that A i pairwise R-coarsely intersect. Then there exists R = R (δ, d, R) such that A 1 , . . . , A d R -coarsely intersect. Proof. Let x i,j denote an intersection point of the R-neighborhoods of A i and A j . By [6, Proposition 3.2], there exists a finite metric tree T with distinguished points y i,j (1 ≤ i, j ≤ d), and a quasi-isometric embedding ψ : T → H such that ψ(y i,j ) = x i,j and the quasi-isometry constants depend only on δ and d. For each 1 ≤ i ≤ d, let T i be the subtree of T spanned by {y i,j |1 ≤ j ≤ d}. Since the subtrees T i pairwise intersect, by the Helly property there exists an intersection point y ∈ d i=1 T i . The image y under ψ is on the quasiconvex sets ψ(T i ), which by the quasiconvexity of A i and the stability of quasi-geodesic, are at bounded distance R = R (δ, d, R) from each A i . When applying the previous lemma on cosets of quasiconvex subgroups we can deduce the following. Lemma 7.7. If H is a hyperbolic group and L k , k = 1, . . . , r, are R-quasiconvex subgroups. Then for any d, H acts co-finitely on collections of d pairwise Rcoarsely intersecting cosets of L k . Proof. From the Thin Simplex Lemma there exists R such that any collection of pairwise R-coarsely intersecting cosets of L k , 1 ≤ k ≤ r must intersect some R ball. Since H acts on itself coboundedly, up to the action of H there are only finitely many such balls, and thus only finitely many collections of d cosets that intersect them. We are now ready to prove Theorem 1.1. In fact, we prove the following version. Theorem 7.8. Let G be a hyperbolic group acting properly and co-compactly on a finite dimensional CAT(0) cube complex X . Let H ≤ G be a finitely presented subgroup. Then the following are equivalent: 1. The subgroup H is quasiconvex in G. 2. For allt ∈ÎH (see definition in the introduction), the group Stab H (t) is finitely presented. 4. The subgroup H is hyperbolic and for allk ∈Ĥ , Stab H (k) is quasiconvex in H. The subgroup action H X satisfies Property CC. Proof. We recall the following 4 facts: (a) the intersection of (finitely many) quasiconvex subgroups is quasiconvex, (b) a quasiconvex subgroup of a hyperbolic group is itself hyperbolic, and therefore finitely presented, (c) a subgroup of a quasiconvex subgroup is quasiconvex in the subgroup if and only if it is quasiconvex in the ambient group, and (d ) if a hyperbolic group G acts properly co-compactly on a CAT(0) cube complex then its hyperplane stabilizers Stab G (ĥ) are quasiconvex in G. From the above facts it is easy to see that 1 implies 2, 3 and 4. Indeed from (d ), hyperplanes stabilizers in G are quasiconvex in G. From (a), elements in IH are quasiconvex, and from (b), they are finitely presented. Therefore 1 implies 2 and 3. 
Using (c), we deduce 1 =⇒ 4. In the remaining cases we are in the setting of the discussion at the beginning of the section, where G is replaced by the subgroup H (i.e, H = π 1 (K) etc.). Therefore, we have a geometric resolution X → X for the action of H on X and a folding sequence Letĥ n 1 , . . . ,ĥ n r be a set of H-orbit representatives for H Ĥ n (recall that the action of H onĤ n has finitely many orbits). Let us denote by L k = Stab H (ĥ n k ) for all 1 ≤ k ≤ r. By Property 5 of Claim 7.1 for all 1 ≤ k ≤ r, L k acts cocompactly on the n-saturated track tĥn k . If we fix x 0 ∈K, then there exists R ≥ 0 such that the orbit L k .x 0 is at Hausdorff distance at most R from the image of tĥn k inK for all 1 ≤ k ≤ r. It follows that the set gL k .x 0 is at Hausdorff distance at most R from g.tĥn k inK for all g ∈ G and 1 ≤ k ≤ r. Therefore, by Claim 7.2 if gĥ n k g ĥ n k then the corresponding n-saturated tracks gtĥn k and g tĥn k intersect, and the corresponding sets gL k .x 0 and g L k .x 0 are 2R -coarsely intersecting. Notice that the orbit map H →K defined by g → g.x 0 is a quasiisometric, in particular there exists R ≥ 0 such that if gL k .x 0 and g L k .x 0 are 2R -coarsely intersecting, then gL k and g L k are R-coarsely intersecting (for all g, g ∈ G and 1 ≤, k, k ≤ r). To summarize this discussion, there exists R such that a collection of transverse hyperplanes inĤ n corresponds to a pairwise R-coarsely intersecting collections of cosets of the hyperplane stabilizers L k in H. 4 =⇒ 5: Since H is hyperbolic and L i are quasiconvex, we can apply Lemma 7.7 to show that H acts cofinitely on cubes of X n . This implies that it acts cocompactly on X n . Finally, since X n | −→X and H acts properly and cocompactly on X n we have shown that H has Property CC with Y = X n . 3 =⇒ 5: As before, there exists R such that a collection of transverse hyperplanes in H n corresponds to a pairwise R-coarsely intersecting collection of cosets of the hyperplane stabilizers L k in H. Since H is a (finitely generated) subgroup of G, there exists R 2 such that a pairwise R-coarsely intersecting collection of cosets in H is pairwise R 2 -coarsely intersecting in G. Since the hyperplane stabilizers are assumed to be quasiconvex in G, and G is hyperbolic, we can apply Lemma 7.6 to deduce that there exists R 3 such that any such collection is R 3 -coarsely intersecting in G. But since H, being a finitely generated subgroup of G, is coarsely embedded in G, there exists R 4 such that a collection of subsets of H that is R 3 -coarsely intersecting in G is R 4 -coarsely intersecting in H, and thus, as in the proof of Lemma 7.7, H acts cofinitely on such collections, which again proves Property CC with Y = X n . Finally we prove 2 =⇒ 1 by induction on the dimension of X . If dim(X ) = 1, then it follows from the well known fact that a finitely generated subgroup of a group acting properly on a tree is quasiconvex. Now let dim(X ) = d + 1 ≥ 2. The hyperplanes of X are CAT(0) cube complexes of dimension at most d. For allk ∈Ĥ(X ) the subgroup Stab H (k) is a finitely presented group acting on a d-dimensional CAT(0) cube complex with finitely presented intersections of hyperplane stabilizers. Hence, by the induction hypothesis, they are quasiconvex in Stab G (k), and thus also quasiconvex in G. The desired conclusion follows from 3 =⇒ 1, that we have already proved.
2017-08-31T13:50:32.000Z
2016-05-09T00:00:00.000
{ "year": 2016, "sha1": "baa0a6098158b812505f53ee31c7f92d8e01cce2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d8177cd702e740159c0d533be87f7d3cb4f7fafe", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
212519961
pes2o/s2orc
v3-fos-license
Alternative Didactic Strategy to Link Teaching with Environmental Protection To link education with the protection of the environment and soils through a didactic, innovative and interesting strategy in the training and awareness process, the development of an agricultural Biol was carried out as an alternative to produce ecological fertilizers, low cost for the producer and institutions dedicated to producing different agricultural items, acting as an excellent foliar stimulant for the plants and a complete soil rejuvenator; with decomposing materials such as; manure, water, milk (liquid), chopped grass, molasses, organic humus (fertilizer) and minerals as a supplement, through a biodigester (Release of accumulated gases). Educational alternative in young people who are in the process of training in Secondary Education in Mérida Venezuela, given the indiscriminate use of chemicals that damage the planet, plants and soil. This article focuses on an action research, supported by a theoretical and practical teaching planning. As a result, relevant learning was obtained for the pedagogical and agricultural system, managing to focus students on acquiring new strategies and ideas, conserving and protecting the environment through cooperation, interest and motivation to recommend this alternative to other students, teachers and Farmers develop new fertilizers that benefit the different agricultural items and have healthy and quality food for human consumption. Introduction Since man began on his land, the cultivation of a variety of crops and life in his crops showed that the soil was tired, the nutrients were depleting over time, so he chose to apply new measures aimed at recovering his productivity The first were: that the land rested after each harvest, the rotation of the fields destined for sowing and the obtaining of agricultural items, this is how in middle education the alternative of including theoretical and practical components in the process of teaching, thus expanding the capacity of young people in the agricultural area as a reflective approach to protect the environment. Faced with this reality, teaching is the most important world where we can transform the human being; in it, adolescents have the opportunity to participate, integrate and analyze a teaching process where they must focus on learning new knowledge and then use them in field work as an alternative. In this sense, the Lyceum becomes a place where young people are trained with different strategies considered important in the training process, in order to generate changes in the human being, from different levels and modalities that the pedagogical system offer, developing their own physical, psychic, cultural and social status to create the conditions of aptitude, vocation and aspiration to conserve the soil and items. Consequently, the population has been increasing food needs, demanding that the land produce abundantly and permanently, since the results are weak lands with the need to improve their nutrients and obtain better crops. Hence the alternative to solve the problem by obtaining a homemade Biol "Liquid Fertilizer" that fulfills the functions of foliar stimulant and fertilizer for the soil, from organic biodigester wastes such as cattle manure, water, molasses, liquid milk, chicha, mineral components, chopped green grass and organic fertilizer (humus). However, the development and application of agricultural Biol is a simple and practical technique that consists of two components: a solid and a liquid. 
The first is known as biosol and is obtained as a product of the discharge or cleaning of the biodigester where the Biol is made, the liquid part is known as foliar fertilizer where the solid residue is constituted by non-degraded organic matter excellent for the production of any crop. That is, it allows to balance the nutrient content present in the soil, the plants grow remain healthy and resistant, offering abundant and quality products, reflecting in them the conservation of the environment and better health to the consumer through products brought from the field With ecological alternatives. Similarly in the field of education and agriculture, it is considered important what is evaluated by other researchers and collect data that respond to the application of Biol and chemical fertilization in the rehabilitation of pastures in Pichincha, Ecuador. "He determined that the application of pure Biol and 75% acted positively" in the recovery of pastures for grazing purposes compared to the application of urea, and even more economically satisfied the investment made " [1]. It also describes the effect of cow manure, pig and guinea pig bioles on lettuce and cabbage crops in Colombia. The authors reported "that they did not find very significant differences between the applications of bioles and a commercial foliar fertilizer. Manure, pig and guinea pig " [2]. However, the conservation of the plants is reflected in the continuity of the growth of the shoots, fruits and leaves, which results in a greater foliar area to maximize the photosynthetic efficiency of the cultures through hormones that allow Stimulates cell division and with it establishes a "base" or structure on which growth continues. "The application of this natural fertilizer allows to balance the content of nutrients in the soil, plants grow better, remain healthy and resistant, their products are abundant and quality" [3]. Given this reality, it is important to incorporate into the educational system new innovative and interesting activities that focus on the current agriculture that Venezuela and the world lives, or what is the same, generate awareness in students and farmers, and become part of the Learning process in educational institutions, "putting into practice the new academic projects in the school system, thus applying new instructional strategies, achieving a more thoughtful, critical and protagonist student in each of the proposed Activities" by the [4]. In this sense, the pedagogical methods in secondary education in Venezuela "focus on the integral education of adolescents and young people, between approximately 12 and 19 years old, offering study alternatives, such as Bolivarian High Schools and Robinsonian Technical Schools" [5]. Have secondary schools as the main objective of the training and adolescents and young people with historical, potential and thinking skills, cooperative, reflective, liberating; and contribute to the solution of problems in the classroom and spaces destined for agriculture as a teaching, learning strategy. 
This is where the master specialist in agriculture and endogenous development has the responsibility of integrating the classroom and the field destined to agriculture in the new pedagogical practices, that is to say to apply a planning with creative and transformative strategies that benefit the students in the secondary education general and be approached by other institutions that point to this type of alternative, becoming a space of understanding between teachers, students and farmers, full of new options and ideas to produce bioles that protect the natural environment. Hence the problem, where teachers rarely apply in their school programming new teaching strategies that allow them to be more interesting in their theoretical and practical activities in agricultural education, where the student expresses what he has acquired in his academic training and serves as aid or alternative taken into account by the producer in the different agricultural items in Venezuela and in the world. Purpose (s) Achieved in the Project; Preparation of the Agricultural Biol "Liquid Fertilizer" a) Motivate (25) young people from secondary education to develop a Biological product as an alternative to produce at a lower cost, conserve the environment, the soil and the items to be produced. b) Know the rules of the experiment for compliance during the development of the activity. c) Stimulate the students of the José Jesús Osuna Rodríguez Bolivarian High School in the state of Mérida Venezuela to learn new teaching strategies, learning through the theory and practice of the field in the agricultural area. d) That the (25) young people know new alternatives when establishing their crops in the area and outside it for the benefit of our collective and the environment. Theoretical Argument However, education is considered within the teaching, learning process as a component where the human being, based on different perspectives, provides training opportunities for the human being. In this way, the documentary information, background, practices were gathered and for which the Agricultural Biol "Liquid Fertilizer" was prepared, and are described below: What is an Organic Fertilizer: it is a fertilizer made from animal manure and plant residues that can be: solids (compost) and liquids (Biol). Why should we use organic or natural fertilizers? Because they are affecting the soils due to the indiscriminate use of chemical or agrochemical fertilizers, and it makes the production, every day less, and the presence of pests and diseases becomes uncontrollable and the lands or items become immune to the chemicals. This also increases production costs, pollutes the natural environment and is harmful to health. That is why it is essential to have a varied and complete fertilization program, as an alternative the use of natural fertilizers that protect and develop the life of microorganisms and improve the structure of the soil: that is, we give life to the soil, to the lands that produce a great variety of items in the fields of Venezuela and other parts of the world. [6] Thus, the production of liquid fertilizer Biol is a relatively simple and low-cost process; its preparation inputs are local. Biol has two components: a solid and a liquid. The first is known as biosol and is obtained as a product of the discharge or cleaning of the biodigester where Biol is made, the liquid part is known as foliar fertilizer. The solid residue is constituted by non-degraded organic matter, excellent for the production of any crop [7]. 
In Biol we can use any type of manure. Applying this natural fertilizer allows to balance the content of nutrients in the soil, plants grow, remain healthy and resistant, their products are abundant and of quality. It also reflects soil conservation; better health as better products brought from the field will be consumed, preserving the environment [8]. Recommended for coffee production, it stimulates the development of foliage and flowering of the plant. This fertilizer gradually loses its effectiveness over time, should be used within the first three months of its preparation. Biol revitalizes plants that suffer stress, whether from pests, diseases or disruption of their normal development processes through timely, sustained and good nutrition, thus offering food free of chemical residues [9]. In fact, the elaboration of Biol is not necessary a recipe, we simply elaborate it with the residues that are around us, it stimulates and strengthens the development of plants, improves the production of fruits, the crops become resistant to the attack of diseases and adverse weather changes. For the aforementioned, a scheme can be visualized in a summary form. What is a Biol? And thus better understanding of it and interpretation [10]. The Biol It is a liquid fertilizer that originates from the FERMENTATION of organic MATERIAL, as shown in the following figure: [11] What Type of Organic Fertilizers Exists In this case they are classified according to the type of application or use. Some that are used directly to the ground and others that are applied in foliar form to the plants. The main organic fertilizers used are: [12] a) Compost. b) Earthworm humus. c) Animal manure. d) Green fertilizers. e) Biofertilizers. f) Bioles or foliar fertilizers, within them we have: Supermagro, compost tea. Of these organic fertilizers, Biol Supermagro is the one that is being developed and applied in the area of Santa Rosa, Menocucho and Catuay of the district of Laredo Ecuador, with good results in the production of strawberry, lettuce, radish, among other crops. All these theoretical demonstrations that give understanding to the development of natural Biol together with research done by other authors such as: He evaluated the response to the application of Biol and chemical fertilization in the rehabilitation of grasslands in Pichincha, Ecuador. He determined that the application of pure and 75% Biol acted positively in the recovery of grasslands for grazing purposes compared to the application of urea, and even more economically satisfied the investment made [1]. "He visualized the effect of cow manure, pork and guinea pig bioles on lettuce and cabbage crops in Colombia. The authors reported finding no significant differences between the applications of the bioles and a commercial foliar fertilizer" [2]. The activity of the plants is reflected in the continuity of growth of the shoots and their leaves, which has a greater foliar area to maximize the photosynthetic efficiency of the cultures by means of hormones that allow to stimulate cell division and thereby establish a base or structure over which growth continues. Applying this natural fertilizer allows balancing the nutrient content in the soil, plants grow better, remain healthy and resistant, their products are abundant and of quality [3]. Methodology The research that served as a basis for this article as a result of the indiscriminate use of agrochemicals, so that the research is framed in qualitative terms. 
Therefore, qualitative research is "the collection of data, analysis and interpretation that cannot be measured objectively, that is, it cannot be synthesized in the form of numbers. However, this does not imply a lack of objectivity of the results" [13]. So it is important to point out that in carrying out this type of research it leads us to interpret the data collected in the field in a more objective way for its interpretation of the results. From a more general perspective, the research seeks to propose through the development of agricultural Biol, implement new learning strategies that are innovative, interesting recreational for the (25) students of the José Jesús Osuna Rodríguez Bolivarian High School, so that it makes sense of awareness and provide ideas that benefit the production of the different items of the agricultural field of the campus and producers of the Santa Elena de Arenales area of the state of Mérida Venezuela, preparing a more economical liquid fertilizer based on decomposing material, such as cattle manure which helps to take care of the environment, the soil and the health of consumers, in the face of indiscriminate use of agrochemicals. In addition, qualitative research "is based on a rethinking of the subject-object relationship in that focus of the object of study directly where the problem lies. The dialectical integration of subject-object is the articulating principle of the entire epistemological scaffolding of research qualitative" [14]. In other words, it focuses on action research, because the data of interest is collected directly from reality by the researcher. Also "are those that refer to the methods that will be used when the data of interest are collected directly from reality, through the specific work of the researcher and his team, these data obtained directly from the empirical nature, are called primary, since which are the product of research without intermediaries of any nature" [15]. However, we worked directly with young people of the second (2nd) year of secondary education, with an enrollment of (25) students who presented the following characteristics from 11 to 13 years of age, residents of the study area and one (1) Professor specialist in the area of education for work and endogenous development in education. These people make up the universe of the unit of analysis that served as a case study (case study intentional and unique). In other words, a recipe is not necessary for the elaboration of Biol, it is simply transformed by applying and mixing different components that are in our field or different planting spaces, helping to stimulate and strengthen the development of plants, improve the production of fruits where crops become resistant to disease attack and adverse weather changes. By allowing balancing the content of nutrients in the soil, plants grow healthy and resistant being abundant and quality products. Results and Discussion Specify that it is possible to develop an agricultural Biol a natural product at a lower cost with material that is in our hands in the educational institution and in the agricultural field, such as: milk, chicha, molasses, chopped grass, water (H2O), mineral components between they sulfur, sulfate and cow dung. 
Another achievement was to package the product in 2-liter bottles for later application in the agricultural spaces of the Jose Jesus Osuna Rodríguez Bolivarian High School and by producers, both as an alternative to apply to crops and as a new teaching strategy in the classroom and in the field during school training. Another contribution was to improve the knowledge of the (25) young people of the Jose Jesus Osuna Rodríguez Bolivarian High School in the state of Mérida, Venezuela, so that farmers may take up this type of activity and strategy as an alternative for producing natural, homemade, healthier and cheaper fertilizers to apply to the goods consumed by the population, protecting the environment, the soil and consumer health. Similarly, the students learned to change their skills and actions to preserve the environment and their school surroundings, becoming more aware of the indiscriminate use of agrochemicals in the fields devoted to crop production in Venezuela. A greener product was also obtained to be presented to the producers of the Obispo Ramos de Lora Municipality, Mérida state, Venezuela. After obtaining and applying the agricultural Biol, the didactic and academic process was strengthened by applying new learning strategies in the agricultural subject, supported by the opening, development and closing tasks of the pedagogical planning and helping to conserve the soil, the planet and health; this allowed an active participation of the students in the design and development of the product.
The preparation begins by choosing a clean place without slope; it must be a safe space, out of reach of children, animals or other people not involved in the work. The 10-liter drum is placed where the product can be handled easily. Subsequently, sulfate, sulfur and the corresponding minerals are added in quantities of 250 g each. It should be noted that the students answered the questions posed during the fertilizer preparation, guided by the brainstorming of the teacher specialized in agriculture and endogenous development, who explained the appropriate method for carrying out the "Agricultural Biol" experiment; this set in motion the objectives planned in the didactic action, so that the students were clear at the time of executing the activity.
Development and Preparation of the Agricultural Product
The elaborated Biol encourages the (25) students to learn through new teaching strategies; they are trained in alternatives that will later be used in the field, following the procedures for the preparation and application of the agricultural Biol as a natural liquid fertilizer. The product was then developed with the guidance of the teacher and the participation of the students: after completing the initial phase, work groups were organized for the development of the Biol as part of the learning process in the field of work, assuming the responsibility to improve and protect the environment, the soil, the plants and the goods produced by the farmer. The procedure starts with drilling the container to install the 3/8" hose that serves as a biodigester, releasing the gases accumulated in the Biol container; each of the ingredients is then added to the required container, namely water, liquid milk, molasses, chicha, minerals, cattle manure, humus (organic fertilizer) and chopped green grass, and the mixture is stirred until a homogeneous sample is obtained.
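For planning purposes, the procedure just described can be summarized in a short script. The sketch below is only an illustrative aid and is not part of the original methodology: apart from the 250 g each of sulfate, sulfur and mineral supplement and the 10-liter drum stated above, the ingredient quantities are placeholders (the text does not fix them), and the four-day stirring interval is simply a value within the recommended three-to-five-day range during the roughly three-month window in which the fertilizer keeps its effectiveness.

```python
from datetime import date, timedelta

# Components of the agricultural Biol as listed in the text; quantities other than
# the 250 g of sulfate, sulfur and mineral supplement are placeholders, not a recipe.
INGREDIENTS = {
    "water": "fill the 10-liter drum",
    "cattle manure": "placeholder",
    "liquid milk": "placeholder",
    "molasses": "placeholder",
    "chicha": "placeholder",
    "chopped green grass": "placeholder",
    "organic humus (fertilizer)": "placeholder",
    "sulfate": "250 g",
    "sulfur": "250 g",
    "mineral supplement": "250 g",
}

def stirring_schedule(start: date, interval_days: int = 4, usable_days: int = 90):
    """Dates on which the drum should be stirred (every 3-5 days; 4 used here),
    within the roughly three months during which the Biol remains effective."""
    day = start
    while (day - start).days <= usable_days:
        yield day
        day += timedelta(days=interval_days)

if __name__ == "__main__":
    for name, amount in INGREDIENTS.items():
        print(f"{name:30s} {amount}")
    for d in stirring_schedule(date(2019, 11, 5)):   # example start date
        print("stir the drum on", d.isoformat())
```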
All these actions can be observed in the photographs taken during the development of the experiment, carried out with the help and guidance of the specialist teacher and the involvement of the learners. The Biol must be stirred regularly; this procedure should be performed every three to five days, as shown in Figure 5.
Closure of the Activity, Packaging of the Agricultural Biol Product
As a closing stage, once the liquid fertilizer was obtained, the product was packaged in 2-liter containers, labeled for identification and later applied to some crops and not to others in the agricultural field of the public institution Liceo Bolivarian Jose Jesus Osuna Rodríguez, located in the state of Mérida, Venezuela. Likewise, a group discussion was held in the classroom and in the spaces reserved for agriculture, projecting favorable learning in the teaching of agronomy and important contributions to the educational system through new ideas for teaching and for raising human awareness, reducing the indiscriminate use of agrochemicals that threaten the health of those who consume farm products, and contributing in this way to conserving the soil and the planet. Therefore, secondary education integrates elements of structuring and integration of knowledge into the learning practices in the classroom and in the planting spaces, which must be considered important in the pedagogical process to promote values with a sense of conscience and virtue in education, with young people as a fundamental pillar of academic training around "environment, integral health, interculturality, information and communication technologies and liberating work" [4]; these integrative axes are applied as transversal axes in all learning areas and emphasized in the daily plans, looking for strategies that strengthen the values of each young person.
Conclusions
The development of the agricultural Biol as an innovative, interesting and participatory learning strategy to improve the teaching process in the agricultural area helped in two important aspects. First, the growth of the plants: foliage and leaves more resistant to insects and pests, providing minerals and nutrients to the soil and the plants. Second, the teaching process: learning about agriculture improved among the students, allowing cooperative participation of each work group in the design and development of the practice, making use of the materials available in our environment and lowering the production costs of the crops. Finally, educational and recreational activities were used for the benefit of the young people, raising awareness of the protection of the environment, the soil and consumer health, so that the approach can also be taken up by the producers of the study area as a balanced alternative to be applied in their fields of work.
Recommendations
Incorporate new curricular alternatives in institutions, thus strengthening knowledge in the area of education for work and endogenous development as an educational subject, taking into account the needs of the students when planning theoretical and practical activities. Develop awareness activities for teachers, students and producers on the conservation of the environment, soils, plants and human health. For publishers, adapt textbooks to the new requirements of the ministerial curriculum and of the world, aimed at preserving the environment and health.
Directors, pedagogical coordinators and other educational institutions are encouraged to promote a methodological exchange of ideas between the educational and agricultural fields, and reflection among teachers, students and farmers, so that there is integration between the institutions and the community. At the agricultural level, Biol is recommended for coffee production, since it stimulates the development of foliage and the flowering of the plant. The fertilizer gradually loses its effectiveness over time, so it should be used within the first three months after its preparation. Biol revitalizes plants that suffer stress from pests, diseases or disruption of their normal development processes by providing timely, sustained and adequate nutrition, which in turn yields food free of chemical residues.
2020-03-07T07:45:29.097Z
2019-11-05T00:00:00.000
{ "year": 2019, "sha1": "8d0895fb63649cb5a5f272e72af9c0c9be132e48", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.sjedu.20190706.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8d0895fb63649cb5a5f272e72af9c0c9be132e48", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Business" ] }
245634787
pes2o/s2orc
v3-fos-license
Revisiting the constraints on primordial black hole abundance with the isotropic gamma-ray background
We revisit the constraints on primordial black holes (PBHs) in the mass range $10^{13}-10^{18}$ g by comparing the 100\,keV-5\,GeV gamma-ray background with the isotropic flux from PBH Hawking radiation (HR). We investigate three effects that may update the constraints on the PBH abundance: i) reliably calculating the secondary spectra of HR for energies below 5\,GeV, ii) the contributions to the measured isotropic flux from Galactic PBH HR and from the annihilation radiation of evaporated positrons, and iii) the inclusion of the astrophysical background from gamma-ray sources. The conservative constraint is significantly improved by more than an order of magnitude at $2\times10^{16}$g$\lesssim M\lesssim 10^{17}$g over the past relevant work, where effect (ii) is dominant. After further accounting for the astrophysical background, a more than tenfold improvement extends to the much wider mass range $10^{15}$g$\lesssim M\lesssim 2\times 10^{17}$g.
I. INTRODUCTION
Primordial black holes (PBHs) are the only candidate that can solve the dark matter (DM) problem without invoking new physics [1,2]. At present, there are still open PBH mass windows (e.g., 10^17-10^22 g) in which PBHs can constitute all or most of the DM in the universe; see, e.g., [1]. If PBHs contribute significantly to the DM, then the time-integrated Hawking radiation (HR) of PBHs with masses of about 10^13-10^18 g should contribute significantly to the observed isotropic diffuse gamma-ray background (IGRB) and/or the cosmic X-ray background (CXB) at energies above 100 keV [1,3-7]. For the HR to propagate through the intergalactic medium, these PBHs must survive at least until the epoch when the universe becomes transparent after the last scattering of the cosmic microwave background (CMB) [4]; this condition sets the lower limit of the above mass range. In addition, the energy at the peak of the HR spectrum decreases with increasing PBH mass, so the contribution from PBHs heavier than 10^18 g is concentrated in the CXB below 100 keV and is negligible [4]. Carr et al. [3] conservatively limited the PBH abundance within about 10^13-10^18 g by requiring that the evaporated contribution of PBHs not exceed the observed extragalactic photon background in the 100 keV-100 GeV range. Recently, Arbey et al. [4] and Carr et al. [8] updated this bound with newer background observations (new Fermi-LAT data), and the former [4] also studied extended mass functions of Kerr black holes. Ballesteros et al. [5] and Iguaz et al. [6] set tighter bounds in the mass range 10^16-10^18 g with the hard CXB and the soft IGRB, respectively, taking into account the emission from AGNs (active galactic nuclei). It is worth emphasizing that [6] includes the contributions of the Galactic PBH emission and of the annihilation radiation of evaporated positrons in the measured isotropic flux. In addition to the above constraints from the isotropic cosmic photon background, there are many other limits on PBHs in the mass range 5 × 10^14-10^18 g based on the inconsistency between predicted HR-induced signatures and actual observations. These observations include electron or positron cosmic rays [13], the cosmic microwave background [10-12], gamma-ray and X-ray fluxes from specific objects [14-17], neutrinos [18,19], and so on. The resulting excluded regions of PBH mass and abundance are broadly similar to those from the cosmic photon background.
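The anticorrelation between PBH mass and the typical energy of the emitted photons mentioned above can be made quantitative with the standard Hawking-temperature relation and the high-energy (geometric-optics) limit of the emission rate. The Python sketch below is only an order-of-magnitude cross-check, not the greybody computation used in this work; the relations k_B T_H = ħc³/(8πGM) and Γ → 27G²M²E²/(ħ²c⁶) are taken from the standard literature rather than from the text.

```python
import numpy as np

# Physical constants (SI units)
G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34      # J s
c    = 2.998e8        # m s^-1
MeV  = 1.602e-13      # J per MeV

def hawking_temperature_MeV(M_grams):
    """k_B * T_H = hbar c^3 / (8 pi G M); about 10.6 MeV for M = 1e15 g."""
    M_kg = M_grams * 1e-3
    return hbar * c**3 / (8.0 * np.pi * G * M_kg) / MeV

def primary_photon_rate(E_MeV, M_grams):
    """d^2N/(dE dt) for photons in 1/(MeV s), using the geometric-optics limit
    Gamma -> 27 G^2 M^2 E^2 / (hbar^2 c^6) with both polarizations included.
    Only an order-of-magnitude stand-in for the full greybody calculation."""
    M_kg = M_grams * 1e-3
    E_J  = np.asarray(E_MeV, dtype=float) * MeV
    kT_J = hawking_temperature_MeV(M_grams) * MeV
    prefactor = 27.0 * G**2 * M_kg**2 * E_J**2 / (np.pi * hbar**3 * c**6)  # 1/(J s)
    return prefactor / np.expm1(E_J / kT_J) * MeV                          # 1/(MeV s)

if __name__ == "__main__":
    for M in (1e15, 3.5e16, 7e16):            # masses that appear later in the text
        T = hawking_temperature_MeV(M)
        rate = float(primary_photon_rate(T, M))   # evaluate near the thermal peak E ~ kT
        print(f"M = {M:.1e} g   T_H = {T:7.3f} MeV   dN/dEdt(E~T_H) ~ {rate:.2e} /MeV/s")
```

In practice the full greybody factors and the secondary emission change these numbers appreciably, which is why the analysis below relies on a dedicated code rather than on this approximation.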
In this work, we aim to improve the constraints on the PBH abundance by comparing the simulated isotropic flux of PBHs and astrophysical sources with the observed IGRB in the 0.1 MeV-5 GeV range 1 . Compared with past relevant works, in particular Ref. [4], three improvements on the treatments of the isotropic flux that may eventually improve the constraints will be investigated: (i) The IGRB from unresolved sources, such as AGNs and SFGs (star-forming galaxies), is modeled according to [25] and [26]. (ii) We reliably model the secondary spectrum of PBH HR below 5 GeV with the latest public code BlackHawk. 2.0 [28], which calculates the final state radiation (FSR) and decays of PBHs's primary particles (leptons and pions) below 5 GeV as discussed in [20]. (iii) We extend the analysis of Ref. [6] from the observed 10 keV-10 MeV flux to our 100 keV-5 GeV one. We take into account the diffuse flux contributed by the Galactic, which is measured by the detectors but cannot be separated from the truly extragalactic contribution, and an indirect component by the e ± annihilation via a positronium generated by the charge exchange between atomic hydrogen and positron evaporated from PBHs [6]. The plan of this paper is as follows. In Sec. II and III, we provide computational details of the isotropic flux from PBH HR, illustrate the data sets of cosmic background spectrum, and describe the models for the astrophysical background. Then, in Sec. IV we analyze the data and give our new constraints on the PBHs abundance. Finally, discussions and conclusions are presented in Sec. V and VI. II. HAWKING RADIATION FROM PBHS A. Hawking radiation from singe PBH In this work, we assume the mass distribution of PBHs is monochromatic and let us denote E as the photon energy in the local cosmic frame. The total photon spectrum dṄ dE emitted by a single PBH with mass M , in unit energy and unit time, can be written as (see e.g., [3]) The primary component dṄ pri dE results from the direct Hawking emission, which is similar to the blackbody radiation but with a greybody factor counting the probability that a Hawking particle evades the PBH gravitational well. The secondary component (the second term) comes from the decay of the hadrons produced by the fragmentation of primary quarks and gluons. In recent years, the HR spectra dṄ dE are usually calculated with the popular public code BlackHawk. 1, e.g., [4][5][6]. However, for the primary particles with energy below 5 GeV, it uses "extrapolation tables" to compute the secondary spectra, and thus these spectra are unreliable [20]. Thanks to Coogan et al.'s work [20], the latest version of this code (BlackHawk. 2.0 [28]) incorporating the method of [20] can reliably simulate the secondary spectra from the FSR and the decay of the primary particles. This method takes advantage of the Altarelli-Parisi splitting functions to model the FSR and uses new Hazma to compute the photon spectrum from decays by [20]: where dṄ pri i dEi are the primary spectra of leptons and pions. The explicit expression of dN decay dE and the FSR spectra can be seen in [9] and [20], respectively. As an improvement over the past relevant works [4][5][6], we simulate the secondary photon spectrum and the primary spectra of all particles with BlackHawk. 2.0 [28]. Fig. 1 shows the resulting low energy correction in the secondary spectrum. For small PBH mass range, e.g., M = 10 15 g, the new method (the blue line in Fig. 1) gives higher flux in the lower energy band than BlackHawk. 1. 
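As a schematic illustration of how the low-energy secondary component of Eq. (2) is assembled, the snippet below sums, for each primary species, the integral of a primary spectrum against a per-particle photon-yield kernel. The primary spectra and the kernel used here are crude placeholders; in the actual analysis they come from BlackHawk v2.0 and the Hazma-based decay and final-state-radiation treatment of Coogan et al. [20].

```python
import numpy as np

# Photon energy grid (GeV) on which the secondary spectrum is evaluated: ~100 keV to ~5 GeV.
E_GAMMA = np.logspace(-4, 0.7, 200)

def trap_axis1(y, x):
    """Trapezoidal integration of y (n, m) over x (m,) along axis 1."""
    return np.sum(0.5 * (y[:, 1:] + y[:, :-1]) * np.diff(x), axis=1)

def primary_spectrum(E_prim, species):
    """Placeholder primary spectrum dN/dE/dt of one species ('e' or 'pi').
    In the real pipeline this is read from BlackHawk output."""
    peak = {"e": 0.05, "pi": 0.3}[species]          # assumed peak energies in GeV
    return np.exp(-(np.log(E_prim / peak)) ** 2) / E_prim

def photon_yield(E_gamma, E_prim, species):
    """Placeholder kernel dN_gamma/dE_gamma for one primary of energy E_prim,
    standing in for the decay + final-state-radiation spectra of Ref. [20]."""
    return np.where(E_gamma < E_prim, 2.0 / E_prim, 0.0)

def secondary_spectrum(species_list=("e", "pi"), n_prim=200):
    """Sum over primary species of the kernel integrated against the primary spectrum."""
    E_prim = np.logspace(-3, 0.7, n_prim)
    dNdE = np.zeros_like(E_GAMMA)
    for sp in species_list:
        prim = primary_spectrum(E_prim, sp)                          # (n_prim,)
        kern = photon_yield(E_GAMMA[:, None], E_prim[None, :], sp)   # (n_E, n_prim)
        dNdE += trap_axis1(kern * prim[None, :], E_prim)
    return dNdE

if __name__ == "__main__":
    spec = secondary_spectrum()
    print("secondary dN/dE at 1 MeV ~", np.interp(1e-3, E_GAMMA, spec))
```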
For large mass, e.g., M = 3.5 × 10 16 g, it first gives weaker flux and then turns to higher in the lower energy band. B. The flux from extragalactic and Galactic PBHs We consider the total diffuse flux emitted by PBHs throughout the universe. The flux can be separated to an extragalactic and Galactic part as Each part includes a direct photon component and an indirect one from e + annihilations [6], corresponding to the photon spectrum dṄ dE and dṄ ann. dE from a single PBH respectively. The measured extragalactic flux from a population of extragalactic PBHs of mass M with different ages is the redshifted sum over all epoch emissions [3,4] where M (t) is a time dependent mass of a PBH, z(t) = (H 0 t) −2/3 − 1 is the redshift parameter with the Hubble constant H 0 , and n PBH (t 0 ) ≈ f ρ M represents the number density of PBHs for a given initial mass M at the present universe's age t 0 . Factor f =Ω PBH /Ω DM is a fraction of the total DM density in the Universe, and ρ = 2.17 × 10 −30 g cm −3 denotes the current DM density of the Universe [29]. The integration runs from the time at last scattering of the CMB t min =380 000 years to We assume the positrons emitted in the PBH evaporation can form positronium (Ps) with electrons of atoms in cosmological medium, following Ref. [6]. Since the case of e ± annihilation via Ps formation (f Ps =1) is more realistic than the direct e ± annihilation (f Ps =0), we only consider the former [30]. An annihilation of para-positronium can yield two photons and an ortho-positronium one yields three, with a total energy of 2 m e c 2 [31]. Thus, the indirect photon component in Eq. (4) can be written as is the number of e + emitted by a PBH in unit time [6] andṄ e + M (t c ) represents the e + number at t c , where t c satisfies the equation The factors 1 4 and 3 4 describe the rate of the emitted photons in para-positronium and ortho-positronium state, respectively [31]. The energy spectrum of parapositronium annihilation is described by the Heaviside step-function θ (it should be Dirac function if the photons are not redshifted). The normalized spectrum of ortho-positronium annihilation is denoted by [32] 1 where x = E/m e c 2 and the relation 1 Since we are embedded inside the Galactic halo, an diffuse flux from the Galactic PBHs cannot be separated into the truly extragalactic contribution when the flux is measured by detectors; see [6] for details. Therefore, it should be taken into account in our simulation. As this flux depends on the integral along the line of sight of the Galactic DM distribution, and a conservative estimation of it is given by [6] dN where D min is the minimum of D-factor: where GAC denotes the line of sight towards the Galactic anti-center, namely l = 180 • and b = 0 • in galactic coordinates, and ρ g is the DM distribution assumed a Navarro-Frenk-White profile with parameters r s = 9.98 kpc, ρ s = 2.2 × 10 −24 g cm −3 [33]. The photon spectrum of positronium annihilation is dṄ ann. where 1 N3γ dN3γ dE is defined by Eq. (6) letting z = 0. We calculate the photon and positron spectra from a single PBH with BlackHawk. 2.0 [28]. Fig. 2 shows the contributions from the Galactic and extragalactic direct/indirect components to the total simulated IGRB flux in Eq. (3), for M = 7 × 10 16 g, f = 1 (top panel) and M = 10 15 g, f = 1 (bottom panel) independently. 
As one can see, the flux (red line) from the Galactic direct HR can be several times larger than the extragalactic one (blue line) at around the peak of the spectrum, especially for massive PBHs. The component from the e + annihilation can be larger than direct one, which would play an important role in contributing to the 100-511 keV IGRB. [48], EGRET [50], COMPTEL [49], SMM [47] and Nagoya balloon missions [52]. The Fermi-LAT data corresponds to the foreground model C [48], and the Galactic-foreground modeling uncertainty is added in quadrature to the data with 1σ stochastic error. Kimura model [27] (blue dashed line) represents the contribution of low-luminosity AGN to the IGRB flux, Roth model [25] (red dashed line) represents the contribution from star-forming galaxies (SFG), and the PBHs's contribution is represented with dashed black line. The sum of the three models is also shown with the red line. The IGRB is the measured gamma-ray emission including all unresolved extragalactic emissions in a given survey and any residual (approximate) isotropic Galactic foregrounds [48]. In this study, we use the observed IGRB of HEAO-1+balloon, SMM, COMPTEL, EGRET and Fermi-LAT (foreground model C) from 100 keV to 5 GeV 2 as shown in Fig. 3, which corresponds to the region where the PBH HR in the mass range 10 13 g-10 18 g is expected to contribute to the IGRB significantly. The Fermi-LAT data with energy 6 GeV is not chosen because of the two following facts. The background above GeV is expected to be mainly contributed by the 10 14 -10 15 g PBHs [3]. Though the HR of 10 13 -10 14 g PBHs, whose lifetime is shorter than the present age of the Universe (t 0 ≈ τ (5 × 10 14 g)), is expected to be concentrated at GeV band (the BH temperature T BH ≈ 1 10 13 g M GeV [3,34]) in the co-moving reference frame, it would be redshifted to MeV band in the observed reference frame. Therefore, the HR from our considered PBHs should not significantly contribute to the IGRB above GeV. Secondly, since a considerable part of HR photons for energy 10 GeV propagating over cosmological distances would be absorbed by soft background (e.g., extragalactic background light) photons via electron-positron pair production, only lowredshift (z 8) PBHs could significantly contribute to the IGRB with corresponding energy. Considerable efforts have been devoted to interpreting the IGRB in terms of a superposition of many unresolved extragalactic gamma-ray sources. It is widely believed that the observations between 100 to 200 keV are predominantly produced by coronal thermal emission from radio-quiet AGN (Seyferts) [37]. Recently, Ref. [26,27] and [23] have explained the MeV (100 keV to several MeV) IGRB together with PeV neutrinos background as the accretion-disk emission in low-luminosity AGN. Meanwhile, other possible candidates including radio-loud AGN [22,35,36], Kilonovae and type-Ia supernovae, are found to only contribute a limited share to the MeV IGRB [51,53]. We take into account the simulated IGRB of Kimura et al [27] into the analysis for PBH constraint, and the model from [23] is also discussed in Sec. V. The former seems more realistic [54,55], where the thermal electrons inside hot accretion flows naturally emit soft gamma-rays via synchrotron self-Compton processes. 
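The Galactic term described above scales with the line-of-sight integral of the dark-matter density, and the conservative choice in the text is its minimum, taken toward the Galactic anti-centre. Below is a minimal numerical sketch of that D-factor using the NFW parameters quoted above (r_s = 9.98 kpc, ρ_s = 2.2×10⁻²⁴ g cm⁻³); the Sun-Galactic-centre distance of 8.1 kpc and the 200 kpc truncation of the integral are assumptions added only for illustration.

```python
import numpy as np

KPC_CM = 3.0857e21      # cm per kpc
R_SUN_KPC = 8.1         # assumed Sun - Galactic-centre distance (kpc)
R_S_KPC = 9.98          # NFW scale radius quoted in the text (kpc)
RHO_S = 2.2e-24         # NFW scale density quoted in the text (g cm^-3)
S_MAX_KPC = 200.0       # assumed truncation of the halo along the line of sight (kpc)

def rho_nfw(r_kpc):
    """NFW dark-matter density (g cm^-3) at galactocentric radius r (kpc)."""
    x = r_kpc / R_S_KPC
    return RHO_S / (x * (1.0 + x) ** 2)

def d_factor_anticentre(n_steps=20000):
    """Line-of-sight integral of the DM density toward the Galactic anti-centre
    (l = 180 deg, b = 0 deg), where the galactocentric radius is simply R_sun + s."""
    s = np.linspace(1e-3, S_MAX_KPC, n_steps)        # distance along the line of sight (kpc)
    rho = rho_nfw(R_SUN_KPC + s)
    # trapezoidal rule, converted from kpc to cm
    return float(np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(s))) * KPC_CM

if __name__ == "__main__":
    print(f"D_min (anti-centre) ~ {d_factor_anticentre():.2e} g cm^-2")
```

Roughly speaking, dividing this column density by the PBH mass and multiplying by the DM fraction f, the per-PBH photon spectrum, and a 1/4π solid-angle factor gives the order of magnitude of the Galactic contribution that detectors cannot separate from the truly extragalactic flux.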
For the 100 MeV-5 GeV IGRB, the primary candidate sources provided the bulk of such backgrounds are unresolved SFGs [25,[38][39][40][41][42] and radio galaxies (RG, misaligned radio-loud AGN) [43] (Blazar, aligned radio-loud AGN, should be the primary candidate at energy above ∼ 5 GeV too [44,45]). However, the contribution for SFG given by [43] has huge uncertainty from 10% to 100%, and a recent study of [21] finds that RGs only contribute 4%-18% of the IRGB using a large sample of Fermi-LAT RGs. The SFG origin for this IGRB is strongly favored by the recent work of [25], whose method is based on a physical model for the gamma-ray emission with no free parameters (all quantities that are measured directly) rather than simple empirical scalings. In addition, the statistical analyses of angular fluctuations in the IGRB and cross-correlations between IGRB and galaxies also support the SFG origin [41,46]. Therefore, we model the astrophysical component in the IGRB below 5 GeV with the SFG contributions provided by [25]. The red dashed line in Fig. 3 represents the Roth et al model. The total flux from simulated "SFGs (Roth 2021)+AGNs (Kimura 2021)+PBHs" with M = 5 × 10 15 g and f = 8 × 10 −5 (black dash line) is represented by the red line. In this scheme, the IGRB round 10 MeV band still cannot be resolved, and thus we can expect that it gives a relatively weak constraint on the PBHs abundance in the relevant mass range. IV. RESULTS: CONSTRAINTS ON THE PBH ABUNDANCE In this section, we will present our results about the constraint on the PBH abundance f with monochromatic mass distribution in the interested mass range. Since angular momentum makes PBHs evaporate faster and thus makes the bounds on their abundance stronger, we assume a population of non-rotating PBHs in our simulation [5]. In this sense, our results are conservative compared to the case considering rotating PBHs. We derive conservative bounds on the PBH abundance without above astrophysical component modeling. These constraints require that the flux from PBHs, Eq. (3), does not exceed any measured IGRB data-points by more than 1 σ, as done in [5]. The bound thus obtained is displayed in Fig. 4 (green line), compared with that in the literature [4] (blue line). We rule out the totality of DM in the form of PBHs for 10 13 g M 10 17 g with 68 % C. L. The improvement over the upper limit given by [4] is significant, especially more than an order of magnitude improvement at 2 × 10 16 g M 10 17 g. In order to distinguish where the improvement comes from, we also show the result (orange line) without the Galactic component of PBH HR and the indirect e +annihilation component in Fig. 4. As we can see, the improvement at 3 × 10 13 g M 7 × 10 14 g results from the reliable calculation for the secondary spectra with BlackHawk. 2.0 or Eq. (2). As the photon flux calculated with Eq. (2) become stronger at relevant energy than that with BlackHawk. 1 (see the bottom panel in Fig. 1) for lighter PBHs, our constraint turns from more stringent to weaker at M 7 × 10 14 g. At M 10 15 g, the additional component to the flux leads to the improvement. The reliable secondary-spectra calculation effects the improvement around 3 × 10 16 g too. We can repeat the bound of [4] well, which verifies our calculations, see the red line in Fig. 4. counting for the AGNs and SFGs contribution given by [27] and [25], respectively (see, Fig. 3). For comparison, we also show the constraint with 68% C.L. in the literature [4] by blue line. B. 
With astrophysical component modeling
Since the measured IGRB flux should contain components from gamma-ray sources, such as AGNs and SFGs, stronger and more realistic bounds on f can be obtained if these components are accounted for. Therefore, we add the astrophysical components given by [27] and [25] (see the "Kimura 2021" and "Roth 2021" lines in Fig. 3) to our simulated flux from the PBHs. Since the model of [25] lies above the upper limit of the data points at several GeV, the uncertainties in Galactic-foreground modeling are added in quadrature to the Fermi-LAT data [48], and our constraint is therefore derived by requiring that the simulated flux does not exceed any measured IGRB data point by more than 2σ. Fig. 5 shows our bound (orange line) and that of [4] (with 68% C.L.) for comparison. Our improvements in simulating the diffuse flux, i.e., calculating the secondary spectra with BlackHawk v2.0 and including the e⁺-annihilation radiation, the Galactic PBH component, and the astrophysical background, lead to a constraint tighter by more than an order of magnitude at $10^{15}$ g $\lesssim M \lesssim 10^{17}$ g, pushing the constrained PBH mass up to $M = 2\times10^{17}$ g. Since the "Kimura+Roth" model can only account for a small fraction of the IGRB at 20-100 MeV, where the HR emitted by PBHs of $10^{14}$ g $\lesssim M \lesssim 10^{15}$ g is concentrated, the improvement in this mass band due to the inclusion of the background is not significant.

V. DISCUSSION
In this section, we discuss another model explaining the measured IGRB. The model of [23] can also explain the MeV background, together with the PeV neutrino background, through accretion-disk emission in low-luminosity AGN. Based on observational evidence of nonthermal synchrotron emission in two nearby Seyfert galaxies, the MeV gamma rays in this model are generated by nonthermal electrons via inverse Compton scattering of disk photons, rather than by thermal electrons via Comptonization of their own synchrotron photons as in [27]. Fig. 6 (top) displays this model with a blue dashed line and the total of the three models (Inoue 2019 [23] + Roth 2021 [25] + PBHs) with a maroon line. The bound derived using the analysis method described in Sec. IV B is reported in the bottom panel of Fig. 6 by a green line; for comparison, the bottom panel also shows the constraint from the literature [4] (blue line) and that for the "Roth 2021 + Kimura 2021" model of Fig. 5. Not surprisingly, the advance in the treatment of the simulated flux notably improves the constraint on f over that from [4] at most of the considered mass range. The "Inoue 2019 + Roth 2021" bound is more conservative in the high-mass band $10^{15}$ g $\lesssim M \lesssim 2\times10^{17}$ g but more stringent below $10^{15}$ g than the "Kimura 2021 + Roth 2021" one. These differences can be attributed to the relative size of the contributions of the two background models to the IGRB: the flux modeled by Kimura 2021 (blue dashed line in the top panel of Fig. 3) is more conservative than Inoue 2019 [23] above 2 MeV but slightly higher at 100 keV-2 MeV. These results suggest that the Fermi-LAT observations mainly affect the constraint on PBHs around $8\times10^{14}$ g. Future MeV space missions, such as GRAMS [56], AMEGO [57,58], XGIS-THESEUS [59], and e-ASTROGAM [60], will be able to detect a larger number of point sources and will help to verify the models for the unresolved astrophysical background and thus test our constraints on the PBHs.
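Schematically, the bounds in this and the previous section amount to scaling the PBH template up until the total model first violates one of the IGRB data points at the chosen significance (1σ for the conservative bounds, 2σ once the astrophysical background is included). The sketch below illustrates that scan for a single PBH mass; the data points, uncertainties, background model, and unit-abundance PBH template are placeholder arrays, not the values used in the paper.

```python
import numpy as np

def max_allowed_fraction(data, sigma, astro, pbh_unit, n_sigma=2.0):
    """Largest DM fraction f such that  astro + f * pbh_unit  does not exceed
    any data point by more than n_sigma of its uncertainty.
    `pbh_unit` is the simulated PBH flux for f = 1 (the flux scales linearly with f)."""
    headroom = data + n_sigma * sigma - astro       # flux still available for PBHs
    if np.any(headroom < 0):                        # background alone already overshoots
        return 0.0
    with np.errstate(divide="ignore"):
        limits = np.where(pbh_unit > 0, headroom / pbh_unit, np.inf)
    return float(min(np.min(limits), 1.0))          # f cannot exceed 1 (all of the DM)

if __name__ == "__main__":
    # Placeholder 100 keV-5 GeV band points (arbitrary flux units), for illustration only.
    data     = np.array([10.0, 5.0, 2.0, 1.0, 0.5])
    sigma    = 0.1 * data
    astro    = np.array([8.0, 4.0, 1.5, 0.8, 0.45])    # assumed AGN + SFG model
    pbh_unit = np.array([50.0, 80.0, 30.0, 5.0, 0.5])  # assumed f = 1 PBH template
    print("f_max ~", max_allowed_fraction(data, sigma, astro, pbh_unit))
```

Repeating this scan over a grid of PBH masses traces out exclusion curves like those shown in Figs. 4-6.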
VI. CONCLUSION
In this article, a population of PBHs with a monochromatic mass distribution in the range $10^{13}$ g - $10^{18}$ g has been considered. New upper limits on the PBH abundance f, constituting all or part of the DM, have been set by comparing the measured IGRB flux in the 100 keV-5 GeV band with the diffuse flux from Galactic and extragalactic Hawking radiation. We have reliably calculated the secondary spectra produced by primary particles with energies below 5 GeV according to Eq. (1), using the latest public code BlackHawk v2.0 [28]. As a result, the conservative upper limit at $3\times10^{13}$ g $\lesssim M \lesssim 7\times10^{14}$ g is several times weaker than the corresponding one of Arber et al. [4] but stronger at around $3\times10^{16}$ g, see Fig. 4. Furthermore, our model takes into account the diffuse radiation of Galactic PBHs and the positronium-annihilation radiation from evaporated positrons, see Eqs. (5) and (7). The resulting constraint on f is significantly improved over the literature [4], by more than an order of magnitude at $2\times10^{16}$ g $\lesssim M \lesssim 10^{17}$ g, see Fig. 4. After further accounting for the astrophysical background models given by [26,27] and [25], the constraint becomes tighter by more than an order of magnitude at $10^{15}$ g $\lesssim M \lesssim 10^{17}$ g compared to [4], pushing the constrained PBH mass up to $M = 2\times10^{17}$ g, see Fig. 5. Other constraints, based on the alternative astrophysical model [23] for the MeV IGRB, are also discussed (see Fig. 6); they help quantify the uncertainty of our results due to astrophysical background modeling. For a general extended mass function, the corresponding bounds should be tighter or similar (see, e.g., [4,6]). Future MeV telescopes are expected to test our constraints together with the background modeling.
2022-01-03T02:15:19.561Z
2021-12-31T00:00:00.000
{ "year": 2021, "sha1": "455263a75a0ecc350d6c43187755c09517e424d7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "455263a75a0ecc350d6c43187755c09517e424d7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
226298693
pes2o/s2orc
v3-fos-license
Application of Physical Factors in Complex Etiopathogenetic Therapy Patients with Coronavirus-19 Complex in two aspects: how it affects the body (pharmacodynamics) and what happens to it the body Abstract Fundamental scientific research of domestic and foreign scientists strongly suggests that the progress of medicine is impossible without the widespread use of modern physical factors in the diagnosis, prevention, treatment and rehabilitation of almost all nosologically forms of diseases from newborns to old age of patients. Each micro-and macro-organism has individual bioenergetic characteristics corresponding to its type, which is the main condition for normal life activity of the organism. In the case of “alien” bioenergetic characteristics, specific biological processes inherent in this organism are disrupted, which leads to its death [1]. The biopotential of each person is strictly individual in both normal and pathological conditions. The degree of deviation of the biopotential corresponds to the stage of development of the disease, i.e. the formation of intermediate States of the body with a violation of its supramolecular structures. TNN.MS.ID.000564. 3(3).2020 Pharmacodynamics studies the localization, mechanism of action and pharmacological effects of medicinal substances. Pharmacokinetics studies the patterns of absorption, distribution, and elimination of medicinal substances in the human and animal bodies [2]. At the same time, it should be noted that the speed, scale, content, and time of formation of intermediate formations of a pharmacological drug in the body for each patient are strictly individual. All drugs in interaction with the body before their introduction into medical practice, in accordance with pharmacodynamics, pharmacokinetics should be studied by fast, harmless, and highly informative methods [5]. Compliance with this concept contributes to ensuring a high therapeutic effect, primary and secondary prevention of diseases, prevention of complications and side effects on the body. This is not observed in medical and pharmaceutical practice due to the lack of research methods for pharmacological preparations at the supramolecular level [6]. The biopotential for each person is strictly individual both in norm and in pathology. In this regard, any nosology in everyone causes a deviation of its biopotential in accordance with the stage of development of the disease, i.e. the formation of intermediate States of the body with certain violations of its supramolecular structures. This, in turn, determines the clinical picture at the time of examination of the patient and is the leading condition for choosing the right treatment tactics for any specialist doctor, so that the regression of the disease is accompanied by the restoration of destroyed supramolecular structures, excluding new gross violations at any level of the entire body [6,7]. This concept is not the main principle for drug therapy, due to the lack of highly informative research methods: frequency, dose, mechanism of action of pharmacological drugs at the supramolecular level. It is known that the final characteristic of any drug at the supramolecular level is "energy", which is difficult to dose and regulate for therapeutic purposes. 
The impact of energy from any physical factor is dosed and regulated (physio dynamics) using nanotechnology and its path can be freely traced to each molecule of the whole organism (physio kinetics) without disrupting supramolecular structures, without negative consequences by means of a Nano sensor.(patents for inventions: Russia # 2675006, Germany # 20 2017 006 896.) Modern pharmaceutical science does not have such a high level of control over the path of a medicinal substance in the body. "The conversion of the energy of photons, light particles, into electrical energy takes place in several stages," explains Professor Christoph Well, head of the IFG Institute. First, light is absorbed on the surface of the light-sensitive material [11]. Under the influence of the energy of photons of light, the electrons leave their places, leaving in their place electronic holes, with which they immediately form quasiparticles called polaritons. These polaritons exist only for a very short time, moving to the boundaries of the material, where they break up into electrons and holes, which continue to move further on their own. And the future fate of these charge carriers already depends on the nature of the light-sensitive material used" [11]. In this regard, any therapeutic effect on the body should be considered a trigger for restoring homeostasis, connecting its own internal systems. After studying the officially proposed and published Russian media medicines designed to combat coronavirus-19, we must admit not only their ineffectiveness, but sometimes even harm. Their original assumptions are clearly wrong. In fact, physiotherapy is seen as a more appropriate approach. It is based not only on the stated considerations, but also on half a century of experience in use. Highly effective methods of physical therapy include I. Light Therapy of the device "BIOPTRON" Its spectral range-480-3400 nm-reproduces the dominant types of Solar radiation on Earth-visible and IR radiation, under the influence of which the body absorbs and uses radiant energy. Polychromatic visible and infrared polarized (PVIP) light activates the enzymes nicotinamidadenin-dinucleotide phosphate-oxidase (NADP-oxidase) and nucleotide containing biopteroflavoprotein-NO-synthesis, localized in the cell membrane and using the surrounding oxygen produce its active forms-superoxide anion, hydrogen peroxide, hydroxyl radical and nitric oxide (NO) [15]. They conduct a light signal from the surface of the irradiated cell to its nucleus, affecting specialized intracellular mechanisms for conducting the activation signal (protein phosphorylation, the state of calcium channels, the content of calcium in the cell, etc.). The enzymes responsible for the formation of ROS and NO, as themselves and intermediaries, are found in cells and tissues, in all types of white blood cells, platelets, endothelial and smooth muscle cells of blood vessels. It was found that nitric oxide-NO, is an important part of the mechanism of blood vessel dilation and platelet aggregation, without which phototherapy could hardly be highly effective [10,14]. After daily 5-10 irradiations, the number of mononuclear leukocytes-monocytes and lymphocytes-circulating in the blood increases by 14-17%. 30 minutes after the first exposure to PVIP light, Pro-inflammatory cytokines-tumor necrosis factor (TNF-α), interleukins-IL-6, IL-2, and IL-12-disappear from the circulating blood. 
So, at the initial increased content of TNF-α, it falls by 30 times, IL-8-by 4-6 times, IL-2-by 4-10 times and IL-12-by 12 times, by the end of the course [14,16]. Simultaneously, the plasma content of anti-inflammatory cytokines-IL-10 and transforming growth factor-TFR-β1 increases [14]. A feature of PVIP-light phototherapy is a rapid 6-fold increase in the blood of the most important immunomodulator-interferon-γ (IFN-γ). The most important function of this cytokine is to activate cellular immunity (the functional state of monocytes, macrophages, natural killers, and cytotoxic T-lymphocytes), which primarily increases the body's antiviral and antitumor resistance [13] (Figures 1-3). II. Application of dry carbon dioxide baths "Reabox" Dry carbon dioxide baths (SUV) -a method of percutaneous therapeutic action of carbon dioxide on a patient whose body is located up to the neck level in a specially equipped box. Application (SUV) "Reabox" provides non-invasive, i.e. does not violate the integrity of the skin, the introduction of carbon dioxide, which distinguishes this method from CO2 injections. Direct action of carbon dioxide on the respiratory center. The excitation of the respiratory center is not caused by carbonic acid itself, but by an increase in the concentration of hydrogen ions due to an increase in its content in the cells of the respiratory center. The specificity of carbonic acid as a respiratory center pathogen was revealed by the experiments of Frederick and Holden, who found that h+ and HCO3 ions pass poorly through the cell membrane, and undifferentiated carbonic acid passes well: undifferentiated H2CO3 diffuses into the cells of the nerve center, which dissociates already in the nerve cells, releasing the irritating H+ion. Faster diffusion into cells than other acids is a specific feature of carbonic acid, and this is associated with a stronger irritating effect on the respiratory center [12,15]. Hyperventilation for a short time (several tens of minutes) leads to death due to the loss of carbon dioxide by the body. Humoral regulation of respiration, the role of carbon dioxide, oxygen, and blood pH in this process. The main respiratory stimulant is CO 2 . Blood pH also plays an important role in the regulation of respiration. When the pH of arterial blood decreases in comparison with the normal level (7.4), lung ventilation increases, and when the pH increases above the norm, ventilation decreases. Increasing the content of CO2 in the blood stimulates respiration both by reducing the pH and directly by the action of CO 2 itself [12,15]. The effect of CO2 and H+ ions on respiration is mediated mainly by their action on special structures of the brain stem that have chemosensitivity (Central chemoreceptors are part of the blood-brain barrier; low sensitivity threshold). It was found that a decrease in the pH of the cerebrospinal fluid by only 0.01 is accompanied by an increase in pulmonary ventilation by 4l/min. [15] Lack of O 2 can be a respiratory stimulant in the case of barbiturates as narcotic drugs, because in this case, the sensitivity of the respiratory center to CO 2 is suppressed. Breathing pure oxygen (O 2 ) in patients with reduced sensitivity to CO 2 is very dangerous, because when the O 2 voltage increases in the arterial blood the last respiratory stimulant (lack of O 2 ) is eliminated in the blood and respiratory arrest may occur. In such cases, it is necessary to use an artificial respiration device (Table 1). III. 
Extremely high frequency therapy (EHF) is the therapeutic use of millimeter-wave electromagnetic waves The experience of using it for more than 30 years shows high efficiency in the treatment of a wide range of diseases, including cancer patients. Extremely high frequencies occupy the range of 30-300 GHz (the wavelength range is 10-1 mm). The peculiarity of this frequency range is that millimeter radiation of cosmic origin is almost absorbed by the earth's atmosphere, so the biological evolution of all living organisms took place with a very small natural EHF electromagnetic background. This, apparently, explains the active influence of low-intensity millimeter radiation on a person. The following wavelengths are most used in EHF therapy: 4.9mm (60.12 GHz), 5.6 mm (53.33 GHz), and 7.1 mm (42.19 GHz) [8]. Lowintensity millimeter radiation refers to non-ionizing radiation, i.e. it cannot have a destructive harmful effect on the biological tissues of the body, and therefore it is safe. A specific feature of EHF exposure is its normalizing nature, i.e. EHF radiation normalizes only the physiological parameters of a number of States of the body that deviate from it: it increases the values of reduced indicators and reduces the values of inflated values. Parameters that are normal do not respond to radiation of the body with a millimeter field. That is, the features of EHF therapy as non-invasiveness, lack of Allergy TNN.MS.ID.000564. 3(3).2020 to EHF radiation, drug-free therapy contribute to the normalization of intracellular energy of any cell in the whole body (Figures 4-6). IV. Multifunctional device for spot infrared and magnetic therapy for effective pain relief (Rayforce) IR wavelength: 850nm. Magnet power: 1000 Gauss. Charging: from sunlight and artificial light. IR-therapy. It is proved that waves of different ranges affect the body in different layers and levels. IR radiation has the greatest penetration depth. In physiotherapy, waves are used in the range from 780 to 1400nm, i.e. short, penetrating the tissues to a depth of 5cm. The effect of IR radiation is aimed at accelerating the physical and chemical processes reactions: the processes of tissue repair and regeneration are stimulated, the vascular network expands, blood flow accelerates, cell growth increases, biologically active substances are produced, and white blood cells are sent to the lesion site. The reserve functions of the body are awakened. Permanent magnetic field (PMP) improves microcirculation, stimulates healing processes, activates immunological reactions, has anti-inflammatory and sedative effects [16,17]. Experimental studies were conducted at the scientific center for fiber optics (ncvo) of the Russian Academy of Sciences, Moscow. New non-toxic, non-hygroscopic silver halide light guides with low optical losses in a wide spectral range of 3-15 microns have been developed by the ntsvo staff, allowing to obtain skin spectra in vivo with a good signal-to-noise ratio even on the uncooled standard DNGS Fourier spectrometer Bruker. Results of RayForce treatment effectiveness based on IR spectroscopy data. This experiment confirmed the high therapeutic effectiveness of the RayForce device: For rice.5 IR spectroscopy shows the absence of pain in the left elbow joint after the treatment of RayForce device in the form of a complete restoration of the spectrum in the form and amplitude of light transmission, as well as correction of morphological changes in this area of exposure in the wavelength range of 970-1400nm. 
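Returning to the wavelengths quoted in this section, the quoted EHF wavelength-frequency pairs and the IR therapy wavelength can be checked with the elementary relation f = c/λ. The snippet below is illustrative arithmetic only, not part of any device specification; note that f = c/λ gives about 61.2 GHz and 53.5 GHz for the first two EHF wavelengths, slightly different from the figures quoted above.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def freq_ghz(wavelength_m: float) -> float:
    """Free-space frequency in GHz for a given wavelength in metres (f = c / lambda)."""
    return C / wavelength_m / 1e9

if __name__ == "__main__":
    # EHF therapy wavelengths quoted in the text (millimetres)
    for mm in (4.9, 5.6, 7.1):
        print(f"lambda = {mm} mm  ->  f = {freq_ghz(mm * 1e-3):.2f} GHz")
    # IR therapy wavelength quoted for the RayForce device
    print(f"lambda = 850 nm ->  f = {freq_ghz(850e-9) / 1e3:.0f} THz")
```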
Based on the data in Fig.6. according to IR spectroscopy, there is reason to assert that the right elbow joint in the experiment was healthy and should be considered IR spectroscopy of the right elbow joint control. the alveoli. Each such alveola is surrounded by capillaries, through which, in fact, carbon dioxide is removed from the blood and oxygen is supplied. Red blood cells (red blood cells) are responsible for their transport through tissues and organs. Alveolar cells that participate in gas exchange are of two types: type I. Thin. Oxygen passes through them; Type II. A surfactant is isolated -a substance that envelops the alveola and protects it from damage. The coronavirus attacks mainly type II cells. Spiked proteins on its surface are bound by angiotensin converting enzyme 2 (APF2) on their surface. So, the virus "breaks" the protection and gets inside the cell, starting to replicate its RNA. The host cell soon dies, and the coronavirus spreads to neighboring cells and thus gradually affects the lungs. Naturally, our immune system does not sit still and actively produces macrophage cells. The result of this struggle is the death of the alveoli and a decrease in the turnover of gas exchange. This continues until the so-called alveolar collapse occurs and the acute respiratory distress syndrome begins. In severe inflammation, fluid rich in inflammatory proteins enters the bloodstream and spreads to other organs and tissues. This is how the systemic inflammatory response syndrome (SARS) develops, followed by septic shock and multiple organ failure. Therapeutic breathing exercises 5 minutes, 1 time a day, daily, 14 days of treatment. In case of complications, the physiotherapist plans of daily physiotherapy with hourly correction individually for each patient according to the state of the clinic. Physical therapy is performed in combination with medication. The proposed physiotherapy plan provides indications and contraindications at the supramolecular level, as well as for children from two years of age with a 50% reduction in the exposure time of each method, i.e. if an adult is 5 minutes, then children are 2.5-3 minutes. When establishing the diagnosis of "pneumonitis", the oxygen-helium mixture should be inhaled according to the developed method of academician A. G. Chuchalin from the AKGS-31 apparatus of the Minsk research Institute of radio materials. Conclusion Given the characteristics of coronavirus (COVID-19) infection, its differences from other known viruses are 1) suddenness of occurrence 2) high speed, scale and unhindered distribution 3) program selectivity of penetration into the intracellular space 4) Consistency of the striking nature at the supramolecular level of chronically weakened organs and systems, considering their biological age. The wave origin of coronavirus-19, that is, based on quantum mechanics (entanglement), should be assumed. In this regard, it should be argued that a global solution to the problem of neutralizing the damaging insidious actions of the virus (COVID-19) is possible at the level of quantum physics and can only be done by a group of scientific physicists led by professor Lukin Mikhail Dmitrievich of the United States Harvard University. 
The physiotherapeutic methods proposed above for the prevention, treatment, and rehabilitation of patients with coronavirus (COVID-19) infection are also consistent with quantum physics, since their mechanism of action on the whole organism is identical to quantum touch; they should therefore be included in the program for combating COVID-19 infection.
2020-11-11T06:39:12.032Z
2020-06-20T00:00:00.000
{ "year": 2020, "sha1": "487d2e3e5eb4e683fa2ba6812dd99729e5c7d5fb", "oa_license": "CCBY", "oa_url": "http://crimsonpublishers.com/tnn/pdf/TNN.000564.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f4554d987e451945e7e3c7c4a5373ef07f2882df", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237723697
pes2o/s2orc
v3-fos-license
PALMARIS LONGUS MUSCLE CONTRIBUTION TO MAXIMUM TORQUE AND STEADINESS IN HIGHLY SKILLED GRIP AND NON-GRIP SPORT POPULATIONS

Background: The anatomy, origin, function, and appearance of the Palmaris Longus Muscle (PLM) in different populations are well studied. However, little is known about its contribution to wrist flexion movements in sports. This study investigates whether the existence or absence of the PLM affects maximal torque output or torque consistency of submaximal wrist flexion moment.
Methods: One hundred ninety-seven well-trained sports students were clinically examined to ascertain the presence of the PLM. Forty of them from different sport disciplines were assigned to two groups (athletes in handgrip sports: HG, athletes in non-handgrip sports: NHG). Their 80 upper limbs were divided based on the PLM-presence/absence and hand-dominance/non-dominance. Maximal Isometric Torque (MIT) at 150°, 180°, and 210° wrist angle, and torque steadiness at 150° and 180°, at 25%, 50%, and 75% of MIT, were measured on a Humac Norm dynamometer.
Results: In all MIT tests, HGs significantly surpassed NHGs, independently of the dominant or non-dominant side, in presence of the PLM (p < .05). Steadiness was significantly higher in HGs than in NHGs in dominant hands having the PLM, at 25% and 75% of MIT at both angles (p < .05).
Conclusions: It is concluded that the existence of the PLM provides an advantage in sustained handgrip sports (throwers, racquet sports, basketball, handball players), contributing positively to decreased torque variability and higher maximal torque independently of muscular length. Important implications for sports performance and injury prevention have also resulted.

INTRODUCTION
The Palmaris Longus Muscle (PLM) is one of the most variable muscles of the human body (Park et al., 2010). It can have the characteristics of a phylogenetic retrograde muscle - namely, a short belly and a long tendon. It a) acts as a weak wrist flexor, b) has its origin at the epicondylus medialis, and c) its insertion point is on the aponeurosis palmaris (Natsis et al., 2012; Schünke et al., 2016). The most common variation is its absence uni- or bilaterally in about 22.4% of the human population (Reimann et al., 1944), although there are reports that in some cases this percentage can be significantly higher and reach 63.9%, especially in Caucasian populations (Dimitriou et al., 2015). Regarding the estimation of the functionality of the muscle for the hand, there are few related studies (Fowlie et al., 2012; Sebastin, Puhaindran, et al., 2005). Cetin, Genc, Sevil, and Coban (2013) evaluated the influence of the absence or presence of the PLM on the grip and pinch strength, concluding that there was no significant difference in hand strength between the two groups, while Gangata, Ndou, and Louw (2010) showed the contribution of the muscle in thumb abduction. Moreover, the existence of the muscle may mean a larger reservoir of receptors inside the forearm, which may be useful in reducing errors during firmly holding an object or in increasing gripping accuracy. The muscle spindles found in the muscle belly are thought to be responsible for the sensation of position and movement of the limb (Winter et al., 2005). Finally, histological studies have shown the presence of Ruffini and Pacini mechanoreceptors and Golgi's tendon organs in the PLM (Jozsa et al., 1993). To date, controversial findings exist about the function of the PLM in sports.
There are doubts about how important the PLM is for handgrip (Eric et al., 2019;Vercruyssen et al., 2016), while other researchers support that the PLM contributes, or even provides, an advantage to handgrips used in sports such as basketball, handball, wrestling, tennis, badminton, rowing, etc (Fowlie et al., 2012). Specifically, they found that the presence of the PLM was higher amongst both elite and non-elite athletes competing in "sustained grip" sports compared with "intermittent grip" sports. During sustained gripping in sports, submaximal isometric co-contraction of carpo-metacarpal flexors and finger flexors is required to maintain a stable grip (Chow et al., 1999;Wei et al., 2006). In addition, Wadsworth (1983) reported that the PLM's contribution to the metacarpal flexion could lead to a stronger and more stable grip. The PLM, with a greater pool of proprioceptors, may provide athletes participating in sustained grip sports with a superior grip precision. The relationship between torque variability and handgrip performance in sports has been better defined by Hamilton and Wolpert (2002) and Harris and Wolpert (1998). They suggested that the application of steady isometric force from the wrist flexors to the tennis racquet and to the ball in handball and basketball might be of great importance for the goal-directed movement. Moreover, it has been reported that highly skilled tennis and handball players showed a significantly lower coefficient of torque variability at all examined submaximal isometric wrist contractions than sedentary individuals (Salonikidis et al., 2009). Neural and mechanical mechanisms seem to underlie the torque steadiness during isometric actions (Enoka et al., 2003). However, the exact physiological mechanism behind the differences in torque steadiness between highly skilled and novice athletes is not well understood. Therefore, it could be of practical interest to study whether the presence of the PLM as an additional wrist flexor in systematically trained hand grippers might further stabilize the wrist joint and lead to better torque steadiness and performance. To our knowledge, there has been no published research examining the function of the PLM during isometric contractions and it is difficult to determine whether its presence contributes in, or even provides an advantage to, handgrips used in sports. Based on the current literature, the presence of the PLM: i. Is higher amongst athletes participating in sports that require handgrip, ii. May assist metacarpal flexion in the contraction of muscles around the wrist, iii. Helps maintain steadiness, and iv. Improves precision due to the additional muscle spindles and mechanoreceptors (Fowlie et al., 2012). For these reasons, further research is required to determine whether the actual function of the PLM may be advantageous in pro-viding a more stable grip that is required in a higher level of skill between athletes performing grip sports. We hypothesise that the existence of the PLM affects the ability to provide a constant application of submaximal strength in HGs more than in NHGs during a range of wrist angles. Secondly, we hypothesise that the presence of the PLM provides an advantage to maximum strength in athletes in handgrip sports. Participants The sample consists of young athletes of both sexes who have not yet started with their occupational careers. Initially, 197 volunteer sports students (18-25 yrs.) 
from different sport disciplines were examined to determine the existence of the PLM in their forearms. Then, 40 athletes (20 males and 20 females) were deliberately assigned to two groups; 20 participants without PLM (10.15% from the initial sample) and an identical second group of 20 participants with PLM were selected for statistical reasons of comparison. Out of these 40 participants, 26 (15 males, and 11 females) were active in sports disciplines with high (HG) wrist involvement (11 track and field throwers, 2 pole-vault athletes, and 5 basketball, 4 handball, 4 tennis, and 2 volleyball players) and 14 (5 males, and 9 females) participated in disciplines with no wrist (NHG) involvement (5 football players, 5 runners, 2 long jumpers, and 2 dancers). Their anthropometric characteristics are shown in Table 1. (HG = "athletes in handgrip sports", NHG = "athletes in non-handgrip sports"). Unequal group-sizes occurred during the selection process and we decided to test everyone instead of rejecting individuals from the beginning. Their 80 hands were assigned into a) dominant and b) non-dominant and into i. Hands having the PLM and ii. Hands without the PLM. All test participants were admitted to the survey provided they had no anamnesis or recent injury of the wrist, the fingers, and the forearm, either on the right or the left side. Also, six participants from the initial sample who were found to be ambidextrous were excluded from the research. Approval for the experiment was obtained from the institutional ethics committee on human research in accordance with the declaration of Helsinki and written consent was obtained from each participant. Data acquisition An individual assessment form was used to record the demographic data, the setting of the device, and the performance of each participant. Moreover, all participants filled out the "16-question Handedness Questionnaire", to determine their dominant hand (Tran et al., 2014). The tests were carried out on a calibrated isokinetic dynamometer (HUMAC NORM -CSMi Medical Solutions, Stoughton, MA). The sampling frequency of the device was set at 100 Hz (torque measurement accuracy is ± .5%). The participants were fastened to the device seat and their wrists were stabilised according to the instructions of the manufacturer's operating manual. The angle of 180º was selected as the neutral position of the wrist joints since this position corresponds to "anatomical zero". Their forearms were in a supine position so that their palms faced upwards. With a goniometer (Model 01135, Lafayette), the elbow angle was positioned at 160º, aiming at minimising the involvement of the biceps muscle. Measurement process Initially, the presence/absence of the PLM in both hands was examined with the "Schaeffer's test" (opposing the thumb to the little finger and flexing the wrist) (Johnson et al., 2020). 
Subsequently, those who showed absence of the PLM in this test were subjected to further examination with the following four tests: "Thompson's test" (involves flexion of fingers to make a fist, followed by wrist flexion, and then opposing the flexed thumb over the fingers), "Mishra's test I" (passive hyperextension of the metacarpophalangeal joints, followed by resisted active flexion at the wrist), and "Mishra's test II" (resisted abduction of the thumb), as described by (Johnson et al., 2020), and "Pushpakumar's two-finger sign" (index and middle fingers are fully extended, the other fingers and the wrist are flexed and finally the thumb is both opposed and flexed), as Pushpakumar, Hanson, and Carroll (2004) suggest. Only when in all five tests the PLM was not detected, then the muscle was considered as completely absent. For the measurements of MIT and the steadiness during the wrist flexion, two protocols were followed and prior to the start of the tests a standardised 5-minute warm-up of the wrist muscles took place by both resistances and stretching. The first protocol included measurements of maximum isometric torque of the wrist joint in both hands, which was recorded at three angles: 180º (neutral position), 150º (slight bended wrist), and 210º (slight extended wrist). The side-starting-order of the attempts was random to avoid any fatigue effects. In each position, two valid trials of 5 s duration were recorded. Between each of the two trials, there was a 1-min-rest, and during each angle change, a 2-min-rest was allowed. Participants were familiarised with the testing procedures in an extra session a week prior to the main measurements. During the main efforts, continuous verbal encouragement was given, so that the participants could achieve their best possible performance. After 3 days from the initial test, the steadiness of the isometric torque production during wrist flexion was measured, due to the second protocol, at 150º and 180º wrist angle. Meanwhile, for each participant, the individual power percentages they had achieved were calculated. Again, the starting order of each side was randomly chosen. Participants were given the opportunity to monitor their performance on a computer screen in real time, both as a number and as a graph (bar chart), so they could try to produce a constant torque. At both wrist positions and for every percentage, two valid attempts of 10 s were recorded. The targeted torque levels were set at 25%, 50%, and 75% of the individual MIΤ. Between the two trials at 25%, there was a 30 s rest, at 50% a 1-min-rest, and at 75% a 2-min-rest. During the change from 25% to 50% a 1-min-rest and from 50% to 75% a 2-min-rest were given. Similarly, to the initial test, there was again verbal encouragement to motivate the participants to achieve their most consistent performances. Processing of raw data To avoid the observed variation both in the beginning and at the end of the 10 s trial, the first two and the last two seconds were removed from all participants so that only the remaining 6 s were evaluated. Again, from the recorded two valid efforts at each percentage rate, only the most consistent one was used for statistical analysis (Figures 1a, b). The Coefficient of Variation according to the equation: CV=(SD/Mean)×100 was computed, and the resulting index was put into the SPSS program. rate, only the most consistent one was used for statistical analysis (Figures 1a, b). 
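The raw-data processing described above (keeping only the middle 6 s of each 10 s trial sampled at 100 Hz and computing CV = SD/mean × 100) can be expressed compactly. The snippet below is a minimal illustration with a synthetic torque trace, not the authors' actual analysis script; the target torque and noise level in the example are arbitrary.

```python
import numpy as np

FS_HZ = 100          # dynamometer sampling rate from the text (Hz)
TRIM_S = 2           # seconds removed at the start and end of each 10 s trial

def coefficient_of_variation(torque: np.ndarray, fs: int = FS_HZ, trim_s: int = TRIM_S) -> float:
    """CV (%) of the steady middle portion of a constant-torque trial.
    The first and last `trim_s` seconds are discarded, as described in the text."""
    steady = torque[trim_s * fs : len(torque) - trim_s * fs]
    return float(np.std(steady, ddof=1) / np.mean(steady) * 100.0)

if __name__ == "__main__":
    # Synthetic 10 s trial: a 20 N.m target with small fluctuations, plus a
    # ramp-up and ramp-down that the trimming step removes.
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / FS_HZ)
    trace = 20.0 + 0.4 * rng.standard_normal(t.size)
    trace[:FS_HZ] *= np.linspace(0.2, 1.0, FS_HZ)     # ramp up during the first second
    trace[-FS_HZ:] *= np.linspace(1.0, 0.3, FS_HZ)    # ramp down during the last second
    print(f"CV of the middle 6 s: {coefficient_of_variation(trace):.2f} %")
```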
Figures 1a, b. Representative best submaximal isometric constant torque (MIT) recordings at 25, 50, and 75% of MIT by "athletes in handgrip sports" (HG) and "athletes in non-handgrip sports" (NHG) who had about the same levels of strength.

Statistical analysis
A priori analysis (G*Power 3.1) showed that at least 36 subjects in total were required to detect a moderate effect size (partial η²p > 0.06) among means with the statistical design performed (ANOVA with between and repeated factors), with alpha and power levels set at 0.05 and 0.80, respectively. The statistical analysis of the results was made using SPSS 25.0. For all examined variables, the means and standard deviations (SD) were calculated with descriptive statistics. Prior to the main data analysis, the normal distribution of the scores on the dependent variables was checked with the Kolmogorov-Smirnov test. ANOVAs with repeated measurements were applied for both maximum strength and steadiness. The defined factors were dominance, the presence/absence of the PLM, wrist angle, and level of torque. The dependent variables were, respectively, the strength performances achieved at each tested wrist angle, in presence or absence of the PLM, on both the dominant and non-dominant side, as well as the CV-scores during the effort to apply constant submaximal moments at all target percentages, at both wrist angles and for both upper-limb sides. As the statistical significance limit for all analyses, p < .05 was chosen. In addition, effect sizes for the ANOVAs were calculated using partial eta squared (η²p); small, medium, and large effects correspond to η²p values greater than 0.0099, 0.0588, and 0.1379, respectively. The 95% confidence intervals for the differences between means (CI95%) for Tukey pairwise comparisons were calculated.

All statistical significance values of the Kolmogorov-Smirnov test were found above the level of p > .05, thus confirming that the dependent variables followed a normal distribution. The sample consisted of 26 HGs and 14 NHGs. In HG participants, the PLM was detected 27 times (67.5% of the hands), either uni- or bilaterally, whereas in NHGs the muscle was detected 11 times (32.5%).

MIT
HGs surpassed the NHGs in torque performance at all three angles and on both limb sides when having the PLM. There were differences (p < .05) between HGs and NHGs on the dominant side at all wrist angles, and for the non-dominant limb the MIT performances differed at 210° (Table 2). Also, there were no significant torque differences (p > .05) between HGs and NHGs in either condition (presence and absence of the PLM) after transposing the absolute torque values into values relative to body mass (N·m·kg⁻¹). There was an interaction effect between dominance and the PLM-presence (F(1,36) = 2.847, p = .046, η²p = 0.170). Pairwise comparisons showed that the presence of the PLM in the dominant limb contributed to higher torque levels independently of the different angles (CI95%: 11.260 to 15.497, 10.167 to 13.245).
Interaction was also observed among group and the PLM-presence (F1,72 = 2.894, p = .044, η 2 p = 0.068). Pairwise comparisons within the 2-way interaction effect revealed that HGs showed higher torque values in all three wrist angles when having the PLM, compared to NHGs (CI95%: 13.722 to 17.267, 8.554 to 13.385) for the differences between means for all statistically significant (p < .05). Pairwise comparisons did not include the zero value, suggesting that the means were different. HGs also reached higher torque at 210 O than NHGs in the absence of the PLM. There was not any triple interaction between three angles X groups (HG vs NHG) X PLM (presence vs absence) (Figures 2a, b, c). Application of constant torque There was no statistically significant effect of the factor angle on CV and no interactions were observed among angle, group, and the PLM-presence on the same variable (p > .05). On the contrary, there was a statistically significant main effect of the factor dominance on steadiness (F 1,36 = 14.678, p = .001, η 2 p = 0.099). Pairwise comparisons showed that the dominant side reached higher steadiness than the non-dominant in all examined conditions (CI95%: 2.146 to 2.652, and 2.319 to 2.824). There was also interaction effect between dominance and group (F 1,72 = 3.097, p = .05, η 2 p = 0.086), and between dominance and the PLM presence (F 1,72 = 3.033, η 2 p = 0.068, p = .039). Pairwise comparisons within the 2-way interaction effect revealed that the HG group presented lower CV-values compared to the NHG group and the PLM-presence in the dominant side in the HG group strengthened the steadiness performance (CI95%: 2.098 to 2.520, and 2.374 to 2.950 respectively). There was no significant main effect of the factor torque level on steadiness performance, and no interaction was observed between the factors angle and torque level (p > .05). At both angles (150 o and 180 o ), there were differences in CV-values at the three examined levels. However, statistical marginal inter-action was found among the factors angle, dominance, and torque level (F 2,72 = 2.900, p = .051, η 2 p = 0.058). The dominant limb showed better steadiness at 150 o in all three torque levels and at 180 o in two levels (25 and 75%). A multiple interaction was found among the factors angle, dominance, group, torque level, and the PLM-presence (F 6,72 = 3.100, p = .039, η 2 p=0,069). At both angles, the HG group in presence of the PLM in the dominant side showed lower CV-values at the 25% MIT-level than the NHG group. Similar results were obtained at 75% torque level for both groups. At 50%, both groups showed the same behaviour according to CV-index, independently of whether they had the PLM or not. However, the HG group presented better steadiness than the NHG group. In addition, at 180 o there were no statistical differences between the two groups in steadiness performance (Figures 3 and 4). percentages (25, 50, and 75%) by "athletes in handgrip sports" (HG), and "athletes in non-handgrip sports" (NHG). DISCUSSION AND IMPLICATIONS The main findings of this study were that grip-sports students having the PLM in their dominant hand presented a greater ability to perform steady submaximal isometric wrist flexions at 25% and 75% of MIT compared to nongrip-sports students. The greater steadiness was not muscle length specific. The HG-group outmatched the NHG-group in MIT performance in all experimental conditions, especially when the PLM was present. 
The above findings confirm the initial hypothesis that the existence of the PLM affects the ability to exert a constant submaximal strength in different wrist angles, but also that the presence of the PLM provides an advantage in maximum strength to athletes involved in handgrip sports. In many sports, such as basketball, handball, wrestling, tennis, badminton, rowing, and volleyball, the performances during wrist flexion movements are critical to the overall performance of an athlete in skills that require the development of high levels of maximum strength, as well as high-quality steadiness in the maintenance of submaximal torque. Salonikidis et al. (2009) suggested that experts were more accurate than sedentary young adults with the same level of force and that this result was not due to differences in muscle activation patterns. Therefore, the extended practice of skill-trained individuals in a specific skill may affect the strength variability during submaximal torque testing rather than the strength level. Along the same lines, our study showed that HGs have a more frequent presence of the PLM in the often-used hand than NHGs and that this is accompanied by a higher MIT and a greater ability to perform steady submaximal isometric wrist flexions at all tested levels of torque. Our findings are supported by Fowlie et al. (2012), who demonstrated that the PLM-presence was more frequent amongst both elite and non-elite athletes competing in sustained grip sports compared to intermittent grip sports. These results suggest that the presence of the PLM may be of benefit to sustained and intermittent grip sports that require a higher level of skill and more accuracy. In elite grip sports, increased steadiness may provide more precise execution of the actions and reduced errors. Previous studies supported the suggestion that the presence of the PLM may also assist in metacarpal flexion via its attachment to the palmar aponeurosis, and its contribution to the action of metacarpal flexion may provide a stronger and more stable grip, which is important for a cylindrical grip (Wadsworth, 1983). The palmaris longus, also with a greater pool of proprioceptors, can contribute to superior grip precision in these sports. Assuming that the PLM, when present, is active in assisting metacarpal flexion, it may also assist in the contraction of muscles around the wrist to maintain steadiness and performance. The additional muscle spindles and mechanoreceptors that the PLM provides or influences may offer an advantage in peripheral feedback for cupping the palm (Jozsa et al., 1993). The presence of the PLM may reinforce the findings of previous research that high-skill HGs dominate in grip performance and steadiness. The significantly lower torque variation at 25% of MIT at both measured wrist angles, in HGs with presence of the PLM, confirmed the above assumptions.
The specific typology and mechanics of the PLM, which consist of a muscle with short belly and a long tendon, will produce more steadiness at a given torque level and could also produce sufficient forces in the muscle fiber's short lengths because of tendon elastic properties (Troiani et al., 1999). The importance of the existence of the PLM in the dominant wrist flexors on steadiness performance is demonstrated in Figures 3 and 4. When the muscle was absent, the steadiness perturbated in both groups similarly. Furthermore, at 75% of MIT, when the requirements on torque production are increased, the presence of the PLM is more important for the joint steadiness and the steady torque application, adding more muscle activation and supporting from its mechanoreceptors, as is demonstrated in Figures 3 and 4. According to our findings, the PLM's absence at this difficult level of torque worsens the steadiness independently on the wrist angle and muscle length. To our knowledge, this is the first study that addresses the contribution of the PLM with a greater pool of muscle fibres and/ or proprioceptors in the near-maximum torque and steadiness of the wrist flexors, especially in individuals participating in grip-sports. In terms of MIT, we found better performance in HGs than in NHGs when the PLM was present, at all angles. On the other hand, it seems that the side dominance did not play any role in MIT-performance for both groups. This is in contrast with previous findings where no significant differences in MIT between high skilled and sedentary individuals were observed and this could be partially justified by the contribution of the PLM to the action of metacarpal flexion providing a stronger and more stable grip (Jubeau et al., 2006;Salonikidis et al., 2009). The existence of one more muscle may offer more strength to the joint, especially providing to athletes participating in grip sports an additional pool of muscle fibres that can be recruited for strength development. In terms of wrist angle effect on torque development and steadiness, no significant differences were observed between both groups, suggesting that the PLM as a part of the wrist flexors is acting across the whole range of motion in a similar manner in HGs and NHGs. Especially in the extended wrist angle for the HGs, the PLM contributes further to the steadiness of the wrist joint at the increased torque level. The above finding is in line with the findings of other researchers (Friden & Lieber, 1998;Salonikidis et al., 2011). According to steadiness and MIT-increases (%), our results did not confirm any relationship, regardless of the presence of the PLM and the skill level at both wrist angles. This is in accordance with previous studies (Löscher & Gallasch, 1993;Newell & Carlton, 1988), but it is in contrast with other studies which reported that during isometric handgrip, force tremor amplitude decreased from 5% to 60% MΙΤ (Christou & Carlton, 2002;Laidlaw et al., 2000;Salonikidis et al., 2011). Differences in protocols, type of action, skill's expertise, muscular group, and visual feedback could only partly explain such a discrepancy. The lack of consideration of the factor existence/absence of the PLM in the previous studies could be another reason for deriving different results in the present study. The existence of the PLM seems to be important especially for the movement accuracy and grip performance in high level HGs who have more frequent presence of the PLM in the often-used hand. 
Moreover, HG's without PLM could be recommended a more focused training on movement accuracy and for a long time which might have a positive impact both on grip steadiness and hand flexion performance. Injury prevention is another important consideration. Greater wrist stability in grip sports would be expected to provide added benefits when it comes to avoiding injuries -both acute and chronic ones. However, this remains to be confirmed through appropriate experimental protocols and future studies. Several limitations need to be considered. The study had a cross-sectional design, and the participants were divided into grip and no-grip expertise regarding sports activity in which they were participating at the time of the questionnaire completion. In addition, the presence or absence of the palmaris longus in this study has not been examined against a diagnostic ultrasound or Magnetic Resonance Imaging. Moreover, the simultaneous use of EMG is thought important to estimate how and when the muscle is activated during its flexion. It is acknowledged that less-developed variations of the palmaris longus may not have been identified by the assessment procedure used in this study. CONCLUSION The PLM may benefit athletes who participate in sports that require a dominant-handed or two-handed grip which seems to support the first hypothesis of this study. At an applied level, this research aspires to contribute to further knowledge of whether the genetically determined presence of the PLM gives an advantage in steadiness or in the development of greater torque during wrist flexion, especially in elite grip-sports athletes. This second hypothesis has been partly confirmed. MIT was higher in the dominant hand with the presence of the PLM than the hand in which there was an absence of the PLM. The importance of the PLM-existence in individuals with extended practice in grip sports seems to be beneficial for task precision. However, further research is required to determine the actual function or benefit that the PLM may provide in cylindrical and sustained grip sports played at elite and non-elite levels.
Employee Perceptions of Workplace Health Promotion Programs: Comparison of a Tailored, Semi-Tailored, and Standardized Approach In the design of workplace health promotion programs (WHPPs), employee perceptions represent an integral variable which is predicted to translate into rate of user engagement (i.e., participation) and program loyalty. This study evaluated employee perceptions of three workplace health programs promoting nutritional consumption and physical activity. Programs included: (1) an individually tailored consultation with an exercise physiologist and dietitian; (2) a semi-tailored 12-week SMS health message program; and (3) a standardized group workshop delivered by an expert. Participating employees from a transport company completed program evaluation surveys rating the overall program, affect, and utility of: consultations (n = 19); SMS program (n = 234); and workshops (n = 86). Overall, participants’ affect and utility evaluations were positive for all programs, with the greatest satisfaction being reported in the tailored individual consultation and standardized group workshop conditions. Furthermore, mode of delivery and the physical presence of an expert health practitioner was more influential than the degree to which the information was tailored to the individual. Thus, the synergy in ratings between individually tailored consultations and standardized group workshops indicates that low-cost delivery health programs may be as appealing to employees as tailored, and comparatively high-cost, program options. Introduction Workplace health promotion programs (WHPPs) have become an increasingly popular means of promoting positive health behaviors in employees that are mutually beneficial to employers and employees [1][2][3]. Such programs can improve the overall health of the individual [4][5][6][7], increase physical activity [8,9], lead to small improvements in healthy weight status [6,10], have positive effects on dietary behaviors [3,11] and improve employee productivity [12][13][14]. Whilst there is overall support for the effectiveness of WHPPs, the reported extent to which such programs achieve lasting changes in behavior varies [10,15,16]. This variation in reported effectiveness is expected given the diversity in the design and quality of programs offered. In practice, WHPPs are often designed and delivered by workplace health and safety (WH&S) professionals with limited specialized training in health intervention design and evaluation, or access to current scientific publications. As a result, practitioners tend to employ a "one-size-fits-all" approach (e.g., offering of a standardized boot camp), frequently with cost rather than engagement and outcomes being the primary driver of program selection [5,23,24,29,37]. Hence, it is typically the practitioner's opinions about program design, mode of delivery, and content that are considered during the process of program definition and selection, while the perspective of employees'-whom are subsequently asked to choose whether or not to participate-remain overlooked [7,17,29]. This "one-size-fits-all" approach often fails to attract considerable employee participation and engagement, especially among the individuals who would benefit most from the program but are least likely to volunteer, particularly when grouped with colleagues who are already engaging in healthy lifestyle behaviors. 
In turn, such a practitioner-driven program design approach can result in low participation and/or high rates of attrition [10,33,38]. Of the research investigating participation and engagement, Crutzen et al. [20,30] developed an evaluation strategy to measure user perceptions of health promotion interventions based on the Technology Acceptance Model [39]. In addition to overall impressions of the program, the authors derived two key constructs of program user perceptions. The constructs were characterized as 'affective user perceptions' and 'cognitive user perceptions of utility'. More specifically, Crutzen et al. [30] posited that sustained participant engagement in a health promotion program can be achieved through positive affect (i.e., emotional responses related to: enjoyment; interest; and support) and high perceptions of program utility (i.e., cognitive evaluations related to: ease of use; outcomes; and program value). In the design and evaluation of WHPPs, participant perceptions of affect and utility predict an individual's participation and engagement [30]. A positive affect is likely to increase interest in the program and result in increased time participating [40]; whilst positive perceptions of utility (i.e., user experience) lead to increased participant loyalty and dedication to the program [41]. Given the important contribution of employee perceptions to the effectiveness of any WHPP, and the paucity of scientific literature that considers these constructs, further investigation is warranted to validate the measurement, application, and evaluation of employee perceptions of WHPPs. In particular, it is important to investigate whether any synergy exists between the often-selected "one-size-fits-all" standardized, cost-effective WHPP design and that of a more tailored but higher-cost approach recommended in the literature. At the time of publication, the authors were unable to identify any study that provided a comparison of employee perceptions of WHPPs categorized by the degree of program tailoring or mode of delivery, and which could be associated with the comparative cost of program implementation. Thus, the purpose of this study was to compare employee perceptions of the overall program offered, together with perceptions of program affect and utility across WHPPs that differed in their levels of tailoring and mode of delivery. The three WHPPs included in the study were designed by a single research team with expertise in health behavior change, psychology, exercise physiology, and dietetics. All programs aimed to increase healthy lifestyle behaviors including physical activity and nutritional consumption. The programs comprised: (1) A fully tailored one-on-one health consultation with a registered dietitian and exercise physiologist; (2) A semi-tailored SMS health messages program with tailoring based on employees' reported readiness to change exercise and eating behaviors; and (3) A standardized group health workshop presentation delivered by a registered dietitian and exercise physiologist. It was hypothesized that the participant ratings of the overall program, as well as participant ratings of affect and utility, would be highest in the individual tailored condition, with a gradual decline as the rate of tailoring (i.e., personalized content contained within the program) diminished. Participants A sample of 339 employees from an Australian transport company were recruited to participate in the study. Participants were aged between 19 and 64 years. 
Consistent with the organization's workforce demographics, the majority of participants (62%) were male. Due to operational limitations of the project, including restricted access to employees and work time availability, the number of participants and the volume of program evaluations available for each program varied considerably. For each program, employees who completed both the program and the evaluations are described below. Tailored Individual Consultation Nineteen employees (79% male) from three discrete worksites evaluated the tailored individual consultation delivered by a registered dietitian and exercise physiologist. In this program, participant age ranged between 25 and 64 years, with the majority (39%) in the 35 to 44 years cluster. Semi-Tailored SMS Health Message Program Two hundred and thirty-four employees (67% male) evaluated the SMS health message program. Participating employees were aged between 19 and 64 years, with the majority (31%) in the 25 to 34 years group. Standardized Group Workshop Eighty-six employees (55% male) evaluated the group workshop program. Participants were aged between 19 and 64 years, with most (41%) in the 25 to 34 years cluster. Design The study employed a cross-sectional design. Employees' perceptions of the overall program, affect, and utility for each WHPP were assessed by a combination of quantitative and qualitative items in an evaluation survey. Ethics Approval This study was approved by the Uniting Care Health Human Research Ethics Committee (# 2013.12.83) and informed consent was obtained from each participant. All three health programs were designed to promote positive health behaviors aimed at achieving healthy levels of physical activity and nutritional consumption. In all programs, the content was designed to increase participants' knowledge and motivation, such that employees would be empowered to change or maintain their behavior consistent with the national guidelines [42]. Recruitment Program trials were advertised by strategically located posters at worksites, promotional emails sent by site managers to employee email addresses, and in-person promotion during site visits by the research team. During these visits, researchers invited all employees to voluntarily participate in a corporate health survey and trial the WHPP offered at their worksite. All three trials and evaluation surveys were completed between January and August 2016. In order to maintain the integrity of each trial and comply with the operational requirements of the organization, the specific WHPP offered differed according to geographic site location. Employees were permitted to participate in the WHPP during work hours. To control for fidelity in the delivery of the individually tailored consultation and the standardized group presentation, one expert consultant with qualifications in exercise physiology and dietetics delivered all consultations and workshops. The consultant also contributed to the refinement of the text message content for the SMS health messages program. Tailored Individual Consultation Employees were invited to schedule an appointment with the dietitian and exercise physiologist during the trial period. Appointments ranged between 30 and 45 min in duration, depending on the specific needs of the participant. The consultation was semi-structured, with the employee self-nominating the nutrition and/or physical activity issues to be discussed.
Techniques used by the consultant included: motivational interviewing; goal setting; and the provision of expert nutrition and exercise planning assistance tailored to the individual's health goals, risks, and current lifestyle behaviors. Semi-Tailored SMS Health Message Program Invited employees self-selected to participate in the SMS health message program. The health survey included independent measures of 'readiness to change' physical activity and nutritional consumption behaviors based on the readiness ruler used in motivational interviewing theory [43,44]. Participant responses were applied in the process of semi-tailoring the program. Employees who indicated a score of less than or equal to seven out of 10 on the readiness ruler regarding their intention to change their health behaviors were categorized as 'low readiness to change'. Scores of greater than or equal to eight out of 10, and those who selected 'I'm already trying to achieve healthy exercise/eating habits' or 'I'm already achieving 30 min of moderate exercise on most days/I'm already eating healthy', were categorized as 'moderate to high readiness to change'. Notably, categorization of readiness to change physical activity and nutrition were assessed independently, with many participants being categorized as 'low readiness to change' for one behavior (e.g., nutritional consumption) and 'moderate to high readiness to change' for the other (e.g., physical activity). A database of messages was developed by the research team and subjected to an extensive content validity validation process that included a modified Delphi technique with a panel of experts in the fields of health psychology, exercise physiology, and dietetics; and pilot trial with employees of a similar organization [45]. Example SMS messages include: 'Fitness is about being a better you. Have you challenged yourself today?' (applicable to participants demonstrating a low readiness for exercise health behavior change) or 'Read the nutritional panel on your food packaging. For every 100 g, choose items with less than 3 g saturated fat and less than 400 mg sodium' (applicable to individuals identified as demonstrating moderate-to-high readiness for nutrition health behavior change). Participation in the trial required the provision of a mobile telephone number. Participating employees received three text messages per week for a period of 12 weeks with delivery times alternating between 9 a.m. and 4 p.m. local time, on a Monday, Wednesday and Friday. Standardized Health Workshop Employees were invited to participate in a 30-minute group workshop on nutrition. The presentation was delivered at five scheduled sessions across two office locations in order to maximize employee participation. The key objective of the workshop was to improve participant understanding of common food behavior patterns and develop positive short-term nutrition intake decisions for long-term health outcomes. Selected content included: the internal thought rationalization process; nutrition for optimal brain functioning; and strategies for improving concentration at work. Employees were provided with the opportunity to ask questions during the session regarding workshop content. No individual or tailored health advice was offered during the workshop sessions. Program Evaluation The program evaluation survey was completed by willing participants immediately following the consultation and workshop programs. 
SMS health message program evaluation surveys were completed during a follow-up site visit at the completion of the 12-week trial. Readiness to Change Participants' readiness to change physical activity and nutritional consumption behaviors was measured as part of the baseline health survey on an 11-point scale, with two supplementary response options indicating active progress toward behavioral change and achievement of the national physical activity and nutrition guidelines [42]; refer to Figure 1. Overall Program Rating Employee perception of the overall program was assessed in the evaluation survey by a single quantitative item followed by a series of three open-response qualitative measures. The quantitative item of overall program rating asked, 'Overall, how would you rate the service?' and was presented as an 11-point Likert scale with response options ranging from 0 (very poor) to 10 (excellent). The qualitative survey items were based on Walthouwer and colleagues' [46] evaluation of an obesity prevention intervention. Items included: 'What did you like most about the service?'; 'What did you like least about the service?'; and 'Do you have any suggestions to improve the service?'. Affect and Utility Measures of participants' affect and utility of the trials were each assessed in the evaluation survey by eight items measured on a 5-point Likert scale ranging from one (strongly disagree) to five (strongly agree). These items (shown in Figure 2) were based on health promotion program evaluation items developed by Crutzen et al. [20,30].
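The readiness-to-change categorization described earlier (readiness-ruler scores of seven or less classed as 'low'; scores of eight or more, or selection of one of the 'already trying/achieving' options, classed as 'moderate to high') can be illustrated with a short sketch. The function name and example responses below are hypothetical and are not the study's actual scoring script.

```python
ALREADY_OPTIONS = {
    "I'm already trying to achieve healthy exercise/eating habits",
    "I'm already achieving 30 min of moderate exercise on most days",
    "I'm already eating healthy",
}

def categorize_readiness(response):
    """Map a readiness-ruler response (a 0-10 score or an 'already ...' option)
    to the category used when semi-tailoring the SMS messages."""
    if response in ALREADY_OPTIONS:
        return "moderate to high readiness to change"
    return ("low readiness to change" if int(response) <= 7
            else "moderate to high readiness to change")

# Readiness was categorized independently for nutrition and physical activity,
# so a participant can be 'low' for one behavior and 'moderate to high' for the other.
print(categorize_readiness(5))                        # low readiness to change
print(categorize_readiness(9))                        # moderate to high readiness to change
print(categorize_readiness("I'm already eating healthy"))
```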
Results Data consisted of participants' responses to both quantitative and qualitative items in the program evaluation survey. As a result of consistent observations of breaches of normality, all analyses of quantitative data were subjected to a bias-corrected and accelerated (BCa) bootstrap model with 1000 samples using IBM SPSS version 21 [47]. Table 1 illustrates the descriptive statistics of overall program rating for each program condition (note: x = mean, SD = standard deviation, 95% CI = 95% confidence interval). A Kolmogorov-Smirnov distribution analysis revealed that scores were significantly non-normal across all three program conditions, D(19) = 0.334, p < 0.001; D(161) = 0.142, p < 0.001; and D(123) = 0.215, p < 0.001, respectively. Levene's test confirmed that variability in program ratings differed between the groups (F(2, 300) = 25.81, p < 0.001), with variability in overall rating greatest in the semi-tailored SMS health message program. Overall Rating Overall program ratings were high, with the greatest satisfaction being reported in the tailored individual consultation condition. A bootstrapped one-way ANOVA was performed to compare overall program ratings. The ANOVA showed that differences in program ratings were significant, and the effect of program type on participant rating was large, F(2, 300) = 93.31, p < 0.001, η² = 0.38. Post hoc Dunnett T3 comparisons revealed that the mean overall rating for the semi-tailored SMS health messages program was significantly lower than both the tailored individual consultation (MD = −3.055, 95% CI = −3.68, −2.43, p < 0.001) and the standardized group workshop (MD = −2.57, 95% CI = −2.95, −2.20, p < 0.001). However, no significant difference between the overall rating of the tailored individual consultation and the standardized group workshop was detected, MD = 0.49, 95% CI = −0.06, 1.03, p = 0.093, n.s.
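The bootstrapped comparisons reported above can be approximated with a short resampling routine. The sketch below is illustrative only: it uses a plain percentile bootstrap of a difference in means on invented ratings rather than SPSS's BCa implementation, and all values are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mean_diff_ci(a, b, n_resamples=1000, alpha=0.05):
    """Percentile bootstrap CI for mean(a) - mean(b); a simplified stand-in for a BCa bootstrap."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.empty(n_resamples)
    for i in range(n_resamples):
        diffs[i] = (rng.choice(a, size=a.size, replace=True).mean()
                    - rng.choice(b, size=b.size, replace=True).mean())
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return a.mean() - b.mean(), (lo, hi)

# Hypothetical 0-10 overall ratings for two program conditions.
consultation = rng.integers(7, 11, size=19)    # e.g., tailored consultation ratings
sms_program = rng.integers(3, 10, size=161)    # e.g., SMS program ratings
diff, ci = bootstrap_mean_diff_ci(consultation, sms_program)
print(f"mean difference = {diff:.2f}, 95% bootstrap CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```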
A thematic analysis was conducted on the qualitative program experience question responses. The overall results of the thematic analysis are presented in Table 2. In total, six themes were identified in the program evaluation feedback, including: authority; content; delivery; self-awareness; tailoring; and other. Table 2 also lists the response counts and representative participant quotations for each theme (e.g., "Our irregular starting times meant that I was getting SMS messages while sleeping" under delivery, and "I felt as though the messages assumed I was unhealthy" under tailoring). Qualitative feedback on what participants liked most (n = 210) about the programs revealed that across all three programs, responses related to program content were common (consultations = 31.6%; SMS messages program = 60.4%; group workshops = 44.2%). For the consultation condition, responses that reflected the program tailoring were also common (31.6%) and reflected the personalized nature of the advice provided. Workshop participants also provided frequent responses related to delivery (44.2%), overwhelmingly reflecting the engaging presentation style of the presenter. Across all program conditions, few participants (n = 66) responded to the question regarding what they liked least about the program offered. Consultation participants' responses predominantly aligned with the theme of self-awareness (80.0%) and reflected difficulty related to the process of behavior change (e.g., "Knowing I have lots of work ahead of me"). SMS program participant responses to this question varied considerably, with the most common responses related to program content (25.9%) and tailoring (25.9%). Content theme responses reflected the generalized nature of the message content, which was referred to by one respondent as "common sense", whereas tailoring theme responses reflected participants' desire for a service more customized to their personal circumstances and health goals. Overwhelmingly, workshop participants reported that what they liked least about the program related to delivery (85.3%), and requested the service be provided on a regular basis and/or for a greater duration. One hundred and one employees provided suggestions for improvement of the workplace health promotion service trialed. Within both the consultation and workshop conditions, responses most frequently related to program delivery (consultation = 83.3% and workshop = 62.2%). For both programs, participants overwhelmingly requested an increase in the frequency and duration of the service. In the SMS program condition, the most common response related to the theme of program content (41.4%) and reflected participants' desire for more detailed information and resources to supplement the information provided in the SMS messages. Table 3 presents the descriptive statistics relating to affect and utility for each program condition (note: x = mean, SD = standard deviation, 95% CI = 95% confidence interval). Affect and Utility Scale Reliability A reliability test of the affect and utility scales was conducted. Results of the reliability test revealed that the affect and utility scales both held very high overall internal reliability, n = 8, Cronbach's α = 0.93; n = 8, Cronbach's α = 0.94, respectively. Furthermore, an analysis of the interrelatedness of each item to the overall scale showed that the removal of any single item from either the affect or utility scales would not significantly improve the internal reliability of the scale.
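For readers unfamiliar with the reliability index used here, Cronbach's alpha can be computed from the item-response matrix as shown below. This is a generic sketch with made-up Likert responses, not the study's data or its SPSS output.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of Likert responses:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 Likert responses from 6 respondents to an 8-item affect scale.
responses = np.array([
    [4, 5, 4, 4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3, 3, 4, 3],
    [5, 5, 5, 4, 5, 5, 5, 5],
    [2, 3, 2, 2, 3, 2, 2, 3],
    [4, 4, 4, 5, 4, 4, 5, 4],
    [3, 4, 3, 3, 4, 3, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```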
Affect Analyses of the affect ratings of each program revealed that participants consistently reported a strong affect in the individually tailored consultation and standardized group workshop trials. By contrast, participants in the semi-tailored SMS health messages program reported a positive, but slightly lower, affect. A Kolmogorov-Smirnov distribution analysis revealed that scores were significantly non-normal across the tailored (D(19) = 0.318, p < 0.001), semi-tailored (D(169) = 0.108, p < 0.001), and standardized conditions, D(140) = 0.118, p < 0.001. Levene's test confirmed that variability in program ratings differed between the groups (F(2, 325) = 34.09, p < 0.001), with the greatest variability shown in the semi-tailored SMS health message program rating of affect. Utility Overall, participants' program utility ratings were high in the tailored individual consultation and standardized group workshop trials. Participants in the semi-tailored SMS health messages trial rated the program as marginally lower for utility. A Kolmogorov-Smirnov distribution analysis revealed that scores were significantly non-normal across the tailored (D(19) = 0.246, p = 0.004), semi-tailored (D(169) = 0.122, p < 0.001), and standardized trial conditions, D(140) = 0.129, p < 0.001. Levene's test confirmed that variability in program ratings differed between the groups (F(2, 325) = 42.43, p < 0.001), with the greatest variability of results observed in the semi-tailored SMS health message program trial. Discussion The current study both confirms and extends upon contemporary literature highlighting the importance of employee perceptions in designing and evaluating WHPPs. Specifically, this study was the first to apply the evaluation constructs of affect and utility, as described by Crutzen et al. [20,30], together with an overall program rating [46], to three WHPPs offered within a single organization. With each WHPP offering varying degrees of individual tailoring and mode of delivery, this study demonstrates that employee perceptions of a program's design substantially contribute to the uptake of, likely ongoing participation in, and loyalty to, an offered WHPP. The findings for each evaluation construct are subsequently discussed. Overall Rating Contrary to our prediction, all three programs received moderate-to-high overall ratings, which may suggest that employees value efforts by their employer to support worker health and provide WHPP services in general. Notably, higher levels of satisfaction were reported for the WHPPs that offered the tailored individual consultation and standardized workshop conditions when compared to the semi-tailored SMS trial, suggesting that the physical presence of a highly regarded and engaging expert presenter may have positively influenced overall program ratings more than the degree of tailoring provided within the program. This interpretation is further supported by the qualitative feedback, in which the presentation style and manner of the exercise physiologist and dietitian was overwhelmingly reported as a positive aspect of the program by participants in the consultation and workshop trials. Similarly, SMS program participants were least satisfied with the content and tailoring of the program, despite the researchers' efforts to match the level of messaging to their reported readiness to change health behaviors. This feedback may also reflect the unidirectional nature of SMS messaging and the absence of the physical presence of an expert with whom participants could build a rapport. More generally, this finding may also suggest that personal communication, regardless of whether it is delivered in a private or a group setting, is more highly valued by employees in WHPPs than programs delivered through a virtual platform (e.g., SMS messaging). For all three trials, participants most frequently reported program content as what they liked most about the service.
This reflects previous research which found that employees value WHPPs that promote health education and disseminate relevant information more highly than programs that approach health behavior change via simple give-aways, such as the provision of free fruit in lunch rooms, which may be perceived to be tokenistic [48]. Within the workshop trial condition, employees initially reported 30 minutes as an appropriate session duration that could be completed during their lunch break. Many attendees later requested that longer sessions be offered on an ongoing basis due to the reported perception that additional information would have been appreciated. However, longer workshop session durations may limit participation by those who are unable to extend their lunch break due to work commitments. These requests for longer and ongoing support by participants is supported by the literature suggesting that the frequent contact improved participant outcomes and sustained health-promoting behaviors [6]. Similarly, although some SMS trial participants suggested that the service may be improved by the addition of links to websites containing further information such as recipes, such suggestions may be impractical where such links would, by necessity, be those of external providers. This would also nullify the use of the SMS delivery program-adopted with the 160-character limit such that the content of each message was required to be pointed and concise-as well as potentially compromising the program delivery as the ongoing content of any provided links could not be regulated by the employer. Additionally, providing access to such external resources may substantially increase the cost of program delivery depending on the approach adopted. Affect Measures of affect comprise the emotional responses and reactions of program participants and reflect how individuals feel about aspects of the program (e.g., interest, enjoyment, and trust, etc.). On average, overall ratings of affect were high across all three programs. According to Crutzen et al. [30], such high levels of reported affect are associated with high levels of program engagement and would likely result in sustained participation. The hypothesis that the affect ratings would be greatest in the individual tailored condition, with a gradual decline as the rate of tailoring (i.e., personalized content) diminished, was not supported by the findings in this study. While overall affect ratings were highest in the tailored individual consultation condition, ratings were higher for the standardized workshop than the semi-tailored SMS trial. This trend is consistent with that of the overall program ratings and may further infer that the in-person delivery of program content, and presentation style or rapport between the audience and presenter, may lead to more positive perceptions of affect and the program overall, with degree of tailoring playing a lesser role in employee perceptions of program design than previously reported. It is, however, noted that variability of results was also highest in the SMS condition, a finding which may reflect the substantial difference in participation rates or may suggest that the SMS program design and content was more appealing to some employees than others. Utility Ratings of program utility represent cognitive responses to the program (e.g., ease of use and program value, etc.). 
In the present study, the trends observed for measures of affect were replicated in the utility data and are also incongruent with the study hypothesis. In particular, the average overall affect ratings were high across all three trials, which has been associated with elevated user loyalty to the program [41]. Overall utility ratings were highest in the tailored individual consultation and standardized workshop conditions, with no statistical difference observed between the utility ratings of these programs. However, the semi-tailored SMS trial received significantly lower ratings, and contained greater variability in responses. The comparatively lower rating of perceived utility by participants in the SMS trial further indicates that employees prefer the experience of a physical person delivering such programs over that of a unidirectional information service, thereby suggesting that mode of delivery was more influential than degree of tailoring in influencing employee perceptions in this study. It is possible that higher variability in the perceptions of utility in the SMS trial may reflect the substantially higher number of participants in the trial or may also be related to variability in the level of technological literacy of users. This effect could potentially be lessened where employers were able to offer concurrent WHPPs for participants to self-select the mode of delivery they found most appealing. That is, programs that rely on technological platforms (e.g., computer, mobile telephone, or video gaming technology), such as the SMS trial, should be offered alongside more traditional designs (e.g., educational workshops). However, it would be expected that substantially increased costs would be associated with the development and delivery of multiple WHPPs by a single employer, which may render such an approach impractical in many circumstances. Clinical Implications This study provides practical contributions to the applied workplace health promotion literature. Based on the findings herein, the employees' perception of any offered WHPP represents an integral variable which is predicted to translate into observable differences in rate of user engagement (i.e., participation) and program loyalty. Therefore, the practice of WHPP development should be refined to include consideration of employee perceptions and preferences. Furthermore, the findings of this study are incongruent with the current scientific literature whereby, unlike the findings of previous studies, program tailoring was not a primary variable in determining the success of the WHPP in terms of user engagement and participation. However, it should be noted that the success of voluntary WHPPs is highly dependent upon rates of participation, and therefore employee perceptions. In this study it was observed that employee perceptions of the overall program, affect, and utility were of a similarly high level for participants in both the tailored and standardized study conditions, with significantly lower ratings in the semi-tailored condition. As the degree of tailoring typically influences program administration costs, this finding indicates that tailoring may play a less important role in attracting participation than previously reported. That is, standardized, more cost-effective programs such as standardized group workshops can be as attractive to employees as the more expensive tailored consultations, as long as the content of the offered WHPP is perceived as favourable by employees. 
Moreover, qualitative feedback from participating employees implied that mode of delivery, and in particular the physical presence of an expert health practitioner with whom they could build a rapport, was more important to employees than the degree to which the information provided was tailored to the individual. This result was demonstrated regardless of the extent to which the provided contact was individual-or group-based, and further supports the findings that program mode of delivery may also have a greater influence on user engagement and loyalty than program tailoring. Limitations There are a number of limitations to the current study that require comment, including: the cross-sectional design; the low response rate for the evaluation survey; sampling bias; replication of the consultation and workshop programs; and differences in program intensity. Firstly, while the cross-sectional design employed in the current study is an important first step toward understanding employee perceptions of different WHPPs, longitudinal studies are needed to investigate whether positive affect and utility ratings translate into participant engagement and loyalty, as predicted. Secondly, the low response rate for evaluation surveys, particularly in the tailored individual consultation condition, represents a major limitation to the current study. Arguably, this reflects the difficulty of obtaining feedback from employees following the conclusion of the free service, primarily due to time and operational constraints. However, regardless of the reason for this low rate of response, such an outcome reduces confidence in the accuracy of any follow-up findings. Thirdly, the recruitment strategy of seeking voluntary participation in WHPPs may have contributed to a selection bias whereby employees self-selecting into the programs disproportionally represent those who are already engaging in healthy lifestyle behaviors rather than those who require the most assistance with health behavior change but are least likely to volunteer for participation. Fourthly, the consultation and workshop programs received extensive positive feedback about the presenter and presentation style, this relates to the individual characteristics of the presenter in this case and evaluation results may vary where a different presenter was employed in a replication study. Finally, it is imperative to acknowledge that this study involved the comparison of WHPPs that differed in intensity (i.e., a 12-week SMS trial versus a one-off consultation or workshop). This notable difference in duration of engagement may have further influenced employee perceptions of the program to an extent greater than that provided in the evaluation survey results. Overall, the limitations of this study are consistent with other applied research within the workplace health promotion discipline. However, the authors acknowledge that the project may have been improved by quantifying: (1) the number of employees who declined to participate; (2) WHPP attrition rates; and (3) the rate of conversion from program completion to evaluation. Future Research Based on the findings in this study, it is recommended that future research investigate how to improve employee perceptions of WHPPs, particularly for engaging with those least likely to volunteer for participation. Such studies may include the measurement of what motivates employees to participate or which aspects of the program are most appealing to potential participants. 
Following this, employee feedback and preferences should be investigated and applied in the design and/or refinement of organization-specific WHPPs and evaluated longitudinally to confirm the previous findings that positive affect and utility ratings lead to increased participation [40] and loyalty [41]. Conclusions This study compared employee perceptions of overall program rating, affect and utility across three WHPPs. Each offered program had differing levels of tailoring and mode of delivery. Overall, a trend was observed in the employee perceptions which revealed that participant ratings of the overall program, affect, and utility were similarly higher for the tailored individual consultation and standardized workshop condition when compared to the semi-tailored SMS trial. These findings suggest that the physical presence of a highly regarded expert may positively influence the overall program ratings of a standardised WHPP. Based on previous findings, higher ratings of participant perception are predicted to lead to higher levels of user engagement and loyalty within a WHPP. The synergy in ratings between the individually tailored consultation and standardized group workshop indicates that low-cost delivery health programs may, if delivered by a suitable expert, potentially be as appealing and beneficial to employees as comparatively high-cost options. This study was the first to apply the evaluation constructs of affect and utility to workplace health programs and highlights the importance of considering employee preferences in the design of such programs to ensure maximum effectiveness, including employee participation and loyalty. Author Contributions: T.D.S. and S.J.L. contributed equally to the project design, data collection, data analysis, and manuscript preparation. All authors read and approved the final manuscript.
Ionic Liquids for the Electric Double Layer Capacitor Applications
Demonstration EDLC cells were prepared using three kinds of current collector: a conventional aluminum oxide foil for EDLC, an aluminum foil, and an aluminum foil with CLC. The cell with the CLC had a much higher rate capability than the cell without CLC; only the CLC cell was able to discharge at a current density of 500C.
Introduction The properties of ionic liquids (ILs) include non-volatility, non-flammability, and relatively high ionic conductivity (Rogers & Seddon, 2001). As novel green, reusable solvents that can substitute for organic solvents, ionic liquids have attracted much attention as good media in organic synthesis and other chemical processes (Zhao et al., 2009, 2011). At the same time, some studies have been reported which aim to improve the high-temperature safety and durability of such electrochemical devices as lithium rechargeable batteries, electric double layer capacitors (EDLC), and titanium oxide dye-sensitized solar cells (Papageorgiou et al., 1996). The electric double layer capacitor (EDLC) is an energy storage device based on the operating principle of the electric double layer that is formed at the interface between an activated carbon material and an electrolyte. Various solvents and salts (solutes, in other words) are available, offering specific advantages such as high capacitance and low-temperature performance. Generally, an organic electrolyte in which a solid quaternary ammonium salt, such as N,N,N,N-tetraethylammonium tetrafluoroborate (TEA-BF4), is dissolved in the high dielectric constant solvent propylene carbonate (PC) has been used for high-voltage EDLCs of 2 V or more. This device stores electricity physically, and lacks the chemical reactions found in rechargeable batteries during charging and discharging (Zheng et al., 1997). Therefore, compared to rechargeable batteries, the EDLC has a remarkably long cycle life and high power density. Such devices are now widely used in power electronics for peak power saving and backup memories, and in electronic power supplies for automated guided vehicle systems and construction equipment. One of their most promising applications is for use in transportation, especially in hybrid electric vehicles (HEVs). However, some issues in the development of EDLCs remain:
1. A lower energy density compared with lithium ion secondary batteries.
2. Flammable electrolytes raise safety concerns.
3. Low cycle durability in the high temperature region.
4. Poor charge-discharge properties at low temperatures.
In order to overcome these challenges, we are actively pursuing research and development in the use of ionic liquids for the electrolyte of EDLCs. The energy E stored in an EDLC is proportional to its capacity, C, and to the square of the voltage V applied between the positive and the negative electrodes: E = (1/2)CV². In order to increase the energy density of EDLCs, it is necessary to increase C and V. A higher ionic concentration in the electrolyte is required for the improvement of the C value. And, since the energy density is proportional to the square of the cell voltage, increasing the cell voltage is a very important development target for EDLCs.
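As a quick numerical illustration of the stored-energy relation E = (1/2)CV² given above, the following sketch uses arbitrary values (a hypothetical 1000 F cell) purely to show the quadratic dependence on cell voltage.

```python
def edlc_energy_wh(capacitance_f, voltage_v):
    """Stored energy E = 1/2 * C * V^2, converted from joules to watt-hours."""
    return 0.5 * capacitance_f * voltage_v ** 2 / 3600.0

# Hypothetical 1000 F cell: raising the cell voltage from 2.5 V to 3.0 V
# increases the stored energy by (3.0 / 2.5)^2 = 1.44x without changing C.
for v in (2.5, 3.0):
    print(f"{v:.1f} V -> {edlc_energy_wh(1000, v):.2f} Wh")
```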
Conventional electrolytes for EDLCs are solutions in which a certain type of solid ammonium salt is dissolved in an organic solvent. In this case, a high ionic concentration level is not achieved due to the restriction of the solubility of the ammonium salt; many kinds of solid ammonium salt have solubilities of only about 2 M. On the contrary, ionic liquids are composed of dissociated cations and anions at a high concentration, and it is possible to use them alone as an electrolyte for EDLCs; that is to say, the ionic liquid electrolyte is both the solute and the solvent. In this case, a high capacity can be expected because there are extremely high concentrations (3-5 M) of ion species that contribute to double layer formation in the ionic liquid. Furthermore, the non-flammability of the ionic liquid should be an attractive feature for the enhanced safety of EDLCs. An ionic liquid with a wide electrochemical stability window is needed for the achievement of high energy density EDLCs. Some cations of ionic liquids that have been the subjects of many previous studies are shown in Fig. 1. At first, many researchers focused on aromatic type ionic liquids, such as 1-ethyl-3-methylimidazolium tetrafluoroborate (EMI-BF4), which has a relatively low viscosity and high ionic conductivity (Koch et al., 1994, 1995; McEwen et al., 1997, 1999), for use in a variety of applications for electrochemical devices, including batteries and capacitors. However, since aromatic quaternary ammonium cations, such as imidazolium and pyridinium, have relatively low cathodic stability, electrochemical devices using these ionic liquids as electrolytes have not yet been shown to be practical (Ue et al., 2002). On the other hand, some papers have been published on the use of aliphatic quaternary ammonium-based ionic liquids, which would be expected to have a higher cathodic stability; such reports are scarce because small sized aliphatic quaternary ammonium salts have relatively high melting temperatures compared with the aromatic ones. MacFarlane and co-workers reported that aliphatic quaternary ammonium cations which had relatively short (C1-C4) alkyl chains could not easily form an ionic liquid near room temperature (Sun et al., 1997, 1998a, 1998b; MacFarlane et al., 1999, 2000). However, Angell and co-workers and Matsumoto et al. reported that some asymmetric and short-chain aliphatic quaternary ammonium cations with a methoxyethyl or methoxymethyl group on the nitrogen atom formed ionic liquids below room temperature (Cooper & Angell, 1986; Emanuel et al., 2003; Matsumoto et al., 2000, 2001, 2002). Recently, the major advances in research and development of ionic liquid electrolytes have mainly concerned quaternary ammonium-based ionic liquids where the anion has a relatively high anodic stability, such as BF4−, bis(trifluoromethanesulfonyl)imide (N(SO2CF3)2−; TFSI), and other fluorinated anions.
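The electrochemical stability windows compared in the next section are read off cyclic voltammograms using a limiting current density criterion (1 mA cm⁻² in this chapter). The sketch below only illustrates that kind of threshold reading on synthetic voltammogram data; it is not the authors' measurement or analysis code, and the data shape is invented.

```python
import numpy as np

def stability_window(potential_v, current_ma_cm2, threshold=1.0):
    """Return (E_red, E_oxd): the most negative / most positive potentials at which
    |current density| first exceeds the threshold (mA cm^-2) on either side of 0 V."""
    potential_v = np.asarray(potential_v)
    current_ma_cm2 = np.asarray(current_ma_cm2)
    exceeded = np.abs(current_ma_cm2) >= threshold
    e_red = potential_v[exceeded & (potential_v < 0)].max()   # onset of reduction
    e_oxd = potential_v[exceeded & (potential_v > 0)].min()   # onset of oxidation
    return e_red, e_oxd

# Synthetic voltammogram: negligible current inside the window, sharp rise beyond +-3 V.
E = np.linspace(-4, 4, 801)
I = np.where(np.abs(E) > 3.0, np.sign(E) * (np.abs(E) - 3.0) * 20, 0.01 * E)
e_red, e_oxd = stability_window(E, I)
print(f"window ~ {e_oxd - e_red:.1f} V ({e_red:.2f} V to {e_oxd:.2f} V)")
```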
Neat ionic liquid electrolytes for EDLC application 2.1 Aliphatic quaternary ammonium type ionic liquids having a methoxyethyl group on the nitrogen atom Generally, small sized aliphatic quaternary ammonium cations cannot easily form an ionic liquid; however, by attaching a methoxyethyl group to the nitrogen atom, many aliphatic quaternary ammonium salts can form ionic liquids with BF4− and TFSI anions. For instance, DEME-BF4 and DEME-TFSI are novel ionic liquids whose liquid state covers a wide temperature range (Maruo et al., 2002); a schematic illustration of the molecular structure of the DEME cation is shown in Fig. 2. Since the electron donating feature of the oxygen atom in the methoxyethyl group weakens the cation's positive charge, the electrostatic binding between the ammonium cation and the anion weakens, and an ionic liquid forms. The limiting reduction and oxidation potentials (E_red and E_oxd) of the ionic liquids on platinum were measured by cyclic voltammetry at room temperature, as shown in Fig. 3 (cyclic voltammograms of the DEME-based ionic liquids and EMI-BF4 at 25 °C; scan rate 1 mV s⁻¹; platinum working and counter electrodes; Ag/AgCl reference electrode). The E_red and E_oxd were defined as the potentials where the limiting current density reached 1 mA cm⁻². The potential window between the onset of E_red and E_oxd was 6.0 V for DEME-BF4 and 4.5 V for EMI-BF4, respectively. Since the E_red of the DEME-based ionic liquid was approximately 1 V lower than that of EMI-BF4, we realized that DEME had a higher cathodic stability than aromatic type ionic liquids, since it does not have a π-electron conjugated system. Compared with DEME-TFSI, DEME-BF4 is a little more stable in the limiting reduction potential. Since DEME-BF4 has the largest potential window so far reported, it should be suitable as an electrolyte for a high operating voltage EDLC. The electric double layer capacitor using DEME-BF4 as an electrolyte The EDLC is an energy storage device based on charge storage between an activated carbon and an electrolyte solution. It may be possible to improve the safety and durability at high temperatures by using an ionic liquid instead of an inflammable organic solvent. However, a typical ionic liquid, EMI-BF4, cannot be used for an EDLC operating at a high voltage because of decomposition on the anode. We tried to prepare a test cell for the EDLC which would be stable at high temperature using DEME-BF4 as an electrolyte. The cell configuration of a demonstration EDLC is shown in Fig. 4. Table 1 summarizes the capacitance C of the EDLC using various electrolytes at temperatures higher than room temperature. The capacity and the Coulombic efficiency between the charged and discharged capacities were obtained from the charge and discharge curves. As Table 1 shows, the EDLCs using ionic liquids showed higher discharge capacities than those using the standard electrolyte TEA-BF4 in PC solution at all the temperatures tested. Within a high ionic concentration electrolyte, the double layer might be specifically formed on the surface, while ion adsorption does not occur at a low ionic concentration, although the reason for this behavior is not clear. However, for the EDLC using EMI-BF4 we observed both the generation of gas from the decomposition of the ionic liquid and a remarkable drop in the capacity at 70 °C. In the case of TEA-BF4 in PC, a large drop in capacity was observed at 100 °C. On the other hand, the EDLC using DEME-BF4 exhibited little gas generation, did not show such a large capacity drop at 100 °C, and its capacity could be observed even at 150 °C. These results indicate that the cell using DEME-BF4 is more stable than the EDLC using the general organic electrolyte TEA-BF4/PC. At high temperatures, the cell using DEME-BF4 has the highest relative Coulombic efficiency of the tested electrolytes. Fig. 5 demonstrates that, even at 100 °C, the EDLC using DEME-BF4 showed a practically good level of durability over cycles of charging and discharging. After 500 charge and discharge cycles, the capacity loss was just 15% of the initial discharge capacity. It is possible that an EDLC using DEME-BF4 may be a practical energy storage device, which can stably charge and discharge at high temperature.
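Capacitance figures such as those summarized in Table 1 and discussed below are commonly extracted from constant-current (galvanostatic) discharge curves via C = I·Δt/ΔV. The following sketch shows that calculation with invented numbers; it is not the authors' evaluation procedure or data.

```python
def capacitance_from_discharge(current_a, t_start_s, t_end_s, v_start, v_end):
    """Estimate EDLC capacitance (F) from a constant-current discharge segment: C = I * dt / dV."""
    dt = t_end_s - t_start_s
    dv = v_start - v_end
    return current_a * dt / dv

# Hypothetical segment: a cell discharged at 1.0 A drops from 2.0 V to 1.0 V in 300 s.
c = capacitance_from_discharge(1.0, 0.0, 300.0, 2.0, 1.0)
print(f"C = {c:.0f} F")   # 300 F for this invented example
```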
After 500 charge and discharge cycles, the capacity loss was just 15% of the initial discharge capacity. An EDLC using DEME-BF4 may therefore be a practical energy storage device that can charge and discharge stably at high temperature. The relation between the capacity C and the discharge current I is shown in Fig. 6. At 25 °C, the discharge capacity of the EDLC using DEME-BF4 decreased significantly faster than that of the cell using TEA-BF4 as the discharge current increased. It is probable that the larger decrease in C at high discharge currents with DEME-BF4 resulted from the large internal resistance of the EDLC cell when using this high-viscosity electrolyte. In the case of an EDLC using the ordinary electrolyte, the discharge capacity decreased by only 15% even for a 100-fold larger current. The high viscosity of the ionic liquid decreases the mobility of the ionic species. However, in operation at 40 °C, shown in Fig. 6b, the capacity of the EDLC using DEME-BF4 is remarkably improved. In this case the capacity of the ionic liquid cell is higher than that of a cell using TEA-BF4/PC. A slight elevation of the temperature decreases the viscosity of the ionic liquid considerably, since the effect of a temperature change on the viscosity of ionic liquids is known to be large. We suggest that, at temperatures of 40 °C or above, an EDLC using DEME-BF4 has a practically usable performance comparable to that of a conventional EDLC using an organic electrolyte. Furthermore, the durability at high temperatures of such an ionic liquid cell is higher than that of a conventional EDLC. However, below room temperature, the capacity of an EDLC using this ionic liquid was inferior to that of conventional EDLCs using a solid ammonium salt / PC system, owing to the high viscosity of the ionic liquid, and the charge and discharge operation of our demonstration cell might be compromised around 0 °C. On the other hand, Japanese researchers (Ishikawa et al., 2006 and Matsumoto et al., 2006) reported that ILs containing bis(fluorosulfonyl)imide (FSI) had a quite low viscosity and high ionic conductivity compared with those based on the TFSI anion, and that various ILs containing the FSI anion had properties suitable for use as the electrolyte of a lithium ion battery. Also, Handa et al. (2008) reported the application of an EMI-FSI ionic liquid as an electrolyte for an EDLC, and found that the demonstration cell showed excellent capabilities comparable to one using a solid ammonium salt / PC electrolyte. Further, Senda et al. (2010) reported that fluorohydrogenate ionic liquids (FHILs) exhibit a low viscosity, high ionic conductivity and low melting point compared to other ionic liquids. Their EDLCs using some FHILs were operable even at −40 °C, exhibiting a capacitance higher than that of TEA-BF4/PC. The use of ionic liquids with these new fluorine-based anions as electrolytes of various electrochemical devices will probably expand in the future, because the structure of the anion has a large influence on the viscosity of the ionic liquid.

Use of ionic liquids for EDLC performance improvement at low temperatures

Because a quaternary ammonium type ionic liquid has a higher solubility in a carbonate solvent than the previously investigated solid quaternary ammonium salts, it is possible to make an electrolyte with a high ion concentration, giving the EDLC a high capacitance.
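The strong temperature sensitivity of ionic liquid viscosity invoked above to explain the improvement at 40 °C is often described by a Vogel-Fulcher-Tammann (VFT) expression rather than a simple Arrhenius law. The sketch below evaluates a VFT curve with made-up parameters, purely to illustrate how a modest temperature rise (for example 25 °C to 40 °C) can cut the viscosity substantially; the parameters are assumptions, not a fit to DEME-BF4 data.

```python
import math

# Hedged sketch of the Vogel-Fulcher-Tammann (VFT) viscosity model:
#   eta(T) = eta0 * exp(B / (T - T0))
# The parameters below are illustrative, not a fit to any electrolyte in this chapter.
ETA0 = 0.15   # mPa s, pre-exponential factor (assumed)
B    = 750.0  # K, pseudo-activation parameter (assumed)
T0   = 180.0  # K, Vogel temperature (assumed)

def vft_viscosity(temp_c: float) -> float:
    t_kelvin = temp_c + 273.15
    return ETA0 * math.exp(B / (t_kelvin - T0))

if __name__ == "__main__":
    for t in (0, 25, 40, 70):
        print(f"{t:3d} C -> {vft_viscosity(t):6.1f} mPa s")
    # Even the 15 K step from 25 C to 40 C lowers the predicted viscosity markedly,
    # consistent with the improved rate performance reported at 40 C.
```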
Some solid ammonium salts may precipitate through recombination of the dissociated ions in the low temperature region because of their low solubility. As a result, an EDLC containing a solid ammonium salt solute will suffer a large drop in capacitance at low temperatures. In many cases, an electrolyte containing an ionic liquid that has a high solubility in propylene carbonate (PC) exhibits a high ionic conductivity, even at low temperatures, compared with a traditional solid ammonium salt electrolyte. Also, DEME-BF4 can be homogeneously mixed with PC to produce a uniform electrolyte solution and does not cause precipitation of the salt, even at temperatures below −40 °C. Figure 7 shows the relation between the ionic liquid concentration in PC and the solution viscosity at room temperature. For many ionic liquids, a remarkable decrease in viscosity is obtained with just a small amount of added solvent. Dilution is therefore the simplest way to ease the problem of the high viscosity of the ionic liquid. Nisshinbo Industries Inc. in Japan manufactures successful large size EDLCs with extremely attractive charge and discharge rate performance even at −40 °C by using an electrolyte of DEME-BF4 diluted with PC (http://www.nisshinbo.co.jp/english/r_d/capacitor/index.html).

Ionic liquids containing the tetrafluoroborate anion have the best performance and stability for electric double layer capacitor applications

3.2.1 Ionic liquids having a methoxyalkyl group on the nitrogen atom as a solute

Generally, small aliphatic quaternary ammonium cations cannot easily form an ionic liquid; however, by attaching a methoxyalkyl group to the nitrogen atom, many aliphatic quaternary ammonium salts can form ionic liquids with BF4− and TFSI anions. In this section, we report on the performance and thermal stability of EDLCs using various ionic liquids and some solid ammonium salts with methoxyethyl and methoxymethyl groups on the nitrogen atom as the solute in the electrolyte. The evaluation was performed using a large size cell (265 F) with strict quality control at the industrial product manufacturing level. Of special interest is to determine which ionic liquid or ammonium salt with a methoxyalkyl group shows the most attractive performance at low temperature together with good thermal and electrochemical stability in a practical large size EDLC. We compare the direct current resistance of the EDLCs at a relatively large current at low temperature, as well as the capacitance deterioration and the internal resistance increase during continuous charging at high temperature. It has generally been thought that a high electrolyte viscosity is detrimental to the direct current resistance. However, we report that the direct current resistance of the EDLC depends on the size of the solute anion and is largely independent of the viscosity and the specific conductivity of the electrolyte. We prepared 14 kinds of ammonium salt, combining 6 kinds of cation and 3 kinds of anion species, as candidate electrolytes for EDLCs (Yuyama et al. 2006). Nine of these were ionic liquids at 25 °C, while the other ammonium salts were not liquid at room temperature. The melting temperatures are summarized in Table 2. We prepared large size EDLCs using the 14 kinds of salt in Table 2. Because the PF6 and TFSI salts of N,N-diethyl-N-methoxymethyl-N-methylammonium (DEMM) and N-methoxymethyl-N-methylpyrrolidinium (MMMP) were expected to show poor EDLC performance on the basis of other experiments, they were not used.
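Returning to the dilution behaviour shown in Fig. 7, a crude way to see why a small amount of solvent cuts the viscosity so strongly is a logarithmic (Arrhenius-type) mixing rule, in which the log of the mixture viscosity is a mole-fraction-weighted average of the pure-component logs. The sketch below uses assumed viscosities for a neat ionic liquid and for PC; it is an illustrative model choice, not a fit to the data in Fig. 7.

```python
import math

# Hedged sketch: Arrhenius-type (log-linear) mixing rule for viscosity,
#   ln(eta_mix) = x_IL * ln(eta_IL) + x_PC * ln(eta_PC)
# Pure-component viscosities are assumed values, not data from Fig. 7.
ETA_IL = 80.0  # mPa s, assumed neat ionic liquid
ETA_PC = 2.5   # mPa s, assumed propylene carbonate

def mixture_viscosity(x_ionic_liquid: float) -> float:
    x_pc = 1.0 - x_ionic_liquid
    return math.exp(x_ionic_liquid * math.log(ETA_IL) + x_pc * math.log(ETA_PC))

if __name__ == "__main__":
    for x_il in (1.0, 0.9, 0.8, 0.6, 0.4):
        print(f"x_IL = {x_il:.1f} -> {mixture_viscosity(x_il):5.1f} mPa s")
    # Replacing only 20 mol% of the ionic liquid with PC already roughly halves
    # the predicted viscosity in this simple model.
```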
Comparison of EDLC capacitance and internal resistance

We evaluated the cell performance and durability of EDLCs using all 14 types of ammonium salts, including the 9 ionic liquids. The capacitance of the EDLCs using the various ammonium salts with a methoxyalkyl group on the nitrogen atom, and some imidazolium salts, in 1 M propylene carbonate solution was determined by charge-discharge cycling from 0 to 3.0 V at 25 °C. Fig. 8 shows the variation of the cell capacitance in the 10th cycle as a function of the total molecular weight of the salts (a), and as a function of the molecular weight of the separate cations (b) and anions (c). In general, it has been thought that an ion with a high molecular weight gives a lower cell capacitance than a small ion. However, we have previously observed that the EDLC using 1 M DEME-BF4, which has a molecular weight of 233.05, has an approximately 10% higher capacitance than a cell using a 1 M solution of the conventional ammonium salt tetraethylammonium tetrafluoroborate (TEA-BF4), which has a molecular weight of 217.6 (Kim et al. 2005). However, in the results shown in Fig. 8(a), the cell capacitance with a 1 M EMI-TFSI propylene carbonate solution was exceptionally low, and no clear correlation can be found between the molecular weights of the salts or cations and the cell capacitance based on the data in Fig. 8(a and b). We did not observe a large difference in cell capacitance with the cation used, even though the molecular weights of the cations varied by approximately 1.4 times. However, we realized from the data shown in Fig. 8(c) that the capacitance of the cell depends not on the total molecular weight of the salt or that of the cation, but on that of the anion. EDLCs using the BF4 anion had higher capacitances than those using the other anion species. Because the deviation in the measured capacitance of each tested EDLC was only about 0.4%, the differences of 2 F or more in capacitance found between different cells are significant. The ammonium salts with a methoxymethyl group show somewhat larger capacitance than those with a methoxyethyl group. In addition, the state of the ammonium salt itself at room temperature, namely whether it is an ionic liquid or not, is clearly irrelevant to the EDLC capacitance when it is used in PC solution. We also find that the kind of anion species used influences the internal resistance (ESR) of an EDLC. Fig. 9 shows the ESR of EDLCs using various electrolytes at 1 kHz and 25 °C. The EDLCs using the BF4 anion in PC have a practicable ESR value of 2.5 mΩ. On the other hand, the ESR associated with the use of TFSI anions is twice that of BF4 or more. The EDLC using the PF6 anion showed a poor ESR value, below a practical level. The ESR value of the EDLCs increased significantly with increasing molecular weight of the anion. As a result, we suggest that the use of solutes other than those containing BF4 as electrolytes is impractical for high voltage EDLCs.

Comparison of DCIR for EDLCs at low temperature

The major advantages of an EDLC over a battery are long cycle life and high power density. Therefore, because a high power density is achieved by a low DCIR, an EDLC with a lower DCIR is desirable. It is obvious that cells using electrolytes containing BF4 have a significantly lower DCIR at room temperature than cells using other anion species.
Cells using solutes with a large molecular weight anion have a high DCIR and ESR. The DCIR at the very low temperature of −30 °C increased by six to seven times compared with the room temperature value for BF4 solutes, by ten to twelve times for PF6 solutes, and by a factor of 30 or more for TFSI solutes. Fig. 10 shows the DCIR of EDLCs using various 1 M solutions as electrolytes at −30 °C. In the case of the EDLCs using EMI-TFSI and DEME-TFSI, the cells could not discharge because the cell resistance was too large at low temperatures, and so they are not plotted in the figure. It is generally considered that, since ionic liquids have a high solubility in PC, an EDLC using an ionic liquid as the solute in the electrolyte should have a good performance compared with a cell using solid ammonium salts. However, based on our results, we suggest that to obtain an EDLC with a high power density at low temperatures it is not enough for the solute simply to be an ionic liquid; in particular, the ionic liquid should contain the BF4 anion. On the other hand, MMTM-BF4 in PC had a low DCIR at both temperatures, even though it is not an ionic liquid. Possibly, the very compact nature of the cation in this case was responsible for its good performance. It is generally thought that the ionic conductivity of the electrolyte solution influences the DCIR of an EDLC, namely that a highly conductive electrolyte gives a low DCIR value. The ionic conductivity (σ) for various electrolytes below 30 °C is shown in Fig. 11. Even though the nature of the anion and cation varied widely, the ionic conductivities of the various electrolytes were quite similar at a given temperature over the range from −30 °C to room temperature. There is essentially no difference among the six electrolytes' temperature dependence. The difference in ionic conductivity between BF4 and TFSI at 25 °C was only 1.1 and 1.5 mS cm−1 in the MEMP and DEME series, respectively. It is not reasonable to suppose that such a large difference in DCIR was caused by this small difference in ionic conductivity. It was surprising that at −30 °C the DCIR difference between the BF4 and TFSI anions combined with DEME or MEMP was ten times or more, yet the difference in ionic conductivity between BF4 and TFSI was only 0.3 mS cm−1 at the same temperature. The dynamic viscosities (η) of 1 M solutions of DEME-BF4, DEME-TFSI, MEMP-BF4 and MEMP-TFSI in PC at various temperatures are displayed in Fig. 12. We focus attention first on the viscosities of DEME-BF4 and its TFSI counterpart at −30 °C. If the viscosity of the electrolyte had a large influence on DCIR, because a highly viscous electrolyte decreases the mobility of the ionic species, then an electrolyte using DEME-BF4 should have a larger DCIR value than one using DEME-TFSI. However, the EDLC using DEME-BF4 in PC had a remarkably small DCIR value compared with the DEME-TFSI cell. In fact, the EDLC using a DEME-TFSI electrolyte could not discharge at all as a result of having too large a DCIR at −30 °C. In the case of MEMP, the large difference in DCIR between cells using BF4 and TFSI anions likewise seemed to outweigh the difference in viscosity. In a dilute PC system, the dynamic viscosity and the ionic conductivity of the electrolyte did not strongly influence the DCIR of EDLCs at the same molar concentration. This suggests that the kind of anion species has the most significant effect on the DCIR at low temperatures.
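DCIR is usually extracted from the voltage step observed when a discharge current is applied or stepped between two levels: DCIR = ΔV/ΔI. The sketch below illustrates that calculation with placeholder voltages and currents; the chapter's actual measurement currents and −30 °C data are not reproduced.

```python
# Hedged sketch: DC internal resistance from the voltage change caused by a
# step in discharge current, DCIR = dV / dI. All numbers are placeholders.

def dcir_ohm(v_before: float, v_after: float, i_before: float, i_after: float) -> float:
    """DC internal resistance from a current step (ohms)."""
    return (v_before - v_after) / (i_after - i_before)

if __name__ == "__main__":
    # Assumed: cell rests at 2.50 V, then a 50 A discharge pulls it to 2.35 V.
    room_temp = dcir_ohm(v_before=2.50, v_after=2.35, i_before=0.0, i_after=50.0)
    # Assumed: at -30 C the same 50 A pulse drops the voltage to 1.60 V.
    cold = dcir_ohm(v_before=2.50, v_after=1.60, i_before=0.0, i_after=50.0)
    print(f"DCIR at  25 C: {room_temp*1e3:.1f} mOhm")
    print(f"DCIR at -30 C: {cold*1e3:.1f} mOhm  (x{cold/room_temp:.1f} increase)")
```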
It seems that the molecular weight of the anion, or the molecular size of the solvated anion, most strongly influences the DCIR value. In designing cells with such ions, it will be necessary to pay attention to the specific ion species used, especially the anion, because the ease of ion adsorption and desorption in the confined space of the porous activated carbon may determine the cell's ultimate DCIR. The compounds MEMP-BF4, MMMP-BF4 and DEMM-BF4 in PC are more attractive candidates than DEME-BF4 for use as an EDLC electrolyte from the viewpoint of capacitance and DCIR.

Practical stability of EDLCs using various electrolytes

In contrast to batteries, a cycling test is less important for an EDLC, because deterioration mostly occurs at the maximum operating voltage. Therefore, as a more useful life test, we continuously operated the cells at 3.0 V and 70 °C. Presumably, a good response to this test will also indicate good durability at room temperature. Fig. 13 demonstrates that the EDLCs using MEMP-BF4 and DEME-BF4 showed a good practical level of durability after 1000 h of use. After 1000 h, the capacity loss was just 15 and 20% for the EDLCs using MEMP-BF4 and DEME-BF4, respectively. On the other hand, MMMP-BF4 and DEMM-BF4 had good capacitance and DCIR values at low temperature, but their durability at high temperature was inferior to that of the DEME-BF4 and MEMP-BF4 cells. Our conclusion is that ammonium salts with a methoxyethyl group have higher stability than those with a methoxymethyl group. Therefore, we recommend MEMP-BF4 in PC as the preferred electrolyte for an EDLC in terms of capacitance, low temperature performance and thermal durability in practical use. The detailed ESR and the maintenance ratio of the capacitance after 500 and 1000 h of operation are summarized in the literature (Yuyama et al. 2006). We developed new kinds of ammonium salt with a methoxyalkyl group on the nitrogen atom, including several kinds of ionic liquids, as electrolytes for an EDLC. A cell using an electrolyte containing the BF4 anion had a higher capacitance at 25 °C and 3 V than those containing PF6 and TFSI anions. The capacitance of an EDLC at room temperature depends on the nature of the anion, rather than on the cation species or on whether the solute is itself an ionic liquid or a solid at room temperature. The resistance values that are most relevant to the power density performance of an EDLC also differed greatly with the ionic species. At room temperature (25 °C) and low temperature (−30 °C), both resistance parameters, ESR and DCIR, followed the same trend: the cell resistance increased in the order BF4, PF6, TFSI. This order corresponds to the ranking of the molecular weights of the anions. Even among the ionic liquids, those containing the PF6 and TFSI anions had cell resistances so high as to make the practical performance of the cell unusable. Of the ionic liquids tested, MEMP-BF4 and DEME-BF4, compounds that possess an aliphatic ammonium cation bearing a methoxyethyl group, performed well under continuous charging at 70 °C. The aromatic ionic liquids of the EMI series were inferior to the aliphatic ones in terms of practical long-term stability. Our tests show that MEMP-BF4 is the preferred ionic liquid for use as an electrolyte solute in an EDLC.

A thin layer including a carbon material improves the rate capability of an electric double layer capacitor

The discussion above showed that ionic liquids have very high potential as practicable EDLC electrolytes.
However, dilution of the ionic liquid with an organic solvent is an indispensable step if such EDLCs are to have a high power performance equal to or better than cells with a conventional electrolyte such as the solid ammonium salt/PC solution system. While the addition of the organic solvent brings about a spectacular high power performance of the EDLC at low temperatures, it also has negative implications for the safety of the cells. We have therefore developed a new method to improve the rate capability of an electric double layer capacitor (EDLC) using a thin polymer layer with a high concentration of carbon material on the current collector (CLC) (Sato et al., 2011). A novel thermocuring coating composed of a glycol-chitosan, pyromellitic acid and a conductive carbon powder can form a stable CLC on a metal foil current collector simply by spreading and curing at 160 °C for a couple of minutes. We compared the performance of demonstration EDLC cells using three kinds of current collector: a conventional aluminum oxide foil for EDLCs, a plain aluminum foil, and an aluminum foil with a CLC. The cell with the CLC had a much higher rate capability than the cells without a CLC. Only the CLC cell was able to discharge at a 500C rate. This cell showed only a slight deterioration in capacity in a high temperature, continuous charging life test, and the CLC suppresses the internal resistance increase of EDLCs. The use of a CLC film current collector is one of the most effective and simple methods for improving EDLC rate performance. In particular, a current collector consisting of aluminum foil coupled with a CLC promises to be a low cost alternative to the aluminum oxide foil commonly used in EDLCs. These effects of a CLC have also been confirmed in an EDLC with an undiluted ionic liquid as the electrolyte. In this connection, we note the use of a combination of the undiluted aliphatic cyclic ammonium type ionic liquid MEMP-BF4 (Fig. 14), which has a relatively low viscosity, with an aluminum collector coupled with a CLC.

Thermocuring coating on the current collector of an EDLC electrode

To improve the rate performance, we prepared cells containing a carbon layer on the current collector (CLC). This consists of a thin layer with a high concentration of conductive carbon material, situated between the activated carbon electrode layer and the current collector. We found that a mixture of a hydroxyalkylated chitosan (glycol-chitosan) derivative and 1,2,4,5-benzene-tetracarboxylic acid (pyromellitic acid), acting as a thermally activated binder that adheres strongly to metal foil, was very effective in improving the rate capability of EDLC cells. Chitosan is a natural and low cost biopolymer prepared by the deacetylation of chitin, the most abundant polymer after cellulose, and is mainly obtained from crab shells (Fig. 15). Due to its unique physicochemical properties, such as non-toxicity, chemical and thermal stability, hydrophilicity, remarkable affinity towards certain substances and film formation with relatively high physical strength, it has been extensively studied and is used in many fields (Rinaudo, 2006; Mourya, 2008). We found that we could form a stable coating layer on the metal substrate by spreading and heating an ink composed of a glycol-chitosan and a polycarboxylic acid compound.
Because it is well known that the secondary amino group of chitosan can form an imide linkage by reacting with two carboxylic acid groups, we propose that such cross-linkages between chitosan molecules can be formed by the use of a tetracarboxylic compound. It is probable that a stable three-dimensional polymer network is formed by the synergistic effect of combining chitosan, with its relatively rigid backbone, with a chemically stable imide cross-linkage (Fig. 15). In addition, the amino group of chitosan has a strong affinity for carboxylic and hydroxyl groups on the surfaces of inorganic materials. Because the conductive carbon materials used here, such as acetylene black and Ketjen black, have many carboxylic acid and hydroxyl groups on the powder surface, it is likely that the chitosan binds strongly to them. Chitosan dissolves easily in aqueous acids, though it does not dissolve in the NMP conventionally used as a solvent in electrode making. However, chitosan can be converted into an amphipathic, partially hydroxyalkylated derivative by reacting the hydroxyl and amino groups in the glucosamine unit with epoxide compounds such as ethylene oxide, propylene oxide and butylene oxide. We developed a novel coating for use as a conductive layer on a metal current collector by reacting glycol-chitosan and pyromellitic acid.

CLC effect on EDLC cells

In this study, we assumed that the cell impedance could be decreased by decreasing the contact resistance between the activated carbon electrode layer and the current collector. The method we employed simply places a stable, thin carbon layer composed of acetylene black and glycol-chitosan (CLC) between the activated carbon electrode and the current collector. The rate performance at 25 °C of demonstration EDLCs with, and without, a CLC on the current collector and with various electrolytes is summarized in Table 3. The EDLC cell with an aluminum oxide current collector showed a slightly higher capacitance at low discharge currents. The pores present on the aluminum oxide foil surface might contribute to this small capacity increase. The conventional aluminum foil cell has the lowest capacitance of the examined cells, resulting from the large internal resistance of the cell, even for low rate discharging such as 1C. On the other hand, the cell with a thin carbon layer on both the positive and negative conventional aluminum foil collectors (CLC) showed excellent discharge rate characteristics for an EDLC. This cell has about the same capacitance as a common EDLC cell at 20C discharging, and at 50C the discharge capacity of the CLC cell surpassed that of the common cell. The capacitance of the CLC cell at 300C discharging was five times larger than that of the common cell, and thirty times larger for a 500C discharge. At 500C discharging, corresponding to approximately 94 mA cm−2 of current density with respect to the electrode surface area, the EDLC with an aluminum oxide collector could not discharge. However, the CLC cell maintained 40% of its 1C discharge capacitance. A remarkable improvement in the rate characteristics of an EDLC with a CLC was also observed when using a neat ionic liquid as the electrolyte. Such a cell showed almost the same output capacity as an aluminum oxide collector cell with a diluted PC-based electrolyte, although the room temperature viscosity of the MEMP-BF4 IL used is 72 mPa s, higher by a factor of 20 or more compared with that of the conventional PC-diluted electrolyte.
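For readers less used to C-rate notation: an nC discharge empties the nominal capacity in 1/n hours, so the current follows directly from the rated capacity, and the current density from the electrode area. The sketch below performs this conversion with assumed cell values chosen only to be of plausible magnitude; it is meant to make figures such as "500C ≈ 94 mA cm−2" easy to sanity-check, not to reproduce the exact cell geometry used here.

```python
# Hedged sketch: converting a C-rate into current and electrode current density.
# The cell capacity and electrode area below are assumed values, not this chapter's cell.

def c_rate_to_current(capacity_ah: float, c_rate: float) -> float:
    """An nC discharge drains the nominal capacity in 1/n hours: I = n * Q."""
    return c_rate * capacity_ah

if __name__ == "__main__":
    capacity_ah = 0.015     # Ah, assumed small demonstration cell
    area_cm2 = 80.0         # cm^2, assumed total electrode area
    for rate in (1, 20, 50, 300, 500):
        i = c_rate_to_current(capacity_ah, rate)
        print(f"{rate:4d}C -> {i:6.2f} A  ({i*1000/area_cm2:6.1f} mA/cm^2), "
              f"full discharge in {3600/rate:.1f} s")
```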
Fig. 16 shows the discharge profiles of the demonstration EDLCs at various discharge rates. In discharges at relatively large currents, such as 300C and 500C, it is clear that the CLC cell has better rate characteristics than the conventional EDLC. These results indicate that the installation of a CLC had an extremely positive effect on the rate performance of the cell. Nyquist plots for a demonstration cell in the frequency range from 10 mHz to 20 kHz are shown in Fig. 17. They consist of a semicircle at high frequency followed by an inclined line and a vertical line in the low frequency region. The intercept with the real axis at high frequency gives an estimate of the solution resistance (Rs). The diameter of the semicircle, namely the difference between the high frequency intercept (Rs) and the low frequency intercept, indicates the interfacial resistance (Ri), which is attributed to the impedance at the interface between the current collector and the carbon particles, as well as that between the carbon particles themselves. The Rs values differ because of differences in the ionic conductivity of the electrolytes. The most important observation is that the diameters of the semicircles were different for each cell. The cell with a CLC has a very small semicircle compared with the AlOx cells. The cell with the largest semicircle was that with a conventional aluminum current collector (not displayed in the figure). In the case of the cell with the neat ionic liquid, the Ri value is small, similar to that of a CLC cell with the diluted electrolyte. The activated carbon layer was prepared by the same methods and materials in each case, so the diameter of the semicircle reflects the difference in the interfacial resistance between the activated carbon layer and the current collector. The installation of a CLC has the effect of decreasing the interfacial resistance between the electrode layer containing the active material and the collector. Of course, the viscosity of the ionic liquid is not decreased by employing a CLC; however, it is possible to improve the performance of a cell using a neat ionic liquid to a level similar to that of a conventional EDLC available on the market. It should be possible to develop non-flammable, high durability EDLCs by combining CLC technology with low viscosity ionic liquids containing FSI or FH anions. In contrast to batteries, a cycling test is less important for an EDLC, because deterioration mostly occurs at the maximum operating voltage. Therefore, as a more useful life test, we continuously operated the cells at 2.5 V and 60 °C. Presumably, a good response to this test will also indicate good durability at room temperature. The capacity retention of the demonstration cells after 500 h of operation is summarized in Fig. 18. All cells showed a good practical level of durability, the capacity loss being just 8%, regardless of the presence of a CLC, in the diluted electrolyte. In the case of the neat IL cell, the capacity loss from 100 to 500 hours was only 3%, though some deterioration was seen in the first 100 hours of operation. However, the interfacial impedance Ri of the cells with the neat ionic liquid and aluminum oxide collectors increased slightly after continuous charging at 60 °C, as shown in Fig. 17(b). We opened the demonstration CLC cell after the continuous charging examination and investigated the bonding between the electrode and the current collector by rubbing the electrode surface with paper.
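The Rs and Ri values discussed around Fig. 17 above can be read off a Nyquist plot programmatically: Rs is the high-frequency intercept with the real axis, and Ri is the width of the semicircle, that is, the difference between that intercept and the real-axis value where the semicircle ends. The sketch below applies this reading to a small synthetic spectrum generated from an assumed circuit model; it illustrates the extraction step only and is not the chapter's measured data.

```python
# Hedged sketch: estimating solution resistance (Rs) and interfacial resistance (Ri)
# from a Nyquist spectrum. The spectrum below is synthetic, generated from an
# assumed Rs + (Ri parallel C) model, not measured data.
import numpy as np

RS_TRUE, RI_TRUE, C_INT = 5e-3, 12e-3, 0.2   # ohm, ohm, farad (assumed)
freqs = np.logspace(4.3, -2, 200)             # roughly 20 kHz down to 10 mHz
omega = 2 * np.pi * freqs
z = RS_TRUE + RI_TRUE / (1 + 1j * omega * RI_TRUE * C_INT)  # semicircle only

# Rs: real part at the highest frequency (left intercept of the semicircle).
rs_est = z.real[0]
# Ri: semicircle diameter = low-frequency real-axis limit minus Rs.
ri_est = z.real[-1] - rs_est

print(f"Rs ~ {rs_est*1e3:.1f} mOhm, Ri ~ {ri_est*1e3:.1f} mOhm")
# A CLC mainly shrinks Ri (the semicircle), while changing the electrolyte
# mainly shifts Rs (the high-frequency intercept).
```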
The activated carbon layers of the electrodes with both the plain aluminum collector and the aluminum oxide collector were peeled off easily by rubbing once or twice. However, the layer on the CLC did not separate even after rubbing more than 20 times, in both the diluted and the neat ionic liquid cells. Both the interfacial bonding between the CLC and the aluminum foil, and that between the CLC and the carbon electrode, remained completely intact. The scanning electron micrograph observations are summarized elsewhere (Sato et al. 2011). We can conclude that the CLC acted as a long-lasting internal adhesive layer that improves durability.

Conclusion

1. The DEME-based ionic liquids have relatively high conductivities and remarkably wider potential windows compared with the other aromatic type ionic liquids that have been reported.
2. An EDLC using DEME-BF4 as the electrolyte exhibits excellent stability and cycle durability even at temperatures over 100 °C. Although below room temperature the capacity of an EDLC using this ionic liquid was inferior to that of a conventional EDLC using TEA-BF4/PC, owing to the high viscosity of the ionic liquid, the ionic liquid EDLC showed a higher capacity for the discharge of a large current at temperatures of 40 °C or above.
3. The major problem for the practical use of ionic liquids has been their high viscosity. Ionic liquids of relatively low viscosity containing new types of anion, such as bis(fluorosulfonyl)imide and fluorohydrogenate, have been developed recently, leading to the hope that the viscosity problem will soon be overcome.
4. The viscosity of many kinds of ionic liquid decreases markedly on adding an organic solvent. In this way, it is possible to make an electrolyte with a high ion concentration, giving the EDLC a high capacitance. Such diluted ionic liquid electrolytes do not precipitate or crystallize even at low temperatures. EDLCs made with a PC-diluted ionic liquid (DEME-BF4) have extremely attractive charge and discharge rate performance even at −40 °C.
5. We evaluated various kinds of ammonium salt with a methoxyalkyl group on the nitrogen atom, including several kinds of ionic liquids, as solutes in PC for an EDLC. A cell using an electrolyte containing the BF4 anion had a higher capacitance at 25 °C and 3 V than those containing PF6 and TFSI anions. The capacitance of an EDLC at room temperature depends on the nature of the anion, rather than on the cation species or on whether the solute is itself an ionic liquid or a solid at room temperature.
6. The resistance value that is most relevant to the power density performance of an EDLC, the cell resistance, increased in the order BF4, PF6, TFSI. Of the ionic liquids tested, MEMP-BF4 and DEME-BF4, both of which have a methoxyethyl group on the nitrogen atom, showed the highest durability in a continuous charging life test. We conclude that ionic liquids containing the tetrafluoroborate anion have the best performance and stability for electric double layer capacitor applications.
7. We have developed a new method that improves the rate capability of EDLCs. It comprises a very simple process of creating a thin polymer film containing a high concentration of carbon material (CLC) on the current collector, which makes high power discharging at 500C possible. Impedance analysis revealed that a CLC only 2.5 μm thick was effective in decreasing the interfacial impedance between the current collector and the electrode layer.
The CLC technique was extremely effective.
2018-06-01T22:36:55.216Z
2011-09-22T00:00:00.000
{ "year": 2011, "sha1": "f53849900baffc98a856814cf512bb23171a835c", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.5772/23412", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "c50767310508fdaebfc6b3e23cc4b0cfe0932425", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
10680611
pes2o/s2orc
v3-fos-license
Prognosis of sepsis induced by cecal ligation and puncture in mice improved by anti-Clonorchis Sinensis cyclopholin a antibodies Background Cyclophilin A (CyPA), a ubiquitously distributed intracellular protein, is thought to be one of the important inflammatory factors and plays a significant role in the development process of sepsis. In the form of cytokine, CyPA deteriorates sepsis by promoting intercellular communication, apoptosis of endothelial cells and chemotactic effect on inflammatory cells. In our previous study, cyclophilin A of Clonorchis sinensis (CsCyPA), a type of excretory-secretory antigen, could induce the patients infected with Clonorchis sinensis to produce specific anti-CsCyPA antibodies. In this study, we investigated whether anti-CsCyPA antibodies could cross-react with CyPA and then play a protective role against sepsis, just like other anti-cytokine antagonists. Methods The mice model with sepsis was established with cecal ligation and puncture (CLP). Fifty mg/kg purified anti-CsCyPA antibodies were injected via the caudal vein 6 h after the CLP operation, and persistent observation was performed for 72 h. Blood samples and tissues were collected at 6 h, 12 h, 24 h, 48 h and 72 h after CLP. Cytokines in serum were measured by ELISA. Lung and mesentery tissues were stained with hematoxylin-eosin. Endothelial cells (ECs) isolated from murine aorta were co-cultured with CyPA of mice (MuCyPA) and anti-CsCyPAs for 24 h, then, viability was measured by Cell Counting Kit-8. Results Anti-CsCyPA antibodies could combine with MuCyPA and inhibite its peptidyl prolyl isomerase (PPIase) activity. In the antibodies treatment group, blood coagulation indicators including PT, aPTT, D-dimer and platelet count were obviously more ameliorative, the proinflammary factors like IL-6, TNF-α, IL-1β were significantly lower at 12 h and 24 h after surgery and the viability of ECs was significantly improved compared to those in the control group. Furthermore, the survival rate was elevated, ranging from 10.0 % to 45.0 % compared to the control group. Conclusions These antibodies may have a favorable effect on sepsis via inhibition of enzymic activity or protection of endothelial cells. Background Sepsis is defined as a systemic inflammatory response syndrome (SIRS) coupled with a documented infection that may result in septic shock and multiple organ failure (MOF) [1]. The sepsis mortality in humans has been high at more than 50 % [2]. SIRS serves as a hallmark sign of sepsis, and is characterized by a hyperinflammatory response of the host to invading pathogens that are primarily mediated by cytokines [3]. However, treatment of patients suffering from sepsis with traditional proinflammatory cytokine antagonist, such as anti-TNF-α, interleukin-1 receptor antagonist, bradykinin antagonist and others, did not prove effective in controlling multiorgan damage and mortality [4]. Cyclophilins (CyPs) are a family of ubiquitous proteins evolutionarily well conserved and present in all prokaryotes and eukaryotes [5]. Equipped with PPIase activity, CyPs catalyze the isomerization of peptide bonds from the trans form to cis form at proline residues and facilitate protein folding [6]. Cyclophilin A (CyPA), a universally expressed protein belonging to the CyPs family, can be secreted from cells in response to inflammatory stimuli such as hypoxia, infection, sepsis and oxidative stress [7][8][9][10]. 
In the form of cytokine, CyPA deteriorates sepsis by promoting intercellular communication, apoptosis of endothelial cells and chemotactic effect on inflammatory cells [11]. Clonorchis sinensis (C. sinensis), causing clonorchiasis, has been one of the most important food-borne parasites in China [12]. Most people infected with C.sinensis present no apparent clinical manifestations. Only 5 %-10 % of infected people have non-specific symptoms such as abdominal pain in the right upper quadrant, flatulence, and fatigue [13,14]. A C.sinensis adult full-length complementary DNA (cDNA) plasmid library was established in our laboratory in 2004 [15]. CsCyPA was found to be an excretory protein and able to induce high anti-CsCyPA antibodies (anti-CsCyPAs) titers in patients infected with C.sinensis in our previous study [16]. In 1989, David P Strachan proposed a hygiene hypothesis, according to which the decreased incidence of infections with parasites in developed countries may be the underlying cause for some diseases [17,18]. Nowadays, parasites and their products constitute the targets of studies as a potential alternative approach for parasitic, viral, bacterial, and autoimmune diseases [19][20][21]. Therefore, the aim of this study was to determine whether anti-CsCyPAs could, like other anti-cytokine antagonists, play a protective role against sepsis. Six SD rats were divided randomly into two groups, one group was injected subcutaneously with 100 μg rCsCyPA emulsified with equal volume of complete Freund's adjuvant (CFA, Sigma), followed by three boosts with 50 μg antigen emulsified with incomplete Freund's adjuvant (IFA, Sigma) at 2-week intervals. The other group was immunized with PBS as control. Two weeks after the last vaccination, serum samples were collected from the mice and the rCsCyPA-specific IgG detected by ELISA. Antisera were precipitated three times with ammonium sulphate (33 % saturation), the pellet dissolved in TBS buffer (20 mM Tris-HCl, pH 7.5, 0.15 M NaCl) and dialyzed against the same buffer for 18 h. Antibodies were purified by affinity chromatography on a G-Sepharose column. Antibodies were eluted from the column with 0.1 M glycine-HCl, pH8.8, and then, dialyzed against TBS solution for 18 h. The concentration of anti-CsCyPAs was measured by using a BCA Protein Assay Kit (Thermo, USA) following the manufacturer's instructions. PPIase activity and inhibition Colorimetric detection of PPIase activity was performed by the chymotrypsin-coupledcleavage assay according to Fischer et al. [22]. Briefly, 10 ug of rMuCyPA per reaction system was co-cultured with 1 ug or 10 ug anti-CsCyPAs for 1 h at 37°C before experiment. The enzymatic activity was performed in 50 mM HEPES (N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid) buffer, pH 8.0, at 10°C. The reaction was started by the addition of the synthetic peptide Suc-Ala-Phe-Pro-Phe-p-nitroanilide. Pnitroaniline chromophore release from the all-trans peptide was monitored at 390 nm using the Infinite F500 (TECAN, Swit). CLP model and anti-CsCyPAs treatment Ethical approval All animal experiments in this paper were performed in strict accordance with the Guide for the Care and Use of Laboratory Animals of Sun Yat-sen University (Permit Numbers: SCXK (Guangdong) 2009-0011). 220 KM male mice (5-6 weeks of age and weighing 20-22 g) were purchased from the Experimental Animal Center of Sun Yat-sen University (Guangzhou, China) housed in a temperature controlled, light-cycle room in animal facilities, with unlimited food and water. 
Sepsis was induced in the mice model by CLP [23]. Twenty mice were selected randomly from the total animals as a sham-surgery group. The other two hundred were randomly divided to two groups: <1 > The sham group skipped the steps of cecal perforation, instead the peritoneum was immediately closed after exposure of the cecum. Normal saline (150 μl) was injected via the caudal vein 6 h after surgery. <2 > CLP control group were injected with 150 μl of normal saline via caudal vein 6 h after CLP surgery. < 3 > CLP treatment group were injected with 150 μl of normal saline including 50 mg/kg of purified antibodies. Mice in each group were divided equally into five subgroups, which were sacrificed at 6, 12, 24, 48 and 72 h respectively after surgery. There were four mice in each sham surgery subgroup, twenty in each CLP treatment subgroup and twenty in each CLP control group. In each subgroup, survival mice were anesthetized with diethyl ether, then blood collected into anticoagulant and coagulant tubes through the eyeball and the mesentery and lungs separated. The detection of survival rate was presented in the 72 h subgroup. Measured cytokine and CyPA in serum Blood samples in each subgroup collected in coagulant tubes (B&D, USA) were clotted for two hours at room temperature before centrifugation for 15 min at 1000xg. Serum was removed and stored samples at −80°C. MuCyPA were measured using a mouse cyclophilin A ELISA Kit (CUSABIO, USA). TNF-α, IL-6, IL-1β, IL-4, IL-10 and IFN-γ were determined by the corresponding ELISA Kit (R&D, USA) according to the manufacturer's instructions. All samples were measured at OD 450nm in a Sunrise Absorbance reader (TECAN, Swit). Pathological observation of lung and mesentery tissues The mesentery and lung tissues were fixed in 10 % formaldehyde for 24 h and then embedded in paraffin. Subsequently, the paraffin-embedded samples were cut into 5 μm thick sections and stained with hematoxylineosin. All samples were photographed and examined immediately by Leica DM Microscopes (DM 2500B, Germany, ×400). Blood coagulation indicator Blood samples in each subgroup collected by anticoagulant tubes (Improve Medical, Guangzhou, China) were texted within four hours. Prothrombin time (PT), activated partial thromboplastin time (aPTT) and fibrinogen were detected in an automated coagulometer (Sysmex CS2000i; Fuji, Japan). Platelet count was performed using an automatic blood cell counter (Sysmex XS1000i; Fuji, Japan). D-dimer was detected by D2D ELISA Kit (R&D Systems, USA). Vascular endothelial cells isolation, culture and treatment The isolation of ECs from murine aorta was described by Mika Kobayashi in 2005 [24]. Briefly, the aorta of KM mice was dissected out from the aortic arch to the abdominal aorta, and the connective tissues removed under a stereoscopic microscope. ECs were isolated from aorta by collagenase type II solution (2 mg/ml, Different concentrations of rMuCyPA and anti-CsCyPAs were co-cultured with ECs for 72 h before being measured by Cells Counting Kit-8 (CCK-8) (Beyotime, Jiangsu, China) according to the manufacturer's instructions. Each well was incubated with 10 μl of CCK-8 solution at 37°C for 2 h and measured at OD 450nm in a Sunrise Absorbance reader (TECAN, Swit). Each experiment was repeated three times. Statistical analysis Date was reported as the mean ± SD. All statistical analysis was performed using Prism 5.0 (GraphPad Software, USA). A significance level of 0.05 was considered to be significant for all calculations. 
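Cell viability from a CCK-8 assay is typically expressed relative to untreated controls after subtracting a cell-free blank: viability (%) = (OD_sample − OD_blank) / (OD_control − OD_blank) × 100. The short sketch below applies this standard formula to invented OD450 readings; the actual absorbance values and any kit-specific corrections from this study are not shown.

```python
# Hedged sketch: percent viability from CCK-8 absorbance (OD450) readings,
# relative to an untreated control and corrected with a cell-free blank.
# All OD values below are invented placeholders, not data from this study.
from statistics import mean

def viability_percent(sample_ods, control_ods, blank_ods):
    blank = mean(blank_ods)
    return 100.0 * (mean(sample_ods) - blank) / (mean(control_ods) - blank)

if __name__ == "__main__":
    blank   = [0.08, 0.09, 0.08]   # medium + CCK-8, no cells
    control = [1.10, 1.05, 1.12]   # untreated endothelial cells
    treated = [0.72, 0.69, 0.75]   # e.g. cells exposed to rMuCyPA
    print(f"viability ~ {viability_percent(treated, control, blank):.1f} %")
```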
Homology analysis The amino acid sequences of CsCyPA (AFI24615. Immunoreactivity between rCyPA and anti-CsCyPAs Western blot analysis demonstrated that rCsCyPA, rSjCyPA, rMuCyPA and rHsCyPA could be recognized by anti-CsCyPAs, whilst not reacting with PBS in Fig 2. The results indicated that rCyPA of these species shared similar immunoreactivity with anti-CsCyPAs. Survival rates The survival rates were analyzed 72 h after the CLP surgery. As is shown in Fig 4, the survival rate of Shamsurgery group was 100 %. Two of the twenty mice left in the CLP control 72 h subgroup, a survival rate of 10 %. Nine of the twenty mice left in CLP treatment 72 h subgroup, with a survival rate of 45 %. A statistically significant difference between the CLP control group and the CLP treatment group was observed (p < 0.05). CyPA level in serum Serum samples were collected from both the shamsurgery group and the CLP control group. MuCyPA in the CLP control group was statistically higher compared with the sham group at 6, 12, 24 and 48 h (p < 0.05). There was no significant difference at 72 h (p > 0.05). In the CLP group, the CyPA level reached a climax at 6 h followed by a time-dependent decrease in Fig 5. Cytokine levels in serum Cytokines which promoted inflammation like TNF-α, IL-6 and adjusted inflammation like IL-4, IL-10 and IFN-γ were chosen to represent the systematic inflammation level. Administration of antibodies significantly reduced the levels of TNF-α, IL-6 and IL-1β in serum compared with CLP control mice (Fig 6a, b, c) at 12 h and 24 h after CLP. Adjustment Cytokine had no statistical meaning (Fig 6d, e, f ). Histology of lung and mesentery tissues H&E staining of the lung and mesentery of both the CLP control group and the CLP treatment group were shown in Fig 7. Lung tissue specimens in the CLP control group presented apparent thrombus at 12 h, and were characterized by leukocyte influx, edema, hemorrhage, wall thickening and alveolar consolidation at 48 h and 72 h (Fig.7a). In contrast, lung tissue in the CLP treatment group presented apparent thrombus at 24 h, and demonstrated leukocyte influx and edema without obvious hemorrhage and alveolar consolidation at 48 h and 72 h (Fig. 7b). Significantly, comparing lung tissue at 48 h and 72 h, pathological change seemed ameliorated in the CLP treatment group, while deteriorated in CLP control group. Compared with the CLP treatment groups, the mesentery tissue in the CLP control group demonstrated more serious inflammatory reaction. Tons of inflammatory emigrated from vessel at 6 h. The boosting of cytolysis, fusiform cells and intercellular substance led to diffuse mesentery fibrosis at 72 h after surgery (Fig. 7c). In the CLP treatment group, inflammation was limited and some normal fat cells could still be observed at 72 h (Fig. 7d). Effects of antibodies on blood coagulation indicator As the laboratory diagnosis of diffuse intracellular coagulation (DIC) in the mice was defined by Minna JD [25], the CLP model successfully induced DIC, as the changes of blood markers were shown in Fig 8. Compared with the sham group, the platelet count, fibrinogen concentration, PT, aPTT and D-dimer in the CLP control group began to change significantly at 6 h post-CLP. Comparing the CLP control group with the CLP treatment group, there were significant differences on the indicators including PT, aPTT, D-dimer and platelet count. Furthermore, antibodies in mice obviously ameliorated these four indicators at 48 h and at 72 h after the surgery. 
Although there was no significant difference in fibrinogen concentration, the significant changes of other indicators showed strong evidences for the efficacy. Viability of ECs with MuCyPA and anti-CsCyPAs by CCK-8 assays To evaluate the viability of ECs after being co-cultured with rMuCyPA, ECs were incubated in 0 μg (control), 0.1 μg, 1 μg and 10 μg MuCyPA for 24 h, and cell viability was measured using the CCK-8 assays in Fig 9. Compared with the control group, the viability of ECs exposed to rMuCyPA showed a dose-dependent decrease. In the inhibition test, 0.1 μg, 1 μg and 10 μg antibodies were co-cultured with 10 μg MuCyPA for 1 h before reaction. Compared with 10 μg MuCyPA group, the result showed significant increase on the percentage of cells in the 1 μg and 10 μg antibody groups in a dose-dependent manner (p < 0.05). There was no significant difference between 10 μg MuCyPA group and 0.1 μg antibodies group (p > 0.05). Discussion In this study, we found that the blood level of MuCyPA increased in the development process of sepsis induced by CLP surgery. Anti-CsCyPAs approach had a beneficial effect on the survival rate of CLP mice and obviously improved the observational indicators including blood coagulation, lung and mesentery pathology and cytokine levels in serum. In 1992, Barbara et al. [9], found that lipopolysaccharideactivated macrophages could secrete CyPA as a proinflammatory factor, the role of which in the inflammatory disease was evaluated in following studies. It has been found that CyPA significantly increased in the development process of infection in a couple of studies. For example, in 2007, Dear et al., found that CyPA highly increased in the liver after CLP [26]. Huang et al., reported that CyPA expression was modulated in peripheral lymphocytes from Pseudomonas aeruginosa induced sepsis [27] and Staphylococcus aureus invasion [28]. Similarly, in our study, the MuCyPA in serum, which might originate from the cells receiving inflammatory stimuli increased in abundance and reached the climax at 6 h in Fig 5. The mechanism of CyPA working as a proinflammatory factor unequivocally depended on the combination with CD147 receptor [11]. CD147, a single transmembrane glycoprotein, is widely expressed on the cell surface which can be expressed in all of the white blood cells, platelets and endothelial cells in most normal tissues in weak or no expression [29]. The proline 180 and glycine 181 residues in the extracellular domain of CD147 were critical for signaling and chemotactic activities mediated by CD147 which could be accomplished by the PPIase activity of CyPA [30]. The activated CD147 could transfer information into cells, leading to chemotaxis, release of factor and apoptosis of ECs which all induced deterioration of the sepsis. Significantly, the expression of CD147 in membrane was obviously increased in sepsis as was CyPA. Furthermore, in 1997, Tegeder et al. reported that CyPA PPIase activity was significantly higher in patients with severe sepsis compared with healthy subjects. In addition, elevated PPIase activity was associated with high mortality [31]. In this study, anti-CsCyPAs could recognize and crossreact with MuCyPA and effectively inhibit the PPIase enzymic activity as the obvious identity of CyPA amino acid sequence in different species, which may in turn induce inhibition on the activity of CD147. 
Comparing the CLP control group and the CLP treatment group in Figs 6 and 7, antibodies approach had a significantly favorable effect on the improvement of inflammation. The proinflammatory cytokines including IL-6, IL-1β and TNF-α were obviously lower compared with the control group, while anti-inflammatory cytokines including IL-10, IL-4 and IFN-γ presented no significant difference between the two groups. The level of CyPA increased at an early stage and followed by a drop to the normal in 72 h, which suggested that CyPA plays a primary role in the SIRS stage of sepsis, but is not significantly involved with the compensatory anti-inflammatory response syndrome (CARS) stage of sepsis. This reaction might alleviate the systematic inflammatory reaction and reduce the possibility of SIRS. Meanwhile, CyPA also serves as a key determinant for TNF-α inducing ECs apoptosis, which could also increase vascular permeability and induce hypovolemic shock [11]. Therefore, the protection on ECs might be the main reason behind the increased survival rate in the early stage. DIC is a common complication of sepsis, and is associated with a poor prognosis. The blood coagulation indicator in the treatment There were three limitations of this study that warrant notice. Firstly, the efficacy of antibodies might correlate with the combination of antigen and antibodies that inhibit CyPA from combining with CD147. This mechanism might play an important role as inhibition of enzyme. Secondly, no in vivo experiment on the effect of antibodies was carried out. It was found to be impossible to separate pure mouse peritoneal macrophage due to the open wound and persistent infection in abdomen after reviewing many studies. Finally, anti-CyPA antibodies had been verified to increase in some autoimmune diseases like rheumatoid arthritis, systemic lupus, erythematosus and so on, thus the safety of this "parasite medication" should be further verified in further studies. Conclusion Anti-CsCyPA antibodies had a verified beneficial effect on sepsis induced by CLP surgery by inhibiting the PPIase activity. The indicators of pathology, cytokines, blood coagulation indicator and survival rate all generally improved. As far as its safety is concerned, the preventative injection of CsCyPA might be controversial, but anti-CsCyPAs as an alternative treating approach for acute sepsis patients may be considerable.
2017-07-14T08:15:34.249Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "d3934157d6a5de074f73ac584963363bc4df8629", "oa_license": "CCBY", "oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/s13071-015-1111-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d3934157d6a5de074f73ac584963363bc4df8629", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213294783
pes2o/s2orc
v3-fos-license
Automated Negotiation for Peer-to-Peer Electricity Trading in Local Energy Markets : Reliable access to electricity is still a challenge in many developing countries. Indeed, rural areas in sub-Saharan Africa and developing countries such as India still encounter frequent power outages. Local energy markets (LEMs) have emerged as a low-cost solution enabling prosumers with power supply systems such as solar PV to sell their surplus of energy to other members of the local community. This paper proposes a one-to-one automated negotiation framework for peer-to-peer (P2P) local trading of electricity. Our framework uses an autonomous agent model to capture the preferences of both an electricity seller (consumer) and buyer (small local generator or prosumer), in terms of price and electricity quantities to be traded in different periods throughout a day. We develop a bilateral negotiation framework based on the well-known Rubinstein alternating offers protocol, in which the quantity of electricity and the price for different periods are aggregated into daily packages and negotiated between the buyer and seller agent. The framework is then implemented experimentally, with buyers and sellers adopting different negotiation strategies based on negotiation concession algorithms, such as linear heuristic or Boulware. Results show that this framework and agents modelling allow prosumers to increase their revenue while providing electricity access to the community at low cost. methodology, C.E. and V.R.; software, C.E., B.C., and C.O.; supervision, V.R. and W.-G.F.; validation, V.R., W.-G.F., and D.F.; visualization, B.C.; writing—original draft, C.E. and B.C.; writing—review and editing, V.R. Introduction Universal access to affordable, reliable, and sustainable energy is one of the sustainable development goals (SDG) of the United Nations [1]. Indeed, access to electricity is still a major challenge in developing countries. Eleven percent of the world population (840 million people) lack access to electricity especially in rural areas, which represent 87% of this population. With 573 million people out of those 840 million, the sub-Saharan African (SSA) region includes twenty of the leastelectrified countries of the world [2]. In most of these countries, insufficient generation capacity often due to total dependence on fossil-fuel generators and weak grid infrastructure [3] are the cause of this lack of electricity access. The size of the countries along with the cost of new generation development and grid reinforcement make it economically unfeasible to extend the high voltage grid to supply the whole population, both in the short and medium term [4]. Hence, such developing countries face important shortage, polluting electricity production, and low quality electricity with large voltage fluctuations that can be harmful to power electronics-based devices [5,6]. Consequently, small scale fossil fuel-based generators are widely used by electricity consumers for off-grid electricity generation. However, the high electricity cost resulting from these generators hinders the economic development in these regions [7,8]. In addition, the environmental pollution caused by the widespread use of (often inefficient) carbon fuel generators [9,10] is a growing health and safety concern due to toxic generator emissions; as well as the recorded deaths from these emissions, especially in developing countries [11,12]. 
Several innovative and cost-efficient solutions incorporating available local renewable energy sources (RES) such as solar, wind, and hydro have also been proposed and widely utilized. These include solar lighting [13], solar home systems (SHSs) integrating battery storage [14,15], hydropower [16], as well as community microgrids with battery storage systems [17,18]. These solutions provide several advantages such as reduced cost of electricity, ease of deployment, and environmental sustainability. This is especially valuable in developing country settings, where installing large-scale generation, centralized transmission/distribution, and storage assets are often expensive, due to limited power system infrastructure and access to finance [4]. Off-grid SHSs and community microgrids have also been observed as being the most effective solution in enabling millions of rural dwellers gain access to electricity [19]. Given the variable characteristic of RES in general, these offgrid SHS and community microgrids are usually designed and sized to generate and store adequate electricity for daily consumption, during periods of low-resource availability such as severe weather conditions [20]. Conversely, they also generate excess, unutilized/wasted energy during periods of high resource availability, which could be used by neighbours or peers via peer-to-peer (P2P) energy sharing/trading within the community, as demonstrated in the Swarm Electrification project of Bangladesh [21] where local consumers and prosumers (consumers who own solar home systems) are connected and morphed into village microgrids for energy sharing purposes, using a DC distribution system. P2P trading has been defined as a decentralized structure where all peers in the structure cooperate to trade a good or service-in this case, electricity [22]. Indeed, P2P electricity sharing/trading has emerged as a new paradigm, solving local network issues such as voltage fluctuation, congestion, or electricity deficit [22][23][24][25]. This novel concept of electricity trading is driven by the development of distributed energy resources (DER) and smart metering technologies along with communications systems [26][27][28]. It provides prosumers with the unique opportunity of selling any excess electricity generated to other households in need of electricity [29] especially in the case of islanded microgrids or during electricity cut events, while empowering communities to take charge of their own energy supply and usage. A typical case study is the Brooklyn Microgrid Project (BMG)-a network of local neighbourhood prosumers and consumers who trade locally-available, cheaper and greener electricity via a private blockchain system [30]. Other commercial P2P electricity trading projects include xGrid and µGrid by Powerledger (www.powerledger.io/our-technology/); PicloFlex by Open Utility (www.picloflex.com/); Vanderbron (www.vandebron.nl/) and Sonnenbatterie community where members share their self-produced surplus energy stored in their Sonnenbatteries with other members of the Sonnembatterie community (www.sonnenbatterie.co.uk/sonnencommunity/). 
Other European research projects focused on the design and implementation of P2P electricity marketplaces include ElecBay, a P2P electricity trading platform for grid-connected microgrids [31]; EnerChain, a blockchain-based energy trading platform "for wholesale products, flexibility options and kWhs within energy communities" (www.enerchain.ponton.de/) [32]; NOBEL, a P2P energy market for trading electricity in smart grid neighbourhoods (www.nobelgrid.eu/) [33]; and the P2P-SmartTest project (www.p2psmartest-h2020.eu/) [34]. Hence, local peer-to-peer (P2P) trading would provide a great opportunity for populations with limited access to electricity to access any unused/excess local energy within the community. While most P2P electricity markets use market clearing algorithms such as the double auction, which match the prices and quantities proposed by prosumers and consumers in the market [35,36], others use distributed optimization methods [37] based on dual prices, such as the alternating direction method of multipliers (ADMM) [38] and consensus-based optimization [39]. In addition, automated negotiation has emerged as a key technology for P2P electricity markets. In an automated negotiation, software agents negotiate and trade energy on behalf of their human owners in order to maximize the utility defined by their owner. Hence, automated negotiations are defined as a form of interaction in which a group of software agents (buyers and sellers), with conflicting interests but desirous of cooperating with one another, choose to work together with the aim of reaching an agreement that is acceptable to all parties in the process [40]. Fast, efficient, and reliable automated negotiation is seen as a key coordination mechanism for the interaction of producers, suppliers, and consumers in electronic markets such as P2P electricity markets [41]. Hence, the development of market frameworks has been the focus of recent research in the P2P electricity market sector. Optimization of electricity production and consumption schedules in a wholesale market, where buyer and seller agents cooperate to determine the best energy contract, is one such research direction [42,43]. Current research also focuses on bilateral contract networks between suppliers and centralized producers [44]; computational properties of negotiation algorithms [45]; prosumers' behavioural patterns based on their bidding strategy [46]; optimized scheduling of energy storage for a P2P microgrid model using automated negotiations [47]; as well as enabling communication technologies [48]. Furthermore, agent-coordinated electricity trading between homes in a cooperative residential setting has been shown to lead to an overall battery capacity reduction and reduced energy losses [49,50]. However, electricity pricing was not considered for negotiations in these studies. In this paper, we present bilateral agent negotiation heuristics that enable a community P2P energy market, in order to reduce the bills of electricity traders and promote access to electricity in developing countries. The aim of the paper is to provide a new formal model of buyer and seller agents that can be used to carry out automated negotiations. This formal model is then integrated within a novel automated negotiation framework where electricity quantities and prices are electronically negotiated by players in the market. 
The paper shows how consumers can access more local electricity at lower cost, while allowing local prosumers to increase their benefits and reduce their generation system's payback period. The paper also proposes novel heuristic negotiation strategies (applicable in such localized P2P markets) where agents representing residential prosumers and consumers bargain with each other towards reaching a mutually satisfactory trade. In Section 2 of the paper, we describe this novel automated negotiation framework, including the formal model of buyers and sellers and the negotiation protocol. In Section 3, we present different negotiation strategies that can be used in a local energy market. Then, in Section 4, the proposed framework and negotiation strategies are applied to a case study focused on developing countries (specifically India), where the considered community represents a rural area with poor access to electricity. Experimental analysis shows that the proposed framework increases access to electricity, while increasing the revenues for SHS owners. Finally, in Sections 5 and 6, we discuss and conclude on the potential of the proposed automated negotiation framework for P2P energy markets and highlight how it allows for a reduction in the electricity deficit of some communities while promoting the use of community-based renewable energy sources. Automated Negotiation Framework We develop a negotiation framework in which two types of agents (buyer and seller) bilaterally negotiate to trade electricity. The framework consists of a negotiation protocol and agent models, as explained below. Negotiation Protocol The negotiation protocol is defined by the operator or facilitator of the local market in which the two agents interact. It defines the rules of this interaction, including the type of energy contract that can be exchanged, and the different steps that each agent has to follow in the negotiation. The automated negotiation framework consists of a Rubinstein alternating offers protocol [51], and the issues that are negotiated (the quantity of electricity and the price for each considered period) are discrete and packaged by day. Agents have imperfect information, which means they do not know the preferences or utility function of the opponent. This subsection presents the protocol that must be followed by the agents during their negotiation. First, the operator or facilitator of the local market determines the number of consumption periods for the following day. In this paper, four (4) periods of electricity consumption are used in the negotiation model: Morning, afternoon, evening, and night. For each of these periods, the operator defines two issues: The quantity of electricity to be traded, noted q_Ni, where Ni is one of the four periods determined previously, and the price p_Ni at which the quantity is traded during this period. The price per unit is taken as a constant throughout the whole day in this paper (p_N1 = p_N2 = p_N3 = p_N4 = p), but the negotiation model also applies in the case of a variable price per period. This is because the emphasis in our negotiation model is on equitable and mutually beneficial exchange of electricity between peers, not on the pricing mechanism. For each of these issues, the model includes some bounds on the quantities that the energy exchanges in each period can take, based on the physical constraints of the application setting. For example, in the first period (Ni = N1 = Morning), a minimum and a maximum quantity of energy that can be traded are defined. 
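As an illustration only (the paper's simulations were implemented in MATLAB; the Python below, and all names and step sizes in it, are assumptions made for this sketch), the discrete negotiation domain described above can be enumerated as four per-period quantity issues plus one daily price issue, each restricted to a grid of allowed values defined by the market operator.

```python
# Hypothetical sketch of the bundled multi-issue domain: quantities q_N1..q_N4
# (kWh per period) and one daily price p ($/kWh), all discretised.
from itertools import product

def build_domain(q_max_per_period, q_step, price_range, price_step):
    """Enumerate every allowed package (q_N1, q_N2, q_N3, q_N4, p)."""
    quantity_grids = []
    for q_max in q_max_per_period:
        steps = int(round(q_max / q_step))
        quantity_grids.append([round(i * q_step, 3) for i in range(steps + 1)])
    p_min, p_max = price_range
    p_steps = int(round((p_max - p_min) / price_step))
    prices = [round(p_min + i * price_step, 3) for i in range(p_steps + 1)]
    return [qs + (p,) for qs in product(*quantity_grids) for p in prices]

# Example bounds from the text: at most 3.5 kWh per period, price in [0.1, 0.9] $/kWh.
domain = build_domain([3.5] * 4, q_step=0.5, price_range=(0.1, 0.9), price_step=0.1)
print(len(domain), "packages; first one:", domain[0])
```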
The minimum quantity of electricity to be traded (q_Ni^min) is usually 0, while the maximum quantity depends on the local constraints of the distribution network and the duration of the period. As an example, in a very small market with small consumption, the maximum quantity of electricity that could be traded in a period could be defined as q_Ni^max = 3.5 kWh, which means that the agents cannot trade more than 3.5 kWh within the period Ni. The same applies to the issue of price, where a minimum and a maximum price are also determined; for example, p in the range [0.1, 0.9] $/kWh. Hence, our automated negotiation framework is defined by the following set of five issues (in the case of four periods and one price for the whole day): {q_N1, ..., q_N4, p}. In the case where the price is variable over one day, the set of issues is defined as {q_N1, ..., q_N4, p_N1, ..., p_N4}. For simplicity, each of the issues (q_Ni, p) consists of discrete values (quantities) that are predefined by the contract types. This means that negotiation can only result in traded quantities that are already within the set of feasible quantities that are possible to be physically traded in the local distribution network. Given the description above, the proposed negotiation protocol consists of a bundled multi-discrete issue. Thus, for each round within a negotiation, an agent A will determine one quantity for each issue (within the limits previously defined) in order to constitute an offer, noted O_A = {q_N1, ..., q_N4, p} (or {q_N1, ..., q_N4, p_N1, ..., p_N4} in the case of variable prices), proposed to agent B. The set of all possible offers, noted Ω, is the negotiation domain determined by Equation (2), i.e., the combination of all allowed values of the issues. Hence, a negotiation consists of the following steps. (Pre-negotiation) First, the parties (agents) define the issues to be negotiated and the associated possible (allowed) quantities for each of them. An agent A then determines the offer O_A→B that it will propose to agent B during the first round. The offer consists of the quantities of electricity for each period (q_N1, ..., q_N4) and the price (p). Agent B receives the offer and either accepts it or discards it. In the first case, the negotiation is over. In the second, it proposes a counteroffer O_B→A by determining its preferred quantities for each issue. Agent A can then either accept it, propose a new offer O_A→B (in which case we go back two steps above), or close the negotiation (no deal). Once the negotiation is done, the trade is validated against the physical constraints of the power exchange network, verifying that the network can support the agreed energy transfer. The next day, the agents commit to their energy trade. In terms of the physics, the proposed model can be used in a number of physical settings. In the case of communities in many developing countries, there is no central distribution network and the exchange occurs over a privately-owned wire. In cases where prosumers have access to a local distribution network, a distribution system operator could be used to verify and enforce physical network constraints, but the contracts are agreed through P2P negotiation. Thus, given the negotiation protocol description above, the next section will focus on the modelling of the agents in order to allow automated negotiations by software agents. Agent Models Having described the negotiation process in subsection 2.1 above, this section presents the modelling of the software agents representing the different market traders. The agent modelling consists mainly of determining an agent's utility function. 
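To make the exchange of offers concrete, a minimal sketch of the alternating-offers loop is given below. It is illustrative only: the propose/accepts interface and the maximum number of rounds are assumptions, not part of the paper's specification.

```python
# Hypothetical alternating-offers driver: agents take turns proposing packages
# until one side accepts, one side walks away, or the round limit is reached.
def negotiate(agent_a, agent_b, max_rounds=50):
    """Return the agreed package, or None if no deal is reached."""
    last_offer = None
    proposer, responder = agent_a, agent_b
    for k in range(1, max_rounds + 1):
        offer = proposer.propose(k, last_offer)    # build this round's bid
        if offer is None:                          # proposer closes the negotiation
            return None
        if responder.accepts(offer, k):            # responder agrees: deal reached
            return offer
        last_offer = offer
        proposer, responder = responder, proposer  # roles alternate every round
    return None                                    # deadline reached, no deal
```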
The utility function is defined for each offer and represents the agent's preference or value for that offer. Thus, a buyer will have a high utility for an offer that consists of its desired quantities of electricity at a low price, whereas a seller will have the highest utility for offers with high prices. Hence, it is necessary to distinguish seller and buyer agents, as explained below. Buyer Agent Model The utility that an agent A will give to an offer O, noted U_A(O), is defined as a function of the total cost and quantity of electricity supply for the day. Thus, the utility function of an agent A for an offer is defined as proposed in Equation (3), as a weighted combination of a cost term U_cost and a quantity term U_quantity, where w_c and w_q are weight coefficients such that w_c + w_q = 1. w_c represents the importance of the electricity cost for the agent, while w_q represents the importance the agent gives to the quantity of electricity it will obtain in an offer. U_cost represents the total cost of the offer through the whole period (day) and is given by Equation (4), which includes the case where the prices can be different for every period; here, q_Ni and p_Ni are the quantity of electricity and the price at period Ni that constitute the current offer, and d_Ni is the quantity of electricity required by the agent for period Ni. The buyer's reference price for period Ni is defined as the minimum between the price of electricity on the grid and the cost of generating one unit of energy from a generator that would be owned by the buyer and used in case there is no deal. This is relevant especially for remote places or countries in India and sub-Saharan Africa where households use small fossil fuel-based generators to generate electricity when there is a long outage on the grid. Where there is no grid and no generator, this reference price is equal to the highest possible price for period Ni. Additionally, q_Ni^min and p_Ni^min are the minimum quantity of electricity and the minimum price that can be traded in period Ni, respectively. This expression of the cost ensures that the utility of an offer with a low cost will be higher than the utility of an offer trading the same quantity of electricity at a higher cost. It will also prevent the agent from negotiating electricity at a higher cost than the grid's price or the cost of a self-owned generator. Finally, U_quantity in Equation (3) is defined as the agent's utility for the quantities of electricity that constitute the offer. Indeed, an agent has a need for specific quantities of electricity at specific times. If an offer meets its needs, the offer will have a high utility. However, if the offer surpasses its needs, the utility is 0 for an agent who is not flexible (unable to increase its consumption). Thus, U_quantity is defined as shown in Equation (5), where m_Ni represents the matching between the electricity quantity corresponding to the offer for period Ni and the required electricity quantity for the same period. m_Ni is given by Equation (6), where f_Ni is the flexibility the consumer has (in kWh) for overconsumption in period Ni, and ε is a number such that ε ≪ 1, which allows m_Ni to be defined even for a period where d_Ni = 0. It can be seen that m_Ni is equal to 1 only when the agent receives an offer that exactly meets its needs. The w_Ni are the period weight coefficients (summing to 1), directly representing the importance of a period Ni to the buyer in comparison with other periods. They are given by Equation (7), where d^max is the maximum quantity of electricity per period the agent requires, and k_Ni is a coefficient given by the agent to state whether the period is important or not. 
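A rough rendering of the buyer utility is sketched below. Because the exact normalisations of Equations (3)-(7) are not recoverable from the extracted text, the cost and matching terms here are plausible stand-ins that only preserve the stated behaviour (weighted trade-off, zero utility for inflexible over-supply, per-period weighting); they are not the authors' equations.

```python
# Hypothetical buyer utility: weighted sum of a cost term and a quantity term.
def buyer_utility(offer_q, price, demand, ref_price, w_cost, flexibility=0.0, eps=1e-6):
    """offer_q, demand: per-period kWh; price: $/kWh; ref_price: grid/generator fallback cost."""
    w_quantity = 1.0 - w_cost
    # Cost term: 1 for a free offer, 0 when it costs as much as the fallback supply.
    offered_cost = price * sum(offer_q)
    fallback_cost = ref_price * sum(demand) + eps
    u_cost = max(0.0, 1.0 - offered_cost / fallback_cost)
    # Quantity term: per-period matching, 1 only when the need is met exactly.
    weights = [d / (sum(demand) + eps) for d in demand]   # period weights, sum to ~1
    matches = []
    for q, d in zip(offer_q, demand):
        if q <= d + flexibility:
            matches.append(min(q, d) / (d + eps))
        else:
            matches.append(0.0)                           # inflexible over-supply
    u_quantity = sum(w * m for w, m in zip(weights, matches))
    return w_cost * u_cost + w_quantity * u_quantity

# Example: a price-sensitive buyer evaluating 0.75 kWh in the evening at 0.2 $/kWh.
print(buyer_utility([0.0, 0.0, 0.75, 0.0], 0.2, [0.25, 0.25, 1.0, 0.0], 0.5, w_cost=0.8))
```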
For example, an agent might not need a large quantity of electricity for period Ni but might be in dire need of this quantity for a different period. In this case, the agent can specify it by allocating a large value to k_Ni (k_Ni = 1, for example). Seller Agent Model The seller agent represents a prosumer with a microgeneration asset. This asset may consist of a fossil fuel engine, a solar panel with or without a battery, etc. The seller agent's utility for an offer is determined by the revenue the seller will get from it. Thus, the seller's utility U_S(O) is given as shown in Equation (8), where an offer is said to be feasible if the agent has enough energy in each period to supply its own needs and the electricity quantities proposed in the offer. Algorithm 1 presents the steps followed in order to remove unfeasible offers from the negotiation domain. R(O) is the seller's expected revenue from an offer O = {q_N1, ..., q_N4, p}, and is given by Equation (9) in the case where there are variable prices per period, where R^max is the maximum revenue a seller could expect from the space of possible offers and s_Ni^max is the maximum quantity of energy the prosumer could sell in time period Ni. The seller's marginal cost of production for one unit of electricity in period Ni, private to the seller only, is denoted c_Ni. In Algorithm 1, offers found to be unfeasible are assigned a utility of −1 and are thereby removed from the negotiation domain. Now that the utility function of the seller has also been defined, the next subsection presents the different steps followed by the agents during a negotiation. Agents' Electricity Negotiation Having defined the buyer and seller agents' respective utilities, this section presents the different steps followed by the agents during the negotiation. 1. The agent first determines its required self-consumption (d_Ni) for each of the four periods, as well as the different marginal costs for electricity supply (the grid or generator reference price for the buyer, and the production cost c_Ni for the seller). 2. The seller agent also determines the forecast for its distributed energy resource (DER) production, while the buyer agent determines its required quantities d_Ni, as well as the weights w_c and w_q. 3. The agents compute the utility function for each of the possible offers from the sets of discrete quantities and prices determined by the market facilitator. Thus, each agent generates the set of possible outcomes Ω. 4. The agents sort the set of possible outcomes, and each determines the threshold U_res below which it will not accept any offer. U_res is the utility of the reserved (least acceptable) package an agent can concede to. Thus, any package with a utility below U_res will be discarded. 5. Negotiations begin with an agent (say agent A) initiating and sending the first offer/bid O_A→B to the opponent (agent B). 6. Upon receiving the offer, agent B evaluates the utility of the offer and determines whether the offer is suitable or not, that is, whether its utility is above U_res or not. Depending on its strategy, agent B will either accept the offer, or refuse the offer by proposing a new offer/bid O_B→A, and so on, within the specified deadline, until a bargain is either reached or the negotiation is closed without a deal. Finally, for each agent, the threshold U_res defined above corresponds to the package which contains no electricity quantities, as each agent is in the market to negotiate electricity quantities. U_res is defined by Equation (10), where p* is the maximum/minimum price acceptable for the buyer/seller respectively, and the minimum is taken over the two extreme packages ({0, ..., 0, p*} and {q_N1^max, ..., q_N4^max, p*}), whichever gives the lowest utility. Now that the negotiation protocol and the agents' modelling have been defined, the next section describes the different negotiation strategies that have been considered in this study. 
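Before moving on, the seller side described above can be sketched in the same spirit. The revenue normalisation below is an assumption consistent with the description of Equations (8)-(9); the feasibility check mirrors the role of Algorithm 1.

```python
# Hypothetical seller utility and feasibility filter.
def seller_utility(offer_q, price, available, marginal_cost, max_price):
    """available: kWh the prosumer can spare per period after self-supply."""
    # Offers asking for more energy than is available in any period are
    # unfeasible: give them utility -1 so they drop out of the domain.
    if any(q > a for q, a in zip(offer_q, available)):
        return -1.0
    revenue = sum(q * (price - marginal_cost) for q in offer_q)
    max_revenue = sum(a * (max_price - marginal_cost) for a in available) or 1.0
    return revenue / max_revenue                  # normalised expected revenue

def prune_unfeasible(domain, available, marginal_cost, max_price):
    """Remove every package the seller could never physically honour."""
    return [pkg for pkg in domain
            if seller_utility(pkg[:-1], pkg[-1], available, marginal_cost, max_price) >= 0.0]
```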
Negotiation Strategies A negotiation strategy determines how an agent generates a bid, as well as when it accepts an offer. Several negotiation concession strategies have been proposed in the negotiation literature. For this paper, we have selected and adapted three strategies that show the best performance, which will be used in the next section in order to validate the negotiation protocol and model defined above. 3.1.1. "Zero Intelligence" (ZI) Strategy The first agent strategy to be considered is the zero intelligence (ZI) strategy. This strategy consists of generating a random bid from the agent's set of feasible packages determined previously (those with utility above U_res). When evaluating a received offer, the agent will accept it if the utility of the received offer is greater than that of its previous randomly generated bid. This strategy serves as a baseline strategy to determine the feasibility of automated negotiations as a trading mechanism in P2P electricity markets. Linear Heuristic Strategy The linear heuristic (LH) agent strategy consists in choosing an offer from a reduced set of feasible packages. During the first round of a negotiation, the LH agent (called agent A) starts by defining a minimal utility U_min^1 (called the reservation value) such that the set of feasible packages for this round is given by the packages whose utility exceeds U_min^1. For the first round, U_min^1 is chosen close to the maximum utility computed, as explained in Section 2. If an offer from the opponent is within the set computed for the considered round, the offer is accepted. Otherwise, the agent proposes a second round, for which it determines a new minimal utility U_min^2 and a corresponding feasible set. The specificity of the LH agent is that the minimum utility U_min^k used to determine the set of possible packages for round k is determined as a linear function of the round number, as shown in Equation (11), of the form U_min^k = U_min^1 − M(k − 1), where M is a coefficient that corresponds to the speed at which the agent concedes in a negotiation. As explained above, once the LH agent receives an offer for round k, it will either accept it if the utility of this offer is above U_min^k, or propose a new bid. The new bid corresponds to the package from the round-k feasible set that is closest to the received offer from the opponent, where the dimensions for the distance computation are the issues considered, q_N1, ..., q_N4, p. Expert Agent Strategy The expert agent strategy uses a heuristic similar to the LH strategy, as well as the Boulware strategy [52]. Similar to the LH agent, the expert agent also defines a new set of feasible packages for each round k, determined as the set of all packages O such that U(O) > U_min^k, where the minimal utility U_min^k is defined by Equation (12) as a Boulware-type (slow-conceding) function of the round number. It is against this feasible package set that the expert agent evaluates a received offer with a view to accepting it; otherwise, it proposes a counteroffer from this set, choosing the bid closest to the received offer in terms of quantities. Now that the different agents' strategies have been presented, the following section will present a case study in which these strategies will be evaluated. Case Study The case study used to test and validate our model focuses on developing countries where rural or semi-urban areas often have unreliable or no connectivity to a central power grid (i.e., "weak grid" environments), such as those present in several parts of sub-Saharan Africa, southern Asia, or India. 
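Returning briefly to the three strategies just described, their concession schedules and bid selection can be sketched as follows. The linear schedule follows the form given for Equation (11); the Boulware curve below is the standard time-dependent concession function used as a stand-in for Equation (12), whose exact parameters are not given in the extracted text, and all names and default values are assumptions.

```python
# Hypothetical concession schedules and bid selection for the three strategies.
import math, random

def threshold_linear(k, u_start, u_res, m=0.05):
    """Linear heuristic: concede a fixed utility amount m per round (cf. Eq. (11))."""
    return max(u_res, u_start - m * (k - 1))

def threshold_boulware(k, u_start, u_res, deadline=50, beta=0.2):
    """Boulware-style: concede very little early, most of it near the deadline."""
    t = min(k / deadline, 1.0)
    return u_start - (u_start - u_res) * (t ** (1.0 / beta))

def pick_bid(domain, utility_fn, threshold, received=None):
    """Propose the package above the threshold closest to the opponent's last offer."""
    candidates = [p for p in domain if utility_fn(p) >= threshold]
    if not candidates:
        return None
    if received is None:                          # opening bid: best own package
        return max(candidates, key=utility_fn)
    return min(candidates, key=lambda p: math.dist(p, received))

def pick_bid_zero_intelligence(domain, utility_fn, u_res):
    """ZI baseline: a random package above the reservation utility."""
    feasible = [p for p in domain if utility_fn(p) > u_res]
    return random.choice(feasible) if feasible else None
```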
Until the development of solar PV technology, consumers faced the choice of either having no electricity during the whole period of an outage (sometimes up to several days) or using small fossil fuel-based generators. Recently, some households have invested in solar home systems (SHSs) integrating battery storage. However, due to the lack of incentive or means to share the excess energy inherent in such systems, large amounts of electricity that could serve others in the community are often unutilized/wasted. The case study proposed in this paper addresses this issue by applying the negotiation protocol described in the previous sections to a rural community consisting of a prosumer with a solar home system and a battery, and different consumers with different preferences for energy and prices. For comparison purposes, the consumers are defined as having the same consumption needs. The prosumer acts as the seller, and the consumers are the buyers. In our model, peer-to-peer (P2P) bilateral (one-to-one) negotiations are considered; hence, negotiations are between the seller and one buyer at a time. In more detail, the settings used in our model (such as the two solar generation availability scenarios described below, and the values used to model our utility functions for exporting/consuming electricity) are inspired by a large UK-India research project the authors are involved in: Community-scale Energy Demand Reduction in India (CEDRI, www.cedri.hw.ac.uk/). CEDRI uses a combination of data-driven analytics, user surveys, and computational modelling to study the energy demand/consumption behaviour in a number of local communities in India, such as the Auroville community and surrounding villages in Tamil Nadu. Buyers' Profiles Three buyers reflecting three different energy consumption behaviours but similar electricity consumption needs, typical of a rural community setting, are modelled. The World Bank has proposed to categorize consumers based on their electricity need [53]. As shown in Table 1, a "Tier 1" consumer corresponds to a consumer with a daily need of 12 Wh of electricity for lighting, whereas "Tier 5" consumers typify households that have an electricity consumption above 8.2 kWh. In our scenario, we focus on rural India and SSA communities, so we consider that buyers belong to the "Tier 2-3" category, with daily consumption only for lighting, air circulation (fans), television, and phone charging. Thus, the buyers' daily electricity consumption is estimated as d = {d_N1, d_N2, d_N3, d_N4} = {0.25, 0.25, 0, 1} kWh. This means that in this scenario, the buyers require 0.25 kWh of electricity at night, 0.25 kWh in the morning, and 1 kWh during the evening. The buyers' different energy consumption patterns are also represented in their different preferences in terms of price or need for electricity. 1. Buyer 1 is a consumer with equal preference for the cost of electricity and the quantity of electricity it receives, provided it is close to its electricity need d (i.e., w_c = w_q). 2. Buyer 2 is a consumer who prefers having the amounts of electricity given by d, irrespective of price. Hence, w_q is set high and w_c = 1 − w_q. 3. Buyer 3 is a consumer who is most concerned with price and will adjust consumption based on the price, as this buyer does not want to pay much money for its electricity consumption. Hence, w_c is set high (and w_q = 1 − w_c). 
We also consider a "Tier 1" consumer household with a maximal electricity consumption of less than 0.2 kWh daily (often as a result of low household income), to determine whether such a consumer can participate in and benefit from the LEM using automated negotiating agents. We modelled this consumer as a type of buyer 3, as this consumer is mostly concerned about its cost of electricity. Seller Profile The seller owns a small solar PV system of 1.5 kW with a battery of 2.8 kWh of available capacity. The seller also has its own need for electricity, which it will self-supply in the first place, before selling any extra energy to others. Its daily consumption (demand) is assumed to be d_S = {0.65, 0.86, 0.43, 1.4} kWh. Its battery is assumed to be completely full (2.8 kWh) at the start of the day, and two cases of solar microgeneration are considered, with a derating factor of 65% pre-applied to the forecasted solar PV generation to give DER_PV: 1. Cloudy-day case, where the solar PV installation produces energy given by DER_PV = {0, 1, 0.5, 0} kWh per period, respectively. 2. Sunny-day case, with a PV production given by DER_PV = {0, 2, 2.5, 0} kWh. This derating factor caters for system losses, as well as variability in solar irradiation. It also helps ensure that a seller agent can only negotiate energy quantities it can generate and deliver. The next section will focus on the results obtained from the negotiations between all these buyers and the seller in the two solar production cases, implementing the different negotiation strategies described in subsection 3.1. Experimental Results The proposed agent models, using the proposed strategies and negotiation protocol, were applied to the case study described in Section 3.2 for the two solar availability scenarios, to demonstrate the benefits of automated negotiations in local energy markets. The results are presented in this section in two parts. In the first part, the agreed negotiation bargains/outcomes are presented in order to evaluate the negotiation protocol proposed, while the second part compares the outcomes of the different strategies proposed. All simulations were run in MATLAB on an i5 processor at 1.7 GHz, and the different central processing unit (CPU) requirements are displayed in Table 2. Table 2. CPU processing time and memory requirements for the simulations (on average). Negotiation Framework Implementation The two cases proposed to evaluate the negotiation framework allow us to explore two scenarios. In the first scenario, the seller (prosumer) does not have much energy to sell, as the day is cloudy, while each buyer will try to find the maximum amount of energy it can obtain, at the lowest price, and in its most preferred period. With the importance coefficient k_Ni set to 0 in Equation (7) for all periods and for all the buyers, each buyer will try to maximize its quantity of energy in the period where it has the greatest demand, in our case the evening period, as the required demand is {0.25, 0.25, 0, 1} kWh. The left graph of Figure 1 presents the results from the negotiations between the three buyers and the seller when the day is cloudy. Since there is not enough energy to supply the needs of both the seller and the buyer, the buyer focuses on the evening period. Figure 1 shows that the only feasible package is a 0.75 kWh electricity exchange in the evening. As shown, buyer 3, who has a clear preference for a low cost for its energy consumption, negotiates the lowest price. 
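For reference, the case-study inputs described above can be grouped into a small configuration block. The period-to-value mapping follows the vector order given in the text; the Tier 1 demand split and the buyer weight values are illustrative assumptions (the paper does not report the exact weights).

```python
# Case-study inputs as a configuration dictionary (weights and Tier 1 split are assumed).
CASE_STUDY = {
    "periods": ["morning", "afternoon", "evening", "night"],  # vector order as given in the text
    "buyers": {
        "buyer1": {"demand_kwh": [0.25, 0.25, 0.0, 1.0], "w_cost": 0.5},   # balanced
        "buyer2": {"demand_kwh": [0.25, 0.25, 0.0, 1.0], "w_cost": 0.1},   # quantity-driven
        "buyer3": {"demand_kwh": [0.25, 0.25, 0.0, 1.0], "w_cost": 0.9},   # price-driven
        "tier1":  {"demand_kwh": [0.05, 0.05, 0.0, 0.1],  "w_cost": 0.9},  # < 0.2 kWh/day (assumed split)
    },
    "seller": {
        "pv_kw": 1.5,
        "battery_kwh": 2.8,                        # full at the start of the day
        "self_demand_kwh": [0.65, 0.86, 0.43, 1.4],
        "pv_cloudy_kwh": [0.0, 1.0, 0.5, 0.0],     # derated forecast, cloudy day
        "pv_sunny_kwh":  [0.0, 2.0, 2.5, 0.0],     # derated forecast, sunny day
    },
}
```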
From the seller's perspective, provided the offered price is above its marginal cost of energy production, the deal is still profitable. The right graph of Figure 1 presents the results of negotiations when the day is sunny. In this case, the seller has enough energy to supply its own demand and the demand of the buyer. Buyer 1 (no particular preference) and buyer 3 (preference for a low cost) negotiate the same quantities of energy to meet their needs. Buyer 3 obtains the best price but runs the risk of not reaching a deal. If its utility weight for low cost had been higher, it might not have proposed any offer suitable to the seller at prices above the seller's marginal cost of its PV and battery installation. Finally, buyer 2 is a buyer who aims to obtain at least the required quantity of energy or more, by attaching very low importance to the cost of its electricity supply. In a real-life scenario, if the obtained contract/deal does not suit the buyer, it can adjust its weight for cost in future negotiations in order to either reduce the obtained price or increase its energy access. Hence, an important outcome from these simulations is that the agreed bargain meets the buyers' needs, while providing additional revenue to the seller. It specifically allows buyers with preferences such as the buyer 2 profile to increase their energy access, which is an important feature in developing countries, as economic development requires increasing energy access. For low-income community members, it also allows them to negotiate better prices for their energy supply, while still giving satisfaction to the prosumers/sellers. As discussed, P2P negotiations are mostly specific to settings with poor access to a central power grid, where there is no direct interaction or competition with a central power supplier or utility company. Similarly, Figure 2 displays the outcomes averaged over 100 negotiations in the case of zero intelligence agents (buyers and seller). Here, the agents do not have any intelligence in their strategy, such that they end the negotiation any time a negotiation round's outcome gives a better utility than the previous round. Still, each outcome follows the buyer's characteristics (preference for the energy cost or for the quantity of energy), which tends to validate the model proposed in this paper. During a negotiation, different packages are proposed between the buyer and the seller. Figures 3 and 4 show the space of all possible sunny-day negotiation outcomes between the seller expert agent and the buyer 1 and buyer 2 expert agents, respectively, while Figure 5 shows the negotiation domain space for a buyer 3 (Tier 1 household) expert agent negotiating with a seller expert agent on a sunny day. Figure 3 shows that the obtained bargain for this particular scenario is efficient and provides a good utility to the buyer but a quite low utility to the seller. This is due to the fact that the seller's utility is proportional to the price, whereas the buyer's utility is the sum of a term proportional to the price and a term independent of the price (relating to the quantity of energy). This also explains the distance between the obtained bargain and the Nash (which maximizes the product of the two negotiating agents' utilities) or Kalai-Smorodinsky (KS) (which equalizes the agents' relative utility gains) bargaining solutions [54]. The distance to these points (computed for each negotiation domain) is used in the negotiation literature as a measure of fairness of the agreed negotiation outcome. 
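For completeness, the two fairness reference points mentioned above can be located in a finite outcome space as sketched below, assuming disagreement utilities of zero; this is a generic construction, not taken from [54].

```python
# Hypothetical computation of the Nash and Kalai-Smorodinsky reference points.
def nash_point(outcomes, u_buyer, u_seller):
    """Outcome maximising the product of the two agents' utilities."""
    return max(outcomes, key=lambda o: u_buyer(o) * u_seller(o))

def kalai_smorodinsky_point(outcomes, u_buyer, u_seller):
    """Outcome closest to the ray from (0, 0) towards the ideal (utopia) point."""
    ideal_b = max(u_buyer(o) for o in outcomes)
    ideal_s = max(u_seller(o) for o in outcomes)
    def off_axis(o):
        return abs(u_buyer(o) * ideal_s - u_seller(o) * ideal_b)
    min_dev = min(off_axis(o) for o in outcomes)
    near_ray = [o for o in outcomes if off_axis(o) <= min_dev + 1e-9]
    # Among outcomes on (or nearest to) the ray, keep the one furthest along it.
    return max(near_ray, key=lambda o: u_buyer(o) + u_seller(o))
```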
As explained above, given the difference in the way utilities are computed for the buyer and the seller, the Nash bargain corresponds to packages with the highest price. Such packages provide a large utility for the seller, and a not too small utility for the buyer, as the quantities of energy still fulfil its needs. For buyer 2, its utility function is impacted more by the quantities of energy, and less by the price. Thus, the negotiation outcome in Figure 4 is mutually satisfying to both agents and also efficient, lying closer to the Nash and KS bargaining solutions. Moreover, the concave outline of the Figure 4 negotiation domain showcases the agents' asymmetric preferences, with buyer 2 having a higher preference for energy over price and the seller agent having a higher preference for price. Figure 5 also shows that a Tier 1 household with low energy demand and mainly concerned with the cost of electricity can benefit from such a local energy market. Unlike in Figures 3 and 4, the Pareto frontier in Figure 5 closely resembles that of a zero-sum game. Likewise, the sparsity of the space of possible outcomes shows the very limited agreeable outcomes available to both agents. This is because both negotiating agents are mainly concerned about the same issue, price, with the seller expert agent focused on maximizing its revenue and the buyer 3 expert agent interested in minimizing cost. Both agents reached an outcome more satisfactory to the buyer and less so to the seller, because of the small quantity of electricity agreed upon, relative to the total electricity available for sale by the seller. In all the negotiations, the obtained bargains are Pareto efficient (as seen on the Pareto front), provided the flexibility for overconsumption (given by f_Ni) is close to 0. Negotiation Strategies Comparison Now that the negotiation protocol and agent modelling have been successfully implemented, the different negotiation strategies for the buyers and the sellers are hereby compared. To carry out this study, only the case with enough energy to meet the seller's and buyers' demands was considered. Each negotiation strategy was implemented for the seller. For each implementation (strategy), the seller bilaterally negotiated with each of the three buyers (that is, one at a time), where all three buyers also implemented the three strategies in turn. Outcomes from all the negotiations were averaged in order to compare the different strategies. For negotiations with a zero intelligence agent (based on random selection of packages), 100 negotiations were simulated, and the utility was averaged over the 100 outcomes in order to obtain statistically reliable results. Figure 6 shows the average value of all obtained outcomes for the seller as a function of its strategy. When the seller used the expert agent strategy, the average outcome from its negotiations with the three buyers (each implementing each one of the three proposed strategies) was 57%, while the linear heuristic strategy yielded 54% for the seller. The average outcomes a buyer can expect when negotiating with a seller are also shown. These were computed by averaging the outcomes from the three buyers implementing each one of the proposed strategies while negotiating with the seller, who also implemented each one of the strategies. LH agents were observed to be more suitable for buyers, but this trend is mostly due to the outcomes of negotiations with zero intelligence (ZI) agents. 
When excluding the negotiations with ZI agents, the LH and expert agent strategies give similar results. Thus, linear heuristic and expert agents provide similar results in terms of utility outcomes, even if LH seems more efficient for sellers than for buyers in the proposed context. Moreover, the number of iterations (rounds) needed to reach a deal is of high interest, in order to reduce the computational cost of the automated negotiation. As seen in Figure 6, the computational cost to reach an agreement is in general higher for an expert agent than for a linear heuristic strategy. This is due to the lower speed at which the expert agent enlarges its round-k feasible set (the set of feasible outcomes for round k) between two consecutive negotiation rounds. Discussion The implementation of the proposed agent modelling and framework in a rural area case of India demonstrated the benefits it could provide to a community, by allowing community members to access cheaper electricity and increase their energy consumption while providing extra revenue to local prosumers. Indeed, consumers with price constraints were able to negotiate low prices for their electricity supply, whereas consumers attaching more importance to their electricity consumption than to the cost of their energy supply were able to increase their energy access. Within a community, it is most likely that several profiles of consumers will be represented. Thus, utilizing automated one-to-one negotiations, prosumers will be able to make their energy surplus available to the community, where some consumers will mostly negotiate the price in order to meet their required consumption, while others will be able to take advantage of the negotiations to increase their energy consumption until all the available energy has been used. Three negotiation strategies were also proposed and assessed within this study. The Boulware-based and linear heuristic-based negotiation strategies obtained similar outcomes, although the linear heuristic strategy seems to be more suitable for buyer agents in our analysis. Furthermore, linear heuristic strategies provide similar outcomes in a shorter time, which makes them an interesting strategy to implement in real-life applications. Hence, automated negotiations using such a framework would also allow end users to take an active role in electricity retail markets in developed countries, as recommended by European policies. Indeed, [55] specifies the evolution of future European electricity markets, which should give citizens ownership of their electricity supply by providing them with the possibility to trade their flexibility. It also specifies the principles of citizen energy communities, which would be strongly enabled by automated negotiations, as these would allow community members to automatically access local electricity supply/demand at better costs. Indeed, automated negotiation at the peer-to-peer level is mostly applicable either in developing countries, especially where the grid is unreliable, or within a community in countries with a strong grid, as peers do not currently have the required power to enter the wholesale market [56]. At a community level, collective self-consumption and the emergence of the sharing economy for smart grids [57][58][59] provide a way for citizens to promote investment in DER while reducing their electricity bill [60], especially in Europe, where P2P trading is likely to be supported by European policies [59]. 
Hence, automated negotiations as presented in this paper would provide a replicable framework that would allow citizen communities to maximize the quantity of self-consumed renewable energy at a low cost. Similarly, in other regions of the world, such as the United States of America or the United Kingdom, electricity market authorities such as OFGEM are drawing up the principles of future supply markets [61] and discussing the possibility of a supplier hub [62], which could allow automated negotiations between producers and consumers. Hence, regulations are currently breaking down the barriers to flexibility, allowing flexible load owners to contribute to the electricity markets [63]. However, several obstacles to local energy markets still exist, such as the recent network charging evolution in the UK with the Targeted Charging Review [64], or the trend towards half-hourly (or quarter-hourly, in Belgium for example) settlement, which would require negotiations to be carried out at the aggregator level instead of the end-user level, as such short settlement times would require very accurate forecasts, which are only achievable with aggregated loads. Conclusions This paper proposed a new automated negotiation framework for energy, including agent modelling, which demonstrated interesting benefits for rural areas in developing countries. The novelty of this framework mainly lies in the modelling of the negotiating agents (buyers and sellers) and their strategies in the energy domain. It allows these agents to configure their own preferences for energy or price and agree on a price and quantities of electricity for every considered period. This paper also presented the use of different negotiation strategies in the context of P2P energy markets. Using a case study specific to rural areas of India as an example, the experimental analysis demonstrated the benefits that the agent modelling and the negotiation protocol could provide to a community by increasing access to low-cost electricity, while increasing local producers' benefits. Hence, our work shows that P2P local energy markets using automated negotiations can be an important vector to support the economic development of these rural areas by improving access to electricity for consumers who would otherwise be cut off when the central power grid experiences power cuts. Future research will include many-to-many negotiations, where multiple buyers negotiate with multiple sellers in order to converge towards an equilibrium for the whole community, which will be more representative of real-life scenarios. Future research will also focus on the impacts of automated negotiations on voltage fluctuation and frequency regulation, by implementing the proposed model in a community with an islanded, constrained grid.
Pectoral Fin of the Megamouth Shark: Skeletal and Muscular Systems, Skin Histology, and Functional Morphology This is the first known report on the skeletal and muscular systems, and the skin histology, of the pectoral fin of the rare planktivorous megamouth shark Megachasma pelagios. The pectoral fin is characterized by three features: 1) a large number of segments in the radial cartilages; 2) highly elastic pectoral fin skin; and 3) a vertically-rotated hinge joint at the pectoral fin base. These features suggest that the pectoral fin of the megamouth shark is remarkably flexible and mobile, and that this flexibility and mobility enhance dynamic lift control, thus allowing for stable swimming at slow speeds. The flexibility and mobility of the megamouth shark pectoral fin contrasts with that of fast-swimming sharks, such as Isurus oxyrhinchus and Lamna ditropis, in which the pectoral fin is stiff and relatively immobile. Introduction The first specimen of a megamouth shark (Megachasma pelagios) was captured off Hawaii in 1976. It was described as a new genus and species in its own family within the order Lamniformes, and it became the third known planktivorous shark, along with basking and whale sharks [1]. Subsequent studies have explored the skeletal and muscular system of this rare shark, mainly in the cranial region, to clarify its phylogenetic relationships with other sharks [2,3] and to investigate its feeding ecology [3,4,5]. The present study reports on the gross anatomy of the pectoral fin of the megamouth shark, and the histological features of the pectoral fin skin. The skeletal and muscular systems of pectoral fins have been used for taxonomic investigations in some shark lineages [6,7,8,9]. However, the anatomy of the pectoral fin has only been examined in 5 out of 15 species of extant lamniforms: Carcharodon carcharias, Carcharias taurus, Lamna nasus, Mitsukurina owstoni, and Pseudocarcharias kamoharai [10,11,12,13,14]. In addition to its use in phylogenetic investigations, the pectoral fin anatomy is useful for understanding and predicting the styles of locomotion of fishes whose ecologies are poorly known. Recent applications of a flow visualization technique and high-speed digital imaging have revealed that sharks regulate lift by movements of their pectoral fins [15,16], which strongly suggests that the mobility of the pectoral fins is closely linked to lift control. The purpose of the present study was twofold: 1) to describe the skeletal and muscular systems of the megamouth shark pectoral fin; and 2) to discuss the locomotory style of the megamouth shark based on the study of its fin anatomy. Specimens Examined Anatomical and histological features were examined in two specimens. The first specimen is a female with a total length (TL) of 3.7 m and a body weight (BW) of 361 kg (OCA-P 20110301). It was incidentally caught on 9 July 2007 during a purse seine fishing operation off Ibaraki prefecture, Japan. The specimen was kept frozen at Okinawa Churaumi Aquarium (Okinawa, Japan) prior to anatomical analysis. Following external observations, on 2 March 2011, the left pectoral fin and girdle were removed at Okinawa Churaumi Aquarium and fixed in 20% formalin at Tokai University (Shimizu, Japan) for later anatomical examination. The second specimen is a female of 5.4 m TL that was incidentally caught in a set net off Shizuoka prefecture, Japan, on 24 June 2011 (OCA-P 20111217). This specimen was also kept frozen at Okinawa Churaumi Aquarium prior to anatomical analysis. 
The specimen was partly dissected (including peeling off its pectoral fin skin) and fixed in 20% formalin at Okinawa Churaumi Aquarium. Since 20 March 2012 the specimen has been exhibited at the Main Rest House, Churaumi Plaza, next to Okinawa Churaumi Aquarium. All the specimens of fishes analyzed in this study were accidentally caught by fishermen and are housed at Okinawa Churaumi Aquarium. The aquarium issued permits allowing examination of the specimens as part of this study. Anatomical Terminology The anatomical terminology of the endoskeletal and muscular systems follows that of [8]. Radial Cartilages of the Pectoral Fin Elasmobranchs have cartilaginous skeletons whereby individual pieces of the skeleton are composed of uncalcified cartilage with a mineralized outer layer. The pectoral fins are connected to the pectoral girdle proximally, with the skeletal support for the fins consisting of three larger elements proximally (propterygium, metapterygium, and mesopterygium), articulating distally with a series of radial cartilages (Fig. 1). We measured two variables associated with the radial cartilages of the megamouth shark: 1) the radial number, which is defined as the number of radial cartilages; this number is generally stable during ontogeny and highly uniform within shark species [7]; and 2) the segment number, which is defined as the number of segments in a single radial cartilage. As the pectoral fin consists of multiple cartilages with variable numbers of segments (with the number roughly dependent on the position of the cartilage in the fin), the number of segments in the longest radial cartilage was selected for comparative analysis. Histology of the Pectoral Fin Skin Skin samples were taken from three sites on the left pectoral fin of megamouth shark specimen OCA-P 20110301 (Fig. 2A). The tissue samples were soaked in 5% formic acid for one week to decalcify the skin denticles, then embedded in paraffin and sectioned transversely (sections of 8-10 µm thickness) using a rotary microtome. The sections were stained with hematoxylin and eosin (HE) for general histological examination, or using the Elastica van Gieson (EVG) staining protocol to examine the extracellular matrices of the fibrillar connective tissue. In this protocol, the elastic fibers are stained purple-black by Weigert's resorcin-fuchsin solution. The stained specimens were examined using an Olympus light microscope (40-400× magnification), and photographed using an attached digital camera. For comparison, transverse sections of the pectoral fin skin of Lamna ditropis (HUMZ117738) and Isurus oxyrhinchus (HUMZ136626) were collected and stained using the same methods as those described above. External Morphology The external morphological measurements of OCA-P 20110301 are presented in Table 1. Pectoral fin is basally broad, distally elongated, tapering, and falcate. Anterior margin is slightly convex. Caudodistal tip is broadly rounded. Posterior margin is nearly straight (Fig. 2A, B). Dorsal surface of pectoral fin is black with a white blotch at the tip. Ventral surface is white with a conspicuously black anterior margin. On both sides of the skin surface of the pectoral fin, there are lines of naked area without dermal denticles, forming a complex network ('naked-line network', sensu [17]). 
The networks, which were previously described from the corner of the mouth and the buccal skin, may function to increase skin elasticity [4,17]; they are best developed around the base of the pectoral fin, and the skin around this area is highly stretchable (TT and ST, pers. obs.; also see Fig. 3). Skeletal System Basal cartilages. The proximal region of the pectoral fin consists of three basal cartilages, which, from the anterior to the posterior of the fin, are the propterygium (pt), mesopterygium (ms), and metapterygium (mt) cartilages, as in other sharks (Fig. 2). The basal cartilages are proximally twisted by approximately 90° such that the surfaces that are ventrally directed distally are directed anteriorly at their proximal ends (Fig. 4). The surfaces of the proximal ends of the basal cartilages are smooth, convex, and dorsoventrally elongated; these surfaces fit the articular condyle (Fig. 4A, B) located at the posterior edge of the pectoral girdle, thus forming a hinge joint (Figs. 4B, 5). The rotation axis of the hinge joint is nearly perpendicular to the plane surface of the pectoral fin. The basal cartilages support 23 radials. Radials. Twenty-three radial cartilages, which are shaped as slender rods, support the distal region of the pectoral fin. The cartilages, which are oriented towards the anterior pectoral margin, extend 88% of the length of the fin; these features show that the fin is a 'plesodic-type' fin (sensu [7]) (Fig. 2). The posterior radial cartilages have 2-6 branches at their distal ends, which distinguishes them from the anterior radial cartilages. Each radial cartilage is segmented into as many as 12 small pieces. Adjacent radial segments belonging to the same radial are articulated with one another by connective tissues. 1) Levator pectoralis. The lpe extends posterolaterally and covers the dorsal side of the pectoral fin. It originates from a wide shallow medial concavity in the pectoral girdle, and inserts onto the dorsal sides of the basal cartilages and radials. This muscle extends distally to approximately one-third of the length of the radials. 2) Depressor pectoralis. The dpe is subdivided into two components, which have different origins and insertions. The anterior component (dpe 1) originates from a concavity on the lateral side of the pectoral girdle, and inserts on the anterior edge of the propterygium cartilage (pr); this component is referred to as the "protractor muscle" in some references [18,19]. The posterior component (dpe 2) originates from the ventral side of the pectoral girdle and inserts on the ventral surfaces of the pectoral radials. This muscle extends distally to approximately one-third of the length of the radials. The anterior component (dpe 1) may function to rotate the pectoral fin anteriorly, whereas the posterior component (dpe 2) may function to depress the pectoral fin (Fig. 6B). Radial Number and Segment Number The radial number of the megamouth shark (23) is within the range of the number in other lamniform species (16-34) (Table 2). In contrast, the segment number of the megamouth shark (12) is much larger than that of other lamniform species (2-5). Skin Histology The dermis of the megamouth shark consists of papillary and reticular regions (Fig. 7). The former is contiguous with the epidermis and is composed of a thin layer of densely packed cells. The latter mainly consists of an extracellular matrix of collagen and elastic fibers. 
A high density of elastic fibers was observed in the reticular region, and collagen fiber bundles were abundant only in the middle of this region. The collagen fiber bundles run proximodistally, are gathered together, and vary in size. At site p3, ceratotrichia were observed and the fiber bundles are small and uncommon. The exterior and interior of the reticular region comprise loose connective tissue composed of collagen and elastic fibers. The dermis of Lamna ditropis and Isurus oxyrhinchus, as in that of the megamouth shark, consists of papillary and reticular regions (Fig. 8). The papillary region is composed of a thin layer of densely packed cells. The reticular region is composed mainly of dense collagen fiber bundles. Unlike the megamouth shark, there are no elastic fibers in the reticular region. The three layers of the reticular region exhibit distinct angles and sizes of fiber bundles. The exterior layer consists of small fiber bundles, whereas the middle layer comprises large fiber bundles and ceratotrichia. The ceratotrichia are located in the lower part of the layer and are elongate proximodistally at the same angle as the fiber bundles. The fiber bundles of the interior layer are orientated in a different direction to those in the middle layer. At site p3 in L. ditropis, the structure of the three layers is obscure, but several ceratotrichia are evident in the interior. Discussion The pectoral fin of the megamouth shark is characterized by the following three features: 1. Large number of radial segments. The number of radial segments in the megamouth shark pectoral fin (12) is so far the greatest among sharks. Because each radial segment in sharks is distally articulated with an adjacent radial segment by connective tissues, the large number of radial segments in the megamouth shark may enhance fin flexibility. 2. High skin elasticity. The skin of the pectoral fin of the megamouth shark is unique in having dense naked-skin networks on its surface. This network may function to increase skin elasticity [4,17]. The elasticity of the pectoral fin skin is also reflected in the high density of elastic fibers in the reticular region, which is in marked contrast to the abundance of densely packed collagen fiber bundles in Lamna ditropis and Isurus oxyrhinchus. This suggests that the pectoral fin skin of the megamouth shark is suppler than that of the pectoral fins of the other two sharks. 3. A hinge joint between the pectoral fin and girdle with a rotation axis nearly perpendicular to the fin surface. This structure indicates that the megamouth shark has the capacity for a high degree of rotation of its pectoral fin forward and backward. Such movement is largely restricted in most sharks, where the pectoral fins typically articulate with the coracoid via an anteroposteriorly broad articulation surface, largely limiting pectoral fin motion to the dorsoventral plane (Fig. 5) [8,20]. Based on these morphological and histological features, we conclude that the pectoral fin of the megamouth shark is highly flexible and mobile. This hypothesis is confirmed by video and still images of a live specimen, in which the pectoral fin is tightly bent, with its ventral side facing upwards (see time segments 00:10-00:22 and 00:36-00:38, in the ARKive video at http://www. arkive.org/megamouth-shark/megachasma-pelagios/video-00. html), and in which the right and left pectoral fins are rotated asymmetrically (ARKive, http://www.arkive.org/megamouthshark/megachasma-pelagios/image-G5954.html). 
The flexible and mobile pectoral fin of the megamouth shark is probably associated with a slow-swimming planktivorous ecology. According to acoustic tracking data, the megamouth shark swims at a speed of 1.5-2.1 km h^-1 (mean = 1.8 km h^-1, representing a speed of approximately 0.1 body lengths sec^-1). This swimming speed is one of the slowest known among elasmobranchs [21]. In general, slow-swimming fishes expend more energy for controlling body posture and depth than do fast-swimming fishes, as locomotory stability is drastically reduced at slow swimming speeds [22]. A flow visualization technique (digital particle image velocimetry, DPIV) and high-speed digital video imaging have shown that sharks actively regulate lift by movements of the posterior portions of the pectoral fins, or by depression/elevation of the pectoral fins [15,16]. It is therefore likely that the pectoral fins of the megamouth shark are highly specialized for controlling body posture and depth at slow swimming speeds. The flexibility and mobility of the pectoral fins of the megamouth shark strongly contrast with the stiff and relatively immobile pectoral fins of cruising specialists such as Isurus oxyrhinchus and Lamna ditropis. Flexible and mobile pectoral fins may provide high controllability for body posture and depth, although the motions cause large hydrodynamic disturbances [22]. The restricted motions of the pectoral fins of I. oxyrhinchus and L. ditropis may reduce self-generated hydrodynamic disturbances, allowing energetically efficient swimming at high speeds [22,23]. [Fragment of the Figure 7 caption: (F) Enlarged view of a longitudinal section of collagen fiber bundles from site p1 with EVG staining. Scale bars in A, B, D, and E = 0.2 mm, and in C and F = 0.05 mm; ce, ceratotrichia; cfb, collagen fiber bundles; dd, dermal denticles; ed, epidermis; EF, elastic fibers; mp, melanophores; pl, papillary layer; rle, exterior reticular layer; rli, interior reticular layer; rlm, middle reticular layer; rc, radial cartilage; *, artificial spaces produced by the cutting procedure. doi:10.1371/journal.pone.0086205.g007] [Fragment of the Acknowledgments: ...Oceanic and Atmospheric Administration, for comments that improved the manuscript. Mason N. Dean, Max Planck Institute of Colloids and Interfaces, and an anonymous reviewer provided constructive criticism and informative suggestions that greatly improved this paper.]
The effect of osteotomy technique (flat-cut vs wedge-cut Weil) on pain relief and complication incidence following surgical treatment for metatarsalgia in a private metropolitan clinic: protocol for a randomised controlled trial Background Weil osteotomies are performed to surgically treat metatarsalgia, by shortening the metatarsal via either a single distal oblique cut with translation of the metatarsal head (flat-cut) or through the removal of a slice of bone (wedge-cut). The wedge-cut technique purportedly has functional and mechanical advantages over the flat-cut procedure; however, in vivo data and quality of evidence are currently lacking. This study aims to investigate whether wedge-cut Weil osteotomy compared to traditional flat-cut Weil is associated with increased pain relief and fewer complications up to 12 months postoperatively. Methods Patient, surgical and clinical data will be collected for 80 consecutive consenting patients electing to undergo surgical treatment of propulsive metatarsalgia in a randomised control trial, embedded within a clinical registry. The primary outcome is patient-reported pain as assessed by the Foot and Ankle Outcome Score (FAOS) - Pain subscale, and the secondary outcome is the incidence of procedure-specific complications at up to 12 months postoperatively. The groups will be randomised using a central computer-based simple randomisation system, with a 1:1 allocation without blocking and allocation concealment. A mixed-effects analysis of covariance will be used to assess the primary outcome, with confounders factored into the model. A binary logistic regression will be used to assess the secondary outcome in a multivariable model containing the same confounders. Discussion To the best of the authors’ knowledge, the trial will be the first to examine the clinical efficacy of the wedge-cut Weil osteotomy compared to the flat-cut technique with a prospective, randomised control design. Trial registration Australian New Zealand Clinical Trials Registry ACTRN12620001251910. Registered on 23 November 2020. Supplementary Information The online version contains supplementary material available at 10.1186/s13063-022-06591-4. Background and rationale {6a} Propulsive metatarsalgia is defined as pain under one or more metatarsal heads during the "third rocker" phase (30 to 60%) of the gait cycle, from heel lift-off to the end of propulsion by the great toe [1]. Weil osteotomy is a reliable distal oblique osteotomy procedure performed to treat lesser metatarsal deformities and alleviate metatarsalgia, by shortening the metatarsal in the transverse plane. The osteotomy is performed with or without adjunct procedures for toe correction, which may include proximal interphalangeal arthrodesis [2], fusion/arthrodesis of the first metatarsophalangeal (MTP) joint [3] or hallux valgus correction [4]. While Weil osteotomies are commonly performed, complications include floating toe, joint stiffness, avascular necrosis, transfer of metatarsalgia to subsequent toes and plantar flexion of the metatarsal [1,5,6]. Traditional flat-cut Weil osteotomies involve a single distal oblique incision in the dorsal aspect of the metatarsal head, with translation of the head by 5-10 mm. Wedge-cut osteotomy is a modification of the flatcut Weil procedure and includes a second incision to remove a slice of bone [1]. 
A wedge is created either with parallel sides or with an apex on the plantar aspect of the metatarsal, and the procedure is purported to reduce plantar translation of the metatarsal head, maintain the MTP centre of rotation and improve intrinsic muscle function as demonstrated on sawbone models [7]. However, there is limited in vivo data for the clinical efficacy of this technique, and the quality of evidence is lacking [7,8]. The proposed study has therefore been designed to investigate whether the modified wedge-cut Weil osteotomy compared to the flat-cut technique, with or without required adjunct procedures, is associated with increased pain relief and fewer complications at up to 12 months postoperatively in patients presenting with propulsive metatarsalgia.

Objectives {7}
The primary objective of this study is to determine, in patients electing to undergo surgical intervention for propulsive metatarsalgia, whether wedge-cut osteotomy compared to flat-cut Weil (with or without adjunct procedures) is associated with increased pain relief during activities of daily living up to 12 months postoperatively. The secondary objective of this study is to determine whether a lower incidence of procedure-specific complications (floating toe or stiffness) at up to 12 months of follow-up is associated with the wedge-cut procedure, compared to the flat-cut Weil.

Trial design {8}
The proposed study is designed as a randomised controlled superiority trial with two parallel groups comprising 1:1 allocation.

Study setting {9}
Patient recruitment and data collection will be performed at the private hospitals and consulting clinics for the participating surgeons (New South Wales, Australia). Recruitment is expected to be complete by April 2023. Patients will be randomly allocated to either intervention using a central computer-based randomisation system prior to surgery. The trial is embedded in a clinical registry with patient, surgical and clinical data captured as part of routine clinical care. Figure 1 outlines the schedule of enrolment, interventions and assessment for the trial, and Fig. 2 outlines the study design for the trial described above. The trial is scheduled to commence in January 2021 and is due to be completed by June 2023. The study has been registered on an online registry for clinical trials (Australian New Zealand Clinical Trials Registry (ANZCTR)), where the study site and sponsor details are listed (ACTRN12620001251910). The study will be reported as per the CONSORT guidelines [7,8].

Eligibility criteria {10}
Patients presenting to the participating surgeons for treatment of propulsive metatarsalgia who are over 18 years of age and eligible for surgical intervention (having failed a minimum of six months of conservative interventions) will be eligible for recruitment. Patients will need to be registered in the SOFARI registry and provide additional written informed consent for participation in the present randomised controlled trial. Patients who have declined or revoked consent for use of clinical data for research (for the SOFARI registry or the current trial) or are unable to provide informed consent will be excluded.
Patients will additionally be excluded if they have had recent (< 6 months) prior surgery to the affected forefoot, if they require additional procedures involving the soft tissues of the foot-ankle complex, have had their surgery booking cancelled with the participating surgeon or have been judged by the participating surgeons as incapable to complete patient reported outcome measures (PROMs) as required for the study due to psychological impairment or insufficient English language capacity (Fig. 3). Who will take informed consent? {26a} The participating surgeons will discuss the study with prospective participants and obtain written informed consent prior to the participant undergoing surgery. Additional consent provisions for collection and use of participant data and biological specimens {26b} Potential uses of the datasets generated and/or analysed during the trial may include subsequent analysis of subgroups for investigation of surgical/management and patient-centred outcomes for internal clinic purposes. The research data may form the basis of potential future studies, including additional research sites within allied health networks or collaborations with other research groups for the biomechanical evaluation of patients. Where appropriate, open-access principles will be followed to enable collaborative works with aligned groups, where the interest is academic and not for commercial uses, under appropriate licencing arrangements of deidentified data. Any future studies requesting the use of this data will seek appropriate amendment of the relevant ethical approval, and further consent will be obtained or waived on application to the HREC, as appropriate. Explanation for the choice of comparators {6b} The two interventions are used by the participating surgeons for the surgical management of metatarsalgia. Intervention description {11a} Eligible patients will be randomised in equal proportions between those receiving the flat-cut Weil osteotomy (the "control" intervention, group A) and the wedge-cut Weil osteotomy (the "experimental" intervention, group B). Group A will receive an active control intervention. Preoperative planning will be undertaken to determine the amount of metatarsal shortening required to restore the parabola of the affected metatarsal. The amount of shortening will be determined by recreating the Maestro criteria for metatarsal parabola [9]. At the time of surgery, the patient will be prepared in the standard fashion, placed supine on the operating table with the operated limb prepared and draped to create a sterile area. General intravenous sedation will be administered in addition to prophylactic antibiotics, and an ipsilateral ankle tourniquet is applied and inflated prior to the first incision. A semi-elliptical transverse incision will be used to expose each metatarsophalangeal joint to be operated, spanning between the tendons of the extensor muscles without cutting or lengthening the tendons. Additional care will be taken to avoid release of other soft tissues such as the medial and lateral collateral ligaments. The metatarsal and phalangeal joint surfaces will be inspected and additional resection performed to remove plantar capsule adhesions where required [10]. The metatarsal bone will be dissected, retracting other soft tissues, including the joint capsule to aid in visualisation and access to the metatarsal neck. A rasp will be used to remove the soft tissue on either side of the osteotomy location. 
The dorsal zone of cartilage inferior to the metatarsal head will be identified and used as a landmark for the osteotomy. Neurovascular bundles in the intertarsal space will be retracted and a microsagittal motorised saw held parallel to the weight-bearing surface of the foot moved from the dorsal to plantar cortices of the metatarsal to dissect the head from the neck in preparation for repositioning. The plantar plate will be inspected and repaired as required and the metatarsal head shifted proximally to achieve the desired shortening, avoiding medial or lateral shifting. The osteotomy will be stabilised with a Kirschner wire placed across the osteotomy (along the metatarsal shaft) to guide screw fixation. Once the screw is fully engaged, the K-wire will be removed. The overhanging bone ledge will be resected with a bone saw or rasp, followed by release of the soft tissues under retraction and closure of the joint capsule. The remaining soft tissues will be closed in layers and dressed with strips and bandages. The patient will be moved to recovery and discharged as per normal practice. The preparation of the patient and surgery site for group B will be as described for group A (the control intervention). The surgical procedure will be modified by adding a second osteotomy proximally to remove a rectangular wedge of bone for the purposes of shortening the metatarsal and repositioning the floating segment of the metatarsal head proximally to the metatarsal neck [11]. Postoperative management for both groups will include elevation of the foot for the initial 72 h after surgery and stitches removed in 10-14 days. Analgesics and antibiotics will be prescribed for the first 14 days after surgery, with weight bearing as tolerated.

Criteria for discontinuing or modifying allocated interventions {11b}
If the participant requests a particular intervention, they will be unenrolled from the trial, and their data will not be included in later analyses.

Strategies to improve adherence to interventions {11c}
Adherence to the randomised intervention allocation will be assessed by comparing the patient osteotomy group allocation as identified in the operation report, to the trial master sheet with the allocation information (a minimal illustration of this check is sketched below). Follow-up and PROM compliance will be encouraged through follow-up reminders, as part of the SOFARI registry processes. The primary outcome will be collected electronically, with controls in place to prevent partial completion of the questionnaire.

Relevant concomitant care permitted or prohibited during the trial {11d}
If required, the osteotomies (both control and interventional) are performed with adjunct procedures for toe correction, based on the individual case and patient's anatomy:
• Proximal interphalangeal arthrodesis, which involves the longitudinal insertion of a Kirschner wire or pin to fuse the most proximal joint of the lesser toe [2]
• Fusion/arthrodesis of the first MTP joint, which involves the longitudinal insertion of a Kirschner wire or pin to fuse the most proximal joint of the first ray [3]
• Hallux valgus correction, which is a correcting osteotomy performed to realign the first ray in the presence of first tarsometatarsal joint hypermobility [4]
No specific postoperative prohibitions above the standard of care will be communicated to participants during the trial.
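As a concrete illustration of the adherence check described under item {11c} above, the following minimal sketch compares the technique recorded in each operation report against the allocation held in the trial master sheet. The file layout and column names are hypothetical and do not reflect the registry's actual schema.

```python
import pandas as pd

# Hypothetical extracts: the master sheet holding allocations and the technique
# transcribed from each operation report. Column names are illustrative only.
master = pd.DataFrame({"patient_id": [101, 102, 103],
                       "allocated":  ["flat-cut", "wedge-cut", "flat-cut"]})
op_reports = pd.DataFrame({"patient_id": [101, 102, 103],
                           "performed":  ["flat-cut", "wedge-cut", "wedge-cut"]})

merged = master.merge(op_reports, on="patient_id", how="left")
merged["adherent"] = merged["allocated"] == merged["performed"]
print(merged.loc[~merged["adherent"]])                        # deviations to follow up
print(f"adherence rate: {merged['adherent'].mean():.0%}")     # 67% in this toy example
```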
Provisions for post-trial care {30} Patients enrolled in the study are covered by indemnity for negligent harm through standard medical indemnity insurance. The intervention is routine and well-practised, and the insurance covers non-negligent harm associated with the protocol. This includes cover for additional health care and compensation or damages, whether awarded voluntarily or by claims pursued through the courts. Incidences judged to arise from negligence (including those due to major protocol violations) will not be covered by study insurance policies. Outcomes {12} Primary outcome measures The primary outcome of this trial will be the Foot and Ankle Outcome Score (FAOS) -Pain subscale, collected at up to 12 monthly postoperatively. The FAOS was developed as a foot and ankle-specific PROM assessment analogous to the Knee injury and Osteoarthritis Outcome Score (KOOS), with content validity confirmed with 213 patients with ankle instability [12]. The FAOS consists of 5 subscales: pain, other symptoms, function in daily living (ADL), function in sport and recreation (Sport(Rec)), and foot-and ankle-related quality of life (QOL). Patients rate the questions on a 5-point Likert scale, with 0 representing no symptoms/problems and 4 representing extreme symptoms. The FAOS has been used in patients with lateral ankle instability, Achilles tendonitis and plantar fasciitis, with reliability confirmed in instability patients [12] and responsiveness confirmed in patients with Achilles tendinosis [13] and hallux rigidus [14]. Given the interventions performed in this trial are to treat metatarsalgia, a condition characterised by pain during weight-bearing activity, particularly noticeable during the push-off phase of the gait cycle, the pain subscale of the FAOS Questionnaire was selected as the most clinically relevant patient-reported measure for the purposes of measuring the primary outcome of this trial. One study reported an MCID of 15.3 points (95% confidence interval 10-20.6) [15], with a difference between groups equal to half this (7.7) deemed sufficient to guide future decision-making with respect to technique selection. For analysis purposes, a mixed-effects model will be used, which will aggregate scores from all responders and report mean and standard deviation (SD) associated with different combinations of model predictors, such as intervention group, sex and adjunct procedures. Secondary outcome measures The secondary outcome of this trial will be the incidence of procedure-specific complications at up to 12 months postoperatively. The most commonly reported complication of the Weil osteotomy is a floating toe, with an overall occurrence of 36% [6]. A floating toe is defined as a toe that is not in contact with the floor under weight-bearing conditions [16]. It will be recorded using a weight-bearing coronal view static photograph and manually rated. Recurrence is reported in 15% of the cases, followed by transfer metatarsalgia (7%) and delayed union, non-union and malunion collectively reported in 3% of the cases [6]. Metatarsal osteotomy union will be defined using cortical continuity [17], with patients failing to show any healing (nonunion) and insufficient cortical continuity (delayed) by the 12-month review, identified in the trial data. Participant timeline {13} The time schedule for participant recruitment, interventions and assessment in this trial will follow the standard clinical pathway embedded within the SOFARI registry (Fig. 3). 
The recruitment, allocation, surgery and 12 months follow-up time points relevant to the current trial are described in Fig. 1.

Sample size {14}
The sample size required for the primary outcome was established to provide adequate power to detect half an MCID difference (15.3/2) in the FAOS Pain score between the control flat-cut Weil and experimental wedge-cut groups, with an average estimated baseline standard deviation of 20.8. The MCID and baseline SD were derived from a previous study [15], and 0.5 MCID was determined to be a clinically important effect that would influence future clinical decision-making regarding technique selection. A mixed-effects model (analysis of covariance (ANCOVA)) was selected as the most appropriate to answer the question posed, with power (1 − β) of 0.9 and α of 0.05, with an allowance for a dropout rate of 10%. The required sample size was determined using G*Power [18] to be N = 80 patients, with patient age at surgery, sex, body mass index, single/multiple procedures and adjunct procedures included as model covariates. The sample size required for the secondary outcome was established to provide adequate power to detect a 15% reduction in complication incidence from a baseline of 60% incidence. The reduction in incidence was deemed to be the minimal amount of improvement that would influence future decision-making regarding technique selection in this patient population. A one-sided test within a logistic regression model was selected within G*Power, with the R² for other predictors in the model (patient age at surgery, sex, body mass index, single/multiple procedures and adjunct procedures) set to 0.1. The required sample size was determined to be N = 123 procedures.

Recruitment {15}
The principal investigator for the study performed a total of 127 Weil osteotomies in 87 patients over the 12 months preceding the study. To meet the required sample size of N = 80 patients with N = 123 procedures to assess the primary and secondary outcomes, respectively, the expected recruitment period will extend over 12 months. All prospective patients eligible for recruitment will be approached to participate in the study.

Assignment of interventions: allocation
Sequence generation {16a}
Participants will be randomly assigned to receive either a flat-cut Weil osteotomy (control, group A) or wedge-cut osteotomy (experimental, group B) procedure using a central computer-based simple randomisation system, with a 1:1 allocation without blocking (a minimal illustration is sketched below). The random allocation sequence will be embedded within the clinical registry via a randomisation algorithm (Matlab 2018b, Mathworks Inc., USA). The allocation will be communicated electronically to the treating surgeon via an allocation code comprising nondescript terminology, whereby the allocation cannot be inferred from the label to other study personnel.

Concealment mechanism {16b}
The system will not release the allocation code to the treating surgeon until the patient has been recruited into the trial (on the day of scheduled surgery) to maintain allocation concealment.

Implementation {16c}
The senior data engineer will be responsible for generating the allocation sequence and assigning interventions.

Assignment of interventions: blinding
Who will be blinded {17a}
Blinding is not feasible for the treating surgeon or the senior data engineer responsible for coding the randomisation sequence, who will have access to the allocation code and allocation code key.
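The sequence-generation step described under item {16a} can be illustrated with the following minimal sketch of unblocked 1:1 simple randomisation using nondescript allocation codes. The trial itself uses a central Matlab-based system embedded in the registry, so the codes, function name and key handling shown here are purely hypothetical.

```python
import secrets

# Mapping from nondescript allocation codes to techniques; in the trial this key
# is held only by the senior data engineer. Codes and labels are hypothetical.
ALLOCATION_KEY = {"K7": "flat-cut Weil (group A)", "M3": "wedge-cut Weil (group B)"}

def allocate_next_participant():
    """Draw one of the two codes with probability 1/2 each (simple, unblocked)."""
    return secrets.choice(list(ALLOCATION_KEY))

# Released to the treating surgeon only on the day of scheduled surgery, so the
# allocation remains concealed until recruitment is confirmed.
print(allocate_next_participant())   # e.g. "M3" -- meaningless without the key
```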
The senior data engineer will not have further involvement in the study beyond the allocation and data coding. Trial participants will be blinded to the intervention assignment and will be counselled at the time of recruitment regarding both surgical techniques, supplementing the information provided on the participant information sheet and consent form (PISCF). Surgical assistants, practice staff, clinical outcome assessors members of the research team and study statisticians involved in the data analysis will be blinded to the allocation, with access to only the allocation code which will comprise a label where grouping is not able to be inferred. Postoperative clinical evaluations and assessment of serious adverse events will be conducted by a surgical fellow that is hosted within the clinic on a rotating basis for 6 months. In cases where the surgical fellow assisting with surgery also performs the postoperative evaluation, masking will occur through access to only the allocation coding described above. While the treatment evaluator will have access to postoperative radiographs of the operated structures within the foot, it is expected the intervention allocation cannot be derived from these images. Patients returning for evaluation at the 12-month followup will be assessed by a fellow not involved in the surgery. Procedure for unblinding if needed {17b} In situations where an adverse event or complication has occurred such that reoperation or other invasive intervention is deemed necessary, it may be necessary to unmask the intervention allocation to plan appropriate treatment. Requests to unmask will be made electronically via the treating surgeon and logged within the study masterlist. Requests will be managed by the senior engineer with access to the allocation key. Data collection and management Plans for assessment and collection of outcomes {18a} Patient personal and medical data is routinely collected and stored in the participating surgeons' practice management systems (Bluechip, MedicalDirector, Australia and Genie, Genie Solutions, Australia). Patient demographic data, comorbidities and full patient history relating to the history and onset of the foot and ankle condition will be recorded in the practice management system during patient consultation. Radiological reports collected routinely as part of diagnosis, surgical planning and postoperative followup, full description of diagnosis, mode of treatment and details of nonoperative and surgical interventions, and timing of treatments will also be recorded in the practice management system during patient consultation. Intraoperative data, patient-reported outcomes data and findings from clinical examination will be electronically entered into the SOFARI registry via web-based forms (Google Suite, Google, USA) by the clinical research nurse, research team, surgeon or the patient. Intraoperative (surgical technique), patient-reported (FAOS-Pain) and clinical outcome (complications) data collected during this trial will additionally be linked to the SOFARI registry via the same modes of data entry. Data collection will continue until the minimum sample size requirements for the study are met with complete records established. Plans to promote participant retention and complete follow-up {18b} Once a participant is enrolled in the study, they will be contacted at 3, 6 and 12 months postoperatively for PROMs follow-up data collection as per the existing SOFARI registry procedures. 
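For reference, the FAOS subscales collected at these follow-ups are conventionally transformed from the raw 0–4 Likert items to a 0–100 score (100 = no symptoms), following the KOOS-style scoring rule. The protocol above specifies only the 0–4 items, so the transformation in this sketch is an assumption rather than a stated trial procedure.

```python
# Sketch: convert raw FAOS-Pain item responses (0 = none ... 4 = extreme) into a
# 0-100 subscale score. The "100 - 25 * mean" rule is the usual KOOS/FAOS scoring
# convention and is assumed here; it is not specified in the protocol text above.
def faos_subscale_score(item_responses):
    """Return a 0-100 subscale score (100 = no symptoms) from 0-4 Likert items."""
    if not item_responses:
        raise ValueError("no item responses provided")
    mean_raw = sum(item_responses) / len(item_responses)   # 0 (best) to 4 (worst)
    return 100.0 - 25.0 * mean_raw

# A patient rating every pain item as "mild" (1) scores 75; a change of 15.3
# points on this scale corresponds to the MCID quoted for the primary outcome.
print(faos_subscale_score([1] * 9))   # -> 75.0
```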
The routine contact will facilitate participant retention up until the 12-month follow-up. Follow-up reminders will be administered from the clinic, and the electronic questionnaires can be completed remotely at the patient's convenience. There is an additional opportunity to collect PROM data when the patient visits the clinic for their 12-month follow-up, if they have not completed the electronic forms. In cases where the intervention protocol is not followed, the participant will be removed from the trial and the randomisation slot filled by the next participant recruited into the trial. Data management {19} Data sources Patient personal and medical data is routinely collected at the consulting clinics and entered into the practice management systems (Bluechip, Medi-calDirector, Australia and Genie, Genie Solutions, Australia). The data is collated with PROMs and organised into the Sydney Orthopaedic Foot and Ankle Research Institute (SOFARI) Registry established within the surgeon's practice (ACTRN12620000331932), and will form the primary source for patient, clinical, intraoperative and postoperative data for the current trial. Ethical approval for use of the practice registry for research was provided by the New South Wales/Victoria branch of the Ramsay Health Care HREC (HREC approval number 2020-007). Registry data is hosted in a secure, HIPAAcompliant cloud-based database (Google Suite, Google, USA). Data entry All data collected in this trial will be entered electronically via web-based forms (Google Suite, Google, USA) by either the investigators, research team or participants. The research team will be responsible for the organisation of the data within a HIPAA-compliant database environment (Google Suite, Google, USA) and performing quality assurance checks on the consolidated dataset. Clinic data management Identifiable data (collected only as necessary for treatment and management of the health services) will be stored permanently on the practice management software, as is standard clinical practice. The data will be stored on a secure server within the consulting rooms, with restricted access. Data linkage processes Identifiable personal and medical information collected from patients will be stored and kept indefinitely within the database to link patient records to patient information from other sources (e.g. electronic medical records within the practice management systems, clinic notes or radiology records). The research team will access the practice management software in order to conduct quality assurance audits which will involve access to identifiable data, within the environment of the password-protected server. Web-based forms will be provided to patients to capture validated PROMs using identifiers to link the form responses to the rest of the registry dataset. No identifiable information (name, date of birth, address, contact information) will be included or requested of patients in the forms. Standard operating procedures for data entry will be available to individuals who will require access to the database via a web browser interface to streamline and control data-entry processes. All members with access to the data will have signed a non-disclosure agreement. Access to the trial data will be password protected. Data will be stored in a HIPAA-compliant environment (Google Suite, Google, USA) prior to further processing locally by the research team. The research team will retain access to the data for the duration of the trial. 
At the termination of the trial, any study-relevant data will be transferred to the clinic servers and deleted from the research team's environment. The data gathered will be retained on the existing password-protected servers of the clinic for future reference, publications and potential future studies. Supplementary information within clinical notes in external databases, such as those at the hospitals where the treatment or surgery is performed or radiology services, will not be integrated directly with the study database. This data will be requested from these systems on an as-needed basis by the surgeons or practice managers, matching patients by name, date of birth and treatment and will be manually transcribed into the practice management system and subsequently into the database. Confidentiality {27} Confidentiality and privacy of patient information in the trial will be protected through the execution of nondisclosure agreements between all involved parties, prohibiting them from sharing identifiable information externally. Identifiable trial data is kept electronically on a secure server at the study clinic sites, and de-identified study data will be stored in a secure, HIPAA-compliant online database with access restricted to individual staff members responsible for handling the data. No identifiable data from the study will be externally shared without consent from the patients. All identifying information such as name, date of birth, email or phone will be removed from any data prior to the transfer of this data to sites not listed in the document. Patient identification numbers will be used as a substitute for identifiable data, and only the investigators will be able to re-identify patients. Patients will not be identified in any publication or presentation resulting from the trial. All investigators will have unrestricted access to the cleaned data sets. Study data sets will be housed and secured as per the data management procedures outlined in this protocol. Plans for collection, laboratory evaluation and storage of biological specimens for genetic or molecular analysis in this trial/future use {33} Not applicable; no biological specimens will be collected during this study. Statistical methods for primary and secondary outcomes {20a} Principles An analyst blinded to treatment allocation will perform the primary and secondary analyses, by comparing the intervention group to the control group. Data will be analysed following intention-to-treat principles (i.e. participants will be analysed in the group to which they were randomised to). Data normality of baseline characteristics and process measures will be evaluated by visual inspection of histograms. Continuous variables will be presented as means and standard deviations for normally distributed variables: median, minimum, maximum and interquartile range for non-normally distributed variables. Frequencies and percentages will be used to summarise categorical variables. Percentages will be calculated using the number of participants for whom data are available as the denominator. Alpha will be set at 0.05 and effect sizes as described in the sample size calculation will be considered of interest. Data integrity All data points for the study core dataset will be retrieved electronically, so comparing to original paper records will not be necessary. Data integrity will be defined in the context of this study as completeness, consistency and validity. 
An error rate of < 3% will be considered acceptable across the total of fields assessed. Patients who are otherwise lost to follow-up will be included in the analysis under an intention-to-treat framework. Where appropriate, imputation will be used to provide estimates of outcomes for patients lost to follow-up. To mitigate the effects of loss to follow-up on the analysis, the sample size factored in an estimated dropout rate of 10% to ensure adequate power for analyses requiring listwise deletion of missing data.

Evaluation of demographics and baseline characteristics
Baseline characteristics will be presented in a table stratified by treatment group. Hypothesis testing of baseline characteristics between the groups will not be performed, in accordance with the recommendations of the CONSORT statement [7]. Figure 4 describes the prognostic factors which will be treated as potential confounders of the effect of the intervention and included in the models used to analyse primary and secondary outcomes. These confounders were selected based on the available literature on pain ratings in metatarsalgia [19–23].

Primary outcome analysis
The primary outcome of the trial is postoperative pain at 12 months follow-up represented by the pain subscale of the FAOS. A mixed-effects ANCOVA will be used to test for differences between the groups with the following factors and covariates included in the model: intervention allocation, together with the confounders described above (patient age at surgery, sex, body mass index, single/multiple procedures and adjunct procedures). The overall model fit will be assessed with partial eta² and coefficients (β) with 95% confidence limits will describe the strength and direction of relationships between factors and the postoperative FAOS-Pain score. Post hoc comparisons with Dunnett tests (against control) will be used to compare osteotomy groups. Cohen's d will be used to report the effect size of differences between the groups. A Cohen's d exceeding 0.36 will be deemed clinically significant.

Secondary outcome analysis
The secondary outcome of the trial will be the incidence of procedure-specific complications by the end of the follow-up period (12 months). A binary logistic regression will be used to assess the effect of intervention allocation on complication incidence in a multivariable model containing the confounders described above. Alpha will be set at 0.05 and partial eta² used to assess model fit. Odds ratios with 95% confidence limits will describe the strength and direction of relationships between model predictors and the probability of having a complication compared to not having one.

Interim analyses {21b}
No interim analysis has been planned that would lead to early termination based on the results of the study. The study will be monitored for adverse events on a continual basis and reported regularly to the investigators, and an unacceptably high incidence in either treatment group (50% greater than reported in the equivalent literature) may be reviewed by the investigators and cause the trial to be terminated.

Methods for additional analyses (e.g. subgroup analyses) {20b}
Depending on the overall incidence of procedure-specific complications (assuming that the incidence is > 5% overall), a Cox regression will be employed to compare the groups for time to event (complication detected) between day 1 postoperatively and the 12-month evaluation, with right censoring for participants that complete the study follow-up with no complications reported. Hazard ratios with 95% confidence intervals will be used to describe the strength and direction of the relationship between intervention allocation and time to event.
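To make the two analysis models described above concrete, the following sketch shows how they might be specified with the statsmodels formula interface. The synthetic data, variable names (faos_pain_12m, group, bmi and so on) and the plain OLS/logit calls are illustrative assumptions only; the trial's actual analysis uses a mixed-effects ANCOVA, Dunnett post hoc comparisons and the registry's own field names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the registry export: one row per procedure.
rng = np.random.default_rng(0)
n = 80
trial_df = pd.DataFrame({
    "group": rng.choice(["flat_cut", "wedge_cut"], n),
    "age_at_surgery": rng.normal(60, 10, n),
    "sex": rng.choice(["F", "M"], n),
    "bmi": rng.normal(28, 4, n),
    "multiple_procedures": rng.integers(0, 2, n),
    "adjunct_procedures": rng.integers(0, 2, n),
})
trial_df["faos_pain_12m"] = rng.normal(70, 20, n).clip(0, 100)
trial_df["complication"] = rng.integers(0, 2, n)

confounders = "age_at_surgery + sex + bmi + multiple_procedures + adjunct_procedures"

# Primary outcome: ANCOVA-style linear model for the 12-month FAOS-Pain score;
# the coefficient on 'group' is the confounder-adjusted between-group difference.
ancova = smf.ols(f"faos_pain_12m ~ group + {confounders}", data=trial_df).fit()
print(ancova.params.filter(like="group"))
print(ancova.conf_int().filter(like="group", axis=0))

# Secondary outcome: binary logistic regression for procedure-specific complications.
logit = smf.logit(f"complication ~ group + {confounders}", data=trial_df).fit(disp=False)
print(np.exp(logit.params.filter(like="group")))   # odds ratio, wedge-cut vs flat-cut
```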
Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c}
The nature of the dataset does not lend itself to multiple imputation in the first instance. If patients have not returned the form at follow-up, despite multiple reminders, a multiple imputation approach will be employed based on published guidelines for analysis of trial data [24]. Secondary outcomes are unlikely to have missing data due to the nature of clinical follow-up and the acute nature of procedure-specific complications (i.e. all patients are reviewed in person or by phone within the timeframe that complications would be observed). In the event that missing secondary outcomes are apparent, analyses will be performed to assess the randomness of the missing data and to determine whether complete case analysis is appropriate. In the event that the missing data does not conform to a random pattern, sensitivity analysis using worst-case, best-case scenarios for missing data will be constructed and reported [24].

Plans to give access to the full protocol, participant-level data and statistical code {31c}
There are no restrictions in the public dissemination of results within the current scope of this protocol. The results will primarily be communicated via peer-reviewed publications and abstracts. Planned publications include a manuscript summarising findings from the trial to be published at the conclusion of the study. Patients will not be identified in any publication or presentation resulting from the trial. Patients are able to request the results of the trial and any resulting publications by contacting the clinic. The results from the study may also be informally shared by the investigators with peer networks in internal meetings, and be made available upon request to the ethics board and other health/regulatory authorities. Supplementing this protocol manuscript, the statistical code used in the trial will be uploaded to GitHub (GitHub Inc, USA) to enable peer review where appropriate. The deidentified study dataset will be made available upon request, keeping in line with open access principles to enable collaboration, where the interest is academic and not for commercial uses. Only deidentified data will be shared if required, under the appropriate licencing arrangements.

Composition of the coordinating centre and trial steering committee {5d}
A governance and steering committee with the principal investigators and the research team will maintain the governance of the trial. This committee will meet on a quarterly basis to discuss trial progress.

Composition of the data monitoring committee, its role and reporting structure {21a}
The scope of the proposed protocol does not warrant a data monitoring committee due to the nature of the interventions and the outcomes selected for comparison between groups.

Adverse event reporting and harms {22}
An adverse event is defined as any deviation from the normal recovery trajectory requiring alteration to the patient's care or medical intervention. Adverse events will be collected after the subject has provided consent and enrolled in the trial. A serious adverse event (SAE) for this study is any untoward medical occurrence within the follow-up period of the study that results in any of the following: life-threatening condition, severe or permanent disability, prolonged hospitalisation or a significant hazard.
Investigators will determine the relatedness of an event to the trial based on a temporal relationship to the intervention, as well as whether the event is unexpected or unexplained given the subject's clinical course, previous medical conditions and concomitant medications. A separate form will be set up for adverse reporting, to be completed by the evaluator or the trial monitoring team based on updated clinical notes for the patient within the practice management system and electronically linked to the study database. Frequency and plans for auditing trial conduct {23} Quality assurance of the trial will be maintained through auditing and reporting of data completeness, consistency and validity: completeness (all data fields required are filled in) will be monitored in real time and compared to the requirements outlined in the core dataset; consistency (data responses match the rules specified within the core dataset) will be examined by assessing continuous variables for outliers and categorical variables for consistency with pre-specified responses within the core dataset; validity (data is accurate and correct) will be assessed in a subsample of patients (10%) by spot-checking the information held in the study masterlist relative to the original source in the practice management system. Discrepancies will be investigated and rectified (where possible) by practice staff or the research team. Audit reporting will be provided to the registry steering committee and communicated to the principal investigators on a quarterly basis. Plans for communicating important protocol amendments to relevant parties (e.g. trial participants, ethical committees) {25} Ethics approval for this study was provided through the NSW/VIC branch of the Ramsay Health Care Human Research Ethics Committee (HREC approval number 2020-007). Any modifications to the protocol which may impact the conduct of the study, patient safety or significant administrative aspects (e.g. changes to study sponsorship) will require a formal amendment to the protocol. Such amendments will be agreed upon by the investigators and approved by the NSW/VIC Ramsay Health HREC prior to implementation. Modifications will also be reflected in the public trial registry record on the ANZCTR. Dissemination plans {31a} Trial results There are no restrictions in the public dissemination of results within the current scope of this protocol. The results will primarily be communicated via peer-reviewed publications and abstracts. Planned publications include a manuscript summarising findings from the trial to be published at the conclusion of the study. Patients will not be identified in any publication or presentation that will be published as a result of the trial. Patients are able to request the results of the trial and any resulting publications by contacting the clinic. The results from the study may also be informally shared by the investigators with peer networks in internal meetings, and be made available upon request to the ethics board and other health/regulatory authorities. Authorship For any resulting publications, authorship eligibility for anyone involved in the study design, management and conduct of the trial will be determined in accordance with the International Committee of Medical Journal Editors (ICMJE) authorship guidelines [25]. Anyone not meeting the requirements for authorship (minimum 10% contribution) will be listed in the acknowledgements section of the manuscript. 
Discussion
The trial will provide clinical data pertaining to the efficacy of the modified wedge-cut Weil osteotomy procedure compared to the traditional flat-cut method. Flat-cut Weil osteotomy is a routinely performed procedure with complications frequently reported [1,5,6]. The wedge-cut modification purportedly has functional and mechanical advantages over the flat-cut technique; however, in vivo data and quality of evidence are currently lacking [11,26]. A previous study reported satisfactory performance on the American Orthopaedic Foot and Ankle Society (AOFAS) forefoot score (with 65% of patients reporting no pain and 23% reporting mild pain), but higher-than-expected complication rates for transfer metatarsalgia, floating toes, infection and wound healing complications in patients receiving segmental resection (wedge-cut) osteotomies [11,26]. The study, however, was a retrospective case series, lacking baseline data and with a short (minimum 6 months) follow-up. To the best of the authors' knowledge, the current trial will be the first to directly examine the clinical efficacy of the wedge-cut Weil osteotomy within a randomised controlled design. Quality assurance of the trial will be maintained through regular auditing and reporting on data completeness, consistency and validity. The trial has been designed in accordance with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 guidelines, in order to enhance transparency and facilitate appraisal of its scientific merit, ethical considerations, and safety aspects [27].
The Entropy Gain of Linear Time-Invariant Filters and Some of its Implications

We study the increase in per-sample differential entropy rate of random sequences and processes after being passed through a non minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For such filters and random processes, it has long been established that this entropy gain, Gain(G), equals the integral of log|G(exp(jw))|. It is also known that, if the first sample of the impulse response of G has unit-magnitude, then this integral equals the sum of the logarithm of the magnitudes of the non-minimum phase zeros of G, say B(G). In this note, we begin by showing that existing time-domain proofs of these results, which consider finite length-n sequences and then let n tend to infinity, have neglected significant mathematical terms and, therefore, are inaccurate. We discuss some of the implications of this oversight when considering random processes. We then present a rigorous time-domain analysis of the entropy gain of LTI filters for random processes. In particular, we show that the entropy gain between equal-length input and output sequences is upper bounded by B(G) and arises if and only if there exists an output additive disturbance with finite differential entropy (no matter how small) or a random initial state. Instead, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals B(G) without the need for additional exogenous random signals. We illustrate some of the consequences of these results by presenting their implications in three different problems. Specifically: a simple derivation of the rate-distortion function for Gaussian non-stationary sources, conditions for equality in an information inequality of importance in networked control problems, and an observation on the capacity of auto-regressive Gaussian channels with feedback.

I. INTRODUCTION

In his seminal 1948 paper [1], Claude Shannon gave a formula for the increase in differential entropy per degree of freedom that a continuous-time, band-limited random process u(t) experiences after passing through a linear time-invariant (LTI) continuous-time filter. In this formula, if the input process is bandlimited to a frequency range [0, B], has differential entropy rate (per degree of freedom) h̄(u), and the LTI filter has frequency response G(jω), then the resulting differential entropy rate of the output process y(t) is given by [1, Theorem 14]
$$\bar{h}(y) = \bar{h}(u) + \frac{1}{B}\int_{0}^{B} \log\left|G(j2\pi f)\right| \, df . \qquad (1)$$
The last term on the right-hand side (RHS) of (1) can be understood as the entropy gain (entropy amplification or entropy boost) introduced by the filter G(jω). Shannon proved this result by arguing that an LTI filter can be seen as a linear operator that selectively scales its input signal along infinitely many frequencies, each of them representing an orthogonal component of the source. The result is then obtained by writing down the determinant of the Jacobian of this operator as the product of the frequency response of the filter over n frequency bands, applying logarithm and then taking the limit as the number of frequency components tends to infinity.
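Shannon's limiting argument can be mimicked numerically: split [0, B] into n sub-bands, treat the filter as a constant gain on each, and average log|G|. The sketch below uses an illustrative first-order low-pass response with arbitrary numbers (not an example taken from [1]) and shows the average converging to the integral term in (1) as n grows; for this attenuating filter the "gain" is negative, i.e., an entropy loss.

```python
import numpy as np

B = 1000.0            # band limit in Hz (arbitrary example value)
fc = 100.0            # cut-off of an example first-order low-pass filter
G = lambda f: 1.0 / np.sqrt(1.0 + (f / fc) ** 2)    # |G(j 2*pi*f)|

for n in (4, 16, 256, 65536):
    f = (np.arange(n) + 0.5) * B / n                # mid-points of the n sub-bands
    print(n, np.mean(np.log(G(f))))                 # -> (1/B) * integral_0^B log|G| df
```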
An analogous result can be obtained for discrete-time input {u(k)} and output {y(k)} processes, and an LTI discrete-time filter G(z) by relating them to their continuous-time counterparts, which yields
$$\bar{h}(\{y(k)\}) = \bar{h}(\{u(k)\}) + \frac{1}{2\pi}\int_{-\pi}^{\pi} \log\left|G(e^{j\omega})\right| \, d\omega , \qquad (2)$$
where h̄({u(k)}) ≜ lim_{n→∞} (1/n) h(u(1), u(2), . . . , u(n)) is the differential entropy rate of the process {u(k)}. Of course the same formula can also be obtained by applying the frequency-domain proof technique that Shannon followed in his derivation of (1).

The rightmost term in (2), which corresponds to the entropy gain of G(z), can be related to the structure of this filter. It is well known that if G is causal with a rational transfer function G(z) such that lim_{z→∞} |G(z)| = 1 (i.e., such that the first sample of its impulse response has unit magnitude), then
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \log\left|G(e^{j\omega})\right| \, d\omega = \sum_{\rho_i \notin \mathbb{D}} \log|\rho_i| , \qquad (3)$$
where {ρ_i} are the zeros of G(z) and D ≜ {z ∈ C : |z| < 1} is the open unit disk on the complex plane. This provides a straightforward way to evaluate the entropy gain of a given LTI filter with rational transfer function G(z). In addition, (3) shows that, if lim_{z→∞} |G(z)| = 1, then such gain is greater than zero if and only if G(z) has zeros outside D. A filter with the latter property is said to be non-minimum phase (NMP); conversely, a filter with all its zeros inside D is said to be minimum phase (MP) [2]. NMP filters appear naturally in various applications. For instance, any unstable LTI system stabilized via linear feedback control will yield transfer functions which are NMP [2], [3]. Additionally, NMP zeros also appear when the discrete-time ZOH (zero-order hold) equivalent of a plant whose number of poles exceeds its number of zeros by at least 2 is obtained, as the sampling rate increases [4, Lemma 5.2]. On the other hand, all linear-phase filters, which are specially suited for audio and image-processing applications, are NMP [5], [6]. The same is true for any all-pass filter, which is an important building block in signal processing applications [5], [7].

An alternative approach for obtaining the entropy gain of LTI filters is to work in the time domain; obtain y_1^n ≜ {y_1, y_2, . . . , y_n} as a function of u_1^n, for every n ∈ N, and evaluate the limit lim_{n→∞} (1/n)(h(y_1^n) − h(u_1^n)). More precisely, for a filter G with impulse response g_0^∞, we can write
$$\mathbf{y}_1^n = \mathbf{G}_n \, \mathbf{u}_1^n \triangleq \begin{bmatrix} g_0 & 0 & \cdots & 0 \\ g_1 & g_0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ g_{n-1} & g_{n-2} & \cdots & g_0 \end{bmatrix} \mathbf{u}_1^n , \qquad (4)$$
where y_1^n ≜ [y_1 y_2 · · · y_n]^T and the random vector u_1^n is defined likewise. From this, it is clear that
$$h(\mathbf{y}_1^n) = h(\mathbf{u}_1^n) + \log\left|\det(\mathbf{G}_n)\right| , \qquad (5)$$
where det(G_n) (or simply det G_n) stands for the determinant of G_n. Thus,
$$\frac{1}{n}\big(h(\mathbf{y}_1^n) - h(\mathbf{u}_1^n)\big) = \frac{1}{n}\log\left|\det(\mathbf{G}_n)\right| = \log|g_0| , \qquad (6)$$
regardless of whether G(z) (i.e., the polynomial g_0 + g_1 z^{-1} + · · ·) has zeros with magnitude greater than one; in particular, this gain is zero whenever |g_0| = 1, which clearly contradicts (2) and (3).

Perhaps surprisingly, the above contradiction not only has been overlooked in previous works (such as [8], [9]), but the time-domain formulation in the form of (4) has been utilized as a means to prove or disprove (2) (see, for example, the reasoning in [10, p. 568]).

A reason for why the contradiction between (2), (3) and (6) arises can be obtained from the analysis developed in [11] for an LTI system P within a noisy feedback loop, as the one depicted in Fig. 1.

Figure 1. Left: LTI system P within a noisy feedback loop. Right: equivalent system when the feedback channel is noiseless and has unit gain.

In this scheme, C represents a causal feedback channel which combines the output of P with an exogenous (noise) random process c_1^∞ to generate its output.
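The contradiction just described can be checked numerically for a simple NMP example (a sketch for illustration, not code from the paper). With G(z) = 1 + 2z^{-1}, which has a zero of magnitude 2, the frequency-domain expression in (2)-(3) evaluates to log 2, whereas the per-sample time-domain quantity in (5)-(6), driven by det(G_n) = g_0^n = 1, is exactly zero.

```python
import numpy as np
from scipy.linalg import toeplitz

g = [1.0, 2.0]                               # impulse response g_0 = 1, g_1 = 2

# Frequency-domain entropy gain: (1/2*pi) * integral of log|G(e^{jw})| dw.
w = np.linspace(-np.pi, np.pi, 200001)
Gw = g[0] + g[1] * np.exp(-1j * w)
freq_gain = np.trapz(np.log(np.abs(Gw)), w) / (2 * np.pi)
print(freq_gain, np.log(2.0))                # both ~0.6931: the log of the NMP zero magnitude

# Time-domain "gain" for equal-length input/output blocks: (1/n) log|det G_n| = log|g_0| = 0.
n = 50
col = np.r_[g, np.zeros(n - len(g))]
G_n = toeplitz(col, np.zeros(n))             # lower-triangular Toeplitz convolution matrix
print(np.linalg.slogdet(G_n)[1] / n)         # -> 0.0
```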
The process c ∞ 1 is assumed independent of the initial state of P , represented by the random vector x 0 , which has finite differential entropy. For this system, it is shown in [11,Theorem 4.2] that with equality if w is a deterministic function of v. Furthermore, it is shown in [12,Lemma 3.2 ] that if |h(x 0 )| < ∞ and the steady state variance of system P remains asymptotically bounded as k → ∞, then where {p i } are the poles of P . Thus, for the (simplest) case in which w = v, the output y ∞ 1 is the result of filtering u ∞ 1 by a filter G = 1 1+P (as shown in Fig. 1-right), and the resulting entropy rate of {y(k)} will exceed that of {u(k)} only if there is a random initial state with bounded differential entropy (see (7a)). Moreover, under the latter conditions, [11,Lemma 4.3] implies that if G(z) is stable and |h(x 0 )| < ∞, then this entropy gain will be lower bounded by the right-hand side (RHS) of (3), which is greater than zero if and only if G is NMP. However, the result obtained in (7b) does not provide conditions under which the equality in the latter equation holds. Additional results and intuition related to this problem can be obtained from in [13]. There it is shown that if {y(k)} is a two-sided Gaussian stationary random process generated by a state-space recursion of the form for some A ∈ C M ×M , g ∈ C M ×1 , h ∈ C M ×1 , with unit-variance Gaussian i.i.d. innovations u ∞ −∞ , then its entropy rate will be exactly 1 2 log(2π e) (i.e., the differential entropy rate of u ∞ −∞ ) plus the RHS of (3) (with {ρ i } now being the eigenvalues of A outside the unit circle). However, as noted in [13], if the same system with zero (or deterministic) initial state is excited by a one-sided infinite Gaussian i.i.d. process u ∞ 1 with unit sample variance, then the (asymptotic) entropy rate of the output process y ∞ 1 is just 1 2 log(2π e) (i.e., there is no entropy gain). Moreover, it is also shown that if v ℓ 1 is a Gaussian random sequence with positive-definite covariance matrix and ℓ ≥ M , then the entropy rate of y ∞ 1 + v ℓ 1 also exceeds that of u ∞ 1 by the RHS of (3). This suggests that for an LTI system which admits a statespace representation of the form (8), the entropy gain for a single-sided Gaussian i.i.d. input is zero, and that the entropy gain from the input to the output-plus-disturbance is (3), for any Gaussian disturbance of length M with positive definite covariance matrix (no matter how small this covariance matrix may be). The previous analysis suggests that it is the absence of a random initial state or a random additive output disturbance that makes the time-domain formulation (4) yield a zero entropy gain. But, how would the addition of such finite-energy exogenous random variables to (4) actually produce an increase in the differential entropy rate which asymptotically equals the RHS of (3)? In a broader sense, it is not clear from the results mentioned above what the necessary and sufficient conditions are under which an entropy gain equal to the RHS of (3) arises (the analysis in [13] provides only a set of sufficient conditions and relies on second-order statistics and Gaussian innovations to derive the results previously described). Another important observation to be made is the following: it is well known that the entropy gain introduced by a linear mapping is independent of the input statistics [1]. 
However, there is no reason to assume such independence when this entropy gain arises as the result of adding a random signal to the input of the mapping, i.e., when the mapping by itself does not produce the entropy gain. Hence, it remains to characterize the largest set of input statistics which yield an entropy gain, and the magnitude of this gain. The first part of this paper provides answers to these questions. In particular, in Section III explain how and when the entropy gain arises (in the situations described above), starting with input and output sequences of finite length, in a time-domain analysis similar to (4), and then taking the limit as the length tends to infinity. In Section IV it is shown that, in the output-plus-disturbance scenario, the entropy gain is at most the RHS of (3). We show that, for a broad class of input processes (not necessarily Gaussian or stationary), this maximum entropy gain is reached only when the disturbance has bounded differential entropy and its length is at least equal to the number of non-minimum phase zeros of the filter. We provide upper and lower bounds on the entropy gain if the latter condition is not met. A similar result is shown to hold when there is a random initial state in the system (with finite differential entropy). In addition, in Section IV we study the entropy gain between the entire output sequence that a filter yields as response to a shorter input sequence (in Section VI). In this case, however, it is necessary to consider a new definition for differential entropy, named effective differential entropy. Here we show that an effective entropy gain equal to the RHS of (3) is obtained provided the input has finite differential entropy rate, even when there is no random initial state or output disturbance. In the second part of this paper (SectionVII) we apply the conclusions obtained in the first part to three problems, namely, networked control, the rate-distortion function for non-stationary Gaussian sources, and the Gaussian channel capacity with feedback. In particular, we show that equality holds in (7b) for the feedback system in Fig. 1-left under very general conditions (even when the channel C is noisy). For the problem of finding the quadratic rate-distortion function for non-stationary auto-regressive Gaussian sources, previously solved in [14]- [16], we provide a simpler proof based upon the results we derive in the first part. This proof extends the result stated in [15], [16] to a broader class of non-stationary sources. For the feedback Gaussian capacity problem, we show that capacity results based on using a short random sequence as channel input and relying on a feedback filter which boosts the entropy rate of the end-to-end channel noise (such as the one proposed in [13]), crucially depend upon the complete absence of any additional disturbance anywhere in the system. Specifically, we show that the information rate of such capacity-achieving schemes drops to zero in the presence of any such additional disturbance. As a consequence, the relevance of characterizing the robust (i.e., in the presence of disturbances) feedback capacity of Gaussian channels, which appears to be a fairly unexplored problem, becomes evident. Finally, the main conclusions of this work are summarized in Section VIII. Except where present, all proofs are presented in the appendix. A. Notation For any LTI system G, the transfer function G(z) corresponds to the z-transform of the impulse response g 0 , g 1 , . . 
., i.e., G(z) = ∞ i=0 g i z −i . For a transfer function G(z), we denote by G n ∈ R n×n the lower triangular Toeplitz matrix having [g 0 · · · g n−1 ] T as its first column. We write x n 1 as a shorthand for the sequence {x 1 , . . . , x n } and, when convenient, we write x n 1 in vector form as x 1 where () T denotes transposition. Random scalars (vectors) are denoted using non-italic characters, such as x (non-italic and boldface characters, such as x). For matrices we use upper-case boldface symbols, such as A. We write λ i (A) to the note the i-th smallest-magnitude eigenvalue of A. If A n ∈ C n×n , then Figure 2. Linear, causal, stable and time-invariant system G with input and output processes, initial state and output disturbance. A i,j denotes the entry in the intersection between the i-th row and the j-th column. We write [A n ] i1 i2 , with i 1 ≤ i 2 ≤ n, to refer to the matrix formed by selecting the rows corresponds to the square sub-matrix along the main diagonal of A, with its top-left and bottom-right corners on A m1,m1 and A m2,m2 , respectively. A diagonal matrix whose entries are the elements in D is denoted as diag D II. PROBLEM DEFINITION AND ASSUMPTIONS Consider the discrete-time system depicted in Fig. 2. In this setup, the input u ∞ 1 is a random process and the block G is a causal, linear and time-invariant system with random initial state vector x 0 and random output disturbance z ∞ 1 . In vector notation, whereȳ 1 n is the natural response of G to the initial state x 0 . We make the following further assumptions about G and the signals around it: Assumption 1. G(z) is a causal, stable and rational transfer function of finite order, whose impulse response g 0 , g 1 , . . . satisfies g 0 = 1. It is worth noting that there is no loss of generality in considering g 0 = 1, since otherwise one can write G(z) as G ′ (z) = g 0 · G(z)/g 0 , and thus the entropy gain introduced by G ′ (z) would be log g 0 plus the entropy gain due to G(z)/g 0 , which has an impulse response where the first sample equals 1. Assumption 3. The disturbance z ∞ 1 is independent of u ∞ 1 and belongs to a κ-dimensional linear subspace, for some finite κ ∈ N. This subspace is spanned by the κ orthonormal columns of a matrix Φ ∈ R |N|×κ (where |N| stands for the countably infinite size of N), such that |h(Φ T z 1 ∞ )| < ∞. Equivalently, z 1 ∞ = Φs 1 κ , where the random vector s 1 κ Φ T z 1 ∞ has finite differential entropy and is independent of u 1 ∞ . As anticipated in the Introduction, we are interested in characterizing the entropy gain G of G in the presence (or absence) of the random inputs u ∞ 1 , x 0 , z ∞ 1 , denoted by In the next section we provide geometrical insight into the behaviour of G(G, x 0 , u ∞ 1 , z ∞ 1 ) for the situation where there is a random output disturbance and no random initial state. A formal and precise treatment of this scenario is then presented in Section IV. The other scenarios are considered in the subsequent sections. III. GEOMETRIC INTERPRETATION In this section we provide an intuitive geometric interpretation of how and when the entropy gain defined in (10) arises. This understanding will justify the introduction of the notion of an entropybalanced random process (in Definition 1 below), which will be shown to play a key role in this and in related problems. A. An Illustrative Example Suppose for the moment that G in Fig. 2 is an FIR filter with impulse response g 0 = 1, g 1 = 2, g i = 0, ∀i ≥ 2. 
Notice that this choice yields G(z) = (z − 2)/z, and thus G(z) has one non-minimum phase zero, at z = 2. The associated matrix G n for n = 3 is , whose determinant is clearly one (indeed, all its eigenvalues are 1). Hence, as discussed in the introduction, , and thus G 3 (and G n in general) does not introduce an entropy gain by itself. However, an interesting phenomenon becomes evident by looking at the singular-value decomposition (SVD) of G 3 , given by where Q 3 and R 3 are unitary matrices and D 3 diag{d 1 , d 2 , d 3 }. In this case, D 3 = diag{0.19394, 1.90321, 2.70928}, and thus one of the singular values of G 3 is much smaller than the others (although the product of all singular values yields 1, as expected). As will be shown in Section IV, for a stable G(z) such uneven distribution of singular values arises only when G(z) has non-minimum phase zeros. The effect of this can be visualized by looking at the image of the cube volume), then G 3 u 1 3 would distribute uniformly over the unit-volume parallelepiped depicted in Fig. 3, and hence h( 1 3 , and with Φ ∈ R 3×1 , the effect would be to "thicken" the support over which the resulting random vector y 1 3 = G 3 u 1 3 + z 1 3 is distributed, along the direction pointed by Φ. If Φ is aligned with the direction along which the support of G 3 u 1 3 is thinnest (given by q 3,1 , the first row of Q 3 ), then the resulting support would have its volume significantly increased, which can be associated with a large increase in the differential entropy of y 1 3 with respect to u 1 3 . Indeed, a relatively small variance of s and an approximately aligned Φ would still produce a significant entropy gain. The above example suggests that the entropy gain from u 1 n to y 1 n appears as a combination of two factors. The first of these is the uneven way in which the random vector G n u 1 n is distributed over R n . The second factor is the alignment of the disturbance vector z 1 n with respect to the span of the subset {q n,i } i∈Ωn of columns of Q n , associated with smallest singular values of G n , indexed by the elements in the set Ω n . As we shall discuss in the next section, if G has m non-minimum phase zeros, then, as n increases, there will be m singular values of G n going to zero exponentially. Since the product of the singular values of G n equals 1 for all n, it follows that i / ∈Ωn d n,i must grow exponentially with n, where d n,i is the i-th diagonal entry of D n . This implies that G n u 1 n expands with n along the span of {q n,i } i / ∈Ωn , compensating its shrinkage along the span of {q n,i } i∈Ωn , thus keeping h(G n u 1 n ) = h(u 1 n ) for all n. Thus, as n grows, any small disturbance distributed over the span of {q n,i } i∈Ωn , added to G n u 1 n , will keep the support of the resulting distribution from shrinking along this subspace. Consequently, the expansion of G n u 1 n with n along the span of {q n,i } i / ∈Ωn is no longer compensated, yielding an entropy increase proportional to log( i / ∈Ωn d n,i ). The above analysis allows one to anticipate a situation in which no entropy gain would take place even when some singular values of G n tend to zero as n → ∞. Since the increase in entropy is made possible by the fact that, as n grows, the support of the distribution of G n u 1 n shrinks along the span of {q n,i } i∈Ωn , no such entropy gain should arise if the support of the distribution of the input u 1 n expands accordingly along the directions pointed by the rows {r n,i } i∈Ωn of R n . 
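The singular-value behaviour just described is easy to reproduce numerically before moving to the next example. The sketch below is only a minimal check, assuming Python with numpy and the two-tap impulse response g_0 = 1, g_1 = 2 of this example: it builds the lower-triangular Toeplitz matrix G_n, confirms that det G_n = 1, recovers the three singular values 0.19394, 1.90321, 2.70928 quoted above for n = 3, and shows the smallest singular value of G_n shrinking at roughly the rate 2^{-n} set by the non-minimum phase zero at z = 2, while the product of all singular values stays equal to 1.

```python
import numpy as np

def toeplitz_lower(g, n):
    """Lower-triangular Toeplitz matrix G_n with first column (g_0, ..., g_{n-1})."""
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            if i - j < len(g):
                G[i, j] = g[i - j]
    return G

g = [1.0, 2.0]                       # impulse response of G(z) = (z - 2)/z
G3 = toeplitz_lower(g, 3)
d3 = np.linalg.svd(G3, compute_uv=False)
print(np.linalg.det(G3), np.sort(d3))   # det = 1; singular values ~ 0.194, 1.903, 2.709

# The smallest singular value shrinks roughly like |rho|^{-n} = 2^{-n},
# while the product of all singular values remains |det G_n| = 1.
for n in (5, 10, 20, 40):
    dn = np.linalg.svd(toeplitz_lower(g, n), compute_uv=False)
    print(n, dn.min(), dn.min() * 2.0 ** n)
```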
An example of such situation can be easily constructed as follows: Let G(z) in Fig. 2 have nonminimum phase zeros and suppose that u ∞ 1 is generated as G −1ũ∞ 1 , whereũ ∞ 1 is an i.i.d. random process with bounded entropy rate. Since the determinant of G −1 n equals 1 for all n, we have that h(u 1 n ) = h(ũ 1 n ), for all n. On the other hand, y 1 n = G n G −1 nũ 1 n + z 1 n =ũ 1 n + z 1 n . Since z 1 n = [Φ] 1 n s 1 κ for some finite κ (recall Assumption 3), it is easy to show that lim n→∞ The preceding discussion reveals that the entropy gain produced by G in the situation shown in Fig. 2 depends on the distribution of the input and on the support and distribution of the disturbance. This stands in stark contrast with the well known fact that the increase in differential entropy produced by an invertible linear operator depends only on its Jacobian, and not on the statistics of the input [1]. We have also seen that the distribution of a random process along the different directions within the Euclidean space which contains it plays a key role as well. This motivates the need to specify a class of random processes which distribute more or less evenly over all directions. The following section introduces a rigorous definition of this class and characterizes a large family of processes belonging to it. B. Entropy-Balanced Processes We begin by formally introducing the notion of an "entropy-balanced" process u ∞ 1 , being one in which, for every finite ν ∈ N, the differential entropy rate of the orthogonal projection of u n 1 into any subspace of dimension n − ν equals the entropy rate of u n 1 as n → ∞. This idea is precisely in the following definition. Equivalently, a random process {v(k)} is entropy balanced if every unitary transformation on v n 1 yields a random sequence y n 1 such that lim n→∞ 1 n |h(y n n−ν+1 | y n−ν 1 )| = 0. This property of the resulting random sequence y n 1 means that one cannot predict its last ν samples with arbitrary accuracy by using its previous n − ν samples, even if n goes to infinity. We now characterize a large family of entropy-balanced random processes and establish some of their properties. Although intuition may suggest that most random processes (such as i.i.d. or stationary processes) should be entropy balanced, that statement seems rather difficult to prove. In the following, we show that the entropy-balanced condition is met by i.i.d. processes with per-sample probability density function (PDF) being uniform, piece-wise constant or Gaussian. It is also shown that adding to an entropy-balanced process an independent random processes independent of the former yields another entropy-balanced process, and that filtering an entropy-balanced process by a stable and minimum phase filter yields an entropy-balanced process as well. Lemma 1. Let u ∞ 1 be an i.i.d. process with finite differential entropy rate, in which each u i is distributed according to a piece-wise constant PDF in which each interval where this PDF is constant has measure greater than ǫ, for some bounded-away-from-zero constant ǫ. Then u ∞ 1 is entropy balanced. The working behind this lemma can be interpreted intuitively by noting that adding to a random process another independent random process can only increase the "spread" of the distribution of the former, which tends to balance the entropy of the resulting process along all dimensions in Euclidean space. In addition, it follows from Lemma 2 that all i.i.d. 
processes having a per-sample PDF which can be constructed by convolving uniform, piece-wise constant or Gaussian PDFs as many times as required are entropy balanced. It also implies that one can have non-stationary processes which are entropy balanced, since Lemma 2 imposes no requirements for the process v ∞ 1 . Our last lemma related to the properties of entropy-balanced processes shows that filtering by a stable and minimum phase LTI filter preserves the entropy balanced condition of its input. Lemma 3. Let u ∞ 1 be an entropy-balanced process and G an LTI stable and minimum-phase filter. Then This result implies that any stable moving-average auto-regressive process constructed from entropybalanced innovations is also entropy balanced, provided the coefficients of the averaging and regression correspond to a stable MP filter. We finish this section by pointing out two examples of processes which are non-entropy-balanced, namely, the output of a NMP-filter to an entropy-balanced input and the output of an unstable filter to an entropy-balanced input. The first of these cases plays a central role in the next section. IV. ENTROPY GAIN DUE TO EXTERNAL DISTURBANCES In this section we formalize the ideas which were qualitatively outlined in the previous section. Specifically, for the system shown in Fig. 2 we will characterize the entropy gain G(G, (10) for the case in which the initial state x 0 is zero (or deterministic) and there exists an output random disturbance of (possibly infinite length) z ∞ 1 which satisfies Assumption 3. The following lemmas will be instrumental for that purpose. Fig. 2, and suppose z ∞ 1 satisfies Assumption 3, and that the input process u ∞ 1 is entropy balanced. Let G n = Q T n D n R n be the SVD of G n , where D n = diag{d n,1 , . . . , d n,n } are the singular values of G n , with d n,1 ≤ d n,2 ≤ · · · ≤ d n,n , such that | det G n | = 1 ∀n. Let m be the number of these singular values which tend to zero exponentially as n → ∞. Then Lemma 5. Consider the system in (The proof of this Lemma can be found in the Appendix, page 34). The previous lemma precisely formulates the geometric idea outlined in Section III. To see this, notice that no entropy gain is obtained if the output disturbance vector z 1 n is orthogonal to the space spanned by the first m columns of Q n . If this were the case, then the disturbance would not be able fill the subspace along which G n u 1 n is shrinking exponentially. Indeed, if [Q n ] 1 n z 1 n = 0 for all n, then , and the latter sum cancels out the one on the RHS of (12), while lim n→∞ 1 n h([R n ] 1 n u 1 n ) = 0 since u ∞ 1 is entropy balanced. On the contrary (and loosely speaking), if the projection of the support of z 1 n onto the subspace spanned by the first m rows of Q n is of dimension m, then h([D n ] 1 m R n u 1 n + [Q n ] 1 m z 1 n ) remains bounded for all n, and the entropy limit of the sum lim n→∞ 1 n (− m i=1 log d n,i ) on the RHS of (12) yields the largest possible entropy gain. Notice that − m i=1 log d n,i = n i=m+1 log d n,i (because det(G n ) = 1), and thus this entropy gain stems from the uncompensated expansion of G n u 1 n along the space spanned by the rows of [Q n ] m+1 n . Lemma 5 also yields the following corollary, which states that only a filter G(z) with zeros outside the unit circle (i.e., an NMP transfer function) can introduce entropy gain. Fig. 2 and let u ∞ 1 be an entropy-balanced random process with bounded entropy rate. Besides Assumption 1, suppose that G(z) is minimum phase. 
Then Corollary 1 (Minimum Phase Filters do not Introduce Entropy Gain). Consider the system shown in Proof: Since G(z) is minimum phase and stable, it follows from Lemma 4 that the number of singular values of G n which go to zero exponentially, as n → ∞, is zero. Indeed, all the singular values vary polynomially with n. Thus m = 0 and Lemma 5 yields directly that the entropy gain is zero (since the RHS of (12) is zero). A. Input Disturbances Do Not Produce Entropy Gain In this section we show that random disturbances satisfying Assumption 3, when added to the input u ∞ 1 (i.e., before G), do not introduce entropy gain. This result can be obtained from Lemma 5, as stated in the following theorem: Theorem 1 (Input Disturbances do not Introduce Entropy Gain). Let G satisfy Assumption 1. Suppose that u ∞ 1 is entropy balanced and consider the output where b 1 ∞ = Ψa 1 ν , with a 1 ν being a random vector satisfying h(a 1 ν ) < ∞, and where Ψ ∈ R |N|×ν has orthonormal columns. Then, Proof: In this case, the effect of the input disturbance in the output is the forced response of G to it. This response can be regarded as an output disturbance z ∞ 1 = G b ∞ 1 . Thus, the argument of the differential entropy on the RHS of (12) is Therefore, The proof is completed by substituting this result into the RHS of (12) and noticing that Remark 1. An alternative proof for this result can be given based upon the properties of an entropy- balanced sequence, as follows. Since det(G n ) = 1, ∀n, we have that h(G n (u 1 n + b 1 n )) = h(u 1 n + b 1 n ). Let Θ n ∈ R ν×n and Θ n ∈ R (n−ν)×n be a matrices with orthonormal rows which satisfy Θ n [Ψ] 1 where we have applied the chain rule of differential entropy. But which is upper bounded for all n because h(a 1 n ) < ∞ and h(Θ n u 1 n ) < ∞, the latter due to u ∞ 1 being entropy balanced. On the other hand, since b 1 n is independent of u 1 n , it follows that h(u 1 where the last equality stems from the fact that u ∞ 1 is entropy balanced. B. The Entropy Gain Introduced by Output Disturbances when G(z) has NMP Zeros We show here that the entropy gain of a transfer function with zeros outside the unit circle is at most the sum of the logarithm of the magnitude of these zeros. To be more precise, the following assumption is required. e., such that |ρ 1 | > |ρ 2 | > · · · > |ρ M | > 1, with ℓ i being the multiplicity of the i-th distinct zero. We denote by As can be anticipated from the previous results in this section, we will need to characterize the asymptotic behaviour of the singular values of G n . This is accomplished in the following lemma, which relates these singular values to the zeros of G(z where the elements in the sequence {α n,l } are positive and increase or decrease at most polynomially with n. (The proof of this lemma can be found in the appendix, page 36). We can now state the first main result of this section. Theorem 2. In the system of Fig. 2, suppose that u ∞ 1 is entropy balanced and that G(z) and z ∞ 1 satisfy assumptions 4 and 3, respectively. Then whereκ min{κ, m} and κ is as defined in Assumption 3. Both bounds are tight. The upper bound is where the unitary matrices Q T n ∈ R n×n hold the left singular vectors of G n . The second main theorem of this section is the following: Theorem 3. In the system of Fig. 2, suppose that u ∞ 1 is entropy balanced and that G(z) satisfies Assumption 4. Let z ∞ 1 be a random output disturbance, such that z(i) = 0, ∀i > m, and that |h(z m 1 )| < ∞. Then Proof: See Appendix, page 39. V. 
ENTROPY GAIN DUE TO A RANDOM INITIAL SATE Here we analyze the case in which there exists a random initial state x 0 independent of the input u ∞ 1 , and zero (or deterministic) output disturbance. The effect of a random initial state appears in the output as the natural response of G to it, namely the sequenceȳ n 1 . Thus, y n 1 can be written in vector form as This reveals that the effect of a random initial state can be treated as a random output disturbance, which allows us to apply the results from the previous sections. Recall from Assumption 4 that G(z) is a stable and biproper rational transfer function with m NMP zeros. As such, it can be factored as where P (z) is a biproper filter containing only all the poles of G(z), and N (z) is a FIR biproper filter, containing all the zeros of G(z). We have already established (recall Theorem 1) that the entropy gain introduced by the minimum phase system P (z) is zero. It then follows that the entropy gain can be introduced only by the NMP-zeros of N (z) and an appropriate output disturbanceȳ ∞ 1 . Notice that, in this case, the input process w ∞ 1 to N (i.e., the output sequence of P due to a random input u ∞ 1 ) is independent ofȳ ∞ 1 (since we have placed the natural responseȳ ∞ 1 after the filters P and N , hose initial state is now zero). This condition allows us to directly use Lemma 5 in order to analyze the entropy gain that u ∞ 1 experiences after being filtered by G, which coincides withh(y ∞ 1 ) −h(w ∞ 1 ). This is achieved by the next theorem. Theorem 4. Consider a stable p-th order biproper filter G(z) having m NMP-zeros, and with a random initial state x 0 , such that |h(x 0 )| < ∞. Then, the entropy gain due to the existence of a random initial state is Proof: Being a biproper and stable rational transfer function, G(z) can be factorized as where P (z) is a stable biproper transfer function containing only all the poles of G(z) and with all its zeros at the origin, while N (z) is stable and biproper FIR filter, having all the zeros of G(z). LetC n x 0 and C n x 0 be the natural responses of the systems P and N to their common random initial state x 0 , respectively, whereC n , C n ∈ R n×p . Then we can write Since P (z) is stable and MP, it follows from Corollary 1 that h(w 1 n ) = h(u 1 n ) for all n, and therefore Therefore, we only need to consider the entropy gain introduced by the (possibly) non-minimum filter N due to a random output disturbance z 1 n =ȳ 1 n = N nC n x 0 + C n x 0 , which is independent of the input w 1 n . Thus, the conditions of Lemma 5 are met considering G n = N n , where now N n = Q T n D n R n is the SVD for N n , and d n,1 ≤ d n,2 ≤ · · · ≤ d n,n . Consequently, it suffices to consider the differential entropy on the RHS of (12), whose argument is where v 1 n u 1 n +C n x 0 has bounded entropy rate and is entropy balanced (sinceC n x 0 is the natural response of a stable LTI system and because of Lemma 2). We remark that, in (35), v 1 n is not independent of x 0 , which precludes one from using the proof of Theorem 2 directly. On the other hand, since N (z) is FIR of order (at most) p, we have that C n = [E T p | 0 T ] T , where E p ∈ R p×p is a non-singular upper-triangular matrix independent of n. Hence, C n x 0 can be written as According to (35), the entropy gain in (25) arises as long as h([Q n ] 1 m C n x 0 ) is lower bounded by a finite constant (or if it decreases sub-linearly as n grows). Then, we need [Q n ] 1 m [Φ] 1 n to be a full row-ranked matrix in the limit as n → ∞. 
However, where [Q (p) n ] 1 m denotes the first p columns of the first m rows in Q n . We will now show that these determinants do not go to zero as n → ∞. Hence, the minimum singular value of [Q Using this result in (37) and taking the limit, we arrive to Thus is upper and lower bounded by a constant independent of n because v ∞ 1 is entropy balanced, [D n ] 1 m has decaying entries, and h(s p 1 ) < ∞, which means that the entropy rate in the RHS of (12) decays to zero. The proof is finished by invoking Lemma 6. Theorem 4 allows us to formalize the effect that the presence or absence of a random initial state has on the entropy gain using arguments similar to those utilized in Section IV. Indeed, if the random initial state x 0 ∈ R p has finite differential entropy, then the entropy gain achieves (3), since the alignment between x 0 and the first m rows of Q n is guaranteed. This motivates us to characterize the behavior of the entropy gain (due only to a random initial state), when the initial state x 0 can be written as [Φ] 1 p s 1 τ , with τ ≤ p, which means that x 0 has an undefined (or −∞) differential entropy. Corollary 2. Consider an FIR, p-order filter F (z) having m NMP-zeros, such that its random initial state can be written as x 0 = Φs 1 τ , where |h(s τ 1 )| < ∞ and Φ ∈ R p×τ contains orthonormal rows . Then, T is a non-singular matrix, with C n defined byȳ 1 n = C n x 0 (as in Theorem 4). Proof: The effect of the random initial state to the output sequence y ∞ 1 can be written as y 1 remains bounded, for n → ∞, if and only if lim n→∞ det( then the lower bound is reached by inserting (43) in (12). Otherwise, there exists L large enough such that τ n ≥ 1, We then proceed as the proof of Theorem 2, by considering a unitary (m × m)-matrix H n , and a This procedure allows us to conclude that h([D n ] 1 n R n u 1 n + [Q n ] 1 m C n Φs 1 τ ) ≤ m i=τn+1 log d n,i , and that the lower limit in the latter sum equalsτ +1 when [Q n ] 1 m C n Φs 1 τ is a full row-rank matrix. Replacing the latter into (12) finishes the proof. Both results, Theorem 4 and Corollary 2, reveal that the entropy gain arises as long as the effect of the random initial state aligns with the first rows of Q n , just as in the results of the previous section. VI. EFFECTIVE ENTROPY GAIN DUE TO THE INTRINSIC PROPERTIES OF THE FILTER If there are no disturbances and the initial state is zero, then the first n output samples to an input u n 1 is given by (4). Therefore, the entropy gain in this case, as defined in (10), is zero, regardless of whether or not G is NMP. Despite the above, there is an interesting question which, to the best of the authors' knowledge, has not been addressed before: Since in any LTI filter the entire output is longer than the input, what would happen if one compared the differential entropies of the complete output sequence to that of the (shorter) input sequence? As we show next, a proper definition of this question requires recasting the problem in terms of a new definition of differential entropy. After providing a geometrical interpretation of this problem, we prove that the (new) entropy gain in this case is exactly (3). A. Geometrical Interpretation Consider the random vectors u [u 1 u 2 ] T and y [y 1 y 2 Suppose u is uniformly distributed over [0, 1] × [0, 1]. 
Applying the conventional definition of differential entropy of a random sequence, we would have that because y 3 is a deterministic function of y 1 and y 2 : In other words, the problem lies in that although the output is a three dimensional vector, it only has two degrees of freedom, i.e., it is restricted to a 2-dimensional subspace of R 3 . This is illustrated in Fig. 4, where the set [0, 1] × [0, 1] is shown (coinciding with the u-v plane), together with its image throughG 2 (as defined in (45)). As can be seen in this figure, the image of the square [0, 1] 2 throughG 2 is a 2-dimensional rhombus over which {y 1 , y 2 , y 3 } distributes uniformly. Since the intuitive notion of differential entropy of an ensemble of random variables (such as how difficult it is to compress it in a lossy fashion) relates to the size of the region spanned by the associated random vector, one could argue that the differential h(y 1 , y 2 , y 3 ) = −∞? Simply put, the differential entropy relates to the volume spanned by the support of the probability density function. For y in our example, the latter (three-dimensional) volume is clearly zero. From the above discussion, the comparison between the differential entropies of y ∈ R 3 and u ∈ R 2 of our previous example should take into account that y actually lives in a two-dimensional subspace of R 3 . Indeed, since the multiplication by a unitary matrix does not alter differential entropies, we could consider the differential entropy of  ỹ 0 whereQ T is the 3 × 2 matrix with orthonormal rows in the singular-value decomposition ofG 2 andq is a unit-norm vector orthogonal to the rows ofQ (and thus orthogonal to y as well). We are now able to compute the differential entropy in R 2 forỹ, corresponding to the rotated version of y such that its support is now aligned with R 2 . The preceding discussion motivates the use of a modified version of the notion of differential entropy for a random vector y ∈ R n which considers the number of dimensions actually spanned by y instead of its length. Definition 2 (The Effective Differential Entropy). Let y ∈ R ℓ be a random vector. If y can be written as a linear transformation y = Su, for some u ∈ R n (n ≤ ℓ), S ∈ R ℓ×n , then the effective differential entropy of y is defined ash where S = A T T C is an SVD for S, with T ∈ R n×n . It is worth mentioning that Shannon's differential entropy of a vector y ∈ R ℓ , whose support's ℓvolume is greater than zero, arises from considering it as the difference between its (absolute) entropy and that of a random variable uniformly distributed over an ℓ-dimensional, unit-volume region of R ℓ . More precisely, if in this case the probability density function (PDF) of y = [y 1 y 2 · · · y ℓ ] T is Riemann integrable, then [17, Thm. 9.3.1], where y ∆ is the discrete-valued random vector resulting when y is quantized using an ℓ-dimensional uniform quantizer with ℓ-cubic quantization cells with volume ∆ ℓ . However, if we consider a variable y whose support belongs to an n-dimensional subspace of R ℓ , n < ℓ (i.e., y = Su = A T T Cu, as in Definition 2), then the entropy of its quantized version in R ℓ , say H ℓ (y ∆ ), is distinct from H n ((Ay) ∆ ), the entropy of Ay in R n . 
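To make this bookkeeping concrete, the following sketch (same assumptions as before: Python with numpy and the two-tap filter g_0 = 1, g_1 = 2, so that η = 1 and the tall Toeplitz map is of size (n+1) × n) evaluates (1/2) log det of the Gram matrix of that map which, following the accounting used in the proof of Theorem 5 in the next subsection, is the amount by which the effective differential entropy of the full output exceeds h(u_1^n). Per input sample this quantity approaches log 2 ≈ 0.6931, the logarithm of the magnitude of the non-minimum phase zero, with no disturbance or random initial state involved.

```python
import numpy as np

def toeplitz_tall(g, n):
    """Tall Toeplitz map taking an n-sample input to the full (n + len(g) - 1)-sample output."""
    ell = n + len(g) - 1
    Gt = np.zeros((ell, n))
    for j in range(n):
        Gt[j:j + len(g), j] = g
    return Gt

g = np.array([1.0, 2.0])             # same two-tap example, NMP zero at z = 2
for n in (2, 10, 50, 200):
    Gt = toeplitz_tall(g, n)
    _, logdet = np.linalg.slogdet(Gt.T @ Gt)   # slogdet avoids overflow for large n
    print(n, 0.5 * logdet / n)       # effective entropy gain per sample -> log 2 ~ 0.6931
```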
Moreover, it turns out that, in general, On the other hand, since in this case Ay = u, we have that Thus The latter example further illustrates why the notion of effective entropy is appropriate in the setup considered in this section, where the effective dimension of the random sequences does not coincide with their length (it is easy to verify that the effective entropy of y does not change if one rotates y in R ℓ ). Indeed, we will need to consider only sequences which can be constructed by multiplying some random vector u ∈ R n , with bounded differential entropy, by a tall matrixG n ∈ R n×(n+η) , with η > 0 (as in (45)), which are precisely the conditions required by Definition 2. B. Effective Entropy Gain We can now state the main result of this section: Theorem 5. Let the entropy-balanced random sequence u ∞ 1 be the input of an LTI filter G, and let y ∞ 1 be its output. Assume that G(z) is the z-transform of the (η + 1)-length sequence {g k } η k=0 . Then Theorem 5 states that, when considering the full-length output of a filter, the effective entropy gain is introduced by the filter itself, without requiring the presence of external random disturbances or initial states. This may seem a surprising result, in view of the findings made in the previous sections, where the entropy gain appeared only when such random exogenous signals were present. In other words, when observing the full-length output and the input, the (maximum) entropy gain of a filter can be recasted in terms of the "volume" expansion yielded by the filter as a linear operator, provided we measure effective differential entropies instead of Shannon's differential entropy. Proof of Theorem 5: The total length of the output ℓ, will grow with the length n of the input, if G is FIR, and will be infinite, if G is IIR. Thus, we define the output-length function It is also convenient to define the sequence of matrices {G n } ∞ n=1 , whereG n ∈ R ℓ(n)×n is Toeplitz with G n i,j = 0, ∀i < j, G n i,j = g i−j , ∀i ≥ j. This allows one to write the entire output y ℓ 1 of a causal LTI filter G with impulse response {g k } η k=0 to an input u ∞ 1 as Let the SVD ofG n beG n =Q T nD nȒn , whereQ n ∈ R n×ℓ(n) has orthonormal rows,D n ∈ R n×n is diagonal with positive elements, andȒ n ∈ R n×n is unitary. The effective differential entropy of y n(ℓ) 1 exceeds the one of u n 1 by where the first equality follows from the fact that u 1 n can be written as I n u 1 n , which means thath(u 1 n ) = h(u 1 n ). ButG SinceȒ n is unitary, it follows that detD 2 n = detG T nG n , which means that detD n = 1 2 detG T nG n . The product H n G T nGn is a symmetric Toeplitz matrix, with its first column, [h 0 h 1 · · · h n−1 ] T , given by Thus, the sequence {h i } n−1 i=0 corresponds to the samples 0 to n − 1 of those resulting from the complete convolution g * g − , even when the filter G is IIR, where g − denotes the time-reversed (perhaps infinitely large) response g. Consequently, using the Grenander and Szegö's theorem [19], it holds that where G(e jω ) is the discrete-time Fourier transform of {g k } ℓ k=0 . In order to finish the proof, we divide (58) by n, take the limit as n → ∞, and replace (63) in the latter. A. 
Rate Distortion Function for Non-Stationary Processes In this section we obtain a simpler proof of a result by Gray, Hashimoto and Arimoto [14]- [16], which compares the rate distortion function (RDF) of a non-stationary auto-regressive Gaussian process x ∞ 1 (of To be more precise, let be the impulse responses of two linear time-invariant filters A andà with rational transfer functions Notice also that lim |z|→∞ A(z) = 1 and lim |z|→∞à (z) = 1/ M i=1 |p i |, and thus Consider the non-stationary random sequences (source) x ∞ 1 and the asymptotically stationary sourcẽ x ∞ 1 generated by passing a stationary Gaussian process w ∞ 1 through A(z) andÃ(z), respectively, which can be written as x 1 n =à n w n 1 , n = 1, . . . . where, for each n, the minimums are taken over all the conditional probability density functions f u n 1 | x n 1 and fũn 1 |x n 1 yielding E u 1 n 2 /n ≤ D and E ũ 1 n 2 /n ≤ D, respectively. The above rate-distortion functions have been characterized in [14]- [16] for the case in which w ∞ 1 is an i.i.d. Gaussian process. In particular, it is explicitly stated in [15], [16] that, for that case, We will next provide an alternative and simpler proof of this result, and extend its validity for general (not-necessarily stationary) Gaussian w ∞ 1 , using the entropy gain properties of non-minimum phase filters established in Section IV. Indeed, the approach in [14]- [16] is based upon asymptotically-equivalent Toeplitz matrices in terms of the signals' covariance matrices. This restricts w ∞ 1 to be Gaussian and i.i.d. and A(z) to be an all-pole unstable transfer function, and then, the only non-stationary allowed is that arising from unstable poles. For instance, a cyclo-stationarity innovation followed by an unstable filter A(z) would yield a source which cannot be treated using Gray and Hashimoto's approach. By contrast, the reasoning behind our proof lets w ∞ 1 be any Gaussian process, and then let the source be A w, with A(z) having unstable poles (and possibly zeros and stable poles as well). The statement is as follows: Theorem 6. Let w ∞ 1 be any Gaussian stationary process with bounded differential entropy rate, and let x ∞ 1 andx ∞ 1 be as defined in (68) and (69), respectively. Then (72) holds. Thanks to the ideas developed in the previous sections, it is possible to give an intuitive outline of the proof of this theorem (given in the appendix, page 40) by using a sequence of block diagrams. More precisely, consider the diagrams shown in Fig. 6. In the top diagram in this figure, suppose that y = C x + u realizes the RDF for the non-stationary source x. The sequence u is independent of x, and the linear filter C(z) is such that the error (y − x) ⊥ ⊥ y (a necessary condition for minimum MSE optimality). The filter B(z) is the Blaschke product of A(z) (see (168) in the appendix) (a stable, NMP filter with unit frequency response magnitude such thatx = B x). If one now moves the filter B(z) towards the source, then the middle diagram in Fig. 6 is obtained. By doing this, the stationary sourcex appears with an additive error signalũ that has the same asymptotic variance as u, reconstructed asỹ = Cx +ũ. From the invertibility of B(z), it also follows that the mutual information rate betweenx andỹ equals that between x and y. Thus, the channelỹ = Cx +ũ has the same rate and distortion as the channel y = C x + u. However, if one now adds a short disturbance d to the error signalũ (as depicted in the bottom diagram of Fig. 
6), then the resulting additive error termū =ũ + d will be independent ofx and will have the same asymptotic variance asũ. However, the differential entropy rate ofū will exceed that ofũ by the RHS of (72). This will make the mutual information rate betweenx andȳ to be less than that betweeñ x andỹ by the same amount. Hence, Rx(D) be at most R x (D) − M i=1 log |p i |. A similar reasoning can be followed to prove that R B. Networked Control Here we revisit the setup shown in Fig. 1 where {p i } M i=1 are the poles of P (z) (the plant in Fig. 1). By using the results obtained in Section V we show next that equality holds in (7b) provided the feedback channel satisfies the following assumption: Fig. 1 can be written as Assumption 5. The feedback channel in where 1) A and B are stable rational transfer functions such that AB is biproper, ABP has the same unstable poles as P , and the feedback AB stabilizes the plant P . 2) F is any (possibly non-linear) operator such thatc F (c) satisfies 1 n h(c n 1 ) < K, for all n ∈ N, and An illustration of the class of feedback channels satisfying this assumption is depicted on top of Fig. 7. Trivial examples of channels satisfying Assumption 5 are a Gaussian additive channel preceded and followed by linear operators [20]. Indeed, when F is an LTI system with a strictly causal transfer function, the feedback channel that satisfies Assumption 5 is widely known as a noise shaper with input pre and post filter, used in, e.g. [21]- [24]. Theorem 7. In the networked control system of Fig. 1, suppose that the feedback channel satisfies Assumption 5 and that the input u ∞ 1 is entropy balanced. If the random initial state of the plant P (z), Proof: Let P (z) = N (z)/Λ(z) and T (z) A(z)B(z) = Γ(z)/Θ(z). Then, from Lemma 9 (in the appendix), the output y 1 n can be written as where s 0 is the initial state of T (z) andũ (see Fig. 7 Bottom). Then whereC 0 maps the initial state s 0 to y 1 n ,C n maps the initial state x 0 to the output ofG(z), and C n maps the initial state x 0 (of Λ(z)) to y 1 n . Since u ∞ 1 is entropy balanced andc ∞ 1 has finite entropy rate, it follows from Lemma 2 thatũ ∞ 1 is entropy balanced as well. Thus, we can proceed as in the proof of Theorem 4 to conclude that This completes the proof. C. The Feedback Channel Capacity of (non-white) Gaussian Channels Consider a non-white additive Gaussian channel of the form where the input x is subject to the power constraint and z ∞ 1 is a stationary Gaussian process. The feedback information capacity of this channel is realized by a Gaussian input x, and is given by B z x v y Figure 8. Block diagram representation a non-white Gaussian channel y = x + z and the coding scheme considered in [13]. where K x 1 n is the covariance matrix of x 1 n and, for every k ∈ N, the input x k is allowed to depend upon the channel outputs y k−1 1 (since there exists a causal, noise-less feedback channel with one-step delay). In [13], it was shown that if z is an auto-regressive moving-average process of M -th order, then C FB can be achieved by the scheme shown in Fig. 8. In this system, B is a strictly causal and stable finite-order filter and v ∞ 1 is Gaussian with v k = 0 for all k > M and such that v 1 n is Gaussian with a positive-definite covariance matrix K v 1 M . 
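Before analysing how this scheme degrades, it is instructive to see numerically the entropy boost it relies on. In the Gaussian case the differential entropies reduce to log-determinants of covariance matrices, so the per-sample gap between h((I_n + B_n) z_1^n + v_1^n) and h(z_1^n) can be evaluated in closed form. The sketch below is purely illustrative: B(z) = -2 z^{-1} is a hypothetical strictly causal, stable feedback filter (not the optimized filter of [13]), z_1^n is i.i.d. N(0,1), and v is a length-1 Gaussian disturbance of tiny variance. Consistently with Theorem 3, the gap approaches log 2, the log-magnitude of the zero of 1 + B(z) at z = 2, however small that variance is made.

```python
import numpy as np

def bidiag(n, a):
    """Lower-triangular Toeplitz matrix of the FIR filter 1 + a z^{-1}."""
    return np.eye(n) + a * np.eye(n, k=-1)

a, var_v = -2.0, 1e-6                # hypothetical feedback tap; 1 + B(z) has its zero at z = 2
for n in (10, 50, 200, 400):
    F = bidiag(n, a)                 # I_n + B_n acting on the i.i.d. N(0,1) noise z_1^n
    K = F @ F.T                      # covariance of (I_n + B_n) z; det K = 1, no gain by itself
    K[0, 0] += var_v                 # add the length-1 Gaussian disturbance v (M = 1 here)
    _, logdet = np.linalg.slogdet(K)
    print(n, 0.5 * logdet / n)       # per-sample entropy gain -> log 2 ~ 0.6931 as n grows
```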
Here we use the ideas developed in Section IV to show that the information rate achieved by the capacity-achieving scheme proposed in [13] drops to zero if there exists any additive disturbance of length at least M and finite differential entropy affecting the output, no matter how small. To see this, notice that, in this case, and for all n > M , since det(I n +B n ) = 1. From Theorem 3, this gap between differential entropies is precisely the entropy gain introduced by I n +B n to an input z 1 n when the output is affected by the disturbance v 1 M . Thus, from Theorem 3, the capacity of this scheme will correspond to 1 are the zeros of 1 + B(z), which is precisely the result stated in [13,Theorem 4.1]. However, if the output is now affected by an additive disturbance d ∞ 1 not passing through B(z) such that d k = 0, ∀k > M and |h(d 1 , then we will have In this case, But lim n→∞ 1 n (h((I n + B n )z 1 n + v 1 n + d 1 n ) − h((I n + B n )z 1 n + d 1 n )) = 0, which follows directly from applying Theorem 3 to each of the differential entropies. Notice that this result holds irrespective of how small the power of the disturbance may be. Thus, the capacity-achieving scheme proposed in [13] (and further studied in [25]), although of groundbreaking theoretical importance, would yield zero rate in any practical situation, since every real signal is unavoidably affected by some amount of noise. VIII. CONCLUSIONS This paper has provided a geometrical insight and rigorous results for characterizing the increase in differential entropy rate (referred to as entropy gain) introduced by passing an input random sequence through a discrete-time linear time-invariant (LTI) filter G(z) such that the first sample of its impulse response has unit magnitude. Our time-domain analysis allowed us to explain and establish under what conditions the entropy gain coincides with what was predicted by Shannon, who followed a frequencydomain approach to a related problem in his seminal 1948 paper. In particular, we demonstrated that the entropy gain arises only if G(z) has zeros outside the unit circle (i.e., it is non-minimum phase, (NMP)). This is not sufficient, nonetheless, since letting the input and output be u and y = G u, the difference h(y n 1 ) − h(u n 1 ) is zero for all n, yielding no entropy gain. However, if the distribution of the input process u satisfies a certain regularity condition (defined as being "entropy balanced") and the output has the form y = G u + z, with z being an output disturbance with bounded differential entropy, we have shown that the entropy gain can range from zero to the sum of the logarithm of the magnitudes of the NMP zeros of G(z), depending on how z is distributed. A similar result is obtained if, instead of an output disturbance, we let G(z) have a random initial state. We also considered the difference between the differential entropy rate of the entire (and longer) output of G(z) and that of its input, i.e., h(y n+η 1 ) − h(u n 1 ), where η + 1 is the length of the impulse response of G(z). For this purpose, we introduced the notion of "effective differential entropy", which can be applied to a random sequence whose support has dimensionality smaller than its dimension. Interestingly, the effective differential entropy gain in this case, which is intrinsic to G(z), is also the sum of the logarithm of the magnitudes of the NMP zeros of G(z), without the need to add disturbances or a random initial state. 
We have illustrated some of the implications of these ideas in three problems. Specifically, we used the fundamental results here obtained to provide a simpler and more general proof to characterize the rate-distortion function for Gaussian non-stationary sources and MSE distortion. Then, we applied our results to provide sufficient conditions for equality in an information inequality of significant importance in networked control problems. Finally, we showed that the information rate of the capacity-achieving scheme proposed in [13] for the autoregressive Gaussian channel with feedback drops to zero in the presence of any additive disturbance in the channel input or output of sufficient (finite) length, no matter how small it may be. A. Proofs of Results Stated in the Previous Sections Proof of Proposition 1: Let σ 2 u be the per-sample variance of u ∞ 1 , thus h(u 1 n ) = n 2 log(2π e σ 2 u ). Let y ν+1 n Φ n u 1 n . Then K y ν+1 ≤ h(y ν+1 n |c 1 n ) + I(c 1 n ; u 1 n ), where the inequality is due to the fact that u 1 n and y ν+1 n are deterministic functions of u 1 n , and hence c 1 n ←→ u 1 n ←→ y ν+1 n . Subtracting h(u 1 n ) from (96) we obtain Hence, where the last equality follows from Lemma 7 (see Appendix B) whose conditions are met because, given c 1 n , the sequence u 1 n has independent entries each of them distributed uniformly over a possibly different interval with bounded and positive measure. The opposite inequality is obtained by following the same steps as in the proof of Lemma 7, from (199) onwards, which completes the proof. Proof of Lemma 2: Let y 1 n [Ψ T n |Φ T n ] T w 1 n , where [Ψ T n |Φ T n ] T ∈ R n×n is a unitary matrix and where Ψ n ∈ R ν×n and Φ n ∈ R (n−ν)×n have orthonormal rows. Then We can lower bound h(y 1 ν |y ν+1 n ) as follows: Substituting this result into (102), dividing by n and taking the limit as n → ∞, and recalling that, since u ∞ 1 is entropy balanced, then lim n→∞ 1 n h(Ψ n u 1 n |Φ n u 1 n ) = 0, lead us to lim n→∞ The opposite bound over h(y 1 ν |y ν+1 n ) can be obtained from where (w G ) 1 n is a jointly Gaussian sequence with the same second-order moment as w 1 n . Therefore, h(Ψ n (w G ) 1 n ) ≤ ν 2 log(2π e max{σ 2 w (i)}), with σ 2 w (i) being the variance of the sample w(i). The fact that w 1 n has a bounded second moment at each entry w(i), and replacing the latter inequality in (102), satisfy lim n→∞ − 1 n h(y 1 ν |y ν+1 n ) = lim n→∞ 1 n (h(Φ n w 1 n ) − h(w 1 n )) ≥ 0, which finishes the proof. Proof of Lemma 3: is a unitary matrix and where Ψ n ∈ R ν×n and Φ n ∈ R (n−ν)×n have orthonormal rows. Since w 1 n = G n u 1 n , we have that Let Ψ n G n = A n Σ n B n be the SVD of Ψ n G n , where A n ∈ R ν×ν is an orthogonal matrix, B n ∈ R ν×n has orthonormal rows and Σ n ∈ R ν×ν is a diagonal matrix with the singular values of Ψ n G n . Hence h(Ψ n w 1 n ) = h(Ψ n G n u 1 n ) = h(A n Σ n B n u 1 n ) = log det(Σ n ) + h(B n u 1 n ). It is straightforward to show that the diagonal entries in Σ n are lower and upper bounded by the smallest and largest singular values of G n , say σ min (n) and σ max (n), respectively, which yields But from Lemma 4, lim n→∞ (1/n)σ min (n) = lim n→∞ (1/n)σ max (n) = 0, and thus where the last equality is due to the fact that u ∞ 1 is entropy balanced. This completes the proof. Proof of Lemma 4: The fact that lim n→∞ λ n (A n A T n ) is upper bounded follows directly from the fact that A(z) is a stable transfer function. 
On the other hand, A n is positive definite (with all its eigenvalues equal to 1), and so A n A T n is positive definite as well, with lim n→∞ λ 1 (A n A T n ) ≥ 0. Suppose that lim n→∞ λ 1 (A n A T n ) = 0. If this were true, then it would hold that lim n→∞ λ n (A −1 n A −T n ) = ∞. But A −1 n is the lower triangular Toeplitz matrix associated with A −1 (z), which is stable (since A(z) is minimum phase), implying that lim n→∞ λ n (A −1 n A −T 1 ) < ∞, thus leading to a contradiction. This completes the proof. Proof of Lemma 5: Since Q n is unitary, we have that where Applying the chain rule of differential entropy, we get Notice that w 1 m = [D n ] 1 m R n u 1 n +[Q n ] 1 m z 1 n . Thus, it only remains to determine the limit of h(w n m+1 | w m 1 ) as n → ∞. We will do this by deriving a lower and an upper bound for this differential entropy and show that these bounds converge to the same expression as n → ∞. To lower bound h(w n m+1 | w m 1 ) we proceed as follows where (a) follows from includingz m 1 (or v m 1 as well) to the conditioning set, while (b) and (d) stem from the independence between u ∞ 1 andz ∞ 1 . Inequality (c) is a consequence of h(X + Y ) ≥ h(X), and (e) follows from includingz n m+1 to the conditioning set in the second term, and noting that h(v m 1 ) is not reduced upon the knowledge of z n 1 . On the other hand, then, by inserting (127) and (126) in (118), dividing by n, and taking the limit n → ∞, we obtain where the last equality is a consequence of the fact that u ∞ 1 is entropy balanced. We now derive an upper bound for h(w n m+1 | w m 1 ). Defining the random vector Therefore, n are the covariance matrices of A n x m+1 n and A n ( m+1 [D n ] n ) −1zm+1 n , respectively, and where the last inequality follows from [26]. The fact that λ max (K x m+1 n ) and λ max (Kzm+1 n ) are bounded and remain bounded away from zero for all n, and the fact that λ min ( m+1 [D n ] n ) either grows with n or decreases sub-exponentially (since the m first singular values decay exponentially to zero, with | det D n | = 1), imply in (134) that But the fact that det D n = 1 implies that log det( m+1 [D n ] n ) = − m i=1 log d n,i . This, together with the assumption that u ∞ 1 is entropy balanced yields which coincides with the lower bound found in (129), completing the proof. Proof of Lemma 6: The transfer function G(z) can be factored as G(z) =G(z)F (z), whereG(z) is stable and minimum phase and F (z) is stable with all the non-minimum phase zeros of G(z), both being biproper rational functions. From Lemma 4, in the limit as n → ∞, the eigenvalues ofG T nG n are lower and upper bounded by λ min (G TG ) and λ max (G TG ), respectively, where 0 < λ min (G TG ) ≤ λ max (G TG ) < ∞. LetG n =Q T nD nRn and F n = Q T n D n R n be the SVDs ofG n and F n , respectively, withd n,1 ≤d n,2 ≤ · · · ≤d n,n and d n,1 ≤ d n,2 ≤ · · · ≤ d n,n being the diagonal entries of the diagonal matricesD n , D n , respectively. Then Denoting the i-th row of R n by r T n,i be, we have that, from the Courant-Fischer theorem [27] that Likewise, Thus The result now follows directly from Lemma 8 (in the appendix). Proof of Theorem 2 : In this case Notice that the columns of the matrix [Q n ] 1 m [Φ] 1 n ∈ R m×κ span a space of dimension κ n ∈ {0, 1, . . . ,κ}, which means that one can have 1 n = 0) then the lower bound is reached by inserting the latter expression into (12) and invoking Lemma 6. We now consider the case in which lim n→∞ [Q n ] 1 m [Φ] 1 n = 0. 
This condition implies that there exists an N sufficiently large such that κ n ≥ 1 for all n ≥ N . Then, for all n ≥ N there exist unitary matrices where A n ∈ R κn×m and A n ∈ R (m−κn)×m have orthonormal rows, such that Thus The first differential entropy on the RHS of the latter expression is uniformly upper-bounded because u ∞ 1 is entropy balanced, [D n ] 1 m has decaying entries, and h(s κ 1 ) < ∞. For the last differential entropy, notice that being unitary, Σ n ∈ R (m−κn)×(m−κn) being diagonal, and W n ∈ R (m−κn)×n having orthonormal rows. We can then conclude that Now, the fact that allows one to conclude that Recalling that A n = [H n ] κn+1 m and that H n ∈ R m×m is unitary, it is easy to show (by using the Cauchy interlacing theorem [27]) that Substituting this into (12), exploiting the fact that u ∞ 1 is entropy balanced and invoking Lemma 6 yields the upper bound in (25). Clearly, this upper bound is achieved if, for example, is non-singular for all n sufficiently large, since, in that case, κ n =κ and we can choose A n = [Iκ 0] and A n = [0 I m−κ ]. This completes the proof. (28), the transfer function G(z) can be factored as G(z) =G(z)F (z), whereG(z) is stable and minimum phase and F (z) is a stable FIR transfer function with all the nonminimum-phase zeros of G(z) (m in total). Lettingũ 1 n G n u 1 n , we have that h(y 1 Proof of Theorem 3 : As in This means that the entropy gain of G n due to the output disturbance z ∞ 1 corresponds to the entropy gain of F n due to the same output disturbance. One can then evaluate the entropy gain of G n by applying Theorem 2 to the filter F (z) instead of G(z), which we do next. Since only the first m values of z ∞ 1 are non zero, it follows that in this case and the sufficient condition given in Theorem 2 will be satisfied for is the left unitary matrix in the SVD F n = Q T n D n R n . We will prove that this is the case by using a contradiction argument. Thus, suppose the contrary, i.e., that Then, there exists a sequence of unit-norm vectors {v n } ∞ n=1 , with v n ∈ R m for all n, such that For each n ∈ N, define the n-length image vectors t T n v T n [Q n ] 1 m , and decompose them as such that α n ∈ R m and β n ∈ R n−m . Then, from this definition and from (156), we have that As a consequence, where the last equality follows from the fact that, by construction, t T n is in the span of the first m rows of Q n , together with the fact that Q n is unitary (which implies that [Q n ] m+1 n t n = 0). Since the top m entries in D n decay exponentially as n increases, we have that where ζ n is a finite-order polynomial of n (from Lemma 8, in the Appendix). But Taking the limit as n → ∞, where we have applied (158) |F (e jω )| 2 > 0 (166) (the inequality is strict because all the zeros of F (z) are strictly outside the unit disk). Substituting this into (165) we conclude that which contradicts (160). Therefore, (155) leads to a contradiction, completing the proof. Proof of Theorem 6: Denote the Blaschke product [29] of A(z) as which clearly satisfies where b 0 is the first sample in the impulse response of B(z). Notice that (169) implies that lim n→∞ 1 n B n u 1 n 2 = lim n→∞ 1 n u 1 n 2 for every sequence of random variables u ∞ 1 with uniformly bounded variance. Since B(z) has only stable poles and its zeros coincide exactly with the poles of A(z), it follows that B(z)A(z) is a stable transfer function. 
Thus, the asymptotically stationary processx ∞ 1 defined in (69) can be constructed asx where B n is a Toeplitz lower triangular matrix with its main diagonal entries equal to b 0 . The fact that B(z) is biproper with b 0 as in (170) implies that for any u 1 n with finite differential entropy which will be utilized next. For any given n ≥ M , suppose that C(z) is chosen and x 1 n and u 1 n are distributed so as to minimize n , u 1 n is a realization of R x,n (D)), yielding the reconstruction Since we are considering mean-squared error distortion, it follows that, for rate-distortion optimality, u 1 n must be jointly Gaussian with x 1 n . From these vectors, definẽ where d 1 n is a zero-mean Gaussian vector independent of (ũ 1 n ,x 1 n ) with finite differential entropy such that d k = 0, ∀k > M . Then, we have that 2 nR x,n (D) = I(x 1 n ; y 1 n ) (a) = I(B n x 1 n ; B n y 1 n ) = I(x 1 n ;ỹ 1 n ) (177) where (a) follows from B n being invertible, (b) is due to the fact thatỹ 1 (172)). Equality holds in (e) becausẽ x 1 n ⊥ ⊥ (ũ 1 n , d 1 n ) and in (f ) because of (176). The last inequality holds becauseȳ 1 n =ỹ 1 n +d 1 n and d 1 n ⊥ ⊥ỹ 1 n . But from Theorem 3, lim n→∞ where (a) holds because d 1 n = d 1 M is bounded, and (b) is due to the fact that, in the limit, B(z) is a unitary operator. Recalling the definitions of Rx(D) and Rx(D), we conclude that lim n→∞ 1 n (x 1 n ;ȳ 1 n ) ≥ Rx ,n (D), and therefore In order to complete the proof, it suffices to show that R x (D) − Rx(D) ≤ M i=1 log |p i |. For this purpose, consider now the (asymptotically) stationary sourcex 1 n , and suppose thatŷ 1 n =x 1 n + u 1 n realizes Rx ,n (D). Again,x 1 n and u 1 n will be jointly Gaussian, satisfyingŷ 1 n ⊥ ⊥ u 1 n (the latter condition is required for minimum MSE optimality). From this, one can propose an alternative realization in which the error sequence isũ B n u 1 n , yielding an outputỹ 1 n =x 1 n +ũ 1 n withỹ 1 n ⊥ ⊥ũ 1 n . Then nRx ,n (D) = I(x 1 n ;ŷ 1 n ) = h(x 1 n ) − h(x 1 n |ŷ 1 n ) (189) where (a) follows by recalling thatŷ 1 n =x 1 n + u 1 n and becauseŷ 1 n ⊥ ⊥ u 1 n , (b) stems from (172), (c) is a consequence ofỹ 1 n ⊥ ⊥ũ 1 n , (d) follows from the fact thatỹ 1 n =x 1 n +ũ 1 n . Finally, (e) holds because B n is invertible for all n. Since, asymptotically as n → ∞, the distortion yielded by y 1 n for the non-stationary source x 1 n is the same which is obtained whenx 1 n is reconstructed asŷ 1 n (recall (169)), we conclude that R x (D) − Rx(D) ≤ M i=1 log |p i |, completing the proof. B. Technical Lemmas Lemma 7. Let u ∞ 1 be a random process with independent elements, and where each element u i is uniformly distributed over possible different intervals [− ai 2 , ai 2 ], such that a max > |a i | > a min > 0, ∀i ∈ N, for some positive and bounded a min < a max . Then u ∞ 1 is entropy balanced. Proof: Without loss of generality, we can assume that a i > 1, for all i (otherwise, we could scale the input by 1/a min , which would scale the output by the same proportion, increasing the input entropy by n log(1/a min ) and the output entropy by (n − ν) log(1/a min ), without changing the result). The input vector u 1 n is confined to an n-box U n (the support of u n 1 ) of volume V n (U n ) = n i=1 a i and has entropy log( n i=1 a i ). This support is an n-box which contains n k 2 n−k k-boxes of different k-volume. Each of these k-boxes is determined by fixing n − k entries in u 1 n to ±a i /2, and letting the remaining k entries sweep freely over [− ai 2 , ai 2 ]. 
Thus, the k-volume of each k-box is the product of the k support sizes a i of the associated selected free-sweeping entries. But recalling that a i > 1 for all i, the volume of each k-box can be upper bounded by n i=1 a i . With this, the added volume of all the k-boxes contained in the original n-box can be upper bounded as We now use this result to upper bound the entropy rate of y ν+1 n . Let y 1 n [Ψ T n |Φ T n ] T u 1 n where [Ψ T n |Φ T n ] T ∈ R n×n is a unitary matrix and where Ψ n ∈ R ν×n and Φ n ∈ R (n−ν)×n have orthonormal rows. From this definition, y ν+1 n will distribute over a finite region Y ν+1 n ⊆ R n−ν , corresponding to the projection onto the k-dimensional span of the rows of Φ n . Hence, h(y ν+1 n ) is upper bounded by the entropy of a uniformly distributed vector over the same support, i.e., by log V n−ν (Y ν+1 n ), where V n−ν (Y ν+1 n ) is the (n − ν)-dimensional volume of this support. In turn, V n−ν (Y ν+1 n ) is upper bounded by the sum of the volume of all (ν − k)-dimensional boxes contained in the n-box in which u 1 n is confined, which we already denoted by V n−ν (U n ), and which is upper bounded as in (197). Therefore, Dividing by n and taking the limit as n → ∞ yields where (a) follows because [Ψ T n |Φ T n ] T is an orthogonal matrix. Letting (y G ) 1 ν correspond to the jointly Gaussian sequence with the same second-order moments as y 1 ν , and recalling that the Gaussian distribution maximizes differential entropy for a given covariance, we obtain the upper bound where (a) follows since the {u i } n i=1 are independent, and (b) stems from the fact that Ψ n ∈ R ν×n has orthonormal rows and from the Courant-Fischer theorem [27]. Since max{σ 2 ui } n i=1 is bounded for all n, we obtain by substituting (200) into (199) that lim n→∞ We re-state here (for completeness and convenience) the unnumbered lemma in the proof of [15, Theorem 1] as follows: Lemma 8. Let the function ι be as defined in (23) where the elements in the sequence {α n,l } are positive and increase or decrease at most polynomially with n. Lemma 9. Let P (z) = N (z) D(z) be rational transfer function of order p with relative degree 1, with initial state x 0 ∈ R p . Let T (z) = Γ(z) Θ(z) be a biproper rational transfer function of order t with initial state s 0 ∈ R t . Let where u is an exogenous signal. Then where the initial state of D(z) is x 0 and the initial state of Θ/(ΘD + N Γ) can be taken to be [x 0 s 0 ]. Proof: Let D(z) = 1 − p i=1 d i z −i and N (z) = p i=1 n i z −i . Define the following variables: Then the recursion corresponding to P (z) is This reveals that the initial state of P (z) corresponds to Let Γ(z) = t i=0 γ i z −i and Θ(z) = 1 − t i=1 θ i z −i . Then v = T (z)w can be written as which reveals that the initial state of T (z) can be taken to be s 0 [s 1−t s 2−t · · · s 0 ]. Since y k = u k − v k , it follows that Combining the above recursions, it is found that y is related to the input u by the following recursion: which corresponds to
2015-12-11T14:25:31.000Z
2015-12-11T00:00:00.000
{ "year": 2015, "sha1": "a6d160f5ffe90a9873b402661f6318d332e8f86b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a6d160f5ffe90a9873b402661f6318d332e8f86b", "s2fieldsofstudy": [ "Engineering", "Mathematics", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
119328626
pes2o/s2orc
v3-fos-license
Spin-distribution functionals and correlation energy of the Heisenberg model We analyse the ground-state energy and correlation energy of the Heisenberg model as a function of spin, both in the ferromagnetic and in the antiferromagnetic case, and in one, two and three dimensions. First, we present a comparative analysis of known expressions for the ground-state energy $E_0(S)$ of {\it homogeneous} Heisenberg models. In the one-dimensional antiferromagnetic case we propose an improved expression for $E_0(S)$, which takes into account Bethe-Ansatz data for $S=1/2$. Next, we consider {\it inhomogeneous} Heisenberg models (e.g., exposed to spatially varying external fields). We prove a Hohenberg-Kohn-like theorem stating that in this case the ground-state energy is a functional of the spin distribution, and that this distribution encapsulates the entire physics of the system, regardless of the external fields. Building on this theorem, we then propose a local-density-type approximation that allows one to utilize the results obtained for homogeneous systems also in inhomogeneous situations. We conjecture a scaling law for the dependence of the correlation functional on dimensionality, which is well satisfied by existing numerical data. Finally, we investigate the importance of the spin-correlation energy by comparing results obtained with the proposed correlation functional to ones from an uncorrelated mean-field calculation, taking as our example a linear spin-density wave state. I. INTRODUCTION In this paper we study the ground-state energy, correlation energy and related quantities of the Heisenberg model. The homogeneous Heisenberg model is defined by the Hamiltonian $\hat H_0 = J\sum_{\langle ij\rangle}\hat{\mathbf S}_i\cdot\hat{\mathbf S}_j$, where the $\hat{\mathbf S}_i$ are spin vector operators satisfying $\hat S_i^2|Sm\rangle = S(S+1)|Sm\rangle$ and $\hat S_{i,z}|Sm\rangle = m|Sm\rangle$, and S and m are the spin quantum numbers of the particles under study. Although in accordance with common terminology the $\hat{\mathbf S}_i$ are called spin operators, they really represent total angular momentum and are not restricted to be of purely spin origin. $\langle ij\rangle$ indicates a sum over nearest neighbors on a lattice of dimensionality d, and J is the spin-spin interaction constant, parametrizing the exchange interaction of the underlying microscopic Hamiltonian. [1][2][3] For antiferromagnetism J > 0, while for ferromagnetism J < 0. This model was originally proposed in 1926 to explain ferromagnetism in transition metals, 4,5 but has since then found a large number of other applications to the magnetic properties of matter. [1][2][3] Recent examples are antiferromagnetic chains in complex oxides and other low-dimensional magnets 6 or studies of magnetic effects on crystal-field splittings in rare-earth compounds. 7 The inhomogeneous Heisenberg model, characterized by broken translational symmetry, is obtained by adding a spatially varying magnetic field to $\hat H_0$, where $\mathbf B_i$ can either be an externally applied field or an internal field due to magnetism in the system. A variety of different sources and manifestations of the inhomogeneity $\mathbf B_i$ has been studied in the recent literature, often in conjunction with the synthesis and investigation of real materials, whose magnetic properties are necessarily spatially inhomogeneous, but still to some extent describable by modified Heisenberg models.
6,8 The interpretation of experimental data for realistic materials in terms of the Heisenberg model, in particular in the presence of staggered or otherwise spatially varying magnetic fields, must be based on a solid understanding of the behaviour of the model (2) in the presence of inhomogeneity. The homogeneous Heisenberg model in d = 1 dimension and for S = 1/2 has an exact analytical solution in terms of the Bethe Ansatz, 9,10 but the same Ansatz does not work in higher dimensions, in which no exact solution is known. It is also hard to generalize to inhomogeneous situations. In the present paper we combine exact and approximate results obtained within a variety of different approaches and techniques, to provide a systematic analysis of the ground-state and correlation energy of the Heisenberg model both in the ferromagnetic and in the antiferromagnetic case and for d = 1, 2, 3 dimensions. In Sec. II we provide a comparative analysis of available expressions for the ground-state energy of the homogeneous Heisenberg model as a function of spin S, dimensionality d, and coupling constant J. This section also contains a proposal for an improved expression that goes beyond those available in the literature in taking into account a Bethe Ansatz result for S = 1/2. In Sec. III we use concepts of density-functional theory to extend the utility of the homogeneous results discussed in the preceding section to inhomogeneous situations. This section includes a proof of a Hohenberg-Kohn-type theorem for a large class of generalized Heisenberg models, and a proposal for a simple local-density approximation. In Sec. IV we then study the correlation energy in homogeneous and inhomogeneous Heisenberg models, in order to assess the importance of correlations and quantum fluctuations as a function of dimensionality and spin. As an explicit example for a physically interesting type of inhomogeneity we consider a linear spin-density wave, and explore the differences between a mean-field and a density-functional treatment of the resulting inhomogeneous spin distribution. II. HOMOGENEOUS HEISENBERG MODELS: GROUND-STATE ENERGY AS FUNCTION OF SPIN This section provides a brief review of what is known about the ground-state energy of homogeneous Heisenberg models. Although the expressions collected below for J > 0 and J < 0 and d = 1, 2, 3 are given at various places in the literature, we have not found a systematic collection and comparison at one single place. For the convenience of the reader and future reference we therefore provide such a comparison below. We also add a new expression to the list [Eq. (15)], which is a slight improvement on one of the earlier results. A. Ferromagnetic case Let us first consider the ferromagnetic case, which is much simpler than the antiferromagnetic one. At zero temperature all spins are parallel, and the corresponding spin operators commute with each other. The exact ground-state energy is then the same as that obtained in the mean-field approximation, and is given by $E_0 = \tfrac{z}{2}\,N J S^2$, where z is the number of nearest neighbours, which on linear, square and cubic lattices is related to the dimensionality by d = z/2 and, as above, J < 0 for ferromagnetism. In Fig. 1 we show the resulting curves for the energy per site and interaction strength for one, two and three dimensions. B.
Antiferromagnetic case The antiferromagnetic (AFM) case is much more complicated than the ferromagnetic one, because in spite of its name the ground-state of the antiferromagnetic Heisenberg model is not simply the 'antiferromagnetic' state consisting of alternating spin up and spin down states with respect to a fixed direction (i.e., the Néel state), but a quantum superposition of states involving also spins along the perpendicular axes. In the one-dimensional S = 1/2 case the structure of the corresponding ground-state for N → ∞ is known exactly by means of the Bethe Ansatz. 3,9,10 The corresponding ground-state energy is $E_0 = NJ(\tfrac14 - \ln 2)$ for N → ∞. For the energy per site and interaction strength one obtains from this 10 $e_0^{AFM}(S=\tfrac12, d=1) = \tfrac14 - \ln 2 \approx -0.4431$. Similar exact results are not available in the general case, for arbitrary S and d. However, a set of useful approximate expressions was derived in two early papers by Anderson. 11,12 In the first of these it is shown by means of a variational argument that the energy of the antiferromagnetic ground state must lie in the interval 11 given by Eq. (8), where z = 2d is again the coordination number of the linear, square or cubic lattice under study. A simple estimate is obtained by using the center of this interval, 12 i.e., Eq. (9), but the quality of this estimate, which is quite good for d = 3, deteriorates for d = 2 and d = 1. A numerical calculation based on spin-wave theory leads to more precise results. 12 We can assess the quality of these expressions by substituting S = 1/2 in the first of them. The result $e_0^{AFM}(S=\tfrac12, d=1) = -0.43169$ is within 2.6% of the exact Bethe Ansatz value reported above. Indeed, significant improvement over Eq. (10) has only been obtained recently, with the use of modern computing facilities and advanced numerical techniques. In such work Lou et al. 13 used (50 years after Ref. 12) the density-matrix renormalization group (DMRG) to calculate corrections to Eq. (10) for values of S ranging from 1/2 to 5 in steps of 1/2. These authors propose a fit to their numerical data, which in our present notation reads as Eq. (14). We note that for S = 1/2, where it predicts $e_0^{AFM,\mathrm{fit1}}(S=\tfrac12, d=1) = -0.516459$, this fit is actually worse than the earlier expression (11), deviating by 17% from the exact Bethe Ansatz value. On the other hand, as shown in Ref. 13 by comparison with DMRG data and other highly precise numerical results, the fit is excellent for higher values of S. In order to obtain a closed expression that can also be applied at S = 1/2, a slight modification to the fit by Lou et al. is sufficient. To this end we propose the alternative expression, Eq. (15), which differs from (14) in the inclusion of two cubic terms in 1/S. The value at S = 1/2 predicted by this expression, $e_0^{AFM,\mathrm{fit2}}(S=\tfrac12, d=1) = -0.446253$, deviates by only 0.7% from the exact Bethe Ansatz value. Figures 2 and 3 display the various AFM energy expressions collected above. In Fig. 2 we compare the rigorous but rather wide interval provided by expression (8) with the spin-wave results (11)-(13) and (in d = 1) the DMRG data of Ref. 13. Obviously, on this scale the spin-wave expression (11) already provides an excellent approximation to the DMRG data. Neither of the two fits (14) and (15) can be distinguished from the much simpler expression (11) on the scale of the figure. Unfortunately, no highly precise numerical reference data, similar to the DMRG results of Ref.
13, seem to be available in d = 2 and d = 3, but the approximations leading to the simple analytical formulae given above are expected to work better as d increases. This expectation is corroborated by noting that the rigorous interval (8) shrinks with increasing d. Interestingly, both the numerically highly precise DMRG data of Ref. 13 (in d = 1) and the spin-wave expressions (11) to (13) (in d = 1, 2, 3) systematically lie closer to the more negative boundary of the interval than to the less negative one, and in d = 1 the DMRG values are still a little closer to this boundary than the curve predicted by Eq. (11). This shows that the lower bound in Eq. (8) is tighter than the upper one. The simple estimate (9), on the other hand, by construction falls in the middle of the interval and becomes less reliable for lower d. In the interest of readability we have not displayed the curves corresponding to this estimate in the figures. Figure 3 shows that on a smaller scale the differences between the more precise expressions become important. Here we compare the two fits, (14) and (15), to the DMRG data. To make the details of the fits, and the interesting oscillatory structure they display, clearly visible, we have subtracted the spin-wave expression (11), which is common to both fits. The fit proposed above, Eq. (15), is slightly inferior to the one developed by Lou et al., Eq. (14), around S = 3/2, but unlike the latter recovers the exactly known S = 1/2 data point to within less than one percent. In the present paper we are mainly concerned with the homogeneous or inhomogeneous Heisenberg model on linear, square and cubic lattices. Expressions (or numerical values) for the ground-state energy can also be derived for many other variations of the Heisenberg model, such as lattices with helical boundary conditions, 14 or with anisotropic interactions. 15 Although we do not consider such models in the present paper, many of our results can be extended to them in a straightforward way. III. INHOMOGENEOUS HEISENBERG MODELS: SPIN-DISTRIBUTION FUNCTIONALS Based on the analysis of the preceeding section we recommend the use of expressions (11) to (13) in calculations requiring simple expressions for the ground-state energy of the homogeneous antiferromagnetic Heisenberg model in one, two, and three dimensions. In one dimension, where the simple expressions fare worst, either of the two fits (14) and (15) provides a significant improvement in accuracy, but only the latter recovers the Bethe Ansatz value at S = 1/2. However, the utility of any of these expressions is rather limited due to the restriction to spatial homogeneity. Externally applied magnetic fields that vary in space, calculations of magnetic effects on crystal-field splitting, description of nontrivial inter-nal order, etc., require use of the inhomogeneous Heisenberg model. 8 Unfortunately, if translational invariance is broken the Bethe Ansatz, spin-wave theory, DMRG and most other approaches encounter very significant computational difficulties. In the case of ab initio calculations a many-body technique that has had considerable success in the application to inhomogeneous systems is density-functional theory (DFT), 16-18 but it is not very common to apply DFT also to model Hamiltonians. However, following pioneering work by Gunnarsson and Schönhammer, 19 DFT was recently formulated and applied for the one-dimensional Hubbard model. 
[20][21][22] In this section we build on this experience to explore how DFT can become a useful tool also in studies of the inhomogeneous Heisenberg model. To this end we prove, in subsection III A, a Hohenberg-Kohn-type theorem for a wide class of generalized Heisenberg models (of which the models discussed above are special cases). In subsection III B we then use this theorem and the explicit expressions discussed in Sec. II to construct a simple local-density approximation for inhomogeneous Heisenberg models. A. Hohenberg-Kohn theorem for generalized Heisenberg models A first question that must be answered before DFT can be usefully employed is what the fundamental variable is. In ab initio DFT one mostly chooses the particle density n(r) or its spin-resolved counterpart n σ (r), [16][17][18] although other choices are occasionally useful. [23][24][25] In the case of the Hubbard model the basic variable is the site occupation number n i . [19][20][21][22] In the present case we propose to use the spin vector S i , which is the only fundamental dynamical variable appearing in the definition of the Heisenberg model. In the interest of generality, in the present section we consider a generalized Heisenberg model of the form Unlike in Eqs. (1) and (2) the sum in the first term on the right-hand side is not restricted to nearest neighbours, and the interaction J ij can depend in any way on the indices of the involved sites. In particular, it can extend to next-nearest neighbours and beyond, or alternate between ferromagnetic and antiferromagnetic along some direction in the crystal. Both of these features are found in realistic magnetic crystals. This relaxation of constraints on J may appear a considerable complication, but it turns out that the proof of the Hohenberg-Kohn theorem is essentially unaffected by the extra generality. (As pointed out above, we do not consider anisotropic Heisenberg models, in which J couples differently to different components of S, in this paper, but the generalization of the theorem to this case is straightforward.) Following the steps of Hohenberg and Kohn, we now consider two Hamiltonians with same interaction J ij , but exposed to two different magnetic fields B i and B ′ i . Thuŝ where E 0 and E ′ 0 are the ground-state energies in the fields B i and B ′ i , and Ψ and Ψ ′ are the corresponding ground-state wave functions. As a consequence of the variational principle we have the inequality since Ψ ′ is not the ground-state wave function belonging toĤ (assumed nondegenerate). By adding and subtracting the term iŜ i · B ′ i on the right-hand side, the inequality becomes Here the first term on the right-hand side is just the ground-state energy E ′ 0 , of HamiltonianĤ ′ . With the Now we repeat the same argument starting with the HamiltonianĤ ′ . The variational principle guarantees that By adding and subtracting the term iŜ i ·B i , we obtain, in the same way as before, Addition of Eqs. (21) and (23) leads to If we now assume that S ′ i = S i , i.e., that the two spin distributions corresponding to the two different wave functions Ψ and Ψ ′ are identical, then the previous equation reduces to the contradiction This contradiction shows that two distinct nondegenerate ground states can never lead to the same spin distribution. Hence, given some arbitrary spin distribution S i there is at most one nondegenerate wave function which gives rise to it. In other words: the spin distribution uniquely determines the wave function. 
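The displayed inequalities in this argument are referred to only in words above; as a sketch of the standard chain in the paper's notation, and assuming the field enters the Hamiltonian as $+\sum_i \hat{\mathbf S}_i\cdot\mathbf B_i$ (an assumption about the sign convention, not a statement taken from the source), the two applications of the variational principle read

$$E_0 < \langle\Psi'|\hat H|\Psi'\rangle = E_0' + \sum_i \mathbf S_i'\cdot(\mathbf B_i - \mathbf B_i'), \qquad E_0' < \langle\Psi|\hat H'|\Psi\rangle = E_0 + \sum_i \mathbf S_i\cdot(\mathbf B_i' - \mathbf B_i).$$

Adding the two and setting $\mathbf S_i' = \mathbf S_i$ makes the field terms cancel, leaving $E_0 + E_0' < E_0 + E_0'$, which is the contradiction invoked in the text.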
This means that the wave function is a functional 26 of the spin distribution, i.e., Ψ = Ψ[S i ]. This is the statement of the Hohenberg-Kohn theorem for the Heisenberg model. For completeness we mention that the above proof by contradiction, patterned after the one first presented by Hohenberg and Kohn in the ab initio case, 27 is not the only possible one. The constrained-search technique of Levy 28 and Lieb 29 is also easily adapted to the present case. The ground-state wave function in this approach is uniquely defined by its spin distribution as the wave function that minimizes Ψ|Ĥ|Ψ and reproduces S i . This minimization defines the functional whose minimum is the ground-state energy. An immediate consequence of either formulation of the proof is that the ground-state expectation value of any observableÔ, is also a functional of the spin distribution, defined via and this functional is the same regardless of the strength and direction of the magnetic field B i , i.e., it is universal with respect to external fields. Note that the theorem applies to any ground-state observable. For example, it implies that also all multi-spin correlation functions of the general form C n,n+1,n+2,... := Ψ|Ŝ nŜn+1Ŝn+2 . . . |Ψ (28) are uniquely determined by the single-spin expectation value S n = Ψ|Ŝ n |Ψ . This is trivially true for the nearest-neighbor correlation function C hom n,n+1 of a homogeneous one-dimensional system, which as a consequence of the definition of the homogeneous Hamiltonian (1) is simply given by but the above proof guarantees that the more complicated correlation functions involving more than two spins and/or spatially inhomogeneous spin distributions are, in principle, also functions of S i only. B. Local-density approximation Another consequence of this Heisenberg-model formulation of the Hohenberg-Kohn theorem is that the model's ground-state energy and spin distribution can be obtained by application of the variational principle to spin distributions, instead of wave functions. In complex situations this can be a major simplification, but to extract this information is, of course, still highly nontrivial. The most straightforward thing to do would be to set up an approximation for the total energy of the system under study as a functional of the spin distribution, and to minimize with respect to S i . In ab initio DFT this is not the preferred way to proceed because it turns out to be hard to conceive good density functionals for the kinetic energy. In practical applications of ab initio DFT one therefore commonly employs an indirect minimization scheme, leading to the widely used Kohn-Sham equations. Although this could also be done in the present case, there is no need for introducing a Kohn-Sham system for the Heisenberg model, since there is no kinetic-energy term in the first place. Direct minimization of total-energy expressions seems much more convenient (and more analogous to the way the model is usually treated in statistical physics) than indirect minimizations. In order to explore how such a direct minimization can proceed we first construct, in this subsection, a localdensity approximation for the simpler Heisenberg models discussed in the preceeding section. Let us write the two contributions to the ground-state energy of the inhomogeneous Heisenberg model as E J and E B , which are the ground-state expectation values of the first and second term on the right-hand side of Eq. (2), respectively. 
The mean-field approximation for E J yields where, as above, S i = Ψ|Ŝ i |Ψ . In Sec. IV we quantify the error made by the neglect of correlation effects arising from use of E MF J [S i ] in place of E J [S i ]. A guideline for the construction of better functionals than E MF J [S i ] is provided by ab initio DFT 16 or recent work on the Hubbard model. [20][21][22] In the former, the total energy of an arbitrarily inhomogeneous system is written as where T s is the noninteracting kinetic energy, the Hartree energy, and E v the potential energy arising from the external field v(r). The local-density approximation (LDA) for the exchange-correlation (xc) energy is This expression locally substitutes the xc energy of the inhomogeneous system by the one of a homogeneous system of same density. The necessary input expression for e xc (n) is obtained by subtracting the Hartree and noninteracting kinetic energy from the ground-state energy of the homogeneous system, e 0 (n). In the present context we write, in analogy to Eq. (31), where E MF J is defined in Eq. (30), E B is the potential energy arising from the external field B i , and E c is by definition the difference between the mean-field result and the correct one, i.e., the correlation energy. (There is no Heisenberg-model counterpart to the kinetic energy term, and we avoid the expression 'exchange-correlation energy' because in common terminology the entire Heisenberg Hamiltonian is due to 'exchange'.) To obtain an explicit scheme we now propose to approximate, in analogy to (33), where e c (S) is obtained by subtracting the mean-field energy, −dS 2 , from the homogeneous expressions for e 0 (S) discussed in Sec. II. As an explicit example, the LDA approximation for the correlation energy of an inhomogeneous antiferromagnetic Heisenberg model in one dimension becomes where we used Eq. (10) for e 0 (S). Of course Eqs. (14) or (15) can be used in the same way in d = 1, and Eqs. (12) and (13) in d = 2 and d = 3, respectively. The full ground-state energy is then for any d approximated as Clearly Eq. (35) is a rather simple approximation, whose quality may vary widely depending on the circumstances (e.g., values of J and d, or spatial dependence of B i ). At present it is motivated mainly by the considerable practical success of its counterpart in ab inito DFT, [16][17][18] and by the encouraging results obtained recently with a Bethe-Ansatz based LDA for inhomogeneous Hubbard models. [20][21][22] It is clear, however, that the LDA contains essential correlation effects not accounted for by the mean-field expression (30). In spite of the extra term, minimization of (34) with (35) is no more complicated than that of (30). Eq. (35) thus shows one way in which the expressions listed in Sec. II for homogeneous systems can be applied to inhomogeneous situations. A simple application of these ideas is worked out in Sec. IV B. C. A scaling hypothesis An interesting feature of the functionals obtained by combining Eq. (35) with the explicit formulae of Sec. II is that the resulting expressions depend explicitly on the interaction J and the dimensionality d. The dependence of the ab initio functionals on these parameters is not well known, and in particular the d-dependence of the functional is still subject of many ongoing investigations. 
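Since Eqs. (30)-(36) are described above only in words, one explicit reading of the proposed scheme (a sketch; the normalization and the use of the local spin magnitude as the argument of $e_c$ are our assumptions) is

$$E^{\rm MF}_J[\mathbf S_i] = J\sum_{\langle ij\rangle}\mathbf S_i\cdot\mathbf S_j, \qquad E^{\rm LDA}[\mathbf S_i] = E^{\rm MF}_J[\mathbf S_i] + E_B[\mathbf S_i] + \sum_i e_c\big(|\mathbf S_i|\big),$$

with $e_c(S) = e_0(S) + dS^2$ obtained from the homogeneous expressions of Sec. II; written this way, minimization of $E^{\rm LDA}$ with respect to the spin distribution is no harder than minimization of the mean-field energy alone.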
30 Even in the much simpler case of the Hubbard model it is only the interaction dependence which is featured explicitly in the available functionals, [19][20][21][22] whereas the dependence on dimensionality is essentially unknown. In this context it may be useful to have, for the Heisenberg model, a number of simple expressions that depend explicitly on dimensionality and interaction, so that the role of these parameters in the functional can be explored in a simplified environment. As an explicit example, we consider scaling properties of the functional as a function of dimensionality d. The Hartree-like term ∝ S 2 in Eqs. (11) to (13) Interestingly, the correlation energy contribution ∝ S also obeys a similar, albeit less obvious, scaling law. From the explicit expressions (11) to (13) where x and y are exponents to be determined. Numerically one finds x = −0.201 and y = −0.203. The near-equality of these two exponents among each other and to the integer fraction −1/5 leads us to conjecture the following dimensional scaling law: where the scaling exponent η = 1/5. This scaling law accounts for the numbers in Eqs. (11) to (13) to within ≈ 10 −3 . Of course, at present the scaling law (41) is only a conjecture, but one that is consistent with the numbers of spin-wave theory. It also correctly predicts that as d → ∞ the correlation energy vanishes, leaving behind only the mean-field contribution to the total energy. Since very little is known about the dimension dependence of density functionals we cannot say at present whether the existence of such a law is a mere coincidence, a particular property of the Heisenberg model, or a general phenomenon, but we hope that our observation of dimensional scaling stimulates further research along these lines. One practical use that can be made of Eq. (41) is to convert an approximate functional obtained for some value of d into one for another dimensionality. Counterparts to this property for other Hamiltonians would be interesting not only for ab initio calculations (in which many results are known for d = 3, but much less in d = 2 or d = 1), 30 but also in the case of the Hubbard model, in which the LDA functional is known only for d = 1. 20-22 IV. CORRELATION ENERGY OF THE ANTIFERROMAGNETIC HEISENBERG MODEL In this section we apply the results obtained above to a study of the correlation energy of the antiferromagnetic Heisenberg model. In Sec. IV A we compare the homogeneous expressions of Sec. II with their mean-field approximation, to assess the importance and behaviour of the correlation energy. In Sec. IV B we study a physically interesting inhomogeneity, a spin-density wave, with the LDA functional (36). A. Homogeneous system As in the previous section we define the correlation energy as the difference between the total ground-state energy and its mean-field approximation. For a homogeneous system on a linear, square or cubic lattice the latter yields and thus e 0 (S) = −dS 2 . In the inset of Fig. 4 we plot the difference between this value and the total energy expressions Eqs. (11), (12) and (13). For one dimension we also plot the difference between the mean-field result and the more precise expression (15). These differences represent the absolute size of the correlation energy. In the main part of Fig. 4 we display the relative size of the correlation energy as compared to the mean-field energy. 
Several conclusions can be drawn from inspection of these curves: (i) The inset of Fig 4 shows that the absolute size of the correlation energy increases towards larger spins. This seems counterintuitive, because larger spins should more closely mimick the classical limit, in which there are no quantum fluctuations and the mean-field approximation becomes exact in the ground state. However, as shown in the main figure, the relative weight of the correlation energy as compared to the mean-field energy decreases towards larger spins. Interestingly, the naive expectation that correlations should become less important near the classical limit is thus only true in relative terms, but not in absolute ones. (ii) On similar grounds one would expect that correlations become less important for larger dimensionality. This is confirmed both by the main figure and the inset, showing that correlations decrease in absolute size and relative to the mean-field energy as d increases. The way the d-dependence approaches the classical limit is thus qualitatively different from the way the spin-dependence does. (iii) The mean-field energy is not reliable for any dimensionality d ≤ 3 and S < 5, leading to errors that can be larger than J in absolute size and larger than 50% in relative terms. This observation puts tight limits on the reliability of this rather widely used approximation. (iv) The improved treatment of correlations in d = 1, represented by the curves labeled 'DMRG fit', does not invalidate conclusions (i), (ii), and (iii), obtained on the basis of the spin-wave expressions (11) to (13). However, both the main figure and the inset of Fig. (4) show that it enhances the importance of correlations with respect to the mean-field values, as compared to the simpler expressions. B. Example of an inhomogeneous system: a linear spin-density wave As an example of a truly inhomogeneous situation, to which our LDA functional (35) can be applied, we now consider a simple but physically interesting inhomogeneity, namely a spin-density wave (SDW) imposed by an external field on a chain with antiferromagnetic coupling. We model the SDW state by taking where φ n = 2π(n − 1)/λ and u x denotes the unit vector in the x direction. This choice describes a linear SDW of amplitude S and wave length λ, polarized along the x-direction. (The lattice is taken to be a chain along the z-direction.) The corresponding mean-field energy is where N is the number of lattice sites, and B n is a magnetic field that can be thought of as either externally applied [thus forcing the system into a state with spin distribution (43)] or generated self-consistently, or a combination of both. The LDA approximation for the ground-state energy is, on the other hand, where we use, for simplicity, Eq. (36) for E LDA c [S i ]. For the given spin distribution (43) we now compare the predictions of the mean-field and LDA expressions for the interaction energy E J . Note that this is not a self-consistent calculation, but a comparison of the two expressions (44) and (45) for a fixed distribution specified by (43). In Fig. 5 we plot E MF J , E LDA J and E LDA c as functions of λ, the wave length of the SDW. Since the term E B , arising from the magnetic field, is the same in both approximations we only display the interaction energy E J . The presence of an external field with symmetry different from the one of the ground-state of the unperturbed antiferromagnetic system gives rise to a rich physics. 
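As an illustration of the comparison just described (a sketch, not the authors' code: the coefficient 0.363 in $e_c(S)\approx -0.363\,S$ is an assumed placeholder for the $d=1$ spin-wave constant entering Eq. (11), and energies are expressed per $|J|$), the two interaction-energy expressions for the SDW distribution (43) can be evaluated as follows.

import numpy as np

def sdw_energies(S=0.5, lam=10.0, N=1000, J=1.0, c1=0.363):
    """Mean-field vs. LDA interaction energy (per |J|) for the linear SDW of Eq. (43).

    S, lam : SDW amplitude and wavelength (in lattice constants)
    c1     : assumed d=1 spin-wave coefficient, so that e_c(S) ~ -c1*S
    """
    n = np.arange(1, N + 1)
    phi = 2.0 * np.pi * (n - 1) / lam
    Sx = S * np.cos(phi)                       # spin distribution S_n = S cos(phi_n) u_x

    # Mean-field interaction energy: E_J^MF = J * sum_n S_n . S_(n+1)
    e_mf = J * np.sum(Sx[:-1] * Sx[1:])

    # LDA correlation energy: homogeneous e_c evaluated at the local spin magnitude |S_n|
    e_c = -c1 * np.sum(np.abs(Sx))

    return e_mf, e_mf + e_c                    # E_J^MF and E_J^LDA = E_J^MF + E_c^LDA

if __name__ == "__main__":
    for lam in (2, 4, 10, 50):
        mf, lda = sdw_energies(lam=lam)
        print(f"lambda={lam:3d}  E_J^MF={mf:8.2f}  E_J^LDA={lda:8.2f}")

Under these assumptions the sketch reproduces the qualitative behaviour discussed in the following points: the mean-field energy grows as the SDW approaches the ferromagnetic limit of large wavelength, while the correlation term shifts the LDA curve downwards for all wavelengths.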
Particularly, we note the following points: (i) Addition of the correlation energy to the mean-field result lowers the total energy considerably. This is, qualitatively, the same behaviour we found previously in homogeneous systems and illustrates again the importance of going beyond the mean-field expression for the energy. (ii) As λ → ∞ the SDW approaches a ferromagnetic spin configuration. Since the unperturbed homogeneous model is antiferromagnetic this state is energetically unfavored, and corresponds to a maximum of the E versus λ curves. In the opposite limit the spatial modulation of the SDW can take local advantage of the AFM tendency of the underlying homogeneous system. This leads to a lowering of the energy as λ → 0. (iii) Once λ is larger than approximately 10 lattice constants the E versus λ curves saturate. We interpret this in terms of the correlation length of the antiferromagnetic model by noting that once the SDW modulation takes place on a scale larger than a few correlation lengths, the system will be relatively insensitive to further approximation to the ferromagnetic state. (iv) The overall downshift of the LDA curve compared to the mean-field one implies that the ferromagnetic configuration is energetically less unfavorable in the former approximation than in the latter. Physically this is reasonable, because the correlations accounted for by E c break up the rigid AFM pattern of the Neel state found in the mean-field approximation, and replace it by a complex ground-state involving spins along all three directions in space. V. SUMMARY AND OUTLOOK Density-functional theory is commonly applied to the ab initio Hamiltonian, in the context of electronic structure calculations of molecules or solids. [16][17][18] Applications to model Hamiltonians are rare, although they can be useful both for the analysis of these models in the presence of inhomogeneity (broken translational invariance), and for further development of DFT. The present work on the Heisenberg model serves to exemplify these two complementary aspects. The Heisenberg Hamiltonian is much simpler than the Hubbard Hamiltonian, which describes the chargedegrees of freedom in addition to the spin ones, or than the ab initio Hamiltonian involving the long-range Coulomb interaction between charges in a real crystal. The Heisenberg model may therefore be considered a simplified environment in which concepts and methods of DFT can be tested and analysed. An interesting example is the study of the interaction and dimension dependence of xc functionals, about little is known in the ab initio case. On the other hand, the LDA for inhomogeneous Heisenberg models is no more complicated, formally, than the mean-field approximation, but it accounts by construction for essential correlation effects missed by the latter. Analysis of the correlation energy of homogeneous and inhomogeneous Heisenberg models, as function of spin and dimensionality, illustrates that such effects are crucial for a quantitative description of the ground-state. The above LDA functional may thus be useful in studies of the behaviour of the Heisenberg model in spatially varying external fields or in the presence of internal magnetic order that breaks translational symmetry. The simple LDA-type correlation functional based on Eqs. (35) and (11) is seen to produce significant quantitative change and qualitative improvement over the meanfield approximation, at very little extra computational cost. 
This observation encourages us to envisage more complex applications of this functional, such as to impurity states in the Heisenberg model. An extension of the present work to a study of the thermodynamics of inhomogeneous Heisenberg models (employing the T > 0 formulation of DFT 31 ) is also planned for the future.
2019-04-14T02:02:52.888Z
2003-05-29T00:00:00.000
{ "year": 2003, "sha1": "feace26c7d14ab25b742b3405de512776e6dffda", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0305690", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a6562d80c02d554011f0ac680f82251fa80c228c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
103050360
pes2o/s2orc
v3-fos-license
Morphological, Chemical, and Thermal Characteristics of Nanofibrillated Cellulose Isolated Using Chemo-mechanical Methods The objective of this research was to analyze the morphology, crystallinity, elemental components, and functional group changes, as well as thermal stability of nanofibrillated cellulose (NFC). Nanofibrillated cellulose has an irregular and aggregated shape with a diameter of about 100 nm. NFC self-aggregations were observed due to hydrogen bonding and Van-der Waals forces. The cellulose crystallinity index, atomic size, and polymorph of the NFC sample were found to be 63.57%, 2.2 nm, and cellulose I, respectively. The NFC sample was composed of various elemental components, such as C, O, N, Na, Al, Si, and K. IR analysis showed only small amounts of hemicellulose and lignin deposits, whereas cellulose functional groups appeared in several wavenumbers. Aromatic and oxygenated compounds, such as carboxylic acids, phenols, ketones, and aldehydes, were deposited as extractive on NFC; these compounds were associated with cellulose, hemicellulose, and lignin. The NFC thermal degradation process consisted of four steps: water evaporation (50-90 oC); hemicellulose degradation and glycosidic linkage cleavage (250-325 oC); amorphous cellulose and lignin degradation (325-429.29 oC); and cellulose crystalline degradation (above 429.29 oC). Introduction The utilization of cellulose nanofibers as a reinforcing agent has been tremendously interesting for versatile nanocomposite products in several industries. This is due to its excellent properties, such as the large and active surface area, high abundance, renewability, biodegradability, biocompatibility, low weight, high strength and stiffness, low thermal expansion, environmental friendliness, and low cost [1][2][3][4]. Different lignocellulosic wastes were used as the source of cellulose nanofibers, beneficial as nanocomposite reinforcing agents, such as oil palm empty fruit bunch [5], paper [6], cotton [7], flax [8], wood [9], wheat straw [10], sugarcane bagasse [11], and bamboo [12]. One of the forms of cellulose nanofibers is nanofibrillated cellulose (NFC), or microfibrillated cellulose (MFC), which can be isolated by mechanical methods, such as disk milling, high-pressure homogenization, microfluidizers, or sonication. The isolation of NFC has been reported in previous studies using different methods, such as high-pressure homogenization [13], ultra-fine grinder [4,14], disk milling [15], and ultrasonication [16]. To optimize the production of NFC, chemical pretreatment can be initially applied to obtain pure cellulose without deposited cementing agents (hemicellulose, pectin, extractives, and lignin). To obtain the purified cellulose, low molecular weight carbohydrate (hemicellulose and pectin) and extractives removal are indispensable. This process can be carried out, respectively, by hot water and organic solvent extraction [17,18], and enzyme or fungi retting method [19][20][21]. Certain lipophilic extractives associated with cellulose also contribute as inhibiting agents for cellulose purification, such as condensed tannin, fatty acid, wax, alcohol, sterol, glyceride, ketone, and other oxygen-containing compounds [22]. Besides those agents, recalcitrant amorphous lignin can be removed through delignification and bleaching process. 
Sodium hydroxide and sulfuric acid can be used for delignification and hemicellulose [23] removal, whereas sodium hypochlorite [24] and sodium chlorite [25] are commonly used for the bleaching process of cellulose nanofibers. Sodium hydroxide is generally used for lignin and hemicellulose removal [26]. However, the acidified sodium chlorite is not an ecofriendly reagent for delignification because it produces a chlorine radical (Cl•), which is toxic in nature. In addition, alkaline (KOH) and acid (HCl) can be used to remove lignin and hemicellulose [27,28], and its combination can be used to remove silica before fabricating nanocellulose [29]. An eco-friendly chemical pretreatment used for delignification and bleaching requires the use of hydrogen peroxide and organic acids, such as acetic acid and formic acid. The utilization of these reagents has been reported by Nazir et al. [30], Rayung et al. [31,and Fatah et al. [5]. Hydrogen peroxide plays a role as an oxidizing bleaching agent that generates an obvious effect on the brightness of lignocellulose fibers, whereas formic acid is a less corrosive and more stable medium for monosaccharide than sulfuric acid. Besides the chemical pretreatment, ultrasonication is an effective mechanical method for synthesizing nanofibrillated cellulose with a diameter of between 5 nm and 15 nm [32][33][34]. High-intensity ultrasonication is also utilized with output power of between 1000 and 1200 W in the time range of 30-60 minutes [35,36]. Ultrasound energy induces cavitation, which can disrupt physical and chemical systems, and degrade polysaccharide linkages [2]. The disruption of polysaccharide linkages is brought about by sonochemistry energy with approximately 10-100 kJ/mol in which the energy is within the hydrogen bond energy scale [37]. From the above illustration, a combination method that includes eco-friendly chemical treatment, dry disk milling, high-speed centrifugation, and ultrasonication is an effective and promising method to isolate NFC. Dry disk milling pretreatment has not yet been utilized for isolating nanofibrillated cellulose but it has been used for obtaining microfibers before producing lignocellulose nanofibers [15,38,39]. In this study, sodium hydroxide, hydrogen peroxide, and formic acid were used as chemical purification agents for extracting cellulose prior to NFC isolation. After utilizing these reagents, centrifugation and ultrasonication will be harnessed to synthesize NFC. The resultant NFC was analyzed for its basic characteristics using SEM, TEM, PSA, XRD, EDS, FT-IR, GC-MS, and STA. The objective of the research was to analyze the morphology, crystallinity, elemental components, and functional group changes, as well as thermal stability. Materials and Methods Materials. Oil palm empty fruit bunch (OPEFB) fibers were taken from PT Perkebunan Kelapa Sawit Nusantara VIII, Bogor, West Java, Indonesia. Prior to utilization, the fibers were manually cut and separated into two major parts: spikelet and stalk. From these parts, the ratio of spikelet and stalk utilized for this study was 1:1 (wt%). The analytical grades of the chemicals were 99.8% ethanol, 100% acetone, 30% hydrogen peroxide, 98% formic acid, and sodium hydroxide (pellet), indirectly supplied from Merck KGaA, 64271 Darmstadt, Germany. Preparation of dry-disk milled OPEFB microfibers. OPEFB spikelet and stalk fibers were weighed at a ratio of 1:1 (wt%) and mixed. 
These mixed fibers were washed with water and detergent to leach impurities (oil, sand, soil, stones, etc.). They were subsequently air-dried under sunlight for 7 days. To easily obtain micro-sized fibers of about 75 µm, the fibers were pretreated in a conventional oven (Memmert, Germany) at 100 °C for 15 min, and milled with a dry disk mill equipped with an AC motor (Siemens TEC 112M, Germany) at 2895 rpm for 3-4 cycles [38]. The pulverized fibers were sieved with a 200-mesh (75 µm) test sieve (ABM Test Sieve Analys ASTM E:11, Indonesia). Chemical treatment of OPEFB fibers. Since the extractives were dominantly deposited in the cell lumina [40] and cell walls [41] of the OPEFB fibers, the obtained microfibers (20 g) were extracted in a Soxhlet apparatus with 300 mL of ethanol:acetone at a ratio of 1:2 (v/v%) for 8 h. The extraction was intended to eliminate some lipophilic extractives (waxes, fatty acids, phenols, ketones, etc.) that might inhibit the isolation of cellulose [42,43]. The chemical method used for cellulose isolation was modified from Nazir et al. [30]. A mixture of 100 mL of 5% sodium hydroxide and 100 mL of 5% hydrogen peroxide was used to remove the lignin (delignification) encrusted in the OPEFB microfibers (2 g). The OPEFB microfibers suspended in this solution were then autoclaved (Autoclave ES-315, Tomy Kogyo Co Ltd, Japan) at 121 °C under a pressure of 1.5 bar for 1 h. The autoclaved microfibers were washed several times with deionized water until clear water was obtained. Another delignification step was carried out by immersing the microfibers in a 10% hydrogen peroxide and 20% formic acid solution at a ratio of 1:1 in a shaking water bath (WSB-30, South Korea) at 85 °C with a shaking rate of 75 rpm for 2 h. After this step, the microfibers were re-washed with deionized water. The OPEFB microfibers were then re-suspended in a mixture of 5% hydrogen peroxide and 5% sodium hydroxide, and heated in the shaking water bath at 60 °C with a shaking rate of 90 rpm for 90 min. Finally, the resultant cellulose derived from the OPEFB microfibers was re-washed with deionized water several times. Mechanical method for NFC isolation. The obtained cellulose was centrifuged at high speed (13000 rpm) for 20 min until the supernatant and filtrate were completely separated. The cellulose supernatant (1% v/v) was diluted with deionized water before ultrasonication. Ultrasonication (Ultrasonic Processor, Cole Parmer Instrument, USA) was performed in an ice water bath for 25 min with an amplitude of 40%, a power of 130 W, and a frequency of 20 kHz. The device was equipped with a 6 mm titanium probe as well as a tip and foot switch connector. Characterizations Morphology and nanostructure analysis. The external surface morphology and nanostructure of NFC were examined using a scanning electron microscope (JSM6510LV, JEOL, Japan) and a transmission electron microscope (JEOL, Japan), respectively. For SEM, the NFC was coated with gold using an autofine coater (JEOL JFC 1600, Japan) and imaged at an acceleration voltage of 10 kV with 900× magnification. For TEM, a drop of diluted NFC suspension was deposited, using a sputter coater (JEC 560, JEOL, Japan), onto a glow-discharged, 400-mesh carbon-coated TEM copper grid. The sample was not stained with uranyl acetate. Particle size distribution. The particle size distribution of NFC was investigated with a particle size analyzer (VASCO Flex, France) equipped with NanoQ software.
In the investigation, the cumulants method was used rather than the statistical and Padé-Laplace methods. Prior to measurement, the 1% (v/v) NFC supernatant was diluted with deionized water and analyzed in the operating system at pH 7.0 with a laser power of 100% at room temperature. Crystallinity index and size. The NFC crystallinity index (CrI), atomic crystalline size (ACS), and cellulose polymorph were analyzed using X-ray diffraction (MAXima X XRD-7000, Shimadzu, Japan). The instrument was also equipped with JCPDS ICDD Software 1997 to investigate the cellulose polymorph. The diffraction angle ranged from 2θ = 10° to 2θ = 40° at a speed of 2°/min, with monochromatic CuKα radiation (λ = 0.15418 nm) utilized as an initial measuring parameter. The crystallinity index (CrI) and crystal size of the NFC samples were calculated based on Segal's empirical method [44] and the Scherrer equation [45,46], respectively: CrI (%) = (I002 − Iam)/I002 × 100 (1), where I002 is the intensity value of crystalline cellulose (2θ = 22.2°) and Iam is the intensity value of amorphous cellulose (2θ = 17.2°); and D = kλ/(β cos θ) (2), where D is the atomic crystal size (nm), k is the medium form factor (0.94), λ is the X-ray radiation wavelength (1.5406 Å), β is the full width at half maximum (FWHM) of the 002 reflection, and θ is the diffraction angle of the highest peak (a short numerical illustration of these two formulas is given in the sketch below). Elemental components. The elemental analysis of NFC was carried out using energy-dispersive X-ray spectroscopy (JEOL EDS, Tokyo, Japan) equipped with a ZAF Method Standardless Quantitative Analysis. The measurement was conducted at 10.0 kV accumulation voltage over a 0-20 keV energy range, along with the analysis of the microstructure of NFC using a scanning electron microscope (JSM6510 LV, JEOL, Japan). The X-ray signal was detected using a lithium-drifted silicon detector in a solid-state device. Functional chemical groups. A Fourier transform-infrared spectrometer (MB3000, ABB, Canada) was used to analyze the changes in the chemical functional groups of NFC. The wavenumber range used was between 4000 and 370 cm-1 with KBr:NFC at a ratio of 1:1. Bio-oil and extractives components. The extractives and chemical composition deposited on NFC were investigated by gas chromatography-mass spectrometry (GC-System 7890A/MS 5975, Agilent Technology, USA). The instrument was equipped with a medium-polarity capillary column (HP-5MS, 60 m × 0.25 mm, film thickness 0.25 µm, Agilent) with an ultra-high-purity helium flow rate of 1.0 mL/min. Diluted NFC (one microliter) dissolved in ethanol was injected in splitless mode (split ratio 10:1) with an injector temperature of 325 °C (100 °C for 0 min, then 15 °C/min to 290 °C for 28 min), and a total runtime of 40.667 min. The scan mass range used was between 50 and 1000 m/z, the electron ionization potential was 70 eV, and the solvent delay time was 6 min. Thermal stability analysis. The thermal stability of NFC was analyzed using a simultaneous thermal analyzer (PerkinElmer STA 6000, USA). In this study, the instrument combined TGA and DSC, and the acquired curves were the change in weight and the heat flow as a function of temperature. The NFC sample (about 3.15 mg) was placed in an aluminum pan and heated in the temperature range of 40-800 °C. The scanning rate of the measurement was 10 °C/min with a nitrogen purge gas at a flow rate of 20 mL/min. Results and Discussion Morphology and nanostructure analysis. Figure 1 shows SEM and TEM images of NFC derived from OPEFB fibers.
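The following minimal sketch shows how the quantities in Eqs. (1) and (2) of the Methods are typically evaluated; the intensities and peak width passed in the example call are illustrative assumptions, not measured values from this study.

import math

def segal_crystallinity(i_002, i_am):
    """Segal crystallinity index (%) from the 002 and amorphous intensities."""
    return (i_002 - i_am) / i_002 * 100.0

def scherrer_size(fwhm_deg, two_theta_deg, k=0.94, wavelength_nm=0.15406):
    """Apparent crystallite size (nm) from the Scherrer equation.

    fwhm_deg      : full width at half maximum of the 002 reflection, in degrees 2-theta
    two_theta_deg : position of the 002 peak, in degrees 2-theta
    """
    beta = math.radians(fwhm_deg)             # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative call with assumed intensities and peak width (not data from this paper)
print(segal_crystallinity(i_002=1000.0, i_am=364.0))      # ~63.6 %
print(scherrer_size(fwhm_deg=6.0, two_theta_deg=22.21))   # apparent size in nm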
Generally, NFC fibers have different sizes ranging from micro-sized fibers (Figure 1a-b) to nano-sized fibers (Figure 1c). Due to the influence of oven drying on NFC (Figure 1a-b), NFC was selfaggregated into micro-sized fibers with irregular and uneven shapes. The process, known as hornification, occurs due to the strong interaction of hydrogen bonding and Van der Waals force. TEM photograph (Figure 1c) also depicts the aggregation of NFC dispersed in distilled water after the ultrasonication process. Some of the NFCs had irregular and aggregated shapes with diameters of about 100 nm. Previous studies also reported that the diameter and length of the NFCs varied depending on the isolation method and fiber sources [46,47]. To maintain good dispersion of NFC in the water, Mesquita et al. [9] recommended two indispensable strategies that involve the use of surfactants and chemical surface modification. In addition, the solvent exchange method was used to assist with freeze drying as it can prevent self-aggregation of NFC better than the oven-drying method. The OPEFB fibers were purified using eco-friendly chemical pretreatment that was effectively able to bleach and delignify some cementing agents, such as hemicellulose and lignin (Figure 1b). Some silica bodies disappeared because they were removed by dry disk milling and chemical treatment during NFC isolation. The pretreatment even produced a regular and smooth external surface of the fibers. In addition, the various sizes of the OPEFB fibers under 1 µm were successfully produced after chemical pretreatment assisted with the auto-hydrolysis of autoclave. The analysis referred to the cumulants method in which the Z-average (average mean particle diameter) of NFC sample was 245.08 nm ranging from 37.16 nm to 1778.75 nm. However, Dmean of NFC particle size distribution by number was 101.55 nm, and about 75% of the NFC particle size distribution was under 100 nm, ranging from 37.16 nm to 97.75 nm (Figure 1b). The alteration of NFC nano-sized fibers into microfibers during a PSA analysis was due to reversible selfaggregation. The aggregation occurs presumably because of the internanoparticle and water-nanoparticle interaction generated by hydrogen bonding and Van der Waals force [39]. Crystallinity index and size. X-ray diffraction was used to analyze the CrI and ACS of NFC sample in which JCPDS ICDD Software 1997 and JADE XRD Pattern Processing 1995-2016 were used to analyze the cellulose substance pattern and polymorph, respectively. Figure 2 depicts the NFC pattern of XRD with the highest peak at 2θ = 22.21 º. An initial peak (2θ = 19 º -25 º) and two small crystalline peaks (2θ = 14º -17.5 º and 26º -27 º) indicate the improvement of NFC crystallinity. This phenomenon was similar in a previous study by Maiti et al. [7]. In addition, there was a noticeable increase of CrI derived from OPEFB raw materials and NFC, which were 41.4% [31] and 63.57%, respectively. The increase of CrI was presumably because of the removal of the amorphous region of cellulose and hemicellulose so it could enhance the CrI of NFC. In addition, the increase of CrI was due to the removal of silica bodies after dry disk milling [15,38]. After the analyses using JCPDS ICDD Software 1997, NFC isolated with chemo-mechanical methods contains cellulose that consisted of pure galactose, xylose, glucose, arabinose, and polysaccharide phase (JCPDS [15,48,49]. ACS of NFC was about 1.36 nm, which was smaller than that of a study by Deraman et al. [50] and Solikhin et al. 
[39], revealing that the ACS values of the raw OPEFB fibers were in the range of 2.42 to 4.69 nm. Elemental components. The elemental components of the NFC sample were analyzed using energy-dispersive X-ray spectroscopy in conjunction with a scanning electron microscope (SEM). The result of this study shows that C (40.71%), O (29.66%), and N (28.69%) are the most dominant elemental components of NFC. Other predominant elemental components are Na, Al, Si, and K. These elements, which are essential for healthy plant growth, can be present as ash deposited on the OPEFB fibers. NFC extracted from OPEFB is an organic lignocellulosic material, so C, O, and N are its principal elements. These elements make up the chemical constituents of NFC, including cellulose, hemicellulose, pectin, lignin, and extractives. However, those constituents were dominated by cellulose after the delignification and bleaching processes were carried out. Besides these organic components, silica (Si) remained even though chemical and mechanical treatments were applied. The existence of Si in NFC was presumably because the chemical pretreatment and mechanical method applied to the OPEFB fibers did not intensively damage the silica bodies. Si is imparted as a filler in the OPEFB fibers, and its removal must be carried out by acid hydrolysis and ultrasonication as well as high-pressure steam [51]. Functional chemical groups. Figure 6 shows the FTIR spectrum of NFC, where the wavenumbers of the transmittance bands observed were in the range of 4000 cm-1 to 390 cm-1. From the analysis, the presence of cellulose was indicated by the broad transmittance band at 3800-3000 cm-1, corresponding to the O-H stretching vibration of cellulose hydroxyl groups and absorbed water [15,39]. A peak at 2950 cm-1 indicated the presence of the CH2 group of cellulose. The appearance of a peak at 2361 cm-1 was correlated with CO2, since hygroscopic organic material such as NFC is able to absorb CO2 and H2O. The chemical interaction between moisture and NFC was associated with the presence of a transmittance peak at 1643 cm-1. A transmittance peak at 1520 cm-1 was attributable to the existence of lignin. In addition, a peak at 1744 cm-1 was assigned to the C=O acetyl group of hemicellulose or the ester carbonyl groups of lignin [38]. Thermal stability analysis. Figures 7 and 8 show the DSC and TGA curves, respectively, from the simultaneous thermal analysis of OPEFB NFC. From the thermograph (Figure 7), there are four steps in the thermal degradation of NFC and its cementing agents. At the beginning of the decomposition, water evaporation (the drying process) occurred in the temperature range of 50 °C to 90 °C. The second decomposition started at about 250 °C, while the maximum decomposition temperature was about 325 °C. These temperatures indicated the degradation of hemicellulose and amorphous cellulose and the breakage of cellulose glycosidic linkages. Hemicellulose is easy to degrade thermally due to the presence of acetyl groups. Compared with the previous study [38], the lower decomposition temperature of amorphous and glycosidic cellulose, at 250 °C, was due to the nano-size effect in terms of very small and uniform particle sizes (Z-average) as well as a high surface-to-volume ratio [15,39]. The lignin and crystalline cellulose region was depolymerized at temperatures above 325 °C until the process leveled off at 492.29 °C (ΔH = 71.95 J/g), and a melting point of NFC was observed at 498.84 °C (ΔH = 7.29 J/g).
At the last degradation or melting temperature, crystalline cellulose was totally degraded. From Figure 8, it can be observed that at the beginning of the evaporation phase, there was about 5% to 10% loss of NFC weight, which was in accordance with the loss of absorbed water in NFC. The highest loss (65-70%) of NFC weight occurred in the temperature range of 250 ºC and 325 ºC. A final NFC residue of 18% was attained at temperatures above 900 ºC in which the residue was in the form of char. Conclusion Nanofibrillated cellulose (NFC) was successfully isolated by using mechanical methods assisted by eco-friendly chemical pretreatment. NFC has an irregular and aggregated shape with a diameter of about 100 nm. Based on the cumulants method, Z-average (average mean particle diameter) of NFC sample was 245.08 nm ranging from 37.16 nm and 1778.75 nm, whereas 75% Dmean of NFC particle size distribution by number was under 100 nm, ranging from 37.16 nm and 97.75 nm. The bigger size of NFC was presumably due to selfaggregation of NFC generated by hydrogen bonding and Van der Waals forces. Crystallinity index and size of NFC were 63.57% and 1.36 nm, respectively, with cellulose I polymorph. NFC belongs to native cellulose comprising of pure galactose, xylose, glucose, arabinose, and polysaccharide. C (40.71%), O (29.66%) and N (28.69%) were the most dominant elemental components, whereas Na, Al, Si, and K were predominant components of NFC. IR analysis showed only small amounts of hemicellulose and lignin were deposited on NFC due to the recalcitrant characteristics of these components, and several wavenumbers appeared, indicating the presence of cellulose chemical functional groups. Extractives were still imparted in NFC in the form of aromatic and oxygenated compounds, such as carboxylic acids, phenols, ketones and aldehydes. These extractives were also associated with the deposition of cementing agents of cellulose, hemicellulose, and lignin. There were four steps of NFC thermal degradation process, which were: water evaporation (50-90 ºC), hemicellulose and amorphous cellulose degradation, and glycosidic linkage cleavage (250-325 ºC), crystalline cellulose and lignin depolymerization (325-429.29 ºC), and cellulose crystalline degradation (above 429.29 ºC). The percentage of NFC residue was about 18% in the form of char.
2019-04-09T13:02:45.794Z
2017-03-07T00:00:00.000
{ "year": 2017, "sha1": "b18877a3f77b226ef8e1a24a67f4c4a67a47bcb8", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.7454/mss.v21i2.6085", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4d85330e1d08c5e268acebc2f13148bfe50d2150", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
244401686
pes2o/s2orc
v3-fos-license
Replication of European hypertension associations in a case-control study of 9,534 African Americans

Objective: Hypertension is more prevalent in African Americans (AA) than in other ethnic groups. Genome-wide association studies (GWAS) have identified loci associated with hypertension and other cardio-metabolic traits like type 2 diabetes, coronary artery disease, and body mass index (BMI); however, the AA population is underrepresented in these studies. In this study, we examined a large AA cohort for the generalizability of 14 Metabochip array SNPs with previously reported European hypertension associations.

Methods: To evaluate associations, we analyzed genotype data of 14 SNPs for their associations with a diagnosis of hypertension, systolic blood pressure (SBP), and diastolic blood pressure (DBP) in a case-control study of an AA population (N = 9,534). We also performed an age-stratified analysis (<30, 30-59, and ≥60 years) following the hypertension definition described by the 8th Joint National Committee (JNC). Associations were adjusted for BMI, age, age², sex, clinical confounders, and genetic ancestry using multivariable regression models to estimate odds ratios (ORs) and beta-coefficients. Analyses stratified by sex were also conducted. Meta-analyses (including both the BioVU and COGENT-BP cohorts) were performed using a random-effects model.

Results: We found rs880315 to be associated with systolic hypertension (SBP≥140 mmHg) in the entire cohort (OR = 1.14, p = 0.003) and within women only (OR = 1.16, p = 0.012). Variant rs17080093 associated with lower SBP and DBP (β = -2.99, p = 0.0352 and β = -1.69, p = 0.0184) among younger individuals, particularly in younger women (β = -3.92, p = 0.0025 and β = -1.87, p = 0.0241 for SBP and DBP, respectively). SNP rs1530440 associated with higher SBP and DBP measurements (younger individuals: β = 4.1, p = 0.039 and β = 2.5, p = 0.043 for SBP and DBP; younger women: β = 4.5, p = 0.025 and β = 2.9, p = 0.028 for SBP and DBP), and with hypertension risk in older women (OR = 1.4, p = 0.050). rs16948048 increases hypertension risk in younger individuals (OR = 1.31, p = 0.011). Among mid-age women, rs880315 associated with a higher risk of hypertension (OR = 1.20, p = 0.027). rs1361831 associated with DBP (β = -1.96, p = 0.02) among individuals older than 60 years. rs3096277 increases hypertension risk among older individuals (OR = 1.26, p = 0.0015); however, this variant also reduces SBP among younger women (β = -2.63, p = 0.0102).

Conclusion: These findings suggest that European-descent and AA populations share genetic loci that contribute to blood pressure traits and hypertension. However, the OR and beta-coefficient estimates differ, and some are age-dependent. Additional genetic studies of hypertension in AA are warranted to identify new loci associated with hypertension and blood pressure traits in this population.

Introduction

Persistently elevated blood pressure, or hypertension [1], is one of the major preventable risk factors for heart disease, as well as a leading contributor to mortality globally [2]. It affects approximately 50 million individuals in the US and accounts for 4.5% of the global burden of all diseases. Hypertension commonly co-exists with comorbidities, including diabetes, obesity, chronic kidney disease, coronary heart disease, depression, and HIV, which are associated with poorer health outcomes in hypertensive individuals [3].
The National Health and Nutrition Examination Survey (NHANES) 2011-2012, conducted by the Centers for Disease Control and Prevention (CDC), described the highest prevalence of hypertension in the US to be among African-American (AA) adults (42.1%), compared with non-Hispanic white (28.0%), Hispanic (26.0%), and non-Hispanic Asian (24.7%) adults. Compared to other ethnic groups, AAs have higher mean blood pressure (BP) (both systolic [SBP] and diastolic [DBP]), an earlier age of onset of hypertension [4], and, most importantly, a nearly three-fold higher death rate from high blood pressure [5]. The disparity in hypertension risk among AAs is likely due to a complex combination of socioeconomic, environmental (e.g., perceived racial discrimination), and genetic factors, which may also influence downstream development of comorbid conditions [6][7][8][9][10][11][12][13]. Much of the difference in hypertension prevalence in AA remains unexplained [14].

The heritability of both SBP and DBP is high, 30-55% across global populations [15]. Genome-wide association studies (GWAS) and admixture mapping studies in European-descent populations have identified over 200 genetic loci [16][17][18][19][20] that explain only 3.5% of the inter-individual variation in BP traits (SBP and DBP) [21,22]. Therefore, the small effects of genetic variants could be one of the potential factors leading to differential hypertension prevalence across populations. African-descent populations are the most ancestral, and AAs are an admixed population with a strong component of West African descent; thus, for many variants, it remains unknown whether loci identified in European-descent individuals have an equivalent influence on hypertension in individuals of AA ancestry [23,24]. Genetic signatures for hypertension may be more variable in AA than in the more heavily studied European-descent populations; hence, additional studies of hypertension genetics within AA populations are needed.

In the early guidelines of the United States Joint National Committee (JNC) on hypertension, more emphasis was placed on DBP than on SBP as a predictor of cardiovascular events. The Framingham studies, which first identified hypertension as a significant health concern, contributed significantly to the "Seventh Report of the JNC on prevention, detection, evaluation, and treatment of high blood pressure" (JNC-7) in 2003 [25]. This report redefined the clinical criteria for hypertension diagnosis to include SBP≥140 mmHg and/or DBP≥90 mmHg. Age is also a well-recognized predictor of elevated blood pressure [26][27][28]. Published studies have shown that the effect of genetics on blood pressure varies by age [29,30], demonstrating that age is a critical player in hypertension development. In 2014, panel members of JNC-8 recommended for the first time age-stratified thresholds for hypertension diagnosis and initiation of treatment, mainly to reduce potential adverse effects in the elderly: SBP/DBP ≥140/90 mmHg (SBP≥140 mmHg and/or DBP≥90 mmHg) for adults aged less than 60 years, and SBP/DBP ≥150/90 mmHg (SBP≥150 mmHg and/or DBP≥90 mmHg) for individuals aged 60 years and older [31,32]. Furthermore, differences in hypertension and associated cardiovascular disease by sex are well documented [33]; however, sex-specific differences due to genetic architecture in the AA population warrant further investigation.
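To make the two diagnostic rules concrete, the following is a minimal, illustrative Python sketch (not from the paper) of the JNC-7 and age-stratified JNC-8 hypertension classifications described above:

```python
def hypertensive_jnc7(sbp, dbp):
    """JNC-7: hypertensive if SBP >= 140 mmHg and/or DBP >= 90 mmHg."""
    return sbp >= 140 or dbp >= 90

def hypertensive_jnc8(sbp, dbp, age):
    """JNC-8 age-stratified thresholds: 140/90 mmHg under age 60, 150/90 mmHg at 60+."""
    sbp_cutoff = 150 if age >= 60 else 140
    return sbp >= sbp_cutoff or dbp >= 90

# Example: a 65-year-old with 145/85 mmHg is hypertensive under JNC-7 but not JNC-8.
assert hypertensive_jnc7(145, 85) and not hypertensive_jnc8(145, 85, age=65)
```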
As the vast majority of GWAS of hypertension have been performed in European-descent populations, relatively little is known about whether risk estimates for these genetic polymorphisms generalize to the AA population. Furthermore, no genetic study has examined variant associations using the revised hypertension definition from the JNC-8 guidelines. In this study, we selected 14 single-nucleotide polymorphisms (SNPs) available on the Illumina Metabochip array with reported European BP associations. We attempted to replicate these associations in the AA population for SBP, DBP, and hypertension following the JNC-7 and JNC-8 diagnostic criteria. We successfully replicated three variant associations with hypertension, SBP, and DBP, indicating their impact on blood pressure measurements in AAs as well.

Study population and hypertension phenotypes

The study population was drawn from EAGLE-BioVU, the Epidemiologic Architecture for Genes Linked to Environment study accessing BioVU, the Vanderbilt University Medical Center's (VUMC) biorepository linked to de-identified electronic health records (EHRs), as previously described [34,35]. Race/ethnicity within BioVU is defined by administrative assignment, which shows strong agreement with genetic ancestry for the AA population [36,37]. Two outcomes of the study were SBP and DBP, which were obtained at the first clinic visit (baseline). Due to limited information about medication use in this study, the first recorded SBP and DBP were considered pre-treatment measures (the best available surrogate for pre-treatment measures). The third outcome of the study was a hypertension diagnosis based on SBP and DBP measurements; for the purposes of this GWAS, all individuals were defined as hypertensive (SBP≥140 mmHg) or non-hypertensive, regardless of comorbidities. All clinical and demographic data were obtained from de-identified individuals' EHRs. DNA was isolated from discarded blood drawn from these de-identified patients as part of routine clinical care. Detailed clinical and demographic information was extracted from patient EHRs for research purposes. Genetic ancestry of participants was further evaluated by principal components (PCs) of ancestry-informative markers, and via local ancestry inference using RFMix, as previously described [38].

The Continental Origins and Genetic Epidemiology Network-Blood Pressure (COGENT-BP) Study, consisting of 29,378 individuals across 19 studies, was included to perform the meta-analysis. In brief, all individuals were >20 years old. For individuals reporting the use of antihypertensive medications, SBP and DBP were adjusted by the addition of 15 mmHg and 10 mmHg, respectively [21,39]. Significant outliers (individuals with SBP or DBP >4 standard deviations (SDs) from the mean) were excluded. Each COGENT-BP sub-study received institutional-review-board approval of its consent procedures, examination and surveillance components, data security measures, and DNA collection with approved use for genetic research. All participants in each COGENT-BP sub-study provided written, informed consent [21].
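As an illustrative sketch only (column names are assumptions, not taken from the study), the medication adjustment and outlier exclusion described for COGENT-BP could be expressed as follows:

```python
import pandas as pd

def preprocess_bp(df: pd.DataFrame) -> pd.DataFrame:
    """Adjust BP for antihypertensive use (+15/+10 mmHg) and drop >4 SD outliers.

    Assumes columns 'sbp', 'dbp', and a boolean 'on_bp_meds'; these names are
    illustrative and not taken from the study.
    """
    df = df.copy()
    df.loc[df["on_bp_meds"], "sbp"] += 15
    df.loc[df["on_bp_meds"], "dbp"] += 10
    for col in ("sbp", "dbp"):
        mean, sd = df[col].mean(), df[col].std()
        df = df[(df[col] - mean).abs() <= 4 * sd]
    return df
```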
Genotyping

Genotyping was performed using the custom Illumina Metabochip array, which includes approximately 200,000 SNPs selected to fine-map loci genome-wide that were previously associated with metabolic traits. Genotyping was performed according to the manufacturer's protocol (Illumina) [40,41] at the Vanderbilt University Center for Human Genetics Research (CHGR) DNA Resources Core. Details of the genotype calling process are described elsewhere [40]. In brief, genotypes were called using both the GenCall 2.0 algorithm (as part of Illumina GenomeStudio) and the GenoSNP algorithm [42], and discordance between the two methods was used as a quality check (QC) filter. QC statistics were calculated using GenomeStudio and PLINK [43], including call rate, Hardy-Weinberg equilibrium deviations, and duplicate genotype inconsistencies. PCs were generated to estimate genetic ancestry based on HapMap/1,000 Genomes reference populations. We performed model-based clustering using the mclust R package to differentiate five ancestry groups: European descent, African descent, East Asian descent, South Asian descent, and Hispanic descent [37], and restricted all analyses to individuals within the African-descent cluster. Genotyping for the COGENT-BP study was performed using a variety of Affymetrix and Illumina arrays, all imputed to the combined HapMap phase II+III reference as described in [21].

SNP selection and annotation

To assess hypertension and blood pressure-related trait associations that were previously identified in European-descent populations and available among our Metabochip variants, we first tabulated the already-published, most significant associations within a given genomic region [19]; among them, 14 were present on the Metabochip array (S1 Table in S1 File). Among these 14 SNPs, rs1327235 was previously linked with SBP and DBP in AA (S2 Table in S1 File) [23]. All 14 SNPs have minor allele frequencies greater than 0.1 in our AA population (Table 1) and were evaluated using the HaploReg database to select SNPs strongly linked (r² ≥ 0.8) with the index SNP in African-descent populations from the 1,000 Genomes Project [44]. None of these 14 SNPs were in linkage disequilibrium (LD) with each other in the AA study population.

Clinical comorbidity phenotypes were defined from International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)-based codes, where individuals with three or more instances of the code on separate visit dates are positive (1), and all others are negative (0). Using these binary phenotype arrays, we then computed cPCs using the prcomp function in R and selected the top 10. Multivariable logistic and linear regression modeling was used to evaluate the association of SNPs with hypertension, SBP, or DBP (the three dependent variables), using BMI, age, age², sex, cPC1-cPC10, and gPC1-gPC10 as model covariates. The age-stratified analysis was also conducted based on the JNC-8 definitions (SBP/DBP ≥140/90 mmHg for adults aged less than 60 years, and SBP/DBP ≥150/90 mmHg for individuals aged 60 years and older). Sex differences in such phenotypes are of growing interest and remain understudied; therefore, sex-stratified and sex-adjusted analyses were also explored. In all models, the odds ratio (OR) or beta-coefficient (β) and their 95% confidence intervals (CI) were estimated. Meta-analyses of the BioVU and COGENT-BP cohorts were performed using the random-effects model in the metafor R package [21].
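As a minimal, illustrative sketch of the per-SNP association model and the random-effects pooling described above (the original analyses were run in R with prcomp and metafor; the Python version below and all column names are assumptions):

```python
import numpy as np
import statsmodels.formula.api as smf

def snp_association(df, snp):
    """Covariate-adjusted association models for one SNP (additive 0/1/2 coding).

    Assumes columns 'htn' (0/1), 'sbp', 'bmi', 'age', 'sex', 'cPC1'..'cPC10',
    'gPC1'..'gPC10'; all names are illustrative, not taken from the study.
    """
    covars = "bmi + age + I(age**2) + sex + " + " + ".join(
        [f"cPC{i}" for i in range(1, 11)] + [f"gPC{i}" for i in range(1, 11)])
    logit = smf.logit(f"htn ~ {snp} + {covars}", data=df).fit(disp=0)   # OR per allele
    linear = smf.ols(f"sbp ~ {snp} + {covars}", data=df).fit()          # beta per allele
    return logit, linear

def random_effects_pool(betas, ses):
    """DerSimonian-Laird random-effects pooling of per-cohort estimates
    (one of several estimators; shown here only as an illustration)."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    beta_fe = (w * betas).sum() / w.sum()          # fixed-effect (inverse-variance) estimate
    q = (w * (betas - beta_fe) ** 2).sum()         # Cochran's Q heterogeneity statistic
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(betas) - 1)) / c)    # between-study variance
    w_re = 1.0 / (ses**2 + tau2)
    return (w_re * betas).sum() / w_re.sum(), np.sqrt(1.0 / w_re.sum())
```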
Ethical statement

The VUMC protocol for EAGLE-BioVU is considered non-human subjects research (The Code of Federal Regulations, 45 CFR 46.102 (f)) [36,47]. The COGENT-BP study (for which we accessed meta-analysis results) was approved by the Institutional Review Board (IRB) # 04-95-72 and study-related committees (see Appendix I in S1 File). All participants of the COGENT-BP study provided informed consent for DNA research, and the data are publicly available in dbGaP. Also, we have followed the recommendations of the STrengthening the Reporting of Genetic Association Studies (STREGA) Statement in this study (see Appendix II in S1 File) [48].

We next investigated the role of the SNPs according to the age stratifications outlined by the JNC-8 definition (2014). We found no association with hypertension in individuals under age 30 (N = 2,123) (Tables 1 and 3B). However, in this stratum SNP rs17080093 (A allele) was associated with both SBP and DBP (β = -2.99 (-5.48, -0.51) and β = -1.69 (-3.26, -0.12), respectively, both p<0.05). Among mid-aged individuals (≥30 to <60 years) (N = 5,351), rs880315 associated with a higher risk of hypertension (OR (95% CI) = 1.13 (1.01, 1.26), p = 0.03) (Tables 1 and 3C). Among older individuals ≥60 years (N = 2,060), rs1361831 was significantly associated with DBP (β = -1.96, p = 0.02) (Tables 1 and 3D) after adjustment for BMI, clinical PCs, genetic PCs, and sex. We also evaluated the impact of local ancestry on each SNP effect and found that the observed associations did not change after these adjustments (S3 Table in S1 File). This suggests that the effects of the associated alleles were not due solely to the inheritance of the alleles from a European background via admixture.

In sex-stratified analysis, we found that the G allele of SNP rs880315 was associated with an increased risk of hypertension in women (OR = 1.16, 95% CI = 1.03-1.31, p<0.01). Though the direction of the effect was the same in men, it was not statistically significant (OR = 1.11, 95% CI = 0.96-1.28, p = 0.15) (Table 4A). Moreover, we did not observe any differences in SBP and DBP between men and women. Age-stratified analysis by sex revealed that the A allele of rs17080093 was significantly associated with decreases in SBP and DBP among younger women (β = -3.92, p = 0.0025 and β = -1.87, p = 0.0241, respectively) (Table 4D). Sex-stratified models were adjusted for age, age², BMI, clinical PCs, and genetic PCs, and age- and sex-stratified models were adjusted for BMI, clinical PCs, and genetic PCs. Given these results, we next sought additional replication of associations via random-effects meta-analysis among independent participants of BioVU and the COGENT-BP study. The results of the meta-analysis combining data across samples, with pooled estimates of BioVU and COGENT-BP, are summarized in Table 5. rs880315-T and rs17080093-T were significantly associated with SBP, DBP, and hypertension (all p-values < 0.05). rs1361831-T associated with a reduced risk of hypertension development, and rs2272007-T significantly increased DBP and hypertension risk.

Discussion

While the genetic risk for hypertension has been studied extensively in multiple European-descent populations, much less is known about the impact of these hypertension SNPs in AA populations. Furthermore, previously reported associations with hypertension phenotypes used older definitions (SBP≥140 mmHg). In this study, we evaluated European-associated SNPs for hypertension and BP within an AA population using multiple definitions of hypertension. We replicated four SNP associations (which reached nominal significance), including three intronic SNPs, rs880315 (CASZ1), rs3096277 (CDH13), and rs17080093 (PLEKHG1), and one intergenic SNP, rs1361831, close to the RSPO3 gene. The sex-stratified analysis showed a differential effect of genetic variants on blood pressure traits in AA. Considering the new hypertension diagnostic criteria established by the 8th JNC report, we performed age-stratified analyses.
We found variant rs3096277 (CDH13, Cadherin 13) associated with hypertension (SBP≥150 mmHg and/or DBP≥90 mmHg) in individuals aged 60 years and above, and with SBP in younger women as well. However, this variant strongly correlated with an increased risk of SBP elevation in younger men. Associations between CDH13 SNPs and SBP, DBP, and hypertension are consistent with prior studies, though the specific variants appear to be ancestry-dependent [49]. In a study of Mexican participants, rs3096277 was not associated with BP traits, but another polymorphism from this region, rs11646213, which was not present on the Metabochip array, was associated with the risk of hypertension development [50]. Pairwise LD analysis using 1,000 Genomes Project genotype data for African Ancestry in Southwest US (ASW) (most representative of the present cohort) revealed that rs11646213 was not in LD (r² = 0.05) with rs3096277. Moreover, a recent trans-ethnic association study showed that another SNP from the CDH13 gene, rs7500448, which was not present on the Metabochip array (and not in LD with SNP rs3096277 in ASW (r² = 0.01) using 1,000 Genomes Project data), was associated with SBP and DBP in the Million Veteran Program (MVP) and the UK Biobank (UKB) [51]. The CDH13 gene encodes a calcium-dependent cell-cell adhesion glycoprotein and is predominantly expressed in the nervous and cardiovascular systems, in particular the spinal cord, the aorta, the carotid, iliac, and renal arteries, and the heart [52], and is known to play an essential role in regulating angiogenesis and blood vessel remodeling [53]. This gene has also been associated with different cancer types, such as lung, breast, and prostate cancers, and with tumor angiogenesis [54][55][56]. All three SNPs in CDH13, rs3096277, rs7500448, and rs11646213, were associated with different BP traits in different populations and are not in LD with each other, suggesting that these variants cannot substitute for one another and could potentially influence CDH13 gene function independently.

SNP rs880315 in CASZ1 (Zinc Finger Protein Castor Homolog 1) was nominally associated with hypertension (DBP≥90 mmHg) in the mid-tier age group (≥30 to <60 years), but not in younger (<30 years) or older individuals (≥60 years). This variant was also associated with an increased risk of hypertension in women despite similar allele frequencies in men. Previously, this variant was associated with SBP in women of European ancestry (Women's Health Study; WHS) [57], and with hypertension, SBP, and DBP in the Japanese population [58]. This gene encodes castor homolog 1, a zinc finger transcription factor that is known to be involved in cell-cycle signaling and apoptosis regulation. It is expressed in many tissues, including cardiac myocytes, and the nearby genomic region (chr1:10,630,927-10,790,973) has also been implicated in neuroectodermal tumors [59,60].

Variant rs17080093, located in an intron of PLEKHG1 (pleckstrin-homology-domain-containing, family G [with RhoGef domain] member 1), was associated with a reduced risk of SBP and DBP elevation in the younger age group (<30 years), particularly in younger women. Previously, this gene has been linked to childhood obesity in the Hispanic population [61]. The PLEKHG1 gene plays an essential role in vascular endothelial cell reorientation by targeting RhoA, Rac1, and/or Cdc42, which are involved in cyclic-stretch-induced perpendicular reorientation; this makes it a potential candidate for studies of blood pressure traits [62].
Another SNP in the same gene, rs17080102 (not genotyped by the Metabochip), was associated with SBP and DBP in another cohort of AA, European, and East Asian ancestry [23]. Furthermore, this locus showed a significant association with SBP and DBP in a trans-ethnic meta-analysis [17,20,23]. Recently, this gene has been associated with maternal preeclampsia, another hypertension-related phenotype [63]. SNP rs17080102 was found to be in moderate LD (r² = 0.45) with rs17080093 among ASW individuals from the 1,000 Genomes Project. HaploReg reports that the intronic SNP rs17080102 of the PLEKHG1 gene lies within, and significantly alters, various regulatory motifs (S4 Table in S1 File). Thus, we speculate that the association of rs17080093 with hypertension in our study is tagging the effect of rs17080102.

rs1530440 is an intronic SNP in C10orf107, an open reading frame of unknown function, and was associated with both SBP and DBP in younger women in the studied cohort. Newton-Cheh et al. reported an association of this SNP with DBP in the European population. The ARID5B (AT-rich interactive domain 5B (MRF1-like)) gene is located in close proximity to C10orf107; its higher expression has been reported in cardiovascular tissue, and it is involved in smooth muscle cell differentiation [64]. rs16948048, located 5' upstream of ZNF652, showed a significant difference between young hypertensive and non-hypertensive individuals. This SNP has previously been linked to DBP in the European population [64] and to essential hypertension among Chinese individuals [65], suggesting that this gene might be involved in blood pressure traits.

Variant rs1361831 (intergenic region, chr6:126,859,944), located near the RSPO3 gene (chr6:127,118,671-127,199,481; R-spondin family member 3), was significantly associated with DBP in the older age (≥60 years) group of the studied cohort. A large-scale genome-wide association study revealed a relationship between a genomic region close to this gene and renal-function-related traits, including blood urea nitrogen, in East Asian populations [66]. This locus showed an association with the waist-to-hip ratio in the GIANT study, which included individuals of European and European-American descent [67]. SNP rs13209747 (not genotyped by the Metabochip), located near RSPO3, was associated with SBP and DBP in the COGENT-BP study in AA and East Asian individuals and in a trans-ethnic meta-analysis [23]. rs1361831 and rs13209747 were found to be in strong LD (r² = 0.80 in ASW, using 1,000 Genomes Project data), suggesting that they were introduced recently into the population and may substitute for each other. A recently published trans-ethnic study showed an association of novel missense and rare markers in this region with SBP, DBP, and pulse pressure (the difference between SBP and DBP, after addition of the constant adjustment values) in male veterans of European, African, Hispanic, Asian, and Native American descent [51]. Our results showed an opposite direction of effect from the COGENT-BP replication results and from this trans-ethnic meta-analysis, possibly due to differences in the male/female ratio between the studies. In the trans-ethnic study, the sample was heavily skewed towards males with African ancestry; however, our study was approximately 65% female. These results may indicate effect modification of this SNP by sex and age.
Moreover, additional adjustments for local ancestry suggested that African ancestry also contributes to the observed allelic effects, and that the associations were not due to the inheritance of European alleles only. Our meta-analysis results indirectly linked the CASZ1, PLEKHG1, C20orf187, RSPO3, ULK4, and LRRC10B genes with altered blood pressure levels in the AA population, which merits in-vitro validation of these findings. These genetic variants would also be helpful in stratifying "at-risk" individuals predisposed to a higher risk of elevated SBP, DBP, and hypertension development based on their genetic architecture. Differences in the allele frequencies of variants among different populations could lead to clinical heterogeneity. Additionally, age-dependent exposure to various non-genetic risk factors for hypertension could be linked to the differential prevalence of elevated blood pressure/hypertension [68]. Our reported associations have less extreme p-values than associations made in more extensive European studies, and these differences are likely attributable to reduced statistical power in our study. AA is an admixed population of African and European ancestry. Consequently, variants associated with hypertension or other BP traits (SBP and DBP) in European-ancestry populations cannot necessarily be generalized to other populations. However, replication of associated variants helps to identify essential and significant trans-population genetic variants. Here, we replicated European hypertension variant associations in AA. Our findings indicate remarkable differences in allele frequencies and p-values, which further strengthen the concept of population-specific components of hypertension pathogenesis. Therefore, the investigation of genetically diverse populations is necessary to identify causal variants in affected populations. Our present cohort of approximately 10,000 AA individuals has reasonable power to detect common variants; however, to evaluate the possible effects of other hypertension-associated variants found in European-descent studies, larger samples are needed. Another reason for the observed differences could be adjustments for potential confounders, such as age, sex, BMI, use of antihypertensive drugs, and ethnicity/race, that affect study outcomes. In the EAGLE-BioVU dataset, we considered presumed pre-treatment SBP and DBP measurements for hypertension diagnosis; therefore, we used SBP and DBP measurements without adjustment for medication effects. However, we made adjustments for clinical PCs. On the other hand, the COGENT-BP study, consistent with other published studies in European ancestry, adjusted the measured SBP and DBP for antihypertensive medication usage (+15 and +10 mmHg, respectively) [21].

Strengths

The present study was conceived as a replication, in an AA cohort, of SNPs previously associated with hypertension in European-descent studies, to assess their generalizability. In the present study, the hypertension outcome has different diagnostic criteria, according to the revised JNC-8 guidelines; however, the observed associations with hypertension did not change with the different hypertension diagnostic criteria. We also performed an age-stratified analysis, which was not performed in other studies investigating the association of genetic markers with hypertension. Furthermore, unlike other genetic epidemiological studies, we included clinical PCs to adjust for confounding comorbidities, which is not typical in other studies.
These findings suggest that variants previously associated with blood pressure traits in European populations may be generalized to individuals of African American descent. Age-stratified associations further confirm that age plays a significant role in hypertension risk. Therefore, identifying individuals from various age groups with a greater risk for hypertension development may help target preventative measures. Genetic admixture may confound genetic association studies, and thus we included genetic PCs to control for population stratification or hidden population substructure. Furthermore, we used the local ancestry association approach to check the robustness of our associations. After adjusting for local ancestry in AA, our SNP associations remained unchanged. Hence, the association of SNPs in the present cohort shows that the risk alleles' effects hold on the background of African ancestry and are not likely due to European admixture.

Limitations

Our study has a few limitations that we need to acknowledge. First, the SNPs selected were limited to those available on the Metabochip array. Also, as smoking status was not well documented in the EHR, it could not be reliably included in the model. We did not adjust for medication use and instead used the first SBP and DBP measurements to define hypertensive and non-hypertensive individuals; we considered the first measurements to be the best approximation of pre-therapy values. While this may introduce a bias, we would expect it to lower statistical power rather than increase the false-positive rate. As this study aimed to replicate hypertension-associated SNPs found previously in a new (AA) population, the p-values were not corrected for multiple testing. Finally, the meta-analysis was based on the unadjusted estimates, and stratified analysis on the basis of various confounding variables, such as age and sex, could not be conducted.

Conclusion

Future studies investigating age- and sex-stratified associations of genetic variants with hypertension diagnosis and with SBP and DBP measurements, including pre- and post-treatment values for antihypertensive medications, are warranted, as other AA populations may have a different genetic substructure and not be truly comparable. The study of multiple blood pressure readings over considerable time, including medication compliance and genetic variants, would be informative for the early diagnosis of individuals at hypertension risk in the understudied AA population. Furthermore, investigations stratified by genetic loci and aimed at determining the role of the respective genetic variance in hypertension could yield precise treatment recommendations as part of the personalized medicine that may be developed in the near future.
2021-11-20T05:12:23.316Z
2021-11-18T00:00:00.000
{ "year": 2021, "sha1": "0889d8f5d80fffad1d66c1ccef266deacba678d0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0259962&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0889d8f5d80fffad1d66c1ccef266deacba678d0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222378189
pes2o/s2orc
v3-fos-license
Where’s the Question? A Multi-channel Deep Convolutional Neural Network for Question Identification in Textual Data

In most clinical practice settings, there is no rigorous reviewing of the clinical documentation, resulting in inaccurate information captured in the patient medical records. The gold standard in clinical data capturing is achieved via “expert review”, where clinicians can have a dialogue with a domain expert (reviewer) and ask them questions about data entry rules. Automatically identifying “real questions” in these dialogues could uncover ambiguities or common problems in data capturing in a given clinical setting. In this study, we proposed a novel multi-channel deep convolutional neural network architecture, namely Quest-CNN, for the purpose of separating real questions that expect an answer (information or help) about an issue from sentences that are not questions, as well as from questions referring to an issue mentioned in a nearby sentence (e.g., can you clarify this?), which we will refer to as “c-questions”. We conducted a comprehensive performance comparison analysis of the proposed multi-channel deep convolutional neural network against other deep neural networks. Furthermore, we evaluated the performance of traditional rule-based and learning-based methods for detecting question sentences. The proposed Quest-CNN achieved the best F1 score both on a dataset of data entry-review dialogue in a dialysis care setting, and on a general domain dataset.

Introduction

In healthcare, real-world data (RWD) refers to patient data routinely collected during clinic visits and hospitalization, as well as patient-reported results. In recent years, the volume of RWD has become enormous, and invaluable insights and real-world evidence can be generated from these datasets using the latest data processing and analytical techniques. However, the quality of RWD remains one of the main challenges that prevent novel machine learning methods from being readily adopted in healthcare. Therefore, creating data quality tools is of great importance in health care and health data sciences.
Erroneous data in healthcare systems could jeopardize a patient's clinical outcomes and affect the care provider's ability to optimize its performance. Common data quality issues include missing critical information about medical history, wrong coding of a condition, and inconsistency in documentation across different care sites. Manual review by domain experts is the gold standard for achieving the highest data quality but is unattainable in regular care practices. Recent developments in the field of Natural Language Processing (NLP) have attracted great interest in the healthcare community, since algorithms for identifying variables of interest and classification algorithms for diseases have recently been developed (Jiang et al., 2017). In this paper, we presented a novel model for the extraction of queries (questions) in a corpus of dialogue between data entry clinicians and expert reviewers in a multi-site dialysis environment. The main contributions of this work are: (i) To the best of our knowledge, we are the first to benchmark the performance of different rule- and learning-based methods for the extraction of question sentences from logs of real-world (medical) systems, providing specific misclassification cases that emphasize the limitations of each technique. (ii) We proposed a new deep neural network architecture, namely Quest-CNN, which unifies syntactic, semantic, and statistical features and is capable of identifying real questions that expect an answer, questions referring to an issue mentioned in a nearby sentence (c-questions), and sentences that are not questions. (iii) We examined the importance of the above-mentioned features and experimented extensively with different state-of-the-art deep learning models in order to determine the best architecture for this particular task. (iv) We investigated the effect of incorporating domain knowledge on the performance of a model by examining whether word embeddings and semantic features (described in section 4.1) that are pre-trained on a domain-specific dataset rather than on a general dataset are more beneficial for the model. Finally, in addition to evaluating our model's performance in a medical context, we also experimented in section 5 with a general-domain dataset (questions on the Twitter social platform) to show our model's generalizability. The rest of the paper is organized as follows. Related work is presented in section 2. The different question detection methods that will be examined are described in section 3. Section 4 details the characteristics of the proposed multi-channel CNN model. Finally, the results of the experiments are reported in section 5, and a conclusion and a plan for future work are given in section 6.

Related Work

Different question-detection methods have mainly focused on the extraction of questions in online social settings (e.g., emails, Twitter) (Li et al., 2011; Zhao and Mei, 2013). These methods can be classified into two categories: (i) rule-based methods, which make use of rules like 5W1H words (What, Who, Where, When, Why, How) or question marks (QM) to identify questions, and (ii) learning-based methods, which train a classifier based on the patterns of sentences. Recently, different deep neural networks have achieved state-of-the-art results in text classification.
In the KIM-CNN model (Kim, 2014), t filters are applied to the concatenated word embeddings of each document in order to produce t feature maps, which are fed to a max pooling layer in order to create a t-dimensional representation of the document. In addition, in (Liu et al., 2017) the XML-CNN network was introduced, where a dynamic max-pooling scheme and a hidden bottleneck layer were used to achieve a better representation of documents. Another state-of-the-art deep model is Seq-CNN (Johnson and Zhang, 2015), where each word is represented as a V-dimensional one-hot vector, V being the vocabulary of the dataset, and the concatenation of the word vectors is passed through a convolutional layer, followed by a special dynamic pooling layer. Furthermore, in FastText (Joulin et al., 2017) the embeddings of the words that appear in a document are averaged to create a document representation. Finally, a comprehensive analysis of clinical-domain embedding methods is presented in (Khattak et al., 2019).

Question detection

As mentioned above, the task of identifying sentences that contain questions and c-questions can be broken into two sub-problems: (i) detecting question sentences from unstructured data (logs), and (ii) correctly identifying the c-questions and the real questions that require an answer. In order to create the corpus of candidate questions, we explored both rule- and learning-based approaches. We also compared the performance of each method on the task of identifying questions in (medical) unstructured text and analyzed the most common misclassification cases.

Rule-based Approach: In particular, we employed the following rules: (i) the last character of a sentence is a question mark; (ii) the rules that were introduced in (Efron and Winget, 2010) (e.g., I* [try*, like, need] to [find, know]); (iii) the sentence contains a 5W1H word; (iv) the refined 5W1H rules from (Li et al., 2011): the sentence contains a 5W1H word at its beginning, or it contains auxiliary words (e.g., "what is" instead of what, or "how does" instead of how).

Learning-based Approach: These approaches learn specific patterns that can be used to identify sentences that are questions. In this study, we evaluated the following methods: (i) a Naive Bayes classifier trained on the NPS Chat Corpus, which consists of over 10,000 posts from instant messaging sessions (Bird et al., 2009); as these posts have been labeled with dialogue act types, such as "Statement" and "ynQuestion", we used the classifier without any further training; (ii) the syntactic parser from the Stanford CoreNLP Natural Language Processing Toolkit (Manning et al., 2014), which provides a full syntactic analysis of an input sentence; the question sentences were identified by examining the syntactic structure of each sentence in the dataset.

Quest-CNN Architecture

In this section, we describe our proposed model, which takes as input all the sentences that the above question extraction methods identified as questions and distinguishes between sentences that are real questions, c-questions, and non-questions. Let x_i ∈ R^k be the k-dimensional word vector corresponding to the i-th word of a sentence. Each sentence can be represented as the concatenation of the word embeddings of its words. By using multiple (t) convolutional filters, the model can capture multiple features for each sentence, as each filter u ∈ R^{hk} creates one feature c_i for each word region x_{i:i+h−1} (of size h): c_i = f(u · x_{i:i+h−1} + b), where b ∈ R is a bias term and f is a nonlinear activation function such as the rectified linear unit (ReLU).
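To make this step concrete, the following is a minimal PyTorch sketch (dimensions and variable names are illustrative assumptions, not the authors' code) of applying t filters of window size h over a sentence of concatenated k-dimensional word embeddings and max-pooling the resulting feature map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

k, h, t, n_words = 300, 3, 160, 40           # embedding dim, window size, #filters, sentence length
x = torch.randn(1, 1, n_words, k)            # one sentence: (batch, channel, words, embedding dim)

# Each filter u has shape (h, k); Conv2d with kernel (h, k) slides over word regions x_{i:i+h-1}.
conv = nn.Conv2d(in_channels=1, out_channels=t, kernel_size=(h, k))
feature_maps = F.relu(conv(x)).squeeze(3)     # c_i = f(u . x_{i:i+h-1} + b) -> (1, t, n_words - h + 1)
pooled = F.max_pool1d(feature_maps, feature_maps.size(2)).squeeze(2)  # one c_max per filter -> (1, t)
```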
By padding the end of the sentence, the model can produce a feature map c_u = [c_1, c_2, ..., c_{n−h+1}] associated with a filter u, where n is the number of words of the largest sentence. In addition, by using t filters (with different window sizes), the model can create t feature maps that are passed through a max pooling operation in order to obtain one value c_max for each filter. The output of the pooling layer is then concatenated with the vector of the statistical features of each sentence, thus creating a richer representation of each sentence. The sentence representations are then passed through two fully connected layers and a softmax layer that outputs the probability distribution over the labels of the sentences. Finally, our model consists of multiple channels that represent different features. Each filter i is applied to all the channels, and the feature map c_i is calculated by adding the results across channels. In the ablation study (section 5.2.3), we presented the performance of different variations of the model that consist of one to three channels and use (or do not use) the information of the statistical features.

Feature description

Since the classification of the sentences is challenging due to their short length, we tried to utilize not only lexical features but also syntactic features, statistical features, and semantic features using domain knowledge. In particular, we investigated the influence of the above-mentioned features by creating multiple feature channels for each sentence.

Word representation: The first channel consists of the k-dimensional word embeddings of the words of a sentence, where x_i ∈ R^k corresponds to the i-th word in the current sentence. For the initialization of the word embeddings, we examined multiple options. We can either randomly initialize the vectors and then allow their modification during training, or use pre-trained word embeddings that were trained on a much bigger corpus. In the second case, we also investigated the effect of two popular pre-trained word embedding datasets. The first one is a general-domain dataset, the Google News dataset, which contains 300-dimension word2vec embeddings trained on 100 billion words (Mikolov et al., 2013). The second one is the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) III dataset, a publicly available ICU dataset (Johnson et al., 2016) that consists of medical data (e.g., laboratory tests, medical notes) collected between 2001 and 2012. Since a training course is mandatory in order to access it, there are no publicly available word embeddings; thus, we trained our own embedding model that produced 300-dimension word2vec embeddings based on the NOTEEVENTS table, which consists of 2,083,180 rows of patient notes. We believed that comparing the effectiveness of our model when it is provided with a large general-domain dataset versus a smaller specific-domain dataset can be a useful guide not only for our use case but for other NLP problems as well. Words that do not appear in the pre-trained vocabulary are randomly initialized using a uniform distribution (Aguilar et al., 2017).

POS-tagging word representation: Questions usually have specific syntactic structures, and by including this channel we hoped to capture these meaningful syntactic features. By obtaining the POS tags of the words in each sentence, we produced embeddings of each POS tag with the same dimension as the word embeddings.
The POS tag vectors in the model can either be initialized randomly from a uniform distribution and allowed to be modified during training, or they can be created as one-hot vectors (vectors of all 0s and a single 1 indicating the presence of a specific POS tag). By comparing these two representations, we aimed to understand whether a richer representation of the POS tags can significantly improve the performance of the model.

Semantic Features: Semantic features were also introduced in order to boost the model's performance by connecting different words that share a semantic meaning. In order to extract relevant clinical entities, we used the UMLS Metathesaurus, a large biomedical thesaurus, which organizes words by (medical) groups and links similar words (Humphreys et al., 1998). The identification of the medical words and their UMLS groups was accomplished using the open-source Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) (Savova et al., 2010). We examined two different strategies for incorporating semantic features: (i) replacing the words that appear in the database with their respective medical group name, or (ii) creating a new channel where, for each group, a new vector is created and initialized either randomly or by using the pre-trained word2vec vectors as described above. As our final goal is for the model to work in any domain, in the experiment section we also present our model's accuracy when extracting concepts from WordNet, an extensive lexical database where words are grouped into sets of synonyms (synsets). Finally, four statistical features were included in the model, namely the length of the sentence, the number of words, the number of capitalized words, and the vocabulary coverage. These features were introduced in (Zhao and Mei, 2013) for the identification of questions on the Twitter platform.

Regularization

For regularization, we used "embedding dropout", an idea that was introduced for language modeling in (Merity et al., 2018), and performed dropout on entire word embeddings, thus removing some words in each training phrase. Although the "embedding dropout" technique was used for the purpose of regularizing RNN-based models, we observed that it can work equally well in a CNN-based model, as its main purpose is to make a model rely less on a small set of input words. In addition, we applied dropout on the last two fully connected linear layers. The dropout mechanism randomly sets a portion of hidden units to zero during training, thus preventing the co-adaptation of neurons (Srivastava et al., 2014).
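A minimal sketch of one way to implement embedding dropout for a CNN input (an illustrative interpretation, not the authors' code): entire rows of the sentence's embedding matrix are zeroed out during training, so whole words are dropped rather than individual embedding dimensions.

```python
import torch

def embedding_dropout(sent_emb: torch.Tensor, p: float, training: bool) -> torch.Tensor:
    """Zero out entire word vectors of a (n_words, k) sentence embedding matrix.

    Dropping whole rows removes words from the training phrase, unlike standard
    dropout, which zeroes individual embedding dimensions.
    """
    if not training or p == 0.0:
        return sent_emb
    keep = (torch.rand(sent_emb.size(0), 1, device=sent_emb.device) > p).float()
    return sent_emb * keep / (1.0 - p)   # rescale kept rows, as in inverted dropout
```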
Experiments

In this section, we present the results of an empirical evaluation of the question-detection methods that were described in section 3 and of our proposed model from section 4. In particular, we analyzed the performance of the rule- and learning-based methods and provide specific examples that highlight their limitations. We also provide a comparison between the different deep learning text classification methods described in section 2 in order to show the efficiency of our proposed model. Finally, we conducted an ablation study that demonstrates the importance of the features (channels) for achieving high-accuracy results.

Datasets. The main dataset used in this study is a review log consisting of the query-answer dialogue in the Dialysis Measurement, Analysis and Reporting System (DMAR®). DMAR® is a web-based application that collects patient-level clinical data within a renal program (Blake et al., 2013; Oliver et al., 2010), and its review module facilitates the "query-answer" style of communication between the reviewer (nephrologist) and users (renal coordinators) during the routine care process. A user would post a question to the reviewer only when she/he was unsure about the correctness of patient data (for example, 'why the hernia was considered absolute contraindications?'). Therefore, the questions in this dialogue dataset provide a good indicator of data quality issues. The dataset used in this study was extracted from DMAR® between 2013 and 2019. This dataset offered a rare (in terms of quantity) opportunity to examine a vast array of different question types during medical data review processes over a fairly long period of time in a multi-site, real-world clinical setting. The annotations of the sentences were created by manually checking each sentence, as a ground truth was non-existent. Unfortunately, due to the sensitive nature of the dataset (it contains medical information of patients), we cannot provide a link to a downloadable version of the data without the approval of the research ethics boards. In addition, in order to test our model in a general setting, we experimented with the Twitter dataset that was created and analyzed in (Zhao and Mei, 2013). This dataset contains 2,462 tweets that were annotated by two human annotators as questions (conveying an information need) or non-questions. This dataset was used to evaluate the models on the binary classification task of separating actual questions from sentences that do not require an answer. Table 1 lists the statistics of both datasets. For the evaluation of the question detection methods, the precision, recall, and F1 score (the harmonic mean of recall and precision) were reported. For the evaluation of the deep-learning models, we reported the micro-averaged F1 score and the F1 score for the multi-class and binary-class datasets, respectively, on the testing and the validation sets.

Question Extraction

The performance of the different question extraction methods on the DMAR dataset is presented in Table 2. The results showed that the simplest rule, i.e., identifying question marks, had medium performance. The misclassification cases may be due to the language patterns in casual conversational logs, where people sometimes forget to add a question mark (?) to a question (what symptoms did the patient present.), use a question mark to show irony (the procedure has already begun so maybe an update next time?), or try to be polite when they encourage a person to take action (please see comments?). By adding the 5W1H rule, the recall could be boosted, but the precision dropped significantly, due to the fact that many sentences can contain 5W1H words but are not questions (e.g., just wait and see what happens). Furthermore, we observed that by applying the refined rules introduced in (Li et al., 2011), the model could maintain almost the same recall while achieving a significant improvement in precision. Finally, the results revealed that the rules in (Efron and Winget, 2010) had the overall best (F1) performance. This indicated that even though these rules were made for a different context (Twitter), they can be applied to other domains (e.g., medical).
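As an illustration of the rule-based detectors compared above, the following is a minimal sketch in which the exact patterns of the cited works are simplified assumptions:

```python
import re

WH_WORDS = ("what", "who", "where", "when", "why", "how")

def ends_with_question_mark(sentence: str) -> bool:
    return sentence.rstrip().endswith("?")

def contains_5w1h(sentence: str) -> bool:
    words = re.findall(r"[a-z']+", sentence.lower())
    return any(w in WH_WORDS for w in words)

def refined_5w1h(sentence: str) -> bool:
    """Simplified version of the refined rules: a 5W1H word at the start of the
    sentence, or a 5W1H word followed by an auxiliary (e.g., 'what is', 'how does')."""
    s = sentence.strip().lower()
    if s.startswith(WH_WORDS):
        return True
    return re.search(
        r"\b(what|who|where|when|why|how)\s+(is|are|was|were|do|does|did|can|could)\b", s
    ) is not None

# A sentence is flagged as a candidate question if any detector fires.
is_candidate = lambda s: ends_with_question_mark(s) or contains_5w1h(s) or refined_5w1h(s)
```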
By investigating the performance of the learning-based approaches, we observed that the parsing algorithm could significantly improve the precision of question detection, but with low recall, as it missed a large number of questions. This is largely caused by the irregular syntax patterns used in real-life casual conversation (e.g., he is still sitting in my baseline?). Finally, the Naive Bayes classifier identified half of the questions with low precision, which indicates that using transfer learning (it is trained on the NPS Chat Corpus) cannot achieve good performance, especially in cases where the conversation is domain-specific, such as in a clinical setting or an electronic medical record system. Thus, methods of this type would need to be trained with domain-specific examples, which are usually resource-intensive. It should be noted that we did not evaluate the different question extraction methods on the Twitter dataset, as it was constructed from tweets that were all "potential" questions (they all contain a question mark) and thus cannot provide a fair comparison of the methods. To evaluate the Quest-CNN model's ability to identify the actual questions and the c-questions, we used the Twitter dataset and the dataset that contains all the sentences extracted by all the question extraction methods from the DMAR dataset.

Multi-channel CNN experiments

In this section, we report the evaluation of our proposed model in identifying questions and c-questions. Firstly, we present a comparison of our model with the other deep-learning models that were described in section 2. In addition, we also tested the performance of a bi-directional LSTM (Bi-LSTM) model that used as input all the features that we described in section 4.1, where the last hidden state is fed to a fully-connected softmax layer. Finally, we compared our model to logistic regression (LR) using a one-vs-rest multi-class objective and a support vector machine (SVM) model using a linear kernel. For the last two methods, we used the tf-idf measure of the words as features. In addition, we conducted an ablation analysis of our model. In summary, the baseline was a CNN-rand model, where the word embeddings were randomly initialized but could be modified during training. We also provide the performance of our model when the word2vec embeddings were used (Google News or MIMIC), and we examined whether allowing further training of the embeddings (CNN-non-static) can achieve better results than keeping them static (CNN-static). Finally, we examined whether the statistical, POS-tag, and semantic features were meaningful and which form of these features (as described in section 4.1) could achieve better performance (multi-channel scenario). All the experiments were performed with PyTorch. In the interest of providing a fair comparison, we tuned the hyperparameters of each model. To train XML-CNN, we selected a dynamic pooling window of length 3, a learning rate of 5e-5, a batch size of 32, a feature map of 128, a hidden linear layer of size 77, filter sizes of (3,4,5), and dropout of 0.338. For Kim-CNN, we used a batch size of 64, filter sizes of (2,4,8), a feature map of 164, a learning rate of 0.003, and dropout of 0.077. For FastText, we used a batch size of 64 with a learning rate of 0.051. For training the Bi-LSTM, we chose a batch size of 64, a learning rate of 0.004, dropout of 0.056, and a hidden size of 50. For SEQ-CNN, we used a batch size of 32, a learning rate of 0.084, a filter size of 4, a feature map of 1033, and dropout of 0.215.
Finally, for the SVM and the LR models, we chose the default hyperparameters of the scikit-learn library. For Quest-CNN, we used a batch size of 32, a learning rate of 0.012, a feature map of 160, filter sizes of (3,4,5), a hidden linear layer of size 96, a dropout rate of 0.164, and an embedding dropout of 0.016. In addition, we applied spatial batch normalization to the output of each channel of our model and batch normalization, a technique for normalizing layer inputs, to the output of the hidden linear layer. This technique has been shown to accelerate the training of different deep learning models (Ioffe and Szegedy, 2015). The Adam optimizer (Kingma and Ba, 2015) was chosen, with cross-entropy loss as the optimization objective. Finally, each model was trained for 30 epochs. Furthermore, in Figure 2, we provide the validation performance of the models based on the technique in (Dodge et al., 2019), which measures the mean and variance of the performance of a model as a function of the number of hyperparameter trials. Finally, as the datasets do not have a standard dev set, we split each dataset into 80% training, 10% validation, and 10% testing. In order to present more robust results, we ran our model with five different seeds and provide the average scores for the testing and the validation sets.

Table 3: Results (mean ± standard deviation of five runs from each model) on the test and validation sets; average running time (R.T.) and number of trainable parameters (param.) are also provided for each model. The number of parameters differs between datasets, as we included the embedding vectors that are fine-tuned for each dataset; all models used the same (MIMIC) embeddings; best values are bolded.

Deep Learning Model comparison

The mean and standard deviation (SD) of the scores for our model and the other competing models on the task of identifying questions are reported in Table 3. Quest-CNN achieved the best F1 score on both datasets (86.9% and 65.5%). This is due to the fact that the model utilizes information from all the features (syntactic, semantic, statistical) and uses regularization and normalization techniques. In section 5.2.3, we analyze in detail the effect of each feature. The FastText model performed the poorest on DMAR (77.8%), as this model does not consider the word order of the sentence, but it required the least amount of running time. In addition, the Kim-CNN, XML-CNN, and Seq-CNN architectures had similar performance on the DMAR dataset (83.9%, 85.5%, and 84.2%), but on the Twitter dataset Seq-CNN had the second best performance (after Quest-CNN) with 64.2%. However, the running time of Seq-CNN is the largest by a considerable margin. Furthermore, the Bi-LSTM model, which utilized all the features, achieved a non-optimal performance, which confirmed our hypothesis that a CNN architecture is more suitable for the task of question identification. Also, it should be noted that SVM and LR achieved decent performance; SVM in particular achieved a performance similar to the FastText model on the DMAR dataset and even surpassed the Bi-LSTM on the Twitter dataset.

Ablation Study

The comparison of the performance of different variants of our model is presented in Figure 3 (Figure 3: F1 scores of the ablation study for Quest-CNN on the DMAR dataset; "word context" means that we did not create a new channel but replaced the words in the sentence with their semantic group name). Firstly, even our baseline model (CNN-rand)
Ablation Study The comparison of the performance of different variants of our model is presented in Figure 3. Figure 3: F1 scores of the ablation study for the Quest-CNN on the DMAR dataset; the "word context" setting means that we did not create a new channel but replaced the words in the sentence with their semantic group name. Firstly, even our baseline model (CNN-rand) performed better than the classic machine learning models (LR and SVM). In addition, by using pre-trained vectors, a static model could achieve a better performance than CNN-rand. Further fine-tuning of the pre-trained models could improve the performance even more. Finally, the multi-channel model achieved a better performance than the single-channel model in every case, as the best performance was achieved by using all three feature types and the statistical features. We observed that the strategy of creating a new channel for the semantic features achieved a better performance than replacing the words with their respective medical group name. This provided evidence that a richer representation of the sentences could help the model to make more accurate evaluations. Using Domain Knowledge By comparing the behavior of our model when it utilizes pre-trained word embeddings from a general (Google News) or a specific (MIMIC) domain (for the word embedding channel and for the semantic feature channel), we observed that the MIMIC III pre-trained embeddings have a more positive effect on the behavior of the model. In addition, we experimented with the creation of the semantic channel by either using medical group names (from UMLS, a medical Metathesaurus) or concepts from WordNet (a general-domain database). For WordNet, as the assignment of words to concepts can be ambiguous, we considered only the first concept. This choice was based on the assumption that WordNet returns a list of concepts where the most common meaning is listed first (Elberrichi et al., 2008). Similar to the embeddings case, we observed that concepts from a specific domain are more beneficial for the performance of the model (Figure 3). These experiments indicated that, when applying a classification model to documents of a specific domain (i.e., medical), exploiting domain knowledge from a dataset is more advantageous than using general knowledge from a larger dataset. Finally, our experiments showed that using a one-hot representation was generally weaker than a word embedding representation, even if the embeddings were initialized randomly and were allowed to be further updated during the training phase of the model. Using Pre-determined Characteristics Finally, we examined whether adding information from the question extraction methods could improve the performance of the model. Specifically, for each sentence, an n-vector was created, where n was the number of question extraction methods. Position i was set to 1 if the i-th method classified the sentence as a question; otherwise, it was set to 0. This vector was concatenated with the output of the max-pooling layer of the Quest-CNN model (like the statistical features) and then fed to the last fully connected layers in our model. By comparing the performance of the model when these characteristics were available to it (Figure 3), we could observe that our model was capable of identifying these characteristics on its own, and any external guidance would not further improve the performance of the model.
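A minimal, hypothetical sketch of this "pre-determined characteristics" vector is given below: one binary entry per question extraction method, concatenated with the pooled CNN features before the final fully connected layers. The toy extractors, the example sentence and the pooled feature size are illustrative assumptions.

import torch

def extraction_method_vector(sentence, methods):
    # Entry i = 1.0 if method i flags the sentence as a question, else 0.0.
    return torch.tensor([1.0 if m(sentence) else 0.0 for m in methods])

# Toy stand-ins for rule-based extractors (question mark, 5W1H first word).
methods = [
    lambda s: s.strip().endswith("?"),
    lambda s: s.lower().split()[0] in {"what", "when", "where", "who", "why", "how"},
]

sentence = "when should I check my blood sugar"
extra = extraction_method_vector(sentence, methods)   # shape: (2,)
pooled = torch.randn(160 * 3)                         # placeholder pooled CNN features (160 maps x 3 filter sizes)
combined = torch.cat([pooled, extra])                 # fed to the final dense layers
print(extra, combined.shape)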
Conclusion and Future work In this paper, we have provided an analysis of the performance of existing methods for question extraction, with real-world misclassification examples that showed the weak points of each method. Furthermore, we have proposed a novel approach for the automatic identification of real questions and c-questions. We have also shown empirically that the proposed architecture of unifying syntactic, semantic and statistical features achieved a state-of-the-art F1 score for this particular task. Finally, we have presented the relevance of exploiting domain knowledge to the overall performance of a model. We are in the process of obtaining access to datasets from different application contexts in order to examine the generalizability of our model. As for future work, we plan to extend our work by calculating the similarity of questions in order to create groups of questions that represent the most impactful "problems" of a given application environment. Finally, we plan to compare our model with recent language representation models such as the BERT model (Devlin et al., 2019), both for the task of question identification and for the task of creating the above-mentioned "problem" groups. Appendices A UMLS Metathesaurus As we described in the paper, for the creation of the semantic features of our model, relevant medical group names from the UMLS Metathesaurus were extracted. An example of mapping words to medical group names can be observed in Figure 4, where different words that have a medical significance can be categorized as a medical procedure, a disorder or an anatomy term. We also provide a complete list of all the Semantic Group Names and Semantic Type Names that our semantic feature model can choose from in Table 4. In our experiment, we chose to use only the more general Semantic Group Names: since sentences are generally short, we wanted the connections between semantically similar words to be as dense as possible. Finally, it should be noted that UMLS also contained additional semantic groups (such as Occupations and Organizations), which we decided to remove from the process of creating the semantic features, as these groups are irrelevant to the domain of the testing dataset. However, these medical groups did not influence the general performance of our model. B C-question Format In this paper, we introduced the notion of a c-question, which is a question referring to an issue mentioned in a nearby sentence. The annotations of the sentences were created by manually checking each sentence, and we adhered to the following rules: A sentence can be classified as a c-question if: • It can be classified as a real question, i.e., it expects an answer (information or help) about an issue. Finally, in Table 5, we list the statistics of the sentences that were classified as question-sentences and the c-questions in order to make the distinction between these categories more understandable. For example, we can observe that even though the average number of words in questions is 2.9 higher than the average number of words in c-questions, the average number of demonstrative pronouns is the same for both categories.
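As a small illustration of the kind of per-category statistics summarized above, the snippet below computes the average word count and the average number of demonstrative pronouns for a toy set of questions and c-questions; the example sentences and the demonstrative-pronoun list are invented for illustration, not the paper's annotated data.

from statistics import mean

DEMONSTRATIVES = {"this", "that", "these", "those"}

def sentence_stats(sentences):
    word_counts, demo_counts = [], []
    for s in sentences:
        tokens = s.lower().split()
        word_counts.append(len(tokens))
        demo_counts.append(sum(t in DEMONSTRATIVES for t in tokens))
    return mean(word_counts), mean(demo_counts)

questions = ["what dose should I take before meals?", "is that reading too high?"]
c_questions = ["is this ok?", "should I worry about that?"]

for name, group in [("questions", questions), ("c-questions", c_questions)]:
    avg_words, avg_demo = sentence_stats(group)
    print(f"{name}: avg words = {avg_words:.1f}, avg demonstratives = {avg_demo:.1f}")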
Plant-Derived Pesticides as an Alternative to Pest Management and Sustainable Agricultural Production: Prospects, Applications and Challenges Pests and diseases are responsible for most of the losses related to agricultural crops, either in the field or in storage. Moreover, due to indiscriminate use of synthetic pesticides over the years, several issues have come along, such as pest resistance and contamination of important planet sources, such as water, air and soil. Therefore, in order to improve efficiency of crop production and reduce food crisis in a sustainable manner, while preserving consumer’s health, plant-derived pesticides may be a green alternative to synthetic ones. They are cheap, biodegradable, ecofriendly and act by several mechanisms of action in a more specific way, suggesting that they are less of a hazard to humans and the environment. Natural plant products with bioactivity toward insects include several classes of molecules, for example: terpenes, flavonoids, alkaloids, polyphenols, cyanogenic glucosides, quinones, amides, aldehydes, thiophenes, amino acids, saccharides and polyketides (which is not an exhaustive list of insecticidal substances). In general, those compounds have important ecological activities in nature, such as: antifeedant, attractant, nematicide, fungicide, repellent, insecticide, insect growth regulator and allelopathic agents, acting as a promising source for novel pest control agents or biopesticides. However, several factors appear to limit their commercialization. In this critical review, a compilation of plant-derived metabolites, along with their corresponding toxicology and mechanisms of action, will be approached, as well as the different strategies developed in order to meet the required commercial standards through more efficient methods. Introduction Pesticides may be defined as any compound or mixture of components intended for preventing, destroying, repelling or mitigating any pest [1]. Additionally, herbicides or weed-killers may also be considered as pesticides, and are used to kill unwanted plants in order to leave the desired crop relatively unharmed and well provided with nutrients, leading to a more profitable harvest [2]. Nevertheless, the world food production is constantly affected by insects and pests during crop growth, harvest and storage. As a matter of fact, there is an estimated loss of 18-20% regarding the annual crop production worldwide, reaching a value of more than USD 470 billion [3]. Furthermore, insects and pests not only represent a menace to our homes, gardens and reservoirs of water, but also, they transmit a number of diseases by acting as hosts to some disease-causing parasites. Therefore, the mitigation or control of Acetylcholinesterase Enzyme (AChE) Acetylcholinesterase (AChE), an enzyme that hydrolyzes the neurotransmitter acetylcholine, plays an important role regulating the transmission of the cholinergic nervous impulse, and may also be a target for biopesticides. Coumaran (2,3-dihydrobenzofuran), an active ingredient found in Lantana camara L. (Verbenaceae), inhibits this enzyme, building up the concentration levels of acetylcholine in the synapse cleft, causing an excessive neuroexcitation due to the prolonged biding of the neurotransmitter to its postsynaptic receptor, leading to restlessness, hyperexcitability, tremors, convulsion, paralysis and death [30]. It presents low toxicity to mammals and works rapidly against houseflies and grain storage pests, in spite of its short residual activity. 
Regarding its mechanism of action, coumaran may also be compared to the monoterpene 1,8-cineole [31] or other synthetic pesticides such as organophosphates and carbamates [32]. Moreover, Khorshid, et al. [33] have presented the inhibitory activity of methanolic extract from Cassia fistula L. (Fabaceae) roots and proposed the indole alkaloids as new potential active agents. However, it should be noted that many essential oils and terpenes therefrom have demonstrated anti-AChE activity in vitro, but their contribution to insect mortality is questionable [34]. Nicotinic Acetylcholine Receptors In relation to nicotinic acetylcholine receptors, they are present in the insect nervous system, either on pre or postsynaptic nerve terminal, as well as the cell bodies of the inter neurons, motor neurons and sensory neurons [35]. Nicotine, an alkaloid firstly isolated from Nicotiana tabacum L. (Solanaceae), can mimic acetylcholine by acting as an agonist of the acetylcholine receptor, leading to an influx of sodium ion and generation of action potential. Under normal conditions, the synaptic action of acetylcholine is terminated by AChE. However, since nicotine cannot be hydrolyzed by AChE, the persistent activation caused by the nicotine leads to an overstimulation of the cholinergic transmission, resulting in convulsion, paralysis and finally death [35]. Nicotine is an extremely fast nerve toxin, most effective towards soft-bodied insects and mites. However, is the most toxic of all botanicals and extremely harmful to humans [2]. Alternatively, there is another class of insecticides inspired on nicotine chemical structure, called neonicotinoids, which may be represented by imidacloprid, acetamiprid and thiamethoxam. Similarly to nicotine, neonicotinoids interact with nicotinic acetylcholine receptors. However, they are more specific, being much more toxic to invertebrates such as insects than to mammals. Additionally, they have higher water solubility, which permits its application to soils and therefore, its absorption by plants, promoting a more efficient defense [36]. They may act as a contact or ingestion poison, leading to the cessation of feeding within several hours of contact followed by death shortly after [37]. GABA-Gated Chloride Channels GABA-gated chloride channels are potential targets for insecticides. Once they are blocked by its antagonists (such as the α-Thujone (isolated from Artemesia absinthium L. (Asteraceae)) or the picrotoxine (Anamirta cocculus (L.) Wight and Arn (Menispermaceae)), neuronal inhibition is reduced, leading to hyper-excitation of the central nervous system (CNS), convulsion and death. Previous research have reported the monoterpene thujone as a neurotoxin, since it acts on GABAA receptors as an allosteric reversible modulator, and as a competitive inhibitor of [3H]Ethynylbicycloorthobenzoate ([3H]EBOB binding) [38]. Additionally, the GABA receptor may be inhibited by the monoterpenoids carvacrol, pulegone and thymol through [3H]TBOB binding [39]. Similarly, the silphinene-type sesquiterpenes, plant-derived natural compounds, antagonize the action of aminobutyricacid (GABA), by stabilizing non-conducting conformations of the chloride channel [24,38]. As GABA is an endogenous ligand related to stimulate feeding and evoke taste cell responses on most herbivorous insects, the chemicals that antagonize GABA receptors may also be considered as antifeedant or deterrent compounds, affecting mostly aphids, lepidopterans and beetles [40,41]. 
Octopamine Receptors Octopamine is a multi-functional endogenous amine that acts as a neurotransmitter, neurohormone and neuromodulator on invertebrates [42]. Its receptors are widely distributed in the central and peripheral nervous systems of insects, comprising the octopaminergic system, constituting of several subtypes of octopamine receptors, which are coupled to different second messenger systems, therefore playing a key role in mediating physiological functions and behavioral aspects [43][44][45]. For instance, octopamine1 receptor modulates myogenic rhythm of contraction in locust extensor-tibiae through changes in intracellular calcium concentrations, whereas octopamine2A and octopamine2B receptors mediate their effects through the activation of adenylatecyclase. Moreover, octopamine3 receptors mediate changes in cyclic adenosine monophosphate (CAMP) levels in the locust central nervous system [46]. The rapid action of monoterpenes against some pests suggests a neurotoxic mode of action. This hypothesis was confirmed by Reynoso, et al. [47], who have demonstrated repellent and insecticidal activity of eugenol against the blood-sucking bug Triatoma infestans (Klug; Reduviidae) through activation of the octopamine receptor. Previous studies have reported the presence of octopamine receptors in a large variety of insects, including, firefly, flies, nymphs, cockroaches and lepidopterans [46][47][48]. As these receptors do not conform to the receptor categories that have been recognized in vertebrates, agonists of octopamine receptors may be a valuable candidate for a commercial pesticide, once they are target-specific, less toxic to mammals and have a different mechanism of action when compared to the majority of pesticides currently in the market [47]. Plant Derived Insecticides That Affect Respiratory or Energy System Cellular respiration is a process that converts nutrient compounds into energy or adenosine triphosphate (ATP) at a molecular level. More specifically, this process is performed by the electron transport chain of the mitochondria, which comprises several important enzymes that are potential targets for insecticides. Rotenone is the most common natural product among rotenoids, a type of isoflavonoid and is usually found in species from Derris and Lonchocarpus (in Fabaceae) and Rhododendron (in Ericaceae), spread throughout East Indies, Malaya and South America [20]. Rotenone is defined as a complex I inhibitor of the mitochondrial respiratory chain, which works both as contact and stomach poison. It blocks the nicotinamide adenine dinucleotide (NADH) dehydrogenase, stopping the flow of electrons from NADH to coenzyme Q, therefore, preventing ATP formation from NADH, but maintaining ATP formation through flavine adenine dinucleotide (FADH 2 ); therefore, it is one of the slowest acting botanical insecticide, and yet readily degradable by air and sunlight, taking several days to kill insects, affecting primarily nerve and muscle cells, leading to cessation of feeding, followed by death, from several hours to a few days after exposure. Moreover, this bio-based pesticide is constantly applied to protect lettuce and tomato crops as it has a broad spectrum of activity against mite pests, including leaf-feeding beetles, lice, caterpillars, mosquitoes, ticks, fire ants and fleas. Furthermore, its effects are substantially synergized by PBO or pyrodone (MGK 264). Rotenone is highly toxic to mammals and fish [24,49]. 
Its activity and persistence are comparable to dichlorodiphenyltrichloroethane (DDT) [2]; moreover, previous studies have suggested a possible link between rotenone exposure and Parkinson's Disease (PD) [50]. However, in spite of its high toxicity, rotenoids may be a potential source of novel complex I inhibitors, acting as a prototype for the development of safer and more efficient pesticide derivatives [51]. Plant Derived Insecticides That Affect the Endocrine System Chemical constituents that interfere with the endocrine system of insects are classified as insect growth regulators (IGR). They may act either as juvenile insect hormone mimics or inhibitors, as well as chitin synthesis inhibitors (CSI). Normally, the juvenile hormones are produced by insects in order to maintain their immature state. When sufficient growth has been reached, the production of the hormone stops, triggering the molt to the adult stage [53]. Triterpenes from Catharanthus roseus (L.) G. Don (Apocynaceae), such as α-amyrin acetate and oleanolic acid, have demonstrated interesting growth regulator activity [54]. Acyclic sesquiterpenes such as davanone, ipomearone and the juvenile hormone from silkworm are perfect examples of natural products with IGR activity as well. Therefore, the constant application of IGRs to crops will maintain the insects in their larval state, preventing successful molting and resulting in efficient pest control [55]. On the other hand, the anti-juvenile hormone activity of two chromenes found in Ageratum conyzoides L. (Asteraceae), precocene I and II, has been reported: they promote a precocious metamorphosis of the larvae and the production of sterile, moribund and dwarfish adults after exposure [56]. Resistance to azadirachtin has been demonstrated [57], indicating that insects can develop resistance to natural hormones or hormone-related compounds; nevertheless, this class of compounds remains a promising natural source for commercial bio-based pesticides [55]. Additionally, complex polyphenolic fractions also present a wide range of insecticidal activities, interfering with fecundity and inducing the disruption of oogenesis [58,59] (WO 94/13141). Moreover, previous research has reported a natural insecticide of broad-spectrum activity, which has low mammalian toxicity and is the least toxic among botanical insecticides. It is called azadirachtin, a complex tetranortriterpenoid limonoid, mainly found in the seeds of Azadirachta indica A. Juss. (Meliaceae), a plant species commonly known as the neem tree, which originated in Burma but is currently grown in the more arid, tropical and subtropical zones of Southeast Asia, Africa, the Americas and Australia [24,26,60]. Azadirachtin is considered a contact poison with systemic activity, whose effects may be categorized in two ways: direct effects on cells and tissues, or indirect effects, represented by endocrine system interference. It is a powerful compound that acts mainly as a feeding deterrent and insect growth regulator, affecting a wide variety of insect taxa, including Lepidoptera, Diptera, Hemiptera, Orthoptera and Hymenoptera [60]. As for its growth regulatory effects, azadirachtin affects the neurosecretory system of the insect brain, blocking the release of morphogenetic peptide hormones (e.g., prothoracicotropic hormone (PTTH) and allatostatins). These hormones control the function of the prothoracic glands and the corpora allata, respectively.
Therefore, as the moulting hormone (which controls new cuticle formation and ecdyses) and the juvenile hormone (JH) (which controls the juvenile stage at each moult) are regulated by prothoracic glands and the corpora allata, any disruption on this biochemical cascade may lead to moult disruption, moulting defects or sterility. The effects on feeding, developmental and reproductive disruption are caused by effects of the molecule directly on somatic and reproductive tissues and indirectly through the disruption of endocrine processes [60]. Neem-based non-commercial products are normally found as neem oil, obtained from the cold pressing of its seeds, in order to control phytopathogens (including insects). The other product is a medium-polarity extract containing azadirachtin (0.2-0.6% of seed/weight) [2], whereas the actual commercial product is a 1 to 4.5% azadirachtin solution [61]. Despite its 20 h half-life, it ensures a reasonable persistence in field applications due to its systemic action [2]. In relation to CSIs, they inhibit the production of chitin, a β-(1,4)-linked homopolymer of N-acetyl-D-glucosamine, one of the most important structural components of nearly all fungi cell walls, and also a major component of the insect exoskeleton, which provides physical protection and osmoregulation. As chitin is absent on plant and mammalian species, while it is abundant in arthropods and most fungi, chitin biosynthesis has become an important target for developing more specific insecticides and antifungal agents. Previous research has reported chitin synthase inhibition activity of 2-benzoyloxycinnamaldehyde (2-BCA), a natural product isolated from the roots of Pleuropterus ciliinervis Nakai (Polygonaceae), which is a plant species traditionally used in Chinese folk medicine to treat inflammation and several types of infection [62]. Plant Derived Insecticides That Affect the Water Balance Insects have a thin layer of wax covering their body, which provides the ecological function of preventing water loss from the cuticular surface. For instance, vegetable crude oils of rice bran, cotton seed and palm kernel, as well as saponins (natural soaps) may act by disrupting this protective waxy covering, affecting the water balance of insects through a rapid water loss from the cuticle, therefore leading to death by desiccation. Interestingly, the action of soaps affects the wax covering of insects [63]. The action of soaps on the wax covering of insects is influenced by the temperature [64]. Additionally, the crude oils may also act by interfering with insect respiration by plugging the orifices called spiracles, resulting in death by asphyxiation, controlling several types of insects such as whiteflies, mites, caterpillars, leafhoppers and beetles [1]. Other Classes of Pesticides The botanical pesticide agents may also be categorized into repellents, attractants, antifeedants or deterrents, molluscisides, fungicides, phytotoxins (herbicides) and phototoxins [15]. These classes are less common in plant sources than the insecticides [65]. Sometimes, a given compound may act as an insecticide and/or as a repellent. The major difference between those two is that the repellent does not kill insects, but only keeps them away by releasing pungent vapors or exhibiting a slight toxic effect [66]. 
Attractants In relation to attractants, they are considered semiochemicals or communication compounds, released by plants in order to attract insects or to attract natural predators of the insects that feed on the plant [74]. Miller [75] related the release of (−)- and (+)-limonene from white pine (Pinus strobus L. (Pinaceae)) to the attraction of the white pine cone beetle, Conophthorus coniperda Schwarz (Curculionidae), as well as the attraction of the predator beetle, Enoclerus nigripes Say (Cleridae), through the release of (−)-α-pinene and the sesquiterpene caryophyllene [76]. Antifeedants or Deterrents Previous studies have attributed antifeedant activity to a chemoreception mechanism, consisting of the blockage of receptors that normally respond to phagostimulants or of the stimulation of deterrent cells (primary antifeedancy). According to Qiao et al. [77], azadirachtin reduces the cholinergic transmission of neurons related to the suboesophageal ganglion (SOG) of Drosophila melanogaster Meigen (Drosophilinae), which are strongly related to feeding behavior. Additionally, food consumption may also be reduced due to toxic effects after the first intake (secondary antifeedancy), through astringency, bitter taste or anti-digestive activity towards certain herbivores [78,79]. For instance, Okwute and Nduji [80] have reported that schimperii, a gallotannin isolated from Anogeissus schimperi (Hochst. ex Hutch and Dalziel) (Combretaceae), was responsible for conferring this unattractive taste to herbivores. Similar effects were reported for isoflavonoids [81], acetogenins [82,83] and cyanogenic glycosides, such as linamarin [84]. However, as demonstrated by Huang et al., although numerous plant natural products act as antifeedants, no commercial product based on this mode of action has been produced. Insect habituation to feeding deterrents considerably limits their utility in crop protection [85]. Phytotoxins or Herbicides Regarding phytotoxins, they may be defined as natural herbicides that are released by plant species in order to interfere with the growth or germination of specific targets around them, such as weeds, leaving the emitting plant with more chances to survive. In nature, such action is called allelopathy, and the compounds that promote this action are defined as allelopathic agents [86][87][88]. Clay et al. [89] reported a study regarding the herbicidal activity of citronella oil against different weed species: the oil at a dose of 504 kg a.i. ha-1 largely killed the foliage of the weed species within one application. However, most species had regrown substantially after two months, except for Senecio jacobaea L. (Asteraceae), which was the most susceptible one. According to Ismail et al. [76], its herbicidal activity occurs through inhibition of photosynthesis. Besides the herbicidal activity of essential oils, Ismail, Hamrouni, Hanana and Jamoussi [90] have also reported plant-derived isolated compounds, such as eugenol and 1,8-cineole, with herbicidal activity promoted through inhibition of DNA synthesis and mitosis. Furthermore, several classes of secondary metabolites have already been described as phytotoxins, including naphthoquinones, such as juglone [91,92], amino acids such as m-tyrosine [93] and L-tryptophan [94], terpenoids such as 5,6-dihydroxycadinan-3-ene-2,7-dione [2,95] and citronellol [90], catechins [2,96], polyphenols [97] and alkylamides [98].
Phototoxins There is a class of phytochemicals, called phototoxins or light-activated compounds, which, instead of losing their efficiency through sunlight degradation, are actually enhanced or activated by light via two different mechanisms. In the first mechanism (less common), molecular oxygen from the phototoxin absorbs the energy from the light, generating activated species of oxygen which ultimately damage important biomolecules [99]. The other mechanism of action is photogenotoxic, where phytochemicals cause damage to DNA, triggered by sunlight activation, regardless of the presence of oxygen in the phototoxin. In this case, the phototoxin in its ground state absorbs the photon and reaches its excited state, which interacts with ground-state O2 located in the tissue of its target, generating singlet oxygen and enabling insecticidal activity. This peculiar mode of action of phototoxins is so different from that of conventional synthetic pesticides that cross-resistance among them is unlikely [100,101]. Light-activated phototoxins may be exemplified by several classes such as quinones, furanocoumarins, substituted acetylenes and thiophenes. For instance, Marchant and Cooper [102] have reported several phototoxins, such as 3-methyl-3-phenyl-1,4-pentadiyne, an oil constituent from Artemisia monosperma Delile (Asteraceae), which under sunlight-induced conditions exerts an activity similar to that of DDT against the housefly Musca domestica L. (Muscidae) and cotton leaf worm Spodoptera littoralis Boisduval (Noctuidae) larvae. They also discovered that an acetylenic epoxide from Artemisia pontica L. (Asteraceae), called ponticaepoxide, exhibits an LC50 of 1.47 ppm against mosquito larvae when submitted to UV light. Additionally, Nivsarkar et al. [103] found that the major compound from the roots of Tagetes minuta L. (Asteraceae), a thiophene called terthiophene or α-terthienyl (αT), is, when co-administered with near-UV light radiation, highly toxic against several organisms, such as nematodes, red flour beetles (Tribolium castaneum Herbst (Neoptera)), blood-feeding insects such as Manduca sexta L. (Sphingidae) and mosquito larvae (Diptera): Aedes aegypti L. (Culicidae), Aedes atropalpus Coquillett (Culicidae) and Aedes intrudens Dyar (Culicidae); and yet, to our knowledge, no commercial product has been generated. Plant-based natural product chemical structures with their corresponding pesticide activity and targets may be found in Table 2. Discussion Since ancient times, efforts to protect the agricultural harvest against pests have been reported. The use of inorganic compounds to control pests was reported between 500 B.C. and the 19th century. They included products based on sulphur, lead, arsenic and mercury [2]. On the other hand, plant biodiversity has proved to be an endless source of biologically active ingredients, used for traditional crop and storage protection. Egyptian and Indian farmers used to mix the stored grain with fire ash [104]. The ancient Romans used false hellebore (Veratrum viride Aiton (Melanthiaceae)) as a rodenticide. Moreover, pyrethrum (an extract from Tanacetum cinerariifolium (Trevir.) Sch.Bip (Asteraceae)) was used as an insecticide in Persia and Dalmatia, whereas the Chinese discovered the insecticidal properties of Derris spp. (Fabaceae) [105]. Previous studies have already reported more than 2500 plant species, belonging to 235 families, that have demonstrated biological activity against several types of pests [1].
However, in spite of the remarkable potential as natural sources for commercial botanical pesticide development, not many have been found on the market, remaining in use only for small organic crops and commonly classified as so-called farming products [106]. Plant-derived pesticides can be processed in various ways: as crude plant material in the form of dust or powder; as extracts or as pure plant natural products, formulated into solutions or suspensions [2]. Several different classes of natural compounds that promote pesticide activity have already been reported, namely: fatty acids, glycolipids, aromatic phenols, aldehydes, ketones, alcohols, terpenoids, flavonoids, alkaloids, limonoids, naphtoquinones, saccharides, polyolesthers, saponins and sapogenins [20,[107][108][109][110]. However, several factors appear to limit the commercialization of botanical pesticides, such as: problems in large scale production, non-availability of raw materials, poor shelf life, diminished residual toxicity under field conditions, limitations regarding standardization and refinement of the final product. Additionally, as the phytochemical profile of plant species may vary according to its genome/transcriptome/proteome/metabolome, and this variation depends on several edaphic-climatic factors (i.e., temperature, relative humidity, level of sunlight radiation, altitude, photoperiod and type of soil) as well as ecologic interactions, (i.e., herbivory or mutualism), manufacturers must take additional care in order to maintain efficiency and ensure that their products will perform consistently (standardization). Finally, even if all these issues are addressed, regulatory approval remains as the major barrier. A serious drawback to commercialization of botanicals is the high cost of processing plant materials to meet the standards of pesticide regulatory authorities [111]. Marrone [112] provided an overview of the current state of biopesticides and offered some ideas for improving their adoption, including conducting on-farm demonstrations and more education and training on how the products work and how to incorporate them into integrated pest management. In many jurisdictions, no distinction is made between synthetic pesticides and biopesticides. Simply because a compound is a natural product, it does not mean that it is safe, since most of the toxic poisons are natural products or inspired by them. Furthermore, if biopesticides are used indiscriminately, as wells as the synthetics, they may also lead to the development of pest resistance [113]. In this context, only a few new sources of botanicals have reached commercial status in the past twenty years. Thus, the major commercial botanic pesticides currently in use include: pyrethrin, rotenone, azadirachtin and essential oils in general [111], along with three other products, commercialized in a more limited way: ryania, nicotine and sabadilla or veratrine alkaloids [20]. Therefore, the best strategy for a botanical pesticide to meet all the standards required and reach commercial status in a more efficient and pragmatic way is by performing bioassay-guided fractionation in a high scale [82,109,114]. In other words, the bioassays assessment of several plant extracts and its fractions, obtained whether by sophisticated or unsophisticated purification procedures, may lead to the discovery of the most effective compound or mixture of compounds correlated to the pesticide activity of each corresponding species. 
The isolated compound may act as a lead compound or prototype for the synthesis or semisynthesis of pesticide derivatives, which, through structure-activity relationship (SAR) techniques, may result in more effective and safer products. However, sometimes, when the compound is presented in its isolated form, it may show no activity at all, proving that extracts or fractions from a certain plant species can be more effective than its isolated compounds, due to the synergistic effect of the compound mixture, which may also lead to the manufacture of potential raw material for commercial biopesticides [114,115]. Table 1. Toxicity and mechanism of action of bio-based natural insecticides. Conclusions Despite several factors that appear to limit botanical pesticide commercialization, such as problems in large-scale production, non-availability of raw materials, poor shelf life, diminished residual toxicity under field conditions and lack of extract standardization, a multidisciplinary approach comprising bioassay-guided fractionation, combined with structure-activity relationship (SAR) and analytical techniques, has proven to be an extremely efficient strategy for developing bio-based pesticides that meet all the required commercial standards. In summary, plant-derived pesticides have indicated their potential as a great alternative for pest management, since they are cheap, target-specific, less hazardous to human health, biodegradable and ecofriendly; therefore, they may improve crop efficiency and reduce the food crisis while maintaining sustainability.
Hantaviridae: Current Classification and Future Perspectives In recent years, negative-sense RNA virus classification and taxon nomenclature have undergone considerable transformation. In 2016, the new order Bunyavirales was established, elevating the previous genus Hantavirus to family rank, thereby creating Hantaviridae. Here we summarize affirmed taxonomic modifications of this family from 2016 to 2019. Changes involve the admission of >30 new hantavirid species and the establishment of subfamilies and novel genera based on DivErsity pArtitioning by hieRarchical Clustering (DEmARC) analysis of genomic sequencing data. We outline an objective framework that can be used in future classification schemes when more hantavirids sequences will be available. Finally, we summarize current taxonomic proposals and problems in hantavirid taxonomy that will have to be addressed shortly. Introduction Recent environmental, animal, and plant metagenomic studies have resulted in an avalanche of viral genomic sequencing data, vastly expanding the known virus biodiversity [1][2][3][4][5][6][7][8]. These advancements in the field of virus discovery led to a striking discrepancy between the number of potential new viral taxa described in literature and the number of officially recognized taxa by the International Committee on Taxonomy of Viruses (ICTV) [9]. Reasons for the backlog in official classification were not only the sheer number of novel viruses but also the absence of described biological properties of these viruses beyond genomic sequencing data and sequence-inferred characteristics. In the past, most ICTV Study Groups were reluctant to create new taxa in the absence of additional information on phenotypic virus properties, such as host range, antigenic relatedness, and virion morphology [9,10]. A consensus statement endorsed by the ICTV Executive Committee, explicitly permitting classification based on genomic sequence data alone (while still encouraging the acquisition of additional data) has opened the door to a much-needed reformation of the taxonomy of many virus families [9] and, therefore, an improved official depiction of the evolutionary relationships in the virosphere [11]. These criteria indirectly imply that hantavirid classification into a species requires knowledge of its natural host, significant coverage of the viral genome sequence, and virus isolation in culture. In addition, cross-neutralization experiments, typically requiring biosafety level 3 containment, should be performed. Given stringent criteria, not all hantavirid species listed in the ninth ICTV report actually meet the these criteria [17]. For a minority of hantavirids, isolates were not available. For three hantavirids, the M segment sequence was incomplete or unavailable. Furthermore, certain hantavirids can cross host species barriers in opposition to the first criterion that suggests that a distinct hantavirus should be associated with a unique ecological niche [43,44]. Moreover, not all hantavirids listed in the ninth ICTV report meet the second criterion that denotes a minimum amino acid difference of 7% in nucleocapsid (encoded by the small (S) genomic segment) and glycoprotein (encoded by the medium (M) genomic segment) amino acid sequences. Consequently, the second criterion was proposed to be changed to a difference of >10% amino acid differences of the nucleoprotein and >12% amino acid difference of the glycoprotein [45]. 
Taxonomy is a continuous process that needs to keep pace with virus discovery and novel methodologies. The taxonomy of Hantaviridae clearly requires a comprehensive overhaul. The rationale and methodology for the beginning of this overhaul, formulated in official ICTV taxonomic proposals (TaxoProps) 2016.023a-cM, 2017.006M, 2017.012M, and 2018.010M (https://talk.ictvonline.org/), is outlined in the next sections of this manuscript. DEmARC Analysis for Hantaviridae The analysis was limited to hantavirids for which coding-complete S and M segment sequences were available. The deduced amino acid sequences of the proteins encoded by these segments (nucleoprotein and glycoprotein, respectively) of all available tentative hantavirid sequences were downloaded from NCBI's GenBank. A concatenated multiple sequence alignment was generated with MAFFT v7 [46]. Bayesian phylogenetic inference was conducted in BEAST 1.8.4 [47] using 20 independent runs that continued until adequate effective sample sizes (ESS > 200) were obtained. Independent runs were combined using LogCombiner 1.8.4 (BEAST) [47], employing a burn-in of 10%. A consensus tree was built using TreeAnnotator 1.8.4 (BEAST) [47] with the maximum clade credibility method and visualized in FigTree v1.4 [48]. This consensus tree was used as a guide tree for the DivErsity pArtitioning by hieRarchical Clustering (DEmARC) analysis [49,50]. Pairwise evolutionary distance (PED) values were calculated using a maximum-likelihood approach with a WAG substitution model in Tree-Puzzle. A PED cut-off value of 0.1 was used for species demarcation within Hantaviridae. Phylogenetic Inference for the Bunyavirales The polymerase amino acid sequences of significant representative members of Bunyavirales were extracted from NCBI's GenBank. In addition, new sequences stemming from viruses likely to be related to order members, including Jiāngxià mosquito virus 2 (JMV-2) [4], were considered in the analysis. Multiple sequence alignment was performed with MAFFT v7 after which a Bayesian phylogenetic reconstruction was conducted with BEAST 1.8.4. Two independent Markov Chain Monte Carlo analyses were run until adequate ESS were obtained. A consensus tree was built employing a burn-in of 10% in TreeAnnotator 1.8.4. Change of Demarcation Criteria To establish an impartial hantavirid classification that is easily reproducible and adheres to the consensus about the exclusive use of sequencing data, we abandoned the ninth ICTV report's species demarcation criteria and instead applied a classification approach based solely upon genetic data. DEmARC analysis was used to objectively define classification ranks based upon PED [49] and to establish taxonomic revisions of Hantaviridae in consecutive years since 2017. Ideally, sequence-based classification relies on complete or at least coding-complete genome sequences [51], which, in the case of hantavirids, would be sequences of the three genomic segments S, M, and large (L). Unfortunately, for a large number of hantaviruses, availability of coding-complete sequences is limited, and, in particular, L segment sequences are frequently missing. Because the coding sequence of a single genomic segment does not contain sufficient information to achieve meaningful classification, we used a multiple sequence alignment of concatenated amino acid sequences of the S and M segments. DEmARC analysis gave a frequency distribution of PED values of which a threshold of 0.1 gave an optimal clustering cost of zero. 
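As a simplified illustration of the idea behind this threshold-based demarcation, the sketch below clusters aligned, concatenated amino acid sequences whose pairwise distance falls below a fixed cutoff. The real analysis uses maximum-likelihood pairwise evolutionary distances under a WAG substitution model; here an uncorrected p-distance and invented toy sequences stand in purely for illustration.

from itertools import combinations

def p_distance(a, b):
    # Proportion of differing positions between two equally long aligned sequences.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def demarcate(seqs, cutoff):
    # Single-linkage clustering: sequences closer than the cutoff share a putative taxon.
    parent = {name: name for name in seqs}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in combinations(seqs, 2):
        if p_distance(seqs[a], seqs[b]) < cutoff:
            parent[find(a)] = find(b)
    clusters = {}
    for name in seqs:
        clusters.setdefault(find(name), []).append(name)
    return list(clusters.values())

toy = {
    "virus_A": "MATLKELQRTSGDFINHWQK",
    "virus_B": "MATLKELQRTSGDFINHWQR",  # 1 difference from A -> same putative species
    "virus_C": "MGSWQDIARNPLVYTECKHM",  # highly divergent -> separate cluster
}
print(demarcate(toy, cutoff=0.1))  # e.g. [['virus_A', 'virus_B'], ['virus_C']]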
Consequently, this threshold was selected as the hantavirid demarcation criterion at the species rank. Genera are demarcated by a PED-value threshold of 0.95. Subfamilies are demarcated based on their distinct clustering in the maximum clade credibility tree (see Figure 3) and a PED-value threshold of 3.5. Addition of New Taxa to Hantaviridae Numerous new hantavirid species were incorporated into the ICTV-official taxonomy based on DEmARC analysis in 2017. Of these, 8 hantavirids have rodents as their natural hosts, whereas 3 newly discovered hantaviruses infect bats, 5 infect moles, and 8 infect shrews. Today, these viruses are distributed among the four mammantavirin genera Loanvirus, Mobatvirus, Orthohantavirus, and Thottimvirus (Table 1) [52]. In 2018, viral metagenomics led to the discovery of new hantavirids in reptiles, jawless fishes (Agnatha), and ray-finned fishes (Actinopterygii) [7]. In line with the DEmARC analysis results, five additional hantavirid species were created in three genera: Actinovirus (Actantavirinae), Agnathovirus (Agantavirinae) and Reptillovirus (Repantavirinae). In addition, the complete genome sequences of 2 additional orthohantaviruses became available. Using metagenomics, Jiāngxià mosquito virus 2 (JMV-2) was discovered. JMV-2 is a highly divergent virus most closely (albeit very distantly) related to hantavirids [4] and has subsequently been described as the first mosquito-borne hantavirid [40]. However, phylogenetic analysis of the amino acid sequence of the coding-complete JMV-2 L segment demonstrates that JMV-2 is divergent from all hantavirids and likely represents a novel family in Bunyavirales (Figure 2). Abolishment of Hantavirid Taxa and Declassification of Hantavirids With the introduction of new objective classification criteria based on sequence data, 8 previously recognized hantavirid species were abolished because they did not fulfill all criteria used for DEmARC analysis-based classification. Creation of Subfamilies and Genera within Hantaviridae The recent discoveries of hantaviruses in a wide spectrum of host species have significantly increased the known hantavirus diversity. Phylogenetic inference divides Hantaviridae into well-supported subclades (Figure 3). These taxonomic sub-groups are now better defined by the introduction of genera and subfamilies (Table 3). Megataxonomy of Hantaviridae A recent global phylogenetic analysis confirmed the monophyly of negative-sense RNA viruses [11]. A top taxonomic rank was introduced by the ICTV for all RNA viruses [80], including a phylum, 2 subphyla, and several classes for negative-sense RNA viruses [81,82].
The current megataxonomy of Hantaviridae is outlined in Table 4. Future Taxonomic Perspectives In 2020, hantavirid taxonomy will likely only change minimally, because only a single TaxoProp has been submitted by the 2019 submission deadline. This TaxoProp outlines the addition of one loanvirus species for the recently discovered Brno virus (BRNV) [83]. The megataxonomic placement of Hantaviridae will likely remain steady, but phylum Negarnaviricota will likely be included in the newly proposed kingdom "Orthornavira". Novel TaxoProps are already expected to be submitted by the next submission deadline in 2020 for the 2021 taxonomy cycle to accommodate several recently described putative mobatviruses [84,85]. Furthermore, the ICTV Hantaviridae Study Group is currently discussing whether hantavirids, for which coding-complete S+M+L genomic segment sequences are not available, ought to be declassified and whether hantavirid name abbreviations ought to be unique (and be changed if they are not). The ICTV Hantaviridae Study Group is also discussing how species "complexes" (species that harbor more than one member virus) could be resolved, and how hantavirid species names could be changed to Linnaean binomial names [86] should this become an ICTV requirement. Discussion The current hantavirid taxonomy (Table 5) is based upon concatenated amino acid sequences of S and M genomic segment-encoded proteins. To provide a more robust framework, ideally only coding-complete sequences of hantavirids should be used for any classification efforts, with various methods analyzing all segments. Unfortunately, very few hantavirus genomes have been sequenced fully, precluding such a robust taxonomic classification for now. Increased sequencing efforts of partially characterized hantavirids, some of them discovered decades ago, could substantially improve future taxonomic efforts. In many cases, obtaining missing sequence information is not challenging scientifically, as most historic hantavirids have been isolated in culture. However, owing to the high sequence diversity and saturation of informative sites, classification with inclusion of the M segment might become increasingly difficult as hantavirid diversity may be enormous. Such diversity is indicated by detection of more divergent hantavirids in metagenomic samples and in fish and reptiles. Although hantavirid interspecies segment reassortment is thought to be fairly limited, reassortment events have shaped hantavirid evolution [43,44,87] and may further complicate classification efforts. We are calling on the hantavirid research community to weigh in on these issues and to contribute to taxonomic efforts, including TaxoProp writing and submission, to achieve a taxonomy that best reflects hantavirid evolutionary relationships.
The effects of sympathectomy and dexamethasone in rats ingesting sucrose. Both high-sucrose diet and dexamethasone (D) treatment increase plasma insulin and glucose levels and induce insulin resistance. We showed in a previous work (Franco-Colin, et al. Metabolism 2000; 49:1289-1294) that combining both protocols for 7 weeks induced less body weight gain in treated rats without affecting mean daily food intake. Since such an effect may be explained by an increase in caloric expenditure, possibly due to activation of the sympathetic nervous system by sucrose ingestion, in this work, and using 10% sucrose in the drinking water, male Wistar rats were divided into 4 groups. Two groups were sympathectomized using guanethidine (Gu) treatment for 3 weeks. One of these groups of rats received D in the drinking water. Of the 2 groups not receiving Gu, one was the control (C) and the other received D. After 8 weeks a glucose tolerance test was done. The rats were sacrificed and liver triglyceride (TG), perifemoral muscle lipid, and norepinephrine (NE) levels in the liver spleen, pancreas, and heart were determined. Gu-treated rats (Gu and Gu+D groups) showed less than 10% NE concentration compared to C and D rats, less daily caloric intake and body-weight gain, more sucrose intake, and better glucose tolerance. The area under the curve after glucose administration correlated significantly with the mean body weight gain of the rats, except for D group. Groups D (D and Gu+D) also showed less caloric intake and body-weight gain but higher liver weight and TG concentration and lower peripheral muscle mass. The combination of Gu+D treatments showed some peculiar results: negative body weight gain, a fatty liver, and low muscle mass. Though the glucose tolerance test had the worst results for the D group, it showed the best results in the Gu+D group. There were significant interactions for Guan X Dex by two-way ANOVA test for the area under the curve in the glucose tolerance test, muscle mass, and muscle lipids. The results suggest that dexamethasone catabolic effect is not caused by sympathetic activation. INTRODUCTION Weight gain or loss in otherwise healthy humans and animals is because of an imbalance between caloric intake and caloric expenditure. The first term is represented by the quantity of macronutrients ingested during a time interval (days, weeks, etc) taking into account that each macronutrient has a specific caloric value. Caloric expenditure is more difficult to estimate and is usually measured using indirect calorimetry. Glucocorticoids are involved in the control of energy metabolism and body weight. Long-term administration of dexamethasone (D) in low doses in the drinking water of rats (3 -4 μg/day) reduced weight gain without affecting food intake, suggesting an increase in caloric expenditure [1,2]. Feeding a sucrose diet induces some effects similar to those of D administration, such as the increase in insulin levels and resistance [3,4] and in plasma glucose levels [5]. The common mechanism of the action of glucose and D on glucose metabolism seems to be the sensitization of pancreatic β-cells to glucose-induced insulin secretion [6,7]. In a previous work when we used both D treatment and high-sucrose diet in rats, the stronger effect on fat accumulation relative to body weight was seen only after the combination of both treatments [2]. Another factor clearly involved in the control of metabolism and energy balance is the activity of the sympathoadrenal system. 
Both epinephrine (E) and norepinephrine (NE) are assumed to be implicated in the control of feeding [8] and of body weight [9], along with their well-known effect on plasma glucose and free fatty acid levels that results from their actions on the hepatic and adipose tissue. In spite of the important role played by the sympathetic nervous system in the cardiovascular, renal, respiratory, and metabolic functions, chronic sympathectomy, induced experimentally by guanethidine (Gu) treatment [10], produces only small alterations when administered either to neonatal or adult rats. Gu treatment has been reported to have no effect on body weight [11] or just to lower it slightly [12,13]. Caloric intake is slightly reduced [11,14]. Recently we showed that neonatal rats treated with Gu show, as adults, a higher hypophagic response to intraperitoneal catecholamine administration, a lower resting oxygen consumption, and a higher respiratory quotient at rest, suggesting a lower rate of lipid use [15]. While both high-sucrose diet and D administration induces similar changes in glucose and lipid metabolism, the experimental manipulation of these two factors has opposite effects on the sympathetic nervous system activity: sucrose ingestion stimulates it [16] whereas D has been reported to suppress it [17]. Because of our interest in the metabolic effects of sympathectomy [15] the aim of the present work was to assess in adult rats the effects of D on a background of a high-sucrose diet, on food intake, body weight, glucose tolerance, and lipid accumulation in the liver and muscle in rats treated with Gu. It was hypothesized that the poor food efficiency shown by D-treated rats [1,2] might be caused by an increase in energy expenditure perhaps due to an increase in sympathetic activity. Animals and diet Twenty-eight male Wistar rats (377 ± 7 g; 6 -8 months old) were housed in individual cages in a temperature-controlled room (23° ± 1°C) with a 12 h light/12 h dark cycle. Powdered Rodent Lab Chow 5001 and 10% sucrose in tap water were available ad libitum. At the start of the experiment, the rats were divided into 4 groups, each having 7 rats of similar mean body weight. Treatments After one week, 2 groups of rats received guanethidine (Guanethidine monosulfate, Sigma Chem., St Louis, MO) 50 mg/kg intraperitoneally (ip) three times a week for 3 weeks. After two weeks of guanethidine (Gu) treatment, 2 groups of animals, one having received Gu and another without Gu, started to drink water containing, beside sucrose, dexamethasone (Decadron, Lab. Merck Sharp & Dohme, Mexico) in doses calculated, according to the water intake of each treated rat, to represent approximately 2.2 μg dexamethasone (D) per rat per day. Oral D administration was continued until rats were sacrificed. This yielded 4 experimental groups: control (C), guanethidine (Gu), dexamethasone (D), and Gu +D. Measurements Food and water intake were measured daily and body weight weekly for 7 weeks, one week before the start of the Gu treatment, the 3 weeks of Gu treatment and the 4 weeks of D treatment, with one week overlap of the two treatments. Water intake measurements allowed the determination of sugar intake. Total caloric intake expressed as kJ/day was calculated by summing the sucrose ingestion (estimated to be 4 Cal = 16.74 kJ) and the chow intake (3.96 Cal = 16.57 kJ). After these 56 days, and continuing D oral administration, glucose tolerance was determined in the rats that had fasted overnight. 
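To make the intake calculation above concrete, the following is a minimal sketch of summing sucrose and chow calories; the energy densities (16.74 kJ per g of sucrose, 16.57 kJ per g of chow) follow the text, while the 0.1 g/mL sucrose content of the 10% drinking solution and the example daily amounts are assumptions for illustration.

SUCROSE_KJ_PER_G = 16.74
CHOW_KJ_PER_G = 16.57
SUCROSE_FRACTION = 0.10  # 10% sucrose solution, i.e. roughly 0.1 g sucrose per mL of water

def daily_caloric_intake(water_ml, chow_g):
    # Total daily intake in kJ from sucrose drinking water plus powdered chow.
    sucrose_g = water_ml * SUCROSE_FRACTION
    return sucrose_g * SUCROSE_KJ_PER_G + chow_g * CHOW_KJ_PER_G

# Hypothetical example: a rat drinking 35 mL of sucrose water and eating 18 g of chow.
print(f"{daily_caloric_intake(35, 18):.1f} kJ/day")  # about 356.9 kJ/day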
Blood was obtained from the tail tip and whole-blood glucose concentrations were measured using an enzymatic kit (Farmaceuticos Lakeside, Mexico) before and 30, 60, and 120 min after ip injection of 3.6 g/kg (20 mmol/kg) of glucose. Finally, 7 to 10 days later and after an overnight fast the rats were anesthetized with 60 mg/kg sodium pentobarbital (Anestesal, SmithKline Beecham, Farmaceutica, Mexico), their abdomens were opened, their liver was removed and weighed, and then frozen for further triglyceride determinations. A sample of liver, pancreas, spleen, and heart ventricle was also immediately removed, homogenized in ice-cold 0.4 N perchloric acid, centrifuged, and kept at -20°C for catecholamine analysis. The perifemoral muscles of one leg were also dissected and frozen for further lipid determinations. Liver triglycerides were determined using a kit from Sigma Chem., St. Louis, MO. The total lipids in dried samples of muscle were obtained by extraction in a Soxhlet Extractor (Kimax, Mexico) with petroleum ether. Catecholamines were extracted, after thawing of tissue samples, by adsorption on acid-washed alumina in Tris buffer (pH 8.6) containing EDTA, washed with deionized water several times, and eluted in 200 μL 0.1 N perchloric acid. Norepinephrine (NE) and epinephrine (E) concentrations were measured using high-performance liquid chromatography with electrochemical detection (ESA Coulochem II; Bedford, MA) by using a degassed mobile phase in an isocratic 0.6 mL/min flow, with 3,4dihydroxybenzylamine added as an internal standard [15,18]. Statistics The data were analyzed by two-way (Guanethidine X Dexamethasone) and one-way analysis of variance (ANOVA). Results are expressed as mean ± SEM. Linear regression was calculated between the glycemia AUC and the mean daily body weight gain. The level of significance was set at P < 0.05. RESULTS Norepinephrine levels were drastically reduced by Gu treatment in the 4 organs (Table 1). Epinephrine levels in the unsympathectomized groups of rats were much lower than those of NE and Gu reduced them only in the heart: 5.7 ± 1 ng/g vs 9.7 ± 1.2 ng/g (F = 7.5, P < 0.02). Dexamethasone treatment did not affect catecholamine concentrations. Total caloric intake (kJ/day), percentage of sucrose intake, and body weight gain (g/day) were calculated for 7 weeks for the control group (C), 6 weeks for the Gu-treated rats (groups Gu and Gu+D) and 4 weeks for the group that received only dexamethasone (group D). Table 2 shows less food intake for the three treated groups, without significant differences among them. Relative sucrose intake was higher in Gu+D rats. Body-weight gain was much more affected in both groups receiving D treatment. All three variables were affected more by the combination of sympathectomy with D treatment. Figure 1 shows the evolution of blood glucose after 3.6 mg/kg glucose ip. Glucose tolerance was significantly improved in the sympathectomized groups and the D group showed the worst tolerance. Excepting the D group, there is a significant correlation (P < 0.02) between glucose AUC after glucose administration and body-weight gain (Fig. 2). Table 3 shows the relative liver weight, the relative content of triglycerides in the liver and lipids in the perifemoral muscles, and the wet weight of the muscle. Relative liver weight and liver triglycerides were significantly higher in both groups of D-treated rats, whereas lipids in muscle were higher only in the Gu+D group. 
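Two computations underpin the glucose-tolerance analysis described above: the area under the blood-glucose curve (AUC) built from the 0-, 30-, 60-, and 120-min samples, and the linear regression of that AUC against mean daily body-weight gain. The paper does not state how the AUC was computed; the trapezoidal rule over the sampled time points is the usual choice and is what the hedged Python sketch below assumes. All numbers in it are invented for illustration.

import numpy as np
from scipy import stats

def glucose_auc(times_min, glucose_mmol):
    """Area under the glucose curve by the trapezoidal rule (mmol/L x min)."""
    return np.trapz(glucose_mmol, times_min)

# Illustrative samples for one rat at 0, 30, 60 and 120 min after the glucose load
times = np.array([0, 30, 60, 120])
glucose = np.array([4.8, 14.2, 11.5, 7.1])    # hypothetical mmol/L values
print(glucose_auc(times, glucose))

# Regression of per-rat AUC against mean daily body-weight gain (hypothetical arrays)
auc = np.array([950.0, 1100.0, 1220.0, 1010.0, 1300.0, 1180.0])
weight_gain = np.array([0.4, 1.1, 1.6, 0.7, 2.0, 1.5])   # g/day
slope, intercept, r, p, se = stats.linregress(weight_gain, auc)
print(f"r = {r:.2f}, p = {p:.3f}")

The same per-rat AUC values are what enter the correlation with body-weight gain shown in Fig. 2.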
The significant interaction between Gu and D treatment in this case shows that only their combination increased muscle lipids under these particular conditions. Similar differences between groups are shown by the perifemoral muscle mass. DISCUSSION Guanethidine administration in adult rats three times per week for three weeks resulted in low NE concentration in four internal organs (Table 1), as measured more than two months after having concluded the treatment, confirming that this protocol is as effective as the neonatal administration [19]. The effects in heart and liver were similar to those obtained in a previous experiment in which Gu was injected into neonatal rats [15]. Sympathectomy alone had weak effects on food intake and body-weight gain (Table 2), confirming the results of other authors [11][12][13][14]. The Gu group of rats showed better glucose tolerance. Glucose tolerance depends on peripheral insulin resistance, hepatic insulin resistance, and/or glucose-induced pancreatic insulin secretion. Sympathetic activation stimulates peripheral glucose uptake by a β 3 -adrenergic effect [20][21][22] but inhibits insulin secretion, an α 2 -adrenergic effect [22]. Insulin was not measured in this work but the better glucose tolerance of the sympathectomized rats might be attributed to the practical lack of NE in the pancreas, which should permit the release of more insulin under the stimulus of glucose. Chronic sucrose intake induces chronic hyperinsulinemia [24][25][26]. This results in a stimulation of leptin secretion, [26,27] more so in the presence of low sympathetic activity, because β 3 -adrenergic action inhibits leptin secretion [26,28,29]. Compared with the situation of the control group, sympathectomy may have increased leptin production resulting in improved glucose uptake and lower body-weight gain [30,31]. A relationship between these variables is shown by their significant linear correlation when the D group was not included (see below). Generally speaking, dexamethasone was administered by various authors in much larger doses and for less time. Coderre et al. [24] combined sucrose in the drinking water with subcutaneous injection of D (0.4 or 1.0 mg/day) for 7 days. The D treatment with the lower dose induced body-weight loss, hyperglycemia, hyperinsulinemia and hypertriglyceridemia. Sucrose affected only the two last variables, and the combination of treatments did not increase the effects of these large doses of D. In the present work, glucose tolerance of the D group was the worst and may be easily attributed to D-induced insulin resistance [3]. The relationship between glycemic AUC and body-weight gain was different from the other three groups and, although not reaching statistical significance because of the data of one rat (see Fig. 2), the slope of the linear regression for the other six rats is much steeper, suggesting that glycemic AUC increased much more for each increase in body weight gain. This shows that D-induced insulin resistance is increased proportionally more when body weight increases. Some interesting results were shown by the Gu+D group of rats. The huge reduction of NE content induced by sympathectomy in the 4 organs was not affected by D administration but food intake was significantly lower and body-weight actually decreased showing the catabolic action of D is probably not due to sympathetic activation. The proportional intake of sucrose was significantly higher in this group. 
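The "significant interactions for Guan X Dex" reported in this study come from a 2 x 2 factorial (two-way) ANOVA with guanethidine and dexamethasone as the factors. The outline below shows how such a test is typically set up on a long-format table with one row per rat; it is a sketch under our own assumptions (column names, statsmodels as the library, invented group means), since the original analyses were not published as code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Hypothetical long-format data: 7 rats per cell of the 2 x 2 (Gu x Dex) design
rows = []
for gu in ("no", "yes"):
    for dex in ("no", "yes"):
        for _ in range(7):
            base = 3.0 + (0.3 if gu == "yes" else 0.0) + (0.5 if dex == "yes" else 0.0)
            interaction = 2.0 if (gu == "yes" and dex == "yes") else 0.0  # invented effect
            rows.append({"gu": gu, "dex": dex,
                         "muscle_lipid": base + interaction + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# Two-way ANOVA including the Gu x Dex interaction term
model = ols("muscle_lipid ~ C(gu) * C(dex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

A significant C(gu):C(dex) row in that table is what corresponds to the statement that only the combination of treatments raised muscle lipids.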
The differences between these variables in D and Gu+D are caused, perhaps directly, by the low sympathetic activity in the second group. As already discussed, sympathectomy may increase leptin production and it is known that glucocorticoids stimulate it [32,33]. The difference between the two groups could be that, in the D group, the inhibitory action of the sympathetic system on leptin secretion is present but is lacking in the Gu+D group. High leptin levels could tentatively explain also the preference for sucrose by these rats. Beck et al. [34] reported that ghrelin, which stimulates feeding, is inversely correlated with plasma leptin and has lower plasma levels in the carbohydrate-preferring rats. This suggests indirectly that, on the contrary, high leptin levels may be associated with carbohydrate preference. Another indirect clue is given by the article of Velasquez-Mieyer et al. [35] reporting that the suppression of insulin secretion in obese humans induced, among other effects, a decrease in leptin levels and in carbohydrate craving. The same Gu+D group of rats showed the best glucose tolerance. Because this was the group that had lost weight and had supposedly also the highest leptin levels, two conditions that improve glucose tolerance [29,30] such a result might be expected. The statistically significant correlation between bodyweight gain and glucose AUC support this interpretation. The D-induced increase in relative liver weight and liver triglyceride content confirms our previous results [2]. Muscle lipids are associated with insulin resistance [36,37]. However, the present results show that the group Gu+D had the best glucose tolerance but the highest lipid content in the perifemoral muscles, clearly caused by the combination of the two treatments, as shown by the significant interaction between their effects. The relatively higher levels of fat might be caused by the relatively low level of the main component of muscle, the proteins, as shown by the low muscle mass in the Gu+D group. The proteolytic action of glucocorticoids is well-known and, as the catecholamines inhibit proteolysis [38], this muscle mass reduction could be caused by a potentiation of effects of the two treatments. In summary, in adult rats drinking a 10% sucrose solution chemical sympathectomy caused by guanethidine treatment for 3 weeks reduced norepinephrine levels in four organs by more than 90% and decreased food intake. It also improved glucose tolerance, a result that has not been reported before. Dexamethasone, besides reducing food intake and body-weight gain and increasing liver triglyceride content, impaired glucose tolerance. When guanethidine-treated animals received dexamethasone some novel aspects were seen: an increase in drinking the sucrose solution, a negative body-weight gain, and a consistently better glucose tolerance. This last effect was the only improvement shown by combining sympathectomy with dexamethasone treatment. In exchange, the negative food efficiency suggests that dexamethasone catabolic effect is not due to sympathetic activation.
2014-10-01T00:00:00.000Z
2006-03-04T00:00:00.000
{ "year": 2006, "sha1": "1efea048fb35056f19da156351944bb3401f79b9", "oa_license": "CCBYNC", "oa_url": "http://www.ijbs.com/v02p0017.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1efea048fb35056f19da156351944bb3401f79b9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
37086093
pes2o/s2orc
v3-fos-license
Inhibition of plasmin, urokinase, tissue plasminogen activator, and C1S by a myxoma virus serine proteinase inhibitor. The myxoma and malignant rabbit fibroma poxviruses are lethal tumorigenic viruses of rabbits whose virulence is modulated by the production of a virus-encoded secreted serine proteinase inhibitor, SERP-1. This viral protein was detected in medium harvested from myxoma and malignant rabbit fibroma virus-infected cells, and its inhibitory profile has been characterized by gel and kinetic analysis. SERP-1 forms complexes with and inhibits the human fibrinolytic enzymes plasmin, urokinase, and two-chain tissue-type plasminogen activator (association rate constants 3.4 x 10(4), 4.3 x 10(4), and 3.6 x 10(4) M-1 s-1 respectively). It is also able to inhibit C1S, the first enzyme in the complement cascade with an association rate constant which was unaffected by the addition of heparin (1.3 x 10(3) M-1 s-1). SERP-1 acts as a substrate for and is cleaved by thrombin, porcine trypsin, human neutrophil elastase, porcine pancreatic elastase, thermolysin, subtilisin, bovine alpha-chymotrypsin, and factor Xa. Incubation with kallikrein and cathepsin G had no effect. The structure of SERP-1 has been modeled on other members of the serpin family which revealed the characteristic serpin architecture apart from the absence of the D-helix. Structural analysis and kinetic assays demonstrate that the absence of this region does not prevent inhibitory activity and furthermore allow the identification of cysteine residues involved in internal and intermolecular disulfide bonding. The superfamily of serine proteinase inhibitors or serpins includes a number of well characterized regulatory proteins such as al-antitrypsin, al-antichymotrypsin, antithrombin 111, and heparin cofactor I1 (1). The inhibitory specificity of these proteins is in the main defined by the residues at the PI-P1' positions of the reactive center (2). These amino acids act as a pseudosubstrate for the cognate proteinase which then binds in a 1:l ratio with the serpin and is inactivated by complex formation (3). There are other members of the serpin family which are not proteinase inhibitors such as ovalbumin (4), thyroxine *The study was supported by the Medical Research Council (United Kingdom), the Wellcome Trust, and an operating grant (to G. M.) from the National Institute of Canada. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. ** Medical Scientist of the Alberta Heritage Trust Fund for Medical Research. binding globulin ( 5 ) , and angiotensinogen (6), and some serpins whose function has yet to be determined. Important examples of the last group are the serpins expressed by the poxviruses. These proteins have gained membership to the family by virtue of amino acid sequence homology and have been identified in the vaccinia virus genome (7)(8)(9), the cowpox virus (lo), the fowlpox virus ( l l ) , and myxoma and malignant rabbit fibroma viruses (12). The viral serpins are important in determining virulence as their deletion from the viral genome significantly reduces host mortality (10,12). For example, rabbits infected with the MYX' or MRV develop primary tumors at the site of inoculation. 
The viruses then spread through the host lymphatic channels to induce a rapidly lethal systemic infection characterized by severe immunosuppression, the appearance of secondary tumors distal to the inoculation site, and death within 2 weeks caused by opportunistic Gram-negative infection (13). Deletion of the region from the viral genome that encodes the SERP-1 member of the serpin superfamily results in a reduction in the severity of bacterial infections and improved rabbit survival (12). Although it is apparent that the viral serpins are important for biological virulence, their mode of action remains unclear. The vaccinia viral serpin is able to block cell fusion (14,15), and there is evidence that the cowpox viral serpin may interfere with the host inflammatory response (16, 17). The effect of this serpin on the host immune response has recently been clarified by Ray et al. (18), who showed that the cowpox viral serpin is able to inhibit interleukin-1/3-converting enzyme. In this paper we report the characterization of the inhibitory specificity of the viral serpin SERP-1. Our data suggest that targets for the SERP-1 protein of MYX and MRV include members of the fibrinolytic and complement pathways. Furthermore we have modeled the structure of this protein on other members of the serpin family which allows the identification of cysteine residues involved in internal and intermolecular disulfide bonding. MATERIALS (19). CIS was from Dr. P. A. Pemberton, Immu-nology Division, Children's Hospital, Boston; and C1 inhibitor was purified by a modification of the method of Harrison (20) with the eluate from the lysine Sepharose being applied to a dextran-sulfate Sepharose column (5 X 30 cm) equilibrated with TCE (50 mM Tris, 20 mM sodium citrate, 5 mM EDTA, pH 7.4) containing 50 mM NaCl. After washing with TCE containing 0.1 M NaC1, C1 inhibitor was eluted with a 2-liter linear gradient from TCE, 0.1 M NaCl to TCE, 0.5 M NaCl. Fractions containing C1 inhibitor were pooled, dialyzed against 20 mM KHzP04, 150 mM KCl, pH 7.0, before loading onto hydroxylapatite as described previously (20). Human plasmin, kallikrein, Bz-Pro-Phe-Arg-pNA, Chromozym TRY, and Chromozym X were obtained from Boehringer Mannheim, East Sussex, U. K.; and Val-Leu-Lys-pNA and Isoleu-Pro-Arg-pNA were from Kabivitrum Middlesex, U. K. Two-chain tPA, urokinase, and all other enzymes and substrates were from Sigma Chemical Co., Dorset, U. K. Cells and Viruses-The source and culture of MRV and MYX were as described previously (21); and the serpin-deleted mutant of MRV, MRV-S1, was prepared as described by Upton et al. (12). SERP-1 is only expressed during the late phase (6-24 h) of viral replication, and the secreted protein was concentrated in a Centriprep-10 cell (Amicon) and sterilized by filtration through a 0.2-pm membrane. The glycosylated viral serpin was detected in the media from infected cells by SDS-polyacrylamide gel electrophoresis and Western blots'; MYX has two copies of the SERP-1 gene and MRV (a recombinant between myxoma and Shope fibroma viruses) one copy (12). Assessment of Serpin-Enzyme Complex Formation-The crude late phase viral supernatant from MYX-or MRV-infected cells containing SERP-1 (or else from MRV-S1-infected cells) was incubated with a range of serine proteinases (approximately 12 pmol of serpin/6 pmol of enzyme) at 37 "C for 10 min or 1 h in 50% (v/v) 0.03 M sodium phosphate, 0.16 M NaCl, 0.1% polythylene glycol 4000, pH 7.4, reaction buffer. 
The reaction was stopped by the addition of 3% SDS loading buffer and heating at 95 "C for 2 min. The proteins were then separated by 7.5-15% (w/v) SDS-polyacrylamide gel electrophoresis (22) and electroblotted onto nitrocellulose paper at 400 mA for 2 h in 0.0125 M Tris, 0.48 M glycine, 20% (v/v) methanol, pH 8.8 (23). Adequate transfer was determined by staining with Ponceau S, which also allowed the position of the molecular weight markers to be determined. The nitrocellulose paper was blocked by shaking with 0.05 M Tris, 0.002 M CaCl', 0.05 M NaC1, pH 8.0 with 5% (w/v) skimmed milk powder, 0.02% Nonidet P-40 for 30 min. SERP-1 was visualized by shaking with 0.5% (v/v) polyclonal rabbit anti-SERP-1 antiserum' in blocking buffer for 1 h and then after washing, with a second antibody for a further h. The second antibody was either 0.4% (v/v) horseradish peroxidase-labeled swine anti-rabbit antibody or 0.14 MBq of '251-labeled anti-rabbit antibody. The nitrocellulose paper was then washed with 0.05 M Tris, 0.002 M CaCl2, 0.05 M NaCl, pH 8.0, and SERP-1 visualized with aminoethylcarbazole or by autoradiography. Actiue-site Titration of Serine Proteinases-The active-site titrations of bovine a-chymotrypsin, porcine trypsin, and human a-thrombin were performed according to the method of Chase and Shaw (24) using the suicide substrates p-nitrophenyl acetate for bovine a-chymotrypsin andp-nitrophenyl-p'-guanidino benzoate for porcine trypsin and human a-thrombin. Bovine a-chymotrypsin was then used to determine the activity of recombinant al-antitrypsin, recombinant arginine (Pl) a]-antitrypsin, and plasma al-antichymotrypsin, whereas porcine trypsin was used to determine the activity of plasma a1-antitrypsin, plasma antithrombin 111, a'-macroglobulin, and C1 inhibitor. Active-site titration was performed by incubating 50 nM enzyme with increasing concentrations of inhibitor and 0.03 M sodium phosphate, 0.16 M NaC1, 0.1% polyethylene glycol 4000, pH 7.4, reaction buffer in a volume of 100 pl. The residual proteolytic activity was determined by adding reaction buffer containing the appropriate substrate (final concentration, 0.16 mM) to a final volume of 1 ml and observing the change in AIos for 3 min. Active-site values are obtained by plotting residual proteolytic activity against the amount of inhibitor and extrapolating to the x intercept (3). These inhibitors were then used to calculate the active-site values of 10 serine proteinases using the substrates shown in Table I. In all cases the enzyme and inhibitor were incubated at 37 "C for at least 5 times the predicted J. L. Macen, C. Upton, and G. McFadden, manuscript in preparation. half-time for complex formation as determined from the published association rate constants. The active-site values for tPA and urokinase were later verified to be correct by titration against SERP-1. Actiue-site Titration of SERP-I-The active-site of SERP-1 was determined in a similar manner using a 50 nM concentration of an enzyme of known activity which had previously been demonstrated to form complexes with SERP-1. The enzyme and inhibitor were incubated in 100 pl at 37 "C for 30 min before adding the substrate. In all of the above active-site titrations 1:l enzyme/inhibitor interactions have been assumed. The metalloproteinase thermolysin was included as a negative control as equimolar active-site concentrations of enzyme and serpin should result in complete cleavage at the reactive center. 
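Active-site titration, as laid out above, comes down to plotting residual enzyme activity against the amount of inhibitor added and extrapolating the fitted line to the x-intercept, which gives the amount of active inhibitor (or enzyme) on the assumption of 1:1 binding. The Python fragment below is a reconstruction of that arithmetic with invented data points; it is not the authors' code and the numbers carry no experimental meaning.

import numpy as np

# Hypothetical titration: residual activity (fraction of the uninhibited rate)
# measured after adding increasing amounts of inhibitor to 100 uL of 50 nM enzyme
inhibitor_pmol = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
residual_activity = np.array([1.00, 0.82, 0.61, 0.43, 0.22])

# Fit a straight line and extrapolate to zero residual activity (the x-intercept)
slope, intercept = np.polyfit(inhibitor_pmol, residual_activity, 1)
x_intercept_pmol = -intercept / slope
print(f"inhibitor required to titrate the enzyme: {x_intercept_pmol:.2f} pmol")

# For comparison, the nominal amount of enzyme in the assay (50 nM in 100 uL)
enzyme_pmol = 50e-9 * 100e-6 * 1e12
print(f"nominal enzyme in the assay: {enzyme_pmol:.1f} pmol")

Run in one direction, this yields the active enzyme concentration from an inhibitor of known activity; run the other way, as done for SERP-1 against plasmin, it yields the active inhibitor concentration, which is how the later comparison with the total SERP-1 protein mass was made.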
Determination of Association Rate Constants-These were determined under second order conditions as described by Beatty et al. (3). Active-site equimolar concentrations of enzyme and late phase viral supernatant were incubated at 20 "C with reaction buffer for varying time intervals. The reaction was stopped and the quantity of free enzyme determined by adding reaction buffer containing the substrate to a final volume of 1 ml. The value of the association rate constant (kasJ was calculated from the half-life (t112) of enzyme inhibition using Equation 1 where Eo is the initial enzyme concentration. Modeling of SERP-I-All model building, energy minimization, and molecular dynamics studies were conducted using Quanta/ CHARMm version 3.0 (Polygen Corporation, Waltham, MA). The simulations were performed on a Silicon Graphics Personal Iris 4D2, equipped with stereo graphics facility. Homology Modeling-A three-dimensional model of residues 8-356 of SERP-1 was constructed using the atomic coordinates from the crystal structure of ovalbumin (25). Sequence alignments based on those of Huber and Carrel1 (26) were computed using the Needleman-Wunsch algorithm with modification to penalize for the introduction of gaps in regions of defined secondary structure (27). This confirmed 34% amino acid homology between SERP-1 and ovalbumin in all major areas of secondary structure. The final model was constructed using amino acid substitution by residue replacement on the ovalbumin backbone with tools and default atom geometries present in the protein modeling package of Quanta/CHARMm. No crystal data were available for modeling the amino-terminal 13 amino acids and carboxyl-terminal 7 amino acids of SERP-1, and these were therefore excluded. Molecular Dynamics-For all simulations and calculations the default values for CHARMm molecular mechanics and dynamics parameters were used. The polar hydrogen atom descriptions (hydrogen bound to carbon not explicitly represented) were selected from the AMINO.RTF file. Models were minimized using the adopted basis Newton Raphson algorithm, which was continyed until the root mean square gradient was less than 0.08 kcal/mol/A. After minimization, the structure was equilibrated to 300 K for 20 ps, followed by relaxation at 300 K for 10 ps. The conformation exhibiting the lowest energy during molecular dynamics was chosen, minimized, and solvated with a 10 A shell of (TIP3P) water molecules. All water molecules that came too close to the structure were removed. The system was composed of 8065 atoms, being made up of 3403 protein atoms and 1554 water molecules. Finally a 20-ps molecular dynamics run at 300 K was performed and the final structure energy minimized and analyzed. RESULTS To determine whether SERP-1 acts as a substrate or an inhibitor for a panel of serine proteinases, the proteins secreted from MYXor MRV-infected cells or from cells infected with MRV-S1 were mixed with candidate proteinases and the status of SERP-1 monitored by Western blots. SERP-1 was identified on the Western blot as having a molecular mass of 55 kDa (Fig. 1, lanes 1 and 13). There was a higher molecular mass band at 110 kDa, which resolved after incubation with 5% (v/v) P-mercaptoethanol, suggesting that it represented SERP-1 dimer (data not shown). 
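Returning to the kinetic method described at the start of this passage: the body of "Equation 1" appears to have been lost from the text. For an enzyme and inhibitor mixed at equal active-site concentrations (second-order conditions), the standard relation between the half-life of enzyme inhibition and the association rate constant, which is presumably what Equation 1 expressed, is, in LaTeX form,

k_{\mathrm{ass}} = \frac{1}{t_{1/2}\,[E]_{0}}

where [E]_0 is the initial enzyme (and inhibitor) concentration. As an orientation only (our numbers, not the paper's): a measured half-life of 10 min (600 s) at [E]_0 = 50 nM would give k_ass = 1/(600 s x 5 x 10^-8 M), about 3.3 x 10^4 M^-1 s^-1, the order of magnitude reported below for plasmin.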
There was also a minor lower molecular weight band detected at 40 kDa, Inhibition of Fibrinolytic Enzymes by the Myxoma Viral Serpin which could represent a proteolytic digestion product of SERP-1 or possibly nonspecific binding of the antibody to another protein as it was present in the early media (Fig. 1, lane 12), and SERP-1 is expressed from a late promoter. MYX SERP-1 formed complexes of 130,90, and 106 kDa with human plasmin, urokinase, and tPA, respectively ( Fig. 1; lanes 2-7). It is plausible that the 106-kDa complex with tPA may represent cleavage of the dimer, although this is unlikely as subsequent studies have demonstrated tPA. SERP-1 complex in batches of late supernatant which have no detectable dimer. A 50-kDa cleavage product of SERP-1 but no higher molecular weight complexes were detected with thrombin, porcine trypsin (Fig. 1, lanes 8-11 ), human neutrophil elastase, porcine pancreatic elastase, thermolysin, subtilisin, bovine a-chymotrypsin, and factor Xa (data not shown). Incubation with kallikrein and cathepsin G had no effect. The 110-kDa dimer of SERP-1 was also cleaved in each case and did not form any detectable complexes. Having confirmed complex formation the active-site concentration of MYX SERP-1 was determined against human plasmin assuming 1:l complex formation. This value varied between SERP-1 batches, but a typical value was 3.4 pg/ml (Fig. 2). Similar inhibitory profiles were obtained for urokinase and tPA. Interestingly, this deduced active-site concentration is significantly less than the value of 18 pg/ml for the protein mass of SERP-1 estimated from the Coomassie- stained SDS-polyacrylamide gel electrophoresis. This would suggest either a comigrating protein on the SDS-polyacrylamide gel electrophoresis or inactive SERP-1. The latter hypothesis was supported by the autoradiograph which showed cleaved SERP-1 as well as enzyme-inhibitor complexes (Fig. 1). This cleavage of SERP-1 is likely to be at the active site as preincubation of late MYX supernatant with thrombin resulted in a similarly cleaved species which was inactive as an inhibitor of human plasmin. The SERP-1-containing MYX sample did not turn over the substrate in the absence of plasmin, urokinase, or tPA. Furthermore, the crude early phase MYX supernatant (which lacks SERP-1) did not inhibit plasmin, urokinase, or tPA after incubation at 37 "C for 30 min (data not shown). Interestingly, early samples from MYX-infected cells (before SERP-1 was expressed) increased the hydrolysis of substrate in the presence of tPA but not significantly with plasmin or urokinase. Although the mechanism of tPA enhancement by early supernatants was not investigated substrate hydrolysis was enhanced 2.5-fold by early MYX samples in the presence of tPA at concentrations at which the late phase extract fully inhibited tPA activity. Malignant Rabbit Fibroma Virus-MRV is a recombinant leporipoxvirus formed by the acquisition of sequences from Shope fibroma virus into a MYX background (21). MRV is notable because it causes invasive tumors and immunosuppression similar to MYX but contains only a single identical copy of SERP-1 (12). The late supernatants from MRVinfected cells also formed complexes with plasmin, urokinase, and tPA (data not shown), and active-site titration with plasmin revealed an effective concentration of 9.8 pg/ml. 
MRV-S1 was treated in an identical fashion, and Western blotting was used to confirm the failure of the virus to produce the serpin. This modified virus failed to secrete proteins capable of inhibiting plasmin, urokinase, or tPA. Furthermore, MRV extract containing SERP-1 also inhibited C1s, but that lacking the serpin had no inhibitory activity. Association Rate Constants-The association rate constants of SERP-1 with human plasmin, urokinase, tPA, and C1s were determined as 3.4 ± 0.3 × 10⁴, 4.3 ± 0.5 × 10⁴, 3.6 ± 0.6 × 10⁴, and 1.3 ± 0.1 × 10³ M⁻¹ s⁻¹, respectively. The association rate constant of SERP-1 with C1s was unaffected by heparin (1.2 ± 0.6 × 10³ M⁻¹ s⁻¹). All values are the weighted mean of two readings with standard error, and the final concentration of each enzyme was 50, 30, 40, and 80 nM, respectively. The substrates and their final concentrations are as shown in Table I. Modeling of SERP-1-Fig. 3 shows the alignment of SERP-1 with ovalbumin and four other serpins, modified from Huber and Carrell (26). The interesting features of this alignment are the extended carboxyl-terminal tail of SERP-1 and the complete absence of a sequence corresponding to the D-helix. The serpin was modeled on the structure of uncleaved ovalbumin by standard homology modeling techniques (28, 29), with the resulting structure being energy minimized before manual repositioning of poor torsion angles and close atom contacts. The final structure is shown in Fig. 4, projected onto the ovalbumin backbone. In the final model, the root mean square deviation between the ovalbumin and SERP-1 main chain carbon atoms was 1.7 Å. The alignment of SERP-1 with ovalbumin clearly shows that the loss of the D-helix in SERP-1 (arrowed) produces very little perturbation of the structure, with the ordering and orientation of the major secondary structural elements remaining unchanged. The SERP-1 structure was also compared with the structures of cleaved α1-antitrypsin (30) and α1-antichymotrypsin (31) and gave root mean square deviations for main chain carbon atoms of 2.3 and 2.7 Å, respectively. Of the 4 cysteine residues in SERP-1, Cys9, Cys13, and Cys16 are sufficiently close to potentially form a disulfide bridge (Fig. 5), whereas Cys244 is too distant to be able to form a disulfide bond. (Figure 4 legend, partial: showing the high degree of homology between the actual ovalbumin structure and the computed SERP-1 structure. The position of the ovalbumin D-helix is arrowed. As an active inhibitor, the reactive center loop of SERP-1 is likely to adopt a canonical structure rather than the three-turn helix shown there for ovalbumin.) DISCUSSION The poxviruses MRV and MYX are lethal tumorigenic viruses of rabbits (13) whose virulence is dependent upon the synthesis and secretion of a variety of proteins which permit virus propagation in rabbits but which have little effect on replication in cultured cells. SERP-1 is an example of one such virulence factor and is notable because it is encoded in two gene copies in MYX and a single copy in MRV but is absent from a closely related virus, Shope fibroma virus, that causes only benign tumors in infected rabbits. The role of SERP-1 in viral pathogenesis is further underscored by the fact that the SERP-1 deletion in MRV (12) and a comparable double deletion in MYX significantly reduce virulence in infected rabbits.
The SERP-1 protein has approximately 30% amino acid homology with members of the serpin superfamily (12), the 34% identity with ovalbumin allowing homology modeling based on the well-refined structure of this protein (25). As SERP-1 acts as a functional inhibitor, it is likely that the reactive center loop adopts a partially reinserted canonical conformation (32, 33) rather than the helical form shown in Fig. 4. An unusual feature of SERP-1 is the lack of a D-helix, which is normally highly conserved among the serpins (26). Modeling work presented here indicates that the absence of this helix is still compatible with correct folding and maintenance of the general serpin architecture. Indeed, C1 inhibitor lacks some 60% of the D-helix, whereas barley Z protein lacks the structural features corresponding to helices A-F1, the first three strands of the A β-sheet, and the last two strands of the C β-sheet. Despite this, both C1 inhibitor and barley Z protein are fully functional as inhibitors (34, 35), and thus the loss of the D-helix alone should not militate against SERP-1 possessing serine proteinase inhibitory activity. Superposition of the predicted structure of SERP-1 on other serpins showed that the root mean square deviation for main chain carbon atoms was 2.3 and 2.7 Å when compared with α1-antitrypsin and α1-antichymotrypsin, respectively. These values are higher than that based on ovalbumin (1.7 Å) as the structures of α1-antitrypsin and α1-antichymotrypsin are those of cleaved serpins, with a six- rather than a five-membered A-sheet (32). Despite this, SERP-1 still retains a high degree of homology with these cleaved proteins. Fig. 1 demonstrated a 110-kDa form of the SERP-1 monomer which disappeared on treatment with β-mercaptoethanol. This is compatible with dimer formation with other proteins through disulfide bridges, suggesting the presence of at least 1 freely available cysteine residue. Modeling has allowed the prediction of the positions of all 4 cysteine residues. Cys9 is in the extended amino-terminal tail, Cys13 and Cys16 are both in the A-helix, and Cys244 is in the second strand of the C β-sheet. For a disulfide bridge to form, the S-S distance must be 2.05 Å (36); and as Cys13 and Cys16 are constrained in the A-helix, only Cys9 could come into close enough contact with another cysteine residue to form a disulfide bond. It is plausible that Cys9 could disulfide bond internally with either Cys13 or Cys16, leaving the other cysteine residue free. Thus there are at least 2 free cysteines in SERP-1, Cys244 and either Cys13 or Cys16, which could potentially disulfide bond with a second SERP-1 molecule. The Western blot (Fig. 1) shows that the 110-kDa SERP-1 dimer is noninhibitory in that it is unable to form higher molecular weight complexes with the fibrinolytic enzymes plasmin, tPA, or urokinase. Other serpins which form Cys-linked dimers (protease nexin I, α1-antitrypsin, and α1-antichymotrypsin) retain their inhibitory function despite dimer formation (D. L. Evans, unpublished observations). This suggests that dimerization of SERP-1 compromises the conformation of the reactive center of the molecule. Fig. 5 illustrates that Cys244 lies very close to the reactive center, and thus dimerization involving this residue would result in perturbation and obfuscation of the reactive center, rendering the serpin inactive. As the dimer is inactive, both
SERP-1 molecules must be in noninhibitory conformations, so that Cys224-Cys224 is the most likely linkage, with the addition of the proteinase leading to cleavage and degradation of the dimer and not complex formation. Despite good structural and sequence homology to the other serpins, SERP-1 has a unique combination of arginine-asparagine residues at the P1-P1' positions (12) which is atypical and makes it difficult to predict the inhibitory specificity. Our results show that SERP-1 inhibits three enzymes which play an important role in the fibrinolytic pathway: plasmin, urokinase, and tPA, but not a fourth enzyme, kallikrein. Furthermore, it also inhibits the first enzyme in the complement pathway, C1s. It is apparent that not all of SERP-1 in a particular preparation is active and therefore available to form complexes. The inactive material acts as a substrate for the enzyme and is degraded with a consequent downward band shift of 4-5 kDa. This is compatible with specific cleavage within the reactive center loop, as is seen with other serpins (37, 38). A similar band shift is seen following incubation of SERP-1 with enzymes such as thrombin and porcine trypsin (Fig. 1). Preincubation of the late MYX supernatant with thrombin resulted in a loss of inhibitory activity, supporting the conclusion that the downward band shift of SERP-1 represents reactive center loop cleavage. The assays of complex formation allowed the functional active-site titration of SERP-1 with plasmin. This showed the value to be very low, and therefore the kinetic experiments were performed directly on unfractionated supernatants. Such assays are valid as the late MYX supernatant does not itself turn over the enzyme substrates and the early MYX supernatant, which lacks SERP-1, failed to inhibit plasmin, urokinase, or tPA. As a further control, MRV, which is >90% identical to MYX (21), also expressed SERP-1 which inhibited plasmin, urokinase, tPA, and C1s. The MRV-S1 mutant, with a deletion in the SERP-1 gene (12), failed to secrete proteins which inhibited any of these enzymes. Thus the findings must reflect the inhibitory profile of SERP-1 and not other "contaminating" proteinase inhibitors. The association rate constants between the fibrinolytic enzymes and SERP-1 were approximately 4 × 10⁴ M⁻¹ s⁻¹, and that with C1s was 1.3 × 10³ M⁻¹ s⁻¹. These values, which are only moderately fast, are as high as others such as C1s with C1 inhibitor (34, 39) and protein C with protein C inhibitor (40), which are thought to be physiologically relevant. In the context of the inflammatory microenvironment of infected rabbit tissues the serpin and enzyme concentrations are likely to be high, and therefore complex formation and inhibition are more likely to result (41). Furthermore, MYX and MRV are rabbit viruses and all of the enzymes tested are human, which may serve to reduce the association rates from their proper physiological values. An alternative explanation is that the moderately fast association rate constants indicate that the effect of SERP-1 on the fibrinolytic and complement enzymes is of secondary importance and that the enzyme inhibited by SERP-1 in vivo has yet to be determined. It is of interest, however, that despite the low association rate constant, SERP-1 is only the second naturally occurring inhibitor of C1s to be reported. SERP-1 clearly has an important role in determining virulence and has now been shown to be a specific inhibitor of enzymes within the fibrinolytic and complement pathways.
The method by which the inhibition of these pathways leads to the demise of the rabbit is unclear but may be linked to the profound immunosuppression produced by the virus. Such immunosuppression may be mediated by inhibition of the complement cascade or by inhibition of cell surface urokinase, which in turn may serve to reduce neutrophil migration (42). The inhibition of these components of the inflammatory response may impair the host response and so allow viral replication and secondary Gram-negative infection. This modification of the host immune response by viral serpins has also been demonstrated in the cowpox virus, which encodes a serpin that is a specific inhibitor of interleukin-1β-converting enzyme (18). Interestingly, this serpin, which has the amino acid residues aspartic acid-cysteine at the P1-P1' positions, has no effect on the fibrinolytic enzymes plasmin or tPA but mediates its effect by modifying the host cytokine response. It is plausible that SERP-1 may also act on the host cytokine response in addition to its effect on the fibrinolytic and complement pathways.
2018-04-03T03:56:15.634Z
1993-01-05T00:00:00.000
{ "year": 1993, "sha1": "b582df6b5fcc44f3754b5f4e3e5b28a578574869", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(18)54181-8", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "d87e6ee93a003801b045cb466edc6772770b78dd", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
9069564
pes2o/s2orc
v3-fos-license
Leukemia cutis preceding bone marrow relapse in acute monocytic leukemia How to cite this article: Morgaonkar M, Gupta S, Vyas K, Jain SK. Multiple hyperpigmented patches in Waardenburg syndrome type 1: An unusual presentation. Indian J Dermatol Venereol Leprol 2016;82:711‐3. Received: November, 2015. Accepted: January, 2016. This is an open access article distributed under the terms of the Creative Commons Attribution‐NonCommercial‐ShareAlike 3.0 License, which allows others to remix, tweak, and build upon the work non‐commercially, as long as the author is credited and the new creations are licensed under the identical terms. achromatic patches has been reported frequently, our patient has the distinctive feature of concurrent presence of multiple large hyperpigmented patches all over the body. It may be attributed to the fact that Waardenburg syndrome occurs due to aberrant development and/or migration of melanocyte precursors to epidermis. We were unable to find any previous reports of a similar presentation in a case of Waardenburg syndrome type 1. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the legal guardian has given his consent for images and other clinical information to be reported in the journal. The guardian understands that names and initials will not be published and due efforts will be made to conceal patient identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Leukemia cutis preceding bone marrow relapse in acute monocytic leukemia Sir, Leukemia cutis is a non-specific term used to describe an extra-medullary manifestation of any type of leukemia characterized by skin infiltration (epidermis, dermis or subcutaneous tissues) with neoplastic leukemic cells resulting in clinically identifiable skin lesions. [1][2][3] A 50-year-old woman presented to the hematology department with a 2-week history of generalized nodular skin lesions along with mild bone pain and fever. The patient had a history of acute monocytic leukemia which was diagnosed 6 months prior to the present admission. She experienced a complete remission which lasted for 5-6 months following an induction course of chemotherapy with cytosine arabinocide and daunorubicin (7 + 3) and three additional consolidation courses of the same protocol (5 + 2). On examination, we found that the skin lesions were of different shapes and sizes ranging from small to large nodules (0.5-3.0 cm in diameter), bullae and erythematous plaques distributed all over the body including the trunk and the ventral aspect of extremities. Tender nodules were felt on the extensor surface of the hands and shin [ Figure 1a-d]. She also had mild splenomegaly. A biopsy from the largest Three weeks following the appearance of the initial skin lesions, the symptoms gradually worsened and the patient developed severe pain in the bones coupled with high fever, anorexia, night sweats and productive cough with purpura and ecchymosis in the skin. Physical examination revealed gum hypertrophy with mild splenomegaly. Laboratory investigations revealed moderate anemia, leukocytosis and severe thrombocytopenia with an erythrocyte sedimentation rate of 76 mm/h and raised lactate dehydrogenase levels of 564 IU. Further investigations revealed obvious signs of relapsed acute monocytic leukemia in both the peripheral blood smear (5% of blasts) and the bone marrow aspirate (32% blasts) [ Figure 2d]. 
The blast cells were large with abundant, moderately basophilic cytoplasm. Although a partial remission was obtained after a course of aggressive chemotherapy with FLAG-IDA (fludarabine 30 mg/m², Ara-C 2 g/m² for 5 days, idarubicin 10 mg/m² for 3 days and granulocyte-colony stimulating factor from day 6 until neutrophil recovery), the patient could not achieve a complete remission. Eventually, she died 35 days after the initial presentation with skin lesions. Death was attributed to complications including severe febrile neutropenia and septicemia. Leukemia cutis is an extramedullary skin infiltration which usually occurs in leukemias, especially in acute monocytic and myelomonocytic leukemias. It may sometimes occur before the appearance of peripheral and/or bone marrow leukemia (called aleukemic leukemia cutis), or it may occur concomitantly with systemic leukemia or after a complete remission. The duration may range from weeks to months or even more. [4] Cases of leukemia cutis are often associated with a bad prognosis. [1,5,6] Therefore, once a patient is diagnosed with leukemia cutis, urgent treatment with aggressive chemotherapy is required. Our patient was treated with the FLAG-IDA protocol and had entered a period of pancytopenia and febrile neutropenia but eventually died 35 days after the diagnosis of leukemia cutis. Isolated extramedullary recurrence of acute non-lymphoblastic leukemias always heralds bone marrow relapse and should be treated with re-induction chemotherapy. [7] The increased risk of extramedullary relapse is presumably related to the higher incidence of extramedullary disease in the FAB (French-American-British classification) M4 and M5 subclasses. [8] However, very rarely, leukemia cutis precedes peripheral or bone marrow leukemia, as noted in our case. To sum up, we describe a case of aleukemic leukemia cutis which preceded the relapse of acute monocytic leukemia occurring after a complete remission, with aggressive symptoms and a very short survival time. Due to its aggressive clinical presentation, we strongly recommend that aleukemic leukemia cutis be carefully sought when patients with a history of leukemia develop cutaneous nodules. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2018-04-03T00:05:51.199Z
2016-11-01T00:00:00.000
{ "year": 2016, "sha1": "035d8bf20a3707605a3b150dad8e59b4b5399845", "oa_license": null, "oa_url": "https://doi.org/10.4103/0378-6323.190851", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "851c227946d5a6bdb3c00a13d73a8f4edab8e52a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216425876
pes2o/s2orc
v3-fos-license
What is the functional mobility and quality of life in patients with cerebral palsy following single-event multilevel surgery? Abstract Purpose To report functional mobility in patients with diplegic cerebral palsy (CP) at long-term follow-up after single-event multilevel surgery (SEMLS). The secondary aim was to assess the relationship between functional mobility and quality of life (QoL) in patients previously treated with SEMLS. Methods A total of 61 patients with diplegic CP, mean age at surgery 11 years, eight months (sd 2 years, 5 months), were included. A mean of eight years (sd 3 years, 10 months) after SEMLS, patients were contacted and asked to complete the Functional Mobility Scale (FMS) questionnaire over the telephone and given a weblink to complete an online version of the CP QOL Teen. FMS was recorded for all patients and CP QOL Teen for 23 patients (38%). Results Of patients graded Gross Motor Function Classification System (GMFCS) I and II preoperatively, at long-term follow-up the proportion walking independently at home, school/work and in the community was 71% (20/28), 57% (16/28) and 57% (16/28), respectively. Of patients graded GMFCS III preoperatively, at long-term follow-up 82% (27/33) and 76% (25/33) were walking either independently or with an assistive device at home and school/work, respectively, while over community distances 61% (20/33) required a wheelchair. The only significant association between QoL and functional mobility was better ‘feelings about function’ in patients with better home FMS scores (r = 0.55; 95% confidence interval 0.15 to 0.79; p = 0.01). Conclusion The majority of children maintained their preoperative level of functional mobility at long-term follow-up after SEMLS. Level of Evidence IV Introduction Cerebral palsy (CP) is a disorder of movement and posture caused by a defect or lesion of the immature brain. 1 Differential growth between muscle-tendon units and bone results in muscle-tendon contractures, torsion of long bones and joint contractures or instability. [2][3][4] The musculoskeletal sequelae of CP contribute to gait impairments, activity limitations and participation restrictions. 4,5 Over the last three decades, single-level surgery has been replaced with the concept of single-event multilevel surgery (SEMLS), in which deformities at multiple anatomical levels are addressed during the same operative procedure. [6][7][8] Studies evaluating SEMLS for children with CP usually focus on gait-related outcomes. 8 There are now multiple reports that SEMLS improves gait kinematics at both short [9][10][11][12][13] and long-term follow-up. 10,13 However, the impact of SEMLS on functional mobility and quality of life (QoL) is less well understood as these domains have not been fully investigated. 8,14,15 The literature on function and QoL following SEMLS for children with CP is limited by small study samples 9,16-20 and short-term follow-up. 12,17,18,[20][21][22][23] Previous studies have reported stability of the Gross Motor Function Classification System (GMFCS) in patients with CP over time, [24][25][26] including after SEMLS, 27 and the GMFCS is suggested as an ideal classification tool. The primary aim of this study was to report the functional mobility of patients with diplegic CP at long-term follow-up after SEMLS with respect to their preoperative Gross Motor Function Classification System (GMFCS) grade. 
We hypothesize that patients would maintain functional mobility at long-term follow-up relative to their preoperative mobility status. The secondary aim was to assess if functional mobility is related to QoL in patients previously treated with SEMLS. Study design and setting This study was a case series of children with CP that underwent SEMLS in a tertiary referral centre between 1 January 2005 and 31 December 2016 (level IV evidence). The study was approved by the Clinical Governance and Audit Team (ID: 5085). Participants Patients were included if they met the following criteria: 1) diplegic CP; 2) GMFCS I to III; 5 3) SEMLS at age ≤ 18; 4) and completed the Functional Mobility Scale (FMS) questionnaire at long-term follow-up. 28 SEMLS was defined as two or more bone and/or soft-tissue procedures at two or more anatomical levels bilaterally during one operative procedure. 8 The gait laboratory database at our institution identified 74 eligible patients. Of these, 13 were excluded: six underwent surgery at a different unit and seven could not be contacted. Only one patient was classified as GMFCS I. As patients graded GMFCS I and II are, by definition, independent ambulators, these groups were combined when performing analyses. Outcome measures Patients undergoing SEMLS at our institution are routinely assessed with 3D gait analysis preoperatively and six, 12 and 24 months postoperatively. The cohort of 61 patients included in the study was subsequently contacted to evaluate long-term functional mobility and QoL. In May 2018 we contacted the patients meeting the inclusion criteria by telephone. If appropriate we asked the patient, but if not, we asked the parents to complete the FMS questionnaire over the telephone. FMS rates walking ability at three specified distances (five metres, 50 metres and 500 metres), taking into account the range of assistive devices a child/adolescent might use. 28 These distances represent the patient's mobility at home, school/work and in the community. Following completion of the FMS questionnaire over the telephone, we assessed QoL using an online adapted version of the CP QOL Teen self-reported version 2. 29 The CP QOL Teen is a condition-specific QoL instrument that reports on five domains of QoL including 'general wellbeing and participation', 'communication and physical health', 'school wellbeing', 'social wellbeing' and 'feelings about function'. Following the authors' permission, minor adaptions were made to make the questions applicable to the age of all patients (11 years, 10 months to 31 years, 4 months). Questions pertaining to school-related issues were modified for adults at work (e.g. 'How do you feel about how you are accepted by other students at school or individuals at work?') and a question relating to changes during puberty were only asked to those under 18 years old. The adapted CP QOL was converted into an online questionnaire, using REDCap electronic data capture tools (Vanderbilt University Medical Centre, Nashville, Tennessee), 30 and a link to complete it online was sent to patients or parents. In an attempt to increase participation, an e-mail reminder was sent one and two weeks after the initial invitation to those who had not completed the online questionnaire. Due to concerns about multiple testing when the QoL domains are examined against the FMS, hypotheses were formulated prior to the study. 
Firstly, we hypothesized that in patients graded GMFCS I and II preoperatively, better 'feelings about function' would be associated with better community FMS scores. Secondly, we hypothesized that in patients graded GMFCS III preoperatively, better 'feelings about function' would be associated with better home FMS scores. Although all 61 patients completed the FMS questionnaire over the telephone at long-term follow-up, only 23 (38%) responded to the weblink to complete the online CP QOL Teen self-reported version 2. There were no significant differences in sex, GMFCS grade, age at surgery, preoperative Gait Profile Score (GPS), 31 FMS scores or age at long-term follow-up between responders and nonresponders. The GPS at the routine 24-month postoperative gait analysis was compared with the preoperative value. A medical records review was performed to identify the surgical procedures performed as part of the SEMLS. Statistical analysis Continuous variables were summarized by mean and sd. Change in GPS was analyzed using a paired t-test. FMS was summarized by median and range. The associations between the five domains of QoL and FMS scores were summarized using Pearson correlation coefficients and tested for statistical significance. The QoL postulates were examined using methods of linear regression. Comparison of QoL domains between GMFCS grades I and II versus grade III utilized Student t-tests. The analyses were performed using IBM SPSS Statistics 24 (Armonk, New York). Statistical significance was concluded when p < 0.05. Results Demographics of the 61 included patients are reported in Table 1. There was a significant improvement in GPS from preoperative to 24 months postoperatively (mean 3.3°; sd 4.8; p < 0.001). There was a mean of 3.0 bone (sd 1.5) and 5.7 soft-tissue (sd 2.0) procedures per child as part of the SEMLS (Table 2). Of patients graded GMFCS I and II preoperatively, at long-term follow-up the median FMS score at home, school/work and in the community was 5 (1 to 6), 5 (1 to 6) and 5 (1 to 6), respectively. At long-term follow-up, the proportion walking independently at home, school/work and in the community was 71% (20/28; Fig. 1a), 57% (16/28; Fig. 1b) and 57% (16/28; Fig. 1c), respectively. The mean scores for the five QoL domains, both overall and according to preoperative GMFCS grade, are shown in Table 3. There were no statistically significant differences between the GMFCS grades for any domain. When comparing the regressions of 'feelings about function' with home FMS in the two GMFCS groups it was found that there was no statistically significant difference between the groups (p = 0.89). There was no significant association between community FMS and 'feelings about function' (r = 0.22; 95% CI -0.24 to 0.59; p = 0.34). All other associations between any of the five QoL domains and home, school/work and community FMS scores were all non-significant (minimum p = 0.11). Discussion SEMLS is the standard treatment for correcting the musculoskeletal manifestations of CP. Most studies examining change following SEMLS focus on gait. Of the studies assessing function and QoL, the majority are limited by small study samples or insufficient length of follow-up. This study attempted to overcome these limitations by reporting the FMS in 61 children after a mean follow-up of eight years. The children included in this study had routine gait analysis preoperatively and at six, 12 and 24 months postoperatively. 
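The key association statistic in this study, a Pearson correlation reported with a 95% confidence interval (for example r = 0.55, 95% CI 0.15 to 0.79, for 'feelings about function' versus home FMS), is conventionally obtained by applying the Fisher z-transformation to r. The sketch below shows that calculation in outline; the data are invented and the use of Python/SciPy is our assumption, since the paper's analyses were run in SPSS.

import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson r with a confidence interval from the Fisher z-transformation."""
    r, p = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)                       # Fisher transform of r
    se = 1.0 / np.sqrt(n - 3)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh([z - z_crit * se, z + z_crit * se])
    return r, (lo, hi), p

# Hypothetical scores for the 23 questionnaire responders:
# home FMS ratings (1-6) against the 'feelings about function' domain score
rng = np.random.default_rng(1)
fms_home = rng.integers(1, 7, size=23)
qol_function = 40 + 6 * fms_home + rng.normal(0, 8, size=23)
r, ci, p = pearson_with_ci(fms_home, qol_function)
print(f"r = {r:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, p = {p:.3f}")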
After 24 months the GPS had decreased by a mean of 3.3°, representing a two-fold improvement with respect to the minimally clinically important difference (1.6°). 32 Table 2 reports a mean of 3.0 bone and 5.7 soft tissue procedures per child as part of the SEMLS, comparable with a recent large multicentre study that reported 8.7 procedures per child as part of SEMLS. 13 Of patients graded GMFCS I and II preoperatively, at long-term follow-up, eight years later, over two-thirds were walking independently at home and over half were walking independently at school/work and in the community. Previous studies have reported that following a deterioration at three and six months postoperatively, the FMS returns to baseline by 12 months. 33 This study adds to previous short-term studies by reporting outcomes at long-term follow-up. Our findings suggest that some of these children may be losing functional mobility in the longer term. However, the overall functional mobility of patients at final follow-up in this study echoes a recent long-term follow-up of children with flexed-knee gait that maintained function into adulthood following SEMLS. 10 A previous short-term study reported that while 71% of children graded GMFCS III were using a wheelchair for community distances preoperatively, this had reduced to 58% at nine months and 50% at 12 months postoperatively. 33 Rodda et al 9 also reported improvements in functional mobility, but over a longer follow-up of five years. Although their study included a longer follow-up, it is limited by the inclusion of only ten patients. In our study, 61% (20/33) of patients graded GMFCS III preoperatively were using a wheelchair over community distances after a mean of eight years. Given that a large majority of patients graded GMFCS III require a wheelchair for community distances, the reported use at long-term follow-up in this study represents a success of surgery. Gorton et al 12 compared the change in function and QoL in a cohort of 75 children with spastic CP that underwent SEMLS with a matched cohort that did not undergo surgery. Despite reporting significant improvements following SEMLS, the study only reported outcomes after 12-month follow-up. Similarly, Cuomo et al 22 reported that SEMLS improved QoL in a cohort of 57 ambulatory children with CP. However, the mean follow-up time was only 15.2 months. We hypothesized that better 'feelings about function' would be associated with better community FMS scores in those graded GMFCS I and II preoperatively, and better 'feelings about function' would be associated with better home FMS scores in those graded GMFCS III preoperatively. These pre-specified hypotheses were not confirmed. Although there was a significant association between 'feelings about function' and home FMS scores, it applied to patients irrespective of their GMFCS grade. A review by Livingston et al 34 reported that while functional status measures such as the GMFCS are reliable indicators of variations in physical function, they do not correlate consistently with psychosocial wellbeing. Shelly et al 35 examined the strength of association between function and QoL domains using the CP QOL Child. In contrast to our study, all domains of QoL in their parent proxy-report were significantly associated with function levels, except access to services. For the child self-report, feelings about function, participation and physical health and pain and feelings about disability were significantly associated with functional level. 
It may be that function plays a less important role in QoL for older children and young adults in this cohort, or that we lacked the statistical power to detect such an association in this small sample.

This study has limitations to consider. First, there were no preoperative FMS or QoL data and there was no control group. This makes it difficult to draw conclusions about the effect SEMLS has on these parameters. However, this study adds to the literature by providing valuable information on the functional mobility of an unselected cohort of patients following SEMLS that has not previously been reported. Second, only 38% (23/61) of patients responded to the QoL survey. The lack of significant associations between FMS and QoL may be due to the small patient numbers. Third, the CP QOL Teen self-reported version 2 is not validated for the entire age range of the study participants. Minor adaptations were made in an attempt to make the questionnaire more applicable. This allowed valuable insight into patients across a wide range of ages that would otherwise not be possible.

In conclusion, this study reports that the majority of patients graded GMFCS I and II preoperatively are still ambulating independently at long-term follow-up. Similarly, the majority of patients graded GMFCS III preoperatively either walk independently or with an assistive device at home and school/work eight years after SEMLS. Despite the favourable functional mobility at long-term follow-up, there was little evidence in this small cohort to establish a link between functional status and quality of life.

FUNDING STATEMENT
No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.

OA LICENCE TEXT
This article is distributed under the terms of the Creative Commons Attribution-Non Commercial 4.0 International (CC BY-NC 4.0) licence (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed.

ETHICAL STATEMENT
Ethical approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent: No patient identifying information was used in this study.
2020-03-26T10:18:31.178Z
2020-03-19T00:00:00.000
{ "year": 2020, "sha1": "2d2386b35c60c0c0b36669c745a0e51e2901e7e8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1302/1863-2548.14.190148", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8864dc5ee8b4e8beaf7a606ff35acf01a170b61a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236937304
pes2o/s2orc
v3-fos-license
How to ensure vaccine safety: An evaluation of China’s vaccine regulation system Vaccination is the most economic and effective measure to deal with infectious diseases and protect public health. Nowadays, due to the spread of COVID-19 and the ensuing pandemic, safe, effective vaccines are in urgent need. However, due to concerns about vaccine safety, there is still reluctance to vaccinate. In China, in response to the Changchun Changsheng Vaccine incident, the National People’s Congress Standing Committee passed the Vaccine Administration Law in 2019, which marks China’s first comprehensive piece of legislation on vaccine regulation. The law establishes a regulatory system covering the entire life cycle of vaccines, introduces the vaccine marketing authorization holder system, stipulates the legal responsibilities of all parties, and further clarifies the compensation system for any individuals who exhibit abnormal reactions to vaccination. In addition, it emphasizes the use of modern technology to build a national vaccine electronic platform for tracing. To balance vaccine efficacy and safety, it is necessary to further improve the vaccine risk management mechanism, promote cooperation between government and non-governmental actors, and avoid improper interventions in the vaccine market. Introduction A vaccine is a biological product that can be administered to prevent certain diseases. Even today, as access to medicines and healthcare continues to increase, there is an ongoing push to develop vaccines to ward off various new influenza pandemics. 1 In the past few decades, the United States has supported several congressional vaccination programs to deal with infectious diseases. 2 In 2020, the unprecedented global crisis posed by the COVID-19 pandemic once again underscored the importance of vaccines. Accordingly, in the context of the high politicized pandemic, suggestions have been put forward for mandatory vaccination programs. 3 Due to their important status, many countries, both developed and developing, have put rules in place to regulate the entire vaccine life cycle as a means to ensure their safety and boost public confidence. 4 Nevertheless, despite the implementation of far-reaching regulatory frameworks, vaccine scandals do still occur. For example, in 2016, a police force in China's Shandong Province uncovered an operation illegally selling large numbers of vaccines. By the time the scandal came to light, the illegal vaccines had spread across 24 of China's provinces, with the total illegal income generated from vaccine sales reaching 570 million RMB. 5 More recently, in July 2018, the vaccines produced by Changchun Changsheng Biotechnology Co., Ltd (Changsheng Company below), a Chinese company, were reported to be substandard. At this point, over 65 million vaccines were implicated, thus catching the attention of the media and the public. Reflecting on the series of vaccine incidents in China, 7 on June 29th, 2019, the Standing Committee of the National People's Congress of China passed the Vaccine Administration Law of the People's Republic of China (hereafter, the Vaccine Administration Law). This law, which took effect on December 1st, 2019, marks the first time China has specifically regulated vaccines and the vaccine sector more broadly. This law aims to refine the vaccine regulation system in China and reestablish society's confidence in the safety of vaccines. 
8 As a newly enacted law, it has its own unique characteristics and analyzing the law from a contemporary context may shed some light on how the law functions in relation to COVID-19 vaccinations. As for the structure of this article, it will first briefly describe the background of the Vaccine Administration Law before analyzing its substantive content. After introducing the basic framework, the article will evaluate the law, including both its breakthroughs and its shortcomings. Background: The former vaccine regulation system and its defects Prior to the enactment of the Vaccine Administration Law, the diffuse rules regulating vaccines in China were found in many different laws, such as the Drug Administration Law of the People's Republic of China (hereafter, the Drug Administration Law), the Law of the People's Republic of China on Prevention and Treatment of Infectious Diseases (Infectious Diseases Prevention law, below), and the Regulation on the Administration of Circulation and Vaccination of Vaccines (Vaccines Circulation Regulation, below) amongst others. Although many of the provisions set out rules in these pieces of legislation have covered the subject of vaccines, each law has its own application scope, none of which directly focus on the regulation of vaccines. By analyzing these rules, the former vaccine regulation system in China can be better understood and its defects can be identified. Different departments involved in the chain of supervision Currently, a number of government departments are involved in the regulation system for vaccines, with each taking responsibility for certain aspects (see Table 1 below). According to the Vaccines Circulation Regulation, the Department of Health, operating under the State Council, is responsible for supervising vaccinations throughout the country, whilst the Drug Administration Department, also operating under the State Council, is charged with supervising the quality of vaccines and administering them throughout the country. As for the qualification assessment of vaccine companies, this falls within the remit of each province's industrial and commercial departments. In one sense, having many different key parties involved in the regulatory system can allow for input from a wide-range of professionals and fully exert the potentials of the preset restriction mechanism. Thus, these departments can cumulatively contribute to the regulation process. However, the issue with an uncoordinated system such as this is the lack of collaboration among different departments. This lack of unity also then introduces the possibility of a break in the supervision chain, which will also inevitably lead to many loop-holes in the vaccine regulation framework. For instance, traditionally, the departments have paid greater attention to vaccine production, whilst the custody aspect has been largely ignored. The custody process, however, does have a notable impact on the final quality of the vaccine. A typical example of this is the Shandong vaccine incident detailed above. According to the official investigation, although the vaccines involved were produced by licensed manufacturers, their quality was questionable as they were stored and transported without proper refrigeration. 9 Limited supervision resources During the supervision process, the limited amount of resources and the difficult task of regulation mean the problems are unavoidable. 
An interview revealed that by 2017, in China, there were only 3478 Centers for Disease Control and Prevention at all levels, but more than 200,000 vaccination centers need to be supervised by them. 10 From these figures, it is readily apparent that it is a near-impossible task to effectively supervise such a huge number of vaccination centers. Additionally, some vaccination centers are located in remote areas of China, which further frustrates supervision efforts. 11 In practice, not every vaccination center will be inspected. 12 In addition to the issues related to the limited availability of human resources, there is also a large gap between the professional knowledge needed to effectively regulate the industry and the educational background of the relevant officials currently supervising the sector. The data in Dr. Hu's article shows that there is an urgent need for qualified supervisors to work in vaccine regulation. It should be noted that in the whole country, fewer than 500 people hold the necessary qualifications to conduct medical regulation. 13

Table 1. Main departments involved and their responsibilities.
Department — Responsibilities for vaccine regulation
National level:
National Medical Products Administration — the supervision of the quality and circulation of vaccines nationwide
National Health Commission — the supervision of vaccination nationwide
Provincial level:
Drug administration departments of provinces — the supervision of the quality and circulation of vaccines within their respective administrative areas
Health departments of provinces — the supervision of vaccination within their respective administrative areas

Footnote 7: In the past decade or so, there have been three serious vaccine incidents in China. In addition to the two vaccine incidents mentioned in this article, the other one is the Shanxi vaccine incident. In 2010, media reports questioned whether dozens of children in Shanxi were disabled due to vaccination with vaccines exposed to high temperatures. Although the official survey results showed that only three children were confirmed as abnormal vaccine reactions, this report still triggered a crisis of confidence in vaccines.
Footnote 8: (2) qualified doctors or nurses; (3) qualified refrigerating equipment. From these provisions, it can be seen that it is not difficult to set up a vaccination center.
Footnote 12: Unannounced inspection means that the responsible departments do on-the-spot inspections without notifying those being inspected in advance. The unannounced inspection model has its advantages as well as its disadvantages. Under this model, there is a great possibility that vaccine producers and the huge volumes of vaccines they produce cannot be effectively supervised.

Due to the lack of knowledge and human resources, the phenomenon of power rent-seeking occurs, which is evidenced by the large number of criminal cases related to vaccines in the database of Chinese Judgments. 14 In these cases, the parties are typically charged with bribery, abuse of power, illegal business activities, and so on. According to statistics, more than half of the corruption cases related to vaccine regulation judged by the courts between 2014 and 2018 involved officials from grass-roots-level Centers for Disease Control and Prevention accepting bribes in the vaccine procurement process. 15 Vaccine sellers usually pay kickbacks to the staff responsible for vaccine procurement to secure a sale.
In the Chen Dexin bribery case, Chen Dexin, the former director of the Center for Disease Control and Prevention in Dafang County, Bijie City, Guizhou Province, accepted RMB 585,000 in bribes from the sales teams of Jiangxi Linyuan Biotechnology Co., Ltd from 2012 to 2015, in return for purchasing the company's varicella vaccine and typhoid vaccine. 16 Incomplete rules By and large, regulation of the full vaccine life cycle has yet to be established in China. Typically, there are five phases of vaccine regulation: development, registration, production, circulation, and vaccination. The Vaccines Circulation Regulation focuses solely on circulation and vaccination, without touching on the development, registration and monitoring aspects. The main subject of the Infectious Diseases Prevention Law is to prevent and control the spread of infectious diseases. However, despite the huge scope of its responsibilities, its provisions only mention that ''vaccines used for prophylactic vaccination shall conform to the quality standards of the State." With regard to the Drug Administration Law, it is formulated to guarantee the quality and safety of drugs. This law views vaccines as an indistinct type of drug, it emphasizes the production aspects of the vaccine life cycle whilst ignoring the vaccination procedure. However, in contrast with typical forms of drugs, vaccines have their unique features and thus special procedures need to be followed. 17 Regarding the uniqueness of vaccines, the Drug Administration Law does almost nothing to attend to these aspects of vaccines. It can clearly be seen that prior to the implementation of the Vaccine Administration Law, the regulatory framework consisted of a patchwork of different rules and pieces of legislation, none of which could adequately take responsibility for or effectively oversee the regulation of vaccines (See Table 2, below). ''In order to prepare for a pandemic, or any other public health emergency, public health officials must know the laws under which they operate." 18 In practice, the disparate rules will undoubtedly lead to confusion amongst the parties responsible for supervising vaccine production and administration. Difficulties in tracing Technically, the location of substandard vaccines 19 cannot be effectively traced. Broadly speaking, in China, there are two classes of vaccines: those vaccines belonging to the first class are provided to citizens free of charge, whilst for the vaccines in the second class, citizens are voluntarily inoculated at their own expense. Since 2006, the former State Food and Drug Administration has explored the use of a 20-digit Electronic Drug Monitoring Code to ensure every vaccine can be tracked and located. 20 Unfortunately, this endeavor has ended in failure. 21 There are many factors contributing to this outcome: on the one hand, the systems used by different departments are incompatible, and different provinces and regions in China have different vaccination plans. Therefore, it is a technical challenge to trace vaccines. On the other hand, due to the lack of mandatory legal requirements, some Centers for Disease Control and Prevention did not scan the Electronic Drug Monitoring Code, which made it impossible to record key information on vaccine circulation and subsequent vaccination. Although the price of the second class of vaccines is determined and set by the government, it is the vaccination centers that have the power to choose providers. 
Due to the limited budget provided by the government, the local centers for disease control and prevention are incentivized to pursue profit by selling the vaccines in the second class to individuals who wish to be inoculated. In order to maximize their profits, vaccination centers purchase vaccines that are approaching their expiration date and they lack the proper custody for from agents at a low price. 22 Whilst this method increases profits, it goes without saying that such practices may negatively impact the quality of the vaccines customers receive. The main contents of the vaccine administration law Following the Changsheng incident, both the Chinese Communist Party Central Committee and the State Council of China became acutely aware of the importance of vaccine regulation and ordered that a long-term mechanism for vaccine regulation should be devised and implemented. On November 11, 2018, the draft of Vaccine Administration Law and a draft explanation were published on the government's website to solicit public opinions on the law. 23 On December 23, 2018, the draft of Vaccine Administration Law was referred to the Standing Committee of the National People's Congress for deliberation. Finally, on June 29th, 2019, it was passed into law, taking only six months from the time of drafting to enactment. The final version of the Vaccine Administration Law spans eleven chapters and consists of one hundred provisions. It covers the entirety of the vaccine life cycle, with provisions regulating vaccine development, registration, production, circulation, vaccination, monitoring, safeguarding measures, and legal liability, amongst others, the legislation's main innovations can be generalized as follows. Setting up a whole-process regulatory system To avoid regulatory gaps, the Vaccine Administrative Law has established a regulatory system that covers the entire process from vaccine development to post-marketing management (see Fig. 1 below). Formal regulation begins with vaccine testing: a sponsor who wishes to conduct clinical trials for a vaccine must obtain approval from the National Medical Products Administration (NMPA). 24 If the clinical trials demonstrate that the vaccine is safe and effective, the applicant can submit a registration application to the NMPA. 25 NMPA shall review the production technology and quality control standards of the vaccines applied for marketing. 26 After obtaining a drug registration certificate, the vaccine can then enter the market. In contrast with the general rules which apply to drugs 27 , the Vaccine Administration Law imposes strict requirements for vaccine production. On the one hand, in addition to the conditions of drug manufacturing prescribed in the Drug Administration Law, vaccine manufacturing must also meet the following requirements: being equipped with appropriate scale and sufficient capacity reserves; possessing systems, facilities and equipment to ensure biosafety; and meeting the needs of disease prevention and control. 28 On the other hand, the whole vaccine production process shall comply with the standards of good manufacturing practice (GMP). 29 It should be noted that the National Medical Products Administration revised the GMP for biological products in light of relevant provisions of the Vaccine Administration Law. 
All parties involved in the vaccine circulation process must strictly comply with vaccine storage and transportation management requirements, particularly the necessary temperature and light requirements which must be maintained to ensure the safety and effectiveness of the vaccines. 30 Moreover, the parties must establish honest, accurate, and complete records to ensure transparency and also allow for the whole process to be tracked and monitored. Also, the Vaccine Administration Law extends the minimum record-keeping period from 2 years to 5 years after the shelf life of the relevant vaccines. 31 Furthermore, the Vaccine Administration Law also sets out specific requirements for disease control and prevention institutions, vaccination providers, and medical personnel. Finally, and most importantly, this piece of legislation marks the first time that the risk management requirements have been clearly specified for post-marketing of vaccines. Introducing the vaccine marketing authorization holder system For the first time, the Vaccine Administrative Law specifies a marketing authorization holder (MAH) system for vaccines. In recent years, a pilot drug MAH regime has been implemented in select areas across China. The General Office of the State Council issued a notice on June 26, 2016, formally authorizing a pilot program for drug MAH systems in ten provinces. However, vaccines were notably excluded from the pilot plan. 32 Under the law, the MAH regime will hold the vaccine MAH responsible for the safety, effectiveness and quality of vaccines throughout the vaccine's whole life cycle. 33 In practice, this means that vaccine safety no longer depends so heavily on the government department. Instead, vaccine companies will assume a more active role throughout the entire chain and vaccine life cycle. The regulations on vaccine MAHs are more stringent than those imposed on drug MAHs. On the one hand, the conditions for becoming a vaccine MAH are stricter than those for general drugs. According to the 2016 Pilot Plan, drug research institutions and personnel within the pilot regions are eligible to become drug MAHs. The 2019 revision of the Drug Administration Law stipulates that the drug MAH will be implemented nationwide and defines a drug MAH as an enterprise or research institution that has obtained a drug registration certificate. Significantly, the Vaccine Administration Law further restricts vaccine MAHs to enterprises that have obtained both a vaccine registration certificate and a drug manufacturing license. 34 On this basis, research institutions will not be qualified to become vaccine MAHs. On the other hand, unlike the requirements for drug MAHs, vaccine MAHs must have vaccines manufacturing capacity, and usually they should produce vaccines by themselves. According to Article 32 of the revised Drug Administration Law, a drug MAH The responsibility covers the phrases of development, registration, production, distribution, post-market management and so on. As for its application scope, this regulatory system applies to Chinese drugs, chemical drugs, and biological products, including vaccines. This means that under the current regulation system, the regulation of vaccines should follow the rules in Drug Administration Law ; however, if more stringent requirements are set out in the Vaccine Administration Law, then the latter should be followed. 28 produce the drug by itself or mandate another drug manufacturing enterprise to produce the drug. 
However, the vaccine MAHs can only mandate other entities to produce vaccines after obtaining the approval of the NMPA. 35 Establishing the national electronic vaccine tracking system From a technical perspective, due to the proliferation of Internet and big data technologies, it is easier than ever before to effectively trace vaccines. According to Article 10 of the Vaccine Administration Law, a national vaccine electronic tracking platform is to be established by the National Medical Products Administration and the National Health Commission. In accordance with the design, this platform will be linked to the electronic tracking systems for vaccines set up by the vaccine producers to allow for the integration of production, circulation, and vaccination information. In this way, the whole-life trace of vaccines will be realized and the data barriers between different departments can be eliminated. In order to promote the establishment of the vaccine tracking system, the National Medical Products Administration and the National Health Commission jointly issued a circular on December 6, 2019, 36 which requires Beijing, Tianjin, Inner Mongolia, Shanghai, Jiangsu, Hainan, and Chongqing to take the lead in establishing provincial-level vaccine tracking system, and then connect with the national vaccine tracking system under construction. In Shanghai, for example, a comprehensive information management system has already been established. In this system, once the code is scanned, almost every phase of vaccine regulation (purchase, custody, circulation, etc.) can be tracked and investigated in the preset program. Besides gathering the information about vaccines, the identities of vaccine recipients and qualifications of doctors will also be verified at the same time. Meanwhile, parents can check their child's previous vaccination records through this system and in the instance that a vaccine is found to be substandard, the system will immediately send out an alert. Another important value of the collaboration platform is that it can help to improve regulatory oversight through the effective use of the program. ''Information is the life's blood of the regulatory process." 37 This platform can not only collate information throughout the whole vaccine life cycle in the smallest packaging unit, but also can break down the information barriers between different regulatory agencies. By analyzing various data on the platform, regulators can develop a better understanding of the life cycle of vaccines, issue early warnings for vaccine stocks in different provinces, and understand the growth rate of vaccine production. More importantly, regulators are able to monitor the validity period of vaccines and ensure vaccine quality as part of their efforts to protect public health and safety. Clearly establishing the legal responsibilities of all parties According to the Vaccine Administration Law, the relevant party will be subject to severe penalties if certain requirements have not been met or strictly adhered to. For instance, if a vaccine is found to be counterfeit, a fine of no less than 15 times(but no more than 50 times), the value of the relevant vaccine will be imposed. Meanwhile, where the quality of a vaccine is found to be inferior, a fine can be issued which is as much as 30 times the value. 
38 In addition to the high fines, several other penalties can also be imposed on the relevant bodies, such as confiscating materials, suspending business operations, revoking the drug registration certificates, and so on. If certain acts have violated the criminal law, the person will also be charged under the relevant criminal provisions. The deterrence enforcement strategy assumes that those regulated are rational actors, and claims that the severity of punishment and the frequency of inspection are the main factors that affect the behavior of the regulated object. In this sense, increasingly severe sanctions, coupled with effective supervision, will function to deter illegal behaviors, to some extent. It should also be noted that, in addition to the company producing the vaccine, any officials or personnel in the government departments that have contravened the Vaccine Administration Law will also be punished. Implementing the compensation system for abnormal reactions to vaccination Vaccines are an excellent means for protection from illness for most individuals and also for wider society. However, they are not universally effective nor are they completely safe. 39 ''Abnormal reactions to vaccination" refers to the adverse reactions of qualified vaccines which cause damage to the vaccinated person's tissues, organs or functions in the process of or following standardized vaccination, and for which no relevant party has any fault. 40 In practice, the number of abnormal reactions to vaccination is relatively low, but once it occurs, it may be disastrous for the whole family. In China, the vaccine adverse event reporting system was established in 2005. Due to the ongoing pandemic, following the administration of COVID-19 vaccines, serious adverse reactions should be reported in this system within two hours. According to the data released by the Chinese Center for Disease Control and Prevention, by April 30, 2021, 265 million vaccines doses have been administered, with just 31,434 abnormal reactions being reported. 41 To compensate individuals who experience adverse reactions, the Vaccine Administration Law ascribes different responsibilities to various parties for different categories of vaccines. 42 This marks the first time that an abnormal reaction compensation system has been clearly established in Chinese Law. As for the scope of the compensation offered, the Vaccine Administration Law stipulates that ''the scope of compensation shall be subject to management by catalogue and dynamically adjusted according to actual circumstances." On December 7, 2020, the National Health Commission issued the ''Reference Catalogue and Description of Compensation Scope for Abnormal Vaccination Reactions." 43 As a new compensation management method, the compensation catalogue helps to make the scope of compensation clearer and easier to carry out specific compensation. However, it should be noted that the compensation catalogue is only for reference. If the damage is not within the scope listed in the catalogue, but there is a causal relationship between the damage and the vaccination, compensation should also be given. It should also be noted that the compensation system is somewhat different from the established western no-fault compensation schemes. A no-fault compensation scheme does not use legal fault as the basis for determining liability. Instead, the dispositive issue usually turns on whether the doctor caused the injury. 
44 Meanwhile, no-fault compensation schemes would examine several factors, including whether the injury falls within the scope of compensable damages, as defined by statute. 45 But in China, the compensation catalogue is only for reference. Besides, if it falls under abnormal reactions to vaccination or abnormal reactions cannot be excluded, compensation shall also be made. In this sense, the scope for compensation is broader than that of no-fault compensation schemes. 37 Administration Law, if a person produces and sells fake drugs, a fine of no less than 15 times(but no more than 50 times)will be imposed. If the drug is found to be inferior, the body will be fined no more than 20 times of the unlawful income. By comparing these two articles, it can be concluded that fines relating to vaccines are more severe than those related to normal drugs. 39 Emphasizing the development of new vaccines China has the largest population of any country in the world and produces the largest number of vaccines globally each year. 46 However, most of the vaccines registered in China are traditional vaccines 47 and there is still a large gap in the development of novel vaccines when compared with developed countries. Projects to develop new vaccines typically require large investments of time and capital and the potential market for the vaccine is a major consideration for companies when determining whether to proceed. In the past, on average, Chinese biotechnology companies devoted about 5% of their annual income to development), which is much lower than the amount channeled into development in similar countries abroad. 48 According to the Vaccine Administration Law, government funds will support the development of polyvalent, multivalent, and other new-type vaccines. This goal will be achieved in two ways. The routine path is to encourage vaccine marketing license holders to increase investment in R&D and optimize production processes through favorable policies or economic incentive mechanisms. Due to the length of the R&D cycle, the large capital investment required, and the high risk of failure, the market mechanism may fail. Following the outbreak of infectious diseases, it is difficult to rely solely on enterprises to develop vaccine in a timely manner. Therefore, for vaccines urgently needed for epidemic prevention and control, the state will organize vaccine marketing license holders, scientific research entities, medical and health institutions and other actors to cooperate in vaccine R&D. 49 Government support of this kind will undoubtedly be a catalyst to speed up the development of novel vaccines. In the past year, ''The COVID-19 pandemic has illustrated the need for the swift development of new vaccines targeting emerging pathogens causing outbreaks of infectious diseases." 50 In order to organize and mobilize resources promptly in state of emergency, 12 departments, including the Ministry of Science and Technology and the National Health Commission jointly established a COVID-19 vaccine task force, which is affiliated with the Joint Prevention and Control Mechanism of the State Council and directly reports to the Vice Premier of China. 51 China has issued conditional market approval to four domestically made COVID-19 vaccines, and two of which vaccines have been approved for emergency use by the World Health Organi-zation (WHO). 52 A report reveals that by May 18th, 2021, more than 435,689,000 doses of vaccines for COVID-19 have already been administered in China. 
53 This achievement can, to some extent, be attributed to the new mechanism established by the Vaccine Administration Law. Evaluation A scientific regulatory system can be an impetus for the healthy development of the vaccine industry. Briefly speaking, the Vaccine Administration Law has demonstrated the Chinese government's sincere determination to regulate the vaccine sectors effectively and the implementation of this legislation will affect the current vaccine regulation system in China. Although it has yielded many benefits and proven to be effective, further efforts are still needed to further refine the regulatory system. Improving the risk management system The public's tolerance to risk from vaccines is lower than for other pharmaceutical products since vaccine recipients are mainly from healthy populations which include infants and children. 54 Risk management 55 is therefore not just directly related to the development of the vaccine industry, but also is closely related to vaccine regulation. One of the main advancements of the Vaccine Administration Law is to establish the risk management mechanism. The Vaccine Administration Law explicitly lists ''risk management" as the main principle of the vaccine regulatory system, and stipulates a series of specific requirements on this basis. Article 60 of Vaccine Administration Law requires vaccine MAHs to establish quality retrospective analysis and risk reporting systems, and annually report on vaccine production, circulation, post-marketing research, risk management, and other aspects to the National Medical Products Administration. In terms of vaccine lot release, inspection items and frequency will be dynamically adjusted based on a vaccine risk assessment. The surveillance of adverse events following immunization (AEFI) is an important part of the risk management process after the vaccine is marketed. Taking the COVID-19 vaccine as an example, the statement issued by the Chinese Center of Disease Control and Prevention(China CDC) pointed out that a total of 31,434 AEFI cases were reported between Dec.15,2020 and April 30,2021 when the country administered 265 million vaccine doses. Abnormal reactions accounted for 17.04%, of which 188 cases were deemed severe. The serious abnormal reactions reporting ratios is 0.07/100,000 doses, which is extremely rare (less than 1 in 10,000). However, compared with other countries, China's AEFI surveillance system still has room for improvement. First, the AEFI surveillance system should be open to the public. Unlike the United Kingdom's Yellow Card Scheme, 57 China's AEFI information system does not accept adverse reaction reports submitted directly by the public. According to the Vaccine Administration Law and China's national AEFI guidelines, 58 healthcare facilities, vaccination centers, Centers for Disease Control and Prevention, adverse drug reaction monitoring agencies, and vaccine MAHs are responsible reporters of AEFIs. If the vaccine recipient or guardian suspects that the symptoms may be related to vaccination, they can only notify the above authorized reporters, and then the authorized reporters will submit the AEFI data to the National AEFI Information System. This makes it difficult for the recipients of vaccines to directly report suspected AEFIs in a timely manner. In order to accurately estimate the incidence of adverse events, the public needs to be provided with different ways to report AEFIs that are easy and convenient. 
Second, in addition to passively receiving adverse reaction reports, vaccination providers can take a more proactive approach, such as issuing reminder cards or sending out text messages and phone reminders to guide the public to rationally understand the risks of vaccination and reduce people's fear of AEFIs. 59 Finally, drawing on the experience of the vaccine risk monitoring database in the United States, 60 bringing together and integrating information on vaccines, medical illnesses and other scattered data can aid in tracking adverse reactions to vaccines, which can further help to continually evaluate the safety of vaccines. Strengthening the government's regulatory capabilities As mentioned above, due to the rapid development of the pharmaceutical industry, including the vaccine sectors, government regulators are increasingly faced with issues such as heavy inspection burdens, inadequate staffing, and insufficient professional capabilities. In response to these difficulties, the State Council issued a circular on July 8, 2019, proposing the establishment of professional drug-inspection teams both at the national and provincial levels in the next 3 to 5 years, which will be responsible for conducting compliance inspections and risk assessments in relation to the places and activities related to drug R&D and production, with a particular emphasis on strengthening the inspection of high-risk drugs, such as vaccines. This is beneficial as it will boost the number of drug inspectors, whilst also helping to increase the professionalism of inspectors. Based on their professional skills, and other factors, drug inspectors will be divided into four levels: junior, intermediate, senior, and expert inspectors. Whilst the qualifications and responsibilities of drug inspectors at each level are different, all inspectors must undergo training in business, legal, and regulatory training for no less than 60 h per year. Given the limited number of positions and the low level of remuneration, it remains to be seen whether enough professionals can be attracted to join the drug inspection team. Another useful way to strengthen the government's ability to regulate vaccines is to refer to specialists from universities or research centers when faced with complex professional problems. When confronted with vaccine risks with inherent uncertainties, it is often necessary to make judgments on scientific frontier issues. 61 By holding meetings and consultations with expert, a consensus can gradually be gathered from the scientific community to ensure that vaccines are as safe and effective as possible. In recent years, government regulators have increasingly relied on the advices provided by expert groups in their drug regulation activities. According to Article 17 of Drug Registration Regulation of 2019, the Center of Drug Evaluation shall, as needed for work, form an expert advisory committee, and request experts' opinions on major issues during the process of evaluation, inspection, examination, and confirmation of common names. Meanwhile, Article 42 of the Vaccine Administration Law obliges the National Health Commission to establish an expert advisory committee for national immunization programs. Drawing on the experience of the World Health Organization and other countries, it is necessary to further specify the expert selection mechanisms, meeting procedures, rights and obligations, and information disclosure requirements to ensure the independence of the expert advisory committee. 
Furthermore, an advisory committee can be set up to specifically attend to vaccine safety issues. For example, in Australia, the Vaccine Advisory Committee is responsible for providing independent medical and scientific advice to the Minister for Health and the Therapeutic Goods Administration on issues relating to vaccine safety, quality, and effectiveness, including problems that may arise at various stages of the vaccine life-cycle such as pre-market evaluation and post-market surveillance. 62 Communication and cooperation with international organizations will also help strengthen the capabilities of the regulatory authority, improve China's vaccine regulation system, and align the country's vaccine sector with international technical standards to ensure the safety and quality of vaccines. From a global regulatory network perspective, even though there are many differences between countries, cross-border exchange of information may still trigger both rational learning and normative emulation. 63 For example, by becoming a regulatory member of the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH), China's drug regulatory authority can learn the latest regulatory scientific outcomes and advanced regulatory concepts globally. Since joining ICH in 2017, China has transformed and implemented 46 ICH guidelines, which has promoted the reform of China's drug review rules and procedures. 64

Footnote 57: The Yellow Card Scheme is the UK system for collecting and monitoring information on suspected safety concerns or incidents involving medicines and medical devices. Anyone can report a suspected side effect of vaccination to the MHRA through the Yellow Card Scheme. For details about the Yellow Card, see https://yellowcard.mhra.gov.uk/the-yellow-card-scheme/ (Last visited: 2020-08-08).
Footnote 58: National guideline for the surveillance of adverse events following immunization.

Paying attention to the role of non-governmental actors

Regulatory resources are dispersed amongst different actors, rather than being concentrated in the government. Accordingly, the regulatory capabilities of non-governmental actors are increasingly recognized by the state. 65 The report delivered by President Xi Jinping at the 19th National Congress of the Communist Party of China pointed out that strengthening the cooperation between multiple actors is an important element of promoting the modernization of the national governance system and its governance capabilities. 66 In view of the conflict between the limited pool of government resources and the heavy regulatory tasks demanded to ensure public safety, it is important that the resources of non-governmental actors are effectively utilized to perform certain vaccine regulatory functions. The Vaccine Administration Law specifically stipulates the role of vaccine industry associations and news media. Following the implementation of the Vaccine Administration Law, the China Association for Vaccines (CAV) was created on the basis of the original 'Chinese Pharmaceutical Enterprise Development Promotion Association'. 67 The CAV, which is the first national-level vaccine and biological products industry organization to be created in China, not only aims to promote the high-quality development of the vaccine industry, but also to implement effective self-regulation to ensure vaccine safety. Similarly, the role of the media cannot be ignored.
On the one hand, the media can monitor the misconduct of vaccine manufacturers, vaccination providers, the government, and other relevant actors to a certain extent. The news media sometimes plays an important policing role and can expose the illegal activities of vaccine manufacturers before the government takes action. Accordingly, the pressure of public opinion caused by media exposure will force the government and vaccine manufacturers to actively respond. On the other hand, vaccine information can be fully exchanged through social media, which helps people rationally assess the benefits and potential harms of vaccination. To achieve the goal of vaccine regulation, it is critical to effectively tap into the resources of non-governmental actors and form a cooperative regulatory space through complementary advantages. In this regard, it is necessary for the government to implement some specific mechanisms, such as actively requesting the opinions of non-governmental entities in the process of vaccine policy formulation, and encouraging them to monitor vaccine quality. For example, Canadian vaccine regulators often engage in open dialogues with non-governmental bodies on relevant immunization issues, which not only contributes to the formation of effective vaccine policies, but also enhances the transparency and accountability of the vaccine review process. 68 In addition, the mechanism of public interest litigation can be further improved so as to allow for a more effective use of non-governmental actors to ensure vaccine safety. The Administrative Litigation Law currently only allows People's Procuratorates to bring public interest litigation on the basis of the government's misconduct or nonfeasance in drug regulation. 69 In order to promote participation by civil society and its actors in the enforcement of the Vaccine Administration Law, non-governmental organizations (NGOs) should be granted the ability to initiate public interest litigation. Avoiding excessive government intervention The implementation of the Vaccine Administration Law can help to upgrade the industrial structure of vaccine sectors. 70 After analyzing the relevant provisions in the law, it can be concluded that the new law has imposed a high burden and exacting standards on vaccine manufacturers. Following the implementation of these strict requirements, small vaccine companies that cannot comply with the legislation will gradually be removed from the sector by operation of the law. Besides, ''the critical importance of vaccines highlights the need to ensure robust vaccine innovation to combat global health threats." 71 Compared with small companies, the vaccine companies that have the necessary foundation for innovative development will be given greater access to new opportunities due to government financial support, as well as the technological benefits from the research institutions. In this way, those companies that cannot gain from the policy will be marginalized by the market. Whilst strict legislation does help to promote the high-quality development of the vaccine industry, it is worth noting that government regulators should avoid excessive intervention in the vaccine market, especially for the price of vaccines in the second class. There is a certain degree of administrative monopoly in the procurement of vaccines, and there is a clear imbalance in the relative negotiating positions of vaccine manufacturers and the government in terms of pricing. 
Meanwhile, vaccine manufacturers cannot exceed the maximum retail price set by the vaccine bidder; otherwise, they will lose their market share. Narrow profit margins force vaccine companies to continuously lower their production costs, which may be detrimental to product quality and will also inhibit their enthusiasm for innovation. 72 For example, an investigation by the State Council team into the Changsheng Biotechnology Company vaccine incident indicated that the company violated the approved production process in order to reduce costs. 73

Conclusion

As a special form of drug, vaccines are the basic public product that can be used to protect public health. At the same time, vaccination is also the most economical and effective measure to deal with infectious diseases. Frequent vaccine incidents have severely damaged people's confidence in vaccines and immunization programs more broadly. 74 In response, the 2019 Vaccine Administration Law provides for the 'strictest' vaccine management with tough penalties, which is of particular significance for ensuring vaccine safety and promoting the development of the vaccine industry. The effectiveness of the vaccine regulatory system depends on the proper implementation of the vaccine management law. To this end, it is necessary to further improve risk management mechanisms, enhance the regulatory capacity of government regulators, effectively use the resources of non-governmental actors, and avoid improper government interventions in the vaccine market. At the time of writing, the COVID-19 pandemic is still ongoing. Despite the implementation of various measures to stop the spread of the virus, it appears almost certain that COVID-19 will not be controlled globally without the development of a vaccine. 75 By making necessary improvements to its vaccine regulation system, China can certainly make a greater contribution to curbing the spread of the virus and reducing its associated illness and mortality.

Footnote 66: Xi Jinping's report at 19th CPC National Congress, see http://www.gov.cn/

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2021-08-07T13:17:56.583Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "b9ea0b286ee26328910882c200fbef5cf5b725a5", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.vaccine.2021.07.081", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c72e32ec0c4b14a0859a4916b34d00657365b60e", "s2fieldsofstudy": [ "Medicine", "Law", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
269966572
pes2o/s2orc
v3-fos-license
A 21st century shift in the mechanisms of the early-winter United States snowfall variability Snowfall is a critical element of natural disasters to the United States (US) with strong climatic and socioeconomic influences. Meanwhile, snowfall acts as a driving force to the US water supplies for agriculture, drinking water and hydropower. However, so far, what factors influence the US snowfall variations and how these factors change under global warming remain unclear. Here, we found that large-scale influences of the early-winter US snowfall experienced a shift from the Pacific to the Atlantic side around 2000, through observational analysis and climate model simulations. The Pacific/North American pattern was identified as a dominant driver of the early-winter US snowfall before 2000, but its impact became much weaker in the 21st century as its associated western North American cell shifted northward away from the US. Instead, the tropical and subpolar North Atlantic surface temperature has been influencing the early-winter US snowfall variations via teleconnections after 2000. This changed influence of US snowfall around 2000 is demonstrated to be related to the observed global warming pattern since the 1950s. Our study provides new perspectives in understanding large-scale snowfall pattern and variability and its connection to the global warming pattern. Introduction Snowfall invariably influences both human society and natural ecosystems with positive or negative impacts (Schmidlin and Kosarik 1999, Smith and Matthews 2015, Stoy et al 2022, Zhu et al 2022, Do et al 2023).In December 2022, a sequence of winter storms hit the majority of the United States (US), which killed at least 87 people and caused billions of dollars in damage (National Centers for Environmental Information 2023).Meanwhile, more than one sixth of the global population relies on the snowmelt water supply (Barnett et al 2005), and the western US has emerged as a hot spot of snow drought in the past four decades (Huning and AghaKouchak 2020).It is therefore of great importance to understand why the US snowfall fluctuates from year to year and how it will change in a warming climate. The formation of snowfall involves complex physical processes and requires favorable climatic conditions, including moisture convergence and low temperature (Kapnick et al 2014, Yang et al 2019). 
Various climate factors, depending on the region of interest, have been proposed to potentially influence the North American snowfall, including the Pacific/North American pattern (PNA) (Serreze et al 1998, Coleman and Rogers 2003, Notaro et al 2006), the Pacific sea surface temperature (SST) variability associated with the El Niño-Southern Oscillation (ENSO) or the related Pacific decadal oscillation (PDO) (Kunkel and Angel 1999, Neal et al 2002, Newman 2007, Sobolowski and Frei 2007, Seager et al 2010),the North Atlantic oscillation (Notaro et al 2006, Ghatak et al 2010, Seager et al 2010), and the Arctic sea ice variability (Liu et al 2012).However, it remains ambiguous whether there is a dominant driver of the US snowfall variations on the continental scale and if so, how it may respond to global warming.These questions will be addressed in this study.We will focus primarily on the early-winter season (i.e.October-December or OND) when the US snowfall typically starts to accumulate and could cause severe societal impacts although the total amount of snowfall may be less than that for winter (Schmidlin and Kosarik 1999, Stoy et al 2022, Yao et al 2023).As we will show later, this is also the season when a prominent teleconnection pattern shift relevant to US snowfall variability is identified.The interannual variations of OND-averaged snowfall are directly linked to the first snowfall date, across the entire US (figure S1), which holds significant implications for individuals and areas affected by early snowfall. Observational data For snowfall, we used National Operational Hydrologic Remote Sensing Center (NOHRSC) snowfall dataset provided by the National Oceanic and Atmospheric Administration, and European Centre for Medium-Range Weather Forecasts (ERA5) snowfall (Hersbach et al 2020).NOHRSC uses all the available snowfall observations to produce a best estimate of snowfall characteristics over the US with 1 km spatial resolution and hourly temporal resolution (Carroll et al 2006).While this data might not perform optimally in high-resolution and complex terrain (Gillan et al 2010), our study concentrates on examining snowfall variability across a broad scale throughout the US.The study also incorporates the widely used ERA5 snowfall data (Nouri and Homaee 2021) due to the limited time span of NOHRSC, which commences only from 2008.For the PNA index, we mainly used the PNA index constructed by the modified pointwise method from Climate Prediction Center (CPC).Besides, we also used the PNA index calculated based on the rotated principal component analysis from CPC to verify the result.For SST, we used the Hadley Centre Sea Ice and SST data set (Rayner et al 2003).For surface air temperature, precipitation, water vapor convergence, we used ERA5 dataset.For 500 hPa geopotential height (Z500), we mainly used ERA5 dataset and also used NCEP-NCAR Reanalysis 1 dataset (Kalnay et al 1996) for validation.All data have a monthly resolution, except for ERA5 snowfall with a daily resolution.ERA5 data, NCEP-NCAR Reanalysis 1 data, SST and PNA indices have a temporal coverage for 1959-2022, and NOHRSC covers from October 2008 to December 2022. 
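As a point of reference for how the OND (October-December) averages used throughout the analysis can be formed from monthly fields, the following is a minimal sketch in Python/xarray. It only illustrates the seasonal averaging and anomaly step; the file name, variable name ('sf'), and the latitude/longitude bounds of the US box are placeholders, not details taken from the study.

```python
# Minimal sketch: build an OND-mean snowfall anomaly series from monthly-mean data.
# Assumes an ERA5-like grid with descending latitudes and 0-360 longitudes.
import xarray as xr

ds = xr.open_dataset("era5_snowfall_monthly.nc")            # dims: time, lat, lon (hypothetical file)
ond = ds["sf"].where(ds["time.month"].isin([10, 11, 12]), drop=True)

# Average the three months of each calendar year into one OND value per year
ond_mean = ond.groupby("time.year").mean("time")             # dims: year, lat, lon

# Remove the long-term mean to obtain interannual anomalies
ond_anom = ond_mean - ond_mean.mean("year")

# Area-average over an illustrative US bounding box for a national-mean series
us_series = ond_anom.sel(lat=slice(50, 24), lon=slice(235, 294)).mean(["lat", "lon"])
```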
Coupled model intercomparison project phase 6 (CMIP6) outputs

Monthly data from 30 CMIP6 models (Eyring et al 2016) (table S1) are used in this research (we use all models with available monthly data of snowfall, 500 hPa geopotential height and SST). In particular, we use each model's 'r1i1p1f1' member (to weigh all models equally), from two experiments: historical (between 1959 and 2014) and SSP5-8.5 (between 2015 and 2022).

Atmosphere-only general circulation model (AGCM) simulations

In this study, we conduct sensitivity model experiments using the general circulation model (GCM) developed by the National Center for Atmospheric Research (NCAR), the Community Atmosphere Model version 5 (Tilmes et al 2015), to investigate the SST-US snowfall link. We perform atmosphere-only GCM experiments forced by various SST boundary conditions with the radiative forcing fixed at the values in 2000. The horizontal resolution of the atmospheric model is about 1.9° latitude × 2.5° longitude (i.e. f19). This spatial resolution may not be able to accurately resolve the snowfall behavior in mountainous regions on a fine scale but should be sufficient to serve our purpose, as the main focus of this study is on the large-scale climatic control of snowfall variability across the US. We conduct two pairs (control and forcing scenarios) of large-ensemble AGCM simulations in the Pacific SST forcing experiments and Atlantic SST forcing experiments, respectively. The details of these experiments can be found in section 3. We start the model simulations on September 1st to minimize the influence of initial conditions on the OND snowfall change. For each set of simulations, we conduct 50 ensemble members that start from slightly perturbed atmospheric initial conditions (on the order of 10⁻¹⁴ K) to isolate the influence of SST forcing on the snowfall change. The total OND response to the SST forcing is the ensemble mean difference in each experiment, and statistical significance is evaluated by the Student's t test of the difference between ensemble means (Wilks 2006).

Definitions of the PNA index in CMIP6

Consistent with the observation, the monthly PNA index is constructed by following the modified pointwise method (Wallace and Gutzler 1981):

PNA = Z500*(15°-25°N, 180°-140°W) − Z500*(40°-50°N, 180°-140°W) + Z500*(45°-60°N, 125°-105°W) − Z500*(25°-35°N, 90°-70°W),

where Z500* denotes the normalized monthly Z500 anomaly from each CMIP6 model, area-averaged over each center of action, and the fields have been interpolated to the same horizontal resolution (1° latitude × 1° longitude).

Interannual variations of early-winter US snowfall

We analyze the NOHRSC snowfall dataset covering 2008-2022 and find that the recent devastating winter storms mentioned above were in fact part of the widespread, strong snowfall in early winter 2022 across the US, especially over the Sierra Nevada, the Cascade Range, and the Rocky Mountains (figure 1(a)). Accompanying that, the northeastern US and the central US to the southeast of the Rocky Mountains experienced weak, negative snowfall anomalies. Although the high-resolution and broad-coverage NOHRSC dataset is useful, its limited duration precludes long-term statistical analysis. Therefore, we investigated the atmospheric reanalysis, ERA5, and found that it well captured this observed anomalous snowfall pattern (figure 1(b)). ERA5 will be the main dataset for our following analysis, given its longer temporal coverage.
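For illustration, the modified pointwise PNA definition given above can be computed from a field of normalized Z500 anomalies along the following lines. The four boxes follow the commonly used CPC centers of action and should be treated as an assumption here, since the exact bounds are not restated in the text; the grid is assumed to use 0-360° longitudes and descending latitudes.

```python
# Minimal sketch of the modified pointwise PNA index from normalized monthly Z500
# anomalies (z500_star with dims: time, lat, lon). Box bounds are the standard CPC
# "modified pointwise" centers of action, given here as an assumption.
import xarray as xr

def box_mean(da, lat_s, lat_n, lon_w, lon_e):
    """Area mean over one center of action (lat descending, lon in 0-360)."""
    return da.sel(lat=slice(lat_n, lat_s), lon=slice(lon_w, lon_e)).mean(["lat", "lon"])

def pna_index(z500_star):
    return (  box_mean(z500_star, 15, 25, 180, 220)   # subtropical North Pacific (15-25N, 180-140W)
            - box_mean(z500_star, 40, 50, 180, 220)   # Aleutian low region (40-50N, 180-140W)
            + box_mean(z500_star, 45, 60, 235, 255)   # northwestern North America (45-60N, 125-105W)
            - box_mean(z500_star, 25, 35, 270, 290))  # southeastern United States (25-35N, 90-70W)
```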
Based on empirical orthogonal function (EOF) analysis, the dominant mode (EOF1; 40% variance explained) of the US OND snowfall in 1959-2022 (figure 1(c)) largely resembles the pattern of the 2022 OND snowfall anomaly, with a spatial correlation of 0.78 (figure 1(b)). This implies that the early-winter storms in 2022 were not just coincidental or regional extreme events, but rather reflected spatially coherent climate fluctuations potentially with large-scale influences. Furthermore, we find that the principal component associated with this dominant mode (PC1) is tightly correlated with the averaged OND snowfall in the entire US (r = 0.80; figure 1(d)). Therefore, understanding this dominant pattern of OND US snowfall and its associated temporal variations is relevant not only for the snowfall onset in the western US but also for the total US snowfall amount on a continental scale. Dominant role of PNA before 2000 What influences the interannual variability of early-winter US snowfall? The PNA is the dominant mode of winter atmospheric variability over North America (Wallace and Gutzler 1981, Barnston and Livezey 1987), with strong impacts on surface temperature and precipitation variations (Leathers et al 1991). We find that the PC1 associated with the dominant mode of US OND snowfall variability is negatively correlated with the PNA index during 1959-2022, with a correlation of −0.55 (figure 2(a)). Their connection was particularly strong before 2000 (r = −0.68), but surprisingly, their connection disappeared at the beginning of the 21st century (r = −0.03). Our 17-year running correlation analysis further confirms that their negative correlation remained statistically significant from the 1960s until it sharply declined in the early 2000s (figure S2(a)). The reduction of correlation can potentially be due to the weakened variabilities of both the PNA and the US snowfall (i.e. reduced signal-to-noise ratio). To test that, we isolated the years during 1959-2000 that had the normalized values of PNA and PC1 both between −1 and 1, similar to those after 2000. In total, 19 years were selected during 1959-2000, and the correlation coefficient of the PNA index and the PC1 of US snowfall was −0.52, still significantly more negative than that for 2001-2022. This implies that the reduction of the PNA-PC1 correlation after 2000 is not solely attributed to the reduced variability. Before 2000, a typical negative phase of the PNA is characterized by a wave-train structure extending from the North Pacific to North America, resulting in a cyclonic circulation over northwestern North America and an anticyclonic circulation over southeastern North America (figure 2(c)).
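As a rough illustration of the EOF and running-correlation diagnostics described above, the sketch below computes a leading mode via singular value decomposition and a 17-year running correlation against a PNA series. The synthetic inputs, the omission of area weighting and detrending, and the variable names are assumptions for illustration only, not the paper's exact workflow.

```python
import numpy as np

# Illustrative stand-ins: X is an (n_years, n_gridpoints) matrix of OND snowfall
# anomalies and pna is the OND PNA index per year. Real inputs would come from
# the ERA5 fields and the CPC index described in the data section.
rng = np.random.default_rng(0)
n_years, n_grid = 64, 500
X = rng.standard_normal((n_years, n_grid))
pna = rng.standard_normal(n_years)

# EOF analysis via singular value decomposition of the (time-mean-removed) anomalies.
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
variance_explained = s**2 / np.sum(s**2)
eof1 = Vt[0]                              # leading spatial pattern
pc1 = U[:, 0] * s[0]                      # associated principal component
pc1 = (pc1 - pc1.mean()) / pc1.std()      # normalize, as in figure 1(d)

# 17-year running correlation between PC1 and the PNA index (cf. figure S2(a)).
window = 17
running_r = np.array([
    np.corrcoef(pc1[i:i + window], pna[i:i + window])[0, 1]
    for i in range(n_years - window + 1)
])
print(variance_explained[0], running_r[:3])
```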
The resultant westerly wind anomalies over the western US and the southeasterly wind anomalies in the eastern US carry additional water vapor towards the continent, leading to the convergence of water vapor flux and thus increased precipitation almost across the entire US (figure S3). In the meantime, the cold advection on the west and the warm advection on the east lead to a cold-warm dipole structure across the US (figure S3). Taken together, the combined effects of cold temperature and enhanced precipitation anomalies contribute to a prominent increase in snowfall in the western and north-central US (figure 2(c)), as is also seen in the EOF1 pattern (figure 1(c)). Since the PNA can often be triggered by ENSO (Lau 1997, Trenberth et al 1998), we attempt to further link the US snowfall to ENSO by regressing the global SST anomalies onto the PC1 of US OND snowfall. Our results show that the dominant pattern of US snowfall variability (figure 1(c)) is associated with a La Niña-like SST structure (figure 2(b)), consistent with previous studies (Simmons et al 1983). To confirm this La Niña-PNA-US snowfall link before 2000, we conducted two sets of large-ensemble AGCM simulations forced by various SST fields, named the Pacific SST forcing experiments. In the first pair, named Pacific-Pre2000, one set uses the global climatological SST over 1959-2000 as the boundary condition, and the other uses the regressed pattern of Pacific SST onto the snowfall PC1 over 1959-2000 (figure 2(b)) plus the global climatological SST over 1959-2000 as the boundary condition. The second pair, named Pacific-Post2000, is similar to the first pair, except that we use the global climatological SST over 2001-2022. Each set contains 50 ensemble members, and the ensemble-mean difference of the US snowfall response between the two sets highlights the net impact of a La Niña-like SST forcing (figure 2(e)). Our modeling results confirm that a La Niña-like SST pattern can indeed induce a negative PNA-like circulation response over the North Pacific-North American sector, a cold west-warm east dipole, and enhanced precipitation across the US (figure S4), which together lead to an increased snowfall in the western and north-central US (figure 2(e)). This simulated snowfall response pattern looks similar to the observed pattern (figure 1(c)), and its magnitude also agrees reasonably well with the observations despite a spread across the ensemble members (red line in figure 2(g)). So why did the US snowfall-PNA connection break after 2000? Interestingly, the negative phase of the PNA during the post-2000 period exhibits a different pattern, with the pre-2000 cyclonic circulation originally centered over the US-Canada border now shifting northward to northern Canada (figure 2(d)). This PNA pattern change around 2000 is a robust feature across different reanalysis datasets and different PNA indices (figure S5). Consequently, the PNA-associated cooling is found only in the far western US and is much weaker than in the pre-2000 period (figure S3). Meanwhile, the nationwide moisture convergence found for a negative PNA before 2000 becomes much weaker after 2000 (figure S3). As a result, the impact of the PNA on the US OND snowfall is rather weak during the 21st century (figure 2(d)). Consistent with that, our sensitivity simulations demonstrate that the same Pacific SST anomaly pattern imposed onto the 2001-2022 climatological SST barely affects the US snowfall in the ensemble mean (figures 2(f) and (g)).
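The maps in figures 2(c) and (d) are regressions of gridded fields onto the standardized negative PNA index. The following is a minimal sketch of that kind of index-regression map with a pointwise significance test; synthetic arrays stand in for the reanalysis fields, so the shapes and values are illustrative assumptions rather than the paper's data.

```python
import numpy as np

# Regress a gridded OND field (e.g. snowfall or Z500 anomalies) onto the
# standardized negative PNA index, as in figures 2(c)/(d).
rng = np.random.default_rng(1)
n_years, n_lat, n_lon = 42, 30, 60          # e.g. 1959-2000
field = rng.standard_normal((n_years, n_lat, n_lon))
pna = rng.standard_normal(n_years)

index = -(pna - pna.mean()) / pna.std()     # standardized *negative* PNA

# Least-squares regression coefficient at every grid point:
# slope = cov(index, field) / var(index); with a unit-variance index this is
# just the covariance of the index with the field anomalies.
anom = field - field.mean(axis=0)
slope = np.tensordot(index, anom, axes=(0, 0)) / (n_years * index.var())

# Two-sided significance of the pointwise correlation (Student's t).
r = slope * index.std() / anom.std(axis=0)
t = r * np.sqrt((n_years - 2) / (1.0 - r**2))
print(slope.shape, t.shape)
```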
Effects of global warming pattern The simulated distinct impacts of La Niña-like SST on the US snowfall before and after 2000 (figures 2(e) and (f)) imply a potential role of varying background SSTs in modulating the US snowfall influencing factors. Indeed, the early-winter background SST change from 1959-2000 to 2001-2022 exhibits notable patterns on top of global warming (figure 3(a)). The Indian Ocean, North Atlantic, western Pacific, and extratropical North Pacific all experience enhanced warming as compared to the global average, while the equatorial central-eastern Pacific, southwest subtropical Pacific, South Atlantic, and the Southern Ocean undergo a suppressed warming or even a cooling. Historical simulations from the CMIP6 can reproduce some of the features (e.g. hemispheric asymmetry), suggesting an external origin, but the model-observation mismatches are also apparent (figure 3(b)). For example, the observed SST change shows an Indo-western Pacific warming and a La Niña-like pattern, which is absent in the multi-model average, implying a model bias or a role of internal variability (Lehner et al 2020, Tokarska et al 2020), or more likely a combination of both. To further confirm that the historical warming pattern may modulate the PNA-US snowfall relation, we next investigate the spread among 30 CMIP6 models that can well simulate both the dominant mode of early-winter US snowfall variability and the PNA pattern (figures S6 and S7). We find that most of the models meeting those two criteria can reasonably capture the negative correlation between the PNA and the dominant mode of snowfall variability during 1959-2022, with a correlation coefficient ranging from −0.24 to −0.79; on average, the correlation is about −0.56, close to the observations (figure 3(c)). For each individual model, the correlation coefficient can vary for the two periods before and after 2000, but mostly within a smaller range than the observations. In terms of the multi-model average, the PNA-snowfall correlation remains almost unchanged for those two periods. These results imply that climate models may have underestimated the influence of internal variability or have misrepresented the global warming pattern, among other possibilities. After regressing the OND mean SST warming magnitude (i.e. 2001-2022 minus 1959-2000) of each model against their PNA-snowfall correlation during 2001-2022, we identify a pattern that somewhat resembles the observed warming pattern, with a warmer Northern Hemisphere and a warmer Indo-western Pacific (figure 3(d)). These results, together with our model experiments (figures 2(e)-(g)), indicate that the observed drop of the PNA-snowfall correlation from −0.68 during 1959-2000 to −0.03 during 2001-2022 is likely linked to the observed warming pattern. Increasingly important role of Atlantic SST after 2000 When the PNA lost its influence on the US snowfall after 2000, what might have taken over as the driver of the snowfall variability instead? As illustrated in figure 4, our linear regression analysis indicates that after 2000 the tropical and subpolar North Atlantic warming is associated with a strong cyclonic circulation over western North America (figure 4(d)). The westerly wind anomalies at the southern flank of this cyclonic circulation transport additional water vapor inland and cause the moisture convergence and precipitation increase over the western and the northern US (figure S8). Dominated by the precipitation effect, the snowfall in the region significantly increases, as seen in the dominant pattern of snowfall variability (figure 1(c)), despite only a limited area of cooling near the western coast and the northern US (figure S8). Before 2000, however, the cyclonic circulation is displaced to the west coast of North America and is also much weaker in magnitude, limiting its effect on the US snowfall increase (figure 4(c)).
To verify the Atlantic impact on the US snowfall after 2000, we performed another two sets of 50-member AGCM simulations forced by the 2001-2022 climatological SST and the Atlantic SST anomalies (shown in figure 4(a)) added to the climatology, named Atlantic-Pre2000 and Atlantic-Post2000, respectively.We find that the imposed tropical and subpolar North Atlantic warming induces a cyclonic circulation anomaly in the western North America (figure S9) and a snowfall increase in the western US (figure 4(f)), similar to the observations.The detailed mechanisms of the snowfall increase induced by the Atlantic warming differ between the observations and the model simulations due to the exact location of the anomalous cyclonic circulation.More specifically, the snowfall increase is dominated by the precipitation increase as the anomalous southwesterlies transfer more moisture inland in the observations (figure S8) but is dominated by the temperature decrease associated with the anomalous advection of cold air from the north in the model simulations (figure S9).Despite this discrepancy, both the observations and the model simulations exhibit an anomalous cyclonic circulation in the Northeastern Pacific-North America that leads to a snowfall increase over the western US accompanied.When the background climatological SST is taken from 1959-2000, the same tropical and subpolar North Atlantic warming has a much weaker impact on the US snowfall (figure 4(e)).The probability distribution function of western US snowfall is also clearly different for the two sets of simulations with the post-2000 one being closer to observations (figure 4(g)).These results imply that the historical global warming pattern, while weakening the PNA impact, strengthens the Atlantic impact on the US snowfall. Summary and discussions Using atmospheric reanalysis, sensitivity model experiments, and CMIP6 historical simulations, we report a dominant mode of early-winter US snowfall and its changing influence from the PNA to Atlantic SST around 2000.The change of influencing factor for the US snowfall is shown to be related to the northward shift of the western North American cell of the OND PNA pattern after 2000.It is noteworthy that this PNA pattern shift is a robust feature that can be seen in different reanalysis, and with different definitions of PNA (figure S5).Although our paper is focused on the US snowfall, this PNA pattern shift presumably may also affect other aspects of the climate system.Why state-of-the-art coupled climate models fail to reproduce this observed PNA pattern shift and whether the PNA pattern shift will continue into the future also need further investigations.In the meantime, if the tropical and subpolar North Atlantic SST variability is intensified under global warming (Yang et al 2021), whether it will keep serving as the dominant influence on the US snowfall in the coming decades remain to be explored. 
We propose a potential role of the historical warming pattern in modulating the influencing factor of early-winter US snowfall.We speculate that, after 2000, the weakened PNA impact is related to the Indo-western Pacific warming and the central-eastern Pacific cooling, while the strengthened Atlantic impact is related to the enhanced warming in the North Atlantic (figure 3(a)).Although the factor that changes the US snowfall influence from the Pacific to the Atlantic may be thought to be associated with the transition of the PDO or the Atlantic multidecadal oscillation (AMO) phase, the correlation coefficient between these natural variabilities with US snowfall is not significant.This implies that the 21st century shift in the influencing mechanism of the earlywinter US snowfall cannot be solely attributed to the natural variability of the PDO or AMO and that anthropogenic warming can be critical.Follow-up work on a more definitive attribution is currently underway.It remains under debate to what extent this observed warming pattern is externally forced by radiative agents or internally generated by natural climate variability (Seager et al 2022, Rezaei et al 2023, Tian et al 2023), directly relevant to the future projection of US snowfall variability (O'Gorman 2014).Our study demonstrates that the spatial structure of global ocean warming not only influences, e.g.climate sensitivity (Armour et al 2013), tropical rainfall pattern (Xie et al 2010), inter-ocean teleconnections (Jia et al 2016), and tropical cyclone distribution (Vecchi and Soden 2007), but also significantly affects regional and continental climate variability and the cryosphere, pointing to an urgent need of accurately projecting global warming pattern. Figure 1 . Figure 1.(a), (b) 2022 October to December (OND) US snowfall anomaly from (a) National Operational Hydrologic Remote Sensing Center (NOHRSC) and (b) ERA5 over the period of 2008-2022.(c), (d) The first empirical orthogonal function (EOF) mode of the OND US snowfall (c) and the associated normalized principal component (PC1) time series (d, red line), and the normalized OND US average snowfall index (d, black line) over the period of 1959-2022.The correlation coefficient (r) between the PC1 and US average snowfall is 0.80, statistically significant at the 95% confidence level. Figure 2 . Figure 2. (a) The normalized PC1 of the OND US snowfall (red line), and the OND PNA index (black line).The correlation coefficient (r) between the PC1 and PNA is −0.68 during 1959-2000, statistically significant at the 95% confidence level, and only −0.03 during 2001-2022.(b) The regression of Pacific SST against the normalized PC1 during 1959-2000, and the dotted areas indicate the correlation coefficients between PC1 and SST at the 95% confidence level.(c), (d) OND Z500 (contours at 4 m intervals) and snowfall (shaded) regressed against the negative PNA index over 1959-2000 (c) and 2001-2022 (d), respectively.The zero-contour line is omitted for clarity.(e), (f) The OND snowfall responses over the US in the Pacific-Pre2000 experiment (e) and Pacific-Post2000 experiment (f), respectively.The dotted areas indicate the responses at the 95% confidence level.(g) Probability density function (PDF) for the average of snowfall responses over the western US (orange box in e or f) in the Pacific-Pre2000 experiment (red curve) and Pacific-Post2000 experiment (blue curve).The black line corresponds to the observed magnitude of snowfall anomaly in (c). 
Figure 3. (a), (b) Difference between the average of 1959-2000 and 2001-2022 (2001-2022 minus 1959-2000) in OND SST for observation (a) and the multi-model mean of the CMIP6 historical simulations (b). The dark green contour corresponds to the global average. (c) The correlation coefficient between the PC1 and the corresponding PNA index of each CMIP6 model at different stages; the black asterisks correspond to 1959-2022, the red circles correspond to 1959-2000, and the blue boxes correspond to 2001-2022. The two rightmost columns correspond to the multi-model average correlation coefficient and the observed correlation coefficient, respectively. The dotted lines of different colors represent the 95% confidence level of each stage. (d) The difference of OND mean SST (i.e. 2001-2022 minus 1959-2000) regressed against the correlation coefficient of PNA and PC1 during 2001-2022 across 30 CMIP6 models (table S1 in supporting information S1).
Figure 4. (a) The regression of the Atlantic SST against the normalized first principal component (PC1) during 2001-2022, and the dotted areas indicate the correlation coefficients between PC1 and SST at the 95% confidence level. (b) The normalized PC1 of the OND US snowfall (red line), and the OND Atlantic SST index (ATL, black line) based on the average of the subpolar North Atlantic (top orange box in a) and the tropical Atlantic (bottom orange box in a). The correlation coefficient (r) between the PC1 and ATL is only −0.11 during 1959-2000, but reaches 0.74 during 2001-2022, statistically significant at the 95% confidence level. (c), (d) OND Z500 (contours at 4 m intervals) and snowfall (shaded) regressed against the ATL over 1959-2000 (c) and 2001-2022 (d), respectively. The zero-contour line is omitted for clarity. (e), (f) The OND snowfall responses over the US in the Atlantic-Pre2000 experiment (e) and Atlantic-Post2000 experiment (f), respectively. The dotted areas indicate the responses at the 95% confidence level. (g) Probability density function (PDF) for the average of snowfall responses over the western US (orange box in e or f) in the Atlantic-Pre2000 experiment (red curve) and Atlantic-Post2000 experiment (blue curve). The black line corresponds to the observed magnitude of snowfall anomaly in (d).
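Referring back to the AGCM methods, where statistical significance is evaluated by a Student's t test of the difference between ensemble means, the following is a minimal sketch of how the stippled significance in figures 2(e)-(f) and 4(e)-(f) could be computed for two 50-member ensembles. The synthetic response fields, grid size, and the equal-variance default of the test are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the OND snowfall responses of two 50-member AGCM
# ensembles (e.g. an SST-forced set vs. its control), on a coarse grid.
rng = np.random.default_rng(3)
n_members, n_lat, n_lon = 50, 20, 40
control = rng.standard_normal((n_members, n_lat, n_lon))
forced = 0.3 + rng.standard_normal((n_members, n_lat, n_lon))  # imposed mean shift

# Ensemble-mean response to the SST forcing.
response = forced.mean(axis=0) - control.mean(axis=0)

# Pointwise Student's t test of the difference between ensemble means;
# grid points with p < 0.05 would be stippled as in figures 2 and 4.
t_stat, p_value = stats.ttest_ind(forced, control, axis=0)
significant = p_value < 0.05

print(response.mean(), significant.mean())  # mean response and significant fraction
```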
2024-05-23T15:10:13.410Z
2024-05-21T00:00:00.000
{ "year": 2024, "sha1": "13c1b3000aacdd201deab95ebdab192f0e5e68d4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1748-9326/ad4e4d", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "edc25ada9ae030cfdb87a13c614de28981e75c4b", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Physics" ] }
268512662
pes2o/s2orc
v3-fos-license
Hypertensive Disorders of Pregnancy Hypertensive disorders of pregnancy (HDP) complicate 13% to 15% of pregnancies in the United States. Historically marginalized communities are at increased risk, with preeclampsia and eclampsia being the leading cause of death in this population. Pregnant individuals with HDP require more frequent and intensive monitoring throughout the antepartum period outside of routine standard of care prenatal visits. Additionally, acute rises in blood pressure often occur 3 to 6 days postpartum and are challenging to identify and treat, as most postpartum individuals are usually scheduled for their first visit 6 weeks after delivery. Thus, a multifaceted approach is necessary to improve recognition and treatment of HDP throughout the peripartum course. There are limited studies investigating interventions for the management of HDP, especially within the United States, where maternal mortality is rising, and in higher-risk groups. We review the state of current management of HDP and innovative strategies such as blood pressure self-monitoring, telemedicine, and community health worker intervention. increased lactate dehydrogenase, elevated alanine aminotransferase or aspartate aminotransferase, or persistent headache or other cerebral or visual disturbance and persistent epigastric pain.Preeclampsia can progress to eclampsia, additionally defined by new-onset seizures during or after labor. 3Gestational hypertension is diagnosed as hypertension diagnosed for the first time during pregnancy that returns to normal <12 weeks postpartum and without proteinuria or diagnostic features of preeclampsia. 4Chronic hypertension is persistence of elevated BP $12 weeks postpartum or hypertension in pregnancy that was also present prior to 20 weeks' gestation. 4Together, HDP accounts for 7% of maternal deaths in the United States. 5The incidence of preeclampsia has increased over recent decades, surpassing the rate of diseases like Alzheimer's, obesity, diabetes, and chronic kidney disease. 6,7The incidence of HDP has exponentially increased from 528.9 per 10,000 deliveries in 1993 to 912.4 per 10,000 deliveries in 2014. 8Risk factors for preeclampsia include age, with a higher incidence among those ages <25 and >35 years, diabetes, obesity, and preexisting cardiovascular disease (CVD). 7ditionally, dietary intake of sodium has been found to increase risk for HDP, with those consuming >3.5 g/d having a 54% higher risk for chronic hypertension and a 20% higher risk for preeclampsia compared to those consuming <2.8 g/d. 8P is prevalent and undertreated in pregnant individuals from historically marginalized communities due to systemic barriers to health care.There is increased risk for preeclampsia among minority groups, with higher frequency of preeclampsia among non-Hispanic Black women compared to non-Hispanic White women (16.7% vs 13.4%, respectively). 8,9Maternal mortality is approximately 3-fold higher in Black individuals, with preeclampsia and eclampsia leading causes of death, accounting for 30% of pregnancy-related deaths. 9A single-center prospective study that screened >10,000 postpartum women detected a new diagnosis of postpartum hypertension in 8%, most frequently in Black individuals with higher body mass index. 
10ort-term adverse outcomes resulting from preeclampsia include increased risk for cesarean delivery, placental abruption, prolonged maternal hospital stay, and increased mortality.Those who develop preeclampsia prior to 37 weeks have an 8fold increase in adverse pregnancy outcomes like preterm birth and placental abruption. 7Additionally, hypertension is the foremost indication for postpartum readmission. 11ng-term adverse outcomes due to HDP are well documented.Pregnant individuals with HDP have approximately double the risk of ischemic heart disease in the first 12 years after pregnancy compared to those who are normotensive. 12There may be a significant relationship between the timing of onset of HIGHLIGHTS HDP complicate 13 to 15% of all pregnancies in the United States. Individuals with HDP require more frequent and intensive peripartum management than routine standard of care. Innovative management strategies are needed, but the best approach, timing, frequency, and intensity are uncertain. Blood pressure self-monitoring, telemedicine, and community health worker intervention may be novel approaches to HDP. preeclampsia and long-term cardiovascular outcomes.Individuals who develop preeclampsia after 37 weeks have a 2-fold increase in long-term adverse cardiovascular outcomes. 7There are data demonstrating an association of advanced maternal age (35-44 years) and very advanced maternal age (>45 years) on HDP risk itself, though whether that translates to increased risk of major cardiovascular events in older women is not well established. 13Individuals with HDP have a higher risk of developing stroke compared to those who do not have HDP (34.5% vs 6.9%), and the risk of stroke is higher among non-Hispanic Black and Hispanic/Latina women as compared to non-Hispanic White women. 8eeclampsia is also associated with adverse longterm kidney outcomes, including an increased risk of developing glomerular or proteinuric kidney disease and end-stage kidney disease within 5 to 10 years after pregnancy. 14The risk is higher in women with multiple preeclamptic pregnancies or previous preterm preeclampsia. Late-onset preeclampsia can present in individuals with or without history of antepartum HDP.Acute BP elevation is frequently observed 3 to 6 days postpartum.Thus, the standard of care postpartum visit at 6 weeks after delivery fails individuals with preeclampsia occurring in the first few days postpartum. Most readmissions for acute hypertension occur within 10 to 20 days of discharge, well before the usual postpartum visit. 15From the American College of Obstetricians and Gynecologists recommendations, "blood pressure evaluation is recommended for women with hypertensive disorders of pregnancy no later than 7 to 10 days postpartum, and women with severe hypertension should be seen within 72 hours; other experts have recommended follow-up at 3 to 5 days". 16Prioritizing those with the highest BPs and more severe features for postpartum BP checks or visits by 3 days appears reasonable. Individuals without a previous diagnosis of hypertension are at particularly increased risk for severe maternal morbidity from acute BP elevation compared to those with chronic hypertension.Over half of all maternal deaths occur in the postpartum period. 17Causes of death from HDP include systolic heart failure, cerebrovascular disease, myocardial infarction, and cardiac arrest. 
18In the context of HELLP syndrome (hemolysis, elevated liver enzymes, and low platelets), additional causes of HDP-related death are placental abruption, acute respiratory distress syndrome, disseminated intravascular coagulation, hepatic hemorrhage, hypoxic ischemic encephalopathy, and acute kidney injury. 19Among HDP-related deaths, 44.3% of deaths occurred on day 1 and 37.1% of deaths on days 2 to 7. 20 In a review of 232 pregnancy-related deaths evaluated by 13 state Maternal Mortality Review Committees, approximately 60% of postpartum maternal deaths were determined to be preventable. 21Prevention of mortality likely requires a multipronged approach from highly responsive health systems, such as patient education on BP, healthy diet/lifestyle, medical therapy (aspirin, antihypertensives, magnesium), surveillance for proteinuria and elevated BP, use of safety bundles, strategically timed delivery, and provider action to treat emergent symptoms. 22,23The immediate postpartum period, commonly termed the "fourth trimester," is a prime window of opportunity for intervention and transition to primary or cardiovascular care.Short-term management of HDP immediately postpartum may influence long-term BP control, although data is limited in this regard. 24 DISORDERS OF PREGNANCY The pathophysiology of HDP is not well understood. Hypertension in pregnancy is thought to be due to a combination of improper trophoblast differentiation and abnormal regulation of cytokines, adhesion molecules, major histocompatibility complex molecules, and metalloproteinases, which lead to abnormal differentiation of spiral arteries and subsequent placental hypoperfusion and ischemia. 25tiangiogenic factors that are released also play a role in the development of systemic hypertension due to systemic endothelial dysfunction. 25Current research includes several hypotheses as to how preeclampsia can occur, including contact between the maternal immune system and the placental semiallogeneic trophoblast, chronic uteroplacental ischemia, oxidative stress, immune maladaptation, imbalance of angiogenic factors, genetic imprinting, altered renal hemodynamics with impaired renin angiotensin signaling, an exaggerated maternal inflammatory response, and vascular endothelial dysfunction. 26giogenic factors play a key role in regulation of placental vascular differentiation; thus, there may be an imbalance between angiogenic factors in the pathogenesis of preeclampsia. 27In early gestation of normal pregnancy, proangiogenic factors like endoglin (Eng), placental growth factor (PlGF), fms-like tyrosine kinase-1(Flt1), and vascular endothelial growth factor are highly expressed by invasive trophoblasts.However, decreased expression of these factors can lead to the inadequate cytotrophoblastic invasion seen in preeclampsia.The splice variants of Radparvar et al with epigenetic modifications, with CpG methylation sites as markers for identification of postpartum preeclampsia. 27,28RRENT MANAGEMENT OF HYPERTENSIVE DISORDERS OF PREGNANCY Preventing HDP altogether is most optimal to avoid adverse outcomes.There appears to be an inverse relationship between physical activity, gestational hypertension, and preeclampsia. 
29Preventative medical management with aspirin is recommended per American College of Obstetrics and Gynecology (ACOG) guidelines for those at increased risk for preeclampsia, including those with previous preeclampsia, multifetal gestation, chronic kidney disease, autoimmune disease, type 1 or type 2 diabetes mellitus, and chronic hypertension. 30For these individuals, there is a Level of Evidence: A recommendation to start low-dose aspirin (81 mg) before 16 weeks of gestation for prevention of preeclampsia. Individuals who develop preeclampsia after 37 weeks of gestation are recommended to undergo immediate delivery. 22,30vere hypertension and preeclampsia must be treated early on to prevent adverse outcomes.The BP target for acute severe hypertension and preeclampsia ante or postpartum is a BP of 140-150/90-100 mm Hg. 31 The optimal BP target for the treatment of mild chronic hypertension is an area of active investigation.In an open-label, multicenter, randomized controlled trial (RCT) assigning pregnant women with mild chronic hypertension (BP <160/100 mm Hg) to antihypertensive medication, a BP target of 140/90 mm Hg in patients with chronic hypertension was associated with lower risk of developing preeclampsia with severe features, medically indicated preterm birth (<35 weeks of gestation), placental abruption, or fetal or neonatal death, and there was no increase in the risk for small-for-gestational-age birth weight. 32Medical therapies for HDP antepartum include hydralazine, labetalol, nifedipine, and methyldopa.Postpartum, the angiotensin-converting enzyme inhibitor, enalapril, may be useful and is compatible with breastfeeding in normal-term neonates. 33Magnesium is used for seizure prophylaxis and diuretics for edema or volume overload postpartum. The routine use of diuretics is unclear in preeclampsia. 34Valensise et al 35 hypertension at postpartum day 7. 39 Additional research is needed to validate different preeclampsia phenotypes and the best therapy. Given that preeclampsia can develop de novo after delivery, women in the postpartum period should be given discharge instructions that include information about the signs and symptoms of preeclampsia as well as the importance of presenting for medical evaluation in the event that they occur. 16As In severe cases, such as when medication fails to reduce BP to the target range, consultation with anesthesiologist, maternal fetal medicine subspecialist, or critical care subspecialist is recommended. 31OG and the Royal College of Obstetricians and Gynecologists recommend achieving a BP of 140/90 mm Hg in the immediate postpartum period. 40,41wever, there are no standardized management guidelines for specific antihypertensive agents or parameters for medication uptitration in the postpartum period.Thus, physician preference and experience, as well as safety of medical therapy during breastfeeding often affect the approach. 42 INNOVATIVE MANAGEMENT OF HYPERTENSIVE DISORDERS OF PREGNANCY BLOOD PRESSURE SELF-MONITORING AS A STRATEGY FOR TREATMENT OF HYPERTENSIVE DISORDERS OF PREGNANCY.Self-monitoring with a health systemprovided BP cuff, when combined with outreach to patients by medical personnel for antihypertensive management, appears to be a viable strategy for antepartum BP control (Table 2).Two well-powered RCTs, BUMP1 and BUMP2, did not find differences between those randomized to self-monitoring of BP vs usual care. 
43,44However, those trials did not include automated transfer of BP readings to physicians via the electronic medical record (EMR) or outreach by the medical team; instead, they relied on the volunteer to contact the medical team for elevated BP readings.An observational study studying universal screening noted lack of participation in self-monitoring unless a BP cuff was provided. 10By comparison, SNAP-HT demonstrated feasibility and improved diastolic BP in the intervention group at 6 months with a 7 mm Hg decrease in long-term BP using self-management of postpartum hypertension with daily home BP monitoring and automated selfcontrolled medication adjustment via telehealth. 457][48] In these studies, there were frequent reminders and contact with the study teams. In BUMP2, volunteers highlighted interactions with clinicians, structured follow-up, and individualized support as aspects that they preferred and were motivated to reduce HDP in future pregnancies. There were significant health knowledge gaps despite college-level education, pointing to the importance of universal education on HDP. 44In the University of Wisconsin study of postpartum management, where telehealth was used for communication, there was a 95% retention rate and high patient satisfaction, and investigators also identified 16% of their cohort with uncontrolled BP. 47 The Safe@Home study using telemonitoring to monitor BP, admissions for suspected preeclampsia and hypertension were lower in those with telemonitoring compared to those without, in addition to high levels of satisfaction in the telemonitoring group. 48Additionally, an observational study investigating self vs ambulatory BP monitoring, found that patients were more comfortable with selfmonitoring, citing less anxiety and discomfort. 49rthermore, a University of Pittsburgh observational study found that when patients were given educational materials on discharge, with follow-up at 1-week intervals, studying universal screening noted lack of participation in self-monitoring; in those who did participate, 8% of the cohort was diagnosed with potential new-onset hypertension and 0.7% were diagnosed with severe hypertension. 10Interestingly, a meta-analysis of home monitoring showed no significant differences between those as compared to usual care with respect to postpartum readmission and BP monitoring acutely after discharge, possibly due to clinical heterogeneity and low quality of evidence. 50While more RCT data are needed, a multilevel intervention with patient-provider interaction appears more effective in achieving the goal of BP reduction than self-monitoring in isolation. TELEMEDICINE FOR HYPERTENSIVE DISORDERS OF PREGNANCY.Greater than 40% of women do not attend a recommended postpartum visit by 6 weeks; thus, it is important to study the utilization of Radparvar et al Innovative Management of Hypertensive Disorders of Pregnancy telehealth and telemedicine to aid in attendance of these visits. 16Telemedicine is a viable alternative with greater attendance to postpartum visits in several studies (Table 3).A single center study in a U.S. urban minority community indicated women who did not attend an in-person office visit were more often Black (87% vs 56%), P < 0.01) and younger (29.1 vs 31.4 years, P ¼ 0.04), but no disparity by race or ethnicity was seen for telehealth visits. 51Attendance for a telehealth visit was high, 70% vs 32% for an in-person visit. 
51Telephone, text message, and video conference-based communication were well received by patients and providers in several studies. 52,53However, a University of Arkansas study of self-monitoring in a rural cohort using mobile health (mHealth) wireless transmitting equipment found several challenges with this strategy. 52Those using the application found it easy to use, while nonusers were concerned about incorporating it into their daily routine as new parents.Barriers to using mHealth included concern for wireless transmission in rural areas, single BP cuff size availability, and stress associated with monitoring of BP.Most telehealth or mobile health studies with BP cuff prototypes have been tested in small studies and lack integration with EMRs or broader medical practice. 54e "OB Nest" program at Mayo Clinic is an antepartum model involving self-monitoring, texting, and an online peer support community. 53The program has reduced in-person prenatal care, decreasing health care costs, with similar delivery outcomes. Participants in the program reported feeling more connected with providers, less anxious, and more knowledgeable. 53A randomized control trial at the University of Pennsylvania used texting to communicate postpartum BP. 55 Their study found that the texting group was more likely to identify BP spikes than those presenting to office visits alone in the first 10 days postpartum, a high-risk period. 55e of telehealth may also reduce racial disparities. 51,56,57In a study at the University of Pennsylvania, Hirshberg et al 56 found decreased racial disparity when using a text-messaging-based platform.Their study found that over 90% of Black participants provided a BP measurement compared to the 33% who presented for an in-person BP visit as Radparvar et al Innovative Management of Hypertensive Disorders of Pregnancy per routine care.The implementation of telehealth with audio-based visits in a study done by Khosla et al 57 found significant improvement in adherence with at least 1 visit for follow-up for hypertension in the postpartum period among Black patients (48.5%-76.3%,P < 0.001). Lifestyle interventions have been conducted via televisits.In a subset of the BP2 trial, women were more likely to adopt healthy habits if provided with an intense intervention with in-person consultation with a provider and dietitian followed by 6 months of telephone-based coaching. 58Participants cited high accountability, increased education about healthy habits, and more motivation toward making lifestyle changes along with increased perceived risks of their cardiovascular health postpregnancy. 58In the HH4H (Heart Health 4 Moms) study, a total of 150 women were enrolled in an RCT to reduce CVD risk through an online intervention vs self-directed care in controls. 59The HH4H group improved CVD risk knowledge, self-efficacy to achieve a healthy diet, and reduced physical inactivity.This study enrolled mostly White, higher-income, and college-educated women who were normotensive at baseline. 59A small RCT in urban minority centers (Philadelphia Women, Infants, and Children program) found the use of texting BP measurements to providers and peer coaching to improve the frequency of BP measurement and weight loss. 60 when CHWs were integrated into the inpatient care team. 
66Combining CHWs with other strategies may be effective to reduce health care utilization but requires further investigation, particularly for antepartum or postpartum care.There are limited RCTs focused on cost and utilization with variability in the magnitude of the effect, likely due to differences in study design, and evidence is insufficient to draw conclusions about the effects of CHWs on chronic disease management. 67A systematic review encompassing 8 studies (n ¼ 6,500) found that 7 studies found no impact on mental health quality of life or mental health outcomes; 2 studies in the United States found improved quality of care in those with multiple-morbidity conditions and reduced hospitalizations. 68However, these studies have low certainty of evidence due to risk of bias, inconsistency, and imprecision, thus pointing to a need for future studies investigating the role of CHWs in these conditions. 68 Combined interdisciplinary approach for patients with HDP has been found effective. 71Interventions might include education for both providers discharging patients with HDP and patients with a diagnosis of HDP (by nurse educators or CHWs), the provision of free BP monitors to all patients with HDP, Innovative Management of Hypertensive Disorders of PregnancyFlt1 and Eng, namely soluble Flt1 (sFlt1) and soluble Eng (sEng), serve as ligand traps by binding to angiogenic factors.In normal pregnancy, sFlt-1 blood levels remain low in early pregnancy and increase toward the third trimester to allow for cytotrophoblastic invasion during early pregnancy.However, elevated levels of sFlt1 in pregnancy, as seen in preeclampsia, can lead to defective cytotrophoblast invasion and high plasma sFlt1:PlGF ratios, reflecting severe disease and associated with adverse clinical outcomes.Levels of sFLT-1 can persist early postpartum and lead to preeclampsia usually within 48 hours to 6 weeks postpartum.Interestingly, postpartum preeclampsia has also been associated assessed pregnant individuals with uterine artery Doppler to evaluate placental arterial waveforms and maternal transthoracic echocardiography, calculating stroke volume and peripheral vascular resistances.At a hemodynamic level, there appeared to be 2 phenotypes of preeclampsia: 1) early onset (<34 weeks), characterized by low cardiac output, high resistance, and depleted intravascular volume, a phenotype more commonly associated with bilateral notching of the uterine artery Doppler, fetal growth restriction, and worse maternal and perinatal outcomes; and 2) late onset ($34 weeks), characterized by high cardiac output, reduced resistance, and increased intravascular volume, a phenotype more commonly associated with obesity, normal fetal growth, and more favorable maternal and perinatal outcomes.Understanding different preeclampsia phenotypes could modify the choice of therapy, specifically the use of diuretics.Following delivery, fluid that has been sequestered in the extravascular space is mobilized, producing a large auto-infusion of fluid from the extravascular to the intravascular compartment.Trials in antepartum patients have shown insufficient evidence to draw reliable conclusions about diuresis, possibly because these could have also included the phenotype associated with intravascular depletion.35,36Several studies have demonstrated that diuretics may be useful in postpartum women with HDP, possibly including more individuals with a phenotype of increased intravascular volume.Small trials of individuals with severe preeclamptic 
features postpartum have shown decreased requirement for additional antihypertensives if a combination of furosemide and nifedipine was used. 37 Patients with preeclampsia with severe features randomized to treatment with 20 mg daily furosemide were found to have significantly lower BP by postpartum day 2 and required significantly less antihypertensive therapy on discharge compared to those treated with placebo. 38 Another study evaluating patients with gestational hypertension and preeclampsia with and without severe features demonstrated that patients randomized to furosemide 20 mg daily for the first 5 days postpartum were less likely to have persistent hypertension at postpartum day 7. 39 Important considerations for future studies of CHW include: tailored scope of work, training, mentorship, supervision, ratio of CHW to patient, and financing. The National Academies of Sciences, Engineering, and Medicine 2019 report provides health systems with guidance on how social care integration may promote improved health outcomes. 69 INNOVATING MULTILEVEL AND MULTIDISCIPLINARY MANAGEMENT OF HYPERTENSIVE DISORDERS OF PREGNANCY. A combination of strategies prepregnancy, antepartum, and postpartum can identify high-risk groups and patients at risk for developing HDP (Figure 1). Phone-based applications, Bluetooth or cellular upload of BP measurements to the EMR from BP self-monitoring, video-based EMR-integrated telemedicine, and CHWs could be integrated more fully to manage peripartum HDP (Central Illustration). Technology advances person-centered clinical care to increase health care access and empowers patients to engage in self-management for improved health outcomes. A model of prioritized health care access that meets each patient where they are, as opposed to relying on the patient coming to the health system, may be more useful to identify acutely uncontrolled BP both ante and postpartum. CHWs can provide enhanced health education and coaching, peer support, and streamline patient-provider communication, aiding therapeutic optimization. CHWs could be incorporated into the cardio-obstetrics team, as consensus guidelines encourage early involvement of the cardio-obstetrics team to prevent maternal morbidity and mortality. 70 Furthermore, an equity-focused approach is crucial, with dedicated CHWs responsible for overseeing and encouraging retention in routine care, completion of referrals to primary and specialty care postpartum, screening for social needs, and introducing community resources. Assessment of clinical and social needs and coordination of real-time care plan management will allow for earlier identification, timely intervention, and prevention of unnecessary readmission and use of emergent/hospital-based care, as well as morbidity from uncontrolled BP.
FIGURE 1 Integrating Innovative Management Strategies for Hypertensive Disorders of Pregnancy in Patient Care Across the Prepregnancy, Antepartum, and Postpartum Periods TABLE 1 Definitions of Hypertensive Disorders of Pregnancy >20 wk with proteinuria (>300 mg on 24-h urine protein collection or 0.3 mg on urine point of care) With severe features: SBP ≥140 mm Hg or DBP ≥90 mm Hg on at least 2 occasions 4 h apart at a gestational age of >20 wk with proteinuria and evidence of end organ injury SBP ≥160 mm Hg or DBP ≥110 mm Hg on at least 2 occasions 4 h apart OR persistent severe hypertension requiring IV antihypertensives for control to bring SBP <160 mm Hg or DBP <110 mm Hg Eclampsia Preeclampsia + seizures DBP = diastolic blood pressure; HTN = hypertension; SBP = systolic blood pressure. As recommended by the ACOG guidelines, treatment with first-line agents in patients with acute severe hypertension, identified as systolic BP ≥160 mm Hg or diastolic BP ≥110 mm Hg, should be initiated within 30 to 60 minutes to reduce risk of maternal stroke. First-line agents for treatment in both pregnancy and postpartum include IV labetalol and hydralazine, or more recently, the use of oral nifedipine if IV access is not established. BP targets in cases of severe hypertension should not aim for normalization of BP but rather target a range of 140 to 150/90 to 100 mm Hg. TABLE 2 Rigor and Reproducibility of Randomized Controlled Trials and Observational Studies of Blood Pressure Monitoring in Peripartum Cohorts. DBP = diastolic blood pressure; ED = emergency department; HTN = hypertension; NA = not available; RCT = randomized controlled trial; SBP = systolic blood pressure; SPEC = severe preeclampsia. TABLE 3 Rigor and Reproducibility of Previous Studies of Telemedicine in Peripartum Cohorts. NA = not applicable; BP = blood pressure; RCT = randomized controlled trial.
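To make the numeric criteria quoted above concrete, the sketch below maps a single self-monitored reading to the categories used in the text (severe-range hypertension at SBP ≥160 or DBP ≥110 mm Hg, hypertensive range at ≥140/90). It is a hedged illustration only, not clinical software or a validated protocol; the function name, message strings, and example readings are assumptions introduced here.

```python
def classify_bp_reading(sbp_mm_hg: int, dbp_mm_hg: int) -> str:
    """Map one self-monitored reading to the categories described above.

    Thresholds follow the ACOG criteria quoted in the text; this is an
    illustrative sketch, not a clinical decision tool.
    """
    if sbp_mm_hg >= 160 or dbp_mm_hg >= 110:
        # ACOG: confirm and initiate first-line treatment within 30-60 minutes.
        return "severe-range: urgent clinical contact"
    if sbp_mm_hg >= 140 or dbp_mm_hg >= 90:
        return "hypertensive-range: notify care team"
    return "below treatment threshold"


# Example use within a hypothetical postpartum text-messaging program:
readings = [(152, 96), (168, 104), (128, 82)]
for sbp, dbp in readings:
    print(sbp, dbp, "->", classify_bp_reading(sbp, dbp))
```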
2024-02-27T17:23:40.631Z
2024-02-21T00:00:00.000
{ "year": 2024, "sha1": "9790dc915acd3175924dce974eb69172ac6ddd2b", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jacadv.2024.100864", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fcbf176896d8e5cb41ba39b161f00e70e6843fc4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
199552332
pes2o/s2orc
v3-fos-license
Beyond The Wall Street Journal: Anchoring and Comparing Discourse Signals across Genres Recent research on discourse relations has found that they are cued not only by discourse markers (DMs) but also by other textual signals and that signaling information is indicative of genres. While several corpora exist with discourse relation signaling information such as the Penn Discourse Treebank (PDTB, Prasad et al. 2008) and the Rhetorical Structure Theory Signalling Corpus (RST-SC, Das and Taboada 2018), they both annotate the Wall Street Journal (WSJ) section of the Penn Treebank (PTB, Marcus et al. 1993), which is limited to the news domain. Thus, this paper adapts the signal identification and anchoring scheme (Liu and Zeldes, 2019) to three more genres, examines the distribution of signaling devices across relations and genres, and provides a taxonomy of indicative signals found in this dataset. Introduction Sentences do not exist in isolation, and the meaning of a text or a conversation is not merely the sum of all the sentences involved: an informative text contains sentences whose meanings are relevant to each other rather than a random sequence of utterances. Moreover, some of the information in texts is not included in any one sentence but in their arrangement. Therefore, a high-level analysis of discourse and document structures is required in order to facilitate effective communication, which could benefit both linguistic research and NLP applications. For instance, an automatic discourse parser that successfully captures how sentences are connected in texts could serve tasks such as information extraction and text summarization. A discourse is delineated in terms of relevance between textual elements. One of the ways to categorize such relevance is through coherence, which refers to semantic or pragmatic linkages that hold between larger textual units such as CAUSE, CONTRAST, and ELABORATION etc. Moreover, there are certain linguistic devices that systematically signal certain discourse relations: some are generic signals across the board while others are indicative of particular relations in certain contexts. Consider the following example from the Georgetown University Multilayer (GUM) corpus (Zeldes, 2017), 1 in which the two textual units connected by the DM but form a CONTRAST relation, meaning that the contents of the two textual units are comparable yet not identical. (1) Related cross-cultural studies have resulted in insufficient statistical power, but interesting trends (e.g., Nedwick, 2014 ). [academic_implicature] However, the coordinating conjunction but is also a frequent signal of another two relations that can express adversativity: CONCESSION and AN-TITHESIS. CONCESSION means that the writer acknowledges the claim presented in one textual unit but still claims the proposition presented in the other discourse unit while ANTITHESIS dismisses the former claim in order to establish or reinforce the latter. In spite of the differences in their pragmatic functions, these three relations can all be frequently signaled by the coordinating conjunction but: symmetrical CONTRAST as in (1), CONCESSION as in (2), and ANTITHESIS as in (3). It is clear that but is a generic signal here as it does not indicate strong associations with the relations it signals. (2) This was a very difficult decision, but one 1 The square brackets at the end of each example contain the document ID from which this example is extracted. 
Each ID consists of its genre type and one keyword assigned by the annotator at the beginning of the annotation task. arXiv:1909.00516v1 [cs.CL] 2 Sep 2019 that was made with the American public in mind. [news_nasa] (3) NATO had never rescinded it, but they had and started some remilitarization. [inter-view_chomsky] As suggested by Taboada and Lavid (2003), some discourse signals are indicative of certain genres: they presented how to characterize appointment-scheduling dialogues using their rhetorical and thematic patterns as linguistic evidence and suggested that the rhetorical and the thematic analysis of their data can be interpreted functionally as indicative of this type of taskoriented conversation. Furthermore, the study of the classification of discourse signals can serve as valuable evidence to investigate their role in discourse as well as the relations they signal. One limitation of the RST Signalling Corpus is that no information about the location of signaling devices was provided. As a result, presented an annotation effort to anchor discourse signals for both elementary and complex units on a small set of documents in RST-SC (see Section 2.2 for details). The present study addresses methodological limitations in the annotation process as well as annotating more data in more genres in order to investigate the distribution of signals across relations and genres and to provide both quantitative and qualitative analyses on signal tokens. Background Rhetorical Structure Theory (RST, Mann and Thompson 1988) is a well-known theoretical framework that extensively investigates discourse relations and is adopted by Das and Taboada (2017) and the present study. RST is a functional theory of text organization that identifies hierarchical structure in text. The original goals of RST were discourse analysis and proposing a model for text generation; however, due to its popularity, it has been applied to several other areas such as theoretical linguistics, psycholinguistics, and computational linguistics (Taboada and Mann, 2006). RST identifies hierarchical structure and nuclearity in text, which categorizes relations into two structural types: NUCLEUS-SATELLITE and MULTINUCLEAR. The NUCLEUS-SATELLITE structure reflects a hypotactic relation whereas the MULTINUCLEAR structure is a paratactic relation (Taboada and Das, 2013). The inventory of relations used in the RST framework varies widely, and therefore the number of relations in an RST taxonomy is not fixed. The original set of relations defined by Mann and Thompson (1988) included 23 relations. Moreover, RST identifies textual units as Elementary Discourse Units (EDUs), which are non-overlapping, contiguous spans of text that relate to other EDUs (Zeldes, 2017). EDUs can also form hierarchical groups known as complex discourse units. Relation Signaling When it comes to relation signaling, the first question to ask is what a signal is. In general, signals are the means by which humans identify the realization of discourse relations. The most typical signal type is DMs (e.g. 'although') as they provide explicit and direct linking information between clauses and sentences. As mentioned in Section 1, the lexicalized discourse relation annotations in PDTB have led to the discovery of a wide range of expressions called ALTERNA-TIVE LEXICALIZATIONS (AltLex) (Prasad et al., 2010). RST-SC provides a hierarchical taxonomy of discourse signals beyond DMs (see Figure 1 for an illustration, reproduced from Das and Taboada (2017, p.752). 
Intuitively, DMs are the most obvious linguistic means of signaling discourse relations, and therefore extensive research has been done on DMs. Nevertheless, focusing merely on DMs is inadequate as they can only account for a small number of relations in discourse. To be specific, Das and Taboada (2017) reported that among all the 19,847 signaled relations (92.74%) in RST-SC (i.e. 385 documents and all 21,400 annotated relations), relations exclusively signaled by DMs only account for 10.65% whereas 74.54% of the relations are exclusively signaled by other signals, corresponding to the types they proposed. The Signal Anchoring Mechanism As mentioned in Section 1, RST-SC does not provide information about the location of discourse signals. Thus, presented an annotation effort to anchor signal tokens in the text, with six categories being annotated. Their results showed that with 11 documents and 4,732 tokens, 923 instances of signal types/subtypes were anchored in the text, which accounted for over 92% of discourse signals, with the signal type se- mantic representing the most cases (41.7% of signaling anchors) whereas discourse relations anchored by DMs were only about 8.5% of anchor tokens in this study, unveiling the value of signal identification and anchoring. Neural Modeling for Signal Detection Zeldes (2018a) trained a Recurrent Neural Network (RNN) model for the task of relation classification, and then latent associations in the network were inspected to detect signals. It is relatively easy to capture DMs such as 'then' or a relative pronoun 'which' signaling an ELABORA-TION. The challenge is to figure out what features the network needs to know about beyond just word forms such as meaningful repetitions and variable syntactic constructions. With the human annotated data from the current project, it is hoped that more insights into these aspects can help us engineer meaningful features in order to build a more informative computational model. Methodology Corpus. The main goal of this project is to anchor and compare discourse signals across genres, which makes the Georgetown University Multilayer (GUM) corpus the optimal candidate, in that it consists of eight genres including interviews, news stories, travel guides, how-to guides, academic papers, biographies, fiction, and forum discussions. Each document is annotated with different annotation layers including but not limited to dependency (dep), coreference (ref), and rhetorical structures (rst). For the purpose of this study, the rst layer is used as it includes annotation on discourse relations, and signaling infor- mation will be anchored to it in order to produce a new layer of annotation. However, it is worth noting that other annotation layers are great resources to delve into discourse signals on other levels. Moreover, due to time limitations and the fact that this is the first attempt to apply the taxonomy of signals and the annotation scheme to other genres outside RST-DT's newswire texts, four out of eight genres in the GUM corpus were selected: academic, how-to guides, interviews, and news, which include a collection of 12 documents annotated for discourse relations. 
The rationale for choosing these genres is that, according to Zeldes (2018a)'s neural approach to discourse signal prediction on the GUM corpus, how-to guides and academic articles in the GUM corpus signal most strongly, with interviews and news articles slightly below the average and fiction and reddit texts the least signaled, as shown in Figure 2 (reproduced from Zeldes (2018b, p.19)). It is believed that the selection of these four genres is a good starting point for the topic under discussion. Annotation Tool. One of the reasons for the low inter-annotator agreement (IAA) in the earlier anchoring study is the inefficient and error-prone annotation tools that were used: no designated tools were available for the signal anchoring task at the time. We therefore developed a better tool tailored to the purpose of the annotation task. It is built over an interface offering full RST editing capabilities called rstWeb (Zeldes, 2016) and provides mechanisms for viewing and editing signals (Gessler et al., 2019). Annotation Reliability. In order to evaluate the reliability of the scheme, a revised inter-annotator agreement study was conducted using the same metric and with the new interface on three documents from RST-SC, containing 506 tokens with just over 90 signals. Specifically, agreement is measured based on token spans, that is, for each token, whether the two annotators agree it is signaled or not. The results demonstrate an improvement in Kappa, 0.77 as opposed to the Kappa of 0.52 reported in the earlier study. Taxonomy of Discourse Signals. The most crucial task in signaling annotation is the selection of signal types. The taxonomy of discourse signals used in this project is adapted from that of Das and Taboada (2017), with additional types and subtypes to better suit other genres. Two new types and four new subtypes of the existing types are proposed: the two new types are Visual and Textual, in which the subtype of the former is Image and the subtypes of the latter are Title, Date, and Attribution. The four new subtypes of existing types are Modality under the type Morphological, and Academic article layout, Interview layout, and Instructional text layout under the type Genre. Signal Anchoring Example. Semantic features have several subtypes, with lexical chain being the most common one. Lexical chains are annotated for words with the same lemma or words or phrases that are semantically related. Another characteristic of lexical chains is that words or phrases annotated as lexical chains are open to different syntactic categories. For instance, the following example shows that the relation RESTATEMENT is signaled by a lexical chain item corresponding to the phrase a lot of in the nucleus span and quantity in the satellite span respectively. (4) [They compensate for this by creating the impression that they have a lot of friends -] N [they have a 'quantity, not quality' mentality.] S [whow_arrogant]
Results & Analysis
This pilot study annotated 12 documents with 11,145 tokens across four different genres selected from the GUM corpus. Academic articles, how-to guides, and news are written texts while interview is spoken language. Generally speaking, all 20 relations used in the GUM corpus are signaled and anchored. However, this does not mean that all occurrences of these relations are signaled and anchored. There are several signaled but unanchored relations, as shown in Table 1. In particular, the 5 unsignaled instances of the relation JOINT result from the design of the annotation scheme (see Section 5.1 for details).
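Since the agreement study measures, for each token, whether both annotators mark it as signaled, a token-level Cohen's kappa of the kind reported above can be computed in a few lines. The sketch below is a minimal illustration on made-up annotation vectors, not the actual RST-SC agreement data.

```python
def cohen_kappa(a, b):
    """Token-level Cohen's kappa for two binary annotations (1 = token is signaled)."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a1 = sum(a) / n            # proportion of tokens annotator A marks as signaled
    p_b1 = sum(b) / n            # proportion of tokens annotator B marks as signaled
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Toy example: 12 tokens, two annotators' signaled/unsignaled decisions.
ann_a = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
ann_b = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0]
print(round(cohen_kappa(ann_a, ann_b), 3))
```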
Additionally, the unanchored signal types and subtypes are usually asso-ciated with high-level discourse relations and usually correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme and thus rarely anchored to tokens. With regard to the distribution of the signal types found in these 12 documents, the 16 distinct signal types amounted to 1263 signal instances, as shown in Table 2. There are only 204 instances of DMs out of all 1263 annotated signal instances (16.15%) as opposed to 1059 instances (83.85%) of other signal types. In RST-SC, DM accounts for 13.34% of the annotated signal instances as opposed to 81.36% 2 of other signal types (Das and Taboada, 2017). The last column in Table 2 shows how the distribution of each signal type found in this dataset compares to RST-SC. The reason why the last column does not sum to 100% is that not all the signal types found in RST-SC are present in this study such as the combined signal type Graphical + syntactic. And since Textual and Visual are first proposed in this study, no results can be found in RST-SC, and the category Unsure used in RST-SC is excluded from this project. Table 3 provides the distribution of discourse signals regarding the relations they signal. The first column lists all the relations used in the GUM corpus. The second column shows the number of signal instances associated with each relation. The third and fourth columns list the most signaled and anchored type and subtype respectively. Distribution of Signals across Relations The results show a very strong dichotomy of relations signaled by DMs and semantic-related signals: while DMs are the most frequent signals for five of the relations -CONDITION, CON-CESSION, ANTITHESIS, CAUSE, and CIRCUM-STANCE, the rest of the relations are all most frequently signaled by the type Semantic or Lexical, which, broadly speaking, are all associated with open-class words as opposed to functional words or phrases. Furthermore, the type Lexical and its subtype indicative word seem to be indicative of JUSTIFY and EVALUATION. This makes sense due to the nature of the relations, which requires writers' or speakers' opinions or inclinations for the subject under discussion, which are usually expressed through positive or negative adjectives (e.g. serious, outstanding, disappointed) and other syntactic categories such as nouns/noun phrases (e.g. legacy, excitement, an unending war) and verb phrases (e.g. make sure, stand for). Likewise, words like Tips, Steps, and Warnings are indicative items to address communicative needs, which is specific to a genre, in this case, the how-to guides. It is also worth pointing out that EVALUATION is the only discourse relation that is not signaled by any DMs in this dataset. Even though some relations are frequently signaled by DMs such as CONDITION and ANTITHE-SIS, most of the signals are highly lexicalized and indicative of the relations they indicate. For instance, signal tokens associated with the relation RESTATEMENT tend to be the repetition or paraphrase of the token(s). Likewise, most of the tokens associated with EVALUATION are strong positive or negative expressions. As for SEQUENCE, in addition to the indicative tokens such as First & Second and temporal expressions such as later, an indicative word pair such as stop & update can also suggest sequential relationship. More inter- estingly, world knowledge such as the order of the presidents of the United States (e.g. 
that Bush served as the president of the United States before Obama) is also an indicative signal for SEQUENCE. Another way of seeing these signals is to examine their associated tokens in texts, regardless of the signal types and subtypes. Table 4 lists some representative, generic/ambiguous (in boldface), and coincidental (in italics) tokens that correspond to the relations they signal. Each item is delimited by a comma; the & symbol between tokens in one item means that this signal consists of a word pair in the respective spans. The number in parentheses is the count of that item attested in this project; if no number is indicated, then that token span only occurs once. The selection of these single-occurrence items is random in order to better reflect the relevance in contexts. For instance, lexical items like Professor Eastman in JOINT, NASA in ELABORATION, Bob McDonnell in BACKGROUND, and NATO in RESTATEMENT appear to be coincidental because they are the topics or subjects being discussed in the articles. These results are parallel to the findings in Zeldes (2018a, p.180), which employed a frequency-based approach to show the most distinctive lexemes for some relations in GUM.
[Table 4 excerpt, partially recoverable from the extraction:
Unlabeled row: Additionally (2), also (2), they (2), it (2), Professor Chomsky (2)
PREPARATION: : (6), How to (2), Know (2), Steps (2), Getting (2)
BACKGROUND: Therefore, Indeed, build on, previous, Bob McDonnell, Looking back
CONTRAST: but (9)/But (4), or (2)
Unlabeled row: meaning (2), so that, capturing, thus, putting, the χ2 statistic, make
MOTIVATION: will (2), easier, the pockets, All it takes is, so, last longer
PURPOSE: to (6)/To, in order to (3)/In order to (2), so (2), enable, The aim
CIRCUMSTANCE: when (4)/When (2), On March 13, Whether, As/as, With, in his MIT office, the bigger & the harsher
Interview tokens (Table 5 fragment): Sarvis (14), What (12), Why (11), and (8), Noam Chomsky (8), but (5), Wikinews (4), because (3), interview (2), Well (2), So (2), Which
Caption fragment for Table 6: the number following the vertical line is the corresponding proportional frequency; the label N/A indicates that no such relation is present in the sample from that genre.]
Distribution of Signals across Genres
As can be seen from Table 6, how-to guides involve the most signals (i.e. 407 instances), followed by interviews, academic articles, and news. It is surprising to see that news articles selected from the GUM corpus are not as frequently signaled as they are in RST-SC, which could be attributed to two reasons. Firstly, the source data is different. The news articles from GUM are from Wikinews while the documents from RST-SC are Wall Street Journal articles. Secondly, RST-DT has finer-grained relations (i.e. 78 relations as opposed to the 20 relations used in GUM) and segmentation guidelines, thereby having more chances for signaled relations. Moreover, it is clear that JOINT and ELABORATION are the most frequently signaled relations in all four genres across the board, followed by PREPARATION in how-to guides and interviews or BACKGROUND in academic articles and news, which is expected as these four relations all show high-level representations of discourse that involve more text with more potential signals. Table 5 lists some signal tokens that are indicative of genre (in boldface) as well as generic and coincidental ones (in italics). The selection of these items follows the same criteria used in Section 4.1. Even though the DMs 'and' and 'but' are present in all four genres, no associations can be established between these DMs and the genres they appear in.
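The per-genre figures summarized above (and in Table 6) amount to grouping anchored signal instances by genre and relation and normalizing within each genre. The snippet below sketches that tally on invented counts; it does not read the project's actual annotation files.

```python
from collections import Counter

# Invented (genre, relation) pairs, one per anchored signal instance.
instances = [
    ("how-to", "JOINT"), ("how-to", "JOINT"), ("how-to", "PREPARATION"),
    ("news", "ELABORATION"), ("news", "BACKGROUND"), ("news", "JOINT"),
    ("interview", "SOLUTIONHOOD"), ("interview", "JOINT"),
]

counts = Counter(instances)
genre_totals = Counter(genre for genre, _ in instances)

# Proportional frequency of each relation's signals within its genre.
for (genre, relation), n in sorted(counts.items()):
    print(f"{genre:10s} {relation:14s} {n:2d} | {n / genre_totals[genre]:.2f}")
```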
Moreover, as can be seen from Table 5, graphical features such as semicolons, colons, dashes, and parentheses play an important role in relation signaling. Although these punctuation marks do not seem to be indicative of any genres, academic articles tend to use them more as opposed to other genres. Although some words or phrases are highly frequent, such as discrimination in academic articles, arrogant people in howto guides, IE6 in news, and Sarvis in interviews, they just seem to be coincidental as they happen to be the subjects or topics being discussed in the articles. Academic writing is typically formal, making the annotation more straightforward. The results from this dataset suggest that academic articles contain signals with diverse categories. As shown in Table 5, in addition to the typical DMs and some graphical features mentioned above, there are several lexical items that are very strong signals indicating the genre. For instance, the verb hypothesized and its synonym posited are indicative in that researchers and scholars tend to use them in their research papers to present their hypotheses. The phrase based on is frequently used to elaborate on the subject matter. Furthermore, Table 5 also demonstrates that academic articles tend to use ordinal numbers such as First and Second to structure the text. Last but not least, the word Albeit indicating the relation CONCESSION seems to be an indicative signal of academic writing due to the register it is associated with. How-to Guides are the most signaled genre in this dataset. This is due to the fact that instructional texts are highly organized, and the cue phrases are usually obvious to identify. As shown in Table 5, there are several indicative signal tokens such as the wh-word How, an essential element in instructional texts. Words like Steps, Tips, and Warnings are strongly associated with the genre due to its communicative needs. Another distinct feature of how-to guides is the use of imperative clauses, which correspond to verbs whose first letter is capitalized (e.g. Know, Empty, Fasten, Wash), as instructional texts are about giving instructions on accomplishing certain tasks and imperative clauses are good at conveying such information in a straightforward way. News articles, like academic writing, are typically organized and structured. As briefly mentioned at the beginning of this section, news articles selected in this project are not as highly signaled as the news articles in RST-SC. In addition to the use of different source data, another reason is that RST-DT employs a finer-grained relation inventory and segmentation guidelines; as a result, certain information is lost. For instance, the relation ATTRIBUTION is signaled 3,061 times out of 3070 occurrences (99.71%) in RST-SC, corresponding to the type syntactic and its subtype reported speech, which does not occur in this dataset. However, we do have some indicative signal tokens such as market and the major source. Interviews are the most difficult genre to annotate in this project for two main reasons. Firstly, it is (partly) spoken language; as a result, they are not as organized as news or academic articles and harder to follow. Secondly, the layout of an interview is fundamentally different from the previous three written genres. For instance, the relation SOLUTIONHOOD seems specific to interviews, and most of the signal instances remain unanchored (i.e. 
11 instances), which is likely due to the fact that the question mark is ignored in the current annotation scheme. As can be seen from Table 5, there are many wh-words such as What and Why. These can be used towards identifying interviews in that they formulate the questionanswer scheme. Moreover, interviewers and interviewees are also important constituents of an interview, which explains the high frequencies of the two interviewees Sarvis and Noam Chomsky and the interviewer Wikinews. Another unique feature shown by the signals in this dataset is the use of spoken expressions such as Well and So when talking, which rarely appear in written texts. Annotation Scheme For syntactic signals, one of the questions worth exploring is which of these are actually attributable to sequences of tokens, and which are not. For example, sequences of auxiliaries or constructions like imperative clauses might be identifiable, but more implicit and variable syntactic constructions are not such as ellipsis. In addition, one of the objectives of the current project is to provide human annotated data in order to see how the results produced by machine learning techniques compare to humans' judgments. In particular, we are interested in whether or not contemporary neural models have a chance to identify the constructions that humans use to recognize discourse relations in text based on individual sequences of word embeddings, a language modeling technique that converts words into vectors of real numbers that are used as the input representation to a neural network model based on the idea that words that appear in similar environments should be represented as close in vector space. Another dilemma that generally came up during the discussion about signal anchoring was whether or not to mark the first constituent of a multinuclear relation. In Figure 3, four juxtaposed segments are linked together by the JOINT relation, with associated signal tokens being highlighted. The first instance of JOINT is left unsignaled/unmarked while the other instances of JOINT are signaled. The rationale is that when presented with a parallelism, the reader only notices it from the second instance. As a result, signals are first looked for between the first two spans, and then between the second and the third. If there is no signal between the second and the third spans, then try to find signals in the first and the third spans. Because this is a multinuclear relation, transitivity does exist between spans. Moreover, the current approach is also supported by the fact that a multinuclear relation is often found in the structure like X, Y and Z, in which the discourse marker and is between the last two spans, and thus this and is only annotated for the relation between the last two spans but not between the first two spans. However, the problem with this approach is that the original source for the parallelism cannot be located. Distribution of Discourse Signals So far we have examined the distributions of signals across relations (Section 4.1) and genres (Section 4.2) respectively. Generally speaking, DMs are not only ambiguous but also inadequate as dis-course signals; most signal tokens are open-class lexical items. More specifically, both perspectives have revealed the fact that some signals are highly indicative while others are generic or ambiguous. 
Thus, in order to obtain more valid discourse signals and parse discourse relations effectively, we need to develop models that take signals' surrounding contexts into account to disambiguate these signals. Based on the results found in this dataset regarding the indicative signals, they can be broadly categorized into three groups: register-related, communicative-need related, and semantics-related. The first two are used to address genre specifications whereas the last one is used to address relation classification. Words like Albeit are more likely to appear in academic papers than other genres due to the register they are associated with; words like Steps, Tips, and Warnings are more likely to appear in instructional texts due to the communication effect they intend to achieve. Semantics-related signals play a crucial role in classifying relations as the semantic associations between tokens are less ambiguous cues, thereby compensating for the inadequacy of DMs.
Validity of Discourse Signals
It is also worth pointing out that some tokens are frequent signals in several relations, which makes their use very ambiguous. For instance, the coordinating conjunction and appears in JOINT, RESTATEMENT, SEQUENCE, and RESULT in this dataset. Similarly, the subordinating conjunctions since and because serve as signals of JUSTIFY, CAUSE, and EVIDENCE in these 12 documents. These ambiguities would pose difficulties for the validity of discourse signals. As pointed out by Zeldes (2018a), a word like and is extremely ambiguous overall, since it appears very frequently in general, and is attested in all discourse functions. However, it is noted that some 'and's are more useful as signals than others: adnominal 'and' (example (5)) is usually less interesting than intersentential 'and' (example (6)) and sentence-initial 'and' (example (7)). (5) The owners, [William and Margie Hammack], are luckier than any others. -ELABORATION-ADDITIONAL Hence, it would be beneficial to develop computational models that score and rank signal words not just based on how proportionally often they occur with a relation, but also on how (un)ambiguous they are in contexts. In other words, if there are clues in the environment that can tell us to safely exclude some occurrences of a word, then those instances should not be taken into consideration in measuring its 'signalyness'.
Conclusion
The current study anchors discourse signals across several genres by adapting the hierarchical taxonomy of signals used in RST-SC. In this study, 12 documents with 11,145 tokens across four different genres selected from the GUM corpus are annotated for discourse signals. The taxonomy of signals used in this project is based on the one in RST-SC, with additional types and subtypes proposed to better represent different genres. The results have shown that different relations and genres have their indicative signals in addition to generic ones, and the indicative signals can be characterized into three categories: register-related, communicative-need related, and semantics-related. The current study is limited to the rst annotation layer in GUM; it is worth investigating the linguistic representation of these signals through other layers of annotation in GUM such as coreference and bridging, which could be very useful resources for constructing theoretical models of discourse.
In addition, the current project provides a qualitative analysis on the validity of discourse signals by looking at the annotated signal tokens across relations and genres respectively, which provides insights into the disambiguation of generic signals and paves the way for designing a more informative mechanism to quantitatively measure the validity of discourse signals.
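As a concrete illustration of the 'signalyness' ranking suggested in the discussion of signal validity above, one option is to weight a token's relative frequency with a relation by how concentrated its distribution over relations is, for example one minus its normalized entropy. The sketch below applies this idea to toy counts; it is not the measure used by Zeldes (2018a) or in this study, and the counts are invented.

```python
import math

# Toy counts: how often a token is annotated as a signal of each relation.
token_relation_counts = {
    "and":    {"JOINT": 8, "SEQUENCE": 3, "RESULT": 2, "RESTATEMENT": 1},
    "albeit": {"CONCESSION": 3},
    "steps":  {"PREPARATION": 4, "SEQUENCE": 1},
}

def signalyness(token, relation, counts):
    """Relative frequency of (token, relation) damped by the token's ambiguity."""
    dist = counts[token]
    total = sum(dist.values())
    rel_freq = dist.get(relation, 0) / total
    if len(dist) == 1:
        ambiguity = 0.0  # token occurs with a single relation: unambiguous
    else:
        probs = [c / total for c in dist.values()]
        entropy = -sum(p * math.log(p, 2) for p in probs)
        ambiguity = entropy / math.log(len(dist), 2)  # normalized to [0, 1]
    return rel_freq * (1.0 - ambiguity)

for tok, rels in token_relation_counts.items():
    best = max(rels, key=rels.get)
    print(f"{tok:8s} {best:12s} score={signalyness(tok, best, token_relation_counts):.3f}")
```

On these toy counts, the unambiguous 'albeit' scores highest for CONCESSION while the very frequent but widely distributed 'and' is heavily penalized, which is the intended behavior of such a measure.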
Predictive Performance of a Fall Risk Assessment Tool for Community-Dwelling Older People (FRAT-up) in 4 European Cohorts Background and objective: The fall risk assessment tool (FRAT-up) is a tool for predicting falls in community-dwelling older people based on a meta-analysis of fall risk factors. Based on the fall risk factor profile, this tool calculates the individual risk of falling over the next year. The objective of this study is to evaluate the performance of FRAT-up in predicting future falls in multiple cohorts. Methods: Information about fall risk factors in 4 European cohorts of older people [Activity and Function in the Elderly (ActiFE), Germany; English Longitudinal Study of Aging (ELSA), England; Invecchiare nel Chianti (InCHIANTI), Italy; Irish Longitudinal Study on Aging (TILDA), Ireland] was used to calculate the FRAT-up risk score in individual participants. Information about falls that occurred after the assessment of the risk factors was collected from subsequent longitudinal follow-ups. We compared the performance of FRAT-up against those of other prediction models specifically fitted in each cohort by calculation of the area under the receiver operating characteristic curve (AUC). Results: The AUC attained by FRAT-up is 0.562 [95% confidence interval (CI) 0.530–0.594] for ActiFE, 0.699 (95% CI 0.680–0.718) for ELSA, 0.636 (95% CI 0.594–0.681) for InCHIANTI, and 0.685 (95% CI 0.660–0.709) for TILDA. Mean FRAT-up AUC as estimated from meta-analysis is 0.646 (95% CI 0.584–0.708), with substantial heterogeneity between studies. In each cohort, FRAT-up discriminant ability is surpassed, at most, by the cohort-specific risk model fitted on that same cohort. Conclusions: We conclude that FRAT-up is a valid approach to estimate risk of falls in populations of community-dwelling older people. However, further studies should be performed to better understand the reasons for the observed heterogeneity across studies and to refine a tool that performs homogeneously with higher accuracy measures across different populations. All the parameters of the core block of FRAT-up were derived from the literature. In particular, the parameters that determine the contribution of each risk factor to the overall risk of falling were determined from the odds ratios obtained in the systematic review and meta-analysis by Deandrea et al. 24 Until now, FRAT-up has been evaluated only in the Invecchiare nel Chianti (InCHIANTI) cohort. 22 However, because the meta-analysis by Deandrea collated results from numerous epidemiologic studies, with risk factors assessed by different risk estimators, we hypothesize that FRAT-up is a suitable screening tool for different populations and can be adapted to different methods for risk factors assessment (ie, different risk estimators). With the present study, we aim to further validate FRAT-up and verify this hypothesis, evaluating its predictive performance on 4 datasets from relevant European epidemiologic studies including community-dwelling older adults. The performance of a predictive model depends on the model itself but also on the cohort on which it is tested. To gain better insight on the robustness of FRAT-up performance across different datasets, we also aim to compare the predictive performance of FRAT-up with data-driven prediction models, each specifically fitted on 1 of the 4 cohorts. 
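As a rough illustration of how a literature-based tool of this kind can turn a risk factor profile into a 1-year fall probability, the sketch below combines odds ratios on the log-odds scale on top of a baseline risk. The odds ratios and baseline used here are placeholders invented for the example, not the published FRAT-up parameters, and the real tool additionally handles missing risk factors via prevalence proportions, which this sketch omits.

```python
import math

# Hypothetical odds ratios per binary risk factor (NOT the published FRAT-up values).
ODDS_RATIOS = {
    "previous_falls": 2.8,
    "fear_of_falling": 1.6,
    "use_of_walking_aid": 1.8,
    "dizziness": 1.5,
}
BASELINE_RISK = 0.20  # assumed 1-year fall risk with no risk factors present

def fall_risk(profile):
    """Combine binary risk factors multiplicatively on the odds scale."""
    log_odds = math.log(BASELINE_RISK / (1 - BASELINE_RISK))
    for factor, present in profile.items():
        if present:
            log_odds += math.log(ODDS_RATIOS[factor])
    return 1 / (1 + math.exp(-log_odds))

profile = {"previous_falls": True, "fear_of_falling": True,
           "use_of_walking_aid": False, "dizziness": False}
print(f"predicted 1-year fall risk: {fall_risk(profile):.2f}")
```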
Methods The FRAT-up validation process is described in this article in compliance with the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis checklist for transparent reporting. 25,26 To achieve the objectives listed above, we used 4 datasets from cohort studies conducted in different European countries (Germany, England, Italy, and Ireland). The 4 datasets were initially harmonized to obtain estimates of risk exposure on a standard list of risk factors. The FRAT-up risk score was calculated and 4 cohort-specific prediction models were developed for comparison. All analyses were run with R version 3.0.2 (R Core Team, Vienna, Austria). 27 Included Study Populations The Activity and Function in the Elderly (ActiFE) in Ulm study is a population-based observational study on a cohort of community-dwelling older adults. Its principal aim is to investigate the relation-ship of physical activity, measured with body-worn accelerometers, with a number of health outcomes. The study design has been pre-viously described in detail. 28,29 Briefly, inclusion criteria were living in the area of greater Ulm or Neu-Ulm, located in the South of Germany; being 65 to 90 year old; not being institutionalized; being able to walk independently through their own room; not having serious difficulties in German language, and no severe deficits in cognition. Older age strata were oversampled to recruit an equal number of persons for each age group. At baseline (2009-2010), 1506 participants were assessed on a number of health parameters, including the fall risk estimators used in the present study. Successively, they were prospectively followed for 12 months to monitor the occurrence of falls using fall calendars as recommended by the Profane consortium. 30 We excluded 90 people (6%) on whom follow-up information about falls was missing. The English Longitudinal Study of Aging (ELSA) is a panel study of a cohort that is representative of the population of noninstitutionalized men and women aged 50 years or older living in England. Its broad scope is to study aging in England in its health, economic, and social aspects. 31 In 2004-2005 (wave 2), the participants underwent a home interview and a nurse visit, which included the fall risk estimators used in the present study. 32 About 2 years later (wave 3), they were asked about falls experienced since the last interview. 33 Four thousand fifty-six participants aged 65 years or older concluded the interview and the nurse visit. Of those, we excluded 753 (19%) participants that at wave 3 were not reinterviewed or did not answer questions about experienced falls. The InCHIANTI study is an observational cohort study on older adults living in the Chianti region, Italy. Its principal aim is to investigate the factors contributing to the decline of mobility in older persons and to establish clinical variables and thresholds to evaluate mobility in geriatric practice. 34 The invited persons were sampled from the municipality registries of Greve in Chianti and Bagno a Ripoli. Those aged 90 years or older were oversampled. At baseline (1999)(2000), 1155 participants aged 65 years or older were assessed on a number of health parameters, including the fall risk estimators considered in the present study. After 3 years, they were re-interviewed and asked about falls experienced during the previous 12 months. 
We excluded 263 (23%) participants who at the first followup were not re-interviewed or did not answer questions about previous falls. The Irish Longitudinal Study on Aging (TILDA) is a cohort study representative of noninstitutionalized men and women aged 50 years or older living in Ireland. It aims to study aging in Ireland in its health, economic, and social aspects. [35][36][37] The fieldwork relative to the baseline was carried out between October 2009 and February 2011. At baseline, the participants were asked about falls experienced during the last year and were assessed on a number of fall risk estimators. The first follow-up was carried out after about 2 years (from April 2012 to January 2013). At the follow-up, the participants were asked about falls experienced since the baseline interview. Two thousand three hundred seventy-two participants aged 65 years or older concluded the interview and the health assessment. We excluded 271 (11%) participants who at the first follow-up were not re-interviewed or did not answer questions about experienced falls. TILDA and ELSA are considered sister surveys, as both were designed similarly according to the United States Health and Retirement Study. 38 Each of these 4 studies has received ethical approval by local competent ethics committees. Variable Harmonization We had to develop 4 harmonization blocks because the 4 cohort studies are different in the way they were designed and carried out. The process of deriving common variables from different existing datasets is often called "retrospective harmonization." It allows the utilization of data coming from different sources within 1 combined analysis. 39 We call "target variables" the variables that are desired as a result of the harmonization process. We distinguish between "predictor target variables" and an "outcome target variable." Predictor target variables are all the fall risk factors obtained as output of FRAT-up harmonization block and taken as input by the FRAT-up core block. The outcome target variable is the object of prediction (ie, occurrence of any fall during 1 year after the assessment, hereinafter "subsequent falls"). We call "source variables" the variables that are native of each dataset and that are used to construct the target variables. Predictor source variables are the risk estimators received in input by the FRAT-up harmonization block. For each dataset, harmonization rules were developed and applied whenever possible to construct the target variables from the source variables. This process was fully blinded, meaning that the effect of the different choices of the harmonization process on the performance of any predictive model was not evaluated. It was considered impossible to construct 5 and 3 risk factors in the ELSA and TILDA datasets, respectively. The outcome variable was harmonized imperfectly in all the datasets except ActiFE. In the InCHIANTI dataset, the corresponding source variable is relative to a time span that comes 2 years after the assessment, whereas in the ELSA and TILDA datasets, the corresponding source variables are relative to a time span that covers 2 years instead of 1. A more detailed description of the source and target variables and of the harmonization process is provided in an Appendix that is available upon request from the corresponding author. 
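Retrospective harmonization of this kind can be expressed as a small set of per-cohort rules that map source variables onto the shared target variables. The sketch below is schematic, with invented variable names and cut-offs; the actual harmonization rules are described in the authors' Appendix and differ for each cohort.

```python
# Invented source records from two cohorts with differently coded variables.
actife_record = {"pain_last_month": "often", "walking_aid_code": 1}
elsa_record = {"troubled_by_pain": "yes", "uses_stick_or_frame": False}

# Per-cohort harmonization rules producing the same binary target variables.
HARMONIZATION_RULES = {
    "ActiFE": {
        "pain": lambda r: r["pain_last_month"] in ("often", "always"),
        "walking_aid": lambda r: r["walking_aid_code"] == 1,
    },
    "ELSA": {
        "pain": lambda r: r["troubled_by_pain"] == "yes",
        "walking_aid": lambda r: r["uses_stick_or_frame"],
    },
}

def harmonize(cohort, record):
    """Apply a cohort's rules to obtain the harmonized target variables."""
    return {target: rule(record) for target, rule in HARMONIZATION_RULES[cohort].items()}

print(harmonize("ActiFE", actife_record))
print(harmonize("ELSA", elsa_record))
```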
Statistical Analysis Use of sample weights-In health surveys, it is often the case that the study sample, which is available for the analyses, is not fully representative for the target population. This happens because some population strata are purposely oversampled or because there can be differential response and drop-out rates. As a consequence, it may happen that the distribution of some quantities of interest in the sample population differs substantially from the distribution in the target population. Sample weights are, thus, used to make sample estimates closer to their respective target population quantities. 40 The ELSA and TILDA datasets are released with a set of sample weights. Among those, for ELSA, we have considered the weights assigned to the participants who underwent the nurse visit. For TILDA, we have considered the weights assigned to the participants who completed the health assessment, either at home or at the health center. The weights for the samples of the ActiFE and InCHIANTI datasets were calculated after stratifying by age group and sex (for the InCHIANTI we also stratified by site, Greve in Chianti or Bagno a Ripoli 34 ). More in particular, each participant in stratum h was assigned a weight N h /n h , with N h (n h ) being the total number of participants in stratum h in the target population (in the available sample, respectively). Data imputation Missing data are less of an issue for FRAT-up because of the ability of the tool to handle missing information through use of prevalence proportions. 22 Conversely, missing data imputation is a necessary preprocessing step before computation of the data-driven models. Missing data have been imputed in 11 copies with multivariate imputations by chained equations. 41 Percentage of missing values, when different from zero, is indicated in square brackets in Table 1. Totally missing variables in a dataset (eg, number of medications in the ELSA dataset) were replaced with the prevalence rates used in FRAT-up (Table 1). Descriptive statistics Descriptive statistics of the 4 cohorts were calculated for the harmonized variables using sample weights. Univariate associations between single risk factors and subsequent falls were quantified with odds ratios (ORs) and corresponding 95% confidence intervals (CIs). Development of cohort-specific risk models FRAT-up was applied on the 4 harmonized datasets. Its performance on the datasets was then compared with the performances of data-driven, cohort-specific risk models, estimated by 10-fold cross-validation. In particular, each harmonized dataset was once used as a training set and, to this aim, randomly divided in 10 folds, balanced with respect to number of fallers. In turn, one of the imputed copies of 9 folds was used to fit a stepwise logistic regression with Akaike information criterion as model selection metrics. 42 All FRAT-up risk factors were included as candidate regressors, together with their 2-way interactions. This regression model was then used to calculate the risk score on the test fold of the same dataset. This procedure was repeated 10 times, to calculate risk scores on all the samples of the dataset. One randomly chosen model among these 10 was used to obtain risk scores also on the other 3 harmonized datasets used as testing sets. To calculate risk scores, each regression model was applied on each imputed copy of the samples, obtaining 11 risk scores for each participant. 
These 11 scores were then averaged to obtain a unique risk score for each participant.
Model evaluation
The area under the receiver operating characteristic curve (AUC) was chosen to evaluate FRAT-up and the other cohort-specific risk models because it is the most common statistic for evaluating the discriminative ability of prediction models. Mean and 95% CIs for model AUCs were derived by means of bootstrapping. 43,44 Observations were sampled with replacement with probability proportional to their sample weights. FRAT-up was also graphically evaluated for calibration. To draw the calibration plot, the FRAT-up 1-year risk of falling (p1) was adjusted to a 2-year risk of falling (p2) for the ELSA and TILDA datasets according to the formula p2 = p1(2 - p1), which is equivalent to p2 = 1 - (1 - p1)^2, the probability of at least 1 fall over 2 years assuming independent years. The method is further explained in the Appendix, available upon request from the corresponding author. The values of the FRAT-up AUCs attained on the 4 populations were pooled with random effects meta-analysis using the R package "meta." 45 In particular, mean AUC was estimated with an inverse variance weighted average. Between-study heterogeneity was quantified with the Higgins-Thompson I^2, 46 and between-study variance with the DerSimonian-Laird estimate.
Results
Table 1 describes the 4 cohorts with respect to the main sociodemographic and medical characteristics as obtained after the harmonization process. Most characteristics showed a large variation and difference among the 4 cohorts, except sex, history of diabetes, and use of antiepileptics. In each cohort, the FRAT-up discriminant ability is surpassed, at most, by the cohort-specific risk model fitted on that same cohort. On the InCHIANTI cohort, FRAT-up has higher discriminative accuracy than the InCHIANTI-specific risk model. The mean FRAT-up AUC estimated by pooling results obtained on the 4 cohorts with random effects meta-analysis is 0.646 (95% CI 0.584-0.708). The between-cohort variance is 0.0038 and the Higgins-Thompson I^2 measure of heterogeneity is 95.1% (95% CI 90.3%-97.5%). Cochran's Q-test for heterogeneity is highly significant (P < .0001), indicating substantial heterogeneity among the included studies (Figure 1). Figure 2 shows the calibration curves of FRAT-up for the 4 datasets. Participants of ActiFE with low (high) risk scores experienced more (respectively, fewer) falls than expected. This pattern, sometimes referred to as low resolution, 48 is also present in the participants of InCHIANTI who were assigned to the lowest or highest risk score deciles. In ELSA and TILDA, FRAT-up overestimated the risk consistently across the risk strata.
Discussion
In this comparative study, we investigated the performance of FRAT-up as a prediction tool for falls in 4 cohorts of European community-dwelling older adults, and we compared its discriminative ability with those of cohort-specific, data-driven risk models. Overall, FRAT-up seems suitable to be applied across different cohorts, thereby being a valid approach to estimate risk of falls in populations of community-dwelling older adults, although the performance varied among the different cohorts. The FRAT-up mean AUC for any fall was estimated to be 0.646 by meta-analysis of the AUCs obtained from the 4 cohorts. Compared with prediction tools for other health outcomes, such as prediction tools for cardiovascular health, 1 this value per se cannot be considered high.
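Two of the computations described in the Methods above are easy to make explicit: the 1-year risk p1 is converted to a 2-year risk via p2 = p1(2 - p1), i.e. the chance of at least one fall in two independent years, and the AUC confidence interval comes from resampling participants with probability proportional to their sample weights. The sketch below illustrates both steps on random toy data; it is not the authors' R code, and scikit-learn's roc_auc_score is used only as a convenient AUC implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def two_year_risk(p1):
    """Convert a 1-year fall risk to a 2-year risk: 1 - (1 - p1)**2 = p1 * (2 - p1)."""
    return p1 * (2 - p1)

def weighted_bootstrap_auc(y, scores, weights, n_boot=1000, seed=0):
    """Bootstrap the AUC, resampling with probability proportional to sample weights."""
    rng = np.random.default_rng(seed)
    p = weights / weights.sum()
    aucs = []
    for _ in range(n_boot):
        idx = rng.choice(len(y), size=len(y), replace=True, p=p)
        if len(set(y[idx])) < 2:   # need both fallers and non-fallers in the resample
            continue
        aucs.append(roc_auc_score(y[idx], scores[idx]))
    return np.mean(aucs), np.percentile(aucs, [2.5, 97.5])

# Toy data: 300 participants, a noisy risk score, and made-up sample weights.
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.3, 300)
scores = 0.3 * y + rng.normal(0, 0.5, 300)
weights = rng.uniform(0.5, 2.0, 300)
mean_auc, ci = weighted_bootstrap_auc(y, scores, weights)
print(f"AUC {mean_auc:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
print(f"two-year risk for p1 = 0.31: {two_year_risk(0.31):.3f}")
```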
However, previous research has already shown that the FRAT-up discriminative ability is superior to other screening tools, 49 such as gait speed and the Short Physical Performance Battery (SPPB). 50 Also, the Timed Up and Go test has been shown to have discriminative ability for falls similar to gait speed. 51 Its ability to predict falls 6,15 has been quantified with an AUC ranging from 0.61 52 to 0.71 (value obtained when discriminating recurrent fallers). 53 The AUC of the Tinetti Balance test 54 has been reported to be around 0.56 52 and 0.62. 55 Thus, the risk models for falls that have been proposed and validated so far, have left a conspicuous part of the phenomenon unexplained. Nevertheless, these considerations and the results of our study suggest that FRAT-up is a suitable screening tool to use in populations of community-dwelling older people. In all cohorts, FRAT-up risk score was predictive for future falls. Without considering the results obtained when fitting and testing a model on the same population (also known as internal validation 56 ), we note that for any given test cohort, the FRAT-up discriminative ability was comparable to or even greater than the other cohort-specific risk models. Furthermore, ELSA was the test cohort on which the models attained the highest and ActiFE the one with the lowest AUCs, respectively. Besides differences among the studies in terms of risk factor prevalence rates and ORs, the I 2 statistics indicated substantial heterogeneity among the 4 included studies. It is not possible to unequivocally determine to which degree this heterogeneity is attributable to true population dissimilarities (eg, differences in the distribution of the SPPB score) or to differences in the study protocols and data collection procedures (eg, methods of recording fall occurrences). This limitation is partly due to the lack of consistent data across the studies (eg, SPPB is not available in TILDA), and to the small number of datasets included (ie, 4), therefore, not allowing to conduct a meta-regression, which might have shed further light on potential reasons. Nevertheless, first a high heterogeneity in terms of risk factors ORs was already found in the meta-analysis on which FRAT-up was built. 24 Second, some variables and the resulting heterogeneity might be the result of a sometimes imperfect harmonization process. For example, estimating exposure to the risk factor "pain" requires having a consistent and specific definition of it. However, in the actual implementation of the harmonization process, we had to deal with questionnaires being different across the 4 datasets in terms of frequency (eg, assessment of frequent or occasional pain), reference time period (eg, 12 months or 2 years before the assessment), or differences in location (eg, pain in any body location or in specific areas). Therefore, some limitations are intrinsic in the 4 different datasets; others might have been mitigated by an expert consensus process. Other considerations to explain heterogeneity in results regard the outcome target variable (ie, occurrence of at least 1 fall in the 12 months after the assessment). First, from theoretical analyses, we expect that longer follow-ups lead to higher AUCs. 57 This may explain why, excluding results from internal validation, AUC is consistently higher on ELSA and TILDA, where participants report about falls experienced during a time period of 2 years, which is twice longer than in ActiFE and InCHIANTI. 
Second, the differences in the approaches used to assess fall incidence could have played a role. In particular, use of prospective falls calendars (as employed in ActiFE) is expected to be more precise, 30 whereas retrospective questionnaire assessment (as used in ELSA, InCHIANTI, and TILDA) might register only more severe fall events, that are supposedly more easily predictable from information about exposure to standard risk factors. Finally, the differences in fall incidence among the study populations provide another potential explanation for the different behavior of FRAT-up in calibration. In particular, FRAT-up was developed assuming an average 1-year prevalence of 31% for at least 1 fall. 22 This value is similar to the prevalence of 35% found in ActiFE, where FRAT-up is substantially calibrated, whereas is much higher than the prevalence of at least 1 fall of 23% and 21% found in ELSA and InCHIANTI, respectively, where FRAT-up overestimates the risk. Discrepancies of fall incidence figures among populations is indeed a debated issue in the literature. 58,59 Externally validating a prediction model means to evaluate the performance of the model on data that were not used for its development. It is of fundamental utility as it allows evaluating the generalizability of the model outside the derivation cohort. In addition, it allows estimating its predictive ability excluding some sources of bias that may intervene in other types of validation procedures. 60,61 External validation is rarely performed, partly because it is time-consuming and costly. Also, in the domain of falls, only few prediction models for community-dwelling older adults have been externally validated, and they have shown modest predictive accuracy. 6,16 By performing a harmonization process, that is connatural with the FRAT-up 2-block architecture, we have been able to apply and evaluate this tool on 4 datasets relative to 4 cohort studies of European older people. The issues discussed above related to the harmonization process can be thought as the price to pay for avoiding a long and expensive data collection campaign. However, if FRAT-up is conceived to be applied on multiple data sources after the construction of specific harmonization blocks, our approach to validate it reflects its intended way of using it. Conclusions Despite extensive research, falls are still difficult to predict because of the multiplicity of risk factors involved. Applying FRAT-up on different cohorts where risk factors were assessed according to different procedures and policies resulted in a risk score that was significantly predictive for falls, although with very heterogeneous discrimination ability. Overall, FRAT-up seems more suitable to be transferred across different cohorts than datadriven fall-risk models stemming from individual cohorts, thereby being a valid option to use on populations of community-dwelling older people if no specifically validated, population-specific fall risk tools already exist for the respective population. Nevertheless, further studies should be performed to better understand the reasons for the observed heterogeneity and to refine a tool that performs homogeneously with higher accuracy measures across different populations. by the Office for National Statistics. The developers and funders of ELSA and the Archive do not bear any responsibility for the analyses or interpretations presented here. 
The InCHIANTI study is currently supported by a grant from the National Institute on Aging (National Institutes of Health, Bethesda, MD).
[Figure caption: Calibration plot for the FRAT-up fall risk score on the 4 datasets. The calibration curves for ELSA and TILDA are relative to falls that occurred during a time span of 2 years. Error bars indicate 95% CIs.]
[Table 3 caption: Comparison Among Models Applied on the 4 Cohorts. The discriminative ability is quantified with AUC (95% CI). The results from internal validation (fitting and testing on the same cohort) are in italics.]
Anomalous magnetoresistance of EuB$_{5.99}$C$_{0.01}$: Enhancement of magnetoresistance in systems with magnetic polarons
We present results of measurements of the electrical, magnetic and thermal properties of EuB5.99C0.01. The observed anomalously large negative magnetoresistance both above and below the Curie temperature of ferromagnetic ordering T_C is attributed to fluctuations in carbon concentration. Below T_C the carbon-richer regions give rise to helimagnetic domains, which are responsible for an additional scattering term in the resistivity, which can be suppressed by a magnetic field. Above T_C these regions prevent the process of percolation of magnetic polarons (MPs), acting as "spacers" between MPs. We propose that such "spacers", being in fact volumes incompatible with the existence of MPs, may be responsible for the decrease of the percolation temperature and for the additional (magneto)resistivity increase in systems with MPs.
EuB6 is a rare example of a low carrier density hexaboride that orders ferromagnetically at low temperatures and undergoes a metal-insulator phase transition. The ferromagnetic order is established via two consecutive phase transitions 1,2,3 at T_M = 15.5 K and T_C = 12.6 K, respectively. 3 The high-temperature magnetoresistance is large and negative; the absolute value increases with decreasing temperature and reaches a maximum value of about 100% at the magnetic ordering temperature (15.6 K). 4 In the ferromagnetic regime, the magnetoresistance is positive and reaches a value of up to 700% at 7 T and 1.7 K. 4 Physical properties of EuB6 are thought to be governed by magnetic polarons (MPs), which are in fact carriers localized in ferromagnetic clusters embedded in a paramagnetic matrix. 3,5,6,7 As suggested by Süllow et al., 3 the magnetic phase transition at T_M represents the emergence of the spontaneous magnetization accompanied by metallization. At this temperature the MPs begin to overlap and form a conducting, ferromagnetically ordered phase that acts as a percolating, low-resistance path across the otherwise poorly conducting sample. 3 With decreasing temperature, the volume fraction of the conducting ferromagnetic phase expands, until the sample becomes a homogeneous conducting bulk ferromagnet at T_C. 3 As indicated by Raman scattering measurements, 5 the polarons appear in EuB6 at about 30 K. Because of the very low number of intrinsic charge carriers (∼10^20 cm^-3), 8 even a slight change of the concentration of conduction electrons (e.g. due to a change of chemical composition or number of impurities) can drastically modify the electrical and magnetic properties of EuB6. 9,10
Substitution of B by C enhances the charge carrier concentration in EuB6. As shown by neutron diffraction studies, 11 the predominant ferromagnetic ordering in the stoichiometric EuB6 changes with increasing carbon content through a mixture of the ferromagnetic phase and helimagnetic domains into a purely antiferromagnetic state. The paramagnetic Curie temperature θ_p of EuB6−xCx changes its sign for x = 0.125. 9 The helimagnetic domains are associated with carbon-richer regions (with higher carrier density) due to local fluctuations of the carbon concentration. The different impact of the RKKY interaction in carbon-richer and carbon-poorer regions leads to different types of magnetic order. 11 The unusual transport properties of a carbon-doped EuB6 single crystal were reported more than a decade ago. 12 The results have shown that the electrical resistivity becomes strongly enhanced below 15 K and exhibits a maximum around 5 K. The residual resistivity is exceptionally high; it is even higher than the room-temperature resistivity ρ(300 K). Application of a magnetic field of 3 T at 4.2 K causes a dramatic reduction of the resistivity, yielding ρ(0 T)/ρ(3 T) = 3.7. The huge residual resistivity has been ascribed to the scattering of conduction electrons on boundaries between the ferromagnetic and helimagnetic regions. 12 In this paper we present an extended study of the electrical resistivity, magnetoresistance, susceptibility and heat capacity of a EuB5.99C0.01 single crystal. We bring further experimental results supporting the aforementioned hypothesis that the dominant scattering process at temperatures below T_C originates from the mixed magnetic structure. In addition, our results indicate that above T_C the electrical transport is governed by MPs and can be well understood within a recently proposed scenario involving "isolated", "linked" and "merged" MPs. 7 Moreover, we argue that regions of appropriate size and spatial distribution that are incompatible with the existence of MPs can be the key to understanding the origin of the colossal magnetoresistance in systems with MPs. Samples used for magnetization and resistivity measurements were cut from the crystal used in previous studies, 12 which was grown by means of the zone-floating method. Recent micro-probe analysis of this crystal revealed a carbon content corresponding to the stoichiometric formula EuB5.99C0.01. The electrical resistance, magnetoresistance, heat capacity and ac susceptibility were measured in the Quantum Design PPMS and MPMS. The direction of the applied magnetic field was perpendicular to the electrical current in all magnetoresistance measurements. The electrical resistivity of EuB5.99C0.01 decreases upon cooling from 300 K until it reaches a shallow minimum at about 40 K. Below 10 K it increases steeply, passes through a maximum at T_RM ∼ 5 K, and subsequently falls off, with a tendency to saturate at the lowest temperatures. The low-temperature part of this dependence is shown in Fig. 1 as curve a). The temperature derivative of the resistivity in zero magnetic field, depicted in the figure inset, shows a sharp maximum at T_m = 4.1 K, indicating the proximity of a magnetic phase transition. Since the optical reflectivity data of the studied system have not revealed any shift in the plasma frequency between 4.2 and 20 K, 12 the charge carrier concentration can be regarded as constant in this temperature interval. Therefore, we tentatively associate the anomalous resistivity behavior with magnetism in this material.
In Fig. 2 we plot the temperature dependences of the magnetoresistance MR = [ρ(B) − ρ(0)]/ρ(0) for selected values of the applied magnetic field between 50 mT and 12 T, derived from the data shown in Fig. 1. We suppose that below T_C the scattering of conduction electrons originates from phase boundaries of the mixed magnetic structure consisting of helimagnetic domains, associated with carbon-richer regions, in the ferromagnetic matrix. A sufficiently high magnetic field makes the helimagnetic domains energetically unfavorable and therefore reduces their volumes (and probably destroys them completely at the highest fields), giving rise to the negative magnetoresistance. The influence of the magnetic field on the resistivity and magnetoresistance between 2 and 20 K, and the magnetic field dependences of the resistivity, depicted in Figs. 1, 2 and 3, respectively, reveal two different magnetoresistance regimes: (i) for temperatures lower than T_RM the resistivity is enhanced by small fields (B ≤ 0.3 T) and reduced by higher fields (B ≥ 0.5 T); (ii) above T_RM the resistivity monotonically decreases with increasing applied magnetic field. The low-field magnetoresistance measured at 2 K is dependent on magnetic history and exhibits large hysteresis. Fig. 4 shows the hysteresis behavior of the resistivity, including the virgin curve taken at 2 K after cooling from 30 K to 2 K in zero magnetic field. As is visible in the figure, the hysteresis is significant for |B| ≤ 0.3 T. The hysteresis of the magnetization is very weak, but not negligible, in the interval where the resistivity hysteresis is observed, suggesting that the positive magnetoresistance in low magnetic fields is due to conduction-electron scattering on the domain walls within the ferromagnetic matrix. With the aim of obtaining more information on the magnetic properties and the phase transition(s), we measured the real part of the ac susceptibility χ′(T) and the specific heat C(T) in the temperature ranges 2-86 K and 2-30 K, respectively. The 1/χ′(T) dependence satisfies the Curie-Weiss law in the region above ∼29 K and yields the paramagnetic Curie temperature θ_p = 7 K. Fig. ?? shows the χ′(T) and C(T) data below 10 K. The χ′(T) dependence indicates two distinct regimes, one above and one below ∼4 K: it is almost linear in the intervals 2-3.6 K and 4.1-4.8 K, respectively, but with different slopes. The specific heat exhibits a broad peak at 5.7 K, which we tentatively associate with the magnetic ordering transition at T_C ∼ 5.7 K. The position of the peak correlates well with the position of the inflexion point of the χ′(T) dependence (5.5 K). There is also a side anomaly at 4.3 K in the C(T) dependence, which almost coincides with the aforementioned resistivity anomaly at T_m = 4.1 K and with the change of regime of the χ′(T) dependence. A detailed microscopic investigation (e.g. neutron diffraction) is needed to elucidate the relation of the specific-heat and resistivity anomalies to the magnetic phenomena in the studied material. The observed behavior of EuB5.99C0.01 can be consistently explained within the framework of results obtained by Yu and Min, 7 who investigated the magnetic phase transitions in MP systems using the Monte Carlo method. They supposed three consecutive temperature scales: T*, T_C and T_p. Upon cooling from the high-temperature paramagnetic state, the isolated MPs with random magnetization directions begin to form at T*. 7
7 At further cooling the MPs grow in size. Down to T C carriers are still confined to MPs, thus the metallic and magnetic regions are separated from the insulating and paramagnetic regions. 7 The isolated MPs become linked at the bulk ferromagnetic transition temperature T C . Eventually, the polaron percolation occurs expressing itself as a peak in the heat capacity at T p < T C . 7 Below T p all carriers are fully delocalized and the concept of MPs becomes meaningless. The other issue, which should be mentioned, is that the impurities reduce both, T C and T p , but the discrepancy ratio (T p /T C = 7/9 . = 0.77) between these two temperatures is retained. 7 According to the concept of Yu and Min, 7 we interpret the obtained experimental results as follows. Consistently with the temperature dependence of the magnetization, EuB 5.99 C 0.01 is paramagnetic above ∼29 K. We expect the formation of isolated MPs at lower temperatures. The magnetic phase transition temperature reflected in the broad maximum of the C(T ) dependence, we associate with the temperature of ferromagnetic ordering T C . We suggest that the isolated MPs begin to link at T C . The MPs become merged and percolation occurs at the temperature of the (side) specific-heat anomaly T p = 4.3 K. Here is an excellent correspondence between the theoretically expected ratio T p /T C . = 0.77 and our experimental value T p /T C = 4.3/5.6 . = 0.75. The transition to the percolated phase is accompanied by the abrupt increase of the electrical conductivity of the percolated/merged phase at T m = 4.1 K. The fact that T m is lower than both T p and T C supports the supposition that Fisher-Langer relation 13 is not valid in MP systems because of spatial inhomogeneity. 7 The concept outlined above allows us also to explain the very interesting issue connected with the larger value of the resistivity maximum observed for EuB 5.99 C 0.01 (∼390 µΩ cm at ∼5 K) than for EuB 6 (∼350 µΩ cm at ∼15 K), 2 despite EuB 5.99 C 0.01 is at room temperature about four times better conductor, having ρ(300 K)∼180 µΩ.cm, than EuB 6 with ρ(300 K)∼730 µΩ.cm. 2 Since MPs can exist only in low carrier density environment, we suggest that the carbon richer regions with an enhanced carrier concentration act as "spacers" between MPs preventing them to link and merge. As a consequence, the system persists in poorly conducting state down to lower temperatures. Due to the extension of the temperature interval, in which the resistivity increases with decreasing temperature, an additional resistivity increase is observed, resulting in the higher value of the resistivity maximum (and the larger negative magnetoresistance). It may be generalized that the processes preventing MPs to link and to percolate extend the region of thermally activated transport (governed by MPs) to lower temperatures, giving rise to a higher value of the resistivity maximum, resulting in the higher magnetoresistance. It seems that the colossal magnetoresistance of Eu 1−x Ca x B 6 16 might be also explained by this scenario assuming that the calcium richer regions play a role similar to the carbon richer regions in EuB 6−x C x . From the point of view of tuning the properties of magnetoresistive materials here arises an interesting anology between the role of non-ferromagnetic "spacers" in the magne-toresistance enhancement in this class of materials, and the role of the (non-superconducting) pinning centers in the increase of the critical field in superconductors. 
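As a practical aside, the paramagnetic Curie temperature quoted for this sample (θ p = 7 K, from the Curie-Weiss behaviour of 1/χ' above ∼29 K) is typically obtained from a straight-line fit of the inverse susceptibility. The short Python sketch below illustrates such a fit; the function and variable names are our own illustrative assumptions, and this is not the analysis code actually used in this work.

import numpy as np

def curie_weiss_theta(T, chi, T_min=29.0):
    # Fit 1/chi = (T - theta_p)/C on the high-temperature side (T >= T_min).
    mask = T >= T_min
    slope, intercept = np.polyfit(T[mask], 1.0 / chi[mask], 1)
    theta_p = -intercept / slope   # paramagnetic Curie temperature
    C = 1.0 / slope                # Curie constant
    return theta_p, C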
In summary, our studies reveal a large negative magnetoresistance of EuB 5.99 C 0.01 well above and below the temperature of the bulk ferromagnetic ordering. In the temperature region where the resistivity maximum appears and the transport properties are governed by MPs, the results have been consistently explained within the picture of isolated, linked and merged MPs. 7 We have observed three distinctive temperatures: T C = 5.7 K, T p = 4.3 K and T m = 4.1 K. We suppose that at T C and T p , the temperatures of the heat-capacity maxima, the isolated MPs become linked and merged, respectively. The peak in the dρ/dT vs. T dependence at T m , lying slightly below the percolation temperature, we regard as a sign of the rapid enhancement of the conductivity in the merged phase. The unusually high value of the electrical resistivity maximum we associate with the presence of carbon-richer regions. We suppose that these regions are responsible for the higher value of the resistivity maximum at correspondingly lower temperatures and, consequently, for the larger magnetoresistance. Finally, we emphasize that introducing such "spacers", which prevent the percolation of MPs, may strongly enhance the magnetoresistance of systems with transport governed by MPs. The "spacers" are in fact regions of appropriate size and spatial distribution which are not compatible with ferromagnetic ordering. This might show a route for future research efforts related to the colossal magnetoresistance effect.
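To illustrate how the quantities summarized here are commonly extracted from raw measurement curves, the following Python sketch computes the magnetoresistance MR = [ρ(B) − ρ(0)]/ρ(0) on a common temperature grid and locates the maximum of dρ/dT used here to define T m. It assumes resistivities sampled on the same, monotonically ordered temperature grid and is only a generic illustration, not the authors' data-processing code.

import numpy as np

def magnetoresistance(rho_B, rho_0):
    # MR = [rho(B) - rho(0)] / rho(0), element-wise on a common temperature grid.
    return (rho_B - rho_0) / rho_0

def t_of_drho_dt_maximum(T, rho):
    # Temperature at which d(rho)/dT is largest, used to locate T_m.
    return T[np.argmax(np.gradient(rho, T))]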
2007-06-01T08:58:54.000Z
2007-06-01T00:00:00.000
{ "year": 2007, "sha1": "a977e0fd117aab4940d4604c4f6d0841a5b1947c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0706.0091", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a977e0fd117aab4940d4604c4f6d0841a5b1947c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
233713114
pes2o/s2orc
v3-fos-license
Estimation of Adaptation Parameters for Dynamic Video Adaptation in Wireless Network Using Experimental Method A wireless network gives flexibility to the user in terms of mobility that attracts the user to use wireless communication more. The video communication in the wireless network experiences Quality of Services (QoS) and Quality of Experience (QoE) issues due to network dynamics. The parameters, such as node mobility, routing protocols, and distance between the nodes, play a major role in the quality of video communication. Scalable Video Coding (SVC) is an extension to H.264 Advanced Video Coding (AVC), allows partial removal of layers, and generates a valid adapted bit-stream. This adaptation feature enables the streaming of video data over a wireless network to meet the availability of the resources. The video adaptation is a dynamic process and requires prior knowledge to decide the adaptation parameter for extraction of the video levels. This research work aims at building the adaptation parameters that are required by the adaptation engines, such as Media Aware Network Elements (MANE), to perform adaptation on-the-fly. The prior knowledge improves the performances of the adaptation engines and gives the improved quality of the video communication. The unique feature of this work is that, here, we used an experimental evaluation method to identify the video levels that are suitable for a given network condition. In this paper, we estimated the adaptation parameters for streaming scalable video over the wireless network using the experimental method. The adaptation parameters are derived using node mobility, link bandwidth, and motion level of video sequences as deciding parameters. The experimentation is carried on the OMNeT++ tool, and Joint Scalable Video Module (JSVM) is used to encode and decode the scalable video data. Introduction Video communication applications, such as video conferencing, telemedicine, video chat, and video-on-demand, are attracting users more and more in this COVID-19 pandemic situation. The communication applications involve a wireless network to reach a large number of users and enable seamless communication. Wireless networks provide the flexibility of mobility and ease of use to the users, which increases the streaming challenges and issues in providing better communication quality [1,2]. The recent advances in wireless technology and video coding formats have opened many challenges in real-time video streaming over wireless networks. As video communication is very sensitive to jitter and throughput, it is difficult to achieve Quality of Services (QoS)and Quality of Experience (QoE) in wireless networks. The challenges of providing better quality can be handled with the help of technologies, such as Content-Aware Networking (CAN), Content-Centric Networking (CCN) [3,4], layered video coding, and Scalable Video Coding (SVC) an Extension to H.264/Advanced Video Coding (AVC) The layered video coding techniques enable the video adaptation. The H.26x series of video compression techniques is popular and mostly used currently. In this series, H.264/AVC [5] is most commonly used, and the majority of the video communication applications use it to encode and decode. The H.265/HEVC (High-Efficiency Video Coding) [10] is the latest compression that is available, but the application that uses this compression technique is limited. 
Recently, an international committee was formed to develop a new compression method called H.266/VVC (Versatile Video Coding) [11]. In this experiment, we considered Scalable Video Coding (SVC) an extension to H.264/AVC [6], as streaming and adaptation using SVC encoding is still a major research challenge that needs to be addressed. Adaptation is a method that can be used with any layered encoders. SVC supports scalability in terms of spatial, temporal, and quality resolutions. Spatial scalability represents variations of the spatial resolution with respect to the original picture. The temporal scalability describes subsets of the bit-stream, which represent the source content with a varied frame rate. Quality scalability is also commonly referred to as fidelity or Signal-to-Noise Ratio (SNR) scalability. In Scalable Video Coding, one base layer and multiple enhancement layers are generated. The base layer is the independent layer, and each enhancement layer is coded keeping previous layers as a reference layer. As a result, it generates a single bit-stream and enables the removal of partial bit-stream in such a way that it forms a valid bit-stream, as shown in Figure 1. The base layer consumes more bandwidth compared to the enhancement layer. Consequently, effective bandwidth consumption is much less. The increase in efficiency comes at the expense of some increase in complexity as compared to simulcast coding. In simulcast, multiple video sequences are generated to meet the different resolution and frame rates. The SVC standard enhances the temporal prediction feature of AVC. Here, instead of a single-layer coding, multi-layer method is followed. The major difference between AVC and SVC in terms of temporal scalability is signaling the temporal layer information. The hierarchical prediction concept is being used in SVC. The dyadic hierarchical prediction has more coding efficiency than that of other prediction structures, like a non-dyadic and no-delay prediction. The spatial layer represents the spatial resolution, and the dependency identifier used for it is D. The base layer is equated to level 0 and the level is increased for each enhanced layer. In each spatial layer, motion-compensated prediction and intra-prediction are employed for single-layer coding. In simulcast, the same video will be coded with different spatial resolution, but, in spatial dependency, inter-layer prediction mechanisms are incorporated to improve the coding efficiency. The inter-layer prediction includes techniques for motion parameter and residual prediction, and the temporal prediction structures of the spatial layers should be temporally aligned for efficient use of the interlayer prediction. For Quality or SNR scalability, coarse-grain SNR scalability (CGS) and medium grain SNR scalability (MGS) are distinguished in scalable video coding. The Quality scalable layer is identified by Q, where each spatial layer will have many quality layers. The decoder selects the Q value based on the requirement and decodes the quality for each spatially enhanced frame. The SVC encoder combines all the above-mentioned scalability to code a video sequence as shown in Figure 2. The original video sequence is initially down-sampled up to the minimum resolution expected in video communication. Later, the base layer is coded independently. By keeping the base layer and spatial levels as a reference, enhancement layers are generated. The number of the enhancement layer is decided by the temporal levels. 
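To make the layer structure just described concrete, the Python sketch below shows how an adapted bit-stream can be obtained by keeping only the units whose spatial (D), temporal (T) and quality (Q) identifiers do not exceed a chosen extraction point; this is the operation later delegated to the Extractor. The NALUnit container and its field names are illustrative assumptions, not an actual SVC parser; a real extractor (for example, the JSVM bit-stream extractor) reads these identifiers from the SVC NAL unit header.

from dataclasses import dataclass
from typing import List

@dataclass
class NALUnit:
    d: int          # spatial (dependency) layer identifier
    t: int          # temporal layer identifier
    q: int          # quality layer identifier
    payload: bytes  # coded slice data

def extract_substream(units: List[NALUnit], d_max: int, t_max: int, q_max: int) -> List[NALUnit]:
    # Keep only units at or below the requested (D, T, Q) extraction point;
    # because enhancement layers reference only lower layers, the result is
    # still a valid, decodable bit-stream.
    return [u for u in units if u.d <= d_max and u.t <= t_max and u.q <= q_max]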
Finally, coded video is packetized according to Network Abstraction Layer standards and later used to store or stream over the network. Media Aware Network Elements (MANE) MANEs are CAN-enabled intermediate devices that implement intelligent modules, such as routing, adaptation, and so on [12][13][14]. Figure 3 depicts the architecture of MANE. It mainly implements Adaptation Decision Engine (ADE) and Extractor. The modules, such as Network Analyzer and SVC Header Analyzer, are supporting the ADE module. The Network Analyzer monitors the network conditions and availability of the network resources, and then it feeds the same to ADE, which decides the number of layers that need to be extracted. The module continuously monitors the congestion status and Packet Delivery Ratio (PDR) of the network to understand the dynamic nature of the network resources. This monitored data helps in improving the efficiency of adaptation. Along with these parameters, Bandwidth, Buffer availability, and terminal availabilities are monitored for improving the decision-making. In wireless communication, reachability is also an important parameter because it decides the performance of routing protocols and the quality of the data received. Wireless routing protocols have many overheads, such as hello packets and echo packets. Hence, studying the influence of routing protocol and bandwidth availability was the aim of this research. The SVC header analyzer parses the packets and then extracts the layer information from the bit-stream. The scalable video levels that are decided at ADE are fed into Extractor to extract the SVC levels accordingly. Unwanted scalable layers are removed from the fully scalable video bit-stream, and an adaptation bit-stream is delivered to the network. The adapted video bit-stream provides maximum video quality that can be achieved in the available resources and conditions. Related Works The majority of the research works support receiver and sender-driven adaptation methods, which are carried at end devices and server-side, respectively. In the receiverdriven approach [15], the content is adapted by the receiving device just before displaying the content. Guo et al. [16] proposed a multi-quality steaming method using SVC video coding. In this approach, multiple qualities of video data are streamed in a multicast communication and receivers will choose the quality of the video. Ruijian et al. [17] developed a Resource Allocation and Layer Selection method to choose the scalable video levels in a mobility scenario. In the sender-driven method [18], receivers signal the device capabilities while creating the session; then, accordingly, the sender adapts the content and streams the adapted content. A Video Optimizer Virtual Network Function [19] has been proposed to implement dynamic video optimization, where a video processing module at kernel and Network function virtualization (NFV) are used to improve the quality of the video in 5G network. There are many work on Hypertext Transfer Protocol (HTTP)based dynamic video adaptation methods [20][21][22][23][24] that use server-driven method. In these techniques, the server will collect the feedback on video quality, and accordingly, the video will be streamed to improve the QoE of the communication. There are many adaptation techniques for client-side adaptation, which mainly use Dynamic Adaptive Streaming over HTTP (DASH) and HTTP Adaptive Stream (HAS) for streaming adapted video data. Pu et al. 
[25] proposed a Dynamic Adaptive Streaming over HTTP mechanism for wireless domain (WiDASH). Similarly, Kim et al. [26] proposed a client-side adaptation technique to improve QoE of HTTP Adaptive Stream. They considered the dynamic variation of both network bandwidth and buffer capacity of the client. Tian et al. [27] demonstrate the video adaptation using a feedback mechanism. Similarly, there few implementation that use client-side adaptation method and resource allocation technique [28,29]. These implementations display the adapted content once it is fully received by the receivers. However, these techniques consider full quality while streaming from sender to receivers; hence, they consume more resources in the network. This leads us to explore more about in-network adaptation methods. Chen et al. [30] presented a dynamic adaptation mechanism to improve QoE of the video communication. The model considers the multiple video rates for the communication. In Reference [31], a traffic engineering method has been proposed to feature the video adaptation. Here, a study has been carried to understand the importance of SDN for video streaming. A physical layer-based dynamic adaptation has been proposed in Reference [32]. In this work, the carrier sensing-based method has been developed. Quinaln et al. [33] proposed a streaming class-based method to stream scalable video. Here, quality levels and each level are streamed independently. These research works aim at in-network adaptation techniques, but they fail in handling network dynamics and video motions together. The adaptation while streaming is difficult because the adaptation module requires dynamic network conditions and video metadata to decide the adaptation parameters. The literature does not discuss the role of video metadata and network parameters, such as mobility and bandwidth availability. Additionally, adaptation requires prior knowledge of the adaptation parameters to improve the adaptation on-the-fly. The majority of the literature concentrate on adaptation techniques and lack in discussing the prior knowledge required by the adaptation engine. Hence, we are carrying experimental analysis and then derive the adaptation parameters in this research work. Scalable Video Streaming over Wireless Network The video adaptation over wireless network experiences the following major challenges: • Node Mobility: The nodes in a wireless network are free to move and that leads to disruption in the communication. The node mobility affects the bandwidth availability between the source and receiver. The change in the bandwidth degrades the quality of the communication. This experimental work considers an ad-hoc wireless environment consisting of mobile nodes and forwarders. The routing algorithm considered for the experimentation is Ad-hoc On-Demand Distance Vector (AODV) [34], which is considered to be a stable routing algorithm in the wireless domain. AODV is capable of routing both unicast communication and multicast packets. It is an on-demand algorithm; it means that the route between source and destination is created when the source has data or packets to send. The routes established are preserved as long as the source requires them for communication. Furthermore, AODV forms trees that connect multicast group members by removing routing loops. To obtain knowledge of network topology, nodes exchange the HELLO packet and Reply packets. Once the route is established, it starts streaming the video packets. 
In the wireless domain, nodes are acting as a source, forwarder, and destination; they can read the packets. Hence, it is assumed that each node in the wireless network is acting as MANE. The size of the scalable video bit-stream varies with the motion level of the video sequence. Here, we considered three video sequences, which are Honeybee, Jockey, and Bosphorus, which represent high, medium, and low motion, respectively. Figure 4 shows the video dataset that is considered in this work. SVC encodes motion parameters along with the video data, therefore as motion increases in the video, the number of packets that need to be transmitted over the network also increases. When these packets are transmitted over a bandwidth-limited network, the dropping of a packet that has motion parameters adversely affects the decoded video at the receiver. That is the reason that a decision taken at ADE varies with the motion level of video sequences. With the above-said methodology, we address the challenges listed in this section. The planned wireless network setup considers the listed network challenges and dynamic conditions. Additionally, we consider the background communications and noises by explicitly creating the communications. These setups make the simulation environment more real-time. Experimentation and Discussion In order to study the performance of Scalable Video streaming over wireless networks, we chose OMNeT++ tool [9], and, for encoding and decoding the video sequences, Joint Scalable Video Module (JSVM) tool with version 9.18 [8] was used. The OMNeT++ is a simulator which supports the connection of real devices to a simulation environment. Hence, real network applications, such as VLC streaming, can be used. JSVM is a reference software developed by Joint Video Team (JVT). It supports up and down-sampling, SVC encoding and decoding, and bit-stream extraction of video data. In this experimental study, video sequences are encoded for 3 temporal levels, 3 spatial levels, and 2 quality levels. The frame rates considered are 15 fps, 30 fps, and 60 fps, which are represented by value T. In spatial scalability, 480p, 720p, and 1080p standard resolutions are considered. The value of D denotes the spatial level. The quality scalability Q denotes 2 levels of video quality, which is achieved by considering 2 different quantization levels. Table 1 represents the video levels and bitrates of each level that are considered in this experimental study. The network scenario and parameters considered in this experimentation are as shown in Table 2. The wireless networking environment is created using OMNeT++ Tool, as shown in Figure 5. The source and destination nodes of the simulation environment are attached to the real computer device. The node mobilities are predefined in the OMNeT++ Tool and the same used to study the performances of the video streaming. To realize the wireless environment, background communications using UDP were used. The UDP communication is set in such a way that, periodically, a device broadcasts 100 kB data, and, additionally, routing consumes resources for route management. The variation in the network resources influences the delivery of the video data packets, and that is studied in this experimental work. The video streaming is carried with the help of a VLC media player at source and destination. The VLC player is used for generating the stream-ready video data and then streaming over the simulation environment. 
It uses Real Time Streaming Protocol (RTSP) and UDP protocols for streaming over the network. In addition, the VLC is used to capture the video sequence at the receiver side. Since VLC does not support SVC encoding and decoding, we use VLC for streaming and capturing only. The Packet Delivery Ratio is calculated using Wireshark and also OMNeT++ tool, where total packets transmitted and received are considered. Figure 6 depicts the PDR obtained for streaming Bosphours video over a wireless network. Here, bandwidth variations with 24 Mbps and 48 Mbps are considered for streaming the video sequences. From the result, it is evident that a wireless network having 48 Mbps can transmit a fully scalable video sequence that has all scalable levels. As there are background communications to keep the network active and ready for communication, the portion of the network bandwidth is allocated for network routing overheads. Hence, video streaming experiences a lack of network resources for the streaming of fully scalable bit-stream. In 24 Mbps, scalable video with lower frame-rates, i.e., up to 15 fps and 720p gives better PDR. However, higher quality and resolutions consume more video bit-rate and additional resources in the network for streaming. As a result, high-quality video streaming suffers a lack of network resources to provide better communication quality. The experiment is carried to study the influence of node mobility and routing overheads on video communication. The results obtained are plotted as shown in Figure 7. In a mobility scenario, the nodes keep changing the position and, hence, connectivity. This change in the node connectivity leads to re-route computation in the network, which affects the communication by dropping the video data packets. Figure 7a,b show the PDR calculation for 24 Mbps and 48 Mbps network, respectively. From the result, it is observed that the base layer is stable both with and without node mobility scenario. The bitrate of the base layers is less compared to the higher levels. However, the increased bitrate leads to more packets in the communication and fails to provide stable quality in the mobility scenario. In a non-mobility, the life-time of the calculated route leads to re-computation of the streaming path; hence, the packet drops are observed in the experiment. The experimentation is carried to study the influence of video bitrates on video streaming over a wireless network. The results are shown in Figure 8. In this experiment, all scalable video bit-streams are streamed over the network and captured the received video at the receiver side, and then PDR is calculated to analyze the streaming performances. The video sequence with more motions produces more bit-rate; hence, more packets are generated while streaming over the network. The Bosphorus video sequence is more stable compared to Jockey and Honeybee sequences. The Bosphorus video sequence has a slowmoving object in the front and very far background objects; hence, the bitrate generated by the SVC encoded is less compared to Jockey and Honeybee. In Jockey and Honeybee, objects in the foreground and background are moving frequently; therefore, the bitrate generated is higher. Figure 9 shows the PDR calculation for non-mobility wireless scenario. From these experiments, the influence of node mobility, bandwidth, and motion levels are observed used for deriving the pre-knowledge required for ADE. 
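The pre-knowledge derived from these observations is intended to be consulted by the ADE as a simple lookup: for the current bandwidth, mobility state and motion class, the highest scalable level whose recorded PDR meets the quality threshold (80% in this work) is selected. The Python sketch below illustrates that lookup; the table entries and function names are illustrative placeholders, not the actual values of Tables 3-5.

# Illustrative pre-knowledge table: (bandwidth_mbps, mobile, motion) maps to a
# list of ((D, T, Q), observed PDR %) pairs ordered from highest to lowest level.
ADAPTATION_TABLE = {
    (24, True, "low"):  [((2, 2, 1), 62.0), ((1, 1, 0), 83.0), ((0, 0, 0), 95.0)],
    (48, False, "low"): [((2, 2, 1), 94.0), ((1, 1, 0), 97.0), ((0, 0, 0), 99.0)],
}

def choose_extraction_point(bandwidth_mbps, mobile, motion, pdr_threshold=80.0):
    # Return the highest scalable level whose recorded PDR meets the threshold;
    # fall back to the base layer if none does.
    for level, pdr in ADAPTATION_TABLE.get((bandwidth_mbps, mobile, motion), []):
        if pdr >= pdr_threshold:
            return level
    return (0, 0, 0)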
As the aim of this research work was to generate the extraction points for the ADE, the adaptation parameters were estimated on the assumption that a PDR of at least 80% is required to decode the bit-stream and display video of acceptable visual quality. The adaptation parameters for the ADE are derived as shown in Tables 3-5. These tables give the adaptation parameters needed to obtain better QoE, in terms of PDR, in a bandwidth-constrained network; they are estimated from the PDR observed in the experimental evaluation. The MANE can therefore use these tables as a reference to decide the extraction points for the available network resources and conditions. This helps the MANE implement adaptation on-the-fly, since the knowledge built here is available for reference when deciding which scalable levels to remove. Moreover, because the adaptation parameters are derived from experimentation, they ensure that the received video meets the quality requirement (PDR) of the communication. The major advantage of this estimation is a reduction in the delay involved in decision-making; the reduced processing delay makes in-network, on-the-fly adaptation feasible. Conclusions SVC is a suitable encoding technique for attaining better QoE/QoS in wireless communication. The network topology is highly dynamic, routing protocols incur considerable overhead in the wireless network, and much of the bandwidth is consumed in maintaining topology knowledge. In this paper, we estimated the adaptation parameters taking node mobility, bandwidth availability, and the motion level of the video sequences as the deciding factors. The experimental method streams different scalable video levels over the wireless network under different network conditions. High-definition video sequences were used for estimating the adaptation parameters. The knowledge built in this work helps in the continuous streaming of video over a CAN-enabled wireless network; hence, adaptation can be performed on-the-fly, taking dynamic network conditions and resource availability into account. In the future, this knowledge will be used to develop a dynamic video adaptation method. We plan to use machine-learning-based approaches to develop dynamic adaptation techniques, and we will simulate additional network scenarios and estimate more adaptation parameters for use in dynamic adaptation algorithms.
2021-05-05T00:08:21.062Z
2021-03-24T00:00:00.000
{ "year": 2021, "sha1": "73ec7a42e457a76c92eb2fe61ba06dacba97896f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-431X/10/4/39/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "9d802bc7ff9431798df55e543d6c0005a418a4a9", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
240058617
pes2o/s2orc
v3-fos-license
A Rare Case of Pregnancy Complicated by Bladder Exstrophy and Uterine Prolapse - Uterine prolapse and bladder exstrophy (BE) during pregnancy is a rare condition. The aim of this study was to present a rare case of pregnancy complicated by both bladder exstrophy and uterine prolapse. A 39-year-old pregnant woman (gravida 2, para 1) presented to the maternity department at 39 weeks of gestation with labor pain. Physical examination showed regular uterine contractions; the cervix was completely out of the vaginal opening with dilatation of 3 cm and effacement of 30%. She had a history of multiple surgeries for correction of bladder exstrophy and also suffered from uterine prolapse. During active labor, an abnormal fetal heart rate tracing occurred, so an emergent cesarean section was planned, and a healthy neonate with a normal Apgar score was born. At regular follow-up until four months after delivery, there was no sign or symptom of uterine prolapse. Multidisciplinary management of patients with BE and uterine prolapse may result in optimal perinatal outcomes. Uterine prolapse may disappear after delivery, even in a case complicated by bladder exstrophy. Introduction Bladder exstrophy (BE) is a complex congenital anomaly involving the musculoskeletal system and the urinary, reproductive and intestinal tracts (1,2). The reported incidence ranges from 3 to 5 per 100,000 live births, and the condition occurs more often in males than females. It is also more frequent in firstborn children (2). Uterine prolapse during pregnancy is a rare condition, and its frequency has fallen further worldwide over the past decades, probably due to a decrease in parity (3). Several reports describe uterine prolapse or bladder exstrophy during pregnancy (4-6), but the aim of this study was to present a rare case of pregnancy complicated by both bladder exstrophy and uterine prolapse. Case Report A 39-year-old pregnant woman (gravida 2, para 1) presented to the maternity department at 39 weeks of gestation with labor pain. Physical examination showed normal vital signs and regular uterine contractions; the cervix was completely out of the vaginal opening with dilatation of 3 cm and effacement of 30%. The amniotic sac was intact, and fetal presentation was cephalic. She had a history of multiple surgeries (eight) for correction of bladder exstrophy. She also suffered from uterine prolapse, which had occurred immediately after her first normal vaginal delivery three years earlier, and she used an intravaginal pessary (ring with support, number 5) to control the bulging symptoms. Comprehensive pelvic examination confirmed stage 3 uterine prolapse (Figure 1). Urologic consultation was obtained to guide any procedure needed during delivery. During active labor, fetal distress occurred, so an emergent cesarean section was planned, and a healthy female neonate with a normal Apgar score was born (weight 3150 g). The abdomen was opened by a midline incision; after entry into the peritoneal cavity, multiple small bowel adhesion bundles were dissected from the peritoneum and the uterine surface. About 1 cm of ileum was perforated at that time, which was repaired in two layers with Gambee and Lembert sutures using 3-0 silk. The day after the cesarean, vaginal examination showed no uterine prolapse. At the first visit after the cesarean, the uterus was completely elevated in the abdominopelvic cavity.
At regular follow-up until four months after delivery, there was no sign or symptom of uterine prolapse, and the uterus was palpated in its normal position in the pelvis without any significant prolapse. Discussion The aim of this case report was to describe a patient with a history of multiple surgeries for correction of bladder exstrophy, together with uterine prolapse, which posed challenges for the optimal management of delivery. Acute postpartum uterine prolapse may affect nearly half of women with BE (7); in the current case, likewise, the uterine prolapse occurred immediately after her first vaginal delivery. The main problem in this situation is the management of pelvic organ prolapse (POP), especially uterine prolapse, at the time of delivery. Some reports describe successful conservative treatment of POP during pregnancy with intravaginal devices (pessaries), and successful vaginal delivery has also been reported in pregnant women whose pregnancies were complicated by uterine prolapse (8)(9)(10). Although there are some reports of uncomplicated normal vaginal delivery in BE patients (6), given the anatomic limitations and the risk of uterine prolapse, it is currently generally accepted that BE patients undergo planned cesarean section with a multidisciplinary team (11,12). Technical considerations for the cesarean section include performing a high midline or paramedian incision to avoid damage to the urinary reservoir (13). In the current case, many thick adhesive bands extended from the small bowel beneath the peritoneum near the umbilicus to the uterine wall; at the time of peritoneal opening, a small bowel perforation of about 1 cm occurred, which was repaired after the uterine repair. Fortunately, the urinary reservoir remained intact. Sharma et al. reported five cases of pelvic organ prolapse that developed during pregnancy and resolved after vaginal delivery (4), but they differed from the current case in the timing of the uterine prolapse (our patient had POP before pregnancy). It seems that uterine prolapse present before pregnancy is exacerbated during pregnancy and at least remains unchanged after delivery. Pelvic adhesions may have played a significant role in elevating the uterus in this situation. Moreover, cesarean section may have a protective role against the persistence of prolapse after delivery (14). Multidisciplinary management of patients with BE and uterine prolapse by a team consisting of an experienced urologist, gynecologist, and perinatologist may result in optimal perinatal outcomes. Uterine prolapse may disappear after delivery even in a case complicated by bladder exstrophy.
2021-10-28T15:13:08.917Z
2021-10-20T00:00:00.000
{ "year": 2021, "sha1": "918609b55ce9109baa5334649e142f49d1655e6a", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/acta.v59i9.7559", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "cb5b0178e0368cc91f54f0a72a9991491d9422bd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
233894804
pes2o/s2orc
v3-fos-license
Numerical Investigation on Improving the Computational Efficiency of the Material Point Method Based on the basic theory of the material point method (MPM), the factors affecting the computational efficiency are analyzed and discussed, and the problem of improving calculation efficiency is studied. This paper introduces a mirror reflection boundary condition to the MPM to solve axisymmetric problems; to improve the computational efficiency of solving large deformation problems, the concept of "dynamic background domain (DBD)" is also proposed in this paper. Taking the explosion and/or shock problems as an example, the numerical simulation are calculated, and the typical characteristic parameters and the CPU time are compared. The results show that the processing method introducing mirror reflection boundary condition and MPM with DBD can improve the calculation efficiency of the corresponding problems, which, under the premise of ensuring its calculation accuracy, provide useful reference for further promoting the engineering application of this method. Introduction In this paper, we want to discuss how to improve the computational efficiency of axisymmetric problems and large deformation problems when the meshfree method material point method (MPM) is used. Meshfree/meshless methods have been progressed significantly and achieved remarkable progress in recent years [1,2]. Smoothed particle hydrodynamics (SPH) is the first meshless method [3], and it can solve problems of solid mechanics [4], fluid dynamics [5,6], machining [7], heat conduction [8], and so on. To remove the instabilities in SPH methods, a reproducing kernel particle method (RKPM) [9] and Meshless local-Petrov Galerkin (MLPG) [10] method were developed. e extended finite-element method (XFEM) was introduced by Belytschko and Black [11], as an extension of the standard FEM, and it has been used to deal with crack problems and so on [12][13][14]. Rabczuk and Belytschko [15] proposed the cracking particles method (CPM), and it has been used to model the dynamic crack propagation [16][17][18]. Also, isogeometric analysis (IGA) [14,19] has recently surfaced as a potential technique in the field of solid mechanics analysis. In 1994, the EFG method [20] was developed and used a global weak form to solve a variety of problems whether under various loadings or conditions immaculately [21][22][23]. To deal with the problem of crack tracking, the concept of dual-horizon is introduced to Peridynamics (DH-PD) to consider the unbalanced interactions between the particles with different horizon sizes, which can solve the "ghost force" issue [24,25]. e phase field modelling was also used to deal with the crack propagation [26][27][28][29]. A numerical manifold method was developed to deal with the Kirchhoff plate [30][31][32]. e material point method is an extension of the fluid implicit particle (FLIP) [33]/particle in cell (PIC) [34] method, proposed by Sulsky, Chen, and Schreyer [35]. In comparison with other methods, the MPM takes advantage of both Lagrangian and Eulerian descriptions [36,37], which have proven useful for solving impact and contact problems [38,39], penetration [40,41], explosion problems [42][43][44], combustion reaction [45], cracks and fracture [46][47][48][49], stochastic and random problems [50,51], and so on. 
In the framework of the MPM, the background area is discretized by Eulerian grid and grid nodes, the material domain is discretized by a finite number of particles, and the calculation time is comprised of multiple time steps. ere are two phases in the single time step: In the first phase, the physical information on material points are mapped to grid nodes by shape functions; after obtaining the kinematic solution on the grid nodes, the physical information is mapped back to material points to update their characteristic information such as positions and velocities. Hence, the MPM avoids the mesh distortion in the Lagrangian method and the convection problems in Eulerian methods [33,34]. Also, the MPM shows many advantages over other meshless methods in tension stability and efficiency [36][37][38]. Many interesting scenarios from the preceding categories lend themselves to being treated as axisymmetric. ese include, for example, impact and cratering events, rod penetration, and shaped charge jet formation. e material point method proposed earlier treated particles as point masses, and it has long been recognized that this original MPM (i.e., particles represented spatially as Dirac delta functions) has serious element-crossing artifacts. In addition to reformulation of axisymmetry using GIMP methods, numerous other extensions to the MPM have developed since the original work on axisymmetric MPM [52,53]. Nairn and Guilkey have established a series of new shape functions which differ mostly from planar shape functions near the origin where r � 0 [36]. In this paper, on the basis of not changing the right-angled coordinate system, the axisymmetric boundary conditions are introduced, and the three-dimensional problem is simplified to a two-dimensional plane problem for processing. e authors have been extensively researching the MPM as it appears to resolve the large deformation problems. However, like all novel techniques, it suffers from a few deficiencies that require further research. A common problem is that when using the MPM to calculate the problem, the physical values of the particles should be updated strictly according to the dimension characteristics of the problem; it means that the two-dimensional problems and three-dimensional problems are handled in different ways. erefore this paper introduces the mirror reflection boundary condition on the basis of the theory of the material point method, which can simplify the three-dimensional axisymmetric problem into a two-dimensional problem and can solve the three-dimensional axisymmetric problems more efficiently. Furthermore, when dealing with the large deformation problems, the physical information of all material points requires two mappings from the material points to the background grid and the background grid to the material points within each calculation step in the original MPM [54][55][56]. In order to improve the computational efficiency, the concept of dynamic background domain (DBD) is presented in this paper, and it allows the method to skip the mapping process of background grids and material points which not are updated in the physical information during the calculation process. e algorithm, thereby, allows a more thorough search of all the materials point and seems to offer a greater ability to find global minima. is paper is organized as follows. e algorithm of the explicit MPM is reviewed briefly in Section 2. 
e detailed formulation of the axisymmetric problems solution is presented in Section 3, which also describes the concept of dynamic background domain. e numerical simulations and several conclusions are given in Section 4. Finally, Section 5 draws a summary and some concluding remarks. e source codes of the mirror reflection boundary conditions and dynamic background domain are given in Algorithms 1 and 2. Governing Equations. In the MPM, the background grid is divided to calculate the momentum equation, and the material domain Ω is discretized into a collection of particles, as shown in Figure 1. For continuous materials, they are governed by the conservation of momentum equations: is the stress tensor, and b � b(x, t) is the body force. e mass conservation equation is and the conservation of energy equation is where ε � ε(x, t) is the strain tensor and dε/dt is the strain rate tensor. MPM Solution Scheme. e first step in the single time step of the MPM is to extrapolate the particle momenta and masses to the background. e equations are where p i is the nodal momentum, v p is the particle velocity, m p is the particle mass, N i, p is the shape function for node i evaluated at the current position of particle p, and m ij is the nodal mass in the lumped (or diagonal) mass matrix. In this representation, the acceleration vector at each grid point is found by where f int i and f ext i are internal and external force vectors associated with node i, and they are defined by module specular_reflection use param, only: parts, dy, partstotal, round, symmetry, reptotal implicit none contains open(1,file � "evaluate.dat") subroutine evaluate() implicit none integer i, j, k, t if (round �� 1) then (2))%x(1) � parts(symmetry(i)%p(1))%x(1) parts(symmetry(i)%p(2))%x(2) � -parts(symmetry(i)%p(1))%x(2) parts(symmetry(i)%p(2))%vx(1) � parts(symmetry(i)%p(1))%vx(1) parts(symmetry(i)%p(2))%vx(2) � -parts(symmetry(i)%p(1))%vx(2) end do end subroutine close (1) end module ALGORITHM 1: Source code for mirror reflection boundary condition. Mathematical Problems in Engineering where Γ τ is the stress boundary and τ i is the surface traction associated with node i. Calculating the nodal acceleration, the nodal velocities can be updated, and after mapping from nodes to particles, the acceleration and density of particles can be updated and combined with the constitutive equation and EOS: After these steps, the now-deformed background grid is discarded and the original grid is used for the next time step [52,55,57].To begin the next time step, the nodal velocities must be calculated by extrapolation from the material points to the grid, and it can be expressed as Axisymmetric Problems Solution and Dynamic Background Domain As the MPM has matured, various extensions have been developed. e two core issues in this paper are that the material point method, which introduces the mirror reflection boundary condition, is used to calculate the axisymmetric problems and the concept of "dynamic background domain" is put forward to calculate the large deformation problems. It is worth noting that when calculating the axisymmetric problems useing the MPM with a mirror reflection boundary condition, the rectangular coordinate system, instead of the column coordinate system mentioned in [36], was used. Mirror Reflection Boundary Condition. Many interesting scenarios lend themselves to being treated as axisymmetric. ese include, for example, rod penetration, explosion, and impact events. 
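Before turning to the axisymmetric treatment, it may help to restate the particle-to-grid transfer written above (nodal mass m_i = Σ_p N_i,p m_p and nodal momentum p_i = Σ_p N_i,p m_p v_p) in executable form. The following Python sketch shows that mapping on a regular two-dimensional background grid with bilinear shape functions; it is a generic illustration of the standard MPM transfer under assumed NumPy-array data structures, not the authors' Fortran 95 implementation, and it assumes all particles lie strictly inside the grid.

import numpy as np

def particles_to_grid(x_p, v_p, m_p, dx, nx, ny):
    # Map particle mass and momentum to the nodes of a regular 2-D background
    # grid with cell size dx, using bilinear (tent) shape functions, so each
    # particle contributes to the four nodes of the cell that contains it.
    # x_p: (n, 2) positions, v_p: (n, 2) velocities, m_p: (n,) masses.
    m_i = np.zeros((nx, ny))          # lumped nodal masses
    p_i = np.zeros((nx, ny, 2))       # nodal momenta
    for x, v, m in zip(x_p, v_p, m_p):
        i, j = int(x[0] // dx), int(x[1] // dx)   # lower-left node of the cell
        fx, fy = x[0] / dx - i, x[1] / dx - j     # local coordinates in [0, 1)
        for di, wx in ((0, 1.0 - fx), (1, fx)):
            for dj, wy in ((0, 1.0 - fy), (1, fy)):
                N = wx * wy                        # shape function N_{i,p}
                m_i[i + di, j + dj] += N * m
                p_i[i + di, j + dj] += N * m * np.asarray(v)
    return m_i, p_i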
Many scholars have also conducted research on formulation for an axisymmetric MPM. e particles were treated as point masses with the axisymmetric formulation presented by Sulsky and Schreyer [52], and they use column coordinate systems to deal with the Taylor impact problems. A new shape function and gradients specific for axisymmetry were presented by Nairn and Guilkey [36]; the transformation and extensions of shape function included traction boundary conditions, thermal conduction, explicit cracks, solvent diffusion, and convected particle-domain integration (CPDI). When solving the specific axisymmetric problem, it is only necessary to analyze the motion of the research object on the plane containing the symmetric axes [36]. However, we want to use rectangular coordinate system to solve the explosion problems, and the simple application of traditional symmetric boundaries cannot be used to solve those axisymmetric problems within the framework of the MPM. In the specific calculation process, we need to solve the following two problems: (1) e influence domain of a single particle is not limited to the initial background grid in which the particle is located, so it is necessary to establish the virtual particles of multiple rows of meshes when solving the axisymmetric problem by using MPM. In the theory of the GIMP (Generalized Interpolation Material Point Method), each material point affects the nodes of its upper and lower three-row grids during the first mapping process. Meanwhile, each material point affects the nodes of the only grid which is located during the first mapping process based on the theory of the standard MPM. In order deallocate(parts(i)%nodes,s(i)%pnode,s(i)%s,s(i)%sx) if(parts(i)%sign��0) then else if(parts(i)%x(1)<�zxmax.and.parts(i)%x(1)>�fxmax.and.parts(i)%x(2)<�zymax.and.parts(i)%x (2) to broaden the applicability of the "Mirror reflection boundary condition," we choose to generate three rows of virtual grids and generate corresponding virtual particles within the virtual grid on the other side of the symmetric axis, as shown in Figure 2. (2) After the arrangement of virtual mesh and virtual particles is completed, the constraints that the particles should meet need to be considered. According to the characteristics of axisymmetric problems, the physical information on the two particles relative to the symmetrical axis should be identical. At the end of each time step, considering the calculation process of the material point method in a single time step, the displacement and velocity of the virtual particles should be related to the displacement and velocity of the corresponding particles: in the direction parallel to the symmetrical axis, the size is equal and the direction is the same. In the direction perpendicular to the symmetrical axis, the size is equal and the direction is opposite. It can be expressed as At this point, after the physical information is mapped from the particles to the grid node in the next time step, the physical information, such as the quality of the virtual nodes and the momentum of the virtual particles, are exactly the same as the physical information of its corresponding nodes and particles, and the calculated results will be identical, so that the axisymmetric problems can be resolved with the MPM introducing the boundary conditions; meanwhile, the number of the material point can be greatly reduced during the process of calculation. e source code for this section is detailed in Algorithm 1. Dynamic Background Domain (DBD). 
According to the basic theory of the material point method, there are two mapping processes between the particles and the grids in a single time step; the calculation area needs to be larger than the arrangement area of particles in case of the large deformation problems, such as explosion and impact engineering problems. At the beginning of the numerical calculation, the transfer of the pressure, temperature, and other physical indicators of the detonation product exists only in the small area near the energy-containing material, and the physical information of all particles in the calculated area needs to be updated at each time step when the conventional material point method is used. If we can determine the influenced area of the current time step and only update the particles in this area, the calculation time can be shortened and the computational efficiency of the numerical method can be improved under the premise of ensuring the accuracy of the calculation results. e key to the method is to search for the affected domain in the current time step. Although particles carry a lot of physical information in a single time step, the most primitive physical quantities are velocity and pressure when the feature values of particles are updated. e velocity and pressure are given by where T is a four-order tensor. erefore, we can treat the velocity and pressure of one particle as a sign of whether it reacts or not, that is, whether it is in the affected domain or not. When the speed or pressure is not the same as the initial value, it can be determined to be "active" particles. After getting the coordinate extremes of these "active" points, the affected domain is determined by the coordinate extremes, which can contain all the "active" particles. After the affected domain is determined, we only need to update the physical information of the "active" particles and the affected domain. Also, for the sake of insurance, we can still expand the domain identified by "active" particles into other three meshes, thus treating them as a new affected domain. At this point, we only need to update the physical information for the particles which are in the affected domain. e affected domain is called the "dynamic background domain, DBD," and the specific solution steps for dynamic background domains are as follows: (1) Before the calculation begins, we define the "Zone Properties" of all particles as "1" (2) At the time t, we determine whether physical information, such as pressure, temperature, and velocity, of a single particle is an initial value; if yes, we go to Type. 5; if not, we go to Type. 
3 (3) All the particles are treated as "active" particles, and the maximum and minimum values of coordinates of such "active" particles are determined (4) e boundary determined by the extremum by three meshes, respectively is expanded, to determine the dynamic background domain (5) We determine if the material point is in the dynamic background domain; if yes, we go to Type 6; if not, we go to Type 9 (6) We update the zone properties of particles as "0" (7) We calculate the particles by the MPM, whose zone property is "0" (8) We determine the time increment dt Mathematical Problems in Engineering (9) We update the time step by "t�t + dt" and determine whether the defined calculation time is reached; if yes, we finish this calculation process; if not, we go to Type 3 again, until the end of the calculation program In Algorithm 2, the source code of the abovementioned expressions is given, and the flowchart for this processing method is shown in Figure 3. Numerical Example Two numerical examples based on steel under explosion and impact problems are presented, in which these examples involve not only large deformations but also multiple-field coupling. In the first example, both the original MPM and the MPM with a mirror reflection boundary condition are used to calculate an axisymmetric problem. In the second example, the original MPM and MPM with DBD are compared to solve the large deformation problem. In both examples, numerical simulations are all occurring in the water medium, and the equation descriptions and the determined values of the material are employed according to [57]. All the simulations presented in this section are performed with a FORTRAN-based material point method. e calculations are processed on a personal computer with an Intel core Q6600 processor and 4 GB of RAM. e calculated results are described as follows. Mirror Reflection Boundary Condition (Axisymmetric Problem). In this numerical simulation, the explosive impact dynamic response of explosives in waters with approximate infinite size is numerically simulated, and the explosives are still TNT materials. In the symmetrical surface of the three-dimensional problem, the computational domain is full of water with 2 m × 2 m dimension. TNT is a cuboid of 0.02 m × 0.02 m dimension and located in the middle of the domain. A thick steel plate of 0.01 m × 2 m dimension is mounted on one side of the TNT, with the clamped-edge condition assumed. e model of the axisymmetric problem takes the upper part of the model of the 3D problem, which is shown in Figure 4. In the process of calculation, the grid size is 0.005 m × 0.005 m, material point size is 0.0025 m × 0.0025 m, and calculation time is 220 μs. e detonation product is described by the JWL state equation [58]: where A�3.738 × 10 5 MPa, B�3.75 × 10 3 MPa, R 1 � 4.15, R 2 � 0.9, and w � 0.35. e water medium and steel plate are described by the Mie-Grüneisen equation of state (EOS) [58]: where the parameter values for water and steel plates are shown in Table 1. e stress-strain relationship of steel is described by the Johnson-Cook constitutive model [58]: where the parameter values for water and steel plates are shown in Table 2. e damage failure of steel is described by the Johnson-Cook failure model [58]. where ε f is the destroying strain of materials, which can be described by where the values of D 1 -D 5 are 0.05, 3.44, -2.12, 0.002, and 1.61. 
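The displayed form of the JWL equation of state did not survive into this text, so, assuming its standard form p = A(1 − ω/(R1 V))e^(−R1 V) + B(1 − ω/(R2 V))e^(−R2 V) + ωE/V, with V the relative volume and E the internal energy per unit initial volume, the detonation-product pressure with the parameters quoted above can be evaluated as in the Python sketch below. This is only an illustrative reconstruction, not the authors' Fortran EOS routine, and the unit system is assumed consistent so that the result is in MPa.

import math

# JWL constants for the TNT detonation products quoted above (pressures in MPa).
A, B = 3.738e5, 3.75e3
R1, R2, OMEGA = 4.15, 0.9, 0.35

def jwl_pressure(V, E):
    # Standard JWL form evaluated for relative volume V and internal energy E
    # per unit initial volume (assumed unit system: pressures in MPa).
    return (A * (1.0 - OMEGA / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - OMEGA / (R2 * V)) * math.exp(-R2 * V)
            + OMEGA * E / V)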
Particles of diverse positions and materials are selected as observation points, and the pressure and internal energy of the observation points are recorded. e detonation point is located in the center of TNT and takes the detonation point coordinates as (0,0), and the eigenvalues of the observation points are shown in Table 3. Pressure-time curves and energy-time curves of the gauges are shown in Figures 5 and 6. Pressure information from different locations is also recorded. e coordinates of locations are shown in Table 4, and the pressure-time curves are shown in Figure 7. e results of the 3D problem and axisymmetric problem are basically exactly the same, and the peak and changing trends of pressure at the observation point are exactly the same. Because the explicit integration algorithm was used during the process of calculation, the updated time step will be different from 3D problems and 2D problems. As the calculation progresses, the results of the pressure-time curve and energy-time curve will also be different. e pressure contours of the whole model at different typical times are also shown in Figures 8-11. Dynamic Background Domain (Explosion and Impact Dynamic Response Problem). e problem of explosion impact dynamic response of TNT explosive to steel plate in a water medium is numerically simulated. e computational domain is full of water with 1 m × 1 m × 1m dimension. TNT is a cuboid of 0.02 m × 0.02 m × 1m dimension and located in the middle of the domain. A thick steel plate of 0.01 m × 1 m × 1 m dimension is mounted on one side of the TNT, with the clamped edge condition assumed. Because of symmetry, the analogy model can be further simplified as a 2D planar model, as shown in Figure 12. In Table 6. e results of pressure-time curves are shown in Figure 13 and Table 7. Also, the pressure-time change relationship at the same observation point is basically the same. e pressure contours of the whole model are shown in Figure 14. e result shows the characteristics of large deformation and multifield coupling, and to these two results, the radius of the detonation wave is about 43 cm and vacuum areas with a radius of about 6 cm are both formed in the center area of the explosive. Table 8 a quarter of times faster than that of the original MPM simulation, and the computation efficiency of the method with DBD is greatly increased. Hence, the MPM with DBD makes it more efficient and convenient to solve the large deformation and large-scale structure. Conclusions With the progress of science and technology and the development of computer technology, it is gradually becoming a trend to solve the problems of large deformation or fluid-solid coupling such as explosion and penetration by means of numerical simulation. When the Lagrangian algorithm is used to calculate the large deformation problems such as explosion detonation, there is a shortcoming of mesh distortion, which will make a great error in the calculation results, and it is difficult to calculate the fluid-solid coupling problem by the Euler algorithm, such as the material boundary and the historical deformation of the tracking material. As one of the nongrid algorithms, the material point method has high computational efficiency, and there are no mesh distortion problems, and the coupling conditions are automatically satisfied, so it has outstanding advantages in dealing with the problems of large deformation and fluidsolid coupling. 
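For completeness, the particle bookkeeping behind the DBD speed-up reported above can be summarized as follows: active particles are detected from changes in velocity or pressure, their bounding box is expanded by three background cells, and only particles inside that box are updated in the next time step. The Python sketch below is a schematic rendering of that procedure under assumed array-based data structures; the authors' actual implementation is the Fortran code of Algorithm 2.

import numpy as np

def dbd_mask(x_p, v_p, p_p, v0_p, p0_p, dx, pad_cells=3):
    # A particle is "active" when its velocity or pressure differs from the
    # initial value; the dynamic background domain is the bounding box of the
    # active particles expanded by pad_cells background cells on each side.
    # Returns a boolean mask of particles inside that domain (zone property
    # "0" in the paper's notation); only these are updated in the next step.
    active = np.any(v_p != v0_p, axis=1) | (p_p != p0_p)
    if not active.any():
        return np.zeros(len(x_p), dtype=bool)
    lo = x_p[active].min(axis=0) - pad_cells * dx
    hi = x_p[active].max(axis=0) + pad_cells * dx
    return np.all((x_p >= lo) & (x_p <= hi), axis=1)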
Based on the theory of the material point method and aimed at large deformation problems, this paper puts forward the concept of the dynamic background domain, introduces the mirror reflection boundary condition, studies the axisymmetric problem, implements a set of calculation programs for the axisymmetric problem in Fortran 95, and compares and validates the approach on concrete examples. While maintaining calculation accuracy, the method with the mirror reflection boundary condition saves about 85% of the calculation time when solving axisymmetric problems, and the method with DBD saves about 75% of the calculation time on the large deformation problem considered in this paper. (1) For the axisymmetric problem, introducing the specular (mirror) reflection boundary condition reduces the three-dimensional axisymmetric problem to a two-dimensional one without changing the original Cartesian coordinate system, improving the calculation efficiency while preserving the accuracy of the results. (2) This paper analyzes the characteristics of large deformation problems, puts forward the concept of the dynamic background domain, computes the influence area of a large deformation problem, updates only the particle information inside the affected area, and thereby improves the calculation efficiency while preserving the accuracy of the calculation results.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2021-05-08T00:04:14.555Z
2021-02-13T00:00:00.000
{ "year": 2021, "sha1": "be06553bef5f24c20266488da5c3de68a9281be5", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2021/8854318.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4474d1a719be6897ac851ba7720fa73b330ed8ad", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
1487187
pes2o/s2orc
v3-fos-license
Severe Insulin Resistance and Intrauterine Growth Deficiency Associated With Haploinsufficiency for INSR and CHN2 OBJECTIVE Digenic causes of human disease are rarely reported. Insulin via its receptor, which is encoded by INSR, plays a key role in both metabolic and growth signaling pathways. Heterozygous INSR mutations are the most common cause of monogenic insulin resistance. However, growth retardation is only reported with homozygous or compound heterozygous mutations. We describe a novel translocation [t(7,19)(p15.2;p13.2)] cosegregating with insulin resistance and pre- and postnatal growth deficiency. Chromosome translocations present a unique opportunity to identify modifying loci; therefore, our objective was to determine the mutational mechanism resulting in this complex phenotype. RESEARCH DESIGN AND METHODS Breakpoint mapping was performed by fluorescence in situ hybridization (FISH) on patient chromosomes. Sequencing and gene expression studies of disrupted and adjacent genes were performed on patient-derived tissues. RESULTS Affected individuals had increased insulin, C-peptide, insulin–to–C-peptide ratio, and adiponectin levels consistent with an insulin receptoropathy. FISH mapping established that the translocation breakpoints disrupt INSR on chromosome 19p15.2 and CHN2 on chromosome 7p13.2. Sequencing demonstrated INSR haploinsufficiency accounting for elevated insulin levels and dysglycemia. CHN2 encoding β-2 chimerin was shown to be expressed in insulin-sensitive tissues, and its disruption was shown to result in decreased gene expression in patient-derived adipose tissue. CONCLUSIONS We present a likely digenic cause of insulin resistance and growth deficiency resulting from the combined heterozygous disruption of INSR and CHN2, implicating CHN2 for the first time as a key element of proximal insulin signaling in vivo. T he genetic susceptibility to insulin resistance can involve the disruption of a single gene (e.g., INSR) or may involve the interplay of many genetic loci (including PPARG, FTO, HNF1B, etc.). However, there is currently only one known digenic disorder of insulin resistance resulting from mutations in peroxisome proliferator-activated receptor gamma (PPARG) and protein phosphatase 1 regulatory subunit 3 (PPPTR3A) (1). Compound heterozygous mutations in these genes, which are primarily involved in carbohydrate or lipid metabolism, respectively, can combine to produce a phenotype of extreme insulin resistance and lipodystrophy (1). Interestingly, individuals who possess only one mutation have normal insulin levels demonstrating that disruption of both genes and therefore pathways is necessary to result in disease (1). INSR encodes the insulin receptor with a key role in both major arms of the insulin signaling pathways, specifically the metabolic pathway mainly via IRS-1/Akt2/AS160 signaling and the growth pathway mainly via IRS-2/extracellular signal-related kinase (ERK) signaling (2). INSR mutations are the most common cause of monogenic insulin resistance and cause a clinical spectrum of disease ranging from type A insulin resistance to the most severe form of insulin receptoropathy, leprechaunism (also known as Donohue's syndrome) (3)(4)(5)(6). Growth is a complex biological process with multiple interacting pathways. The key pathways involved in growth include the insulin signaling pathway and the growth hormone/insulin-like growth factor pathway (7)(8)(9). 
Intrauterine growth is also regulated by multiple fetal and maternal factors, including genetic and epigenetic factors and various environmental factors (10). We describe a family with a reciprocal translocation [t(7,19)(p15.2;p13.2)] cosegregating with insulin resistance and pre- and postnatal growth deficiency. We demonstrate that the breakpoint on chromosome 19 disrupts INSR, causing monoallelic expression. Haploinsufficient INSR individuals have type A insulin resistance with no apparent severe growth deficiency (5). Given the short stature and intrauterine growth retardation also seen in this family, we hypothesized that this could be due either to complete loss of functional INSR protein (the second INSR allele harboring a mutation, or a dominant negative effect of a mutant INSR protein) or to a digenic syndrome caused by disruption of a second growth-related gene at the chromosome 7 breakpoint. The first two hypotheses were excluded by the demonstration of a normal INSR DNA sequence and monoallelic INSR expression, while further cytogenetic analysis established that the second breakpoint, on chromosome 7p15.2, disrupted CHN2, encoding β-2 chimerin. The demonstration that disruption of CHN2 affects both pre- and postnatal growth suggests that chimerins may play an important role in early growth and development.

RESEARCH DESIGN AND METHODS

We report a family with pre- and postnatal growth deficiency, insulin resistance, and early-onset diabetes (Table 1). The female proband was born at term weighing 1.85 kg (<5th centile), and all other intrauterine growth parameters were <5th centile (Table 1). Her growth was maintained below the 5th centile. She was hypoglycemic at birth and treated with intravenous 10% glucose infusion for 36 h. She has mild dysmorphic features with a triangular face, irregular teeth, hypertrichosis, a masculine appearance, and no history of miscarriage. The proband was diagnosed with diabetes requiring insulin treatment at age 15 years and since then has been on insulin therapy with variable dosage, notably requiring less insulin throughout pregnancy and dietary treatment alone for 2 months postnatally. She is currently managed with gradually increasing insulin doses (see supplementary methods Table 1, available in an online appendix at http://diabetes.diabetesjournals.org/cgi/content/full/db09-0787/DC1). Her fasting insulin and C-peptide levels are greatly elevated (Table 1). Karyotype analysis identified the following karyotype: [46, XX, t(7,19)(p15.2;p13.2)].

The proband's son was the first-born child of a nonconsanguineous marriage, delivered by caesarean section at 35 weeks' gestation. Amniocentesis at 16 weeks' gestation identified the following karyotype: [46, XY, t(7,19)(p15.2;p13.2)]. The son weighed 1.9 kg (<10th centile), and his length was <5th centile (Table 1). He has no dysmorphic features. He had recurrent neonatal hypoglycemia and was noted to have preprandial hypoglycemia and postprandial hyperglycemia at age 3 months. His fasting insulin and C-peptide levels were greatly elevated (Table 1). He is currently age 22 months and continues on dietary management; his growth trajectory remains below the 3rd centile. The proband's parents are nonconsanguineous and have no evidence of diabetes, growth retardation, or dysmorphic features. The heights of the proband's mother, father, and older sister are 162, 174, and 173 cm, respectively. Chromosomal analysis in both parents revealed a normal karyotype.

Biochemical analysis.
Insulin and C-peptide levels were assayed using a radioimmunoassay (Immunotech, Prague, Czech Republic). Adiponectin was also assayed using a radioimmunoassay (Linc, Millipore, U.K.), and normative data were acquired from 27 healthy control subjects.

Bioinformatic tools. The University of California Santa Cruz Web site (http://genome.ucsc.edu/), Online Mendelian Inheritance in Man (OMIM) (www.ncbi.nlm.nih.gov), and GeneSniffer (www.genesniffer.org) were used to identify and prioritize biological candidate genes within the region of both breakpoints.

Chromosome preparation, DNA isolation, and establishment of patient cell lines. Peripheral blood samples were obtained from the proband and her son, and conventional methods were used to prepare metaphase spreads for FISH analysis, extract genomic DNA, and establish an Epstein-Barr virus (EBV)-transformed lymphoblastoid cell line.

Sequencing. All 22 exons and exon-intron boundaries of INSR were amplified and sequenced on an ABI 3700 genetic analyzer (Applied Biosystems, Warrington, U.K.). Sequences were compared with the published sequence (NM_000208) using Mutation Surveyor version 3.4 (Softgenetics, Cambridge, U.K.). The entire cDNA sequence of INSR was amplified in nine overlapping fragments and sequenced as above. Coding sequences with known single nucleotide polymorphisms (SNPs) of CHN2 and adjacent biologically plausible genes (GHRHR, JAZF1, GRB10) were amplified and sequenced in patient genomic DNA. All primer sequences are available in the online supplementary methods.

RNA extraction, retrotranscription, and gene expression studies. RNA was extracted and retrotranscribed from patient EBV-transformed lymphoblastoid cell lines and from the proband's subcutaneous adipose tissue, which was obtained by needle biopsy using standard methods (see online supplementary methods). A commercially available RNA library from a standard panel of tissues (Human Total RNA Master Panel II) was purchased from Clontech (Saint-Germain-en-Laye, France). Gene expression analysis was performed by quantitative real-time PCR (qRT-PCR) on an ABI 7900HT analyzer using inventoried and designed assays (Applied Biosystems). Data were normalized to the mean of three housekeeping genes (11). Details of assays and housekeeping genes are provided in the online supplementary methods.

RESULTS

Karyotype analysis revealed an apparently balanced translocation [t(7,19)(p15.2;p13.2)] in both the proband and her affected son. The proband's unaffected parents had normal karyotypes, suggesting that the translocation arose de novo in the proband. We hypothesized that the breakpoints of this translocation disrupted one or more genes involved in the etiology of the insulin resistance and growth deficiency in both subjects. Bioinformatics identified a total of 67 genes within the breakpoint regions on chromosomes 19p13.2 and 7p15.2. The most plausible biological candidate was INSR on chromosome 19. Biochemical analysis showed raised insulin, C-peptide, insulin-to-C-peptide ratio, and adiponectin levels in both individuals, suggesting disruption of INSR (Table 1). FISH analysis of proband chromosomes using bacterial artificial chromosomes (BACs) and fosmids narrowed down the region of the breakpoint on chromosome 19 to an ~10-kb region entirely within the genomic sequence of INSR (Figs. 1A and 2A), predicted to be between exons 13 and 15. To investigate possible cryptic microdeletions within the INSR sequence, MLPA analysis was performed in both patients.
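Before continuing with the breakpoint-mapping results, the normalization used for the qRT-PCR comparisons ("normalized to the mean of three housekeeping genes") can be illustrated with a short sketch. This is a generic 2^-ΔΔCt-style calculation with invented Ct values; the study's exact normalization follows reference (11), and the numbers and names here are assumptions, not study data.

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Relative expression of a target gene, normalized to the mean of
    several housekeeping (reference) genes and expressed as a fold change
    versus a control sample, using a 2^-ddCt style calculation."""
    d_ct_sample = ct_target - np.mean(ct_refs)             # normalize the patient sample
    d_ct_control = ct_target_ctrl - np.mean(ct_refs_ctrl)  # normalize the control sample
    return 2.0 ** -(d_ct_sample - d_ct_control)

# hypothetical Ct values: a target gene in patient adipose-tissue cDNA versus a
# healthy control, each normalized to three housekeeping genes
fold = relative_expression(ct_target=26.8, ct_refs=[19.9, 20.4, 21.1],
                           ct_target_ctrl=25.5, ct_refs_ctrl=[20.0, 20.3, 21.0])
print(f"relative expression vs control: {fold:.2f}")  # values < 1 indicate reduced expression
```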
Taking into account the resolution limits of FISH, we decided to perform MLPA analysis on a broader DNA region spanning exons 11-17. A microdeletion between 2.4 and 6.6 kb in size, including exons 15 and 16, was detected (Fig. 2C). Direct sequencing of the entire coding region of INSR in patient genomic DNA excluded an INSR mutation. An informative heterozygous SNP (c.1650G>A; p.A550A) identified in patient genomic DNA was shown to be monoallelic in patient RNA, establishing INSR haploinsufficiency and excluding a potential dominant negative mutational mechanism (Fig. 2D). Reduced INSR expression in both patient-derived EBV cell line and adipose tissue cDNA was also demonstrated (Fig. 2E and F). To explain the clinical phenotype observed in this family and determine the mutational mechanism for the growth deficiency, we also mapped the translocation breakpoint on chromosome 7. A number of strong biological candidate genes for growth mapped to the region, including JAZF1 (12-14) and GHRHR (15). Disruption of both genes was excluded by FISH analysis (data not shown), while gene expression studies on patient EBV-transformed lymphocyte cell line-derived cDNA, compared with healthy control subjects, demonstrated that JAZF1 expression was not altered (see online supplementary results Fig. 1). GHRHR is expressed only in the pituitary, so it was not possible to investigate patient gene expression levels. To exclude a growth hormone-releasing hormone (GHRH) receptor defect, the proband underwent a GHRH-arginine stimulation test, which demonstrated a normal response (see online supplementary results Table 1). Further mapping on chromosome 7 restricted the breakpoint to a 25.3-kb region entirely within the genomic sequence of CHN2 (Figs. 1B and 2B). Gene dosage studies established that there was no loss of the coding region of CHN2 (data not shown). Monoallelic CHN2 expression could not be demonstrated, as no informative coding SNPs were identified in patient genomic DNA. Gene expression studies established that CHN2 is expressed in human brain and in insulin-sensitive tissues including liver, adipose tissue (subcutaneous and omental), and muscle (Fig. 3A). Expression analysis in patient adipose tissue-derived cDNA compared with healthy control subjects demonstrated reduced CHN2 expression (Fig. 3B). Given the proximity of the translocation breakpoint to an imprinted region on the chromosome implicated in Silver-Russell syndrome (SRS) and the possibility of a positional effect of the translocation on gene expression, we excluded involvement of GRB10, the SRS candidate gene in this region, by establishing that GRB10 gene expression levels were normal in patient adipose tissue (see online supplementary results Fig. 2) (16). Monoallelic GRB10 expression could not be confirmed due to a lack of informative coding SNPs in the GRB10 coding sequence.

DISCUSSION

Digenic causes of human disease are rare in the literature but present an opportunity to model possible gene-gene interactions, which may provide insights into common metabolic disorders including type 2 diabetes. We report a family with a novel reciprocal translocation [t(7,19)(p15.2;p13.2)] resulting in the first reported case of severe insulin resistance, diabetes, and growth deficiency resulting from the synergistic disruption of INSR and CHN2. It is well established that INSR mutations are the most common cause of monogenic insulin resistance. Most mutations are missense mutations in the β subunit that have dominant negative activity.
However, truncating mutations have a similar effect. Biochemical features supporting the diagnosis of an INSR defect in the family described include markedly elevated insulin and C-peptide levels, reflecting both severe insulin resistance and reduced insulin clearance (17), a raised insulin-to-C-peptide ratio (17), and raised adiponectin levels relative to the degree of insulin resistance in these subjects (18). This is confirmed by our genetic investigations, which demonstrated disruption of INSR by a reciprocal translocation and monoallelic INSR expression. However, our genetic studies at the INSR locus do not explain the severe pre- and postnatal growth deficiency observed in both subjects. Variable penetrance of INSR haploinsufficiency may result in phenotypic heterogeneity. Published evidence from the parents of children with Donohue's syndrome shows that haploinsufficient parents have a mildly deranged or normal metabolic phenotype, although some may have marked insulin resistance; however, this is usually in the presence of obesity or other risk factors (5,19). The family described shares some features with the more severe autosomal recessive insulin receptoropathies (Donohue's syndrome and Rabson-Mendenhall syndrome). Both affected individuals have an intermediate phenotype with neonatal hypoglycemia, severe insulin resistance, hyperglycemia in childhood, and intrauterine and postnatal growth deficiency; however, they have no evidence of premature mortality. The proband also has mild dysmorphic features with minimal subcutaneous fat, overdeveloped musculature (masculinization), and abnormal dentition, all features seen in severe autosomal recessive insulin receptoropathies. The genotype-phenotype correlation in patients with INSR mutations is likely to be affected by the mutation site and possible modifier loci (20). Moreover, mice that are double heterozygous for null alleles of insr and irs-1 (insr+/- irs-1+/-) display a synergistic effect on insulin resistance, with a 5- to 50-fold rise in insulin levels despite the expected ~50% reduction in the protein levels of INSR and IRS-1 (21), suggesting that gene-gene interactions may play a significant role in severe insulin resistance. In this family, we have the rare opportunity to define a possible modifier locus by mapping the second breakpoint of the translocation. Mapping of the breakpoint on chromosome 7 demonstrated disruption of CHN2, which encodes β-2 chimerin. Chimerins are ligand-activated Rac-specific GTPase-activating proteins (22), which are expressed in many human tissues, especially brain, pancreas, and insulin-sensitive tissues. Chimerins are downstream effectors of tyrosine kinase receptors and have been shown to regulate growth in an inhibitory manner via suppression of Rac and extracellular signal-related kinase phosphorylation (23). Decreased expression of CHN2 is associated with high-grade malignant gliomas, duodenal adenocarcinomas, and breast tumors (23), suggesting that chimerins are tumor suppressors; however, increased expression of CHN2 is reportedly associated with lymphomas (24). We therefore propose that reduced expression of CHN2, a gene with a known role in growth pathways, contributes to a novel digenic syndrome of insulin resistance and dysglycemia due to disruption of INSR combined with marked pre- and postnatal growth deficiency due to disruption of CHN2.
There is a possible role for phenotypic modification caused by disruption of regulation of other biologically plausible genes close to the chromosome 7p breakpoint as there are several strong biological contenders. Although these genes are a long way from the breakpoint, disruption of the spatial relationship between these genes and unknown regulatory elements, as seen with other translocations, cannot be excluded (25). GHRHR maps to chromosome 7p15.2, and mutations in GHRHR are a known cause of dwarfism of Sindh (15). However, a GHRH test performed on the proband confirmed a normal response, thereby excluding a defect in the GHRH receptor. JAZF1 on chromosome 7p15.1-15.2 is a recently described type 2 diabetes susceptibility gene with a known role in growth (12)(13)(14). However, JAZF1 gene expression levels are not altered in either the proband or her son. SRS has also been linked to chromosome 7p, and the minimal overlap region has been delineated to chromosome 7p11.2, ϳ25 Mb away from the breakpoint (26). This region contains multiple biological candidates with the strongest evidence supporting GRB10 (16). GRB10 encodes growth factor receptor binding protein 10, which is also known as insulin receptor binding protein and is maternally imprinted (16). Mice carrying maternally inherited targeted deletions of grb10 display a growth phenotype that recapitulates the SRS phenotype described in patients (16). However, our investigations have shown that there are no differences in GRB10 gene expression levels in the proband and her son compared with normal healthy control subjects. In summary, we describe a novel cytogenetic defect resulting in a syndrome of severe insulin resistance, dysglycemia, and pre-and postnatal growth deficiency. The simultaneous heterozygous disruption of INSR and CHN2 without loss of other candidate growth-related genes in the region suggests this represents a novel digenic disorder and implicates CHN2 for the first time in insulin's metabolic and growth-promoting actions in vivo.
2016-08-09T08:50:54.084Z
2009-08-31T00:00:00.000
{ "year": 2009, "sha1": "aaccdf1f3341d9067e9ac34e8f6b505f911e0347", "oa_license": "CCBYNCND", "oa_url": "https://diabetesjournals.org/diabetes/article-pdf/58/12/2954/393652/zdb01209002954.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "aaccdf1f3341d9067e9ac34e8f6b505f911e0347", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
14068506
pes2o/s2orc
v3-fos-license
Immature Stages and Hosts of Two Plesiomorphic, Antillean Genera of Membracidae (Hemiptera) and a new species of Antillotolania from Puerto Rico Abstract The nymphs of Antillotolania Ramos and Deiroderes Ramos are described for the first time, along with the first host record for the genus Antillotolania, represented by Antillotolania myricae, sp. n. Nymphal features of both genera, such as a ventrally fused, cylindrical tergum IX (anal tube), the presence of abdominal lamellae, and heads with foliaceous ventrolateral lobes confirm their placement in Membracidae and are consistent with phylogenetic analyses placing them in Stegaspidinae but in conflict with a cladistic analysis showing a closer relationship to Nicomiinae. Head processes and emarginate forewing pads in the last instars of both genera support an earlier estimate, based on nuclear genes, that the two genera form a monophyletic group in Stegaspidinae. Distinguishing features of the four species of Antillotolania are tabulated. Introduction Treehoppers of the family Membracidae are best known for an enlarged, often extravagant pronotum expanded posteriorly over the scutellum (completely or partially concealing the scutellum) or more usually over the entire body. Indeed, until recently this was a diagnostic feature of the family Membracidae. Deitz (1985) was the first since 1928 to include species in this family with pronota that did not project posteriorly -Hemicentrus Melichar (Leptocentrini) of the Old World and Abelus Stål (Abelini) of the New World. Both Hemicentrus and Abelus have a distinctly emarginate scutellum, which was characteristic of all membracids that have the pronotum expanded over, but not concealing the scutellum (extant Stegaspidinae and Centrotinae). The emarginate scutellum therefore suggests that the posteriorly projecting pronotum was secondarily lost in Hemicentrus and Abelus. Other treehoppers lacking a posteriorly projecting pronotum, but with an acuminate or truncate scutellum, were placed in the treehopper families Aetalionidae, Biturritiidae, and Nicomiidae (Metcalf and Wade 1965). Based on a the first phylogenetic analysis of the superfamily membracoidea , Deitz and Dietrich (1993) referred some nicomiid species to the treehopper family Aetalionidae but incorporated many of these taxa into a newly defined Membracidae as the subfamilies Endoiastinae and Nicomiinae, except for two genera for which they erected the new family Melizoderidae. They left four genera, previously placed in Nicomiidae (with short pronotum) unplaced within Membracidae: Holdgatiella Evans, Euwalkeria Goding, Antillotolania Ramos, and Deiroderes Ramos. Antillotolania and Deiroderes (Ramos 1957) are endemic to the northern Antilles. There are also a number of membracids without a posteriorly projecting pronotum known from Eocene-Miocene amber deposits from the Dominican Republic (McKamey 1998); none have been described but one was correctly identified (Shcherbakov 1992) as a member of the subfamily Stegaspidinae. Several attempts have been made to determine the phylogenetic placement of Deiroderes and Antillotolania within Membracidae. In a molecular phylogenetic investigation of Membracidae, found these two genera to be placed with Microcentrini (Stegaspidinae), although the subfamily was paraphyletic in that analysis. Dietrich et al. 
(2001) recovered, in a cladistic analysis of Membracidae based on morphological evidence, Deiroderes within the subfamily Stegaspidinae, whereas Antillotolania was placed as the sister group to (Nicomiinae + (Centronodinae + Centrodontinae)); statistical support for those placements was equivocal, however. In a separate phylogenetic analysis based on morphological evidence, Cryan et al. (2003) recovered [Deiroderes + Antillotolania] as the monophyletic sister-group to [Microcentrini + Stegaspidini]. Later, Cryan et al. (2004) presented the results of an analysis combining molecular and morphological evidence, yielding similar placements of Deiroderes and Antillotolania as in the study; they concluded that both Deiroderes and Antillotolania should remain unplaced within Stegaspidinae until further analysis could resolve these relationships. Cryan and Bartlett (2002) described two new species of Antillotolania but left the genus unplaced, noting conflicting hypotheses of relationship between the Dietrich et al. (2001) morphological analysis, which suggested it was allied to Nicomiinae, and that of , in which Antillotolania was most closely related to Deiroderes and some Stegaspidinae. They suggested that it may be warranted to expand the concept of Stegaspidinae to include both Antillotolania and Deiroderes. Cryan and Deitz (2002) described a new species of Deiroderes and a new genus, Togotolania, also from the Antilles that lacks a posteriorly projecting pronotum. They referred Deiroderes to unplaced Stegaspidinae and argued that their new genus most likely is allied to Nicomiinae. Both cladistic estimates incorporating morphology (Dietrich et al. 2001, Cryan et al. 2004) used features of immatures, hitherto unknown for Antillotolania and Deiroderes. Both genera are exceedingly rare in collections and no immatures were known. In the present paper we describe a new species of Antillotolania, with host and habitat based on multiple series of adults and immatures collected along the central mountain range of Puerto Rico and describe its immature stages. We also describe the fifth instar of Deiroderes, based on one specimen collected from the reported host and adjacent to the type locality of D. inermis Ramos in the xeric region of Guánica, Puerto Rico. We also discuss the subfamiy placement of the two genera in the light of the new evidence. Antillotolania Ramos Prior to this work, this genus contained three species: A. doramariae Ramos and A. extrema Cryan & Bartlett from Puerto Rico, and A. microcentroides Cryan & Bartlett from Guadeloupe and Tortola (British West Indies). These are represented by a total of seven specimens and nothing is known of their biology. No male of A. extrema has been collected. The new species is represented by 11 adult specimens and nymphs. The originally monotypic genus was described based on one female, lost, and one male, both from Maricao, Puerto Rico. The forewing venation, which contains phylogenetically important characters, differed in the two illustrations. In recent years, a few additional Antillotolania have been captured by sweeping vegetation in the U.S. Virgin Islands (J. Cryan, C. Bartlett, pers. comm.), which has enabled their incorporation into phylogenetic estimates using molecular data Cryan et al. 2004 Description. Dimensions (mm): Length with forewings in repose female 6.2, male 5.8, width between humeral angles female 1.8, male 1.6. Head and thorax densely pilose. 
Head quadrate in anterior view, in dorsal view with two subtriangular projections, longitudinally carinate behind eyes. Forewing (Fig. 9) M and Cu fused at base, 3 m-cu crossveins, 2 r-m veins, R branched into R 1-3 and R 4+5 basad of fork of vein M. Hindwing ( Fig. 10) with 1 r-m crossvein and 1 m-cu crossvein, cubital vein un-branched, anal vein branched. Pro-and mesothoracic legs lacking cucullate setae. Metathoracic tibia with cucullate setae in rows I, II, and III as follows: ca. 20 in row I along entire length; ca. 10 in row III throughout distal half; and fewer than 10, larger cucullate setae in row II irregularly spaced in conjunction with darkly pigmented sections of tibiae (pale row II edge densely pilose but setae without cucullate bases). Abdomen lacking abdominal lamellae, vestiture (see Dietrich 1989) consisting of microtrichia (Fig. 17), as in Microcentrus Stål. Nymph : Fifth instar length 6.2 mm. Densely pilose and dorsoventrally compressed throughout. Head with subtriangular projections directed anteriorly as in adult, in anterior view ventral margin carinate, excavated, with ventrolateral lobes, in lateral view posterolaterally emarginate. Thoracic nota lacking scoli. Forewing ventrally emarginate. Abdominal terga IV-VII with large lateral lamellae directed posterolaterally, smaller on IV and subequal on V-VIII; tergum IX fused ventrally, forming 'anal tube', length subequal to remaining terga combined in last instar, as long as remainder of abdomen and thorax combined in earlier instars. Terga III-VIII with 2 pairs of enlarged chalazae, the first near mid line and the second between the first and the abdominal lamellae. Tergum IX slightly wider at base, otherwise parallel-sided, completely fused ventrally. Nascent genitalia barely exceeding posterior limit of tergum VIII lamellae (Fig. 15) Host. All specimens collected from Myrica splendens (Sw.) DC., Myrtaceae, a weedy species of the West Indies, Mexico, Central and South America. Habitat. Moist highlands of Puerto Rico. Remarks. Based on our series of 11 adults and over 20 immatures, the venation and nymphal characters coded ambiguously in phylogenetic estimates of the family have been determined, as discussed below. No adults of the new species were obtained from rearing, but both adults and nymphs were repeatedly obtained from the same host at the same time, at a variety of localities, without finding any other membracids. Note that the male of A. extrema, if discovered, may have smaller suprahumeral horns, as evidenced by the strong sexual dimorphism exhibited by the new species. The first couplet in the key provided by Cryan and Bartlett (2002) divides species by the presence or absence of developed suprahumeral horns, hence the males and females of the new species would key out separately. The following table enables identification of adults of all species in the genus. Description. Nymph (fifth instar): Length 3.5 mm. Glabrous throughout. Head with small protrusions (Fig. 18) in same placement as the large subtrianglar projections of Antillotolania, in anterior view ventral margin carinate, head ventrally excavated, with foliaceous ventrolateral lobes (Figs 20, 21), in dorsal view quadrate, in lateral view not emarginate. Thoracic nota lacking scoli. Forewing emarginate. Abdominal terga IV-VII with large lateral lamellae, directed posterolaterally, smallest on tergum IV and increasing in size posteriorly; tergum IX fused ventrally, forming short 'anal tube', length about 2 × longer than tergum VIII. 
Terga III-VIII each with 1 pair of enlarged chalazae near mid line. Tergum IX dorsoventrally compressed. Nascent genitalia barely exceeding posterior limit of tergum VII lamella . Remarks. Caught sweeping a uniform stand of Capparis indica, a recorded host of D. inermis, adjacent to Guánica State Forest, which is the type locality of D. inermis. The Guánica area of Puerto Rico is arid and other membracids were previously unknown from there until L.L. Deitz (North Carolina State University) and SHM discovered Nessorhinus abbreviatus Ramos (Fig. 32) in February, 2007, on a different, unidentified host in February, 2007 and the fifth instars are several millimeters longer than that of D. inermis. Discussion The anal tube (a ventrally fused abdominal segment IX, from which the anal segments protrude when defecating) present in nymphs of both Antillotolania and Deiroderes is a diagnostic feature of Membracidae. The nymphs of both genera display many features characteristic of other cryptic membracid nymphs: a flattened body and large abdominal lamellae that break up their body outline and an emarginate forewing pad providing a space for the mesothoracic tibia to rest, increasing their crypsis, suggesting that the two genera are correctly placed in that family. In some other membracid immatures with emarginate forewing pads, the tibiae are also flattened, but this is not the case in Antillotolania or Deiroderes. Instead, these have a pronotum that is posterolaterally emarginate, providing a resting place for the prothoracic tibia as well, as also occurs in some other membracids, such as some Darninae. Placing Antillotolania and Deiroderes to subfamily and tribe is more problematic. The two possible subfamilies (with short pronotum) are Nicomiinae and Stegaspidinae. There are few nicomiine immatures known and all have been associated indirectly with adults due to the solitary nature of the species and difficulty of rearing adults. An illustration of a Tolania Stål nymph, which lacks any trace of abdominal lamellae, was provided by Dietrich et al. (2001). Thus, as far as known, nicomiine immatures lack abdominal lamellae. In contrast, all stegaspidines whose immatures are known, encompassing both Stegaspidini and Microcentrini, have well developed abdominal lamellae (Cryan and Deitz 1999a, 1999bCryan et al. 2003). The presence of foliaceous ventrolateral lobes on the head of both Antillotolania and Deiroderes also allies them with Stegaspidinae. The surface vestiture of the adult abdomen in Antillotolania and Deiroderes is shared with Microcentrus. This feature should not be construed as additional supporting evidence for their inclusion in Microcentrini or even Stegaspidinae because, firstly, other Membracoidea inside and outside the family Membracidae have the same character state and, secondly, the vestiture of Antillotolania and nicomiids were not examined in Dietrich's (1989) survey. The only known membracid nymphs with elongate, ventrally fused 'anal tubes' (Figs 12-16) are Tolania and Antillotolania, suggesting that this feature may be a synapomorphy and thus evidence of a nicomiine relationship. In a phylogenetic study, Dietrich et al. (2001) recovered Deiroderes in Stegaspidinae and correctly predicted character states of the immatures, including the synapomorphy of the subfamily (head with foliaceous ventrolateral lobes, Figs 18, 20-22). 
In contrast, Antillotolania was recovered as a sister-group to [Nicomiinae + Centronodinae], but the analysis incorrectly predicted several character states: the anal tube is cylindrical, not ventrolaterally angulate, there are two rows, not one, of enlarged chalazae on each side of the abdomen (Fig. 12) and the head has foliaceous ventrolateral lobes (Figs 11, 14) again, a synapomorphy of Stegaspidinae). Based on these findings the subfamily placement of the two genera treated here remains unclear. It may be that the head processes and emarginate forewing pads (Figs 13,16) found in Antillotolania, Deiroderes, and some other cryptic membracids (but not in stegaspidine immatures) are homologous, giving morphological support to the hypothesis of that these two enigmatic Antillean genera are sister-taxa.
2016-05-04T20:20:58.661Z
2013-05-17T00:00:00.000
{ "year": 2013, "sha1": "33843f37685ac0c56bc3def1d7b1a8c0158b3b03", "oa_license": "CCBY", "oa_url": "https://zookeys.pensoft.net/article/3129/download/pdf/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "33843f37685ac0c56bc3def1d7b1a8c0158b3b03", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
236606304
pes2o/s2orc
v3-fos-license
Famotidine and Celecoxib COVID-19 Treatment Without and With Dexamethasone; Retrospective Comparison of Sequential Continuous Cohorts

We seek to rapidly identify, test and develop combinations of repurposed drugs to enable cost-effective treatments that reduce the risk of disease or death from SARS-CoV-2 infection. We hypothesize that the morbidity and mortality of COVID-19 reflects overactive host inflammatory responses to infection and is not principally due to the primary direct cellular, organ and tissue damage attributable to viral infection. Stepwise clinical development has identified the combination of High Dose (HD) famotidine and celecoxib (famcox) as a promising adjuvant anti-inflammatory protocol. We now report results from a retrospective observational comparative cohort study designed to provide an estimate of the potential benefits, risks, prognosis and diagnostic laboratory findings associated with administration of dexamethasone in addition to famcox for treatment of newly hospitalized COVID-19 disease in a community hospital setting. Study enrollment was restricted to patients at WHO 4–5. In the group receiving adjuvant treatment with famcox without dexamethasone (active control) there were no deaths during hospitalization (0/18 = 0% mortality). A total of six deaths occurred in the group receiving famcox + dexamethasone (6/21 = 29% mortality). There was a significant difference in mortality between the two groups, χ²(1, N = 43) = 7.305, p < 0.007. Median time to event for reaching a WHO score of < 4 was 3.5 days in the control group (famcox (–) dex) versus 10 days for the experimental group (famcox (+) dex), P < 0.001. We conclude that use of the potent non-specific anti-inflammatory corticosteroid dexamethasone in addition to the specific anti-inflammatory famcox protocol should only be considered in late stage COVID-19 disease in patients less than 70 years of age. The effects of added dexamethasone on laboratory biomarkers, and particularly on neutrophil count, lymphocyte count, and neutrophil to lymphocyte ratio, raise concerns about the long-term effects of dexamethasone treatment with or without famcox during acute COVID-19 on the incidence and severity of chronic COVID ("long COVID" or PASC).

Introduction

Coronavirus Disease 2019 (COVID-19) develops in a subset of patients infected by the severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2). The estimated or measured incidence of COVID-19 disease in those with positive PCR tests for SARS-CoV-2 RNA varies. In some reports COVID-19 incidence may be as low as 20%, but disease incidence is generally considered to be approximately 50% of infected patients (Ing et al., 2020; Long et al., 2020; Mizumoto et al., 2020; Tian et al., 2020). Of those that do not develop clinical COVID-19, approximately half of PCR-positive asymptomatic patients eventually develop lung abnormalities which may be detected by computed tomography (Kronbichler et al., 2020). Therefore, SARS-CoV-2 infection is a necessary precursor, but infection alone does not predict either the risk or the severity of COVID-19 disease development. Our overall objective is to rapidly identify, test and clinically develop combinations of repurposed drugs to enable cost-effective treatments that reduce the risk of disease or death from SARS-CoV-2 infection. We hypothesize that the morbidity and mortality of COVID-19 is largely driven by overactive host inflammatory responses to infection, and is not principally due to direct cellular, organ and tissue damage attributable to viral infection.
This hypothesis is neither original nor unique to COVID-19; the predominant role of host inflammatory responses in other causes of acute respiratory distress syndrome pathophysiology is well documented (Wong et al., 2019). Rather than focusing on direct-acting antiviral drugs (a historically ineffective strategy for acute viral pneumonias (Ruuskanen et al., 2011)), we seek to identify and develop combinations of pharmaceutical agents with complementary mechanisms of action (MOA) that will mitigate the hyperinflammatory cascade triggered by SARS-CoV-2 infection. We propose that selecting pharmaceuticals which target more specific mechanistic pathways involved in inflammation will reduce off-target non-specific effects which may complicate clinical management, reduce efficacy and increase treatment-emergent adverse events. The decades of experience in developing therapeutic drug cocktails for AIDS guide our strategy; optimal solutions typically require multiple drugs with complementary MOA. In the United States during July 2020, public disclosure in the lay press, followed by publication of preliminary results concerning the use of dexamethasone in hospitalized patients with COVID-19 (Recovery_Collaborative_Group et al., 2020), triggered a surge of dexamethasone treatment in all phases of hospitalized COVID-19 (including the President of the United States). Like many others during this period, hospitalists practicing in our community hospital setting (Beloit Memorial, Beloit, WI) began combining dexamethasone with other treatment regimens for hospitalized COVID-19, including the binary (HD) famotidine + celecoxib adjuvant protocol previously pioneered at this location (Tomera et al., 2021). This created an opportunity to retrospectively collect, analyze and compare clinical data and outcomes from continuous consecutive cohorts of patients receiving adjuvant therapy that targets specific mechanistic pathways (histamine H1R and COX-2 activities) with or without the added effects of the non-specific, broad-spectrum anti-inflammatory dexamethasone (which does not act through COVID-19-specific mechanistic pathways). Herein we report outcomes of treating consecutive cohorts of hospitalized COVID-19 patients using High Dose (HD) famotidine plus celecoxib (famcox) treatment without or with added dexamethasone.

Study Purpose, Strengths and Limitations

The purpose of this retrospective observational comparative cohort study is to provide an initial estimate of the potential benefits, risks, prognosis and diagnostic laboratory findings associated with administration of dexamethasone in addition to HD famotidine and celecoxib (famcox) for treatment of newly hospitalized COVID-19 disease in a community hospital setting. In this study, the HD famotidine + celecoxib group serves as the (active) control for the HD famotidine + celecoxib + dexamethasone group. Addition of dexamethasone is the experimental variable. Study strengths include a) comparison of two continuous cohorts drawn sequentially from a single community hospital, b) reliance on objective rather than subjective outcome measures, and c) analysis and reporting of multiple clinically significant objective outcome variables. A key weakness is that this study design is not able to control for all factors (known, unknown and unknowable) that differ between the two cohorts at the point of enrollment into the study, and so is subject to uncontrolled confounding during group selection.
Reliance on a single hospital site and continuous accrual into each cohort based on date of admission may partially mitigate confounding due to selection bias.

Methods

Definitions of sample groups and primary outcome measures. The 25-patient active control group (HD famotidine + celecoxib) selected for this analysis has been previously summarized (Tomera et al., 2021). This group consists of a consecutive series of 25 cases of COVID-19 treated with the combination of HD famotidine + celecoxib (famcox) as adjuvant therapy to standard of care at the same hospital. From May 20, 2020 to August 5, 2020, all hospitalized patients (unless absolutely contraindicated) admitted to Beloit Memorial Hospital with a clinical diagnosis of COVID-19 as well as a positive RT-PCR test for SARS-CoV-2 received both HD famotidine 80 mg orally four times a day (QID) and a celecoxib oral (PO) loading dose of 400 mg followed by 200 mg PO two times a day (BID), beginning within 24 hours of admission. After the RECOVERY trial preliminary analysis publication concerning use of dexamethasone to treat COVID-19 (Recovery_Collaborative_Group et al., 2020) and changes in NIH guidelines, a second consecutive series of 25 cases of COVID-19 treated with dexamethasone and the combination of HD famotidine + celecoxib (famcox) as adjuvant therapy plus standard of care at the same hospital was identified and summarized. This series of cases serves as the experimental group for the analysis described herein. From August 21, 2020 to October 9, 2020, all hospitalized patients (unless absolutely contraindicated) admitted to Beloit Memorial Hospital with a clinical diagnosis of COVID-19 as well as a positive RT-PCR test for SARS-CoV-2 received dexamethasone intravenously (IV), either 4 mg every 12 hours or 6 mg daily, with both HD famotidine 80 mg oral (per os, or PO) four times a day (QID) and a celecoxib PO loading dose of 400 mg followed by 200-400 mg celecoxib PO two times a day (BID), beginning within 24 hours of admission. From each group of 25 sequentially admitted patients, only those admitted with a WHO score of ≥4 (8-point scale, see Table 1) were selected for inclusion in the subsequent comparative study analyses. Primary outcome measures were pre-defined prior to analysis. Two primary outcome analyses were performed: 1) statistical analysis of time to event for achieving a WHO score of ≤3 in each group (with correction for censoring due to death) and 2) comparative analysis of in-hospital mortality (prior to discharge). No follow-up outcomes were captured or analyzed after discharge from the hospital. For each patient, baseline and outcome laboratory test data as well as daily WHO category assessments during the hospital course were extracted from hospital records and entered into an Excel spreadsheet by one of the investigators (KT). Exploratory outcomes consisted of comparative analysis of the following objective test data at entry into the study and at discharge or death: a) D-dimer, b) eGFR (censoring for renal replacement therapy), c) serum creatinine (censoring for renal replacement therapy), d) percent lymphocytes as a fraction of total white blood cells, e) absolute lymphocyte count, f) neutrophil to lymphocyte ratio, g) serum C-reactive protein, h) serum lactate dehydrogenase, i) serum ferritin, and j) serum aspartate aminotransferase. This study has been conducted under approval of the Beloit Memorial Health System Institutional Review Board (IORG001171, file 20-05-1).
Data analysis

For both groups, patients were filtered for inclusion in study analyses based on WHO score at admission, with only those patients meeting the study inclusion criterion of WHO score ≥ 4 included in subsequent analyses. Eighteen of the original twenty-five consecutive cases (72%) in the control group (famcox without dexamethasone) met the analysis inclusion criteria, while twenty-one of the twenty-five consecutive cases (84%) in the experimental group (famcox with dexamethasone) met inclusion criteria. Demographic characteristics, risk factor characteristics, and laboratory test characteristics at entry (baseline) were summarized for both groups. Statistical comparison of the distribution of these characteristics between the two groups was performed using the Stata v14.2 software package, and both the test type (Mann-Whitney, Wilcoxon signed rank, t test (two arm), Fisher's exact, chi-squared) and the outcome of null hypothesis testing (P value) were summarized. Prior to comparative analysis, in the case of continuous data, each data group (control and experimental) was tested for normal distribution using a Shapiro-Wilk W test. Based on the outcome, either parametric or non-parametric statistical tests were employed for grouped data comparisons of these baseline characteristics. The primary time-to-event endpoint was graphically summarized as Kaplan-Meier curves for the two groups. Statistical comparison of the curves was performed using both log-rank (Mantel-Cox) and Gehan-Breslow-Wilcoxon testing, with correction for censoring due to death during hospitalization, using GraphPad Prism software v9.0.0 and the product-limit method. Hazard ratios were calculated using both methods. The null hypothesis test P value (two tailed) reflects the null hypothesis that the Kaplan-Meier time-to-event curves are identical in the overall population. The statistical analysis of the mortality primary endpoint was performed using Pearson's chi-squared test and the Stata v14.2 software package. Median time to event was defined as the time at which the Kaplan-Meier staircase survival curve crosses the 50% event rate. Grouped or pair-matched (admission vs discharge) laboratory data were also statistically compared, and (+/-) scores were assigned to each data point based on improvement or deterioration relative to baseline at study entry. The resulting data were grouped into related biomarker categories (inflammatory, hematologic, etc.) and plotted as heat maps using GraphPad Prism software v9.0.0.

Table 2 summarizes the baseline demographics, clinical characteristics, and grouped laboratory values for the control (famcox (-) Dex) and experimental (famcox (+) Dex) treatment groups. At entry into the study, 73% of the control group met WHO criteria for mild hospitalized COVID requiring oxygen (WHO 4) and 27% met WHO criteria for severe hospitalized COVID (WHO 5). In contrast, 67% of the experimental group met WHO criteria for mild hospitalized COVID requiring oxygen (WHO 4) and 33% met WHO criteria for severe hospitalized COVID (WHO 5). Fisher's exact test did not meet criteria for rejecting the null hypothesis of no difference between the two groups (P = 1.00). Mean age was 63.4 y (control) versus 64.5 y (experimental) (t test, P = 0.76).
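The baseline comparisons just quoted follow the decision rule described in the Data analysis paragraph above (Shapiro-Wilk normality check, then a parametric or non-parametric test; Fisher's exact test for categorical risk factors). A minimal sketch of that logic using scipy, with invented values rather than study data:

```python
import numpy as np
from scipy.stats import shapiro, ttest_ind, mannwhitneyu, fisher_exact

def compare_groups(control, experimental, alpha=0.05):
    """Grouped (unpaired) comparison of a baseline characteristic: test each
    group for normality with the Shapiro-Wilk W test, then use a two-arm
    t test if both look normal, otherwise a Mann-Whitney test."""
    control = np.asarray(control, float)
    experimental = np.asarray(experimental, float)
    normal = (shapiro(control).pvalue > alpha) and (shapiro(experimental).pvalue > alpha)
    if normal:
        stat, p = ttest_ind(control, experimental)
        return "t test (two arm)", stat, p
    stat, p = mannwhitneyu(control, experimental)
    return "Mann-Whitney", stat, p

# hypothetical ages (years) for the two cohorts, not the study data
print(compare_groups([55, 71, 60, 66, 58, 73, 62], [64, 70, 59, 75, 61, 68, 57]))

# categorical risk factors were compared with Fisher's exact test, e.g. a
# hypothetical 2 x 2 table of obesity (yes/no) by cohort
odds_ratio, p = fisher_exact([[9, 9], [11, 10]])
print(odds_ratio, p)
```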
Gender distribution was also well balanced between the two groups (59% male in control, 57% in experimental, P = 0.66), as were the pre-existing risk factors of body mass index (BMI; P = 0.81), obesity defined as BMI > 30 (P = 1.00), hypertension (P = 0.53), prior diagnosis of cardiovascular disease (P = 0.34), diabetes (P = 1.00), renal disease (P = 0.43), and asthma/COPD (P = 0.43). Although overall ethnic distribution differences between the two groups did not meet statistical significance (P = 0.87), the control group included a larger proportion of black (41%) and Hispanic (23%) patients relative to the experimental group (5% black, 10% Hispanic). No significant differences between the two groups were observed in baseline hematologic laboratory test values, inflammatory marker laboratory test values, serum D-dimer levels, renal function test values (eGFR and creatinine), or hepatic function laboratory test values.

Results

Time to event analysis. The times to event for progression to WHO category 3 or less (WHO < 4) for the control (famcox (-) dex; black line) and treatment (famcox (+) dex; red line) groups are summarized in Fig. 1 as Kaplan-Meier staircase curves. Curve comparison using the log-rank (Mantel-Cox) test yields a chi-squared value of 20.19 and a P value of < 0.0001, and supports the conclusion that the two curves are significantly different. Similarly, curve comparison using the Gehan-Breslow-Wilcoxon test yields a chi-squared value of 21.46 and a P value of < 0.0001, and also supports the conclusion that the two curves are significantly different. Median time to event for reaching a WHO score of < 4 was 3.5 days in the control group (famcox - dex) versus 10 days for the experimental group (famcox + dex). The hazard ratio (Mantel-Haenszel) for the treatment group (+ dex) relative to the control group (- dex) was 7.23 (95% confidence interval 3.05 to 17.14). The hazard ratio (log rank) for the treatment group relative to the control group was 3.43 (95% confidence interval 1.64 to 7.17).

Mortality analysis. Cumulative mortality during hospitalization in the two treatment groups was analyzed. In the group receiving adjuvant treatment with HD famotidine + celecoxib (without added dexamethasone), all enrolled patients were successfully discharged; there were no deaths during hospitalization (0/18 = 0% mortality). A total of six deaths were recorded during hospitalization in the group receiving HD famotidine + celecoxib + dexamethasone (6/21 = 29% mortality). Of these deaths, four had been admitted and enrolled at WHO grade 5, and two admitted and enrolled at WHO grade 4. Of those enrolled at WHO grade 4, death occurred at days 18 and 20 post enrollment. For those enrolled at WHO grade 5, death occurred at days 9, 11, 12, and 17. Chi-squared comparison of these outcomes was performed, with a 2 x 2 table comparison of outcome (discharged or deceased) and treatment group ((-) or (+) dexamethasone) (Stata v14.2). The results demonstrate that there was a significant difference in mortality between the two groups, χ²(1, N = 43) = 7.305, p < 0.007.

Comparison of oxygen usage over time. Oxygen usage, and the technology employed to administer oxygen, is one of the most straightforward and objective measures of clinical status for COVID-19, a disease with a strong component of compromised gas exchange influencing clinical outcomes. As demonstrated by Table 1, oxygen usage plays a major role in the initial eight-point WHO scale (or nine, if grade zero is included).
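Before returning to the oxygen-usage data, here is a rough sketch of the two primary-endpoint analyses reported above. The time-to-event part uses the lifelines package and invented durations (the authors used GraphPad Prism and the product-limit method; the study data are not reproduced here); the 2 x 2 mortality comparison uses the in-hospital counts quoted in the text (0/18 versus 6/21), and the exact statistic depends on the N used and on whether a continuity correction is applied (the authors report χ²(1, N = 43) = 7.305 from Stata).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import chi2_contingency

# --- time to WHO score < 4: hypothetical durations (days) and event flags ---
# event = 1 if WHO < 4 was reached, 0 if censored (e.g., death during hospitalization)
control = pd.DataFrame({"days": [2, 3, 3, 4, 4, 5, 6, 9], "event": [1] * 8})
dex = pd.DataFrame({"days": [6, 8, 9, 10, 12, 14, 17, 20],
                    "event": [1, 1, 1, 1, 1, 0, 1, 0]})

kmf = KaplanMeierFitter()
kmf.fit(control["days"], control["event"], label="famcox (-) dex")
ax = kmf.plot_survival_function()
kmf.fit(dex["days"], dex["event"], label="famcox (+) dex")
kmf.plot_survival_function(ax=ax)

lr = logrank_test(control["days"], dex["days"],
                  event_observed_A=control["event"], event_observed_B=dex["event"])
print("log-rank chi2 =", lr.test_statistic, "p =", lr.p_value)

# --- in-hospital mortality: 2 x 2 table from the counts quoted in the text ---
# rows: famcox (-) dex, famcox (+) dex; columns: discharged, deceased
table = [[18, 0],
         [15, 6]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)  # Pearson chi-squared
print(f"mortality comparison: chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```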
Figure 2 provides a simple plot of oxygen usage for enrolled patients over time, with censoring of those patients who became intubated and/or died. Only the group treated with HD famotidine + celecoxib + dexamethasone included patients who progressed to requiring intubation or to death (that is, WHO grade 6, 7 or 8). No patients in either group were initially enrolled at WHO grade 6 or 7.

Comparison of C-reactive protein levels over time. C-reactive protein (CRP) is both an acute inflammatory phase marker and an important prognostic marker for poor outcomes in acute respiratory distress syndrome (Ridker et al., 2008; Sharma et al., 2016), reflects a persistent state of inflammation when elevations are sustained (Bajwa et al., 2009), and is also a prognostic marker for COVID-19 (Potempa et al., 2020). Particularly when considered in the context of oxygen usage as a function of time, CRP provides an excellent moving summary of the overall status of pulmonary inflammation and compromised gas exchange in COVID-19 patients. In Fig. 3, changes in CRP over time are graphically summarized for each patient enrolled in the two comparison groups of this study; the famcox (-) dex group is plotted in black, and the famcox (+) dex group in red.

Biomarker analysis; enrollment and discharge

As described above in the methods section, laboratory biomarker test results were collected for each enrolled patient at study entry and as close to study exit (discharge or death) as available. For the purposes of this report, these results were clustered as follows: 1) inflammatory biomarkers, comprising CRP, lactate dehydrogenase (LDH), and ferritin; 2) primary hematologic biomarkers, comprising the lymphocyte fraction of total white blood cells, absolute neutrophil count, and absolute lymphocyte count, together with the derived parameter of the (absolute) neutrophil count to (absolute) lymphocyte count ratio (neutrophil to lymphocyte ratio or NLR), which was also calculated and analyzed; 3) the coagulation marker chosen for this analysis, the D-dimer value for each patient at admission to the study and at exit (discharge or death); 4) renal function biomarkers selected for evaluation, namely estimated glomerular filtration rate (eGFR) and serum total creatinine (creatinine); and 5) hepatic damage biomarkers, namely aspartate aminotransferase (AST) and total (direct and indirect) serum bilirubin.

Inflammatory Biomarkers. Serum LDH is recognized as an important surrogate biomarker for the activity and severity of the chronic lung diseases known as idiopathic pulmonary fibrosis and severe pulmonary interstitial disease (Kishaba et al., 2014). Serum ferritin is an acute phase reactant, and ferritin concentration on admission is an independent risk factor for disease severity in COVID-19 patients (Lin et al., 2020). In the cohort treated with HD famotidine + celecoxib (famcox without dexamethasone) as adjuvant therapy, grouped pairwise analysis of all three inflammatory biomarkers showed statistically significant improvement between enrollment and discharge (Table 3a). In the cohort treated with both famcox and dexamethasone, grouped pairwise analysis of all three inflammatory biomarkers failed to demonstrate statistically significant changes between enrollment and discharge (or death). In addition to pairwise comparison of results over time within each treatment group, results were compared between treatment cohorts at enrollment (Day 0) and separately at discharge or death (Table 3b).
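The within-cohort, enrollment-versus-exit biomarker comparisons summarized in Tables 3a and 4a are paired analyses; before turning to the between-cohort comparisons, here is a minimal sketch of that style of test (normality of the paired differences gates the choice between a paired t test and a Wilcoxon signed-rank test; the CRP values are invented, not study data):

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

def paired_biomarker_test(entry, exit_, alpha=0.05):
    """Paired admission-versus-exit comparison of one biomarker: check the
    paired differences for normality (Shapiro-Wilk), then use a paired t
    test or a Wilcoxon signed-rank test accordingly."""
    entry = np.asarray(entry, float)
    exit_ = np.asarray(exit_, float)
    diffs = exit_ - entry
    if shapiro(diffs).pvalue > alpha:          # differences look roughly normal
        stat, p = ttest_rel(entry, exit_)
        return "paired t test", stat, p
    stat, p = wilcoxon(entry, exit_)
    return "Wilcoxon signed-rank", stat, p

# hypothetical CRP values (mg/L) at enrollment and at discharge for one cohort
test, stat, p = paired_biomarker_test([112, 85, 64, 150, 98, 77],
                                      [12, 9, 20, 35, 8, 15])
print(test, round(stat, 3), round(p, 4))
```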
This comparison is designed to detect statistically significant differences in laboratory test findings between the treatment cohorts at the time of entry into and exit from the study (of necessity using grouped rather than paired data comparisons). In general, for a given sample size, the statistical power of either parametric or nonparametric statistical tests to detect differences between grouped data sets is lower than for paired data sets. In the case of the inflammatory biomarker data, the only comparison reaching statistical significance (p < 0.05) at study exit was serum LDH. The LDH comparison across treatment groups was close to meeting criteria for statistical significance between the two cohorts at entry into the study, so this statistical difference should be interpreted conservatively.

Hematologic biomarkers. Lymphopenia is a common finding in patients presenting with COVID-19 symptoms and may be associated with both disease severity and mortality (Chan et al., 2020). At Beloit Memorial Hospital, the normal range for the lymphocyte fraction as a percentage of white blood cells is defined as 30-50%. The absolute lymphocyte count normal range in this hospital laboratory is 1.0 x 10^3/microliter (low) to 4.8 x 10^3/microliter (high). Elevated levels of peripheral blood neutrophils are also associated with increased risk of progression to more severe COVID-19 (Elshazli et al., 2020). The absolute neutrophil count normal range in this laboratory is 2.4 x 10^3/microliter (low) to 8.1 x 10^3/microliter (high), with a lower bound for critical neutropenia of < 0.5 x 10^3/microliter. Multiple studies have demonstrated that the absolute neutrophil to lymphocyte ratio is a predictor of COVID-19 outcomes in hospitalized patients. A neutrophil/lymphocyte ratio of > 6 is characteristic of severe COVID-19 (Gottlieb et al., 2020). The neutrophil to lymphocyte ratio is plotted as log10 data to compensate for outlier values that distorted the overall violin plot and made it difficult to compare the distribution of the data values for each group. Results of comparative statistical analysis of the treatment groups are summarized in Table 4a and Table 4b. As shown in Table 4a, statistical comparison of hematologic biomarkers within the famcox (-) dexamethasone adjuvant treatment group at study entry and exit consistently demonstrates significant (P < 0.05) improvement in paired laboratory results. In contrast, in all cases, analysis of hematologic biomarker laboratory responses in the famcox (+) dexamethasone group fails to demonstrate statistically significant improvement in these parameters during the enrollment period. Noting the caveats discussed previously concerning statistical power when comparing grouped (rather than paired) data, as shown in Table 4b no significant differences in the analyzed hematologic biomarkers were detected between the (-) and (+) dexamethasone cohorts at the time of enrollment. Surprisingly, despite the statistical power limitations of comparing grouped data sets, lymphocyte fraction comparison (P = 0.0005), absolute neutrophil count comparison (P = 0.0001), and the neutrophil to lymphocyte ratio (P = 0.0019) all met criteria for statistically significant improvement at study exit in the famcox (-) dexamethasone group compared to the (+) dexamethasone cohort.
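As a concrete illustration of the derived NLR parameter and the log10 scaling used for the violin plots, a minimal sketch follows. The counts are invented for illustration and are not patient data from this study.

```python
import numpy as np

# Hypothetical absolute counts (x 10^3 cells/microliter) for a few patients;
# real values come from the hospital laboratory results described above.
neutrophils = np.array([6.2, 9.8, 3.1, 12.4])
lymphocytes = np.array([1.1, 0.6, 1.8, 0.4])

nlr = neutrophils / lymphocytes      # neutrophil to lymphocyte ratio
log_nlr = np.log10(nlr)              # log10 scaling used for the violin plots
severe_flag = nlr > 6                # NLR > 6 is characteristic of severe COVID-19

print(np.round(nlr, 2), np.round(log_nlr, 2), severe_flag)
```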
In contrast, comparative analysis of absolute lymphocyte count (P = 0.49) did not meet criteria for a significant difference between the treatment groups at study exit, suggesting that addition of dexamethasone to famcox adversely impacted the peripheral blood neutrophil to lymphocyte ratio predominantly through its impact on the peripheral blood neutrophil compartment.

Coagulopathy is frequently reported in COVID-19, and deep venous thrombosis and other pathologic coagulation events were present on admission in a subset of these patients. Figure 6 provides a graphical summary of the differences in D-dimer responses to famcox without (-) or with (+) co-administration of dexamethasone. D-dimer is a marker of fibrinolysis, and elevated D-dimer is a poor prognostic factor for COVID-19 (Tang et al., 2020a; Zhang et al., 2020; Zhou et al., 2020). Plasma D-dimer analysis is typically used for diagnosis of acute pulmonary embolism, deep vein thrombosis, intravascular coagulation and fibrinolysis, and disseminated intravascular coagulation. D-dimer levels < 1.56 have been associated with survival in hospitalized COVID-19, with levels > 1.56 favoring mortality (Wendel Garcia et al., 2020). At the clinical testing laboratory employed for this study, a D-dimer level of < 0.5 mg/L fibrinogen-equivalent units (FEU) is considered within the normal range, and > 1 mg/L FEU is considered critical. A summary of results from statistical comparison of the two treatment cohorts is provided in Table 5. As with the other biomarker evaluations, and as summarized in Table 5a, statistical analysis of the cohort adjuvant treated with famcox but without dexamethasone demonstrated a significant improvement in D-dimer laboratory values during the enrollment period (P = 0.0011). In contrast, with the addition of dexamethasone this significant improvement was not observed (P = 0.687). Table 5b summarizes grouped data comparison between the treatment cohorts. No significant difference was observed in D-dimer levels at entry into the study. At exit from the analysis, D-dimer values for the two sets of grouped data did meet statistical criteria for a significant difference (P = 0.019).

Based on prior COVID-19 literature and clinical experience, estimated glomerular filtration rate and total serum creatinine were selected as biomarkers for comparing the effects of famcox adjuvant treatment (with or without dexamethasone) on renal function. The estimated glomerular filtration rate (eGFR) for this comparison was calculated using the CKD-EPI equation, and when applicable the African American correction was implemented. In this hospital, GFR > 90 mL/min/1.73 m2 is considered within the normal range. Serum creatinine measurements are a key diagnostic biomarker employed in assessing acute kidney injury (AKI) (Kellum et al., 2013); criteria for the syndrome are met if there is an increase in serum creatinine by ≥ 0.3 mg/dL within 48 hours, or an increase in serum creatinine to ≥ 1.5 times baseline within the prior 7 days. In these cohorts, none of the patients treated with famcox (-) dex met criteria for AKI. Two of the patients (10%) within the group treated with famcox (+) dex did develop AKI and were placed on renal replacement therapy. For purposes of summary graphing (Fig. 7) and statistical comparisons (Table 6), renal function biomarker data from these two patients were not included and were considered censored.
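The serum creatinine criteria for AKI quoted above can be expressed as a simple check. This sketch is illustrative only: the function name and inputs are hypothetical, and urine-output criteria and clinical context are not considered.

```python
# Illustrative check of the serum creatinine criteria quoted above
# (rise >= 0.3 mg/dL within 48 hours, or >= 1.5 x baseline within 7 days).
def meets_aki_creatinine_criteria(baseline_7d: float,
                                  creatinine_48h_ago: float,
                                  creatinine_now: float) -> bool:
    absolute_rise_48h = (creatinine_now - creatinine_48h_ago) >= 0.3   # mg/dL
    relative_rise_7d = creatinine_now >= 1.5 * baseline_7d
    return absolute_rise_48h or relative_rise_7d

# Example: baseline 0.9 mg/dL a week ago, 1.0 mg/dL two days ago, 1.4 mg/dL now.
print(meets_aki_creatinine_criteria(0.9, 1.0, 1.4))  # True (both criteria met)
```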
Absent dexamethasone co-administration, the famcox cohort met statistical test criteria for significant improvement in both eGFR (P = 0.003) and serum creatinine (P = 0.036) (Table 6a), with a median eGFR improvement of 25 mL/min/1.73 m2. In contrast, the famcox cohort treated with dexamethasone did not meet statistical test criteria for improvement of either renal function biomarker. When comparing the two cohorts, no statistically significant difference was observed at entry. Ten percent of the patients on dexamethasone required dialysis and were censored for the statistical comparison analysis (Table 6b), further reducing the sample size for the (+) dexamethasone cohort and increasing the imbalance between the groups for the renal function comparisons.

Hepatic injury is also frequently observed in hospitalized COVID-19 patients (Lei et al., 2020). For high-level comparison of hepatic function in the famcox (-) dexamethasone and famcox (+) dexamethasone cohorts, aspartate aminotransferase (AST) and total (direct and indirect) serum bilirubin were selected for analysis. AST is present in both the cytoplasm and mitochondria of liver, heart, skeletal muscle, and kidney cells, and is released into the circulatory compartment after damage to any of these organs. Elevated serum AST is observed after myocardial infarction, acute liver cell damage, and viral hepatitis, among other conditions. In the case of COVID-19 (in contrast to most causes of hepatitis), AST levels are highly correlated with mortality relative to other liver injury markers (Lei et al., 2020). In this hospital, the upper boundary of the normal range for serum AST is 41 units/L. Bilirubin, a metabolic byproduct of red cell hemolysis, is typically used as a biomarker for hepatic blood filtration and secretory function. Elevated total bilirubin can be an indirect indicator of direct hepatic damage, or of either micro- or macro-scale biliary tree blockage. In the context of COVID-19, elevated total bilirubin levels have been associated with antifungal, antiviral, and systemic corticosteroid administration (Lei et al., 2020). Figure 8 provides a graphical summary of AST (Panel 8a) and total bilirubin levels (Panel 8b) at the time of entry into the study (Day 0) and at study exit (discharge). To maintain a legible linear violin plot that allows the range and distribution of values to be assessed, both summaries employ log10 transformation prior to plotting, due to scale distortion from outlier values. Table 7 provides a summary of serum AST and total bilirubin results obtained by statistical comparison of the two treatment cohorts, with Table 7a summarizing intra-cohort comparisons and Table 7b summarizing inter-cohort comparisons. As with all other analyzed biomarkers, there were statistically significant improvements in both AST (P = 0.0024) and bilirubin (P = 0.0080) levels during the enrollment period for the cohort treated with famcox (-) dexamethasone as adjuvant therapy (Table 7a). In contrast, adjuvant treatment with famcox (+) dexamethasone did not result in significant improvement of either marker during the study. Inter-treatment group analyses (Table 7b) did not detect statistically significant differences in either AST or bilirubin levels between the two cohorts at study entry.
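The intra-cohort (paired) and inter-cohort (grouped) biomarker comparisons described in this and the preceding sections can be illustrated with standard nonparametric tests. The specific tests used by the authors are not stated in this excerpt, so the choice of Wilcoxon signed-rank and Mann-Whitney U below, and the AST values shown, are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

# Hypothetical AST values (units/L); the real analyses used the cohort data
# described above, and the actual test choices are not stated in this excerpt.
entry_cohort_a = np.array([55, 80, 47, 95, 62])
exit_cohort_a  = np.array([30, 42, 35, 50, 38])
exit_cohort_b  = np.array([70, 88, 52, 110, 64])

# Paired (intra-cohort) comparison: entry vs exit within the same patients.
w_stat, w_p = wilcoxon(entry_cohort_a, exit_cohort_a)

# Grouped (inter-cohort) comparison: exit values from independent cohorts.
u_stat, u_p = mannwhitneyu(exit_cohort_a, exit_cohort_b, alternative="two-sided")

print(f"paired Wilcoxon p = {w_p:.3f}, grouped Mann-Whitney p = {u_p:.3f}")
```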
At exit from the study, no statistical difference between the cohorts was detected for AST levels, but a statistically significant difference (P = 0.0012) was observed between the groups for bilirubin, with mean and median levels of 0.69 and 0.6 mg/dL (respectively) in the cohort not treated with dexamethasone and mean and median levels of 1.21 and 0.8 mg/dL (respectively) in the cohort treated with dexamethasone.

Biomarker summaries categorized by WHO category at enrollment

To enable rapid review and comparative analysis of trends in biomarker responses between the two treatment groups as a function of disease severity at the time of enrollment, heatmap plots were developed for all patients in the famcox (-) dexamethasone (Fig. 9a) and famcox (+) dexamethasone (Fig. 9b) groups. To facilitate heatmap analysis, the pre- and post-treatment laboratory biomarker data were normalized to percent change from baseline, expressed as the log10-transformed ratio of pre-treatment value divided by post-treatment value. Log10 transformation was used to compress the value range, as occasional outlier values otherwise skewed the heatmap. After logarithmic transformation, the sign of the change (positive if improved, negative if deteriorated) was applied prior to plotting. The data were then sorted by WHO score at the time of admission into the study. Heatmap plots were generated for the risk factors of age and body mass index (BMI) as well as for the individual biomarker results for each patient. Laboratory biomarkers were clustered consistent with the groupings used above. The plots suggest a clear trend towards deterioration of biomarker results during the treatment course in the famcox (+) dexamethasone group relative to the famcox (-) dexamethasone group, but do not show a marked intragroup difference between those enrolled at a WHO score of 4 or 5.

Discussion

Our objective is to identify, test and clinically develop combinations of currently licensed pharmaceutical agents that show promise for reducing morbidity and mortality of acute and long-term (chronic) COVID-19. Ideally, these drug combinations would employ inexpensive, generic, off-patent oral pharmaceuticals that are readily manufactured, globally available, and thermostable. We have intentionally chosen not to file intellectual property claims on the combinations being tested, in the interest of enabling global access to these combinations of repurposed drugs. By analogy to the successful development of pharmaceutical agent combinations for treatment of AIDS, ideal combinations would incorporate drugs with complementary mechanisms of action, in the hope that this will reduce evolutionary escape opportunities for the diverse SARS-CoV-2 viral swarms generated during infection and replication in patients and populations. COVID-19 is a complicated multi-organ system disease which impacts a wide range of fundamental physiology and regulatory pathways (Curran et al., 2020; Gupta et al., 2020; Koralnik and Tyler, 2020). Our focus has been on mitigating the hyperinflammatory cascade symptoms of both acute and chronic COVID-19 (Gustine and Jones, 2021; Nalbandian et al., 2021). Building on our initial discovery of the activity and mechanism of action of the histamine H2 receptor inverse agonist famotidine for reducing COVID-19 symptoms (Malone, 2021; Malone et al., 2021), we have combined this agent with the COX-2 inhibitor celecoxib (Baghaki et al., 2020; Hong et al., 2020) to produce a binary treatment protocol (Tomera et al., 2021).
This yielded an adjuvant anti-inflammatory combination with a specific and well-characterized MOA which appears to provide potent clinical responses in hospitalized COVID-19 patients (Tomera et al., 2021). Both of these agents are off-patent, generic, inexpensive, and widely available, have been administered for many years to many patients, and have extensive safety and drug-interaction databases. MOA, dosing and safety considerations for these agents when used to treat COVID-19 have been investigated and discussed in preceding publications (Hong et al., 2020; KM Tomera, 2020; Malone et al., 2021; Tomera et al., 2021). In brief, one working hypothesis is that viral activation of COX-2 expression in infected cells (mediated by SARS-CoV-2 spike and nucleocapsid proteins (Yan et al., 2006; Liu et al., 2007)) drives production of elevated prostaglandin E2 (PGE2) and related signaling molecules, mast cell/basophil recruitment and degranulation, and a self-reinforcing positive feedback loop with some features of mast cell activation syndrome. B cell activation is also associated with increased COX-2 expression and elevated levels of PGE2; therefore, activated B cells could also be major or predominant drivers of elevated PGE2 levels (Kim et al., 2018; Ryan et al., 2005; Bryn et al., 2008). Elevated PGE2 is also associated with induction of FOXP3+ T cells (Treg). As COVID-19 disease progresses, FOXP3+ Treg cells become a major fraction of the overall CD4+ T cell population (Kaneko et al., 2020), correlating with and supporting the hypothesized role of COX-2/PGE2 in COVID-19 pathogenesis. Treg-mediated immunosuppression is a common characteristic of patients with a prolonged COVID-19 disease course (Tang et al., 2020b). Depleted germinal centers are found in lymphoid tissues from COVID-19 patients examined at autopsy, and an early specific block in Bcl-6+ TFH cell differentiation, combined with increased T-bet+ TH1 cells and aberrant extra-follicular TNF-α accumulation, may account for the dysregulated humoral immune response observed early in COVID-19 disease (Kaneko et al., 2020). PGE2 elevations in COVID-19 disease may also be a consequence of renal impacts of SARS-CoV-2 on the RAAS pathway (Curran et al., 2020), which is involved in homeostatic control of extracellular volume, arterial pressure, tissue perfusion, electrolyte balance, and wound healing (Atlas, 2007; Ingraham et al., 2020). Sensors located within macula densa cells (which reside within the juxtaglomerular apparatus) control glomerular perfusion (and thereby glomerular filtration rate) via production of adenosine and ATP (Kriz, 2004; Peti-Peterdi and Harris, 2010). These cells also produce COX-2, which in turn produces PGE2 and other prostaglandins. Juxtaglomerular cells in the afferent glomerular arteriole then release pro-renin in response to stimulation by prostaglandins. Pro-renin is then proteolytically converted to active renin, which cleaves angiotensinogen, setting the ACE/Ang II/AT1R cascade in motion. The SARS-CoV-2 receptor ACE2 functions as an endogenous inhibitor of the ACE/Ang II/AT1R pathway and opposes the vasoconstrictive, inflammatory, prothrombotic, and fibrotic effects associated with ACE/Ang II/AT1R activity, but ACE2 is downregulated upon binding and uptake during SARS-CoV-2 infection. Prolonged dysregulation of the RAAS system promotes renal and cardiovascular diseases (Munoz and Covenas, 2014).
In acute pulmonary infections, the RAAS pathway contributes to the development of acute respiratory distress syndrome (ARDS) and subsequent pulmonary fibrosis (Kuba et al., 2006), which is often associated with COVID-19. Whatever the principal drivers of COX-2 expression in COVID-19 and the associated elevated PGE2 levels, celecoxib is the only specific COX-2 inhibitor currently authorized for marketing by the US FDA. This combination of repurposed drugs is based on the labeled MOA of famotidine and celecoxib as anti-inflammatory agents. To distinguish it from suboptimal treatment protocols which typically employ inadequate doses of famotidine (Malone, 2021; Shoaibi et al., 2021; Sun et al., 2021), this binary drug combination is referred to as High Dose (HD) famotidine + celecoxib, or famcox. Dexamethasone, a glucocorticoid receptor agonist, is an anti-inflammatory corticosteroid with relatively high affinity for the alpha subunit of the glucocorticoid receptor (GRα). The potency of GRα agonists is typically measured based on suppression of lymphocyte proliferation, and their wide range of mechanisms of action may be grouped as genomic, non-genomic, and mitochondrial signaling pathway responses. GRα plays a central role in all phases of development and resolution of host inflammatory responses, including activation and reinforcement of innate immunity, downregulation of pro-inflammatory transcription factors, and restoration of anatomy and function. The three general pathway categories (genomic, non-genomic, mitochondrial) differ in sensitivity (EC50 or IC50) to agonist binding; glucocorticoids generally have higher potency for genomic (i.e., transcriptional regulation) than for non-genomic effects. Genomic effects include regulation of a wide variety of genes, including those involved in histamine metabolism (Juszczak and Stankiewicz, 2018). Non-genomic effects include inhibition of prostaglandin E2 (PGE2) and arachidonic acid release through suppression of phospholipase A2 synthesis. Therefore, the broad-spectrum, dose-dependent, relatively non-specific clinical activity of dexamethasone overlaps with the specific activities of famotidine and celecoxib. Corticosteroids have a profound effect on the concentration of peripheral blood leukocytes. Lymphocyte, monocyte, and basophil counts decrease in response to corticosteroid administration, while neutrophil counts increase. The peak effects are seen within 4 to 6 hours after a dose of corticosteroid. Glucocorticoids, including dexamethasone, also have profound effects on cellular functions of leukocytes and endothelial cells, which in aggregate reduce the ability of leukocytes to adhere to vascular endothelium and exit from the circulation, leading to a neutrophilia. Entry to sites of infection and tissue injury is impaired, resulting in suppression of the inflammatory response (Fauci et al., 1976; Fauci et al., 1980; Boumpas et al., 1993). Functional properties of monocytes, including phagocytosis and microbial killing, are particularly susceptible to glucocorticoid suppression at clinically relevant doses. The observed reduction in endothelial adhesion may be due to direct effects of glucocorticoids on expression of adhesion molecules on both leukocytes and endothelial cells, as well as indirect effects due to the inhibitory effects of glucocorticoids on transcription of cytokines, such as IL-1 or tumor necrosis factor (TNF), which upregulate endothelial adhesion molecule expression.
The prospective randomized RECOVERY platform clinical trial previously examined the use of dexamethasone in hospitalized COVID-19 patients (Recovery_Collaborative_Group et al., 2020; 2021), and the two interim reports of study outcomes have triggered widespread use of dexamethasone for treatment of COVID-19 throughout much of the world, including the United States. The initial interim result publication was criticized for "limitations and cause for caution" (Mahase, 2020), specifically: "The authors have used relative reductions and chosen the subgroup with the biggest benefit to generate a headline of a one third reduction in deaths. The subgroup analysis was not specified in the trial registry and may be misleading." After age adjustment, the preliminary second interim analysis of the RECOVERY dexamethasone data indicated a beneficial effect on 28-day mortality for patients receiving invasive mechanical ventilation (29.3% vs. 41.4%; 12.1%), a modest impact on 28-day mortality for those receiving oxygen without invasive mechanical ventilation (23.3% vs. 26.2%; 2.9%), and no statistically significant effect on those who were receiving no respiratory support at randomization (17.8% vs. 14.0%; -3.8%). These apparent benefits were restricted to those less than 70 years old; for patients aged greater than 70 years, no statistically significant benefit of dexamethasone adjuvant treatment on 28-day mortality was observed. No statistically significant 28-day mortality benefits were observed for enrolled patients identified as "white" race; benefits were observed only among patients identified as "Black, Asian, of minority ethnic group" or of unknown race. Similarly, subgroup analysis by gender supported a statistically significant benefit for males only. Adjuvant treatment with dexamethasone was not associated with reduced 28-day mortality among those reporting less than 8 days of COVID-19 symptoms; reduction in 28-day mortality was restricted to those with a longer period since symptom onset. Secondary outcome findings from this second interim analysis included a one-day shorter duration of hospitalization (median, 12 days vs. 13 days) and a greater probability of discharge alive within 28 days (67.2% vs. 63.5%; 3.7%), with the largest effect observed in patients receiving invasive mechanical ventilation at randomization. Demonstration of these statistically significant findings required a large patient population: 2,104 patients were enrolled in the dexamethasone + usual care arm, and 4,321 in the usual care arm. Despite the large study population, demonstration of statistical significance required age adjustment; without age adjustment, statistical significance was not reached. Risk of progression to invasive mechanical ventilation was also lower in the dexamethasone group relative to standard of care. Not yet reported for this trial are the effects of dexamethasone on cause-specific mortality, the need for renal dialysis or hemofiltration, and the duration of ventilation. Unknown at this time are the long-term effects of dexamethasone treatment of COVID-19 patients, including the risk for development of the post-acute COVID syndrome (long COVID or post-acute sequelae of SARS-CoV-2 infection, e.g. PASC) after broad-spectrum immunosuppression during the acute phase of disease.
In many US hospital settings, subsequent to publication of the RECOVERY trial data, dexamethasone has become standard of care at all stages of hospitalized COVID-19 disease, and this broad-spectrum immunosuppressive agent is often administered to non-hospitalized patients who are not receiving supplemental oxygen. This over-use and broadening of indications beyond those supported by current clinical data is reminiscent of the prior widespread global consensus concerning use of hydroxychloroquine for treating both outpatient and inpatient COVID-19. A comprehensive meta-analysis of randomized clinical trial data involving use of corticosteroids for COVID-19 patients being treated in intensive care units (including data from the RECOVERY trial) demonstrated that "patients with COVID-19 who might benefit from corticosteroids therapy are a small minority". This meta-analysis report concludes that "use of corticosteroids may be considered in severe critically ill patients with COVID-19 but must be discouraged in all patients who do not require oxygen support. Given the small effect on survival of critically ill patients, they must be compared with other anti-inflammatory drugs" (Pasin et al., 2021). Broad-spectrum, non-specific immunosuppression for treating COVID-19 hyperinflammation may dampen the effects of cytokine storm, but does so at the expense of anti-viral Type 1 interferon responses and adaptive cellular and humoral immunity (Thomas et al., 2014; Singanayagam et al., 2018; Ritchie and Singanayagam, 2020). The abrupt introduction of dexamethasone into the usual management of hospitalized COVID-19 at Beloit Memorial Hospital (which included adjuvant famcox in most cases) created a unique opportunity to perform a natural experiment: a retrospective comparison study of sequential, continuous, hospitalized COVID-19 cohorts treated without or with added dexamethasone. In the hierarchy of clinical evidence upon which clinical management decisions should be based, randomized controlled trials (Level I) are followed by cohort studies (Level II), case-control studies (Level III), case series (Level IV), and expert opinion (Level V) (Brighton et al., 2003). Herein, we report a retrospective observational comparative cohort study, which is one form of a case-control study (Level III). The clinical data reported herein provide an initial analysis of the effects of dexamethasone treatment in addition to the anti-inflammatory adjuvant combination of HD famotidine + celecoxib. Given the modest number of patients enrolled in this comparative retrospective study, it is remarkable that the addition of dexamethasone to famcox and usual care yields such striking, statistically significant differences in the time required to reach improvement in clinical status to WHO < 4 (3.5 days vs. 10 days), overall mortality (0% vs. 29%), and a wide range of laboratory biomarker analysis results. Particularly notable is that the clear and consistent positive trends in biomarkers relevant to COVID-19 prognosis observed with famcox adjuvant treatment are no longer present when dexamethasone is added to the treatment regimen (Figs. 3 through 8 and Tables 3 through 7). Dexamethasone, like other glucocorticoids, causes leukocytosis. This clinical response is dominated by increased numbers of circulating neutrophils; in clinical practice, glucocorticoids inhibit lymphocyte proliferation and function. This side effect of dexamethasone is more than theoretical.
COVID-19-related neutrophilia and lymphopenia, typically measured by the neutrophil to lymphocyte ratio (NLR), are associated with morbidity and mortality (Gottlieb et al., 2020; Qin et al., 2020). Even with these small cohorts, comparative laboratory biomarker analysis of lymphocyte fraction (P = 0.0005), absolute neutrophil count (P = 0.0001), and the neutrophil to lymphocyte ratio (P = 0.0019) all met criteria for statistically significant improvement at study exit in the famcox (-) dexamethasone group compared to the (+) dexamethasone cohort. These data suggest that addition of broad-spectrum pharmaceutical immunosuppression using corticosteroids must indeed be "compared with other anti-inflammatory drugs", and that the targeted famcox antihistamine/COX-2 anti-inflammatory strategy should not be combined with potent broad-spectrum, non-specific immunosuppression when treating COVID-19. However, conclusive analysis of this preliminary observation will require larger prospective randomized clinical trials. A Phase 2 open-label randomized clinical trial of HD famotidine, celecoxib, dexamethasone and remdesivir in severe hospitalized COVID-19 is currently enrolling (I-SPY COVID-19 trial, NCT04488081). In this trial, the sponsor (Quantum Leap Healthcare Collaborative) determined that the data summarized above were insufficient to justify withholding dexamethasone in the selected patient population, based on the RECOVERY trial findings, and has proceeded with assessing the treatment protocol together with dexamethasone. Results of this clinical trial are pending, and may confirm the effects of adding dexamethasone to HD famotidine + celecoxib when treating hospitalized COVID-19 patients reported herein. This study builds upon previously published original research, a case report, and a case series report concerning use of High Dose famotidine for COVID-19 treatment (Janowitz et al., 2020; Malone et al., 2021), a prospective randomized controlled trial of celecoxib for COVID-19 treatment (Hong et al., 2020), and a detailed case series and reports involving COVID-19 treatment with the combination of High Dose famotidine + celecoxib (Tomera et al., 2021). A key weakness of this comparative consecutive cohort analysis is that the study design is not able to control for all factors (known, unknown and unknowable) that differ between the two cohorts at the point of enrollment into the study, and so is subject to uncontrolled confounding during group selection. Another weakness is the absence of data from this site involving standard of care with added dexamethasone but without famotidine and celecoxib treatment, as well as the absence of data involving standard of care alone (during this timeframe) without these adjuvant treatments. Because this is a retrospective comparison of findings from a natural experiment which occurred due to the ongoing rapid evolution of COVID-19 treatment protocols during the current crisis, historic data involving outcomes from standard of care prior to the initial introduction of famotidine and celecoxib into this community hospital setting do not represent current best management practices in this setting and timeframe. As a consequence, analyses regarding the full effects of dexamethasone in addition to standard of care in this hospital setting and with this population cannot be performed. Reliance on a single hospital site and continuous accrual into each cohort based on date of admission may partially mitigate confounding due to selection bias.
A rigorous assessment of the relative effects of HD famotidine + celecoxib (famcox) versus standard of care (with or without dexamethasone) will require a prospective, placebo-controlled, blinded and randomized trial. Based on these accumulated findings, one Phase 2 open-label randomized controlled trial has begun enrollment (NCT04488081), and two additional well-powered prospective, blinded, randomized controlled trials are in late-stage planning (one in an outpatient population, one in newly admitted inpatients).

Conclusion

The overall objective of this retrospective observational comparative cohort study was to provide an initial estimate of the potential benefits, risks, prognosis and diagnostic laboratory findings associated with administration of dexamethasone in addition to High Dose (HD) famotidine and celecoxib (famcox) for treatment of newly hospitalized COVID-19 disease in a community hospital setting. In this study, the HD famotidine + celecoxib group serves as the active control for the HD famotidine + celecoxib + dexamethasone group; addition of dexamethasone is the experimental variable. Study strengths include a) comparison of two continuous cohorts drawn sequentially from a single community hospital, b) reported outcomes that rely on objective rather than subjective measures, and c) analysis and reporting of multiple clinically significant objective outcome variables. A key weakness is that this study design is not able to control for all factors (known, unknown and unknowable) that differ between the two cohorts at the point of enrollment into the study, and so is subject to uncontrolled confounding during group selection. Reliance on a single hospital site and continuous accrual into each cohort based on date of admission may reduce this source of selection bias. A second key weakness is the absence of cohorts treated with standard of care with or without dexamethasone; lack of these controls precludes comparison to mortality and time-to-event outcomes under those conditions. We report characteristics, analysis and outcomes of this natural clinical experiment comparing the effects of HD famotidine + celecoxib in addition to usual care to the effects of HD famotidine + celecoxib + dexamethasone in addition to usual care. Patient inclusion criteria and primary outcome analyses were pre-defined. Primary analysis findings included statistically significant differences in the time to event for progression to WHO category 3 or less (WHO < 4) between the control (famcox (-) dex) and treatment (famcox (+) dex) groups [Log-rank (Mantel-Cox) test chi-squared value of 20.19, P value of < 0.0001; Gehan-Breslow-Wilcoxon test chi-squared value of 21.46, P value of < 0.0001]. Median time to event for reaching a WHO score of < 4 was 3.5 days in the control group (famcox (-) dex) versus 10 days for the experimental group (famcox (+) dex). The Hazard Ratio (Mantel-Haenszel) for the treatment group ((+) dex) relative to the control group ((-) dex) was 7.23 (95% confidence interval of 3.05 to 17.14). The Hazard Ratio (log rank) for the treatment group relative to the control group was 3.43 (95% confidence interval of 1.64 to 7.17). Differences in cumulative mortality during hospitalization between the two treatment groups were also significant. In the group receiving adjuvant treatment with HD famotidine + celecoxib (without added dexamethasone), all enrolled patients were successfully discharged; there were no deaths during hospitalization (0/18 = 0% mortality).
A total of six deaths were recorded during hospitalization in the group receiving HD famotidine + celecoxib + dexamethasone (6/21 = 29% mortality). Of these deaths, four had been admitted and enrolled at WHO grade 5, and two had been admitted and enrolled at WHO grade 4. Of those enrolled at WHO grade 4, death occurred at days 18 and 20 post enrollment. For those enrolled at WHO grade 5, death occurred at days 9, 11, 12, and 17. Chi-squared comparison of these outcomes was performed, with a 2 x 2 table comparison of outcome (discharged or deceased) and treatment group ((-) or (+) dexamethasone) (Stata v14.2). Results demonstrate that there was a significant difference in mortality between the two groups, χ2 (1, N = 43) = 7.305, p < 0.007. We conclude from both this study and the RELIANCE study data that use of the potent non-specific anti-inflammatory corticosteroid dexamethasone in addition to the specific anti-inflammatory HD famotidine and celecoxib protocol should only be considered in late-stage COVID-19 disease in patients less than 70 years of age. Furthermore, the effects of added dexamethasone on laboratory biomarkers, and particularly on neutrophil count, lymphocyte count, and neutrophil to lymphocyte ratio, raise concerns about the long-term effects of dexamethasone treatment (with or without famcox) during acute COVID-19 on the incidence and severity of chronic COVID ("long COVID" or PASC), which may at least partially be attributable to dysregulation of the myeloid and lymphoid compartments during acute COVID-19 (De Biasi et al., 2020; Duan et al., 2020; Kaneko et al., 2020; Schulte-Schrepping et al., 2020; Vabret et al., 2020; Perez-Gomez et al., 2021; Su et al., 2021). If dexamethasone is to be considered for use together with famcox in the narrowly defined cohort of severe hospitalized COVID-19 patients aged less than 70 years, as supported by the RECOVERY trial data, then laboratory biomarker findings (in particular the neutrophil to lymphocyte ratio) should be carefully monitored.

Author Tomera was employed by the Beloit Memorial Hospital Department of Urology and declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Author Leo Egbujiobi was employed by the Beloit Memorial Hospital Cardiology Department and declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Author Joseph Kittah was employed by the Beloit Memorial Hospital Pulmonary Department and declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. No intellectual property, patent rights or commercialization

Figure 3. Comparison of serum C-reactive protein concentrations over time. Combined line plot of serum C-reactive protein biomarker concentrations as a function of time for both control (famcox without dexamethasone, black lines) and experimental (famcox with dexamethasone, red lines) treatment groups.

Figure 9. Heatmap of individual biomarker analysis. Figure 9a (top panel) provides an overview heatmap plot of control group (famcox without dexamethasone) biomarker data, and Figure 9b summarizes the experimental cohort (famcox with dexamethasone) biomarker results.
To facilitate heatmap display, the pre- and post-treatment laboratory biomarker data were normalized to yield percent change relative to baseline, expressed as log10-transformed ratios of pre-treatment value divided by post-treatment value. Log10 transformation was used to compress the value range and limit distortion from occasional outlier values.
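A minimal sketch of the signed log10 normalization described in this caption and in the methods above follows. The per-biomarker direction-of-improvement flag is an assumption, since the text does not spell out how that encoding was implemented.

```python
import numpy as np

# Signed log10 normalization: log10(pre / post), with the sign set so that
# positive means improved and negative means deteriorated.
# 'lower_is_better' is an assumed per-biomarker flag (e.g. True for CRP or
# D-dimer, False for eGFR or lymphocyte count).
def signed_log10_change(pre: float, post: float, lower_is_better: bool) -> float:
    magnitude = abs(np.log10(pre / post))
    improved = (post < pre) if lower_is_better else (post > pre)
    return magnitude if improved else -magnitude

# CRP falling from 120 to 12 mg/L maps to +1.0;
# eGFR falling from 90 to 45 mL/min/1.73 m^2 maps to about -0.3.
print(signed_log10_change(120, 12, lower_is_better=True))
print(round(signed_log10_change(90, 45, lower_is_better=False), 3))
```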
2021-08-02T00:05:21.654Z
2021-05-17T00:00:00.000
{ "year": 2021, "sha1": "6a9f1d5b7695cffb553f9f4d5c4f38f4b7721ffc", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-526394/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "df2008a23a10626dffdb3401003469bf68728439", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16305163
pes2o/s2orc
v3-fos-license
Prospective evaluation of Ki-67 system in histological grading of soft tissue sarcomas in the Japan Clinical Oncology Group Study JCOG0304

Background: The correct clinical staging of soft tissue sarcomas (STS) is critical for the selection of treatments. The staging system relies on the histological grade of the tumors, and the French Federation of Cancer Center (FNCLCC) system, based on mitotic count, is widely used for grading. In this study, we compared the validity and usefulness of the Ki-67 grading system with the FNCLCC system in the JCOG0304 trial, which investigated the efficacy and safety of perioperative chemotherapy with doxorubicin and ifosfamide for STS.

Methods: All 70 eligible patients with STS in the extremities treated by perioperative chemotherapy in JCOG0304 were analyzed. Univariate and multivariate Cox regression analyses were conducted to investigate influences on overall survival.

Results: The reproducibility of the Ki-67 grading system in the histological grading of STS was higher than that of the FNCLCC system (κ = 0.54 [95 % CI 0.39–0.71] and 0.46 [0.32–0.62], respectively). Although FNCLCC grade was not associated with overall survival (OS) in univariate analysis (HR 2.80 [0.74–10.55], p = 0.13), the Ki-67 grading system had a tendency to associate with OS in univariate analysis (HR 4.12 [0.89–19.09], p = 0.07) and in multivariate analysis with backward elimination (HR 3.51 [0.75–16.36], p = 0.11).

Conclusions: This is the first report demonstrating the efficacy of the Ki-67 grading system for patients with STS in a prospective trial. The results indicate that the Ki-67 grading system might be useful for the evaluation of histological grade of STS.

Background

Soft tissue sarcomas (STS) in adults are rare malignant tumors, and the incidence of STS is approximately 1 % of all malignant tumors. According to the Soft Tissue Tumor Registry reported by the Musculoskeletal Tumor Committee of the Japanese Orthopaedic Association, only 1540 cases of STS were registered in 2012 in Japan [1]. The prognosis and standard treatments of STS differ across the clinical stages of the tumor. The American Joint Committee on Cancer (AJCC)/International Union against Cancer (UICC) staging system is the most widely used for the staging of STS [2]. Surgical resection of the tumor with or without radiotherapy is the standard treatment and is highly successful for stage I and II STS, and systemic chemotherapy with a doxorubicin (DOX)-based regimen is standard for stage IV STS [3]. The standard therapeutic modality for stage III STS is mainly surgical resection; however, systemic perioperative chemotherapy with DOX plus ifosfamide (IFO) is also a promising treatment for high-risk STS [4]. Therefore, precise diagnosis and staging are critical for the selection of treatments and improvement of outcomes of patients with STS. The clinical staging system is based on the histological grade of STS. It has been shown that the histological grade correlates well with the prognosis of patients with STS [5]. Various histological grading systems of STS have been proposed. Among them, the French Federation of Cancer Center (FNCLCC) system is one of the most popular and widely used grading systems [6]. In the FNCLCC system, histological grading is rated by the total of the scores for three parameters: tumor differentiation, degree of necrosis, and mitotic count.
However, the mitotic count is affected by various factors, such as the interval between surgical resection and fixation of tumor tissue, cell size and tumor cellularity, and the experience of the pathologists [7][8][9][10], so the development of a more precise and objective grading system has been required. Recently, the usefulness of the Ki-67 (MIB-1) score has been reported in some retrospective studies. Using retrospective data from 95 patients with STS of the extremities, trunk, head, and neck treated in a single institution, we previously reported the usefulness of a novel histological grading system based on three parameters: tumor differentiation, degree of necrosis, and Ki-67 (MIB-1) score [7]. In this system, the mitotic count of the FNCLCC system was replaced by the cell count in Ki-67 immunohistochemical staining. We have shown that the Ki-67 grading system was the most significant independent prognostic factor for patients with STS in multivariate analysis [11]. We also retrospectively analyzed the validity (sensitivity and specificity) and reproducibility (agreement) of diagnosis of histological grade using the Ki-67 grading and FNCLCC systems for STS by comparing the independent diagnoses of four pathologists with a gold standard established by the two experts who developed the Ki-67 grading system [12]. The results indicated that the validity and reproducibility of the Ki-67 grading system in the diagnosis of histological grading of STS were higher than those of the FNCLCC system. We have conducted a phase II clinical trial for STS, the Japan Clinical Oncology Group Study JCOG0304. In JCOG0304, patients with operable, high-grade STS in the extremities were treated with perioperative DOX 60 mg/m2 plus IFO 10 g/m2 for three courses at 3-week intervals, followed by operation and two postoperative courses of the same regimen [13,14]. To further evaluate the validity and reproducibility of the Ki-67 grading system in a prospective study, we compared the Ki-67 grading system with the FNCLCC system using the clinical data of the patients in JCOG0304. We also analyzed the factors, including both Ki-67 and FNCLCC grades, that might influence survival of the patients treated in JCOG0304.

The patients were treated with preoperative chemotherapy consisting of DOX (30 mg/m2/day, days 1 and 2) and IFO (2 g/m2/day, days 1 to 5), repeated for three courses every 3 weeks. The tumor was resected within 5 weeks of the last course of preoperative chemotherapy. When tumor resection was completed, two courses of the same DOX and IFO regimen as in the preoperative chemotherapy were carried out every 3 weeks. No additional therapy was given until the patient exhibited treatment failure, including local recurrence and/or distant metastasis. The study protocol was approved by the Clinical Trial Review Committee of JCOG and by the Institutional Review Boards of each of the 27 participating institutes. All patients gave written informed consent before entry into the study.

Pathological central review

Histological diagnosis and grading of the biopsy samples from all patients in the present study were reviewed by the Central Pathological Committee of JCOG-BSTTSG. To obtain sufficient amounts of tumor tissue, all samples were collected by open biopsy; needle biopsy was not allowed in the present study. The committee consisted of three pathologists specialized in the diagnosis of STS from three institutions (TH, YO, and TN).
The review was independently performed by each pathologist; then, the consensus diagnosis of the tumor was determined at the committee meeting. Ki-67 immunostaining was carried out by the Central Pathological Committee for the grading of all tumor samples as previously described [7]. Briefly, 4-μm-thick sections were stained with antibodies for Ki-67 (clone: MIB-1, 1:100 dilution, DAKO, Tokyo, Japan). The sections were subjected to heat-induced epitope unmasking with Target Retrieval Solution (pH 9, DAKO) using a microwave for 20 min. The Ki-67 score was assessed by counting the percentage of Ki-67-positive nuclei per 1000 tumor cells in the region of the tumor with the greatest density of staining, which usually corresponded to the areas with the highest mitotic activity in the tumor. The histologic grade was calculated by adding the scores of three factors (tumor differentiation, tumor necrosis, and Ki-67 immunostaining), each of which was given a score of 0 to 3. Thus, in this system, the mitotic count of the FNCLCC system was simply replaced by the cell count in Ki-67 immunostaining. A Ki-67 score of 1 was assigned to lesions with 0-9 % of the tumor cells positive for Ki-67 immunostaining, a score of 2 to those with 10-29 %, and a score of 3 to those with ≥30 % of the tumor cells positive for Ki-67 immunostaining. The standard FNCLCC grading system was also used in this study.

Data management and treatment evaluation

The JCOG Data Center performed data management and statistical analysis. The center also performed central monitoring to ensure data submission, patient eligibility, protocol compliance, safety, and on-schedule study progress. None of the orthopedic surgeons who performed the protocol treatment were involved in the data analysis.

Statistical method

As a measure of reproducibility, weighted kappa statistics (κ) with Cicchetti-Allison weight type [15] were calculated for each pair of the three pathologists (i.e., three combinations) for both the FNCLCC and Ki-67 grading systems. The confidence interval of the weighted kappa statistics for overall agreement between pairs of pathologists was estimated by a bootstrap sampling method [16]. As a measure of agreement and reproducibility, the κ value is commonly interpreted as follows: 0.00-0.20, slight; 0.21-0.40, fair; 0.41-0.60, moderate; 0.61-0.80, substantial; and 0.81-1.00, almost perfect agreement [17]. Overall survival (OS) was defined as the time from enrollment to death and censored at the date of last contact for a surviving patient. Overall survival was estimated by the Kaplan-Meier method. Univariate and multivariate Cox regression analyses were performed to investigate the impact on overall survival. Hazard ratios and p values were derived from the Cox regression model. The following factors were investigated: age, sex, ECOG performance status, tumor location, tumor size, histological subtype, tumor differentiation score, tumor necrosis score, tumor mitosis score, histological grade assessed by the FNCLCC system, and histological grade assessed by the Ki-67 grading system. Of these, data retrieved by institutional decision were used only for univariate analysis, while the data reviewed by the Central Pathological Committee were used for both univariate and multivariate analyses. As a sensitivity analysis, multivariate analysis with a backward elimination method with an alpha of 0.2 was also performed. Statistical analysis was done with SAS version 9.1 or higher (SAS Institute, Cary, NC).
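As an illustration of the Ki-67 scoring rule and the weighted kappa reproducibility measure described above, a minimal sketch follows. The actual analysis was performed in SAS with bootstrap confidence intervals; the ratings below are invented, and scikit-learn's 'linear' weighting is used as the Cicchetti-Allison-type weight.

```python
from sklearn.metrics import cohen_kappa_score

# Ki-67 score thresholds as described above: 0-9% -> 1, 10-29% -> 2, >=30% -> 3.
def ki67_score(percent_positive: float) -> int:
    if percent_positive < 10:
        return 1
    if percent_positive < 30:
        return 2
    return 3

# Hypothetical grade assignments (2 or 3) by two pathologists for ten tumors;
# the real analysis compared three pathologists pairwise (three combinations).
pathologist_1 = [2, 3, 3, 2, 2, 3, 3, 2, 3, 2]
pathologist_2 = [2, 3, 2, 2, 3, 3, 3, 2, 3, 2]

# 'linear' weights decrease with the distance between categories,
# corresponding to Cicchetti-Allison-type weighting.
kappa = cohen_kappa_score(pathologist_1, pathologist_2, weights="linear")
print(ki67_score(25), f"weighted kappa = {kappa:.2f}")
```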
Patient characteristics

From March 2004 to September 2008, a total of 72 patients were enrolled in the JCOG0304 trial, and 70 eligible patients were included in the present analysis. Briefly, 36 patients were male and 34 were female, and the median age was 48.5 years (range 21-66 years). Tumor location included the thigh in 34 patients, the calf in 9 patients, other sites of the lower extremity in 14 patients, the shoulder in 6 patients, the upper arm in 2 patients, the forearm in 1 patient, and other sites of the upper extremity in 4 patients. The median tumor size was 7.45 cm. The histological diagnosis of tumors by institutional decision was as follows: synovial sarcoma in 20 patients, undifferentiated pleomorphic sarcoma in 17 patients, leiomyosarcoma in 11 patients, fibrosarcoma in 5 patients, liposarcoma in 4 patients, undifferentiated sarcoma in 4 patients, pleomorphic rhabdomyosarcoma in 2 patients, and other histological subtypes in 7 patients.

Histological grading using Ki-67 and mitosis

Among the 70 tumors, 36 and 34 were assessed as grade 2 and grade 3, respectively, by the Pathological Central Committee according to the grading system using mitosis. With the grading system using Ki-67 immunostaining, 32 and 38 tumors were assessed as grade 2 and grade 3, respectively (Table 1). Seven tumors assessed as grade 2 using mitosis were evaluated as grade 3 using Ki-67, whereas three tumors assessed as grade 3 using mitosis were evaluated as grade 2 using Ki-67. The distribution of Ki-67 immunostaining ranged from 1 to 90 % (median 25 %) (Fig. 1), and the distribution pattern was similar to that in the previous report [17]. Next, the reproducibility of each grading system was evaluated. Of the 70 tumors, 67 were evaluated in the analysis of reproducibility. The agreement between pathologists 1 and 2, 1 and 3, and 2 and 3 was calculated for both systems; the averaged weighted kappa was higher for the Ki-67 grading system (κ = 0.54, 95 % CI 0.39-0.71) than for the FNCLCC system (κ = 0.46, 95 % CI 0.32-0.62). Although the FNCLCC grade was not associated with OS in univariate analysis (HR 2.80, 95 % CI 0.74-10.55, p = 0.13), the Ki-67 grading system tended to be associated with OS in univariate analysis (HR 4.12, 95 % CI 0.89-19.09, p = 0.07) (Fig. 3, Table 4). While the FNCLCC grading system was not selected as a prognostic factor by multivariate analysis with backward elimination, the Ki-67 grading system also tended to be associated with OS on multivariate analysis with backward elimination (HR 3.51, 95 % CI 0.75-16.36, p = 0.11) (Table 5).

Discussion

In the present study, we investigated the differences in the diagnosis of the histological grade of STS assessed by Ki-67 expression levels and by mitosis, using the biopsy samples of STS in JCOG0304. We further analyzed the impact of prognostic factors, including the histological grade, on survival of the patients with STS in the clinical trial. The results demonstrated that there was substantial disagreement (14.3 %) between the Ki-67 grading system and the FNCLCC system, and that the Ki-67 grading system might exhibit better reproducibility in the assessment of histological grading of STS in the extremities than the mitotic score. A potential prognostic value of Ki-67 was also shown. We have previously demonstrated that the grading system using Ki-67 immunostaining was better in terms of validity and reproducibility than the FNCLCC system using mitotic score [12]. We have also shown that the Ki-67 grading system was significantly associated with the prognosis of patients with STS in multivariate analysis [11]. However, those reports were retrospective analyses from a single institution.
Therefore, we investigated the validity and reproducibility of the Ki-67 grading system in the grading of STS treated by the same regimen in the multi-institutional clinical trial, JCOG0304 [13,14]. In the present study, the histological grade of 10 of 70 tumors (14.3 %) showed discordance between the FNCLCC system and the Ki-67 grading system, indicating that the disagreement between the two systems was not negligible. Thus, reproducibility was calculated as kappa statistics to elucidate which system would be better. The averaged weighted kappa statistic between pairs of the three expert pathologists was κ = 0.54 (95 % CI 0.39-0.71) for Ki-67 and κ = 0.46 (95 % CI 0.32-0.62) for mitosis. In the retrospective study, the κ score was 0.61 for Ki-67 immunostaining and 0.54 for mitotic score, suggesting the superiority of the Ki-67 grading system over the FNCLCC system [9]. It was also reported that the κ statistics using mitosis of the FNCLCC system were 0.38 for FNCLCC grade 2 and 0.48 for grade 3 tumors, respectively [18]. A possible reason for the better reproducibility of the Ki-67 grading system over the FNCLCC system is that Ki-67 is expressed in all phases of the cell cycle except G0 and is a better measure of dividing cells than HE staining [19]. Our results were consistent with those of the previous reports, demonstrating that the Ki-67 grading system is also valid in a prospective study. However, there is a possibility that the results of the present study might be affected by variation in Ki-67 immunostaining among specimens. The results of the present study also exhibited a substantial disagreement of tumor grading using Ki-67 and mitosis. It has been reported that the overall percent agreement of Ki-67 and mitosis grading among four pathologists was 79 % (95 % CI, 76-83) and 69 % (95 % CI, 65-73), respectively [12]. These observations demonstrated an obvious limitation in the current method for assessment of tumor grading, even with the Ki-67 system, by manual counting of Ki-67 immunostaining. Further improvement, such as measurement by a digital image analysis system for Ki-67 assessment, is needed to overcome the difficulties in correct evaluation of the histological grade of STS. Regarding the prognostic relevance, the Ki-67 grading system tended to be a potential prognostic factor associated with OS of the patients in the present study. However, the difference in OS in the present study was smaller than that in the previous report [11]. One of the possible reasons for this discrepancy is that the outcome of JCOG0304 was far better than expected. An Italian randomized study of STS in the extremities demonstrated that 4-year OS of patients treated by adjuvant chemotherapy with epirubicin plus IFO was 69 % (95 % CI, 68.9-89.7 %) [20]. On the other hand, 5-year OS for eligible patients was 81.8 % in JCOG0304. Furthermore, our retrospective study reported that 5-year OS for grade 2 and grade 3 STS was 71.8 and 44.3 %, respectively [11]. In JCOG0304, 5-year OS for grade 2 and grade 3 STS was 91.8 and 73.5 %, respectively. These results suggest that the prognosis of the patients with STS in JCOG0304 was remarkably improved by the intensive pre- and postoperative chemotherapy with DOX and IFO. The survival of the patients with grade 3 tumors in the present study was comparable to that of patients with grade 2 tumors in the previous report.
The high survival rate of the patients with grade 3 tumors in this trial might be one of the reasons why the difference in survival between grade 2 and grade 3 tumors was not statistically significant. Since the trial was a phase II study, JCOG0304 had selection biases, including many patients with good-prognosis histologic subtypes and a majority of grade 2 tumors by FNCLCC grading (47 of 72 patients). Thus, there is a possibility that these imbalanced factors affected the results of the present study. In summary, the Ki-67 grading system might show better reproducibility and validity than the FNCLCC system in the assessment of histological grade for STS in the extremities. Furthermore, among the factors tested in the present study, the Ki-67 grading system tended to be associated with survival in univariate analysis, and Ki-67 grade was also suggested as a candidate prognostic factor in multivariate analysis with backward elimination.
2016-05-12T22:15:10.714Z
2016-04-18T00:00:00.000
{ "year": 2016, "sha1": "825ccdd560308747c86f94b90e6c82ab2f885e8a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12957-016-0869-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "825ccdd560308747c86f94b90e6c82ab2f885e8a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11360827
pes2o/s2orc
v3-fos-license
Combined Effects of Lanthanum (III) and Acid Rain on Antioxidant Enzyme System in Soybean Roots

Rare earth element (REE) pollution and acid rain (AR) pollution simultaneously occur in many regions, resulting in a new environmental issue: the combined pollution of REEs and AR. The effects of this combined pollution on the antioxidant enzyme system of plant roots have not been reported. Here, the combined effects of the lanthanum ion (La3+), one type of REE, and AR on the antioxidant enzyme system of soybean roots were investigated. In the combined treatment of La3+ (0.08 mM) and AR, the cell membrane permeability and the peroxidation of cell membrane lipids of soybean roots increased, and superoxide dismutase, catalase, peroxidase and reduced ascorbic acid served as scavengers of reactive oxygen species. In the other combined treatments of La3+ (0.40 mM, 1.20 mM) and AR, the membrane permeability, malonyldialdehyde content, superoxide dismutase activity, peroxidase activity and reduced ascorbic acid content increased, while the catalase activity decreased. The increased superoxide dismutase activity, peroxidase activity and reduced ascorbic acid content were inadequate to scavenge the excess hydrogen peroxide and superoxide, leading to damage to the cell membrane, which was aggravated with increasing La3+ concentration and AR level. The deleterious effects of the combined treatment of La3+ and AR were stronger than those of the single treatment of La3+ or AR. Moreover, the activity of the antioxidant enzyme system in the combined treatment group was affected directly and indirectly by the mineral element content in soybean plants.

Introduction

Rare earth elements (REEs) exhibit useful physical and chemical properties that enable their wide applications in petroleum, metallurgy, textiles, ceramics, glassmaking, new materials (catalyst, permanent magnet, optical and hydrogen-storage material) production, and medicines [1][2][3]. At suitable concentrations, REEs are also used in agriculture to improve the yield and quality of crops [4]. The uses of REEs have accelerated the accumulation of REEs in soils [5], which has become a global environmental issue [6][7]. For example, the average contents of REEs in soils in China, Australia, Japan and Germany are 197.67, 104.30, 97.57 and 15.48 mg/kg, respectively [7]. The accumulation of REEs in soils inevitably affects plant growth [8][9][10][11]. The antioxidant enzyme system in plants is an important protective mechanism in the response to stress [9]. It has been reported that REEs at suitable concentrations can help plants resist environmental stress (e.g. acid rain, heavy metals, ozone, low temperature, salinity, drought, and so on) by increasing the antioxidant capacity of plants [12][13][14]. However, little information has been presented on the potential risks of high-concentration REEs to the antioxidant enzyme system of plants [15][16]. Acid rain (AR) is a global environmental issue [17]. When its pH level reaches a certain damage threshold, AR inhibits the growth of plants through direct deposition onto leaves as well as indirect acidification of surface water and soil [17]. It subsequently changes the plant population structure and finally inhibits community functions [17][18]. It has been reported that AR exerts deleterious effects on both the physiological and biochemical characteristics of various plants [19][20][21]. The studies on cucumber (Cucumis sativus L.), muskmelon (Cucumis melo L.)
and birch (Betula pendula R.) indicated that the effects of AR on the activities of superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD) depend on the plant species, the pH level of AR and the duration of treatments [22][23][24]. The simultaneous pollution of REEs and AR occurs in many regions [25], and it produces a new environmental issue, i.e. the combined pollution of REEs and AR. Thus, it is very important to investigate the combined effects of REEs and AR on plants, including the combined effects of low-concentration REEs (or the improvement concentration) and AR on plants, which is commonly ignored; and the combined effects of REEs and AR at the current and the future levels. The action mechanism of the combined pollution of La 3+ and AR on plants remains largely unclear although we have tried to clarify the mechanism from the view of photosynthesis in leaves [26]. Plant roots are the important compartments of the combined pollution of REEs and AR, and they absorb nutrients and moisture from soil. Therefore, the investigations on the combined effects of REEs and acid rain on plant roots are of great significance. Our group has previously conducted a preliminary study on the effects of the combined pollution of La 3+ and AR on root phenotype of soybean [27]. In order to further understand the response mechanism, the studies on the combined effects of REEs and AR on antioxidant enzyme system in plant roots are important, and these effects have not been reported. Lanthanum (La), the first lanthanide element in periodic table, is ubiquitous in soils [5]. Soybean is an important economic crop and recommended for the study on phytotoxicity by the US EPA. Thus, in this study, the combined effects of La 3+ and AR on antioxidant enzyme system, cell membrane and metal element contents in soybean roots were investigated. The results could provide some references for the scientifically evaluating the potential ecological risk of REEs and AR on plants. Preparation of Solutions The control rain at pH 7.0 was prepared by adding Ca 2+ , Na + , K + , NH 4 [28][29]. The simulated AR at pH 3.0, 3.5 and 4.5 were prepared by adjusting the pH of control rain with the additions of the concentrated H 2 SO 4 and HNO 3 in a ratio of 1.10:1 (v/v, by chemical equivalents) [28][29]. The-P nutrient solution was prepared by replacing 1 mM KH 2 PO 4 in the Hoagland's solution (pH 7.0) with 1 mM KCl to avoid precipitation of lanthanum phosphate. The La 3+ solutions with different concentrations (0.08, 0.40 or 1.20 mM) were prepared by dissolving appropriate quantities of LaCl 3 (Sigma-Aldrich, USA) in-P nutrient solution. Plant Culture and Treatment Soybeans were cultured as described in our previous study [8,27]. The soybean seeds (Zhonghuang 25, Wuxi Seed Co., Ltd., China) were sterilized in a HgCl 2 (0.1%) solution for 5 min, rinsed several times with distilled water and germinated in an incubator at 25 ± 1°C. Three uniform seedlings with a radicle length of approximately 0.5 cm were transplanted into each pot (diameter = 15 cm, height = 30 cm) containing the 1.0 kg air-dried substrate (vermiculite and pearlite, 1:1, v/v). The-P nutrient solution was added to maintain substrate water content of 60%. Pots were placed in a greenhouse at 25 ± 3°C, a day/night cycle of 16/8 h, and a relative humidity of 70 − 80%. 
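As a side note, the amounts of LaCl3 behind the three treatment concentrations are easy to check. The sketch below is our own illustration, not part of the paper: it assumes anhydrous LaCl3 (molar mass ≈ 245.26 g/mol) and a 1 L batch of −P nutrient solution, neither of which is specified in the text.

```python
# Illustrative sketch (not from the paper): mass of LaCl3 needed for the three
# La(III) treatment solutions. Molar mass and batch volume are our assumptions.

LA_CL3_MOLAR_MASS = 245.26  # g/mol, anhydrous LaCl3 (assumed)

def lacl3_mass(concentration_mM, volume_L=1.0):
    """Mass of LaCl3 (g) to dissolve in `volume_L` of -P nutrient solution."""
    return concentration_mM / 1000.0 * volume_L * LA_CL3_MOLAR_MASS

for c in (0.08, 0.40, 1.20):  # mM, the three La3+ levels used in the study
    print(f"{c:.2f} mM -> {lacl3_mass(c):.4f} g per litre")
```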
Photosynthetic photon flux density provided by the incandescent lamps in the greenhouse was set to 300 μmol m−2 s−1, based on the fact that the light saturation point of soybean plants is usually 250–500 μmol m−2 s−1 [30][31] and that the light saturation point of soybean plants treated with La3+ and AR was approximately 310 μmol m−2 s−1, measured with a photometer (Fluke 941, US). The −P nutrient solution was used to irrigate the plants and to maintain a substrate water content of 60%, and 1 mM KH2PO4 was sprayed on the leaves every other day to supply the inorganic phosphate required by the plants. Filling-stage soybean plants were subjected to 16 treatments of La3+ and AR in four classes [27], i.e. the control treatment, the single treatments of La3+ or AR, and the combined treatments of La3+ and AR. First was the control treatment: soybean plants were irrigated with the −P nutrient solution (pH 7.0) and sprayed with the control rain; the substrate water content was 60%. Second was the La3+ treatment: soybean plants were irrigated with the La3+ solution (0.08, 0.40 or 1.20 mM, pH 7.0) and then sprayed with the control rain; the substrate water content was 60%. Third was the AR treatment: soybean plants were irrigated with the −P nutrient solution and sprayed with AR (pH 3.0, 3.5 or 4.5); the substrate water content was 60%. Fourth was the combined treatment of La3+ and AR: soybeans were irrigated with the La3+ solution (0.08, 0.40 or 1.20 mM, pH 7.0) and then sprayed with AR; the substrate water content was 60%. Soybeans were sprayed with AR using a sprayer. The droplet diameter was approximately 0.5 mm, which gave the optimal retention time and distribution area of AR on the leaf surface. For the 16 treatments of La3+ and AR, the amount of simulated AR or control rain was 300 mL per pot, calculated according to the precipitation and evaporation in the southeast of China. All treatments were replicated in five pots, and KH2PO4 solution was sprayed every other day to supply the inorganic phosphate required by the plants. After the La3+ and AR treatments had lasted 7 d, the roots were collected for determination of the test indices. Determination of Membrane Permeability Membrane permeability measurements were based on a previously described method [32]. Leaves were sliced to yield three 3 × 9 mm discs, representing the central pith, mid-parenchyma and cortex of the tuber tissue. The twenty-four discs were rinsed, placed into 40 mL of deionized water and gently tumbled at ambient room temperature. Conductance of the deionized water was measured after 15 min (C1), after 2 h (C2), and 2 h after a freeze-thaw treatment (Ctotal). The membrane permeability was expressed as %/h = 100 × (C2 − C1)/(1.75 × Ctotal). Determination of Malonyldialdehyde (MDA) The level of lipid peroxidation was expressed as the content of MDA [33]. Samples (0.5 g) were repeatedly extracted by sonication with a mixed solution of ethanol and water (4:1, v/v) containing 1 mg L−1 butylated hydroxytoluene (BHT). After centrifugation, the supernatants were pooled and an aliquot of appropriately diluted sample was added to a test tube with an equal volume of either (1) a solution without thiobarbituric acid (−TBA), containing 20% (w/v) trichloroacetic acid and 0.01% (w/v) BHT, or (2) a +TBA solution containing the above reagents plus 0.65% (w/v) TBA. Samples were heated at 95°C for 25 min and, after cooling, the absorbance was read at 440 nm, 532 nm and 600 nm.
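The permeability index defined above is a one-line calculation. The following minimal sketch is our own illustration, with hypothetical conductance readings, applying the formula %/h = 100 × (C2 − C1)/(1.75 × Ctotal) to a single sample.

```python
def membrane_permeability(c1, c2, c_total):
    """Permeability index (%/h) from conductance readings:
    c1      - conductance after 15 min
    c2      - conductance after 2 h
    c_total - conductance 2 h after the freeze-thaw treatment
    Formula from the text: 100 * (C2 - C1) / (1.75 * C_total).
    """
    return 100.0 * (c2 - c1) / (1.75 * c_total)

# Hypothetical readings (microSiemens/cm) for one sample
print(membrane_permeability(c1=12.0, c2=19.5, c_total=85.0))
```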
The content of MDA was calculated as MDA (nmol/mL) = 10^6 × (A − B)/157,000, where A = (Abs532,+TBA − Abs600,+TBA) − (Abs532,−TBA − Abs600,−TBA) and B = (Abs440,+TBA − Abs600,+TBA) × 0.0571. Here 157,000 is the molar extinction coefficient for MDA. The molar absorbance of 1–10 mM sucrose at 532 nm and 440 nm was 8.4 and 147, respectively, giving a ratio of 0.0571. Determination of Hydrogen Peroxide (H2O2) Content Hydrogen peroxide content was determined by a previously described method with some modifications [34]. Root tissues (0.5 g) were homogenized in an ice bath with 3% (w/v) trichloroacetic acid. The homogenate was centrifuged at 12,000 × g for 15 min, and 1 mL of supernatant was added to 1 mL of 100 mM potassium phosphate buffer (pH 7.0) and 2 mL of 1 M KI. The absorbance was measured at 390 nm. The content of H2O2 was calculated from a standard curve. Determination of Superoxide (O2−) Content Superoxide content was determined by a modified method according to Elstner and Heupel [35]. Two grams of root tissue were homogenized in 3 mL of 3% trichloroacetic acid. The homogenate was centrifuged at 12,000 × g for 15 min, and 1 mL of supernatant was added to 1 mL of 50 mM potassium phosphate buffer (pH 7.0) containing 1 mM hydroxylammonium chloride, and the mixture was incubated at 25°C for 20 min. The mixture was then incubated with 2 mL of 17 mM sulfanilic acid and 2 mL of 7 mM α-naphthylamine at 25°C for 20 min. The final solution was mixed with an equal volume of ether, and the absorbance of the pink phase was measured at 530 nm. The content of O2− was calculated from a standard curve. Determination of Superoxide Dismutase (SOD), Catalase (CAT) and Peroxidase (POD) Activities Roots (5 g) were homogenized in 50 mM potassium phosphate buffer (pH 7.8) containing 5 mM ascorbic acid, 5 mM dithiothreitol, 5 mM ethylenediaminetetraacetic acid (EDTA) and 2% (v/v) polyvinylpyrrolidone. The homogenates were centrifuged at 15,000 × g for 15 min and the supernatants were used for the enzyme activity assays. The SOD activity was determined essentially as described by Spychalla and Desborough [36]. Each 3 mL of reaction mixture contained 50 mM Na2CO3/NaHCO3 buffer (pH 10.2), 0.1 mM EDTA, 0.015 mM ferricytochrome c and 0.05 mM xanthine. One unit of SOD was defined as the amount of enzyme that caused 50% inhibition of the rate of ferricytochrome c reduction. The CAT activity was determined by following the consumption of H2O2 at 240 nm according to the literature [37]. Each 3 mL of reaction mixture contained 100 mM potassium phosphate buffer (pH 7.0) and 50 μL of the enzyme extract; the reaction was initiated by adding 15 mM H2O2. The POD activity was determined according to the literature [38]. The reaction mixture contained phosphate buffer (25 mM, pH 7.0), guaiacol (0.05%), H2O2 (10 mM) and crude peroxidase. Activity was determined from the increase per minute in the absorbance at 470 nm due to guaiacol oxidation (ε = 26.6 mM−1 cm−1). Determination of Reduced Ascorbic Acid (AsA) Content The reduced AsA content was determined by high-performance liquid chromatography (HPLC) (1100LC, Agilent, USA) at 254 nm according to previous research [39]. Plant tissues (1 g) were homogenized in liquid nitrogen and extracted in ice-cold 5% (w/v) metaphosphoric acid. The extract was centrifuged for 10 min at 6,000 × g. The column employed was an Alltima C18 column (4.6 × 250 mm, 5 μm; Alltech Italia srl). The mobile phase consisted of 0.05 M sodium acetate and acetonitrile (95:5, v:v, pH 2.8).
Isocratic elution was selected. The temperature of the column was adjusted at 26°C and the flow rate at 1 mL min -1 . The total run time was 15 min. Calibration was achieved using purified ascorbic acid as standard. Determination of Mineral Element and La Contents The mineral element and La contents in soybean roots and leaves were determined by inductively coupled plasma mass spectrometry (ICP-MS) (POEMS, Thermo Jarrel Ash, USA) [40][41]. The roots and leaves were collected, cleaned and washed three times with deionized water. These roots were dried in an oven and crushed into 1 mm segments. Then 0.5 g samples were digested with 8 mL oxidizing solution (15 M HNO 3 and 9 M H 2 O 2 , v/v) for 30 min at 2600 kPa (80 psi) in a MDS-2000 microwave oven (CEM Corp., Matthews, NC, USA). The samples were diluted with deionized water to a final volume of 25 mL for determination of La and mineral element contents. In addition, standard solutions were used for the calibration. Statistical Analysis Each treatment was replicated five times. All values were presented as the means ±SD. The significance of differences between different treatments was analyzed by one-way ANOVA using SPSS 17. The interaction between La 3+ and AR was analyzed by two-way ANOVA. Table 1 shows the effects of La 3+ and AR on membrane permeability and MDA content of soybean roots. When soybean roots were treated with 0.08 mM La 3+ , the membrane permeability and MDA content of soybean roots were unchanged compared with those of the control (Table 1). When the concentration of La 3+ increased to 0.40 (1.20) mM, the membrane permeability and MDA content of roots increased by 49.27% (58.19%) and 21.18% (40.51%), respectively, in comparison with the control ( Table 1). Combined Effects of La 3+ and AR on Membrane Permeability and MDA Content of Soybean Roots The treatment of AR at pH 4.5 did not change the membrane permeability and MDA content of soybean roots. When the pH value of AR decreased to 3.5, the membrane permeability and MDA content increased by 28.99% and 18.78%, respectively, compared with those of the control (Table 1), and the higher increase in the membrane permeability and MDA content in the treatment of AR at pH 3.0 (33.09% and 25.17%) were observed. When soybean roots were treated with 0.08 mM La 3+ and AR at pH 4.5, the membrane permeability was increased by 21.46%, 19.30% and 20.32%, respectively, compared with those of the control and the single treatment of 0.08 mM La 3+ or AR at pH 4.5 (Table 1). Similarly, the increase degrees in the MDA content were as follows: 12.79%, 10.04% and 16.50%. Relative to the control treatment and the single treatment of La 3+ or AR, the membrane permeability and MDA content in other combined treatments increased. The increase degrees rose with the increase in the La 3+ concentration and the decrease in the pH value of AR (Table 1). The results of two-way ANOVA indicated that there was an obvious interaction between La 3+ and AR that affect the membrane permeability and MDA content in soybean roots ( Table 1). The increase degrees of membrane permeability and MDA content caused by the combined treatment were less than the sum of those caused by the single treatment of La 3+ or AR, and that is namely synergistic effect. (Table 1). When soybean roots were treated with 0.08 mM La 3+ and AR at pH 4.5, the H 2 O 2 content of roots increased by 11.18%, 1.73% and 9.44%, respectively, compared with that of the control and the single treatment of 0.08 mM La 3+ or AR at pH 4.5 (Table 1). 
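The two-way ANOVA invoked throughout these results (see Statistical Analysis above) can be reproduced outside SPSS. The sketch below is our own illustration with invented data and five replicates per La3+ × AR combination, using statsmodels rather than SPSS 17; the response column is a placeholder for any measured index (e.g. MDA content).

```python
# Hedged sketch of the La3+ x AR two-way ANOVA (the paper used SPSS 17).
# The data are invented; five replicate pots per treatment as in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
la_levels = ["0", "0.08", "0.40", "1.20"]      # mM La3+
ar_levels = ["7.0", "4.5", "3.5", "3.0"]       # pH of (acid) rain

rows = []
for la in la_levels:
    for ar in ar_levels:
        for _ in range(5):                     # five replicate pots
            rows.append({"La": la, "AR": ar,
                         "MDA": rng.normal(10, 1)})   # placeholder response
df = pd.DataFrame(rows)

model = ols("MDA ~ C(La) * C(AR)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))         # C(La):C(AR) row = interaction
```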
Combined Effects of La 3+ and AR on the SOD, CAT and POD Activities as well as the Reduced AsA Content in Soybean Roots Table 3 shows the effects of La 3+ and AR on the SOD, CAT and POD activities as well as the reduced AsA content in soybean roots. When soybean roots were treated with 0.08 mM La 3+ , the SOD and CAT activities in soybean roots increased by 6.04% and 15.81%, respectively, while the POD activity and the reduced AsA content were unchanged, compared with the control. As the concentration of La 3+ increased to 0.40 and 1.20 mM, the CAT activity was lower, and the SOD activity, POD activity and reduced AsA content significantly were higher than those of the control (Table 3). In comparison of the control, the SOD, CAT and POD activity and reduced AsA content were unchanged in roots treated with AR at pH 4.5. When the pH value of AR decreased to 3.5 and 3.0, the SOD, CAT and POD activities as well as the reduced AsA content were significantly increased compared with those of the control (Table 3). When soybean roots were treated with 0.08 mM La 3+ and AR at pH 4.5, the SOD and CAT activities as well as the reduced AsA content increased by 16.81%, 20.92% and 20.19%, respectively, while the activity of POD was unchanged, compared with those of the control (Table 3). In other combined treatments of La 3+ and AR, the SOD and POD activities as well as the reduced AsA content were significantly increased, the activity of CAT was decreased compared with those of the control ( Table 3). The results of two-way ANOVA indicated that there was an obvious interaction between La 3+ and AR that affected the SOD, CAT and POD activities, as well as the reduced AsA content in soybean roots ( negatively correlated with the CAT activity, and positively correlated with the SOD activity, POD activity and reduced AsA content (p<0.05) ( Table 4). Table 5 showed the contents of macroelements (K, Ca and Mg) and microelements (Cu, Mn, Zn and Fe) in soybean roots and leaves. When soybean roots were treated with 0.08 mM La 3+ , the contents of K, Ca, Mg and Mn in the roots and leaves were decreased, compared with those of the control. This effect was more evident at higher concentration of La 3+ (0.40 and 1.20 mM), excepted for Mg in leaves. The Cu, Fe and Zn contents in the roots and leaves treated with La 3+ were increased compared with those of the control except that the Zn content in the leaves decreased at 1.20 mM of La 3+ . The increased effects rose as the level of La 3+ increased. When soybean roots were treated with AR, the K and Mg contents in the roots and leaves were increased, the Ca content in these organs was unchanged, the Cu, Mn, Fe and Zn contents in the roots and leaves (excepted for Zn content in the roots) were increased, compared with those of the control. When soybean roots were treated with both 0.08 mM La 3+ and acid rain at pH 4.5, the K, Ca, Mg and Mn contents were decreased, but the Cu, Zn and Fe contents were increased, compared with those of the control. These effects were more evident in other combined treatments of La 3+ and AR. The results of two-way ANOVA indicated that there was an obvious interaction between La 3+ and AR that affected the contents of mineral elements in the roots and leaves ( Table 5). The K, Ca content in the roots and leaves, and Mn content in roots positively correlated with the CAT activity (excepted for Ca in leaves), and negatively correlated with the SOD activity, POD activity, reduced AsA content and La content in the roots ( Table 6). 
The Fe content in the roots and leaves was negatively correlated with the CAT activity (except for Fe in leaves), and positively correlated with the SOD activity, POD activity, reduced AsA content and La content in the roots (Table 6). The Mg content in the roots was negatively correlated with the POD activity, reduced AsA content and La content (Table 6). The Cu content in the leaves was positively correlated with the CAT activity (Table 6). Combined Effects of La3+ and AR on La Content of Soybean When soybean roots were treated with La3+, the La contents in soybean increased compared with those of the control (Table 5). In addition, the increase was more evident as the La3+ concentration increased. The La content in soybean plants treated with AR was unchanged compared with that in the control (Table 5). In the combined treatments of La3+ (0.08, 0.40 or 1.20 mM) and acid rain at pH 4.5 (3.5 or 3.0), the La contents in the roots and leaves were increased compared with those in the control and in the single La3+ or AR treatments (except for La in leaves). The results of two-way ANOVA indicated that there was an obvious interaction between La3+ and AR affecting the La contents in soybean roots and leaves (Table 5). Moreover, the La content in the roots and leaves was positively correlated with the SOD activity, POD activity and reduced AsA content, but negatively correlated with the CAT activity (Table 6). Discussion Aerobic biological metabolism produces reactive oxygen species (ROS), including H2O2, O2− and ·OH [42]. A balance between the generation and scavenging of ROS exists in normal cells, which would not damage plants [10]. Abiotic stress can cause the excess accumulation of ROS in plants [10]. Excess ROS can rapidly attack all types of biomolecules such as nucleic acids, proteins, lipids and amino acids [14], trigger free-radical chain reactions, and then cause membrane lipid peroxidation, which is one of the primary consequences of oxidative damage [14]. The injury level can be reflected by the MDA content and membrane permeability [43][44]. In plants, SOD, CAT and POD are the major antioxidant enzymes, which can effectively remove excess ROS to protect plant cells from the damage of abiotic stress [22,45]. AsA is the most abundant and powerful antioxidant in plants, and it can also prevent or minimize the damage caused by ROS [46]. In the present work, the effects of La3+ and AR on the antioxidant enzyme system of plant roots were investigated. The treatment of 0.08 mM La3+ did not affect the membrane permeability or the MDA content in roots. Thus the generation and elimination of ROS in roots remained in dynamic equilibrium under this treatment (Table 1). The treatments of 0.40 and 1.20 mM La3+ inhibited the CAT activity, promoted the SOD and POD activities, and increased the content of reduced AsA. These effects on the antioxidant enzyme system led to the accumulation of ROS (H2O2 and O2−) in roots, which induced membrane lipid peroxidation and the increase in the membrane permeability (Tables 1 and 2). The same effects have been observed in rice (Oryza sativa) treated with Ce3+ and in wheat (Triticum aestivum) treated with La3+ [9,11,47]. The treatment of AR at pH 4.5 did not affect the test indices. As the pH value of AR decreased, the SOD, CAT and POD activities as well as the reduced AsA content increased [22,48].
In contrast with the single treatments of La3+ or AR, the combined treatments of La3+ and AR showed different effects on the antioxidant enzyme system, which depended on the concentration of La3+ and the pH of AR (Tables 1 and 3). In the combined treatment of 0.08 mM La3+ and AR, H2O2 and O2− served as signaling molecules to activate SOD, POD, CAT and reduced AsA in root cells [10]. However, the activated SOD, CAT, POD and reduced AsA could not efficiently eliminate the ROS, leading to their excess accumulation in roots (Table 1). The excess ROS rapidly attacked unsaturated fatty acids in the cell membrane, induced membrane lipid peroxidation and increased the membrane permeability (Table 1). In the combined treatments of La3+ (0.40, 1.20 mM) and AR, the CAT activity was inhibited, whereas the SOD activity, POD activity and the reduced AsA content were increased (Table 2). The activated SOD and POD and the increased reduced AsA could remove H2O2, but the ROS (·OH and O2−) still accumulated excessively in soybean roots (Table 1). The excess ROS oxidized the unsaturated fatty acids in the membrane lipids of root cells [49], leading to the peroxidation of cell membrane lipids (Table 1) [43][44], the damage of the cell membrane and the destruction of its selective permeability (Table 1). These deleterious effects on the cell membrane aggravated the electrolyte leakage from the cytoplasm (the membrane permeability). Meanwhile, we analyzed the interaction between La3+ and AR on the antioxidant enzyme system in soybean roots, and found that there was a synergistic effect of La3+ and AR on the antioxidant enzyme system, ROS accumulation and membrane lipid peroxidation in soybean roots. We speculate that the AR treatment made soybean roots absorb more La3+, leading to a higher accumulation of La3+ in the roots in comparison with the single treatment of La3+ [27]; meanwhile, the La3+ treatment promoted the uptake of H+ by roots compared with the single treatment of AR [50]. In any case, as a new type of combined pollution, its combined toxic effects deserve more attention. Our correlation analysis showed that the K, Ca, Mg, Fe and Mn contents in the roots (and the K, Ca and Fe contents in the leaves) were positively (or negatively) correlated with the activities of the antioxidant enzyme system in the roots (Table 6). These results are consistent with previous reports [51][52]. Moreover, the K, Ca, Mg, Fe and Mn contents in the roots (and the K and Ca contents in the leaves) were positively (or negatively) correlated with the La content in the roots (Table 6), which disturbed the effect of La on the activities of the antioxidant enzyme system in the roots [53][54]. Conclusion The combined treatments of La3+ and AR increased the membrane permeability and the peroxidation of membrane lipids. These increases resulted from the excess accumulation of H2O2 and O2− together with the changes in the activities of the antioxidant enzymes and in the antioxidant content. Moreover, the changes in the contents of mineral elements in soybean plants directly and indirectly affected the activities of the antioxidant enzyme system in roots. The effects mentioned above were stronger than those of the single treatment of La3+ or AR. Thus more attention should be paid to the potential threat of the combined pollution of La3+ and AR.
Furthermore, the experimental design of this study excluded soil and thus the processes that act on xenobiotics in soil (e.g., microbial metabolism and sorption). The results obtained here therefore need to be confirmed by extending the investigation to real, natural soil-plant systems.
2017-06-05T19:03:47.930Z
2015-07-31T00:00:00.000
{ "year": 2015, "sha1": "1071e3a14b58cf634efd40b40edd3f903aa2eb5a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0134546", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1071e3a14b58cf634efd40b40edd3f903aa2eb5a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
14846551
pes2o/s2orc
v3-fos-license
The computational power of normalizer circuits over black-box groups This work presents a precise connection between Clifford circuits, Shor's factoring algorithm and several other famous quantum algorithms with exponential quantum speed-ups for solving Abelian hidden subgroup problems. We show that all these different forms of quantum computation belong to a common new restricted model of quantum operations that we call \emph{black-box normalizer circuits}. To define these, we extend the previous model of normalizer circuits [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208], which are built of quantum Fourier transforms, group automorphism and quadratic phase gates associated to an Abelian group $G$. In previous works, the group $G$ is always given in an explicitly decomposed form. In our model, we remove this assumption and allow $G$ to be a black-box group. While standard normalizer circuits were shown to be efficiently classically simulable [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208], we find that normalizer circuits are powerful enough to factorize and solve classically-hard problems in the black-box setting. We further set upper limits to their computational power by showing that decomposing finite Abelian groups is complete for the associated complexity class. In particular, solving this problem renders black-box normalizer circuits efficiently classically simulable by exploiting the generalized stabilizer formalism in [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208]. Lastly, we employ our connection to draw a few practical implications for quantum algorithm design: namely, we give a no-go theorem for finding new quantum algorithms with black-box normalizer circuits, a universality result for low-depth normalizer circuits, and identify two other complete problems. Introduction What are the potential uses and limitations of quantum computers? Arguably, the most attractive feature of quantum computers is their ability to efficiently solve problems for which no efficient classical solution is known. To date, a successful number of quantum algorithms have been discovered [5][6][7][8][9][10]. Yet it remains one of the greatest challenges of the field of quantum computing to understand for which precise problems quantum algorithms can be exponentially (or super-polynomially) faster than their classical counterparts. Some reasons why this question seems to be hard to answer have been discussed by Shor [11]. A fruitful approach to understand the emergence and the structure of exponential quantum speed-ups is to study restricted models of quantum computation. Ideally, the latter should exhibit interesting quantum features and, at the same time, have less power than universal quantum computers (up to reasonable computational complexity assumptions). To date, several models studied in the literature seem to have these desirable properties, including Clifford circuits [12][13][14][15], nearest-neighbor matchgates [16][17][18][19], Gaussian operations [20][21][22], the one-clean qubit (DQC1) model [23], and commuting circuits [24][25][26][27] (a more complete list is given at the end of this section). In this work, we introduce black-box normalizer circuits, a new restricted family of quantum operations, and characterize their computational power. Our circuit model extends the so-called model of normalizer circuits over Abelian groups [1][2][3], which, in turn, generalizes the better-known model of Clifford circuits [12][13][14][15]. 
In [1][2][3], normalizer circuits act on high- and infinite-dimensional systems associated with an Abelian group G. The Hilbert space H has a standard basis {|g⟩ : g ∈ G} labeled by the elements of G. The allowed operations, called normalizer gates, can be of three types. 1. Quantum Fourier transforms over subgroups of G. 2. Group automorphism gates: permutation-like gates that implement automorphisms of the group G. 3. Quadratic phase gates: diagonal gates which multiply standard basis states by quadratic phases. In all previous work [1][2][3], the group G is assumed to be given in a factorized form (1), which endows the Hilbert space of the computation with a tensor-product structure: G = Z^a × T^b × Z_{N_1} × · · · × Z_{N_c}. (1) In (1), Z is the group of integers, Z_N the group of integers modulo N, and T is the circle group, consisting of angles from 0 to 1 (in units of 2π) with addition modulo 1. The Hilbert space H_Z has a standard basis labeled by integers (the Z basis) and a Fourier basis labeled by angles (the T basis). A normalizer circuit over G [1][2][3] is any quantum circuit composed of normalizer gates. In particular, n-qubit Clifford circuits are normalizer circuits over the group Z_2^n. Despite containing arbitrary numbers of quantum Fourier transforms (which play an important role in Shor's algorithms [28]) and entangling gates (automorphism and quadratic phase gates), it was shown in [1][2][3] that normalizer circuits can be efficiently simulated by classical computers. This result exploits an extended stabilizer formalism to track the evolution of normalizer circuits and generalizes the celebrated Gottesman-Knill theorem [12,13]. The key new element in our work is normalizer circuits that can be associated with Abelian black-box groups [4], which we may simply call "black-box normalizer circuits". A group B (always Abelian in this work) is a black-box group if it is finite, its elements are uniquely encoded by strings of some length n, and the group operation is performed by a black box (the group oracle) in one time-step. We define black-box normalizer circuits to be normalizer circuits associated with groups of the form G = G_prev × B, where G_prev is of the form (1). The key new feature in our work is that the black-box group B is not given to us in a factorized form. This is a subtle yet tremendously important difference: although such a decomposition always exists for any finite Abelian group [29], finding just one is regarded as a hard computational problem; indeed, it is provably at least as hard as factoring¹. Our motivation to adopt the notion of black-box group is to study Abelian groups for which the group multiplication can be performed in classical polynomial time while no efficient classical algorithm to decompose them is known. A key example is Z_N^×, the multiplicative group of integers modulo N, which plays an important role in Shor's factoring algorithm [28]. With some abuse of notation, we call any such group also a "black-box group"². ¹ Knowing B ≅ Z_{d_1} × · · · × Z_{d_m} implies that the order of the group, |B| = d_1 d_2 · · · d_m, is known. Hardness results for computing orders [4,30] imply that the problem is provably hard for classical computers in the black-box setting. For groups Z_N^×, computing φ(N) := |Z_N^×| (the Euler totient function) is equivalent to factoring [31]. Statement of results This work focuses on understanding the potential uses and limitations of black-box normalizer circuits. Our results (listed below) give a precise characterization of their computational power.
On one hand, we show that several famous quantum algorithms, including shor's celebrated factoring algorithm, can be implemented with black-box normalizer circuits. On the other hand, we apply our former simulation results [1][2][3] to set upper limits to the class of problems that these circuits can solve, as well as to draw practical implications for quantum algorithm design. Our main results are now summarized: 1. Quantum algorithms. We show that many of the best known quantum algorithms are particular instances of normalizer circuits over black-box groups, including Shor's celebrated factoring and discrete-log algorithms; it follows that black-box normalizer circuits can achieve exponential quantum speed-ups. Namely, the following algorithms are examples of black-box normalizer circuits. • Discrete logarithm. Shor's discrete-log quantum algorithm [28] is a normalizer circuit over Z 2 p−1 × Z × p (theorem 1 5.1). • Factoring. We show that a hybrid infinite-finite dimensional version of Shor's factoring algorithm [28] can be implemented with normalizer circuit over Z × Z × N . We prove that there is a close relationship between Shor's original algorithm and our version: Shor's can be understood as a discretized qubit implementation of ours (theorems 2, 3). We also discuss that the infinite group Z plays a key role in our "infinite Shor's algorithm", by showing that it is impossible to implement Shor's modular-exponentiation gate efficiently, even approximately, with finitedimensional normalizer circuits (theorem 4). Last, we further conjecture that only normalizer circuits over infinite groups can factorize (conjecture 1). • Elliptic curves. The generalized Shor's algorithm for computing discrete logarithms over an elliptic curve [32][33][34] can be implemented with black-box normalizer circuits (section 5.3); in this case, the black-box group is the group of integral points E of the elliptic curve instead of Z × p . • Group decomposition. Cheung-Mosca's algorithm for decomposing black-box finite Abelian groups [35,36] is a combination of several types of black-box normalizer circuits. In fact, we discuss a new extended Cheung-Mosca's algorithm that finds even more information about the structure of the group and it is also based on normalizer circuits (section 5.5). • Hidden subgroup problem. Deutsch's [37], Simon's [38] and, in fact, all quantum algorithms that solve Abelian hidden subgroup problems [39][40][41][42][43][44][45][46], are normalizer circuits over groups of the form G× O, where G is the group that contains the hidden subgroup H and O is a group isomorphic to G/H (section 5.4). The group O, however, is not a black-box group due to a small technical difference between our oracle model we use and the oracle setting in the HSP. • Hidden kernel problem. The group O ∼ = G/H in the previous section becomes a black-box group if the oracle function in the HSP is a homomorphism between black-box groups: we call this subcase the hidden kernel problem (HKP). The difference does not seem to be very significant, and can be eliminated by choosing different oracle models (section 5.4). However, we will never refer to Simon's or to general Abelian HSP algorithms as "black-box normalizer circuits", in order to be consistent with our and pre-existing terminology. 2. Group decomposition is as hard as simulating normalizer circuits. 
Another main contribution of this work is to show that the group decomposition problem (suitably formalized) is, in fact, complete for the complexity class Black-Box Normalizer, of problems efficiently solvable by probabilistic classical computers with oracular access to black-box normalizer circuits. Since normalizer circuits over decomposed groups are efficiently classically simulable [1][2][3], this result suggests that the computational power of normalizer circuits originate precisely in the classical hardness of learning the structure of a black-box group. We obtain this last result by proving a significantly stronger theorem (theorem 6), which states that any black-box normalizer circuit can be efficiently simulated step by step by a classical computer if an efficient subroutine for decomposing finite Abelian groups is provided. 3. A no-go theorem for new quantum algorithms. In this work, we provide an negative answer to the question "can new quantum algorithms based on normalizer circuits be found? ": by applying the latter simulation result, we conclude that any new algorithm not in our list can be efficiently simulated step-by-step using the extended Cheung-Mosca algorithm and classical post-processing. This implies (theorem 7) that new exponential speed-ups cannot be found without changing our setting (we discuss how the setting might be changed in the discussion 1). This result says nothing about polynomial speed-ups. 4. Universality of short normalizer circuits. A practical consequence of our no-go theorem is that all problems in the class Black Box Normalizer can be solved using short normalizer circuits with a constant number of normalizer gates. (We may still need polynomially many runs of such circuits, along with classical processing in between, but each individual normalizer circuit is short.) We find this observation interesting, in that it explains a very curious feature present in all the quantum algorithms that we study [28,[32][33][34][35][36][37][38][39][40][41][42][43][44][45][46] (section 5): they all contain at most a constant number of quantum Fourier transforms (actually at most two). 5. Other complete problems. As our last contribution in this series, we identify another two complete problems for the class Black Box Normalizer (section 8): these are the (afore-mentioned) Abelian hidden kernel problem, and the problem of finding a general-solution to a system of linear equations over black-box groups (the latter are related to the systems of linear equations over groups studied in [2,3]. A link between Clifford circuits and Shor's algorithm The results in this work together with those previously obtained in [1][2][3] demonstrate the existence of a precise connection between Clifford circuits and Shor's factoring algorithm. At first glance, it might be hard to digest that two types of quantum circuits that seem to be so far away from each other might related at all. Indeed, classically simulating Shor's algorithm is widely believed to be an intractable problem (at least as hard as factoring), while a zoo of classical techniques and efficient classical algorithms exist for simulating and computing properties of Clifford circuits [12][13][14][15][51][52][53][54][55][56][57]. However, from the point of view of this paper, both turn out to be intimately related in that they both are just different types of normalizer circuits. In other words, they are both members of a common family of quantum operations. 
Remarkably, this correspondence between Clifford and Shor, rather than being just a mere mathematical curiosity, has also some sensible consequences for the theory of quantum computing. One that follows from theorem 6, our simulation result, is that all algorithms studied in this work (Shor's factoring and discrete-log algorithms, Cheung-Mosca's, etc.) have a rich hidden structure which enables simulating them classically with a stabilizer picture approach "à la Gottesman-Knill" [12,13]. This structure let us track the evolution of the quantum state of the computation step by step with a very special algorithm, which, despite being inefficient, exploits completely different algorithmic principles than the naive brute-force approach: i.e., writing down the coefficients of the initial quantum state and tracking their quantum mechanical evolution through the gates of the circuit 3 . Although the stabilizer-picture simulation is inefficient when black-box groups are present (i.e., it does not yield an efficient classical algorithm for simulating Shor's algorithm), the mere existence of such an algorithm reveals how much mathematical structure these quantum algorithms have in common with Clifford and normalizer circuits. In retrospect, and from an applied point of view, it is also rather satisfactory that one can gracefully exploit the above connection to draw practical implications for quantum algorithm design: in our work, we have actively used our knowledge of the hidden "Clifford-ish" mathematical features of the Abelian hidden subgroup problem algorithms in deriving results 2, 3, 4 and 5 (in the list given in the previous section). 3 Note that throughout this manuscript we always work at a high-level of abstraction (algorithmically speaking), and that the "steps" in a normalizer-based quantum algorithm are always counted at the logic level of normalizer gates, disregarding smaller gates needed to implement them. In spite of this, we find the above simulability property of black-box normalizer circuits to be truly fascinating. To get a better grasp of its significance, we may perform the following thought experiment. Imagine, we would repeatedly concatenate black-box normalizer circuits in some intentionally complex geometric arrangement, in order to form a gargantuan, intricate "Shor's algorithm" of monstrous size. Even in this case, our simulation result states that if we can decompose Abelian groups (say, with an oracle), then we can efficiently simulate the evolution of the circuit, normalizer-gate after normalizer-gate, independently of the number of Fourier transforms, automorphism and quadratic-phase gates involved in the computation (the overhead of the classical simulation is always at most polynomial in the input-size). As a side remark, we regard it a memorable curiosity that replacing decomposed groups with black-box groups not only renders the simulation methods in [1][2][3] inefficient (this is, in fact, something to be expected, due to the existence of hard computational problems related to black-box groups), but it is also precisely this modification that suddenly bridges the gap between Clifford/normalizer circuits, Shor's algorithms, Simon's and so on. Finally, it is mathematically elegant to note that all normalizer circuits we have studied are related through the so-called Pontryagin-Van Kampen duality [58][59][60][61][62][63][64], which states that all locally-compact Abelian (LCA) groups are dual to their characters groups. 
The role of this duality in the normalizer circuit model was discussed in our previous work [3]. Relationship to previous work Up to our best knowledge, neither normalizer circuits over black-box groups, nor their relationship with Shor's algorithm or the Abelian hidden subgroup problem, have been previously investigated. Normalizer circuits over explicitly-decomposed finite groups Z N 1 × · · · × Z Na were studied in [1,2], by two of us. We recently extended the formalism in [3] to infinite groups of the form Z a × T b × Z N 1 × · · · × Z Na . Clifford circuits over qubits and qudits (which can be understood as normalizer circuits over groups of the form Z m 2 and Z m d ) have been extensively investigated in the literature [12-15, 51-53, 56, 55, 57]. Certain generalizations of Clifford circuits that are not normalizer circuits have also been studied: [52,65,19,55,57] consider Clifford circuits supplemented with some non-Clifford ingredients; a different form of Clifford circuits based on projective normalizers of unitary groups were investigated in [66]. The notion of black-box group, which is a key concept in our setting, was first considered by Babai and Szméredi in [4] and have since been extensively studied in classical complexity theory [88][89][90][91]30]. In general, black-box groups may not be Abelian and do not need to have uniquely represented elements [4]; in the present work, we only consider Abelian uniquelyencoded black-box groups. Discussion and outlook We finish our introduction by discussing a few potential avenues for finding new quantum algorithms as well as some open questions suggested by our work. In this work, we provide a strict no-go theorem for finding new quantum algorithms with black-box normalizer circuits, as we define them. There are, however, a few possible ways to modify our setting leading to scenarios where one could bypass these results and, indeed, find new interesting quantum algorithms. We now discuss some. A second possibility would be to consider more general types of normalizer circuits than ours, by extending the class of Abelian groups they can be associated with. However, looking at more general decomposed groups does not look particularly promising: we believe that the methods here and in [3] can be extended, e.g., to simulate normalizer circuits over groups of the form [3]). On the other hand, allowing more general types of groups to act as blackboxes looks rather promising to us: one may, for instance, attempt to extend the notion of normalizer circuits to act on Hilbert spaces associated with multi-dimensional infrastructures [129,130], which may, informally, be understood as "infinite black-box groups" 5 We expect, in fact, that known quantum algorithms for finding hidden periods and hidden lattices within real vector spaces [131][132][133][134] and/or or infrastructures [129,130] (e.g., Hallgren's algorithm for solving Pell's equation [131,132]) could be at least partially interpreted as generalized normalizer circuits in this sense . Addressing this question would require a careful treatment of precision errors that appear in such algorithms due to the presence of transcendental numbers, which play no role in the present paper 6 . Some open questions in this quantum algorithm subfield have been discussed in [130]. A third possible direction to investigate would be whether different models of normalizer circuits could be constructed over algebraic structures that are not groups. 
One could, for instance, consider sets with less algebraic structure, like semigroups. In this regard, we highlight that a quantum algorithm for finding discrete logarithms over finite semigroups was recently given in [135]. Alternatively, one could also study sets with more structure than groups, such as fields, whose study is relevant to Van Dam-Seroussi's quantum algorithm for estimating Gauss sums [136]. Lastly, we mention some open questions suggested by our work. 5 An n-dimensional infrastructure I provides a classical presentation of an n-dimensional hypertorus group R^n/Λ ≅ T^n, where Λ is an (unknown) period lattice. The elements of this continuous group are represented with classical structures known as f-representations, which are endowed with an operation that allows us to compute within the torus. Although one must deal carefully with non-trivial technical aspects of infinite groups in order to properly define and compute with f-representations (cf. [129,130] and references therein), one may intuitively understand infrastructures as "generalized black-box hypertoruses". We stress, though, that it is not standard terminology to call an infinite group a "black-box group". 6 No such treatment is needed in this work, since we study quantum algorithms for finding hidden structures in discrete groups. In this work, we have not investigated the computational complexity of black-box normalizer circuits without classical post-processing. There are two facts which suggest that the power of black-box normalizer circuits alone might, in fact, be significantly smaller. The first is the fact that the complexity class of problems solvable by Clifford circuits alone is ⊕L [52], believed to be a strict subclass of P. The second is that normalizer circuits seem to be incapable of implementing most classical functions coherently, even with constant accuracy (this has been rigorously shown in finite dimensions [1,2]). Finally, one may study whether considering more general types of inputs, measurements or adaptive operations might change the power of black-box normalizer circuits. Allowing, for instance, input product states has the potential to increase the power of these circuits, since this already occurs for standard Clifford circuits [65,57]. Concerning measurements, the authors believe that allowing, e.g., adaptive Pauli operator measurements (in the sense of [2]) is unlikely to give any additional computational power to black-box normalizer circuits: in the best scenario, this could only happen in infinite dimensions, since adaptive normalizer circuits over finite Abelian groups are also efficiently classically simulable with stabilizer techniques [2]. With more general types of measurements, it should be possible to recover full quantum universality, given that qubit cluster states (which can be generated by Clifford circuits) are a universal resource for measurement-based quantum computation [137,138]. The possibility of obtaining intermediate hardness results if non-adaptive yet also non-Pauli measurements are allowed (along the lines of [111] or [57, theorem 7]) also remains open. Abelian groups The most general groups we will consider in this work are Abelian groups of the form G = Z^a × T^b × Z_{N_1} × · · · × Z_{N_c} × B, (2) where a, b, N_1, · · · , N_c are arbitrary integers and B is a finite Abelian black-box group, to be defined more precisely later. We will discuss each of the constituent groups in turn.
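Before discussing the constituent groups one by one, the following sketch (our own illustration, not part of the paper) shows one way to represent an element of a group of the form (2) as a tuple and to implement the group operation componentwise; the black-box factor B is omitted, since its operation would be delegated to the group oracle.

```python
# Rough sketch (ours): an element of Z^a x T^b x Z_{N_1} x ... x Z_{N_c}
# as a tuple, with the group operation applied componentwise
# (plain addition over Z, modulo 1 over T, modulo N_i over Z_{N_i}).
from dataclasses import dataclass

@dataclass
class GroupElement:
    z: tuple    # entries in Z^a (integers)
    t: tuple    # entries in T^b (floats in [0, 1))
    zn: tuple   # entries in Z_{N_1} x ... x Z_{N_c}

def add(g, h, moduli):
    return GroupElement(
        tuple(x + y for x, y in zip(g.z, h.z)),
        tuple((x + y) % 1.0 for x, y in zip(g.t, h.t)),
        tuple((x + y) % n for x, y, n in zip(g.zn, h.zn, moduli)),
    )

g = GroupElement((3,), (0.75,), (5, 2))
h = GroupElement((-1,), (0.5,), (4, 1))
print(add(g, h, moduli=(6, 3)))   # -> z=(2,), t=(0.25,), zn=(3, 0)
```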
Z: the group of integers Z simply refers to the group of integers under addition; it is infinite, but finitely generated (by the element 1). T: the torus group T refers to the group of real numbers in the interval [0, 1) under addition modulo 1. Unlike all the other components we will consider, it is both infinite and not finitely generated. The introduction of T is necessary to allow the use of quantum Fourier transforms over Z, as we'll see in the next section. Finite Abelian groups Let us start by stating a very important theorem we will use for our results: Theorem 1 (Fundamental Theorem of Finite Abelian Groups [29]). Any finite Abelian group G has a decomposition into a direct product of cyclic groups, i.e. G ≅ Z_{d_1} × Z_{d_2} × · · · × Z_{d_m}. (3) Actually finding such a decomposition for a group G may be difficult in practice. For example, consider the set of integers modulo N that are also relatively prime to N; this set forms a group under multiplication. (This group is known as the multiplicative group of integers modulo N, or Z_N^×.) It is not known how to decompose Z_N^× into its cyclic subgroups efficiently with classical computers. For example, if N = pq for p, q prime, then Z_pq^× ≅ Z_{p−1} × Z_{q−1}, and hence decomposing Z_pq^× is at least as hard as factoring pq or, equivalently, breaking RSA [48]. More generally, decomposing Z_N^× is known to be polynomial-time equivalent to factoring [31]. In the quantum case, however, Cheung and Mosca gave an algorithm [35,36] to decompose any finite Abelian group. In equation (2), the factors Z_{N_1} × · · · × Z_{N_c} represent an arbitrary finite Abelian group for which the group decomposition is known. The case where the decomposition is unknown will be covered by the black-box group B. Black box groups In this work, we define a black-box group B [4] to be a finite group whose elements are uniquely encoded by binary strings of a certain size n, which is the length of the encoding. The elements of the black-box group can be multiplied and inverted at unit cost by querying a black box, or group oracle, which computes these operations for us. The order of a black-box group with encoding length n is bounded above by 2^n: the precise order |B| may not be given to us, but it is assumed that the group oracle can identify which strings in the group encoding correspond to elements of the group. When we say that a particular black-box group (or subgroup) is given (as the input to some algorithm), it is meant that a list of generators of the group or subgroup is explicitly provided. From now on, all black-box groups in this work will be assumed to be Abelian. Although we only consider finite Abelian black-box groups, we stress now that there is a subtle but crucial difference between these groups and the explicitly decomposed finite Abelian groups in [1,2]: although, mathematically, all Abelian black-box groups have a decomposition (3), it is computationally hard to find one and we assume no knowledge of it. In fact, our motivation to introduce black-box groups in our setting is precisely to model those Abelian groups that have efficiently classically computable group operations but that cannot be efficiently decomposed with known classical algorithms. With some abuse of notation, we shall call all such groups also "black-box groups", even if no oracle is needed to define them; in such cases, oracle calls will be replaced by poly(n)-size classical circuits for computing group multiplications and inversions. As an example, let us consider again the group Z_N^×.
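A minimal sketch (ours, not from the paper) of this example: the "group oracle" for Z_N^× is ordinary modular arithmetic, while recovering the cyclic decomposition is another matter entirely; for an RSA-type modulus N = pq, knowing the group order φ(N) already reveals the factors of N. The next paragraph spells out why the group operations are efficient.

```python
# Sketch (ours): Z_N^x presented as a black-box group. Multiplication,
# inversion and membership tests are cheap modular arithmetic, but learning
# the cyclic decomposition is as hard as factoring N: for N = p*q, the order
# phi(N) = (p-1)(q-1) gives p and q as roots of x^2 - (N - phi + 1)x + N = 0.
from math import gcd, isqrt

class MultiplicativeGroupModN:
    def __init__(self, N):
        self.N = N
    def contains(self, x):              # membership test: gcd(x, N) == 1
        return 0 < x < self.N and gcd(x, self.N) == 1
    def mul(self, x, y):                # group operation
        return (x * y) % self.N
    def inv(self, x):                   # modular inverse (Python 3.8+)
        return pow(x, -1, self.N)

def factor_from_order(N, phi):
    """Recover p, q from N = p*q and phi = |Z_N^x| (illustration only)."""
    s = N - phi + 1                     # s = p + q
    d = isqrt(s * s - 4 * N)            # discriminant, a perfect square here
    return (s - d) // 2, (s + d) // 2

G = MultiplicativeGroupModN(3233)       # 3233 = 53 * 61
print(G.mul(2, 1617), G.inv(7), G.contains(106))
print(factor_from_order(3233, 3120))    # phi = 52 * 60 -> (53, 61)
```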
This group can be naturally modeled as a black-box group in the above sense: on one hand, for any x, y ∈ Z × N , xy and x −1 can be efficiently computed using Euclid's algorithm [139]; on the other hand, decomposing Z × N is as hard as factoring [31]. Note, in addition, that a generating set of Z × N can found by taking a linear number of samples 7 of Z × N . 7 Sampling Z × N can be done by sampling {0, · · · , N − 1} uniformly and then rejecting samples that are The Hilbert space of an Abelian group In this section we introduce Hilbert spaces associated with Abelian groups of the form where Z N is the additive group of integers modulo N , Z is the additive group of integers and B is a black-box group. Apart from the short discussion on Hilbert space associated with black-box groups, we follow the definitions given in our previous works [1][2][3]. Finite Abelian groups First we consider Z N . With this group, we associate an N -dimensional Hilbert space H N having a basis {|x : x ∈ Z N }, henceforth called the standard basis of H N . A state in H N is (as usual) a unit vector |ψ = ψ x |x with |ψ x | 2 = 1, where ψ x ∈ C and where the sums are over all x ∈ Z N . Second, we consider a black box group B. With such a group we associate a |B|dimensional Hilbert space H B with standard basis states |b where b ranges over all elements of B. The integers Z An analogous construction is considered for the group Z, although an important distinction with the former cases is that Z is infinite. We consider the infinite-dimensional Hilbert space H Z = ℓ 2 (Z) with standard basis states |z where |z ∈ Z. A state in H has the form with |ψ x | 2 = 1, where the infinite sum is over all x ∈ Z. More generally, any sequence (ψ x : x ∈ Z) with |ψ x | 2 < ∞ can be associated with a quantum state in H Z via normalization. Sequences whose sums are not finite give rise to "unnormalizable" states. These states do not belong to H Z and are hence unphysical; however it is often convenient to consider such states nonetheless. Examples are the "plane wave states" |p := z∈Z e 2πizp |z p ∈ [0, 1). We denote T := [0, 1) as a group with addition modulo 1, called the (one-dimentional) torus group. Even though the |p themselves do not belong to H, every state in the Hilbert space H Z can be written as a linear combination of them: for some complex function φ : T → C, where dp denotes the Haar measure on T. Thus the states |p form an alternate basis 8 of H which is infinite, parametrized by a continuous set. not relatively prime to N ; this takes O(log log N ) trials to succeed with high probability. A similar approach works, in general, for sampling generating-sets of uniquely-encoded finite Abelian groups [140]. 8 As discussed in [3], the states |p are unnormalizable and, strictly speaking, do not form a "basis" in the usual sense (they are not elements of H Z ) but they can nevertheless used as a basis for our practical purposes. Rigorously speaking, the |p "states" can be understood as Dirac-delta measures or Schwartz-Bruhat tempered distributions [141,142]. Although we will not do it here, our results can be stated more formally through the theory of rigged Hilbert spaces [143][144][145][146], which is often applied to study observables with continuous spectra. We call this basis the Fourier basis. The Fourier basis is orthonormal in the sense that where δ(·) is the Dirac delta. In the following, when dealing with the Hilbert space H Z , we will use both the standard and Fourier basis. 
More precisely, in our computational model (cf. section 4), there are two "standard" bases of the Hilbert space H Z , parametrized by the groups Z and T (we reserve the term standard basis for the first, the Z basis). This is different from the finite space H N where we only use the standard basis labeled by Z N . Total Hilbert space In general we will consider consider groups of the form (4). The associated Hilbert space has the tensor product form In this work, we treat H as the underlying Hilbert space of a quantum computation with m := a + c + 1 computational registers. The first a registers H Z are infinite dimensional. The latter c + 1 registers are finite dimensional, and the last one, H B , is associated with some Abelian black-box group B. The group-element bases of H As afore-mentioned, a normalizer computation works within multiple "standard bases" of H, which can change during the computation. Input states and final measurements in any of these bases are allowed. This is a central feature of the computational model we describe in the next section. The allowed bases of a normalizer computation are what-we-call the group-element bases of H, which we now describe. A group element basis B G of H is a basis of group element states {|g , g ∈ G} parametrized by an Abelian group G of the form The notation G i = T indicated that the state |g(i) (locally) is a Fourier state of Z (6). Note that the states |g are product-states with respect to the tensor-product structure (9) of H. By construction, there are 2 a possible choices of groups for the same Hilbert space, so that the number of group-element bases is 2 a . As discussed in [3] all groups (10) are related by the Pontryagin duality 9 . We now consider some examples. First, note that, by construction, the standard basis of H is the group-element basis B G with G of the form Z a × Z N 1 × · · · × Z Nc × B: 9 Groups (10) form a family (and a category) of groups generated by replacing the factors Gi of the original group Z a × ZN 1 × · · · × ZN c × B with their character groups, and then identifying isomorphic groups [3]. The so-called Pontryagin duality [58][59][60][61][62] determines the number 2 a of groups in the family. The number of group-element bases 2 a is larger than 1 iff infinite groups are involved. Due to, there are not multiple "standard-bases" in finite-dimensional normalizer circuits (nor in Clifford circuits) [1,2]. and, clearly, Alternatively, by choosing the basis of some (say the ath) H Z register to be the Fourier basis, we get where |p is now a Fourier basis state (6). Now p ∈ T and the basis is parametrized by the elements of (Z a−1 × T) × Z N 1 × · · · × Z Nc × B. Black box normalizer circuits In this section we define black-box normalizer circuits acting on Hilbert spaces of the form (9). We will split the discussion in two parts: in section 4.1 we discuss black box normalizer circuits for finite-dimensional spaces, i.e. spaces where H Z does not occur in the decomposition. Restricting to the finite-dimensional case will allow us to introduce black box normalizer circuits without many technical complications. In a second step, in section 4.2 we allow for general spaces of the form (9). 
The definition of black box normalizer circuits will be technically more involved owing to the fact that in H_Z both the standard basis and the Fourier basis need to be considered; this technical element is however not essential to understand the basic idea behind black box normalizer circuits, and the reader may skip section 4.2 in a first reading. However, the general definition of black box normalizer circuits is necessary to make a rigorous connection with e.g. Shor's factoring algorithm.

Finite groups

Let G be a finite Abelian group of the form G = Z_{N_1} × · · · × Z_{N_c} × B. The associated Hilbert space has standard basis vectors |g⟩ where g ranges over all elements of G (cf. section 3). A normalizer gate over G is either an automorphism gate, a quadratic phase gate or a quantum Fourier transform, as defined next:

Automorphism gates. Recall that a group automorphism is an invertible map α : G → G satisfying α(g + h) = α(g) + α(h) for every g, h ∈ G. An automorphism gate over G is an operation U_α : |h⟩ → |α(h)⟩ where α is an automorphism; we consider automorphism gates to be available as black-box quantum gates (oracles). Note that each U_α acts as a permutation on the standard basis and is hence a unitary operation.

Quadratic phase gates. A function χ : G → U(1) (from the group G into the complex numbers of unit modulus) is called a character if χ(g + h) = χ(g)χ(h) for all g, h ∈ G; a function ξ : G → U(1) is called quadratic if ξ(g + h) = ξ(g)ξ(h)B(g, h) for some bicharacter B(g, h), i.e., a function B : G × G → U(1) that is a character in each of its arguments. A quadratic phase gate is any diagonal unitary operation acting on the standard basis as D_ξ : |h⟩ → ξ(h)|h⟩, where ξ is a quadratic function of G. Similar to automorphism gates, we consider quadratic phase gates to be available as black-box quantum gates.

Quantum Fourier transform. In contrast to both automorphism gates and quadratic phase gates, which act on the entire system H, we will only consider settings where quantum Fourier transforms never act on the black-box portion H_B of the total system. This is a natural restriction; in particular, (to our knowledge) in all existing quantum algorithms that do use QFTs, these QFTs act on systems of the form H_{N_1} ⊗ · · · ⊗ H_{N_c}. This is precisely the case we consider here. To define the QFT, consider the Hilbert space H_N with standard basis vectors |x⟩ with x ∈ Z_N. The QFT F_N over Z_N is a unitary operation which acts on |ψ⟩ = ∑ ψ_x |x⟩ in H_N as F_N |ψ⟩ = (1/√N) ∑_{x,y ∈ Z_N} e^{2πixy/N} ψ_x |y⟩. In this work we will consider the quantum Fourier transform F_{N_i} to act on H_{N_i}, for any system i = 1, . . . , c. A normalizer circuit over G [1,2] is any unitary circuit composed of normalizer gates. As input state, we will consider any standard basis state |g⟩ with g ∈ G. After all gates in the circuit are applied, a measurement in the standard basis is performed. We do not consider intermediate measurements. We also recall, as mentioned above, that all automorphism and quadratic phase gates are to be given as black-box operations.

Infinite groups

Here we extend the definition of black box normalizer circuits to general Hilbert spaces of the form (9). This will be technically somewhat more involved than the finite-dimensional case. The main technical complication is not related to the black-box portion H_B of the Hilbert space, but rather to the infinite-dimensional space H_Z and, in particular, to the fact that we work with two different bases in this space, labeled by two completely different groups Z and T. We have already introduced these bases in section 3. The role of these bases in the normalizer circuit model has been discussed in detail in our previous work [3], but we give here a self-contained summarized account.
Designated bases B G . Consider a Hilbert space of the form (9). Fix any member G of the family of 2 a groups defined in (10). Note that G thus generally contains factors Z N i , Z, T and B. We consider the corresponding group-element basis B G = {|g : g ∈ G} as defined in (11). We set B G to be (what we call) the designated basis of the computation. The group G that labels B G choice of basis will determine what normalizer gates (read below). B G will be the basis in which measurements are performed. Automorphism gates. The definition of automorphism gates is similar to above, but now the action of these gates is defined relative to the designated basis. That is, we consider a group automorphism α : G → G and define the corresponding automorphism gate U α by its action on the basis B G , as follows: U α : |h → |α(h) . As above, automorphism gates are given as oracles. Furthermore, in the infinite case we impose that the group automorphism must be a continuous map. Quadratic phase gates. The action of quadratic gates is also defined relative to the designated basis. A function χ : G → U (1) is called a character if its continuous and if for some bicharacter B(g, h). A quadratic phase gate is any diagonal unitary operation acting on B G as D ξ : |h → ξ(h)|h , where ξ is a quadratic function of G. As above, quadratic gates are given as oracles. Quantum Fourier transforms. In contrast to both automorphism gates and quadratic phase gates, which leave the designated basis unchanged, the role of the quantum Fourier transform (QFT) is precisely to change the designated basis B G (at a given time) into another group-element basis B G ′ . This transformation follows certain rules [3], described next. Roughly speaking, the QFT of H Z is a basis change between the standard and the Fourier basis (6). Note that the QFT over Z N , which we introduced as a gate (14), could be defined also in this way (as a change of basis). However, the Fourier transform has now more exotic features than their finite-dimensional counterparts. First, strictly speaking, the QFT over H Z is not a quantum gate: despite being a change of basis, it does not define a unitary rotation. Second, there are actually two inequivalent Fourier transforms of H Z . These technicalities deserve further discussion. QFTs over H Z are not gates. In the case of H N , both the standard and the Fourier basis have the same cardinality (since Z N is isomorphic to its character group) and such a change of basis can be actively performed by means of a unitary rotation, which defines the QFT over Z N . In the case of H Z , the standard basis {|x : x ∈ Z} and Fourier basis {|p : p ∈ T} have different cardinality (recall section 3.2) and cannot be "rotated" into each other. Therefore, the QFT, while corresponding to a change of basis, will as such not be a unitary gate in the usual sense 10 . Because of this asymmetry, there are two QFTs, defined as follows. • QFT over Z. If the standard basis (Z basis) is the designated basis of H Z , states are represented as Gates are defined according to this (integer) basis, which is also our measurement basis. When we say that the QFT over Z is applied to |ψ , we mean that the designated basis is changed from the standard basis to the Fourier basis. The state does not actually change (no gate is physically applied 11 ), but the normalizer gates acting after the QFT 10 Mathematically, this Fourier transform is a unitary transformation between two different functional spaces, L 2 (Z) and L 2 (T). 
The latter two define one quantum mechanical system with two possible bases (of Dirac-delta measures) labeled by Z and T. In the finite dimensional case, the picture is simpler because the QFT is a unitary transformation of L 2 (ZN ) onto itself. (These facts are consequences of the Plancherel theorem for locally compact Abelian groups [61,62].) 11 We choose this notation to be consistent with our previously existing terminology [1,2]. will be associated with T (not Z), and measurements will be performed in the T basis. We therefore ought to write the wavefunction of the state |ψ in the Fourier basis: • QFT over T. In the opposite case, the designated basis of H Z is the Fourier basis, in which a state looks like When we say that the QFT over T is applied to |ψ , we mean that the designated basis is changed from the Fourier basis to the standard basis. Like in the previous case, we must re-express the state |ψ in the new designated basis: Note that, by definition, the QFT over Z may only be applied if the designated basis is the standard basis and, conversely, the QFT over T may only be applied of the designated basis is the Fourier basis. Full and partial QFTs over G. We last consider the total Hilbert space H with designated basis B G . We allow for the application of a QFT on any of the individual spaces H N i or H Z in the tensor product decomposition of H (we call this a partial QFT ). The designated basis is changed according to the rules described above on all subsystems H Z where a QFT is applied. The full QFT over G is the combination of all partial QFTs acting on the smaller registers. The black-box normalizer circuit model We are now ready to introduce normalizer circuits in precise terms. Roughly speaking, a black-box normalizer circuit of size T is a quantum circuit C = U T · · · U 1 composed of T normalizer gates U i , which we have introduced in the previous sections. More precisely, a normalizer circuit over G = Z a+b × Z N 1 × · · · × Z Nc × B is a quantum circuit that acts on a Hilbert space H associated with the group G. In this decomposition, the parameters a, b, c, N i and the Abelian black-box group B can be chosen arbitrarily. To define a complete circuit model, we specify next the allowed inputs, gates and measurements of the computation. • Designated basis. In a normalizer computation there is no fixed "standard basis", in the usual sense of the word that comes from the standard model of quantum circuits [147]. Instead, there is a designated basis B Gt at every time step t of the circuit, that is subject to change along the computation. B Gt is restricted to be group-element basis, as in equation (11). • Input states. The input states of a normalizer computation are elements of some designated group basis B G 0 at time zero 12 . Without loss of generality, we assume that the registers H a Z and H b Z are fed, respectively, with standard-basis |n , n ∈ Z and Fourier-basis states |p , p ∈ T. In our notation, this is equivalent to choosing the basis • Structure of the circuit : • At time t = 1, the gate U 1 is applied, which is either an automorphism gate, quadratic phase gate over G 0 or a QFT. Recall that automorphism gates and quadratic phase gates are given as black boxes. The designated basis is changed from B G 0 to B G 1 , for some group G 1 in the family (10), which is only different from G 0 if a QFT is applied (recall the update rules from the previous section). 
• At time t = 2, the gate U_2 is applied, which is, again, either an automorphism gate, a quadratic phase gate or a QFT over G_1. The designated basis is changed from B_{G_1} to B_{G_2}, for some group G_2.

• The gates U_3, . . . , U_T are considered similarly. We denote by B_{G_t} the designated basis after application of U_t (for some group G_t in the family (10)), for all t = 3, . . . , T. Thus, after all gates have been applied, the designated basis is B_{G_T}.

• After the circuit, a measurement in the designated basis B_{G_T} is performed.

Precision requirements

In the model of quantum circuits above, input states and final measurements in the Fourier basis {|p⟩, p ∈ T} of H_Z can never be implemented with perfect accuracy, a limitation that stems from the fact that the |p⟩ states are unphysical. This can be quickly seen in two ways: first, in the Z basis, these states are infinitely-spread plane waves |p⟩ = ∑_{z∈Z} e^{2πizp} |z⟩; second, in the T basis, they are infinitely-localized Dirac-delta pulses. Physically, preparing Fourier-basis states or measuring in this basis perfectly would require infinite energy and lead to infinite precision issues in our computational model. In the algorithms we study in this work (namely, the order-finding algorithm in theorem 2), Fourier states over Z can be substituted with realistic physical approximations. The degree of precision used in the process of Fourier state preparation is treated as a computational resource. We model the precision used in a computation as follows. Since our goal is to use the Fourier basis |p⟩, p ∈ T, to represent information in a computation, we require the ability to store and retrieve information in this continuous-variable basis. Our assumption is that for any set X with cardinality d = |X|, we can divide the continuous circle-group T spectrum into d equally sized sectors of length 1/d and use them to represent the elements of X. More precisely, to each element of X we assign a number in Z_d; the element i ∈ Z_d is represented by states whose Fourier-basis support lies within the i-th sector, and we write V_{i,d} for the subspace of such states. We call the latter states d-approximate Fourier states and refer to d as the precision level of the computation. We assume that these states can be prepared and distinguished to any desired precision d in the following way:

1. State preparation assumption. Inputs |ψ_i⟩ with at least 2/3 fidelity to some element of V_{i,d} can be prepared for any i ∈ Z_d.

2. Distinguishability assumption. The subspaces V_{i,d} can be reliably distinguished.

Note that d determines how much information is stored in the Fourier basis.

Definition 1 (Efficient use of precision 13). A quantum algorithm that uses d-approximate Fourier states to solve a computational problem with input size n is said to use an efficient amount of precision if and only if log d is upper bounded by some polynomial of n. Analogously, an algorithm that stores information in the standard basis {|m⟩, m ∈ Z} is said to be efficient if the states |m⟩ with m larger than some threshold m_max with log(m_max) ∈ O(poly n) do not play a role in the computation.

Classical encodings for normalizer gates

We finish this section discussing how the normalizer gates of a given normalizer circuit are presented in a classical encoding. Since quantum Fourier transforms can be specified by mere bit strings storing the circuit locations where they act, we focus on automorphism and quadratic phase gates. We again let G = Z^a × T^b × Z_{N_1} × · · · × Z_{N_c} × B be the group that defines the designated basis B_G in a normalizer circuit and define m = a + b + c.
From now on, we restrict ourselves to studying group automorphisms and quadratic functions which are efficiently computable rational functions. This limits the class of classical functions that we consider. 1. Rational. 14 An automorphism (or an arbitrary function) α : G → G is rational if it returns rational outputs for all rational inputs. A quadratic function ξ is rational if it can be written in the form ξ(g) = exp (2πi q(g)) where q is a rational function from G into R modulo 2Z. 2. Efficiently computable. α and q can be computed by polynomial-time uniform family of classical circuits {α i }, {q i }. All α i , q i are poly(m, i) size classical circuits that query the black-box group oracle at most poly(m, i) times: their inputs are strings of rational numbers whose numerators and denominators are represented by i classical bits (their size is O(2 i )). For any rational element g ∈ G that can be represented with so many bits (if G contains factors of the form T these are approximated by fractions), it holds that α i (g) = α(g) and q i (g) = q(g). In certain cases (see section 5) we will consider groups like Z × N which, strictly speaking, are not black-box groups (because polynomial time algorithms for group multiplication for them are available and there is no need to introduce oracles). In those cases, the queries to the black-box group oracle (in the above model) are substitute by some efficient subroutine. We add a third restriction to the above. 3. Precision bound. For any q or α that acts on an infinite group a bound n out is given so that for every i, the number of bits needed to specify the numerators and denominators in the output of q i or α i exactly is at most i + n out . The bound n out is independent of i and indicates how much the input of each function may grow or shrink along the computation of the output 15 . This bound is used to correctly store the output of maps α : Z a → Z a , α ′ : Z a → T a and to detect whether the output of a function α ′′ : The allowed automorphism gates U α and quadratic phase gates D ξ are those associated with efficiently computable rational functions α, ξ. We ask these unitaries to be efficiently implementable as well 16 , by poly(m, i, n out )-size quantum circuits comprising at most poly(m, i, n out ) quantum queries of the group oracle. The variable i denotes the bit size used to store the labels g of the inputs |g and bounds the precision level d of the normalizer computation, which we set to fulfill log d ∈ O(i + n out ). The complexity of a normalizer gate is measured by the number of gates and (quantum) oracle queries needed to implement them. In the next section 5, we will see particular examples of efficiently computable normalizer gates. We will repeatedly make use of automorphism gates of the form where k i are integers and b j , x are elements of some black-box group B. These gates are allowed in our model, since there exist well-known efficient classical circuits for modular exponentiation given access to a group multiplication oracle [139]. In this case, a precision bound can be easily computed: since the infinite elements k i do not change in size and all the elements of B are specified with strings of the same size, the output of α can be represented with as many bits as the input and we can simply take n out = 0 (no extra bits are needed). Many examples of efficiently computable normalizer gates were given in [1,2], for decomposed finite group Z N 1 × · · · × Z Nc . 
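As an aside, the classical modular-exponentiation circuits invoked above (square-and-multiply with access to a group-multiplication oracle) are easy to make concrete. The following is a minimal sketch, not taken from [139]; the function names and the toy instantiation with Z_21^× are our own illustration.

```python
def group_power(mult, identity, a, m):
    """Compute a^m in a black-box group using only a multiplication oracle
    mult(x, y) and the identity element.  Square-and-multiply needs only
    O(log m) oracle calls, matching the poly-size circuits assumed in the text."""
    if m < 0:
        raise ValueError("negative exponents would also need an inversion oracle")
    result, base = identity, a
    while m > 0:
        if m & 1:                    # multiply current square in when this bit of m is set
            result = mult(result, base)
        base = mult(base, base)      # square
        m >>= 1
    return result

# Toy instantiation with the multiplicative group Z_21^* (multiplication mod 21):
N = 21
mult = lambda x, y: (x * y) % N
print(group_power(mult, 1, 2, 6))    # 2^6 mod 21 = 1, so the order of 2 divides 6
```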
It was also shown in [1] that all normalizer gates over such groups can be efficiently implemented.

Quantum algorithms

In this section we consider the discrete-logarithm problem studied by Shor [28]. For any prime number p, let Z_p^× be the multiplicative group of non-zero integers modulo p. An instance of the discrete-log problem over Z_p^× is determined by two elements a, b ∈ Z_p^×, such that a generates the group Z_p^×. Our task is to find the smallest non-negative integer s that is a solution to the equation a^s = b mod p; this number is called the discrete logarithm, s = log_a b. We now review Shor's algorithm [28,6] for this problem and prove our first result.

15 For infinite groups there is no fundamental limit to how much the output of α or q may grow/shrink with respect to the input (this follows from the normal forms in [3]). The number n_out parametrizes the precision needed to compute the function. Similarly to 2., this assumption might be weakened if a treatment for precision errors is incorporated in the model.

16 Recall that, in finite dimensions, the gate cost of implementing a classical function α as a quantum gate is at most the classical cost [147] and that computing q efficiently is enough to implement ξ using phase kick-back tricks [33]. We expect these results to extend to infinite dimensional systems of the form H_Z.

Theorem 1 (Discrete logarithm). Shor's quantum algorithm for the discrete logarithm problem over Z_p^× is a black-box normalizer circuit over the group Z_{p−1}^2 × Z_p^×.

Theorem 1 shows that black box normalizer circuits over finite Abelian groups can efficiently solve a problem for which no efficient classical algorithm is known. In addition, it tells us that black-box normalizer circuits can render widespread public-key cryptosystems vulnerable: namely, they break the Diffie-Hellman key-exchange protocol [47], whose security relies on the assumed classical intractability of the discrete-log problem.

Proof. Let us first recall the main steps in Shor's discrete log algorithm.

Output. The least nonnegative integer s such that a^s ≡ b (mod p).

We will use three registers indexed by integers, the first two modulo p − 1 and the last modulo p. The first two registers will correspond to the additive group Z_{p−1}, while the third register will correspond to the multiplicative group Z_p^×. Two important ingredients of the algorithm will be the unitary gates U_a : |s⟩ → |sa⟩ and U_b : |s⟩ → |sb⟩; step 3 of the algorithm is equivalent to applying the controlled-U_a^x gate between the first and third registers, and the controlled-U_b^y gate between the second and third registers.

4. Measure and discard the third register. This step generates a so-called coset state, which depends on a uniformly random element γ of Z_{p−1} and on the discrete logarithm s.

5. Apply the quantum Fourier transform over Z_{p−1} to the first two registers.

6. Measure the system in the standard basis to obtain a pair of the form (k′, k′s) mod (p − 1) uniformly at random.

7. Classical post-processing. By repeating the above process n times, one can extract the discrete logarithm s from these pairs with exponentially high probability (at least 1 − 2^{−n}), in classical polynomial time.

Note that the Hilbert space of the third register precisely corresponds to H_B if we choose the black-box group to be B = Z_p^×.
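To make the classical post-processing of step 7 concrete, here is a small sketch under the assumption (as stated above) that each run yields a pair of the form (k′, k′s) mod (p − 1); a single pair with k′ coprime to p − 1 already determines s. The helper names and the toy numbers are ours.

```python
from math import gcd

def recover_dlog(pairs, p):
    """Recover s from pairs (c, d) satisfying d = c*s (mod p-1).
    Any pair with gcd(c, p-1) = 1 suffices; otherwise we report failure
    and more quantum samples should be taken."""
    n = p - 1
    for c, d in pairs:
        if gcd(c, n) == 1:
            return (pow(c, -1, n) * d) % n   # s = c^{-1} d mod (p-1)
    return None

# Toy check: p = 23, secret discrete logarithm s = 9, simulated measurement pairs
p, s = 23, 9
pairs = [(4, (4 * s) % (p - 1)), (7, (7 * s) % (p - 1))]
print(recover_dlog(pairs, p))   # prints 9
```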
It is now easy to realize that Shor's algorithm for discrete log is a normalizer circuit over Z p−1 × Z p−1 × Z × p : steps 2 and 4 correspond to applying partial QFTs over Z p−1 , and the gate U applied in state 3 is a group automorphism over We stress that, in the proof above, there is no known efficient classical algorithm for solving the group decomposition problem for the group Z × p (as we define it in section 5.5): although, by assumption, we know that Z × p = a ∼ = Z p−1 , this information does not allow us to convert elements from one representation to the other, since this requires solving the discrete-logarithm problem itself. In other words, we are unable to compute classically the group isomorphism Z × p ∼ = Z p−1 . In our version of the group decomposition problem, we require the ability to compute this group isomorphism. For this reason, we treat the group Z × p as a black-box group. Shor's factoring algorithm In this section we will show that normalizer circuits can efficiently compute the order of elements of (suitably encoded) Abelian groups. Specifically, we show how to efficiently solve the order finding problem for every (finite) Abelian black-box group B [4] with normalizer circuits. Due to the well-known classical reduction of the factoring problem to the problem of computing orders of elements of the group Z × N , our result implies that black-box normalizer circuits can efficiently factorize large composite numbers, and thus break the widely used RSA public-key cryptosystem [48]. We briefly introduce the order finding problem over a black-box group B, that we always assume to be finite and Abelian. In addition, we assume that the elements of the black-box group can be uniquely encoded with n-bit strings, for some known n. The task we consider is the following: given an element a of B, we want to compute the order |a| of a (the smallest positive integer r with the property 17 a r = 1). Our next theorem states that this version of the order finding problem can be efficiently solved by a quantum computer based on normalizer circuits. Theorem 2 (Order finding over B). Let B be a finite Abelian black-box group with an n-qubit encoding and H B be the Hilbert space associated with this group. Let V a be the unitary that performs the group multiplication operation on H B : V a |x = |ax . We denote by c -V a the unitary that performs V a on H B controlled on the value of an ancillary register H Z : for any m in Z. Assume that we can query an oracle that implements c -V a in one time step for any a ∈ B. Then, there exists a hybrid version of Shor's order-finding algorithm, which can compute the order |a| of any a ∈ B efficiently, using normalizer circuits over the group Z × B and classical post-processing. The algorithm runs in polynomial-time, uses an efficient amount of precision and succeeds with high probability. In theorem 2, by "efficient amount of precision" we mean that instead of preparing Fourier basis states of H Z or measuring on this (unphysical) basis, it is enough to use realistic physical approximations of these states (cf. section 4). Proof. For simplicity, we assume that a generating set of B with O(n) elements is given (otherwise we could generate one efficiently probabilistically by sampling elements of B). We divide the proof into two steps. In the first part, we give an infinite-precision quantum algorithm to randomly sample elements from the set Out a = { k |a| : k ∈ Z} that uses normalizer circuits over the group Z × B in polynomially many steps. 
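As an aside, the classical reduction from factoring to order finding mentioned above can be sketched as follows; the code is ours, and the brute-force `order` routine is only a classical stand-in for the quantum subroutine of theorem 2.

```python
from math import gcd
from random import randrange

def order(a, N):
    """Classical stand-in for quantum order finding: the least r > 0 with a^r = 1 (mod N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def nontrivial_factor(N):
    """Textbook reduction: find a factor of an odd composite N (not a prime power)
    from the orders of random elements of Z_N^*."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                          # lucky draw: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:                    # i.e. a^(r/2) is not -1 mod N
                return gcd(y - 1, N)          # guaranteed to be a nontrivial factor
        # otherwise resample a and try again

print(nontrivial_factor(15))                  # prints 3 or 5
```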
In this first algorithm, we assume that Fourier basis states of H_Z can be prepared perfectly and that there are no physical limits on measurement precision; the outcomes k/|a| will be stored with floating-point arithmetic and with finite precision. The algorithm allows one to extract the period |a| efficiently by sampling fractions k/|a| (quantumly) and then using a continued fraction expansion (classically). In the second part of the proof, we will remove the infinite precision assumption.

Our first algorithm is essentially a variation of Shor's algorithm for order finding [28] with one key modification: whereas Shor's algorithm uses a large n-qubit register H_2^{⊗n} to estimate the eigenvalues of the unitary V_a, we will replace this multiqubit register with a single infinite-dimensional Hilbert space H_Z. The algorithm is hybrid in the sense that it involves both continuous- and discrete-variable registers. The key feature of this algorithm is that, at every time step, the implemented gates are normalizer gates, associated with the groups Z × Z_N^× and T × Z_N^× (which are, themselves, related via the partial Fourier transforms F_Z and F_T). The algorithm succeeds with constant probability.

Algorithm 2 (Hybrid order finding with infinite precision).

Input. A black box (finite Abelian) group B, and an element a ∈ B.

Output. The order s of a in B, i.e. the least positive integer s such that a^s = 1.

We will use multiplicative notation for the black box group B, and additive notation for all other groups.

1. Initialization: Initialize H_Z on the Fourier basis state |0⟩ with 0 ∈ T, and H_B on the state |1⟩, with 1 ∈ B. In our formalism, we will regard |0, 1⟩ as a standard-basis state of the basis labeled by T × B.

2. Apply the Fourier transform F_T to the register H_Z. This changes the designated basis of this register to be the one labeled by the group Z. The state |0⟩ in the new basis is an infinitely-spread comb of the form ∑_{m∈Z} |m⟩.

3. Let the oracle c-V_a act jointly on H_Z × H_B; then the state is mapped in the following manner: (∑_{m∈Z} |m⟩)|1⟩ → ∑_{m∈Z} |m⟩|a^m⟩. Note that, in our formalism, the oracle c-V_a can be regarded as an automorphism gate U_α. Indeed, the gate implements a classical invertible function on the group, α(m, x) = (m, a^m x). The function is, in addition, a continuous 18 group automorphism, since α((m, x)(n, y)) = α(m + n, xy) = (m + n, (a^{m+n})(xy)) = (m + n, (a^m x)(a^n y)) = (m, a^m x)(n, a^n y) = α(m, x)α(n, y).

4. Measure and discard the register H_B. Say we obtain a^s as the measurement outcome. Note that the function a^m is periodic with period r = |a|, the order of the element. Due to periodicity, the state after measuring a^s will be of the form (∑_{j∈Z} |s + jr⟩) |a^s⟩. After discarding H_B we end up in a periodic state ∑_{j∈Z} |s + jr⟩ which encodes r = |a|.

5. Apply the Fourier transform F_Z to the register H_Z. We work again in the Fourier basis of H_Z, which is labelled by the circle group T. The periodic state ∑_{j∈Z} |s + jr⟩ in the dual T basis reads [149] ∑_{k=0}^{r−1} e^{2πi sk/r} |k/r⟩.

6. Measure H_Z in the Fourier basis (the basis labeled by T). Since the initial state of the computation is as close to |0⟩ as we wish, the wavefunction of the final state (23) is sharply peaked around values p ∈ T of the form k/r. As a result, a high-resolution measurement will let us sample these numbers (within some floating-point precision window ∆) nearly uniformly at random.

7.
Classical postprocessing: Repeat steps 1-7 a few times and use a (classical) continued-fraction expansion algorithm [147,150] to extract the order r from the randomly sampled multiples {k_i/r}_i. This can be done, for instance, with an algorithm from [151] that obtains r with constant probability after sampling two numbers k_1/r, k_2/r, if the measurement resolution is high enough: ∆ ≤ 1/(2r^2) is enough for our purposes.

Manifestly, there is a strong similarity between algorithm 2 and Shor's factoring algorithm: the quantum Fourier transforms F_T and F_Z in our algorithm play the role of the discrete Fourier transform F_{2^n}, and c-V_a acts as the modular exponentiation gate [28]. In fact, one can regard algorithm 2 as a "hybrid" version of Shor's algorithm combining both continuous- and discrete-variable registers. The remarkable feature of this version of Shor's algorithm is that the quantum part of the algorithm (steps 1-6) is a normalizer computation.

Algorithm 2 is efficient if we just look at the number of gates it uses. However, the algorithm is inefficient in that it uses infinitely-spread Fourier states |p⟩ = ∑_{m∈Z} e^{−2πipm} |m⟩ (which are unphysical and cannot be prepared with finite computational resources) and arbitrarily precise measurements. We finish the proof of theorem 2 by giving an improved algorithm that does not rely on unphysical requirements.

1-2 Initialization: Initialize H_B to |1⟩. The register H_Z will begin in an approximate Fourier basis state |0̃⟩ = (1/√(2M+1)) ∑_{m=−M}^{+M} |m⟩, i.e. a square pulse of length 2M + 1 in the integer basis, centered at 0. This step simulates steps 1-2 in algorithm 2.

3-4 Repeat steps 3-4 of algorithm 2. The state after obtaining the measurement outcome a^s is now different due to the finite "length" of the comb ∑_{m=−M}^{M} |m⟩; we obtain a state of the form (24), where L = L_a + L_b + 1 and s is obtained nearly uniformly at random from {0, . . . , r−1}. The values L_a, L_b are positive integers of the form ⌊M/r⌋ − ǫ with −2 ≤ ǫ ≤ 0 (the particular value of ǫ depends on s, but it is irrelevant in our analysis). Consequently, we have L = 2⌊M/r⌋ − (ǫ_a + ǫ_b).

5 Apply the Fourier transform F_Z to the register H_Z. The wavefunction of the final state ψ̃ is the Fourier transform of the wavefunction ψ of (24). We compute ψ̃ using formula (17); to derive the resulting expression, we apply the summation formula of the geometric series and re-express the result in terms of the Dirichlet kernel [61] D_{L,r}(p) = sin(πLpr)/sin(πpr).

6 Measure H_Z in the Fourier basis. We show now that, if the resolution is high enough, then the probability distribution of measurement outcomes will be "polynomially close" to the one obtained in the infinite precision case (23). Intuitively, this is a consequence of the fact that in the limit M → ∞ (when the initial state becomes an infinitely-spread comb), we have also L → ∞ and that the function D_{L,r}(p) converges to a train ∑_{k=0}^{r−1} δ_{k/r}(p) of Dirac measures [61]. In addition, for a high finite value of M, we find that the probability of obtaining some outcome p within a ∆ = 1/(Lr) window of a fraction k/r is also high; to derive the corresponding bound one uses the mean value theorem and the inequality sin(x)^2 ≤ x^2. It follows that with constant probability (larger than 4/π^2 ≈ 0.41) the measurement will output some outcome ∆/2-close to a number of the form k/r. (A tighter lower bound of 2/3 for the success probability can be obtained by evaluating the integral numerically.)
Lastly, note that although the derivation of (27) implicitly assumes that the finial measurement is infinitely precise, it is enough to implement measurements with resolution close to ∆. Due to the peaked shape of the final distribution (27), it follows that Θ( 1 M ) resolution is enough if our task is to sample ∆ 2 -estimates of these fractions nearly uniformly at random; this scaling is efficient as a function of M (cf. section 4). 7 Classical postprocessing: We now set M (the length of the initial comb state) to be large enough so that ∆ 2 = 1 2Lr ≤ 1 2r 2 ; taking log M = O(poly n) is enough for our purposes. With such an M , the measurement step 6 will output a number p that is 1 2r 2 close to a k r with high probability, which can be increased to be arbitrarily close to 1 with a few repetitions. We then proceed as in step 7 of algorithm 2 to compute the order r. Shor's algorithm as a normalizer circuit Our discussion in the previous section reveals strong a resemblance between our hybrid normalizer quantum algorithm for order finding and Shor's original quantum algorithm for this problem [28]: indeed, both quantum algorithms employ remarkably similar circuitry. In this section we show that this resemblance is actually more than a mere fortuitous analogy, and that, in fact, one can understand Shor's original order-finding algorithm as a discretized version of our finite-precision hybrid algorithm for order finding 2. Note that the theorem does not imply that all possible quantum algorithms for order finding are normalizer circuits (or discretized versions of some normalizer circuit). What it shows is that the one first found by Shor in [28] does exhibit such a structure. Proof. Our approach will be to show explicitly that the evolution of the initial quantum state in Shor's algorithm is analogous to that of the initial state in algorithm 3 if we discretize the computation. Recall that Shor's algorithm implements a quantum phase estimation [41] for the unitary V a . Let D be the dimension of the Hilbert space used to record such phase. We assume D to be odd 19 So far, we have simulated step 1 in algorithm 3 by constructing the same periodic state. These first two steps are also clearly analogous to steps 1-2 in algorithm 2. 3-4 Apply the modular exponentiation gate U me , which is the following unitary [28] U me |m, x = |m, a m x , to the state. Measure the register H Z × N in the standard basis. We obtain, again, a quantum state of the form (24), with L ≤ D. 6 We apply the discrete Fourier transform F Z D to the register H Z D again. We claim now that the output state will be a discretized version of (25) due to a remarkable mathematical correspondence between Fourier transforms. Note that any quantum state |ψ of the infinite-dimensional Hilbert space H Z can be regarded as a quantum state of H D given that the support of |ψ is limited to the standard basis states |0 , |±1 , . . . , |±M . Let us denote the latter state |ψ D to distinguish both. Then, we observe a correspondence between letting F Z act on |ψ and letting F Z D act on |ψ D . The correspondence (equation 30) tells us that, since we have ψ(x) = ψ D (x), it follows that the Fourier transformed functionψ D (k) is precisely the functionψ(p) evaluated at points of the form p = k D . The final state can be written as which is, indeed, a discretized version of (25). 
7-8 The last steps of Shor's algorithm are identical to 7-8 in algorithm 3, with the only difference being that the wavefunction (31) is now a discretization of (25). The probability of measuring a number k such that k D is close to a multiple of the form k ′ r will again be high, due to the properties of the Dirichlet kernel (26). Indeed, one can show (see, e.g. [6]) with an argument similar to (27) that, by setting D = N 2 , the algorithm outputs with constant probability and almost uniformly a fraction k D among the two closest fraction to some value of the form k/r (see e.g. [28] for details). The period r can be recovered, again, with a continued fraction expansion. Normalizer gates over ∞ groups are necessary to factorize At this point, it is a natural question to ask whether it is necessary at all to replace the Hilbert space H n 2 with an infinite-dimensional space H Z with an integer basis in order to be able to factorize with normalizer circuits. We discuss in this section that, in the view of the authors, this is a key indispensable ingredient of our proof. We begin our discussion by showing rigorously, in the black-box set-up, that no quantum algorithm for factoring based on modular exponentation gates (controlled V a rotations) can be efficiently implemented with normalizer circuits over finite Abelian groups, in a strong sense. We prove the theorem in appendix A. We highlight that a similar result was proven in [1, theorem 2]: that normalizer circuits over groups of the form Z 2 n × Z N also fail to approximate the modular exponentiation. Also, we point out that it is easy to see that the converse of theorem 4 is also true: if |a| divides M , then an argument similar to (49) shows that (m, x) → (m, a m x) is a group automorphism of Z M ×B, and the gate U mf automatically becomes a normalizer automorphism gate. The main implication of theorem 4 is that finite-group normalizer circuits cannot implement nor approximate the quantum modular exponentiation gate between H B , playing the role of the target system, and some ancillary control system, unless a multiple M = λ|a| of the order of a is known in advance. Yet the problem of finding multiples of orders is at least as hard as factoring and order-finding: for B = Z × N , a subroutine to find multiples of orders can be used to efficiently compute classically a multiple of the order of the group ϕ(N ), where ϕ is the Euler totient function, and it is known that factoring is polynomial-time reducible to the problem of finding a single multiple of the form λϕ(N ) [31]. We arrive to the conclusion that, unless we are in the trivial case where we know how to factorize in advanced, a factoring algorithm based on finite-group normalizer gates cannot comprise controlled-V a rotations. We further conjecture that any other approach based on finite-group normalizer gates cannot work either. Conjecture 1. Unless factoring is contained in BPP, there is no efficient quantum algorithm to solve the factoring problem using only normalizer circuits over finite Abelian groups (even when these are allowed to be black-box groups) and classical pre-and post-processing. We back up our conjecture with two facts. On one hand, Shor's algorithm for factoring [28] (to our knowledge, the only quantum algorithm for factoring thus far) uses a modular exponentiation gate to estimate the phases of the unitary V a , and these gates are hard to implement with finite-group normalizer circuits due to theorem 4. 
On the other hand, the reason why this does work for the group Z seems to be, in the view of the authors, intimately related to the fact that the order-finding problem can be naturally cast as an instance of the Abelian hidden subgroup problem over Z (see also section 5.4). Note that, although one can always cast the order-finding problem as an HSP over any finite group Z_{λϕ(N)} for an integer λ, this formulation of the problem is unnatural in our setting, as it requires (again) the prior knowledge of a multiple of ϕ(N), which we could use to factorize and find orders classically without the need of a quantum computer [31].

Elliptic curves

In the previous sections we have seen that black-box normalizer circuits can compute discrete logarithms in Z_p^× and break the Diffie-Hellman key exchange protocol. In the proof, we showed that Shor's algorithm for this problem decomposes naturally in terms of normalizer gates over Z_p^×, treated as a black-box group. It is known that Shor's algorithm can be adapted in order to compute discrete logarithms over arbitrary black-box groups. In particular, this can be done for the group of solutions E of an elliptic curve [32][33][34], thereby rendering elliptic curve cryptography (ECC) vulnerable. Efficient unique encodings and fast multiplication algorithms for these groups are known, so that they can be formally treated as black-box groups. In this section, we show that a quantum algorithm given by Proos and Zalka [32] to compute discrete logarithms over elliptic curves can be implemented with black-box normalizer circuits.

Basic notions

To begin, we review some rudiments of the theory of elliptic curves. For simplicity, our survey focuses only on the particular types of elliptic curves that were studied in [32], over fields with characteristic different from 2 and 3. Our discussion applies equally to the (more general) cases considered in [33,34], although the definition of the elliptic curve group operation becomes more cumbersome in such settings 20. For more details on the subject, in general, we refer the reader to [6,152].

Let p > 3 be prime and let K be the field defined by endowing the set Z_p with the addition and multiplication operations modulo p. An elliptic curve E over the field K is a finite Abelian group formed by the solutions (x, y) ∈ K × K to an equation of the form y^2 = x^3 + αx + β, together with a special element O called the "point at infinity"; the coefficients α, β in this equation live in the field K. The discriminant ∆ := −16(4α^3 + 27β^2) is nonzero, ensuring that the curve is non-singular. The elements of E are endowed with a commutative group operation. If P ∈ E then P + O = O + P = P. The inverse element −P of P is obtained by the reflection of P about the x axis. Given two elements P = (x_P, y_P) and Q = (x_Q, y_Q) ∈ E with Q ≠ −P, the element P + Q is defined to be the point R = (x_R, y_R) with x_R = λ^2 − x_P − x_Q and y_R = λ(x_P − x_R) − y_P, where λ = (y_Q − y_P)/(x_Q − x_P) in the case P ≠ Q, and λ = (3x_P^2 + α)/(2y_P) in the case P = Q and y_P ≠ 0; if Q = −P, then P + Q = O. R can also be defined, geometrically, to be the "intersection between the elliptic curve and the line through P and Q" (with a minus sign) [6]. It is not hard to check from the definitions above that the elliptic-curve group E is finite and Abelian; from a computational point of view, the elements of E can be stored with n ∈ O(log |K|) bits and the group operation can be computed in O(poly n) time. Therefore, the group E can be treated as a black box group.
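The group law just described is simple to implement classically, which is what makes E a black-box group in practice. The following sketch (ours) adds points of y² = x³ + αx + β over Z_p, representing the point at infinity O by `None`; the sample curve and point are an arbitrary toy choice.

```python
def ec_add(P, Q, alpha, p):
    """Add two points of the curve y^2 = x^3 + alpha*x + beta over Z_p.
    Points are (x, y) tuples; None plays the role of the point at infinity O."""
    if P is None:
        return Q
    if Q is None:
        return P
    (xP, yP), (xQ, yQ) = P, Q
    if xP == xQ and (yP + yQ) % p == 0:
        return None                                            # Q = -P, so P + Q = O
    if P != Q:
        lam = (yQ - yP) * pow(xQ - xP, -1, p) % p              # slope of the chord through P and Q
    else:
        lam = (3 * xP * xP + alpha) * pow(2 * yP, -1, p) % p   # slope of the tangent at P
    xR = (lam * lam - xP - xQ) % p
    yR = (lam * (xP - xR) - yP) % p
    return (xR, yR)

# Toy example: the curve y^2 = x^3 + 2x + 2 over Z_17 and the point P = (5, 1)
alpha, p = 2, 17
P = (5, 1)
print(ec_add(P, P, alpha, p))   # doubling P gives (6, 3)
```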
Finally, the discrete logarithm problem (DLP) over an elliptic curve is defined in a way analogous to the Z × p case, although now we use additive notation: given a, b ∈ E such that xa = b for some integer x; our task is to find the least nonnegative integer s with that property. The elliptic-curve DLP is believed to be intractable for classical computers and can be used to define cryptosystems analog to Diffe-Hellman's [6]. Finding discrete logarithms over elliptic curves with normalizer circuits In this section we review Proos-Zalka's quantum approach to solve the DLP problem over an elliptic curve [32]; their quantum algorithm is, essentially, a modification of Shor's algorithm to solve the DLP over Z × p , which we covered in detail in section 5.1. Our main contribution in this section is that Proos-Zalka's algorithm can be implemented with normalizer circuits over the group Z × Z × E. The proof reduces to combining ideas from sections 5.1 and 5.2 and will be sketched in less detail. Input. An elliptic curve with associated group E (the group operation is defined as per (33)), and two points a, b ∈ E. It is promised that sa = b for some nonnegative integer s. Output. Find the least nonnegative integer s such that sa = b. 2. Fourier transforms are applied to the ancillas to create the superposition (x,y)∈A |x, y, O . 3. The following transformation is applied unitarily: 4. Fourier transforms are applied again over the ancillas and then measured, obtaining an outcome of the form (x ′ , y ′ ). These outcomes contain enough information to extract the number s, with similar post-processing techniques to those used in Shor's DLP algorithm. Algorithm 4 is not a normalizer circuit over Z N × Z N × E. Similarly to the factoring case, the algorithm would become a normalizer circuit if the classical transformation in step 3 was an automorphism gate; however, for this to occur, N needs to be a common multiple of the orders of a and b (the validity of these claims follows with similar arguments to those in section 5.2). In view of our results in sections 5.1 and 5.2, one can easily come up with two approaches to implement algorithm 1 using normalizer gates. (a) The first approach would be to use our normalizer version of Shor's algorithm (theorem 2) to find the orders of the elements a and b: normalizer gates over Z × E would be used in this step. Then, the number N in algorithm 4 can be set so that all the gates involved become normalizer gates over Z N × Z N × E. (b) Alternatively, one can choose not to compute the orders by making the ancillas infinite dimensional, just as we did in algorithm 2. The algorithm becomes a normalizer circuit over Z × Z × E: as in algorithm 2, the ancillas are initialized to the zero Fourier basis state, and the discrete Fourier transforms are replaced by QFTs over T (in step 2) and Z (in step 4). A finite precision version of the algorithm can be obtained in the same fashion as we derived algorithm 2. Proos-Zalka's original algorithm could, again, be interpreted as a discretization of the resulting normalizer circuit. The hidden subgroup problem All problems we have considered this far-finding discrete logarithms and orders of Abelian group elements-fit inside a general class of problems known as hidden subgroup problems over Abelian groups [43][44][45][46]. 
Most quantum algorithms discovered in the early days of quantum computation solve problems that can be recasted as Abelian HSPs, including Deutsch's problem [37], Simon's [38], order finding and discrete logarithms [28], finding hidden linear functions [39], testing shift-equivalence of polynomials [40], and Kitaev's Abelian stabilizer problem [41,42]. In view of our previous results, it is natural to ask how many of these problems can be solved within the normalizer framework. In this section we show that a well-known quantum algorithm that solves the Abelian HSPs (in full generality) can be modeled as a normalizer circuit over an Abelian group O. Unlike previous cases, the group involved in this computation cannot be regarded as a black-box group, as it will not be clear how to perform group multiplications of its elements. This fact reflects the presence of oracular functions with unknown structure are present in the algorithm, to which the group O is associated; thus, we call O an oracular group. We will discuss, however, that this latter difference does not seem to be very substantial, and that the Abelian HSP algorithm can be naturally regarded as a normalizer computation. The quantum algorithm for the Abelian HSP In the Abelian hidden subgroup problem we are given a function f : G → X from an Abelian finite 21 group G to a finite set X. The function f is constant on cosets of the form g + H, where H is a subgroup "hidden" by the function; moreover, f is different between different cosets. Given f as a black-box, our task is to find such a subgroup H. The Abelian HSP is a hard problem for classical computers, which need to query the oracle f a superpolynomial amount of times in order to identify H [6]. In contrast, a quantum computer can determine H in polynomial time O(polylog |G|), and using the same amount of queries to the oracle. We describe next a celebrated quantum algorithm for this task [43,44,35]. The algorithm is efficient given that the group G is explicitly given 22 in the form G = Z d 1 × · · · × Z dm [35,36,46]. Algorithm 5 (Abelian HSP). Input. An explicitly decomposed finite abelian group G = Z d 1 × · · · × Z dm , and oracular access to a function f : G → X for some set X. f satisfies the promise that f (g 1 ) = f (g 2 ) iff g 1 = g 2 + h for some h ∈ H, where H ⊆ G is some fixed but unknown subgroup of G. Output. A generating set for H. 1. Apply the QFT over the group G to an initial state |0 in order to obtain a uniform superposition over the elements of the group g∈G |g . 2. Query the oracle f in an ancilla register, creating the state 1 |G| g∈G |g, f (g) (35) 21 In this section we assume G to be finite for simplicity. For a case where G is infinite, we refer the reader back to section 5.2, where we studied the order finding problem (which is a HSP over Z). 22 If the group G is not given in a factorized form, the Abelian HSP may still be solved by applying Cheung-Mosca's algorithm to decompose G (see next section). 3. The QFT over G is applied to the first register, which is then measured. 4. After repeating 1-3 polynomially many times, the obtained outcomes can be postprocessed classically to obtain a generating set of H with exponentially high probability (we refer the reader to [86] for details on this classical part). We now claim that the quantum part of algorithm 5 is a normalizer circuit, of a slightly more general kind than the ones we have already studied. 
The normalizer structure of the HSP-solving quantum circuit is, however, remarkably well-hidden compared to the other quantum algorithms that we have already studied. It is indeed a surprising fact that there is any normalizer structure in the circuit, due to the presence of an oracular function whose inner structure appears to be completely unknown to us!

Theorem 5 (The Abelian HSP algorithm is a normalizer circuit). In any Abelian hidden subgroup problem, the subgroup-hiding property of the oracle function f induces a group structure O in the set X. With respect to this hidden "linear structure", the function f becomes a group homomorphism, and the HSP-solving quantum circuit becomes a normalizer circuit over G × O.

The proof is the content of the next two sections.

Unweaving the hidden-subgroup oracle

The key ingredient in the proof of the theorem (which is the content of the next section) is to realize that the oracle f cannot fulfill the subgroup-hiding property without having a hidden homomorphism structure, which is also present in the quantum algorithm. First, we show that f induces a group structure on X. Without loss of generality, we assume that the function f is surjective, so that im f = X. (If this is not true, we can redefine X to be the image of f.) Thus, for every element x ∈ X, the preimage f^{−1}(x) is contained in G, and is a coset of the form f^{−1}(x) = g_x + H, where H is the hidden subgroup and f(g_x) = x. With these observations in mind, we can define a group operation in X as follows: x · y = f̄(f^{−1}(x) + f^{−1}(y)) (36). In (36) we denote by f̄ the function f̄(x + H) = f(x) that sends cosets x + H to elements of X. The subgroup-hiding property guarantees that this function is well-defined; moreover, f and f̄ are related via f(x) = f̄(x + H). The addition operation on cosets f^{−1}(x) = g_x + H and f^{−1}(y) = g_y + H is just the usual group operation of the quotient group G/H [29]: (g_x + H) + (g_y + H) = (g_x + g_y) + H (37). By combining the two expressions, we get an explicit formula for the group multiplication in terms of coset representatives: x · y = f(g_x + g_y). It is routine to check that this operation is associative and invertible, turning X into a group, which we denote by O. The neutral element of the group is the string e in X such that e = f(0) = f(H), which we show explicitly: x · e = f̄(f^{−1}(x) + f^{−1}(e)) = f̄((g_x + H) + H) = f̄(g_x + H) = x. The group O is manifestly finite and Abelian; the latter property is due to the fact that the addition (37) is commutative. Lastly, it is straightforward to check that the oracle f is a group homomorphism from G to O: for any g, h ∈ G, letting x := f(g) and y := f(h), we have f(g) · f(h) = x · y = f̄((g + H) + (h + H)) = f̄((g + h) + H) = f(g + h). It follows from the first isomorphism theorem in group theory [29] that O is isomorphic to the quotient group G/H via the map f̄.

The HSP quantum algorithm is a normalizer circuit

We will now analyze the role of the different quantum gates used in algorithm 5 and see that they are examples of normalizer gates over the group G × O, where O is the oracular group that we have just introduced. The Hilbert space underlying the computation can be written as H_G ⊗ H_O, with the standard basis {|g, x⟩ : g ∈ G, x ∈ O} associated with this group. We will initialize the ancillary registers to the state |e⟩, where e = f(0) is the neutral element of the group; the total state at step 1 will be |0, e⟩. The Fourier transforms in steps 1 and 3 are just partial QFTs over the group G, which are normalizer gates. The quantum state at the end of step 1 is (1/√|G|) ∑_{g∈G} |g, e⟩.
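As an aside, the induced group operation (36) can be made concrete on a toy instance; everything in the following sketch (the group Z_12, the hidden subgroup, the labels and the helper names) is our own illustration, not part of the algorithm.

```python
# Toy illustration of the oracular group O induced on the set X by a hiding function f.
G_SIZE = 12                       # G = Z_12
H = {0, 4, 8}                     # hidden subgroup; G/H is isomorphic to Z_4
labels = ["apple", "pear", "plum", "kiwi"]

def f(g):
    """An oracle hiding H: constant on each coset g + H, distinct between cosets."""
    return labels[g % 4]

def coset_representative(x):
    """Some g_x with f(g_x) = x (any representative of the coset f^{-1}(x) works)."""
    return next(g for g in range(G_SIZE) if f(g) == x)

def induced_mult(x, y):
    """The induced operation x.y = f(g_x + g_y); well-defined by the hiding property."""
    return f((coset_representative(x) + coset_representative(y)) % G_SIZE)

# X = {"apple", ...} becomes a group isomorphic to G/H = Z_4, with neutral element f(0):
print(induced_mult("pear", "plum"))    # "kiwi", since 1 + 2 = 3 in Z_4
print(induced_mult("apple", "kiwi"))   # "kiwi", showing f(0) = "apple" acts as the identity
```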
Next, we look now at step 2 of the computation: This step can be implemented by a normalizer automorphism gate defined as follows. Let α : G × O → G × O be the function α(g, x) = (g, f (g) · x). Using the fact that f : G → O is a group homomorphism (39), it is easy to check that α is a group automorphism of G × O. Then the evolution at step 2 corresponds to the action of the automorphism gate U α : Of course, our choice to begin the computation in the state |0, e and to apply U α in step 3 is only one possible way to implement the first three steps of the algorithm. We could have alternatively initialized the computation on some |0, 0 state and used a slightly different gate U add = |g, x + f (g) g, x| in step 3. The latter sequence of gates can however be regarded as an exact gate-by-gate simulation of the former, so that it is perfectly licit to call the algorithm a normalizer computation-at least up to steps 3 and 4. Finally, note that in the last step of the algorithm we measure the H G in the standard basis like a normalizer computation. Therefore, every step in the quantum algorithm 5 corresponds to one of the allowed operations in a normalizer circuit over G × O. This finishes the proof of theorem 5. The oracular group O is not a black-box group (but almost) We ought to stress, at this point, that although theorem 5 shows that the Abelian HSP quantum algorithm is a normalizer computation over an Abelian group G × O, the oracular group O is not a black-box group (as defined in section 2.4), since it is not clear how to compute the group operation (36), due to our lack of knowledge about the oracular function which defines the multiplication rule. Yet, even in the absence of an efficiently computable group operation, we regard it natural to call the Abelian HSP quantum algorithm a normalizer circuit over G × O. Our reasons are multi-fold. First, there is a manifest strong similarity between the quantum circuit in algorithm 5 and the other normalizer circuits that we have studied in previous sections, which suggests that normalizer operations naturally capture the logic of the Abelian HSP quantum algorithm. Second, it is in fact possible to argue that, although O is not a black-box group, it behaves effectively as a black-box group in the quantum algorithm. Observe that, although it is true that one cannot generally compute x · y for arbitrary x, y ∈ O, it is indeed always possible to multiply any element x by the neutral element e, since the computation is trivial in this case: x · e = e · x = x. Similarly, in the previous section, it is not clear at all how to implement the unitary transformation U α |g, x = |g, f (g) · x for arbitrary inputs. However, for the restricted set of inputs that we need in the quantum algorithm (which is just the state |e ), it is trivial to implement the unitary, for in this case U α |g, e = |g, f (g) ; since quantum queries to the oracle function are allowed (as in step 2 of the algorithm), the unitary can be simulated by such process, regardless of how it is implemented. Consequently, the circuit effectively behaves as a normalizer circuit over a black-box group. Third, although the oracular model in the black-box normalizer circuit setting is slightly different from the one used in the Abelian HSP they are still remarkably close to each other. To see this, let x i be the elements of X defined as x i := f (e i ) where e i is the bit string containing a 1 in the ith position and zeroes elsewhere. 
Since the e i s form a generating set of G, the x i s generate the group O. Moreover, the value of the function f evaluated on an m , since f is a group homomorphism. It follows from this expression that the group homomorphism is implicitly multiplying elements of the group O. We cannot use this property to multiply elements of O ourselves, since everything happens at the hidden level. However, this observation shows that the assuming that f is computable is tightly related to the assumption that we can multiply in O, although slightly weaker. (See also the next section.) Finally, we mention that this very last feature can be exploited to extend several of our main results, which we derive in the black-box setting, to the more-general "HSP oracular group setting" (although proofs become more technical). For details, we refer the reader to sections 6-8 and appendix C. A connection to a result by Mosca and Ekert Prior to our work, it was observed by Mosca and Ekert [45,35] that f must have a hidden homomorphism structure, i.e. that f can be decomposed as E • α where α is a group homomorphism between G and another Abelian group Q ∼ = G/H, and E is a one-to-one hiding function from Q to the set X. In this decomposition, E hides the homomorphism structure of the oracle. Our result differs from Mosca-Ekert's in that we show that X itself can always be viewed as a group, with a group operation that is induced by the oracle, with no need to know the decomposition E • α. It is possible to relate both results as follows. Since both Q and O are isomorphic to G/H, they are also mutually isomorphic. Explicitly, if β is an isomorphism from Q to G/H (this map depends on the particular decomposition f = E • α), then Q and O are isomorphic via the mapf • β. Decomposing finite Abelian groups As mentioned earlier, there is a quantum algorithm for decomposing Abelian groups, due to Cheung and Mosca [35,36]. In this section, we will introduce this problem, and present a quantum algorithm that solves it, which uses only black-box normalizer circuits supplemented with classical computation. The algorithm we give is based on Cheung-Mosca's, but it reveals some additional information about the structure of the black-box group. We will refer to it as the extended Cheung-Mosca's algorithm. The group decomposition problem In this work, we define the group decomposition problem as follows. The input of the problem is a list of generators α = (α 1 , · · · , α k ) of some Abelian black-box group B. Our task is to return a group-decomposition table for B. A group-decomposition table is a tuple (α, β, A, B, c) consisting of the original string α and four additional elements: (a) A new generating set β = β 1 , . . . , β ℓ with the property B = β 1 ⊕ · · · ⊕ β ℓ . We will say that these new generators are linearly independent. (b) An integer vector c containing the orders of the linearly independent generators β i . (c) Two integer matrices A, B that relate the old and new generators as follows: This last equation should be read in multiplicative notation (as in e.g. [153]), where "vectors" of group elements are right-multiplied by matrices as follows: given the ith column a i of A (for the left hand case), we have β i = (α 1 , . . . , α k )a i = α Our definition of the group decomposition is more general than the one given in [35,36]. In Cheung and Mosca's formulation, the task is to find just β and c. The algorithm they give also computes the matrix A in order to find the generators β i (cf. the next section). 
What is completely new in our formulation is that we ask in addition for the matrix B. Note that a group-decomposition table (α, β, A, B, c) contains a lot of information about the group structure of B. First of all, the tuple elements (a-b) tell us that B is isomorphic to a decomposed group G = Z c 1 × · · · × Z c k . In addition, the matrices A and B provide us with an efficient method to re-write linear combinations of the original generators α i as linear combinations of the new generators β j (and vice-versa). Indeed, equation (43) implies for any x ∈ Z k , β y 1 1 · · · β y ℓ 1 = β 1 , . . . , β ℓ y = α 1 , . . . α k (Ay), for any y ∈ Z ℓ . It follows that, for any given x, the integer string y = Bx (which can be efficiently computed classically) fulfills the condition α x 1 1 · · · α x k 1 = β y 1 1 · · · β y ℓ 1 . (A symmetric argument proves the opposite direction.) As we discussed earlier in the introduction, the group decomposition problem is provably hard for classical computers within the black-box setting, and it is at least as hard as Factoring (or Order Finding) for matrix groups of the form Z × N (the latter being polynomial-time reducible to group decomposition). It can be also shown that group decomposition is also at least as hard as computing discrete logarithms, a fact that we will use in the proof of theorems 6, 7: Lemma 1 (Multivariate discrete logarithms). Let β 1 , . . . , β ℓ be generators of some Abelian black-box group B with the property B = β 1 ⊕ · · · ⊕ β ℓ . Then, the following generalized version of the discrete-logarithm problem is polynomial time reducible to group decomposition: for a given β ∈ B, find an integer string x such that β x 1 1 · · · β x ℓ ℓ = β. Proof. Define a new set of generators for B by adding the element β ℓ+1 = β to the given set {β i }. The array α ′ := (β 1 , . . . , β ℓ + 1) defines an instance of Group Decomposition. Assume that a group decomposition table (α ′ , (β ′ 1 , . . . , β ′ m ), A ′ , B ′ , c ′ ) for this instance of the problem is given to us. We can now use the columns b ′ i of the matrix B ′ to re-write the previous generators β i in terms of the new ones: Here, e i denotes the integer vector with e(i) = 1 and e(j) = 0 elsewhere. Conditions (a-b) imply that the columns b ′ i can be treated as elements of the group G = Z c ′ 1 ×· · ·×Z c ′ m . Using this identification, the original discrete logarithm problem reduces to finding an integer string i (now in additive notation). The existence of such an x can be easily proven using that the elements β 1 , . . . , β ℓ generate B: the latter guarantees the existence of an x such that . By finding such an x, we can solve the multivariate discrete problem, since β (44). Finally, note that we can find x efficiently with existing deterministic classical algorithms for Group Membership in finite Abelian groups (cf. lemma 3 in [2]). We highlight that, in order for the latter result to hold, it seems critical to use our formulation of group decomposition instead of Cheung-Mosca's. Consider again the discrete-log problem over the group Z × p (recall section 5.1). This group Z × p is cyclic of order p − 1 and a generating element a is given to us as part of the input of the discrete-log problem. Although it is not known how to solve this problem efficiently, Cheung-Mosca's group decomposition problem (find some linearly independent generators and their orders) can be solved effortlessly in this case, by simply returning a and p − 1, since a = Z × p ∼ = Z p−1 . 
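The Z_p^× example above can be made concrete with a small classical sketch (the toy parameters p = 101 and a = 2 are assumptions): the Cheung-Mosca-style decomposition is immediate, whereas rewriting an arbitrary element in terms of the generator, which is the task the matrix B solves for us, is a discrete logarithm, done here by brute force.

```python
# Toy illustration: for B = Z_p^*, a Cheung-Mosca-style decomposition is immediate
# (return the given generator a and its order p-1), but rewriting an arbitrary
# element in terms of that generator is a discrete logarithm.
p, a = 101, 2                      # 2 generates Z_101^* (prime modulus, order 100)

def order(g, p):
    x, k = g, 1
    while x != 1:
        x, k = (x * g) % p, k + 1
    return k

assert order(a, p) == p - 1        # decomposition: <a> = Z_p^*, isomorphic to Z_{p-1}

def dlog(beta, a, p):
    """Brute-force discrete log: find x with a^x = beta (mod p)."""
    x, cur = 0, 1
    while cur != beta:
        cur, x = (cur * a) % p, x + 1
    return x

beta = pow(a, 77, p)
print(dlog(beta, a, p))            # 77
```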
The crucial difference is that Cheung-Mosca's algorithm returns a factorization Z c 1 × · · · × Z c ℓ of B, but it cannot be used to convert elements between the two representations efficiently (one direction is easy; the other requires computing discrete logarithms). In our formulation, the matrices A, B provide such a method. Quantum algorithm for group decomposition We now present a quantum algorithm that solves the group decomposition problem. 1. Use the order finding algorithm (comprising normalizer circuit over Z × B and classical postprocessing) to obtain the orders d i of the generators α i . Then, compute (classically) and store their least common multiplier d = lcm(d 1 , . . . , d k ). Define the function , which is a group homomorphism and hides the subgroup ker f (its own kernel). Apply the Abelian HSP algorithm to compute a set of generators h 1 , . . . , h m of ker f . This round uses normalizer circuits over Z k d × B and classical post-processing (cf. section 5.4). 3. Given the generators h i of ker f one can classically compute a k × ℓ matrix A (for some ℓ) such that (β 1 , . . . , β ℓ ) = (α 1 , . . . , α k )A is a system of linearly independent generators [36, theorem 7]. β, A and the orders c i of the β i s (computed again via an order-finding subroutine) will form part of the output. It is easy to see that a matrix X fulfilling (a-b) always exists, since for any α i , there exists some y i such that α i = (β 1 , . . . , β ℓ )y i (because the β i s generate the group). It follows that α i = (α 1 , . . . , α k )(Ay i ). Then, the matrix with columns x i = Ay i has the desired properties. Our existence proof for X is constructive, and tells us that X can be computed in quantum polynomial time by solving a multivariate discrete logarithm problem (lemma 1). However, we will use a more subtle efficient classical approach to obtain X, by reducing the problem to a system of linear equations over Abelian groups [2,3]. Let H be a matrix formed column-wise by the generators h i of ker f . By construction, the image of the map H : Z m d → Z k d fulfills imH = ker f . Properties (a-b) imply that the ith column x i of X must be a particular solution to the equations x i = Ay i with y i ∈ Z ℓ and x i = e i + Hz i mod d, with z i ∈ Z m d . These equations can be equivalently written as a system of linear equations over Z m+ℓ : which can be solved in classical polynomial time using e.g. algorithms from [3]. Then, the matrix X can be constructed column wise taking x i = Ay i . Finally, given such an X, it is easy to find a valid B by computing a Hurt-Waid integral pseudo-inverse A # of A [154,155]: In the third step, we used that A # acts as the inverse of A on inputs x ∈ Z k that live in the image of A [154]. Since integral pseudo-inverses can be computed efficiently using the Smith normal form (see e.g. our dicussion in [3, appendix D]), we finally set B := A # X. Simulation of black-box normalizer circuits Our results so far show that the computational power of normalizer circuits over black-box groups (supplemented with classical pre-and post-processing) is strikingly high: they can solve several problems believed to be classically intractable and render the RSA, Diffie-Hellman, and elliptic curve public-key cryptosystems vulnerable. In contrast, standard normalizer circuits, which are associated with Abelian groups that are explicitly decomposed, can be efficiently simulated classically, by exploiting a generalized stabilizer formalism [1][2][3] over Abelian groups. 
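A minimal classical sketch of the bookkeeping in steps 1 and 2 of the decomposition algorithm above, with brute force standing in for the quantum order-finding and hidden-subgroup subroutines (the toy black-box group inside Z_36^× and the generators below are assumptions chosen only for illustration):

```python
# Sketch of the classical bookkeeping around steps 1-2 of the decomposition algorithm.
from math import gcd
from itertools import product

N = 36
alphas = [5, 7, 35]                      # generators alpha_i of a subgroup of Z_36^*

def order(g):
    x, k = g, 1
    while x != 1:
        x, k = (x * g) % N, k + 1
    return k

d = 1
for a in alphas:
    o = order(a)                          # quantum order finding in the real algorithm
    d = d * o // gcd(d, o)                # d = lcm of the orders

# f : Z_d^k -> B, f(x) = prod alpha_i^{x_i}; a homomorphism hiding ker f.
def f(x):
    val = 1
    for a, e in zip(alphas, x):
        val = (val * pow(a, e, N)) % N
    return val

kernel = [x for x in product(range(d), repeat=len(alphas)) if f(x) == 1]
print(d, len(kernel))                     # generators of ker f feed step 3
```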
It is natural to wonder at this point where the computational power of black-box normalizer circuits originates. In this section, we will argue that the hardness of simulating blackbox normalizer circuits resides precisely in the hardness of decomposing black-box Abelian groups. An equivalence is suggested by the fact that we can use these circuits to solve the group decomposition problem and, in turn, when the group is decomposed, the techniques in [1][2][3] render these circuits classically simulable. In this sense, then, the quantum speedup of such circuits appears to be completely encapsulated in the group decomposition algorithm. This intuition can be made precise and be stated as a theorem. Theorem 6 (Simulation of black-box normalizer circuits). Black-box normalizer circuits can be efficiently simulated classically using the stabilizer formalism over Abelian groups [1][2][3] if a subroutine for solving the group-decomposition problem is provided as an oracle. The proof of this theorem is the subject of section B in the appendix. Since normalizer circuits can solve the group decomposition problem (section 5.5), we obtain that this problem is complete for the associated normalizer-circuit complexity class, which we now define. Definition 2 (Black-Box Normalizer). The complexity class Black-Box Normalizer is the set of oracle problems that can be solved with bounded error by at most polynomially many rounds of efficient black-box normalizer circuits (as defined in section 4.3), with polynomial-sized classical computation interspersed between. In other words, if N is an oracle that given an efficient (poly-sized) black-box normalizer circuit as input, samples from its output distribution, then Corollary 1 (Completeness of group decomposition). Group decomposition is a complete problem for the complexity class Black-Box Normalizer under classical polynomialtime Turing reductions. We stress that theorem 6 tells us even more than the completeness of group decomposition. As we discussed in the introduction, an oracle for group decomposition gives us an efficient classical algorithm to simulate Shor's factoring and discrete-log algorithm (and all the others) step-by-step with a stabilizer-picture approach "à la Gottesman-Knill". We also highlight that theorem 6 can be restated as a no-go theorem for finding new quantum algorithms based on black-box normalizer circuits. Theorem 7 (No-go theorem for new quantum algorithms). It is not possible to find "fundamentally new" quantum algorithms within the class of black-box normalizer circuits studied in this work, in the sense that any new algorithm would be efficiently simulable using the extended Cheung-Mosca algorithm and classical post-processing. This theorem tells us that black-box normalizer circuits cannot give exponential speedups over classical circuits that are not already covered by the extended Cheung-Mosca algorithm; the theorem may thus have applications to algorithm design. Note, however, that this no-go theorem says nothing about other possible polynomial speed-ups for black-box normalizer circuits; there may well be other normalizer circuits that are polynomially faster, conceptually simpler, or easier to implement than the extended Cheung-Mosca algorithm. Our theorem neither denies that investigating black-box normalizer could be of pedagogical or practical value if, e.g., this led to new interesting complete problems for the class Black-Box Normalizer. 
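The elementary quantum round behind these results (and which theorem 8 below shows is essentially all that is ever needed) is the Fourier-transform, oracle, Fourier-transform kernel-finding round. A minimal numpy sketch for a toy cyclic group and toy homomorphism (both assumptions, not the paper's notation) illustrates how the measurement outcomes are supported on the orthogonal subgroup of the hidden kernel:

```python
# Minimal numpy simulation of one QFT - oracle - QFT round over a toy group Z_8.
import numpy as np

N = 8
f = lambda x: (2 * x) % N                 # toy homomorphism Z_8 -> Z_8, ker f = {0, 4}

F = np.array([[np.exp(2j * np.pi * x * y / N) for y in range(N)]
              for x in range(N)]) / np.sqrt(N)     # QFT over Z_N

psi = np.zeros((N, N), dtype=complex); psi[0, 0] = 1.0   # two registers, start |0>|0>
psi = np.einsum('xa,ay->xy', F, psi)               # QFT on the first register
U = np.zeros((N, N, N, N))                         # oracle |x, y> -> |x, y + f(x)>
for x in range(N):
    for y in range(N):
        U[x, (y + f(x)) % N, x, y] = 1.0
psi = np.einsum('xyab,ab->xy', U, psi)
psi = np.einsum('xa,ay->xy', F, psi)               # second QFT on the first register

probs = np.sum(np.abs(psi) ** 2, axis=1)           # measure the first register
print(np.round(probs, 3))   # support only on {0, 2, 4, 6}, the orthogonal subgroup of ker f
```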
Finally, we note that theorem 6 can be extended to the general Abelian hidden subgroup problem to show that the quantum algorithm for the Abelian HSP becomes efficiently classically simulable if an algorithm for decomposing the oracular group O is given to us (cf. section 5.4 and refer to appendix C for a proof). We discuss some implications of this fact in the next sections. Universality of short quantum circuits Since all problems in Black-Box Normalizer are solvable by the extended Cheung-Mosca quantum algorithm (supplemented with classical processing), the structure of said quantum algorithm allows us to state the following: Theorem 8 (Universality of short normalizer circuits). Any problem in the class Black-Box Normalizer can be solved by a quantum algorithm composed of polynomiallymany rounds of short normalizer circuits, each with at most a constant number of normalizer gates, and additional classical computation. More precisely, in every round, normalizer circuits containing two quantum Fourier transforms and one automorphism gate (and no quadratic phase gate) are already sufficient. Proof. This result follows immediately form the fact that group decomposition is complete for this class (theorem 1) and from the structure of the extended Cheung-Mosca quantum algorithm with this problem, which has precisely this structure. Similarly to theorem 6, theorem 8 can be extended to the general Abelian HSP setting. For details, we refer the reader to appendix C. We find the latter result is insightful, in that it actually explains a somewhat intriguing feature present in Deutsch's, Simon's, Shor's and virtually all known quantum algorithms for solving Abelian hidden subgroup problems: they all contain at most two quantum Fourier transforms! Clearly, it follows from this theorem than no more than two are enough. Also, theorem 8 tells us that it is actually pretty useless to use logarithmically or polynomially long sequences of quantum Fourier transforms for solving Abelian hidden subgroup problems, since just two of them suffice 24 . In this sense, the Abelian HSP quantum algorithm uses an asymptotically optimal number of quantum Fourier transforms. Furthermore, the normalizer-gate depth of this algorithm is optimal in general. Other Complete problems We end this paper by giving two other complete problems for the complexity class Black Box Normalizer. Theorem 9 (Hidden kernel problem is complete). Let the Abelian hidden kernel problem (Abelian HKP) be the subcase of the hidden subgroup problem where the oracle function f is a group homomorphism from a group of the form G = Z a × Z N 1 × · · · × Z N b into a blackbox group B. This problem is complete for Black Box Normalizer under polynomial-time Turing reductions. Proof. Clearly group decomposition reduces to this problem, since the quantum steps of the extended Cheung-Mosca algorithm algorithm (steps 1 and 3) are solving instances of the Abelian kernel problem. Therefore, the Abelian HKP problem is hard for Black Box Normalizer. Moreover, Abelian HKP can be solved with the general Abelian HSP quantum algorithm, which manifestly becomes a black-box normalizer circuit for oracle functions f that are group homomorphisms onto black-box groups. This implies that Abelian HKP is inside Black Box Normalizer, and therefore, it is complete. Note. 
Although we originally stated the Abelian HSP for finite groups, one can first apply the order-finding algorithm to compute a multiple d of the orders of the elements f (e i ), where e i are the canonical generators of G. This can be used to reduce the original HKP problem to a simplified HKP over the group Z a d × Z N 1 × · · · × Z N b The latter result can be extended to the HSP setting to show that the Abelian hidden subgroup problem is polynomial-time equivalent to decomposing groups of the form O (cf. appendix C). Theorem 10 (System of linear equations over groups). Let α be a group homomorphism from a group G = Z a × Z N 1 × · · · × Z N b onto a black-box group B. An instance of a linear system of equations over G and B 25 is given by α and an element b ∈ B. Our task is to find a general (x 0 , K) solution to the equation where x 0 is any particular solution and K is a generating set of the kernel of α. This problem is complete for Black Box Normalizer under polynomial-time Turing reductions. 24 This last comment does not imply that building up sequences of Fourier transforms is useless in general. On the contrary, this can be actually be useful, e.g., in QMA amplification [156]. 25 These systems where studied extensively in our previous works [2,3] in the decomposed-group setting. Proof. Clearly, this problem is hard for our complexity class, since the Abelian hidden kernel problem reduces to finding K. Moreover, this problem can be solved with black-box normalizer circuits and classical computation, proving its completeness. First, we find a decomposition B = β i ∼ = H = Z c 1 × · · · × Z c ℓ with black-box normalizer circuits. Second, we recycle the de-black-boxing idea from the proof of theorem 6 to compute a matrix representation of α, and solve the multivariate discrete logarithm problem b = β b(1) 1 · · · β b(ℓ) ℓ , b ∈ H, either with black-box normalizer circuits or classically (recall section 5.5). The original system of equations can now be equivalently written as Ax = b (mod H). A general solution of this system can be computed with classical algorithms given in [3]. A Proof of theorem 4 To prove the result we can assume that we know a group isomorphism ϕ : Z × N → G that decomposes the black-box group as a product of cyclic factors G = Z N 1 × · · · × Z N d . Let U ϕ : H B → H G be the unitary that implements the isomorphism U ϕ |b = |ϕ(b) for any b ∈ Z × N . It is easy to check that C is a normalizer circuit over Z M × G if and only if (I ⊗ U ϕ )C(I ⊗ U ϕ ) † is a normalizer circuit over Z M × Z × N : automorphism (resp. quadratic phase) gates get mapped to automorphism (resp. quadratic phase) gates and vice-versa; isomorphic groups have isomorphic character groups [58], and therefore Fourier transforms get mapped to Fourier transforms. As a result, it is enough to prove the result in the basis labeled by elements of Z × M × G. The advantage now is that we can use results from [1,2]. In fact, the rest of the proof will be similar to the proof of theorem 2 in [1]. We define α := ϕ(a). The action of U me in the G-basis reads U me |m, g = |m, mα + g , in additive notation. Define a fuction F (m, g) = (m, mα + g). We now assume that M is not divisible by |a| and that there exists a normalizer circuit C such that C − U me ≤ δ with δ = 1 − 1/ √ 2 and try to arrive to a contradiction. 
This property implies that C|m, g − U me |m, g ≤ δ for any standard basis state, and consequently It was shown in [1] that C|m, g is a uniform superposition over some subset x+H of G, being H a subgroup. If H has more than two elements, then C|m, g is a uniform superposition over more than two computational basis states. It follows that n, h|C|m, g ≤ 1 √ 2 for any basis state |n, h in contradiction with 49, so that we can assume H = {0} and that C|m, g is a standard basis state. Then (49) implies that |F (m, g) and C|m, g must coincide for every (m, g) ∈ Z M × G, so that C must perfectly realize the transformation |(m, g) → |F (m, g) ; however, the only classical functions that can be implemented by normalizer circuits of this form are affine maps [1], meaning that F (m, g) = f (m, g) + b for some group automorphism f : Finally, we arrive to a contradiction showing that F (m, g) is not affine unless M is a multiple of |a|. First, by evaluating F (m, g) = f (m, g) + b = (m, mα + g) at (0, 0),(1, 0) and elements of the form (0, g), we can compute b and a matrix representation A of the automorphism f [1]: we obtain b = 0, so that F (m, g) must be an automorphism, and A = 1 0 α 1 . However, for the matrix A to be a matrix representation of a group automorphism, the first column a 1 needs to fulfill the equation: M a 1 (mod Z M ×G) [2, lemma 2]. Expanding this equation, we finally get that M α = 0 (mod G), which means that M needs to be a multiple of the order of α. B Proof of theorem 6 In this section we will prove theorem 6. The proof uses results of Section 5.5; the reader may wish to review that section before proceeding with this proof. We first state the simulation result of [3], summarized below: Theorem 11 ([3, theorem 1]). Let G = Z a ×Z N 1 ×· · ·×Z Nc ×T b be an explicitly decomposed elementary abelian group. Suppose we are given a normalizer circuit over G, where each gate is specified as follows (assume each gate is such that all entries of the matrices and vectors below are rational): • A (partial) quantum Fourier transform is specified by the elementary subgroups it acts on. • A group automorphism is specified as a matrix A, as in the normal form that we will introduce in Thm 12. • A quadratic phase gate is specified as (M, v), where M is a matrix and v is a matrix, as in the normal form that we will introduce in Thm 13. Then the output distribution of this normalizer circuit can be sampled classically efficiently. We will describe precisely what we mean by the normal forms of normalizer gates in the following sections. Given a black-box normalizer circuit acting on a black-box group G = Z a × T b × Z N 1 × · · · × Z Nc × B, there are two things we need to do to de-blackbox it, so that the circuit can be classically simulated: 1. Decompose the black-box portion of G, B, into cyclic subgroups: B = Z N c+1 × · · · × Z N c+d . 2. Calculate normal forms for each of the normalizer gates in the computation. Assuming we have access to an oracle for Group Decomposition, Task 1 can already be done. In this proof we will concentrate on tackling task 2, for group automorphisms and quadratic phase gates (a quantum Fourier transform is easily specified by the subgroup it acts on). B.1 Group automorphisms Suppose we have an abelian group G = Z a × Z N 1 × · · · × Z Nc × T b ; we can represent each element g ∈ G as an a+b+c-tuple of real numbers g = (g 1 , · · · , g m ), where each of the g i 's are only defined modulo the characteristic char(G i ) of the group G i , multiplied by some integer. 
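A small sketch of this tuple encoding, under the stated conventions for the characteristics (the toy group Z × T × Z_6 below is an assumption chosen only for illustration):

```python
# Elements of G = Z^a x T^b x Z_N1 x ... as vectors reduced modulo the characteristic
# of each factor, with char(Z) = 0 (no reduction), char(T) = 1, char(Z_N) = N.
from fractions import Fraction

chars = [0, 1, 6]                 # toy group G = Z x T x Z_6

def normalize(g):
    return [gi if ci == 0 else gi % ci for gi, ci in zip(g, chars)]

def add(g, h):
    return normalize([a + b for a, b in zip(g, h)])

g = [3, Fraction(3, 4), 5]
h = [-1, Fraction(1, 2), 4]
print(add(g, h))                  # [2, Fraction(1, 4), 3]
```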
(We have char(Z) = 0, char(T) = 1, and char(Z N ) = N .) Using this matrix representation, it turns out that there exist normal forms for the group automorphisms and quadratic phase functions: Theorem 12 (Normal form of a matrix representation [3, lemmas 7, 8]). Let G = G 1 × · · · × G m be an elementary Abelian group. A real n × m matrix A is a valid matrix representation of some group automorphism α : G → G iff A is of the form with the following restrictions: 1. A ZZ and A TT are arbitrary integer matrices. 2. A F Z , A F F are integer matrices: the first can be arbitrary, while the coefficients of the second must be of the form where α i,j can be arbitrary integers, and N i is the order of the i-th cyclic subgroup of F , Z N i . The coefficients of the i-th rows of these matrices can be chosen w.l.o.g. to lie in the range [0, N i ) (by taking remainders). 3. A TZ and A TF are real matrices: the former is arbitrary, while the coefficients of the latter are of the form A(i, j) = α i,j /N j , where α i,j can be arbitrary integers, and N i is the order of the i-th cyclic subgroup of F , Z N i . (Due to the periodicity of the torus, the coefficients of A TZ , A TF can be chosen to lie in the range [0, 1).) Recall the underlying group is G = Z a × T b × Z N 1 × · · · × Z Nc × B with B ∼ = Z N c+1 × · · · × Z N c+d . Assume we are given black box access to a group automorphism α : G → G implemented as a classical function (a uniformly generated circuit family, say). We wish to find a matrix representation for A for f . We will assume (for the efficiency of this algorithm) that the size and precision of the coefficients are upper bounded by some known constant D, i.e. each element of M can be written as A i,j = α i,j /β i,j for integers α i,j , β i,j with absolute value no more than D. 26 We will show how to find the matrix representation A in two steps: 1. Given access to α, we show how to switch the input and output of α from the blackbox encoding (where the group action is implemented as a black-box circuit) to the decomposed group encoding (where elements of the group are given as a list of numbers, and the group action is simply addition of vectors), and vice versa. cases listed above; the only case we still need to treat is A TT , whose entries are arbitrary integers (and c i = char(T) = 1 in this case). We can instead evaluate f (e j /α) for some large integer α: which allows us to determine A i,j modulo αc i for our choice of α. Choosing α > 2D then allows us to determine A i,j exactly for the case of A TT . B.2 Quadratic phase functions We will continue to use the matrix representation referenced in the last section. Let G = Z a × Z N 1 × · · · × Z Nc × T b be an elementary Abelian group. Define Z • N = {0, 1/N, · · · , (N − 1)/N } to be a group under addition modulo 1, and let G • = T a × Z • N 1 × · · · × Z • Nc × Z b . Then a function ξ : G → U (1) is quadratic if and only if ξ(g) = e πi (g T M g + C T g + 2v T g) (55) where C, v, M are, respectively, two vectors and a matrix that satisfy the following: • v is an element of G • ; • M is the matrix representation of a group homomorphism from G to G • , which necessarily has the form with the following restrictions: -M ZZ and M TT are arbitrary integer matrices. -M F • Z and M TF are rational matrices, the former with the form M (i, j) = α i,j /N i and the latter with the form M (i, j) = α i,j /N j , where α i,j are arbitrary integers, and N i is the order of the i-th cyclic subgroup Z N i . 
-M F • F is a rational matrix with coefficients of the form where α i,j are arbitrary integers, and N i is the order of the i-th cyclic subgroup Z N i . -M TZ is an arbitrary real matrix. Recall the underlying group is G = Z a × T b × Z N 1 × · · · × Z Nc × B with B ∼ = Z N c+1 × · · · × Z N c+d . Assume we are given a quadratic phase gate ξ, implemented as a classical circuit family q : G → Q such that ξ(g) = e 2πiq(g) ∀g ∈ G. Since we can switch between the black-box group and decomposed group encodings (see section B.1.1), we can assume the elements of G are treated as a vector in Q a+b+c+d . We wish to write the quadratic function ξ(g) in the normal form given by theorem 13, i.e. find M, c, v as in theorem 13 such that ξ(g) = e πi (g T M g + C T g + 2v T g) . q, and hence M , c, and v, are rational by assumption. We will furthermore assume, as before, that the size and precision of the coefficients are upper bounded by some known constant D, i.e. each element of M can be written as M i,j = α i,j /β i,j for integers α i,j , β i,j with absolute value no more than D. To do this, let us first determine the entries of M . This can be done in the following manner: it should be straightforward to verify that ξ(x + y) = ξ(x)ξ(y)e 2πi x T M y (60) for any x, y ∈ G, and therefore x T M y ≡ q(x + y) − q(x) − q(y) mod Z. We can use this method to determine nearly all the entries of M exactly, by taking x and y to be unit vectors e i and e j ; this would determine M ij up to an integer, i.e. M i,j = e T i M e j ≡ q(e i + e j ) − q(e i ) − q(e j ) mod Z. This determines all entries of M except for those in M ZZ and M TT (the other entries can be assumed to lie in [0, 1)). To deal with M ZZ we take x = α −1 e i , and y = e j , such that the coefficient M (i, j) is in the submatrix M ZZ and 1/α is an element of the circle group with α < 2D, where D is the precision bound. We obtain an analogous equation which allows us to determine M i,j : since the number M i,j /α is smaller than 1/2 in absolute value, the coefficient is not truncated modulo 1. One can apply the same argument to obtain the coefficients of M TT , choosing x = e i , and y = α −1 e j . Once we determine all the entries of M in this manner, we get immediately the vector C as well (since C(i) = c i M (i, i)). It is then straightforward to calculate the vectorṽ. Thus we can efficiently find the normal form of ξ(g) through polynomially many uses of the classical function q. C Extending theorem 6 to the Abelian HSP setting In this appendix, we briefly discuss that theorem 6 (and some of the results that follow from this theorem) can be re-proven in the general hidden subgroup problem oracular setting that we studied in section 5.4. This fact supports our view (discussed in the main text) that the oracle models in the HSP and in the black-box setting are very close to each other. Recall that the main result in this section (theorem 5) states that the quantum algorithm Abelian HSP is a normalizer circuits over a group of the form Z d 1 × · · · × Z dm × O, where O is a group associated with the Abelian HSP oracle f via the formula (36). The group O is not a black-box group, because no oracle to multiply in O was provided. However, we discussed at the end of section 5.4 that one can use the hidden subgroup problem oracle to perform certain multiplications implicitly. 
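As a toy illustration of the entry-by-entry probing used in sections B.1 and B.2, the sketch below recovers a hidden quadratic-form matrix M from black-box evaluations of its exponent q via the relation x^T M y ≡ q(x + y) - q(x) - q(y); the group Z_5^2 and the chosen M are assumptions, and this toy q has no linear part, so the relation holds exactly rather than only modulo Z.

```python
# Probing a hidden quadratic form from black-box evaluations (toy example over Z_5^2).
from fractions import Fraction as Fr

M = [[Fr(2, 5), Fr(3, 5)], [Fr(3, 5), Fr(1, 5)]]      # hidden symmetric matrix

def q(g):
    """Quadratic exponent q(g) = (1/2) g^T M g."""
    return Fr(1, 2) * sum(M[i][j] * g[i] * g[j] for i in range(2) for j in range(2))

e = [[1, 0], [0, 1]]
def vec_add(x, y): return [a + b for a, b in zip(x, y)]

# x^T M y = q(x + y) - q(x) - q(y)  (here exactly; in general only mod Z)
M_rec = [[(q(vec_add(e[i], e[j])) - q(e[i]) - q(e[j])) % 1 for j in range(2)]
         for i in range(2)]
print(M_rec)        # recovers M entry by entry, modulo 1
```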
We show next that theorem 6 can be re-casted in the HSP setting as "the ability to decompose the oracular group O renders normalizer circuits over Z d 1 ×· · ·×Z dm ×O efficiently classically simulable". To see this, assume a group decomposition table (α, β, A, B, c) is given. Then we know O ∼ = Z c 1 × · · · × Z cm . Let us now view the function α(g) = (g, f (g)) used in the HSP quantum algorithm as a group automorphism of G × Z c 1 × · · · × Z cm , where we decompose O. Then, it is easy to check that 1 0 B 1 is a matrix representation of this map. It follows that the group decomposition table can be used to de-black-box the HSP oracle, and this fact allows us to adapt the proof of theorem 6 step-by-step to this case. We point out further that the extended Cheung-Mosca algorithm can be adapted to the HSP setting, showing that normalizer circuits over G × O can be used to decompose O. This follows from the fact that the function f that we need to query to decompose B using the extended Cheung-Mosca algorithm (algorithm 6) has precisely the same form as the HSP oracle. Using the HSP oracle as a subroutine in algorithm 6 (which we can query by promise), the algorithm computes a group decomposition tuple for O. Finally, we can combine these last observations with theorem 9 and conclude that the problem of decomposing groups of the form O is classically polynomial-time equivalent to the Abelian hidden subgroup problem. The proof is analogous to that of theorem 9.
Synthesis and characterization of violurate-based Mn(II) and Cu(II) complexes nano-crystallites as DNA-binders and therapeutics agents against SARS-CoV-2 virus Synthesis and structural characterization of nano crystallites of bis-violurate-based manganese(II) and copper(II) chelates is the subject of the present study. Analytical data and mass spectra as well as thermal analysis determined the molecular formulas of the present metal chelates. Spectroscopic and magnetic measurements assigned the structural formula of the present violurate metal complexes. The spectroscopic and magnetic investigations along with structural analysis results indicated the square planar geometry of both the Mn(II) and Cu(II) complexes. The structural analysis of the synthesized metal complexes was achieved by processing the PXRD data using specialized software Expo 2014. Spectrophotometeric and viscosity measurements showed that violuric acid and its Mn(II) and Cu(II) complexes successfully bind to DNA with intrinsic binding constants Kb from 38.2 × 105 to 26.4 × 106 M−1. The antiviral activity study displayed that the inhibitory concentrations (IC50) of SARS-CoV-2 by violuric acid and its Mn(II) and Cu(II) complexes are 84.01, 39.58 and 44.86 μM respectively. Molecular docking calculations were performed on the SARS-CoV-2 virus protein and the computed binding energy values are −0.8, −3.860 −5.187 and −4.790, kcal/mol for the native ligand, violuric acid and its Mn(II) and Cu(II) complexes respectively. Insights into the relationship between structures of the current compounds and their degree of reactivity are discussed. Introduction Severe acute respiratory syndrome corona virus 2 (SARS-CoV-2) has now been proven to be the causative agent of the COVID-19 pandemic, as announced by the World Health Organization (WHO) on March 11, 2020. In this context, the relevant studies reported that one of the ways to eliminate SARS-CoV-2 comes from inhibiting the work of some enzymes necessary for the continuation of the life cycle of the corona virus [1,2]. In this regard, some studies have proven the success of Au-complexes in eliminating Corona virus by inhibiting the activity of some vital enzymes such angiotensin-converting enzyme II (ACE 2) [3]. In addition, the Au complexes showed inhibitory activity of the Papainlike protease (PL pro ) enzyme, which along with 3-Chymotrypsin-Like Protease (3CL pro ), are essential for the survival of SARS-CoV-2 virus [4][5][6][7]. In this context, the therapeutic use of these gold complexes with a half-maximal inhibitory concentration (IC 50 ) of 4 lM reduced SARS-CoV-2 infection in human cells by 95% after 48 h. [8]. Despite the success of these gold complexes in inhibiting the activity of corona virus, their high toxicity prevented them from being used as treatments against SARS-CoV-2 [1,2]. However, this promising therapeutic use of gold complexes against SARS-CoV-2 has excited chemists and pharmacologists to prepare metal complexes with low toxicity and effective therapeutic potential. Among these metal complexes that have shown a lethal effect against the Corona virus are Re(I) tricarbonyl complexes that acted as 3-Chymotrypsin-like protease (3CL pro ) inhibitors [1,2]. In the same respect, a series of Bi(III) complexes showed an inhibitory effect on the activity of RNA -responsible for viral replication and thus they were considered replication inhibitors, making them potential therapeutics for SARS-CoV-2 [9]. 
It should be noted here that the WHO has approved the use of safe antiviral agents that act as inhibitors of SARS-CoV-2 replication in the treatment protocol for COVID-19 [1,2]. A survey in the literature indicates the limited or paucity of experimental studies of the therapeutic uses of metal complexes towards the SARS-CoV-2 [1,2,[10][11][12][13]. On the other hand, the application of computational docking method to discover the therapeutic ability of metal complexes against SARS-CoV-2 virus indicated the therapeutic potential of many metal complexes [14][15][16][17][18]. In this regard and based on molecular docking calculations a series of the transition metal complexes including metal ions belonging to the three series of transition elements, showed significant antiviral activity against a series of pathogenic viruses, including COVID-19 by inhibiting the entry of the virus into the host cells, the RNA replication process or virus budding processes [16]. In the same respect, the results of the molecular docking calculations of these metal complexes showed their virucidal effect against SARS-CoV-2 by inhibiting the RNA replication of the virus making them therapeutic agents against COVID-19 [18]. It is scientifically proven in the medical and pharmaceutical communities, that the ability to bind any drug with DNA is the initial spark in the therapeutic protocol for many diseases. In this regard, several studies have indicated the ability of several metal complexes to bind to DNA, thus giving them the potential to be used as therapeutic agents. In the same context, a number of scientific reports stated that the therapeutic effect of metal complexes against viruses is due to their binding to the virus protein and the inhibition of its replication and activity [19][20][21][22][23][24]. The lack of a specific drug to treat infection with the Corona virus requires intensifying the efforts of chemists and pharmacists to discover an effective and safe treatment against SARS-CoV-2, which is responsible for the outbreak of the Corona virus pandemic. In this regard, extensive in vitro investigations to determine the potential of a number of metal complexes as therapeutic agents against SARS-CoV-2 replication will lead to the discovery of an effective and safe treatment for COVID- 19. The occurrence of an oximato functional group in the organic ligand provides multiple possibility of bonding to a metal ion, resulting in the formation of many stereochemical forms of the formed metal-containing compound [25]. One of these oxime-containing organic ligands is violuric acid in which its deprotonated form, violurato anion, forms versatile metallic complexes. In this respect, bis-violurate -based metal complexes exhibited promising biological potentials such as antitumour, antifungal and antibacterial activity [25]. It is worth noting here that iron, copper and manganese are the most abundant trace elements in human cells. Therefore, the use of complexes of these elements as therapeutic agents will provide higher safety aspect over the complexes of other elements. A literature survey indicates the scarcity of use of both Cu(II) and Mn(II) complexes as therapeutic agents against SARS-CoV-2. Accordingly, studying the synthesis and characterization of violurate-based Cu(II) and Mn(II) complexes and examining their therapeutic potential against SARS-CoV-2 will enrich efforts to reach an effective and safe treatment that limits the spread of the Corona epidemic. 
The aim of the current study is to synthesize bis-violuratebased manganese(II) and copper(II) complexes and to test them as DNA binders and in vitro as therapeutic agents against SARS-CoV-2 virus. Chemicals and materials The chemicals used in the current study were obtained from Aldrich and Merck, which are distinguished for the high purity and quality of their products. Synthesis of violurate-based Mn(II) and Cu(II) complexes Both the violurate-based Mn(II) and Cu(II) complexes synthesized according to the following procedure: An ethanolic solution of (20 mL) containing the hydrated MnCl 2 or CuCl 2 (0.01 M) was added dropwise to the stirred hot ethanolic solution (30 mL) of violuric acid (0.02 M). This reaction mixture was refluxed for 1 h during which a solid colored precipitate was formed. The vessels containing these solid colored products were left to cool to room temperature and then filtered and washed several times with hot ethanol and finally with ether. Drying these metal complexes was performed by keeping them in a desiccator over CaO for one week. The purity of these isolated solid colored metal complexes was confirmed by the analytical data presented in Table 1. Synthesis of [CuL 2 BF 2 ] complex One gram of the synthesized violurate -copper(II) complex was suspended in 50 mL diethyl ether and stirred for half an hour at room temperature. To this stirred mixture 5 mL of boron trifluoride etherate (BF 3 OEt 2 ) dissolved in 50 mL diethyl ether was added dropwise. After 24 h of stirring at room temperature, the black suspension formed was separated by filtration which was washed with aqueous methanol solution several times and finally left to dry in a desiccator over CaO for 1 week. Analytical data determined the molecular formula and confirmed the purity of the separated precipitate. Physicochemical measurements DNA-binding studies, investigations of antiviral activity and method of molecular docking calculations are described in the supplementary materials (S1). General Violuric acid is a heterocyclic compound containing pyrimidine nucleus and known also as 6-Hydroxy-5-nitroso-1H-pyr imidine-2,4-dione (H 3 L). It is a tribasic acid showing three values of pK a which are 4.56, 9.60 and 13.10 [26]. The structural features of the parent violuric acid I (Scheme 1) permit the presence of 10 tautomeric forms at room temperature [27]. Based on PM3 calculation the energy difference between the most stable and the second most stable tautomers is 1.59 kcal mol À1 [27]. Among these tautomers are II and III shown in Scheme 1 which can be used as promising chelators. Despite there are three ionizable protons of a molecule of violuric acid (H 3 L), in the present case it behaves like a monobasic acid. Previous studies have reported that the common mode of coordination of the violurate anion to metal ion is the bidentate pattern via the oximato nitrogen and neighboring carbonyl oxygen of the tautomeric form I [25]. However, in an ethanolic solution, the interaction of violuric acid and MnCl 2 or CuCl 2 salts in a stoichiometric ratio 1:2 (metal: Ligand) gave the binary metal complex ML 2 ; M = Mn(II) or Cu(II). The molecular formulas of the pure isolated complexes were set mainly based on the analytical data (Table 1). In the solid stat, manganese(II) violurate complex appears as beautiful bright pink microcrystalline and copper(II) complex is powder crystals olive green in color. The metal complexes under study show a high degree of stability in the atmospheric ambient conditions. 
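As a rough aqueous speciation sketch based on the three pK_a values quoted above (the calculation and the pH values chosen are illustrative assumptions; the syntheses themselves were carried out in ethanol):

```python
# Speciation fractions of a triprotic acid (H3L, H2L-, HL2-, L3-) at a given pH,
# using the pKa values quoted above for violuric acid.
def fractions_at(pH, pKas=(4.56, 9.60, 13.10)):
    Ka = [10 ** (-pk) for pk in pKas]
    h = 10 ** (-pH)
    terms = [h ** 3, h ** 2 * Ka[0], h * Ka[0] * Ka[1], Ka[0] * Ka[1] * Ka[2]]
    total = sum(terms)
    return [t / total for t in terms]

for pH in (4.56, 7.0, 9.60):
    print(pH, [round(f, 3) for f in fractions_at(pH)])
# Near neutral pH the singly deprotonated form H2L- dominates.
```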
These metal complexes display good solubility only in DMF and DMSO. Measurements of the molar conductivity in a DMF solution of 0.001 M concentration of the present metal violurate chelates are in the range 17.87-20.56 X À1 cm 2 mol À1 indicating their non-electrolytic nature [28]. This finding points out to the removal of the counter anions (chloride ions) in the form of HCl as a result of their union with the ionizable acidic proton of violuric acid molecule upon complex formation. Accordingly violuric acid behaves as a mono acidic bidentate ligand as previously reported in similar studies [25]. Mass spectra measurements To increase the certainty of the molecular formulas that were determined from the analytical data, the mass spectra of the metal violurate complexes in question were measured by electron ionization mass spectrometry (EI-MS) technique. In this regard, the results obtained for the present monovalent violurato metal chelates indicate that the corresponding peaks of the molecular ions (M + ) at m/z values (Table 1) highly correspond to the specific molecular formulas based on the analytical information. The m/z values in Table 1 indicate the purity and the monomeric nature of the prepared metal violurate chelates. The mass spectrum of manganese(II) violurate complex is shown in Fig. 1, while the corresponding EI-MS chart of copper(II) violurate complex is presented in the Supplementary Information S2. TGA and DTG measurements Thermal analysis is a practical technique that supports the results of elemental analysis to determine the molecular formula of metallic complexes. From this point of view, the thermal analysis of the violurate -based metal chelates under Synthesis and characterization of violurate-based Mn(II) and Cu(II) complexes nano-crystallites as DNA-binders study was studied. In this regard TGA and DTG measurements were performed within the temperature range 50-1000°C in an N 2 atmosphere. The resulting thermograms are included in the Supplementary Information S3 and S4 while the relevant pyrolysis information is recorded in the Table 2. Investigation the thermograms of the present metal violurate complexes shows that the pyrolysis proceeds in three successive phases. The first stage that represents the initial weight loss starts at relatively high temperature i.e. 180-245°C with DGT max peaks at 210 and 205°C. This thermal behavior indicates the absence of any type of water content and confirms the anhydrous nature of these metal complexes in accordance with the analytical data. Comparison the theoretical (39.34-42.08%) and practical (39.00-41.50%) weight loss values for this stage indicates the thermal degradation corresponding to the partial loss in the organic content of the metal complex. Pyrolysis of the organic content continues in the second phase of thermal decomposition, which is accompanied by weight loss that falls in the temperature range 210-450°C. The corresponding DGT max peaks appear at 385 and 380°C for manganese(II) and copper(II) complexes respectively. The final pyrolysis stage saw the complete thermal removal of the remaining organic moiety with the concomitant formation of metal oxide (MnO 2 and CuO) as a residual from the overall thermal degradation process of the investigated metal complexes. Theoretical and experimental values for the formed metal oxides are in good agreement based on the percentage of metal content determined by the elemental analysis technique as shown in Table 1. 
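A hedged sketch of the residue calculation behind the final pyrolysis step, assuming the ML2 stoichiometry and the mono-deprotonated violurate ligand (C4H2N3O4-) inferred above; the computed percentages are illustrative and are not taken from Table 1.

```python
# Theoretical residue percentages for the final TGA step, assuming ML2 complexes
# with L = violurate anion (C4H2N3O4-) and MnO2 / CuO residues.
mass = {'C': 12.011, 'H': 1.008, 'N': 14.007, 'O': 15.999, 'Mn': 54.938, 'Cu': 63.546}

def mw(formula):                 # formula given as a dict of element counts
    return sum(mass[el] * n for el, n in formula.items())

L = {'C': 4, 'H': 2, 'N': 3, 'O': 4}                      # violurate anion
for metal, residue in (('Mn', {'Mn': 1, 'O': 2}), ('Cu', {'Cu': 1, 'O': 1})):
    complex_mw = mass[metal] + 2 * mw(L)                  # ML2
    print(metal, round(100 * mw(residue) / complex_mw, 2), '% residue')
```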
Kinetic and thermodynamic coefficients determination Further elucidation the thermal nature of the present violurate-based Mn(II) and Cu(II) complexes can be obtained from the determination of the kinetic and thermodynamic parameters of the three successive pyrolysis steps. To achieve this goal, the Costa-Redefine (CR) [29] and Horowitz-Metzger (HM) [30] approaches were used for calculating the activation energy (E a ), pre-exponential factor (A), and thermodynamic parameters namely DH*, DG* and DS* and the obtained data are collected in Table 3. In the same respect, the relevant data derived from the analysis of TGA and DTG curves for both Mn(II) and Cu(II) complexes based on the CR and HM approaches are represented in Figs. 2 and 3 in the case of first step, while the corresponding plots for the second and third steps are included in S5-S8. Some relations including Boltzmann's (k) and Plank's (h) constants have also been used to calculate the thermodynamic factors such as DH*, DG* and DS* as shown in the following equations: Table 3 Thermal parameters for the three pyrolysis stages of violurate-based Cu(II) and Mn(II) complexes. Comp. Step Synthesis and characterization of violurate-based Mn(II) and Cu(II) complexes nano-crystallites as DNA-binders By examining the data given in Table 3, the following facts can be drawn out: i) There is a marked difference in the average values of the total activation energy for the complete pyrolysis of the violurate-based Mn(II) and Cu(II) complexes. This result can be attributed to the difference in the radii lengths of the Cu(II) and Mn(II) ions. Since the radius of the copper(II) ion is shorter than that of the manganese(II) ion, a stronger bonding between the ligand and the copper(II) ion is expected here. Accordingly, the activation energy is expected to be higher for Cu (II) complex than for Mn(II) complex and this is the case for the data presented in Table 3. However, the E a values indicate the thermal stability of present metal complexes due to the strong bonding between the monobasic violurate anions and Mn(II) and Cu(II) ions. ii) The negative sign of DS* (Table 3) indicates that the current pyrolysis process proceeds slowly and the reactants are less ordered than the activated complexes [31]. In the same context, negative values of DS* can be traced back to the low value of A and possibly due to the nonspontaneous behavior of the studied pyrolysis steps [32]. This interpretation is supported by the positive value of Gibbs free energy DG* of the studied thermal decomposition step for the violurate-based Mn(II) and Cu(II) chelates [33]. In the same respect, the calculated positive values of DH* indicate that the current pyrolysis processes proceed in an endothermic pattern. iii) The correlation coefficients (R 2 ) of the Arrhenius plots for this thermal degradation steps for both Mn(II) and Cu(II) complexes are in the range 0.9502-0.9995, indicative of good agreement with the linear function. iv) The convergence of the thermodynamic and kinetic coefficients values determined by both Costa-Redefine and Horowitz-Metzger methods confirms the accuracy of the results. v) First-order kinetics (n = 1) is the pattern of kinetics of pyrolysis reactions for all phases of the two metal complexes under study. Verification of the coordination pattern FTIR spectra of violuric acid and its Mn(II) and Cu(II) chelates were recorded to verify the coordination mode and the related charts are given in S9-S12. 
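The explicit equations referred to above are not reproduced in the text, so the sketch below uses the textbook activation-thermodynamics relations that normally accompany Coats-Redfern / Horowitz-Metzger analyses; the numerical inputs are placeholders, not values from Table 3.

```python
# Standard activation-thermodynamics relations:
#   dS* = R ln(A h / (kB T)),  dH* = Ea - R T,  dG* = dH* - T dS*
import math

R, kB, h = 8.314, 1.380649e-23, 6.62607015e-34    # J/(mol K), J/K, J s

def activation_thermo(Ea_J_per_mol, A_per_s, T_K):
    dS = R * math.log(A_per_s * h / (kB * T_K))
    dH = Ea_J_per_mol - R * T_K
    dG = dH - T_K * dS
    return dS, dH, dG

dS, dH, dG = activation_thermo(Ea_J_per_mol=120e3, A_per_s=1e6, T_K=210 + 273.15)
print(round(dS, 1), 'J/(mol K) |', round(dH / 1e3, 1), 'kJ/mol |', round(dG / 1e3, 1), 'kJ/mol')
# Note the negative dS* and positive dG* for a modest pre-exponential factor,
# the same qualitative pattern discussed for the data in Table 3.
```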
The frequencies of characteristic groups of the free violuric acid are tabulated in Table 4. The distinctive stretching vibrations appear at 3450 (broad), 1760 (strong) and 1595 cm À1 (strong) are assignable to t (OH), t(C‚O), and t(C‚N) of the oximato linkage respectively [34]. The broad pattern of the t(OH) band and its frequency value indicate the presence of a hydrogen bond in the free violuric acid molecule [34,35]. Previous studies demonstrated that violuric acid exhibits many tautomeric forms as shown in scheme 1 because the energy difference between the most stable and the second most stable tautomers is 1.59 kcal mol À1 [25]. These structural features of violuric acid lead to different modes of coordination to the metal ion. In this regard, violurate Mn(II) complex was formed as a result of interaction of Mn(II) ion with the tautomer I (Scheme 1) through the donor oxygen atoms of oximato group and carbonyl oxygen neighboring to oxime linkage. This mode of coordination has led to the formation of six-membered chelate ring. The validity of this explanation can be inferred from the fact that the spectrum of Mn(II) violurate displays the stretching frequency of the t(NAO) band at wavenumber value of 1200 cm À1 . The current case is comparable to the reported bands for t(NAO) that appear at 1171 and 1143 cm À1 for coordinating oximato-oxygen in nickel(II) and copper(II) complexes [36,37]. As shown in the spectrum of the free violuric acid the t(NAO) band is located at wavenumber value (1240 cm À1 ) higher than in the case of Mn(II) complex. It is worth noting here that the bonding of the oxime group to the metal ion through oxygen would reduce the double bond character of the (NAO) bond and thus reduce its stretching vibration frequency [38]. However, the O,Obidentate coordination mode of violurate anion with formation of a six-membered chelate ring has been reported in the case of a bis-violurate Sr(II) complex [39]. As regards violurate-based copper(II) complex, its FTIR spectrum displays different vibrations from what is found in the manganese(II) violurate complex, which indicates the different pattern of the coordination in this metal complex. Of these vibrational differences is the observed higher wavenumber for t(NAO), 1270 cm À1 , in the case of Cu(II) complex as compared to that of free violuric acid and Mn(II) violurate complex. This is a vibrational evidence for oximato nitrogen coordination to Cu(II) ion in the five membered chelate ring [40]. Another spectral difference in the spectrum of the copper complex is the appearance of the broad, medium intensity band at 3350 similar to that observed in the free violuric acid but shifted to lower wavenumber and assignable to OH of t (OAH. . .O) of the hydrogen bond [35,41]. This finding supports the formation of the copper(II) violurate complex through the reaction of the Cu(II) ion with tautomeric form II or III (Scheme 1). To confirm the non involvement of the oximato OH group (=N-OH) in complex formation and the presence of a hydrogen bond in the existing violurate-based Cu(II) compound, one gram of this complex is suspended in ether and then treated by BF 3 OEt 2 [42] [37]. Other weak bands observed at 1040 and 1005 cm À1 are attributed to the B-F stretches of the BF 2 derivative [43]. In the same regard, the OH-deformation mode of (‚N-OH) is known to appear around 1300 cm À1 , so the medium peak observed at 1280 cm À1 in the spectrum of violuric acid is assignable to the OH-deformation stretching mode [34]. 
In the spectrum of [CuL 2 ], this peak appears at almost the same position as in the case of free violuric acid indicating that the oximato OH group (‚N-OH) is not involved in the complex formation. On the other hand, the OH deformation peak disappeared in the spectrum of [MnL 2 ], indicating the participation of OH in the complex formation. This interpretation is consistent with the present results of structural analysis by Xray diffraction as described below. The formation of the hydrogen bond in the case of the copper(II) complex led to the formation of macrocyclic in which two of its members are hydrogen bonds. The ability of the macrocyclic chelated ring system to assimilate the Cu(II) ion into its central cavity may be easier than in the case of the Mn(II) ion because the Cu(II) ion has a smaller size than the Mn(II) ion. The spectra of Cu(II) and Mn(II) violurate display new peaks at 550 and 470-480 cm À1 attributable to t(M-N) and t(M-O) which are additional spectral evidence for the formation of these metal chelates [34]. Based on the results obtained during this work so far, it is possible to visualize the structural formulas of the metal complexes under study in Scheme 2. Stereochemistry diagnostics Measurements of both electromagnetic spectra (in the ultraviolet and visible regions) and magnetic properties of transition metal complexes are commonly used techniques to determine the stereochemistry and the electronic properties of the metal complex. In this context, the room temperature spectral measurements were performed for DMF solution of violuric acid the Mn(II) and Cu(II) violurate and the recorded spectral data are given as charts in S13 -S15. The electronic absorption spectrum of violuric acid exhibits two distinctive bands at 340 and 410 nm. The spectra of violurate -Mn(II) and Cu (II) complexes show these transitions at 370 and 415 nm. The spectral activity of the uncomplexed monovalent violurate anion is due to the n ? p* electronic transition [44]. Regarding the manganese(II) violurate complex, Mn(II) is known to exist in its mononuclear complexes, mostly in the high spin state where S = 5/2 of the 6 S ground term. A survey of the literature indicates that the presence of Mn(II) in its complexes in the low -spin state is rare, and the reported cases are found to be in the square-planar coordination geometry [45][46][47][48][49]. The recorded spectrum of the synthesized Mn(II) violurate complex exhibits two absorption peaks at 580 and 690 nm arising from 4 A 1g ? 4 E g and 4 A 1g ? 4 B 1g transitions characteristic to square-planar geometry of the fourcoordinated Mn(II) ion [46][47][48][49]. Concerning the Cu(II) violurate its UV-vis spectrum shows three d-d spin allowed transitions at wavenumber values of 560, 595 and 685 nm assignable to 2 B 1g ? 2 B 2g , 2 B 1g ? 2 A 1g and 2 B 1g ? 2 E g , respectively [50][51][52]. These spectral features are characteristic to the square-planar geometry of the fourcoordinated Cu(II) complexes [53,54]. Confirmation of this result is based on the fact that no electronic transitions were observed at a wavenumber value below 100,000 cm À1 which confirms the exclusion of the tetrahedral or the pseudotetrahedral geometry of the present Cu(II) violurate complex [50]. The effective magnetic moment value of the present Mn(II) violurate chelate is 4.16 BM characteristic to three unpaired electrons in a dsp 2 hybridization conformation and confirms the square -planar stereochemistry of this Mn(II) complex. 
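For reference, the spin-only estimate μ = √(n(n+2)) BM behind the statement above gives 3.87 BM for three unpaired electrons, somewhat below the measured 4.16 BM; the short sketch below tabulates the spin-only values for a range of electron counts for orientation.

```python
# Spin-only magnetic moments mu = sqrt(n(n+2)) in Bohr magnetons.
from math import sqrt

for n in range(1, 6):
    print(n, 'unpaired ->', round(sqrt(n * (n + 2)), 2), 'BM')
# 3 unpaired -> 3.87 BM; measured moments often exceed the spin-only estimate
# when orbital contributions are present.
```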
In the same regard, the experimental magnetic moment of the violurate -based Cu(II) complex is 1.97 BM indicating the magnetically dilute environment around copper(II) ion. EPR spectra are complementing spectroscopic technique for determining the stereochemistry of metal complexes. Accordingly the X-band EPR spectra of the current violurate -based Mn(II) and Cu(II) complexes were recorded for the polycrystalline samples at ambient temperature (22°C) and the relevant charts are included in S16. The spectral features of the studied mineral complexes are characterized by the type of axial symmetry and show two values of g at magnetic field strengths 2400 and 3200 Gauss and can be assigned to g || and g \ . The computed values of g || and g \ are 2.312 and 2.051 respectively for Mn(II) complex and 2.192 and 2.044 in the case of Cu(II) complex. For the four-coordinated copper(II) complexes the trend g || > g \ > 2.0023 indicates that the d x 2 y 2 orbital is the ground state ( 2 B 1g ) of the square-planar structure [55,56]. Concerning Mn(II) violurate complex its EPR spectrum indicates that the violurate ligand system maintains the bivalent state of manganese ion in the square planar coordination polyhedron. The calculated g av values are 2.138 and 2.093 for Mn(II) and Cu(II) complexes confirm the covalent character of the metal-ligand bond [57]. PXRD -structural analysis In many cases, it is difficult to obtain an appropriate single crystal of the metal complex so that its exact composition can be proven by X-ray structural analysis. Recently, XRD data processing by specialized software has become well established as a scientific technique for the structural determination of microcrystalline sample for metal complexes [58][59][60][61]. However, the XRD spectra of the metal violurate chelates were measured and the related PXRD-charts are displayed in Figs. 4 and S17. In this regard the well known software Expo 2014 was used for performing the structural analysis of metal violurate chelates under study. In the same context the Rietveld refinement approach was used to maximize the fit between the XRD spectroscopic data and the computationally generated data as shown in Figs. 5 and S18. The obtained results are given in Tables 5-7 where contain crystallographic data and the related structural parameters e.g. selected bond length and bond angles. Both Mn(II) and Cu(II) violurate complexes crystallized in the crystal systems triclinic of the space group P1 and monoclinic with space group P21/n respectively ( Table 5) [62]. It is known that if the value of s 4 is zero, then the geometry is ideal square planar while for s 4 equals 1 the geometry is perfect tetrahedron [62]. In this respect, the computed value of s 4 is 0.001 approaching zero indicating mostly ideal squareplanar stereochemistry. The crystallites size of the present violurate -based metal complexes were determined by using Scherer-equation: where D is the grain size of the particle (nm), k is Scherer constant (k = 0.94), X-ray wavelength (1.54178 Å ) was given by k, b is full width at half maximum (FWHM) of the diffraction peak and h is the angle of diffraction. The obtained results, which are listed in Tables 6 and 7, indicate that the crystallites size of the manganese(II) and copper(II) complexes are 25.754 and 21.267 nm, respectively. 
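Quick numerical checks of three quantities used in this subsection, using the commonly quoted expressions g_av = (g_par + 2 g_perp)/3 and τ4 = [360° - (α + β)]/141° (α, β the two largest angles around the metal centre) together with the Scherrer relation given above; the diffraction inputs below are placeholders, not the refined values from Tables 6 and 7.

```python
# Axial average g-value, tau_4 geometry index, and Scherrer crystallite size.
from math import cos, radians

def g_average(g_par, g_perp):
    return (g_par + 2 * g_perp) / 3

def tau4(alpha_deg, beta_deg):                 # two largest L-M-L angles
    return (360.0 - (alpha_deg + beta_deg)) / 141.0

def scherrer_D_nm(beta_fwhm_deg, two_theta_deg, k=0.94, wavelength_A=1.54178):
    return k * wavelength_A / (radians(beta_fwhm_deg) * cos(radians(two_theta_deg / 2))) / 10.0

print(round(g_average(2.312, 2.051), 3), round(g_average(2.192, 2.044), 3))  # 2.138, 2.093
print(round(tau4(179.6, 179.5), 4))            # ~0.006: close to ideal square planar
print(round(scherrer_D_nm(0.35, 25.0), 1), 'nm')
```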
Concerning the four-coordinated violurate-based Cu(II) complex, the four corners of the equatorial plane of the coordination polyhedron around the Cu(II) center are occupied by the donor atoms O(1), N(1), O(2) and N(2). As in the case of the Mn(II) violurate complex, the data given in Table 7 were utilized to determine the value of the geometry index. The computed value of τ₄ is 0.0006, which is close to zero and points to an essentially perfect square-planar geometry; the corresponding optimized structure is given in Fig. 7.

(Fig. 5: the fit between the experimental XRD data and the computationally generated pattern. Fig. 6: the optimized structure of the violurate-based Mn(II) complex; hydrogen atoms have been omitted for clarity.)

DNA binding study

It is well known that DNA is the primary target of many drugs used to treat diseases, and therefore understanding the mechanism of binding between DNA and small compounds such as organic ligands and their metal complexes is necessary for the potential therapeutic use of these substances.

Spectral study

The spectral activity of DNA is due to its structure, which includes a chromophore consisting of purine and pyrimidine rings. These structural properties allow DNA to absorb light well in the ultraviolet and visible regions. Accordingly, the interaction of DNA and its binding to a small molecule can be studied spectrophotometrically. The scientific literature reports that the interaction of DNA with a small molecule such as a metal complex or an organic ligand may cause hypochromism or hyperchromism accompanied by a bathochromic or hypsochromic shift in the electronic absorption spectra of these substances [63]. The shape of the spectrogram resulting from the incremental addition of DNA to a constant concentration of ligand or metal complex can reveal the mode of DNA binding. In this context, if the spectrum shows hypochromism (a decrease in absorbance) the DNA binding mode is intercalation, while hyperchromism (an increase in absorbance) indicates the groove/electrostatic binding mode [64]. In the present work, spectrophotometric titration experiments were performed using a constant concentration (20 µM) of violuric acid and its Mn(II) and Cu(II) complexes treated with increments (5-45 µM) of CT-DNA solution, and the resulting spectral changes are shown in Figs. 8, 10 and S19. All tests were conducted at 22 °C in the presence of a Tris-HCl buffer (pH = 7.6). DNA purity was ascertained spectrophotometrically under the same experimental conditions. The obtained spectrum shows the two characteristic absorption bands of protein-free DNA at 260 and 280 nm in a ratio of 1.9:1 [65]. Using the molar absorptivity of 6600 M⁻¹ cm⁻¹ at 260 nm, the concentration of DNA per nucleotide was determined [66]. In the absence of CT-DNA, the absorption spectra of violuric acid and its metal chelates display one well-resolved band at 311 nm. This high-energy band at 311 nm is attributed to intraligand charge transfer and is assigned to the π → π* transition. Addition of CT-DNA to solutions of the studied compounds leads to a marked hypochromism of 5.323% without a change in the band position for the Cu(II) complex (S19). A hypochromism of 15.09% with a hypsochromic shift (1 nm) is observed for the Mn(II) complex (Fig. 9), indicative of stabilization of the DNA helix.
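The arithmetic behind the DNA concentration determination and the hypochromism percentages quoted here can be sketched as follows; the absorbance readings in the example are assumed values chosen only to illustrate the calculation, not the measured spectra.

```python
def dna_conc_per_nucleotide(abs_260: float, path_cm: float = 1.0,
                            eps_260: float = 6600.0) -> float:
    """Beer-Lambert law: c = A / (eps * l), with eps(260 nm) = 6600 M^-1 cm^-1."""
    return abs_260 / (eps_260 * path_cm)

def hypochromism_percent(abs_free: float, abs_bound: float) -> float:
    """Percent decrease in absorbance after DNA addition (positive = hypochromism)."""
    return 100.0 * (abs_free - abs_bound) / abs_free

# Assumed absorbance readings (illustrative only).
conc = dna_conc_per_nucleotide(0.33)
print(f"[DNA] = {conc * 1e6:.1f} uM per nucleotide")
print(f"Example hypochromism: {hypochromism_percent(0.530, 0.450):.2f}%")
```

For an absorbance of 0.33 at 260 nm this gives a nucleotide concentration of 50 µM, and the example pair of absorbances yields a hypochromism of about 15%, the same order as the value reported for the Mn(II) complex.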
On the other hand, the spectrum of violuric acid shows a significant hyperchromic effect of about 19.89% with no change in the band position, as shown in Fig. 10. The hypochromicity observed in the spectrograms of the Mn(II) and Cu(II) complexes indicates that these metal chelates bind to CT-DNA via the intercalation mode [67,68]. The present spectral features are consistent with those seen for many metallointercalators, indicating that these violurate-based metal chelates bind strongly to DNA via intercalation [67,68]. With respect to violuric acid, the observed hyperchromism could be due to non-covalent synergistic interactions such as electrostatic, hydrogen and groove binding along the outer portion of the DNA helix. This interpretation finds support in the observation that the absorption band at 311 nm (Table 8) in the spectrum of violuric acid does not shift when CT-DNA solution is added [69].

(Fig. 7: the optimized structure of the violurate-based Cu(II) complex; hydrogen atoms have been omitted for clarity.)

A quantitative comparison of the binding strength of violuric acid and its Mn(II) and Cu(II) complexes with DNA can be achieved by determining their intrinsic binding constants (Kb) with CT-DNA from the spectrophotometric titration results using the Benesi-Hildebrand relation [70]:

[DNA]/(εa − εf) = [DNA]/(εb − εf) + 1/[Kb(εb − εf)]

Plotting [DNA]/(εa − εf) on the y-axis against [DNA] on the x-axis gives the plots in Figs. 9, 11 and S20, from which the slope-to-intercept ratio equals Kb; the obtained data are listed in Table 8. εf and εb are the extinction coefficients of the free complex and of the complex bound to DNA, respectively. Table 8 shows that the calculated intrinsic binding constants are 2.38 × 10⁵, 15.09 × 10⁶ and 5.32 × 10⁶ M⁻¹ for violuric acid and the Mn(II) and Cu(II) chelates, respectively. Therefore, the binding strength of violuric acid and its metal chelates with CT-DNA follows the sequence Mn(II) > Cu(II) > violuric acid. The question now is why the calculated Kb of violuric acid is lower than those of its Mn(II) and Cu(II) chelates. This may be attributed to the different binding pattern of the metal complexes (intercalation) compared with violuric acid (groove/electrostatic). This reasoning finds support in the literature, where the calculated binding constant (Kb) of a groove binder was found to be lower than that of a classical intercalator [71].
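A hedged sketch of the Benesi-Hildebrand treatment described above is shown below; the titration data are synthetic (generated from an assumed binding model), so the recovered Kb merely illustrates the slope-to-intercept procedure rather than reproducing the paper's fits.

```python
import numpy as np

def binding_constant_kb(dna_conc_M, eps_apparent, eps_free):
    """Benesi-Hildebrand: fit [DNA]/(eps_a - eps_f) vs [DNA]; Kb = slope / intercept."""
    x = np.asarray(dna_conc_M, dtype=float)
    y = x / (np.asarray(eps_apparent, dtype=float) - eps_free)
    slope, intercept = np.polyfit(x, y, 1)
    return slope / intercept

# Synthetic titration (assumed): eps_a drifts from eps_f toward eps_b as DNA binds.
eps_f, eps_b, kb_true = 10000.0, 6000.0, 5.3e6
dna = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45]) * 1e-6  # 5-45 uM, as in the text
frac_bound = kb_true * dna / (1.0 + kb_true * dna)
eps_a = eps_f + frac_bound * (eps_b - eps_f)

print(f"Recovered Kb = {binding_constant_kb(dna, eps_a, eps_f):.2e} M^-1")
```

Because the synthetic data follow the model exactly, the script recovers the assumed Kb of 5.3 × 10⁶ M⁻¹; with real spectra the slope-to-intercept ratio plays the same role.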
In the same context, the Kb value of the Mn(II) complex is greater than that of the Cu(II) complex, which means that the Mn(II) complex forms a more stable complex with the DNA double helix than the Cu(II) complex does, the binding mode being intercalation. The reason why the Cu(II) complex forms a less stable complex with DNA than the Mn(II) complex may be structural. Both the Mn(II) and Cu(II) complexes are four-coordinate with a square-planar geometry. Since the binding mode of these metal complexes is intercalation, it is expected that two organic bases of the DNA double helix (coordinating sites widely present in many protein and DNA targets) form two coordinate-covalent bonds with the metal ion, so that the coordination number becomes six. The Mn(II) ion forms a stable six-coordinate complex with an octahedral structure without difficulty, while Cu(II) cannot easily form a stable octahedral complex due to the Jahn-Teller effect.

Viscosity titration measurements

The change in the viscosity of a CT-DNA solution upon the incremental addition of violuric acid and its manganese(II) and copper(II) chelates gives an indication of the binding affinity of these compounds as well as of the type of bonding between them and the DNA [72,73]. An increase in the viscosity of the CT-DNA solution is attributed to classical intercalation of the molecules of the substances under test, as this requires separation of the DNA base pairs to accommodate the binding compounds [74]. In contrast, a decrease in the viscosity of the CT-DNA solution upon the addition of increasing amounts of the studied compounds is indicative of partial and/or non-classical intercalation [75]. To further identify the binding mode of the studied compounds with CT-DNA, viscosity titration measurements were performed at room temperature, and the results are shown in Fig. 12. Fig. 12 shows that the viscosity of the CT-DNA solution increases continuously with increasing concentration of the compounds under study. The results presented in Fig. 12 indicate a comparable increase in the viscosity of the DNA solution for violuric acid and its Cu(II) complex. The larger linear increase in viscosity for the Mn(II) complex indicates a stronger association with CT-DNA, in agreement with the results of the current spectroscopic studies. In conclusion, the present viscosity titration results are consistent with an intercalative pattern of interaction of violuric acid and its Mn(II) and Cu(II) chelates with the DNA helix [72].

Antiviral activity study

Since the outbreak of the coronavirus (COVID-19) pandemic, efforts worldwide have been joined to discover an effective drug against this epidemic, and this has led to the production of some drugs that are currently being evaluated in clinical trials [76,77]. The research strategy has been to focus on traditional small organic molecules or antibody-based therapies [78,79]. Since metal complexes have proven successful in treating some viral diseases [80], researchers have been encouraged to test some of them in vitro as therapeutic agents against COVID-19 [81-86]. In this context, the metal complexes under test must be effective, clinically appropriate and of acceptable toxicity.

Cytotoxicity

The cytotoxicity of the current candidates as anti-SARS-CoV-2 agents, namely violuric acid and its Mn(II) and Cu(II) chelates, was first examined to determine the half-maximal cytotoxic concentration (CC50) in Vero-E6 cells using the MTT assay. As shown in Fig. 13, the half-maximal cytotoxic concentration (CC50) values are 43.87, 93.45 and 88.38 µM for violuric acid and its Mn(II) and Cu(II) complexes, respectively. These results indicate that the cytotoxicity of the tested compounds is dose dependent, and therefore non-toxic doses were used in the subsequent antiviral assays of the compounds under study.
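A minimal sketch of how a CC50 value such as those quoted above can be read off an MTT dose-response curve is given below; the viability points are assumed for illustration and are not the paper's raw data.

```python
import numpy as np

def cc50_from_viability(conc_uM, viability_percent):
    """Interpolate the concentration at which cell viability crosses 50%.

    Assumes viability decreases monotonically with dose, as in an MTT assay.
    """
    c = np.asarray(conc_uM, dtype=float)
    v = np.asarray(viability_percent, dtype=float)
    order = np.argsort(v)              # np.interp needs increasing x values
    return float(np.interp(50.0, v[order], c[order]))

# Assumed dose-response points (illustrative only).
conc = [6.25, 12.5, 25, 50, 100, 200]        # uM
viability = [98, 95, 88, 72, 46, 21]         # percent of untreated control
print(f"Estimated CC50 = {cc50_from_viability(conc, viability):.1f} uM")
```

For this assumed curve the interpolation lands near 92 µM, i.e. in the same range as the CC50 values reported for the metal complexes; in practice a sigmoidal fit would normally be used instead of simple interpolation.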
Inhibition of SARS-CoV-2 virus replication

In the present study, a plaque reduction assay was used to determine the half-maximal inhibitory concentration (IC50) in a Vero-E6 cell line model of SARS-CoV-2 infection in the presence and absence of violuric acid and its manganese(II) and copper(II) chelates. Fig. 14 and Table 9 show that the inhibitory concentrations (IC50) of SARS-CoV-2 for violuric acid and the Mn(II) and Cu(II) complexes are 84.01, 39.58 and 44.86 µM, respectively. These results indicate dose-dependent antiviral behavior and confirm the ability of violuric acid and its manganese(II) and copper(II) chelates to inhibit SARS-CoV-2 replication. In the same respect, the highest inhibition percentages of SARS-CoV-2 virus replication achieved by the tested compounds are 20, 49 and 72% for violuric acid and the manganese(II) and copper(II) chelates, respectively. It should be noted that in the blank experiment, in the absence of the compounds under study, no significant inhibition of the virus was observed. It is more appropriate to describe the antiviral activity using the selectivity index, i.e. the ratio CC50/IC50; the values calculated in Table 9 show approximately the same order as in the case of IC50. Based on the CC50/IC50 ratios given in Table 9, the antiviral potency follows the order Mn(II) > Cu(II) > violuric acid. The notable difference in the ability of the compounds under study to inhibit SARS-CoV-2 replication may be due to differences in their ability to bind to the viral target, as evidenced by the DNA binding studies and also demonstrated by the molecular docking calculations. As shown in Table 9, the antiviral activity potentials of the Mn(II) and Cu(II) chelates are comparable with those of other inhibitors of SARS-CoV-2 virus replication. In the same regard, Remdesivir remains more effective and safer than the compounds listed in Table 9. In the same context, violuric acid exhibits lower antiviral activity than its Mn(II) and Cu(II) complexes, a finding also reported in previous related studies in which metal complexes showed higher antiviral activity than the free organic ligand [10,14,16]. In general, it is known that the bonding of a metal atom or ion to an organic ligand gives the resulting metal complex biological potential that does not appear in the case of the free organic ligand [10,14,16].
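The selectivity index ordering discussed above can be reproduced directly from the CC50 and IC50 values reported in the text (a small illustrative script, not part of the original paper):

```python
def selectivity_index(cc50: float, ic50: float) -> float:
    """Selectivity index SI = CC50 / IC50 (a larger value means a safer antiviral window)."""
    return cc50 / ic50

# (CC50, IC50) pairs in uM, as reported in the text.
data = {
    "violuric acid":  (43.87, 84.01),
    "Mn(II) complex": (93.45, 39.58),
    "Cu(II) complex": (88.38, 44.86),
}

ranked = sorted(data.items(), key=lambda kv: selectivity_index(*kv[1]), reverse=True)
for name, (cc50, ic50) in ranked:
    print(f"{name}: SI = {selectivity_index(cc50, ic50):.2f}")
```

The printed order, Mn(II) (SI about 2.36) > Cu(II) (about 1.97) > violuric acid (about 0.52), matches the potency ordering stated above.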
Molecular docking study

At present, molecular docking calculations are an accepted scientific approach that aids in understanding the way in which metal complexes, as therapeutic agents, interact with a microbial target substrate. To confirm the experimental antiviral results and to clarify the relationship between the structural features and the potential therapeutic ability of violuric acid and its Mn(II) and Cu(II) complexes, molecular docking calculations were performed on a SARS-CoV-2 virus protein. In this regard, the spike protein (PDB code: 6LZG), relevant to the experimentally studied virus-cell system (Vero-E6 cells), was selected. The computational chemistry method used is included in S1, and the results of the molecular docking calculations are given in Table 10, Figs. 15, 16 and S21-S22. The measure of a drug's success in binding to a biological protein containing a native organic ligand is that the drug binds to the protein at the same binding sites as that ligand after its exclusion.

The current molecular docking calculations demonstrate that the native organic ligand associated with the 6LZG spike protein binds to this protein via an H-acceptor interaction between the Asn 322 amino acid residue of chain A of the spike protein and O(6) of the acetamide moiety of this ligand, as shown in Fig. 15. In the same respect, the 2D and 3D diagrams for the interaction of violuric acid and its Mn(II) and Cu(II) chelates with the active sites of the 6LZG protein (Figs. 16 and S21-S22) show an H-acceptor interaction with the Asn 322 amino acid residue of chain A of the spike protein. The data in Table 10 show that the binding-bond distances of the native ligand, violuric acid and its Mn(II) and Cu(II) complexes are 2.82, 2.97, 3.33 and 2.97 Å, respectively. In the same regard, the computed binding energies, or dock scores (Table 10), are −0.8, −3.860, −5.187 and −4.790 kcal/mol for the native ligand, violuric acid and its Mn(II) and Cu(II) complexes, respectively. Overall, the molecular docking calculations demonstrate that the Mn(II) and Cu(II) complexes and violuric acid successfully bind to the 6LZG spike protein in three dimensions and thus contribute to the inhibition of SARS-CoV-2 virus replication. The docking score energy in kcal/mol, the RMSD and the type of ligand interaction (Table 10) determine the binding affinity of the studied compounds. In the same context, these results indicate that the Mn(II) and Cu(II) chelates show a greater binding affinity than violuric acid, in agreement with the experimental results of the current DNA binding and antiviral studies. The current docking analysis shows that the Mn(II) complex has the lowest docking energy value (−5.1875 kcal/mol), with a half-maximal inhibitory concentration (IC50) of 39.38 µM and an antiviral activity ratio CC50/IC50 of 2.36. These data are comparable with those reported for a manganese carbonyl complex as an inhibitor of a SARS-CoV-2 protein (PDB ID: 5V13), which revealed a binding energy of −5.45 kcal/mol and an IC50 value of 101.07 µM [87]. Concerning the Cu(II) violurate complex, its antiviral activity data (Table 9) are comparable with those of a Cu(II)-based curcumin complex [88], which showed a docking binding energy of −7.06 kcal/mol with an IC50 value of 6.63 mM against SARS-CoV-2. CC50 and IC50 are the half-maximal cytotoxic and half-maximal inhibitory concentrations, respectively.

Insights into the relationship between structure and reactivity

Several studies have shown that metal complexes exhibit significant pharmacological effects that are not observed when the parent ligand or the inorganic form of the metal is used alone [89]. In the current work, the compounds under study showed the same trend of biological and pharmacological activity, in agreement with the results of similar studies. In this context, the superiority of metal complexes over organic ligands can be understood on the basis of chelation theory [89]. Chelation of the organic ligand to a metal ion blocks the charges on the donor sites and thus reduces the overall charge on the ligand molecule. In addition, the bonding of the ligand to the metal ion also reduces the polarity of the metal ion. These two effects cooperate to increase the susceptibility of cell wall lipids to the metal chelates and enhance their penetration across the lipid layer of the cell membrane [90].
Spectroscopic measurements of the binding of the compounds under study to DNA indicate that violuric acid binds to DNA by non-covalent synergistic interactions, while the Mn(II) and Cu(II) complexes bind strongly to DNA via the intercalation mode. In the intercalation mode, covalent bonding between the metal center of the complex and the bases of the DNA helix cannot be ruled out. Metal complexes with a low coordination number, as in the case of the square-planar structure, are the most suitable for binding with DNA. This is because the nitrogen donors of the helical bases of DNA can easily bind to the metal center without the need for dissociation energy to provide an empty coordination site on the metal center, as would be required for coordinatively saturated metal complexes. Accordingly, the superiority of the Mn(II) complex over the Cu(II) complex in its degree of binding with DNA may be attributed to the stability of the complex formed when these complexes bind to DNA. Since DNA is rich in nitrogen bases, it is expected that the metal ion of the complex will bind to the double helix and produce a hexacoordinated complex with an octahedral structure. Here, the stability of the Cu(II) complex with DNA is expected to be lower than in the case of the Mn(II) complex owing to the Jahn-Teller effect, which reduces the stability of six-coordinate copper(II) complexes. This argument can also be applied to explain the antiviral superiority of the manganese(II) complex over the copper(II) complex against the SARS-CoV-2 virus.

Conclusion

Violuric acid interacts with Mn(II) and Cu(II) ions in a 1:2 (metal:ligand) stoichiometric ratio to give the binary metal complexes ML2, where L is the monovalent violurate anion and M is the Mn(II) or Cu(II) ion. The bis-violurate ligand system coordinates to the copper(II) ion via an N2O2 coordination chromophore, which results in the formation of an equatorial macrocycle stabilized by an intramolecular hydrogen bond. The macrocyclic chelate ring system accommodates the Cu(II) ion in its central cavity more easily than the Mn(II) ion because the Cu(II) ion is smaller than the Mn(II) ion. Accordingly, the bis-violurate system binds to the Mn(II) ion in an O,O-bidentate coordination mode with the formation of a six-membered chelate ring. The exact structures of the metal complexes under study were confirmed by processing the XRD data with the software Expo 2014. Spectroscopic investigations and viscosity measurements indicate that the compounds under study successfully bind to CT-DNA. Intercalation is the binding mode in the case of the Mn(II) and Cu(II) complexes, while groove/electrostatic binding is the mode for violuric acid. Based on the results of the spectroscopic measurements of the binding of the compounds under study with DNA, it can be concluded that the binding pattern and the type of metal ion control the degree of binding of these compounds to DNA. The cytotoxicity tests indicate that the effects of the examined compounds are dose dependent, and non-toxic doses were used in the antiviral assays. Molecular docking calculations show that the native organic ligand present in the studied 6LZG protein from SARS-CoV-2 has a lower binding affinity than violuric acid and its metal complexes. The overall results (biological assays and computational study) are consistent and emphasize that violuric acid and its Mn(II) and Cu(II) complexes have a virucidal effect against in vitro infection by SARS-CoV-2.
The results of the study indicate that the Mn(II) complex can be considered for use as a treatment for the COVID-19 pandemic after appropriate in vivo and clinical trials are performed. Overall, the stability of the complex formed between DNA and the metal ion of the present metal complexes demonstrates the superiority of the Mn(II) complex over the Cu(II) complex as a DNA binder and as a therapeutic agent against SARS-CoV-2.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Interview with Edel Bhreathnach

Irish Medieval History and its Possible Future Directions

This interview took place at the Discovery Programme, Dublin, on 25th September 2014. Edel Bhreathnach discussed the state of the art in Early Irish Medieval History and her opinions about the Irish educational system and the future of Irish Medieval Studies. She also provides some hints about the directions she is taking with her own research projects. She worked for the Department of Foreign Affairs before she started to work for the National Council for Educational Awards. She returned to UCD for formal training and obtained her doctoral degree in 1991 under the supervision of Professor Francis John Byrne. After its completion, she worked in the Discovery Programme (DP) as a research fellow on the "Tara Project" for fifteen years, publishing widely in the field. In 2000 she was awarded a Moore Institute (NUI Galway) post-doctoral research fellowship, and in 2002 she became a UCD Mícheál Ó Cléirigh Institute research fellow. In 2007 she became Deputy Director and Academic Project Manager of the UCD Mícheál Ó Cléirigh Institute, where she coordinated several projects, such as one on the material culture of the mendicant orders in Ireland, funded by the Irish Research Council, and another on the early book and manuscript heritage of the Irish Franciscans. She acted as national coordinator of "Louvain 400", the 400th anniversary of the foundation of St Anthony's College, Louvain. In 2013 she returned to the DP in the position of Chief Executive Officer (CEO), a position that she holds at the moment. Concomitantly, she is one of the principal investigators of the project "Monastic Ireland: Landscape and Settlement" funded by the Irish Research Council.

Elaine Pereira Farrell (EPF): Edel, you are a very complete scholar. You have a deep knowledge of the Irish sources, historiography and archaeology, you are bilingual, being fluent in both English and Irish, and you have a great command of Old Irish, Latin and other modern European languages. Do you think that the Irish schools and universities are still training people at this high level?

Edel Bhreathnach (EB): No; teachers are not given the time or scope to examine their subjects in depth with their students, even to post-graduate level. The system militates against analysis and discourse, which incidentally still happens elsewhere, as I know from my son who is currently studying pure philosophy in the University of Edinburgh. He has six contact hours per week but is expected to read a considerable amount of primary and secondary material in preparation for his seminars, and if he does not he will be unable to participate properly. That challenges a student and leads to deep and independent thinking.

EPF: The European Commission and its funding schemes and the Irish Research Council (IRC) are promoting gender equality in research and innovation. 1 You were expecting your daughter soon after you completed your doctoral degree, and yet you developed an admirable academic career at the same time as you fulfilled your role as mother. Was it difficult to be mother and scholar at the same time? How did you master these responsibilities? Did you encounter resistance and barriers in the academic environment for being a woman and a mother? Would you have any advice for younger female scholars?

EB: I was lucky when Sorcha and Muiris were born because I worked as a Research Fellow on the DP Tara Research Project.
Then I was appointed to Research Fellowships in NUI Galway and UCD, and both posts were flexible in their schedules and could be worked around school hours and other commitments. But even then it was difficult, and I was only able to continue to work with the support of Raghnall, my parents and a network of friends, especially school mothers who were willing to mind them after school. The barriers to women - with or without family - are structural and social. Many decisions are taken over pints and in dark corridors - mainly by male colleagues when you are not around. But then my view has been not to care about status or title - as long as I am treated fairly - and to concentrate on moving the subject forward and encouraging students.

EPF: During the years you were achieving your qualifications, the debate between "nativists" and "revisionists" or "anti-nativists" was still a quite lively one. In a 2010 publication Colmán Etchingham used the term "post-anti-nativist" to define the state of the art and the current position of most scholars. 2 Would you classify yourself as a "post-anti-nativist"?

EB: I have never regarded myself as post- or anti-anything. The great advantage of having Raghnall around is that he gave me a different - and often more level-headed - perspective that led me to steer clear of these often very destructive and personal debates. I also gained a different view of my field by living away from Ireland for a while and by pursuing a very diverse career. No one can be absolutely correct in such a debate. Humanity is too complex to be depicted as either black or white. There are constant changes and a depth of various levels of beliefs, traditions and customs in most societies.

EPF: Etchingham has also argued that Irish scholarship is now in a "post-monastic" phase, meaning that scholars have departed from the traditional "Hughesian model", which held that the Irish church was essentially monastic and for that reason very peculiar. 3 You have also done a lot of work on Irish monasticism and church history, and you are currently looking at the role of bishops in monastic orders. How "monastic" do you think Ireland was? And how similar or different do you think the Irish church was from other medieval churches?

EB: Many years ago I suggested that we view early medieval Irish monasticism in a new way and move away from depicting the medieval institution in the image of post-Reformation and even post-Napoleonic reformed orders. Monasticism is guided by a rule, and how a rule is followed varies from the large organised community to the individual anchorite, from the royal foundation to the monastery founded by the abbot-saint and directed by the father abbot. All such forms of monasticism existed in medieval Ireland, and they often co-existed within the same foundation. In the post-twelfth century period, most of the early monasteries - though not all! - shifted to new structures offered by new orders, monastic and mendicant.

EPF: One aspect of Irish historiography that has intrigued Brazilian scholars is its chronology. It is commonly acknowledged that the traditional division into Ancient, Medieval and Modern histories is artificial and biased. However, even though historians criticise their own artificial chronological divisions, they are indispensable. In the Irish case there is traditionally a gap between pre-history and medieval history, with the fourth to fifth centuries as a temporal boundary with the arrival of Christianity and of writing. 4
Nevertheless, it is a conservative way of defining history purely based on writing, as history can also be uncovered through archaeology. Besides, Ireland was in touch with the rest of Europe, which was in the "Ancient" period, or "Late Antiquity". Do you think that there is a need to revise these nomenclatures? What do you think could be suggested to replace the traditional "pre-history" terminology for the Irish period before the fifth century?

EB: Late Antiquity is a term commonly used to describe the period you mention, and I am perfectly comfortable using it in relation to Ireland. After all, Christianity and literacy emerge strongly during that period, and Ireland is caught up in these movements. We are somewhat obsessed with the influence of Rome on Ireland. The Romans did not come here, but the legacy of the Roman Empire - Latin, literacy and Christianity - certainly did. Once we move into the sixth/seventh centuries we are in the early medieval period with everyone else - and have our own variations during that period. We must remember that chronological narratives differ depending on geographic or modern cultural perspectives.

EPF: In recent years, during and after the Irish economic boom, there was a great development in the field of archaeology, which you followed very closely. It obviously inspired you to found the "Mapping Death" project with Dr Elizabeth O'Brien, which stimulated conferences and scholarly outputs (http://www.mappingdeathdb.ie). The database is available online. How can scholars still benefit from the data available there? Are there future plans for this project?

EB: The "Mapping Death" database is constantly being updated and corrected thanks to the heroic work of Dr O'Brien. She continues to work on an analysis of the data and following up on particular sites with C14 dating and other scientific investigations. The "Mapping Death" project really framed the narrative of my recent book and continues to inform my work, as it moved me away from dealing with documentary evidence to facing the harsh realities of life found in the burial record. We hope to improve the database next year and to add to the historical and osteological information.

EPF: In 2013, you, Dr Rachel Moss (Trinity College Dublin - TCD) and Dr Malgorzata Krasnodebska-D'Aughton (University College Cork - UCC) were granted €369,000 by the IRC to finance the project "Monastic Ireland: Landscape and Settlement" (www.monastic.ie). Through this grant you were able to employ three recent PhDs as postdoctoral fellows and research assistants. Do you think that there are many other scholars with that vision, of trying to attract major funding in order to generate jobs in the field? How relevant do you think it is for the current Irish economic context?

EB: We are always being told that employment prospects are best for students if they take science or technology degrees. That is true, but it should not be to the detriment of other disciplines. Consider that there is no full professor of medieval history in Ireland today - in a country with a huge medieval legacy. A German colleague has likened this situation to Germany having no full professors of engineering! A strongly supported subject, especially one relating to Irish culture, will inevitably attract good students, and investment in the subject, most especially in major cultural institutions, will pay off. They should not be regarded as a drain on economic resources.
Quite the opposite - perhaps our politicians and policy-makers should visit places such as Aachen and Köln to see how investment in culture works for these cities - and medieval culture at that!

EPF: Why "Monastic Ireland"? What inspired/motivated you to develop this project?

EB: "Monastic Ireland" evolved out of a great project, "Monastic Wales" (http://www.monasticwales.org), directed by Professor Janet Burton (Trinity St David's, Lampeter) and Dr Karen Stöber (University of Lleida, Barcelona). It was also informed by my work on the mendicant orders while working in the UCD Mícheál Ó Cléirigh Institute and also by constant and inspirational chats with Dr Colmán Ó Clabaigh OSB of Glenstal Abbey, a leading expert on medieval monasticism. When the Department of Arts, Heritage and the Gaeltacht and Fáilte Ireland (the Irish Tourist Board) granted me funds to produce an accessible database, I was fortunate to recruit energetic graduates, Dr Niamh NicGhabhann (now University of Limerick) and Dr Keith Smith (TCD), to build the database and website. That laid the foundation for a further research phase and for the large grant that Dr Rachel Moss (TCD), Dr Malgorzata Krasnodebska-D'Aughton (UCC) and I received from the IRC.

EPF: The project "Monastic Ireland" is taking a collaborative approach, dialoguing with scholars leading similar projects such as "Monastic Wales". At important conferences such as the International Medieval Congress (IMC), which attracts hundreds of scholars to Leeds every July, there is currently a tendency for people to present papers at sessions organized by major research networks. Do you think that besides the obvious positive aspects, such as the sharing of information and the attraction of funding, there could be negative ones? Would the big projects suffocate the possibility of smaller projects and research questions that are not in "fashion" or among the concerns of the major "networkers"?

EB: This is always a problem, although I feel that it is good for Irish scholars to be part of these international networks. It brings the Irish material and research to the attention of a wider audience who would not otherwise come across it. The main problem with international funding is that money attracts money, and you find that the major universities in Britain and Europe gobble up an awful lot of funds and attract "big name" scholars. This leads to an imbalance and can be a barrier to smaller institutions, which have excellent projects and scholars but do not have the administrative or reputational powers of the others.

EPF: A good deal of your research has been dedicated to Leinster, the place you are from. But you have also developed research of broader scope, such as your newest book and the "Monastic Ireland" project. There is a great need for local case studies as much as for broader comparative studies. On a practical level, do you think it is possible to balance the dialogue between the "local" and the "global"? What are your methods and approaches to achieve this balance?

EB: Over the past ten years, as I get older (!), I find that I have established a methodology for myself in which I delve very deeply into a locality through all possible sources, and themes emerge which I then pursue by reading around them in a more universal context. I also find that I no longer confine myself to the history of one particular period but can move fairly easily from prehistory to the early modern period. This enriches many of my studies.
You can lose perspective if you are overly restricted in your scholarship.

EPF: Irish historiography in the time of Eoin Mac Neill was very nationalistic, largely due to its historical context during the process of independence. Later, in the mid-20th century, there was a strong emphasis in the field of history on "neutrality". 5 How do you see these questions fitting in the 21st century? Is it possible to tackle a national history without being a nationalist and having a political agenda?

EB: Eoin Mac Neill did have a nationalistic approach to his scholarship at times, but nevertheless he was a brilliant scholar who produced scientific history. He differed in that regard from many of his contemporaries, who were not scientific in their approach and did not have the skills to tackle primary sources. There is too much emphasis on the influence of nationalism on the writing of Irish history, and modern commentators are often far too simplistic in their analysis. Historical research is a science with a particular methodology, and pursuit of that science as such should deter the historian from producing works that are biased or simplistic. People can be too influenced by contemporary dialogues (as in the case of twenty-first-century damning of twentieth-century Ireland). These dialogues are necessary to improve society or rid it of some oppression, but they should not encroach on scientific historical writing.

EPF: In Brazil, history teachers have an important role in promoting political awareness and contributing to the formation of conscious citizens. Due to the current context of effervescent protests and public demonstrations, scholars and society are debating the role of educators, both at secondary and higher educational level. Some argue that they should not express their political views in class and that they should teach history with political "neutrality". Do you have any parallels to that in Ireland? Do you think that teachers and lecturers in Ireland have, or should have, an impact on how the Irish perceive their own past and how they should design their future?

EB: I am not aware of this debate among teachers in Ireland. I suppose that it was keenly argued when I was at school in the 1970s and we were in the middle of the Northern "Troubles". A very politically active history teacher taught me in my senior school year. Tony Gregory, who later became a member of parliament, was a left-wing socialist republican who contributed hugely to his own community in the deprived areas of inner-city Dublin. He may not have been a talented historian, but we did have very lively discussions in class, and he produced three professional historians - Colmán Etchingham (NUI Maynooth), Niall Ó Ciosáin (NUI Galway) and myself!

EPF: On your DP profile webpage you listed among your research interests "The historiography of history writing in Ireland". Some countries have a very strong scholarship on the theory of history, in the sense that there are very strong conceptual debates. In Brazil, for example, in most departments the Bachelor Degree in History Programme would include modules on the theory and methodology of history. Would I be correct in concluding that in Ireland both historical research and history teaching at university level do not stress theoretical debates? My perception would be that teaching practice in Ireland is highly focused on primary sources and less on how historians have been reading those sources.
In Brazil, there is, generally speaking, limited training in medieval languages and palaeography, which needs to be improved. In comparison, the Irish are stronger in that regard, but do you think that Irish scholars need to stimulate more theoretical approaches and be more multidisciplinary, dialoguing more with the social sciences?

EB: Yes, I do think that we need more philosophical and cognitive approaches to our historical discourses in Ireland. Otherwise, we confine ourselves either to narrative or, in modern history, to journalism. I am particularly proud of my work with the archaeologists Conor Newman, Joe Fenwick, Dr Roseanne Schot and Professor John Waddell (all NUI Galway) on the Irish "royal sites". We have broken through the old narrative by using anthropological and more conceptual approaches and by seeking universal patterns and examining comparative evidence. In Irish historical studies, Charles Doherty has been very courageous and imaginative in opening up the Irish evidence to new perspectives. I do not agree that Irish history graduates or their teachers are particularly strong on languages, either Latin or continental languages, as scholars are elsewhere. Difficulties with languages pervade the Irish education system and put us at a great disadvantage in so many fields.

EPF: In your recent book, you argued that "Irish universities no longer value medieval Irish history" (p. 241). Recently there was a debate about the replacement of the UCD Professor of Early Irish, which featured in the national press in Ireland. Despite the academic protests, it was decided that UCD was not going to hire a professor under a permanent contract, but instead a lecturer with a short-term contract. Fortunately, since that happened in UCD, University College Cork (UCC), the School of Celtic Studies of DIAS and the University of Utrecht have advertised professorships in Celtic languages. What impact do you think this kind of negative attitude towards the field of Celtic Studies will have on the future of Irish Studies? Do you think that the fields of Celtic and Irish studies are still perceived as relevant in the 21st century? Is future research in the area sustainable?

EB: Celtic Studies (which was my primary degree), as designed in the nineteenth century mainly following a German linguistic model, is probably an outmoded model, and I feel that this is the reason for its decline in so many universities. The model in UCC is cohesive and has created a lively department. This should perhaps be the model elsewhere. In any event, one hopes that the success of the Irish Medievalist Conference (ICM) at UCD will lead to a renewal of the discipline - perhaps in a different format - in the institution.

EPF: Some countries still have a tendency to fund only national histories, and some academic circles are still pretty closed to non-national scholars. However, European funding agencies are now promoting mobility, international networking and knowledge transfer. How important do you judge this to be?

EB: There are two aspects to this trend. If it relates to universities attracting non-EU nationals for higher fees and lower standards, I view that as unethical. If it means, however, that funding non-EU graduates attracts scholars with very different academic backgrounds into a field, this is to the good of a subject. It may require teachers to put a greater amount of time into training these students, but if there is a good response, it is worth the effort.
But teachers have to be supported in such endeavours by their institutions.

EPF: Do you think that Irish academics are prepared to welcome both ideas, firstly to move abroad to work and research, and secondly to increase the number of international scholars in their institutions?

EB: Irish academics vary in their willingness to move abroad, and too often it is to Britain and the US. It should also be said that institutions are not always hugely supportive of enabling their academics to travel. If our language skills were better we could gain a broader experience by travelling to European universities and delving into literature in languages other than English.

EPF: Recently the DP and the Royal Irish Academy (RIA) became Institutions of Higher Education, meaning that both now qualify to welcome and train postgraduate and postdoctoral researchers and to attract with them funding from bodies such as the IRC. Why do you think this was important and necessary?

EB: Firstly, I would stress that, like the Dublin Institute for Advanced Studies (DIAS) and the RIA, the DP is primarily a research institution and its remit is to be an archaeological research centre. This status needs to be strengthened, and being recognised as such by the IRC is hugely important. The DP aims to train archaeologists to analyse data and to work alongside scholars of other disciplines. It has also built a strong reputation in geo-surveying and other modern techniques, and also in methodologies of genuine collaboration with other disciplines. I want to see this type of analysis and collaboration strengthened, and I am also passionate about bringing our research to the attention of a wider audience, especially schools and local communities. I have outlined all these aims in the DP's Strategic Plan 2014-2017. 6 Over the next few years, the DP will not compete for post-graduate funds but will consider working with universities in their applications for PhD scholarships. As to post-doctoral and major project funding, it is most likely that bids for money will be made as part of collaborative networks - as in the case of "Monastic Ireland".

EPF: Your career and life story prove that you have always been an innovative and driven scholar. What are your personal research projects for the next years?

EB: I have so many projects on my mind that there are not enough hours in the day or night to do them at the moment. My current focus is on "Monastic Ireland" and "Mapping Death" and will be for some time. A major objective in my current and future work will be to see the Irish evidence - which is considerable and relatively unknown - become part of the international dialogues relating to so many aspects of the medieval world, and indeed leading on some aspects of these dialogues. I certainly will not be idle for the foreseeable future!
Power Battery Recycling Mode Selection Using an Extended MULTIMOORA Method

In order to improve the efficiency of the recycling of electric vehicle power batteries and reduce the recycling cost, it is of great importance to select an optimal power battery recycling mode. In this paper, an extended MULTIMOORA (Multiobjective Optimization by Ratio Analysis plus full Multiplicative form) approach which combines the two-dimension uncertain linguistic variables (TDULVs) and the regret theory, called the TDUL-RT-MULTIMOORA method, is developed for solving the power battery recycling mode decision-making (PBRMDM) problem. Firstly, the evaluations of the power battery recycling modes over the criteria are given by the experts using TDULVs, and the evaluations of all experts are aggregated into a group linguistic decision matrix by the TDULDWA operator. On the basis of the regret theory, the perceived utility decision matrix is constructed. Then, in order to avoid the disadvantages of subjective weighting methods, such as deviation from the measured data and dependence on the experience and knowledge of the experts, an objective entropy weighting method is applied. After that, the MULTIMOORA method is introduced to rank the power battery recycling modes. In the end, an illustrative example is given to verify the effectiveness and practicability of the proposed method.

Introduction

Compared with traditional fuel vehicles, electric vehicles have the characteristics of lower emission, lower noise, and lower pollution. In addition, since the energy structure required by electric vehicles can be diversified, they help to reduce the dependence on nonrenewable oil resources. Therefore, developing the electric vehicle industry has very important practical significance. Nowadays, in China, the electric vehicle industry has been strongly supported by the government and has played an important role in the reduction of greenhouse gases [1]. However, as the energy source of the whole electric vehicle, the life of the power battery is limited. By 2020, the cumulative amount of power batteries entering the end-of-life period in China will reach 120,000-170,000 tons. If the wasted power batteries are not properly recovered or reused, this will not only cause a waste of resources but also cause serious pollution to the environment [2]. Therefore, based on the theory of sustainable development, the reasonable recycling of power batteries is one of the important factors promoting the development of the electric vehicle industry.

Issues related to the development of electric vehicles have been widely studied in China recently [3]. Nowadays, there are several recycling modes of power batteries for electric vehicle manufacturers to adopt. Selecting the best recycling mode will help manufacturers improve the efficiency of recovery and reduce the cost. However, in the process of power battery recycling, because of the complexity and uncertainty of objective things and the fuzziness of human thinking, it is difficult to describe vague information with precise values. The multiple criteria decision-making (MCDM) problem proposed by Churchman et al.
[4] is a discipline for supporting experts in figuring out an optimal choice from all options based on multiple criteria [5]. Since the power battery recycling mode decision-making (PBRMDM) problem involves many qualitative and quantitative evaluation criteria, it is feasible to solve the PBRMDM problem as an MCDM problem. At present, there are few studies on the PBRMDM problem, and most of the related research about the selection of the power battery recycling mode is mainly carried out under the consideration of recycling cost control. For example, Yun et al. [6] summarized two main basic aspects of recycling batteries, including the mechanical procedure and chemical recycling, and proposed a framework for recycling batteries. Ordonez et al. [2] presented a qualitative analysis approach for the recovery and regeneration technology of lithium batteries, which can recycle the valuable elements in the battery. Liu and Gao [7] put forward some corresponding battery recycling countermeasures based on an analysis of the urgency of power battery recycling in China. Tang et al. [8] proposed a reward-penalty mechanism including policies for recycling power batteries, and the costs of three single recovery modes and three competitive dual recovery modes were also tested using Stackelberg game theory.

In the real process of decision-making, it is hard for decision makers to express their evaluations of fuzzy or uncertain information with exact numeric values. Recently, most researchers have preferred to represent their opinions by means of uncertain linguistic information. The uncertain linguistic variables (ULVs) presented by Xu [9] can express the evaluations of decision makers more accurately. An uncertain linguistic variable (ULV) is composed of a lower limit value and an upper limit value [9]. It can be used in more fuzzy and uncertain situations [10]. However, the ULVs do not consider the reliability of the experts' subjective evaluations. Liu and Zhang [11] developed the two-dimension uncertain linguistic variables (TDULVs) to represent the fuzziness of information on the basis of the ULVs. A two-dimension uncertain linguistic variable (TDULV) is composed of two parts, the class I and the class II linguistic information, where the class I information represents the assessment of the decision maker of the evaluated object, and the class II linguistic information denotes the reliability of the class I assessment given by the decision maker. Until now, the TDULVs have been applied in many areas, such as the technology innovation ability evaluation problem [12], the extra-efficient economic industry system selection [13], and the river basin ecosystem health evaluation problem [14].
The regret theory was first introduced by Loomes and Sugden [15] with the intention of depicting intuitive judgments simply and consistently. It is an important behavioural decision-making theory that considers the outcomes of the chosen alternative and the possible results of the unselected alternatives. In the regret theory, the perceived utility values are used to measure the expected value of satisfaction from choosing one alternative and rejecting another. Recently, the regret theory has been applied to solve different kinds of problems, such as the selection of human-agent collaborative teams [16], trip distribution and traffic assignment [17], environmentally friendly supplier selection [18], and the selection of charging facility design for electric vehicles [19].

The entropy weighting method is based on the information entropy originally proposed by Shannon [20]. Entropy can objectively measure the uncertainty of information and directly reflects the amount of information and its uncertainty [21], and the method has a precise calculation process. Due to these characteristics, the entropy weighting method has been widely applied in many different fields. For example, Liu and Li [22] proposed a comprehensive forecasting model using the entropy weighting method. Delgado and Reyes [23] used the entropy weighting method to select the best alternative plants. Zhang et al. [24] proposed a novel ship detection method using the entropy weighting method to extract the features of synthetic aperture radar images.

Brauers and Zavadskas [25] proposed the MOORA method in 2006. In 2011, Chakraborty checked the robustness of six common MCDM methods, including the MOORA method [25], the AHP method [26], the TOPSIS method [27], the VIKOR method [28], the ELECTRE method [29], and the PROMETHEE method [30], as non-subjectively as possible, and the results showed that only the MOORA method satisfied all the conditions of robustness of MCDM [31]. Inspired by the MOORA method, Brauers and Zavadskas [32] developed the MULTIMOORA method by improving and synthesizing the MOORA method, and the results of the MULTIMOORA method show more robustness and accuracy compared with the MOORA method. The decision process of the MULTIMOORA method includes three parts: the ratio system, the reference point method, and the full multiplicative form of multiple objectives. Until recently, no other method has been known to meet all the conditions of robustness for multiple-objective optimization; therefore, the MULTIMOORA method is regarded as the most robust technique for solving the MCDM problem [33]. So far, the MULTIMOORA method has been applied in many areas, such as the materials selection of power gears [34], biomaterials selection [35], pharmacological therapy selection [36], the selection of sites for ammunition depots [37], supplier selection [38], and the risk evaluation problem [39].

Based on the above discussion, in this paper, to solve the PBRMDM problem, an extended MULTIMOORA method with two-dimension uncertain linguistic variables and the regret theory, called the TDUL-RT-MULTIMOORA method, is developed. The remainder of this paper is organized as follows: the preliminaries of this work are introduced in Section 2. Section 3 presents the framework of the TDUL-RT-MULTIMOORA method. An illustrative instance is conducted to demonstrate the effectiveness and practicality of the proposed method in Section 4. In the end, some conclusions are drawn in Section 5.
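As a concrete illustration of the regret-theory idea summarized above, the sketch below uses one commonly adopted pair of functional forms - a power utility v(x) = x^α and an exponential regret-rejoice term R(Δv) = 1 − exp(−δΔv) - with the Tversky-Kahneman parameters quoted later in the preliminaries (α = 0.88, δ = 0.3). These forms are assumptions made for illustration and may differ in detail from the paper's own equations.

```python
import math

ALPHA = 0.88  # risk aversion coefficient (Tversky & Kahneman)
DELTA = 0.3   # regret aversion coefficient

def utility(x: float) -> float:
    """Power utility v(x) = x**alpha for a consequence x in [0, 1]."""
    return x ** ALPHA

def regret_rejoice(delta_v: float) -> float:
    """Regret-rejoice value R(dv) = 1 - exp(-delta * dv); negative (regret) when dv < 0."""
    return 1.0 - math.exp(-DELTA * delta_v)

def perceived_utility(x: float, x_best: float) -> float:
    """Perceived utility of choosing x while the best foregone outcome is x_best."""
    return utility(x) + regret_rejoice(utility(x) - utility(x_best))

# An alternative scoring 0.6 is penalised for the regret of not picking one scoring 0.9.
print(f"perceived utility = {perceived_utility(0.6, 0.9):.4f}")
```

The printed value is lower than the plain utility of 0.6, reflecting the regret penalty relative to the better foregone alternative; this is the quantity collected in the perceived utility decision matrix later in the method.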
Preliminaries

Uncertain Linguistic Variables

Definition 1 (see [9]). Let S̄ be a continuous linguistic term set, s_a, s_b ∈ S̄, and s̃ = [s_a, s_b], where s_a and s_b (a ≤ b) are the lower and upper limit values of s̃, respectively; then s̃ is called an uncertain linguistic variable (ULV) of S̄.

Two-Dimension Uncertain Linguistic Variables

Definition 2 (see [11,41]). Let ŝ = ([ṡ_a, ṡ_b], [s̈_c, s̈_d]) be a TDULV, where [ṡ_a, ṡ_b] is the first class of ŝ, which expresses the assessment of the decision maker of an evaluated object, while [s̈_c, s̈_d] is the second class of ŝ, which denotes the decision maker's subjective evaluation of the reliability of the first class result. ṡ_a and ṡ_b (a ≤ b) are the lower and upper limit values of the first class, and s̈_c and s̈_d (c ≤ d) are the lower and upper limit values of the second class, respectively; then ŝ is called a two-dimension uncertain linguistic variable (TDULV).

Regret Theory

Loomes and Sugden [15] measure the perceived utility of a chosen alternative by its utility value adjusted with a regret-rejoice value computed from the utility difference between the chosen alternative and the foregone one. When (V(A) − V(A*)) ≤ 0, the expert feels regret after choosing the alternative A rather than the alternative A*; α (0 < α < 1) is the risk aversion coefficient, and the smaller the value of α, the greater the risk aversion of the expert, and vice versa [50]; δ (δ > 0) is the regret aversion coefficient, and the larger the value of δ, the greater the regret aversion tendency of the expert [51]. Tversky and Kahneman [50] gave the value of α as 0.88 and of δ as 0.3 after experimental verification.

2.6. The MULTIMOORA Method

The MULTIMOORA method was first developed by Brauers and Zavadskas [32] on the basis of the MOORA method. It is a powerful tool for dealing with the MCDM problem. The process of the MULTIMOORA method is made up of three parts: the ratio system, the reference point method, and the full multiplicative form of multiple objectives. The MULTIMOORA method is the most robust system of multiple-objective optimization among multiple criteria decision-making methods [33].

Step 1 (calculate the importance coefficients of criteria). The importance coefficient of each criterion C_j is determined by its weight w_j (j = 1, 2, ..., n), satisfying w_j ∈ (0, 1) and Σ_{j=1}^{n} w_j = 1.

Step 2 (normalize the decision matrix into X*). The decision matrix X is normalized into X* by

x*_ij = x_ij / √(Σ_{i=1}^{m} x_ij²)

Step 3 (the ratio system). In order to obtain the optimum, based on the ratio system, the best alternative is the one with the largest overall evaluation value y*_i, where the normalized values of alternative A_i are added in the circumstances of maximization and subtracted in the circumstances of minimization [52]:

y*_i = Σ_{j=1}^{g} x*_ij − Σ_{j=g+1}^{n} x*_ij

where j = 1, 2, ..., g are the benefit criteria and j = g + 1, g + 2, ..., n are the cost criteria.
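A minimal sketch of the vector normalization and ratio system described in Steps 2 and 3 is given below; the small decision matrix is an assumed example rather than the paper's data.

```python
import numpy as np

def vector_normalize(x: np.ndarray) -> np.ndarray:
    """Column-wise normalization x*_ij = x_ij / sqrt(sum_i x_ij^2)."""
    return x / np.sqrt((x ** 2).sum(axis=0))

def ratio_system(x: np.ndarray, benefit_mask: np.ndarray) -> np.ndarray:
    """Ratio system: add normalized scores on benefit criteria and subtract them
    on cost criteria; the alternative with the largest y* ranks first."""
    xn = vector_normalize(x)
    sign = np.where(benefit_mask, 1.0, -1.0)
    return (xn * sign).sum(axis=1)

# Assumed example: 3 alternatives x 4 criteria, the last criterion being a cost.
x = np.array([[7.0, 6.0, 8.0, 3.0],
              [6.0, 8.0, 7.0, 4.0],
              [8.0, 7.0, 6.0, 5.0]])
benefit = np.array([True, True, True, False])

y = ratio_system(x, benefit)
print("y* =", np.round(y, 4), "-> best alternative index:", int(np.argmax(y)))
```

The same normalized matrix is reused by the reference point approach and the full multiplicative form described next, so computing it once keeps the three subordinate rankings consistent.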
Step 4 (the reference point approach). The best alternative is obtained as the one with the smallest deviation from the reference point,

A*_RP = argmin_i z*_i,

where z*_i is the largest absolute deviation between the reference point and the normalized evaluation values of alternative i over the criteria,

z*_i = max_j | r_j − x*_ij |,

and the reference point r_j is the best normalized value of criterion j over all alternatives.

Step 5 (the full multiplicative form). The preferred alternative is obtained as the one with the largest overall utility,

A*_FMF = argmax_i U*_i,  with  U*_i = Π_{j=1}^{g} x*_ij / Π_{j=g+1}^{n} x*_ij,

that is, the product of the normalized values over the benefit criteria divided by the product over the cost criteria.

Step 6 (rank the alternatives). First, the overall evaluation values y*_i are ranked in descending order, the absolute deviations z*_i in ascending order, and the overall utility values U*_i in descending order. Then the three subordinate rankings are integrated into a final MULTIMOORA ranking on the basis of the generalized dominance relations of the dominance theory. The dominance theory [52] is a tool for aggregating the subordinate rankings produced by the MULTIMOORA method; it combines the plurality rule, assisted by a kind of lexicographic method, with the method of correlation of ranks.

The TDUL-RT-MULTIMOORA Approach for the PBRMDM Problem

The flowchart of the TDUL-RT-MULTIMOORA method is shown in Figure 1.

TDUL-RT-MULTIMOORA Method. To solve the PBRMDM problem, an extended MULTIMOORA method with TDULVs and regret theory, called the TDUL-RT-MULTIMOORA method, is put forward. The decision process of the TDUL-RT-MULTIMOORA method is as follows.

Stage 1 (construct the perceived utility decision matrix).
Step 1.1. Aggregate the experts' decision information.
Step 1.2. Normalize the decision matrix.
Step 1.3. Calculate the expectation value of all evaluation information.
Step 1.4. Construct the perceived utility decision matrix based on regret theory.

Stage 2 (calculate the weights of criteria). The entropy weighting method was introduced by Shannon [20]. It employs probability theory to measure the uncertainty of information and thereby avoids the negative effect of subjective elements; it is a useful tool for measuring uncertainty in decision-making problems [53]. Its steps are as follows.
Step 2.1. Calculate the entropy value of each criterion.
Step 2.2. Calculate the difference degree of each criterion.
Step 2.3. Calculate the entropy weight of each criterion.

Stage 3 (rank the recycling modes by the MULTIMOORA method).
Step 3.1 (the ratio system). Based on the ratio system, the best recycling mode is the one with the largest overall evaluation value.
Step 3.2 (the reference point approach). The best recycling mode is the one with the smallest deviation from the reference point.
Step 3.3 (the full multiplicative form). The preferred recycling mode is the one with the largest overall utility.
Step 3.4 (rank the recycling modes). The overall evaluation values are ranked in descending order, the absolute deviations in ascending order, and the overall utility values in descending order; the three subordinate rankings are then combined by the dominance theory into a final MULTIMOORA ranking, and the mode ranked first is the optimal one.
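As a concrete illustration of Stages 2 and 3, the short Python sketch below applies the standard entropy-weighting formulas and the three MULTIMOORA parts to a small crisp decision matrix. The matrix values, the vector normalization, and the rank-sum aggregation used in place of the full dominance theory are illustrative assumptions of the sketch rather than the paper's exact equations.

import numpy as np
from scipy.stats import rankdata

# Hypothetical crisp decision matrix: rows = 3 alternatives, columns = 4 criteria.
X = np.array([[0.60, 0.55, 0.30, 0.40],
              [0.70, 0.50, 0.25, 0.35],
              [0.65, 0.60, 0.35, 0.30]])
benefit = np.array([True, True, False, False])   # last two columns are cost criteria
m, n = X.shape

# Stage 2: entropy weighting (standard Shannon form).
P = X / X.sum(axis=0)                             # per-criterion proportions
E = -(P * np.log(P)).sum(axis=0) / np.log(m)      # Step 2.1: entropy of each criterion
d = 1.0 - E                                       # Step 2.2: difference degree
w = d / d.sum()                                   # Step 2.3: entropy weights

# Stage 3: the three MULTIMOORA parts on the weighted, vector-normalized matrix.
Xn = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
Xw = w * Xn

# Step 3.1, ratio system: add benefit criteria, subtract cost criteria.
y = Xw[:, benefit].sum(axis=1) - Xw[:, ~benefit].sum(axis=1)

# Step 3.2, reference point: largest deviation from the per-criterion best value.
ref = np.where(benefit, Xw.max(axis=0), Xw.min(axis=0))
z = np.abs(ref - Xw).max(axis=1)

# Step 3.3, full multiplicative form: product over benefits divided by product over costs.
U = (Xn[:, benefit] ** w[benefit]).prod(axis=1) / (Xn[:, ~benefit] ** w[~benefit]).prod(axis=1)

# Step 3.4: combine the three subordinate rankings (rank sum as a stand-in for dominance theory).
ranks = rankdata(-y) + rankdata(z) + rankdata(-U)
print("ranking, best first:", np.argsort(ranks) + 1)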
Illustrative Example

Enterprise B is an electric automobile manufacturer in China. The enterprise began to sell electric vehicles in 2011, and by the end of 2017 it had sold more than 290,000 electric vehicles. According to China's "national application of new energy automobile production enterprises and product record management rules", the service life of a power battery is 5-8 years, which means the power batteries sold by enterprise B in the early stage are starting to enter the scrapping period. It is therefore necessary for enterprise B to choose the optimal power battery recycling mode in order to save cost and achieve long-term, sustainable development. There are three power battery recycling modes A = {A1, A2, A3} for enterprise B to choose from, and six criteria C = {C1, C2, C3, C4, C5, C6} are considered (Table 1). Among the six criteria, C1, C2, C3, and C6 are benefit criteria, while C4 and C5 are cost criteria. Five experts E = {e1, e2, e3, e4, e5} are invited to give TDULV evaluations of the three recycling modes with respect to the six criteria. The weights of the five experts are assumed equal, that is, λ = (λ1, λ2, λ3, λ4, λ5) = (0.2, 0.2, 0.2, 0.2, 0.2). Five TDULV evaluation decision matrices Ŝ(k) = [ŝ_ij(k)]3×6 (k = 1, 2, 3, 4, 5) are then obtained. The target of PBRMDM is to determine the optimal recycling mode. Owing to space limitations, only the TDULV evaluation decision matrix given by the first expert e1 is shown (Table 2).

Table 1: The recycling modes and criteria for the PBRMDM problem.
Recycling modes: A1, the independent recycling mode; A2, the alliance recycling mode; A3, the third-party recycling mode.
Criteria: C1, the ability to control the supply chain; C2, recovery facilities; C3, professional construction of recycling; C4, recycling scale; C5, recycling cost; C6, the situation of recycling resources.

The Decision Process and Results

Stage 1. Construct the perceived utility decision matrix.
Step 1.4. The perceived utility decision matrix Û = [u_ij]3×6 is obtained by calculating the perceived utility values according to (25)-(28), and the results are shown in Table 6.

Stage 2. Calculate the weights of criteria by the entropy weighting method.

Stage 3. Rank the recycling modes by the MULTIMOORA method.
Step 3.1 (the ratio system). First, the overall evaluation value y*_i of each recycling mode over the criteria is calculated according to (33): y*_1 = 0.349, y*_2 = 0.377, and y*_3 = 0.391. Then, based on the ratio system and by (32), the ranking of the three recycling modes is A3 ≻ A2 ≻ A1.
Step 3.2 (the reference point approach). First, the absolute deviation z*_i between the reference point and the normalized evaluation values of each recycling mode is calculated according to (35)-(36): z*_1 = 0.060, z*_2 = 0.057, and z*_3 = 0.098. Then, based on the reference point approach and by (34), the ranking of the three recycling modes is A2 ≻ A1 ≻ A3.
Step 3.3 (the full multiplicative form). First, the overall utility U*_i of each recycling mode is calculated according to (38): U*_1 = 0.322, U*_2 = 0.366, and U*_3 = 0.348. Then, based on the full multiplicative form and by (37), the ranking of the three recycling modes is A2 ≻ A3 ≻ A1.
Step 3.4 (rank the recycling modes). Combining the ranking results obtained in Steps 3.1, 3.2, and 3.3 with the dominance theory, the final ranking of the three recycling modes is A2 ≻ A3 ≻ A1. As a result, A2 is the optimal recycling mode.
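As a cross-check on Step 3.4, the three subordinate scores quoted above can be combined with a simple rank-sum rule (a simplification of the dominance theory actually used in the paper); it reproduces the reported final ordering A2 ≻ A3 ≻ A1.

import numpy as np
from scipy.stats import rankdata

y = np.array([0.349, 0.377, 0.391])   # ratio system, larger is better
z = np.array([0.060, 0.057, 0.098])   # reference point, smaller is better
U = np.array([0.322, 0.366, 0.348])   # full multiplicative form, larger is better

ranks = rankdata(-y) + rankdata(z) + rankdata(-U)   # A1: 8, A2: 4, A3: 6
order = np.argsort(ranks)
print([f"A{i + 1}" for i in order])   # ['A2', 'A3', 'A1']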
Comparative Analysis

In this section, a comparative analysis is conducted to demonstrate the effectiveness and advantages of the proposed TDUL-RT-MULTIMOORA method. The result of the proposed method is compared with the results of the VIKOR method [28] and the TODIM method [54]; the ranking results of the three methods are listed in Table 7.

Table 7: Ranking results using different methods.
The TDUL-RT-MULTIMOORA method: A2 ≻ A3 ≻ A1
The VIKOR method:
The TODIM method: A2 ≻ A1 ≻ A3

From Table 7, it can be seen that the optimal power battery recycling mode obtained by the three methods is the same, which illustrates the effectiveness of the proposed method, while the second and third places of the ranking obtained by the proposed method differ slightly from those of the other two methods. The main reasons for the differences are as follows. First, the TDUL-RT-MULTIMOORA method not only considers the outcome of the chosen recycling mode but also pays attention to the possible results of the unselected recycling modes. Second, in the decision-making process the VIKOR method and the TODIM method focus on the bounded rationality of the decision makers without taking the robustness of the decision-making system into consideration; in the MULTIMOORA method, a robust system is constructed throughout the decision-making process, which helps to enhance the accuracy and stability of the result. Third, in reality, the recycling of power batteries involves multiple organizations, specialized knowledge, and special equipment, whereas the competitive strategy of enterprise B centres on the innovation and manufacturing of electric vehicles. Compared with the independent recycling mode, the third-party recycling mode can not only help enterprise B reduce the recycling cost but also improve the efficiency and quality of recycling.

Compared with the VIKOR and TODIM methods, the advantages of the proposed method are as follows:

(1) The evaluations of the alternatives over the criteria given by the decision makers are represented by TDULVs. TDULVs can represent the experts' assessments while also capturing the reliability of the experts' subjective evaluations, which expresses fuzzy or uncertain information well and preserves the integrity of the primary information.

(2) The entropy weighting method avoids the disadvantages of subjective weighting methods, such as deviating completely from the measured data and depending heavily on the experience and knowledge of the experts; it is an important measure of uncertainty that makes full use of the data.

(3) The VIKOR method and the TODIM method only consider the bounded rationality of the decision makers, while the proposed method also takes into account the regret that decision makers may feel during the decision-making process, which is more in line with reality.

(4) The regret theory applied in the proposed method is an important behavioural decision-making theory that considers both the outcomes of the chosen alternatives and the possible results of the unselected alternatives, making it easier and more consistent to depict the intuitive judgments of decision makers.

(5) The proposed method takes the robustness of the decision-making system into consideration by extending the MULTIMOORA method.
In short, the proposed method not only improves the accuracy of the decision makers' evaluations but also takes their psychological behaviour and the robustness of the decision system into account; thus, the method proposed in this paper is more comprehensive and precise than the VIKOR and TODIM methods.

Conclusions

To solve the PBRMDM problem, a TDUL-RT-MULTIMOORA method combining regret theory and the MULTIMOORA method on the basis of TDULVs is proposed. The TDUL-RT-MULTIMOORA method not only improves the accuracy of the decision makers' evaluations but also takes the robustness of the decision system into account, which helps to ensure the stability of the result. First, the assessments of the decision makers are expressed by TDULVs and aggregated into a group linguistic decision matrix by the TDULDWA operator. On this basis, regret theory is introduced to describe the bounded rationality of the decision makers, taking into consideration both the outcome of the chosen power battery recycling mode and the possible results of the unselected recycling modes. Second, the weights of the criteria are obtained by the entropy weighting method. The MULTIMOORA method is then applied to rank the power battery recycling modes. Finally, an example is given to illustrate the efficiency and practicability of the proposed method.

The novel aspects of the proposed method are as follows:

(1) The TDULVs used to represent the experts' assessments capture the reliability of the experts' subjective evaluations, which better expresses the fuzziness and uncertainty of the assessment information. The proposed approach can therefore represent the evaluations of decision makers more precisely and practically. In addition, the TDULDWA operator ensures effective aggregation of the evaluation information given by the experts.

(2) The weights of the criteria are calculated by the entropy weighting method, which makes full use of the original information and avoids deviation from the measured data, making the result more precise.

(3) An extended MULTIMOORA method combined with TDULVs and regret theory is proposed for solving the PBRMDM problem. The proposed method not only improves the accuracy of the decision makers' evaluations but also considers the robustness of the decision system, ensuring the stability of the result. It provides a new and more precise way to solve this problem.

In terms of future research, the proposed method could be extended to support PBRMDM problems with more complex influencing factors. In addition, we will investigate further linguistic computing techniques to improve the reliability and accuracy with which the decision makers' evaluation information is represented.

Table 1: The recycling modes and criteria for the PBRMDM problem.
Table 3: The group linguistic decision matrix.
Table 4: The normalized linguistic decision matrix.
Table 5: The crisp decision matrix.
Table 6: The perceived utility decision matrix.
Table 7: Ranking results using different methods.
2018-08-16T21:57:32.504Z
2018-07-08T00:00:00.000
{ "year": 2018, "sha1": "1cb9e978a1d61231026c52611a036ccfe38b8ad3", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/sp/2018/7675094.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1cb9e978a1d61231026c52611a036ccfe38b8ad3", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
119413072
pes2o/s2orc
v3-fos-license
Memory in Self Organized Criticality Many natural phenomena exhibit power law behaviour in the distribution of event size. This scaling is successfully reproduced by Self Organized Criticality (SOC). On the other hand, temporal occurrence in SOC models has a Poisson-like statistics, i.e. exponential behaviour in the inter-event time distribution, in contrast with experimental observations. We present a SOC model with memory: events are nucleated not only as a consequence of the instantaneous value of the local field with respect to the firing threshold, but on the basis of the whole history of the system. The model is able to reproduce the complex behaviour of inter-event time distribution, in excellent agreement with experimental seismic data. After the pioneering work of Bak, Tang and Wiesenfeld [1], Self Organized Criticality (SOC) has been proposed as a successful approach to the understanding of scaling behaviour in many natural phenomena. The term SOC usually refers to a mechanism of slow energy accumulation and fast energy redistribution driving the system toward a critical state. The prototype of SOC systems is the sand-pile model in which particles are randomly added on a two dimensional lattice. When the number of particles σ i in the i-th site exceeds a threshold value σ c , this site is considered unstable and particles are redistributed to nearest neighbor sites. If in any of these sites σ i > σ c , a further redistribution takes place propagating the avalanche. Border sites are dissipatives and discharge particles outside. The system evolves toward a critical state where the distribution of avalanche sizes is a power law obtained without fine tuning: no tunable parameter is present in the model. The simplicity of the mechanism at the basis of SOC has suggested that many physical and biological phenomena characterized by power laws in the size distribution, represent natural realizations of the SOC idea. For instance, SOC has been proposed to model earthquakes [2,3], the evolution of biological systems [4], solar flare occurrence [5], fluctuations in confined plasma [6] snow avalanches [7] and rain fall [8]. Moreover, SOC models can be also considered as cellular automata generating stochastic sequences of events. An important quantity showing evidence of time correlations in a sequence is the distribution of time intervals between successive events. Defining ∆t as the time elapsed between the end of an avalanche and the starting of the next one, for the sand-pile model one obtains that ∆t is exponentially distributed [9]. This behaviour reveals the absence of correlations between events typical of a Poissonian process. Conversely the inter-event time distribution N (∆t) of many physical phenomena has a non-exponential shape, as for instance in the case of earthquakes [10], solar flares [9] and confined plasma [11]. The failure in the description of temporal occurrence is generally considered the main restriction for the applicability of SOC ideas to the description of the above phenomena. In this letter we address the problem of introducing time correlations within SOC and, in order to validate our model, we compare our results with experimental records from seismic catalogs. Seismicity is considered here as a typical physical process with power law in the size distribution but also strong correlations between events. In this case, the sand pile model can be directly mapped [12][13][14] in the Burridge-Knopoff model, proposed for the description of earthquake occurrence. 
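For readers unfamiliar with the sand-pile automaton described above, a minimal Python sketch is given below. It is not the memory model introduced later in this paper, just the plain BTW-style rule (add a grain at a random site, topple any site that exceeds the threshold onto its four neighbours, dissipate at the borders); lattice size, threshold, and run length are arbitrary choices of the sketch. Collecting avalanche sizes and the waiting times between avalanches from such a run is what yields the power-law size distribution and the Poisson-like (exponential) inter-event times mentioned in the text.

import numpy as np

rng = np.random.default_rng(0)

def btw_sandpile(L=30, sigma_c=3, n_grains=50_000):
    """Plain sand-pile automaton: returns avalanche sizes and inter-event times."""
    grid = np.zeros((L, L), dtype=int)
    sizes, waits = [], []
    last_event = 0
    for t in range(n_grains):
        i, j = rng.integers(L, size=2)
        grid[i, j] += 1                          # slow, random driving
        size = 0
        unstable = np.argwhere(grid > sigma_c)
        while unstable.size:                     # fast relaxation (the avalanche)
            for a, b in unstable:
                grid[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:
                        grid[na, nb] += 1        # grains pushed over the border are lost
            unstable = np.argwhere(grid > sigma_c)
        if size:
            sizes.append(size)
            waits.append(t - last_event)         # driving steps since the previous avalanche
            last_event = t
    return np.array(sizes), np.array(waits)

sizes, waits = btw_sandpile()
# histogram of `sizes` follows a power law; histogram of `waits` decays exponentially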
In this model a continental plate is represented as a series of blocks interconnected with each other and with a rigid driver plate by springs, then the quantity σ i represents the global force acting on the i-th block and σ c the threshold for slippage. We introduce memory within the SOC context: the local instability depends not only on the instantaneous value σ i but on the whole history of energy accumulation. Our memory ingredient is analogous to recent ideas [15] introduced for the understanding of earthquake interactions. The first observation of correlations between earthquakes, dates back to Omori [16] who suggested that earthquakes tend to occur in clusters temporally located after main events: the number of aftershocks following a main event after a time t, r(t), decays as a power law r(t) ∼ 1/t. Furthermore, a large earthquake produces also an abrupt modification in seismic activity across a widespread area [17]. A striking example of this remote triggering mechanism is the Landers earthquake of magnitude 7.3 occurred in 1992, which triggered three hours later the 6.5 magnitude event in the town of Big Bear on a different fault, together with a general increase in activity across much of the Western United States. A clear understanding of the physical processes responsible for this behaviour is still lacking: the non-local triggering mechanism cannot be explained in terms of static stress changes responsible for aftershocks and one must invoke non linear interactions to modify the friction law on remote faults [17,19,18,15]. The inter-time distribution combines both the effect of the local clustering of the aftershocks sequence described by the Omori law, with the remote triggering mechanism involving larger distances. The presence of both features give rise to an intertime distribution N (∆t) that is not a power law but has a more complex shape [20]. Nevertheless, Corral has shown that this shape is quite independent on the geographical region and the magnitude range considered [10]. This observation indicates that N (∆t) is a fundamental quantity to characterize the temporal distributions of earthquakes. Here we introduce within SOC a non local mechanism for event nucleation. In our approach seismic fracture depends on a collective behaviour of the earth crust: the triggering of a new event is determined by the combined effect of the increase in the static stress together with the local weakening in a fault due to the loading global history. To this extent, we consider a square lattice of size L, each site being characterized not only by the value of the local stress σ i but also by a site-counter c i that represents the local memory. At t = 0 local stresses are assigned at random between σ c − z and σ c , where z is the lattice coordination number and σ c > z, whereas c i is randomly set between zero and one. The simulation proceeds as follows. At each time t all sites are loaded with an uniformly increasing external stress, by adding one unit to all σ i 's, and the local variable p i is defined as pi is evaluated, measuring the local instability with respect to slippage, and its minimum value in the system, α min , is found. This value indicates the site most susceptible to seismic failure because of both the high local stress at that instant of time and the cumulated history of loads saved into the counters c i . 
If α min is larger than a critical value α c , all counters are updated as Then the external stress is uniformly loaded at constant rate and at each step the new value of α min in the system is evaluated and Eq.(2) applied. As soon as α min < α c , the site i with α i = α min becomes the epicenter where the earthquake nucleates and its counter is set to zero. Other sites with α min < α i < α c are considered stable unless involved in the fracture propagation. This choice is expression of fracture being a phenomenon controlled by the extreme value statistics. When a site nucleates an earthquake, it discharges elastic energy uniformly by the transfer of a unit stress to all nearest neighbours, as in the sand pile model. The process goes on letting unstable sites, characterized by α i < α c , discharge energy and in this way propagating the seismic event farther and farther from the epicenter. The counters of all discharging sites are set to zero during the evolution, whereas counters of all other sites are updated at the end according to Eq.(2) with the actual value of α min . Energy back-flow allows to activate sites found stable during the forward propagation triggering further energy redistributions. At the end of the process the external load is increased again at constant rate until another event takes place. The updating rule (2) is equivalent to consider a time dependent friction law [19], whose evolution is controlled not only by the local state at previous times but also by the instability condition for the whole system. Eq.(2) then introduces long range interactions and remote triggering by means of α min , since all sites in the system, even far from the epicenter, share this common information. The more a site is stressed (high p i ), the stronger it will react to this information. We have checked that our results are substantially unchanged if Eq.(2) is applied to a finite region of size l < L centered in the site with α i = α min , for large enough l. A breaking rule similar to Eq. (2) has been successful in providing a good description of the propagation of stress corrosion cracks [21], a fracture process where local mechanical resistance of materials is weakened in time by chemical agents. It is possible to calculate in a mean field approximation the fraction of active sites as function of time. This approximation is based on the assumption that the external stress is kept fixed and therefore can describe the behaviour of the system only at short time scales. As a consequence of this hypothesis, the fraction of active sites is related to the rate of occurrence of aftershocks, r(t), happening over time scales shorter than the characteristic time of the loading mechanism. Let us consider at each time t and each site the quantity q i (t) = 1 − α i for 0 ≤ α i ≤ 1 and q i (t) = 0 for α i > 1. According to Eq.(2) one has, at constant load condition, that the value of q i (t) at the next time step is given by q i (t + 1) = q i (t) + α min , if the i-th site does not discharge energy and q i (t + 1) = 0 otherwise. Hence, the statistical average is with P s i (t) the probability for the site i to be stable at time t. Since, the spatial average, q(t) = 1 , is an estimate of the fraction of sites that could become active at the next time step, q(t) is the probability for a generic site to be unstable at time t and then q(t) ≃ 1 − P s i (t). In the hypothesis α min ≪ α i , valid for a system close to trigger an event, one can neglect α min in Eq.(3). 
Finally, supposing that q_i(t) is a self-averaging quantity, one has q(t+1) ≃ q(t)[1 − q(t)], which gives the Omori law q(t) ∼ t^{-1}. In order to evaluate the complete inter-time distribution N(∆t), we numerically generate a large number of events and calculate the time distance between every pair of successive events involving more than a single site. Fig. 1 shows the experimental and numerical data for α_c = 0.9 and L = 500. By re-scaling the numerical waiting time with an appropriate constant value, our data are in very good agreement with the experimental distribution from the Southern California Catalogue [22], whereas the original SOC model exhibits exponential decay. We have monitored the behaviour of the distribution for different values of α_c (Fig. 2): for α_c ≤ 0.3 an exponential decay is observed, whereas for intermediate values of α_c a complex behaviour starts to set in. For α_c ≥ 0.7 the data follow a unique universal curve. In order to fully validate the theoretical ideas of our model, we have also analyzed other statistical properties of seismic catalogues: the energy and epicentre distance distributions. The energy released in an earthquake is expressed in terms of the magnitude, which is proportional to the logarithm of the fractured area A,

M = k log A + M_min,  (4)

where M_min is a constant depending on the area units. The Gutenberg-Richter law implies that N(M), the number of earthquakes with magnitude M, follows an exponential law [23]

N(M) ∼ 10^{-bM},  (5)

where b is an experimental constant generally close to one. In order to compare our numerical results with experimental data, we evaluate the magnitude of an event using Eq. (4) with A being the number of discharging sites and k ≃ 0.65, as for the Southern California Catalogue. The choice of the minimum magnitude M_min is arbitrary, since it is related to the unit cell area, and in our case we set M_min = 2. The magnitude distribution, after an initial transient at small magnitudes (M < 3), follows the expected exponential behaviour of the Gutenberg-Richter law (Fig. 3) over a magnitude range that increases with the system size L. The strong fluctuations observed at large M for small system sizes are finite-size effects. The value of the best-fit exponent b depends on the parameter α_c and becomes parameter independent for α_c ≥ 0.7, where b ∼ 0.84 (inset of Fig. 3). Comparing our numerical results with the data from the California Catalogue (Fig. 4), good agreement is found between the experimental best-fit value b_exp ∼ 0.86 and the numerical prediction. In the non-conservative case, SOC also provides good agreement with the experimental size distribution [14]. Seismic catalogues also record the spatial coordinates of earthquake epicentres. We numerically evaluate N(d), the cumulative distribution of the distance between all possible pairs of events separated by less than d. We obtain a power-law behaviour with an exponent equal to 1.84. Agreement with experimental data is observed at small distances. N(d) calculated for the original SOC model provides similar results. It is worth noticing that the behaviour of all experimental distributions is reproduced by the numerical simulations without any fine tuning, i.e. the numerical results are parameter independent for α_c > 0.7. The complex seismic activity over large regions of the world is controlled both by local stress redistributions, generating aftershocks, and by long-range load transfer in the surrounding crust.
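The mean-field recursion derived above is easy to check numerically. The short sketch below iterates q(t+1) = q(t)[1 − q(t)] and fits the log-log slope, which approaches −1 as the text states; it also encodes the magnitude conversion of Eq. (4) in the plausible form M = k·log10(A) + M_min, using the values k ≃ 0.65 and M_min = 2 quoted above (this reading of Eq. (4) is an assumption of the sketch).

import numpy as np

# Iterate the mean-field aftershock rate q(t+1) = q(t) * (1 - q(t)).
q, qs = 0.5, []
for _ in range(10_000):
    qs.append(q)
    q *= (1.0 - q)
qs = np.array(qs)
t = np.arange(1, len(qs) + 1)

# Log-log slope over the tail: close to -1, i.e. the Omori law q(t) ~ 1/t.
slope = np.polyfit(np.log(t[100:]), np.log(qs[100:]), 1)[0]
print(round(slope, 2))

# Magnitude from the number of discharging sites A (assumed reading of Eq. (4)).
def magnitude(A, k=0.65, M_min=2.0):
    return k * np.log10(A) + M_min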
In our approach these mechanisms are simply implemented in self-consistent local laws containing long range memory of stress history. A large event could then increase the seismic activity by inducing global weakening in the system, or else could even inhibit future earthquakes by resetting the local memory. This global memory ingredient could correspond to a variety of physical mechanisms inducing weakening in time for real faults [24], as stress corrosion [21], fault gauge deterioration [25] or pore pressure variation [26]. We have considered earthquake triggering as an example of physical problems in which time correlations are extremely important. We suggest that this SOC model with memory may be relevant for other physical phenomena described by a SOC approach and exhibiting a non-exponential decay in the inter-time distribution [9]. [1] P.Bak,C. Tang, K. Wiesenfeld, Phys. Rev. Lett. 59, 381 (1987).
2019-04-14T02:08:25.338Z
2005-05-05T00:00:00.000
{ "year": 2005, "sha1": "d6e06a958664505cae41502ae87f00d6016b00db", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0505129", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d6e06a958664505cae41502ae87f00d6016b00db", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics" ] }
89744544
pes2o/s2orc
v3-fos-license
PROPOSED USE OF THE PLENARY POWERS TO VARY THE TYPE-SPECIES OF THE GENUS

The strata of the Carboniferous System were deposited before the onset of continental drift and before the break-up of the ancient land masses of the world. In consequence, horizons in lands now many thousands of miles apart can be accurately correlated by the classical methods of stratigraphical palaeontology. In the chronostratigraphic classification of the Carboniferous, as adopted in north-west Europe, the Carboniferous is divided into two Subsystems, the Dinantian (below) and the Silesian (above). The Silesian is further divided into Series, of which the Namurian is the lowest. The term Namurian is used over all Europe and Asia, but in North America a different terminology is used. The limits of the Namurian itself, and of the standard stage and zonal divisions within it, are based on the occurrences of ammonoid cephalopods known as goniatites, the species of which display very rapid dispersal in space and rapid replacement of one species by another in time. Intercontinental correlations of Carboniferous rocks are largely made with their aid.

2. The stages of the Namurian Series now accepted as standard are: Yeadonian, Marsdenian, Kinderscoutian, Alportian, Chokierian, Arnsbergian, Pendleian. The goniatite genus Homoceras, as interpreted for the last 47 years, is especially characteristic of the Chokierian and Alportian stages. These represent the lower (H1) and upper (H2) divisions of the original H (for Homoceras) Zone of Bisat (1924); the names for these divisions were proposed by Hodson (1957). The strata concerned can be recognized and correlated by means of these goniatites from Britain through Belgium (Bouckaert, 1961) and Germany (Schmidt, 1925) to Russia at least as far as Central Asia (Ramsbottom, 1957). The Chokierian (H1) and Alportian (H2) stages are divided into five zones, four of which are named from species of Homoceras. The topmost zone of the Alportian, the Homoceratoides prereticulatus Zone, is divided into two subzones, one of which is named from a species of Homoceras.

3. The type-species of Homoceras Hyatt, 1884 (p. 330), is Goniatites calyx Phillips (1836, p. 236, pl. 20, figs 22, 23).
5. The generic name Homoceras, therefore, being based on a type-species whose syntypes, so far as known, are not only taxonomically useless (because immature) but not certainly available, is a nomen dubium. Nevertheless, although the name of the type-species has virtually disappeared from the literature, the generic name has been applied to several species which have long been widely used in stratigraphical studies. The name is used in teaching. Its stratigraphical importance has been drawn upon in palaeogeographical reconstructions (Hodson 1959). There is thus a clear case for conserving the name, and the evidence shows that this would be better achieved by varying the type-species than, for example, by designating a neotype for Hyatt's type-species.

6. For many years Homoceras beyrichianum (Haug) and its close relatives H. smithii (Brown) and H. undulatum (Brown) have been regarded as typical forms of the genus, and it is they that are the common forms in the Chokierian and Alportian stages. H. smithii is one of the most typical and widespread of all the species now referred to the genus, and it is proposed that it should be substituted for the unsatisfactory and unidentifiable H. calyx (Phillips). Goniatites smithii Brown (1841, p. 218, pl. 7, figs 34, 35) was first referred to Homoceras by Bisat (1924, p. 103) and is found only in a thin band at the base of the Alportian (H2) Stage; but though thin, the band is found in Ireland, Great Britain, France, Belgium and Germany, and it is one of the most reliable and widespread of the Namurian goniatite marker horizons (Hodson, 1957). H. smithii has also been reported from the southern Urals and from Central Asia (Ramsbottom, 1957, with references).

7. The International Commission on Zoological Nomenclature is accordingly asked:
(1) to use its plenary powers to set aside all designations of type-species for the nominal genus Homoceras Hyatt, 1884, hitherto made and, having done so, to designate the nominal species Goniatites smithii Brown, 1841, to be the type-species of that genus.
2019-04-02T13:12:04.827Z
1971-01-01T00:00:00.000
{ "year": 1971, "sha1": "8a2c08553edef553f5d32ed7a3206dd8f6069119", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.5962/bhl.part.6129", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e1dd5577bfffade3b7e05d1a877eeca5ad353bd4", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
58547351
pes2o/s2orc
v3-fos-license
Title Page Title: Simultaneous isotope dilution quantification and metabolic tracing of deoxyribonucleotides by liquid chromatography high resolution mass spectrometry Authors: Quantification of cellular deoxyribonucleoside mono(dNMP), di(dNDP), triphosphates (dNTPs) and related nucleoside metabolites are difficult due to their physiochemical properties and widely varying abundance. Involvement of dNTP metabolism in cellular processes including senescence and pathophysiological processes including cancer and viral infection make dNTP metabolism an important bioanalytical target. We modified a previously developed ion pairing reversed phase chromatographymass spectrometry method for the simultaneous quantification and 13C isotope tracing of dNTP metabolites. dNMPs, dNDPs, and dNTPs were chromatographically resolved to avoid mis-annotation of in-source fragmentation. We used commercially available 13C15N-stable isotope labeled analogs as internal standards and show that this isotope dilution approach improves analytical figures of merit. At sufficiently high mass resolution achievable on an Orbitrap mass analyzer, stable isotope resolved metabolomics allows simultaneous isotope dilution quantification and 13C isotope tracing from major substrates including 13C-glucose. As a proof of principle, we quantified dNMP, dNDP and dNTP pools from multiple cell lines. We also identified isotopologue enrichment from glucose corresponding to ribose from the pentose-phosphate pathway in dNTP metabolites. Introduction Intracellular deoxyribonucleoside triphosphate (dNTP) supply is tightly controlled by de novo synthesis, salvage, and degradation pathways (1,2).Aberrant concentrations of dNTPs and their metabolites are associated with control of senescence (3)(4)(5), cancer (6,7), metabolic diseases (3,8), neurodegeneration (9), and viral infection (10).Basal dNTP levels during G1/G0 are extremely low (femtomolar range) and are mainly used for DNA repair and mitochondrial DNA synthesis (11,12).During S phase, dNTP levels increase ten-fold to accommodate nuclear DNA replication (13).Excess dNTP levels reduce genome stability, replication fidelity, and reduce the length of, or delay entry into S phase (14).On the other hand, low dNTP levels are deleterious by increasing the erroneous incorporation of rNMPs as well as the incidence of replication stress, leading to fork arrest, collapse, and double strand breaks (15).Mitochondrial depletion also results from the imbalance of dNTP levels (16).Therefore, dNTP pool balance is critical for the health of the cell. Low basal concentrations of dNTPs and interference from analogs including NTPs, dideoxynucleoside triphosphates and mono-or di-phosphate deoxynucleosides presents an analytical challenge (17,18).Detection with an enzymatic assay, in which a radioactive or fluorescence labeled dNTP is incorporated by a DNA polymerase proportionally to the unknown complementary base, is relatively sensitive with limits of detection (LoD) in the low pmol range (12).DNA polymerases sometimes incorporate rNTPs, thus often overestimating the concentration of dNTPs, especially at low dNTP concentrations or in complex matrices (12,19).Furthermore, the multiplexing ability of these assays are limited and they cannot simultaneously quantify all nucleosides and their corresponding mono-, di-, and tri-phosphate forms (20).Liquid chromatography (LC)-UV detection methods are useful in dNTP bioanalysis but are limited in sensitivity, specificity, and multiplexing ability (21). 
Mass spectrometry (MS) based quantification of dNTP metabolism offers unique benefits in terms of multiplexing, sensitivity and specificity.However, in-source fragmentation in electrospray ionization (ESI) sources of tri-phosphates to mono-and di-phosphates is problematic without chromatographic resolution or preparative separation of the mono-, di-and tri-phosphates (22).Specifically, this complicates direct infusion high resolution mass spectrometry (HRMS) methods developed for other nucleotide metabolites (23).Triple quadrupole LC-tandem MS (LC-MS/MS) based methods have been developed to provide increased multiplexing and more specific measurements of dNMP and dNTP metabolites at adequate sensitivity for most biological samples (24).Methods for dNDPs are sparse in the literature, despite the biological importance of the generation of these metabolites via ribonucleotide reductase.Alternatively, due to the analytical challenges in quantifying dNTPs, many studies indirectly quantify dNTP metabolism by strategically assaying precursor metabolites including ribose-5-phosphate (R5P) (25). MS based methods can incorporate isotope dilution by adding an isotope labeled internal standard at the beginning of a bioanalytical workflow to adjust for matrix effects and losses during extraction and analysis (26).13 C, 15 N-labeled ribonucleosides are available commercially, but methods to date have not utilized extensive isotope dilution.Aside from use of stable isotopes for more rigorous quantification, MS-based analysis can also enable isotope tracing, where the incorporation of a stable isotope labeled substrate in a metabolically active system is quantified by measuring the incorporation of the isotope label into an analyte.Synthesis of dNTPs incorporate carbon and nitrogen from diverse precursors, in separate pathways for purines and pyrimidines (Fig. 1).Carbon sources for de novo dNTP synthesis include glucose, aspartate, serine, and glycine whereas nitrogen is derived from glutamine, or aspartate (13).These precursors can be derived from multiple metabolic pathways, including glycolysis, glutaminolysis, and TCA cycle metabolism (3).Preferential utilization of these metabolites through each of these pathways is both cell-type and context dependent.Therefore, tracing the fate of atoms from different metabolites to dNM/D/TP pools is useful for understanding pathophysiological metabolic rewiring and pharmacological intervention. To overcome the analytical challenges of dNTP quantification and maximize the analytical benefits of MS based analysis, we developed an ion pairing LC-HRMS method for the simultaneous isotope dilution based quantification and 13 C-isotope tracing of dNTP from major carbon precursors.Mono-, di-, and tri-phosphates of deoxyribonucleosides were quantified across different cell lines, and isotopic incorporation from 13 C-glucose was examined.We demonstrate improved analytical parameters over label-free quantification and the ability to discern the contribution of major carbon sources into biologically informative patterns of isotope incorporation. 
Liquid chromatography-high resolution mass spectrometry LC-HRMS was as previously described with minor modifications (29).Briefly, an Ultimate 3000 UHPLC equipped with a refrigerated autosampler (at 6 °C) and a column heater (at 55 °C) with a HSS C18 column (2.1 × 100 mm i.d., 3.5 μm; Waters, Milford, MA) was used for separations.Solvent A was 5 mM DIPEA and 200 mM HFIP and solvent B was methanol with 5 mM DIPEA and 200 mM HFIP.The gradient was as follows: 100 % A for 3 min at 0.18 mL/min, 100 % A at 6 min with 0.2 mL/min, 98 % A at 8 min with 0.2 mL/min, 86 % A at 12 min with 0.2 mL/min, 40 % A at 16 min and 1 % A at 17.9 min-18.5 min with 0.3 mL/min then increased to 0.4 mL/min until 20 min.Flow was ramped down to 0.18 mL/min back to 100 % A over a 5 min re-equilibration.For MS analysis, the UHPLC was coupled to a Q Exactive HF mass spectrometer (Thermo Scientific, San Jose, CA, USA) equipped with a HESI II source operating in negative mode.The operating conditions were as follows: spray voltage 4000 V; vaporizer temperature 200 °C; capillary temperature 350 °C; S-lens 60; in-source CID 1.0 eV, resolution 60,000.The sheath gas (nitrogen) and auxiliary gas (nitrogen) pressures were 45 and 10 (arbitrary units), respectively.Single ion monitoring (SIM) windows were acquired around the [M-H] -of each analyte with a 20 m/z isolation window, 4 m/z isolation window offset, 1e 6 ACG target and 80 ms IT, alternating with a Full MS scan from 70-950 m/z with 1e6 ACG, and 100 ms IT. Ion pairing reversed phase separation allows chromatographic resolution of intact mono-, di-and tri-phosphate forms of nucleosides Tri-and di-phosphates are known to easily lose a phosphate group in the ESI source to generate di-and mono-phosphates (22).Therefore, it is critical that the mono-, di-, and tri-phosphates of dNTP metabolites are resolved by chromatography.We first determined retention times and confirmed high abundance ions by injection of each individual standard.Mono-phosphates eluted in the 4-8 minute range, di-phosphates in 12-14.5 and triphosphates from 15-onwards.This provided baseline resolution of the major in-source fragments that had the same retention time of the precursor tri-or diphosphate.Standard for dAMP was cross-contaminated with other NTP metabolites and were not included in our analysis for purposes of absolute quantification.To confirm that this method was adequate for detection in cell samples, methanolic extract of IMR90 cells was analyzed and extracted ion chromatograms for all analytes and their available stable isotope labeled internal standards were plotted (Fig. S1).For analytes without matched isotope labeled analogs, the identity of peaks were confirmed by co-elution with pure standards to ensure mis-identification with the in source fragment of related analytes (Fig. S2).Resolution of the in-source fragments was maintained in cell matrix.Operation of the HRMS in full scan alone was insufficiently sensitive to detect all dNTP metabolites, thus in the final method single ion monitoring (SIM) was used, with a 20 m/z window with an offset of 4 m/z around each m/z corresponding to each analyte [M-H] -.This window includes all internal standards used, and allowed for simultaneous mass isotopologue distribution analysis described below. 
We were not able to chromatographically baseline resolve the isobaric dGTP/ATP, dGDP/ADP, dGMP/AMP pairs in our system.Overlap of these nucleotides was noted in other methods, and negative mode fragmentation was reported to be less specific than positive ion mode.In the cases of both dGM/D/TPs, we could resolve the deoxynucleotides with LC-MS/HRMS, but at drastically reduced sensitivity with more limited and complicated isotopologue analysis since the entire nucleotide is not captured by the detected ions (data not shown).Thus, we excluded this from analysis in this method.The concentration of AMP/ADP/ATP is expected to be orders of magnitude higher than the deoxyguanosine metabolites; thus, quantification by this method would reflect by majority AMP/ADP/ATP. Stable isotope dilution improves analytical performance of LC-HRMS based quantification of dNTPs Standard curves were generated for dNTPs except dAMP and dGM/D/TP.Pilot testing indicated that using higher amounts of dNTP internal standards improved detection in the lower range of standard curves, without interference from residual unlabeled impurities.Analysis of curves for dTM/D/P was conducted with isotope dilution or by label-free quantification using peak area around the levels detected in cell samples.For isotope dilution, the simplest model that fit the data with linear least-squares regression around the range detected in 1x10 6 cells (0.97-500 ng/sample) was used, with R 2 values of 0.9979 (no weighting), 0.9973 (1/x weighting), and 0.9933 (1/x weighting) for dTMP, dTDP, and dTTP, respectively.In comparison with the same weightings, label-free quantification gave R 2 values of 0.9977, 0.8258, and 0.8592 (Fig. 2 A,B). Calibration curves for dCM/D/TP were also linear across the range encountered in cell samples with excellent R 2 values of 0.9999, 0.9968, 0.9965 for dCMP, dCDP and dCTP respectively.Since previous methods have used surrogate internal standards (most commonly using 1 or two internal standards to normalize multiple analytes), we tested the utility of dCMP analysis by normalization to either the matched dCMP analog or dCTP.Normalization of dCMP signal area under the curve by dCTP-internal standard area under the curve yielded a poor least squares fit at the same concentration ranges (Fig. 2 C). The signal intensity in the blank containing only isotope labeled internal standards had consistent background with 0 intensity.Therefore, for purposes of comparison to other methods of the limit of quantitation (LoQ), we conservatively estimated the LoQ as the first point that would fall within the experimentally confirmed linear range of the method at around 0.03 pmol (30 fmol) on column for dTMP, dTDP and dTTP.This was more than sufficient to quantify the levels within cells lines tested.Quality controls were created to bracket levels observed in cell studies and from remnant tumor and nontumor pathologic tissue (Table 1).This ranged from 1 ng/sample to 600 ng/sample with pmol amounts reported for on column and pmol per sample for the lowest quality control. 
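A short numerical sketch of the weighted least-squares calibration described above is given below. The amounts and area ratios are invented placeholders; the 1/x weighting matches the weighting reported for the dTDP and dTTP curves, and numpy's polyfit expects the square root of such weights because it multiplies the unsquared residuals.

import numpy as np

# Hypothetical isotope-dilution calibration: amount on column (pmol) vs analyte/IS area ratio.
amount = np.array([0.97, 3.9, 15.6, 62.5, 250.0, 500.0])
ratio = np.array([0.021, 0.080, 0.310, 1.25, 5.10, 10.3])

# 1/x weighted linear least squares.
w = np.sqrt(1.0 / amount)
slope, intercept = np.polyfit(amount, ratio, 1, w=w)

# Back-calculate an unknown sample from its measured area ratio.
unknown_ratio = 2.4
print((unknown_ratio - intercept) / slope)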
To examine the effect of isotope dilution versus label free quantification we analyzed data from IMR90 cells with various short hairpin knockdown of genes found to disrupt nucleotide metabolism (28).This provided a bioanalytical meaningful way to modulate dNTP metabolites within similar cellular backgrounds.For this analysis, the analyst was blinded to identity of each sample (with respect to control versus knockdown and inhibition).Comparison of values obtained from quantification from cells revealed better precision with isotope dilution for dTMP (Fig. 3 A,D), dTDP (Fig. 3 B,E), and dTTP (Fig. 3 C,F).Bland-Altman plots showing the % difference between the estimated value by each method and the averages of both approaches revealed an interesting concentration dependent bias in in all three analytes, with the lower values exhibiting a stronger negative bias in label-free quantification.This bias was especially pronounced in the IMR90 cell lines which had the three lowest values of all dTM/D/TP metabolites.Label-free quantification under-estimated the pools in IMR90 by around 175%, 90%, and 50% for dTMP, dTDP and dTTP.The reasons for this differential bias across nucleotide amounts is unknown, but has implications for the reporting of nucleotide metabolites from metabolomics data. Stable isotope resolved metabolomics of dNTPs is possible at ultra-high resolution Carbon atoms are derived from glucose, CO2/bicarbonate, formate, aspartate, and glycine, with nitrogen atoms coming from glutamine and aspartate.Since none of these are essential amino acids or nutrients they can be derived from the diverse sources of all of these precursors as well.These precursors can be traced via stable isotope labeling strategies which incorporate atoms from a labeled substrate into the metabolic product.Isotopic tracing requires higher sensitivity than quantification as isotopologue analysis requires, by definition, analysis of less abundant isotopologues.Furthermore, tracing with a stable isotope splits the signal intensity across isotopologues as they incorporate the isotopic label.Theoretically, at sufficient mass resolution, simultaneous quantification and isotope tracing can be accomplished with differential neutron encoded labels (e.g., 13 C vs. 15 N) due to the mass defect in the additional neutron in atoms with varying nuclei (30). At sufficient mass resolution, stable isotopes of commonly used tracing atoms ( 2 H, 13 C, 15 N, 18 O, etc.) 
can be resolved from each other.Thereby, neutron-encoded information enables simultaneous metabolic tracing by stable isotope labeling with orthogonal stable isotope labeling used as an internal standard for isotope dilution based quantification (30).Since dNTPs and nucleoside metabolites incorporate C, H, N, and O atoms, and commercially available 13 C, 15 N-labeled pure standards are available for isotope dilution, we examined the ability to simultaneously trace 13 C-labeling via metabolic tracing with 13 C6-glucose with 13 C, 15 N internal standards.This allows an experimental design where the origin of dNTP pools can be quantitatively examined, by quantifying the substrate-product relationship from labeled precursors, as well as the relative importance of de novo versus different salvage or uptake pathways.The limiting factor is resolution of the 13 C and 15 N labels, so we modeled theoretical resolution at different resolving powers.With Orbitrap mass analyzers, resolution falls off with increasing m/z (31), thus the most difficult to resolve isotopologues are the triphosphates with the highest possible number of labels.240,000 resolving power was sufficient to baseline resolve the 15 N1 labeled isotopologue from 13 C1 labels (Fig. 4A-D).A lower resolution setting would be capable of resolving the dNMPs, or utilization of MS/HRMS and analysis of the dNMP product ions (Fig. 4E). We tested this theoretical possibility by a proof-of-principle experiment using isotope dilution absolute quantification in the analysis of a 13 C6-glucose labeling experiment.IMR 90 cells were grown with or without 13 C6-glucose, and with either empty vector control or RRM2 knockdown.We analyzed the dTM/D/TP pool size by isotope dilution based quantification (Fig. 5 A) and the isotopologue enrichment of each analyte (Fig. 5 B,C,D).We integrated the peak area of each isotopologue of dTM/D/TP corresponding to the sequential incorporation of 13 C atoms.After correction for natural isotopic abundance isotopologue enrichment of dTM/D/TP revealed the quantitative incorporation of 13 C labeled glucose in a pattern of mixed sequential M1 and M2 labeling with a large increase in M5 labeling and some M6 labeling.This peak at M5 likely corresponds to the incorporation of 13 C5. Since full scan data was acquired in addition to the targeted SIMs, we re-interrogated the data for relative abundance and labeling into ribose-5-phosphate, the pentose-phosphate pathway product that provides the ribose in dNTPs (Fig. 5 E,F).Labeling of ribose-5-phosphate mirrored that of dNTPs, providing confirmation that we were measuring the incorporation of the 13 C into dNTPs.Recent work has demonstrated the importance of nucleotide tracing.For instance, that response of cancer cells to metformin correlates with nucleotide levels derived from TCA carbon sources but not the pentose phosphate pathway (32).Similarly, resistance to the antimetabolite gemcitabine correlates with glucose carbon flux specifically through the non-oxidative pentose phosphate pathway (33).Thus, tracing of metabolites to nucleotides/dNTPs may determine which patients will respond to metabolic inhibitors and reveal metabolic vulnerabilities for subsets of cancers.However, many studies have not confirmed tracing of metabolites from nucleotide precursors to dNTPs.It will be important in the future to further dissect the regulation of dNTP synthesis using tracing studies in both normal and diseased states. 
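To make the resolution requirement discussed above concrete, the sketch below computes the mass split between the single-label 13C and 15N isotopologues of dTTP from standard monoisotopic masses; the molecular formula C10H17N2O14P3 is an assumption of the sketch. The split is only about 6.3 mDa, so a FWHM resolving power of roughly 76,000 separates the centroids and a few-fold more is needed for baseline resolution, consistent with the 240,000 setting quoted above.

# Monoisotopic masses (u); the dTTP formula C10H17N2O14P3 is assumed here.
m = {"H": 1.0078250, "C": 12.0000000, "N": 14.0030740, "O": 15.9949146,
     "P": 30.9737615, "13C": 13.0033548, "15N": 15.0001089, "e": 0.0005486}

M = 10 * m["C"] + 17 * m["H"] + 2 * m["N"] + 14 * m["O"] + 3 * m["P"]
mz = M - m["H"] + m["e"]                      # [M-H]- of unlabeled dTTP

mz_13C1 = mz + (m["13C"] - m["C"])            # one 12C replaced by 13C (+1.003355)
mz_15N1 = mz + (m["15N"] - m["N"])            # one 14N replaced by 15N (+0.997035)
delta = mz_13C1 - mz_15N1                     # ~0.0063 u

R_fwhm = mz / delta                           # resolving power to separate the centroids
print(round(mz, 4), round(delta, 4), int(R_fwhm))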
There are two major caveats of this method.First, further improvement of this method could be made by chromatographically resolving the remaining isobars (AMP/dGMP, ADP/dGDP, ATP/dGTP).Although we could find a specific fragment on tandem MS/HRMS to differentiate the adenosine nucleotide phosphates from the deoxyguanosine nucleotide phosphates, this reduced sensitivity of the method below that useful for our purposes.Second, although we did not quantify it due to an impurity in the dAMP standard, sensitivity of dAMP was reduced by a high-intensity contaminant ion present across the LC-gradient falling in the same SIM window.This is problematic on our Orbitrap instrument as the sensitivity benefit gained from using the quadrupole for isolation to restrict ions allowed into the C-trap and Orbitrap is negated by a high-intensity background ion within the SIM window when the quadrupole isolation window was sufficient to allow complete isotopologue enrichment analysis. Conclusion Direct quantification and isotope tracing of dNTP metabolites is analytically challenging.Ion pairing reversed phase LC coupled with Orbitrap based high-resolution mass spectrometry provides a platform for simultaneous isotope dilution based quantification of dNTPs with isotope tracing from multiple potential substrates. C/ 15 N isotopologues requires ultra-high resolution mass spectrometry.Theoretical Gaussian resolution (FWHM) with 40 samples per peak is shown at increasing levels of resolving power per nucleotide.Predicted centroid masses for [M-H] -at each resolution are shown for a 1:1 mixture each of (A) 13 C1dATP/ 15 N1dATP (B) 13 C1dTTP/ 15 N1dTTP (C) 13 C1dGTP/ 15 N1dGTP (D) 13 C1dCTP/ 15 N1dCTP and (E) 13 C1dGMP/ 15 N1dGMP. Figure 1 . Figure 1.dNTP metabolism integrates atoms from a number of potential substrates. Figure 2 . Figure 2. Calibration curves of dTM/D/P in cell samples using A) label-free quantification B) isotope dilution.C) Calibration curves of dCMP using dCMP-internal standard (ISTD) or dCTP-ISTD. Figure 3 . Figure 3. Bias in quantification of dTM/D/TP in cell samples using isotope dilution or label-free quantification.Quantification of A) dTMP B) dTDP C) dTTP.Bland-Altman plot showing % difference in isotope dilution versus label-free quantification for D) dTMP E) dTDP and F) dTTP. Table 1 .Supplemental Figure 1 .Supplemental Figure 2 . Analytic figures of merit for quality control samples bracketing cell and tissue levels of dNM/D/TPs.Extracted ion chromatograms of deoxynucleotides with their stable isotope labeled analogs.Extracted ion chromatograms of deoxynucleotides.Where multiple peaks are observed in the chromatogram, the correct peak as determined by co-elution with a pure standard are indicated by labeling the retention time of the empirically assigned peak. . CC-BY-NC 4.0 International license a certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity.It is made available underThe copyright holder for this preprint (which was not this version posted October 29, 2018.; https://doi.org/10.
2023-09-23T17:31:49.563Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "29a886008f6b01e09dc350890b16e5737e3ec772", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2018/10/29/454322.full.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "29a886008f6b01e09dc350890b16e5737e3ec772", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry", "Biology" ] }
46930460
pes2o/s2orc
v3-fos-license
The importance of the exposome and allostatic load in the planetary health paradigm In 1980, Jonas Salk (1914–1995) encouraged professionals in anthropology and related disciplines to consider the interconnections between “planetary health,” sociocultural changes associated with technological advances, and the biology of human health. The concept of planetary health emphasizes that human health is intricately connected to the health of natural systems within the Earth’s biosphere; experts in physiological anthropology have illuminated some of the mechanisms by which experiences in natural environments (or the built environment) can promote or detract from health. For example, shinrin-yoku and related research (which first emerged from Japan in the 1990s) helped set in motion international studies that have since examined physiological responses to time spent in natural and/or urban environments. However, in order to advance such findings into planetary health discourse, it will be necessary to further understand how these biological responses (inflammation and the collective of allostatic load) are connected to psychological constructs such as nature relatedness, and pro-social/environmental attitudes and behaviors. The exposome refers to total environmental exposures—detrimental and beneficial—that can help predict biological responses of the organism to environment over time. Advances in “omics” techniques—metagenomics, proteomics, metabolomics—and systems biology are allowing researchers to gain unprecedented insight into the physiological ramifications of human behavior. Objective markers of stress physiology and microbiome research may help illuminate the personal, public, and planetary health consequences of “extinction of experience.” At the same time, planetary health as an emerging multidisciplinary concept will be strengthened by input from the perspectives of physiological anthropology. Background "Sophisticated technology, intended to advantages for humankind, sometimes has had unforeseen adverse effects on human health...[environmental degradation] threatens human and planetary health. The latter must also be added to the consideration of biological and sociocultural influences on health throughout the human life span" [1]. Jonas Salk, MD, 1980 In the quote above, found within a nearly 40-year-old medical anthropology textbook, Jonas Salk introduces the term "planetary health" into multidisciplinary research. Although best known for developing the vaccine that helped to eradicate polio, Salk spent large portions of his scientific career championing the idea that human health is dependent upon biodiversity and healthy ecosystems. Moreover, he argued that the human body was an extension of the functioning whole of the external environments-including its biodiversity, social policies, and cultural practices: "We must see ourselves as part of the ecosystem. Where we were once a product of evolution, we are now part of the process" [2]. In underscoring planetary health in medical anthropology, Salk was referring to the health of the Earth's natural systems as an upstream driver of human health and vitality. He emphasized the need to study the interconnected biological (hence, physiological), social, and cultural aspects of health from the planetary health perspective. 
While the term planetary health has since been used by many different scientific, health, and environmental advocacy groups-each generally referring to the health of ecosystems within the biosphere [3]-the 2015 Lancet Commission on Planetary Health report formally defined the planetary health paradigm as "the health of human civilization and the state of the natural systems on which it depends" [4]. Put simply, there is no human health without planetary health. Of high-level relevance to physiological anthropology, the Lancet Commission on Planetary Health report also emphasizes integration of biological, social, and cultural aspects of health in the modern environment. Further, the report "accepts the complexity and non-linearity of the dynamics of natural systems" and underscores the need to study potential health benefits derived from the maintenance and restoration of natural systems. Physiological anthropology will play an important role in the emergent planetary health paradigm; indeed, for the last several decades, physiological anthropology has been a leading contributor in understanding the physiological consequences of modern pressures placed upon humans. Specifically, physiological anthropology has focused on the ways in which the modern environmentwith its high technology, dominance of ultra-processed foods, and diminished human contact with biodiversity-can impact upon normal physiological functioning; understanding the gulf between the psychological and physiological requirements of individuals-and the (in)ability of the modern environment to help fulfill those needs-is central to the aims of physiological anthropology [5]. Since biological responses are a product of our ancestral past, signs of metabolic dysregulation can unveil an evolutionary mismatch that otherwise contributes to a global epidemic of NCDs. Roadmap to the current review Here in our narrative review and commentary, we illustrate the importance of physiological anthropology in the context of planetary health. In order to emphasize this connection, we first discuss "extinction of experience" with nature, a term which loosely describes the loss of experiential contact with biodiversity and natural environments. The term is related to other theories and phrases such as "shifting baseline syndrome" and "environmental generational amnesia" which propose that individuals gauge their perceptions (of, for example, biodiversity losses or environmental degradation) from their own experiences in the surrounding environment; it is difficult to truly appreciate "what once was" (that is, an environment formerly rich in biodiversity), and thus, a "baseline" awareness of the health of nature by successive generations is reset in a way that underestimates the full extent of degradation. Next, we focus on the exposome and recent findings in the science of allostatic load, underscoring how the total lived experience of individuals-including missed opportunities and experiences-influences health at the personal, public, and planetary scales. Alterations in biological responses to the modern environment-immune and nervous system functioning in particular-can drive low-grade inflammation which, in turn, can compromise mental health. However, under the rubric of "extinction of experience," the extent to which humans in westernized and industrialized nations are aware of connections between health of self and biodiversity may be increasingly obscured. 
Thus, it is our contention that progress toward the goals of planetary health is predicated upon a greater understanding of how collected experiences in the natural environment influence physiology and behavior (Fig. 1).
Fig. 1 How does accumulated experience (or lack thereof) in the modern environment influence human physiology and help illuminate the links between personal, public, and planetary health?
Extinction of experience
"I would like to say / Coyote is forever / Inside you. / But it's not true." Gary Snyder, "The Call of the Wild", 1974
Salk maintained that scientists should look toward the arts and humanities in order to identify fundamental questions worthy of scientific pursuit [6]. In this context, we highlight the work of scholar, environmentalist, and Pulitzer-prize-winning poet Gary Snyder; greatly influenced by his many years living in Japan, studying the rich biodiversity of the land, Snyder's versification through the 1950s-1960s celebrated the ways in which our ancestral experience in natural environments could reassert itself (in the form of vitality and joy) when primed. This, according to Snyder, was most obvious when an individual was once again immersed in nature [7]. However, in his later Pulitzer-winning book Turtle Island (1974), Snyder expressed concern about the trans-generational loss of experience with nature, and the subsequent ability of nature's deep (albeit hidden) resonance-the coyote as a metaphor-to survive in post-modern humans [8,9]. Several years later, scientist Robert Pyle coined a term for this hypothesis-"extinction of experience." Writing in Horticulture (1978), Pyle stated that the disappearance of neighborhood biodiversity was a threat to the "collective psyche" and that it represented "the loss of opportunities-the extinction of experience"; in particular, Pyle was concerned about vanishing opportunities for children, "the ones whose sensibilities must be touched by the magic reaction with wildlife if biologists, conservationists and concerned citizens are not to become endangered themselves. What is the extinction of the condor to a child who has never seen a wren?" [10]. The concept of extinction of experience has been expanded upon, but the primary theme remains the same-loss of direct, personal, cognitive-emotional contact with wildlife and elements of the natural world could lead to disaffection, apathy, and irresponsibility in behaviors toward the environment [11,12]. At the same time, a related hypothesis was expanding; the hygiene hypothesis and its variants proposed that diminishing early life exposures to microbes-due to a more "sanitized" environment, antibiotic use, smaller family sizes, and lower exposure to bacteria in foods and the overall environment-could compromise normal training of the immune system. The recent biodiversity hypothesis updates and unifies this proposal by emphasizing that biodiversity losses at the neighborhood scale could translate into loss of contact with microbiotic diversity. Specifically, "biodiversity loss leads to reduced interaction between environmental and human microbiotas. This in turn may lead to immune dysfunction and impaired tolerance mechanisms in humans" [13,14]. However, research on the more biologically oriented biodiversity hypothesis and on the more psychologically oriented extinction of experience hypothesis has largely remained separated in silos.
In the twenty-first century, there have been several studies which support the idea that adults and children in westernized, industrial, and technologically mature nations are spending more time indoors [15,16] and less time in natural environments [17,18]. There are also hints that declines in local biodiversity and environmental degradation are associated with greater time spent indoors [19]; since lack of time spent outdoors is associated with chronic disease [20], this could present a double burden of increased risk of NCDs and decreased the awareness of further threats to local and global biodiversity. The best evidence of extinction of experience has emerged from Japanese research; using a range of 21 different neighborhood flowering plants as a measure of interaction with visible aspects of biodiversity, researchers have shown an age-related, cross-generational decline in childhood experiences with nature [21]. International research demonstrates that such neighborhood changes may be compounded by cultural changes in media representations of biodiversity which focus only on a miniscule sliver of well-known species [22][23][24]. Extinction of experience is, of course, worrisome from a conservation perspective; the ability to develop an emotional connection with the natural world (measurable with the psychological construct of nature-relatedness [25])and subsequently develop pro-environmental attitudes and behaviors-is dependent upon experience [26]. Nature relatedness (see also nature connectivity, nature connectedness) allows researchers to determine individual levels of awareness of, and fascination with, the natural world; nature relatedness also captures the degree to which subjects in research studies have an interest in making contact with nature. From the planetary health perspective, nature relatedness is positively associated with empathy, pro-environmental attitudes, and humanitarianism (and negatively with materialism) [27][28][29]. However, nature relatedness is also highly relevant to physiological anthropology, and human biology in general, because a substantial body of research has linked appreciation of (and relatedness to) the natural environment with general health and mental wellbeing [30,31]. Sitting in parallel to research on the psychology of nature relatedness-unintegrated into the planetary health paradigm-is a growing body of in vivo research involving physiological endpoints which demonstrate that time spent in natural environments might be protective against allostatic load (described in more detail shortly). While there are now many studies in this realm, it is perhaps best exemplified by shinrin-yoku (now generally referred to in Japanese studies as simply "forest medicine" or "forest therapy") research; shinrin-yoku loosely translates from Japanese as forest-air bathing or "absorbing the forest air" and places emphasis on the entire forest experience wherein the individual tales in all the "components emitted from the forest" [32]. Studies under the rubric of shinrin-yoku have shown that spending time in a forest environment can beneficially influence stress physiology, markers of inflammation, immune defenses, blood pressure, and heart rate variability [33][34][35][36][37][38][39][40]. 
To appreciate the contribution of shinrin-yoku and related research, consider that a 2018 systematic review identified a total of 43 studies which measured physiological and psychological stress responses to outdoor environments-nearly half of the studies were conducted in Japan [41]. Although limited by small sample sizes, these and other studies with physiological endpoints provide potential mechanistic pathways (e.g., immune activation, oxidative stress, blood pressure, cortisol response) for the associative links between green space and health in large epidemiological studies [42]. Moreover, these studies can be viewed in the context of studies which link markers of biodiversity with mental and physical health [43,44]. On the other hand, relatively rapid environmental degradation and/or visible losses in species (e.g., the loss of millions of ash trees due to the invasive emerald ash borer) are linked to declines in physical and mental health [45][46][47]. Extinction of experience research also forces questions concerning shifting cultural norms and time use; in other words, if time spent in outdoors in nature is being displaced, then how specifically is that time displaced? These are connected conversations. For example, excess screen time and problematic smartphone use is linked with lower levels of personal nature relatedness [48]. It is also important to point out that "extinction of experience" is not exclusive to psychological losses in contact with biodiversity (or even biodiversity per se); it could be argued that for children in westernized nations, the loss of whole plant foods (relatively unprocessed, high in fiber) in the dietary and the massive encroachment of the "invasive species" known as ultra-processed foods (which now dominate the nutritional landscape, like weeds, displacing nutrient-dense foods) is also an extinction of experience [49]. Moreover, from the biological perspective, urbanization and loss of contact with biodiversity [50]-as well as related changes to contact with diversity of the microbiome [51,52]-could be viewed as an "immunological extinction of experience." Since the health benefits derived from experiences in natural environments may be determined by baseline nature relatedness [53], researchers will need to examine the physiological consequences of the interplay between the presence (use) of certain technologies and the absence (disuse) of natural environments and biodiversity. Thus, the challenge for physiological anthropology in the context of planetary health is to help bridge the knowledge gaps between three large, research-based silos-that is, (1) the psychological and cognitive aspects of nature relatedness and the loss of experience, (2) the physiological pathways involved in the risk of NCDs, and (3) the ways in which human health and wellbeing are, emotionally and biologically, predicated upon biodiversity and the health of the Earth's natural systems. It is our contention that an "exposome perspective" will help break down silos and incorporate the ongoing work of physiological anthropology into planetary health. The exposome refers to the science of accumulated "exposures" (meaning both emotional experiences and physical/ sensory exposures) over time. As we explain below, this view emphasizes that genes alone cannot explain health disparities and underscores that each individual exposure (e.g., airborne particulate matter or fast-food, beneficial microbes, or phytoncides) does not occur independent of the total environment. 
Moreover, from the physiological perspective, the most direct path to understanding the connections between personal and planetary health-gains and losses from extinction of certain experiences and the birth and flourishing of others-may be to examine allostatic load (the physiology associated with the "wear and tear" of stress). We will elaborate on this shortly.
Exposome
"Human biology should be primarily concerned with the responses that the body and the mind make to the surroundings and ways of life…little effort has been made to develop methods for investigating scientifically the interrelatedness of things. Epidemiological evidence leaves no doubt that many chronic and degenerative disorders which constitute the most difficult and costly medical problems of our societies have their origin in the surroundings and in the ways of life rather than in the genetic constitution of the patient. But little is known of these environmental determinants of disease" [54]. Rene J. Dubos, PhD, 1969
While he did not coin the term "exposome," microbiologist and environmentalist Rene Dubos (1901-1982) urged scientists to study the response of the "total organism to the total environment" [55]; Dubos, of course, not only celebrated the value of single-variable studies but also warned of their limitations in the context of chronic diseases, environmental degradations, and the complexities of the human condition [56,57]. Today, the total accumulated environmental exposures (both detrimental and beneficial) that can help predict the biological responses of the "total organism to the total environment" over time are referred to as the exposome [58]. The temporal aspect of exposome science is important because the physiological responses of the human organism are a product of accumulated experiences and may differ across time depending on shifting environmental variables. The interpretation of stress physiology in the here-and-now requires an understanding of the interplay between time scales of stress, including but not limited to early life stress, acute and chronic stressors, experience of daily hassles, and the aggregate of life events [59]. The term exposome is now an essential feature of the planetary health discourse because it helps to demonstrate why genome-wide association studies cannot explain the reasons for health disparities; it also helps us understand why NCDs are increasing over time and in non-random ways. In particular, the burden of NCDs, especially in westernized nations, is most often shouldered by disadvantaged populations [58]. Furthermore, the exposome view of total health encompasses the World Health Organization's interpretation of the word health; that is, not simply the absence of specific disease criteria, but rather the fulfillment of human potential. Although genetics matter, health includes a state of complete physical, mental, and social wellbeing-it is not a genetic trait. Rather, the study of physiology related to health promotion and/or risk is better understood when it is placed into the context of the total exposures experienced by humans-some positive, some negative-and their interactions with genes over time [60]. From a life-course perspective, exposome science emphasizes that certain windows of vulnerability (for disease risk) and opportunity (for health promotion) are especially important [61].
In the context of physiological anthropology, this means that socioeconomic advantage or disadvantage can produce differing biological responses to specific "beneficial" or "detrimental" exposures-e.g., spending time in nature or consuming a fast-food meal-depending on many other background variables. The interplay of these potentially beneficial and detrimental experiences is central to the concept of resiliency; as researchers explore how and why positive adaptation and outcomes occur in the face of adversities, and why certain individuals (who score high on validated resiliency scores) seem protected against the negative health-related consequences of adverse events, it will be necessary to tease apart the ways in which resiliency is built in the first place [62]. The available evidence allows for the hypothesis that exposure to elements of natural environments-e.g., microbial-can play a role in resiliency. Indeed, early-life exposure to diverse microbes found in natural environments is part of the normal "training" of the immune system, and it may decrease vulnerability to later life stress-associated disorders; for example, researchers have found that urban upbringing without pets (vs. rural upbringing around animals) is associated with compromised resolution of systemic immune activation (low-grade inflammation) following an experimenter-induced social stress [63]. To further appreciate the saliency of how accumulated experiences influence physiology in the total environment, we can look to research on allostatic load. Allostatic load Human responsiveness to environmental threats has been shaped by experience over millennia. In particular, an elegant and active process of allostasis-the normal initiation, orchestration, and termination of neuroendocrine, metabolic, autonomic, and immune "mediators"helps ensure a physiological state which supports survival. Acutely, these multisystem physiological responses to stress are, under normal circumstances, effectively initiated, maintained, and extinguished without harm. However, with repetitive and/or prolonged stimulation in the modern environment, these compensatory physiological responses can lead to metabolic disturbances and cellular damage. The collective toll of this physiological wear and tear-including the associated consequences of unhealthy lifestyle choices which compound the physiological dysregulation-is known as allostatic load [64]. Over time, the combined disturbances of allostatic load leads to allostatic overload and contribute to altered behavior and disease risk [65]. Epidemiological research indicates that links between lower socioeconomic position and disease mortality are mediated by allostatic load [66]. In other words, socioeconomic advantage is associated with lower allostatic load, which is in turn link to lowered risk of mortality. Such findings are supported by volumes of research indicating that disadvantage is accompanied by chronic psychosocial stress and daily hassles, lower optimism (an asset in physical and mental health), and significantly higher biomarkers of metabolic dysregulation, inflammation, and oxidative stress [67][68][69][70][71][72][73][74]. Indeed, within westernized-industrialized nations, allostatic load appears to bear witness to the ways in which socioeconomic disadvantage "gets under the skin" and "into the gut," ultimately decreasing longevity [66,75]. 
These links with socioeconomic disadvantage have been found at the individual and neighborhood levels; for example, allostatic load persists in low-income neighborhoods even after adjusting for individual-level income. Beyond income, anxious arousal is linked to allostatic load, as well as other lifestyle factors such as fast-food consumption, exercise habits, and smoking [76]; moreover, neighborhood-level income is associated with better physical and mental health over time [77]. Since allostatic load transcends purely genetic influences [78], it reinforces the exposome perspective and underscores the need to consider the context in which exposures are experienced. Moreover, it also allows for the introduction of epigenetic research and opportunities to determine how exposures (age, diet, physical activity, time in nature, positive and/or negative emotions, and accumulated experiences) modify DNA methylation, which in turn, alters gene expression [79]. Putting it all together, future directions The biological underpinnings of the exposome perspective indicate that the extent to which an organism can buffer against the detrimental physiological consequences of particular exposures will determine the risk of NCDs [58]. Hence, it is essential to understand how certain psychological "assets"-such as nature relatedness, positive emotions, mindfulness, and optimism-are accumulated and employed to act as physiological buffers in the modern environment. The available research, outlined above, suggests that scientists need to look more closely at the total lived experience of individuals, and the total environment which surrounds them. From the planetary health perspective, this means looking at the presence (or absence) of natural environments and specific features of the built environment that might offset (or contribute to) allostatic load. Looking at research from the exposome perspective allows researchers to consider the "big picture" and interconnectivity of humankind's most pressing problems. For example, living closer to green space and having greater access to safe, local parks, and open space is associated with health in general and mental health in particular [80]. However, the presence of green space may be a surrogate marker for healthier dietary habits, lower density of fast-food outlets, and better access to healthy foods [81][82][83]. Consider also the psychological asset of optimism which we have alluded to several times; optimism is generally defined as positive outcome expectancy for future events across life domains. Optimism has been linked to lower body mass index and lower rates of chronic disease and all-cause mortality [84][85][86][87][88]. In the physiological realm, optimism is linked to optimal metabolic markers of cardiovascular health, lower inflammatory cytokine and C-reactive protein levels, and lower inflammatory response to experimental stress [89][90][91]. Research suggests that optimism is only about 25% heritable, leaving plenty of room for the influence of the total lived experience over time; indeed, higher levels of optimism are associated with socioeconomic advantage [92,93]. Since optimism is malleable [94], experts in physiological anthropology might query on biological links between optimism, nature relatedness, and extinction of experience. For example, higher levels of optimism are associated with protection against the detrimental effects of environmental toxins (this appears to operate through epigenetic mechanisms) [95]. 
Scientists are beginning to tie these strands together; for example, researchers have found that close residential proximity to vegetated land cover is associated with lower allostatic load and depression [96]. In addition, researchers have begun to establish links between residential (or school) proximity and green vegetation-and degrees of neighborhood urbanization-with exposure to diverse, non-harmful microbes that may influence health and behavior [97][98][99][100]. However, these studies are missing key bits of information; how do measurements on the psychological construct of nature relatedness-and responses related to extinction of experience at the neighborhood level-match up with the objective markers of allostatic load, epigenetics and the microbiome? Does the age-related decline in direct experiences with neighborhood biodiversity manifest physiologically (allostatic load), and if so, are there connections between allostatic load and nature relatedness? These are essential questions for the planetary health paradigm. So far, the research focus on the human emotional connections to nature/local lands in the context of planetary health (exemplified by content within the aforementioned and highly cited Lancet Commission on Planetary Health report) has been on the real and potential mental health consequences of environmental degradation. Although there is good research on the psychological aspects of pro-environmental and pro-social beliefs and behaviors, its place in the discourse of planetary health is minimal. Moreover, the wealth of information gathered in the field of physiological anthropology (and related disciplines) on differential physiological responses to natural and built environments (e.g., shinrin-yoku, forest bathing research) has not penetrated the planetary health discourse. In addition to nature relatedness, the inclusion of other psychological constructs in the literature, especially those investigating place attachment measurements (e.g., topophilia scales) [101,102], will help provide a better understanding of how physiological endpoints might match individual and community-level emotional connections to the land. We suspect that the absence of cohort studies which simultaneously measure deep aspects of socioeconomic histories, allostatic load (and other objective markers such as the microbiome), residential proximity to "assets" (green space) and "liabilities" (clustering of fast-food outlets), along with measures of positive psychology/nature-relatedness/environmental attitudes is a barrier to multidisciplinary breakthroughs in planetary health. Available research indicates that the loss of experience (especially immunological) can shape acute biological responses in context over time; as we have pointed out previously, these are intertwined with income, education, race, immigrant status/segregation, social cohesion, evaluations of neighborhood esthetic quality, and/ or aspects of neighborhood safety (both real and perceived) [103]. While constituents of a diet which simultaneously promotes human and planetary health is generally agreed upon [104][105][106], less is known concerning the ways in which nature relatedness, optimism, and pro-environmental attitudes/behaviors and allostatic load intersect with adherence to such a diet. 
Macro-scale, multi-factorial, multi-indicator considerations such as the exposome, allostatic load, and planetary health present enormous challenges; it is easy to criticize such efforts because they include an essentially unlimited array of variables. While single-variable studies remain essential to scientific knowledge, large cohort studies are enjoying remarkable advances in "omics" research; clinically meaningful data sets are emerging from the analysis of functional proteins (proteomics), metabolites (metabolomics), gene expression (epigenomics, transcriptomics), and genetic influences on specific drugs or nutrients (pharmacogenomics) [107]. For example, large datasets in the area of the microbiome have provided clinically relevant information which may predict an individual's physiological responses to foods [108]. Thus, the ability of researchers to match environmental attitudes, nature relatedness, and other psychological indicators (based on experience or lack thereof) with important aspects of physiology at the individual and community-level is on the horizon [58]. As researchers begin to incorporate research on exposures and experiences into the planetary health perspective-including studies on physiological endpoints, resiliency, and allostatic load-we will also learn more concerning realistic expectations concerning the role of natural environments and health outcomes; access to green space is important, but there are many factors that push health inequalities and social injustices, including those that may have far more corrosive effects on health. We may have unwittingly given the impression that the health implications of experiences, exposures, and allostatic load are linear-that is, where more of a certain sort of experience/exposure is better or the more of another sort of experience or load is worse. These are not aggregate responses with a universal dose-response relationship; indeed, researchers are already discovering that the potential benefits of nature are not found along a neat continuum of benefit [109,110]. Finally, this entire conversation can be viewed through an evolutionary lens. What we need to eat-as opposed to the ultra-processed foods that surround us-is what we are adapted to eat. The exercise we need is obviously part-and-parcel of the physical activity to which we are adapted; corals and mussels need not count steps! So, too, our requirements for the natural settings to which we are adapted can be viewed, scientifically, from the evolutionary perspective and can help guide future research questions. It allows us to ask "why do we humans need nature to be whole?" in modernity. The answer is blowing in the wind, complete with microbes, natural light, and phytoncides, because we are adapted to it as a part of us, and us as a part of it-for all the same reasons we need to breathe the atmosphere native to the planet that generated us. Conclusion Scientifically, the grand challenges of our time-environmental degradation, a global non-communicable disease (NCD) epidemic, gross biodiversity losses, climate change, health and other socioeconomic inequalitiesare adisciplinary. In other words, these challenges are overlapping, and their causative complexities suggests that they will not be solved by linear research which otherwise remains in silos. 
The extinction of experience perspective suggests that each generation may accept the inherited state of their environment with a greater sense of "normalcy"; while experts in biodiversity conservation have a keen interest in extinction of experience research, a greater understanding of its physiological underpinnings seem necessary. In our narrative review and commentary, we have pointed to research on extinction of experience, nature relatedness, and the science of allostatic load to argue for a stronger presence of physiological anthropology in the planetary health paradigm. Over time, the burdensome biological consequences of detrimental exposures (and absence of beneficial exposures and psychological assets) will press upon those with higher allostatic load, translating into a biologically corrosive allostatic overload. While physiological anthropology has made tremendous contributions to the understanding of mechanisms that help explain the ways in which experience in natural environments (or exposure to individual constituent parts of nature) promote health, many gaps remain. In particular, a more persuasive argument for the connections between personal, public, and planetary health could be made via more detailed understanding of the biological pathways between nature relatedness, changing levels of local biodiversity, and allostatic load. The prospect of personalized medicine test results (based on physiological responses and large datasets) may provide much-needed incentives to motivate individuals to change lifestyle behaviors that are in the interest of personal and planetary health. The challenge is to illuminate the direct links between elements of natural environments with measurable parameters of human health; having "lab results" in hand may help individuals, communities, clinicians, and policy-makers to understand the direct lines between personal, public, and planetary health. In the meantime, the available evidence which supports the biodiversity hypothesis is not calling for a "back to nature" movement, but rather stepping "forward with nature" in the urbanized environment. With the momentum initiated by the 2015 Lancet Commission on Planetary Health report (now cited over 300 times on Google Scholar), the counsel of Jonas Salk to bring planetary health into alignment with the biological and socio-cultural objectives of anthropology seems wise. At the same time, the multidisciplinary effort of planetary health (adisciplinary in nature, planetary health cannot be viewed as a single discipline) should draw upon the expertise of professionals in physiological anthropology. In the evidence-informed practice, clinicians seek intervention studies (with physiology and controls in mind) to help guide recommendations; so, too, policy-makers need to make decisions based on the best available evidence. With each turn of the Earth, the grand challenges of our time loom larger-there is no time to waste. NCDs : Non-communicable diseases Authors' contributions SLP developed the commentary, project oversight, research analysis, and approved the final manuscript. TH assisted with research interpretation and input of early origins, life-course perspectives. ACL provided the research analysis and developed the manuscript draft. DLK is responsible for the commentary oversight, research interpretation, critical review of manuscript, and input of public health perspectives. All authors read and approved the final manuscript. 
Ethics approval and consent to participate Not applicable Competing interests SLP reports the following: Scientific Advisory Board and speakers fees from Danone Nutricia, Schiphol, Netherlands and Nestlé Nutrition Institute, Lausanne, Switzerland; consultancy fees from Bayer Dietary Supplements Division, Whippany, NJ, USA; speakers' fees from Health World Inc., Queensland, Australia; and royalties from a trade paperback which discusses the microbiome. ACL has received consultancy fees from Genuine Health, Toronto, Canada; speakers' fees from Health World Inc., Queensland, Australia; and royalties from a trade paperback which discusses the microbiome. The other authors declare that they have no competing interests. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2018-06-05T17:19:39.022Z
2018-06-04T00:00:00.000
{ "year": 2018, "sha1": "4ad36ff5f5dbc3e18334cad8273b66eefb2877b4", "oa_license": "CCBY", "oa_url": "https://jphysiolanthropol.biomedcentral.com/track/pdf/10.1186/s40101-018-0176-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ad36ff5f5dbc3e18334cad8273b66eefb2877b4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
225824833
pes2o/s2orc
v3-fos-license
Mathematical Modeling and Simulation of the Development of Fires in Confined Spaces

Mathematical models of fire spread in confined spaces - in underground garages and in buildings - are described. Integral methods and computational fluid dynamics methods are used. The chapter presents the results of fire simulations using the software Fluent, which employs Reynolds-type turbulence models, and the Fire Dynamics Simulator (with the PyroSim graphical interface), which uses a large eddy simulation model to describe turbulence. For both cases, pictures of the spread of fire and smoke over time are presented for the atrium of an administrative building and for a five-story building of the TUS.

Introduction

Mathematical modeling and numerical simulation of fires are an essential part of the solution of important problems related to fire safety and to the analysis of fire development when investigating its consequences. The methods that are used must have the necessary accuracy and reliability and must stay as close as possible to the physical picture of the processes. A real fire, as is well known, is an uncontrolled combustion process, complex enough and difficult to interpret mathematically. This is due to its nonstationarity and three-dimensionality, which complicate the modeling of the heat and mass transfer processes observed in it. In fires inside underground garages, buildings, and rooms, the development of the fire is accompanied by a change in the chemical composition and parameters of the combustion products. This chapter gives two different approaches to dealing with this complexity and to solving the problem. In the first part, a relatively simplified technical solution of a new system for preventing the spread of fires in underground garages is given and described in detail. The second part deals with the basic mathematical apparatus used in the CFD (Fluent) and FDS software. The results of two fire simulations made by the authors with Fluent and with FDS using the PyroSim GUI are presented [1,2].

Fire extinguishing system in large underground garages: integral methods for investigation

In the present part, a method that is simple from a technological point of view is offered for the solution of this complex problem. It is suggested that the cars parked in the garage be isolated in pairs by a thick curtain of water that is activated when burning arises. In this way, the solid noncombustible barriers that would otherwise be needed for insulation are replaced [3-6].

Operating principle

Referring to Figure 1, the cars are placed so that there is enough distance between the pairs for the implementation of water curtains. In case of burning, an upward convective flow is formed over the car because of the difference in density between the products of combustion and the surrounding air. This stream is proportional to the lift (buoyancy) force (Eq. (1)), where f is the area of the fire source and dx is an elementary length in the vertical direction. The strength of the convective updraft is characterized by the Archimedes number (Eq. (2)), where d_h is the hydraulic diameter of the fire source and u_0 is the initial value of the velocity of the upward flow. The velocity is determined according to [6] (Eq. (3)), where Q, kW, is the power of the fire. The convective flow that is formed is shown in Figure 2. The flow can conditionally be divided into the following zones. Convective flow is formed in zone I (Figure 3), where the ambient air enters the fire zone from all directions, is heated, and turns vertically upward.
The second zone is a free convective flow that continues until the jet reaches the ceiling of the room, where the flow changes character (zone III). In this zone, the jet is transformed into a radial semi-confined stream and spreads over the garage ceiling (zone IV). The system includes fire sprinklers of two types: quick-response and standard sprinklers. When the convective flow reaches the garage ceiling, its temperature activates a quick-response sprinkler and the burning car is flushed with a water spray. Thus begins the process of extinguishing the fire in its initial stage. Further, propagating as a radial semi-confined jet, the flow reaches the "standard"-response sprinklers that feed the water curtain [7,8]. This stage is defined as the isolation of the burning cars from the surrounding area, so that no other pairs are affected.

Mathematical model of the convective non-isothermal jet

For the solution of the task, an integral method according to [9,10] is used. The governing equations are as follows [11-13]:
• for the momentum flux, the streamwise change of the integral momentum flux, d/dx ∫ ρ u² (πy)^j dy, is balanced by the integral Archimedes buoyancy (Eq. (4));
• for the enthalpy flux, the integral flux of excess enthalpy, ∫ ρ u Δh (πy)^j dy, is conserved along x (Eq. (5));
• for the vertical upward mass flow, an equation for d/dx ∫ ρ u (πy)^j dy is written (Eq. (6)).
A simple solution can be obtained if Eq. (5) for the enthalpy is supplemented by a linear dependence for the widening of the jet, b = cx. On the right-hand side of Eq. (4) stands the Archimedes buoyancy. The meaning of the symbols is as follows: u is the jet velocity; y is the transverse coordinate; ρ is the local density; ρ_env is the density of the environment; and Δh is the excess enthalpy of the stream. The exponent j signifies a plane jet at j = 0 and an axisymmetric jet at j = 1. The coordinate x is directed vertically upward. Density and temperature are related through the equation of state, ρ = p/(RT), where p is the pressure of the environment, R is the gas constant, and T is the absolute temperature. Similarity of the transverse distributions of the velocity and the density (temperature) is assumed [1,2], and solving Eqs. (4) and (6) leads to the parameters of the upward convective stream:
• the velocity of the upward stream;
• the temperature difference;
where D_0 is the initial diameter of the heat source of the fire (the burning car).

These values correspond to the case x̄ = x/D_0 ≥ 3 to 3.5. For an adopted fire size of a burning car D_0 = 0.5 m and heights H = 3 to 4.5 m, x̄ in the garage will always be greater than the above values. Because of the relatively short distance to the ceiling and the high power of the fire (the accepted conditions are Q = 1500 kW and T = 600 K), the velocity and the temperature of the rising convective stream do not change significantly. The initial velocity calculated by Eq. (3) is u_0 = 8.2 m/s, and the time for the convective stream to reach the ceiling at different garage heights is given in Table 1. This means that in less than 1 s the sprinklers over the burning car will be activated and the extinguishing stream will flow over the burning car.

The expansion (increase in thickness) of the jet with height can be determined from the jet-widening expression. The density of the jet in the initial section is defined by Eq. (8): at R = 287 J/(kg·K), T_0 = 600 K, and p = 10^5 Pa, the result is ρ_0 = 0.58 kg/m³. The density of the environment at the same pressure and a temperature T_env = 293 K is ρ_env = 1.19 kg/m³. With these densities, the widening of the jet in the present case can be evaluated; for the different heights in the garage, the parameter b is given in Table 2. The last row of Table 2 gives the extension of the isothermal jet (T_0 ≈ T_env).
Obviously, the widening of the non-isothermal convective flow is only slightly greater than that of the isothermal one. On reaching the ceiling, the vertical convective stream is transformed into a radial jet (Figure 4). Due to the weak widening of the jet and the short distance to the ceiling, the mass flow does not increase significantly, and the temperature and density are essentially unchanged over the relatively short run along the ceiling. The jet has therefore retained its temperature and density, and its velocity may be determined according to [14]; for u_0 = 8.2 m/s this gives u_max = 7.2 m/s. It is assumed that the starting size of the radial jet is equal to that obtained in Table 2. The width of the radial jet b_0 is determined by the flow rate Q at the cross-section where the flow turns. The flow rate is the sum of the initial flow rate Q_0 and its increase with height due to entrainment (suction) of air from the environment. The flow rate of the entrained fluid is considered proportional to the square of the relative increase in the width of the jet, ((b_1 - b_0)/b_0)², and to the distance x divided by the duration of the process Δt. The total flow rate is thus obtained as the sum of the normal and entrained flow rates, where Q_0 = u_0 π d_n²/4; for d_n = 0.5 m and u_0 = 8 m/s this gives Q_0 ≈ 1.57 m³/s. The flow rate for the respective garage heights x = 3, 3.5, 4, and 4.5 m is shown in Figure 5, where it is defined by the relationships given in Eqs. (12) and (13). According to Figure 4, it is assumed that D_0 = b_1, which is already known, and from the flow rate in Eq. (16) the initial width of the radial jet b_0′ is calculated (Figure 6). The cross-section of the radial jet as a function of r is determined by Eq. (19), where r is the current radius and b_0 is the width of the jet at the corresponding r. Since the resulting stream is a wall jet with a wall boundary layer whose thickness is approximately 0.1 b_0, Eq. (19) can be recast accordingly. The width b_0 is calculated by Eq. (12), and for the case of Eq. (13), by replacing x with r; this gives b_0 = 0.163 r. When this is substituted in Eq. (19), the average velocity along the ceiling of the room as a function of r is obtained. The average velocity as a function of r for the four garage heights is shown in Figure 7. Figure 8 shows the time needed to reach the corresponding distance. This means that within the first 2 s all sprinklers at a distance of 2 m from the burning car will be triggered. At longer distances, the more remote sprinklers will act on condition that the temperature of the burning car does not decrease too quickly. For the maximum calculated time of 7.7 s, a large decrease in temperature cannot be expected, which leads to the conclusion that the ceiling-jet temperature will remain much greater than the activation temperature of the "fast" sprinklers: t_p = 57 °C (T = 330 K) will always remain less than the temperature of the wall jet, whose initial temperature is 600 K. Farther from the water curtain, other ceiling sprinklers that are within range may also be activated. When the burning car is within l ≤ 2 m of the sprinkler curtain, three (up to five) fast sprinklers will be triggered. At a longer distance, a maximum of three quick-response sprinklers of the water curtain will be triggered, plus the main ones over the burning car and, eventually, the ceiling sprinklers lying within a range of l = 4 m, so that the number of activated sprinklers will increase [10].
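As a quick illustration of the time scales discussed above, the following sketch (not part of the original study) recomputes the ceiling arrival time t = H/u_0 for the quoted initial velocity u_0 = 8.2 m/s and garage heights of 3 to 4.5 m, together with the radial wall-jet width from the relation b_0 = 0.163 r given in the text.

```python
# Minimal sketch: ceiling arrival time of the convective stream and radial jet
# width, using values quoted in the text (u0 = 8.2 m/s, b0 = 0.163 * r).
u0 = 8.2                                # initial velocity of the upward stream, m/s

for H in (3.0, 3.5, 4.0, 4.5):          # garage heights considered in Table 1, m
    t = H / u0                          # arrival time at the ceiling, s
    print(f"H = {H:.1f} m -> t = {t:.2f} s")
# All arrival times are below 1 s, consistent with the claim that the
# sprinklers above the burning car are activated in less than a second.

for r in (1.0, 2.0, 3.0, 4.0):          # radial distance along the ceiling, m
    b = 0.163 * r                       # width of the radial wall jet, m
    print(f"r = {r:.1f} m -> b0 = {b:.2f} m")
```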
To create a smoke-free zone beneath the floating smoke layer [14], exhaust systems for smoke and hot gases are designed and installed. An exhaust ventilation system for smoke and hot gases is a set of safety equipment designed to play a positive role in case of fire. The smoke is drawn toward the non-load-bearing partition (fire rating EI) at a velocity of 2 m/s to 5 m/s. The standard allows a velocity of 5 m/s, but it should be taken into account that such a velocity has a negative effect and leads to the merging of streams of clean air. Following Abramovich [14], the density of the thermal load in premises used for the storage of combustible materials is determined, according to their purpose, from the heat capacity of the prevailing materials. The ventilation system for removal of smoke and heat (VSRSH) has to reach its designed performance level within 60 s of receiving the command signal. Each VSRSH has to ensure that sufficient fresh air enters the room to replace the extracted flue products.

Thermal impact

Heat transfer by convection and radiation is defined according to [3,10]. The thermal action is expressed by the intensity of the net heat flux h_net, W/m², to the surface of the element, determined by taking into account heat transfer by convection and by radiation:

h_net = h_net,c + h_net,r,

where h_net,c is the convective and h_net,r the radiative component. The convective component of the heat flux intensity is determined by

h_net,c = α_c (θ_g − θ_m),

where α_c is the coefficient of heat transfer by convection [W/(m²·K)]; θ_g is the gas temperature near the fire-exposed element [°C]; and θ_m is the surface temperature of the element [°C]. The coefficient of heat transfer by convection α_c is determined from the nominal "temperature-time" curves. On indirectly heated surfaces of elements, the intensity of the heat flux h_net is determined by Eq. (16), with the coefficient of heat transfer by convection taken as α_c = 9 W/(m²·K), considering that the effects of heat transfer by radiation are included. The radiative component of the net heat flux per unit surface area, h_net,r, is defined by a Stefan-Boltzmann-type relationship, and the emissivity of the fire is taken as ε_f = 1; a numerical illustration of the convective and radiative split is sketched after the list of conditions below.

Determination of the intensity of the water curtain

Because of the difficulties associated with the construction of fire walls, the aim is to reduce the separated areas to such proportions that the primary subdivision does not disturb the functional process. In many cases, such as in buildings of the first degree of fire resistance, as already noted, the absence of firewalls has not proved detrimental to fire safety. In connection with this, there arises a need for fire barriers that can effectively limit the spread of fire and at the same time give some freedom for the internal layout of buildings with different functions, which is the case of the water curtain [15]. When calculating water curtains, the following conditions must be satisfied simultaneously:
• the structural parts of the building withstand the effects of fire on one side, and the passage of flames or hot gases to the unexposed side is prevented;
• the structural parts of the building withstand the effects of fire on one side and prevent the transfer of heat from the exposed to the unexposed side;
• the heat transfer is limited so that neither the unexposed surface nor any other material in the immediate vicinity is ignited.
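The net heat flux expressions referenced above follow the general Eurocode-style form (EN 1991-1-2); the sketch below illustrates that standard form rather than reproducing the chapter's own (missing) equations. The configuration factor, surface emissivity, and convective coefficient used in the example are assumed illustrative values; only ε_f = 1 is taken from the text.

```python
# Illustrative sketch of the net heat flux split into convective and radiative
# parts (standard Eurocode-style form; example parameter values are assumed).
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def h_net(theta_g, theta_m, alpha_c=25.0, phi=1.0, eps_m=0.8, eps_f=1.0, theta_r=None):
    """Net heat flux to a surface, W/m^2.

    theta_g : gas temperature near the exposed element, deg C
    theta_m : surface temperature of the element, deg C
    theta_r : effective radiation temperature (taken equal to theta_g if None)
    alpha_c, phi, eps_m : assumed example values, not taken from the chapter
    eps_f = 1.0 matches the fire emissivity quoted in the text
    """
    if theta_r is None:
        theta_r = theta_g
    h_c = alpha_c * (theta_g - theta_m)                                   # convective part
    h_r = phi * eps_m * eps_f * SIGMA * ((theta_r + 273.0)**4 - (theta_m + 273.0)**4)
    return h_c + h_r

# Example: a 600 K (~327 deg C) ceiling jet over a surface at 20 deg C
print(f"h_net = {h_net(327.0, 20.0):.0f} W/m^2")
```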
The structural element is designed to serve as a barrier against the heat and to ensure the protection of people who are close to it. The effectiveness of water curtains is assessed according to the amount of absorbed heat. It is known that, as the temperature of the radiation source grows, the maximum of the radiated energy shifts toward shorter wavelengths. This follows from Wien's law, λ = 2.898 × 10⁻³ / T, where λ is the wavelength in m and T is the temperature at the surface of the water curtain, K. Water drops of size 200 × 10⁻⁶ m provide a sufficiently good interphase, heat-absorbing surface. It is considered that, in the best case, sprinklers spray water droplets of size less than 1000 μm.

Required flow rate for the water curtain

The curtain has the following characteristics: density of the radiation heat flux, 1500 W/m²; density of the irradiation of the protected material, 900 W/m²; height of the opening, 4 m; length of the opening, 6 m; water pressure at the sprinkler, 0.6 MPa (6 atm); and radius of the water drops, 0.0006 m (600 μm). From these data the following are calculated: the opacity density of the curtain; the thickness of the curtain; the flow rate of the water curtain per 1 m² of lateral surface; and the flow rate for the whole surface of the water curtain. Water curtains are constructed so that the entire opening is irrigated with finely dispersed water. For this purpose, sprinklers are placed above the opening and next to it. When they are placed only at the top of the opening, unprotected areas may remain through which penetration of hot gases is possible. Sprinkler heads used to spray the jets are spaced 0.5 m apart when protecting small openings and 1.25-1.5 m apart when protecting large openings. For sprinkler heads situated at a distance greater than 3 m, a water head pressure of 4-6 m H2O is required.

Numerical simulations: mathematical model of flow in a confined space

The mathematical model is based on the equations used in computational fluid dynamics. These are the continuity equation, the Navier-Stokes equations modified according to the Boussinesq hypothesis (μ_eff = μ + μ_t), the energy equation (first law of thermodynamics), and the Clapeyron equation for the gas mixture. Fire-induced flows run at low speeds in the absence of detonation and explosions. In the case of a fire without detonative combustion and explosions, it can be assumed that div V = 0, i.e., ∂u/∂x + ∂v/∂y + ∂w/∂z = 0. To these are added the equations for smoke propagation (smoke content) and for the optical density of the gas mixture. In the equation for heat exchange (first law of thermodynamics), c_p is the specific heat capacity at constant pressure; λ is the coefficient of thermal conductivity; λ_i is the coefficient of turbulent thermal conductivity; λ_p is the coefficient of radiative thermal conductivity; and q_v is the intensity of the internal heat sources. Here, q_v can be represented by q_v = q_vc + q_vr + q_vb, where q_vc is the intensity of the internal convective heat sources, q_vb is the intensity of the internal combustion sources, and q_vr is the intensity of the internal sources due to radiative heat transfer. The equation of state of the gas is p = ρRT, where R is the gas constant.
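Before moving on to the transport equations, the short sketch below (an illustration, not part of the chapter) numerically checks two relations used above: the equation of state ρ = p/(RT), with the temperatures quoted earlier in the text, and Wien's displacement law from the water-curtain discussion. The Wien constant is the standard 2.898 × 10⁻³ m·K; the second temperature used for the Wien example is an assumed illustrative value.

```python
# Quick numerical check of two relations quoted above.
R_AIR = 287.0          # gas constant used in the chapter, J/(kg*K)
P = 1.0e5              # ambient pressure, Pa
B_WIEN = 2.898e-3      # Wien's displacement constant, m*K

def density(T):
    """Density from the equation of state rho = p / (R T), kg/m^3."""
    return P / (R_AIR * T)

print(f"rho(600 K) = {density(600.0):.2f} kg/m^3   # hot jet, ~0.58")
print(f"rho(293 K) = {density(293.0):.2f} kg/m^3   # ambient air, ~1.19")

for T in (600.0, 1200.0):  # source temperatures, K (1200 K is an assumed example)
    print(f"lambda_max({T:.0f} K) = {B_WIEN / T * 1e6:.2f} um")
# The emission maximum lies in the infrared (a few micrometers), far smaller
# than the ~600 um droplets of the water curtain discussed above.
```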
The law for the conservation of the mass of the ith gas in the mixture is written with D as the diffusion coefficient, representing the sum of the molecular diffusion coefficient D_i and the turbulent diffusion coefficient D_t (D = D_i + D_t); χ is the mass concentration of the ith gas; and m_i is the intensity of the internal mass sources arising from the formation (disappearance) of molecules of the gas as a consequence of the chemical reactions of combustion taking place in the fire. The law (equation) for the optical density of the smoke is of a similar form, where D_on is the smoke-generating capacity of the combustible material and q_D is the intensity of the internal sources of optical density of the smoke formed by the combustion reactions in the fire [3].

The thermophysical parameters of the mixture of gases involved in, and resulting from, combustion in a fire take into account the chemical composition of this mixture. It consists of oxygen, nitrogen, and combustion products (oxides of carbon, nitrogen, sulfur, etc.) of the combustible ingredients involved in the process. They are defined as follows:
• density of the mixture, ρ = Σ α_i ρ_i;
• specific heat capacity, c_p = Σ H_i c_p,i;
where α_i is the bulk (volume) concentration of the ith component and H_i is its mass concentration. The values of these parameters are determined at constant pressure (p = const). They can be considered temperature dependent or treated as constant.

A characteristic equation

The characteristic equation summarizes the main partial differential equations, which are solved sequentially in the software for each of the flow parameters. It has the form

∂(ρΦ)/∂t + div(ρVΦ) = div(Γ grad Φ) + S,   (46)

where Φ is the dependent variable (velocity components, enthalpy, concentration of the components of the gas medium, or optical density of the smoke, respectively); Γ is the diffusion coefficient for the corresponding Φ; and S is the source term. The values for Eq. (46) are given in [9].

Modeling the turbulence using CFD

Most often, a k-ε turbulence model is applied in CFD (Fluent). In this model, the coefficient of turbulent viscosity ν_t is represented by the Kolmogorov-Prandtl relation through the kinetic energy of turbulence k and its dissipation rate ε:

ν_t = C_μ k²/ε,

where C_μ is an empirical constant of the model.

FDS turbulence modeling

To close the system of equations in FDS, as in all other turbulent flow problems, it is necessary to use an appropriate turbulence model. In this case, the large eddy simulation model [9], known in this type of task as the LES model, is recommended as the most appropriate. The model is described in detail in [9]. Large eddy simulation is based on the following: large-scale vortices differ markedly in the course of transition from one flow to another, while the small-scale structure changes only slightly. The field of the large-scale structures is the one that needs to be resolved. Continuity of the flow parameters is ensured using Leonard's so-called filtering function; each flow parameter is decomposed as a = ā + a′. Dissipative processes such as viscous dissipation, thermal conduction, diffusion, and impurity transfer are modeled. What is special about the model is that vortex structures with scales smaller than the computational grid are modeled rather than resolved. The parameters μ, λ, and D in the equations describing the process are replaced by expressions modeling their effect. The strain rate tensor is used to determine μ.
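To make the two closure strategies concrete, the sketch below evaluates the Kolmogorov-Prandtl eddy viscosity used in k-ε models and a Smagorinsky-type subgrid viscosity of the kind used in LES codes such as FDS. The constants (C_μ = 0.09, C_s = 0.2) are conventional values and the sample inputs are assumed, not taken from the chapter.

```python
# Sketch of the two turbulence closures mentioned above (illustrative values).
C_MU = 0.09        # standard k-epsilon constant
C_S = 0.2          # Smagorinsky constant (typical value for LES/FDS-type codes)

def nu_t_k_epsilon(k, eps):
    """Kolmogorov-Prandtl eddy viscosity, m^2/s: nu_t = C_mu * k^2 / eps."""
    return C_MU * k**2 / eps

def nu_t_smagorinsky(delta, strain_rate_mag):
    """Smagorinsky subgrid viscosity, m^2/s: nu_t = (C_s * delta)^2 * |S|."""
    return (C_S * delta)**2 * strain_rate_mag

# Example values (assumed): k = 0.5 m^2/s^2, eps = 0.8 m^2/s^3,
# grid size delta = 0.3 m, strain-rate magnitude |S| = 5 1/s
print(f"k-epsilon : nu_t = {nu_t_k_epsilon(0.5, 0.8):.4f} m^2/s")
print(f"LES       : nu_t = {nu_t_smagorinsky(0.3, 5.0):.4f} m^2/s")
```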
Thermal conductivity and impurity diffusion are determined analogously, through the modeled turbulent transport coefficients; in the case of laminar heat transfer and diffusion, respectively, the molecular values are used. The combustion process in the fire is most often implemented using the "mixture fraction" approach: a scalar quantity characterizing the mass concentration of one or more components of the gas mixture at a given point in the flow. To reduce the volume of calculations, only two components of the mixture are stored: the mass concentration of unburned fuel and that of the burned material, i.e., the combustion products. Radiant heat transfer is calculated using the equations for the emission of sulfur-containing gases, which in fact implies a constraint on the problem. Large-scale models may also be used in certain cases. The FDS equations are discretized with the finite volume method (FVM). In addition to the LES turbulence model, successful attempts have been made to apply the direct numerical modeling method described in [9]. FDS has been tested in a number of laboratories and institutions in the United States, and the validation performed shows that it is applicable in many cases [16].

Computer modeling and numerical simulations
A detailed description of the Fluent (CFD) program interface is given in [17-19]. Development of fire in an atrium space: the fire development is simulated for a specific object, the building shown in Figures 9 and 10, located on Tsarigradsko Shose Blvd., Sofia. The arrangement of the air exchange in the atrium space in case of fire is shown in Figure 10. The atrium air exchange was modeled, showing the zones with critical parameters of radiation, smoke, and fire. It is important to note that all of the above is possible only if the corresponding velocity or temperature field of the air in the room is known [5]. The geometric model drawn in this way shows the location of the fire, that is, the hazard generator, and the flue gas outlets (smoke hatches). In real fires there is a gradual transition zone between the lower cold smoke and the upper hot smoke. The first smoke curtain signal may be calculated from the beginning of the formation of this transition zone. Thus, it can be assumed that forecasts using equations of this type depend on the exact application of the computer model. After the 3D atrium model has been built (in the Gambit work environment), the volume has to be meshed. Because of the large volume, it is not appropriate to mesh all elements with the same step. For this reason, a fine mesh is selected at the site of fire generation and at its outlet from the room, while a coarser one is used far from them. In the present case, triangular elements with a step of 0.3 m were selected for meshing the site of fire generation and the smoke hatches. For the other walls, as well as for the volume of the atrium, a step of 0.5 m is chosen. Figure 11 shows the velocity field in the atrium in vector form. It can be seen from the figure that high velocities are observed at the site of smoke generation, both near the walls and in the high part of the atrium. (Building with atrium subject to the simulation study [20].) The temperature distribution in the volume of the atrium is shown in Figure 12. Areas with higher temperatures are clearly visible near the source of smoke and the surrounding wall above it, and near the dome of the atrium. Figure 13 shows the distribution of smoke in the atrium at various points in time, up to 120 s, until equilibrium between the ascending and descending currents in the atrium is reached.
Figure 14 shows the change in turbulent kinetic energy in the atrium. What is striking is the intense transfer of substance from the seat of the fire along the atrium wall up to the dome, after which it slowly subsides. When the smoke reaches the floor of the room, the turbulent kinetic energy is approximately zero. Modern computer programs for the numerical modeling of processes related to air exchange in atriums can justify relaxing some regulatory requirements for protected premises (atriums), which can lead to significant savings for investors. If necessary, openings may be left open in the premises; with properly designed fire ventilation, they will not have a negative effect on the parameters of the fire. In large areas, the flue products may be contained only above the fire. The possibility of making new, more practical, and more economical architectural decisions increases.

Application of the FDS environment for predicting and reconstructing the spread of fires and damage in a building [21, 22]. An analysis of the FDS environment is made to highlight the basic features on which it is built. In analyzing the program, it should be emphasized that it belongs to the numerical mechanics of fluids and the software products built on that basis. The same system of partial differential equations is used; the difference between the CFD and the FDS environments lies in the equations used to describe the turbulence. Fluent-type programs use turbulence models that narrow their applicability to the study of fires in unbounded space. FDS uses Large Eddy Simulation (LES), which extends its applicability to the study of currents and fires in open space, as well as to the effects of wind and other weather conditions. The program is also used to analyze the spread of hazards in the working environment, at both industrial and residential sites, as well as in the surrounding environment. It also allows the development of a fire in past events to be reconstructed [5, 9, 16].

Fire development modeling in enclosed spaces using the PyroSim (FDS) program
This simulation product is applicable to modeling fire development and determining evacuation and extinguishing routes indoors. The software environment offers intuitive function menus (a graphical user interface) and provides results for the propagation of flue gases, hydrocarbons, and other substances during a fire, as well as for the temperature distribution over a cross section of the model geometry. The program serves not only for predicting the situation, but also for fire investigation, in particular locating the initial ignition zone, as well as for training. The simulations in the program are based on computational fluid dynamics, and in particular on low-velocity convective currents. The capabilities of the software make it possible to investigate fires ranging from cooking stoves to stores of oil derivatives (oil depots). The program is also applicable to the simulation of flame-free processes, including building ventilation testing. A detailed description of how to work with the PyroSim interface is given in [17]. Development of fire in a training building: the development of a fire in study building 2 of TU-Sofia is investigated. The fire is assumed to start on the ground floor, in one of the laboratories (Figure 15). Specific examples of the application of the PyroSim software product are shown in Figures 16-21 for a simulated fire in a training laboratory on the first floor of a technical building of the Technical University of Sofia.
For the construction of the geometric model in Figure 16, the real barrier elements such as walls, doors, and windows are taken into account, together with the materials of which they are made and their respective melting/ignition temperatures. Instantaneous flue gas images of the building are shown in Figure 17 (at 50 s), Figure 18 (at 60 s), and Figure 19 (at 440 s). It is clear that the smoke spreads fastest up one of the staircases, which acts as a chimney for this part of the building. Over the same period of time, smoke spreads down the corridor on the first floor. Since there are no smoke barriers (doors) installed between this staircase and the corridors on the floors, the smoke will spread to all floors and will make it difficult to evacuate people from the building. Partition doors are placed on the other staircase (to the right in the model shown in Figures 18 and 19); they are intended to prevent the smoke from spreading from the staircase to the corridors on the floors, but in this case the flue gases will reach them from both sides. These barriers, together with the reduced visibility in this enclosure, will cause additional evacuation difficulties: people will not easily notice where the barrier on the second staircase is and are likely to collide with its glass shutter door, which is closed by a mechanical mechanism; this is a prerequisite for accidents during the evacuation and may lead to an increase in the number of casualties in the building. Figure 21 shows the temperature distribution over a vertical section of the building at the 480th second of the simulation. It shows that in the fire zone in the laboratory the temperature is above 200°C, and in the hallway in front of it, where the nearby staircase is, the temperature is above 120°C. Higher up the building, the temperature drops to about 60°C by the third floor, indicating that no escape route should pass through this area without protective clothing. Linking these data with the previous figures, a requirement for the mandatory availability of respiratory protection may also be added, as this is also the main route for the distribution of flue gases. The results of the simulation of a fire occurring in a particular building give preliminary information about the flaws in its design with respect to fire safety. If they are taken into account, placing barriers in the right places and revising the evacuation routes from the building would increase safety in the event of a disaster or accident and allow all occupants to be removed without damage to their health.

Conclusion
The results obtained in this chapter are first and foremost a practical application that allows problems related to fire prevention and analysis in a restricted area to be solved. The technical solution for limiting the spread of fires is to use protective water curtains, as they isolate burning vehicles from the environment and thus prevent the transfer of fire to other vehicles in the underground garage. The solution can be applied to any similar object. The results of the two simulations of fire in specific buildings demonstrate the capability of the Fluent and FDS (PyroSim) software for analyzing fire spread, smoke, temperature, and harmful substances in confined spaces. As shown, these simulations can be used:
• in the design of buildings with fixed sprinkler systems and evacuation routes;
• in forensic analysis of the consequences of a fire, by reconstructing its development over time.
Parallel and private generalized suffix tree construction and query on genomic data

Background: Several technological advancements and the digitization of healthcare data have provided the scientific community with a large quantity of genomic data. Such datasets have facilitated a deeper understanding of several diseases and of our health in general. However, these genome datasets require a large storage volume and present technical challenges in retrieving meaningful information. Furthermore, the privacy aspects of genomic data limit access and often hinder timely scientific discovery.

Methods: In this paper, we utilize the Generalized Suffix Tree (GST); its construction and applications have been studied extensively in related areas. The main contribution of this article is a privacy-preserving string query execution framework using GSTs and an additional tree-based hashing mechanism. We start by introducing an efficient parallel GST construction that is scalable to large genomic datasets. The secure indexing scheme allows the genomic data in a GST to be outsourced to an untrusted cloud server under encryption. Additionally, the proposed methods can perform several string search operations (i.e., exact and set-maximal matches) securely and efficiently within the outlined framework.

Results: The experimental results on different datasets and parameters in a real cloud environment exhibit the scalability of these methods, which also outperform the state-of-the-art method based on the Burrows-Wheeler Transformation (BWT). The proposed method takes only around 36.7 s to execute a set-maximal match, whereas the BWT-based method takes around 160.85 s, a 4× speedup.

Supplementary Information: The online version contains supplementary material available at (10.1186/s12863-022-01053-x).

Introduction
In today's healthcare system, human genomics plays a vital role in understanding different diseases and contributes to several domains of healthcare. Over the years, genomic data have opened new areas of research such as genomic or personalized medicine and genetic engineering. With recent technological advancements, we can store millions of genomes from thousands of participants alongside their medical records, and medical professionals from different geolocations can utilize these massive interconnected datasets. On the other hand, due to the unique nature of human genomes, privacy aspects of this sensitive data have surfaced over the last decade [4]. The current privacy regulations therefore do not allow genomic datasets to be publicly available without a formal application and require due diligence from the researchers [5]. This can delay scientific discoveries that depend on sensitive genomic data and the participants' medical records [3]. Employing privacy-preserving techniques while performing sensitive queries on a genomic dataset is therefore an important research area. This field has attracted the cryptographic community in general, and several theoretically proven private frameworks are being investigated [3, 6]. Specifically, the massive scale of genomic data and the computational complexity of the queries make this area challenging: we want to protect the privacy of the participants while providing a timely response from the privacy-preserving computations. In this paper, we target suffix trees, specifically the Generalized Suffix Tree (GST), which can be employed to perform several search operations on genomic data [7].
Firstly, we construct the GST in parallel (published in the conference version [8]), which is then extended with privacy-preserving string query techniques using the GST indexing. It is important to note that building a suffix tree efficiently and in parallel is a well-studied area and not our primary contribution. Instead, we target GSTs, which can represent a genomic dataset containing multiple participants [9], and we employ distributed and shared memory architectures for parallel construction. The distributed architecture considers multiple machines with completely detached memory systems, connected by a network. Our mechanism utilizes the global memory in this case while harnessing the parallel power of the several cores available. Primarily, we propose privacy-preserving methods to perform arbitrary string queries on the genomic dataset. The proposed method relies on a hash-based scheme combined with cryptographic primitives. With two different privacy-preserving schemes, we demonstrate that the proposed methods provide a realistic execution time for a large genomic dataset. The contributions of this paper are:
• The novelty of this work lies in the proposed private query execution technique that incorporates a hashing mechanism (Reverse Merkle Hash) over a tree structure, which additionally serves as a secure index allowing several string search operations. We further extend this method's security with Garbled Circuits [10], where the researcher's inputs are deemed private as well.
• Initially, we propose a GST construction mechanism for different memory models using parallel computation.
• The efficiency of the GST index along with the privacy-preserving queries is tested with multiple string searches. Specifically, we analyze speedups while altering the number of processors, the input dataset size, the memory components and different indexings.
• As reported in our earlier version [8], experimental results show that the proposed parallel construction can achieve a ∼ 4.7× speedup in comparison to the sequential algorithm for a dataset with 1000 sequences, each with 1000 nucleotides (with 16 processors).
• Our privacy-preserving query mechanism also demonstrates promising results, as it takes only around 36.7 seconds to execute a set-maximal match on the aforementioned dataset. Additionally, we compared against a private Burrows-Wheeler Transform method [11], which takes around 160.85 seconds, giving us a 4× speedup. Our secure query method is also faster than that of Sotiraki et al. [12], which needed 60 seconds under the same setting.
The paper is organized as follows. The Methodology section describes the proposed methods for parallel GST construction and privacy-preserving queries. Experimental results are shown and discussed in the Experimental results and analysis section, where potential limitations and future work are discussed as well. The related work and background techniques are described in the Supplementary Materials. Finally, the Conclusion section presents the conclusion of the paper. It is noteworthy that the parallel GST construction is available in the conference version [8], which is summarized in the Methodology and Experimental results and analysis sections as well.

Methodology
As we first build the GST in parallel prior to the private execution of different queries, the proposed methods are divided into two major components. Nevertheless, the architecture of the problem and the proposed method are summarized below.
Notably, the parallel GST construction is also available in our conference version [8].

Problem architecture
The architecture consists of three entities: a) Data Owner, b) Cloud Server and c) Researchers, as outlined in Fig. 1. Here, the data owner collects the genomic dataset D_{n×m}, on which string queries q are executed by any researcher. The queries are handled by an intermediary cloud server, as the data owner generates a Generalized Suffix Tree (GST) and stores it privately on the cloud. The background on GSTs is available in the supplementary material. We assume that the researchers have limited computational power since they are interested in a small segment of the dataset D. Also, researchers have no interaction with the data owner, as all query operations are handled by the cloud server. (Fig. 1: computational framework of the proposed method, where the data owner holds the genomic dataset and constructs the GST in parallel on a private computing cluster as a one-time preprocessing step; the GST is then outsourced securely to the Cloud Server (CS), where the query q from the researcher is executed in a privacy-preserving manner.) In summary, the proposed method presented in this article has two steps: a) constructing the GST in parallel, and b) executing q with a privacy guarantee over the data.

Parallel GST construction [8]
Parallel GST construction first partitions the genomic data evenly across different computing nodes. Here, we employ two memory environments: a) distributed and b) shared. The distributed memory setting has machines interconnected via a network, each containing multicore processors and fixed-size memory (RAM). The multiple cores in these processors also share physical memory, namely the shared memory. We propose the memory distribution to address the large memory requirement of constructing the trees. For example, n sequences with m genomes may take at least nm memory, so any real-world genomic dataset may overflow the memory. This issue motivated building the GST for a targeted genomic dataset in a distributed memory setting [8].

Private storage and queries
After constructing the GST in parallel in a private cluster, the resulting GST is stored in an offshore semi-trusted cloud system. The use of a commercial cloud service is motivated by its low cost and by the high storage requirements of GSTs built on genomic data. Furthermore, a cloud service provides a scalable and cost-effective alternative to procuring and managing the required infrastructure, and it will primarily handle queries on genomic data. As shown in Fig. 1, the researchers only interact with the cloud server, which contains the GST constructed in parallel. However, using a third-party vendor for storing and computing sensitive data is often not permissible, as there have been reports of privacy attacks and several data leaks [3]. Therefore, we intend to store the genomic data on these cloud servers with a privacy guarantee and execute the corresponding string queries alongside. Specifically, our privacy-preserving mechanisms conceal the data from the cloud server; in case of a data breach, the outsourced genomic data cannot be traced back to the original participants. Further details on the threat model are available in Privacy model.

String Queries q
We considered different string queries to test the proposed privacy-preserving methods based on GSTs and other cryptographic schemes (see the supplementary materials).
The four queries discussed here are incrementally challenging, while the inputs to these queries are the same dataset D. Since we are considering a dataset of n × m haplotypes, D will have {s_1, . . . , s_n} records where s_i ∈ [0, 1]^m. The query needs to be no longer than the number of genomes (1 ≤ |q| ≤ m).

Definition 1 (Exact Match, EM): For any arbitrary query q and genomic dataset D, an exact match will only return the records x_i such that q[0, m] = x_i[0, m], where m is the number of nucleotides in each genomic sequence in D.

Example 1: Consider the dataset in Table 1 of size n × m, where n = 5 and m = 6. For a query q = {1, 0, 0, 0, 1, 0}, an exact query according to Definition 1 perfectly matches the first row x_i; hence the output set for this input q is the first sequence of the dataset.

Definition 2 (Exact Substring Match, ESM): An exact substring match should return the records x_i in which the query q appears as a substring, i.e., x_i[j_1, j_2] = q for some j_2 ≥ j_1 with j_2 − j_1 = |q| − 1.

Example 2: For an exact substring match query, we need a query sequence with |q| < m. For q = {1, 1, 1}, the output of the query (according to Definition 2) should contain the second row, as the query sequence q is present in the dataset D as a substring.

Definition 3 (Set Maximal Match, SMM): A set maximal match, for the same inputs, will return the records x_i that satisfy the following conditions:

Example 3: A set maximal match can return multiple records that partially match the query. For q = {1, 1, 0, 1}, it will return the records {2, 3, 4, 5} from D as outputs, since they contain the substrings 1101, 110, 101 and 101, respectively.

Definition 4 (Threshold Set Maximal Match, TSMM): For a predefined threshold t, TSMM will report all records satisfying the constraints of SMM (Definition 3) with j_2 − j_1 ≥ t.

Example 4: Inheriting from Definition 4, we have an additional parameter, the threshold t, which determines the minimum length of the matches allowed in the output sequences. For a query q = {1, 0, 1, 1} and threshold t ≥ 3, the output will be {2, 4, 5}, since the second and fourth records have 101 starting from positions 3 and 2, respectively, and the fifth sequence completely matches the query from position 2.

Parallel GST construction [8]
In this section, we summarize the proposed techniques for constructing the GST in parallel from our earlier work [8]. These approaches fundamentally differ in partitioning and agglomeration according to the PCAM (Partitioning, Communication, Agglomeration and Mapping) model [13].

Data partitioning scheme [8]
The memory location and the number of distributed computing nodes allow us to employ two data partitioning schemes, horizontal and/or vertical [8]. Horizontal partitioning creates different groups of sequences according to the computational nodes or processors available. Each node receives one group and performs the GST construction in parallel. For example, for n = 100 and p = 4, the data is split into 4 groups, each with |n_i| = 25 sequences. Each processor node p_i builds a GST individually on |n_i| sequences of length m. This is done in parallel and does not require any communication. In the supplementary materials, we discuss an example of our horizontal partition scheme for two nodes (n = p = 2). The vertical partitioning scheme cuts the data along the genomes or columns and follows a similar mechanism to the one mentioned above. However, splitting across the columns introduces an additional complexity upon merging, which is discussed in Distributed memory model [8].
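As a minimal sketch of the data partitioning step described above, the following Python fragment splits a toy dataset into row blocks (horizontal) and column blocks (vertical). It only illustrates the split itself, not the Ukkonen-based construction of [8], and the example dataset is made up.

```python
# Minimal sketch of the horizontal / vertical partitioning step: each of the p
# nodes receives a contiguous block of rows (sequences) and/or columns (sites).

def horizontal_partition(dataset, p):
    """Split n sequences into p row blocks."""
    n = len(dataset)
    size = (n + p - 1) // p
    return [dataset[i:i + size] for i in range(0, n, size)]


def vertical_partition(dataset, p):
    """Split each sequence into p column blocks (same cut points for every row)."""
    m = len(dataset[0])
    size = (m + p - 1) // p
    return [[row[j:j + size] for row in dataset] for j in range(0, m, size)]


if __name__ == "__main__":
    D = ["010101", "101010", "110011", "001100"]
    print(horizontal_partition(D, 2))  # two groups of 2 sequences
    print(vertical_partition(D, 2))    # two groups of 4 half-length sequences
    # a bi-directional split (p >= 4) composes the two: rows first, then columns
```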
The bi-directional scheme performs data partitioning along both rows and columns, combining the two earlier approaches. It is noteworthy that this partition scheme only works with four or more processors (p ≥ 4). For example, with n = 100, m = 100 and p = 4, each processor gets an n_i × m_i = 50 × 50 block of records for its computations.

Distributed memory model [8]
The interconnected nodes receive the partitioned genomic data and start building their individual GSTs in parallel. For example, the nodes p_0, p_1, . . . , p_|p| create the suffix trees GST_0, . . . , GST_|p| in parallel. It is noteworthy that the underlying algorithm for constructing the GSTs is the linear-time Ukkonen's algorithm [14], regardless of the partitioning mechanism. Once the GST-building phase is completed, the nodes start the network activity by sharing their GSTs for the merge operation. Figure 2 shows a GST construction for horizontally partitioned data. Here, two different suffix trees are presented on the left side, with nodes coloured in grey and white, and the merged version of these trees is on the right. It is important to note that the merge operation should not create any duplicate nodes at any particular level. However, for the other partitioning schemes (vertical and bi-directional), we need to perform an extra step because the datasets are divided across the columns (m_i < m). Figure 3 shows this step, where GSTs are constructed for S1, S2 = {010101, 101010} with the setting n = 2, p = 2, m = 6. Here, the first node p1 takes {010, 101} as input whereas p2 operates on {101, 010}. The GST from p1 does not have the full sequences and needs to account for the tail-end suffixes that are generated at p2. Therefore, we add different end characters to p1's suffix trees, representing the future additions. Based on these end characters, a merge operation happens for all cases where sequences were partitioned across the columns without the last genomes (m_i < m). However, the suffix trees from the tail-end sequences (m_i = m) can be described as linear trees or path graphs. For example, in Fig. 3, 101 and 010 are represented as %1 and %2, where both are linear chains of nodes, i.e., path graphs. We add these %1, %2 path graphs to the suffix trees with m_i < m without any duplication. Finally, the trees on p1 and p2 are merged according to the previous method, following the horizontal partitioning scheme.

Shared memory model [8]
In summary, the distributed memory model has multiple instances with completely separate memory environments where the GSTs are constructed. These instances also have multiple CPU cores accessing a global memory (RAM). In the shared memory model, we utilize these in-processor cores and perform a parallel merge operation. Our genome data consist of a fixed alphabet considering the nucleotides available (A, T, G, C). We use this property to propose an intra-node parallel operation using the memory shared among the cores. Since the number of children is always fixed due to the fixed alphabet size, we distribute the operations across multiple cores. For example, one core only handles the suffixes beginning with 0 (at the root) whereas another one takes only the 1 branch. Figure 2 depicts this operation, where p1 and p2 construct individual GSTs from {01, 0101, 010101} and {1, 101, 10101}. The output GSTs are then merged, avoiding duplicates, and added to the final GST's root. Notably, due to the limited main memory in this shared environment, we cannot process arbitrarily large datasets using this method alone.
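The duplicate-free merge that both memory models rely on can be illustrated with a dictionary-based trie. This is only a sketch under simplifying assumptions (nested dicts instead of the actual GST node structure, no suffix links), not the implementation from [8].

```python
# Minimal sketch of the duplicate-free merge used when combining per-node GSTs.
# Trees are modelled as nested dicts (character -> subtree).

def insert_suffixes(tree, sequence):
    """Insert every suffix of `sequence` into a dict-based trie."""
    for start in range(len(sequence)):
        node = tree
        for ch in sequence[start:]:
            node = node.setdefault(ch, {})
        node.setdefault("$", {})  # suffix terminator
    return tree


def merge(a, b):
    """Merge trie b into trie a without duplicating children on any level."""
    for ch, sub in b.items():
        if ch in a:
            merge(a[ch], sub)  # shared edge: recurse instead of duplicating
        else:
            a[ch] = sub        # new branch: attach as-is
    return a


if __name__ == "__main__":
    # Each 'processor' builds a trie for its horizontal partition ...
    t1 = insert_suffixes({}, "010101")
    t2 = insert_suffixes({}, "101010")
    # ... and the roots' '0' and '1' branches could even be merged by separate
    # cores, since they never share edges.
    final = merge(t1, t2)
    print(sorted(final.keys()))  # ['0', '1']
```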
Merging GSTs [8]
Since the GSTs are constructed on multiple processors and in different memory environments, we need to merge them into the final GST representing the whole genome dataset. (Figure 3 illustrates the vertical partitioning with the merging of the path graphs %1, %2 [8].) Here, the merge operation takes multiple GSTs as input and produces a single tree without any duplicate node on a single level (Definition 5). Formally, for p processors, we need to merge |p| GSTs to create the final GST: GST = GST_0 + . . . + GST_|p|. We use the technique discussed in Shared memory model [8], treating the 0 and 1 children of the root in separate cores. Notably, the branches under the 0 and 1 children of the root do not have any common edges between them. Therefore, we can perform the merges in parallel, availing of the intra-node parallelism.

Definition 5 (Merge GSTs): Given two suffix trees T_1 and T_2 from two sequences S1 and S2 of length m, the leaf nodes of the merged tree T_12 will contain all possible suffixes S1:i and S2:i, i ∈ [1, m].

An example of the merge operation is shown in Fig. 4, depicting a bi-directional partition and the subsequent merging. Notably, merging any branch into another is a sequential operation: different threads cannot operate on it simultaneously, to preserve the integrity of the tree and avoid race conditions. Nevertheless, the intra-node parallelism can be extended according to the number of cores available. For example, rather than only considering the 0 and 1 branches, it can take the 11, 10, 01, 00 branches.

Communication and mapping [8]
In our proposed mechanism, the computing nodes get a continuous segment of genomic data on which they construct their GSTs. The final GST in any node is saved in a file system, which is later communicated through the network to the other participating nodes. We chose the merge operation to occur between the closest nodes, or those with the least latency. As an example, in Fig. 4, p3 and p4 share their GSTs with p1 and p2, respectively. Both p1 and p2 perform the merge operation in parallel, with the GSTs received as files. Here, the primary reason for using files or external memory is solely the memory requirement of large genomic datasets, which can create a memory overflow for a single node. (Fig. 4: bi-directional partitioning scheme, where the data is separated into both rows and columns and merged using the shared memory model [8].)

Privacy preserving query execution
In this section, we discuss the mechanisms that allow privacy-preserving queries on suffix trees.

Merkle tree
A Merkle tree is a hash-based data structure that is often used as a data compression technique [15]. Here, the data are represented as the leaf nodes of a binary tree, and they are hashed together in a bottom-up fashion. The individual node values are determined from their children, which are concatenated and hashed with any cryptographic hash function (e.g., MD5, SHA-2 [16]). For example, the parent A of leaf nodes with values 0 and 1 will be A = h(h(0) || h(1)), where || represents concatenation.

Reverse Merkle tree (RMT)
In this work, we utilize a reverse of the Merkle tree hash where the data is hashed in a top-down fashion. For example, a child node will have the hash value A = h(P || h(0)), where 0 is the value of the node and P is the hash value of its parent. The sibling will analogously have B = h(P || h(1)), as shown in Fig. 5b. We initialize the root's hash value with a random value, namely a SALT, for additional security, as mentioned in Privacy preserving query execution.
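The two hashing directions can be contrasted in a few lines of Python using hashlib's MD5, as in the paper. The SALT and node values below are illustrative.

```python
# Minimal sketch of the two hashing directions described above, using MD5.

import hashlib


def h(data: bytes) -> bytes:
    return hashlib.md5(data).digest()


# Classic (bottom-up) Merkle: a parent is the hash of its children's hashes.
def merkle_parent(left_value: bytes, right_value: bytes) -> bytes:
    return h(h(left_value) + h(right_value))


# Reverse Merkle: a child is the hash of its parent's hash and its own value,
# so every node's hash encodes the whole path from the (salted) root.
def reverse_merkle_child(parent_hash: bytes, value: bytes) -> bytes:
    return h(parent_hash + h(value))


if __name__ == "__main__":
    salt = bytes(16)                         # root SALT (kept secret in the scheme)
    left = reverse_merkle_child(salt, b"0")  # h(SALT || h(0))
    right = reverse_merkle_child(salt, b"1") # h(SALT || h(1))
    print(merkle_parent(b"0", b"1").hex())
    print(left.hex(), right.hex())
```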
Here, as the GST is constructed in parallel, we hash the content of the individual nodes alongside the SNP values. The hash values are passed down to the children nodes and combined with their hashed SNP values. In Fig. 5, we show the example of a reverse hash tree for the sequence S1 = 010101. In each node, we take the hash of the parent node and combine it with the hash of that node's value. Notably, in Fig. 5b the leaf nodes also have the position of the suffix appended, together with the nucleotide value (represented as $). The rationale behind using the reverse Merkle tree is to represent the suffixes by hash values for faster matching. Here, the hash value on a leaf node represents the corresponding suffix along that edge path of the GST. For example, the longest path in Fig. 5 represents S1:0 and contains the hash for the suffix 010101. We also keep the position of the suffix alongside the hash values. These leaf hash values are kept separately for incoming queries, which accelerates the search process, as described in Privacy preserving query execution.

Example 5: For a sequence S = 0110, RMT will initially produce the hash h(s_1), where s_1 = 0. It then proceeds to the next character s_2 = 1 and concatenates both hash outputs. However, h(s_1) || h(s_2) doubles the size of the fixed-bit hash output, so it is hashed again to keep the size constant. h(h(s_1) || h(s_2)) is then concatenated with h(s_3), and so on; the hash obtained at the last position represents the final RMT output.

Cryptographic hash function
The cryptographic function employed to hash the values in each node is quite important. Multiple hash functions are available (e.g., MD5, SHA-1 [16]); they ultimately serve a similar purpose. These functions provide a deterministic, one-way method to retrieve a fixed-bit-size representation of the data. Therefore, hashing can also be considered a compression technique that offers a fixed size for arbitrarily long genomic sequences or suffixes. We utilized MD5 as an example function in our implementations; it is executed on every node as described in Reverse Merkle tree (RMT). Here, it is important to consider the size of the hashed values, as MD5 provides a fixed 128-bit output. Using another hash function with better collision avoidance or more security (e.g., SHA-256) results in longer (256-bit) hash values, which increases the execution time linearly in the bit size. Nevertheless, MD5 is given only as an example and can be replaced with any cryptographic hash function.

Suffix tree storage
One of the major limitations of suffix trees is the number of nodes and the storage they require for longer input sequences. In the worst case, a sequence of length m has m + 1 unique suffixes. The number of suffixes also increases with the number of sequences and genomes (n, m). For example, m bi-allelic SNPs from one sequence can create 2^{m+1} − 1 nodes on the suffix tree. The content of these nodes is hashed according to the aforementioned Reverse Merkle Tree method. Due to the size of the resulting tree and its dependence on the size of the sequence, we utilize file-based storage in place of main memory. Here, all operations on the suffix tree, construction and queries, are run on physical files, which are later outsourced to the external semi-trusted computational environment. We next discuss the privacy model and the privacy-preserving outsourcing mechanism.
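Following Example 5, the reverse Merkle hash of a suffix path can be sketched as below; MD5 is used as in the paper, and the SALT is a stand-in value rather than a real secret.

```python
# Minimal sketch of the reverse Merkle hash of suffix paths: start from the
# (secret) SALT at the root and fold in one symbol per level; the hash at the
# last position stands for the whole suffix.

import hashlib


def h(data: bytes) -> bytes:
    return hashlib.md5(data).digest()


def rmt_suffix_hash(salt: bytes, suffix: str) -> bytes:
    """Top-down hash of a suffix: acc = h(acc || h(symbol)) per position."""
    acc = salt
    for symbol in suffix:
        acc = h(acc + h(symbol.encode()))
    return acc


def leaf_hashes(salt: bytes, sequence: str):
    """One (position, hash) pair per suffix, as kept at the GST leaves."""
    return {i: rmt_suffix_hash(salt, sequence[i:]) for i in range(len(sequence))}


if __name__ == "__main__":
    salt = bytes(16)  # stand-in for the secret SALT
    for pos, digest in leaf_hashes(salt, "010101").items():
        print(pos, digest.hex())
```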
Privacy model
The primary goal of the proposed method is to ensure the privacy of the data (located in the GST) in an untrusted cloud environment. Therefore, we expect the cloud to learn nothing about the genomic sequences beyond the results or patterns that are revealed by the traversal. Note that the proposed method does not guarantee privacy with respect to the query results, as it might be possible for the researchers to infer private information about an individual from those results. The proposed secure techniques do not defend the genomic data against such privacy attacks, where researchers may act maliciously. Nevertheless, we discuss some preventive measures using differential privacy in Discussion. The privacy assumption for the cloud service provider (CS) is different, as we adopt the semi-honest adversary model [17]. We assume that CS will follow the prescribed protocols but may attempt to retrieve additional information about the data from the underlying computations (i.e., logs). This is a common security definition and is realistic in a commercial cloud setting, since a cloud service provider complies with the user agreement and cannot use or publish the stored data without lawful intervention. Furthermore, in case of a data breach on the server, our proposed mechanism should protect the privacy of the underlying genomic data. In addition, the system has the following properties: a) CS does not collude with any third party or researchers to learn further information, b) in case of an unwanted data breach on CS, the stored GST (or genomic data) does not reveal the original genomic sequences, and c) researchers are assumed honest and do not collude with other parties to breach the data. Formally, let the researcher and the cloud server be P_1 and P_2, respectively. P_2 stores a private database D while P_1 wants to execute a string function f(q, D) based on a query string q. For example, this function can be any of the string queries defined in Definitions 1, 2, 3 and 4. The privacy goal of the targeted method is to execute f(q, D) in such a way that P_1 and P_2 are both unaware of each other's input and only learn the output of f. We assume that P_2 is semi-honest, as it does not deviate from the protocol. Furthermore, no polynomially bounded adversary can infer the sensitive genomic data from the outsourced D if it gets compromised.

Privacy-Preserving outsourcing
As the GST is constructed in parallel in a private cluster, the resulting suffix tree is stored (or outsourced) on a commercial cloud server (CS). The researchers present their queries to this CS, and CS searches the GST for the corresponding queries. For example, if we consider the four queries from String Queries q, each warrants a different number of searches throughout the outsourced GST. Since we intend to ensure the privacy of the genomic data in an untrusted environment, we remove the plaintext nucleotide values from the GST, replacing them with their Reverse Merkle hash values according to Definition 6. For example, the GST in Fig. 5a is hashed in a top-down fashion where the leaf nodes contain the sequence number and corresponding suffix position. Since a genomic dataset will only have a limited set of input characters (A, T, G, C), hashing them individually would always produce the same output. As a result, CS (or any third party) could infer the hashed genomic sequences.
Therefore, to protect the privacy of the data, we utilize two methods: a) a random byte array is added to the root of the GST and kept hidden from the CS, and b) the final hash values are encrypted with the Advanced Encryption Standard in cipher block chaining mode (AES-CBC) prior to their storage. As the one-way public hash function would reveal the genomic sequence due to its limited alphabet size, we need to randomize the hash values so that no adversary can infer additional information. Such inference is avoided with a standard random byte array, namely a SALT. Here, the root of the GST (Fig. 5a) contains a SALT byte array which is never revealed to CS. As this SALT array of the root node is appended to its children nodes, it cascades and alters all the hash values downstream, making them appear random. For example, while generating Fig. 5b from 5a, the left and right children of the root S1 will contain the values h(SALT || h(0)) and h(SALT || h(1)), respectively. For simplicity, the random SALT bytes can be assumed to be of the same length as the hash function output, k (128 random bits for MD5). Since CS does not know these random k bits, it would need to brute-force through the 2^k possible values, which is exponential in nature. Since the hashing is also applied repeatedly, it is challenging to infer meaningful information from the RMT hash tree for an arbitrarily long genomic dataset. Notably, the SALT bytes are shared with the researcher, as they are required to construct the queries as well. To further improve the security, the individual hash values are also encrypted with AES-CBC using 128-bit keys. This AES mode requires a random Initialization Vector (IV), which is also shared with the researcher but kept hidden from CS. This encryption provides an additional layer of security in the unlikely event that CS is compromised: the encrypted hash values are randomized and should prevent further data leakage. The procedure to obtain the encrypted Reverse Merkle tree is described in Algorithm 1. In summary, the output from the data owner to CS is the encrypted GST, E_GST, in which every node value is encrypted. The process is demonstrated in Fig. 6. Therefore, according to our privacy model in Privacy model, the RMT containing the encrypted hash values of the original dataset is safe to transfer to a semi-honest party [17]. As we also assume CS to be honest-but-curious [17], it follows the assigned protocols and does not attempt any brute-force attacks on the hashed values. However, in the event of a data breach, the proposed encrypted tree suffers the same limitations as symmetric encryption in general. Notably, some of these can be avoided by using asymmetric encryption or separate secret keys for different heights or depths of the GST, which would strengthen the security; we discuss this in Discussion. It is also important to note that the size of the suffix tree is a factor to consider when deciding on the underlying cryptosystem. We picked the symmetric encryption scheme AES partly for this reason, as it does not increase the size of the hash output. For example, the MD5 output for every suffix tree node is 128 bits, and these 128 bits are then encrypted with AES-CBC (the return AES-CBC(result, key) step of Algorithm 1), which represents the final content stored in the suffix tree nodes. The encrypted hash values thus do not increase the size of the content.
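A minimal sketch of the idea behind Algorithm 1 is given below: a salt-seeded reverse Merkle hash of a node value followed by AES-CBC encryption of the 16-byte MD5 digest. The key, IV and SALT are illustrative placeholders, and the third-party cryptography package is an assumed dependency; in the actual scheme these secrets are generated by the data owner and shared only with the researcher, never with the cloud server.

```python
# Minimal sketch: salted reverse Merkle hash of a node value, then AES-CBC
# encryption of the 16-byte MD5 digest (exactly one AES block, so no padding).
# Requires the third-party 'cryptography' package (assumed dependency).

import hashlib
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def h(data: bytes) -> bytes:
    return hashlib.md5(data).digest()


def encrypt_node_hash(node_hash: bytes, key: bytes, iv: bytes) -> bytes:
    """AES-CBC over one 16-byte digest; a fixed IV keeps the encryption
    deterministic so identical hashes yield identical ciphertexts, which the
    later matching step relies on."""
    enc = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()).encryptor()
    return enc.update(node_hash) + enc.finalize()


if __name__ == "__main__":
    salt = bytes(16)  # secret SALT stored at the GST root (illustrative)
    key = bytes(16)   # 128-bit AES key (illustrative)
    iv = bytes(16)    # shared IV (illustrative)
    child_hash = h(salt + h(b"0"))  # reverse Merkle hash of a root child
    print(encrypt_node_hash(child_hash, key, iv).hex())
```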
Privacy-Preserving query execution
The four queries mentioned in String Queries q are executed over the AES-CBC-encrypted RMT hash values outlined in Reverse Merkle tree (RMT). These hash values compress the nucleotides available on each edge to a fixed number of bits (the size of the hash) and offer an advantage when searching over the whole GST.

Hash Index (HI): Prior to the query, CS creates another intermediary index on the encrypted hash values from E_GST. Since our hash function always provides a fixed-size output (in bits) for each node, a binary tree constructed over the symmetrically encrypted bits of E_GST can effectively speed up the search. For example, MD5 will always output the same 128 bits for the same SALT and series of nucleotides using RMT. Encrypting these fixed-size values with AES-CBC under the same key produces ciphertexts which can later be utilized for searching, as the researchers will arrive at the same ciphertexts for any matching query. The outputs of the AES-CBC bits are kept in a binary tree having a fixed depth of 128 (from root to leaf), as we use 128-bit encryption. Here, the leaf nodes point to the hash values, i.e., the nodes appearing in the RMT. We name this binary tree HI, as it replaces an exhaustive search operation on the GST (outlined in Fig. 6). Notably, we add the positions of the suffixes from the GST into HI using the end character $ that was appended to the genomic sequences. This positional value (e.g., $0 representing S:0) contains the starting index of the suffix, which is necessary for all queries with a targeted position. We can demonstrate the efficacy of HI for the Exact Match (EM) query defined in Definition 1. Here, the researcher initiates the query, as s/he may have one or multiple genomic sequences to search for in the dataset D. The researcher constructs the hash representation of the query using the secret key and the random byte array (SALT) that was employed to build the GST stored on the CS. For example, if the query is 010101, then the query hash will be h(. . . h(h(SALT || h(0)) || h(1)) . . .). It is then encrypted with the key, E_Q = E(Q_h, key, IV), and sent to CS for matching. The procedure to retrieve E_Q is given in Algorithm 2. CS searches for this E_Q in the fixed-size index HI first. If the hash exists in the binary tree, CS returns the leaf node that HI references. Here, only the leaf nodes of HI keep a reference to the Reverse Merkle Tree nodes, which is sent as the final result to the researcher (in case of a match). For a mismatch, there will be no node in HI for the query hash, and consequently the GST does not need to be checked any further.

Lemma 1: For a hash function with fixed output size k, Exact Match (Definition 1) requires a worst-case runtime of O(log k) for any arbitrary query.

In Lemma 1 we consider the output size of the hash function for simplicity, as AES produces the same number of bits as its input. Nevertheless, we can extend the method for EM (with the same runtime as Lemma 1) to execute the rest of the queries. For example, a substring match can be an extension of EM where we consider a query length |q| smaller than the sequence length (< m) and only allow exact matches of length |q|. This is also possible using HI, which represents the strings residing in the GST. Similarly, for the Set Maximal Matching (SMM, Definition 3), the researcher and CS perform an iterative protocol. The researcher initially searches for the whole query following the same protocol from Fig.
6 on the HI, leading to the GST residing on CS with the specific position. For a mismatch, the researcher reduces the query length by one and iterates the search until there is a match. The worst-case running time of this operation is in the order of O(|q| log k). TSMM (Definition 4) is an extension of the same protocol where we have a threshold constraint, which further reduces the computations to O(t log k) given t > |q|.

Hiding query access pattern
Garbled Circuits (GC) allow our two parties, CS and the researcher, to execute a secure protocol between themselves which guarantees the privacy of their inputs. In our proposed method, the encrypted HI does the major search operation as it is outsourced to CS. However, the input from the researcher is not hidden from CS, and such an access pattern for an arbitrary query might reveal additional information; we avoid this using GC. In this GC protocol, the hash values are represented as binary strings and matched against HI using the oblivious transfer technique. The researcher produces the reverse Merkle hash value Q_H according to Algorithm 2. Here, the query sequence is a path, as each node has only one child. Each query node denotes the corresponding nucleotide at the specific position. For example, the root has the first nucleotide (lowest position) as its child, and the leaf node is the last position of the sequence. We perform the reverse Merkle hashing on such a path graph, where the leaf node represents the final query hash value. The root of this path contains the same secret SALT used to construct the reverse Merkle tree. The resulting fixed-length hash value is then matched against the binary tree HI on the CS through a GC protocol. Notably, the size of the query hash and the height of HI are the same, as we use the same hash function. Here, CS and the researcher engage in a fixed number of interaction rounds, where each bit from the query hash serves as an input from the researcher while CS inputs the corresponding bit value from HI. For example, in the first round, the researcher puts the first bit of the query hash into the GC protocol. CS randomly picks a child of the root of HI (0 or 1) and sets it as its input to the GC. The circuit utilized here is an XNOR operation between these two input bits, which outputs 1 only if the input bits are the same. Importantly, the input bits are not revealed to either party, while the output is only available to CS. If the input bits match, CS proceeds with the currently selected node's children; otherwise it takes the node's sibling. There are only two candidate nodes at each level, as HI is a binary tree. CS then sends a request to match the next bit, and both parties repeat the procedure until a mismatch occurs. Here, a mismatch of the bits denotes that CS does not have such a hash value in the GST, hence the query sequence is not present in the dataset; in this case, CS reports that no sequence in dataset D matches the query. For a match at the leaf node, CS returns the encrypted suffix sequence referenced at the HI leaf node. Since the four queries can essentially be reduced to matching the hash values along the query path and HI (Lemma 1), we do not discuss the corresponding GC protocols for each query in detail. In short, SMM or TSMM requires the researcher to remove nucleotides from his/her query and iterate the GC protocol for each query edit. In our proposed method, the researcher and CS can guess each other's inputs, as it is an XNOR circuit on the hash outputs.
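The traversal that the GC protocol realizes obliviously can be sketched in plaintext as a bit-by-bit walk of the binary hash index. The fragment below omits all cryptographic machinery (oblivious transfer, garbling) and uses made-up digests; it only shows the matching logic that the per-bit XNOR comparisons implement.

```python
# Minimal plaintext sketch of the bit-by-bit HI traversal (no GC, no OT).

import hashlib


def build_hash_index(leaf_records):
    """HI as a bit-trie: {'0': .., '1': .., 'ref': suffix reference at the leaf}."""
    root = {}
    for digest, reference in leaf_records:
        node = root
        for byte in digest:
            for i in range(7, -1, -1):
                bit = "1" if (byte >> i) & 1 else "0"
                node = node.setdefault(bit, {})
        node["ref"] = reference
    return root


def lookup(hash_index, query_digest):
    """Walk HI with the query hash bits; return the referenced suffix or None."""
    node = hash_index
    for byte in query_digest:
        for i in range(7, -1, -1):
            bit = "1" if (byte >> i) & 1 else "0"
            if bit not in node:  # mismatch: query not in the dataset
                return None
            node = node[bit]     # corresponds to XNOR == 1 in the GC formulation
    return node.get("ref")


if __name__ == "__main__":
    d1 = hashlib.md5(b"suffix-010101").digest()  # stand-ins for encrypted RMT hashes
    d2 = hashlib.md5(b"suffix-101010").digest()
    hi = build_hash_index([(d1, "S1:0"), (d2, "S2:0")])
    print(lookup(hi, d1))                                # 'S1:0'
    print(lookup(hi, hashlib.md5(b"missing").digest()))  # None
```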
However, the input query sequence and the data in the GST are kept hidden from both parties in case of a mismatch. We argue that under the targeted privacy model (Privacy model), the resulting sequences and the query can be public to both parties. Nevertheless, we also considered a non-iterative mechanism to perform the search, which operates on the full binary tree HI (input from CS). Here, the complete encrypted query hash E_Q from the researcher is an input to the GC while CS inputs HI. This method matches the hash bits obliviously and only outputs the matching suffix, avoiding the disclosure of the in-between matches. However, it incurred a longer execution time, which is further discussed in our limitations in Discussion.

Experimental results and analysis
Before discussing the findings, we describe the implementation and the underlying dataset details.

Datasets and evaluation
Since the proposed method is scaled and evaluated against different parameters, we use randomly distributed synthetic datasets. We generate different datasets with {n, m} ∈ {200, 300, . . . , 1000} and name them D_{n×m}. We agree that genomic data with n, m in the millions would portray the true benefit of our proposed parallel constructions, but due to our computational restrictions, we limited our experimental settings [18]. However, we argue that larger datasets will show the same trend in terms of execution time as we increase the parallel computational power. Our implementations are publicly available at [19].

Suffix tree construction speedup [8]
We measure the suffix tree construction speedup according to the dataset size (n, m), the number of processors p, and the different memory models: distributed, shared and hybrid. The distributed or shared model does not employ intra-node or inter-node parallel operations, respectively, whereas the hybrid method utilizes both. It is important to note that the results for the parallel GST construction are also available in [8], which we summarize here. Tables 2 and 3 contain the execution times for all three partitioning schemes: horizontal, vertical and bi-directional. We also report the results for the three memory architectures while varying the number of processors p = {2, 4, 8, 16}. Notably, the sequential or serial execution is denoted by the p = 1 case, which is plain Ukkonen's algorithm [14]. Table 2 shows that the GST building times for smaller datasets are almost the same for all memory models and experimental settings. However, the differences in execution time become clearer as the dataset size (n, m) increases. For example, D_200×200 needed 0.08 mins in the serial execution (p = 1), whereas D_1000×1000 required 14.55 mins. The distributed model needed 6.09 minutes, showing the added network complexity and operations required by inter-node communication. The hybrid memory model performed better, taking only 3.08 mins with 16 processors. Interestingly, for these smaller datasets, the shared memory model outperforms the other memory settings. Unlike the distributed model, the shared architecture requires no network connectivity as it splits the work across different cores. It required the lowest time in all settings, around 2.28 minutes with 16 processors. However, it did not successfully process larger datasets, with more than 1000 nucleotides and genomes. Since the memory and processors in a shared model are fixed, whereas they can be extended in a distributed setting by adding new machines, larger datasets will require the latter approach.
In Table 2, we also show the results for vertical partitioning, which has the extra step of path graph addition; this step is not present in horizontal partitioning. This additional step increased the execution time, taking 25.24 minutes to process D_1000×1000, compared to 16.23 mins with the horizontal partitioning. The bi-directional partitioning results are shown in Table 3. Compared to the prior two data partitioning schemes, the tree build cost is reduced here, as there are smaller sub-trees to join. For example, with four processors (p = 4) and n = m = 1000, each processor gets a 25 × 25 block of the genome dataset, compared to the four subsets of 100 × 25 and 25 × 100 for the vertical and horizontal schemes, respectively. In Table 4, we report the execution times for the individual operations: tree building, path graph addition and tree merging. Here we report the maximum time for each operation from each run, since they were run in parallel. These operations are the building blocks of the execution times posted in Tables 2 and 3. Table 5 summarizes the speedup results for dataset D_1000×1000. Speedup is defined as T_seq/T_par, and the shared model performed better than the distributed one. However, the distributed model's results are comparable for the p > 2 settings. Notably, the shared architecture could not process the dataset D_10,000×10,000 due to memory constraints and was excluded from the results. This limitation was not present for the distributed and hybrid settings, which were able to construct the GST. One of the limitations of the proposed framework is the size of the resulting suffix tree. Since the node contents are hashed and encrypted, the memory requirements also increase, which is why we utilized file-based memory to handle the queries. For example, datasets of size {500 × 500, 1000 × 1000, 2000 × 2000, 5000 × 5000} take around {109, 433, 1754, 11356} megabytes of storage space, respectively. Notably, this storage accounts for the hashed tree and the AES-CBC-encrypted node values of the Merkle tree. Furthermore, we opted to experiment with a relational database (MySQL) to save the encrypted tree, which is detailed in our code repository [19].

Query execution
Experimental setup
In this section, we analyze and discuss the execution times of the four queries in private and non-private settings, as defined in String Queries q. We utilized an Amazon EC2 cloud server (instance type g4dn.xlarge) in Oregon, US as the cloud server, while the researcher was located in Winnipeg, Canada. The average network latency between the CS and the researcher was around 49 ms. The key components of the result analysis are as follows: 1. execution time for all queries with worst-case inputs, 2. the effect of dataset size and query length, 3. the impact of GST and HI, and 4. the runtime comparison between hashing and GC. We targeted the worst-case input queries as they highlight the maximum execution time for each type of query. For example, for exact matches (EM), we randomly picked a query sequence from the dataset; as any mismatch on the HI forcefully reduces the computations, we chose to pick available sequences for Query 1. For the SMM and TSMM queries (3 and 4), we preferred a random query sequence that was not present in the dataset, since for a mismatch, SMM (and TSMM) will redo the search while altering the query sequence. This shows the maximum execution time required.
Alternatively, if we picked a sequence from the dataset (similar to EM), it would not be necessary to traverse the HI and it would yield the same execution time as EM. Therefore, our four targeted queries can essentially be reduced to EM. For example, we do not discuss the exact substring matches in this section, as they took the same time as EM. We also limit the execution time analysis to two datasets, D_1000 and D_500, as the data dimension increases the size of the GST. Therefore, we examine the scalability issues with different query lengths |q| ∈ {300, 400, 500} and (n, m) ∈ {(1000, 1000), (500, 500)}.

Execution time for GST (w/o privacy)
Initially, we analyze the execution time of the targeted queries on plaintexts without any privacy guarantee in Table 6. Here, we only execute the queries on the generalized suffix tree (GST) as outsourced on CS, and simulate the researcher on the same server to avoid random network latency. The execution times in Table 6 clearly show that longer query sequences (i.e., |q| = 500) require more time than smaller queries. As we are searching the suffix tree, our GST indexing presents a runtime linear in the query length |q|. Notably, the GST allowed us to remove the runtime dependency on the number of sequences or nucleotides (n or m), which is often high for genomic datasets. One interesting observation here is the scalability property of GST on datasets of different size. Considering the two datasets D_1000 and D_500 with n, m = {1000, 500}, the runtime does not seem to increase significantly. Ideally, traversing the GSTs from D_1000 or D_500 for a query should not be different, but the increased number of nodes in memory adversely affects the query execution.

Execution time for HI (with privacy)
Since the query length |q| can also be arbitrarily large, we reduce its impact on the execution time by employing HI. This index HI, built on the GST, allows us to search only up to the hash output length |H| rather than |q|. We see its effect in Table 7: for different |q|'s, the execution time for EM did not increase, which is the opposite of the plaintext GST results shown in Table 6. Since we considered the worst-case inputs (non-matching query sequences) for SMM and TSMM, both types of queries required more matching requests on the cloud server. These iterative query executions increased the runtime incrementally. The effect of the dataset size is also analogous to our earlier argument, as the time varies only slightly for datasets of different size. We do not show the results for SMM over the garbled circuit, as they required over an hour each on the worst-case inputs. We also benchmark against recent work from Shimizu et al. [11], which utilized the positional Burrows-Wheeler Transformation with Oblivious Transfer (an OT-secure protocol) for input privacy. From the results in Table 6, it is evident that our Merkle hash along with HI provides a 4× speedup compared to the earlier work, as it takes 160.85 seconds to execute a set-maximal match on D_1000 (our method required 36.76 s). However, since this benchmark method only used OT rather than more expensive GC operations, it was faster than our GC protocol. The implementation from Sotiraki et al. [12] was not publicly available, which is why we could not add it to our benchmarking results.
Discussion In this section, we discuss some of the core results, limitations and some potential future works as well: Parallel construction of GST: GST provides an efficient index to the genomic data which can be used in many string-based search queries, fundamental to the utility of genomic data. However, the best sequential algorithm is linear to the sequence length which can prove to be significant for a large dataset with longer genomic sequences n, m. Therefore, constructing such an expensive tree-based data structure is handled by the proposed parallel mechanism, which is required to be executed only once while pre-processing any dataset. Storage complexity of GST: On contrary, we use a filebased GST for two fundamental reasons: a) higher storage requirement for the suffix tree, and b) fixed main memory in comparison to persistent disk memory. This also warrants the usability of cloud servers, which offer less-expensive storage solutions. Here, GST warrants an expensive storage cost as the number of suffixes increases linearly in order of the length of the sequence (m). For example, a genomic sequence of length m has m + 1 suffixes which increases for increasing values of n, m. Also, for m genomes (bi-allelic SNPs), in the worst case, it can create 2 m+1 − 1 nodes on the suffix tree. Resultingly, we incorporate another fixed-size index HI on GST, which acts as the principal component while searching and can fit into the main memory. Privacy guarantee from encrypted hash value: The privacy of the data relies on the symmetric AES cryptosystem along with the random SALT bytes kept on the root node of Reverse Merkle Hashing. We did not use any asymmetric public-key encryption scheme due to the resulting ciphertext size expansion. Nevertheless, the recently improved homomorphic encryption schemes might be beneficial in this task and provide additional security guarantee [20,21] which is an interesting future work. Privacy Guarantee from GC: In our proposed method (Hiding query access pattern), GC plays its part in matching the bit values of the query hash and node values on HI. Here, the researcher and CS are unaware of each party's inputs unless there is a match. However, it still reveals the encrypted query for a query sequence that exists on HI. This could be avoided with a more rigorous GC protocol where the whole query E Q and HI will be taken as inputs. However, searching the whole query obliviously were not computationally feasible and we did not report it here. This process can be efficient with leveled execution of the searching on HI which can be investigated in the future. Output privacy: To protect the genomic data against any malicious researchers, we can perturb the outputs from CS with some privacy guarantee. One method to attain output privacy is by adding noise to the query results, and these techniques have been studied in the literature such as anonymization [22], differential privacy [3]. However, we did not opt for these strategies as they will thwart the exact results from any query and validity is quintessential in any scientific research. The realistic model in genomic research also assumes the researchers to be honest as they adhere and understands the privacy requirements of genomic data. Conclusion Executing string queries on genomic data is not a new research area; however, a privacy-preserving approach for string queries has received little attention in the literature. 
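As a rough illustration of the ingredients named above (Merkle-style salted hashing plus symmetric AES-CBC encryption of node values), the following sketch assumes the third-party Python cryptography package and toy key/salt handling; it is not the authors' implementation and omits the reverse-Merkle structure and the garbled-circuit matching.

```python
import os, hashlib
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SALT = os.urandom(16)          # random salt bytes, as kept with the root node
KEY = os.urandom(32)           # 256-bit AES key held by the data owner

def node_hash(label, child_hashes):
    # A node's hash depends on its label, the salt and its children's hashes,
    # so any change propagates upward (Merkle-style).
    h = hashlib.sha256()
    h.update(SALT + label)
    for c in child_hashes:
        h.update(c)
    return h.digest()

def encrypt_value(value):
    # AES-CBC with PKCS7 padding; the IV is stored alongside the ciphertext.
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(value) + padder.finalize()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

leaf = node_hash(b"ACGT$", [])
root = node_hash(b"root", [leaf])
print(encrypt_value(b"node payload").hex()[:32])
```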
The primary contribution of this paper is a hash-based mechanism for outsourcing and executing privacy-preserving queries on genomic data. Because the construction operation is expensive, a parallel generalized suffix tree construction is proposed that utilizes distributed and shared processing capabilities as well as external memory. The proposed parallel constructions and privacy-preserving query techniques can also be generalized to other data structures (e.g., prefix trees [23], PBWT [11]) and can therefore be useful for other genomic data computations. We also analyzed the performance using different datasets and sample queries.
2022-06-18T13:16:33.426Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "97abfb48adb3f19ad4110fcd60d1aac1b0cbf706", "oa_license": "CCBY", "oa_url": "https://bmcgenomdata.biomedcentral.com/counter/pdf/10.1186/s12863-022-01053-x", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "97abfb48adb3f19ad4110fcd60d1aac1b0cbf706", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
248672435
pes2o/s2orc
v3-fos-license
A Phone Consultation Call Line to Support SBIRT in Pediatric Primary Care Background Screening Brief Intervention Referral to Treatment (SBIRT) is recommended as a routine part of pediatric primary care, though managing patients with positive screens is challenging. To address this problem, the state of Massachusetts created a call line staffed by pediatric Addiction Medicine specialists to provide consultations to primary care providers and access to a behavioral health provider specially trained in managing adolescent substance use. Objective To describe the uptake and outcomes of a consultation call line and virtual counseling for managing substance use disorders (SUD) in pediatric primary care. Methods Service delivery data from consultations and counseling appointments were captured in an electronic database including substance, medication recommendations, level of care recommendations and number of counseling appointments completed for each patient. Summary data is presented here. Results In all, there were 407 encounters to 108 unique families, including 128 consultations and 279 counseling visits in a one-year period. The most common substances mentioned by healthcare providers were cannabis (64%), nicotine (20%), alcohol (20%), vaping (9%) and opioids (5%). Management in primary care was recommended for 87 (68%) of the consultations. Medications for SUD treatment were recommended for 69 (54%) consultations including two for opioid use disorder. Conclusion We found that both a statewide consultation call line and virtual counseling to support SBIRT in pediatric primary care were feasible. The majority of consultations resulted in recommendations for treatment in primary care. INTRODUCTION Adolescents and young adults are the age group most likely to use psychoactive substances (1). Worldwide, more than 25% of adolescents use alcohol (2016 data) and more than 10% use nicotine (2008-2018 data) (2). Substance use (SU) during vulnerable windows of brain development that occur during adolescence and young adulthood is associated with adverse functional outcomes across domains (education/employment, family/social, health). As such, substance use (SU) is among the most important health risk behaviors for youth. Healthcare professionals are called upon to help to mitigate the impact of substance use on youth, and screening for SU has long been recognized as important part of general healthcare. The World Health Organization (WHO) Alcohol, Smoking, and Substance Involvement Screening Test (ASSIST) has been validated in international settings, including with adolescents, and an accompanying manual has been developed to provide guidance on SU screening in primary care (3). In the United States (US), the American Academy of Pediatrics (AAP) recommends Screening, Brief Intervention, Referral to Treatment (SBIRT) for adolescents and has published detailed guidance on best practices (4). Pediatrician self-reported rates of SU screening and brief advice are high (5), though brief interventions and referrals are less common, and clinical expertise and community resources are significant barriers to SU treatment (6). The US has a national shortage of pediatric mental health and behavioral care providers (7), limiting access to specialty care and increasing the pressure on primary care to expand services. Phone consultation programs that connect primary care providers with specialists are a promising approach to leverage the limited supply of child psychiatrists (8). 
These access lines can provide tele-consultation, training, resources, and referrals to providers (9). Nationwide, 45 such programs have been funded by the Health Resources and Services Administration (HRSA) (8). Children living in states with a consultation program have significantly greater mental health service use (7). The Massachusetts Child Psychiatry Access Project (MCPAP) is the access line that serves the state of Massachusetts (10). More than 95% of pediatric primary care practices are enrolled in the MCPAP and the program provides mental health consultations to providers for more than 5,000 youth who receive care in a pediatric setting annually. There is no charge to any provider or patient for the consultation service. The majority of consultations come from pediatricians or pediatric nurse practitioners. For this project, the MCPAP created a new phone consultation service staffed by pediatric Addiction Medicine specialists specifically to address questions regarding adolescent substance use from youth-facing primary care providers. This new service is available to all enrolled MCPAP practices (11). Here, we report usage metrics in order to assess the uptake and outcomes of this innovative statewide SBIRT support service. METHODS This report presents results of a retrospective audit of telephone consultations and virtual visits over a 1-year period, from Jan 1 through December 31, 2021. Program Description The pediatric substance use consultation call line described here is available to any primary care provider within the state of Massachusetts with questions regarding adolescent substance use. Providers access the service by calling a central phone number that is shared with the MCPAP mental health consultation line. The service is available during normal business hours and is not designed to respond to emergencies. Trained administrators triage questions regarding substance use to the substance use line. New calls received by the substance use call intake coordinator are forwarded to the covering consultant. At inception, providers were made aware of the new service through an email announcement, an article in the quarterly MCPAP newsletter sent to all registered users and a webinar open to all registered users. Providers who requested consults regarding substance use through the general MCPAP line were informed about ASAP-MCPAP by an intake coordinator and those consultations were forwarded to ASAP-MCPAP. Consultants Consultants are all faculty members of the Adolescent Substance Use and Addiction Program at Boston Children's Hospital (ASAP) and Addiction Medicine Fellows. Primary specialties of the consultants include General Pediatrics, Developmental-Behavioral Pediatrics and Child and Adolescent Psychiatry. All consultants are board eligible, board certified or in training in Addiction Medicine or a nurse practitioner with extensive experience in adolescent substance use. Consultations Consultants return all calls directly to the primary care provider that requests consultation. Most calls are returned on the same day. Addiction Medicine consultants did not speak directly with patients or families. Virtual Counseling When appropriate, consultants recommend virtual counseling with the substance use BH provider. 
In general, patients were considered appropriate if they met the following criteria: 1) patient, parent or provider have concerns regarding substance use, 2) appropriate for outpatient therapy, 3) other forms of substance use counseling (integrated behavioral health or community referral) not available. Patients were considered ineligible for virtual care if referred to a higher level of care (i.e. outpatient substance use disorder treatment, intensive outpatient, residential treatment, etc.), if they were considered at high risk of withdrawal symptoms that require medical management (i.e. from alcohol or benzodiazepines) or if at high risk of overdose. In person assessment was recommended for patients with communication disorders for whom virtual care was not considered appropriate by the primary care provider, and patients for whom there were concerns of domestic violence. A specially trained behavioral health (BH) provider conducted all appointments virtually using the Boston Children's Hospital (BCH) virtual visit platform. In this program, the behavioral health provider is a licensed independent clinical social worker. After each initial counseling appointment, the BH provider reviewed treatment recommendations with the referring PCP and entered an appointment encounter in the electronic database and a clinic note into the BCH electronic medical record. For this project, we analyzed data from every encounter completed between January 1 and December 31, 2021. Encounters Each consultation request was entered into a secure electronic database that is compliant with the Health Insurance Portability and Accountability Act (HIPAA) of 1986. The encounter data fields include patient demographic information (age, sex, insurance plan, and de-identified member number), primary care practice, provider and encounter type, which were entered by an administrative assistant, and substance use concern, medication and outcome recommendations entered by the consultant (10). All identifiable patient information is encrypted and available only to consultants. The database is hosted by MCPAP, a third party contractor to the state of Massachusetts. Data summaries were provided by one of the authors (JS) who is the Founding Director of MCPAP. No personal health information was included in the database summary. This project was undertaken as a quality improvement effort and as such exempt from review by the Institutional Review Board. Primary Concern A list of 28 non-mutually exclusive concerns included eight substance use specific items (cannabis, nicotine, alcohol, vaping, opioids, stimulants, sedatives, non-specific substance). Medications Consultants selected from a list of 17 items including 14 commonly used psychopharmacologic agents, "Medication for Addiction Treatment (MAT)", "other" and "no meds after encounter". A free text field was available for "other" where specific medications were indicated. Outcome Consultants selected from nine outcomes describing recommended level of care, including: Primary Care Provider (PCP), bridge in primary care, therapist appointment -MCPAP (virtual ASAP therapist), therapist appointment non-MCPAP, ASAP, outpatient substance use program, Partial Hospital Program, Inpatient and Emergency Department. 
We considered recommendations for primary care provider, therapist appointment MCPAP, therapist appointment non-MCPAP as treated in primary care, while recommendations for outpatient substance use program, ASAP, partial hospital program and inpatient were considered specialty substance use treatment outside of primary care. We considered "bridge in primary care" to be a standalone category. RESULTS The ASAP-MCPAP program provided 407 encounters on behalf of 108 unique patients. Encounters were divided between consultation phone calls and virtual counseling visits as follows: • 128 consultation calls from Addiction Medicine specialists to Primary Care Providers. • 88 consultations were completed within a single call. • 20 consultations were completed over two calls. Counseling was recommended as part of the consultation for 49 patients; 36 patients (73%) completed at least one counseling visit. Monthly counseling appointment volume steadily increased over the 1-year observation period (Figure 1). The most common (non-mutually exclusive) substances mentioned by callers were cannabis (64%), nicotine (20%), alcohol (20%), vaping (9%) and opioids (5%). In 24 consultations (19%), callers did not identify the substance in question and for 9 consultations (7%) a mental health concern was considered primary and a substance was not listed. Recommendations for 87 consultations (68%) were for management in primary care, of those, 50 were also referred to an outpatient BH provider. Thirtyfour consultations (27%) resulted in a recommendation for specialty substance use disorder treatment, including 27(21%), 2 (2%), and 5 (4%) to outpatient, partial hospital and inpatient, respectively. For five calls, the consultant recommended "bridge treatment in primary care" and the level of care recommendation was not recorded. Two calls were referred to an Emergency Department for further evaluation. Medications for SUD treatment were recommended for 69 patients (54%), including two for opioid use disorder ( Table 1). DISCUSSION The unique pediatric substance use consultation and virtual counseling program described in this report provided 407 encounters on behalf of 108 unique families in a 1-year period. The volume of consultations decreased slightly between the first and fourth quarters of the observation period. We note that during the fourth quarter, Massachusetts experienced a surge in COVID-19 cases as the Omicron variant became prevalent. Additionally, during this period pediatric COVID vaccines became available. These two factors taxed pediatric healthcare resources and likely distracted attention from other issues including substance use. At the same time, virtual counseling appointments, which occurred outside of pediatric offices, increased over the course of the observation year. While there are no benchmarks to which to compare program volume, we consider our data an important demonstration of the utility of such a substance use consultation line. Cannabis was the most common substance identified by callers as the reason for concern. This finding is consistent with reports that have found cannabis the most common cause for adolescents to seek substance use treatment both in Massachusetts (12) and other states (13)(14)(15). While national data has found that alcohol use is more common than cannabis use among adolescents, daily or near daily cannabis use is reported by 3.1% of teens (16). 
Some of the consultation calls were seeking treatment advice for cannabis hyperemesis syndrome or acute psychotic reactions, two acute medical problems related to cannabis use. These problems are increasing in frequency (17,18) in association with policy changes that liberalize access to cannabis in Massachusetts and other states, and with increasing potency and variety of products available. These acute problems may cause patients to seek medical attention, and thereby shine a light on substance use in general for pediatric primary care providers. We provided six consultations regarding opioid use, and recommended medications for opioid use disorder for two patients. Compared to adults, adolescents are far less likely to receive medication (19) for opioid use disorder (MAT), despite the effectiveness for youth (20,21) and recommendations published by the AAP (22). Youth who do initiate treatment are more likely to be lost to follow up than older patients, (23). and it has been speculated that one of the reasons is that adult-centered substance use treatment programs do not meet youth's needs. Few OUD treatment programs for the general population provide services tailored for youth (24). Providing MAT in pediatric primary care can increase access to developmentally appropriate OUD treatment for youth, and is feasible (25). In this project, 73% of referred patients completed at least one counseling appointment and the mean number of counseling appointments was more than seven, which is similar to a recently published study that was conducted in a school-based setting (26). While the number of patients served by this project was small, providing MAT in pediatric primary care allows adolescents to access to treatment in the least restrictive setting. A consultation line can also be used to connect youth with OUD to other treatment settings where they can access MAT in combination with other treatments. Furthermore, consultation services may also help primary care providers appropriately address youth who report non-medical opioid use, but do not have an opioid use disorder. While rates of non-medical opioid use by high school students have decreased, approximately 2.3% of 12 th grade students currently report this behavior (16). These youth are at high risk of both acute (27) and long-term (28) consequences, thus attention to the behavior is warranted. Primary care, which offers adolescents an opportunity to have a confidential conversation with a trusted adult who can monitor their behavior over time, provides an excellent setting for this care. In this project, the majority of recommendations were for treatment in primary care, with consultants offering advice on SU management. Delivering brief interventions in primary care is critical because even when specialists are available, many adolescents decline referral (29). Integrating behavioral health services within primary care allows adolescent patients easier access and better protects confidentiality as compared to appointments in an unfamiliar setting (30). Research on primary care SBIRT is promising: an evaluation in a large medical system found that SBIRT was associated with decreased substance use diagnoses and emergency department visits at 3-year follow up post-implementation (31,32). 
Primary mental health concern/substance not listed 9 7% Outcomes 128 Treated in primary care 87 68% Therapist appointment -non-MCPAP 1 1% There is evidence that having specially trained BH providers do brief interventions may improve outcomes by decreasing rates of mental health disorders (33). More than half of the consultations recommending primary care management in this project were also referred for substance use counseling and all but one were referred to the program's own BH provider. In this model, counseling was delivered virtually, which may lower barriers for adolescent participation in substance use treatment (12). This program was designed to deliver coordinated care: healthcare professionals were the source of all referrals, received treatment recommendation summaries after each initial counseling visit and were encouraged to call the consultation line for support with medical components of care such as prescribing and drug testing as needed. The mean number of visits per patient was more than seven, representing substantial patient engagement, and supporting the acceptability of the program. Standard brief interventions do not include the use of laboratory testing or medications to treat withdrawal or suppress cravings; adolescents are far less likely to receive effective treatment for substance use disorders (19,34,35) compared to adults. Consultants in this project recommended medication for substance use treatment, including nicotine replacement, naloxone, naltrexone and others, for more than half of all calls, suggesting that consultation service may be a good way to increase dissemination of medication for addiction treatment in youth. Referral to treatment is the least studied aspect of adolescent SBIRT (30). Historically, few pediatric primary care providers refer adolescents with substance use concerns (36). While referrals and follow up appointments for problematic substance use may be becoming more common over time, healthcare providers report substantial barriers, (5). including unwillingness of adolescent patients to accept a referral or engage in care. Indeed, most adolescents with substance use disorders do not see their use as problematic (37). In this project, consultants recommended substance use specialty treatment in 27% of cases and provided support to PCP's including program information and suggestions for speaking with adolescent patients and their families about accepting a referral. Our work has limitations. Consultants entered secondhand information reported by primary care providers and were unable to make diagnoses. However, information we recorded accurately represents the concerns presented by callers and as such, may be useful for planning efforts in other locales. We drew data from a clinical database and it is possible that different consultants used codes in the encounter form differently from one another, though we believe these differences are small as the group of consultants work together closely and communicated often. We do not know how many patients received the recommended medications or accepted referrals to substance use specialty treatment, nor do we have detailed patient-level outcome data to determine improvement. These are important quality measures that could be assessed in a future study. Finally, the scope of the project was small, and the work should be considered a pilot; data from a larger, scaled up version could be assessed in the future. 
We conclude that in this project, provider to provider substance use consultation and provision of virtual substance use counseling enabled youth to access intervention for substance use within pediatric primary care. These services were offered through a statewide pediatric primary care access program. The infrastructure upon which these programs can be scaled exists because similar programs are available in most states and territories. Given the dearth of substance use treatment services for adolescents, innovative models such as this one may play an important role in building capacity. DATA AVAILABILITY STATEMENT The data analyzed in this study is subject to the following licenses/restrictions: The dataset used for this article reflects information about clinical encounters for current patients. Requests to access these datasets should be directed to sharon.levy@childrens.harvard.edu. AUTHOR CONTRIBUTIONS SL, AF, SK, JL, EW, and JS have participated in relevant study conception and design, acquisition of data, and analysis and data interpretation activities. All authors contributed to drafting or revising of the manuscript and have approved the manuscript as submitted. FUNDING This project was supported in part by a subcontract through the American Academy of Pediatrics, on HRSA grant #1H7AMC37566-01-00.
2022-05-11T13:25:45.663Z
2022-05-11T00:00:00.000
{ "year": 2022, "sha1": "8b8bb9b1e7841a19d91300df625dffc7057868aa", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "8b8bb9b1e7841a19d91300df625dffc7057868aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7279291
pes2o/s2orc
v3-fos-license
Incremental approaches to knowledge reduction of covering decision information systems with variations of coverings In practical situations, calculating approximations of concepts is the central step for knowledge reduction of dynamic covering decision information system, which has received growing interests of researchers in recent years. In this paper, the second and sixth lower and upper approximations of sets in dynamic covering information systems with variations of coverings are computed from the perspective of matrix using incremental approaches. Especially, effective algorithms are designed for calculating the second and sixth lower and upper approximations of sets in dynamic covering information systems with the immigration of coverings. Experimental results demonstrate that the designed algorithms provide an efficient and effective method for constructing the second and sixth lower and upper approximations of sets in dynamic covering information systems. Two examples are explored to illustrate the process of knowledge reduction of dynamic covering decision information systems with the covering immigration. Introduction Covering-based rough set theory [41] as a generalization of Pawlak's rough sets is a powerful mathematical tool to deal with uncertainty and imprecise information in the field of knowledge discovery and rule acquisition. To handle with uncertainty knowledge, researchers have investigated covering-based rough set theory [4,21,25,26] and presented three types approximation operators summarized by Yao [40] as follows: element-based operators, granular-based operators and system-based operators for covering approximation spaces, and discussed the relationships among them. Additionally, all approximation operators are also classified into dual and non-dual operators and their inner properties are investigated. Researchers have proposed many lower and upper approximation operators with respect to different backgrounds. Particularly, they [31, 32, 35-37, 39, 42, 45-50] have investigated approximation operators from the view of matrix. For example, Liu [15] provided a new matrix view of rough set theory for Pawlak's lower and upper approximation operators. He also represented a fuzzy equivalence relation using a fuzzy matrix and redefined the pair of lower and upper approximation operators for fuzzy sets using the matrix representation in a fuzzy approximation space. Wang et al. [35] proposed the concepts of the type-1 and type-2 characteristic matrices of coverings and transformed the computation of the second, fifth and sixth lower and upper approximations of a set into products of the type-1 and type-2 characteristic matrices and the characteristic function of the set in covering approximation spaces. Zhang et al. [42] proposed the matrix characterizations of the lower and upper approximations for set-valued information systems. He [45][46][47] also presented efficient parallel boolean matrix based algorithms for computing rough set approximations in composite information systems and incomplete information systems. Actually, because of the dynamic characteristic of data collection, there are a lot of dynamic information systems with variations of object sets, attribute sets and attribute values, and researchers [1-3, 5-14, 16-20, 22-24, 27-30, 33, 34, 38, 42-44] have focused on knowledge reduction of dynamic information systems. 
Especially, researchers [5,6,42] have computed approximations of sets for knowledge reduction of dynamic information systems from the view of matrix. For instance, Zhang et al. [42] provided incremental approaches to updating the relation matrix for computing the lower and upper approximations with dynamic attribute variation in set-valued information systems. They also proposed effective algorithms of computing composite rough set approximations for dynamic data mining. Lang et al. [5,6] presented incremental algorithms for computing the second and sixth lower and upper approximations of sets from the view of matrix and investigated knowledge reduction of dynamic covering information systems with variations of objects. In practical situations, there are many dynamic covering information systems with the immigration and emigration of coverings, and computing the second and sixth lower and upper approximations of sets is time-consuming using the non-incremental algorithms in these dynamic covering information systems, it also costs more time to conduct knowledge reduction of dynamic covering information systems with variations of coverings. Therefore, it is urgent to propose effective approaches to updating the second and sixth lower and upper approximations of sets for knowledge reduction of dynamic covering decision information systems with the covering variations. This work is to investigate knowledge reduction of dynamic covering decision information systems. First, we investigate the basic properties of dynamic covering information systems with variations of coverings. Particularly, we study the properties of the type-1 and type-2 characteristic matrices with the covering variations and the relationship between the original type-1 and type-2 characteristic matrices and the updated type-1 and type-2 characteristic matrices. We also provide incremental algorithms for updating the second and sixth lower and upper approximations of sets using the type-1 and type-2 characteristic matrices, respectively. We employ examples to illustrate how to update the second and sixth lower and upper approximations of sets with variations of coverings. Second, we generate randomly ten dynamic covering information systems with the covering variations randomly and compute the second and sixth lower and upper approximations of sets in these dynamic covering information systems. We also employ experimental results to illustrate the proposed algorithms are effective to update the second and sixth lower and upper approximations of sets in dynamic covering information systems. Third, we employ two examples to demonstrate that the designed algorithms are effective to conduct knowledge reduction of dynamic covering decision information systems with immigrations of coverings, which will enrich covering-based rough set theory from the matrix view. The rest of this paper is organized as follows: Section 2 briefly reviews the basic concepts of coveringbased rough set theory. In Section 3, we update the type-1 and type-2 characteristic matrices in dynamic covering information systems with variations of coverings. We design the incremental algorithms for computing the second and sixth lower and upper approximations of sets. We also provide examples to demonstrate how to calculate the second and sixth lower and upper approximations of sets. 
In Section 4, the experimental results illustrate the incremental algorithms are effective to construct the second and sixth lower and upper approximations of sets in dynamic covering information systems with the covering immigration. In Section 5, we explore two examples to illustrate how to conduct knowledge reduction of dynamic covering decision information systems with the covering immigration. Concluding remarks and further research are given in Section 6. Preliminaries In this section, we briefly review some concepts related to covering-based rough sets. Definition 2.1 [41] Let U be a finite universe of discourse, and C a family of subsets of U. Then C is called a covering of U if none of elements of C is empty and {C|C ∈ C } = U. Furthermore, (U, C ) is referred to as a covering approximation space. If U is a finite universe of discourse, and is called a covering information system, which can be viewed as a covering approximation space. Furthermore, if the coverings of D are classified into two categories: conditional attribute-based coverings and decision attribute-based coverings, then (U, D) is referred to as a covering decision information system. For convenience, a covering decision information system is denoted as (U, where D C and D D mean conditional attribute-based coverings and decision attribute-based coverings, respectively. [35] Let (U, C ) be a covering approximation space, and N(x) = {C i |x ∈ C i ∈ C } for x ∈ U. For any X ⊆ U, the second and sixth upper and lower approximations of X with respect to C are defined as follows: Definition 2.2 According to Definition 2.2, the second and sixth lower and upper approximation operators are important standards for knowledge reduction of covering information systems in covering-based rough set theory; they are also typical representatives of approximation operators for covering approximation spaces. , then M C is called a matrix representation of C . Additionally, we also have the characteristic function X X = a 1 a 2 . . . a n T for X ⊆ U, where We show the second and sixth lower and upper approximations of sets using the type-1 and type-2 characteristic matrices respectively as follows. Definition 2.4 [35] Let (U, C ) be a covering approximation space, and X X the characteristic function of X in U. Then We present the concepts of the type-1 and type-2 reducts of covering decision information systems as follows. Update the type-1 and type-2 characteristic matrices with variations of coverings In this section, we present incremental approaches to computing the type-1 and type-2 characteristic matrices with variations of coverings. is called a dynamic covering information system of (U, D). In practical situations, the cardinalities of coverings which describes objects in covering information systems are increasing with the development of science and technology. Moreover, (U, D) is referred to as a static covering information system of (U, D + ). we obtain a dynamic covering information system (U, D + ) of In what follows, we show how to construct Γ(D + ) based on Γ(D). For convenience, we denote where | * | denotes the cardinality of * . Proof. By Definitions 2.3 and 3.1, we get Γ(C ) and Γ(C + ) as follows: . a m+1 . a m+1 . a m+1 To obtain Γ(D + ), we only need to compute Γ(C m+1 ) as follows: Therefore, we have . We present the non-incremental and incremental algorithms for computing S H D + (X) and S L D + (X) in dynamic covering information systems. 
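For readers unfamiliar with the operators above, the following sketch computes the neighbourhood N(x) and the neighbourhood-based (sixth) lower and upper approximations directly from a family of coverings, assuming the standard definitions N(x) = ∩{C : x ∈ C}, XL(X) = {x : N(x) ⊆ X} and XH(X) = {x : N(x) ∩ X ≠ ∅}. It does not reproduce the paper's characteristic-matrix formulation; it is only meant to make the set-level semantics concrete.

```python
def neighborhood(x, coverings):
    """N(x): intersection of all blocks (from all coverings) that contain x."""
    nx = None
    for cover in coverings:
        for block in cover:
            if x in block:
                nx = set(block) if nx is None else nx & set(block)
    return nx

def sixth_approximations(X, universe, coverings):
    X = set(X)
    lower = {x for x in universe if neighborhood(x, coverings) <= X}
    upper = {x for x in universe if neighborhood(x, coverings) & X}
    return lower, upper

U = {1, 2, 3, 4}
D = [[{1, 2}, {2, 3, 4}], [{1, 2, 3}, {3, 4}]]      # two coverings of U
print(sixth_approximations({1, 2}, U, D))            # ({1, 2}, {1, 2})
```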
The time complexity of computing the second lower and upper approximations of sets is O( is the time complexity of Algorithm 3.5. Therefore, the time complexity of the incremental algorithm is lower than that of the non-incremental algorithm. Third, by Definition 2.4, we get In Example 3.6, we only need to calculate elements of Γ(C 4 ) for computing S H D + (X) and S L D + (X) using Algorithm 3.5. But we must construct Γ(D + ) for computing S H D + (X) and S L D + (X) using Algorithm 3.4. Thereby, the incremental approach is more effective to compute the second lower and upper approximations of sets. Proof. It is straightforward by Theorem 3.3. Proof. It is straightforward by Theorem 3.7. Subsequently, we construct (C + ) based on (C ). For convenience, we denote (C ) = (d i j ) n×n Proof. By Definitions 2.3 and 3.1, we get (D) and (D + ) as follows: . a m+1 . a m+1 To obtain (D + ), we only need to compute (C m+1 ) as follows: Therefore, we have We also provide the non-incremental and incremental algorithms for computing XH D + (X) and XL D + (X) in dynamic covering information systems. Step 1: Input (U, D + ); Step 2: Step 4: Output XH D + (X) and XL D + (X). The time complexity of computing the sixth lower and upper approximations of sets is O( is the time complexity of Algorithm 3.12. Therefore, the time complexity of the incremental algorithm is lower than that of the non-incremental algorithm. Example 3.13 (Continued from Example 3.2) Taking Second, by Theorem 3.10, we get Third, according to Definition 2.4, we obtain In Example 3.11, we must compute (C + ) for constructing XH D + (X) and XL D + (X) using algorithm 3.11. But we only need to calculate (C 4 ) for computing XH D + (X) and XL D + (X) using Algorithm 3.12. Thereby, the incremental approach is more effective to compute the sixth lower and upper approximations of sets. Theorem 3.14 Let (U, D + ) be a dynamic covering information system of (U, D), (C ) = (d i j ) n×n and (C + ) = (e i j ) n×n the type-2 characteristic matrices of D and D + , respectively. Then Proof. It is straightforward by Theorem 3.10. Proof. It is straightforward by Theorem 3.14. In practical situations, there are some dynamic covering information systems because of the emigration of coverings, which are presented as follows. In other words, (U, D) is also referred to as a static covering information system of (U, D − ). Furthermore, we employ an example to illustrate dynamic covering information systems given by Definition 3.17 as follows. 4 }}, and If we delete C 4 from D, then we obtain a dynamic covering information We also show how to construct Γ(C − ) based on Γ(C ). For convenience, we denote Γ(D) = (b i j ) n×n where Proof. It is straightforward by Theorem 3.3. Therefore, S H where Proof. It is straightforward by Theorem 3.10. Third, by Definition 2.4, we obtain ering information systems with the immigrations and emigrations of covering simultaneously using two steps as follows: (1) compute the type-1 and type-2 characteristic matrices by Theorems 3.3 and 3.10, respectively; (2) construct the type-1 and type-2 characteristic matrices by Theorems 3.19 and 3.21, respectively. Actually, there are more dynamic covering information systems given by Definition 3.1 than those defined by Definition 3.17. Therefore, the following discussion focuses on the dynamic covering information systems given by Definition 3.1. 
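The incremental idea behind the algorithms above can be seen at the set level with a small sketch (again, not the paper's matrix-based code): when a new covering C_{m+1} arrives, the stored neighbourhoods only need to be intersected with the blocks of C_{m+1}; the earlier coverings are never re-scanned, which is where the incremental algorithms save time compared with recomputing everything from scratch.

```python
def update_neighborhoods(neigh, new_cover):
    """Refine stored neighbourhoods N(x) when a new covering arrives.

    Only the new covering is scanned; the previously processed coverings are
    never revisited, which is the source of the incremental speedup.
    """
    updated = {}
    for x, nx in neigh.items():
        for block in new_cover:
            if x in block:
                nx = nx & set(block)
        updated[x] = nx
    return updated

# Toy example: neighbourhoods already computed over the old coverings ...
old = {1: {1, 2}, 2: {2}, 3: {3}, 4: {3, 4}}
# ... and a newly added covering of U = {1, 2, 3, 4}.
print(update_neighborhoods(old, [{1}, {2, 3}, {1, 4}]))
# {1: {1}, 2: {2}, 3: {3}, 4: {4}}
```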
Experimental analysis In this section, we perform experiments to illustrate the effectiveness of Algorithms 3.5 and 3.12 for computing the second and sixth lower and upper approximations of concepts, respectively, in dynamic covering information systems with the immigration of coverings. To test Algorithms 3.5 and 3.12, we generated randomly ten artificial covering information systems The stability of Algorithms 3.4, 3.5, 3.11 and 3.12 In this section, we illustrate the stability of Algorithms 3.4, 3.5, 3.11 and 3.12 with the experimental results. First, we present the concept of sub-covering information system as follows. According to Definition 4.1, we see that a sub-covering information system is a covering information system. Furthermore, the number of sub-covering covering information systems is 2 |D| −1 for the covering information system (U, D). Example 4.2 Let (U, D) be a covering information system Then we obtain a sub-covering information system (U, D 1 ) by taking Second, according to Definition 4.1, we obtain ten sub-covering information systems {(U i , D j i )| j = 1, 2, 3, ..., 10} for covering information system (U i , D i ) outlined in Table 1, and show these sub-covering information systems in Table 2 (1) By adding a covering into D 1 1 , we obtain the dynamic covering information system (U 1 , D 1+ 1 ), where |U 1 | = 2000 and |D 1+ 1 | = 101. (2) Taking any X ⊆ U 1 , we compute the second lower and upper approximations of X in dynamic covering information system (U 1 , D 1+ 1 ) using Algorithms 3.4 and 3.5. Furthermore, we also compute the sixth lower and upper approximations of X in dynamic covering information system (U 1 , D 1+ 1 ) using Algorithms 3.11 and 3.12. To confirm the accuracy of the experiment results, we conduct each experiment ten times and show the average time of ten experimental results in Table 3, where t(s) denotes that the measure of time is in seconds. (3) We compute the variance of ten experimental results for computing the approximations of sets in each dynamic covering information system and show all variance values in Table 4. According to the experimental results, we see that Algorithms 3.4, 3.5, 3.11, and 3.12 are stable to compute the second and sixth lower and upper approximations of sets in dynamic covering information systems with the immigration of coverings. Especially, Algorithms 3.5 and 3.12 are more stable to compute the second and sixth lower and upper approximations of sets than Algorithms 3.4 and 3.11, respectively, in dynamic covering information systems. Table 4: Variance values of computational times using NIS, NIX, IS, and IX in ( The influence of the cardinality of object set In this section, we analyze the influence of the cardinality of object set on time of computing the second and sixth lower and upper approximations of sets using Algorithms 3.4, 3.5, 3.11, and 3.12 in dynamic covering information systems with the covering immigration. There are ten sub-covering information systems with the same cardinality of covering sets. First, we compare the times of computing the second lower and upper approximations of sets using Algorithm 3.4 with those using Algorithm 3.5 in dynamic covering information systems with the same cardinality of covering sets. From the results in Table 3, we see that the computing times are increasing with the increasing cardinality of object sets using Algorithms 3.4 and 3.5. We also find that Algorithm 3.5 executes faster than Algorithm 3.5 in dynamic covering information systems. 
Second, we also compare the times of computing the sixth lower and upper approximations of sets using Algorithm 3.11 with those using Algorithm 3.12 in dynamic covering information systems with the same cardinality of covering sets. From the results in Table 3, we see that the computing times are increasing with the increasing cardinality of object sets using Algorithms 3.11 and 3.12. We also find that Algorithm 3.12 executes faster than Algorithm 3.11 in dynamic covering information systems. Third, to illustrate the effectiveness of Algorithms 3.5 and 3.12, we show these results in Figures 1-10. In each figure, NIS , IS , NIX, and IX mean Algorithms 3.4, 3.5, 3.11, and 3.12, respectively; i stands for the cardinality of object set in X Axis, while the y-coordinate stands for the time to construct the approximations of concepts. Therefore, Algorithms 3.5 and 3.12 are more effective to compute the second and sixth lower and upper approximations of sets, respectively, in dynamic covering information systems. The influence of the cardinality of covering set In this section, we analyze the influence of the cardinality of covering set on time of computing the second and sixth lower and upper approximations of sets using Algorithms 3.4, 3.5, 3.11, and 3.12 in dynamic covering information systems with the covering immigration. In Table 2, there also exist ten sub-covering information systems with the same cardinality of object sets. First, we compare the times of computing the second lower and upper approximations of sets using Algorithm 3.4 with those using Algorithm 3.5 in dynamic covering information systems with the same cardinality of object sets. According to the experimental results in Table 3, we see that the computing times are almost not increasing with the increasing cardinality of covering sets using Algorithms 3.4 and 3.5. We also find that Algorithm 3.5 executes faster than Algorithm 3.4 in dynamic covering information systems. Second, we compare the times of computing the sixth lower and upper approximations of sets using Algorithm 3.11 with those using Algorithm 3.12 in dynamic covering information systems with the same cardinality of object sets. From the results in Table 3, we see that the computing times are increasing with the increasing cardinality of covering sets using Algorithms 3.11. But the computing times are almost not increasing with the increasing cardinality of covering sets using Algorithms 3.12. We also find that Algorithms 3.12 executes faster than Algorithm 3.11 in dynamic covering information systems. Third, to illustrate the effectiveness of Algorithms 3.5 and 3.12, we show these results in we can obtain a decision covering information system (U, D * ) by constructing a covering based on the decision attribute, where |U| = 625 and |D * | = 5. Therefore, we can obtain covering information systems and decision covering information systems by transforming Irvine(UCI)'s repository of machine learning databases. Since the purpose of the experiment is to test the effectiveness of Algorithms 3.5 and 3.12 and the transformation process costs more time, we generated randomly ten artificial covering information systems (U i , D i ) to test the designed algorithms in the experiments. Knowledge reduction of covering decision information systems with the covering immigration In this section, we employ examples to illustrate how to conduct knowledge reduction of covering decision information systems with the covering immigration. 
Second, by Definition 2.4, we have Third, according to Definition 2.3, we get Fourth, by Definition 2.4, we derive Therefore, according to Definition 2.5, {C 1 , C 3 } is a type-1 reduct of (U, D C ∪ D D ). In In what follows, we employ an example to illustrate how to compute the type-1 reducts of dynamic covering decision information systems with the immigration of coverings. Second, by Definition 2.4, we have Third, by Example 5.1, we get Therefore, according to Definition 2.5, {C 1 , C 3 } is a type-1 reduct of (U, D + C ∪ D D ). In Conclusions In this paper, we have updated the type-1 and type-2 characteristic matrices and designed effective algorithms for computing the second and sixth lower and upper approximations of sets in dynamic covering information systems with variations of coverings. We have employed examples to illustrate how to calculate the second and sixth lower and upper approximations of sets. We have employed experimental results to illustrate the designed algorithms are effective to calculate the second and sixth lower and upper approximations of sets in dynamic covering information systems with the immigration of coverings. We have explored two examples to demonstrate how to conduct knowledge reduction of dynamic covering decision information systems with the immigration of coverings. In the future, we will investigate the calculation of approximations of sets in other dynamic covering information systems and propose effective algorithms for knowledge reduction of dynamic covering decision information systems. Furthermore, we will provide parallel algorithms for knowledge reduction of dynamic covering decision information systems using the type-1 and type-2 characteristic matrices.
2015-12-09T02:13:51.000Z
2015-12-09T00:00:00.000
{ "year": 2015, "sha1": "8234c4e731918c607462b62034bef0e7b7090551", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8234c4e731918c607462b62034bef0e7b7090551", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
250607572
pes2o/s2orc
v3-fos-license
Machine learning applications for weather and climate need greater focus on extremes Multiple studies have now demonstrated that machine learning (ML) can give improved skill for predicting or simulating fairly typical weather events, for tasks such as short-term and seasonal weather forecasting, downscaling simulations to higher resolution and emulating and speeding up expensive model parameterisations. Many of these used ML methods with very high numbers of parameters, such as neural networks, which are the focus of the discussion here. Not much attention has been given to the performance of these methods for extreme event severities of relevance for many critical weather and climate prediction applications, with return periods of more than a few years. This leaves a lot of uncertainty about the usefulness of these methods, particularly for general purpose prediction systems that must perform reliably in extreme situations. ML models may be expected to struggle to predict extremes due to there usually being few samples of such events. However, there are some studies that do indicate that ML models can have reasonable skill for extreme weather, and that it is not hopeless to use them in situations requiring extrapolation. This article reviews these studies and argues that this is an area that needs researching more. Ways to get a better understanding of how well ML models perform at predicting extreme weather events are discussed. Introduction Multiple studies have now demonstrated that machine learning (ML) can give improved skill for predicting or simulating fairly typical weather events, for tasks such as short-term and seasonal weather forecasting (Ham et al., 2019;e.g. Ravuri et al., 2021;Weyn et al., 2021;Pathak et al., 2022), downscaling simulations to higher resolution (e.g. Stengel et al., 2020;Harris et al., 2022) and emulating and speeding up expensive model parameterisations (e.g. Rasp et al., 2018;Gettelman et al., 2021). These used ML methods with very high numbers of parameters, such as neural networks, which are the focus of the discussion here. Not much attention has been given to the performance of these methods for extreme event severities of relevance for many critical weather and climate prediction applications. This leaves a lot of uncertainty about the usefulness of these methods, particularly for general purpose prediction systems that must perform reliably in extreme situations. ML models may be expected to struggle to predict extremes due to there usually being few samples of such events. However, as will be discussed below, there are some studies that do indicate that ML models can have reasonable skill for extreme weather, and that it is not hopeless to use them in situations requiring extrapolation. This makes it an area worth researching more. Some clarity is needed about the use of the term "extreme". One useful metric to represent the degree to which an event is extreme is the return period, the average time between events with a magnitude at least as large as for the event in question. A large number of studies use the term "extreme" to describe events around the 90-99 th percentile of daily data, which correspond to only a 10-100 day return period. It is indeed useful to assess the performance of ML models around such thresholds. However, these are far from event severities that are relevant to many applications of weather and climate models, and studies typically do not demonstrate how their methods would perform in these cases. 
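As a concrete illustration of the return-period metric used throughout this article, the sketch below estimates the empirical return period of exceeding a threshold in a daily series; the numbers are synthetic. It reproduces the rule of thumb above that the 99th percentile of daily data corresponds to roughly a 100-day return period.

```python
import numpy as np

def return_period_years(daily_series, threshold):
    """Empirical return period (in years) of daily values >= threshold."""
    exceed_prob = np.mean(np.asarray(daily_series) >= threshold)
    if exceed_prob == 0:
        return np.inf          # never observed; needs extrapolation (e.g. a GEV fit)
    return 1.0 / (exceed_prob * 365.25)

rng = np.random.default_rng(0)
temps = rng.normal(15, 5, size=40 * 365)                      # 40 years of synthetic daily values
print(return_period_years(temps, np.percentile(temps, 99)))   # ~0.27 years (~100 days)
print(return_period_years(temps, temps.max()))                # the longest period resolvable here
```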
At the high end of the scale, events with return periods in the thousands of years are sometimes studied in extreme event attribution (e.g. Risser & Wehner, 2017;Van Oldenborgh et al., 2017) and in the hundreds of years for designing infrastructure for flood and drought resilience (e.g. Environment Agency, 2014Agency, , 2020. In weather forecasting, the Met Office's most severe "red" weather warning was issued once every few years per event type in the system's first decade (Suri & Davies, 2021). The return period at individual locations that were most affected by these events will have been substantially higher. Forecast reliability will also need to be assured for even more extreme events. In keeping with these examples, in the rest of this article "extreme" is used to refer to events with return periods of more than a few years. It seems likely that for ML-based systems to be considered for use in operational weather and climate prediction systems, good performance in extreme situations needs to be shown. This should include events going beyond what is used for training systems, since it cannot be known in advance what range of input data the system will see. Operational systems need to predict events that are more severe than any in the historical record at times. It can be asked is there much value in continuing development of ML-based systems for weather and climate prediction without demonstrating at least satisfactory performance for extremes? If an approach is taken to try to first design systems to perform well for typical weather and then improve extreme event capabilities later, this could waste a lot of time if useful methods for the former are not the same as for the latter. This is an especially large concern for ML methods with large numbers of parameters (e.g. large neural networks) that require a lot of samples for training. Particular methods may also have their own vulnerabilities. For example, generative adversarial networks are prone to "mode collapse", where predictions seriously undersample parts of the data distribution, potentially very adversely affecting performance for extremes. Random forests cannot predict values beyond those seen in training data, so they may not be a good choice for applications where skilful prediction for beyond-sample events is important. Therefore evaluating how well such systems actually perform in extreme situations is very important for helping researchers choose the best methods to develop for their applications. The challenge in making predictions in extreme situations comes not just from these events being rare, but also from how far they can exceed historical records. The 2021 heatwave in the Northwest USA and western Canada beat previous temperature records by 5°C in Portland, standing far above previous values, with an estimated return period in the present climate of ~1000 years (Philip et al., 2021). Climate model simulations include events where weekly-average temperature exceeds previous records by over five standard deviations (Fischer et al., 2021). Rainfall extremes can exceed prior historical values by even greater margins. In 2018 and 2019 in Kerala, India, there were 14-day rainfall totals that exceeded 30 standard deviations, associated with strong convection (Mukhopadhyay et al., 2021). 
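Events far beyond the observed record, like those above, are usually assessed by fitting an extreme-value distribution to block maxima rather than by counting exceedances directly. The following is a hedged sketch using scipy's GEV implementation on synthetic annual maxima; it is not taken from any of the cited studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max = 35.0 + rng.gumbel(0.0, 2.0, size=70)     # 70 years of synthetic annual maxima

# Fit a GEV to the block maxima and evaluate the exceedance probability of an
# event well beyond the observed record (here, 5 degrees above the maximum).
shape, loc, scale = stats.genextreme.fit(annual_max)
event = annual_max.max() + 5.0
exceed_prob = stats.genextreme.sf(event, shape, loc=loc, scale=scale)
# An exceedance probability of zero means the fitted GEV has a bounded upper
# tail below the event, in which case no finite return period is estimated.
rp = np.inf if exceed_prob == 0 else 1.0 / exceed_prob
print(f"exceedance probability per year: {exceed_prob:.2e}; return period ~ {rp:.0f} years")
```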
Convective rainfall in the USA has led to river discharges reaching over 20 times the 10-year return level on a large number of occasions, with the most extreme recorded discharge due to rainfall being 200 times that level (Smith et al., 2018). It therefore wouldn't be over the top to evaluate robustness of ML-based systems to this degree of extremity for cases where convection is important, and otherwise to perhaps ~5 standard deviation perturbations above the highest values in observed or simulated training data. Previous studies evaluating ML on extreme events There are six studies that I have been able to find in the literature that indicate that MLbased systems can have reasonable skill in extreme situations with return periods of more than a few years. These are summarised in table 1. These results show that there are good prospects that ML-based systems could have good skill for extreme events with multi-year return periods and beyond, but there are not enough studies to know whether this is true in most cases. I have not found any studies that explicitly show failure for extremes. It is hard to draw general rules for success from this small sample of results, but it does suggest some guidance. Five out of six studies evaluated neural network-based models, indicating that neural networks can be successful for this task. The other study, Nevo et al. (2021), used a bespoke approach for their flood inundation modelling. No study tested alternative complex methods like random forests, so they provide no evidence about such methods. Five out of six studies used at least 10 years of training data. Three studies obtained reasonable evaluation results for extreme events with estimated return periods much longer than the training dataset, indicating that generalisation to more extreme events is possible (Boulaguiem et al., 2022;Frame et al., 2022;Lopez-Gomez et al., 2022). However, it still seems wise to plan to require large training datasets to develop models for such cases. Four out of six studies did not change their model architecture or training procedure to particularly target achieving good performance on extremes, again suggesting that existing methods are capable of generalising to extreme events, but modifications are sometimes needed. Research gaps More research into any aspect of this problem would be very valuable, though there are some questions that need answers with higher urgency. One highly important area is simulating weather events with multi-decadal return periods, which are very important for understanding many aspects of climate risk. Another is simulating situations that are multiple standard deviations beyond historical records. This has only been examined by Lopez-Gomez et al. (2022). Another key gap is testing how well stochastic generative models (e.g. generative adversarial networks), which have become popular for high-resolution downscaling and forecasting, perform in extreme situations. The challenge for these may be even greater than for deterministic models, and it has only been studied by Boulaguiem et al. (2022). There is also a lack of studies that have examined ML models' extrapolation behaviour. Neural networks' extrapolation properties depend on their structure e.g. those using the common "ReLU" activation function would be expected to extrapolate linearly, though not necessarily with the same gradient as a line of best fit through the training data points (Xu et al., 2020;Ziyin et al., 2020). Hernanz et al. 
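The claim that ReLU networks extrapolate linearly can be checked directly with a tiny untrained network: beyond the last point where any hidden unit switches on or off, every unit keeps a fixed state, so the output is exactly affine in the input. The sketch below (random weights, numpy only) is purely illustrative and is not drawn from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(16, 1)), rng.normal(size=(16, 1))
w2, b2 = rng.normal(size=(1, 16)), rng.normal()

def net(x):
    h = np.maximum(0.0, w1 @ x[None, :] + b1)      # ReLU hidden layer
    return (w2 @ h + b2).ravel()

# Each hidden unit switches on/off at x = -b1/w1.  Beyond the last switch
# point every unit keeps a fixed state, so the network is exactly affine:
# it extrapolates along a straight line whose slope is set by the weights,
# not by any line of best fit through the training data.
kinks = (-b1 / w1).ravel()
x_far = np.linspace(kinks.max() + 1.0, kinks.max() + 10.0, 50)
second_diff = np.diff(net(x_far), n=2)
print(np.abs(second_diff).max())                   # ~0: linear extrapolation
```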
(2022) examined the extrapolation behaviour of ML models that predicted surface air temperature and found that they performed poorly. Extrapolation errors may not strongly affect skill scores for events that do not lie very far outside the training data range, so it is unclear whether the models in the studies in table 1 contained this kind of error. Such errors would matter more for extremes far outside the training range (e.g. Fischer et al., 2021; Mukhopadhyay et al., 2021; Philip et al., 2021).
Ways forward
Firstly, studies could include diagnostics that indicate performance on extremes without requiring much extra work. For example:
• Scatter plots of predictions versus truth values, which immediately show whether an ML model predicts sensible values in the most extreme situations in the test data, and how prediction skill for extremes compares to more typical situations (as shown in e.g. Adewoyin et al., 2021, fig. 7).
• Quantile-quantile plots including percentiles up to the highest allowed by the test data, which would greatly help to show whether the frequency of extreme events in predictions is reasonable.
• When predicting a spatial field in two or more dimensions (e.g. in downscaling), showing that predictions for samples of the most extreme cases in the test data are sensible.
• Statistics like root mean square error and correlation for the most extreme events only (e.g. the top 30 events); when these scores are calculated on a whole dataset, they are not very sensitive to errors in the distribution tails (a minimal sketch of this diagnostic is given below).
• Making clear in the conclusions the maximum return period of the events that were evaluated in the test data.
To show how well ML-based systems perform in situations going beyond events seen in training, the most extreme events can be set aside in a second test dataset, as in Frame et al. (2022) and Nevo et al. (2021). This approach could be made even stronger by doing so before any model development, so the model structure and hyperparameters are chosen without being able to see the most extreme events beforehand. It would also be highly valuable to understand how ML-based systems would perform in situations that are far out-of-sample, addressing the extrapolation question. For certain applications, increasing the magnitude of anomalies in input fields would be expected to result in increased magnitudes of anomalies in predicted values (e.g. in downscaling, parameterisation emulation, short-range forecasting). Then it would be very useful to show how the predictions scale as anomalies in input fields are magnified to correspond to events much more severe than any in the source data, up to multiple standard deviations beyond the sample events. If there is no reason to expect a sharp change, smoothly varying predictions may improve confidence in the system. Trustworthiness of predictions of extremes may also be informed by quantifying uncertainty associated with model structure and parameters (e.g. Abdar et al., 2021) and by interpretability methods (e.g. McGovern et al., 2019; Ebert-Uphoff & Hilburn, 2020; Toms et al., 2020; Beucler et al., 2022). However, the reliability of interpretability methods has been questioned (e.g. Lipton, 2018; Rudin, 2019; Koch & Langosco, 2021). I am not aware of tests of these approaches on predictions of extreme events, and these would be very valuable.
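The extremes-only statistics suggested in the list above require only a few lines of code. The sketch below is a hedged illustration rather than a prescribed implementation: the synthetic truth and prediction arrays stand in for a real test set, and the choice of 30 events and of Pearson correlation is arbitrary.

```python
# Hedged sketch of one diagnostic suggested above: error statistics computed on the
# most extreme test cases only, alongside the whole-dataset scores. The arrays
# y_true and y_pred are synthetic placeholders for a real test set and model output.
import numpy as np

def extreme_subset_scores(y_true, y_pred, n_extreme=30):
    """RMSE and Pearson correlation on all samples and on the n_extreme
    samples with the largest true values."""
    def scores(t, p):
        rmse = float(np.sqrt(np.mean((p - t) ** 2)))
        corr = float(np.corrcoef(t, p)[0, 1])
        return {"rmse": rmse, "corr": corr}

    idx = np.argsort(y_true)[-n_extreme:]          # indices of the largest truths
    return {"all": scores(y_true, y_pred),
            "extremes": scores(y_true[idx], y_pred[idx])}

# Synthetic example: a model that is well calibrated in the bulk but damps extremes.
rng = np.random.default_rng(1)
y_true = rng.gumbel(loc=20.0, scale=5.0, size=5000)
y_pred = 0.7 * (y_true - y_true.mean()) + y_true.mean() + rng.normal(0, 1.5, y_true.shape)
print(extreme_subset_scores(y_true, y_pred))
```

On this synthetic example, the whole-dataset scores look respectable while the extremes-only scores reveal the damping of the tail, which is exactly the behaviour such a diagnostic is meant to expose.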
If existing machine learning approaches turn out not to perform well enough at predicting extreme events, this would signal that more effort should be put into designing systems that are robust. For example, physical principles could be incorporated (e.g. Beucler et al., 2021), or systems that are hybrids of conventional and ML-based models could be developed, which may be more reliable (e.g. Watson, 2019; Bonavita & Laloyaux, 2020; Brajard et al., 2021). For emulating expensive conventional models, rare event simulation (Ragone et al., 2017; Webber et al., 2019) could be useful for obtaining sufficiently many extreme events for training. Better diagnostics of performance on extreme events in studies applying ML would be very valuable for determining whether more attention should turn to approaches like these.
Conclusions
In order for ML to be applied broadly in weather and climate prediction and simulation systems, it needs to be shown that it can perform at least reasonably well for extreme events. ML models with high numbers of parameters, such as neural networks, may be expected to struggle in these cases, as they typically need large samples of events to be trained to make skilful predictions. However, the six studies reviewed here that do evaluate ML model skill on extremes indicate that ML-based systems can still perform well on out-of-sample extreme events, even those with return periods of hundreds or thousands of years. This sample of studies is not enough to draw general conclusions from, though, and there are important questions that have not been addressed by any study that I could find. The situation could be greatly improved if study authors added certain simple diagnostics, and if studies were designed to show performance for extremes, as described above. This would be highly valuable for the rest of the community, who would learn which ML methods are best for predicting and simulating extreme events successfully.
2022-07-18T01:15:22.249Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "7811b67b2705323cbd0b7ec3cd1aa6beab2fdd97", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "30f18dfcffa6281ef9612846488a34db75542317", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
3686625
pes2o/s2orc
v3-fos-license
Analysis of Asymptotically Optimal Sampling-based Motion Planning Algorithms for Lipschitz Continuous Dynamical Systems Over the last 20 years significant effort has been dedicated to the development of sampling-based motion planning algorithms such as the Rapidly-exploring Random Trees (RRT) and its asymptotically optimal version (e.g. RRT*). However, asymptotic optimality for RRT* only holds for linear and fully actuated systems or for a small number of non-linear systems (e.g. Dubin's car) for which a steering function is available. The purpose of this paper is to show that asymptotically optimal motion planning for dynamical systems with differential constraints can be achieved without the use of a steering function. We develop a novel analysis on sampling-based planning algorithms that sample the control space. This analysis demonstrated that asymptotically optimal path planning for any Lipschitz continuous dynamical system can be achieved by sampling the control space directly. We also determine theoretical bounds on the convergence rates for this class of algorithms. As the number of iterations increases, the trajectory generated by these algorithms, approaches the optimal control trajectory, with probability one. Simulation results are promising. Introduction In this paper, we are interested in optimal motion planning for robots with challenging dynamics. Given a robot with perfectly known dynamics and an environment map that includes the initial state of the agent, the goal region, and the obstacles, an optimal motion planner computes a set of control inputs that drive the agent from the initial state to the goal region with minimum cost and without colliding with the obstacles. In this paper, we are interested in robots with high-dimensional dynamics, thus we are using a sampling-based approach (Choset et al., 2005), (LaValle, 2006), which is the most successful approach for high dimensional motion planning problems. Up to a few years ago, sampling-based motion planners were only known to be probabilistically complete, in the sense that given enough time, they will find an admissible trajectory with probability one, whenever such a trajectory exists. The question of optimality was addressed recently in (Karaman and Frazzoli, 2011). They developed sampling-based algorithms that are asymptotically optimal, such as PRM * and RRT * , where asymptotically optimal means given enough time, the probability the planner returns a trajectory close to the optimal trajectory, is one. Although PRM * and RRT * perform well for robots with simple dynamics, such as holonomic robots, extending asymptotically optimal sampling-based planners to systems with complex dynamics remains an open problem. The difficulty lies in the fact that both RRT * and PRM * perform their sampling in the configuration space and therefore requires a steering function that drives the robot from one configuration to the other. Unfortunately, steering functions are often difficult to find and may not even exist for systems with complex dynamics, such as systems with non-holonomic constraints. The authors believe that for the general case, where we have non-linear under-actuated robots, sampling the control space directly may be a better choice. By sampling the control space directly, the need for a steering function can be eliminated, and the difficulty of dealing with challenging dynamics can be alleviated significantly. 
The main contributions of this paper are as follows: • We showed that asymptotic optimality in sampling-based motion planning, for any Lipschitz continuous dynamical system, can be achieved by sampling the control space directly. This result eliminates the need for a steering function, which is often computationally expensive or even infeasible to compute for systems with complex dynamics. • We derived a theoretical bound on the convergence rate for such planners. This result elucidate the problem parameters that effect the convergence rate (e.g. the dimension of the control space and the Lipschitz continuity constant of the underling dynamical system). To the best of our knowledge, the aforementioned results are the first for sampling-based planners that sample the control space directly. Our analysis only requires Lipschitz continuity of the differential constraints and the cost function. Furthermore, we present a family of asymptotically optimal motion planners for systems with differential constraints that sample the control space directly, and hence does not require a steering function. This algorithms build upon existing sampling based planners (Hsu et al., 1997), and compute admissible trajectories with decreasing cost. This paper is organized as follows. In the next section, Section (2), we describe related work, Section (3) formally defines the motion planning problem. In Section (4) we describe a family of algorithms that sample the control space and discuss their complexity and in the Analysis section (Section(5)), which is the main part of this paper, we provide our theoretical analysis that includes a novel optimality analysis and a study on the convergence rates of algorithms that sample the control space. In Section (6) we give some simulation results and finally we close with conclusions and future work in Section (7). Related Work Over the past two decades, several sampling-based motion planners have been proposed. Sampling-based planners can be divided into two broad categories, i.e., graph based approaches (multi-query) and tree based approaches (single-query). Graph-based approaches first construct a random graph in the configuration space and then use the graph to find admissible paths. This approach includes Probabilistic RoadMap (PRM) (Kavraki et al., 1996b) (Kavraki et al., 1996a), and it variants (Sun et al., 2005), (Kurniawati and Hsu, 2006), (Ekenna et al., 2013), , (Amato et al., 1998). A good summary of sampling-based approaches can be found in (Choset et al., 2005). Tree-based approaches construct a random tree and stop construction whenever the constructed tree contains a path from a given initial configuration to the goal region. This approach includes the widely used Rapidly-exploring Random Trees algorithm (RRT) (Lavalle, 1998) and its variants e.g. (Yershova and LaValle, 2007). Another type of tree-based sampling based planner is the Expansive Space Tree (EST) (Hsu et al., 1997). In terms of handling differential constraints, the main difference between the RRT planner and the EST planner is that the former performs its sampling in the configuration space and the latter performs its sampling in the control space. Recently, Karaman and Frazzoli showed that RRT and PRM converge to a non-optimal trajectory and they developed asymptotical optimal planners e.g. RRT * and PRM * (Karaman and Frazzoli, 2011), (Karaman and Frazzoli, 2010). 
However, a steering function is required to guarantee optimality and thus they are not suitable for general systems with differential constraints. Perez et al. proposed the use of the RRT * algorithm combined with LQR methods to solve path planning problems with challenging dynamics (Perez et al., 2012). Similarly, Tedrake et al. proposed the LQR-Trees for feedback motion planning (Tedrake et al., 2010). However, these methods perform well only close to the linearization area. As a result the optimality properties of the RRT * do not hold for the LQR-RRT based algorithms. More recently, Dobson and Bekris (Dobson and Bekris, 2014) proposed an algorithm that relaxes asymptotic optimality to near-optimality, to speed-up computing the solution. However, they still require a steering function. To avoid the use of steering function and linearization, one can sample the control space directly using forward propagation. Such planners were initially proposed by Hsu et al. (Hsu et al., 1997). However, (Hsu et al., 1997) only finds an admissible trajectory, rather than the optimal one. Recently, Littlefield et al. (Littlefield et al., 2013) proposed a planner that combines RRT and direct sampling of the control space, called Sparse-RRT. It builds a tree by choosing a node to expand using the RRT strategy, i.e., sampling a configuration uniformly at random and expanding a node that is nearest to the uniformly sampled configuration. However, it expands the node using random shooting, based on forward propagation of the dynamics. They showed that Sparse-RRT converges to the near-optimal solution. However, the proof is based on an assumption that random sampling in the control space, i.e., the random shooting process, will converge to a near optimal solution. In this paper, we show that planners that directly sample the control space can converge to the optimal solution without using the aforementioned assumption. Our proof is based on the simplest algorithm in the class of planners that sample the control space directly, i.e., uses uniform random sampling to choose the node to expand, and expands the node by sampling the control space uniformly at random. The idea is by showing that this simplest algorithm converges to the optimal trajectory, we have more confident that there will be a more efficient sampling strategies that samples the control space directly and converges to the optimal solution. And therefore, spending more effort to find a better strategy to directly sample the control space would be a worth while pursuit. We also present comparison results between the simplest strategy used for proving the convergence result, and the more sophisticated sampling strategy, including one that chooses a node to expand using the RRT strategy (similar to Sparse-RRT). Interestingly, our simulation results indicate that choosing a node to expand using the RRT strategy does not perform well when the system has challenging dynamics. The results and discussion on this are in Section (6). In summary, the main contribution of this paper is to analyse the sampling-based motion planning algorithms and shed some light on its challenges. We show that asymptotically optimal planning can be achieved by directly sampling the control space. Furthermore, we present comparison results of different sampling-based planners that sample the control space directly. As the number of iterations increases, these algorithms generate a trajectory that approaches the optimal control trajectory. 
Problem Definition Suppose the set of all possible robot states is S and the set of all possible control inputs is U . Then the equation of motion is given by: Where for each time t, γ(t) ∈ S is a state,γ(t) is the time derivative of the state and and φ(t) ∈ U is a control input. Assuming that S and U are manifolds of dimensions n and m where m ≤ n, using appropriate charts we can treat S as a subset of R n and U as a subset of R m . Furthermore we assume that f is a Lipschitz-continuous function with respect to state and control with maximum Lipschitz constant to be less than L p ∈ R + . Supposed S free ⊂ S is the collision free subset of the state space. Then we define path to be a time parametrized mapping from time to the obstacle free state space e.g. γ : [0, T ] → S free . We also define trajectory or control function to be a time parametrized mapping from time to the control space e.g. φ : [0, T ] → U . From Lipschitz-continuity it can be shown that for each trajectory (control function) there is a unique path that represents the solution to equation of motion (Slotine and Li, 1991). Let S goal ⊂ S be the goal region. We would like to compute the control function (trajectory) and the resulting path that drives the agent from the initial state γ(t = 0) to the goal region and minimizes a given cost function. More formally, let D(φ) be the resulting cost for control function φ and Φ to be the set all admissible trajectories. An admissible trajectory is a trajectory that induces an obstacle free state-space path (through Equation (1)) and stops within the goal region. Among all admissible trajectories we would like to compute the one that minimizes the cost function, e.g Planners that Sample the Control Space In this section we describe the proposed family of algorithms. Since we are interested in systems with challenging dynamics, all of the proposed algorithms sample the control space directly. Such algorithms are not new (Hsu et al., 1997) (Littlefield et al., 2013), but the question of whether such planners can converge to the optimal solution is still largely open. To answer this question, we first present two minor modifications of existing planners as detailed in Section (4.1). Algorithm Description Each of the proposed algorithms constructs a tree T = {M, E}, where the root is the initial state q 0 . Each node q ∈ M in the tree corresponds to a collision-free state while each edge qq ∈ E corresponds to a control input u ∈ U that drives the robot from q to q in a given time, without colliding with any of the obstacles while satisfying the equation of motion, Equation (1). Each node q ∈ M is annotated with the cost of reaching q from q 0 . The algorithms are presented in Algorithm 1 -Algorithm 3. To construct the tree, a node is chosen for expansion and then a control input is sampled uniformly at random. We present two different methods for selecting which node to expand. The first method chooses the node for expansion uniformly at random (Algorithm 2), the second method chooses the node for expansion in an RRT style, Algorithm 3, (with a Voronoi bias, similar to Sparse-RRT (Littlefield et al., 2013)). Once a node to be expanded is selected, the planner samples a control input uniformly at random, choose a time-interval ∆t on how long the sampled control input will be applied to the system, and use forward propagation to compute the resulting state. As we will see in Section (5), the integration time ∆t has to be chosen wisely. 
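As a concrete illustration of the expansion loop just described (uniform node selection, uniform control sampling, sampled integration time, forward propagation), here is a minimal Python sketch. It is my own simplification, not the authors' implementation: the dynamics f, the control sampler, the collision check, the goal test and the running cost are placeholders supplied by the caller, and Euler integration is an arbitrary choice of propagator.

```python
# Minimal, hedged sketch of the "simple-uniform" control-sampling planner: pick a
# tree node uniformly at random, sample a control uniformly, sample an integration
# time, forward-propagate the dynamics, and keep the cheapest node reaching the goal.
import random
import numpy as np

class Node:
    def __init__(self, state, parent=None, control=None, dt=0.0, cost=0.0):
        self.state, self.parent, self.control, self.dt, self.cost = state, parent, control, dt, cost

def propagate(state, u, dt, f, steps=10):
    """Fixed-step Euler forward propagation of dx/dt = f(x, u)."""
    x = np.array(state, dtype=float)
    h = dt / steps
    for _ in range(steps):
        x = x + h * f(x, u)
    return x

def simple_uniform_plan(x0, f, sample_control, collision_free, in_goal,
                        running_cost, max_dt=1.0, iterations=20000, rng=None):
    rng = rng or random.Random(0)
    tree, best = [Node(np.array(x0, dtype=float))], None
    for _ in range(iterations):
        parent = rng.choice(tree)                      # uniform node selection
        u = sample_control(rng)                        # uniform control sample
        dt = rng.uniform(0.0, max_dt)                  # sampled integration time
        x_new = propagate(parent.state, u, dt, f)
        if not collision_free(parent.state, x_new):
            continue
        child = Node(x_new, parent, u, dt, parent.cost + running_cost(u, dt))
        tree.append(child)
        if in_goal(x_new) and (best is None or child.cost < best.cost):
            best = child                               # keep the cheapest goal node
    return best, tree
```

The choice of the maximum integration time in this loop, discussed next, is what matters for the asymptotic guarantees.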
In order to guarantee asymptotic optimality, the integration time has to either be sampled uniformly from zero to a maximum integration time, or decrease slowly and approach zero as the number of iterations approaches infinity. When a constant integration time is used, it has to be sufficiently small; in the results section we present results for both cases (constant integration time and uniformly sampled integration time). To improve computational efficiency, we can prune nodes whose cost is larger than the cost of other nodes in their neighborhood (see Algorithm 4). Every time we add a node, we search its neighborhood to find other nodes within a range R(t). R(t) monotonically shrinks as the number of iterations increases, starting from a given value R_0. If there is a node within a ball of radius R(t) with cost less than the cost of the new node, then we do not add the new node; otherwise we add it. Given that we add the new node, we remove all other nodes in the neighborhood that have cost greater than the cost of the new node. We also make sure not to delete nodes that are part of the best trajectory found so far.
Algorithm 4 Prune(T, Child, R(t))
1: [Neighbors] = NeighborsWithinRange(Child, R(t))
2: for (i = 0; i < sizeof(Neighbors); i++) do
3: CurrentNeighbor = Neighbors(i)
4: if CurrentNeighbor→cost < Child→cost then
5: return False
6: ToBeRemoved(CurrentNeighbor)
7: Remove(ToBeRemoved)
8: return True
There are many different variants of the above algorithms that may substantially improve computational time. However, in this paper we focus on answering the open question of whether a planner that samples the control space directly can converge to the optimal solution. To this end, we analyze the simplest version of this class of planners, i.e., the uniform variant (Algorithm 2). The idea is that if we can show that asymptotic convergence holds even for this simplest algorithm, then it is more likely that there will be more efficient strategies to sample the control space for which asymptotic optimality also holds, and therefore spending more effort finding a better strategy to directly sample the control space would be a worthwhile pursuit. In addition to the analysis, we also present a performance comparison between the three different sampling strategies mentioned above, on various motion planning problems involving robots with complex dynamics.
Complexity
The pruning is the most expensive part of the proposed framework. To compute the neighbors within a range, the complexity is O(log(N)) (where N is the number of nodes in the tree). To do the cost comparison for each of the neighbors, the complexity is O(N_e) (N_e is the number of neighbors; because of the pruning, N_e << N). To remove nodes from the tree, the complexity is O(N) if we choose to free the allocated memory, or O(N_e) if we do not. Therefore the complexity of the proposed algorithm is O(log(N)) per iteration.
Analysis
In this section we provide a novel analysis of the optimality of planners that directly sample the control space. As we show here, asymptotically optimal planning for systems with differential constraints (e.g. nonholonomic systems) can be achieved without the use of a steering function. We start by showing that, under certain conditions, as the number of samples goes to infinity, the probability that a planner that directly samples the control space samples a sequence of controls "near" the optimal control function goes to one.
Afterwards, we show that under certain conditions, two nearby sequence of controls will induce state space paths that have similar costs. The optimality proof described in this section is proven for the case in which we choose a node to expand uniformly at random and without doing pruning. We believe that similar properties hold for the other (RRT). In the case we have non-linear systems, as we see from the results section, the convergence for an RRT-style algorithm is slower than the uniform one. Given we know the properties of the underlining sampling strategy (e.g. choose a node to expand and pruning characteristic), immediate results from the analysis below could allow researchers to evaluate any sampling strategy. Optimality First, let's define the notion of "nearby" sequence of control inputs more formally as Definition 1. Let u = (u 0 , u 1 , . . . , u k ) and u = (u 0 , u 1 , . . . , u l ) be two sequences of control inputs. Then, u and u are -close whenever max i∈[0,max(k,l)] u i − u i ≤ , where u i = 0 for i > k and u i = 0 for i > l. Let φ * (t) : [0, T g ] → U denote the optimal trajectory, where T g is the time to reach the goal state. Suppose the optimal trajectory is approximated with a step function, which is represented as a sequence of control n], and each control input in the sequence is applied for ∆t time. We can then state our assumptions as, • The optimal trajectory φ * (t) is differentiable twice. • Equation (1) (the equation of motion) is Lipschitz continuous on both state and control arguments. • The cost function is Lipschitz continuous. We also consider the standard δ-clearance assumption (Choset et al., 2005). This assumption is not essential for the convergence of the family of planners that sample the control space to the optimal trajectory (e.g. δ is arbitrarily small). If δ is large then we will be able to get admissible trajectories faster than the case δ is small, however from our analysis does not follow that the convergence to the optimal trajectory depends on δ. Before going into the details of our analysis we would like to outline our proof. Let φ * : [0, T g ] → U be the optimal trajectory and u * : [0, T g ] → U to be a piecewise approximation of the optimal trajectory. 1 In addition, we consider the trajectory u : [0, T g ] → U to be a trajectory returned by a planner that samples directly the control space. It is important to say that u is of the same form as u * (e.g. both of them are piecewise constant functions), in addition we assume that u is -close to u * . The above trajectories are illustrated in the Figure (1). We would like to prove that by sampling directly the control space the Figure 1: We can see the optimal control input (Φ * (t)), a representation of the optimal control input (u * (t)) and a control input that is close to the representation of the optimal control input (u(t) e.g. a control an algorithm that samples the control space returns). probability to get a trajectory that is sufficiently close to any approximation of the optimal trajectory approaches to 1 as the number of samples (in the control space) increases. Before proving the asymptotic optimality property for the proposed family of planners, we first need to relate the δ-clearance property (which is defined in the state space) to properties on the control space such as the -close property and the discretization interval ∆t. To this end, we relate the distance between two trajectories (e.g. 
the optimal one and the one the proposed algorithm returns) with the distance between their induced state space paths. Any sampling based algorithm chooses discrete control inputs (trajectory), however the optimal control input (trajectory) is continuous. In our analysis we would like to take into account both distance due to approximation errors and due to the -close property. Proof. We consider 2 control trajectories with the same time duration applied to the equations of motion with the same initial conditions, and we would like to study the distance of the resulting paths in the state space at each time. Direct application of Lipschitz continuity on the equations of motion gives: In the above equation ||.|| indicates the L 1 Norm. Because the equation of motion is Lipschitz continuous, it is important to note that for every trajectory there is a unique path at each time (Slotine and Li, 1991). Then, for the L 1 norm we havė E S (t) ≤ ||γ φ (t) −γ φ (t)||. Direct substitution to Equation (3) yields the differential inequality below that describes the evolution of the error on the state space: Solving the above differential equation and applying the boundary condition we get: Given the maximum distance of 2 trajectories in the control space ( -close trajectories), the above Lemma shows a bound on the distance of the induced paths on the state space. Proof. Using Taylor expansion series for φ(t) (in all dimensions of the control space) we get: Using results from Lemma (1) we get: Applying the above 2 Lemmata to the optimal trajectory, to its representation (approximation) and to a trajectory that is -close to the representation of the optimal trajectory, we can define the appropriate ∆t for a given δ and to be: Using the above requirement for ∆t, we can prove the following convergence theorem. Theorem 1. Let u * denote the optimal sequence of control inputs when the time domain is discretized uniformly into ∆t intervals. Then, for any > 0 and ∆t that satisfies Equation (8), as the number of samples goes to infinity, the probability the proposed planer samples at least one sequence of control input u that has the same number of elements as u * and is -close to u * , goes to one. Proof. Each path from the root to a node of the tree T encodes a particular sequence of control input ( e.g qq , the edges of the tree are labeled with control input). The probability the proposed algorithm selects a particular sequence to be extended is the same as the probability it selects a node of T to be expanded, which is 1 j . Let ρ i be the probability the proposed algorithm samples a control input within distance from u * i . Because the new planner samples control inputs uniformly at random (from the control space) then, for all i ∈ [0, |u * |], Let P j,k be the probability that from j valid samples (e.g. for up to j iterations), the proposed algorithm generates at least one control sequence u that is -close to the first k th subsequence of u * , i.e, |u| = k and dist(u i , u * i ) ≤ for i ∈ [1, k]. As we show in the Appendix (A), this probability can be written as: The above equation holds for all j > k, ∀k ∈ [1, n]. For the case j = k, ∀k ∈ [1, n] we have P k,k ≥ ρ k k! . In addition for the base case (k = 1) we have: Furthermore, induction on j and k would show that and P j,k is monotonically increasing with respect to both j for a given k for j ≥ k and k > 1, of course with an upper bound of 1. In addition, for any finite iteration P j−1,k is smaller than P j−1,k−1 (one is subset of the other). 
Our goal is to show that P ∞,k = P ∞,k−1 = P ∞,k−2 = ... = P ∞,1 = 1. Let's now focus on the inequality in Equation We can then calculate the following summation Taking µ to the limit at ∞ gives us Now, we can set a lower bound (P j−1,k−1 − P j−1,k ) ≥ λ 1 , and rewrite Equation (16) as Using similar arguments we used for the base case (k = 1), we get λ 1 = 0. Due to monotonicity properties and because X j−1,k is a subset of X j−1,k−1 (see Appendix A) thus P j−1,k < P j−1,k−1 for all finite j > k then P ∞,k = P ∞,k−1 = ... = P ∞,1 = 1. Now, the question is how far away the cost of u is from the optimal cost. Proof. We consider 3 trajectories and the resulting 3 paths, the first trajectory is the optimal trajectory, the second trajectory is an approximation of the optimal trajectory and the last one is a trajectory that is -close to the approximation of the optimal trajectory (that can be potentially sampled by planners that explore the control space). Using Lipschitz continuity on the cost function we get: Where L D is the maximum Lipschitz constant. From Lemma (1) and Lemma (2) we have: Using Equation (18) and Equation (19) we get: Theorem 2. The proposed family of algorithms, that sample directly the control space, return a trajectory that induces path with cost that asymptotically approaches to the optimal cost. Let φ * : [0, T g ] → U be the optimal trajectory and u * be the sequence of control inputs that approximates φ * with time intervals ∆t, and u j : [0, T g ] → U to be a trajectory that is -close to u * returned by the algorithm until iteration j, where ∆t is given by Equation (8). Suppose γ φ * , γ u * and γ u j are the state space paths induced by φ * , u * , and u j respectively. If D(γ u j ) is the cost of the path induced by trajectory u j after sampling j samples in the control space then: Proof. From Lemma (3) and for any ∆t, > 0 we get d ≤ L D E(e LpTg − 1), where E = ( + mα∆t), then there are choices of = * and ∆t = ∆t * such that d is arbitrarily small: * Where * d ∈ R + is arbitrarily small, e.g. we can always find * , ∆t * such that * d ≤ d . From Theorem (1) we know that for any ∆t, > 0 the probability of sampling a trajectory that is arbitrarily (for any ∆t, ) close to the optimal trajectory approaches to 1 as the number of samples increases. The Lyapunov Approach Similar results to Theorem (1) can be obtained using the Lyapunov approach. We consider the system in Equation (10) and Equation (11), which is in the discrete domain and describes the probability to get at-least one trajectory with k milestones. We multiply both sides of the above system times −1 and add 1 in both sides and we get: We define Q j,k = 1 − P j,k , ∀k. In terms of Q j,k the above System is written as follows: We transform this system to its equivalent in the continuous time domain and we get: Where δ t ∈ R + is a time scaling constant that takes the iteration i to time domain t, e.g. t = δ t i. The initial conditions of the above system are given as follows: Setting the "velocities" in the System in Equation (26) equal to zero we can compute the equilibrium point (Q eq,k , ∀k) as follows: Q eq,k = Q eq,k−1 = ...Q eq,1 = 0 In order to show that the System in Equation (26) reaches to the equilibrium point (e.g. Q eq,k = Q eq,k−1 = ...Q eq,1 = 0) we will use the Lyapunov approach. 
We consider the following Lyapunov function: Taking the time derivative of the above Lyapunov function we get: Using the System in Equation (26), we geṫ It is important to see that the Lyapunov function is equal to zero at the equilibrium point and its derivative is always negative (but at the equilibrium point). Therefore, the system in Equation (26) reaches its equilibrium value (e.g. Q eq,k = Q eq,k−1 = ...Q eq,1 = 0). Convergence Rate In this subsection we will perform an analysis on the convergence rate for algorithms that sample the control space directly (using uniform sampling and without pruning). In the previous subsection we have shown that the system in Equation (26) approaches its Equilibrium point (e.g. Q eq,k = Q eq,k−1 = ...Q eq,1 = 0) and therefore P ∞,k = 1, ∀k. The question arises here is how fast the system in question approaches to its equilibrium point. We re-write the system in Equation (26) in the following form: ... With initial conditions as described in the previous section. At first we will solve for Q (t),1 . Separating variables (t and Q (t),1 ) and integrating both parts we get: ln (Q (t),1 ) ≤ ln (t −ρ ) + ln (C) Where C is the integration constant. To compute C we use the initial conditions e.g. for t = t 1 , Q (t),1 = (1 − ρ). Using the initial condition and after some algebraic manipulation we get: Similar results we get for the discrete system, see Appendix (B). Now we will compute Q (t),2 . To do so we multiply both sides of the the differential equation in question with µ(t) = e ρdt t = t ρ and we get: For the the left hand side of the above equation we get: Using results from the previous milestone we get: Separating variables and using the initial conditions (e.g. Q (t2),2 = (1 − ρ 2 2! )) we get: Where Using similar techniques we can show that: For Q (t),4 we get: Where c 1 = ρb1 3 , c 2 = ρb2 2 , c 3 = ρb 3 , c 4 = (1 − ρ 4 4! )t ρ 4 − c 1 (ln(t 4 )) 3 − c 2 (ln t 4 ) 2 − c 3 ln(t 4 ). Now we can clearly see a pattern for the solution to the system in Equation (26). For the general case we will use induction. We assume that: Where d i , ∀i = [1, n − 1] are constants. Using the n-th equation from the system in Equation (26) we have: Multiplying both sides with t ρ we get: Integrating both parts we get Where C is an integration constant. Using integration tables we know that: (ln(t)) n dt t = (ln(t)) n+1 n + 1 Using the above mathematical formula, we get: ln(t)) n−1 + e 2 (ln(t)) n−2 + e 3 (ln(t)) n−3 ...e n−1 (ln(t)) + e n ] Where the constants e i = ρdi n−i , ∀i ∈ [1, n − 1] and e n is the constant C computed using the initial conditions (e.g. Q(t n ) = (1 − ρ n n! )). The above inequality gives a bound on the probability "not to get at least one trajectory with n milestones that is -close to an approximation of the optimal trajectory (with n milestones and sufficiently small ∆t such that n = Tg ∆t )" for any positive ρ, which is proportional to . In the case, that the integration time is sampled uniformly form an interval e.g. [0, τ ], τ > 0, then the probability not to get at least one trajectory that is -close to any approximation of the optimal trajectory, with n milestones (each of them resulted in after applying control input for at most ∆t), is given by a similar inequality (change ρ to ρ ∆t τ ). However to cover an optimal trajectory with time duration T g it takes 2Tg ∆t expected number of milestones. 
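A quick numerical check of the first-milestone behaviour analysed above (and, in discrete form, in Appendix B) can be run in a few lines. The recursion below follows the form given in the analysis, Q_j = Q_{j-1}(1 - ρ/j), whose decay is bounded by roughly t^{-ρ}; the value of ρ used here is arbitrary and purely illustrative.

```python
# Numerical illustration (my own, with an arbitrary rho) of the first-milestone
# recursion from the analysis: Q_j = Q_{j-1} * (1 - rho/j), the probability of NOT
# yet having sampled an epsilon-close first control after j iterations. The analysis
# predicts decay roughly like j^{-rho}, so P_j = 1 - Q_j tends to one.
import numpy as np

def q_first_milestone(rho, n_iters, j0=1):
    q = np.empty(n_iters)
    q[0] = 1.0 - rho / j0                 # initial condition Q_{j0} = 1 - rho
    for j in range(j0 + 1, j0 + n_iters):
        q[j - j0] = q[j - j0 - 1] * (1.0 - rho / j)
    return q

rho = 0.05                                # prob. of an epsilon-close control sample
q = q_first_milestone(rho, 200000)
for j in (10, 1000, 100000):
    print(f"j={j:>6}  Q_j={q[j - 1]:.4f}  compare j^-rho = {j ** -rho:.4f}")
```

The printed values decay slowly but steadily, matching the polynomial rate derived above: a smaller ρ (a smaller ε-ball relative to the control space) directly translates into slower convergence.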
Different Sampling Strategies Both the optimality proof and the convergence analysis hold for planners that sample directly the control space by choosing a node to expand uniformly at random, applying control input at random and without using pruning. One can use artificial intelligence methods to guide sampling and thus improve the convergence rates. For example, let the probability to choose the "correct" node for expansion to be f 1 ( 1 j ) > 1 j (where f 1 : R + → R + ), then the summation in the right-hand side of Equation (14) approaches faster to infinity and thus the probability P j,k approaches faster to 1. One way to do this is to use pruning techniques, however when we use pruning techniques it is very difficult to compute the convergence rate. In addition, we see in the previous section the convergence rate depends directly on ρ. We recall that ρ is the probability to choose a control input that is within a ball of radius centered at the optimal one. In the case we use uniform sampling of the control space ρ is given by the ratio of the volume of that ball divided by the volume of the control space. Therefore, if we use artificial intelligence methods to guide the control sampling process then we can directly increase ρ and therefore effect the convergence of the proposed approach. Studying and analyzing different sampling strategies is not within the scope of this research; we recall that the goal of this work is to show that sampling in the control space directly can achieve asymptotically optimal planning. Simulation Results This section presents performance comparison between the simplest sampling strategy we have analyzed in the previous section with more sophisticated strategies, i.e., uniform with pruning and expandTreeRRT with pruning, which is a simplified version of Sparse-RRT (Littlefield et al., 2013). For this comparison, we use the problem of "Pendulum on a Cart" with obstacles (see Figure (2)). This is a highly-non-linear 2 nd order under-actuated system with infinite number of equilibrium points both stable and unstable. The system's state space is 4-dimensional, e.g. X ∈ R 4 , with X = [x, v, θ, Ω] T , where x , v is the velocity of the cart (e.g. v =ẋ), θ indicates the angle of the pendulum (with mass m, inertia I and length L), and Ω is the angular velocity of the pendulum (e.g. Ω =θ ). Using the Euler-Lagrange equations we can derive the equations of motion: Ω(t) = (−mL cos(θ(t)))(F + mLΩ 2 (t) sin(θ(t))) + (M + m)(−mgL sin(θ(t))) (M + m)(I + mL 2 ) − (mL) 2 cos 2 (θ(t)) For this problem we would like to minimize control effort and time to the goal, thus we define as a cost function the following: where: a t = 1000 cost s , a f = 1 cost N 2 s , M = 10kg, m = 5kg, I = 10kgm 2 , L = 2.5m (see Figure (2)), g = 9.86 N s 2 , the input to the system is a force F acting on the cart in the x direction, for this case F is uniformly sampled e.g. F = [0, 300]N . The initial conditions are all zero, the workspace is such that: x = [0, 60]m, and includes obstacles as shown in Figure (3). The goal region is located in the right side (48m < x < 52m) in the upright position ((180 − 10) • < θ < (180 + 10) • ) with −3.14rad/s < Ω < 3.14rad/s and (−4m/s < v < 4m/s). 3 We implemented the algorithms in C++ in a Dell computer that runs Linux on 32 Intel x86 − 64 processors at 1.2 GHz. Each one of the results presented here represents average results over 21 runs. 
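To make the benchmark concrete, the sketch below forward-propagates the pendulum-on-cart dynamics under one sampled force. The angular-acceleration formula is the one printed above; the cart-acceleration expression is the standard companion Euler-Lagrange relation, which is not shown in the extracted text, and the Euler integrator and the cost form a_t + a_f F^2 per unit time are my own illustrative choices.

```python
# Hedged sketch of forward propagation for the pendulum-on-cart benchmark described
# above. Parameters are the values listed in the text; the x-acceleration equation
# and the integration scheme are assumptions of this sketch, not taken verbatim.
import numpy as np

M, m, I, L, g = 10.0, 5.0, 10.0, 2.5, 9.86      # parameters as listed in the text
a_t, a_f = 1000.0, 1.0                           # time and control-effort weights

def derivatives(state, F):
    x, v, th, om = state
    denom = (M + m) * (I + m * L**2) - (m * L * np.cos(th))**2
    om_dot = ((-m * L * np.cos(th)) * (F + m * L * om**2 * np.sin(th))
              + (M + m) * (-m * g * L * np.sin(th))) / denom
    # Companion cart-acceleration relation (assumed, derived from the same Lagrangian)
    v_dot = ((I + m * L**2) * (F + m * L * om**2 * np.sin(th))
             + (m * L)**2 * g * np.sin(th) * np.cos(th)) / denom
    return np.array([v, v_dot, om, om_dot])

def propagate(state, F, dt, steps=100):
    """Euler integration under a constant force F for dt seconds, returning the new
    state and the accumulated cost (a_t + a_f * F**2) * dt."""
    x = np.array(state, dtype=float)
    h = dt / steps
    for _ in range(steps):
        x = x + h * derivatives(x, F)
    return x, (a_t + a_f * F**2) * dt

state = np.zeros(4)                               # cart at rest, pendulum hanging down
state, cost = propagate(state, F=150.0, dt=1.0)   # one sampled control, dt = 1 s
print(state, cost)
```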
Figure (4) shows the cost of the trajectories generated by simple-uniform (the algorithm we used for analysis), simple-uniform with pruning, and expandTreeRRT with pruning. As expected, the expandTreeRRT method computes an admissible trajectory faster than the uniform approaches. What is interesting is the expandTreeRRT with pruning converges to a further lower cost (2 × 10 8 ) much slower than simple-uniform with and without pruning. This happens because the optimal solution occupies only a small region of the state space. To reach a near optimal solution fast, one needs to sample more densely near the region of optimal solution. RRT's expansion strategy which tries to cover the entire state space well will actually "under sample" state space region that are near optimal solution and "over sample" other regions that do not contribute in generating the optimal trajectory. Note that this does not mean that the expansion strategy in simple-uniform is a suitable one, rather the simulation results indicate that finding sampling strategy that biases sampling towards regions near optimal trajectory would be a more fruitful avenue. In addition to comparing sampling strategies, we also try to understand the effect of different ways of setting time discretization. Figure (5) shows the statistics for the case of constant integration time ∆t = 1s and in the case we sample integration time from [0, 3s]. In both cases the results were taken using simple-uniform with pruning. When we use constant integration time (which is sufficiently small) we get results similar to the ones we get when we sample the integration time, however the results for constant integration time, for small number of iterations are slightly better than the ones we get by sampling the integration time. Summary and Discussion Most methods for solving the problem of optimal motion planning use direct exploration of the configuration space. These methods sample the configuration space and rely on steering functions to connect the configuration space with the control space. However, for general non-linear systems, a steering function is not always available and can be expensive to compute. This paper shows that we can solve optimal motion planning problem without steering functions by sampling the control space directly. In this paper, we present a novel analysis on asymptotic optimality of a family of algorithms that directly sample the control space. Our analysis is based on the simplest method in this family of algorithms. We also present a comparison result between the simplified method we used for analysis and more sophisticated sampling methods in this family of algorithms. As the number of iterations increases, the trajectory these algorithms sample approaches the optimal trajectory. Many avenues are possible for future improvement. Can we generalize the theoretical results further. For instance, will the asymptotic optimality property holds when Lipschitz continuity does not hold. Furthermore, this paper only shows that it is possible to solve optimal motion planning problems without a steering function. How to design such an efficient optimal motion planner remains an open problem. of u * , i.e, |u| = k and dist(u i , u * i ) ≤ for i ∈ [0, k]. 
Then using the total probability theorem and condition on the events X i−1,k andX i−1,k (up to the previous iteration the underlining algorithm (does not) return(s) at least one trajectory with the desired characteristics) is given by: P r(X j,k ) = P r(X j,k |X j−1,k )P r(X j−1,k ) + P r(X j,k |X j−1,k )(1 − P r(X j−1,k )) ⇒ P r(X j,k ) = P r(X j−1,k ) + P r(X j,k |X j−1,k )(1 − P r(X j−1,k )) To compute the term P r(X j,k |X j−1,k ) in the above equation, we use the total probability theorem condition on the events X j−1,k−1 andX j−1,k−1 . Appendix B: Convergence Rate for the First Milestone for the Discrete System Here we will perform an analysis on the discrete system for the convergence rate for the first milestone (e.g. k = 1). Again, we consider uniform sampling without pruning. To simplify our notations, Instead of using inequalities we will use equalities and work with bounds. We consider another system as follows: P j,1 =P j−1,1 + (1 −P j−1,1 ) ρ j , ∀j ≥ k, k = 1 (55) Thus,P j,1 ≤ P j,1 , we define Q j = 1 −P j,1 thus It is easy to see that as the number of iterations increases Q j goes to 0. Let α 1 be the convergence rate then we have: Substituting this equation on the above equations we get: Using Taylor series (Lagrange, 1813) we have log(1 − x) ≈ −x, x < 1 ∈ R thus the convergence rate for large j is equal to ρ. More formally, lim j→∞ log (1 − ρ j ) log (1 − 1 j ) = 0 0 (60) Using the "L'Hopital's Rule" (de L' Hopital, 1696) we get: The above results agree with our analysis for the continuous equivalent system.
2014-05-12T11:38:00.000Z
2014-05-12T00:00:00.000
{ "year": 2014, "sha1": "978e3bbd5a2e7ef9d627ad2077317d5c0b872301", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "978e3bbd5a2e7ef9d627ad2077317d5c0b872301", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
219741895
pes2o/s2orc
v3-fos-license
Theoretical framework of the polymer waste thermal degradation The article provides a theoretical analysis of the pyrolysis of polymer waste. The main products of pyrolysis are considered. The thermal degradation of polymers was studied depending on the composition of the materials. The main factors affecting the product distribution of pyrolysis of waste plastics are determined. Gaseous medium in the pyrolysis chamber plays a significant role in the process. The operating conditions of the pyrolyser are considered. The use of pyrolysis makes it possible to significantly reduce the amount of solid waste and increase energy savings without harming the environment and human health. Introduction Pyrolysis is one of the most promising areas of MSW processing in terms of both environmental safety and the production of useful commercial products. Globally, MSW recycling is a profitable business. This is due to the fact that more developed countries have learned how to recycle household waste, to make it a valuable resource for receiving heat and different substances. However, in Russia, new waste management technologies are practically not used, and household garbage, in the best case, is simply incinerated, and in the worst, it is accumulated in landfills [1][2][3][4][5][6][7]. Conversion of plastic waste from municipal solid waste (MSW) into fuel by pyrolysis of long-chain hydrocarbon fragments into short-chain has several advantages. Firstly, it activates a new cycle of consumption of non-renewable energy sources. Secondly, it also provides a significant source of petrochemicals, which reduces the consumption of non-renewable energy resources. Thirdly, it offers an effective, innovative and alternative solution to eliminate waste, therefore, preventing environmental pollution by burning it or filling landfills and waterways. The thermal degradation products of plastics are: • pyrolytic gas (is the fuel for the operation of the plants); • coke residue of 4-5 hazard class (included in mixtures used in construction and reclamation); • synthetic oil (on its basis it is possible to obtain diesel fuel and a gasoline component); • heat generated during the pyrolysis process (supplied to space heating). The main components of pyrolytic gas are hydrogen, carbon monoxide, methane, which are used as raw materials for the synthesis of methanol, ammonia, oxoalcohols or other chemicals, as well as fuel [8]. To reduce the energy consumption for the operation of the pyrolysis installation, part of the energy from the gas obtained in the process goes to the heating of the air with its subsequent supply to the The indisputable advantage of pyrolytic gas is the absence of sulfur and nitrogen in it. Since the process proceeds in the absence of oxygen, the pyrolytic gas does not contain hazardous dioxins formed during the combustion of hydrocarbons [9]. The disadvantage is the low calorific value and the difficulty of storing pyrolytic gas, the main of which is the inability to transport it over considerable distances. This implies that the gas consumption after its receipt should occur at a distance of not 3 km. The disadvantages of pyrolysis include the complexity of the design of furnaces; high cost of equipment; the need for a large number of staff. Like any promising technology, improving pyrolysis is in demand and relevant. 
Mainly the modernization of pyrolysis consists in modifications to the design of plants in order to increase the energy efficiency of the process and reduce the proportion of harmful residues. Also, these goals can be achieved by changing the conditions of the process and the introduction or removal of certain chemical components. Depending on the decomposition temperature of inorganic compounds, low-temperature and hightemperature pyrolysis are distinguished. As a result of low-temperature pyrolysis, carried out at a temperature of 450-900 ° С, the resulting pyrolytic gas is minimized, but the maximum yield of residues as well as liquid products. While maintaining the pyrolysis temperature of more than 900 ° C, which is a high-temperature pyrolysis, the maximum gas yield is in contrast to the minimum yield of fixed residues and liquid product. Thermal degradation of polymers During thermal degradation, the destruction of the least strong bonds of the polymer substrate occurs first of all. Next, almost complete decomposition of the remaining bonds occurs while maintaining a higher temperature. A decrease in the mass of polymer raw materials is also observed in connection with depolymerization reactions, as well as static rupture of macromolecules and elimination of side groups [10]. The pyrolysis reaction rate increased, and thus the reaction time decreased with increasing temperature. High temperature easily breaks the bonds and, thus, accelerates the reaction and reduces the reaction time. The reaction kinetics for various plastics followed the trend of PVP> PNP> PP over the entire temperature range. This can be explained on the basis of the C -C bond strength and polymer chain orientation in various plastics. PVP with a long linear polymer chain with a low branching and a high degree of crystallinity has high strength and at the same time requires more time for decomposition. Therefore, it is necessary to carry out the process at higher temperatures, as well as the creation of more stringent conditions. Taking into account that PNP with high branching and low crystallinity has weak bonds, which are easily destroyed compared to PVP. In PP, the presence of the side chain of the group (-CH3) and low crystallinity reduces the overall strength, leading to a decrease in reaction time. Based on experimental data, the dependence of the heating rate on the pyrolysis product yields is revealed. The higher the heating rate of the reactor of the pyrolyser, the greater the yield of gaseous products and the lower the mass of coke. The dependence is presented in table 1. The process is also affected by the particle size of the raw materials used. The increase in the size of the used particles of raw materials contributes to an increase in the amount of solid residue, a decrease in the proportion of pyrolytic gas and liquid products by reducing the heating rate of raw materials [11]. An important role in the process is played by the gaseous medium in the pyrolysis chamber. It can be inert, with partial access of air, the atmosphere of hydrogen, methane, carbon dioxide, water vapor. If oxygen is present in the medium, the fraction of gaseous products contains CO and CO2, and oxygencontaining compounds, which adversely affect the quality of gaseous fuels and oils, will be another pyrolysis product. The pressure maintained in the pyrolysis reactor is also of great importance to the products obtained. So, when using low pressure, primary products are formed to a greater extent. 
If the installation operates at high pressure, the result of the process is the yield of mainly liquid products. Table 2 presents the main factors affecting the pyrolysis process and the quality of the process products. Table 2. The main factors affecting the distribution of products of polymer waste pyrolysis. Influence factor Effect The chemical composition of polymers The primary pyrolysis products are directly related to the chemical structure of the polymers, as well as to the mechanism their thermal degradation (thermal or catalytic) Pyrolysis temperature and heating rate High process temperatures and heating rates contribute to bond breaking and predominant formation of low molecular weight products Pyrolysis time A long residence time contributes to the secondary conversion of primary products, an increase in the yield of coke, tar, as well as temperature-resistant products, thus gradually hiding the effect of the original polymer structures Type of reactor It mainly determines the quality of heat transfer (heat transfer), mixing, retention time of liquid and gaseous phases, the formation of primary products Operating pressure Low pressure reduces the condensation of active fragments forming coke and high molecular weight products The presence of active gases such as oxygen or hydrogen The formation of heat, the dilution of products. Effect on balance, kinetics and mechanism The use of catalysts It affects the kinetics and mechanism and, therefore, the distribution of pyrolysis products Additives Usually evaporate or collapse. Some may affect kinetics or mechanism. Liquid or gaseous phase The liquid phase of pyrolysis delays the output of developing products, thereby increasing the internal interaction IOP Conf. Series: Materials Science and Engineering 862 (2020) 062016 IOP Publishing doi:10.1088/1757-899X/862/6/062016 4 The presence of a catalyst significantly affects the reaction rate, as well as the quality of the condensed fraction. The use of a catalyst improves the ability to decompose the polymer, thereby increasing the yield of condensed gas and leading to an increase in the yield of the liquid product. The use of a catalyst eliminates pressure fluctuations and reduces the time of processing by pyrolysis of plastic waste [12]. The fundamentals of pyrolysis in the processing of polymer waste Any pyrolysis process can be divided into four stages: drying of raw materials in the drying chamber; dry distillation or the pyrolysis process itself; combustion of solid residues; obtaining pyrolysis products such as pyrolytic gas, oil and carbon residue. The schematic diagram of pyrolysis is shown in figure 1. In the process of pyrolysis, the heat released as a product of the process is partially used to heat some stages, which can be observed when considering figure 1. The reactor is a major part of the pyrolyser. The principle of operation of the installation is as follows: the raw materials entering through the upper part of the reactor are dried in the upper layers of the reactor. At the same time, the temperature of 100-300 ° C is maintained in this area. Then it goes down through the superimposed pre-distillation retort. Next, the raw product moves to the middle of the reactor. There the pyrolysis process takes place, in which the raw material is thermally decomposed and its coking occurs. The temperature in this area is at around 500 ° C. The resulting ash is cooled to 100 ° C and removed from the chamber with subsequent reuse. 
The resulting gas products are collected through a water-cooled condenser. The oil yield is determined relative to the initial mass of plastic waste. Compared to other methods of handling MSW, pyrolysis is the most preferable from an environmental point of view, since it reduces the carbon footprint of the products. The pyrolysis process also reduces emissions of carbon monoxide and carbon dioxide, since the inert, oxygen-free atmosphere avoids the formation of dioxins through reaction of the products with oxygen. Pyrolysis is recognized as a clean technology that meets the objectives of energy security and as a measure to combat the depletion of fossil fuels [13]. The data in [14][15] show that PP pyrolysis products are capable of detonation in slightly fuel-enriched mixtures with air at normal pressure and elevated initial temperatures in the range of 60-90 °C. Based on this, studies are being conducted to create a combustion chamber with detonation, rather than deflagration, combustion of chemical fuel. The low-hazard, rock-like slag formed during burning can be safely disposed of. For example, it can be used as a substitute for road gravel as well as in construction for soundproofing walls, which is relevant for Germany, Holland and other European countries [16].
Conclusion
The study of pyrolysis plants leads to better modeling of process parameters at the enterprise level. Such results are vital to realizing the potential of this technology to solve the problem of plastic waste worldwide. The development of such technologies should be included in the national policies of developing countries and of countries that currently lack the infrastructure for the collection, separation and processing of plastic waste. The widespread use of pyrolysis technology could be a viable and cost-effective solution to the growing worldwide problem of plastic waste. Using waste such as plastic to generate energy can also, to some extent, satisfy the growing need for liquid fuels. It follows that the pyrolytic decomposition of MSW is a developing method that deserves attention due to its effectiveness and its use of recycled materials. The process itself is virtually waste-free, and the thermal decomposition products can serve as raw materials for subsequent production or processing.
COVID-19 Morbidity Among Oral Health Professionals in Brazil Background We evaluated and compared the cumulative incidence of confirmed COVID-19 cases between oral health professionals and the general population in Brazil. Methods Secondary data from notification of laboratory unconfirmed and confirmed cases of COVID-19 in the National data system for 41 epidemiological weeks were analysed and compared between oral health professionals (dentist + oral health technicians/assistants) and the general population. The cumulative incidences of COVID-19 were obtained by the ratio of the total number of confirmed cases to the total Brazilian population or the population of oral health professionals registered with the Federal Council of Dentistry and adjusted by age. The incidences were then compared. Results The age-standardised cumulative incidences were 18.70/1000 for oral health professionals and 17.71/1000 for the population, with a ratio of 1.05. The highest incidences were observed in the states of Roraima (67.05/1000), Tocantins (58.81/1000), and Amazonas (58.24/1000). In 14 states, the age-standardised cumulative incidences were higher among oral health professionals than in the general population. There was a decrease in the number of new cases between the 29th and 30th epidemiological weeks in both populations. Conclusions COVID-19 infections among oral health professionals was similar to that of the general population. However, the cumulative incidence was 5% higher among oral health professionals, varying among Brazilian states. Practical implications Infection control practices might help lower the risk of contamination in dental settings. Introduction The coronavirus disease (COVID- 19) was declared a Public Health Emergency of International Concern by the World Health Organisation on January 30, 2020. On March 11, COVID-19 was characterised as a pandemic following the disease's rapid spread outside China. 1 In Brazil, the first suspected case of COVID-19 was announced on February 25, 2020, and on March 20, the community transmission of COVID-19 was recognised throughout the Brazilian territory. 2 On October 23, Brazil had 5,323,630 confirmed cases, 3 the third highest number in the world, 4 with 155,900 cumulative deaths from the disease. 3 Due to its contagion dynamics, this disease is of particular concern among asymptomatic and presymptomatic people who can unknowingly spread the virus through the nose and mouth. 5,6 Health care professionals are, in general, at a high risk for infection. 7 The condition is aggravated in the dental office due to the proximity of the staff to the patient, frequent exposure to saliva and other body fluids, handling of sharp instruments, and aerosol generation. 5,[8][9][10] Therefore, dental boards and scientific associations in many countries, including Brazil, have recommended the suspension of elective dental care, maintaining emergency care only. [11][12][13][14] International organisations have published guidelines on how to control the spread of the disease among oral health care providers. [14][15][16] In Brazil, oral health professionals (OHPs) have been appointed to the COVID-19 fast-track multidisciplinary teams for rapid screening and diagnostic testing for COVID-19. 17 However, the occurrence of COVID-19 among OHPs is not well known, and studies concerning the differential risk between OHPs and the general population are lacking. 
10 The information on the number of cases and disease morbidity among dental staff is greatly needed to subsidise effective actions from public health agencies and managers of private and public services, concerning information on the conduct of care, provision of personal protective equipment, and collective protection strategies, training opportunities, and physical and mental health support to workers and their families. This study evaluated and compared the cumulative incidence of confirmed COVID-19 cases observed in OHPs and the general population in Brazil. Methods Data on laboratory unconfirmed and confirmed cases of COVID-19 available in the national system website (e-SUS VE Notifica system) 18 from the Ministry of Health of Brazil were analysed. The e-SUS VE system was implemented on March 27, 2020, to register clinical signs and symptoms of COVID-19 and results of diagnostic testing for COVID-19 of all suspected cases. The confirmed cases were those with laboratory evidence (positive result) in the notification. The data were obtained on October 10, 2020, and included all laboratory unconfirmed and confirmed cases reported since the first record (January 1, 2020) with information on test results. The variables obtained were the report date, Brazilian state, sex, age, occupation, and diagnostic test (type and result). Age was stratified into ≤19 years, 20-29, 30-39, 40-49, 50-59, 60-69, and ≥70 years. OHPs were classified as dentists or oral health technicians/assistants according to the Brazilian Occupation Classification from the Ministry of Labor. Diagnostic test types were rapid antibody test, rapid antigen test, reverse transcription polymerase chain reaction (RT-PCR), or other (enzyme-linked immunosorbent assay, chemiluminescence immunoassay, or electrochemiluminescence). If results of the rapid test and RT-PCR were available, the recommendation is to consider the one from the RT-PCR. Data analysis Cumulative incidence of the COVID-19 tests: This was calculated by dividing the total number of laboratory unconfirmed and confirmed cases by the estimated Brazilian population for 2020 from the Brazilian Institute of Geography and Statistics 19 and the OHPs registered in the Federal Council of Dentistry 20 and multiplying by 1000. Number of confirmed COVID-19 cases: The disease evolution was assessed by analysing the number of new and cumulative confirmed COVID-19 cases for both the general population and OHPs during the epidemiological weeks (EWs) 1 to 41, from the first record date (January 1, 2020) until the date of data extraction (October 10, 2020). Crude cumulative incidence of confirmed COVID-19 cases: This estimates the risk of an individual in the population to develop the disease during a specific period. The cumulative incidence of confirmed COVID-19 cases in the general population (per 1000 inhabitants) was obtained by dividing the number of confirmed cases by the population at risk, that is, the Brazilian population. The age-specific cumulative incidence was calculated for age groups established by the Brazilian Institute of Geography and Statistics for 2020. 19 The cumulative incidence of confirmed COVID-19 cases in the OHPs (per 1000 OHPs) was obtained by dividing the number of confirmed cases by the total number of registered OHPs. 
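The crude cumulative-incidence definitions above reduce to a simple ratio scaled to 1000; a minimal sketch is shown below. The counts in the example are placeholders for illustration, not figures from the study.

```python
def cumulative_incidence_per_1000(cases: int, population_at_risk: int) -> float:
    """Cumulative incidence per 1000, as defined in the Methods:
    confirmed cases divided by the population at risk, times 1000."""
    if population_at_risk <= 0:
        raise ValueError("population at risk must be positive")
    return 1000.0 * cases / population_at_risk

# Placeholder counts for illustration only:
print(round(cumulative_incidence_per_1000(cases=900, population_at_risk=50_000), 2))  # -> 18.0
```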
20 The age-specific cumulative incidence was indirectly estimated from the number of professionals enrolled per year in the Regional (for São Paulo) or Federal Councils of Dentistry, considering 24 years as the average age of graduation from Dentistry. 21 Age-standardised cumulative incidence of COVID-19 cases: The standardisation by age considers the different age compositions of the populations, allowing comparisons of COVID-19 incidences among Brazilian states. The age-standardised cumulative incidence of confirmed COVID-19 cases in OHPs and the general population was calculated for each Brazilian state. The direct standardisation method recommended by the World Health Organisation was used. 22 Finally, the ratio between the age-standardised cumulative incidences of COVID-19 for OHPs and the general population was calculated for Brazil and by state, which provided an estimate of the risk of COVID-19 among OHPs. Since these were open-access secondary data, no approval was necessary from the Institutional Review Board. Results The e-SUS VE Notifica database contained 13,291,343 records with information on the test results. The percentage of records with missing data was 0.03% for date of notification (n = 3,858), 4.06% for health professional and occupation (n = 539,338), 0.10% for type of test (n = 13,782), 0.07% for age (n = 9,960), and 1.05% for sex (n = 139,807). Figure 1 shows the cumulative incidence of COVID-19 testing in the general population and among OHPs in each state. Twenty-one per 1000 OHPs were tested for COVID-19, with variations among states. In the general population, the cumulative incidences ranged from 4.01 to 154.78 per 1000 inhabitants. From 12,752,005 valid records for occupation, 48,301 were dentists (n = 31,666, 65.10% were women) or oral health technicians/assistants (n = 16,635, 93.24% were women). Considering all types of tests, the proportions of positive results in the general population and OHPs were 33.85% and 21.67%, respectively (Tables 1 and 2, Supplementary data). Among OHPs, 21.19% of dentists and 22.62% of oral health technicians/assistants had positive results, with large variations among states. The state of Ceará had no record in the Brazilian Occupation Classification for OHPs. Figure 2 shows the number of new and cumulative confirmed COVID-19 cases in the general population (Figure 2a, 2b) and among OHPs (Figure 2c, 2d) according to EW. The curves were similar, and slower growth in the number of cases was observed among OHPs, with the first cases reported after EW 12. Between EW 29 and 30, there was a decrease in the number of new cases, both in the general population and among OHPs. The age-standardised cumulative incidences were 18.70 per 1000 registered OHPs and 17.71 per 1000 inhabitants in the general population (Table 1). The ratio between these two incidences was 1.05. The age-standardised cumulative incidence of confirmed COVID-19 cases among OHPs by Brazilian state is shown in Table 1. [Table 1 - Age-specific cumulative incidence and crude and age-standardised cumulative incidence of confirmed COVID-19 cases in the general population and in oral health professionals in Brazil from January 1 to October 10, 2020. Columns: age groups (years); general population (per 1000 inhabitants); oral health professionals (per 1000 registered oral health professionals). Note: for oral health professionals, 55 patients aged <20 years were added to the 20-29 years category to estimate the age-specific cumulative incidence.]
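A minimal sketch of the direct age-standardisation step and of the OHP-to-population incidence ratio described above is given below. The age bands, counts and standard weights are invented for illustration; the study used the WHO-recommended standard population, whose exact weights are not reproduced here.

```python
def age_standardised_incidence_per_1000(cases_by_age, pop_by_age, std_weights):
    """Direct standardisation: weight the age-specific rates by a standard population."""
    total_weight = sum(std_weights.values())
    weighted_sum = sum((cases_by_age[a] / pop_by_age[a]) * std_weights[a]
                       for a in cases_by_age)
    return 1000.0 * weighted_sum / total_weight

# Invented three-band example (not study data):
cases_ohp = {"20-39": 420, "40-59": 310, "60+": 40}
pop_ohp = {"20-39": 30_000, "40-59": 22_000, "60+": 3_000}
cases_gen = {"20-39": 60_000, "40-59": 55_000, "60+": 30_000}
pop_gen = {"20-39": 4_000_000, "40-59": 3_500_000, "60+": 1_800_000}
std = {"20-39": 0.40, "40-59": 0.38, "60+": 0.22}  # illustrative weights only

asi_ohp = age_standardised_incidence_per_1000(cases_ohp, pop_ohp, std)
asi_gen = age_standardised_incidence_per_1000(cases_gen, pop_gen, std)
print(round(asi_ohp, 2), round(asi_gen, 2), round(asi_ohp / asi_gen, 2))  # standardised incidences and their ratio
```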
Data were obtained in the e-SUS VE system. . The cumulative incidences among OHPs were higher than that in the general population in 14 states (ratio > 1). In the general population, the cumulative incidences of confirmed COVID-19 cases were 20.49 in men (per 1000 men population) and 21.39 in women (per 1000 women population), and age-standardised cumulative incidences were 19.49 and 20.00, respectively. The cumulative incidence by sex among OHPs could not be calculated as the data by sex were not available. The data used to calculate cumulative incidences were shown in Tables 3 to 5 (Supplementary data). Discussion The results presented were based on laboratory unconfirmed and confirmed cases of COVID-19 registered in the national information system. A lower cumulative incidence of testing for COVID-19 was observed among OHPs compared to the general population. This result reflects the reality of low testing in the country, being considered as one of the lowest-testing countries in the world. In April 2021, Brazil appeared as the 131st in number of SARS-CoV-2 RT-PCR exams per million inhabitants. 23 Mass testing has been deemed an efficient strategy for controlling the spread of COVID-19 and proved to be adequate in several countries around the world. [24][25][26] In Brazil, despite national legislation 27 establishing the prioritisation of COVID-19 testing among health professionals, universal testing was not performed, contributing to the observed low testing records among OHPs. A previous study with 3,122 Brazilian dentists in May 2020 reported that testing was more frequent in dentists who had seen patients with COVID-19 in their offices. Although 90% feared contracting the disease at work, only 8% indicated they had been tested for COVID-19. 28 Our findings highlight the importance of offering mass testing of OHPs and improvement of the educational campaigns to motivate monitoring of serological status in professional practice. The evolution of COVID-19 among OHPs was similar to that observed in the general population. The crude and agestandardised cumulative incidences of confirmed COVID-19 cases were also similar between the 2 populations. This similarity between incidences is consistent with a recent finding by an American study that estimated a prevalence of 0.9% (95% confidence interval, 0.5 to 1.5) of confirmed or probable COVID-19 infection among dentists. According to that study, the likely source of SARS-CoV-2 transmission was identified through contact tracing by a health agency or clinic in only 5 cases, of which none had a dental practice as a source of transmission. According to the authors, the risks associated with nonclinical activities and community spread might pose the most substantial risk for the exposure of dentists to COVID 19. 10 Additionally, infection prevention and control procedures recommended by the Centers for Disease Control and Prevention for dental offices in the US contributed to the reduced risk of developing an infection during oral health care delivery. Similarly, another web-based study found a prevalence of 1.1% positive test results among Brazilian dentists. 28 Another previous online survey with a French dental professional population showed a prevalence of laboratoryconfirmed COVID-19 cases of 1.9% for dentists and 0.8% for dental assistants. 
29 In Brazil, the similarity observed in the incidences of confirmed COVID-19 cases between the general population and OHPs can be attributed to the adherence of OHPs to the guidelines issued by the Federal Council of Dentistry and the Ministry of Health and Education for clinical practice during the pandemic. The recommendations included the suspension of elective consultations, maintenance of emergency and urgency care in public and private services, 17,30,31 and canceling of all in-person undergraduate and graduate academic activities. National surveys indicated great adherence by professionals to the guidelines, 12,28,32,33 following the trend in several other countries in the world that adopted similar strategies for the pandemic. 34,35 Another aspect is the effectiveness of infection control practices in dental offices. Previous research has shown that the vast majority of professionals (91%) follow official regulatory standards in their new routines and have made substantial efforts to cope with the latest clinical requirements. 32 Most of the professionals (95.5%) reinforced biosafety protocols in dental offices, such as the use of face shields and single-use disposable personal protective equipment, improved suction efficiency to avoid aerosols/droplets dispersion, mouth rinsing before dental procedures, rubber dam isolation, and increased time between dental care appointments. 12 The present study findings, then, reinforce that infection control practices must be kept as an approach to prevent the spread of SARS-COV-2 in dental offices. The 5% higher cumulative incidence of confirmed COVID-19 cases among OHPs compared to the general population should be a warning sign of the increased risk of infection of this professional category in Brazil, although the differential risk compared to the general population and whether dental practice increases the risk of COVID-19 is not well established. However, the aerosol produced during the delivery of dental procedures can contain infectious material and might be a potential vector for patient-to-practitioner and patientto-patient transmission. 5,8,9,36 Although cases of COVID-19 among dental professionals at the School and Hospital of Stomatology, Wuhan University, Wuhan, China, have been reported, whether these infections were due to community transmission or transmission associated with oral health care delivery is unknown. 9 The higher incidence among OHPs found in our study cannot be directly attributed to a higher risk of infection in dental practice. However, it highlights the importance of protective measures during dental care and the continuous monitoring of cases, besides the generation of scientific evidence on COVID-19 and associated factors in OHPs. Besides, the results of this study can provide information to health authorities about the infection status of OHPs, considering the need for attention to specific risk groups. The protection of OHPs must be a public health strategy in the control of the pandemic. There was a wide variation in the age-standardised cumulative incidence of confirmed COVID-19 cases in Brazilian states, both in the general population and among OHPs. The different testing rates make it difficult to compare the effect of different containment strategies in the national territory or even discuss differential risks between populations. 
However, hypotheses could be raised, such as the different timing of infection introduction in states, the speed with which states responded to COVID-19, 37,38 and different protocols for dental care in response to COVID-19 dynamics, 39,40 as occurred in other countries. 41 The disease dynamics in each state could also influence the decisions of offering dental care. A national study showed that dentists from states with greater case and death rates had higher odds of being fearful of contracting the disease. For each additional 1000 cases or 100 deaths, the odds of stopping work or providing emergency care increased by 36% and 58%, respectively. 42 Social inequalities can also explain the regional differences in the cumulative incidences of COVID-19. 38,43,44 The North region of Brazil (Roraima, Amap a, Rondônia, Acre, Tocantins, and Amazonas), which has the greatest social inequality condition in the country, had the highest agestandardised cumulative incidences of COVID-19 cases. Studies carried out in Brazil have shown a higher Social Distancing Index in neighborhoods with better living conditions, higher incidence of COVID-19 in cities with lower Human Development Index, and higher mortality in the most vulnerable regions of the country and within more vulnerable social groups. In this context, it is worth highlighting the statement by Horton 45 about COVID-19 being a syndemic due to the interaction between biological and social factors. The author highlighted that a purely biomedical solution for COVID-19 would not be sufficient to protect the most vulnerable populations. Policies and programmes focused on reversing the profound disparities in our societies will be needed. 45 Some limitations of this study should be considered. The high number of missing records for test type and result caused an underreporting of cases. However, 72.26% of the data was analysed, characterising the disease occurrence among health professionals. In this sense, the information about occupation is an advantage of this national information system. In addition, influenza syndrome cases are classified according to public or private care systems (primary care units, doctor offices, clinics, care centers, emergency care, among others). The implementation of the e-SUS VE system took place gradually during the pandemic, and this process can lead to errors. Besides, the test type is defined by notification flow, and the result must be entered after serological confirmation. It is believed that the missing records might be due to the failure to update serological confirmation by health facilities. The Osvaldo Cruz Foundation evaluated the consistency between the epidemiological data released by the states' Health Secretariats and those obtained in the e-SUS VE in August 2020 and found that the number of confirmed cases according to the e-system was 10% lower than that observed in the states. 46 According to the Ministry of Health guidelines, the e-SUS VE does not present data for states and municipalities that have their own COVID-19 reporting systems and, therefore, data for these locations could be inconsistent. Finally, one state did not have data on occupation; thus, incidences were not estimated for the two categories. Conclusions The evolution of COVID-19 among OHPs was similar to that observed in the general population in Brazil. However, the cumulative incidence of confirmed COVID-19 cases was 5% higher among OHPs, with large variations among Brazilian states. Conflict of interest None disclosed. 
Acknowledgements RCF received financial support from FAPEMIG, Brazil (Fundação de Amparo à Pesquisa do Estado de Minas Gerais - Programa Pesquisador Mineiro - PPM-00686-16). The authors would like to thank the Pró-reitoria de Pesquisa, Universidade Federal de Minas Gerais, for financially supporting the article-processing charge. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Controllable photomechanical bending of metal-organic rotaxane crystals facilitated by regioselective confined-space photodimerization Molecular machines based on mechanically-interlocked molecules (MIMs) such as (pseudo) rotaxanes or catenates are known for their molecular-level dynamics, but promoting macro-mechanical response of these molecular machines or related materials is still challenging. Herein, by employing macrocyclic cucurbit[8]uril (CB[8])-based pseudorotaxane with a pair of styrene-derived photoactive guest molecules as linking structs of uranyl node, we describe a metal-organic rotaxane compound, U-CB[8]-MPyVB, that is capable of delivering controllable macroscopic mechanical responses. Under light irradiation, the ladder-shape structural unit of metal-organic rotaxane chain in U-CB[8]-MPyVB undergoes a regioselective solid-state [2 + 2] photodimerization, and facilitates a photo-triggered single-crystal-to-single-crystal (SCSC) transformation, which even induces macroscopic photomechanical bending of individual rod-like bulk crystals. The fabrication of rotaxane-based crystalline materials with both photoresponsive microscopic and macroscopic dynamic behaviors in solid state can be promising photoactuator devices, and will have implications in emerging fields such as optomechanical microdevices and smart microrobotics. The preparation of materials that display macro-mechanical responses to external stimuli is challenging. Here, the authors synthesize metal-organic rotaxane frameworks that contain photoactive axles as linkers; light irradiation triggers photodimerization of the ligands, which leads to macroscopic photomechanical bending of individual bulk crystals. A s a mimic of the macroscopic counterparts, artificial molecular machines 1-3 that can undergo precisely controlled dynamic motion at the molecular level upon stimulation by external signals such as light 4,5 , electricity 6 or chemical reagents 7 have attracted considerable attention in recent years. Mechanically interlocked molecules (MIMs) including rotaxanes 8,9 , catenanes 10,11 , and other molecular assemblies with more sophisticated structure 12,13 , which incorporate different kinds of organic macrocyclic molecules such as crown ether 14,15 , cyclodextrin 16 and cucurbituril 17 and feature intrinsic dynamic nature, have been often utilized as key constituent components of certain molecular machines 18,19 . To date, most molecular machine systems are designed to work in solution where each work unit is isolated from each other and functions independently and incoherently. For example, a variety of molecular machines reported so far including molecular switches 20,21 , molecular pump 22 , molecular motors [23][24][25] , molecular muscle 26,27 , and molecular robot 28 are capable of making increasingly complex operations in aqueous or nonaqueous environments, where different sets of supramolecular motifs are dispersed and separated individually in solution. However, when extended to a solidstate system with significantly shortened intermolecular distances and massive intermolecular interactions that is dramatically different from a solution environment, it is still challenging to construct MIM-based molecular assemblies with structure dynamics in solid. It is supposed that the dynamic performance of MIMs in such a condensed state could be largely inhibited by great steric hindrance of neighboring atoms and a large number of weak interactions between them. 
To reduce the influence of surrounding environment on MIM motifs in solid, a feasible method would be to place the dynamic MIM components into solid-state materials with large pores such as metal-organic frameworks (MOFs) with attractive properties, such as easy synthesis, controllable structure, multiple functions and high thermal and chemical stability [29][30][31][32][33][34][35] . For example, when rotaxane or pseudorotaxane is utilized as the linker of MOF materials, the MIM unit of rotaxane would have more free space in the resultant metal-organic rotaxane framework (MORF) materials, and could be able to undergo motion of rotation, translation or isomerization inside this kind of porous materials 14,15,[36][37][38][39][40][41][42][43] . Nevertheless, the dynamic changes of MIM motifs in those solid-state materials reported so far are merely restricted at the molecular-or nano-scale, whilst the synergistic effects between different MIM units in solid have been rarely exploited 44 . Solid-state molecular machines have, as compared to molecular machines working in solution, a higher degree of molecular organization and structural order due to the characteristics of condensed phases. If the many molecules in solid state, especially in the crystalline state, can be appropriately arranged through reasonable assembly strategies to effectively accumulate molecular-scale strain or stress, it is possible to realize the conversion of microscopic structural changes to macroscopic deformation or motion of the bulk material for these solid-state molecular machines, just like an actuator that can transform external energy to mechanical work (Fig. 1a). For the design of macroscopic actuator, there are two key issues that need to be taken into account: the external energy input to trigger molecular-level dynamics and structure assembly strategies to realize the conversion from molecular level to macroscopic level. Among different forms of energy stimuli including mechanical force or pressure, heat, light and electric fields, light is one of the most attractive forms since it possesses several excellent attributes such as remote non-destructive control, adjustable wavelength, intensity and polarization, and easy operation 45,46 . Furthermore, the energy-conversion path for these lightcontrolled actuators (namely photoactuators) relies mostly on photochemical actuation involving reversible shape-changing photoreactions (such as photoisomerization, photodimerizaiton and photocycloaddition) of photoactive groups activated by visible or ultraviolet (UV) light. A typical case of photochemical reaction is the photodimerization of olefin derivatives first reported by Schmidt et al 47 as early as 50 years ago, of which the proceeding in solid requires proper stacking arrangement of photoactive ligands in space and strict distance between the photoactive ligands [48][49][50] . Different strategies such as using small organic molecules 51 , metal organic skeleton 52 , coordination with metal cations 53 , cation-π interaction 54 and supramolecular encapsulation 55 are employed to guarantee the structure requirement of photodimerization. In MIM-based actuator, this structural requirement could be fulfilled by elegant design of rotaxane units through simultaneously confining two photoactive guests in a macrocyclic cavity. 
Besides the energy route to achieve molecular dynamics, the structure assembly is also crucial to the macroscopic actuation of actuators, as it can realize the accumulation and amplification of molecular-scale stress caused by molecular deformation. Exactly, the anisotropic assembly and packing of crystalline MORF materials would be very helpful to effectively reduce the stress dissipation. Meanwhile, placing photoactive groups on the material framework, like incorporating the photoactive axles as linking struts into MORFs to promote greater photo-induced structural changes and internal stress could be a feasible architectural design strategy. Herein, a bifunctional styrene-derived photoactive ligand (E)-4-[2-(methylpyridin-4-yl) vinyl]benzoic acid ([HMPyVB]I) is designed, which can form a [G 2 @H] psudorotaxane (G: guest molecule, H: host molecule) with a pair of aromatic moieties of [HMPyVB] + or MPyVB encapsulated in the cavity of cucurbit [8] uril (CB [8]) macrocycle, and further coordinate with metal ions through the deprotonated carboxylate group (Fig. 1b). By utilizing the in-situ assembly between [HMPyVB]I, CB [8] and uranyl ion, we report a kind of photoresponsive MORF, U-CB[8]-MPyVB, that contain photoactive MPyVB guest molecules trapped in the cavity of macrocyclic CB [8] host. Benefitting from the presence of (MPyVB) 2 @CB [8] linker, the UV light-induced molecular dynamics in the solid-state U-CB[8]-MPyVB sample is achieved via a regioselective [2 + 2] photodimerization-based single-crystal-to-single-crystal (SCSC) transformation, and examined in detail by using a set of characterization techniques including single-crystal X-ray diffraction analysis, 1 H NMR, IR and fluorescence spectra. More strikingly, the photodimerization reaction at molecular scale even induces macroscopic photomechanical bending of individual bulk crystals, which demonstrates a photoactuation system based on photoresponsive MORFs. The macroscopic deformation of this crystalline photoactuator has been characterized, and a deep understanding of photomechanical actuation mechanism of U-CB[8]-MPyVB has been also provided through a comprehensive analysis and comparison. Fig. 2b), which meets the Schimdt's topochemical criteria (<4.2 Å) in solid state. By means of supramolecular inclusion and coordination linkage, two infinitely extended 1-D metal organic rotaxane chains are achieved along aaxis. These two one-dimensional chains are connected by Motif 2 ((MPyVB C ) 2 @CB[8] B , Fig. 2c) to form a ladder-like chain (Fig. 2d). Synthesis The distance between two C=C double bonds from this pair of MPyVB C is 4.50 Å, which seems to be too long to meet the requirement of [2 + 2] photodimerization reaction. Furthermore, noncovalent interactions including hydrogen bonding, π···π stacking and C-H···π interactions between adjacent CB [8] moieties finally lead to the formation of a 3-D framework structure (Fig. 2e). An interesting question is how U-CB [8]-MPyVB can be assembled by a one-pot method. Given possible competition between metal coordination of MPyVB and its supramolecular encapsulation by macrocyclic CB [8] during the formation process of U-CB[8]-MPyVB, two possible assembly routes are proposed: (1) Deprotonated MPyVB first coordinates to uranyl to form an intermediate of uranyl-MPyVB complex, which is further connected by CB [8] through supramolecular inclusion to obtain the final MORF (Fig. 3a). 
A test experiment employing a different feeding order, i.e., firstly mixing UO 2 (NO 3 ) 2 ·6H 2 O and [HMPyVB]I followed by the addition of CB [8] macrocycles, shows that, a large amount of precipitate generate immediately after simply mixing UO 2 (NO 3 ) 2 ·6H 2 O and [HMPyVB]I in the aqueous solution, which remains unchanged after hydrothermal treatment. Similar phenomenon is observed even if an interface diffusion method is used to reduce the contact velocity between UO 2 (NO 3 ) 2 solution and [HMPyVB]I solution (molar ratio is 1:1). The precipitation effect between [HMPyVB]I and uranyl should be attributed to strong coordination bonding between uranyl and the carboxylate group of [MPyVB], which has been identified in several previously-reported uranyl complexes with similar pyridium-or viologen-functionalized organic carboxylate linkers that are insoluble in aqueous solution ( Supplementary Fig. 6) [56][57][58] . Since the emerging of the precipitation reaction between [HMPyVB]I and uranyl will prevent pseudorotaxane formation between [HMPyVB]I and CB [8], the assembly mechanism of coordination followed by supramolecular inclusion can be excluded. (2) Fig. 7 and Supplementary Fig. 8). In addition, single crystal of CB[8]-HMPyVB intermediate was also successfully isolated, and single crystal structure analysis shows that two HMPyVB + motifs are encapsulated by a CB [8] host with the distance between C=C double bonds in them is 4.44 Å ( Supplementary Fig. 9). Furthermore, the hydrothermal reaction of deprotonated CB Supplementary Fig. 12 -MPyVB before UV irradiation, the vibration peak at 1622 cm −1 that is assigned to C=C absorption band is obviously weakened in U-CB[8]-MPyVB-A after UV irradiation, and the remaining absorption intensity may also belong to MPyVB without photodimerization (Fig. 4c). The shift trend of infrared spectrum in control experiment based on photoactive (HMPyVB + ) 2 @CB [8] complex in aqueous solution is consistent with the above result ( Supplementary Fig. 14). The above results show that the photodimerization of photoactive ligands in these crystalline materials has been successfully realized through structural changes under UV irradiation. With that in mind, a high-quality U-CB[8]-MPyVB-A crystal was selected and subject to UV irradiation in-situ for 1 h, which is subject to single X-ray diffraction analysis to unveil the photo-triggered single-crystal-to-single-crystal (SCSC) transformation. As is shown in Fig. 4d, the photochemistry does occur in Motif 1 (photoactive motif, Supplementary Fig. 15), which is not described here. While Motif 2 (named photoinert motif, Fig. 4f) does not undergo photoreaction, it shows only slight conformational changes. For instance, the distance between C=C double bonds from two MPyVB ligands changes from 4.50 Å to 4.68 Å. Thermogravimetric analysis shows that U-CB[8]-MPyVB-A has no weight loss before 300°C, indicating that the compound also has good thermal stability, which is similar to that of U-CB[8]-MPyVB ( Supplementary Fig. 3). It is worth mentioning that, when the irradiation time was reduced to as short as 10 min, an elegant intermediate named as U-CB[8]-MPyVB-Int was captured ( Supplementary Fig. 16), which contains both possible moieties at the initial site of photoactive motif, a pair of styrene groups without photodimerization and photodimerizing cyclobutene product, respectively. 
The successful isolation and characterization of U-CB[8]-MPyVB-Int confirms that the photodimerization transformation proceeds through a relatively slow kinetic process. Factors for promoting regioselective solid-state photodimerization: supramolecular inclusion and simultaneous coordination. Because molecular movement in the solid state is greatly restricted, the solid-state photodimerization reaction is more difficult than that in the liquid state. Many factors might have influence on this process, and the distance between C=C double bonds of adjacent photoactive ligands should be one of the most crucial for the realization of solid-state photodimerization 59 . For instance, we have tried to irradiate the crystals of single [HMPyVB]I ligand (Supplementary Fig. 17) with ultraviolet (365 nm) light. Even after a long irradiation time (150 min), Fig. 18 [8] has been used as a typical nano reactor to promote some photochemistry reaction because it could incorporate two photoactive guests and limit them to a suitable distance easily by the confinement effect, thus promoting the [2 + 2] photodimerization with high efficiency 60,61 . Up to now, CB [8] has been successfully applied to the photodimerization of olefins 62 (Fig. 5a), and 1 H NMR monitoring of CB[8]-HMPyVB after light irradiation confirms this point (Supplementary Fig. 19). The results suggest that, although the introduction of CB [8] macrocycles can restrain the photoactive ligands to a certain distance, this current distance is still not enough for the [2 + 2] photodimerization. Interestingly, the [2 + 2] photodimerization is finally achieved in U-CB[8]-MPyVB, a uranyl-linked metal-organic rotaxane compound. Inspired by the coordination ability of carboxyl groups at both ends of CB [8]-HMPyVB and carbonyl oxygen at the port of CB [8], uranyl ions with flexible coordination modes are further introduced as metal nodes of CB [8]-HMPyVB to construct the coordination compound. It's worth mentioning that there are two kinds of (MPyVB) 2 @CB[8] motifs in one MORF structure, which can be called, according to the distance of C=C double bonds and photoresponsive performance, as photoactive motif and photoinert motif, respectively (Fig. 5b, c). Although the chemical components of these two motifs are identical, their molecular conformations as well as photoactivities are totally different due to the differences between them in uranyl coordination patterns. This difference in assembly structure leads to the regioselectivity of the photodimerization reaction in the macrocyclic CB [8] cavity, which will be demonstrated in detail below. In photoinert motif, after coordination with uranyl ions by the carboxyl group (in bridging bidentate mode) at the end of MPyVB C , the benzoic acid motif will rotate around the C=C bond relative to pyridine ring in order to meet the needs of coordination environment (Supplementary Table 1). The whole MPyVB C is fixed by the tetranuclear uranyl center through coordination with a downward movement (about 3.9°) with respect to the initial conformation of HMPyVB B (Fig. 5b and Supplementary Fig. 20). This downward movement of the fixed end will lead to the upward movement of the other end so as to release the possible strain, and finally the distance of C=C bond increase slightly from 4.44 Å to 4.50 Å (the schematic model was proposed in Fig. 5d). The impossibility to carry out [2 + 2] photodimerization at such distance makes this motif to be a photoinert one. 
On the contrary, the situation is quite different if the photoactive guests and macrocycle parts participate in uranyl coordination at the same time as seen in Fig. 5c. Specifically, uranyl ions in tetranuclear uranyl center first coordinate with the carboxyl group of MPyVB D through μ 2 -(η 1 , η 2 ) coordination mode. Then in order to further coordinate with the carbonyl oxygen of the CB [8], the whole uranyl-MPyVB D system will move close to the portal of CB [8], and at the same time, it also pull the carboxyl end to bend up and rotate by 7.9°around C=C bond ( Supplementary Fig. 20), thus driving the C=C bond of MPyVB D to move toward the other photoactive ligand (MPyVB E ), and reducing the distance between them. Meanwhile, a proper rotation angle about 15.2°for MPyVB E helps to stabilize the conformation to a great extent by π-π stacking between these two MPyVB units. Ultimately, simultaneous coordination of MPyVB and CB [8] with one uranyl center induces the distances between C=C bonds of MPyVB D and MPyVB E to reach to a suitable value (3.69 Å and 3. 96 Å), which fit well with Schimdt's topochemical criteria (less than 4.2 Å) for [2 + 2] photodimerization in solid state and function as photoactive motifs (the schematic model was proposed in Fig. 5e). Structural adaptability and coordination capacity of macrocyclic CB [8]. Generally, the existence of axial guest molecules will greatly increase the steric hindrance around cucurbit[n]uril portals, thus reducing the coordination ability of these rigid macrocycles. On the other hand, we have reported that, self-adaptive CB [6] will be distorted after complexation with the "unsuitable" guest molecules, which is helpful for the carbonyl ports of CB [6] to reduce the steric hindrance from the guest molecules and endows the macrocycle CB [6] with certain coordination ability 70 . In this work, the adaptability of CB [8] also plays a very important role in inducing the formation of photoactive motif. As is shown above, it is the adaptive deformation of CB [8] that makes the wheel macrocyclic molecule participate together with the axle molecule in the coordination with the same uranyl node, and subsequently brings the C=C bonds of encapsulated axle molecules closer. In these interestingly MORF based on CB [8], the axis and the wheel participating in coordination with metal center at the same time. Moreover, this MORF structure even includes both coordinated CB [8] (photoactive motif) and non-coordinated CB [8] (photoinert motif). In order to quantitatively describe the adaptive deformation of CB [8] in different environments, we define the degree of deformation in adaptive CB [8] at diverse condition by the value d max /d min , where the d max and the d min are the maximum distance and minimum distance between two relative carbonyl oxygen in the portals of CB [8] respectively. As is shown in Fig. 6 [8] C can prove this deformation, which increases by 10.71% relative to CB [8] A and is almost six times as great as that of CB [8] B . Moreover, because only one end of carbonyl oxygen in CB [8] participates in the coordination (green atom in CB [8] Fig. 6), we wonder whether the effect of metal coordination on the deformation of CB [8] macrocycle is symmetrical. We rotate CB [8] by 180°along the symmetry axis with the plane to get the back view of these CB [8]. The back view of CB [8] A and CB [8] B have no change compared with that of front view, which is reasonable because both of two MPyVB participate in the coordination. 
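The distance-based distinction between the photoactive and photoinert motifs can be captured in a few lines; the sketch below simply applies the <4.2 Å topochemical cut-off to the C=C...C=C separations quoted above (3.69/3.96 Å for the photoactive motif, 4.50 Å for the photoinert one). It illustrates the criterion only, not the crystallographic workflow used in the paper.

```python
SCHMIDT_LIMIT_ANGSTROM = 4.2  # topochemical criterion for solid-state [2 + 2] photodimerization

def classify_motif(cc_separations_angstrom):
    """Label a (MPyVB)2@CB[8] motif 'photoactive' when every C=C...C=C
    separation satisfies Schmidt's distance criterion, else 'photoinert'."""
    ok = all(d < SCHMIDT_LIMIT_ANGSTROM for d in cc_separations_angstrom)
    return "photoactive" if ok else "photoinert"

print(classify_motif([3.69, 3.96]))  # photoactive motif in U-CB[8]-MPyVB
print(classify_motif([4.50]))        # photoinert motif (separation too long)
```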
On the other hand, we can see that the d max /d min values in the front view and the back view of CB [8] C are quite different because the d max /d min of the back view is 1.14, which is 8.8% lower than the former. The difference between the front view and the back view suggests that uranyl coordination can induce further deformation of CB [8]. In other words, the self-adaptability of CB [8] greatly reduces the steric hindrance of its portals and enables it to participate in the coordination, and in turn, the uranyl coordination of CB [8] portals further exacerbated the deformation of CB [8]. In all, the adaptivity of CB [8] plays a crucial role in the formation of photoactive motif in U-CB [8]-MPyVB through mutual effects of supramolecular inclusion and coordination bonding. Photoactuating behavior of bulk crystals of U-CB[8]-MPyVB. Besides microscopic photo-triggered dynamics as shown above, the photoresponsive behavior of U-CB [8]-MPyVB is first unveiled through its photoinduced motion and macroscopic bending ( Supplementary Fig. 10 and Supplementary Movie 1). Besides the rotaxane-based assembly of U-CB[8]-MPyVB to effectively accumulate molecular-scale strain or stress, this macroscopic photoactuating behavior of bulk crystals should be originated from the changes of microscopic structure of U-CB[8]-MPyVB, just like the cases in other photoactuating systems that link macroscopic mechanical response to contraction and expansion of anisotropic lattice [71][72][73][74] . Nevertheless, different from other photo-actuated systems, the U-CB[8]-MPyVB system reported here that is a MORF largely relies on both supramolecular inclusion and host-guest coordination. The participation of 9 rotaxane functional unit in the photoactuating process will make its macroscopic light actuation behavior be significantly different from traditional photoactuators. Therefore, continuing efforts are focused on the characterization of photoactuating behavior of bulk crystals of U-CB[8]-MPyVB. Three rodlike crystals (Crystal-A, Crystal-B and Crystal-C) in different sizes were first selected to figure out the effect of crystal dimension on photoinduced bending behavior. We fixed one end of the crystal on the test platform in air so that it stands vertically and then subject to incident light in specific direction (365 nm, 6 W). As is shown in Fig. 7a, Crystal-A with a length of about 320 μm bends slowly from a straight line towards the direction of incident light when exposed to ultraviolet light for 10 min. For Crystal-B with longer size (about 720 μm), as we expected, the bending degree is higher than that of Crystal-A under the same illumination time, and the bending phenomenon continues until 20 min (Fig. 7b). Crystal-C with thicker trunk gives a slight bending movement about 9°within 20 min and the degree of bending was significantly lower than that of Crystal-A and Crystal-B (Fig. 7c), proving that a crystal with a larger size requires larger the cumulative stress in bending progress. The bending phenomenon of these three pieces of crystals can be seen more clearly in the Supplementary Movies 2-4. The crystal still maintains the bending morphology when the incident light source is removed. Moreover, none of the three crystals show light-induced damage after a long period of radiation, which indicates that U-CB[8]-MPyVB has a good stability in air. 
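In the quantification that follows (Crystal-B), the bending angle is related to irradiation time by an exponential law with a response time constant k and a coefficient Φ. Since Eq. (1) itself is not reproduced in the text, the sketch below assumes a saturating form φ(t) = Φ(1 − exp(−kt)) and fits k by a simple grid search; the angle-time pairs are invented for illustration and are not the measured data for Crystal-B.

```python
import math

def bending_angle(t_min: float, phi_max_deg: float, k_per_min: float) -> float:
    """Assumed saturating-exponential bending law: phi(t) = PHI * (1 - exp(-k t))."""
    return phi_max_deg * (1.0 - math.exp(-k_per_min * t_min))

def fit_k(times, angles, phi_max_deg, k_grid):
    """Pick the rate constant k that minimises the sum of squared residuals."""
    def sse(k):
        return sum((a - bending_angle(t, phi_max_deg, k)) ** 2
                   for t, a in zip(times, angles))
    return min(k_grid, key=sse)

# Invented bending angles (degrees) versus irradiation time (minutes):
times = [0, 5, 10, 15, 20]
angles = [0.0, 14.0, 22.0, 26.5, 29.0]
k_grid = [i / 100.0 for i in range(1, 51)]  # candidate k values, 0.01-0.50 min^-1
k_best = fit_k(times, angles, phi_max_deg=30.0, k_grid=k_grid)
print(f"fitted k = {k_best:.2f} min^-1")
```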
These results suggest that this delicate system can convert photonic energy into mechanical energy at the macroscopic scale, and the observed light-driven bending is the result of the delicate balance of crystal size, tensile stress and exposure time to the incident light source, etc. In order to further explore the photo-induced bending behavior of U-CB[8]-MPyVB, we choose Crystal-B as the representative and quantify the function of bending angle (φ) changing with time (t) by an exponential function (Fig. 7d). The dynamic bending process of Crystal-B in 20 min can be expressed by the Eq. (1): 75 Where k represents the response time constant, t represents the irradiation time (min) and Φ the actuation response time coefficient. As is shown in Fig. 7e, the experimental results are perfectly agreed with the theory, and R 2 = 0.9997 can also proves its correctness, which means the relationship between irradiation time and bending angle has been established. Table 3). It can be seen that in the photodimerization process for two MPyVB ligands, the intermediate C=C bonds are close to each other and slowly dimerize into a cyclobutane with the final distance of about 1.60 Å. Meanwhile, methylpyridine and benzoic acid moiety at two ends of bisMPyVB are far away from each other, like a butterfly spreads its wings, from the original 3.47 Å increased to 5.21 Å ( Supplementary Fig. 21). As the host, the adaptive CB [8] also takes the corresponding conformational adjustment to match with the change of the guest molecules in the cavity. The vertical distance of CB [8] along the b-axis is contracted from 13.40 Å to 12.86 Å, which is account for the contraction/expansion of unit cell in U-CB[8]-MPyVB (Fig. 8b). Exactly, the length of crystallographic axis b decreased by 2.55% (from 21.21 Å to 20.67 Å, see Supplementary Table 3 for details). While there is no obvious change in a-axis and c-axis because of the existence of strong coordination bonds. Crystal face index analyses reveal that the whole 1-D chains stack layer by layer in (100) plane through weak interactions, which provides the possibility of accumulation of shrinkage stress in b-axis direction. When the light exposed surface (100) plane of U-CB [8]-MPyVB crystal is subject to UV radiation, the crystal plane pointing towards the incident light shrinks along the b axis, while the light shielding surface ( 100 100) plane keeps it as it is. A similar case of photomechanical bending of bulk crystals through an anisotropic photoresponsive mechanism was reported by Vittal's group 76 . The single crystal structure of intermediate state (U-CB [8]-MPyVB-Int) as discussed above can also prove the possible coexistence of both reacted and unreacted parts (Supplementary Fig. 16). Finally, the gradual stress accumulation between them accounts for the dynamic bending of the crystal at macroscopic level (Fig. 8b), and the conversion of microscopic structural changes to macroscopic deformation is realized. As far as we know, the light-response rate of most reported photoresponsive crystal materials based on solid-state photochemical reaction is often very fast (Supplementary Table 4) [48][49][50]74 . But the photo-induced bending rate of crystal materials reported here is much slower, and falls within the range from hundreds of seconds to thousands of seconds that is two orders of magnitude slower compared with other photoresponsive crystal materials. The main reasons for this difference in photoresponsive dynamics are proposed as followed. 
(1) 3) The most important point is that, although the structures of most photoactive ligands will change greatly after photodimerization, the confinement effect of CB [8] in U-CB [8]-MPyVB here restrain the photodimerizationinduced structural changes (Fig. 9). Meanwhile, due to the pillaring effect of macrocyclic CB [8] in 3D lattice through intensive hydrogen bonding interactions, the macroscopic bending behavior is mainly originated from the stress related with selfadaptive deformation of CB [8] accumulated in space after the photodimerization of MPyVB in CB [8] cavity. Since the structure change of the macrocycle is only an adaptive adjustment after the photodimerization of the photoacitve guests, the overall structure change is much smaller than that of the guests. Therefore, the photoresponsive effect of U-CB[8]-MPyVB is not that remarkable like those of other photodimerization systems without macrocyclic restriction (see Fig. 9) 50,77 . For instance, after photodimerization, the volume of the whole cell decreases only by 1.66%, and the b-axis, which is the most obvious change in the coordinate axis, also decreases only by 2.55%, which is far lower than the changes of cell parameters of other photoresponsive crystals before and after UV irradiation (Supplementary Table 4). In all, the photoresponsive performance and bending rate of photoactuators are influenced by many factors such as size of the crystal, the intensity of incident light and temperature, etc. It seems that, unlike most photoresponsive crystals are often bent at a large angle in a very short time, U-CB[8]-MPyVB is not sensitive enough to light stimulus here, thus providing possibility to accurately induce the bending motion by light in a controllable manner. Discussion In this work, we successfully synthesized a photoresponsive metalorganic rotaxane framework (MORF) compound, U-CB [ ), were added into a 15 mL polytetrafluoroethylene hydrothermal reactor, and then 2 mL deionized water was added. The solvent was evenly distributed by ultrasonic vibration for 5 min, and then 140 μL uranyl nitrate solution (0.5 M) was added. After heating at 150°C for 48 h and natural cooling to room temperature, light yellow rodlike single crystals was obtained, which was suitable for single crystal X-ray diffraction. The crystals were collected by centrifugation, then washed with deionized water (10 mL) for three times, filtered and dried in vacuum. Finally, 35.5 mg light yellow single crystals were obtained with the yield of 49.7%. Method 2. CB [8]-HMPyVB (51.6 mg, 0.025 mmol) was added into a 15 mL polytetrafluoroethylene hydrothermal reactor, and then 2 mL deionized water and uranyl nitrate solution (140 μL, 0.5 M) were added. The solvent was evenly distributed by ultrasonic vibration for 5 min. After heating at 150°C for 24 h and natural cooling to room temperature, light yellow rodlike single crystals was obtained, which was suitable for single crystal X-ray diffraction. The crystals were collected by centrifugation, then washed with deionized water (10 mL) for three times, filtered and dried in vacuum. Finally, 38.5 mg light yellow single crystals were obtained with the yield of 53.9%. Data availability All data needed to support the conclusions of this manuscript are provided in the main text or Supplementary Information Fig. 9 Effect of the supramolecular confinement of CB [8] on the ligand conformations after photodimerization. 
The confinement effect of CB[8] in U-CB[8]-MPyVB restrains the photodimerization-induced structural changes of the photodimerization product, so the distance between the methylpyridine and benzoic acid moieties at the two ends of bisMPyVB is smaller than that in the conventional photodimerization product.
Annual evolution of the ice–ocean interaction beneath landfast ice in Prydz Bay, East Antarctica. Abstract. High-frequency observations of the ice-ocean interaction and high-precision estimation of the ice-ocean heat exchange are critical to understanding the thermodynamics of the landfast ice mass balance in Antarctica. To investigate the oceanic contribution to the evolution of the landfast ice, an integrated ocean observation system, including an acoustic Doppler velocimeter (ADV), conductivity-temperature-depth (CTD) sensors, and a sea ice mass balance array (SIMBA), was deployed on the landfast ice near the Chinese Zhongshan Station in Prydz Bay, East Antarctica, from April to November 2021. The CTD sensors recorded the ocean temperature and salinity. The ocean temperature experienced a rapid increase in late April, from −1.62 °C to a maximum of −1.30 °C, and then it gradually decreased to −1.75 °C in May and remained at this temperature until November. The seawater salinity and density exhibited similar increasing trends during April and May, with mean rates of 0.04 psu d^-1 and 0.03 kg m^-3 d^-1, respectively, which was related to the strong salt rejection caused by freezing of the landfast ice. The ocean current observed by the ADV had mean horizontal and vertical velocities of 9.5 ± 3.9 and 0.2 ± 0.8 cm s^-1, respectively. The dominant current direction was ESE (120°)-WSW (240°), and the dominant velocity range (79%) was 5-15 cm s^-1. The oceanic heat flux (F_w) estimated using the residual method reached a peak of 41.3 ± 9.8 W m^-2 in April, and then it gradually decreased to a stable level of 7.8 ± 2.9 W m^-2 from June to October. The F_w values calculated using three different bulk parameterizations exhibited similar trends with different magnitudes due to the uncertainties of the empirical friction velocity. The spectral analysis results suggest that all of the observed ocean variables exhibited a typical half-day period, indicating the strong diurnal influence of the local tidal oscillations. The large-scale sea ice distribution and ocean circulation contributed to the seasonal variations in the ocean variables, revealing the important relationship between the large-scale and local phenomena. The high-frequency and cross-seasonal observations of oceanic variables obtained in this study allow us to investigate their diurnal and seasonal variations in depth and to evaluate their influences on the landfast ice evolution. Introduction Antarctic sea ice plays a critical role in driving and modulating global climate change and local marine ecosystems (Massom and Stammerjohn, 2010). However, in contrast to the rapid decline of the sea ice extent in the Arctic, the Antarctic has experienced a slight increase since the late 1970s (Comiso et al., 2008; Liu and Curry, 2010), with an extended peak of 20 × 10^6 km^2 observed in 2014, after which the summer minima and winter maxima exhibited decreasing trends (Parkinson and DiGirolamo, 2021; Raphael and Handcock, 2022; Wang et al., 2022).
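The abstract above notes that spectral analysis of the observed ocean variables reveals a dominant half-day (tidal) period. A minimal sketch of such a check with an FFT periodogram is given below; it uses a synthetic hourly series with an assumed 12.42 h tidal constituent rather than the actual ADV/CTD records.

```python
import numpy as np

# Synthetic hourly record: a 12.42 h (semidiurnal tide) signal plus noise.
rng = np.random.default_rng(0)
dt_hours = 1.0
t = np.arange(0.0, 30 * 24.0, dt_hours)                 # 30 days of hourly samples
series = 5.0 * np.sin(2 * np.pi * t / 12.42) + rng.normal(0.0, 1.0, t.size)

# Periodogram via FFT; report the period of the strongest peak.
spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
freq = np.fft.rfftfreq(t.size, d=dt_hours)               # cycles per hour
peak = np.argmax(spec[1:]) + 1                           # skip the zero frequency
print(f"dominant period = {1.0 / freq[peak]:.1f} hours")  # ~12 h, the half-day signal
```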
Landfast ice commonly exists along the Antarctic coast and is usually attached to the shorelines, ice shelves, glacier tongues, grounded icebergs, or shoals (Massom et al., 2001;Li et al., 2020;Fraser et al., 2021). In contrast to pack ice floes, landfast ice generally has a longer annual duration and a larger thickness, and its width can reach tens to hundreds of kilometres from the shore (Fraser et al., 2021). In winter in the Southern Hemisphere, landfast ice accounts for 3 %-4 % of the total sea ice area (Li et al., 2020) and a larger percentage, approximately 14 %-20 %, of the total sea ice volume (Fedotov et al., 2013). In particular, the proportion of landfast ice off the coast of East Antarctica is larger than that in other Antarctic regions (Giles et al., 2008;Li et al., 2020). As a natural boundary between the ocean and atmosphere, landfast ice strongly influences air-ocean interactions and heat and momentum exchange (Maykut and Untersteiner, 1971;Heil et al., 1996;Heil, 2006). The existence of landfast ice provides an efficient barrier to glaciers and ice sheets, preventing them from calving and vanishing into the Southern Ocean (Massom and Stammerjohn, 2010;Miles et al., 2017). The growth of landfast ice is mainly attributed to thermodynamic processes. The oceanic heat flux plays a critical role in the ice mass balance and influences the annual growth of landfast ice (Parkinson and Washington, 1979). The main challenge in studying ice-sea heat exchange is developing a method for accurately quantifying the oceanic heat flux and its seasonal variations. However, the oceanic heat flux is difficult to observe directly and is usually estimated by measuring the ice temperature and thickness, known as the residual energy method (McPhee and Untersteiner, 1982). Heil et al. (1996) estimated the annual oceanic heat flux to be 5-12 W m −2 based on ice observations at Australia's Antarctic Mawson Station. Lei et al. (2010) studied the seasonal variations in landfast ice in Prydz Bay in 2006 and obtained an oceanic heat flux of 11.8 ± 3.5 W m −2 in April and an annual minimum of 1.9 ± 2.4 W m −2 in September based on the residual method. Yang et al. (2016) analysed the oceanic heat flux in Prydz Bay using the high-resolution thermodynamic snow and ice (HIGHTSI) model (Launiainen and Cheng, 1998;Vihma, 2002;Cheng et al., 2006) and concluded that it gradually decreased from 25 to 5 W m −2 in winter. Zhao et al. (2019) estimated the oceanic heat flux using the residual method and found that the monthly oceanic heat flux in 2012 was 30 W m −2 in March-May, decreased to 10 W m −2 in July-October, and increased back to 15 W m −2 in November. In terms of the evolution mechanism of the oceanic heat flux, Allison (1981) found that the oceanic heat flux under the landfast ice near Mawson Station exhibited two peaks throughout the season due to the influence of the thermohaline convection caused by salt rejection and seasonal variations in the large-scale meridional thermal advec-tion in the Southern Ocean. McPhee et al. (1996) found that the oceanic heat flux changed on the sub-diurnal scale due to the sub-glacial cold and warm currents. High-frequency processes such as ocean tides and salt flux have an hourly impact on the oceanic heat flux, making it difficult for the residual method to capture short-term changes (Lei et al., 2010). 
Another more accurate approach to estimating the oceanic heat flux involves direct measurements of the turbulent vertical velocity and high-frequency temperature fluctuations or measurements of the frictional velocity and temperature difference from the ice-ocean interface to the mixed layer. However, this method requires precise and high-frequency measurements of the ocean current under the ice and the mixedlayer temperature. This method has been widely used in previous studies conducted in the Arctic and Antarctic (McPhee, 1992;McPhee et al., 1996McPhee et al., , 2008Maykut and McPhee, 1995;Sirevaag, 2009;Sirevaag and Fer, 2009;Kirillov et al., 2015;Peterson et al., 2017;Lei et al., 2022). Nonetheless, there is a lack of such detailed and high-frequency landfast ice-ocean observation data for Prydz Bay, Antarctica. Direct observations of high-frequency ocean temperature, salinity, and velocity beneath landfast ice are important for filling the data gap of the ice-ocean interaction near the Chinese Antarctic Zhongshan Station and for more accurately understanding how the oceanic heat flux affects the growth of sea ice in Prydz Bay on the diurnal and seasonal scales. In this study, a set of ice-ocean equipment, including an acoustic Doppler velocimeter (ADV), conductivitytemperature-depth (CTD) sensors, and a sea ice mass balance array (SIMBA), was deployed at a landfast ice site located approximately 1 km from Zhongshan Station during April-November 2021. The details of the field observations are presented in Sect. 2. The observations were deeply analysed and the oceanic heat flux was estimated using two different methods, i.e. the residual method and the bulk parameterization method, which are described in Sect. 3. The relationships between the tides and the oceanic heat flux, as well as the large-scale and local phenomena, are discussed in Sect. 4. The conclusions are presented in Sect. 5. Field observations The field observations were conducted at Zhongshan Station (69 • 22 S, 76 • 22 E), which is located in Prydz Bay, East Antarctica (Fig. 1a), and is surrounded by a 40-100 km-wide section of landfast ice in the cold season from February to December (Zhao et al., 2020). In the austral summer (i.e. late January), the landfast ice usually breaks into small floes due to mechanical forcings such as wind, waves, and tides, and then it completely disappears (Li et al., 2020), with the exception of some small ice floes in the narrow fjords that sur- vive to become second or multi-year sea ice in the subsequent winter. From 16 April to 7 November 2021, an integrated iceocean interaction observation system was established by the wintering team at the coastal landfast ice site, approximately 1 km from Zhongshan Station (Fig. 1b). A cable-type CTD sensor (ALEC ACTD-DF, Japanese JFE Advantech Co., Ltd., https://www.analyticalsolns.com.au/product/ conductivity_temperature_depth_logger_miniature_.html, last access: 24 February 2023) was deployed 2 m beneath the ice surface and 15 m from the shoreline. The CTD measured the ocean conductivity, temperature, and depth at a frequency of 30 s, with accuracies of ±0.02 mS cm −1 (±0.03 psu) for conductivity (salinity) and ±0.02 • C for temperature. An ADV (SonTek Argonaut-ADV, the xylem company, https://www.xylem.com/siteassets/brand/sontek/resources/ specification/sontek-argonaut-adv-brochure-s11-02-1119. pdf, last access: 24 February 2023) was deployed to observe the 3-D ocean velocity at 5 m below the ice surface and 5 m north of the CTD. 
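The two moored instruments sample at different native intervals (30 s for the CTD; the ADV interval is given just below), and the processing described in the following paragraphs averages everything onto a common 2 min grid. A minimal pandas sketch of that alignment step is shown here purely as an illustration; the variable names are hypothetical and this is not the authors' code.

import pandas as pd

def to_common_grid(ctd: pd.DataFrame, adv: pd.DataFrame, interval: str = "2min") -> pd.DataFrame:
    # ctd: DatetimeIndex-ed frame with e.g. 'temperature' and 'salinity' columns (30 s native sampling)
    # adv: DatetimeIndex-ed frame with e.g. 'u', 'v', 'w' columns (coarser native sampling)
    ctd_avg = ctd.resample(interval).mean()
    adv_avg = adv.resample(interval).mean()
    # Inner join keeps only intervals where both instruments reported data,
    # which also drops any instrument outage periods from the merged series.
    return ctd_avg.join(adv_avg, how="inner")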
The frequency of the ocean velocity observations was 40 s, and the accuracy was ±0.001 m s −1 . A SIMBA (SRSL SIMBA, SAMS Enterprise Ltd., https://www.sams-enterprise.com/services/ autonomous-ice-measurement/, last access: 24 February 2023) was deployed 5 m north of the ADV, which contained 240 temperature sensors at 2 cm intervals mounted on a thermistor string. The 4.8 m-long SIMBA temperature chains recorded the vertical temperature profiles of airsnow-ice-ocean every 6 h. The SIMBA had a resolution of ±0.0625 • C. The water depths at the CTD, ADV, and SIMBA sites were 4.5, 13, and 13 m, respectively. Manual observations, including snow and ice thickness measurements, were conducted around the integrated ice-ocean interaction observation system every 5 d by the wintering team. Due to the effect of the extremely cold conditions on the battery power supply, the observation system stopped working during part of the period, 24 April-11 May for the ADV and 7-15 July for the CTD. A data quality control was applied to the original time series to pick out the anomalous values. To match the different frequencies of the ADV and CTD in the inter-comparisons and the analysis of the oceanic heat flux, the observations were averaged and integrated into a new time series with 2 min intervals. Regarding the processing of the SIMBA observation data, three-point smoothing was introduced to minimize the noise influences, which has been used by Zhao et al. (2017) and Tian et al. (2017). Satellite and reanalysis products To further investigate the large-scale influences, satellite and reanalysis products were used. The Advanced Microwave Scanning Radiometer 2 (AMSR2) sea ice concentration based on the Arctic Radiation and Turbulence Interaction Study (ARTIST) sea ice (ASI) algorithm developed at the University of Bremen (https://seaice.uni-bremen.de/ sea-ice-concentration/amsre-amsr2/, last access: 24 February 2023) was adopted to obtain the percentage of open water in Prydz Bay. These data are updated daily and have a spatial resolution of 6.25 km (Spreen et al., 2008). The Operational Mercator global ocean reanalysis products, produced by the Copernicus Marine Environment Monitoring Service (CMEMS), provide the daily and monthly ocean currents and mixed-layer depth of the global ocean with a 1/12 • spatial resolution and 3 h frequency (for more information, see https://doi.org/10.48670/moi-00016, last access: 24 February 2023) for a large-scale analysis. To facilitate comparative analysis, in this study, the nearest-neighbour method was employed to interpolate the CMEMS products to the same projection and spatial resolution as the AMSR2 sea ice concentration. Residual method The residual method was adapted from the classical Stefan law. By obtaining measurements of the ice vertical temperature profiles and ice bottom growth or ablation, the residual method has been widely used to estimate the oceanic heat fluxes in previous studies (McPhee and Untersteiner, 1982;Lytle et al., 2000;Perovich and Elder, 2002;Purdie et al., 2006;Lei et al., 2010;Zhao et al., 2019). At the bottom of the sea ice, the heat balance can be expressed by an equilibrium equation as follows: where F w is the heat flux from the ocean to the sea ice, F c is the heat conduction flux through the sea ice, F l is the latent heat flux caused by the freezing or melting of the ice, and F s is the specific heat flux generated by the change in the ice temperature. In Eq. 
(1), the signs of the melting, heating, and upward heat flow are positive, while the signs of the cooling, freezing, and downward heat flow are negative. The three heat flux terms can be further expressed as follows (Semtner, 1976;Lei et al., 2014): where k i is the thermal conductivity of the sea ice, T 0 is the temperature of the ice in the reference layer (details are provided in Sect. 3.4), H is the corresponding sea ice thickness, T f is the freezing point, ρ i is the density of the ice, L i and c i are the latent and specific heat capacities of the sea ice, H is the sea ice thickness of the reference layer, dH /dt is the ice growth rate, and dT /dt is the change in the sea ice temperature (Untersteiner, 1961;Millero, 1978;McPhee and Untersteiner, 1982;Lei et al., 2010). The density and salinity of the landfast ice used in this study were 910 kg m −3 and 4 psu based on previous observations reported by Lei et al. (2010). k i , L i , and c i are functions of the salinity and temperature of the ice, and T f is a function of seawater salinity. These parameters were re-estimated based on the CTD observations. The vertical ice temperature gradient, ice growth/melt rate, and ice temperature changes were calculated from the SIMBA observations. Bulk parameterization method The oceanic heat flux can be determined from direct measurements of the high-frequency current velocity, temperature, and salinity in the mixed layer in the upper ocean beneath the ice cover in order to evaluate the turbulent heat flux at the ice-ocean interface, which is called the turbulent parameterization method (McPhee, 1992;McPhee et al., 2008). The oceanic heat flux F w from the ocean mixed layer to the bottom of the sea ice can be expressed as follows (Guo et al., 2015): where ρ w and c w are the density and specific heat capacity of the ocean mixed layer and w T is the turbulent heat flux. The heat transferred from the ocean to the ice depends on both the turbulent stress at the ice-ocean interface (characterized by the frictional velocity u * 0 as the square root of the kinetic stress at the interface) and the effective heat content of the fluid in the turbulent boundary layer, which is roughly proportional to the deviation of the ocean temperature above the freezing point (McPhee, 1992;McPhee et al., 1999;Kirillov et al., 2015). Therefore, the turbulent heat flux can be further parameterized as follows: where c H is the Stanton number of heat exchange efficiency, T is usually expressed as the difference between the ocean temperature and the freezing point, and u * 0 is the friction velocity at the interface. For the boundary layer beneath the sea ice, the Stanton number c H is usually assumed to be a constant value of 0.0057 (McPhee, 2002). Therefore, Eq. (5) can be expressed as follows: Owing to the roughness beneath sea ice and the fact that the data lack an ocean velocity profile, the friction velocity u * 0 is usually parameterized using the law of quadratic resistance related to the free-stream current. In this study, three different bulk parameterization methods were used to estimate the friction velocity (Table 1). V is the absolute flow velocity relative to the motionless landfast ice, which was observed by the ADV in this study. The velocity perturbations u , v , and w were estimated by removing the mean from the original time series with 15 min windows. Figure 2a shows the SIMBA observations from 16 April to 7 November 2021. 
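For reference, the flux relations defined by the two methods above can be written out compactly. This is a hedged sketch based on the definitions in the text and on the standard residual and bulk formulations (McPhee and Untersteiner, 1982; McPhee, 1992); the exact forms and sign conventions of the paper's own Eqs. (1)-(6) may differ. With the reference layer of thickness \(\Delta H\) just above the ice bottom,
\[
F_w \;\approx\; F_c + F_l + F_s, \qquad
F_c \approx k_i\,\frac{T_f - T_0}{\Delta H}, \qquad
F_l = -\rho_i L_i\,\frac{\mathrm{d}H}{\mathrm{d}t}, \qquad
F_s = \rho_i c_i \Delta H\,\frac{\mathrm{d}T}{\mathrm{d}t},
\]
so that \(F_l\) is negative during freezing and positive during basal melt, consistent with the stated sign convention. For the bulk approach,
\[
F_w = \rho_w c_w\,\langle w'T'\rangle_0 = \rho_w c_w\, c_H\, u_{*0}\,\big(T_w - T_f\big),
\]
with the Stanton number \(c_H \approx 0.0057\) and the friction velocity \(u_{*0}\) taken from one of the parameterizations in Table 1.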
The serial numbers of the thermistors start from the deep end of the string in the ocean. Sensor no. 180 represents the initial location of the ice surface on 16 April, when the SIMBA was deployed in the field (dotted lines in Fig. 2). Typically, the sensors above the dotted lines were located in the air and their temperature data exhibited significant daily variations. The sea ice temperature exhibited an obvious gradient of 0.11-0.24 • C cm −1 . The ocean temperature was stable, ranging from −1.7 to −1.9 • C, which was close to the freezing point. The bottom of the ice (dashed lines in Fig. 2) was identified through visual interpretation according to the method of Zhao et al. (2017). The ice surface did not experience obvious changes during the cold season, and therefore the changes in the ice thickness mainly occurred at the bottom of the ice. The landfast ice was 44 cm thick on the first observation day (16 April), continued to freeze from May to mid-October, and reached the maximum thickness of 142 cm on 22 October. After this, the bottom of the ice began to melt at a mean rate of −0.4 ± 0.2 cm d −1 until the end of the observation period. The mean growth rate during the study period was 0.5 ± 0.3 cm d −1 , and the maximum daily growth rate was Table 1. Three different parameterizations of the friction velocity u * 0 . 1.6 cm d −1 on 10 May 2021. The monthly mean growth rate was largest in May (0.8 ± 0.4 cm d −1 ) and smallest in October (0.1 ± 0.2 cm d −1 ), which is similar to the nearshore observations at Zhongshan Station in 2006 (Lei et al., 2010) and in 2012 (Zhao et al., 2019) but different to the offshore cases around this region, especially when grounded icebergs existed (Li et al., 2023). The vertical gradient of the ice temperature profiles shows that snow accumulation on top of the ice cover occurred from May to August and experienced discontinuous disappearance due to strong winds after September (thin blue lines in Fig. 2b). Finally, the snow completely disappeared in October, when the air temperature rose up to −2.7 • C. The ice surface began to melt under the strong solar radiation, and 6-8 cm of sublimation was observed by the SIMBA (thin red lines in Fig. 2b). In particular, shortly after the SIMBA was deployed, the landfast ice thickness experienced a 4 cm decrease during 21-26 April, when the warm air reached the observation site in the cold winter and the oceanic heat flux exhibited significant high values during this period. Ocean temperature, salinity, and density The time series of the ocean temperature were observed by the CTD deployed 2 m below the surface of the landfast ice. Figure 3a shows the 194 d high-frequency temperature record with a 2 min interval obtained from 16 April to 6 November 2021. The ocean temperature experienced a rapid increase during 16-23 April from −1.62 to −1.30 • C, and then it gradually decreased to −1.75 • C in the middle of May. In the following months, the ocean temperature remained at around −1.79 • C, with a small standard deviation of 0.01 • C, until the end of the observations. Therefore, the ocean beneath the ice was relatively warm and was highly variable before the middle of May (−1.64 ± 0.10 • C), while the ocean temperature dropped and remained close to the freezing point from then on (−1.79 ± 0.01 • C). Based on the spectral analysis, the time series of the ocean temperature exhibited an obvious half-day period, which may be related to the tidal oscillations. 
The temperature at the bottom of the sea ice (defined as the mean SMIBA sensor temperature in the lowest 10 cm of the sea ice) was lower than the ocean temperature, indicating that heat was transferred from the warm water to the cold sea ice and inhibited ice growth at the bottom of the ice. During April-May, the temperature at the bottom of the sea ice exhibited large variations (−5 to −2.5 • C) in response to the variations in the air temperature when the ice was thin and nearly no snow existed. After the thick snow cover formed, the temperature at the bottom of the sea ice became steady (−2 to −3 • C) from June to November, and the ocean temperature remained stable at around −1.8 • C. In particular, the SIMBA recorded a basal ice melting of 4 cm during 16-26 April. This event was accompanied by a concurrent increase in both the air temperature and ocean temperature, suggesting a heightened transfer of heat from both the air and ocean to the sea ice. The seawater salinity experienced a rapid increase from 33.34 psu in April to 34.08 psu in May, which was related to the salt rejection process caused by the high freezing rate of 1.1 ± 0.3 cm d −1 at the bottom of the ice (Fig. 3b). More specifically, from 19 to 23 April, the seawater salinity experienced a short period of decrease, different from the long and quickly increasing trend, which may have been related to the slowdown of the freezing at the bottom of the ice during this period due to the obvious warming of the air and ocean (Fig. 3a). From then on, the seawater salinity (around 34.13 ± 0.02 psu) largely remained stable with small daily and seasonal deviations. This corresponded to the occurrence of a relatively large and stable freezing rate at the bottom of the ice (around 0.5 ± 0.2 cm d −1 ) until the middle of October. When the warm season began, the bottom of the sea ice started to melt at a mean rate of −0.4 ± 0.3 cm d −1 (from the middle of October to the middle of November), and the seawater salinity slightly decreased, indicating that the salt rejection became weaker. As a function of the seawater temperature and salinity, the seawater density was calculated using the observations measured by the CTD and the equation proposed by Millero and Poisson (1981). The seawater density exhibited a trend similar to that of the seawater salinity, which increased significantly during the early winter, with a mean trend of 0.03 kg m −3 d −1 . In the following observation period, the seawater density was stable, with a mean value of 1027.5 ± 0.02 kg m −3 . After acquiring the seawater salinity by CTD, the seawater freezing point was calculated with the observed seawater temperature and salinity using the equation proposed by Millero (1978). The calculated freezing point decreases with the increase in the seawater salinity, from −1.83 • C in April to −1.86 • C in May, and then remained stable, with a mean value of −1.87 • C in the following seasons. Furthermore, the deviation of seawater temperature above the freezing point was calculated ( T , red lines in Fig. 3b), which increased quickly from 0.15 to 0.55 • C in April and decreased to around 0.1 • C in the middle of May and remained constant to November. Ocean current The 3-D current velocity in the meridional (U ), zonal (V ), and vertical (W ) directions at 5 m beneath the surface of the landfast ice was obtained by the ADV every 40 s. A rose diagram of the 2 min records of the horizontal current is shown in Fig. 4. 
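Returning to the freezing-point and ΔT calculation described in the previous subsection: the Millero (1978) relation used there is, in its UNESCO form, simple to evaluate. The short Python sketch below is illustrative only and is not the authors' implementation.

import numpy as np

def freezing_point(S, p_dbar=0.0):
    # Seawater freezing point (deg C) from practical salinity (psu) and pressure (dbar),
    # UNESCO formula based on Millero (1978).
    S = np.asarray(S, dtype=float)
    return (-0.0575 * S
            + 1.710523e-3 * S**1.5
            - 2.154996e-4 * S**2
            - 7.53e-4 * p_dbar)

def delta_T(T_water, S, p_dbar=2.0):
    # Deviation of the observed water temperature above the local freezing point.
    return T_water - freezing_point(S, p_dbar)

For S ≈ 34.13 psu at roughly 2 dbar this gives T f ≈ −1.87 °C, matching the value quoted above.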
The 2 min frequency records of U and V exhibited large oscillations, mainly varying within ±20 cm s −1 . In particular, 97 % of the U values and 96 % of the V values were within ±10 cm s −1 . W exhibited relatively small oscillations, mainly within ±4 cm s −1 , and 98 % of the W values were within ±2 cm s −1 . The typical periods of U , V , and W were all half-day periods. The domain direction was ESE (120 • )-WSW (240 • ), and 79 % of the velocity measurements were within 5-15 cm s −1 (Fig. 4a). The horizontal velocity was relatively small in April, less than 10 cm s −1 , and it gradually increased to the maximum value in June, when 75 % of the velocity measurements were greater than 10 cm s −1 . From then on, the horizontal current exhibited a similar distribution in the directions, while the range of the dominant velocity changed from 10-15 to 5-10 cm s −1 (Fig. 4b-i). The horizontal speed exhibited a mean velocity of 9.5 ± 3.9 cm s −1 and a maximum velocity of 29.8 cm s −1 for the 2 min interval records. Oceanic heat flux In the residual method, the vertical gradient of the sea ice temperature is a key term for calculating the conductive heat flux (F c ). Under cold and snow-free conditions, the surface air temperature and freezing point are usually used to calculate the vertical gradient (Lei et al., 2010;Zhao et al., 2019). However, in thick snow or warm cases, the vertical temperature profile of the sea ice is not linear. In this study, a reference layer close to the bottom of the ice was used to calculate the vertical gradient to avoid non-linear biases. McPhee and Untersteiner (1982) set the reference layer at 0.4 m above the bottom of the ice. Perovich and Elder (2002) set the reference layer at 0.4-0.8 m above the bottom of the ice for different ice thickness conditions. Lei et al. (2014) set the reference layer at 0.4-0.7 m above the bottom of the ice. In this study, we defined the reference layer as 0.2 m above the bottom of the ice, and the mean vertical gradient was calculated using the 2 cm interval temperature profile obtained by the SIMBA. In previous studies, the empirical value of the freezing point was usually used, but a practical value is more realistic in the F l calculation. Based on the seawater salinity observations recorded by the CTD, the freezing points were estimated following the equation derived by Millero (1978). During the observation period, the freezing point was around −1.83 • C in April, gradually decreased to −1.87 • C in June, and remained at this value until November. Figure 5 shows the heat fluxes calculated using the residual method. The variation in the latent heat flux (F l ) was strongly correlated with the growth and ablation of the sea ice. During the study period, F l was negative in the cold season, except for a short melting period in April. During 21-24 April, due to the influences of the warm air and ocean, the SIMBA recorded obvious melting at the bottom of the ice and F l exhibited a positive value of 20 W m −2 . In October, the melt season began and F l became positive. The specific heat flux F s was smaller throughout the study period, oscillating around 0 W m −2 . The conductive heat flux F c was relatively large before the middle of May (up to 80 W m −2 ), gradually decreased to 20 W m −2 in September, and finally reached 10 W m −2 in October and November. 
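As a concrete illustration of the residual terms just described, the fluxes for the 0.2 m reference layer could be evaluated as follows. The thermal constants are nominal stand-ins (the paper computes k i , L i , and c i from the ice salinity of 4 psu and the ice temperature), and the sign convention follows the one stated with Eq. (1); this is a sketch, not the authors' code.

RHO_I = 910.0   # ice density, kg m-3 (as stated in the text)
K_I   = 2.1     # thermal conductivity of sea ice, W m-1 K-1 (nominal assumption)
L_I   = 3.0e5   # latent heat of freezing for saline ice, J kg-1 (nominal assumption)
C_I   = 2.1e3   # specific heat capacity of sea ice, J kg-1 K-1 (nominal assumption)
DZ    = 0.2     # reference layer thickness above the ice bottom, m

def residual_fluxes(T_ref, T_f, dHdt, dTdt):
    # T_ref : ice temperature at the reference level (deg C)
    # T_f   : freezing point at the ice bottom (deg C)
    # dHdt  : ice growth rate (m s-1, positive while freezing)
    # dTdt  : warming rate of the reference layer (K s-1)
    F_c = K_I * (T_f - T_ref) / DZ   # conduction away from the interface
    F_l = -RHO_I * L_I * dHdt        # negative while the ice is freezing
    F_s = RHO_I * C_I * DZ * dTdt    # heat stored in the reference layer
    F_w = F_c + F_l + F_s            # oceanic heat flux as the residual
    return F_w, F_c, F_l, F_s

With a winter growth rate of about 0.5 cm d −1 and a reference-layer temperature difference of a few degrees, this yields an F w on the order of 10 W m −2 , the same order as the values reported below.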
The oceanic heat flux exhibited a larger value of 41.3 ± 9.8 W m −2 in April and then decreased to around 10 W m −2 from June to October, but it quickly increased to 50 W m −2 in November before the observation period ended. The mean oceanic heat flux for the entire study period was 12.2 ± 10.9 W m −2 . In contrast to the residual method, previous studies have developed bulk parameterization methods for calculating the oceanic heat flux when the observations of ocean parameters are available (McPhee, 1979(McPhee, , 1992Sirevaag, 2009;Kirillov et al., 2015). In this study, the ocean velocity, temperature, and salinity in the ice-ocean boundary layer were recorded at a high frequency by the ADV and CTD, which provided a chance to evaluate the oceanic heat flux using bulk parameterization methods. During the observation period, the ocean temperature was always warmer than the freezing point, indicating that the heat flux was from the ocean to the ice. The temperature difference ( T ) between the ocean and the freezing point was 0.26 ± 0.08 • C in April and decreased gradually to 0.08 • C from June to November. Three different bulk parameterization methods were used in this study (Bulk A: Sirevaag, 2009;Bulk B: Kirillov et al., 2015;Bulk C: McPhee, 1979), and their main differences were due to the expressions of the fractional velocity and empirical parameters ( Table 1). The hourly oceanic heat flux values calculated using three bulk parameterization methods exhibit variations similar to those of the results of the residual method, that is, high values of 60-80 W m −2 in April and then gradually decreasing to 10-30 W m −2 . The mean oceanic heat flux values during the study period were 19.7 ± 5.3 W m −2 , 13.6 ± 3.1 W m −2 , and 24.4 ± 5.4 W m −2 for the Bulk A, Bulk B, and Bulk C methods, respectively, and 12.2 ± 10.9 W m −2 for the residual method (Fig. 6a). The values obtained using the bulk methods were 9.0 ± 8.9 W m −2 larger on average than that obtained using the residual method during the study period. According to the monthly oceanic heat flux trends shown in Fig. 6b, the oceanic heat flux values were 18.4, 15.7, 31.4, and 41.3 W m −2 in April for the Bulk A, Bulk B, Bulk C, and residual methods, respectively. In addition, the oceanic heat flux had large standard deviations in April, 10-20 W m −2 for the bulk methods and 10 W m −2 for the residual method, indicating a large variation in the hourly time series. From May to October, the standard deviations were generally less than 5 W m −2 . Among the three bulk parameterization methods, the results of the Bulk C method were relatively larger than those of the Bulk A and B methods. Previous studies estimated the oceanic heat flux under landfast ice in Prydz Bay using different methods. Allison (1981) estimated the oceanic heat flux near Mawson Station from monthly mean temperature and ice growth data. In the early stage of sea ice growth, the thermohaline convection caused by the brine rejection made the flux very high, and it could reach 50 W m −2 . Heil et al. (1996) used a multilayer thermodynamic model to simulate sea ice growth at In this study, the average oceanic heat fluxes calculated using the residual method and the bulk methods are consistent with those of previous studies on the seasonal scale, and the quantitative difference may be related to the specific methods and environmental parameters for the given years. 
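For orientation, the bulk estimate described above reduces to a one-line formula once a friction-velocity parameterization is chosen. In the sketch below the drag coefficient and seawater heat capacity are nominal assumptions, and the actual Bulk A, B, and C formulas follow Table 1 of the text, which differ precisely in how u * 0 is specified.

RHO_W = 1027.5   # seawater density from the CTD record, kg m-3
C_W   = 3985.0   # specific heat of seawater, J kg-1 K-1 (nominal assumption)
C_H   = 0.0057   # Stanton number used in the text (McPhee, 2002)

def bulk_heat_flux(speed, delta_T, c_d=5.0e-3):
    # speed   : free-stream current speed relative to the motionless fast ice (m s-1)
    # delta_T : mixed-layer temperature above the freezing point (K)
    # c_d     : quadratic drag coefficient; an illustrative stand-in for the
    #           friction-velocity schemes of Table 1.
    u_star = (c_d ** 0.5) * speed
    return RHO_W * C_W * C_H * u_star * delta_T

With the observed mean speed of about 9.5 cm s −1 and ΔT ≈ 0.1 °C, this gives on the order of 15 W m −2 , consistent with the 10-30 W m −2 range reported above.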
Compared to the higher temporal resolution in this study (6 h for the residual method and 2 min for the bulk methods), estimation based on traditional borehole observations may produce large errors within a short time window (Lei et al., 2010). Therefore, the high-frequency observations presented here can more accurately capture subtle short-term changes in the oceanic heat flux and better resolve the annual evolution of the ice-ocean interaction. Discussion The cross-seasonal, minute-frequency observations of variables at the ice-ocean interface in this study provide a clear picture of how they varied on hourly, daily, and seasonal scales and fill the knowledge gap at Zhongshan Station. As in related studies in other regions, these variables may be affected by the short-term cycles of the sub-glacial current (McPhee et al., 1996) and the ocean tidal current (Lei et al., 2010). To further enrich the analysis, the relationships between processes on the local scale and the pan-Prydz Bay scale are discussed here. Potential influences of local tidal oscillations The local tides may influence the evolution of the sea ice. The tidal oscillations were reconstructed using the harmonic analysis method (Pan et al., 2018) and the harmonic constants from E et al. (2013). In this study, the periodogram method (Welch, 1967) was used to detect the periodicity of the long time-series observation data. Power spectrum analysis of the signal revealed that the tidal oscillations exhibited two peaks. The largest peak had a period of 1 d, and the second-largest peak had a half-day period, indicating that the tide near Zhongshan Station was an irregular diurnal tide (Fig. 7). Figure 7. The tidal oscillations were constructed using the harmonic analysis method (Pan et al., 2018) and the harmonic constants of E et al. (2013). The temporal resolution of this dataset is 1 h. To further investigate the relationships between the tidal oscillations and the oceanic variables, the same spectral analysis was employed for all of the observed ocean variables. The ocean temperature exhibited the largest peak with a period of 1 d and a relatively low peak with a half-day period. In contrast, the seawater salinity, U , V , W , and the results of the three bulk parameterization methods exhibited the largest peak with a half-day period and a relatively low peak with a 1 d period (Fig. 8). The results of the spectral analysis indicate that the ocean temperature, salinity, U , V , W , and oceanic heat flux were greatly affected by the tidal oscillations. In April, the observed seawater temperature and salinity exhibited a special pattern: the water was relatively warm and fresh in the equilibrium tide state, while it was cold and salty in the low and high tide states (Fig. 9a, b), which may have been related to efficient horizontal heat transport when the surrounding area was not completely covered by ice. However, in the other months, the larger observed vertical velocity enhanced the vertical mixing, and therefore no significant tide-related variations in the seawater temperature, salinity, or oceanic heat flux were observed. Figure 8. (a) The results of the spectral analysis of the tidal oscillations and the observed ocean variables, and (b) the calculated F w . The periodogram method was used to detect the periodicity (Welch, 1967).
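The periodogram analysis described above is straightforward to reproduce on the 2 min series; the following minimal Python sketch using Welch's method is illustrative and not the authors' implementation.

import numpy as np
from scipy import signal

def dominant_periods(x, dt_minutes=2.0, n_peaks=2):
    # Welch periodogram of an evenly sampled, gap-free series; returns the n_peaks
    # strongest periods in days (here roughly 1 d and 0.5 d are expected).
    x = np.asarray(x, dtype=float)
    fs = 1.0 / (dt_minutes * 60.0)                   # sampling frequency in Hz
    f, pxx = signal.welch(x - x.mean(), fs=fs, nperseg=min(len(x), 2**14))
    idx = np.argsort(pxx[1:])[::-1][:n_peaks] + 1    # skip the zero-frequency bin
    return 1.0 / f[idx] / 86400.0                    # periods in days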
Furthermore, when the tide level changed from low to high, the hourly U changed from a slightly positive distribution (0.7 ± 1.2 cm s −1 ) to a deeply positive distribution (1.2 ± 1.1 cm s −1 ), indicating predominantly eastward flow during the high-tide-level conditions (Fig. 9c). V changed from a slightly negative distribution (−1.3 ± 1.6 cm s −1 ) to an intensely negative distribution (−2.1 ± 1.3 cm s −1 ), suggesting that the southward flow became stronger when the tide level was high (Fig. 9d). W did not vary prominently, and the mean values were almost the same, 0.2 ± 0.3 cm s −1 and 0.2 ± 0.2 cm s −1 during the low and high tide levels, respectively (Fig. 9e). Relationships between large-scale and local phenomena Prydz Bay was covered by sea ice in the cold season. Ice floes appeared widely in March, and landfast ice started to form 1 month later in April near Zhongshan Station. From May to October, ice floes completely covered Prydz Bay, except for several large polynyas (Fig. 10d), e.g. Davis Polynya (DaP) and Four Ladies Bank Polynya (FLBP) on the eastern side and Mackenzie Bay Polynya (MBP) and Cape Darnley Polynya (CDP) on the western side (Hou and Shi, 2021;Nihashi and Ohshima, 2015;Williams et al., 2016). In addition, the landfast ice gradually extended to around 100 km along the zonal direction. In November, the ice floe concentration decreased, and the landfast ice cover reached the maximum extent (Fig. 10). The open water area accounted for nearly 80 % of the entire ocean grid in March, allowing more solar heat flux to be absorbed by the ocean, which was the energy basis for the warm ocean in April (Fig. 11). The largescale circulation in Prydz Bay indicated the existence of a westward current along the Antarctic coastline, which was stronger in the ice-free and low-ice-concentration months and weaker in the high-ice-concentration months. In April, the large-scale current carried the warm water from low latitudes to high latitudes, contributing to the observed rise in the ocean temperature near Zhongshan Station. From then on, the large-scale current weakened, and the horizontal heat transport decreased. The four large polynyas shown in Fig. 10d started to form in April, which led to the release of a large amount of salt through new ice production during their existence. As a result, the ocean mixed layer in the corresponding locations derived from Mercator global ocean reanalysis products exhibited obvious thickening from May to October (figure not shown). In addition, the thickening of the entire ice region in Prydz Bay contributed to the strengthened vertical mixing caused by the salt rejection as the sea ice continued to grow. The high seawater salinity observed by the CTD near Zhongshan Station (yellow lines in Fig. 11) confirms this assumption. Considering the reduced horizontal heat transport, the evolution of the ocean temperature was mainly affected by local factors. In this study, the observations were conducted close to the shore at a water depth of around 10 m, making full mixing of the shallow water possible. Therefore, the seawater temperature remained at a stable level from June to November (red lines in Fig. 11). The water depth near the shoreline may have affected the vertical mixing capacity. The observations of the seawater temperature from the SIMBA sensors at 2 m beneath the ice surface and the CTD were obviously different (mean difference of −0.17 ± 0.03 • C), which was largely beyond the errors of the instruments. 
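The open-water fraction used in the large-scale analysis above can be derived directly from the AMSR2 concentration fields. The following minimal sketch assumes the definition given with Fig. 11, i.e. grid cells with a sea ice concentration below 15 % count as open water; it is illustrative rather than the authors' processing code.

import numpy as np

def open_water_fraction(sic, threshold=15.0):
    # sic       : 2-D array of sea ice concentration in percent (e.g. AMSR2 ASI),
    #             with NaN over land or missing data
    # threshold : concentration below which a cell counts as open water (15 % here)
    valid = np.isfinite(sic)
    return np.count_nonzero(sic[valid] < threshold) / valid.sum()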
The water depths of the SIMBA and CTD sensors were 4.5 and 13 m, respectively, and this difference is believed to have caused the different vertical mixing strengths and thus the different seawater temperatures. Conclusions The heat and momentum balances among the air, ice, and ocean are some of the most important processes in the polar regions. The air-ice interactions have been well investigated due to the fact that on-ice observations are relatively easy to conduct. However, the ice-ocean interactions have rarely been studied due to the difficulty and limitations of underwater observations. The oceanic boundary layer beneath sea ice plays an important role in the growth and melting of sea ice. In this study, an integrated ice-ocean observation system, including an ADV, CTD, and SIMBA, was deployed on the landfast ice 1 km from Zhongshan Station in Prydz Bay, East Antarctica. The ocean temperature, salinity, and velocity were observed with a 40 s resolution and an 8-month observation period and were investigated for the first time in this region. The SIMBA temperature chain recorded the vertical temperature profiles of air-snow-ice-ocean, which were used to estimate the snow and ice thicknesses and oceanic heat flux using the residual method. The results show that 98 cm of landfast ice formed from April to October, with a mean growth rate of 0.5 ± 0.3 cm d −1 , and 4 cm melted in November, with a rate of −0.4 ± 0.2 cm d −1 until the observation period ended. Approximately 6-8 cm of surface sublimation was observed in summer. The maximum snow thickness was around 30 cm in May and remained at 10-20 cm until August. The CTD recorded the 40 s resolution seawater temperature and salinity at a depth of 5 m beneath the ice surface. The seawater temperature rapidly increased from −1.62 to −1.30 • C in April and then gradually decreased to −1.75 • C in May. The seawater temperature remained stable from June to November, with a mean of −1.79 ± 0.01 • C. In April, the landfast ice was 44-50 cm thick and the ice surface was snow free; therefore, the variations in the air temperature exerted a larger influence on the ice and seawater temperatures. The significant increases in the air and seawater temperatures led to an increase in the temperature of the bottom of the ice, which contributed to the sudden melting of 4 cm from the bottom of the ice observed by the SIMBA. The thick snow cover from May to August provided an isolation layer for the ice and ocean, which contributed to the stability of the seawater temperature during this period. The seawater salinity increased from 33.34 in April to 34.08 psu in May, with a rate of 0.04 psu d −1 . From June to November, the seawater salinity was stable at around 34.13 ± 0.02 psu. The seawater density calculated from the observed seawater salinity increased from 1026.8 to 1027.4 kg m −3 from April to May and remained at 1027.5 ± 0.02 kg m −3 from then on. The current velocity was recorded by the ADV from April to November. The analysis of the 2 min resolution time series revealed that 79 % of the ocean velocity values were within 5-15 cm s −1 , and the mean values during the study period were 9.5 ± 3.9 cm s −1 . The maximum velocity of 29.8 cm s −1 was observed on 25 June 2021. The dominant current direction was ESE (120 • )-WSW (240 • ). The spectral analysis results suggest typical half-day periods for U , V , and W , which may be related to the tidal oscillations near Zhongshan Station. 
The meridional velocity V was dominated by the southward flow and became stronger when the tide level was higher. The oceanic heat flux was estimated using the residual method and three different bulk parameterization methods. The results exhibit a similar peak of 60-80 W m −2 in April-May and a decreasing trend to a stable level of 10-30 W m −2 from then on. The mean values were 12.2 ± 10.9 W m −2 , 19.7 ± 5.3 W m −2 , 13.6 ± 3.1 W m −2 , and 24.4 ± 5.4 W m −2 , respectively, for the residual and Bulk A, B, and C methods. [Figure 11 caption fragment, beginning truncated: "... Fig. 10) and the seawater temperature (red lines), seawater salinity (yellow lines), mean oceanic heat flux from the Bulk A, B, and C methods (grey lines with rose shading), and the oceanic heat flux from the residual method (green lines). The open water area was defined as the sum of the grid cells where the sea ice concentration was less than 15 %. The rose shading indicates ±1 standard deviation."] The large differences were mainly caused by the different formulas for the friction velocity, indicating the uncertainties of the empirical equations. The estimated results obtained in this study are consistent with those of previous studies, which were usually based on low-frequency ice thickness observations. The oceanic heat fluxes exhibited similar half-day periods, which are also believed to be related to the tidal oscillations. The observations of seawater temperature, salinity, U , V , and W and the estimates of the seawater density and oceanic heat flux exhibited periods similar to those of the local tidal oscillations, suggesting that the tides were one of the main drivers of the oceanic variations near Zhongshan Station. The large-scale sea ice distribution and ocean circulation affected the absorption of solar radiation by the upper ocean and the horizontal heat transport, which was another main driver of the oceanic variations near Zhongshan Station. Both the local- and large-scale influences played important roles in the oceanic heat flux and thus the ice-ocean interactions. In this study, the high-frequency oceanic measurements provided an opportunity to investigate the details of the ice-ocean interactions beneath landfast ice on the diurnal and seasonal scales. The bulk parameterization was used to estimate the oceanic heat flux near Zhongshan Station, providing finer temporal detail than the residual method. The use of more ice and ocean equipment, such as ice radar, ocean temperature chains, and ice thickness gauges, will be considered in the future to fill the remaining data gap. Data availability. The observation data are available from the Science Data Bank. The seawater temperature and salinity recorded from a cable-type CTD are publicly available at https://doi.org/10.57760/sciencedb.07693 (Zhao and Hu, 2023b). The air-ice-ocean temperature profile derived from the sea ice mass balance array (SIMBA) is publicly available at https://doi.org/10.57760/sciencedb.07684 (Zhao and Hu, 2023a). The 3-D current velocity 5 m beneath landfast ice recorded by an acoustic Doppler velocimeter (ADV) is publicly available at https://doi.org/10.57760/sciencedb.07692 (Zhao and Hu, 2023c). Author contributions. JC conceptualized this study and designed the numerical methods. HH carried out the experiments and wrote the manuscript. JC, PH, ZL, and FH helped analyse the results and revised the manuscript. JM provided and helped process the sea ice observation data. XC assisted during the writing process and critically discussed the contents.
Competing interests. At least one of the (co-)authors is a member of the editorial board of The Cryosphere. The peer-review process was guided by an independent editor, and the authors also have no other competing interests to declare. Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2023-06-04T15:06:31.913Z
2023-06-02T00:00:00.000
{ "year": 2023, "sha1": "c6504e057f44a217f3190ada42ea329ad3b4b23d", "oa_license": "CCBY", "oa_url": "https://tc.copernicus.org/articles/17/2231/2023/tc-17-2231-2023.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7ba65f4518cfff698e7c40e87d008d1d8fca6303", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
249965871
pes2o/s2orc
v3-fos-license
Are there socioeconomic disparities in geographic accessibility to community first responders to out-of-hospital cardiac arrest in Ireland? Out-of-hospital cardiac arrest (OHCA) is a leading cause of death worldwide. Without appropriate early resuscitation interventions, the prospect of survival is limited. This means that an effective community response is a critical enabler of increasing the number of people who survive. However, while OHCA incidence is higher in more deprived areas, propensity to volunteer is, in general, associated with higher socioeconomic status. In this context, we consider whether there are socioeconomic disparities in geographic accessibility to volunteer community first responders (CFRs) in Ireland, where CFR groups have developed organically and communities self-select to participate. We use geographic information systems and propensity score matching to generate a set of control areas with which to compare established CFR catchment areas. Differences between CFRs and controls in terms of the distribution of catchment deprivation and social fragmentation scores are assessed using two-sided Kolmogorov-Smirnov tests. Overall we find that while CFR schemes are centred in more deprived and socially fragmented areas, beyond a catchment of 4 min there is no evidence of differences in area-level deprivation or social fragmentation. Our findings show that self-selection as a model of CFR recruitment does not lead to more deprived areas being disadvantaged in terms of access to CFR schemes. This means that community-led health interventions can develop to the benefit of community members across the socioeconomic spectrum and may be relevant for other countries and jurisdictions looking to support similar models within communities. Introduction Cardiac arrest occurs when the heart stops beating or beats in a manner that prevents blood from circulating around the body (Paradis et al., 2007). Out-of-hospital cardiac arrest (OHCA), defined as an unexpected cardiac arrest event that occurs in a location other than an acute hospital, is the third leading cause of death in Europe (Gräsner et al., 2020). While death from OHCA is frequent, it is not inevitable, particularly if treatment is initiated immediately or within minutes of collapse. As a result, the 'chain of survival' concept was approved by the American Heart Association (AHA) in 1990 (Cummins et al., 1991) and is used internationally to describe the series of resuscitation interventions required to restore consciousness or other signs of life in an OHCA patient. It includes four key 'links', namely (i) early recognition of OHCA and immediate call for help to the Emergency Medical Services (EMS), (ii) immediate, high-quality cardiopulmonary resuscitation (CPR), (iii) defibrillation within minutes of collapse, and (iv) effective advanced EMS and post-resuscitation care. In a recent iteration, the importance of the first two links in the chain of survival (call to emergency services and immediate, high quality cardiopulmonary resuscitation) were emphasised, given they have the greatest potential to impact on survival (Deakin, 2018). Importantly, in the case of OHCA, their success is invariably dependent on the community response, rather than the EMS. The third link, defibrillation, is also independently associated with survival. It involves stopping 'fibrillation' of the heart by administering a controlled electric shock, in order to allow restoration of the normal rhythm. 
Notably, however, the benefit of defibrillation reduces as the interval from time of emergency call to defibrillation increases (Hansen et al., 2015;Kiyohara et al., 2019). Ireland was to the forefront in making defibrillation available in the prehospital setting in the 1960s, first with the invention of a mobile defibrillator in Belfast (Adgey et al., 1969) and then with the provision of prehospital defibrillation by paramedics in Dublin (Gearty et al., 1971). Automated external defibrillators (AEDs) have become increasingly available in the community and defibrillation before EMS arrival is consistently associated with improved OHCA survival (Blom et al., 2014;Hallstrom et al., 2004;Nielsen et al., 2013;Wissenberg et al., 2013). This implies that a critical advantage of so-called 'first responders' lies in their potential to shorten the interval from collapse to first defibrillation. Various first responder models involving citizens, firefighters, police, and off-duty EMS personnel have been developed across many countries to support and improve the community-based first response to OHCA (Oving et al., 2019). Citizen responders are common across European countries, but the model of a citizen-based 'community first responder' (CFR) scheme is most closely linked to Ireland and the United Kingdom (UK). In Ireland, the setting for this study, the provision of emergency first response by volunteer CFR schemes was established over 25 years ago and the role and value of CFRs was formally recognised in the Report of the Task Force on Sudden Cardiac Death (Department of Health and Children, 2006). In particular, it recommended the enhancement of first responder programmes to reduce response times in the event of cardiac arrest. The potential impact of extending the scope and number of CFR schemes was subsequently assessed in the 'Lightfoot Report', which estimated that 'optimal CFR contribution' could significantly improve response times to life-threatening emergencies in Ireland (Lightfoot Solutions UK Limited, 2015). Health service provision is usually planned by health government agencies. Even where local health agencies or communities identify a local need, approval and funding for health service provision is usually controlled at a national level. In contrast, communities involved in the Irish CFR scheme are self-selecting. However, the propensity for community volunteerism has long been understood to be associated with higher personal and area-level socioeconomic status (Bell & Force, 1956;Niebuur et al., 2018;Youssim et al., 2015). For example, in 2016 the European Quality of Life Survey found that volunteering rates were greater when people were employed, had high educational attainment, and high income (Eurofound, 2017). More recently, in 2019, the UK National Council for Voluntary Organisations found that formal volunteering was more common among people who lived in the least deprived areas when compared to those who lived in the most deprived areas (29% vs. 14%) (The National Council for Voluntary Organisations, 2020). This is particularly relevant when considering services targeting OHCA, since previous research has found that lower socioeconomic status is associated with both a greater incidence of OHCA and poorer survival (Hallstrom et al., 1993;Sasson et al., 2012). Indeed, recent evidence from Ireland found a statistical association between greater area-level deprivation and increased OHCA incidence . 
Given all this, it is important to highlight that CFR schemes in Ireland rely on community self-selection and volunteerism. Thus, since OHCA incidence and outcomes are associated with lower socioeconomic status, while volunteerism is generally associated with higher socioeconomic status, it is important to consider if the specific nature and features of the Irish scheme have led to socioeconomic disparities in geographic accessibility to CFRs. This would be particularly problematic if CFR schemes were less likely to be located in more deprived areas, where the risk and incidence of OHCA tends to be greater, since it would imply that the 'organic' nature and development of CFR schemes is resulting in a sub-optimal geographic distribution of healthcare services. In this context, this paper considers the socioeconomic profile of areas with geographic accessibility to CFRs in Ireland and compares this to similar 'control' areas that have no accessibility. In particular, it investigates the presence or not of differences in both area-level deprivation and social fragmentation between CFR and control catchments in order to investigate if there are socioeconomic disparities in geographic accessibility to CFRs. The paper is structured as follows: Section 2 describes the setting for our analysis, Section 3 our materials and methods, while Section 4 presents our key results and findings. Finally, Section 5 discusses the implications of our research and concludes. Setting There are in the region of 5,000 OHCAs in Ireland each year, of which approximately 2,200 with resuscitation attempts are recorded in the OHCA Registry (Irish National Out-of-Hospital Cardiac Arrest Register, 2019). In 2019, 67% of OHCA patients were male and the median age of patients was 68 years (interquartile range: 54-79 years). The majority of incidents occurred in an urban area, though the incidence was similar in both urban and rural areas (53 vs. 50 OHCA patients per 100,000 population respectively). Most events occurred at home (68%) and the vast majority of patients had bystander CPR attempted (84%). Only 7% of patients had defibrillation attempted before the arrival of the EMS and in 2019 a total of 190 people survived an OHCA event. From the perspective of the impact of community intervention, 49% of survivors had defibrillation attempted before EMS arrival. As part of the health system response in Ireland, CFRs have been a feature of the health service for over 25 years. In 2004, the sudden death of a young high profile athlete raised the issue of community first response in the media and contributed to renewed interest in the establishment of CFR schemes (Irish Independent, 2016). In 2008, the Prehospital Emergency Care Council (PHECC) has established PHECC CFR education and training standards and the completion of a course which meets these standards is mandatory prior to becoming a CFR (Prehospital Emergency Care Council (PHECC) (2020). Schemes have been established with the assistance of ambulance personnel, community leaders, and organisations such as CFR Ireland and the Irish Heart Foundation. At the time of this study, there were 222 active CFR groups in Ireland that were linked to the National Ambulance Service (NAS) and these are included in our analysis. Previous research has estimated that Irish CFRs have the potential to reach approximately one million additional citizens before the ambulance service and within a timeframe where defibrillation is likely to be effective (Barry et al., 2018). 
In terms of the process involved in establishing a CFR scheme, communities self-select into the scheme and must first express an interest before Community Engagement Officers from the NAS offer assistance. Community Engagement Officers provide support during the CFR scheme set up and certify that necessary training and health and safety requirements are met before the CFR scheme can be alerted to an emergency call. They also provide ongoing support to existing groups in the form of training events, regular visits, and contacts. However, while Community Engagement Officers must certify that a CFR scheme is fully prepared to respond to emergency calls, individual CFR schemes remain responsible for: recruiting members; financing and purchasing equipment including defibrillators; and, ensuring that the clinical training, health and safety requirements, and vehicle insurance arrangements of CFR scheme members are met at all times. CFR schemes are linked with the NAS Emergency Operations Centre (NEOC) and are alerted in the event of a suspected OHCA within a designated area. The radius of the area covered may vary, depending on population density and geography, but is usually 5-10 kms. In the event that a cardiac arrest is suspected, and if the location is within the radius of the CFR scheme coverage, an automated text alert is sent from NEOC to the CFR scheme. If the CFR scheme members accept the call and travel to the scene, they are required to perform CPR and attempt defibrillation until the patient recovers or until the arrival of the statutory ambulance services. All CFR schemes are equipped with defibrillators, which they bring directly to the event location. It is of note that this model of direct dispatch has been shown to decrease time-to-defibrillation when compared to a model where the first responder must first access the closest public access AED (Auricchio et al., 2019). In terms of the population and geographic context for this study, Ireland has a population of over 5 million people, with approximately 37% of people living in rural areas (Central Statistics Office, 2019). The smallest legally defined area in Ireland is the electoral division (ED), of which there are 3,409. The population and geographic coverage of EDs varies considerably with a mean population of 1,397 (range: 66 to 38, 894). Since 2011, census outputs are also reported for 18,641 subdivisions of EDs, called small areas, which have a mean population of 255 (range: 50 to 1,629). Data Our analysis uses data from a range of sources. First, when a CFR scheme is established, the CFR scheme coordinator advises the NAS of the address that represents the most central location in their catchment area. This address is used as a centroid to generate a radius within which the CFR scheme will be alerted to cardiac arrest calls by NEOC. The centroid for each CFR group was supplied by the NAS, which allowed us to map the catchment areas of all 222 active CFR groups in Ireland. Second, in order to consider the relative socioeconomic profile of these CFR catchment areas, it was necessary to develop a set of control group catchment areas. To generate potential control group centroids, two house points were randomly selected from each ED to give a total of 6,818 potential locations. House points were used to ensure that centroids were in habitable locations and close to the road network. By sampling from all EDs, there was unbiased coverage of socioeconomic conditions. 
Third, to consider the socioeconomic profile of CFR catchment and control areas, measures of area-level deprivation and social fragmentation were used. Deprivation was defined using an index designed for health services research (Teljeur et al., 2019). It is based on four indicators from the 2016 census of population that are combined using principal components analysis (PCA), namely: unemployment; low social class; car ownership; and, local authority housing. The potential influence of social capital was also explored using an index of social fragmentation (Congdon, 2004). The index was also computed by applying PCA, this time to four census variables, namely: unmarried; single person households; rented accommodation; and, moved in last 12 months. This index was intended to capture populations in flux and which are likely to have diminished social capital as a result. Both indices were computed at small area level and expressed as a continuous variable. The Moran's I is 0.569 for deprivation and 0.613 for social fragmentation, indicating the relatively high degree of spatial autocorrelation in these indicators. Finally, the locations of ambulance stations and hospital emergency departments (ED) with 24/7 cover were geocoded. This allowed us to calculate travel time measures from the centroids of CFR and control catchments to these facilities. Catchment generation Catchment areas were defined based on small areas within a specified travel time of CFR and control centroids. Road network data for the whole island of Ireland were accessed from OpenStreetMap. Travel time calculations were undertaken using OSRM, accessed through R (Giraud, 2020;R Development Core Team, 2020). For the main analysis it was assumed that the effective coverage of a CFR group would be up to 8 min from the centroid based on travel by private car. The decision to use an 8-min catchment area was based on the response time key performance indicator for Irish ambulance services which requires that "patients with life-threatening cardiac or respiratory arrest incidents are responded to by a first responder … in 7 min and 59 s or less in 75% of all cases" (Health Information and Quality Authority, 2012). The characteristics of the small areas within a catchment were used to calculate population-weighted catchment-level characteristics (e.g., mean deprivation score). Propensity score matching To select matched controls from the full set of 6,818 potential locations described above, propensity score matching was used. Scores were computed on the basis of 9 variables: distance to the nearest ambulance station; distance to the nearest hospital ED; distance to the nearest city; distance to the nearest town; distance to the nearest village (classified into 'near' and 'remote'); area covered by the catchment (km 2 ); the population of the catchment; and, the population density of the catchment. Cities, towns, and villages were defined on the basis of an urbanrural index (Teljeur & Kelly, 2008). Propensity score matching was undertaken in R using the MatchIt package (Stuart et al., 2011). Both nearest neighbour and optimal methods were used. Distance metrics were calculated on the basis of both standard logit and Mahalanobis distance. For the nearest neighbour method, a range of caliper values were tested: 0, 0.1, and, 0.25. In all analyses a one-to-one ratio was used to generate 222 matched controls. As a sensitivity analysis, matching ratios of two, three and four to one were also tested for all eight model specifications. 
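The matching step described above was implemented by the authors in R with the MatchIt package. Purely as an illustration of the idea (not the authors' pipeline), a greedy 1:1 Mahalanobis-distance matcher could look like this in Python, where max_distance is a loose, hypothetical stand-in for MatchIt's caliper.

import numpy as np
from scipy.spatial.distance import cdist

def greedy_mahalanobis_match(cfr_X, control_X, max_distance=None):
    # cfr_X, control_X : covariate arrays (rows = catchments, columns = the nine
    #                    matching variables described above)
    # max_distance     : optional cutoff; pairs further apart are left unmatched
    pooled = np.vstack([cfr_X, control_X])
    VI = np.linalg.inv(np.cov(pooled, rowvar=False))
    d = cdist(cfr_X, control_X, metric="mahalanobis", VI=VI)
    cutoff = np.inf if max_distance is None else max_distance
    matches, used = {}, set()
    for i in range(d.shape[0]):
        for j in np.argsort(d[i]):
            if j not in used and d[i, j] <= cutoff:
                matches[i] = j
                used.add(j)
                break
    return matches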
To compare the outputs of the various approaches to propensity score matching, the standardised differences in the matching variables between CFRs and controls were assessed using a Chi-square test (Baser, 2006). A statistically significant result (p ≤ 0.05) would indicate that at least one of the variables included in the model was creating an imbalance between CFRs and controls. The matching models were also compared using five balance checking criteria to investigate evidence of selection bias (Baser, 2006).

Statistical analysis

The difference between CFRs and controls in terms of the distribution of catchment deprivation and social fragmentation scores was assessed using a two-sided Kolmogorov-Smirnov test. The test is non-parametric and can distinguish between distributions that have the same mean but a different variance. Because some CFRs were in close proximity, the catchments in some cases overlapped. The degree of overlap between catchments in terms of shared small areas was calculated to compare CFRs to matched controls. Overlap was expressed as the proportion of small areas in a catchment that are also in other catchments. With increasing catchment size, the increasing heterogeneity of small areas included in catchments would likely lead to the mean socioeconomic conditions converging towards the national mean. To test whether this was an issue, the distribution of catchment deprivation and social fragmentation was also calculated for random selections of control catchments. As the propensity score matching was based on nine variables, there was a risk of over-matching. To explore this issue and the impact of excluding different variables, a sensitivity analysis using a leave-one-out approach was used.

Results

Eight different configurations of the propensity score matching algorithm were tested. The outputs of the optimal matching method indicated that there was an imbalance in at least one variable (see Table 1). At a catchment distance of 8 min, the lowest Chi-square value was for nearest neighbour matching with a caliper of 0.1, using either logit or Mahalanobis distance. On inspection of the balance checking criteria, it was felt that the model using Mahalanobis distance was marginally better, and that model configuration is used for subsequent reporting here (see the supplementary appendix for details of the balance checking criteria). The matched controls demonstrated a good balance across the matching variables (see Table 2). However, it should be noted that CFR groups were more likely to be centred in a town or village area, and less likely to be centred in a rural area, than the matched controls. The average overlap between CFR catchments was 0.22 (interquartile range (IQR): 0.00 to 0.38) and the equivalent for matched controls was 0.24 (IQR: 0.00 to 0.44). Fig. 1 presents a map of CFR and matched control locations based on 8-min catchments. It clearly shows that the geographic distribution of CFR schemes in Ireland is far from uniform and that there are large areas of the country without a CFR scheme. While nationally the correlation between deprivation and social fragmentation is 0.514, the correlation was 0.670 for CFR groups and 0.542 for matched controls. Overall, 21% of CFR centroids (n = 46) were within 8 min travel time of an ambulance station, compared with 20% of matched controls (n = 45) and 23% of all controls. The mean straight-line distance from a CFR centroid to its nearest neighbouring CFR centroid was 8.5 km (range: 0.6-32.5 km).
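As a sketch of the Kolmogorov-Smirnov comparison described in the statistical analysis above, repeated over increasing catchment extents, the following R code could be used. Here catchment_score() is a hypothetical helper returning the population-weighted deprivation (or social fragmentation) score of each CFR or matched-control catchment at a given travel-time extent; it is not part of the paper's described workflow.

# Compare the distributions of catchment deprivation scores for CFRs and
# matched controls at increasing catchment extents (0.1-min increments).
extents <- seq(0.1, 8, by = 0.1)

ks_results <- t(sapply(extents, function(ext) {
  cfr_scores     <- catchment_score(cfr_centroids,     extent_min = ext)  # hypothetical helper
  control_scores <- catchment_score(control_centroids, extent_min = ext)
  ks <- ks.test(cfr_scores, control_scores)  # two-sided by default
  c(extent = ext, D = unname(ks$statistic), p = ks$p.value)
}))

head(ks_results)  # D statistic and p-value at each catchment extent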
The equivalent mean distance to the nearest neighbouring centroid for the matched controls was 8.8 km (range: 0.3-39.2 km). In terms of deprivation, CFRs were more deprived at their catchment centroids than the matched controls (D = 0.17, p = 0.005), but not for 8-min catchments (D = 0.05, p = 0.902) (Fig. 2(a) and (b)). The difference in catchment deprivation between CFRs and matched controls was significant up to a catchment extent of 2.2 min, then occasionally significant up to a catchment extent of 4.0 min, and not thereafter (Fig. 2(c)). Similarly for social fragmentation, CFRs were more fragmented at their catchment centroids than the matched controls (D = 0.18, p = 0.001), but not for 8-min catchments (D = 0.07, p = 0.612) (Fig. 3(a) and (b)). The difference between CFRs and matched controls in catchment social fragmentation was significant up to a catchment extent of 2.1 min, then sporadically significant up to a catchment extent of 3.6 min, and not thereafter (Fig. 3(c)). By comparison with control catchments generated by random selection rather than propensity score matching, the difference between the CFR catchments and the random selections decreased with increasing catchment extent. At a catchment extent of 5.8 min, there is a less than 50% chance that the difference in deprivation between the CFR catchments and a random selection will be statistically significant. At an extent of 6.7 min, there is a less than 50% chance that the difference in social fragmentation between the CFR catchments and a random selection will be statistically significant.

A sensitivity analysis was undertaken to test the impact on the findings of leaving out each matching variable in turn. For both deprivation and social fragmentation, omission of either area, population, or population density had an impact on the results. With omission of area, there were almost no catchment extents at which CFRs were significantly different to matched controls for deprivation or social fragmentation. For population and population density, the extent to which significant differences were observed was reduced. However, in all cases the quality of the matching was assessed using the Chi-square tests. On inspection, the matched control areas tended to cover a larger geographic area and have a notably lower population density than the CFR catchments. The inclusion of additional controls for each case did not improve the model fit, and generally resulted in increased variance observed in the matching variables.

Discussion

The aim of this study was to investigate if there are socioeconomic disparities in geographic accessibility to CFR groups in Ireland. We used a propensity score matching approach to select relevant control areas to compare with the established CFR catchments in terms of area-level deprivation and social fragmentation. The analysis provides evidence that while CFRs may be centred in areas that are, on average, more deprived and more socially fragmented, beyond a catchment extent of 4 min this is no longer apparent. Given the heterogeneity in deprivation at larger catchment extents, we would not expect to observe a difference at extents of more than 6 min. Given that OHCA incidence tends to be associated with lower socioeconomic status, this is a desirable finding. However, these differences in the socioeconomic status of CFR and matched control catchment centroids may be partly explained by the sampling method. The full list of control areas was based on two randomly sampled points for each of the 3,409 EDs in Ireland.
Based on the distribution of centroids by area type (Table 2), it can be seen that CFRs and control areas were located in quite different areas. While 14% of control areas were centred in city EDs, only 7% of CFRs were. The distribution of the matched control areas was closer to that of the CFR centroids, but there was still an over-representation of rural EDs. The differences in socioeconomic status at the centroids may be partly explained by the different distribution by area type, although it should be noted that the large set of control points to sample from meant that a matching distribution by area type could have been achieved. The fact that there was a difference by area type between CFRs and matched controls, particularly in terms of the seeming under-representation of rural EDs, may suggest that an economy of scale, or a minimum population, is required to generate sufficient volunteers or resources to support the establishment of a CFR. Overall, the fact that CFR centroids are more deprived and more socially fragmented is not particularly important, as in reality CFR members will respond from their homes. Nonetheless, it is interesting that centroids tend to be more deprived and socially fragmented, as it shows that the establishment of CFRs in Ireland is not centred on more socioeconomically advantaged areas. If anything, it is likely the reverse. So even if there is a propensity for individuals of higher socioeconomic status to volunteer, this does not result in a lower chance of CFR coverage for areas with lower socioeconomic status.

Our findings have important implications for the design and development of CFR schemes, both in Ireland and more generally. As discussed, voluntary first response is different to other health services in terms of its organic, rather than planned, development and, in particular, in relation to the way local communities self-select into the scheme. In addition, our results suggest that socioeconomically disadvantaged areas do not lose out as a result of the volunteering element of the Irish scheme. Early community intervention is the key to increasing survival after OHCA, and CFRs play a critical role in tackling OHCA. For example, clinical trials have shown that first responders can increase rates of CPR and defibrillation before EMS arrival, while observational studies have suggested an important role for community-based AED use in increasing OHCA survival (Baekgaard et al., 2017). In a retrospective evaluation of OHCA data from North Carolina, Hansen and colleagues showed that first responders were responsible for the majority of instances (51.8%) of 'early' defibrillation (i.e. time from emergency call to defibrillation within 5 min) (Hansen et al., 2015). The Copenhagen Oslo STockholm Amsterdam (COSTA) Group reported on the survival status of 22,453 patients and observed that of the 2,957 patients who survived to at least 30 days post-event, 454 (20%) were defibrillated by a first responder AED (Zijlstra et al., 2018). Studies from The Netherlands and Sweden have also demonstrated significant reductions in time-to-defibrillation when dispatched first responders were compared to the EMS (Claesson et al., 2017; Zijlstra et al., 2014). Our results suggest that communities in Ireland are not disadvantaged by socioeconomic status in relation to such benefits.
Nonetheless, despite our findings in relation to the lack of problematic socioeconomic disparities, it is worth noting that our mapping of CFRs in Ireland shows there are large areas of the country currently with no CFR coverage. This suggests that while the organic development of CFR schemes has been successful in avoiding potential socioeconomic disadvantage in coverage, there likely remains a need for increased coverage of CFR schemes in Ireland overall. Previous research in Scotland has suggested that, once established, supported CFR schemes are a sustainable model (Farmer et al., 2015). Future research could examine the individual and community-level motivations for establishing a scheme and whether clustering is a feature of CFR scheme development.

Finally, it is important to acknowledge some caveats associated with our results and findings. First, a central feature of the analysis was the comparison of the socioeconomic status of CFR and matched control catchments as defined by our choice of catchment extent. The use of an 8-min catchment was a pragmatic choice based on a response time key performance indicator (Health Information and Quality Authority, 2012). The choice of catchment size is clearly important as population heterogeneity increases with distance. To test whether our findings were robust to the choice of 8-min catchments, we used sensitivity analyses based on 6-min and 10-min catchments. In both cases, the findings were the same and there was no difference in the socioeconomic status of CFR and matched control catchments. In addition, there were no data available on the numbers of active or on-duty volunteers for the CFRs, nor their exact residential locations, which are likely their actual points of dispatch. This information may have helped to define the shape and extent of the CFR catchments more accurately. However, it would likely also have added substantially to the complexity of the analysis, without necessarily improving the accuracy of the outputs. In addition, it is important to note that the travel time data were derived from the Open Source Routing Machine (OSRM), which uses OpenStreetMap road network data, and were assumed to be representative of private car travel. If there is a bias in terms of the time of day at which OHCAs occur, then it is possible that driving conditions could differ from those assumed in the OSRM travel time estimates. However, given that most of the CFRs are outside cities, there is unlikely to be a substantial impact from traffic congestion or other driving conditions that may affect the estimates of drive times.

Our analysis incorporates two measures of socioeconomic conditions, namely deprivation and social fragmentation. Social fragmentation was originally created as a measure of the non-economic aspects of deprivation (Congdon, 1996). It is a reverse measure of social capital and is intended to capture populations in flux and where there may be a lack of social cohesion; residents may invest less in the community as a result. Although deprivation and social fragmentation do not always coexist, they have previously been observed to be correlated, especially in studies involving big towns and cities (Area Based Analysis Unit, Office for National Statistics, 2009; Congdon, 2004). Therefore, the utility of the measure of social fragmentation in our study may be undermined by a correlation with deprivation. The validity and reliability of both the deprivation and social fragmentation measures are open to debate.
Establishing the validity of a deprivation index is challenging (Beduk, 2018). Both indices are limited to a small number of variables with a direct link to the concept being quantified. The deprivation index has shown consistency over time in terms of the classification of small areas, and has been used widely for health services research in Ireland (Teljeur et al., 2019). A further limitation of our analysis is that it does not incorporate data on OHCA events. At a small-area level, OHCAs are rare events and subject to substantial variability from year to year. As a result, a calculated area-level risk of OHCA may not fully explain the distribution of CFRs, even though the occurrence of an OHCA may create the momentum needed to spur a community into action. While our focus was on the socioeconomic status of the catchment areas, it may be possible to consider the catchments in tandem with the socioeconomic status of OHCA cases and of the first responders themselves. The value of such an analysis would be to understand whether the responders represent the communities in which OHCAs are most likely to occur, or the communities which are most likely to benefit from the provision of a CFR. A final caveat is that we analysed the difference between the catchment socioeconomic status of CFRs and matched controls with increasing catchment size (in increments of 0.1 min). At each increment a Kolmogorov-Smirnov test was carried out to compare distributions. Arguably, the repeated application of the test creates an issue of multiple hypothesis testing. We made no adjustment for this, although the findings are consistent with the tests run for catchment centroids and the full catchment extent, and the purpose was to identify the point at which the difference in the distribution of socioeconomic scores is no longer important.

Overall, despite these limitations, this paper strongly suggests that a self-selection model for CFR recruitment does not disadvantage more deprived communities, though there is likely to be a need for increased coverage of CFR schemes more generally. Community intervention is essential if we are to improve the rate of survival from OHCA. While OHCA incidence tends to be higher in more deprived areas and volunteerism is often higher in affluent areas, our findings show that those most at risk are not disadvantaged in terms of access to CFR schemes in Ireland, and that this is a model that should be supported both in Ireland and in other jurisdictions.

Ethics statement

This study received ethical approval from the National University of Ireland Research Ethics Committee (Ref 18-Sep13).

Funding

This work was supported by the Health Research Board, Ireland [grant number: APA-2016-1859]. This grant was paid to the National University of Ireland Galway, and Dr Masterson is the Primary Investigator for this grant award.

Author contributions

All authors contributed equally to conceptualisation of the research idea. Siobhán Masterson acquired funding, curated data and administered the project. Conor Teljeur designed the methodology and performed statistical analysis. John Cullinan performed validation of formal analysis. All authors contributed equally to writing the original draft, and subsequent review and editing.

Declaration of competing interest

The authors can confirm that they have no conflicts of interest to declare.